
Hello, my name is Mrs. Holborow, and welcome to Computing.

I'm so pleased that you've decided to join me for today's lesson.

Today, we're going to be looking at algorithmic bias and what can be done to reduce algorithmic bias.

Welcome to today's lesson from the unit, Algorithms. This lesson is called Algorithmic Bias.

And by the end of today's lesson, you'll be able to describe algorithmic bias and suggest ways to make algorithms fairer.

Shall we make a start? We'll be exploring these keywords throughout today's lesson.

Bias: to disproportionately favour one side, group, or outcome over others.

Discriminatory: to make or show an unjust or prejudicial distinction between different categories of people.

Watch out for these keywords throughout today's lesson.

Today's lesson is split into two sections.

We'll start by describing algorithmic bias.

And then we'll move on to look at how we can reduce algorithmic bias.

Let's make a start by describing algorithmic bias.

Think about some of the simple recipes you may know.

What makes a good recipe? Maybe pause the video and have a think.

Lucas says, "I think a good recipe should be clear to read and describe the logical step-by-step instructions needed to make the meal." That's a really good response, Lucas.

Izzy says, "I think a good recipe should also have a list of the correct ingredients with weights and quantities." That's another really good response, Izzy.

Think again about some simple recipes you may know.

What makes a bad recipe? Maybe pause the video and have a think.

Lucas says, "A bad recipe might be confusing and have mixed-up instructions, or important steps that are missing." Izzy says, "A bad recipe might have wrong or missing ingredients that are needed to make the meal.

It might not give the weights and quantities of ingredients needed for the meal." Again, some really good responses there.

What might be the result of following a good recipe? Lucas says, "By following a good recipe, it should be quite easy for people to cook a tasty meal.

The meal should turn out as intended because the instructions and ingredients are correct." What might be the result of following a bad recipe? Izzy says, "If you follow a bad recipe, the meal may not taste very good, or it could even be dangerous if it isn't cooked properly.

The meal probably wouldn't turn out as intended because the instructions and/or ingredients are wrong." A recipe is an algorithm that is designed to provide the instructions and ingredients needed for a human to cook a meal.

Computers also use algorithms to solve a variety of problems. Computer algorithms can be compared to recipes.

The instructions of a recipe are similar to the logic of a computer algorithm.

The ingredients of a recipe are similar to the data used by a computer algorithm.

Using poor quality or wrong ingredients will result in a bad meal.

Using poor quality or incorrect data in an algorithm will likely result in inaccurate results, unreliable predictions, or biased outcomes.

The term bias means to disproportionately favour one side, group, or outcome over others.

Algorithmic bias occurs when an algorithm produces unfair or discriminatory outcomes that favour some groups over others.

Computer algorithms can be designed or developed in a number of ways, such as human-designed, rule-based algorithms, or artificial intelligence (AI) algorithms developed by humans using machine learning techniques.

Note that algorithmic bias can occur in both rule-based algorithms and AI algorithms that use machine learning techniques.

Algorithms can become biased if the data they use is biased in some way, or if the design of the algorithm's instructions is biased.

AI machine learning algorithms learn from and use sets of data.

If the data used to train or inform the algorithm is biased, the algorithm will learn and perpetuate those biases.
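The idea above can be sketched with a tiny, purely illustrative Python example (the data set and the "learning" rule here are invented for the sketch, not from any real system): a predictor that simply learns the most common outcome in its training data will reproduce whatever skew that data contains.

```python
from collections import Counter

# Hypothetical historical hiring data, skewed towards one group.
training_data = ["group_a", "group_a", "group_a", "group_a", "group_b"]

def train(data):
    # "Learning" here is just picking the majority label, so any
    # imbalance in the data is carried straight into the output.
    return Counter(data).most_common(1)[0][0]

prediction = train(training_data)
print(prediction)  # the skew in the training data becomes the prediction
```

Real machine learning models are far more complex than this majority-label toy, but the same principle applies: the model can only reflect the data it was given.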

True or false: algorithmic bias only occurs because of biased data sets.

Is this true or false? Pause the video and have a think.

That's right, it's false.

Algorithms can be biased if the design of the algorithm's instructions is biased or the data that they use is biased.

Using poor quality data in algorithms may lead to, A, biased outcomes and predictions, B, unbiased outcomes and reliable predictions, or C, unbiased outcomes but unreliable predictions? Pause the video whilst you have a think.

That's right, the correct answer is A.

Using poor quality data in algorithms may lead to biased outcomes and predictions.

True or false? Algorithmic bias only occurs in algorithms that have been developed using machine learning techniques.

Pause the video whilst you have a think.

That's right, this is false.

Algorithmic bias can occur in algorithms that have been designed by humans, as well as machine learning algorithms trained from sets of data.

Okay, we are now moving on to the first task of today's lesson.

Imagine you are searching for images online using a search engine and you search for the term CEO.

CEO stands for Chief Executive Officer, and is basically another term for a boss or a director.

When you look at the image results, you notice something interesting.

The vast majority of the images show men.

There are very few images of women in the results.

For part one, in your own words, describe the term algorithmic bias.

And then for part two, describe the potential algorithmic bias you observe in the CEO image search results.

Pause the video here whilst you complete the activity.

For part one, you were asked to describe the term algorithmic bias in your own words.

Let's have a look at a sample answer from Izzy.

"To me, algorithmic bias means when any type of algorithm produces unfair outcomes or predictions that favour some groups of people over others.

Like if an algorithm assumed a certain group of people liked a particular food or style of music." For part two, you were asked to describe the potential algorithmic bias you observed in the CEO image search results.

Let's have a look at a sample answer from Lucas this time.

"It looks like the search engine algorithm might be outputting results that are biased towards more men being CEOs than women.

If it's an AI algorithm, it could be that the data the algorithm was trained on was biased, or that the design of the algorithm's logic is biased." That's a great response, Lucas.

Well done.

We're now moving on to the second part of today's lesson where we are going to look at how we can reduce algorithmic bias.

Izzy is planning to collect data to find the five most popular lunchtime meals in her school.

Izzy says, "I could conduct a survey and ask all the other students in my class what their top five favourite lunchtime meals are.

I think that's fair because I'll make sure I ask everyone in the class." Think about Izzy's plan to collect data.

Do you think the data Izzy collects will be biased or unbiased? Maybe pause the video and have a think.

Lucas says, "I like your survey idea, Izzy, but I think the data you collect might be biased towards just the members of your class.

What about all the other students that aren't in your class or year group? They may have different opinions." That's a really good point, Lucas.

Izzy says, "Oh yes, I hadn't thought about that.

Maybe I should do a survey across all classes and year groups so that the data represents the opinions of students across the whole school.

Thanks, Lucas!" There are a number of ways in which algorithms can become biased, such as using or learning from biased data, or making bad design choices in the algorithm.

Note that there are other ways in which algorithms can become biased.

Data may be biased in various ways, such as historical bias, representation bias, and measurement bias.

Again, there may be other ways in which data could be biased, but these are the three we've shared today.

Historical bias is when data reflects past societal biases or inequalities that existed when the data was collected.

Algorithms that use or are trained on historically biased data will likely produce discriminatory outcomes that reflect the bias present at the time of data collection.

An example could be a hiring algorithm that uses old, biased data to suggest jobs to candidates.

Representation bias is when data doesn't accurately represent the real-world population or the problem it's trying to solve.

This means that some groups are underrepresented, overrepresented, or completely missing from the data set.

An example of this would be a facial recognition algorithm that is trained on data that doesn't represent the true diversity of human faces in the world.
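One simple way to picture checking for representation bias is to count how often each group appears in a data set before using it. In this illustrative Python sketch, the region labels, the sample counts, and the 10% threshold are all invented for the example:

```python
from collections import Counter

# Hypothetical training-sample labels by region (invented numbers).
samples = ["europe"] * 800 + ["africa"] * 50 + ["asia"] * 100 + ["americas"] * 50

counts = Counter(samples)
total = sum(counts.values())

for region, count in counts.items():
    share = count / total
    print(f"{region}: {share:.0%}")
    # Flag any group making up less than an (arbitrary) 10% of the data.
    if share < 0.10:
        print(f"  warning: {region} may be under-represented")
```

A real audit would compare these shares against the actual population the algorithm is meant to serve, but even a rough count like this can reveal obvious gaps.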

Measurement bias is when something is measured in a way that favours certain outcomes or groups because the measurement method itself is flawed or incomplete.

Measurement bias results in inaccurate data, misleading information, unfair comparisons, and ultimately poor decisions.

An example of this is a survey that asks questions in a way that is biased towards a certain gender.

Note that it's extremely difficult, maybe even impossible, for complex algorithms to be completely free from bias.

By aiming to understand sources of bias, actively working to reduce it, and being transparent, algorithmic bias can be reduced as much as possible.

This goes some way to ensure algorithms are as fair and ethical as possible.

Time to check your understanding.

Representation bias is when data, A, reflects past societal biases, B, is measured in a way that favours certain groups, or C, doesn't represent the true diversity of humans in the world? Pause the video whilst you have a think.

That's right, the correct answer is C.

Representation bias is when data doesn't represent the true diversity of humans in the world.

True or false? Historical bias might occur when the data used reflects past societal biases or inequalities.

Pause the video whilst you have a think.

That's right, that statement is true.

Okay, we're now moving on to our final set of tasks for today's lesson.

And you've done a fantastic job, so well done.

Folade is Nigerian and has recently moved from Nigeria to the UK to study.

When she arrived, she bought a new phone that has a voice-activated virtual assistant.

Folade is finding it difficult to get the voice recognition software to correctly recognise her voice and act upon the commands she gives it in English.

For part one, describe what type of algorithmic bias might have occurred in the phone's voice-activated virtual assistant.

For part two, suggest how the algorithmic bias of the voice assistant could be reduced.

Pause the video whilst you complete the activity.

How did you get on? Did you manage to answer the questions? Great work.

Let's have a look at Izzy's sample answer together.

"There could be many reasons why the assistant is showing algorithmic bias.

I think the most likely type of bias would be representation bias because the algorithm may have been trained using data that didn't contain voices or accents like Folade's.

This would mean that the data didn't truly represent the real-world population and would be biased towards certain groups of people with certain voices or accents." Let's have a look at Izzy's sample answer for part two.

"I think that the algorithmic bias of the voice-activated assistant could be reduced by ensuring that the data used to train the assistant contains a much wider and diverse range of voices.

For example, voices with different accents from different areas of the world and a range of ages and genders should be included.

This would mean that the assistant would be able to recognise a wider range of voices that more accurately represent the world." That's a brilliant sample answer from Izzy.

Remember, if you need to go back and add any detail to your answers, you can always pause the video now and do that.

Okay, we've come to the end of today's lesson.

And you've done a fantastic job, so well done.

Algorithmic bias occurs when an algorithm produces unfair or discriminatory outcomes.

Using poor quality data in an algorithm will likely result in inaccurate results, unreliable predictions, or biased outcomes.

By aiming to understand sources of bias, actively working to reduce it, and being transparent, algorithmic bias can be reduced as much as possible.

I hope you found today's lesson interesting.

And I hope you'll join me again soon.

Bye!