Content guidance: depiction or discussion of discriminatory behaviour; depiction or discussion of sensitive content. Adult supervision recommended.


Hi, I'm Mrs. Olchin and I'm going to be taking you through the Citizenship lesson today.

I'm going to give you all the information that you need to be able to take part in the lesson.

And I'll also pause and tell you when you need to complete an activity or complete a check for understanding.

I hope you enjoy the lesson.

Ground rules for this lesson are as follows.

Because we're going to be talking about different opinions, we need to make sure that we're listening to others.

It's okay to disagree with each other, but we should listen properly before making assumptions or deciding how to respond.

When disagreeing, challenge a statement, not the person.

We also need to make sure that we're respecting privacy.

We can discuss examples, but do not use names or descriptions that identify anyone, including ourselves.

We also need to make sure there's no judgement, so we can explore beliefs and misunderstandings about a topic without fear of being judged.

And finally, we need to choose our level of participation.

Everyone has the right to choose not to answer a question or join a discussion.

We never put anyone on the spot.

So it's really important that we're following these ground rules.

So this lesson is, What did the UK Undercover Voters Project tell us? And it's taken from the unit, How is social media changing our view of democracy? So by the end of this lesson you'll be able to explain why the UK Undercover Voters Project was carried out and what its conclusions were.

These are the key words for this lesson.

So investigate, which means to look into something carefully to learn more about it and find out the facts; targeted, which is when something is directed at a specific person or group; and algorithms, which are a set of instructions or steps that a computer follows to solve a problem or to make decisions.

And this is the lesson outline.

We're going to look at what was the UK Undercover Voters Project.

We're then going to look at how voter types experienced media content, before moving on to look at how this might impact voting behaviour.

So we're going to start by looking at what was the UK Undercover Voters Project.

So Jacob's saying, "I've never heard of the UK Undercover Voters Project.

What is it and what did it involve?" So if you like, pause and think, have you heard about this project? So let's break it down.

So undercover means doing secret work, usually to gather information for an investigation.

And voters are also known as the electorate.

So they're the citizens who are able to vote in an election.

So the UK Undercover Voters Project took place during the 2024 General Election, and it was run by the BBC.

And the project aimed to investigate how online, targeted political adverts might influence people's voting opinions and their behaviour.

So let's break it down a little bit further to understand.

So many people have online profiles, such as social media accounts and other online activity, and these will contain personal information about them, which might include things such as their interests and their beliefs.

Algorithms then use this information to show people targeted content that matches their interests and beliefs, including their political views.

So at a really simple level, that might be someone on social media spending longer looking at videos of cute puppies.

And therefore, algorithms will keep sending them videos of cute puppies because it shows that that's what they're engaging with and that's what they enjoy.

So that's what algorithms will do.

So, for example, if you search for a product on an online shop, you're likely to see adverts for similar products when you use other apps or websites.

So that's how algorithms work.

They match content to what you're interested in.
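To make that concrete, here is a minimal, purely illustrative sketch in Python of how engagement-based targeting can work. Every name, topic, and number here is hypothetical; real platform recommendation systems are far more complex and are not public.

```python
# A toy model of engagement-based targeting. All names and numbers are
# hypothetical; real recommendation systems are far more complex.
from collections import Counter

def update_interests(interests: Counter, watched_topic: str) -> None:
    """Each time a user engages with a topic, weight that topic more heavily."""
    interests[watched_topic] += 1

def rank_content(interests: Counter, candidate_posts: list) -> list:
    """Order candidate posts so topics the user engages with most come first."""
    return sorted(candidate_posts,
                  key=lambda post: interests[post["topic"]],
                  reverse=True)

# A user who keeps watching puppy videos...
interests = Counter()
for _ in range(5):
    update_interests(interests, "cute puppies")
update_interests(interests, "politics")

posts = [
    {"topic": "politics", "title": "Election debate highlights"},
    {"topic": "cute puppies", "title": "Puppy compilation"},
    {"topic": "cooking", "title": "Easy pasta recipe"},
]

for post in rank_content(interests, posts):
    print(post["title"])  # puppy content is ranked first
```

The same mechanism applies when "cute puppies" is replaced by a political topic, which is exactly what the project set out to examine.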

So the UK Undercover Voters Project wanted to investigate whether targeted political information could influence people's opinions and behaviour.

So let's have a check for understanding.

What did the BBC want to investigate through the UK Undercover Voters Project? Was it A, whether targeted content algorithms could impact voter behaviour? B, whether generic content algorithms could impact voter behaviour? Or C, whether targeted content algorithms could impact consumer behaviour? And it's A, whether targeted content algorithms could impact voter behaviour.

So Jacob's now asking, "Did the project try to look at random people's online profiles to figure out how they voted?" Well no, they didn't do that.

What they did was they created 24 online characters with detailed political backgrounds and they registered them to multiple social media platforms. And this allowed algorithms to target content based on their political beliefs.

So they created characters rather than looking at real people's online profiles.

The BBC could then monitor the political information that each of these characters was receiving.
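The BBC hasn't published the tooling behind the project, but as a rough mental model, each persona can be pictured as a record that accumulates the content it is shown. This is a hypothetical sketch only; every field, name, and platform here is illustrative and not taken from the project itself.

```python
# Hypothetical data model for the project's setup; the BBC has not
# published its methods in code form, so every field here is illustrative.
from dataclasses import dataclass, field

@dataclass
class UndercoverVoter:
    name: str                      # fictional persona, not a real person
    political_background: str      # beliefs the persona's profile signals
    platforms: list                # platforms the persona is registered on
    received_content: list = field(default_factory=list)

    def log_content(self, item: str) -> None:
        """Record a piece of political content shown to this persona."""
        self.received_content.append(item)

personas = [
    UndercoverVoter("Swing voter", "politically undecided",
                    ["PlatformA", "PlatformB"]),
    UndercoverVoter("Young Green voter", "climate-focused",
                    ["PlatformA"]),
]

# As content arrives, each persona's feed is logged for later comparison.
personas[0].log_content("Claim: voting Conservative means hidden NHS privatisation")
print(personas[0].received_content)
```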

So let's have a check for understanding.

Why did each of the characters have a political background created? Fill in the missing words.

So this allowed, what, to pick up on the characters' political beliefs and target the content being delivered to their online profiles.

And the missing word was algorithms. The characters were designed to represent a cross-section of the UK electorate, allowing the BBC to investigate how personalised content algorithms might influence political debate and voter behaviour.

So let's have a look at some of the types of characters that are included.

There are quite a few, but this will just give you a bit of a flavour of that cross-section.

So a pro-Brexit voter: someone that strongly supports Brexit and has nationalist views.

They're supportive of the Royal family and have concerns over immigration.

An anti-Brexit voter.

So someone that's against Brexit, believes that the UK should remain in the European Union and supports close relationships with other EU members.

A Swing voter, and that's someone that's politically unsure.

So someone that's undecided on who to vote for and could be persuaded by any political party.

So really undecided.

A Young Green voter.

So someone that's concerned about the environment.

They support policies that are climate focused and they want political leaders to be ambitious in tackling climate change.

Some other characters include an older conservative voter.

So someone that has traditional values and supports the Conservative Party's policies, and they're keen on policies that are tough on law and order.

A Labour loyalist.

So someone that's really, really committed and loyal to Labour.

They're a long-term Labour supporter, believe strongly in the protection and value of public services, and they're passionate about social justice.

A Lib Dem supporter.

So someone that has centrist views, kind of in the middle of Labour and Conservative.

They're against Brexit and support progressive but balanced reforms. And a working-class voter.

So someone that comes from a working-class background.

They agree with Brexit and they're concerned about economic change and immigration.

Let's look at a few more.

So a disengaged young person.

So someone that has little interest in politics, and they're more likely to engage with celebrity news and influencer content rather than political information.

A Scottish nationalist.

So someone that supports Scottish independence and prioritises local Scottish issues.

And they share the same values as the Scottish National Party.

So the SNP.

A far-right sympathiser.

So someone that shares far-right beliefs and they're passionate about nationalism, anti-immigration and cultural identity.

And a far-left activist.

So someone that supports radical socialist ideas, challenges the establishment and focuses on sharing wealth and promoting equality.

So you could see a huge range there.

So just to try and remind us of some that we've seen, let's have a quick check for understanding.

So what I'd like you to do is to match the character from the project to their description.

So we've got four we're looking at: Swing voter, Old Conservative voter, Far-Right Sympathiser, and a Far-Left Activist.

And the descriptions that we've got are: someone that supports radical socialist ideas and focuses on sharing wealth and promoting equality; someone that could be persuaded by any political party; someone that's passionate about nationalism, anti-immigration and cultural identity; and someone that's keen on policies that are tough on law and order.

So have a go at matching.

So the Swing voter could be persuaded by any political party.

The old Conservative voter is keen on policies that are tough on law and order.

The far-right sympathiser, passionate about nationalism, anti-immigration and cultural identity.

And far-left activist supports radical socialist ideas and focuses on sharing wealth and promoting equality.

So for Task A, I would like you to write a summary of the UK Undercover Voters Project.

Your summary should include: when and who ran the project, a brief overview of what they did, and what they hoped to find out.

So when you were writing the summary of the UK Undercover Voters Project, your answer could include: "The UK Undercover Voters Project took place during the 2024 General Election.

It was run by the BBC.

They created a range of online characters representing various political beliefs and ensured these were registered to multiple social media platforms. The BBC wanted to investigate the impact that online, targeted political information could have on voting opinion and behaviour." We're now going to look at how voter types experienced media content.

So Jacob's saying, "Did the different voter types receive different information that was tailored to their political beliefs?" So what do you think? Do you think they did or do you think they didn't? Yes, the characters received very different information in the run-up to the election, and that's because algorithms, remember we looked at that keyword earlier.

Algorithms targeted the content they saw, so it generally matched the character's perceived or stated political preferences.

Let's have a check for understanding.

True or false, there was a large difference in the type of information the characters received in the run-up to the election.

Is that true? Is that false? And can you tell me why? It's true.

Why? Algorithms ensured that what the voters were receiving aligned with their perceived or stated political preferences.

For example, a pro-Brexit voter might strongly support Brexit and have patriotic views, be supportive of the Royal family and have concerns over immigration.

So that's what their online persona is showing.

So therefore, their content experience was that they were bombarded with strong pro-Brexit messaging, including misinformation about the economic benefits of Brexit, and often saw negative portrayals of immigration.

So for example, they might have seen posts emphasising how leaving the EU would lead to an immediate NHS funding boost, which was a common but misleading claim.

Let's have a look at another example.

So if we remember, the Older Conservative voter might have traditional values, supports the Conservative Party's policies and is keen on policies that are tough on law and order.

So their content experience was that they were exposed to strong pro-Conservative messaging focusing on law and order policies and economic stability.

Opposition parties were often portrayed to them as being unpatriotic, for example.

So, some of the examples that they might have received are social media posts claiming Labour would dismantle border controls or increase taxes for pensioners.

Let's take a look at another.

So if we remember the Young Green voter, their online persona was showing them to be concerned about the environment and supporting policies that are climate focused, wanting political leaders to be ambitious in tackling climate change.

So their content experience was that they were targeted with content about climate change and promises from the Green Party or the Labour manifestos.

They encountered less disinformation, but more content driven by activist groups.

So an example is they were shown articles about Labour's Green Industrial Revolution, and these were really prominently featured.

And lastly, let's look at the Swing voters.

So they are the people that are politically unsure: they haven't yet decided who to vote for and they could be persuaded by any political party.

So they were exposed to a mix of neutral facts and conflicting political opinions.

They were subjected to the most intense misinformation and disinformation, with all sides trying to influence them.

So for example, they received claims that voting Conservative would lead to hidden NHS privatisation, while Labour was accused of economic recklessness.

So they were getting information from both sides.

The project highlighted how algorithms ensure that individuals receive different political content based on their perceived or publicly shared personal information.

People are often shown content that reinforces their political beliefs and fits the narrative of their online persona.

So let's have a check for understanding.

What are the missing words? Individuals are shown content that will, something, their political beliefs and fit a, something, that aligns with their, something, persona.

So individuals are shown content that will reinforce their political beliefs and fit a narrative that aligns with their online persona.

So for Task B, I would like you to really think about these statements.

Are these statements about the information that different voter types received correct or incorrect? So really have a think of each of them and decide, are you sure it's correct or do you think it's correct? Do you think it's incorrect? Are you sure it's incorrect? So have a little think.

So we've got: Swing voters were shown the most misinformation.

What do we think about that? Young Green voters were shown anti-immigration content.

Old Conservative voters were shown environmental content.

Pro-Brexit voters were shown claims that Brexit would benefit the NHS.

So have a go.

So let's have a look.

Swing voters were shown the most misinformation.

Correct.

Young Green voters were shown anti-immigration content.

Incorrect.

Old Conservative voters were shown environmental content.

Incorrect.

Pro-Brexit voters were shown claims that Brexit would benefit the NHS.

Correct.

For part two, for each incorrect statement about the voter types' experiences, explain why it is incorrect.

So let's have a look.

For each incorrect statement about the voter types' experiences, when you're explaining why it's incorrect your answer may have included: "Young Green voters were unlikely to be shown anti-immigration content because the algorithms would not have suggested this type of content, as it would not align with their online profile.

They would more likely be sent environmental content.

If we look at the Old Conservative voters, they were unlikely to be shown environmental content as this did not align with their online profile.

They would more likely be shown information about traditional Conservative Party policies relating to law and order." We're now going to look at how might this impact voting behaviour.

So Jacob's asking, "Do these different experiences of media content actually impact voting behaviour?" What do you think? Ultimately, since the characters were fictional, it's hard to say for sure if voting behaviour would be directly impacted by algorithms. Because these were made up characters, they weren't going to vote in the General Election.

However, the experiment did show that different media experiences could lead to a reinforcement of bias, misinformation and disinformation influence, polarisation, and emotional appeals.

So how could the reinforcement of bias impact voting behaviour? Pause and have a think for yourself.

So algorithms often created echo chambers that reinforced political beliefs and limited information and facts about different and opposing views.

This can make it difficult for voters to think critically.

For example, pro-Brexit voters rarely saw information outlining the potential drawbacks of Brexit, which means they might struggle to fully weigh up the pros and cons.

And that's what we mean by that echo chamber.

That idea that people are only receiving information that aligns with what they already think.

So it's really hard for them to critically think and explore other opinions.
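As a rough illustration of how that echo chamber can form, here is a toy simulation. The topics, weights, and the 1.2x feedback rule are entirely made up for illustration; it only shows the compounding effect of engagement-driven weighting.

```python
# Toy echo-chamber simulation; all topics, weights and the 1.2x boost
# are invented purely for illustration.
import random

random.seed(0)  # make the run reproducible
weights = {"pro-Brexit": 1.0, "anti-Brexit": 1.0, "neutral": 1.0}

for _ in range(50):
    # Recommend a topic in proportion to its current weight...
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # ...and assume this user only engages with pro-Brexit content.
    if shown == "pro-Brexit":
        weights[shown] *= 1.2  # engagement boosts future recommendations

total = sum(weights.values())
for topic, w in weights.items():
    print(f"{topic}: {w / total:.0%} of the feed")
# After a few dozen steps, pro-Brexit content dominates what is shown,
# even though all three topics started with equal weight.
```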

So now Jacob's asking, "How could the influence of misinformation and disinformation impact voting behaviour?" Have a think to yourselves.

So misinformation and disinformation targeted to the individual concerns that specific voters have could manipulate their voting behaviour.

The same is true for false information that validates an individual's beliefs or hopes.

This too could manipulate voting behaviour.

For example, disinformation claiming Labour would dismantle border controls could manipulate the voting behaviour of individuals who have real concerns about immigration.

So this shows actually, you know, incorrect information could absolutely impact people's voting behaviour.

So Laura is saying, "How could polarisation impact voting behaviour?" Have a think to yourselves.

So polarisation is when people's political views become more extreme and divided, with less agreement or middle ground.

So targeted information that reinforces a person's political beliefs could create polarisation by deepening the divides between different voter groups, making political debates more hostile, limiting individuals' ability to think critically, and reducing their mutual understanding.

Laura's thinking, "How could emotional appeals impact voting behaviour?" So again, pause and have a think to yourselves.

So targeted content that's based on a person's political beliefs often emphasises and prioritises emotional over rational appeals.

So for example, information might focus on the emotion of fear when providing content about immigration or NHS privatisation, rather than providing factual information from both sides of the argument.

So let's have a check for understanding.

So what I want you to try and do is to match the rationale to its description.

So these are the ones that we've just looked at.

So we've got reinforcement of bias, misinformation and disinformation.

We've got polarisation and we've got emotional appeals.

And you need to match them to one of these.

So one of them is false information that validates an individual's beliefs or hopes.

One of them is algorithms can create echo chambers that reinforce political beliefs.

One is prioritising feelings, such as fear or hope, over factual and balanced appeals.

And one is when people's political views become more divided with less agreement or middle ground.

So pause and have a go at matching the rationale to its description.

So reinforcement of bias is when algorithms can create echo chambers that reinforce political beliefs.

Misinformation and disinformation is when false information that validates an individual's beliefs or hopes is provided.

Polarisation is when people's political views become more divided with less agreement or middle ground.

And emotional appeals are prioritising feelings, such as fear or hope, over factual and balanced appeals.

The UK Undercover Voters Project came to the conclusion that targeted information likely did influence voting behaviour, as it reinforced existing political beliefs, spread misinformation and disinformation, and manipulated opinions by ensuring political information reinforced individuals' concerns and hopes.

And this highlights how targeted content that's driven by algorithms can impact voting behaviour.

Let's have a check for understanding.

The project came to the conclusion that targeted information likely didn't influence voting behaviour.

Is that true, is that false, and can you tell me why? It's false, and why? It likely did influence voting behaviour as it reinforced political beliefs, spread misinformation and disinformation, and manipulated opinions.

For Task C, I would like you to read this text and I'd like you to think about what might the missing words be.

So you need to fill in the missing words.

So let's read it together.

The UK Undercover Voters Project came to the conclusion that, something, information likely did, something, voting behaviour as it reinforced existing, something, beliefs, spread misinformation and, something, manipulated opinions by ensuring political information reinforced individuals, something, and hopes.

This highlights how targeted content driven by, something, can impact voting behaviour.

So pause while you have a go at this task and try to find the missing words.

So these were the missing words.

Let's read it together again.

The UK Undercover Voters Project came to the conclusion that targeted information likely did influence voting behaviour as it reinforced existing political beliefs, spread misinformation and disinformation, and manipulated opinions by ensuring political information reinforced individuals' concerns and hopes.

And this highlights how targeted content driven by algorithms can impact voting behaviour.

So for part two of Task C, I'd like you to read the scenario below and then answer the questions.

So Amir is an 18-year-old who regularly watches videos about climate change and sustainability on social media.

Amir often clicks on articles about the environment and follows several environmentalist groups.

Over time, the algorithms on Amir's social media platforms begin showing him more content related to green politics and policies.

So what I want you to think about is, A, how could the targeted content that Amir sees reinforce his existing beliefs? And B, if Amir started seeing false claims about the benefits of certain environmental policies, how could this impact Amir's voting behaviour in the future? So pause and have a think about this task.

So you might have said, for A, how could the targeted content Amir sees reinforce his existing beliefs? "The targeted content Amir sees might reinforce his existing beliefs because the algorithms will likely show him more of the same kind of information he already agrees with.

For example, since Amir is interested in climate change and sustainability, he'll see more posts and articles about green politics.

This could make Amir believe even more strongly that supporting green policies is the right choice for him, because he's constantly seeing content that matches his views." So for B, if Amir started seeing false claims about the benefits of certain environmental policies, how could this impact his voting behaviour in the future? "If Amir started seeing false claims about the benefits of certain environmental policies, it could affect his voting behaviour.

For example, if he saw a misleading post claiming that a specific policy would solve climate change immediately, Amir might believe that policy is a good option without fully understanding the truth.

This could make him vote for a party or candidate based on incorrect information.

because the algorithms are showing him content that matches what he wants to believe." So, in summary for What did the UK Undercover Voters Project tell us:

The UK Undercover Voters Project that was run by the BBC during the 2024 General Election investigated whether algorithms send targeted information based on online profiles and how this might influence voting behaviour.

They created 24 online characters with different political beliefs and registered them to various social media platforms. The conclusion of the project was that targeted information likely influenced voting by reinforcing beliefs, spreading misinformation, and shaping opinions based on personal concerns and hopes.

That brings us to the end of this lesson.

Well done for all your hard work, and I hope that you come back for future Citizenship lessons.