Hi, I'm Mrs. Allchin and I'm going to be taking you through the Citizenship lesson today.
I'm going to give you all the information that you need to be able to take part in the lesson, and I'll also pause and tell you when you need to complete an activity or complete a check for understanding.
I hope you enjoy the lesson.
This lesson is entitled: "Why are people concerned by the influence of digital media?", and it's taken from the unit: "How is social media changing our view of democracy?" By the end of today's lesson, you will be able to explain why people are concerned by the influence of digital media.
Keywords for today's lesson are: digital media, which is content shared through electronic devices such as websites, social media, videos, and online news platforms. Bad actor, which is an individual or group that engages in harmful, illegal, or unethical behaviour online, such as hacking, spreading disinformation or manipulating information.
Deepfake, which is using artificial intelligence to manipulate a video clip or images to create new deceptive content, often with known faces or voices.
And disinformation, which is false or misleading information deliberately spread to deceive or manipulate people.
So this is our lesson outline for the lesson: Why are people concerned by the influence of digital media? We're going to look at how social media is open to bad actors, then we're going to look at what a deepfake is, before lastly looking at what can happen when digital media goes wrong.
And we're going to start by looking at how is social media open to bad actors? So Sofia is asking "What are bad actors? I guess as this is a citizenship lesson, we're not talking about film stars?" So why don't you pause and have a think between yourselves.
Have you ever heard this term before? What could it mean? So a bad actor in digital media is an individual or a group that uses digital media in negative ways.
Bad actors might spread disinformation, so that's information they know is untrue, but they're spreading it anyway to try and cause harm.
Or they might commit identity theft by stealing other users' information, or they could try and scam people for personal and financial gain.
So they're basically using digital media in negative ways.
Bad actors may also use social media in negative ways to try and manipulate people's political beliefs and actions.
So Sofia's asking, "How can social media be used to manipulate people's political beliefs and actions?" Again, pause and have a think about this yourselves.
So these are some of the things that a bad actor might do, and you might have thought about some of these yourself.
So they might spread disinformation, information that they know is untrue, but they spread it anyway.
They might spread malinformation.
So information that is true, but it's spread for malicious reasons or they're only sharing a half-truth.
They might hack into accounts to gain access to political information or personal information about political figures, or they might create false accounts and pretend to be political figures.
So during key political moments, bad actors might purposefully share disinformation to try and impact people's voting decisions.
So for example, during the Brexit referendum in 2016, the Leave campaign faced accusations of disinformation.
A prominent example was a claim that leaving the European Union would free up 350 million pounds per week to fund the NHS.
This statistic was widely disputed and it was deemed to be misleading, and such false information may have influenced some voters to support Brexit based on incorrect claims. So Lucas is saying, "Is hacking an issue within politics? What could happen if political accounts are hacked?" So again, pause and have a think about this yourself.
So hacking is when someone without authority gains access to someone else's online account.
Politicians are under constant threat of hacking attempts.
So in 2021, the UK Electoral Commission, which is the independent body that oversees elections, was also hacked by bad actors, allowing them access to voter data of millions of people.
This hack did not jeopardise election results, and that is really important, but the data gained could possibly be used for disinformation campaigns, because the people whose data was taken might then have been targeted through their social media accounts with false information.
Let's have a quick check for understanding.
Which is not an example of how social media can be open to bad actors? Is it A: the spread of disinformation, B: hacking to access personal information, or C: talking to friends about political issues? And it's C: talking to friends about political issues.
Lucas is saying, "What about fake profiles? Is this an issue?" And anyone can quite easily set up a fake profile pretending to be someone else, and sometimes these fake profiles can be quite sophisticated and look very realistic.
So fake profiles can be set up to impersonate individual politicians, candidates, or even political parties, and they're often used to discredit politicians or mislead the public.
Although fake profiles are often taken down relatively quickly, the damage they can cause is huge as citizens can be unsure what information they can or cannot trust.
So there are two issues there.
One could be that somebody sees a fake profile of a politician or a prime minister and believes that it's true and believes what's being said, or they might know that it's fake, but then they start to get concerned about what they can trust, which is a really negative thing for our media.
And it isn't just fake accounts that can cause confusion and mistrust.
Fake followers can too.
So bots, and you may have heard of these, you may not, are programmes on the internet or another network that can interact with systems or users, and they can also be used to manipulate people's political beliefs.
Bots ultimately look and interact like normal social media users.
So if a bot was interacting on someone's social media page, it wouldn't look like a bot.
It would just look like it was a normal person, and that's what makes them so sort of influential and powerful, potentially.
So bots look and interact like normal social media users and can therefore be created to make it look as though a certain person is more popular than they actually are, or to detect and spread disinformation quickly.
So they can be used in lots of different ways.
In many ways, social media has been advantageous for democracy.
It has allowed citizens to engage with issues, communicate with candidates, and take part in political discussion.
But on the other hand, it has caused issues due to how easy it is for social media to be manipulated by bad actors, creating false information or fake accounts to mislead and confuse the public.
So Sofia's saying, "I don't think social media should be used for anything related to politics.
It all sounds too risky." Whereas Lucas is saying, "I disagree.
I think it's really useful.
We just need to be cautious of bad actors." So true or false? There are no benefits to using social media within a democracy, only risks.
Is that true? Is that false? And can you tell me why? It's false.
And why? In many ways social media has been advantageous for democracy.
It has allowed citizens to engage with issues, communicate with candidates, and take part in political discussion.
So for Task A, I'd like you to think about and create a list that identifies and describes techniques that bad actors may use online to manipulate people's political beliefs and actions.
You could include disinformation, hacking or creating fake profiles and bots.
So your list identifying and describing techniques that bad actors may use online to manipulate people's political beliefs and actions could include: disinformation: Bad actors might purposefully spread false, damaging and malicious information with the aim of changing people's political views.
Hacking: Bad actors might try to hack into political accounts to steal and share information or access follower data; this could then be used to target further false information.
Or creating fake profiles and bots: Bad actors might create fake political profiles to mislead the public.
They might also create bots to share false information or create the illusion that certain people are more popular than they actually are.
We're now going to move on to look at: What is a deepfake? So Sofia is asking, "What is a deepfake?" A deepfake is something that is created by artificial intelligence to replace or change someone's appearance or voice to make it look as though they have said something or done something that they haven't.
And deepfakes can take the form of photos, videos, or audio.
They can be very realistic.
So often when people see deepfakes, it can be really, really challenging for someone to differentiate between something that's real and something that's been manipulated by artificial intelligence.
So let's have a quick check for understanding.
What words are missing from this definition of a deepfake? A deepfake is something that is created by what intelligence to replace or change someone's appearance or what to make it look as though they have said something or done something that they what? Pause while you have a go at this check for understanding.
And the answers were "artificial," "voice" and "haven't." Video deepfakes edit videos to make it look as though someone has carried out a particular action or done a certain thing.
Common deepfakes of this kind include putting a famous person's head on a different person's body within a video.
So deepfakes of politicians could be made to show them doing something that could damage their reputation.
So for example, this could be a deepfake video that shows a politician acting drunk and disorderly or being physically abusive.
And you can imagine how seeing these sort of videos could really manipulate the public.
So Izzy's asking, "Are there any real life examples of video deepfakes being used in a democracy?" And absolutely.
So in the 2019 election, deepfakes were created that showed the then leader of the Labour party endorsing the then leader of the Conservative party and vice versa.
And although the videos did not have malicious aims and they were clearly fake, what they highlighted was very important: they showed people how realistic deepfakes are, how deepfakes could be used in future elections, and how this could cause potential difficulties, perhaps getting people to start thinking about how much they can actually trust the things that they see online.
Audio deepfakes mimic people's voices.
They do not rely on any imagery.
Instead they use artificial intelligence to mimic the really minute and specific details of an individual's unique voice and then use this to create fake audio content.
Deepfakes of politicians could be made so that we hear them saying something that could discredit their character, or something that could confuse voters.
So this could be something like a voicemail from a politician speaking badly about their constituents, or a fake voicemail announcement about an election.
That voice would sound absolutely like a specific politician or leader, but it wouldn't be, it would all be done through artificial intelligence as a deepfake.
And image deepfakes are fake photos.
They do not rely on any sound or movement.
Instead, they use artificial intelligence to edit photos to make it look like someone was doing something or visiting somewhere when they hadn't.
And deepfakes of politicians could be made to show them doing something that could damage their reputation.
So this could be a photo of a politician having lunch with a well-known criminal or other controversial character, or even being on a romantic date with someone other than their husband or wife or partner.
So again, these photos can look really, really realistic, and they can manipulate the people that are seeing them, getting them to think something that isn't actually true.
Let's have a check for understanding.
So can you match the type of deepfake to its description? So we've got video, audio, and image, and then its description.
So pause while you have a go at this check for understanding.
So a video is a deepfake which can use both sound and visuals, audio that mimics people's voices, and image, which is fake photos.
So Lucas is saying, "Do deepfakes always have to be deep? So are they always a real concern? What do you think? Just pause for a second and think actually, how big is the problem of deepfakes, do you think? So, although not strictly deepfakes, heavily edited photos and videos have also been used during election periods to mislead the public, so they can be pretty serious.
These have included heavily edited images of Joe Biden, for example, during the US presidential election that made him look frail.
And another example is a video of Nancy Pelosi, the Speaker of the US House of Representatives, which was edited to slow down her voice and make her sound as though she was drunk.
So actually these manipulated videos and images can really impact how people view very important people, potential leaders within their country.
So they can absolutely be highly influential and, ultimately, have a very big impact on what's happening within a democracy.
So Izzy's saying, "Are deepfakes really a problem? They just sound kind of funny to me." And Sofia is saying they are an issue.
"They are an issue because they can cause huge distrust as people won't know what is true and what is fake.
We should be able to trust information." And Lucas is saying, "I agree, deepfakes could also influence voting decisions, which is a huge issue in a democracy." So yes, of course there are likely to be deepfakes that are made just for a little bit of fun, but they can also really, really manipulate people during a democratic process, such as an election, which is a concern.
So let's have a check for understanding.
So true or false? Deepfakes are just a bit of fun, they are nothing to worry about.
Is that true or false? And can you tell me why? It's false.
Why? They can cause huge distrust as people won't know what is true and what is fake.
Deepfakes could also influence voting decisions, which is a huge issue in a democracy.
So for Task B, I want you to write a description of the term deepfake.
Your description should include video, audio, and photo deepfakes and briefly explain why they may be used.
So pause while you have a go at this task.
So your description may have included something like this: A deepfake is something that is created by artificial intelligence to replace or change someone's appearance or voice to make it look as though they have said something or done something that they haven't.
Deepfakes can take the form of photos, videos, or audio that mimics people's voices.
They can be very realistic and may therefore be used to mislead the public and damage their trust in the media.
We're now going to move on to look at what can happen when digital media goes wrong.
So digital media absolutely has a place in a democracy.
It can be used to share information, find out about political candidates and spark political debates.
There are lots and lots of positives.
But when it goes wrong and is used maliciously, it can be disadvantageous in a democracy by misleading the public, spreading false information and interfering with people's right to make informed choices about important issues. It can even lead to hostility and create dangerous situations when used to purposefully deceive.
So Lucas is saying, "I've heard people talk about the Cambridge Analytica Scandal, what was that?" Have you heard about this? Just pause and have a think.
So Cambridge Analytica was a political consulting firm, and in 2016, during the US presidential election, they were able to gain access to over 80 million social media users' personal data.
And this was without the users' consent.
The data was then used to target individual voters with highly personalised and tailored political information.
This scandal was viewed by many as being highly manipulative, and as a result, the social media site was fined over $700 million and Cambridge Analytica eventually closed after much scrutiny and legal action.
So this is where people's information was used without their consent to be able to target specific information to them.
The Cambridge Analytica Scandal raised serious questions about the protection and misuse of personal data that's stored online, and in the European Union the General Data Protection Regulation, which you might know as GDPR, came into effect in May 2018, enhancing individuals' rights to protect their personal data.
And as a result of this, there were lots and lots of rules, laws and policies that organisations had to follow to ensure that they were keeping people's data safe.
The scandal also raised awareness of how personal data could be exploited to meet political aims. So let's have a quick check for understanding.
So what did the Cambridge Analytica scandal involve? Is it A, B, or C? Pause while you have a go at this check for understanding.
And the correct answer was A.
So Sofia's asking, "Are there any other examples of digital media going wrong?" Can you think of any? Pause and have a little think to yourselves.
So Brazil actually experienced widespread disinformation campaigns during their 2018 and 2022 elections.
Groups that wanted the right wing candidate to win the election coordinated mass messaging campaigns using a popular instant messaging platform.
These messages falsely accused the left wing candidate of actions with the intention of damaging their reputation.
And businesses supporting right-wing parties paid for millions of messages to be sent throughout the campaigns, discrediting their opponents.
So this is when information that they absolutely knew was completely false was sent to absolutely loads and loads and loads of people to try and really discredit, ultimately, the opposition.
And fake news articles were also shared with millions of people, quickly going viral, and bots, which we learned about before, so made to look like real people, were also used to further spread this disinformation to keep it going and to keep it getting to more and more people.
The disinformation was completely uncontrollable, 'cause once it starts to spread, it's really difficult to stop that.
The messaging platforms faced lots of criticism and thousands of accounts were taken down.
Illegal campaign funding was also investigated.
The right wing candidate did end up winning the election, and it's difficult to judge what role the disinformation campaign played in this victory, which is a real shame.
Shockingly, similar disinformation campaigns were also used in the 2022 election.
However, by this point, social media platforms were vigilant and they were quicker in closing down groups.
The Brazilian government were also quicker to crack down on fake content.
So it did still take place in the next election, but organisations and the government were a bit more aware of it and were quicker to be able to take fake content down.
So let's have a check for understanding.
What disinformation tactics were used during the 2018 and 2022 Brazilian elections? Was it A, B, or C? Pause while you have a go.
And it was C: Really heavily coordinated mass messaging campaigns.
These examples of when digital media goes wrong highlight how disinformation campaigns and data hacking can be used to mislead and manipulate people within a democracy.
In a democracy, it's important that citizens can use all forms of media freely and trust the information that they are receiving.
When this trust is broken or when digital media goes wrong, this can be very concerning for citizens.
So let's have a check for understanding.
What is the missing word in this statement? So in a democracy, it is important that citizens can use all forms of media freely and something the information that they are receiving.
And the missing word is "trust." So for Task C, I want you to write an opposing statement to what Izzy is saying.
So let's have a look at that.
So Izzy is saying, "We don't need to be concerned by digital media going wrong.
It's just a bit of fun and no one takes it seriously." So you are going to write the other side of the argument.
You're going to oppose that statement.
Your statement should highlight the reasons why digital media can go wrong, and you should refer to at least one of the case studies in your answer.
So that might be the Cambridge Analytica Scandal and/or the Brazilian disinformation campaign.
So pause while you have a go at this task.
So your opposing statement could have included: Digital media can go wrong if it is used to spread disinformation, like during the Brazilian elections.
This can mislead citizens which is bad for democracy.
Likewise, if digital media is used to hack into citizens' personal information, like during the Cambridge Analytica Scandal, this can be used to manipulate citizens.
When digital media goes wrong, this can impact citizens' trust of the media.
So in summary of this lesson: Why are people concerned by the influence of digital media? A bad actor in digital media is an individual or a group that uses digital media in negative ways, such as to manipulate people's political beliefs and actions.
A deepfake is media that is created by artificial intelligence to replace or change someone's appearance or voice to make it look as though they have said something or done something that they haven't.
Again, these can be used to manipulate people's political beliefs and actions.
Digital media can and does go wrong.
This was seen during the disinformation campaigns that happened during the Brazilian elections and also during the Cambridge Analytica Scandal where personal information was accessed with the aim of manipulating voting behaviours.
This brings us to the end of this lesson.
Well done for all of your hard work, and we hope to see you back for more Citizenship lessons in the future.
Details of where to get support, if you're concerned about any of the information that we've looked at during this lesson, are available here.
So there is Ofcom, NSPCC and Internet Matters.
Please do go through this if you need to.