Hello, my name's Mr. Davidson and I'm going to be guiding you through your learning today.
Welcome to today's lesson called Representing Sound from the unit Representation of text, images, and sound.
We will be thinking and learning about how we can describe how computers represent sound.
We have four keywords that we're going to use today.
Sample, which is the measurement of a sound wave at a point in time.
Digitization, which is where we convert data into binary sequences.
Bit depth, which is the number of bits used to represent a sample, and sample rate, which is the number of samples taken per second, measured in hertz.
So let's start with today's lesson where we've got two learning cycles.
Our first learning cycle is where we're going to learn about how we describe what sound is.
Sound is all around us in the real world, in our everyday lives.
Those sounds that we hear also end up on digital devices, but we need to understand what sound is and how it gets on those devices in the first place.
So Sam is describing one particular example.
"Our Spanish teacher uses a microphone to record examples of speaking for exams onto a computer." And then the teacher can play it back.
So how do things we hear with our ears end up as sound on a computer? Well, imagine like Izzy, we're actually talking into a microphone.
What's actually happening is that when Izzy talks, she creates changes in air pressure, and that pressure wave causes air particles to vibrate backwards and forwards and knock into one another.
A microphone detects these vibrations and changes in air pressure, and its job is to create an electrical signal that matches the pattern of those vibrations.
If we plotted the changes in that electrical signal on a graph, we'd see a jagged up-and-down movement that represents the changes in air pressure.
So let's check you've understood that.
Can you fill in the blanks in these two sentences? That's right, well done.
A microphone detects vibrations in air particles, and then an electrical signal is produced that matches how the pressure of the particles changes.
So a microphone captures sound as an electrical signal, but we also have the opposite process: when we need to reproduce the sound that the microphone detected, the same electrical signal can be fed into a speaker, and the same pattern of vibrations can be recreated.
So as you can see in the diagram there, when the wave goes high, all the particles are squashed together.
The speaker moves to create that pressure.
As the wave drops down to its lowest point, the particles become more spread out and there's less pressure applied.
The speaker moves back, meaning that there's more space in the air around and there's less pressure caused by the sound wave.
A sound wave has two properties that change.
The first is the height of the wave, which we refer to as its amplitude.
That represents the volume of the sound: how much energy the wave has at that point.
We also can count the number of waves that occur in a second, and we call that the frequency, that determines the pitch of the sound.
So we can have high-pitched squeaky noises or we can have low-pitched bassy noises, very deep.
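If you're comfortable with a little Python, here's a small sketch of those two properties. The sine-wave shape and the amplitude and frequency values are just made up for illustration; they aren't taken from the lesson slides.

import math

def wave_value(amplitude, frequency_hz, time_s):
    # amplitude controls the loudness (the height of the wave)
    # frequency_hz controls the pitch (how many waves occur per second)
    return amplitude * math.sin(2 * math.pi * frequency_hz * time_s)

# A loud, high-pitched sound compared with a quiet, low-pitched sound
# at the same moment in time.
print(wave_value(amplitude=1.0, frequency_hz=2000, time_s=0.0001))
print(wave_value(amplitude=0.2, frequency_hz=100, time_s=0.0001))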
So let's just check you've understood that.
Complete the sentence.
A higher amplitude makes a sound either louder, quieter, higher pitched, or lower pitched.
What do you think? That's correct.
A higher amplitude makes a sound louder.
You are going to put some of that into practice now with our first task.
For the first part of the task, I want you to number the stages of recording someone singing and playing it back.
Put the steps into order from the first thing that happens through to the last.
For the second part of the task, I want you to use the terms amplitude and frequency to describe how a loud high-pitched sound is different to a quiet low-pitched sound.
Pause the video and have a go now.
Let's check how you got on.
In the first part of task A, you had to number the stages of recording someone singing and playing it back.
The steps in order would firstly be that the vibrations of air particles are caused by the singer.
The next thing that would happen is that the microphone would detect those vibrations and then it would produce an electrical signal which would match the pattern of those vibrations.
Then the electrical signal is fed into a speaker, where the speaker causes vibrations of air particles so that we can play back the sound that was originally picked up by the microphone.
For the second part, we needed to use the terms amplitude and frequency to describe how a loud high-pitched sound was different to a quiet low-pitched sound.
Well, the loud high-pitched sound is going to have a large amplitude and a higher frequency.
In comparison, the quiet low-pitched sound is going to have a smaller amplitude, and its frequency is going to be lower.
Let's get onto our second learning cycle where we're going to learn how we create a binary sequence from a sound wave.
So far, we've only considered physical properties in the real world: the pressure of a sound wave, with particles of air moving backwards and forwards in areas of high pressure and low pressure.
We've also represented that same change in air pressure as a changing electrical signal that varies over time.
Unfortunately, computers aren't designed to interpret these analogue waves as they are.
The sound needs to be put into a format that a computer can understand.
And as with any other data, computers need to represent sound as binary sequences.
We need to understand the way that the analogue values are turned into digital data.
That means the sound wave needs to be measured at different points along the analogue wave.
We refer to those measurements as samples.
In order to measure that sample, we need a reference point so we can read off the measurement and record its value as a binary sequence.
In this case, the sequence one one is going to represent the sample at that point at which we've measured.
If it were any lower than this point, we'd have to use one of the other three binary sequences.
So in this case, we're using a two-bit binary sequence to create our samples.
This process of taking analogue data and turning it into digital values is a process referred to as digitization.
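To make that concrete, here is a minimal Python sketch of digitization. The wave itself is an invented example, not the one on the slides, and I'm assuming each measurement falls between 0 and 1; the idea is simply that we measure the wave at regular intervals and record each measurement as a binary code.

import math

def digitise(wave, duration_s, sample_rate_hz, bit_depth):
    # sample an analogue wave at regular intervals and turn each
    # measurement into a binary code of bit_depth bits
    levels = 2 ** bit_depth                 # e.g. 2 bits gives 4 possible codes
    samples = []
    for n in range(int(duration_s * sample_rate_hz)):
        t = n / sample_rate_hz              # time of this sample in seconds
        value = wave(t)                     # measurement between 0.0 and 1.0
        level = min(int(value * levels), levels - 1)
        samples.append(format(level, f"0{bit_depth}b"))
    return samples

# An invented wave measured with a 2-bit depth at 4 samples per second.
wave = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * t)
print(digitise(wave, duration_s=1, sample_rate_hz=4, bit_depth=2))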
Let's check you've understood that now.
Fill in the gaps to complete the sentences.
Let's check what the right answers are.
When digitising sound, a sample is taken of the sound wave.
A measurement of the wave is taken from a reference point to be turned into a binary sequence.
Digitization is performed with a set number of bits for each sample.
The number of bits is referred to as the bit depth.
In this case, we are only using two bits for our bit depth.
So our sample values, the measurements taken at each point in time, can only take on one of four possible combinations.
Those measurements need to be repeated.
So digitization of a sound wave continues at regular intervals.
The number of samples taken in one second is referred to as the sample rate.
In this case in one second, we've taken four samples.
The unit for this rate is known as hertz, which we abbreviate as Hz: a capital H and a lowercase z.
In our example here, because we've got four samples in one second, we would say the sample rate is four hertz.
It's important to remember that unit.
So what do you think to this question? Can you tell me what is measured in hertz when digitising sound? Is it amplitude, sample rate, or bit depth? What do you think? The correct answer is sample rate.
The sample rate is measured in hertz when we are digitising sound.
The binary sequence that we end up with when we put all these samples together is the digital representation of the sound wave.
So in this case, with regular sampling and with a specified number of bits that we are measuring per sample, we can determine what the sequence is for that sound wave.
That means the sample rate and the bit depth must be consistent as the length of the digitised sound increases.
So the longer the sound wave, the more bits we have, but that is a direct result of that sample rate and bit depth being kept the same.
It's an extension of the sequence we had previously.
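We can check that with a quick calculation. This little Python sketch just multiplies the numbers together, using the 4 hertz, 2-bit example we've been working with.

def total_bits(sample_rate_hz, bit_depth, duration_s):
    # every second we take sample_rate_hz samples, each using bit_depth bits
    return sample_rate_hz * bit_depth * duration_s

# Keeping the sample rate and bit depth the same, a longer sound
# simply gives a longer binary sequence.
print(total_bits(sample_rate_hz=4, bit_depth=2, duration_s=1))  # 8 bits
print(total_bits(sample_rate_hz=4, bit_depth=2, duration_s=3))  # 24 bits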
The reason we keep the sample rate and the bit depth the same is because the binary sequence is used to recreate the original signal.
We need to know how many bits each sample uses and how many measurements were taken so that, when we create the equivalent analogue electrical signal, we can break the sequence apart into individual sample values and work out how each one relates to the changing electrical signal.
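Here's a small sketch of that decoding step in Python. The binary sequence is an invented example; the point is that we can only split it back into samples correctly if we know the bit depth that was used.

def split_into_samples(binary_sequence, bit_depth):
    # break the sequence into chunks of bit_depth bits and read each as a value
    return [int(binary_sequence[i:i + bit_depth], 2)
            for i in range(0, len(binary_sequence), bit_depth)]

# The same sequence gives completely different samples if the wrong
# bit depth is assumed, which is why it has to stay consistent.
print(split_into_samples("10111000", bit_depth=2))  # [2, 3, 2, 0]
print(split_into_samples("10111000", bit_depth=4))  # [11, 8]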
Let's put some of that into practice now, in task B.
For the first part of this task, I want you to describe how sound in the real world is converted into binary sequences that computers can process.
Key terms are important to remember, so I want you to use those key terms in your answer.
For the second part of the task, I want you to consider Sofia's statement.
She says that "When sound is digitised, where there is silence, there won't be any bits needed for that sample." Sofia's actually wrong in this case.
I want you to explain why that statement is wrong.
And for the third part, I've given you a sound wave on a graph.
So for the sound wave that's shown on the graph, I want you to work out what the binary sequence is if there's one sample every millisecond.
So the X-axis on the graph represents time in milliseconds.
You are going to have to choose the closest binary sample value.
Once you've done that, I want you to consider what issues are going to be caused when we try and decide what value each sample should be.
Pause the video and have a go at that now.
Well done, you worked really well on that.
For the first part of task B, you had to describe how sound in the real world is converted into binary sequences that computers can process.
My answer for this was: so that a computer can process sound, the sound wave is digitised into a binary sequence.
The sound wave is sampled at regular intervals, known as the sample rate, to produce a binary sequence that represents the sound wave as binary digits, which we also refer to as bits.
Each sample is measured using a set number of bits known as the bit depth.
For the second part of task B, we had to explain why Sofia's statement was wrong.
Well, the bit depth of a sample should always be consistent.
This is so the binary sequence that is created can easily identify each sample in order to recreate it accurately and consistently.
And for the third part of task B, you had to take some measurements from the graph, one every millisecond.
So taking measurements at the X-axis values of one, two, three and so on.
Now your answers could be different to mine, and that's because we can't always match a value on the Y-axis exactly at a particular point in time.
If we consider the sample at four milliseconds, we don't know which value to choose.
In my example, I've chosen the binary sequence 0010, but someone else might have chosen 0011.
The answers are going to vary between us all, and we'd write that as a response to the last part of the question.
What issues were caused when trying to decide what value each sample should be? Well, some samples at particular times fall between two binary values.
It is not always possible to tell what the nearest sample value is, and we have to approximate.
We can go to the one above or to the one below.
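If you'd like to see that approximation in code, here's a small Python sketch. It assumes a 4-bit depth with the levels one unit apart, a bit like the graph in the task; the reading of 2.5 is an invented value that sits exactly between two levels.

def nearest_sample_code(value, bit_depth):
    # round the measurement to the nearest whole level, then write it in binary
    levels = 2 ** bit_depth
    level = min(round(value), levels - 1)
    return format(level, f"0{bit_depth}b")

# A reading of 2.5 sits exactly halfway between levels 2 and 3, so one
# person might record 0010 and another 0011.
print(nearest_sample_code(2.5, bit_depth=4))  # '0010' (rounded down)
print(nearest_sample_code(2.6, bit_depth=4))  # '0011' (rounded up)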
Well done, you did really well today.
Let's just recap what we've learned.
We learned that sound is a pressure wave that causes vibrations in the air.
A sample is a measure of a sound wave at a point in time.
Sound samples are digitised with a specific number of bits known as the bit depth so that a computer can process them, and samples are taken at regular intervals known as the sample rate.