Hello, my name is Mrs. Holbrook, and welcome to Computing.
I'm so pleased you've decided to join me for the lesson today.
In this lesson, we'll be looking at how search engines work.
We'll be looking at indexing and crawling and how search ranking works.
Welcome to today's lesson from the unit Developing for the Web.
This lesson is called Search Engines and Results Rankings.
And by the end of today's lesson, you'll be able to explain how search engines find and rank web pages.
Shall we make a start? We will be exploring these keywords in today's lesson.
Crawl.
Crawl.
When a search engine automatically explores websites to collect information.
Index.
Index.
A big catalogue where search engines store webpage information so they can find it later.
Ranking.
Ranking.
The order search results appear in, based on how useful and relevant they are.
Look out for these keywords throughout today's lesson.
Today's lesson is divided into two sections.
We'll start by understanding crawling and indexing, and then we'll move on to explain how search ranking works.
Let's make a start with Understand crawling and indexing.
What parts of a search engine can you identify? Maybe pause the video whilst you have a look.
Did you manage to identify any of these? The search bar is the space where you type in the search terms you are looking for.
The search term is the keywords that you are going to type in for your search.
Categories are the types of pages that the search engine will return.
A visited hyperlink is normally shown in a different colour to an unvisited hyperlink.
A search engine is a digital tool that helps you find information on the internet.
When you type a question or keywords into a search engine, it looks through millions of websites to find the best answers for you.
Search engines process billions of searches every day.
Search engines use keywords to categorise the webpages they find.
When a user wants to find a useful webpage, they enter keywords and the search engine provides hyperlinks so that the user can access relevant web pages.
Izzy says, "A search engine is a catalogue of web pages, called an index." Sam says, "When a user enters keywords, the search engine locates relevant websites and displays them as hyperlinks." Search engines use crawlers or spiders to find content on the World Wide Web.
Alex says, "I don't like spiders!" Aisha says, "These crawlers are programmes! They travel the World Wide Web to catalogue information." So you don't need to be too scared, Alex.
Crawlers visit links from one page to another, recording common keywords that they find.
Crawlers move across the web by following links on web pages.
Their aim is to visit as many web pages as possible and keep a record of what they find.
This information is sent back to the search engine and stored in a database called the index.
The index is ready to be searched by users at a later date.
Alex said, "These are helpful spiders." Time to check your understanding.
What is the main job of a search engine? Is it a, to create new websites, b, to help users find relevant information, or c, to store all of the information on the internet? Pause the video whilst you have a think.
Did you select b? Well done.
The main job of a search engine is to help users find relevant information.
When a crawler finds a website, it first checks the HTML code for any metadata.
Metadata consists of extra bits of information added by the designer to make sure crawlers get the information they need.
Good designers add precise metadata to pages so that crawlers can pick up information more easily.
Adding high-quality metadata can also help a webpage appear nearer the top of the search results.
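To picture how a crawler might read metadata, here is a minimal sketch using Python's built-in html.parser. The sample page and its meta tags are made up for illustration; real crawlers are far more sophisticated.

```python
from html.parser import HTMLParser

# A hypothetical page, for illustration only.
PAGE = """<html><head>
<meta name="description" content="A guide to choosing a ladder">
<meta name="keywords" content="ladder, safety, DIY">
<title>Ladder Guide</title>
</head><body><p>Choosing a ladder...</p></body></html>"""

class MetaReader(HTMLParser):
    """Collects name/content pairs from <meta> tags, as a crawler might."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.metadata[attrs["name"]] = attrs["content"]

reader = MetaReader()
reader.feed(PAGE)
print(reader.metadata)
```

Running this collects the description and keywords that the page's designer added, which is the kind of information a crawler records first.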
After the metadata has been checked, the crawler records any keywords and how often they're used.
Articles, e.g. "a" and "the", and connective words, like "also" and "but", are usually ignored as they do not add to the description of the page.
Aisha says, "Keywords in titles and ones nearer the top of the page are seen as extra important." The last step for a crawler is to check and record any hyperlinks on the page.
The crawler records which other pages the page connects to, and visits them.
By travelling along these links, the crawler can eventually find newly created content.
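The two jobs described above, recording keyword frequencies and collecting hyperlinks to visit next, can be sketched roughly like this. The sample page, the stop-word list and the threshold for "common" words are all assumptions for illustration.

```python
import re
from collections import Counter
from html.parser import HTMLParser

# A small, made-up stop-word list; real crawlers use much larger ones.
STOPWORDS = {"a", "an", "the", "and", "but", "also", "or", "of", "to", "in"}

class PageCrawler(HTMLParser):
    """Records keyword frequencies and outgoing hyperlinks from one page."""
    def __init__(self):
        super().__init__()
        self.keywords = Counter()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)  # pages to visit next

    def handle_data(self, data):
        for word in re.findall(r"[a-z]+", data.lower()):
            if word not in STOPWORDS:
                self.keywords[word] += 1

PAGE = '<p>Ladders and ladder safety.</p><a href="/buy">Buy a ladder</a>'
crawler = PageCrawler()
crawler.feed(PAGE)
print(crawler.keywords.most_common(2))  # most frequent keywords first
print(crawler.links)                    # links the crawler will follow
```

Following each link in `crawler.links` and repeating the process is how a crawler travels the web and finds newly created content.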
When crawlers finish their journey, the information they have gathered is stored in a data structure called an index.
An index records the following about each webpage: frequently used keywords, the type of content found, for example images or text, and the date of the last update.
Other useful information is recorded that may be used later when users run their searches.
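As a rough picture, one entry in an index could be sketched as a dictionary holding those recorded facts. The URL, fields and values below are invented for illustration; a real index is far larger and heavily optimised for fast lookup.

```python
from datetime import date

# A minimal sketch of one index entry, with made-up example values.
index = {
    "example.com/ladders": {
        "keywords": {"ladder": 12, "safety": 5},
        "content_types": ["text", "images"],
        "last_updated": date(2025, 3, 20),
    }
}

def search(index, term):
    """Return pages from the index whose recorded keywords match the term."""
    return [url for url, entry in index.items() if term in entry["keywords"]]

print(search(index, "ladder"))  # ['example.com/ladders']
print(search(index, "boat"))    # []
```

This is why a search is fast: the engine looks through its prepared index rather than crawling the whole web each time.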
Izzy says, "Do search engines search the entire internet every time you do an online search?" Sam says, "No, when you search, the results are pulled from the index that the crawlers have recorded." Time to check your understanding.
Fill in the blanks in the sentences using the words provided.
Pause the video whilst you complete the task.
How did you get on? Did you manage to fill in the blanks? Let's go through the answers together.
When a crawler finds a website, it first checks the HTML code for any metadata.
This consists of extra bits of information added by the designer to help crawlers understand the webpage.
After this, the crawler records important keywords and how often they appear on the page.
The crawler also checks and records any hyperlinks on the page, which help it to find and visit other websites.
Remember, if you need to pause the video now and make any corrections, you can do that.
Okay, we are moving on to our first task of today's lesson.
Read the content of oak.link/about-programming as if you were a crawler.
Fill in the index table to show what information you believe is most important when cataloguing the page.
So find five keywords and put them in order of importance.
Explain the type of content found, whether it's images or text, and then find the date of the last update.
Pause the video whilst you have a go at the task.
How did you get on? Let's have a look at some sample answers together.
So the top five keywords were: programming, computer, instructions, languages, and device.
The type of content found was text and images.
And the date of the last update was the 20th of March, 2025.
Okay, we are now moving on to the second part of today's lesson, where we are going to explain how search ranking works.
If you are looking to buy a ladder, why would this webpage come up near the top of a list in search results? Maybe pause the video whilst you have a think.
The keyword "ladder" appears at the top of the page.
The keyword "ladder" appears multiple times on the webpage.
This puts this website at the top of the user's search results, even though it's a safety website and not a site to buy ladders.
There are potentially millions of web pages that could be stored in a search engine index that match a single keyword.
Search engines have to rank the pages in some way in order to show how relevant the results are.
It isn't useful for a search engine to just return all of the results with most keywords.
Web crawlers can be tricked if web designers use multiple keywords that are unrelated to the content of the page.
Izzy says, "This has so many keywords, it keeps coming up in my results.
It's like spam!" Keyword stuffing can make webpages appear in more search results as more keywords would be flagged.
The repeated keywords will also force that page to the top of the search results.
Search engine designers now create complex algorithms that try to rank the importance of web pages.
Search engines have upgraded over time and are now able to detect this trick when ranking pages.
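One very simple way an engine might spot keyword stuffing is to check how much of a page is just the same keyword repeated. This is only a sketch: the example pages and the threshold value are made up, and real ranking algorithms use far subtler signals.

```python
def keyword_density(text, keyword):
    """Fraction of the words on a page that are the given keyword."""
    words = text.lower().split()
    return words.count(keyword.lower()) / len(words)

honest = "this ladder is strong and this ladder is safe"
stuffed = "ladder ladder ladder buy ladder cheap ladder ladder"

# A hypothetical threshold, chosen only for this example.
THRESHOLD = 0.3
for page in (honest, stuffed):
    flag = keyword_density(page, "ladder") > THRESHOLD
    print("possible keyword stuffing" if flag else "looks fine")
```

A page where one keyword makes up most of the text looks suspicious, which is one reason stuffed pages can now be pushed down the rankings instead of up.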
Sam says, "Keyword stuffing is also used on social media, which is why you sometimes see hashtags that don't match the content." Aisha says, "If you just add tonnes of random hashtags to a post to just get noticed, it's a bit like shouting random words in a conversation." Time to check your understanding.
Match each word or phrase to its definition.
So we have keyword stuffing, ranking, and hyperlinks.
Pause the video whilst you have a go.
How did you get on? Keyword stuffing is adding too many keywords or unrelated words to a webpage to manipulate search rankings.
Ranking is the process of deciding which webpages appear first in search results.
Hyperlinks are links from one page to another, which help search engines understand how pages are connected.
Did you get all of those correct? Well done.
How do ranking algorithms decide if a webpage is relevant? They may look at when the page was last updated.
If pages aren't updated, the information is likely to be less useful.
They'll look at webpages that link to the crawled page.
If other pages link to it, that suggests other people find the information useful.
They'll also look at how long visitors to the page tended to stay.
People stay longer on pages that are easy to use and have relevant information.
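Those three factors could be combined into a single relevance score, sketched below. The weights, dates and page data are entirely made up for illustration; real ranking algorithms weigh hundreds of signals.

```python
from datetime import date

def rank_score(last_updated, inbound_links, avg_visit_seconds,
               today=date(2025, 3, 20)):
    """Toy relevance score: fresher pages, more inbound links and longer
    visits all push a page up. The weights here are invented."""
    days_old = (today - last_updated).days
    freshness = max(0, 365 - days_old) / 365  # 1.0 = updated today
    return freshness * 2 + inbound_links * 0.5 + avg_visit_seconds / 60

# Hypothetical pages with made-up statistics.
pages = {
    "safety-site": rank_score(date(2024, 1, 1), 40, 30),
    "ladder-shop": rank_score(date(2025, 3, 1), 12, 90),
}
# Sort highest score first, as a search engine orders its results.
for name, score in sorted(pages.items(), key=lambda p: p[1], reverse=True):
    print(name, round(score, 2))
```

Changing the weights changes the ordering, which is why different search engines can rank the same pages differently.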
Time to check your understanding.
Which of the following helps search engines decide if a webpage is relevant? Is it a, how recently the page was updated, b, the number of keywords on the page, or c, the length of the page's URL? Pause the video whilst you have a think.
Did you select a? Well done.
How recently the page was updated helps the search engine to decide if a webpage is relevant.
Okay, we are now moving on to our second set of tasks for today's lesson.
I'd like you to open the starter code at oak.link/terrys-travel-blog.
For part one, improve the search ranking of this webpage by adding information into the metadata.
a, add the author's name.
b, add a description of the webpage.
And c, choose five meaningful keywords and add them.
For part two, write a short paragraph explaining your changes and why they will help improve the search ranking of the webpage.
Pause the video whilst you complete the activity.
How did you get on? Let's have a look at some sample code together.
If you want to open the full solution, you can go to oak.link/terrys-travel-blog-solution.
So you can see here we've got some lines of metadata.
On line five, we have meta name="description" and content="Terry's Travel Blog about Mexico." On line six, we have meta name="keywords," and we have five keywords.
So we've used "travel, Mexico, beaches, culture, and exploration." And then on line seven we have meta name="author" and content="Terry Singh." Remember, your keywords may have been slightly different, but as long as they're related to the travel blog, that's absolutely fine.
For part two, you were asked to write a short paragraph explaining your changes to the metadata and why they will improve the search ranking of this webpage.
Let's have a look at a sample answer together.
I added the author's name, which is Terry Singh, and I added a description of the travel blog so that it clearly describes what the webpage is about.
For the keywords, I chose words that are specific and relevant to holidays.
So travel, Mexico, beaches, culture, and exploration.
These will help search engines show my page to people who are searching for these topics.
Remember, if you need to pause your video now and add any extra detail to your answers, you can do that.
Okay, we've come to the end of today's lesson, and you've done a fantastic job.
So well done.
Let's summarise what we've learned together.
Search engines help users find information by crawling the web.
A crawler visits webpages and collects data about their content.
This information is then stored in an index.
When somebody searches for something, the search engine looks through its index to find the most relevant results.
To decide which pages appear first, it uses a ranking system.
Pages are ranked based on factors like keywords, links, and how often they are updated.
I hope you've enjoyed today's lesson, and I hope you'll join me again soon.
Bye.