Search engine algorithms: what they are and how they work

They are the complex set of mathematical formulas, rules and criteria that determine what most of us see when we browse the Web: search engine algorithms, the “mysterious” machines responsible for analyzing all the web pages in the Index and then ranking them, that is, deciding the order in which pages appear in the SERPs, the search engine results pages. In a sense, algorithms are the backbone of how search engines such as Google, Bing, and Yahoo! rank the web pages shown in response to a user’s search query, and while it is impossible to fully unravel the “mystery,” we can try to understand how these systems work.

What are search engine algorithms

Search engine algorithms can be defined as the set of rules and procedures used by search engines to determine the relevance and importance of web pages in order to rank them in response to a specific search query.

These algorithms are fundamental to the operation of search engines, as they help filter the vast amount of information available on the Internet and provide users with the most relevant and useful results.

Thus, they are sophisticated tools that help organize and make accessible the enormous amount of information and content found online, supporting the mission of helping users find the best answers to their questions and thus playing a key role behind the scenes in determining what results we see when we search online.

Understanding algorithms and their importance

In computer science generally, an algorithm is a set of detailed, well-defined instructions that a computer follows to solve a problem or perform a task. It is like a recipe that the computer follows, step by step, to arrive at the end result. Algorithms can be simple, such as sorting a series of numbers, or extremely complex, such as analyzing large amounts of data to predict market trends.

A search algorithm, on the other hand, is what concerns us most directly here: it is the specific set of formulas that a search engine uses to retrieve information stored within a data structure and to determine the meaning of a web page and its content.

As we know, search engines are essentially answer machines: given a user’s input, they examine billions of pieces of content stored in their repository (the Index) and evaluate thousands of factors to determine which page is most likely to answer the question posed and satisfy the person’s original intent.

In a nutshell, search engine activity consists of three stages:

  • Crawling. Through special programs called “crawlers” or “spiders,” search engines automatically explore the Web and encounter newly available content (Web pages, PDFs, images, videos, and so on): crawlers follow links from one page to another, gathering information about each resource they encounter.
  • Indexing. Once a page has been discovered, the information gathered by the crawlers is analyzed and cataloged in a huge database, called the Index, hence the name indexing. At this stage, search engine algorithms analyze the page’s content, inbound and outbound links, and other factors to determine what the page is about and how it should be ranked.
  • Ranking. When a user enters a query into a search engine, the algorithm searches its database for pages that are most relevant to that input: all pages are then sorted, or “ranked,” based on a number of factors, including content relevance, page authority, quality of inbound links, user experience, and many others. The result of this ranking process is the list of search results we see when we perform a search.
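
To make these three stages more concrete, here is a deliberately tiny Python sketch of a crawl, index and rank pipeline. It works on an invented in-memory “web” and scores pages by simple word overlap; real search engines operate at an enormously larger scale and weigh far more signals.

```python
from collections import defaultdict

# A toy "web": each URL maps to its text content and its outbound links.
WEB = {
    "/home":  {"text": "roast beef recipes and cooking tips", "links": ["/roast", "/gravy"]},
    "/roast": {"text": "how to season and cook roast beef", "links": ["/gravy"]},
    "/gravy": {"text": "classic gravy recipe for roast beef", "links": ["/home"]},
}

def crawl(start):
    """Crawling: follow links from page to page, collecting every resource found."""
    seen, queue = set(), [start]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        queue.extend(WEB[url]["links"])
    return seen

def build_index(urls):
    """Indexing: catalog each page's words in an inverted index (word -> pages)."""
    index = defaultdict(set)
    for url in urls:
        for word in WEB[url]["text"].split():
            index[word].add(url)
    return index

def rank(index, query):
    """Ranking: score pages by query-word overlap (real engines weigh many more factors)."""
    scores = defaultdict(int)
    for word in query.split():
        for url in index.get(word, ()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(crawl("/home"))
print(rank(index, "roast beef gravy"))  # e.g. ['/gravy', '/home', '/roast']
```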

Therefore, algorithms are used by search engines to analyze and index the content of web pages, and also to support them in determining the relevance and quality of a web page with respect to a user’s search query. Each search engine creates, manages, uses and updates its specific search algorithm.

Therefore, we can talk about Google algorithm, Bing algorithm or Yahoo! algorithm, each of which uses particular and complex criteria to do its job, taking into consideration the signals deemed most useful in achieving the goal of providing more accurate and relevant search results.

Google algorithm or algorithms: how many algorithms are there?

At this point a brief terminological aside is in order, and it is more than just a matter of words. Often, for simplicity and convenience, we speak of “the Google algorithm” in the singular, even though this oversimplifies a much more complex reality. Google in fact uses a number of algorithms and machine learning techniques to analyze and rank web pages, which work together, each with a specific task, to provide the most relevant and useful search results.

For example, Google has historically used an algorithm called PageRank to determine the authority and importance of a web page based on the quantity and quality of the links pointing to it, while another algorithm, Panda, is designed to penalize sites with low-quality content; these are joined by further algorithms specific to image search, news search, local search, and more, together forming the powerful set of “Google algorithms” at work every moment to answer users’ queries.
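
The core idea of PageRank is public (it was described in Brin and Page’s original paper): a page’s score depends recursively on the scores of the pages linking to it. The snippet below is a minimal power-iteration sketch on an invented three-page graph, not Google’s production implementation.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank: each page distributes its score among the pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
        rank = new
    return rank

# Page "A" receives links from both "B" and "C", so it ends up with the highest score.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["A"]}))
```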

For some time now, Google has been sharing the nomenclature it uses for these processes, introducing the concept of “Google Search ranking systems”: every automated ranking algorithm that takes part in analyzing signals across the hundreds of billions of web pages and other content in the Index, with the goal of “presenting the most relevant and useful results, all in a split second.” Some of these systems are part of Google’s broader ranking systems (the underlying technologies that produce search results in response to queries), while others are involved in specific ranking needs.

How algorithms work: a general look

Of course, search engine formulas are secret and intended to remain so, but we can still try to describe, at least in broad strokes, how the algorithms work, since they follow fairly standardized processes.

For example, the top priority is to respond promptly to the user’s needs, making their experience smooth and positive. Google has managed to become the most popular search engine on the planet thanks to complex algorithms that refine the search process with sophisticated techniques, providing users with the information they are looking for in just a few moments.

Search engine algorithms typically analyze hundreds of factors (we have seen how Google uses more than 200 ranking signals to determine which results to provide and in what order), including content relevance, use of relevant keywords, page performance, mobile optimization, backlinks, query context, and so on; these signals are weighted and combined into complex mathematical formulas so that web pages can be ranked quickly.
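
Since the actual signals and their weights are secret, the sketch below only illustrates the general mechanism of combining weighted signals into a single score; every signal name, weight and value in it is invented purely for illustration.

```python
# Hypothetical signals and weights, invented purely for illustration:
# the real signals (and how Google combines them) are not public.
WEIGHTS = {
    "content_relevance": 0.4,
    "backlink_quality": 0.3,
    "page_speed": 0.2,
    "mobile_friendly": 0.1,
}

def score(signals):
    """Combine normalized signal values (0..1) into a single weighted ranking score."""
    return sum(weight * signals.get(name, 0.0) for name, weight in WEIGHTS.items())

pages = {
    "page_a": {"content_relevance": 0.9, "backlink_quality": 0.4, "page_speed": 0.8, "mobile_friendly": 1.0},
    "page_b": {"content_relevance": 0.6, "backlink_quality": 0.9, "page_speed": 0.5, "mobile_friendly": 1.0},
}
ranking = sorted(pages, key=lambda name: score(pages[name]), reverse=True)
print(ranking)  # ['page_a', 'page_b']: 0.74 vs 0.71
```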

Search engine algorithms and the impact on SEO

Search engine algorithms are thus a collection of formulas used to determine the quality and relevance of a web page to the user’s query. So, if we want our content to gain visibility on Google and achieve good rankings, we must try to get into the algorithms’ good graces: identify which criteria they reward and stay away from explicitly prohibited tactics, which risk having the opposite effect.

To achieve good results, then, it is not enough to know the tactics and tricks of SEO: to work more strategically and scientifically, it also helps to have a basic understanding of how the whole system behind the SERPs works. In other words, it is useful to grasp at least the essentials of how search engine algorithms operate and, therefore, why some tactics work better than others, even though the topic remains “shrouded in mystery.”

Indeed, search engine algorithms have a significant impact on SEO and organic traffic, and there is another aspect not to be overlooked: these formulas and recipes are constantly evolving, with frequent and regular changes that serve to improve the user experience and ensure that users always find the best answer to their queries.

It follows, then, that even those who work online cannot stop and must continue to keep up with the latest algorithm updates to ensure that SEO strategies remain effective, aiming to keep the quality and relevance of the landing page content high with respect to the needs and intent of the audience and also monitoring other relevant aspects of the pages.

What algorithmic updates are and which types exist

Google itself reports that it changes its algorithms hundreds of times each year, ranging from more significant interventions to changes that have no particular impact on SERPs and site rankings.

In fact, not all algorithm updates have the power to significantly affect the results pages, reshuffling the ranking of sites, because in truth there are different types of algorithmic updates launched by Google.

In particular, we recognize three of them:

  • Broad core updates, the main updates. They are performed several times throughout the year (usually on a quarterly or broader basis) and adjust the importance of different ranking factors, acting on a broad scale and globally.
  • Specific key updates. These are less frequent interventions that target a specific aspect of the search algorithm or a specific system, such as the Helpful Content system or the Reviews system.
  • Minor updates. Implemented even daily or weekly, these interventions usually do not produce major visible changes in site performance and analytics; they are often small changes that improve the search experience and do not affect the rankings of high-quality sites on a large scale.

Although, as mentioned several times, Google keeps the details of its ranking algorithm private, we can assume that it regularly updates its formulas to improve the quality of search results and to reflect changes in user behavior and technological advances. No less important, it is also a way to stay one step ahead of bad actors and spammers seeking visibility on its platform.

How to prevent the negative effects of algorithm updates

So, search engine algorithms are complex mechanisms that are, moreover, constantly evolving: this means, first of all, that it is a mistake to think we can create evergreen content that will be successful forever, because subsequent updates risk making our strategy obsolete and therefore ineffective.

On the other hand, however, it is also wrong to focus excessively on search engine algorithms, frantically adapting marketing strategies to every small change: it is more useful and appropriate to look at the broader picture, trying to understand the questions and needs of our audience, because only in this way can we reduce our dependence on algorithm changes and control, at least in part, how much they influence our initiatives.

Updates by search engines and their algorithms can in fact be as unpredictable as the weather: the only thing we know for sure is that they will be there, with effects that can be of various kinds. And so, what we can do is to make sure we are prepared and ready at the right time, working on a few key aspects on our pages such as those listed below.

  • Quality first

Despite algorithm updates, one thing remains constant: the quality of content on landing pages has a positive effect on conversions. Search engines, and especially Google, pay close attention to page quality and keyword relevance, and this is the first point to ensure on our site.

  • Beyond keywords

Google’s algorithm updates tend to shift the focus away from keywords and toward more long-tail phrases and nuances: instead of focusing only on the keyword, we need to consider intent, relevance, and context. So we need to figure out how best to answer the questions posed by our audience and design useful content, working on the entire user journey rather than just individual keywords.

  • Staying informed

In some cases, search engines give advance notice of an upcoming algorithm update, which gives us time to familiarize ourselves with the new factors and adapt accordingly. In addition, our page also reports all the most relevant Google Updates!

  • Keep calm and analyze the data

When search engines change their algorithms there are moments of chaos for us marketers, and all we can do is implement the relevant improvements and follow the latest guidelines. If we use Google Analytics, noting when an algorithm update occurred can help explain any out-of-the-ordinary results, just as SEOZoom’s monitoring helps us spot sudden fluctuations in rankings.

  • React carefully

If a recent algorithm update has hurt our rankings, it is important not to panic. A mad scramble to adjust strategies and make quick changes in an attempt to stem the bleeding could damage organic visibility even further; the best results come from taking the time to study the goals of the new update and see where our website fails to meet its requirements. Most likely, a few simple steps are all it takes to get back on track, even if recovery takes time.

Google’s algorithm is not a formula

We can explore this topic further by taking advantage of Dave Davies’ work in Search Engine Journal, which tries to explain in a simple and accessible way what the Google algorithm is.

Starting with Wikipedia’s definition, an algorithm “is a process or program that solves a class of problems through a finite number of elementary, clear, and unambiguous instructions,” but there is also a more general meaning, which is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.” In either case, it is important to understand a fundamental point: an algorithm is not a single formula.

For this, Davies uses the example of a meal: as a good Canadian, the expert tells us, he loves to eat the classic “roast beef with gravy,” which calls for the following items:

  • Roast beef
  • Horseradish
  • Yorkshire pudding
  • Green beans
  • Mashed potatoes
  • Gravy

The roast beef must be seasoned and cooked perfectly, explains Davies, who then adds other details: “the seasoning paired with the roast would be an example of a formula – how much of everything is needed to produce a product; a second formula used would be the amount of time and temperature at which the roast should be cooked, given its weight, and the same operation is repeated for each item in the list”.

So, “at a very simple level we would have 12 formulas (6 ingredients x 2: one for measurements and the other for cooking time and temperature according to volume), which create an algorithm with the aim of producing one of Dave’s favorite meals”, to which he adds “another formula, to consider the amount of different foods I would like on my plate,” he writes.

And this without even including “the various formulas and algorithms needed to produce the ingredients themselves, such as raising a cow or growing potatoes”.

But this is only the custom algorithm of Davies’ preferences: “we must consider that each person is different and will want different amounts of each ingredient and may want different condiments”. You can then customize that algorithm to give everyone the ideal result by adding a formula for each person.

An algorithm of algorithms

This long digression may seem to have taken us far from the topic of this article, but in reality an algorithm and a dinner table have much in common, and to prove it Davies lists some of the main features of a website (limited to six for direct comparison):

  • URL
  • Content
  • Internal links
  • External links
  • Images
  • Speed

As with the dinner algorithm, “each of these areas can be further subdivided using different formulas and, in fact, different sub-algorithms”. So the first step to understanding how the system works is to consider not just one algorithm, but many.

It is, however, important to keep in mind that, although there are “many algorithms and countless formulas in play, there is still only one algorithm”: its task is to determine how the others are weighted to produce the final results we see in the SERP, the expert explains.

According to Davies, “it is perfectly legitimate to recognize that at the top there exists a kind of master algorithm – the one algorithm to rule them all, so to speak (quoting The Lord of the Rings) – but without forgetting that there are countless other algorithms”, which are generally the ones “we think of when we consider how they affect search results”.

The algorithm is a perfect recipe

Going back to the analogy, there are a multitude of different features of a website that are evaluated, “just as we have a number of food items that we find on our plate”.

To produce the desired result, “we must have a large number of formulas and sub-algorithms to create each element on the plate and the master algorithm to determine the quantity and placement of each element”.

Similarly, when we use the expression “Google algorithm” we are actually referring to “a massive collection of algorithms and formulas, each set to perform a specific function and collected together by a chief algorithm or, dare I say, core algorithm to rank the results”.
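
To picture this “algorithm of algorithms” in code, here is a hedged sketch in which each sub-algorithm scores one aspect of a page with its own small formula and a core algorithm weights their outputs into a single score. The sub-algorithms, formulas and weights are invented for illustration and are not Google’s real systems.

```python
# Each "sub-algorithm" scores one aspect of a page with its own small formula.
# Names, formulas and weights are invented for illustration only.
def content_quality(page):
    return min(len(page["text"].split()) / 500.0, 1.0)   # rough length-based proxy

def link_quality(page):
    return min(page["trusted_inbound_links"] / 20.0, 1.0)

def speed(page):
    return 1.0 if page["load_time_seconds"] < 2 else 0.5

SUB_ALGORITHMS = [(content_quality, 0.5), (link_quality, 0.3), (speed, 0.2)]

def core_algorithm(page):
    """The "algorithm of algorithms": weight each sub-algorithm's output into one score."""
    return sum(weight * algorithm(page) for algorithm, weight in SUB_ALGORITHMS)

page = {"text": "roast beef " * 300, "trusted_inbound_links": 12, "load_time_seconds": 1.4}
print(round(core_algorithm(page), 2))  # 0.5*1.0 + 0.3*0.6 + 0.2*1.0 = 0.88
```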

Examples of algorithms

The article recalls some types of Google algorithms to illustrate the point:

  • Algorithms like Panda, which help Google judge, filter, penalize, and reward content based on specific features (and Panda itself probably comprised a myriad of other algorithms).
  • The Penguin algorithm, which judges links and identifies spam. Such an algorithm certainly requires data from other pre-existing algorithms responsible for evaluating links, and probably some newer algorithms in charge of recognizing the common features of spam links, so that the larger Penguin algorithm can do its job.
  • Task-specific algorithms.
  • Organizational algorithms.
  • Algorithms responsible for collecting all the data and putting it into a context that produces the desired result: a SERP that users will find useful.

The role of entities for the algorithms

In this context, an element that is receiving more and more attention is the analysis of entities, which we know to be “singular, unique, well-defined and distinguishable things or concepts”.

Returning to the dinner analogy, Davies shows us which entities are involved: he himself, each member of his family, and also his family unit (an entity in its own right); then again, the roast and every ingredient it is made of are further single entities, as are the Yorkshire pudding and the flour it takes to make it.

Google sees the world as a collection of entities, and this has been a decisive step in the process of improving search algorithms.

Continuing his reasoning, Davies explains that at his dining table there are “four individual entities that have the status of eating and a number of entities that are instead being eaten: classifying us all in this way has many advantages for Google over simply evaluating our activities as a set of words”. Each eating entity can be assigned the entities it has on its plate (roast beef, horseradish, green beans, mashed potatoes, Yorkshire pudding and so on).

Google uses this type of classification to judge a website. Every web page is an entity, just like every person sitting at the dining table.

The global entity representing them all (called “Davies”) would be “roast beef dinner”, but each individual entity representing an individual or page is different. In this way, Google can easily classify and judge the interconnection of websites and the world at large.

The work of algorithms with entities

Basically, search engines do not have the task of judging a single website: they must rank them all. So they focus not only on the Davies entity and its roast beef dinner, but also on the entity of the neighboring Robinsons, whose dinner is stir fry. Now, if an external entity known as Moocher wants to know where to eat, the options can be ranked for Moocher based on its preferences or query.

This is important for understanding search algorithms and how entities work: instead of just seeing what a website as a whole is about, Google understands “that my roast beef and gravy are related and, in fact, come from the same main entity”.

And this is also crucial to understanding what it takes to rank, as the article explains. If Google understands that one web page is about roast beef, while another page that “links to it is about beef dip (a sandwich typically made with roast beef leftovers)”, it is absolutely important that “Google knows that roast beef and beef dip derive from the same main entity“.

How this applies to determining page relevance

In this way, algorithms “can assign relevance to this link based on the connection of these entities”.

Before the idea of entities entered Search, engines could assign relevance only on the basis of parameters such as keyword density, keyword proximity and other signals that were easy to interpret, and just as easy to manipulate.

Entities are much more difficult to manipulate: “either a page is about an entity, or it is not related”.

By crawling the Web and mapping the common ways in which entities relate, search engines can predict which relationships should have the most weight.
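
As a purely illustrative sketch of this idea (the entities and relations below are invented, not taken from Google’s Knowledge Graph), we can represent entities as nodes in a small graph and give more weight to a link whose source and target pages cover related entities.

```python
# A tiny, hand-built entity graph; the entities and relations are invented
# examples, not data from Google's Knowledge Graph.
ENTITY_GRAPH = {
    "roast beef": {"beef dip", "gravy", "yorkshire pudding"},
    "beef dip": {"roast beef", "sandwich"},
    "stir fry": {"rice", "vegetables"},
}

def related(entity_a, entity_b, graph=ENTITY_GRAPH):
    """Two entities count as related if either one links to the other in the graph."""
    return entity_b in graph.get(entity_a, set()) or entity_a in graph.get(entity_b, set())

def link_weight(source_entity, target_entity):
    """Give more weight to a link whose source and target pages cover related entities."""
    return 1.0 if related(source_entity, target_entity) else 0.2

print(link_weight("beef dip", "roast beef"))  # 1.0: both derive from the same main entity
print(link_weight("stir fry", "roast beef"))  # 0.2: unrelated entities
```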

Understanding the basics of operation

Ultimately, it is important to understand how algorithms work to add context to what we are experiencing or reading.

When Google announces an algorithm update (as in the case of the Google Page Experience update), what is being updated is probably a small piece of a very large puzzle; broad core updates, on the other hand, are significant and far-reaching changes to Google’s basic engine, a sort of periodic tune-up to keep the results effective.

Entering into this perspective can help to interpret “what aspects of a site or the world are adapted in an update and how this adaptation fits the great goal of the engine”.

In this sense, it is crucial to understand how important entities are in search algorithms today: they have enormous relevance, destined to grow, and are based in turn on algorithms that identify and recognize the relationships between them.

To name a few perks, “knowing this mechanism is useful not only to understand which content is valuable (that is, the content closest to the entities we are writing about), but also to know which links will probably be judged more favorably and weighted more heavily“.

Everything is based on search intent

Search algorithms work “as a vast collection of other algorithms and formulas, each with its own purposes and tasks, to produce results that a user will be satisfied with,” writes Davies in his closing lines.

This vast system includes algorithms specifically designed to understand entities and how entities relate to each other, in order to provide relevance and context to other algorithms.

Moreover, according to the expert, “there are algorithms that monitor just this aspect of the results and make changes where it is believed that the ranked pages do not meet the search intent, based on the way users interact”, meaning that there are algorithms that analyze search behaviour, as we noted when discussing the search journey.

Because, as the public voices of Mountain View often remind us, Google’s task is to provide accurate answers to users.
