Search engine algorithms: what they are and how they work
It is not always enough to know the tactics and tricks of SEO to get good results: to achieve your objectives and work in a more strategic, scientific way, it also helps to have a basic understanding of the whole system behind the SERPs. In other words, it is useful to understand how search engine algorithms work and, therefore, why some tactics work better than others: here is an overview of a world that is still shrouded in mystery.
Google’s algorithm is not a formula
According to the Wikipedia definition, an algorithm "is a procedure or program that solves a class of problems through a finite number of elementary, clear and unambiguous instructions". In his article on Search Engine Journal, Dave Davies cites the more general definition: "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer".
In both cases, it is important to understand a fundamental point: an algorithm is not a formula.
The example of the menu and the recipe
It is Davies again who tries to make such a complex system easier to grasp with the example of a menu: as a good Canadian, the expert tells us, he loves the classic "roast beef with gravy", served together with its accompanying items:
- Roast beef
- Yorkshire pudding
- Green beans
- Mashed potatoes
The roast beef must be seasoned and cooked perfectly, explains Davies, who then adds further details: "the seasoning applied to the roast would be an example of a formula – how much of each ingredient is needed to produce a result; a second formula would be the amount of time and the temperature at which the roast should be cooked, given its weight". The same operation is repeated for each item on the list.
So, "at a very simple level we would have 12 formulas (6 ingredients x 2 – one for measurements and the other for cooking time and temperature according to volume), which create an algorithm with the aim of producing one of Dave's favorite meals", to which he adds "another formula, to account for the amount of each food I would like on my plate", he writes.
And this without even including “the various formulas and algorithms needed to produce the ingredients themselves, such as raising a cow or growing potatoes”.
But this is only the algorithm tuned to Davies' own preferences: "we must consider that each person is different, will want different amounts of each ingredient and may want different condiments". The algorithm can then be customized to give everyone their ideal result by adding a formula for each person.
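Davies' analogy can be sketched in a few lines of code: each "formula" computes one value, and the "algorithm" is simply the function that chains the formulas together, plus one extra formula per diner. All names, ratios and quantities below are invented purely for illustration.

```python
# Illustrative sketch of the recipe analogy: formulas composed into an algorithm.

def seasoning_formula(weight_kg: float) -> float:
    """Formula 1: grams of seasoning for a roast of a given weight (assumed ratio)."""
    return weight_kg * 15.0

def cooking_formula(weight_kg: float) -> tuple[float, int]:
    """Formula 2: cooking time (minutes) and temperature (Celsius) by weight."""
    return weight_kg * 40.0, 180

def portion_formula(preference: float, base_grams: float) -> float:
    """Per-person formula: scale a serving to one diner's preference (0..1)."""
    return base_grams * preference

def dinner_algorithm(roast_kg: float, preferences: dict[str, float]) -> dict:
    """The 'algorithm': combine the individual formulas into one final result."""
    minutes, temp = cooking_formula(roast_kg)
    plates = {name: portion_formula(pref, 250.0) for name, pref in preferences.items()}
    return {
        "seasoning_g": seasoning_formula(roast_kg),
        "cook": (minutes, temp),
        "plates_g": plates,
    }

meal = dinner_algorithm(2.0, {"Dave": 1.0, "guest": 0.6})
```

Each function alone is just a formula; only their combination, plus the per-person customization, behaves like an algorithm in the sense the article describes.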
An algorithm of algorithms
This long digression seems to have taken us far from the topic of our article, but in reality an algorithm and a dining table have a lot in common; to prove it, Davies lists some of the main features of a website (limited to 6 for a direct comparison):
- Internal links
- External links
As already seen with the dinner algorithm, "each of these areas can be further subdivided using different formulas and, in fact, different sub-algorithms". So the first step to understanding how the system works is to think not of a single algorithm, but of many.
Many algorithms, one single algorithm
It is however important to keep in mind that, although there are "many algorithms and countless formulas at play, there is still only one algorithm": its task is to determine how all the others are weighted to produce the final results we see on the SERP, explains the expert.
According to Davies, "it is perfectly legitimate to recognize that at the top there is one algorithm – the one algorithm to rule them all, so to speak (to quote The Lord of the Rings) – but without forgetting that there are countless other algorithms", which generally are those "we think of when we consider how they affect search results".
The algorithm is a perfect recipe
Going back to the analogy, there are a multitude of different features of a website that are evaluated, “just as we have a number of food items that we find on our plate”.
To produce the desired result, “we must have a large number of formulas and sub-algorithms to create each element on the plate and the master algorithm to determine the quantity and placement of each element”.
Similarly, when we use the Google algorithm expression we actually refer to “a massive collection of algorithms and formulas, each set to perform a specific function and collected together by a chief algorithm or, dare I say, core algorithm to place the results”.
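A minimal sketch can make the "algorithm of algorithms" idea concrete: each sub-algorithm scores one feature of a page, and a core function blends the scores. The features, thresholds and weights below are invented for illustration; the real signals and weights Google uses are unknown and constantly tuned.

```python
# Hypothetical sub-algorithms, each scoring one page feature in [0, 1].

def score_internal_links(page: dict) -> float:
    return min(page.get("internal_links", 0) / 50.0, 1.0)

def score_external_links(page: dict) -> float:
    return min(page.get("external_links", 0) / 20.0, 1.0)

def score_content(page: dict) -> float:
    return min(page.get("word_count", 0) / 1500.0, 1.0)

# Invented weights: the "core" algorithm decides how much each sub-score counts.
SUB_ALGORITHMS = {
    score_internal_links: 0.2,
    score_external_links: 0.3,
    score_content: 0.5,
}

def core_algorithm(page: dict) -> float:
    """The 'algorithm of algorithms': a weighted blend of all sub-scores."""
    return sum(weight * sub(page) for sub, weight in SUB_ALGORITHMS.items())

page = {"internal_links": 25, "external_links": 10, "word_count": 750}
score = core_algorithm(page)
```

Updating one sub-algorithm, or just its weight, changes the final ranking without touching the rest of the system, which mirrors how a small algorithm update differs from a broad core update.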
Examples of algorithms
The article recalls some types of Google algorithms to illustrate the point:
- Algorithms like Panda, which help Google judge, filter, penalize, and reward content based on specific features (and Panda itself probably includes a myriad of other algorithms).
- The Penguin algorithm, which judges links and identifies spam. This algorithm certainly requires data from other pre-existing algorithms responsible for evaluating links, and probably some new algorithms in charge of recognizing the common features of spam links, so that the larger Penguin algorithm can do its job.
- Task-specific algorithms.
- Organizational algorithms.
- Algorithms responsible for collecting all the data and placing it in a context that produces the desired result: a SERP that users will find useful.
The role of entities for the algorithms
In this context, an element that is receiving more and more attention is the analysis of entities, which we know to be “singular, unique, well-defined and distinguishable things or concepts”.
Returning to the dinner analogy, Davies shows us which entities are involved: Davies himself, each member of his family, and the family unit as a whole (an entity in its own right); then the roast and every ingredient it is made of are each single entities, as are the Yorkshire pudding and the flour it takes to make it.
Why Google sees and uses entities
Google sees the world as a collection of entities, and this has been a decisive step in the process of improving search algorithms.
Continuing his reasoning, Davies explains that at his dining table there are "four individual entities with the status is eating and a number of entities with the status is being eaten: classifying us all in this way has many advantages for Google over simply evaluating our activities as a set of words". Each eating entity can be assigned the entities it has on its plate (roast beef, horseradish, green beans, mashed potatoes, Yorkshire pudding and so on).
Google uses this type of classification to judge a website. Every web page is an entity, just like every person sitting at the dining table.
The global entity representing them all (the "Davies" family) would be classified as "roast beef dinner", but each individual entity representing a person or a page is different. In this way, Google can easily classify and judge the interconnection of websites and of the world at large.
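The classification the article describes can be pictured as a tiny graph: each page or dish is an entity, and "part of" links tie the individual entities back to one main entity. The entity names and the `part_of` structure below are invented to mirror the dinner example, not any real Google data model.

```python
# Sketch: entities linked to a parent entity, as in the "Davies" table example.

entities = {
    "roast beef dinner": {"type": "meal"},
    "roast beef":        {"type": "dish",       "part_of": "roast beef dinner"},
    "yorkshire pudding": {"type": "dish",       "part_of": "roast beef dinner"},
    "flour":             {"type": "ingredient", "part_of": "yorkshire pudding"},
}

def parent_chain(name: str) -> list[str]:
    """Walk up the 'part_of' links to find every ancestor of an entity."""
    chain = []
    while "part_of" in entities.get(name, {}):
        name = entities[name]["part_of"]
        chain.append(name)
    return chain

# Even the flour resolves to the same main entity as the roast, so pages
# about either can be recognized as topically related.
flour_ancestors = parent_chain("flour")
```

This is why classifying by entity beats classifying by words: two pages with almost no vocabulary in common can still share the same ancestor entity.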
The work of algorithms with entities
Basically, search engines do not have to judge a single website: they must rank them all. So they focus not only on the Davies entity, classified as roast beef dinner, but also on the Robinson neighbors, whose entity concerns stir fry. Now, if an external entity known as Moocher wants to know where to eat, Moocher can rank the options based on their preferences or queries.
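The Moocher example amounts to ranking whole-site entities against the entities in a query. A toy version, with invented hosts, dinners and a deliberately naive overlap score, might look like this:

```python
# Sketch: rank "hosts" (sites) by how many queried entities they contain.

dinners = {
    "Davies":   {"roast beef", "gravy", "yorkshire pudding", "green beans"},
    "Robinson": {"stir fry", "rice", "soy sauce"},
}

def rank(query_entities: set[str]) -> list[str]:
    """Order hosts by overlap between their entities and the query's entities."""
    return sorted(
        dinners,
        key=lambda host: len(dinners[host] & query_entities),
        reverse=True,
    )

ordering = rank({"roast beef", "gravy"})  # "Davies" ranks first here
```

A real engine would weight each entity relationship rather than just count overlaps, but the shape of the problem, matching a searcher's entities against each site's entities, is the same.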
This is important for understanding search algorithms and how entities work: instead of just seeing what a website as a whole is about, Google understands "that my roast beef and my gravy are correlated and, in fact, come from the same main entity".
And this is also crucial to understand what it takes to rank, as the article explains. If Google understands that a web page is about roast beef, while another page that links to it is about beef dip (a typical Canadian sandwich made with leftover roast beef), it is very important that "Google knows that roast beef and beef dip derive from the same main entity".
Applications to determine the relevance of the pages
In this way, algorithms “can assign relevance to this link based on the connection of these entities”.
Before the idea of entities entered Search, engines could assign relevance only on the basis of parameters such as keyword density, keyword proximity and other easily interpretable, and easily manipulable, signals.
Entities are much more difficult to manipulate: “either a page is about an entity, or it is not related”.
By crawling the Web and mapping the common ways in which entities relate, search engines can predict which relationships should have the most weight.
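One simple way to picture how crawling can surface entity relationships is co-occurrence counting: the more often two entities appear together across pages, the more weight a relationship, and a link, between them might deserve. The pages and counts below are invented; real systems use far richer signals than raw co-occurrence.

```python
# Sketch: estimate entity relationship strength from co-occurrence on pages.
from collections import Counter
from itertools import combinations

# Hypothetical crawl: each page is the set of entities detected on it.
pages = [
    {"roast beef", "gravy", "beef dip"},
    {"roast beef", "beef dip"},
    {"roast beef", "green beans"},
    {"stir fry", "rice"},
]

cooccurrence = Counter()
for page in pages:
    for a, b in combinations(sorted(page), 2):
        cooccurrence[(a, b)] += 1

def relation_weight(a: str, b: str) -> int:
    """More co-occurrences across the crawl -> stronger assumed relationship."""
    return cooccurrence[tuple(sorted((a, b)))]

strong = relation_weight("roast beef", "beef dip")  # 2
weak = relation_weight("roast beef", "stir fry")    # 0
```

In this toy model a link from a beef dip page to a roast beef page would inherit a high relationship weight, while a link from a stir fry page would not, which is exactly the kind of judgment the article attributes to entity-aware algorithms.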
How algorithms work
Ultimately, it is important to understand how algorithms work to add context to what we are experiencing or reading.
When Google announces an algorithm update (as in the case of the upcoming Google Page Experience), what is updated is probably a small piece of a very large puzzle; broad core updates, instead, are significant and far-reaching changes to Google's basic engine, a sort of periodic tune-up to keep the results effective.
Entering into this perspective can help to interpret “what aspects of a site or the world are adapted in an update and how this adaptation fits the great goal of the engine”.
The relevance of entities
In this sense, it is crucial to understand how important entities are in search algorithms today: they have enormous relevance, destined to grow, and are based in turn on algorithms that identify and recognize the relationships between them.
To name a few benefits, "knowing this mechanism is useful not only to understand which content is valuable (that is, the content closest to the entities we are writing about), but also to know which links will probably be judged more favorably and weighted more heavily".
Everything is based on search intent
Search algorithms work "as a vast collection of other algorithms and formulas, each with its own purposes and tasks, to produce results that a user will be satisfied with," writes Davies in closing.
This vast system includes algorithms specifically designed to understand entities and how entities relate to each other, in order to provide relevance and context to other algorithms.
Moreover, according to the expert, "there are algorithms to monitor just this aspect of the results and make changes where it is believed that the ranked pages do not meet the search intent, based on the way users interact": in other words, there are algorithms that analyze search behaviour, as we noted when talking about the search journey.
Because, as Mountain View's public voices often remind us, Google's task is to provide accurate answers to users.