Algorithms, penalties and quality on Google: how to understand traffic drops

It has been a little over a month since Google launched its latest algorithm update, the December 2020 Core Update, and many are still struggling to analyze its effects and to lift their sites back up after a decline. But how do you really know whether traffic is dropping because of an update or because of other factors, such as manual actions and penalties? A former Googler walks us through all these elements, because only by knowing their differences and the consequences they generate can you react with the right strategy.

Understanding how algorithms and manual actions work

Our guide in this complex field is Pedro Dias, who worked on the Google Search Quality team between 2006 and 2011 and who, in an article published on Search Engine Land, stresses that his perception may now be outdated, since things at Google can change at a fairly fast pace.

In any case, thanks to his work experience at “the most sought-after input box on the Web”, Dias offers useful and interesting insights into “what makes Google tick and how Search works, with all its nuts and bolts”, analyzing “this tiny input field and the power it exerts on the Web and, ultimately, on the lives of those who manage a website”, so as to clarify the concept of penalties and recognize the various forms and causes that can make a site “collapse” on Google.

Google’s algorithms are like recipes

“It’s no surprise that anyone who has a presence on the Web usually holds their breath every time Google decides to make changes to its organic search results”, writes the former Googler, who then recalls that, “being primarily a software engineering company, Google aims to solve all its problems at scale”, also because it would be “virtually impossible to solve the problems that Google faces exclusively with human intervention”.

To Dias, “algorithms are like recipes: a set of detailed instructions in a particular order that aim to complete a specific task or solve a problem”.

Smaller algorithms are better at solving one problem at a time

The probability that an algorithm “produces the expected result is inversely proportional to the complexity of the task it has to complete”: this means that, “more often than not, it is better to have several (smaller) algorithms that solve a (larger) complex problem by breaking it down into simple sub-tasks, rather than a single gigantic algorithm that tries to cover all the possibilities”.
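To make the decomposition idea concrete, here is a minimal, purely illustrative sketch (not Google’s code; every function name, heuristic and weight is hypothetical): a page-quality estimate built from several small, single-purpose scorers that are easy to test and improve independently, instead of one monolithic algorithm.

```python
# Illustrative only: hypothetical single-purpose scorers composed into one pipeline.

def content_depth_score(page: dict) -> float:
    # Reward pages with substantial text (invented heuristic).
    return min(len(page.get("text", "")) / 2000, 1.0)

def link_trust_score(page: dict) -> float:
    # Reward pages with a healthy share of trusted inbound links (invented heuristic).
    links = page.get("inbound_links", 0)
    trusted = page.get("trusted_links", 0)
    return trusted / links if links else 0.0

def freshness_score(page: dict) -> float:
    # Decay the score as content ages, over a five-year window (invented heuristic).
    return max(0.0, 1.0 - page.get("age_days", 0) / (365 * 5))

# The "large" problem (estimating page quality) becomes a weighted sum of small sub-tasks.
SCORERS = [(content_depth_score, 0.5), (link_trust_score, 0.3), (freshness_score, 0.2)]

def page_quality(page: dict) -> float:
    return sum(weight * scorer(page) for scorer, weight in SCORERS)

print(page_quality({"text": "x" * 1500, "inbound_links": 10,
                    "trusted_links": 6, "age_days": 200}))
```

Each small scorer can be measured, debugged and replaced on its own, which is exactly the advantage Dias attributes to breaking a complex problem into simple sub-tasks.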

As long as there is an input, the expert continues, “an algorithm will work tirelessly, returning what it was programmed to return; the scale at which it operates depends only on the available resources, such as storage, processing power, memory and so on”.

These are “quality algorithms, which are often not part of the infrastructure”; alongside them there are “infrastructure algorithms, which make decisions about how content is crawled and stored, for example”. Most search engines “apply quality algorithms only at the moment search results are served: the results, that is, are evaluated qualitatively only at serving time”.

Google’s algorithmic updates

At Google, quality algorithms “are seen as filters that aim to surface good content, looking for quality signals throughout the Google index”; these signals are often computed at page level for all websites and can then be combined, producing “directory- or hostname-level scores, for instance”.
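As a purely hypothetical illustration of that last point (page-level scores rolled up to a higher level), and not a description of how Google actually combines signals, a sketch might simply average invented page scores per hostname:

```python
from collections import defaultdict
from statistics import mean
from urllib.parse import urlparse

# Hypothetical page-level quality scores keyed by URL (values are made up).
page_scores = {
    "https://example.com/blog/post-1": 0.82,
    "https://example.com/blog/post-2": 0.74,
    "https://example.com/shop/item-9": 0.31,
}

def host_level_scores(scores: dict) -> dict:
    """Combine page-level scores into one score per hostname (a plain mean as a stand-in)."""
    by_host = defaultdict(list)
    for url, score in scores.items():
        by_host[urlparse(url).hostname].append(score)
    return {host: mean(values) for host, values in by_host.items()}

print(host_level_scores(page_scores))  # e.g. {'example.com': 0.623...}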

For website owners, SEOs and digital marketing experts, in many cases “the influence of algorithms can be perceived as a penalty, especially when a website does not fully meet all the quality criteria and Google’s algorithms decide to reward other, higher-quality websites instead”.

In most of these cases, what common users see is “a drop in organic performance”, which does not necessarily mean the site has been “pushed down”, but more likely that “it has stopped being evaluated incorrectly, which can be positive or negative”.

What is quality

To understand how these quality algorithms work, we must first understand what quality is, explains Dias: as we often say about quality articles, this value is subjective, it is “in the eye of the beholder”.

Quality “is a relative measure within the universe in which we live; it depends on our knowledge, experience and environment”, and “what is quality for one person might well not be for another”. We cannot “tie quality to a simple binary process without context: for example, if I’m dying of thirst in the desert, do I care if a bottle of water has sand on the bottom?”, adds the author.

For websites there is no difference: quality basically means providing “Performance over Expectation” or, in marketing terms, a “Value Proposition”.

Google’s evaluation of quality

If quality is relative, we need to try to figure out how Google can determine what meets its values and what does not.

In fact, Dias says, “Google does not say what is and what is not quality: all the algorithms and documentation that Google uses for its webmaster guidelines are based on feedback and real user data”.

We also said this when talking about the search journey: when “users perform searches and interact with websites in the index, Google analyzes their behavior and often runs recurring tests, in order to ensure that it stays aligned with their intentions and needs”. This process “ensures that the guidelines Google issues for websites align with what search engine users want, and are not simply what Google unilaterally wants”.

Algorithm updates chase users

This is why Google often says that “algorithms are made to chase users”, the article reminds us, and why it also invites sites to “chase users instead of algorithms, so as to move in the same direction Google is moving”.

However, “to understand and maximize a website’s potential to stand out, we should look at our websites from two different perspectives: Service and Product”.

The site as a service and as a product

The site as a service

When we look at a website from a service point of view, we should first analyze all the technical aspects involved, from code to infrastructure: for example, among many other things, “how it is designed to work, how technically robust and consistent it is, how it manages communication with other servers and services, and then the integrations and front-end rendering”.

This analysis alone is not enough, because “technical frills do not create value where and if value does not exist”, Pedro Dias underlines; at most “they add value and make any hidden value shine at its best”. For this reason, the former Googler advises, “you should work out the technical details, but also consider looking at your website from the product point of view”.

The site as a product

When we look at a website from a product point of view, “we should aim to understand the experience users have with it and, ultimately, what value we are providing to distinguish ourselves from the competition”.

To make these elements “less ethereal and more tangible”, Dias resorts to a question he asks his customers: “If your site disappeared from the Web today, what would your users miss that they could not find on any competitor’s website?”. It is from the answer that you understand “whether you can aim to build a sustainable business strategy on the web”, and to help us grasp these concepts he provides this image: Peter Morville’s User Experience Honeycomb, amended to include a reference to the E-A-T concept found in Google’s quality rater guidelines.

The User Experience honeycomb

Most SEO professionals examine in depth the technical aspects of UX, such as accessibility, usability and findability (which indeed concern SEO), but tend to overlook the more strategic, qualitative aspects such as Utility, Desirability and Credibility.

At the center of this honeycomb is Value, which can be “fully achieved only when all the other surrounding factors are satisfied”: “applied to your web presence, this means that, unless you look at the whole experience holistically, you will miss the main goal of your website, which is to create value for you and your users”.

Quality is not static, but rather evolving

To further complicate things, there is the fact that “quality is a moving target”: it is not static, and therefore a site that wants to “be perceived as quality must provide value, solve a problem or meet a need” consistently over time. Similarly, Google itself must evolve to keep ensuring the highest effectiveness of its responses, which is why it “constantly runs tests, pushing quality updates and algorithm improvements”.

Quality is not static

As Dias says, “if you launch your site and never improve it, over time your competitors will eventually catch up with you, improving their site’s technology or working on the experience and value proposition”. Just as old technology becomes obsolete and deprecated, “over time even innovative experiences tend to become commonplace and most likely stop exceeding expectations”.

To clarify the concept, the former Googler gives an immediate example: “In 2007 Apple conquered the smartphone market with a touchscreen device, but nowadays most people won’t even consider a phone that doesn’t have a touchscreen, because it has become a given and can no longer be used as a competitive advantage”.

SEO, too, cannot be a one-off action: it is not that “after optimizing the site once, it remains permanently optimized”; rather, “any area that supports a company must improve and innovate over time to remain competitive”.

Otherwise, “when all this is left to chance, or we do not pay the attention needed to ensure that all these features are understood by users, that is precisely when sites start to run into organic performance problems”.

Manual actions can complement algorithmic updates

It would be naive to “assume that algorithms are perfect and do everything they should do impeccably”, Dias admits; in his view, the great advantage “of humans in the battle against machines is that we can face the unexpected”.

We have “the ability to adapt and understand abnormal situations, and understand why something can be good even if it might seem bad, or vice versa”. That is, “human beings can deduce context and intention, while machines are not so good at that”.

In the field of software engineering, “when an algorithm flags something it should not have, or misses something it should have caught, it produces what is referred to respectively as a false positive or a false negative”; to apply the right corrections, it is necessary to identify false positives and false negatives in the output, a task that is often best performed by humans, and therefore engineers usually “set a confidence level (threshold) that the machine should consider before requesting human intervention”.
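A minimal sketch of that threshold idea, with invented values and labels that do not reflect Google’s actual systems: decisions the classifier is confident about are automated, while low-confidence cases, the ones most likely to be false positives or false negatives, are queued for a human analyst.

```python
# Hypothetical spam classifier output: a spam probability per URL (all values invented).
CONFIDENCE_THRESHOLD = 0.90  # invented value: below this, a human should decide

def route_decision(spam_probability: float) -> str:
    # Confidence is how far the prediction is from the undecided 50/50 point.
    confidence = max(spam_probability, 1 - spam_probability)
    if confidence >= CONFIDENCE_THRESHOLD:
        # The machine is confident enough to act on its own.
        return "spam" if spam_probability > 0.5 else "clean"
    # Ambiguous case: a false positive/negative is more likely, ask a human analyst.
    return "human_review"

for url, probability in [("a.example", 0.97), ("b.example", 0.62), ("c.example", 0.04)]:
    print(url, route_decision(probability))
```

In this toy setup the clear-cut cases are handled automatically, and only the ambiguous middle band is escalated, which is the role Dias describes for human reviewers.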

When and why a manual action is triggered

Inside Search Quality there are “teams of people who evaluate the results and look at websites to make sure that the algorithms work correctly, but also to intervene when the machine is wrong or cannot make a decision”, reveals Pedro Dias, who then introduces the figure of the Search Quality Analyst.

The task of this Search Quality Analyst is to “understand what is in front of you, examine the data provided and make judgements: these judgements can be simple, but they are often reviewed, approved or rejected by other analysts globally, in order to minimize human bias”.

This activity frequently results in static actions that aim, among other things, to:

  • Create sets of data that can then be used to train algorithms;
  • Address specific, impactful situations where the algorithms have failed;
  • Signal to website owners that a specific behavior does not fall within the quality guidelines.

These static actions are often referred to as manual actions, and they can be triggered for a wide variety of reasons, although “the most common goal remains to counter manipulative intent that has somehow managed to successfully exploit a flaw in the quality algorithms”.

The disadvantage of manual actions “is that they are static, not dynamic like algorithms: algorithms work continuously and react to changes on websites, based only on recrawling or on improvements to the algorithm”. On the contrary, “with manual actions the effect will remain for as long as established (days / months / years) or until a reconsideration request is received and processed successfully”.

Differences between algorithmic impact and manual actions

Pedro Dias also provides a useful summary comparing algorithms and manual actions:

Algorithms
– Aim to resurface value
– Work at scale
– Dynamic
– Fully automated
– Indefinite duration

Manual actions
– Aim to penalize behavior
– Address specific scenarios
– Static + semi-automated
– Defined duration (expiry date)

Before applying any manual action, a Search Quality Analyst “must consider what they are dealing with, assess the impact and the desired outcome”, answering questions such as:

  • Does the behavior have manipulative intent?
  • Is the behavior blatant enough?
  • Will the manual action have an impact?
  • What changes will that impact produce?
  • What am I penalizing (a widespread or an isolated behavior)?

As the former Googler explains, these are aspects that “must be properly weighed before even considering any manual action”.

What to do in the event of manual action

Since Google is “increasingly moving towards algorithmic solutions, exploiting artificial intelligence and machine learning both to improve results and to combat spam, manual actions will tend to fade and eventually disappear completely in the long run”, Dias says.

In any case, if our website has been hit by a manual action, “the first thing you need to do is understand what behavior triggered it”, referring to “Google’s technical and quality guidelines and evaluating your website against them”.

This is a job to be done calmly and accurately, since “it is not the time to let haste, stress and anxiety take the lead”; it is important to “collect all the information, clean up the site’s errors and problems, fix everything and only then send a reconsideration request”.

How to recover from manual actions

A widespread but mistaken belief is “that when a site is hit by a manual action and loses traffic and rankings, it will return to the same level once the manual action is revoked”. That ranking, in fact, was (also) the result of tactics that are not allowed (which caused the penalty), so it makes no sense to expect that, after the cleanup and the revocation of the manual action, the site will return exactly to where it was before.

However, the expert points out, “any website can recover from almost any scenario”, and cases where a property is considered “irrecoverable are extremely rare”. It is important, though, that recovery begins with “a full understanding of what you are dealing with”, with the awareness that “manual actions and algorithmic problems can coexist” and that “sometimes you won’t start to see anything until you set priorities and solve all the problems in the right order”.

In fact, “there is no easy way to explain what to look for and the symptoms of each individual algorithmic problem”, and therefore “the advice is to think about your value proposition, the problem you are solving or the need you are responding to, without forgetting to ask your users for feedback, inviting them to express opinions about your business and the experience you provide on your site, or how they expect you to improve”.

Takeaways on drops caused by algorithmic updates or manual actions

In conclusion, Pedro Dias leaves us four takeaways that summarize his (extensive) contribution to the cause of sites penalized by algorithmic updates or manual actions.

  1. Rethink your value proposition and your competitive advantage, because “having a website is not in itself a competitive advantage”.
  2. Think of your website as a product and innovate it constantly: “if you don’t move forward you will be overtaken, while successful websites improve and evolve continuously”.
  3. Research your users’ needs through User Experience: “the priority is your users, and then Google”, so it is useful to “talk and interact with your users and ask for their opinions to improve critical areas”.
  4. Technical SEO is important, but on its own it will not solve anything: “if your product / content has no appeal or value, it does not matter how technically optimized it is”, and it is therefore crucial not to lose sight of the value proposition.
