Fact checking: what it is, how it works, and Big Tech’s choices
The fight against disinformation has long been at the center of global tensions involving technology companies, governments and civil society. Born as a response to the growth of fake news, in a context deeply marked by political polarization and increasingly fragmented access to information, fact checking has emerged as an indispensable tool for preserving the quality of public debate and ensuring more reliable information: from analyzing viral news stories on social media to verifying political statements and problematic content during public health emergencies, it has helped correct erroneous narratives and nurture trust in an increasingly complex information ecosystem. But now there appears to be a new tipping point, following recent decisions by digital giants such as Meta and Google to scale back or eliminate fact-checking programs, decisions that have reignited the debate over the trustworthiness of online information and the responsibilities platforms have toward their users. These changes come at a critical time, caught between accusations of censorship and the need to ensure effective moderation against disruptive disinformation, and they raise profound questions about the evolution of digital communication. What will be the implications of reducing centralized fact checking? And what alternatives will remain to stem the expansion of fake news? Analyzing the role and challenges of fact checking thus becomes essential to understanding what is changing and what future patterns may emerge.
What fact checking is
Fact checking is a structured and methodical process used to ensure the accuracy of information. Its primary function is to protect the public from misinformation, increasingly common now that news and content spread at extraordinary speed, often without any prior verification, by offering concrete tools to discern credible data from false or manipulated claims. Through this process, statements, images, videos, or statistics are analyzed with the goal of confirming or refuting the content disseminated.
Its origin is rooted in traditional journalism, when newsrooms included professionals responsible for checking news in detail before it was published. With the evolution of the media landscape and the spread of online publishing, fact checking has taken on new meanings and applications, transforming into a practice aimed at countering the consequences of fake news and targeted disinformation campaigns. It is no longer the sole preserve of newsrooms or individual writers, but involves independent entities, teams of experts and technology platforms, often in collaboration with each other.
What characterizes modern fact checking is a combination of manual and technological verification: on the one hand, professional fact checkers analyze information with scientific rigor; on the other, algorithms, often integrated with artificial intelligence, accelerate the recognition of disinformation trends and provide support through automated data analysis.
This process is also relevant for ensuring the quality of public debate, especially within democracies, where correct information is a pillar for the formation of informed opinion. The success of fact checking, however, depends not only on the accuracy with which it analyzes content, but also on its ability to educate users to independently recognize indicators of trustworthiness, strengthening their critical sense of what they read, share, and believe.
What fact checking means: definition and overview
Fact checking is a methodical procedure that involves verifying and validating the accuracy of a statement to unmask any partially true, false, manipulated or decontextualized information.
It applies to a wide range of content: political statements, news articles, viral news, social media posts, and even visual material such as photos and videos. It is not exclusive to the world of journalism, but has expanded to areas such as politics, public health and academic investigations, where preserving data integrity is essential.
The process relies on rigorous analysis to determine whether a piece of information is true, false, partially true, or manipulated. Professionals in this field make use of reliable sources and verifiable data, such as official documents, scientific research, and recognized media articles, combined with specific techniques, such as image analysis to uncover visual manipulations, to debunk half-truths and outright lies, providing the public with the necessary context to better understand the facts in question. The end result does not simply disprove or confirm a piece of information, but aims to provide a clear and contextualized picture that helps the public understand the implications of what is being verified.
Why fact checking is relevant
The importance of fact checking lies not only in its technical ability to detect false information, but more importantly in its influence on the health of democracies and the mechanisms of social cohesion. In recent years, fake news, due to its viral and highly divisive nature, has shown that it can profoundly alter perceptions of collective reality; the 2016 U.S. elections and the proliferation of fake news during the COVID-19 pandemic are concrete examples of how distorted narratives can have tangible consequences.
When implemented with scientific criteria and rigorous standards, the fact-verification process protects not only information, but also public dialogue, because accurate information becomes a strategic tool for governments and citizens alike. Without reliable verification mechanisms, the risk is to give way to a “post-truth society”, where the manipulation of emotions outweighs the appeal to rationality.
In short, the value of fact checking goes beyond mere verification of information: it serves as a bulwark against manipulation, increasing the public’s resilience in distinguishing between facts and lies and strengthening the pillars of transparency and information ethics. Moreover, public institutions, digital platforms and citizens can look to fact checkers as an impartial and methodical reference point capable of promoting evidence-based dialogue.
The intervention of fact checkers is also crucial in holding digital platforms, which are often accused of fostering disinformation, accountable. The verification tools are not so much aimed at censorship as at providing the public with the necessary context to lucidly understand the content, pointing out any distortions. Beyond the specific moment, the long-term goal remains clear: to stimulate a more resilient and knowledgeable information ecosystem, in which each individual can develop autonomous criteria to distinguish what is reliable and what is not.
Who are fact checkers
Fact checkers are an indispensable bridge between information and the public. They are professionals, independent organizations or teams embedded in digital media, whose sole job is to analyze, verify and, when necessary, refute potentially false or misleading content. They are not simply “error correctors,” but guarantors of transparency and reliability in a media ecosystem that is increasingly fragmented and prone to manipulation. In addition to their traditional role, fact checkers act as allies of social platforms, which often rely on them to implement external verification systems.
Significant collaborations between fact checkers and actors such as Meta and Google have led to important projects to counter misinformation, although recent Big Tech decisions have called into question the effectiveness and future of these partnerships.
Who does fact checking: independent organizations and virtuous examples in Italy
Independent fact checking organizations are the main players in the field, recognized for their transparency, impartiality and rigorous methodology. These bodies operate according to standards defined by global networks such as the International Fact-Checking Network (IFCN), which ensures that members adhere to principles of editorial independence and accuracy. Requirements for being accredited by the IFCN include willingness to make sources public, adherence to nonpartisan rules, and regular review by independent peers.
In Italy, projects such as Facta.news and Pagella Politica represent excellent examples of local fact checking. Facta.news focuses on debunking viral news frequently spread on social media, combining an educational approach with detailed explanations. Pagella Politica, on the other hand, specializes in verifying statements by politicians and institutions, providing scores based on rigorous analysis criteria.
Both organizations work in collaboration with global projects and international digital platforms, such as Meta, which included their interventions in its Third-Party Fact-Checking Program, now dismantled in the United States. This shows how local entities are also essential to counter widespread and specific disinformation, which is often not covered by large international networks.
Independent organizations never act in isolation: many collaborate with each other to share methodologies, resources and research. This collective approach enables them to address broad phenomena such as disinformation campaigns orchestrated by foreign states or ideological activists, thereby increasing the effectiveness of their interventions.
Transparency and rigorous commitment to the dissemination of verified information make independent fact checking organizations the vital core of the fight against global disinformation.
How fact checking works in practice
Fact checking is a structured process that combines rigorous methodologies, timely analysis and the use of reliable sources. Before arriving at conclusions, fact checkers follow a specific process designed to ensure maximum accuracy and transparency; although the ultimate goal is to confirm or refute a piece of information, the main value lies in the methodical approach that leads to providing the public not only with a verdict, but also with context and verifiable evidence.
The method is divided into distinct phases, each designed to address a particular aspect of verification. This approach allows for the detection of any errors, manipulation, or intentional misinformation, returning contextualized and reliable information to users.
The stages of the fact-checking process
As mentioned, the fact-checking process is divided into several operative steps, each of which helps to ensure a meticulous and unbiased analysis of the claims being checked.
- Identification of the claim to be checked
Not everything is a matter of fact checking. Fact checkers carefully select statements that have a significant impact on the public or are likely to influence important decisions, such as election campaigns, health issues, or events of international significance. This stage also involves an assessment of urgency: the higher the spread of potentially false information, the higher the priority given to its verification.
The guiding criterion during this phase is to focus efforts on news that is capable of generating amplified reactions, avoiding dispersing resources on minor or collectively irrelevant content.
- Collection of reliable data
The second phase is devoted to researching and compiling the sources needed to verify the information. This is detailed work that relies on:
- Official documents (laws, regulations, public budgets).
- Scientific research published in accredited contexts, such as peer-reviewed journals or institutional reports.
- Recognized and independent journalistic sources, also selected on the basis of their objectivity and history of reliability.
Essential at this stage is transparency of sources: any verification loses value if there is a lack of concrete documentation on which to base the check. Every piece of data is archived so that it can be consulted later, either by the fact checker or by the public.
- Cross-checking between sources
At this stage, fact checkers compare the information collected, looking for consistency, discrepancies or omissions. A true news story must find support in multiple, concordant sources, while inconsistencies in content or interpretations often offer useful clues to reveal manipulations or falsehoods.
A practical example is image analysis: a piece of visual content shared online might be authentic but decontextualized. Through tools such as Google reverse image search or TinEye, fact checkers trace back to the original source, verifying whether the shot is recent or taken from a past event.
Cross-referencing independent sources is a necessary step: even if a single source turns out to be reliable, fact checking relies on finding parallel confirmations to minimize the possibility of error.
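As a toy illustration of this rule, the sketch below treats a claim as corroborated only when enough independent sources confirm it. The outlet names, owners and threshold are hypothetical placeholders, not an operational standard.

```python
# A toy sketch of the cross-checking rule described above: a claim counts
# as corroborated only when several *independent* sources confirm it.
# All names and the threshold below are hypothetical placeholders.
def corroborated(reports, min_independent=3):
    """reports: list of (outlet, owner, confirms: bool) tuples."""
    confirming_owners = {owner for _, owner, confirms in reports if confirms}
    # Outlets with the same owner count once: what matters is independence,
    # not the raw number of articles repeating the same claim.
    return len(confirming_owners) >= min_independent

reports = [
    ("Outlet A", "Group 1", True),
    ("Outlet B", "Group 1", True),   # same owner as A: not independent
    ("Outlet C", "Group 2", True),
    ("Outlet D", "Group 3", False),
]
print(corroborated(reports))  # False: only two independent confirmations
```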
- Publication of the result
The last step does not end with a simple “true” or “false.” Each verification is accompanied by a detailed explanation of the method used, the sources consulted and the steps followed to reach a conclusion. This approach makes the process transparent and accessible, allowing the public to understand not only the result but also the reasons behind it.
Transparency in the method of publication is crucial: a fact check produces real value only when anyone can replicate the analysis, verify the sources, and independently assess the reliability of the result. A common practice is to include direct links to primary sources, giving the reader the opportunity to investigate for themselves.
The goal, in addition to the immediate correction of inaccurate information, is to educate users in the critical and informed use of information. In this way, fact checking becomes not only a tool for containing fake news, but also an ally in improving the quality of public debate.
Tools and technologies of fact checking
Modern fact checking is not limited to manual checking of information: it takes advantage of a wide range of tools and technologies to speed up the work and increase the accuracy of the results. Techniques such as reverse image search, specialized databases, and content analysis software make it possible to trace the origin of a piece of information, verify its authenticity, and contextualize it effectively. These tools provide essential support for identifying possible manipulation and countering misinformation in a quick and targeted manner.
- Google Reverse Image Search: discovering the origin of an image
One of the most popular tools used by fact checkers is Google Reverse Image Search, which allows you to search by images instead of keywords. Simply uploading an image or pasting its URL takes you to a list of sites that have published it, indicating its original source. This is especially useful for detecting visual manipulation or decontextualization. For example, a photo seemingly related to a recent event may have been taken years earlier in a completely different context.
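To make the underlying idea concrete, here is a minimal sketch, assuming the Pillow and imagehash Python libraries: reverse image search works because visually similar images produce similar perceptual fingerprints, so a “recent” viral photo can be matched against an archived original even after resizing or recompression. The file names are hypothetical.

```python
# Minimal sketch of image matching via perceptual hashing, the idea that
# underlies reverse image search. Assumes: pip install pillow imagehash.
from PIL import Image
import imagehash

def likely_same_image(path_a: str, path_b: str, threshold: int = 8) -> bool:
    """Return True if the two files are probably the same picture."""
    hash_a = imagehash.phash(Image.open(path_a))  # 64-bit perceptual hash
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # Hamming distance between the fingerprints
    return distance <= threshold  # small distance => near-duplicate

# Hypothetical example: compare a viral photo with an archived original.
if likely_same_image("viral_post.jpg", "archive_2015.jpg"):
    print("Likely the same image: the 'recent' photo may be recycled.")
```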
- NewsGuard: a database for source credibility
NewsGuard provides an assessment of news outlets and digital sources based on criteria such as transparency, accuracy, and editorial independence. This is a key resource for distinguishing trustworthy platforms from those that are suspect or dedicated to misinformation. In addition to qualitative analysis, NewsGuard maintains a database of “disinformation fingerprints,” identifying recurring narratives and content that has already been debunked. This tool is particularly valued for its systematic approach to categorizing sites and content, making it easier to detect patterns of propaganda or orchestrated campaigns.
- Advanced analytics software and artificial intelligence
Advances in artificial intelligence have led to the creation of software that can monitor large volumes of digital content. Machine learning algorithms can quickly identify fraudulent narratives, fake news and troll networks through analysis of text, images and videos. Tools such as TinEye or CrowdTangle (the latter discontinued by Meta in 2024), long used in fact checking, complement the technological landscape by supporting the detection of manipulated content and its tracking.
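As a rough illustration of how such software assists triage, the sketch below trains a tiny text classifier that flags claims resembling previously debunked ones for human review. It assumes the scikit-learn library, and the inline training data is invented for illustration; real systems learn from thousands of labeled claims.

```python
# A toy sketch of ML-assisted triage: a classifier trained on previously
# fact-checked claims scores new text so humans can prioritize review.
# Assumes: pip install scikit-learn. The dataset is an invented placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (claim text, 1 = previously debunked, 0 = verified)
claims = [
    ("miracle cure eliminates virus in 24 hours", 1),
    ("health ministry publishes updated vaccination schedule", 0),
    ("secret document proves election was rigged", 1),
    ("court releases official ruling on appeal", 0),
]
texts, labels = zip(*claims)

# TF-IDF features + logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# A high score routes the claim to a human fact checker first.
new_claim = "leaked memo shows miracle cure was hidden from the public"
score = model.predict_proba([new_claim])[0][1]
print(f"Disinformation-likeness score: {score:.2f}")
```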
Technological tools, therefore, form the backbone of modern fact checking, providing greater efficiency and reaching areas impossible to monitor manually. However, human input remains essential to add cultural context and interpret complex situations that software alone cannot decipher.
Current developments: fact checking under attack from Big Tech decisions
The scenario described so far, already fragile and complicated, is now in danger of being further shaken by profound changes, largely resulting from the strategic choices of large technology companies. Meta, Google and LinkedIn have recently announced sweeping changes to their approaches to countering misinformation, aiming to reduce or eliminate traditional fact-checking programs. These decisions come at a time when the role of digital platforms as arbiters of information is increasingly contested, and they usher in a highly volatile phase in the fight against fake news.
While Meta has decided to abandon its Third-Party Fact-Checking Program in the United States, introducing the decentralized Community Notes system, Google and LinkedIn have withdrawn their participation in fact-checking engagements in the European context, justifying the choice on economic and strategic grounds. The impact of these decisions is significant: they redefine not only the concept of moderation, but also the relationship of trust between platforms, users, and institutions.
Meta: from the end of fact checking to Community Notes
On January 7, 2025, Mark Zuckerberg officially announced the decision to end Meta’s fact-checking program in the United States. Active since 2016, this program had involved independent organizations certified by the International Fact-Checking Network (IFCN) to analyze and evaluate news spread on Facebook, Instagram and, more recently, Threads. The goal was to curb viral misinformation and provide users with clear context about the credibility of online content. However, with the announcement of the trial of the Community Notes system, the responsibility for verifying information will shift from experts to the users themselves.
Community Notes, inspired by the moderation model of X (formerly Twitter), is a participatory solution in which a community of users can comment and add context to controversial posts. Once voted on and approved by enough users with different perspectives, the notes become visible to the entire platform. According to Zuckerberg, this tool aims to promote greater transparency and balance the information ecosystem. However, the move has been met with strong criticism, highlighting both concerns about its effectiveness and a clear shift of responsibility from the platform itself to the user community.
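To illustrate the principle, here is a deliberately simplified sketch of the “agreement across different perspectives” rule: a note is approved only if raters from at least two viewpoint groups independently find it helpful. This is a toy model under assumed thresholds, not Meta’s or X’s actual algorithm (X’s published system uses matrix factorization over rating patterns).

```python
# Toy sketch of "bridging" approval: a note becomes visible only when
# raters from *different* viewpoint groups all rate it helpful.
# Group labels and thresholds are hypothetical, for illustration only.
from collections import defaultdict

def note_is_approved(ratings, min_per_group=2, min_avg=0.7):
    """ratings: list of (viewpoint_group, helpful: bool) pairs."""
    by_group = defaultdict(list)
    for group, helpful in ratings:
        by_group[group].append(helpful)
    if len(by_group) < 2:  # require raters from at least two perspectives
        return False
    for votes in by_group.values():
        if len(votes) < min_per_group:  # enough raters in every group
            return False
        if sum(votes) / len(votes) < min_avg:  # every group must agree
            return False
    return True

# Hypothetical ratings on one note, tagged by inferred viewpoint cluster.
ratings = [("A", True), ("A", True), ("A", True), ("B", True), ("B", True)]
print(note_is_approved(ratings))  # True: both clusters found it helpful
```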
Why Meta chose this path
Meta’s decision to abandon third-party fact checking can be attributed to several factors, both strategic and political.
- Free expression and inclusiveness (according to Meta’s position)
Zuckerberg stated that the move to Community Notes reflects a desire to encourage open debate. According to the CEO, labeling content as fake would often result in accusations of censorship, fueling further divisions. With Community Notes, Meta seeks to reduce this perception, instead encouraging direct empowerment of users and their interactions.
- Political pressures in the United States
Meta has long been caught in the crossfire between the two major U.S. political camps. The Biden administration had accused the platforms of failing to curb disinformation related to COVID-19, pushing for greater controls on content. With the newly elected Trump administration, the predominant accusation has shifted to the censorship of conservative views. Zuckerberg thus seems to be aligning himself with prevailing political demands, taking a more minimalist approach to moderation.
- Reducing operational costs
The verification process conducted by professional fact checkers involved large economic investments. Meta collaborated with several recognized fact checking organizations, including Reuters and The Associated Press, to monitor millions of pieces of content. Through Community Notes, the company aims to drastically reduce these costs by delegating responsibility to its users.
Limitations of the Community Notes model
While Meta’s proposed model promises to democratize moderation, and is presented as a tool that fosters plurality of opinion and the decentralization of information control, existing experiences, such as those gathered with X’s similar system, raise significant doubts among experts about the real effectiveness of Community Notes. The greatest risk is that this choice amplifies disinformation rather than reducing it: delegation to users may represent an escape from the institutional responsibilities of the platform, leaving people more vulnerable to the wave of fake or manipulated news.
- Slow process
Community Notes require a large number of users to approve or reject a note. This mechanism significantly slows down response time compared to professional fact checking, creating a delay that leaves fake content active and viral for days or even weeks before it is contextualized.
- Ineffectiveness in tracking complex misinformation
Manipulated posts cannot always be effectively contextualized by an undifferentiated mass of users. Sophisticated narratives or intentionally ambiguous content, such as that produced by state disinformation campaigns (Russia, China, etc.), are likely to go unnoticed or be downplayed, especially if users lack the technical skills to recognize them.
- Vulnerability to political manipulation
The model based on user perspectives lends itself to manipulation by organized groups that share a political or ideological agenda. As already observed with X, Community Notes can be gamed, i.e., exploited to support misleading narratives through coordinated campaigns, thus negating their originally intended function.
- Absence of authority
Where traditional fact checking involves qualified experts and transparent sources, Community Notes base their judgment on generalized consensus. This approach drastically lowers the reliability of the verification and discourages discerning users from treating it as a safe and authoritative reference.
Not just Meta: what’s happening to Google and LinkedIn fact checking
The announcement by Zuckerberg’s group seems to have paved the way for similar decisions by other Big Tech giants, which are radically changing their approach to fact checking. Key players include Google and LinkedIn, which recently announced their disengagement from initiatives to combat disinformation, particularly in the context of the Code of Practice on Disinformation strengthened by the European Union in 2022.
Specifically, Google (a giant among search engines and owner of the YouTube video platform) announced that it would not integrate fact-checking tools into search results or into content hosted on its video portal. The news dates back to the first weeks of 2025, just before the obligations under the Digital Services Act (DSA) came fully into effect. This disengagement marks a step backward from the commitments Google made when it signed the EU code, which obligated platforms to work with fact-checking organizations to label and analyze fake content. While initially participating in discussions with the European Commission, the company said that fact checking “is neither appropriate nor feasible” for its core products.
LinkedIn, a professional networking platform owned by Microsoft, has also decided to permanently abandon any fact-checking efforts. Although the platform is not traditionally seen as a primary source of news, problematic content has nonetheless increased there, particularly posts and re-shared articles related to political or economic issues. Despite this, the company said that fact checking activities do not align with the professional nature of the platform and are not a strategic priority.
These decisions signal a gradual but obvious retreat by major Big Tech from countering disinformation at a time when the European Union is trying to impose increasingly stringent standards with regulations such as the DSA. The disengagement of Google and LinkedIn sets a serious precedent, raising questions about the overall effectiveness of global initiatives to protect users from the uncontrolled spread of fake news.
Google and LinkedIn’s “no” to fact checking under the EU umbrella
Google and LinkedIn’s retreat comes in the context of the Code of Practice on Disinformation, an initiative strengthened by the European Commission in 2022 to push online platforms to collaborate with independent fact checkers and take concrete action against disinformation. While initially voluntary, the code is set to become legally binding with the Digital Services Act, which includes stringent obligations for Very Large Online Platforms (VLOPs), such as fast removal of false content and transparency in moderation methodologies.
In January 2025, Google made its withdrawal from the agreements official, communicating it directly to Renate Nikolay, the head of content and technology for the European Commission. The company said that incorporating verification tools directly into search results or YouTube videos would involve unsustainable complexity and would not fit the unbiased search model on which Google bases its reputation. LinkedIn, for its part, announced in parallel the termination of any moderation program related to fact checking, describing such activities as “not relevant to the platform’s profile”.
These steps backward complicate the implementation of the European standards, which were intended to create an across-the-board alliance against misinformation. Now, a significant part of online news control is at risk of remaining uncovered in the face of the challenges posed by propaganda, fake news, and manipulative content.
What are the motivations of the digital giants
The statements by Google and LinkedIn highlight a common strategy based on economic, operational and regulatory reasons, which aim to protect their respective business models without compromising platform efficiency.
- Incompatibility with business models
Google argues that integrating fact-checking tools into its search algorithms and YouTube would be “neither appropriate nor feasible” given the nature of its services. The company reiterated that its focus is on freedom of search and providing results that reflect diversity of opinion, maintaining a more neutral, exploratory approach in tools such as “About this image” and “About this result”, without explicit judgments about trustworthiness.
For LinkedIn, a platform with a professional focus, the inclusion of structured verification in user-generated content was not a significant priority. According to Microsoft, owner of LinkedIn, fact checking “does not fit into the core risk profile” of the platform, which is based more on interactions between individuals than on journalistic or viral content with collective impact.
- Regulatory complexity and operational costs
The demands made by the EU under the Code of Practice and the DSA involve investment in resources and infrastructure to monitor and moderate information on a pan-European level, while meeting local standards. For global companies like Google and LinkedIn, this represents an effort that they say the outcome does not justify, leading to a clear conflict between regulatory compliance and operational sustainability.
- Geopolitical pressures
Increasing European regulation, reinforced by the introduction of the DSA, has been perceived as an attempt to limit U.S. digital platforms’ control over information. In the context of U.S.-EU competition for digital sovereignty, Big Tech such as Google and LinkedIn seem intent on protecting their strategic priorities, avoiding constraints deemed too restrictive and favoring more flexible approaches in traditionally less regulated markets.
Criticism and consequences: what we really risk
The choices made by Meta, Google and LinkedIn highlight a global trend toward decentralizing responsibility for content moderation or, in the most extreme cases, loosening controls altogether. The consequences of these decisions are reflected on multiple levels, both in the short term and in the long term, with significant impacts on citizens, institutions and the media.
- Increased disinformation
Without specific systems dedicated to verifying information, platforms with millions of daily users risk becoming fertile ground for fake news and propaganda, often orchestrated by actors with specific goals. Disinformation is no longer corrected in a timely manner: this fosters cycles of uncontrolled sharing, intensifying the spread of false and manipulative content.
NewsGuard studies have already shown that, even with the support of fact checking, only 14 percent of disinformation from sources such as Russia and China was actually intercepted. With recent decisions, that figure may fall even further.
- Reduction in the reliability of digital platforms
The loss of professional support in verifying information undermines users’ trust in platforms such as Facebook, YouTube, and LinkedIn. This decline in credibility not only penalizes the end user, but risks damaging access to accurate information, leaving a gap that can be filled by less reliable alternative sources.
- Political and social risks
In contexts of political instability or on the threshold of crucial events, such as elections or international crises, disinformation can become a strategic tool for manipulation. The reduction of fact-checking tools increases the risk of foreign interference and social polarization, with possible repercussions for democratic stability.
Some regions, particularly developing countries, are especially vulnerable to these dynamics, as they are less equipped to deal with large-scale orchestrated campaigns. The disengagement of major digital players leaves open the possibility that informational manipulation will proliferate without any form of restraint.
- Regulatory consequences and geopolitical clashes
The decisions of companies like Google and LinkedIn could intensify the clash between Big Tech and the European Union. As the EU seeks to tighten control over disinformation through legislative means, such choices risk slowing or undermining the effectiveness of regulations, forcing European institutions to reevaluate their approaches.
What is the future of fact checking?
In sum, fact checking faces a number of challenges that expose its structural and operational limitations. Despite the progress made in recent years in the fight against disinformation, the current model has often proven insufficient to curb complex phenomena such as international propaganda campaigns and the rapid virality of fake news. The aforementioned study conducted by NewsGuard found that, even with verification systems in place, only 14 percent of content linked to foreign disinformation campaigns (from Russia, China and Iran) is actually labeled as fake or manipulated; this is compounded by structural difficulties, such as slow response and dependence on limited human resources compared to the vast amount of content produced online daily.
In this suboptimal scenario, the advent of new technologies and regulations is driving a rethinking of the verification model itself. Artificial intelligence algorithms, human oversight, and global regulations such as the Digital Services Act seek to propose complementary solutions, but their real effectiveness remains in question. For the future of fact checking, it will be crucial to address not only the approach to fake news, but also how to integrate technology, governance, and collaboration with digital platforms to arrive at systems that are faster, more extensive, and more reliable than the current ones.
Algorithms, AI and the human role
The introduction of artificial intelligence-based tools is transforming the capabilities and timing of fact checking. Advanced algorithms make it possible to analyze large amounts of content in real time, identifying patterns of misinformation, suspicious accounts, and propaganda narratives. For example, machine learning technologies can detect manipulated images, flag chatbot-generated text, and quickly verify the provenance of content. However, despite machine efficiency, human input remains indispensable.
The potential of artificial intelligence
Artificial intelligence is a powerful tool for handling the complexity of fake news. Among the most promising applications are:
- Predictive analytics: algorithms capable of identifying suspicious content before it goes viral.
- Semantic recognition: tools capable of understanding the meaning and context of a sentence, distinguishing between satirical statements, neutral statements and deliberate manipulation (see the sketch below).
- Media content processing: platforms that can sift through billions of images or videos to detect imperceptible changes and attribute them to an original source.
For example, software such as CrowdTangle (Meta’s content-monitoring tool, discontinued in 2024) has given fact checkers data on how content spreads and trends on social media, allowing them to act quickly to prevent further spread.
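As a concrete example of the semantic recognition mentioned above, the sketch below uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library to score how well a sentence fits labels such as satire versus factual claim, with no task-specific training. The sentence and label set are illustrative choices.

```python
# Minimal sketch of semantic recognition via zero-shot classification.
# Assumes: pip install transformers torch. The labels are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

sentence = "Scientists confirm the moon is made of artisanal cheese."
labels = ["satire", "factual claim", "deliberate manipulation"]

result = classifier(sentence, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")  # highest-scoring label comes first
```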
The irreplaceability of human intelligence
Despite technological advances, no algorithm can fully replicate the critical human ability to understand the cultural, value or political nuances of a piece of information. An AI can assess the consistency of a piece of data with other sources, but only a human fact checker can identify the context of a narrative or potential underlying biases.
Human presence is also crucial to ensure transparency and reliability. Fact checkers, with their documented and publicly accessible methods, act as guarantors of good faith, cementing people’s trust in the verification process. Intervening on polarizing or highly emotional topics requires sensitivity and judgment that only a human being can offer.
In the near future, the most effective model will be a hybrid one: combining powerful and efficient algorithms with expert control and supervision, creating a balance between automation and critical interpretation. This approach will be particularly effective in contexts where disinformation is constantly evolving and requires immediate but thoughtful reactions.
Europe’s position and the Digital Services Act
While the United States seems to be pushing toward decentralization of control with models such as Community Notes, Europe is taking a diametrically opposite approach based on strict and binding rules to counter disinformation. In this context, the Digital Services Act (DSA) becomes the benchmark for regulating the dissemination of online content and ensuring the accountability of platforms.
The DSA, passed by the European Union, represents one of the most ambitious pieces of legislation ever introduced to counter the risks of disinformation. This law imposes strict rules on Very Large Online Platforms, which include giants such as Google, Meta and Amazon, requiring:
- Transparency obligations: platforms must make public the methodologies used to detect and moderate fake news and legal but harmful content.
- Direct collaboration with fact checkers and research institutes: the EU wants to consolidate alliances with specialized entities to monitor and debunk misleading content on a large scale.
- Protection of democratic processes: special attention is paid to safeguarding elections and preventing political manipulation through social media.
- Strict penalties: companies that fail to comply with obligations can be fined up to 6 percent of their annual global turnover.
This framework represents an attempt not only to regulate content, but also to place direct responsibility on digital platforms, which have often presented themselves as neutral entities while having a strong impact on the flow of information.
Differences with the U.S. approach
Unlike Europe, the U.S. philosophy revolves around a more laissez-faire approach, justified by the importance placed on freedom of expression. Participatory models such as Meta’s Community Notes reflect this perspective, delegating the verification of information to the users themselves. However, this decentralization presents the risk of amplifying cultural, political or ideological bias, allowing organized groups to distort the true meaning of the debate.
Europe, by contrast, aims for centralized, rule-based control that balances freedom of expression with the need to safeguard the quality of information. This normative approach is particularly relevant in light of continental-scale elections, where disinformation can undermine political stability.
Challenges and prospects for Europe
Despite its ambitions, the DSA poses complex challenges to both European institutions and technology platforms. Implementing a uniform system in all member states requires significant resources and close collaboration between governments, civil society, and technology companies. In addition, the risk of geopolitical confrontation with U.S. Big Tech, which is increasingly reluctant to comply with European constraints, could slow the implementation of DSA rules.
However, the European regulatory approach is bound to influence the global debate, creating a replicable model in other regions of the world. The EU aims to demonstrate that effective regulation is not only possible, but necessary to ensure a safer and more responsible information ecosystem.
FAQ on fact checking: key issues and concerns on the topic
Fact checking is a process that, in recent years, has taken a central role in countering misinformation, especially in the digital environment dominated by social media and viral content. Despite its technical features and stated goals, many questions, doubts, and sometimes misunderstandings still exist around this practice, aggravated by the recent decisions of Big Tech like Meta and Google. This section brings together answers to frequently asked questions on the topic, clarifying doubts and offering tools to better understand the importance of this crucial activity for the health of public debate and the quality of information.
- What is meant by the term fact checking?
Fact checking is the process of verifying the accuracy of a statement, often contained in articles, public statements or social media posts, with the goal of determining whether it is true, false, manipulated or decontextualized. This practice is based on the analysis of reliable sources such as official documents, scientific studies or journalistic reports, and is aimed at reducing the spread of false or misleading information.
- What does fact checking mean?
The term fact checking literally means “verification of facts” and denotes an activity focused on ascertaining the truth of an information content. It covers not only traditional journalism, but also areas such as politics, social media, public health and academic investigations. The main goal is to protect the public from misinformation and promote greater transparency.
- When did fact checking originate?
Fact checking has distant origins, with roots in early 20th century journalism. Its formalization as a specific practice, however, emerged in the 1920s in American newsrooms such as Time, where dedicated roles checked content before publication. In recent years, the phenomenon has seen exponential growth, driven by the need to curb the fake news proliferating on digital platforms.
- Who are fact checkers and to whom do they answer?
Fact checkers are specialized professionals who work for independent organizations, traditional media or digital platforms with the task of analyzing and verifying information. For example, bodies such as the International Fact-Checking Network (IFCN) set standards of transparency and impartiality to which fact checkers must adhere. Although they may collaborate with companies such as Meta or Google, fact checkers maintain operational and methodological independence to avoid political or commercial interference.
- What task do fact checkers perform?
The task of fact checkers is to check the veracity of content disseminated online or offline, using verifiable sources and objective data. They analyze articles, political statements, social media posts and multimedia content to provide the public with a clear and detailed assessment of the accuracy of the information. They also explain their methods to enable users to understand and trust the results.
- What is the function of fact checking sites?
Fact checking sites are responsible for analyzing and verifying controversial claims or viral news, providing an accurate assessment of their degree of veracity. They also offer detailed explanations of the verification method, fueling a more transparent and accountable information ecosystem. Well-known examples include Facta.news in Italy or Snopes internationally.
- Is it true that fact checking is equivalent to censorship?
No, fact checking does not equate to censorship. This practice does not remove content or restrict freedom of expression, but it adds context and corrects inaccurate statements, giving users access to more informed knowledge. Accusations of censorship are often linked to misinterpretations of the role of fact checkers, who work to ensure transparency and accuracy without impeding debate.
- What is happening with fact checking on social media?
Social platforms are scaling back or transforming their approaches to fact checking. Meta, for example, has replaced its Third-Party Fact-Checking Program in the United States with a participatory system called Community Notes, while Google and LinkedIn have dropped support for fact checkers altogether, mostly for strategic and regulatory reasons. These changes raise concerns about effectiveness in countering misinformation.
- Will Meta eliminate fact checking in Europe?
At present, Meta has decided to keep the fact checking program in Europe, in part to comply with stringent European Union regulations such as the Digital Services Act (DSA). However, the introduction of the Community Notes system in the United States has raised questions about its future global approach. Should the U.S. model expand, the EU may object and take corrective measures.
- How can we independently verify a news item?
Independently verifying a news story requires some simple precautions:
- Check the source: is it reliable and experienced?
- Look for confirmations: has the same information been reported by other authoritative sources?
- Analyze details: do dates, names and pictures match the context?
- Use digital tools: Google reverse image search or the analysis of suspicious URLs are valuable techniques (see the sketch below).
Educating oneself to take a critical approach to information is the first step in becoming better able to resist misinformation.
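For the “analysis of suspicious URLs” step mentioned above, the following sketch applies a few cheap heuristics that flag lookalike domains. The heuristics and the trusted-domain list are illustrative choices, not an exhaustive or authoritative check.

```python
# A small sketch of suspicious-URL analysis: cheap heuristics that flag
# lookalike domains. Heuristics and allowlist are illustrative only.
from urllib.parse import urlparse

TRUSTED = {"reuters.com", "apnews.com", "bbc.com"}  # hypothetical allowlist

def suspicious_url(url: str) -> list[str]:
    host = urlparse(url).hostname or ""
    flags = []
    if any(label.startswith("xn--") for label in host.split(".")):
        flags.append("punycode label (possible lookalike characters)")
    if any(t in host and not host.endswith(t) for t in TRUSTED):
        flags.append("trusted name embedded in a different domain")
    if host.count(".") >= 3:  # deep subdomain nesting is a common trick
        flags.append("unusually nested subdomains")
    return flags

print(suspicious_url("http://bbc.com.breaking-news.example/article"))
# ['trusted name embedded in a different domain', 'unusually nested subdomains']
```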