Methodology

Myth Detector verifies false news identified in the media using open sources. In doing so, researchers follow the website’s Code of Conduct and this methodology.

Myth Detector’s Code of Conduct rests on seven fundamental principles: 1. Accuracy; 2. Impartiality; 3. Transparency of Sources; 4. Transparency of Methodology; 5. Correction; 6. Prevention of Discrimination; 7. Transparency of Organization and Financing.

The fact-checking process involves four stages:

  1. Selection
  2. Verification
  3. Evaluation
  4. The search for information about the source

1. Selection

The monitoring team regularly monitors Georgian TV, online, print and social media, as well as media outlets created by the Russian government (Sputnik-Georgia, Sputnik-Ossetia, Sputnik-Abkhazia). The monitoring focuses on revealing fake news, anti-Western messages and hate speech. In addition, Myth Detector’s audience can submit suspected fake content through a dedicated section of the website, Report Fake, or directly through our Facebook page.

On a daily basis, the editorial team of Myth Detector selects materials that may be false or manipulative from the monitoring database. In selecting such information from media monitoring materials and audience submissions, the editorial staff uses the following criteria:

  • How verifiable is the information reported by media/statement made by a source? Is it a fact or an opinion, forecast, or hypothesis?
  • Does it involve the manipulation of facts? Does it look like a half-truth and cherry-picking of facts?
  • How significant is the error?
  • How reliable is the source?
  • Has similar mis- or disinformation been spread before, and could the information be part of a coordinated and targeted campaign?
  • How newsworthy is the story?

Myth Detector only deals with claims, data and facts that are verifiable and can be debunked using tangible fact-based counterarguments. We do not verify opinions, forecasts, or hypotheses.

Myth Detector concentrates on topics that concern the welfare of society and/or individuals. When the error made by a source is not significant, viral or harmful, meaning that it could not have any possible consequence for the public, we will not debunk it, so as not to contribute to its amplification. When we deal with a claim that could cause serious damage to the public, such as claims related to public health, attacks, civil coexistence or natural catastrophes, we will try to reduce its spread rapidly.

As outlined in our code of conduct, “Myth Detector” will apply the same standards regardless of who made the original claim, or what side of the political spectrum it falls on. This is essential to ensure no personal, political, or confirmation biases.

2. Verification

Myth Detector verifies facts through open sources.

The first step is to deconstruct the identified claim to determine whether a story is unbalanced, which sources or facts it omits, or whether the facts presented in it are completely fabricated. Then, depending on the specifics, we contact the primary sources, trace the origin of the information, search for credible facts and data in the most up-to-date databases of official sources and/or, when needed, carry out technological identification of images, video or audio.

In the process of verification, Myth Detector follows the following media and information literacy guidelines:

2.1 Sources

Myth Detector links and documents all the evidence used in its articles so that readers can replicate the verification process themselves, except in cases where a source’s personal security could be compromised. Anonymous sources may only be used in such situations if the data they supply is supported by named sources or tangible evidence. Myth Detector fact-checks should disclose any possible conflict of interest or bias that could affect the investigation process.

The articles should include at least two, preferably more, sources to support any major conclusion about the truth of a claim, except in cases where only a single source can be used, such as an official document. Secondary sources, or those that analyze, nuance or criticize the primary source, can be used as extra context but should by no means be used as the foundation for a conclusion. In extraordinary circumstances where a fact-check bases all of its conclusions on secondary sources, this should be justified.

2.2 Confidentiality

The image and/or identity of the subjects of an investigation will be blurred when there is a reasonable concern for their safety. The name and image (when available) of the user who distributed the verified content will be blurred, along with any other element that enables the identification of the spreader of the false information (Example here) to avoid undue public scrutiny or overexposure. The identities of individuals who appear in user-shared content but whose identities are unrelated to the misinformation (see this example) or whose exposure could subject them to harassment or overexposure will be concealed as well.

If the user is a public figure, organization, website, or media outlet (Example here), or if the user repeatedly spreads false information (here is an example), its identity won’t be blurred. When dealing with repeated offenders, the article should include clear evidence that they have a history of spreading misinformation. This can be, but is not limited to, links to other fact-checks of debunked claims made by this particular user.

Personal information, such as telephone numbers, addresses or other ID information that could lead to the harassment of an individual, should be concealed. If such information is necessary for the debunk, we will partially obscure it, such as only showing part of a telephone number or ID card.
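The partial-obscuring step can be illustrated with a minimal sketch. The exact masking pattern below (keeping only the last few digits) is an assumption for illustration, not Myth Detector’s documented convention:

```python
def mask_phone(number: str, visible: int = 3) -> str:
    """Replace all but the last `visible` digits of a phone number with '*'.

    Non-digit characters (spaces, '+', dashes) are kept, so the format
    stays recognisable while the number itself is unusable.
    NOTE: the "last 3 digits" choice is an illustrative assumption.
    """
    out = []
    digits_seen = 0
    # Walk from the end so we know which digits fall in the visible tail.
    for ch in reversed(number):
        if ch.isdigit():
            digits_seen += 1
            out.append(ch if digits_seen <= visible else "*")
        else:
            out.append(ch)
    return "".join(reversed(out))
```

The same digit-by-digit approach extends to ID numbers or other numeric identifiers mentioned above.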

Similarly, when a minor’s image appears in a verification, their face should be blurred to preserve their privacy, unless they are a public figure. Their full name won’t be disclosed unless the minor is a well-known person or the name is necessary for the verification.

3. Evaluation

Based on fact-checking findings, the editorial team assigns the following types of violations:

  • Disinformation: A deliberately disseminated falsehood that has no factual basis and aims at deceiving the public. Disinformation is often used in relation to claims that are being disseminated as part of a broader attempt at tactical informational subversion (example here).
  • False Information (Misinformation): A false claim disseminated with no prior intention to deceive or mislead others (example here).
  • Manipulation: The facts/opinions/events are presented/interpreted in a way that creates a false perception among the public (example here).  
  • Visual Manipulation: A photo/video is falsified or doctored; as a rule, such images are accompanied by misleading captions (Example 1). A fragment of a film, advertisement, or video game is used to illustrate real rather than staged developments (Example 2). An authentic visual does not match the caption describing the event (Example 3).
  • Partly False: While the disseminated claim entails some elements of truth, the central/significant part of the claim is false or omits important nuances (example here).
  • Misleading/Missing Context: The claim would mislead the audience without providing additional context and/or details (example here). 
  • Without Evidence: There is no factual evidence or a primary source that would confirm the disseminated claim (example here). 
  • Fabricated Quote: A quote has been made up completely and ascribed to a specific person, or a significant part of the quote has been altered, so that the meaning is changed (example here). 
  • Conspiracy: A claim that, while providing no reasonable evidence, aims at conveying the idea that certain events or situations are secretly manipulated behind the scenes by powerful forces with negative intent (example here).
  • Satire: Claims of a satirical/humorous nature that should not be perceived as authentic information (example here).
  • False Treatment: A claim that encourages the public to use medicines/substances that are non-existent, can be damaging to their health, or whose efficacy has not been approved by recognised health professionals. Such claims became particularly widespread amid the Covid-19 pandemic, when various substances were presented as cures for the virus (example here).
  • Scam: A claim, post, or advertisement that is being disseminated in accordance with a predetermined plan to deceptively acquire either money or personal information from social media users (example here).

4. About the Source 

When a source regularly spreads fake news, or a media outlet is newly established with its real owner hidden, a researcher seeks additional data about the source. In the case of a media outlet, such additional data includes information about its owners and funding, which is obtained from open sources and verified according to the Transparency Guideline. When the source is a public figure, a profile is prepared to help the reader form a more nuanced picture.

5. Publishing the Article 

An article is published after it has been edited by the duty editor and given final approval by either the deputy editor-in-chief or the editor-in-chief. In preparing an article, researchers follow a style guide. The findings should be clearly presented in the text, while any language that might be construed as a value judgment must be avoided.

For more information about the editorial team of Myth Detector, see here.

6. Identification of Trolls

Trolls are identified based on open sources, primarily the open data of the suspicious account. Monitoring of trolls is carried out through analysis of a social media account’s profile and covers the following aspects: 1. the “About me” section of the account, personal photos and videos; 2. public comments, posts and behaviour on the timeline.

To verify whether a social media account is real, we additionally use person identification search engines, such as Webmii.com, PimEyes.com, Google Image Search, Yandex Image Search and Findclone.ru.

When trolls use other people’s photos to make their identity appear more real and credible, we use reverse image search resources such as images.google.com, TinEye.com, yandex.com/images and Baidu.com.
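The reverse-image lookup step can be sketched as building query URLs for the public search-by-URL pages of these services. The URL templates below are assumptions based on the engines’ publicly known search pages, not official APIs, and may change:

```python
from urllib.parse import quote

# Query-URL templates for public reverse-image-search pages.
# NOTE: these templates are assumptions and may change over time;
# they are not official APIs of the services named in the methodology.
ENGINES = {
    "tineye": "https://tineye.com/search?url={img}",
    "google_lens": "https://lens.google.com/uploadbyurl?url={img}",
    "yandex": "https://yandex.com/images/search?rpt=imageview&url={img}",
}

def reverse_image_search_urls(image_url: str) -> dict:
    """Return one lookup URL per engine for a publicly hosted image."""
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {name: tpl.format(img=encoded) for name, tpl in ENGINES.items()}
```

A researcher would open each returned URL in a browser and compare the matches against the suspicious account’s photos.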

7. Correction

Anyone may appeal to Myth Detector if, in their view, information we have checked is inaccurate, and offer corresponding counterarguments. To this end, an applicant should write to us ([email protected]) or fill in the complaint form.

For more information about the process, see the Corrections/Complaints policy.
