With the numerous sources of information available to us today, each reporting a different version of the same story, telling truth from falsehood has never been more challenging. The tendency of people to brand anything that doesn't suit their interests as 'fake news' only makes the situation worse.
For the fact-checking community, this has been a period of crisis. Sites such as Snopes and PolitiFact have traditionally concentrated on individual claims, which is commendable but slow; by the time they complete the verification process, the information has already spread around the world.
Likewise, social media companies have made rigorous attempts to limit the spread of misinformation and propaganda, but with mixed results so far. Facebook, for one, plans to hire 20,000 human moderators by year-end and is investing several million dollars in developing algorithms that identify fake news.
Researchers from the Qatar Computing Research Institute and MIT's Computer Science and Artificial Intelligence Laboratory have an alternative solution: they believe that to eliminate fake news, efforts should be directed at the news sources rather than individual claims. To that end, they have showcased a new system that uses machine learning to determine whether a news source is accurate or biased.
Any site that has shared inaccurate information is likely to do so again, and the idea is to catch this before it happens. The system therefore automatically scrapes information about these websites and flags the ones most likely to publish falsehoods. According to Ramy Baly, the system's lead author, it requires approximately 150 articles to accurately determine whether a particular site is reliable.
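The core idea, classifying a whole source from a sample of its articles rather than fact-checking each claim, can be sketched roughly as follows. This is an illustrative toy, not the researchers' actual system: the article texts, labels, and features here are invented, and the real system draws on many more signals than article wording alone.

```python
# Hypothetical sketch: judging a news source's reliability from the text
# of articles it publishes. All data and labels below are invented for
# illustration; the real system uses far richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: articles from sources already labeled by factuality.
articles = [
    "officials confirmed the figures in a public report",
    "the study was peer reviewed and independently replicated",
    "shocking secret they do not want you to know revealed",
    "miracle cure banned by doctors exposed at last",
]
labels = ["reliable", "reliable", "unreliable", "unreliable"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(articles, labels)

# Score an unseen source by classifying a batch of its articles;
# in the researchers' setup, roughly 150 articles suffice.
new_articles = ["the ministry published audited statistics today"]
print(model.predict(new_articles))
```

In practice one would aggregate predictions across all of a source's sampled articles (e.g., by majority vote) rather than trusting any single article.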