Over the last decade, misinformation (including disinformation and malinformation) has become a major concern of policy makers, the media, academic researchers, and the general public. Fuelled by the growing capabilities of Artificial Intelligence to fabricate realistic fake content, its impact is likely to continue to expand, creating risks of large-scale personalized manipulation. At the same time, the possible reactions to misinformation pose important challenges of their own, as governments and other actors may be tempted to resort to censorship in various guises, thus jeopardizing fundamental liberties.
One explanation for the impact of misinformation is our innate reliance on cognitive heuristics. These are mental shortcuts that help us make decisions efficiently, but that also make us vulnerable to biases and judgment errors. Misinformation tends to exploit these vulnerabilities, thus greatly amplifying its impact.
In the ERC project VIGILIA, we will investigate a novel strategy to address these so-called post-truth challenges. In doing so, we will steer clear of censorship of any kind, in order to safeguard fundamental liberties. The VIGILIA approach centers on detecting and mitigating the triggers of cognitive biases and heuristics in humans and societies when consuming and sharing information, whether true or false. We will integrate our results into tools that we refer to as 'VIrtual GuardIan AngeLs' (VIGILs), aimed at news and social media consumers, journalists, scientific researchers, and political decision makers.