Computational propaganda
Computational propaganda is the use of computational tools to distribute misleading information over social media networks. Advances in digital technologies and social media have enhanced the methods of propaganda. It is characterized by automation, scalability, and anonymity.
Autonomous agents can analyze big data collected from social media and the Internet of Things in order to manipulate public opinion in a targeted way and, moreover, to mimic real people on social media. Bots help achieve coordination, which amplifies a campaign's reach. Digital technology enhances well-established traditional methods of manipulating public opinion: appeals to people's emotions and biases circumvent rational thinking and promote specific ideas.
Pioneering work in identifying and analyzing the concept was done by Philip N. Howard's team at the Oxford Internet Institute, which has investigated computational propaganda since 2012, building on Howard's earlier research into the effects of social media on the general public, published, for example, in his 2005 book New Media Campaigns and the Managed Citizen and in earlier articles. In 2017, the team published a series of articles detailing the presence of computational propaganda in several countries.
Regulatory efforts have proposed tackling computational propaganda tactics through multiple approaches. Detection techniques are another front considered for mitigation; these can involve machine learning models, though early techniques suffered from a lack of training datasets and failed against the gradual improvement of accounts. Newer techniques address these shortcomings with other machine learning methods or specialized algorithms, yet challenges remain, such as increasingly believable generated text and the automation of its production.
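As a rough illustration of the machine-learning approach to detection, the sketch below trains a simple classifier on account-level features. The features (posting rate, follower ratio, account age) and the toy labels are assumptions chosen for demonstration; real detectors (e.g., Botometer) use far richer feature sets and curated datasets.

```python
# Minimal sketch of feature-based bot detection with scikit-learn.
# Features and labels are hypothetical toy data, not from any real study.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Each row: [posts_per_day, followers_per_friend, account_age_days]
accounts = [
    [120.0, 0.05,   30],   # high-volume, young account
    [  2.5, 1.10,  900],
    [ 95.0, 0.10,   15],
    [  0.8, 0.90, 2000],
    [200.0, 0.02,    7],
    [  5.0, 1.50, 1200],
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = bot, 0 = human (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    accounts, labels, test_size=0.33, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```

A sketch like this also shows why early detectors degraded: as automated accounts evolve, the feature distributions they were trained on stop matching reality.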
Mechanisms
Computational propaganda is the strategic posting of misleading information on social media by fake accounts that are automated to some degree, in order to manipulate readers.

Bots and coordination
In social media, bots are accounts pretending to be human. They are managed to a degree via programs and are used to spread information that creates mistaken impressions. In social media they may be referred to as "social bots", and they may be helped by popular users who amplify them and make them seem reliable by sharing their content. Bots allow propagandists to keep their identities secret. One study from Oxford's Computational Propaganda Research Project found that bots achieved effective placement on Twitter during a political event.

Bots can be coordinated, which may be leveraged to exploit platform algorithms. Propagandists mix real and fake users; their efforts draw on a variety of actors, including botnets, online paid users, astroturfers, seminar users, and troll armies. Bots can create a false sense of prevalence, and they can also engage in spam and harassment. They are becoming progressively more sophisticated, one reason being improvements in AI, and such development complicates detection for humans and automated methods alike.
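As a simplified illustration of how coordination might be surfaced, the sketch below flags pairs of accounts that repeatedly post identical text within a short window of one another. The window size, threshold, and data are arbitrary assumptions for illustration, not a published detection method.

```python
# Sketch: flag accounts that repeatedly post identical text within a
# short window of one another -- a crude signal of coordinated behavior.
from collections import defaultdict
from itertools import combinations

WINDOW_SECONDS = 300   # posts within 5 minutes count as co-posting
MIN_CO_POSTS = 2       # flag pairs that co-post at least twice

posts = [  # (account, unix_timestamp, text) -- toy data
    ("acct_a", 1000, "vote for candidate x"),
    ("acct_b", 1100, "vote for candidate x"),
    ("acct_c", 5000, "lovely weather today"),
    ("acct_a", 9000, "candidate y is a fraud"),
    ("acct_b", 9200, "candidate y is a fraud"),
]

by_text = defaultdict(list)
for account, ts, text in posts:
    by_text[text.strip().lower()].append((account, ts))

co_posts = defaultdict(int)
for text, entries in by_text.items():
    for (a1, t1), (a2, t2) in combinations(entries, 2):
        if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
            co_posts[tuple(sorted((a1, a2)))] += 1

flagged = {pair for pair, n in co_posts.items() if n >= MIN_CO_POSTS}
print(flagged)  # {('acct_a', 'acct_b')}
```

Real coordinated campaigns evade exact-match checks by paraphrasing, which is one reason detection has shifted toward more robust machine-learning signals.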
Problematic information
The problematic content propagandists employ includes disinformation, misinformation, and information shared without regard for veracity. The spread of fake and misleading information seeks to influence public opinion. Deepfakes and generative language models are also employed, creating convincing content. The proportion of misleading information is expected to grow, complicating detection.

Algorithmic influence
Algorithms are another important element of computational propaganda. Algorithmic curation may influence beliefs through repetition. Algorithms boost and hide content, which propagandists exploit to their advantage. Social media algorithms prioritize user engagement, and to that end their filtering favors controversy and sensationalism. The algorithmic selection of what is presented can create echo chambers and exert influence. One study posits that TikTok's automated and interactive features can also boost misleading information; furthermore, anonymity is preserved because the origin of reused audio is deleted.
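To illustrate why engagement-first filtering favors controversy and sensationalism, the toy ranking function below scores posts purely by predicted interactions. The weights and data are invented for demonstration and do not reflect any real platform's algorithm.

```python
# Toy engagement-based feed ranking. Weights are invented for
# illustration; no real platform's scoring formula is implied.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments (where arguments happen) are weighted more
    # heavily than likes, so divisive posts tend to rise in the feed.
    return post.likes * 1.0 + post.shares * 4.0 + post.comments * 6.0

feed = [
    Post("measured policy analysis", likes=120, shares=5, comments=10),
    Post("outrageous sensational claim", likes=60, shares=40, comments=55),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Under these assumed weights the sensational post (score 550.0) outranks the sober one (score 200.0) despite receiving fewer likes, which is the dynamic propagandists exploit.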
Multidisciplinary studies
A multidisciplinary approach has been proposed for combating misinformation, drawing on psychology to understand its effectiveness. Some studies have examined misleading information through the lens of cognitive processes, seeking insight into how humans come to accept it.

Media theories can help in understanding the complex relationships among computational propaganda and the actors surrounding it, in assessing its effects, and in guiding regulation efforts. Agenda-setting theory and framing theory have also been applied to the analysis of computational propaganda phenomena, and these effects have been found present; algorithmic amplification is an instance of the former, which states that the media's selection and occlusion of topics influences the public's attention, and that repetition focuses that attention.
Repetition is a key characteristic of computational propaganda; on social media it can modify beliefs. One study posits that repetition keeps topics fresh in the mind and has a similar effect on their perceived significance. The illusory truth effect, which holds that people come to believe what is repeated to them over time, has also been suggested as a mechanism that computational propaganda may be exploiting.
Other phenomena have been proposed to be at play in computational propaganda tools. One study posits the presence of the megaphone effect, the bandwagon effect, and cascades. Other studies point to the use of content that evokes emotions. Another tactic is suggesting a connection between topics by placing them in the same sentence. Trust bias, validation by intuition rather than evidence, truth bias, confirmation bias, and cognitive dissonance are present as well. Another study points to the occurrence of negativity bias and novelty bias.
Spread
Bots are used by both private and public parties and have been observed in politics and crises. Their presence has been studied across many countries, with incidence in more than 80 of them. Some studies have found bots to be effective, though others have found limited impact. Similarly, algorithmic manipulation has been found to have an effect.

Regulation
Some studies propose a strategy that combines multiple approaches to regulating the tools used in computational propaganda. Possible measures include controlling misinformation and its use in politics through legislation and guidelines; having platforms combat fake accounts and misleading information; and devising psychology-based interventions. Information literacy has also been proposed as a countermeasure to these tools.

However, it has also been reported that some of these approaches have their faults. In Germany, for example, legislative efforts have encountered problems and opposition. In the case of social media, self-regulation is hard to demand, and platforms' own measures may not be enough while placing the power of decision in their hands. Information literacy has its limits as well.