Disinformation attack

Disinformation attacks are the intentional dissemination of false information, with the end goal of misleading, confusing, or manipulating an audience.[1] Disinformation attacks may be executed by state or non-state actors to influence domestic or foreign populations. These attacks are commonly employed to reshape attitudes and beliefs, drive a particular agenda, or elicit certain actions from a target audience.[2][3]

Disinformation attacks can be carried out through traditional media outlets, such as state-sponsored television and radio channels.[4] However, they have become increasingly widespread and potent with the advent of social media. Digital tools such as bots, algorithms, and AI technology are leveraged to spread and amplify disinformation and to micro-target populations on online platforms like Instagram, Twitter, Facebook, and YouTube.[5]

Disinformation attacks can threaten democracy in online spaces, the integrity of electoral processes such as the 2016 United States presidential election, and national security in general.[6]

Defense measures include machine learning applications that can flag disinformation on platforms, fact-checking and algorithmic adjustment systems, and collaboration between private social media companies and governments in creating solutions and sharing key information.[3] Educational programs are also being developed to teach people how to better discern between facts and disinformation online.[7]

Disinformation attack methods

Traditional media outlets

Traditional media channels can be used to spread disinformation. For example, Russia Today is a state-funded news channel broadcast internationally. It aims to boost Russia's reputation abroad while depicting Western nations, such as the U.S., in a negative light, notably by covering negative aspects of the U.S. and presenting conspiracy theories intended to mislead and misinform its audience.[4]

Social media

Perpetrators primarily use social media channels to spread disinformation, leveraging a variety of tools to carry out disinformation attacks, such as bots, algorithms, deep fake technology, and psychological principles.

  • Bots are automated agents that can produce and spread content on online social platforms. Many bots can engage in basic interactions with other bots and humans. In disinformation attack campaigns, they are leveraged to rapidly disseminate disinformation and infiltrate digital social networks. Bots can produce the illusion that a piece of information is coming from a variety of different sources, making disinformation seem believable through repeated and varied exposure.[8] By flooding social media channels with repeated content, bots can also manipulate algorithms and shift online attention toward disinformation content.[3]
  • Algorithms are leveraged to amplify the spread of disinformation. Algorithms filter and tailor information for users, modifying the content they consume.[9] A study found that algorithms can serve as radicalization pipelines because they present content based on user engagement levels, and users are drawn to radical, shocking, and click-bait content.[10] As a result, extremist, attention-grabbing posts can garner high levels of engagement through algorithms. Disinformation campaigns may leverage algorithms to amplify their extremist content and sow radicalization online (see the sketch following this list).[11]
  • A deep fake is digital content, such as audio or video, that has been synthetically manipulated. Deep fake technology can be harnessed to defame, blackmail, and impersonate. Due to their low cost and efficiency, deep fakes can be used to spread disinformation more quickly and in greater volume than human actors can. Disinformation attack campaigns may leverage deep fake technology to generate disinformation concerning people, states, or narratives, weaponizing it to mislead an audience and spread falsehoods.[12]
  • Human psychology is also exploited to make disinformation attacks more potent and viral. Psychological phenomena, such as stereotyping, confirmation bias, selective attention, and echo chambers, contribute to the virality and success of disinformation on digital platforms.[8][13] Disinformation attacks are often considered a form of psychological warfare because of their use of psychological techniques to manipulate populations.[14]
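
To make the engagement-driven amplification concrete, here is a minimal sketch in Python; the post data, scoring weights, and function names are invented for illustration and do not reproduce any platform's actual ranking formula.

```python
# Illustrative sketch of engagement-based feed ranking. Weights and data
# are hypothetical; real platform ranking systems are far more complex.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares are weighted most heavily: reshared content reaches new feeds.
    return post.likes + 3.0 * post.shares + 2.0 * post.comments

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is ordered purely by engagement; nothing here checks
    # accuracy, so sensational or false posts that attract clicks rise.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", likes=40, shares=2, comments=5),
    Post("Shocking conspiracy claim!", likes=90, shares=60, comments=80),
])
for post in feed:
    print(f"{engagement_score(post):7.1f}  {post.text}")
```

Because the objective rewards engagement alone, flooding a platform with attention-grabbing content is enough to shift what such a ranking surfaces; nothing in the score checks accuracy.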

Examples

Domestic voter disinformation attacks

Domestic voter disinformation attacks are most often employed by autocrats aiming to cover up electoral corruption. Voter disinformation includes public statements falsely asserting that local electoral processes are legitimate, as well as statements that discredit electoral monitors. Public-relations firms may also be hired to execute specialized disinformation campaigns, including media advertisements and behind-the-scenes lobbying. For example, state actors employed voter disinformation attacks to reelect Ilham Aliyev in the 2013 Azerbaijani presidential election. They restricted electoral monitoring, allowing only certain groups, such as allies from ex-Soviet republics, to observe the electoral process. Public-relations firms were also hired to push the narrative of an honest and democratic election.[15]

Russian campaigns

  • A Russian operation known as the Internet Research Agency (IRA) spent thousands of dollars on social media ads to influence the 2016 U.S. presidential election. These political ads leveraged user data to micro-target certain populations and spread misleading information, with the end goal of exacerbating polarization and eroding public trust in political institutions.[16] The Computational Propaganda Project at Oxford University found that the IRA's ads specifically sought to sow mistrust of the U.S. government among Mexican Americans and to discourage voter turnout among African Americans.[17]
  • Russia Today is a state-funded news channel that aims to boost Russia’s reputation abroad and depict Western nations in a negative light. It has served as a platform to disseminate propaganda and conspiracies concerning Western states such as the U.S.[4]
  • During the Russo-Ukrainian War of 2014, Russia combined traditional combat warfare with disinformation attacks in its offensive strategy. Disinformation attacks were leveraged to sow doubt and confusion among enemy populations, intimidate adversaries, erode public trust in Ukrainian institutions, and boost Russia's reputation and legitimacy. This hybrid warfare allowed Russia to exert physical and psychological dominance over target populations during the conflict.[18]

Other notable campaigns

An app called “Dawn of Glad Tidings,” developed by Islamic State members, assists the organization's efforts to rapidly disseminate disinformation across social media channels. When users download the app, they are prompted to link it to their Twitter accounts and to grant it permission to tweet from their personal accounts. This allows automated tweets to be sent from real user accounts, helping create trends on Twitter that amplify disinformation produced by the Islamic State on an international scale.[17]

Ethical concerns

  • There is growing concern that Russia could employ disinformation attacks to destabilize certain NATO members, such as the Baltic states. States with highly polarized political landscapes and low public trust in local media and government are particularly vulnerable to disinformation attacks.[19] Russia may employ disinformation, propaganda, and intimidation to coerce such states into accepting Russian narratives and agendas.[17]
  • Disinformation attacks can erode democracy in the digital space. With the help of algorithms and bots, disinformation and fake news can be amplified, users' content feeds can be tailored and limited, and echo chambers can easily develop.[20] In this way, disinformation attacks can breed political polarization and alter public discourse.[21]
  • During the 2016 U.S. presidential election, Russian influence campaigns employed hacking techniques and disinformation attacks to confuse the public on key political issues and sow discord. Experts worry that disinformation attacks will increasingly be used to influence national elections and democratic processes.[6]

Defense measures

Federal

The Trump Administration backed initiatives to evaluate blockchain technology as a potential defense mechanism against internet manipulation. A blockchain is a decentralized, secure database that can store and protect transactional information. Blockchain technology could be applied to make data transport more secure in online spaces and Internet of Things networks, making it difficult for actors to alter or censor content and carry out disinformation attacks.[22]
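
As a rough illustration of the tamper-evidence property blockchains provide, the following sketch chains records together by hash, so that altering any earlier entry invalidates every hash after it. It is a toy model of the principle, not any particular blockchain implementation (there is no consensus protocol, networking, or signing here).

```python
# Toy hash chain illustrating tamper evidence; not a production blockchain.
import hashlib

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records: list[str]) -> list[dict]:
    chain, prev = [], GENESIS
    for data in records:
        digest = block_hash(prev, data)
        chain.append({"data": data, "prev_hash": prev, "hash": digest})
        prev = digest
    return chain

def verify_chain(chain: list[dict]) -> bool:
    prev = GENESIS
    for block in chain:
        if block["prev_hash"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False  # a record was altered after the fact
        prev = block["hash"]
    return True

chain = build_chain(["report A", "report B", "report C"])
print(verify_chain(chain))        # True
chain[1]["data"] = "doctored B"   # an attacker silently edits history
print(verify_chain(chain))        # False: tampering is detectable
```

In a decentralized deployment many parties hold copies of the chain, so an attacker would have to rewrite most copies of the history, not just one, which is what makes undetected alteration or censorship difficult.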

"Operation Glowing Symphony" in 2016 was another federal initiative that sought to combat disinformation attacks. This operation attempted to dispel ISIS propaganda in social media channels. However, it was largely unsuccessful: ISIS actors continued to disseminate propaganda on other unmonitored online platforms.[23]

Private

Private social media companies have engineered tools to identify and combat disinformation on their platforms. For example, Twitter uses machine learning applications to flag content that does not comply with its terms of service and to identify extremist posts encouraging terrorism. Facebook and Google have developed a content hierarchy system in which fact-checkers can identify and de-rank possible disinformation and adjust algorithms accordingly.[6] Many companies are also considering using procedural legal systems to regulate content on their platforms. Specifically, they are considering appellate systems: posts may be taken down for violating terms of service and posing a disinformation threat, but users can contest this action through a hierarchy of appellate bodies.[11]
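
As a rough sketch of how such machine-learning flagging can work, the example below trains a toy text classifier and routes high-scoring posts to human fact-checkers. The inline dataset, features, and threshold are invented for illustration; this does not reflect Twitter's or Facebook's actual systems.

```python
# Toy disinformation flagger; the training data and threshold are
# illustrative. Real systems use large corpora and human review loops.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Scientists publish peer-reviewed climate study",
    "Local election results certified by observers",
    "SHOCKING: miracle cure THEY don't want you to see",
    "Secret proof the election was stolen, share now!!",
]
train_labels = [0, 0, 1, 1]  # 0 = ordinary content, 1 = likely disinformation

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def flag_for_review(post: str, threshold: float = 0.5) -> bool:
    # Flagged posts are queued for human fact-checkers, not auto-removed.
    return model.predict_proba([post])[0][1] >= threshold

print(flag_for_review("Share this secret cure before THEY delete it!!"))
```

The design choice worth noting is that the classifier only triages: the threshold trades off reviewer workload against missed disinformation, and the final take-down decision stays with humans, which is also what makes the appellate systems described above workable.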

Collaborative measures

Cybersecurity experts claim that collaboration between the public and private sectors is necessary to successfully combat disinformation attacks. Cooperative defense strategies include:

  • The creation of "disinformation detection consortiums," where stakeholders (e.g., private social media companies and governments) convene to discuss disinformation attacks and devise mutual defense strategies.[3]
  • Sharing critical information between private social media companies and the government, so that more effective defense strategies can be developed.[24][3]
  • Coordination among governments to create a unified and effective response against transnational disinformation campaigns.[3]

Education and awareness

In 2018, the Executive Vice President of the European Commission for A Europe Fit for the Digital Age gathered a group of experts to produce a report with recommendations for teaching digital literacy. The proposed digital literacy curricula familiarize students with fact-checking websites such as Snopes and FactCheck.org, aiming to equip them with the critical thinking skills needed to discern between factual content and disinformation online.[7]

References

  1. ^ Fallis, Don (2015). "What Is Disinformation?". Library Trends. 63 (3): 401–426. doi:10.1353/lib.2015.0014. hdl:2142/89818. ISSN 1559-0682. S2CID 13178809.
  2. ^ Collado, Zaldy C.; Basco, Angelica Joyce M.; Sison, Albin A. (2020-06-26). "Falling victims to online disinformation among young Filipino people: Is human mind to blame?". Cognition, Brain, Behavior. 24 (2): 75–91. doi:10.24193/cbb.2020.24.05.
  3. ^ Frederick, Kara (2019). "The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight". Center for a New American Security.
  4. ^ Ajir, Media; Vailliant, Bethany (2018). "Russian Information Warfare: Implications for Deterrence Theory". Strategic Studies Quarterly. 12 (3): 70–89. ISSN 1936-1815. JSTOR 26481910.
  5. ^ Katyal, Sonia K. (2019). "Artificial Intelligence, Advertising, and Disinformation". Advertising & Society Quarterly. 20 (4). doi:10.1353/asr.2019.0026. ISSN 2475-1790.
  6. ^ Downes, Cathy (2018). "Strategic Blind–Spots on Cyber Threats, Vectors and Campaigns". The Cyber Defense Review. 3 (1): 79–104. ISSN 2474-2120. JSTOR 26427378.
  7. ^ Glisson, Lane (2019). "Breaking the Spin Cycle: Teaching Complexity in the Age of Fake News". Portal: Libraries and the Academy. 19 (3): 461–484. doi:10.1353/pla.2019.0027. ISSN 1530-7131. S2CID 199016070.
  8. ^ Kirdemir, Baris (2019). "Hostile Influence and Emerging Cognitive Threats in Cyberspace". Centre for Economics and Foreign Policy Studies.
  9. ^ Sacasas, L. M. (2020). "The Analog City and the Digital City". The New Atlantis (61): 3–18. ISSN 1543-1215. JSTOR 26898497.
  10. ^ Brogly, Chris; Rubin, Victoria L. (2018). "Detecting Clickbait: Here's How to Do It / Comment détecter les pièges à clic". Canadian Journal of Information and Library Science. 42 (3): 154–175. ISSN 1920-7239.
  11. ^ Heldt, Amélie (2019). "Let's Meet Halfway: Sharing New Responsibilities in a Digital Age". Journal of Information Policy. 9: 336–369. doi:10.5325/jinfopoli.9.2019.0336. ISSN 2381-5892. JSTOR 10.5325/jinfopoli.9.2019.0336. S2CID 213340236.
  12. ^ "Weaponised deep fakes: National security and democracy". www.jstor.org. Retrieved 2020-11-12.
  13. ^ Buchanan, Tom (2020-10-07). Zhao, Jichang (ed.). "Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation". PLOS ONE. 15 (10): e0239666. Bibcode:2020PLoSO..1539666B. doi:10.1371/journal.pone.0239666. ISSN 1932-6203. PMC 7541057. PMID 33027262.
  14. ^ Thomas, Timothy L. (2020). "Information Weapons: Russia's Nonnuclear Strategic Weapons of Choice". The Cyber Defense Review. 5 (2): 125–144. ISSN 2474-2120. JSTOR 26923527.
  15. ^ Merloe, Patrick (2015). "Election Monitoring Vs. Disinformation". Journal of Democracy. 26 (3): 79–93. doi:10.1353/jod.2015.0053. ISSN 1086-3214. S2CID 146751430.
  16. ^ Crain, Matthew; Nadler, Anthony (2019). "Political Manipulation and Internet Advertising Infrastructure". Journal of Information Policy. 9: 370–410. doi:10.5325/jinfopoli.9.2019.0370. ISSN 2381-5892. JSTOR 10.5325/jinfopoli.9.2019.0370.
  17. ^ Prier, Jarred (2017). "Commanding the Trend: Social Media as Information Warfare". Strategic Studies Quarterly. 11 (4): 50–85. ISSN 1936-1815. JSTOR 26271634.
  18. ^ Wither, James K. (2016). "Making Sense of Hybrid Warfare". Connections. 15 (2): 73–87. doi:10.11610/Connections.15.2.06. ISSN 1812-1098. JSTOR 26326441.
  19. ^ Humprecht, Edda; Esser, Frank; Van Aelst, Peter (July 2020). "Resilience to Online Disinformation: A Framework for Cross-National Comparative Research". The International Journal of Press/Politics. 25 (3): 493–516. doi:10.1177/1940161219900126. ISSN 1940-1612. S2CID 213349525.
  20. ^ Peck, Andrew (2020). "A Problem of Amplification: Folklore and Fake News in the Age of Social Media". The Journal of American Folklore. 133 (529): 329–351. doi:10.5406/jamerfolk.133.529.0329. ISSN 0021-8715. JSTOR 10.5406/jamerfolk.133.529.0329.
  21. ^ Unver, H. Akin (2017). "Politics of Automation, Attention, and Engagement". Journal of International Affairs. 71 (1): 127–146. ISSN 0022-197X. JSTOR 26494368.
  22. ^ Sultan, Oz (2019). "Tackling Disinformation, Online Terrorism, and Cyber Risks into the 2020s". The Cyber Defense Review. 4 (1): 43–60. ISSN 2474-2120. JSTOR 26623066.
  23. ^ Brown, Katherine A.; Green, Shannon N.; Wang, Jian “Jay” (2017). "Public Diplomacy and National Security in 2017: Building Alliances, Fighting Extremism, and Dispelling Disinformation". Center for Strategic and International Studies (CSIS).
  24. ^ White, Adam J. (2018). "Google.gov: Could an alliance between Google and government to filter facts be the looming progressive answer to "fake news"?". The New Atlantis (55): 3–34. ISSN 1543-1215. JSTOR 26487781.