Top 7 Latest Insights on AI-Driven Disinformation from the Media Forensics Hub
CXO Talk: Patrick Warren and Darren Linvill of the Media Forensics Hub - Clemson University
Human Generated: No AI/LLMs were used in the creation of this article.
Key Takeaways:
The goal is not to persuade new audiences; it is to entrench existing audiences and push their opinions toward greater extremes.
“When you spin a story that somebody wants to believe, they’re going to believe it. That’s what good disinformation actors know fundamentally.” - Darren Linvill
We as a nation need to invest in defensive technology; there’s clear ROI.
“For every advance AI offers the offensive player, there is a mirrored advantage that it offers the defensive player.” - Patrick Warren
Definition of Disinformation: Deception purposefully spread for specific ends, a.k.a. “lying on the internet” to inauthentically influence conversations.
Research Findings:
Russian Disinformation: Tends to create “artisanally crafted” fake troll accounts that take on the personas of real people and infiltrate online communities to promote attitudes, worldviews, and beliefs, with the goal of creating divides, increasing radicalization, and shifting ideology toward greater extremism. The task is to sow falsehoods, influence, and persuade.
Chinese Disinformation: Tends to create an army of generic bots with a quantity-over-quality strategy to flood certain hashtags in an attempt to oversaturate and overwhelm trending topics, thereby demoting ideas and shifting narratives away from politically charged topics. This includes targeting individual voices with networked bot harassment campaigns that carry out verbal attacks, including threats of physical and/or sexual violence.
“Objects of Influence”: A more comprehensive way to think about disinfo beyond accounts. Accounts are only one tactic in the context of FIMI (Foreign Information Manipulation and Interference); corporations, websites, hashtags, narratives, and other entities are all targets and tools, worked with varying TTPs (Tactics, Techniques, and Procedures) depending on the specific goals, capabilities, and technologies available to carry out the overall intent of promoting and demoting elements of the conversation (see the sketch after this list).
Russian Narrative Laundering: Tends to create fake newspapers, such as the “DC Weekly” and “Austin Prior,” populate them with content stolen from genuine news sites and rewritten using AI to appear authentic, and then insert narratives that are layered into social media through real users.
Anatomy of Disinformation: This “how to” example is from Russia regarding Ukraine. First, post a video of a fake “Ukrainian insider perspective” (played by a Russian actor) to a new YouTube channel with no followers. Next, retell the same narrative in text in a non-Western media article that gains traction as paid promotion or sponsored content. Then cite the non-Western fake news article as a source on a fake Western news site, and disseminate the fake Western article on social media through a mix of real influencers (paid and unpaid, some with connections to Russia) along with other coordinated fake accounts. Finally, the influencers spread the narrative to tens of thousands of users, and the disinfo eventually makes its way to US Senators as talking points when discussing a lack of support for Ukraine. Mission accomplished.
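To make the “Objects of Influence” framing concrete, here is a minimal sketch, in Python, of how an analyst might catalogue influence objects beyond accounts. The class names, fields, and example entries are illustrative assumptions made for this article, not a published Media Forensics Hub schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class ObjectType(Enum):
    # Kinds of "objects of influence" mentioned above, beyond accounts.
    ACCOUNT = "account"
    CORPORATION = "corporation"
    WEBSITE = "website"
    HASHTAG = "hashtag"
    NARRATIVE = "narrative"

class Intent(Enum):
    # Campaigns either promote or demote elements of the conversation.
    PROMOTE = "promote"
    DEMOTE = "demote"

@dataclass
class InfluenceObject:
    """One catalogued object of influence and the TTPs observed around it."""
    name: str
    kind: ObjectType
    intent: Intent
    ttps: list[str] = field(default_factory=list)  # tactics, techniques, procedures

# Hypothetical entries reflecting the two strategies described above.
catalog = [
    InfluenceObject("fake local newspaper", ObjectType.WEBSITE, Intent.PROMOTE,
                    ["narrative laundering", "AI-rewritten stolen content"]),
    InfluenceObject("#politically_charged_topic", ObjectType.HASHTAG, Intent.DEMOTE,
                    ["bot flooding", "hashtag saturation"]),
]

for obj in catalog:
    print(f"{obj.kind.value}: {obj.name} ({obj.intent.value}) - {', '.join(obj.ttps)}")
```

A catalogue like this keeps the analysis anchored to the campaign’s intent (promote or demote) rather than to any single account.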
Active Measures in the Age of AI: The biggest challenges are speed, scale, and scope.
Speed: AI can shorten the time it takes a narrative to travel from initial placement to popular discussion from years to weeks. It is nearly impossible to build anything that addresses the issue of speed from an adversarial standpoint, i.e., pitting AI content moderation against generative AI disinformation.
Scale: AI makes scaling easy and inexpensive. Suspending accounts imposes virtually zero cost on the actor because new accounts are so easy to create. We see diffusion across actors: everyone is using AI, particularly LLMs, and everything is off the shelf, ready to go out of the box.
Scope: AI enables a broadening of content with a reduction in quality. “Flooding campaigns” use ChatGPT-type tools to automate the writing of slightly varied, non-duplicative text in an attempt to evade platform detection algorithms. It is easy to detect verbatim copy bots, but much harder to detect the same narrative with slight variations, as the sketch below illustrates. Additionally, cheap fakes are proving to be the greater risk: they are low-hanging fruit with the best bang for the buck while still being incredibly effective.
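To illustrate that detection point, here is a minimal sketch in Python, assuming a simple word-shingle Jaccard similarity check. The sample posts and the 0.8 threshold are invented for illustration; real platform detection systems are considerably more sophisticated.

```python
def shingles(text: str, n: int = 3) -> set:
    """Break a post into overlapping n-word shingles."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two shingle sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

THRESHOLD = 0.8  # invented cutoff for flagging near-duplicates

original = "The new policy is a disaster and everyone knows it"
verbatim_copy = "The new policy is a disaster and everyone knows it"
llm_variation = "Everyone already knows this new policy is a total disaster"

# A verbatim copy scores 1.0 and is trivially flagged ...
print(jaccard(shingles(original), shingles(verbatim_copy)) >= THRESHOLD)  # True
# ... while an LLM-style rewrite of the same narrative scores about 0.14
# and slips under the naive similarity cutoff.
print(jaccard(shingles(original), shingles(llm_variation)) >= THRESHOLD)  # False
```

The same narrative survives in both posts, but only the verbatim copy is cheap to catch, which is exactly why LLM-driven flooding campaigns vary their wording.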
Economic Implications: AI disinfo presents a growing threat with huge business implications. Competitors and motivated actors are actively targeting brands with inauthentic social media firestorms as an economically advantageous business strategy. Corporations have also been targets of, and suffered collateral damage in, narrative warfare arising from conflict between states. For example:
News: Proliferation of a fake news article claiming a real pharmaceutical company was conducting testing in Ukraine that caused child deaths, intended to stoke anti-vaccine sentiment.
Entertainment: After Daryl Morey, the former general manager of the Houston Rockets, shared positive comments about democracy in Hong Kong, the NBA experienced fake bot retaliation from China and lost hundreds of millions in revenue.
Academia: Disgruntled employees using disinformation to promote narratives as retaliation against institutions.