- AI-assisted disinformation allowed Russian propagandists to more than double content output without reducing perceived credibility.
- Most readers cannot distinguish AI-written propaganda articles from human-authored ones.
- Topic diversity nearly doubled after the campaign adopted generative AI, mimicking the breadth of legitimate media outlets.
- Repeated exposure to AI-enhanced propaganda increases believability through well-documented psychological effects.
- Prebunking education could inoculate readers against manipulative narratives.
Artificial intelligence is transforming propaganda, and not for the better. A study published in PNAS Nexus shows how a Russian influence operation used generative AI to produce and spread disinformation efficiently and convincingly. By folding GPT-3 into its content pipeline, the campaign increased output without sacrificing believability, making today's propaganda faster, more emotionally tuned, and harder to spot than ever.
What Is Generative AI – and Why It Matters
Generative AI refers to machine learning systems that can produce realistic content: text, images, audio, even video. Models such as OpenAI's GPT-3 and GPT-4 are trained on vast amounts of online data, which lets them reproduce language patterns, styles, and arguments with unsettling accuracy.
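To make the idea concrete, here is a minimal sketch of machine text generation using the Hugging Face transformers library; the model choice and prompt are illustrative assumptions, not details from the study.

```python
# Minimal text-generation sketch (illustrative only; not the campaign's actual tooling).
from transformers import pipeline

# Load a small, publicly available language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Local officials announced today that"
# Generate a short continuation; larger models produce far more fluent output.
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```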
Such capabilities are genuinely useful in many domains: they power customer-service chatbots, support patient communication in healthcare, and generate study materials that adapt to individual students. But the same power has a dark side: generative AI makes believable falsehoods dramatically cheaper to produce.
Instead of relying on teams of writers or paid trolls, a single unrestricted AI model can churn out thousands of articles, posts, and fabricated stories at scale. For bad actors, propaganda is no longer just a strategic option. It is a ready-made weapon that can be rapidly adapted across formats, regions, and cultures.
How a Modern Propaganda Campaign Works
To see how this works in practice, the study examined DC Weekly, an online outlet styled as a local, right-leaning U.S. news source. It looked like a genuine publication, with clean design, U.S.-focused stories, and catchy headlines, but it was in fact operated by Russian actors, as earlier investigations such as the BBC's had shown.
At first, DC Weekly simply republished existing news content with minor ideological edits. It looked fresh enough, but mostly it reframed American issues in ways favorable to Russia. That changed dramatically in late 2023.
Once AI tools were quietly introduced, DC Weekly began producing large numbers of articles. These were not mere copies but new stories with new framing, varied styles, and stronger emotional hooks. AI was not just assisting the propaganda; it was effectively running the news desk while appearing to do ordinary editorial work.
The Turning Point: Clear Evidence of AI Use
The change became evident around September 20, 2023. Researchers found that new articles contained leaked instructions traceable to GPT-3. This was not mere inference: actual prompt text intended for OpenAI's models appeared in the published articles themselves, direct evidence of AI use.
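Slips like these can be caught with simple text scanning. The sketch below is a hypothetical illustration of that idea; the phrase list, file layout, and function names are assumptions, not the researchers' actual method.

```python
# Hypothetical prompt-leak scanner (illustrative; not the study's methodology).
import re
from pathlib import Path

# Phrases that often betray a pasted-in LLM prompt or refusal.
LEAK_PATTERNS = [
    r"as an ai language model",
    r"please rewrite (this|the following) article",
    r"i cannot fulfill (this|that) request",
]
pattern = re.compile("|".join(LEAK_PATTERNS), re.IGNORECASE)

def find_leaks(article_dir: str) -> list[tuple[str, str]]:
    """Return (filename, matched fragment) pairs for articles containing leak phrases."""
    hits = []
    for path in Path(article_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        match = pattern.search(text)
        if match:
            hits.append((path.name, match.group(0)))
    return hits

if __name__ == "__main__":
    for name, fragment in find_leaks("articles"):
        print(f"{name}: ...{fragment}...")
```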
From then on, content volume more than doubled. And it was not just quantity: the articles carried a more polished voice, more flexible, creative, and persuasive. Instead of crude rants or clumsily translated text, these AI-assisted posts read like focused, emotionally charged arguments designed to seed small doubts about Western institutions.
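A shift in output like this is usually quantified by comparing average daily article counts before and after the suspected adoption date. The sketch below assumes a hypothetical CSV of publication dates; the file name and column are placeholders, not the study's data.

```python
# Hypothetical before/after volume comparison (illustrative data layout).
import pandas as pd

CUTOFF = pd.Timestamp("2023-09-20")  # suspected date of AI adoption

# Assumed CSV with one row per article and a 'published' date column.
articles = pd.read_csv("dc_weekly_articles.csv", parse_dates=["published"])

daily_counts = articles.groupby(articles["published"].dt.date).size()
daily_counts.index = pd.to_datetime(daily_counts.index)

before = daily_counts[daily_counts.index < CUTOFF].mean()
after = daily_counts[daily_counts.index >= CUTOFF].mean()

print(f"Mean articles/day before cutoff: {before:.1f}")
print(f"Mean articles/day after cutoff:  {after:.1f}")
print(f"Ratio: {after / before:.2f}x")
```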
According to Wack et al. (2024), this combination of scale and polish is what made the propaganda both widespread and effective. The AI did not merely write; it refined, adapted, and shaped each story to cut through a noisy, polarized online environment.
Scaling Up Without Losing Quality: The Credibility Problem
Higher output usually means lower quality, especially in news. Yet the study's most surprising finding was that DC Weekly's perceived credibility held steady even after the switch to AI.
A survey experiment with 880 American adults examined how people responded to articles produced with and without AI assistance. Participants rated each article's believability, emotional tone, and perceived authenticity. Strikingly, readers could not tell the human-written and AI-assisted pieces apart: both scored the same on credibility (Wack et al., 2024).
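A comparison of this kind ultimately comes down to testing whether two sets of ratings differ. The sketch below shows one conventional way to run such a check; the rating arrays are placeholder numbers, not data from the study.

```python
# Comparing credibility ratings between two article conditions
# (placeholder numbers; not data from Wack et al., 2024).
import numpy as np
from scipy import stats

# Hypothetical 1-7 credibility ratings for each condition.
human_written = np.array([5, 4, 6, 5, 5, 4, 6, 5, 4, 5])
ai_assisted = np.array([5, 5, 4, 6, 5, 4, 5, 5, 6, 4])

# Two-sided test of whether mean ratings differ between conditions.
t_stat, p_value = stats.ttest_ind(human_written, ai_assisted, equal_var=False)

print(f"Mean (human): {human_written.mean():.2f}")
print(f"Mean (AI):    {ai_assisted.mean():.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```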
This is a central problem for modern media literacy. AI can now mimic not just grammar but rhetorical moves, cultural references, and ideological cues. By maintaining a style that looks authentic, falsehoods become "truthy": believable even when false.
Worse still, automated news generators do not tire, drift, or change their minds. They scale constantly and adapt quickly, which means propaganda can be smooth, emotionally calibrated, and unrelenting.
How Psychology Plays a Role
To understand how AI-made propaganda succeeds, we need to see why it works on us. Human psychology has well-known weaknesses that disinformation, especially AI-generated disinformation, can exploit.
Illusory Truth Effect
Repeated claims begin to feel true over time, even when they are false. This happens not because of evidence but because of familiarity. Encountering AI-generated stories across multiple sites reinforces them, even among people who say they "think critically."
Processing Fluency
People tend to treat smoothly written information as more trustworthy. Generative AI models excel at producing fluent, readable content that outstrips older propaganda in polish and readability (a rough sketch of one way to quantify this kind of fluency follows these four effects).
Confirmation Bias
When AI targets content to what people already believe, it delivers messages they want to believe. This bypasses skepticism and reinforces existing worldviews, making people less likely to question ideas that fit their identity.
Emotional Priming
AI can be tuned to insert trigger words and emotional hooks that amplify anger, distrust, or panic. That emotional charge drives engagement, so falsehoods do not just spread; they stick.
Together, these effects erode the critical-thinking habits that would normally catch weak logic or untrustworthy sources.
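As promised above, here is a crude illustration of the fluency point: readability metrics can serve as a rough proxy for how "smooth" a text feels. The sketch uses the textstat package, and treating Flesch scores as a fluency proxy is an assumption for illustration, not a measure from the study.

```python
# Rough fluency proxy via readability scoring (illustrative; not the study's measure).
# Requires: pip install textstat
import textstat

samples = {
    "clumsy": "The NATO is bad and all peoples must knowing this truth very much now.",
    "fluent": ("Critics argue that the alliance's latest decision raises serious "
               "questions about transparency and long-term strategy."),
}

for label, text in samples.items():
    # Higher Flesch Reading Ease = easier, smoother-reading text.
    score = textstat.flesch_reading_ease(text)
    print(f"{label:>6}: Flesch Reading Ease = {score:.1f}")
```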
AI Allows More Topics, Wider Spread
One major advantage of large language models for propagandists is the ease of producing varied content. The PNAS Nexus study found that topic diversity nearly doubled after DC Weekly began using AI tools (Wack et al., 2024).
Previously, coverage centered on anti-NATO talking points, U.S. racial tensions, and culture-war flashpoints. After the switch to AI, the site also covered criminal justice, global technology fears, and fabricated international scandals. That range resembled an ordinary online news outlet, letting the platform cast a wider cognitive net.
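Topic diversity of this kind is often estimated with topic models. The sketch below outlines one generic approach using scikit-learn; the corpus, topic count, and entropy measure are illustrative choices, not the paper's exact method.

```python
# Generic topic-diversity estimate (illustrative; not the paper's exact pipeline).
from scipy.stats import entropy
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def topic_diversity(articles: list[str], n_topics: int = 10) -> float:
    """Shannon entropy of the corpus-level topic mixture (higher = more diverse)."""
    counts = CountVectorizer(stop_words="english", min_df=2).fit_transform(articles)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)  # per-document topic proportions
    corpus_mix = doc_topics.mean(axis=0)    # average topic mixture across the corpus
    return float(entropy(corpus_mix, base=2))

# Usage sketch: compare diversity before and after the suspected AI adoption date.
# diversity_before = topic_diversity(pre_ai_articles)
# diversity_after  = topic_diversity(post_ai_articles)
```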
More topics mean more entry points. Readers who might ignore overtly ideological content now encounter headlines on subjects they already care about, each carrying a small twist or seed of doubt. That expands reach and threads falsehoods into everyday conversation.
Perception vs. Reality: When Human and AI Writing Blend
The days when bad grammar or robotic phrasing gave away AI content are gone. Modern models trained on trillions of words, from Reddit threads to government records, can write like humans, sound like news reporters, and even imitate regional idiom.
As a result, surface appearance has replaced provenance as the basis for judging credibility. If it looks real and reads smoothly, we tend to assume it is true.
This is compounded by the "halo effect" of website design. A clean, professional site with consistent writing is easily mistaken for a trusted source, even when ownership is murky or the writing carries red flags. AI keeps tone and structure uniform, making outlets like DC Weekly nearly indistinguishable from genuine local news sites.
Effects on Mental Health and Trust
There is a quiet psychological cost to this polished deception. Heavy exposure to false content breeds distrust, not only of the fake outlets but of all news.
That erosion of trust can lead people to disengage from facts altogether. Overwhelmed by competing claims, the mind falls back on shortcuts: emotion over logic, familiarity over evidence, tribe over truth.
The symptoms include heightened anxiety, identity-driven politics, and hardened hostility toward out-groups. Worse still, propaganda does not need to convince you of anything in particular; it only needs to convince you that nothing is true. Then it wins without even trying.
At-Risk Groups and Media Literacy Gaps
Not everyone is equally vulnerable to AI-enhanced propaganda. Research points to groups that are more likely to be taken in:
- Older adults: often less fluent with digital tools and more trusting of the written word.
- People with low media literacy: unfamiliar with newsroom norms or how to verify sources.
- People with entrenched views: strong biases or grievances make it harder to weigh information fairly.
- Socially isolated people: without community conversation or exposure to different viewpoints, fringe claims become easier to believe.
Educators matter here. Without updated media literacy instruction, these groups remain easy targets, not only believing falsehoods but also spreading them.
Telltale Signs of Disinformation
Vigilance starts with recognizable cues. Signs that an article may be AI-generated disinformation include:
- Highly emotional language: designed to provoke anger or fear rather than make a reasoned case.
- Sweeping claims without proof: big assertions with no real evidence or expert sourcing.
- False binaries: us vs. them, good vs. evil, simple answers to complex issues.
- Relentlessly one-sided framing: no acknowledgment of other viewpoints or nuance.
- Questionable site information: no clear mission, a vague "About Us" page, or unidentifiable authors.
Careful reading takes more than skepticism; it takes a routine: opening multiple tabs, cross-checking sources, and, above all, not sharing until you have verified.
Ethical Questions About AI Systems
As concern grows about Russian propaganda and AI-amplified disinformation, ethical questions follow. Who is responsible when AI-generated content becomes political manipulation?
Right now, rules are scarce. Most open-source language models carry few restrictions once released, and the companies that build them often cannot see, or do not prioritize, downstream misuse.
Possible responses include:
- Provenance tracking: developer-side logging that can trace prompts linked to malicious actors.
- Regulation: rules targeting coordinated manipulation and fabricated media.
- Response playbooks: cooperation between technology companies and law enforcement to take down malicious content.
Innovation should continue, but transparency and ethical limits are essential to resilient democracies.
The Role of Psychologists, Educators, & Citizens
We cannot simply code our way out of this. Countering disinformation, especially AI-assisted disinformation, requires coordinated human strategies.
- Psychologists must study how falsehoods spread and identify the cognitive patterns that build resilience.
- Educators need updated curricula covering digital literacy, cognitive biases, and narrative deconstruction.
- Citizens must keep learning how technology shapes belief.
This is not about paranoia. It is about developing what some researchers call "epistemic vigilance": a healthy skepticism toward claims that confirm all of your fears or preferences.
A Call for Resilience: Psychological Inoculation
One promising approach is psychological "inoculation," or prebunking. Developed by cognitive scientists, it exposes people to weakened forms of manipulation techniques before they encounter them in the wild.
Examples include:
- Walking through the structure of a misleading article.
- Demonstrating emotional bait-and-switch framing.
- Showing how bias shapes interpretation.
Just as vaccines train the body to fight viruses, prebunking helps readers build defenses against false narratives. Evidence suggests the effect persists for weeks after exposure.
Digital learning platforms can build interactive versions of these exercises, making resilience a built-in feature rather than an afterthought.
Awareness Is the First Defense
The findings of Wack et al. (2024) are more than a warning; they are a wake-up call. Russian propaganda now uses AI not just to deceive but to overwhelm. With credible style, emotional precision, and factual-looking stories, these campaigns are becoming nearly impossible to distinguish from real news.
But knowledge is the defense. Understanding how AI shapes perception gives us back our agency. As these operations scale, so must our defenses, grounded not only in technology but in psychology, civic education, and clear thinking.
Disinformation is no longer a fringe problem; it is a scalable product. But so is truth, if we teach people to recognize it.
Citations
- Wack, M., Ehrett, C., Linvill, D., & Warren, P. (2024). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. PNAS Nexus. https://doi.org/10.1093/pnasnexus/pgaf083