By Trygve Olson
From the newsletter: Searching for Hope
Why This Series
A few weeks ago, I shared how I decided to get ahead of the curve on artificial intelligence.
Not because I was looking for a new job.
Not because I had some tech epiphany.
Because I’ve spent the last two decades fighting autocratic systems, and I recognized AI for what it is:
A new front in the same old battle.
I’ve worked with democratic resistance movements in Belarus, Georgia, Lithuania, Zimbabwe, and more. I’ve trained activists under surveillance, helped build political strategies under repression, and watched regimes evolve faster than we were ready for.
Back then, tech tools like YouTube, Twitter, and even SMS gave people new ways to speak out and organize.
For a while, it seemed like technology would save us.
But technology is neutral.
It doesn’t care who’s using it — or why.
And now, with the rise of large language models (LLMs) and real-time generative AI, the most powerful tool in the information ecosystem is one that authoritarians are already learning to use better than democrats.
What This Series Is
This isn’t a primer on AI.
It’s a field manual for people trying to protect democracy from the systems that now threaten it, digitally and psychologically.
This is about how AI — and LLMs in particular — is being used to confuse, divide, suppress, and surveil.
And what those of us still fighting for democratic values need to do in response.
Each lesson includes:
A core threat from the authoritarian playbook
A counter-strategy for democratic actors
Three practical actions anyone can take now
The Seven Lessons
They’ll Use AI to Control the Narrative. We Must Use It to Rebuild Trust.
They’ll Use AI to Target Dissent. We Must Use It to Protect Expression.
They’ll Use AI to Engineer Division. We Must Use It to Strengthen Solidarity.
They’ll Use AI to Erode Reality. We Must Use It to Anchor Shared Truth.
They’ll Use AI to Undermine Elections. We Must Use It to Defend Them.
They’ll Use AI to Centralize Power. We Must Use It to Decentralize Resistance.
They’ll Use AI to Accelerate Fear. We Must Use It to Scale Hope.
This is the first lesson.
Lesson 1: They’ll Use AI to Control the Narrative. We Must Use It to Rebuild Trust.
When I talk about AI, I’m not speaking in abstract terms.
I’m not talking about robots or sci-fi scenarios.
I’m talking about large language models (LLMs) — the tools that can now generate text, mimic voices, create fake news sites, impersonate real people, and flood our information ecosystem with convincing disinformation at scale.
That’s not the future.
That’s right now.
And it’s already changing the battlefield.
In every fear-based system I’ve worked in — Belarus, Zimbabwe, Russia — one principle always held:
Authoritarians don’t need to convince everyone.
They need to confuse enough people.
If you can’t tell what’s real, you stop trusting anything.
If you stop trusting anything, you stop engaging.
If you stop engaging, they win.
LLMs are about to supercharge that playbook.
What the Autocrats Will Do
Authoritarian actors will use LLMs and generative AI to overwhelm public trust:
Deepfake videos of activists or politicians saying things they never said
Fake headlines from real-looking media outlets
AI-generated talking points, tailored for maximum emotional impact
Synthetic influencers built to manipulate targeted communities
Bot networks trained to flood platforms with coordinated confusion
They don’t have to win the argument.
They just have to make you doubt the terms of it.
As I’ve written before:
“Authoritarian regimes don’t require trust in themselves. They only need to destroy trust in anyone else.”
What Democracy Must Do
We won’t win this fight by ignoring AI — or banning it.
We must use it strategically and defensively to rebuild what is being eroded: trust.
That means:
Equipping trusted messengers — community leaders, faith groups, educators — with the tools to respond quickly and credibly
Pre-bunking disinformation before it goes viral
Using LLMs defensively — to detect patterns, trace manipulation, and respond with clarity at scale
But most of all, it means grounding people in relationships, not just content.
LLMs can’t build trust.
People can.
Three Things You Can Do Today
1. Support local journalists, educators, and civic leaders.
They’re the people your community already trusts. Help them get trained and tooled up to detect AI manipulation — and respond effectively.
2. Use AI to explain, not to win arguments.
LLMs can clarify complex issues. You can use them to inform people on the fence, not to perform for people already convinced.
3. Call out narrative warfare — and shift the frame.
Say it plainly: “This is designed to divide us.” Then pivot: “Here’s what we still believe in, together.” That’s your counterattack.
Bottom Line
Autocrats will use AI — especially LLMs — to dominate the narrative, muddy the truth, and wear people down.
Democrats must use the same tools to amplify truth, rebuild trust, and protect the messengers who still carry credibility.
Because the fight isn’t just about data.
It’s about belief.
And belief is something we defend together.
Ready for Lesson 2?
Please let me know, and we’ll continue.
Or drop your thoughts in the comments — this is a conversation we all need to be a part of.
Subscribe: Searching for Hope
Follow: @trygveolson.bsky.social