Auditing AI in the Age of Conflict and Propaganda
We tested 6 major AI models on Ukraine war questions: ✔️ Russian AI caught censoring truth. ✔️ Chinese AI has double standards. ✔️ Western AI shows “both sides” bias. ✔️ Automated data collection. ✔️ Clear methodology.
A clear, replicable methodology for tracking narratives, misinformation, and propaganda in AI systems.
📥 Access all materials (prompts, AI answers, full methodology, baseline facts, evaluations)
Want to see how AI talks about your issue, country or election?
→ Request a custom audit / methodology: Email, LinkedIn, WhatsApp.
EXECUTIVE SUMMARY
We tested 6 AI models on 7 well-known Russian disinformation narratives about the war in Ukraine.
Each question was asked in three languages: 🇺🇸 English, 🇺🇦 Ukrainian, and 🇷🇺 Russian.
7 questions × 6 models × 3 languages = 126 answers • 2 evaluators • Peer-validated
2 evaluators: Ihor Samokhodskyi and Dr. Dariia Opryshko.
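For readers who want to replicate the collection step, here is a minimal sketch of the prompt matrix behind the 126 answers. The question IDs, model list, and the `ask()` helper are placeholders (hypothetical, since each vendor exposes a different API), not the report’s actual prompts or tooling.

```python
# Minimal sketch of the automated data collection, assuming a generic
# ask(model, question, language) wrapper (hypothetical) and placeholder
# question IDs instead of the real prompt texts.
import csv
import itertools

QUESTIONS = [f"Q{i}" for i in range(1, 8)]                         # 7 disinformation narratives
MODELS = ["chatgpt-5", "claude", "gemini", "grok", "alice", "deepseek"]
LANGUAGES = ["en", "uk", "ru"]                                     # English, Ukrainian, Russian

def ask(model: str, question_id: str, language: str) -> str:
    """Hypothetical wrapper around each model's API; returns the raw answer text."""
    raise NotImplementedError

with open("answers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "model", "language", "answer"])
    # 7 questions x 6 models x 3 languages = 126 answers
    for q, m, lang in itertools.product(QUESTIONS, MODELS, LANGUAGES):
        writer.writerow([q, m, lang, ask(m, q, lang)])
```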
KEY FINDINGS
Western AI models are mostly robust.
ChatGPT-5, Claude, Gemini, and Grok were highly accurate.
They did not endorse Russian war propaganda in this sample and often explicitly refuted it.
5-19% “both sides” problem in frontier AI.
Western AI models sometimes engage in bothsidesism.
Claude, Gemini, Grok, and ChatGPT legitimise the aggressor’s narratives through false balance.
Language matters for AI behaviour.
Alice (Yandex) and DeepSeek showed dramatically different behaviour across English, Ukrainian, and Russian.
Alice (Yandex):
- English: Refused to answer 6 of 7 questions (86% censorship)
- Ukrainian: Mixed — both refusals and propaganda
- Russian: 0 refusals, repeated Kremlin narratives in 6 of 7 answers (86% propaganda)
DeepSeek (China):
- English: Fully or partially accurate in all 7 answers (100%)
- Ukrainian: Fully or partially accurate in all 7 answers (100%)
- Russian: Endorsed Kremlin propaganda in 2 of 7 answers (29%) + misleading facts in 1 of 7 (14%)
The risk is not just which model you use, but also which language you ask in: language choice dramatically affects results.
CRITICAL FINDING: Alice (Yandex AI) Caught Censoring Truth in Real-Time
We recorded Yandex’s Alice generating accurate responses about the war, then automatically replacing them with refusals before they were shown to the user.
This is deliberate, real-time censorship, not a bias in training data.
VIDEO: 17-second screen recording
86% - Alice (Yandex) lies in Russian
Russian AI spreads and supports Kremlin propaganda (e.g. that the Bucha massacre was staged with actors, or that Ukraine had US-supported bioweapon labs).
29% - DeepSeek bias in Russian
Chinese AI: 0% Russian propaganda endorsed in English, 29% in Russian. Same model, different truth by language.
Cohen’s κ ≥ 0.69. Peer-validated
Substantial inter-rater agreement. Scientific-level methodology. Open data.
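As an illustration of how the agreement score is computed, here is a minimal sketch using scikit-learn’s `cohen_kappa_score`. The label values below are illustrative examples, not the report’s actual coding scheme or data.

```python
# Minimal sketch of the inter-rater agreement check, assuming each evaluator's
# verdicts are stored as a list of per-answer labels (illustrative labels only).
from sklearn.metrics import cohen_kappa_score

rater_1 = ["accurate", "propaganda", "refusal", "accurate", "accurate"]
rater_2 = ["accurate", "propaganda", "accurate", "accurate", "accurate"]

kappa = cohen_kappa_score(rater_1, rater_2)
# Values of 0.61-0.80 are conventionally read as "substantial" agreement.
print(f"Cohen's kappa: {kappa:.2f}")
```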
Why this matters
WEAPONISED AIGORITHMS. FULL REPORT.
Auditing AI in the Age of Conflict and Propaganda
What we tested:
7 questions × 6 models × 3 languages = 126 answers
MODEL COMPARISON GENERAL
WESTERN vs NON-WESTERN AI
FACTUAL ACCURACY, ALL MODELS
PROPAGANDA INCLUSION, ALL MODELS
TONE, ALL MODELS
🌍 LANGUAGE MATTERS
ALICE (YANDEX AI) CAUGHT CENSORING TRUTH IN REAL-TIME
CONCLUSIONS
👤 ABOUT THE AUTHOR
🙏 ACKNOWLEDGMENTS
📢 SHARE THIS RESEARCH
❓ FAQ
🔄 UPDATES & LICENSE
📚 METHODOLOGY
This publication was produced with the financial support of the European Union within its Eastern Partnership Civil Society Fellowship programme. Its contents are the sole responsibility of Ihor Samokhodskyi, EaP Civil Society Fellow, and do not necessarily reflect the views of the European Union.