Policy Genome

    Policy Genome. Founded by Ihor Samokhodskyi in 2025


    AI Disinformation Audit

    Access
    Open
    Stage
    Pilot
    One-liner

    An auditing system for LLMs designed to identify narrative risks, propaganda, and algorithmic censorship.

    Outcome

    Report + benchmarked datasets with inter-rater reliability (IRR)

    Date
    January 15, 2026
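The IRR figure attached to the benchmarked datasets can, for two annotators, be computed with Cohen's kappa. The sketch below is illustrative only: the labels, annotators, and ratings are invented for the example and are not taken from the audit's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical annotators labelling 8 model responses
# as "acc" (accurate) or "prop" (propaganda-aligned).
a = ["acc", "acc", "prop", "acc", "prop", "prop", "acc", "acc"]
b = ["acc", "acc", "prop", "prop", "prop", "prop", "acc", "acc"]
print(cohens_kappa(a, b))  # → 0.75
```

Kappa near 1 indicates strong annotator agreement; values near 0 mean agreement no better than chance.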

    Research example:

    🔎 EU-funded · Weaponised Algorithms: Auditing AI in the Age of Conflict and Propaganda

    Methodology example:

    📗 EU-funded · Weaponised Algorithms (Methodology)

    What it does

    – Tests LLMs using multilingual prompts
    – Measures factual accuracy and propaganda alignment across three key metrics
    – Documents censorship and manipulation patterns
    – Provides comparative analysis of model behavior across different global providers
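The steps above can be sketched as a simple audit loop: the same prompt is sent to each model in several languages, and each response is scored on the three metrics. This is a rough illustrative sketch, not the project's methodology: the `query` function, metric names, and the toy keyword scorer (which stands in for human annotation) are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class AuditResult:
    model: str
    language: str
    accuracy: float      # 0..1, agreement with baseline facts
    propaganda: float    # 0..1, overlap with known propaganda narratives
    censored: bool       # refusal / empty response on the topic

def score_response(response, baseline_facts, narratives):
    """Toy keyword scorer standing in for human annotation."""
    text = response.lower()
    acc = sum(f.lower() in text for f in baseline_facts) / len(baseline_facts)
    prop = sum(n.lower() in text for n in narratives) / len(narratives)
    return acc, prop, not response.strip()

def audit(models, prompt_by_lang, baseline_facts, narratives, query):
    """Run one prompt across all models and languages.
    `query` is a caller-supplied function (model, prompt) -> response text."""
    results = []
    for model in models:
        for lang, prompt in prompt_by_lang.items():
            acc, prop, cens = score_response(
                query(model, prompt), baseline_facts, narratives)
            results.append(AuditResult(model, lang, acc, prop, cens))
    return results
```

Keeping `query` as an injected function lets the same loop run against any provider's API, which is what makes the cross-provider comparison possible.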

    Problem context

    AI systems broadcast disinformation and alter their "versions of truth" depending on the user's language.

    What it is not

    – An automated fact-checking tool
    – A content moderation system
    – A predictive model for AI developer intent

    Current use

    – Completed EU-supported audit of 6 global LLMs (OpenAI, Google, Anthropic, xAI, DeepSeek, Yandex)
    – Available for custom institutional deployment (elections, conflicts, narrative monitoring)

    Collaboration

    Request-based

    Limitations

    – Methodology and baseline facts must be adapted for each case
    – Results are model-version specific
    – Requires regular updates to reflect rapid AI development