TL;DR
- Over 70% of public servants already use AI at work. Most disclosure rules exist on paper but are rarely followed, and training alone does not change how AI is used. This is Safety Theatre.
- Policy is made of text: briefs, memos, strategies, regulations. When AI writes that text, decisions drift toward the model, even with a human signature at the bottom.
- This is a policy-making case of gradual disempowerment — an AI risk where institutions lose control not by a sudden takeover, but by everyday delegation.
- In a multilingual Europe, the same model gives different answers in different languages. Officials get different framings without knowing it.
- Disclosure is not governance. Institutions need to redesign how policy is written, not add another rule on top.
Europe is debating how to regulate AI from the outside, while AI quietly reshapes the production of policy from the inside.
AI already helps draft notes, summarise papers, structure arguments, compare options, and accelerate decisions that go out under a human signature. The real AI governance gap in Europe is whether institutions can still govern their own use of AI inside policy work itself.
The rise of Safety Theatre
It is no longer a question if policy professionals use AI. In February 2026, the Public Sector AI Adoption Index reported that more than 70% of public servants use AI, while only 18% think governments are using it effectively. The OECD has also warned that a significant share of public servants already rely on open-access generative AI tools such as ChatGPT.
Yet the bureaucratic instinct remains the same: add a rule, add a disclaimer, add training, and pretend the problem is managed. Call it Safety Theatre.
Of course, disclosure matters. But disclosure is not governance. Since January 2024, UK guidance has required civil servants to mark when AI-generated text is used verbatim or with only minor edits. But how many policy papers, briefing notes, or strategy documents have you actually seen with such a note? In my experience, almost none. Does that mean AI was not used? Obviously not.
Adoption rates and visible disclosures of AI use do not match.
Institutions have become comfortable adding another formal safeguard on paper instead of redesigning how policy is written and owned once AI becomes part of the workflow.
Policy is made of language
This matters because public policy is made of language. Briefs, draft laws, consultation responses, strategy papers, and legal analyses are not side products of a policy system. They are the system. They define problems, frame trade-offs, narrow options, and legitimise action. Decisions are assembled through layers of text. A briefing note shapes a meeting, a memo narrows the options, a strategy paper sets the framing.
Once large language models (LLMs) enter policy-making, policy begins to drift toward the model's output. The text can look like expertise and influence decisions even when no one fully owns the reasoning inside it. A human signature no longer means human reasoning.
The standard reassurance that there is always a human in the loop is weaker than many institutions want to admit. The European Data Protection Supervisor has warned that it is a mistake to assume there is no automated decision-making simply because a human supervises the process. Under pressure, people check AI-generated text for how it reads, not for the reasoning underneath. The smoother the output, the easier it is to mistake coherence for understanding.
I have seen this firsthand. A strategy paper looked polished enough to pass as expert input. But when specific claims were challenged, the authors could not clearly explain the reasoning behind them. Some written replies looked like standard ChatGPT answers.
Europe’s multilingual vulnerability
Europe has another vulnerability here. In a multilingual Union, the same model can produce materially different outputs depending on the language of the prompt. Research published in 2025 found that language strongly influences the political ideology displayed by multilingual models. In my own audit of AI responses about Russia's war against Ukraine, I documented the same pattern in a security context: Yandex's Alice endorsed Kremlin propaganda in 86% of Russian-language responses while refusing to answer 86% of English ones. DeepSeek shifted to Kremlin terminology in 29% of Russian-language responses. The European Commission itself has acknowledged that existing models often underperform on low-resource languages.
That means officials working on the same issue in different languages may receive different framings, different omissions, and different distortions from the same AI system, while treating the output as a neutral support layer.
AI safety risk in the policy field
This is a policy-making case of ‘gradual disempowerment’, an AI safety risk in which humans hand over core societal functions to more competitive machine systems, not through a sudden takeover, but through everyday delegation. The discussion usually centres on labour, culture, markets and the state. Policy work is where this risk has entered the state.
When officials approve text they did not fully reason through, disempowerment is already happening. And it doesn’t require complex agentic workflows.
Redesigning the policy machine
The moment to act as if AI can be kept outside policy work or managed by a disclaimer has already passed. The answer is to stop performing control and start building it. Institutions need workflow redesign, controlled experiments inside real workflows, open discussion of failures, and explicit accountability for what AI use, human judgment, and human oversight actually mean at each stage of the policy process. Another training session or PDF guide will not do that work.
The AI safety risk in Europe is that LLMs are already shaping policy making daily, while symbolic safeguards create the illusion that the process remains trustworthy.
__
Policy Genome helps policy teams and leadership integrate AI into real workflows — drafting, research, analysis, decision support — without falling into Safety Theatre.
If you want AI to strengthen policy work, not quietly replace the thinking behind it, get in touch: ihor@policygenome.org.