A free research chat where you can ask questions about AI safety and get answers with references to the major sources.
Source-linked synthesis across AI safety science, EU compliance practice, military AI governance, interoperability, and AGI strategic stability.
Link to the product:
https://notebooklm.google.com/notebook/1509870c-1631-4b76-acc1-72b5d5b4c365?authuser=1
What it does
- Curates top-tier public materials on frontier AI safety and AI–security dynamics into one searchable environment (Google NotebookLM).
- Lets you ask complex governance / security questions and get answers grounded in cited passages.
- Answers in your language.
- Supports cross-source comparison (where sources agree, disagree, or talk past each other).
- Designed for policy people who don’t have time to read 200 pages, but still need the substance.
Included sources (current library)
1. International AI Safety Report 2026 — led by Yoshua Bengio, authored by 100+ experts. (2 files)
2. EU General-Purpose AI (GPAI) Code of Practice (European Commission / EU digital strategy)
3. Dario Amodei (Anthropic) — The Adolescence of Technology
4. Harvard Kennedy School – Belfer Center — Code, Command, and Conflict: Charting the Future of Military AI (Priyesh Mishra + co-authors)
5. United Nations University Institute in Macau (UNU Macau) — Interoperability in AI Safety Governance: Ethics, Regulations, and Standards
6. RAND + Perry World House — The Artificial General Intelligence Race and International Security (edited by Jim Mitre, Michael C. Horowitz, Natalia Henry, Emma Borden, Joel B. Predd; contributors include Miles Brundage, Sarah Kreps, James D. Fearon, Karl P. Mueller, Jane Vaynman, Tristan A. Volpe)
7. Policy Genome — Weaponised Algorithms (cross-lingual audit of model narrative divergence; reference dataset + framing)
Permission-dependent add-on
- Carnegie Endowment for International Peace — China AI policy work by Matt Sheehan (often with co-authors, e.g., Scott Singer).
Example questions you can ask
- If Amodei is right in “The Adolescence of Technology”, what does that imply for escalation risk and decision ambiguity in wartime environments?
- Where might AI safety theory underestimate the speed, fog, and incentives of conflict settings?
- What policy steps already have cross-institutional consensus (vs. where serious disagreements remain)?
- How do “AI race” dynamics change the feasibility of governance ideas like coordination / restraint / “AI cartel” concepts?
- Where are the blind spots between lab safety culture and real-world strategic incentives?
Who it’s for
- Policy teams working on AI governance, security, democracy resilience
- Think tanks and researchers who need fast orientation across multiple frameworks
- Journalists and analysts doing explainers and briefings
- Donors / partners assessing whether a project has real intellectual grounding (not vibes)
Why this exists (problem context)
Most people won’t read 200 pages. But decisions still get made based on those pages. This tool turns that stack into a working research surface.
What it is NOT
- A policy position paper
- A forecasting model for conflict outcomes
- “The truth about AI safety”
- A substitute for primary-source reading
- A legal compliance or advisory tool
How to use (30 seconds)
1. Open the link
2. Ask a question in natural language
3. Check citations and expand passages
4. Compare answers across sources (ask: “what do sources disagree on?”)
Known limitations
- Answers are limited to what’s inside the curated library (it won’t “know” external facts unless present in sources).
- NotebookLM-style synthesis can still miss nuance — always verify in the cited passages.
- Coverage is intentionally selective (quality > quantity).
Credits, sourcing, and permissions
- All included materials are publicly available or added under explicit permission / fair-use confirmation where applicable (e.g., RAND confirmation referenced in the project update).
Want a tailored solution?
If you’re working on AI governance, security, or institutional readiness and want a tailored version for your domain (elections, defense, justice, media), reach out. We can build a dedicated instance with your preferred corpus and a clear governance question set.