Independent research, policy analysis, and public education to ensure artificial intelligence is developed safely, transparently, and in the national interest.
Our Mission
The United States Institute for AI Safety (USIAIS) is an independent, nonpartisan institution dedicated to ensuring that artificial intelligence systems are developed and deployed in a manner that is safe, transparent, and accountable to the public.
We produce rigorous research, policy analysis, and public-facing education on the risks posed by advanced AI systems — from alignment and interpretability challenges to the societal consequences of widespread automation, surveillance, and algorithmic decision-making.
USIAIS is not opposed to AI development. We believe that safety and innovation are complementary — but that safety does not happen by default. It requires dedicated, well-resourced institutions that answer to the public interest.
Areas of Focus
Research into alignment, interpretability, and robustness — the foundational questions that determine whether advanced AI systems behave as intended.
Regulatory frameworks, standards development, and legislative analysis — translating safety research into actionable guidance for policymakers and institutions.
Analysis of labor displacement, synthetic media, algorithmic bias, and surveillance — the second-order effects of AI systems on American life.
Accessible briefings, public reports, and educational resources — because AI safety is too important to be understood only by industry insiders.
Why Now
The U.S. AI Safety Institute (AISI) within NIST has been dismantled. The companies building the most powerful AI systems in history are effectively self-regulating. Independent oversight capacity is disappearing at the exact moment it is most needed.
USIAIS is being established to fill this gap — not as a government agency, but as an independent institution with the credibility, rigor, and public mandate that the current moment demands. We believe the American public deserves an institution that monitors AI development on its behalf, free from commercial conflicts of interest.
Get Involved
Support independent AI safety research and policy work. Contributions fund research, publications, and public education.
Coming Soon: We work with academic institutions, civil society organizations, and responsible technologists. Inquiries are welcome.
Coming Soon: Receive research updates, policy briefs, and announcements from the Institute.