An independent, nonpartisan institution

Advancing AI Safety for the American Public

Independent research, policy analysis, and public education to ensure artificial intelligence is developed safely, transparently, and in the national interest.

Our Mission

Independent oversight for the most consequential technology of our time.

The United States Institute for AI Safety (USIAIS) is an independent, nonpartisan institution dedicated to ensuring that artificial intelligence systems are developed and deployed in a manner that is safe, transparent, and accountable to the public.

We produce rigorous research, policy analysis, and public-facing education on the risks posed by advanced AI systems — from alignment and interpretability challenges to the societal consequences of widespread automation, surveillance, and algorithmic decision-making.

USIAIS is not opposed to AI development. We believe that safety and innovation are complementary — but that safety does not happen by default. It requires dedicated, well-resourced institutions that answer to the public interest.

Areas of Focus

Core Research & Policy Areas

Technical Safety

Research into alignment, interpretability, and robustness — the foundational questions that determine whether advanced AI systems behave as intended.

Policy & Governance

Regulatory frameworks, standards development, and legislative analysis — translating safety research into actionable guidance for policymakers and institutions.

Societal Impact

Analysis of labor displacement, synthetic media, algorithmic bias, and surveillance — the second-order effects of AI systems on American life.

Public Education

Accessible briefings, public reports, and educational resources — because AI safety is too important to be understood only by industry insiders.

Why Now

The need for independent AI oversight has never been greater.

The U.S. AI Safety Institute (AISI) within NIST has been dismantled. The companies building the most powerful AI systems in history are effectively self-regulating. Independent oversight capacity is disappearing at the exact moment it is most needed.

USIAIS is being established to fill this gap — not as a government agency, but as an independent institution with the credibility, rigor, and public mandate that the current moment demands. We believe the American public deserves an institution that monitors AI development on its behalf, free from commercial conflicts of interest.

Get Involved

Support This Work

Donate

Support independent AI safety research and policy work. Contributions fund research, publications, and public education.

Coming Soon

Partner

We work with academic institutions, civil society organizations, and responsible technologists. Inquiries are welcome.

Coming Soon

Subscribe

Receive research updates, policy briefs, and announcements from the Institute.