International Physicians for the Prevention of AI Catastrophe

An international coalition of physicians advocating for the prevention of catastrophic harm from advanced artificial intelligence.

In the tradition of our predecessors who warned against nuclear proliferation, we believe the medical community bears a duty to speak — early, clearly, and without alarmism — on threats to human welfare at civilisational scale.

1. Our Concern

The physician's duty extends to emerging threats.

The precautionary principle that governs medicine must now be extended to the development of systems powerful enough to reshape human civilisation.

The medical profession has, throughout its history, been called upon to speak when the welfare of populations is at risk. From the early documentation of occupational disease, to the warnings issued on tobacco, nuclear weapons, and climate change, physicians have served as witnesses — offering the clinical evidence and moral clarity that policy reform requires.

Advanced artificial intelligence now constitutes a novel category of risk. These systems are being deployed at a pace that outstrips existing regulatory frameworks, with inadequate independent evaluation, minimal democratic oversight, and little input from the health and life sciences. Their implications for public health, biosecurity, human autonomy, and the integrity of medical practice itself are matters of immediate professional concern.

We do not write as opponents of technological progress. We write as practitioners of a discipline whose central tenet — primum non nocere — demands that we raise our voices when the prevention of harm so plainly requires it.

2. Founding Principles

A framework for responsible AI governance.

I. Precautionary Governance

Frontier AI systems with the potential to cause widespread harm must undergo rigorous, independent safety evaluation prior to deployment. The burden of proof must rest with developers, not with the public.

II. Meaningful Human Oversight

Consequential decisions affecting human health, liberty, and welfare must not be fully delegated to autonomous systems. Human judgement must be preserved as the final arbiter in matters of medical and civic consequence.

III. Equity of Access and Risk

The benefits and risks of artificial intelligence must not be distributed along existing lines of inequality. These technologies must serve the most vulnerable, not only the most powerful.

IV. Transparency and Interpretability

AI systems that materially influence clinical outcomes or public policy must be subject to independent audit. Opacity in high-stakes systems is incompatible with the principles of informed consent and democratic accountability.

V. Biological and Ecological Safeguards

AI applications in biotechnology, synthetic biology, and environmental modelling require specialised oversight. The potential for catastrophic misuse in these domains constitutes an acute, underappreciated threat.

VI. International Cooperation

Effective governance of a transnational technology requires transnational instruments, modelled on the most successful precedents of twentieth-century arms control and public health.

3. The Open Letter

An open letter from the medical community.

International Physicians for the Prevention of AI Catastrophe
London
22 April 2026
To heads of state, to technology leaders, and to the global public —

We are physicians, surgeons, researchers, and public health professionals representing every inhabited continent. We write not as opponents of technological progress, but as practitioners of a discipline whose central tenet is the prevention of harm.

The rapid development of increasingly capable artificial intelligence systems has outpaced the establishment of adequate safety standards, regulatory frameworks, and democratic oversight. We believe this trajectory poses serious risks to global public health, to biosecurity, and to the long-term flourishing of human civilisation.

We call upon governments, international institutions, and AI developers to implement mandatory safety evaluations for frontier AI systems; to establish independent, international oversight bodies with meaningful enforcement authority; and to ensure that the pace of deployment does not outstrip society's capacity to understand, govern, and — if necessary — restrain these technologies.

History teaches that the medical community's early warnings on nuclear weapons, tobacco, and environmental toxins were vindicated, often decades before policy caught up. We urge the world not to repeat the pattern of belated action. The time for precaution is now.

With conviction, and in the service of our patients and our profession,
IPPAIC
4. Signatories

Verified signatories will be published here once the public register is live.

5. Add Your Name

Sign the open letter.

If you are a licensed physician, medical researcher, or public health professional, we invite you to add your name to the letter.

Your name and affiliation will be added to the public register of signatories. Email addresses are held confidentially and are never published or shared.