In the tradition of our predecessors who warned against nuclear proliferation, we believe the medical community bears a duty to speak — early, clearly, and without alarmism — on threats to human welfare at civilisational scale.
The medical profession has, throughout its history, been called upon to speak when the welfare of populations is at risk. From the early documentation of occupational disease to the warnings issued on tobacco, nuclear weapons, and climate change, physicians have served as witnesses, offering the clinical evidence and moral clarity that policy reform requires.
Advanced artificial intelligence now constitutes a novel category of risk. These systems are being deployed at a pace that outstrips existing regulatory frameworks, with inadequate independent evaluation, minimal democratic oversight, and little input from the health and life sciences. Their implications for public health, biosecurity, human autonomy, and the integrity of medical practice itself are matters of immediate professional concern.
We do not write as opponents of technological progress. We write as practitioners of a discipline whose central tenet — primum non nocere — demands that we raise our voices when the prevention of harm so plainly requires it.
Frontier AI systems with the potential to cause widespread harm must undergo rigorous, independent safety evaluation prior to deployment. The burden of proof must rest with developers, not with the public.
Consequential decisions affecting human health, liberty, and welfare must not be fully delegated to autonomous systems. Human judgement must be preserved as the final arbiter in matters of medical and civic consequence.
The benefits and risks of artificial intelligence must not be distributed along existing lines of inequality. These technologies must serve the most vulnerable, not only the most powerful.
AI systems that materially influence clinical outcomes or public policy must be subject to independent audit. Opacity in high-stakes systems is incompatible with the principles of informed consent and democratic accountability.
AI applications in biotechnology, synthetic biology, and environmental modelling require specialised oversight. The potential for catastrophic misuse in these domains constitutes an acute, underappreciated threat.
Effective governance of a transnational technology requires transnational instruments, modelled on the most successful precedents of twentieth-century arms control and public health.
We are physicians, surgeons, researchers, and public health professionals representing every inhabited continent.
The rapid development of increasingly capable artificial intelligence systems has outpaced the establishment of adequate safety standards, regulatory frameworks, and democratic oversight. We believe this trajectory poses serious risks to global public health, to biosecurity, and to the long-term flourishing of human civilisation.
We call upon governments, international institutions, and AI developers to implement mandatory safety evaluations for frontier AI systems; to establish independent, international oversight bodies with meaningful enforcement authority; and to ensure that the pace of deployment does not outstrip society's capacity to understand, govern, and — if necessary — restrain these technologies.
History teaches that the medical community's early warnings on nuclear weapons, tobacco, and environmental toxins were vindicated, often decades before policy caught up. We urge the world not to repeat the pattern of belated action. The time for precaution is now.
Verified signatories will be published here once the public register is live.
If you are a licensed physician, medical researcher, or public health professional, we invite you to add your name to the letter.