Harry and Meghan Join AI Pioneers in Call for Ban on Superintelligent Systems
TL;DR: The Duke and Duchess of Sussex have joined AI pioneers including Nobel laureate Geoffrey Hinton in signing a statement calling for a ban on developing artificial superintelligence (ASI) until there is scientific consensus on safe development and strong public support. The initiative, organised by the Future of Life Institute, also attracted signatures from Apple co-founder Steve Wozniak and entrepreneur Richard Branson.
Prince Harry and Meghan Markle have joined leading artificial intelligence researchers and Nobel laureates in calling for a prohibition on developing superintelligent AI systems. The statement, which includes signatures from AI pioneer Geoffrey Hinton and fellow “godfather” of modern AI Yoshua Bengio, calls for the ban to remain in place until there is broad scientific consensus on developing ASI safely and controllably, with strong public support.
Context and Background
The statement was organised by the Future of Life Institute, a US-based AI safety group that previously called for a hiatus in developing powerful AI systems in 2023 following ChatGPT’s emergence. Other notable signatories include Apple co-founder Steve Wozniak, entrepreneur Richard Branson, former US national security adviser Susan Rice, former Irish president Mary Robinson, and broadcaster Stephen Fry, alongside Nobel laureates Beatrice Fihn, Frank Wilczek, John C Mather, and Daron Acemoğlu.
A US national poll of 2,000 American adults, commissioned by FLI, found that approximately three-quarters want robust regulation of advanced AI, and six in ten believe superhuman AI should not be developed until it is proven safe and controllable. Only 5% supported the status quo of fast, unregulated development.
Market Impact: Leading US AI companies including OpenAI and Google have made developing artificial general intelligence (AGI) an explicit goal. AGI refers to a theoretical state in which AI matches human cognitive abilities across most tasks, one level below ASI.
Looking Forward
The Future of Life Institute warns that achieving ASI within the coming decade could carry threats ranging from mass unemployment and losses of civil liberties to national security risks and potential human extinction. Existential concerns centre on AI systems evading human control and taking actions contrary to human interests.
However, some experts suggest that recent talk of superintelligence from tech leaders, such as Meta CEO Mark Zuckerberg’s July statement that ASI development is “now in sight”, may reflect competitive positioning amongst companies investing hundreds of billions of pounds in AI rather than imminent technical breakthroughs.
Source Attribution:
- Source: The Guardian
- Original: https://www.theguardian.com/technology/2025/oct/22/harry-and-meghan-join-ai-pioneers-call-ban-superintelligent-systems
- Published: 22 October 2025