TL;DR
Major AI companies including Anthropic, OpenAI, xAI and Meta are racing to develop superintelligence without robust safety strategies, according to a new study from the Future of Life Institute. The independent expert panel found current practices fall “far short of emerging global standards.”
A Regulatory Gap
The study comes amid heightened public concern about AI’s societal impact, following cases in which AI chatbots were linked to incidents of suicide and self-harm. Despite these warnings, the AI race shows no signs of slowing, with major tech companies committing hundreds of billions of dollars to expanding their machine learning capabilities.
“Despite recent uproar over AI-powered hacking and AI driving people to psychosis and self-harm, US AI companies remain less regulated than restaurants and continue lobbying against binding safety standards,” said Max Tegmark, MIT professor and president of the Future of Life Institute.
Expert Calls for Action
In October, a group including renowned AI scientists Geoffrey Hinton and Yoshua Bengio called for a prohibition on developing superintelligent AI until there is broad scientific consensus that it can be done safely and controllably, along with strong public buy-in. The Future of Life Institute, founded in 2014 with early support from Elon Musk, has consistently raised concerns about the risks intelligent machines pose to humanity.
Looking Forward
For UK businesses adopting AI tools, this study underscores the importance of due diligence when selecting AI providers. As regulatory frameworks evolve globally, organisations should consider not just AI capabilities but also the safety practices and governance structures of their technology partners. The gap between AI development speed and safety measures remains a significant concern for the industry.
Source: Reuters