TL;DR
UK ministers are examining tougher regulation of AI chatbots following concerns about potential harm to children. Technology Secretary Liz Kendall has indicated the government will act to fill gaps in the Online Safety Act, with new legislation to follow if required.
Government Signals Regulatory Intent
Technology Secretary Liz Kendall has told MPs she is “especially worried” about the potential risk to children who form unhealthy relationships with generative AI chatbots. Speaking to the House of Commons science and technology select committee, she revealed that government officials had informed her some AI applications were not covered by the Online Safety Act.
“On the thing that I am especially worried at the moment about, these AI chatbots, I will act to fill these gaps and if that requires legislation that is what we will do,” Kendall told the committee.
The Online Safety Act, which became law in 2023 and began to be enforced this year, requires websites to verify users’ ages before they can access adult content and gives regulators oversight of how companies handle material that is illegal or harmful to children.
Tragedy Focuses Attention
The announcement comes amid heightened concern about AI chatbot safety following the suicide last year of Sewell Setzer III, a 14-year-old whose death was linked by his mother to his relationship with an online chatbot based on a Game of Thrones character.
Ofcom, the communications regulator, says it has powers over AI chatbots that are part of “user-to-user services”—social media sites that allow people to share content, including user-generated chatbots. However, concerns remain that some major platforms may not be fully covered.
Kendall has asked Ofcom to clarify its expectations for covered chatbots and announced plans for a public information campaign in the new year.
Looking Forward
The government appears to be taking a targeted approach rather than pursuing comprehensive new legislation. Kendall indicated she is “thinking about it more in terms of specific areas where we may need to act rather than a big, all-encompassing bill.”
Ofcom chief executive Dame Melanie Dawes has already been in discussions with US tech groups about how new AI models fall under existing legislation, with the regulator pushing for built-in age verification to prevent children’s exposure to harmful AI content.
For businesses deploying AI chatbots, the message is clear: regulatory scrutiny is intensifying, and child safety measures should be a priority consideration in product development.
Source: Financial Times