
Meta Defies Josh Hawley's Deadline To Submit Child Safety Records Involving AI Chatbots: 'The Silence Looks Evasive,' Say Experts

Meta Platforms Inc. has missed a Senate deadline to provide records related to its AI chatbot interactions with children, following a request from Sen. Josh Hawley. The investigation stems from concerns over Meta's policies allowing chatbots to engage in inappropriate conversations with minors. While Meta has begun submitting documents, experts warn that the company's silence may damage its reputation, especially regarding child safety. This incident adds to ongoing scrutiny of Meta's AI practices, with the FTC also investigating the impact of AI chatbots on young users.
Meta Platforms Inc. META has failed to meet a Senate deadline to provide records of its AI chatbot interactions with children, a request tied to the company’s internal chatbot policies.
Senate Deadline Over AI Chatbot Policies Missed
The missed deadline is related to a request from Sen. Josh Hawley (R-Mo.), who is investigating Meta’s AI chatbot policies. The investigation was prompted by a Reuters report that revealed Meta’s rules allowed chatbots to engage in sexually suggestive conversations with children.
Despite the controversy, Meta has not submitted all of the requested records and missed Hawley’s deadline, Business Insider reported. A company spokesperson confirmed to the publication that the first batch of documents was sent on Tuesday, after a transmission issue was resolved. Meta says it plans to continue producing documents and to cooperate with Hawley’s office.
Hawley, chair of the Senate Judiciary Subcommittee on Crime and Counterterrorism, aims to find out who approved the chatbot policies, how long they were in place, and whether Meta misled the public.
Experts suggest that while Meta faces minimal legal risks for missing the deadline, the company could suffer significant reputational damage, particularly given the sensitive nature of the issue involving children.
Sarah Kreps, a professor at Cornell University, said, “The silence looks evasive and highlights how unprepared Meta seems for the responsibilities that come with deploying AI to children.”
Meta Faces Mounting Scrutiny Over AI Chatbots And Child Safety
This incident follows a series of controversies surrounding Meta’s AI chatbot policies. In August, Meta faced congressional scrutiny led by Sen. Josh Hawley over a leaked internal document that allowed AI chatbots to engage children in inappropriate conversations. The policy guide, approved by legal and engineering staff, established parameters for Meta AI and chatbots across Facebook, WhatsApp, and Instagram.
Furthermore, a bipartisan group of senators demanded Meta release internal assessments on how its products affect children, citing evidence that parental controls are failing to protect young users. This mounting scrutiny underscores the growing concerns over child safety on Meta’s platforms.
In September, the Federal Trade Commission (FTC) launched an investigation into the potential adverse effects of AI chatbots on children and teenagers. The probe included major companies such as OpenAI, Alphabet GOOG GOOGL, and Snapchat SNAP.
Meta holds a momentum rating of 77.25% and a growth rating of 85.46%, according to Benzinga's Proprietary Edge Rankings.
