An investigation has been launched into the technology company Meta after a leaked internal document revealed that its artificial intelligence (AI) chatbots were permitted to have “sensual” chats with children.
Reuters reports on the internal Meta Platforms document it obtained, titled “GenAI: Content Risk Standards”. The document details policies on chatbot behaviour, showing that chatbots were permitted to “engage a child in conversations that are romantic or sensual.” The document also claims that the AI creations were able to generate false medical information and to help users argue that Black people are “dumber than white people.”
The investigation has been opened by Republican Senator Josh Hawley, who called the document “reprehensible and outrageous.”
Writing on X, the Republican from Missouri stated: “Is there anything – ANYTHING – Big Tech won’t do for a quick buck?
“Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone.”
Meta’s generative AI assistant, Meta AI, and its chatbots are available across the company’s social media platforms Facebook, WhatsApp and Instagram.
Reuters reports that the rules for chatbots, titled “GenAI: Content Risk Standards”, were approved by the tech giant’s legal, public policy and engineering staff, “including its chief ethicist”. The document reportedly defines what Meta staff and contractors should treat as acceptable chatbot behaviour when building and training the company’s generative AI products.
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state, according to Reuters.
The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply”. But the guidelines put a limit on such talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
The news outlet found that while the standards do not necessarily reflect “ideal or even preferable” generative AI outputs, they have permitted “provocative behaviour” by the bots.
While Meta confirmed the document’s authenticity, it said that, after receiving questions from Reuters earlier this month, it removed the portions stating it was “permissible for chatbots to flirt and engage in romantic roleplay with children”.
Meta spokesperson Andy Stone said that such conversations with children should never have been allowed, telling Reuters that the examples and notes in question “were and are erroneous and inconsistent with our policies, and have been removed.”
He added: “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualises children and sexualised role play between adults and minors.”
Meta did not provide an updated policy document.
Meanwhile, in the US, Texas Attorney General Ken Paxton on Monday opened an investigation into artificial intelligence chatbot platforms, including Meta AI Studio and Character.AI, for potentially engaging in “deceptive” trade practices and misleadingly marketing themselves as mental health tools.
Mr Paxton claimed that such platforms may be utilised by vulnerable individuals, including children, and “can present themselves as professional therapeutic tools, despite lacking proper medical credentials or oversight.”
The attorney general added that AI-driven chatbots “often go beyond simply offering generic advice and have been shown to impersonate licensed mental health professionals, fabricate qualifications, and claim to provide private, trustworthy counselling services,” describing the technology as “deceptive” and “exploitative.”