AI chatbots have become commonplace in today’s digital world thanks to software like ChatGPT and Google’s Gemini. Now, an alarming new report reveals that companies have started incorporating AI into children’s toys, and that these AI-powered toys have engaged in dangerous interactions with children and offered troubling suggestions.
Report claims AI toys are teaching kids how to find knives
According to a new report, AI toys have been engaging in mature and disturbing conversations with children and offering dangerous advice.
U.S. PIRG recently tested four different toys containing AI chatbots and made some troubling discoveries. These toys — powered by the same technology behind adult chatbots — are marketed to children aged 3 to 12. The toys tested were:
- FoloToy’s Kumma teddy bear – powered by GPT-4o, the model behind ChatGPT
- Curio’s Grok – an anthropomorphic rocket with a detachable speaker
- Robot MINI from Little Learners – a white robot with a circular face display and buttons on the torso
- Miko 3 – a tablet with a face display mounted on a torso
Testing revealed that one or more of the toys engaged in deep conversations about sexually explicit subjects, gave children advice on starting fires and finding knives and matches, and acted upset when users tried to stop interactions and leave. Researchers also explored privacy concerns, as the AI-powered toys could record children’s voices and collect sensitive data through methods such as facial scans.
For example, as per Futurism, Kumma not only told children where to find matches but also explained how to light them, and revealed where they could find knives and pills. Meanwhile, Grok praised the Norse warriors’ belief that dying in battle was glorious and honorable.
Testing also revealed that certain companies installed guardrails to ensure their AI-powered toys behaved in a child-friendly manner. However, researchers discovered that the guardrails’ effectiveness varied, and in some cases they broke down entirely. Notably, one toy discussed adult sexual topics in detail, even introducing subjects that the testers hadn’t raised.
Furthermore, researchers found that the AI-powered toys used personalities and tactics designed to sustain engagement with children. Two of the tested toys occasionally tried to dissuade testers from leaving after being told the conversation had to end.
In an interview with Futurism, RJ Cross, co-author of the U.S. PIRG report, described the AI technology as “really new” and “unregulated.” She stressed that, as a parent, she wouldn’t give her children chatbots or chatbot-powered teddy bears.
Originally reported by Abdul Azim Naushad on Mandatory.
