An AI-powered children’s toy has exposed more than 50,000 chat logs containing highly sensitive information about children, after researchers discovered that the company behind the product had left its web-based administrative console almost entirely unprotected.
The toy, called Bondu, is a stuffed dinosaur with an AI chat feature that children can talk to. It encourages kids to speak freely, sharing thoughts, feelings, preferences, and daily experiences. But according to security researchers, nearly all of those conversations were accessible to anyone with a Google account.
The issue was uncovered by security researchers Joseph Thacker and Joel Margolis, who began investigating Bondu’s web portal after Thacker’s neighbour mentioned she had pre-ordered one of the toys for her children. The portal was intended to serve two purposes: allowing parents to review their children’s conversations and enabling Bondu staff to monitor system performance and usage.
Within minutes, however, the researchers found that no hacking or exploitation was required. Logging in with an arbitrary Google account granted access to a massive collection of children’s data. What they discovered included:
- Full chat transcripts and summaries of conversations between children and their toys
- Children’s names and birth dates
- Names of family members
- Objectives set by parents for their children
- Pet names children had given their toys
- Personal likes, dislikes, favourite foods, and even dance moves
In total, more than 50,000 conversation logs were exposed.
The researchers stress that they did not retain copies of the data, aside from a limited number of screenshots and a screen recording shared with WIRED to verify their findings. After they alerted the company, Bondu reportedly took the administrative console offline and later relaunched it with proper authentication controls in place.
When contacted by WIRED, Bondu CEO Fateen Anam Rafid said that security fixes “were completed within hours,” followed by a broader security review and the deployment of additional preventative measures. Rafid also stated that the company found “no evidence of access beyond the researchers involved.” Bondu says it has since hired a security firm to help monitor for future security issues.
Despite the swift response, Thacker and Margolis argue that the incident highlights a much larger and unresolved problem: the volume and sensitivity of data being collected and stored by companies selling AI-enabled toys for children.
Even with the security issue now addressed, Bondu, like many similar products, continues to collect detailed personal information about children. It remains unclear how access to this data is restricted internally, how usage is monitored, or how long the information is retained.
Adding to the concern, Bondu appears to rely on Google’s Gemini and OpenAI’s GPT-5 to power its AI functionality, meaning at least some user data may be shared with third-party AI providers. The same is likely true across the growing market of AI-enabled toys.
Thacker says the experience has fundamentally changed his view of such products. Prior to the research, he had considered purchasing AI toys for his own children. Now, he says, he doesn’t want them in his home at all.
“They’re a privacy nightmare,” he said.
