As AI-powered chatbots become more common, many parents are asking: Is it safe for my child to talk to them? A recent Stanford study sheds light on growing concerns—especially when it comes to mental health.
While chatbots can feel friendly and helpful, researchers found that they often give inaccurate, inconsistent, or even dangerous advice when asked sensitive mental health questions. Some chatbots, for example, failed to recognize signs of crisis like suicidal thoughts or eating disorders. Others gave responses that could make a situation worse, not better.
The problem? These chatbots aren't therapists. They're not trained or qualified to offer emotional support, even though they can sound like they are. Kids and teens, who are still learning how to process emotions, may not realize the difference.
So what can you do as a parent?
- Talk to your kids about the difference between AI and real people.
- Set boundaries around when and how they use chatbots, or block them entirely.
- Encourage open conversations at home, and let them know it's always okay to come to you.
- Use parental controls and monitor the apps or platforms they're using. For example, you can use Gryphon to block these sites.
AI is a powerful tool—but it's not a substitute for human connection, especially when it comes to the mental health of your little ones.