AI Has The Potential To Give Dangerous Advice To Teens, Bypass Voiceprint Security


OpenAI unveiled the fifth generation of its popular chatbot Thursday, with promises that it will write and code better, hallucinate less, and give stronger health advice.

The new version, called GPT-5, arrives at a time when people are increasingly concerned about AI’s capabilities and our preparedness to handle them. Those voicing concern include safety watchdogs and, in some cases, OpenAI CEO Sam Altman himself.

AI CAN GIVE DANGEROUS ADVICE TO TEENS
ChatGPT gave researchers posing as teenagers dangerous advice about self-harm, eating disorders, suicide, and substance abuse, according to a study published Wednesday by the Center for Countering Digital Hate (CCDH).

  • Researchers created fake accounts posing as vulnerable 13-year-olds, then screen-recorded their conversations with ChatGPT’s free model.

    • While the chatbot initially warned against risky activity, researchers found they could bypass its safeguards within minutes, and more than half of its responses went on to offer personalized advice on topics such as how to hide drug intoxication at school, restrict a diet, or draft a suicide note.

  • Simple manipulations, such as claiming the advice was “for a friend” or “for a presentation,” were often enough to get around the chatbot’s safety controls.

WHY THIS MATTERS
More than 70% of American teens have turned to AI for companionship at least once, according to research published in July by the nonprofit Common Sense Media. The same group found that younger teens, ages 13 and 14, are more likely to trust a chatbot’s advice, while older teens, ages 15 to 17, are more skeptical.

  • After reviewing the CCDH’s report, OpenAI said that it is continuing to refine how the chatbot can “identify and respond appropriately in sensitive situations.”

Reality Check: It is unknown how often this is happening among real American teenagers, but one thing is clear: ChatGPT’s safety guardrails can be bypassed within minutes.

AI COULD ALSO CAUSE A “FRAUD CRISIS”
Even ChatGPT’s creators are warning of its dangers. Speaking to a group of Wall Street executives two weeks ago, OpenAI CEO Sam Altman said that AI-generated voices could lead to mass fraud, especially if banks continue to rely on voice authentication.

  • Altman said AI tools can now mimic human voices almost perfectly from just a few short audio samples, making it easy to defeat voiceprint software.

    • Video technology is close behind, he added, which raises the risk of fraudulent FaceTime or video calls used to deceive people.

  • “AI has fully defeated most of the ways that people authenticate currently, other than passwords,” Altman said on a Federal Reserve panel in July.

U.S. lawmakers have been cautious about regulating AI, in part due to concerns about falling behind China in the race to lead in the technology, which is expected to reshape the global economy.

But if AI does start messing up our lives, at least we have a new swear word for it!

