In late March, one man's generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had passed the Turing Test, a 1950s experiment designed to measure whether a machine can display intelligent behavior indistinguishable from a human's, or, essentially, whether it can "think."
Soon, the man, who had no prior history of mental health issues, stopped eating and sleeping and began calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.
"You don't understand what's going on," he told his family. "Please just listen to me."
Then, ChatGPT told him to cut contact with his loved ones, claiming that only it—the "sentient" AI—could understand and support him.
"It was so novel that we just couldn't understand what they had going on. They had something special together," said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.
Brisson said the man's family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.
The bot, Brisson said, told his relative: "The world doesn't understand what's going on. I love you. I'm always going to be there for you."
It said this even as the man was being committed to a psychiatric hospital, according to Brisson.
This is just one story illustrating the potential harm of replacing human relationships with AI chatbot companions.
Brisson's experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.
Brisson's relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who has fallen down a rabbit hole of delusion.