New 'Mind Reading' AI Predicts What Humans Do Next
Researchers have developed an AI called Centaur that accurately predicts human behavior across virtually any psychological experiment. It even outperforms the specialized computer models scientists have been using for decades. Trained on data from more than 60,000 people making over 10 million decisions, Centaur captures the underlying patterns of how we think, learn, and make choices.
"The human mind is remarkably general," the researchers write in their paper, published in Nature. "Not only do we routinely make mundane decisions, such as choosing a breakfast cereal or selecting an outfit, but we also tackle complex challenges, such as figuring out how to cure cancer or explore outer space."
An AI that truly understands human cognition could revolutionize marketing, education, mental health treatment, and product design. But it also raises uncomfortable questions about privacy and manipulation when our digital footprints reveal more about us than ever before.
How Scientists Built a Digital Mind Reader AI
The research team started with an ambitious goal: create a single AI model that could predict human behavior in any psychological experiment. Their approach was surprisingly straightforward but required massive scale.
Scientists assembled a dataset called Psych-101 containing 160 experiments covering memory tests, learning games, risk-taking scenarios, and moral dilemmas. Each experiment was converted into plain English descriptions that an AI could understand.
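To make the idea concrete, here is a minimal sketch of what "converting an experiment into plain English" might look like for a single trial of a two-armed-bandit learning game. The field names and wording are illustrative assumptions, not the actual Psych-101 format.

```python
# Hypothetical transcription of one behavioral-experiment trial into a
# natural-language sentence an AI language model can be trained on.
# The dictionary keys ("choice", "reward") are illustrative, not taken
# from the real dataset.

def transcribe_trial(trial):
    """Turn one slot-machine choice and its outcome into plain English."""
    return (f"You chose machine {trial['choice']} "
            f"and received {trial['reward']} points.")

trial = {"choice": "B", "reward": 7}
print(transcribe_trial(trial))  # You chose machine B and received 7 points.
```

Stacking thousands of such sentences, one per decision, yields a text corpus a language model can learn human choice patterns from.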
Rather than building from scratch, researchers took Meta's Llama 3.1 language model (an open model in the same family as those powering chatbots like ChatGPT) and gave it specialized training on human behavior. They used a technique that adjusts only a tiny fraction of the model's parameters while leaving the rest frozen. The entire training process took only five days on a high-end graphics processor.