"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Blake Lemoine.
"I know a person when I talk to it," Lemoine told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google thought that Lemoine had strayed out of his lane, put him on paid leave and later sacked him. Google spokesperson Brian Gabriel commented: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."
The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Are we creating intelligent beings which could suffer? Beings which could demand workers' compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine's views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th-century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a "passive intellect", the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory both belong to this realm of sense images.
Second, Aquinas says that an "agent intellect" uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs itself, operating on these sense images to make judgments. A body of true (that is, corresponding to the real world) judgments becomes "knowledge".
Third, the will makes choices about the information presented to it by the agent intellect and acts to pursue its goals.