>
Still No Justice for COVID Nursing Home Deaths
How To Make A FREE Drip Irrigation System With An Old 5 Gallon Bucket
Homemade LMNT Electrolyte Drink | ACTUALLY Hydrate Yourself!
Cab-less truck glider leaps autonomously between road and rail
Can Tesla DOJO Chips Pass Nvidia GPUs?
Iron-fortified lumber could be a greener alternative to steel beams
One man, 856 venom hits, and the path to a universal snakebite cure
Dr. McCullough reveals cancer-fighting drug Big Pharma hopes you never hear about…
EXCLUSIVE: Raytheon Whistleblower Who Exposed The Neutrino Earthquake Weapon In Antarctica...
Doctors Say Injecting Gold Into Eyeballs Could Restore Lost Vision
Dark Matter: An 86-lb, 800-hp EV motor by Koenigsegg
Spacetop puts a massive multi-window workspace in front of your eyes
"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Blake Lemoine.
"I know a person when I talk to it," Lemoine told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google thought that Lemoine had strayed out of his lane: it placed him on paid leave and later sacked him. Google spokesperson Brian Gabriel commented: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."
The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Or are we creating intelligent beings which could suffer, or even demand workers' compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine's views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th-century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a "passive intellect", the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory both belong to this realm of sense images.
Second, Aquinas says that an "agent intellect" uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs itself and operates on the sensory imaginations to make judgments. A body of true judgments (that is, judgments corresponding to the real world) becomes "knowledge".
Third, the will makes choices regarding the information presented to it by the agent intellect, and it pursues goals through action.