"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Blake Lemoine.
"I know a person when I talk to it," Lemoine told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google thought that Lemoine had strayed out of his lane, put him on paid leave, and later sacked him. Google spokesperson Brian Gabriel commented: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."
The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Are we creating intelligent beings which could suffer? Could they demand workers' compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine's views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a "passive intellect", the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory both belong to this realm of sense images.
Second, Aquinas says that an "agent intellect" uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs its own activity, operating on the sensory images to make judgments. A body of true judgments — that is, judgments corresponding to the real world — becomes "knowledge".
Third, the will makes choices about the information presented to it by the agent intellect and pursues its goals through action.