"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Blake Lemoine.
"I know a person when I talk to it," Lemoine told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google thought that Lemoine had strayed out of his lane, put him on paid leave, and later sacked him. Google spokesperson Brian Gabriel commented: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."
The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Are we creating intelligent beings that could suffer, or that could demand workers' compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine's views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a "passive intellect", the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory both belong to this realm of sense images.
Second, Aquinas says that an "agent intellect" uses a process called abstraction to make judgments and develop bodies of information. The agent intellect is self-directed and operates on the sense images to make judgments. A body of true judgments — that is, judgments corresponding to the real world — becomes "knowledge".
Third, the will makes choices regarding the information presented to it by the agent intellect, and it acts to pursue its goals.