"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics," said Blake Lemoine.
"I know a person when I talk to it," Lemoine told the Washington Post. "It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn't a person."
Google thought that Lemoine had strayed out of his lane, put him on paid leave, and later sacked him. Google spokesperson Brian Gabriel commented: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims."
The fact is that many people are quite anxious about the growing power of AI. If it could become conscious, might it act independently to preserve its own existence, possibly at the expense of humans? Are we creating intelligent beings which could suffer, or which could even demand workers' compensation for being badly coded? The potential complications are endless.
No wonder Google wanted to hose down the alarming implications of Lemoine's views.
So who is right – Lemoine or Google? Is it time to press the panic button?
Defining consciousness
Most writers on this issue just assume that everyone knows what consciousness is. This is hardly the case. And if we cannot define consciousness, how can we claim AI will achieve it?
Believe it or not, the 13th century philosopher Thomas Aquinas deployed some very useful concepts for discussing AI when he examined the process of human knowledge. Let me describe how he tackled the problem of identifying consciousness.
First, Aquinas asserts the existence of a "passive intellect", the capacity of the intellect to receive data from the five senses. This data can be stored and maintained as sense images in the mind. Imagination and memory are both bound up with these sense images.
Second, Aquinas says that an "agent intellect" uses a process called abstraction to make judgments and develop bodies of information. The agent intellect directs itself, operating on the stored sense images to make judgments. A body of true (that is, corresponding to the real world) judgments becomes "knowledge".
Third, the will makes choices regarding the information presented to it by the agent intellect and it pursues goals in an actionable manner.
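Aquinas's three-stage account can be caricatured as a pipeline: sense data stored as images, abstraction of those images into judgments, and a will that chooses among the judgments. Here is a minimal illustrative sketch of that pipeline; the class and method names (`PassiveIntellect`, `AgentIntellect`, `Will`) are my own hypothetical labels, not Aquinas's terminology or any real AI system:

```python
# Toy sketch of Aquinas's three stages as a pipeline.
# All names here are illustrative, not a real framework.

class PassiveIntellect:
    """Receives data from the senses and stores it as sense images."""
    def __init__(self):
        self.sense_images = []

    def receive(self, datum):
        self.sense_images.append(datum)


class AgentIntellect:
    """Abstracts judgments from the stored sense images."""
    def abstract(self, sense_images):
        # A "judgment" here is just a statement abstracted from a sense image.
        return {f"{kind} is {quality}" for kind, quality in sense_images}


class Will:
    """Chooses among the judgments presented by the agent intellect."""
    def choose(self, judgments):
        # A deterministic pick stands in for genuine choice in this sketch.
        return min(judgments)


passive, agent, will = PassiveIntellect(), AgentIntellect(), Will()
for datum in [("apple", "red"), ("leaf", "green")]:
    passive.receive(datum)          # stage 1: sense images accumulate
knowledge = agent.abstract(passive.sense_images)  # stage 2: judgments
goal = will.choose(knowledge)       # stage 3: the will selects
print(goal)
```

Of course, the whole point of the Aquinas framework is that such a mechanical pipeline captures only the outward shape of the process, not the self-directed understanding behind it.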