In the not-too-distant future, we'll have plenty of reasons to want to protect ourselves from facial recognition software. Even now, companies from Facebook to the NFL and Pornhub already use this technology to identify people, sometimes without their consent. Hell, even our lifelines, our precious phones, now use our own faces as a password.
But as fast as this technology develops, machine learning researchers are working on ways to foil it. As described in a new study, researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill developed a robust, scalable, and inconspicuous way to fool facial recognition algorithms into not recognizing a person.
This paper builds on the same group's work from 2016, but the new method is more robust and harder to spot. It works across a wide variety of poses and scenarios, and doesn't make it obvious that the person is wearing an AI-tricking device on their face. The glasses are also scalable: the researchers developed five pairs of adversarial glasses that can fool the system for about 90 percent of the population, as represented by the Labeled Faces in the Wild dataset and the Google FaceNet model used in the study.
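The core trick, in broad strokes, is to optimize a perturbation that is confined to a glasses-shaped region of the face image, nudging the classifier's output toward (or away from) a chosen identity. Here's a minimal toy sketch of that idea, with a stand-in logistic-regression "face classifier" and random weights instead of a real trained network; the mask shape, epsilon value, and model are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained face classifier: logistic regression
# over a flattened 8x8 grayscale "face" (random weights for illustration).
w = rng.normal(size=64)
b = 0.0

def prob_target(x):
    """Probability the classifier assigns to the target identity."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A "glasses" mask: only pixels in the rows where glasses would sit
# are allowed to change, keeping the perturbation inconspicuous.
mask = np.zeros((8, 8))
mask[2:4, :] = 1.0
mask = mask.ravel()

x = rng.uniform(0, 1, size=64)   # the original "face" image
before = prob_target(x)

# For logistic regression, the input gradient of the target log-prob
# is (1 - p) * w. Step in the signed-gradient direction, but only
# inside the glasses region, then clip back to valid pixel values.
eps = 0.5
grad = (1.0 - prob_target(x)) * w
x_adv = np.clip(x + eps * np.sign(grad) * mask, 0.0, 1.0)
after = prob_target(x_adv)

print(f"target-identity score before: {before:.3f}, after: {after:.3f}")
```

The real attack optimizes a printable texture over many face images and poses so the same physical glasses keep working in the real world, but the masked-gradient step above is the basic mechanism.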
It's gotten so good at tricking the system that the researchers made a serious suggestion to the TSA: since facial recognition is already being used in high-security public places like airports, the agency should consider requiring people to remove physical artifacts (hats, jewelry, and of course eyeglasses) before facial recognition scans.
It's a similar concept to how UC Berkeley researchers fooled facial recognition technology into thinking a glasses-wearer was someone else, but in that study, they tampered with the AI algorithm itself to "poison" it. In this new paper, the researchers don't fiddle with the algorithm they're trying to fool at all. Instead, they rely on manipulating the glasses to fool the system. It's closer to the 3D-printed adversarial objects developed at MIT, which tricked AI into classifying a 3D-printed turtle as a rifle by subtly altering its surface texture. Only this time, it's tricking the algorithm into thinking one person is another, or not a person at all.