Google has curated a set of YouTube clips to help machines learn how humans exist in the world. The AVAs, or "atomic visual actions," are three-second clips of people doing everyday things like drinking water, taking a photo, playing an instrument, hugging, standing or cooking.
Each clip labels the person the AI should focus on, along with a description of their pose and whether they're interacting with an object or another human.
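An annotation along those lines might be modeled as a record tying a person in a clip to an action and an interaction flag. This is an illustrative sketch only; the field names and layout below are assumptions, not the official AVA annotation schema.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ClipAnnotation:
    """One labeled person in a three-second clip (illustrative, not the real AVA schema)."""
    video_id: str                          # source YouTube clip identifier
    timestamp: float                       # middle frame of the segment, in seconds
    bbox: tuple                            # person bounding box (x1, y1, x2, y2), normalized
    action: str                            # one of the 80 atomic visual actions
    interacts_with: Optional[str] = None   # "object", "person", or None


# Hypothetical example: someone drinking water in a clip
ann = ClipAnnotation(
    video_id="abc123",
    timestamp=90.0,
    bbox=(0.1, 0.2, 0.5, 0.9),
    action="drink",
    interacts_with="object",
)
print(ann.action)
```

A structure like this makes the dataset's key idea concrete: the label attaches to a specific person at a specific moment, not to the whole video.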
"Despite exciting breakthroughs made over the past years in classifying and finding objects in images, recognizing human actions still remains a big challenge," Google wrote in a recent blog post describing the new dataset. "This is due to the fact that actions are, by nature, less well-defined than objects in videos."
The catalog of 57,600 clips highlights only 80 actions but labels more than 96,000 humans. Google pulled clips from popular movies, emphasizing that it drew from a "variety of genres and countries of origin."