So he helped found a research nonprofit, OpenAI, to help cut a path to "safe" artificial general intelligence, as opposed to machines that pop our civilization like a pimple. Yes, Musk's very public fears may distract from other more real problems in AI. But OpenAI just took a big step toward robots that better integrate into our world by not, well, breaking everything they pick up.
OpenAI researchers have built a system in which a simulated robotic hand learns to manipulate a block through trial and error, then seamlessly transfers that knowledge to a robotic hand in the real world. Incredibly, the system ends up "inventing" characteristic grasps that humans already commonly use to handle objects. Not in a quest to pop us like pimples—to be clear.
The researchers' trick is a technique called reinforcement learning. In a simulation, a hand, powered by a neural network, is free to experiment with different ways to grasp and fiddle with a block. "It's just doing random things and failing miserably all the time," says OpenAI engineer Matthias Plappert. "Then what we do is we give it a reward whenever it does something that slightly moves it toward the goal it actually wants to achieve, which is rotating the block."