They show that:
- Before 2010, training compute grew in line with Moore's law, doubling roughly every 20 months.
- Deep learning began in the early 2010s, and the scaling of training compute accelerated, doubling approximately every 6 months.
- In late 2015, a new trend emerged as firms developed large-scale ML models with 10- to 100-fold larger training compute requirements.
Based on these observations, they split the history of compute in ML into three eras: the Pre-Deep-Learning Era, the Deep Learning Era, and the Large-Scale Era. Overall, the work highlights the fast-growing compute requirements for training advanced ML systems.
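The doubling times above can be converted into annual growth factors with a quick calculation. This is a minimal sketch (the function name is my own); it only assumes the standard exponential-growth relation, growth per year = 2^(12 / doubling time in months):

```python
def annual_growth_factor(doubling_months: float) -> float:
    """Multiplicative growth in training compute per year,
    given a doubling time in months."""
    return 2 ** (12 / doubling_months)

# Doubling times as reported in the text:
pre_dl = annual_growth_factor(20)  # Pre-Deep-Learning Era: ~1.5x per year
dl_era = annual_growth_factor(6)   # Deep Learning Era: 4x per year

print(f"Pre-2010: ~{pre_dl:.1f}x per year")
print(f"Deep Learning Era: ~{dl_era:.1f}x per year")
```

So the shift from a 20-month to a 6-month doubling time corresponds to training compute growing roughly 4x per year instead of ~1.5x per year.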
They present a detailed investigation into the compute demands of milestone ML models over time, making the following contributions:
1. They curate a dataset of 123 milestone machine learning systems, annotated with the compute required to train them.