Parsley: Nature's Powerful Ally in the Fight Against Cancer and Chronic Disease
Iran's Khamenei Killed & Austin Mass Shooting | PBD #750
'Just Flatten It' Is the West's Answer to War
US particle accelerators turn nuclear waste into electricity, cut radioactive life by 99.7%
Blast Them: A Rutgers Scientist Uses Lasers to Kill Weeds
H100 GPUs that cost $40,000 new are now selling for around $6,000 on eBay, an 85% drop.
We finally know exactly why spider silk is stronger than steel.
She ran out of options at 12. Then her own cells came back to save her.
A cardiovascular revolution is silently unfolding in cardiac intervention labs.
DARPA chooses two to develop insect-size robots for complex jobs like disaster relief...
Multimaterial 3D printer builds fully functional electric motor from scratch in hours
WindRunner: The largest cargo aircraft ever to be built, capable of carrying six Chinooks

GPT-4 can output around 25,000 words, enough for a higher-quality long-form story, whereas GPT-3.5 could only produce a very short one.
GPT-4 scores 1410 on the SAT vs 1260 for GPT-3.5.
GPT-4 scores 161 on the LSAT vs 149 for GPT-3.5.
GPT-4 scores in the 99th percentile on the GRE verbal test (a graduate-school admissions exam) vs the 63rd percentile for GPT-3.5.
GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A large focus of the project was building a deep learning stack that scales predictably: for very large training runs like GPT-4's, extensive model-specific tuning is not feasible, so the team developed infrastructure and optimization methods with very predictable behavior across multiple scales. These improvements allowed them to reliably predict some aspects of GPT-4's performance from smaller models trained using 1,000×–10,000× less compute.
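The idea behind this kind of prediction is that final training loss often follows an approximate power law in compute, so a curve fit to small runs can be extrapolated to a much larger one. The sketch below illustrates the technique only; the (compute, loss) values are made up for the example, and this is not the actual procedure or data from the GPT-4 project.

```python
import numpy as np

# Hypothetical (compute, loss) pairs from small training runs (made-up numbers).
# Assumption: final loss follows a power law L(C) = a * C^(-b),
# so log L is linear in log C and can be fit with a 1-degree polyfit.
compute = np.array([1e18, 1e19, 1e20, 1e21])   # training FLOPs
loss    = np.array([3.20, 2.70, 2.28, 1.92])   # final training loss

# Fit log L = log(a) - b * log(C) in log-log space.
slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(log_a), -slope

# Extrapolate to a run with 1,000x the compute of the largest small run.
predicted = a * (1e24) ** (-b)
print(f"fitted exponent b = {b:.3f}, predicted loss at 1e24 FLOPs = {predicted:.2f}")
```

In practice the fitted quantity and functional form vary (e.g. loss with an irreducible term, or a downstream metric), but the log-log linear fit above captures the core extrapolation step.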