AI Safety Institute Debuts with Big-Name Backers and a Censorship Agenda

Common Sense Media's Youth AI Safety Institute arrived at the Danish Parliament this week, and the guest list is stacked with people who think you can't be trusted to speak freely online.
Hillary Clinton, Ursula von der Leyen, former Biden Surgeon General Vivek Murthy, Ofcom chief Melanie Dawes, and the head of an organization that wants to break end-to-end encryption are all gathering at Christiansborg Palace in Copenhagen to announce what they'd like to do next about AI and children.
The "next" part is where it gets concerning. The Youth AI Safety Institute, launched by Common Sense Media on May 5, says it will "complement efforts by regulators and policymakers to translate frameworks such as the EU AI Act, the Digital Services Act, and the UK Online Safety Act into practical protections for child-safe AI."
Those three laws represent the most aggressive government-directed speech suppression regimes currently operating in the Western world. The Institute isn't questioning them; it wants to help implement them and push them further.
The summit, titled "Keeping Our Children and Families Safe in the AI Era," is co-hosted by Common Sense Media, Save the Children Denmark, and Margrethe Vestager, who spent years as the European Commission's executive vice president building the regulatory architecture that now lets EU officials order platforms to delete content.
More than 200 policymakers, tech executives, and civil society figures are expected. King Frederik X of Denmark is giving the opening address. The Duchess of Edinburgh will attend. Danish Prime Minister Mette Frederiksen is on the bill.
And so is Pinterest CEO Bill Ready, whose company helped pay for the Institute's creation.
Who's Funding This?
The Youth AI Safety Institute is bankrolled by a mix of philanthropic donors and deep industry money.
The industry funders are Anthropic, the OpenAI Foundation, and Pinterest. All three make AI products that the Institute will evaluate and rate. The Institute says it "maintains complete editorial independence over published results." But the structural incentive is obvious enough to name. Companies are funding an organization that will publish safety ratings of their competitors, define what "safe" means, and push governments to enforce those definitions through law.
John Giannandrea, a former senior AI executive at both Apple and Google, sits on the Institute's Board of Advisors. So does Murthy, who has publicly advocated for digital ID systems to combat online "misinformation" and worked directly with Big Tech companies to target speech the government classified as false during the Biden administration.