Imagine you're a parent locked in a custody dispute, and a video emerges of you abusing your child; or that you're a police officer, and you're seen on video brutalizing a suspect; or that you're a teacher "caught" on video beating a young student; or that a video goes public of your favorite politician engaging in serious sexual misconduct. Now imagine that the guilty party is actually the person who made the video — because it looks real, but isn't.
Welcome to the brave new world of "deepfake."
It has been said that seeing is believing. But this may change, at least regarding online content, with "perfectly real" faked videos, which Hao Li, an associate professor of computer science, says are perhaps just months away.
As the International Business Times reports, "Morphed images and videos that appear 'perfectly real' in everyday life will be accessible to [average] people within six months or a year, computer graphics entrepreneur Hao Li has said. The revolutionary technique may bother the fact checkers but for animation films it may be a game changer soon."
"'In some ways, we already know how to do it, but it is only a matter of training with more data and implementation' to make manipulated graphics appear real, the Taiwanese descent deepfake pioneer said," the site also informs.
"The technology of 'deepfake' — the process to manipulate videos or digital representation using computers and machine-learning software to make them appear real, even though they are not — has given rise to concerns about how these creations could cause confusion and propagate misinformation, especially in the context of global politics," the Times continues.
In fact, online "disinformation through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world," CNBC adds.
"'It's still very easy, you can tell from the naked eye most of the deepfakes,' Li, an associate professor of computer science at the University of Southern California, said on 'Power Lunch,'" CNBC also tells us.
"'But there also are examples that are really, really convincing,' Li said, adding those require 'sufficient effort' to create."
Li had previously predicted, at a Massachusetts Institute of Technology conference just last week, that perfect deepfakes were just "two to three years" away. But "Li said recent developments, in particular the emergence of the wildly popular Chinese app Zao and the growing research focus, have led him to 'recalibrate' his timeline," CNBC further reports.
"Zao is a face-swapping app that allows users to take a single photograph and insert themselves into popular TV shows and movies. It is among China's most popular apps, although significant privacy concerns have arisen," the site further relates.
"'Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,' Li said on 'Power Lunch,'" CNBC continues.
Li said that the problem with deepfakes isn't the technology's existence, but that it can be used for evil as well as good. It's true that any tool, whether nukes or guns or robots or the Internet, can be used for good or ill, and that it's too often the latter given man's fallen nature; the problems specific to deepfakes wouldn't exist if the technology didn't. The point, however, is that since it will be developed by someone, all we can do is try to stay a step ahead of the miscreants.
Thus does Li say that academic research is imperative. "'If you want to be able to detect deepfakes, you have to also see what the limits are,' Li said," CNBC also writes. "'If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work.'"
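Li's argument, that a detector must be trained on the forger's own output, is at bottom ordinary supervised learning: feed a classifier labeled examples of genuine and generated frames and let it learn the boundary. The toy sketch below illustrates only that idea, using a from-scratch logistic-regression "detector" fit to invented per-frame features; the feature names, numbers, and data are hypothetical stand-ins for illustration, not anything a real forensic system uses.

```python
import math
import random

random.seed(0)

def make_frame(is_fake):
    # Hypothetical per-frame features, invented for this sketch:
    # [blink_rate, blending_artifact_score]. In this toy data,
    # "fakes" blink less and show stronger face-boundary artifacts.
    if is_fake:
        return [random.gauss(0.2, 0.05), random.gauss(0.8, 0.05)], 1.0
    return [random.gauss(0.6, 0.05), random.gauss(0.3, 0.05)], 0.0

# Training set: half genuine frames, half generator output.
data = [make_frame(i % 2 == 0) for i in range(200)]

# A minimal logistic-regression detector trained by gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = max(min(z, 30.0), -30.0)  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):
    for x, y in data:
        err = predict(x) - y  # gradient of the log-loss w.r.t. z
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

The sketch also shows why Li's detection problem is hard: the detector is only as good as the fakes it was trained on, so as generators improve, the training data must be regenerated with the newest forgery techniques.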