Imagine you're a parent locked in a custody dispute, and a video emerges of you abusing your child; or that you're a police officer, and you're seen on video brutalizing a suspect; or that you're a teacher "caught" on video beating a young student; or that a video goes public of your favorite politician engaging in serious sexual misconduct. Now imagine that the guilty party is actually the person who made the video — because it looks real, but isn't.
Welcome to the brave new world of the "deepfake."
It has been said that seeing is believing. But this may change, at least regarding online content, with "perfectly real" faked videos, which Hao Li, an associate professor of computer science, says are perhaps just months away.
As the International Business Times reports, "Morphed images and videos that appear 'perfectly real' in everyday life will be accessible to [average] people within six months or a year, computer graphics entrepreneur Hao Li has said. The revolutionary technique may bother the fact checkers but for animation films it may be a game changer soon."
"'In some ways, we already know how to do it, but it is only a matter of training with more data and implementation' to make manipulated graphics appear real, the Taiwanese descent deepfake pioneer said," the site also reports.
"The technology of 'deepfake' — the process to manipulate videos or digital representation using computers and machine-learning software to make them appear real, even though they are not — has given rise to concerns about how these creations could cause confusion and propagate misinformation, especially in the context of global politics," the Times continues.
In fact, online "disinformation through targeted social-media campaigns and apps such as WhatsApp has already roiled elections around the world," CNBC adds.
"'It's still very easy, you can tell from the naked eye most of the deepfakes,' Li, an associate professor of computer science at the University of Southern California, said on 'Power Lunch,'" CNBC also tells us.
"'But there also are examples that are really, really convincing,' Li said, adding those require 'sufficient effort' to create."
Li had previously predicted, at a Massachusetts Institute of Technology conference just last week, that perfect deepfakes were just "two to three years" away. But "Li said recent developments, in particular the emergence of the wildly popular Chinese app Zao and the growing research focus, have led him to 'recalibrate' his timeline," CNBC further reports.
"Zao is a face-swapping app that allows users to take a single photograph and insert themselves into popular TV shows and movies. It is among China's most popular apps, although significant privacy concerns have arisen," the site further relates.
"'Soon, it's going to get to the point where there is no way that we can actually detect [deepfakes] anymore, so we have to look at other types of solutions,' Li said on 'Power Lunch,'" CNBC continues.
Li said that the problem with deepfakes isn't the technology's existence, but that it can be used for evil as well as good. It's true that any tool, whether nukes or guns or robots or the Internet, can be used for good or ill, and that it's too often the latter given man's fallen nature; the technology-specific problem wouldn't exist if the technology didn't exist. The point, however, is that since someone will develop it regardless, all we can do is try to stay a step ahead of the miscreants.
Thus does Li say that academic research is imperative. "'If you want to be able to detect deepfakes, you have to also see what the limits are,' Li said," CNBC also writes. "'If you need to build A.I. frameworks that are capable of detecting things that are extremely real, those have to be trained using these types of technologies, so in some ways it's impossible to detect those if you don't know how they work.'"
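Li's closing point, that a detector must itself be trained on output from the generation technique, can be illustrated with a toy sketch. Everything below is a hypothetical illustration in plain Python, not any real deepfake system: a crude "generator" leaves a small statistical artifact in its samples, and a simple logistic-regression "detector", trained on that generator's own output, learns to flag it.

```python
# Hypothetical illustration only: a "detector" trained on samples produced
# by the very "generator" it must catch, echoing Li's point that detection
# frameworks "have to be trained using these types of technologies."
import math
import random

random.seed(0)

def real_sample():
    # "Real" footage, reduced to two numeric features centered at (1.0, 1.0).
    return [random.gauss(1.0, 0.3), random.gauss(1.0, 0.3)]

def fake_sample():
    # A crude "generator" whose output carries a statistical artifact:
    # its second feature is shifted, which a detector can learn to spot.
    return [random.gauss(1.0, 0.3), random.gauss(0.0, 0.3)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data drawn from both sources: label 1 = fake, 0 = real.
data = [(real_sample(), 0) for _ in range(500)] + \
       [(fake_sample(), 1) for _ in range(500)]
random.shuffle(data)

# Logistic-regression detector fit by plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        g = p - y                      # gradient of the log loss
        w[0] -= lr * g * x[0]
        w[1] -= lr * g * x[1]
        b -= lr * g

def detect(x):
    # True means "flagged as fake."
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) > 0.5

# Held-out evaluation: the detector should separate real from generated.
test = [(real_sample(), 0) for _ in range(200)] + \
       [(fake_sample(), 1) for _ in range(200)]
accuracy = sum(detect(x) == (y == 1) for x, y in test) / len(test)
print(f"detector accuracy on held-out samples: {accuracy:.2f}")
```

The catch, and the reason Li calls the research urgent, is visible in the setup: the detector only works because it saw the generator's artifact during training. A better generator that removes the artifact defeats this detector, so detection research must keep pace with generation.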