Updated May 7, 2021
“Deep fakes”—a term that first emerged in 2017 to describe
realistic photo, audio, video, and other forgeries generated
with artificial intelligence (AI) technologies—could present
a variety of national security challenges in the years to
come. As these technologies continue to mature, they could
hold significant implications for congressional oversight,
U.S. defense authorizations and appropriations, and the
regulation of social media platforms.
How Are Deep Fakes Created?
Though definitions vary, deep fakes are most commonly
described as forgeries created using techniques in machine
learning (ML)—a subfield of AI—especially generative
adversarial networks (GANs). In the GAN process, two ML
systems called neural networks are trained in competition
with each other. The first network, or the generator, is
tasked with creating counterfeit data—such as photos, audio
recordings, or video footage—that replicate the properties
of the original data set. The second network, or the
discriminator, is tasked with identifying the counterfeit
data. Based on the results of each iteration, the generator
network adjusts to create increasingly realistic data. The
networks continue to compete—often for thousands or
millions of iterations—until the generator improves its
performance such that the discriminator can no longer
distinguish between real and counterfeit data.
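The adversarial loop described above can be sketched in a deliberately toy form. The code below is illustrative only, not a real GAN: there are no neural networks, the "data" are single numbers drawn near a hypothetical real mean, the generator is one adjustable parameter, and the discriminator is just a running estimate of where real data lie. It shows only the competitive structure: the generator shifts its output toward whatever the discriminator currently treats as real, until the two are hard to tell apart.

```python
import random

random.seed(0)

REAL_MEAN = 4.0  # hypothetical "real" data: numbers clustered around 4.0

def real_sample():
    """Draw one genuine data point."""
    return random.gauss(REAL_MEAN, 0.5)

# Generator: a single parameter it nudges so its output looks real.
gen_param = 0.0

# Discriminator: a learned estimate of where real data live; it would
# label a sample "real" when the sample falls close to that estimate.
disc_estimate = 0.0

for step in range(2000):
    real = real_sample()
    fake = gen_param + random.gauss(0, 0.5)  # generator's counterfeit sample

    # Discriminator update: refine its estimate of "real" from the
    # genuine sample (simple exponential moving average).
    disc_estimate += 0.05 * (real - disc_estimate)

    # Generator update: close the gap the discriminator can exploit by
    # moving toward the region currently judged "real".
    gen_param += 0.05 * (disc_estimate - gen_param)

# After many iterations, counterfeit samples cluster near the real mean,
# so the (toy) discriminator can no longer separate real from fake.
```

In a true GAN both players are neural networks trained by gradient descent on an adversarial loss, and the discriminator is also penalized for accepting fakes; this sketch keeps only the iterate-and-converge dynamic the text describes.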
Though media manipulation is not a new phenomenon, the
use of AI to generate deep fakes is causing concern because
the results are increasingly realistic, rapid to create, and
cheap to make using freely available software and processing
power rented through cloud computing. Thus,
even unskilled operators could download the requisite
software tools and, using publicly available data, create
increasingly convincing counterfeit content...