The time is July 1969. President Richard Nixon hurriedly takes his seat in the Oval Office, perched in front of an American flag and a flag bearing the presidential seal. The president is clutching a handful of papers laying out a speech written for him by Bill Safire, a senior aide.
“Good evening, my fellow Americans. Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace,” Nixon announces, glancing stiffly down at the text of the speech.
Just days earlier, the Apollo 11 mission was launched from NASA’s Kennedy Space Center at Cape Canaveral, Florida. The Nixon White House had hastily convened a press conference to deliver news of the worst possible outcome: that equipment transporting astronauts Neil Armstrong and Buzz Aldrin to the moon had fatally malfunctioned, leaving them permanently stranded on the lunar surface with no way home.
Of course, this press conference never actually happened, and Nixon never had to deliver that fateful speech. But a new art installation from the Massachusetts Institute of Technology uses artificial intelligence to demonstrate how recent technological advancements can, in a sense, appear to rewrite history.
Through the use of deepfakes, MIT’s Center for Advanced Virtuality was able to imagine with stunning authenticity what such a speech would have looked and sounded like. Safire, Nixon’s speechwriter, did in fact prepare such a speech as a contingency, although the president never had to deliver it. But, with MIT’s help, Nixon might as well have.
And that is what worries participants in the current debate over disinformation. While previous attempts to doctor videos have relied on misleading quotes taken out of context or more obvious forgeries, artificial intelligence is making these manipulations far more convincing.
“We hope that our work will spark critical awareness among the public,” said Francesca Panetta, director of the project, titled “In Event of Moon Disaster,” in a press release. “We want them to be alert to what is possible with today’s technology, to explore their own susceptibility, and to be ready to question what they see and hear as we enter a future fraught with challenges over the question of truth.”
To create the video, MIT employed “deep learning techniques” and hired a voice actor to help re-create Nixon’s voice. Ukraine-based company Respeecher helped stitch together a voice resembling Nixon’s distinctive timbre, and Israeli company Canny AI used “video dialogue replacement techniques” to re-create the movement of Nixon’s mouth and lips.
The video won the International Documentary Film Festival Amsterdam’s 2019 special jury award for creative technology in digital storytelling. A digital version of the full project is expected to be released in spring 2020.
Disinformation and deepfakes in particular are especially acute challenges in the run-up to next year’s presidential election. According to a data security report prepared by Experian, “the technology required for these audio-generated attacks has made transformative progress as a result of breakthroughs in how algorithms can be used to process data.”
The report listed dissemination of deepfakes as a top area of concern for data security in 2020. Experian predicted that these videos will be used “to foster real disruption—both in financial markets and in politics.”
Already, forged videos have had a substantial effect on the news cycle. And critics have argued that social media platforms are wholly unprepared to deal with the onslaught of confusion that could result from viral deepfake videos.
In May, a recording of House Speaker Nancy Pelosi—slowed down to make it appear as if she were drunk—began to circulate widely on social media. The video was not altered using the same sophisticated techniques that are applied to deepfakes, but the result was passably convincing.
The president and his personal attorney Rudy Giuliani shared selectively edited excerpts of Pelosi’s speech, and the so-called cheapfake itself was viewed millions of times. Facebook opted to leave the video on its platform but stopped prioritizing it in users’ feeds.
“We think it’s important for people to make their own informed choice for what to believe,” Monika Bickert, a Facebook vice president, said on CNN at the time. “Our job is to make sure we are getting them accurate information.”
Americans themselves are expressing concerns about the threat of misinformation in today’s frenzied media environment. A Pew Research Center survey released in June found that half of Americans think made-up information is a “very big problem.” Almost 70 percent of Americans said it affected confidence in government, and more than 50 percent said it affected people’s trust in each other.
Among their greatest concerns, 90 percent of Americans said that altered videos or images cause at least some confusion about basic facts, the second-highest ranking problem measured in the report, just behind “made-up” news.