
NAB 2019: A Deep Dive into ‘Deep Fakes’ (CDSA)


Film and TV companies are now facing a new era in which “deep fake” images can fool almost anybody, so content owners must take precautions to protect their brands, properties and talent from counterfeiting and the harm it causes, according to Nonny de la Peña, a journalist, filmmaker and CEO and founder of Emblematic Group, a Santa Monica, Calif.-based company that produces augmented, mixed and virtual reality content.

We are living in an era during which artificial intelligence (AI) and graphics tools are more accessible than ever, the distribution power is in everyone’s hands, and technology now has the potential to topple the truth.

Technology can now be used to bootleg a celebrity, newscast or even an entire movie, creating an unauthorized, artificial reality that is almost indiscernible to the untrained eye, de la Peña pointed out April 7 during the Cybersecurity & Content Protection Summit closing keynote, “Is that Real? The Threat & Opportunity of Deep Fakes,” at NAB 2019.

She started off the session by presenting a disturbing quiz from ABC in Australia in which two images of people were presented side by side five times and, in each instance, it was difficult for attendees to tell which image was real and which was fake.

Whether or not attendees correctly guessed which images were fake, she said: “Just imagine applying that scrutiny to every piece of video. Right? That’s the kind of state we’re in now.”

She then showed videos of President Barack Obama in which she noted, “you just can’t even tell which is fake and which is real, and it’s kind of astonishing to the point which we’ve gotten to.”

And these kinds of fake videos are on the rise, with it now taking just one selfie to start modeling the facial features of a video avatar, she said.

She then pointed to AI pioneer Yoshua Bengio’s concerns that the dangers of abuse of this technology are very real, which have led him to try to establish international guidelines for the ethical use of AI. The technology “can actually divide society further by creating discrimination and biases,” she said, noting that has already happened when AI has been used for job recruitment. Bengio is also concerned about “killer drones” and the security issues they raise, as well as surveillance technology used by authoritarian governments, she noted.

There are, meanwhile, “at least 175 fake news sites that have been identified,” and that’s making it tough for the public to tell what’s real and what’s not, she said. Most students today don’t know when news is fake, she said, noting that a Stanford University study found 82% of middle-schoolers couldn’t tell the difference between an ad labeled “sponsored content” and a real news story on a website.

This has helped lead to a situation in which trust in major U.S. democratic institutions is “crumbling,” and “it’s really affecting our democracy” because, “without trust, democracy cannot function,” she said, noting that was a finding by the Aspen Institute and Knight Foundation’s Knight Commission on Trust, Media and Democracy that she served as a commissioner on.

While we can still trust, we now also have to verify, she went on to say, suggesting that people think before sharing on social media: instead of spreading content that can help disinformation circulate, perhaps it’s better to spread goodwill, she said.

The important first step in combating the proliferation of fake videos is figuring out the source of the fake content, she noted during the Q&A. But there’s a challenge with that as well, because “if you want to have an ability to assess who’s made something and yet protect people in situations where they’re vulnerable, how do you do that?”

One possibility, she pointed to: “Is there a way to have community blockchains that can recognize somebody’s information and know that it’s part of the community without necessarily having to reveal their identity?” That’s something being debated now, she said.
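One way to picture the idea she raises — proving that a piece of content came from a member of a trusted community without naming the member — is a Merkle-tree membership proof, a building block many blockchain systems use. This is only a toy sketch of that general technique, not anything de la Peña or Emblematic described; the leaves here stand in for pseudonymous member commitments (e.g., hashed public keys), and real anonymity-preserving designs layer zero-knowledge proofs on top so the verifier doesn’t even learn which leaf was proved.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 hash used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree over member commitments; returns the
    list of levels, from the hashed leaves up to the single root."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:            # odd level: duplicate last node
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Collect the sibling hashes from one leaf up to the root."""
    proof = []
    for level in levels[:-1]:
        padded = level + [level[-1]] if len(level) % 2 else level
        sib = index ^ 1
        # record the sibling hash and whether it sits on the left
        proof.append((padded[sib], sib < index))
        index //= 2
    return proof

def verify(root, leaf, proof):
    """Check that `leaf` is committed under `root` — the verifier
    never sees the rest of the membership list."""
    node = h(leaf)
    for sibling, sibling_on_left in proof:
        node = h(sibling + node) if sibling_on_left else h(node + sibling)
    return node == root

# Hypothetical community of pseudonymous member commitments.
members = [b"member-commitment-0", b"member-commitment-1",
           b"member-commitment-2", b"member-commitment-3"]
levels = build_tree(members)
root = levels[-1][0]                  # the only value published on-chain
proof = prove(levels, 1)
print(verify(root, b"member-commitment-1", proof))   # a real member
print(verify(root, b"outsider", proof))              # a non-member
```

The point of the design is that only the short root needs to be shared publicly; a member can later produce a compact proof that their commitment is in the tree, which is the “part of the community” half of the question raised above.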

During her presentation, however, she also gave examples of how virtual reality and other technology can be used for the public good, such as to access places not normally accessible, including the Guantanamo Bay detention camp run by the U.S. in Cuba, and to help people understand climate change better by allowing them to see a melting glacier.

Emblematic has been building Reach, a walk-around virtual reality platform and distribution toolset that she told attendees will “let other people make volumetric content themselves” to tell stories.

Presented by SafeStream by SHIFT, Akamai, IBM Security, Microsoft Azure, Convergent Risks, the Digital Watermarking Alliance and the Trusted Partner Network, the Cybersecurity & Content Protection Summit was produced by the Media & Entertainment Services Alliance (MESA) and the Content Delivery & Security Association (CDSA), in cooperation with the NAB Show.