How will AI-generated deepfake content impact your business?

  • Posted on February 19, 2020
  • Estimated reading time 3 minutes

According to the Wall Street Journal, the CEO of an unnamed UK-based energy firm was recently scammed out of approximately $243,000 by AI technology. The CEO believed he was on the phone with his boss, the chief executive of the firm’s German parent company, when he immediately followed orders to transfer funds to a Hungarian supplier. It turns out the entire discussion was a scam and the first noted instance of an AI-generated voice “deepfake.” The CEO was one of the first enterprise victims of this new and potentially dangerous technology, but he certainly won’t be the last.

So what is a deepfake? At the most basic level, a deepfake is a lie disguised to look like the truth. The term generally describes manipulated videos, or other digital representations, that have been doctored with sophisticated Artificial Intelligence (AI) to yield fabricated images and sounds that appear very real. The name combines “deep learning” and “fake”; deep learning, a subset of AI, refers to arrangements of algorithms that can learn and make intelligent decisions on their own.

Deepfakes have gained traction because applications are now available to the masses AND because of the growing use of a technology called Generative Adversarial Networks (GANs). With GANs, two machine-learning models basically “duke it out”: one model creates forgeries, while the other attempts to detect them. The game continues until the detector can no longer tell that the created video or voice is not real. GAN technology makes deepfakes more believable, and it is a powerful new tool for those who want to use misinformation to influence everything from stock prices to elections.
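To make the “duke it out” idea concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch. The toy network sizes, the synthetic “real” data and the training settings are all illustrative assumptions; production deepfake systems use far larger models trained on images or audio.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: a generator learns to forge samples while a
# discriminator learns to tell forgeries from real data.
latent_dim, data_dim, batch = 8, 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 32),
    nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),  # estimated probability that the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(batch, data_dim) + 2.0         # stand-in for real media
    fake = generator(torch.randn(batch, latent_dim))  # the forgery

    # Discriminator turn: learn to score real data as 1 and forgeries as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator turn: adjust the forgery so the discriminator scores it as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Each round, the generator improves only by fooling the current discriminator, and the discriminator improves only by catching the current generator; this arms race is what drives the forgeries toward realism.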

The challenge with deepfake technology is that bad actors have an outsized advantage and there are no solid technology solutions in place to address the growing threat. In AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it. There is simply little money to be made in detecting deepfakes.

Social media platforms are trying to address the deepfake challenge, but each platform handles the problem differently. For example, a recently doctored video of US House Speaker Nancy Pelosi slurring her words was immediately removed by both Twitter and YouTube. Facebook, however, left the video on its platform, citing its stance on free speech (Facebook does not have a policy requiring that posted information be true). The tech giants do not align on whether deepfakes should be deleted, flagged, demoted or preserved.

Governments have stepped in to address this growing challenge, with laws passed in China and in various US states. In the United States, however, the constitutionality of these laws has been challenged: in a democracy, the “marketplace of ideas” is meant to sort truth from falsehood, not a government censor. Most agree on the need to avoid legal rules that push too far and pressure platforms to censor free expression online.

Although most discussions around the danger of deepfakes deal with the spread of political disinformation, it is only a matter of time before the technology is used against organizations. There are three primary areas where organizations should be concerned. The first is extortion: deepfakes will enhance and likely increase extortion attempts against influential business executives. The second is market manipulation, where deepfakes have significant potential to cause a company’s stock price to plummet or soar. The third is social engineering: as in the example of the CEO above, the technology can be used to manipulate individuals into divulging sensitive information or taking damaging actions.

So how can a company address this growing concern? Preparedness is key. We recommend three things organizations should start doing immediately. The first is to train employees on the dangers of deepfakes and how to detect them. Rather than taking videos at face value, for example, individuals should seek out related contextual information.

The second is to put together a team that can respond quickly if a deepfake is released. It is not reasonable to expect enterprises to stop deepfakes from occurring (their appearance is inevitable), but they can focus on detecting forgeries as early as possible and then mitigating the effects. This approach involves corporate communications, public affairs and other parts of the organization working together to quickly counter the narrative presented by the deepfake.

Finally, an organization should consider partnering with companies with expertise in AI and cybersecurity technologies. Detecting AI-generated content and handling the related cybersecurity issues is beyond the capabilities of most enterprises: it requires significant expertise and must be run continuously, an expense most enterprises cannot commit to in-house on their own.
 
