Can AI and blockchain be used in the fight against deepfakes?

Most of us have heard of phishing: we may get an email, supposedly from the CEO of the company we work for, demanding that we transfer some money. As it’s the boss, and we are human and don’t always react calmly when the boss aggressively demands something, we may well comply. These days, though, more people are aware of the danger and are likely to check the authenticity of such an email. Suppose, however, we get a phone call apparently from the boss, complete with the familiar cadences of the boss’s voice; we are far less likely to be suspicious. Now, in a variation on the deepfake, it has been reported that AI has been used to scam an organisation out of money by impersonating the voice of a company’s chief executive. So what is the answer? Can AI or blockchain be used in the battle against deepfakes? Or does it boil down to staff training? Information Age spoke to three experts.

According to a report in the Washington Post, criminals used AI-based software to scam a UK energy company out of €220,000 (£194,000). The company’s CEO thought he recognised the voice of the chief executive of the parent company, and duly transferred the money he was asked to transfer.

Another recent example of a deepfake was less costly, but arguably had more serious implications. A YouTube creator going by the name of Ctrl Shift Face deepfaked a scene from the AMC TV series Better Call Saul with the voices of Donald Trump and his son-in-law Jared Kushner.

How can organisations respond to the threat of deepfakes?

According to Dr Alexander Adam, Data Scientist at Faculty, it’s much harder for AI to create deepfake audio than video.

He explained: “The human ear is sensitive to sound waves extending over an impressively large spectrum of frequencies, so generating human-quality speech requires the algorithm to correctly predict the sound wave thousands of times per second. By comparison, the human eye only perceives around 30 frames per second. This means that, in general, small inaccuracies in a video deepfake are less noticeable than in its audio counterpart.”
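To put rough numbers on the gap Dr Adam describes, the sketch below compares the two data rates. The 30 frames per second comes from his quote; the 16,000 samples per second is our own assumption, a common sample rate for speech synthesis, so treat the exact ratio as illustrative.

```python
# Illustrative only: 16 kHz is an assumed, typical speech-synthesis sample
# rate; 30 fps is the figure Dr Adam cites for video.
AUDIO_SAMPLE_RATE_HZ = 16_000
VIDEO_FRAME_RATE_FPS = 30

ratio = AUDIO_SAMPLE_RATE_HZ / VIDEO_FRAME_RATE_FPS
print(f"Audio model must predict {AUDIO_SAMPLE_RATE_HZ:,} values per second")
print(f"Video model must predict {VIDEO_FRAME_RATE_FPS} frames per second")
print(f"That is roughly {ratio:.0f}x more individual predictions for audio")
```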


Training staff to spot deepfakes

As for the fix, Jake Moore, Cybersecurity Specialist at ESET, puts the emphasis on training staff.

He said: “We will see a huge rise in machine-learned cybercrimes in the near future. We have already seen Deepfake videos imitating celebrities and public figures, but to create convincing materials, cyber-criminals use footage that is already available in the public domain. As computing power increases, we are starting to see this become even easier to create, which paints a scary picture ahead.

“To help reduce these types of risks, companies should start by raising awareness and educating their employees, then introduce a second layer of protection and verification, one that would be hard to spoof, such as a single-use password generator (an OTP device). Two-factor authentication is a powerful, inexpensive and simple technique for adding an extra layer of security to stop your money going into a rogue account.”
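To make the OTP idea concrete, here is a minimal sketch of a time-based one-time password generator following RFC 6238, the scheme behind most hardware and app-based OTP devices. It is written from the public specification, not from any ESET product, and the shared secret shown is a made-up demo value.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step            # 30-second time window
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret, held by both the company and the employee's
# OTP device; a caller who has only cloned a voice cannot derive the code.
shared_secret = "JBSWY3DPEHPK3PXP"
print("Current one-time code:", totp(shared_secret))
```

In the voice-scam scenario, the caller would be asked to read back the current code; a fraudster who has cloned a voice but does not hold the shared secret cannot produce it.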

He added: “Before you know it, deepfakes will be more convincing than ever, so companies need to consider investing in deepfake-detecting software sooner rather than later. However, counter-software is never developed that fast, so companies should focus on training their employees rather than just relying on software.”


The blockchain response to deepfakes

Kevin Gannon, blockchain tech lead and solutions architect at PwC, said: “When it comes to deepfakes, emerging technology like blockchain can come to the fore to provide some level of security, approval and validation. Blockchain has typically been touted as a visibility and transparency play, where once something is done, the who and the when become apparent; but it can go further.

“When a user who has a digital identity wants to do something, they could be prompted for proof of that identity before access to something (like funds) is granted. From another angle, the actual authenticity of video and audio files can be proven via a blockchain application, where the hash of a file (the supposed proof) can be compared against that of the original. It is not a silver bullet, though, and as always, adopting and applying the technology in the right way is key. From a security perspective, more open data mechanisms (like a public ledger) have an increased attack surface, so inherent protection cannot just be assumed.”
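At its core, the verification Gannon describes is a digest comparison. Below is a minimal sketch assuming the original file’s SHA-256 hash has already been anchored to a ledger; the on-chain read itself is omitted, and the file name and hash value are hypothetical.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Hash a media file in chunks so large videos don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical: a digest previously written to the ledger by the publisher.
anchored_hash = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"

if file_sha256("statement_video.mp4") == anchored_hash:  # hypothetical file
    print("File matches the anchored original")
else:
    print("File differs from the anchored original - treat as suspect")
```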

He continued: “Enhancing security protocols around the approvals process, where smart contracts could also come into play, can strengthen such processes. In addition, at a more technical level, applying multi-sig (multiple-signature) transactions to those processes means that even if one identity is compromised, more than one identity is still needed to provide ultimate approval.”
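The multi-sig point reduces to an M-of-N approval rule. The toy sketch below checks the rule itself; a real scheme would verify cryptographic signatures rather than trusted name strings, and the identities and two-of-three threshold here are invented for illustration.

```python
# Hypothetical identity set and threshold (M-of-N with M=2, N=3).
AUTHORISED = {"cfo", "ceo", "treasurer"}
THRESHOLD = 2

def payment_approved(approvals: set[str]) -> bool:
    """Approve only when enough distinct authorised identities signed off."""
    return len(approvals & AUTHORISED) >= THRESHOLD  # ignore unknown names

print(payment_approved({"ceo"}))               # False: one identity alone
print(payment_approved({"ceo", "treasurer"}))  # True: threshold reached
```

This is exactly the property Gannon highlights: a scammer who compromises, or convincingly impersonates, a single executive still cannot move funds alone.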


AI and deepfakes

As for how AI can be used to combat deepfakes, we return to Dr Alexander Adam. He said: “Machine learning algorithms are great at recognising patterns in large amounts of data. ML can provide a way to tell fake audio from real audio using classification techniques: you show an algorithm large amounts of deepfake and real audio and teach it to distinguish the difference in, for example, the frequency composition of the two. By using image classification on the audio spectrograms, you can teach an ML model to ‘spot the difference’. However, as far as I am aware, no out-of-the-box solution exists yet.”
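To show the shape of the approach Dr Adam outlines, here is a minimal, untrained sketch of spectrogram classification in PyTorch. Everything in it (the 16 kHz sample rate, the mel-spectrogram front end and the tiny CNN) is our own illustrative assumption rather than a description of Faculty’s work, and as he notes, no out-of-the-box detector exists.

```python
import torch
import torch.nn as nn
import torchaudio

# Turn a raw waveform into a mel spectrogram "image" for the classifier.
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)

class SpectrogramClassifier(nn.Module):
    """A deliberately small CNN: real (class 0) vs deepfake (class 1) audio."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse to one vector per clip
        )
        self.head = nn.Linear(32, 2)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        spec = to_mel(waveform)               # (batch, mels, frames)
        spec = spec.unsqueeze(1).log1p()      # add channel dim, compress range
        feats = self.features(spec).flatten(1)
        return self.head(feats)               # logits over {real, fake}

# Untrained demo forward pass on one second of random "audio":
model = SpectrogramClassifier()
batch = torch.randn(4, 16_000)                # 4 clips, 1 s each at 16 kHz
print(model(batch).shape)                     # torch.Size([4, 2])
```

In practice the model would be trained on labelled real and fake clips with a standard cross-entropy loss; the forward pass above only demonstrates the data flow from waveform to ‘real vs fake’ logits.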

Dr Adam added: “In part, this may be because audio deepfakes haven’t been regarded as posing the same level of threat as video deepfakes. Audio deepfakes are not pitch-perfect, and you should be able to tell the difference if one is tailored to a specific person you know. That said, interference across phone lines or staged ‘outside’ background noise could probably be used to mask a lot of this. And as there has been so much high-profile media attention on deepfake videos, the public are perhaps less aware of the potential risks of audio deepfakes. So, if you have a reason to be suspicious, you should always verify that it is who you think it is.

“However, we expect that the creation and use of audio deepfakes for malicious purposes will increase and become more sophisticated in the coming years, because there is now a better understanding of machine learning models and of how quickly a model trained on one person’s voice can be retrained for another. It is worth noting, though, that as the generation of deepfake content gets better, so, typically, do the detection methods.”



