The Ever-Present Concern of Deepfakes

Chief Information Security Officer

Deepfake technology is an increasingly serious cybersecurity threat: it is becoming more accessible to create and harder to detect. Deepfakes have been used to spread misinformation, manipulate public opinion, and trick organizations into sharing sensitive information. Governments and organizations are taking steps to combat the threat, but everyone within an organization has a role to play in strengthening cyber defenses. Cybersecurity practitioners' technical skills have limits, and non-cybersecurity professionals must be knowledgeable too as cybercrime becomes more pervasive. Education and staying current with trends and advancements in technology are essential to combating deepfake threats. Finally, AI technology has the potential to help organizations enhance their cybersecurity defenses, but it also brings new risks.

CIO World Asia spoke with Jon France, Chief Information Security Officer at (ISC)², about the concerns deepfakes raise for organizations and how organizations can mitigate such threats.

Concerns and Implications of Deepfakes for Organizations and Governments

Deepfake technology poses a significant threat to cybersecurity: used maliciously, it can lead to financial loss, reputational damage, and public unrest. The alarming aspect of deepfakes is that they are becoming increasingly easy to generate and more challenging for the average person to detect. As the technology matures and becomes more accessible, there has been a surge in phishing and wiperware attacks that deliver deepfakes through email, video, and messaging platforms. In 2022, a global report found that two out of three cybersecurity professionals had witnessed malicious deepfakes used as part of an attack, a 13% increase from the previous year.

The rise of remote work has made it easier for scammers to use personally identifiable information (PII) and deepfakes to apply for remote job positions that provide access to sensitive information, such as financial data, customer PII, and proprietary information. Reports indicate that scammers are increasingly using deepfakes to trick organizations into sharing such information, which is a cause for concern.

Deepfakes have also been used to manipulate public opinion in the political sphere. For instance, a deepfake video of Ukrainian President Volodymyr Zelenskyy calling for surrender circulated online, leading to confusion and unrest. The use of deepfake technology to spread misinformation and manipulate public opinion is a significant concern that requires urgent attention.

Governments worldwide are taking steps to combat the threat posed by deepfakes. For example, the Singapore Police Force is looking to strengthen joint efforts between the public and private sectors to tackle the trend of rising scams, while China has implemented first-of-its-kind deepfake regulations. However, more needs to be done to curb the advancement of deeptech and generative AI technologies, which can lead to more advanced cyber threats. Organizations, particularly small- and medium-sized businesses, are already vulnerable to social engineering attacks and stand at risk of falling prey to more sophisticated cyber threats.

State of the Current Workforce Skillsets in Detecting and Combatting Deepfakes

Cybersecurity practitioners are generally aware of the threats associated with deepfakes and their tooling, given that some of the tactics used to combat social engineering attacks also apply to deepfakes. However, there are limits to the technical skills cybersecurity professionals have. Deepfakes use synthetic video and images to mimic human speech and behavior as closely as possible. The nature of this threat means cybersecurity professionals must also understand the softer skills behind social engineering, such as human psychology and behavior, in order to effectively distinguish what's genuine from what's fake.

While it’s a given that cybersecurity practitioners should constantly upskill themselves to be better equipped at combating the latest cyber threats, non-cybersecurity professionals have to be knowledgeable, too, as cybercrimes become more pervasive. For instance, with scammers spoofing identities using deepfakes during online job interviews, HR employees need to remain vigilant when conducting job interviews, and look out for telltale signs such as unnatural blinking and lip movements.
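To illustrate how one of those telltale signs can be checked automatically, the sketch below computes the widely used eye-aspect-ratio (EAR) from six eye landmarks and counts blinks across a video's frames, since an implausibly low blink rate is one weak signal of a synthetic face. This is a simplified illustration, not a production detector: the six-point landmark layout, the threshold, and the frame counts are assumptions, and a real pipeline would obtain the landmarks from a facial-landmark library.

```python
# Hypothetical sketch of a blink-rate heuristic for interview video.
# Assumes the common 6-point eye landmark layout (p1..p6); real
# pipelines would extract these with a facial-landmark detector.
from math import dist

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    It stays high while the eye is open, dropping near zero mid-blink."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks as runs of >= min_frames consecutive frames
    whose EAR falls below threshold. A long clip with near-zero
    blinks is one (weak) indicator worth a closer look."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # blink still in progress at clip end
        blinks += 1
    return blinks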

A well-trained cybersecurity team alone is far from sufficient to protect organizations from social engineering attacks, especially when non-IT staff can be the main targets. By educating non-IT employees about cybersecurity threats and encouraging them to build their cybersecurity skills, companies create a more holistic defense against incoming cyberthreats.

Overall, organizations are concerned about the potential damage deepfakes can cause, including financial loss and reputational harm, and that risk grows as the technology becomes more accessible and easier to use. To address it, organizations are turning to their cybersecurity teams for guidance and education on detecting and preventing deepfake attacks. However, the responsibility does not rest with the IT department alone: non-IT staff also need to be educated on the risks and trained to identify and stop these attacks. By taking a holistic approach to cybersecurity, organizations can better protect themselves from the risks deepfakes pose.

Protecting Organizations from Deepfakes: What Measures Can Be Taken

The prevalence of deepfakes is a growing concern, as they are increasingly difficult to detect. However, education can help combat this issue. It’s essential for organizations to understand how to better detect deepfakes, and this responsibility shouldn’t rest solely on the cybersecurity team. Instead, everyone within the organization should play a role in strengthening their cyber defenses. To deal with the complexity of deepfakes and find ways to detect and counter them, it’s crucial to have a platform where organizations and governing bodies can challenge and discuss their learnings, facilitating the exchange of ideas and solutions.

Staying up-to-date with trends and advancements in technology is also an important aspect of education, particularly for threat intelligence. While organizations should take the initiative to do this, it’s also time for educational institutions to get on board. Many markets still struggle to incorporate media literacy into their curriculum, but Finland has proven that it can be done. Finland is currently ranked first globally for its resilience against misinformation, thanks in part to their approach to education. Children as young as preschoolers are taught how to distinguish between genuine and fake information, and libraries serve as hubs for adult media literacy education.

The release of ChatGPT in November 2022 has sparked renewed interest and conversations in the public domain regarding the influence of generative AI technologies on employment, efficiency, and creativity.

Assessing the Cybersecurity Implications of Advancements in AI Technology: Should Industry Professionals Herald Them or Be Wary?

AI technology has the potential to help organizations enhance their cybersecurity defenses, but it also brings new risks. AI can be used to improve signal-to-noise ratios on indicators of compromise and indicators of attack, which can detect subtle anomalies and alert security professionals for further investigation. However, cybersecurity professionals still need to review the results and take action when necessary.
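To make the signal-to-noise idea concrete, here is a minimal, hypothetical sketch (not any vendor's product) that flags statistically unusual spikes in a stream of security-event counts using a rolling z-score. The machine narrows the stream to a short alert list; the analyst still reviews every flagged index, which mirrors the division of labor described above.

```python
# Illustrative anomaly-flagging sketch: the window size and
# z-score threshold are assumptions, and real systems use far
# richer features and models than simple event counts.
from statistics import mean, stdev

def flag_anomalies(counts, window=5, z_threshold=3.0):
    """Return indices whose event count exceeds the mean of the
    preceding `window` observations by more than `z_threshold`
    standard deviations. Flagged indices go to a human analyst."""
    alerts = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # flat baseline: avoid division by zero
        if (counts[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts
```

For example, a steady stream of roughly a dozen events per interval followed by a sudden jump to several hundred would be flagged for review, while a perfectly flat stream raises no alerts. The point of the design is exactly the one above: the model surfaces subtle deviations, but a professional decides whether each one is an indicator of compromise.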

Despite the potential benefits, concerns have been raised about the rise of AI in cybersecurity. Cybercriminals are always looking for new ways to exploit vulnerabilities, and AI can be used to make attacks more convincing by training algorithms to appear more human. This requires engineers and solution architects to continuously refine their technologies to keep up with emerging threats.

The rise of AI in cybersecurity is a double-edged sword. While it offers new ways to detect and combat cyber threats, it also creates new risks and challenges that must be addressed. Ultimately, it is up to organizations to use AI in a responsible and effective manner to protect their assets and data from cyber threats.

Exploring Interesting Perspectives and Discussions within Cybersecurity Circles

There have been growing concerns within cybersecurity circles regarding the malicious use of AI tools like ChatGPT and Google Bard. Cybercriminals have been discussing on underground forums how these tools could make their phishing and malware efforts more efficient. While these emerging AI tools may not transform cyberattacks outright, they could still pose a threat. For instance, generative pre-trained transformers (GPTs) can be used to create realistic deepfakes of well-known individuals and even to build an army of non-existent people to spread misinformation.

As AI continues to advance, it’s important to be vigilant and assess the potential impacts of its use in cybersecurity. The battle between using AI for good and malicious intent will likely continue, making it important for everyone to be aware of potential threats and what they can do to avoid falling victim to them. Cybersecurity is not just for professionals, and it’s essential to stay informed and partner closely with organizations and governing authorities to combat cyberthreats.