Skip to main content Skip to footer

Perspective

Redefining resilience: Cybersecurity in the generative AI era

5-minute read

May 2, 2024

In brief

  • Cybercriminals are increasingly using gen AI-powered attacks, such as ransomware and phishing, to target organizations.

  • Certain industries are more targeted, as their advanced technology landscapes make them more vulnerable to sophisticated attacks.

  • To address these risks, organizations need to update their security posture and embed security by design throughout their gen AI journey.

Secure gen AI

When it comes to gen AI and cybersecurity, who has the advantage, attackers or defenders?

Gen AI’s impact on the threat landscape

Cybercriminals are eager to harness the potential of gen AI, leading to a rise in gen AI-powered cyberattacks. As a result, there has been a rise in ransomware attacks. These attacks are often initiated through gen AI-powered phishing, and are affecting local governments, education, manufacturing and healthcare. We are also seeing an increase in voice deepfakes emulating executives to fraudulently authorize financial transfers.

Threat actors have been experimenting with dark LLMs to create python-based ransomware, which is distributed with high levels of obfuscation that increase its potential success. Some industries—including financial services, government and energy—are more targeted when it comes to gen AI attacks. These industries use more sophisticated technology, making them more vulnerable.

56%

of executives believe attackers will have the advantage over defenders in the next two years.

76%

Increase in ransomware attacks since the launch of ChatGPT.

1,265%

Phishing attack increase in the last eighteen months.

Gen AI vulnerabilities

Gen AI exposes organizations to a broader threat landscape, more sophisticated attackers and also new points of attack. As organizations move from pilots and discrete use cases to larger-scale, gen AI implementations, the risks will increase because there will be more scale and complexity of adoption. Vulnerabilities like these are new, and most organizations are not prepared to handle them.

New capabilities such as shadow AI discovery, LLM prompt and response filtering and specialized AI workload integration tests are now required to properly mitigate these new risks. Whether guarding against AI-powered attacks, or protecting their own AI landscapes, organizations must quickly update their security posture. The key to gaining the upper hand will be embedding security by design.

The secure gen AI journey

How to accelerate the adoption of gen AI at scale and to protect gen AI environments:

Gen AI security should be an integral part of GRC, establishing a clear governance framework, policies and processes.

Conduct a comprehensive security assessment —informed by cyber intelligence—to understand the current security maturity within your gen AI environment. Evaluate gen AI architectures and ensure alignment with industry best practices.

Organizations must focus on securing the entire gen AI stack, including the data layer, the foundational model, gen AI applications, as well as identity access and controls.

Reinvent cyber resilience with gen AI

Gen AI also presents an opportunity for cyber defense and the reinvention of cybersecurity. By fully leveraging gen AI, organizations can turn the tables on potential attackers and enhance their cyber defense capabilities.

Organizations should embrace AI-powered defense technologies, and test using the gen AI technologies that threat actors could use against them. Examples include AI-powered red teaming and penetration testing—which will become mandatory for organizations as gen AI regulations evolve.

Many platform companies and hyperscalers are releasing AI security features in their own environments and for broader consumption. Plus, there are new players in the space that have created gen AI security specific solutions from scratch to protect environments.