Generative AI’s Impact On Cybersecurity and Liability
by Andy Shafer
Cybersecurity Threats from Generative AI
Generative AI (GenAI) is reshaping conversations around liability. Cyber experts from the insurance and technology industries warn of its potential weaponization by threat actors. With advanced capabilities now accessible to individuals and groups with limited technical expertise, GenAI introduces a new threat landscape: it sharpens phishing attacks, social engineering, malware creation, and vulnerability exploitation, accelerates the evolution of spam and botnets, and lowers the barrier to entry for cyberattacks.
Increased Sophistication in Phishing Attacks
Phishing attacks remain the most likely form of cyber incident because of their low cost and high effectiveness. The rise of GenAI applications is linked to an increase in both the sophistication and the frequency of these attacks. The surge is partly driven by the low barrier to entry, as the technology enables threat actors to bypass security measures, including multi-factor authentication (MFA).
Understanding Multi-Factor Authentication (MFA)
At its core, MFA requires users to provide at least two verification factors drawn from different categories, something they know (a password), something they have (a device or hardware token), or something they are (a biometric), before granting access to an online resource. Designed to enhance security, MFA has traditionally been required or encouraged by insurance companies as a safeguard against unauthorized access. However, GenAI presents new challenges for MFA.
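As a concrete illustration, the "something you have" factor is often a time-based one-time password (TOTP) generated by an authenticator app. Below is a minimal sketch of the standard RFC 6238 derivation a server might use to check that second factor; the shared secret shown is a placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server-side check: the code the user submits must match the one derived
# from the shared secret; use a constant-time comparison in practice.
submitted = totp("JBSWY3DPEHPK3PXP")                  # placeholder secret
print(hmac.compare_digest(submitted, totp("JBSWY3DPEHPK3PXP")))  # True
```

The scheme's security rests on the secrecy of the shared key and the short validity window, which is why phishing that relays codes in real time can still defeat it.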
Helen Bourne, a partner at Clyde & Co, highlights that phishing attacks are at an all-time high, and ransomware, which had declined last year, has resurged. Phishing campaigns can now be deployed rapidly and in growing volume, posing a significant threat to companies struggling to mitigate the risk. Threat actors also design campaigns to circumvent MFA procedures, often by exploiting the less secure personal devices employees use.
The Evolving Cyber Risk Landscape
Rory Yates, SVP of corporate strategy at EIS, describes AI as “bubbling with risk.” He notes that the low barriers bad actors face in using new technologies to deep-fake, manipulate images, and bypass MFA protocols make these tools enticing. The industry believes there is more cyber activity than can be quantified. Hackers and fraudsters are using AI to enhance social hacking, assuming identities within businesses to extract details through social engineering. A notable example involved a threat actor impersonating a member of Uber’s security team to infiltrate Slack and gain login approval via MFA.
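The Uber case is widely reported as an instance of “MFA fatigue” or prompt bombing, in which the attacker repeatedly triggers push notifications until the worn-down victim approves one. A common mitigation is to rate-monitor prompts per account. The sketch below is illustrative only; the class name and thresholds are assumptions, not any vendor’s API.

```python
from collections import deque
from time import time

class MfaPushMonitor:
    """Flag possible MFA-fatigue attacks: a burst of push prompts for one
    account within a short window. Thresholds here are illustrative."""

    def __init__(self, max_prompts: int = 5, window_s: int = 300):
        self.max_prompts = max_prompts
        self.window_s = window_s
        self._prompts: dict[str, deque] = {}

    def record_prompt(self, user: str, ts: float | None = None) -> bool:
        """Record one push prompt; return True if the burst looks suspicious."""
        ts = time() if ts is None else ts
        q = self._prompts.setdefault(user, deque())
        q.append(ts)
        while q and q[0] < ts - self.window_s:    # evict prompts outside window
            q.popleft()
        return len(q) > self.max_prompts          # True => alert and pause pushes

monitor = MfaPushMonitor()
alerts = [monitor.record_prompt("alice", ts=1000.0 + i) for i in range(7)]
print(alerts[-1])  # True: seven prompts in six seconds looks like bombing
```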
Mitigating Cyber Risks in the Insurance Industry
Yates suggests that insurers working with multiple partners on advanced MACH-based (microservices, API-first, cloud-native, headless) core systems can better access real-time fraud bureaus, data sources, and tools to detect fraud and hacks. This collaborative approach enhances threat detection and mitigation, as sketched below.
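In an API-first core system, a fraud check becomes one more service call made at decision time. The sketch below assumes a hypothetical fraud-bureau endpoint; the URL, the request and response fields, and the score thresholds are all illustrative.

```python
import json
import urllib.request

FRAUD_API = "https://fraud-bureau.example/v1/score"   # hypothetical endpoint

def score_login_event(event: dict) -> float:
    """POST a login event to an external fraud bureau; return its risk score."""
    req = urllib.request.Request(
        FRAUD_API,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["risk_score"]          # illustrative field name

def handle_login(event: dict) -> str:
    """Route risky logins to step-up verification rather than a hard block."""
    score = score_login_event(event)
    if score > 0.8:
        return "deny"
    if score > 0.5:
        return "step_up_mfa"
    return "allow"
```

Keeping the decision logic thin and the fraud signals external is one way to let insurers swap in better data sources without rebuilding the core, which is the collaborative advantage Yates describes.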
Bourne emphasizes the importance of continuous training and development. Despite strong compliance and governance measures, individuals often lack awareness of their own risk profiles. Comprehensive training, conducted more frequently than once a year, is crucial as risks evolve rapidly.
The insurance industry faces the challenge of adapting quickly to change. Insurers, whose legacy systems complicate adaptation, need to “learn fast.” Collaboration among insurance businesses is essential for sharing knowledge and best practices, strengthening cybersecurity measures, and keeping pace with regulatory change. Bourne believes this collaboration not only combats AI risks but also unlocks AI’s potential to improve insurance.