Darktrace Launches New Risk and Compliance Models to Address Growing Threats of IP Loss and Data Leakage from Generative AI Tools

Darktrace introduces new risk and compliance models to combat IP loss and data leakage from generative AI tools.

Darktrace, a global leader in cybersecurity solutions, has unveiled new risk and compliance models to help its 8,400 customers worldwide mitigate the escalating risks associated with generative AI tools. The release of these models for Darktrace DETECT™ and RESPOND™ enables organizations to put effective guardrails in place for monitoring and responding to activity and connections involving generative AI and large language model (LLM) tools.

Recent data from Darktrace’s AI reveals that 74% of its active customer deployments include employees using generative AI tools in the workplace [1]. As an illustrative example, Darktrace detected and prevented the unauthorized upload of over 1GB of data to a generative AI tool at one of its customers in May 2023.
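To make the guardrail concept concrete, the following is a minimal, hypothetical sketch of an egress policy that flags cumulative uploads to known generative AI domains above a threshold. The domain list, threshold, event shape, and function name are illustrative assumptions for this article, not Darktrace’s actual models or APIs.

```python
# Hypothetical sketch of a generative-AI egress guardrail.
# Domains, threshold, and event format are illustrative assumptions,
# not Darktrace's implementation.
from collections import defaultdict

GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com"}
UPLOAD_LIMIT_BYTES = 1 * 1024**3  # flag cumulative uploads over ~1 GB


def flag_genai_uploads(events):
    """events: iterable of (user, dest_domain, bytes_sent) tuples.

    Returns the set of users whose cumulative uploads to generative AI
    domains exceed the limit.
    """
    totals = defaultdict(int)
    flagged = set()
    for user, domain, bytes_sent in events:
        if domain in GENAI_DOMAINS:
            totals[user] += bytes_sent
            if totals[user] > UPLOAD_LIMIT_BYTES:
                flagged.add(user)
    return flagged


# A user uploading 1.2 GB in three chunks crosses the 1 GB threshold.
events = [("alice", "chat.openai.com", 400 * 1024**2)] * 3
print(flag_genai_uploads(events))  # {'alice'}
```

In practice, a policy layer like this would sit on top of network telemetry and feed a response action (for example, blocking the connection), which is the role the announcement assigns to DETECT™ and RESPOND™ respectively.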

While the emergence of generative AI tools presents promising opportunities for increased productivity and novel approaches to enhancing human creativity, Chief Information Security Officers (CISOs) now face the challenge of striking a balance between leveraging these innovations and managing associated risks. Acknowledging this predicament, government agencies, such as the UK’s National Cyber Security Centre, have already issued guidance emphasizing the need to effectively handle risk when employing generative AI tools and other LLMs in professional settings. Moreover, regulators in various jurisdictions, including the UK, EU, and US, are expected to provide guidance to companies on maximizing the benefits of AI while mitigating potential hazards.

Allan Jacobson, Vice President and Head of Information Technology at Orion Office REIT, emphasized that businesses need both technology and clear guardrails to harness the advantages of generative AI tools while managing the associated risks. He stated, “Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted. First and foremost, we are focused on the attack vector and how well prepared we are to respond to potential threats. Equally as important is data privacy, and we are hearing stories in the news about potential data protection and data loss.”

To shed light on securing organizations against cyber compromise and preparing teams for unpredictable threats, Darktrace’s Chief Executive Officer, Poppy Gustafsson, will participate in a fireside chat titled ‘Securing Our Future by Uplifting the Human’ at London Tech Week. She will be interviewed by Guy Podjarny, CEO of Snyk.

Gustafsson stressed that CISOs worldwide must navigate the risks and opportunities presented by publicly available AI tools. She stated, “CISOs across the world are trying to understand how they should manage the risks and opportunities presented by publicly available AI tools in a world where public sentiment flits from euphoria to terror. Sentiment aside, the AI genie is not going back in the bottle, and AI tools are rapidly becoming part of our day-to-day lives, much in the same way as the internet or social media.”

As part of Darktrace’s commitment to providing tailored security solutions for organizations, the company’s latest announcement allows customers to gain comprehensive visibility and control over the usage of AI tools within their operations. Gustafsson added, “At Darktrace, we have long believed that AI is one of the most exciting technological opportunities of our time. With today’s announcement, we are providing our customers with the ability to quickly understand and control the use of these AI tools within their organizations.”

Darktrace, known for its advanced Self-Learning AI for attack prevention, threat detection, autonomous response, and policy enforcement, continually develops new AI models through its Cyber AI Research Center. These models, including proprietary large language models, are deployed across Darktrace’s product suite within the Cyber AI Loop™, empowering customers to proactively combat increasingly sophisticated threats.

Jack Stockdale, Chief Technology Officer at Darktrace, highlighted the significance of generative AI and LLMs as crucial additions to the evolving cybersecurity landscape. He stated, “Recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cybersecurity. But they are not one-size-fits-all and must be applied with guardrails to the right use cases and challenges. Over the last decade, the Darktrace Cyber AI Research Center has championed the responsible development and deployment of a variety of different AI techniques, including our unique Self-Learning AI and proprietary large language models.”

Darktrace remains committed to equipping its global customer base with the latest innovations in cybersecurity to safeguard against disruptive cyber threats that continue to pose challenges worldwide.