London Tech Week: Darktrace Addresses Generative AI Concerns with the Introduction of AI Models That Help Protect Data Privacy and Intellectual Property
London Tech Week is in full swing, and developments in digital trust are coming hard and fast. Darktrace will be discussing the use of generative AI (artificial intelligence) tools, in a bid to shed light on how they can affect our behaviour online. When it comes to generative AI concerns, they might not be the ones you’re expecting…
As reported by PR Newswire, Darktrace has introduced new “risk and compliance models” to help CISOs (chief information security officers) as they chart unknown territory: the rise of generative artificial intelligence. Darktrace’s efforts will help businesses balance the opportunities of generative AI and LLM-based tools against the risk of inadvertent IP (intellectual property) loss and data leakage.
Intellectual property – protecting what belongs to you
In an entertainment context, intellectual property is something an organisation owns – think Spider-Man – but in reality, it can be anything from work methodology to best practices and documentation. IP loss, in this context, could result from generative AI being used incorrectly; the majority of workers are already using these tools, often without the requisite training.
New Darktrace data indicates 74% of active customer deployments have employees using generative AI tools
– PR Newswire
Now, “new risk and compliance models” will help usher in a new age of generative AI best practices, empowering Darktrace’s 8,400-strong worldwide customer base. This breakthrough technology will directly address risks associated with new ways of working – Darktrace Detect and Respond will make it easier for customers to enforce security, putting digital guardrails in place to both monitor and respond to activity and connections associated with the burgeoning generative AI landscape.
Generative AI concerns
We’ve all got them! As reported by Darktrace, 74% of its active customer deployments see employees using generative AI in the workplace. Across sectors, workers are overly trusting of the possibilities surrounding these services – in May 2023, for instance, Darktrace detected (and prevented) an upload of over 1GB of data. It could have been disastrous for the company, and indeed, the individual.
Of course, new tools increase productivity and offer new ways of, as Darktrace puts it, augmenting human creativity. Yet CISOs must weigh the benefits of embracing such innovations against the very real threat of their data falling into the wrong hands. Darktrace is taking matters into its own hands as jurisdictions across the EU, UK and US race to catch up and address growing concerns. We could all learn a thing or two about how to make the most of AI without being exposed to its current flaws and potential dangers.
“Darktrace is in the business of providing security personalized to an organization, and it is no surprise we are already seeing the early signs of CISOs leveraging our technology to enforce their specific compliance policies.”
– Poppy Gustafsson
Ahead of her appearance at London Tech Week, the company’s chief executive officer Poppy Gustafsson commented, “CISOs across the world are trying to understand how they should manage the risks and opportunities presented by publicly available AI tools in a world where public sentiment flits from euphoria to terror.”
Nervous about the limitations or apparent dangers of generative AI in your business? Darktrace might just be able to help you make the right decisions.
How is your company currently tackling the increasing influence of AI in the workplace?
Want more of the latest digital trust news? Click here to read more on the Top 3 Ways Cyber Insurance Safeguards Digital Finance.