Empower Users and Protect Against GenAI Data Loss

When generative AI tools became widely available in late 2022, it wasn’t just technologists who paid attention. Employees across all industries immediately recognized the potential of generative AI to boost productivity, streamline communication and accelerate work. Like so many waves of consumer-first IT innovation before it—file sharing, cloud storage and collaboration platforms—AI landed in the enterprise not through official channels, but through the hands of employees eager to work smarter.

Faced with the risk of sensitive data being fed into public AI interfaces, many organizations responded with urgency and force: They blocked access. While understandable as an initial defensive measure, blocking public AI apps is not a long-term strategy—it’s a stopgap. And in most cases, it’s not even effective.

Shadow AI: The Unseen Risk

The Zscaler ThreatLabz team has been tracking AI and machine learning (ML) traffic across enterprises, and the numbers tell a compelling story. In 2024 alone, ThreatLabz analyzed 36 times more AI and ML traffic than in the previous year, identifying over 800 different AI applications in use.

Blocking has not stopped employees from using AI. They email files to personal accounts, use their phones or home devices, and capture screenshots to input into AI systems. These workarounds move sensitive interactions into the shadows, out of view of enterprise monitoring and protections. The result? A growing blind spot known as Shadow AI.

Blocking unapproved AI apps may make usage appear to drop to zero on reporting dashboards, but in reality, your organization isn’t protected; it’s just blind to what’s actually happening.

Lessons From SaaS Adoption

We’ve been here before. When early software-as-a-service (SaaS) tools emerged, IT teams scrambled to control the unsanctioned use of cloud-based file storage applications. The answer wasn’t to ban file sharing, though; it was to offer a secure, seamless, single sign-on alternative that matched employee expectations for convenience, usability, and speed.

However, this time around the stakes are even higher. With SaaS, data leakage often means a misplaced file. With AI, it could mean inadvertently training a public model on your intellectual property with no way to delete or retrieve that data once it’s gone. There’s no “undo” button on a large language model’s memory.

Visibility First, Then Policy

Before an organization can intelligently govern AI usage, it needs to understand what’s actually happening. Blocking traffic without visibility is like building a fence without knowing where the property lines are.

We’ve solved problems like these before. Zscaler’s position in the traffic flow gives us an unparalleled vantage point. We see what apps are being accessed, by whom and how often. This real-time visibility is essential for assessing risk, shaping policy and enabling smarter, safer AI adoption.
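To make that concrete, here is a minimal sketch of the kind of app inventory such visibility enables. It assumes a hypothetical CSV export of gateway logs with "user" and "host" columns and an illustrative list of AI domains; a real platform provides far richer, pre-categorized telemetry:

    # Sketch: inventory shadow AI usage from web gateway logs.
    # The log format (CSV with "user" and "host" columns) and the
    # domain list are illustrative assumptions, not a product schema.
    import csv
    from collections import Counter, defaultdict

    AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

    def inventory_ai_usage(log_path: str) -> None:
        transactions = Counter()   # hits per AI app
        users = defaultdict(set)   # distinct users per AI app
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                host = row["host"].strip().lower()
                if host in AI_DOMAINS:
                    transactions[host] += 1
                    users[host].add(row["user"])
        for host, hits in transactions.most_common():
            print(f"{host}: {hits} transactions from {len(users[host])} users")

    inventory_ai_usage("gateway_logs.csv")

Even a crude report like this replaces a dashboard that says “usage is zero” with an honest picture of who is using what, and how much.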

Next, we’ve evolved how we deal with policy. Many providers simply give the black-and-white options of “allow” or “block.” The better approach is context-aware, policy-driven governance aligned with zero-trust principles, which assume no implicit trust and demand continuous, contextual evaluation. Not every use of AI presents the same level of risk, and policies should reflect that.

For example, we can allow access to an AI application while cautioning the user, or permit the transaction only in browser-isolation mode, which prevents users from pasting potentially sensitive data into the app. Another approach that works well is redirecting users to a corporate-approved alternative application managed on-premises. This lets employees reap productivity benefits without risking data exposure. If your users have a secure, fast, and sanctioned way to use AI, they won’t need to go around you.
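To illustrate the idea, here is a minimal sketch of graduated, context-aware verdicts. The verdict names, risk tiers, and request attributes below are assumptions for the example, not Zscaler’s actual policy engine:

    # Sketch: context-aware AI policy instead of binary allow/block.
    # Verdicts, risk tiers, and request attributes are hypothetical.
    from dataclasses import dataclass
    from enum import Enum

    class Verdict(Enum):
        ALLOW = "allow"
        CAUTION = "allow with user warning"
        ISOLATE = "browser isolation (no paste/upload)"
        REDIRECT = "redirect to sanctioned app"
        BLOCK = "block"

    @dataclass
    class AIRequest:
        app: str
        app_risk: str            # "low" | "medium" | "high"
        has_sanctioned_alt: bool # does an approved alternative exist?

    def evaluate(req: AIRequest) -> Verdict:
        if req.app_risk == "high":
            # Steer users toward the approved app rather than hard-blocking.
            return Verdict.REDIRECT if req.has_sanctioned_alt else Verdict.BLOCK
        if req.app_risk == "medium":
            # Usable, but isolated so sensitive data can't be pasted in.
            return Verdict.ISOLATE
        # Low risk: allow, with a reminder about data handling.
        return Verdict.CAUTION

    print(evaluate(AIRequest("genai.example.com", "medium", True)).value)
    # -> browser isolation (no paste/upload)

The point of this design is that “high risk” resolves to a redirect whenever a sanctioned alternative exists, so a hard block becomes the last resort rather than the default.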

Finally, Zscaler’s data protection tools mean we can allow employees to use certain public AI apps while preventing them from inadvertently sending out sensitive information. Our research shows more than 4 million data loss prevention (DLP) violations in the Zscaler cloud: instances where sensitive enterprise data (financial data, personally identifiable information, source code, medical data) was about to be sent to an AI application and the transaction was blocked by Zscaler policy. Without that DLP enforcement, real data loss would have occurred in these AI apps.
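The underlying mechanism can be sketched in a few lines: inspect each outbound prompt and block any that match sensitive-data patterns. The patterns below are deliberately simplified illustrations; production DLP engines combine dictionaries, validators such as Luhn checks, and exact data matching rather than bare regexes:

    # Sketch: inline DLP check on an outbound AI prompt.
    # Patterns are simplified illustrations, not production-grade rules.
    import re

    DLP_RULES = {
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){12}\d{1,4}\b"),
        "aws_secret_key": re.compile(r"\baws_secret_access_key\b", re.I),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the names of any DLP rules the prompt violates."""
        return [name for name, rule in DLP_RULES.items() if rule.search(prompt)]

    hits = scan_prompt("Draft a letter; my SSN is 123-45-6789.")
    if hits:
        print(f"Blocked outbound prompt: DLP violations {hits}")
    else:
        print("Prompt allowed")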

Balancing Enablement With Protection

This isn’t about stopping AI adoption—it’s about shaping it responsibly. Security and productivity don’t have to be at odds. With the right tools and mindset, organizations can achieve both: empowering users and protecting data.

Learn more at zscaler.com/security

This article is a contributed piece from one of our valued partners.