Wing Security announced today that it now offers free discovery and a paid tier for automated control over thousands of AI and AI-powered SaaS applications. This will allow companies to better protect their intellectual property (IP) and data against the growing and evolving risks of AI usage.
SaaS applications are multiplying by the day, and so is their integration of AI capabilities. According to Wing Security, a SaaS security company that researched over 320 companies, a staggering 83.2% of them use GenAI applications. While that statistic might not come as a surprise, the same research showed that 99.7% of organizations use SaaS applications that leverage AI capabilities to deliver their services. This use of GenAI inside SaaS applications that are not ‘pure’ AI tools often goes unnoticed by security teams and users alike.
After examining hundreds of AI-using SaaS applications, Wing Security was able to categorize the different ways in which these applications use organizational data (a simple, illustrative data-model sketch follows the list below), as well as offer a solution to this new threat:
Data storage: In some cases, data is stored by the AI for very long periods; in others, only for short periods. Storing data allows AI models, and future models, to continually train on it. The main concern, however, stems from the many types of attacks already seen against SaaS applications: when an application is compromised, the data it stores may be compromised too.
Model training: By processing vast amounts of information, AI systems can identify patterns, trends, and insights that may elude human analysis. Through machine learning algorithms, AI models learn from data and adapt over time, refining their performance and accuracy and ultimately delivering better service to their end users. On the downside, allowing these models to learn from your code, patents, and sales and marketing know-how gives AI-using applications the potential means to commoditize your organization’s competitive edge. To some, such knowledge leaks are more significant than data leaks.
The human element: Certain AI applications leverage human validation to ensure the accuracy and reliability of the data they gather. This collaborative approach, often referred to as human-in-the-loop or human-assisted AI, involves integrating human expertise into the algorithmic decision-making process. It results in higher accuracy for the AI model, but it also means that a human working for the GenAI vendor is exposed to potentially sensitive data and know-how.
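To make these categories concrete, here is a minimal, hypothetical sketch of how such an AI data-usage profile could be modeled. The class, field names, and example values (AIDataUsageProfile, ExampleNotesAI, and so on) are illustrative assumptions, not Wing's actual data model:

```python
from dataclasses import dataclass
from enum import Enum


class Retention(Enum):
    """How long the vendor's AI keeps submitted data (illustrative values)."""
    NONE = "none"
    SHORT_TERM = "short_term"   # e.g. days to weeks
    LONG_TERM = "long_term"     # retained long-term, e.g. for ongoing model training


@dataclass
class AIDataUsageProfile:
    """Hypothetical profile of how a SaaS application's AI handles customer data."""
    app_name: str
    retention: Retention
    trains_on_customer_data: bool   # does the vendor train models on your data and know-how?
    human_in_the_loop: bool         # may vendor staff review submitted data?
    opt_out_configurable: bool      # can the AI features be restricted or disabled?


# Example: a fictional note-taking app whose AI summarizer trains on user content.
example = AIDataUsageProfile(
    app_name="ExampleNotesAI",
    retention=Retention.LONG_TERM,
    trains_on_customer_data=True,
    human_in_the_loop=False,
    opt_out_configurable=True,
)
```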
Leveraging automation to combat AI-SaaS risks
Wing’s recently released AI solution is designed to help security teams adapt to, and control, the ever-growing and practically unstoppable use of AI in their organizations. The solution follows three basic steps: Know, Assess, Control.
Know: As with many security risks, the first step is discovery. In the case of AI, it is not enough to simply flag the “usual suspects”, the pure GenAI applications such as ChatGPT or Bard. With thousands of SaaS applications now using AI to improve their service, discovery must cover any application that leverages customer data to improve its models. As with its previous offerings, Wing provides this first, fundamental step as a free, self-service tier that lets users self-onboard and start discovering the extent of AI-powered application usage among their employees.
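As a rough illustration of what this discovery step involves, and assuming (hypothetically) that an inventory of sanctioned and shadow SaaS apps can be exported from an SSO or OAuth grant report, such an inventory could be cross-referenced against a catalog of applications known to embed AI. This is a sketch of the idea, not Wing's implementation; the catalog and inventory formats are invented:

```python
# Hypothetical discovery sketch: flag AI-using apps in a SaaS inventory.
AI_APP_CATALOG = {
    # app name -> True if the vendor is known to use customer data for AI features
    "chatgpt": True,
    "examplenotesai": True,
    "examplecrm": True,   # "non-pure" AI: a CRM with an embedded AI assistant
}


def discover_ai_apps(saas_inventory: list[dict]) -> list[dict]:
    """Return inventory entries that appear in the AI catalog."""
    return [app for app in saas_inventory
            if AI_APP_CATALOG.get(app["name"].lower(), False)]


inventory = [
    {"name": "ChatGPT", "users": 42, "scopes": ["email"]},
    {"name": "ExampleCRM", "users": 310, "scopes": ["contacts.read", "drive.read"]},
    {"name": "PlainOldTracker", "users": 12, "scopes": ["calendar.read"]},
]

for finding in discover_ai_apps(inventory):
    print(f"AI-using app found: {finding['name']} ({finding['users']} users)")
```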
Assess: Once AI-using SaaS has been uncovered, Wing automatically provides a security score and details how company data is used by the AI: How long is it stored? Is there a human factor? And, perhaps most importantly, is it configurable? Wing also provides a detailed view of each application’s users, permissions, and security information. This automatic analysis allows security teams to make better-informed decisions.
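A toy version of such an assessment, reusing the illustrative AIDataUsageProfile from the earlier sketch, might fold the data-usage attributes into a single score. The weights and the 0–100 range are invented for illustration and are not Wing's scoring methodology:

```python
def risk_score(profile: AIDataUsageProfile) -> int:
    """Hypothetical 0-100 risk score; higher is riskier. Weights are invented."""
    score = 0
    if profile.retention is Retention.LONG_TERM:
        score += 35
    elif profile.retention is Retention.SHORT_TERM:
        score += 15
    if profile.trains_on_customer_data:
        score += 30
    if profile.human_in_the_loop:
        score += 20
    if not profile.opt_out_configurable:
        score += 15
    return min(score, 100)


print(risk_score(example))  # 65 for the ExampleNotesAI profile above
```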
Control: Wing’s discovery and analysis pinpoint the most critical issues to address, allowing security teams to easily understand the level of risk and the types of actions needed. For example, they can decide whether to permit a certain application’s usage at all, or simply configure its AI elements to better match their security policy.
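The control step can then be framed as a simple policy decision over the assessment’s output. Again, the thresholds and actions below are purely illustrative, not the product's actual logic:

```python
def recommend_action(score: int, configurable: bool) -> str:
    """Hypothetical control policy; thresholds and actions are purely illustrative."""
    if score < 30:
        return "allow"
    if configurable:
        return "allow, but restrict the AI features via the app's settings"
    if score < 70:
        return "allow with reduced permissions and user awareness"
    return "block pending security review"


print(recommend_action(65, configurable=True))
# -> allow, but restrict the AI features via the app's settings
```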
The Secret: Automating All Of The Above
By automating discovery, assessment, and control, security teams quickly learn where to focus their efforts instead of spreading themselves thin across a huge and evolving attack surface. In turn, this significantly reduces risk.
Wing’s automated workflows also enable a cross-organizational approach: by letting security teams communicate directly with an application’s admins and users, Wing prompts better-informed security decisions and a stronger culture of inclusion, rather than simple block- or allow-listing.
In an era where SaaS applications are omnipresent, their integration with artificial intelligence raises a new type of challenge. On the one hand, AI usage has become a great tool for boosting productivity, and employees should be able to use it for its many benefits. On the other hand, as the reliance on AI in SaaS applications continues to surge, the potential risks associated with data usage become more pronounced.
Wing Security has responded to this challenge with a new approach aimed at empowering organizations to navigate and control the escalating use of AI within their operations, while keeping end users in the loop and ensuring they can safely use the AI-SaaS they need. Its automated control platform provides a comprehensive understanding of how AI applications utilize organizational data and know-how, addressing issues such as data storage, model training, and the human element in the AI loop. Security teams save precious time thanks to clear risk prioritization and user involvement.