Cybersecurity automation solutions provider Torq has released an AI-based capability, dubbed Torq Socrates, to help organizations track, prioritize, and respond to critical security threats.
The offering combines intelligence signals from across organizations’ security ecosystems to drive autonomous remediation, while learning and evolving as it analyzes security events, according to the company.
“Torq Socrates is a rare example of a breakthrough innovation that aims at changing the rules of the game, putting AI in the ‘pilot’ action seat while introducing a responsible AI adoption architecture, leaving the control over the activities strictly ‘in the hands’ of analysts and architects,” said Leonid Belkind, co-founder and chief technology officer of Torq.
Torq Socrates is now in limited availability for select enterprise organizations. Torq will showcase its capabilities at the upcoming Black Hat conference next week.
Torq’s AI automates security response
Torq Socrates is designed to use AI to automate key security operations activities, including alert triage, contextual data enrichment, incident investigation, escalation, and response. To do so, the AI model draws on open source data.
“The unique property of Torq Socrates is that it is built on top of off-the-shelf commercial and open source Large Language AI Models (LLMs), instead of developing dedicated models trained on specific data,” Belkind said.
The AI agent serves as "connective tissue" between the LLM capabilities and the organizational tools and data, according to Belkind.
The agent also leverages public documents, including security frameworks such as MITRE ATT&CK, that describe security operations procedures and other relevant material used in its model training, and it uses them to contextualize the outcomes of events and actions.
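Torq has not described how this contextualization is implemented. As a purely illustrative sketch (the rule names and technique mappings below are hypothetical, not Torq's), an agent could annotate a raw alert with the MITRE ATT&CK technique its detection rule corresponds to before handing it to the model:

```python
# Hypothetical sketch: enriching a raw alert with MITRE ATT&CK context
# before it is passed to an LLM for triage. Mappings are illustrative only.

ATTACK_TECHNIQUES = {
    "powershell_encoded_command": ("T1059.001", "Command and Scripting Interpreter: PowerShell"),
    "credential_dump_lsass": ("T1003.001", "OS Credential Dumping: LSASS Memory"),
}

def enrich_with_attack_context(alert: dict) -> dict:
    """Attach an ATT&CK technique ID and name to an alert, if one is known."""
    technique = ATTACK_TECHNIQUES.get(alert.get("detection_rule", ""))
    if technique:
        alert["attack_technique_id"], alert["attack_technique_name"] = technique
    return alert

alert = {"detection_rule": "powershell_encoded_command", "host": "workstation-42"}
print(enrich_with_attack_context(alert))
```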
Socrates is powered by LLMs
Torq Socrates is based on LLMs that analyze and understand each organization’s unique SOC playbooks and adapt responses accordingly.
“It is based on the ReAct (Reason + Act) LLM approach that interleaves AI-based reasoning with an informed, continuously updated actionable methodology,” Belkind said.
“LLM analyzes the tool output (provided in a potentially large, structured document format) to extract the information critical to deciding on the next action to be taken according to the operational guidelines,” he added. “For example: ‘Is the sample malicious?’ ‘Is the user a VIP?’ and ‘Have any activities matching a specific pattern been found?’.”
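Torq has not published implementation details, but a ReAct-style agent generally interleaves an LLM reasoning step with tool calls, feeding each tool's output back into the next reasoning step. The sketch below shows that generic pattern only; the `llm()` stub, tool names, and JSON protocol are assumptions, not Torq's code.

```python
# Generic ReAct-style loop: the LLM reasons over the latest observation and
# either selects the next tool to run or returns a final triage decision.
import json

def llm(prompt: str) -> str:
    """Stub for an LLM call; a real agent would call a hosted model here."""
    raise NotImplementedError

# Toy tools standing in for real integrations (sandbox, identity provider, etc.).
TOOLS = {
    "detonate_sample": lambda args: {"verdict": "malicious", "score": 92},
    "lookup_user": lambda args: {"user": args["user"], "vip": True},
}

def react_agent(alert: dict, max_steps: int = 5) -> dict:
    observation = json.dumps(alert)
    for _ in range(max_steps):
        # Reason: ask the model what to do next, given the playbook and the
        # latest observation.
        decision = json.loads(llm(
            "You are a SOC triage agent. Follow the playbook.\n"
            f"Observation: {observation}\n"
            'Reply with JSON: {"action": <tool name or "finish">, "args": {...}}'
        ))
        if decision["action"] == "finish":
            return decision["args"]  # e.g. {"escalate": true, "reason": "..."}
        # Act: run the chosen tool and feed its output back as the next observation.
        observation = json.dumps(TOOLS[decision["action"]](decision["args"]))
    return {"escalate": True, "reason": "step budget exhausted"}
```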
Socrates is built solely on Torq workflows and operates within organization-defined parameters, the company said, explaining why it considers Socrates safe AI. The agent implements a human-in-the-loop approach that requires human approval before performing potentially disruptive actions, such as quarantining an executive's laptop or blocking entire network segments, according to Torq.
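One way to picture such a gate (helper names below are hypothetical, not Torq's API) is an explicit list of disruptive actions that are routed to an analyst for approval, while everything else runs automatically:

```python
# Hypothetical human-in-the-loop gate: disruptive actions (quarantine, network
# blocks) wait for analyst approval; low-risk actions execute automatically.
DISRUPTIVE_ACTIONS = {"quarantine_host", "block_network_segment", "disable_account"}

def request_analyst_approval(action: str, target: str) -> bool:
    """Stub: in practice this would notify an analyst via chat or ticketing."""
    answer = input(f"Approve '{action}' on '{target}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute_action(action: str, target: str, runner) -> None:
    if action in DISRUPTIVE_ACTIONS and not request_analyst_approval(action, target):
        print(f"Skipped {action} on {target}: analyst approval not granted")
        return
    runner(action, target)

# Usage (runner would be a real integration in practice):
# execute_action("quarantine_host", "exec-laptop-01", runner=edr_client.run)
```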