

Sysdig Launches AI Workload Security To Mitigate Active AI Risk

Sysdig, the leader in cloud security powered by runtime insights, on Tuesday announced the launch of AI Workload Security to identify and manage active risk associated with AI environments. The newest addition to the company’s cloud-native application protection platform (CNAPP) is designed to help security teams see and understand their AI environments, identify suspicious activity on workloads that contain AI packages, and fix issues fast ahead of imminent regulation.

“The addition of AI Workload Security to the Sysdig CNAPP comes in response to widespread demand for a solution that empowers the secure adoption of AI so companies can harness its power and accelerate business. With AI Workload Security, organizations can understand their AI infrastructure and identify active risks such as workloads that contain in-use AI packages, are publicly exposed, and have exploitable vulnerabilities. AI workloads are a prime target for bad actors, and AI Workload Security allows defenders to detect suspicious activity within these workloads and address the most imminent threats to their AI models and training data,” said Knox Anderson, SVP of Product Management at Sysdig.

Kubernetes has become the deployment platform of choice for AI. However, securing data and mitigating active risk in containerized workloads is inherently difficult because containers are ephemeral. Understanding malicious activity and runtime events that could lead to a breach of sensitive training data requires a real-time solution with runtime visibility. The Sysdig CNAPP is rooted in Falco, the open source standard for threat detection in the cloud, and is designed for cloud-native runtime security in environments such as Kubernetes clusters, regardless of whether those workloads run in the cloud or on premises.
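To illustrate the kind of runtime detection Falco enables, a rule along the following lines could flag an interactive shell spawned inside a container identified as an AI workload. This is a hypothetical sketch, not a shipped Sysdig or Falco rule: the image list is an invented example, and `spawned_process` is a macro from Falco's default ruleset that the snippet assumes is loaded.

```yaml
# Hypothetical example list: container images assumed to run AI packages.
- list: ai_workload_images
  items: [my-org/llm-inference, my-org/model-training]

# Hypothetical Falco-style rule; relies on the spawned_process macro
# from Falco's default ruleset.
- rule: Shell Spawned in AI Workload
  desc: >
    Detect an interactive shell starting inside a container whose image
    appears in the (assumed) list of AI workload images.
  condition: >
    spawned_process and container
    and proc.name in (bash, sh, zsh)
    and container.image.repository in (ai_workload_images)
  output: >
    Shell spawned in AI workload (user=%user.name
    container=%container.name image=%container.image.repository
    command=%proc.cmdline)
  priority: WARNING
  tags: [ai, container, shell]
```

In practice such a rule would be tuned to the organization's own images and paired with detections for network egress and file access against model and training-data paths.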


With the introduction of real-time AI Workload Security, Sysdig helps companies immediately identify and prioritize workloads in their environment that run leading AI engines and software packages, such as OpenAI, Hugging Face, TensorFlow, and Anthropic. By understanding where AI workloads are running, Sysdig enables organizations to manage and control their AI usage, whether that usage is official or deployed without proper approval. Sysdig also simplifies triage and shortens response times by fully integrating real-time AI Workload Security with the company’s unified risk findings feature, giving security teams a single view of all correlated risks and events and a more efficient workflow to prioritize, investigate, and remediate active AI risks.
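The package-inventory step described above, identifying which workloads contain AI packages, can be illustrated at a much smaller scale. The sketch below is an assumption for illustration, not Sysdig's implementation: it checks only the local Python environment (a real scanner inspects running containers), and the package list and function name are hypothetical, drawn from the packages this article names.

```python
# Illustrative sketch: flag known GenAI packages installed in the current
# Python environment. The package-name set is an assumption based on the
# packages mentioned in the article, not an exhaustive or official list.
from importlib import metadata

GENAI_PACKAGES = {"openai", "transformers", "nltk", "tensorflow", "anthropic"}


def detect_genai_packages() -> list[str]:
    """Return the sorted subset of GENAI_PACKAGES installed locally."""
    installed = {
        (dist.metadata["Name"] or "").lower()
        for dist in metadata.distributions()
    }
    return sorted(GENAI_PACKAGES & installed)


if __name__ == "__main__":
    found = detect_genai_packages()
    print(f"AI packages detected: {found or 'none'}")
```

A production scanner would correlate this inventory with exposure data (is the workload reachable from the internet?) and vulnerability data before prioritizing a finding, which is the correlation the unified risk findings feature describes.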

Widespread AI Adoption Brings Growing Public Exposure

Of all GenAI workloads currently deployed, Sysdig found that 34% are publicly exposed. Public exposure, meaning a workload is accessible from the internet or another untrusted network without appropriate security measures in place, puts the sensitive data leveraged by GenAI models at urgent risk. In addition to increasing the likelihood of security breaches and data leaks, public exposure also opens the door to regulatory compliance challenges.

Today’s announcement is timely given the increasingly rapid pursuit of AI deployment, as well as growing concern over the security of these models and the data used to train them. A recent Cloud Security Alliance survey found that over half of organizations (55%) plan to implement GenAI solutions this year. Sysdig also found that, since December, the deployment of OpenAI packages has nearly tripled. Of the GenAI packages currently deployed, OpenAI makes up 28%, followed by Hugging Face’s Transformers at 19%, Natural Language Toolkit (NLTK) at 18%, TensorFlow at 11%, and Anthropic at less than 1%.

The introduction of AI Workload Security also aligns with forthcoming guidelines and increasing pressure to audit and regulate AI, as proposed by the Biden Administration’s October 2023 Executive Order and subsequent recommendations from the National Telecommunications and Information Administration (NTIA) in March 2024. By highlighting public exposure, exploitable vulnerabilities, and runtime events, Sysdig AI Workload Security helps organizations across industries fix issues fast ahead of imminent AI regulation.

“Without adequate runtime insights, AI workloads expose organizations to undue risk. Threat actors can exploit vulnerabilities in running packages to access sensitive training data or modify AI requests and responses,” continued Anderson. “Organizations must establish enhanced security controls and runtime detections tailored to these unique challenges, and Sysdig helps customers address these ethical concerns and blind spots so they can reap all the benefits of efficiency and speed that generative AI offers.”


  • Explore the AI Workload Security landing page.
  • Read the AI Workload Security blog.
  • Visit Sysdig at the RSA Conference (Booth S-742) in San Francisco, CA, May 6-9, 2024, to see an AI Workload Security demo.

© Scoop Media
