Oligo Security

Computer and Network Security

New York, NY 10,382 followers

The Runtime Security Company

About us

Oligo Security is a runtime security platform that protects cloud applications from real-world attacks. Using deep application inspection, real-time monitoring, and context-aware analysis, Oligo helps teams discover vulnerabilities in production, prioritize what matters, and stop application-based exploits at runtime. Download the new CADR for Dummies guide: https://www.oligo.security/cadr-for-dummies

Website
https://www.oligo.security/
Industry
Computer and Network Security
Company size
51-200 employees
Headquarters
New York, NY
Type
Privately Held


Updates

  • One of the most trusted libraries in the open-source ecosystem was weaponized this week, and most developers had no idea it was happening. Axios, used by millions of projects for making HTTP requests, had two malicious versions (1.14.1 and 0.30.4) quietly published to npm. An attacker compromised a maintainer's account, slipped past the CI/CD pipeline, and bundled a dependency that installed a remote access trojan the moment someone ran npm install. This is why supply chain attacks are so effective: npm install isn't a decision developers consciously make every time. It's automated, it's invisible, and it happens constantly. Our research team broke down what you need to know, the recommended actions to take, and where runtime visibility and protection fit into the equation. https://lnkd.in/e_sZ4A6p

  • There's no shortage of hype around securing AI, yet most of it is built around the wrong signal. We just released our AI in Production: The 2026 Runtime Execution Report, and it shows that what actually runs matters far more than what shows up in a scan. Across the production workloads observed by our platform, we found: → Most AI is shelfware: 76% of installed AI libraries never execute at runtime → Presence doesn't mean production: the OpenAI SDK is present in 86% of environments, but only a third actually invoke it → Orchestration frameworks are now the de facto AI control plane: 80% of environments run LangChain or LangSmith → Agentic behavior is already widespread: 49% of organizations use function calling and 45% have adopted MCP (often without explicitly building "agents"). Based on the data, it's clear that real risk emerges only when models are actually invoked, orchestrated, and interacting with data. In most cases, security teams are managing AI risk based on incomplete information: what's installed, what's declared, and what's assumed. But AI risk doesn't live there. It lives in what actually executes in production: the workflows that run, the models that are called, and the behavior that emerges at runtime. This is the gap our report exposes, and why execution evidence is becoming the defining signal for AI security. If you're thinking about securing AI, or simply want to see what organizations are actually using today, grab a copy at the link in the comments.

  • Oligo was named to Fast Company's Most Innovative Companies list. It's not only a proud moment for our team, but also a clear sign of where security is going. For years, security has been built around static analysis and assumptions. But attacks don't happen in theory; they happen at runtime. That shift is why we're focused on making runtime security real for the world's largest organizations. Grateful to our team for building this, and to our customers for trusting us to be the definitive source of truth for their security programs.

  • If you can believe it, #RSAC2026 is right around the corner 👀 Team Oligo will be in full force, sharing how we empower security teams with the runtime context and protection needed to: 1. Reduce exploitable risk 2. Detect exploit attempts at the application layer 3. Surgically block malicious activity without introducing downtime If you're thinking about unifying runtime protection across your apps, cloud workloads, and AI systems, let's connect.

  • Great breakdown of how #runtime #security is evolving from Andrew Green. Where and how you observe runtime behavior matters, especially with the majority of modern attacks originating inside the application layer. That’s why deep application context at runtime is critical for: 1. Understanding what’s exploitable vs. noise 2. Securing environments that are increasingly AI-driven and dynamic Appreciate the inclusion and the thoughtful map of the space. Also, if you want to separate fact from fiction when it comes to eBPF sensors, check out the link in the comments.

    Runtime security ranges from 😱 clunky sensors that use 20% CPU and break every device in July 2024, to 😵 kernel-level enforcement of app policies enriched w/ SBOM context. 🏗️ Architecturally, you can categorize vendors depending on where they observe and enforce runtime policies: 1. Infrastructure and environment (purple box) - the underlying host and OS 2. Process execution (blue box) - where apps run, like containers, with the associated registries and whatever else 3. Application logic (yellow box) - looking at whether code is doing what it should 4. These damn LLM agents - which nobody really knows how to properly secure yet, but my advice is that something is always better than nothing. There is no one method that's better than the others: 💾 🐝 If you're running Windows Server 2000, you really don't need to bother with eBPF-based solutions 🐳 👴 If you have a K8s-based microservices architecture with all the bells and whistles, you can't secure much with an AV 🛠️ Feature-wise, next-gen solutions are looking to detect nuances in attacker behavior that would either bypass a simple signature scan or get buried under a bunch of false positives. You can do so with stuff like: ➡️ Context correlation: identifying high-risk toxic combinations ➡️ Attack paths: analysis showing how an adversary can chain together specific vulnerabilities, misconfigurations, and excessive permissions ➡️ SBOM/VEX: inventories of software components + context to specify whether those components are actually exploitable in a given environment.
What's funny about the big players with good products is that they've skipped some feature nuances to jump straight to agentic remediation, so please forgive me, marketing teams of Palo Alto Networks, CrowdStrike, and SentinelOne, for disobeying your logo guidelines. But we all know the big players, so I'll call out the next-gen players, which include: Oligo Security, Miggo Security, Raven.io, Contrast Security, Wiz, Upwind Security, RoonCyber, AccuKnox, ARMO, Sysdig, Sweet Security, Aikido Security, and Spyderbat. If I finish writing the full blog post in time, it'll be in the comments 👇
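The SBOM/VEX idea mentioned above can be sketched in a few lines: take a component inventory, apply VEX statements, and keep only the findings still considered exploitable in this environment. The field names below are simplified assumptions loosely modeled on CycloneDX/OpenVEX status values, not any vendor's actual schema:

```javascript
// Sketch: filter SBOM-derived vulnerability findings using VEX statements.
// A finding is kept when there is no VEX statement for it, or when the
// statement says "affected"; "not_affected" and "fixed" drop out.
function exploitableFindings(sbomComponents, vexStatements) {
  // Index VEX statements by component@version plus vulnerability id.
  const status = new Map();
  for (const s of vexStatements) {
    status.set(`${s.component}@${s.version}:${s.vulnId}`, s.status);
  }
  const findings = [];
  for (const c of sbomComponents) {
    for (const vulnId of c.vulns || []) {
      const st = status.get(`${c.name}@${c.version}:${vulnId}`);
      if (st === undefined || st === "affected") {
        findings.push({ component: c.name, version: c.version, vulnId });
      }
    }
  }
  return findings;
}

module.exports = { exploitableFindings };
```

The design choice worth noting: components with no VEX statement stay in the result, i.e. absence of evidence is treated as potentially exploitable rather than safe.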

  • Great to see Oligo shortlisted in two categories in the 2026 SC Media Awards: 🏆 Best Threat Detection Technology 🏆 Best Cloud Workload Protection Solution It's an honor to be recognized alongside some of the top companies in cybersecurity. Proud of the work our team is doing to help organizations block modern attacks at runtime. https://lnkd.in/egPi-riS

  • Attackers don’t wait for your next scan. They operate in real time. As AI adoption accelerates, security teams need real-time visibility and protection where AI systems actually run. We’re excited to share that Oligo Runtime AI Security is now integrated with the Amazon Web Services (AWS) Security Hub Extended Plan. The AWS Security Hub Extended Plan delivers curated enterprise security solutions from AWS and trusted partners through a simplified experience: one contract, one bill, consolidated support, and flexible pricing. With Oligo included, customers can: • Gain real-time visibility into AI models, agents, and applications • Prioritize risk based on actual runtime behavior • Protect AI systems against active threats and supply chain risk • Align AI security with their existing AWS security operating model Through this collaboration, we’re enabling enterprises to adopt AI at scale, with the runtime clarity these systems demand.

  • Tim Starks from CyberScoop broke down the details of the distillation campaign published yesterday by Anthropic. Our co-founder and CTO Gal Elbaz shared his perspective on how illicit distillation at industrial scale is a form of IP extraction, and why separating powerful model capabilities from their original guardrails creates meaningful downstream cyber risk. As frontier AI systems advance at a rapid pace, continuous monitoring, safeguards, and enforcement are foundational to security. https://lnkd.in/eSGiVUbM
