Cloud Security Podcast

Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!

Episode list

#269
March 30, 2026

EP269 Reflections on RSA 2026 - Beyond AI AI AI AI AI AI AI

Guest:

  • No guests! Just Tim and Anton
29:29

Topics covered:

  • Hard to believe we've been doing these since 2022, is that right?
  • What did we see this year at RSA, apart from AI? And more AI? And more AI?
  • What framework can we use to understand the approaches vendors take to AI and security? Just saying “AI washing” is not enough!
  • How do you tell an “AI washer” from an “AI tourist”?
  • I sense that “securing AI” (and agents) is finally growing as fast as “using AI for security”, do you agree?
  • Is the AI vulnerability apocalypse coming? Soon?
  • Have we seen any signs of AI backlash?
#268
March 23, 2026

EP268 Weaponizing the Administrative Fabric: Cloud Identity and SaaS Compromise in M Trends 2026

Guests:

29:29

Topics covered:

  • Do we need to rethink "Mean Time to Respond" entirely, or are we just in deep trouble?
  • Why are threat groups collaborating so well, and are there actual lessons for defenders in their "business" model?
  • What is the scalable advice for teams worried about voice phishing and GenAI cloning?
  • What does "weaponizing the administrative fabric" actually mean in a world where identity is the perimeter?
  • Why is identity/SaaS compromise "news" in 2026 when cloud security folks have been shouting about it for years? What actually changed?
  • What’s the latest in supply chain compromise, particularly regarding malicious open-source packages?
  • How do we defend against malware that is "lazy" enough to use the victim’s own AI tools for reconnaissance?
  • What is the specific advice for Detection and Response (D&R) teams to handle "living off the land" (or "living off the cloud")?
  • How do you fix the situation when IT and Security departments genuinely hate each other?
  • Besides reading the report, what is the one book or piece of advice for a CISO to survive this year?
#267
March 16, 2026

EP267 AI SOC or AI in a SOC? Cutting Through Hype, Pricing Models, and SIEM Detection Efficacy with Raffy Marty

Guest:

29:29

Topics covered:

  • You argue that declaring existing SIEMs obsolete is a "marketing slogan" rather than a true thesis. What is the real pain point, and what is the actual gap in traditional SIEMs, as opposed to the more sensational claims?
  • You highlight that "correlation, state, timelines, and real-time detection require locality," making centralization a necessary trade-off. Can a truly federated or decoupled SIEM architecture achieve the same fidelity and real-time performance for complex, stateful detections as a centralized one?
  • You call the rise of independent security data pipelines the "SIEM Trojan Horse." How quickly is this abstraction layer turning SIEM into a “swappable” component, and what should SIEM vendors have done differently years ago to prevent this market from existing?
  • This "AI SOC" thing, is this even real? Is AI in a SOC a better label? Do you think major SIEM vendors will own this very soon, like they did with UEBA and SOAR?
  • If volume-based pricing is flawed because it penalizes good security hygiene, what is a better SIEM pricing model that fairly addresses compute, enrichment, and retention costs without just shifting the volume cost to unpredictable query charges?
  • You question the idea that startups can find a better way to release detection rules than large vendors with significant content teams. What metrics should security leaders use to evaluate the quality of a vendor's detection engineering (DE) output beyond just coverage numbers? Can AI fix DE?
#266
March 9, 2026

EP266 Resetting the SOC for Code War: Allie Mellen on Detecting State Actors vs. Doing the Basics

Topics: CISO, SIEM and SOC
29:29

Topics covered:

  • Your book focuses on the US, China, and Russia. When you were planning the book did you also want to cover players like Israel, Iran, and North Korea?
  • Most of our listeners are migrating to or operating heavily in the cloud. As nations refine their “digital battlefield” strategies, does the "shared responsibility model" actually hold up against a nation-state actor?
  • How does a company’s detection strategy need to change when the adversary isn't a teenager looking for a ransom, but a state-funded group whose goal might be long-term persistence or subtle data manipulation? How should people allocate their resources to defending against both of these threats? 
  • How afraid are you of a “bad guy with AI” scenario? Mild anxiety or apocalyptic fear?
  • Do you see AI primarily helping "Tier 2" nations close the capability gap with the "Big Three," or does it just further cement the dominance of the nations that own the underlying compute and models?
  • You’ve spent a lot of time as an analyst looking at how enterprises buy and run security tech. For a CISO at, say, a mid-tier logistics company, should 'nation-state cyberattacks' even be on the threat model? Or is worrying about spies just a form of security theater when they haven’t even solved basic credential theft yet?
#265
March 2, 2026

EP265 Beyond Shadow IT: Unsanctioned AI Agents Don't Just Talk, They Act!

Guest:

29:29

Topics covered:

  • Harmonic Security focuses on securing generative AI in use. Can you walk us through a real, anonymized example of a data leak caused by employee AI usage that your platform has identified?
  • AI governance gets thrown around a lot. What does this mean in the context of Shadow AI? How should organizations be thinking about governing AI in light of upcoming AI regulations in the US and in the EU?
  • If we generally agree that employees are using AI tools before they are sanctioned, how can organizations control this? Network, API, endpoint?
  • Many organizations struggle with the "ban vs. embrace" debate for generative AI. Based on your experience, what's a compelling argument for moving from a blanket ban to a managed, secure adoption model? Can you share a success story where this approach demonstrably reduced risk?
  • The term "shadow AI" is often used interchangeably with "shadow IT" (but for AI-powered applications), yet you've highlighted that AI is a different beast. What is the single biggest distinction between managing the risk of unsanctioned AI tools versus unsanctioned IT applications?
  • Looking forward, where do you see the biggest risks in the evolution of shadow AI? For instance, will the next threat be from highly specialized AI agents trained on proprietary data, or from the rapid proliferation of new, unmonitored open-source models?
  • Given the speed of change in this space, what's one piece of advice you'd give to a CISO today who is just beginning to get a handle on their organization's shadow AI problem?
#264
February 23, 2026

EP264 Measuring Your (Agentic) SOC: Two Security Leaders Walk into a Podcast

Guests:

29:29

Topics covered:

  • We’ve spent decades obsessed with MTTD (Mean Time to Detect) and MTTR (Mean Time to Respond). As AI agents begin to handle the bulk of triage at machine speed, do these metrics become "vanity metrics"? If an AI resolves an alert in seconds, does measuring the "mean" still tell us anything about the health of our security program, or should we be looking at "Time to Context" instead?
  • You mentioned the Maturity Triangle. Can you walk us through that framework? Specifically, how does AI change the balance between the three points of that triangle—is it shifting us from a "People-heavy" model to something more "Engineering-led," and where does the "Measurement" piece sit?
  • Google is famous for its "Engineering-led" approach to D&R. How is Google currently measuring the success of its own internal D&R program? Specifically, how are you quantifying "Toil Reduction"? Are we measuring how many hours we saved, or are we measuring the complexity of the threats our humans are now free to hunt?
  • Toil reduction is a laudable goal for team members, but what are the metrics we track and report up to document the overall improvement in D&R for Google’s board?
  • When you talk to your board about the success of AI in your security program, what are the 2 or 3 "Golden Metrics" that actually move the needle for them? How do you prove that an AI-driven SOC is actually better, not just faster?
  • We often talk about AI as an "assistant," but we’re moving toward Agentic SOCs. How should organizations measure the "unit economics" of their SOC? Should we be tracking the ratio of AI-handled vs. Human-handled incidents, and at what point does a high AI-handle rate become a risk rather than a success?
#263
February 16, 2026

EP263 SOC Refurbishing: Why New Tools Won’t Fix Broken Processes (Even With AI)

Guest:

29:29

Topics covered:

  • What is the right way for people to bridge the gap and translate executive dreams and board goals into the reality of life on the ground?
  • How do we talk to people who think they have "transformed" their SOC simply by buying a better, shinier product (like a modern SIEM) while leaving their old processes intact?
  • What are the specific challenges and advantages you’ve seen with a federated SOC versus a centralized one? What does a "federated" or "sub-SOC" model actually mean in practice?
  • Why is the message that "EDR doesn't cover everything" so hard for some people to hear? Is this obsession with EDR a business decision or technology debt?
  • How do you expect AI to change the calculus around data centralization versus data federation?
  • What is your favorite example of telemetry that is useful, but usually excluded from a SIEM?
  • What are the Detection and Response organizational metrics that you think are most valuable?
  • Is the continued use of Excel an issue of tooling, laziness, or just because it is a fundamentally good way to interact with a small database?
#262
February 9, 2026

EP262 Freedom, Responsibility, and the Federated Guardrails: A New Model for Modern Security

Guest:

Topics: CISO
29:29

Topics covered:

  • You mentioned that centralized security can't work anymore. Can you elaborate on the key changes—driven by cloud, SaaS, and AI—that have made this traditional model unsustainable for a modern organization?
  • Why do some persist with a centralized, top-down approach to security, despite that?
  • What do you mean by “Freedom, Responsibility, and distributed security”?
  • Can you explain the difference between “centralized security” and what you define as “security with distributed ownership”? Is this the same as “federated”?
  • In our conversation you mentioned “cloud and AI-native”. What do you mean by this (especially “AI-native”), and how is it changing your approach to security?
  • You introduce the concept of "Security as quality" suggesting that a security-unaware developer is essentially a bad software developer. How do you shift the culture and internal metrics to make security an inherent quality standard, rather than a separate, compliance-driven checklist?
  • You likened the central security team's new role to a "911 emergency service." Beyond incident response, what stays central no matter what, and how does the central team successfully influence the security posture of the entire organization without being directly responsible for the day-to-day work?
#261
February 2, 2026

EP261 No More Aspiration: Scaling a Modern SOC with Real AI Agents

Guest:

29:29

Topics covered:

  • We ended our season talking about the AI apocalypse. In your opinion, are we living in the world that the guests describe in their apocalypse paper?
  • Do you think AI-powered attacks are really here, and if so, what is your plan to respond? Is it faster patching? Better D&R? Something else altogether? 
  • Your team has a hybrid agent workflow: could you tell us what that means? Also, please define “AI agent”.
  • What are your production use cases for AI and AI agents in your SOC?
  • What are your overall SOC metrics and how does the agentic AI part play into that?
  • It's one thing to ask a team "hey what did y'all do last week" and get a good report - how are you measuring the agentic parts of your SOC?
  • How are you thinking about what comes next once AI is automatically writing good (!) rules for your team out of research blog posts and TI papers? 
#260
January 26, 2026

EP260 The Agentic IAM Trainwreck: Why Your Bots Need Better Permissions Than Your Admins

Guest:

29:29

Topics covered:

  • Why is agent security so different from “just” LLM security?
  • Why now? Agents are coming, sure, but they are, to put it mildly, not in wide use. Why create a top 10 list now rather than wait for people to make the mistakes?
  • It sounds like “agents + IAM” is a disaster waiting to happen. What should be our approach for solving this? Do we have one?
  • Which one agentic AI risk keeps you up at night? 
  • Is there an interesting AI shared responsibility angle here? Agent developer, operator, downstream system operator?
  • We are seeing a lot of experimentation, but sometimes little value, from agents. What are the biggest challenges of secure agentic AI and AI agent adoption in enterprises?