vCluster

Software Development

San Francisco, California 21,600 followers

About us

At vCluster, we build the tools that make Kubernetes simpler, more efficient, and cost-effective. Our open-source projects enable platform engineers to streamline multi-tenancy, reduce cloud costs, and scale their Kubernetes environments with ease. With over 100 enterprises using our solutions, we’re focused on helping teams move faster, save money, and maintain stability in their platform stacks. Whether you’re managing multi-tenant clusters or optimizing resource usage, vCluster provides the building blocks to make it happen.

Website
https://vcluster.com
Industry
Software Development
Company size
51-200 employees
Headquarters
San Francisco, California
Type
Privately Held
Specialties
Kubernetes, vCluster, DevPod, JsPolicy, Cloud-Native Technologies, EKS, AKS, GKE, k3s, Multi-tenancy, Platform Engineering, Cloud Cost, virtual clusters, FinOps, and Developer Self-Service

Locations

  • Primary

    415 Mission St

    Fl. 37

    San Francisco, California 94105, US

Updates

  • Every platform team hits the same wall: giving developers real Kubernetes environments without spinning up real clusters for each one. Sharan Kumar Reddy Sutrapu breaks down how vCluster solves this by running a full control plane inside a single pod on your existing cluster. The post covers the architecture end to end, from the Syncer to networking to isolation. Worth a read if you want to understand how virtual clusters actually work under the hood: https://lnkd.in/eAC-b-XE #Kubernetes #PlatformEngineering #DevOps #CloudNative
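The Syncer's role can be illustrated with a sketch. A tenant creates an ordinary pod inside the virtual cluster, and the Syncer mirrors it into the host namespace that hosts the virtual cluster, rewriting the name to avoid collisions. The namespaces and the vCluster name below are illustrative; the rewritten pod name follows vCluster's documented `<pod>-x-<namespace>-x-<vcluster>` pattern:

```yaml
# As seen by the tenant, inside the virtual cluster:
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: team-a
spec:
  containers:
    - name: nginx
      image: nginx:1.27
---
# As seen on the host cluster, after the Syncer copies it into
# the namespace where the virtual cluster's control-plane pod runs:
apiVersion: v1
kind: Pod
metadata:
  name: web-x-team-a-x-my-vcluster
  namespace: vcluster-my-vcluster
spec:
  containers:
    - name: nginx
      image: nginx:1.27
```

The tenant only ever sees the first object through the virtual API server; the host scheduler and kubelets only ever see the second.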

  • Namespaces give you weak isolation. Separate clusters give you massive overhead. And the gap between what the public cloud offers and what you can run on prem keeps getting wider. We've spent years building across that spectrum: virtual clusters, Private Nodes, Auto Nodes, Sleep Mode, all designed to give teams real isolation without the infrastructure tax. Now we're going deep. Multi-Tenancy March is a series of live sessions where we break down the technologies behind Kubernetes tenancy. These are not surface-level overviews; we're talking architecture under the hood, real-world production patterns, and the trade-offs. Whether you're a platform engineer managing 10 teams on shared infrastructure, a GPU cloud provider isolating tenants, or an enterprise running on prem trying to get an EKS-like experience, this is for you. Register now and join us live: https://lnkd.in/dDbiaYNk #Kubernetes #MultiTenancy #PlatformEngineering #CloudNative

  • vCluster reposted this

    🤔 Multi-tenancy in Kubernetes used to be a single question: how do you slice a cluster? Namespaces, virtual clusters, dedicated clusters, it was all about giving teams a consistent, standardized space. Last year, I mapped every mechanism into a single spectrum. It worked. But AI agents changed the game. These workloads are persistent, execute arbitrary code, and we can't trust them. A compromised model can escape the sandbox, exfiltrate data, or both. Suddenly, one axis isn't enough. You need three:

    1. Tenancy slicing: how you carve the cluster
    2. Workload protection: how hard the sandbox is (gVisor, Kata, microVMs)
    3. Policy: what traffic goes in and out

    They're orthogonal:

    ❌ Sandboxing doesn't replace the tenant organization
    ❌ Network policies don't replace sandboxing

    Each answers a different question. I put together a short visual guide walking through all three axes and how they connect. I'll also cover this in depth in my Multitenancy March webinar series with vCluster: https://lnkd.in/g7jj-CtZ
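Two of the three axes map directly onto standard Kubernetes objects. A minimal sketch, assuming a gVisor (`runsc`) runtime is installed on the nodes; the names and the `agent-sandbox` namespace are illustrative:

```yaml
# Axis 2, workload protection: a RuntimeClass that runs pods under
# gVisor's runsc handler instead of the default container runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# Axis 3, policy: a default-deny NetworkPolicy blocking all ingress
# and egress for pods in the tenant's namespace; specific allowed
# flows would then be opened with additional policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: agent-sandbox
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

A workload opts into the sandbox by setting `runtimeClassName: gvisor` in its pod spec; the first axis, tenancy slicing, is what namespaces, virtual clusters, or dedicated clusters provide.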

  • vCluster reposted this

    vind: a better way to run Kubernetes demos (including vCluster Platform)

    If you've ever built or supported Kubernetes-based demo environments, you've probably hit pain on both ends of the spectrum.

    Cloud demo clusters (GKE / EKS / AKS):
    • Slow to provision
    • Expensive to keep around
    • Dependent on solid internet (conference Wi-Fi issues, anyone?)

    Local Kubernetes setups (KinD, Docker Desktop, OrbStack, etc.):
    • Fragile, snowflake configs
    • Missing pieces out of the box (like real load balancers in KinD)
    • Ingress, DNS, and networking quirks that love to surface mid-demo
    • Hard to reset cleanly between customers

    Different environments, same result: unreliable demos and unnecessary stress. vCluster Labs' vind solves all of these issues. vind is a lightweight, opinionated way to run Kubernetes inside Docker containers, which makes it an excellent foundation for reliable K8s demos, including vCluster Platform.

    What vind brings to the table:

    • Repeatable, demo-ready Kubernetes: ingress, DNS, networking, and clean resets handled the way demos actually need, not as an afterthought.
    • Local-first without the usual pitfalls: runs locally without cloud dependencies, while avoiding the fragile workarounds common with KinD or desktop Kubernetes setups.
    • Fast to spin up, easy to tear down: create a clean environment per demo, per customer, without leftovers.

    And launching today, the vCluster Platform free tier makes this even more compelling for presales and SE teams. It lets teams spin up vCluster instances on top of a local vind Kubernetes cluster, giving you realistic multi-tenancy, isolation, and workflows without needing cloud infrastructure at all. You can also easily add cloud nodes to your locally running vCluster with the free tier's vCluster VPN feature.

    vind + vCluster Platform free tier turns Kubernetes demos into something that's:
    • Fast
    • Portable
    • Repeatable
    • And far less stressful

    If you regularly demo Kubernetes-based software, this combo is absolutely worth a look. #Kubernetes #vCluster #Presales #DevEx #PlatformEngineering #CloudNative #Demos

  • Multi-tenant Kubernetes has an observability problem. You give every team their own virtual cluster. They get isolation, their own CRDs, their own API server. But now someone has to monitor all of it. Deploy a Prometheus per tenant? That's 50 monitoring stacks to maintain. That doesn't scale.

    The answer: one Prometheus in the host cluster, scraping metrics from every vCluster. Liquid Reply just published a detailed walkthrough of exactly how to do this, for both regular vClusters and Private Node vClusters.

    With regular vClusters, it's straightforward. Tenant pods run on shared worker nodes, so the host Prometheus discovers them automatically. No extra config. No per-tenant monitoring stack.

    With Private Nodes, it gets more interesting. Tenants run on dedicated EC2 instances with full infrastructure isolation, so the host Prometheus can't see those pods directly. Instead, you deploy a lightweight Prometheus inside each vCluster that remote-writes into the central instance. One operational plane. Strong isolation. Metrics still flow to a single Grafana with tenant-scoped dashboards.

    One monitoring stack. Many tenant clusters. Clear separation of responsibilities. Full implementation guide from Liquid Reply: https://lnkd.in/gcXxNX7F #Kubernetes #PlatformEngineering #Monitoring #MultiTenancy
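The Private Nodes pattern hinges on Prometheus's standard `remote_write` feature. A minimal sketch of the config for the lightweight Prometheus running inside a vCluster; the central-Prometheus URL and the `tenant` label value are illustrative assumptions:

```yaml
# prometheus.yml for the per-tenant Prometheus inside a vCluster.
global:
  scrape_interval: 30s
  external_labels:
    tenant: team-a          # tags every series, so the central instance
                            # can drive tenant-scoped Grafana dashboards

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod           # discover pods via the vCluster's own API server

remote_write:
  # Forward all scraped samples to the central Prometheus in the host
  # cluster. The receiving side must have remote-write ingestion enabled
  # (e.g. Prometheus started with --web.enable-remote-write-receiver).
  - url: http://central-prometheus.monitoring.svc:9090/api/v1/write
```

The `external_labels` block is what keeps tenants distinguishable once all the series land in one store.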

  • vCluster reposted this

    GPU sharing breaks down fast once AI training gets real. If you're running multi-tenant AI workloads on Kubernetes, you've probably felt the tension: either you share GPUs and deal with noisy neighbours, or you isolate teams and pay for idle hardware.

    In a new blog post, Jannis Schoormann walks through a practical alternative on GKE: isolated GPU nodes that spin up on demand and disappear when the job is done. The setup combines vCluster Auto Nodes, Private Nodes, and Karpenter-style provisioning to deliver:

    - hardware-level GPU isolation per tenant
    - on-demand provisioning for expensive GPUs
    - no separate clusters to manage
    - predictable scheduling without wasted capacity

    This isn't a conceptual piece; it's a hands-on walkthrough for teams who want GPU efficiency and isolation, without compromise.

    👉 Read the full guide: https://lnkd.in/d-59yJeg

    If GPU contention or cost is already slowing down your ML teams, this approach is worth a look. #Kubernetes #AI #GPU #GKE #vCluster #MLOps #CloudNative #LiquidReply
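Once a dedicated GPU node exists, landing a tenant workload on it uses stock Kubernetes primitives. A minimal sketch of a pod requesting a whole GPU; the node label and taint are illustrative assumptions about what the provisioner applies, while `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  nodeSelector:
    gpu-tenant: team-a          # illustrative label set by the provisioner
  tolerations:
    - key: nvidia.com/gpu       # GPU nodes are commonly tainted so that
      operator: Exists          # only GPU workloads schedule onto them
      effect: NoSchedule
  containers:
    - name: train
      image: nvidia/cuda:12.4.1-base-ubuntu22.04
      command: ["nvidia-smi"]   # sanity check: list the attached GPU
      resources:
        limits:
          nvidia.com/gpu: 1     # one whole GPU, managed by the device plugin
```

Because the node is dedicated and ephemeral, the taint plus the resource limit is enough to get hardware-level isolation without a standing cluster per tenant.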

  • 🎟️ FREE PASS GIVEAWAY: KCD DELHI 2026 🎟️

    Want to level up your Kubernetes skills? We're giving away free passes to KCD Delhi 2026:
    ✅ Full day of technical sessions
    ✅ Hands-on workshops
    ✅ Network with 500+ engineers
    ✅ Learn from 20+ expert speakers

    HOW TO WIN:
    1️⃣ Like this post
    2️⃣ Comment: Which session are you most excited about?
    3️⃣ Tag someone whose career would grow from attending!

    📍 Holiday Inn Aerocity, New Delhi
    📅 February 21, 2026

    #KCDDelhi2026 #Kubernetes #DevOps

  • KinD has been the go-to for local Kubernetes development. It's solid, reliable, and does what it promises. But developer needs have evolved. Teams now need LoadBalancer support, remote cluster access, GPU node attachment, and built-in resource optimization. That's why we built vind (vCluster in Docker), an open source tool that extends the Docker-based local cluster workflow:

    ✅️ Native LoadBalancer support out of the box
    ✅️ Free vCluster Platform UI to manage clusters from anywhere
    ✅️ Attach external nodes (even GPU nodes from EC2) via vCluster VPN
    ✅️ Pull-through image cache via Docker daemon
    ✅️ Sleep and wake clusters to save resources
    ✅️ Multi-node clusters with flexible CNI choices

    Real example: spin up a 4-node cluster locally, then attach a GPU node from Google Cloud. All managed through a single UI. No additional tooling required. Same Docker-based workflow developers know, with the features modern development demands.

    Full technical walkthrough with examples: https://lnkd.in/dGqz5rbp #Kubernetes #PlatformEngineering #DevOps
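"Native LoadBalancer support" means the standard Service type just resolves to a reachable address locally, where stock KinD leaves it pending unless you bolt on something like MetalLB or cloud-provider-kind. A minimal sketch with illustrative names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer    # in vind this gets a reachable local address;
                        # in stock KinD it stays <pending> without an
                        # add-on such as MetalLB or cloud-provider-kind
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```

That one field is the usual source of mid-demo ingress surprises on desktop Kubernetes setups.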


Funding

vCluster: 3 total rounds

Last Round

Series A

US$ 24.0M

See more info on Crunchbase