ScaleOps builds software that automatically manages cloud and AI infrastructure in real time. Now the New York-headquartered startup has announced $130 million in a Series C led by Insight Partners, a bet that companies don’t need more compute nearly as often as they need to stop wasting the compute they already have. CEO Yodar Shafrir co-founded the company in 2022 after seeing the resource mess up close at Run:ai, where static Kubernetes settings kept colliding with dynamic production workloads.
That pitch is landing because it’s painfully familiar. GPUs sit idle. Clusters get overprovisioned. DevOps teams burn hours tuning YAML, chasing incidents, and begging other teams to approve infrastructure changes that should’ve been automatic by now.
What is ScaleOps and how does it work?
ScaleOps is an autonomous Kubernetes optimization platform that runs on top of existing infrastructure and continuously adjusts resources in real time. It can be installed using a simple Helm command and then starts observing workload behavior and cluster signals. Based on this, it makes context-aware decisions around CPU, memory, replicas, nodes, and GPUs without requiring teams to replace their existing autoscalers.
The platform focuses on practical automation rather than just visibility. It automatically rightsizes pod requests and limits, detects workload types such as stateless services, Spark, Kafka, and batch jobs, and applies policies without manual configuration. It also handles pod healing and reacts to demand spikes, while working alongside tools like HPA and KEDA.
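The rightsizing idea is easy to sketch: instead of pinning a pod’s CPU request to a static guess, derive it from observed usage. The snippet below is an illustrative percentile-plus-headroom heuristic, not ScaleOps’ actual algorithm, and the sample values are invented.

```python
# Illustrative sketch of usage-based rightsizing -- the general technique
# the article describes, not ScaleOps' actual algorithm.
def rightsize(usage_samples, percentile=0.95, headroom=1.15):
    """Suggest a CPU request: the p95 of observed usage plus 15% headroom."""
    ordered = sorted(usage_samples)
    idx = int((len(ordered) - 1) * percentile)  # index of the p95 sample
    return round(ordered[idx] * headroom)

# Hypothetical per-pod CPU usage samples, in millicores.
samples = [120, 140, 135, 150, 400, 160, 145, 155, 138, 142]
print(rightsize(samples))  # → 184, versus a static 1000m request
```

Using a high percentile rather than the maximum keeps a one-off spike (the 400m sample here) from inflating the request, which is exactly the kind of judgment call that static configuration can’t make.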
On the cost side, ScaleOps improves infrastructure efficiency by consolidating underused nodes, optimizing pod placement, and increasing the use of spot instances with safe fallback options. For AI workloads, it introduces GPU-aware optimization, including dynamic GPU sharing and scaling based on real usage instead of averages.
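Node consolidation reduces to a bin-packing problem: fit the same pod requests onto fewer nodes. A minimal first-fit-decreasing sketch, a classic heuristic and not ScaleOps’ placement logic, with invented CPU figures:

```python
# First-fit-decreasing bin packing -- one classic way to consolidate pods
# onto fewer nodes; not ScaleOps' actual placement algorithm.
def consolidate(pod_cpus, node_capacity):
    """Return how many nodes the pod CPU requests need after packing."""
    free = []  # remaining capacity on each node in use
    for cpu in sorted(pod_cpus, reverse=True):  # largest pods first
        for i, remaining in enumerate(free):
            if remaining >= cpu:        # first node with room wins
                free[i] = remaining - cpu
                break
        else:
            free.append(node_capacity - cpu)  # open a new node
    return len(free)

# Ten hypothetical pods spread across five half-empty 6-core nodes
# fit on three nodes after packing.
print(consolidate([3, 3, 2, 2, 2, 1, 1, 1, 1, 1], node_capacity=6))  # → 3
```

Real placement has to respect affinity rules, disruption budgets, and spot-fallback safety, which is where the “production-safe” claim earns or loses trust.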
The impact is clear in day-to-day operations. Without ScaleOps, engineers spend time tuning configurations and reacting to issues. With it, infrastructure decisions happen automatically in production, helping teams reduce waste, improve performance, and manage cloud environments more efficiently.
Who founded ScaleOps and what has the company done so far?
The founding story
Shafrir started ScaleOps after a pattern kept repeating during his time at Run:ai. Customers liked GPU orchestration, but production teams still struggled to run real workloads efficiently once inference and broader cloud infrastructure demands showed up. His view is blunt: “Kubernetes is a great system. It’s flexible and highly configurable. But that’s also the problem.” That line gets at the whole company thesis—too much of modern infrastructure still depends on static settings in systems that are anything but static.
Why Yodar Shafrir had a head start
Shafrir wasn’t coming in cold. Before founding ScaleOps in March 2022, he worked at Run:ai as a senior software engineer and then as software team lead for AI orchestration. That matters. Run:ai lived at the intersection of scarce compute and enterprise infrastructure pain. He’d already seen how badly teams wanted automation that could do more than surface a problem on a dashboard.
Traction, customers, and the Series C
ScaleOps says the product is already in live production use across enterprise environments, not stuck in pilot mode. The company names Adobe, Wiz, DocuSign, Salesforce, and Coupa among its users. It serves large organizations globally, including in markets such as Europe and India, and reported more than 450% year-over-year growth. It also said it tripled headcount over the last 12 months and plans to more than triple again by the end of 2026.
The Series C totals $130 million at an $800 million valuation. Insight Partners led the round, with existing investors Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital participating again. ScaleOps says total funding is now about $210 million, and this comes roughly 18 months after its $58 million Series B in November 2024.
How ScaleOps stacks up against Cast AI, Kubecost, and Spot
This isn’t an empty category. Cast AI is probably the closest startup comp: it raised a $108 million Series C in April 2025 and has been pushing automation for Kubernetes, AI, and cloud workloads with a similar efficiency story. Kubecost came from a different angle—cost visibility and allocation for Kubernetes—and IBM bought it in September 2024 after its earlier venture funding. Spot Ocean, now part of Flexera after the Spot portfolio changed hands in 2025, focuses on continuous Kubernetes infrastructure optimization around cost, availability, and performance.
ScaleOps is trying to separate itself by saying visibility isn’t enough and partial automation isn’t trusted enough. Its differentiation pitch is full autonomy, application context, and production-safe execution out of the box, without piles of manual configuration. Whether that’s truly unique is debatable. But investors are clearly backing a platform that can bridge traditional cloud optimization and the newer AI-infrastructure problem in one control layer.
Why does this ScaleOps funding round matter?
Because this isn’t just growth capital for sales hires.
ScaleOps says the new money will fund new products and broaden the platform as enterprises spend more on AI infrastructure. That suggests the company wants to move from “Kubernetes cost optimizer” into something closer to an autonomous infrastructure control plane, one that handles compute, memory, storage, networking, and GPUs without constant human tuning. Shafrir’s own framing is that the company is building toward “infrastructure that manages itself,” and the roadmap now has the cash to chase that idea properly.
For customers, the point is speed and trust. Lots of teams already know where the waste is. What they usually lack is a production-safe system willing to act on that information in real time. If ScaleOps can keep reducing manual work without breaking SLOs, the value isn’t just lower cloud bills. It’s fewer interruptions, faster incident recovery, and less time spent babysitting autoscalers.
Investors’ thesis looks pretty clear. Insight isn’t backing another reporting layer. It’s backing software that touches one of the most expensive and least efficiently managed parts of the modern stack. And because Shafrir came from Run:ai, there’s a logic to the bet: he’s not selling “AI” as a vibe. He’s selling automation against a bottleneck he’s already worked on before.
What market trend is driving ScaleOps funding now?
Start with the money. Gartner forecast worldwide public cloud end-user spending at $723.4 billion in 2025, up from $595.7 billion in 2024. Infrastructure-as-a-service alone was projected to hit $211.9 billion, while platform-as-a-service was expected to reach $208.6 billion. When the spend base gets that large, even small efficiency gains become huge line items.
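The “huge line items” claim is simple arithmetic. Even a one-percent efficiency gain on the forecast 2025 spend runs into billions; the 1% figure below is hypothetical, the spend figure is from the Gartner forecast quoted above.

```python
# Figures from the Gartner forecast cited above; the 1% gain is hypothetical.
cloud_spend_2025 = 723.4e9   # worldwide public cloud end-user spending, USD
hypothetical_gain = 0.01     # a modest 1% efficiency improvement
savings = cloud_spend_2025 * hypothetical_gain
print(f"${savings / 1e9:.1f}B")  # → $7.2B
```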
Then there’s Kubernetes. CNCF’s annual survey, published in January 2026, found that 82% of container users now run Kubernetes in production. That matters because the optimization problem has shifted from early adoption to day-2 operations. Kubernetes isn’t the experiment anymore. It’s the default operating layer for a lot of modern software and, increasingly, for AI workloads too.
AI is making FinOps messier, not simpler. The FinOps Foundation’s 2025 report surveyed organizations responsible for more than $69 billion in cloud spend and found that 97% were investing in multiple infrastructure areas for AI. Spend isn’t just rising. It’s spreading across more resource types, which makes static rules and siloed tools even less useful than they were a couple of years ago.
Final take on ScaleOps funding
The interesting part of this ScaleOps funding round isn’t the size by itself. It’s that investors are backing a harder claim: that cloud and AI infrastructure should be managed automatically, in production, with enough context to avoid the outages and performance hits that make operators distrust automation in the first place.
That’s ambitious. And honestly, it should be.
The next thing to watch is whether ScaleOps can turn that promise into a broader platform for AI-era infrastructure—not just cheaper Kubernetes clusters, but software enterprises are willing to let touch their most expensive compute in real time.
FAQ
What is the latest ScaleOps funding round?
ScaleOps just raised $130 million in a Series C round led by Insight Partners at an $800 million valuation. Existing investors Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital also participated, bringing total funding to about $210 million.
How does ScaleOps work for Kubernetes and AI workloads?
ScaleOps continuously adjusts infrastructure in production instead of relying on fixed configurations. It rightsizes CPU and memory, optimizes replicas and nodes, increases spot usage, and now extends that logic to GPU sharing and GPU-aware scaling for AI workloads.
Who founded ScaleOps?
ScaleOps was co-founded in 2022 by Yodar Shafrir, who serves as CEO. Before that, he worked at Run:ai, where he held engineering roles focused on AI orchestration, which gave him direct experience with the compute-efficiency problems ScaleOps is now trying to solve.
Is ScaleOps a FinOps company or a Kubernetes infrastructure company?
It sits between those categories. ScaleOps clearly overlaps with FinOps because it targets cloud and AI infrastructure waste, but the product behaves more like an autonomous Kubernetes and AI infrastructure operations layer than a classic cost-reporting tool.