ScaleOps, a cloud resource management startup, has unveiled a fully automated cloud-native cost-saving platform. The company claims it can reduce cloud costs by up to 80% by continuously optimizing and managing cloud-native resources at runtime.
The platform aligns application scaling with real-time demand, dynamically allocating resources and automatically right-sizing containers based on application needs. ScaleOps claims that every container runs on the most suitable node type, leading to significant reductions in cloud costs.
The company has observed experienced engineers spending their time predicting demand and constantly tweaking container configurations. Engineers often find themselves caught in a cycle of manually adjusting container sizing, scaling thresholds, and node type selections to avoid under- or over-provisioning. This is time-consuming, wastes resources, and causes application performance issues during peak demand, adding potentially significant costs. Yodar Shafrir, co-founder and CEO of ScaleOps, stated:
It’s impossible to manage this at scale. We realized there’s a huge need for a context-aware platform that can optimize these constantly-changing environments automatically, adapting to changes in demand in real-time.
Within a Kubernetes cluster, ScaleOps offers continuous automatic Pod right-sizing, dynamically adjusting CPU and memory allocations according to real-time needs. ScaleOps also optimizes node usage by consolidating pods onto appropriate nodes and removing unneeded ones.
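ScaleOps has not published the algorithm behind its right-sizing. As a rough illustration of the general idea only, the following Python sketch derives a container's CPU and memory requests from recent usage observations, sizing the request to a high percentile of observed usage plus headroom; the function name, percentile, and headroom factor are all hypothetical, not ScaleOps's actual method.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    cpu_millicores: int
    memory_mib: int

def right_size(cpu_samples, mem_samples, headroom=1.2):
    """Recommend container resource requests from observed usage.

    Takes recent usage samples (CPU in millicores, memory in MiB),
    sizes the request to roughly the 95th-percentile observation
    times a headroom factor, so the container absorbs routine spikes
    without being chronically over-provisioned.
    """
    def p95(samples):
        s = sorted(samples)
        return s[min(len(s) - 1, int(0.95 * len(s)))]

    return Recommendation(
        cpu_millicores=int(p95(cpu_samples) * headroom),
        memory_mib=int(p95(mem_samples) * headroom),
    )

# Example: a container that mostly idles near 100m CPU / 256 MiB
# but occasionally bursts — the burst dominates the recommendation.
rec = right_size([90, 100, 110, 95, 400], [250, 256, 260, 300, 310])
```

In a real controller this recommendation would then be applied to the workload (for instance by patching the Pod spec's resource requests), and node consolidation would follow from the freed capacity.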
Shafrir added:
The only way to free engineers from ongoing, repetitive configurations and allow them to focus on what truly matters is by completely automating resource management down to the smallest building block: the single container. By employing AI, the ScaleOps platform is context-aware and autonomously handles resource management for engineers, lowering infrastructure costs and delivering better performance.
On the use of AI, Shafrir explains some of the mistake mitigation techniques available:
The platform operates on policy-based decisions. This allows customers to configure and select specific policies and rules that dictate how the platform should behave. These policies can be easily adjusted and updated as needed.
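ScaleOps does not document its policy schema publicly. Purely as a hypothetical sketch of what policy-gated automation can look like, the snippet below clamps an automated resize to limits a customer-configured policy allows; every field and name here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Hypothetical guardrails a customer might set for automated resizing."""
    max_cpu_change_pct: float  # largest single-step CPU change allowed
    dry_run: bool              # recommend only, never apply

def apply_policy(policy, current_cpu, recommended_cpu):
    """Clamp a raw recommendation to what the policy permits.

    Returns the approved CPU request in millicores, or None when the
    policy is recommend-only (dry_run), so nothing is applied.
    """
    ceiling = current_cpu * (1 + policy.max_cpu_change_pct / 100)
    floor = current_cpu * (1 - policy.max_cpu_change_pct / 100)
    approved = min(max(recommended_cpu, floor), ceiling)
    return None if policy.dry_run else int(approved)

conservative = Policy(max_cpu_change_pct=25, dry_run=False)
# A jump from 200m to 400m is clamped to a single 25% step (250m).
step = apply_policy(conservative, 200, 400)
```

The point of such a design is that the automation acts continuously, while humans only adjust the rules it must stay within.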
Whilst other similar cost-saving products on the market can provide recommendations based on static configuration, ScaleOps accounts for the dynamic nature of consumption and demand, matching real-time demand and automatically adjusting container sizes to application needs. Speaking about other products in this space, Shafrir comments: "Engineers still need to manually tune resources and adjust the allocation repeatedly if they use these tools, and even then, they wouldn’t be able to respond to unexpected bursts."
The ScaleOps platform works alongside the cost-saving mechanisms offered by cloud providers and enhances their operation. Cloud providers focus on the machine layer, which is most closely tied to the services they sell; ScaleOps optimizes resources at the container level, an area the providers' cost-saving tools tend not to reach.
Since its establishment in 2022, ScaleOps has grown rapidly and currently manages the production environments of many industry leaders. Customers also report that ScaleOps saves money even at the busiest times. Ron Tzrouya, Lead Cloud FinOps at Wiz, says:
"ScaleOps automatically optimizes Wiz’s workloads in production according to our real-time needs, improving performance even during demand spikes. While dramatically reducing our Kubernetes cloud costs, the hands-free automation freed our teams from dealing with ongoing configuration."
The company plans to use a recent funding round to expand into the US and Europe. Further information on the product is available on the ScaleOps website.