
OptScale is an open source FinOps and multi-cloud cost optimization solution built for ML/AI, Big Data, CI/CD and regular workloads

OptScale open source solution
OptScale is an open source FinOps and ML/AI optimization tool available under the Apache 2.0 license. Users can get OptScale either as an on-premise deployment or as SaaS.

OptScale ML/AI application profiling and performance optimization capabilities

ML/AI task profiling and optimization


ML/AI and Data engineering teams get a tool for tracking and profiling ML/AI model training. OptScale collects internal and external performance metrics as well as model-specific metrics, which feed performance and cost optimization recommendations for ML/AI experiments and production tasks.
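To illustrate the kind of per-step metric tracking described above, here is a minimal sketch. This is hypothetical code, not OptScale's actual API: the `TrainingProfiler` class and its method names are invented for illustration, and the training loop is a stand-in.

```python
import time

class TrainingProfiler:
    """Hypothetical sketch of collecting metrics during model training."""

    def __init__(self):
        self.metrics = []

    def log(self, step, **values):
        # Record a timestamped snapshot of whatever metrics the caller passes.
        self.metrics.append({"step": step, "ts": time.time(), **values})

    def summary(self):
        losses = [m["loss"] for m in self.metrics if "loss" in m]
        return {"steps": len(self.metrics),
                "final_loss": losses[-1] if losses else None}

profiler = TrainingProfiler()
loss = 1.0
for step in range(5):
    loss *= 0.8  # stand-in for a real training step
    profiler.log(step, loss=loss)

print(profiler.summary())
```

A real profiler would additionally sample hardware counters (CPU, GPU, IO) alongside the model-specific values logged by the training code.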

ML/AI metrics and KPI tracking and transparency across ML/AI teams


OptScale profiles ML/AI models and provides deep analysis of internal and external metrics to identify training issues and bottlenecks. This streamlines the profiling process, helping teams reach optimal performance and the best outcome for their ML/AI experiments.

Dozens of tangible performance improvement recommendations


OptScale performance optimization recommendations include utilizing Reserved/Spot instances and Savings Plans, rightsizing and instance family migration, eliminating idle Spark executors, and detecting CPU, IO, and IOPS inconsistencies that can be caused by data transformations.
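A rightsizing recommendation of the kind listed above typically starts from utilization data. The sketch below is a hypothetical simplification (the threshold, instance records, and field names are illustrative, not OptScale's data model): it flags instances whose average CPU utilization stays below a threshold as candidates for a smaller size.

```python
# Assumed threshold: below this average CPU %, an instance is "underused".
UNDERUSED_CPU_PCT = 20.0

instances = [  # illustrative sample utilization data
    {"id": "i-001", "type": "m5.2xlarge", "avg_cpu_pct": 7.5},
    {"id": "i-002", "type": "m5.xlarge", "avg_cpu_pct": 64.0},
]

# Instances below the threshold become rightsizing candidates.
candidates = [i["id"] for i in instances if i["avg_cpu_pct"] < UNDERUSED_CPU_PCT]
print(candidates)
```

In practice such a check would use utilization averaged over a meaningful window (days or weeks) and consider memory and IO alongside CPU before recommending a smaller instance type.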



OptScale enables ML/AI engineers to run a set of training jobs based on a pre-defined budget, different hyperparameters, and hardware (leveraging Reserved/Spot instances) to reveal the best and most efficient outcome for ML/AI model training.
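The budget-capped experiment scheduling described above can be sketched as follows. This is a hypothetical simplification: the per-run prices, hyperparameter grid, and greedy skip-if-over-budget policy are illustrative assumptions, not OptScale's scheduler.

```python
import itertools

budget_usd = 10.0
cost_per_run = {"spot": 1.5, "reserved": 3.0}  # assumed per-run prices

# Enumerate combinations of learning rate, batch size, and hardware,
# and schedule a run only if it still fits in the remaining budget.
grid = itertools.product([1e-3, 1e-4], [32, 64], ["spot", "reserved"])
scheduled, spent = [], 0.0
for lr, batch, hardware in grid:
    run_cost = cost_per_run[hardware]
    if spent + run_cost > budget_usd:
        continue  # this run would exceed the budget; skip it
    scheduled.append({"lr": lr, "batch": batch, "hardware": hardware})
    spent += run_cost

print(len(scheduled), spent)
```

A real system would also track actual (not estimated) spend per run and could stop early once a configuration clearly dominates the others.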

Spark integration


OptScale supports Spark, making the profiling of Spark ML/AI tasks transparent and more efficient. The recommendations delivered after profiling ML/AI models include avoiding idle Spark executors.
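Idle executors are one of the issues such a recommendation targets, and Spark itself offers standard settings to address it. The sketch below assembles real Spark dynamic-allocation properties into a `spark-submit` command; the specific values (and the `job.py` name) are illustrative assumptions.

```python
# Standard Spark properties for releasing executors that sit idle;
# the values below are illustrative, not recommended defaults.
spark_conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "1",
    "spark.dynamicAllocation.maxExecutors": "20",
    # release an executor after 60s without scheduled tasks
    "spark.dynamicAllocation.executorIdleTimeout": "60s",
}

args = " ".join(f"--conf {k}={v}" for k, v in spark_conf.items())
print("spark-submit " + args + " job.py")
```

With dynamic allocation enabled, executors that stay idle past the timeout are returned to the cluster instead of accruing cost while doing no work.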

Minimal cloud cost for ML/AI experiments and development


OptScale in-depth cost analysis and dozens of optimization best practices help minimize cloud costs for ML/AI experiments and development. The tool delivers ML/AI metrics and KPI tracking, providing complete transparency across ML/AI teams.


Supported platforms

About us

Hystax, a leading MLOps and FinOps solution provider, develops its flagship product, OptScale, which runs ML/AI or any other type of workload with optimal performance and infrastructure cost by profiling ML jobs, running automated experiments, and analyzing cloud usage. The OptScale open source solution is available under the Apache 2.0 license, which lets Hystax deliver the platform to a wider range of ML & Data engineers, cloud capacity managers, and FinOps enthusiasts.

The mission of Hystax is to help businesses optimize the performance and cost of ML model training jobs and increase the number of experiments an ML engineer can run.

Hystax solutions are currently the choice of such iconic brands as PwC, Yves Rocher, Nokia, DHL, and Airbus for their FinOps/MLOps adoption, offering a platform that provides numerous optimization recommendations and complete cost visibility and control over Kubernetes, AWS, Microsoft Azure, Google Cloud Platform, and Alibaba Cloud. The company was founded in 2016 and has customers in 48 countries.


Email: [email protected]
Phone: +1 628 251 1280
Address: 1250 Borregas Avenue Sunnyvale, CA 94089
