Scientists at LG's Advanced AI division released Auptimizer, an open-source framework for hyperparameter optimization (HPO) of machine learning models, the process of tuning a model's hyperparameters to improve its performance. The software focuses on the job distribution, scheduling, and bookkeeping aspects of performing HPO at scale, relying on existing packages for the optimization algorithms themselves.
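To make the problem concrete, HPO in its simplest form is a search loop over candidate hyperparameter configurations. The following sketch implements plain random search; the scikit-learn model, search space, and trial budget are illustrative choices, not part of Auptimizer.

```python
import random
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def sample_config():
    # Draw one candidate hyperparameter configuration at random.
    return {
        "alpha": 10 ** random.uniform(-6, -2),   # regularization strength
        "learning_rate": random.choice(["constant", "optimal", "adaptive"]),
        "eta0": 10 ** random.uniform(-3, 0),     # initial learning rate
    }

best_score, best_config = float("-inf"), None
for _ in range(20):  # budget of 20 trials
    config = sample_config()
    model = SGDClassifier(max_iter=1000, **config)
    score = cross_val_score(model, X, y, cv=3).mean()
    if score > best_score:
        best_score, best_config = score, config

print(best_score, best_config)
```

Dedicated HPO frameworks replace the naive sampling above with smarter proposers and, crucially, handle running the trials in parallel across machines.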
An accompanying paper, accepted at the IEEE Big Data 2019 conference in Los Angeles, outlines Auptimizer's system design and experiment-tracking capabilities. Notably, the framework can manage a variety of computing resources, from individual CPUs and GPUs to whole clusters and AWS instances, while receiving HPO jobs from any of several proposer algorithms and recording experiments in an SQLite database.
[Figure: Auptimizer system design]
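In this design, each trial runs a user script that receives its proposed hyperparameter values from the framework and reports a score back for bookkeeping. The sketch below follows the conversion pattern described in the project's README, using its BasicConfig and print_result helpers; treat the exact API as an assumption to verify against the repository, and note that the training function is a placeholder.

```python
#!/usr/bin/env python
# Trial script adapted for Auptimizer, following the conversion pattern
# described in the project README (exact API names should be verified).
import sys
from aup import BasicConfig, print_result

def train(lr, n_hidden):
    # Placeholder for real model training; returns a validation score.
    return 1.0 - abs(lr - 0.01) - 0.001 * n_hidden

if __name__ == "__main__":
    # Auptimizer passes the proposed hyperparameter values in a config
    # file whose path arrives as the script's first argument.
    config = BasicConfig().load(sys.argv[1])
    score = train(config["lr"], config["n_hidden"])
    # The reported score is recorded in the experiment's SQLite database.
    print_result(score)
```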
The framework supports several prominent HPO methods, including Gaussian-process-based Bayesian optimization (backed by Spearmint), TPE-based Bayesian optimization (backed by Hyperopt), and Hyperband. Auptimizer also supports the EAS (Efficient Architecture Search) algorithm for neural-network architecture search.
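For a sense of what Auptimizer delegates to, here is TPE-based optimization using Hyperopt directly; the quadratic objective is a toy stand-in for a real training-and-validation run.

```python
from hyperopt import fmin, tpe, hp, Trials

def objective(params):
    # Toy stand-in for training a model and returning a validation loss.
    lr, momentum = params["lr"], params["momentum"]
    return (lr - 0.01) ** 2 + (momentum - 0.9) ** 2

space = {
    "lr": hp.loguniform("lr", -7, 0),          # ~1e-3 .. 1 on a log scale
    "momentum": hp.uniform("momentum", 0.0, 1.0),
}

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=trials)
print(best)  # best hyperparameters found within the 50-trial budget
```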
Since the package is designed as an HPO administration framework that incorporates several existing implementations of state-of-the-art optimization algorithms, rather than as an algorithmic framework itself, its main competitors are other orchestration frameworks, most notably Tune, another Python library for hyperparameter tuning at scale, rather than the implementation-focused packages mentioned above.
[Figure: HPO algorithms currently supported by Auptimizer]
Compared to Tune, Auptimizer currently lacks support for several key HPO algorithms, including the state-of-the-art BOHB algorithm, which combines Bayesian optimization with Hyperband's early-stopping approach, and gradient-free optimization algorithms such as those implemented in Facebook Research's Nevergrad. It also lacks the separation Tune provides between search algorithms and trial schedulers, which prevents, for example, combining an early-stopping scheduler like Hyperband with a search algorithm more sophisticated than random sampling.
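To illustrate that design point, Tune lets users pair a search algorithm with an independent trial scheduler. The sketch below combines a Hyperopt-backed searcher with the asynchronous Hyperband scheduler; API names are as of Ray Tune's 1.x releases and have moved in later versions, and the training loop is a placeholder.

```python
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.suggest.hyperopt import HyperOptSearch

def trainable(config):
    acc = 0.0
    for _ in range(100):
        acc += config["lr"] * 0.01  # stand-in for one epoch of training
        tune.report(mean_accuracy=acc)  # scheduler may stop the trial early

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    # Bayesian-style search proposes new trials...
    search_alg=HyperOptSearch(metric="mean_accuracy", mode="max"),
    # ...while Hyperband-style early stopping prunes weak ones.
    scheduler=AsyncHyperBandScheduler(metric="mean_accuracy", mode="max"),
    num_samples=20,
)
print(analysis.get_best_config(metric="mean_accuracy", mode="max"))
```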
This release comes as part of a wider move by LG to position itself as a more prominent player in applied AI, incorporating such capabilities into many of its products and services, as demonstrated by its plans for AI-powered smart homes and the ThinQ ecosystem. The move also includes the development of its own specialised AI chip and a $17 million investment in SoftBank's venture fund for AI startups.