Introducing Tuun, an Open Source System for Hyperparameter Tuning
Authors: Petuum CASL Team
We are excited to launch our newest open source offering, Tuun!
What is Tuun?
Tuun lets you build AutoML and meta-learning techniques into your Machine Learning pipelines to boost task performance and accuracy. Our initial release supports black-box tuning, hyperparameter optimization, data augmentation, and zeroth-order optimization, with Neural Architecture Search coming in our next major update. Tuun scales to computationally intensive ML pipelines via integration with CASL’s AdaptDL for cluster scheduling, and we’ve made Tuun easy to use by integrating with Microsoft NNI as a front-end.
Tuun’s Bayesian optimization-based search algorithms perform especially well with complex and high dimensional search spaces that would normally require many search trials — such as tuning 10+ model hyperparameters simultaneously (benchmarks provided later in this blog), or auto-building data augmentation policies that choose from dozens of accuracy-boosting transformations to apply to training data (see our earlier blog post on data augmentation using Tuun). Some of the core features offered by Tuun are:
- Composability: Tuun takes advantage of multiple backend tuning systems and modeling packages (such as ProBO, Dragonfly, Stan, GPyTorch, and others).
- Integration: Tuun integrates with other hyperparameter optimization and distributed communication backends (such as NNI, with other integrations in the works).
- Performance: Automatic model selection, flexible choice of acquisition optimization strategy, good performance in high dimensions.
Why use Tuun?
Bayesian optimization is powerful, fast, and extensible
Tuun uses sample-efficient Bayesian optimization (BO) to perform tuning using a small number of function evaluations — that means fewer search trials are required! It uses specialized Bayesian and other uncertainty-based models (e.g. Gaussian process, hierarchical Gaussian process, and deep ensemble models) and acquisition optimization procedures, and leverages years of work from other BO and probabilistic modeling packages (such as Stan, GPyTorch, Dragonfly, ProBO, and others) for state-of-the-art performance and speed. Here are some reasons why BO is a great choice for meta-learning and AutoML problems:
- BO supports parallel trials, so you can take advantage of a cluster (via Tuun’s integration with our own CASL AdaptDL, and Microsoft NNI).
- BO can flexibly support more tuning goals than many other approaches, including goals that aren’t differentiable or part of the model loss function, such as energy consumption and memory usage! BO also supports multi-objective modeling: when you have several competing objectives (e.g. accuracy vs. memory usage), it can trade off between them by finding a Pareto front.
- BO is highly extensible, and supports compute-and-time-saving strategies like offline, transfer, and multi-task learning. We’re planning a new Tuun feature that lets you import the “search history” from previous BO tuning sessions, which Tuun will use to further reduce the number of trials required by new searches!
This makes BO great for meta-learning and AutoML applications, where we wish to optimize an unknown, expensive function by taking a sequence of function evaluations. For example, each function evaluation may involve partially or fully training a machine learning model on a different hyperparameter setting, model architecture, data processing routine, or some other configuration. Thanks to BO, Tuun can find good configurations using a minimal number of function evaluations.
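To make this loop concrete, here is a minimal sketch of Bayesian optimization in plain NumPy. This is not Tuun’s implementation: the RBF kernel, standardization, fixed candidate grid, and lower-confidence-bound acquisition are simplifications chosen for exposition, whereas Tuun uses far more sophisticated models and acquisition optimizers.

```python
import numpy as np

def f(x):
    """Toy 'expensive' objective to minimize (true minimum at x = 2)."""
    return (x - 2.0) ** 2

def rbf(a, b, ls=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

rng = np.random.default_rng(0)
X = [float(x) for x in rng.uniform(-5, 5, size=2)]  # two random initial evaluations
y = [f(x) for x in X]
cand = np.linspace(-5, 5, 401)                      # candidate grid for the acquisition

for _ in range(15):
    Xa, ya = np.array(X), np.array(y)
    ys = (ya - ya.mean()) / (ya.std() + 1e-9)       # standardize observations
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))        # kernel matrix + jitter
    Ks = rbf(cand, Xa)
    mean = Ks @ np.linalg.solve(K, ys)              # GP posterior mean at candidates
    var = np.clip(1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1), 0.0, None)
    lcb = mean - 2.0 * np.sqrt(var)                 # lower confidence bound (minimization)
    x_next = float(cand[np.argmin(lcb)])            # next point to evaluate
    X.append(x_next)
    y.append(f(x_next))

best_x, best_y = min(zip(X, y), key=lambda p: p[1])
print(best_x, best_y)
```

Each iteration fits the surrogate to all evaluations so far and queries the candidate the acquisition judges most promising, trading off low predicted mean (exploitation) against high uncertainty (exploration); this is how BO keeps the total number of expensive function evaluations small.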
Integration with Open Source
Tuun is designed to integrate with existing frameworks for distributed communication and hyperparameter optimization, such as our own CASL AdaptDL and Microsoft NNI. This allows Tuun to carry out hyperparameter tuning parallelized over a cluster of machines, and to be used in conjunction with our cluster scheduler.
How do I use it?
Using Tuun Standalone
Simple API — The following code illustrates the Tuun API with a simple example: optimizing a synthetic function over a one-dimensional search space defined on the range x = [-5, 5].
See the following example in the Tuun repo and code snippet below.
- Instantiate Tuun object.
- Define search space.
- Define function f to minimize.
- Minimize f.
Tuun then prints one line per function query, showing the iteration number, the chosen input x, the observed function output y, and the minimum objective found so far. After tuning completes, the overall minimizer x and minimum y are displayed.
You can also configure details like acquisition function types and optimize over larger domains.
See the following example in the Tuun repo and code snippet below.
- Configure Tuun settings.
- Instantiate Tuun object.
- Define search space (here: two dimensional).
- Define function f to minimize (defined on two dimensional search space).
- Minimize f.
Using Tuun Within NNI
We can also use Tuun within NNI.
See the following example in the Tuun repo and code snippets below.
- Define an NNI configuration file.
- Specify a search space via NNI type and value syntax.
- Run NNI.
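The three steps above might look like the following. The YAML fields follow NNI’s standard experiment configuration schema, but the custom-tuner file and class names for Tuun (`tuun_nni_tuner.py`, `TuunTuner`) are hypothetical placeholders; see the Tuun repo example for the exact values.

```yaml
# config.yml -- NNI experiment configuration (custom tuner)
authorName: default
experimentName: tuun_example
trialConcurrency: 1
maxTrialNum: 10
trainingServicePlatform: local
searchSpacePath: search_space.json
useAnnotation: false
tuner:
  codeDir: .
  classFileName: tuun_nni_tuner.py   # hypothetical filename
  className: TuunTuner               # hypothetical class name
trial:
  command: python3 trial.py
  codeDir: .
  gpuNum: 0
```

```json
{
  "x": {"_type": "uniform", "_value": [-5, 5]}
}
```

```shell
nnictl create --config config.yml
```

NNI then launches trials on the local machine, with Tuun acting as the tuner that suggests each trial’s hyperparameters.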
Results on optimizing benchmark functions
The following two plots show the performance of Tuun on two moderate-to-high-dimensional (6- and 40-dimensional) tuning benchmark functions. The goal is minimization (lower is better); each plot shows the best objective value found by Tuun at each iteration, compared with a few other popular optimization methods.
What’s next for Tuun?
We’ll be adding Neural Architecture Search (NAS) in the next major Tuun update! Deep learning performance is greatly influenced by architecture choice (in addition to hyperparameter tuning and data augmentation, which Tuun already covers). With Tuun NAS, you’ll be able to tailor your ML pipeline’s deep learning models not just for higher accuracy, but also for other objectives that are harder to target, such as a smaller memory footprint (fewer parameters) and faster inference time! To do this, Tuun leverages black-box Bayesian optimization, which has had success in optimizing non-differentiable objectives and has been applied to systems optimization in other fields (e.g. databases, ranking systems, compilers, virtual machines) and to the physical sciences (e.g. nuclear fusion and particle accelerator tuning). At Petuum, we have used Tuun to automate the design of data augmentation policies for improved accuracy in deep learning.
We have a number of exciting directions we aim to pursue next, and invite potential collaborators and contributors to help us with the following:
- A larger suite of uncertainty models: building a greater variety of uncertainty models, including those based on neural networks, and combinations of these with Gaussian processes.
- More complex objectives beyond optimization: multi-objective optimization (multiple quantities to tune), multi-fidelity optimization (low-cost proxies to the expensive objective), contextual optimization (tuning with side information), and a variety of other experimental design objectives.
- Transfer optimization: using transfer learning on data from previous tuning sessions. Tuun learns from the previous data in order to warm-start new tuning sessions, which reduces the number of trials and thus speeds up tuning!
About CASL
CASL provides a unified toolkit for composable, automatic, and scalable machine learning systems, including distributed training, resource-adaptive scheduling, hyperparameter tuning, and compositional model construction. CASL consists of many powerful open-source components that can be used in unison or leveraged individually for specific tasks, providing flexibility and ease of use.
Thanks for reading! Please visit the CASL website to stay up to date on upcoming CASL and Tuun announcements: https://www.casl-project.ai. If you’re interested in working professionally on CASL, visit the Petuum careers page!