Hyperparameter Optimization as a service with Celestial-AutoML

Lars Hertel
6 min read · Apr 20, 2021


Celestial (www.celestial-automl.com) is a free web service for hyperparameter optimization of machine learning algorithms. It makes getting started very easy by providing a web user interface for setting up the optimization and tracking results. Since Celestial is just an API to get parameter configurations and send back loss values, you can easily scale up. On top of that, the web UI gives you the power to visualize your results and go back to older studies to review and learn from them.

www.celestial-automl.com

Why use Celestial

Most hyperparameter optimization tools are built as a Python library. The user downloads them from PyPI or GitHub, imports them, and uses them directly in the code. One major disadvantage of this approach is that it is difficult to decouple the hyperparameter optimization from the infrastructure that runs it. This becomes apparent as soon as there is a requirement for parallel evaluation of parameter settings. Some hyperparameter optimization libraries attempt to tackle parallelism themselves. However, parallelization can quickly become a big task, which is why entire companies are dedicated to the problem (e.g. Anyscale).

A way to sidestep the infrastructure issue is to set hyperparameter optimization up as a service. This is exactly what Celestial does. Celestial leaves it up to the user how they want to evaluate their trials. All they have to do is add the line of code that retrieves the parameter configuration and the one that sends results back. Whether they want to run in parallel using a bash script, a grid, Amazon EC2 instances, or an army of Ray workers is up to them. At the same time, the user doesn't have to go through a complicated installation. Setting up Celestial is amazingly simple, as you will see below.

Another reason to use a web service is visualization. In our opinion, visualization is hugely important in hyperparameter optimization to monitor and interpret results. You are not just looking for that one optimal result, you are trying to find robust, sensible results and hopefully learn something from them. You may even run an ablation study for a new method and try to understand what parameters matter.

Finally, with a web service, archiving and sharing hyperparameter optimization runs is straightforward. You can keep track of your old optimizations, add results later, and potentially even share results with collaborators or in publications.

Celestial offers all of these benefits and is free to use. Keep reading for an example of how to get started.

Example of optimizing a neural network with Celestial

Creating an account

To get started, sign up with an email address and a password. This is so that only you have access to your hyperparameter optimization runs. The email is needed in case you need to reset your password.

Setting up the hyperparameter optimization

Once you have signed up and logged in, click “Create Study” in the top right corner of the web page. This will prompt you to enter a name, an optional description, and the parameter definitions. Each parameter definition consists of:

Name: the name you’ll use to refer to this parameter in code.
Type: “Continuous” for float values, “Discrete” for integers, or “Choice” for a list of comma-separated strings.
Domain: comma-separated low and high values for Continuous/Discrete, or a comma-separated list of strings for Choice.

Study setup.

If you want to follow along, copy the setup in the figure. We define four hyperparameters for a multi-layer perceptron: “solver”, of type Choice, will return one of the strings “lbfgs”, “adam”, or “sgd”; “learning_rate_init” defines the initial learning rate and is sampled uniformly between 0.001 and 0.01; “learning_rate” controls the learning rate schedule and has the options “constant”, “invscaling”, and “adaptive”; finally, “batch_size” takes integer values between 8 and 64 (inclusive).

Click “Submit” and you will see an empty dashboard with the parameters you defined. Hit the “Python Setup” button to get instructions on how to set up your Python script, or follow along here.

Empty dashboard just after creating the study.

The training script

Now that we have created the study in the web service, we need our machine learning algorithm to receive parameter settings from Celestial and send back loss values. For that we use the Celestial Client.

To keep following along, use the accompanying Google Colab notebook or copy the code from GitHub.

Install the Celestial client using pip:

pip install celestial-client

or check it out from the GitHub repository.

We use the Celestial client in our Python code where we train and evaluate the machine learning model. Specifically, the client obtains a parameter configuration which is used to train and evaluate a model. Finally, another call to the Celestial client sends loss values back to the web service. Below is a general template you can use (can also be found by hitting “Python Setup” in the dashboard).

import celestial
# ...

number_of_trials = 100
for _ in range(number_of_trials):
    # ...use your study ID to retrieve parameters...
    trial = celestial.Trial(study_id=<your study id>)

    # ...train with trial.parameters...
    # ...assign the resulting loss to loss and submit...
    trial.submit_result(loss=loss)
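To make this concrete, here is a sketch of how the loop could look for the study defined above, using scikit-learn’s MLPClassifier. The Celestial calls follow the template; the dataset, the train/validation split, and the assumption that trial.parameters is a dictionary keyed by the parameter names are illustrative, and the example script in the repository may differ.

import celestial
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative dataset and split; substitute your own data here.
X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

number_of_trials = 100
for _ in range(number_of_trials):
    # Request the next parameter configuration for this study.
    trial = celestial.Trial(study_id=6)  # replace 6 with your own study ID
    params = trial.parameters

    # Map the study's parameters onto MLPClassifier (assumed mapping).
    model = MLPClassifier(
        solver=params["solver"],
        learning_rate_init=params["learning_rate_init"],
        learning_rate=params["learning_rate"],
        batch_size=params["batch_size"],
        max_iter=200,
    )
    model.fit(X_train, y_train)

    # Report the validation error rate back to Celestial as the loss.
    loss = 1.0 - model.score(X_val, y_val)
    trial.submit_result(loss=loss)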

If you are following along, make sure you get the study ID from the study you created in the web service. Find the study ID as the last part of the URL when you are in the dashboard (see picture below). If you use the Colab notebook to follow along, you will be prompted for the study ID. If you are using the Python script, pass the study ID along as a command line argument: python examples/mlp.py --study-id <your-study-id>

The study ID can be found in the URL, here 6.
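If you are writing your own script instead, a minimal sketch of reading the study ID from the command line (assuming argparse and an integer ID; the actual examples/mlp.py may do this differently) could look like:

import argparse
import celestial

parser = argparse.ArgumentParser()
parser.add_argument("--study-id", type=int, required=True)
args = parser.parse_args()

# args.study_id now holds the ID copied from the dashboard URL.
trial = celestial.Trial(study_id=args.study_id)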

Once you run the optimization, you will be asked for your username and password. Here, use the ones you used to register on www.celestial-automl.com.

Console output when running the MLP example.

Hitting refresh on the dashboard shows the evaluated parameter settings and a graph of the best result achieved so far. That’s it! :)

Dashboard at the end of the study.

A few more details

As its optimization algorithm, Celestial uses random search. We chose random search as a starting point because it is robust and works well across most problems. We may also implement Bayesian optimization in the future.
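Conceptually (this is not Celestial’s actual code), random search simply draws each parameter independently from the domain you defined. For the study above that would look roughly like:

import random

# Conceptual illustration of random search for the study defined earlier.
params = {
    "solver": random.choice(["lbfgs", "adam", "sgd"]),
    "learning_rate_init": random.uniform(0.001, 0.01),
    "learning_rate": random.choice(["constant", "invscaling", "adaptive"]),
    "batch_size": random.randint(8, 64),  # randint includes both endpoints
}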

Objective values: Celestial accepts loss values, which can represent any objective that should be minimized. To keep things simple, Celestial only performs minimization and only accepts one loss value per parameter configuration. If you want to maximize a metric such as accuracy, submit its negative as the loss.
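Continuing the earlier example, maximizing validation accuracy would look like this (variable names are illustrative):

# Celestial minimizes the submitted value, so negate metrics you want to maximize.
accuracy = model.score(X_val, y_val)
trial.submit_result(loss=-accuracy)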

Scaling up: the easiest way to start with Celestial might be to run an optimization via a for loop using a single Python script. However, if multiple cores or GPUs are available one can also run multiple instances of the same script. Since Celestial does not care how many configurations the user requests in parallel or in what order loss values come in, the options to scale up are endless.
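As a sketch, one simple way to do this (reusing the command-line example script from above) is to launch several copies of the same training script, each of which independently pulls trials from Celestial and submits its own results:

import subprocess

# Launch four workers; fill in your own study ID.
study_id = "<your-study-id>"
workers = [
    subprocess.Popen(["python", "examples/mlp.py", "--study-id", study_id])
    for _ in range(4)
]
for worker in workers:
    worker.wait()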

Authentication: client authentication tokens currently expire after 24 hours. That means that after 24 hours, the client will ask you to re-authenticate and will halt the evaluation until you have done so.

Future Work

We have three items in mind for future work. First, we are considering different optimization backends, mostly in the area of Bayesian optimization. Second, we would like to provide more ways to explore the results via the dashboard. Finally, we would like to make study results shareable so that collaborators or reviewers can view them. Feel free to reach out with feedback, and we hope Celestial can help you optimize your machine learning algorithms.

Cheers!

Written by Lars Hertel

I am a machine learning engineer with background in statistics. I like building stuff :)
