

Experiments are hyperparameter searches based on a specific commit and data snapshot. A trial is the evaluation of a specific hyperparameter setting, using the selected validation scheme. Multiple trials with different hyperparameter settings are run and listed in the user interface. Because of the parallelism supplied by Cubonacci, trials can run alongside each other.
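Because trials are independent evaluations, they parallelize naturally. A minimal sketch of that idea, with a hypothetical `run_trial` function standing in for the project's evaluation code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_trial(params):
    """Hypothetical stand-in for evaluating one hyperparameter setting
    with the selected validation scheme; returns a metric."""
    return params["learning_rate"] * params["layers"]  # placeholder score

settings = [
    {"learning_rate": 0.1, "layers": 2},
    {"learning_rate": 0.01, "layers": 4},
    {"learning_rate": 0.001, "layers": 8},
]

# Trials are independent, so they can run next to each other.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(run_trial, settings))
```

The platform manages this scheduling for you; the sketch only illustrates why independent trials can run concurrently.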

An experiment is created from a commit. After selecting the algorithm and naming the experiment, you can select an existing data snapshot if applicable, or create a new data snapshot, which must also be named.



Configuration for the trials involves both the total number of trials to run and how many to run in parallel. For the resource settings, you define which resources to reserve per trial and which instances to run these trials on - see instance selection.


During an experiment or after it has completed, the individual trials are visible with the suggested hyperparameters and, once finished, their results. Clicking on a trial gives a more in-depth view, including logs generated by the project code.

A graph with the metrics on the left side and the hyperparameters on the right side shows how these values correlate. Every line represents a trial. You can make a selection over a certain dimension to highlight those trials and filter the table of succeeded trials below.

Experiment results

Suggestion algorithm

Suggestion algorithms are used to suggest new hyperparameters to try. Currently, Cubonacci supports two suggestion algorithms.

Bayesian optimization

By looking at previous results and the corresponding hyperparameters, Bayesian optimization makes educated guesses about new sets of hyperparameters to try: at the beginning the suggestions lean towards exploration, and later towards exploiting the areas that seem most promising. This method typically performs somewhat better than the alternatives.
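The explore-then-exploit behavior can be illustrated with a toy suggester. This is a deliberate simplification: real Bayesian optimization fits a surrogate model (such as a Gaussian process) to the history, whereas this sketch just perturbs the best result seen so far. The `objective` function and the bounds are hypothetical.

```python
import random

def objective(lr):
    # Hypothetical objective: validation score as a function of learning rate.
    return -(lr - 0.3) ** 2

def suggest(history, bounds, n_explore=5):
    """Toy explore-then-exploit suggester. Real Bayesian optimization fits
    a surrogate model to the history instead of perturbing the best point."""
    low, high = bounds
    if len(history) < n_explore:                # early on: explore anywhere
        return random.uniform(low, high)
    best_lr, _ = max(history, key=lambda h: h[1])
    width = (high - low) * 0.1                  # later: search near the best
    return min(high, max(low, random.gauss(best_lr, width)))

history = []
for _ in range(20):
    lr = suggest(history, (0.0, 1.0))
    history.append((lr, objective(lr)))
best = max(history, key=lambda h: h[1])
```

The intuition carries over: early suggestions map out the space, later ones refine the most promising region.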

Random search

Using a random search, every suggestion is simply a random set of hyperparameters. For integers and floats, a random number between the min and max (inclusive) is taken, and for categorical values, one of the categories is picked uniformly at random. This is generally a better choice than a grid search, because some hyperparameters may turn out not to be relevant at all.
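The sampling rules above can be sketched as follows. The search-space layout and parameter names here are hypothetical, chosen only to show one of each parameter type:

```python
import random

# Hypothetical search space with the three parameter types described above.
space = {
    "layers": ("int", 1, 5),                      # inclusive integer range
    "learning_rate": ("float", 1e-4, 1e-1),       # continuous range
    "optimizer": ("cat", ["sgd", "adam", "rmsprop"]),
}

def random_suggestion(space):
    params = {}
    for name, spec in space.items():
        kind = spec[0]
        if kind == "int":
            params[name] = random.randint(spec[1], spec[2])  # min/max inclusive
        elif kind == "float":
            params[name] = random.uniform(spec[1], spec[2])
        else:  # categorical: uniform pick over the listed values
            params[name] = random.choice(spec[1])
    return params

suggestion = random_suggestion(space)
```

Because each suggestion samples every dimension independently, random search covers irrelevant dimensions at no extra cost, which is exactly where grid search wastes its budget.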