Search Algorithms (tune.suggest)

Repeater

class ray.tune.suggest.Repeater(search_alg, repeat=1, set_index=True)[source]

A wrapper algorithm for repeating trials with the same parameters.

It is recommended that you do not use an early-stopping TrialScheduler together with this wrapper.

Parameters
  • search_alg (SearchAlgorithm) – SearchAlgorithm object that the Repeater will optimize. Note that the SearchAlgorithm will only see one trial for each set of repeated trials. The result/metric passed to the SearchAlgorithm upon trial completion will be averaged over all repeats.

  • repeat (int) – Number of times to generate a trial with a repeated configuration. Defaults to 1.

  • set_index (bool) – Sets a tune.suggest.repeater.TRIAL_INDEX in Trainable/Function config which corresponds to the index of the repeated trial. This can be used for seeds. Defaults to True.
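
For example (a usage sketch: the hyperopt search space, the HyperOptSearch wrapping, and the reporter-style trainable my_func below are illustrative assumptions, not requirements of Repeater):

from hyperopt import hp
from ray import tune
from ray.tune.suggest import Repeater
from ray.tune.suggest.hyperopt import HyperOptSearch

# Illustrative noisy trainable; repeating each configuration lets the
# wrapped searcher optimize the metric averaged over the repeats.
def my_func(config, reporter):
    import random
    # With set_index=True, config also contains
    # tune.suggest.repeater.TRIAL_INDEX, usable e.g. as a per-repeat seed.
    reporter(mean_loss=config["width"] + random.gauss(0, 1))

space = {"width": hp.uniform("width", 0, 20)}
search_alg = HyperOptSearch(space, metric="mean_loss", mode="min")

# Each configuration suggested by HyperOptSearch is run 5 times; the
# searcher itself only sees one (averaged) result per configuration.
tune.run(my_func, search_alg=Repeater(search_alg, repeat=5))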

AxSearch

class ray.tune.suggest.ax.AxSearch(ax_client, max_concurrent=10, mode='max', **kwargs)[source]

A wrapper around Ax to provide trial suggestions.

Requires Ax to be installed. Ax is an open-source tool from Facebook for configuring and optimizing experiments. More information can be found at https://ax.dev/.

Parameters
  • parameters (list[dict]) – Parameters in the experiment search space. Required elements in the dictionaries are: “name” (name of this parameter, string), “type” (type of the parameter: “range”, “fixed”, or “choice”, string), “bounds” for range parameters (list of two values, lower bound first), “values” for choice parameters (list of values), and “value” for fixed parameters (single value).

  • objective_name (str) – Name of the metric used as the objective in this experiment. This metric must be present in the raw_data argument to log_data. This metric must also be present in the dict reported/returned by the Trainable.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute. Defaults to “max”.

  • parameter_constraints (list[str]) – Parameter constraints, such as “x3 >= x4” or “x3 + x4 >= 2”.

  • outcome_constraints (list[str]) – Outcome constraints of the form “metric_name >= bound”, e.g. “m1 <= 3”.

  • use_early_stopped_trials (bool) – Whether to use early terminated trial results in the optimization process.

from ray import tune
from ray.tune.suggest.ax import AxSearch

parameters = [
    {"name": "x1", "type": "range", "bounds": [0.0, 1.0]},
    {"name": "x2", "type": "range", "bounds": [0.0, 1.0]},
]

algo = AxSearch(parameters=parameters,
    objective_name="hartmann6", max_concurrent=4)
tune.run(my_func, search_alg=algo)
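
The constraint arguments listed above are passed in the same way; the expressions below are purely illustrative and assume the trainable also reports an outcome metric named l2norm:

algo = AxSearch(
    parameters=parameters,
    objective_name="hartmann6",
    max_concurrent=4,
    # Illustrative constraints over the parameters defined above and a
    # hypothetical reported outcome metric "l2norm".
    parameter_constraints=["x1 + x2 <= 1.0"],
    outcome_constraints=["l2norm <= 1.25"])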

BayesOptSearch

class ray.tune.suggest.bayesopt.BayesOptSearch(space, max_concurrent=10, reward_attr=None, metric='episode_reward_mean', mode='max', utility_kwargs=None, random_state=1, verbose=0, **kwargs)[source]

A wrapper around BayesOpt to provide trial suggestions.

Requires BayesOpt to be installed. You can install BayesOpt with the command: pip install bayesian-optimization.

Parameters
  • space (dict) – Continuous search space. Parameters will be sampled from this space which will be used to run trials.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • utility_kwargs (dict) – Parameters to define the utility function. Must provide values for the keys kind, kappa, and xi.

  • random_state (int) – Used to initialize BayesOpt.

  • verbose (int) – Sets verbosity level for BayesOpt packages.

  • use_early_stopped_trials (bool) – Whether to use early terminated trial results in the optimization process.

from ray import tune
from ray.tune.suggest.bayesopt import BayesOptSearch

space = {
    'width': (0, 20),
    'height': (-100, 100),
}
algo = BayesOptSearch(
    space, max_concurrent=4, metric="mean_loss", mode="min")

tune.run(my_func, search_alg=algo)
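
The acquisition function is configured through utility_kwargs, which must supply the kind, kappa, and xi keys noted above. The values below are an illustrative sketch using the bayesian-optimization package's upper-confidence-bound utility:

algo = BayesOptSearch(
    space,
    max_concurrent=4,
    metric="mean_loss",
    mode="min",
    # "ucb" = upper confidence bound; kappa trades off exploration vs.
    # exploitation, and xi is used by the "ei"/"poi" utilities.
    utility_kwargs={"kind": "ucb", "kappa": 2.5, "xi": 0.0})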

TuneBOHB

class ray.tune.suggest.bohb.TuneBOHB(space, bohb_config=None, max_concurrent=10, metric='neg_mean_loss', mode='max')[source]

BOHB suggestion component.

Requires HpBandSter and ConfigSpace to be installed. You can install HpBandSter and ConfigSpace with: pip install hpbandster ConfigSpace.

This should be used in conjunction with HyperBandForBOHB.

Parameters
  • space (ConfigurationSpace) – Continuous ConfigSpace search space. Parameters will be sampled from this space which will be used to run trials.

  • bohb_config (dict) – Configuration for the HpBandSter BOHB algorithm.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

Example:

import ConfigSpace as CS

from ray import tune
from ray.tune.schedulers import HyperBandForBOHB
from ray.tune.suggest.bohb import TuneBOHB

config_space = CS.ConfigurationSpace()
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('width', lower=0, upper=20))
config_space.add_hyperparameter(
    CS.UniformFloatHyperparameter('height', lower=-100, upper=100))
config_space.add_hyperparameter(
    CS.CategoricalHyperparameter(
        name='activation', choices=['relu', 'tanh']))

algo = TuneBOHB(
    config_space, max_concurrent=4, metric='mean_loss', mode='min')
bohb = HyperBandForBOHB(
    time_attr='training_iteration',
    metric='mean_loss',
    mode='min',
    max_t=100)
tune.run(MyTrainableClass, scheduler=bohb, search_alg=algo)

DragonflySearch

class ray.tune.suggest.dragonfly.DragonflySearch(optimizer, max_concurrent=10, reward_attr=None, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, **kwargs)[source]

A wrapper around Dragonfly to provide trial suggestions.

Requires Dragonfly to be installed via pip install dragonfly-opt.

Parameters
  • optimizer (dragonfly.opt.BlackboxOptimiser) – Optimizer provided from dragonfly. Choose an optimiser that extends BlackboxOptimiser.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.

  • evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate.

from ray import tune
from ray.tune.suggest.dragonfly import DragonflySearch
from dragonfly.opt.gp_bandit import EuclideanGPBandit
from dragonfly.exd.experiment_caller import EuclideanFunctionCaller
from dragonfly import load_config

domain_vars = [{
    "name": "LiNO3_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "Li2SO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}, {
    "name": "NaClO4_vol",
    "type": "float",
    "min": 0,
    "max": 7
}]

domain_config = load_config({"domain": domain_vars})
func_caller = EuclideanFunctionCaller(None,
    domain_config.domain.list_of_domains[0])
optimizer = EuclideanGPBandit(func_caller, ask_tell_mode=True)

algo = DragonflySearch(optimizer, max_concurrent=4,
    metric="objective", mode="max")

tune.run(my_func, search_alg=algo)

HyperOptSearch

class ray.tune.suggest.hyperopt.HyperOptSearch(space, max_concurrent=10, reward_attr=None, metric='episode_reward_mean', mode='max', points_to_evaluate=None, n_initial_points=20, random_state_seed=None, gamma=0.25, **kwargs)[source]

A wrapper around HyperOpt to provide trial suggestions.

Requires HyperOpt to be installed from source. Uses the Tree-structured Parzen Estimators algorithm, although it can be trivially extended to support any algorithm HyperOpt uses. Externally added trials will not be tracked by HyperOpt. Trials of the current run can be saved using the save method, and trials of a previous run can be loaded using the restore method, enabling a warm-start feature.

Parameters
  • space (dict) – HyperOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list) – Initial parameter suggestions to be run first. This is for when you already have some good parameters you want hyperopt to run first to help the TPE algorithm make better suggestions for future parameters. Needs to be a list of dicts containing the hyperopt-named variables. Choice variables should be indicated by their index in the list (see example).

  • n_initial_points (int) – Number of random evaluations of the objective function before starting to approximate it with tree-structured Parzen estimators. Defaults to 20.

  • random_state_seed (int, array_like, None) – Seed for reproducible results. Defaults to None.

  • gamma (float in range (0,1)) – Parameter governing the tree-structured Parzen estimators suggestion algorithm. Defaults to 0.25.

  • use_early_stopped_trials (bool) – Whether to use early terminated trial results in the optimization process.

from hyperopt import hp
from ray.tune.suggest.hyperopt import HyperOptSearch

space = {
    'width': hp.uniform('width', 0, 20),
    'height': hp.uniform('height', -100, 100),
    'activation': hp.choice("activation", ["relu", "tanh"])
}
current_best_params = [{
    'width': 10,
    'height': 0,
    'activation': 0, # The index of "relu"
}]
algo = HyperOptSearch(
    space, max_concurrent=4, metric="mean_loss", mode="min",
    points_to_evaluate=current_best_params)

NevergradSearch

class ray.tune.suggest.nevergrad.NevergradSearch(optimizer, parameter_names, max_concurrent=10, reward_attr=None, metric='episode_reward_mean', mode='max', **kwargs)[source]

A wrapper around Nevergrad to provide trial suggestions.

Requires Nevergrad to be installed. Nevergrad is an open-source tool from Facebook for derivative-free optimization of parameters and/or hyperparameters. It features a wide range of optimizers in a standard ask-and-tell interface. More information can be found at https://github.com/facebookresearch/nevergrad.

Parameters
  • optimizer (nevergrad.optimization.Optimizer) – Optimizer provided from Nevergrad.

  • parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output. Alternatively, set to None if the optimizer is already instrumented with kwargs (see nevergrad v0.2.0+).

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • use_early_stopped_trials (bool) – Whether to use early terminated trial results in the optimization process.

Example

>>> from nevergrad.optimization import optimizerlib
>>> instrumentation = 1
>>> optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
>>> algo = NevergradSearch(optimizer, ["lr"], max_concurrent=4,
>>>                        metric="mean_loss", mode="min")

Note

In nevergrad v0.2.0+, optimizers can be instrumented. For instance, the following specifies searching for “lr” in the range 1 to 2.

>>> from nevergrad.optimization import optimizerlib
>>> from nevergrad import instrumentation as inst
>>> lr = inst.var.Array(1).bounded(1, 2).asfloat()
>>> instrumentation = inst.Instrumentation(lr=lr)
>>> optimizer = optimizerlib.OnePlusOne(instrumentation, budget=100)
>>> algo = NevergradSearch(optimizer, None, max_concurrent=4,
>>>                        metric="mean_loss", mode="min")

SigOptSearch

class ray.tune.suggest.sigopt.SigOptSearch(space, name='Default Tune Experiment', max_concurrent=1, reward_attr=None, metric='episode_reward_mean', mode='max', **kwargs)[source]

A wrapper around SigOpt to provide trial suggestions.

Requires SigOpt to be installed. Requires the user to store their SigOpt API key locally in the SIGOPT_KEY environment variable.

Parameters
  • space (list of dict) – SigOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.

  • name (str) – Name of experiment. Required by SigOpt.

  • max_concurrent (int) – Maximum number of concurrent trials supported by the user’s SigOpt plan. Defaults to 1.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

Example:

from ray import tune
from ray.tune.suggest.sigopt import SigOptSearch

space = [
    {
        'name': 'width',
        'type': 'int',
        'bounds': {
            'min': 0,
            'max': 20
        },
    },
    {
        'name': 'height',
        'type': 'int',
        'bounds': {
            'min': -100,
            'max': 100
        },
    },
]
algo = SigOptSearch(
    space, name="SigOpt Example Experiment",
    max_concurrent=1, metric="mean_loss", mode="min")
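
As with the other searchers, the resulting algo is passed to tune.run via search_alg. The call below is a usage sketch in which my_func stands for any trainable (and SIGOPT_KEY must be set in the environment):

tune.run(my_func, search_alg=algo)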

SkOptSearch

class ray.tune.suggest.skopt.SkOptSearch(optimizer, parameter_names, max_concurrent=10, reward_attr=None, metric='episode_reward_mean', mode='max', points_to_evaluate=None, evaluated_rewards=None, **kwargs)[source]

A wrapper around skopt to provide trial suggestions.

Requires skopt to be installed.

Parameters
  • optimizer (skopt.optimizer.Optimizer) – Optimizer provided from skopt.

  • parameter_names (list) – List of parameter names. Should match the dimension of the optimizer output.

  • max_concurrent (int) – Maximum number of concurrent trials. Defaults to 10.

  • metric (str) – The training result objective value attribute.

  • mode (str) – One of {min, max}. Determines whether objective is minimizing or maximizing the metric attribute.

  • points_to_evaluate (list of lists) – A list of points you’d like to run first before sampling from the optimiser, e.g. these could be parameter configurations you already know work well to help the optimiser select good values. Each point is a list of the parameters using the order definition given by parameter_names.

  • evaluated_rewards (list) – If you have previously evaluated the parameters passed in as points_to_evaluate you can avoid re-running those trials by passing in the reward attributes as a list so the optimiser can be told the results without needing to re-compute the trial. Must be the same length as points_to_evaluate. (See tune/examples/skopt_example.py)

  • use_early_stopped_trials (bool) – Whether to use early terminated trial results in the optimization process.

Example

>>> from skopt import Optimizer
>>> optimizer = Optimizer([(0,20),(-100,100)])
>>> current_best_params = [[10, 0], [15, -20]]
>>> algo = SkOptSearch(optimizer,
>>>     ["width", "height"],
>>>     max_concurrent=4,
>>>     metric="mean_loss",
>>>     mode="min",
>>>     points_to_evaluate=current_best_params)
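
If those two points have already been evaluated, their objective values can be supplied via evaluated_rewards so the optimizer learns from them without re-running the trials (the reward values below are illustrative; see tune/examples/skopt_example.py):

>>> known_rewards = [-189, -1144]  # illustrative mean_loss values
>>> algo = SkOptSearch(optimizer,
>>>     ["width", "height"],
>>>     max_concurrent=4,
>>>     metric="mean_loss",
>>>     mode="min",
>>>     points_to_evaluate=current_best_params,
>>>     evaluated_rewards=known_rewards)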

SearchAlgorithm

class ray.tune.suggest.SearchAlgorithm[source]

Interface of an event handler API for hyperparameter search.

Unlike TrialSchedulers, SearchAlgorithms will not have the ability to modify the execution (i.e., stop and pause trials).

Trials added manually (i.e., via the Client API) will also notify this class upon new events, so custom search algorithms should maintain a list of trial IDs generated by this class.

See also: ray.tune.suggest.BasicVariantGenerator.

add_configurations(experiments)[source]

Tracks given experiment specifications.

Parameters

experiments (Experiment | list | dict) – Experiments to run.

next_trials()[source]

Provides Trial objects to be queued into the TrialRunner.

Returns

Returns a list of trials.

Return type

trials (list)

on_trial_result(trial_id, result)[source]

Called on each intermediate result returned by a trial.

This will only be called when the trial is in the RUNNING state.

Parameters

trial_id – Identifier for the trial.

on_trial_complete(trial_id, result=None, error=False, early_terminated=False)[source]

Notification for the completion of a trial.

Parameters
  • trial_id – Identifier for the trial.

  • result (dict) – Defaults to None. A dict will be provided with this notification when the trial is in the RUNNING state AND either completes naturally or by manual termination.

  • error (bool) – Defaults to False. True if the trial is in the RUNNING state and errors.

  • early_terminated (bool) – Defaults to False. True if the trial is stopped while in PAUSED or PENDING state.

is_finished()[source]

Returns True if no trials left to be queued into TrialRunner.

Can return True before all trials have finished executing.

set_finished()[source]

Marks the search algorithm as finished.

SuggestionAlgorithm

class ray.tune.suggest.SuggestionAlgorithm(metric=None, mode='max', use_early_stopped_trials=True)[source]

Bases: ray.tune.suggest.search.SearchAlgorithm

Abstract class for suggestion-based algorithms.

Custom search algorithms can extend this class easily by overriding the suggest method to provide generated parameters for the trials.

To track suggestions and their corresponding evaluations, the method suggest will be passed a trial_id, which will be used in subsequent notifications.

suggester = SuggestionAlgorithm()
suggester.add_configurations({ ... })
new_parameters = suggester.suggest(trial_id)
suggester.on_trial_complete(trial_id, result)
better_parameters = suggester.suggest(new_trial_id)
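
A minimal subclass sketch (hedged: the class name, the random sampling, and the bookkeeping below are illustrative, not part of Tune):

import random

from ray.tune.suggest import SuggestionAlgorithm

class RandomWidthSearch(SuggestionAlgorithm):
    """Toy suggester that proposes a random 'width' for every trial."""

    def __init__(self, metric="mean_loss", mode="min"):
        super(RandomWidthSearch, self).__init__(metric=metric, mode=mode)
        self._live_trials = {}

    def suggest(self, trial_id):
        # Returning a dict schedules a trial with this config; returning
        # None would temporarily stop the TrialRunner from querying.
        config = {"width": random.uniform(0, 20)}
        self._live_trials[trial_id] = config
        return config

    def on_trial_complete(self, trial_id, result=None, **kwargs):
        # A real searcher would feed result[self.metric] back into its
        # model here; this toy version only drops its bookkeeping.
        self._live_trials.pop(trial_id, None)
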
add_configurations(experiments)[source]

Chains a generator over the given experiment specifications.

Parameters

experiments (Experiment | list | dict) – Experiments to run.

next_trials()[source]

Provides a batch of Trial objects to be queued into the TrialRunner.

A batch ends when self._trial_generator returns None.

Returns

Returns a list of trials.

Return type

trials (list)

_generate_trials(num_samples, experiment_spec, output_path='')[source]

Generates trials with configurations from suggest.

Creates a trial_id that is passed into suggest.

Yields

Trial objects constructed according to spec

suggest(trial_id)[source]

Queries the algorithm to retrieve the next set of parameters.

Parameters

trial_id – Trial ID used for subsequent notifications.

Returns

Configuration for a trial, if possible.

Else, returns None, which will temporarily stop the TrialRunner from querying.

Return type

dict|None

Example

>>> suggester = SuggestionAlgorithm(max_concurrent=1)
>>> suggester.add_configurations({ ... })
>>> parameters_1 = suggester.suggest("trial_1")
>>> parameters_2 = suggester.suggest("trial_2")
>>> parameters_2 is None
>>> suggester.on_trial_complete("trial_1", result)
>>> parameters_2 = suggester.suggest("trial_2")
>>> parameters_2 is not None
property metric

The training result objective value attribute.

property mode

Specifies if minimizing or maximizing the metric.