Tune Package Reference

ray.tune

ray.tune.grid_search(values)

Convenience method for specifying grid search over a value.

Parameters: values – An iterable of values to grid search over.
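
Example (a minimal sketch; the key name is illustrative):

>>> config = {
>>>     # Each of the three values below becomes its own trial variant.
>>>     "alpha": tune.grid_search([0.2, 0.4, 0.6]),
>>> }
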
ray.tune.register_env(name, env_creator)

Register a custom environment for use with RLlib.

Parameters:
  • name (str) – Name to register.
  • env_creator (obj) – Function that creates an env.
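
Example (a hedged sketch; MyEnv is a hypothetical environment class):

>>> from ray.tune import register_env
>>> # The creator function is called to construct the environment.
>>> register_env("my_env", lambda config: MyEnv(config))
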
ray.tune.register_trainable(name, trainable)

Register a trainable function or class.

Parameters:
  • name (str) – Name to register.
  • trainable (obj) – Function or tune.Trainable class. Functions must take (config, status_reporter) as arguments and will be automatically converted into a class during registration.
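
Example (a minimal sketch; my_train and the metric names are illustrative):

>>> from ray.tune import register_trainable
>>> def my_train(config, reporter):
>>>     # Report one result per logical iteration of training.
>>>     reporter(timesteps_total=1, mean_accuracy=config["alpha"])
>>> register_trainable("my_trainable", my_train)
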
ray.tune.run_experiments(experiments=None, search_alg=None, scheduler=None, with_server=False, server_port=4321, verbose=True, queue_trials=False, trial_executor=None)

Runs and blocks until all trials finish.

Parameters:
  • experiments (Experiment | list | dict) – Experiments to run. Will be passed to search_alg via add_configurations.
  • search_alg (SearchAlgorithm) – Search Algorithm. Defaults to BasicVariantGenerator.
  • scheduler (TrialScheduler) – Scheduler for executing the experiment. Choose among FIFO (default), MedianStopping, AsyncHyperBand, and HyperBand.
  • with_server (bool) – Starts a background Tune server. Needed for using the Client API.
  • server_port (int) – Port number for launching TuneServer.
  • verbose (bool) – How much output should be printed for each trial.
  • queue_trials (bool) – Whether to queue trials when the cluster does not currently have enough resources to launch one. This should be set to True when running on an autoscaling cluster to enable automatic scale-up.
  • trial_executor (TrialExecutor) – Manage the execution of trials.

Examples

>>> experiment_spec = Experiment("experiment", my_func)
>>> run_experiments(experiments=experiment_spec)
>>> experiment_spec = {"experiment": {"run": my_func}}
>>> run_experiments(experiments=experiment_spec)
>>> run_experiments(
>>>     experiments=experiment_spec,
>>>     scheduler=MedianStoppingRule(...))
>>> run_experiments(
>>>     experiments=experiment_spec,
>>>     search_alg=SearchAlgorithm(),
>>>     scheduler=MedianStoppingRule(...))
Returns: List of Trial objects, holding data for each executed trial.
class ray.tune.Experiment(name, run, stop=None, config=None, trial_resources=None, repeat=1, num_samples=1, local_dir=None, upload_dir='', checkpoint_freq=0, checkpoint_at_end=False, max_failures=3, restore=None)

Tracks experiment specifications.

Parameters:
  • name (str) – Name of experiment.
  • run (function|class|str) – The algorithm or model to train. This may refer to the name of a built-in algorithm (e.g. RLlib’s DQN or PPO), a user-defined trainable function or class, or the string identifier of a trainable function or class registered in the tune registry.
  • stop (dict) – The stopping criteria. The keys may be any field in the return result of train(); training stops as soon as any of the criteria is met. Defaults to empty dict.
  • config (dict) – Algorithm-specific configuration for Tune variant generation (e.g. env, hyperparams). Defaults to empty dict. Custom search algorithms may ignore this.
  • trial_resources (dict) – Machine resources to allocate per trial, e.g. {"cpu": 64, "gpu": 8}. Note that GPUs will not be assigned unless you specify them here. Defaults to 1 CPU and 0 GPUs in Trainable.default_resource_request().
  • repeat (int) – Deprecated and will be removed in future versions of Ray. Use num_samples instead.
  • num_samples (int) – Number of times to sample from the hyperparameter space. Defaults to 1. If grid_search is provided as an argument, the grid will be repeated num_samples times.
  • local_dir (str) – Local dir to save training results to. Defaults to ~/ray_results.
  • upload_dir (str) – Optional URI to sync training results to (e.g. s3://bucket).
  • checkpoint_freq (int) – How many training iterations between checkpoints. A value of 0 (default) disables checkpointing.
  • checkpoint_at_end (bool) – Whether to checkpoint at the end of the experiment regardless of the checkpoint_freq. Default is False.
  • max_failures (int) – Try to recover a trial from its last checkpoint at least this many times. Only applies if checkpointing is enabled. Defaults to 3.
  • restore (str) – Path to checkpoint. Only makes sense to set if running 1 trial. Defaults to None.

Examples

>>> experiment_spec = Experiment(
>>>     "my_experiment_name",
>>>     my_func,
>>>     stop={"mean_accuracy": 100},
>>>     config={
>>>         "alpha": tune.grid_search([0.2, 0.4, 0.6]),
>>>         "beta": tune.grid_search([1, 2]),
>>>     },
>>>     trial_resources={
>>>         "cpu": 1,
>>>         "gpu": 0
>>>     },
>>>     num_samples=10,
>>>     local_dir="~/ray_results",
>>>     upload_dir="s3://your_bucket/path",
>>>     checkpoint_freq=10,
>>>     max_failures=2)
classmethod from_json(name, spec)

Generates an Experiment object from JSON.

Parameters:
  • name (str) – Name of Experiment.
  • spec (dict) – JSON configuration of experiment.
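
Example (a hedged sketch; the spec fields mirror the Experiment constructor above, and "my_trainable" is assumed to be a registered trainable):

>>> spec = {
>>>     "run": "my_trainable",
>>>     "stop": {"mean_accuracy": 100},
>>>     "num_samples": 10,
>>> }
>>> experiment = Experiment.from_json("my_experiment_name", spec)
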
class ray.tune.function(func)

Wraps func to make sure it is not expanded during resolution.
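
Example (a hedged sketch; the wrapper passes the callable through to the trainable as a literal config value rather than resolving it during variant generation):

>>> config = {
>>>     # Received by the trainable as a plain callable.
>>>     "custom_fn": tune.function(lambda x: x * 2),
>>> }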

class ray.tune.Trainable(config=None, logger_creator=None)

Abstract class for trainable models, functions, etc.

A call to train() on a trainable will execute one logical iteration of training. As a rule of thumb, the execution time of one train call should be large enough to avoid overheads (i.e. more than a few seconds), but short enough to report progress periodically (i.e. at most a few minutes).

Calling save() should save the training state of a trainable to disk, and restore(path) should restore a trainable to the given state.

Generally you only need to implement _train, _save, and _restore here when subclassing Trainable.

Note that if you don’t require checkpoint/restore functionality, then instead of implementing this class you can get away with supplying just a my_train(config, reporter) function as the trainable. The function will be automatically converted to this interface (sans checkpoint functionality).
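
For example, a minimal subclass might look like the following (a hedged sketch; the state layout and metric names are illustrative):

>>> import json, os
>>> class MyTrainable(ray.tune.Trainable):
>>>     def _setup(self):
>>>         # Hyperparameters are available via self.config.
>>>         self.accuracy = 0.0
>>>     def _train(self):
>>>         self.accuracy += self.config.get("alpha", 0.1)
>>>         return {"mean_accuracy": self.accuracy}
>>>     def _save(self, checkpoint_dir):
>>>         path = os.path.join(checkpoint_dir, "checkpoint.json")
>>>         with open(path, "w") as f:
>>>             json.dump({"accuracy": self.accuracy}, f)
>>>         return path
>>>     def _restore(self, checkpoint_path):
>>>         with open(checkpoint_path) as f:
>>>             self.accuracy = json.load(f)["accuracy"]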

classmethod default_resource_request(config)

Returns the resource requirement for the given configuration.

This can be overridden by subclasses to set the correct trial resource allocation, so the user does not need to.
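
A hedged sketch of an override (the Resources class and its import path are assumptions about this Ray version; adjust as needed):

>>> from ray.tune.trial import Resources  # assumed import path
>>> class MyTrainable(ray.tune.Trainable):
>>>     @classmethod
>>>     def default_resource_request(cls, config):
>>>         # Claim a GPU only when the trial config requests one.
>>>         return Resources(cpu=1, gpu=1 if config.get("use_gpu") else 0)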

classmethod resource_help(config)

Returns a help string for configuring this trainable’s resources.

train()

Runs one logical iteration of training.

Subclasses should override _train() instead to return results. This class automatically fills the following fields in the result:

  • done (bool): Whether training is terminated. Filled in only if not provided.
  • time_this_iter_s (float): Time in seconds this iteration took to run. This may be overridden to replace the system-computed time difference.
  • time_total_s (float): Accumulated time in seconds for this entire experiment.
  • experiment_id (str): Unique string identifier for this experiment. This id is preserved across checkpoint / restore calls.
  • training_iteration (int): The index of this training iteration, i.e. the number of calls to train().
  • pid (str): The pid of the training process.
  • date (str): A formatted date string for when the result was processed.
  • timestamp (str): A UNIX timestamp for when the result was processed.
  • hostname (str): Hostname of the machine hosting the training process.
  • node_ip (str): Node IP of the machine hosting the training process.

Returns: A dict that describes training progress.
save(checkpoint_dir=None)

Saves the current model state to a checkpoint.

Subclasses should override _save() instead to save state. This method dumps additional metadata alongside the saved path.

Parameters: checkpoint_dir (str) – Optional dir to place the checkpoint.
Returns: Checkpoint path that may be passed to restore().
save_to_object()

Saves the current model state to a Python object. It also saves to disk but does not return the checkpoint path.

Returns: Object holding checkpoint data.
restore(checkpoint_path)

Restores training state from a given model checkpoint.

These checkpoints are returned from calls to save().

Subclasses should override _restore() instead to restore state. This method restores additional metadata saved with the checkpoint.
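
A minimal checkpoint round trip (illustrative; my_trainable is an instance of a Trainable subclass):

>>> checkpoint_path = my_trainable.save()
>>> # ... later, possibly in a fresh process:
>>> my_trainable.restore(checkpoint_path)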

restore_from_object(obj)

Restores training state from a checkpoint object.

These checkpoints are returned from calls to save_to_object().

stop()

Releases all resources used by this trainable.

_train()

Subclasses should override this to implement train().

Returns: A dict that describes training progress.
_save(checkpoint_dir)

Subclasses should override this to implement save().

_restore(checkpoint_path)

Subclasses should override this to implement restore().

_setup()

Subclasses should override this for custom initialization.

Subclasses can access the hyperparameter configuration via self.config.

_stop()

Subclasses should override this for any cleanup on stop.

ray.tune.schedulers

class ray.tune.schedulers.TrialScheduler

Bases: object

Interface for implementing a Trial Scheduler class.

CONTINUE = 'CONTINUE'

Status for continuing trial execution

PAUSE = 'PAUSE'

Status for pausing trial execution

STOP = 'STOP'

Status for stopping trial execution

on_trial_add(trial_runner, trial)

Called when a new trial is added to the trial runner.

on_trial_error(trial_runner, trial)

Notification for the error of trial.

This will only be called when the trial is in the RUNNING state.

on_trial_result(trial_runner, trial, result)

Called on each intermediate result returned by a trial.

At this point, the trial scheduler can make a decision by returning one of CONTINUE, PAUSE, and STOP. This will only be called when the trial is in the RUNNING state.

on_trial_complete(trial_runner, trial, result)

Notification for the completion of trial.

This will only be called when the trial is in the RUNNING state and either completes naturally or by manual termination.

on_trial_remove(trial_runner, trial)

Called to remove trial.

This is called when the trial is in PAUSED or PENDING state. Otherwise, call on_trial_complete.

choose_trial_to_run(trial_runner)

Called to choose a new trial to run.

This should return one of the trials in trial_runner that is in the PENDING or PAUSED state. This function must be idempotent.

If no trial is ready, return None.

debug_string()

Returns a human readable message for printing to the console.
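
As a hedged sketch, a custom scheduler can subclass FIFOScheduler (documented below) to inherit default behavior and override only the decision hook (the class name, metric, and threshold here are illustrative):

>>> from ray.tune.schedulers import FIFOScheduler, TrialScheduler
>>> class ThresholdStopper(FIFOScheduler):
>>>     def on_trial_result(self, trial_runner, trial, result):
>>>         # Stop a trial once it reports a good-enough reward;
>>>         # otherwise let it continue running.
>>>         if result.get("episode_reward_mean", 0) >= 200:
>>>             return TrialScheduler.STOP
>>>         return TrialScheduler.CONTINUE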

class ray.tune.schedulers.HyperBandScheduler(time_attr='training_iteration', reward_attr='episode_reward_mean', max_t=81)

Bases: ray.tune.schedulers.trial_scheduler.FIFOScheduler

Implements the HyperBand early stopping algorithm.

HyperBandScheduler early stops trials using the HyperBand optimization algorithm. It divides trials into brackets of varying sizes, and periodically early stops low-performing trials within each bracket.

To use this implementation of HyperBand with Tune, all you need to do is specify the max length of time a trial can run max_t, the time units time_attr, and the name of the reported objective value reward_attr. We automatically determine reasonable values for the other HyperBand parameters based on the given values.

For example, to limit trials to 10 minutes and early stop based on the episode_reward_mean attr, construct:

HyperBandScheduler('time_total_s', 'episode_reward_mean', 600)

See also: https://people.eecs.berkeley.edu/~kjamieson/hyperband.html

Parameters:
  • time_attr (str) – The training result attr to use for comparing time. Note that you can pass in something non-temporal such as training_iteration as a measure of progress, the only requirement is that the attribute should increase monotonically.
  • reward_attr (str) – The training result objective value attribute. As with time_attr, this may refer to any objective value. Stopping procedures will use this attribute.
  • max_t (int) – Max time units per trial. Trials will be stopped after max_t time units (determined by time_attr) have passed. Note that this is different from the semantics of max_t as mentioned in the original HyperBand paper.
on_trial_add(trial_runner, trial)

Adds new trial.

On adding a new trial: if the current bracket is not filled, add the trial to the current bracket. Otherwise, if the current band is not filled, create a new bracket and add the trial to it. Otherwise, create a new iteration with a new bracket and add the trial there.

on_trial_result(trial_runner, trial, result)

If bracket is finished, all trials will be stopped.

If a given trial finishes and bracket iteration is not done, the trial will be paused and resources will be given up.

This scheduler will not start trials but will stop trials. The currently running trial will not be handled, as the trial runner will be given control to handle it.

on_trial_remove(trial_runner, trial)

Notification when trial terminates.

Trial info is removed from bracket. Triggers halving if bracket is not finished.

on_trial_complete(trial_runner, trial, result)

Cleans up trial info from bracket if trial completed early.

on_trial_error(trial_runner, trial)

Cleans up trial info from bracket if trial errored early.

choose_trial_to_run(trial_runner)

Fair scheduling within an iteration by completion percentage.

The list of trials is not used, since all trials are tracked as scheduler state. If the current iteration is occupied (i.e., there are no trials to run), the next iteration is examined.

debug_string()

This provides a progress notification for the algorithm.

For each bracket, the algorithm will output a string as follows:

Bracket(Max Size (n)=5, Milestone (r)=33, completed=14.6%): {PENDING: 2, RUNNING: 3, TERMINATED: 2}

“Max Size” indicates the maximum number of pending/running experiments, set according to the HyperBand algorithm.

“Milestone” indicates the number of iterations a trial will run before the next halving occurs.

“Completed” indicates an approximate progress metric. Some brackets, like ones that are unfilled, will not reach 100%.

class ray.tune.schedulers.AsyncHyperBandScheduler(time_attr='training_iteration', reward_attr='episode_reward_mean', max_t=100, grace_period=10, reduction_factor=3, brackets=3)

Bases: ray.tune.schedulers.trial_scheduler.FIFOScheduler

Implements the Async Successive Halving algorithm.

This should provide similar theoretical performance as HyperBand but avoid straggler issues that HyperBand faces. One implementation detail: when using multiple brackets, trial allocation to a bracket is done randomly with a softmax probability.

See https://openreview.net/forum?id=S1Y7OOlRZ

Parameters:
  • time_attr (str) – A training result attr to use for comparing time. Note that you can pass in something non-temporal such as training_iteration as a measure of progress, the only requirement is that the attribute should increase monotonically.
  • reward_attr (str) – The training result objective value attribute. As with time_attr, this may refer to any objective value. Stopping procedures will use this attribute.
  • max_t (float) – max time units per trial. Trials will be stopped after max_t time units (determined by time_attr) have passed.
  • grace_period (float) – Only stop trials at least this old in time. The units are the same as the attribute named by time_attr.
  • reduction_factor (float) – Used to set halving rate and amount. This is simply a unit-less scalar.
  • brackets (int) – Number of brackets. Each bracket has a different halving rate, specified by the reduction factor.
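
Example (a minimal sketch mirroring the constructor defaults above):

>>> ahb = AsyncHyperBandScheduler(
>>>     time_attr="training_iteration",
>>>     reward_attr="episode_reward_mean",
>>>     max_t=100,
>>>     grace_period=10)
>>> run_experiments({...}, scheduler=ahb)
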
on_trial_add(trial_runner, trial)

Called when a new trial is added to the trial runner.

on_trial_result(trial_runner, trial, result)

Called on each intermediate result returned by a trial.

At this point, the trial scheduler can make a decision by returning one of CONTINUE, PAUSE, and STOP. This will only be called when the trial is in the RUNNING state.

on_trial_complete(trial_runner, trial, result)

Notification for the completion of trial.

This will only be called when the trial is in the RUNNING state and either completes naturally or by manual termination.

on_trial_remove(trial_runner, trial)

Called to remove trial.

This is called when the trial is in PAUSED or PENDING state. Otherwise, call on_trial_complete.

debug_string()

Returns a human readable message for printing to the console.

class ray.tune.schedulers.MedianStoppingRule(time_attr='time_total_s', reward_attr='episode_reward_mean', grace_period=60.0, min_samples_required=3, hard_stop=True, verbose=True)

Bases: ray.tune.schedulers.trial_scheduler.FIFOScheduler

Implements the median stopping rule as described in the Vizier paper:

https://research.google.com/pubs/pub46180.html

Parameters:
  • time_attr (str) – The training result attr to use for comparing time. Note that you can pass in something non-temporal such as training_iteration as a measure of progress, the only requirement is that the attribute should increase monotonically.
  • reward_attr (str) – The training result objective value attribute. As with time_attr, this may refer to any objective value that is supposed to increase with time.
  • grace_period (float) – Only stop trials at least this old in time. The units are the same as the attribute named by time_attr.
  • min_samples_required (int) – Min samples to compute median over.
  • hard_stop (bool) – If False, pauses trials instead of stopping them. When all other trials are complete, paused trials will be resumed and allowed to run FIFO.
  • verbose (bool) – If True, will output the median and best result each time a trial reports. Defaults to True.
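
Example (a minimal sketch using the defaults described above):

>>> rule = MedianStoppingRule(
>>>     time_attr="time_total_s",
>>>     reward_attr="episode_reward_mean",
>>>     grace_period=60.0)
>>> run_experiments({...}, scheduler=rule)
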
on_trial_result(trial_runner, trial, result)

Callback for early stopping.

This stopping rule stops a running trial if the trial’s best objective value by step t is strictly worse than the median of the running averages of all completed trials’ objectives reported up to step t.

on_trial_complete(trial_runner, trial, result)

Notification for the completion of trial.

This will only be called when the trial is in the RUNNING state and either completes naturally or by manual termination.

on_trial_remove(trial_runner, trial)

Marks trial as completed if it is paused and has previously run.

debug_string()

Returns a human readable message for printing to the console.

class ray.tune.schedulers.FIFOScheduler

Bases: ray.tune.schedulers.trial_scheduler.TrialScheduler

Simple scheduler that just runs trials in submission order.

on_trial_add(trial_runner, trial)

Called when a new trial is added to the trial runner.

on_trial_error(trial_runner, trial)

Notification for the error of trial.

This will only be called when the trial is in the RUNNING state.

on_trial_result(trial_runner, trial, result)

Called on each intermediate result returned by a trial.

At this point, the trial scheduler can make a decision by returning one of CONTINUE, PAUSE, and STOP. This will only be called when the trial is in the RUNNING state.

on_trial_complete(trial_runner, trial, result)

Notification for the completion of trial.

This will only be called when the trial is in the RUNNING state and either completes naturally or by manual termination.

on_trial_remove(trial_runner, trial)

Called to remove trial.

This is called when the trial is in PAUSED or PENDING state. Otherwise, call on_trial_complete.

choose_trial_to_run(trial_runner)

Called to choose a new trial to run.

This should return one of the trials in trial_runner that is in the PENDING or PAUSED state. This function must be idempotent.

If no trial is ready, return None.

debug_string()

Returns a human readable message for printing to the console.

class ray.tune.schedulers.PopulationBasedTraining(time_attr='time_total_s', reward_attr='episode_reward_mean', perturbation_interval=60.0, hyperparam_mutations={}, resample_probability=0.25, custom_explore_fn=None)

Bases: ray.tune.schedulers.trial_scheduler.FIFOScheduler

Implements the Population Based Training (PBT) algorithm.

https://deepmind.com/blog/population-based-training-neural-networks

PBT trains a group of models (or agents) in parallel. Periodically, poorly performing models clone the state of the top performers, and a random mutation is applied to their hyperparameters in the hopes of outperforming the current top models.

Unlike other hyperparameter search algorithms, PBT mutates hyperparameters during training time. This enables very fast hyperparameter discovery and also automatically discovers good annealing schedules.

This Tune PBT implementation considers all trials added as part of the PBT population. If the number of trials exceeds the cluster capacity, they will be time-multiplexed so as to balance training progress across the population.

Parameters:
  • time_attr (str) – The training result attr to use for comparing time. Note that you can pass in something non-temporal such as training_iteration as a measure of progress, the only requirement is that the attribute should increase monotonically.
  • reward_attr (str) – The training result objective value attribute. As with time_attr, this may refer to any objective value. Stopping procedures will use this attribute.
  • perturbation_interval (float) – Models will be considered for perturbation at this interval of time_attr. Note that perturbation incurs checkpoint overhead, so you shouldn’t set this to be too frequent.
  • hyperparam_mutations (dict) – Hyperparams to mutate. The format is as follows: for each key, either a list or function can be provided. A list specifies an allowed set of categorical values. A function specifies the distribution of a continuous parameter. You must specify at least one of hyperparam_mutations or custom_explore_fn.
  • resample_probability (float) – The probability of resampling from the original distribution when applying hyperparam_mutations. If not resampled, the value will be perturbed by a factor of 1.2 or 0.8 if continuous, or changed to an adjacent value if discrete.
  • custom_explore_fn (func) – You can also specify a custom exploration function. This function is invoked as f(config) after built-in perturbations from hyperparam_mutations are applied, and should return config updated as needed. You must specify at least one of hyperparam_mutations or custom_explore_fn.

Example

>>> pbt = PopulationBasedTraining(
>>>     time_attr="training_iteration",
>>>     reward_attr="episode_reward_mean",
>>>     perturbation_interval=10,  # every 10 `time_attr` units
>>>                                # (training_iterations in this case)
>>>     hyperparam_mutations={
>>>         # Perturb factor1 by scaling it by 0.8 or 1.2. Resampling
>>>         # resets it to a value sampled from the lambda function.
>>>         "factor_1": lambda: random.uniform(0.0, 20.0),
>>>         # Perturb factor2 by changing it to an adjacent value, e.g.
>>>         # 10 -> 1 or 10 -> 100. Resampling will choose at random.
>>>         "factor_2": [1, 10, 100, 1000, 10000],
>>>     })
>>> run_experiments({...}, scheduler=pbt)
on_trial_add(trial_runner, trial)

Called when a new trial is added to the trial runner.

on_trial_result(trial_runner, trial, result)

Called on each intermediate result returned by a trial.

At this point, the trial scheduler can make a decision by returning one of CONTINUE, PAUSE, and STOP. This will only be called when the trial is in the RUNNING state.

choose_trial_to_run(trial_runner)

Ensures all trials get fair share of time (as defined by time_attr).

This enables the PBT scheduler to support a greater number of concurrent trials than can fit in the cluster at any given time.

debug_string()

Returns a human readable message for printing to the console.

ray.tune.suggest

class ray.tune.suggest.SearchAlgorithm

Bases: object

Interface of an event handler API for hyperparameter search.

Unlike TrialSchedulers, SearchAlgorithms will not have the ability to modify the execution (i.e., stop and pause trials).

Trials added manually (i.e., via the Client API) will also notify this class upon new events, so custom search algorithms should maintain a list of trial IDs generated by this class.

See also: ray.tune.suggest.BasicVariantGenerator.

add_configurations(experiments)

Tracks given experiment specifications.

Parameters: experiments (Experiment | list | dict) – Experiments to run.
next_trials()

Provides Trial objects to be queued into the TrialRunner.

Returns: A list of trials.
Return type: trials (list)
on_trial_result(trial_id, result)

Called on each intermediate result returned by a trial.

This will only be called when the trial is in the RUNNING state.

Parameters: trial_id – Identifier for the trial.
on_trial_complete(trial_id, result=None, error=False, early_terminated=False)

Notification for the completion of trial.

Parameters:
  • trial_id – Identifier for the trial.
  • result (dict) – Defaults to None. A dict will be provided with this notification when the trial is in the RUNNING state AND either completes naturally or by manual termination.
  • error (bool) – Defaults to False. True if the trial is in the RUNNING state and errors.
  • early_terminated (bool) – Defaults to False. True if the trial is stopped while in PAUSED or PENDING state.
is_finished()

Returns True if no trials left to be queued into TrialRunner.

Can return True before all trials have finished executing.

class ray.tune.suggest.BasicVariantGenerator

Bases: ray.tune.suggest.search.SearchAlgorithm

Uses Tune’s variant generation for resolving variables.

See also: ray.tune.suggest.variant_generator.

Example

>>> searcher = BasicVariantGenerator()
>>> searcher.add_configurations({"experiment": { ... }})
>>> list_of_trials = searcher.next_trials()
>>> searcher.is_finished() == True
add_configurations(experiments)

Chains generator given experiment specifications.

Parameters: experiments (Experiment | list | dict) – Experiments to run.
next_trials()

Provides Trial objects to be queued into the TrialRunner.

Returns: A list of trials.
Return type: trials (list)
is_finished()

Returns True if no trials left to be queued into TrialRunner.

Can return True before all trials have finished executing.

class ray.tune.suggest.HyperOptSearch(space, max_concurrent=10, reward_attr='episode_reward_mean', **kwargs)

Bases: ray.tune.suggest.suggestion.SuggestionAlgorithm

A wrapper around HyperOpt to provide trial suggestions.

Requires HyperOpt to be installed from source. Uses the Tree-structured Parzen Estimators algorithm, although it can be trivially extended to support any algorithm HyperOpt uses. Externally added trials will not be tracked by HyperOpt.

Parameters:
  • space (dict) – HyperOpt configuration. Parameters will be sampled from this configuration and will be used to override parameters generated in the variant generation process.
  • max_concurrent (int) – Number of maximum concurrent trials. Defaults to 10.
  • reward_attr (str) – The training result objective value attribute. This refers to an increasing value.

Example

>>> space = {
>>>     'width': hp.uniform('width', 0, 20),
>>>     'height': hp.uniform('height', -100, 100),
>>>     'activation': hp.choice("activation", ["relu", "tanh"])
>>> }
>>> config = {
>>>     "my_exp": {
>>>         "run": "exp",
>>>         "num_samples": 10 if args.smoke_test else 1000,
>>>         "stop": {
>>>             "training_iteration": 100
>>>         },
>>>     }
>>> }
>>> algo = HyperOptSearch(
>>>     space, max_concurrent=4, reward_attr="neg_mean_loss")
>>> algo.add_configurations(config)
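
To launch the search, the algorithm can then be passed to run_experiments (a usage sketch consistent with the run_experiments API above):

>>> run_experiments(config, search_alg=algo)
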
on_trial_result(trial_id, result)

Called on each intermediate result returned by a trial.

This will only be called when the trial is in the RUNNING state.

Parameters: trial_id – Identifier for the trial.
on_trial_complete(trial_id, result=None, error=False, early_terminated=False)

Passes the result to HyperOpt unless early terminated or errored.

The result is internally negated when interacting with HyperOpt so that HyperOpt can “maximize” this value, as it minimizes by default.

class ray.tune.suggest.SuggestionAlgorithm

Bases: ray.tune.suggest.search.SearchAlgorithm

Abstract class for suggestion-based algorithms.

Custom search algorithms can extend this class easily by overriding the _suggest method to provide generated parameters for the trials.

To track suggestions and their corresponding evaluations, the method _suggest will be passed a trial_id, which will be used in subsequent notifications.
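
As a hedged sketch, a random-search suggester might look like this (the class name, parameter name, and range are illustrative):

>>> import random
>>> class RandomSearch(SuggestionAlgorithm):
>>>     def _suggest(self, trial_id):
>>>         # Each suggestion is independent, so trial_id is unused here.
>>>         return {"lr": random.uniform(1e-4, 1e-1)}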

Example

>>> suggester = SuggestionAlgorithm()
>>> suggester.add_configurations({ ... })
>>> new_parameters = suggester._suggest(trial_id)
>>> suggester.on_trial_complete(trial_id, result)
>>> better_parameters = suggester._suggest(new_trial_id)
add_configurations(experiments)

Chains generator given experiment specifications.

Parameters: experiments (Experiment | list | dict) – Experiments to run.
next_trials()

Provides a batch of Trial objects to be queued into the TrialRunner.

A batch ends when self._trial_generator returns None.

Returns: A list of trials.
Return type: trials (list)
_generate_trials(experiment_spec, output_path='')

Generates trials with configurations from _suggest.

Creates a trial_id that is passed into _suggest.

Yields: Trial objects constructed according to spec.
is_finished()

Returns True if no trials left to be queued into TrialRunner.

Can return True before all trials have finished executing.

_suggest(trial_id)

Queries the algorithm to retrieve the next set of parameters.

Parameters: trial_id – Trial ID used for subsequent notifications.
Returns: Configuration for a trial, if possible. Else, returns None, which will temporarily stop the TrialRunner from querying.
Return type: dict | None

Example

>>> suggester = SuggestionAlgorithm(max_concurrent=1)
>>> suggester.add_configurations({ ... })
>>> parameters_1 = suggester._suggest(trial_id_1)
>>> parameters_2 = suggester._suggest(trial_id_2)
>>> parameters_2 is None
>>> suggester.on_trial_complete(trial_id_1, result)
>>> parameters_2 = suggester._suggest(trial_id_2)
>>> parameters_2 is not None