lale.lib.lale.hyperopt module

class lale.lib.lale.hyperopt.Hyperopt(*, estimator=None, scoring, best_score=0.0, args_to_scorer=None, cv=5, handle_cv_failure=False, verbose=False, show_progressbar=True, algo='tpe', max_evals=50, frac_evals_with_defaults=0, max_opt_time=None, max_eval_time=None, pgo=None)

Bases: PlannedIndividualOp

Hyperopt is a popular open-source Bayesian optimizer.

This documentation is auto-generated from JSON schemas.

Examples

>>> from lale.lib.lale import Hyperopt
>>> from lale.lib.sklearn import LogisticRegression as LR
>>> clf = Hyperopt(estimator=LR, cv=3, max_evals=5)
>>> from sklearn import datasets
>>> diabetes = datasets.load_diabetes()
>>> X = diabetes.data[:150]
>>> y = diabetes.target[:150]
>>> trained = clf.fit(X, y)
>>> predictions = trained.predict(X)

Other scoring metrics:

>>> from sklearn.metrics import make_scorer, f1_score
>>> clf = Hyperopt(estimator=LR,
...    scoring=make_scorer(f1_score, average='macro'), cv=3, max_evals=5)
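A scorer passed as a callable only needs the scoring(estimator, X, y) signature described under the scoring parameter below. The following sketch illustrates that protocol with a toy estimator; DummyEstimator and accuracy_scorer are illustrative names for this sketch, not part of lale or sklearn:

```python
class DummyEstimator:
    """Toy estimator that always predicts the most frequent training label."""
    def fit(self, X, y):
        self.majority_ = max(set(y), key=list(y).count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]

def accuracy_scorer(estimator, X, y):
    """Callable with the scoring(estimator, X, y) signature; higher is better."""
    predictions = estimator.predict(X)
    correct = sum(p == t for p, t in zip(predictions, y))
    return correct / len(y)

X = [[0], [1], [2], [3]]
y = [1, 1, 1, 0]
est = DummyEstimator().fit(X, y)
score = accuracy_scorer(est, X, y)  # 3 of 4 labels match the majority class
```

Any callable with this signature can be passed as the scoring argument, the same way the make_scorer result is in the example above.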
Parameters
  • estimator (union type, default None) –

    Planned Lale individual operator or pipeline.

    • operator

    • or None

      Fall back to lale.lib.sklearn.LogisticRegression.

  • scoring (union type, optional, not for optimizer) –

    Scorer object, or known scorer named by string.

    • None

      When not specified, use accuracy for classification tasks and r2 for regression.

    • or union type

      Scorer object, or known scorer named by string.

      • callable

        Callable with signature scoring(estimator, X, y) as documented in sklearn scoring.

        The callable has to return a scalar value, such that a higher score is better. It may be created from one of the sklearn metrics using make_scorer; it can be one of the scoring callables returned by the factory functions in lale.lib.aif360 metrics, for example, symmetric_disparate_impact(**fairness_info); or it can be a completely custom user-written Python callable.

      • or ‘accuracy’, ‘explained_variance’, ‘max_error’, ‘roc_auc’, ‘roc_auc_ovr’, ‘roc_auc_ovo’, ‘roc_auc_ovr_weighted’, ‘roc_auc_ovo_weighted’, ‘balanced_accuracy’, ‘average_precision’, ‘neg_log_loss’, or ‘neg_brier_score’

        Known scorer for classification task.

      • or ‘r2’, ‘neg_mean_squared_error’, ‘neg_mean_absolute_error’, ‘neg_root_mean_squared_error’, ‘neg_mean_squared_log_error’, or ‘neg_median_absolute_error’

        Known scorer for regression task.

  • best_score (float, optional, not for optimizer, default 0.0) –

    The best score for the specified scorer.

    Given that higher scores are better, passing (best_score - score) as a loss to the minimizing optimizer will maximize the score. By specifying best_score, the loss can be >=0, where 0 is the best loss.

  • args_to_scorer (union type, optional, not for optimizer, default None) –

    A dictionary of additional keyword arguments to pass to the scorer. Used for cases where the scorer has a signature such as scorer(estimator, X, y, **kwargs).

    • dict

    • or None

  • cv (union type, default 5) –

    Cross-validation as integer or as object that has a split function.

    The fit method performs cross validation on the input dataset per trial, and uses the mean cross validation performance for optimization. This behavior is also impacted by the handle_cv_failure flag.

    • union type

      • integer, >=2, >=3 for optimizer, <=4 for optimizer, uniform distribution, default 5

        Number of folds for cross-validation.

      • or None, not for optimizer

        Use the default 5-fold cross validation.

    • or CrossvalGenerator, not for optimizer

      Object with split function: generator yielding (train, test) splits as arrays of indices. Can use any of the iterators from https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators

  • handle_cv_failure (boolean, not for optimizer, default False) –

    How to deal with cross validation failure for a trial.

    If True, continue the trial by doing an 80-20 percent train-validation split of the dataset input to fit and report the score on the validation part. If False, terminate the trial with FAIL status.

  • verbose (boolean, optional, not for optimizer, default False) – Whether to print errors from each of the trials if any. This is also logged using logger.warning.

  • show_progressbar (boolean, not for optimizer, default True) – Display progress bar during optimization.

  • algo (union type, optional, not for optimizer, default 'tpe') –

    Algorithm for searching the space.

  • max_evals (integer, >=1, default 50) – Number of trials of Hyperopt search.

  • frac_evals_with_defaults (float, >=0.0, optional, not for optimizer, default 0) – Sometimes, using default values of hyperparameters works quite well. This value allows a fraction of the trials to use default values; Hyperopt searches the entire search space for the remaining (1 - frac_evals_with_defaults) fraction of max_evals.

  • max_opt_time (union type, not for optimizer, default None) –

    Maximum amount of time in seconds for the optimization.

    • float, >=0.0

    • or None

      No runtime bound.

  • max_eval_time (union type, optional, not for optimizer, default None) –

    Maximum amount of time in seconds for each evaluation.

    • float, >=0.0

    • or None

      No runtime bound.

  • pgo (union type, not for optimizer, default None) –

    • any type

      lale.search.PGO

    • or None
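The cv parameter above accepts, besides an integer, any object implementing the split protocol of sklearn's cross-validation iterators. A minimal sketch of that protocol, covering only the yielded (train, test) index pairs described above (TwoFoldSplitter is an illustrative name, not part of lale or sklearn):

```python
class TwoFoldSplitter:
    """Minimal object with a split function yielding (train, test) index pairs."""
    def split(self, X, y=None, groups=None):
        n = len(X)
        mid = n // 2
        first, second = list(range(mid)), list(range(mid, n))
        yield first, second   # train on first half, test on second
        yield second, first   # train on second half, test on first

    def get_n_splits(self, X=None, y=None, groups=None):
        return 2

splitter = TwoFoldSplitter()
folds = list(splitter.split([[0], [1], [2], [3]]))
# folds == [([0, 1], [2, 3]), ([2, 3], [0, 1])]
```

In practice one of the ready-made iterators from sklearn.model_selection (for example, KFold or StratifiedKFold) would usually be passed instead.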

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (any type) –

  • y (any type) –

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters
  • X (any type) –

Returns

result

Return type

any type
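To make the best_score and frac_evals_with_defaults descriptions above concrete, here is a small arithmetic sketch; both helper functions are illustrative, not lale internals:

```python
def loss_from_score(score, best_score=0.0):
    # Hyperopt minimizes a loss; passing (best_score - score) to the
    # minimizer therefore maximizes the score. With best_score chosen as
    # the scorer's best achievable value, the loss stays >= 0 and 0 is best.
    return best_score - score

# For an accuracy-like scorer where 1.0 is perfect, a score of 0.9
# yields a small positive loss close to 0.1.
loss_from_score(0.9, best_score=1.0)

def evals_with_defaults(max_evals, frac_evals_with_defaults):
    # Fraction of trials that reuse default hyperparameter values; the
    # remaining (1 - frac) * max_evals trials search the full space.
    return int(max_evals * frac_evals_with_defaults)

evals_with_defaults(50, 0.2)  # 10 trials with defaults, 40 searching
```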