lale.lib.autogen.passive_aggressive_regressor module

class lale.lib.autogen.passive_aggressive_regressor.PassiveAggressiveRegressor(*, C=1.0, fit_intercept=True, max_iter=1000, tol=0.001, early_stopping=False, validation_fraction=0.1, n_iter_no_change=5, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False)

Bases: PlannedIndividualOp

Combined schema for expected data and hyperparameters.

This documentation is auto-generated from JSON schemas.

Parameters
  • C (float, not for optimizer, default 1.0) – Maximum step size (regularization)

  • fit_intercept (boolean, default True) – Whether the intercept should be estimated or not

  • max_iter (integer, >=10 for optimizer, <=1000 for optimizer, uniform distribution, default 1000) – The maximum number of passes over the training data (aka epochs)

  • tol (union type, default 0.001) –

    The stopping criterion

    • float, >=1e-08 for optimizer, <=0.01 for optimizer

    • or None

  • early_stopping (boolean, not for optimizer, default False) –

    Whether to use early stopping to terminate training when the validation score is not improving

    See also constraint-2.

  • validation_fraction (float, not for optimizer, default 0.1) –

    The proportion of training data to set aside as validation set for early stopping

    See also constraint-2.

  • n_iter_no_change (integer, not for optimizer, default 5) – Number of iterations with no improvement to wait before early stopping

  • shuffle (boolean, default True) – Whether or not the training data should be shuffled after each epoch.

  • verbose (integer, not for optimizer, default 0) – The verbosity level

  • loss (‘huber’, ‘squared_epsilon_insensitive’, ‘squared_loss’, or ‘epsilon_insensitive’, default ‘epsilon_insensitive’) – The loss function to be used: epsilon_insensitive is equivalent to PA-I in the reference paper; squared_epsilon_insensitive is equivalent to PA-II

  • epsilon (float, >=1e-08 for optimizer, <=1.35 for optimizer, loguniform distribution, default 0.1) – If the difference between the current prediction and the correct label is below this threshold, the model is not updated.

  • random_state (union type, not for optimizer, default None) –

    The seed of the pseudo random number generator to use when shuffling the data

    • integer

    • or numpy.random.RandomState

    • or None

  • warm_start (boolean, not for optimizer, default False) – When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution

  • average (union type, not for optimizer, default False) –

    When set to True, computes the averaged SGD weights and stores the result in the coef_ attribute

    • boolean

    • or integer
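To illustrate how C, epsilon, and the epsilon_insensitive loss interact, here is a simplified sketch of a single PA-I update step for regression. This is an illustrative re-derivation of the passive-aggressive rule, not lale's or scikit-learn's internal code; the function name `pa1_update` is made up for this example.

```python
# Simplified sketch of one PA-I update step for regression.
# Assumption: this mirrors the textbook passive-aggressive rule,
# not lale's internal implementation.

def pa1_update(w, x, y, C=1.0, epsilon=0.1):
    """Return updated weights after one passive-aggressive (PA-I) step."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, abs(y - pred) - epsilon)  # epsilon-insensitive loss
    if loss == 0.0:
        return w  # prediction within epsilon: stay passive, no update
    norm_sq = sum(xi * xi for xi in x)
    tau = min(C, loss / norm_sq)  # C caps the step size (the PA-I variant)
    sign = 1.0 if y > pred else -1.0
    return [wi + sign * tau * xi for wi, xi in zip(w, x)]

# One aggressive step on a single example moves the prediction to
# within epsilon of the target; after that the update stays passive.
w = pa1_update([0.0, 0.0], [1.0, 2.0], 5.0)
```

This is why C is documented as the "maximum step size": it bounds tau, the magnitude of each corrective update, while epsilon defines the band of errors that trigger no update at all.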

Notes

constraint-1 : any type (no restriction)

constraint-2 : union type

validation_fraction is only used if early_stopping is true; one of the following must hold:

  • validation_fraction is 0.1 (its default)

  • or early_stopping is True
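Since this operator wraps sklearn.linear_model.PassiveAggressiveRegressor, constraint-2 mirrors the underlying estimator's behavior: a non-default validation_fraction only takes effect when early_stopping is enabled. A minimal sketch, assuming scikit-learn and NumPy are installed:

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

X = np.arange(40, dtype=float).reshape(-1, 1)
y = 3.0 * X.ravel() + 1.0

# early_stopping=True makes validation_fraction meaningful; with the
# default early_stopping=False it would be silently ignored, which is
# what constraint-2 guards against.
model = PassiveAggressiveRegressor(
    early_stopping=True,
    validation_fraction=0.2,   # 20% of X held out as a validation set
    n_iter_no_change=3,
    random_state=0,
)
model.fit(X, y)
```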

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (array of items : array of items : float) – Training data

  • y (array of items : float) – Target values

  • coef_init (array, optional of items : float) – The initial coefficients to warm-start the optimization.

  • intercept_init (array, optional of items : float) – The initial intercept to warm-start the optimization.

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters

X (array of items : array of items : float) –

Returns

result – Predicted target values per element in X.

Return type

array of items : float
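Putting fit and predict together: the sketch below uses the underlying scikit-learn estimator directly (lale's operator delegates to it), assuming scikit-learn and NumPy are installed. X is a 2-d float array and y a 1-d float array, matching the schemas above.

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveRegressor

# Training data: 50 samples, 1 feature, exact linear target.
X = np.linspace(0, 10, 50).reshape(-1, 1)
y = 2.0 * X.ravel()

model = PassiveAggressiveRegressor(max_iter=1000, tol=1e-3, random_state=0)
model.fit(X, y)

preds = model.predict(X)  # one predicted float per row of X
```

In a lale pipeline the same operator would first be made trainable (e.g. by fixing its hyperparameters) before fit becomes available, as the notes above indicate.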