lale.lib.autogen.mlp_regressor module

class lale.lib.autogen.mlp_regressor.MLPRegressor(*, hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08, n_iter_no_change=10)

Bases: PlannedIndividualOp

Combined schema for expected data and hyperparameters.

This documentation is auto-generated from JSON schemas.
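For orientation, here is a minimal end-to-end sketch. This lale operator wraps scikit-learn's sklearn.neural_network.MLPRegressor and documents the same hyperparameters; the example below uses the underlying scikit-learn estimator directly, and assumes scikit-learn and numpy are installed. The synthetic data and parameter values are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # the estimator wrapped by this operator

# Tiny synthetic regression problem: y = 3*x0 - 2*x1 plus a little noise.
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.randn(200)

# Hyperparameter names and defaults match the signature documented above.
model = MLPRegressor(hidden_layer_sizes=(100,), activation='relu',
                     solver='adam', alpha=0.0001, max_iter=500,
                     random_state=0)
model.fit(X, y)

# For a single target, scikit-learn's predict returns a 1-D array.
preds = model.predict(X[:5])
print(preds.shape)  # (5,)
```

In lale itself, the same hyperparameters are passed to the `MLPRegressor(...)` constructor shown above, and the resulting operator exposes the `fit` and `predict` methods documented below.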

Parameters
  • hidden_layer_sizes (tuple, not for optimizer, default (100,)) – The ith element represents the number of neurons in the ith hidden layer.

  • activation (‘identity’, ‘logistic’, ‘tanh’, or ‘relu’, default ‘relu’) – Activation function for the hidden layer

  • solver (‘lbfgs’, ‘sgd’, or ‘adam’, default ‘adam’) –

    The solver for weight optimization

    See also constraint-1, constraint-2, constraint-3, constraint-4, constraint-5, constraint-7, constraint-9, constraint-10, constraint-11, constraint-12.

  • alpha (float, >=1e-10 for optimizer, <=1.0 for optimizer, loguniform distribution, default 0.0001) – L2 penalty (regularization term) parameter.

  • batch_size (union type, default 'auto') –

    Size of minibatches for stochastic optimizers

    • integer, >=3 for optimizer, <=128 for optimizer, uniform distribution

    • or ‘auto’

  • learning_rate (‘constant’, ‘invscaling’, or ‘adaptive’, default ‘constant’) –

    Learning rate schedule for weight updates

    See also constraint-1.

  • learning_rate_init (float, not for optimizer, default 0.001) –

    The initial learning rate used

    See also constraint-2.

  • power_t (float, not for optimizer, default 0.5) –

    The exponent for inverse scaling learning rate

    See also constraint-3.

  • max_iter (integer, >=10 for optimizer, <=1000 for optimizer, uniform distribution, default 200) – Maximum number of iterations

  • shuffle (boolean, default True) –

    Whether to shuffle samples in each iteration

    See also constraint-4.

  • random_state (union type, not for optimizer, default None) –

    If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.

    • integer

    • or numpy.random.RandomState

    • or None

  • tol (float, >=1e-08 for optimizer, <=0.01 for optimizer, default 0.0001) – Tolerance for the optimization

  • verbose (boolean, not for optimizer, default False) – Whether to print progress messages to stdout.

  • warm_start (boolean, not for optimizer, default False) – When set to True, reuse the solution of the previous call to fit as initialization; otherwise, just erase the previous solution.

  • momentum (float, not for optimizer, default 0.9) –

    Momentum for gradient descent update

    See also constraint-5.

  • nesterovs_momentum (boolean, default True) – Whether to use Nesterov’s momentum

  • early_stopping (boolean, not for optimizer, default False) –

    Whether to use early stopping to terminate training when validation score is not improving

    See also constraint-7, constraint-8.

  • validation_fraction (float, not for optimizer, default 0.1) –

    The proportion of training data to set aside as validation set for early stopping

    See also constraint-8.

  • beta_1 (float, not for optimizer, default 0.9) –

    Exponential decay rate for estimates of the first moment vector in adam; should be in [0, 1)

    See also constraint-9.

  • beta_2 (float, not for optimizer, default 0.999) –

    Exponential decay rate for estimates of the second moment vector in adam; should be in [0, 1)

    See also constraint-10.

  • epsilon (float, >=1e-08 for optimizer, <=1.35 for optimizer, loguniform distribution, default 1e-08) –

    Value for numerical stability in adam

    See also constraint-11.

  • n_iter_no_change (integer, not for optimizer, default 10) –

    Maximum number of consecutive epochs that may fail to improve by at least tol before training stops

    See also constraint-12.
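The "for optimizer" annotations above describe a hyperparameter search space: for example, alpha is drawn loguniform on [1e-10, 1.0], batch_size is either an integer drawn uniformly on [3, 128] or the literal 'auto', and parameters marked "not for optimizer" keep their defaults. A hypothetical, stdlib-only sketch of random sampling from such a space (not lale's actual search API):

```python
import math
import random

def sample_mlp_config(rng: random.Random) -> dict:
    """Draw one configuration from the optimizer ranges documented above."""
    return {
        # loguniform on [1e-10, 1.0]: sample uniformly in log space, then exponentiate
        "alpha": math.exp(rng.uniform(math.log(1e-10), math.log(1.0))),
        # union type: either an integer in [3, 128] or the literal 'auto'
        "batch_size": rng.choice(["auto", rng.randint(3, 128)]),
        # integer, uniform on [10, 1000]
        "max_iter": rng.randint(10, 1000),
        # categorical choice among the documented solver values
        "solver": rng.choice(["lbfgs", "sgd", "adam"]),
    }

config = sample_mlp_config(random.Random(42))
print(config)
```

In practice, lale derives such a space automatically from the JSON schemas, and the constraints in the Notes section below prune combinations where a sampled value would be ignored.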

Notes

constraint-1 : union type

learning_rate, only used when solver=’sgd’

  • learning_rate : ‘constant’

  • or solver : ‘sgd’

constraint-2 : union type

learning_rate_init, only used when solver=’sgd’ or ‘adam’

  • learning_rate_init : 0.001

  • or solver : ‘sgd’ or ‘adam’

constraint-3 : union type

power_t, only used when solver=’sgd’

  • power_t : 0.5

  • or solver : ‘sgd’

constraint-4 : union type

shuffle, only used when solver=’sgd’ or ‘adam’

  • shuffle : True

  • or solver : ‘sgd’ or ‘adam’

constraint-5 : union type

momentum, only used when solver=’sgd’

  • momentum : 0.9

  • or solver : ‘sgd’

constraint-7 : union type

early_stopping, only effective when solver=’sgd’ or ‘adam’

  • early_stopping : False

  • or solver : ‘sgd’ or ‘adam’

constraint-8 : union type

validation_fraction, only used if early_stopping is true

  • validation_fraction : 0.1

  • or early_stopping : True

constraint-9 : union type

beta_1, only used when solver=’adam’

  • beta_1 : 0.9

  • or solver : ‘adam’

constraint-10 : union type

beta_2, only used when solver=’adam’

  • beta_2 : 0.999

  • or solver : ‘adam’

constraint-11 : union type

epsilon, only used when solver=’adam’

  • epsilon : 1e-08

  • or solver : ‘adam’

constraint-12 : union type

n_iter_no_change, only effective when solver=’sgd’ or ‘adam’

  • n_iter_no_change : 10

  • or solver : ‘sgd’ or ‘adam’
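Each union-type constraint above is a JSON-schema anyOf: a hyperparameter combination is valid if the dependent parameter keeps its default or the enabling condition holds. A hypothetical, stdlib-only sketch of how constraint-5 (momentum is only used when solver='sgd') could be checked; lale performs this kind of validation from the schemas themselves:

```python
def satisfies_constraint_5(hyperparams: dict) -> bool:
    """constraint-5 as an anyOf: momentum stays at its default (0.9)
    OR solver is 'sgd' (defaults shown match the signature above)."""
    momentum = hyperparams.get("momentum", 0.9)
    solver = hyperparams.get("solver", "adam")
    return momentum == 0.9 or solver == "sgd"

print(satisfies_constraint_5({"solver": "adam"}))                    # True: momentum untouched
print(satisfies_constraint_5({"solver": "sgd", "momentum": 0.5}))    # True: sgd may tune momentum
print(satisfies_constraint_5({"solver": "adam", "momentum": 0.5}))   # False: momentum is ignored by adam
```

The remaining constraints follow the same pattern with different parameter/condition pairs, e.g. constraint-8 pairs validation_fraction with early_stopping.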

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (union type) –

    The input data.

    • array of items : Any

    • or array of items : array of items : float

  • y (union type) –

    The target values (real numbers).

    • array of items : float

    • or array of items : array of items : float

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters

X (array of items : array of items : float) – The input data.

Returns

result – The predicted values.

Return type

array of items : array of items : float
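The 2-D return schema above covers multi-output regression; for a single target, scikit-learn's underlying estimator returns a 1-D array instead. A sketch of the multi-output case, assuming scikit-learn and numpy are installed (data and parameter values are illustrative only):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # the estimator wrapped by this operator

# Two targets per sample: the row sum and a difference of features.
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 3))
Y = np.column_stack([X.sum(axis=1), X[:, 0] - X[:, 2]])

model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=300,
                     random_state=0).fit(X, Y)

# One row per input sample, one column per target.
print(model.predict(X[:4]).shape)  # (4, 2)
```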