lale.lib.sklearn.ridge module

class lale.lib.sklearn.ridge.Ridge(*, alpha=1.0, fit_intercept=True, copy_X=True, max_iter=None, tol=0.0001, solver='auto', random_state=None, positive=False)

Bases: PlannedIndividualOp

Ridge regression estimator from scikit-learn.

This documentation is auto-generated from JSON schemas.

Parameters
  • alpha (union type, default 1.0) –

    Regularization strength; larger values specify stronger regularization.

    • float, >0.0, >=1e-10 for optimizer, <=1.0 for optimizer, loguniform distribution, default 1.0

    • or array, not for optimizer of items : float, >0.0

      Penalties specific to the targets.

  • fit_intercept (boolean, default True) –

    Whether to calculate the intercept for this model.

    See also constraint-1.

  • copy_X (boolean, optional, default True) – If True, X will be copied; else, it may be overwritten.

  • max_iter (union type, optional, default None) –

    Maximum number of iterations for the conjugate gradient solver.

    • integer, >=1, >=10 for optimizer, <=1000 for optimizer

    • or None

  • tol (float, >=1e-08 for optimizer, <=0.01 for optimizer, optional, default 0.0001) – Precision of the solution.

  • solver (‘auto’, ‘svd’, ‘cholesky’, ‘lsqr’, ‘sparse_cg’, ‘sag’, ‘saga’, or ‘lbfgs’, default ‘auto’) –

    Solver to use in the computational routines:

    • ‘auto’ chooses the solver automatically based on the type of data.

    • ‘svd’ uses a Singular Value Decomposition of X to compute the Ridge coefficients. More stable for singular matrices than ‘cholesky’.

    • ‘cholesky’ uses the standard scipy.linalg.solve function to obtain a closed-form solution.

    • ‘sparse_cg’ uses the conjugate gradient solver as found in scipy.sparse.linalg.cg. As an iterative algorithm, this solver is more appropriate than ‘cholesky’ for large-scale data (possibility to set tol and max_iter).

    • ‘lsqr’ uses the dedicated regularized least-squares routine scipy.sparse.linalg.lsqr. It is the fastest and uses an iterative procedure.

    • ‘sag’ uses a Stochastic Average Gradient descent, and ‘saga’ uses its improved, unbiased version named SAGA. Both methods also use an iterative procedure, and are often faster than other solvers when both n_samples and n_features are large. Note that the fast convergence of ‘sag’ and ‘saga’ is only guaranteed on features with approximately the same scale. You can preprocess the data with a scaler from sklearn.preprocessing.

    • ‘lbfgs’ uses the L-BFGS-B algorithm implemented in scipy.optimize.minimize. It can be used only when positive is True.

    The last six solvers (all except ‘auto’ and ‘svd’) support both dense and sparse data. However, only ‘sag’, ‘sparse_cg’, and ‘lbfgs’ support sparse input when fit_intercept is True.

    See also constraint-1, constraint-2, constraint-3, constraint-4.

  • random_state (union type, optional, not for optimizer, default None) –

    The seed of the pseudo random number generator to use when shuffling the data.

    • integer

    • or numpy.random.RandomState

    • or None

  • positive (boolean, optional, not for optimizer, default False) –

    When set to True, forces the coefficients to be positive. Only the ‘lbfgs’ solver is supported in this case.

    See also constraint-3, constraint-4.
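
The following is a minimal usage sketch. Binding hyperparameters to the class yields a trainable operator whose arguments are validated against the schemas above; the synthetic dataset from sklearn.datasets.make_regression is only for illustration.

    from sklearn.datasets import make_regression
    from lale.lib.sklearn import Ridge

    X, y = make_regression(n_samples=100, n_features=5, random_state=0)

    # Bind a subset of hyperparameters; unbound ones keep their defaults.
    ridge = Ridge(alpha=0.5, solver='cholesky')
    trained = ridge.fit(X, y)
    print(trained.predict(X[:3]))

The ranges annotated “for optimizer” above (for example, the loguniform distribution on alpha) describe the search space that lale's automated tools, such as its Hyperopt wrapper, explore when a hyperparameter is left free.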

Notes

constraint-1 : union type

Solvers ‘svd’, ‘lsqr’, ‘cholesky’, and ‘saga’ do not support fitting the intercept on sparse data. Set the solver to ‘auto’, ‘sparse_cg’, or ‘sag’, or set fit_intercept=False.

  • negated type of ‘X/isSparse’

  • or fit_intercept : False

  • or solver : ‘auto’, ‘sparse_cg’, or ‘sag’

constraint-2 : union type

SVD solver does not support sparse inputs currently.

  • negated type of ‘X/isSparse’

  • or solver : negated type of ‘svd’

constraint-3 : union type

Only the ‘lbfgs’ solver is supported when positive is True; ‘auto’ also works, since it resolves to ‘lbfgs’ in this case.

  • positive : False

  • or solver : ‘lbfgs’ or ‘auto’

constraint-4 : union type

The ‘lbfgs’ solver can be used only when positive=True.

  • positive : True

  • or solver : negated type of ‘lbfgs’
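
As a concrete illustration, the sketch below binds a combination that violates constraint-4. Purely hyperparameter-level constraints such as constraint-3 and constraint-4 can be checked when the hyperparameters are bound, whereas data-dependent constraints such as constraint-1 and constraint-2 (which refer to ‘X/isSparse’) can only be checked at fit time. The exact exception type shown is an assumption based on lale's jsonschema-based validation.

    import jsonschema
    from lale.lib.sklearn import Ridge

    try:
        # Violates constraint-4: 'lbfgs' requires positive=True.
        Ridge(solver='lbfgs', positive=False)
    except jsonschema.ValidationError as err:
        print('hyperparameter combination rejected:', err.message)

    # Satisfies constraint-3 and constraint-4.
    ok = Ridge(solver='lbfgs', positive=True)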

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (array of items : array of items : float) – Training data

  • y (union type) –

    Target values

    • array of items : array of items : float

    • or array of items : float

  • sample_weight (union type, optional) –

    Individual weights for each sample

    • float

    • or array of items : float

    • or None
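
A short sketch of the call above, using arbitrary synthetic data; sample_weight is passed through fit_params as documented.

    import numpy as np
    from lale.lib.sklearn import Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    y = X @ np.array([1.0, -2.0, 0.5])

    # Uniform per-sample weights, matching the sample_weight schema above.
    trained = Ridge(alpha=1.0).fit(X, y, sample_weight=np.ones(20))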

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters

X (union type, optional) –

Samples.

  • array of items : float

  • or array of items : array of items : float

Returns

result – Returns predicted values.

  • array of items : float

  • or array of items : array of items : float

Return type

union type
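
To illustrate the union-typed result, this sketch fits a two-target regression, for which predict returns an array of arrays of float.

    import numpy as np
    from lale.lib.sklearn import Ridge

    X = np.arange(12.0).reshape(6, 2)
    Y = np.column_stack([X.sum(axis=1), X[:, 0] - X[:, 1]])  # two targets

    trained = Ridge(alpha=0.1).fit(X, Y)
    preds = trained.predict(X)
    print(preds.shape)  # (6, 2); a single-target y would give shape (6,)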