lale.lib.sklearn.linear_svr module

class lale.lib.sklearn.linear_svr.LinearSVR(*, epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000)

Bases: PlannedIndividualOp

LinearSVR from scikit-learn.

This documentation is auto-generated from JSON schemas.

Parameters
  • epsilon (float, >=1e-08 for optimizer, <=1.35 for optimizer, loguniform distribution, default 0.0) – Epsilon parameter in the epsilon-insensitive loss function. Note that the value of this parameter depends on the scale of the target variable y. If unsure, set epsilon=0.

  • tol (float, >=1e-08 for optimizer, <=0.01 for optimizer, default 0.0001) – Tolerance for stopping criteria.

  • C (float, not for optimizer, default 1.0) – Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.

  • loss (‘squared_epsilon_insensitive’ or ‘epsilon_insensitive’, default ‘epsilon_insensitive’) –

    Specifies the loss function.

    The epsilon-insensitive loss (standard SVR) is the L1 loss, while the squared epsilon-insensitive loss (‘squared_epsilon_insensitive’) is the L2 loss.

    See also constraint-1.

  • fit_intercept (boolean, default True) – Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).

  • intercept_scaling (float, not for optimizer, default 1.0) – When self.fit_intercept is True, the instance vector x becomes [x, self.intercept_scaling], i.e. a “synthetic” feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight. Note: the synthetic feature weight is subject to l1/l2 regularization like all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased.

  • dual (union type, default True) –

    Select the algorithm to either solve the dual or primal optimization problem.

    • boolean

      Prefer dual=False when n_samples > n_features.

    • or ‘auto’

      Choose the value of the parameter automatically, based on the values of n_samples, n_features, loss, multi_class, and penalty. If n_samples < n_features and the optimizer supports the chosen loss, multi_class, and penalty, then dual will be set to True; otherwise it will be set to False.

    See also constraint-1.

  • verbose (integer, not for optimizer, default 0) – Enable verbose output. Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.

  • random_state (union type, not for optimizer, default None) –

    Seed of pseudo-random number generator.

    • numpy.random.RandomState

    • or None

      RandomState used by np.random.

    • or integer

      Explicit seed.

  • max_iter (integer, >=10 for optimizer, <=1000 for optimizer, uniform distribution, default 1000) – The maximum number of iterations to be run.
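The hyperparameters above map directly onto the underlying estimator. As a minimal sketch (using scikit-learn's sklearn.svm.LinearSVR directly so the snippet is self-contained; the Lale operator wraps this estimator):

```python
from sklearn.svm import LinearSVR
from sklearn.datasets import make_regression

# Toy regression problem: 100 samples, 4 features.
X, y = make_regression(n_samples=100, n_features=4, noise=0.1, random_state=0)

# Hyperparameters mirror the schema above: epsilon, tol, C, loss, max_iter.
model = LinearSVR(epsilon=0.0, tol=1e-4, C=1.0,
                  loss='epsilon_insensitive', max_iter=1000,
                  random_state=0)
model.fit(X, y)
print(model.coef_.shape)  # one learned weight per feature
```

When used through Lale, the same keyword arguments are passed to the operator's constructor, and the schema constraints above are checked before fitting.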

Notes

constraint-1 : union type

loss=‘epsilon_insensitive’ is not supported when dual=False.

  • loss : ‘squared_epsilon_insensitive’

  • or dual : True
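This constraint can be observed on the underlying scikit-learn estimator, which (in recent scikit-learn versions) validates the combination at fit time and raises a ValueError:

```python
from sklearn.svm import LinearSVR
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=50, n_features=3, random_state=0)

# Valid: the epsilon_insensitive loss requires dual=True (constraint-1).
ok = LinearSVR(loss='epsilon_insensitive', dual=True, random_state=0).fit(X, y)

# Invalid: this combination violates constraint-1.
try:
    LinearSVR(loss='epsilon_insensitive', dual=False, random_state=0).fit(X, y)
    violated = False
except ValueError:
    violated = True
print(violated)  # the invalid combination was rejected
```

Lale enforces the same constraint through its JSON schemas, so an invalid combination is rejected before the underlying estimator is even constructed.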

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (array of items : array of items : float) – Training vector, where n_samples is the number of samples and n_features is the number of features.

  • y (array of items : float) – Target vector relative to X.

  • sample_weight (union type, optional, default None) –

    Array of weights that are assigned to individual samples

    • array of items : float

    • or None
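A minimal fit call matching this signature, sketched on the underlying scikit-learn estimator (sample_weight is the optional per-sample weight array described above):

```python
import numpy as np
from sklearn.svm import LinearSVR

# X: one row per sample, one column per feature; y: one float target per sample.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.1, 1.1, 1.9, 3.2])

# Upweight the last two samples relative to the first two.
weights = np.array([1.0, 1.0, 2.0, 2.0])

model = LinearSVR(epsilon=0.0, random_state=0, max_iter=10000)
model.fit(X, y, sample_weight=weights)
```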

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters

X (array) –

The outer array is over samples aka rows.

  • items : array of items : float

    The inner array is over features aka columns.

Returns

result – Returns predicted values.

Return type

array of items : float
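End to end, predict returns one float per row of X. A sketch, again using scikit-learn's estimator directly:

```python
import numpy as np
from sklearn.svm import LinearSVR

# Train on a simple linear relationship, y = 2x.
X_train = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y_train = 2.0 * X_train.ravel()

model = LinearSVR(epsilon=0.0, random_state=0, max_iter=10000).fit(X_train, y_train)

# Outer array over samples (rows), inner array over features (columns).
X_test = np.array([[1.5], [2.5]])
result = model.predict(X_test)
print(result.shape)  # one predicted float per test sample
```

In a Lale pipeline, predict is available only on the trained operator returned by fit, per the note above.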