lale.lib.sklearn.svr module

class lale.lib.sklearn.svr.SVR(*, kernel='rbf', degree=3, gamma='scale', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200.0, verbose=False, max_iter=-1)

Bases: PlannedIndividualOp

Support Vector Regression from scikit-learn.

This documentation is auto-generated from JSON schemas.
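
For orientation, here is a minimal sketch of binding hyperparameters to this operator; the chosen values (kernel, C, epsilon) are illustrative, not recommendations. In lale, calling the operator with hyperparameters returns a trainable operator, while leaving them unbound keeps it planned for an optimizer.

    from lale.lib.sklearn import SVR

    # Bind a few hyperparameters; anything left unspecified keeps the
    # defaults documented below.
    trainable = SVR(kernel="rbf", C=10.0, epsilon=0.05)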

Parameters
  • kernel (union type, default 'rbf') –

    Specifies the kernel type to be used in the algorithm.

    • ‘precomputed’, not for optimizer

    • or ‘linear’, ‘poly’, ‘rbf’, or ‘sigmoid’

    • or callable, not for optimizer

    See also constraint-1.

  • degree (integer, >=0, >=2 for optimizer, <=5 for optimizer, default 3) – Degree of the polynomial kernel function (‘poly’).

  • gamma (union type, default 'scale') –

    Kernel coefficient for ‘rbf’, ‘poly’ and ‘sigmoid’.

    • ‘scale’ or ‘auto’

    • or float, >0.0, >=3.0517578125e-05 for optimizer, <=8 for optimizer, loguniform distribution

  • coef0 (float, >=-1 for optimizer, <=1 for optimizer, not for optimizer, default 0.0) – Independent term in kernel function.

  • tol (float, >0.0, <=0.01 for optimizer, default 0.001) – Tolerance for stopping criteria.

  • C (float, >0.0, >=0.03125 for optimizer, <=32768 for optimizer, loguniform distribution, default 1.0) – Penalty parameter C of the error term.

  • epsilon (float, >=0.0, >=1e-05 for optimizer, <=10000.0 for optimizer, not for optimizer, default 0.1) – Epsilon in the epsilon-SVR model. It specifies the epsilon-tube within which the training loss function assigns no penalty to points predicted within a distance epsilon of the actual value.

  • shrinking (boolean, default True) – Whether to use the shrinking heuristic.

  • cache_size (float, >=0, <=1000 for optimizer, not for optimizer, default 200.0) – Specify the size of the kernel cache (in MB).

  • verbose (boolean, not for optimizer, default False) – Enable verbose output.

  • max_iter (integer, >=1 for optimizer, <=1000 for optimizer, not for optimizer, default -1) – Hard limit on iterations within solver, or -1 for no limit.

Notes

constraint-1 : union type

Sparse precomputed kernels are not supported.

  • negated type of ‘X/isSparse’

  • or kernel : negated type of ‘precomputed’
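
The “for optimizer” annotations above define the search space that lale hands to an automated tuner when the operator is left planned. A hedged sketch of such a search, assuming the Hyperopt optimizer from lale.lib.lale is available; the dataset, cv, max_evals, and scoring values are illustrative assumptions.

    from sklearn.datasets import make_regression
    from lale.lib.sklearn import SVR
    from lale.lib.lale import Hyperopt

    X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

    # Leaving hyperparameters unbound keeps SVR planned, so the optimizer
    # samples kernel, C, gamma, etc. within the schema bounds listed above.
    trained = SVR.auto_configure(X, y, optimizer=Hyperopt, cv=3, max_evals=10, scoring="r2")
    print(trained.hyperparams())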

fit(X, y=None, **fit_params)

Train the operator.

Note: The fit method is not available until this operator is trainable.

Once this method is available, it will have the following signature:

Parameters
  • X (array) –

    The outer array is over samples aka rows.

    • items : array of items : float

      The inner array is over features aka columns.

  • y (array of items : float) –
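
A minimal sketch of calling fit with data matching this schema (a 2-D float array of features and a 1-D float array of targets); the toy arrays are illustrative.

    import numpy as np
    from lale.lib.sklearn import SVR

    X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 2.0]])  # rows are samples, columns are features
    y = np.array([0.5, 1.5, 3.0])                        # one float target per sample

    trainable = SVR(kernel="linear")
    trained = trainable.fit(X, y)  # returns a trained operator on which predict is available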

predict(X, **predict_params)

Make predictions.

Note: The predict method is not available until this operator is trained.

Once this method is available, it will have the following signature:

Parameters

X (array, optional) –

The outer array is over samples aka rows.

  • items : array of items : float

    The inner array is over features aka columns.

Returns

result – The predicted values.

Return type

array of items : float
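
A self-contained sketch of predict; the training and query points are illustrative. The result is an array with one predicted float per input row.

    import numpy as np
    from lale.lib.sklearn import SVR

    X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
    y_train = np.array([0.1, 0.9, 2.1, 2.9])
    trained = SVR(kernel="linear", C=1.0).fit(X_train, y_train)

    X_test = np.array([[1.5], [2.5]])
    print(trained.predict(X_test))  # one predicted float per query row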