lale.lib.sklearn.gradient_boosting_regressor module¶
- class lale.lib.sklearn.gradient_boosting_regressor.GradientBoostingRegressor(*, loss='squared_error', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, validation_fraction=0.1, n_iter_no_change=None, tol=0.0001, ccp_alpha=0.0)¶
Bases:
PlannedIndividualOp
Gradient boosting regressor from scikit-learn.
This documentation is auto-generated from JSON schemas.
- Parameters
loss (union type, optional, default ‘squared_error’ of ‘squared_error’, ‘absolute_error’, ‘huber’, or ‘quantile’) – Loss function to be optimized. ‘squared_error’ refers to the squared error for regression. ‘absolute_error’ refers to the absolute error of regression and is a robust loss function. ‘huber’ is a combination of the two. ‘quantile’ allows quantile regression (use alpha to specify the quantile).
learning_rate (float, >=0.01 for optimizer, <=1.0 for optimizer, loguniform distribution, optional, not for optimizer, default 0.1) – Learning rate shrinks the contribution of each tree by learning_rate.
n_estimators (integer, >=1, >=10 for optimizer, <=100 for optimizer, uniform distribution, optional, default 100) – The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting, so a large number usually results in better performance.
subsample (float, >0.0, >=0.01 for optimizer, <=1.0, <=1.0 for optimizer, uniform distribution, optional, not for optimizer, default 1.0) – The fraction of samples to be used for fitting the individual base learners. If smaller than 1.0, this results in stochastic gradient boosting.
criterion (union type, optional, not for optimizer, default ‘friedman_mse’ of ‘squared_error’ or ‘friedman_mse’) – Function to measure the quality of a split.
min_samples_split (union type, optional, default 2) –
The minimum number of samples required to split an internal node:
integer, >=2, uniform distribution, not for optimizer
or float, >0.0, >=0.01 for optimizer, <=1.0, <=0.5 for optimizer, default 0.05
min_samples_leaf (union type, optional, default 1) –
The minimum number of samples required to be at a leaf node.
integer, >=1, not for optimizer
or float, >0.0, >=0.01 for optimizer, <=0.5, default 0.05
min_weight_fraction_leaf (float, >=0.0, <=0.5, optional, not for optimizer, default 0.0) – The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_depth (integer, >=3 for optimizer, <=5 for optimizer, optional, default 3) – Maximum depth of the individual regression estimators.
min_impurity_decrease (float, >=0.0, <=10.0 for optimizer, optional, not for optimizer, default 0.0) – A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
init (union type, not for optimizer, default None) –
An estimator object that is used to compute the initial predictions.
operator
or ‘zero’ or None
random_state (union type, optional, not for optimizer, default None) –
If int, random_state is the seed used by the random number generator; if a RandomState instance, it is used as the generator; if None, the generator is the RandomState instance used by np.random.
integer
or numpy.random.RandomState
or None
max_features (union type, optional, default None) –
The number of features to consider when looking for the best split.
integer, >=2, <=’X/items/maxItems’, not for optimizer
Consider max_features features at each split.
or float, >0.0, >=0.01 for optimizer, <1.0, uniform distribution, default 0.5
or ‘auto’, ‘sqrt’, ‘log2’, or None
alpha (float, >=1e-10 for optimizer, <=0.9999999999 for optimizer, loguniform distribution, optional, default 0.9) – The alpha-quantile of the huber loss function and the quantile loss function. Only relevant if loss='huber' or loss='quantile'.
verbose (integer, optional, not for optimizer, default 0) – Enable verbose output. If 1, it prints progress and performance once in a while.
max_leaf_nodes (union type, optional, not for optimizer, default None) –
Grow trees with max_leaf_nodes in best-first fashion.
integer, >=1, >=3 for optimizer, <=1000 for optimizer
or None
Unlimited number of leaf nodes.
warm_start (boolean, optional, not for optimizer, default False) – When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble.
validation_fraction (float, >=0.0, <=1.0, optional, not for optimizer, default 0.1) – The proportion of training data to set aside as validation set for early stopping.
n_iter_no_change (union type, optional, not for optimizer, default None) –
n_iter_no_change is used to decide if early stopping will be used to terminate training when the validation score is not improving.
integer, >=5 for optimizer, <=10 for optimizer
or None
tol (float, >=1e-08 for optimizer, <=0.01 for optimizer, optional, not for optimizer, default 0.0001) – Tolerance for the early stopping. When the loss is not improving by at least tol for n_iter_no_change iterations (if set to a number), the training stops.
ccp_alpha (float, >=0.0, <=0.1 for optimizer, optional, not for optimizer, default 0.0) – Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed.
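Since this lale operator wraps scikit-learn's sklearn.ensemble.GradientBoostingRegressor, the hyperparameters above map directly onto the underlying estimator. A minimal sketch of quantile regression via the loss and alpha parameters, shown here against the scikit-learn estimator directly (assuming numpy and scikit-learn are installed; with lale, the same keyword arguments go to the `from lale.lib.sklearn import GradientBoostingRegressor` constructor):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy 1-D regression data with noise.
rng = np.random.RandomState(0)
X = rng.uniform(0.0, 10.0, size=(200, 1))
y = 2.0 * X.ravel() + rng.normal(scale=1.0, size=200)

# loss='quantile' with alpha=0.9 fits the 90th-percentile regression curve;
# the remaining hyperparameters keep their documented defaults.
model = GradientBoostingRegressor(
    loss="quantile", alpha=0.9, n_estimators=100, max_depth=3, random_state=0
)
model.fit(X, y)
upper = model.predict(X)

# Most training targets should fall below the 0.9-quantile fit.
print(np.mean(y <= upper))
```

The same constructor call with loss="squared_error" (the default) recovers ordinary least-squares-style boosting.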
- fit(X, y=None, **fit_params)¶
Train the operator.
Note: The fit method is not available until this operator is trainable.
Once this method is available, it will have the following signature:
- Parameters
X (array of items : array of items : float) – The input samples. Internally, it will be converted to dtype=np.float32.
y (array of items : float) – Target values (strings or integers in classification, real numbers in regression).
sample_weight (union type, optional, default None) –
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
array of items : float
or None
monitor (union type, optional, default None) –
The monitor is called after each iteration with the current iteration number, a reference to the estimator, and the local variables of _fit_stages as keyword arguments: callable(i, self, locals()). If the callable returns True, the fitting procedure is stopped.
callable
or None
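The monitor argument follows the scikit-learn callable(i, self, locals()) convention described above, where returning True halts the boosting loop. A hedged sketch, again using the underlying scikit-learn estimator directly (the name early_stop_monitor is illustrative, not part of any API):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.RandomState(0)
X = rng.uniform(size=(100, 2))
y = X[:, 0] + X[:, 1]

def early_stop_monitor(i, est, locals_):
    # Called after each boosting stage; returning True stops training.
    # Here we stop once ten stages have been fit (i is zero-based).
    return i >= 9

model = GradientBoostingRegressor(n_estimators=500, random_state=0)
model.fit(X, y, monitor=early_stop_monitor)

# estimators_ holds only the stages actually fit, far fewer than 500.
print(model.estimators_.shape[0])
```

A monitor like this can also compute held-out estimates or snapshot the model at each stage, since it receives the estimator itself.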
- predict(X, **predict_params)¶
Make predictions.
Note: The predict method is not available until this operator is trained.
Once this method is available, it will have the following signature:
- Parameters
X (array, optional of items : array of items : float) – The input samples. Internally, it will be converted to dtype=np.float32.
- Returns
result – The predicted values.
- Return type
array of items : float
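Putting fit and predict together, a minimal end-to-end sketch (again via the underlying scikit-learn estimator, assuming numpy and scikit-learn are installed):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic regression problem: a linear signal with small noise.
rng = np.random.RandomState(42)
X = rng.uniform(size=(300, 3))
y = 3.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.05, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GradientBoostingRegressor(
    n_estimators=100, learning_rate=0.1, random_state=42
)
model.fit(X_train, y_train)

# predict returns a 1-D float array, one value per test row.
result = model.predict(X_test)
print(result.shape)
```

With lale, the operator becomes trainable/trained in the same fit-then-predict sequence, so this usage carries over unchanged.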