lale.lib.sklearn.extra_trees_regressor module¶
- class lale.lib.sklearn.extra_trees_regressor.ExtraTreesRegressor(*, n_estimators=100, criterion='squared_error', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, bootstrap=False, oob_score=False, n_jobs=None, random_state=None, verbose=0, warm_start=False, ccp_alpha=0.0, max_samples=None, monotonic_cst=None)¶
Bases:
PlannedIndividualOp
Extra-trees regressor (an extremely randomized forest) from scikit-learn.
This documentation is auto-generated from JSON schemas.
- Parameters
n_estimators (integer, >=1, >=10 for optimizer, <=100 for optimizer, default 100) – The number of trees in the forest.
criterion (union type, default 'squared_error') –
The function to measure the quality of a split. Supported criteria are “squared_error” for the mean squared error, which is equal to variance reduction as feature selection criterion, and “absolute_error” for the mean absolute error.
‘squared_error’ or ‘absolute_error’
or ‘mae’ or ‘mse’, not for optimizer
max_depth (union type, default None) –
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
integer, >=3 for optimizer, <=5 for optimizer
or None
min_samples_split (union type, default 2) –
The minimum number of samples required to split an internal node:
integer, >=2, <=’X/maxItems’
or float, >0.0, >=0.01 for optimizer, <=1.0, <=0.5 for optimizer, default 0.05
min_samples_leaf (union type, default 1) –
The minimum number of samples required to be at a leaf node.
integer, >=1, <=’X/maxItems’, not for optimizer
or float, >0.0, <=0.5, default 0.05
min_weight_fraction_leaf (float, >=0.0, <=0.5, optional, not for optimizer, default 0.0) – The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features (union type, default None) –
The number of features to consider when looking for the best split.
integer, not for optimizer
or float, >0.0, >=0.01 for optimizer, <=1.0 for optimizer, uniform distribution, default 0.5
or ‘sqrt’, ‘log2’, or None
max_leaf_nodes (union type, optional, not for optimizer, default None) –
Grow trees with max_leaf_nodes in best-first fashion.
integer, >=1, >=3 for optimizer, <=1000 for optimizer
or None
Unlimited number of leaf nodes.
min_impurity_decrease (float, >=0.0, <=10.0 for optimizer, optional, not for optimizer, default 0.0) – A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
bootstrap (boolean, default False) –
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
See also constraint-2.
oob_score (union type, optional, not for optimizer, default False) –
Whether to use out-of-bag samples to estimate the generalization score.
callable, not for optimizer
A callable with signature metric(y_true, y_pred).
or boolean
See also constraint-2.
n_jobs (union type, optional, not for optimizer, default None) –
The number of jobs to run in parallel for both fit and predict.
integer
or None
random_state (union type, optional, not for optimizer, default None) –
If int, random_state is the seed used by the random number generator.
integer
or numpy.random.RandomState
or None
verbose (integer, optional, not for optimizer, default 0) – Controls the verbosity when fitting and predicting.
warm_start (boolean, optional, not for optimizer, default False) – When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble, otherwise, just fit a whole new forest.
ccp_alpha (float, >=0.0, <=0.1 for optimizer, optional, not for optimizer, default 0.0) – Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed.
max_samples (union type, optional, not for optimizer, default None) –
If bootstrap is True, the number of samples to draw from X to train each base estimator.
None
Draw X.shape[0] samples.
or integer, >=1
Draw max_samples samples.
or float, >0.0, <1.0
Draw max_samples * X.shape[0] samples.
monotonic_cst (union type, optional, not for optimizer, default None) –
Indicates the monotonicity constraint to enforce on each feature. Monotonicity constraints are not supported for: multioutput regressions (i.e. when n_outputs > 1),
regressions trained on data with missing values.
array of items : -1, 0, or 1
array-like of int of shape (n_features)
or None
No constraints are applied.
Notes
constraint-1 : negated type of ‘y/isSparse’
This regressor does not support sparse labels.
constraint-2 : union type
Out of bag estimation only available if bootstrap=True
bootstrap : True
or oob_score : False
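Constraint-2 can be sketched with the underlying scikit-learn estimator: out-of-bag scoring requires bootstrap sampling, and scikit-learn rejects the invalid combination at fit time.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

X, y = make_regression(n_samples=200, n_features=4, random_state=0)

# Valid: bootstrap=True makes out-of-bag samples available.
ok = ExtraTreesRegressor(
    n_estimators=30, bootstrap=True, oob_score=True, random_state=0
).fit(X, y)
print(round(ok.oob_score_, 2))  # R^2 estimated on out-of-bag samples

# Invalid: oob_score=True with bootstrap=False violates constraint-2.
try:
    ExtraTreesRegressor(bootstrap=False, oob_score=True).fit(X, y)
    rejected = False
except ValueError as err:
    rejected = True
    print("rejected:", err)
```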
- fit(X, y=None, **fit_params)¶
Train the operator.
Note: The fit method is not available until this operator is trainable.
Once this method is available, it will have the following signature:
- Parameters
X (array of items : array of items : float) – The training input samples. Internally, its dtype will be converted to dtype=np.float32.
y (array of items : float) – The target values (class labels in classification, real numbers in regression).
sample_weight (union type, optional) –
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node.
array of items : float
or None
- predict(X, **predict_params)¶
Make predictions.
Note: The predict method is not available until this operator is trained.
Once this method is available, it will have the following signature:
- Parameters
X (array, optional of items : array of items : float) – The input samples. Internally, its dtype will be converted to dtype=np.float32.
- Returns
result – The predicted values.
- Return type
array of items : float
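An end-to-end fit/predict sketch covering the optional sample_weight from the fit schema, again using the wrapped scikit-learn estimator; the weight values and data shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor

X, y = make_regression(n_samples=120, n_features=5, random_state=1)
w = np.ones(len(y))  # equal weights: equivalent to sample_weight=None
w[:10] = 5.0         # up-weight the first ten samples for illustration

model = ExtraTreesRegressor(n_estimators=25, random_state=1)
model.fit(X, y, sample_weight=w)
result = model.predict(X)  # array of floats, one prediction per row
print(result.shape)  # (120,)
```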