lale.lib.autogen.fast_ica module¶
- class lale.lib.autogen.fast_ica.FastICA(*, n_components=None, algorithm='parallel', whiten='arbitrary-variance', fun='logcosh', fun_args=None, max_iter=200, tol=0.0001, w_init=None, random_state=None, whiten_solver='svd')¶
Bases: PlannedIndividualOp
Combined schema for expected data and hyperparameters.
This documentation is auto-generated from JSON schemas.
- Parameters
n_components (union type, default None) –
Number of components to use
integer, >=2 for optimizer, <=256 for optimizer, uniform distribution
or None
algorithm ('parallel' or 'deflation', default 'parallel') – Which FastICA algorithm to apply: 'parallel' estimates all components jointly, 'deflation' estimates them one at a time.
whiten (union type, default 'arbitrary-variance') –
Specify the whitening strategy to use.
False
The data is already considered to be whitened, and no whitening is performed.
or 'arbitrary-variance'
Whitening with arbitrary variance is used.
or 'unit-variance'
The whitening matrix is rescaled to ensure that each recovered source has unit variance.
fun ('cube', 'exp', or 'logcosh', default 'logcosh') – The functional form of the G function used in the approximation to neg-entropy.
fun_args (None, not for optimizer, default None) – Arguments to send to the functional form
max_iter (integer, >=1, >=10 for optimizer, <=1000 for optimizer, uniform distribution, default 200) – Maximum number of iterations during fit.
tol (float, >=1e-08 for optimizer, <=0.01 for optimizer, default 0.0001) – Tolerance on update at each iteration.
w_init (None, not for optimizer, default None) – The mixing matrix to be used to initialize the algorithm.
random_state (union type, not for optimizer, default None) –
If int, random_state is the seed used by the random number generator; If RandomState instance, random_state is the random number generator; If None, the random number generator is the RandomState instance used by np.random.
integer
or numpy.random.RandomState
or None
whiten_solver (union type, optional, not for optimizer, default 'svd') –
The solver to use for whitening.
'eigh'
Generally more memory-efficient when n_samples >= n_features, and can be faster when n_samples >= 50 * n_features.
or 'svd'
More stable numerically if the problem is degenerate, and often faster when n_samples <= n_features.
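As a quick illustration of the hyperparameters above, the sketch below configures the scikit-learn estimator that this autogen operator wraps (sklearn.decomposition.FastICA); the specific values chosen are illustrative, not recommendations:

```python
from sklearn.decomposition import FastICA

# Minimal sketch, assuming the underlying scikit-learn estimator;
# hyperparameter values are illustrative only.
ica = FastICA(
    n_components=2,        # number of independent components to recover
    algorithm="parallel",  # estimate all components jointly
    fun="logcosh",         # G function for the neg-entropy approximation
    max_iter=200,          # cap on iterations during fit
    tol=1e-4,              # tolerance on the update at each iteration
    random_state=0,        # seed for reproducibility
)
```

In a lale pipeline the same keyword arguments are passed when the operator is instantiated, and unset hyperparameters are left free for the optimizer to search.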
- fit(X, y=None, **fit_params)¶
Train the operator.
Note: The fit method is not available until this operator is trainable.
Once this method is available, it will have the following signature:
- Parameters
X (array of items : array of items : float) – Training data, where n_samples is the number of samples and n_features is the number of features.
y (any type) –
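For example, fitting can be sketched on synthetic mixed signals using the underlying scikit-learn estimator (the sources and mixing matrix below are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import FastICA  # estimator underlying the lale wrapper

rng = np.random.RandomState(0)
# Invented setup: two independent, non-Gaussian sources mixed linearly
# into X of shape (n_samples, n_features).
sources = np.c_[np.sin(np.linspace(0, 8, 500)), rng.laplace(size=500)]
mixing = np.array([[1.0, 0.5], [0.5, 2.0]])
X = sources @ mixing.T

ica = FastICA(n_components=2, random_state=0, max_iter=500)
trained = ica.fit(X)  # y is ignored; fit estimates the unmixing matrix
```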
- transform(X, y=None)¶
Transform the data.
Note: The transform method is not available until this operator is trained.
Once this method is available, it will have the following signature:
- Parameters
X (array of items : array of items : float) – Data to transform, where n_samples is the number of samples and n_features is the number of features.
y (Any, optional) –
copy (Any, optional) – If False, data passed to fit are overwritten
- Returns
result – Recover the sources from X (apply the unmixing matrix).
- Return type
array of items : array of items : float
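The transform step can be sketched as follows, again using the underlying scikit-learn estimator with an invented mixing setup:

```python
import numpy as np
from sklearn.decomposition import FastICA  # estimator underlying the lale wrapper

rng = np.random.RandomState(0)
# Invented setup: three Laplace-distributed sources, linearly mixed.
S_true = rng.laplace(size=(100, 3))
A = np.array([[1.0, 0.5, 0.2], [0.2, 1.0, 0.5], [0.5, 0.2, 1.0]])
X = S_true @ A.T

ica = FastICA(n_components=3, random_state=0, max_iter=1000)
ica.fit(X)
S = ica.transform(X)  # apply the unmixing matrix to recover the sources
```

After fit, transform applies the learned unmixing matrix, so the result S has one column per recovered component, i.e. shape (n_samples, n_components).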