The preprocessing module provides the StandardScaler utility class, which is a quick and easy way to perform the following operation on an array-like dataset:
>>> from sklearn import preprocessing
>>> import numpy as np
>>> X_train = np.array([[ 1., -1.,  2.],
...                     [ 2.,  0.,  0.],
...                     [ 0.,  1., -1.]])
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> scaler
StandardScaler()
>>> scaler.mean_
array([1. ..., 0. ..., 0.33...])
>>> scaler.scale_
array([0.81..., 0.81..., 1.24...])
>>> X_scaled = scaler.transform(X_train)
>>> X_scaled
array([[ 0.  ..., -1.22...,  1.33...],
       [ 1.22...,  0.  ..., -0.26...],
       [-1.22...,  1.22..., -1.06...]])
Scaled data has zero mean and unit variance:
>>> X_scaled.mean(axis=0)
array([0., 0., 0.])
>>> X_scaled.std(axis=0)
array([1., 1., 1.])
This class implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later re-apply the same transformation on the testing set.
>>> from sklearn.datasets import make_classification
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> X, y = make_classification(random_state=42)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
>>> pipe = make_pipeline(StandardScaler(), LogisticRegression())
>>> pipe.fit(X_train, y_train)  # apply scaling on training data
Pipeline(steps=[('standardscaler', StandardScaler()),
                ('logisticregression', LogisticRegression())])
>>> pipe.score(X_test, y_test)  # apply scaling on testing data, without leaking training data
0.96
It is possible to disable either centering or scaling by passing with_mean=False or with_std=False to the constructor of StandardScaler.
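For instance, a minimal sketch reusing X_train from the example above, with scaling disabled so that only centering is applied (the array repr is abbreviated with ellipses, as in the examples above):

>>> centerer = preprocessing.StandardScaler(with_std=False).fit(X_train)
>>> print(centerer.scale_)
None
>>> centerer.transform(X_train)
array([[ 0. ..., -1. ...,  1.66...],
       [ 1. ...,  0. ..., -0.33...],
       [-1. ...,  1. ..., -1.33...]])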
API:
class sklearn.preprocessing.StandardScaler(*, copy=True, with_mean=True, with_std=True)
Standardize features by removing the mean and scaling to unit variance.

The standard score of a sample x is calculated as:

z = (x - u) / s

where u is the mean of the training samples or zero if with_mean=False, and s is the standard deviation of the training samples or one if with_std=False.
Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using transform.
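To make the relation between transform and the stored statistics concrete, here is a minimal sketch (the data is made up for illustration) that recomputes the standard scores by hand from mean_ and scale_:

>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> X = np.array([[1., 2.], [3., 4.], [5., 6.]])   # illustrative data
>>> scaler = StandardScaler().fit(X)
>>> z_manual = (X - scaler.mean_) / scaler.scale_  # z = (x - u) / s, per feature
>>> np.allclose(z_manual, scaler.transform(X))
True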
Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not more or less look like standard normally distributed data (e.g. Gaussian with 0 mean and unit variance).
For instance many elements used in the objective function of a learning algorithm (such as the RBF kernel of Support Vector Machines or the L1 and L2 regularizers of linear models) assume that all features are centered around 0 and have variance in the same order. If a feature has a variance that is orders of magnitude larger than others, it might dominate the objective function and make the estimator unable to learn from other features correctly as expected.
This scaler can also be applied to sparse CSR or CSC matrices by passing with_mean=False to avoid breaking the sparsity structure of the data.
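A minimal sketch (the sparse data here is made up for illustration) showing that the sparse format is preserved when centering is turned off:

>>> import scipy.sparse as sp
>>> from sklearn.preprocessing import StandardScaler
>>> X_sparse = sp.csr_matrix([[1., 0., 2.],
...                           [0., 0., 3.],
...                           [4., 5., 6.]])
>>> scaler = StandardScaler(with_mean=False).fit(X_sparse)
>>> sp.issparse(scaler.transform(X_sparse))
True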
Read more in the User Guide.
Parameters

- copy : bool, default=True
  If False, try to avoid a copy and do inplace scaling instead. This is not guaranteed to always work inplace; e.g. if the data is not a NumPy array or scipy.sparse CSR matrix, a copy may still be returned.
- with_mean : bool, default=True
  If True, center the data before scaling. This does not work (and will raise an exception) when attempted on sparse matrices, because centering them entails building a dense matrix which in common use cases is likely to be too large to fit in memory.
- with_std : bool, default=True
  If True, scale the data to unit variance (or equivalently, unit standard deviation).
Attributes

- scale_ : ndarray of shape (n_features,) or None
  Per feature relative scaling of the data to achieve zero mean and unit variance. Generally this is calculated using np.sqrt(var_). If a variance is zero, we can't achieve unit variance, and the data is left as-is, giving a scaling factor of 1. scale_ is equal to None when with_std=False.
  New in version 0.17: scale_
- mean_ : ndarray of shape (n_features,) or None
  The mean value for each feature in the training set. Equal to None when with_mean=False.
- var_ : ndarray of shape (n_features,) or None
  The variance for each feature in the training set. Used to compute scale_. Equal to None when with_std=False.
- n_samples_seen_ : int or ndarray of shape (n_features,)
  The number of samples processed by the estimator for each feature. If there are no missing samples, n_samples_seen_ will be an integer, otherwise it will be an array of dtype int. If sample_weights are used it will be a float (if no missing data) or an array of dtype float that sums the weights seen so far. Will be reset on new calls to fit, but increments across partial_fit calls (see the sketch after this list).
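The following sketch (the mini-batches are made up for illustration) shows how these attributes behave across partial_fit calls:

>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> X_batch1 = np.array([[0., 1.], [2., 3.]])   # first mini-batch
>>> X_batch2 = np.array([[4., 5.], [6., 7.]])   # second mini-batch
>>> scaler = StandardScaler().partial_fit(X_batch1)
>>> print(scaler.n_samples_seen_)
2
>>> scaler = scaler.partial_fit(X_batch2)       # statistics are updated online
>>> print(scaler.n_samples_seen_)
4
>>> np.allclose(scaler.mean_, np.vstack([X_batch1, X_batch2]).mean(axis=0))
True
>>> np.allclose(scaler.scale_, np.sqrt(scaler.var_))   # scale_ is sqrt(var_) for nonzero variances
True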
Methods
- fit(X[, y, sample_weight]) : Compute the mean and std to be used for later scaling.
- fit_transform(X[, y]) : Fit to data, then transform it.
- get_params([deep]) : Get parameters for this estimator.
- inverse_transform(X[, copy]) : Scale back the data to the original representation.
- partial_fit(X[, y, sample_weight]) : Online computation of mean and std on X for later scaling.
- set_params(**params) : Set the parameters of this estimator.
- transform(X[, copy]) : Perform standardization by centering and scaling.
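For instance, a minimal sketch (with made-up data) showing that inverse_transform undoes transform:

>>> import numpy as np
>>> from sklearn.preprocessing import StandardScaler
>>> X = np.array([[0., 10.], [1., 20.], [2., 30.]])   # illustrative data
>>> scaler = StandardScaler().fit(X)
>>> X_scaled = scaler.transform(X)
>>> np.allclose(scaler.inverse_transform(X_scaled), X)
True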
Examples
>>> from sklearn.preprocessing import StandardScaler
>>> data = [[0, 0], [0, 0], [1, 1], [1, 1]]
>>> scaler = StandardScaler()
>>> print(scaler.fit(data))
StandardScaler()
>>> print(scaler.mean_)
[0.5 0.5]
>>> print(scaler.transform(data))
[[-1. -1.]
 [-1. -1.]
 [ 1.  1.]
 [ 1.  1.]]
>>> print(scaler.transform([[2, 2]]))
[[3. 3.]]