API Reference — scikit-learn 0.19.2 documentation

API Reference
This is the class and function reference of scikit-learn. Please refer to the full user guide for further details, as the raw class and function specifications alone may not give complete guidance on their use.
Base classes
Functions
User guide: See the Probability calibration section for further details.
calibration.CalibratedClassifierCV([…]) Probability calibration with isotonic regression or sigmoid.
calibration.calibration_curve(y_true, y_prob) Compute true and predicted probabilities for a calibration curve.
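For example, a minimal sketch of probability calibration (toy data; the base estimator and parameters are illustrative choices, not a recommendation):

```python
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

# toy classification data
X, y = make_classification(n_samples=500, random_state=0)

# wrap a base classifier with sigmoid (Platt) calibration
clf = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
clf.fit(X, y)
prob_pos = clf.predict_proba(X)[:, 1]

# fraction of positives vs. mean predicted probability per bin
frac_pos, mean_pred = calibration_curve(y, prob_pos, n_bins=5)
```

Plotting `frac_pos` against `mean_pred` gives the reliability diagram; a well-calibrated classifier tracks the diagonal.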
sklearn.cluster: Clustering
Classes
Functions
sklearn.cluster.bicluster: Biclustering
Classes
The sklearn.covariance module includes methods and algorithms to robustly estimate the covariance of features given a set of points. The precision matrix, defined as the inverse of the covariance, is also estimated.
Covariance estimation is closely related to the theory of Gaussian Graphical Models.
User guide: See the Covariance estimation section for further details.
covariance.EmpiricalCovariance([…]) Maximum likelihood covariance estimator.
User guide: See the Cross decomposition section for further details.
cross_decomposition.CCA([n_components, …]) Canonical Correlation Analysis (CCA).
cross_decomposition.PLSCanonical([…]) PLSCanonical implements the 2-block canonical PLS of the original Wold algorithm [Tenenhaus 1998] p. 204, referred to as PLS-C2A in [Wegelin 2000].
cross_decomposition.PLSRegression([…]) PLS regression
cross_decomposition.PLSSVD([n_components, …]) Partial Least Square SVD
sklearn.datasets: Datasets
The sklearn.datasets module includes utilities to load datasets, including methods to load and fetch popular
reference datasets. It also features some artificial data generators.
User guide: See the Dataset loading utilities section for further details.
Loaders
datasets.fetch_lfw_people([data_home, …]) Loader for the Labeled Faces in the Wild (LFW) people dataset.
datasets.fetch_mldata(dataname[, …]) Fetch an mldata.org data set
datasets.fetch_olivetti_faces([data_home, …]) Loader for the Olivetti faces data-set from AT&T.
datasets.fetch_rcv1([data_home, subset, …]) Load the RCV1 multilabel dataset, downloading it if necessary.
datasets.fetch_species_distributions([…]) Loader for species distribution dataset from Phillips et al. (2006).
datasets.get_data_home([data_home]) Return the path of the scikit-learn data dir.
datasets.load_boston([return_X_y]) Load and return the Boston house-prices dataset (regression).
datasets.load_breast_cancer([return_X_y]) Load and return the breast cancer Wisconsin dataset (classification).
datasets.load_diabetes([return_X_y]) Load and return the diabetes dataset (regression).
datasets.load_digits([n_class, return_X_y]) Load and return the digits dataset (classification).
datasets.load_files(container_path[, …]) Load text files with categories as subfolder names.
datasets.load_iris([return_X_y]) Load and return the iris dataset (classification).
datasets.load_linnerud([return_X_y]) Load and return the linnerud dataset (multivariate regression).
datasets.load_mlcomp(name_or_id[, set_, …]) DEPRECATED: since the http://mlcomp.org/ website will shut down in March 2017, the load_mlcomp function was deprecated in version 0.19 and will be removed in 0.21.
datasets.load_sample_image(image_name) Load the numpy array of a single sample image
datasets.load_sample_images() Load sample images for image manipulation.
datasets.load_svmlight_file(f[, n_features, …]) Load datasets in the svmlight / libsvm format into sparse CSR matrix.
datasets.load_svmlight_files(files[, …]) Load dataset from multiple files in SVMlight format.
datasets.load_wine([return_X_y]) Load and return the wine dataset (classification).
datasets.mldata_filename(dataname) Convert a raw name for a data set into an mldata.org filename.
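The bundled loaders share a common interface; for example, `return_X_y=True` returns the (data, target) pair directly instead of a Bunch object:

```python
from sklearn.datasets import load_iris

# iris: 150 samples, 4 features, 3 classes
X, y = load_iris(return_X_y=True)
```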
Samples generator
sklearn.decomposition: Matrix Decomposition
The sklearn.decomposition module includes matrix decomposition algorithms, including among others PCA,
NMF or ICA. Most of the algorithms of this module can be regarded as dimensionality reduction techniques.
User guide: See the Decomposing signals in components (matrix factorization problems) section for further
details.
decomposition.DictionaryLearning([…]) Dictionary learning
decomposition.FactorAnalysis([n_components, …]) Factor Analysis (FA)
decomposition.FastICA([n_components, …]) FastICA: a fast algorithm for Independent Component Analysis.
decomposition.IncrementalPCA([n_components, …]) Incremental principal components analysis (IPCA).
decomposition.KernelPCA([n_components, …]) Kernel Principal component analysis (KPCA)
decomposition.LatentDirichletAllocation([…]) Latent Dirichlet Allocation with online variational Bayes algorithm.
decomposition.MiniBatchDictionaryLearning([…]) Mini-batch dictionary learning
decomposition.MiniBatchSparsePCA([…]) Mini-batch Sparse Principal Components Analysis
decomposition.NMF([n_components, init, …]) Non-Negative Matrix Factorization (NMF)
decomposition.PCA([n_components, copy, …]) Principal component analysis (PCA)
decomposition.SparsePCA([n_components, …]) Sparse Principal Components Analysis (SparsePCA)
decomposition.SparseCoder(dictionary[, …]) Sparse coding
decomposition.TruncatedSVD([n_components, …]) Dimensionality reduction using truncated SVD (aka LSA).
decomposition.dict_learning(X, n_components, …) Solves a dictionary learning matrix factorization problem.
decomposition.dict_learning_online(X[, …]) Solves a dictionary learning matrix factorization problem online.
decomposition.fastica(X[, n_components, …]) Perform Fast Independent Component Analysis.
decomposition.sparse_encode(X, dictionary[, …]) Sparse coding
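Most of these estimators follow the same fit/transform pattern; a minimal PCA sketch on random data (purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.randn(100, 5)

# project onto the 2 directions of largest variance
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)
```

`pca.explained_variance_ratio_` reports the fraction of total variance captured by each retained component.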
User guide: See the Linear and Quadratic Discriminant Analysis section for further details.
discriminant_analysis.LinearDiscriminantAnalysis([…]) Linear Discriminant Analysis
discriminant_analysis.QuadraticDiscriminantAnalysis([…]) Quadratic Discriminant Analysis
User guide: See the Model evaluation: quantifying the quality of predictions section for further details.
dummy.DummyClassifier([strategy, …]) DummyClassifier is a classifier that makes predictions using simple rules.
dummy.DummyRegressor([strategy, constant, …]) DummyRegressor is a regressor that makes predictions using simple rules.
sklearn.ensemble: Ensemble Methods
The sklearn.ensemble module includes ensemble-based methods for classification, regression and anomaly
detection.
User guide: See the Ensemble methods section for further details.
ensemble.AdaBoostClassifier([…]) An AdaBoost classifier.
ensemble.AdaBoostRegressor([base_estimator, …]) An AdaBoost regressor.
ensemble.BaggingClassifier([base_estimator, …]) A Bagging classifier.
ensemble.BaggingRegressor([base_estimator, …]) A Bagging regressor.
ensemble.ExtraTreesClassifier([…]) An extra-trees classifier.
ensemble.ExtraTreesRegressor([n_estimators, …]) An extra-trees regressor.
ensemble.GradientBoostingClassifier([loss, …]) Gradient Boosting for classification.
ensemble.GradientBoostingRegressor([loss, …]) Gradient Boosting for regression.
ensemble.IsolationForest([n_estimators, …]) Isolation Forest Algorithm
ensemble.RandomForestClassifier([…]) A random forest classifier.
ensemble.RandomForestRegressor([…]) A random forest regressor.
ensemble.RandomTreesEmbedding([…]) An ensemble of totally random trees.
ensemble.VotingClassifier(estimators[, …]) Soft Voting/Majority Rule classifier for unfitted estimators.
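The ensemble estimators share the usual fit/predict interface; a minimal random forest sketch on the iris data (n_estimators chosen arbitrarily):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# 50 trees; random_state fixed for reproducibility
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
```

After fitting, `clf.feature_importances_` gives one impurity-based importance per input feature.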
Partial dependence
The sklearn.exceptions module includes all custom warnings and error classes used across scikit-learn.
exceptions.ChangedBehaviorWarning Warning class used to notify the user of any change in the behavior.
exceptions.ConvergenceWarning Custom warning to capture convergence problems.
exceptions.DataConversionWarning Warning used to notify implicit data conversions happening in the code.
exceptions.DataDimensionalityWarning Custom warning to notify potential issues with data dimensionality.
exceptions.EfficiencyWarning Warning used to notify the user of inefficient computation.
exceptions.FitFailedWarning Warning class used if there is an error while fitting the estimator.
exceptions.NotFittedError Exception class to raise if estimator is used before fitting.
The sklearn.feature_extraction module deals with feature extraction from raw data. It currently includes
methods to extract features from text and images.
User guide: See the Feature extraction section for further details.
feature_extraction.DictVectorizer([dtype, …]) Transforms lists of feature-value mappings to vectors.
feature_extraction.FeatureHasher([…]) Implements feature hashing, aka the hashing trick.
From images
From text
The sklearn.feature_extraction.text submodule gathers utilities to build feature vectors from text
documents.
feature_extraction.text.CountVectorizer([…]) Convert a collection of text documents to a matrix of token counts.
feature_extraction.text.HashingVectorizer([…]) Convert a collection of text documents to a matrix of token occurrences.
feature_extraction.text.TfidfTransformer([…]) Transform a count matrix to a normalized tf or tf-idf representation.
feature_extraction.text.TfidfVectorizer([…]) Convert a collection of raw documents to a matrix of TF-IDF features.
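A minimal TF-IDF sketch (toy corpus; default tokenization and parameters):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat", "the dog sat", "the cat ran"]

# learns the vocabulary and returns a sparse document-term matrix
vec = TfidfVectorizer()
X = vec.fit_transform(docs)
```

Here the learned vocabulary has five terms (cat, dog, ran, sat, the), so `X` is a 3x5 sparse matrix with one row per document.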
User guide: See the Feature selection section for further details.
feature_selection.GenericUnivariateSelect([…]) Univariate feature selector with configurable strategy.
feature_selection.SelectPercentile([…]) Select features according to a percentile of the highest scores.
feature_selection.SelectKBest([score_func, k]) Select features according to the k highest scores.
feature_selection.SelectFpr([score_func, alpha]) Filter: Select the p-values below alpha based on an FPR test.
feature_selection.SelectFdr([score_func, alpha]) Filter: Select the p-values for an estimated false discovery rate.
feature_selection.SelectFromModel(estimator) Meta-transformer for selecting features based on importance weights.
feature_selection.SelectFwe([score_func, alpha]) Filter: Select the p-values corresponding to Family-wise error rate.
feature_selection.RFE(estimator[, …]) Feature ranking with recursive feature elimination.
feature_selection.RFECV(estimator[, step, …]) Feature ranking with recursive feature elimination and cross-validated selection of the best number of features.
feature_selection.VarianceThreshold([threshold]) Feature selector that removes all low-variance features.
feature_selection.chi2(X, y) Compute chi-squared stats between each non-negative feature and class.
feature_selection.f_classif(X, y) Compute the ANOVA F-value for the provided sample.
feature_selection.f_regression(X, y[, center]) Univariate linear regression tests.
feature_selection.mutual_info_classif(X, y) Estimate mutual information for a discrete target variable.
feature_selection.mutual_info_regression(X, y) Estimate mutual information for a continuous target variable.
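The univariate selectors combine a score function with a selection rule; a minimal sketch using chi-squared scores on iris:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)

# keep the 2 features with the highest chi-squared scores
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
```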
The sklearn.gaussian_process module implements Gaussian Process based regression and classification.
User guide: See the Gaussian Processes section for further details.
gaussian_process.GaussianProcessClassifier([…]) Gaussian process classification (GPC) based on Laplace approximation.
gaussian_process.GaussianProcessRegressor([…]) Gaussian process regression (GPR).
Kernels:
gaussian_process.kernels.CompoundKernel(kernels) Kernel which is composed of a set of other kernels.
gaussian_process.kernels.ConstantKernel([…]) Constant kernel.
gaussian_process.kernels.DotProduct([…]) Dot-Product kernel.
gaussian_process.kernels.ExpSineSquared([…]) Exp-Sine-Squared kernel.
gaussian_process.kernels.Exponentiation(…) Exponentiate kernel by given exponent.
gaussian_process.kernels.Hyperparameter A kernel hyperparameter’s specification in form of a namedtuple.
gaussian_process.kernels.Kernel Base class for all kernels.
gaussian_process.kernels.Matern([…]) Matern kernel.
gaussian_process.kernels.PairwiseKernel([…]) Wrapper for kernels in sklearn.metrics.pairwise.
gaussian_process.kernels.Product(k1, k2) Product-kernel k1 * k2 of two kernels k1 and k2.
gaussian_process.kernels.RBF([length_scale, …]) Radial-basis function kernel (aka squared-exponential kernel).
gaussian_process.kernels.RationalQuadratic([…]) Rational Quadratic kernel.
gaussian_process.kernels.Sum(k1, k2) Sum-kernel k1 + k2 of two kernels k1 and k2.
gaussian_process.kernels.WhiteKernel([…]) White kernel.
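Kernels compose with `+` and `*` (the Sum and Product kernels above); a minimal GP regression sketch on toy data (kernel choice and noise level are illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

X = np.linspace(0, 5, 20)[:, None]
y = np.sin(X).ravel()

# constant * RBF models the signal; WhiteKernel models observation noise
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, random_state=0).fit(X, y)

# return_std=True also returns the pointwise predictive standard deviation
y_mean, y_std = gpr.predict(X, return_std=True)
```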
User guide: See the Isotonic regression section for further details.
isotonic.IsotonicRegression([y_min, y_max, …]) Isotonic regression model.
isotonic.check_increasing(x, y) Determine whether y is monotonically correlated with x.
isotonic.isotonic_regression(y[, …]) Solve the isotonic regression model:
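A minimal sketch: isotonic regression fits the closest non-decreasing sequence to the targets (toy values):

```python
from sklearn.isotonic import IsotonicRegression

X = [1, 2, 3, 4, 5]
y = [1, 3, 2, 5, 4]  # not monotone

# fit_transform returns the monotone (non-decreasing) fit to y
iso = IsotonicRegression()
y_fit = iso.fit_transform(X, y)
```

Adjacent violators are pooled, so the decreases at positions 3 and 5 are averaged away.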
User guide: See the Kernel Approximation section for further details.
kernel_approximation.AdditiveChi2Sampler([…]) Approximate feature map for additive chi2 kernel.
kernel_approximation.Nystroem([kernel, …]) Approximate a kernel map using a subset of the training data.
kernel_approximation.RBFSampler([gamma, …]) Approximates feature map of an RBF kernel by Monte Carlo approximation of its Fourier transform.
kernel_approximation.SkewedChi2Sampler([…]) Approximates feature map of the “skewed chi-squared” kernel by Monte Carlo approximation of its Fourier transform.
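These transformers produce explicit feature maps whose inner products approximate the kernel; a minimal Nystroem sketch (gamma and n_components are arbitrary illustrative values):

```python
from sklearn.datasets import load_iris
from sklearn.kernel_approximation import Nystroem

X, y = load_iris(return_X_y=True)

# map into a 20-dimensional approximate RBF feature space
feature_map = Nystroem(kernel="rbf", gamma=0.2, n_components=20,
                       random_state=0)
X_features = feature_map.fit_transform(X)
```

A linear model trained on `X_features` then approximates the corresponding kernel machine at much lower cost.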
User guide: See the Kernel ridge regression section for further details.
kernel_ridge.KernelRidge([alpha, kernel, …]) Kernel ridge regression.
The sklearn.linear_model module implements generalized linear models. It includes Ridge regression,
Bayesian Regression, Lasso and Elastic Net estimators computed with Least Angle Regression and
coordinate descent. It also implements Stochastic Gradient Descent related algorithms.
User guide: See the Generalized Linear Models section for further details.
linear_model.ARDRegression([n_iter, tol, …]) Bayesian ARD regression.
linear_model.BayesianRidge([n_iter, tol, …]) Bayesian ridge regression
linear_model.ElasticNet([alpha, l1_ratio, …]) Linear regression with combined L1 and L2 priors as regularizer.
linear_model.ElasticNetCV([l1_ratio, eps, …]) Elastic Net model with iterative fitting along a regularization path.
linear_model.HuberRegressor([epsilon, …]) Linear regression model that is robust to outliers.
linear_model.Lars([fit_intercept, verbose, …]) Least Angle Regression model, a.k.a. LAR.
linear_model.LarsCV([fit_intercept, …]) Cross-validated Least Angle Regression model
linear_model.Lasso([alpha, fit_intercept, …]) Linear Model trained with L1 prior as regularizer (aka the Lasso).
linear_model.LassoCV([eps, n_alphas, …]) Lasso linear model with iterative fitting along a regularization path.
linear_model.LassoLars([alpha, …]) Lasso model fit with Least Angle Regression, a.k.a. Lars.
linear_model.LassoLarsCV([fit_intercept, …]) Cross-validated Lasso, using the LARS algorithm.
linear_model.LassoLarsIC([criterion, …]) Lasso model fit with Lars using BIC or AIC for model selection.
linear_model.LinearRegression([…]) Ordinary least squares Linear Regression.
linear_model.LogisticRegression([penalty, …]) Logistic Regression (aka logit, MaxEnt) classifier.
linear_model.LogisticRegressionCV([Cs, …]) Logistic Regression CV (aka logit, MaxEnt) classifier.
linear_model.MultiTaskLasso([alpha, …]) Multi-task Lasso model trained with L1/L2 mixed-norm as regularizer.
linear_model.MultiTaskElasticNet([alpha, …]) Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer.
linear_model.MultiTaskLassoCV([eps, …]) Multi-task L1/L2 Lasso with built-in cross-validation.
linear_model.MultiTaskElasticNetCV([…]) Multi-task L1/L2 ElasticNet with built-in cross-validation.
linear_model.OrthogonalMatchingPursuit([…]) Orthogonal Matching Pursuit model (OMP).
linear_model.OrthogonalMatchingPursuitCV([…]) Cross-validated Orthogonal Matching Pursuit model (OMP).
linear_model.PassiveAggressiveClassifier([…]) Passive Aggressive Classifier
linear_model.PassiveAggressiveRegressor([C, …]) Passive Aggressive Regressor
linear_model.Perceptron([penalty, alpha, …]) Perceptron classifier.
linear_model.RANSACRegressor([…]) RANSAC (RANdom SAmple Consensus) algorithm.
linear_model.Ridge([alpha, fit_intercept, …]) Linear least squares with l2 regularization.
linear_model.RidgeClassifier([alpha, …]) Classifier using Ridge regression.
linear_model.RidgeClassifierCV([alphas, …]) Ridge classifier with built-in cross-validation.
linear_model.RidgeCV([alphas, …]) Ridge regression with built-in cross-validation.
linear_model.SGDClassifier([loss, penalty, …]) Linear classifiers (SVM, logistic regression, a.o.) with SGD training.
linear_model.SGDRegressor([loss, penalty, …]) Linear model fitted by minimizing a regularized empirical loss with SGD.
linear_model.TheilSenRegressor([…]) Theil-Sen Estimator: robust multivariate regression model.
linear_model.enet_path(X, y[, l1_ratio, …]) Compute elastic net path with coordinate descent.
linear_model.lars_path(X, y[, Xy, Gram, …]) Compute Least Angle Regression or Lasso path using LARS algorithm [1].
linear_model.lasso_path(X, y[, eps, …]) Compute Lasso path with coordinate descent.
linear_model.lasso_stability_path(X, y[, …]) DEPRECATED: The function lasso_stability_path is deprecated in 0.19 and will be removed in 0.21.
linear_model.logistic_regression_path(X, y) Compute a Logistic Regression model for a list of regularization parameters.
linear_model.orthogonal_mp(X, y[, …]) Orthogonal Matching Pursuit (OMP).
linear_model.orthogonal_mp_gram(Gram, Xy[, …]) Gram Orthogonal Matching Pursuit (OMP).
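A minimal sketch of the L1 prior at work: with a sparse ground truth, the Lasso drives irrelevant coefficients to (near) zero (toy data; alpha chosen arbitrarily):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
# only the first two features carry signal
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.randn(50)

lasso = Lasso(alpha=0.1).fit(X, y)
```

Inspecting `lasso.coef_` shows large weights on the first two features and (near-)zero weights elsewhere.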
User guide: See the Manifold learning section for further details.
manifold.Isomap([n_neighbors, n_components, …]) Isomap Embedding
manifold.LocallyLinearEmbedding([…]) Locally Linear Embedding
manifold.MDS([n_components, metric, n_init, …]) Multidimensional scaling
sklearn.metrics: Metrics
The sklearn.metrics module includes score functions, performance metrics and pairwise metrics and distance computations.
See the Model evaluation: quantifying the quality of predictions section and the Pairwise metrics, Affinities and Kernels section of the user guide for further details.
See the The scoring parameter: defining model evaluation rules section of the user guide for further details.
metrics.get_scorer(scoring) Get a scorer from string
metrics.make_scorer(score_func[, …]) Make a scorer from a performance metric or loss function.
Classification metrics
See the Classification metrics section of the user guide for further details.
metrics.accuracy_score(y_true, y_pred[, …]) Accuracy classification score.
metrics.auc(x, y[, reorder]) Compute Area Under the Curve (AUC) using the trapezoidal rule.
metrics.average_precision_score(y_true, y_score) Compute average precision (AP) from prediction scores.
metrics.brier_score_loss(y_true, y_prob[, …]) Compute the Brier score.
metrics.classification_report(y_true, y_pred) Build a text report showing the main classification metrics.
metrics.cohen_kappa_score(y1, y2[, labels, …]) Cohen’s kappa: a statistic that measures inter-annotator agreement.
metrics.confusion_matrix(y_true, y_pred[, …]) Compute confusion matrix to evaluate the accuracy of a classification.
metrics.f1_score(y_true, y_pred[, labels, …]) Compute the F1 score, also known as balanced F-score or F-measure.
metrics.fbeta_score(y_true, y_pred, beta[, …]) Compute the F-beta score.
metrics.hamming_loss(y_true, y_pred[, …]) Compute the average Hamming loss.
metrics.hinge_loss(y_true, pred_decision[, …]) Average hinge loss (non-regularized).
metrics.jaccard_similarity_score(y_true, y_pred) Jaccard similarity coefficient score.
metrics.log_loss(y_true, y_pred[, eps, …]) Log loss, aka logistic loss or cross-entropy loss.
metrics.matthews_corrcoef(y_true, y_pred[, …]) Compute the Matthews correlation coefficient (MCC).
metrics.precision_recall_curve(y_true, …) Compute precision-recall pairs for different probability thresholds.
metrics.precision_recall_fscore_support(…) Compute precision, recall, F-measure and support for each class.
metrics.precision_score(y_true, y_pred[, …]) Compute the precision.
metrics.recall_score(y_true, y_pred[, …]) Compute the recall.
metrics.roc_auc_score(y_true, y_score[, …]) Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from prediction scores.
metrics.roc_curve(y_true, y_score[, …]) Compute Receiver operating characteristic (ROC).
metrics.zero_one_loss(y_true, y_pred[, …]) Zero-one classification loss.
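A minimal worked example of the common classification metrics (tiny hand-made labels):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)   # fraction of exact matches: 5/6
cm = confusion_matrix(y_true, y_pred)  # rows: true class, cols: predicted
f1 = f1_score(y_true, y_pred)          # harmonic mean of precision/recall
```

Here precision is 3/3 and recall is 3/4, so F1 = 2 · 1 · 0.75 / 1.75 = 6/7.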
Regression metrics
See the Regression metrics section of the user guide for further details.
metrics.explained_variance_score(y_true, y_pred) Explained variance regression score function.
metrics.mean_absolute_error(y_true, y_pred) Mean absolute error regression loss.
metrics.mean_squared_error(y_true, y_pred[, …]) Mean squared error regression loss.
metrics.mean_squared_log_error(y_true, y_pred) Mean squared logarithmic error regression loss.
metrics.median_absolute_error(y_true, y_pred) Median absolute error regression loss.
metrics.r2_score(y_true, y_pred[, …]) R^2 (coefficient of determination) regression score function.
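A minimal worked example of the regression metrics (tiny hand-made values):

```python
from sklearn.metrics import mean_squared_error, r2_score

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

# mean of squared residuals: (0.25 + 0.25 + 0 + 1) / 4 = 0.375
mse = mean_squared_error(y_true, y_pred)

# 1 - SS_res / SS_tot; 1.0 is a perfect fit
r2 = r2_score(y_true, y_pred)
```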
See the Multilabel ranking metrics section of the user guide for further details.
metrics.coverage_error(y_true, y_score[, …]) Coverage error measure
metrics.label_ranking_average_precision_score(…) Compute ranking-based average precision
metrics.label_ranking_loss(y_true, y_score) Compute Ranking loss measure
Clustering metrics
See the Clustering performance evaluation section of the user guide for further details.
The sklearn.metrics.cluster submodule contains evaluation metrics for cluster analysis results. There are
two forms of evaluation:
supervised, which uses ground-truth class values for each sample.
unsupervised, which does not, and measures the ‘quality’ of the model itself.
Biclustering metrics
See the Biclustering evaluation section of the user guide for further details.
metrics.consensus_score(a, b[, similarity]) The similarity of two sets of biclusters.
Pairwise metrics
See the Pairwise metrics, Affinities and Kernels section of the user guide for further details.
metrics.pairwise.additive_chi2_kernel(X[, Y]) Computes the additive chi-squared kernel between observations in X and Y.
metrics.pairwise.chi2_kernel(X[, Y, gamma]) Computes the exponential chi-squared kernel between X and Y.
metrics.pairwise.cosine_similarity(X[, Y, …]) Compute cosine similarity between samples in X and Y.
metrics.pairwise.cosine_distances(X[, Y]) Compute cosine distance between samples in X and Y.
metrics.pairwise.distance_metrics() Valid metrics for pairwise_distances.
metrics.pairwise.euclidean_distances(X[, Y, …]) Considering the rows of X (and Y=X) as vectors, compute the distance matrix between each pair of vectors.
metrics.pairwise.kernel_metrics() Valid metrics for pairwise_kernels
metrics.pairwise.laplacian_kernel(X[, Y, gamma]) Compute the laplacian kernel between X and Y.
metrics.pairwise.linear_kernel(X[, Y]) Compute the linear kernel between X and Y.
metrics.pairwise.manhattan_distances(X[, Y, …]) Compute the L1 distances between the vectors in X and Y.
metrics.pairwise.pairwise_distances(X[, Y, …]) Compute the distance matrix from a vector array X and optional Y.
metrics.pairwise.pairwise_kernels(X[, Y, …]) Compute the kernel between arrays X and optional array Y.
metrics.pairwise.polynomial_kernel(X[, Y, …]) Compute the polynomial kernel between X and Y.
metrics.pairwise.rbf_kernel(X[, Y, gamma]) Compute the rbf (gaussian) kernel between X and Y.
metrics.pairwise.sigmoid_kernel(X[, Y, …]) Compute the sigmoid kernel between X and Y.
metrics.pairwise.paired_euclidean_distances(X, Y) Computes the paired euclidean distances between X and Y.
metrics.pairwise.paired_manhattan_distances(X, Y) Compute the L1 distances between the vectors in X and Y.
metrics.pairwise.paired_cosine_distances(X, Y) Computes the paired cosine distances between X and Y.
metrics.pairwise.paired_distances(X, Y[, metric]) Computes the paired distances between X and Y.
metrics.pairwise_distances(X[, Y, metric, …]) Compute the distance matrix from a vector array X and optional Y.
metrics.pairwise_distances_argmin(X, Y[, …]) Compute minimum distances between one point and a set of points.
metrics.pairwise_distances_argmin_min(X, Y) Compute minimum distances between one point and a set of points.
User guide: See the Gaussian mixture models section for further details.
mixture.BayesianGaussianMixture([…]) Variational Bayesian estimation of a Gaussian mixture.
mixture.GaussianMixture([n_components, …]) Gaussian Mixture.
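A minimal mixture-model sketch on two well-separated synthetic blobs (toy data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# two blobs of 100 points each, separated by 5 units
X = np.vstack([rng.randn(100, 2), rng.randn(100, 2) + 5])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gm.predict(X)  # most likely component per sample
```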
sklearn.model_selection: Model Selection
User guide: See the Cross-validation: evaluating estimator performance, Tuning the hyper-parameters of an
estimator and Learning curve sections for further details.
Splitter Classes
Splitter Functions
Hyper-parameter optimizers
Model validation
The estimators provided in this module are meta-estimators: they require a base estimator to be provided in
their constructor. For example, it is possible to use these estimators to turn a binary classifier or a regressor
into a multiclass classifier. It is also possible to use these estimators with multiclass estimators in the hope that
their accuracy or runtime performance improves.
All classifiers in scikit-learn implement multiclass classification; you only need to use this module if you want to
experiment with custom multiclass strategies.
The one-vs-the-rest meta-classifier also implements a predict_proba method, so long as such a method is
implemented by the base classifier. This method returns probabilities of class membership in both the single
label and multilabel case. Note that in the multilabel case, probabilities are the marginal probability that a given
sample falls in the given class. As such, in the multilabel case the sum of these probabilities over all possible
labels for a given sample will not sum to unity, as they do in the single label case.
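In the single-label case described above, the per-class probabilities are normalized to sum to one; a minimal sketch (base estimator chosen arbitrarily):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = load_iris(return_X_y=True)

# one binary LogisticRegression per class
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
proba = ovr.predict_proba(X)  # shape (n_samples, n_classes)
```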
User guide: See the Multiclass and multilabel algorithms section for further details.
multiclass.OneVsRestClassifier(estimator[, …]) One-vs-the-rest (OvR) multiclass/multilabel strategy
multiclass.OneVsOneClassifier(estimator[, …]) One-vs-one multiclass strategy
multiclass.OutputCodeClassifier(estimator[, …]) (Error-Correcting) Output-Code multiclass strategy
The estimators provided in this module are meta-estimators: they require a base estimator to be provided in
their constructor. The meta-estimator extends single output estimators to multioutput estimators.
User guide: See the Multiclass and multilabel algorithms section for further details.
multioutput.ClassifierChain(base_estimator) A multi-label model that arranges binary classifiers into a chain.
multioutput.MultiOutputRegressor(estimator) Multi target regression
multioutput.MultiOutputClassifier(estimator) Multi target classification
The sklearn.naive_bayes module implements Naive Bayes algorithms. These are supervised learning methods based on applying Bayes’ theorem with strong (naive) feature independence assumptions.
User guide: See the Naive Bayes section for further details.
naive_bayes.BernoulliNB([alpha, binarize, …]) Naive Bayes classifier for multivariate Bernoulli models.
naive_bayes.GaussianNB([priors]) Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, …]) Naive Bayes classifier for multinomial models
User guide: See the Nearest Neighbors section for further details.
neighbors.BallTree BallTree for fast generalized N-point problems
neighbors.DistanceMetric DistanceMetric class
neighbors.KDTree KDTree for fast generalized N-point problems
neighbors.KernelDensity([bandwidth, …]) Kernel Density Estimation
neighbors.KNeighborsClassifier([…]) Classifier implementing the k-nearest neighbors vote.
neighbors.KNeighborsRegressor([n_neighbors, …]) Regression based on k-nearest neighbors.
neighbors.LocalOutlierFactor([n_neighbors, …]) Unsupervised Outlier Detection using Local Outlier Factor (LOF).
neighbors.RadiusNeighborsClassifier([…]) Classifier implementing a vote among neighbors within a given radius.
neighbors.RadiusNeighborsRegressor([radius, …]) Regression based on neighbors within a fixed radius.
neighbors.NearestCentroid([metric, …]) Nearest centroid classifier.
neighbors.NearestNeighbors([n_neighbors, …]) Unsupervised learner for implementing neighbor searches.
neighbors.kneighbors_graph(X, n_neighbors[, …]) Computes the (weighted) graph of k-Neighbors for points in X.
neighbors.radius_neighbors_graph(X, radius) Computes the (weighted) graph of Neighbors for points in X.
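A minimal k-nearest-neighbors sketch on iris (n_neighbors chosen arbitrarily):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# classify each point by a majority vote of its 3 nearest neighbors
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
```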
User guide: See the Neural network models (supervised) and Neural network models (unsupervised) sections
for further details.
neural_network.BernoulliRBM([n_components, …]) Bernoulli Restricted Boltzmann Machine (RBM).
sklearn.pipeline: Pipeline
The sklearn.pipeline module implements utilities to build a composite estimator, as a chain of transforms and
estimators.
pipeline.FeatureUnion(transformer_list[, …]) Concatenates results of multiple transformer objects.
pipeline.Pipeline(steps[, memory]) Pipeline of transforms with a final estimator.
pipeline.make_pipeline(*steps, **kwargs) Construct a Pipeline from the given estimators.
pipeline.make_union(*transformers, **kwargs) Construct a FeatureUnion from the given transformers.
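A minimal pipeline sketch chaining a scaler and a classifier (the steps are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# make_pipeline derives step names automatically from the class names
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X, y)
```

Calling `pipe.fit` fits the scaler, transforms the data, then fits the final estimator; `pipe.predict` applies the same transforms before predicting.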
The sklearn.preprocessing module includes scaling, centering, normalization, binarization and imputation
methods.
User guide: See the Preprocessing data section for further details.
preprocessing.Binarizer([threshold, copy]) Binarize data (set feature values to 0 or 1) according to a threshold.
preprocessing.FunctionTransformer([func, …]) Constructs a transformer from an arbitrary callable.
preprocessing.Imputer([missing_values, …]) Imputation transformer for completing missing values.
preprocessing.KernelCenterer Center a kernel matrix
preprocessing.LabelBinarizer([neg_label, …]) Binarize labels in a one-vs-all fashion
preprocessing.LabelEncoder Encode labels with value between 0 and n_classes-1.
preprocessing.MultiLabelBinarizer([classes, …]) Transform between iterable of iterables and a
multilabel format
preprocessing.MaxAbsScaler([copy]) Scale each feature by its maximum absolute value.
preprocessing.MinMaxScaler([feature_range, copy]) Transforms features by scaling each feature to a given
range.
preprocessing.Normalizer([norm, copy]) Normalize samples individually to unit norm.
preprocessing.OneHotEncoder([n_values, …]) Encode categorical integer features using a one-hot
aka one-of-K scheme.
preprocessing.PolynomialFeatures([degree, …]) Generate polynomial and interaction features.
preprocessing.QuantileTransformer([…]) Transform features using quantiles information.
preprocessing.RobustScaler([with_centering, …]) Scale features using statistics that are robust to
outliers.
preprocessing.StandardScaler([copy, …]) Standardize features by removing the mean and
scaling to unit variance
preprocessing.add_dummy_feature(X[, value]) Augment dataset with an additional dummy feature.
preprocessing.binarize(X[, threshold, copy]) Boolean thresholding of array-like or scipy.sparse matrix
preprocessing.label_binarize(y, classes[, …]) Binarize labels in a one-vs-all fashion
preprocessing.maxabs_scale(X[, axis, copy]) Scale each feature to the [-1, 1] range without breaking
the sparsity.
preprocessing.minmax_scale(X[, …]) Transforms features by scaling each feature to a given
range.
preprocessing.normalize(X[, norm, axis, …]) Scale input vectors individually to unit norm (vector
length).
preprocessing.quantile_transform(X[, axis, …]) Transform features using quantiles information.
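The two most commonly used scalers from the table above can be sketched as follows (the sample matrix is illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, -10.0],
              [2.0,   0.0],
              [3.0,  10.0]])

# StandardScaler removes the per-column mean and scales to unit variance.
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
print(scaler.mean_)        # per-feature means: [2. 0.]
print(X_std.mean(axis=0))  # approximately 0 for each column

# MinMaxScaler maps each column into [0, 1] by default.
X_mm = MinMaxScaler().fit_transform(X)
print(X_mm.min(axis=0), X_mm.max(axis=0))  # [0. 0.] [1. 1.]
```

The fitted statistics (`mean_`, `scale_`, etc.) are stored on the scaler, so the same transform learned on training data can be reapplied to test data with `transform`.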
sklearn.random_projection: Random projection
Random Projections are a simple and computationally efficient way to reduce the dimensionality of the data by trading a controlled amount of accuracy (as additional variance) for faster processing times and smaller model sizes.
The dimensions and distribution of Random Projections matrices are controlled so as to preserve the pairwise distances between any two samples of the dataset.
The main theoretical result behind the efficiency of random projection is the Johnson-Lindenstrauss lemma.
User guide: See the Random Projection section for further details.
random_projection.GaussianRandomProjection([…]) Reduce dimensionality through Gaussian random projection
random_projection.SparseRandomProjection([…]) Reduce dimensionality through sparse random projection
random_projection.johnson_lindenstrauss_min_dim(…) Find a ‘safe’ number of components to randomly project to
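A short sketch of these utilities (the sample counts and `eps` value are illustrative):

```python
import numpy as np
from sklearn.random_projection import (SparseRandomProjection,
                                       johnson_lindenstrauss_min_dim)

# Minimum number of components needed to preserve pairwise distances
# within a factor of eps=0.1 for 10,000 samples, per the
# Johnson-Lindenstrauss bound.
n_components = johnson_lindenstrauss_min_dim(n_samples=10000, eps=0.1)
print(n_components)

# Project 100 samples from 10,000 dimensions down to the 'auto' target
# dimension chosen from the same bound.
rng = np.random.RandomState(0)
X = rng.rand(100, 10000)
transformer = SparseRandomProjection(random_state=0)
X_new = transformer.fit_transform(X)
print(X_new.shape)  # far fewer columns than 10,000
```

Note that the target dimension depends only on the number of samples and `eps`, not on the original number of features.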
sklearn.svm: Support Vector Machines
The sklearn.svm module includes Support Vector Machine algorithms.
User guide: See the Support Vector Machines section for further details.
Estimators
Low-level methods
sklearn.tree: Decision Trees
The sklearn.tree module includes decision tree-based models for classification and regression.
User guide: See the Decision Trees section for further details.
tree.DecisionTreeClassifier([criterion, …]) A decision tree classifier.
tree.DecisionTreeRegressor([criterion, …]) A decision tree regressor.
tree.ExtraTreeClassifier([criterion, …]) An extremely randomized tree classifier.
tree.ExtraTreeRegressor([criterion, …]) An extremely randomized tree regressor.
tree.export_graphviz(decision_tree[, …]) Export a decision tree in DOT format.
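A minimal sketch combining a tree classifier with the DOT export above (the dataset and `max_depth` value are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_graphviz

X, y = load_iris(return_X_y=True)

# Fit a shallow tree; max_depth limits how deep the tree may grow.
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:1]))  # predicted class of the first sample

# With out_file=None, export_graphviz returns the DOT source as a string,
# which can be rendered with Graphviz.
dot_source = export_graphviz(clf, out_file=None)
print(dot_source[:13])  # "digraph Tree " header
```

The extremely randomized variants (`ExtraTreeClassifier`, `ExtraTreeRegressor`) follow the same `fit`/`predict` interface but draw split thresholds at random.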
sklearn.utils: Utilities
Developer guide: See the Utilities for Developers page for further details.
utils.as_float_array(X[, copy, force_all_finite]) Converts an array-like to an array of floats.
utils.assert_all_finite(X) Throw a ValueError if X contains NaN or infinity.
utils.check_X_y(X, y[, accept_sparse, …]) Input validation for standard estimators.
utils.check_array(array[, accept_sparse, …]) Input validation on an array, list, sparse matrix or similar.
utils.check_consistent_length(*arrays) Check that all arrays have consistent first dimensions.
utils.check_random_state(seed) Turn seed into a np.random.RandomState instance
utils.class_weight.compute_class_weight(…) Estimate class weights for unbalanced datasets.
utils.class_weight.compute_sample_weight(…) Estimate sample weights by class for unbalanced datasets.
utils.estimator_checks.check_estimator(Estimator) Check if estimator adheres to scikit-learn conventions.
utils.extmath.safe_sparse_dot(a, b[, …]) Dot product that handles the sparse matrix case correctly
utils.indexable(*iterables) Make arrays indexable for cross-validation.
utils.resample(*arrays, **options) Resample arrays or sparse matrices in a consistent way
utils.safe_indexing(X, indices) Return items or rows from X using indices.
utils.shuffle(*arrays, **options) Shuffle arrays or sparse matrices in a consistent way
utils.sparsefuncs.incr_mean_variance_axis(X, …) Compute incremental mean and variance along an axis on a CSR or CSC matrix.
utils.sparsefuncs.inplace_column_scale(X, scale) Inplace column scaling of a CSC/CSR matrix.
utils.sparsefuncs.inplace_row_scale(X, scale) Inplace row scaling of a CSR or CSC matrix.
utils.sparsefuncs.inplace_swap_row(X, m, n) Swaps two rows of a CSC/CSR matrix in-place.
utils.sparsefuncs.inplace_swap_column(X, m, n) Swaps two columns of a CSC/CSR matrix in-place.
utils.sparsefuncs.mean_variance_axis(X, axis) Compute mean and variance along an axis on a CSR or CSC matrix
utils.validation.check_is_fitted(estimator, …) Perform is_fitted validation for estimator.
utils.validation.check_memory(memory) Check that memory is joblib.Memory-like.
utils.validation.check_symmetric(array[, …]) Make sure that array is 2D, square and symmetric.
utils.validation.column_or_1d(y[, warn]) Ravel column or 1d numpy array, else raises an error
utils.validation.has_fit_parameter(…) Checks whether the estimator’s fit method supports the given parameter.
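A few of the validation and resampling helpers above in action (the inputs are illustrative):

```python
import numpy as np
from sklearn.utils import check_array, check_random_state, shuffle

# check_array validates and converts input: lists become 2D float arrays,
# and NaN/infinity raise a ValueError by default.
X = check_array([[1, 2], [3, 4]])
print(X.shape)  # (2, 2)

# check_random_state turns an int seed (or None) into a RandomState,
# the convention used by estimators' random_state parameters.
rng = check_random_state(42)
print(isinstance(rng, np.random.RandomState))  # True

# shuffle permutes several arrays with the same permutation,
# keeping corresponding rows aligned.
a = np.arange(5)
b = np.arange(5) * 10
a_s, b_s = shuffle(a, b, random_state=0)
print(a_s, b_s)  # b_s == a_s * 10, row for row
```

Passing arrays through `check_array`/`check_X_y` at the top of `fit` is the standard way custom estimators obtain the input validation that `check_estimator` expects.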
Recently deprecated
To be removed in 0.21
To be removed in 0.20