1.4. Support Vector Machines
If the number of features is much greater than the number of samples, avoiding over-fitting when choosing
kernel functions and the regularization term is crucial.
SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold
cross-validation (see Scores and probabilities, below).
The support vector machines in scikit-learn support both dense (numpy.ndarray and convertible to that by
numpy.asarray) and sparse (any scipy.sparse) sample vectors as input. However, to use an SVM to make
predictions for sparse data, it must have been fit on such data. For optimal performance, use C-ordered
numpy.ndarray (dense) or scipy.sparse.csr_matrix (sparse) with dtype=float64.
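For instance, a minimal sketch of fitting and predicting on sparse input (the two-sample dataset is only illustrative):

>>> import numpy as np
>>> from scipy import sparse
>>> from sklearn import svm
>>> X = sparse.csr_matrix(np.array([[0., 0.], [1., 1.]], dtype=np.float64))
>>> y = [0, 1]
>>> clf = svm.SVC().fit(X, y)                    # fit on sparse data
>>> clf.predict(sparse.csr_matrix([[2., 2.]]))   # predict on sparse data as well
array([1])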
1.4.1. Classification
SVC , NuSVC and LinearSVC are classes capable of performing binary and multi-class classification on a
dataset.
SVC and NuSVC are similar methods, but accept slightly different sets of parameters and have different
mathematical formulations (see section Mathematical formulation). On the other hand, LinearSVC is
another (faster) implementation of Support Vector Classification for the case of a linear kernel. It also lacks
some of the attributes of SVC and NuSVC, like support_. LinearSVC uses the squared_hinge loss and, due to
its implementation in liblinear, it also regularizes the intercept, if considered. This effect can, however, be
reduced by carefully tuning its intercept_scaling parameter, which allows the intercept term to have
a different regularization behavior compared to the other features. The classification results and score can
therefore differ from the other two classifiers.
As with other classifiers, SVC, NuSVC and LinearSVC take as input two arrays: an array X of shape (n_samples,
n_features) holding the training samples, and an array y of class labels (strings or integers), of shape
(n_samples):
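A minimal sketch with a toy two-sample dataset:

>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y)
SVC()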
After being fitted, the model can then be used to predict new values:
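Continuing the sketch above:

>>> clf.predict([[2., 2.]])
array([1])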
The SVMs' decision function (detailed in the Mathematical formulation) depends on some subset of the training
data, called the support vectors. Some properties of these support vectors can be found in the attributes
support_vectors_, support_ and n_support_:
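Continuing the same sketch (the exact outputs depend on the data; here both training samples end up as support vectors):

>>> # get the support vectors
>>> clf.support_vectors_
array([[0., 0.],
       [1., 1.]])
>>> # get the indices of the support vectors
>>> clf.support_
array([0, 1], dtype=int32)
>>> # get the number of support vectors for each class
>>> clf.n_support_
array([1, 1], dtype=int32)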
Examples
SVC and NuSVC implement the “one-versus-one” approach for multi-class classification: in total,
n_classes * (n_classes - 1) / 2 classifiers are constructed, each trained on data from two classes. On the
other hand, LinearSVC implements a “one-vs-the-rest” multi-class strategy, thus training n_classes
models.
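A minimal sketch with a toy four-class dataset, showing that the decision function has one score per class:

>>> from sklearn import svm
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 1, 2, 3]
>>> lin_clf = svm.LinearSVC()
>>> lin_clf.fit(X, y)
LinearSVC()
>>> dec = lin_clf.decision_function([[1]])
>>> dec.shape[1]          # one-vs-rest: one score per class
4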
Examples
Note
The same probability calibration procedure is available for all estimators via the
CalibratedClassifierCV (see Probability calibration). In the case of SVC and NuSVC, this
procedure is built into libsvm, which is used under the hood, so it does not rely on scikit-learn's
CalibratedClassifierCV.
The cross-validation involved in Platt scaling is an expensive operation for large datasets. In addition, the
probability estimates may be inconsistent with the scores:
- the “argmax” of the scores may not be the argmax of the probabilities;
- in binary classification, a sample may be labeled by predict as belonging to the positive class even if
  the output of predict_proba is less than 0.5; and similarly, it could be labeled as negative even if the
  output of predict_proba is more than 0.5.
Platt’s method is also known to have theoretical issues. If confidence scores are required, but these do not
have to be probabilities, then it is advisable to set probability=False and use decision_function instead
of predict_proba .
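A minimal sketch of using the decision function values as confidence scores (toy data, for illustration only):

>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC(probability=False).fit(X, y)   # probability=False is the default
>>> scores = clf.decision_function([[2., 2.]])   # confidence scores, not probabilities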
Please note that when decision_function_shape='ovr' and n_classes > 2, unlike decision_function, the
predict method does not try to break ties by default. You can set break_ties=True for the output of
predict to be the same as np.argmax(clf.decision_function(...), axis=1); otherwise the first class
among the tied classes will always be returned. Keep in mind that tie breaking comes with a computational cost.
See the SVM Tie Breaking Example for an example on tie breaking.
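A minimal sketch of tie breaking on a toy multi-class problem (classes are labeled 0..3, so the argmax indices coincide with the labels):

>>> import numpy as np
>>> from sklearn import svm
>>> X = [[0], [1], [2], [3]]
>>> y = [0, 1, 2, 3]
>>> clf = svm.SVC(decision_function_shape='ovr', break_ties=True).fit(X, y)
>>> pred = clf.predict(X)
>>> # with break_ties=True, predict matches the argmax of the decision function
>>> same = np.all(pred == np.argmax(clf.decision_function(X), axis=1))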
SVC (but not NuSVC ) implements the parameter class_weight in the fit method. It’s a dictionary of the
form {class_label : value} , where value is a floating point number > 0 that sets the parameter C of
class class_label to C * value . The figure below illustrates the decision boundary of an unbalanced
problem, with and without weight correction.
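For example, a minimal sketch that penalizes errors on class 1 ten times more heavily (a hypothetical minority class):

>>> from sklearn import svm
>>> # errors on class 1 are now penalized with C * 10
>>> wclf = svm.SVC(kernel='linear', class_weight={1: 10})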
SVC, NuSVC, SVR, NuSVR, LinearSVC, LinearSVR and OneClassSVM also implement weights for individual
samples in the fit method through the sample_weight parameter. Similar to class_weight, this sets the
parameter C for the i-th example to C * sample_weight[i], which will encourage the classifier to get these
samples right. The figure below illustrates the effect of sample weighting on the decision boundary. The
size of the circles is proportional to the sample weights:
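A minimal sketch with toy data, giving one sample five times the default weight:

>>> from sklearn import svm
>>> X = [[0, 0], [1, 1], [1, 0], [0, 1]]
>>> y = [0, 0, 1, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y, sample_weight=[1.0, 1.0, 5.0, 1.0])  # emphasize the third sample
SVC()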
Examples
1.4.2. Regression
The method of Support Vector Classification can be extended to solve regression problems. This method is
called Support Vector Regression.
The model produced by support vector classification (as described above) depends only on a subset of the
training data, because the cost function for building the model does not care about training points that lie
beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a
subset of the training data, because the cost function ignores samples whose prediction is close to their
target.
There are three different implementations of Support Vector Regression: SVR, NuSVR and LinearSVR.
LinearSVR provides a faster implementation than SVR but only considers the linear kernel, while NuSVR
implements a slightly different formulation than SVR and LinearSVR. Due to its implementation in
liblinear, LinearSVR also regularizes the intercept, if considered. This effect can, however, be reduced by
carefully tuning its intercept_scaling parameter, which allows the intercept term to have a different
regularization behavior compared to the other features. The prediction results and score can therefore
differ from the other two regressors. See Implementation details for further details.
As with the classification classes, the fit method takes argument vectors X and y, only that in this case y is
expected to have floating point values instead of integer values:
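A minimal sketch with a toy regression dataset:

>>> from sklearn import svm
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> regr = svm.SVR()
>>> regr.fit(X, y)
SVR()
>>> regr.predict([[1, 1]])
array([1.5])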
Examples
1.4.3. Density estimation, novelty detection

See Novelty and Outlier Detection for the description and usage of OneClassSVM.
1.4.4. Complexity
Support Vector Machines are powerful tools, but their compute and storage requirements increase rapidly
with the number of training vectors. The core of an SVM is a quadratic programming problem (QP),
separating support vectors from the rest of the training data. The QP solver used by the libsvm-based
implementation scales between \(O(n_{features} \times n_{samples}^2)\) and \(O(n_{features} \times n_{samples}^3)\) depending on how
efficiently the libsvm cache is used in practice (dataset dependent). If the data is very sparse, \(n_{features}\)
should be replaced by the average number of non-zero features in a sample vector.
For the linear case, the algorithm used in LinearSVC by the liblinear implementation is much more efficient
than its libsvm-based SVC counterpart and can scale almost linearly to millions of samples and/or features.
1.4.5. Tips on Practical Use

For LinearSVC (and LogisticRegression) any input passed as a numpy array will be copied and
converted to the liblinear internal sparse data representation (double precision floats and int32 indices
of non-zero components). If you want to fit a large-scale linear classifier without copying a dense
numpy C-contiguous double precision array as input, we suggest using the SGDClassifier class
instead. The objective function can be configured to be almost the same as that of the LinearSVC model.
Kernel cache size: For SVC , SVR , NuSVC and NuSVR , the size of the kernel cache has a strong impact
on run times for larger problems. If you have enough RAM available, it is recommended to set
cache_size to a higher value than the default of 200(MB), such as 500(MB) or 1000(MB).
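For instance, a minimal sketch:

>>> from sklearn import svm
>>> clf = svm.SVC(cache_size=1000)   # kernel cache of 1000 MB instead of the default 200 MB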
Setting C: C is 1 by default and it’s a reasonable default choice. If you have a lot of noisy
observations you should decrease it: decreasing C corresponds to more regularization.
LinearSVC and LinearSVR are less sensitive to C when it becomes large, and prediction results stop
improving after a certain threshold. Meanwhile, larger C values will take more time to train, sometimes
up to 10 times longer, as shown in [11].
Support Vector Machine algorithms are not scale invariant, so it is highly recommended to scale
your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it
to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain
meaningful results. This can be done easily by using a Pipeline :
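A minimal sketch of such a pipeline (a scaler followed by an SVM, so the same scaling is automatically applied at prediction time):

>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.svm import SVC
>>> clf = make_pipeline(StandardScaler(), SVC())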
See section Preprocessing data for more details on scaling and normalization.
Regarding the shrinking parameter, quoting [12]: “We found that if the number of iterations is large,
then shrinking can shorten the training time. However, if we loosely solve the optimization problem (e.g.,
by using a large stopping tolerance), the code without using shrinking may be much faster.”
Parameter nu in NuSVC / OneClassSVM / NuSVR approximates the fraction of training errors and support
vectors.
In SVC , if the data is unbalanced (e.g. many positive and few negative), set class_weight='balanced'
and/or try different penalty parameters C .
Randomness of the underlying implementations: The underlying implementations of SVC and
NuSVC use a random number generator only to shuffle the data for probability estimation (when
probability is set to True ). This randomness can be controlled with the random_state parameter. If
probability is set to False these estimators are not random and random_state has no effect on the
results. The underlying OneClassSVM implementation is similar to the ones of SVC and NuSVC . As no
probability estimation is provided for OneClassSVM , it is not random.
The underlying LinearSVC implementation uses a random number generator to select features when
fitting the model with a dual coordinate descent (i.e. when dual is set to True ). It is thus not
uncommon to have slightly different results for the same input data. If that happens, try with a smaller
tol parameter. This randomness can also be controlled with the random_state parameter. When
dual is set to False the underlying implementation of LinearSVC is not random and random_state
has no effect on the results.
1.4.6. Kernel functions

The kernel function can be any of the following:

- linear: \(\langle x, x' \rangle\).
- polynomial: \((\gamma \langle x, x' \rangle + r)^d\), where \(d\) is specified by the parameter degree and \(r\) by coef0.
- rbf: \(\exp(-\gamma \|x - x'\|^2)\), where \(\gamma\) is specified by the parameter gamma and must be greater than 0.
- sigmoid: \(\tanh(\gamma \langle x, x' \rangle + r)\), where \(r\) is specified by coef0.
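Different kernels are specified by the kernel parameter, for example:

>>> from sklearn import svm
>>> linear_svc = svm.SVC(kernel='linear')
>>> linear_svc.kernel
'linear'
>>> rbf_svc = svm.SVC(kernel='rbf')
>>> rbf_svc.kernel
'rbf'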
See also Kernel Approximation for a solution to use RBF kernels that is much faster and more scalable.
Proper choice of C and gamma is critical to the SVM’s performance. One is advised to use GridSearchCV
with C and gamma spaced exponentially far apart to choose good values.
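A minimal sketch of such a search (the grid bounds are only illustrative):

>>> import numpy as np
>>> from sklearn.model_selection import GridSearchCV
>>> from sklearn.svm import SVC
>>> param_grid = {'C': np.logspace(-2, 3, 6), 'gamma': np.logspace(-4, 1, 6)}
>>> search = GridSearchCV(SVC(), param_grid, cv=5)
>>> # search.fit(X_train, y_train) would then pick good values for C and gamma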
Examples
Classifiers with custom kernels behave the same way as any other classifiers, except that:

- Field support_vectors_ is now empty, only indices of support vectors are stored in support_.
- A reference (and not a copy) of the first argument in the fit() method is stored for future reference.
  If that array changes between the use of fit() and predict() you will have unexpected results.
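A minimal sketch of passing a Python function as the kernel (here simply a linear kernel, used only for illustration):

>>> import numpy as np
>>> from sklearn import svm
>>> def my_kernel(X, Y):
...     # Gram matrix between the rows of X and the rows of Y
...     return np.dot(X, Y.T)
...
>>> clf = svm.SVC(kernel=my_kernel)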
Examples
1.4.7. Mathematical formulation

In general, when the problem isn't linearly separable, the support vectors are the samples within the margin
boundaries.
We recommend [13] and [14] as good references for the theory and practicalities of SVMs.
1.4.7.1. SVC
Given training vectors \(x_i \in \mathbb{R}^p\), i = 1, …, n, in two classes, and a vector \(y \in \{1, -1\}^n\), our goal is to find
\(w \in \mathbb{R}^p\) and \(b \in \mathbb{R}\) such that the prediction given by \(\mathrm{sign}(w^T \phi(x) + b)\) is correct for most samples.
SVC solves the following primal problem:

\[
\min_{w, b, \zeta} \frac{1}{2} w^T w + C \sum_{i=1}^{n} \zeta_i
\]

\[
\text{subject to } y_i (w^T \phi(x_i) + b) \geq 1 - \zeta_i, \quad \zeta_i \geq 0, \quad i = 1, \dots, n
\]

Intuitively, we're trying to maximize the margin (by minimizing \(\|w\|^2 = w^T w\)), while incurring a penalty
when a sample is misclassified or within the margin boundary. Ideally, the value \(y_i (w^T \phi(x_i) + b)\) would be
\(\geq 1\) for all samples, which would indicate a perfect prediction. But problems are usually not perfectly
separable with a hyperplane, so we allow some samples to be at a distance \(\zeta_i\) from their correct margin
boundary. The penalty term C controls the strength of this penalty, and as a result, acts as an inverse
regularization parameter (see note below).

The dual problem to the primal is

\[
\min_{\alpha} \frac{1}{2} \alpha^T Q \alpha - e^T \alpha
\]

\[
\text{subject to } y^T \alpha = 0, \quad 0 \leq \alpha_i \leq C, \quad i = 1, \dots, n
\]
where \(e\) is the vector of all ones, and \(Q\) is an \(n\)-by-\(n\) positive semidefinite matrix, \(Q_{ij} \equiv y_i y_j K(x_i, x_j)\),
where \(K(x_i, x_j) = \phi(x_i)^T \phi(x_j)\) is the kernel. The terms \(\alpha_i\) are called the dual coefficients, and they are
upper-bounded by \(C\). This dual representation highlights the fact that training vectors are implicitly
mapped into a higher (maybe infinite) dimensional space by the function \(\phi\): see kernel trick.
Once the optimization problem is solved, the output of decision_function for a given sample x becomes:
\[
\sum_{i \in SV} y_i \alpha_i K(x_i, x) + b,
\]
and the predicted class corresponds to its sign. We only need to sum over the support vectors (i.e. the
samples that lie within the margin) because the dual coefficients \(\alpha_i\) are zero for the other samples.

These parameters can be accessed through the attributes dual_coef_, which holds the product \(y_i \alpha_i\),
support_vectors_, which holds the support vectors, and intercept_, which holds the independent term \(b\).
Note
While SVM models derived from libsvm and liblinear use C as the regularization parameter, most
other estimators use alpha. The exact equivalence between the amount of regularization of two
models depends on the exact objective function optimized by the model. For example, when the
estimator used is Ridge regression, the relation between them is given as C = 1 / alpha.
1.4.7.2. SVR
Given training vectors \(x_i \in \mathbb{R}^p\), i = 1, …, n, and a vector \(y \in \mathbb{R}^n\), ε-SVR solves the following primal problem:
\[
\min_{w, b, \zeta, \zeta^*} \frac{1}{2} w^T w + C \sum_{i=1}^{n} (\zeta_i + \zeta_i^*)
\]

\[
\text{subject to } y_i - w^T \phi(x_i) - b \leq \varepsilon + \zeta_i,
\]
\[
w^T \phi(x_i) + b - y_i \leq \varepsilon + \zeta_i^*,
\]
\[
\zeta_i, \zeta_i^* \geq 0, \quad i = 1, \dots, n
\]
Here, we are penalizing samples whose prediction is at least \(\varepsilon\) away from their true target. These samples
penalize the objective by \(\zeta_i\) or \(\zeta_i^*\), depending on whether their predictions lie above or below the \(\varepsilon\) tube.
The dual problem is

\[
\min_{\alpha, \alpha^*} \frac{1}{2} (\alpha - \alpha^*)^T Q (\alpha - \alpha^*) + \varepsilon e^T (\alpha + \alpha^*) - y^T (\alpha - \alpha^*)
\]

\[
\text{subject to } e^T (\alpha - \alpha^*) = 0, \quad 0 \leq \alpha_i, \alpha_i^* \leq C, \quad i = 1, \dots, n
\]