There is a rationale here. Perceptron shares most of its code with SGDClassifier; the Perceptron documentation states:

Perceptron and SGDClassifier share the same underlying implementation. In fact, Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).

and the SGDClassifier documentation is more explicit:
Note: The sparse implementation produces slightly different results than the dense implementation due to a shrunk learning rate for the intercept.
with more details given later:
In the case of sparse feature vectors, the intercept is updated with a smaller learning rate (multiplied by 0.01) to account for the fact that it is updated more frequently.
Note that this implementation detail comes from Leon Bottou:
The learning rate for the bias is multiplied by 0.01 because this frequently improves the condition number.
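To see what that means in practice, here is a toy sketch of a single perceptron-style update on a misclassified sample (an illustration only, not scikit-learn's actual code); the intercept step on the sparse path is scaled by the 0.01 factor:

import numpy as np

eta0 = 1.0
decay = 0.01                              # the factor mentioned above
x, y_true = np.array([0.5, -1.2]), 1.0    # one misclassified sample

w, b = np.zeros(2), 0.0
w += eta0 * y_true * x                    # weight update: identical for dense and sparse input
b_dense = b + eta0 * y_true               # dense path: full step on the intercept
b_sparse = b + eta0 * y_true * decay      # sparse path: intercept step shrunk by 0.01
print(b_dense, b_sparse)                  # 1.0 vs 0.01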
For completeness, here is the relevant constant in the scikit-learn code:
SPARSE_INTERCEPT_DECAY = 0.01
# For sparse data intercept updates are scaled by this decay factor to avoid
# intercept oscillation.
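And since Perceptron() is documented as a thin wrapper over SGDClassifier, here is a quick sanity check of that equivalence on dense toy data (a minimal sketch; the data and settings are mine, not from the docs):

import numpy as np
from sklearn.linear_model import Perceptron, SGDClassifier

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

common = dict(max_iter=20, tol=None, shuffle=False, random_state=0)
p = Perceptron(**common).fit(X, y)
s = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
                  penalty=None, **common).fit(X, y)

# Expected: both True, since both estimators run the same underlying SGD routine
print(np.allclose(p.coef_, s.coef_), np.allclose(p.intercept_, s.intercept_))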
Bonus example:
import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import Perceptron

np.random.seed(42)
n_samples, n_features = 1000, 10

# Same data in dense and sparse (CSR) form
X_dense = np.random.randn(n_samples, n_features)
X_csr = sp.csr_matrix(X_dense)
y = np.random.randint(2, size=n_samples)

# Fit the same model on the dense and the sparse representation
for X in [X_dense, X_csr]:
    model = Perceptron(penalty='l2', alpha=0.0001, fit_intercept=True,
                       max_iter=10000, tol=1e-8, shuffle=True, verbose=0,
                       eta0=1.0, n_jobs=1, random_state=0, class_weight=None,
                       warm_start=False)
    model.fit(X, y)
    print(model.coef_)
You can check that the coefficients are different. Setting fit_intercept=False makes the coefficients equal, yet the fit may be poorer.
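For instance, reusing X_dense, X_csr and y from the bonus example above (a sketch of that check; np.allclose is just one way to compare):

import numpy as np
from sklearn.linear_model import Perceptron

coefs = []
for X in [X_dense, X_csr]:                # reuses the arrays built above
    model = Perceptron(penalty='l2', alpha=0.0001, fit_intercept=False,
                       max_iter=10000, tol=1e-8, shuffle=True,
                       eta0=1.0, random_state=0)
    coefs.append(model.fit(X, y).coef_)

print(np.allclose(coefs[0], coefs[1]))    # True: without an intercept there is no decayed update to differ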