
Perceptron gives different results when given a matrix in dense format than when given the same matrix in sparse format. I thought it could be a shuffling issue, so I ran cross-validation using cross_validate from sklearn.model_selection, but no luck.

A similar issue is discussed here, but a rationale is given for that one. Is there any rationale here?

FYI, these are the parameters I am using for Perceptron: penalty='l2', alpha=0.0001, fit_intercept=True, max_iter=10000, tol=1e-8, shuffle=True, verbose=0, eta0=1.0, n_jobs=1, random_state=0, class_weight=None, warm_start=False, n_iter=None

I am using sparse.csr_matrix to convert the dense matrix to a sparse one, as suggested in the accepted answer here.


1 Answer


There is a rationale here.

Perceptron shares most of its code with SGDClassifier:

Perceptron and SGDClassifier share the same underlying implementation. In fact, Perceptron() is equivalent to SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant", penalty=None).
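
As a quick sanity check of this equivalence, the sketch below (on made-up random data, for illustration only) fits both estimators with the same seed and verifies that they learn identical weights:

import numpy as np
from sklearn.linear_model import Perceptron, SGDClassifier

# Made-up toy data, for illustration only.
rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = rng.randint(2, size=200)

perceptron = Perceptron(random_state=0).fit(X, y)
sgd = SGDClassifier(loss="perceptron", eta0=1, learning_rate="constant",
                    penalty=None, random_state=0).fit(X, y)

# Same seed and same underlying implementation -> identical weights.
print(np.allclose(perceptron.coef_, sgd.coef_))  # expected: True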

SGDClassifier is also better documented:

Note: The sparse implementation produces slightly different results than the dense implementation due to a shrunk learning rate for the intercept.

There are more details later in the documentation:

In the case of sparse feature vectors, the intercept is updated with a smaller learning rate (multiplied by 0.01) to account for the fact that it is updated more frequently.

Note that this implementation detail comes from Leon Bottou:

The learning rate for the bias is multiplied by 0.01 because this frequently improves the condition number.

For completeness, in the scikit-learn code:

SPARSE_INTERCEPT_DECAY = 0.01
# For sparse data intercept updates are scaled by this decay factor to avoid
# intercept oscillation.
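
To see how that decay enters a single update, here is a conceptual sketch of one perceptron-style SGD step (a simplification I wrote for illustration, not the actual Cython code; the function and variable names are assumptions):

import numpy as np

SPARSE_INTERCEPT_DECAY = 0.01  # same constant as in scikit-learn

def perceptron_step(w, intercept, x, y, eta, is_sparse):
    """One simplified perceptron update, with y in {-1, +1}."""
    if y * (np.dot(w, x) + intercept) <= 0:  # mistake: update the model
        w = w + eta * y * x
        # With sparse input, the intercept learning rate is shrunk by 0.01,
        # because the intercept is updated on every mistake while each
        # weight moves only when its feature is non-zero.
        decay = SPARSE_INTERCEPT_DECAY if is_sparse else 1.0
        intercept = intercept + eta * decay * y
    return w, intercept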

Bonus example:

import numpy as np
import scipy.sparse as sp
from sklearn.linear_model import Perceptron

# Random binary classification data, once dense and once in CSR format.
np.random.seed(42)
n_samples, n_features = 1000, 10
X_dense = np.random.randn(n_samples, n_features)
X_csr = sp.csr_matrix(X_dense)
y = np.random.randint(2, size=n_samples)

# Fit the same model on both formats and compare the learned weights.
# (The deprecated n_iter parameter is omitted; recent scikit-learn
# versions no longer accept it.)
for X in [X_dense, X_csr]:
    model = Perceptron(penalty='l2', alpha=0.0001, fit_intercept=True,
                       max_iter=10000, tol=1e-8, shuffle=True, verbose=0,
                       eta0=1.0, n_jobs=1, random_state=0, class_weight=None,
                       warm_start=False)
    model.fit(X, y)
    print(model.coef_)

You can check that the coefficients are different. Changing fit_intercept to False makes the coefficients equal, yet the fit may be poorer.
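
To illustrate that last point, reusing X_dense, X_csr and y from the bonus example:

# With fit_intercept=False the intercept decay never applies, so dense and
# sparse training take exactly the same steps.
coefs = []
for X in [X_dense, X_csr]:
    model = Perceptron(fit_intercept=False, random_state=0)
    model.fit(X, y)
    coefs.append(model.coef_.copy())

print(np.allclose(coefs[0], coefs[1]))  # expected: True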

  • So, ideally speaking, if I change the decay to 1, they should perform the same, but I might face the problems mentioned (intercept oscillation). Right? @TomDLT – TheRajVJain Oct 20 '17 at 17:31
  • Right. But if your data is not sparse (even though the object is a CSR matrix), you won't have problems. Conversely, if your data is very sparse but stored in a dense object, you might face intercept oscillation problems. – TomDLT Oct 20 '17 at 18:02