
I am running the stochastic gradient descent regressor (SGDRegressor) from sklearn (docs).

Here are the parameters I used:

    {"loss": "huber",
     "learning_rate": "adaptive",
     "penalty": "l1",
     "alpha": 0.001,
     "l1_ratio": 0.75,
     "early_stopping": True,
     "max_iter": 2000,
     "n_iter_no_change": 15,
     "validation_fraction": 0.1,
     "warm_start": True,
     "tol": 0.0001,
     "random_state": 1}

Unfortunately, training does not reach 2000 epochs. I understand that I configured it so that if the validation score does not improve for 15 consecutive epochs, training terminates early. How can I get better results with the stochastic gradient regressor? The final validation scores are not very impressive.

   -- Epoch 38
    Norm: 38.43, NNZs: 218, Bias: 6.923232, T: 2062792, Avg. loss: 0.119096
  • Please update your post to include the exact call to `SGDRegressor`. – desertnaut Nov 26 '20 at 11:43
  • If the question is how to cancel SGD from any early stopping then remove n_iter_no_change and set tol = None. See if you reach a better local min – Latent Nov 26 '20 at 11:46

1 Answer


From the parameters shown, it is apparent that you call SGDRegressor with early_stopping=True. You should change it to early_stopping=False (or omit the argument altogether, since its default value is indeed False - see the docs).
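
A minimal sketch of that change, assuming the rest of the configuration from the question stays the same. Setting tol=None in addition (as suggested in the comments) also disables the training-loss convergence check, so the model runs for the full max_iter epochs:

    from sklearn.linear_model import SGDRegressor

    # Same configuration as in the question, but with early stopping disabled.
    # tol=None also switches off the training-loss convergence check, so the
    # model trains for the full max_iter=2000 epochs.
    reg = SGDRegressor(
        loss="huber",
        learning_rate="adaptive",
        penalty="l1",
        alpha=0.001,
        l1_ratio=0.75,
        early_stopping=False,  # was True
        max_iter=2000,
        tol=None,
        warm_start=True,
        random_state=1,
    )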
