The 'merror' and 'logloss' results from XGBoost multiclass classification differ by about 0.01 to 0.02 on each run, even with identical parameters. Is this normal?
I want 'merror' and 'logloss' to be identical across runs with the same parameters, so I can evaluate the model precisely (e.g. when I add a new feature).
As it stands, if I add a new feature I can't tell whether it actually improved the model's accuracy, because 'merror' and 'logloss' change from run to run even when neither the model nor the data fed into it has changed since the last run.
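For reference, here is a minimal sketch of the kind of training call I am running; the data loading and the specific parameter values below are placeholders, not my actual setup:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Placeholder data: 1000 samples, 20 features, 5 classes.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 5, size=1000)

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42)

dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_valid, label=y_valid)

params = {
    "objective": "multi:softprob",
    "num_class": 5,
    "eval_metric": ["merror", "mlogloss"],  # mlogloss = multiclass logloss
    "max_depth": 6,
    "eta": 0.1,
    "subsample": 0.8,          # row subsampling introduces randomness
    "colsample_bytree": 0.8,   # column subsampling introduces randomness
    "seed": 42,                # booster's random seed
}

booster = xgb.train(
    params, dtrain,
    num_boost_round=200,
    evals=[(dvalid, "valid")],
    verbose_eval=50,
)
```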
Should I try to fix this, and if so, how?