Okay, there are three things going on here:
1) there is a loss function used during training to tune your model's parameters
2) there is a scoring function which is used to judge the quality of your fitted model
3) there is hyper-parameter tuning, which uses a scoring function to optimize your hyperparameters.
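To make the split between (1) and (2) concrete, here's a minimal sketch (LogisticRegression is just an example estimator; the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss

X, y = make_classification(n_samples=200, random_state=0)

# (1) fitting minimizes a loss internally -- log loss for logistic regression
clf = LogisticRegression(max_iter=1000).fit(X, y)

# (2) a scoring function judges the fitted model from the outside;
# it need not be the same function the optimizer minimized
print("accuracy:", accuracy_score(y, clf.predict(X)))
print("log loss:", log_loss(y, clf.predict_proba(X)))
```

Note the scoring function is entirely separate from the loss: you can score the same fitted model with accuracy, recall, log loss, or anything else.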
So... if you are trying to tune hyperparameters, then you are on the right track in defining a "loss fxn" for that purpose. If, however, you are trying to tune your whole model to perform well on, let's say, a recall test, then you need recall to be the metric driving the training and selection process. It's tricky, but you can do it...
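For the hyperparameter-tuning case, scikit-learn's search utilities already accept a `scoring` argument, so you can make recall the target directly. A minimal sketch on a synthetic imbalanced dataset (the parameter grid here is just an illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# toy imbalanced data, purely for illustration
X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)

# scoring="recall" makes recall -- not the default accuracy --
# the metric the grid search optimizes
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [2, 5, None]},
    scoring="recall",
    cv=3,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

`scoring` also takes any callable built with `sklearn.metrics.make_scorer`, so the same mechanism works for precision, F-beta, or a custom metric.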
1) Open up your classifier. Let's use an RFC for example: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
2) click [source]
3) See how it's inheriting from ForestClassifier? Right there in the class definition. Click that word to jump to its parent definition.
4) See how this new object is inheriting from ClassifierMixin? Click that.
5) See how the bottom of that ClassifierMixin class says this?
def score(self, X, y, sample_weight=None):
    from .metrics import accuracy_score
    return accuracy_score(y, self.predict(X), sample_weight=sample_weight)
That's your model being scored on accuracy. To be precise: this is not the training loss, it's the default score method every classifier inherits, and it's what cross-validation and the search utilities fall back on when you don't tell them otherwise. This is the point where you need to inject if you want your model judged as a "recall model" or a "precision model" or whatever model. The accuracy default is baked into sklearn. Some day, a better man than I will make this a parameter which models accept; in the meantime, rather than hacking the files in your sklearn installation, the clean injection points are a subclass that overrides score, or a custom scorer passed wherever a scoring argument is accepted.
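The subclass route can be sketched like this (a minimal example; `RecallForest` is a made-up name, and the data is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

class RecallForest(RandomForestClassifier):
    # Override the ClassifierMixin.score shown above:
    # report recall instead of accuracy
    def score(self, X, y, sample_weight=None):
        return recall_score(y, self.predict(X), sample_weight=sample_weight)

X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
clf = RecallForest(random_state=0).fit(X, y)
print(clf.score(X, y))  # now reports recall, not accuracy
```

Anything that calls the estimator's default score method, including the model-selection utilities when no `scoring` argument is given, will now see recall, without touching the installed library.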
Best of luck!