I'm using scikit-learn in Python and I want to use BayesianRidge regression to predict a continuous-valued target from my continuous inputs. My problem is that I also have a series of binary/categorical inputs, and I don't know whether I should still use the BayesianRidge regressor.
If I supply the values as 0 or 1 (or -1, 0, 1) to the BayesianRidge regression, will I get good results? Or is there a better way to do this?
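Here's roughly what I have in mind, with made-up numbers and the binary feature just stacked in as an extra 0/1 column next to the continuous ones:

    import numpy as np
    from sklearn.linear_model import BayesianRidge

    # Toy data: two continuous features plus one binary feature encoded as 0/1.
    # (Values are invented purely to show the input shape I'm thinking of.)
    X = np.array([
        [2.5, 0.3, 1],
        [1.1, 0.8, 0],
        [3.7, 0.1, 1],
        [0.9, 0.5, 0],
    ])
    y = np.array([10.2, 4.5, 12.8, 3.9])

    reg = BayesianRidge()
    reg.fit(X, y)
    print(reg.predict([[2.0, 0.4, 1]]))  # predict for a new sample

Is it reasonable to mix the feature types like this, or does the categorical column need some other treatment first?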
I'm still new to machine learning, and I have to admit I find the scikit-learn documentation overwhelming.
I saw the question below regarding a Naive Bayes classifier; is there a similar approach for Bayesian Ridge regression?
Mixing categorial and continuous data in Naive Bayes classifier using scikit-learn