In a machine learning project, I have training data about a company's clients that includes 20 input features and a label representing each client's response to a marketing campaign in the form of a Yes/No answer:
c1 => {f1_1,f2_1,...,f20_1} {Yes}
c2 => {f1_2,f2_2,...,f20_2} {No}
The requirement is to predict each client's 'acceptance probability' for the campaign.
So, the training data has a binary classification label, while the requirement is a regression prediction.
I was able to compute the correlation of each feature with the classification label.
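For concreteness, this is roughly how I computed those correlations (a minimal sketch; the DataFrame `df`, the column names `f1`/`f2`/`label`, and the toy values are placeholders for my real data):

```python
import pandas as pd

# Placeholder data: in the real project there are columns f1..f20 plus the label.
df = pd.DataFrame({
    "f1": [0.2, 0.5, 0.1, 0.9],
    "f2": [1.0, 0.3, 0.8, 0.4],
    # ... f3..f20 in the real data
    "label": ["Yes", "No", "Yes", "No"],
})

# Encode Yes/No as 1/0 so a correlation with the binary label can be computed.
y = (df["label"] == "Yes").astype(int)

# Correlation of each feature column with the encoded label
# (effectively a point-biserial correlation, since the label is binary).
feature_cols = [c for c in df.columns if c != "label"]
correlations = df[feature_cols].corrwith(y)
print(correlations)
```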
Does it make sense to derive importance weights for the features from the strength of their correlation with the classification label, apply those weights to each client's feature values to produce something like a score per client, and then use those scores as the regression label?
c1_score = w1*f1_1 + w2*f2_1 + ... + w20*f20_1
c2_score = w1*f1_2 + w2*f2_2 + ... + w20*f20_2
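In code, the idea I have in mind looks roughly like this sketch (it reuses `df`, `feature_cols`, and `correlations` from the snippet above; the normalization step is just an assumption I added so the weights sum to 1):

```python
# Proposed scoring: use the absolute correlations as importance weights
# and take a weighted sum of each client's feature values.
weights = correlations.abs()
weights = weights / weights.sum()  # assumed normalization, not essential to the idea

# c_score = w1*f1 + w2*f2 + ... + w20*f20, computed for every client at once
scores = df[feature_cols].mul(weights, axis=1).sum(axis=1)
print(scores)
```

These `scores` are what I would then treat as the regression target.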
If this does not make sense, what would you suggest instead?