I have a DataFrame which can be simplified to this:
import pandas as pd
df = pd.DataFrame([
    {'title': 'batman', 'text': 'man bat man bat', 'url': 'batman.com', 'label': 1},
    {'title': 'spiderman', 'text': 'spiderman man spider', 'url': 'spiderman.com', 'label': 1},
    {'title': 'doctor evil', 'text': 'a super evil doctor', 'url': 'evilempyre.com', 'label': 0},
])
I want to try different feature extraction methods: TF-IDF, word2vec, CountVectorizer with different n-gram settings, etc. And I want to try them in different combinations: one feature set would have the 'text' column transformed with TF-IDF and 'url' with CountVectorizer, a second would have 'text' converted by word2vec and 'url' by TF-IDF, and so on. In the end, of course, I want to compare the different preprocessing strategies and choose the best one.
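To make it concrete, here is a minimal sketch of one such combination using scikit-learn's ColumnTransformer (this assumes sklearn >= 0.20; the transformer names and the choice of character n-grams for 'url' are just illustrative):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

df = pd.DataFrame([
    {'title': 'batman', 'text': 'man bat man bat', 'url': 'batman.com', 'label': 1},
    {'title': 'spiderman', 'text': 'spiderman man spider', 'url': 'spiderman.com', 'label': 1},
    {'title': 'doctor evil', 'text': 'a super evil doctor', 'url': 'evilempyre.com', 'label': 0},
])

# One candidate feature set: TF-IDF on 'text', character n-gram counts on 'url'.
# Passing a plain column name (not a list) hands the vectorizer a 1-D Series,
# which is what text vectorizers expect.
features = ColumnTransformer([
    ('text_tfidf', TfidfVectorizer(), 'text'),
    ('url_counts', CountVectorizer(analyzer='char', ngram_range=(2, 3)), 'url'),
])

clf = Pipeline([('features', features), ('model', LogisticRegression())])
clf.fit(df[['text', 'url']], df['label'])
print(clf.predict(df[['text', 'url']]))
```

Swapping 'text_tfidf' for a different vectorizer would give the other combinations; word2vec would need to be wrapped in a custom transformer since sklearn has no built-in one.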
And here are my questions:
Is there a way to do this using standard sklearn tools like Pipeline?
Does my idea make sense at all? Maybe there are good ways to handle text data spread across many DataFrame columns that I am missing?
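For the comparison part, what I imagine is something like the following sketch: since each named sub-transformer of a ColumnTransformer is itself a settable parameter, whole vectorizers could be swapped via a grid search (the names 'text_vec' / 'url_vec' and the candidate vectorizers are just placeholders; the toy three-row frame above is too small to actually cross-validate):

```python
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ('features', ColumnTransformer([
        ('text_vec', TfidfVectorizer(), 'text'),
        ('url_vec', CountVectorizer(analyzer='char', ngram_range=(2, 3)), 'url'),
    ])),
    ('model', LogisticRegression()),
])

# Each named sub-transformer is a parameter of the pipeline, so entire
# vectorizers can be candidates in the grid.
param_grid = {
    'features__text_vec': [TfidfVectorizer(), CountVectorizer(ngram_range=(1, 2))],
    'features__url_vec': [TfidfVectorizer(analyzer='char'), CountVectorizer(analyzer='char')],
}
search = GridSearchCV(pipe, param_grid, cv=3)
# search.fit(X, y)  # would need a real dataset with enough rows per class
```

Is this the intended way to do it, or is there a more idiomatic approach?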
Many thanks!