You may be able to improve matters by building your list of keywords as a regular expression.
This may allow them to be tested in parallel, but whether it helps will very much depend on what the keywords are (e.g. some work may be reused when testing for "hello" and "hell", rather than scanning each phrase from the start for every keyword).
You could do this by executing:
import re

# Escape each keyword, then join them into a single alternation: kw1|kw2|...
keyword_re = re.compile("|".join(map(re.escape, keywords)))
Then:
>>> bool(keyword_re.search('hello, world'))
True
>>> bool(keyword_re.search('hi, earth'))
False
(search() actually returns a match object when a match is found, and None otherwise - this can be useful if you need to know which keyword matched.)
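For example, the matched text tells you which keyword was found (assuming here that "hello" appears in the keyword list before any prefix of it, such as "hell"):
>>> match = keyword_re.search('hello, world')
>>> match.group(0)
'hello'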
However, how much (if anything) this gains you will depend on the keywords. If you only have one or two, keep your current approach. If you have a large list, it may be worth trying it and profiling to see which performs better.
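A rough timing sketch with the timeit module might look like this (a sketch only - the sample data is made up, and loop_version is just my guess at what your current approach looks like):
import re
import timeit

keywords = ['hello', 'hell', 'world']            # made-up sample data
phrases = ['hi, earth', 'good morning'] * 500    # made-up sample data
keyword_re = re.compile("|".join(map(re.escape, keywords)))

def loop_version():
    # Plain nested substring search: one 'in' test per keyword per phrase.
    return [any(kw in phrase for kw in keywords) for phrase in phrases]

def regex_version():
    # One scan of the compiled alternation per phrase.
    return [bool(keyword_re.search(phrase)) for phrase in phrases]

print("loop :", timeit.timeit(loop_version, number=100))
print("regex:", timeit.timeit(regex_version, number=100))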
[Edit]
For reference, here's how the approaches do for your example:
                 good1   good2   good3    bad1     bad2
original     :   0.206   0.233   0.229   0.390   63.879
gnud (join)  :   0.257   0.347   4.600   0.281    6.706
regex        :   0.766   1.018   0.397   0.764  124.351
regex (join) :   0.345   0.337   3.305   0.481   48.666
Obviously, for this case your approach performs far better than the regex one. Whether that will always hold depends a lot on the number and complexity of the keywords, and on the input data being checked. For large numbers of keywords, and lengthy lists or rarely matching phrases, regexes may work better - but do get timing information, and perhaps try even simpler optimisations first (like moving the most common words to the front of your keyword list, as sketched below). Sometimes the simplest approach really is the best.
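That reordering can be as simple as counting hits over a representative sample and sorting (a sketch only - the keyword list and sample data are placeholders):
from collections import Counter

keywords = ['hello', 'hell', 'world']                         # placeholder data
sample_phrases = ['hello there', 'hello world', 'big world']  # placeholder data

# Count how often each keyword actually matches in the sample.
hits = Counter()
for phrase in sample_phrases:
    for kw in keywords:
        if kw in phrase:
            hits[kw] += 1

# Put the most frequently matching keywords first, so the linear scan
# usually finds a hit early.
keywords.sort(key=lambda kw: hits[kw], reverse=True)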
[Edit2] Updated the table with gnud's solution, and a similar approach before applying the regexes. I also added 2 new tests:
good_data3 = good_data2 * 500 # 1000 items, the first of which matches.
bad_data2 = bad_data * 500 # 1000 items, none of which matches.
These show up the various strengths and weaknesses of each approach. Joining does do worse when a match would be found immediately (since joining the list is an up-front cost that is always paid, and an immediate match is the best possible case for the linear search method); however, for non-matching lists it performs better - much better when there are a large number of items in the list.
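For reference, my reading of the join idea is roughly the following (assuming no keyword can span the boundary between two joined phrases, and that a single yes/no answer for the whole list is enough - keywords and keyword_re as defined above):
phrases = ['hi, earth', 'good morning', 'hello, world']   # sample input list

joined = "\n".join(phrases)   # the up-front cost paid by both join variants

# gnud (join): one substring scan per keyword over the combined text.
found = any(kw in joined for kw in keywords)

# regex (join): a single pass of the compiled alternation.
found_re = bool(keyword_re.search(joined))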