Search engines (or similar web services) use flat-file and NoSQL databases. The structure of an inverted index is simpler than a many-to-many relationship, but it should be more efficient to handle it with the latter. There would be two tables: one for a few billion webpages and one for millions of keywords. I have tested a table of 50 million rows; the speed of MySQL was comparable to that of BerkeleyDB.
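To make the idea concrete, here is a minimal sketch of an inverted index modeled as a many-to-many relationship, using SQLite (via Python's `sqlite3`) as a stand-in for MySQL. The table and column names (`pages`, `keywords`, `postings`) are hypothetical illustrations, not an established schema; a real deployment at billions of rows would also need careful index and storage-engine choices.

```python
import sqlite3

# Hypothetical schema: pages and keywords, linked by a junction table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE pages    (page_id INTEGER PRIMARY KEY, url TEXT);
CREATE TABLE keywords (keyword_id INTEGER PRIMARY KEY, word TEXT UNIQUE);
-- junction table: which keywords appear on which pages
CREATE TABLE postings (
    keyword_id INTEGER REFERENCES keywords(keyword_id),
    page_id    INTEGER REFERENCES pages(page_id),
    PRIMARY KEY (keyword_id, page_id)
);
""")
cur.executemany("INSERT INTO pages VALUES (?, ?)",
                [(1, "http://a.example"), (2, "http://b.example")])
cur.executemany("INSERT INTO keywords VALUES (?, ?)",
                [(10, "database"), (11, "index")])
cur.executemany("INSERT INTO postings VALUES (?, ?)",
                [(10, 1), (10, 2), (11, 2)])

# The read-intensive lookup: all pages containing a given keyword.
rows = cur.execute("""
    SELECT p.url
    FROM postings po
    JOIN keywords k ON k.keyword_id = po.keyword_id
    JOIN pages    p ON p.page_id    = po.page_id
    WHERE k.word = ?
    ORDER BY p.page_id
""", ("database",)).fetchall()
print([r[0] for r in rows])  # → ['http://a.example', 'http://b.example']
```

The composite primary key on `postings` doubles as the index that makes the keyword-to-pages lookup fast, which is exactly the access pattern the question is about.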
I think the problem with a large MySQL database appears when dealing with something like ALTER TABLE (which is not the case here). This workload is read-intensive, which is something MySQL handles quite well. When reading a row with SELECT, I did not find a significant difference between a table with a few rows and one with a few million rows; is it different with billions of rows?
NOTE: I do not mean Google or Bing (or advanced features like full-text search); I am discussing the concept.