Why would a mere 1.1 million rows cause problems? Most (if not all) RDBMSes can handle many, many more (billions, even), as long as storage etc. suffices of course, and as long as the filesystem can handle files of considerable size (FAT32, for example, only supports up to 4GB per file).
Also, you need to be more specific about what you mean by "before I begin seeing problems (if any)". What kind of problems? You might already have problems if you're not using the correct indices, for example, which can slow queries down. That might be a problem, but can, in some cases, also be fine.
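To see whether a slow query is actually using an index, `EXPLAIN` is the usual first step. A minimal sketch (table and column names here are hypothetical; substitute your own):

```sql
-- Ask MySQL how it would execute the query:
EXPLAIN SELECT * FROM mytable WHERE some_column = 'value';

-- If the "type" column shows ALL (a full table scan) and "key" is NULL,
-- adding an index on the filtered column may help:
CREATE INDEX idx_some_column ON mytable (some_column);
```

At 1.1 million rows the difference between a full scan and an index lookup is already very noticeable, so this is worth checking before worrying about row-count limits.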
Another issue that might actually be a problem is something like an auto-increment primary key field of type (unsigned) `int`, which can overflow at around 2.1 billion rows (signed) or 4.2 billion rows (unsigned). Since you're at 1.1 million rows currently, that is way outside of what to worry about now. (The exact limits are, of course, 2^31&minus;1 and 2^32&minus;1 for signed and unsigned `int` respectively.) If you ever get there, you'll have to think about using a type like `bigint`, or others (maybe even `(var)char` etc.), for your PK.
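If you want to keep an eye on this, you can check how far along the auto-increment counter is, and widen the column long before it becomes urgent. A sketch, assuming a hypothetical table `mytable` with PK column `id` in schema `mydb`:

```sql
-- Where the counter currently stands:
SELECT AUTO_INCREMENT
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';

-- If it ever approaches the int limit, widen the PK to bigint
-- (note: this rebuilds the table, so it can lock/take a while on big tables):
ALTER TABLE mytable
  MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
```

Unsigned `bigint` tops out at 2^64&minus;1, which is effectively unreachable for row counts.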
The only thing interesting here, for MySQL specifically, could be: are you using InnoDB or MyISAM? I don't know the exact details since I don't usually work with MySQL, but I seem to remember that MyISAM can cause trouble (probably in old(er) versions, like <5.0 or something). Correct me if I'm wrong. Edit: read up here. MyISAM apparently supports a max of 2^32 rows, unless compiled with specific options.
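Checking which engine your tables use is straightforward; a sketch (the schema name `mydb` is a placeholder):

```sql
-- List engine and approximate row count per table:
SELECT TABLE_NAME, ENGINE, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';
```

Note that `TABLE_ROWS` is only an estimate for InnoDB tables, but it's good enough to see which tables are large and which engine they're on.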