
I am designing a database which will store JSON strings of various sizes. I'm considering using different tables (each with two columns, 'id' and 'data') to store different string sizes (TINYTEXT through LONGTEXT). In this case, each table would be searched, starting with the table containing the smallest string sizes.

I'm also considering using a single table with a single string size and using multiple rows to store large JSON strings.

...or I could just create a table with a large VARCHAR size and save myself some development time.
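
For reference, a minimal sketch of that last option (one table, one large string column), assuming InnoDB and UTF-8; the table name and column sizes below are purely illustrative:

```sql
-- Single table holding every JSON string, regardless of size.
-- Names and sizes are illustrative, not a recommendation.
CREATE TABLE json_store (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    data VARCHAR(5000) NOT NULL,  -- or MEDIUMTEXT if strings can outgrow a practical VARCHAR limit
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
```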

There are two points that I am designing around:

In some cases, MySQL stores small pieces of data "in row", which helps performance. What does this mean and how can I take advantage of it?

In some cases, MySQL processes a VARCHAR as if it were its largest possible size. When does this happen and how can I avoid it?

Hoytman

2 Answers

1

From the database's point of view there is no particular "good" length for VARCHAR. However, try to keep the maximum row size under 8 KB, including non-clustered indexes. That way you avoid MySQL storing data out of row, which hampers performance.
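
As a rough illustration of that budget (a sketch only, using a hypothetical table name and assuming MySQL's 3-byte utf8 character set), a column declared small enough keeps the worst-case row under the suggested limit, and the actual average row length can be checked after loading data:

```sql
-- 2000 characters * 3 bytes (utf8) + length/overhead bytes + 4-byte id
-- keeps the worst-case row under the 8 KB suggested above.
CREATE TABLE json_store_small (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    data VARCHAR(2000) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- Inspect real row sizes once data has been inserted.
SHOW TABLE STATUS LIKE 'json_store_small';  -- see the Avg_row_length column
```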

Stoleg
  • If my database is set to use UTF-8, that would be around 8000 ASCII characters, or around 1200 characters in the worst-case scenario, right? – Hoytman Jun 25 '13 at 22:28
  • To store `VARCHAR` data the server uses `number of characters + 1 byte`, and for `NVARCHAR` it uses `number of characters * 2 + 2 bytes`. Check the dedicated article on Dev.MySQL – Stoleg Jun 25 '13 at 22:30
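
To see how characters and bytes diverge under UTF-8 (the point these comments are weighing), MySQL's `CHAR_LENGTH()` and `LENGTH()` can be compared directly; a small sketch, assuming a UTF-8 connection character set:

```sql
-- CHAR_LENGTH() counts characters, LENGTH() counts bytes,
-- so the two differ as soon as multi-byte UTF-8 characters appear.
SELECT
    CHAR_LENGTH('abc')   AS ascii_chars,  -- 3
    LENGTH('abc')        AS ascii_bytes,  -- 3
    CHAR_LENGTH('héllo') AS utf8_chars,   -- 5
    LENGTH('héllo')      AS utf8_bytes;   -- 6 ('é' is 2 bytes in UTF-8)
```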
0

Use 255.

Why historically do people use 255 not 256 for database field magnitudes?

Although, as a side note, if you are working with PHP and trying to insert strings in excess of 1000 characters, you will need to truncate to your maximum column size on the PHP side before inserting, or you will hit an error.
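
A sketch of what that could look like end to end, using a hypothetical table with the suggested `VARCHAR(255)` column; the `LEFT()` call truncates on the SQL side at insert time, which is an alternative to the PHP-side truncation described above (strict SQL modes would otherwise reject an over-long value):

```sql
-- Hypothetical table using the suggested 255-character column.
CREATE TABLE json_store_255 (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
    data VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- LEFT() counts characters, matching VARCHAR's character-based limit,
-- so the value always fits the column.
INSERT INTO json_store_255 (data)
VALUES (LEFT('{"example": "a JSON string that might be too long"}', 255));
```

Keep in mind that blindly truncating a JSON string generally leaves it unparseable, so a guard like this is only a last-resort safety net.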

BTC