If I were you, I would simply add the new columns for the data set.
Using JSON inside a MySQL field isn't bad; it's saved me a lot of grief. But it does introduce a good bit of overhead and limits what functionality you can use from the database engine. Constantly manipulating the SQL schema is not the best thing to do, but neither is decoding JSON objects when you don't have to.
If the data schema is fairly static, as in your example where you store a user's gender, birthday, etc., it's best to use columns. Then you can manipulate the data quickly and easily, directly with SQL... sort, filter, create indexes for faster lookups, and so on. Since the schema is static, you gain nothing from JSON except maybe the few minutes it would take to create the columns, and over the life of the application you lose far more time in machine cycles.
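To illustrate, here's a minimal sketch of the column-based approach. The table, column names, and connection details are made up, and it assumes Python with the MySQL Connector/Python driver and a reachable server:

```python
import mysql.connector  # assumes the mysql-connector-python package

# Hypothetical connection details; adjust for your setup.
db = mysql.connector.connect(host="localhost", user="app",
                             password="app_pw", database="app")
cur = db.cursor()

# Static attributes live in real columns, so the engine can index,
# sort, and filter them natively, with no decoding step anywhere.
cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id       INT AUTO_INCREMENT PRIMARY KEY,
        gender   CHAR(1),
        birthday DATE,
        INDEX idx_birthday (birthday)
    )
""")

# Index-backed filter and sort, straight from SQL.
cur.execute("SELECT id, gender FROM users WHERE birthday >= %s "
            "ORDER BY birthday", ("1990-01-01",))
print(cur.fetchall())
```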
Where I do use JSON in MySQL fields is where the data schema is very fluid. As a test engineer, this is pretty much the norm. For example, in one of my current projects, the list of target metrics (which are stored in MySQL) changes very regularly, depending on what issues are being addressed or what performance characteristics are being tweaked. It's a regular event for the development engineers to ask for new metrics, and they of course expect it all to be neatly displayed and the changes made quickly.

So instead of futzing with the SQL schema on a daily basis, I store the static fields (test type, date, product version, etc.) as columns, but the ever-fluid test result data as a JSON object. That way I can still query the data using SQL statements based on test type, version, date, and so on, but I never have to touch the table schema when integrating new metrics. To display the actual test data, I simply iterate the results, decode each JSON object into an array, and go from there. As this project expands, I'll eventually implement memcached to cache everything.
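Here's a sketch of that hybrid layout. The `test_results` table, its columns, and the metric names are all made up for illustration; only the pattern (static columns plus a JSON text blob) reflects the approach described above:

```python
import json
import mysql.connector  # assumes the mysql-connector-python package

db = mysql.connector.connect(host="localhost", user="test",
                             password="test_pw", database="results")
cur = db.cursor()

# Static fields are real columns (queryable, indexable); the fluid
# metric set is one JSON text blob that never forces a schema change.
cur.execute("""
    CREATE TABLE IF NOT EXISTS test_results (
        id              INT AUTO_INCREMENT PRIMARY KEY,
        test_type       VARCHAR(64),
        product_version VARCHAR(32),
        test_date       DATE,
        metrics         MEDIUMTEXT,
        INDEX idx_type_date (test_type, test_date)
    )
""")

# New metrics just appear in the dict; the table never changes.
metrics = {"throughput_mbps": 940.2, "latency_ms": 1.8, "retries": 0}
cur.execute(
    "INSERT INTO test_results (test_type, product_version, test_date, metrics) "
    "VALUES (%s, %s, %s, %s)",
    ("smoke", "2.4.1", "2013-06-01", json.dumps(metrics)),
)
db.commit()

# Filter on the static columns with plain SQL...
cur.execute(
    "SELECT test_date, metrics FROM test_results "
    "WHERE test_type = %s AND test_date >= %s",
    ("smoke", "2013-01-01"),
)
# ...and decode the JSON blob only at display time.
for test_date, blob in cur.fetchall():
    print(test_date, json.loads(blob))
```

The key design point is that everything you'll ever want to filter or sort on goes in a column; only the data you always read as a unit goes in the blob.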
This also has the side effect of bundling the 100+ test metrics into one text blob, the whole of which I zlib-compress down to about 10% of its original size. That adds up to quite a significant storage savings, as we're already at seven figures of rows.
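The compression step is just a wrapper around the serialized blob on the way in and out. A minimal sketch (the synthetic metrics and the printed ratio are illustrative; actual ratios depend entirely on your data):

```python
import json
import zlib

# Synthetic stand-in for a 100+ metric result set.
metrics = {"metric_%03d" % i: i * 0.5 for i in range(120)}

# Compress the serialized JSON before writing it to a BLOB column...
raw = json.dumps(metrics).encode("utf-8")
packed = zlib.compress(raw, 9)
print("%d -> %d bytes (%.0f%% of original)"
      % (len(raw), len(packed), 100.0 * len(packed) / len(raw)))

# ...and reverse it on the way out.
restored = json.loads(zlib.decompress(packed).decode("utf-8"))
assert restored == metrics
```

Note that if you do this, the metrics column needs to be a binary type (e.g., MEDIUMBLOB rather than MEDIUMTEXT), since the compressed bytes are no longer valid text.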