153

I thought this was a n00b thing to do, so I've never done it. Then I saw that FriendFeed did this, and it actually made their DB scale better and decreased latency. I'm curious whether I should do this and, if so, what's the right way to do it?

Basically, what's a good place to learn how to store everything in MySQL as a CouchDB sort of DB? Storing everything as JSON seems like it'd be easier and quicker (both to build and in latency).

Also, is it easy to edit, delete, etc., things stored as JSON in the DB?

Jonathan Hall
  • 75,165
  • 16
  • 143
  • 189
Oscar Godson
  • 31,662
  • 41
  • 121
  • 201
  • For reference, I believe this is FriendFeed's discussion on using JSON in MySQL: http://backchannel.org/blog/friendfeed-schemaless-mysql – dimo414 Nov 23 '13 at 22:47
  • 13
    MySQL 5.7 now supports a native JSON datastore. – eecue Jan 25 '16 at 06:29

16 Answers

134

Everybody commenting seems to be coming at this from the wrong angle: it is fine to store JSON via PHP in a relational DB, and it will in fact be faster to load and display complex data like this. However, you will have design considerations such as searching, indexing, etc.

The best way of doing this is to use hybrid data. For example, if you need to search based upon a datetime, MySQL (performance-tuned) is going to be a lot faster than PHP, and for something like searching the distance of venues MySQL should also be a lot faster (notice searching, not accessing). Data you do not need to search on can then be stored in JSON, a BLOB, or any other format you deem necessary.

Data you only need to access (not search) is very easily stored as JSON, for example in a basic per-case invoice system. Such records do not benefit very much at all from an RDBMS, and could be stored in JSON just by json_encoding($_POST['entries']), if you have the correct HTML form structure.
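To make the hybrid idea concrete, here is a minimal sketch (using Python with SQLite as a stand-in for MySQL; the `invoices` table and its fields are invented for illustration): fields you search on get real, indexable columns, while the rest of the record rides along as a JSON blob.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")

# Searchable fields (created_at, venue_id) get real columns and an index;
# everything else is packed into a JSON text blob.
conn.execute("""
    CREATE TABLE invoices (
        id         INTEGER PRIMARY KEY,
        created_at TEXT NOT NULL,
        venue_id   INTEGER NOT NULL,
        payload    TEXT NOT NULL  -- JSON blob of non-searched data
    )
""")
conn.execute("CREATE INDEX idx_invoices_created ON invoices (created_at)")

entries = {"lines": [{"desc": "Widget", "qty": 2, "price": 9.99}], "notes": "net 30"}
conn.execute(
    "INSERT INTO invoices (created_at, venue_id, payload) VALUES (?, ?, ?)",
    ("2010-08-25", 7, json.dumps(entries)),
)

# The indexed columns drive the search; the JSON is only decoded on access.
row = conn.execute(
    "SELECT payload FROM invoices WHERE created_at >= '2010-01-01' AND venue_id = 7"
).fetchone()
invoice = json.loads(row[0])
print(invoice["lines"][0]["desc"])  # Widget
```

The same pattern works identically against MySQL with a TEXT (or, in 5.7+, JSON) column.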

I am glad you are happy using MongoDB, and I hope that it continues to serve you well. But don't think that MySQL is always going to be off your radar: as your app increases in complexity, you may well end up needing an RDBMS for some functionality and features (even if it is just for retiring archived data or business reporting).

BenMorel
  • 34,448
  • 50
  • 182
  • 322
  • 8
    -1 for "it is fine to store JSON code via PHP in a relational DB" -- Storing JSON (which can represent an entire entity as non-atomic data) in a single field violates the relational model and prevents 1NF. Also, don't make sweeping claims about performance without metrics to back you up. – Sage Gerard Feb 23 '14 at 19:55
  • 110
    As mentioned it depends upon what you are storing, i.e. for an invoice do you really need to store each entry separately? NO, your comment comes across like you know so much but 1NF is not for every field or there would not be BLOB and text types... this is pure nonsense for a production system, you only need to optimize what you need to search from i.e. dates, keys and set up indexes on some data. I did not say store everything as JSON, I said store some data as JSON if it helps to solve your problem. – Lewis Richard Phillip Cowles Feb 26 '14 at 04:46
  • 2
    What you say is possible and convenient, but deviating from well-formed relations means doing more work to accommodate and maintain said deviations. Bastardizing the relational model needs better justification than what you provided. See Database Processing by Kroenke and Auer for more information on complications related to your answer, since they touch on misuse of attributes in relations. – Sage Gerard Mar 02 '14 at 06:31
  • 41
    You are assuming I have not consulted with a DBA on this issue and do not understand what you are saying. I am not kept in the loop on exactly what the implications are for this, both for small systems and further down the line, but what I am saying is that you are wrong and that the research you point to is old and not using our application strategy. It's just plain wrong, and the problems lie in poor implementations of this process. For example, I am not saying only have one model or do not use an RDBMS; I am saying be smart about where you use an RDBMS and where you don't need to. – Lewis Richard Phillip Cowles Mar 04 '14 at 05:25
  • 2
    This is a good answer. I did the same thing with a small program that I made, and added whatever indexes I needed to the database and then threw in the JSON. I could have used a full relational schema for what I was doing, but it worked well, and it would be even more to the purpose if I had a bunch of varying data that only needed to be stored, not processed, on the server. – Arlen Beiler Jun 18 '15 at 17:08
  • 9
    This was the best answer from my experience. You can use RDBMS but store JSON only in specific situations if you know what you are doing. In fact I've used it a lot for temporary cache storage of array data and some other situations where you achieve a faster result and less code. Many projects in reality have some mixed features. – Heroselohim Oct 05 '15 at 21:02
  • Here's a use case that I need to store JSON... temporary tables! Why make a bunch of calls when you can create a procedure to wrap all the data you need into a temporary table... Temporary tables have no need for normalization... The data will never be accessed after the client session. And the data *is* already normalized elsewhere. – Jeff Vdovjak Nov 11 '22 at 23:41
96

MySQL 5.7 now supports a native JSON data type, similar to MongoDB and other schemaless document data stores:

JSON support

Beginning with MySQL 5.7.8, MySQL supports a native JSON type. JSON values are not stored as strings, instead using an internal binary format that permits quick read access to document elements. JSON documents stored in JSON columns are automatically validated whenever they are inserted or updated, with an invalid document producing an error. JSON documents are normalized on creation, and can be compared using most comparison operators such as =, <, <=, >, >=, <>, !=, and <=>; for information about supported operators as well as precedence and other rules that MySQL follows when comparing JSON values, see Comparison and Ordering of JSON Values.

MySQL 5.7.8 also introduces a number of functions for working with JSON values. These functions include those listed here:

  1. Functions that create JSON values: JSON_ARRAY(), JSON_MERGE(), and JSON_OBJECT(). See Section 12.16.2, “Functions That Create JSON Values”.
  2. Functions that search JSON values: JSON_CONTAINS(), JSON_CONTAINS_PATH(), JSON_EXTRACT(), JSON_KEYS(), and JSON_SEARCH(). See Section 12.16.3, “Functions That Search JSON Values”.
  3. Functions that modify JSON values: JSON_APPEND(), JSON_ARRAY_APPEND(), JSON_ARRAY_INSERT(), JSON_INSERT(), JSON_QUOTE(), JSON_REMOVE(), JSON_REPLACE(), JSON_SET(), and JSON_UNQUOTE(). See Section 12.16.4, “Functions That Modify JSON Values”.
  4. Functions that provide information about JSON values: JSON_DEPTH(), JSON_LENGTH(), JSON_TYPE(), and JSON_VALID(). See Section 12.16.5, “Functions That Return JSON Value Attributes”.

In MySQL 5.7.9 and later, you can use column->path as shorthand for JSON_EXTRACT(column, path). This works as an alias for a column wherever a column identifier can occur in an SQL statement, including WHERE, ORDER BY, and GROUP BY clauses. This includes SELECT, UPDATE, DELETE, CREATE TABLE, and other SQL statements. The left hand side must be a JSON column identifier (and not an alias). The right hand side is a quoted JSON path expression which is evaluated against the JSON document returned as the column value.

See Section 12.16.3, “Functions That Search JSON Values”, for more information about -> and JSON_EXTRACT(). For information about JSON path support in MySQL 5.7, see Searching and Modifying JSON Values. See also Secondary Indexes and Virtual Generated Columns.

More info:

https://dev.mysql.com/doc/refman/5.7/en/json.html
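As a rough sketch of what querying a JSON column looks like in practice (demonstrated here in Python with SQLite's JSON1 json_extract(), which behaves like MySQL 5.7's JSON_EXTRACT() for simple paths; the users table and preferences column are invented):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, preferences TEXT)")
conn.execute(
    "INSERT INTO users VALUES (1, ?)",
    (json.dumps({"showSideBar": True, "fontSize": 12}),),
)

# MySQL 5.7 equivalent:
#   SELECT preferences->'$.fontSize' FROM users WHERE id = 1;
size = conn.execute(
    "SELECT json_extract(preferences, '$.fontSize') FROM users WHERE id = 1"
).fetchone()[0]
print(size)  # 12
```

In MySQL you would declare the column as `preferences JSON` to get validation and the optimized binary format on top of this.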

eecue
  • 1,023
  • 9
  • 6
66

CouchDB and MySQL are two very different beasts. JSON is the native way to store stuff in CouchDB. In MySQL, the best you could do is store JSON data as text in a single field. This would entirely defeat the purpose of storing it in an RDBMS and would greatly complicate every database transaction.

Don't.

Having said that, FriendFeed seemed to use an extremely custom schema on top of MySQL. It really depends on what exactly you want to store; there's hardly one definite answer on how to abuse a database system so it makes sense for you. Given that the article is very old and their main reason against Mongo and Couch was immaturity, I'd re-evaluate these two if MySQL doesn't cut it for you. They should have grown a lot by now.

hatef
  • 5,491
  • 30
  • 43
  • 46
deceze
  • 510,633
  • 85
  • 743
  • 889
  • 3
    Yeah, I'm looking at Mongo, and PHP has an extension for it; the actual syntax for the DB transactions seems easier than MySQL, and overall working with it seems easier than CouchDB. Thanks, I think I'm going to go with MongoDB :) – Oscar Godson Aug 25 '10 at 16:51
  • 90
    There certainly are valid cases for storing JSON blobs in a RDBMS. If you just want to store and retrieve opaque blobs of JSON data with no need for querying that data, which happens fairly often in some scenarios, you may very well do it. – markus Mar 08 '15 at 00:19
  • 10
    @markus I actually do this in one of my websites, specifically the fields of a complicated form that aren't ever searched for directly in MySQL queries, but used when viewing forms (either from a table view or directly via a link). It's probably not ideal but it certainly makes it a lot quicker to implement and removes the need for an exorbitant amount of tables or table columns. – Nick Bedford Sep 18 '15 at 06:04
  • 1
    If you want to have both RDBMS and document type storage for your application then this is a good approach so you don't want to have to manage multiple databases. – rjarmstrong Sep 01 '16 at 21:23
  • also compounded by the fact this very comment is already 7 years old – Felipe Valdes Oct 09 '17 at 06:07
  • Do not store full JSON in a MySQL database. Someone did that on a legacy project I have to maintain, where one feature has to go through each database row repeatedly just to narrow down a parent category from a JSON list of categories. This creates 150+ MySQL queries that could've easily been solved using relationships. – Ishmael Jun 14 '18 at 15:52
  • I store for querying all the downlines of a member in multi level marketing. I store all uplines in a column in a json format. and do a LIKE search to retrieve rows that contain a specific member as an upline. – Ramesh Pareek Aug 04 '18 at 15:23
  • 9
    This is pretty short-sighted advice, perhaps by someone who spends way too much time on Stack Exchange? When I have a record with 100 fields that I want to store and I only need to search on 3 or 4 of the fields, creating a table with 100 fields is nonsensical. You can store a customer record with his entire address book stored in 1 field in JSON, and just add the customer id, last name, company as other fields to use to search for records. It's a huge time saver. – Danial Nov 09 '18 at 01:37
  • @Danial The way to properly do this would be a 1:n relationship to another table. 100 columns are indeed madness. But as I said, you decide how much you want to stick to database normalization, or forego it knowingly. – deceze Nov 09 '18 at 01:45
  • You clearly don't understand this idea. JSON is a way to store a large amount of info as a "blob" that's easy to unravel, particularly in a language like php. The proper way to do something is totally dependent on what you need to do with the data. – Danial Nov 09 '18 at 02:45
  • @Danial That’s exactly what I’m saying. There’s a proper way to use RDBMSs, and there’s whatever way works for you. – deceze Nov 09 '18 at 03:35
  • Actually, there's the what you learned in school way and the way that works the best in the actual real world. it's the difference between education and experience. – Danial Dec 11 '18 at 01:41
  • @Danial I agree, sometimes you may also want to store objects containing a user-defined amount of objects as properties. Say you wanted to store a record to an object of products that could be several levels deep. Like a carton > row > roll> item in an inventory table of products. I think storing a table of units and looping through them for each product would require too many db calls and you could just call the product id and get its sub-units instead. – Eric McWinNEr Jul 05 '19 at 12:11
27

JSON characters are nothing special when it comes down to storage; chars such as

{, }, [, ], ', a-z, 0-9 ... are really nothing special and can be stored as text.

The first problem you're going to have is this:

{ profile_id: 22, username: 'Robert', password: 'skhgeeht893htgn34ythg9er' }

That, stored in a database, is not simple to update, unless you had your own procedure and had developed a JSON decoder for MySQL:

UPDATE users SET JSON(user_data,'username') = 'New User';

Since you can't do that, you would have to first SELECT the JSON, decode it, change it, and then UPDATE it. So, in theory, you might as well spend more time constructing a suitable database structure!
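That select/decode/change/update cycle looks roughly like this in application code (a Python/SQLite sketch with an invented users table; note the cycle is not atomic, so concurrent writers can clobber each other's changes):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (profile_id INTEGER PRIMARY KEY, user_data TEXT)")
conn.execute(
    "INSERT INTO users VALUES (22, ?)",
    (json.dumps({"username": "Robert", "password": "skhgeeht893htgn34ythg9er"}),),
)

# 1. SELECT the blob  2. decode  3. change one key  4. write the whole blob back.
blob = conn.execute("SELECT user_data FROM users WHERE profile_id = 22").fetchone()[0]
data = json.loads(blob)
data["username"] = "New User"
conn.execute(
    "UPDATE users SET user_data = ? WHERE profile_id = 22",
    (json.dumps(data),),
)

updated = json.loads(conn.execute(
    "SELECT user_data FROM users WHERE profile_id = 22"
).fetchone()[0])
print(updated["username"])  # New User
```

Every one-field change rewrites the entire document, which is exactly the overhead a proper column would avoid.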

I do use JSON to store data, but only metadata: data that doesn't get updated often and isn't specific to one user. For example, if a user adds a post, and in that post he adds images, I'll parse the images, create thumbs, and then store the thumb URLs in JSON format.

RobertPitt
  • 56,863
  • 21
  • 114
  • 161
  • Is it good enough to store the JSON string in the database when I don't update it at all? I just want to perform a normal search on the JSON data using LIKE. I see that even WordPress stores plugin metadata as a JSON string in the database. – shasi kanth Sep 07 '15 at 10:43
  • @shasikanth, if you're searching for values in the JSON data then I would look for a better approach – Kirby Jun 16 '16 at 17:53
15

To illustrate how difficult it is to get JSON data using a query, I will share the query I made to handle this.

It doesn't take into account arrays or other objects, just basic datatypes. You should change the 4 instances of column to the column name storing the JSON, and change the 4 instances of myfield to the JSON field you want to access.

SELECT
    SUBSTRING(
        REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', ''),
        LOCATE(
            CONCAT('myfield', ':'),
            REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', '')
        ) + CHAR_LENGTH(CONCAT('myfield', ':')),
        LOCATE(
            ',',
            SUBSTRING(
                REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', ''),
                LOCATE(
                    CONCAT('myfield', ':'),
                    REPLACE(REPLACE(REPLACE(column, '{', ''), '}', ','), '"', '')
                ) + CHAR_LENGTH(CONCAT('myfield', ':'))
            )
        ) - 1
    )
    AS myfield
FROM mytable WHERE id = '3435'
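For contrast, pulling the blob out and decoding it in application code replaces all of that string surgery with one line (a Python sketch; mytable and myfield are the same placeholders the answer uses):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id TEXT, data TEXT)")
conn.execute(
    "INSERT INTO mytable VALUES ('3435', ?)",
    (json.dumps({"myfield": "hello", "other": 1}),),
)

# Fetch the whole blob and let a real JSON parser do the work.
blob = conn.execute("SELECT data FROM mytable WHERE id = '3435'").fetchone()[0]
myfield = json.loads(blob)["myfield"]
print(myfield)  # hello
```

Unlike the REPLACE/LOCATE approach, this handles nested objects, arrays, and quoted commas correctly, at the cost of shipping the whole blob to the client.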
Jorjon
  • 5,316
  • 1
  • 41
  • 58
  • 6
    You wouldn't query this server side tho. This would be to store a blob and get it back client side. You'd then just use JS to query it. This was a long time ago tho :) I've since moved to MongoDB for this stuff :) Upvote for this pretty slick query tho. – Oscar Godson Feb 15 '13 at 23:46
  • I think it's a question of whether the person is going to access that JSON data on a regular basis. For example, I'm moving non-essential headers into an array, parsing to JSON, and then storing. When I retrieve the JSON (for the rare extra-headers AJAX request) I'll simply pull from MySQL, read the JSON into an array, and echo out the headers. For anything more data-intensive it probably shouldn't be stored as JSON. – John Aug 06 '13 at 01:47
11

This is an old question, but I can still see it at the top of Google's search results, so I guess it is meaningful to add a new answer 4 years after the question was asked.

First of all, there is now better support for storing JSON in an RDBMS. You may consider switching to PostgreSQL (although MySQL has supported JSON since v5.7.7). PostgreSQL uses very similar SQL commands to MySQL, but it supports more functions. One of the functions it added is a JSON data type, and you are now able to query the JSON you store. (Some reference on this.) If you are not building queries directly in your program (for example, using PDO in PHP or Eloquent in Laravel), all you need to do is install PostgreSQL on your server and change the database connection settings. You don't even need to change your code.

Most of the time, as the other answers suggest, storing data as JSON directly in an RDBMS is not a good idea. There are some exceptions, though. One situation I can think of is a field with a variable number of linked entries.

For example, to store the tags of a blog post, normally you would have a table for blog posts, a table of tags, and a matching table. So when the user wants to edit a post and you need to display which tags are related to that post, you have to query 3 tables. This damages performance a lot if your matching table / tag table is long.

By storing the tags as JSON in the blog post table, the same action requires only a single table lookup. The user will be able to see the blog post to edit more quickly, but this damages performance if you want to build a report on which posts are linked to a tag, or perhaps to search by tag.

You may also try to de-normalize the database. By duplicating the data and storing it both ways, you receive the benefits of both methods. You will just need a little more time to store your data, and more storage space (which is cheap compared to the cost of more computing power).
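A sketch of that denormalization (Python/SQLite; the posts/tags schema is invented for illustration): the JSON copy serves the common "show a post with its tags" read in one query, while the join table remains the source of truth for tag-based reporting.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT, tags_json TEXT);
    CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post_tags (post_id INTEGER, tag_id INTEGER);
""")

tags = ["mysql", "json"]
# Write both representations: a JSON cache on the post row...
conn.execute("INSERT INTO posts VALUES (1, 'Storing JSON', ?)", (json.dumps(tags),))
# ...and the normalized rows for reporting.
for i, name in enumerate(tags, start=1):
    conn.execute("INSERT INTO tags VALUES (?, ?)", (i, name))
    conn.execute("INSERT INTO post_tags VALUES (1, ?)", (i,))

# Hot path: one-table read, no joins.
title, tags_json = conn.execute(
    "SELECT title, tags_json FROM posts WHERE id = 1"
).fetchone()
print(title, json.loads(tags_json))  # Storing JSON ['mysql', 'json']

# Reporting path: the normalized tables answer "which posts use this tag?".
rows = conn.execute("""
    SELECT p.title FROM posts p
    JOIN post_tags pt ON pt.post_id = p.id
    JOIN tags t ON t.id = pt.tag_id
    WHERE t.name = 'json'
""").fetchall()
print(rows)  # [('Storing JSON',)]
```

The cost is keeping the two representations in sync on every tag change, which is the extra write-time work the answer mentions.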

Jihoon Baek
  • 730
  • 7
  • 11
cytsunny
  • 4,838
  • 15
  • 62
  • 129
10

It really depends on your use case. If you are storing information that has absolutely no value in reporting, and won't be queried via JOINs with other tables, it may make sense for you to store your data in a single text field, encoded as JSON.

This could greatly simplify your data model. However, as mentioned by RobertPitt, don't expect to be able to combine this data with other data that has been normalized.

Phil LaNasa
  • 2,907
  • 1
  • 25
  • 16
  • 3
    My thoughts exactly. If it's data that is never joined/searched on, or even rarely updated, why not use JSON in a TEXT field? A good example of this is a fooditem table where each food item needs to store nutritional information: serving size, protein, carbs, fat total, fat sat, etc. Not only that, you would need to store the value (0.2) and the unit it was measured in (g, oz, fl oz, ml). Considering it is data that (depending on what you are doing, I guess) does not need to be searched on, I would say 1 TEXT vs 16 int/varchar/enum columns is a good trade-off. – Brad Moore May 03 '14 at 04:54
  • Exactly!!! It's useful when you have to store variable and/or unknown data structure that you don't plan to filter with SQL at all. The data is simply stored as is and someone else (app code) might know the structure and what to do with it. – Delmo May 23 '15 at 22:29
8

I would say the only two reasons to consider this are:

  • performance just isn't good enough with a normalised approach
  • you cannot readily model your particularly fluid/flexible/changing data

I wrote a bit about my own approach here:

What scalability problems have you encountered using a NoSQL data store?

(see the top answer)

Even JSON wasn't quite fast enough, so we used a custom text-format approach. It worked, and continues to work, well for us.

Is there a reason you're not using something like MongoDB? (could be MySQL is "required"; just curious)

Community
  • 1
  • 1
Brian
  • 6,391
  • 3
  • 33
  • 49
6

Here is a function that will save/update keys of a JSON array in a column, and another function that retrieves JSON values. These functions assume that the column storing the JSON array is named json. They use PDO.

Save/Update Function

function save($uid, $key, $val) {
    global $dbh; // The PDO object
    // NOTE: this read-modify-write cycle is not atomic; two concurrent
    // calls for the same row can overwrite each other's changes.
    $sql = $dbh->prepare("SELECT `json` FROM `users` WHERE `id` = ?");
    $sql->execute(array($uid));
    $data = $sql->fetch();
    $arr = json_decode($data['json'], true);
    $arr[$key] = $val; // Update the value
    $sql = $dbh->prepare("UPDATE `users` SET `json` = ? WHERE `id` = ?");
    $sql->execute(array(
        json_encode($arr),
        $uid
    ));
}

where $uid is the user's id, $key is the JSON key to update, and $val is its new value.

Get Value Function

function get($uid, $key){
 global $dbh;
 $sql = $dbh->prepare("SELECT `json` FROM `users` WHERE `id`=?");
 $sql->execute(array($uid));
 $data = $sql->fetch();
 $arr  = json_decode($data['json'], true);
 return $arr[$key];
}

where $key is the key of the JSON array whose value we need.

Subin
  • 3,445
  • 1
  • 34
  • 63
  • 1
    This fails in conflicting cases, what if the json you just read,gets updated by another process, and then you save the json in current thread overwriting it ? You might need locks like `SELECT FOR UPDATE` or versioning within the json data. – DhruvPathak Feb 03 '14 at 13:16
  • @DhruvPathak Can you please update the answer using `SELECT FOR UPDATE` so that it will be better? I don't know how to use it. – Subin Jun 04 '14 at 16:11
  • Should I declare field type as JSON in MySQL table? or just leave it as text and store and retrieve data via `json_decode` and `json_encode`. – Salem May 20 '22 at 17:17
6

It seems to me that everyone answering this question is kind-of missing the one critical issue, except @deceze -- use the right tool for the job. You can force a relational database to store almost any type of data and you can force Mongo to handle relational data, but at what cost? You end up introducing complexity at all levels of development and maintenance, from schema design to application code; not to mention the performance hit.

In 2014 we have access to many database servers that handle specific types of data exceptionally well.

  • Mongo (document storage)
  • Redis (key-value data storage)
  • MySQL/Maria/PostgreSQL/Oracle/etc (relational data)
  • CouchDB (JSON)

I'm sure I missed some others, like RabbitMQ and Cassandra. My point is, use the right tool for the data you need to store.

If your application requires storage and retrieval of a variety of data really, really fast (and who doesn't?), don't shy away from using multiple data sources for an application. Most popular web frameworks provide support for multiple data sources (Rails, Django, Grails, Cake, Zend, etc). This strategy limits the complexity to one specific area of the application: the ORM or the application's data source interface.

CheddarMonkey
  • 449
  • 5
  • 12
  • 2
    In your opinion RabbitMQ is a database server or some kind of? I would say it is an message-oriented middleware with a nice little persistence feature for not losing any messages, but nothing I would save data with. Just my two cents. – Osiriz Mar 06 '15 at 10:48
  • @Osiriz: You are correct. I probably shouldn't have included it in this discussion. – CheddarMonkey Mar 06 '15 at 18:08
3

Early support for storing JSON in MySQL has been added to the MySQL 5.7.7 JSON labs release (linux binaries, source)! The release seems to have grown from a series of JSON-related user-defined functions made public back in 2013.

This nascent native JSON support seems to be heading in a very positive direction, including JSON validation on INSERT, an optimized binary storage format including a lookup table in the preamble that allows the JSN_EXTRACT function to perform binary lookups rather than parsing on every access. There is also a whole raft of new functions for handling and querying specific JSON datatypes:

CREATE TABLE users (id INT, preferences JSON);

INSERT INTO users VALUES (1, JSN_OBJECT('showSideBar', true, 'fontSize', 12));

SELECT id, JSN_EXTRACT(preferences, '$.showSideBar') FROM users;

+------+-------------------------------------------+
| id   | JSN_EXTRACT(preferences, '$.showSideBar') |
+------+-------------------------------------------+
| 1    | true                                      |
+------+-------------------------------------------+

IMHO, the above is a great use case for this new functionality; many SQL databases already have a user table and, rather than making endless schema changes to accommodate an evolving set of user preferences, having a single JSON column a single JOIN away is perfect. Especially as it's unlikely that it would ever need to be queried for individual items.

While it's still early days, the MySQL server team are doing a great job of communicating the changes on the blog.

Rich Pollock
  • 1,190
  • 12
  • 15
2

JSON is a valid data type in the PostgreSQL database as well. However, MySQL has not officially supported JSON yet. But it's baking: http://mysqlserverteam.com/json-labs-release-native-json-data-type-and-binary-format/

I also agree that there are many valid cases where some data is better serialized to a string in a database. The primary reason is when the data is not regularly queried, and when its own schema might change: you don't want to change the database schema to match. The second reason is when the serialized string comes directly from external sources; you may not want to parse all of it and feed it into the database until you actually use any of it. So I'll be waiting for the new MySQL release to support JSON, since it will make switching between different databases easier.

2

I know this is really late, but I did have a similar situation where I used a hybrid approach: maintaining RDBMS standards of normalizing tables up to a point, and then storing data as JSON text beyond that point. So, for example, I store data in 4 tables following RDBMS normalization rules; however, in the 4th table, to accommodate a dynamic schema, I store data in JSON format. Every time I want to retrieve data, I retrieve the JSON, parse it, and display it in Java. This has worked for me so far, and to ensure that I am still able to index the fields, I use an ETL to transform the JSON data in the table into a normalized form. This ensures that while the user is working on the application he faces minimal lag, while the fields are transformed into an RDBMS-friendly format for data analysis, etc. I see this approach working well, and believe that, given MySQL (5.7+) also allows parsing of JSON, this approach gives you the benefits of both RDBMS and NoSQL databases.
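That ETL step can be sketched as follows (Python/SQLite, with an invented events schema): periodically walk the JSON rows and project the fields you want to index into a flat, queryable table.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_events (id INTEGER PRIMARY KEY, doc TEXT);  -- dynamic-schema JSON
    CREATE TABLE events_flat (id INTEGER PRIMARY KEY, user TEXT, action TEXT);
""")
conn.execute(
    "INSERT INTO raw_events VALUES (1, ?)",
    (json.dumps({"user": "alice", "action": "login", "extra": {"ip": "10.0.0.1"}}),),
)

# ETL pass: decode each JSON document and copy the indexed fields
# into the normalized table for reporting; unknown keys stay in the blob.
for rid, doc in conn.execute("SELECT id, doc FROM raw_events"):
    d = json.loads(doc)
    conn.execute(
        "INSERT OR REPLACE INTO events_flat VALUES (?, ?, ?)",
        (rid, d.get("user"), d.get("action")),
    )

print(conn.execute("SELECT user, action FROM events_flat").fetchall())  # [('alice', 'login')]
```

The writes stay fast because the user-facing path only touches the JSON table; the flat table is refreshed asynchronously.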

prashant
  • 1,382
  • 1
  • 13
  • 19
1

I use JSON to record everything for a project. In fact, I use three tables: one for the data in JSON, one for an index of each piece of metadata in the JSON structure (each meta is encoded with a unique id), and one for the user session. That's all.

The benchmark cannot be quantified at this early stage of the code, but, for example, I was using views (inner joins on the index) to get a category (or anything: a user, ...), and it was very, very slow (views in MySQL are not the good way). The search module, in this structure, can do anything I want, but I think MongoDB will be more efficient for this concept of a full-JSON data record. For my example, I used views to create a tree of categories and a breadcrumb. My god, so many queries to do! Apache itself gave up! In fact, for this little website, I now use PHP to generate the tree and breadcrumb; extraction of the data is done by the search module (which uses only the index), and the data table is used only for updates. If I want, I can destroy all the indexes and regenerate them from the data, or do the reverse: destroy all the data (JSON) and regenerate it from the index table alone. My project is young, running under PHP and MySQL, but sometimes I think using Node.js and MongoDB would be more efficient for this project.

Use JSON if you think you can do it, just for the sake of doing it, because you can! And forget it if it was a mistake. Try, make a good or bad choice, but try!

Low

a french user

low
  • 49
  • 5
  • 1
    Didn't understand. I don't speak English natively, but I would recommend you to use periods (.), commas (,) and paragraphs (the Enter key) to organize your ideas. Then, only then, try to organize a database ;-) – Diego Jancic Jan 16 '15 at 21:06
  • You're right, it is a confusing answer in fact; it must be more explicit, showing examples. But if MySQL can be replaced by MongoDB, it will be more efficient to use JSON (as it's native to MongoDB); if MySQL is mandatory, OK, let's try again in a few days! – low Feb 07 '15 at 16:44
1

I believe that storing JSON in a mysql database does in fact defeat the purpose of using RDBMS as it is intended to be used. I would not use it in any data that would be manipulated at some point or reported on, since it not only adds complexity but also could easily impact performance depending on how it is used.

However, I was curious if anyone else thought of a possible reason to actually do this. I was thinking to make an exception for logging purposes. In my case, I want to log requests that have a variable amount of parameters and errors. In this situation, I want to use tables for the type of requests, and the requests themselves with a JSON string of different values that were obtained.

In the above situation, the requests are logged and never manipulated or indexed within the JSON string field. HOWEVER, in a more complex environment, I would probably try to use something that has more of an intention for this type of data and store it with that system. As others have said, it really depends on what you are trying to accomplish, but following standards always helps longevity and reliability!

Mark
  • 163
  • 1
  • 10
0

You can use this gist: https://gist.github.com/AminaG/33d90cb99c26298c48f670b8ffac39c3

After installing it on the server (you only need root privileges, not SUPER), you can do something like this:

select extract_json_value('{"a":["a","2"]}','(/a)')

It will return a 2. You can return anything inside the JSON this way. The good part is that it supports MySQL 5.1, 5.2, 5.6, and you do not need to install any binary on the server.

It is based on the old common-schema project, but it still works today: https://code.google.com/archive/p/common-schema/

Aminadav Glickshtein
  • 23,232
  • 12
  • 77
  • 117