I've seen a lot of MySQL server script variables of this kind: `@variable`. But can we, in fact, store an array (associative or not) behind a `@variable`?
UPDATE
This question turns out to be a duplicate of this one, which suggests considering the use of:
- the SET and JSON types, which seem to be column types only, not `@variable` types;
- a TEMPORARY TABLE, which seems to be stored on HDD (right?);
- functions working with JSON strings (e.g., `JSON_VALUE` and `JSON_LENGTH`), which are usable entirely within a MySQL server script. However, these functions do not help to derive an array and store it in a `@variable`; they are merely JSON workarounds. I would accept this variant, but it seems that `@json_string` is parsed anew each time we call `JSON_VALUE(@json_string)`.
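To make the JSON-string workaround concrete, here is a minimal sketch (assuming MySQL 5.7+, where these JSON functions are available). Note that the variable holds plain text, so every function call re-parses the string:

```sql
-- Emulate an "array" by keeping a JSON string in a user variable.
SET @arr = '[10, 20, 30]';

-- Read one element by index (the string is re-parsed on each call):
SELECT JSON_EXTRACT(@arr, '$[1]');   -- 20
SELECT JSON_LENGTH(@arr);            -- 3

-- "Append" an element by rebuilding the whole string:
SET @arr = JSON_ARRAY_APPEND(@arr, '$', 40);
SELECT @arr;                         -- [10, 20, 30, 40]
```

This works, but every access and every "mutation" pays the cost of parsing and re-serializing the entire string, which is exactly the concern raised above.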
So, up to now it seems that there IS a way to CREATE an array (associative or not!) but there is NO way to reliably KEEP the array for further processing!
Regarding the question mentioned at the beginning of this one: so far I've only gotten to the 5th and 6th answers, which deal with JSON strings. They are interesting! Be sure to check them out if you're interested in the subject!
Thanks to everyone for your time!
UPDATE
As @Panagiotis Kanavos mentioned, fetching data by value is slower with arrays.
But what if:
We simply want to iterate over M input arrays simultaneously and produce N output arrays? (Perhaps we are only interested in collating parameters along a timeline and keeping the results.) Arrays are perfectly suited to this task. Of course, we could still use tables here, but which would be faster? What if the iterative process involves many accesses to the arrays' elements (can we rely on the server caching the M input arrays so that they are always at hand?) and the creation of multiple result arrays (how long would that take with tables, and how do we know the tables are created in RAM for fast access?)?
We want to build an array manually over the course of a server script and use it only in a C-like style (we aren't going to search it by value), with no further need for the array after the script finishes? In other words, a classic C-like, script-only array. In this case, putting the array directly into RAM seems to be exactly what we need, and should be more efficient than creating a table (which will probably go to HDD), won't it?
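On the RAM concern specifically: a sketch of one option (assuming MySQL), since a TEMPORARY TABLE can be pinned in memory explicitly with `ENGINE=MEMORY` instead of being left to the default engine:

```sql
-- Keep a scratch "array" in RAM explicitly.
-- ENGINE=MEMORY stores the table in memory rather than on disk
-- (contents are lost on restart; TEXT/BLOB columns are not supported).
CREATE TEMPORARY TABLE scratch (
    idx INT PRIMARY KEY,   -- array index
    val DOUBLE             -- array value
) ENGINE=MEMORY;

INSERT INTO scratch VALUES (0, 1.5), (1, 2.5), (2, 3.5);

-- C-like indexed access:
SELECT val FROM scratch WHERE idx = 1;   -- 2.5

DROP TEMPORARY TABLE scratch;
```

This doesn't answer whether the server would have kept a plain temporary table in RAM anyway, but it removes the guesswork for this particular case.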
And so a 2nd (and more general) question arises: how far can we rely on the server's optimizations?
I understand that a huge amount of work has gone into optimization in many ways. But has anyone encountered a situation where the server didn't optimize in the best way? Where a programmer had to explicitly rearrange the code to bring it to the optimal state manually?