Consider these two JSON files:
{
  "tables": [
    {
      "columns": [
        {
          "name": "action",
          "type": "string"
        },
        {
          "name": "region",
          "type": "string"
        },
        {
          "name": "count_",
          "type": "long"
        },
        {
          "name": "min_timestamp",
          "type": "datetime"
        },
        {
          "name": "max_timestamp",
          "type": "datetime"
        }
      ],
      "name": "PrimaryResult",
      "rows": [
        [
          "ChangePassword",
          "CN",
          1,
          "2022-07-19T11:51:52.8430729Z",
          "2022-07-19T11:51:52.8430729Z"
        ]
      ]
    }
  ]
}
{
  "tables": [
    {
      "columns": [
        {
          "name": "action",
          "type": "string"
        },
        {
          "name": "region",
          "type": "string"
        },
        {
          "name": "count_",
          "type": "long"
        },
        {
          "name": "min_timestamp",
          "type": "datetime"
        },
        {
          "name": "max_timestamp",
          "type": "datetime"
        }
      ],
      "name": "PrimaryResult",
      "rows": [
        [
          "ChangePassword",
          "CN",
          1,
          "2022-07-19T11:51:52.8430729Z",
          "2022-07-19T11:51:52.8430729Z"
        ],
        [
          "Register",
          "CN",
          1,
          "2022-07-19T11:51:52.8430729Z",
          "2022-07-19T11:51:52.8430729Z"
        ]
      ]
    }
  ]
}
They have exactly the same schema and are almost identical. The only difference is that the first file has one row in the `rows` property, while the second file has two.
I use `ConvertFrom-Json` to load the files into PowerShell. When I access `.tables.rows[0]`, I expect it to return the first row.
However, for the first file, which has only one row, it actually returns the first column of the first row.
(gc file1.json | ConvertFrom-Json).tables.rows[0]
Outputs:
ChangePassword
If there is more than one row, as in the second file, then `.tables.rows[0]` behaves as expected.
(gc file2.json | ConvertFrom-Json).tables.rows[0]
Outputs:
ChangePassword
CN
1
2022-07-19T11:51:52.8430729Z
2022-07-19T11:51:52.8430729Z
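For context on what I have already tried: wrapping the expression in the array subexpression operator `@(...)` does not help, presumably because the single row has already been flattened by the time `@()` sees the value:

```powershell
# @() doesn't restore the lost nesting: for the one-row file,
# .tables.rows has already collapsed to the five column values,
# and @() just keeps that flat array as-is.
$rows = @((Get-Content file1.json | ConvertFrom-Json).tables.rows)
$rows[0]   # still just "ChangePassword"
```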
How can I reliably process these JSON files in PowerShell, regardless of whether they contain one row or multiple rows?
By the way, these JSON files are produced by the `az` CLI as results of Azure Application Insights queries.
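One workaround that seems to behave consistently in my quick testing is to index into `tables` explicitly, so that `rows` is read as a plain property of a single table object instead of being collected (and flattened) by member-access enumeration. I am not sure this is the idiomatic fix, so better approaches are welcome:

```powershell
$data = Get-Content file1.json | ConvertFrom-Json

# Indexing with [0] avoids member-access enumeration over the tables
# array, so each $row is the full 5-element row array in both files.
foreach ($row in $data.tables[0].rows) {
    '{0} ({1}): count {2}' -f $row[0], $row[1], $row[2]
}
```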