Using Newtonsoft.Json: we receive a JSON file that can contain both singular objects and arrays (example below). Each singular object and each array needs to be parsed into its own DataTable. We can also receive separate JSON files for each singular object and each array, and that case we can already handle. BUT, for the "All in One" case, the first thing we need to do is detect whether the JSON we just read is an "All in One" file. If it is, we then need to parse each singular object into its DataTable and each array (with its values) into its own DataTable. Is this something to do by stepping through the file token by token, checking the token type and acting on it? It seems like there should be a faster way. NOTE: we don't know the array names in advance; they are dynamic, meaning not every combined JSON file will include every array defined in the schema. Some could be left out.
Looking for the best and most efficient way to parse the combined JSON file into DataTables.
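For reference, here is a minimal sketch of one approach using JObject.Parse instead of a manual line-by-line token walk: load the root object, treat the presence of any array-valued property as the "All in One" signal, then convert each array to its own DataTable via Newtonsoft's built-in DataTable conversion. This is only a sketch under assumptions, not a definitive implementation: CombinedJsonSplitter, Split, and the "Header" table name are made-up placeholders, and it assumes the array items are flat objects (Newtonsoft's DataTable converter generally copes with rows that introduce extra columns, like BelongTo in the example below, but that is worth verifying against your real schema).

C# sketch:

using System.Collections.Generic;
using System.Data;
using System.Linq;
using Newtonsoft.Json.Linq;

// Hypothetical helper - splits one combined ("All in One") JSON document into
// a DataTable per top-level array plus a single-row DataTable for the scalar values.
public static class CombinedJsonSplitter
{
    public static Dictionary<string, DataTable> Split(string json)
    {
        var root = JObject.Parse(json);
        var tables = new Dictionary<string, DataTable>();

        // Detection: treat it as an "All in One" file if the root object carries
        // at least one array-valued property. No line-by-line token walk needed.
        bool isAllInOne = root.Properties().Any(p => p.Value.Type == JTokenType.Array);
        if (!isAllInOne)
            return tables; // caller falls back to the existing single-object / single-array handling

        // Scalar root properties ($schema, THISID, THIS_status_date, ...) -> one single-row table.
        var scalarProps = root.Properties()
            .Where(p => p.Value.Type != JTokenType.Array && p.Value.Type != JTokenType.Object)
            .ToList();
        var header = new DataTable("Header");
        foreach (var prop in scalarProps)
            header.Columns.Add(prop.Name, typeof(string));
        var headerRow = header.NewRow();
        foreach (var prop in scalarProps)
            headerRow[prop.Name] = prop.Value.ToString();
        header.Rows.Add(headerRow);
        tables[header.TableName] = header;

        // Every array property becomes its own DataTable, whatever the array is named,
        // so files that omit some of the schema's arrays simply produce fewer tables.
        foreach (var prop in root.Properties().Where(p => p.Value.Type == JTokenType.Array))
        {
            var dt = prop.Value.ToObject<DataTable>(); // Newtonsoft maps an array of flat objects to a DataTable
            dt.TableName = prop.Name;
            tables[prop.Name] = dt;
        }

        return tables;
    }
}

Calling CombinedJsonSplitter.Split(File.ReadAllText(path)) would then yield one DataTable per array actually present in that particular file, so missing arrays simply produce no table; an empty result means the file was not an "All in One" file and can go through the existing per-file logic.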
JSON Example:
{
  "$schema": "https://abc.def.bay/schema-v1-0-0.json",
  "THISID": "2023",
  "THIS_status_date": "2023-03-30",
  "Array01": [
    {
      "This_ID": "1",
      "title": "Proj",
      "level": 1,
      "type": "typeA",
      "That_ID": "1",
      "Person": "Smith, John",
      "where": "N",
      "when": "Exit 1",
      "why": "Because"
    },
    {
      "This_ID": "2",
      "title": "Proj",
      "level": 1,
      "type": "typeB",
      "That_ID": "2",
      "Person": "Jones, Kelley",
      "where": "N",
      "when": "Exit 2",
      "why": "Because"
    }
  ],
  "Array2": [
    {
      "This_ID": "1",
      "title": "Title A",
      "level": 1,
      "where": "N",
      "why": "Because"
    },
    {
      "This_ID": "2",
      "title": "Title B",
      "level": 2,
      "BelongTo": "1",
      "where": "N",
      "why": "Because"
    }
  ],