
I have seen answers on how to join/merge two raw JSON files, but what I am trying to do is join the deserialized data together. That is, I get the first JSON file and deserialize it, then get the next JSON file in the series and deserialize that, merge its deserialized data with the original deserialized data, and so on until I reach the last file in the series.

I need to deserialize each JSON file to know the URL of the next file so I can request it. If I do not deserialize the file, I will not know whether I need to request more in the series. This is why I want to join the deserialized data rather than the raw JSON files.

I am able to get data from the energy company via an open API, and it comes back as a JSON file. Depending on the time period I am requesting data for, I can get more than one file back in the series. This is my problem: how can I join the data together so that I end up with a single set of data to work with? When I get back just one file this is not a problem; I use the code below and deserialize it.

myOctopusDeserializeData = JsonConvert.DeserializeObject<Rootobject>(
    new StreamReader(request.GetResponse().GetResponseStream()).ReadToEnd());

This deserializes into the following classes:

public class Rootobject
{
    public int count { get; set; }
    public string next { get; set; }
    public object previous { get; set; }
    public Result[] results { get; set; }
}

public class Result
{
    public double consumption { get; set; }
    public DateTime interval_start { get; set; }
    public DateTime interval_end { get; set; }
}
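
To show what I mean by following the series and joining the deserialized data, this is roughly the flow I am after (a sketch only, not my real code; the starting URL and the request/credential setup are placeholders):

using System.Collections.Generic;
using System.IO;
using System.Net;
using Newtonsoft.Json;

// Sketch: keep requesting pages until "next" is null, deserializing each
// one and collecting all of the Result items into a single list.
var allResults = new List<Result>();
string url = "https://api.octopus.energy/v1/..."; // first page URL (placeholder)

while (!string.IsNullOrEmpty(url))
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    // authentication/header setup omitted here

    using (var reader = new StreamReader(request.GetResponse().GetResponseStream()))
    {
        var page = JsonConvert.DeserializeObject<Rootobject>(reader.ReadToEnd());
        allResults.AddRange(page.results); // join this page's data onto the rest
        url = page.next;                   // null when this was the last file in the series
    }
}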

The count is the total number of items in results, and I use it in a for loop to go through all the items. I need to join the files so that I get all of the data in a single file. Here is some mock data as an example.

File 1 returned =

{
  "count": 3,
  "next": "https://api.octopus.energy/v1/gas-meter-points/30831.....",
  "previous": null,
  "results": [
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T01:00:00+01:00",
      "interval_end": "2023-06-20T01:30:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T01:30:00+01:00",
      "interval_end": "2023-06-20T02:00:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T02:00:00+01:00",
      "interval_end": "2023-06-20T02:30:00+01:00"
    }
  ]
}

File 2 returned =

{
  "count": 3,
  "next": "https://api.octopus.energy/v1/gas-meter-points/30842.....",
  "previous": "https://api.octopus.energy/v1/gas-meter-points/30831.....",
  "results": [
    {
      "consumption": 0.0,
      "interval_start": "2023-06-21T23:30:00+01:00",
      "interval_end": "2023-06-22T00:00:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-22T00:00:00+01:00",
      "interval_end": "2023-06-22T00:30:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-22T00:30:00+01:00",
      "interval_end": "2023-06-22T01:00:00+01:00"
    }
  ]
}

The single file I want (note that I do not need the pointers to the next or previous files in the single file) =

{
  "count": 6,
  "next": null,
  "previous": null,
  "results": [
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T01:00:00+01:00",
      "interval_end": "2023-06-20T01:30:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T01:30:00+01:00",
      "interval_end": "2023-06-20T02:00:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-20T02:00:00+01:00",
      "interval_end": "2023-06-20T02:30:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-21T23:30:00+01:00",
      "interval_end": "2023-06-22T00:00:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-22T00:00:00+01:00",
      "interval_end": "2023-06-22T00:30:00+01:00"
    },
    {
      "consumption": 0.0,
      "interval_start": "2023-06-22T00:30:00+01:00",
      "interval_end": "2023-06-22T01:00:00+01:00"
    }
  ]
}
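
On the deserialized side, the join I am after is basically this (again just a sketch; Merge is a name I made up for illustration, not something from the API or Json.NET):

using System.Collections.Generic;

// Combine two deserialized pages into one object shaped like the
// "single file" above: count is the total, the pointers are dropped.
static Rootobject Merge(Rootobject first, Rootobject second)
{
    var combined = new List<Result>(first.results);
    combined.AddRange(second.results);

    return new Rootobject
    {
        count = combined.Count,  // 3 + 3 = 6 for the mock data above
        next = null,             // I do not need the next/previous pointers
        previous = null,
        results = combined.ToArray()
    };
}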
user3884423
  • FWIW, it's probably easier to write a loop to deserialize each individual file and add it to an array in C#, or some similar data structure. – Robert Harvey Jun 28 '23 at 15:06

0 Answers