I have JSON files that look like this, for example:
[{"fu": "thejimjams", "su": 232104580}, {"fu": "thejimjams", "su": 216575430}, {"fu": "thejimjams", "su": 184695850}]
I need to collect all the values under the "su" key from a bunch of JSON files (about 200) into a list. Each file will get its own list, then I'm going to combine the lists and remove duplicates. Is there an advisable way to go about this that saves system resources and time?
My current plan is to loop through each JSON file, pull out every "su" value into a list for that file, move on to the next file, combine all the lists, and then scan through the combined list to remove duplicates. Roughly like the sketch below.
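Something like this is what I have in mind (the data/*.json pattern is just a placeholder for wherever the 200 files actually live):

```python
import glob
import json

combined = []

# "data/*.json" is a placeholder pattern for the ~200 JSON files
for path in glob.glob("data/*.json"):
    with open(path) as f:
        records = json.load(f)  # each file is a list of dicts like the example above
    su_values = [record["su"] for record in records]  # one list of "su" values per file
    combined.extend(su_values)

unique_su = list(set(combined))  # remove duplicates at the end
```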
For removing the duplicates, I'm thinking of following the answer to this question: Combining two lists and removing duplicates, without removing duplicates in original list, unless that's not efficient.
Basically, I'm open to recommendations about a good way to implement this.
Thanks,