I am trying to read a text file that has some funky formatting, manipulate some of the strings, and export a CSV file. The input text is a list of polygons and coordinates, and the output is a list of coordinates with unique IDs associated with them. I am stuck on making the code flexible for polygons of varying lengths and varying numbers of polygons. Example of the list:
['poly', '1', '317806.6570985045', '4312355.239299678', '317808.2079078924', '4312354.675368992', '317806.1871562657', '4312348.754096784', '317804.4953642061', '4312349.365021695', 'poly', '2', '317811.4975035638', '4312361.724502574', '317810.651607534', '4312362.006467917', '317809.3357692654', '4312358.199935783', '317810.3226479669', '4312357.917970439']
Example of output csv:
poly1A, 317806.6570985045, 4312355.239299678
poly1B, 317808.2079078924, 4312354.675368992
poly1C, 317806.1871562657, 4312348.754096784
poly2A, 317811.4975035638, 4312361.724502574
So far, I have cleaned up the input .txt and have a flat list of the relevant info. I loop through the list and count the number of times 'poly' appears to know how many polygons there will be. I'm stuck on how to count the coordinate values between 'poly' markers so I know where to slice the list into per-polygon sublists, while staying flexible for polygons of different sizes. My code so far:
with open("Polys.txt", "r") as in_file, open("csvout.csv", "w") as out_file:
    lines_list = in_file.readlines()
    del lines_list[0:5]  # drop the header lines
    poly_list = []
    for i in range(len(lines_list)):
        poly_list.append(lines_list[i].strip('\n'))
    poly_list_strip = []
    for i in range(len(poly_list)):
        poly_list_strip.append(poly_list[i].strip())
    poly_list_untab = []
    for i in range(len(poly_list_strip)):
        poly_list_untab.append(poly_list_strip[i].split())
    flat_poly_list = [val for sublist in poly_list_untab for val in sublist]
    poly_index = [i for i, x in enumerate(flat_poly_list) if x == 'poly']
    # poly_comb = flat_poly_list[?]  <- stuck here: how do I slice out each polygon?
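The slicing step I'm imagining would look something like the sketch below: use the `poly_index` positions as slice boundaries (with the list length appended as a sentinel so the last polygon is handled the same way), then pair up x/y values and label them with `string.ascii_uppercase`. The `bounds` and `rows` names are my own, and the hard-coded `flat_poly_list` just mirrors the example above — not sure this is the cleanest approach:

```python
import csv
import string

# Stand-in for the flat list produced by the question's parsing code.
flat_poly_list = [
    'poly', '1', '317806.6570985045', '4312355.239299678',
    '317808.2079078924', '4312354.675368992',
    'poly', '2', '317811.4975035638', '4312361.724502574',
    '317810.651607534', '4312362.006467917',
]

# Positions of every 'poly' marker, plus a sentinel end index so the
# final polygon is sliced the same way as the others.
poly_index = [i for i, x in enumerate(flat_poly_list) if x == 'poly']
bounds = poly_index + [len(flat_poly_list)]

rows = []
for start, end in zip(bounds, bounds[1:]):
    poly_id = flat_poly_list[start + 1]     # the number right after 'poly'
    coords = flat_poly_list[start + 2:end]  # everything up to the next 'poly'
    # Pair consecutive values into (x, y) and label them A, B, C, ...
    for letter, (x, y) in zip(string.ascii_uppercase,
                              zip(coords[::2], coords[1::2])):
        rows.append([f"poly{poly_id}{letter}", x, y])

with open("csvout.csv", "w", newline="") as out_file:
    csv.writer(out_file).writerows(rows)
```

This sidesteps counting coordinate sets entirely: each polygon's length falls out of the gap between consecutive 'poly' indices, so it handles any number of polygons of any size (up to 26 vertices per polygon with single letters).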