The large file is 12 million lines of text such as this:
81.70, 89.86, 717.985
81.74, 89.86, 717.995
81.78, 89.86, 718.004
81.82, 89.86, 718.014
81.86, 89.86, 718.024
81.90, 89.86, 718.034
The three columns are latitude, longitude, and distance from the nearest coastline, respectively.
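Each line splits cleanly into its three numeric fields; a minimal sketch of the parsing, assuming the comma-separated layout shown above:

# Parse one row into its three fields (float() tolerates the surrounding spaces).
line = "81.70, 89.86, 717.985"
latitude, longitude, distance = (float(field) for field in line.split(","))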
My code takes the coordinates of known places (for example, Mexico City: (-99.1, 19.4)) and searches the large file, line by line, to output that coordinate's distance from the nearest coastline.
Because many lines meet the longitude/latitude criteria, I put each matching line into a list and later average the distances from the coastline.
Each coordinate takes about 12 seconds to retrieve. My entire script takes 14 minutes to complete.
Here's what I have been using:
long = "-99.1"  # kept as strings so they can be matched against the raw text of each line
lat = "19.4"
country_d2s = []

# Collect every line that starts with the specified longitude and also contains the latitude.
with open(r"C:\Users\jason\OneDrive\Desktop\s1186prXbF0O", 'r') as dist2sea:
    for line in dist2sea:
        if line.startswith(long) and lat in line:
            country_d2s.append(line)
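The averaging step mentioned above then runs over that list; a short sketch, assuming the distance is the third comma-separated field of each stored line:

# Average the coastline distances across all matched lines.
distances = [float(line.split(",")[2]) for line in country_d2s]
average_distance = sum(distances) / len(distances)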
I am looking for a way to search through the file much more quickly and/or to rewrite the file into a format that is easier to work with.
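If rewriting the file is an option, one possibility is a one-time conversion into a SQLite database with an index on the coordinate columns, so each lookup becomes an indexed query instead of a 12-million-line scan. A minimal sketch, assuming the column layout above; the database file name coast.db and table name dist2sea are placeholders:

import sqlite3

# One-time conversion: load the rows into an indexed SQLite table.
conn = sqlite3.connect("coast.db")  # hypothetical database file name
conn.execute("CREATE TABLE IF NOT EXISTS dist2sea (lat REAL, lon REAL, dist REAL)")
with open(r"C:\Users\jason\OneDrive\Desktop\s1186prXbF0O") as f:
    rows = (tuple(float(field) for field in line.split(",")) for line in f if line.strip())
    conn.executemany("INSERT INTO dist2sea VALUES (?, ?, ?)", rows)
conn.execute("CREATE INDEX IF NOT EXISTS idx_coords ON dist2sea (lon, lat)")
conn.commit()

# Each lookup then becomes an indexed query instead of a full scan.
# Note the exact REAL comparison assumes the query coordinates fall
# exactly on the grid values stored in the file.
cur = conn.execute("SELECT AVG(dist) FROM dist2sea WHERE lon = ? AND lat = ?", (-99.1, 19.4))
print(cur.fetchone()[0])
conn.close()

After the one-time load, each indexed lookup should take a tiny fraction of the current 12 seconds, since SQLite consults the index rather than reading every row.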