I have a data file with about 7000 lines (so not actually very large!) which looks like this:
# data can be obtained from pastebin
# filename = input.csv
# lots of comments
# wave flux err
0.807172 7.61973e-11 1.18177e-13
0.807375 7.58666e-11 1.18288e-13
0.807577 7.62136e-11 1.18504e-13
0.80778 7.64491e-11 1.19389e-13
0.807982 7.62858e-11 1.18685e-13
0.808185 7.63852e-11 1.19324e-13
0.808387 7.60547e-11 1.18952e-13
0.80859 7.52287e-11 1.18016e-13
0.808792 7.53114e-11 1.18979e-13
0.808995 7.58247e-11 1.20198e-13
# lots of other lines
Link to the input data: http://pastebin.com/KCW9phzX
I want to extract the rows whose wavelength lies between 0.807375 and 0.807982 (inclusive), so that the output looks like this:
#filename = output.csv
0.807375 7.58666e-11 1.18288e-13
0.807577 7.62136e-11 1.18504e-13
0.80778 7.64491e-11 1.19389e-13
0.807982 7.62858e-11 1.18685e-13
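Here is what I have tried so far, a sketch using `numpy.genfromtxt` and a boolean mask (the sample data is embedded via `io.StringIO` just to make the snippet self-contained; for the real file I would pass `"input.csv"` instead):

```python
import io
import numpy as np

# A few sample rows from the question; replace with "input.csv" for the real file.
sample = io.StringIO("""\
# filename = input.csv
# wave flux err
0.807172 7.61973e-11 1.18177e-13
0.807375 7.58666e-11 1.18288e-13
0.807577 7.62136e-11 1.18504e-13
0.80778 7.64491e-11 1.19389e-13
0.807982 7.62858e-11 1.18685e-13
0.808185 7.63852e-11 1.19324e-13
""")

# genfromtxt skips '#' comment lines by default and splits on whitespace.
data = np.genfromtxt(sample)

# Boolean mask selecting wavelengths in the closed interval.
mask = (data[:, 0] >= 0.807375) & (data[:, 0] <= 0.807982)
subset = data[mask]

# Write the selected rows; %g keeps the scientific notation compact.
np.savetxt("output.csv", subset, fmt="%g", header="filename = output.csv")
```

This works on the sample above, but I am not sure it is the right approach for a file of this size.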
Related questions I have found:
https://stackoverflow.com/questions/8956832/python-out-of-memory-on-large-csv-file-numpy/8964779#=
efficient way to extract few lines of data from a large csv data file in python
What is the most efficient way to match list items to lines in a large file in Python?
Extract specific lines from file and create sections of data in python
how to extract elements from a list in python?
How to use numpy.genfromtxt when first column is string and the remaining columns are numbers?
genfromtxt and numpy
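Since several of those threads are about memory-efficient handling of large files, I also tried a plain line-by-line filter that never loads the whole file. It assumes the wavelength column is sorted ascending (as it appears to be in the data), so it can stop reading once past the upper bound; the small sample written to `input.csv` at the top is only there to make the sketch self-contained:

```python
# Hypothetical sample standing in for the real input.csv from pastebin.
sample = """# filename = input.csv
# wave flux err
0.807172 7.61973e-11 1.18177e-13
0.807375 7.58666e-11 1.18288e-13
0.807577 7.62136e-11 1.18504e-13
0.80778 7.64491e-11 1.19389e-13
0.807982 7.62858e-11 1.18685e-13
0.808185 7.63852e-11 1.19324e-13
"""
with open("input.csv", "w") as f:
    f.write(sample)

lo, hi = 0.807375, 0.807982

with open("input.csv") as src, open("output.csv", "w") as dst:
    dst.write("#filename = output.csv\n")
    for line in src:
        if line.startswith("#"):
            continue                 # skip comment lines
        wave = float(line.split()[0])
        if wave > hi:
            break                    # column is sorted, nothing more to find
        if wave >= lo:
            dst.write(line)          # keep the original formatting verbatim
```

Would this be preferable to the numpy approach for 7000 lines, or is it not worth the trouble at that size?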