I'm really confused.
No matter how many times I run it, my Python script only writes ~70,000 rows of what should be ~325,000.
The script itself executes fine - I've tested it on multiple files, and it only fails to write out the entire file when the source is this large (~325,000 rows), as opposed to smaller files of 5,000 rows or so. I'm wondering if I'm doing something wrong.
import csv, time, string, os, requests

dw = "\\\\network\\folder\\btc.csv"
inv_fields = ["id", "rsl", "clr_five"]

with open(dw) as infile, open("c:\\upload\\log.csv", "wb") as outfile:
    r = csv.DictReader(infile)
    w = csv.DictWriter(outfile, inv_fields, extrasaction="ignore")

    # write our custom header to match solr, also include new "id" column
    wtr = csv.writer(outfile)
    wtr.writerow(["id", "resale", "favorite_color"])

    for i, row in enumerate(r, start=1):
        row['id'] = i
        w.writerow(row)
The source file has about 42 columns and ~325,000 rows. The script finds the two columns named "rsl" and "clr_five" and writes those, along with a new "id" column, to a new file.
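For reference, this is a rough sketch of the kind of check that shows the mismatch - it just counts CSV records in each file (header rows included), using the same paths as above:

import csv

src = "\\\\network\\folder\\btc.csv"
out = "c:\\upload\\log.csv"

# count CSV records (not raw lines), so quoted multi-line fields don't skew the numbers
with open(src) as f:
    src_rows = sum(1 for _ in csv.reader(f))
with open(out) as f:
    out_rows = sum(1 for _ in csv.reader(f))

print("source rows: %d" % src_rows)   # comes back around 325,000
print("output rows: %d" % out_rows)   # comes back around 70,000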
Is there something native to this code that just... stops it after it reaches a certain number?