Indeed, as you're intuiting, never make N write operations to disk if you can reduce N to 1. For your particular case, let's measure the speedup from reducing N to 1 by using json.dumps to serialize your test array:
```python
import timeit
import json

x = []
for z in range(800000):
    x.append(z)

def f1():
    # N writes: one small write call per element
    with open("f1.txt", "w") as f:
        for z in range(len(x)):
            f.write(str(z))

def f2():
    # 1 write: serialize everything first, then a single write call
    with open("f1.txt", "w") as f:
        f.write(json.dumps(x))

N = 10
print(timeit.timeit('f1()', setup='from __main__ import f1', number=N))
print(timeit.timeit('f2()', setup='from __main__ import f2', number=N))
```
With this setup, the single-write version comes out ~5.5x faster:
8.25567383571545
1.508784956563721
If I increase the array size to 8,000,000, it's ~3.7x faster:
82.87025446748072
22.56552114259503
And with N=1 and the array size still at 8,000,000, it's ~3.8x faster:
8.355991964475688
2.227114091734297
I'm not saying json.dumps is the best way to dump your custom data, but it's a reasonably good one. My point is: always try to reduce the number of disk operations to the minimum when possible, since disk write operations are generally quite expensive.
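A side benefit of serializing with json.dumps (as opposed to the raw string concatenation in f1, whose output can't be parsed back) is that the file round-trips cleanly. A minimal sketch, using a hypothetical file name:

```python
import json

data = list(range(10))

# One serialization pass, one write call.
with open("roundtrip.json", "w") as f:
    f.write(json.dumps(data))

# The dump parses back into the exact same list.
with open("roundtrip.json") as f:
    restored = json.loads(f.read())

assert restored == data
```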
Additional info: if you want a good comparison of faster serialization methods, I suggest you take a look at this article.
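For a quick taste of that kind of comparison, here is a sketch timing json against the stdlib pickle module, serializing in memory only so disk I/O is kept out of the measurement (absolute numbers will of course vary by machine):

```python
import json
import pickle
import timeit

data = list(range(100000))

# Serialize in memory only, so we measure serialization cost, not disk I/O.
t_json = timeit.timeit(lambda: json.dumps(data), number=5)
t_pickle = timeit.timeit(lambda: pickle.dumps(data), number=5)

print(f"json.dumps:   {t_json:.3f}s")
print(f"pickle.dumps: {t_pickle:.3f}s")
```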
EDIT: Following @ShadowRanger's suggestion, I've added a couple more tests to the experiment comparing write and writelines; here you go:
```python
import timeit
import json

K = 8000000
x = []
for z in range(K):
    x.append(z)

def f1():
    # K writes: one small write call per element
    with open("f1.txt", "w") as f:
        for z in range(len(x)):
            f.write(str(z))

def f2():
    # 1 write: json serialization done first
    with open("f1.txt", "w") as f:
        f.write(json.dumps(x))

def f3():
    # 1 write: manual serialization via str.join
    with open("f1.txt", "w") as f:
        f.write(''.join(map(str, x)))

def f4():
    # writelines: one call, but the file object consumes K small strings
    with open("f1.txt", "w") as f:
        f.writelines(map(str, x))

N = 1
print(timeit.timeit('f1()', setup='from __main__ import f1', number=N))
print(timeit.timeit('f2()', setup='from __main__ import f2', number=N))
print(timeit.timeit('f3()', setup='from __main__ import f3', number=N))
print(timeit.timeit('f4()', setup='from __main__ import f4', number=N))
```
The output would be:
8.369193972488317
2.154246855128056
2.667741175272406
8.156553772208122
Here you can still see that a single write is faster than writelines, and that a simple serialization method like json.dumps is slightly faster than a manual one in pure Python, i.e. ''.join(map(str, x)).
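Worth noting: despite its name, writelines adds no separators of its own, so f3 and f4 produce identical file contents; the time gap between them is per-element overhead, not different output. A small demonstration using in-memory buffers:

```python
import io

x = list(range(5))

single = io.StringIO()
single.write(''.join(map(str, x)))   # one call with a pre-joined string

multi = io.StringIO()
multi.writelines(map(str, x))        # many small strings, no separators added

# Both strategies yield exactly the same content.
assert single.getvalue() == multi.getvalue() == "01234"
```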