I want to read a big file (>= 1 GB) into memory:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from subprocess import check_output
from shlex import split

# /dev/zero opened in the default text mode
zeroes = open('/dev/zero')

SCALE = 1024
B = 1
KB = B * SCALE
MB = KB * SCALE
GB = MB * SCALE

def ck(label):
    # Print a labelled snapshot of `free -m`
    print('{}:\n{}\n'.format(label, check_output(split('free -m')).decode()))

ck('## Before')
buffer = zeroes.read(GB)  # read 1 GiB worth of characters
ck('## After')
Output:

## Before:
              total        used        free      shared  buff/cache   available
Mem:          15953        7080        6684         142        2188        8403
Swap:          2047           0        2047

## After:
              total        used        free      shared  buff/cache   available
Mem:          15953        9132        4632         142        2188        6351
Swap:          2047           0        2047
So 6684 - 4632 = 2052 MB were consumed, which is almost twice the expected 1 GB.
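To see how much of that the Python string itself accounts for, here is a small cross-check I can run (my own sketch; sys.getsizeof and tracemalloc only see interpreter-level allocations, so it may not explain everything):

import sys
import tracemalloc

GB = 1024 ** 3

tracemalloc.start()
with open('/dev/zero') as zeroes:  # text mode, as in the script above
    buffer = zeroes.read(GB)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# If I understand CPython's compact str representation correctly, an
# all-ASCII string of 2**30 characters should itself be only about 1 GiB.
print('str object size: {:.0f} MiB'.format(sys.getsizeof(buffer) / 2**20))
print('current traced:  {:.0f} MiB'.format(current / 2**20))
print('peak traced:     {:.0f} MiB'.format(peak / 2**20))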
Tests with dd show the expected results:
# mkdir -p /mnt/tmpfs/
# mount -t tmpfs -o size=1000G tmpfs /mnt/tmpfs/
# free -m
              total        used        free      shared  buff/cache   available
Mem:          15953        7231        6528         144        2192        8249
Swap:          2047           0        2047
# dd if=/dev/zero of=/mnt/tmpfs/big_file bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.695143 s, 1.5 GB/s
# free -m
              total        used        free      shared  buff/cache   available
Mem:          15953        7327        5406        1168        3219        7129
Swap:          2047           0        2047
What is the problem? Why does Python use twice as much memory?
What is the best practice to achieve the desired output* in Python 3.x?

* Desired output: Python uses the same amount of memory as dd.
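For reference, this is the variant I would try first (an untested sketch on my side, assuming that opening the file in binary mode skips the text decoding and returns a plain bytes object):

#!/usr/bin/env python3
# Same measurement as above, but reading in binary mode ('rb')
from subprocess import check_output
from shlex import split

GB = 1024 ** 3

def ck(label):
    print('{}:\n{}\n'.format(label, check_output(split('free -m')).decode()))

ck('## Before')
with open('/dev/zero', 'rb') as zeroes:
    buffer = zeroes.read(GB)  # bytes object, no str conversion
ck('## After')

Is this the right approach, or is there a better way, e.g. readinto() with a preallocated bytearray?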