I have a large bytes object (raw audio data from a 16-bit WAVE file, about 8 million samples) that I need to convert to a list of integers for processing. So far I have been using a list comprehension with int.from_bytes for the conversion, but I have noticed it takes a considerable amount of time, and I am wondering whether there is a faster solution.
Here is my current method:
data = [int.from_bytes(raw[i * sampwidth:((i + 1) * sampwidth)], "little", signed=True) for i in range(len(raw) // sampwidth)]
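For reference, here is a minimal runnable version of that conversion. The raw bytes are built from synthetic samples via struct (an assumption for the sake of a self-contained example; in the real code they come from the WAVE file), with sampwidth = 2 for 16-bit samples:

```python
import struct

# Stand-in for the real file data: 16-bit little-endian signed
# samples packed into a bytes object (sampwidth == 2).
sampwidth = 2
samples = [0, 1, -1, 32767, -32768]
raw = struct.pack("<%dh" % len(samples), *samples)

# The conversion in question: one int.from_bytes call per sample.
data = [
    int.from_bytes(raw[i * sampwidth:(i + 1) * sampwidth], "little", signed=True)
    for i in range(len(raw) // sampwidth)
]
```

This round-trips correctly (data equals the original samples), so the slowness is purely the per-sample Python-level call overhead, not a correctness issue.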
On my machine this takes about 9 seconds per file (and I have multiple files) on a single core. Am I hitting Python's limits here, or is there a faster method?