Suppose I have an RGB image with 3 bytes per pixel stored in a raw memory buffer. In NumPy I am able to create an ndarray from such a raw buffer with the following code:
import ctypes
import numpy as np
# ...
shp = (image.height, image.width, 3)
ptr = ctypes.cast(image.ptr, ctypes.POINTER(ctypes.c_ubyte))
arr = np.ctypeslib.as_array(ptr, shape=shp)
where image.ptr
is the actual native pointer to the allocated buffer. This works well with a trivial stride/row size, but it is very common to find bitmap memory layouts where the size of a row is bigger than strictly required. For example, a Windows GDI+ PixelFormat24bppRGB bitmap has a row size rounded up to the nearest multiple of 4 bytes, which can be computed (with integer division) as 4 * ((3 * width + 3) / 4)
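For concreteness, here is a minimal sketch of that stride computation (the helper name gdi_stride is my own, not a GDI+ API):

```python
def gdi_stride(width, bytes_per_pixel=3):
    # Round the raw row size (bytes_per_pixel * width) up to the
    # nearest multiple of 4 using integer division:
    # 4 * ((bytes_per_pixel * width + 3) // 4)
    return 4 * ((bytes_per_pixel * width + 3) // 4)

print(gdi_stride(4))  # 12 bytes: already a multiple of 4, no padding
print(gdi_stride(5))  # 16 bytes: 15 bytes of pixel data padded to 16
```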
. How can I modify the above code to create an ndarray
that correctly accesses such a custom bitmap memory layout?