I have a C library that I am wrapping in Python using `ctypes`. The C library contains many arrays (tens of thousands of arrays on the order of 5-100 elements each, plus a few much longer arrays) that I want to access as `numpy` arrays in Python. I thought that this would be straightforward using `numpy.ctypeslib.as_array`; however, when I profile my code using `cProfile`, I notice that it is much faster to use a Python loop to manually copy (!) data from the `ctypes` pointers to `numpy` arrays that I create on the Python side.

Is `ctypeslib.as_array` known to be slow? I would have thought it would be much faster just to interpret some memory as a numpy array than to copy it element-by-element inside a Python loop.
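For reference, here is a minimal sketch of the two approaches I am comparing; the buffer below is a made-up stand-in for the pointers the C library actually returns:

```python
import ctypes
import numpy as np

# Hypothetical stand-in for a pointer handed back by the C library.
n = 50
buf = (ctypes.c_double * n)(*range(n))
ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_double))

# Approach 1: interpret the memory in place with as_array (no copy).
view = np.ctypeslib.as_array(ptr, shape=(n,))

# Approach 2: manually copy element-by-element in a Python loop.
copied = np.empty(n, dtype=np.float64)
for i in range(n):
    copied[i] = ptr[i]
```

Profiling shows approach 2 winning when it is repeated over many small arrays.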
- Do you notice a discrepancy between the small and large arrays? For larger (1M element) arrays, in my own experience, it's much, much faster. – Daniel Jul 26 '13 at 19:43
- Used with an array, it creates an `__array_interface__` property on the ctypes array type. That would have to be done for each size/type of array. Used with a pointer, it creates the interface on the pointer object itself. Then it returns `array(obj, copy=False)` (see the sketch after these comments). – Eryk Sun Jul 26 '13 at 20:17
- Related question: [Getting data from ctypes array into numpy](http://stackoverflow.com/questions/4355524/getting-data-from-ctypes-array-into-numpy?rq=1) – Bakuriu Jul 27 '13 at 08:47
- @Bakuriu: But `as_array` now adds the `__array_interface__`, so that question is out of date. If the OP is using ctypes arrays instead of pointers, adding the interface only has to be done once for each size/type of array. – Eryk Sun Jul 27 '13 at 13:23
- Profiling seems to indicate that most of the time is spent inside the `array` function. I'm using ctypes pointers, not ctypes arrays. I'll look into small vs. large arrays. – ajd Jul 29 '13 at 22:23
- Yes, large arrays are converted plenty fast, but it appears that there is a lot of overhead when converting lots of small arrays. – ajd Jul 30 '13 at 16:40
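To illustrate the array-versus-pointer distinction Eryk Sun describes, here is a minimal sketch; the data is made up, and the comments restate his explanation rather than anything verified against the numpy source:

```python
import ctypes
import numpy as np

arr = (ctypes.c_double * 8)(*range(8))

# With a ctypes array, as_array attaches __array_interface__ to the array
# *type*, so that work happens once per size/type combination.
a = np.ctypeslib.as_array(arr)

# With a ctypes pointer, the interface is built on the pointer object
# itself, so the overhead recurs on every call.
ptr = ctypes.cast(arr, ctypes.POINTER(ctypes.c_double))
b = np.ctypeslib.as_array(ptr, shape=(8,))
```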