I'd like to allocate memory for a memoryview whose element type is defined with a ctypedef, and return it as a NumPy ndarray. This question discusses a few allocation methods, but the catch is that I don't know how to programmatically map my custom ctypedef to the corresponding NumPy dtype or Python type code, which the allocation calls require.
For example:
from cython cimport view
import numpy as np

ctypedef int value_type  # actual type subject to change

# np.empty requires me knowing that Cython int maps to np.int32
def test_return_np_array(size_t N):
    cdef value_type[:] b = np.empty(N, dtype=np.int32)
    b[0] = 12  # from ctypedef int ^
    return np.asarray(b)

# or, the Cython memoryview requires the type code 'i'
def test_return_np_array(size_t N):
    cdef value_type[:] b = view.array(shape=(N,), itemsize=sizeof(int), format="i")
    b[0] = 12  # from ctypedef int ^
    return np.asarray(b)
I'm using the typedef so that I can flexibly change the actual data type (say, from int to long long) without having to modify all the code.
In pure Python, type checking is easy:
value_type = int
print(value_type is int) # True
print(value_type is float) # False
In NumPy this can also be easily achieved by parameterizing the dtype as a string, like value_type = "int32", then calling np.empty(N, dtype=value_type). With my ctypedef, Cython won't compile np.empty(N, dtype=value_type) and complains "'value_type' is not a constant, variable or function identifier". Is it possible to achieve something like this at compile time?
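For reference, this is the runtime string-parameterization that works fine in plain NumPy (the names value_type and make_array are just illustrative, not part of any API):

```python
import numpy as np

# Parameterize the element type as a dtype string; switching to
# "int64" later would not require touching the allocation code.
value_type = "int32"

def make_array(N):
    # np.dtype resolves the string to a concrete dtype at runtime.
    return np.empty(N, dtype=value_type)

a = make_array(4)
a[0] = 12
```

The sticking point is that this resolution happens at runtime, whereas a ctypedef exists only at Cython compile time.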
The user shouldn't have to manage the returned memory, so malloc won't be an option.
I came up with a hack using a C++ vector: <value_type[:N]>vector[value_type](N).data(), but this seems to cause memory errors, presumably because the temporary vector is destroyed at the end of the expression, leaving the memoryview pointing at freed memory.