
In the first call, cuda.mem_alloc allocated memory on the GPU, but in the second call cuda.mem_alloc apparently did not allocate anything, as you can see below. Both calls are made from the same cell in a Jupyter notebook. Can anyone explain the reason for this?

import numpy as np
#PyCUDA imports
import pycuda.driver as cuda
import pycuda.autoinit
#####################first call###############
print(cuda.mem_get_info()) #(16608854016, 17062100992)

distances = np.zeros(shape = 6, dtype = np.float32)
distances_gpu = cuda.mem_alloc(distances.nbytes)
print(cuda.mem_get_info()) #(16606756864, 17062100992)

#####################second call###############
print(cuda.mem_get_info()) #(16606756864, 17062100992)
d = np.zeros(shape = 6, dtype = np.float32)
d1 = cuda.mem_alloc(d.nbytes)
print(cuda.mem_get_info()) #(16606756864, 17062100992)
  • It does allocate memory. You just don't understand how malloc works or how to instrument memory management to observe its activity – talonmies May 30 '18 at 11:53
  • can u correct it and give me your insights about it – revanth kalavala May 30 '18 at 13:14
  • 1
    Read the other answers. The first memory allocate reserved much more than the 24 bytes you asked for, and the second allocation was made from the same page of memory that the first call reserved. It is completely incorrect to think that calling either malloc or free will increase or decrease the amount of free memory. That isn't how modern memory managers work. If the second call failed, it will raise a runtime error. I can guarantee that no such error was raised. – talonmies May 30 '18 at 13:31
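You can verify this from the numbers already printed in the question. The sketch below (plain Python, no GPU needed; the 2 MiB granularity is inferred from the printed values and may differ on other drivers and GPUs) checks that the first mem_alloc dropped free memory by exactly 2 MiB, far more than the 24 bytes requested, and that the second 24-byte allocation fit inside that already-reserved region, leaving the free-memory counter unchanged:

```python
# Free-memory values printed by cuda.mem_get_info() in the question.
free_before_first = 16608854016   # before the first mem_alloc
free_after_first = 16606756864    # after the first mem_alloc
free_after_second = 16606756864   # after the second mem_alloc

requested = 6 * 4  # 6 float32 values = 24 bytes per allocation

# The driver reserved a whole 2 MiB region, not just the 24 bytes requested.
first_delta = free_before_first - free_after_first
print(first_delta)                       # 2097152
print(first_delta == 2 * 1024**2)        # True: exactly 2 MiB
print(first_delta // requested)          # the region could hold many such buffers

# The second allocation was carved out of that same reserved region,
# so the device-wide free-memory counter did not move at all.
second_delta = free_after_first - free_after_second
print(second_delta)                      # 0
```

So mem_get_info reports device-level reservations, not individual sub-allocations; a second small mem_alloc that reuses an already-reserved region is invisible to it.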

0 Answers