
Hi everyone! I tried to use

% Train 1000 copies of the sample network, one per parfor iteration, on the GPU
parfor i = 1:1000
    net{i} = train(net_sample, x, t, 'useGPU', 'yes');
end

but it failed. Is there any way to train them simultaneously? Or is there code in another programming language for training multiple networks at the same time?

For a simple example,
let's assume we have a network of 2 x 2 x 1 neurons that takes 10 training input vectors of size 5 x 1.
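
To make that concrete, here is a minimal MATLAB sketch of the setup above (the names net_sample, x and t mirror my snippet; the data values are just placeholders):

% 10 training vectors of size 5 x 1 and a 2 x 2 x 1 network
% (two hidden layers of 2 neurons each, 1 output neuron).
x = rand(5, 10);                      % 10 input column vectors, 5 elements each
t = rand(1, 10);                      % 10 scalar targets
net_sample = feedforwardnet([2 2]);   % hidden layer sizes; output size comes from t
n1 = train(net_sample, x, t, 'useGPU', 'yes');   % a single training call like this works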

  • How do you expect to have multiple things trying to use the *same* GPU at the same time? – Suever Jan 28 '17 at 21:57
  • What is the error message? How much resources (RAM) does each network take? – mpaskov Jan 28 '17 at 21:59
  • Suever, I thought MATLAB could do some optimization automatically. – user7484269 Jan 28 '17 at 22:10
  • mpaskov, I don't have a GPU on my notebook right now, but the error message looked like "cannot use gpu in parfor" or "gpu is unavailable". When I trained only one net, it worked: n1 = train(net_sample, x, t,'useGPU','yes'); I tried to train only 10 networks with 16 GB RAM. The main problem was the GPU. – user7484269 Jan 28 '17 at 22:12
  • I think GPU does not have enough memory to train 10 networks. Check GPU usage for 1 network and then multiply by 10. It should be more than 16 GB. 1000 networks on one GPU is way over the limit. I don't know about MATLAB, but I have trained two networks on one GPU simultaneously using Theano. – Autonomous Jan 28 '17 at 22:17
  • Let's assume we have a network of 2 x 2 x 1 neurons that takes 10 training input vectors of size 5 x 1. Can the GPU train 1,000 such networks simultaneously? – user7484269 Jan 28 '17 at 22:24
  • Since the network is so small, from a GPU memory perspective it may be possible. However, the error says that you cannot use the GPU inside `parfor`. So probably the problem is with `parfor`, not with the GPU. – Autonomous Jan 28 '17 at 22:43
  • Very simple, just buy 1000 GPUs – Mendi Barel Jan 29 '17 at 01:16

1 Answer


If you don't have more than one GPU, there's just no point in doing this. Every worker in your pool is competing for the same resources, and if your network is anything more than trivially small, the GPU will be fully utilized, so you can't even get any benefit out of using MPS (NVIDIA's Multi-Process Service).
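
As a rough sketch of what does work on a single GPU (assuming the same net_sample, x and t as in the question), you can either train the networks one at a time on the GPU, or, since these networks are tiny, run the trainings in parallel on the pool's CPU workers instead:

% Option 1: keep the GPU but train sequentially, so each call gets the
% whole device to itself.
netsGPU = cell(1, 1000);
for i = 1:1000
    netsGPU{i} = train(net_sample, x, t, 'useGPU', 'yes');
end

% Option 2: networks this small barely benefit from a GPU, so train them
% in parallel on CPU workers instead (no 'useGPU' inside the parfor).
netsCPU = cell(1, 1000);
parfor i = 1:1000
    netsCPU{i} = train(net_sample, x, t);
end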

Joss Knight
  • But why is a GPU used for acceleration? Because of its many CUDA cores, so it can run parallel operations faster. Am I right? Again, let's assume we have 1,000 networks of 2 x 2 x 1 neurons that take 10 training input vectors of size 5 x 1. Can the GPU be faster in this case? – user7484269 Jan 30 '17 at 20:35
  • Joss Knight, have you ever used other languages for training neural networks on a GPU? – user7484269 Jan 30 '17 at 20:39
  • A Tesla K40 has 2880 CUDA cores, so in theory it could perform 2880 simultaneous floating-point operations. The last fully connected layer of AlexNet needs to perform 4 million multiplications per observation. So you can see how even running a single input through a typical neural network is quickly going to fully occupy your GPU. In your example network with only 4 neurons, you're right, you're not fully occupying the GPU. If you are on Linux you may get some benefit from using MPS as mentioned above. – Joss Knight Feb 01 '17 at 11:17
  • If by 'languages' you mean 'libraries' then yes, but I'm not sure what you're getting at. The only way to automatically share a GPU's resources (without MPS) is for the operations to share the same process, which means using threads and streams. Perhaps there are tools that do this for you, but I suspect it's something you mostly have to do yourself. But I freely admit that it's not something MATLAB can do. – Joss Knight Feb 01 '17 at 11:26
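
Written out, the back-of-envelope arithmetic from the comments above looks roughly like this (the 4096 x 1000 shape of AlexNet's last fully connected layer is an assumption used only for illustration):

% Rough arithmetic behind the comments above (layer shape is assumed).
fcInputs  = 4096;                               % inputs to AlexNet's last FC layer
fcOutputs = 1000;                               % ImageNet classes
multsPerObservation = fcInputs * fcOutputs      % ~4.1 million multiplications
cudaCores = 2880;                               % Tesla K40
wavesOfWork = multsPerObservation / cudaCores   % ~1,400 full passes over the cores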