- If `num_workers` is 2, does that mean it will put 2 batches in RAM and send 1 of them to the GPU, or does it put 3 batches in RAM and then send 1 of them to the GPU?
- What actually happens when the number of workers is higher than the number of CPU cores? I tried it and it worked fine, but how does it work? (I thought the maximum number of workers I could choose was the number of cores.)
- If I set `num_workers` to 3 and during training there are no batches in memory for the GPU, does the main process wait for its workers to read the batches, or does it read a single batch itself (without waiting for the workers)?
- Might be of interest: https://discuss.pytorch.org/t/guidelines-for-assigning-num-workers-to-dataloader/813 – Charlie Parker Mar 09 '21 at 19:25
1 Answer
- When `num_workers > 0`, only those workers retrieve data; the main process doesn't. So when `num_workers = 2`, you have at most 2 workers simultaneously putting data into RAM, not 3.
- A CPU can usually run something like 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is fine. But is it efficient? That depends on how busy your CPU cores are with other tasks, the speed of your CPU, the speed of your hard disk, and so on. In short, it's complicated, so setting the number of workers to the number of cores is a good rule of thumb, nothing more.
- No. Remember that `DataLoader` doesn't just return whatever happens to be available in RAM; it uses `batch_sampler` to decide which batch to return next. Each batch is assigned to a worker, and the main process waits until the desired batch has been retrieved by its assigned worker (see the small demo after this list).
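
Here is a small, hypothetical demo of that ordering guarantee (`SlowDataset` is made up for illustration): even when one worker's batch takes much longer to load, the main process still yields batches in the order the sampler produced them.

```python
import time
import torch
from torch.utils.data import Dataset, DataLoader

class SlowDataset(Dataset):
    """Toy dataset where samples 2 and 3 are artificially slow to load."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        time.sleep(1.0 if idx in (2, 3) else 0.01)
        return torch.tensor(idx)

if __name__ == "__main__":  # guard required for multi-worker loading on Windows/macOS
    loader = DataLoader(SlowDataset(), batch_size=2, num_workers=2)
    for batch in loader:
        # Prints tensor([0, 1]), tensor([2, 3]), tensor([4, 5]), tensor([6, 7]):
        # batch [4, 5] is ready before the slow batch [2, 3], but the main
        # process still waits for [2, 3] and yields batches in sampler order.
        print(batch)
```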
Lastly, to clarify: it isn't `DataLoader`'s job to send anything directly to the GPU; you explicitly call `cuda()` for that.
EDIT: Don't call `cuda()` inside `Dataset`'s `__getitem__()` method; please look at @psarka's comment below for the reasoning. A minimal sketch of the intended pattern follows.
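
A minimal sketch (toy `TensorDataset`, device chosen at runtime) of the intended split of responsibilities: the workers only assemble batches in RAM, and the main process moves each batch to the GPU explicitly.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

if __name__ == "__main__":
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 2, (100,)))
    loader = DataLoader(dataset, batch_size=16, num_workers=2, pin_memory=True)

    for inputs, targets in loader:
        # Move the whole batch at once in the main process, not per sample
        # inside Dataset.__getitem__().
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        # ... forward / backward pass goes here ...
```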

Shihab Shahriar Khan
- Just a remark to the last sentence - it is probably not a good idea to call `.cuda()` in the `Dataset` object, as it will have to move each sample (rather than the batch) to the GPU separately, incurring a lot of overhead. – psarka Sep 10 '19 at 13:10
- I also want to add that setting a number of workers higher than 0 on Windows might lead to errors (cf. https://discuss.pytorch.org/t/errors-when-using-num-workers-0-in-dataloader/97564/3). – Marine Galantin Mar 07 '21 at 19:19
- I have not tested this, but you may be able to move data to the GPU in your `collate_fn` function. Assuming that this function runs in parallel as well, it could speed things up. The potential problem is that you now have >= n_workers batches on the GPU, so memory could be restricted. – mkohler Apr 12 '23 at 17:14