
I found that torch.cuda.Stream() objects are created manually in some open-source code:

    self.input_stream = torch.cuda.Stream()
    self.model_stream = torch.cuda.Stream()
    self.output_stream = torch.cuda.Stream()

The PyTorch documentation says:

You normally do not need to create one explicitly: by default, each device uses its own “default” stream.

I'm trying to understand why they had to define these streams manually. A quick Google search turns up plenty of material on how to use cuda.Stream(), but nothing on why, when, or best practices for using it.
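For context, the usual reason code creates its own streams is to overlap data transfer with compute. The sketch below is a hypothetical reconstruction (the linear model, sizes, and variable names beyond the three streams are made up): the host-to-device copy, the forward pass, and the device-to-host copy each get their own stream, so consecutive batches can pipeline instead of serializing on the default stream.

```python
import torch

# Hypothetical sketch of why code creates its own streams: overlap the
# host-to-device copy, the forward pass, and the device-to-host copy.
# On the default stream these three steps would run strictly in order.
if torch.cuda.is_available():
    input_stream = torch.cuda.Stream()
    model_stream = torch.cuda.Stream()
    output_stream = torch.cuda.Stream()

    model = torch.nn.Linear(1024, 1024).cuda()
    batch = torch.randn(64, 1024).pin_memory()  # pinned memory enables async copies

    with torch.cuda.stream(input_stream):
        gpu_batch = batch.to("cuda", non_blocking=True)

    model_stream.wait_stream(input_stream)   # compute waits only for the copy-in
    with torch.cuda.stream(model_stream):
        result = model(gpu_batch)

    output_stream.wait_stream(model_stream)  # copy-out waits only for the compute
    with torch.cuda.stream(output_stream):
        cpu_result = result.to("cpu", non_blocking=True)

    torch.cuda.synchronize()  # join all streams before touching cpu_result
    finished = cpu_result.shape == (64, 1024)
else:
    finished = True  # streams are a CUDA-only concept; nothing to show on CPU
```

With a single batch this buys little; the payoff comes when the next batch's copy-in starts while the current batch is still computing.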

asked by aerin (edited by Robert Crovella)

2 Answers


Streams are sequences of CUDA kernels. Operations in different streams may run in parallel. I don't believe they have to use them; they are just making the code more parallel and thus, hopefully, faster.
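A minimal sketch of this answer's point (the matrices and sizes are made up for illustration): two independent operations placed on different streams may execute concurrently, whereas on the default stream they would queue one after the other.

```python
import torch

# Two independent matmuls on separate streams *may* overlap on the GPU;
# whether they actually do depends on free SM capacity.
if torch.cuda.is_available():
    s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
    a = torch.randn(2048, 2048, device="cuda")
    b = torch.randn(2048, 2048, device="cuda")

    with torch.cuda.stream(s1):
        out1 = a @ a  # may run concurrently with the matmul below
    with torch.cuda.stream(s2):
        out2 = b @ b

    torch.cuda.synchronize()  # join both streams before reading results
    done = out1.shape == out2.shape == (2048, 2048)
else:
    done = True  # no GPU, nothing to overlap
```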

– Michal Hradiš

I read a Stack Overflow post about this. It is also mostly about how to use streams, but after reading it, this is what I took away:

  • Why/when to use a stream: to parallelize some computations.
  • Best practice: use streams to speed up computationally heavy tasks through parallelization. (Mostly; you are welcome to comment if you find a better answer.)
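One caveat worth adding to the bullets above: when a tensor produced on one stream is consumed on another, you must synchronize explicitly, or the consumer may read unfinished data. A hedged sketch using a CUDA event (the tensors and sizes here are made up):

```python
import torch

# Cross-stream dependency: the consumer stream waits on an event recorded
# in the producer stream, rather than synchronizing the whole device.
if torch.cuda.is_available():
    producer, consumer = torch.cuda.Stream(), torch.cuda.Stream()

    with torch.cuda.stream(producer):
        x = torch.randn(1024, 1024, device="cuda")
        y = x * 2

    event = torch.cuda.Event()
    event.record(producer)       # mark the point where y is ready
    consumer.wait_event(event)   # consumer blocks only up to that point

    with torch.cuda.stream(consumer):
        z = y.sum()

    torch.cuda.synchronize()  # join everything before reading z on the host
    ok = bool(torch.isfinite(z).item())
else:
    ok = True  # no GPU, nothing to synchronize
```

Using an event (or Stream.wait_stream) is cheaper than torch.cuda.synchronize() inside a pipeline, because it orders just the two streams involved.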

Reference:

– Joyanta J. Mondal
  • Where did you get the information that torch allows only 2 streams? Even the code in the question contains 3. If I use 8 GPUs I definitely use at least 8 streams. – Michal Hradiš Dec 01 '21 at 15:22
  • I might be wrong here since I don't have access to more GPUs, and it is a piece of very limited knowledge, so I am removing that sentence. Thanks for pointing it out. – Joyanta J. Mondal Dec 01 '21 at 16:50