I have been reading some posts and books about .NET/C#/the CLR and so on, and found the following slide in a Microsoft presentation from 2005:
- GC takes time – “% time in GC” counter
- If objects die in gen0 (survival rate is 0) it’s the ideal situation
- The longer the object lives before being dead, the worse (with exceptions)
- Gen0 and gen1 GCs should both be relatively cheap; gen2 GCs could cost a lot
- LOH – different cost model
- Temporary large objects could be bad
- Should reuse if possible
My question is: what does "Should reuse if possible" mean? Does it mean that the CLR reuses already-allocated LOH memory for new objects, or that the user (the developer, in our case) should do the reusing?
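To make the second interpretation concrete, here is a minimal sketch of what I imagine developer-side reuse would look like (the class name, buffer size and method are purely my own illustration, not from the slide):

```csharp
using System.IO;

class ReportBuilder
{
    // A 1 MB array is well over the 85,000-byte threshold, so it lands on the LOH.
    // Allocate it once and keep reusing it instead of creating a new temporary
    // large array on every call.
    private readonly byte[] _buffer = new byte[1024 * 1024];

    public void Process(Stream input)
    {
        int read;
        while ((read = input.Read(_buffer, 0, _buffer.Length)) > 0)
        {
            // Work with _buffer[0..read] here rather than allocating
            // a fresh large temporary array each time.
        }
    }
}
```

Is this the kind of reuse the slide is talking about, or does the CLR itself take care of recycling LOH memory?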