Both code samples are useful, just in different scenarios. Let's take your first example and flesh it out with some code:
Socket socket = ...;
Stream stream = ...; // the input stream we're reading from
byte[] buffer = new byte[1024 * 32];
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    socket.BeginSend(buffer, 0, bytesRead, SocketFlags.None, null, null);
    // Wait for completion, doing something else meanwhile?
}
In this case the buffer is reused on each iteration, so yes, its content is overwritten each time; that's the intended behavior because you read one chunk at a time, use it, and move on.
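One caveat hinted at by the comment above: BeginSend() is asynchronous, so if the next Read() starts before the previous send has completed, you may overwrite data that is still being transmitted. A minimal sketch of a safe variant (just one of several possible approaches) is to pair each BeginSend() with an EndSend() before reusing the buffer:
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    IAsyncResult sendResult =
        socket.BeginSend(buffer, 0, bytesRead, SocketFlags.None, null, null);

    // ...do some other useful work here while the send is in flight...

    // Block until the send completes; after this the buffer is safe to reuse.
    socket.EndSend(sendResult);
}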
Your second example is quite different: you're filling a single buffer with the data you read, but you don't read it all in one call; you read a smaller chunk each time, so the offset into the target buffer must increase.
How do they differ? In the first case the buffer can be as small as you like (ideally even a single byte); multiple reads consume the input stream. In the second case the buffer must be big enough to accommodate all the data you need.
// Note: we need to know the file size in advance and the buffer must be
// big enough to accommodate all the data we need.
int count = (int)source.Length; // e.g. source is a seekable FileStream
byte[] buffer = new byte[count];
int read, offset = 0;
while (count > 0 && (read = source.Read(buffer, offset, count)) > 0)
{
    socket.BeginSend(buffer, offset, read, SocketFlags.None, null, null);
    // Here we don't need to wait for BeginSend() to complete:
    // each send works on its own region of the buffer.
    offset += read;
    count -= read;
}
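A hedged side note: even though the loop itself doesn't have to wait, you'd usually still want to complete the pending sends eventually, if only to observe errors. One possible sketch (this is my addition, not part of the original pattern) collects the IAsyncResult handles and completes them after the loop:
// Requires: using System.Collections.Generic;
var pendingSends = new List<IAsyncResult>();
while (count > 0 && (read = source.Read(buffer, offset, count)) > 0)
{
    pendingSends.Add(
        socket.BeginSend(buffer, offset, read, SocketFlags.None, null, null));
    offset += read;
    count -= read;
}

// EndSend() throws if the corresponding send failed.
foreach (IAsyncResult sendResult in pendingSends)
    socket.EndSend(sendResult);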
Which one is better? Hard to say. If you allocate all the memory in one shot, you seldom need to read chunk by chunk with an increasing offset (the only cases I can think of are a performance optimization driven by the block size of the input stream, or when you want to start processing received data in parallel while still reading new data). In contrast, allocating a buffer big enough to accommodate all the data has at least two big drawbacks:
- you must know the file size in advance (and that's not always possible);
- if the file is big enough, you'll run out of memory.
In general (IMO) the first method (reusing the same buffer) is pretty good in most situations: the performance gain you may get from a single read (and non-blocking sends) is negligible in most network scenarios, and the drawbacks are serious. To summarize:
                              1    2
Works with unknown file size  yes  no
Can run out of memory         no   yes
Parallel processing friendly  no   yes
Performance optimized         no   yes
Of course you may also "mix" both methods: one big circular buffer with multiple smaller reads; for each read you advance the offset pointer and start a new read, while in parallel you process the previous chunk(s). This gives you the advantages of both methods, but it's a little more tricky to tune (because of concurrent access and possibly overlapping reads/writes); see the sketch below.
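To make that concrete, here's a minimal sketch of the mixed approach; the chunk size, region count, and fire-and-forget callback are my assumptions for illustration. It shows the overlapping structure only: a real implementation must also guarantee that a region's send has completed before the writer wraps back onto it.
// Hypothetical sketch: one big circular buffer, many small reads.
const int chunkSize = 4 * 1024;
const int chunkCount = 8;
byte[] ring = new byte[chunkSize * chunkCount];
int writeOffset = 0;
int read;

while ((read = stream.Read(ring, writeOffset, chunkSize)) > 0)
{
    // Send this region; earlier regions may still be in flight,
    // so reads and sends overlap in time.
    socket.BeginSend(ring, writeOffset, read, SocketFlags.None,
        ar => socket.EndSend(ar), null);

    // Advance to the next region, wrapping at the end of the ring.
    writeOffset = (writeOffset + chunkSize) % ring.Length;
}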