
I originally had a race condition when sending data. The issue was that I was allowing multiple SocketAsyncEventArgs instances to be used to send data, and the first packet hadn't finished sending before the second one started. This is because if the data doesn't fit in the buffer, I loop until all of it is sent; the first packet was much larger than the second, tiny packet, so the second packet was sent and reached the client before the first.

I have solved this by assigning one SocketAsyncEventArgs to each open connection for sending data, using a Semaphore to limit access to it, and having the SocketAsyncEventArgs call back once it completes.

Now this works fine because all data is sent, and it calls back when complete, ready for the next send. The problem is that it causes blocking when I want to send data to the open connection at arbitrary times, and when there is a lot of data being sent it is going to block my threads.

I am looking for a workaround. I thought of having a Queue: when data is requested to be sent, the packet is simply added to the Queue, and a single SocketAsyncEventArgs loops to send that data.

But how can I do this efficiently while still being scalable? I want to avoid blocking as much as I can while sending my packets in the order they were requested to be sent.

Appreciate any help!

Matty

1 Answer


If the data needs to be kept in order, and you don't want to block, then you need to add a queue. The way I do this is by tracking, on my state object, whether we already have an active send async-loop in progress for that connection. After the enqueue (which obviously must be synchronized), just check whether a send is in progress:

    public void PromptToSend(NetContext context)
    {
        if(Interlocked.CompareExchange(ref writerCount, 1, 0) == 0)
        { // then **we** are the writer
            context.Handler.StartSending(this);
        }
    }

Here writerCount is the count of write-loops (which should be exactly 1 or 0) on the connection; if there aren't any, we start one.
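For illustration, the enqueue side of this pattern could be sketched as follows. This is my own minimal reconstruction, not Marc's actual code: `ConcurrentQueue`, the `OutboundQueue` class, and `StartSending` are assumed names standing in for his `NetContext`/`Handler` types.

```csharp
using System.Collections.Concurrent;
using System.Threading;

public class OutboundQueue
{
    private readonly ConcurrentQueue<byte[]> queue = new ConcurrentQueue<byte[]>();
    private int writerCount; // 0 = no send loop running, 1 = a send loop is active

    // Called from any thread that wants to send a packet.
    public void Enqueue(byte[] packet)
    {
        queue.Enqueue(packet); // thread-safe enqueue
        PromptToSend();
    }

    private void PromptToSend()
    {
        // Only one caller wins the 0 -> 1 transition and becomes the writer;
        // everyone else just leaves their packet on the queue.
        if (Interlocked.CompareExchange(ref writerCount, 1, 0) == 0)
        {
            StartSending(); // hypothetical: dequeues and starts the async send loop
        }
    }

    private void StartSending()
    {
        // dequeue + SendAsync loop, as described in the rest of the answer
    }
}
```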

My StartSending tries to read from that connection's queue; if it can do so, it does the usual SendAsync etc:

    if (!connection.Socket.SendAsync(args)) SendCompleted(args);

(note that SendCompleted here is for the "sync" case; it would have got to SendCompleted via the event-model for the "async" case). SendCompleted repeats this "dequeue, try send async" step, obviously.
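As a rough sketch of that completion step (again my own reconstruction, assuming a `ConcurrentQueue<byte[]>` field named `queue`, a `connection` with a `Socket`, and the `writerCount` flag from above):

```csharp
using System.Net.Sockets;
using System.Threading;

// Wired up once per connection:
//   args.Completed += (sender, e) => SendCompleted(e);

private void SendCompleted(SocketAsyncEventArgs args)
{
    if (args.SocketError != SocketError.Success)
    {
        // handle the error / tear down the connection here
        return;
    }

    byte[] next;
    if (queue.TryDequeue(out next))
    {
        args.SetBuffer(next, 0, next.Length);
        // SendAsync returning false means it completed synchronously,
        // so we must invoke the callback ourselves.
        if (!connection.Socket.SendAsync(args)) SendCompleted(args);
    }
    else
    {
        // Queue drained: report this worker as inactive so the next
        // enqueue can start a fresh send loop.
        Interlocked.Exchange(ref writerCount, 0);
    }
}
```

Note this is a simplification: a production version would typically loop rather than recurse on the synchronous-completion path, to avoid deep stacks when many sends complete synchronously.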

The only thing left is to make sure that when we try to dequeue, we note the lack of action if we find nothing more to do:

        if (bufferedLength == 0)
        {  // nothing to do; report this worker as inactive
            Interlocked.Exchange(ref writerCount, 0);
            return 0;
        }

Make sense?

Marc Gravell
  • So would I make a queue, when something is added to the queue, check to see if a send operation is already in progress, if it isn't then start one and it keeps looping until the queue is empty? Once the queue is empty from the sending change the writer count to 0 and it will re-start the sending process when it needs to? – Matty Jul 25 '12 at 12:12
  • @mattysouthall exactly. Just note that "keeps looping" there is an *async* loop, not a regular loop (`while`) etc. So basically, whenever you get the "done" event, and you've checked it was OK etc, then start the next one going. – Marc Gravell Jul 25 '12 at 13:03
  • All done and working great! My logging shows it is acting exactly how I want it to, thank you! – Matty Jul 25 '12 at 15:20