
I wrote a network logger that works in a separate thread. The idea is to let the application push any amount of data and have the logger process it separately, without slowing down the main thread. The pseudocode looks like this:

void LogCoroutine::runLogic()
{
    mBackgroundWorker = std::thread(&LogCoroutine::logic, this);
    mBackgroundWorker.detach();
}

void LogCoroutine::logic()
{
    while (true)
    {
        _serverLogic();
        _senderLogic();

        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // 10ms
    }
}

void LogCoroutine::_senderLogic()
{
    std::lock_guard<std::mutex> lock(mMutex);    

    while (!mMessages.empty() && !mClients.empty())
    {
        std::string nextMessage = mMessages.front();
        mMessages.pop_front();

        _sendMessage(nextMessage);
    }
}

_serverLogic checks the socket for new connections (peers), and _senderLogic processes the message queue and sends each message to all connected peers.

And the last function, which pushes a message:

void LogCoroutine::pushMessage(const std::string& message)
{
    std::lock_guard<std::mutex> lock(mMutex);
    mMessages.push_back(message);
}

Everything works well as long as messages are not sent very often. However, there is a loop at application startup that logs a lot of information, and there the application hangs for 5-10 seconds; with logging disabled it doesn't slow down.

So, where is the bottleneck in this architecture? Is pushing each message under a mutex a bad idea?

Max Frai
  • I can't help screaming when I see sleep_for in a while(true) loop... Why not use a std::condition_variable instead? If I were you, I'd create a thread-safe queue class instead of having mutexes in the logger class – Kek Aug 14 '13 at 07:40
  • Why not take a ready and tested message queue, e.g. [ØMQ](http://zeromq.org/)? At one point I created [an example](https://github.com/d-led/zmqlogger) that only shows the principle, as your code is trying to. But there can be many more caveats: whether your hard drive I/O blocks the rest of the application because it waits for it as well, or perhaps you want to log over the network from multiple clients, etc. Check out ØMQ, I'd suggest. You can even use it to communicate within a process. All message passing is done asynchronously. Watch out for overfilled queues, though, if you're logging too fast – Dmitry Ledentsov Aug 14 '13 at 07:43
  • @Kek, e.g. see [g2log](http://www.codeproject.com/Articles/288827/g2log-An-efficient-asynchronous-logger-using-Cplus), which is based on a lock-free queue :). Combining that with a message queue is probably what the author wants to achieve – Dmitry Ledentsov Aug 14 '13 at 07:45
  • @DmitryLedentsov: in case of logging, you should probably define a maximum size for the queue and decide whether to drop incoming logs in case of overflow (much simpler than overwriting existing logs, because those are normally the domain of the sender) – Matthieu M. Aug 14 '13 at 07:46 (a minimal sketch of such a cap follows these comments)
  • @MatthieuM.: perhaps, yes, in any case. If one takes it seriously, then a load balancing network architecture might help. In 0mq there are some configuration options for [dropping or offloading messages](http://api.zeromq.org/2-1:zmq-setsockopt) – Dmitry Ledentsov Aug 14 '13 at 07:57
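
A minimal sketch of the size cap suggested above, reusing the members from the question; the constant kMaxQueueSize and the choice to drop the newest message are illustrative assumptions, not part of the original code:

    // Hypothetical variant of pushMessage() that stops the queue from growing
    // without bound during bursts: once it holds kMaxQueueSize entries (an
    // illustrative constant), new log lines are simply dropped.
    void LogCoroutine::pushMessage(const std::string& message)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        if (mMessages.size() >= kMaxQueueSize)
            return; // drop the incoming message rather than block the caller
        mMessages.push_back(message);
    }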

2 Answers


Your approach basically polls for log events at some interval (10 ms). This approach (which is in fact busy waiting) is not very efficient, since you always consume some CPU even when there are no log messages. On the other hand, when a new message does arrive, you don't notify the waiting thread, so it may sleep through most of the interval before processing it.

I would propose using some kind of blocking queue, which solves both issues. Internally, a blocking queue has a mutex and a condition variable, so the consumer thread waits (not busy looping!) while the queue is empty. I think your use case is just ideal for a blocking queue. You can quite easily implement your own queue based on a mutex + condition variable.
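
A minimal sketch of such a queue, assuming C++11; the class name BlockingQueue and its interface are made up for illustration:

    #include <condition_variable>
    #include <deque>
    #include <mutex>
    #include <string>

    // Illustrative blocking queue: push() never blocks for long, pop() sleeps
    // on a condition variable until a message is available.
    class BlockingQueue
    {
    public:
        void push(std::string message)
        {
            {
                std::lock_guard<std::mutex> lock(mMutex);
                mMessages.push_back(std::move(message));
            }
            mCondition.notify_one(); // wake the consumer if it is waiting
        }

        std::string pop()
        {
            std::unique_lock<std::mutex> lock(mMutex);
            // The predicate guards against spurious wakeups.
            mCondition.wait(lock, [this] { return !mMessages.empty(); });
            std::string message = std::move(mMessages.front());
            mMessages.pop_front();
            return message;
        }

    private:
        std::mutex mMutex;
        std::condition_variable mCondition;
        std::deque<std::string> mMessages;
    };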

Pushing each message under a mutex is not a bad idea; you have to synchronize it anyway. I would just propose getting rid of the polling.
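
One hypothetical way the original loop could use such a queue, assuming a timed tryPop() is added to the sketch above and that mQueue is a BlockingQueue member of LogCoroutine, so that _serverLogic still runs periodically:

    // Timed pop added to the BlockingQueue sketch (illustrative only): waits
    // until a message arrives or the timeout expires.
    bool BlockingQueue::tryPop(std::string& out, std::chrono::milliseconds timeout)
    {
        std::unique_lock<std::mutex> lock(mMutex);
        if (!mCondition.wait_for(lock, timeout, [this] { return !mMessages.empty(); }))
            return false; // timed out and the queue is still empty
        out = std::move(mMessages.front());
        mMessages.pop_front();
        return true;
    }

    void LogCoroutine::logic()
    {
        while (true)
        {
            _serverLogic(); // still poll for new connections

            // Block inside the queue instead of sleep_for: wakes immediately
            // when pushMessage() notifies, or after 10 ms to service the socket.
            // (Client bookkeeping from _senderLogic is omitted for brevity.)
            std::string message;
            while (mQueue.tryPop(message, std::chrono::milliseconds(10)))
                _sendMessage(message);
        }
    }

    void LogCoroutine::pushMessage(const std::string& message)
    {
        mQueue.push(message); // locking and notification live inside the queue
    }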

nogard

See this example: How to use work queues for producer & consumers (1 to many). Very well explained.
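
A minimal, self-contained sketch of that producer/consumer pattern (one producer feeding several worker threads through a shared queue); the code below is only illustrative and not taken from the linked example:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    int main()
    {
        std::mutex mutex;
        std::condition_variable cv;
        std::queue<std::string> work;
        bool done = false;

        // Several consumers share one queue guarded by a mutex + condition variable.
        std::vector<std::thread> workers;
        for (int i = 0; i < 3; ++i)
        {
            workers.emplace_back([&] {
                for (;;)
                {
                    std::unique_lock<std::mutex> lock(mutex);
                    cv.wait(lock, [&] { return done || !work.empty(); });
                    if (work.empty())
                        return;             // done and nothing left to process
                    std::string item = work.front();
                    work.pop();
                    lock.unlock();
                    // ... process item outside the lock ...
                }
            });
        }

        // Single producer pushes work and notifies one waiting consumer per item.
        for (int i = 0; i < 10; ++i)
        {
            {
                std::lock_guard<std::mutex> lock(mutex);
                work.push("task " + std::to_string(i));
            }
            cv.notify_one();
        }

        {
            std::lock_guard<std::mutex> lock(mutex);
            done = true;                    // signal shutdown after the last task
        }
        cv.notify_all();

        for (auto& w : workers)
            w.join();
    }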

Sammy