
I have a service that uses a LoadBalancer in order to expose an application externally on a certain IP. I am using MetalLB because my cluster is bare metal.

This is the configuration of the service: (screenshot of the Service configuration, not reproduced here)

Inside the cluster, the running application binds a ZMQ socket (TCP transport) like:

m_zmqSock->bind(endpoint);

where endpoint = tcp://127.0.0.1:1234 and

m_zmqSock = std::make_unique<zmq::socket_t>(*m_zmqContext,zmq::socket_type::pair);
m_zmqSock->setsockopt(ZMQ_RCVTIMEO,1);

Then, from an application on my local computer (with access to the cluster), I am trying to connect and send data like:

zmqSock->connect(zmqServer);

where zmqServer = tcp://192.168.49.241:1234 and

zmq::context_t ctx;
auto zmqSock = std::make_unique<zmq::socket_t>(ctx,zmq::socket_type::pair);

Any idea how I could make the ZMQ socket connect from my host, so that I can send data to the application and also receive a response?

user3666197
panurjoma

1 Answer


Q: "Any idea how I could make the ZMQ socket connect from my host, so that I can send data to the application and also receive a response?"

Welcome to ZeroMQ - let's sketch a work-plan:

  1. let's prove that ZeroMQ can be served with end-to-end visibility working:
  • for doing this, use the PUSH-PULL pattern, fed from the cluster side by an aPushSIDE->send(...) of regularly spaced, timestamped messages, using also a resources-saving setup there via aPushSIDE->setsockopt( ZMQ_CONFLATE, 1 ), so that only the newest message is ever kept queued
  2. once you can confirm that your localhost's PULL-end recv()-s the regular updates, feel free to also add an up-stream link from the localhost towards the cluster-hosted code, again using a PUSH-PULL pattern in the opposite direction.

Why a pair of PUSH-PULL-s here?

First, it helps isolate the root cause of the problem. Next, it allows you to separate concerns and control each of the flows independently of any other. ( Details of control loops with many interconnects — with different flows, different priority levels and different error-handling procedures — so commonly all use exclusively the non-blocking forms of the recv()-methods, plus multi-level poll()-methods for soft-control of the maximum permitted time spent ( wasted ) on testing for a new message arrival; these go beyond the scope of this Q/A text — feel free to read further on this formal event-handling framing and on using the low-level socket-monitor diagnostics. )

Last, but not least, the PAIR-PAIR archetype has been reported in the ZeroMQ native API documentation as "experimental" for most of my ZeroMQ-related life ( since v2.1, yeah, that long ). Accepting that fact, I have never used the PAIR archetype on any Transport Class other than the pure in-RAM, network-protocol-stack-less inproc: "connections" ( which are not actually connections at all, but a Zero-Copy, almost Zero-Latency smart pointer-to-memory-block passing trick among some co-operating threads of the same process ).

user3666197