
My (Linux) server has two public IPs and I would like to make some parallel connections to the same or different servers (my server acts as the client here; it just runs a C++ program that communicates with other servers to fetch some data).

Let's suppose I want to establish 100 parallel connections. Is there any difference in performance or stability, from the point of view of the OS, between making 100 connections from the same IP, or 50 connections from the first IP and another 50 connections from the second one?

In other words, is there any difference (is it safer) if I distribute the connections among the different available (local) IPs?

MAYBE RELATED: https://stackoverflow.com/a/3923785/1794803.

ABu
  • Performance is only going to be different if one or both of the peers uses a different route for the two IPs. If the route is the same then it's not even safer, as there is still a single point of failure. – cdhowie Jan 04 '18 at 00:08
  • @cdhowie That's right, but that is a thing that I cannot control. Read the "related" answer for further things to be taken into consideration. That linked answer is what brought me to wonder about this. – ABu Jan 04 '18 at 00:47
  • Is each IP assigned to its own NIC? Anyway, I am pretty sure that splitting up the connections as suggested does not really affect performance, at least not with these numbers. You may run into some buffer issues with much bigger numbers, though. – Nils Magnus Jan 04 '18 at 01:16
  • I don't know if Linux is like this, but many years ago I worked with Solaris servers, and they had per-virtual interface network queues. We were able to handle more incoming queries (they were DNS servers) by spreading the load over multiple IPs. – Barmar Jan 04 '18 at 01:16
  • @NilsMagnus I don't know. I have to ask that to my server company. What if it has? – ABu Jan 04 '18 at 01:24
  • If the IPs are spread over several pieces of hardware, they may be able to handle more packets (per second) or larger quantities of octets. But to run into these issues, you need way more than 100 simultaneous connections. Where is your peer located? Somewhere on the Internet? Directly attached to a local high-speed network (10 GBit+)? What latencies do you expect, and what is acceptable for your application? – Nils Magnus Jan 04 '18 at 01:37
  • @NilsMagnus The "100 connections" was just an example. I'm actually connecting to multiple trading servers, some of them behind the cloudflare network which bans IPs if they make more than 30 queries per minute! and what I need to make is about 4000 HTTP queries per minute, so I'm using a multi-ip proxy to avoid bans to make trading decisions each 2 seconds. Besides, I will send each query two or three times over different proxies to get the first that arrives and discard the others, to try to fit below the 2 seconds/per decision limit. – ABu Jan 04 '18 at 01:54
  • @NilsMagnus So the number of connections depends on how many trading platforms I'm connected to, and the features and restrictions of each of them. In the end, I'm looking for an optimal way to handle that number of connections so the program scales well as the app grows. And according to the linked answer, in the default configuration, Linux cannot "consistently guarantee more than ... 470 sockets per second". That's why I asked whether spreading the connections over my two IPs will reduce the error rate, increase performance, or whatever. Each HTTP response is below 10 KiB, though. – ABu Jan 04 '18 at 02:01
  • @Peregring-lk Wait, so you're trying to circumvent restrictions on a website? I'm not sure anyone here is going to be comfortable helping you do that. – cdhowie Jan 04 '18 at 04:18

1 Answer


Outgoing TCP connections also have port numbers assigned to them. These are 16-bit numbers, resulting in 65,535 possible connections at one single point in time (port 0 has a special meaning). After a connection is torn down, the TCP protocol requires it to stay in a special state, TIME-WAIT (see http://www.tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm for a more complete description of the finite state machine). This is usually preconfigured to around 60 seconds. With some extra tricks, the period the source port resource stays in TIME-WAIT can be significantly lowered. However, these two parameters in fact limit the number of connections at a time. All these restrictions apply to a single IP address. If you have n IP addresses, your TCP/IP stack is able to maintain n times as many connections.

Be careful with potential NAT gateways between your client and the servers: if you run a huge number of parallel connections, these routers' NAT tables may or may not be able to deal with that many entries.

In general, I am not sure your architecture suits your use case. There may be reasons your servers allow only a limited number of connections. Coding around these shaping mechanisms might just lead to a hare-and-tortoise race.

Nils Magnus