
Is there a reliable way in Windows, apart from changing the routing table, to force a newly created socket to use a specific network interface? I understand that bind() to the interface's IP address does not guarantee this.

– Ofir

2 Answers


(OK, second time lucky...)

FYI, there's another question along the same lines here: perform connect() on specific network adapter.

According to The Cable Guy:

Windows XP and Windows Server® 2003 use the weak host model for sends and receives for all IPv4 interfaces and the strong host model for sends and receives for all IPv6 interfaces. You cannot configure this behavior. The Next Generation TCP/IP stack in Windows Vista and Windows Server 2008 supports strong host sends and receives for both IPv4 and IPv6 by default on all interfaces except the Teredo tunneling interface for a Teredo host-specific relay.

So to answer your question (properly, this time): on Windows XP and Windows Server 2003, no for IPv4 but yes for IPv6; on Windows Vista and Windows Server 2008, yes (except in certain circumstances).

Also from https://forums.codeguru.com/showthread.php?487139-Socket-binding-with-routing-table

On Windows, a call to bind() affects card selection only for incoming traffic, not outgoing traffic. Thus, on a client running in a multi-homed system (i.e., more than one interface card), it's the network stack that selects the card to use, and it makes its selection based solely on the destination IP, which in turn is based on the routing table. A call to bind() will not affect the choice of the card in any way.

It's got something to do with something called a "Weak End System" ("Weak E/S") model. Vista changed to a strong E/S model, so the issue might not arise under Vista. But all prior versions of Windows used the weak E/S model.

With a weak E/S model, it's the routing table that decides which card is used for outgoing traffic in a multihomed system.

See if these threads offer some insight:

"Local socket binding on multihomed host in Windows XP does not work" at http://www.codeguru.com/forum/showthread.php?t=452337

"How to connect a port to a specified Networkcard?" at http://www.codeguru.com/forum/showthread.php?t=451117. This thread mentions the CreateIpForwardEntry() function, which (I think) can be used to create an entry in the routing table so that all outgoing IP traffic with a specified server is routed via a specified adapter.
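To make the routing-table approach concrete: adding a host route pins all traffic for one destination onto a chosen adapter, which is essentially what CreateIpForwardEntry() automates from code. A one-off equivalent from an elevated command prompt might look like this (the destination, gateway, and interface index 12 below are made-up placeholders; list your real interface indexes with `route print`):

```shell
rem Hypothetical example: route everything destined for 203.0.113.5
rem via gateway 192.168.202.1 on the adapter with interface index 12.
route add 203.0.113.5 mask 255.255.255.255 192.168.202.1 metric 1 if 12
```

Because the route is keyed on the destination address, this forces the outgoing interface for that destination regardless of which socket sends the traffic.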

"Working with 2 Ethernet cards" at http://www.codeguru.com/forum/showthread.php?t=448863

"Strange bind behavior on multihomed system" at http://www.codeguru.com/forum/showthread.php?t=452368

Hope that helps!

– Ezz
  • Thanks. Unless I am missing something, neither answer is relevant, as SO_BINDTODEVICE is not applicable to Windows. I suspect your summary is correct though, even though I do not have the evidence. – Ofir Jan 18 '10 at 11:56
  • Oops right you are! Updated my answer above with better info - sorry! – Ezz Jan 18 '10 at 16:18
  • Note that the weak and strong host models do not affect how traffic sent from a socket is routed. I.e. the strong host model ensures that traffic leaving a network adapter uses that adapter's assigned IP address, but does not affect how the network stack in the host itself selects the outgoing adapter to actually use. This is actually explained in "The Cable Guy" link above, as well as on Wikipedia: https://en.wikipedia.org/wiki/Host_model – Peter Duniho Feb 09 '16 at 08:22
  • In certain protocols, such as with icmpv6 neighbor discovery, it is possible to send packets with the unspecified address (i.e. `::`) as the source. A router solicitation message for instance might have (dst `ff02::2` (link local all routers), and `::` as source). If one opened a raw IPv6 socket on windows and sent such a packet, would it go out on all interfaces? Is the implication in this case that there is no other way to restrict the output interface besides forcing the src to be the interface's link-local IP? – init_js Oct 14 '16 at 16:37
  • So, 11 years later, the best answer is still "Not possible"? – AndroC Mar 09 '21 at 11:21
  • All of the CodeGuru links in this answer are broken – Remy Lebeau Jun 10 '22 at 19:05
  • (Main (first) CodeGuru link fixed) – eudoxos Mar 27 '23 at 08:49
  • Look at the other answer, 10 years later it actually seems possible. – MappaM May 27 '23 at 12:43

I'm not sure why you say bind() is not working reliably. Granted, I have not done exhaustive testing, but the following solution worked for me (Win10, Visual Studio 2019). I needed to send a broadcast message via a particular NIC, where multiple NICs might be present on a computer. In the snippet below, I want the broadcast message to go out on the NIC whose IP ends in .202.106.

In summary:

  • create a socket
  • create a sockaddr_in address with the IP address of the NIC you want to send FROM
  • bind the socket to that FROM sockaddr_in
  • create another sockaddr_in with your broadcast address (here the subnet broadcast 192.168.255.255)
  • do a sendto(), passing the socket created in step 1 and the sockaddr of the broadcast address.

```cpp
static WSADATA     wsaData;

static int ServoSendPort = 8888;
static char ServoSendNetwork[] = "192.168.202.106";
static char ServoSendBroadcast[] = "192.168.255.255";
```

... < snip >

```cpp
if ( WSAStartup(MAKEWORD(2,2), &wsaData) != NO_ERROR )
    return false;

// Make a UDP socket and allow broadcasts on it
SOCKET ServoSendSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
int iOptVal = TRUE;
int iOptLen = sizeof(int);
int RetVal = setsockopt(ServoSendSocket, SOL_SOCKET, SO_BROADCAST, (char*)&iOptVal, iOptLen);

// Bind it to a particular interface
sockaddr_in ServoBindAddr = {0};
ServoBindAddr.sin_family = AF_INET;
ServoBindAddr.sin_addr.s_addr = inet_addr( ServoSendNetwork ); // the NIC to send from
ServoBindAddr.sin_port = htons( ServoSendPort );
int bindRetVal = bind( ServoSendSocket, (sockaddr*)&ServoBindAddr, sizeof(ServoBindAddr) );
if ( bindRetVal == SOCKET_ERROR )
{
    int ErrorCode = WSAGetLastError();
    CString errMsg;
    errMsg.Format( _T("rats!  bind() didn't work!  Error code %d\n"), ErrorCode );
    OutputDebugString( errMsg );
}

// Now create the address to send to...
sockaddr_in ServoSendAddr = {0};
ServoSendAddr.sin_family = AF_INET;
ServoSendAddr.sin_addr.s_addr = inet_addr( ServoSendBroadcast ); // subnet broadcast address
ServoSendAddr.sin_port = htons( ServoSendPort );

// ...

#define NUM_BYTES_SERVO_SEND 20
unsigned char sendBuf[NUM_BYTES_SERVO_SEND];
int BufLen = NUM_BYTES_SERVO_SEND;

ServoSocketStatus = sendto(ServoSendSocket, (char*)sendBuf, BufLen, 0, (SOCKADDR*)&ServoSendAddr, sizeof(ServoSendAddr));
if ( ServoSocketStatus == SOCKET_ERROR )
{
    int ErrorCode = WSAGetLastError();  // fetch the actual error, not just SOCKET_ERROR
    CString message;
    message.Format( _T("Error transmitting UDP message to Servo Controller: %d."), ErrorCode );
    OutputDebugString( message );
    return false;
}
```
– El Ronaldo
  • Thanks. I've asked this during 2010, at which time bind() definitely wasn't doing so reliably; glad to hear that's resolved now :) – Ofir Jul 20 '22 at 07:29
  • Was looking for such a refresh of the answer. I wonder how things are supposed to be handled on stackoverflow for that. – MappaM May 27 '23 at 12:42