
I'm reading Programming Erlang 2E. In Active and Passive Sockets of Chapter 17, it says:

You might think that using passive mode for all servers is the correct approach. Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.

Fortunately, we can adopt a hybrid approach, neither blocking nor nonblocking. We open the socket with the option {active, once}. In this mode, the socket is active but for only one message. After the controlling process has been sent a message, it must explicitly call inet:setopts to reenable reception of the next message. The system will block until this happens. This is the best of both worlds.

Relevant code:

% passive mode
loop(Socket) ->
    case gen_tcp:recv(Socket, N) of    % N bytes (0 means "whatever is available")
        {ok, B} ->
            ... do something with the data ...
            loop(Socket);
        {error, closed} ->
            ...
    end.

% once mode
loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket, [{active, once}]),
            loop(Socket);
        {tcp_closed, Socket} ->
            ...
    end.

I don't see any real difference between the two. gen_tcp:recv in passive mode essentially does the same thing as receive in once mode. How does once mode fix this issue of passive mode:

Unfortunately, when we’re in passive mode, we can wait for the data from only one socket. This is useless for writing servers that must wait for data from multiple sockets.

an0

2 Answers


The main difference is when you are choosing to react to an event on that socket. With an active socket your process receives a message, with a passive socket you have to decide on your own to call gen_tcp:recv. What does that mean for you?

The typical way to write Erlang programs is to have them react to events. Following that theme most Erlang processes wait for messages which represent outside events, and react to them depending on their nature. When you use an active socket you are able to program in a way that treats socket data in exactly the same way as other events: as Erlang messages. When you write using passive sockets you have to choose when to check the socket to see if it has data, and make a different choice about when to check for Erlang messages -- in other words, you wind up having to write polling routines, and this misses much of the advantage of Erlang.
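To make the polling problem concrete, here is a minimal sketch (not from the answer) of what a passive-mode process must do if it also needs to react to Erlang messages: alternate between a short gen_tcp:recv timeout and a short mailbox check. The helpers handle_data/1 and handle_msg/1 are hypothetical placeholders.

```erlang
%% Sketch of passive-mode polling; handle_data/1 and handle_msg/1 are
%% hypothetical. The process can never wait on both sources at once,
%% so it has to ping-pong between them on timeouts.
poll_loop(Socket) ->
    case gen_tcp:recv(Socket, 0, 100) of   % wait up to 100 ms for socket data
        {ok, Data} ->
            handle_data(Data),
            poll_loop(Socket);
        {error, timeout} ->
            receive                        % now briefly poll the mailbox
                Msg -> handle_msg(Msg)
            after 0 ->
                ok
            end,
            poll_loop(Socket);
        {error, closed} ->
            ok
    end.
```

With an active (or active-once) socket the ping-ponging disappears: socket data and Erlang messages arrive in the same mailbox and a single receive handles both.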

So the difference between active_once and active...

With an active socket any external actor able to establish a connection can bombard a process with packets, whether the system is able to keep up or not. If you imagine a server with a thousand concurrent connections where receipt of each packet requires some significant computation or access to some other limited, external resource (not such a strange scenario) you wind up having to make choices about how to deal with overload.

With only active sockets you have already made your choice: you will let service degrade until things start failing (timeout or otherwise).

With active_once sockets you have a chance to make some choices. An active_once socket lets you receive one message on the socket and sets it passive again, until you reset it to active_once. This means you can write a blocking/synchronous call that checks whether or not it is safe for the overall system to continue processing messages and insert it between the end of processing and the beginning of the next receive that listens on the socket -- and even choose to enter the receive without reactivating the socket in the event the system is overloaded, but your process needs to deal with other Erlang messages in the meantime.

Imagine a named process called sysmon that lives on this node and checks whether an external database is being overloaded or not. Your process can receive a packet, process it, and let the system monitor know it is ready for more work before allowing the socket to send it another message. The system monitor can also send a message to listening processes telling them to temporarily stop accepting packets while they continue waiting for other Erlang messages, which isn't possible with the gen_tcp:recv method (because you are either receiving socket data, or checking Erlang messages, but not both):

loop(S = {Socket, OtherState}) ->
    sysmon ! {self(), ready},
    receive
        {tcp, Socket, Data} ->
            ok = process_data(Data, OtherState),
            loop(S);
        {tcp_closed, Socket} ->
            retire(OtherState),
            ok;
        {sysmon, activate} ->
            inet:setopts(Socket, [{active, once}]),
            loop(S);
        {sysmon, deactivate} ->
            inet:setopts(Socket, [{active, false}]),
            loop(S);
        {other, message} ->
            system_stuff(OtherState),
            loop(S)
    end.

This is the beginning of a way to implement system-wide throttling, making it easy to deal with the part that is usually the most difficult: elements that are across the network, external to your system and entirely out of your control. When coupled with some early decision making (like "how much load do we take before refusing new connections entirely?"), this ability to receive socket data as Erlang messages, but not leave yourself open to being bombarded by them (or fill up your mailbox, making looking for non-socket messages arbitrarily expensive), feels pretty magical compared to manually dealing with sockets the way we used to in the stone age (or even today in other languages).
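One of those early decisions can be sketched as an acceptor that sheds load before a connection ever reaches a worker. This is illustrative only: current_load/0 is a hypothetical load check, and loop/1 stands in for a per-connection loop like the ones shown above.

```erlang
%% Sketch: refuse new connections past a load threshold.
%% current_load/0 is hypothetical; loop/1 is a per-connection
%% active-once loop like those elsewhere in this thread.
acceptor(Listen, MaxLoad) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    case current_load() >= MaxLoad of
        true ->
            %% Overloaded: refuse the connection outright.
            gen_tcp:close(Socket),
            acceptor(Listen, MaxLoad);
        false ->
            Pid = spawn(fun() -> handle_connection(Socket) end),
            ok = gen_tcp:controlling_process(Socket, Pid),
            Pid ! armed,
            acceptor(Listen, MaxLoad)
    end.

handle_connection(Socket) ->
    receive armed -> ok end,                 % wait until ownership transfers
    inet:setopts(Socket, [{active, once}]),  % arm for exactly one message
    loop(Socket).
```

The point is that the accept decision, the per-message reactivation, and any sysmon-style throttling are all just ordinary Erlang code paths, not special socket machinery.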

This is an interesting post by Fred Hebert, author of LYSE, about overload: "Queues Don't Fix Overload". It is not specific to Erlang, but the ideas he is writing about are a lot easier to implement in Erlang than most other languages, which may have something to do with the prevalence of the (misguided) use of queues as a capacity management technique.

zxq9
  • Learn You Some Erlang states, "In general, if you're waiting for a message right away, passive mode will be much faster. Erlang won't have to toy with your process' mailbox to handle things, you won't have to scan said mailbox, fetch messages, etc. Using recv will be more efficient. " – Roman Rabinovich Mar 16 '18 at 18:00

Code that takes advantage of this would look something like:

loop(Socket1, Socket2) ->
    receive
        {tcp, Socket1, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket1, [{active, once}]),
            loop(Socket1, Socket2);
        {tcp, Socket2, Data} ->
            ... do something entirely different ...
            inet:setopts(Socket2, [{active, once}]),
            loop(Socket1, Socket2);
        ...
    end.

However, in my experience you usually don't do things like that; more often you'll have one process per socket. The advantage with active mode is that you can wait for network data and messages from other Erlang processes at the same time:

loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket, [{active, once}]),
            loop(Socket);
        reverse_flux_capacitor ->
            reverse_flux_capacitor(),
            %% keep waiting for network data
            loop(Socket)
    end.

Also, when writing a "real" Erlang/OTP application, you would usually write a gen_server module instead of a loop function, and the TCP messages would be handled nicely in the handle_info callback function alongside other messages.
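A minimal sketch of that gen_server shape, assuming an active-once socket handed to the server at startup (process_data/1 is a hypothetical handler, not part of any real API):

```erlang
-module(tcp_conn).
-behaviour(gen_server).
-export([start_link/1, init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(Socket) ->
    gen_server:start_link(?MODULE, Socket, []).

init(Socket) ->
    inet:setopts(Socket, [{active, once}]),  % arm for the first message
    {ok, Socket}.

handle_call(_Req, _From, Socket) -> {reply, ok, Socket}.
handle_cast(_Msg, Socket) -> {noreply, Socket}.

%% Socket data arrives as ordinary messages, alongside everything else
%% the server receives; process_data/1 is hypothetical.
handle_info({tcp, Socket, Data}, Socket) ->
    ok = process_data(Data),
    inet:setopts(Socket, [{active, once}]),  % re-arm for the next packet
    {noreply, Socket};
handle_info({tcp_closed, Socket}, Socket) ->
    {stop, normal, Socket};
handle_info(_Other, Socket) ->
    {noreply, Socket}.
```

The {tcp, ...} and {tcp_closed, ...} clauses sit next to any other handle_info clauses, so network events and application messages share one code path.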

legoscia
  • I see. So the key point is not socket but messaging. This part of the book is written poorly and misleading. And yes, why not just use parallel server? – an0 Jan 05 '15 at 17:50
  • 2
    @legoscia, where you say "The advantage with passive mode..." you really meant "The advantage with active mode..." – Steve Vinoski Jan 05 '15 at 20:47
  • 2
    @an0 Keep reading... the book does implement parallel servers, a lot of them. His point isn't that you can receive from more than one socket in a single process, but that you can receive *any* Erlang messages alongside socket data, which is enormously convenient and eliminates an entire category of annoying socket management/polling/timing/queue-building code. – zxq9 Jan 05 '15 at 22:16
  • @SteveVinoski Indeed, fixed now. Thanks! – legoscia Jan 06 '15 at 11:20