
I'm trying to understand how server-client networking works for live multiplayer games.

Suppose I'm building a real-time multiplayer game like an FPS.

If player A shoots player B, the backend server needs to tell player B that they got shot.

I know how to make player A tell the backend server that they fired a shot (just send a request to the server), but how does the backend server then tell player B that they got shot?

Does player B have to be constantly checking the backend server every 0.1 seconds to see if something happened, or is there a more efficient way?


2 Answers


The how is the less important part; the efficient way is what matters:

Let's jump a few months forward. Your game has several million downloads and your server part has to cope with a huge herd of keen gamers online 24/7/365.

This landscape helps to show the core importance of distributed-system design properties, and the pain that any low-latency-controlled software ( and a game is without question such a case ) puts on the table.

Why latency? A good game design strives to create and maintain a realistic user experience. Take a shooting example: player A manages to destroy target X, then target Y and finally target Z within the last few moments, but player C had shot player A even "before" player A started to shoot. Due to poor ( unequal ) end-to-end connection latency, the "bullets" from C reached the server "late" and got distributed "after" targets X, Y and Z were confirmed destroyed. Will you kill A and respawn X, Y and Z? Will you leave X, Y and Z exploded and let A die later, out of a blue sky, without a warning? Or will you let A go ahead as if never having been killed by player C? Whatever you choose, your platform's gaming experience becomes very unfair for more than one of the players A, C, X, Y, Z involved. Users hate this. Flames will follow very fast.

Latency variability over time ( latency jitter ) is another problem. Some remarkable platforms -- IL-2 Sturmovik was one such case -- experienced "jumping" in-game objects due to this kind of unhandled, latency-related distributed game-engine control. The gaming community soon realised this handicap, and black-MOD-ers spawned many MODs in which the weakness was exploited for an unfair benefit to (cheating) players: by blocking the packets that distributed in-game reality updates, their planes became stealth ( un-hittable by attackers' bullets ) and sometimes even UFO-ed around the 3D scene, due to late / very late arrivals of some planes' in-game 3D-coordinate updates.

If you realise that polling every 0.1 sec means never going above 10 FPS of in-game reality updates, while contemporary in-game reality strives for some 50 - 80 - 120 FPS for a high-fidelity user experience, the pain you will face grows even higher.
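
Putting rough numbers on that (plain arithmetic, nothing framework-specific):

poll_period_s   = 0.1                    # the 0.1 [s] polling period from the question
updates_per_sec = 1.0 / poll_period_s    # -> at most 10 in-game reality updates per second

for target_fps in (50, 80, 120):         # contemporary high-fidelity targets
    print(f"{target_fps} FPS needs {target_fps / updates_per_sec:.0f}x the update rate 0.1 s polling delivers")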

So, how to attack the issue?

ZeroMQ is a wonderful framework for distributed systems. Mastering this tool will without question help you learn a lot about non-blocking operations and about efficient end-to-end messaging ( updates of any kind, controls, signalling ), and you will also learn how to reduce network traffic ( imagine the network-segment loads of 10 gamers vs. 1,000 gamers vs. 100,000 gamers ).

ZeroMQ will also help you add work-load balancing among multiple server-side machines, thus increasing your back-end capacity as your player counts grow. ZeroMQ supports almost linear scalability, and that is worth some pain during one's learning curve, isn't it?
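
For illustration only, a minimal sketch of one such load-balancing layout, assuming the pyzmq binding; the socket types, port numbers and the plain ROUTER/DEALER proxy are one possible choice, not a prescription:

import zmq                                   # pyzmq

ctx = zmq.Context.instance()

frontend = ctx.socket(zmq.ROUTER)            # game clients connect here
frontend.bind("tcp://*:5555")

backend  = ctx.socket(zmq.DEALER)            # worker processes / machines connect here
backend.bind("tcp://*:5556")

# fair-queues incoming client messages across however many workers are connected,
# so adding one more server-side machine is just one more .connect() on the worker side
zmq.proxy(frontend, backend)

Each worker is then an ordinary REP socket that .connect()-s to the backend endpoint, and its replies get routed back to the right client automatically.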

The server side typically has many duties to handle concurrently. An over-simplified core loop of the in-game distributed-messaging event-processing engine looks similar to this:

...                                         # setup: ZeroMQ context + aSigSOCKET, aCtrlSOCKET, aRecvSOCKET

while True:                                 # the hot loop - keep every iteration fast
    anInterruptSegmentedCDOWN = 60 * 100    # ~6000 segments of ~10 [ms] each ~ one minute

    while anInterruptSegmentedCDOWN > 0:    # segmented countdown, so control signals never
        anInterruptSegmentedCDOWN -= 1      # wait longer than ~10 [ms] to be noticed

        # SIGsPORT: non-blocking .send() of outgoing updates; may raise if the queue is full
        # ( keeping this socket busy avoids a warm-up delay when SIG_EXIT must go out
        #   immediately - https://stackoverflow.com/a/33435244/3666197 )
        try:
            ...                             # aSigSOCKET.send( aMSG, zmq.NOBLOCK )
        except zmq.ZMQError:
            ...                             # queue full / peer gone - handle and carry on
        finally:
            ...

        # CtrlPORT: zero-wait poll for operator / admin commands ( incl. SIG_EXIT )
        if aCtrlSOCKET.poll( 0 ) != 0:
            ...

        # RecvPORT: wait at most 9 [ms] for a game message, keeping the whole segment ~10 [ms]
        if aRecvSOCKET.poll( 9 ) == 0:
            ...                             # nothing arrived - relax until the next segment
        else:
            ...                             # a [MSG] is here - go get the job done ( the game-logic magic happens here )
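
The client side can stay just as non-blocking: instead of re-asking the server every 0.1 s, it keeps one long-lived socket open and merely checks it once per rendered frame. A simplified sketch of such a client, again assuming the pyzmq binding; the endpoint name and the message payloads are invented here just for illustration:

import zmq

ctx    = zmq.Context.instance()
server = ctx.socket(zmq.DEALER)                    # one long-lived, bidirectional link
server.connect("tcp://game-server:5555")           # illustrative endpoint

server.send_json({"event": "SHOT_FIRED", "target": "playerB"})   # player A reports an action

while True:
    # ... render the next frame, read local input, etc. ...
    while server.poll(0):                          # zero-wait check: did the server push anything?
        update = server.recv_json()                # e.g. {"event": "YOU_WERE_HIT", "by": "playerA"}
        # apply the server-authoritative update to the local game state
        ...

Player B runs the very same loop; when the server decides that B was hit, it simply pushes a message down B's already-open connection and B picks it up on the next frame - no 0.1 s polling round-trips needed.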

The best next step?

You might be interested in more details -- What is a difference between WebSockets and ZeroMQ

However for your further questions you might want to see a bigger picture on this subject >>> with more arguments, a simple signalling-plane / messaging-plane illustration and a direct link to a must-read book from Pieter HINTJENS.

– user3666197

There are two methods to get data from a server:

1) The synchronous method (or poll method) - what you described: check the server at a fixed interval for updates.

2) The asynchronous method (or push method) - more efficient for the use case you described: the client subscribes to updates once, and the server notifies the client whenever it has something new. You can implement it using WebSockets, for example; if you are not bound to an HTTP-only layer, you can use ZeroMQ (a minimal sketch follows below).
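
For a concrete picture of option 2), here is a minimal push-style sketch with pyzmq PUB/SUB, partitioned by game session so that only the few clients of that session receive the update; the port number, topic and payload are invented for illustration, and in reality the two halves live in separate processes:

import zmq

ctx = zmq.Context.instance()

# server process: publish an update, tagged with the game-session topic
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:6000")
pub.send_multipart([b"session-42", b'{"event": "PLAYER_B_HIT", "by": "playerA"}'])

# client process (player B): subscribe once, then just wait to be notified
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://game-server:6000")
sub.setsockopt(zmq.SUBSCRIBE, b"session-42")   # only this game session, nothing else
# ( note: ZeroMQ PUB/SUB drops messages published before a subscriber has joined )

topic, payload = sub.recv_multipart()          # blocks until the server pushes something

The client never polls on a timer; the server decides when there is something to say. The same shape can be built on WebSockets when the clients are browsers.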

– Andrew
  • You might want to know that since the early **`ZeroMQ`** implementations, the `PUB` pattern flooded the network because filtering happened on the `SUB` side, so a naive use-case of this **Formal Communication Pattern** looks friendly only in the source code, but is devastating on resources ( network-wise, remote-client buffer/`HWM`-wise ). Skip WebSockets. Broadcasting anything to everybody is not a wise architecture. Async **non-blocking** ( segmented ) `.poll()`-s are the way to go, while other distributed-systems architectures allow for even harder-constrained real-time patterns ( in MIL / HPC domains ). – user3666197 Jun 04 '16 at 19:12
  • Poll (non-blocking, of course) or push - it depends on many application factors, and for some systems push is the wise architecture. Broadcasting is not right when every message (1000x per second) is delivered to every client (1000x connected), simply because it does not scale. However, when broadcasting is partitioned (for example, by a game session which spans only a few clients) it might be the right architecture, especially when it is necessary to achieve lower delivery times (end-to-end measured). – Andrew Jun 06 '16 at 08:06
  • I am not aware of any harmful resource-usage strategies in ZeroMQ... Maybe it is the case with some protocols... When it runs over TCP, the impact on the network and on resources is controlled by TCP itself - the sizes of the incoming (receive) and outgoing (send) buffers can be reduced (if required), and TCP's congestion-control mechanism takes care of "tolerant" network usage as defined in the RFCs. – Andrew Jun 06 '16 at 08:14
  • ( Sure, there is no universal answer / wisdom, so forgive my shortcut notes. `ZeroMQ` has brought a lot of power, including the *almost*-linear scalability; however, when pumping messages into sockets with the `tcp://` transport-class, a **doubled memory footprint from dual memory allocations** happens, losing the Zero-Copy part of the Zero-Everything design maxims. **ADD** `SUB`-side topic-filtering and **MUL** by the gamers online. That was the point behind the remark. Anyway, enjoy the day, Andrew. ) – user3666197 Jun 06 '16 at 08:53