Designing a Distributed System for an Online Multiplayer Game — Game Client (Part 6)
This is part six of the Garage series; you can find links to all the parts at the end of this post.
I started my career about 12 years ago as a freelance Windows application developer (C# and C++). After a while, I started working for a game company that was building an FPS PC game. I got familiar with Unity there, and we worked on lots of mini-games. After some years, I shifted my focus to software engineering and web services. I'm not very experienced in Unity, but I decided to build the game client with it.
I did a lot of research and development on online multiplayer games and networking fundamentals, and the result is a mix of tricks and methods to face the big problem: latency!
Game client loop
The game client runs a loop just like the server loop, with the same algorithm and code. I developed the server in Golang, while Unity uses C#. Oh man, two implementations in different languages! I know that Unity also supports headless server builds (without a GUI), but I wanted to experience Golang as a game server, and server performance was a priority; besides, the game server doesn't need most of Unity's features.
Our main problem here is latency. Because of physical limits, the player's inputs must travel to the server, and distance and network quality play a major role here. This is the RTT definition from Wikipedia:
In telecommunications, round-trip delay (RTD) or round-trip time (RTT) is the amount of time it takes for a signal to be sent plus the amount of time it takes for acknowledgement of that signal having been received.
You may know RTT as ping. Here, half of this value is what matters: the amount of time it takes for a packet to be delivered to the server.
The client and the server loops must run synchronized, but we can't sync them on the same frame number. For example, consider a connection with an RTT of 120ms: if the player presses the W key on the keyboard, it takes 60ms for the packets to be delivered to the server, which is equal to 3 frames! Thus, the player's inputs from frame #1 arrive at the server while it's calculating frame #4, and those inputs are no longer valid at that time. To deal with this challenge, the client frame must be ahead of the server frame, by an amount proportional to half of the RTT divided by the loop interval.
On the other side, the other players' states are delivered to the client with even more delay, and each of them suffers from a different latency. We'll review this problem further.
Briefly, the client is always in the future and the other clients are in the past.
Change the client clock
Timothy Ford mentioned an interesting trick in his GDC presentation on the Overwatch game architecture: changing the client clock depending on the network situation. When the server misses a player input in its buffer, it sends a signal to the client; the client then runs its loop a little faster to compensate for the losses and grow the buffer. When everything is OK again (signaled by the server once more), the client increases the interval back to normal. To implement this, I change the client loop's timer interval when the signal is received.
Because the server is authoritative, the player's inputs must be sent to the server to be processed and validated. But thanks to latency, the game client can't wait for the server's response; the result would be delayed and laggy.
To fix this problem, the client doesn't wait for the server's response to render. It predicts the game state and renders the player's inputs immediately. In parallel, it sends the inputs to the server and checks the authoritative server state against the predicted state.
Because of UDP, there may be packet loss or out-of-order delivery, so mispredictions can occur. If the server snapshot conflicts with the predicted state, we need to recalculate all the frames from the last valid frame to the current frame and move the players to the recalculated states.
Putting all the parts together
The client calculates the RTT by sending a Ping request to the server, waiting for the response, and measuring the elapsed time. To achieve this, I used a Stopwatch. System.DateTime.Now is not reliable, and date libraries can't report the exact time because they depend on the underlying OS. The C# date library has an error of around 0.5 to 15ms, so it doesn't work here.
Half of the RTT divided by the tick interval (50Hz = 20ms) equals the number of frames the client needs to be ahead of the server frame number. For example, with a one-way latency of around 60ms, the client needs to run its loop 3 frames ahead of the server.
The prediction algorithm predicts the player's state from the inputs while sending those same inputs to the server in parallel.
As I said before, the other players have different latencies, and their states correspond to the current player's past. To face this issue, we take the last received snapshot state and run it through the loop (assuming constant acceleration) to calculate all the frames from the snapshot frame to the current client frame, predicting the other players' states in the future (which is the present here).
All calculated states are cached by frame number and pushed to the render queue. The render function lerps between the last two frames to update the player states.
In the next part, we'll start to ship and run these applications using K8s.