Designing a Distributed System for an Online Multiplayer Game — Game Manager (Part 4)
This is part four of the Garage series; links to all parts are at the end of this post.
Our exciting journey has begun: we're going to design and describe each of the services in detail.
Let’s start again with the sub-services of the game manager.
Matchmaker
I developed an open-source matchmaking package in Go to handle the queue. The repository is available here:
GitHub: theredrad/matchmaker, a simple FIFO matchmaker supporting scoring (player rank & latency as tags)
It’s a simple matchmaking package with a Redis implementation for the queue, supporting player latency and rank. The Redis matchmaker supports multiple concurrent instances using a Redis-based distributed mutual-exclusion lock.
The Redis matchmaker accepts the game configs (like max players and a callback function) as arguments. When enough players are matched, it calls the callback with the list of matched players.
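The queue-plus-callback idea can be sketched as a minimal in-memory FIFO matchmaker. Note that all names below are illustrative, not the package's actual API, and the real implementation backs the queue with Redis and a distributed lock so several instances can pop from it safely:

```go
package main

import "fmt"

// Player is a queued player; Rank and Latency mirror the "tags"
// mentioned above.
type Player struct {
	ID      string
	Rank    int
	Latency int
}

// Matchmaker is a minimal in-memory FIFO queue.
type Matchmaker struct {
	maxPlayers int
	queue      []Player
	onMatched  func([]Player)
}

func NewMatchmaker(maxPlayers int, onMatched func([]Player)) *Matchmaker {
	return &Matchmaker{maxPlayers: maxPlayers, onMatched: onMatched}
}

// Add enqueues a player; once maxPlayers are waiting, the callback
// fires with the matched batch (in FIFO order) and the batch is
// removed from the queue.
func (m *Matchmaker) Add(p Player) {
	m.queue = append(m.queue, p)
	if len(m.queue) >= m.maxPlayers {
		batch := m.queue[:m.maxPlayers]
		m.queue = m.queue[m.maxPlayers:]
		m.onMatched(batch)
	}
}

func main() {
	mm := NewMatchmaker(2, func(players []Player) {
		fmt.Printf("matched: %s vs %s\n", players[0].ID, players[1].ID)
	})
	mm.Add(Player{ID: "alice", Rank: 10, Latency: 40})
	mm.Add(Player{ID: "bob", Rank: 12, Latency: 55})
}
```

In the real service, the callback is where the session manager takes over, as described below.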
Director
The director is a package that interacts with the Kubernetes API. It’s responsible for creating a new game instance (pod), deleting the pod, and getting the public IP of the node.
Session Manager
The session manager is a facade package that manages the game sessions, caches the data, interacts with the director, listens to the broker for game instance events, and exposes some APIs to query the games' and players' states.
The game client opens a WebSocket connection to the game manager from the game menu, so the game manager can push events to the client.
The inventory service uses a MySQL database to store cars, car items, and users' items.
The game service uses Redis as a temporary store to cache game data and player state.
HTTP API Server
It’s a REST API serving endpoints such as adding a player to the matchmaking queue, providing the game server's public key over TLS, loading store cars, loading the user's garage cars, and more.
The client opens a WebSocket connection to the game manager in the game menu. When the player taps the “Start Matchmaking” button, an HTTP request is sent to the API, and the matchmaker adds the player to the queue.
After the players are matched, the matchmaker calls the callback function (the OnPlayersMatched method on the session manager instance). Next, the session manager loads each user's items (like active cars and asset names, such as 3D models and textures), generates a new RSA private key for the server, and asks the director to initiate a new game instance.
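Generating a fresh per-game server key can be done entirely with the Go standard library. The key size and PEM encoding below are assumptions, since the post doesn't specify them:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// newGameKeyPEM generates a fresh RSA private key for one game
// session and returns it PEM-encoded, ready to be handed to the
// game server container as configuration.
func newGameKeyPEM(bits int) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, bits)
	if err != nil {
		return nil, err
	}
	block := &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	}
	return pem.EncodeToMemory(block), nil
}

func main() {
	privPEM, err := newGameKeyPEM(2048)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated a %d-byte PEM-encoded key\n", len(privPEM))
}
```

Generating a key per session means a compromised game instance doesn't leak a long-lived credential.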
Subsequently, the director creates a new pod from the game server Docker image, passes the configuration (like the server private key, player list, cars, and …) to the container, and returns the public IP of the node the pod is scheduled on. After that, the session manager caches the game data (including players, the private key, and the IP) in Redis and waits for the game pod to become ready.
Wait a sec, node public IP??
Hmm, yes. As I mentioned before, to cope with latency issues (we’ll talk more about this in the future), the game client connects to the game server directly, so the pod needs to use the node's network namespace (HostNetwork) to be reachable from the internet. Since the game pods run on the same network, they can’t all listen on the same port; thus, each game server chooses a random port and starts listening on it. Afterward, it publishes an event on the broker channel to inform the game session manager of its readiness and listening port number.
The director creates a new pod with these definitions:
There are three important options in the PodSpec:
- HostNetwork is set to true to make the pod use the host network namespace, so it's accessible from the public internet.
- DNSPolicy is set to DNSClusterFirstWithHostNet to make the pod prefer the cluster DNS even though it uses the host network.
- RestartPolicy is set to RestartPolicyNever to tell Kubernetes that this pod must not be restarted.
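As a rough illustration, the three options above map onto a pod manifest like the following. The pod name, image, and environment variables are placeholders, and note that the Go API constants `DNSClusterFirstWithHostNet` and `RestartPolicyNever` appear in manifest YAML as `ClusterFirstWithHostNet` and `Never`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-instance-abc123            # placeholder name
spec:
  hostNetwork: true                     # use the node's network namespace
  dnsPolicy: ClusterFirstWithHostNet    # keep cluster DNS despite hostNetwork
  restartPolicy: Never                  # a finished game session is never restarted
  containers:
    - name: game-server
      image: registry.example.com/game-server:latest   # placeholder image
      env:
        - name: SERVER_PRIVATE_KEY      # placeholder config passed by the director
          value: "<PEM>"
```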
Next, the session manager receives the game-readiness event and caches the server's IP and port. It then publishes a PlayerMatched event to the broker channel with the node's public IP and the game container's listening ports. Meanwhile, the WebSocket server's event consumer receives the PlayerMatched event and writes it to the client's socket.
The game client receives the PlayerMatched event and connects directly to the game server using the node's public IP and ports.
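The frame pushed over the socket might look like the following; the event schema is an assumption, since the post doesn't define it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PlayerMatchedEvent is an illustrative shape for the event the
// session manager republishes once the game server is ready.
type PlayerMatchedEvent struct {
	Type string `json:"type"`
	IP   string `json:"ip"`   // node public IP
	Port int    `json:"port"` // game container's listening port
}

// encodeEvent renders the event as the JSON frame the WebSocket
// server would write to each matched client's socket.
func encodeEvent(ip string, port int) ([]byte, error) {
	return json.Marshal(PlayerMatchedEvent{Type: "player_matched", IP: ip, Port: port})
}

func main() {
	frame, err := encodeEvent("203.0.113.7", 31842)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(frame))
}
```

The client only needs the IP and port from this frame to open its direct connection to the game server.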
In the next part, we’ll look at the game server.