Designing a Distributed System for an Online Multiplayer Game — Deployment (Part 7)

  • Redis: used as the cache DB and the matchmaking queue in the game manager
  • MySQL: used as the database in the game manager (needs persistent storage)
  • KubeMQ: used as the event broker
  • Game Server: the dedicated game server instances
  • Ingress: exposes the game manager APIs to the public

Cluster

I used HashiCorp Terraform (IaC) to create the cluster, on both DigitalOcean Managed Kubernetes and the AWS Managed Kubernetes Service (EKS).

AWS Managed Kubernetes Service (EKS)

There are two kinds of resources for defining the nodes: node groups and worker groups. Node groups are EKS-managed nodes, whereas worker groups are self-managed nodes. As discussed before, the game server nodes need to be directly accessible from the public, so we need to define a Security Group for the game server nodes that opens a range of ports for public access (the game client). As of this writing, there is an open issue on the AWS container roadmap repository requesting an option to assign a custom Security Group to node groups. To work around this, I used worker groups to define the nodes. Therefore, two worker groups are needed: one to schedule the game manager and the other services, and one to schedule the game servers with a custom Security Group.
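One practical difference with self-managed worker groups: the nodes only join the cluster once their IAM role is mapped in the aws-auth ConfigMap (the Terraform EKS module can manage this for you). A minimal sketch, with a placeholder account ID and role name:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<worker-node-role>  # placeholder ARN
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```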

DigitalOcean Managed Kubernetes

Node Pools
As with EKS, we need two node pools here: a manager node pool to schedule the game manager, MySQL, Redis, and KubeMQ pods, and a game server node pool to schedule the game server pods. Pods are pinned to a pool with a node selector, as sketched below.
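On DOKS, every node carries a doks.digitalocean.com/node-pool label holding its pool's name, so a plain node selector is enough to pin a pod to a pool. A minimal sketch (the pool name manager-pool is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
  labels:
    app: redis
spec:
  nodeSelector:
    doks.digitalocean.com/node-pool: manager-pool  # assumed pool name
  containers:
    - name: redis
      image: redis:7
      ports:
        - containerPort: 6379
```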

Scaling

Remember that the game manager is stateless and can be scaled horizontally, but the game server is stateful: it runs as a dedicated server and cannot be scaled horizontally. We’ll scale the game server node pool manually (covered in a different blog post).

Kustomize

It should be mentioned that I know about GitOps and the tools that make all of this easy and secure, but I wanted to try the fundamentals.

The directory structure of each service

Base

  • config.env: the hard-coded default configs, like default ports
  • deployment.yaml: the default Deployment file
  • kustomization.yaml: the default Kustomization file to define resources and configs (sketched below)
  • service.yaml: the default Service file
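As an example, the base kustomization.yaml could look like this minimal sketch (the resource and config names are assumptions):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml
configMapGenerator:
  - name: game-manager-config  # assumed name
    envs:
      - config.env
```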

Overlays

  • config.env: the hard-coded environment-specific configs, like ports
  • config.secret: the environment secrets, like passwords and …
  • deployment-patch.yaml: the Deployment patch file to merge with the base (default) Deployment, e.g. to change the replica count
  • kustomization.yaml: the Kustomization file to reference the base and define the environment's config maps (sketched below)
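A matching overlay kustomization.yaml might look like this sketch (paths and names are assumptions; older Kustomize versions spell these fields bases and patchesStrategicMerge):

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base
patches:
  - path: deployment-patch.yaml
configMapGenerator:
  - name: game-manager-config
    behavior: merge   # overrides base values
    envs:
      - config.env
secretGenerator:
  - name: game-manager-secret  # assumed name
    envs:
      - config.secret
```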

Game Server Configs

The game server has no Deployment file because it is run by the game manager through the Kubernetes API. The deployment configuration is passed via an API call instead of a YAML file.
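Although no manifest file exists, the object the manager creates through the API is equivalent to a pod manifest like this sketch (the name, image, port, and pool label are assumptions; the hostPort exposes the game port directly on the publicly reachable node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: game-server-abc123   # generated per match
  labels:
    app: game-server
spec:
  nodeSelector:
    doks.digitalocean.com/node-pool: game-server-pool  # assumed pool name
  containers:
    - name: game-server
      image: ghcr.io/example/game-server:latest  # placeholder image
      ports:
        - containerPort: 7777  # assumed game port
          hostPort: 7777       # published directly on the node
```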

Roles

The game manager needs permissions to create, watch, and list pods across the cluster. To achieve this, RBAC Authorization is used to define a Role and a ClusterRole and bind them to the game manager's service account.
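A sketch of the cluster-wide half (the role, binding, and service account names are assumptions; the namespaced Role follows the same pattern):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: game-manager
rules:
  - apiGroups: [""]   # core API group
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: game-manager
subjects:
  - kind: ServiceAccount
    name: game-manager   # assumed service account
    namespace: default
roleRef:
  kind: ClusterRole
  name: game-manager
  apiGroup: rbac.authorization.k8s.io
```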

Ingress

The Nginx ingress proxies the game manager's exposed ports to the public. First, the Nginx ingress controller itself must be deployed; then an Ingress resource routes external traffic to the game manager service.
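A minimal sketch of such an Ingress resource, assuming the game manager Service is named game-manager and listens on port 8080, with an illustrative host:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-manager
spec:
  ingressClassName: nginx
  rules:
    - host: api.example.com   # stands in for the real API domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: game-manager  # assumed service name
                port:
                  number: 8080      # assumed port
```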

Services

These pods need to be exposed inside the cluster by ClusterIP services:

  • KubeMQ
  • Redis
  • MySQL
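Each of these is a plain ClusterIP Service; for example, a sketch for Redis (the pod label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  selector:
    app: redis   # assumed pod label
  ports:
    - port: 6379
      targetPort: 6379
```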

Diagram

Now, let's take a look at the diagram we talked about in the architecture post:

  • The Nginx ingress acts as the load balancer, proxying user connections from the public to the game manager instances.
  • The user opens a long-lived connection to one of the game manager pods.
  • The game node pool is scaled manually.

Applying the configs for the first time

The k8s object configs must be applied in a specific order because of their dependencies, and this pipeline is automated in the Makefile.

Docker Registry secret

The artifacts (Docker images) are published privately on GHCR, and to access them from the cluster, a docker-registry secret is created.
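A sketch of what that secret looks like, with the credentials left as placeholders (the secret name is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghcr-registry   # assumed name
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "ghcr.io": {
          "auth": "<base64-encoded username:token>"
        }
      }
    }
```

Pods then pull the private images by referencing this secret through imagePullSecrets in their spec.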

Roles

The Role and ClusterRole configs are applied first so that the game manager is able to access the pods and APIs.

KubeMQ

The KubeMQ configs are applied to expose it as a service.

Redis

The Redis configs are applied to expose it as a service.

MySQL

MySQL needs persistent storage to prevent data loss when the pod is destroyed. After applying the MySQL configs, we must wait for the pod to be ready and responsive, then create the required database and users. Kubernetes exposes the pod readiness status through its API, so I created a shell script that waits for the MySQL pod to become ready before continuing to apply the other object configs.
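A sketch of the storage and readiness pieces (the image tag, secret name, and volume size are assumptions; the readiness probe is what drives the status the wait script polls):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  strategy:
    type: Recreate   # the volume can only be mounted by one pod
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret   # assumed secret
                  key: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
          readinessProbe:
            exec:
              command: ["mysqladmin", "ping", "-h", "127.0.0.1"]
            initialDelaySeconds: 15
            periodSeconds: 5
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```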

Nginx Ingress

The ingress controller needs to be deployed first; a shell script then waits for the controller pod to become ready. After that, the ingress configs are applied.

Updating the DNS Records

After getting the load balancer's public IP (DigitalOcean) or public hostname (AWS), we need to update the DNS records for our API domain.

Game Manager

Finally, it’s the game manager configs’ turn to be applied.
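To tie the earlier pieces together, a sketch of the game manager Deployment highlights (the image path, pool name, and port are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: game-manager
spec:
  replicas: 2
  selector:
    matchLabels:
      app: game-manager
  template:
    metadata:
      labels:
        app: game-manager
    spec:
      serviceAccountName: game-manager   # bound to the ClusterRole above
      imagePullSecrets:
        - name: ghcr-registry            # the docker-registry secret
      nodeSelector:
        doks.digitalocean.com/node-pool: manager-pool  # assumed pool name
      containers:
        - name: game-manager
          image: ghcr.io/example/game-manager:latest   # placeholder image
          ports:
            - containerPort: 8080        # assumed API port
```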

Final Result

Now all of our applications and services are running:

Garage prototype
