
How Kubernetes helped us streamline our development process

13 Mar 2019

by

Sylvain Chery

Introduction

At Agile Partner we recently worked on a project for a client who needed to deploy the solution we were developing on-premises, in their own infrastructure. Basically, the solution consisted of a React web app, an ASP.NET Core Web API and a MongoDB instance.

To simplify deployments, we decided to use Docker container images. The idea was to build images for the front-end and back-end in our continuous integration (CI) pipeline, push them to Agile Partner's Docker registry, deploy to our test environment via a continuous deployment (CD) pipeline, and once tested, tag the validated images as release versions. The client could then pull the tagged images from the registry and deploy them in their Rancher cluster without any intervention on our part.
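To give an idea of the release-tagging step, here is a minimal sketch of the kind of commands involved; the registry URL, image name and tags below are purely illustrative, not our actual naming scheme:

```bash
# Pull the image produced by a validated CI build (names and tags are placeholders)
docker pull myregistry.azurecr.io/webapi:1234

# Re-tag it as a release version and push the new tag to the registry
docker tag myregistry.azurecr.io/webapi:1234 myregistry.azurecr.io/webapi:1.2.0
docker push myregistry.azurecr.io/webapi:1.2.0
```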

With this process, the deployment often ended up being a simple refresh of the client's environment with new Docker images. As a bonus, deployment automation also facilitated quick creation of different test and demo environments on our side.

Why Kubernetes?

Rather than setting up a Kubernetes cluster in our offices, we decided to take a quicker route and set up a Kubernetes cluster in the cloud. The main managed options are Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) and Azure Kubernetes Service (AKS). For this project, we needed to use services from Azure, such as Azure Active Directory, so it made sense to choose AKS for our Kubernetes setup.

Solution

Conceptually, the solution we put in place looks like this:

[Diagram: Kubernetes-01 – overview of the solution]

Let's explain the solution in the diagram above in a bit more detail. The following paragraphs describe the four steps we needed to achieve our goals:

  • configuring AKS,
  • building Docker images in continuous integration pipeline,
  • defining Kubernetes manifests and
  • automating deployments in continuous deployment pipeline.

Configuring AKS

There is a very good tutorial by Microsoft available online to guide you through the AKS setup, which lets you get a working AKS environment up and running in minutes. With a Kubernetes cluster in place, we were ready to define our deployment pipeline.
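For reference, getting a basic cluster boils down to a handful of Azure CLI commands along these lines; the resource group, cluster name, region and node count are placeholders, not the values we used:

```bash
# Create a resource group to host the cluster (all names are illustrative)
az group create --name aks-demo-rg --location westeurope

# Provision a small AKS cluster
az aks create --resource-group aks-demo-rg --name aks-demo --node-count 2 --generate-ssh-keys

# Merge the cluster credentials into the local kubeconfig so kubectl can reach AKS
az aks get-credentials --resource-group aks-demo-rg --name aks-demo
```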

Building Docker images

The first step was to define Dockerfiles describing the images for the front-end and back-end. Once we were able to create Docker images, we needed to build them and push them to our Docker registry as part of our continuous integration pipeline. We configured all of this in our Azure DevOps CI pipeline by adding steps to build and push the Docker images to Agile Partner's Docker registry. For the registry, we used the Azure Container Registry service. We also added a step that updated the Kubernetes manifest with the new Docker image tags, to let Kubernetes know which version of the image to deploy.
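As an illustration, such a build-and-push step can be expressed with the Docker@2 task in an Azure DevOps YAML pipeline; the service connection name, repository and Dockerfile path below are placeholder values, not our actual configuration:

```yaml
# Build the front-end image from its Dockerfile and push it to the registry
# ('acr-connection', 'frontend' and the Dockerfile path are illustrative)
- task: Docker@2
  displayName: Build and push front-end image
  inputs:
    command: buildAndPush
    containerRegistry: acr-connection
    repository: frontend
    dockerfile: src/frontend/Dockerfile
    tags: |
      $(Build.BuildId)
```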

Specifying Kubernetes manifests

The various parts of an app deployed to a Kubernetes cluster are described with a manifest file. At a minimum, the manifest should describe a Deployment which, to put it simply, specifies what container image we want to run (within a Pod) and how many instances we need (via a ReplicaSet). It should also include a definition for a Kubernetes Service to expose the Pods. For our solution we ended up with two manifest files, one for the React app and another for the ASP.NET Core Web API.
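To make this concrete, here is a trimmed-down sketch of what such a manifest can look like for the Web API; the image name, labels, replica count and port are placeholders rather than our actual values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapi
spec:
  replicas: 2                 # number of Pod instances managed by the ReplicaSet
  selector:
    matchLabels:
      app: webapi
  template:
    metadata:
      labels:
        app: webapi
    spec:
      containers:
        - name: webapi
          image: myregistry.azurecr.io/webapi:1234   # tag rewritten by the CI pipeline
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webapi
spec:
  selector:
    app: webapi               # routes traffic to the Pods created by the Deployment
  ports:
    - port: 80
      targetPort: 80
```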

Additionally, we needed to expose our app to the internet. For this we defined a Kubernetes Ingress (handled by the cluster's ingress controller), which allowed us to configure HTTPS access and expose the app at a given URL. That way, whenever we needed to add a new test or demo environment, we just had to create new manifests for the front-end and back-end and configure another Ingress to expose the new environment at a different URL.
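A sketch of such an Ingress, assuming an NGINX ingress controller is installed in the cluster and a TLS certificate is already stored as a Secret; the host name and secret name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-env
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller
  tls:
    - hosts:
        - demo.example.com
      secretName: demo-tls         # pre-existing TLS certificate Secret
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapi       # the Service defined in the manifest above
                port:
                  number: 80
```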

Deploying to Kubernetes is extremely easy once you have a manifest file: it boils down to running a single command with the Kubernetes command-line interface (kubectl) to apply the manifest. Kubernetes only applies the parts of the manifest that have changed or are new, which further optimizes already fast deployments. Deployment rollbacks are easy to perform too: Kubernetes keeps a history of previous Deployment revisions (controlled by revisionHistoryLimit), making a quick rollback possible should that ever be needed.
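For reference, the commands look roughly like this; the file path and Deployment name are illustrative:

```bash
# Create or update everything described in the manifest
kubectl apply -f k8s/webapi.yaml

# Inspect the revision history kept for the Deployment
kubectl rollout history deployment/webapi

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/webapi
```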

Deploying using Continuous Deployment

The next step was to automate the test environment deployments. We added a command to apply the updated manifest file (as mentioned before, each build produced an updated Kubernetes manifest referencing a different Docker image tag), which instructed Kubernetes to pull the new container image from the registry. In the context of Azure DevOps, this simply required adding a step to our CD pipeline that executes the kubectl command against AKS.
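In a YAML-based pipeline this can be as small as a single step; the manifest path below is a placeholder, and the step assumes cluster credentials were obtained beforehand (for instance with az aks get-credentials):

```yaml
# Apply the manifest produced by the CI build against the AKS cluster
- script: kubectl apply -f $(Pipeline.Workspace)/manifests/webapi.yaml
  displayName: Deploy updated manifest to AKS
```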

Conclusion

While setting up a Kubernetes cluster can initially be a little intimidating, it provides a neat way to rapidly provision instances of app services. It also allows for quick deployments of new versions (and rollbacks) and makes scaling up or down as needed very easy. In our case it enabled us to quickly deploy new builds, perform manual testing and validate developments. Our client gained transparency and autonomy: they could follow the progress of development and validation, pull preview or production-ready container images from Agile Partner's Docker registry and deploy them in their environments without involvement from our side.

Thanks to Docker and Kubernetes, we managed to streamline and automate a deployment process that could otherwise have been quite painful, time-consuming and error-prone, gaining velocity and flexibility throughout our development process.