
Build a kubernetes cluster with eksctl

By Olivier Robert, a Senior Consultant and DevOps Engineer at Agile Partner.

Kubernetes is available on AWS as a managed service: AWS takes care of the control plane (replacing unhealthy nodes, automated updates), you take care of worker nodes. This saves you a lot of setup and maintenance work. Of course the control plane is highly available as it is deployed across multiple availability zones.

You can use ECR to store container images, EBS volumes for your persistent volume claims, ELBs for your load balancer services and IAM to map users and groups to your cluster. It is getting more and more interesting, isn’t it? Especially if you consider the amount of work and skills necessary to get an equivalent setup and integration on premises.

These are huge advantages. You can create a cluster from the console, but I find it more efficient to do it from the command line with eksctl, a command-line utility for creating and managing Kubernetes clusters on Amazon EKS.

What is needed

Tools installation

The tools are available on all platforms (macOS, Linux, Windows).

AWS Cli

There are different ways to install the AWS CLI. I like to create a Python virtual environment and install it via pip. Pick your preferred method.

aws-iam-authenticator

Here’s how to install aws-iam-authenticator. It allows IAM users to authenticate against the cluster.

eksctl

Installing eksctl is straightforward as well.

kubectl

Same goes for installing kubectl.

Cluster creation

We now have all the tooling we need to create and then use the cluster.

I would recommend not running the bare eksctl create cluster command, because your cluster would then be created in the us-west-2 region with m5.large instances by default. Maybe that is not what you want for your test run, maybe it is. I keep it very basic, but I chose to control the region, the instance type (t2.medium) and the number of instances.

Use the AWS CLI to set up your AWS credentials:

$ aws configure
AWS Access Key ID [None]: <your key>
AWS Secret Access Key [None]: <your secret key>
Default region name [None]: <your region>
Default output format [None]:

To configure and control the cluster creation, you can use eksctl’s command line options and flags or create a configuration file. I created a cluster.yaml configuration file for easy re-use.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKSTestDrive
  region: us-east-1

nodeGroups:
  - name: ng-1-workers
    instanceType: t2.medium
    desiredCapacity: 3

The cluster gets a name: EKSTestDrive.
It will be created in Northern Virginia (us-east-1).
I’ll create one node group (ng-1-workers) with 3 nodes using t2.medium instances. More than enough for my tinkering.

If you have several AWS profiles and want to use a particular one, prepend the command with the AWS_PROFILE environment variable: AWS_PROFILE=demo eksctl create cluster ...

Be aware that if you set a profile during cluster creation, aws-iam-authenticator will need to use that profile instead of the default one, because the IAM user that creates the cluster is the one that gets admin rights. To get aws-iam-authenticator to use the right profile: export AWS_DEFAULT_PROFILE=demo (in this example).

Don’t forget to unset the environment variables when you are done.
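Putting the profile handling together, a session could look like this (the profile name demo is just an example):

```shell
# Create the cluster under a specific AWS profile
AWS_PROFILE=demo eksctl create cluster -f cluster.yaml

# Point aws-iam-authenticator at the same profile for kubectl calls
export AWS_DEFAULT_PROFILE=demo

# ... interact with the cluster ...

# Clean up when you are done
unset AWS_DEFAULT_PROFILE
```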

You’ll notice I use two command flags: one automatically writes the Kubernetes config file, the other sets this configuration as the current context for the kubectl CLI.

If you have a kube config file already, you might want to write the one generated by eksctl to another place and refer to it via an environment variable. You could use --kubeconfig kube/config --write-kubeconfig for instance. That would write the config file in the current directory in kube/config. An export KUBECONFIG=kube/config would select that configuration and allow kubectl to access the EKS cluster. Again, don’t forget to unset the variable when you are done.
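Concretely, that alternate-kubeconfig flow could look like this:

```shell
# Write the kubeconfig to kube/config instead of ~/.kube/config
eksctl create cluster -f cluster.yaml --kubeconfig kube/config --write-kubeconfig --set-kubeconfig-context

# Point kubectl at that file
export KUBECONFIG=kube/config
kubectl get nodes

# Clean up when you are done
unset KUBECONFIG
```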

Enough! Let’s create that cluster now.

$ eksctl create cluster -f cluster.yaml --write-kubeconfig --set-kubeconfig-context
[]  using region us-east-1
[]  setting availability zones to [us-east-1f us-east-1c]
[]  subnets for us-east-1f - public:192.168.0.0/19 private:192.168.64.0/19
[]  subnets for us-east-1c - public:192.168.32.0/19 private:192.168.96.0/19
[]  nodegroup "ng-1-workers" will use "ami-0200e65a38edfb7e1" [AmazonLinux2/1.12]
[]  creating EKS cluster "EKSTestDrive" in "us-east-1" region
[]  1 nodegroup (ng-1-workers) was included
[]  will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=EKSTestDrive'
[]  2 sequential tasks: { create cluster control plane "EKSTestDrive", create nodegroup "ng-1-workers" }
[]  building cluster stack "eksctl-EKSTestDrive-cluster"
[]  deploying stack "eksctl-EKSTestDrive-cluster"
[]  building nodegroup stack "eksctl-EKSTestDrive-nodegroup-ng-1-workers"
[]  --nodes-min=3 was set automatically for nodegroup ng-1-workers
[]  --nodes-max=3 was set automatically for nodegroup ng-1-workers
[]  deploying stack "eksctl-EKSTestDrive-nodegroup-ng-1-workers"
[]  all EKS cluster resource for "EKSTestDrive" had been created
[]  saved kubeconfig as "/home/cloud_user/.kube/config"
[]  adding role "arn:aws:iam::992159286727:role/eksctl-EKSTestDrive-nodegroup-ng-1-NodeInstanceRole-14FOPKHLL342M" to auth ConfigMap
[]  nodegroup "ng-1-workers" has 0 node(s)
[]  waiting for at least 3 node(s) to become ready in "ng-1-workers"
[]  nodegroup "ng-1-workers" has 3 node(s)
[]  node "ip-192-168-30-80.ec2.internal" is ready
[]  node "ip-192-168-34-141.ec2.internal" is ready
[]  node "ip-192-168-8-70.ec2.internal" is ready
[]  kubectl command should work with "/home/cloud_user/.kube/config", try 'kubectl get nodes'
[]  EKS cluster "EKSTestDrive" in "us-east-1" region is ready

It takes a while (don’t Ctrl-C out of it), but … the cluster is up, the config is done, and I can interact with the cluster.

Drum roll …

$ kubectl get nodes
NAME                             STATUS    ROLES     AGE       VERSION
ip-192-168-30-80.ec2.internal    Ready     <none>    1m        v1.12.7
ip-192-168-34-141.ec2.internal   Ready     <none>    1m        v1.12.7
ip-192-168-8-70.ec2.internal     Ready     <none>    1m        v1.12.7

Shortly after I ran my first test cluster, I wanted to run another one, this time in an existing default VPC. So I had to add the existing subnets to my configuration file. Here’s the configuration for my case, selecting only 3 public subnets in the default VPC:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKSTestDrive
  region: us-east-1

vpc:
  id: vpc-0ce2e4f5ca4d03d65
  cidr: "172.31.0.0/16"
  subnets:
    public:
      us-east-1a:
        id: subnet-08592c29816595efe
      us-east-1b:
        id: subnet-056189bf741ad4a8e
      us-east-1c:
        id: subnet-07bf113f6d926039f

nodeGroups:
  - name: ng-1-workers
    instanceType: t2.medium
    desiredCapacity: 2
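If you don’t know the subnet IDs of your default VPC off-hand, the AWS CLI can list them. The VPC ID below is the one from my configuration; yours will differ:

```shell
# List the subnets of a given VPC with their availability zones
aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-0ce2e4f5ca4d03d65 \
  --query 'Subnets[].{Id:SubnetId,Az:AvailabilityZone}' \
  --output table
```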

During your tests, if your pods need access to AWS services like DynamoDB, S3, … you can add the necessary policies to the node instance role created by eksctl at cluster creation. For testing purposes only! Keep in mind that in doing so, all nodes get the same permissions: any pod on any node will have them. So a pod that normally should not have access to, say, a specific S3 bucket will have that access anyway.
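As a sketch, attaching a managed policy to the node instance role could look like this. The role name comes from the creation log above (the random suffix differs per cluster), and AmazonS3ReadOnlyAccess is just an example policy:

```shell
# Grant every node -- and therefore every pod -- read access to S3. Testing only!
aws iam attach-role-policy \
  --role-name eksctl-EKSTestDrive-nodegroup-ng-1-NodeInstanceRole-14FOPKHLL342M \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```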

For production, secrets and configuration mappings can be used instead. AWS has already published a writeup on GitHub discussing its plans for IAM and Kubernetes integration, and there are a few projects on GitHub working on just that: kube2iam, kiam, kube-aws-iam-controller.

In order to delete the resources created for the cluster, run:

$ eksctl delete cluster --name=EKSTestDrive
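Once the deletion finishes, a quick check confirms the cluster no longer appears in the region:

```shell
# The cluster should no longer show up in the list for the region
eksctl get cluster --region us-east-1
```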

Want to know more? The experts of our Agile Software Factory are here to help you!
