Software Factory
Build a Kubernetes cluster with eksctl
05 Aug 2019
by Olivier Robert
Kubernetes is available on AWS as a managed service: AWS takes care of the control plane (replacing unhealthy nodes, automated updates), while you take care of the worker nodes. This saves you a lot of setup and maintenance work. Of course the control plane is highly available, as it is deployed across multiple availability zones.
You can use ECR to store container images, EBS volumes for your persistent volume claims, ELBs for your load balancer services, and IAM to map users and groups to your cluster. It is getting more and more interesting, isn't it? Especially if you consider the amount of work and expertise needed to get an equivalent setup and integration on premises.
These are huge advantages. You can create a cluster from the AWS console, but I find it more efficient to do it from the command line with eksctl, a command line utility for creating and managing Kubernetes clusters on Amazon EKS.
What is needed
- aws cli
- aws-iam-authenticator
- eksctl
- kubectl
Tools installation
The tools are available on all major platforms (macOS, Linux, Windows).
AWS Cli
There are different ways to install the AWS CLI. I like to create a Python virtual environment and install it via pip. Pick your preferred method.
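For example, here is a minimal sketch of the pip route (the virtual environment name is just an example):
$ python3 -m venv awscli-venv            # create an isolated Python environment
$ source awscli-venv/bin/activate        # activate it
(awscli-venv) $ pip install awscli       # install the AWS CLI into the venv
(awscli-venv) $ aws --version            # verify the installation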
aws-iam-authenticator
Next, install aws-iam-authenticator. It lets IAM users authenticate to the cluster.
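On macOS, for instance, it can be installed with Homebrew; on Linux, grab the binary linked from the EKS documentation. A sketch, assuming the Homebrew formula name hasn't changed:
$ brew install aws-iam-authenticator     # install the authenticator binary
$ aws-iam-authenticator version          # check it is on the PATH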
eksctl
Installing eksctl is straightforward as well.
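A sketch of the install from the project's GitHub releases, following the pattern in the eksctl README (the URL and asset names may have changed since this was written):
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin     # put the binary on the PATH
$ eksctl version                         # verify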
kubectl
Same goes for installing kubectl.
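Again, Homebrew does the job on macOS; the Kubernetes release pages provide binaries for the other platforms. A sketch:
$ brew install kubernetes-cli            # installs kubectl
$ kubectl version --client               # client-only version check (no cluster yet)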
Cluster creation
We now have all the tooling we need to create and then use the cluster.
I would recommend not just running the eksctl create cluster command as-is, because by default your cluster will be created in the us-west-2 region with m5.large instances. Maybe that is not what you want for your test run, maybe it is. I keep it very basic, but I chose to control the region, the instance type (t2.medium) and the number of instances.
Use the AWS CLI to set up your AWS credentials:
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:
To configure and control the cluster creation, you can use eksctl's command line options and flags or create a configuration file. I created a cluster.yaml configuration file for easy re-use.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKSTestDrive
  region: us-east-1
nodeGroups:
  - name: ng-1-workers
    instanceType: t2.medium
    desiredCapacity: 3
The cluster gets a name: EKSTestDrive.
It will be created in Northern Virginia (us-east-1).
I'll create one node group (ng-1-workers) with 3 nodes using t2.medium instances. More than enough for my tinkering.
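For comparison, roughly the same cluster could be requested with command line flags alone (a sketch; flag names taken from eksctl's create cluster help at the time):
$ eksctl create cluster \
    --name EKSTestDrive \
    --region us-east-1 \
    --nodegroup-name ng-1-workers \
    --node-type t2.medium \
    --nodes 3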
If you have several AWS profiles and want to use a particular one, prepend the command with the AWS_PROFILE environment variable: AWS_PROFILE=demo eksctl create cluster ...
Be aware that if you set a profile during cluster creation, aws-iam-authenticator will need to use that same profile instead of the default one, because the IAM user that creates the cluster is the one that gets admin rights on it. To get aws-iam-authenticator to use the right profile: export AWS_DEFAULT_PROFILE=demo (in this example).
Don't forget to unset the environment variables when you are done.
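Putting the profile handling together, it looks something like this (a sketch, reusing the demo profile from above):
$ AWS_PROFILE=demo eksctl create cluster -f cluster.yaml   # create the cluster with a specific profile
$ export AWS_DEFAULT_PROFILE=demo                          # make aws-iam-authenticator use the same profile
$ kubectl get nodes                                        # kubectl now authenticates as the creating user
$ unset AWS_DEFAULT_PROFILE                                # clean up when done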
You'll notice I use two command flags: one writes the Kubernetes config file automatically, the other sets this configuration as the current context for the kubectl CLI.
If you already have a kube config file, you might want to write the one generated by eksctl somewhere else and point to it via an environment variable. You could use --kubeconfig kube/config --write-kubeconfig, for instance; that would write the config file to kube/config in the current directory. An export KUBECONFIG=kube/config would then select that configuration and allow kubectl to access the EKS cluster. Again, don't forget to unset the variable when you are done.
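In other words, something along these lines (a sketch, keeping the paths from the paragraph above):
$ eksctl create cluster -f cluster.yaml --kubeconfig kube/config --write-kubeconfig
$ export KUBECONFIG=kube/config          # point kubectl at the generated config
$ kubectl get nodes                      # talks to the new EKS cluster
$ unset KUBECONFIG                       # back to your usual config when done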
Enough! Let's create that cluster now.
$ eksctl create cluster -f cluster.yaml --write-kubeconfig --set-kubeconfig-context
[ℹ] using region us-east-1
[ℹ] setting availability zones to [us-east-1f us-east-1c]
[ℹ] subnets for us-east-1f - public:192.168.0.0/19 private:192.168.64.0/19
[ℹ] subnets for us-east-1c - public:192.168.32.0/19 private:192.168.96.0/19
[ℹ] nodegroup "ng-1-workers" will use "ami-0200e65a38edfb7e1" [AmazonLinux2/1.12]
[ℹ] creating EKS cluster "EKSTestDrive" in "us-east-1" region
[ℹ] 1 nodegroup (ng-1-workers) was included
[ℹ] will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=EKSTestDrive'
[ℹ] 2 sequential tasks: { create cluster control plane "EKSTestDrive", create nodegroup "ng-1-workers" }
[ℹ] building cluster stack "eksctl-EKSTestDrive-cluster"
[ℹ] deploying stack "eksctl-EKSTestDrive-cluster"
[ℹ] building nodegroup stack "eksctl-EKSTestDrive-nodegroup-ng-1-workers"
[ℹ] --nodes-min=3 was set automatically for nodegroup ng-1-workers
[ℹ] --nodes-max=3 was set automatically for nodegroup ng-1-workers
[ℹ] deploying stack "eksctl-EKSTestDrive-nodegroup-ng-1-workers"
[✔] all EKS cluster resource for "EKSTestDrive" had been created
[✔] saved kubeconfig as "/home/cloud_user/.kube/config"
[ℹ] adding role "arn:aws:iam::992159286727:role/eksctl-EKSTestDrive-nodegroup-ng-1-NodeInstanceRole-14FOPKHLL342M" to auth ConfigMap
[ℹ] nodegroup "ng-1-workers" has 0 node(s)
[ℹ] waiting for at least 3 node(s) to become ready in "ng-1-workers"
[ℹ] nodegroup "ng-1-workers" has 3 node(s)
[ℹ] node "ip-192-168-30-80.ec2.internal" is ready
[ℹ] node "ip-192-168-34-141.ec2.internal" is ready
[ℹ] node "ip-192-168-8-70.ec2.internal" is ready
[ℹ] kubectl command should work with "/home/cloud_user/.kube/config", try 'kubectl get nodes'
[✔] EKS cluster "EKSTestDrive" in "us-east-1" region is ready
It takes a while (don't Ctrl-C out of it), but ... the cluster is up, the config is written, and I can interact with the cluster.
Drum roll ...
$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-192-168-30-80.ec2.internal    Ready    <none>   1m    v1.12.7
ip-192-168-34-141.ec2.internal   Ready    <none>   1m    v1.12.7
ip-192-168-8-70.ec2.internal     Ready    <none>   1m    v1.12.7
Shortly after running my test cluster, I wanted to run another one, but this time in an existing default VPC. So I had to add the existing subnets to my configuration file. Here's the configuration for my case, selecting only 3 public subnets in the default VPC:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: EKSTestDrive
  region: us-east-1
vpc:
  id: vpc-0ce2e4f5ca4d03d65
  cidr: "172.31.0.0/16"
  subnets:
    public:
      us-east-1a:
        id: subnet-08592c29816595efe
      us-east-1b:
        id: subnet-056189bf741ad4a8e
      us-east-1c:
        id: subnet-07bf113f6d926039f
nodeGroups:
  - name: ng-1-workers
    instanceType: t2.medium
    desiredCapacity: 2
During your tests, if your pods need access to AWS services like DynamoDB, S3, ... you can add the necessary policies to the node instance role created by eksctl at cluster creation. For testing purposes only! Keep in mind that in doing so, all nodes have the same permissions: any pod on any node will have these permissions. So a pod that normally should not have access to, say, a specific S3 bucket will have that access anyway.
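For example, to let every node (and therefore every pod) read from S3 during a test, you could attach a managed policy to the node instance role created by eksctl (a sketch; the role name comes from the cluster creation output above, and the policy is just an example):
$ aws iam attach-role-policy \
    --role-name eksctl-EKSTestDrive-nodegroup-ng-1-NodeInstanceRole-14FOPKHLL342M \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess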
For production, secrets and configuration mappings can be used instead. AWS already has a writeup on GitHub discussing their plan for IAM and Kubernetes integration. There are a few projects on GitHub working on just that: kube2iam, kiam, kube-aws-iam-controller.
In order to delete the resources created for the cluster, run:
$ eksctl delete cluster --name=EKSTestDrive