
Adding users to your EKS Cluster

By Olivier Robert, a Senior Consultant and DevOps Engineer at Agile Partner.

This is a follow-up to the article "Build a kubernetes cluster with eksctl". It is assumed that you have a running EKS cluster.

Adding users to your EKS cluster has two sides: one is IAM (Identity and Access Management, on the AWS side). The other is RBAC (Role-Based Access Control, on the Kubernetes side).

New users and/or roles are declared via the aws-auth ConfigMap within Kubernetes.

Here is a very nice introduction to RBAC in Kubernetes over at Bitnami.

We will use it as a base for the two users we will add to the cluster. One will be our backup administrator; the other will have a role for managing deployments in the "office" namespace.

Let’s look at the aws-auth ConfigMap before we change anything.

kubectl -n kube-system get configmap aws-auth -o yaml
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::356198252393:role/eksctl-EKSTestDrive-nodegroup-ng-NodeInstanceRole-5C1P4COCU9W1
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2019-07-08T09:47:15Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "719"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 5df31bad-a165-11e9-90f3-12c84ad916b6

I am working with an IAM user with administrator rights, so basically I can do whatever I want. But I do not want to give that kind of access to every collaborator. I will create a new IAM user called eksadmin in the AWS console. This user will not need access to the AWS console, but programmatic access will be necessary. No need to set any permissions in the creation process. No tags. Create the eksadmin user and save the credentials and the IAM ARN.

Edit the aws-auth ConfigMap and add a "mapUsers" section. We will map the IAM user ARN to the pre-defined system:masters group. That group gives admin rights to our new user.

kubectl -n kube-system edit configmap aws-auth
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::356198252393:role/eksctl-EKSTestDrive-nodegroup-ng-NodeInstanceRole-5C1P4COCU9W1
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::356198252393:user/eksadmin
      username: eksadmin
      groups:
        - system:masters
kind: ConfigMap
metadata:
  creationTimestamp: "2019-07-08T09:47:15Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "4169"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 5df31bad-a165-11e9-90f3-12c84ad916b6

We can now test our new IAM user. Run the aws configure --profile eksadmin command and enter the user's access key ID and secret access key.

Export that configuration for aws-iam-authenticator to authenticate the eksadmin user: export AWS_DEFAULT_PROFILE="eksadmin".
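Under the hood, aws configure simply writes the profile into your AWS credentials file. The resulting entry looks roughly like this (the key values below are placeholders, not real credentials):

```ini
; ~/.aws/credentials (placeholder values)
[eksadmin]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```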

If you want to control what identity you are using, run:

$ aws sts get-caller-identity
    "Account": "356198252393",
    "Arn": "arn:aws:iam::356198252393:user/eksadmin"

We are now identified as eksadmin on AWS. Let's verify that our eksadmin user has admin rights.

$ kubectl get node
NAME                            STATUS   ROLES    AGE   VERSION
ip-172-31-13-117.ec2.internal   Ready    <none>   65m   v1.12.7
ip-172-31-89-13.ec2.internal    Ready    <none>   65m   v1.12.7
$ kubectl -n kube-system get pod
NAME                       READY   STATUS    RESTARTS   AGE
aws-node-p4xgv             1/1     Running   0          65m
aws-node-q5qw4             1/1     Running   0          65m
coredns-7f66c6c4b9-5j9ck   1/1     Running   0          72m
coredns-7f66c6c4b9-sl9t5   1/1     Running   0          72m
kube-proxy-qqrkb           1/1     Running   0          65m
kube-proxy-zvr6r           1/1     Running   0          65m

We now have an IAM user with no permissions in our AWS account, but with admin rights in our Kubernetes cluster. Pretty cool.

We still need to create a user that has deployment permissions in the "office" namespace. Let's start by creating the namespace: kubectl create namespace office (notice that I am still using our eksadmin user).
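If you prefer a declarative workflow, the same namespace can be described in a manifest and applied with kubectl apply; a minimal sketch, equivalent to the imperative command above:

```yaml
# office-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: office
```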

We use the exact same role as in the Bitnami article to limit confusion. Here is the role-deployment-manager.yaml file.

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: office
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # You can also use ["*"]

Add a role binding in the rolebinding-deployment-manager.yaml file:

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager-binding
  namespace: office
subjects:
- kind: User
  name: officedep
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: ""

Apply role and role binding:

$ kubectl apply -f role-deployment-manager.yaml
role.rbac.authorization.k8s.io/deployment-manager created
$ kubectl apply -f rolebinding-deployment-manager.yaml
rolebinding.rbac.authorization.k8s.io/deployment-manager-binding created

OK, so the deployment-manager role is bound to a cluster user named officedep. This is what we need to remember for now. That role allows specific actions (verbs) in the office namespace.

Let’s create an IAM user named officedep in the AWS console (as previously: only programmatic access, no permissions, no tags; save the credentials and the IAM ARN). Done? OK!

We need to edit the aws-auth ConfigMap and add a new user mapping for the officedep user, but this time mapped to a "deployment-manager" group.

apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::356198252393:role/eksctl-EKSTestDrive-nodegroup-ng-NodeInstanceRole-5C1P4COCU9W1
      username: system:node:{{EC2PrivateDNSName}}
  mapUsers: |
    - userarn: arn:aws:iam::356198252393:user/eksadmin
      username: eksadmin
      groups:
        - system:masters
    - userarn: arn:aws:iam::356198252393:user/officedep
      username: officedep
      groups:
        - deployment-manager
kind: ConfigMap
metadata:
  creationTimestamp: "2019-07-08T09:47:15Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "9345"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 5df31bad-a165-11e9-90f3-12c84ad916b6

Create the aws profile for the officedep user with aws configure --profile officedep. Export the new profile: export AWS_DEFAULT_PROFILE="officedep"

Remember that to switch users (eksadmin or officedep) you just need to export the desired profile.
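If you switch often, a tiny shell helper (a hypothetical convenience, not part of the AWS CLI) can guard against typos in the profile name:

```shell
#!/bin/sh
# use_profile: export AWS_DEFAULT_PROFILE, but only for profiles we know about.
use_profile() {
  case "$1" in
    eksadmin|officedep)
      export AWS_DEFAULT_PROFILE="$1"
      echo "Now acting as: $AWS_DEFAULT_PROFILE"
      ;;
    *)
      echo "unknown profile: $1" >&2
      return 1
      ;;
  esac
}

use_profile eksadmin
# → Now acting as: eksadmin
```

Any later aws or kubectl call in the same shell session will pick up the exported profile.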

We are ready to test our deployment manager user.

This user should not be able to list cluster nodes:

$ kubectl get node
Error from server (Forbidden): nodes is forbidden: User "officedep" cannot list resource "nodes" in API group "" at the cluster scope

That’s expected! Let’s run a simple pod in the office namespace. Here’s the nginx.yaml file we will use for that test.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  namespace: office
spec:
  containers:
  - name: nginx-ctr
    image: nginx:latest
    ports:
      - containerPort: 80

Run kubectl apply -f nginx.yaml

$ kubectl apply -f nginx.yaml
pod/nginx-demo created
$ kubectl -n office get pods
NAME         READY   STATUS    RESTARTS   AGE
nginx-demo   1/1     Running   0          8s

That’s it! We have two new users: eksadmin, with full admin rights on the cluster, and officedep, who can only manage deployments in the office namespace.

Want to know more? The experts of our Agile Software Factory are here to help you!
