12

Is there a way to add node labels when deploying worker nodes in EKS? I do not see an option in the CloudFormation template available for worker nodes.

EKS-CF-Workers

The only option I see right now is to use the kubectl label command to add labels, which is post cluster setup. However, I need complete automation, meaning applications are deployed automatically after the cluster is deployed, and labels help in achieving that segregation.
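For reference, the post-setup workaround I mean looks like this (the node name is just an example):

kubectl label nodes ip-10-0-0-1.ec2.internal tier=development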

f-z-N
  • Check this article out... Amazon EC2 Launch Templates now support the ability to expose EC2 tags in the instance metadata within K8s. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html?icmpid=docs_ec2_console#work-with-tags-in-IMDS – jtgorny Aug 11 '22 at 12:38

5 Answers

20

With the new EKS-optimized AMIs (amazon-eks-node-vXX) and the CloudFormation template refactors provided by AWS, it is now possible to add node labels as simply as providing arguments to the BootstrapArguments parameter of the amazon-eks-nodegroup.yaml CloudFormation template, for example --kubelet-extra-args --node-labels=my-key=my-value. For more details, check the AWS announcement: Improvements for Amazon EKS Worker Node Provisioning
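Under the hood, the template passes BootstrapArguments straight through to the AMI's bootstrap script, so the call that ends up running on the node looks roughly like this (the cluster name and label values here are placeholders):

# What the EKS-optimized AMI's user data effectively runs;
# "my-cluster" and the labels are illustrative values.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--node-labels=my-key=my-value,another=bar'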

user9269906
Luis Govea
    Apparently we can use commas in between to add multiple labels: --node-labels=alabel=foo,another=bar – Adverbly Nov 14 '18 at 20:25
4

You'll need to add the config in the UserData and use the --node-labels option for the kubelet. Here's an example UserData section which includes node labels:

NodeLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    AssociatePublicIpAddress: 'true'
    IamInstanceProfile: !Ref NodeInstanceProfile
    ImageId: !Ref NodeImageId
    InstanceType: !Ref NodeInstanceType
    KeyName: !Ref KeyName
    SecurityGroups:
      - !Ref NodeSecurityGroup
    UserData:
      Fn::Base64:
        Fn::Join: [
        "",
        [
          "#!/bin/bash -xe\n",
          "CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki", "\n",
          "CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt", "\n",
          "MODEL_DIRECTORY_PATH=~/.aws/eks", "\n",
          "MODEL_FILE_PATH=$MODEL_DIRECTORY_PATH/eks-2017-11-01.normal.json", "\n",
          "mkdir -p $CA_CERTIFICATE_DIRECTORY", "\n",
          "mkdir -p $MODEL_DIRECTORY_PATH", "\n",
          "curl -o $MODEL_FILE_PATH https://s3-us-west-2.amazonaws.com/amazon-eks/1.10.3/2018-06-05/eks-2017-11-01.normal.json", "\n",
          "aws configure add-model --service-model file://$MODEL_FILE_PATH --service-name eks", "\n",
          "aws eks describe-cluster --region=", { Ref: "AWS::Region" }," --name=", { Ref: ClusterName }," --query 'cluster.{certificateAuthorityData: certificateAuthority.data, endpoint: endpoint}' > /tmp/describe_cluster_result.json", "\n",
          "cat /tmp/describe_cluster_result.json | grep certificateAuthorityData | awk '{print $2}' | sed 's/[,\"]//g' | base64 -d >  $CA_CERTIFICATE_FILE_PATH", "\n",
          "MASTER_ENDPOINT=$(cat /tmp/describe_cluster_result.json | grep endpoint | awk '{print $2}' | sed 's/[,\"]//g')", "\n",
          "INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)", "\n",
          "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /var/lib/kubelet/kubeconfig", "\n",
          "sed -i s,CLUSTER_NAME,", { Ref: ClusterName }, ",g /var/lib/kubelet/kubeconfig", "\n",
          "sed -i s,REGION,", { Ref: "AWS::Region" }, ",g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,MAX_PODS,", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: NodeInstanceType }, MaxPods ] }, ",g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service", "\n",
          "DNS_CLUSTER_IP=10.100.0.10", "\n",
          "if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi", "\n",
          "sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g  /etc/systemd/system/kubelet.service", "\n",
          "sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig" , "\n",
          "sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g  /etc/systemd/system/kubelet.service" , "\n"
          "sed -i s,INTERNAL_IP/a,--node-labels tier=development,g  /etc/systemd/system/kubelet.service" , "\n"
          "systemctl daemon-reload", "\n",
          "systemctl restart kubelet", "\n",
          "/opt/aws/bin/cfn-signal -e $? ",
          "         --stack ", { Ref: "AWS::StackName" },
          "         --resource NodeGroup ",
          "         --region ", { Ref: "AWS::Region" }, "\n"
        ]
      ]

The relevant line is:

"sed -i s,INTERNAL_IP/a,--node-labels tier=development,g  /etc/systemd/system/kubelet.service" , "\n"

WARNING: I haven't tested this, but I do something similar and it works fine.

jaxxstorm
  • This seems pretty ridiculous to me. What exactly does EKS help with, then? Other than being a wrapper for CloudFormation? – cryanbhu Aug 29 '18 at 19:25
    It is ridiculous. No arguments there. – jaxxstorm Aug 29 '18 at 19:27
  • You're correct in that their Node Group offering is lacking. However, what EKS offers is a managed Control Plane. Meaning you don't have to worry about ensuring the reliability and performance of your Kubernetes master. Nor monitor the state of the (often fragile) etcd database. They also offer their own CNI which allows you to bind pods directly to ENIs on the worker nodes which then allows you to control pod access with AWS Security Groups. Honestly though the ENI binding aspect is more limiting than beneficial unless network performance is your goal. I'm considering not using EKS any longer – TJ Zimmerman Apr 25 '19 at 18:52
  • Also, this example is a monster, and the Node Launch Config doesn't normally look this obtuse. Here is the official example CloudFormation file for building this: https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml Which leverages this open source AMI (where the bootstrap.sh script lives): https://github.com/awslabs/amazon-eks-ami – TJ Zimmerman Apr 25 '19 at 18:53
  • The original answer was pulled from the same source; it's obviously been updated since Jul 20th – jaxxstorm Apr 25 '19 at 19:51
  • Coming back to this in 2020 - I'm pretty sure this is NOT a good idea since Cloud Provider integrations are moving out-of-tree which means that the AWS Cloud Controller Manager will discover which resources it should be managing via the AWS API by Resource Labels. A requirement of the Cloud Controller Manager is that all EC2 instances have the same hostname as their AWS Private DNS Name. So updating the `--node-label` would break things. – TJ Zimmerman May 13 '20 at 21:20
3

If you are using eksctl, you can add labels to the node groups, like so:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: m5.xlarge
    desiredCapacity: 10
    privateNetworking: true
  - name: ng-2-builders
    labels: { role: builders }
    instanceType: m5.2xlarge
    desiredCapacity: 2
    privateNetworking: true
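A config like this is then applied when creating the cluster, e.g.:

eksctl create cluster -f cluster.yaml

(where cluster.yaml is whatever you named the config above).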

See https://eksctl.io/usage/managing-nodegroups/ for more info

Ric
1

I've managed to get it to work with the following sed expression:

sed -i '/--node-ip/ a \ \ --node-labels group=node \\' /etc/systemd/system/kubelet.service
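After patching the unit file, reload and restart the kubelet for the change to take effect (as in the UserData example above):

systemctl daemon-reload
systemctl restart kubelet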
-1

Now with EKS managed node groups, you can specify node labels in CloudFormation.

See the docs.
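A minimal sketch of such a node group (the parameter and resource names, role, and subnet below are placeholders):

ManagedNodeGroup:
  Type: AWS::EKS::Nodegroup
  Properties:
    ClusterName: !Ref ClusterName           # placeholder parameter
    NodeRole: !GetAtt NodeInstanceRole.Arn  # placeholder IAM role resource
    Subnets:
      - !Ref Subnet1                        # placeholder subnet
    Labels:                                 # Kubernetes labels applied to the nodes
      tier: development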

General Grievance