Ensure Kubernetes Cluster Runs on a Supported Version

Overview

This check verifies that your Amazon EKS clusters are running on a supported Kubernetes version. AWS regularly releases new Kubernetes versions and deprecates older ones. Running on a supported version ensures you receive security patches, bug fixes, and access to new features.
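
For a quick audit of the versions your clusters are currently running, the AWS CLI can list each cluster and print its version. This is a minimal sketch; it assumes credentials are already configured and uses us-east-1 as the region:

# List every EKS cluster in the region and print its Kubernetes version
for cluster in $(aws eks list-clusters --region us-east-1 --query 'clusters[]' --output text); do
  version=$(aws eks describe-cluster --name "$cluster" --region us-east-1 \
    --query 'cluster.version' --output text)
  echo "$cluster: $version"
done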

Risk

Running an EKS cluster on an unsupported Kubernetes version creates several security risks:

  • No security patches: Unsupported versions no longer receive fixes for known vulnerabilities (CVEs)
  • Potential exploits: Attackers can target documented vulnerabilities in outdated versions
  • API deprecations: Older APIs may stop working, breaking your applications and add-ons
  • Compliance issues: Many compliance frameworks require running supported software versions

Remediation Steps

Prerequisites

  • AWS account access with permissions to manage EKS clusters
  • Knowledge of which Kubernetes version you want to upgrade to (check EKS version support for current versions)
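
Recent AWS CLI releases also expose an EKS API that lists the Kubernetes versions the service currently offers, which can help with the second prerequisite. This is a hedged sketch; if the command is not available in your CLI version, consult the EKS documentation for the supported-version list instead:

# List the Kubernetes versions EKS currently offers (requires a recent AWS CLI)
aws eks describe-cluster-versions --region us-east-1
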
Important upgrade considerations

Before upgrading your EKS cluster:

  1. Review the changelog: Check the Kubernetes changelog for breaking changes
  2. Test in non-production first: Always test upgrades in a development or staging environment
  3. Back up critical resources: Export important Kubernetes manifests and application configurations
  4. Check add-on compatibility: Verify that your add-ons (CoreDNS, kube-proxy, VPC CNI) are compatible with the target version (see the example command after this list)
  5. Node group versions: All managed node groups must match the cluster version before you can upgrade

EKS only supports upgrading one minor version at a time (e.g., 1.28 to 1.29, not 1.28 to 1.30).
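
For the add-on compatibility check in item 4 above, describe-addon-versions reports which add-on versions work with a given Kubernetes version. A short example for CoreDNS and a 1.31 target (substitute kube-proxy or vpc-cni as needed):

# List CoreDNS add-on versions compatible with the target Kubernetes version
aws eks describe-addon-versions \
  --kubernetes-version 1.31 \
  --addon-name coredns \
  --query 'addons[].addonVersions[].addonVersion' \
  --output table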

AWS Console Method

  1. Sign in to the AWS Management Console
  2. Navigate to Amazon EKS (search for "EKS" in the search bar)
  3. In the left sidebar, click Clusters
  4. Click on the name of the cluster you want to upgrade
  5. On the cluster details page, look for the Kubernetes version field
  6. If an upgrade is available, you will see an Update now link next to the version
  7. Click Update now
  8. Select the target Kubernetes version from the dropdown
  9. Review the upgrade information and click Update
  10. Wait for the control plane update to complete (this typically takes 20-30 minutes)

After the control plane upgrade completes, update your node groups and add-ons to match the new version.
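
The node group and add-on updates can also be done from the CLI. A hedged sketch, assuming a managed node group and the managed CoreDNS add-on; the node group and add-on version placeholders are hypothetical, and omitting --kubernetes-version upgrades the node group to the cluster's current version:

# Upgrade a managed node group to the cluster's Kubernetes version
aws eks update-nodegroup-version \
  --cluster-name <your-cluster-name> \
  --nodegroup-name <your-nodegroup-name> \
  --region us-east-1

# Upgrade an EKS add-on to a version compatible with the new cluster version
aws eks update-addon \
  --cluster-name <your-cluster-name> \
  --addon-name coredns \
  --addon-version <compatible-addon-version> \
  --region us-east-1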

AWS CLI Method

Check the current cluster version:

aws eks describe-cluster \
  --name <your-cluster-name> \
  --region us-east-1 \
  --query 'cluster.version' \
  --output text

Update the cluster to a supported version:

aws eks update-cluster-version \
  --name <your-cluster-name> \
  --kubernetes-version 1.31 \
  --region us-east-1

The command returns an update ID. You can monitor the upgrade progress:

aws eks describe-update \
  --name <your-cluster-name> \
  --update-id <update-id-from-previous-command> \
  --region us-east-1

Wait until the status shows Successful before proceeding with node group updates.
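
Instead of polling manually, the CLI's built-in waiter can block until the cluster returns to an Active state. A convenience sketch; it waits on overall cluster status rather than on the specific update ID, so run it after the update has started:

# Block until the cluster status returns to ACTIVE after the version update
aws eks wait cluster-active \
  --name <your-cluster-name> \
  --region us-east-1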

If the upgrade is blocked by readiness checks, you can force it (use with caution):

aws eks update-cluster-version \
  --name <your-cluster-name> \
  --kubernetes-version 1.31 \
  --force \
  --region us-east-1

CloudFormation Template

This template creates a new EKS cluster with a supported Kubernetes version:

AWSTemplateFormatVersion: '2010-09-09'
Description: EKS Cluster with supported Kubernetes version

Parameters:
  ClusterName:
    Type: String
    Description: Name of the EKS cluster

  KubernetesVersion:
    Type: String
    Default: '1.31'
    AllowedValues:
      - '1.28'
      - '1.29'
      - '1.30'
      - '1.31'
    Description: Kubernetes version (must be supported by EKS)

  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: List of subnet IDs for the EKS cluster

  SecurityGroupIds:
    Type: List<AWS::EC2::SecurityGroup::Id>
    Description: List of security group IDs for the EKS cluster

Resources:
  EKSClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: eks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref ClusterName
      Version: !Ref KubernetesVersion
      RoleArn: !GetAtt EKSClusterRole.Arn
      ResourcesVpcConfig:
        SubnetIds: !Ref SubnetIds
        SecurityGroupIds: !Ref SecurityGroupIds
        EndpointPublicAccess: true
        EndpointPrivateAccess: true

Outputs:
  ClusterName:
    Description: EKS Cluster Name
    Value: !Ref EKSCluster

  ClusterEndpoint:
    Description: EKS Cluster Endpoint
    Value: !GetAtt EKSCluster.Endpoint

  ClusterArn:
    Description: EKS Cluster ARN
    Value: !GetAtt EKSCluster.Arn

To update an existing cluster's version via CloudFormation, modify the Version property and update the stack.
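
For example, an existing stack could be moved to a newer version with a parameter override. This is a sketch; the stack and template file names are hypothetical:

# Update the stack, overriding only the Kubernetes version parameter
aws cloudformation deploy \
  --stack-name my-eks-cluster-stack \
  --template-file eks-cluster.yaml \
  --parameter-overrides KubernetesVersion=1.31 \
  --capabilities CAPABILITY_IAM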

Terraform Configuration
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
}

variable "kubernetes_version" {
description = "Kubernetes version for the EKS cluster"
type = string
default = "1.31"

validation {
condition = can(regex("^1\\.(2[89]|3[01])$", var.kubernetes_version))
error_message = "Kubernetes version must be a supported EKS version (1.28, 1.29, 1.30, or 1.31)."
}
}

variable "subnet_ids" {
description = "List of subnet IDs for the EKS cluster"
type = list(string)
}

variable "vpc_id" {
description = "VPC ID where the cluster will be deployed"
type = string
}

data "aws_iam_policy_document" "eks_assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["eks.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}

resource "aws_iam_role" "eks_cluster" {
name = "${var.cluster_name}-eks-cluster-role"
assume_role_policy = data.aws_iam_policy_document.eks_assume_role.json
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster.name
}

resource "aws_eks_cluster" "main" {
name = var.cluster_name
version = var.kubernetes_version
role_arn = aws_iam_role.eks_cluster.arn

vpc_config {
subnet_ids = var.subnet_ids
endpoint_public_access = true
endpoint_private_access = true
}

depends_on = [
aws_iam_role_policy_attachment.eks_cluster_policy
]
}

output "cluster_name" {
description = "EKS cluster name"
value = aws_eks_cluster.main.name
}

output "cluster_endpoint" {
description = "EKS cluster endpoint"
value = aws_eks_cluster.main.endpoint
}

output "cluster_version" {
description = "EKS cluster Kubernetes version"
value = aws_eks_cluster.main.version
}

To upgrade an existing cluster, update the version argument and run terraform apply.
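
For example, the version can be overridden on the command line without editing the file. Review the plan output before applying:

# Preview and apply the version change
terraform plan -var="kubernetes_version=1.31"
terraform apply -var="kubernetes_version=1.31"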

Verification

After the upgrade completes, verify your cluster is running the expected version:

  1. In the AWS Console: Navigate to your EKS cluster and check the Kubernetes version field shows the updated version
  2. Confirm the cluster status shows Active

CLI verification commands

# Check the cluster version
aws eks describe-cluster \
  --name <your-cluster-name> \
  --region us-east-1 \
  --query 'cluster.{Name:name,Version:version,Status:status}' \
  --output table

# Verify the cluster is reachable and nodes report the expected version
kubectl version
kubectl get nodes
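
You may also want to confirm that node groups and add-ons report the expected versions after their updates. A sketch, assuming at least one managed node group and managed add-ons:

# Check managed node group versions
for ng in $(aws eks list-nodegroups --cluster-name <your-cluster-name> \
    --region us-east-1 --query 'nodegroups[]' --output text); do
  aws eks describe-nodegroup --cluster-name <your-cluster-name> --nodegroup-name "$ng" \
    --region us-east-1 --query 'nodegroup.{Name:nodegroupName,Version:version,Status:status}' \
    --output table
done

# List managed add-ons installed on the cluster
aws eks list-addons --cluster-name <your-cluster-name> --region us-east-1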

Notes

  • One version at a time: EKS only supports upgrading one minor version at a time. If you need to go from 1.27 to 1.30, you must upgrade through 1.28 and 1.29 first (see the sketch after this list).
  • Control plane first: Always upgrade the control plane before updating node groups.
  • Add-on compatibility: After upgrading the control plane, update your EKS add-ons (CoreDNS, kube-proxy, Amazon VPC CNI) to versions compatible with the new Kubernetes version.
  • Node groups: Managed node groups must be updated separately after the control plane upgrade.
  • Downtime: Control plane upgrades are rolling and should not cause downtime, but plan for potential brief API unavailability.
  • Version support window: AWS typically supports Kubernetes versions for 14 months after release. Plan regular upgrades to stay current.
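
As an illustration of the one-version-at-a-time rule, a multi-version upgrade can be scripted as a sequence of updates, each waited on before the next hop. This is a rough sketch with a hypothetical cluster name; it does not handle failed updates, and node groups and add-ons still need to be updated along the way:

# Hypothetical example: walk a cluster from 1.27 up to 1.30 one minor version at a time
CLUSTER=my-cluster
REGION=us-east-1
for version in 1.28 1.29 1.30; do
  update_id=$(aws eks update-cluster-version --name "$CLUSTER" --kubernetes-version "$version" \
    --region "$REGION" --query 'update.id' --output text)
  # Poll until this update reports Successful before starting the next hop
  until [ "$(aws eks describe-update --name "$CLUSTER" --update-id "$update_id" \
      --region "$REGION" --query 'update.status' --output text)" = "Successful" ]; do
    sleep 60
  done
done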