
Ensure EKS Clusters Have Private Endpoint Enabled and Public Access Disabled

Overview

This check verifies that your Amazon EKS (Elastic Kubernetes Service) cluster API server endpoints are configured securely. By default, EKS clusters are created with a public API endpoint accessible from the internet. This check flags clusters that have public access enabled and recommends switching to a private-only endpoint configuration.

Risk

When your EKS cluster API endpoint is publicly accessible:

  • Exposure to attacks: The Kubernetes API server is reachable from any IP address on the internet, increasing the attack surface
  • Credential-based attacks: If IAM credentials or Kubernetes tokens are compromised, attackers can access your cluster from anywhere
  • Reconnaissance: Attackers can probe your API endpoint to gather information about your cluster
  • Compliance violations: Many security frameworks (CIS, SOC 2, PCI-DSS) require private network access for management interfaces

Severity: High

Remediation Steps

Prerequisites

  • Access to the AWS Console with permissions to modify EKS clusters
  • A way to access your cluster from within the VPC after disabling public access (bastion host, VPN, AWS Cloud9, or Direct Connect)

Important: Before you begin

Do not disable public access until you have verified:

  1. Private endpoint connectivity: Ensure you have a way to reach the cluster from within the VPC (bastion host, VPN, Direct Connect, or Cloud9)
  2. IAM access mapping: Your IAM users/roles are mapped to Kubernetes RBAC so you won't be locked out
  3. VPC DNS settings: Your VPC has enableDnsHostnames and enableDnsSupport set to true
  4. Node connectivity: Your worker nodes can reach the private endpoint

If you lose access after disabling public access, you'll need to re-enable it to regain control.
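
The VPC DNS prerequisite (item 3) can be checked ahead of time from the CLI. Below is a minimal sketch, not a definitive script: `VPC_ID` and `REGION` are placeholders for your environment, and the helper function simply pass/fails on the attribute value that `aws ec2 describe-vpc-attribute` prints in text mode:

```shell
check_dns_attribute() {
  # Pass/fail a single DNS attribute value ("True"/"False") as printed by
  # `aws ec2 describe-vpc-attribute ... --output text`.
  local name="$1" value="$2"
  if [ "$value" = "True" ]; then
    echo "OK: $name is enabled"
  else
    echo "FAIL: $name must be enabled before disabling public access"
    return 1
  fi
}

# Live usage (requires AWS credentials; VPC_ID and REGION are placeholders):
# VPC_ID=vpc-12345678 REGION=us-east-1
# check_dns_attribute enableDnsSupport "$(aws ec2 describe-vpc-attribute \
#   --region "$REGION" --vpc-id "$VPC_ID" --attribute enableDnsSupport \
#   --query 'EnableDnsSupport.Value' --output text)"
# check_dns_attribute enableDnsHostnames "$(aws ec2 describe-vpc-attribute \
#   --region "$REGION" --vpc-id "$VPC_ID" --attribute enableDnsHostnames \
#   --query 'EnableDnsHostnames.Value' --output text)"
```

Both checks should report OK before you proceed to disable public access.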

AWS Console Method

  1. Open the Amazon EKS console
  2. Click on Clusters in the left navigation
  3. Select the cluster you want to modify
  4. Click the Networking tab
  5. In the Cluster endpoint access section, click Manage
  6. Configure the endpoint access:
    • Set Private access to Enabled
    • Set Public access to Disabled
  7. Click Save changes
  8. Wait for the cluster status to return to Active (this typically takes a few minutes)

AWS CLI

To disable public access and enable private access for your EKS cluster:

aws eks update-cluster-config \
--region us-east-1 \
--name <your-cluster-name> \
--resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

Replace <your-cluster-name> with your actual cluster name.

The command returns an update ID you can use to track progress:

aws eks describe-update \
--region us-east-1 \
--name <your-cluster-name> \
--update-id <update-id>

Wait for the update status to change from InProgress to Successful.
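
The wait can be scripted. A sketch, assuming the terminal EKS update statuses are Successful, Failed, and Cancelled; the commented loop shows how the helper would drive describe-update (the cluster name and update ID remain placeholders):

```shell
update_finished() {
  # Returns success once an EKS update status string is terminal.
  case "$1" in
    Successful|Failed|Cancelled) return 0 ;;
    *) return 1 ;;  # InProgress (or empty) means keep polling
  esac
}

# Live polling loop (requires AWS credentials):
# status=InProgress
# until update_finished "$status"; do
#   sleep 15
#   status=$(aws eks describe-update --region us-east-1 \
#     --name <your-cluster-name> --update-id <update-id> \
#     --query 'update.status' --output text)
#   echo "status: $status"
# done
```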

CloudFormation

Update your CloudFormation template to configure private-only endpoint access:

AWSTemplateFormatVersion: '2010-09-09'
Description: EKS Cluster with private endpoint access only

Parameters:
  ClusterName:
    Type: String
    Description: Name of the EKS cluster

  VpcId:
    Type: AWS::EC2::VPC::Id
    Description: VPC ID for the EKS cluster

  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
    Description: Subnet IDs for the EKS cluster (minimum 2 in different AZs)

Resources:
  EKSClusterRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: eks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

  EKSCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: !Ref ClusterName
      RoleArn: !GetAtt EKSClusterRole.Arn
      ResourcesVpcConfig:
        SubnetIds: !Ref SubnetIds
        EndpointPublicAccess: false
        EndpointPrivateAccess: true
      Version: '1.29'

Outputs:
  ClusterEndpoint:
    Description: EKS cluster endpoint (private)
    Value: !GetAtt EKSCluster.Endpoint

  ClusterArn:
    Description: EKS cluster ARN
    Value: !GetAtt EKSCluster.Arn

Deploy or update the stack:

aws cloudformation deploy \
--region us-east-1 \
--template-file eks-private-endpoint.yaml \
--stack-name my-eks-cluster \
--parameter-overrides \
ClusterName=my-cluster \
VpcId=vpc-12345678 \
SubnetIds=subnet-11111111,subnet-22222222 \
--capabilities CAPABILITY_IAM

Terraform

Configure your EKS cluster with private-only endpoint access:

resource "aws_eks_cluster" "main" {
  name     = var.cluster_name
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.29"

  vpc_config {
    subnet_ids              = var.subnet_ids
    endpoint_private_access = true
    endpoint_public_access  = false

    # Optional: attach additional security groups to the cluster, e.g.
    # security_group_ids = [aws_security_group.eks_cluster.id]
  }

  depends_on = [
    aws_iam_role_policy_attachment.eks_cluster_policy
  ]
}

resource "aws_iam_role" "eks_cluster" {
  name = "${var.cluster_name}-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster.name
}

variable "cluster_name" {
  description = "Name of the EKS cluster"
  type        = string
}

variable "subnet_ids" {
  description = "List of subnet IDs for the EKS cluster"
  type        = list(string)
}

Apply the configuration:

terraform apply

Alternative: Restrict public access with CIDR blocks

If you cannot fully disable public access (e.g., no VPN or bastion host), you can restrict access to specific IP addresses as an interim measure:

AWS Console:

  1. In the Cluster endpoint access settings, keep Public access enabled
  2. Click Advanced settings
  3. Remove 0.0.0.0/0 and add your specific CIDR blocks (e.g., your office IP range)

AWS CLI:

aws eks update-cluster-config \
--region us-east-1 \
--name <your-cluster-name> \
--resources-vpc-config '{"endpointPublicAccess":true,"endpointPrivateAccess":true,"publicAccessCidrs":["203.0.113.0/24","198.51.100.0/24"]}'

Note: This is less secure than disabling public access entirely, but it's better than allowing access from 0.0.0.0/0.
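
If you maintain the allow-list in a script, a small helper can compose the `publicAccessCidrs` JSON array from individual IP addresses. This is a hypothetical convenience function, not part of the AWS CLI:

```shell
# to_cidrs: build a JSON array of /32 CIDRs from one or more IP addresses.
# Hypothetical helper for composing the publicAccessCidrs value.
to_cidrs() {
  local out="" ip
  for ip in "$@"; do
    out="${out:+$out,}\"${ip}/32\""
  done
  echo "[$out]"
}

# Example: to_cidrs 203.0.113.10 198.51.100.7
# → ["203.0.113.10/32","198.51.100.7/32"]
```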

Verification

After making changes, verify your configuration:

  1. In the AWS Console, navigate to your EKS cluster
  2. Click the Networking tab
  3. Confirm that:
    • Private access shows Enabled
    • Public access shows Disabled

CLI verification

aws eks describe-cluster \
--region us-east-1 \
--name <your-cluster-name> \
--query 'cluster.resourcesVpcConfig.{PublicAccess:endpointPublicAccess,PrivateAccess:endpointPrivateAccess}'

Expected output:

{
"PublicAccess": false,
"PrivateAccess": true
}
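
To sweep every cluster in a region rather than checking one at a time, the same query can drive a small audit loop. A sketch; note that `--output text` renders the booleans as `True`/`False` rather than the lowercase JSON values shown above:

```shell
# endpoint_verdict: classify a cluster from its two endpoint flags as printed
# by `aws eks describe-cluster ... --output text` (True/False).
endpoint_verdict() {
  local public="$1" private="$2"
  if [ "$public" = "False" ] && [ "$private" = "True" ]; then
    echo "compliant"
  else
    echo "non-compliant"
  fi
}

# Live audit across a region (requires AWS credentials; region is an example):
# for name in $(aws eks list-clusters --region us-east-1 --query 'clusters[]' --output text); do
#   flags=$(aws eks describe-cluster --region us-east-1 --name "$name" \
#     --query '[cluster.resourcesVpcConfig.endpointPublicAccess,cluster.resourcesVpcConfig.endpointPrivateAccess]' \
#     --output text)
#   echo "$name: $(endpoint_verdict $flags)"
# done
```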


Notes

  • Update is non-disruptive: Your cluster continues to function during the endpoint configuration change
  • Worker nodes: Existing worker nodes will automatically use the private endpoint after the change
  • DNS resolution: The private endpoint is resolved via a Route 53 private hosted zone that AWS manages automatically
  • Hybrid nodes: If you're using EKS hybrid nodes, do not enable both the public and private endpoints simultaneously, because hybrid nodes resolve the API endpoint to its public IP addresses
  • Recovery: If you accidentally lock yourself out, you can re-enable public access from the AWS Console using the root user or another IAM principal with sufficient permissions