Release EKS Node Secondary IPs [Production Fix]

NIRAV SHAH
5 min read · Jul 11, 2023

Amazon Elastic Kubernetes Service (EKS) has revolutionized the way organizations deploy and manage their containerized applications. As EKS continues to evolve, Amazon Web Services (AWS) is constantly working to enhance its features and optimize resource allocation. In this article, we will explore three Amazon VPC CNI configuration variables, WARM_IP_TARGET, MINIMUM_IP_TARGET, and WARM_ENI_TARGET, which improve the efficiency and performance of EKS by fine-tuning IP and ENI allocation strategies.

Problem Statement:

One morning we found that new pod allocation was failing because no free IPs were left in our subnet. We saw the following error:

2023-07-11T09:07:53.053Z ERROR controller.provisioner launching machine, 
creating cloud provider instance, creating instance, with fleet error(s),
InsufficientFreeAddressesInSubnet: There are not enough free addresses in
subnet 'subnet-06d6a8xxxxxxxx' to satisfy the requested number of instances.
{"commit": "698f22f-dirty"}

We made the following observations:

Total number of nodes: 72
EKS VPC contains 2 subnets [ 1 in each zone ]
IP range per subnet: 10.x.x.x/21 [ ~2,000 IPs per subnet ]
Number of pods in production: 900
IP utilisation = (900 + 72) / ~4,000 ≈ 25% of IPs in use

Now the question arises: what are the remaining ~75% of IPs doing?

Analysis:

We found the answer by checking the secondary IPs assigned to the EKS nodes. Because some of our pods are heavy, we schedule one such pod per node, and the default VPC CNI configuration allocates secondary IPs based on the instance type rather than on how many pods actually run on the node.
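
To see the scale of the problem, assume for illustration an m5.4xlarge-class node, which supports up to 8 ENIs with 30 IPv4 addresses each (our actual instance types varied, so treat the numbers as a sketch):

# Rough per-node IP accounting under the default VPC CNI settings
# (WARM_ENI_TARGET = 1). Instance limits here are an illustrative assumption.
locals {
  ips_per_eni   = 30 # assumed m5.4xlarge-class limit
  pods_per_node = 1  # a node dedicated to one heavy pod

  # With WARM_ENI_TARGET = 1 the CNI keeps a full spare ENI attached, so as
  # soon as the first pod uses an IP on the primary ENI a second ENI is added.
  # The node then reserves roughly two ENIs' worth of subnet addresses while
  # serving a single pod.
  default_ips_held = 2 * local.ips_per_eni # ~60 addresses per node
}

Multiplied across dozens of nodes, this default warm pool accounts for most of the "missing" ~75% of subnet addresses.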

Solution:

Tuning the vpc-cni add-on parameters resolved our problem. We added the WARM_ENI_TARGET, WARM_IP_TARGET, and MINIMUM_IP_TARGET environment variables to the aws-node DaemonSet through our Terraform configuration.

# Patch environment variables on the aws-node (VPC CNI) DaemonSet
resource "kubernetes_env" "vpc-cni" {
  count = var.kubernetes_addon_enable_vpc_cni_driver == true ? 1 : 0

  api_version = "apps/v1"
  kind        = "DaemonSet"
  container   = "aws-node"

  metadata {
    name      = "aws-node"
    namespace = "kube-system"
  }

  env {
    name  = "ENABLE_POD_ENI"
    value = "true"
  }

  env {
    name  = "POD_SECURITY_GROUP_ENFORCING_MODE"
    value = "standard"
  }

  env {
    name  = "AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG"
    value = "false"
  }

  env {
    name  = "AWS_VPC_K8S_CNI_EXTERNALSNAT"
    value = "true"
  }

  # Do not keep a whole spare ENI warm on every node
  env {
    name  = "WARM_ENI_TARGET"
    value = "0"
  }

  # Keep 8 unused secondary IPs available per node
  env {
    name  = "WARM_IP_TARGET"
    value = "8"
  }

  # Always pre-allocate at least 15 IPs per node
  env {
    name  = "MINIMUM_IP_TARGET"
    value = "15"
  }

  # Overwrite env values managed outside Terraform (e.g. by the EKS add-on)
  force = true
}
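
A note on the values: WARM_ENI_TARGET = 0 stops the CNI from keeping a whole spare ENI per node, WARM_IP_TARGET = 8 keeps eight unused secondary IPs on each node so pod churn does not wait on EC2 API calls, and MINIMUM_IP_TARGET = 15 guarantees a small pre-allocated floor on every node. These numbers fit our pod density; clusters with faster scale-out may want larger targets. The kubernetes_env resource patches the existing aws-node DaemonSet in kube-system, and force = true lets Terraform overwrite environment variables that were set outside of it (for example by the EKS add-on). After terraform apply, the new values can be confirmed by describing the aws-node DaemonSet.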

Effect:

After applying the change, we observed a clear increase in the number of available IP addresses in the subnet.

Available IPs in the test environment before terraform apply (screenshot)

Available IPs in the test environment after terraform apply (screenshot)

Overall, ~200 IPs were released [ this is a test environment ]. In production, we expect to free approximately 1,000 IPs.

Understanding WARM_IP_TARGET:
The WARM_IP_TARGET variable is a valuable knob for optimizing how secondary IP addresses are allocated within the cluster. IP addresses are finite resources, and managing them effectively is crucial for the availability and scalability of applications. WARM_IP_TARGET defines the number of free secondary IP addresses the CNI keeps available on each node, ready to be handed to new pods. This enables better resource planning and ensures that IP addresses are readily available when needed, without holding far more addresses than a node will ever use.

MINIMUM_IP_TARGET for Enhanced IP Utilization:
In addition to WARM_IP_TARGET, the VPC CNI supports the MINIMUM_IP_TARGET variable. It specifies the minimum number of secondary IP addresses allocated to each node, regardless of how many pods are running there. Setting a sensible floor means that freshly started nodes already hold a small pool of addresses, so the first pods scheduled onto them do not have to wait for EC2 API calls. Combined with WARM_IP_TARGET, it keeps the per-node pool small but never empty, promoting efficient resource utilization and minimizing potential service disruptions.

Maximizing Efficiency with WARM_ENI_TARGET:
The third variable, WARM_ENI_TARGET, controls Elastic Network Interface (ENI) allocation in EKS clusters. ENIs carry the secondary IPs used by pods, and by default the CNI keeps one full spare ENI attached to every node, which is where most of our unused addresses went. WARM_ENI_TARGET defines the number of free ENIs the CNI keeps attached to each node; setting it to 0 and relying on WARM_IP_TARGET instead stops the CNI from reserving an entire ENI's worth of addresses while still leaving enough headroom to scale pods without network-related bottlenecks.
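
Putting the three together: with WARM_ENI_TARGET set to 0, ipamd sizes each node's pool at roughly max(MINIMUM_IP_TARGET, running pods + WARM_IP_TARGET). A small sketch with the values we chose (illustrative; actual allocation still happens in whole-IP steps and is bounded by the instance-type limits):

# Illustrative per-node pool sizes with WARM_ENI_TARGET = 0,
# WARM_IP_TARGET = 8 and MINIMUM_IP_TARGET = 15.
locals {
  ips_on_idle_node   = max(15, 0 + 8)  # 15 -- the MINIMUM_IP_TARGET floor applies
  ips_on_10_pod_node = max(15, 10 + 8) # 18 -- pods + WARM_IP_TARGET takes over
  ips_on_20_pod_node = max(15, 20 + 8) # 28
}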

Benefits and Use Cases:
Implementing the WARM_IP_TARGET, MINIMUM_IP_TARGET, and WARM_ENI_TARGET variables in your EKS clusters can yield several benefits. These include:

a. Improved Scalability: Fine-tuning IP and ENI allocation strategies ensures that resources are readily available for scaling applications, allowing clusters to accommodate increased workloads seamlessly.

b. Enhanced Reliability: By setting appropriate thresholds for IP and ENI allocation, you can prevent resource depletion and maintain a robust infrastructure that minimizes service disruptions and downtime.

c. Efficient Resource Utilization: The variables enable optimal utilization of IP addresses and ENIs, ensuring that resources are allocated as needed and reducing wastage.

d. Simplified Resource Planning: With clear control over IP and ENI allocation, organizations can better plan for future resource requirements, avoiding unexpected shortages and capacity constraints.

Conclusion:
As AWS continues to enhance the capabilities of EKS, the WARM_IP_TARGET, MINIMUM_IP_TARGET, and WARM_ENI_TARGET settings give users fine-grained control over IP and ENI allocation, leading to improved scalability, enhanced reliability, and efficient resource utilization. Whether you are running a small application or a large-scale production system, tuning these settings will help you maximize the benefits of EKS and ensure a seamless experience for your containerized workloads.
