Kubernetes: Add a Node Pool on GKE

NIRAV SHAH
2 min read · Mar 26, 2020

In a Kubernetes environment you can scale vertically or horizontally. Horizontal scaling is easy: add nodes, or keep the node pool in an autoscaling group. However, there are scenarios where your application really does need a bigger node. For example, if the application is memory intensive and the default scaling is CPU based, your pods will frequently run out of memory, and you also waste money because expensive CPUs sit underutilized. So the right node pool size is important. Fortunately, adding a new node pool and removing the old one is a simple process in the Kubernetes world. We will follow the steps below.
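
For reference, before choosing the machine type for the new pool, you can compare the memory-to-vCPU ratio of a few candidates. A minimal check, assuming the Cloud SDK is installed and asia-northeast1-a is a zone available to your project (n1-highmem-4 is just an example alternative):

# Compare a balanced machine type with a memory-optimized one
gcloud compute machine-types describe n1-standard-4 --zone asia-northeast1-a \
--format="value(name,guestCpus,memoryMb)"
gcloud compute machine-types describe n1-highmem-4 --zone asia-northeast1-a \
--format="value(name,guestCpus,memoryMb)"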

1. Add a New Node Pool

Note: Project: My-user-project, Cluster Name: userprod, Newer Nodepool Name: user-perf

gcloud beta container node-pools create "user-perf" \
--project "My-user-project" \
--cluster "userprod" --region "asia-northeast1" \
--node-version "1.13.12-gke.25" \
--machine-type "n1-standard-4" \
--image-type "COS" \
--disk-type "pd-standard" \
--disk-size "100" \
--metadata disable-legacy-endpoints=true \
--scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
--num-nodes "1" \
--enable-autoscaling \
--min-nodes "1" --max-nodes "8" --enable-autoupgrade \
--enable-autorepair \
--max-pods-per-node "8" \
--tags "k8s"

2. Cordon the Older Node Pool

Note: Older Nodepool Name: userprod-np

for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=userprod-np -o=name); do
  echo "$node"
  kubectl cordon "$node"
  sleepy 200
done
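
Once the loop finishes, the old nodes should report SchedulingDisabled, which keeps new pods off them while existing pods continue to run:

# Old nodes should show STATUS "Ready,SchedulingDisabled"
kubectl get nodes -l cloud.google.com/gke-nodepool=userprod-np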

3. Drain the Older Node Pool (optional)

Note: Older Nodepool Name: userprod-np

for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=userprod-np -o=name); do
  echo "$node"
  kubectl drain "$node"
  sleepy 200
done
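
A plain drain will stop if it meets DaemonSet-managed pods or pods using local storage, which is why the force variant in the next step exists. To see what is still running on one of the old nodes before forcing, you can filter pods by node name (the node name below is only an example):

# List pods still scheduled on a given old node (example node name)
kubectl get pods --all-namespaces -o wide \
--field-selector spec.nodeName=gke-userprod-userprod-np-example-node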

4. Force-Drain the Older Node Pool

Note: Older Nodepool Name: userprod-np

for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=userprod-np -o=name); do
  echo "$node"
  kubectl drain --force --ignore-daemonsets --delete-local-data --grace-period=10 "$node"
  sleepy 200
done
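
Note that drain evicts pods through the eviction API, and an eviction is blocked while it would violate a PodDisruptionBudget; if the loop seems stuck on a node, reviewing the budgets in the cluster is a good first check:

# Review disruption budgets that may slow down or block evictions
kubectl get pdb --all-namespaces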

5. Monitor the Node Pools

Note: Older Nodepool Name: userprod-np, Newer Nodepool Name: user-perf

kubectl get pod -o wide
kubectl top nodes
kubectl get nodes
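
If the cluster has more than these two node pools, filtering the node list by the gke-nodepool label (already used in the loops above) makes the migration easier to follow:

# Watch the old pool empty out and the new pool fill up
kubectl get nodes -l cloud.google.com/gke-nodepool=userprod-np
kubectl get nodes -l cloud.google.com/gke-nodepool=user-perf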

6. Remove the Old Node Pool

Note: Older Nodepool Name: userprod-np

# Delete the old node pool
gcloud container node-pools delete userprod-np \
--cluster userprod \
--region asia-northeast1 \
--project My-user-project

# Get the list of node pools
gcloud container node-pools list \
--cluster userprod \
--region asia-northeast1 \
--project My-user-project

Note: Set up the sleepy timer

In case you don't want to set up the sleepy timer, replace it with the plain sleep command in the loops above.

vi ~/.bash_aliases

function sleepy() {
  seconds=${1}; date1=$((`date +%s` + $seconds));
  while [ "$date1" -ge `date +%s` ]; do
    echo -ne "$(date -u --date @$(($date1 - `date +%s` )) +%H:%M:%S)\r";
  done
}
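
After adding the function, reload the aliases file in the current shell; sleepy then behaves like sleep, but prints a live countdown while it waits. For example:

source ~/.bash_aliases
sleepy 10    # counts down from 00:00:10 to 00:00:00, then returns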


NIRAV SHAH

Working as a Cloud Architect & software enthusiast