Network IP Ranges of a Private Kubernetes Cluster in Google Cloud Platform

Rajanarayanan Thottuvaikkatumana
8 min read · Mar 24, 2019

In a secure, private Kubernetes (K8S) cluster in Google Cloud Platform (GCP), it is important to make sure that you are using private IPs and right-sized IP ranges for your current and future scaling needs. A bad network design is very difficult to fix, especially after services have started running in production. The story Securing Your Kubernetes Cluster in Google Cloud Platform covered the basics of the setup; detailed coverage of the IP address ranges in the K8S cluster deserves a story of its own, and this one tries to achieve that. Before jumping into the matter, it is worth revisiting the K8S fundamentals, and the K8S documentation provides plenty of material for that. In addition, the story Kubernetes 101: Pods, Nodes, Containers, and Clusters by Daniel Sanche serves as a quick refresher on the subject.

Private IP Addresses

Private IP addresses cannot be used for any kind of routing on the public Internet. The Network Working Group's RFC 1918 gives all the details of the address ranges that private networks can use. According to RFC 1918, the following are the private IP address ranges.

10.0.0.0    - 10.255.255.255  (10/8 prefix)
172.16.0.0  - 172.31.255.255  (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
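
If you want to check programmatically whether an address falls inside one of these blocks, a small utility such as grepcidr (assuming you have it installed) does the job:

# Prints the address only if it falls inside one of the RFC 1918 blocks
$ echo "10.1.0.6" | grepcidr "10.0.0.0/8,172.16.0.0/12,192.168.0.0/16"
10.1.0.6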

A Primer on K8S Inter-Pod Networking

This section gives a high-level, 50,000-foot view of K8S inter-pod networking. For detailed coverage of this topic, you may refer to the K8S documentation on cluster networking.

  1. All the pods communicate with other pods through a NAT-less network.
  2. A pod's eth0 interface is one end of a veth pair whose other end (vethX) is attached to the bridge of the node. Pod-to-pod communication within a node goes through this bridge, and the packets never leave the node.
  3. Communication between pods on different nodes has to go through the eth0 interface of the node. The bridge passes packets that have to leave the node out through eth0, and the routing tables handle the packet routing rules.
  4. Services in K8S get stable IP addresses and ports. K8S performs service discovery to identify the pods that run the actual services. Service IP addresses are virtual: the K8S API Server tracks the backing pods as Endpoints objects, and kube-proxy, running on each and every node, makes the virtual IPs reachable.
  5. Any new service creation is notified to all the kube-proxy components running on the nodes, and each kube-proxy makes the service addressable within its node.
  6. When a client connects to a service IP, the iptables rules that kube-proxy maintains on that node redirect the connection to a randomly selected pod running that service (see the sketch after this list).
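
To see this machinery on an actual node, you can inspect the NAT rules that kube-proxy programs there. The sketch below assumes kube-proxy is running in its default iptables mode; the KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* chain names are the ones kube-proxy creates in that mode.

# Service virtual IPs never appear on any network interface; they exist
# only as destination NAT rules. KUBE-SERVICES is the entry-point chain.
$ sudo iptables -t nat -L KUBE-SERVICES | head
# Each service has a KUBE-SVC-* chain that load-balances across
# KUBE-SEP-* ("service endpoint") chains, one per backing pod.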

Private K8S Cluster IP Addresses

It is advisable to have your K8S cluster in a dedicated custom Virtual Private Cloud (VPC). In this custom VPC, you need IP addresses for the following resources.

1. General purpose VMs, K8S nodes etc.
2. Pods created by K8S
3. Services created by K8S
4. K8S API Server

It is important to have non-overlapping IP address ranges for all of the above resources. You should identify your scaling requirements very early on, before even architecting and designing your K8S infrastructure; the IP address range selection depends a lot on those requirements. All the configuration related to GCP's infrastructure takes IP address ranges in CIDR notation. When you are coming up with the IP address ranges for your VPC and K8S infrastructure, it is advisable to use a CIDR calculator so that you do not overlook things or make mistakes in the IP address calculations. There are many such tools available; CIDR.xyz was used while writing this story. The Terraform script captures all the details, including the IP address ranges, of the private K8S cluster discussed here.
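
The arithmetic behind these calculators is simple: a /N prefix leaves 32 - N host bits, which gives 2^(32-N) addresses. You can verify it in plain shell:

# Number of addresses in a /20 block: 2^(32 - 20) = 4096
$ echo $(( 2 ** (32 - 20) ))
4096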

The IP range 10.1.0.0/16 is chosen for the general purpose VMs such as K8S nodes, bastion hosts, etc. There is a possibility of ~65,536 VMs (discounting the reserved IPs) that you can create with this IP range.

[Visualization of 10.1.0.0/16 using http://cidr.xyz/]

The IP range 10.2.0.0/20 is chosen for the pods created by K8S. There is a possibility of ~4,096 pods (discounting the reserved IPs) that K8S can create with this IP range.

[Visualization of 10.2.0.0/20 using http://cidr.xyz/]

The IP range 192.168.0.0/24 is chosen for the services created by K8S. There is a possibility of ~256 services (discounting the reserved IPs) that K8S can create with this IP range.

[Visualization of 192.168.0.0/24 using http://cidr.xyz/]

The IP range 172.16.0.0/28 is chosen for the K8S API Server. There is a possibility of ~16 IPs (discounting the reserved IPs) that K8S can use for its API URL.

[Visualization of 172.16.0.0/28 using http://cidr.xyz/]
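
The Terraform script mentioned above provisions all of this, but to illustrate how the four ranges fit together, an equivalent gcloud sketch might look like the following. The network, subnetwork, and cluster names are the ones appearing in the validation output below; the secondary-range names pods and services are illustrative.

# Custom-mode VPC with a subnet whose primary range serves the nodes
# and whose secondary ranges serve the pods and the services
$ gcloud compute networks create mservice-network --subnet-mode custom
$ gcloud compute networks subnets create mservice-subnetwork \
    --network mservice-network --region europe-west2 \
    --range 10.1.0.0/16 \
    --secondary-range pods=10.2.0.0/20,services=192.168.0.0/24

# Private, VPC-native cluster; the /28 is handed to the managed
# K8S API Server, which lives in a Google-managed peered VPC
$ gcloud container clusters create mservice-dev-cluster \
    --region europe-west2 \
    --network mservice-network --subnetwork mservice-subnetwork \
    --enable-ip-alias \
    --cluster-secondary-range-name pods \
    --services-secondary-range-name services \
    --enable-private-nodes --enable-private-endpoint \
    --master-ipv4-cidr 172.16.0.0/28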

Validation

It is advisable to validate that your K8S cluster indeed has the configured IP address ranges. This is also a requirement when you have conversations with your security team and need to evidence what you claim. The following validations confirm that all the IP address ranges in this K8S ecosystem are really private.

VPC

Use the following commands to make sure that your VPC has the correct IP address range. In addition to the VPC, you also need to make sure that you have the correct VPC peering in your infrastructure. The VPC peering that you see below is created by GKE so that the K8S API Server can live in a separate, Google-managed VPC; for the K8S nodes and pods to talk to the K8S API Server, there has to be a peering established with your VPC.

$ gcloud compute networks subnets list | grep europe-west2
default              europe-west2  default           10.154.0.0/20
mservice-subnetwork  europe-west2  mservice-network  10.1.0.0/16
$ gcloud compute networks peerings list
NAME                                     NETWORK           PEER_PROJECT                PEER_NETWORK                            AUTO_CREATE_ROUTES  STATE   STATE_DETAILS
gke-7884e5a1eff6b98b5d90-517b-450a-peer  mservice-network  gke-prod-europe-west2-f88a  gke-7884e5a1eff6b98b5d90-517b-c02f-net  True                ACTIVE  [2019-03-24T02:28:40.141-07:00]: Connected.

K8S API Server

Use the following commands to make sure that your K8S API Server has the correct IP address, which is also in the private IP address range.

$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.16.0.2
  name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
contexts:
- context:
    cluster: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
    user: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
  name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
current-context: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
kind: Config
preferences: {}
users:
- name: gke_YYYY-XXXX_europe-west2_mservice-dev-cluster
  user:
    auth-provider:
      config:
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
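
You can also ask GKE directly for the private endpoint and the master range; both values should fall inside 172.16.0.0/28. This is a sketch using the field paths exposed by the describe command:

$ gcloud container clusters describe mservice-dev-cluster --region europe-west2 \
    --format 'value(privateClusterConfig.privateEndpoint, privateClusterConfig.masterIpv4CidrBlock)'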

K8S Nodes

Use the following command to make sure that your K8S nodes have the correct IP addresses, which are also in the private IP address range. Note that there is no external IP address for any of the nodes; because the EXTERNAL-IP column is empty, awk shifts the next field (the OS image, Container-Optimized) into its place.

$ kubectl get nodes -o wide | awk '{print $1, $6, $7}'
NAME INTERNAL-IP EXTERNAL-IP
gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s 10.1.0.6 Container-Optimized
gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6 10.1.0.7 Container-Optimized
gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32 10.1.0.8 Container-Optimized
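
Because empty columns shift positions under awk, a more robust check is to query the address types explicitly. The following sketch uses kubectl's jsonpath output; for a private cluster, the field after each node name should be empty:

# Lists each node with its ExternalIP addresses, if any
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'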

K8S Pods

Use the following command to make sure that your K8S pods have the correct IP addresses, which are also in the private IP address range. In the result below, you can see that a few kube-system pods have IP addresses in the VPC subnet's primary IP range, 10.1.0.0/16, even though we explicitly declared 10.2.0.0/20 as the cluster's secondary range when defining the IP allocation policy. This is because those pods run with hostNetwork: true and therefore share their node's IP address instead of getting one from the pod range; note that 10.1.0.6, 10.1.0.7, and 10.1.0.8 are exactly the node IPs listed above.

$ kubectl get pods -o wide --all-namespaces | awk '{print $1, $7, $8}'
NAMESPACE IP NODE
istio-system 10.2.2.12 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.6 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.14 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.20 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.15 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.16 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.9 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.21 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.17 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
istio-system 10.2.2.13 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.8 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.7 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.18 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.1.3 gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6
kube-system 10.2.0.2 gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s
kube-system 10.2.2.2 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.3 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.10 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.4 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.1.0.6 gke-mservice-dev-clu-mservice-node-po-528e24d4-d46s
kube-system 10.1.0.7 gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6
kube-system 10.1.0.8 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.5 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.2.11 gke-mservice-dev-clu-mservice-node-po-bc49f52a-vp32
kube-system 10.2.1.2 gke-mservice-dev-clu-mservice-node-po-5628ab4e-60v6
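
To confirm that the 10.1.x.x entries are exactly the hostNetwork pods, you can print that field explicitly. This is a sketch using kubectl's custom-columns output:

# Pods reporting node IPs (10.1.x.x) should all show HOSTNET as true
$ kubectl get pods -n kube-system \
    -o custom-columns='NAME:.metadata.name,HOSTNET:.spec.hostNetwork,IP:.status.podIP'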

K8S Services

Use the following command to make sure that your K8S services have the correct IP addresses, which are also in the private IP address range.

$ kubectl get services
NAME         TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP  192.168.0.1   <none>        443/TCP   1h
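
As a final cross-check, GKE reports the pod and service ranges it actually allocated. This is a sketch using the field names exposed by the describe command; for this cluster, the output should match the pod and service ranges chosen above:

$ gcloud container clusters describe mservice-dev-cluster --region europe-west2 \
    --format 'value(clusterIpv4Cidr, servicesIpv4Cidr)'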

Infrastructure Test Automation

When you use IaC tools like Terraform, it is also important to put the right level of infrastructure test automation in place, so that the resulting test reports can be used to evidence the security of your system without any manual intervention.
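
A minimal sketch of such a check, assuming the resource names used earlier in this story, is a shell script that a CI pipeline can run on every change:

#!/usr/bin/env bash
# Fail the pipeline if the subnet's primary range drifts from the design
set -euo pipefail

expected="10.1.0.0/16"
actual=$(gcloud compute networks subnets describe mservice-subnetwork \
    --region europe-west2 --format 'value(ipCidrRange)')

if [[ "${actual}" != "${expected}" ]]; then
    echo "Subnet range drifted: expected ${expected}, got ${actual}" >&2
    exit 1
fi
echo "Subnet range OK: ${actual}"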

Conclusion

When you expose your K8S resources to the Internet through public IP addresses, there are thousands of sophisticated adversaries trying to peep into your network. A guarantee that all of your K8S resources have private IP addresses ensures that those resources are not directly reachable from outside your network for staging any kind of attack. Even while protecting your internal network elements with private IP addresses, you can still choose to expose services through external IP addresses, including your K8S API Server URL. Caution has to be exercised to protect such exposed services from adversaries, using various authentication and authorization techniques in conjunction with transport layer security.
