A Visual Guide to Kubernetes Networking

Networking inside Kubernetes is not much different from networking in the physical world. With a basic grasp of networking concepts, you can set up communication between containers, Pods, and Services without much trouble.

Moving from physical networks using switches, routers, and Ethernet cables to virtual networks using software-defined networking (SDN) and virtual interfaces requires a slight learning curve. The principles remain the same, of course, but there are different norms and best practices. Kubernetes has its own set of rules, and if you're dealing with containers and clouds, it helps to understand how Kubernetes networking works.

The Kubernetes networking model has some general rules to remember:

  • Each pod has its own IP address: there is no need to create links between Pods, and no need to map container ports to host ports.
  • No NAT required: Pods on a node should be able to communicate with all Pods on all nodes without NAT.
  • Node agents have full access: Agents on a node (system daemons, the kubelet) can communicate with all Pods on that node.
  • Shared Namespace: Containers in a Pod share the network namespace (IP and MAC addresses), so they can communicate with each other using loopback addresses.

What problems does Kubernetes networking solve?

Kubernetes networking is designed to ensure that different entity types in Kubernetes can communicate. The layout of the Kubernetes infrastructure has a lot of separation by design. Namespaces, containers, and Pods are designed to keep components distinct from each other, so a highly structured communication plan is very important.

Container-to-container networking

Container-to-container networking happens through Pod network namespaces. Network namespaces allow you to have independent network interfaces and routing tables that are isolated and run independently from the rest of the system. Each Pod has its own network namespace, and the containers within it share the same IP address and ports. All communication between these containers happens through localhost because they are all part of the same namespace. (Represented by the green line in the diagram.)
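This localhost path can be sketched in plain Python. It is only a toy stand-in: two threads play the roles of two containers in the same Pod, and the port number (9090) is an arbitrary choice, not anything Kubernetes mandates.

```python
# Toy model: two "containers" sharing a network namespace can talk
# over loopback. Two threads stand in for the two containers here.
import socket
import threading

def sidecar_server(ready: threading.Event, port: int = 9090) -> None:
    """Stands in for a sidecar container listening inside the Pod."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))   # loopback only: same network namespace
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.sendall(b"hello from sidecar")
    conn.close()
    srv.close()

ready = threading.Event()
t = threading.Thread(target=sidecar_server, args=(ready,), daemon=True)
t.start()
ready.wait()

# The "main container" dials loopback; no Pod IP is involved at all.
cli = socket.create_connection(("127.0.0.1", 9090))
reply = cli.recv(1024)
cli.close()
print(reply.decode())
```

Because both endpoints live in one network namespace, neither side ever needs to know the Pod's cluster IP; 127.0.0.1 is enough.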

Pod-to-Pod Networking

With Kubernetes, each node has a designated CIDR IP range for Pods. This ensures that each Pod receives a unique IP address that can be seen by other Pods in the cluster. When creating new Pods, IP addresses never overlap. Unlike container-to-container networking, Pod-to-Pod communication occurs using real IPs, regardless of whether you deploy the Pods on the same node or on different nodes in the cluster.
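The per-node CIDR allocation can be illustrated with the standard library's `ipaddress` module. The 10.244.0.0/16 cluster CIDR and the /24 node mask below mirror common defaults (e.g. in kubeadm/Flannel setups), but both are configurable; the actual allocation is done by the controller manager's node IPAM, not by code like this.

```python
# Sketch: carving per-node Pod CIDRs out of a cluster-wide Pod CIDR.
import ipaddress

cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_cidrs = list(cluster_cidr.subnets(new_prefix=24))  # one /24 per node

node_a, node_b = node_cidrs[0], node_cidrs[1]
print(node_a)  # 10.244.0.0/24
print(node_b)  # 10.244.1.0/24

# The ranges never overlap, so every Pod IP in the cluster is unique.
assert not node_a.overlaps(node_b)
```

With a /16 cluster CIDR and /24 node ranges, each node can host up to 254 usable Pod IPs, and the cluster can hold 256 such nodes before the range is exhausted.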

The above diagram shows that in order for Pods to communicate with each other, traffic must flow between the Pod network namespace and the root network namespace. This is achieved by connecting the Pod namespace and the root namespace via a virtual Ethernet device or veth pair (veth0 to Pod namespace 1, veth1 to Pod namespace 2 in the diagram). A virtual bridge connects these virtual interfaces, allowing communication to flow between them using the Address Resolution Protocol (ARP).

When data is sent from Pod 1 to Pod 2, the event flow is:

  • Pod 1 traffic flows through eth0 to the virtual interface veth0 in the root network namespace.
  • The traffic then goes through veth0 to the virtual bridge connected to veth1.
  • Traffic goes through the virtual bridge to veth1.
  • Finally, the traffic reaches Pod 2’s eth0 interface through veth1.

Pod-to-Service networking

Pods are dynamic. They may need to scale up or down based on demand. They may be created again in case of application crashes or node failures. These events cause the IP address of the Pod to change, which will create challenges for networking.

Kubernetes solves this problem by using the Service feature, which does the following:

  • Assigns a static virtual IP address (the cluster IP) on the frontend, which connects to any backend Pods associated with the Service.
  • Load-balances all traffic addressed to this virtual IP across the set of backend Pods.
  • Tracks the Pods' IP addresses, so even when a Pod's IP changes, clients have no issues, because they only ever connect to the Service's static virtual IP.
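The behaviors above can be sketched as a toy model. The `Service` class and all IPs here are invented for illustration; in a real cluster, kube-proxy implements this mapping with iptables or IPVS rules in the kernel, not application code.

```python
# Toy model of a Service: a stable virtual IP in front of a changing
# set of backend Pod IPs.
import random

class Service:
    def __init__(self, cluster_ip: str):
        self.cluster_ip = cluster_ip      # fixed for the Service's lifetime
        self.endpoints: list[str] = []    # tracked Pod IPs, updated as Pods come and go

    def route(self) -> str:
        """Pick a backend for a connection (iptables mode picks randomly)."""
        return random.choice(self.endpoints)

svc = Service(cluster_ip="10.96.0.10")
svc.endpoints = ["10.244.1.5", "10.244.2.7"]

# A Pod crashes and is recreated with a new IP. Clients still dial
# 10.96.0.10 and never notice the change.
svc.endpoints = ["10.244.1.5", "10.244.3.9"]
print(svc.cluster_ip)
print(svc.route() in svc.endpoints)
```

The key property is that the frontend address never changes while the backend set churns freely, which is exactly the indirection that makes dynamic Pods usable.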

There are two ways to load balance within a cluster:

  • iptables: In this mode, kube-proxy watches the API server for changes. For each new Service, it installs iptables rules that capture traffic to the Service's clusterIP and port and redirect it to one of the Service's backend Pods, chosen at random. This mode is reliable and has low system overhead, because Linux Netfilter processes the traffic without switching between user space and kernel space.
  • IPVS: IPVS is built on top of Netfilter and implements transport-layer load balancing. It uses Netfilter hook functions, uses hash tables as its underlying data structure, and works in kernel space. This means that kube-proxy in IPVS mode redirects traffic with lower latency, higher throughput, and better performance than kube-proxy in iptables mode.

The above diagram shows the flow of packets from Pod 1 through the Service to Pod 3 on a different node (marked in red). Packets headed for the virtual bridge must use the default route (eth0), because the ARP running on the bridge does not understand the Service's virtual IP. The packets are then filtered through iptables, using the rules that kube-proxy has installed on the node, which is why they take the path shown in the diagram.

Internet-to-Service networking

So far, we have discussed how to route traffic within the cluster. However, there is another side to Kubernetes networking, which is exposing applications to the external network.

You can expose your application to the external network in two different ways.

  • Egress: Use this option when you want to route traffic from a Kubernetes service to the Internet. In this case, iptables performs source NAT so that the traffic appears to come from the node, not the pod.
  • Ingress: This is the incoming traffic from the outside world to the service. Ingress also allows and blocks specific communications with services using connection rules. Typically, there are two ingress solutions that run on different areas of the network stack: service load balancers and ingress controllers.

Service discovery

Kubernetes discovers services in two ways:

  • Environment variables: The kubelet running on a Pod's node sets environment variables for each active Service, in the form {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT. A Service must be created before its client Pods; otherwise, those Pods will not have the variables populated.
  • DNS: The DNS service is implemented as a Kubernetes service, mapped to one or more DNS server Pods, which are scheduled like any other Pods. Pods in the cluster are configured to use the DNS service, and the DNS search list includes the Pod's own namespace and the cluster's default domain. Cluster-aware DNS servers such as CoreDNS monitor the Kubernetes API for new services and create a set of DNS records for each service. If DNS is enabled across the cluster, all Pods can automatically resolve services based on their DNS names. The Kubernetes DNS server is the only way to access ExternalName services.
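The two naming schemes can be sketched as follows. The service name `redis-master` and namespace `prod` are made-up examples; the rules themselves (uppercase with dashes mapped to underscores for environment variables, `<service>.<namespace>.svc.<cluster-domain>` for DNS) are Kubernetes conventions.

```python
# Sketch: how the two discovery mechanisms derive names for a Service.

def env_var_names(svc_name: str) -> tuple[str, str]:
    """Environment-variable names the kubelet injects for a Service."""
    base = svc_name.upper().replace("-", "_")
    return f"{base}_SERVICE_HOST", f"{base}_SERVICE_PORT"

def dns_name(svc_name: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    """Fully qualified in-cluster DNS name for a Service."""
    return f"{svc_name}.{namespace}.svc.{cluster_domain}"

host_var, port_var = env_var_names("redis-master")
print(host_var)                          # REDIS_MASTER_SERVICE_HOST
print(port_var)                          # REDIS_MASTER_SERVICE_PORT
print(dns_name("redis-master", "prod"))  # redis-master.prod.svc.cluster.local
```

Note the ordering constraint only applies to environment variables; DNS records are created and updated continuously, so a Pod can resolve a Service that was created after the Pod started.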

ServiceTypes for published services

A Kubernetes Service provides a way to access a set of Pods, typically defined by using a label selector. This might be an application trying to access other applications in the cluster, or it might allow you to expose applications running in the cluster to the outside world.
ServiceTypes allows you to specify the type of service you want.

The different ServiceTypes include:

  • ClusterIP: This is the default ServiceType. It makes the service accessible only from within the cluster and allows applications within the cluster to communicate with each other. There is no external access.
  • LoadBalancer: This service type uses the cloud provider's load balancer to expose the service externally. Traffic from the external load balancer is directed to the backend pods. The cloud provider decides how to implement load balancing.
  • NodePort: This allows external communication to access the service by opening a specific port on all nodes. Any traffic sent to this port is then forwarded to the service.
  • ExternalName: This type of service uses the contents of the ExternalName field to map the service to a DNS name by returning a CNAME record and its value. No proxy of any kind is set.
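As a concrete example, here is a minimal NodePort Service manifest tying these pieces together. The name `my-app`, the label selector, and the port numbers are placeholders chosen for illustration.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app          # placeholder Service name
spec:
  type: NodePort        # one of the ServiceTypes above
  selector:
    app: my-app         # label selector matching the backend Pods
  ports:
    - port: 80          # the Service's own port, on its cluster IP
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30080   # opened on every node (default range 30000-32767)
```

Omitting `type` gives the default ClusterIP behavior; swapping in `type: LoadBalancer` asks the cloud provider to provision an external load balancer in front of the same backend Pods.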

Conclusion

As long as you understand the technologies used, networking inside Kubernetes is not much different from networking in the physical world. Learn and remember the basics of networking, and you can easily achieve communication between containers, Pods, and services.
