How to configure an OVN load balancer?

Translator profile: Zheng Minxian works for Nooyun Systems (Shanghai) Co., Ltd. as a solution engineer at the Nooyun R&D Center in Nanjing.

Overview

Building on my last post, I will now introduce OVN's load balancing feature. Before we begin, let's review the configuration from the previous experiment.

OVN Load Balancer

The OVN load balancer is designed to provide basic load balancing services for workloads within the OVN logical network space. Given its simple feature set, it is not intended to replace hardware load balancers, which offer many more bells and whistles for advanced use cases.

Like most other load balancers, the OVN load balancer uses a hash-based algorithm to balance requests for a VIP across a pool of backend IP addresses within the logical space. Because the hash is calculated from the headers of the client request, the balancing should appear random, while each individual client connection sticks to the same member of the pool for the duration of the connection.

[Figure: experimental physical network topology]

[Figure: OVN logical network topology]

Load balancing in OVN can be applied to either logical switches or logical routers. Which to choose depends on your specific requirements, and each approach has its own considerations.

The following considerations need to be kept in mind when applied to logical routers:

  1. Load balancing can only be applied to a "centralized" router (i.e. a gateway router).
  2. Because of the first consideration, load balancing on a router is a non-distributed service: all load-balanced traffic must pass through that gateway router.

The following considerations need to be kept in mind when applied to logical switches:

  1. Load balancing is "distributed" in that it is applied across potentially many OVS hosts.
  2. Load balancing on a logical switch is evaluated only at ingress, as traffic enters from a VIF (virtual interface). This means it must be applied on the "client" logical switch rather than on the "server" logical switch.
  3. Because of consideration #2, you may need to apply the load balancer to multiple logical switches, depending on the scale of your design (see the sketch after this list).
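
For example, consideration #3 might look like the following sketch, where the same load balancer is attached to two hypothetical "client" switches named ls-clients-a and ls-clients-b (the switch names are my own illustration; the ovn-nbctl commands are the same ones used later in this lab):

    uuid=`ovn-nbctl create load_balancer vips:172.16.255.62="172.16.255.130,172.16.255.131"`
    # attach the same load balancer record to every switch that hosts clients
    ovn-nbctl set logical_switch ls-clients-a load_balancer=$uuid
    ovn-nbctl set logical_switch ls-clients-b load_balancer=$uuid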

Using our "pseudo virtual machines" as web servers

To demonstrate the load balancer, we will create a pair of web servers in our "dmz", each serving a file that identifies the server. To keep the experiment simple, we will run a Python web server inside each of the vm1/vm2 namespaces.

Let's start the web server.

On ubuntu2:

    mkdir /tmp/www
    echo "i am vm1" > /tmp/www/index.html
    cd /tmp/www
    ip netns exec vm1 python -m SimpleHTTPServer 8000

On ubuntu3:

    mkdir /tmp/www
    echo "i am vm2" > /tmp/www/index.html
    cd /tmp/www
    ip netns exec vm2 python -m SimpleHTTPServer 8000

The above commands create a web server listening on TCP port 8000 in each namespace.
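
A side note: SimpleHTTPServer is a Python 2 module. If your hosts only have Python 3 (an assumption about your environment, not something this lab requires), the equivalent command would be:

    # Python 3 equivalent of SimpleHTTPServer
    ip netns exec vm1 python3 -m http.server 8000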

We also want to be able to test connectivity to our web servers. To do this, we will use curl from the global namespace of the Ubuntu hosts. If curl is not already installed, you will need to install it first.

    apt-get -y install curl
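
Before defining any load balancing rules, it may also be worth confirming that each backend answers on its real IP. This assumes the static route to the 172.16.255.128/25 network from the previous lab is still in place on the host:

    # sanity check: hit each backend directly, bypassing any VIP
    curl 172.16.255.130:8000
    curl 172.16.255.131:8000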

Configuring Load Balancer Rules

First, we need to define our load balancing rules: a VIP and the pool of backend server IPs. This involves creating an entry in the OVN northbound database and capturing the generated UUID. For this lab, we will use the VIP 10.127.0.254, which resides in the lab's "data" network, and the addresses of vm1/vm2 as the pool IPs.

On ubuntu1:

    uuid=`ovn-nbctl create load_balancer vips:10.127.0.254="172.16.255.130,172.16.255.131"`
    echo $uuid

The above command creates an entry in the load_balancer table of the northbound database and stores the generated UUID in the variable "uuid". We will reference this variable in subsequent commands.
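
If you want to see exactly what was written, you can dump the table as a quick sanity check:

    ovn-nbctl list load_balancer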

Configuring load balancing on the gateway router

Enable the load balancer feature on the OVN gateway router "edge1".

On ubuntu1:

    ovn-nbctl set logical_router edge1 load_balancer=$uuid

You can verify that the load balancer was applied by checking the database entry for edge1.

    ovn-nbctl get logical_router edge1 load_balancer

Now, we can connect to the VIP from the global namespace of any Ubuntu host.

    root@ubuntu1:~# curl 10.127.0.254:8000
    i am vm2
    root@ubuntu1:~# curl 10.127.0.254:8000
    i am vm1
    root@ubuntu1:~# curl 10.127.0.254:8000
    i am vm2

After testing it several times, I can confirm that the balancing appears fairly random.
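
As an aside, the northbound schema also allows a VIP to carry a TCP/UDP port, in which case the load balancer's protocol column must be set. The sketch below is my own illustration and is not applied in this lab; the exact quoting of the port-bearing VIP key may vary with your shell and OVN version:

    # hypothetical: balance TCP port 80 on the VIP to port 8000 on the backends
    lb2=`ovn-nbctl create load_balancer protocol=tcp \
        vips='"10.127.0.254:80"="172.16.255.130:8000,172.16.255.131:8000"'`
    ovn-nbctl destroy load_balancer $lb2   # clean up the aside before continuing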

Let's see what happens if we disable one of the web servers. Try stopping the Python process running in the vm1 namespace. This is the output I get:

    root@ubuntu1:~# curl 10.127.0.254:8000
    curl: (7) Failed to connect to 10.127.0.254 port 8000: Connection refused
    root@ubuntu1:~# curl 10.127.0.254:8000
    i am vm2

As you can see, the load balancer does not perform any health checking of its own. The current expectation is that health checking will be performed by an orchestration solution such as Kubernetes, though the functionality may be added to OVN itself at some point in the future.
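
In the meantime, a crude external health check can be scripted with the same ovn-nbctl commands used above. The loop below is purely my own illustration (the interval, timeout, and backend list are arbitrary), not an OVN feature:

    # hypothetical watchdog: rewrite the VIP pool to contain only live backends
    while true; do
        live=""
        for ip in 172.16.255.130 172.16.255.131; do
            # probe each backend; keep it in the pool only if it answers
            curl -s --max-time 2 $ip:8000 >/dev/null && live="$live,$ip"
        done
        # ${live#,} strips the leading comma; an empty pool empties the VIP
        ovn-nbctl set load_balancer $uuid vips:10.127.0.254="${live#,}"
        sleep 5
    done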

Before moving on to the next test, restart the Python web server on vm1.

So far we have accessed the VIP only from outside the OVN logical network. Let's see what happens when we access the VIP from an internal VM.

Run curl from vm3 on ubuntu2:

    root@ubuntu2:~# ip netns exec vm3 curl 10.127.0.254:8000
    i am vm1
    root@ubuntu2:~# ip netns exec vm3 curl 10.127.0.254:8000
    i am vm2

Great, this seems to be working fine, but there is something interesting here. Look at the logical diagram of our OVN network and consider the path of the curl request from vm3. Also, take a look at the logs from the Python web server:

    10.127.0.130 - - [29/Sep/2016 09:53:44] "GET / HTTP/1.1" 200 -
    10.127.0.129 - - [29/Sep/2016 09:57:42] "GET / HTTP/1.1" 200 -

Notice the client IP addresses in the logs. The first is ubuntu1, from the previous round of testing. The second is edge1, for the request that came from vm3. Why does the request appear to come from edge1 rather than directly from vm3? The answer is that the OVN developers implemented load balancing in "proxy mode", in which the load balancer hides the client IP under certain circumstances. Consider what would happen if the web server saw the real IP of vm3: the server's response would be routed directly back to vm3, bypassing the load balancer on edge1. From vm3's perspective, it would look like it made a request to the VIP but received a reply from the real IP of one of the web servers, and the connection would fail. This is why the proxy mode behavior is important.

Before the second round of testing, delete the load balancer configuration:

    ovn-nbctl clear logical_router edge1 load_balancer
    ovn-nbctl destroy load_balancer $uuid

Configuring load balancing on a logical switch

The next lab applies the load balancing rules to a logical switch instead. What will happen? Since we are moving load balancing away from the edge, the first step is to create a new load balancer with an internal VIP. We will use 172.16.255.62 as the VIP.

On ubuntu1:

    uuid=`ovn-nbctl create load_balancer vips:172.16.255.62="172.16.255.130,172.16.255.131"`
    echo $uuid

First test: applying the load balancer to the "inside" logical switch.

On ubuntu1:

    # apply and verify
    ovn-nbctl set logical_switch inside load_balancer=$uuid
    ovn-nbctl get logical_switch inside load_balancer

Then test from vm3 (located "inside"):

    root@ubuntu2:~# ip netns exec vm3 curl 172.16.255.62:8000
    i am vm1
    root@ubuntu2:~# ip netns exec vm3 curl 172.16.255.62:8000
    i am vm1
    root@ubuntu2:~# ip netns exec vm3 curl 172.16.255.62:8000
    i am vm2

The experiment seems to have been successful.
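
If you are curious how this is realized in the logical pipeline, you can look for the load balancing stages (the ct_lb action) in the southbound logical flows. The exact stage names and output format vary by OVN version:

    # inspect the logical flows of the "inside" datapath for load balancing
    ovn-sbctl lflow-list inside | grep -E 'pre_lb|ct_lb'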

Then remove the load balancer from "inside" and apply it to "dmz". On ubuntu1:

    ovn-nbctl clear logical_switch inside load_balancer
    ovn-nbctl set logical_switch dmz load_balancer=$uuid
    ovn-nbctl get logical_switch dmz load_balancer

Then test again from vm3:

    root@ubuntu2:~# ip netns exec vm3 curl 172.16.255.62:8000
    ^C

Oops, it hangs. Let's try testing from vm1 (which also resides in the "dmz"):

    root@ubuntu2:~# ip netns exec vm1 curl 172.16.255.62:8000
    ^C

No luck from vm1 either. This confirms the earlier consideration: load balancing must be applied on the "client" logical switch, not the "server" logical switch. Make sure to clean up your environment.

On ubuntu1:

    ovn-nbctl clear logical_switch dmz load_balancer
    ovn-nbctl destroy load_balancer $uuid

Conclusion

Basic load balancing functionality is very useful. Since it is built directly into OVN, it means one less piece of software to deploy in your SDN solution. Although the OVN load balancer does not have many features, I think it meets the needs of most users. I also expect that some of the deficiencies, such as the lack of health checking capabilities, will be implemented in OVN in the future. In the next article, I will introduce OVN's network security features.
