Translator profile: Zheng Minxian works for Nooyun Systems (Shanghai) Co., Ltd. as a solutions engineer at the Nooyun R&D Center in Nanjing.

Overview

Following up on my last post, this article introduces OVN's load balancing features. But before we begin, let's take a look at the configuration from the previous experiment.

OVN Load Balancer

The OVN load balancer is designed to provide basic load balancing services for workloads within the OVN logical network space. Given its simple feature set, it is not intended to replace hardware load balancers, which offer many more bells and whistles for advanced use cases.

Like most other load balancers, OVN's uses a hash-based algorithm to balance requests for a VIP across a pool of backend IP addresses within the logical space. Since the hash is calculated from the headers of the client request, the balancing should appear random, while each individual client will keep hitting the same pool member for the duration of its connection.

Experimental physical network topology:

OVN logical network topology:

Load balancing in OVN can be applied to either logical switches or logical routers. Which one you choose depends on your specific requirements, and each approach has its own caveats. When applied to logical routers, the following considerations need to be kept in mind:
The following considerations need to be kept in mind when applied to logical switches:
Use our "pseudo virtual machine" as a web server To demonstrate the load balancer, we want to create a pair of web servers in our "dmz" that serve a file identifying them. To simplify the experiment, we will use a python web server running in each of our vm1/vm2 namespaces. Let's start the web server. On ubuntu2:
On ubuntu3:
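The same idea for vm2, under the same assumptions as above:

    mkdir -p /tmp/www
    echo "i am vm2" > /tmp/www/index.html
    cd /tmp/www
    ip netns exec vm2 python -m SimpleHTTPServer 8000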
The above commands start a web server listening on TCP port 8000 inside each namespace. We also want to be able to test connectivity to our web servers. To do this, we will use curl from the global namespace of the Ubuntu hosts. If curl is not already installed, you will need to install it first.
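For example, assuming vm1 kept the address 172.16.255.130 from the earlier lab (adjust to your own addressing), a quick sanity check looks like this:

    apt-get install -y curl          # only needed if curl is missing
    curl 172.16.255.130:8000         # should return "i am vm1" with the sketch above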
Configuring Load Balancer Rules

First, we need to define our load balancing rules: the VIP and the backend server IP pool. This involves creating an entry in the OVN northbound database and capturing the generated UUID. For this lab, we will use the VIP 10.127.0.254, which sits in the lab's "data" network, and the addresses of vm1/vm2 as the pool IPs. On ubuntu1:
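One way to do this with ovn-nbctl's generic database commands, assuming vm1/vm2 kept the addresses 172.16.255.130 and 172.16.255.131 from the earlier labs (adjust the pool to match your environment):

    # create the load_balancer row and capture its UUID
    uuid=$(ovn-nbctl create load_balancer vips:10.127.0.254="172.16.255.130,172.16.255.131")
    echo $uuid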
The above command creates an entry in the load_balancer table of the northbound database and stores the generated UUID in the variable "uuid"; we will reference this variable in subsequent commands.

Configure load balancing on the gateway router

Next, enable the load balancer on the OVN gateway router "edge1". On ubuntu1:
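Assuming the shell variable "uuid" still holds the value captured above, one way to attach the load balancer to the router is:

    ovn-nbctl set logical_router edge1 load_balancer=$uuid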
You can verify that the load balancer was applied by checking edge1's database entry.
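For example:

    ovn-nbctl get logical_router edge1 load_balancer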
Now, we can connect to the VIP from the global namespace of any Ubuntu host.
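For example, repeating the request a few times should return answers from both vm1 and vm2:

    curl 10.127.0.254:8000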
After testing it several times, I can confirm that the load balancing is fairly random. Let's see what happens if we disable one of the web servers. Try stopping the python process running in the vm1 namespace. This is the output I get:
As you can see, the load balancer does not perform any kind of health checking. The current expectation is that health checks will be performed by an orchestration solution such as Kubernetes, but it is reasonable to expect this functionality to be added to OVN at some point in the future. Before moving on to the next test, restart the Python web server on vm1. So far we have only exercised the load balancer from outside the OVN network; let's see what happens when we access the VIP from an internal VM. Run curl from vm3 on ubuntu2:
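For example, on ubuntu2 (where the vm3 namespace lives):

    ip netns exec vm3 curl 10.127.0.254:8000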
Great, this seems to be working fine, but there is something interesting here. Take a look at the logical diagram of our OVN network and think about the path the curl request from vm3 takes. Also, take a look at some of the logs from the Python web server:
Notice the client IP addresses in the logs. The first IP is ubuntu1, from the previous round of testing. The second IP is edge1, even though the request came from vm3. Why does the request appear to come from edge1 rather than directly from vm3? The answer is that the OVN developers implemented load balancing using a technique known as "proxy mode", in which the load balancer hides the client IP under certain circumstances. Why is this necessary? Think about what would happen if the web server saw the real IP of vm3: the response from the server would be routed directly back to vm3, bypassing the load balancer on edge1. From vm3's perspective, it would look like it made a request to the VIP but received a reply from the real IP of one of the web servers, so the connection would break. This is why the proxy-mode behavior is important. Before the second round of testing, delete the load balancer configuration.
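On ubuntu1, assuming the "uuid" variable still references the load balancer created earlier, something like:

    ovn-nbctl clear logical_router edge1 load_balancer
    ovn-nbctl destroy load_balancer $uuid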
Configuring load balancing on a logical switch

The next lab applies the load balancing rules to a logical switch; what will happen? Since we are moving load balancing away from the edge, the first step is to create a new load balancer with an internal VIP. We will use 172.16.255.62 as the VIP. On ubuntu1:
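A sketch, again assuming 172.16.255.130 and 172.16.255.131 as the addresses of vm1/vm2:

    uuid=$(ovn-nbctl create load_balancer vips:172.16.255.62="172.16.255.130,172.16.255.131")
    echo $uuid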
First test: apply the load balancer to the "inside" logical switch. On ubuntu1:
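For example, setting it and then checking the result:

    ovn-nbctl set logical_switch inside load_balancer=$uuid
    ovn-nbctl get logical_switch inside load_balancer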
Then test from vm3 (located "inside"):
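On ubuntu2:

    ip netns exec vm3 curl 172.16.255.62:8000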
The experiment seems to have been successful. Then remove the load balancer from "inside" and apply it to "dmz". On ubuntu1:
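For example:

    ovn-nbctl clear logical_switch inside load_balancer
    ovn-nbctl set logical_switch dmz load_balancer=$uuid
    ovn-nbctl get logical_switch dmz load_balancer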
Then test again from vm3:
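Same request as before, on ubuntu2:

    ip netns exec vm3 curl 172.16.255.62:8000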
Oops, it's hung. Let's try testing it from vm1 (which also resides in "dmz"):
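On ubuntu2, where the vm1 namespace lives:

    ip netns exec vm1 curl 172.16.255.62:8000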
No luck there either. This strongly suggests that load balancing should be applied to the client's logical switch rather than the server's logical switch. Make sure to clean up your environment. On ubuntu1:
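Something along these lines:

    ovn-nbctl clear logical_switch dmz load_balancer
    ovn-nbctl destroy load_balancer $uuid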
Conclusion

Basic load balancing functionality is very useful, and since it is built directly into OVN, it means one less piece of software to deploy in your SDN solution. Although the OVN load balancer does not have many features, I think it meets the needs of most users. I also expect that some of its deficiencies, such as the lack of health checking, will be addressed in OVN in the future. In the next article, I will introduce OVN's network security features.