1. Overview

A load balancer is a device or software that distributes network traffic across multiple servers, ensuring that no single server becomes overwhelmed with requests, which can cause delays and downtime. By spreading traffic evenly, load balancers improve the performance and availability of applications, help prevent bottlenecks, and improve the overall user experience.

In this tutorial, we'll explore how a load balancer works using a practical example.

2. Load Balancer Operation

Let's look at how a typical load balancer handles a large number of requests using a practical scenario. Consider a website that receives heavy traffic. To handle all of the incoming requests, we run multiple web servers, each capable of serving the website's content. To distribute the incoming traffic evenly among these servers, we place a load balancer in front of them.

A load balancer acts as a “traffic cop”. It sits in front of the web servers and routes incoming requests to the appropriate server. When a client (such as a user’s web browser) sends a request to the application, the request is first received by the load balancer. The load balancer then uses a load-balancing algorithm to determine which of the web servers should receive the request. The algorithm considers factors such as the current workload of each server, the server’s capacity, and the type of request sent.

Once the load balancer selects a target server using a routing algorithm, it forwards the request. The web server processes the request and sends its response back to the load balancer, which then relays it to the client. This arrangement means clients never access the web servers directly: the load balancer is the only public endpoint, which helps secure our infrastructure:

[Figure: load balancer routing client requests to multiple web servers]

Hence, placing the load balancer in this position allows it to distribute incoming traffic evenly among the web servers, which can improve the performance and availability of the application. It also allows the load balancer to provide other benefits, such as increased scalability and improved security.
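The request/response flow described above can be sketched in a few lines. This is a minimal illustration, not a real proxy: choose_server and forward are hypothetical stand-ins for the routing algorithm and the network hop to the backend.

```python
def handle_request(request, choose_server, forward):
    """Sketch of a load balancer's request path (illustrative only)."""
    server = choose_server(request)      # routing algorithm picks a target server
    response = forward(server, request)  # target server processes the request
    return response                      # balancer relays the response to the client


# Toy usage: a fixed "algorithm" and an echo "server".
result = handle_request(
    "GET /",
    choose_server=lambda req: "server-1",
    forward=lambda srv, req: f"{srv} handled {req}",
)
print(result)  # server-1 handled GET /
```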

2.1. Routing Algorithms

The decision to route traffic to a specific server is made by a routing algorithm. The two most popular routing algorithms are round-robin and least connections.

In the round-robin method, the load balancer sends incoming requests to each server in turn. With three servers, the load balancer sends the first request to the first server, the second request to the second server, the third request to the third server, and then starts the cycle again: the fourth request goes back to the first server, and so on. This ensures that each server receives an approximately equal number of requests over time. If a server fails, the load balancer stops forwarding requests to it.
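The cycle described above, including skipping failed servers, can be sketched in a few lines of Python. The server names are hypothetical, and a real balancer would detect failures via health checks rather than an explicit mark_down call:

```python
class RoundRobinBalancer:
    """Minimal round-robin sketch with basic failover, for illustration."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.down = set()  # servers currently marked as failed
        self.index = 0

    def mark_down(self, server):
        self.down.add(server)

    def next_server(self):
        # Walk the ring, skipping servers that are marked down.
        for _ in range(len(self.servers)):
            server = self.servers[self.index % len(self.servers)]
            self.index += 1
            if server not in self.down:
                return server
        raise RuntimeError("no healthy servers available")


lb = RoundRobinBalancer(["server-1", "server-2", "server-3"])
print([lb.next_server() for _ in range(4)])
# ['server-1', 'server-2', 'server-3', 'server-1']
lb.mark_down("server-2")
print(lb.next_server())  # 'server-3' (server-2 is skipped)
```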

In the least connections method, the load balancer keeps track of the number of active connections that each server has. It sends incoming requests to the server with the fewest active connections. For example, consider a web application with three web servers and a load balancer. The web servers are named Server A, Server B, and Server C, and they each have the following number of active connections:

[Figure: least connections algorithm illustration]

Using the least connections method, the load balancer will forward the request to Server B, since it has the fewest active connections.
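The selection step reduces to a minimum over a snapshot of connection counts. The counts below are illustrative assumptions (the originals were in the figure), chosen so that Server B has the fewest:

```python
def least_connections(active_connections):
    """Return the server with the fewest active connections (sketch)."""
    return min(active_connections, key=active_connections.get)


# Hypothetical snapshot of active connections per server.
connections = {"Server A": 5, "Server B": 2, "Server C": 4}
print(least_connections(connections))  # Server B
```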

3. Types of Load Balancers

In general, we categorize load balancers based on the type of traffic they handle. The two main categories of load balancers are layer 4 load balancers and layer 7 load balancers.

Layer 4 load balancers operate at the transport layer of the OSI model and distribute incoming traffic based on source and destination IP addresses and port numbers. This type of load balancer typically fronts simple, stateless applications that do not maintain session information. These are also known as network load balancers.

Layer 7 load balancers, also called application load balancers, operate at the application layer of the OSI model. They are responsible for distributing incoming traffic based on the content of the request. This allows them to provide advanced features such as application-level routing and SSL/TLS termination, which can be useful for more complex applications that maintain a session state or require content-based routing.
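To make the distinction concrete, here is a toy sketch of layer 7, content-based routing that picks a backend pool by URL path prefix. The paths and pool names are made up for illustration; real application load balancers express this as configurable routing rules rather than code:

```python
def route_by_path(path, rules, default="web-pool"):
    """Pick the backend pool whose path prefix matches the request (sketch)."""
    for prefix, pool in rules:
        if path.startswith(prefix):
            return pool
    return default


# Hypothetical routing rules, checked in order (most specific first).
rules = [("/api/", "api-pool"), ("/static/", "static-pool")]
print(route_by_path("/api/users", rules))   # api-pool
print(route_by_path("/index.html", rules))  # web-pool
```

A layer 4 balancer could not make this decision, because it never inspects the request path, only IP addresses and ports.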

We can also categorize load balancers by the type of technology they use, such as hardware, software, or cloud-based. Additionally, we can classify them as global or regional based on the scope of traffic they handle.

4. Pros and Cons

Below are the main pros and cons of using load balancers.

Pros: improved performance and availability, horizontal scalability, even distribution of traffic that prevents any single server from being overwhelmed, and a single public endpoint that shields backend servers from direct access.

Cons: the load balancer itself can become a single point of failure unless it is made redundant, it adds cost and configuration complexity, and it introduces an extra network hop that can add latency.

5. Conclusion

In this article, we took a deep dive into the working of load balancers. Further, we discussed a few routing algorithms that help distribute the load.

In conclusion, load balancers are an important tool for improving the performance and availability of networked systems. By distributing traffic among multiple servers, load balancers ensure that no single server receives too many requests.
