
Introduction

A load balancer is a device that distributes network or application traffic across a cluster of servers. Load balancing improves responsiveness and increases availability of applications.

A load balancer sits between the client and the server farm, accepting incoming network and application traffic and distributing it across multiple backend servers using various methods. By balancing application requests across multiple servers, a load balancer reduces the load on individual servers and prevents any one application server from becoming a single point of failure, thus improving overall application availability and responsiveness.

Load balancing Blueriq Runtime

When running the Blueriq Runtime in a clustered configuration, load balancing is required to distribute the load and ensure high availability.

Servers can be added at any time; they only need to be registered with the load balancer as well. A server should only be shut down once all requests to that server have completed. How to achieve this depends on the load balancer.

Blueriq does not impose a specific load balancer; nginx is used as an example on this page.
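For example, with nginx a server can be taken out of rotation before it is shut down by marking it as down in the upstream block and reloading the configuration; an nginx reload is graceful, so in-flight requests are completed first. A minimal sketch, assuming the runtime upstream defined in the example below:

Taking a server out of rotation
upstream runtime {
    server 192.168.1.1;
    # No new requests are routed to this server. Apply with a
    # graceful reload (nginx -s reload) so in-flight requests finish.
    server 192.168.1.2 down;
}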

Nginx example configuration

Assumptions:

  • Two nodes running the Blueriq Runtime are configured, one at IP 192.168.1.1 and the other at 192.168.1.2
  • The Blueriq Runtimes are configured for running in a clustered configuration, with all required external resources in place (e.g. a key-value store)

The basic nginx config should look something like this:

nginx config
http {
    upstream runtime {
        server 192.168.1.1;
        server 192.168.1.2;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://runtime;
        }
    }
}

Warning

This example can be used for testing. For setting up HTTPS or other, more detailed configuration, refer to the official nginx documentation.

Configuring nginx to address scalability (Load Balancer)

Nginx supports the following load-balancing methods. Review how each method works to determine which one best fits your situation (preferably basing the choice on server utilization):

1. Round-robin: requests are distributed in order across the list of application servers. This is the default behavior of nginx.

This round-robin behavior is configurable per node: by assigning server weights, it becomes weighted load balancing.

Default load balancing configuration
http {
    upstream myapp1 {
        server 192.168.1.1;
        server 192.168.1.2;
        server 192.168.1.3;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
        }
    }
}

When the weight parameter is specified for a server, the weight is taken into account in the load-balancing decision. With the configuration example below, every five requests are distributed across the application instances as follows: three requests to the first server, one to the second, and one to the third.

Weighted load balancing
upstream myapp2 {
    server 192.168.1.1 weight=3;
    server 192.168.1.2;
    server 192.168.1.3;
}


2. Least-connected load balancing — the next request is assigned to the server with the least number of active connections.

This method can be activated by adding the least_conn directive as part of the server group configuration:

Least connected
http {
    upstream myapp1 {
        least_conn;
        server 192.168.1.1;
        server 192.168.1.2;
        server 192.168.1.3;
    }
}


3. ip-hash — a hash function based on the client’s IP address determines which server is selected for the next request. This method is similar to a sticky-session mechanism.

This method can be activated by adding the ip_hash directive as part of the server group configuration:

ip-hash
http {
    upstream myapp1 {
        ip_hash;
        server 192.168.1.1;
        server 192.168.1.2;
        server 192.168.1.3;
    }
}

Configuring nginx to address high-availability (Failover Clustering)

Bear in mind that passing a request to the next server is only possible if nothing has been sent to the client yet; if an error or timeout occurs in the middle of transferring a response, the failed request cannot be recovered.

Setting up a failover mechanism involves at least one primary server and a backup server on stand-by, ready to process requests when the primary fails (i.e. when unsuccessful attempts occur).

Setting a server as the backup server is done with the following parameter:

backup: marks the server as a backup server. Requests are passed to it only when the primary servers are unavailable.


proxy_next_upstream: specifies in which cases a request should be passed to the next server. More specifically, this directive defines the events that are considered unsuccessful attempts to communicate with a server.

The default cases in which requests are passed to the next server are error and timeout, as in the example configuration below.

Additionally, requests can be passed to the next server when the following events occur:

invalid_header: a server returned an empty or invalid response;

http_[statusCode]: a server returned a response with status code [statusCode], where statusCode can be one of: 500, 502, 503, 504, 403, 404, 429;

non_idempotent: allows retrying requests with non-idempotent methods (POST, LOCK, PATCH), which are normally not passed to the next server;

Allowing retries of requests with non-idempotent methods is required when using the Blueriq Runtime in a failover configuration, as most requests use the POST method. For more details on how Blueriq mitigates potential concurrency problems with non-idempotent requests, see 9. Concurrency Control on Multiple Nodes and 10. Session de-synchronization protection.
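As a minimal sketch of this directive (reusing the myapp1 upstream from the examples above), enabling retries for errors, timeouts, and non-idempotent requests could look like this:

Retrying non-idempotent requests
location / {
    proxy_pass http://myapp1;
    # Retry on connection errors and timeouts, and also allow
    # non-idempotent methods such as POST to be retried.
    proxy_next_upstream error timeout non_idempotent;
}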

Example configuration for Failover
http {
    upstream myapp1 {
        server 192.168.1.1;
        server 192.168.1.2;
        server 192.168.1.3 backup;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myapp1;
            proxy_next_upstream error timeout;
            proxy_connect_timeout 1s;
        }
    }
}

When unsuccessful attempts occur, how the request is handled is determined by the following per-server parameters:

max_fails: sets the number of unsuccessful attempts to communicate with the server that must occur within the duration set by the fail_timeout parameter for the server to be considered unavailable; the server then remains unavailable for the duration set by fail_timeout. By default, the number of unsuccessful attempts is set to 1. A value of zero disables the accounting of attempts.

fail_timeout: sets both the time during which the specified number of unsuccessful attempts must occur for the server to be considered unavailable, and the period of time for which the server is then considered unavailable. By default, this parameter is set to 10 seconds.
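As an illustrative sketch (the values shown are assumptions, not recommendations), both parameters are set per server line in the upstream block:

Failure accounting per server
upstream myapp1 {
    # Consider a server unavailable for 30 seconds after 3 failed
    # attempts within a 30-second window.
    server 192.168.1.1 max_fails=3 fail_timeout=30s;
    server 192.168.1.2 max_fails=3 fail_timeout=30s;
    server 192.168.1.3 backup;
}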