GlassFish 3.1 Load Balancer Plugin One Pager
GlassFish 3.1/Load Balancer Plugin

1.2. Name(s) and e-mail address of Document Author(s)/Supplier

Kshitiz Saxena : kshitiz.saxena@sun.com

1.3. Date of This Document
In GlassFish 3.1, support for clustering of application server instances is being introduced, so there is an evident need for a load balancer to front-end the cluster of instances. GlassFish 2.1.1 already had a load balancer, and the same will be leveraged for GlassFish 3.1. The load balancer for GlassFish is a native plugin that needs to be installed on a web server. After the load-balancer plugin is installed on the web server and configured, the web server will distribute requests across the cluster of GlassFish instances and handle fail-over, among many other features. A variety of web servers are supported by the load-balancer plugin: Sun Java System Web Server, Apache HTTP Server, and Internet Information Services (IIS). Being an external component installed on a web server, it requires no functional changes to work with GlassFish 3.1.

2.2. Risks and Assumptions

The load-balancer plugin pushes certain information as proxy headers along with the request. The GlassFish implementation must be able to interpret these headers and populate the request object appropriately. The GlassFish web container is also required to stamp the sticky information provided by the load-balancer plugin in a proxy header, either as a cookie or through URL rewriting, for correct functioning of the load-balancer plugin.

3. Problem Summary
| Commands | Details |
|---|---|
| create-http-lb-config | Creates the lb-config element with provided values. |
| create-http-lb-ref | Creates a cluster-ref or server-ref under the lb-config element with provided values. |
| create-http-health-checker | Creates a health-checker element with provided values. |
| enable-http-lb-server | Sets the lb-enabled flag to true for the given instance. If a cluster name is used, the lb-enabled flag is set to true for all instances in the cluster. |
| enable-http-lb-application | Sets the lb-enabled flag to true for the given application. |
| delete-http-lb-config | Deletes the given lb-config. |
| delete-http-lb-ref | Deletes the given cluster-ref or server-ref. All instances need to be disabled for this command to execute successfully. |
| delete-http-health-checker | Deletes the given health-checker. |
| disable-http-lb-server | Sets the lb-enabled flag to false for the given instance. If a cluster name is used, the lb-enabled flag is set to false for all instances in the cluster. |
| disable-http-lb-application | Sets the lb-enabled flag to false for the given application. |
| configure-lb-weight | Configures the weight for a particular instance. |
| list-http-lb-configs | Lists all the lb-config elements. |
| create-http-lb | Creates the load-balancer element. It can create the lb-config and cluster-ref/server-ref, and enable an instance/application, in a single command. This command will be needed if the apply-http-lb-changes command is supported. |
| delete-http-lb | Deletes the given load-balancer element. This command will be needed if the apply-http-lb-changes command is supported. |
| list-http-lbs | Lists all the load-balancer elements. This command will be needed if the apply-http-lb-changes command is supported. |
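A typical configuration sequence using the commands above might look as follows. This is a sketch only: the cluster name (cluster1), application name (myapp), config name (mylb-config), and health-check URL are illustrative, and option defaults follow the GlassFish 2.1.1 command syntax that this release carries forward.

```shell
# Create the lb-config element in domain.xml.
asadmin create-http-lb-config mylb-config

# Reference the cluster from the lb-config.
asadmin create-http-lb-ref --config mylb-config cluster1

# Add a health checker for the referenced cluster.
asadmin create-http-health-checker --config mylb-config \
    --url /health --interval 30 --timeout 10 cluster1

# Enable the instances and the deployed application for load balancing.
asadmin enable-http-lb-server cluster1
asadmin enable-http-lb-application --name myapp cluster1
```

These commands only build the load-balancer view in domain.xml; the plugin itself consumes the xml generated by the export commands described below.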
The main purpose of creating the load-balancer view in domain.xml is to facilitate generation of the load-balancer xml consumed by the load-balancer plugin. Below is the list of commands to achieve this:
| Commands | Details |
|---|---|
| export-http-lb-config | Generates the load-balancer xml corresponding to the given lb-config. The user can provide an lb-config or load-balancer name. It will be exported to the provided file name. This command will further be overloaded with a new target parameter, enabling the user to generate load-balancer xml for a set of clusters/standalone instances without needing to create an lb-config or load-balancer element in domain.xml. |
| apply-http-lb-changes | Generates the load-balancer xml corresponding to the given load-balancer and pushes it over the wire to the configured web-server host name and port number. The web server requires some specific configuration for this feature to work. This command may not be supported in GlassFish 3.1. |
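Usage of these two commands might look as follows; the config name, load-balancer name, and output path are illustrative, and the exact apply-http-lb-changes operand form should be checked against the GlassFish 2.1.1 reference pages.

```shell
# Generate loadbalancer.xml from an existing lb-config.
asadmin export-http-lb-config --config mylb-config /tmp/loadbalancer.xml

# If apply-http-lb-changes is supported, push the generated configuration
# over the wire to the web server associated with the load-balancer element.
asadmin apply-http-lb-changes mylb
```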
The generated load-balancer xml must conform to glassfish-loadbalancer_1_2.dtd.
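For orientation, a generated loadbalancer.xml has roughly the following shape. This fragment is illustrative only: the element and attribute names follow the general structure of the 1.2 DTD, but the instance names, hosts, ports, and property values are invented examples.

```xml
<loadbalancer>
  <cluster name="cluster1" policy="round-robin">
    <instance name="instance1" enabled="true"
              disable-timeout-in-minutes="60"
              listeners="http://host1:8080"/>
    <instance name="instance2" enabled="true"
              disable-timeout-in-minutes="60"
              listeners="http://host2:8080"/>
    <web-module context-root="/myapp" enabled="true"
                disable-timeout-in-minutes="30"/>
    <health-checker url="/health" interval-in-seconds="30"
                    timeout-in-seconds="10"/>
  </cluster>
  <property name="response-timeout-in-seconds" value="60"/>
  <property name="reload-poll-interval-in-seconds" value="60"/>
  <property name="https-routing" value="false"/>
</loadbalancer>
```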
For more details on these commands, refer to the GlassFish 2.1.1 documentation.
In addition to support for the above commands, there will be the following changes with respect to GlassFish 2.1.1.
Additional changes for backward compatibility are as follows:
The load-balancer plugin needs to be installed on the web server, and the web server then needs to be configured. If configured correctly, the web server will use the plugin to handle requests. Installation and configuration are tedious and error prone if done manually. To ease this process, a tool will be provided to enable users to install and configure the load-balancer plugin on the web server.
In GlassFish 2.1.1, a tool called GlassFish LoadBalancer Configurator was developed to provide the above-mentioned capability. The same tool will be leveraged for GlassFish 3.1. It is an IzPack-based installer and requires Java to execute. It accepts user inputs and performs installation and configuration tasks based on them. The tool performs installation and configuration as a two-step process:
The tool also provides post-installation steps, if any. The user can also generate an automation script, which can be used for a silent install at a later point in time. It also provides an uninstall script, which can be used to remove the load-balancer configuration and plugin from the web server.
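Since the configurator is IzPack-based, its invocation presumably follows the standard IzPack pattern. The jar file name below is an illustrative assumption, not the official artifact name:

```shell
# Interactive install: launch the IzPack-based configurator with Java.
java -jar glassfish-lbconfigurator.jar

# Silent install: IzPack installers can record an automation script during
# an interactive run and replay it later non-interactively.
java -jar glassfish-lbconfigurator.jar auto-install.xml
```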
The tool will also provide support for upgrade. The user will be able to upgrade from GlassFish 2.x to GlassFish 3.1 using this tool. It will detect that a load-balancer configuration already exists on the web server and will only update the load-balancer plugin binary. In future, it can also be used to distribute new load-balancer plugin binaries containing bug fixes or new features.
The load balancer detects instance failure and fails over requests being serviced by that instance to another healthy instance, thus providing high availability. This newly selected instance is called the fail-over instance. In the current implementation of the load-balancer plugin, the selection of the fail-over instance is done using a round-robin algorithm. Since the round-robin algorithm is in general stateless, there is no preferred fail-over instance.
The session replication framework in GlassFish replicates a session to a partner instance to provide high availability of the session. The partner is known as the replica. In case of instance failure, the session can be restored on any instance from the replica. When a request is failed over to another instance, that instance needs to determine the replica from which to load the session. Session replication now uses a consistent hash algorithm to identify the replica. In case the identified replica does not hold the session, it resorts to a broadcast mechanism to identify the replica. This happens for all sessions being handled by the failed instance, resulting in a lot of network traffic and loss of throughput. However, if the load-balancer plugin makes an intelligent decision, based on information available in the request, it can route the request directly to the replica instance. The replica instance can then load the session from its local cache without needing the broadcast mechanism. This provides better performance and throughput even in case of instance failure.
Option 1 : A contract between session replication framework and load-balancer to identify the replica for a session
The information about the replica must be available in the incoming request to enable the load-balancer plugin to select that instance for handling session fail-over. This information can be present either as a cookie or as a parameter in the request URI. The load-balancer plugin depends on the web container to stamp session stickiness information on the response so that it is available in subsequent requests. It will now further depend on the web container to stamp replica information on the response as well.
One important point to note here is that the information about the instance currently handling the session is not stamped in clear text. The load-balancer plugin actually generates a unique identifier for each instance and uses that value instead of the clear-text instance name. Thus it will expect the stamped replica instance information to be that unique identifier, the same as the one generated by the load-balancer plugin for that instance. Due to this constraint, there are two approaches to implement this feature.
Approach 1 - Load-balancer plugin selects the replica partner : When the load balancer gets a new request (a request not belonging to any session), it selects an instance to service the request. It also selects another instance, using the same round-robin algorithm, to act as the replica partner. The replica instance name, both in clear text and as its unique identifier, is added to the request as proxy headers. The request is then forwarded to the GlassFish instance. The session replication framework can extract the replica instance name from the proxy headers and use it as the replica partner. The web container can also extract the unique identifier of the replica instance from the proxy header and stamp that information either as a cookie or as a parameter on the URI (URL rewriting). Upon instance failure, the load balancer selects the instance corresponding to the replica instance information available in the request to act as the fail-over instance. It also selects a new replica partner and adds it as proxy headers on the request. In this approach, a replica partner is selected for all new requests, even those that do not create any session.
Approach 2 - Session replication framework selects the replica partner : The load-balancer plugin does not modify a new request in any manner. If a session is created by the new request, the session replication framework selects an instance to act as the replica. This information is made available to the web container. The web container then generates a unique identifier for that instance using the same mechanism used by the load-balancer plugin. This requires the logic for generating the instance identifier to be duplicated in the web container as well, and it needs to be ensured that both implementations remain identical in the future. Upon instance failure, the load balancer selects the instance corresponding to the replica instance information available in the request to act as the fail-over instance. It performs a check on whether the identified replica instance is within the cluster boundary. This guards against malicious requests trying to move the session to another cluster, which would result in loss of the session. The session replication framework then selects a new replica partner.
Option 2 : Using consistent hash algorithm in both session replication framework and load-balancer
Using a consistent hash algorithm in both the load balancer and the session replication framework is another option for handling this scenario. There would be no contract between the load balancer and the session replication framework, and they could work independently of each other. However, both need to use an identical implementation of the consistent hash algorithm.
This approach was used in SailFin and can be used here as well. The load balancer uses the consistent hash algorithm to distribute incoming traffic. The consistent hash algorithm is stateless in nature and thus yields the same result for a given key. This implies that, for a given key, the same instance will be selected by the load balancer as well as by the session replication framework. The session replication framework will use the same algorithm to select the replica partner.
The main drawback of this approach is that the distribution will not be as fair as with the round-robin mechanism. However, in SailFin it provided close to round-robin distribution.
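The shared-algorithm idea behind Option 2 can be illustrated with a minimal hash ring. This is a sketch only: the instance names, the md5-based hashing, and the ring construction are illustrative assumptions, not the actual implementation shared by the plugin and the replication framework.

```shell
#!/bin/sh
# Sketch of Option 2: a deterministic hash ring maps a session key to the
# same instance no matter which component performs the computation.
# Instance names and md5-based hashing are assumptions for illustration.

hash_of() { printf '%s' "$1" | md5sum | cut -c1-8; }

# Place each instance on the ring at a position derived from its name.
ring=$(for i in instance1 instance2 instance3; do
         echo "$(hash_of "$i") $i"
       done | sort)

# Owner of a key: the first ring position at or after the key's hash,
# wrapping around to the first position if none is found.
owner_of() {
  kh=$(hash_of "$1")
  owner=$(echo "$ring" | awk -v k="$kh" '$1 >= k { print $2; exit }')
  [ -n "$owner" ] || owner=$(echo "$ring" | head -n 1 | awk '{ print $2 }')
  echo "$owner"
}

# Both sides computing owner_of for the same session key agree on the
# instance, so no broadcast lookup is needed after fail-over.
owner_of "session-abc123"
```

Because the mapping is a pure function of the key, the two components agree without any runtime contract; the trade-off, as noted above, is a less even distribution than round-robin.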
The platforms supported by the load-balancer plugin will be a subset of the platforms supported by GlassFish 3.1. It will continue to support the platforms already supported in GlassFish 2.1.1. As of now, there is no plan to support any new platform. Below is the list of supported platforms.
It will continue to support the web servers that were supported by the load-balancer plugin in GlassFish 2.1.1. Below is the list of supported web servers :
N.A.
All features described in this one pager are with-in scope of this document.
N.A.
| Interface | Comments |
|---|---|
| asadmin commands | Newly added asadmin commands to create the load-balancer view in domain.xml and also to generate the load-balancer xml |
| load-balancer xml | Load-balancer xml to be consumed by the load-balancer plugin |
| sun-loadbalancer_1_2.dtd | DTD for the load-balancer xml |
None
None
Load-balancer documentation will be part of the high availability guide. Existing documentation from GlassFish 2.1.1 can be reused. Man pages will be required for the admin commands (CLI as well as GUI). Additional documentation is needed for the preferred fail-over instance feature.
A set of new commands is being added to GlassFish 3.1 for creating the http load-balancer element in domain.xml and then generating the load-balancer xml based on it. These commands will be available in both the CLI and the GUI.
Load-balancer is a core feature of high-availability.
The i18n/l10n impact consists of making sure that the output from the new sub commands follows the patterns that are already established for administrative commands.
The load-balancer plugin will be packaged as a jar file generated using IzPack.
The IzPack-based bundle will be delivered as an add-on feature.
Since load-balancer is a standalone component outside GlassFish, there is no upgrade or migration requirement for it.
The load-balancer plugin is installed on the web server and thus utilizes the security framework of the web server without impacting it adversely.
This feature is compatible with older versions of GlassFish. All features except the newly introduced preferred fail-over instance feature will continue to work.
The GlassFish LoadBalancer Configurator is an IzPack-based installer and configurator; thus it has a dependency on IzPack.
The feature needs to be tested in a fashion similar to GlassFish 2.1.1. All functional test cases from GlassFish 2.1.1 need to be executed.
The GlassFish LoadBalancer Configurator provided to install and configure the load-balancer plugin needs to be tested exhaustively.
The preferred fail-over instance feature introduced in GlassFish 3.1 will require writing and executing a new set of test cases.
1. GlassFish 2.1.1 documentation on load-balancer plugin
2. GlassFish 2.1.1 documentation on load-balancer administration
3. GlassFish 2.1.1 documentation on load-balancer admin commands
4. White paper on GlassFish load-balancer
5. Blog on GlassFish load-balancer plugin
Item | Date/Milestone | Feature-ID | Description | QA/Docs Handover | Status / Comments |
---|---|---|---|---|---|
1 | MS1 | N.A. | Load-balancer one pager describing features and implementation details | No | |
2 | MS3 | LBREC-001 | Admin command to create load-balancer elements in domain xml | Yes | |
3 | MS3 | LBREC-002 | Admin command to generate load-balancer xml | Yes | |
4 | MS4 | LBREC-004 | Preferred fail-over instance | Yes | |
5 | MS5 | LBREC-003 | Installer support for Load-balancer plugin | Yes | |
6 | ? | LBREC-005 | Pushing load-balancer xml over the wire to web-server | Yes | This feature will be implemented if time permits. |