GlassFish Server Open Source Edition 3.1 - Rolling Upgrade

This page captures information for the rolling upgrade feature as part of the clustering support in GlassFish 3.1.


Introduction

Part of clustering in 3.1 is support for rolling upgrade of applications. The GlassFish 2 documentation describes the procedure, which is mostly manual. GlassFish 3.1 simplifies some of the manual steps.

The general flow of rolling upgrade in 3.1 is:

  • The administrator delivers a new version of an app - in a disabled state - to all instances in the cluster.
  • One instance at a time, the administrator brings the application to a quiescent state on that instance, then enables the new version of the application on that instance.

Role of Application Versioning

GlassFish 3.1 supports application versioning, which allows the administrator to deploy multiple versions of an application concurrently. At most one version of an app can be enabled on a given target at any given time.

App versioning will simplify rolling upgrade.

Suppose the cluster is currently running myApp:1.0 (version 1.0 of myApp). With app versioning support, the administrator can deploy the new version of the application in a disabled state, using a command like

asadmin deploy --enabled=false --target myCluster myApp:1.1

Clustering's dynamic deployment support will deliver the new version's bits - in a disabled state, as requested on the illustrated deploy command - to each instance to which the app is assigned, provided that instance is up at the time of deployment. Only the instances which are up at deployment time are relevant during a rolling app upgrade; instances which are down when the administrator deploys the new version will synchronize with the DAS when they restart. When deployment finishes, each active instance has the preceding version of the app - which remains live - and the files for the new version.
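
A quick sanity check at this point - assuming the existing list-applications and show-component-status subcommands accept a cluster target and a versioned application name - might look like:

asadmin list-applications myCluster
asadmin show-component-status --target myCluster myApp:1.1

The first command should list both myApp:1.0 and myApp:1.1 on the cluster; the second should report the new version as disabled.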

Remaining Rolling Upgrade Steps

To continue the rolling upgrade, for each live instance in the cluster which hosts the app, the administrator:

  1. Brings the app on that instance to a quiescent state, which includes diverting load balancer traffic away from the instance (as described in the GlassFish 2 documentation).
  2. Uses enable --target instanceID myApp:1.1 to simultaneously disable myApp:1.0 and enable myApp:1.1 (see the sketch after these steps).
  3. Restores load balancer traffic to the instance.
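
Putting those per-instance steps together, the second phase might look like the sketch below. The instance names and the shell loop are illustrative; only the enable command itself comes from this proposal, and the quiescing and load balancer steps are product-specific, shown only as placeholder comments.

for inst in instance1 instance2 instance3; do
    # 1. Quiesce the app on $inst and drain load balancer traffic away from it (LB-specific, not shown)
    # 2. Switch versions on this one instance: disables myApp:1.0 and enables myApp:1.1 there
    asadmin enable --target "$inst" myApp:1.1
    # 3. Restore load balancer traffic to $inst
done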

Key improvements over the 2.x scheme:

  • No server restarts are needed. Because GlassFish can deploy the new version in a disabled state without perturbing the preceding version which still runs, the administrator no longer needs to restart each instance to retrieve the new version's files.
  • Dynamic reconfiguration can be left on during this process, saving two steps (turning it off and turning it back on).

Questions/notes/issues:

  • GlassFish 2 required an administrator to redeploy an app to the same instances where it already resided. Normally for clusters this was natural because the admin would specify the same cluster name for the deployment and the redeployment, or, if --target was omitted from the redeployment, GlassFish automatically used the targets where the app was already deployed.
    Currently we have no similar restriction in the app versioning proposal. That is, we don't currently mandate that a new version of an app be deployed to the same target(s) where its predecessor is already assigned. I don't think we want to do that, because it seems like a valid use case to deploy one version of an app to one target and another version to another target (for testing, training, etc.).

Instead, GlassFish 3.1 could warn if the user deploys a subsequent version of an app to a different set of targets from the set where it is already deployed. That would create "noise" in the example above - where the admin wants one version on one target and another version on another target - but the example would be allowed. And in the case where the administrator intended to hit the same targets but did not, the warning would alert him or her to that.
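
To make the multi-target use case above concrete: with myApp:1.1 deployed to myCluster, the administrator might also run the following hypothetical command (testCluster and the 2.0-beta version string are made up for illustration):

asadmin deploy --target testCluster myApp:2.0-beta

Under the warning scheme described here, that deployment would succeed but would trigger the mismatched-target warning, which in this case the administrator would simply ignore.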

  • For this scheme to work, administrators must be able to enable/disable application versions selectively on the individual instances in a cluster, as opposed to on the cluster as a whole. This is a departure from most commands – such as deployment – which do not allow the administrator to operate on one instance in a cluster individually.

Some Implementation Notes

  • With the new ability to enable or disable an app individually on a cluster instance, the enabled state of the app ref at the cluster level becomes slightly ambiguous and must be managed differently. Specifically, the cluster-level state of an app ref will be enabled if at least one instance-level app ref is enabled; it will be disabled only if all instance-level states are disabled. Internally, then, when the enable command is applied to a cluster instance, its logic will need to check the states of the corresponding app ref on the other instances in the cluster and, if needed, update the cluster-level app ref state accordingly.
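
    As a behavioral illustration of that rule, consider a cluster myCluster with two instances, instance1 and instance2, both currently running myApp:1.1 in the enabled state. The instance names are hypothetical, and the commands rely on the per-instance enable/disable capability proposed above:

    asadmin disable --target instance1 myApp:1.1   # cluster-level ref stays enabled (instance2's ref is still enabled)
    asadmin disable --target instance2 myApp:1.1   # last instance-level ref now disabled, so the cluster-level ref becomes disabled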