Hudson Plugin for GlassFish Server Open Source Edition 3.1 - One Pager

Draft Version 0.06 (under construction)

1. Introduction

1.1. Project/Component Working Name:

Hudson GlassFish Plugin

1.2. Name(s) and e-mail address of Document Author(s)/Supplier:

Harshad Vilekar : hvilekar@java.net

1.3. Date of This Document:

...

2. Project Summary

2.1. Project Description:

Hudson Plugin for GlassFish Cluster.

Creates a multi-node GlassFish cluster on top of a Hudson cluster. Provides hooks for user tasks and Ant tasks.

2.2. Risks and Assumptions:

To be added.

3. Problem Summary

3.1. Problem Area:

This work provides an easy way to configure and deploy a multi-node GlassFish cluster on top of a Hudson cluster.

3.2. Justification:

  • Ease of development: This is part of the Dev Test Framework.
  • Ease of use: Simple GUI to configure a multi-node cluster.
  • Overall improvement in quality: Helps test GlassFish clustering functionality on the Hudson continuous integration server.

4. Technical Description:

4.1. Details:

Feature list:

  • Multi-Node GlassFish Cluster
  • GlassFish Installation Support
  • Cluster Life Cycle Support: Create / Start / Stop the Cluster
  • The plugin is accessible to user jobs via the "Add Build Step – GlassFish Cluster" menu.

  • Cluster Creation Use Cases: (Number of Instances = I,   Number of Nodes = N)
    • Single Node Cluster:
      • N = 1, I >= N
    • Multi Node Cluster, One Instance per Node:
      • N > 1, I = N
    • Multi Node Cluster, Multiple Instances per Node:
      • N > 1, I > N (for example, N = 3 nodes and I = 6 instances gives two instances per node)

  • Collection and Archival of Server Logs:
    • In the job configuration, select "Archive the Artifacts" and specify "logs/**".
  • Dynamic Port Allocation:
    • The user specifies a preferred base port.
    • Ports for GlassFish instances are dynamically reassigned in case of a port conflict (see the sketch after this list).
    • Note: There is no dynamic port allocation for the DAS.
  • Hooks for Ant Scripts / User Tasks:
    • Cluster information is passed to the scripts using a cluster config file (see the example after this list).
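
The following is a minimal, illustrative sketch of the port reassignment idea only. The real plugin lists the Port Allocator plugin as a setup prerequisite, so actual port management may differ; the class name, method name, and port values below are hypothetical. The sketch starts at the user-specified base port and probes upward until a free port is found.

    import java.io.IOException;
    import java.net.ServerSocket;

    // Illustrative sketch only: probe for a free port, starting at the
    // user-specified preferred base port, as done for GlassFish instance
    // ports when the preferred port is already in use.
    public class PortProbe {

        // Returns the first free port at or above basePort, trying up to
        // maxAttempts consecutive ports.
        public static int findFreePort(int basePort, int maxAttempts) {
            for (int port = basePort; port < basePort + maxAttempts; port++) {
                try (ServerSocket socket = new ServerSocket(port)) {
                    return port;        // bind succeeded, so the port is free
                } catch (IOException e) {
                    // port is in use, try the next one
                }
            }
            throw new IllegalStateException("No free port found at or above " + basePort);
        }

        public static void main(String[] args) {
            // Example: preferred base port 28080 (hypothetical value).
            System.out.println("Allocated port: " + findFreePort(28080, 100));
        }
    }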
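
The exact format of the cluster config file is not specified in this document. As an illustration only, assume it is a plain Java properties file written into the job workspace; the file name (cluster.props) and the property keys below are hypothetical, and the real names are whatever the plugin defines.

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Illustrative sketch only: a user task reading the cluster config file
    // to locate the DAS and the clustered instances. The file name and the
    // property keys are hypothetical.
    public class ClusterConfigReader {

        public static void main(String[] args) throws IOException {
            Properties cluster = new Properties();
            try (FileInputStream in = new FileInputStream("cluster.props")) {
                cluster.load(in);
            }
            // Hypothetical keys: DAS host, cluster name, and the first
            // instance's HTTP listener address.
            System.out.println("DAS host:   " + cluster.getProperty("GF_DAS_HOST"));
            System.out.println("Cluster:    " + cluster.getProperty("GF_CLUSTER_NAME"));
            System.out.println("Instance 1: " + cluster.getProperty("GF_INSTANCE1_HTTP_URL"));
        }
    }

An Ant script could consume the same file directly, for example with a <property file="cluster.props"/> task.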

How the Plugin Works

  • Some of the Hudson nodes are labeled "glassfish-cluster". These nodes serve as the plugin's "subslaves".
  • When the build is launched (the flow is sketched below):
    • The plugin finds "N" available subslaves.
    • Installs GlassFish on each node.
    • Starts the DAS (Domain Administration Server) on the "build" node.
    • Starts one clustered GlassFish instance on each subslave (this step is optional).
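
The sketch below shows, for illustration only, how these steps map onto a Hudson build step. Only the Builder extension point (hudson.tasks.Builder and its perform method) is standard Hudson plugin API; the class name, fields, and the commented helper steps are hypothetical and do not describe the plugin's actual source.

    import hudson.Launcher;
    import hudson.model.AbstractBuild;
    import hudson.model.BuildListener;
    import hudson.tasks.Builder;

    // Hypothetical sketch of the "GlassFish Cluster" build step.
    public class GlassFishClusterBuilder extends Builder {

        private final int numNodes;      // N: number of subslaves to use
        private final int numInstances;  // I: number of clustered instances

        public GlassFishClusterBuilder(int numNodes, int numInstances) {
            this.numNodes = numNodes;
            this.numInstances = numInstances;
        }

        @Override
        public boolean perform(AbstractBuild<?, ?> build, Launcher launcher, BuildListener listener) {
            listener.getLogger().println(
                    "Looking for " + numNodes + " online nodes labeled glassfish-cluster");

            // 1. Find N online subslaves labeled "glassfish-cluster"
            //    (bypassing Hudson's own node allocation; see Limitations / Known Issues).
            // 2. Install GlassFish on each selected node.
            // 3. Start the DAS on the build node ("asadmin start-domain").
            // 4. Create the cluster and its I instances
            //    ("asadmin create-cluster", "asadmin create-instance ...").
            // 5. Optionally start one clustered instance per subslave
            //    ("asadmin start-instance" / "asadmin start-cluster").
            // 6. Write the cluster config file into the workspace for
            //    downstream Ant scripts and user tasks.
            return true;   // returning false would mark the build as failed
        }
    }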

Limitations / Known Issues:

  • Hudson's core node allocation logic is bypassed when assigning the subslaves. This has various side effects. A fix is planned for Phase 3 (date TBD).
    • A build for an "N"-instance multi-node cluster fails if N subslaves are not available.
    • If two or more multi-node jobs are running at the same time, they might use the same subslave at the same time.
  • The Hudson master can't be used as a build node, due to a port conflict with the DAS.

Plugin Setup (Hudson Admin)

  • Upload the GlassFish plugin: glassfish.hpi.
  • Install the Port Allocator plugin, version 1.5.
  • Restart Hudson.
  • Add the label "glassfish-cluster" to the selected Hudson nodes; this marks a node as a plugin "subslave". Make sure these nodes are online.
  • Set "Number of Executors" on the master node to "0".

4.2. Bug/RFE Number(s):

N. A.

4.3. In Scope:

4.4. Out of Scope:

4.5. Interfaces:

4.5.1 Public Interfaces

4.5.2 Private Interfaces

N/A.

4.5.3 Deprecated/Removed Interfaces:

NONE.

4.6. Doc Impact:

NONE.

4.7. Admin/Config Impact:

N. A.

4.8. HA Impact:

N. A.

4.9. I18N/L10N Impact:

NONE.

4.10. Packaging, Delivery & Upgrade:

4.10.1. Packaging

The plugin will be packaged as a Hudson .hpi file: glassfish.hpi

4.10.2. Delivery

Internal: Hosted on an internal server (link to be added).

External: Hudson Plugin Repository (details TBD)

4.10.3. Upgrade and Migration:

N/A

4.11. Security Impact:

Each node on the Hudson cluster will be granted SSH access to every other node on that cluster.

4.12. Compatibility Impact

N. A.

4.13. Dependencies:

4.13.1 Internal Dependencies

NONE.

4.13.2 External Dependencies

  • OpenSSH on all platforms
  • Cygwin on Windows

4.14. Testing Impact:

NONE.

5. References / Reference Documents:

6. Schedule:

6.1. Projected Availability:

Review