v0.5

GlassFish 3.1 supports provisioning using SSH. This page describes basic steps for using this feature.

On the Windows platform:

Supported Software:

  • Windows 7
  • Windows XP Professional
  • Windows 2003
  • CYGWIN_NT-6.1
  • CYGWIN_NT-5.6p1
  • MKS 9.2
  • JDK 6_18 or higher
  • GlassFish 3.1 Build 26 or later

Cygwin:

Steps:

1) Install Cygwin per the product installation instructions on all hosts that are part of the cluster. We have tested using the default installation options. Update your Windows PATH to include the Cygwin bin directory (C:\cygwin\bin for example) so that the GlassFish tools can find the ssh-related commands.
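As a quick sanity check (a hypothetical helper, not part of GlassFish), you can verify from a Cygwin shell that the ssh-related commands are reachable before going further:

```shell
#!/bin/sh
# Report whether each ssh tool that SSH provisioning relies on is
# reachable on PATH. "MISSING" usually means the Cygwin bin directory
# has not been added to the Windows PATH yet.
for tool in ssh scp ssh-keygen; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If any tool reports MISSING, fix the PATH before running setup-ssh.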

2) Install Java and make sure its bin directory is on your path: both your Windows PATH and your $PATH when in a Cygwin shell.

3) Download and install the GlassFish bundle using the installer on the host where you want to run the GlassFish Domain Admin Server (DAS). Note the location of the installed product. Build 26 or later is recommended.

4) Start the GlassFish server in a command prompt window. Go to the bin directory of the GlassFish installation and run the command:
C:> asadmin start-domain

5) Set up SSH for communication between the current host (host1) and a remote host (host2) using the setup-ssh command. Note: If the command fails stating that ssh-keygen failed, you may need to first create a directory named ".ssh" in your Windows home directory (issue 13985).
C:> asadmin setup-ssh --generatekey host2

At this point, a directory called .ssh has been created in the user's Windows home directory on host1 and it contains the key files. A directory called .ssh has also been created on host2 in the user's cygwin home directory and it contains an authorized_keys file.
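To confirm the key files landed where expected on host1, a small helper like the following can be run in the Cygwin shell. This is a sketch, not part of GlassFish, and it assumes the default RSA key file names that ssh-keygen produces:

```shell
#!/bin/sh
# check_ssh_dir: report whether a directory contains the default RSA
# key pair that ssh-keygen generates (id_rsa / id_rsa.pub).
check_ssh_dir() {
  dir="$1"
  if [ -f "$dir/id_rsa" ] && [ -f "$dir/id_rsa.pub" ]; then
    echo "keys present in $dir"
  else
    echo "keys missing in $dir"
  fi
}

check_ssh_dir "$HOME/.ssh"
```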

6) Install GlassFish on host2 using the install-node command. This command looks in the .ssh directory created above for the keys needed to copy the GlassFish image to host2. It generates a zip file based on the current installation, copies it to host2, and unzips the image under the installation directory given by the --installdir option. Note: The installdir on the remote host must be specified as a Unix-style path because the remote commands run in the Cygwin shell at that time; see issue 13998.
C:> asadmin install-node --installdir /cygdrive/c/gf-install host2
If the installation directory (gf-install in this example) already exists and is writable, then the native Windows path can be used like this:
C:> asadmin install-node --installdir 'c:\gf-install' host2
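The two path forms are related by the same translation that Cygwin's cygpath -u performs. A rough sketch of that mapping (handling only simple drive-letter paths; GNU sed assumed for the \L case conversion):

```shell
#!/bin/sh
# win_to_unix: convert a simple Windows path (drive letter plus
# backslashes) to the /cygdrive form Cygwin uses. A sketch only;
# cygpath -u is the real tool and also handles UNC and relative paths.
win_to_unix() {
  printf '%s\n' "$1" |
    sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/cygdrive/\L\1|'
}

win_to_unix 'c:\gf-install'    # /cygdrive/c/gf-install
```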

7) We are now ready to create a cluster and instances. The first step is to create a node that describes the host where each instance will be created. A default node for localhost already exists in GlassFish for instances on the same system as the DAS; a second node must be created for the remote host. Then we create the cluster and the instances that belong to it. Here we create a cluster with two instances: in1 on host2 and in2 on the localhost (host1).
C:> asadmin create-node-ssh --nodehost host2 --installdir /cygdrive/c/gf-install/glassfish3/glassfish n1
C:> asadmin create-cluster c1
C:> asadmin create-instance --cluster c1 --node n1 in1
C:> asadmin create-instance --cluster c1 --node localhost in2
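For larger clusters, the create-instance calls above follow a simple pattern. The sketch below just prints the asadmin invocations for review rather than executing them; the instance count of three and the single node n1 are assumptions for illustration:

```shell
#!/bin/sh
# Print (not run) the asadmin commands that would create three
# instances on node n1 in cluster c1, following the in1, in2, ...
# naming pattern used above.
cluster=c1
node=n1
i=1
while [ "$i" -le 3 ]; do
  echo "asadmin create-instance --cluster $cluster --node $node in$i"
  i=$((i + 1))
done
```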

8) Starting the instances in the cluster can be done with one command. All information about the instances and their location is in the configuration file.
C:> asadmin start-cluster

MKS:

1) Install MKS per the product installation instructions on all hosts that are part of the cluster. We have tested using the default installation options. Update your Windows PATH to include the MKS bin directory (C:\Program Files\MKS Toolkit\mksnt for example) so that the GlassFish tools can find the ssh-related commands. The test machines did not have any patches installed.

2) Install Java and make sure its bin directory is on your path: both your Windows PATH and your $PATH when in the MKS shell.

3) Download and install the GlassFish bundle using the installer on the host where you want to run the GlassFish Domain Admin Server (DAS). Note the location of the installed product. Build 26 or later is recommended.

4) Start the GlassFish server in a command prompt window. Go to the bin directory of the GlassFish installation and run the command:
C:> asadmin start-domain

5) Set up SSH for communication between the current host (host1) and a remote host (host2) using the setup-ssh command. Note: If the command fails stating that ssh-keygen failed, you may need to first create a directory named ".ssh" in your Windows home directory.
C:> asadmin setup-ssh --generatekey host2

At this point, a directory called .ssh has been created in the user's Windows home directory on host1 and it contains the key files. A directory called .ssh has also been created on host2 in the user's home directory and it contains an authorized_keys file.

6) Install GlassFish on host2 using the install-node command. This command looks in the .ssh directory created above for the keys needed to copy the GlassFish image to host2. It generates a zip file based on the current installation, copies it to host2, and unzips the image under the installation directory given by the --installdir option. The installdir may be specified using either Unix-style or Windows-style paths.

C:> asadmin install-node --installdir c:\myglassfish host2

7) We are now ready to create a cluster and instances. The first step is to create a node that describes the host where each instance will be created. A default node for localhost already exists in GlassFish for instances on the same system as the DAS; a second node must be created for the remote host. Then we create the cluster and the instances that belong to it. Here we create a cluster with two instances: in1 on host2 and in2 on the localhost (host1).
C:> asadmin create-node-ssh --nodehost host2 n1
C:> asadmin create-cluster c1
C:> asadmin create-instance --cluster c1 --node n1 in1
C:> asadmin create-instance --cluster c1 --node localhost in2

8) Starting the instances in the cluster can be done with one command. All information about the instances and their location is in the configuration file.
C:> asadmin start-cluster

Summary of testing various values of installdir on the Windows platform.
In all cases the directory on the remote machine was empty even if it existed.

In the Cygwin tests I was running as user cmott; in the MKS tests I was running as user Administrator.

Error messages displayed during testing:

  1. IOException: Error during SCP Transfer

The following table summarizes the results when the install-node command leaves installdir as the user specified it.

  installdir    Cygwin                            MKS

  Windows path, remote directory does NOT exist:
  c:\mygf3      Failed with error #1, see Note 1
  \mygf3        Failed with error #1, see Note 2

  Windows path, remote directory does exist:
  c:\mygf3      Failed with error #1
  \mygf3        Failed with error #1, see Note 3

  Unix path, remote directory does exist:
  /mygf3        PASS                              PASS

Note 1: The directory c:\mygf3\tmp was NOT created on the remote machine.
Note 2: The directory \mygf3\tmp was created on the remote machine.
Note 3: The directories had 'rwx' permissions for all.

The following table summarizes the results when the install-node command converted installdir to always use Unix-style path separators.

  installdir    Cygwin    MKS

  Windows path, remote directory does NOT exist:
  c:\mygf3      PASS**    PASS*
  \mygf3        PASS*     PASS*

  Windows path, remote directory does exist:
  c:\mygf3      PASS      PASS*
  \mygf3        PASS      PASS*

  Unix path, remote directory does exist:
  /mygf3        PASS      PASS

* mygf3 was put under \ dir.

** mygf3 was put under \cygdrive\c

Additional info: I didn't use my home directory in any of the tests because my home directory on the remote host contains a space, and there are known issues with spaces in path names.