2. Create Cluster

We will need to add support for the create-cluster and delete-cluster commands. create-cluster will create the necessary configuration elements in domain.xml for a cluster. If the --config option is not specified during create-cluster, a copy of the default-config is created. The deploy command needs to support a --target option, which adds the necessary application-ref for a target (cluster or stand-alone instance). For example,

<cluster config-ref="cluster1-config" ...>
<server-ref disable-timeout-in-minutes="30" enabled="true"
lb-enabled="false" ref="instance1"/>
<server-ref disable-timeout-in-minutes="30" enabled="true"
lb-enabled="false" ref="instance2"/>
<application-ref disable-timeout-in-minutes="30" enabled="true"
lb-enabled="false" ref="applicationFoo"/>
...
</cluster>
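From the command line, the flow above might look like the following (a sketch based on this proposal; the exact option spellings are assumptions):

```shell
# Create a cluster; with no --config option, a copy of default-config is made
asadmin create-cluster cluster1

# Or create a cluster that references an existing named configuration
asadmin create-cluster --config cluster1-config cluster1

# Deploy an application to the cluster; this adds an <application-ref>
# under the <cluster> element, as in the snippet above
asadmin deploy --target cluster1 applicationFoo.war

# Remove the cluster and its configuration elements from domain.xml
asadmin delete-cluster cluster1
```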
2.1 Token Support

create-cluster has a --systemproperties option that allows the user to define the necessary tokens. For example,

<config name="cluster1-config" ...>
...
<http-listener acceptor-threads="1" address="0.0.0.0" blocking-enabled="false"
default-virtual-server="server" enabled="true" family="inet" id="http-listener-1"
port="${HTTP_LISTENER_PORT}" security-enabled="false" server-name=""
type="default" xpowered-by="true">
<!-- defines http port value for everyone using the config -->
<system-property name="HTTP_LISTENER_PORT" value="8080"/>
...
</config>
<cluster config-ref="cluster1-config" ...>
<!-- overrides http port value at config level -->
<system-property name="HTTP_LISTENER_PORT" value="38080"/>
</cluster>
<server config-ref="cluster1-config" ....>
<!-- overrides http port value at cluster level for this server-->
<system-property name="HTTP_LISTENER_PORT" value="38181"/>
</server>
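The override chain shown above (server overrides cluster, cluster overrides config) could be set up with commands along these lines (a sketch; create-system-properties is assumed here as the mechanism for setting a system-property on an existing target):

```shell
# Define the token at cluster level when the cluster is created;
# this overrides the config-level value of 8080
asadmin create-cluster --systemproperties HTTP_LISTENER_PORT=38080 cluster1

# Override the token for a single instance; the server-level value (38181)
# wins over the cluster-level (38080) and config-level (8080) values
asadmin create-system-properties --target instance1 HTTP_LISTENER_PORT=38181
```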
XXX Config - We will need a mechanism to get the un-processed token value for administration in the GUI and CLI (current v3 implementation).

2.2 Support for Port Conflicts

Refer to this write-up for more details on existing GlassFish v2.x behavior.

2.3 Manual Synchronization

|---- config (config files common to all servers including the DAS)
|       |---- domain.xml, etc.
|       |---- <server/cluster-name>-config (cluster/server-specific data)
|---- applications
|       |---- <application-name>
|       |---- java-web-start
|               |---- <application-name>
|---- generated
|       |---- ejb
|       |       |---- <application-name>
|       |---- jsp
|       |       |---- <application-name>
|       |---- policy
|       |       |---- <application-name>
|       |---- xml
|       |       |---- <application-name>
|---- lib (libraries common to all servers including the DAS)
|---- docroot (the default web-container docroot, files are copied to instance's docroot)
XXX We may include only deployed applications for a target as an optimization.

For example, on the DAS machine the user will do the following:

% asadmin generate-sync-bundle --target <cluster1> </tmp/cluster1.zip>

The user will FTP the newly created bundle zip from the DAS server to the remote instance machine(s) and apply its content using a local command. For example,

% asadmin apply-sync-bundle --target <instance1> <cluster1.zip>
% asadmin apply-sync-bundle --target <instance2> <cluster1.zip>

Use Case Scenarios
--> Bundle contains a subset of the directories under domains/domain1:
    - config (all files)
        - domain.xml, etc.
        - c1-config (c1 specific)
    - applications (c1 specific)
        - hello
    - generated (c1 specific)
        - jsp
            - hello
    - lib (all files)
    - docroot (all files)
--> Unzip c1.zip under nodeagents/remote/i1

Technical Requirements
2.4 Ref Support

Note: In this project, it is sufficient to create the application-ref or resource-ref in domain.xml and ensure that the associated applications/resources are loaded during server startup. The dynamic re-config or hot deployment project will deal with deploying the resource/application to the target server instance(s) dynamically.
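For example, a ref could be created explicitly for a target without redeploying the bits (a sketch; create-application-ref is assumed to accept the same --target option as deploy):

```shell
# Add an <application-ref> for an already-deployed application
# to a stand-alone instance's <server> element in domain.xml
asadmin create-application-ref --target instance1 applicationFoo

# Remove the reference again
asadmin delete-application-ref --target instance1 applicationFoo
```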