This page provides links to review drafts of new and changed documentation for the Cluster Infrastructure project as listed in the Cluster Infrastructure Documentation Plan.

Mandatory reviewers for each item are listed in each section.

Changes to existing documentation since the last release are marked with change bars. No changes are marked in new documentation.

Please provide your feedback by adding a comment to this page. To simplify the processing of your comments, add them in the format shown in the sample comment. Review existing comments to see known issues and avoid duplicates.

Changes to Books

Administration Guide Changes


Note - Comments are now closed for the Administration Guide. To suggest an improvement or report an error, file a GlassFish issue against the docs subcomponent, citing the relevant section.


Section | Documentation Impact | Reviewers | Status
"About Administering Domains" in Chapter 3, "Administering Domains" | Major | Tom Mueller, Bill Shannon | Review comments incorporated.
Configuring a DAS or a GlassFish Server Instance for Automatic Restart | Major | Byron Nevins, Alex Pineda | Review comments incorporated.

High Availability Administration Guide Changes


Note - Comments are now closed for the High Availability Administration Guide. To suggest an improvement or report an error, file a GlassFish issue against the docs subcomponent, citing the relevant section.


Section | Documentation Impact | Reviewers | Status
Chapter 1, High Availability in GlassFish Server | Moderate | Tom Mueller, Bill Shannon | Reviewed.
Chapter 3, Administering GlassFish Server Clusters | New (previous version completely rewritten) | Tom Mueller, Bill Shannon, Joe Fialli, Bhakti Mehta | Review comments incorporated.
Introduction and instance-lifecycle tasks in Chapter 4, Administering GlassFish Server Instances | New (previous version completely rewritten) | Joe Di Pol, Carla Mott, Byron Nevins, Jennifer Chou | Review comments incorporated.
Synchronizing GlassFish Server Instances and the DAS in Chapter 4, Administering GlassFish Server Instances | New | Tom Mueller, Bill Shannon, Byron Nevins, Jennifer Chou | Review comments incorporated.
Chapter 5, Administering Named Configurations | New (previous version completely rewritten) | Bhakti Mehta, Tom Mueller, Bill Shannon | Review comments incorporated.
Chapter 7, Upgrading Applications Without Loss of Availability | Minor | Hong Zhang, Tim Quinn, Yamini KB, Vijay Ramachandran | Review comments incorporated.

Deployment Planning Guide Changes

Section | Documentation Impact | Reviewers | Status
Chapter 1, "Product Concepts" | Minor | Tom Mueller, Bill Shannon, Bhakti Mehta |
"HTTP Load Balancer Plug-in" in Chapter 1, "Product Concepts" | Minor | Kshitiz Saxena |
"Session Persistence" in Chapter 1, "Product Concepts" | Minor | Mahesh Kannan |
"IIOP Load Balancing in a Cluster" in Chapter 1, "Product Concepts" | Minor | Kshitiz Saxena, Tim Quinn |
"Message Queue and JMS Resources" in Chapter 1, "Product Concepts" | Minor | Satish Kumar, Amy Kang, Nigel Deakin |
Chapter 2, "Planning your Deployment" | Minor | Tom Mueller, Bill Shannon, Bhakti Mehta |
"Planning for Availability" in Chapter 2, "Planning your Deployment" | Minor | Mahesh Kannan |
"Planning Message Queue Broker Deployment" in Chapter 2, "Planning your Deployment" | Minor | Satish Kumar, Amy Kang, Nigel Deakin |
Chapter 3, "Checklist for Deployment" | Minor | Tom Mueller, Bill Shannon, Bhakti Mehta |

A PDF file of the latest Deployment Planning Guide, with change bars, is attached to this page as SJSASEEDPG.pdf.

Upgrade Guide Changes

Section | Documentation Impact | Reviewers | Status
Deprecated and Unsupported Options | Moderate | Bhakti Mehta, Bill Shannon, Tom Mueller | Review comments incorporated.

Domain File Format Reference Changes

Section | Documentation Impact | Reviewers | Status
Element Hierarchy | Minor | Byron Nevins, Jennifer Chou | Canceled
application-ref | Minor | Byron Nevins, Jennifer Chou | Canceled
cluster | Minor | Byron Nevins, Jennifer Chou | Canceled
clusters | Minor | Byron Nevins, Jennifer Chou | Canceled
configs | Minor | Byron Nevins, Jennifer Chou | Canceled
health-checker | Minor | Byron Nevins, Jennifer Chou | Canceled
resource-ref | Minor | Byron Nevins, Jennifer Chou | Canceled
server-ref | Minor | Byron Nevins, Jennifer Chou | Canceled
servers | Minor | Byron Nevins, Jennifer Chou | Canceled
system-property | Minor | Byron Nevins, Jennifer Chou | Canceled

Add-On Component Development Guide Changes

Section | Documentation Impact | Reviewers | Status
Chapter 4, Extending the asadmin Utility | Moderate | Vijay Ramachandran |

Changes to Man Pages


Note - Comments are now closed for man pages. To suggest an improvement to a man page or report an error in a man page, file a GlassFish issue against the docs subcomponent, citing the man page.


Man Page Name and Section | Documentation Impact | Reviewers | Status
add-resources(1) | Minor | Jagadish Ramu | Review comments incorporated.
create-cluster(1) | Minor | Bhakti Mehta | Review comments incorporated.
create-domain(1) | Minor | Tom Mueller | Review comments incorporated.
create-instance(1) | Minor | Carla Mott | Review comments incorporated.
create-local-instance(1) | New | Jennifer Chou | Review comments incorporated.
create-service(1) | Moderate | Byron Nevins | Review comments incorporated.
copy-config(1) | Minor | Bhakti Mehta | Review comments incorporated.
delete-cluster(1) | Minor | Bhakti Mehta | Review comments incorporated.
delete-instance(1) | Minor | Byron Nevins | Review comments incorporated.
delete-local-instance(1) | New | Byron Nevins | Review comments incorporated.
delete-config(1) | Minor | Bhakti Mehta | Review comments incorporated.
export-sync-bundle(1) | New | Jennifer Chou | Review comments incorporated.
import-sync-bundle(1) | New | Jennifer Chou | Review comments incorporated.
list-configs(1) | Minor | Bhakti Mehta | Review comments incorporated.
list-clusters(1) | Minor | Bhakti Mehta | Review comments incorporated.
list-instances(1) | Minor | Byron Nevins | Review comments incorporated.
list-jndi-entries(1) | Minor | Jagadish Ramu, Cheng Fang | Review comments incorporated.
restart-domain(1) | Minor | Byron Nevins | Review comments incorporated.
restart-local-instance(1) | New | Byron Nevins | Review comments incorporated.
restart-instance(1) | New | Byron Nevins | Review comments incorporated.
start-cluster(1) | Minor | Joe Di Pol | Review comments incorporated.
start-instance(1) | Minor | Carla Mott | Review comments incorporated.
start-local-instance(1) | New | Byron Nevins | Review comments incorporated.
stop-cluster(1) | Minor | Joe Di Pol | Review comments incorporated.
stop-instance(1) | Minor | Byron Nevins | Review comments incorporated.
stop-local-instance(1) | New | Byron Nevins | Review comments incorporated.

Attachments:
about-das-instances.pdf
add-resources.1.pdf
clusters.pdf
configurations.pdf
copy-config.1.pdf
create-cluster.1.pdf
create-domain.1.pdf
create-instance.1.pdf
create-local-instance.1.pdf
create-service.1.pdf
create-service-tasks.pdf
delete-cluster.1.pdf
delete-config.1.pdf
delete-instance.1.pdf
delete-local-instance.1.pdf
export-sync-bundle.1.pdf
ha-intro.pdf
import-sync-bundle.1.pdf
instance-lifecycle.pdf
instance-resync.pdf
list-clusters.1.pdf
list-configs.1.pdf
list-instances.1.pdf
list-jndi-entries.1.pdf
list-nodes.1.pdf
obsolete-options.pdf
restart-domain.1.pdf
restart-instance.1.pdf
restart-local-instance.1.pdf
rolling-upgrade.pdf
SJSASEEDPG.pdf
start-cluster.1.pdf
start-instance.1.pdf
start-local-instance.1.pdf
stop-cluster.1.pdf
stop-instance.1.pdf
stop-local-instance.1.pdf
Comment ID | Location | Comment
PMD-001 | create-cluster(1) man page | Sample comment. Should provide a proposed fix and correct content if applicable.
PMD-002 | HA Admin Guide, Chapter 6, To Create a Cluster | Another sample comment.
Posted by pauldavies at Oct 25, 2010 17:43
Comment ID | Location | Comment
trm-1 | add-resources | The difference between server and domain should be explained. AFAIK, all resources are created at the domain level, but with server, cluster, and instance, resource-refs are also created.
trm-2 | add-resources | The operand section should clarify that if it is only a file, the file must be on the DAS even if asadmin is being run elsewhere.
trm-3 | add-resources | Does the DTD have a different name now, without "sun"?
trm-4 | general | When saying that an instance name can be passed to --target, should it clarify that the instance has to be a standalone instance, i.e., not part of a cluster?
trm-5 | create-cluster | Typo - missing "is" in "that used for GMS".
trm-6 | create-cluster | Should the man page explain the limitations on a cluster name, i.e., what special characters are allowed, etc.?
trm-7 | general | Should the options that are only there for compatibility and are ignored show up in the usage message in the man page? The usage message generated by asadmin doesn't show these options. To me, this makes the manual page cluttered.
trm-8 | create-cluster, copy-config | There are some additional system properties that should be listed here: OSGI_SHELL_TELNET_PORT, JAVA_DEBUGGER_PORT, ASADMIN_LISTENER_PORT.
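For illustration, a hedged sketch of how the system properties trm-8 mentions could be assigned when a cluster is created; the cluster name and port values are hypothetical, not product defaults:

  asadmin create-cluster --systemproperties OSGI_SHELL_TELNET_PORT=26666:JAVA_DEBUGGER_PORT=29009:ASADMIN_LISTENER_PORT=24848 c1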
trm-9 | create-domain, Description | WRT domain customizers, these have to be found in the JAR files in the GlassFish installation, not in the domain.xml. At the time a domain is created, the domain.xml file doesn't exist. The customizers are classes in JAR files in the modules directory that implement the DomainInitializer interface.
trm-10 | create-domain, --portbase | There are other ports assigned too. Is this just some samples, or is it intended to be the complete list?
trm-11 | create-domain, --template | Missing "/" after as-install. Also, should there be a pointer here to more details about creating domain templates? For example, info on how variable substitution works in the template.
trm-12 | create-domain, --domainproperties | Is the debug port missing from this list? How about the HTTP port?
trm-13 | create-domain, --savelogin | Typo: "admit". Also in the --checkports option.
trm-14 | create-domain operand, copy-config | Should restrictions on what can be in the name be specified?
trm-15 | create-instance, create-local-instance | "JVM machine" is redundant.
trm-16 | create-instance description, delete-instance | The command requires SSH only if the specific node is an SSH node. If the node is a config node that refers to the local host, then the command will run fine without SSH.
trm-17 | general | <ranton>The word "machine" is used throughout where "computer" or "server" or "host" would do. Why do we use the word machine to refer to computers?</rantoff>
trm-18 | create-instance, create-local-instance | See comment trm-10, which applies to this command too.
trm-19 | create-instance --system-properties, also in create-local-instance | There are two more ports: OSGI_SHELL_TELNET_PORT, JAVA_DEBUGGER_PORT.
trm-20 | create-instance, create-local-instance | See comment trm-14.
trm-21 | create-local-instance | In the first bullet in the description, "where nodes are stored" should be "where information related to nodes is stored" (or something like that).
trm-22 | create-instance, create-local-instance | The assignment of ports is not actually "random", but the algorithm is probably not worth describing here, since no one should depend on it. Maybe rather than random, use "based on an internal algorithm".
trm-23 | create-local-instance | Be aware of issue 13963, which questions whether there should be a --bootstrap option.
trm-24 | create-local-instance | The output from the command has changed since you took the copy for the manual page. It is less verbose.
trm-25 | create-service, description | WRT "default domain", if no arguments are specified, I suspect that this works only if there is one domain. If there are multiple domains it is probably an error.
trm-26 | create-service, Linux behavior | On Linux, WRT "installs a link", where is the link created? (The example answers this question - in all of the /etc/rc?.d directories.) This is actually strange because in level 2, network services aren't supposed to come up.
trm-27 | create-service | "You must have write permission for the path /var/svc/manifest/application/GlassFish." is there twice.
trm-28 | create-service | Linux and Windows should mention the need to be root and administrator, respectively.
trm-29 | create-service | Typo: "This is the If this option".
trm-30 | delete-cluster | There is an obsolete --nodeagent option too.
trm-31 | delete-cluster | If the name of the config is not "cname-config", where cname is the name of the cluster, then the config will not be deleted even if it is not used. The idea is that only automatically created configs should be deleted by delete-cluster.
trm-32 | delete-local-instance | This command is not supported in remote mode.
trm-33 | delete-local-instance | Since the instance name is optional, the man page should say what happens if it is not provided. What happens is that the one and only instance on the node is deleted. If there is more than one, it is an error.
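A brief sketch of the behavior trm-33 describes; the instance name yml-i1 is hypothetical:

  # Exactly one instance on the node: the operand may be omitted
  asadmin delete-local-instance
  # More than one instance on the node: the instance name is required
  asadmin delete-local-instance yml-i1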
trm-34 | delete-config | Typo: "sever instance".
trm-35 | export-sync-bundle | Typo: missing word "subcommand" in first sentence of description.
trm-36 | export-sync-bundle | I'm not sure that we want to say that this is a ZIP archive. Isn't it enough to just say it is an archive? Users should not be encouraged to unzip the file.
trm-37 | export-sync-bundle, --target option | Maybe this should say that if you want to export the data for a clustered instance, specify the name of the cluster as the target rather than the instance.
trm-38 | export-sync-bundle, --retrieve | Is the qualification about what happens if you are running asadmin on the same host as the DAS really true? I had thought --retrieve could be true or false anywhere.
trm-39 | export-sync-bundle | What happens if --retrieve=true and a relative pathname is specified? Also, what happens if an absolute pathname is specified (for both --retrieve settings)?
trm-40 | export-sync-bundle, description | I'd like to see the description talk about the use cases for using export-sync-bundle/import-sync-bundle, unless this is in another part of the documentation, and then a reference would be fine. Specifically, the idea is that the sync bundle file can be transferred to a host for an instance without having to have the instance be able to communicate with the DAS. I see that some of this is in the description for import-sync-bundle, so maybe just a reference to that manual page would be sufficient.
trm-41 | import-sync-bundle | Regarding the phrase "Attempting to register the instance with the DAS", what really happens is that import-sync-bundle tries to set the rendezvousOccurred flag for the instance, which is different from what create-local-instance or create-instance does when they create the instance. It might be that the term "registers" is fine here as long as that term is not used with create-instance.
trm-42 | general | There are a few "hybrid" commands here, i.e., local commands that run one or more remote commands. These include create-local-instance, delete-local-instance, and import-sync-bundle. These commands will normally require specifying some of the asadmin command options (host, port, user, etc.) so that the command can communicate with the DAS. However, none of the examples show this, and the manual pages don't even mention the asadmin command options.
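A sketch of what trm-42 asks the examples to show, using the standard asadmin utility options; the DAS host, cluster, and instance names are hypothetical:

  asadmin --host dashost.example.com --port 4848 --user admin create-local-instance --cluster c1 yml-i1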
trm-43 | import-sync-bundle, --instance option | Regarding the phrase "The instance must already exist.", this might be misleading. The data for the instance does not have to exist on the local host. It is true that the instance must exist in the DAS configuration. However, this had to be the case to be able to run export-sync-bundle.
trm-44 | list-configs | I would expect that one could pass an existing config as a target, as a test to see if that config exists. However, you get an NPE if you do that (I just filed issue 14278).
trm-45 | list-clusters, list-instances, list-jndi-entries, restart-domain | Synopsis is missing "-?". (Note: maybe this is missing on all of them - I just started noticing it with list-clusters.)
trm-46 | list-jndi-entries | The "server" value for target is for the DAS. Maybe it should say that.
trm-47 | restart-domain | The --domaindir argument is the "domains" directory, not the directory of the domain itself, e.g., "domains/domain1".
trm-48 | restart-local-instance, restart-instance, start-instance, start-local-instance | The explanation for synchronization is incorrect. Changes to the top-level subdirectories of docroot trigger a synchronization of all files under that subdirectory. So if a file at a lower level is changed, as long as its containing directory (and on up the tree) is changed, everything will be synchronized as it should. It might be better to have this section refer to an explanation of synchronization in the start-local-instance command.
trm-49 | restart-instance | SSH is required only if the instance references a node-ssh. If it references a node-config that has a nodehost that is the local host, then restart-instance works.
trm-50 | start-cluster | If all instances in a cluster are local to the DAS, then start-cluster works fine without SSH.
trm-51 | start-instance | If the instance is local to the DAS, then start-instance works fine without SSH.
trm-52 | start-instance, start-local-instance | The full synchronization description should say that all files are synchronized whether they have changed or not, i.e., all data is copied.
trm-53 | start-local-instance, --verbose | This option does not open a window. The output from the server is sent to the same window where the command is executed.
trm-54 | start-cluster | The start-cluster command starts instances in parallel. The degree of parallelism is determined by the size of the admin-threadpool. Should this be mentioned in the manual page?
trm-55 | stop-instance | The implementation of stop-instance was just changed yesterday so that the default for --force is false. Also, the meaning of force is now different. Even when --force is false, the server exits without waiting for all threads to stop. Now if --force is true, stop-instance uses an operating system-specific method to terminate the process immediately, without calling System.exit.
trm-56 | stop-local-instance | Why does this command mention that SSH is not required? For other commands that mention this, the non-local version of the command uses SSH. But in this case, stop-instance doesn't use SSH anyway. So the mention of SSH here should be omitted.
Posted by trmueller at Oct 27, 2010 08:38
Comment ID | Location | Comment
JC-001 | create-local-instance(1) man page | I think you can create a stand-alone instance when specifying --config. There are a couple of places where it says a stand-alone instance is created when you do not specify --config or --cluster. Definitely --cluster cannot be specified. But you could do copy-config on xxx-config and name it sa-config, then do create-local-instance and specify sa-config. Probably not very common though.
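A sketch of the sequence JC-001 describes; sa-config and pmdsa1 are hypothetical names:

  asadmin copy-config default-config sa-config
  asadmin create-local-instance --config sa-config pmdsa1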
JC-002 | export-sync-bundle(1) man page | --retrieve does not have to be true if the subcommand is run on the host where the instance resides. In this case, if it is false, the bundle will still be exported to the DAS host.
JC-003 | export-sync-bundle(1) man page | --retrieve does not have to be false if the subcommand is run on the DAS. In this case, if it is true, the bundle will still be exported to the DAS host. The default location of the bundle will be different. When false, the default directory is in the sync directory. When true, it is in the current directory.
JC-004 | export-sync-bundle(1) man page | The location of the default directory when --retrieve=true seems to have changed. Maybe Tim has fixed this. It is now the current directory, not the user home directory. And the relative path is also relative to the current directory, not the config directory. This is a better user experience, I think.
JC-005 | import-sync-bundle(1) man page | Regarding Tom's comment trm-41: This is how I think of it. create-instance and create-local-instance register the instance with the DAS, meaning a server element is created in domain.xml for that instance (among other configurations). The rendezvousOccurred property being set to true also happens in create-local-instance, but the user doesn't really need to know about it. The instance should already be created or registered (except for the rendezvousOccurred property) on the DAS before running import-sync-bundle. Then import-sync-bundle will set rendezvousOccurred to true if it can contact the DAS. In the error message I referred to this as completing the registration. But I think to make a clearer distinction, we should maybe just say 'setting rendezvousOccurred property to true' instead of 'completing the registration'.
JC-006 | import-sync-bundle(1) man page | I'm planning to fix issue 13119 for MS7, so if you do not specify --node, it will try to look up the instance's node name and use that. (It will look up the instance's node name from the bundle's domain.xml.)
Posted by jenchou at Nov 01, 2010 10:32
Comment ID | Response
trm-1 | Done.
trm-2 | Done.
trm-3 | Yes. Done.
trm-4 | Decline. The --target option of some subcommands does accept a clustered instance. For specific commands where a clustered instance is an error, could you file an issue for fixing in a future release?
trm-5 | Done.
trm-6 | Yes. Done.
trm-7 | Decline. A conscious decision was taken to leave these options in the man page to avoid giving the impression to users of existing releases that they would now give a syntax error. I am open to revisiting this decision in a later release.
trm-8 | Done.
trm-9 | Done.
trm-10 | Done.
trm-11 | Added missing /. We do not have more details about creating templates, so no pointer can be added.
trm-12 | Done.
trm-13 | Done.
trm-14 | Done.
trm-15 | Agreed, but decline. Unfortunately, the redundancy is mandated by the trademark lawyers.
trm-16 | Done.
trm-17 | I have tried to standardize on "host", but I can't guarantee to have eliminated all instances of "machine".
trm-18 | Done.
trm-19 | Done.
trm-20 | Done.
trm-21 | Done.
trm-22 | Done.
trm-23 | Done.
trm-24 | Done.
trm-25 | Done.
trm-26 | Done.
trm-27 | Done.
trm-28 | Done.
trm-29 | Done.
trm-30 | Done.
trm-31 | Done.
trm-32 | Decline. Unfortunately, the terms "local" and "remote" as applied to asadmin subcommands are potentially misleading. According to the asadmin(1M) manual page, "remote" means "requires a running DAS" and "local" means "does not require a running DAS". In fact, by these definitions, delete-local-instance is supported in remote mode only.
trm-33 | Done.
trm-34 | Done.
trm-35 | Done.
trm-36 | Done.
trm-37 | Done.
trm-38 | Done.
trm-39 | The behavior is explained in the description of the operand.
trm-40 | Done.
trm-41 | The term "registers" is fine here because that term is not used with create-instance.
trm-42 | Done for create-local-instance and import-sync-bundle. Not required for delete-local-instance, restart-local-instance, and start-local-instance: the DAS host and port come from the DAS properties of the node where the instance resides. Not done, due to lack of time, for list-commands, multimode, and version.
trm-43 | Done.
trm-44 | Done.
trm-45 | Decline. To reduce clutter in the synopsis line, a conscious decision was taken to list only the long form in the Synopsis section and to list both forms in the Options section.
trm-46 | Done.
trm-47 | Done.
trm-48 | The correction is done in all 4 man pages.
trm-49 | Done.
trm-50 | Done.
trm-51 | Done.
trm-52 | Done.
trm-53 | Done.
trm-54 | Maybe not in the man page, but in the Performance Tuning Guide?
trm-55 | This description of --force seems to be for the --kill option that was added. I have left the description of --force unchanged and added a description of --kill.
trm-56 | Decline. Users reading the man pages for other *-local-* subcommands might be led to expect a statement about SSH here. Without the statement, such users might be wondering if this subcommand requires SSH or not.
Posted by pauldavies at Jan 12, 2011 16:16
Comment ID | Response
JC-001 | Done.
JC-002 | Done.
JC-003 | Done.
JC-004 | Done. However, the relative path is relative to the current directory only if --retrieve is true. If --retrieve is false, the path is still relative to the config directory.
JC-005 | Decline. See response to trm-41. I'm not sure what "setting rendezvousOccurred property to true" means from a user's point of view.
JC-006 | Done.
Posted by pauldavies at Jan 12, 2011 16:18

Comments on "Administering Domains" in about-das-instances.pdf

Comment ID | Location | Comment
trm-001 | p82, "Domains for Admin...", 3rd pp | Can "administrative" before "domain" in two places be omitted? There isn't really anything administrative about domain1. It is just another domain. Are there some domains that are administrative domains and others that are not? No. Domains are just domains.
trm-002 | p83, 1st pp | Might want to add that the DAS has the master copy of the configuration data for all of the instances in a domain. If an instance is destroyed, for example, due to a computer crash, it can be recreated from the data in the DAS.
Posted by trmueller at Jan 20, 2011 09:04

Comments on "Administering Named Configurations" in configurations.pdf

Comment ID | Location | Comment
trm-001 | general | I suggest not using the term "inherits" when referring to the use of a configuration by a cluster or instance. "Inherits" is applicable when two objects have an "IS-A" relationship. However, a configuration does not have that relationship with a cluster or an instance. How about using "uses" or "refers to"?
trm-002 | p78, last pp | The system properties are not actually passed in to the JVM using the -D option. However, the server does set them internally as if they were.
trm-003 | p76 | These port numbers are merely the defaults that are in the initial default-config and the properties that are set by the create-cluster and create-instance commands. Maybe this section should explain that additional system properties can be created as desired by the user, and referenced within the rest of the config using ${prop-name} notation. For example, if a configuration defined more HTTP listeners, the ports for those listeners can be defined with system properties too (although they will not be assigned automatically by the create-cluster and create-instance commands).
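A hedged sketch of the pattern trm-003 suggests documenting; the property name, listener name, cluster name, port, and dotted name are hypothetical. The system property is defined on the target and then referenced with ${prop-name} notation:

  asadmin create-system-properties --target c1 MY_LISTENER_PORT=28080
  asadmin set configs.config.c1-config.network-config.network-listeners.network-listener.my-listener.port='${MY_LISTENER_PORT}'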
Posted by trmueller at Jan 20, 2011 12:49

Comments on "Chapter 3 : Administrating GlassFish Clusters"

Comment ID | Location | Comment
sre-001 | p35, point 1 | copy-config needs two operands, the name of the config to copy and the new config name. If it were changed to:

  asadmin copy-config default-config mycfg

then the rest of the example makes sense.
sre-002 | p36, point 5 | Shouldn't this use:

  asadmin stop-domain domain1
  asadmin start-domain domain1

list-instances won't list server, and on b39 at least, trying stop-instance server appears to hang - I don't know if this is meant to work or not. See defect 7014072 for a possible issue/change in behaviour using asadmin in multimode that might need documenting.
sre-003 | p37, get clusters.cluster.c1.gms-multicast* | Results in an error (in b38):

  remote failure: Dotted name path clusters.cluster.c1.gms-multicast* not found.

In 2.x and earlier, wildcards wouldn't match multiple dotted names from a partial name; they simply replaced a full dotted name in a hierarchy, e.g., get clusters.cluster.c1.* would work.

For this example to work you'd need to fetch each value to avoid the larger list that clusters.cluster.c1.* returns. So:

  asadmin> get clusters.cluster.c1.gms-multicast-address
  clusters.cluster.c1.gms-multicast-address=228.9.101.223
  Command get executed successfully.
  asadmin> get clusters.cluster.c1.gms-multicast-port
  clusters.cluster.c1.gms-multicast-port=13205
  Command get executed successfully.

sre-004 | p38, GMS-BIND-INTERFACE-ADDRESS-c1 example | I simply can't replicate this. Created a config and a cluster using two instances; listing the system properties for either the config or both instances does not list a GMS-BIND-INTERFACE-ADDRESS-<cluster-name> property, even though I can see that property being used in the domain.xml in the GMS configuration section - there is no definition in the configuration system properties.
sre-005 | p39 | The comment made in sre-004 is redundant if you create the instances as described on p39, point 3, where the GMS-BIND-INTERFACE-ADDRESS-cl1 property is being specifically set. However, on p35, in point 4, this system property is not specified, which is what appears to lead to the problem in sre-004.
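For illustration, a hedged sketch of setting that system property explicitly when the cluster and an instance are created, as sre-005 describes; the cluster name, node name, instance name, and address are hypothetical:

  asadmin create-cluster --systemproperties GMS-BIND-INTERFACE-ADDRESS-c1=10.152.23.224 c1
  asadmin create-instance --node n1 --cluster c1 --systemproperties GMS-BIND-INTERFACE-ADDRESS-c1=10.152.23.224 i1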
Posted by tecknobabble at Jan 22, 2011 03:12
Comment ID | Response
sre-001 | Added default-config operand.
sre-002 | Changed to stop-domain and start-domain.
sre-003 | Changed to get clusters.cluster.c1, which I verified myself.
sre-004 | Added "if this system property has been set" to the introductory sentence and added "For information on how to set this system property, see Using the Multi-Homing Feature With GMS" afterward.
sre-005 | I believe that my response to sre-004 takes care of this.
Posted by rebeccaparks at Jan 25, 2011 15:05

Comments on "Administering Named Configurations" in configurations.pdf

Comment ID | Location | Comment
sre-001 | p71, first para | The text "This directory is used to synchronize configurations for all instances that reference the configuration" feels a little clunky, as it implies to me that all the configuration information for that config-name is in that directory, which isn't entirely true. It may be simpler just to say that the contents of the domain-dir/config/config-name directory are synchronized to all the instances that reference that configuration. Customers have been adding their own files to the config-name directory to synchronize common content since AS 8.x.
sre-002 | p76, ASADMIN_LISTENER_PORT | You can only access the Admin Console from the DAS server instance. The fact that you have to assign an admin port on non-admin instances is a change from GF 2.x and earlier cluster-capable releases and might warrant an explanation for the change and the need for an additional port. Is this just a side effect because the DAS server-config is particularly special and is created from the default-config, whereas in 2.x and earlier the default domain.xml contained the server-config with its additional admin-listener and a default-config that didn't have an admin-listener? I assume that if a customer created a new named configuration, deleted the admin network listener, and then used that as the basis for their clusters and standalone instances, they wouldn't hit any issues?
Posted by tecknobabble at Jan 26, 2011 05:15

Comments on "Administering GlassFish Server Instances" in instance-lifecycle.pdf

Comment ID | Location | Comment
sre-001 | p49, last paragraph before point 1 | "The instance might be managed by GMS" - I believe this will only be clustered instances? If so, perhaps it should be emphasized, as the text makes it sound like an option for any instance.
sre-002 | p52, Master Broker Caution note | Feels like a silly question, but how do you know which instance has the MQ master broker? There isn't a list-master-broker command that I can see... will the delete-instance command warn you that the instance hosts the master broker? It also isn't information provided by list-instances --long=true.
sre-003 | p57, paragraph 3 | The sections referred to, Stopping and Starting Individual Instances on pages 55 and 56, make no mention of how synchronization can be controlled. For that to be the case, the start-instance section would need to add some explanation of the --sync [none|normal|full] option, or make it more explicit that the manual page for start-instance contains the details on how to control synchronization. While there was no restart-instance command in GF 2.x/AS 8.x, the closest was the node-agent automatically starting an instance, which caused no synchronization. Therefore, the restart-instance behavior is the opposite of its closest equivalent in the previous releases, and that might be worthy of a note to highlight that fact - or a reference to wherever else synchronization is explained in the documentation.
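For reference, a minimal sketch of the --sync option sre-003 refers to; the instance name yml-i1 is hypothetical:

  # Full synchronization: all files are copied, whether changed or not
  asadmin start-instance --sync full yml-i1
  # No synchronization on startup
  asadmin start-local-instance --sync none yml-i1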
sre-004 | p58, last para | Same point as sre-001.
sre-005 | p59, point 3, examples for first two bullet points | The create-local-instance subcommand is missing before the instance name. The third bullet point does have the create-local-instance command present.
sre-006 | p61, master broker caution note | Same as sre-002.
sre-007 | p63, node-name explanation | The start-local-instance command does not communicate with the DAS, and the number of nodes in the domain is irrelevant. What matters is the number of nodes in the local installation - if there is more than one node in the as-install/nodes directory, the --node <node-name> option is required.
sre-008 | p64, node-name explanation | See sre-007.
sre-009 | p64, example 4-13 | stop-local-instance does not use/take/accept the --host <das-host> option; it's a local and not a remote command.
sre-010 | p65, node-name explanation | See sre-007.
Posted by tecknobabble at Jan 26, 2011 07:36

Comments on "Administering GlassFish Server Clusters" in "clusters.pdf"

Comment ID | Location | Comment
trm-001 | p32, 1st bullet | Regarding "GlassFish server... redirects requests", GlassFish really doesn't do this. It is up to a load balancer in front of the cluster to do this, and although GlassFish provides support for helping to configure a load balancer, the GlassFish product does not actually include a load balancer.
trm-002 | p33 | In group-discovery-timeout-in-millis, the text doesn't actually say that the value is in milliseconds. Hopefully someone can figure out what "in-millis" means :-)
trm-003 | p35, step 4 | A create-instance without a --node option isn't going to work.
trm-004 | p38, example 3-2 | This example should recommend running the validate-multicast command at the same time on all of the hosts in the cluster using separate windows. Otherwise you don't really see if multicast is working.
trm-005 | p38 | Typo: "may or may not be the same network" is missing "on" before "the".
Posted by trmueller at Jan 27, 2011 12:50

Comments on Deployment Planning Guide, Ch 1. IIOP Loadbalancing section

Comment ID | Location | Comment
tjq-001 | 1st paragraph and last paragraph | Not sure how you want to handle this, if at all, in this section.

There are two steps to fail-over and load balancing. The first step, bootstrapping, is the process by which the client sets up the initial naming context with one ORB in the cluster. This is, by default, where load balancing and some degree of failover occur. The client will attempt to connect to one of the IIOP endpoints specified by the user. When launching an app client using the appclient script, for example, the user specifies these endpoints using the -targetserver option on the command line or using the <target-server> elements in the sun-acc.xml config file. The client will randomly choose one of these endpoints and try to connect to it, trying other endpoints if needed until one works.

The second step concerns sending messages to a specific EJB. By default, all naming look-ups - and therefore all EJB accesses - will use the cluster instance chosen during bootstrapping. The client exchanges messages with an EJB through the client ORB and server ORB. As this happens, the server ORB will update the client ORB as servers enter and leave the cluster. Later, if the client loses its connection to the server from step 1, the client will fail over to some other server using its list of currently active members. In particular, this cluster member might have joined the cluster after the client made the initial connection.

This is the default. There is also per-request load balancing, which works a little differently; see this review document for more info.
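A sketch of the bootstrapping step tjq-001 describes, assuming two hypothetical cluster hosts listening on the default IIOP port 3700 and a hypothetical client JAR:

  appclient -targetserver host1.example.com:3700,host2.example.com:3700 -client myclient.jar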
Posted by timq at Jan 28, 2011 08:09

Comments on Chapter 4: Administering GlassFish Server Instances

Comment ID | Location | Comment
jfd-001 | p47, 2nd paragraph | "Every instance must contain" -> "Every instance contains"
jfd-002 | p48 | Should we state somewhere here that all instances in a domain (and the DAS) should be running on the same OS? I.e., we only support homogeneous domains?
jfd-003 | p48, last paragraph | "managed by GMS": I think it would be good to cite the appropriate Chapter 3 section that clarifies what this is referring to.
jfd-004 | p49, prerequisites | "...and is enabled for remote communications." I find this vague. How about something like "...and is either an SSH node or represents the same host that the DAS is running on."
jfd-005 | p49, define "node-name" | I guess another way to handle jfd-004 is to add some text here like: "The node must be an SSH node or a node that represents the same host that the DAS is running on."
jfd-006 | p53, prerequisites | See jfd-004.
jfd-007 | p55, prerequisites | See jfd-004.
jfd-008 | p58, Before You Begin | Should we describe the auto-node creation capability here? The easiest way to create an instance locally is to not create the node first, but to just go ahead and run create-local-instance. This will create a CONFIG node for you, named the same as the hostname of the system you run create-local-instance on.
jfd-009 | p59, define "node-name" | Related to jfd-008: if you provide a node name, it must already exist. If you do not provide a node name, then one will be created for you. It will be named the same as the hostname.
jfd-010 | p60, Example 4-10 | Depending on what you do with jfd-008 and jfd-009, you may want to add an example without a node name.
jfd-011 | p63, node-name | "If only one node is defined in the domain...": I think it is more accurate to say "If only one node is defined for this host in the domain...".
jfd-012 | p64, node-name | See jfd-011.
Posted by dipol at Jan 31, 2011 11:04
Comment ID | Location | Comment
TJQ-002 | Resync'ing Instances and DAS, p. 64, sentence after first bullet points | Internally, when we refer to "synchronizing" we refer to the file copying that occurs during instance start-up. As written, though, the text suggests the instances' caches will be out of step with the DAS's domain.xml until a server start. In fact, if a server is up, then it updates its cache with configuration changes, so that the cache in fact stays up to date with the DAS's copy of the configuration. Although the last sentence is true using engineering's internal meaning of "synchronization" = copying of files, most readers think of "synchronization" = "being up-to-date" with respect to the DAS. This should be clarified.
Posted by timq at Jan 31, 2011 14:21

Comments on Chapter 4: Administering GlassFish Server Instances

Comment ID | Location | Comment
cm-001 | pg 59 | The command create-local-instance is missing from the examples on this page.
cm-002 | pg 58 | If the pages for the other sections are generated, then this is not an issue, but config nodes are described on page 32.
Posted by carlavmott at Feb 01, 2011 17:04

Comments on "Deployment Planning Guide" in "SJSASEEDPG.pdf"

Comment ID | Location | Comment
trm-001 | p17, 1st bullet | This mentions Pointbase, which is no longer included with GlassFish.
trm-002 | p18, 1st pp under "server instances" | I had thought J2SE had been replaced with Java SE.
trm-003 | p18 | Missing "one" in "used in more than administrative domain".
trm-004 | p19 | Typo: "on a one" on the 2nd line of the page.
trm-005 | p19, 3rd pp | Missing "and" in the first sentence.
trm-006 | p19, 6th pp | Typo: "configuration its domain".
trm-007 | p19, last pp; p35, 3rd pp | Should this document still be talking about Sun Cluster?
trm-008 | p23 | install_dir/imq should be install_dir/mq.
trm-009 | p30 | I'm surprised to see 8 bits/byte used here. This doesn't account for parity bits and packet header overhead. I recommend using 10 bits/byte as a rule of thumb.
trm-010 | p30 | The "calculating bandwidth" section doesn't include information on how to calculate the bandwidth needed for session replication, database transactions, messaging, etc.
trm-011 | p31 | Typo: "makes sense identify".
trm-012 | p32 | Typo: "become unavailable" should be "becomes unavailable".
trm-013 | p32 | Typo: "an failed".
trm-014 | p35, last pp and p37 | With GlassFish 3.1, an MQ broker can now run embedded in the same VM as GlassFish. On p37, the embedded type should be described too.
trm-015 | p38, 2nd pp | The default broker type for a stand-alone instance or a cluster instance is "embedded", not "remote".
trm-016 | p38, 5th pp | This paragraph contains a self-reference. Is that really needed?
trm-017 | general | This document doesn't say anything about the use of hardware load balancers. Are they supported with GlassFish? If so, how does one configure a hardware load balancer to use GlassFish?
trm-018 | general | For large clusters (> 50 instances), it may be desirable to configure the size of the admin thread pool to improve the responsiveness of admin commands that operate on clusters.

Comments on the deployment planning guide:

Comment ID | Location | Comment
JC-001 | Page 9 | Change https://glassfish.java.net to http://glassfish.java.net
JC-002 | Page 9 | "Java EE:" has a stray ":" character.
JC-003 | Page 11 | First Cup URL should be http://download.oracle.com/javaee/5/firstcup/doc/firstcup.pdf
JC-004 | Page 11 | Java EE 6 tutorial URL should be http://download.oracle.com/javaee/6/tutorial/doc/
JC-005 | Page 17-18 | OSGi services can be accessed as resources, albeit "local" resources. Contact Sahoo & Nazrul for more information. We are still formalizing OSGi support.
JC-006 | Page 18 | "The recommended J2SE distribution is included with the GlassFish Server installation". Remove this sentence. Oracle GlassFish Server does not ship with a J2SE distribution, only the SDK.
JC-007 | Page 18 | Incorrect grammar (missing "one"): "In some cases, a large server with multiple instances can be used in more than administrative domain."
JC-008 | Page 19 | First sentence has incorrect grammar: "The administration tools are the asadmin command-line tool, the browser-based Administration Console..."
JC-009 | Page 19 | In addition to JMX, we also support the RESTful administration API in GlassFish Server 3.1. So the sentence can be modified to say "GlassFish Server also provides JMX and RESTful APIs for server administration."
JC-010 | Page 19 | Incorrect grammar: "The DAS keeps a repository containing the configuration its domain and all the deployed applications."
JC-011 | Page 19 | A better example for shutting down a DAS would be to reboot the host operating system for installing a kernel patch or hardware upgrade. (We can freeze a configuration using suspend-domain.)
JC-012 | Page 19 | Has anyone tested Sun Cluster Data Services with GlassFish Server 3.1 DAS?
JC-013 | Page 20 | We can't optimize for all three: "The default configuration is optimized for developer productivity and for security and high availability." For example, by default the DAS has little security. Re-phrase as "The default configuration is optimized for developer productivity."
JC-014 | Page 21 | 1st paragraph "Web Server" should say "Oracle iPlanet Web Server, Oracle HTTP Server, ..." Note that I added Oracle HTTP Server.
JC-015 | Page 21 | "Homogeneity assures that before and after failures, the load balancer always distributes load evenly across the active instances in the cluster." We have a weighted load balancer algorithm to explicitly distribute load unevenly. Also, there is homogeneity of environment (same operating system) and homogeneity using named configurations (same configuration). Re-phrase as "Homogeneity enables configuration consistency, and improves the ability to support a production deployment."
JC-016 | Page 22 | GlassFish offers two persistence stores: ActiveCache for GlassFish (Coherence*Web) and replicated (in-memory).
JC-017 | Page 24 | Step 4 should say "Create domains, nodes, clusters, and standalone instances as needed."
Posted by johnclingan at Feb 02, 2011 16:47
Comment ID | Location | Comment
bs-1 | page 15 | "Java 2 Enterprise Edition" -> "the Java platform, Enterprise Edition"; check everywhere else too for this obsolete usage.
bs-2 | page 16 | "the Java API for XML-based RPC (JAX-RPC)" -> "the Java API for XML-based web services (JAX-WS)"
bs-3 | page 17, first paragraph | "JAX-RPC" -> "JAX-WS"
bs-4 | page 18, first paragraph | "to send and receive email" -> "to send email, and connect to an IMAP or POP3 server to receive email"
bs-5 | page 18, Server Instances, first paragraph | "Java 2 Standard Edition (J2SE) 6" -> "Java platform, Standard Edition (Java SE) 6"; "J2SE" -> "Java SE"
bs-6 | page 19, first paragraph | The last sentence is not exactly true. You can authenticate to more than one domain at a time by using more than one browser window or more than one asadmin CLI.
bs-7 | page 19, third paragraph | The first sentence lists two items separated by a comma; they should be separated by "and".
bs-8 | page 19, third paragraph | I'm not sure we want to continue to promote the JMX-based admin API.
bs-9 | page 19, 6th paragraph | "configuration its domain" -> "configuration of its domain"
bs-10 | page 19, 7th paragraph | "restore domain configuration" -> "restore the domain configuration"
bs-11 | page 19, last paragraph | Does Sun Cluster still exist? Do we still support it with GlassFish?
bs-12 | page 20, 2nd to last paragraph | "for developer productivity and for security and high availability"? I don't think so...
bs-13 | page 21, first paragraph | This seems to be listing three web servers, and the first one is named "the Web Server"? Something seems wrong here.
bs-14 | page 22, last paragraph | "a InitialContext" -> "an InitialContext"
bs-15 | page 23 | Is it still called the "Sun Java System Message Queue Administration Guide"?
bs-16 | page 24 | Ditto.
bs-17 | page 32, first paragraph | "become unavailable" -> "becomes unavailable"
bs-18 | page 32, first paragraph | "an failed" -> "a failed"
bs-19 | page 34, second paragraph | Remove the version number "1.5" twice; at the end, replace "JCA" with "a JCA resource adapter".
bs-20 | page 35, 3rd paragraph | Again, Sun Cluster. Probably remove this paragraph if we're not supporting Sun Cluster anymore.
bs-21 | page 35, last paragraph | Doesn't the MQ broker normally run in the DAS process?
bs-22 | page 36 | "Manualor" -> "Manual or"
bs-23 | page 36 | Isn't there a third type, "embedded" or something like that?
bs-24 | page 37 | "an GlassFish Server" -> "a GlassFish Server", "an Message Queue" -> "a Message Queue", several places in this chapter.
bs-25 | page 38, first paragraph | Isn't the default broker type "embedded"?
bs-26 | page 38, "Specifying an ..." | The second sentence references this same section.
Posted by bill.shannon at Feb 02, 2011 22:36
Comment ID | Location | Comment
JC-001 | instance-resync.pdf page 66 | For most bullets, "what" is synchronized (directory vs files) is covered. However, two cases where this is missing are lib/applibs and lib/ext. For these, please add what is synchronized, file-by-file or the entire directory.
JC-002 | instance-resync.pdf page 68 | When running "list-instances", "requires restart" is displayed. It would be good to have some kind of description of what causes this message, since restarting a production instance is undesirable. Does any out-of-sync file/directory described earlier in the section cause this message? Do only config directory changes generate this message?
JC-003 | instance-resync.pdf page 72-73 | Please remove the "Synchronizing only Specific Configuration Files" section, as this has the potential to create some serious issues in a production environment. For example, not including domain.xml in the file list (or even misspelling it) will remove domain.xml on the remote host. A better approach for us to take would be to create an infodoc on this, and to perhaps get the support organization engaged with the customer.
JC-004 | Introduction and instance-lifecycle tasks in Chapter 4, Administering GlassFish Server Instances | I know this comment is late, but the "start-local-instance --upgrade" option seems under-documented. The built-in help doesn't help much, and I don't see anything in the product documentation. For example:

  • "Specifies whether the configuration of the instance is upgraded to the current release." What does an upgrade entail? For example, does this mean that if I upgrade the DAS to version 3.2 from 3.1 (in the future), running "--upgrade" upgrades the instance configuration and binaries to 3.2? What is the scope of an upgrade?
  • "You should not need to use this option explicitly." Then why is it here? Under what condition(s) would I want to use this option?

JC-005 | instance-resync.pdf page 74-75 | The section is titled "Resynchronizing an Instance and the DAS Offline". Interestingly, the example of "export-sync-bundle" uses a target of ymlcluster. What is the difference between specifying the target as a cluster vs an instance of a cluster? What if the target was "instance1" (a member of ymlcluster) instead of ymlcluster? A description should explain when a target should be a cluster vs an instance of a cluster.
JC-006 | instance-resync.pdf page 74-75 | One of the described use cases for offline instance <-> DAS synchronization is "To reestablish the instance after an upgrade". However, I do not see the upgrade guide referencing this feature at all. I also added a comment on the Upgrade Guide wiki page.
Posted by johnclingan at Feb 03, 2011 11:59

Comments on instance-resync.pdf

Comment ID | Location | Comment
sre-001 | p64, Default Synchronization for Files and Directories, applications | "only a change to a top-level application subdirectory". I assume this means a top-level subdirectory of the applications directory? Since the subdirectories of the applications directory are the actual applications, why not have "only a change to an application's top-level directory within the applications directory"? With the original text it's ambiguous, as it's not clear which subdirectories - of which there could be many, including lower-level subdirectories - it refers to.
sre-002 | p65, docroot section | "Therefore, by default, only a change to a file or a top-level subdirectory of the docroot directory". Is it clear that the file must be in the top level of the docroot directory? With the rest of the text in this section I think a reader can figure it out, but perhaps "Therefore, by default, only a change to a file or a subdirectory in the top level of the docroot directory" is clearer.
sre-003 | p65, generated | The generated directory is used internally by the application server. Should we say that customers should not typically modify the contents of this unless directed to by support, for example?
sre-004 | p65, java-web-start | Is this content auto-generated? Are customers allowed to modify the contents without a redeployment of the application that has Java Web Start capabilities, e.g., an appclient implementation?
sre-005 | p66, point 2 | The example at the top of page 68 implies that an instance will be reported as "requires restart" if it needs synchronizing. Now, I've changed one of the port properties for the instance, which is in turn used by its configuration, and touched the domain.xml file to change its timestamp, and neither resulted in "requires restart" being reported. Therefore I have to conclude that only certain changes will trigger this state being reported. For example, a change to the java-config does result in "requires restart". Should we therefore document that the output from "list-instances" only provides a hint that synchronization is needed, and that certain changes will not be detected or reported? I suspect it might be difficult to provide a definitive list somewhere of what sorts of changes would result in "requires restart", helpful as that would be.
sre-006 | p67, node-name description at the end of the page | You only need --node <node-name> if there is more than one node in the installation on the host you are starting the instance locally on.
sre-007 | Example 4-15 | The --node sj01 option is only needed if there are multiple nodes, in the same installation, on the local host. The example does tie in with the example in the list-nodes man page:

  asadmin> list-nodes --long=true
  NODE NAME           TYPE     NODE HOST          INSTALL DIRECTORY     REFERENCED BY
  localhost-domain1   CONFIG   localhost          /export/glassfish3
  sj02                SSH      sj02.example.com   /export/glassfish3    pmd-i2, yml-i2
  sj01                SSH      sj01.example.com   /export/glassfish3    pmd-i1, yml-i1
  devnode             CONFIG   localhost          /export/glassfish3    pmdsa1
  Command list-nodes executed successfully.

and would make more sense if that was present to provide context.

sre-008 | p68, To Resynchronize Library Files | "Some types of library file must be added to the class path." Given the title of the section, when I read this, I feel it implies that some library files need to be on the classpath to be synchronized. I'd be surprised if that's correct. Of course, some of the locations that are synchronized have their contents automatically added to the classpath, and some don't - but I don't believe that aspect has any influence on the synchronization process.
sre-009 | p69, domain-dir/config/config-name/lib | The description should emphasise that it's a specific cluster or instance. That's inferred by the config-name in the path, but it probably won't hurt to make the point clear, e.g., "Library files for all applications that are deployed to a specific cluster or a standalone instance."
sre-010 | p69, domain-dir/config/config-name/lib/ext | Same as sre-009.
sre-011 | p69, point 2 | Similar to sre-009/10... perhaps "For library files for all applications that are deployed to a specific". This sort of information used to be part of the documentation on the ClassLoader hierarchy; has it been removed from there and this is its replacement, or is this a more verbose explanation of what happens with these specific library locations?
sre-012 | p70, Next Steps | Instead of "you can specify only the JAR file name, for example", perhaps "you specify just the JAR file name".
sre-013 | p72, To Resynchronize User's Changes to Files | Instead of "Adding files to lib directory", perhaps "Adding files to the lib directory".
sre-014 | Regarding comment JC-003 | I'd have to concur that this sounds dangerous. Was there a specific use case in mind? The only one I've come across was related to the files that made up the NSS certificate database, because back in 8.x/9.x/2.x the secmod.db file wasn't part of the default synchronization. This file is used to integrate NSS with crypto hardware; being able to sync that file would have been useful. However, NSS isn't currently in GF 3.1, and with the change in synchronization mechanism it would appear that the file, if it was present in the config directory, would be synchronized.
sre-015 | p73, To Prevent Deletion of Application-Generated Files | In 8.x/9.x/2.x, customers specifically asked for the ability to prevent the deletion of application-generated files; it was a "do not remove list" that could be used to protect directories or individual files from deletion. This wasn't in the base product, but was added because multiple customers requested it... see defect 6316965. There is an argument that an application should not write files in its own deployment area, let alone outside of it. Customers do do this, but I believe the majority write to a location within the application itself. Removing this ability is only going to result in an eventual escalation with a request to add it back into the product. The suggested path is also only relevant to the machine the DAS is installed on, and not to remote nodes.
sre-016 p75, Example 4-18
The import-sync-bundle has to be run on the host that has the instance that the import is being done against.  How realistic is the example, given that as it stands its all being run on the machine that has the DAS?  Either the export-sync-bundle is done locally on the DAS host, the zip file transferred to the instance's host by some means (assuming there's a reason the instance can't sync with the DAS directly), and then the import is done, or the export is run with --retrieve=true so the zip file is downloaded to the instance's host and the import run, e.g.


$ asadmin --user admin --host dashost.example.com export-sync-bundle --retrieve=true --target=ymlcluster
Command export-sync-bundle executed successfully.
$ asadmin import-sync-bundle
--node sj01 --instance yml-i1 ymlcluster-sync-bundle.zip
Command import-sync-bundle executed successfully.

If the original form of the example is kept then the the "asadmin> export-sync-bundle" needs to be "asadmin export-sync-bundle" or an "asadmin> exit" is needed between the two commands as the first is being run inside an asadmin session, and the second is being run directly from the command line prompt.
sre-017 Migrating EJB Timers
Perhaps an example?  Probably would need to be along the lines of:

asadmin list-instances <source-instance-name> (to show that the instance is down)
asadmin list-timers <source-instance-name> (to show that it has timers)
asadmin migrate-timers --target <destination-instance-name> <source-instance-name>
Posted by tecknobabble at Feb 04, 2011 04:05

Comments on "Resynchronizing GlassFish Server Instances and theDAS" in "instance-resync.pdf"

Comment ID Location Comment
trm-001 p64, 2nd pp The data is also synchronized by the dynamic reconfiguration process that is used with every command as the server runs. I realize that is not the topic of this chapter, but it might be useful to have a reference here to the place where dynamic reconfig is described.
trm-002 p65, note The domain.xml is used to control the entire synchronization process, not just the files in the config directory.  If the domain.xml file is not sync'd, then the entire rest of the process is skipped, for all other directories.
trm-003 p65, 6th last pp typo: "directory directory"
trm-004 p66, lib/applibs and lib/ext There isn't any description of when these directories are sync'd.
trm-005 p67, step 2 What should the user look for in the output of the command to know whether it needs resynchronization?
trm-006 p67, "node-name" The --node option may be omitted if there is only one node on the host, not one node in the domain.
trm-007 p70, next steps Suggest avoiding "foo" in official documentation.
trm-008 p71, 1st pp This should indicate that this section applies to all instances in a cluster too, basically anything that is using a named config.
trm-009 p72, 1st pp under "To Resynchronize..." heading See trm-002. That change applies to this paragraph too.
trm-010 p72, last pp The typical reason for using this feature would be to add files to the list.
trm-011 p73 Note that JC's comments about removing this section were based on incorrect information that I provided to him about domain.xml.  Later, I checked the code and found that domain.xml is special in that it will be synced whether or not it is listed in the config-files file. So the danger of having your own config-files file is reduced because of that.
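To illustrate trm-010 and trm-011, a hypothetical custom config-files file (assuming the one-file-name-per-line format the chapter describes; my-extra-settings.properties is an invented entry):

keyfile
server.policy
my-extra-settings.properties

Per trm-011, domain.xml is synchronized whether or not it appears in the list.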
trm-012 p73 typo: "that not listed"
trm-013 p74, step 2 the export-sync-bundle details are missing (step 4 has the details for import-sync-bundle)
trm-014 p74, step 3.5 there should be a step here about copying the archive from the DAS host to the instance host
trm-015 p74, step 4 Since this is supposed to be an offline resync, why would one need to specify the host and port of the DAS?  AFAIK, import-sync-bundle is strictly a local command and doesn't talk to the DAS at all.
trm-016 p75, "Migrating.." I'm not sure why this section is in this chapter.  There is a typo: "a another" in the first pp.

Posted by trmueller at Feb 04, 2011 09:58
Comment ID Location Comment
JC-018
HA-intro page 18
Change "Sun Java System Message Queue" to "GlassFish Message Queue"
JC-019
HA Intro page 19
Change "MQ Enterprise Edition" to "GlassFish Message Queue". MQ is no longer available as a standalone product.
JC-020
HA Intro page 20
Need a space in the "ServerProvides" heading
JC-021
HA Intro page 20
"... GlassFish Server 3.1 as OSGi module." should be "... GlassFish Server 3.1 as an OSGi module."
JC-022
HA Intro Page 21
I think we should append to this section ("Storage for Session Data") that "Oracle GlassFish Server, the commercial distribution of GlassFish Server, offers ActiveCache for GlassFish for more flexible deployment options and for improved performance of highly available web applications."  This paragraph would apply to both commercial and open source documentation.
JC-023
HA Intro Page 22
Under "Recovering the Domain Administration Server", it correctly states to "back up the DAS periodically."  We should state something akin to "Oracle GlassFish Server can automate DAS backup on a scheduled basis" so readers of the Open Source documentation understands that options are available.
JC-024
HA Intro page 22
The last bullet on the page mentions that the recovered DAS should come up with the same IP as the failed DAS. Note that with GlassFish Server 3.1, the DAS does not have to be brought up with the same IP address. We have commands, "update-admin-server-coordinates" and "update-admin-server-local-coordinates", that can direct running instances to the new DAS host.
JC-025
HA Intro Page 23
Verify with Ed Bratt that data is still stored in /var/imq.
Posted by johnclingan at Feb 07, 2011 12:59
Comment ID Response
trm-001 Done.
trm-002 Done.
Posted by pauldavies at Feb 08, 2011 08:44

Comments on "High Availability in GlassFish" in ha-intro.pdf

Comment ID Location Comment
trm-001 p18 Why is only Apache HTTPD highlighted here, when the GlassFish load balancer plugin supports the Oracle Web Server and IIS too? It seems that all three should be mentioned.
trm-002 p18 Change "Sun Java System Message Queue" to whatever the new name is.
trm-003 p20, last pp typo: missing "an" in "as OSGi module"
trm-004 p21, 1st bullet My understanding is that all of the hosts must have multicast enabled to each other. Being on the same subnet is typically part of achieving this, but it is neither necessary nor sufficient. Please check with Joe or Bobby to confirm.
trm-005 p21, 5th bullet I don't understand the reference here to multiple clusters. Typically a load balancer is configured to forward requests to multiple instances within a single cluster. (I haven't looked in the reference section, so maybe that would explain it.)
trm-006 p23, 1st bullet This seems to be a duplicate of the 1st bullet in this section (backup the domain). 
trm-007 p23 typo: "of he operating"
trm-008 p23 WRT recovering GlassFish instances, I would recommend backing up instance data ONLY if the application stores data on the instance file system, i.e., not in a database. This practice is generally not recommended. If the application does not store data on the instance, then instance data should be viewed like a cache which can be removed and recreated at any time. If the instance data is lost, just recreate the local instance data by sync'ing it from the DAS using export-sync-bundle and import-sync-bundle. 
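A minimal sketch of that recreate-from-the-DAS flow (hypothetical host names; export-sync-bundle takes a cluster or a standalone instance as its target):

On the DAS host:
$ asadmin export-sync-bundle --target ymlcluster ymlcluster-sync-bundle.zip
$ scp ymlcluster-sync-bundle.zip sj01.example.com:/tmp

On the instance host:
$ asadmin import-sync-bundle --node sj01 --instance yml-i1 /tmp/ymlcluster-sync-bundle.zip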
Posted by trmueller at Feb 08, 2011 13:32
Comment ID Location Comment
jc-001
Resynchronizing GlassFish Server Instances and the DAS,
To Resynchronize an Instance and the DAS Offline,
 p 74, Step 2
In Step 2, the command usage for export-sync-bundle is missing
(same comment as trm-013)
jc-002
Resynchronizing GlassFish Server Instances and the DAS,
To Resynchronize an Instance and the DAS Offline,
 p 75, Example 4-18
Follow-up to comment sre-017: Can we describe for the example that the instance and DAS are on different machines? In the example, there should be a step for transferring the archive from the DAS machine to the instance machine.
jc-003
instance-lifecycle.pdf, Chapter 4 Administering GlassFish Server Instances, p 55
typo: missing 'a' in 'subcommnd'
Use the start-instance subcommnd in remote mode to start an individual instance centrally.
Posted by jenchou at Feb 09, 2011 13:46

Responses to comments on the Deployment Planning Guide

Comment IDs Response
tjq-001, trm-001 to trm-009, trm-011 to trm-016, JC-006 to JC-017, bs-1 to bs-26 Done.
trm-010, trm-017, trm-018 Please let me know where I can find all this information. These items may have to wait until the proposed doc library update in March.
JC-001 to JC-004 These comments refer to the common doc preface and are Paul's responsibility.
JC-005 I will ask Sahoo and Nazrul about this, but it may have to wait until the proposed doc library update in March.
Posted by rebeccaparks at Feb 09, 2011 14:37

For trm-010, please contact Scott Oaks.

For trm-017, please contact Yamini.

For trm-018, please contact Joe Di Pol.

In each of these cases, the information may not have been written down anywhere and may have to be generated. Deferring this to the doc update would be fine.

Posted by trmueller at Feb 10, 2011 09:24

comments on "Planning Message Queue Broker Deployment" in Chapter 2 of Deployment Planning Guide

Comment ID Location Comment
ak-00 p35 "While Message Queue will reestablish a failed connection with a different broker in a cluster, it will lose transactional messaging and roll back transactions in progress." Here it does not mention whether this is an MQ conventional cluster or an enhanced cluster. In any case, "lose transactional messaging" is misleading. In an MQ conventional cluster, transactions owned by the failed broker will not be available until it restarts; in an MQ enhanced cluster, transactions owned by the failed broker will be taken over by another running broker in the cluster and non-prepared transactions will be rolled back. Similar comment for "Thus, Message Queue does not support high availability persistent messaging in a cluster ...": I don't know what definition of "high availability" is being referred to here. An MQ conventional cluster provides service availability, and an enhanced cluster provides both service and data availability.
ak-01 p36 "Configure the JMS Hosts list to contain all the Message Queue brokers in the cluster": please check with Satish to clarify how this relates to auto-clustering of an MQ cluster in a GlassFish cluster in the EMBEDDED or LOCAL JMS modes.
ak-02 p37 "To use a Message Queue broker cluster, delete the default JMS host, then add all the Message Queue brokers in the cluster as JMS hosts.": please check with Satish whether this applies to EMBEDDED and LOCAL as well as REMOTE.
ak-03 p38 Default Deployment: "when you add a stand-alone server instance .. and its default JMS host is the broker started by the DAS" is questionable; please check with Satish.
ak-04 p38 Using a Message Queue Broker Cluster with a GlassFish Server Cluster: please check with Satish; same comment as ak-01.
ak-05 p38 Application Clients: ".. or standalone application accesses a JMS administered object for the first time, the client JVM retrieves the Java Message Service configuration from the server": please check with Satish for possible terminology inconsistency: does "JMS administered object" here mean "JMS resource" in GlassFish Server? And unclear wording: does the 'server' here mean the GlassFish server? Also, please check with Satish whether it is supported to look up a JMS resource in a GlassFish server from a standalone application (not a GlassFish appclient).
ak-06   This section should mention at a high level how the user can configure a JMS/MQ cluster using asadmin configure-jms-cluster, and that subcommand's usage restrictions, since configuring the Message Queue broker(s) is part of "planning" Message Queue broker deployment.
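A minimal sketch of what such a mention might show (option name as I recall it from the 3.1 subcommand - verify against the configure-jms-cluster man page; ymlcluster is a hypothetical cluster):

$ asadmin configure-jms-cluster --clustertype=enhanced ymlcluster

The usage restrictions (if I recall correctly, the subcommand must be run before the cluster is first started) would belong in this section as well.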
ak-07   For the EMBEDDED JMS service mode, the section should mention configuring the GlassFish server with a sufficient Java heap size, considering that the Message Queue broker runs in the same JVM as the GlassFish server.
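A minimal sketch of such sizing guidance (values are illustrative only, not recommendations; ymlcluster is a hypothetical target):

$ asadmin delete-jvm-options --target ymlcluster -Xmx512m
$ asadmin create-jvm-options --target ymlcluster -Xmx1024m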
ak-08 p35 "Master Broker and Client Synchronization": this section should refer to the MQ Technical Overview section on "Conventional Clusters" for what the master broker is for, and mention at a high level how the master broker is chosen in a GlassFish cluster. This section should also mention that in GlassFish 3.1, one can configure a cluster to use a shared database for MQ change records - that is, without using a master broker.
ak-09 p35 "Message Queue brokers (JMS hosts) can run in a separate JVM from the GlassFish Server process.": this sentence is confusing, since the broker can also run in the same JVM, as in the EMBEDDED JMS service mode.
Posted by a-k at Feb 10, 2011 17:34
Comment ID Location Comment
Nigel-1 Deployment Planning Guide, p17 "GlassFish Server includes a high-performance JMS broker, GlassFish Message Queue. GlassFish Server includes Message Queue" is repetition.
Nigel-2 Deployment Planning Guide, p34 Change "Message Queue, which implements JMS, is integrated with GlassFish Server, enabling you to create components such as message-driven beans (MDBs)." to "Message Queue, which implements JMS, is integrated with GlassFish Server, enabling you to create components that send and receive JMS messages, including message-driven beans (MDBs)." (This wording is intended to show that JMS isn't just about special components such as MDBs; it is also about allowing ordinary components such as session beans and servlets to send and receive messages.)
Nigel-3 Deployment Planning Guide, p45 Change
"Message Queue is integrated with GlassFish Server using a connector module, also known as a resource adapter, as defined by the Java EE Connector Architecture Specification (JCA). A connector module is a standardized way to add functionality to the GlassFish Server. Java EE components deployed to the GlassFish Server exchange JMS messages using the JMS provider integrated via the connector module. By default, the JMS provider isMessage Queue, but if you wish you can use a different JMS provider, as long as it implements a JCA resource adapter."
to
"Message Queue is integrated with GlassFish server using a resource adapter, also known as a connector module. A resource adapter is a Java EE component defined according to the Java EE Connector Architecture (JCA) Specification. This specification defines a standard way in which application servers such as GlassFish Server can integrate with enterprise information systems such as JMS providers. GlassFish server includes a resource adapter which will integrate with its own JMS provider, Message Queue. If you with to use a different JMS provider you will need to obtain and deploy a suitable resource adapter that is designed to integrate with it."
Nigel-4 Deployment planning guide, p34 I refer to "Creating a JMS resource in GlassFish Server creates a connector resource in the background. So, each JMS operation invokes the connector runtime and uses the Message Queue resource adapter in the background."

This is a bit misleading; I think it relates only to the admin console, when creating nodes under the "JMS Resources" node. These create resources specifically preconfigured to use the JMSRA resource adapter, which is for MQ only. This is a short-cut mechanism, which I don't think is available when creating resources using asadmin.

I think it's misleading to say that it creates "connector resources in the background"; it's a way to create preconfigured connector resources. In the case of connection factories, it also provides a single UI for defining both the connection factory and the connection factory pool.

If you want to create JMS Resources that use any other resource adapter (including GenericJMSRA), you need to create them under the "Connectors" node. In this case you need to explicitly specify what resource adapter you need to use. Also, in the case of connection factories, you need to create both the connection factory and its pool as separate objects.

I'm not sure what text is best to put in the documentation for this!
Nigel-5 Deployment planning guide, p35 I refer to "Thus, Message Queue does not support high availability persistent messaging in a cluster. If a broker restarts after failure, it will automatically recover and complete delivery of persistent messages. Persistent messages may be stored in a database or on the file system. However if the machine hosting the broker does not recover from a hard failure, messages may be lost."

I'm not sure whether this is meant to be a description of conventional clustering, enhanced clustering or both. It sounds like a description of conventional clustering, in which case we should avoid the bald statement that it doesn't support HA and reflect the MQ dogma that it supports service availability but not data availability.

In addition, I think this section needs to explicitly state that two types of clustering are available, and the features of each. If enhanced clustering is described elsewhere, please put in a cross-reference.

Also, what does the expression "hard failure" above mean? I think it may be a reference to conventional clustering, where the hard disk fails and is not recoverable. This is the main case where messages could be permanently lost.
Nigel-6 Deployment planning guide, p35 The section "Master Broker and Client Synchronization" needs to be extended to refer to the new feature which allows a shared database (of cluster change records) to be configured as an alternative to using a master broker.
Nigel-7 Deployment planning guide, p36 I refer to "Message Queue brokers (JMS hosts) can run in a separate JVM from the GlassFish Server process. This allows multiple GlassFish Server instances or clusters to share the same set of Message Queue brokers."

This should start by saying that by default the broker runs in the same JVM as the GlassFish instance, but that optionally it can be configured to run in a separate JVM. The reason stated applies to REMOTE mode but not to LOCAL mode (these modes are described on the next page). Another reason for using REMOTE mode is to allow the broker and instance to be on different machines. You might want to use LOCAL mode because you prefer separate JVMs for some reason, or because embedded mode is not supported for enhanced clusters. Note that embedded mode is usually the fastest by some margin.
Nigel-8 Deployment planning guide, p37 I refer to "In EMBEDDED mode, the JMS operations bypass the networking stack, which leads to performance optimization. The EMBEDDED type is most suitable for a development environment in which the DAS is the only GlassFish Server instance."

Two points here: the networking stack is only bypassed if the embedded broker is running in a standalone, non-clustered instance. If clustering is used, normal TCP connections are used. Note that embedded mode is not supported for enhanced broker clusters.

Also, I think the statement that embedded mode is for development is out of date, since in 3.1 embedded mode is the default for (conventional) clusters.
Nigel-9 Deployment planning guide, p37 I refer to this sentence on LOCAL mode: "The LOCAL type is most suitable for stand-alone GlassFish Server instances"

This is not correct. I'm not sure where this suggestion came from. I would say this mode is for use with enhanced clusters, and for other cases where the administrator prefers the use of separate JVMs.

Also, the reference to using the "start arguments attribute" also applies to embedded mode. In addition, you can specify broker properties via the admin console/asadmin, which is a new feature presumably described elsewhere.
Nigel-10 Deployment planning guide, p37 I refer to this sentence on REMOTE mode: "The REMOTE type is most suitable for GlassFish Server clusters."

This is out of date. If you configure a GlassFish cluster (whether conventional or enhanced), LOCAL mode is offered by default. The main reasons for using REMOTE is if you want to have the brokers running on different machines to the instances, perhaps to share the load amongst more machines, or if you want a different number of brokers to instances, or if you think having them on different machines gives you higher availability.
Nigel-11 Deployment planning guide, p37 Under "Default JMS Host", I refer to the sentence "If the Java Message Service type is LOCAL, then GlassFish Server will start the default JMS host when the GlassFish Server instance starts." I'm not sure that's what the default JMS Host means, especially if the instance is clustered. Please check this sentence and the rest of this section with Satish.
Nigel-12 Deployment planning guide, p37 Under "Default JMS Host", I refer to the sentence "If the Java Message Service type is LOCAL, then GlassFish Server will start the default JMS host when the GlassFish Server instance starts.".

I don't know whether the reference to the default JMS host is correct, but the basic idea is that when an instance that is defined to use LOCAL mode starts, the broker starts automatically. However, if the instance is defined to use EMBEDDED mode (which is the default for standalone instances and conventional clusters), then the broker will be started lazily when needed (where the definition of "when needed" is a bit complicated).
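Because several of the comments above turn on the jms-service type, a minimal sketch of switching a configuration between modes may help (hypothetical config name; the dotted name is assumed to follow the configs.config.<config-name>.jms-service.type pattern):

$ asadmin set configs.config.ymlcluster-config.jms-service.type=LOCAL
$ asadmin get configs.config.ymlcluster-config.jms-service.type
configs.config.ymlcluster-config.jms-service.type=LOCAL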
Posted by nigeldeakin at Feb 11, 2011 07:21
Satish-1 Deployment planning guide, p36 (Managing JMS with the Administration Console) The following section is only valid for REMOTE mode of JMS integration "Configure the JMS Hosts list to contain all the Message Queue brokers in the cluster. For example, to set up a cluster containing three Message Queue brokers, add a JMS host within the Java Message Service for each one. Message Queue clients use the configuration information in the Java Message Service to communicate with Message Queue broker."
Satish-2 Deployment planning guide, p37 (Default JMS Host) Again, the following section is only valid for REMOTE mode. Hence there should be a note stating that if you wish to configure GF to use REMOTE mode, do the following: "To use a Message Queue broker cluster, delete the default JMS host, then add all the Message Queue brokers in the cluster as JMS hosts. In this case, the default JMS host becomes the first JMS host in the JMS host list."
Satish-3 Deployment planning guide, p38 (Using a Message Queue Broker Cluster with a GlassFish Server Cluster) We will need to mention here that when a GF cluster is configured, a MQ cluster is auto-configured with each GF instance associated with a MQ broker instance. The description that is currently in this section is only valid for REMOTE mode and should explicitly say so -"To configure a GlassFish Server cluster to use a Message Queue broker cluster, add all the Message Queue brokers as JMS hosts in the GlassFish Server's Java Message Service. Any JMS connection factories created and MDBs deployed will then use the JMS configuration specified."
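A minimal sketch of the REMOTE-mode setup Satish describes (hypothetical broker host names; default_JMS_host is the preconfigured JMS host):

$ asadmin create-jms-host --target ymlcluster --mqhost broker1.example.com --mqport 7676 broker1
$ asadmin create-jms-host --target ymlcluster --mqhost broker2.example.com --mqport 7676 broker2
$ asadmin delete-jms-host --target ymlcluster default_JMS_host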
Posted by sats_i at Feb 14, 2011 04:50

I have incorporated all the Deployment Planning Guide comments from ak, Nigel, and Satish. I trust that in incorporating Satish's comments I'm also incorporating the comments from ak and Nigel that ask for Satish's input. I have posted another Deployment Planning Guide draft.

Posted by rebeccaparks at Feb 14, 2011 14:02

Comments on create-service-tasks.pdf (Configuring a DAS or GF Server Instance for Automatic Restarts)

Comment ID Location Comment
sre-001
p90, First paragraph of reviewable text
"aGlassFish Server" should be "a GlassFish Server"
sre-002
p92, first paragraph
Are the links created in all the /etc/rc?.d directories, or a subset of them?
Posted by tecknobabble at Feb 21, 2011 07:09

Comments on obsolete-options.pdf:

Comment ID Location Comment
trm-001 p15 More obsolete options:
--target is now obsolete for create-connector-connection-pool, create-resource-adapter-config, delete-connector-connection-pool, delete-connector-security-map, delete-jdbc-connection-pool, delete-resource-ref
--autoapplyenabled is obsolete for create-http-lb
--retrievefile is obsolete for export-http-lb-config
--description is obsolete for restore-domain
--autohadboverride is obsolete for start-cluster, stop-cluster
--setenv is obsolete for start-instance
--target is now an operand rather than an option for list-custom-resources, list-jndi-entries
The --ignoredescriptoritem option is now all lowercase for set-web-context-param, set-web-env-entry
Posted by trmueller at Feb 21, 2011 11:41
Comment ID Response
trm-001 Decline. In this context "inherit" is OK.
trm-002 Done.
trm-003 Done.
Posted by pauldavies at Feb 25, 2011 13:33
Comment ID Response
sre-001 Done.
sre-002 Done. After further research, the description was found to be inaccurate and was changed.
Posted by pauldavies at Feb 25, 2011 13:35
Comment ID Response
sre-001 Done.
sre-002 Deferred awaiting response from subject matter expert.
sre-003 Done. The cross-reference to the man page is replaced with a cross-reference to the section about default synchronization, and the cross-references to the procedures are replaced with a cross-reference to the procedure for resynchronizing an instance. I have also made this change in To Restart an Individual Instance Locally (p65).
sre-004 Done.
sre-005 Done.
sre-006 See response to sre-002
sre-007 Done, and similar text on p62 also corrected.
sre-008 Done.
sre-009 Done. However, note that the version of the command with --hosts would not give a syntax error. The asadmin utility always accepts the --hosts option, regardless of what the subcommand does or doesn't do with it.
sre-010 Done.
Posted by pauldavies at Feb 25, 2011 13:37
Comment ID Response
jfd-001 Done.
jfd-002 Rebecca updated the Deployment Planning Guide, where this information is more appropriate.
jfd-003 Done, and in the corresponding local procedure on p58.
jfd-004 Decline "is an SSH node". The decision to avoid mentioning SSH explicitly was taken to allow for the possibility in a future release of nodes that are enabled for remote communication by some other means.
Done: "represents the host on which the DAS is running."
jfd-005 Decline. The response to jfd-004 should be sufficient.
jfd-006, jfd-007 Same response as for jfd-004.
jfd-008 Done.
jfd-009 Done.
jfd-010 Done.
jfd-011 Done.
jfd-012 Done.
Posted by pauldavies at Feb 25, 2011 13:39
Comment ID Response
TJQ-002 Done.
Posted by pauldavies at Feb 25, 2011 13:40
Comment ID Response
JC-001 Done by restructuring the information to clarify that the synchronization behavior for all files and directories under lib is the same.
JC-002 Done/discuss: "requires restart" is not the same as "requires resynchronization". An instance requires resynchronization only if it is stopped. I have updated this section accordingly. I also plan to add a description of "requires restart" to the section about impact of configuration changes in the Administration Guide.
JC-003 Ignored as requested.
JC-004 Done. The man page described an option that was erroneously added to the subcommand. Both the subcommand and the man page have since been corrected.
JC-005 Answer: As the export-sync-bundle(1) man page explains, the difference is that in the latter case, an error occurs. You can't specify a clustered instance as a target, only a cluster or a standalone instance. I have added this information to the procedure.
JC-006 Answer: No action required here. The cross-reference should be added to the Upgrade Guide.
Posted by pauldavies at Feb 25, 2011 13:42
Comment ID Response
trm-001 Done.
trm-002 to trm-005 Done by another writer.
Posted by pauldavies at Feb 25, 2011 13:44
Comment ID Response
cm-001 Done.
cm-002 Answer: Not an issue. The page numbers are generated automatically. Content was added since this draft was created - hence the discrepancy.
Posted by pauldavies at Feb 25, 2011 13:45
Comment ID Response
sre-001 Done.
sre-002 Done.
sre-003 Done.
sre-004 Answer: Yes, and the description is updated accordingly.
sre-005 Done/discuss: "requires restart" is not the same as "requires resynchronization". An instance requires resynchronization only if it is stopped. I have updated this section accordingly. I also plan to add a description of "requires restart" to the section about impact of configuration changes in the Administration Guide.
sre-006 Done.
sre-007 Done. I have made explicit what is implied in the procedure by adding: "In this example, multiple nodes are defined for the GlassFish Server installation that is running on the node's host."
sre-008 Done. I have rewritten as follows to change the emphasis as you suggest: "You must add files in some directories for library files to the class path yourself. Files in other directories are added to the class path automatically."
sre-009, sre-010 Done.
sre-011 Done. This information was in the v2 documentation and has been forward-ported to 3.1. I am not aware of any effort to move this documentation from the documentation on the ClassLoader hierarchy.
sre-012 Discuss: Actually more precise to say, "only the JAR file name is required," which is also more consistent with the text that follows the example.
sre-013 Done.
sre-014 The intended use case is to add files to the default list. I have rewritten this section accordingly to make it less dangerous.
sre-015 Decline/discuss: The information for this section was provided by Bill Shannon, who told me that the doNotRemoveList flag is not in this release. Perhaps you would care to file an RFE to request its reinstatement.
sre-016 Done.
sre-017 Decline. This section is outwith the scope of this review and its presence here is an accident of pagination. The section continues beyond the end of the PDF file. To review this section, see the EJB doc review page.
Posted by pauldavies at Feb 25, 2011 13:48
Comment ID Response
trm-001 Done.
trm-002 Done.
trm-003 Done.
trm-004 Done by restructuring the information to clarify that the synchronization behavior for all files and directories under lib is the same.
trm-005 Done/discuss: "requires restart" is not the same as "requires resynchronization". An instance requires resynchronization only if it is stopped. I have updated this section accordingly. I also plan to add a description of "requires restart" to the section about impact of configuration changes in the Administration Guide.
trm-006 Done.
trm-007 Done.
trm-008 Done.
trm-009 Done.
trm-010 Done.
trm-011 Done.
trm-012 Done.
trm-013 Done.
trm-014 Done.
trm-015 Done.
trm-016 Typo corrected. Otherwise, decline. This section is outwith the scope of this review and its presence here is an accident of pagination. The section about Migrating EJB Timers and this section both legitimately belong in the same chapter, which discusses the administration of GlassFish Server instances.
Posted by pauldavies at Feb 25, 2011 13:49
Comment ID Response
jc-001 Done.
jc-002 Answer: The procedure has been expanded to explain how to create the file on the target host or transfer the file from the DAS to the target host. The example has been changed so that all steps are performed on the target host.
jc-003 Done.
Posted by pauldavies at Feb 25, 2011 13:50

Responses only to comments on content for which I am responsible:

Comment ID Response
trm-005 Decline. The section that is cross-referenced explains.
trm-006 to trm-008 Answer: This text was mistakenly carried forward from the documentation for an earlier release. I have updated this section to cross-refer to "To Resynchronize an Instance and the DAS Offline."
Posted by pauldavies at Feb 25, 2011 13:52
Comment ID Response
sre-001 Done.
sre-002 Answer: According to the engineer, if present, links are created in these directories: 0, 1, 2, 3, 4, 5, 6, and S. I have updated the text accordingly.
Posted by pauldavies at Feb 25, 2011 13:53
Comment ID Response
trm-001 Done.
Posted by pauldavies at Feb 25, 2011 13:54