• A dialog fragment may involve a chain of applications. What happens if one of the applications gets undeployed? Should the dialog terminate? Erik to find out how this is handled in EAS.
  • The LB team has decided to apply the consistent hashing algorithm consistently: once a failed instance has recovered, any of its "original" traffic will be routed back to it (see the CLB meeting minutes from August 20, 2007). Consequently, when a request attempts to resume an active session that failed over, we will have to migrate that session from the failover instance back to the recovered instance. If the active session happens to be involved in any transactions at the time it is migrated, those transactions will be lost, because transactions are not replicated.
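    The failback behavior above can be sketched with a minimal consistent-hash ring. This is an illustrative model only (class and instance names are invented, and a real ring would use many virtual nodes per instance), but it shows why migration is needed: after recovery, the ring deterministically routes the key back to its original owner, while the live session state is still on the failover instance.

    ```java
    import java.util.List;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Minimal consistent-hash ring sketch; names are illustrative, not CLB's actual code.
    public class ConsistentHashRing {
        private final SortedMap<Integer, String> ring = new TreeMap<>();

        public ConsistentHashRing(List<String> instances) {
            for (String inst : instances) markUp(inst);
        }

        public void markDown(String instance) {
            ring.values().removeIf(instance::equals);
        }

        public void markUp(String instance) {
            // A real implementation would place many virtual nodes per instance;
            // one point per instance keeps the sketch short.
            ring.put(hash(instance), instance);
        }

        // Route a session key to the first instance at or after its hash on the ring.
        public String ownerOf(String sessionKey) {
            int h = hash(sessionKey);
            SortedMap<Integer, String> tail = ring.tailMap(h);
            return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
        }

        private static int hash(String s) {
            return s.hashCode() & 0x7fffffff; // keep it non-negative
        }

        public static void main(String[] args) {
            ConsistentHashRing ring = new ConsistentHashRing(List.of("inst-1", "inst-2", "inst-3"));
            String key = "call-abc";
            String original = ring.ownerOf(key);
            ring.markDown(original);   // instance fails: key now routes to a failover instance
            String failover = ring.ownerOf(key);
            ring.markUp(original);     // instance recovers: the same key routes back to it
            // At this point the active session for `key` still lives on `failover`
            // and must be migrated back to `original` when a request resumes it.
            System.out.println("routed back to original: " + original.equals(ring.ownerOf(key))
                    + ", failover differed: " + !failover.equals(original));
        }
    }
    ```

    Because the hash is deterministic, recovery alone restores the routing; only the session state needs to move.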
  • Another important decision taken at the CLB meeting was that SipSessionsUtil.getApplicationSession() must return null if the requested SAS resides on a different instance.
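    A hedged sketch of what that contract means for the lookup path follows. The class and method names here are invented stand-ins for the container internals behind SipSessionsUtil.getApplicationSession(), not SailFin's actual implementation; the point is only that the lookup consults locally-resident sessions and returns null for remote ones rather than fetching them.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical sketch of the decided contract: a lookup that sees only
    // SipApplicationSessions hosted on THIS instance. Names are illustrative.
    public class LocalSasLookup {
        // sasId -> application session object; only locally-resident sessions.
        private final Map<String, Object> localSessions = new ConcurrentHashMap<>();

        public void addLocal(String sasId, Object sas) {
            localSessions.put(sasId, sas);
        }

        // Mirrors SipSessionsUtil.getApplicationSession(id): per the CLB decision,
        // return null when the requested SAS resides on a different instance,
        // instead of resolving it remotely.
        public Object getApplicationSession(String sasId) {
            return localSessions.get(sasId); // null => not on this instance
        }
    }
    ```

    The practical consequence is that application code must handle a null return even for a SAS id that is valid cluster-wide.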
  • While some data (such as SipSession, SipApplicationSession, and Timers) are application-specific and may be configured on a per-application basis (see the session-timeout element in sip.xml as an example), there are additional data structures, such as dialog fragments, that span multiple applications and also need to be replicated. Each dialog fragment contains a sequence of path nodes, each of which corresponds to an application and has a SipSession and a SipApplicationSession associated with it. Dialog fragments are mapped by a compound key consisting of fromTag+toTag+fragmentId+callId.
    While we want application-specific data to be replicated by an application-specific replication manager, dialog fragments will need to be replicated by a container-wide dialog replication manager, which may be implemented as a singleton. GlassFish currently has no notion of a container-wide replication manager.