Hello All,

Following is the understanding (summary) we have had of the meetings over the week -

1. We have opted for option 1 of the two options we have discussed for handling failover.
Option 1 - Pseudo-failure upon recovery of the node. Once a failed node recovers, any requests tied to its hash key would be sent to the recovered node. Consistent hashing would ensure this. (A minimal consistent-hashing sketch is included after point 4's section list below.)
Option 2 - Using timestamps (node last failure time / node last recovery time / delta-t). This fundamentally places requirements on SIPSessionUtil. This option would be recorded as an alternative to be considered in Rnext of SailFin. It requires more work across the various components - CLB, Container/Replication and GMS.

2. Policy for HTTP load balancing
The default policy would be round-robin. However, configuration would allow specifying the use of consistent hashing. This places a design constraint on the application: it must expose the equivalent hashing values as HTTP query parameters in the first HTTP request establishing the SAS. Documentation must note this constraint. SIPSessionUtil remote access would result in null in the case of split sessions. (A query-parameter sketch is also included after point 4's section list below.)

3. SASKey
Decided to go for option 1 of the two options considered.
Option 1 - Use consistent hashing, configurable using a set of rules defined over the cluster.
Option 2 - Use a programmatic interface, for which the developer needs to provide an implementation. This interface would process the request header and request line in an application-specific way and return a java.lang.String/Object (representation of the SASKey), which is the input for consistent hashing. This also places requirements on the deployment framework to be able to make such a component available as a utility which can be invoked by the CLB server component. This option would be captured in a to-do list for Rnext of SailFin. (An illustrative sketch of such an interface is included after point 4's section list below.)

4. Updating the ConvergedLB FS
AI - Joel would be updating the following sections of the spec -
- 2.4.1 Configuration
Details on converged load balancer rules
- 2.4.2 LB Runtime
Updated details on handling of initial and subsequent requests while using Via, Local Contact, Remote Contact and Record-Route.
- 2.5.2 Container Changes
To reflect the commonly referenced back-end changes needed to support the load balancer functioning. These are changes on the SIP stack that are not tightly integrated into the core container but are maintained as a loosely coupled architectural component (layer).
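
As referenced under point 1, here is a minimal Java sketch of the consistent-hashing behaviour behind the pseudo-failure approach. The class and method names (ConsistentHashRing, instanceFor, etc.) are ours for illustration and are not taken from the SailFin code or the FS; the point is only that removing a failed instance re-routes its keys elsewhere, and adding it back on recovery sends those same keys to it again.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring: instances own contiguous hash ranges, so a key
// always maps to the same instance as long as that instance is on the ring.
// Removing a failed instance re-routes only its keys; adding it back on recovery
// sends those same keys to it again (the "pseudo-failure upon recovery" behaviour).
public class ConsistentHashRing {
    private final SortedMap<Long, String> ring = new TreeMap<>();

    public void addInstance(String instanceName) {
        ring.put(hash(instanceName), instanceName);
    }

    public void removeInstance(String instanceName) {
        ring.remove(hash(instanceName));
    }

    // Pick the first instance at or after the key's position on the ring,
    // wrapping around to the start of the ring if necessary.
    public String instanceFor(String hashKey) {
        if (ring.isEmpty()) return null;
        SortedMap<Long, String> tail = ring.tailMap(hash(hashKey));
        Long slot = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(slot);
    }

    private long hash(String value) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(value.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}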
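
As referenced under point 2, a sketch of how a CLB-side component might read the hash value that the application exposes as an HTTP query parameter on the first request establishing the SAS. The parameter name "sasKey" and the null fallback (meaning "fall back to round-robin") are assumptions for illustration, not part of the ConvergedLB FS.

import javax.servlet.http.HttpServletRequest;

// Illustrative only: the parameter name and fallback behaviour are assumptions.
public class HttpHashKeyResolver {

    // Extract the hash key that the application exposed as a query parameter
    // on the first HTTP request establishing the SAS; return null when no key
    // is present, signalling that round-robin should be used instead.
    public String resolveHashKey(HttpServletRequest request) {
        String key = request.getParameter("sasKey"); // assumed parameter name
        return (key == null || key.isEmpty()) ? null : key;
    }
}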
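
As referenced under point 3, an illustrative shape for the deferred option 2 (the programmatic interface captured for Rnext). The interface and method names are hypothetical; the summary only says the developer-provided implementation processes the request line and headers in an application-specific way and returns a java.lang.String/Object SAS key that feeds consistent hashing.

import javax.servlet.sip.SipServletRequest;

// Hypothetical shape of the deferred option 2: the application developer supplies
// the implementation, and the CLB server component invokes it to derive the SAS key
// used as input to consistent hashing. Names are illustrative, not from the FS.
public interface SasKeyExtractor {

    // Inspect the request line and headers in an application-specific way and
    // return a String representation of the SAS key.
    String extractSasKey(SipServletRequest request);
}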
5. Erik would update the Session Replication group on point 1. Given this, Session Replication would be designed to take the pseudo-failure use case into account.

6. Overall work distribution
Ericsson - To provide the SIP load balancer component / Proxy
- Common components which the HTTP CLB also depends on / uses. Ex - consistent hashing; rules support for consistent hashing (refactor the existing EAS rules configuration file)
- Integration of the configuration support for the SIP CLB.
Sun - Converged HTTP load balancer component / Proxy
- Seed the common design interfaces
- GMS support to detect the failure and recovery of instances (cluster membership changes)
- Runtime support for load balancer configuration (converged-loadbalancer.xml / supporting DAS changes for load balancer initialization as part of server start-up)
- The admin group (Sun/Ericsson) is responsible for core admin (CLI / GUI / DAS changes)
regards Pankaj