WAS cluster request distribution and load balancing problems


The production environment has three servers, hereinafter referred to as A/B/C. The original poster built three WAS nodes on A/B/C, with IHS and the Deployment Manager (DM) deployed on A; there is only this one IHS. A simple 1+3 architecture.

A horizontal cluster was built, with three application servers distributed one per node. In the server configuration, only the JVM heap size, the web container size, the JVM logs, and the session cookie name (different for each server) were changed; everything else was left as-is.

On IHS, only MaxClients was changed; everything else was left as-is.

The IHS plugin uses the default configuration, unchanged.
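For orientation, the cluster section that WAS generates in plugin-cfg.xml for such a setup looks roughly like the sketch below. Hostnames, ports, and the b1/c1 CloneIDs are illustrative assumptions (the a1 CloneID is taken from the debug log quoted further down); LoadBalance defaults to Round Robin:

    <!-- Sketch of a generated plugin-cfg.xml cluster section; trimmed,
         with assumed hostnames, ports, and b1/c1 CloneIDs. Not the
         poster's actual file. -->
    <ServerCluster Name="cluster2" LoadBalance="Round Robin" RetryInterval="60">
        <Server Name="Node134_cluster2_server_a1" CloneID="1a854qhjl">
            <Transport Hostname="hostA" Port="9080" Protocol="http"/>
        </Server>
        <Server Name="hrac1Node01_cluster2_server_b1" CloneID="assumedb1">
            <Transport Hostname="hostB" Port="9080" Protocol="http"/>
        </Server>
        <Server Name="hrac2Node01_cluster2_server_c1" CloneID="assumedc1">
            <Transport Hostname="hostC" Port="9080" Protocol="http"/>
        </Server>
        <PrimaryServers>
            <Server Name="Node134_cluster2_server_a1"/>
            <Server Name="hrac1Node01_cluster2_server_b1"/>
            <Server Name="hrac2Node01_cluster2_server_c1"/>
        </PrimaryServers>
    </ServerCluster>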

The problems faced by the original poster:

In the application, login stores some information in the session, and a later request reads it back. In the production environment, however, the next request sometimes cannot find the session saved by the previous request; the probability is small.

Further testing showed that if server A is shut down, leaving only B/C, the probability of failing to find the session increases greatly and can reach 50%.

And if server A is up, most requests seem to be sent to server A for processing.

The IHS plugin log was adjusted to the DEBUG level so the requests could be traced.
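(Side note, for anyone reproducing this: the plugin log level is set on the Log element in plugin-cfg.xml; the path below is an illustrative assumption, not the poster's actual path.)

    <!-- Raises the web server plugin log to Debug level. -->
    <Log LogLevel="Debug" Name="/opt/IBM/HTTPServer/logs/http_plugin.log"/>

With debug logging enabled, tracing revealed the following pattern: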

1) If a request is answered by server B or C, the next request is sent to A, which is why the session cannot be found.

Plugin log: DEBUG: ws_server_group: serverGroupNextRoundRobinServer: Round Robin load balancing (RR)

2) If a request is answered by server A, the next request basically also goes to A.

Plugin log: DEBUG: ws_common: websphereParseCloneID: Parsing clone ids from 0000hywovdhhx4htesmetgkoccz:1a854qhjl (session affinity: the suffix after the colon is the clone ID of the server that owns the session)

3) The statistics recorded by the plugin are as follows; note that only a1 has any affinityRequests, and a1 answers the vast majority of requests:

STATS: ws_server: serverSetFailoverStatus: Server Node134_cluster2_server_a1: pendingRequests 0 failedRequests 0 affinityRequests 997 totalRequests 1026

STATS: ws_server: serverSetFailoverStatus: Server hrac1Node01_cluster2_server_b1: pendingRequests 0 failedRequests 0 affinityRequests 0 totalRequests 5

STATS: ws_server: serverSetFailoverStatus: Server hrac2Node01_cluster2_server_c1: pendingRequests 0 failedRequests 0 affinityRequests 0 totalRequests 35

From the above test results, server A has the session affinity effect; as long as it is up, the load concentrates on it.

Servers B/C have no session affinity effect; requests to them are distributed by round-robin polling, so the next request may fail to find the session that the previous request saved.

Checking plugin-cfg.xml shows the configuration AffinityCookie="cluster2_a1", but no configuration for the other two session cookies, cluster2_b1/cluster2_c1.

That is to say, only cluster2_a1 has session affinity, while cluster2_b1/cluster2_c1 do not.
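A plausible explanation, based on the plugin-cfg.xml format rather than anything confirmed in this thread: AffinityCookie is a single attribute on each Uri element in the UriGroup, so one URI pattern carries only one affinity cookie name for the whole cluster. The fragment involved looks roughly like this (the URI pattern and group names are illustrative):

    <!-- Sketch of the affinity-related fragment of plugin-cfg.xml.
         AffinityCookie defaults to JSESSIONID and is one value per Uri,
         not one per cluster member. -->
    <UriGroup Name="default_host_cluster2_URIs">
        <Uri AffinityCookie="cluster2_a1" AffinityURLIdentifier="jsessionid" Name="/app/*"/>
    </UriGroup>
    <Route ServerCluster="cluster2" UriGroup="default_host_cluster2_URIs" VirtualHostGroup="default_host"/>

Since each cluster member was given a different session cookie name, only one of those names can land here, which would explain why only a1 keeps affinity. The plugin distinguishes members by the CloneID suffix inside the cookie value, not by the cookie name, so the common practice is to leave all members on the same cookie name (the default JSESSIONID).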


If the two cookies cluster2_b1/cluster2_c1 are also added to the AffinityCookie configuration above, still only cluster2_a1 has the affinity effect.

Summary of the question: why does WAS give an AffinityCookie only to a1 and not to the other two? Won't this cause load imbalance?

Is there any way to make all 3 servers have the affinity effect?

CodePudding user response:

When creating a new cluster, pay attention to checking the option "Configure HTTP session memory-to-memory replication"; this makes the session shared within the cluster.
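As a supplementary sketch (an assumption added here, not part of the reply above): memory-to-memory replication is usually paired with marking the web application distributable in its deployment descriptor, roughly:

    <!-- web.xml fragment; the distributable element declares that the
         application's sessions may be replicated across JVMs. -->
    <web-app>
        <distributable/>
    </web-app>

With replication in place, even a request that lands on a different cluster member can still find the session, regardless of affinity.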