I recently ran into a puzzling TCP communication problem in the field that I haven't been able to explain. Could the experts on this forum help me analyze it?
Background:
A client device connects to a remote server over a communication network, using TCP/IP.
The client periodically sends data to the server, and the server receives it. The data rate is about 1.2 Mbps, and the bandwidth the communication network allocates to the client is about 2 Mbps.
The symptom:
The communication network frequently becomes congested. During these congestion periods, the RTT of the TCP segments the client sends rises to 200+ ms, compared with roughly 30 ms under normal conditions.
The client application writes one frame of data to the TCP send buffer every 20 ms (via write on the socket). In the application I set SO_SNDBUF to a larger value (within what the kernel parameters allow, and I confirmed it took effect). By my calculation, even if this level of congestion leaves already-sent data sitting in the TCP send window because no ACK has arrived to clear it, the TCP send buffer should be able to hold at least five seconds' worth of data.
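Roughly, this is how I set and verify the buffer size (a simplified sketch, not my exact code; fd stands for the already-connected client socket):

#include <stdio.h>
#include <sys/socket.h>

/* Sketch: set SO_SNDBUF to 256 KB and read back the effective value.
 * On Linux the kernel doubles the value passed to setsockopt() to leave
 * room for bookkeeping overhead, and getsockopt() returns that doubled
 * value (see socket(7)). */
static int set_and_check_sndbuf(int fd)
{
    int requested = 256 * 1024;
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   &requested, sizeof(requested)) < 0) {
        perror("setsockopt SO_SNDBUF");
        return -1;
    }

    int effective = 0;
    socklen_t len = sizeof(effective);
    if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &effective, &len) < 0) {
        perror("getsockopt SO_SNDBUF");
        return -1;
    }

    /* If effective is less than twice the requested size, the request was
     * clipped by net.core.wmem_max and the real buffer is smaller than
     * intended. */
    printf("SO_SNDBUF requested %d, effective %d\n", requested, effective);
    return 0;
}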
In practice, however, once the network latency rises to just a few hundred milliseconds, the client application's write on the socket fails and prints "Resource temporarily unavailable" (my client socket is in blocking mode). I assume the write fails because the TCP send buffer is full. But according to my calculation from the packet capture, at the moment the write fails the send buffer should hold only about 40 KB of data (sent but not yet ACKed, plus not yet sent), while the buffer size I configured is 256 KB.
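To cross-check that 40 KB figure at the exact moment of failure, I'm considering asking the kernel directly instead of reconstructing it from the capture, roughly like this (a sketch only, not in my current code; SIOCOUTQ and SIOCOUTQNSD are the Linux-specific ioctls that, as I understand them, report un-ACKed and unsent bytes on the socket):

#include <errno.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/sockios.h>   /* SIOCOUTQ, SIOCOUTQNSD (Linux-specific) */

/* Sketch of a diagnostic: when write() fails with EAGAIN, ask the kernel
 * how much data is still queued on the socket.
 * SIOCOUTQ    = bytes written but not yet ACKed (includes unsent data);
 * SIOCOUTQNSD = bytes written but not yet sent to the network. */
static ssize_t write_frame(int fd, const void *buf, size_t len)
{
    ssize_t n = write(fd, buf, len);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        int unacked = 0, unsent = 0;
        if (ioctl(fd, SIOCOUTQ, &unacked) == 0 &&
            ioctl(fd, SIOCOUTQNSD, &unsent) == 0) {
            fprintf(stderr,
                    "send buffer full: %d bytes un-ACKed, %d bytes unsent\n",
                    unacked, unsent);
        }
    }
    return n;
}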
Why does a 256 KB send buffer report full when it holds only about 40 KB of data?
I understand that the send buffer does not store just the application payload and that there is additional overhead, but can that really account for so much? Or is there something else wrong in my understanding of how TCP works?