What happens when a socket client and server both send() simultaneously?


I'm trying to learn about sockets, and most simple examples online have either the client or server doing most of the send()'ing and the other doing most of the recv()'ing, or vice-versa. Sometimes there's a good mix of both, but there's a simple "protocol", such that you always know when to expect a message, or when it's ok to send one, etc.

I'm imagining a client and server, both with their own list of things they want to send, and both end up calling send() at once. What happens? It seems like they would both block or timeout waiting for each other to recv().

One thing I've seen is sending code that uses select(...) to check that the socket is writable before calling send(). Is this how the problem is solved in practice? Is that guaranteed not to be racy?

Is this problem simply avoided by protocols on top of TCP for example?

CodePudding user response:

Sockets are bi-directional. Either party can send() data at any time. send() will not block waiting for the other party to recv() unless the socket is in blocking mode (the default) and its send buffer is full, which happens when the other party stops reading and its receive buffer fills up.

It is pretty rare to have both parties sending at the same time, but protocols can certainly allow it. For instance, a client may pipeline its commands, so the server is still sending the response to an earlier command while the client is already sending a newer one. Or a server may push an unsolicited notification while the client is sending a command.

And yes, you should implement a protocol that defines the rules for who can send and read, and when they should do so.

But typically, the way to avoid a deadlock when both parties send at the same time is not to read and send in the same thread to begin with, or else to use asynchronous I/O that can be multiplexed in the same thread. Either way allows you to perform sends and reads in parallel.
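A minimal sketch of the threaded approach, assuming an already-connected TCP socket `sock`, a `queue.Queue` of outgoing byte strings, and a hypothetical `handle()` callback for incoming data:

```python
import queue
import socket
import threading

def run_peer(sock: socket.socket, outgoing: "queue.Queue[bytes]", handle) -> None:
    """Send and receive on the same connected socket from two threads, so a
    blocked send can never stop us from draining the peer's incoming data."""

    def receiver() -> None:
        while True:
            data = sock.recv(4096)
            if not data:              # peer closed the connection
                break
            handle(data)              # hypothetical application callback

    def sender() -> None:
        while True:
            msg = outgoing.get()      # block until there is something to send
            sock.sendall(msg)         # may block, but the receiver keeps reading

    threading.Thread(target=receiver, daemon=True).start()
    threading.Thread(target=sender, daemon=True).start()
```

Because the receiving thread is always willing to call recv(), the peer's send buffer keeps draining even while our own sendall() is blocked.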

CodePudding user response:

No protocol can work successfully if it has any state in which it allows both sides to wait for the other. So, for example, suppose you have a protocol that allows both sides to initiate the sending of a very large block of data to the other. You could get into a situation where both sides are blocked in send, each waiting for the other to read some data before they can make more forward progress.

To fix this, a protocol with this potential deadlock can either permit only one particular side to do this, or permit neither side to. If the protocol permits neither side to do it, that means you must not wait for your sending operation to complete before receiving.

There are two common ways to honor this requirement:

  1. You can use a different thread to send and to receive. So long as the receiving thread is always willing to receive data, no deadlock is possible.

  2. You can use a non-blocking send operation. If the send buffer fills up, you can call receive to drain some data and retry the send later (see the sketch after this list).
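A minimal sketch of option 2, using a non-blocking socket and select() to decide whether to send or receive next (the function name `pump` and the fixed 4096-byte reads are illustrative, not part of any real protocol):

```python
import select
import socket

def pump(sock: socket.socket, to_send: bytes) -> bytes:
    """Push `to_send` out without ever blocking in send: whenever the kernel
    send buffer is full, fall back to receiving so the peer can keep making
    progress. Returns whatever bytes were received along the way."""
    sock.setblocking(False)
    received = bytearray()
    while to_send:
        readable, writable, _ = select.select([sock], [sock], [])
        if writable:
            try:
                sent = sock.send(to_send)   # partial sends are normal
                to_send = to_send[sent:]
            except BlockingIOError:
                pass                        # buffer filled between select and send
        if readable:
            chunk = sock.recv(4096)
            if not chunk:                   # peer closed the connection
                break
            received.extend(chunk)
    return bytes(received)
```

The key property is that a full send buffer never stops us from receiving: if send() cannot make progress, we fall back to recv(), which is exactly what keeps the other side's send from stalling forever.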

On a related note: a common mistake in TCP code that knows how many bytes it expects is to check how many bytes are available (with some kind of 'peek' operation) and wait for the full count to be available before actually performing the read. This can also deadlock.

TCP allows either side to say "I won't send any more data until the other side receives the data I've already sent". So it can't allow either side to ever say "I won't receive any more data until the other side sends more than it has already sent".

This can deadlock even if data only flows in one direction and even if the mistake is only made on one side!
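A minimal sketch of the safe pattern: loop on recv(), accepting whatever has arrived so far, instead of peeking and waiting until the full count has been buffered (the name `recv_exactly` is illustrative):

```python
import socket

def recv_exactly(sock: socket.socket, count: int) -> bytes:
    """Read exactly `count` bytes by looping on recv() and taking whatever has
    arrived so far, rather than waiting for the whole amount to be buffered."""
    buf = bytearray()
    while len(buf) < count:
        chunk = sock.recv(count - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before sending the full message")
        buf.extend(chunk)
    return bytes(buf)
```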
