UDP Packet loss when sending possible?


My professor gave me the task of implementing a file transfer over UDP that adds the reliability mechanisms TCP provides (CRC check, correct packet order, ACK/NACK). He gave me some default classes (Socket & Channel) that simulate packet loss and delay on the local machine. Those classes also simulate packet loss while sending. However, this means that if an ACK is not delivered correctly I cannot notice it either, since an ACK from server to client is not itself confirmed with an ACK for the ACK from client to server. I thought that packet loss could only happen when receiving packets. Is it possible in real cases that a packet is lost while sending without the code getting an exception?

Greetings

CodePudding user response:

Is it possible in real cases that an packet can be lost while sending without getting a code exception?

Easy. Simply send the packets from your application faster than the network interface can forward them. Of course, any intermediate systems on the way (i.e. switches, routers) might also get overloaded and lose packets.

But in the end it does not actually matter how the packet is lost, i.e. whether on the local system while sending, on the remote system while receiving, or in between while forwarding. One simply cannot assume that a successful send will be matched by a successful recv.
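
You can see this for yourself with a minimal sketch (using a plain DatagramSocket rather than your professor's Socket/Channel classes, and a made-up target port where nothing needs to be listening): every send() returns without an exception, even though none of the datagrams has to arrive anywhere.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    public class UdpFireAndForget {
        public static void main(String[] args) throws Exception {
            // Hypothetical target: nothing has to be listening on this port.
            InetAddress target = InetAddress.getByName("127.0.0.1");
            int port = 40000;

            byte[] payload = new byte[1024];
            try (DatagramSocket socket = new DatagramSocket()) {
                for (int i = 0; i < 10_000; i++) {
                    // send() only hands the datagram to the local stack; it returns
                    // normally even if the packet is later dropped by a full queue,
                    // an overloaded router, or the receiver itself.
                    socket.send(new DatagramPacket(payload, payload.length, target, port));
                }
            }
            System.out.println("All sends returned without an exception, "
                    + "but that guarantees nothing about delivery.");
        }
    }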

CodePudding user response:

IP packets carry the transport protocol (TCP, UDP, etc.) in their payload, and IP packets are lost all the time due to things like oversubscription at some point in the path. QoS controls can also use something like RED, which purposely discards packets in order to keep queues from filling up and tail-dropping. TCP can detect that TCP segments (not packets) were lost and resend them, but UDP has no mechanism to notice that its datagrams (not packets) were lost. UDP is a fire-and-forget protocol.
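
That is also why your protocol should handle a lost ACK by retransmitting after a timeout rather than by ACKing the ACK: the sender keeps resending until it sees an ACK with the right sequence number, and the receiver simply drops duplicates it has already seen. Here is a rough sketch of the sender side (class and method names, the timeout value, and the one-byte alternating sequence number are my own assumptions, not your professor's API):

    import java.io.IOException;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;

    public class StopAndWaitSender {
        private static final int TIMEOUT_MS = 500;   // assumed retransmission timeout
        private static final int MAX_RETRIES = 10;

        // Sends one data packet and waits for its ACK, retransmitting on timeout.
        // seq is a one-byte alternating sequence number (0/1) prepended to the payload.
        static void sendReliably(DatagramSocket socket, InetAddress addr, int port,
                                 byte seq, byte[] data) throws IOException {
            byte[] packet = new byte[data.length + 1];
            packet[0] = seq;
            System.arraycopy(data, 0, packet, 1, data.length);
            DatagramPacket out = new DatagramPacket(packet, packet.length, addr, port);

            socket.setSoTimeout(TIMEOUT_MS);
            byte[] ackBuf = new byte[1];
            DatagramPacket ack = new DatagramPacket(ackBuf, ackBuf.length);

            for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
                socket.send(out);                    // may be lost: send() cannot tell us
                try {
                    socket.receive(ack);             // wait for an ACK carrying the same seq
                    if (ackBuf[0] == seq) {
                        return;                      // acknowledged, move on to next packet
                    }
                    // ACK for an older packet: ignore it and keep retransmitting
                } catch (SocketTimeoutException e) {
                    // either the data packet or its ACK was lost: retransmit
                }
            }
            throw new IOException("no ACK after " + MAX_RETRIES + " attempts");
        }
    }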
