I'm on the server side:
Create a socket: fd.
Create an epoll instance: epoll_fd.
A dedicated thread monitors fd and accepts new connections; each new connection fd-x is registered with epoll_fd via epoll_ctl.
Another thread calls epoll_wait(epoll_fd), finds which fd-x has data ready, reads the data/request from fd-x, and puts it into a queue.
A thread pool takes requests off the queue and processes them.
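To make the setup concrete, here is a minimal sketch of the fd / epoll_fd flow described above. It collapses the accept thread and the epoll_wait thread into one loop for brevity; the port (8080) and enqueue_request() are illustrative placeholders, not part of the original design.

```c
#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

/* placeholder: hands a raw read off to the worker queue (hypothetical) */
static void enqueue_request(int fd, const char *data, ssize_t len) {
    (void)fd; (void)data; (void)len;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);        /* listening socket: fd */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                     /* illustrative port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, SOMAXCONN);

    int epoll_fd = epoll_create1(0);                 /* the epoll instance: epoll_fd */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event events[64];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(epoll_fd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int cur = events[i].data.fd;
            if (cur == fd) {                         /* new connection: register fd-x */
                int x = accept(fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = x };
                epoll_ctl(epoll_fd, EPOLL_CTL_ADD, x, &cev);
            } else {                                 /* data arrived on some fd-x */
                ssize_t r = read(cur, buf, sizeof(buf));
                if (r <= 0) { close(cur); continue; }
                enqueue_request(cur, buf, r);        /* hand off to the thread pool */
            }
        }
    }
}
```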
Questions:
TCP is a byte stream, so:
1. When reading data/requests from fd-x, how do I make sure I've read a complete request?
2. If I read from fd-x and still don't have a complete request, do I block there waiting for the rest to arrive? If I don't wait and move on to the next ready fd-n instead, where do I keep the half-read data?
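On a TCP stream the application has to define its own request boundaries, for example a length prefix or a delimiter. Here is a minimal completeness check, assuming (purely for illustration, the original post does not specify a framing scheme) a 4-byte big-endian length header in front of each request:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Returns the total size of the first complete request in buf,
 * or 0 if more bytes are still needed. */
size_t complete_request_len(const uint8_t *buf, size_t have) {
    uint32_t body_len;
    if (have < 4)
        return 0;                          /* length header not fully received */
    memcpy(&body_len, buf, 4);             /* memcpy avoids unaligned access */
    body_len = ntohl(body_len);
    size_t total = 4 + (size_t)body_len;
    return have >= total ? total : 0;      /* 0 means "keep the partial data" */
}
```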
My friends and I discussed this once; the options we came up with are as follows:
Option 1: read the data from fd-x into a temporary buff and check whether the request is complete. If it is complete, put it into the request queue and recycle the buff; if it is incomplete, park the buff in a "to-be-continued" queue. From then on, whenever that queue is non-empty and epoll reports new data, compare the ready fd against the fd-x entries in the to-be-continued queue; on a match, keep receiving the follow-up data into that buff.
Downside: the logic is complex, and checking whether the ready fd matches some fd-x in the to-be-continued queue on every event costs performance.
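A sketch of how option 1 could look, reusing complete_request_len() and enqueue_request() from the sketches above; the "to-be-continued" queue is a plain linked list keyed by fd, and the linear scan in pending_find() is exactly the performance cost just mentioned. All names here are illustrative.

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>

size_t complete_request_len(const uint8_t *buf, size_t have);   /* from the sketch above */
void enqueue_request(int fd, const char *data, ssize_t len);    /* hand-off to the thread pool */

struct pending {                    /* one parked, half-read request */
    int fd;
    uint8_t buf[4096];
    size_t used;
    struct pending *next;
};

static struct pending *pending_head;

static struct pending *pending_find(int fd) {
    for (struct pending *p = pending_head; p; p = p->next)
        if (p->fd == fd)            /* the per-event fd comparison noted as a downside */
            return p;
    return NULL;
}

/* Called when epoll reports fd-x readable and `data` has just been read from it. */
static void on_readable(int fd, const uint8_t *data, size_t len) {
    struct pending *p = pending_find(fd);
    if (!p) {                       /* no parked data yet: create an entry */
        p = calloc(1, sizeof(*p));
        p->fd = fd;
        p->next = pending_head;
        pending_head = p;
    }
    memcpy(p->buf + p->used, data, len);   /* sketch assumes it always fits */
    p->used += len;

    size_t n;
    while ((n = complete_request_len(p->buf, p->used)) > 0) {
        enqueue_request(fd, (const char *)p->buf, n);
        memmove(p->buf, p->buf + n, p->used - n);      /* drop the consumed request */
        p->used -= n;
    }
    /* whatever is left in p->buf stays parked until the next EPOLLIN on fd */
}
```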
Option 2: give each connection fd-x its own ring buffer buf of fixed size, used both to receive the data and to check whether a frame/request is complete.
Downside: the memory consumption.
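And a sketch of the per-connection fixed-size ring buffer from option 2; RING_SIZE and the field names are just illustrative choices. The completeness check then runs over the used region between head and tail, the same way complete_request_len() does above, with extra wrap-around handling.

```c
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 4096                     /* fixed per-connection capacity */

struct ring {
    uint8_t data[RING_SIZE];
    size_t  head;                          /* next byte to consume */
    size_t  tail;                          /* next free slot to write */
};

static size_t ring_used(const struct ring *r) {
    return (r->tail + RING_SIZE - r->head) % RING_SIZE;
}

/* Append up to len bytes just read from fd-x; returns how many were stored.
 * One slot is kept empty so that head == tail always means "empty". */
static size_t ring_write(struct ring *r, const uint8_t *src, size_t len) {
    size_t free_space = RING_SIZE - 1 - ring_used(r);
    if (len > free_space)
        len = free_space;                  /* caller retries once space frees up */
    for (size_t i = 0; i < len; i++) {
        r->data[r->tail] = src[i];
        r->tail = (r->tail + 1) % RING_SIZE;
    }
    return len;
}

/* Consume n bytes after a complete request has been checked and handed off. */
static void ring_consume(struct ring *r, size_t n) {
    r->head = (r->head + n) % RING_SIZE;
}
```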
I did a rough calculation of the per-socket memory: the kernel send and receive buffers come to roughly 10 KB, the user-space buffer is at most about 4 KB, so call it 16 KB per connection in round numbers. 16 KB * 1,000,000 connections ≈ 15 GB of memory, which is not a problem for a single production server these days.
In a network that genuinely has a million concurrent connections, the bottleneck isn't memory anyway: out of a million concurrent connections, not many sockets are active at any moment, and the messages aren't large, so overall network traffic won't hit a bottleneck. For something like cloud storage, where large files are being downloaded, forget a million concurrent connections; even 10,000 concurrent is hard, because network bandwidth is the limit.