Asynchronous Server Sockets with Multithreading

Time:04-17

I want to set up a single-server, multiple-client system.

Situation

Multiple clients open TCP connections to the server, and the server accepts them all; the connections stay up for a long time. How should I implement the server-socket side so that it can always immediately accept a new client connection in the most efficient way, meaning the load is spread over all available cores? These are the approaches I have found so far:

Synchronous server sockets with unbounded thread pool

For each client connection, the server hands the accepted socket to a new thread from an unbounded thread pool. The problem is that when there are many clients, there will be too many threads and the server won't be able to cope (because each thread costs stack memory and context switches? or because of the garbage collector?).
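For illustration, a minimal sketch of this thread-per-connection setup using `Executors.newCachedThreadPool()` (the port number and the handler body are placeholders, not part of the original question):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolServer {
    public static void main(String[] args) throws IOException {
        // Cached pool: reuses idle threads, but grows without bound under load.
        ExecutorService pool = Executors.newCachedThreadPool();
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();   // blocks until a client connects
                pool.submit(() -> handle(client)); // each connection gets its own thread
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // ... process the request line ...
            }
        } catch (IOException ignored) {
            // client disconnected
        }
    }
}
```

With thousands of long-lived connections this means thousands of threads, which is exactly the scaling problem described above.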

Synchronous server sockets with fixed thread pool and LinkedBlockingQueue

For each client connection, the server hands the accepted socket to a thread from a fixed thread pool. When there are more clients than threads in the pool, the remaining clients wait in the queue until a thread becomes available again.

Asynchronous server sockets on a single thread

The server accepts each client connection with an asynchronous server socket. Since these sockets are asynchronous, all the connections can be serviced on the same thread. However, all of the server's load is then handled by one thread, which seems less performant because everything runs on one core.
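A sketch of the single-threaded variant using NIO.2: an `AsynchronousChannelGroup` with a pool of exactly one thread, so every completion handler runs on that thread (port and handler bodies are placeholders):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Executors;

public class SingleThreadAsyncServer {
    public static void main(String[] args) throws Exception {
        // All completion handlers for channels in this group run on one thread.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(1, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
                .open(group)
                .bind(new InetSocketAddress(9000));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel client, Void att) {
                server.accept(null, this); // immediately accept the next client
                ByteBuffer buf = ByteBuffer.allocate(1024);
                client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    @Override public void completed(Integer n, ByteBuffer b) { /* ... */ }
                    @Override public void failed(Throwable t, ByteBuffer b) { /* ... */ }
                });
            }
            @Override public void failed(Throwable t, Void att) { /* ... */ }
        });

        Thread.currentThread().join(); // keep the main thread alive
    }
}
```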

Asynchronous server sockets on multiple threads?

Is it possible, and does it make sense, to spread these asynchronous connections over all available cores? For example, create one thread per core and then fill these threads evenly with the asynchronous tasks. That way it would be possible to have "unlimited" client connections while also spreading the load over all available cores.

CodePudding user response:

This is largely an extension of @markspace's answer, but with added focus on the why.

If you are asking whether it is possible to ensure that every single request gets a unique core and the work is split evenly: yes, it is possible, but if you make that an implementation requirement, you are fighting the language to meet it. So in short: it is possible, but no, it does not make sense.

The Java devs put a lot of time into getting concurrency and task scheduling right, which means they wanted a multipurpose, one-size-fits-most tool. Obviously, not every tool is right for every job, but the goal was to limit the effort you need to spend tweaking the tool to make it right for yours. If your concern is throughput, it would likely be to your benefit to avoid forcing the resource allocation in a certain direction; just request resources as you need them and let Java focus on allocating them efficiently.
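To make that concrete: the idiomatic way to get the behavior the question describes is to let an NIO.2 `AsynchronousChannelGroup` do the scheduling. A hedged sketch (the port is a placeholder, and the handler is stubbed): the group wraps a fixed pool with one thread per core, and the JVM dispatches completion handlers for every channel in the group across those threads, with no manual pinning of connections to cores.

```java
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MultiCoreAsyncServer {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        // One pool thread per core; the JVM spreads completion handlers
        // for all channels in the group across these threads.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(cores, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
                .open(group)
                .bind(new InetSocketAddress(9000));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel client, Void att) {
                server.accept(null, this); // keep accepting new clients
                // reads/writes on 'client' complete on whichever pool thread is free
            }
            @Override public void failed(Throwable exc, Void att) { /* ... */ }
        });

        group.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    }
}
```

This is exactly the "request resources as you need them" approach: you size the pool, and the scheduling of handlers onto threads is left to the runtime.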

In Java, you can also manage memory by hand (off-heap, for example with direct ByteBuffers), much as you would in C or Rust. Yes, it is possible, but the entire point of Java was to abstract all of that away from you, via a garbage collector, so that it is one less problem you have to solve. And more specifically, it is a problem that is currently solved in a much better way than the vast majority of us could manage ourselves.

But back to your original question. By all means, if you think you can perform task scheduling and thread management better than the OS (or the JVM, once Loom gets here), go right ahead. But unless you have an extremely good idea of how to do that (or an extremely good reason to), it likely would not be wise, let alone worth the effort. The current implementation is good enough for 99% of what you will do in your career.

Of course, if this is just to learn something (and not for a system being deployed to PROD), then dive in headfirst: here is the repo for most of Java's concurrency and asynchronous logic.

https://github.com/openjdk/jdk/tree/master/src/java.base/share/classes/java/util/concurrent
