I'm trying to design a SwiftNIO server where multiple clients (like 2 or 3) can connect to the server, and when connected, they can all receive information from the server.
To do this, I create a `ServerHandler` class which is shared and added to the pipeline of each connected client:
```swift
let group = MultiThreadedEventLoopGroup(numberOfThreads: 2)
let handler = ServerHandler()
let bootstrap = ServerBootstrap(group: group)
    .serverChannelOption(ChannelOptions.backlog, value: 2)
    .serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
    .childChannelInitializer { $0.pipeline.addHandler(handler) }
    .childChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
```
The above code is inspired by https://github.com/apple/swift-nio/blob/main/Sources/NIOChatServer/main.swift
In the `ServerHandler` class, whenever a new client connects, that channel is added to an array. Then, when I'm ready to send data to all the clients, I just loop through the channels in the `ServerHandler` and call `writeAndFlush`.
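Roughly, the relevant part of my handler looks like this (a simplified sketch, not the full class):

```swift
import NIOCore

final class ServerHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    // All child channels share this single handler instance,
    // so this array is touched from multiple event loops.
    private var channels: [Channel] = []

    func channelActive(context: ChannelHandlerContext) {
        // Remember each client as it connects.
        self.channels.append(context.channel)
    }

    // Called from the UI when there's data to send out.
    func broadcast(_ buffer: ByteBuffer) {
        for channel in self.channels {
            channel.writeAndFlush(buffer, promise: nil)
        }
    }
}
```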
This seems to work pretty well, but there are a couple things I'm concerned about:
- It seems that creating a shared handler is not really recommended, and you should instead create a new handler for each client. But then, how would I access all the client channels which I need to send data to? (I send data at times determined by the UI)
- Why does `Channel.write` not seem to do anything? My client is unable to receive any data if I use `Channel.write` instead of `writeAndFlush` in the server.
I apologize if these questions are stupid, I just started with SwiftNIO
and networking in general very recently.
If anybody could give me some insight, that would be awesome.
CodePudding user response:
Your questions aren't stupid at all!
Yeah, sharing a `ChannelHandler` probably counts as "not recommended". But that's not because it doesn't work; it's more that it's unusual and probably not something other NIO programmers would expect. If you're comfortable with it, it's fine. If you're performance-sensitive enough to worry about the exact number of allocations per `Channel`, then you may be able to save some by sharing handlers. But I really wouldn't optimise that prematurely.

If you didn't want to share handlers, you could use multiple handlers that share a reference to some kind of coordinator object. Don't get me wrong, it's really still the same thing: one shared reference across multiple network connections. The only real differences are that testing may be a little easier and that it would possibly feel more natural to other NIO programmers. In any case, be careful to either make sure that all those `Channel`s are on the same `EventLoop`, or to use external synchronisation (say a lock, which might not be ideal from a performance point of view).
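As a sketch of that coordinator idea — `ChannelBroadcaster` is an illustrative name, not a NIO type, and this variant confines all state to one `EventLoop` instead of taking a lock:

```swift
import NIOCore

// Hypothetical coordinator shared by all per-connection handlers.
// All mutation of `channels` is hopped onto one EventLoop, so no
// external lock is needed.
final class ChannelBroadcaster {
    private let eventLoop: EventLoop
    private var channels: [ObjectIdentifier: Channel] = [:]

    init(eventLoop: EventLoop) {
        self.eventLoop = eventLoop
    }

    func add(_ channel: Channel) {
        self.eventLoop.execute {
            self.channels[ObjectIdentifier(channel)] = channel
            // Forget the channel once it closes.
            channel.closeFuture.whenComplete { _ in
                self.eventLoop.execute {
                    self.channels.removeValue(forKey: ObjectIdentifier(channel))
                }
            }
        }
    }

    func broadcast(_ buffer: ByteBuffer) {
        self.eventLoop.execute {
            for channel in self.channels.values {
                // writeAndFlush is safe to call from any thread; the
                // actual I/O happens on each channel's own EventLoop.
                channel.writeAndFlush(buffer, promise: nil)
            }
        }
    }
}
```

Each per-connection handler would then hold a reference to the shared broadcaster and call `add(context.channel)` from `channelActive`.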
`write` just enqueues some data to be written. `flush` makes SwiftNIO attempt to send all the previously written data. `writeAndFlush` simply calls `write` and then `flush`.
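Concretely — assuming a connected `channel: Channel` and three `ByteBuffer`s named `part1`/`part2`/`part3` (illustrative names, not from the original post):

```swift
// Enqueue three pieces of data; nothing hits the network yet.
channel.write(part1, promise: nil)
channel.write(part2, promise: nil)
channel.write(part3, promise: nil)

// Now NIO attempts to send everything enqueued above,
// ideally in a single vector-write syscall.
channel.flush()
```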
Why does NIO distinguish between `write` and `flush` at all? In high-performance networking applications, the biggest overhead might be the syscall overhead. And to send data over TCP, SwiftNIO has to do a syscall (`write`, `writev`, `send`, ...).

Any SwiftNIO program will work if you just ignore
`write` and `flush` and always use `writeAndFlush`. But, if the network is keeping up, this will cost you one syscall per `writeAndFlush` call. In many cases, however, a library/app that's using SwiftNIO already knows that it wants to enqueue multiple bits of data to be sent over the network. In that case, doing say three `writeAndFlush` calls in a row would be wasteful. It would be much better to accumulate the three bits of data and then send them all in one syscall using a "vector write" (e.g. the `writev` syscall). And that's exactly what SwiftNIO does if you do, say, `write`, `write`, `write`, `flush`: the three writes will all be sent using one `writev` system call. SwiftNIO simply gets the three pointers to the bits of data and hands them to the kernel, which then attempts to send them over the network.

You can take this even a little further. Let's assume you're a high-performance server and you want to respond to a flood of incoming requests. You'll get your requests from the client over
`channelRead`. If you're able to reply synchronously, you can just `write` the responses (which will enqueue them). And once you get `channelReadComplete` (which marks the end of a "read burst"), you can `flush`. That would allow you to respond to as many requests as you can get in a single read burst using just one `writev` syscall. This can be quite an important optimisation in certain scenarios.
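A minimal sketch of that pattern (this is essentially what an echo handler does):

```swift
import NIOCore

final class EchoHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        // Reply synchronously, but only *enqueue* the response.
        context.write(data, promise: nil)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        // End of the read burst: one flush sends every response
        // enqueued above, coalesced into (ideally) one writev syscall.
        context.flush()
    }
}
```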