I'm experimenting with reading data from HTTP requests.
If my buffer size is 1024 bytes and the message coming in is 768 bytes, it works perfectly: I can see that the buffer is not full, in which case I can assume that the request is done.
read=692, buffer=java.nio.HeapByteBuffer[pos=692 lim=1024 cap=1024] :: read < cap
If my buffer size is 512, I can still determine when the request is done
read=512, buffer=java.nio.HeapByteBuffer[pos=512 lim=512 cap=512]
read=160, buffer=java.nio.HeapByteBuffer[pos=160 lim=512 cap=512] :: read < cap
At buffer size 64, it still works
read=64, buffer=java.nio.HeapByteBuffer[pos=64 lim=64 cap=64]
...
read=64, buffer=java.nio.HeapByteBuffer[pos=64 lim=64 cap=64]
read=32, buffer=java.nio.HeapByteBuffer[pos=32 lim=64 cap=64] :: read < cap
When I go into the single digits, it appears that the channel is reading garbage data and filling up the buffer
read=7, buffer=java.nio.HeapByteBuffer[pos=7 lim=7 cap=7]
...
read=7, buffer=java.nio.HeapByteBuffer[pos=7 lim=7 cap=7]
read=7, buffer=java.nio.HeapByteBuffer[pos=7 lim=7 cap=7] :: read == cap ?
This means I never know when all the data has been read. With a buffer size of 7, I'm expecting the last read to be the remainder:
692 / 7 = 98 (full reads of 7 bytes)
692 - (98 * 7) = 6 (bytes left for the final read)
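Spelling that arithmetic out as a quick sanity check (the variable names are just for illustration):

    val total = 692          // size of the request in bytes
    val bufSize = 7          // capacity of the read buffer
    println(total / bufSize) // 98 -> number of full 7-byte reads
    println(total % bufSize) // 6  -> size expected for the final read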
I'm expecting to see
read=6, buffer=java.nio.HeapByteBuffer[pos=6 lim=7 cap=7]
but instead I'm seeing read=7
If I make the buffer exactly the size of the HTTP request, I'm seeing
read=692, buffer=java.nio.HeapByteBuffer[pos=692 lim=692 cap=692] :: read == cap ?
but the channel.read completion handler is never invoked again until I refresh the browser; at that point read goes to -1, but the next request then hangs and never reaches read = -1 to trigger the channel close.
read=692, buffer=java.nio.HeapByteBuffer[pos=692 lim=692 cap=692]
read=-1, buffer=java.nio.HeapByteBuffer[pos=0 lim=692 cap=692]
DONE, closing channel
read=692, buffer=java.nio.HeapByteBuffer[pos=692 lim=692 cap=692] << hangs here until refresh, but then the next request does the same again
Any ideas on how to make the "end-detection" more reliable?
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.AsynchronousServerSocketChannel
import java.nio.channels.AsynchronousSocketChannel
import java.nio.channels.CompletionHandler

val bufferSize = 1024 // varied between runs: 1024, 512, 64, 7, 692

AsynchronousServerSocketChannel.open().let { server ->
    server.bind(InetSocketAddress(8080))
    // accept next socket
    server.accept(ByteBuffer.allocate(bufferSize), object : CompletionHandler<AsynchronousSocketChannel, ByteBuffer> {
        override fun completed(channel: AsynchronousSocketChannel, buffer: ByteBuffer) {
            // accept next
            server.accept(ByteBuffer.allocate(bufferSize), this)
            // read from the accepted socket
            channel.read(buffer, channel, object : CompletionHandler<Int, AsynchronousSocketChannel> {
                override fun completed(read: Int, channel: AsynchronousSocketChannel) {
                    println("read=$read, buffer=$buffer")
                    if (read > 0 || buffer.position() > 0) {
                        // assume more data is coming: reset the buffer and read again
                        buffer.rewind()
                        channel.read(buffer, channel, this)
                    } else {
                        println("DONE, closing channel")
                        channel.close()
                    }
                }

                override fun failed(exception: Throwable, channel: AsynchronousSocketChannel) {
                    exception.printStackTrace()
                    channel.close()
                }
            })
        }

        override fun failed(exception: Throwable, attachment: ByteBuffer?) {
            exception.printStackTrace()
        }
    })
}
CodePudding user response:
Your whole idea of detecting the end of the stream by reading less than the size of the buffer is entirely wrong.
Sockets are not message-based; they're streams of data, and data is not guaranteed to arrive in the same chunks it was sent in. When you send 692 bytes on one side, they may arrive as a single read of 692 bytes, but they could just as well arrive as, e.g., two reads of 400 and 292 bytes, even if the reading buffer is 1024 bytes.
Two ways to detect the end of a stream are:
- Close the socket on the sending side and wait for read to return -1 on the receiving side.
- Implement some kind of protocol (or use an existing one) to encode messages on top of the stream of data. The simplest is to send the size of the message first and then keep reading until the expected amount of data has arrived; a sketch of this approach is below.
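A minimal sketch of the length-prefix idea, using a blocking SocketChannel for brevity (the same accumulate-until-complete loop can be driven from an asynchronous completion handler); readFully, readMessage and the 4-byte big-endian size prefix are illustrative choices, not something from the original code:

    import java.nio.ByteBuffer
    import java.nio.channels.SocketChannel

    // Keeps reading until the buffer is full; a single read may deliver only part of the data.
    // Returns false if the peer closes the connection before enough bytes have arrived.
    fun readFully(channel: SocketChannel, buffer: ByteBuffer): Boolean {
        while (buffer.hasRemaining()) {
            if (channel.read(buffer) == -1) return false // end of stream
        }
        return true
    }

    // Reads one message framed as a 4-byte big-endian size followed by exactly that many bytes.
    fun readMessage(channel: SocketChannel): ByteArray? {
        val header = ByteBuffer.allocate(4)
        if (!readFully(channel, header)) return null
        header.flip()
        val size = header.int

        val body = ByteBuffer.allocate(size)
        if (!readFully(channel, body)) return null
        return body.array()
    }

The key point is that the loop stops when the expected number of bytes has been received, not when an individual read happens to return less than the buffer's capacity; the sending side would write the 4-byte size before the payload.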