I’m using the boost::beast wrapper of a Unix domain socket. My platform is macOS.
First, I define the socket:
boost::asio::local::stream_protocol::socket socket;
I’d like to use it to read messages sized up to 2k.
boost::asio::streambuf input_streambuf;
...
boost::asio::async_read(socket, input_streambuf,
boost::asio::transfer_at_least(1), yield);
However, the input_streambuf is only 512 bytes.
Any idea if I can increase this limit from Boost.Beast? Or perhaps it's some setting at the system level?
Thanks
CodePudding user response:
I don't see any Beast types here. It's all exclusively Asio.
Also, streambuf models the DynamicBuffer concept. It has no fixed size, so your claim is not accurate.
Finally, no matter how big the initial capacity of the streambuf were to be, it won't do you much good, because you instruct the async_read to transfer_at_least(1), which means that any platform-dependent buffer in the underlying implementation will probably cause the first read_some to return a much smaller quantity. E.g. with this simple server:
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>
#include <iostream>
namespace net = boost::asio;
using U = net::local::stream_protocol;

int main() {
    net::thread_pool io;
    using net::yield_context;

    U::acceptor acc(io, "./test.sock");
    acc.listen();
    U::socket s = acc.accept();

    spawn(io, [&](yield_context yield) { //
        net::streambuf buf;
        while (auto n = async_read(s, buf, net::transfer_at_least(1), yield))
            std::cout << "Read: " << n << " (cumulative: " << buf.size() << ")"
                      << std::endl;
    });

    io.join();
}
When using a client like e.g.:
netcat -U ./test.sock <<RESP
Hello world
This is Brussels calling
From pool to pool
RESP
it will print e.g.:
Read: 55 (cumulative: 55)
But when running netcat -U ./test.sock interactively:
Read: 12 (cumulative: 12)
Read: 30 (cumulative: 42)
Read: 17 (cumulative: 59)
In fact, we can quite literally throw a thousand dictionaries at it:
for a in {1..1000}; do cat /etc/dictionaries-common/words; done | netcat -U ./test.sock
And the output is:
Read: 512 (cumulative: 512)
Read: 512 (cumulative: 1024)
Read: 512 (cumulative: 1536)
Read: 512 (cumulative: 2048)
Read: 512 (cumulative: 2560)
Read: 1536 (cumulative: 4096)
Read: 512 (cumulative: 4608)
Read: 3584 (cumulative: 8192)
Read: 512 (cumulative: 8704)
Read: 7680 (cumulative: 16384)
Read: 512 (cumulative: 16896)
Read: 15872 (cumulative: 32768)
Read: 512 (cumulative: 33280)
Read: 32256 (cumulative: 65536)
Read: 512 (cumulative: 66048)
Read: 65024 (cumulative: 131072)
Read: 512 (cumulative: 131584)
Read: 65536 (cumulative: 197120)
...
Read: 49152 (cumulative: 971409238)
Read: 49152 (cumulative: 971458390)
Read: 49152 (cumulative: 971507542)
Read: 49152 (cumulative: 971556694)
Read: 21306 (cumulative: 971578000)
terminate called after throwing an instance of 'boost::wrapexcept<boost::exception_detail::current_exception_std_exception_wrapper<std::runtime_error> >'
what(): End of file [asio.misc:2]
As you can see, the size of each read is dependent on the OS, which actually does a decent job of scaling the buffering up to demand.
Improving The Instructions
So, instead of transfer_at_least(1), use whatever minimum you find reasonable:
while (auto n = async_read(s, buf, net::transfer_at_least(1024 * 1024), yield))
    std::cout << "Read: " << n << " (cumulative: " << buf.size() << ")"
              << std::endl;
Prints instead:
Read: 1048576 (cumulative: 1048576)
Read: 1048576 (cumulative: 2097152)
Read: 1063342 (cumulative: 3160494)
Read: 1086266 (cumulative: 4246760)
...
Read: 1086266 (cumulative: 969962524)
Read: 1102650 (cumulative: 971065174)
Or if you are content to read until EOF:
boost::system::error_code ec;
while (auto n = async_read(s, buf, yield[ec])) {
    std::cout << ec.message() << ": " << n
              << " (cumulative: " << buf.size() << ")" << std::endl;
    if (ec.failed())
        break;
}
Note that we subtly added error_code checking, so we don't lose the whole transfer to the EOF exception. Now it prints:
End of file: 971578000 (cumulative: 971578000)
Finally, you may want to limit the maximum capacity, say to 50 MiB:
constexpr size_t _50MiB = 50 << 20;
net::streambuf buf(_50MiB);

boost::system::error_code ec;
while (auto n = async_read(s, buf, yield[ec])) {
    std::cout << ec.message() << ": " << n
              << " (cumulative: " << buf.size() << ")" << std::endl;
    if (ec.failed())
        break;
}

if (ec != net::error::eof && buf.size() == _50MiB) {
    std::cout << "Warning: message truncated" << std::endl;
}
Now we can't force-feed dictionaries to run us out of memory:
Success: 52428800 (cumulative: 52428800)
Warning: message truncated