I'm trying to learn about coroutines, senders/receivers, async, structured concurrency, etc. in C++. I've watched CppCon talks, read online blogs, and read example code on GitHub, yet I'm still struggling to understand asynchronous computation. To simplify things, I am making this question about a single problem: opening a file asynchronously.
Why I think you should transfer files asynchronously
In my head the problem looks like this:
- There are system calls to move files between RAM and the disk. These system calls used to have two downsides: (1) they were blocking and (2) upon completion, they caused an interrupt from the DMA.
- Blocking calls and interrupts are bad because they result in a context switch, which is inefficient.
- Instead of blocking, async paradigms allow us to signal to the DMA that we want data to be moved. Moving file data to/from RAM does not require participation of the CPU, so while the DMA does its thing, our thread can continue its computation. When the DMA completes, it should (in some way) signal to our program that the file transfer is complete. Our program can then (somehow?) run a call-back function scheduled for when the file transfer completes.
The Question
Is this the kind of problem that the async paradigms exist to solve? If not, what kinds of problems are they meant to solve? If so, what is the correct way to asynchronously open a file using these new asynchronous paradigms?
My Attempt at an answer
I've been trying to answer my own question by looking at example code in CppCoro and libunifex. I can't see how the code solves the problem I have laid out. My expectation is that there should be (1) a system call for a non-blocking file transfer and (2) a way to receive a signal that the file transfer is complete so the call-back can be called. I do not see either of these things. This leads me to believe that these asynchronous libraries were not designed to solve this problem.
CodePudding user response:
> Is this the kind of problem that the async paradigms exist to solve?
I'd say the archetypical problem that async paradigms exist to solve is non-blocking UI.
I'd recommend looking into C++/WinRT WinUI/UWP.
In WinUI/UWP, you have a UI thread that shouldn't be blocked. This is enforced by C++/WinRT raising an exception if their async type `IAsyncOperation<T>` (equivalent to `unifex::task<T>`) blocks on the UI thread:
```cpp
IAsyncOperation<StorageFile> future_file = GetFileFromPathAsync(L"my_file.txt");
future_file.get(); // error on UI thread
```
All functions in WinRT that work with the file system return an `IAsyncOperation<T>`, because we don't want the UI to go unresponsive while the system calls block.
To consume an `IAsyncOperation<T>`, C++/WinRT uses coroutines, which are syntactic sugar that chops the remainder of your function after a `co_await` into a callback:
```cpp
IAsyncOperation<StorageFile> future_file = GetFileFromPathAsync(L"my_file.txt");
StorageFile file = co_await future_file;
// this is now a callback
```
The UI thread runs tasks from a Dispatcher Queue. When `future_file` is ready, the remainder of the coroutine is queued, and the UI thread will get around to processing it.
The actual work of opening the file (`GetFileFromPathAsync`) is done on the Windows thread pool. In fact, you can manually switch which thread your coroutine is running on by using `winrt::resume_background` and `winrt::resume_foreground`:
```cpp
winrt::fire_and_forget MainPage::ClickHandler(
    IInspectable const& /* sender */,
    RoutedEventArgs const& /* args */)
{
    // start on UI thread
    co_await winrt::resume_background();
    // now on thread pool
    // do compute
    co_await winrt::resume_foreground(Dispatcher());
    // back on UI thread and can now access UI components again
    myButton().Content(box_value(L"Clicked"));
}
```
CodePudding user response:
> My expectation is that there should be (1) a system call for a non-blocking file transfer and (2) a way to receive a signal that the file transfer is complete so the call-back can be called.
I'd also say that you can already roll your own non-blocking async functionality easily with just the standard library. This has a similar structure to my DirectX 12 render loop, where I have to do compute on the GPU and then on the CPU without dropping frames:
```cpp
// you could use std::future and std::promise instead of
// std::shared_ptr<std::atomic<std::optional<int>>>, but we don't need
// that functionality here (the atomic makes the cross-thread write
// and read well-defined)
std::queue<std::shared_ptr<std::atomic<std::optional<int>>>> queue;
for (;;)
{
    auto future = std::make_shared<std::atomic<std::optional<int>>>(std::nullopt);
    queue.push(future);
    std::thread([promise = future]() {
        // non-blocking work
        std::this_thread::sleep_for(std::chrono::seconds(1));
        *promise = 1;
    }).detach();
    while (queue.size() && queue.front()->load().has_value())
    {
        Render(queue.front()->load().value());
        queue.pop();
    }
    PresentFrame();
}
```
Hidden away somewhere, there is always going to be a thread polling whether tasks in a queue have completed.
What cppcoro and libunifex provide is async algorithms, coroutine support (which makes writing async code easier), and more performance through sender/receiver and thread pools (as creating a new thread with `std::thread` for every task is quite inefficient).