How to use async_std::task::sleep to simulate blocking operation?


I have a simple program like this to simulate how asynchronous code behaves around a blocking operation.

I'm expecting all of these "Hello" prints to appear after 1000ms.

But this code behaves like normal blocking code: each hello_wait call waits 1000ms and then prints its Hello, one after another.

How can I make it run concurrently?

use std::time::Duration;
use async_std::task;

async fn hello_wait(){
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
    hello_wait().await;
}

This is what is happening:

// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello
// -- Wait 1000ms --
Hello

This is what I want:

// -- Wait 1000ms --
Hello
Hello
Hello
Hello
Hello

CodePudding user response:

How can I make it run concurrently?

You can either:

  • spawn a task, which makes every hello_wait get scheduled independently
  • or "merge" the futures, which drives all of them concurrently from a single future

Your expectation might come from JavaScript or C# async, where the "base" awaitable is a task. Tasks are "active": as soon as you create them they can be scheduled and do their thing concurrently.

But Rust's core awaitable is more of a coroutine, so it is inert (passive): creating one doesn't really do anything; futures have to be polled to make progress, and await will repeatedly poll the future until completion before resuming. So when you await something, it runs to completion with no opportunity for anything else to interleave at that point.
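
To see this inertness concretely, here is a minimal sketch (reusing hello_wait from the question, not code from the original answer) showing that merely constructing the future does nothing until it is awaited:

use std::time::Duration;
use async_std::task;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    // Nothing happens here: the future is just an inert value, no work has started.
    let fut = hello_wait();
    println!("future created, not yet polled");

    // Awaiting polls it to completion; "Hello" appears roughly 1000ms later.
    fut.await;
}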

Therefore running futures concurrently requires one of two things:

  • upgrading them to tasks, meaning they can be scheduled on their own; that is what spawn does
  • or composing the futures into a single "meta-future" which polls them all in turn whenever it is polled; that is what constructs like join_all or tokio::join do

Note that composing futures doesn't allow parallelism: since the futures are "siblings" inside a single task, they can only get polled (and thus actually do things) one after another; it's just that this polling (and thus their progress) gets interleaved.

Spawning tasks does allow for parallelism (if the runtime is multithreaded and the machine has multiple cores -- though the latter is pretty much universal these days), but it has its own limitations with respect to lifetimes and memory management, and is a bit more expensive.
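
For the spawning route, a minimal sketch with async_std (again reusing hello_wait from the question; this is an illustration, not the original answer's code) could look like this: each spawned task is scheduled independently, so all five sleeps overlap.

use std::time::Duration;
use async_std::task;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    // task::spawn upgrades each future to a task the runtime schedules on its own.
    let handles: Vec<_> = (0..5).map(|_| task::spawn(hello_wait())).collect();

    // Await the join handles so main doesn't return before the tasks finish.
    for handle in handles {
        handle.await;
    }
}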

Here is a playground demo of various options. It uses tokio because apparently the playground doesn't have async_std, and I'm not sure it would be possible to enable the "unstable" feature anyway, but aside from that and the use of tokio::join (async_std's Future::join can only join two futures at a time, so you have to chain the calls) it should work roughly the same in async_std.
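
For reference, a rough sketch of the join-based variant with tokio (an approximation, not the playground code itself) would look something like this:

use std::time::Duration;
use tokio::time::sleep;

async fn hello_wait() {
    sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[tokio::main]
async fn main() {
    // tokio::join! polls all the futures from a single task, interleaving their
    // progress, so all five "Hello"s appear after roughly one second.
    tokio::join!(
        hello_wait(),
        hello_wait(),
        hello_wait(),
        hello_wait(),
        hello_wait(),
    );
}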

CodePudding user response:

I was able to do it this way with the futures crate's join_all:

use std::time::Duration;
use async_std::task;
use futures::future;

async fn hello_wait(){
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    // Build up the futures first; nothing runs until they are polled.
    let mut asyncfuncs = vec![];
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());
    asyncfuncs.push(hello_wait());

    // join_all drives them all concurrently, so every "Hello" prints after ~1000ms.
    future::join_all(asyncfuncs.into_iter()).await;
}
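
A side note not from the original answer: futures::future::join_all accepts any IntoIterator of futures, so the explicit .into_iter() and the repeated pushes aren't required. A slightly more compact variant of the same idea:

use std::time::Duration;
use async_std::task;
use futures::future;

async fn hello_wait() {
    task::sleep(Duration::from_millis(1000)).await;
    println!("Hello");
}

#[async_std::main]
async fn main() {
    // join_all takes any IntoIterator of futures, so the five futures can be
    // built with an iterator and passed in directly.
    future::join_all((0..5).map(|_| hello_wait())).await;
}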
