
I would like to know if the answer to this rather old question about futures still applies to the more recent async/await language constructs. It seems to, since the code below prints:

hello 
good bye 
hello

although the guide says

The futures::join macro makes it possible to wait for multiple different futures to complete while executing them all concurrently.

Clearly, this is a departure from the behavior expected in many, many other asynchronous systems (Node.js, for example) with regard to sleep.

Is there any fundamental reason for it to be that way?

use std::time::Duration;
use std::thread;
use futures::executor::block_on;

async fn sayHiOne() {
    println!( " hello " );
    thread::sleep( Duration::from_millis( 3000 ) ); // blocking sleep: stalls the whole thread
    println!( " good bye " );
} // ()

async fn sayHiTwo() {
    println!( " hello " );
} // ()

async fn mainAsync() {

    let fut1 = sayHiOne();

    let fut2 = sayHiTwo();

    futures::join!( fut1, fut2 );
} // ()

fn main() {
    block_on( mainAsync() );
} // ()

Addition: the behavior I expected, demonstrated here with actual threads:

use std::thread;
use std::time::Duration;
use std::sync::mpsc::channel;

fn main() {

    let fut1 = do_async( move || {
        println!( "hello" );
        thread::sleep( Duration::from_millis( 3000 ) );
        println!( "good bye" );
    });

    let fut2 = do_async( move || {
        println!( "hello" );
    });

    fut1();
    fut2();

}

// Runs `foo` on a newly spawned OS thread and returns a closure that,
// when called, waits for the thread and yields its result.
fn do_async<TOut, TFun>( foo: TFun ) -> impl FnOnce() -> TOut
where
    TOut: Send + Sync + 'static,
    TFun: FnOnce() -> TOut + Send + Sync + 'static,
{
    let (sender, receiver) = channel::<TOut>();

    let hand = thread::spawn( move || {
        sender.send( foo() ).unwrap();
    });

    move || -> TOut {
        let res = receiver.recv().unwrap();
        hand.join().unwrap();
        res
    }
} // ()
cibercitizen1
  • If you use the sleep from std, a.k.a. the BLOCKING sleep, you obviously can't expect a good result. – Stargateur Feb 02 '22 at 16:31
  • BTW, I don't understand what you are asking at all. – Stargateur Feb 02 '22 at 16:32
  • @Stargateur Concurrent functions can be executed by one single thread and still appear to be parallel. If one of them executes a blocking operation, the thread jumps to the other one. If that's not the case, and in Rust we must default to actual threads, then what's the point of having concurrent async functions? – cibercitizen1 Feb 02 '22 at 16:59
  • "If one of them executes a blocking operation, the thread jumps to the other one" how the hell the thread know you call a blocking function and even if this is the case, what could it do about it ?!? use sleep async from tokio if you want async behavior, you can't mix blocking and async and magically expect it's will sort out. Rust is close to the OS. OS have blocking api and async api. if you use blocking api that your choice don't blame async Rust. Again I don't understand what you ask. BTW async rust is very very very hard to understand. – Stargateur Feb 02 '22 at 17:10
  • @Stargateur It's pretty clear that a "concurrent system" should, under the hood, do whatever it takes to achieve what it advertises. After all, that's boilerplate code. – cibercitizen1 Feb 02 '22 at 17:45
  • there is no silver bullet. Rust doesn't run on a VM – Stargateur Feb 02 '22 at 17:48
  • @Stargateur **Parts of async Rust are supported with the same stability guarantees as synchronous Rust. Other parts are still maturing and will change over time.** https://rust-lang.github.io/async-book/01_getting_started/03_state_of_async_rust.html. That's the reason for the question. – cibercitizen1 Feb 02 '22 at 17:51
  • I'm not the best at English, but I think this means no breaking API changes. If you want to rant about Rust, go to Reddit or a chat or something; I don't think an SO question is the appropriate place. Also, from your source: "Some compatibility constraints, both between sync and async code, and between different async runtimes." – Stargateur Feb 02 '22 at 17:53
  • 1
    @Stargateur Alright, thanks for your enlightment. – cibercitizen1 Feb 02 '22 at 17:56
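
For reference, a minimal sketch of the tokio-based version Stargateur mentions (this is my adaptation; it assumes tokio is pulled in with its full feature set, instead of the futures executor used in the question):

use std::time::Duration;

async fn say_hi_one() {
    println!( "hello" );
    // tokio::time::sleep returns a future; awaiting it yields the thread
    // back to the executor instead of blocking it.
    tokio::time::sleep( Duration::from_millis( 3000 ) ).await;
    println!( "good bye" );
}

async fn say_hi_two() {
    println!( "hello" );
}

#[tokio::main]
async fn main() {
    // Prints "hello", "hello", then "good bye" roughly three seconds later.
    tokio::join!( say_hi_one(), say_hi_two() );
}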

2 Answers


Since the standard/original thread::sleep is blocking, it turns out that the async_std library provides async_std::task::sleep( ... ), which is the non-blocking version of sleep. It is meant to be used with .await (no parentheses):

task::sleep( Duration::from_millis( 1 ) ).await;

This sleep has the same effect as the unstable yield_now, in the sense that it

moves the currently executing future to the back of the execution queue, making room for other futures to execute. This is especially useful after running CPU-intensive operations inside a future.

So I guess the intended use is to "kindly" share the thread among the futures whenever a task is about to perform a long piece of work.
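
Applied to the code from the question, a minimal sketch of this fix (my adaptation; it assumes the async-std and futures crates are available) would be:

use std::time::Duration;

async fn say_hi_one() {
    println!( "hello" );
    // Non-blocking sleep: only this future is suspended; the executor
    // thread is free to poll the other future in the meantime.
    async_std::task::sleep( Duration::from_millis( 3000 ) ).await;
    println!( "good bye" );
}

async fn say_hi_two() {
    println!( "hello" );
}

fn main() {
    async_std::task::block_on( async {
        // Expected output: "hello", "hello", then "good bye" about 3 s later.
        futures::join!( say_hi_one(), say_hi_two() );
    });
}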

cibercitizen1

Yes, it still applies. It fundamentally has to be that way because, as the linked answer says, each async function runs on the same thread: std::thread::sleep knows nothing about async, and so it makes the whole thread sleep.

Node.js (and JavaScript in general) is designed much more around async, so the language primitives and the language runtime are more async-aware in that way.
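
For work that genuinely has to block (a synchronous API call, heavy computation, and so on), the usual escape hatch is to move just that call onto a dedicated blocking pool rather than spawning a thread per async operation. A minimal sketch, assuming the tokio runtime and its spawn_blocking helper:

use std::time::Duration;

async fn say_hi_one() {
    println!( "hello" );
    // Hand the blocking call to tokio's blocking-thread pool so the
    // executor thread can keep polling the other futures.
    tokio::task::spawn_blocking( || {
        std::thread::sleep( Duration::from_millis( 3000 ) );
    })
    .await
    .unwrap();
    println!( "good bye" );
}

async fn say_hi_two() {
    println!( "hello" );
}

#[tokio::main]
async fn main() {
    // Both "hello" lines appear immediately; "good bye" follows about 3 s later.
    tokio::join!( say_hi_one(), say_hi_two() );
}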

lkolbly
  • Alright, but IMHO `std::thread::sleep` should know *something* about `async`, shouldn't it? At least since async/await was introduced into Rust. The same goes for any other blocking function, I/O or not, so that things work as advertised: *The futures::join macro makes it possible to wait for multiple different futures to complete while executing them all concurrently.* – cibercitizen1 Feb 02 '22 at 17:04
  • @cibercitizen1 I disagree, `std::thread::sleep` is in the `thread` module so it deals with threads. But I come from a more low-level perspective: I think of async functions as just glorified state machines. If you come from a different place, like nodejs, then this behaviour may very well be surprising. `futures::join` does still wait for multiple different futures, it's just that one of the futures takes a long time to execute (imagine if it were doing a computation or something - should it spin up a threadpool?) – lkolbly Feb 02 '22 at 17:37
  • @lkolbly Well, it's pretty easy to simulate *actual* concurrent behavior (https://stackoverflow.com/a/70948063/286335). If Rust wants to alleviate this boilerplate code and save us from errors, it's not that difficult. **Parts of async Rust are supported with the same stability guarantees as synchronous Rust. Other parts are still maturing and will change over time.** https://rust-lang.github.io/async-book/01_getting_started/03_state_of_async_rust.html – cibercitizen1 Feb 02 '22 at 17:49
  • 2
    @cibercitizen1 Spawning a thread for every async operation just because it might sleep is too much overhead for me. I use async on microcontrollers, the light-weightedness of async is valuable. – lkolbly Feb 02 '22 at 18:08