
I have asynchronous code which calls synchronous code that takes a while to run, so I followed the suggestions outlined in What is the best approach to encapsulate blocking I/O in future-rs?. However, my asynchronous code has a timeout, after which I am no longer interested in the result of the synchronous calculation:

use std::{thread, time::Duration};
use tokio::{task, time}; // 0.2.10

// This takes 1 second
fn long_running_complicated_calculation() -> i32 {
    let mut sum = 0;
    for i in 0..10 {
        thread::sleep(Duration::from_millis(100));
        eprintln!("{}", i);
        sum += i;
        // Interruption point
    }
    sum
}

#[tokio::main]
async fn main() {
    let handle = task::spawn_blocking(long_running_complicated_calculation);
    let guarded = time::timeout(Duration::from_millis(250), handle);

    match guarded.await {
        Ok(s) => panic!("Sum was calculated: {:?}", s),
        Err(_) => eprintln!("Sum timed out (expected)"),
    }
}

Running this code shows that the timeout fires, but the synchronous code also continues to run:

0
1
Sum timed out (expected)
2
3
4
5
6
7
8
9

How can I stop running the synchronous code when the future wrapping it is dropped?

I don't expect that the compiler will magically be able to stop my synchronous code. I've annotated a line with "interruption point" where I'd be required to manually put some kind of check to exit early from my function, but I don't know how to easily get a notification that the result of spawn_blocking (or ThreadPool::spawn_with_handle, for pure futures-based code) has been dropped.

Shepmaster

1 Answer

You can pass an atomic boolean which you then use to flag the task as needing cancellation. (I'm not sure I'm using an appropriate Ordering for the load/store calls; that probably needs some more consideration. See the note after the playground link below.)

Here's a modified version of your code that outputs:

0
1
Sum timed out (expected)
2
Interrupted...

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::{thread, time::Duration};
use tokio::{task, time}; // 0.2.10

// This takes 1 second
fn long_running_complicated_calculation(flag: &AtomicBool) -> i32 {
    let mut sum = 0;
    for i in 0..10 {
        thread::sleep(Duration::from_millis(100));
        eprintln!("{}", i);
        sum += i;
        // Interruption point
        if !flag.load(Ordering::Relaxed) {
            eprintln!("Interrupted...");
            break;
        }
    }
    sum
}

#[tokio::main]
async fn main() {
    let some_bool = Arc::new(AtomicBool::new(true));

    let some_bool_clone = some_bool.clone();
    let handle =
        task::spawn_blocking(move || long_running_complicated_calculation(&some_bool_clone));
    let guarded = time::timeout(Duration::from_millis(250), handle);

    match guarded.await {
        Ok(s) => panic!("Sum was calculated: {:?}", s),
        Err(_) => {
            eprintln!("Sum timed out (expected)");
            some_bool.store(false, Ordering::Relaxed);
        }
    }
}

playground
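
A note on the Ordering question above: for a standalone stop flag like this, Relaxed is enough, because the flag is not used to publish any other memory (see the comment at the end). Release/Acquire ordering only becomes relevant when a flag signals that other data written by the blocking thread is ready. Here is a minimal sketch of that contrasting pattern; the names publish and try_read are made up purely for illustration:

use std::sync::atomic::{AtomicBool, AtomicI32, Ordering};

static RESULT: AtomicI32 = AtomicI32::new(0);
static DONE: AtomicBool = AtomicBool::new(false);

// Writer thread: store the data first, then set the flag with Release so
// that the write to RESULT happens-before any Acquire load that sees true.
fn publish(value: i32) {
    RESULT.store(value, Ordering::Relaxed);
    DONE.store(true, Ordering::Release);
}

// Reader thread: an Acquire load that observes true synchronizes with the
// Release store, so the subsequent read of RESULT sees the published value.
fn try_read() -> Option<i32> {
    if DONE.load(Ordering::Acquire) {
        Some(RESULT.load(Ordering::Relaxed))
    } else {
        None
    }
}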


It's not really possible to get this to happen automatically on the dropping of the futures / handles with current Tokio. Some work towards this is being done in http://github.com/tokio-rs/tokio/issues/1830 and http://github.com/tokio-rs/tokio/issues/1879.

However, you can get something similar by wrapping the futures in a custom type.

Here's an example which looks almost the same as the original code, but with the addition of a simple wrapper type in a module. It would be even more ergonomic if I implemented Future on the wrapper type, just forwarding to the wrapped handle, but that was proving tiresome (a rough sketch of such a forwarding impl follows after the example).

mod blocking_cancelable_task {
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::sync::Arc;
    use tokio::task;

    pub struct BlockingCancelableTask<T> {
        pub h: Option<tokio::task::JoinHandle<T>>,
        flag: Arc<AtomicBool>,
    }

    impl<T> Drop for BlockingCancelableTask<T> {
        fn drop(&mut self) {
            eprintln!("Dropping...");
            self.flag.store(false, Ordering::Relaxed);
        }
    }

    impl<T> BlockingCancelableTask<T>
    where
        T: Send + 'static,
    {
        pub fn new<F>(f: F) -> BlockingCancelableTask<T>
        where
            F: FnOnce(&AtomicBool) -> T + Send + 'static,
        {
            let flag = Arc::new(AtomicBool::new(true));
            let flag_clone = flag.clone();
            let h = task::spawn_blocking(move || f(&flag_clone));
            BlockingCancelableTask { h: Some(h), flag }
        }
    }

    pub fn spawn<F, T>(f: F) -> BlockingCancelableTask<T>
    where
        T: Send + 'static,
        F: FnOnce(&AtomicBool) -> T + Send + 'static,
    {
        BlockingCancelableTask::new(f)
    }
}

use std::sync::atomic::{AtomicBool, Ordering};
use std::{thread, time::Duration};
use tokio::time; // 0.2.10

// This takes 1 second
fn long_running_complicated_calculation(flag: &AtomicBool) -> i32 {
    let mut sum = 0;
    for i in 0..10 {
        thread::sleep(Duration::from_millis(100));
        eprintln!("{}", i);
        sum += i;
        // Interruption point
        if !flag.load(Ordering::Relaxed) {
            eprintln!("Interrupted...");
            break;
        }
    }
    sum
}

#[tokio::main]
async fn main() {
    let mut h = blocking_cancelable_task::spawn(long_running_complicated_calculation);
    let guarded = time::timeout(Duration::from_millis(250), h.h.take().unwrap());
    match guarded.await {
        Ok(s) => panic!("Sum was calculated: {:?}", s),
        Err(_) => {
            eprintln!("Sum timed out (expected)");
        }
    }
}

playground
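
For completeness, here is a rough, untested sketch of what that forwarding Future impl could look like, placed inside the blocking_cancelable_task module. It assumes JoinHandle<T> is Unpin (so the handle can be polled through Pin::new) and that the wrapper is only polled while it still owns its handle:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

impl<T> Future for BlockingCancelableTask<T> {
    // The wrapper yields whatever the inner JoinHandle yields.
    type Output = Result<T, task::JoinError>;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // Every field is Unpin, so the wrapper is Unpin as well; take a
        // plain mutable reference out of the Pin and forward the poll to
        // the inner JoinHandle.
        let handle = self
            .get_mut()
            .h
            .as_mut()
            .expect("polled after the JoinHandle was taken");
        Pin::new(handle).poll(cx)
    }
}

With such an impl, main could hand the wrapper itself to time::timeout; when the timed-out future is dropped, the wrapper's Drop runs and clears the flag, with no need for the explicit h.h.take().unwrap() dance.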

Michael Anderson

@MichaelAnderson: The Relaxed memory ordering is perfect for this situation, since the state of `flag` does not have to be coordinated with any other piece of memory sent across threads for the desired functionality. – Matthieu M. Jan 30 '20 at 13:26