
I have a GUI application that is based on a loop. The loop can run more often than once per frame, so it needs to stay lightweight. There is a heavy workload that needs to be done from time to time. I'm not sure how to implement that. I'm imagining something like:

extern crate tokio; // 0.1.7
extern crate tokio_threadpool; // 0.1.2

use std::{thread, time::Duration};
use tokio::{prelude::*, runtime::Runtime};

fn delay_for(seconds: u64) -> impl Future<Item = u64, Error = tokio_threadpool::BlockingError> {
    future::poll_fn(move || {
        tokio_threadpool::blocking(|| {
            thread::sleep(Duration::from_secs(seconds));
            seconds
        })
    })
}

fn render_frame(n: i8) {
    println!("rendering frame {}", n);
    thread::sleep(Duration::from_millis(500));
}

fn send_processed_data_to_gui(n: i8) {
    println!("view updated. result of background processing was {}", n);
}

fn main() {
    let mut frame_n = 0;
    let frame_where_some_input_triggers_heavy_work = 2;
    // `BoxedFuture` is a stand-in for "some handle to the in-flight work";
    // I don't know what its concrete type should be.
    let mut parallel_work: Option<BoxedFuture> = None;
    loop {
        render_frame(frame_n);
        if frame_n == frame_where_some_input_triggers_heavy_work {
            parallel_work = Some(execute_in_background(delay_for(1)));
        }

        // check whether there's parallel processing going on
        // and handle the result if it's finished
        if let Some(work) = parallel_work.take() {
            if work.done() {
                // hand the result back to the app
                send_processed_data_to_gui(work.result());
            } else {
                // not finished yet; put it back and check again next iteration
                parallel_work = Some(work);
            }
        }

        frame_n += 1;
        if frame_n == 10 {
            break;
        }
    }
}

fn execute_in_background(work: /* ... */) -> BoxedFuture {
    unimplemented!()
}

Playground link

The above example is based on the linked answer's `tokio-threadpool` example. That example has a data flow like this:

let a = delay_for(3);
let b = delay_for(1);
let sum = a.join(b).map(|(a, b)| a + b); 

The main difference between that example and my case is that task a triggers task b, and when b is finished, a gets passed the result of b and continues working. This can also repeat any number of times.

I feel like I'm trying to approach this in a way that is not idiomatic async programming in Rust.

How do I run that workload in the background? Or, to rephrase in terms of the code sketch above: how do I execute the future in parallel_work in parallel? If my approach is indeed severely off-track, can you nudge me in the right direction?

Dominykas Mostauskis
  • I believe your question is answered by the answers of [What is the best approach to encapsulate blocking I/O in future-rs?](https://stackoverflow.com/q/41932137/155423) (answers cover more than I/O). If you disagree, please [edit] your question to explain the differences. Otherwise, we can mark this question as already answered. – Shepmaster Sep 18 '18 at 22:37
  • If it's not answered by that, please review how to create a [MCVE] and then [edit] your question to include it. We cannot tell what crates, types, traits, fields, etc. are present in the code. Try to produce something that reproduces your situation on the [Rust Playground](https://play.rust-lang.org) or you can reproduce it in a brand new Cargo project. There are [Rust-specific tips](//stackoverflow.com/tags/rust/info) as well. – Shepmaster Sep 18 '18 at 22:41
  • Providing a MCVE will also reduce the chances of this question being closed as *too broad*. – Shepmaster Sep 18 '18 at 22:42
  • @Shepmaster, I've updated my question, clarifying what I could. – Dominykas Mostauskis Sep 19 '18 at 10:49

0 Answers