
In my understanding, asynchronous code can only handle I/O-intensive tasks such as reading and writing sockets or files, but can do nothing for CPU-intensive tasks such as encryption and compression.

So in the Rust Tokio runtime, I think I only need to use spawn_blocking to handle CPU-intensive tasks. But I have seen this repo, and its example is:

#[tokio_02::main]
async fn main() -> Result<()> {
    let data = b"example";
    // `compress` and `decompress` are async helper functions defined
    // elsewhere in the repo's example.
    let compressed_data = compress(data).await?;
    let de_compressed_data = decompress(&compressed_data).await?;
    assert_eq!(de_compressed_data, data);
    println!("{:?}", String::from_utf8(de_compressed_data).unwrap());
    Ok(())
}

This library provides adaptors between compression crates and async I/O types.

I have 3 questions:

  1. What is the purpose of awaiting compress/decompress?

  2. Are these adaptors necessary, or is my understanding of asynchrony wrong?

  3. Can I do compression operations directly in the Tokio multi-threaded runtime? Like this:

async fn foo() {
    let mut buf = ...;                 // some buffer of data to compress
    compress_sync(&mut buf);           // synchronous, CPU-bound compression
    async_tcp_stream.write(buf).await; // asynchronous write of the result
}
1 Answer


In my understanding, asynchronous code can only handle I/O-intensive tasks such as reading and writing sockets or files, but can do nothing for CPU-intensive tasks such as encryption and compression.

This is misguided. Asynchronous constructs are designed to work with non-blocking operations, irrespective of what kinds of resources are involved underneath. For instance, a future which delegates computation to a separate thread would be a valid use of async/await.
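As a minimal sketch of that pattern on Tokio (compress_sync here is a hypothetical synchronous, CPU-bound routine, used purely for illustration):

use tokio::task;

// Hypothetical synchronous, CPU-bound compression routine.
fn compress_sync(data: Vec<u8>) -> Vec<u8> {
    data // placeholder body
}

async fn compress_on_worker(data: Vec<u8>) -> Vec<u8> {
    // spawn_blocking moves the CPU-bound work onto Tokio's blocking thread
    // pool and returns a JoinHandle, which is a future; awaiting it does
    // not tie up the async worker threads.
    task::spawn_blocking(move || compress_sync(data))
        .await
        .expect("blocking task panicked")
}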

With that said, the reason asynchronous compression is useful is that these crates expose I/O adaptors which also work in asynchronous code. Even though the compression algorithms themselves are primarily CPU-bound, they work on chunks of data fed from an arbitrary reader or writer, which means the process may have to wait for I/O along the way.

See, for example, the documentation of bufread::ZstdDecoder:

This structure implements an AsyncRead interface and will read compressed data from an underlying stream and emit a stream of uncompressed data.

This is something you do not get with synchronous byte-source adaptors such as flate2::bufread::GzDecoder. Even if you only use a compress function, a synchronous version of it would block while waiting for data to become readable or writable.
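As a rough usage sketch (assuming the async-compression crate with its tokio and zstd features enabled; the module paths follow its documentation but may differ between versions):

use async_compression::tokio::bufread::ZstdDecoder;
use tokio::io::{AsyncReadExt, BufReader};
use tokio::net::TcpStream;

async fn read_compressed(stream: TcpStream) -> std::io::Result<Vec<u8>> {
    // The decoder wraps an AsyncBufRead and itself implements AsyncRead:
    // each read pulls compressed bytes from the socket, awaiting the
    // underlying I/O as needed, and yields decompressed bytes.
    let mut decoder = ZstdDecoder::new(BufReader::new(stream));
    let mut plain = Vec::new();
    decoder.read_to_end(&mut plain).await?;
    Ok(plain)
}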

