
I use the sttp library with the Akka backend to load a file from a server. Either of the following approaches results in a significant memory footprint when loading a 1 GB file:

import java.io.File

import akka.stream.scaladsl.{FileIO, Source}
import akka.util.ByteString
import com.softwaremill.sttp._

val file: File = new File(...)

sttp.response(asStream[Source[ByteString, Any]])
  .mapResponse { src =>
    // options: Set[OpenOption] used when opening the target file
    src.runWith(FileIO.toPath(file.toPath, options, 0))
  }

sttp.response(asFile(file, false))
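
The snippets above omit the backend setup and the send() call; a minimal complete sketch of the streaming variant, assuming the sttp 1.x akka-http backend (the URI, target path, and actor system name are placeholders), might look like this:

import java.io.File

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{FileIO, Source}
import akka.util.ByteString
import com.softwaremill.sttp._
import com.softwaremill.sttp.akkahttp.AkkaHttpBackend

import scala.concurrent.Future

object StreamingDownload extends App {
  // Share one actor system between the backend and the FileIO sink.
  implicit val system: ActorSystem = ActorSystem("download")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  implicit val backend: SttpBackend[Future, Source[ByteString, Any]] =
    AkkaHttpBackend.usingActorSystem(system)

  val target = new File("/tmp/big.bin") // placeholder target path

  // The response body is consumed chunk by chunk; each ByteString is written
  // to disk and becomes eligible for GC once the sink is done with it.
  val result = sttp
    .get(uri"http://localhost:8080/big.bin") // placeholder URL
    .response(asStream[Source[ByteString, Any]])
    .mapResponse(_.runWith(FileIO.toPath(target.toPath)))
    .send()
}

With this wiring the body is consumed as an Akka Streams Source, so chunks are written as they arrive rather than being accumulated in memory first.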

VisualVM plot of heap usage during sequential loads of the 1 GB file: [image]

Is there any way to write the data in chunks and evict each chunk from memory right after it has been written?

morsik
1 Answer


According to your plot, your application does not require a significant amount of memory. There are spikes up to 1700 MB, but just after the Garbage Collector runs, heap usage drops to 250 MB. sttp and Akka create a lot of short-lived objects; however, the Garbage Collector cleans your memory quite well.

I've run your client app with only 124 MB of memory to verify that it does not need 2 GB of heap to download a 1 GB file:

sbt -mem 124 run
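
If the app is launched directly with java rather than through sbt, the same cap can be applied with the standard -Xmx flag (the jar name here is a placeholder):

java -Xmx124m -jar client-app.jar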

The app didn't crash; it just used as much memory as was available. [VisualVM screenshot]

Daniel
  • Agreed. On the other hand, an approach with Apache HttpClient and buffer re-use doesn't lead to such excessive memory usage (see the sketch after these comments). `val rd = new BufferedReader(new InputStreamReader(response.getEntity.getContent), 8192)` – morsik Nov 29 '19 at 08:12
  • I see that Akka uses the CPU more heavily (10-15%, with peaks up to 30%) compared with Apache HttpClient (5-7%, with peaks up to 20%) – morsik Nov 29 '19 at 09:55
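
A minimal sketch of the chunked-copy approach mentioned in the comments, assuming Apache HttpClient 4.x; the URL, target path, and buffer size are placeholders:

import java.io.{File, FileOutputStream}

import org.apache.http.client.methods.HttpGet
import org.apache.http.impl.client.HttpClients

object HttpClientDownload extends App {
  val client = HttpClients.createDefault()
  val target = new File("/tmp/big.bin") // placeholder target path
  val response = client.execute(new HttpGet("http://localhost:8080/big.bin")) // placeholder URL

  val in = response.getEntity.getContent
  val out = new FileOutputStream(target)
  val buffer = new Array[Byte](8192) // single re-used 8 KB buffer

  // Copy the stream in fixed-size chunks; only `buffer` is held in memory.
  Iterator
    .continually(in.read(buffer))
    .takeWhile(_ != -1)
    .foreach(n => out.write(buffer, 0, n))

  out.close()
  in.close()
  response.close()
  client.close()
}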