3

There are plenty of articles on how to set up Firebase Functions with NestJS, and I've been developing this for about a month now on my local machine. Finally, it was time to release. I used a combination of NestJS and MikroORM to handle my server and database layer, with 12 entities and around 20 routes. Everything ran smooth as butter locally.

However, in a Firebase Functions environment on the defaults (256 MB of RAM and god knows how much vCPU), the cold start plus the roughly 30 seconds Nest takes to spin up means it's no longer a production-ready setup. In contrast, Express spun up in around 500 ms plus the cold start time.

Here are a few log screenshots showing the time each step took:

[log screenshots]

Could I have architected my application incorrectly, or is Nest just that slow in tiny environments? I can't imagine lazy loading would help, and I'm auto-discovering entities for MikroORM. Is there anything I can try to speed up the process, or does anyone have experience with NestJS in Firebase Functions?

plusheen
  • Hi @plusheen, is it only occurring on cold start? Could you please check [Best Practices: Performance](https://cloud.google.com/functions/docs/bestpractices/tips#performance) and also this [thread](https://stackoverflow.com/questions/42726870/firebase-cloud-functions-is-very-slow/59243736#59243736) and see if it helps. – Marc Anthony B Feb 28 '22 at 07:24
  • Regarding the ORM, you should enable debug mode to see how long the discovery takes; those logs are all from the Nest DI and might not be related to what the ORM does, but rather to how the Nest DI works. If the discovery takes too long, you could use the metadata cache, which you can warm up at build time via the CLI. I also remember someone getting speed gains from passing the driver class to the ORM config instead of `type`, so instead of `type: 'sqlite'` you can do `driver: SqliteDriver` - this way there won't be any dynamic import under the hood, which can be slow. – Martin Adámek Feb 28 '22 at 12:41
  • Regarding the auto-discovery of entities, I can imagine that also adds some unnecessary time to the ORM bootstrap. Using entity references (`entities: [Author, Book, ...]`) will always be the fastest; a sketch of both suggestions follows these comments. – Martin Adámek Feb 28 '22 at 12:43
  • Thanks for the reply @MartinAdámek. I'll give those things a shot to see how they impact performance. Debug mode is currently on, however, and it's difficult to see because of the color formatting, but entity discovery took 42ms. It took a good 20-30s when I used glob discovery, and then dropped drastically when I switched to auto-discovery. – plusheen Feb 28 '22 at 15:22
  • @MarcAnthonyB It's only on cold start, yeah. Nest has to do all the route discovery and DI magic, which has a bigger overhead than Express. I don't think it's possible to lazy load anything, as everything needs to be imported for the route discovery. – plusheen Feb 28 '22 at 15:31
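
To make the two MikroORM suggestions above concrete, here is a minimal sketch of a config that passes the driver class and explicit entity references. SQLite and the `Author`/`Book` entities are illustrative placeholders, not the project's actual setup:

```typescript
// mikro-orm.config.ts - illustrative sketch only
import { Options } from '@mikro-orm/core';
import { SqliteDriver } from '@mikro-orm/sqlite';
import { Author } from './entities/Author';
import { Book } from './entities/Book';

const config: Options<SqliteDriver> = {
  // Passing the driver class avoids the dynamic import triggered by `type: 'sqlite'`.
  driver: SqliteDriver,
  // Explicit entity references skip folder/glob scanning during discovery.
  entities: [Author, Book],
  dbName: 'app.sqlite',
};

export default config;
```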

1 Answer

0

NestJS has a guide on optimizing serving in environments that have cold starts. That might be a good place to start.
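
A minimal sketch of the pattern that guide describes, assuming an Express adapter setup: create the Nest app once per container around a shared Express instance and reuse it on every invocation (`AppModule` and the `api` export are placeholders for your own module and function name):

```typescript
import * as express from 'express';
import * as functions from 'firebase-functions';
import { NestFactory } from '@nestjs/core';
import { ExpressAdapter } from '@nestjs/platform-express';
import { AppModule } from './app.module';

// Shared Express instance the Nest app is mounted on.
const server = express();

// Cache the bootstrap promise so Nest is created at most once per container;
// warm invocations skip the expensive DI and route discovery entirely.
let ready: Promise<unknown> | null = null;

async function bootstrap() {
  const app = await NestFactory.create(AppModule, new ExpressAdapter(server));
  await app.init();
}

export const api = functions.https.onRequest(async (req, res) => {
  ready = ready ?? bootstrap();
  await ready;
  server(req, res);
});
```

This doesn't remove the cold-start cost, but it ensures you pay it at most once per container rather than on every invocation.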

For any case where a lot of setup work has to be done on cold start (like the 30 seconds of NestJS spinning up that you're experiencing), the minInstances option may be worth using. It lets you keep a minimum number of instances "hot" so that there is a lower chance of an end user experiencing a cold start.

You can also try adjusting the amount of memory available to the function with the memory option (more memory gives you more CPU too).
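
Both options can be chained onto the function declaration via `runWith`. A hedged example (the values are purely illustrative, and keep in mind that warm instances and extra memory are billed):

```typescript
import * as functions from 'firebase-functions';

// Illustrative runtime options: keep one instance warm and raise memory,
// which also raises the CPU allocation.
export const api = functions
  .runWith({ minInstances: 1, memory: '512MB' })
  .https.onRequest(async (req, res) => {
    // ...delegate to the cached Nest/Express server from the sketch above
    res.send('ok');
  });
```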

Jeff
  • Thanks for the response. I'm cautious about keeping containers up or increasing memory as it's a noddy side project and I don't want to start paying for this one project. My current Express implementation runs fine, so it feels like a downgrade if I have to start paying. I didn't see the serverless guide before, though, so I'll give that a try at least. If I have any success I'll come back and mark this answer. – plusheen Mar 01 '22 at 20:13