88

I deployed the Google Custom Search API as an AWS Lambda function for my project. It uses the full 3 GB of memory provided by Lambda, and the task gets terminated.

It throws this warning:

"OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k"

I don't know why it's consuming so much memory.

Karthik Subramaniyam

  • I've been getting this warning too, I'm curious to know why that's happening as well. – zyd Jul 26 '19 at 21:12
  • Just running into this now, did you ever solve it? – JonYork Aug 22 '19 at 19:06
  • i got this too. just happened randomly – Christopher Pisz Oct 05 '19 at 02:31
  • I had the same problem and I found that it is very similar to that described in [this question](https://stackoverflow.com/questions/55016899/appengine-warning-openblas-warning-could-not-determine-the-l2-cache-size-on) (referred to GCP). It seems a memory issue and I resolved the problem setting more memory. – francesco lc Nov 05 '19 at 14:49
  • Why is Google Custom Search API trying to install/use OpenBLAS? – charlesreid1 Dec 10 '19 at 22:38
  • Hope this will help you. https://stackoverflow.com/questions/55016899/appengine-warning-openblas-warning-could-not-determine-the-l2-cache-size-on – Biswajit Apr 19 '20 at 02:26
  • Can you try to break down your task into smaller tasks/problems? See if that helps you in limiting the memory use? Also provide more context on where you are running this custom search. And if you are keeping too many objects in memory (or specially a dict). – HPKG Jun 24 '20 at 21:26
  • I'm fairly confident the termination of your lambda and that warning are unrelated; I get that warning quite often. I suspect it depends on your numpy (or similar) build, which might have different C bindings that would like to make assumptions about the cache size, which they can't do in the Lambda environment for some reason. – zylatis Jul 31 '20 at 04:25

3 Answers

37

This warning is just a warning, and has nothing to do with your problems.

BLAS is a highly optimised library, aiming for near-perfect performance on all hardware, which requires knowing low-level details such as the CPU's cache sizes. AWS Lambda runs your code in a more abstract environment than most, where those details are not available, so OpenBLAS just guesses.

The only impact it would have is slightly reduced performance of certain mathematical operations, if the guess were incorrect.
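If the guess (or OpenBLAS's threading in general) bothers you, the usual workaround is to configure OpenBLAS through environment variables before NumPy loads. A minimal sketch, assuming a NumPy build linked against OpenBLAS; the handler and its workload are placeholders:

```python
import os

# OpenBLAS reads these once, at library load time, so they must be set
# before numpy (or anything else that links OpenBLAS) is imported.
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["OMP_NUM_THREADS"] = "1"

import numpy as np  # noqa: E402  -- deliberately imported after the env vars


def handler(event, context):
    # Placeholder workload: the matrix multiply now runs single-threaded,
    # so OpenBLAS allocates no extra per-thread buffers.
    a = np.random.rand(500, 500)
    return {"trace": float((a @ a).trace())}
```

At smaller memory settings a Lambda gets roughly one vCPU anyway, so pinning OpenBLAS to a single thread usually costs nothing.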

OrangeDog
4

It's about the configuration of your function. The solution is as follows:

  1. Go to the Configuration tab of your function.
  2. Under General configuration, increase the Timeout and Memory size.
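The same change can be scripted instead of clicking through the console, e.g. with boto3. A minimal sketch; the function name and the exact limits are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# "my-search-function" is a placeholder; pick limits that fit your workload.
lambda_client.update_function_configuration(
    FunctionName="my-search-function",
    Timeout=120,      # seconds
    MemorySize=3008,  # MB; Lambda's CPU share scales with configured memory
)
```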

I hope this helps.

berkayln
0

You wrote:

I don't know why it's consuming so much memory.

It is unrelated to the warning message, which I believe just comes with Pandas.

Lambda's costs, tiny as they are, are based on the product of CPU time and memory. So keeping memory consumption low ensures you are not giving AWS any more money than you need to :-) To find the cause of higher-than-expected memory consumption, you have to look at your code and what it is doing. Or you could ask a question here: how can I reduce the memory consumption of this code? But that is just general programming.
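To see where the memory actually goes, one option is to instrument the handler with the standard library's tracemalloc. A minimal sketch, where do_search is a hypothetical stand-in for your actual search code:

```python
import tracemalloc


def do_search(event):
    # Hypothetical stand-in for the real Custom Search call.
    return {"items": ["result"] * 1000}


def handler(event, context):
    tracemalloc.start()
    result = do_search(event)
    current, peak = tracemalloc.get_traced_memory()  # both in bytes
    tracemalloc.stop()
    # The peak is the high-water mark of tracked allocations during the call;
    # the print lands in the invocation's CloudWatch logs.
    print(f"memory current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
    return result
```

Note that tracemalloc only sees allocations made through Python's allocator, so buffers allocated directly by C extensions may be undercounted.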

Kevin Buchs