
I have a WebJob running on Azure on the S3 Standard plan, meaning it has 7 GB of RAM available to run my app.

Three jobs run on the machine: one does all the heavy processing and the other two handle small tasks. My problem is that on certain memory-intensive large tasks I get an out-of-memory exception, which crashes the job.

The job I am trying to run is very memory intensive and requires around 1.5 GB of RAM, but based on the graph below I do not understand how this can be a problem, since the app service never goes above 2.2 GB of used RAM. I should add that I run 3 instances, so it might be that one instance is using far more memory than the others, but I cannot find anywhere to view that information.

Memory consumption on server

When I look in the process explorer in Kudu, I see I am currently using around 1.3 GB of RAM, which is still well below the memory the job needs.

Kudu screenshot
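For reference, one way to see which instance is actually using the memory is to log it from inside the job itself. A minimal sketch (WEBSITE_INSTANCE_ID is an environment variable App Service sets on each instance; the class name is illustrative):

```csharp
using System;
using System.Diagnostics;

// Minimal sketch: log which App Service instance the job is running on and
// how much memory it is using, since the portal graph aggregates instances.
static class MemoryLogger
{
    public static void LogUsage()
    {
        // Set by App Service on each instance; falls back when running locally.
        string instance = Environment.GetEnvironmentVariable("WEBSITE_INSTANCE_ID") ?? "local";

        using (var proc = Process.GetCurrentProcess())
        {
            Console.WriteLine(
                "instance={0} workingSet={1:N0} MB managedHeap={2:N0} MB",
                instance,
                proc.WorkingSet64 / (1024 * 1024),
                GC.GetTotalMemory(forceFullCollection: false) / (1024 * 1024));
        }
    }
}
```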

The job ran without any problems just two days ago on the same server setup, so I am completely lost as to where to look.

Update: the code works fine in Visual Studio with the same data, running the exact same task.

Does anyone have ideas on how to approach this problem?

Dennis C
  • Have a look at this answer: https://stackoverflow.com/questions/14186256/net-out-of-memory-exception-used-1-3gb-but-have-16gb-installed – rickvdbosch Sep 22 '17 at 11:16
  • Very relevant, and I am aware of the 2 GB limit on most .NET data types. I work with XML and CSV files of 500-700 MB. The program had been running for months without any problem until two days ago, and now it no longer runs with certain file sizes, giving me the memory exception. I tested the code in Visual Studio on the same data, and it works as it should – Dennis C Sep 22 '17 at 14:27
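For the file sizes mentioned in the comments, streaming the XML rather than loading it whole keeps the peak well under twice the file size. A minimal sketch, assuming the data sits in repeated <record> elements (the element name is illustrative):

```csharp
using System;
using System.Xml;

// Minimal sketch of streaming a large XML file instead of loading it whole.
// XmlReader keeps only the current node in memory, so a 700 MB file does not
// require a multi-GB object graph the way XmlDocument/XDocument would.
static class LargeXml
{
    public static void Process(string path)
    {
        var settings = new XmlReaderSettings { IgnoreWhitespace = true };
        using (var reader = XmlReader.Create(path, settings))
        {
            reader.MoveToContent();
            while (!reader.EOF)
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "record")
                {
                    // ReadOuterXml returns the element and advances the reader
                    // past it, so no extra Read() call is needed here.
                    string record = reader.ReadOuterXml();
                    Console.WriteLine(record.Length); // ... process one record at a time ...
                }
                else
                {
                    reader.Read();
                }
            }
        }
    }
}
```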

2 Answers


In my understanding, you could capture a memory dump in your Azure App Service and analyze it to narrow down the issue. You could refer to this tutorial on how to get a full memory dump in Azure App Services. You could also leverage the Crash Diagnoser extension to monitor CPU and memory; for more details, refer to this blog.
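To grab a dump without going through the portal, Kudu also exposes a Process API. A minimal sketch, assuming the /api/processes endpoints documented in the Kudu wiki and your deployment credentials (<app>, <pid>, and the credentials are placeholders to fill in; verify the dumpType value against your Kudu version):

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Sketch: pull a dump of a running WebJob process through Kudu's Process API.
class KuduDump
{
    static async Task Main()
    {
        // Basic auth with the site's deployment credentials (placeholder values).
        var credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("deployUser:deployPassword"));

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // 1) List processes to find the WebJob's pid.
            string processes = await client.GetStringAsync(
                "https://<app>.scm.azurewebsites.net/api/processes");
            Console.WriteLine(processes);

            // 2) Download a dump of that process (dumpType=2 requests a full
            //    dump per the Kudu wiki; assumption, check your Kudu version).
            var bytes = await client.GetByteArrayAsync(
                "https://<app>.scm.azurewebsites.net/api/processes/<pid>/dump?dumpType=2");
            File.WriteAllBytes("webjob.dmp", bytes);
        }
    }
}
```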

Bruce Chen
  • Thank you for your comment. The problem with a memory dump is that I don't know when the crash will happen, and a memory dump, as far as I understand, is only a snapshot. I will definitely read up on the Crash Diagnoser; that name sounds promising :) – Dennis C Sep 28 '17 at 16:27
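To address the timing concern from the comment, the job could log its own memory state at the moment of the crash instead of relying on a manually timed snapshot. A minimal sketch, assuming a hook installed at job startup (the class name is illustrative):

```csharp
using System;
using System.Diagnostics;

// Sketch for "I don't know when the crash will happen": hook
// AppDomain.UnhandledException so memory state is logged when the crash
// actually occurs, rather than at a manually chosen moment.
static class CrashLogging
{
    public static void Install()
    {
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            using (var proc = Process.GetCurrentProcess())
            {
                Console.Error.WriteLine(
                    "Unhandled {0}: workingSet={1:N0} MB managedHeap={2:N0} MB",
                    e.ExceptionObject.GetType().Name,
                    proc.WorkingSet64 / (1024 * 1024),
                    GC.GetTotalMemory(false) / (1024 * 1024));
            }
        };
    }
}
```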

First of all, how do you deal with the garbage collector? That is, do you dispose of disposable objects once they have finished their tasks? You said the job ran fine two days ago, so it seems the app has now hit a more memory-intensive load. Since your app is, as you said, "very memory intensive", I suggest you go through the source code and make sure you are managing objects correctly, because the garbage collector cannot clean up objects your code is still holding on to. Good luck.
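A short sketch of the disposal pattern being referred to: a "using" block guarantees Dispose runs even when an exception is thrown mid-processing, so buffers and file handles are released deterministically instead of waiting on the finalizer.

```csharp
using System.IO;

// Sketch: deterministic disposal while reading a large CSV line by line,
// so only one line is held in memory at a time.
static class CsvProcessing
{
    public static void ProcessFile(string path)
    {
        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // ... handle one line of the CSV at a time ...
            }
        } // Dispose runs here, even if processing threw an exception.
    }
}
```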

  • I said the job ran without any problems just two days ago. I also ran the same job an hour ago, locally in Visual Studio, with the same data and without any problems. I keep scopes as limited as possible and make sure to optimize usage. It is just hard, when you get 500-700 MB XML files, to keep the overhead below twice the size of the data – Dennis C Sep 22 '17 at 14:24