133

All of a sudden I've been having problems with my application that I've never had before. I decided to check Apache's error log, and I found an error message saying "zend_mm_heap corrupted". What does this mean?

OS: Fedora Core 8, Apache: 2.2.9, PHP: 5.2.6

trincot
bkulyk
  • 3
    I used `USE_ZEND_ALLOC=0` to get the stack trace in the error log and found `/usr/sbin/httpd: corrupted double-linked list`. Commenting out `opcache.fast_shutdown=1` worked for me. – Spidfire Jun 18 '15 at 15:50
  • Yes, same here. Also see another report further below http://stackoverflow.com/a/35212026/35946 – lkraav Mar 23 '16 at 02:10
  • I had the same thing using Laravel. I injected a class into the constructor of another class. The class I was injecting was itself injecting the class it was being injected into, basically creating a circular reference and causing the heap issue. – Thomas Jan 12 '17 at 09:18
  • 1
    Restart the Apache server for the quickest (but temporary) solution :) – Leopathu Jun 05 '17 at 09:53

41 Answers

61

This is not a problem that is necessarily solvable by changing configuration options.

Changing configuration options will sometimes have a positive impact, but it can just as easily make things worse, or do nothing at all.

The nature of the error is this:

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(void) {
    void **mem = malloc(sizeof(char)*3);
    void *ptr;

    /* read past end */
    ptr = (char*) mem[5];   

    /* write past end */
    memcpy(mem[5], "whatever", sizeof("whatever"));

    /* free invalid pointer */
    free((void*) mem[3]);

    return 0;
}

The code above can be compiled with:

gcc -g -o corrupt corrupt.c

Executing the code with valgrind you can see many memory errors, culminating in a segmentation fault:

krakjoe@fiji:/usr/src/php-src$ valgrind ./corrupt
==9749== Memcheck, a memory error detector
==9749== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==9749== Using Valgrind-3.10.1 and LibVEX; rerun with -h for copyright info
==9749== Command: ./corrupt
==9749== 
==9749== Invalid read of size 8
==9749==    at 0x4005F7: main (an.c:10)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749== 
==9749== Invalid read of size 8
==9749==    at 0x400607: main (an.c:13)
==9749==  Address 0x51fc068 is 24 bytes after a block of size 16 in arena "client"
==9749== 
==9749== Invalid write of size 2
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  Address 0x50 is not stack'd, malloc'd or (recently) free'd
==9749== 
==9749== 
==9749== Process terminating with default action of signal 11 (SIGSEGV): dumping core
==9749==  Access not within mapped region at address 0x50
==9749==    at 0x4C2F7E3: memcpy@@GLIBC_2.14 (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==9749==    by 0x40061B: main (an.c:13)
==9749==  If you believe this happened as a result of a stack
==9749==  overflow in your program's main thread (unlikely but
==9749==  possible), you can try to increase the size of the
==9749==  main thread stack using the --main-stacksize= flag.
==9749==  The main thread stack size used in this run was 8388608.
==9749== 
==9749== HEAP SUMMARY:
==9749==     in use at exit: 3 bytes in 1 blocks
==9749==   total heap usage: 1 allocs, 0 frees, 3 bytes allocated
==9749== 
==9749== LEAK SUMMARY:
==9749==    definitely lost: 0 bytes in 0 blocks
==9749==    indirectly lost: 0 bytes in 0 blocks
==9749==      possibly lost: 0 bytes in 0 blocks
==9749==    still reachable: 3 bytes in 1 blocks
==9749==         suppressed: 0 bytes in 0 blocks
==9749== Rerun with --leak-check=full to see details of leaked memory
==9749== 
==9749== For counts of detected and suppressed errors, rerun with: -v
==9749== ERROR SUMMARY: 4 errors from 3 contexts (suppressed: 0 from 0)
Segmentation fault

If you didn't already know, you will have figured out by now that mem is heap-allocated memory; the heap refers to the region of memory available to the program at runtime, because the program explicitly requested it (with malloc, in our case).

If you play around with the terrible code, you will find that not all of those obviously incorrect statements result in a segmentation fault (a fatal terminating error).

I made those errors explicitly in the example code, but the same kinds of errors happen very easily in a memory-managed environment: if some code doesn't maintain the refcount of a variable (or some other symbol) correctly, for example by freeing it too early, another piece of code may read from already-freed memory; if it somehow stores the address incorrectly, another piece of code may write to invalid memory; the memory may be freed twice ...

These are not problems that can be debugged in PHP; they absolutely require the attention of an internals developer.

The course of action should be:

  1. Open a bug report on http://bugs.php.net
    • If you have a segfault, try to provide a backtrace
    • Include as much configuration information as seems appropriate; in particular, if you are using opcache, include the optimization level.
    • Keep checking the bug report for updates, more information may be requested.
  2. If you have opcache loaded, disable optimizations (a rough php.ini sketch of this step follows after this list)
    • I'm not picking on opcache, it's great, but some of its optimizations have been known to cause faults.
    • If that doesn't work, even though your code may be slower, try unloading opcache first.
    • If any of this changes or fixes the problem, update the bug report you made.
  3. Disable all unnecessary extensions at once.
    • Begin to enable all your extensions individually, thoroughly testing after each configuration change.
    • If you find the problem extension, update your bug report with more info.
  4. Profit.
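
A rough php.ini sketch of step 2 above (the file layout and extension path vary by distribution; opcache.optimization_level=0 keeps opcache loaded but turns its optimizer off, while commenting out the zend_extension line unloads opcache entirely):

; keep opcache loaded, but disable its optimization passes
opcache.optimization_level=0

; or unload opcache altogether by commenting out the line that loads it
; zend_extension=opcache.so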

There may not be any profit ... as I said at the start, you may be able to find a way to change your symptoms by messing with configuration, but this is extremely hit and miss, and it doesn't help the next time you have the same zend_mm_heap corrupted message; there are only so many configuration options.

It's really important that we create bug reports when we find bugs; we cannot assume that the next person to hit the bug is going to do it ... more likely than not, the actual resolution is in no way mysterious if you make the right people aware of the problem.

USE_ZEND_ALLOC

If you set USE_ZEND_ALLOC=0 in the environment, this disables Zend's own memory manager. Zend's memory manager ensures that each request has its own heap, that all memory is freed at the end of a request, and it is optimized for allocating chunks of memory of just the right size for PHP.

Disabling it will disable those optimizations; more importantly, it will likely create memory leaks, since there is a lot of extension code that relies upon the Zend MM to free memory for it at the end of a request (tut, tut).

It may also hide the symptoms, but the system heap can be corrupted in exactly the same way as Zend's heap.

It may seem to be more tolerant or less tolerant, but fix the root cause of the problem, it cannot.

The ability to disable it at all is for the benefit of internals developers; you should never deploy PHP with the Zend MM disabled.
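
If you do end up gathering information for a bug report, a typical way to use this switch is to run the failing script once with the Zend MM disabled, under valgrind, purely to collect data. A sketch (your-failing-script.php is a placeholder for whatever reproduces the crash):

USE_ZEND_ALLOC=0 valgrind --error-limit=no php your-failing-script.php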

Joe Watkins
  • So the underlying problem could be which version of PHP you're running? – Ishmael Apr 20 '16 at 16:21
  • @Ishmael Yes, as well as versions of all extensions, as the warning may arise from an extension. – bishop Dec 20 '16 at 18:16
  • 4
    This answer seems to be the best one for me. I've personally experienced the problem a few times and it was always related to a faulty extension (in my case, the Enchant spelling library). Other than php itself, it could also be a bad environment (lib version mismatch, wrong dependencies, etc.) – Fractalizer Jul 31 '18 at 05:34
  • 2
    By far, the best answer for this question, and for many other similar questions as well – Nikita 웃 Aug 04 '18 at 08:41
  • This answer is indeed instructive, but I believe it's not the job of an application developer to debug the server core. Indeed it's way easier if you have a full stack trace, but what's next? Ask for it to be fixed in a pull request? Not everyone is devops or able to understand a low-level language like C. The opposite is true too. So in the end I believe it would be much easier if the developers did not make memory management errors in the first place. Which, as you suggest, is kind of common with opcache, but unsurprisingly not with all the modules, because, you know, some devs know how to dev. – job3dot5 Aug 20 '19 at 18:15
  • 1
    I didn't suggest that the developer should debug the problem. I gave an explanation of the problem in easy to understand code and words, and advised them to create a bug report, and lastly gave them advice about creating and maintaining a useful bug report. The only thing to do here is create a bug report, messing with settings, extensions, versions, and environment variables is just terrible guesswork; Someone can fix the problem in two seconds, you don't need to debug it, or be a C guru, or even know how GDB works, just send a mail (report) to the right person and the problem goes away. – Joe Watkins Aug 21 '19 at 04:09
59

After much trial and error, I found that if I increase the output_buffering value in the php.ini file, this error goes away
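
For reference, that is a php.ini change along these lines (4096 is only a commonly used value; the answer doesn't say what the setting was raised to):

output_buffering = 4096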

nhahtdh
dsmithers
  • 66
    Increase to what? Why would this change make this error go away? – JDS Apr 03 '12 at 17:22
  • 2
    @JDS this answer helps explain what output_buffering is and why increasing it can help http://stackoverflow.com/a/2832179/704803 – andrewtweber May 29 '12 at 19:46
  • 8
    @andrewtweber I know what ob is, I was wondering about the specific details that were left out of dsmithers' answer, as I was having the same error message as the op. For closure: it turned out my problem was a misconfigured setting pertaining to memcached. Thanks, though! – JDS May 30 '12 at 20:41
  • @JDS what misconfigured setting? – Kyle Cronin Jun 10 '12 at 18:17
  • 3
    @KyleCronin our service platform uses Memcache in production. However, some single instances -- non-production/sandbox, customer one-offs -- do not use memcache. In the latter case, I had a configuration copied from production to a customer one-off, and the memcache configuration indicated a memcache server URI that was not available in that environment. I deleted the line and disabled memcache in the app, and the problem went away. So, long story short, a very specific problem encountered in a specific environment, that might not be generally applicable. But, since you asked... – JDS Aug 09 '12 at 15:50
  • @JDS Thanks for following up, I was getting the same error. Don't remember how I fixed it though... – Kyle Cronin Aug 09 '12 at 17:55
  • As for me, the solution was @Justin MacLeod's answer. I already had the output buffer enabled and increasing its size didn't help. – ioleo Sep 04 '14 at 20:36
  • I'm getting this problem any time I call an OpenSSL method. For some environments, raising output_buffering fixes it, for some it doesn't. This is a really frustrating error to troubleshoot. – Matt van Andel Jul 11 '16 at 19:53
  • This should not be increased; first check the code to see why such large chunks are being sent :P – Arun Killu Nov 30 '19 at 05:04
  • I get this error when a conditional breakpoint is used with XDebug `no-debug-non-zts-20180731` and VS Code 1.59.1, similar to https://bugs.xdebug.org/1647. – billrichards Aug 27 '21 at 13:48
51

I was getting this same error under PHP 5.5 and increasing the output buffering didn't help. I wasn't running APC either, so that wasn't the issue. I finally tracked it down to opcache; I simply had to disable it for the CLI. There was a specific setting for this:

opcache.enable_cli=0

Once that was switched, the zend_mm_heap corrupted error went away.
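
A quick way to confirm the setting actually took effect for the CLI (just a sanity check, not part of the original fix):

php -r 'var_dump(ini_get("opcache.enable_cli"));'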

Justin MacLeod
  • Same problem and solution here! Thanks! – Mauricio Sánchez Dec 03 '14 at 20:59
  • 2
    Huge plus 1 for this post. We tried everything but in the end, only this worked. – Geoffrey Brier Sep 18 '15 at 07:26
  • 7
    I am sure that you know that cli is the command-line version of php and has nothing to do with the php module used with, for example, the apache web server, so I am curious how disabling opcache for cli helped? (I am assuming that this is happening on a web server) – BIOHAZARD Jan 11 '16 at 11:29
  • @BioHazard, apart from cli there is the general setting opcache.enable=0, but it does not necessarily help the case – Konstantin Ivanov Nov 29 '16 at 15:29
  • This should be the accepted answer to this question. Raising the output_buffering is not the answer, since this can have negative side-effects to your website or application, according to the documentation in php.ini. – BlueCola Jan 02 '17 at 01:08
  • Out of all the answers to this question, this was the option that finally made the error go away in my Docker setup for Drupal development (Docker for Mac 17.06, Apache 2.4, PHP 7.1, Drupal 8.4). – Paul Oct 16 '17 at 08:07
  • I get "Violación de segmento (`core' generado)" and "zend_mm_heap corrupted" issues running drush comands (from the cli), and finally this solve the problems, thanks!!! – tongadall Jan 11 '18 at 22:36
46

If you are on Linux box, try this on the command line

export USE_ZEND_ALLOC=0
nhahtdh
Hittz
  • This saved me! I add this inside the php-fpm service file (systemd override) – fzerorubigd Aug 20 '14 at 14:45
  • This did it for me. Remember to add this line to `/etc/apache2/envvars` if you're running this on ubuntu server with both apache and php installed from the ppas (apt). PHP 7.0-RC4 started throwing this error when I installed it from ondrej's repository. – Pedro Cordeiro Oct 06 '15 at 18:31
  • And it also works on Windows: `set USE_ZEND_ALLOC=0` – Nabi K.A.Z. May 27 '19 at 05:11
23

Check for unset()s. Make sure you don't unset() references to $this (or equivalents) in destructors, and that unset()s in destructors don't cause the reference count of the same object to drop to 0. I've done some research and found that's what usually causes the heap corruption.

There is a PHP bug report about the zend_mm_heap corrupted error. See the comment [2011-08-31 07:49 UTC] f dot ardelian at gmail dot com for an example on how to reproduce it.

I have a feeling that all the other "solutions" (change php.ini, compile PHP from source with less modules, etc.) just hide the problem.
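
A hypothetical sketch of the pattern being warned about; it won't necessarily reproduce the crash on your build, and the class and property names are made up:

class Node
{
    public $other;

    public function __destruct()
    {
        // Dropping a reference to an object that may itself already be
        // mid-destruction is the pattern to watch out for.
        unset($this->other);
    }
}

$a = new Node();
$b = new Node();
$a->other = $b;
$b->other = $a;    // circular reference

unset($a, $b);     // destruction order now depends on refcounting / the garbage collector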

f.ardelian
  • 6
    I was getting this issue with Simple HTML DOM, and changed from an unset to $simplehtmldom->clear(), which solved my problems, thanks! – alexkb Feb 21 '13 at 07:56
13

For me none of the previous answers worked, until I tried:

opcache.fast_shutdown=0

That seems to work so far.

I'm using PHP 5.6 with PHP-FPM and Apache proxy_fcgi, if that matters...

Jesús Carrera
  • 1
    There's a ton of "me too" responses for all different scenarios, but this seemed most similar to my configuration, and boom - this exact change seems to have eliminated my issue. – lkraav Mar 23 '16 at 02:10
6

In my case, the cause of this error was that one of the arrays was becoming very big. I set my script to reset the array on every iteration, and that sorted the problem.
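
A minimal sketch of that kind of change; the function names here are invented for illustration:

$rows = [];
foreach ($batches as $batch) {
    $rows = [];                        // reset on every iteration instead of accumulating
    foreach ($batch as $item) {
        $rows[] = transform($item);    // hypothetical per-item work
    }
    writeBatch($rows);                 // hypothetical flush of this batch's results
}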

Piotr
  • This did it for me - thanks! I didn't think the garbage collector would free the memory of a cyclic reference, so I didn't check it. – half-fast Mar 03 '17 at 06:58
5

As per the bug tracker, set opcache.fast_shutdown=0. Fast shutdown uses the Zend memory manager to clean up its mess; this setting disables that.

Taco de Wolff
4

I don't think there is one answer here, so I'll add my experience. I saw this same error along with random httpd segfaults. This was a cPanel server. The symptom in question was that Apache would randomly reset the connection ("No data received" in Chrome, or "connection was reset" in Firefox). These were seemingly random -- most of the time it worked, sometimes it did not.

When I arrived on the scene, output buffering was OFF. Since this thread hinted at output buffering, I turned it on (=4096) to see what would happen. At that point, the pages all started showing the errors, which was good, because the error was now repeatable.

I went through and started disabling extensions. Among them: eAccelerator, PDO, ionCube Loader, and plenty more that looked suspicious, but none of that helped.

I finally found the naughty PHP extension: homeloader.so, which appears to be some kind of cPanel easy-installer module. After removing it, I haven't experienced any other issues.

On that note, this appears to be a generic error message, so your mileage will vary with all of these answers; the best course of action you can take is:

  • Make the error repeatable (what conditions?) every time
  • Find the common factor
  • Selectively disable any PHP modules, options, etc. (or, if you're in a rush, disable them all to see if it helps, then selectively re-enable them until it breaks again); a quick CLI sketch of this follows right after this list
  • If this fails to help, many of these answers hint that it could be code related. Again, the key is to make the error repeatable on every request so you can narrow it down. If you suspect a piece of code is doing this, then once the error is repeatable, just remove code until the error stops. Once it stops, you know the last piece of code you removed was the culprit.
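
As a hedged aside, for command-line reproductions the following is a quick way to compare behaviour with and without extra extensions loaded (for Apache or PHP-FPM you would instead comment out the relevant .ini/.so lines and restart; your-script.php is a placeholder):

php -m                      # list the modules currently loaded
php -n your-script.php      # -n skips php.ini entirely, so no extra extensions load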

Failing all of the above, you could also try things like:

  • Upgrading or recompiling PHP. Hope whatever bug is causing your issue is fixed.
  • Move your code to a different (testing) environment. If this fixes the issue, what changed? php.ini options? PHP version? etc...

Good luck.

A.B. Carroll
3

I wrestled with this issue for a week. This worked for me, or at least so it seems.

In php.ini make these changes

report_memleaks = Off  
report_zend_debug = 0  

My set up is

Linux ubuntu 2.6.32-30-generic-pae #59-Ubuntu SMP  
with PHP Version 5.3.2-1ubuntu4.7  

This didn’t work.

So I tried using a benchmark script, and tried recording where the script was hanging. I discovered that just before the error, a PHP object was instantiated, and it took more than 3 seconds to complete what the object was supposed to do, whereas in the previous loops it took at most 0.4 seconds. I ran this test quite a few times, and it was the same every time. I thought that instead of making a new object every time (there is a long loop here), I should reuse the object. I have tested the script more than a dozen times so far, and the memory errors have disappeared!
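
A rough sketch of the change described; the class and method names are invented, since the answer doesn't show its actual code:

// Before: a new object is instantiated on every pass of a long loop
foreach ($items as $item) {
    $worker = new ExpensiveWorker();   // hypothetical class
    $worker->handle($item);
}

// After: create the object once and reuse it inside the loop
$worker = new ExpensiveWorker();
foreach ($items as $item) {
    $worker->handle($item);
}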

Smar
sam
2

I've tried everything above, and zend.enable_gc = 0 was the only config setting that helped me.

PHP 5.3.10-1ubuntu3.2 with Suhosin-Patch (cli) (built: Jun 13 2012 17:19:58)

Bethrezen
2

I had this error using the Mongo 2.2 driver for PHP:

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField', 'yetAnotherField')); 

^^DOESN'T WORK

$collection = $db->selectCollection('post');
$collection->ensureIndex(array('someField', 'someOtherField')); 
$collection->ensureIndex(array('yetAnotherField')); 

^^ WORKS! (?!)

hernanc
  • This answer helped me debug, going on the Mongo issue path. In my case, PHP 5.6 + Mongo 1.6.9 driver, the zend_mm_heap corrupted message was thrown when iterating and querying values from an array previously populated via `foreach(selectCollection()->find()) { $arr = .. }` – Mihai MATEI Oct 25 '16 at 06:57
2

On PHP 5.3, after lots of searching, this is the solution that worked for me:

I've disabled the PHP garbage collection for this page by adding:

<? gc_disable(); ?>

to the end of the problematic page; that made all the errors disappear.

source.

Kuf
2

I think a lot of reasons can cause this problem. In my case, I gave 2 classes the same name, and one would try to load the other.

class A {}   // in file a.php

class A      // in file b.php
{
    public function foo()
    {
        require 'a.php';   // loads a.php, which declares class A a second time
    }
}

And that caused this problem in my case.

(Using the Laravel framework; it happened when running php artisan db:seed.)

Yarco
2

Look for any module that uses buffering, and selectively disable it.

I'm running PHP 5.3.5 on CentOS 4.8, and after doing this I found eaccelerator needed an upgrade.

Scott Davey
2

I just had this issue as well on a server I own, and the root cause was APC. I commented out the "apc.so" extension in the php.ini file, reloaded Apache, and the sites came right back up.

Vance Lucas
1

If you are using traits and the trait is loaded after the class (i.e. in the case of autoloading), you need to load the trait beforehand.

https://bugs.php.net/bug.php?id=62339

Note: due to its nature, this bug is very, very random.
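
A minimal sketch of the workaround, with hypothetical file names; the point is simply that the trait's definition is included before the class that uses it:

// bootstrap.php
require __DIR__ . '/MyTrait.php';   // trait MyTrait { ... }
require __DIR__ . '/MyClass.php';   // class MyClass { use MyTrait; }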

srcspider
1

For me the problem was using pdo_mysql. The query returned 1960 results. I tried returning 1900 records and it worked. So the problem was pdo_mysql and a too-large result array. I rewrote the query with the original mysql extension and it worked.

$link = mysql_connect('localhost', 'user', 'xxxx') or die(mysql_error());
mysql_select_db("db", $link);

Apache did not report any previous errors.

zend_mm_heap corrupted
zend_mm_heap corrupted
zend_mm_heap corrupted
[Mon Jul 30 09:23:49 2012] [notice] child pid 8662 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:50 2012] [notice] child pid 8663 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:54 2012] [notice] child pid 8666 exit signal Segmentation fault (11)
[Mon Jul 30 09:23:55 2012] [notice] child pid 8670 exit signal Segmentation fault (11)
broadband
1

"zend_mm_heap corrupted" means problems with memory management. Can be caused by any PHP module. In my case installing APC worked out. In theory other packages like eAccelerator, XDebug etc. may help too. Or, if you have that kind of modules installed, try switching them off.

Muto
1

I am writing a PHP extension and also encountered this problem. When I called an external function with complicated parameters from my extension, this error popped up.

The reason was that I had not allocated memory for a (char *) parameter in the external function. If you are writing the same kind of extension, please pay attention to this.

cedricliang
1

A lot of people are mentioning disabling XDebug to solve the issue. This obviously isn't viable in a lot of instances, as it's enabled for a reason - to debug your code.

I had the same issue, and noticed that if I stopped listening for XDebug connections in my IDE (PhpStorm 2019.1 EAP), the error stopped occurring.

The actual fix, for me, was removing any existing breakpoints.

A possibility for this being a valid fix is that PhpStorm is sometimes not that good at removing breakpoints that no longer reference valid lines of code after files have been changed externally (e.g. by git)

Edit: Found the corresponding bug report in the xdebug issue tracker: https://bugs.xdebug.org/view.php?id=1647

LarryFisherman
  • Hopefully you're not debugging on a production server ;) – bkulyk Apr 09 '19 at 22:33
  • 1
    Ahh no no, I was getting the heap corruption error on a local docker container – LarryFisherman Apr 14 '19 at 23:56
  • 1
    Same here, specifically a conditional breakpoint - at a spot not actually reachable from my code at that point, but nevertheless.. removing that single breakpoint (leaving another regular breakpoint) fixed it. – MSpreij Oct 06 '20 at 09:34
1

The issue with zend_mm_heap corrupted boggled me for about a couple of hours. First I disabled and removed memcached and tried some of the settings mentioned in this question's answers; after testing, this seemed to be an issue with OPcache settings. I disabled OPcache and the problem went away. After that I re-enabled OPcache, and for me the

core notice: child pid exit signal Segmentation fault

and

zend_mm_heap corrupted

are apparently resolved with changes to

/etc/php.d/10-opcache.ini

I have included the settings I changed here; opcache.revalidate_freq=2 remains commented out, I did not change that value.

opcache.enable=1
opcache.enable_cli=0
opcache.fast_shutdown=0
opcache.memory_consumption=1024
opcache.interned_strings_buffer=128
opcache.max_accelerated_files=60000
1

I had this same issue when I had an incorrect IP in session.save_path for memcached sessions. Changing it to the correct IP fixed the problem.
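
For reference, those settings live in php.ini and look roughly like this; the IP and port are placeholders, and the exact save_path format depends on whether the memcache or memcached extension is in use:

session.save_handler = memcached
session.save_path = "192.168.1.20:11211"
; with the older memcache extension the path is prefixed, e.g. "tcp://192.168.1.20:11211"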

Travis D
0

Setting

assert.active = 0 

in php.ini helped me (it turned off type assertions in the php5UTF8 library and the zend_mm_heap corrupted error went away).

Wilk
0

I've also noticed this error and SIGSEGVs when running old code which uses '&' to explicitly force references, running under PHP 5.2+.

Phillip Whelan
0

For me the problem was a crashed memcached daemon, as PHP was configured to store session information in memcached. It was eating 100% CPU and acting weird. After restarting memcached the problem was gone.

0

Since none of the other answers addressed it, I had this problem in php 5.4 when I accidentally ran an infinite loop.

Mikayla Maki
0

Some tips that may help someone.

fedora 20, php 5.5.18

public function testRead() {
    $ri = new MediaItemReader(self::getMongoColl('Media'));

    foreach ($ri->dataReader(10) as $data) {
       // ...
    }
}

public function dataReader($numOfItems) {
    $cursor = $this->getStorage()->find()->limit($numOfItems);

    // here is the first place where "zend_mm_heap corrupted" error occurred
    // var_dump() inside foreach-loop and generator
    var_dump($cursor); 

    foreach ($cursor as $data) {
        // ...
        // and this is the second place where "zend_mm_heap corrupted" error occurred
        $data['Geo'] = [
            // try to access [0] index that is absent in ['Geo']
            'lon' => $data['Geo'][0],
            'lat' => $data['Geo'][1]
        ];
        // ...
        // Generator is used  !!!
        yield $data;
    }
}

Using var_dump() there is actually not an error in itself; it was placed just for debugging and would be removed in production code. But the real place where the zend_mm_heap corruption happened is the second one.

lexand
0

I was in the same situation here; nothing above helped. Checking more seriously, I found my problem: it consisted of calling die(header()) after some output had already been sent to the buffer. The person who wrote this code forgot about CakePHP's facilities and did not do a simple "return $this->redirect($url)".

Trying to re-invent the wheel was the problem.
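
A simplified sketch of the difference (CakePHP-style controller code):

// Problematic: kills the request after output has already been sent to the buffer
die(header('Location: ' . $url));

// What the framework expects in a controller action:
return $this->redirect($url);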

I hope this helps someone!

0

For me it was RabbitMQ with Xdebug in PhpStorm, so: Settings > Languages & Frameworks > PHP > Debug > Xdebug > untick "Can accept external connections".

0

There was a bug fixed in PHP on Nov 13, 2014:

Fixed bug #68365 (zend_mm_heap corrupted after memory overflow in zend_hash_copy).

This was updated in versions 5.4.35, 5.5.19 and 5.6.3. In my case when I changed from using Ubuntu's official trusty package (5.5.9+dfsg-1ubuntu4.14) to the 5.5.30 version packaged by Ondrej Sury, the problem went away. None of the other solutions worked for me and I didn't want to disable opcache or suppress errors since this really was causing segfaults (500 responses).

Ubuntu 14.04 LTS:

export LANG=C.UTF-8       # May not be required on your system
add-apt-repository ondrej/php5
apt-get update
apt-get upgrade
ColinM
0

For me, it was the ZendDebugger that caused the memory leak and caused the MemoryManager to crash.

I disabled it and I'm currently searching for a newer version. If I can't find one, I'm going to switch to xdebug...

Structed
0

On the off chance that somebody else has this problem in the same way that I do, I thought I'd offer the solution that worked for me.

I have php installed on Windows on a drive other than my system drive (H:).

In my php.ini file, the values of several different file system variables were written like \path\to\directory, which would have worked fine if my installation had been on C:.

I needed to change the value to H:\path\to\directory. Adding the drive letter in several different places in my php.ini file fixed the problem right away. I also made sure (though I don't think this is necessary) to fix the same problem in my PEAR config, as several variable values excluded the drive letter there as well.
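
A hypothetical example of the kind of change described; the answer doesn't name the actual directives, so extension_dir and upload_tmp_dir here are just illustrations:

; before: no drive letter, so the path resolves against the wrong drive
; extension_dir = \php\ext
; upload_tmp_dir = \php\tmp

; after: spell out the drive PHP is installed on
extension_dir = "H:\php\ext"
upload_tmp_dir = "H:\php\tmp"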

dgo
0

Many of the answers here are old. For me (PHP 7.0.10 via Ondrej Sury's PPA on Ubuntu 14.04 and 16.04) the problem appears to lie in APC. I was caching hundreds of small bits of data using apc_fetch() etc., and when invalidating a chunk of the cache I'd get the error. The workaround was to switch to filesystem-based caching.

More detail on github https://github.com/oerdnj/deb.sury.org/issues/452#issuecomment-245475283.

Steve
0

Because I never found a solution to this I decided to upgrade my LAMP environment. I went to Ubuntu 10.4 LTS with PHP 5.3.x. This seems to have stopped the problem for me.

bkulyk
0

Really hunt through your code for a silent error. In my Symfony app I got the zend_mm_heap corrupted error after removing a block from a Twig base template, not remembering it was referenced in sub-templates. No error was thrown.

hipnosis
0

I had zend_mm_heap corrupted along with child pid ... exit signal Segmentation fault on a Debian server that had been upgraded to jessie. After a long investigation it turned out that XCache had been installed before Zend-Engine was generally available.

After apt-get remove php5-xcache and service apache2 restart, the errors vanished.

Martin Seitl
0

In my case, I forgot the following in the code:

);

I played around and forgot it in the code here and there -- in some places I got heap corruption, in other cases just a plain ol' segfault:

[Wed Jun 08 17:23:21 2011] [notice] child pid 5720 exit signal Segmentation fault (11)

I'm on Mac 10.6.7 and XAMPP.

dsomnus
0

This option has already been mentioned above, but I want to walk you through the steps of how I reproduced this error.

Briefly, this is what helped me:

opcache.fast_shutdown = 0

My legacy configuration:

  1. CentOS release 6.9 (Final)
  2. PHP 5.6.24 (fpm-fcgi) with Zend OPcache v7.0.6-dev
  3. Bitrix CMS

Step by step:

  1. Run phpinfo()
  2. Find "OPcache" in the output. It should be enabled. If not, then this solution will definitely not help you.
  3. Execute opcache_reset() in any place (thanks to the bug report, comment [2015-05-15 09:23 UTC] nax_hh at hotmail dot com); a small helper sketch for this step appears at the end of this answer. Load multiple pages on your site. If OPcache is to blame, then the nginx logs will show a line with the text

104: Connection reset by peer

and in the php-fpm logs

zend_mm_heap corrupted

and on the next line

fpm_children_bury()

  4. Set opcache.fast_shutdown=0 (for me, in the /etc/php.d/opcache.ini file)
  5. Restart php-fpm (e.g. service php-fpm restart)
  6. Load some pages of your site again. Execute opcache_reset() and load some pages again. Now there should be no errors.

By the way, in the output of phpinfo() you can find OPcache's statistics and then optimize the parameters (for example, increase the memory limit). There are good instructions out there for tuning OPcache (in Russian, but you can use a translator).
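
For step 3, a tiny hypothetical helper you can drop onto the site and load once (opcache_reset() returns true when the opcode cache was actually reset):

// reset.php -- hypothetical helper used only while reproducing the problem
if (function_exists('opcache_reset')) {
    var_dump(opcache_reset());   // bool(true) when the cache was reset
} else {
    echo "OPcache is not loaded\n";
}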

0

I experienced this issue in local development while using docker & php's built in dev server with Craft CMS.

My solution was to use Redis for Craft's sessions.

PHP 7.4

jakub_jo
  • Any further investigation as to why that helps? Were the sessions too large, causing some kind of overflow? – bkulyk Feb 14 '21 at 16:20
0

In my case, Apache would not start because of the zend_mm_heap corrupted problem. Apache itself had no problem, because disabling PHP --

sudo emacs /etc/apache2/mods-enabled/php7.2.load

and commenting out the line

# LoadModule php7_module /usr/lib/apache2/modules/libphp7.2.so

got Apache to work properly. So I knew the problem was in PHP. I had more than one PHP version installed, i.e. PHP 7.2 and PHP 8. My site was using PHP 7.2 (so I had to keep using PHP 7.2). Individually, a single PHP had no problem at all, but installing the other (later) version somehow changed something and caused this zend_mm_heap corrupted problem. Purging and reinstalling didn't solve it.

The solution: I had been disabling the wrong PHP version. I was disabling php8.0, whereas I had installed php8.1.

sudo a2dismod php8.0

Changing php8.0 to php8.1 solved everything:

sudo a2dismod php8.1