The "heap spraying" wikipedia article suggests that many javascript exploits involve positioning a shellcode somewhere in the script's executable code or data space memory and then having interpreter jump there and execute it. What I don't understand is, why can't the interpreter's entire heap be marked as "data" so that interpreter would be prevented from executing the shellcode by DEP? Meanwhile the execution of javascript derived bytecode would be done by virtual machine that would not allow it to modify memory belonging to the interpreter (this wouldn't work on V8 that seems to execute machine code, but probably would work on Firefox that uses some kind of bytecode).
I guess the above sounds trivial, and probably something a lot like it is in fact being done. So I am trying to understand where the flaw is in this reasoning, or in existing interpreter implementations. For example, does the interpreter rely on the system's memory allocator instead of implementing its own internal allocation when JavaScript asks for memory, making it unduly hard to separate memory belonging to the interpreter from memory belonging to JavaScript? Or why is it that DEP-based methods cannot completely eliminate shellcode?
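For reference, the kind of separation I am imagining looks roughly like the sketch below: a bump allocator that serves script-object requests from its own `mmap`'d, non-executable arena rather than from the interpreter's general-purpose heap. The names (`ScriptArena`, `script_alloc`, etc.) are hypothetical and not taken from any real engine.

```c
#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t *base;   /* start of the non-executable region */
    size_t   size;   /* total bytes reserved                */
    size_t   used;   /* bump-pointer offset                 */
} ScriptArena;

static int script_arena_init(ScriptArena *a, size_t size) {
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,   /* no PROT_EXEC */
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return -1;
    a->base = p;
    a->size = size;
    a->used = 0;
    return 0;
}

/* All memory handed out to script code comes from this arena, so even a
 * successful spray only fills pages the CPU will refuse to execute. */
static void *script_alloc(ScriptArena *a, size_t n) {
    n = (n + 15u) & ~(size_t)15u;          /* 16-byte alignment */
    if (a->used + n > a->size) return NULL;
    void *p = a->base + a->used;
    a->used += n;
    return p;
}
```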