Why do certain kernels refuse C++ code in their code base? Politics and preference, but I digress.
Some parts of modern OS kernels are written in certain subsets of C++. These subsets mainly disable exceptions and RTTI (sometimes multiple inheritance and templates are disallowed, too).
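To make that concrete: such a subset is usually enforced through compiler flags. A minimal sketch of the relevant part with GCC or Clang (real kernel build systems add many more flags than this):

# disable exceptions and RTTI, and don't assume a hosted standard library
CXXFLAGS += -fno-exceptions -fno-rtti -ffreestanding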
The same is true in C: certain features should not be used in a kernel environment (e.g. VLAs).
Beyond exceptions and RTTI, certain C++ features are heavily critiqued when we are talking about kernel code (or embedded code).
These are vtables and constructors/destructors. They generate a bit of code under the hood, and that seems to be deemed 'bad'. If you don't want a constructor, then don't implement one. If you worry about using a class with a constructor, then worry just as much about a function you have to remember to call to initialize a struct. The upside in C++ is that you cannot really forget to run a destructor, short of forgetting to deallocate the memory itself.
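A small illustration of that symmetry (all names here are made up for the example):

#include <cstddef>

// C-style: nothing stops a caller from skipping buffer_init()
// or forgetting buffer_destroy() on an early return.
struct buffer {
    unsigned char* data;
    size_t size;
};
void buffer_init( struct buffer* b, size_t size );
void buffer_destroy( struct buffer* b );

// C++-style: construction and destruction are tied to scope,
// so the cleanup runs automatically on every exit path.
class Buffer {
public:
    explicit Buffer( size_t size ) : data_( new unsigned char[size] ), size_( size ) {}
    ~Buffer() { delete[] data_; }
private:
    unsigned char* data_;
    size_t size_;
};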
But what about vtables?
When you implement an object which contains extension points (e.g. a Linux filesystem driver), you implement something like a class with virtual methods. So why is it sooo bad to have a vtable? You only have to control the placement of the vtable when you have requirements about which pages it resides in. As far as I recall, this is irrelevant for Linux, but under Windows code pages can be paged out, and when you call a paged-out function at too high an IRQL, you crash. But you really have to watch out what functions you call at a high IRQL anyway, whatever kind of function it is, and you don't need to worry if you simply don't make virtual calls in that context. In embedded software this could be worse, because (very seldomly) you need to directly control which code page your code goes into, but even there you can influence what your linker does.
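For comparison, here is the extension-point pattern in both languages. The C version is the hand-rolled function-pointer table that e.g. Linux drivers use (the names here are illustrative, not the real kernel API); in the C++ version the compiler emits the equivalent table for you:

// C-style extension point: an explicit table of function pointers.
struct fs_ops {
    int (*open)( const char* path );
    int (*read)( int fd, void* buf, unsigned long len );
};

// C++-style extension point: the compiler builds the hidden vtable,
// which is morally the same struct of function pointers.
class FsDriver {
public:
    virtual ~FsDriver() = default;
    virtual int open( const char* path ) = 0;
    virtual int read( int fd, void* buf, unsigned long len ) = 0;
};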
So why are so many people so adamant about 'use C in the kernel'?
Because they either got burned by a toolchain problem or by overenthusiastic developers using the latest stuff in kernel mode.
Maybe kernel-mode developers are just rather conservative, and C++ is too newfangled a thing ...
Why are exceptions not used in kernel-mode code?
Because they need some generated code per function, they introduce complexity into every code path, and not handling an exception is fatal for a kernel-mode component: it kills the whole system.
First, when an exception is thrown in C++, the stack must be unwound and the corresponding destructors must be called. This involves at least a bit of overhead. It is mostly negligible, but it is still a cost you may not want. (Note: I do not know what table-based unwinding actually costs; I think I read that there is no cost as long as no exception is in flight, but ... I guess I have to look it up.)
Second, a code path which cannot throw exceptions is much easier to reason about than one which may.
So:
// g, f3 and h stand in for arbitrary calls; their declarations
// are added here so the snippet compiles on its own.
int g();
void f3();
int h();

int f( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )
        return -2;
    f3();
    return h();
}
We can reason about every exit path in this function because we can easily see all the returns. But when exceptions are enabled, any of the called functions may throw, and we cannot guarantee which path the function actually takes. This is exactly the point: the code may do something we cannot see at a glance. (This is bad C++ code when exceptions are enabled.)
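To make the hidden exit paths visible, here is the same function annotated under the assumption that exceptions are enabled and g, f3 and h are allowed to throw:

int f( int a )
{
    if( a == 0 )
        return -1;
    if( g() < 0 )  // may throw: hidden exit path
        return -2;
    f3();          // may throw: hidden exit path
    return h();    // may throw: hidden exit path
}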
The third point is that you want a user-mode application to crash when something unexpected occurs (e.g. when memory runs out): it should crash (after freeing its resources) to allow the developer to debug the problem, or at least to get a good error message. A kernel-mode module, on the other hand, should never have an uncaught exception, ever.
Note that all of this can be overcome: there are SEH exceptions in the Windows kernel, so points 2 and 3 are not really strong arguments in the NT kernel.
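For reference, this is roughly what SEH looks like in a kernel-mode component. A minimal sketch, assuming a WDM driver source file; CopyFromUser is a made-up helper name, while ProbeForRead, RtlCopyMemory and the __try/__except keywords are real kernel/MSVC facilities:

#include <wdm.h>

// Made-up helper: copy a user-mode buffer into kernel memory,
// turning an invalid-pointer fault into an error code via SEH.
NTSTATUS CopyFromUser( void* kernelDst, const void* userSrc, SIZE_T len )
{
    __try {
        // Raises an SEH exception if the user-mode buffer is invalid.
        ProbeForRead( (volatile void*)userSrc, len, 1 );
        RtlCopyMemory( kernelDst, userSrc, len );
    }
    __except( EXCEPTION_EXECUTE_HANDLER ) {
        // Swallow the fault and report it instead of crashing the system.
        return GetExceptionCode();
    }
    return STATUS_SUCCESS;
}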
There are no memory management problems with C++ in the kernel either. E.g. the NT kernel headers provide overloads of new and delete which let you specify the pool type of your allocation, but which otherwise behave exactly like the new and delete in a user-mode application.
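A minimal sketch of what such an overload can look like, assuming the classic WDM allocator ExAllocatePoolWithTag (the pool tag 'xmpl' and the exact shape of the overload are illustrative, not copied from the NT headers):

#include <wdm.h>

// Pool-placement operator new: the caller picks the pool type.
void* __cdecl operator new( size_t size, POOL_TYPE poolType )
{
    return ExAllocatePoolWithTag( poolType, size, 'xmpl' );  // illustrative tag
}

void __cdecl operator delete( void* p )
{
    if( p != nullptr )
        ExFreePoolWithTag( p, 'xmpl' );
}

// Usage: MyObject* obj = new( NonPagedPool ) MyObject();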