Say you're developing in a JIT-compiled language. Is there any performance downside to making your functions very large, in terms of the code size of the generated assembly?
I ask because I was looking through the source code of Buffer.MemoryCopy in C# the other day, which is obviously a very performance-sensitive method. It appears they use a large switch
statement to specialize the function for all byte counts <= 16, resulting in some pretty gigantic generated assembly.
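Roughly, the pattern I'm talking about looks something like this -- my own simplified sketch of the idea, not the actual corefx source:

```csharp
// Simplified illustration: specialize small copies by switching on the
// byte count, so each case becomes a few fixed-size loads/stores with
// no loop. (Hypothetical helper, not the real Buffer.MemoryCopy code.)
static unsafe void CopySmall(byte* dest, byte* src, int len)
{
    switch (len)
    {
        case 0:
            return;
        case 1:
            *dest = *src;
            return;
        case 2:
            *(short*)dest = *(short*)src;
            return;
        case 4:
            *(int*)dest = *(int*)src;
            return;
        case 8:
            *(long*)dest = *(long*)src;
            return;
        // ... a case for every length up to 16 ...
        case 16:
            *(long*)dest = *(long*)src;
            *(long*)(dest + 8) = *(long*)(src + 8);
            return;
    }
}
```

Each case compiles down to straight-line code, which is why the generated assembly for the whole method ends up so large.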
Are there any cons, performance-wise, to this approach? For example, I noticed that the glibc and FreeBSD implementations of memmove
do not do this, even though C is AOT-compiled and so doesn't pay the cost of JIT compilation. That cost is one downside for C#: the JIT waits until the first call to compile a method, so for really long methods the first invocation takes longer.
What are the upsides and downsides, for JIT-compiled languages, of having a gigantic switch
statement and the resulting increase in code size (other than the first-call compilation cost I just mentioned)? Thanks. (I'm a bit new to assembly, so please go easy on me :) )