Let's say I have a 6-core machine with 12 MB of CPU cache. I use it for a server application with a few gigabytes of heap (much of it second-level Hibernate cache).
I noticed that most of the time I have a handful of threads actively serving client requests (burning CPU and talking to the DB), plus another 30-50 threads that are only doing good ol' synchronous network IO with clients.
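The setup looks roughly like this (a simplified sketch, not my actual code; the class and method names are made up, and the demo drives itself with one loopback client so it runs standalone):

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the model described above: a small pool does the CPU-bound
// request processing, while each client connection gets its own thread
// that spends most of its time blocked in readLine().
public class BlockingIoSketch {
    static final ExecutorService cpuPool = Executors.newFixedThreadPool(6);
    static final AtomicInteger processedCount = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Demo client: sends one request over loopback, then closes.
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("localhost", port);
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println("GET /something");
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            client.start();

            Socket conn = server.accept();
            Thread ioThread = new Thread(() -> handle(conn)); // one thread per connection
            ioThread.start();
            ioThread.join();
            client.join();
        }
        cpuPool.shutdown();
        cpuPool.awaitTermination(5, TimeUnit.SECONDS);
    }

    // Mostly parked in blocking IO; wakes only when bytes arrive.
    static void handle(Socket conn) {
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                String request = line;
                cpuPool.submit(() -> process(request)); // CPU work off the IO thread
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static void process(String request) {
        processedCount.incrementAndGet(); // stands in for DB calls, CPU work, etc.
    }
}
```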
As I am learning about the Java memory model, I am wondering whether this can hurt performance. Does context switching to one of the many network-IO threads evict the CPU cache entries that the "active" threads are using? Is this level of concurrency harmful in itself, memory cache aside?
And does it really matter, given how small the CPU cache is relative to the whole application's memory? How can I determine where the boundary is?
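One thing I figured I could at least observe is how often the kernel actually switches my threads out. On Linux (my assumption; this reads `/proc`, so it's a no-op elsewhere), a sketch like this pulls the counters for the current process:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Optional;

// /proc/self/status exposes voluntary_ctxt_switches (the thread blocked,
// e.g. waiting on network IO) and nonvoluntary_ctxt_switches (the thread
// was preempted while runnable). High nonvoluntary counts would suggest
// runnable threads are fighting over the cores.
public class CtxSwitches {
    static Optional<Long> read(String field) {
        try {
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith(field)) {
                    // Strip everything but digits, e.g. "voluntary_ctxt_switches:\t42" -> 42
                    return Optional.of(Long.parseLong(line.replaceAll("\\D+", "")));
                }
            }
        } catch (IOException e) {
            // not Linux, or /proc unavailable -- fall through to empty
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        read("voluntary_ctxt_switches").ifPresent(n ->
            System.out.println("voluntary ctx switches:    " + n));
        read("nonvoluntary_ctxt_switches").ifPresent(n ->
            System.out.println("nonvoluntary ctx switches: " + n));
    }
}
```

But that only tells me switches are happening, not what they cost in cache misses, which is really what my question is about.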