That's a broad question.
Fundamentally, compiled languages get translated into machine instructions (opcodes), just as ASM does (ASM is a layer of abstraction, too). A good compiler is actually likely to out-perform an average ASM coder's result, because it can examine a large stretch of code and apply optimizations that most programmers could not perform by hand (reordering instructions for optimal execution, and so on).
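As a rough illustration, here's the kind of loop where an optimizing compiler typically earns its keep (the function and names are mine, purely illustrative): a modern compiler is free to unroll it, vectorize it with SIMD instructions, and reschedule the memory loads, all of which are tedious and error-prone to do by hand in ASM.

```cpp
#include <cstdio>
#include <cstddef>

// Sum an array of ints. An optimizing compiler may unroll this loop,
// vectorize it with SIMD instructions, and reorder loads to hide memory
// latency; writing the equivalent ASM by hand is tedious and error-prone.
long long sum(const int* data, std::size_t n) {
    long long total = 0;
    for (std::size_t i = 0; i < n; ++i) {
        total += data[i];
    }
    return total;
}

int main() {
    int values[] = {1, 2, 3, 4, 5};
    std::printf("%lld\n", sum(values, 5));
    return 0;
}
```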
In that regard, all compiled languages are created "equal". However, some are more equal than others. How well the compiled code performs depends fundamentally on how good the compiler is, and much less on the specific language. Certain language features, such as virtual methods, incur a performance penalty (last time I checked, virtual methods were implemented using a table of function pointers, though my knowledge may be dated here).
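For what it's worth, that's still how mainstream C++ compilers do it. A minimal sketch (the class names are mine, purely illustrative) of where that indirection shows up:

```cpp
#include <cstdio>

// A minimal sketch of virtual dispatch. The compiler typically implements the
// virtual call as an indirect call through a per-class table of function
// pointers (the "vtable"); each object carries a hidden pointer to that table.
struct Shape {
    virtual double area() const { return 0.0; }  // dispatched via the vtable
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265358979 * r * r; }
};

double report_area(const Shape& s) {
    // The compiler cannot always know the dynamic type here, so it emits:
    // load the vtable pointer from the object, load the function pointer,
    // then an indirect call; that is the small penalty mentioned above.
    return s.area();
}

int main() {
    Circle c(2.0);
    std::printf("area = %f\n", report_area(c));
    return 0;
}
```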
Interpreted languages fundamentally examine human-readable source code as the program executes, which essentially means performing the equivalent of both the compile and execute stages at runtime. Therefore they will almost always be somewhat slower than a compiled counterpart. Smart implementations will incrementally interpret parts of the code as they are executed (to avoid interpreting branches that are never hit) and cache the result, so that a given portion of code is only interpreted once.
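A toy sketch of that "interpret once, cache the result" idea (the command format and cache layout here are invented for illustration, not taken from any real interpreter):

```cpp
#include <string>
#include <unordered_map>
#include <sstream>
#include <cstdio>

// Parsed form of a textual command such as "add 2 3" or "mul 2 3".
struct ParsedOp {
    char op;        // 'a' for add, 'm' for multiply
    int lhs, rhs;
};

// Parsing the text is the expensive "compile-like" work an interpreter
// repeats at runtime; caching the parsed form means each distinct line of
// source is analysed only once.
ParsedOp parse(const std::string& line) {
    std::istringstream in(line);
    std::string name;
    ParsedOp p{};
    in >> name >> p.lhs >> p.rhs;
    p.op = (name == "mul") ? 'm' : 'a';
    return p;
}

int execute(const std::string& line,
            std::unordered_map<std::string, ParsedOp>& cache) {
    auto it = cache.find(line);
    if (it == cache.end()) {
        it = cache.emplace(line, parse(line)).first;  // interpret once
    }
    const ParsedOp& p = it->second;
    return p.op == 'm' ? p.lhs * p.rhs : p.lhs + p.rhs;
}

int main() {
    std::unordered_map<std::string, ParsedOp> cache;
    std::printf("%d\n", execute("add 2 3", cache));  // parses the text
    std::printf("%d\n", execute("add 2 3", cache));  // reuses the cached parse
    return 0;
}
```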
There's a middle ground as well, in which human-readable source is translated into pseudo-code (sometimes called P-code or bytecode). The purpose of this is to have a compact representation of the code that is fast to interpret, yet portable across many operating systems (you still need a program to interpret the P-code on each platform). Java falls into this category.
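A minimal sketch of what the interpreting program's core loop looks like (the opcodes and program below are invented; real P-code or JVM bytecode is far richer, but the idea of a small, portable dispatch loop over compact numeric instructions is the same):

```cpp
#include <cstdio>
#include <cstddef>
#include <vector>

// Invented opcodes for a tiny stack-based virtual machine.
enum Op : int { PUSH, ADD, MUL, PRINT, HALT };

// Execute a bytecode program: each instruction is a compact integer, so this
// one loop can run the same program on any platform it is compiled for.
void run(const std::vector<int>& code) {
    std::vector<int> stack;
    for (std::size_t pc = 0; pc < code.size(); ) {
        switch (code[pc++]) {
            case PUSH:  stack.push_back(code[pc++]); break;
            case ADD:   { int b = stack.back(); stack.pop_back();
                          stack.back() += b; break; }
            case MUL:   { int b = stack.back(); stack.pop_back();
                          stack.back() *= b; break; }
            case PRINT: std::printf("%d\n", stack.back()); break;
            case HALT:  return;
        }
    }
}

int main() {
    // Equivalent of "print (2 + 3) * 4", already compiled to bytecode.
    run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
    return 0;
}
```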