The answer is not so simple. There are cases where threads entering the blocked state can still contribute to CPU utilization.
Most JVMs employ tiered locking algorithms. These often involve techniques such as spinlocks, especially for locks that are held only for a short duration. When a thread tries to acquire a monitor and finds it unavailable, the JVM may put the thread in a loop and have it repeatedly attempt to acquire the monitor, rather than context-switching it out immediately. If the thread fails to acquire the lock after a certain number of attempts or a certain duration (depending on the specific JVM implementation), the JVM switches to a "fat lock" or "inflated lock" mode, where it does context-switch the thread out.
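To make the idea concrete, here is a minimal sketch of the "spin first, block later" pattern in plain Java. It is not how the JVM implements monitor inflation internally; the `SpinThenBlockLock` class, the `SPIN_LIMIT` value, and the method names are all illustrative assumptions, using `ReentrantLock` only as a stand-in for the underlying blocking primitive.

```java
import java.util.concurrent.locks.ReentrantLock;

// Conceptual sketch (not the JVM's actual monitor code): try to grab the
// lock a bounded number of times without giving up the CPU, and only fall
// back to a blocking acquire (letting the scheduler context-switch the
// thread out) once the spin budget is exhausted.
class SpinThenBlockLock {
    private static final int SPIN_LIMIT = 1000;   // arbitrary spin budget
    private final ReentrantLock lock = new ReentrantLock();

    public void acquire() {
        // "Thin lock" phase: busy-wait, burning CPU, in the hope that the
        // current holder releases the lock very quickly.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (lock.tryLock()) {
                return;                            // acquired while spinning
            }
            Thread.onSpinWait();                   // CPU spin hint (JDK 9+)
        }
        // "Fat/inflated lock" phase: park the thread until the lock is free.
        lock.lock();
    }

    public void release() {
        lock.unlock();
    }
}
```

The spin phase is what converts waiting time into CPU time; the blocking phase trades that CPU for the cost of a context switch.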
It is the spinlock behavior that can incur CPU costs. If you have code that holds a lock for a very short duration and contention is high, you may see an appreciable bump in CPU utilization. For a discussion of the techniques JVMs use to reduce the cost of contention, see http://www.ibm.com/developerworks/java/library/j-jtp10185/index.html.
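As a rough illustration of that scenario, the sketch below has many threads repeatedly acquiring a monitor that is held only long enough to bump a counter. The class name, thread count, and iteration count are arbitrary; the point is simply that with very short critical sections and high contention, much of the waiting may be resolved by spinning rather than blocking, which shows up as extra CPU time.

```java
// Minimal sketch of a "short critical section, high contention" workload.
public class ShortLockContention {
    private static final Object LOCK = new Object();
    private static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors() * 2;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 5_000_000; i++) {
                    synchronized (LOCK) {   // lock held for a very short time
                        counter++;
                    }
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            w.join();
        }
        System.out.println("counter = " + counter);
    }
}
```

Watching this kind of program with a CPU profiler or `top` while varying the thread count is one way to observe how contended short-held locks affect CPU usage on your particular JVM.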