The proper way to do microbenchmarks is to learn about, and use correctly, the Java Microbenchmark Harness (JMH), which is complemented by the JEP 230 Microbenchmark Suite shipped with the JDK from OpenJDK 12 onward. A search for "java jmh" will yield links to some useful tutorials. I liked Jakob Jenkov's blog post, and of course anything by Aleksey Shipilëv, who is the principal developer and maintainer of JMH; just pick the most recent version of his JMH talks from the link provided.
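To give a feel for what that looks like, here is a minimal JMH sketch. The class name and the computation under test are placeholders; the annotations and the Blackhole are the standard JMH API:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)       // let the JIT compile and stabilize first
@Measurement(iterations = 5)  // only these iterations are reported
@Fork(1)                      // run in a fresh JVM to isolate the measurement
public class MyBenchmark {

    private long seed = 42L;

    @Benchmark
    public void measureSomething(Blackhole bh) {
        // The Blackhole "consumes" the result so the JIT cannot
        // dead-code-eliminate the computation under test.
        bh.consume(Long.rotateLeft(seed * 0x9E3779B97F4A7C15L, 17));
    }
}
```

The harness, not your code, handles warmup, forking, and statistics; that is exactly the machinery that naive timestamping lacks.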
Java benchmarking is anything but trivial, and the less work your tested code does, the deeper the rabbit hole. Timestamping can be very misleading when trying to get a grip on performance issues. The one place where timestamping does work is when you measure the wait time for external events (such as waiting for the reply to an HTTP request), as long as you can ensure that negligible time passes between the unblocking of the waiting thread and the taking of the "after" timestamp, and as long as the thread is unblocked promptly in the first place. This is typically the case if, and only if, the wait is at least on the order of tens of milliseconds; you're fine if you wait seconds for something (see the sketch below). Even then, warmup and cache effects will occur and can ruin the transferability of your measurements to real-world performance.
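A sketch of that one legitimate timestamping case, using the JDK 11+ HttpClient around a blocking call (the URL is a placeholder); the point is that the wait dwarfs the cost of taking the timestamps:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class WaitTiming {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/")).build();

        long before = System.nanoTime();
        // The thread blocks here; the network round trip dominates
        // everything else in the measured interval.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long after = System.nanoTime();  // taken immediately after unblocking

        System.out.printf("status=%d, waited ~%.1f ms%n",
                response.statusCode(), (after - before) / 1e6);
    }
}
```

If the measured interval were microseconds instead of milliseconds, scheduler jitter and the timestamp calls themselves would swamp the signal, which is why this pattern does not generalize to CPU-bound code.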
In terms of measuring "exact CPU time", one can take the approach detailed in Joachim Sauer's answer. When using JMH, one can measure CPU usage externally and then average it over the number of measured iterations; however, since this includes the harness's overhead, that approach is fine for comparative measurements but not suitable for deriving a statement like "my function xy takes, on average, such-and-such number of CPU seconds per iteration on the CPU architecture I used". On a modern CPU and JVM, such an observation is virtually impossible to make.
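For completeness, a sketch of per-thread CPU time via ThreadMXBean, the JDK facility answers like that typically build on (the loop is a placeholder workload, and the usual caveats above still apply):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuTimeProbe {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isCurrentThreadCpuTimeSupported()) {
            System.err.println("CPU time measurement not supported on this JVM");
            return;
        }

        long cpuBefore = bean.getCurrentThreadCpuTime();  // ns of CPU time
        long wallBefore = System.nanoTime();              // ns of wall time

        // Placeholder workload; substitute the code under test.
        double acc = 0;
        for (int i = 1; i < 10_000_000; i++) {
            acc += Math.sqrt(i);
        }

        long cpuAfter = bean.getCurrentThreadCpuTime();
        long wallAfter = System.nanoTime();

        // Comparing CPU time to wall time shows how much of the interval
        // the thread actually spent running rather than waiting.
        System.out.printf("acc=%.1f  cpu=%.1f ms  wall=%.1f ms%n",
                acc, (cpuAfter - cpuBefore) / 1e6, (wallAfter - wallBefore) / 1e6);
    }
}
```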