If multiple cores are available, the operating system will distribute work across them as it sees fit, whether it's running multithreaded or multiprocess applications. I'll adopt the meanings of the words 'concurrency' and 'parallelism' described in the question "What is the difference between concurrency and parallelism?". It's something to watch out for, because people, even within computing, often use the two terms differently.
With a single-core, single-'thread' CPU, you won't get parallelism, meaning multiple streams of instructions executing at the same time. Instead, the operating system will rapidly switch between the tasks, creating something that often looks like parallelism but isn't. The same switching can also happen with multiple cores, but there the operating system will try to keep all the available cores busy and so get the work done sooner. If there are more runnable threads/processes than available CPU 'threads', it has no choice but to time-share them.
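To see that time-sharing in action, here's a minimal C++ sketch (the thread count, workload size, and fallback value are illustrative choices of mine, not anything from the text above) that deliberately launches twice as many software threads as the hardware offers; they can't all run in parallel, so the operating system must time-slice them:

```cpp
#include <iostream>
#include <thread>
#include <vector>

// Busy-work so each thread genuinely occupies a CPU for a while.
void spin(int id) {
    volatile unsigned long long sum = 0;  // volatile stops the compiler deleting the loop
    for (unsigned long long i = 0; i < 200'000'000ULL; ++i) sum += i;
    std::cout << "thread " << id << " finished\n";
}

int main() {
    unsigned hw = std::thread::hardware_concurrency();
    if (hw == 0) hw = 4;  // the standard allows 0 ("unknown"); fall back to a guess
    std::cout << "hardware threads: " << hw << "\n";

    // Deliberately oversubscribe: twice as many software threads as hardware threads.
    // The OS time-slices them across the available CPU 'threads'; all still finish.
    std::vector<std::thread> threads;
    for (unsigned i = 0; i < 2 * hw; ++i) threads.emplace_back(spin, static_cast<int>(i));
    for (auto& t : threads) t.join();
}
```

Compile with something like `g++ -O2 -pthread oversubscribe.cpp`. All the threads complete, but a system monitor will show them sharing the cores rather than all running simultaneously.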
It's a bit confusing because the term 'thread' means different things in different contexts. In CPUs that support hyper-threading, a single core can run multiple, usually two, streams of instructions in parallel, sharing the core's computational and other resources between them. For most purposes, though, these hardware 'threads' present themselves to the operating system as cores in their own right.
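A quick way to see how those hardware threads are presented: `std::thread::hardware_concurrency()` reports logical processors, not physical cores, so on a hypothetical four-core machine with two-way hyper-threading it will typically report 8. This is just a sketch of the query; the standard only guarantees a hint, and 0 is a legal "don't know" answer:

```cpp
#include <iostream>
#include <thread>

int main() {
    // Reports logical processors (hardware threads), not physical cores:
    // a 4-core CPU with 2-way hyper-threading typically shows 8 here.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0)
        std::cout << "hardware thread count unknown\n";  // 0 means "unknown" per the standard
    else
        std::cout << n << " hardware threads\n";
}
```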
To complicate things further, some languages, though none relevant to HPC, support multithreading without parallelism: threads exist, but only one stream of instructions executes at any moment. There are also instruction-level parallelism and SIMD, which you should look up if you're not sure what they are; a sketch follows below.
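As a taste of SIMD, here's a hedged sketch (the operation, array size, and constants are my own illustrative choices): a plain loop like this, where every iteration is independent, is exactly the kind of code an optimising compiler can auto-vectorise, using SIMD instructions to process several elements at once.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// y[i] = a * x[i] + y[i]. Each iteration is independent of the others,
// so a compiler at -O2/-O3 can typically auto-vectorise this loop,
// handling several floats per SIMD instruction (e.g. SSE/AVX on x86).
void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (std::size_t i = 0; i < x.size(); ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    std::vector<float> x(1'000'000, 1.0f), y(1'000'000, 2.0f);
    saxpy(3.0f, x, y);
    std::cout << y[0] << "\n";  // 5 = 3*1 + 2
}
```

Instruction-level parallelism is related but happens within a single instruction stream: even without SIMD, the CPU can overlap independent instructions from different iterations of a loop like this, with no change to the source code at all.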