The answer is: both links "talk" about the same kind of thread.
The major difference is: the first is effectively asking about the number of threads that a given CPU can really execute in parallel.
The other link is about how many threads can coexist within a certain scope (for example, one JVM).
In other words: the major "idea" behind threads is that ... most of the time, they are idle! So having 6400 threads can work out, assuming your workload is such that 99.9% of the time each thread is doing nothing (like: waiting for something to happen), as sketched below. But of course, such a high number is probably not a good idea unless we are talking about a really huge server with plenty of cores to work with.

One also has to keep in mind that threads are a resource owned by the operating system, and many problems that you solved with "more threads" in the past now have different answers (for example, using the nio packages and non-blocking IO instead of having zillions of threads waiting for responses).
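To make that concrete, here is a minimal sketch (not from the original answer; the thread count of 6400 just mirrors the number above, and class/variable names are made up) that starts thousands of threads which do nothing but wait. Because they are blocked almost the whole time, they cost memory and OS bookkeeping, but essentially no CPU:

```java
import java.util.concurrent.CountDownLatch;

public class MostlyIdleThreads {
    public static void main(String[] args) throws InterruptedException {
        // 6400 mostly-idle threads; on some systems you may need to lower
        // the stack size (-Xss) or raise OS thread limits for this to start.
        int count = 6400;
        CountDownLatch done = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            new Thread(() -> {
                try {
                    Thread.sleep(10_000);      // "waiting for something to happen"
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }
        done.await();                          // all threads existed at once, mostly idle
        System.out.println(count + " idle threads ran without pegging the CPU");
    }
}
```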
Meaning: when you write example code where each thread just computes something (so that, if run alone, that thread would consume 100% of the available CPU cycles), then adding more threads just creates more load on the system.
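A hypothetical counter-example (again a sketch, not part of the original answer; don't treat the timing as a real benchmark, a proper one would use JMH) with purely CPU-bound threads: running far more of them than the CPU has hardware threads only adds scheduling overhead, it does not make the work finish faster:

```java
public class CpuBoundThreads {
    // Pointless arithmetic that keeps one core at 100% while it runs.
    static long burnCpu(long iterations) {
        long acc = 0;
        for (long i = 0; i < iterations; i++) {
            acc += i * 31 + 7;
        }
        return acc;
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 64;                      // far more than most CPUs can run in parallel
        long start = System.nanoTime();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> burnCpu(200_000_000L));
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.printf("%d CPU-bound threads took %d ms%n",
                threads, (System.nanoTime() - start) / 1_000_000);
    }
}
```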
Typically, a modern-day CPU has c cores, and each core can run t threads in parallel. So you often get something like 4 x 2 = 8 threads that can occupy the CPU at the same time. But as soon as your threads spend most of their time doing nothing (waiting for a disk read or a network request to come back), you can easily create, manage, and utilize hundreds or even thousands of threads.
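If you want to see what "c x t" is on your own machine, the JVM will tell you how many hardware threads it can use (a small sketch; the class name is made up):

```java
public class AvailableParallelism {
    public static void main(String[] args) {
        int hardwareThreads = Runtime.getRuntime().availableProcessors();
        // A 4-core CPU with 2-way SMT typically prints 8 here.
        System.out.println("Hardware threads visible to this JVM: " + hardwareThreads);
    }
}
```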