First, processes are not threads. A process contains one or more threads. It starts with one thread, and may create more if it so chooses. The idea is similar, but threads within a process share the process's address space (each thread can access/share the program's variables). Processes each get a separate address space, and are unable to access memory outside what the OS assigned to them.
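Here's a minimal Python sketch of that difference (the `if __name__` guard is just so the child-process half works on every platform): a thread changes the variable we see, while a child process only changes its own private copy.

```python
import threading
import multiprocessing

counter = 0

def bump():
    global counter
    counter += 1

if __name__ == "__main__":
    # A thread shares the process's address space, so it changes our variable.
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print(counter)  # 1

    # A child process gets its own address space; its increment is invisible here.
    p = multiprocessing.Process(target=bump)
    p.start(); p.join()
    print(counter)  # still 1
```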
Any program you run gets a process. When you run a script, the language interpreter--be it shell, Python, whatever--gets invoked to execute the script, and that's a process. The difference between a program and a process is that the process is a running instance. So if you have 3 terminals open running `bash`, you have 3 processes running the one program. Note this doesn't necessarily mean windows: my mail program can have several windows open, but it's all still done by one process.
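To see "one program, several processes" concretely, here's a small Python sketch (assuming a Unix-like system where the `sleep` program exists): it launches the same program three times, and each launch gets its own PID.

```python
import subprocess

# Start the same program three times; each launch is a separate process
# with its own PID, even though it's one program on disk.
procs = [subprocess.Popen(["sleep", "2"]) for _ in range(3)]
print([p.pid for p in procs])  # three distinct PIDs
for p in procs:
    p.wait()
```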
Yes, you can start numerous concurrent processes. Limits are imposed by the OS: 32K is a common ceiling, but different flavors of Unix/Linux support different process counts. There's usually also a per-user process limit, unless you're `root`.
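You can query these limits programmatically; a Python sketch (note `RLIMIT_NPROC` and `/proc/sys/kernel/pid_max` are Linux/BSD specifics, not portable everywhere):

```python
import resource

# Per-user cap on process count (what "ulimit -u" shows in the shell).
# resource.RLIMIT_NPROC exists on Linux and the BSDs, not on all platforms.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("per-user process limit:", soft, hard)

# System-wide PID ceiling on Linux; 32768 is a common default.
with open("/proc/sys/kernel/pid_max") as f:
    print("system pid_max:", f.read().strip())
```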
In practice, the concurrent process count is also limited by available memory and CPU. If you have 4GB of RAM and a program where each process/instance takes up 500K, you could run about 6000 copies before you exhaust RAM (500K × 6000 ≈ 3GB, and the OS needs some for itself). Your system will rely on its swap file at that point, but you're going to encounter thrashing if all these processes are trying to run. If you do this to your SSD, you will shorten its life.
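The back-of-the-envelope math, as a tiny Python sketch (the 1GB reserved for the OS is an assumed round number):

```python
ram      = 4 * 1024**3   # 4 GiB of RAM
reserved = 1 * 1024**3   # rough allowance for the OS and everything else
per_proc = 500 * 1024    # 500 KiB per instance, the figure from above

print((ram - reserved) // per_proc)  # ~6291 copies before you hit swap
```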
And, unless you've got a supercomputer with hundreds or thousands of processors, no more than a few concurrent, CPU-intensive processes are practical. If you start 100 CPU-intensive ("CPU bound") processes on a 4-core machine, the OS will spread core time over all 100 using time slicing, so each process will run at 4 cores / 100 processes = 1/25 the rate it would run if it had a core to itself. You won't get more done by forking thousands of concurrent processes unless you have the hardware to actually do the work.
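You can do that time-slicing arithmetic for whatever machine you're on; a trivial Python sketch:

```python
import os

cores = os.cpu_count() or 1   # 4 in the example above
procs = 100                   # CPU-bound processes you plan to start

print(f"each process gets ~{cores / procs:.2f} of a core, "
      f"i.e. runs ~{procs // cores}x slower than with a core to itself")
# On a 4-core machine: 4/100 = 0.04 of a core, 1/25 normal speed.
```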
The flipside of being CPU bound is being I/O bound---suppose you want to mirror a website, so you're going to try downloading all 1000 pages in parallel. It's not going to be any faster than a limited number of parallel connections each grabbing items sequentially, because only so many bits can flow through the network. Once you saturate the network, more concurrency won't make anything faster.
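The usual answer is to cap the concurrency with a small worker pool instead of launching everything at once. A Python sketch (the URLs are hypothetical placeholders, and 8 workers is an assumed cap; tune it for your link):

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical URL list; the point is the bounded pool, not the site.
urls = [f"https://example.com/page{i}.html" for i in range(1000)]

def fetch(url):
    with urlopen(url, timeout=30) as resp:
        return url, len(resp.read())

# A handful of parallel connections saturates most links;
# 1000 at once just fight each other for the same bandwidth.
with ThreadPoolExecutor(max_workers=8) as pool:
    for url, size in pool.map(fetch, urls):
        print(url, size)
```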
You can use `ps` to list your personal processes, or `ps -ef` or `ps aux` to view all processes. There are many: as I'm writing this, my system has 235, but most of them are idle: terminals I'm not using at the moment, networking support, audio support at the ready in case it's called on, the web browser I'm writing in, the compositor that updates the screen when asked to by the web browser. You can learn a lot about your OS by looking through this list and looking up what the various programs do/what services they provide. This is where you see that your OS is not one big black box, but a collection of many programs/processes, each providing some limited functionality, but together providing most of the OS's services.
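You can poke at that list programmatically too. A Python sketch (assuming `ps` supports `-e -o comm=`, which Linux and macOS do; flags vary a little between Unix flavors):

```python
import subprocess
from collections import Counter

# "-e" = every process, "-o comm=" = just the command name, no header.
out = subprocess.run(["ps", "-e", "-o", "comm="],
                     capture_output=True, text=True, check=True)
names = out.stdout.splitlines()

print(len(names), "processes")
for name, n in Counter(names).most_common(10):
    print(f"{n:4} {name}")
```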