Note that if your nested repos were declared as submodules, then a simple git submodule update --remote
would be enough.
That is, provided you had your submodules configured to follow a branch.
See also "Git submodule to track remote branch".
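For reference, declaring a submodule that follows a branch could look like this (the submodule name `mymodule`, the URL, and the branch `main` are placeholders for your actual nested repo):

```shell
# Add a new submodule that tracks a branch (here: main).
# "https://example.com/mymodule.git" and "mymodule" are placeholders.
git submodule add -b main https://example.com/mymodule.git mymodule

# Or, for an already-declared submodule, record the branch in .gitmodules:
git config -f .gitmodules submodule.mymodule.branch main

# Then one command updates every branch-following submodule:
git submodule update --remote
```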
Those updates (involving a pull) would not be multithreaded though (neither for the checkout part nor for the fetch part).
The multi-threading applies only to a few selected operations, as mentioned in this thread:
A few selected operations are multi-threaded if you compile with
thread support (i.e., do not set NO_PTHREADS
when you build).
But object packing (used during fetch/push, and during git-gc) is multi-threaded (at least the delta compression portion of it is).
git may fork to perform certain asynchronous operations.
E.g., during a fetch, one process runs pack-objects to create the output, and the other speaks the git protocol, mostly just passing through the output to the client.
On systems with threads, some of these operations are performed using a thread rather than fork.
This is not about CPU performance, but about keeping the code simple (and cannot be controlled with config).
All that means, as Etan Reisner comments, is that you would need to script those git pull updates yourself in order to multithread those commands.
See "Multithreading in Bash" for a scripting solution.
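A minimal sketch of such a script, using background jobs and `wait` (the repo directory names below are placeholders for your actual nested repos):

```shell
#!/bin/sh
# Run "git pull" in several nested repos concurrently.
# "repo1 repo2 repo3" are placeholder directory names.
for repo in repo1 repo2 repo3; do
  (
    cd "$repo" && git pull
  ) &           # launch each pull as a background job
done
wait            # block until every background pull has finished
echo "all pulls done"
```

Each `( ... ) &` runs in its own subshell, so the `cd` in one job does not affect the others, and `wait` makes the script exit only after all pulls complete.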