Yes, you should care about limiting concurrent calls, but how much depends on your use case.
Bottom line: it depends on your infrastructure.
For example, if you are on a shared network connection, fetching a lot of data from many different sources at once can saturate your network.
Another example: writing to a network server may be more efficient in larger chunks, with fewer simultaneous connections.
Depending on where you run this, it is worth knowing how many parallel tasks may execute at once.
If you know that, you can tune the number to stay within the boundaries of your environment. This depends on the hardware and the expected load.
That's basically what's meant in the article by:
The optimum number of connections depends on the actual conditions in which the application runs. Increasing the number of connections available to the application may not affect application performance. To determine the impact of more connections, run performance tests while varying the number of connections.
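In that spirit, such a performance test can be sketched as a sweep over candidate limits (again Python for illustration; `time.sleep` stands in for real network latency, and the limits tried are arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_request(_):
    time.sleep(0.01)  # placeholder for actual network latency

def measure(connection_limit, n_requests=50):
    """Time n_requests with at most connection_limit in flight."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=connection_limit) as pool:
        list(pool.map(simulated_request, range(n_requests)))
    return time.perf_counter() - start

# Try several limits and pick the one your environment actually rewards;
# beyond some point, more connections stop helping.
timings = {limit: measure(limit) for limit in (1, 2, 4, 8, 16)}
```

Against a real server, run the same sweep and watch where the curve flattens.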
The default of HttpWebRequest, as described, is pretty low, which makes it a good "general approach" setting, even on home/consumer internet connections. If your situation is, for example, high-performance server-to-server communication, then different values would make more sense.
The key is to know the environment/context and its capabilities.
Then you can determine a good number of parallel operations.
And that's why you must care about it, and sometimes limit it :-)