(V8 developer here.)
For parsing speed, what matters is the total amount of script, not how it's organized into files.
That said, there can be secondary effects, and these may matter more for overall perceived page load performance than raw parsing speed. As Ashok's answer points out, downloading is part of the picture: historically, many small resources were at a disadvantage, which (as you point out) HTTP/2 should address. On the flip side, splitting the code into a few resources can even yield a speedup from concurrent downloads, compared to a single large chunk.
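For illustration (the filenames here are made up), a split into a few deferred chunks lets the browser fetch them concurrently, and over HTTP/2 they can all share a single multiplexed connection:

```html
<!-- A few mid-sized chunks instead of one monolithic bundle.
     The browser downloads these in parallel; "defer" keeps execution
     off the parser's critical path and preserves document order. -->
<script src="/js/framework.js" defer></script>
<script src="/js/widgets.js" defer></script>
<script src="/js/main.js" defer></script>
```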
Another effect worth considering is caching. If you have one part of your code that changes rarely (e.g. a third-party library that you only update every few months) and another part that changes a lot (e.g. your own code, where you deploy new versions every other day), then it makes sense to split the script files along that line, so that the browser can cache the part that doesn't change. That at least avoids re-downloading the unchanged part; some browsers might even be able to cache the result of parsing/compiling the code, which would save even more work.
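As a sketch of that split (hypothetical filenames and hashes): embedding a content hash in each filename lets you serve both files with a long cache lifetime, so after a deploy the browser only re-fetches the chunk whose hash actually changed:

```html
<!-- Rarely-changing third-party library; serve with a long cache
     lifetime, e.g. Cache-Control: max-age=31536000, immutable -->
<script src="/js/vendor.3f9a1c.js" defer></script>
<!-- Frequently-deployed application code: only this file gets a new
     hash (and hence a new download) when it changes -->
<script src="/js/app.58e2d7.js" defer></script>
```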
The applicable rules of thumb are:
(1) Do what makes sense for your case (i.e. what's most convenient); or at least start with that and see if it works well enough. Premature optimization (i.e. making things more complicated in the hope of gaining speed, without having verified that it's actually necessary or helpful) is usually a bad idea.
(2) Measure any alternatives yourself, with a test that's as close as possible to your real situation. For example, apply a realistic split to your actual sources (maybe a handful of chunks? or combine them into one, if splitting is what you did before) and test that, rather than generating thousands of files with dummy content; the snippet below is one quick way to compare. If you can't measure a difference, then there is no difference that matters! And if you can measure a difference, then you have your answer :-)
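For instance, here's a minimal sketch using the standard Resource Timing API; paste it into the DevTools console on your real page to log how long each script took to download, so you can compare a split build against a combined one:

```js
// Log per-script download time for the current page load
// (Resource Timing API; available in all modern browsers).
for (const entry of performance.getEntriesByType('resource')) {
  if (entry.initiatorType === 'script') {
    const ms = entry.responseEnd - entry.startTime;
    console.log(`${entry.name}: ${ms.toFixed(1)} ms`);
  }
}
```

For the parse/compile side, a recorded trace in the DevTools Performance panel (e.g. Chrome's "Compile Script" events) breaks that work out per script, which tells you more than micro-benchmarks on dummy files would.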