Either way, you have to perform one or two operations per item: the necessary one is evaluating the predicate, and the second one, adding 1, depends on the result of the predicate. So, if you don't consider the effects of cache etc., both versions perform an equal number of operations.
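For concreteness, here is a minimal sketch of the two versions being compared (the names `countMatches1` and `countMatches2` are mine, not from the question):

```haskell
-- Version 1: build a filtered list, then count its elements.
countMatches1 :: (a -> Bool) -> [a] -> Int
countMatches1 p xs = length (filter p xs)

-- Version 2: count directly with explicit recursion, no intermediate list.
countMatches2 :: (a -> Bool) -> [a] -> Int
countMatches2 _ []     = 0
countMatches2 p (x:xs)
  | p x       = 1 + countMatches2 p xs
  | otherwise = countMatches2 p xs
```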
The picture one gets is that the first case involves two separate traversals: one gathering the elements and one counting them. For a list bigger than the cache can handle, this would slow down the processing, and in a strict language it actually does.
However, Haskell's laziness comes into the picture here. The list produced by `filter` is evaluated (comes into existence) element by element, as the counting function `length` demands it. And since `length` uses the elements only for counting and keeps no reference to the newly created list, each element is immediately free to be garbage-collected. So at any time, only O(1) memory is occupied during the calculation.
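One way to see this in practice (a sketch of my own, not from the question) is to run the filter-then-count version over a large list and look at GHC's runtime statistics; maximum residency stays small regardless of the list length:

```haskell
-- Counts even numbers in a long, lazily generated list.
-- The intermediate filtered list is consumed as it is produced,
-- so memory use does not grow with n.
main :: IO ()
main = print (length (filter even [1 .. 10000000 :: Int]))
```

Compiling with `ghc -O2` and running with `./Main +RTS -s` should show a maximum residency of only a few tens of kilobytes.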
There is a slight overhead in constructing the resultant "filtered" list in the first version. But this is usually negligible compared with the cache effects that come in when the lists are large. (For small lists it may matter; this needs to be tested.) Further, it may be optimized away entirely, depending on the compiler and the optimization level chosen.
Update. The second version will actually consume memory, as is pointed out in one of the comments. For a fair comparison, you need to write it with an accumulating parameter and a strictness annotation in the right place, because (I expect) `length` is already written this way.
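A sketch of what that accumulator-plus-strictness rewrite might look like (my own formulation, not taken from the comments):

```haskell
{-# LANGUAGE BangPatterns #-}

-- The second version rewritten with a strict accumulating parameter,
-- so the running count is forced at each step instead of building up
-- a chain of (1 + ...) thunks or deep non-tail recursion.
countMatches2' :: (a -> Bool) -> [a] -> Int
countMatches2' p = go 0
  where
    go !acc []     = acc
    go !acc (x:xs)
      | p x       = go (acc + 1) xs
      | otherwise = go acc xs
```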