There are, in general, two different styles of collection operation libraries:
- type-preserving: that is what you are confused about in your question
- generic (not in the "parametric polymorphism" sense but in the standard English sense of the word), or maybe "homogeneous"
Type-preserving collection operations try to preserve the type exactly for operations like `filter`, `take`, `drop`, etc. that only take existing elements unmodified. For operations like `map`, they try to find the closest supertype that can still hold the result. E.g. mapping over an `IntSet` with a function from `Int` to `String` can obviously not result in an `IntSet`, but only in a `Set`. Mapping an `IntSet` to `Boolean` could be represented in a `BitSet`, but I know of no collections framework that is clever enough to actually do that.
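You can see this in action with Scala's `BitSet` (a set of `Int`s, close to the hypothetical `IntSet` above). A small sketch, assuming a Scala 2.13 REPL:

```scala
import scala.collection.immutable.BitSet

val bits = BitSet(1, 2, 3)

// The function returns Ints, so the result can still be a BitSet.
val doubled: BitSet = bits.map(_ * 2)           // BitSet(2, 4, 6)

// The function returns Strings, which a BitSet cannot hold,
// so the static type widens to a Set of Strings.
val strings: Set[String] = bits.map(_.toString) // Set("1", "2", "3")
```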
Generic / homogeneous collection operations always return the same type. Usually, this type is chosen to be very general, to accommodate the widest range of use cases. For example, in .NET, collection operations return `IEnumerable`, in Java, they return `Stream`s, in C++, they return iterators, and in Ruby, they return arrays.
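Scala lets you opt into the homogeneous style as well, e.g. via iterators (my example, not from the question):

```scala
import scala.collection.immutable.BitSet

// No matter which collection you start from, the result is an Iterator.
val it: Iterator[String] = BitSet(1, 2, 3).iterator.map(_.toString)
```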
Until recently, it was only possible to implement type-preserving collection operations by duplicating all operations for all types. For example, the Smalltalk collections framework is type-preserving, and it does this by having every single collections class re-implement every single collections operation. This results in a lot of duplicated code and is a maintenance nightmare. (It is no coincidence that many new object-oriented abstractions that get invented have their first paper written about how they can be applied to the Smalltalk collections framework. See *Traits: Composable Units of Behaviour* for an example.)
To my knowledge, the Scala 2.8 re-design of the collections framework (see also this answer on SO) was the first time someone managed to create type-preserving collection operations while minimizing (though not eliminating) duplication. However, the Scala 2.8 collections framework was widely criticized as being overly complex, and it has required constant work over the last decade. In fact, it actually led to a complete re-design of the Scala documentation system as well, just to be able to hide the very complex type signatures that the type-preserving operations require. But, this still wasn't enough, so the collections framework was completely thrown out and re-designed yet again in Scala 2.13. (And this re-design took several years.)
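To give a rough feel for how the 2.13 design avoids duplication, here is a heavily simplified sketch of the idea (my toy code, not the actual `scala.collection` source): operations like `map` are written exactly once against an abstract "collection constructor", and each concrete collection only says how to rebuild its own kind:

```scala
// Toy version of the Scala 2.13 idea, not the real library code.
trait MyOps[+A, +CC[_]] {
  def iterator: Iterator[A]
  // Each concrete collection says how to rebuild "its own kind".
  protected def fromIterator[B](it: Iterator[B]): CC[B]
  // map is written exactly once, for all collections.
  def map[B](f: A => B): CC[B] = fromIterator(iterator.map(f))
}

final class MyList[+A](elems: List[A]) extends MyOps[A, MyList] {
  def iterator: Iterator[A] = elems.iterator
  protected def fromIterator[B](it: Iterator[B]): MyList[B] =
    new MyList(it.toList)
}
```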
So, the simple answer to your question is this: Scala tries as much as possible to preserve the type of the collection.
In your second case, the type of the collection is `Array`, and when you `map` over an `Array`, you get back an `Array`.
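For example (a Scala 2.13 REPL sketch):

```scala
val doubled: Array[Int] = Array(1, 2, 3).map(_ * 2)         // Array(2, 4, 6)
val strings: Array[String] = Array(1, 2, 3).map(_.toString) // Array("1", "2", "3")
```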
In your first case, the type of the collection is `Range`. Now, a `Range` doesn't actually store its elements, though. It only has a beginning, an end, and a step, and it produces the elements on demand while you are iterating over it. So, it is not that easy to produce a new `Range` with the new elements: the `map` function would basically need to be able to "reverse engineer" your mapping function to figure out what the new beginning, end, and step should be, which is undecidable in general (in other words, impossible to do reliably). And what if you do something like this:
```scala
val seq1: IndexedSeq[Int] = for (i <- 1 to 10) yield scala.util.Random.nextInt(i)
```
Here, there isn't even a well-defined step, so it is actually impossible to build a `Range` that produces these elements.
So, clearly, mapping over a `Range` cannot return a `Range`. Therefore, it does the next best thing: it returns the most precise supertype of `Range` that can contain the mapped values. In this case, that happens to be `IndexedSeq`.
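You can check this in the REPL (Scala 2.13):

```scala
val r = 1 to 10       // r: Range
val m = r.map(_ * 2)  // m: IndexedSeq[Int] (a Vector at runtime)
```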
There is a wrinkle, in that type-preserving collection operations challenge what we consider to be part of the contract of certain operations. For example, most people would argue that the cardinality of a collection should be invariant under `map`; in other words, `map` should map each element to exactly one new element and thus should never change the size of the collection. But, what about this code:
```scala
Set(1, 2, 3).map { _ % 2 == 0 }
//=> Set(false, true)
```
Here, you get back a collection with fewer elements from a `map`, which is only supposed to transform elements, not remove them. But, since we decided we want type-preserving collections, and a `Set` cannot have duplicate values, the two `false` values are actually the same value, so there is only one of them in the set.
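Contrast this with a `Seq`, where the same function preserves the size:

```scala
Seq(1, 2, 3).map { _ % 2 == 0 }
//=> List(false, true, false)
```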
[It could be argued that this actually only demonstrates that `Set`s aren't collections and shouldn't be treated as collections: `Set`s are predicates ("Is this element a member?") rather than collections ("Give me all your elements!").]