The exact implementation details are up to the implementers, but the GNU C library (glibc) implements rand() like this:
http://sourceware.org/git/?p=glibc.git;a=blob;f=stdlib/random_r.c;hb=glibc-2.15#l361
The comment explains it pretty well.
/* If we are using the trivial TYPE_0 R.N.G., just do the old linear
congruential bit. Otherwise, we do our fancy trinomial stuff,
which is the same in all the other cases due to all the global
variables that have been set up. The basic operation is to add
the number at the rear pointer into the one at the front
pointer. Then both pointers are advanced to the next location
cyclically in the table. The value returned is the sum generated,
reduced to 31 bits by throwing away the "least random" low bit.
Note: The code takes advantage of the fact that both the front and
rear pointers can't wrap on the same call by not testing the rear
pointer if the front one has wrapped. Returns a 31-bit random number. */
Regarding your question of why you always need a seed value: there are no truly random numbers in computer science. Computers are (in computation theory) completely deterministic machines; they can't perform any operation whose outcome is left to chance.
There are only pseudorandom number generators, which produce streams of numbers that look random but are still the results of deterministic calculations. That's why you need a seed value: each seed results in a different sequence of numbers, and when you use the same seed, you get the same sequence of pseudorandom numbers.
The fact that an RNG always returns the same sequence for the same seed can be exploited: the classic space simulation Elite, for example, was able to store a huge universe with hundreds of planets in a single integer. How did it do that? The whole universe was randomly generated; the only data required to recreate it was the seed value, which always resulted in the exact same universe being generated.