Here's another idea:
using System;
using System.Collections.Generic;
using System.Linq;

namespace RandomElements
{
    class Program
    {
        static IEnumerable<int> GetRandomElements(IEnumerable<int> source, int count)
        {
            var random = new Random();
            var length = source.Count(); // O(1) for an ICollection, an extra pass otherwise
            if (length < count)
            {
                throw new InvalidOperationException("Seriously?");
            }
            using (var enumerator = source.GetEnumerator())
            {
                while (count > 0)
                {
                    const int bias = 5;
                    // Math.Max keeps the argument non-negative; without it, Random.Next
                    // throws once (length / bias) - count - bias dips below zero
                    // (e.g. a 100-element source with count > 15).
                    var next = random.Next(Math.Max(0, (length / bias) - count - bias)) + 1; // To make sure we don't starve.
                    length -= next;
                    while (next > 0)
                    {
                        if (!enumerator.MoveNext())
                        {
                            throw new InvalidOperationException("What, we starved out?");
                        }
                        --next;
                    }
                    yield return enumerator.Current;
                    --count;
                }
            }
        }

        static void Main(string[] args)
        {
            var sequence = Enumerable.Range(1, 100);
            var random = GetRandomElements(sequence, 10);
            random.ToList().ForEach(Console.WriteLine);
        }
    }
}
It only needs to go through the sequence once, provided you pass in an ICollection so that Count() is O(1); otherwise Count() costs an extra pass just to learn the length. This might be useful if it's expensive to traverse the enumeration or to copy all the elements.
I'm not a statistician or mathematician or magician, so don't hold it against me, but I found that without the 'bias' constant the picks seemed to skew toward the rear end of the sequence. Perhaps someone could tweak the probabilities further? And if enumerating really is expensive, you could bias it even more toward the front.
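For what it's worth, the textbook way to get rid of the bias entirely is Knuth's selection sampling (Algorithm S): walk the sequence once and keep each element with probability (still needed) / (still remaining), which makes every element equally likely overall. Here's a sketch in the same spirit as the code above; the method name and signature are my own invention, and it has the same caveat that Count() costs an extra pass unless the source is an ICollection.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Sampling
{
    // Selection sampling (Knuth's Algorithm S): each element is kept with
    // probability (still needed) / (still remaining), so every element ends
    // up equally likely, in a single forward pass over the sequence.
    // SampleWithoutReplacement is a made-up name, not part of any library.
    public static IEnumerable<int> SampleWithoutReplacement(
        IEnumerable<int> source, int count, Random random)
    {
        var total = source.Count(); // same caveat: O(1) only for an ICollection
        if (total < count)
            throw new InvalidOperationException("Not enough elements.");

        foreach (var element in source)
        {
            if (random.Next(total) < count) // keep with probability count/total
            {
                yield return element;
                if (--count == 0)
                    yield break; // got everything we need
            }
            --total;
        }
    }

    static void Main()
    {
        // Prints 10 distinct values from 1..100, in their original order.
        foreach (var n in SampleWithoutReplacement(Enumerable.Range(1, 100), 10, new Random()))
            Console.WriteLine(n);
    }
}
```

Note that, unlike a shuffle-based approach, the picks come out in source order; once `total` shrinks to equal `count`, the probability hits 1 and everything remaining is taken, so you can never starve.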
Comments are welcome.