I have this huge JSON file, neatly formatted, starting with the characters "[\r\n" and ending with "]". I have this piece of code:
foreach (var line in File.ReadLines(@"d:\wikipedia\wikipedia.json").Skip(1))
{
    if (line[0] == ']') break;
    // Do stuff
}
I'm wondering what would be best performance-wise: how many clock cycles and how much memory would the resulting machine code consume if I compared the code above to a version where "break" is replaced with "continue"? Or would both pieces of code compile to the same MSIL and machine code? If you know the answer, please explain exactly how you reached your conclusion; I'd really like to know.
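For context, a rough way to compare the two variants at runtime would be a simple Stopwatch timing. This is only a sketch (the path and the empty loop bodies are placeholders from my example above), not a proper benchmark, and it obviously says nothing about the generated MSIL:

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class BreakVsContinueTiming
{
    // Placeholder path from the example above.
    const string Path = @"d:\wikipedia\wikipedia.json";

    static void Main()
    {
        // "break" variant: stop enumerating as soon as the closing ']' line is seen.
        var sw = Stopwatch.StartNew();
        foreach (var line in File.ReadLines(Path).Skip(1))
        {
            if (line[0] == ']') break;
            // Do stuff
        }
        Console.WriteLine($"break:    {sw.Elapsed}");

        // "continue" variant: skip the ']' line but keep pulling from the enumerator.
        sw.Restart();
        foreach (var line in File.ReadLines(Path).Skip(1))
        {
            if (line[0] == ']') continue;
            // Do stuff
        }
        Console.WriteLine($"continue: {sw.Elapsed}");
    }
}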
EDIT: Before you close this as nonsensical, consider that the code below is equivalent to the code above, and that the C# compiler optimizes when the code path is flat and doesn't fork in many ways. Would all of the following examples generate the same amount of work for the CPU?
IEnumerable<char> text = new[] { '[', 'a', 'b', 'c', ']' };

foreach (var c in text.Skip(1))
{
    if (c == ']') break;
    // Do stuff
}

foreach (var c in text.Skip(1))
{
    if (c == ']') continue;
    // Do stuff
}

foreach (var c in text.Skip(1))
{
    if (c != ']')
    {
        // Do stuff
    }
}
foreach (var c in text.Skip(1))
{
    if (c != ']')
    {
        // Do stuff
    }
    else
    {
        break;
    }
}
EDIT2: Here's another way of putting it: what's the prettiest way to skip the first and the last item in an IEnumerable while still deferring execution until // Do stuff?
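The best I've come up with so far is a tiny extension method that holds back one element so the last item is never yielded. This is just a sketch (SkipLastOne is my own name for it), and I think newer framework versions have a built-in Enumerable.SkipLast that does much the same thing:

using System.Collections.Generic;
using System.IO;
using System.Linq;

static class EnumerableExtensions
{
    // Lazily yields every element except the last one by buffering a single item.
    public static IEnumerable<T> SkipLastOne<T>(this IEnumerable<T> source)
    {
        bool hasPrevious = false;
        T previous = default(T);

        foreach (var item in source)
        {
            // Only emit an item once we know it isn't the last one.
            if (hasPrevious) yield return previous;
            previous = item;
            hasPrevious = true;
        }
    }
}

class Program
{
    static void Main()
    {
        // Skip the opening "[" line and the closing "]" line, still streaming lazily.
        foreach (var line in File.ReadLines(@"d:\wikipedia\wikipedia.json").Skip(1).SkipLastOne())
        {
            // Do stuff
        }
    }
}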