When you place a semicolon between CSS rules, the rule following the semicolon will be ignored. This can lead to some very strange results. MDN has a jsfiddle that shows this effect rather clearly.
This is the initial state, and this is after the first rule has a semicolon at its end.
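As a minimal sketch of the same effect (the selectors and colors here are just illustrative, not taken from the fiddle):

/* Initial state: both rules apply */
div { background-color: blue; }
p { background-color: red; }

/* After a semicolon is added to the end of the first rule, the second
   rule is ignored: the parser reads "; p" as the next rule's selector,
   which is invalid, so the entire p rule is dropped */
div { background-color: blue; };
p { background-color: red; }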
Fortunately, it is essentially universal practice to omit semicolons between CSS rule blocks.
My question is: why is this the case? I've heard it's because it saves space (in this case, exactly one character per CSS rule). But this reasoning, while true, seems a tad strange. I couldn't find specifics on how much space each character in a CSS file occupies, but if it's analogous to JS, this SO post tells us that each character is approximately 16 bits, or 2 bytes, meaning we would save 2 bytes per rule.
According to this list of average connection speeds by country, the global average connection speed is 5.1 Megabits/second. Since we save exactly 1 character per rule by not allowing semicolons, and each character is 16 bits, the number of rules it takes us, on average, to save one second is:
5,100,000 (bits/second) / 16 (bits saved/rule)
= (5,100,000 / 16) * [(bits * rules) / (seconds * bits)]
= 318,750 (rules/second)
And so, based on the global average connection speed, it would take over 300,000 rules' worth of omitted semicolons to save one second of download time.
Surely there must exist more efficient ways of saving download time for the user, and there are, such as minification/uglification of CSS/JS, or shortening the names of CSS properties: these are much longer than 1 character and can appear many times, so shortening them could save orders of magnitude more bytes than chopping off a trailing semicolon.
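As a rough comparison (a made-up rule, not measured from any real stylesheet), minifying even a single rule already saves far more than one character:

/* Unminified: whitespace, a long-form color, a unit on zero */
.navigation-bar {
    background-color: #ffffff;
    margin-top: 0px;
}

/* Minified: the same rule, many characters shorter; the one-character
   semicolon saving is negligible by comparison */
.navigation-bar{background-color:#fff;margin-top:0}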
More important than the bytes saved, in my opinion, is how confusing this can get for the developer. Many of us are trained by habit to follow closing braces with a semicolon.
returnType/functionDec functionName(arguments) {
    // ...function body
};
is a VERY common pattern found in a great many languages (including JavaScript), and it is absolutely possible to imagine a developer typing
cssRuleA {
    /* style rules */
};
cssRuleB {
    /* style rules */
};
as an accidental result of this habit. The console will log no errors, so the developer will have no indication that a mistake has been made apart from styles not applying correctly. The absolute worst part is that even though cssRuleA is what's causing the error, it will work just fine; cssRuleB will be the rule that fails to apply, even though there is nothing wrong with it. The fact that
- This logs no error in the console and
- The style that fails to apply is never the style at fault in this situation
can especially cause issues in large projects, where style/UI problems can have many different possible root causes.
Does there exist some factor inherent in CSS that makes this convention make more sense? Is there something in a white paper I missed that explains why CSS behaves this way? Personally, I tried to work out whether excluding semicolons is faster to parse from the perspective of finite automata/grammars, but I couldn't reach a definitive conclusion either way.