40

I couldn't find this in the StyleCop help manual, on SO, or on Google, so here it is ;)

When running StyleCop I get the following warning:

SA1121 - UseBuiltInTypeAlias - Readability Rules

The code uses one of the basic C# types, but does not use the built-in alias for the type.

Rather than using the type name or the fully-qualified type name, the built-in aliases for these types should always be used: bool, byte, char, decimal, double, short, int, long, object, sbyte, float, string, ushort, uint, ulong.

So, according to the rule above, String.Empty is wrong and string.Empty is correct.

Why is using the built-in aliases better? Can String, Int32, Int64 (etc.) complicate the code in special scenarios?
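A minimal pair showing what the rule flags (a self-contained sketch; the class and variable names are just for illustration):

    using System;

    class Sa1121Demo
    {
        static void Main()
        {
            String a = String.Empty; // SA1121: framework type name used
            Int32 b = 0;             // SA1121: framework type name used

            string c = string.Empty; // OK: built-in alias
            int d = 0;               // OK: built-in alias

            Console.WriteLine(a + b + c + d); // both pairs compile to identical IL
        }
    }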

akjoshi
binball
  • I use the built-in aliases just to make my code more colorful in Visual Studio. – BoltClock May 14 '11 at 08:02
  • The clue's in the name: *Style* Cop. It is a matter of style. – Richard May 14 '11 at 08:08
  • @Richard, yes, read the name Style _COP_. It is supposed to represent a set of rules on which everyone can agree. I spend too much time reformatting the code of marginal engineers in order to make the code readable. This industry needs a single set of rules for writing code, not some sort of coloring book for people to 'express' themselves. – Quark Soup Jun 25 '13 at 00:46
  • @DRAirey1 Formatting and naming are not things we'll all agree on *ever*. Hence my previous comment. You're older than me (from your profile); surely you remember that disagreements about style have always gone on? "to make the code readable": suggestion: providing the formatting is consistent and not unreasonable (e.g. has indenting following structure) then it is readable even if not *my* style. Choices about naming are equally subjective. – Richard Jun 25 '13 at 08:32
  • @Richard. As an employer and owner of a significant IP library, I couldn't disagree more. There is no advantage to having 6 different programmers code to 6 different styles and naming conventions. Turning the argument around, there is NO disadvantage to a single style such as the one enforced by the default settings of Style COP. The only thing you lose is the endless discussions around naming conventions, Hungarian notation, order of properties vs. methods, etc. – Quark Soup Jun 25 '13 at 19:15
  • @DRAirey1 There can definitely be disadvantages to an enforced style. Depending on what the rules of the style are and how good your automated tools are at 'fixing up' code towards that desired style, StyleCop can present a big time drain. As with most things in life it's a question of finding the right balance. – Нет войне Jul 06 '15 at 12:40
  • topo moto, I would agree with you if you were right. There is no question that StyleCop is a drain on resources, especially to set it up initially. However, that investment is paid back several-fold in maintenance. If you are writing one-shot code that isn't used by anyone, then I can see how your position has some merit. But anyone who has supported a large project spanning hundreds of users over many years will appreciate the cost savings of unified code. The cost savings for code-reviews alone pays for the investment. – Quark Soup Jul 07 '15 at 14:02

6 Answers

61

Just to clarify: not everyone agrees with the authors of StyleCop. Win32 and .NET guru Jeffrey Richter writes in his excellent book CLR via C#:

The C# language specification states, “As a matter of style, use of the keyword is favored over use of the complete system type name.” I disagree with the language specification; I prefer to use the FCL type names and completely avoid the primitive type names. In fact, I wish that compilers didn’t even offer the primitive type names and forced developers to use the FCL type names instead. Here are my reasons:

  • I’ve seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used. Similarly, I’ve heard some developers say that int represents a 32-bit integer when the application is running on a 32-bit OS and that it represents a 64-bit integer when the application is running on a 64-bit OS. This statement is absolutely false: in C#, an int always maps to System.Int32, and therefore it represents a 32-bit integer regardless of the OS the code is running on. If programmers would use Int32 in their code, then this potential confusion is also eliminated.

  • In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI does treat long as an Int32. Someone reading source code in one language could easily misinterpret the code’s intention if he or she were used to programming in a different programming language. In fact, most languages won’t even treat long as a keyword and won’t compile code that uses it.

  • The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it’s legal to write the following code, the line with float feels very unnatural to me, and it’s not obvious that the line is correct:

    BinaryReader br = new BinaryReader(...);
    float val = br.ReadSingle(); // OK, but feels unnatural
    Single val = br.ReadSingle(); // OK and feels good
    
  • Many programmers that use C# exclusively tend to forget that other programming languages can be used against the CLR, and because of this, C#-isms creep into the class library code. For example, Microsoft’s FCL is almost exclusively written in C# and developers on the FCL team have now introduced methods into the library such as Array’s GetLongLength, which returns an Int64 value that is a long in C# but not in other languages (like C++/CLI). Another example is System.Linq.Enumerable’s LongCount method.
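A small self-contained check of the int/Int32 point above (this snippet is an added illustration, not from the book):

    using System;

    class AliasCheck
    {
        static void Main()
        {
            // The aliases are compile-time synonyms for the FCL types:
            Console.WriteLine(typeof(string) == typeof(String)); // True
            Console.WriteLine(typeof(int) == typeof(Int32));     // True

            // int is always System.Int32 (32 bits) regardless of OS bitness;
            // it is the pointer size that varies between processes:
            Console.WriteLine(sizeof(int)); // always 4
            Console.WriteLine(IntPtr.Size); // 4 in a 32-bit process, 8 in a 64-bit one
        }
    }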

Cody Gray - on strike
Andrew Savinykh
  • Consider using blockquotes to indicate what *exactly* is a quote from the book, and what is your original content. – Cody Gray - on strike May 15 '11 at 12:47
  • @Cody Gray: feel free to edit. I spent around an hour and a half typing this and formatting the text. The result that you see is the best I could come up with. I clearly indicated in the text that the quote goes until the end of the answer. If you feel that you can improve this, please edit the answer. – Andrew Savinykh May 15 '11 at 20:08
  • @Cody Gray: I basically don't know how to use block quote so that all the rest of the formatting (bullet list, code highlighting, etc) are retained. If you can show me, I can use it in the future. – Andrew Savinykh May 15 '11 at 20:23
  • Ah, no problem. The formatting can be kind of confusing when you're combining multiple elements. It's taken me a lot of practice. The confusion was the phrase "everything until the end of the answer". I wasn't sure if that meant *all* the rest of it, or what exactly. And +1 since I've got more votes today. ;-) – Cody Gray - on strike May 16 '11 at 04:58
  • @Cody Gray. Sign me up. Is there somewhere we can voice our strong opinion on this subject to the people that write the StyleCop rules? I realize I can turn them off, but this one needs to be changed. All the other rules make a lot of sense; I can't understand what this one is doing there. The only reason the old datatypes were kept around was so they didn't scare off the C++ programmers with something completely alien. Now that C# has claimed the title of best programming language (so far), they need to drop this archaic practice. – Quark Soup Jun 25 '13 at 00:51
21

It would only really complicate the code if you had your own String, Int32, etc. types which might end up being used instead of System.* - and please don't do that!
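For illustration, a contrived sketch of that conflict (the MyApp namespace and its String type are hypothetical, only there to show the name resolution):

    using System;

    namespace MyApp
    {
        // Please don't: this shadows System.String within the MyApp namespace.
        class String { }

        class Program
        {
            static void Main()
            {
                String s1 = new String(); // resolves to MyApp.String here!
                string s2 = "hello";      // the alias always means System.String
                Console.WriteLine(s1.GetType()); // MyApp.String
                Console.WriteLine(s2.GetType()); // System.String
            }
        }
    }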

Ultimately it's a personal preference. I use the aliases everywhere, but I know some people (e.g. Jeffrey Richter) advise never using them. It's probably a good idea to be consistent, that's all. If you don't like that StyleCop rule, disable it.

Note that names of methods etc should use the framework name rather than the alias, so as to be language-neutral. This isn't so important for private / internal members, but you might as well have the same rules for private methods as public ones.
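For instance, mirroring the framework's BinaryReader.ReadInt32 / ReadInt64 naming, a hypothetical public API might look like this (PacketReader is made up for the example):

    using System;

    public class PacketReader
    {
        // Good: "Int64" is language-neutral, like BinaryReader.ReadInt64.
        public long ReadInt64() => 0L;

        // Avoid: "long" is a C#-ism; in C++/CLI, "long" is only 32 bits.
        public long ReadLong() => 0L;
    }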

Jon Skeet
  • Can't the aliases be defined differently for weird environments that .NET might or might not run on, where an int is less than 32 bits? It is possible, and I thought that was the reason to use aliases. – the_drow May 14 '11 at 08:10
  • @the_drow: The C# specification explicitly defines `int` as an alias for `global::System.Int32`. So unless you're considering a situation where `Int32` is less than 32 bits, you're guaranteed the size of `int` with any compliant compiler. So no, that has nothing to do with using aliases. It's more for familiarity's sake, IMO... many C# developers have come from a background where `int`, `char` etc. are already common - admittedly they may have slightly different semantics to those in C#, but they're still familiar. – Jon Skeet May 14 '11 at 08:22
  • @JonSkeet: Doesn't this restriction decrease portability? Can Int32 be less than 32 bits? That doesn't make much sense. – the_drow May 14 '11 at 11:15
  • @the_drow: No, Int32 is always 32 bits. So this increases consistency and therefore portability. – Jon Skeet May 14 '11 at 13:07
  • @JonSkeet: The portability remark was about int always being Int32. What if you are implementing .NET for an embedded device that has, for example, a 24-bit int (or other strange situations)? – the_drow May 15 '11 at 05:04
  • @the_drow: Then either you implement your system such that "Int32" means "24 bits" (which would be very strange) or you make Int32 behave as normal, and possibly provide a separate 24-bit integer type. But making "int" mean the same thing everywhere means the C# code is portable. – Jon Skeet May 15 '11 at 06:20
  • "Ultimately it's a personal preference" -- I couldn't disagree more strongly. Ultimately it's about a standard for coding. If you want to express yourself, then make jewelry or boutique soap. If you want to be an engineer, then try to create standard code. Imagine if the auto industry felt personal preferences were a justification for some design? Yeah, I see everyone is using CAN, but I prefer token-ring. – Quark Soup Aug 19 '15 at 15:29
  • @DRAirey1: I certainly didn't mean to give the impression that everyone should just go with their own individual preferences. As in so many things, it's worth getting the team together, defining the conventions for the project, and then sticking to them. I don't think there are sufficient objective reasons to prefer one style over the other to try to mandate it globally. Note that this only applies to implementation details - for things like the naming of public types, parameters, properties, methods etc. the globally-accepted convention is vital. But whether I use `int` or `Int32` for locals? Meh. – Jon Skeet Aug 19 '15 at 18:00
  • @JonSkeet - I, again, respectfully disagree. There is NO benefit to the customer (or your employer) for you to take all your engineers and sit in a conference room for several hours trying to figure out what your team's 'standards' are going to be. Your stockholders will never see an ROI for the time spent trying to pick and choose which rules should be respected. – Quark Soup Aug 19 '15 at 19:49
  • @DRAirey1: Google certainly disagrees with you. We have code conventions for Java and other languages (some of which are public) and they *are* discussed internally - because it makes it easier for engineers to work with code written by other teams in the company. Personally, I'm glad of that. Where there are clear standards used by the vast majority of the industry (e.g. naming conventions for each of Java and .NET) there's no need for discussion. For other matters, where there is no universal convention, I believe it does make sense to establish them for a company - in at least some cases. – Jon Skeet Aug 19 '15 at 20:07
1

Because the built-in alias is a more natural way to express the concept in that language.

Some cultures say soccer, others say football. Which one is more appropriate depends on the context.

Esteban Araya
  • It is only 'natural' if you're still thinking like a 'C' or C++ programmer. For evolved programmers, there is no difference between a 'System.Boolean' and a 'MyLib.MyType'. C# treats all data types equally. – Quark Soup Jul 11 '13 at 13:15
0

An analogy might help: string is to System.String as musket is to rifle. string is a relic of an old language and is provided for old programmers. C# has no "built-in" datatypes and these aliases are provided for a generation of 'C' programmers who have trouble with this concept.

Quark Soup
0

This StyleCop rule supposes that using aliases introduces less confusion for the so-believed-to-exist "average language user", who knows, for example, the 'long' type but is somehow scared of the 'System.Int64' type and gets confused upon seeing it. Personally, I think it's important just to be consistent in your code style; it's impossible to satisfy everyone.

Petr Abdulin
0

Less confusing? It seems very awkward to me for a base data type, which traditionally is just a value, to include static functions. I understand using the base data type equivalent if you're just storing a value, but for accessing members of the class, it seems very awkward to put a .(dot) after the base type name.
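Of the two equivalent lines below, the first is the form this answer finds awkward (a minimal sketch; both compile to the same call):

    using System;

    class DotAfterKeyword
    {
        static void Main()
        {
            int a = int.Parse("42");   // static member accessed via the keyword
            int b = Int32.Parse("42"); // the same System.Int32.Parse call
            Console.WriteLine(a + b);  // 84
        }
    }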

David Hollowell - MSFT