
I've been reading a book on C#, and it explains that int, float, double, etc. are "basic" types, meaning that they store the information at the lowest level of the language, while a type such as 'object' puts information in memory and then the program has to access it there. As a beginner, I don't know exactly what this means, though!

The book, however, does not explain what the difference is. Why would I use int or string or whatever instead of just object every time, since object can essentially be any of these types? How does it impact the performance of the program?

Paze
  • They're not basic *operators*, they're basic *types* – Mathias R. Jessen Oct 18 '15 at 19:47
  • Do you understand why you need to declare a type in the first place, in contrast to languages such as PHP or JavaScript where you don't? – Jon Oct 18 '15 at 19:49
  • No, I'm afraid I don't. – Paze Oct 18 '15 at 19:49
  • _"int, float, double etc. are "basic" types meaning that they store the information directly at the core of the computer"_ - I hope this is you paraphrasing, otherwise I'd go find another book. – CodeCaster Oct 18 '15 at 19:49
  • I may be paraphrasing but not by far. I just can't say whether they said it was directly in the "CPU" or some other part of the hardware. I could look it up. It was just some really basic part of the hardware, as I understood it. – Paze Oct 18 '15 at 19:50
  • @Paze then it would be extremely difficult to explain successfully. Basically, declaring a type means you tell the compiler "I authorize you to assume this thing is an X and prevent me from doing anything that doesn't make sense with an X". This contract that you voluntarily enter into is restraining you (bad) but also guarantees that the compiler will catch a whole class of mistakes you might make (good). So the type to declare is mostly decided by what tradeoff you deem suitable. – Jon Oct 18 '15 at 19:53
  • Here is the book quote: "These data types are called "primitive types" because they are embedded in the C# language at the lowest level" Sorry, I must have remembered the language as hardware. – Paze Oct 18 '15 at 19:54
  • Check the difference between **value types** and **reference types**. – w.b Oct 18 '15 at 19:58
  • _Why would I use int or string or whatever instead of just object every time?_ Because you don't know what the types are when you use everything as object, and that makes everything hard. Besides, these values are boxed when stored as object, and you can't do much with them. – M.kazem Akhgary Oct 18 '15 at 20:04
  • Why would I want to use a float instead of decimal if they are both primitive data types but decimal is more accurate? – Paze Oct 18 '15 at 20:59
  • @Paze You should always stick with the least functionality that serves your needs. decimal is much heavier than float, and if you always pick the most accurate or most powerful type, you'll end up a programmer who doesn't know how to deal with system resources. When float can serve all your needs, using decimal is not smart. – M.kazem Akhgary Oct 19 '15 at 16:11
  • Okay, so it does use more resources even though they are both "primitive data types". – Paze Oct 19 '15 at 18:54

3 Answers


This is a very broad subject, and I'm afraid your question as currently stated is prone to being put on hold as such. This answer will barely scratch the surface. Try to educate yourself more, and then you'll be able to ask a more specific question.

Why would I use int or string or whatever instead of just object every time, as object is essentially any of these types?

Basically you use the appropriate types to store different types of information in order to execute useful operations on them.

Just as you don't "add" two objects, you can't get the substring of a number.

So:

int foo = 42;
int bar = 21;
int fooBar = foo + bar;

This won't work if you had declared the variables as object. You can do the addition because the numeric types have mathematical operators, such as the + operator, defined on them.

You can refer to an integer type as an object (or any type really, as in C# everything inherits from object):

object foo = 42;

However, now you won't be able to add this foo to another number; it is said to be a boxed value type.
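To make the boxing concrete, here is a minimal sketch of what you can and can't do with a boxed int (the variable names are just for illustration):

```csharp
object boxed = 42;        // the int is copied ("boxed") into an object on the heap

// int bad = boxed + 1;   // does not compile: object has no + operator

int unboxed = (int)boxed; // "unboxing": an explicit cast copies the value back out
int sum = unboxed + 1;    // arithmetic works again on the plain int
```

Note that the cast is checked at runtime: if `boxed` did not actually contain an int, the `(int)boxed` line would throw an InvalidCastException.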

Where exactly these different types are stored is a different subject altogether, about which a lot has been written already. See for example Why are Value Types created on the Stack and Reference Types created on the Heap?. Also relevant is the difference between value types and reference types, as pointed out in the comments.

CodeCaster
  • Thank you. That clears it up a bit for me. Maybe the book I'm reading is going a bit too deep for someone who just wants to be able to code some C# and not understand the theory behind it like an engineer would? – Paze Oct 18 '15 at 19:59
  • @Paze Maybe. In the long run though, understanding how the runtime environment actually treats your code and the data it processes is a very fundamental lesson that all programmers should learn at some point. – Mathias R. Jessen Oct 18 '15 at 20:02
  • @Paze: I doubt you could code anything complex enough to be of practical use without understanding this. We are talking about the *absolute basics*. – Jon Oct 18 '15 at 20:05
  • Oh okay, I better get to it then. – Paze Oct 18 '15 at 20:05

C# is a strongly typed language, which means that the compiler checks that the types of the variables and methods that you use are always consistent. This is what prevents you from writing things like this:

void PrintOrder(Order order)
{
    ...
}

PrintOrder("Hello world");

because it would make no sense.

If you just use object everywhere, the compiler can't check anything. And anyway, it wouldn't let you access the members of the actual type, because it doesn't know that they exist. For instance, this works:

OrderPrinter printer = new OrderPrinter();
printer.PrintOrder(myOrder);

But this doesn't:

object printer = new OrderPrinter();
printer.PrintOrder(myOrder);

because there is no PrintOrder method defined in the class Object.
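As a hedged sketch of the tradeoff (the bodies of Order and OrderPrinter are filled in hypothetically here, just to make the snippet compile): if you are sure what the object really is, an explicit cast gives the compiler the type information back, but the check moves from compile time to run time.

```csharp
using System;

class Order { }

class OrderPrinter
{
    public void PrintOrder(Order order) => Console.WriteLine("Printing order...");
}

class Program
{
    static void Main()
    {
        Order myOrder = new Order();
        object printer = new OrderPrinter();

        // printer.PrintOrder(myOrder);   // compile error: 'object' contains
        //                                // no definition for 'PrintOrder'

        // An explicit cast restores the type information:
        ((OrderPrinter)printer).PrintOrder(myOrder);

        // But if the object is not what you claimed, the error now
        // only shows up at runtime:
        object notAPrinter = "Hello world";
        try
        {
            ((OrderPrinter)notAPrinter).PrintOrder(myOrder);
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("InvalidCastException at runtime");
        }
    }
}
```

This is exactly the class of mistake that declaring the variable as OrderPrinter in the first place would have caught at compile time.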

This can seem constraining if you come from a loosely-typed language, but you'll come to appreciate it, because it lets you detect lots of potential errors at compile time, rather than at runtime.

Thomas Levesque

What the book is referring to is basically the difference between value types (int, float, struct, etc) and reference types (string, object, etc).

Value types store their content in memory allocated on the stack, which is efficient, whereas reference types (almost anything that can have the null value) store the address where the data is. Reference types are allocated on the heap, which is less efficient than the stack because there is a cost to allocating and deallocating the memory used to store your data (and it is only deallocated by the garbage collector).

So if you use object every time, it will be slower to allocate the memory and slower to reclaim it.
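A rough illustration of where that cost shows up (this is a sketch, not a benchmark; the non-generic ArrayList versus the generic List&lt;int&gt; is the classic example):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        // ArrayList stores everything as object, so each int is boxed:
        // a separate little heap allocation per element.
        ArrayList boxed = new ArrayList();
        for (int i = 0; i < 3; i++)
            boxed.Add(i);                // boxing happens on every Add

        int sum = 0;
        foreach (object o in boxed)
            sum += (int)o;               // unboxing: explicit cast required

        // List<int> keeps the ints as plain values: no boxing, no casts.
        List<int> typed = new List<int> { 0, 1, 2 };
        int sum2 = 0;
        foreach (int i in typed)
            sum2 += i;

        Console.WriteLine(sum == sum2);  // True
    }
}
```

Both loops compute the same sum, but the ArrayList version paid for a heap allocation per element and for the garbage collector to clean them up later.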

Documentation

Nasreddine
  • IMHO this is missing the point. You could substitute `string` for `int` and the question would still stand, but this answer would not. – Jon Oct 18 '15 at 20:07
  • @Jon My answer (tries to) partially answer the bit about performance in the OP "How does it impact the performance of the program?" – Nasreddine Oct 18 '15 at 20:10
  • Fair enough, I didn't pay much attention to that because of the way I parsed the question. Cheers! – Jon Oct 18 '15 at 20:12