
Since a computer understands only 0s and 1s underneath, how does a floating-point number like 12.1234 get represented in memory as a set of 0s and 1s?

Does it get stored as the respective ASCII values of '1', '2', '.', '1', '2', '3', '4'?

lazarus
  • No offense, but since you know about the binary system: why don't you search for it yourself on the web? Start with something like http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html – fuesika Aug 29 '14 at 14:01
  • Thanks, I will keep it in mind for the next time. – lazarus Aug 29 '14 at 14:03
  • Jesus. No, don't start with Goldberg. Start with the wikipedia page. – tmyklebu Aug 29 '14 at 14:40
  • Take a look at this answer (mine): [How to represent FLOAT number in memory in C](http://stackoverflow.com/questions/6910115/how-to-represent-float-number-in-memory-in-c/6911412#6911412). – Rudy Velthuis Aug 29 '14 at 15:26

2 Answers


Since a computer understands only 0s and 1s, have you ever wondered how it can store emails, pictures, movies, and sound? Only 0s and 1s are stored; they are then interpreted. We assign meaning to bits depending on our purpose.

Google for IEEE 754 for a thorough explanation.
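For example, you can look at the raw bits yourself. A minimal sketch using Python's standard `struct` module (my illustration, not something from the IEEE 754 text itself): pack the float from the question into its four raw bytes, then reinterpret those same bytes as an integer so they can be printed in binary.

```python
import struct

# Pack 12.1234 into its four raw IEEE 754 single-precision bytes,
# then view the very same bytes as a 32-bit unsigned integer:
# identical bits, different interpretation.
raw = struct.pack('<f', 12.1234)
as_int = struct.unpack('<I', raw)[0]
print(f'{as_int:032b}')   # the 0s and 1s that are actually stored
```

The first bit printed is the sign, the next eight are the biased exponent (here 130, i.e. actual exponent 3, since 8 ≤ 12.1234 < 16), and the remaining 23 are the fraction.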

gnasher729

As far as I know, floating-point numbers (in IEEE 754 single precision) are stored in memory as follows:

  • sign s (denoting whether the number is positive or negative) - 1 bit
  • exponent e (stored with a bias of 127) - 8 bits
  • mantissa m (essentially the significant digits of your number) - 24 bits, of which 23 are stored and the leading 1 is implicit

For example:

3.14159 would be represented like this:

0 10000000 10010010000111111010000
^    ^                ^
|    |                |
|    |                +--- fraction = 10010010000111111010000; with the
|    |                     implicit leading 1, significand ≈ 1.5707951
|    |
|    +-------------------- stored exponent = 128, actual exponent = 128 - 127 = 1
|
+-------------------------- sign = 0 (positive)

Do note that the . is not stored in memory at all; its position follows from the exponent (here, 1.5707951 × 2^1 ≈ 3.14159).
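You can verify this breakdown yourself. A small sketch in Python (using only the standard `struct` module; the helper name `decompose` is mine) that splits a float into the three fields and reassembles the value from them:

```python
import struct

def decompose(x):
    # Reinterpret the float's 32 bits as an unsigned integer and
    # split out the three fields of the single-precision format.
    n = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = n >> 31                # 1 bit
    exponent = (n >> 23) & 0xFF   # 8 bits, biased by 127
    fraction = n & 0x7FFFFF       # 23 stored bits of the mantissa
    return sign, exponent, fraction

s, e, f = decompose(3.14159)
# Reassemble: (-1)^s * (1 + fraction/2^23) * 2^(exponent - 127)
value = (-1) ** s * (1 + f / 2 ** 23) * 2 ** (e - 127)
print(s, e - 127, value)
```

Running this shows sign 0, actual exponent 1, and a reassembled value that matches 3.14159 to single-precision accuracy.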

As good references, read What Every Computer Scientist Should Know About Floating-Point Arithmetic and the Wikipedia article on floating point.

Rahul Tripathi
  • This strongly depends on architecture as well as bit-ness of a system. – fuesika Aug 29 '14 at 14:02
  • Actually, for single precision IEEE 754 format, mantissa is 24 bits of which the highest bit is not stored, and the exponent is 8 bits. But that is just single precision, and there are other formats. – gnasher729 Aug 29 '14 at 14:04
  • @pyStarter: The floating-point format doesn't depend on the "bit-ness" of a system at all. For example the very first Macs with a 16/32 bit processor and the latest ones with 64 bit processor support the same floating-point formats. – gnasher729 Aug 29 '14 at 14:06
  • @gnasher729 I think we have a little misunderstanding here. How would one store double precision by means of 32 bits? Of course the same standard may apply, but the actual number of bits will be different. – fuesika Aug 29 '14 at 14:08
  • NO, *What Every Computer Scientist Should Know About Floating-Point Arithmetic* is a terrible reference for a beginner. It's aimed at designers of f-p arithmetic systems and circuits, not users. Big down vote from me. – High Performance Mark Aug 29 '14 at 14:44
  • @HighPerformanceMark:- There is one more reference, the floating point wiki page. And this is the first time I have heard someone say that *What every computer...* is a terrible reference :) – Rahul Tripathi Aug 29 '14 at 14:47
  • You didn't read @tmyklebu's comment on the question, above, did you! – High Performance Mark Aug 29 '14 at 14:49
  • @HighPerformanceMark:- I did, but as I said, I have linked the wiki page as well. Since that was a famous link, I added it too. I think you are being harsh on me! :( – Rahul Tripathi Aug 29 '14 at 14:50
  • I think some of the comments here treat "What Every Computer Scientist..." too narrowly. Its target audience is "every computer scientist". Computer scientists presumably not only have deep understanding of data representation in general, but also should be trying to understand, not just blindly use, floating point. That audience is far wider than just people building f-p arithmetic systems and circuits. For example, its discussion of rounding error, and rounding error in IEEE 754, is useful for anyone using floating point who has the mathematical background to understand it. – Patricia Shanahan Aug 29 '14 at 15:17
  • @PatriciaShanahan:- Yes, that's correct. And that's why I said that people are being harsh on me and giving me downvotes (*now I can't even delete my answer, as it is accepted*). – Rahul Tripathi Aug 29 '14 at 15:18
  • I think it is silly to downvote an answer just because it recommends "What Every...". It is an almost standard recommendation for this tag, for everyone who seems to have problems understanding floating point. – Rudy Velthuis Aug 29 '14 at 22:35