4

I have been searching about this for a long time, but I am not able to understand what this question means.

Question:

Write a program in any language to determine how your computer handles graceful 
underflow.

I understand that an overflow condition is something like this: if an integer can store a maximum value of x and we assign it the value x+1, the value wraps around to the lowest value the integer can hold. I understand that underflow is just the reverse.

How does it stand from a high-performance scientific computing / linear algebra point of view?

I have read this link, but I think it's the same underflow/overflow stuff that I mentioned above. What does "graceful underflow" stand for?

  • Underflow happens when a floating-point number becomes too small, meaning the negative exponent becomes too large. To find out how your system handles it, divide the smallest number by 2. Still stumped on the graceful part. – King-Ink Nov 06 '14 at 04:28
  • 2
    Try http://www.cs.rice.edu/~taha/teaching/05F/210/Labs/Lab07/gradualUnderflow.html – StoneBird Nov 06 '14 at 04:33
  • And this http://stackoverflow.com/questions/8111307/gradual-underflow-and-denormalized-numbers-in-ieee – StoneBird Nov 06 '14 at 04:34
  • Could you give me a program that demonstrates the same? – SeasonalShot Nov 06 '14 at 04:37
  • @King-Ink Yes, it is a known fact. – SeasonalShot Nov 06 '14 at 04:39
  • @BoyLittUp Just write a loop dividing down to your underflow boundary, then multiply back to see the difference and the precision lost. – StoneBird Nov 06 '14 at 04:45
  • @StoneBird Okay, but can you give an example as such? – SeasonalShot Nov 06 '14 at 04:50
  • @BoyLittUp what do you mean? I think the idea is pretty clear...You want me to write up the code? Or you want me to explain "How does it stand from High performance scientific computing / Linear algebra point of view"? – StoneBird Nov 06 '14 at 04:55
  • I know it has to do with bit manipulation if we are taking C as our language, or try using sys.float_info in the case of Python. However, I am looking for a code snippet that does the same. – SeasonalShot Nov 06 '14 at 04:58
  • @StoneBird The first link that you posted. Actually, that link was deemed useless; see http://stackoverflow.com/questions/26642756/representation-of-a-gradual-underflow-program – SeasonalShot Nov 06 '14 at 05:00
  • @BoyLittUp as mentioned in that link, the software doesn't handle underflow. The hardware does. All you can do is to run some numerically unstable algorithm such that you observe a loss of precision... – StoneBird Nov 06 '14 at 05:06

1 Answer


Okay, the link posted by @StoneBird in the comments was particularly helpful. Here I have created a program in C that demonstrates the same.

#include <stdio.h>
#include <string.h>
#include <math.h>

int main(void)
{
    unsigned int bits, s, e, m;
    float a = 1, temp = 0;
    float min = pow(2, -129);

    /* Halve until we cross the chosen boundary; temp keeps
       the last value that was still above it. */
    while (a > min) {
        temp = a;
        a = a / 2;
    }
    printf("Value=%e\n", temp);

    /* Copy the float's bit pattern into an integer. memcpy avoids the
       undefined behaviour of casting between incompatible pointer types. */
    memcpy(&bits, &temp, sizeof bits);
    s = bits >> 31;                  /* sign: bit 31 */
    e = (bits & 0x7f800000) >> 23;   /* exponent: bits 23-30 */
    m = bits & 0x007fffff;           /* mantissa: bits 0-22 */
    printf("sign = %x\n", s);
    printf("exponent = %x\n", e);
    printf("mantissa = %x\n", m);
    return 0;
}

Here the min variable is used to change the final number. I used min=pow(2,-129), pow(2,-128), and pow(2,-130) to compare the results, and saw the denormal numbers appear (exponent field of zero with a nonzero mantissa). This wiki page explains it all.
