I am currently trying to convert a decimal value into the IEEE 754 half-precision floating-point format, but I ran into some issues with the conversion. Currently this code only converts into the single-precision format.
If I am not mistaken, it converts decimal values into single-precision format by default. If so, which line of the code is doing that?
Otherwise, is there an alternative way to solve my issue?
#include <limits.h>
#include <iostream>
#include <math.h>
#include <bitset>
// Get 32-bit IEEE 754 format of the decimal value
std::string GetBinary32( float value )
{
    union
    {
        float input; // assumes sizeof(float) == sizeof(int)
        int output;
    } data;
    data.input = value;

    std::bitset<sizeof(float) * CHAR_BIT> bits(data.output);
    std::string mystring = bits.to_string<char,
                                          std::char_traits<char>,
                                          std::allocator<char> >();
    return mystring;
}
int main()
{
    // Convert 19.5 into IEEE 754 binary format..
    std::string str = GetBinary32( (float) 19.5 );
    std::cout << "Binary equivalent of 19.5:" << std::endl;
    std::cout << str << std::endl << std::endl;
    return 0;
}
The output of the above code is 01000001100111000000000000000000.
However, I want to convert it into a 16-bit (half-precision) format.
EDIT: This is NOT a duplicate, since my question is not about precision error.