
I am making a simple program that loads vertices and triangles (uints and floats) from a file.

They will be used in OpenGL, and I want them to be 16-bit (to conserve memory); however, I only know how to convert to 32-bit. I don't want to use assembly, because I want it to run on ARM as well.

So, is it possible to convert a string to a 16-bit int/float?

philsegeler

2 Answers


One possible approach would be something like this:

#include <cstdint>
#include <iostream>
#include <string>

int main() {
    std::string str1 = "345";
    std::string str2 = "3.45";

    // Parse to a regular int first, then narrow to 16 bits after a range check.
    int myInt(std::stoi(str1));
    uint16_t myInt16(0);
    if (myInt <= static_cast<int>(UINT16_MAX) && myInt >= 0) {
        myInt16 = static_cast<uint16_t>(myInt);
    }
    else {
        std::cout << "Error : Manage your error the way you want to\n";
    }

    // std::stof gives a 32-bit float; see the other answer for 16-bit (half) floats.
    float myFloat(std::stof(str2));
}
Actaea

For the vertex coordinates, you have a floating point number X and you need to convert it to one of the 16 bit alternatives in OpenGL: GL_SHORT or GL_UNSIGNED_SHORT or GL_HALF_FLOAT. First, you need to decide whether you want to use integers or floating point.

If you're going with integers, I recommend unsigned integers, so that zero maps to the minimal value and 65535 maps to the maximal value. With integers, you need to decide on the range of valid values for X.

Suppose you know that X is between Xmin and Xmax. Then, you can calculate a GL_UNSIGNED_SHORT-compatible representation by:

unsigned short convert_to_GL_UNSIGNED_SHORT(float x, float xmin, float xmax) {
  if (x <= xmin)
     return 0;
  else if (x >= xmax)
     return 65535;
  else
     return (unsigned short)((x - xmin) / (xmax - xmin) * 65535.0f + 0.5f);
}
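
For example, assuming the vertex coordinates are known to lie in [-1, 1] (a range chosen purely for illustration):

unsigned short q = convert_to_GL_UNSIGNED_SHORT(0.25f, -1.0f, 1.0f); // 0.25 maps to 40959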

If you go with half floats, I suggest you look at how 16-bit floats are represented and at GL_HALF_FLOAT.
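
As a rough illustration, the GL_HALF_FLOAT bit layout can be produced with plain bit manipulation. The sketch below assumes IEEE 754 floats, truncates the mantissa, and flushes values too small for half precision to zero (no NaN or infinity handling), so a tested library routine is preferable in production code:

#include <cstdint>
#include <cstring>

// Minimal sketch: convert a 32-bit float to a 16-bit half float (GL_HALF_FLOAT layout).
uint16_t float_to_half(float f) {
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);                          // reinterpret the float's raw bits
    uint16_t sign = (uint16_t)((bits >> 16) & 0x8000u);           // keep the sign bit
    int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;   // rebias exponent (8 bits -> 5 bits)
    uint32_t mant = bits & 0x007FFFFFu;                           // 23-bit mantissa

    if (exp <= 0)  return sign;                          // too small for a normal half: flush to +/-0
    if (exp >= 31) return (uint16_t)(sign | 0x7C00u);    // too large: clamp to +/-infinity
    return (uint16_t)(sign | (exp << 10) | (mant >> 13)); // pack sign, exponent, truncated mantissa
}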

For the face indices, you have unsigned 32-bit integers, right? If they are all below 65536, you can easily convert them to 16-bit unsigned shorts by

unsigned short i16 = (unsigned short)i32;
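
If you want to narrow a whole index buffer while checking that every index actually fits, a sketch along these lines would work (the function name and the use of std::vector are just for illustration):

#include <cstdint>
#include <vector>

// Sketch: narrow 32-bit indices to 16 bits, checking that every index fits.
// Returns false if any index is 65536 or larger ('out' may be partially filled).
bool narrow_indices(const std::vector<uint32_t>& in, std::vector<uint16_t>& out) {
    out.clear();
    out.reserve(in.size());
    for (uint32_t i32 : in) {
        if (i32 > 65535u)
            return false;                          // does not fit in GL_UNSIGNED_SHORT
        out.push_back(static_cast<uint16_t>(i32));
    }
    return true;
}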
Adi Levin