
I know hexadecimal numbers, but I can't seem to understand how they are used to create a bitmap image or fonts. I studied the example at http://www.glprogramming.com/red/chapter08.html which shows how to create an F. What do the hexadecimal numbers correspond to? For example, what part of the bitmap image does 0xff,0xc0 cover? I thought they gave information about the colour of a pixel.

user3124361
2 Answers


Hexadecimal numbers are somewhat misleading, because bitmaps are, well... bits.

0xff is 1111 1111 and 0xc0 is 1100 0000.

Put those together and you have 1111 1111 1100 0000. If you repeat the process for each row in your bitmap, you get the following:

GLubyte rasters[24] = {
 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00,
 0xff, 0x00, 0xff, 0x00, 0xc0, 0x00, 0xc0, 0x00, 0xc0, 0x00,
 0xff, 0xc0, 0xff, 0xc0
};

  // Keep in mind, the origin of your image is the **bottom-left**

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

0xff,0xc0   1111111111000000     1111111111
0xff,0xc0   1111111111000000     1111111111
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11
0xff,0x00   1111111100000000     11111111   // The 0s make it hard to read, so I ...
0xff,0x00   1111111100000000     11111111   //   removed them on the right-hand side.
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11
0xc0,0x00   1100000000000000     11

This should look pretty familiar ;)


Regarding how this all translates into actual color: OpenGL will replace any part of your bitmap that has a 1 in it with the current raster color (e.g. glColor3f (0.0f, 1.0f, 0.0f) will produce a green F). 0 bits are simply discarded when you call glBitmap (...).

Andon M. Coleman
  • Thanks for the reply. The thing that confuses me is, when the hexadecimal numbers are input using the array `rasters`, how does it know that only 16 bits of data are used per line? It could have used 0xff,0xc0,0xff,0xc0, making 32 bits (a multiple of 8 bits) per line, couldn't it? And if the size of a single `F` is restricted to 20 bits, couldn't it cover the whole 20 bits while the remaining bits are discarded as garbage? – user3124361 Jun 17 '14 at 03:43
  • Yes, if you notice in the code for that example it does: `glPixelStorei (GL_UNPACK_ALIGNMENT, 1)`. That means that each row begins on a 1-byte boundary. By the way, OpenGL defaults to a 4-byte alignment (each row would begin on a 4-byte (32-bit) boundary without that call). What this means is that after all 10 bits for your row are fetched from memory, OpenGL advances 6 bits (up to the next byte) before reading the next row. – Andon M. Coleman Jun 17 '14 at 03:48
  • As such, `0xff,0xff` (instead of `0xff,0xc0`) would actually produce the same image (the last 6-bits are skipped)... but that would be more confusing than helpful in the long-run :P I may need to take a step backwards here and explain that the call to `glBitmap (...)` in that code describes a row as having 10 pixels (10-bit because width=10) and that there are a total of 12 rows (height=12). If you understand that, everything else I said might make more sense. – Andon M. Coleman Jun 17 '14 at 03:55
  • It's not related to this topic, but could you please answer an OpenGL question asked in this link: http://stackoverflow.com/questions/24329285/how-to-orient-the-faces-of-an-icosahedron-in-opengl – user3124361 Jun 20 '14 at 15:22

One hex digit represents four bits of information: it can have values ranging from zero (binary 0000) to fifteen (binary 1111). So four hex digits represent sixteen bits of information.

Each row of the letter "F" bitmap on that page is sixteen pixels wide, so it can be represented by a sixteen-bit number whose bits say whether the corresponding pixels should be black or white. The hex digits are just a way of writing those numbers whose bits define the picture.

Note that this is a black-and-white bitmap: each pixel is described by only a single bit, so it can only be black or white. If you want shades of grey (e.g. for antialiasing), you need multiple bits of information for each pixel; typically eight bits (one byte) per pixel, for 256 possible shades.

Grayscale and color images are easier to think about because each pixel is represented by one or more entire bytes, so all the bits in a given byte correspond to the same pixel and you don't really need to think about the individual bits at all. But in a black-and-white image, a single byte can describe more than one pixel, so you have to think about the individual bits within the byte.

Wyzard
  • OpenGL bitmaps (as defined using OpenGL's terminology) are always monochrome, by the way. Not necessarily black-and-white, any place there is a **1** bit will be replaced with the *current* raster color; **0s** will not be written to the framebuffer at all. Other applications and libraries may define them as referencing larger color palettes, but OpenGL never will. In OpenGL a bitmap is binary. – Andon M. Coleman Jun 17 '14 at 02:50