
Possible Duplicate: [Floating Point to Binary Value(C++)](http://stackoverflow.com/questions/474007/floating-point-to-binary-valuec)

I'm currently working on a genetic algorithm for my thesis, trying to optimize a problem whose genome consists of three doubles. For the breeding step I would like to use a binary representation of these doubles, which means I first have to convert them to their binary representation. I've searched for this, but can't find a clear solution, unfortunately.

How can I do this? Is there a library function for it, as there is in Java? Any help is greatly appreciated.

Joep
  • possible duplicate of [Floating Point to Binary Value(C++)](http://stackoverflow.com/questions/474007/floating-point-to-binary-valuec), but see also http://stackoverflow.com/questions/4328342/float-bits-and-strict-aliasing and linked questions – Mat Jan 01 '13 at 11:14
  • 1
    Wouldn't working with `__int64`s be easier? – irrelephant Jan 01 '13 at 11:16

7 Answers


What about:

double d = 1234;
unsigned char *b = (unsigned char *)&d;

Assuming a double consists of 8 bytes, you can use b[0] ... b[7] to access its individual bytes.

Another possibility is to view the same eight bytes as a 64-bit integer. Note that a direct pointer cast such as `*(long long *)&d` breaks the strict aliasing rules, so copy the bytes with memcpy (from <cstring>) instead:

long long x;
memcpy(&x, &d, sizeof x); // reinterpret the 8 bytes of d as a 64-bit integer
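
If the goal is just to display the bit pattern, a minimal sketch (assuming a 64-bit double) is to copy the bytes into an integer and let std::bitset do the formatting:

#include <bitset>
#include <cstring>
#include <iostream>

int main() {
    double d = 1234;
    unsigned long long x;
    std::memcpy(&x, &d, sizeof x);           // copy the raw bytes into an integer
    std::cout << std::bitset<64>(x) << '\n'; // print all 64 bits, MSB first
}
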
CubeSchrauber

Why do you want to use a binary representation? Just because something is more popular does not mean it is the solution to your specific problem.

There is a well-known genome representation, called real-coded (or real-valued), that you can use to solve your problem without running into several issues of the binary representation, such as Hamming cliffs and the fact that flipping different bits changes the value by vastly different amounts.

Please note that I am not talking about cutting-edge, experimental stuff. This 1991 paper already describes the issue I am talking about. If you speak Spanish or Portuguese, I could point you to my own book on GAs, but there are excellent references in English, such as Melanie Mitchell's or Eiben's books, that describe this issue in more depth.

The important thing to keep in mind is that you need to tailor the genetic algorithm to your problem, not modify your needs in order to fit a specific type of GA.
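
For illustration, here is a minimal sketch of what a real-coded mutation could look like for the three-double genome from the question (the Gaussian perturbation and the sigma parameter are my assumptions, not something prescribed by the references above):

#include <array>
#include <random>

using Genome = std::array<double, 3>; // the three doubles of one solution

// Perturb each gene with Gaussian noise; sigma controls the mutation strength.
Genome mutate(Genome g, double sigma, std::mt19937 &rng) {
    std::normal_distribution<double> noise(0.0, sigma);
    for (double &gene : g)
        gene += noise(rng);
    return g;
}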

rlinden

I wouldn't convert it into an array. Genetic algorithms tend to be performance-sensitive, so if I were you I would use an integer type (as irrelephant suggested) and do the mutation and crossover with integer operations.

If you don't, you're always converting back and forth, and for crossover you have to iterate over all 64 elements.

Here's an example of a one-point crossover:

__int64 crossover(__int64 a, __int64 b, int x) {
  // assumes 0 < x < 64, so neither mask shift is by the full width
  __int64 mask1 = (__int64)(~0ULL << (64 - x)); // leftmost x bits set
  __int64 mask2 = ~mask1;                       // rightmost 64-x bits set

  return (a & mask1) | (b & mask2);             // splice the genomes at bit x
}

And for selection, you can convert it back to a double. Note that a plain cast would convert the numeric value rather than reinterpret the bits, so use memcpy for that step.
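
A minimal round-trip sketch (assuming a 64-bit double; memcpy is the portable way to reinterpret the bits, and the helper names are mine):

#include <cstring>

long long to_bits(double d) {   // view the 8 bytes of d as an integer genome
    long long x;
    std::memcpy(&x, &d, sizeof x);
    return x;
}

double from_bits(long long x) { // turn the genome back into a double for fitness
    double d;
    std::memcpy(&d, &x, sizeof d);
    return d;
}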

duedl0r

You could do it like this:

// Assuming a double is 64 bits (8 bytes)

double         d = 42.0;                  // just a random double
unsigned char* bits = (unsigned char*)&d; // access my double byte-by-byte
int            array[64];                 // result

for (int i = 0, k = 63; i < 8; ++i)       // for each byte of my double
    for (int j = 0; j < 8; ++j, --k)      // for each bit of that byte
        array[k] = (bits[i] >> j) & 1;    // is the jth bit of this byte set?

Note that the order of the bits in array depends on the byte order (endianness) of your machine.

Good luck

cmc

Either start with a binary representation of the genome and use one-point or two-point crossover operators, or, if you want a real encoding for your GA, use the simulated binary crossover (SBX) operator. Most modern GA implementations use a real-coded representation with a corresponding crossover and mutation operator.
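
For reference, a compact sketch of SBX on a single pair of genes (the distribution index eta and the omitted bounds handling are simplifications on my part):

#include <cmath>
#include <random>
#include <utility>

// Simulated binary crossover: larger eta keeps children closer to the parents.
std::pair<double, double> sbx(double p1, double p2, double eta, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u = uni(rng);
    double beta = (u <= 0.5)
        ? std::pow(2.0 * u, 1.0 / (eta + 1.0))
        : std::pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta + 1.0));
    double c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2);
    double c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2);
    return {c1, c2};
}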

awhan

You could use an int (or variant thereof).

The trick is to encode a float of 12.34 as an int of 1234.

Therefore you just need to divide by 100 (as a floating-point division) in the fitness function, and do all your mutation & crossover on an integer; see the sketch after the gotchas below.

Gotchas:

  • Beware the loss of precision if you actually need the nth bit.
  • Beware the sign bit.
  • Beware the difference in range between floats & ints.
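
A minimal sketch of that encoding (the factor of 100 matches the two decimal places in the example above):

int encoded = 1234;             // stands for 12.34, i.e. value * 100
// ... mutation & crossover operate on the int ...
double value = encoded / 100.0; // decode inside the fitness function
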
NWS