0

If I define a structure x1 and then declare a member unsigned int xyz:64, that will create a 64-bit integer, right?

Now, if I want all ones in the 64 bits, will the following work:

x1.xyz = 1

Will this populate the variable with 64 ones? If there is any other way to define such a variable and assign the value to it, please suggest it.

Please Help

Thanks in advance.

Jagmag
user537670

7 Answers

4

That will fill it with the bit pattern 000...001. The value that results in a 111...111 bit pattern is -1.

Ignacio Vazquez-Abrams
  • -1 is only 111...1111 if your machine uses two's complement numbers – rve Dec 10 '10 at 10:27
  • @rve: While that is true, I doubt many people here will have to program for one that doesn't. – Ignacio Vazquez-Abrams Dec 10 '10 at 10:31
  • @rve: It doesn't matter whether the machine uses two complements numbers for signed numbers, since `.xyz` is unsigned. Assigning `-1` to an unsigned int gives `UINT_MAX`. See http://stackoverflow.com/questions/809227/is-it-safe-to-use-1-to-set-all-bits-to-true – MSalters Dec 10 '10 at 13:15
3

Doing it with a NOT operation (~) is probably the option with the least typing:

unsigned int x = ~0;

However, I'm not entirely sure there is a way to ensure that this fills 64 bits without doing some kind of typecast, like this:

__int64 y = ~(__int64)0;
Rune Aamodt
  • Bitwise negation as you propose is not the right strategy to obtain an all ones vector. `0` is of type `signed int` and so is `~0`. Depending on the sign representation `~0` can be a so-called trap representation. The right initialization is with `-1`. This is always guaranteed to give you the max value of any unsigned type. The arithmetic of unsigned types is made like this. – Jens Gustedt Dec 11 '10 at 07:53
  • That seems about as likely as not having two's complement to me. And you could always just type out ~ 0U (with the U extension) to be sure right? Another way would also be using numerical_limits, that should be 100% safe either way. – Rune Aamodt Dec 11 '10 at 10:46
  • ~0U would probably result in a value 0000....1111 (32 zeroes, 32 ones) depending on the sizes of `unsigned int` and `unsigned long long`. You need `0ULL`. – MSalters Dec 14 '10 at 09:25
2

Assigning 1 to x1.xyz will result in 63 bits of 0 and 1 bit of 1, which in hex is 0x0000000000000001. What you should do instead is this: x1.xyz = 0xFFFFFFFFFFFFFFFF

dsafcdsge4rfdse
1

Trying to assign -1 to an unsigned integer will result in compiler warnings at best. You'll have to cast the -1 to the type you are trying to assign to.

The alternative is to explicitly specify the value as 0xFFFFFFFFFFFFFFFFU.

AlastairG
1

You should use the standard type for a 64-bit unsigned integer, which is uint64_t, i.e.

#include <stdint.h>

struct MyStruct
{
    uint64_t xyz;
};

// in the code somewhere

struct MyStruct x1;
x1.xyz = ~0;
// or
x1.xyz = 0xFFFFFFFFFFFFFFFF;
JeremyP
  • If i am not always interested in 64 bits, then what to do suppose my structure has an element which has only 17 bits and i need to assign all the 17 bits as 1? Thanks for your response. – user537670 Dec 11 '10 at 02:56
  • @user537670: With most compilers, either of the two ideas will still work. The top 15 bits will simply be discarded. – JeremyP Dec 11 '10 at 16:56
0

No.

This will assign one to the variable; in binary that would be:

0000000000000000000000000000000000000000000000000000000000000001

To assign all ones you'll have to use the maximum value for the unsigned type (which would be unsigned __int64 in C++).

I think that max value is

18,446,744,073,709,551,615

According to this: http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.80).aspx

willvv
  • And everyone reading that will immediately understand that that long long number isn't a magic number but in fact sets all the bits to 1. Using `(uint64_t)-1` or `0xFFFFFFFFFFFFFFFFU` is more readable. There is probably a suitable #define somewhere in some header file or other. – AlastairG Dec 10 '10 at 10:16
  • And anyone asking this will know what hexadecimal numbers are and that a hex F means 15 or 11111111 in binary... – willvv Dec 10 '10 at 10:20
  • I would hope so and if not they can ask. Certainly, the proportion of the programming population who know the significance of 0xFFFFFFFFFFFFFFFF is greater than that who know the significance of 18,446,744,072,709,551,615. Also the decimal version is much much easier to type wrongly and much much harder to spot that it has been typed wrongly. – AlastairG Dec 10 '10 at 11:10
0

It is implementation-defined whether an int used as a bitfield is signed or unsigned, so first of all you should use signed or unsigned to specify this explicitly.

But then, if you are always interested in exactly 64 bits (which may or may not be supported for bitfields by your compiler), you should definitely just use the types uint64_t (or int64_t) from <stdint.h>. Using a bitfield makes little sense to me, then.

If you don't have that, make a typedef for the appropriate 64-bit type on your platform that you can later easily replace or put into #ifdef clauses.

Jens Gustedt