I have a hex pattern stored in a variable; how do I know the size of the hex pattern? E.g.:

#define MY_PATTERN 0xFFFF

Now I want to know the size of MY_PATTERN, to use somewhere in my code.

sizeof (MY_PATTERN)

This gives me the warning "integer conversion resulted in truncation".

How can I fix this? What is the correct way to write it?

The pattern can increase or decrease in size so I can't hard code it.

L Lawliet
    So what *is* the "size" of `0xFFFF`? – Jon May 23 '14 at 20:40
  • I suspect the warning is coming from how you're using the result of the `sizeof()` expression, not the expression itself. – Cameron May 23 '14 at 20:41
  • What is your allowed range for the pattern? – David Zech May 23 '14 at 20:41
  • I can calculate the size man!!! Please read it completely. I might change the defined pattern later and so if I use something like 2 bytes and later MY_PATTERN variable changes to 0xFFFFFFFF. What to do then ? – L Lawliet May 23 '14 at 20:43
  • try this, no warning: `#include <stdio.h>` `int main () { #define MY_PATTERN 0xFFFF size_t x=sizeof (MY_PATTERN); printf ("%zu\n", x); }` – pbhd May 23 '14 at 20:43
  • Hey, no set limit for pattern at least now so want to avoid hard coding an upper bound – L Lawliet May 23 '14 at 20:44
  • @Siddharth: I 'm not sure you are getting the point. `0xFFFF` is an integer literal; *it already has* an upper bound. – Jon May 23 '14 at 20:46
  • @Jon: [It's `4`](http://ideone.com/UYNhwo) :-) (Depends on the compiler, I imagine.) – Cameron May 23 '14 at 20:46
  • You should get familiar with the [sizes of various types](http://en.cppreference.com/w/cpp/language/types) in C++. – Captain Obvlious May 23 '14 at 20:47
  • Can you tell which compiler etc you are using? Maybe the prog which brings up the warning.... – pbhd May 23 '14 at 20:53
  • are you wanting to find the appropriate type for the limit? e.g. 0xFF would be 1 byte, 0xFFFF would be 2 bytes? –  May 23 '14 at 21:27

3 Answers

This will solve your problem:

#define MY_PATTERN 0xFFFF

struct TypeInfo
{
    template<typename T>
    static size_t SizeOfType(T) { return sizeof(T); }
};

int main()
{
    size_t size_of_type = TypeInfo::SizeOfType(MY_PATTERN);
}

as pointed out by Nighthawk441 you can just do:

sizeof(MY_PATTERN);

Just make sure to use a size_t wherever you are getting a warning and that should solve your problem.

  • 2
    Why not just assign size_of_type = sizeof(MY_PATTERN) right away? – David Zech May 23 '14 at 21:08
  • What is the purpose of the struct? What is the purpose of the template? This is way to complicated for what it does, and this will always return `sizeof(int)` for all integer values, so is probably not what the op wants? – Johannes Overmann May 23 '14 at 21:11
  • This is why I dont like using #define and why I ensure I use size_t with sizeof. –  May 23 '14 at 21:15
Don't do it.

There's no such thing in C++ as a "hex pattern". What you actually use is an integer literal. See paragraph "The type of the literal". Thus, sizeof (0xffff) is equal to sizeof(int). And the bad thing is: the exact size may vary.

From the design point of view, I can't really think of a situation where such a solution is acceptable. You're not even deriving a type from a literal value, which would be suspicious as well, but at least a typesafe solution. Sizes of values are mostly used in operations working with memory buffers directly, like memcpy() or fwrite(). Sizes defined in such indirect ways lead to a very brittle binary interface and maintenance difficulties. What if you compile a program on both x86 and Motorola 68000 machines and want them to interoperate via a network protocol, or want to write some files on the first machine, and read them on another? sizeof(int) is 4 for the first and 2 for the second. It will break.

Instead, explicitly use the exactly sized types, like int8_t, uint32_t, etc. They're defined in the <cstdint> header.

vines

You could explicitly typedef various types to hold hex numbers with restricted sizes such that:

typedef unsigned char one_byte_hex;
typedef unsigned short two_byte_hex;
typedef unsigned int four_byte_hex;

one_byte_hex pattern = 0xFF;
two_byte_hex bigger_pattern = 0xFFFF;
four_byte_hex big_pattern = 0xFFFFFFFF;

//sizeof(pattern) == 1
//sizeof(bigger_pattern) == 2
//sizeof(big_pattern) == 4

four_byte_hex new_pattern = static_cast<four_byte_hex>(pattern);
//sizeof(new_pattern) == 4

It would be easier to just treat all hex numbers as unsigned ints regardless of pattern used though.

Alternatively, you could put together a function which checks how many times it can shift the bits of the pattern until it's 0.

#include <cstddef> // for size_t

size_t sizeof_pattern(unsigned int pattern)
{
    size_t bits = 0;
    size_t bytes = 0;
    unsigned int tmp = pattern;

    while(tmp >> 1 != 0){
        bits++;
        tmp = tmp >> 1;
    }
    bytes = (bits + 1) / 8; //add 1 to bits to shift range from 0-31 to 1-32 so we can divide properly. 8 bits per byte.
    if((bits + 1) % 8 != 0){
        bytes++; //requires one more byte to store value since we have remaining bits.
    }
    return bytes;
}
Brandon Haston