I have an unsigned int array called hex_values which contains numbers. For example, the size of the array is 5 and it contains these numbers:
830C 830C 830C 830C 830C
What I need to do is to show them as big-endian values:
0C83 0C83 0C83 0C83 0C83
To convert 830C to 0C83 I used this function:
unsigned int little_big(unsigned int little)
{
    /* reverse the byte order of a 32-bit value */
    return ((little & 0xff) << 24) +
           ((little & 0xff00) << 8) +
           ((little & 0xff0000) >> 8) +
           ((little >> 24) & 0xff);
}
The problem is that if the first digit of the number is a zero, then printf will not show that zero. Is there any way to fix it?
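Here is a minimal example of what I mean, assuming unsigned int is 32 bits wide on my machine (so the two interesting bytes end up in the top half of the swapped value and I shift them back down before printing):

#include <stdio.h>

unsigned int little_big(unsigned int little);   /* the function above */

int main(void)
{
    /* 0x0000830C with all four bytes reversed is 0x0C830000 */
    unsigned int swapped = little_big(0x830C);

    /* move the two interesting bytes back into the low half and print them */
    printf("%X\n", swapped >> 16);   /* prints "C83" - the leading zero is missing */
    return 0;
}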
Thanks for any help.
Edit:
OK, I now have another problem. For example, I have an array of size 5 which contains 5 values, let's say 830C 830C 830C 830C 830C. I have 2 functions (the first is the one mentioned in my first post). The second function has this definition:
void show_big(unsigned int * pt, unsigned int numbers)
{
    unsigned int hex_b, hex_l;

    printf("\n");
    for (unsigned int i = 0; i < numbers; i++)
    {
        hex_l = *(pt + i);           /* read the i-th value from the array */
        hex_b = little_big(hex_l);   /* swap its byte order */
        printf("%04X", (hex_b >> BITS));
    }
}
I am calling this function with these parameters: show_big(hex_values, numbers); where hex_values is the name of the array with 5 numbers and numbers is just a number representing how many values the array has. The problem is that if the array contains more than one value, then the show_big function won't show those numbers in big endian correctly (0C83 0C83 ...). I have really no idea why. If it is necessary I can paste the full code here. Thanks for any help.
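For reference, this is roughly how everything fits together on my side; the value of BITS (16) and the way hex_values gets filled are assumptions here, since I have not pasted the full code:

#include <stdio.h>

#define BITS 16   /* assumed: shifts the swapped bytes from the top half back down */

unsigned int little_big(unsigned int little);            /* from my first post */
void show_big(unsigned int *pt, unsigned int numbers);   /* shown above */

int main(void)
{
    /* assumed test data: five copies of 0x830C, as in the example */
    unsigned int hex_values[5] = { 0x830C, 0x830C, 0x830C, 0x830C, 0x830C };
    unsigned int numbers = 5;

    show_big(hex_values, numbers);   /* I expect 0C83 printed five times */
    printf("\n");
    return 0;
}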