
I am trying to load a 32-bit hexadecimal value into a char array.

#define NUC 0xA8051701

unsigned char debug_msg[100];
sprintf (debug_msg, "%08x", NUC);

But it is loading only "A805", and as ASCII characters instead of hexadecimal. Can anyone suggest what the problem could be?

What I'm actually looking for is:

debug_msg[0]=0xA8
debug_msg[1]=0x05
debug_msg[2]=0x17
debug_msg[3]=0x01

Instead of the desired values, it is loading as below:

debug_msg[0]=0x30
debug_msg[1]=0x30
debug_msg[2]=0x30
debug_msg[3]=0x30
debug_msg[4]=0x61
debug_msg[5]=0x38
debug_msg[6]=0x30
debug_msg[7]=0x35

Effectively it is loading 0x0000a805, and in ASCII at that.

Vinod kumar
  • `sprintf` takes a `char *`, e.g. `char debug_msg[100];` – David C. Rankin Mar 24 '17 at 05:38
  • 3
    What is the size of `int` on your machine? Is it a big-endian architecture? What do you actually want as the result? (I guess that `sizeof(int) == 2` — 16 bits, and it is big-endian, and you want `debug_msg[0] = 0xA8; debug_msg[1] = 0x05; debug_msg[2] = 0x17; debug_msg[3] = 0x01;` — but that's guesswork. You should certainly explain what you want and it might help to identify the processor type and so on.) – Jonathan Leffler Mar 24 '17 at 05:39
  • My platform is an 8-bit micro-controller. Yes, I want it as you mentioned: debug_msg[0] = 0xA8; debug_msg[1] = 0x05; debug_msg[2] = 0x17; debug_msg[3] = 0x01 – Vinod kumar Mar 24 '17 at 05:45
  • @Vinodkumar what is the value of `sizeof(int)`? – jjm Mar 24 '17 at 05:47
  • int size is 32-bit – Vinod kumar Mar 24 '17 at 05:48
  • 1
    "But it is loading only "A805" that to as ascii character instead of hexadecimal." . How did u confirmed that? – jjm Mar 24 '17 at 05:55
  • That's curious. If `int` were 16 bits, then your constant would be treated as a `long`, and when passed to `printf()`, the `%x` would read the first two bytes — 0xA805 — and treat that as a (16-bit) integer. Hence my suggestions. On the face of it, you need: `debug_msg[0] = (NUC >> 24) & 0xFF; debug_msg[1] = (NUC >> 16) & 0xFF; debug_msg[2] = (NUC >> 8) & 0xFF; debug_msg[3] = (NUC >> 0) & 0xFF;` (where the `>> 0` is optional but makes it look neater — if the compiler optimizes to omit any shift for that). – Jonathan Leffler Mar 24 '17 at 05:55
  • @JonathanLeffler he said `int` is 32... – jjm Mar 24 '17 at 05:56
  • @jjm: yes; I know. I don't have a good explanation for the observed behaviour — I merely explained why I hypothesized as I did. – Jonathan Leffler Mar 24 '17 at 05:57
  • @JonathanLeffler Hmm... if `int` is 32, his code should work fine... I doubt his output. He might have confirmed it wrongly – jjm Mar 24 '17 at 06:00
  • @4386427: why the first 2 bytes? Because `%x` is for formatting an `int`, and only 2 bytes worth of the data got formatted, it seems, so I guessed (apparently incorrectly) that `int` was just 2 bytes long — a 16-bit quantity. But guesses are just that — guesses. Apparently I guessed wrong, so I have no clue why you got only 4 bytes of output and not 8. Also, `%x` would produce `a805` and not `A805`. There's some more cause for confusion. How are you determining what is in `debug_msg`? – Jonathan Leffler Mar 24 '17 at 06:00
  • 1
    @Vinodkumar Please let me know the output of `printf("%s",debug_msg);` – jjm Mar 24 '17 at 06:02
  • @4386427: That's where 'big-endian' vs 'little-endian' comes into play. What you get on a little-endian system would be different from what you get on a big-endian system. Do you know whether ideone.com uses Intel or some other chips? My guess is Intel, and hence little-endian. Also, a 28 hex-digit number is completely undefined behaviour on most systems. – Jonathan Leffler Mar 24 '17 at 06:04
  • @4386427 It's surely system dependent; can't draw a conclusion from it. It will vary based on endianness. – jjm Mar 24 '17 at 06:05
  • I have an 8051 microcontroller emulator where I can watch the values of all the variables being used. – Vinod kumar Mar 24 '17 at 06:06
  • @Vinodkumar I just searched the internet; most compilers use a 16-bit `int` for the 8051 – jjm Mar 24 '17 at 06:15
  • Keil uses 16 bit int for 8051 - http://www.keil.com/support/man/docs/c51/c51_le_datatypes.htm – Support Ukraine Mar 24 '17 at 06:16
  • please check my update post for clarity – Vinod kumar Mar 24 '17 at 06:20
  • Have you tried: `sprintf (debug_msg, "%08lx", NUC);` or `sprintf (debug_msg, "%08llx", NUC);` – Support Ukraine Mar 24 '17 at 06:23
  • Your use of `debug_msg[7] = 0xA8` in the desired output is puzzling. Given a 4-byte number in `NUC`, you somehow have bytes 0, 1 and 3 in positions 0, 1, and 7 in the desired output, leaving unspecified how byte 2 is spread across positions 2-6. – Jonathan Leffler Mar 24 '17 at 06:25
  • @4386427 I have tried `sprintf (debug_msg, "%08lx", NUC)`. Now it's loading the full 32-bit value, but still in ASCII format, not hex – Vinod kumar Mar 24 '17 at 06:26
  • @Vinodkumar - yes, it will be ASCII - that is what `printf` produces. If you want the hex numbers then you can't use `printf` directly. But you can convert the array after the `printf` – Support Ukraine Mar 24 '17 at 06:31
  • @4386427 Please check the updated post. My requirement is just loading the char array with different 32-bit hex values. I'm not sticking to sprintf; any function which does the task is acceptable. – Vinod kumar Mar 24 '17 at 06:34
  • The `sprintf()` function converts to text. It is the wrong tool for the job. To it, hex means ASCII digits 0-9 and letters A-F (or a-f). I showed how to do it with the shift and mask operations, where you probably don't absolutely need the masking operations. – Jonathan Leffler Mar 24 '17 at 06:36
  • @JonathanLeffler I can't use your method; I need to load debug_msg many times with different 32-bit values. I need a macro kind of thing. – Vinod kumar Mar 24 '17 at 06:39
  • @Vinodkumar [Please check this answer](http://stackoverflow.com/questions/42942697/convert-long-long-int-to-string-without-sprintf-using-functions/42943207#42943207). Here, instead of adding `+'0'` in the loop, you can use the string itself. There they used that for `long long`; you can use it for `long` also. – jjm Mar 24 '17 at 06:39
  • 1
    Replace NUC with a variable holding the value to convert. The whole thing macroizes easily. But you should have explained all this in the question without us having to dig it out tidbit by tidbit. – Jonathan Leffler Mar 24 '17 at 06:42
  • @JonathanLeffler Can you elaborate on what you are trying to explain? – Vinod kumar Mar 24 '17 at 06:46
  • Possible duplicate of [Converting an int into a 4 byte char array (C)](http://stackoverflow.com/questions/3784263/converting-an-int-into-a-4-byte-char-array-c) – Maximilian Köstler Mar 24 '17 at 06:56
  • If you didn't want ASCII then why are you using sprintf in the first place? You call a function that converts raw data to ASCII then ask us why the code is in ASCII format... what are you actually trying to do here? – Lundin Mar 24 '17 at 09:05

2 Answers


On the face of it, you need:

 debug_msg[0] = (NUC >> 24) & 0xFF;
 debug_msg[1] = (NUC >> 16) & 0xFF;
 debug_msg[2] = (NUC >>  8) & 0xFF;
 debug_msg[3] = (NUC >>  0) & 0xFF;

(where the >> 0 is optional but makes it look neater — if the compiler optimizes to omit any shift for that). If you want to handle different values in place of NUC, then:

unsigned long value = 0xA8051701;

debug_msg[0] = (value >> 24) & 0xFF;
debug_msg[1] = (value >> 16) & 0xFF;
debug_msg[2] = (value >>  8) & 0xFF;
debug_msg[3] = (value >>  0) & 0xFF;

Or in macro form:

#define MANGLE(value, debug_msg) \
    debug_msg[0] = (value >> 24) & 0xFF; \
    debug_msg[1] = (value >> 16) & 0xFF; \
    debug_msg[2] = (value >>  8) & 0xFF; \
    debug_msg[3] = (value >>  0) & 0xFF

used as:

MANGLE(0xA8051701, debug_msg)

or, if you want the values at arbitrary offsets in the array:

#define MANGLE(value, debug_msg, offset) \
    debug_msg[offset+0] = (value >> 24) & 0xFF; \
    debug_msg[offset+1] = (value >> 16) & 0xFF; \
    debug_msg[offset+2] = (value >>  8) & 0xFF; \
    debug_msg[offset+3] = (value >>  0) & 0xFF

Used as:

MANGLE(0xA8051701, debug_msg, 24);

There might be a need to wrap the body of the macro in a do { … } while (0) loop to make it work properly after an if statement, etc.

Or you could write an inline function to do the job. Or …

Jonathan Leffler
  • Excellent, it's working. But if I increase to 64-bit I could not get correct values, i.e. shifting from 24 to 32 to 40 to 48 – Vinod kumar Mar 24 '17 at 07:10
  • 1
    You're on an 8-bit micro. You'll need to read how the compiler handles 64-bit numbers, if it handles them at all. If it supports them, you can extend the macro to do more shifting, but you have to be careful about types and so on. – Jonathan Leffler Mar 24 '17 at 07:12

You can use a union here (note: C's keyword is lowercase `union`, and `U8`/`U32` must be typedefs for 8-bit and 32-bit unsigned types):

#define NUMBER_OF_BYTES_FOR_UINT    4

typedef unsigned char U8;
typedef unsigned long U32;   /* 32 bits on e.g. Keil C51 */

union Char_UInt
{
  U8  u8data[NUMBER_OF_BYTES_FOR_UINT];
  U32 u32data;
};

Now you can assign "NUC" (or any 32-bit number) to "u32data" and then read "u8data"; it will give you the byte-by-byte values.

This works because a union gives the character array and "u32data" the same memory: you write and read the same memory location, but through different interfaces.

Note that the endianness of the system plays a role here.

Swanand