
I need to convert a 256-bit integer, represented as an array of 32-bit integers, to a string in decimal representation.

#include <stdint.h>

typedef struct {
    uint32_t x[8];
} Uint256;

I can't add dependencies to the project so I cannot use additional libraries. Is there a way to achieve that efficiently?

  • The *simplest* approach is to iteratively divide by 10 and take the remainder each time. If you try that, is it fast *enough*? – Oliver Charlesworth Dec 05 '17 at 12:00
  • lookup table of 8 bit representation that returns the string format of the value – Elad Hazan Dec 05 '17 at 12:03
  • 2
    Also, to clarify, you want to get the **decimal representation of this 256-bit value**, is that correct? (As opposed to say, a hex representation, or the representation of the individual components, or something like that.) – Oliver Charlesworth Dec 05 '17 at 12:04
  • @OliverCharlesworth How do you divide such a number by 10 and take the remainder? – Gerhardh Dec 05 '17 at 12:09
  • @OliverCharlesworth Exactly, I'll edit to include this. Also, how can I divide a number represented like this? (I only implemented addition, multiplication and bitwise operations) – Isaac Monteiro Dec 05 '17 at 12:13
  • @Gerhardh - Think of it like how you probably did long division at school. Divide each digit (in this case, each 32-bit component) by 10, starting from the most-significant. The remainder is combined with the next digit (which in this case is relatively easy if you have 64-bit integers available), and so on. – Oliver Charlesworth Dec 05 '17 at 12:13
  • @OliverCharlesworth I would assume this is the main part of the question. ;) – Gerhardh Dec 05 '17 at 12:23
  • Have you got an answer? – Ruslan R. Laishev Mar 30 '22 at 18:28
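
The long-division approach Oliver describes in the comments can be sketched as follows. This is a minimal, dependency-free sketch, not a posted answer; it assumes `x[0]` is the least-significant word (the question doesn't specify limb order), and the helper names `div10`, `is_zero`, and `uint256_to_decimal` are mine:

```c
#include <stdint.h>

typedef struct {
    uint32_t x[8];   /* assumed little-endian: x[0] is the least-significant word */
} Uint256;

/* Divide *v by 10 in place and return the remainder.
 * Classic long division over the limbs, most-significant first:
 * the remainder from one limb is carried into the next via a
 * 64-bit intermediate, as suggested in the comments. */
static uint32_t div10(Uint256 *v)
{
    uint64_t rem = 0;
    for (int i = 7; i >= 0; i--) {
        uint64_t cur = (rem << 32) | v->x[i];
        v->x[i] = (uint32_t)(cur / 10);
        rem = cur % 10;
    }
    return (uint32_t)rem;
}

static int is_zero(const Uint256 *v)
{
    for (int i = 0; i < 8; i++)
        if (v->x[i] != 0)
            return 0;
    return 1;
}

/* Write the decimal representation of v into buf.
 * 2^256 has 78 decimal digits, so buf must hold at least 79 chars. */
void uint256_to_decimal(Uint256 v, char *buf)
{
    char tmp[80];
    int n = 0;

    /* Repeatedly divide by 10; each remainder is the next digit,
     * produced least-significant first. */
    do {
        tmp[n++] = (char)('0' + div10(&v));
    } while (!is_zero(&v));

    /* Reverse the digits into the output buffer. */
    for (int i = 0; i < n; i++)
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
}
```

Each call to `div10` costs eight 64-bit divisions, and up to 78 digits are produced, so the whole conversion is a few hundred divisions: cheap enough for most purposes without any extra libraries.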

0 Answers