
Perhaps this task is a bit more complicated than what I've written below, but the code that follows is my take on decimal to BCD. The task is to take in a decimal number, convert it to BCD and then to ASCII so that it can be displayed on a microcontroller. As far as I'm aware, the code works well enough for the basic conversion to BCD, but I'm stuck when it comes to converting this into ASCII. The overall output needs to be ASCII so that an incremented value can be displayed on an LCD.

My code so far:

int dec2bin(int a){              //Decimal to binary function
    int bin;
    int i = 1;
    while (a != 0){
        bin += (a % 2) * i;
        i *= 10;
        a /= 2;
    }
    return bin;
}

unsigned int ConverttoBCD(int val){

    unsigned int unit = 0;
    unsigned int ten = 0;
    unsigned int hundred = 0;

    hundred = (val/100);
    ten = ((val-hundred*100)/10);
    unit = (val-(hundred*100+ten*10));
    uint8_t ret1 = dec2bin(unit);
    uint8_t ret2 = dec2bin((ten)<<4);
    uint8_t ret3 = dec2bin((hundred)<<8);
    return(ret3+ret2+ret1);
}
  • Referring to your question, how do you think ASCII differs from decimal, binary, etc.? Why convert to BCD and then to ASCII, when you already have that value? – Weather Vane Jun 08 '17 at 21:00
  • I understand ASCII is simply 48 (dec) above a given digit, but I have no clue how to implement that here. I understand this is a huge rookie question, but I've been busy with this for hours with no luck. – J. Middleton Jun 08 '17 at 21:02
  • Can you please define what your input is and what the output should be? Including their types. – Eugene Sh. Jun 08 '17 at 21:03
  • @J.Middleton Say you have an `int` called `d` that contains a number from 0 to 9 (a digit); then `(char)d+'0'` is the ASCII character that represents it. – Jean-Baptiste Yunès Jun 08 '17 at 21:06
  • @EugeneSh. The project is to increment a count using a switch and display the value on an LCD screen. So the input is an int and the output is ASCII, i.e. an unsigned int that corresponds to the ASCII value of the int input. – J. Middleton Jun 08 '17 at 21:07
  • And where does BCD fit in here? Please add examples to the question. – Eugene Sh. Jun 08 '17 at 21:08
  • The task isn't "more complicated" than you show, but *less*. What's lacking is your understanding of how values are stored in a computer: which is in binary. With the exception of BCD, often used for chip interfaces, decimal, hexadecimal and ASCII are just human representations of numbers. – Weather Vane Jun 08 '17 at 21:11
  • @EugeneSh. Press button, increment count to 1, convert count to BCD, then to ASCII, then display on LCD, etc. – J. Middleton Jun 08 '17 at 21:12
  • Please post the [Minimal, Complete, and Verifiable example](http://stackoverflow.com/help/mcve) that shows the problem. Show the input, the expected output, and the actual output as text *in the question*. – Weather Vane Jun 08 '17 at 21:14
  • OK. You want a function `do_stuff(unsigned int input, char* output)`, right? Now give some examples of the `input` and corresponding `output` you want. – Eugene Sh. Jun 08 '17 at 21:14
  • I guess the OP wants to split an int into a sequence of bytes, each of which represents one power of 10 from the integer, and then convert each into an ASCII representation of those 0-9 digits (a sketch of that approach follows these comments). – Yunnosch Jun 08 '17 at 22:01
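
For reference, a minimal sketch of the digit-splitting approach the comments describe (the `int_to_ascii` helper name and its interface are illustrative, not part of the question):

#include <stdio.h>

/* Peel off decimal digits with % 10, convert each to ASCII with + '0',
   then reverse them into the caller's buffer. */
static int int_to_ascii(unsigned val, char *out, int outsize)
{
    char tmp[10];                          /* enough for a 32-bit unsigned */
    int n = 0;
    do {
        tmp[n++] = (val % 10) + '0';       /* least significant digit first */
        val /= 10;
    } while (val && n < (int)sizeof tmp);
    if (n + 1 > outsize) return -1;        /* output buffer too small */
    for (int i = 0; i < n; ++i)
        out[i] = tmp[n - 1 - i];
    out[n] = '\0';
    return n;
}

int main(void)
{
    char buf[11];
    if (int_to_ascii(471, buf, sizeof buf) > 0)
        puts(buf);                         /* prints "471" */
}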

1 Answer


The idea of converting to BCD on the way to an ASCII representation of a number is actually the "correct" one. Given BCD, you only need to add '0' to each digit to get the corresponding ASCII value.
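
For a single digit, that is just one addition:

int digit = 7;           /* one BCD digit, value 0-9 */
char c = digit + '0';    /* c is now '7' (0x37 in ASCII) */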

But your code has several problems. The most important one is that you try to stuff a value shifted left by 8 bits into an 8-bit type. This can never work: the low 8 bits, which are all a uint8_t can hold, will be zero. Think about it! Beyond that, I absolutely do not understand what your dec2bin() function is supposed to do.
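
A minimal demonstration of that truncation, using the same shift as in your ConverttoBCD():

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned hundred = 3;
    uint8_t ret3 = hundred << 8;   /* 3 << 8 == 0x300, but only the low
                                      8 bits (0x00) fit into a uint8_t */
    printf("%d\n", ret3);          /* prints 0 */
}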

So I'll present you one possible correct solution to your problem. The key idea is to use a char for each individual BCD digit. Of course, a BCD digit only needs 4 bits and a char has at least 8 of them, but you need char anyway for your ASCII representation, and once your BCD digits are already in individual chars, all you have to do is indeed add '0' to each.

While we're at it: converting to BCD by dividing and multiplying is a waste of resources. There's a nice algorithm called double dabble for converting to BCD using only bit shifts and additions. I'm using it in the following example code:

#include <stdio.h>
#include <string.h>

// for determining the number of value bits in an integer type,
// see https://stackoverflow.com/a/4589384/2371524 for this nice trick:
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

// number of bits in unsigned int:
#define UNSIGNEDINT_BITS IMAX_BITS((unsigned)-1)


// convert to ASCII using BCD, return the number of digits:
int toAscii(char *buf, int bufsize, unsigned val)
{
    // sanity check, a buffer smaller than one digit is pointless
    if (bufsize < 1) return -1;

    // initialize output buffer to zero
    // if you don't have memset, use a loop here
    memset(buf, 0, bufsize);
    int scanstart = bufsize - 1;
    int i;

    // mask for single bits in value, start at most significant bit
    unsigned mask = 1U << (UNSIGNEDINT_BITS - 1);
    while (mask)
    {
        // extract single bit
        int bit = !!(val & mask);

        for (i = scanstart; i < bufsize; ++i)
        {
            // this is the "double dabble" trick -- in each iteration,
            // add 3 to each element that is greater than 4. This will
            // generate the correct overflowing bits while shifting for
            // BCD
            if (buf[i] > 4) buf[i] += 3;
        }

        // if we have filled the output buffer from the right far enough,
        // we have to scan one position earlier in the next iteration
        if (buf[scanstart] > 7) --scanstart;

        // check for overflow of our buffer:
        if (scanstart < 0) return -1;

        // now just shift the bits in the BCD digits:
        for (i = scanstart; i < bufsize - 1; ++i)
        {
            buf[i] <<= 1;
            buf[i] &= 0xf;
            buf[i] |= (buf[i+1] > 7);
        }
        // shift in the new bit from our value:
        buf[bufsize-1] <<= 1;
        buf[bufsize-1] &= 0xf;
        buf[bufsize-1] |= bit;

        // next bit:
        mask >>= 1;
    }

    // find first non-zero digit:
    for (i = 0; i < bufsize - 1; ++i) if (buf[i]) break;
    int digits = bufsize - i;

    // eliminate leading zero digits
    // (again, use a loop if you don't have memmove)
    // (or, if you're converting to a fixed number of digits and *want*
    //  the leading zeros, just skip this step entirely, including the
    //  loop above)
    memmove(buf, buf + i, digits);

    // convert to ascii:
    for (i = 0; i < digits; ++i) buf[i] += '0';

    return digits;
}

int main(void)
{
    // some simple test code:
    char buf[10];

    int digits = toAscii(buf, 10, 471142);
    for (int i = 0; i < digits; ++i)
    {
        putchar(buf[i]);
    }

    puts("");
}
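
On the microcontroller, you would replace the putchar() loop with whatever output routine your LCD driver provides; lcd_putc() below is just a hypothetical placeholder for it:

/* sketch only -- lcd_putc() is a hypothetical stand-in for your LCD
   driver's character-write routine */
void show_count(unsigned count)
{
    char buf[6];                                  /* room for 5 digits */
    int digits = toAscii(buf, sizeof buf, count);
    if (digits < 0) return;                       /* value too large for buf */
    for (int i = 0; i < digits; ++i)
        lcd_putc(buf[i]);
}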

You won't need this IMAX_BITS() "magic macro" if you actually know your target platform and how many bits there are in the integer type you want to convert.
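
For example, if you know your target has a 16-bit unsigned int (an assumption about your particular MCU), you can replace the macro with a constant:

#define UNSIGNEDINT_BITS 16   /* assuming a 16-bit unsigned int on the target */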