-1

I'm currently working on creating an I2C network between a few microcontrollers (ATmega32). To start, I'm just trying to interface a slave and a master. I have a sensor connected to the slave that gives me data as an unsigned int, but the data transmission happens in unsigned char format.

I'm not able to figure out how to transmit data that I receive as an unsigned int over a network that works with unsigned char.

Any help would be appreciated.

  • 1
    can you write the code for the part you are able to do yourself? – wimh Jun 02 '15 at 09:41
  • check each bit of the number by using '&' – finesse Jun 02 '15 at 09:45
  • 2
    An unsigned char already is binary (as is your unsigned int) and most likely has eight bits on your platform. If you want to output it in any particular form, you need to translate it to a *sequence* of characters. – molbdnilo Jun 02 '15 at 09:47
  • 2
    That is not conversion from decimal to binary, it is presentation of an int as a string with binary radix. And there is no language "embedded C"; it's just C, any differences or extensions would be specific to your compiler/target. – Clifford Jun 02 '15 at 09:50
  • possible duplicate of [Is there a printf converter to print in binary format?](http://stackoverflow.com/questions/111928/is-there-a-printf-converter-to-print-in-binary-format) – technosaurus Jun 02 '15 at 12:29
  • There is no specific "embedded C" language. Only a _freestanding environment_ where most libraries are optional, but that is still the same language. – too honest for this site Jun 02 '15 at 16:51
  • @technosaurus: I disagree. This is about not using the standard libraries at all. – too honest for this site Jun 02 '15 at 17:04
  • @Clifford: _freestanding environment_. Its restrictions would still conform to the standard. – too honest for this site Jun 02 '15 at 17:08
  • @Olaf : I don't think I said anything to the contrary. However some compilers for some targets have target specific extensions (such as [bit-addressable objects in Keil's Cx51](http://www.keil.com/support/man/docs/c51/c51_le_bitaddrobj.htm) ) - my point was any "special syntax" is not defined by "embedded C" but rather the specific compiler. None of which is relevant to the question. In fact even being embedded is irrelevant. – Clifford Jun 02 '15 at 20:06
  • @Clifford: I just wanted to clarify that the standard allows omitting most (actually all) libraries and most headers for a freestanding environment. So not having a stdlib is not necessarily a deviation from the standard (unless the compiler claims to support a _hosted environment_, of course; this for "differences" — most people just don't know the standard mentions two types of execution environment). For the extensions: right. Specifically, the '51 actually has bit-addressing in hardware and very limited resources. So a compiler will do pretty well to support this (optimizer or extension). – too honest for this site Jun 02 '15 at 20:44
  • @Clifford I used the term embedded C to indicate that I'm trying to program embedded devices. I wanted to give a clear idea of what I'm trying to do. :) – Rohan Narlanka Jun 02 '15 at 20:47
  • Problem is: you actually did not. Your headline and text were misleading; that was more or less clarified in the comments. You started out as if requiring some kind of `itoa()` and now you ended up at portable data transmission. Not intending to blame you, but please work on getting the terms right. You would not trust a doctor to stitch you up a bit if he tells you to "cut your throat". – too honest for this site Jun 02 '15 at 21:23
  • @Olaf Very sorry for the confusion I've caused. I'll make the necessary changes to the question so that it helps other people referring it. – Rohan Narlanka Jun 02 '15 at 21:48
  • @RohanNarlanka: I give you a start with the headline. You might accept or not, either is ok for me. – too honest for this site Jun 02 '15 at 21:51
  • Note that "unsigned int" can be anything from 16 bits upwards. It could even be 24 bits, for instance. Do yourself a favour and use the types defined in stdint.h! – too honest for this site Jun 02 '15 at 21:59
  • @Olaf I'll keep that in mind when I work on the code, I guess I have to get reading about a lot of stuff now. – Rohan Narlanka Jun 02 '15 at 22:05
  • @RohanNarlanka : You have changed the question such that it is a different question! Bad form IMO - I am going to delete my answer as it no longer makes any sense in this context. – Clifford Jun 03 '15 at 06:05

3 Answers

2

Divide the decimal number by 2 until it becomes 0, saving each division's remainder in an array; then reverse the array to get the binary representation of that decimal number. Suppose your decimal number is 7.

7/2=3 remainder=1
3/2=1 remainder=1
1/2=0 remainder=1

So the array holds 11100000 (remainders in order, padded with zeros).
After reversing, the binary is 00000111.

In C it will look like this:

int des = 7, binary[8] = {0}, indexNo = 0;  /* initialise to all zeros */
while (des != 0)
{
    binary[indexNo] = des % 2;
    des /= 2;
    indexNo++;
}
Now reverse binary[], or you can simply start indexNo from the last index (7) and count down.
Shohan Ahmed Sijan
  • Thanks... I actually initialise the binary[] array to 0. For the decimal value 0, no loop iteration occurs, but the binary value will still be 00000000. – Shohan Ahmed Sijan Jun 02 '15 at 10:52
  • This seems needlessly complex. Use bit-wise operators instead. – Lundin Jun 02 '15 at 14:42
  • There is no "decimal number". Last time I had a look, most computers are binary. Unless he is programming an ENIAC, of course (is there a C compiler available?). – too honest for this site Jun 02 '15 at 16:58
  • That might not work for negative values; use unsigned. Also, the OP wanted a string, not an array of int. Also: why reverse rather than fill top-down? The array being all 0 is only valid for the first call, and a local (which would make more sense, actually) would not have this guarantee at all. – too honest for this site Jun 02 '15 at 17:02
  • While that does indeed convert a decimal to a binary representation; the machine already stores *all* data in a binary representation; so while this algorithm is fine on paper it is wasteful on a machine - all that is necessary is to present the bits that already represent the number – Clifford Jun 02 '15 at 20:21
  • @Olaf im programming an Atmega32 microcontroller actually – Rohan Narlanka Jun 02 '15 at 20:45
  • @Clifford let me explain what I'm actually trying to do. I'm working on an I2C communication network between a few microcontrollers, but to start I'm just trying to communicate between a master and a slave. I have a sensor connected to the slave which gives the data to the microcontroller in an unsigned int format, and the data transmission in I2C happens in an unsigned char format. So even if the machine stores all data in a binary representation, the other microcontroller will still recognize it as unsigned int data, and therefore I wouldn't be able to communicate. – Rohan Narlanka Jun 02 '15 at 20:53
  • @RohanNarlanka: But for that there is no need to use ASCII format. While I prefer this for larger systems and networks, I2C is quite restricted in size and speed, so you should concentrate on a compact message format. Also, the single MCUs will be pretty busy converting data bit-wise (you could at least use a hex format). Just define the byte ordering and send each datum as a byte. I2C already has very good framing using ACK/NACK, so there is also no problem to re-sync for binary data. – too honest for this site Jun 02 '15 at 21:00
  • @Olaf Thanks for the information. I never thought about the data transmission speed in the network. I'll keep your advice in mind, But there will be around say 4 microcontrollers in the I2C network, Will it actually make that much of a difference? – Rohan Narlanka Jun 02 '15 at 21:05
  • @RohanNarlanka: Whether it runs like a charm or screws up depends mostly on how often the data is actually transmitted. If that is for sensors: temperature sensors are mostly quite slow, while audio sensors, gyroscopes, or acceleration sensors might require very frequent polling. In any case, there is no benefit in your original format. Note that if you really only want to receive a single value, and need no other data format, you actually do not need the type byte. That would only be required if the slave will send different data types. – too honest for this site Jun 03 '15 at 02:29
  • @RohanNarlanka : I see that you have changed your question such that it no longer even resembles the original question. That is somewhat poor form; since it renders this an other answers nonsense! You should have asked a new question and either left or deleted this one. I cannot see how your original requirement is related to this issue. – Clifford Jun 03 '15 at 06:00
  • How funny!!! Why did you totally change your question??? I tried to help you; that doesn't mean you should try to make a fool of me..!! – Shohan Ahmed Sijan Jun 04 '15 at 10:32
0

I think I understand better now, after your comment. What you need is not a binary to ASCII-string conversion, but a binary transmission format.

Note that this is an XY-problem: asking for X, but requiring Y.

For this, you just have to precede each data element with a type prefix (one byte).

// This enum must be known to all transmitters and receivers, of course!
typedef enum {
    DATA_TYPE_uint16 = 0,
    DATA_TYPE_int8,
    ...
} DataType;

So, for instance, a uint16_t is transferred as:

uint8_t tx_buffer[MAX_BUFFER_SIZE];
uint16_t internal_var1;

...

tx_buffer[0] = DATA_TYPE_uint16;            // type being sent
tx_buffer[1] = (uint8_t)internal_var1;      // lower 8 bits
tx_buffer[2] = (uint8_t)(internal_var1>>8); // upper 8 bits
send_i2c(3, tx_buffer);

Receivers' code:

uint8_t rx_buffer[MAX_BUFFER_SIZE];
uint16_t internal_var1;

// read the next frame (i2c delimits frames automatically)
receive(MAX_BUFFER_SIZE, rx_buffer);  // do not overflow the buffer

switch ( rx_buffer[0] ) {
    case DATA_TYPE_uint16:
        internal_var1 = (uint16_t)rx_buffer[1] | (uint16_t)rx_buffer[2] << 8;
        break;
    ...
    default:
        // invalid frame format (error handling!)
}

Note that the data is sent little endian (lowest byte first). This is common practice in embedded programming. It is much more compact (i.e., faster to send) than packing each bit into a byte, and much easier and faster to process on either side.

You can extend this to any kind of data type. So you can send a struct as a single type by packing its individual elements as I showed for the uint16_t. Do not give in to the temptation to pass data as a binary string! That is the road to disaster in communication protocols.

This is actually called "marshalling" (serializing structured data). Note: use only portable operations.
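To illustrate marshalling a struct field by field with the same byte-extraction pattern, here is a hedged sketch; the `SensorReading` type and the function names are invented for this example, not part of any library:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sensor reading -- field names are illustrative only. */
typedef struct {
    uint16_t temperature;
    uint8_t  status;
} SensorReading;

/* Marshal the struct field by field, little endian (lowest byte first).
   Returns the number of bytes written. */
size_t marshal_reading(const SensorReading *r, uint8_t *buf)
{
    size_t n = 0;
    buf[n++] = (uint8_t)r->temperature;          /* low byte  */
    buf[n++] = (uint8_t)(r->temperature >> 8);   /* high byte */
    buf[n++] = r->status;
    return n;
}

/* Reverse operation on the receiver; returns bytes consumed. */
size_t unmarshal_reading(SensorReading *r, const uint8_t *buf)
{
    r->temperature = (uint16_t)buf[0] | (uint16_t)buf[1] << 8;
    r->status = buf[2];
    return 3;
}
```

Because each byte is extracted with shifts rather than by casting the struct's memory, this works regardless of the compilers' padding or the nodes' native byte order.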

too honest for this site
  • Your guess was perfectly correct, This is actually way easier to do than what I was previously trying to do. :D – Rohan Narlanka Jun 02 '15 at 21:34
  • @RohanNarlanka: Sometimes I'm good at guessing... You get used to it if you work in the embedded field. – too honest for this site Jun 02 '15 at 21:35
  • @RohanNarlanka: I updated my answer a bit (last two paragraphs). – too honest for this site Jun 02 '15 at 21:42
  • What exactly do you mean when you say portable operations? – Rohan Narlanka Jun 02 '15 at 21:50
  • @RohanNarlanka: For instance: use shift operations to extract the single bytes. Just stick to the pattern I used. Also be careful when using signed integers: right shifts, for instance, might or might not preserve the sign (they might not be identical to divisions). You might want to google for "c pitfalls". And only use the types from `stdint.h` (important! get comfortable with this), at least when preparing the data to transfer/receive. Just read, read, read; you got quite some tips from everyone here. – too honest for this site Jun 02 '15 at 21:56
  • As of now I'll be working mostly with variables of an unsigned int type so I'm not that concerned about the transmission of an signed integer type of data. I'll still give it a proper reading though, never know what comes to help later. – Rohan Narlanka Jun 02 '15 at 22:03
  • If you do this not only for an assignment/homework/etc., it definitively will! – too honest for this site Jun 02 '15 at 22:05
  • Actually I study mechanical engineering and I recently started working on a side project in which I found embedded systems quite interesting, So yeah I don't actually do this as an assignment/homework sort of a thing but as a hobby. – Rohan Narlanka Jun 02 '15 at 22:09
0

Most communications physical layers are byte oriented - that does not prevent the transport of larger data structures or types.

For example to split and reassemble a 16 bit integer to/from byte components:

word8_msb = (word16 >> 8) & 0xFF ;
word8_lsb = word16 & 0xFF ;

word16 = (word8_msb << 8) | word8_lsb ;

Moreover if the byte-order of the two nodes happens to be the same you can simply cast the data to/from a byte array:

Sending an integer array as a byte stream:

uint16_t integer_data[] = { 1u, 2u, 3u, 4u } ;
uint8_t *byte_data = (uint8_t*)integer_data ;
size_t data_length_bytes = sizeof(integer_data) ;
for( size_t i = 0; i < data_length_bytes; i++ )
{
    send_byte( byte_data[i] ) ;
}

Receiving a byte stream to an integer array:

uint16_t integer_data[4] ;
uint8_t *byte_data = (uint8_t*)integer_data ;
size_t data_length_bytes = sizeof(integer_data) ;
for( size_t i = 0; i < data_length_bytes; i++ )
{
    byte_data[i] = read_byte() ;
}

In most cases, however, it is not as simple as that: some sort of protocol is required, at least to indicate the start of data or to provide synchronisation (so the data is not misaligned), and often data is transferred in packets that contain both the data and meta-data to ensure reliable transfer.
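A minimal framing scheme along those lines might look like the sketch below; the start-of-frame marker value and the simple additive checksum are assumptions chosen for illustration, not any particular standard:

```c
#include <stdint.h>
#include <stddef.h>

#define FRAME_START 0x7Eu   /* arbitrary start-of-frame marker (assumption) */

/* Build a frame: [START][LEN][payload...][checksum].
   The checksum is an 8-bit sum over LEN and the payload bytes,
   letting the receiver detect truncated or corrupted frames.
   Returns the total frame length. */
size_t build_frame(const uint8_t *payload, uint8_t len, uint8_t *out)
{
    uint8_t sum = len;
    size_t n = 0;
    out[n++] = FRAME_START;
    out[n++] = len;
    for (uint8_t i = 0; i < len; i++) {
        out[n++] = payload[i];
        sum = (uint8_t)(sum + payload[i]);
    }
    out[n++] = sum;
    return n;
}
```

A receiver would scan for `FRAME_START`, read the length byte, and verify the checksum before accepting the payload, re-synchronising on the next start marker after any mismatch.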

Clifford