I'm trying to receive data from a custom device based on an FTDI 2232H chip.

I am using a simple Async FIFO mode, and the incoming data rate is 3.2MB/sec.

Everything works perfectly with test code on my PC, but I'm having problems receiving data on my Toshiba Thrive.

FTDI's Android driver fails, so I am coding it myself in Java.

I can receive 95%+ of the data perfectly, but every once in a while the data 'sputters' and I get portions of the same 4-5K of data two or three times, then back to good data.

I am not going too fast for the Thrive or Android, because I previously had the data coming in at double (6.4MB/sec) and it got about 95% of that as well. (So it should have no problem at half the rate.)

It seems like there is some sort of bug in the buffering (or double-buffering) that happens within Android. (It is not the buffer within the FTDI 2232H because the repeated data is larger than the chip's 4K internal buffer.)

The setup code is simple, and again it's working ~almost~ perfectly.

The loop where the data grab occurs is very simple:

    while (!fStop) {
      if (totalLen < BIG_BUFF_LEN - IN_BUFF_LEN) {
        len = conn.bulkTransfer(epIN, inBuff, IN_BUFF_LEN, 0);
        System.arraycopy(inBuff, 0, bigBuff, totalLen, len);
        totalLen += len;
      }
    }

In case you think it's the time delay for the arraycopy - I still lose the data even if I comment that line out.

The IN_BUFF_LEN is 16384 (bulkTransfer won't return more than that even if I increase the size of the inBuff).

The bigBuff is several megabytes.

As a secondary question - does anyone know how to pass a pointer to bulkTransfer so that it populates bigBuff directly at an offset (not starting at position 0)?

Greg
  • Maybe Android is garbage collecting during those times and something is getting lost. Check your logcat to see if you can match up what is happening in the OS when you lose data. – RightHandedMonkey Jan 23 '13 at 02:26
  • Strange problem, because with a FIFO this should never happen: when you read a FIFO, the data is removed. Have you tried clearing your buffer each time before you read the FIFO? I.e. making sure you are not reading the same data twice, not out of the FIFO but out of your own buffer. – fonZ Feb 01 '13 at 10:13
  • Can you tell where you get the 4-5K of duplicate data? I mean, does the duplication occur at the same index every time, for example a duplicate chunk whenever the big buffer is 50% full? And by chance, during your testing, have you ever received 100% of the data with the Thrive? – user_CC Feb 12 '13 at 11:45
  • Have you boosted the priority of the thread reading from USB to the maximum available? It should mitigate problems related to periodic GC or other tasks within Android. It should be fine to do that, assuming the call to bulkTransfer is blocking. – c.fogelklou Feb 16 '13 at 22:20
  • See my clarification attached to the next answer... – Greg Feb 17 '13 at 02:32

4 Answers


UsbDeviceConnection.bulkTransfer(...) is buggy. Use the UsbRequest.queue(...) API instead.

Many people have reported that using bulkTransfer directly fails on around 1% to 2% of input transfers.
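For illustration, a minimal sketch of an asynchronous read loop with UsbRequest (not production code; it assumes `conn`, `epIN`, `bigBuff`, `totalLen`, `fStop`, and `IN_BUFF_LEN` are set up as in the question, and uses the pre-API-26 two-argument `queue` signature):

```java
UsbRequest request = new UsbRequest();
if (!request.initialize(conn, epIN)) {
    throw new IllegalStateException("UsbRequest.initialize failed");
}
ByteBuffer buf = ByteBuffer.allocate(IN_BUFF_LEN);
try {
    while (!fStop) {
        buf.clear();
        if (!request.queue(buf, IN_BUFF_LEN)) break; // queue one IN request
        UsbRequest done = conn.requestWait();        // blocks until completion
        if (done == request) {
            int len = buf.position();                // bytes actually received
            System.arraycopy(buf.array(), 0, bigBuff, totalLen, len);
            totalLen += len;
        }
    }
} finally {
    request.close();
}
```

Note that the request returned by requestWait() must be compared against the one you queued, as Pablo describes in the comments below; with several outstanding requests you have to track which buffer belongs to which completion.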

Pablo Valdes
  • Does UsbRequest.queue(..) work 100% without failures? I'm also encountering that kind of problem. – support_ms Mar 04 '16 at 11:17
  • I did a project with heavy USB traffic using the UsbRequest API and never had a complaint about missing data. However, I used UsbRequest.queue(..) only for incoming data; to send data out I used the synchronous API with a 1-second timeout. When using UsbRequest.queue(..), make sure the request you put on the queue is the same request you receive back from the queue. I also created a new request per blocking read call. – Pablo Valdes Mar 05 '16 at 17:38
  • @PabloValdes What throughput did you achieve? I faced issues with bulkTransfer and switched to UsbRequest, but I'm still not getting very high speeds. Can you share the source, if that's okay? – RohitMat Jun 12 '17 at 08:47
  • @support_ms I have not found the missing-data problem while using the UsbRequest.queue(..) API. But you need to be very cautious, because multiple requests may interleave. The queueing and wait process must be very precise, otherwise you may end up missing or not attending to certain requests. – Pablo Valdes Jun 14 '17 at 00:35

Just to clarify a few of the approaches I tried: the USB code ran in its own thread at maximum priority (no luck); I tried the Android API calls, libusb, native C, and other methods (no luck); I buffered, polled, and queued (no luck). Ultimately I concluded that Android could not handle USB data at 'high speed' (a constant 3.2MB/sec with no flow control), so I built an 8MB hardware FIFO buffer into my design to make up for it. (If you think you have an answer, rig up something that feeds data in at 3.2MB/sec and see if Android can handle it without ANY hiccups. I'm pretty sure it can't.)

Greg

In Nexus Media Importer I can consistently push through about 9MB/s, so it is possible. I'm not sure if you have control of the source, but you may want to break the feed into 16K blocks with some sort of sequenced header so you can detect missing blocks and corruption.
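To illustrate the sequenced-header idea, here is a small receiver-side check. The 4-byte little-endian counter and the class name are my own invention for this sketch, not part of any library or the questioner's protocol:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Validates a stream of blocks that each begin with a 4-byte little-endian
// sequence number, classifying each block as in-order, repeated, or skipped.
public class SeqChecker {
    private long expected = 0;

    /** Returns "ok", "duplicate", or "gap" based on the block's header. */
    public String check(byte[] block) {
        long seq = ByteBuffer.wrap(block, 0, 4)
                             .order(ByteOrder.LITTLE_ENDIAN)
                             .getInt() & 0xFFFFFFFFL; // treat as unsigned
        String result;
        if (seq == expected)     result = "ok";
        else if (seq < expected) result = "duplicate"; // replayed data
        else                     result = "gap";       // missing blocks
        expected = seq + 1;
        return result;
    }
}
```

With a header like this, the 'sputter' in the question would show up as a run of "duplicate" results, which at least lets you discard the repeats instead of corrupting the stream.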

Also, you are not checking for len < 0. I'm not sure what will happen if the underlying stack gets a NAK or NYET from the other end; I hit this often enough that I have recovery code to handle it.
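One way to structure that check is a small retry wrapper. The actual Android call is abstracted behind an IntSupplier here so the retry logic stands alone; the names and the retry policy are illustrative, not the poster's actual recovery code:

```java
import java.util.function.IntSupplier;

// Retry wrapper for a bulk read. `doTransfer` stands in for
// conn.bulkTransfer(epIN, inBuff, IN_BUFF_LEN, TIMEOUT_MS), which returns
// the byte count on success or a negative value on error.
public class TransferRetry {
    public static int readWithRetry(IntSupplier doTransfer, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            int len = doTransfer.getAsInt();
            if (len >= 0) return len;  // success (0 bytes is legal)
            // len < 0: transfer error; real recovery code might also clear
            // a halted endpoint here before retrying
        }
        return -1; // give up after maxRetries consecutive failures
    }
}
```

The point is simply that a negative return must never reach System.arraycopy, which would throw (or silently skip data) when len is -1.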

I have looked long and hard for a way to offset the bulkTransfer destination buffer, but I have yet to find it. FYI: UsbRequest.queue() does not respect ByteBuffer.position().

I'm kind of surprised we can do 16K on bulkTransfer anyway. According to the USB 2.0 spec, the max packet size for a high-speed bulk endpoint is supposed to be 512 bytes. Is Android bundling the bulk transfers, or are we breaking the rules?

Dustin

You have to be sure that there is no other traffic on the same bus with higher priority than your traffic.

HaniGamal