
I have been using the Teensy 3.6 microcontroller board (180 MHz ARM Cortex-M4 processor) to try to implement a driver for a sensor. The sensor is controlled over SPI, and when it is commanded to make a measurement, it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal. The data frame itself consists of 1,024 16-bit values.

My first attempt consisted of a relatively naïve approach: I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line. The main loop of the program assembles a uint16_t value from these bits and collects 1,024 of these values for the full measurement frame.
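In outline, the code looks something like this (simplified, with placeholder pin numbers rather than my actual wiring):

```cpp
// Simplified sketch of the interrupt-per-bit approach (pin numbers are placeholders).
#include <Arduino.h>

const uint8_t PCLK_PIN = 2;
const uint8_t DOUT_PIN = 3;

volatile bool bitReady = false;   // set by the ISR when a new bit has arrived
volatile bool bitValue = false;   // level of DOUT at the falling edge of PCLK

uint16_t frame[1024];             // one full measurement frame

void pclkFallingIsr() {
  bitValue = digitalReadFast(DOUT_PIN);
  bitReady = true;
}

void setup() {
  pinMode(PCLK_PIN, INPUT);
  pinMode(DOUT_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(PCLK_PIN), pclkFallingIsr, FALLING);
}

void loop() {
  static uint16_t word = 0;
  static int bitCount = 0;
  static int wordCount = 0;

  if (bitReady) {
    bool bit = bitValue;          // copy the bit before clearing the flag
    bitReady = false;
    word = (word << 1) | (bit ? 1 : 0);
    if (++bitCount == 16) {
      frame[wordCount] = word;
      word = 0;
      bitCount = 0;
      if (++wordCount == 1024) {
        wordCount = 0;            // full frame collected
      }
    }
  }
}
```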

However, this program locks up the Teensy almost immediately. From my experiments, it seems to lock up as soon as the interrupt is attached. I believe that the microprocessor is being swamped by interrupts.

I think that the correct way of doing this is to use the Teensy's DMA controller. I have been reading Paul Stoffregen's DMAChannel library, but I can't understand it. I need to trigger the DMA transfers from the PCLK digital pin and have it read in bits from the DOUT digital pin. Could someone tell me if I am looking at this problem in the correct way? Am I overlooking something, and what resources should I look at to better understand DMA on the Teensy?

Thanks!

I put this on the Software Engineering Stack Exchange because I feel that this is primarily a programming problem, but if it is an EE problem, please feel free to move it to the EE SE.

SquawkBirdies
  • Why don't you use a SPI port? – Michaël Roy Jul 19 '17 at 00:04
  • 1) Are you sure it is swamped by interrupts, or did you not do the handler right and the first interrupt never returns and is stuck? Try a single state change and no more (control it from another GPIO pin on the same MCU if that makes it easier). – old_timer Jul 19 '17 at 03:59
  • 2) DMA is not automatically the best solution; sometimes DMA is no faster than software/polling. It depends on the system design. – old_timer Jul 19 '17 at 03:59
  • 3) If you are the SPI slave and don't have a SPI slave port, then you should find another MCU. Perhaps there are other things you can do, but using the correct interface is best on the receive side. (On the master side, bit banging is sometimes okay depending on your system design, i.e. how much software bandwidth you have to spare, or you can use the SPI peripheral, which this chip probably has.) – old_timer Jul 19 '17 at 04:01
  • DMA is not automatically the wrong way either. Even with a SPI slave or master, these chips often have a DMA that lets you deal with blocks rather than every data item/transfer. You still have to deal with the DMA engine, but it makes for less overhead, albeit bursty overhead. – old_timer Jul 19 '17 at 04:03
  • As old_timer says, I doubt the Teensy is being swamped with interrupts. Maybe you could post the code you're using so we could see what's going on. – Tony K Jul 19 '17 at 08:53
  • @TonyK The CPU is 180 MHz with a 5 MHz signal; that is only ~36 instructions to handle each bit. If this involves interrupt overhead, I can definitely see the CPU being swamped. – artless noise Jul 19 '17 at 14:43
  • @artlessnoise right, the interrupt latency would be 12 cycles entering and 12 cycles exiting so that leaves just ~12 cycles to process the interrupt. I stand corrected. – Tony K Jul 19 '17 at 18:22
  • And the chip has an SPI port, you should use that. – Michaël Roy Jul 21 '17 at 16:36
  • An old thread I realize, but for other late-comers: I don't think this discussion ever resolved whether OP is trying to _read_ a sensor _to_ the Teensy, or to use Teensy to _emulate_ some sensor while OP writes a driver on some other MCU or computer. Either way, OP says the sensor uses SPI, so just using an SPI port seems the obvious answer, already suggested by @MichaëlRoy, twice, to no response. – gwideman Sep 27 '20 at 22:06
  • Yes. I forgot to add this: DMA would ONLY work using the SPI port. – Michaël Roy Oct 02 '20 at 11:48

1 Answer


> Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?

There is more than one source of 'high speed digital data'. DMA is not the globally correct solution for all data, but it can be a solution.

> it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal.

> I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line.

This approach would be called 'bit bashing'. You are using the CPU to physically sample the pins. It is a worst-case solution that I see many experienced developers implement; it will work with any hardware connection. Fortunately, the Kinetis K66 has several peripherals that may be able to assist you.

Specifically, the FTM, CMP, I2C, SPI and UART modules may be useful. These hardware modules can reduce the workload from processing every bit to processing groups of bits. For instance, the FTM supports a capture mode. The idea is to ignore the PCLK signal and just measure the time between edges on DOUT. These times will always be a multiple of the bit period (one PCLK cycle). If the timer captures a two-bit period, then you know that two identical bits (two ones or two zeros) were sent.
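As a sketch of just the decoding idea (this is not FTM register setup; `BIT_PERIOD_TICKS` and the `emitBit` callback are made-up names for illustration), assuming the capture timer hands you a timestamp at each DOUT edge:

```cpp
// Turn the time between two DOUT edge captures into a run of identical bits.
// Assumes a free-running 32-bit timer; BIT_PERIOD_TICKS = timer ticks per PCLK
// cycle (e.g. 180 MHz / 5 MHz = 36).
#include <stdint.h>

const uint32_t BIT_PERIOD_TICKS = 36;

void decodeGap(uint32_t prevCapture, uint32_t thisCapture,
               bool levelBeforeEdge, void (*emitBit)(bool)) {
  uint32_t ticks = thisCapture - prevCapture;   // unsigned math handles timer wrap
  uint32_t bits  = (ticks + BIT_PERIOD_TICKS / 2) / BIT_PERIOD_TICKS;  // round to nearest
  for (uint32_t i = 0; i < bits; ++i) {
    emitBit(levelBeforeEdge);   // the line held this level for 'bits' bit periods
  }
}
```

A gap of two bit periods decodes as two identical bits, which is the capture-mode idea described above.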

Also, your signal looks like SSI, which is a 'digital audio' channel. Unfortunately, the K66 doesn't have an SSI module. Typical I2C is open drain and always has a start bit and a fixed word size. It may be possible to use it if you have some knowledge of the data and/or can attach some circuitry to fake some bits (to be removed later).

You could use the UART and the time between characters to capture data. The time between characters corresponds to a run of bits that aren't at the start-bit level. However, it looks like this UART module requires stop bits (and the SIM features are probably very limited).


Once you do this, the decision between DMA, interrupts and polling can be made. There is nothing faster than polling if the CPU uses the data. DMA and interrupts are needed if you need to multiplex the CPU with the data transfer. DMA is better if the CPU doesn't need to act on most of the data, or if the work the CPU is doing is not memory intensive (number crunching). Interrupt cost depends on your context-save overhead, which can be minimized depending on the facilities your main line code uses.
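For example, a polled capture could be as simple as this sketch (pin numbers are assumed, `digitalReadFast` needs compile-time-constant pins to be fast, and the loop busy-waits for the whole frame):

```cpp
// Polled capture: busy-wait on PCLK edges and shift DOUT bits into 16-bit words.
#include <Arduino.h>

constexpr uint8_t PCLK_PIN = 2;   // assumed wiring
constexpr uint8_t DOUT_PIN = 3;

void captureFrame(uint16_t *frame, size_t words) {
  noInterrupts();                               // keep the timing deterministic
  for (size_t w = 0; w < words; ++w) {
    uint16_t value = 0;
    for (int b = 0; b < 16; ++b) {
      while (!digitalReadFast(PCLK_PIN)) {}     // wait for PCLK to go high
      while (digitalReadFast(PCLK_PIN)) {}      // then wait for the falling edge
      value = (value << 1) | digitalReadFast(DOUT_PIN);
    }
    frame[w] = value;
  }
  interrupts();
}
```

At 5 MHz that is roughly 36 CPU cycles per bit on a 180 MHz part, so the loop has to stay this tight, and the CPU does nothing else while the frame arrives; that is exactly the trade-off against interrupts and DMA.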

Some glue circuitry to adapt the signal to one of the K66 modules could go a long way toward a more efficient solution. If you can't change the signal, another (NXP?) SoC with an SSI module would work well. The NXP modules usually support chaining to an eDMA module as well as interrupts.

artless noise
  • On a side note, what would be the difference between "bit bashing" and "bit-banging"? I thought that what I was doing in my 'naive solution' constituted "bit-banging". Thanks for the response – SquawkBirdies Jul 19 '17 at 21:52
  • I think they are the same things. See: [Bit manipulation@wikipedia.](https://en.wikipedia.org/wiki/Bit_manipulation) *Bit twiddling and bit bashing are often used interchangeably with bit manipulation*... Probably bit manipulation is correct as you are actually sampling for input (not *banging* nor *bashing*). – artless noise Jul 20 '17 at 13:41
  • SPI is probably more useful than I2C due to the addressing preamble. However, check all the serial controllers for *raw modes*. Specifically, the UART can be used to grab 8-10 bits on the character-received interrupt and can then be changed to a GPIO to avoid start-bit issues. You may need a timer as well (to get the number of bits before the start bit). – artless noise Jul 20 '17 at 13:47