
I need to plot a continuous input using pyqtgraph, so I use a circular buffer to hold the data; a deque with maxlen does the job. (Python 2.7, numpy 1.9.2, pyqtgraph 0.9.10)

from collections import deque
def create_cbuffer(self):
    buffer_len = self.BUFFER_LEN*self.number_of_points
    data = [0]*buffer_len # buffer_len = 160k
    self.cbuffer[0] = deque(data, maxlen=buffer_len)
    buffer_len = self.BUFFER_LEN
    data = [0]*buffer_len
    self.cbuffer[1] = deque(data, maxlen=buffer_len)
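As a quick standalone illustration (not part of the question's code) of why deque with maxlen behaves like a circular buffer:

```python
from collections import deque

# A deque with maxlen silently drops the oldest items once full,
# which is exactly the circular-buffer behaviour needed here.
d = deque([0, 0, 0], maxlen=3)
d.append(1)       # drops one 0 from the left
d.extend([2, 3])  # drops the remaining zeros
print(list(d))    # [1, 2, 3]
```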

After that I use it like this:

import time
def update_cbuffer(self):
    data_points, data = data_feeds()  # new data arrives every 16 ms as lists
    start_t = time.time()
    self.cbuffer[0].extend(data_points) # Thanks to @PadraicCunningham
    # for k in xrange(0, self.number_of_points):
    #     self.cbuffer[0].append(data_points[k])
    self.cbuffer[1].append(data)
    fin_t = time.time() - start_t

I set up the plots as:

self.curve[0] = self.plots[0].plot(self.X_AXIS, 
                [0]*self.BUFFER_LEN*self.number_of_points,
                pen=pg.intColor(color_idx_0),name='plot1')
self.curve[1] = self.plots[1].plot(self.X_AXIS_2, [0]*self.BUFFER_LEN,
                pen=pg.intColor(color_idx_1),name='plot2')

and update the plots as:

def update_plots(self):
    self.curve[0].setData(self.X_AXIS, self.cbuffer[0])
    self.curve[0].setPos(self.ptr, 0)
    self.curve[1].setData(self.X_AXIS_2, self.cbuffer[1])
    self.curve[1].setPos(self.ptr, 0)
    self.ptr += 0.016

Then I call it using QTimer:

self.timer = QtCore.QTimer()
self.timer.timeout.connect(self.update_cbuffer)
self.timer.timeout.connect(self.update_plots)
self.timer.start(16)

The questions are:

1. When I plot it, the update seems to be much slower than 16 ms. Any ideas to speed it up?

2. When I time update_plots() using time.time() and compute its average run time (total_time/number_of_runs), it increases gradually; I'm trying to understand the reason behind this.

Any suggestions? I am new to Python and may have made some mistakes in the code, so please do not hesitate to point them out. Thank you in advance for your help.

p.s. I've tried different circular buffers as suggested in "efficient circular buffer?":

import numpy as np

class Circular_Buffer():
    def __init__(self, buffer_len, data_type='float'):
        if data_type == 'int':
            self.__buffer = np.zeros(buffer_len, dtype=int)
        else:
            self.__buffer = np.zeros(buffer_len)
        self.__counter = 0

    def append(self, data):
        self.__buffer = np.roll(self.__buffer, -1)
        self.__buffer[-1] = data

    def get(self):
        return self.__buffer

But it turns out to be much slower in my case.
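For reference, the usual way to make a numpy ring buffer fast is to track a write index instead of calling np.roll, which copies the whole array on every append. A minimal sketch (class and method names are mine, not from the question):

```python
import numpy as np

class IndexedRingBuffer(object):
    """Fixed-size ring buffer; append is O(1) instead of the O(n) of np.roll."""
    def __init__(self, buffer_len):
        self.__buffer = np.zeros(buffer_len)
        self.__index = 0
        self.__len = buffer_len

    def append(self, data):
        # Overwrite the oldest slot and advance the write pointer.
        self.__buffer[self.__index] = data
        self.__index = (self.__index + 1) % self.__len

    def get(self):
        # Return the contents in insertion order (oldest first).
        return np.concatenate((self.__buffer[self.__index:],
                               self.__buffer[:self.__index]))

rb = IndexedRingBuffer(4)
for v in (1, 2, 3, 4, 5):
    rb.append(v)
print(rb.get())  # oldest-first: 2, 3, 4, 5
```

The get() call still copies once per plot update, but appends between updates become constant-time.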

I've also tried this:

class CB_list():
    def __init__(self, buffer_len):
        self.__buffer = [0]*buffer_len

    def append(self, data):
        self.__buffer = self.__buffer[1:]
        self.__buffer.append(data)

    def get(self):
        return self.__buffer

It performs similarly to deque, so I stick with deque.
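One way to compare the two approaches directly is timeit. For a 160k-element buffer the list version has to copy the whole list on every append, while deque.append is O(1), so the gap is usually large (a quick sketch, numbers will vary by machine):

```python
import timeit

setup = """
from collections import deque
n = 160000
dq = deque([0.0] * n, maxlen=n)
lst = [0.0] * n
"""

# Append a single element 200 times to each buffer type.
t_deque = timeit.timeit("dq.append(1.0)", setup=setup, number=200)
t_list = timeit.timeit("lst = lst[1:]; lst.append(1.0)", setup=setup, number=200)
print("deque: %.6fs  list-slice: %.6fs" % (t_deque, t_list))
```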

EDIT: Sorry, I made a mistake yesterday; I've already corrected it in the code above.

data = [0]*buffer_len # buffer_len = 16k  <--- Should be 160k instead
  • Use xrange instead of range, range builds a list. If you are appending all the `data` simply use `self.cbuffer[0].extend(data)`. If you are appending a slice use `self.cbuffer[0].extend(itertools.islice(data,None,self.number_of_points))` – Padraic Cunningham Apr 15 '15 at 09:54
  • @PadraicCunningham Thanks, I will consider that. :D – cityzz Apr 15 '15 at 10:36
  • @PadraicCunningham How about the update_plots() method? Can that be more efficient? – cityzz Apr 15 '15 at 10:36
  • can you add a link to the full code? – Padraic Cunningham Apr 15 '15 at 12:59
  • @PadraicCunningham Thank you for your help, but I am afraid I can't link them all because it is in a proprietary code frame work. I think all the relevant parts are here. Nothing else should affect the plotting speed. – cityzz Apr 15 '15 at 13:51
  • Where is the data actually coming from? How much time does `data_feeds` take? – sebastian Apr 15 '15 at 15:35
  • @sebastian data_feeds comes from another thread; it takes less than 10 ms to generate 2 new dicts, data_points (160k length) and data (625 length). When I timed update_cbuffer(), it took less than 1 ms. – cityzz Apr 16 '15 at 08:03

1 Answer


I'm not sure this is a complete answer, but the information is too long to turn into a comment, and I think it is critical for your understanding of the problem.

I think it is very unlikely you will get your timer to fire every 16 ms. Firstly, if your methods self.update_cbuffer and self.update_plots take longer than 16 ms to run, the QTimer will skip firing when it should and fire on the next multiple of 16 ms (e.g. if the methods take 31 ms to run, your timer should fire after 32 ms; if the methods then take 33 ms to run, the timer will next fire 48 ms after the previous one).

Furthermore, the accuracy of the timer is platform dependent. On Windows, timers are only accurate to around 15 ms. As proof of this, I wrote a script to test on my Windows 8.1 machine (code included at the end of the post). This graph shows the deviation from the expected timeout in ms:

[Figure: error in timeout trigger]

In this case, my example was firing around 12 ms early. Note that this isn't quite accurate, as I don't think my code accounts for the time it takes to append the error to the list of errors; however, that time should be much smaller than the offset in my figure, and it doesn't explain the large spread of values either. In short, timers on Windows have an accuracy around the size of your timeout. Not a good combination.

Hopefully this at least explains why the code isn't doing what you expect. Without a minimal working example, though, or comprehensive profiling of the code on your side, it is difficult to know where the speed bottleneck is.

As a small aside, pyqtgraph seemed to stop updating my histogram after a while when the timeout in my code below was very small. Not sure why that was.

Code to produce the above figure

from PyQt4 import QtGui, QtCore
import sys
import time
import pyqtgraph as pg
import numpy as np

start_time = time.time()

timeout = 0.16 # this is in SECONDS. Change to vary how often the QTimer fires
time_list = []

def method():
    global start_time
    time_list.append((timeout-(time.time()-start_time))*1000)
    start_time = time.time()

def update_plot():
    y,x = np.histogram(time_list, bins=np.linspace(-15, 15, 40))
    plt1.plot(x, y, stepMode=True, fillLevel=0, brush=(0,0,255,150))

app = QtGui.QApplication(sys.argv)

win = pg.GraphicsWindow()
win.resize(800,350)
win.setWindowTitle('Histogram')
plt1 = win.addPlot()
y,x = np.histogram(time_list, bins=np.linspace(-15, 15, 40))
plt1.plot(x, y, stepMode=True, fillLevel=0, brush=(0,0,255,150))
win.show()

timer = QtCore.QTimer()
timer.timeout.connect(method)
timer.timeout.connect(update_plot)
timer.start(timeout*1000)

sys.exit(app.exec_())
  • Thank you for your explanation of QTimers. After timing update_cbuffer() and update_plots(), I found that update_cbuffer() takes well under 1 ms, while update_plots() is another story: its run time keeps increasing, starting from 13 ms. So I think there might be some problem with self.curve[0].setData(), especially with a large data size (160k in my case). How can I find out where this problem comes from? Is there a faster way of doing this? – cityzz Apr 16 '15 at 07:59
  • @cityzz Do you mean 160k points? That is quite a lot. Given you have limited resolution on the screen, you could consider downsampling first in a way that preserves features (not sure this will necessarily save you time). Otherwise you could delve into the PyQtGraph code to find out what is taking the longest (or use some sort of profiling tool) but I suspect the slowest part is actually drawing all the data on the screen which you won't be able to speed up. – three_pineapples Apr 16 '15 at 08:08
  • Thank you for pointing it out. I am working on the downsampling at the moment; hopefully it will speed up the plots. – cityzz Apr 16 '15 at 08:54
  • @cityzz Note, I don't know the details of how you are downsampling, but in my experience the fastest way to do it is to offload it to a C extension. Python is typically slow at downsampling, and if speed is an issue you should put things in C code (called from Python) where necessary. This of course requires learning about C extensions, using numpy from C extensions, how to compile them, and then actually writing the C code. – three_pineapples Apr 16 '15 at 09:17
  • I am using a numpy array to downsample it at the moment; it seems to be reasonably fast: _data = _data.reshape(-1, R).mean(axis=1) – cityzz Apr 16 '15 at 10:42
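The reshape/mean one-liner from the last comment fails when the length isn't an exact multiple of R, so a guarded version is safer. A sketch (function name is mine); note that pyqtgraph may also offer built-in downsampling on the plot item, depending on version:

```python
import numpy as np

def downsample_mean(data, R):
    """Block-average downsampling: trims the tail so the length divides R."""
    data = np.asarray(data, dtype=float)
    n = (len(data) // R) * R  # drop the remainder that doesn't fill a block
    return data[:n].reshape(-1, R).mean(axis=1)

# Averages pairs; the trailing 7 is dropped because it doesn't fill a block.
print(downsample_mean([1, 2, 3, 4, 5, 6, 7], 2))
```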