
I'm working on a project using National Instruments boards to do data acquisition. I have functional C code for the tasks, but I would like to use Python so the GUI programming is less painful. In my C code, I use the API call SetTimer, which raises a WM_TIMER event at regular intervals. Is there a similar mechanism in a Tk loop? I tried using the following code.

def DAQ(self):
    if self.do_DAQ:
        result = self.myDAQ.getData()
        currTime = time.time() - self.start_time
        self.time_label.config(text="{:.1f} seconds".format(currTime))
        self.volt_label.config(text="{:.4f} volts".format(result))
        self.time_data[self.i] = currTime
        self.volt_data[self.i] = result
        self.i += 1
        self.after(1962, self.DAQ)

The magic "1962" in the after() was determined by trial and error to give about a 2 second delay, but the time slices drift depending on what else is in the queue. Is there a way I can do this so my time slices are more accurate? Specifically, can I force Tk to do my DAQ event before other things in the queue?
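(One common way to reduce this kind of drift is to schedule each after() call against an ideal timeline rather than a fixed delay, so per-tick scheduling errors don't accumulate. A minimal sketch of that idea follows; the helper name and the commented self.DAQ wiring are illustrative, not part of the original code, and the start time must come from the same clock, time.monotonic().)

```python
import time

def delay_to_deadline_ms(start, period, n):
    """Milliseconds from now until the n-th ideal deadline
    (start + n * period), clamped to zero if we are already late.
    `start` must be a time.monotonic() timestamp."""
    due = start + n * period
    return max(0, int(round((due - time.monotonic()) * 1000)))

# Inside the Tk callback you would then schedule against the ideal
# timeline instead of a tuned magic number, e.g. (hypothetical):
#   self.after(delay_to_deadline_ms(self.start_time, 2.0, self.i + 1),
#              self.DAQ)
```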

Carl Houtman
    You might want to amend the title of this--the question doesn't really have to do with data acquisition so much as accurate timing in the Tk event loop. You might want to do your data acquisition in a separate thread at your preferred rate and have your GUI poll a queue for new data every n ticks. – Iguananaut Dec 17 '12 at 23:37

2 Answers


I actually do NIDAQmx with Python using PyDAQmx. We take data at 20 kHz by setting the sample clock on the NI board and streaming the data to a file in chunks of 2000 samples at 10 Hz.

I would highly recommend separating your GUI process from your data acquisition process if temporal precision is important.

If you just want to log the data every 2 seconds, you could set the sample clock on your NIDAQ to something like 1000 Hz with a buffer size of 1000, and use an AutoRegisterEveryNSamplesEvent callback to write the last index of data from every other buffer (which would be every two seconds) to a file, or pipe it to your GUI process. This ensures that the processing queue for your GUI won't affect the precision with which your data is sampled.
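(To illustrate just the decimation logic described above in plain Python: this is not actual PyDAQmx code, the hardware callback is stubbed out, and the class name is invented. Keeping the last sample of every other 1000-sample buffer yields one logged value per two seconds.)

```python
SAMPLE_RATE = 1000   # Hz, assumed NI sample clock
BUFFER_SIZE = 1000   # samples delivered per every-N-samples callback

class EveryOtherBufferLogger:
    """Stand-in for the body of an every-N-samples callback:
    log the last sample of every second buffer (one value per 2 s)."""
    def __init__(self):
        self.buffer_count = 0
        self.logged = []

    def on_buffer(self, samples):
        # In real code PyDAQmx would invoke this via
        # AutoRegisterEveryNSamplesEvent; here we call it by hand.
        self.buffer_count += 1
        if self.buffer_count % 2 == 0:
            self.logged.append(samples[-1])

logger = EveryOtherBufferLogger()
for i in range(4):   # simulate four one-second buffers
    logger.on_buffer(list(range(i * BUFFER_SIZE, (i + 1) * BUFFER_SIZE)))
# logger.logged now holds one value per two simulated seconds
```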

derricw

Here's a sort of quickie example of what I'm talking about in my comment:

import tkinter as tk   # "Tkinter" on Python 2, where this answer was written
import threading
import random
import time
from queue import Queue, Empty   # "Queue" on Python 2

root = tk.Tk()
time_label = tk.Label(root, text='<unknown> seconds')
volt_label = tk.Label(root, text='<unknown> volts')
time_label.pack()
volt_label.pack()

def DAQ(q):
    while True:
        q.put((time.time(), random.randrange(100)))
        time.sleep(2)

def update_data(queue, root):
    try:
        timestamp, volts = queue.get_nowait()
    except Empty:
        pass
    else:
        time_label.config(text='{:.1f} seconds'.format(timestamp))
        volt_label.config(text='{:.4f} volts'.format(volts))
    root.after(100, update_data, queue, root)

data_queue = Queue()
t = threading.Thread(target=DAQ, args=(data_queue,))
t.daemon = True
t.start()
update_data(data_queue, root)
root.mainloop()

Obviously the above DAQ() function is just a stand-in for the real thing. The point is, as @ballsdotballs suggested in their answer, you can sample at whatever rate you want in your DAQ thread, add the values to a queue, and then update the GUI at a more appropriate rate.

Iguananaut
  • Thanks to both you and @ballsdotballs. I will make a step into multithreading when I get back in the office. Sorry to be so dense, but what does t.daemon = True do? – Carl Houtman Dec 18 '12 at 03:47
  • @CarlHoutman the `daemon` flag just means that the program should exit when only non-main threads are remaining. This is just to ensure that it doesn't hang waiting for the endless while loop in the thread to finish. Depending on how your code is implemented it may or may not be necessary. – Iguananaut Dec 18 '12 at 16:10
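(A minimal illustration of what the daemon flag does in practice, with no Tk involved; the endless() function stands in for the answer's DAQ loop.)

```python
import threading
import time

def endless():
    while True:          # never returns, like the DAQ loop above
        time.sleep(0.1)

t = threading.Thread(target=endless)
t.daemon = True          # don't let this thread keep the process alive
t.start()

# When the main thread falls off the end here, the interpreter exits
# immediately; without daemon=True it would hang forever waiting for
# endless() to return.
```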