
Possible Duplicate:
Return value from thread

I want to get the "free memory" of a bunch of servers like this:

import os

def get_mem(servername):
    # Run grep/sed over ssh on the remote host and return the digits only
    res = os.popen('ssh %s "grep MemFree /proc/meminfo | sed \'s/[^0-9]//g\'"' % servername)
    return res.read().strip()

Since this can be parallelised, I want to do something like this:

import threading  
thread1 = threading.Thread(target=get_mem, args=("server01", ))  
thread1.start()

But now: how can I access the return value(s) of the get_mem functions? Do I really need to go the full-fledged way of creating a class MemThread(threading.Thread) and overriding __init__ and run?

hansaplast
  • I realize this is long after the question was asked, but I came up with a fairly simple closure function in a `threading.Thread` subclass to save the result of your thread. Answering the question is closed on this post now so I can't answer the question here as well, but see https://stackoverflow.com/a/65447493 for a quick explanation! – slow-but-steady Mar 22 '21 at 03:39

2 Answers


You could create a synchronised queue, pass it to the thread function and have it report back by pushing the result into the queue, e.g.:

import os
import queue
import threading

def get_mem(servername, q):
    res = os.popen('ssh %s "grep MemFree /proc/meminfo | sed \'s/[^0-9]//g\'"' % servername)
    # Report the result back by pushing it onto the queue
    q.put(res.read().strip())

# ...

q = queue.Queue()
threading.Thread(target=get_mem, args=("server01", q)).start()
result = q.get()  # blocks until the worker puts a result
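The same pattern extends to several servers: start one thread per host and drain the queue afterwards. A minimal sketch; the stand-in `get_mem` below replaces the real ssh call and tags each result with the server name so results can be matched up regardless of completion order:

```python
import queue
import threading

def get_mem(servername, q):
    # Stand-in for the real ssh call; a real worker would put
    # (servername, free_memory) here.
    q.put((servername, "12345"))

servers = ["server01", "server02", "server03"]
q = queue.Queue()
threads = [threading.Thread(target=get_mem, args=(s, q)) for s in servers]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait until every worker has pushed its result

# one queue entry per server, so exactly len(servers) gets are safe
results = dict(q.get() for _ in servers)
```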
Marcelo Cantos

For the record, this is what I finally came up with (adapted from the multiprocessing examples):

from multiprocessing import Process, Queue

def execute_parallel(hostnames, command, max_processes=None):
    """
    run the command in parallel on the specified hosts; returns the output of the commands as a dict

    >>> execute_parallel(['host01', 'host02'], 'hostname')
    {'host01': 'host01', 'host02': 'host02'}
    """
    NUMBER_OF_PROCESSES = max_processes if max_processes else len(hostnames)

    def worker(jobs, results):
        # consume jobs until the 'STOP' sentinel arrives;
        # execute_host_return_output() is defined elsewhere
        for hostname, command in iter(jobs.get, 'STOP'):
            results.put((hostname, execute_host_return_output(hostname, command)))

    job_queue = Queue()
    result_queue = Queue()

    for hostname in hostnames:
        job_queue.put((hostname, command))

    for i in range(NUMBER_OF_PROCESSES):
        Process(target=worker, args=(job_queue, result_queue)).start()

    result = {}
    for i in range(len(hostnames)):
        result.update([result_queue.get()])

    # tell the processes to stop
    for i in range(NUMBER_OF_PROCESSES):
        job_queue.put('STOP')

    return result
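For what it's worth, the standard-library concurrent.futures module (Python 3.2+) handles the queue and worker bookkeeping for you. A sketch of the same idea; the stand-in `get_mem` below is hypothetical and replaces the real ssh helper:

```python
from concurrent.futures import ThreadPoolExecutor

def get_mem(servername):
    # Stand-in for the real `ssh ... grep MemFree` call
    return "12345"

hostnames = ["host01", "host02"]
with ThreadPoolExecutor(max_workers=len(hostnames)) as pool:
    # map() yields results in the same order as the inputs,
    # so zipping with hostnames pairs each host with its output
    result = dict(zip(hostnames, pool.map(get_mem, hostnames)))
```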
hansaplast