
I'm programming a little multi-protocol image streaming server (in Python), and all protocols work well enough, except for the multicast protocol, which drives my CPU usage up to 150%!

Here's the multicast code:

    delay = 1./self.flux.ips
    imgid = 0
    lastSent = 0

    while self.connected:

        #self.printLog("Getting ready to fragment {}".format(imgid))
        fragments = fragmentImage(self.flux.imageFiles[imgid], self.fragmentSize)
        #self.printLog("Fragmented {} ! ".format(imgid))

        # Checking if the delay has passed, to respect the framerate
        while (time.time() - lastSent) < delay:
            pass

        # Sending the fragments
        for fragmentid in range(len(fragments)):
            formatedFragment = formatFragment(fragments[fragmentid], fragmentid*self.fragmentSize, len(self.flux.imageFiles[imgid]), imgid)
            self.sendto(formatedFragment, (self.groupAddress, self.groupPort))

        lastSent = time.time()

        imgid = (imgid + 1) % len(self.flux.imageFiles)

The UDP protocol also sends images as fragments, and I don't have any CPU usage problems there. Note that the client also has some latency when receiving those images.

halflings

1 Answer


Use `time.sleep(delay)` instead of the (heavy) busy waiting and you should be good (see this question: Python: Pass or Sleep for long running processes?).
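A minimal sketch of the change (the `delay` and `lastSent` names come from the question's code; the frame rate here is illustrative):

```python
import time

delay = 1.0 / 25  # illustrative: 25 images per second
lastSent = time.time()

# Busy-wait version (spins the CPU at 100% while waiting):
#   while (time.time() - lastSent) < delay:
#       pass

# Sleep version: the process yields the CPU to the OS scheduler
# for the whole wait instead of spinning
time.sleep(delay)
lastSent = time.time()
```

The busy loop re-checks `time.time()` millions of times per second, which is exactly the CPU spike described; `time.sleep` blocks in the kernel at essentially zero CPU cost.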

For even better performance you should consider an I/O event reactor like PyUV, gevent, tornado or twisted.

schlamar
  • Thank you! I was worried about the CPU usage of an infinite loop that does nothing but ``pass``, but was told it'd be "just fine". Well, it wasn't! FYI, instead of time.sleep(delay), I've done ``time.sleep(delay - (time.time() - lastSent))`` (only if that value is positive), to wait just the difference between the delay and the elapsed time, since fragmenting and sending can be time-consuming. – halflings Dec 12 '12 at 12:02