I recently used the string-format depth data from a Kinect in PyOpenNI with OpenCV. Use NumPy arrays, which can be created directly from strings and are the default image type in cv2 (OpenCV's Python bindings).
Code example here: http://euanfreeman.co.uk/pyopenni-and-opencv/
I'm not sure how the Kinect differs from your depth sensor, but this may be a useful starting point. Good luck!
Edit: added code
from openni import *
import numpy as np
import cv2
# Initialise OpenNI
context = Context()
context.init()
# Create a depth generator to access the depth stream
depth = DepthGenerator()
depth.create(context)
depth.set_resolution_preset(RES_VGA)
depth.fps = 30
# Start Kinect
context.start_generating_all()
context.wait_any_update_all()
# Create an array from the raw depth map string.
# np.frombuffer is the modern replacement for the deprecated np.fromstring;
# the 8-bit map is 640x480 bytes, so reshape to (rows, cols).
frame = np.frombuffer(depth.get_raw_depth_map_8(), dtype=np.uint8).reshape(480, 640)
# Render in OpenCV (waitKey is required for the window to actually draw)
cv2.imshow("image", frame)
cv2.waitKey(0)
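If you don't have a sensor attached, you can sandbox just the string-to-array conversion with synthetic bytes. This is a sketch assuming the same 640x480 8-bit layout that the VGA preset produces; the fake buffer stands in for what the depth generator would return:

```python
import numpy as np

# Simulate a raw 8-bit depth frame: 640 * 480 pixels packed as a byte
# string, the same layout as the raw depth map string in the code above.
raw = bytes(range(256)) * (640 * 480 // 256)

# frombuffer creates the array without copying; reshape to (rows, cols)
# so OpenCV interprets it as a 480-row, 640-column grayscale image.
frame = np.frombuffer(raw, dtype=np.uint8).reshape(480, 640)

print(frame.shape)   # (480, 640)
print(frame.dtype)   # uint8
```

The same two lines work for the real sensor string, so you can verify your reshape dimensions before plugging in the hardware.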