I’ve spent the past day or so messing around with Kinect and OSX, trying to find a nice combination of libraries and drivers which works well – a more difficult task than you’d imagine! Along the way I’ve found that a lot of these libraries have poor or no documentation.
Here I’m sharing a little example of how I got OpenKinect and OpenCV working together in Python. The Python wrapper for OpenKinect returns depth data as a numpy array, which is conveniently the datatype the cv2 module uses for images.
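As a quick sanity check, you can inspect what the wrapper hands back from a Python shell. The shape and dtype noted in the comments below are what the default 11-bit depth format should produce, so treat them as expectations rather than guarantees:

import freenect

depth, timestamp = freenect.sync_get_depth()
print(type(depth))   # should be a numpy.ndarray
print(depth.shape)   # should be (480, 640), one reading per pixel
print(depth.dtype)   # should be uint16, holding 11-bit depth values

The full example looks like this: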
import freenect
import cv2
import numpy as np


def getDepthMap():
    """Grabs a depth map from the Kinect sensor and creates an image from it."""
    depth, timestamp = freenect.sync_get_depth()

    # Clip to 10 bits, then shift down so the values fit into 8 bits,
    # which OpenCV can render as grayscale.
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    depth = depth.astype(np.uint8)

    return depth


while True:
    depth = getDepthMap()

    # The depth map behaves like any grayscale image, e.g. it can be blurred.
    blur = cv2.GaussianBlur(depth, (5, 5), 0)

    cv2.imshow('image', blur)
    cv2.waitKey(10)
Here the getDepthMap function takes the depth map from the Kinect sensor, clips the array so that the maximum depth value is 1023 (effectively removing distant objects and noise) and converts it into an 8-bit array (which OpenCV can render as grayscale). The array returned from getDepthMap can be used like a grayscale OpenCV image – to demonstrate, I apply a Gaussian blur. Finally, imshow renders the image in a window and waitKey is there to make sure image updates actually show.
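Since the array really does behave like a grayscale image, other OpenCV operations drop straight in. As a rough sketch (the cutoff of 150 is an arbitrary value picked for illustration, not something tuned for a real scene), a simple binary threshold picks out everything nearer than a given depth:

import freenect
import cv2
import numpy as np


def getDepthMap():
    depth, timestamp = freenect.sync_get_depth()
    np.clip(depth, 0, 2**10 - 1, depth)
    depth >>= 2
    return depth.astype(np.uint8)


while True:
    depth = getDepthMap()

    # Smaller raw values are nearer the sensor, so THRESH_BINARY_INV turns
    # everything closer than the cutoff white and the rest black.
    # 150 is an arbitrary cutoff for illustration - tune it to your scene.
    _, mask = cv2.threshold(depth, 150, 255, cv2.THRESH_BINARY_INV)

    cv2.imshow('near objects', mask)
    cv2.waitKey(10)

The resulting mask could then feed into something like contour detection to track nearby hands or objects.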
This is by no means a comprehensive guide to using freenect and OpenCV together but hopefully it’s useful to someone as a starting point!