Date: Sat, 6 Apr 2013 14:09:05 +0100
From: Martin Ling <martin@earth.li>
Reply-To: discuss@edinburghhacklab.com
To: "discuss@edinburghhacklab.com" <discuss@edinburghhacklab.com>
Subject: [hacklab-discuss] Notes on the state of 3D scanning with a Kinect

Hi all,

Wren & I have just spent a while trying to scan ourselves with the lab's
Kinect, with the aim of getting models that would be useful for sizing &
designing clothes. For the benefit of anyone else trying to do something
similar, here are some notes on how it works and what the state of
available software is.

Hardware:

- The lab's Kinect is the version designed for the Xbox 360, and came
  with a weird Xbox-specific connector that carries 12V + USB. I have
  now modded this to have a standard USB connector and a 12V mains
  supply, so you can plug it into a PC. You can still use it with an
  Xbox 360 if you want to.

- The Kinect lives in the lab store room.

Drivers / SDK:

- The official software is Microsoft's Kinect for Windows SDK, which you
  can get from:
  http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx

  Note that although this SDK is marketed as being for the "Kinect for
  Windows" hardware, it works fine with the Xbox Kinects. The only
  apparent difference in the more expensive "Kinect for Windows" model
  is firmware that enables "near mode" operation down to ~40cm, rather
  than the ~80cm minimum range of the original models.
  
  As of version 1.7 released two weeks ago, the SDK includes the Kinect
  Fusion code which was developed at Microsoft Research. This can build
  3D models on the fly from the Kinect data stream, but you need a high
  spec GPU to do the processing. Unsurprisingly I couldn't run this on
  my laptop. Perhaps a GPU with this sort of capability would be a good
  addition to a lab workstation.

- The open source alternative stack is OpenNI: http://www.openni.org/
  This is a framework for working with all sorts of Kinect-like devices.
  There is a Kinect driver for it that works with Windows, Mac and Linux:
  https://github.com/avin2/SensorKinect

- You can have both drivers installed at the same time on Windows, but
  you need to select which is used for the Kinect through Device Manager
  before starting software that depends on one driver or the other.

Scanning Software:

- The best thing seems to be ReconstructMe: http://reconstructme.net/
  This runs on Windows and works with both OpenNI and the Kinect for
  Windows drivers.

  The GUI version is very simple - one button to record and
  reconstruct a mesh on the fly - but you'd need a fast GPU for this
  to work. Also, they charge for the non-crippled version.

  The console version (not crippled, but for non-commercial use only)
  allows recording and offline processing. It also has a stitcher that
  will join up multiple scans, although this doesn't do fusion between
  them, just simple stitching.

  There are essentially no options or manual adjustments in this
  software - it's just data in, model out.

  Without a GPU the processing is very, very slow, but it works well.
  You have to make sure to move the sensor very slowly.

  It doesn't output coloured models, but it does go all the way to a
  mesh, rather than just a point cloud. It outputs .obj files. If you
  need to post-process or convert these to something else, I recommend
  MeshLab: http://meshlab.sourceforge.net/
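
  If you want to poke at the .obj output yourself before opening it in
  MeshLab, the format is plain text and trivial to parse. Here's a
  minimal sketch: read vertices and faces, and report the model's
  bounding box - a useful sanity check on scan scale before taking any
  measurements for clothes sizing. (This assumes a simple triangulated
  .obj like ReconstructMe emits; the record syntax is standard
  Wavefront.)

```python
def read_obj(path):
    """Read a Wavefront .obj file; returns (vertices, faces)."""
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == 'v':
                # vertex position: "v x y z"
                verts.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == 'f':
                # face entries may be "v", "v/vt" or "v/vt/vn";
                # keep only the vertex index (1-based in .obj)
                faces.append(tuple(int(p.split('/')[0]) - 1
                                   for p in parts[1:]))
    return verts, faces

def bounding_box(verts):
    """Axis-aligned bounding box: ((minx,miny,minz), (maxx,maxy,maxz))."""
    xs, ys, zs = zip(*verts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
```

  If the bounding box of a scan of a person isn't roughly human-sized
  (in metres), something has gone wrong with the scale.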

- The other software we tried was SCENECT:
  http://3d-app-center.faro.com/index.php/stand-alone-apps-faro-scenect
  It runs on Windows and uses OpenNI and the driver from the link above,
  which it installs for you.

  This is a modified version of the SCENE LT laser scanning software
  which uses the Kinect as a data source instead of a laser scanner.
  So it already has lots of features for visualising, processing and
  exporting the data.

  It does on-the-fly point cloud reconstruction, although we found it
  easier to just record the data stream and process it offline.
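
  For anyone curious what "point cloud reconstruction" actually
  involves: each depth pixel (u, v) with depth z gets back-projected
  through a pinhole camera model into a 3D point. A sketch of that
  step, using ballpark (uncalibrated) intrinsics for the Kinect's
  640x480 depth camera - treat the constants as assumptions:

```python
import numpy as np

# Rough intrinsics for the Kinect depth camera (not calibrated values)
FX = FY = 575.0        # approximate focal length, in pixels
CX, CY = 319.5, 239.5  # approximate principal point

def depth_to_points(depth):
    """depth: (H, W) array of metres; returns (N, 3) points, zeros skipped."""
    v, u = np.indices(depth.shape)   # pixel row/column grids
    z = depth
    valid = z > 0                    # depth 0 means "no reading"
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

  The frame-to-frame tracking is then a matter of aligning successive
  clouds like this one against each other.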

  This almost worked for us. The problem is that the tracking between
  frames often jumps, leaving mismatches in the data. To work around
  this we ran multiple shorter, slow-moving scans from
  different angles, and tried to match them up afterwards. The software
  has an automatic matching feature for this, but it didn't like our data.
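
  The core operation in matching two scans is finding the rotation R
  and translation t that best align corresponding points - the
  Kabsch/Procrustes solution. This is just an illustration of the
  idea, not SCENECT's actual algorithm; real matchers also have to
  *find* the correspondences (e.g. via ICP), which is the part that
  was failing on our data.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst, both (N, 3)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) sneaking in
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```

  Given perfect correspondences this recovers the transform exactly;
  the jumps we saw suggest SCENECT's correspondences were going wrong,
  not this alignment step.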

  It's possible that with more effort we could get this to work. It
  runs faster and offers a lot more control than ReconstructMe, and also
  outputs colour data.

Hope this is useful to someone!


Martin