How to get your bust in good shape
Got a 3D printer – now what?
Feature A 3D printer is a great toy, but only if you have something to print. If you want to address the big question of “yes, but what can you actually do with it?”, then just downloading models isn’t any more personal than buying the finished thing online.
The bust is back in vogue
You need to make your own.
To this end I looked at 3D scanning. This used to be the stuff of mega-expensive custom hardware, but it’s just getting into the hobbyist domain.
One approach is to use 123D Catch, an interesting piece of free software from AutoDesk. It takes 50 or so still frames, sucks them onto the AutoDesk mainframe - er, sorry, private cloud - and stitches them into a 3D model.
AutoDesk's 123D Catch iPhone app is a start but don't expect spectacular results
There is an iPhone app version of 123D Catch too. In theory, you don’t have to use a camera and a PC, but a good lens is always going to help: a DSLR, even with the resolution wound right down, is going to be a better bet. It’s also not as seamless as AutoDesk would like you to think, so you really do need to use a PC. There’s even an online version if you just fancy dabbling from a browser.
One of the important things to realise with 123D Catch is that the software uses the background scene to help with the stitching, so putting someone on a chair and spinning them round isn’t great.
Still life on Thingiverse, the modelling site of MakerBot Industries, manufacturers of 3D printers
Your victim also needs to stay still. It’s telling that, among the huge number of online models produced using 123D Catch, the vast majority of the good ones are of things like statues. There are very few of people, none of which look good enough to print, and there are no models of people generated with 123D Catch on Thingiverse, the general repository for models, either. Yet with a huge amount of perseverance, I managed to capture a face good enough to print.
The waiting game
You start by taking lots of pictures. I’ve found it works better not to drop the resolution, but as everything has to be uploaded this is a balance between quality and patience. My 76 images filled 246MB of disk space, so I went off and did something else while the upload did its stuff.
The initial stitch looks pretty good
You spend a lot of time waiting while using 123D Catch. If you change the model resolution it gets re-stitched and re-downloaded. Most functions need you to log in and there are no cookies, so you'd better not forget the password.
What a mesh: same image shown as a mesh in Netfabb Studio shows how poor the stitching is
The image you get back looks quite good. There are holes where the stitching failed, and odd bits of background, but as the 3D model is mapped with the image files it’s quite convincing. It’s only when you load the mesh into modelling software that you see that some of the joins are really pretty dreadful.
You can help 123D Catch by matching points between images. Typically, you use the tip of the nose and the corners of the eyes and mouth, but points in the hair can be quite hard to match. Even when I’ve been very sure of the matched points, the results have been very poor. I’ve lost whole models, and they are rarely better after being helped.
There's the Catch: you spend a lot of time looking at this screen
When you’ve matched points and uploaded them, it takes about 20 minutes for the new image to be stitched and downloaded. This is not an interactive process. It’s a bit like waiting for code to compile: the first time you are glad of the break, but by the third time it’s getting tiresome, and you might need several goes after that.
It is possible to get 3D models out of 123D Catch
I’ve no doubt that with the right rig to capture the images, and taking pictures of the right kind of static objects, it’s possible to get great results from 123D Catch and you can get a much higher resolution than with the other cheap alternative.
Scanning the area
That alternative is ReconstructMe, which uses a depth sensor. These are usually expensive custom hardware, but Microsoft’s Xbox Kinect has just the hardware needed, mass-produced and subsidised. It works by projecting a grid of quite large squares in infrared onto the scene and then looking at the distortion in the squares. If you point the Kinect sensor at a white object in a dark room and look at it with a mobile phone camera you can see the projected pattern. The resolution isn’t great but it’s workable.
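The geometry behind turning those distorted squares into 3D points can be sketched with the standard pinhole camera model. The focal length and image centre below are ballpark figures for the Kinect’s 640x480 depth camera, not values taken from ReconstructMe or its drivers:

```python
# Sketch: projecting a depth-camera pixel into 3D space with the
# pinhole model. FX/FY/CX/CY are assumed, approximate Kinect
# intrinsics for illustration only.

FX = FY = 594.0         # assumed focal length, in pixels
CX, CY = 320.0, 240.0   # assumed optical centre of the 640x480 image

def depth_to_point(u, v, depth_mm):
    """Map pixel (u, v) with a depth reading in millimetres to (x, y, z) in metres."""
    z = depth_mm / 1000.0       # depth along the camera axis
    x = (u - CX) * z / FX       # offset from optical centre, scaled by depth
    y = (v - CY) * z / FY
    return (x, y, z)

# A pixel at the optical centre sits straight down the camera axis:
print(depth_to_point(320, 240, 800))   # (0.0, 0.0, 0.8)
```

Run over every pixel in a depth frame, this yields the point cloud that scanning software works from.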
The Xbox Kinect does more than you might think
The Kinect has a proprietary connector which plugs into a socket on newer Xboxes and provides power as well as data. For older Xboxes, which only have USB, you need a power supply with a USB pass-through, and this serves our purposes of plugging the sensor into a PC very well. At around £120 for the sensor and power supply you have scanning hardware, and you can pick up second-hand kit for around half that.
The Kinect API gives a point cloud of the image. This is where ReconstructMe comes in: it takes the points and produces 3D meshes dynamically as you move around the target. There are other programs which support the Kinect, but these produce images from static viewpoints and the meshes have to be stitched by hand.
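The difference matters because each static capture reports points in its own camera frame; merging views means recovering a rigid transform (rotation plus translation) for every viewpoint, which is what ReconstructMe’s tracking does continuously as you move. A minimal sketch, with made-up numbers, of the per-view transform that static capture leaves you to work out by hand:

```python
import math

# Sketch: registering two viewpoints of the same physical point.
# The camera positions and coordinates here are invented for
# illustration; they are not ReconstructMe's internal representation.

def transform(points, yaw_rad, tx, tz):
    """Rotate points about the vertical (y) axis, then translate in x/z."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return [(c * x + s * z + tx, y, -s * x + c * z + tz) for x, y, z in points]

# One point as seen from the front camera...
front_view = [(0.0, 0.1, 0.5)]

# ...and the same physical point as a camera 90 degrees around the
# subject would report it, in its own frame.
side_view = transform(front_view, math.radians(90), 0.0, 0.0)

# Applying the inverse transform maps the side view back into the
# front camera's frame, so the two clouds line up for meshing.
merged = transform(side_view, math.radians(-90), 0.0, 0.0)
```

With dozens of viewpoints, estimating all of these transforms manually is the tedium that live tracking removes.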
It’s telling just how hobbyist this area is, given that the slickest, easiest-to-use piece of scanning software runs in a DOS window with command-line switches. There is no simple installer which looks at what your system needs and gets it for you. Instead you need to install the OpenNI drivers first. Then you need to make sure you have the Microsoft C++ runtime – including the x86 version on 64-bit systems – and update your video drivers.
Most of the processing is done on the video card using OpenCL. Just checking under Windows devices to see if you have an up-to-date version isn’t good enough; you need to go to the card manufacturer’s website and force a download of the latest version.
ReconstructMe software has its CLI moments
The ReconstructMe software is incredible but very light on documentation, and getting a good scan is pretty hit-and-miss. The default scan area is a cube of one metre, and the nearest you can get to the object is 40cm – a Kinect limitation. If you use Microsoft’s drivers instead of the OpenNI drivers that becomes 80cm, so you wouldn’t want to do that.
A retractable tape measure set to 40cm is quite useful to have around. For scanning a person you’ll need to walk all the way around them, so make sure the path is clear. Putting the Kinect on a table and spinning the person works, but you then tend to miss the top of the head and under the chin. Unfortunately the room my computer is in has a very low ceiling, and the tops of heads often pose a problem.
The image window shows the view from both the normal and infrared cameras
The install package includes some scripts to run the software from a command line. A separate video window shows both the image from the video camera and the model grabbed from the infrared. All the processing is done in real time; if your video card is fast enough, stitching keeps pace as you move.
Scanning slowly enough, and capturing all the angles without the scanning losing track, is a patience-sapping activity. The software will often pick the tracking back up, but if it fails you have to start again. The subject has to keep still while you are doing it, and my son proved incapable of sitting still holding his guitar without strumming, so a friend sat in.
A rendered image of the scan
Moving the sensor back a tad and waiting seemed to improve the odds of re-acquiring the tracking. It doesn’t seem to be particularly sensitive to lighting, but we kept finding odd patches which didn’t scan. These turned out to be warm spots generated by the halogen lights in the room, which gave the infrared problems.
Pulling the mesh into a modelling program – Rhino 4.0 in this case – shows that it’s much less noisy than the images coming out of 123D Catch, but the resolution isn’t as good.
What these two approaches to 3D scanning show is that there is real progress being made in the emerging world of hobbyist 3D scanning. It’s still very much at the stage of being used by people who understand the technology rather than artists who just want a means to an end. When we see booths in shopping centres offering to turn your family into chess pieces, we’ll know the technology has matured. ®
Simon Rockman, aside from writing about curious tech, is a purveyor of phones for older users. You can read his Fuss Free Phones blog here.