SIGGRAPH 2008 - Papers of interest for Machinima creators

Whilst there have been a couple of reports on SIGGRAPH 2008, the premier graphics conference held in LA last month, no one's really talked about the academic papers presented there. Since these tend to represent the cutting edge of graphics technology that we can expect to be using in two to five years' time, I thought it was worth having a quick look over what came out.

I wasn’t at SIGGRAPH, so I’m going from the list of papers maintained here. In addition, whilst I can code a bit, I’m not at the level of these guys, so I may have got completely the wrong end of the stick on something. If you’re interested, check out the linked papers for more detail.

There aren’t a hell of a lot of papers directly applicable to Machinima and real-time techniques this year. By far the most obvious trend was work on markerless performance capture - in other words, motion capture without the ping-pong balls. Perhaps the most impressive demonstration in this field was Performance Capture from Sparse Multi-view Video - it’s worth watching the attached video, which shows an 8-camera setup completely recreating an actor and his movements as an animated mesh, clothing and all, without markers, lasers, or anything else. Articulated Mesh Animation from Multi-view Silhouettes appears to do much the same thing, but working from a base reference mesh. Markerless Garment Capture does the same thing again, but just with clothes.
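If you’re wondering what goes on under the hood, the common first step in nearly all of these systems is pulling a clean silhouette of the actor out of each camera view. Here’s a deliberately minimal Python sketch of that step - plain background subtraction, with a made-up function name and threshold of my own; the papers themselves use far more sophisticated segmentation:

```python
import numpy as np

def extract_silhouette(frame, background, threshold=30):
    """Crude silhouette via background subtraction (illustrative only).

    Real multi-view capture pipelines start by isolating the actor in
    each camera view; this is the simplest possible version of that
    step, not the method any of the papers above actually use.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    # A pixel counts as foreground if any colour channel differs enough.
    return diff.max(axis=-1) > threshold

# Toy example: a grey "actor" square against a black background.
background = np.zeros((120, 160, 3), dtype=np.uint8)
frame = background.copy()
frame[40:80, 60:100] = 128  # 40x40 block of "actor" pixels
mask = extract_silhouette(frame, background)
print(mask.sum(), "foreground pixels")  # prints: 1600 foreground pixels
```

Do that from eight angles at once and intersect the results in 3D, and you start to see how a full mesh of the actor can be carved out frame by frame.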

All of this, of course, is very exciting. Motion capture from normal cameras - and the first paper listed notes that it uses normal 25 fps cameras - would massively reduce the cost of this technique and open it up to a huge range of applications. Imagine just being able to capture any motion you needed for your movie right in your living room.

There were a couple of papers on crowd techniques - Clone Attack! Perception of Crowd Variety, which dealt with the problem of avoiding obvious cloning in your crowd scenes, and Group Motion Editing, which talked about editing pre-created group paths - changing where the crowd walks in your film.

The former, in particular, is well worth a look for Machinima practitioners. It’s not a programming paper so much as a practical psychology paper, testing various approaches to “de-cloning” your crowds, and many of its techniques are usable straight away (a rough sketch of the basic idea follows below). The latter paper is of less immediate use, but we’ll probably see its research in RTS games before too long.
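To give a flavour of what “de-cloning” means in practice, here’s a toy Python sketch of the core trick the paper examines: give every cloned character its own independent colour per clothing region, rather than one colour for the whole model. The region names and palettes here are my own inventions, purely for illustration:

```python
import random

# Hypothetical clothing regions and palettes, purely for illustration.
PALETTES = {
    "shirt":    [(200, 30, 30), (30, 60, 200), (240, 240, 240), (20, 20, 20)],
    "trousers": [(40, 40, 60), (90, 70, 40), (20, 20, 20)],
    "hair":     [(20, 15, 10), (120, 80, 30), (200, 180, 140)],
}

def de_clone(n_agents, seed=0):
    """Give each cloned agent an independent colour per clothing region."""
    rng = random.Random(seed)
    return [
        {region: rng.choice(colours) for region, colours in PALETTES.items()}
        for _ in range(n_agents)
    ]

# Three "clones" of the same mesh, each dressed differently.
for agent in de_clone(3):
    print(agent)
```

Even this crude per-region variation multiplies the number of distinct-looking characters you can get from a single mesh, which is exactly the sort of cheap win the paper evaluates.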

The remaining individual papers are all interesting. Real-time, All-frequency Shadows in Dynamic Scenes, one of the few specifically real-time-targeted papers, presents some really impressive real-time lighting techniques, which give me a lot of hope for next-generation lighting engines. Worth a look if you want to see the future, or if you’re currently programming a renderer (you know who you are!). Real-time Motion Retargeting to Highly Varied User-Created Morphologies is mostly notable for being written by some of the team behind Spore, but it presents some fascinating, if tricky-to-follow, techniques for animating characters of unknown anatomy. Again, one for the programmers (although the author also links to a much less technical Gamasutra article giving an overview of the process).
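For context on what retargeting normally involves: the textbook baseline is to reuse the joint rotations and simply rescale the root trajectory to the new character’s proportions - which is exactly what breaks down when the anatomy is unknown in advance, and what the Spore paper goes well beyond. A hedged Python sketch of that baseline (the function name and measurements are illustrative, not from the paper):

```python
def retarget_root_motion(root_positions, src_leg_length, tgt_leg_length):
    """Baseline retargeting: reuse joint rotations unchanged and scale
    the root trajectory by the characters' leg-length ratio, so a
    shorter character takes proportionally shorter strides.

    This is the classic starting point only - NOT the Spore paper's
    method, which handles skeletons it has never seen before.
    """
    scale = tgt_leg_length / src_leg_length
    return [(x * scale, y * scale, z * scale) for x, y, z in root_positions]

# A walk captured on a character with 0.9 m legs, applied to one with
# 0.45 m legs: every root displacement is halved.
walk = [(0.0, 0.9, 0.0), (0.3, 0.9, 0.0), (0.6, 0.9, 0.0)]
print(retarget_root_motion(walk, 0.9, 0.45))
```

The hard part the paper tackles is that a user-created Spore creature may have any number of legs, or none at all, so there’s no fixed skeleton to scale against in the first place.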

Hair Photobooth: Geometric and Photometric Acquisition of Real Hairstyles does pretty much what it says on the tin, although my reading is that this stuff won’t be real-time for a long while yet. And Statistical Reconstruction and Animation of Specific Speakers’ Gesturing Styles looks fascinating, but I haven’t been able to get it to load yet!

So, not the most fascinating SIGGRAPH ever from the point of view of the Machinima creator, but there’s still some good stuff in there. In particular, the developments in performance capture are really exciting.