The Rest of SIGGRAPH 2013


There was a lot to be excited about at this year's SIGGRAPH. When I went to the conference, I knew about the Nvidia Light Field HMD and had a list of exhibitors checked off to go see - but it almost always seems like it's the unexpected things that turn out to be the most interesting!

Eric Mizufuka

"Exceed Your Vision" was the tagline at the Epson booth, where they were showcasing the Moverio BT-100 see-through display alongside a collection of partners who have been developing applications for it.

The HMD itself is clearly an initial offering, and it's encouraging to see companies like Epson bringing a product like this to market. The device has two prism-based transparent display elements that can work together to provide a stereoscopic image. But there is no integrated head tracking, camera or sensor – so getting the image floating in your vision to match up with the real world you are also seeing requires additional equipment.

To demonstrate where things could be going, one of their partners, Meta, combines the display with a 3D sensor for object detection and gesture recognition. The vision for the project is pretty grand, and it's clear there is a long way to go before augmented reality is immersive and intuitive – but it's also pretty amazing to see what they are already doing. The most impressive part of the demo was the system's ability to detect a sheet of paper I was holding in front of me and overlay video content that was scaled, oriented and positioned as if it were on the paper itself. It's not particularly practical to warp a video onto a sheet of paper, but it does show how powerful a system could be if it can analyze the world around us and incorporate real objects into interfaces and display surfaces.
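Pinning video to a tracked, flat sheet of paper comes down to a planar homography: map the video frame's corners onto the detected corners of the page, then warp every pixel through that mapping. Here's a minimal sketch of computing that homography via the direct linear transform; the corner coordinates and function names are my own illustration, not Meta's actual code:

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: find the 3x3 matrix H with dst ~ H @ src
    from four point correspondences. src and dst are 4x2 arrays."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)
    # The homography is the null space of A: the right singular vector
    # belonging to the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply the homography to a 2D point (with homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Map a 640x480 video frame onto a detected paper quadrilateral.
video_corners = np.array([[0, 0], [640, 0], [640, 480], [0, 480]], float)
paper_corners = np.array([[100, 120], [520, 80], [560, 400], [80, 440]], float)
H = homography(video_corners, paper_corners)
```

In a real pipeline, the 3D sensor supplies the paper corners every frame and the warp is applied per pixel (or by texturing a quad), but the geometry is the same as this point-mapping sketch.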

Let me be clear that I want awesome augmented reality. I think most of us do. What I expect someday is Terminator or Iron Man style visual overlays where the computer is constantly scanning and aware of everything I can see. It identifies people and objects that are of interest, looks up all the pertinent data and tells me whatever I might want to know - helping me understand the world around me with superhuman senses. I imagine natural ways of interacting with the system using voice, eye movement, and gestures. All of this needs to happen with little to no latency, and be calibrated to my personal physiology and vision so the computer display meshes seamlessly with the real world.

It goes without saying that what we have today falls short of those expectations, and these AR challenges are hard and numerous. The equipment that Epson has built and the systems that their partners like Meta are developing are the first ones we actually have – and although there is obviously room for improvement in almost every dimension, it's clear to me that with persistence and ingenuity we will actually get there.

Julius Tuomisto – Delicode

The tagline for Delicode is "Shaping the future of natural interaction," but what I think they really have with Z-Vector is a super nifty party toy. The system uses an Oculus VR devkit with a PrimeSense sensor bar strapped on top, plus software Julius has written that processes the depth data of the space around you and renders it with colors and patterns driven by the musical soundtrack you play through it. You can download it for free and use it with or without the headset or sensor bar. It's pretty trippy, and I like it.

Foveated 3D Display
Mark Finch – Microsoft Research

I have always been curious about eye tracking and its implications for user interfaces, input and displays. The project that Mark Finch has been working on at Microsoft Research was really fascinating to see. Their system concentrates the rendering quality of a 3D display right in the region where your vision is sharpest: the fovea, a remarkably small area compared to the overall field of view. The fovea is often described as being about the size of your thumbnail when your arm is fully stretched out in front of you.
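As a sanity check on that thumbnail description, here's the quick arithmetic, assuming a thumbnail about 1.5 cm wide held at roughly 65 cm (both numbers are my rough assumptions):

```python
import math

thumbnail_width_m = 0.015   # assumed ~1.5 cm wide
arm_length_m = 0.65         # assumed ~65 cm from eye to thumb

# Visual angle subtended by the thumbnail at arm's length
angle_deg = math.degrees(2 * math.atan(thumbnail_width_m / (2 * arm_length_m)))
print(f"{angle_deg:.1f} degrees")   # roughly 1.3 degrees
```

That lands right around the one-to-two degrees commonly cited for the fovea's region of sharpest vision, which is why the thumbnail comparison works so well.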

Their demonstration had a standard PC connected to nine 1920x1200 displays and an off-the-shelf eye tracking device. The software they have developed uses the information about where the user is looking to dynamically change the area of the scene that is rendered at the highest quality.

One thing that was really compelling about the demonstration system was watching other people using it. It was obvious where the system thought they were looking – the clear, high-resolution area moved around the screen, and the contrast with the rest of the display was easy to see. But when I sat down and had it working for me, it was shocking how I could not tell it was working that way! Wherever I looked was indeed sharp, and the rest of the image did not appear to be lacking in visual quality. The clear advantage is that by spending full rendering quality only on the area the user actually sees sharply, the system can drive the overall 5760x3600 image at a higher frame rate than if it tried to produce the same quality over the entire display.
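In spirit, the technique amounts to choosing a rendering quality per region based on its angular distance (eccentricity) from the gaze point. The layer boundaries and sampling rates below are illustrative guesses of mine, not the values Microsoft's system uses:

```python
import math

# Hypothetical nested eccentricity layers: (max eccentricity in degrees,
# relative resolution scale). Illustrative numbers only.
LAYERS = [(5.0, 1.0), (15.0, 0.5), (180.0, 0.25)]

def eccentricity_deg(px, gaze_px, px_per_degree):
    """Angular distance of a pixel from the gaze point, assuming an
    approximately constant pixels-per-degree across the display."""
    dx = px[0] - gaze_px[0]
    dy = px[1] - gaze_px[1]
    return math.hypot(dx, dy) / px_per_degree

def resolution_scale(px, gaze_px, px_per_degree=30.0):
    """Pick the rendering-quality scale for a pixel given the gaze point."""
    ecc = eccentricity_deg(px, gaze_px, px_per_degree)
    for max_ecc, scale in LAYERS:
        if ecc <= max_ecc:
            return scale
    return LAYERS[-1][1]
```

With layers like these, the full-quality region covers only a small disc of the 5760x3600 canvas, which is where the frame-rate headroom comes from; a real renderer would also blend the layer boundaries so the transitions stay invisible, as they clearly were in the demo.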

Autostereoscopic Projector Array Optimized for 3D Facial Display
XueMing Yu – USC Institute for Creative Technologies and Activision

I'm usually a little skeptical of systems that promise holographic autostereoscopic displays, particularly ones that say they support multiple viewers - but the projector array system on display by XueMing Yu and his colleagues from USC does look very good.

Their demonstration system uses 72 pico projectors arranged along a parabola, all shining on a vertically anisotropic lenticular screen. Viewers are identified and their positions tracked with a Microsoft Kinect, and the system warps its multiperspective rendering according to which viewer will see each column of projected pixels.
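A much-simplified sketch of that per-column assignment, ignoring the parabolic mount and the lenticular optics entirely: treat each screen column as sending light out in one horizontal direction, and render that column from the perspective of the tracked viewer whose direction is angularly closest. The names and geometry here are my own simplification, not the actual USC pipeline:

```python
import math

def assign_viewers(column_angles_deg, viewers):
    """For each column's outgoing-light direction (degrees from screen
    normal), pick the tracked viewer whose direction from the screen is
    angularly closest; that column gets rendered from their viewpoint.
    viewers: dict of name -> (x, z) position in metres, screen at origin."""
    viewer_angles = {
        name: math.degrees(math.atan2(x, z)) for name, (x, z) in viewers.items()
    }
    assignment = []
    for a in column_angles_deg:
        best = min(viewer_angles, key=lambda n: abs(viewer_angles[n] - a))
        assignment.append(best)
    return assignment

# Two tracked viewers standing left and right of the display
viewers = {"alice": (-0.5, 1.0), "bob": (0.6, 1.2)}
print(assign_viewers([-30.0, -10.0, 20.0, 30.0], viewers))
```

The real system has to account for the reflection geometry of the parabola and interpolate viewpoints smoothly across columns, but this is the core idea: every horizontal viewing direction carries an image computed for whoever is actually standing there.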

The actual display area is fairly small, but it surprised me how well it produced the illusion of there being a real object – especially as the viewer moves around to look at it from various angles.

Studio3D Scanner
Joey Hudy – Arizona State University

The Studio3D scanner can make a 3D scan of a person or large object in one minute by rotating the subject on a platform while a Microsoft Kinect sensor moves along a half-circle track. People who were willing to wait and pose for the system were rewarded with a 3D model of themselves.
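Conceptually, fusing turntable captures like these is straightforward: if you know the platform angle at each capture time, rotating each point cloud back by that angle puts everything into one common subject frame (real scanners then refine the alignment with registration such as ICP). A minimal sketch, with my own assumed convention that the platform spins about the vertical Y axis:

```python
import numpy as np

def rot_y(angle_deg):
    """Rotation matrix about the vertical (Y) axis."""
    t = np.radians(angle_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def fuse_turntable_scans(scans):
    """Merge point clouds captured while the subject rotates.
    scans: list of (platform_angle_deg, Nx3 point array in sensor frame).
    Rotating each cloud back by -angle undoes the turntable motion."""
    fused = [pts @ rot_y(-angle).T for angle, pts in scans]
    return np.vstack(fused)
```

Moving the sensor along the half-circle track adds a second, known transform per capture, but it composes with the platform rotation in exactly the same way.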

Joey says he made the scanner for a final project in Design Technology at Arizona State University's "Herberger Academy" (a program for advanced high school students in STEM). He is 16 years old.

True Color 3D Printing
Gary Fudge - Mcor Technologies

3D printers were very well represented in the exhibit hall. There were more than a few vendors, and the systems have become quite sophisticated over the last couple of years. But one that really stood out to me was from Mcor Technologies – it produces full color 3D objects. I have seen other 3D printers that have several colors to choose from and can combine them in a single model, but this was the best actual full color printer I have seen.

The medium it uses is reams of paper – the same plain paper you use in your printer or copier. The machine prints the image of the cross-section slice that each page will contribute to the final model, using an ink that soaks through the paper. Then it stacks the pages back up and uses Selective Deposition Lamination technology to glue the right parts together and remove the rest. What it produces are full color 3D models that are vibrant and realistic. Gary had a model of part of his face, and when he held it up, the resemblance was striking.
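The core slicing step behind any sheet-at-a-time (or layer-at-a-time) printer, intersecting the model with one horizontal plane per sheet, can be sketched for a single triangle like this. A real slicer chains the resulting segments into closed contours and handles degenerate cases much more carefully; this is just the geometric kernel:

```python
import numpy as np

def slice_triangle(tri, z):
    """Intersect one triangle (3x3 array of XYZ vertices) with the plane
    Z = z. Returns the crossing segment as a 2x3 array, or None if the
    triangle does not cross the plane."""
    pts = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = a[2] - z, b[2] - z
        if da * db < 0:
            # Edge endpoints straddle the plane: interpolate the crossing.
            t = da / (da - db)
            pts.append(a + t * (b - a))
        elif da == 0:
            # Vertex lies exactly on the plane.
            pts.append(a)
    return np.array(pts) if len(pts) == 2 else None

# A triangle rising from z=0 to z=1, sliced halfway up
tri = np.array([[0, 0, 0], [1, 0, 1], [0, 1, 1]], float)
segment = slice_triangle(tri, 0.5)
```

Run over every triangle in the mesh at each sheet's height, the segments outline exactly the cross-section image the printer lays down in ink on that page.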


Overall, SIGGRAPH 2013 was a lot of fun, and I was pleasantly surprised by a lot of what I saw on display in both the main exhibit hall and the Emerging Technologies pavilion. It's obviously an exciting time for immersive technologies, and people are doing some really neat stuff. It will probably be a few years until the conference returns to Southern California, where it is convenient for me to attend, and I can't wait to see what will be showcased by then!