Welcome to the second part of Meant to be Seen’s coverage of GDC 2012! Special thanks to MTBS’ Field Writer Kris Roberts. Not only is Kris formerly a Senior Game Designer at Rockstar Games, he is an avid stereoscopic 3D gamer in his own right.
The Cameras of Uncharted 3
When I was working on the Midnight Club games I ended up being the designer responsible for setting up and adjusting all the gameplay cameras, so I was interested in this talk to begin with, and the fact that Uncharted 3 is also a really spectacular stereoscopic 3D game was a real bonus.
Thursday started with a programming session presented by Travis McIntosh, lead programmer at Naughty Dog. Travis was the lead programmer on all three Uncharted games and was also the point person for the camera system, though he made it clear that many people worked on the cameras and it wasn't as though he did everything himself. The talk opened with an overview of how they approached the camera system: rather than trying to make a few smart cameras, they developed many cameras that each did simple things. They had quite a few camera types and a 'camera stack' system that took care of blending between them and deciding which of the currently available cameras to use at any given point in the game.
There was an emphasis on keeping the camera under player control as much as possible, particularly in combat, while still allowing the designers and artists to adjust the setups to provide visual emphasis or gameplay visibility in a fluid and intuitive way. The camera stack often held a number of potential cameras, and depending on player input, object collision, scene setup and scripting it would pick and transition between them quite intelligently.
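The "many simple cameras plus a stack" idea described above can be sketched roughly as follows. This is an illustrative reconstruction, not Naughty Dog's actual code; the class names, the priority field, and the validity test are all assumptions made for the example.

```python
# Hypothetical sketch of a camera stack: many simple cameras, each with
# its own validity test, and a stack that picks the best valid one and
# blends toward it over time. Names and structure are illustrative only.
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    priority: int   # designer-assigned importance (assumed mechanism)

    def is_valid(self, world) -> bool:
        """Per-camera test: player input, collision, scripting, etc.
        Each camera only has to answer one simple question."""
        return True

class CameraStack:
    def __init__(self, blend_time=0.5):
        self.cameras = []
        self.active = None
        self.blend_t = 1.0          # 1.0 means the blend is complete
        self.blend_time = blend_time

    def push(self, cam):
        self.cameras.append(cam)

    def update(self, world, dt):
        # Pick the highest-priority camera whose conditions currently hold.
        valid = [c for c in self.cameras if c.is_valid(world)]
        best = max(valid, key=lambda c: c.priority) if valid else None
        if best is not self.active:
            self.active = best
            self.blend_t = 0.0      # a switch restarts the blend
        self.blend_t = min(1.0, self.blend_t + dt / self.blend_time)
        return self.active, self.blend_t
```

The point of the design is that no single camera needs to be smart: a scripted camera, a follow camera, and a combat camera can each stay trivial, and the stack handles selection and smooth transitions between them.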
The 3D specific content in the talk was the last point of his discussion, and it was clear that a great deal of effort and care was invested in getting the stereo presentation looking as good as possible. They took a dual rendering approach which was made easier by the fact that the game already needed to support split screen. To get and maintain a fast enough frame rate, there were art optimizations for geometry, particles and effects that also needed to be made for the stereo version, and this took quite a bit of work.
One particularly interesting choice was to toe in the cameras rather than maintain a parallel projection. Another big decision was not to have any out-of-screen effects. They chose to keep Drake (the main character) at the zero plane at all times, which meant checking and adjusting the zero plane distance every frame to guard against an object suddenly appearing between the camera and Drake. When they started working on stereo 3D support, the intention was to give the artists and designers control over the parameters, and they expected per-shot and per-camera adjustments. Eventually they found that too much variation and too many dynamic changes were distracting and confusing to the player, so they pulled back and applied general settings that the user could slightly adjust in the game's options; these adjustments affect the entire game.
In summary, here are the stereoscopic 3D Do’s and Don’ts that were presented at the session:
Don't:
Change the distortion.
Give the animators control over 3D settings per shot.
Mix heavy FOV/depth-of-field effects and 3D together.
Allow items to jut out from the screen.
Use a lot of high-contrast scenery.
Do:
Modulate the zero plane with the player distance.
Put in safeguards to prevent wild distortions.
Use the z-buffer to adjust the zero plane distance.
MTBS NOTE: While these were artistic choices made by this particular game developer, many gamers want the flexibility to have combined depth and pop-out effects, and there are visual trade-offs with toeing in the cameras versus parallel camera rendering. We recommend that game developers experiment according to what would work best for their particular game(s), and not limit themselves to any one artistic guideline.