Virtual Reality test chamber (Real, not speculation!)
Posted: Sun Sep 25, 2011 8:32 pm
Hi guys! Man, I am on a threadmaking roll!
Here is the thread that describes what using this system is like: http://www.mtbs3d.com/phpBB/viewtopic.php?f=120&t=13780
I have mentioned that I have a new position working in a military research lab, and I am able to show off some of our stuff. We actually have other motion capture stages/test chambers that are larger, but I am going to just go over what I am familiar with, which are the two stages we have next to my workshop.
Here is Stage 1:
Not a great picture, thanks to the low FOV of the 3DS camera, but it gets the basic idea across. Stage 1 is about 120 feet by 60 feet, pretty large! If you direct your attention to the ceiling, you will note the scaffolding system that runs along the top of the entire stage. Mounted to the scaffolding are 40 motion capture cameras, each of which captures 3600x3600 at 480fps. The layout and overlap of the cameras means that at any given moment, at least 12 cameras are tracking you, which gives you sub-millimeter accuracy in all directions, jitter free. The cameras are used for body, head, and movement tracking across the stage. We can run 1:1 movement mapping, or "enhanced" movement, which takes much less physical effort but gives up a modicum of realism.
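To give you an idea of what "enhanced" movement means in practice, here is a rough sketch: it is essentially a gain applied to your tracked displacement. The function names and the gain value are my own illustration, not our actual code.

```python
# Rough sketch of 1:1 vs "enhanced" movement mapping.
# The gain value and function name are illustrative, not our real code.

def map_movement(tracked_delta, gain=1.0):
    """Scale a tracked position delta (x, y, z) into virtual movement.

    gain == 1.0  -> 1:1 mapping (full realism)
    gain  > 1.0  -> "enhanced" movement: less physical effort,
                    at the cost of a little realism
    """
    return tuple(gain * d for d in tracked_delta)

# One real step forward of 0.5 m...
print(map_movement((0.0, 0.0, 0.5)))            # 1:1 -> 0.5 m in-world
print(map_movement((0.0, 0.0, 0.5), gain=2.5))  # enhanced -> 1.25 m in-world
```

With a gain above 1.0 you cover the virtual space faster than the physical stage, which is how you stretch a 120x60 foot room into something larger.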
Here is the far end of the stage, littered with stuff (Some of which we use as physical props for virtual objects!). Don't worry, it is cleaned up when we run simulations.
Here is a (stunningly bad, curse the 3DS low light performance) picture of the motion capture suits we use:
They use glowing LED trackers attached to each joint. They could be IR in theory, but ours are in the visible spectrum, bright red! I need to get a picture of the suits in operation, but the occasion has not come up. They are not even used most of the time, actually, since it is rare that you need perfect tracking of every limb. We also have some Kinects rigged up on the side of the stage, and have used those for body tracking when dealing with touching physical objects that are mapped to virtual ones.
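For the curious, the physical-prop mapping is conceptually simple: the prop sits at a known tracked position, and once your tracked hand gets close enough, the virtual object follows along. A toy sketch of one tracking frame, with made-up names and thresholds (not our actual code):

```python
import math

# Toy sketch of mapping a physical prop to a virtual object.
# GRAB_RADIUS and all names here are illustrative, not our real system.

GRAB_RADIUS = 0.15  # meters

def distance(a, b):
    """Euclidean distance between two (x, y, z) points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def update_virtual_object(hand_pos, prop_pos, holding):
    """Return (virtual_object_pos, holding) for one tracking frame."""
    if holding or distance(hand_pos, prop_pos) < GRAB_RADIUS:
        return hand_pos, True   # object tracks the hand holding the prop
    return prop_pos, False      # object stays put at the prop's position

# Hand 5 cm from the prop -> close enough, the virtual object latches on.
pos, held = update_virtual_object((0.0, 1.0, 2.0), (0.05, 1.0, 2.0), False)
print(pos, held)
```

The nice part is that your hands feel a real object while your eyes see the virtual one, so the illusion holds up even with coarse Kinect-grade tracking.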
Here is a picture of Stage 2:
Truth be told, I do not do a lot of work in Stage 2. It is a lot smaller, perhaps 30x60 feet, and is used for projection projects. Only 12 cameras, I think.
As far as software goes, we use Unity with a lot of addons we have coded that give us the flexibility we need. Turns out, our entire system works perfectly fine on the free version of Unity! The Pro version can make the graphics look a little nicer, but nothing we do requires it. If you have any specific questions about the engine (Cybereality?) post them here so I can ask the software engineers. If there are people who would love to have a copy of all our addons... Let me know.