MTBS is pleased to welcome Bertrand Nepveu, CEO of Vrvana to the interview chair. Vrvana, formerly known as True Player Gear, has an active Kickstarter to release a new type of head mounted display that features high resolution panels and a novel implementation of stereoscopic cameras. We’ll let Bertrand do the talking!
MTBS: Hi Bertrand! Welcome to MTBS! How and when did you and your team get interested in VR?
Vrvana: Hi Neil, I've been a hardcore gamer since the ColecoVision, and I've known since the Power Glove that VR was the holy grail of gaming. After the release of the Xbox 360, I decided to create a product for myself since there was nothing immersive on the market back then. We've been on this quest ever since.
MTBS: Vrvana (formerly named True Player Gear) is new to the game. Tell us about your team! What kind of expertise do you have access to?
Vrvana: With the recent hires, we are now eleven people, including eight engineers who come from the computer vision industry.
MTBS: From what I have seen so far, Vrvana is primarily focused on adequate VR hardware at this time. Without revealing the solution, what are the biggest challenges that need to be overcome to make a VR solution successful? Are there key specs you are trying to achieve? Can you describe the level of flexibility or the ultimate vision you’d like to see happen?
Vrvana: First is inside-out positional tracking. The other big challenge is getting access to adequate OLED panels, and this is hurting us right now. We wanted to show an OLED screen on our Kickstarter, but the screens we were supposed to receive in May have been delayed until November. Without low persistence OLED panels, it's hard to compete with the Rift. We hope that people can back us based on what we will ship to backers, but it seems people want to see every feature working before pledging. If we want to ship in April, we can't wait until then; we need to do mass production and R&D in parallel.
MTBS: What is positional tracking, and why is it important for a good VR experience? Why are some of the popular HMD brands using external cameras to make this possible? Is there something that optical tracking brings to the mix that is difficult to achieve otherwise?
Vrvana: Positional tracking is really important if you want presence. It tracks you when you lean forward or move your head sideways. Without that, you will feel discomfort in VR. Other HMDs use external cameras because they take a brute-force approach: they use the CPU/GPU of the host computer to track color or IR LEDs, a method that has been used for quite some time. On our side, we want a more natural approach, a bit like how the brain works: use two high-speed cameras to track objects and work out how much you moved your head relative to them. The onboard cameras also allow you to have pass-through vision. We also want to experiment with augmented reality (AR) based on the feedback of the developers.
MTBS: Let’s talk about the Totem! Can you share some basic specs with our readers? Resolution, field of view, etc.
Vrvana: I already mentioned the cameras. We will use an RGB stripe 1080P low persistence OLED for a reduced screen door effect. Our current prototype has a 90 degree FOV, but we now know we can push this further because of the RGB stripe screen. This is why we have a 105 degree FOV stretch goal! We also have in-house binaural / head-related transfer function (HRTF) sound onboard and independent optical focus for each eye, so that you can remove your glasses while using the Totem (up to -7 diopters). Finally, we have hardware acceleration that does the optical barrel distortion in real time in the headset. This offloads the GPU so that you can have better frame rate and latency, and it also makes us compatible with any 3D side-by-side source.
MTBS: Can you elaborate how the cameras work? Is there a comparable that proves this technology will work well enough for VR without slippage?
Vrvana: We will start by using infrared (IR) fiducial markers so that we can ship a stable dev kit to backers. With two cameras and a known camera field of view (FOV), it's possible to calculate the distance to the fiducial markers and work out how you move your head relative to them. With time, we will improve the algorithms, and we are confident that we can eventually remove those markers. 13th Lab and SoftKinetic are commercial companies that have proved this technology works. Go on Google Scholar and you will see that there are a lot of papers on how to do this with the help of an FPGA.
People are skeptical that we can deliver inside-out positional tracking with sub-millimeter accuracy. They point out that Carmack said it is a hard problem to solve with good precision and without jitter, but he said that while referring to the capabilities of the Samsung Note 4.
On the Totem, we have access to two 1080P RGB/IR (infrared) cameras that run at 120Hz and are connected to a high-speed field-programmable gate array (FPGA) that can process SLAM data 14 times faster than an ARM processor. There are a lot of research papers showing that this is feasible. We even have an engineer who did his PhD on a new object-detection algorithm, which we are evaluating right now. We are also in discussions with 13th Lab about tackling this problem, and they too think that what we are aiming for is possible.
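The two-camera distance calculation Bertrand describes can be sketched with basic stereo triangulation. This is a toy illustration of the principle, not Vrvana's algorithm; the resolution, FOV, and baseline figures below are invented for the example:

```python
import math

def focal_length_px(image_width_px, horizontal_fov_deg):
    """Focal length in pixels, derived from image width and horizontal FOV."""
    return (image_width_px / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)

def marker_depth_m(x_left_px, x_right_px, baseline_m, focal_px):
    """Depth of a fiducial marker via stereo triangulation: Z = f * B / disparity."""
    disparity_px = x_left_px - x_right_px  # larger when the marker is closer
    if disparity_px <= 0:
        raise ValueError("marker must project further right in the left image")
    return focal_px * baseline_m / disparity_px

# Example: 1920 px wide images, 90 degree FOV, 6 cm camera baseline,
# marker seen at column 1010 in the left image and 970 in the right.
f = focal_length_px(1920, 90.0)             # ~960 px
depth = marker_depth_m(1010, 970, 0.06, f)  # ~1.44 m
```

Tracking the same markers over successive frames then tells you how the head moved relative to them, which is the inside-out part of the scheme.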
MTBS: Oculus has been promoting an ultimate goal of 20ms or less in total latency. Is this a reasonable metric to go by? Given the same software and hardware advantages, is Totem’s camera mechanism fast enough to compete at this level?
Vrvana: With the new NVIDIA GPUs, the 20ms goal is getting closer and closer! This is also our goal. We use 120Hz cameras, but these are mostly used for drift compensation: we use the IMU in conjunction with the cameras so that we can keep latency to a minimum.
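The IMU-plus-camera scheme can be illustrated with a simple one-dimensional complementary-style filter: integrate the fast IMU every step for low latency, and nudge the estimate toward the slower camera fix to cancel drift. All the numbers below (rates, bias, gain) are invented for illustration, not Totem's actual filter:

```python
def fuse_step(position, velocity, dt, camera_position=None, gain=0.2):
    """One tracking step: dead-reckon from the IMU, then blend in the
    camera measurement when a frame is available (drift correction)."""
    position += velocity * dt            # IMU integration, runs every step
    if camera_position is not None:      # camera frames arrive less often
        position += gain * (camera_position - position)
    return position

# 1000 Hz IMU with a constant 1 cm/s bias; the camera (here every 8th
# step) always measures the true position, 0.0.
pos, dt = 0.0, 0.001
for step in range(1000):                 # simulate one second of tracking
    cam = 0.0 if step % 8 == 0 else None
    pos = fuse_step(pos, velocity=0.01, dt=dt, camera_position=cam)
# Pure integration would have drifted a full centimetre; the fused
# estimate stays under a millimetre.
```

The reported pose is always the freshly integrated IMU value, so latency stays at IMU rates while the camera keeps the estimate honest.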
MTBS: I know you’ve hinted at augmented reality support with Totem. How are the stereoscopic 3D cameras advantageous for this? Do you expect the latency to be acceptable? Is AR a big priority for you?
Vrvana: AR is another ball game. Our advantage is that the cameras are connected directly to the FPGA, where we can do object recognition and send the results back to the host, where the developer can use that information to do an overlay. But to be frank, it is way too soon to talk about latency with augmented reality. VR is our focus right now. Once that part is nailed, we will put our efforts into AR.
MTBS: One of the biggest challenges with using mobile display panels is the screen door effect, or seeing black gaps between the pixels. Do you have ideas to get around this problem? Do you expect this to be a thing of the past?
Vrvana: There are ways to get around this with optics. This is what I suspect was done with (Oculus VR’s) Crescent Bay. Maybe you should confirm this with Oculus 😉 The screen door effect will tend to disappear with higher resolution 1440P RGB stripe screens. I can’t wait for 4K flexible stripe screens with large FOV optics!!!
MTBS: Is your panel a low persistence display? Is this an important feature for you? Why or why not?
Vrvana: Low persistence is crucial if you want a good quality image when you move your head in VR. Our screens are low persistence.
MTBS: I understand that the Totem features custom focus for each eye. I’m really excited about this because I’ve only seen one other HMD maker that lets you focus each eye independently. Are there tradeoffs with this? Does added or custom focus impact the HMD’s field of view at all?
Vrvana: Life is about compromise. Adjusting the focus brings the lens closer to the screen, so you see fewer pixels. We also need to adjust the chromatic aberration and barrel distortion corrections based on the focus and eye relief.
MTBS: Tell us about your SDK. Once the Totem developer kit is available, which game engines will it be compatible with?
Vrvana: We will start with Unity and Unreal Engine 4. We will add CryEngine and Havok Vision after that.
MTBS: I understand that the Totem is not limited to VR content. Which formats and content choices do you see available for the Totem once released? What formats and connectivity does it support?
Vrvana: We support any platform that can send a 3D side-by-side format over HDMI.
MTBS: I was intrigued to learn that the Totem is compatible with existing Oculus DK1 content. How is this possible? What about DK2 support?
Vrvana: Our firmware is emulating a DK1 for now. That way, you can enjoy a lot of content. DK2 emulation is on the roadmap.
MTBS: Is it legal for Totem to interact with the Oculus SDK in this manner? Is this part of your long term vision?
Vrvana: Since we do not distribute or modify the SDK, it is legal. Emulation is legal, modifying the OVR SDK is not. Our long term vision is to be part of an ITA3D (Immersive Technology Alliance) SDK standard where everyone can contribute. Power to the people!
MTBS: Does using the Oculus SDK undermine the Totem’s visual quality at all? For example, is Oculus’ warp effect perfectly compatible with the Totem’s lens choice?
Vrvana: This is something we are tweaking right now. There are K coefficients in the firmware that adjust the distortion based on the values we set.
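For context, "K variables" usually refers to the coefficients of a polynomial radial distortion model, the kind of warp the Oculus SDK applies to pre-compensate the lens. A toy sketch of that model, with invented coefficients rather than Totem's tuned values:

```python
def warp(x, y, k1=0.22, k2=0.24):
    """Map an undistorted point (centred, normalised coordinates) to its
    barrel-distorted position: r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Points further from the lens centre are pushed outward more, which
# pre-compensates the pincushion distortion the optics introduce.
centre = warp(0.0, 0.0)   # stays at (0.0, 0.0)
edge = warp(0.5, 0.0)     # pushed outward past x = 0.5
```

Tuning the firmware for a different lens then amounts to measuring that lens's distortion and loading the matching coefficients.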
MTBS: There is a mixed message in the VR industry right now. Oculus VR has been pushing a seated experience, but their flagship Crescent Bay demo which has been earning them a lot of media hoopla was designed to be standing / walking. What’s your vision for Totem? Standing or seated? Why?
Vrvana: This is more of a legal issue, so people don't hurt themselves. A seated experience is what the lawyers recommend 😉 But we all know that we want a VR experience with an Omni or Virtualizer!
I think presence is possible when seated, but you need a convincing seated experience like a driving or flight simulator. A standing experience is much more versatile. People have home movie theater rooms right now, but there will be VR rooms in the basements of houses real soon!
MTBS: I know I’m preaching to the choir here: why’d you join The Immersive Technology Alliance? Why was this important for Vrvana?
Vrvana: If we want to make VR mainstream, there are still a lot of hard problems to solve. A key purpose of the ITA is to foster collaboration in the VR industry so that we have the best minds working on solving problems for everyone. It is also a great forum for contributing to an open VR standard so VR can flourish. We don't want developers to have to code for every HMD or device out there. I think the ITA is the right vehicle to create the equivalent of what Android did for the mobile industry.
MTBS: You recently launched a Kickstarter looking to raise $350,000 US. You seem to have a functioning business with over ten people working for you, you’ve got an active prototype, your team clearly has a vision and is committed to VR. Why is this Kickstarter important for you and VR users in general? What do you want to accomplish from this that would be difficult otherwise? For enthusiasts tracking this space, do you think they would get a new enjoyment from your product at these early stages by supporting your Kickstarter? How so?
Vrvana: Kickstarter is a way for us to mass produce our dev kits and make them available to developers and VR enthusiasts. We can't do this without the support of the backers. Competition is good and benefits the end users. We understand that people would have loved a prototype with all the features working, but at the same time, this is why we are on Kickstarter. We have a great team of engineers with a lot of experience, and by backing us on Kickstarter, you will receive an awesome HMD, an SDK, and a truly unique product that will work with all the current VR content. Please go to Kickstarter and pledge!
Great stuff! Good luck with this!