MTBS' VR Settings Guide


Differences Between Stereoscopic 3D Displays and Head Mounted Displays

Neil Schneider, CEO of MTBS, in front of 150" Panasonic 3D Ready plasma.
The beauty of a 3D monitor or television is that there is plenty of room to customize the experience. By moving the cameras apart (the "separation") and adjusting the convergence to your liking, you can have a depth experience where you are looking straight into the scene as though through a car windshield, you can have plenty of out-of-screen moments where things seem to reach out and grab you, or you can have a mixture of both. S-3D monitors really are magic computer game windows if you know how to play with the separation and convergence settings for the best 3D effects.
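To make the relationship concrete, here is a minimal Python sketch of how a game might derive its two virtual cameras from the separation and convergence settings described above. It assumes a parallel-axis rig with shifted frusta (a common approach, though not the only one); the function name, units, and values are illustrative assumptions, not any particular driver's API.

    # Minimal sketch: turning "separation" and "convergence" settings into
    # two camera descriptions. Illustrative only.

    def stereo_camera_offsets(separation, convergence_distance):
        """Offset each camera horizontally from the mono camera by half the
        separation, and shift each frustum so both views line up at
        convergence_distance. Objects nearer than that distance appear to
        pop out of the screen; objects farther appear behind it."""
        half_sep = separation / 2.0
        # With a parallel-axis ("off-axis") rig, convergence is achieved by
        # shifting each camera's projection frustum rather than toeing the
        # cameras inward, which avoids vertical parallax.
        frustum_shift = half_sep / convergence_distance
        left = {"x_offset": -half_sep, "frustum_shift": +frustum_shift}
        right = {"x_offset": +half_sep, "frustum_shift": -frustum_shift}
        return left, right

    # Example: wide separation plus a near convergence plane = strong pop-out.
    left, right = stereo_camera_offsets(separation=0.12, convergence_distance=2.0)
    print(left, right)

Widening the separation exaggerates the depth between the two views, while pulling the convergence plane closer pushes more of the scene out of the screen; the two settings work together, which is why the article treats them as a pair.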

Oculus Rift PhotoCrop Contest Entry

Head mounted displays need to be handled very differently. The principles are similar, but our personal biology and eye placement suddenly become a lot more important. We can still get a rich 3D experience, but we no longer have the flexibility to spread the separation apart the same way, and the convergence setting is no longer about choosing what stays inside the screen and what's grabbing hold of our collar!

We have to remember that the whole goal behind an HMD is to be an immersive display that closely resembles how we see things in real life. In life, we don't have cameras, we have eyeballs! An HMD's lenses are practically pressed up against our pupils, which means we can't move our eyes farther apart or closer together (that would be a gross trick), and unlike a 3D display that can be placed a distance away, HMDs place much greater pressure on our eyes to rotate according to what they see.

Therein lies the problem! When we put our HMD on, each eye is seeing a separate image that we can't easily compare, and we could be inadvertently forcing our eyes to diverge in uncomfortable directions without even realizing it. This is a no-no!

Therefore, the separation of the game's cameras is determined by the distance between our eyes. This measurement is known as the interpupillary distance (IPD). When choosing our game settings, we have to make sure our eyes are never asked to look at something whose on-screen separation exceeds our IPD, because that would force them to point outward.
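As a rough illustration of that rule, the sketch below checks whether a rendered object's disparity (the separation between its left-eye and right-eye positions) would force the eyes to diverge. The function names and the 63 mm average IPD figure are my own assumptions for the example, not from any headset vendor.

    # Illustrative comfort check: disparity wider than the IPD forces the
    # eyes to point outward, which never happens in real life.

    def max_safe_disparity(ipd_mm):
        """Infinitely distant objects are rendered with a disparity equal to
        the IPD: the eyes look straight ahead, parallel. Anything wider
        would force divergence."""
        return ipd_mm

    def is_comfortable(disparity_mm, ipd_mm=63.0):  # ~63 mm is a commonly cited average IPD
        return disparity_mm <= max_safe_disparity(ipd_mm)

    print(is_comfortable(60.0))  # True: eyes converge slightly, as in real life
    print(is_comfortable(70.0))  # False: eyes would have to diverge -> eye strain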

Also, with a 3D monitor we were happy to go crazy with our convergence and have objects fly out of our screens because it's fun and it looks great! With an HMD, this technique no longer makes sense: the lenses are right up against our eyes, so there is no screen for things to fly out of. We instead see things as we would in real life with our own eyes. We will still want to dodge that flying hatchet, mind you - we just don't need the same tricks to get the same results! Forcing a high convergence setting would not only be unnecessary, it would make the viewing experience painful because our eyes would be forced to point sharply inward.

Convergence is still an important adjustment, just not for the same reasons. Stereoscopic 3D effectiveness is best demonstrated with up-close objects. If I were to photograph an apple on a table with a stereoscopic 3D camera and overlap the images, it would be very clear that each view is very different. It's these differences that make the apple look interesting. If I took a picture of the same apple from far away, the two views of the apple would overlap almost completely. The aspects of 3D where you see the curvature of objects and fine detailing really happen up close, though you still maintain that chasm-like depth for objects in the distance.
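To put a rough number on how quickly those differences fade, here is a small back-of-the-envelope calculation (my own approximation, not from the article): the angular difference between the two eyes' views of a point shrinks roughly in proportion to the IPD divided by the viewing distance.

    # Rough illustration: angular disparity ~ ipd / distance (radians),
    # valid when the distance is much larger than the IPD.

    import math

    IPD_M = 0.063  # ~63 mm average interpupillary distance (assumed)

    for distance_m in (0.4, 2.0, 10.0, 100.0):
        disparity_deg = math.degrees(IPD_M / distance_m)
        print(f"apple at {distance_m:6.1f} m -> ~{disparity_deg:5.2f} deg of disparity")

At arm's length the two views differ by roughly nine degrees, while at a hundred meters they differ by a few hundredths of a degree; the distant apple's two images overlap almost perfectly, just as described above.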

With an HMD, the purpose of the convergence setting is to make sure we see that fine detailing up close, still benefit from the depth in the distance, and do it all in a way that doesn't make our eyes twist and turn painfully.