
Kinect Inadvertently Demonstrates 3D, U-Decide Update

We have a proof of concept 3D lesson for everyone today!  Unless you have a medical limitation, you are probably able to perceive 3D, and this is how it works: if you close your eyes one at a time, you will see that each eye gets a slightly offset view of the scene.  Because of that offset, each eye also sees parts of the scene that the other cannot.  Your brain combines the unique images from your left and right eyes into a single picture, and that picture includes the depth we all take for granted.
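The geometry behind this can be sketched with a simple pinhole camera model.  The horizontal offset between where a point lands in each eye's view (the disparity) shrinks as the point gets farther away.  The baseline and focal-length numbers below are illustrative assumptions, not measured values:

```python
# Binocular disparity sketch: how far apart the same point lands in the
# left and right views, for a simple pinhole model.
# baseline_m (eye separation) and focal_px are illustrative assumptions.

def disparity_px(depth_m, baseline_m=0.065, focal_px=800.0):
    """Horizontal offset (in pixels) between the left and right views
    of a point depth_m metres away."""
    return focal_px * baseline_m / depth_m

for depth in (0.5, 1.0, 2.0, 10.0):
    print(f"{depth:5.1f} m -> {disparity_px(depth):6.1f} px")
# Nearby objects produce large disparities; distant ones almost none,
# which is why depth perception fades with distance.
```

With these numbers, a point half a metre away lands 104 pixels apart in the two views, while one ten metres away lands only about 5 pixels apart.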

Stereoscopic 3D Explanation

In the S-3D gaming market, there are two dominant schools of thought.  The first is full left and right camera view rendering, which in some cases can require double the processing power.  Most PC stereoscopic 3D drivers and a selection of console games do this 100% of the time.  The second option is 2D+Depth, which renders only a single camera view and, based on the game's inherent Z-buffer information, places objects at different depths.  This is advantageous because there is little to no loss of performance, but you lose the extra information that a second camera would normally provide.  So how important is this extra information?
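The 2D+Depth idea can be sketched as a naive CPU reprojection (real drivers do this on the GPU; the function and parameter names here are hypothetical).  Each pixel of the single rendered view is shifted horizontally by a disparity derived from its depth value.  The gaps left behind, where no source pixel lands, are exactly the missing information a true second camera would have supplied:

```python
import numpy as np

def reproject_second_eye(image, depth, max_disparity=8):
    """Naive 2D+Depth view synthesis (CPU sketch, hypothetical names).
    image: (H, W) array of pixel values.
    depth: (H, W) array, 0.0 = near, 1.0 = far.
    Nearer pixels shift further; positions where no pixel lands
    stay NaN - the disocclusion "holes" a second camera would fill."""
    h, w = image.shape
    out = np.full((h, w), np.nan)  # NaN marks unfilled holes
    shift = np.round(max_disparity * (1.0 - depth)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - shift[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Tiny demo: a near object (depth 0.0) in front of a far background.
img = np.arange(16.0).reshape(4, 4)
dep = np.ones((4, 4))
dep[1:3, 1:3] = 0.0  # near square in the centre
right = reproject_second_eye(img, dep, max_disparity=2)
print(int(np.isnan(right).sum()), "hole pixels")  # prints: 4 hole pixels
```

The four NaN pixels are background the near object uncovered when viewed from the shifted eye; a full second render would show what is actually behind it, while 2D+Depth has to guess or stretch.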

Kinect For XBOX 360

Microsoft's Kinect has been getting a lot of headline space because it's a new type of game controller that doesn't require you to hold anything in your hands.  It determines your body's positioning with the help of a stereoscopic 3D camera.  Oliver Kreylos' video (inadvertently) demonstrates the importance and quantity of extra information provided by a second camera view:


If 2D+Depth were shown on camera, we theorize that, viewed straight on, it would look the same as this video.  However, when the camera is rotated, the objects would sit on different depth planes and appear as paper-thin elements - like a deck of cards standing up on a table.  It's this lack of visual information that makes it difficult for 2D+Depth to produce convincing out-of-screen effects.  Is it worth the trade-off in performance?  Share your thoughts!

UPDATE!  On closer inspection, while Kinect does have two cameras, they aren't equal in resolution.  One is 640x480, and one is 320x240.  The 320x240 unit is used for depth information capture, and it is offset from the first.  So let's leave it to the MTBS membership to decide.  Is Kinect in fact 2D+Depth, making this article's theory wrong?  Or is it closer to a true stereoscopic 3D capture device?  How do you differentiate?


Prizes are still being shipped to the U-Decide winners, and several more digital download keys will go out today.

Mark Stuart, U-Decide Winner!

Congratulations Mark Stuart, winner of the SteelSeries Siberia V2 Headphones!  If you receive a prize in physical form and can get a picture of yourself with it, please let us know!