MTBS Interviews the President & CEO of In-Three Inc., the Current Leader in 2D to 3D Movie Conversion!

3D Biz-Ex is coming up next week, and one of its most anticipated speakers is Mr. David Seigle, President and CEO of In-Three Inc.

In-Three has earned endorsements from the likes of George Lucas, James Cameron, and Peter Jackson as a leader in 2D to 3D conversion of pre-existing movies. MTBS had the privilege of interviewing him, and I think you will be excited by what his company’s work means for the stereoscopic 3D (S-3D) industry!


1. Tell me about In-Three. How long have you been around and what do you guys do?

In-Three was formed in 1999 to develop, patent and implement in software ("IN3D", the In-Three Depth-builder) the means to Dimensionalize 2D content, that is, to convert 2D films to 3D films.


2. Can you name some movies that are converted or are being converted using your In-Three process?

I wish I could answer that for you. Non-disclosure agreements cover almost everything we have, and we can show some of it in person at our offices with a signed NDA. We are dimensionalizing public domain material so we will not be constrained in the future.


3. Does your company do the conversion for the studios, or do the studios get your software and do the work internally?

Our mission is to perform three kinds of work: Dimensionalization of films in libraries, Dimensionalization of films in development in parallel with their 2D counterparts for day-and-date release, and work in tandem with material being filmed stereoscopically to eliminate discomforting disparities. We do this ourselves. Our software may be commercialized if there is sufficient demand.


4. I’m trying to wrap my mind around the process. In video games, content is rendered from true volumetric 3D geometry, so S-3D conversion is almost effortless when done properly. In pre-existing movies, you are dealing with a completely flat image from the start. How does your system pull depth, or create a dynamic Z axis, out of a completely flat image? In particular, how does your software know that a doorknob is closer than the door, or a nose is closer than the face?

Remember that 2D content has all of the normal depth cues of the real world with the exception of accommodation, convergence, and binocular disparity. Accommodation is the adaptation of the eye’s shape to distance. Convergence is the eyes’ tracking (crossing) to see an object. Binocular disparity is the difference between the two views the eyes have, which gives objects shape. In a theater, the eyes accommodate to and converge at the screen. In addition, each eye sees the identical image: there are no binocular disparities.

What we do is recreate the convergence and the binocular disparities of the original scene. We do this by treating a frame as a left or right eye image and copying it to create the alternate eye image. Then we identify each object to be given explicit depth and provide it with depth, shape, and perspective. Finally, we fill in previously occluded areas in the newly created image. For more on how we do this, see our web site.
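The steps just described (copy the frame, assign per-object depth, shift pixels, fill what was occluded) can be sketched as a toy depth-image-based rendering pass. This is only an illustrative approximation of the general technique, not In-Three's proprietary IN3D pipeline; the function name, the linear depth-to-shift mapping, and the fixed maximum shift are all assumptions:

```python
import numpy as np

def synthesize_right_eye(left, depth, max_shift=8):
    """Create a second-eye view from a single image plus assigned depth.
    depth is per-pixel in [0, 1]: 0 = far, 1 = near. Pixels that end up
    unfilled are the previously occluded areas an artist must paint in."""
    h, w = depth.shape
    right = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        order = np.argsort(depth[y])  # far pixels first, so near ones win overlaps
        for x in order:
            shift = int(round(depth[y, x] * max_shift))
            nx = x - shift  # nearer objects shift more between the eyes
            if 0 <= nx < w:
                right[y, nx] = left[y, x]
                filled[y, nx] = True
    return right, ~filled  # second array marks the holes still to fill

# Tiny example: one row, one "near" pixel
left = np.array([[10, 20, 30, 40]], dtype=np.uint8)
depth = np.array([[0.0, 0.0, 1.0, 0.0]])
right, holes = synthesize_right_eye(left, depth, max_shift=1)
```

In the example, the near pixel (value 30) shifts left by one position, and the spot it vacated becomes a hole: exactly the "previously occluded area" the answer describes having to fill in.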


5. When we think of converting a movie from 2D to 3D, it sounds like a simple one step process - but it isn’t. Can you describe how a movie director interacts with your software to get to a final product?

What we prefer is that the director specify what we call a "style" of dimensionalization and provide depth scripts which reflect that style. This can vary from shot to shot. Such a depth script takes into consideration, for example, the dramatic impact the director wants to create using z-depth, and the z-depth transitions from shot to shot. We then create key frames to show the effect of these choices. Once approval is given, we dimensionalize the shot and the director provides the final okay.


6. As you are starting off with a 2D image, how much flexibility can a movie director or converter really expect out of the new 3D image? Can you give an example of how this flexibility would work?

We provide a control of depth that we call "depth-grading", analogous to color-grading, which allows a director to come close to recreating the real-life scene, or to modify it to satisfy his artistic vision. The only restriction is that the "explicit depth" we give individual objects cannot conflict with implicit or natural depth cues. There are also certain physiological limits that must be observed: if the point of convergence sits too far from the screen for the viewing distance, there may be a conflict between accommodation and convergence, and the eyes will become strained or may be unable to fuse the image. We have material at In-Three that we can project on a big screen and on a small screen (3D HDTV) to show these effects.


7. When I think of professional 3D rendering, I think of CPU farms and dedicated rooms of workstations all focused on a single minute of footage. If you could measure it, how would you compare this example to the processing power required for the 2D to 3D conversion process? Are your requirements as extensive?

No. I’ve not analyzed the specific algorithms that CG uses to produce first- and second-camera views, but I am aware they are CPU intensive. Our software uses similar hardware, but our compute-intensive tasks relate to the propagation of values through the frames of a scene. This is usually measured in seconds, and we have clear development goals to reduce this latency's impact on productivity. The other compute-intensive task is generating a deliverable, using our depth and other values to produce frames for a specific screen size and resolution.


8. It seems like your company is in the dead center of movie time. The past is filled with a huge library of 2D content ready to be converted, and the future is showing a rush of new movies that are getting rendered in S-3D from the get-go. I understand the benefits of converting older 2D content. How does In-Three fit with newly produced content in the works or coming to market?

This is being worked out as we discuss all this with studios and others. Stay tuned.


9. What’s cheaper, producing a 2D movie and converting it to 3D, or producing an S-3D movie, and using your software to tweak it? Will one option give a better result than the other?

I suspect that the marginal cost of some shots, like those with limited depth and complex objects, will be cheaper using stereo cinematography, and that many others will be cheaper using Dimensionalization. Another point to note, though, is that some shots are impractical or impossible using stereoscopic cinematography. There are no shots that can’t be Dimensionalized.


10. Do you think perfect 3D is possible in a movie theater? Why or why not?

I say our goal is to help a director "approach perfect 3D". I define this as allowing the director to meet his artistic vision and ensuring that no discomforting discrepancies exist anywhere in a shot. I use the word "approach" because the eyes always accommodate to (i.e., focus on) the screen even if they are converged in front of or behind it while viewing a perfect binocular reconstruction. In-Three allows a director to approach perfect 3D.


11. Movie theaters come in all sizes. Does one size fit all when it comes to S-3D movie conversion, or do movie houses need to recalibrate for different theater sizes?

Movies need to be tuned to different screen sizes. A film that is shot, computer generated, or dimensionalized to show an object at infinity on a forty-foot screen will, when displayed on a five-foot screen, show that same object at 114% of the viewing distance. Shown on a sixty-foot screen, that object would appear beyond infinity and cause the viewers’ eyes to diverge.

In the first case the director’s intended artistic vision is diminished. In the second case discomforting artifacts are introduced. In neither case do we approach perfect 3D.

We keep the metadata that allows us to regenerate content for its intended screen size.
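The 114% and "beyond infinity" figures follow from simple convergence geometry: on-screen parallax scales linearly with screen size, and the eyes converge at distance V * e / (e - p) for eye separation e, parallax p, and viewing distance V. A quick sketch, where the 65 mm eye separation and the helper name are assumptions for illustration:

```python
EYE_SEP_MM = 65.0  # typical adult interocular distance (assumed value)

def perceived_distance(parallax_mm, view_dist=1.0, eye_sep_mm=EYE_SEP_MM):
    """Distance at which the eyes converge for a given on-screen parallax,
    as a multiple of the viewing distance. Parallax equal to the eye
    separation means parallel gaze (infinity); anything larger would
    force the eyes to diverge."""
    if parallax_mm >= eye_sep_mm:
        return float('inf')  # divergent: uncomfortable or unfusable
    return view_dist * eye_sep_mm / (eye_sep_mm - parallax_mm)

# An object mastered at infinity for a 40 ft screen carries parallax equal
# to the eye separation. Resizing the image rescales the parallax with it:
p_40ft = EYE_SEP_MM
p_5ft = p_40ft * 5 / 40    # 8.125 mm on a 5 ft screen
p_60ft = p_40ft * 60 / 40  # 97.5 mm on a 60 ft screen

print(perceived_distance(p_5ft))   # ~1.14x viewing distance (the 114% figure)
print(perceived_distance(p_60ft))  # inf: the eyes would have to diverge
```

On the small screen the "infinite" background collapses to about 8/7 of the viewing distance, which is exactly the painted-on-the-wall effect Seigle describes later; on the oversized screen the parallax exceeds the eye separation, which is why the metadata for regenerating each screen size matters.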


12. This isn’t the first run that Hollywood is taking at getting S-3D in movie theaters. What is different today? What was the drawback of S-3D’s past that isn’t there today?

A key change is that theaters now use single projectors. In the past, no matter how good the content was, dual projectors would fall out of alignment, develop uneven light output, have splices break left/right synchronization, and so on. That very large cause of discomfort no longer exists.


13. As I’m sure MTBS has demonstrated to you by now, there is a quickly growing at-home consumer market for S-3D content through 3D monitors, HDTVs, HMDs, and more. There is now a second market for 3D movie content that wasn’t there before. After In-Three works with a movie studio for a theatrical release, does the whole film need to be recalibrated for at-home viewing?

Probably for reasons discussed above. But in some cases maybe not - it’s a matter of opinion. I’ve stood in a group watching a clip on a five foot screen and the reactions ranged from "it’s not bad" to "the background looks like it’s painted on the wall behind the TV".


14. This is my favorite question to ask all my interviewees. If a 3D genie appeared, and with the exception of getting every Hollywood director to knock on your door to use the In-Three process, what three wishes would you like that would move the S-3D industry even further ahead?

I have a hard time "conjuring" up an answer to the genie question. I prefer to deal with the situation as it is, just as studio people and others have to deal with the world as it is. This means that they have to focus on generating revenue next year and beyond during a period when 3D theater screen numbers are just beginning to be meaningful. We are both looking past this transition period as they work with In-Three to fill the current content gap.


15. If there was one message our readers could walk away with after this interview, what would it be?

3-D is here to stay.


David Seigle will be speaking at 3D Biz-Ex next week, and all MTBS members qualify for a 20% discount on registration. Learn more in the MTBS Discounts section found under Member Benefits.
