INTERVIEWS

MTBS Interviews the Entertainment Software Association of Canada

“It would make a great deal of sense to incorporate 3D technologies into the video game experience” – Nicole Helsberg, Director of Public Relations for the Entertainment Software Association of Canada (ESAC).

Jointly with Ipsos Reid, ESAC conducted a fascinating study about the gaming habits of parents and children in the home, and uncovered some useful trends.

Not quite a conversation about stereoscopic 3D gaming, but I think Nicole’s insights will keep both your eyes open to modern trends in video game positioning.

1. I understand you have some exciting findings to share, but first I’d like to talk a bit about the Entertainment Software Association of Canada (ESAC). For those unfamiliar, can you elaborate a bit on what you do?

The Entertainment Software Association of Canada is dedicated exclusively to serving the business and public affairs needs of companies in Canada that publish and distribute computer and video games for video game consoles, handheld devices, personal computers and the Internet. Association members include the nation’s leading interactive entertainment software publishers and distributors, which collectively accounted for more than 90 per cent of the $1.67 billion in entertainment software and hardware sales in Canada in 2007. The entertainment software industry currently accounts for over 260 firms and 10,000 direct jobs, with thousands more in related fields across Canada.

2. There is ESAC and ESA (for the US market). While you are rooted in the same organization, outside of geography, are there some key differences that set you apart? Are there legal differences, for example?

The ESA is our sister association. ESAC, while an advocate of many of the same industry issues, faces different challenges in Canada. One key difference – some of our provinces have legislation that adopts ESRB ratings into law and restricts the sale of Mature and Adults Only rated games to children. (To date, in the US, only the state of New York has video game legislation.)

Our membership is also different. While we represent many of the same companies, ESAC’s membership includes companies not members of the ESA, and vice versa.

3. What needs does your organization fill that make you so critical to the video game industry?

ESAC is the voice of the entertainment software industry. We promote the industry to all levels of government – federal, provincial and municipal. ESAC is an advocate for issues affecting our industry, including copyright reform, piracy and intellectual property enforcement, and international trade. We conduct research about and on behalf of the industry, and promote the industry to the public. We work with provincial officials and our partners in the retail sector to promote the industry’s rating system and help parents and consumers make informed choices about the games they purchase or rent for their families.

4. What is the Entertainment Software Rating Board (ESRB), what do they do, and how are your organizations related?

The Entertainment Software Rating Board (ESRB) is a non-profit, self-regulatory body established in 1994 by the ESA. ESRB independently assigns ratings, enforces advertising guidelines, and helps ensure responsible online privacy practices for the interactive entertainment software industry.

5. Once a Mature (M) or Adults Only (AO) rating is determined, how effective are efforts to get retailers to stop selling to minors? Are kids carded the same way they are for alcohol or cigarettes? Do stores prevent sales somehow?

Together with our partners at the Retail Council of Canada (RCC) and the ESRB, we have a wonderful program called “Commitment to Parents” (CTP).

The CTP program is a voluntary initiative designed to restrict the selling or renting of games to children that are meant for older teenagers and adults. The program’s mandate is to help parents make informed choices for their families by educating consumers about the ESRB rating system and ratings enforcement.

CTP participating retailers agree not to sell Mature (M) or Adults Only (AO) rated games to underage children. They display store signs, which advise customers of their participation in the program and promote awareness and understanding of the ESRB rating system. And yes, sales associates will card someone who looks underage before they are allowed to purchase a game.

6. Let’s talk about your latest research. What was the motivation behind this study?

Each year, we survey the trends of video game play in Canada. We survey Canadians to find out the average age of a gamer, age breakdowns, what types of games are most popular, etc. Given we know that most games rated for sale by the ESRB in Canada are rated for play by children and teens, we were curious to know whether Canadians were finding ample choices for their families. Were they playing games together as a family?

7. Who put the study together and how was it carried out?

Ipsos Reid conducts the survey on our behalf. Further details can be found on their website: www.ipsos.ca

8. Can you elaborate on the demographics or criteria that determined who could participate? How many respondents per household?

I’d refer you to Ipsos Reid for specifics. We survey over 650 Canadians, from all reaches of the country.

9. Just so I don’t feel so old, what is the age of the average gamer? Has this changed a lot over the years? Why do you think that is?

In our most recent survey of Canadians, the average age of a gamer is 40.3 years.

And, yes, we’ve seen a steady rise in that average age over the years. This has occurred for many reasons, including parents who are playing more with their children, likely because, as we’ve discovered, parents are finding a greater selection of games for the entire family. Formerly young gamers are aging and having children, and including games as part of their family time.

10. What ESRB game rating is deemed family friendly? Can you name some popular PC titles that fit this category?

E (Everyone), E10+ (Everyone 10 and older) and T (Teen) would be ratings for family games, depending, of course, on the age of the children.

While I don’t want to name specific games, as everyone will have their favourites, I would like to point out that the ESRB has an excellent tool on its website to search games by platform, by rating, and by type of game. For example, when I select rating – E (Everyone), platform – Windows PC, and type – any, over 4,500 titles are listed.

I’d suggest going to the internet for reviews of games. gamingwithchildren.com is one good resource for reviews of family friendly games. Also, I’d suggest talking to the retailer. A sales associate will have a pretty good idea of what’s fun and appropriate for the family.

11. Video games get a lot of criticism for being violent and not appropriate for family viewing – you’ve heard it all, I’m sure! Are these statements true for the majority of games? How big is the family viewing market in video games?

Big sales titles like Halo and Grand Theft Auto get a lot of buzz in the press. But, truthfully, M rated games make up a fraction of games rated for sale in North America.

The ESRB says that 59% of games rated for sale in 2007 were rated E for everyone, and another 15% were rated E10+. Mature games were a scant 6% – this, out of over 1500 games the ESRB reviewed last year.

12. MTBS is very much PC gaming focused because only modern PCs are able to take advantage of stereoscopic 3D technologies like 3D monitors, HDTVs, etc. – at least for now. Were your findings strictly for consoles because they are living room technologies, or are they applicable to the PC market too?

Our survey asks respondents to discuss video and computer game play.

13. Growing up, I remember video games as being alien to parents, and reserved for the enjoyment of teenagers and solitaries. How knowledgeable are parents about video games now?

Certainly, Canadian parents demonstrate a great deal of responsibility regarding the selection of their child’s game(s). Our research over the years has consistently demonstrated this. Not to mention that many parents play a game prior to making a purchasing decision. And according to our most recent research, 57% of parents play video games with their children. It’s just another way that parents are spending time with their children. With increased family offerings from publishers, parents are finding an ample selection of games that they can play with their children, and enjoy an activity that is fun for all ages.

14. This family angle, was it encouraged by the game developers, or did it occur naturally?

I can’t say for sure, though it would make financial sense, I imagine, to broaden the appeal of video and computer games to what may have been considered non-traditional gamers.

I recently attended the E3 Media and Business Summit in Los Angeles. And after attending many press conferences and briefings there, I’ve noted that many publishers were keen on showcasing family friendly games as part of their offerings for 2008 and 2009.

15. If parents had a choice of having their kids watch television or play video games, which would they pick? Why do you think that is?

According to our research, 41% of parents would prefer their children to play video games over watching television.

Video games require skill, thought, communication, strategy…these are all good things when perhaps compared to a more passive activity, like watching TV.

And adults, even seniors are getting in the game, so to speak! Game developers are making games that are not only easy to play for all ages, but are targeted to groups of people that most would not think of as traditional ‘gamers.’

16. Between parents and kids, I’m sure you will agree that it’s often hard to agree on what program to watch. Do you have any findings about parents playing video games with their kids? How many do it? Why do you think that is?

Our research shows that 57% of parents play video games with their children. Parents and children are finding exciting game content that appeals to multiple generations. And the majority of games are family friendly, especially in terms of ratings. Can that be said about most television shows today?

17. Do you think parents are playing the games because they enjoy them, or is it more a concern about monitoring what their kids are playing and who they are chatting with online?

I think it’s both. Games are available for the whole family to enjoy together. But maybe what got parents interested was the fact that this was a significant pastime for their children, and being conscientious, they wanted to be involved. Perhaps they got ‘hooked’ as well!

Certainly, in past research, we’ve determined that parents are very aware of what their kids play. According to our 2007 data, 79% of parents monitor closely or very closely the video games their child plays.

18. Do you have any data on who decides on which games to buy? The kids, the parents, or both?

I don’t have specific data on that.

19. Your findings are Canadian. Do you think your results would be similar in the US or overseas? Why or why not?

You’d probably have to ask my friends at the ESA (US), ELSPA (UK) and ISFE (Europe).

In general, though, many trends in the US are trends in Canada as well. For example, the average age of a gamer is on the rise in the US as well as in Canada. Each year, women comprise a greater number of gamers.

20. Can you take a guess on what triggered parents to suddenly take an active interest in their kids’ video games? Were the parents game players all along who just grew up, or has something happened that made games more appealing?

I think parents have always been interested in what their child plays, and our research has always supported this.

But (a) parents are perceiving a greater number of games available for family play – 69%, according to our recent study; (b) publishers are reaching out to all ages and developing content for non-traditional gamers as well as gamers; and finally (c) the Atari generation has grown up. Those who grew up playing video games are embracing video game play with their children.

21. I understand there will be some follow-up data to this study. What do we have to look forward to?

In September, we will be publishing our annual Essential Facts about the Canadian computer and video game industry. This guide outlines further research from Ipsos Reid on game player data, including who plays video games, on what hardware, online game play, and ratings awareness. Essential Facts also includes data from NPD Canada on overall hardware and software sales, as well as top game sales by genre.

Special thanks to Nicole Helsberg of the Entertainment Software Association of Canada. We are looking forward to her follow-up research and findings! Post your thoughts on this interview HERE.

MTBS Interviews Dr. Robert Cailliau, Co-Developer of the WWW, Part Two

By Neil Schneider

We recently had the honor of interviewing Dr. Robert Cailliau, the co-developer of the World Wide Web. Dr. Cailliau’s work with Sir Tim Berners-Lee holds worldwide importance, and their mass-appeal invention demonstrates important lessons learned along the way.


Dr. Robert Cailliau, Co-Developer of the WWW

In the first interview segment, Dr. Cailliau shared some information about his career, physics, and some history about how the World Wide Web was created. In part two, we talk about the politics of the web, Dr. Cailliau’s personal experiences with stereoscopic 3D photography, and a sharp look at the relationship between the way the WWW was developed, and what ramifications this may hold for the stereoscopic 3D industry.

What was “the year of the web”? What critical elements secured the web’s success at this time?

Depends who you talk to. Some think it was 1989, when Tim produced the first proposal (that does not have the name WWW in it). Others think it was 1990, when the first server went on-line. Yet others, especially media, think it was 1993 when CERN put the technology into the public domain.

But to me it was 1994, because that was when the greatest activity took place:

— the First Conference, which brought together for the first time web developers who until then had never met;
— the conception of the Consortium that would guard the standards (the consortium was to be run by CERN and MIT, but the approval of the construction of the LHC on a tight budget made CERN pull out);
— the interest of the European Commission in helping WWW take off;
— the first stirrings of interest from commercial firms.

1995 was very different: I spent that year transferring the web from CERN to the INRIA-MIT consortium, and Tim had already left CERN at the end of 1994.

Why was the W3 Consortium formed? Was the web destined to be successful before or after the consortium?

The web could have been successful without the Consortium, but it would have been limited to academia. Once there was interest from commercial software companies it was very necessary to ensure that all worked to the same standard. Whenever there is something that only works if two independently produced parts are compatible, you must work to an agreed standard.

Look at the HD DVD/Blu-ray disaster, or the VHS/Betamax battle before it. In both cases there was content served up on some medium that had to be played on a compatible player. You can either produce all content in many versions so that consumers can buy the one compatible with their player, or you can wait until the market decides which of the two formats “wins”. It is almost always the worst format that wins.

Netscape was trying hard to impose a set of HTML tags that would only work with the Netscape browser. Others were trying similar things. For years we had to code our pages with if-statements that would pick different code depending on the browser in which the page ended up. A complete disaster and billions of man-hours wasted.
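
To give a feel for the kind of per-browser branching being described, here is a minimal sketch, assuming a server-side script that inspects the User-Agent header. The detection strings and vendor-specific tags are purely illustrative; they are not the code of the era, only the shape of the problem.

    # Illustrative sketch only: pick different markup per browser, the way pages
    # had to be written before the standards settled. The User-Agent substrings
    # and the vendor-specific tags shown here are assumptions for illustration.
    def render_welcome(user_agent: str) -> str:
        if "MSIE" in user_agent:
            # Internet Explorer variant of the page
            return '<p align="center"><font face="Arial">Welcome</font></p>'
        elif "Mozilla" in user_agent:
            # Netscape variant, using Netscape-only tags
            return '<center><font size="4">Welcome</font></center>'
        else:
            # Plain HTML for everything else
            return "<p>Welcome</p>"

    print(render_welcome("Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)"))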

The Consortium intended to avoid that situation but did not at first succeed. It was a long and hard battle to make all companies sit around the same table and agree on a standard.

We now have these standards: XHTML, CSS, XML, DOM, SVG. Yet Internet Explorer 7 still does not respect them well. Here too we lost about five years because companies did not understand the importance of adhering to a common standard.

Separate from the consortium, you formed the first international WWW conference. What made these conferences unique? What need were you trying to fill? Do they still run?

To my knowledge they still run. I have referred to the Conferences several times. There certainly is a need for people to get together physically in the same space for a number of days.

There are many advantages:

— you get to know each other in a different way than by e-mail or blog.
— you are away from the work environment and can concentrate on high-quality contact with like-minded people.
— you generate and absorb a lot of ideas.
— you get some credit for presenting a paper.

Today there is WiFi access in all conference rooms and that is detrimental to the contacts; I think I would disallow network access to the audience.

I ran the first one, and then we formed a legal body so that we could handle money well. There were two big conferences a year for the first two years, then once a year afterwards. I lost interest a few years ago, mainly because I got out of web technologies and things became much more specialized.

I had the intention also to gather large users, but that never happened.

I understand Tim Berners-Lee was resistant to the conferences at first. Why?

I think he thought it was a waste of time, but actually it generated a lot of synergy between developers and the Consortium. He was certainly pleased at the first one, when he saw how many people turned up and how many workshops there were.

In a Wikipedia interview, you compared the W3 Consortium and the WWW conferences to a “church” and “state” relationship. The W3 Consortium is the church, and the WWW conferences are the state. Given that you are personally an atheist, I wonder if there is a deeper meaning to what you are saying. Do you think the web’s standards and future development were best served by a selection of companies and organizations, or would they have been better served by an industry that answered to its users? Should it have been a hybrid of both?

Well, given that I am indeed an atheist, I certainly want a very clean separation between church and state in the real world. However, in putting forward that analogy I just meant that the Consortium could not give ambiguous signals, it has to hold up a standard. The Conferences were to me the place where anyone could suggest any wild idea and present it to an audience without creating an impression of uncertainty about where things were going. The Consortium could look at some of the ideas and perhaps later use them.

I think that the market-economy principle (Smith’s invisible hand) is a fantastic mechanism for deciding who is the best cook or the best grocer. But it is possibly the worst mechanism to keep standards. It would almost be like letting the market decide what the value of pi was.

We have had a “market” type system to decide on units, and we have ended up with a metric system that is perfect for quick computing because it sticks to what calculators do: the decimal system. However, magnitudes are binary: the next thing after having one of something is having two of them, not ten. The Imperial system has the advantage that it recognizes this doubling/halving tendency, and it also has base 12 which allows division by 3. But no calculator, human or electronic, can deal with it! Try to find the area of a room that is 9 feet 10 inches wide and 12 feet 7 inches long.
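
As a quick worked example of that last challenge (a small Python sketch; the only figures used are the room dimensions from the question above):

    # Area of a room 9 ft 10 in by 12 ft 7 in: everything must first be converted
    # to a single unit before the multiplication makes sense.
    width_in = 9 * 12 + 10              # 118 inches
    length_in = 12 * 12 + 7             # 151 inches
    area_sq_in = width_in * length_in   # 17,818 square inches
    area_sq_ft = area_sq_in / 144       # ~123.7 square feet
    print(round(area_sq_ft, 1))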

The obvious best combination is a base-12 numbering system: then we change all our calculators to work in base 12, and revise the metric system to match. But how do you bring that about with a market? Never! It needs some clever people to sit down and think it through.

For much the same reason you cannot leave the definition of XHTML to the market. It is a job akin to proving a mathematical theorem. If you let the market loose on it you get Javascript. Thanks, no.

Microsoft, and I’m not afraid of mentioning them by name, have always tried to be the “standard” setting industry, but have they answered to their users? Definitely not.

In the domain of setting standards you need a group of intelligent people with no conflict of interest, who are focusing on making it work and keeping it open to the future. The Consortium accepts input, looks at it, makes a proposal and has it reviewed before making a definite standard. In some respects it tries to humor too many trends at once, but at least we do have workable standards.

I understand you are a fan of stereoscopic 3D (S-3D) photography. How did you get into this?

A long time ago I wanted to get depth into pictures I made. As a child I never owned a “Viewmaster”, but I used those of some friends. So I was familiar with the effect and also how it was done. I also had a picture book of the Antwerp Zoo, which was printed in blue/red with a red/blue set of spectacles. This gave black and white stereo pictures.

In 1977 or so I tried my hand at projecting 35mm slides on a metallic screen using two projectors, each equipped with a polarizing filter and spectacles for the audience with polarizing filters. This works beautifully but is complicated to set up.

Then for a long time I did nothing and relatively recently (2002) I started making some digital examples. I looked at getting a viewer with prisms, but prisms are very difficult to get. There are viewers with lenses but they are not so good.

To say I am a fan is perhaps too much. I will occasionally make some, but I have yet to find a good way of showing them without a complicated setup. One can of course put them on the screen and view them with fast shuttering spectacles, but that is difficult for a larger audience.

Ideally I would want to project onto, say, a 2.5m by 2.5m screen for an audience of about ten people and let everyone enjoy this with little or no equipment. Difficult to do if you want very high quality (1024×768 is an absolute minimum).

What equipment do you use to make 3D photos and how did you learn to use it?

Ah, I just take two shots, one after the other, moving the camera a little. If it’s close up, I move not more than the distance between one’s eyes (about 6.5 to 7 cm) but if things are far away I obtain an artificial depth perception by moving much more, sometimes a meter or so.

If you photograph a landscape from two positions a meter apart then the result looks like a miniature because when things are far away you do not really perceive stereo, you judge the distance by other means such as haziness.

When you do perceive stereo, i.e. your left eye sees different sides of objects from what your right eye sees, then it is because the objects are really close and your eyes are somewhat crossed too. Therefore we always think of stereo as close by, and the brain therefore tells you that anything in stereo must be close. This is why such landscapes look like they are miniatures watched from close by.
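
A rough sketch of how that scaling works, using the common “1/30 rule” of stereo photography (the rule itself is not from the interview; it is a widely quoted rule of thumb, added here only to show how the two figures he mentions relate to subject distance):

    # Rule of thumb (assumption, not from the interview): camera separation of
    # roughly 1/30 of the distance to the nearest subject.
    def suggested_baseline_cm(nearest_subject_m: float) -> float:
        return nearest_subject_m * 100.0 / 30.0

    print(round(suggested_baseline_cm(2.0), 1))   # ~6.7 cm, close to the eye separation he quotes
    print(round(suggested_baseline_cm(30.0), 1))  # ~100 cm, the metre-scale shift for far scenes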

Tell us about some of your work. What photos are you most proud of and why?

Difficult. I really don’t specialize in stereo photos. I have a few, taken on trips, and mostly to document things I saw.

I made a whole series about the construction of the ATLAS detector at CERN and one of them was the inspiration for a souvenir that the ATLAS PR group sells to visitors. I liked that series. I also have some of plants and a few of statues on Easter Island.

I should do more, and especially I should pay more attention to making them high-quality. This interview will probably make me do just that.

What is it about stereoscopic 3D that fascinates you so much? What qualities does it add that makes it worth the extra effort and attention?

The depth is important to understand the spatial relation between components of what you see. I find it especially gratifying in close-ups of flowers. But I have not done many. I guess I’m stuck until I find a very easy way to show them.

On-screen you are limited to a viewing distance of maybe 30 to 40 cm. That implies that you cannot make the pictures larger than about 10 cm on the screen. But that’s only 200 to 300 pixels! Quite bad resolution.
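
For what it is worth, that pixel estimate is consistent with the screens of the time if you assume roughly 60 to 75 pixels per inch (the pixel density is an assumption added here; the interview gives only the centimetre figure):

    # 10 cm of image width converted to pixels at a few assumed screen densities.
    for ppi in (60, 72, 96):
        width_px = 10 / 2.54 * ppi
        print(ppi, round(width_px))   # 60 -> ~236 px, 72 -> ~283 px, 96 -> ~378 px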

Yes, I think the technology to show them very easily is not available to the average user and so I’m not making many until I can show them easily – to myself in the first place.

Do you think your S-3D photography helps make the experiences more memorable?

Oh yes! But I don’t do enough of them.

The reason I named this organization “Meant to be Seen” is because people who haven’t seen S-3D in action really don’t get it. I know you aren’t a game player, but given your experience in stereoscopic 3D photography, and working on the premise that gamers have good stereoscopic 3D equipment, do you agree that when properly implemented, stereoscopic 3D technology should significantly increase the immersion and beauty found in video games? Why or why not?

Seeing things in 3D certainly adds to the aesthetic experience. I personally hate shoot-out games, but I can imagine that walking through the classic Riven or Exile labyrinths would be greatly enhanced if it were in 3D.

We should however not overdo this: the distances of objects are generally of the order of a few meters, which is vastly greater than the 6-7 cm separating our eyes. That means that most objects are not seen in 3D: the images in both eyes are not sufficiently different. I don’t know at which point it becomes important, but I would suggest closer than 3m. So overdoing it by pretending that objects that are further away should be rendered in 3D would be bad and make the scene artificial, like my miniature-looking landscapes.
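
To put rough numbers on that intuition, here is a small sketch of the vergence angle the two eyes make on an object straight ahead, assuming a 6.5 cm eye separation (the sample distances are illustrative; the interview only suggests that stereo stops mattering somewhere beyond about 3 m):

    import math

    # Angle subtended at the object by the two eyes; it shrinks quickly with
    # distance, which is why stereo cues fade for far-away objects.
    def vergence_deg(distance_m: float, eye_separation_m: float = 0.065) -> float:
        return math.degrees(2 * math.atan(eye_separation_m / (2 * distance_m)))

    for d in (0.5, 3.0, 10.0, 100.0):
        print(d, round(vergence_deg(d), 3))
    # 0.5 m -> ~7.4 degrees, 3 m -> ~1.24, 10 m -> ~0.37, 100 m -> ~0.04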

I hear (but again I need to see this now!) that in modern equipment the separation can be adjusted by the user (indeed, not too difficult to implement!). That would be a good thing to have, since it would allow a better perception of depth at the touch of a button.

I find it interesting that today’s world wide web is very much a 2D space and you are personally enamored by 3D imagery. Would you like to see a stereoscopic 3D world wide web future?

Personally I am interested in combining abstract concepts to help me understand the world around me. That usually means I need some notation: we use words and language for most everyday things, plus now also some photos and videos and even 3D. But I need also the notation of maths, chemistry and physics. The notation used there is not words or images but symbols placed in a 2D arrangement, in the form of diagrams, equations etc. There are rules for manipulating the equations.

When I look things up on the web I do not care much about any 3D effect there could be, since I’m after abstract stuff. On the other hand, some understanding does need at least 3D (though how one would represent 4D or 5D I would not know). In chemistry especially, 3D has been in use for a very long time: 3D molecules were the first 3D objects on the web; they date back to 1994 at least.

3D might bring great benefit in “gut” understanding by the general public, and I’m not trying to be paternalistic here. While a small number of people like me may prefer the abstract knowledge, communicating to the public at large has constraints: time, attention span, and familiarity with maths. The latter especially is nonexistent or very low, even among many well-educated people.

I could think of web pages being laid out in 3D, each one being much like a room: the 3D would show better the parts that belong together and at the same time the overall structure of the info on that page. Links could be like doors that lead to other rooms. This would let most people understand the info faster and easier. Do not confuse this with 3D navigation in information spaces: I’m talking about a single page that expresses a single concept in many aspects and is laid out in 3D.

In 1976 Kay, Thacker and Lampson made the first 2D computer: we went from interfaces with single command lines to full 2D graphics. That was a big step in making the computer accessible to ordinary people: the graphical user interface, with overlapping windows and recognizable objects. It needed the mouse to interact with it.

Going from 2D to 3D may not be so revolutionary a step, but it will certainly make the information itself easier to grasp.

3D Cinema has become a very big deal, with anywhere from 2:1 to 3:1 revenues earned versus 2D movies. Similar to the web, 3D cinema is an artistic endeavor. We are starting to see a wide selection of consumer 3D solutions for the time when 3D movies jump to the home market. Having gone through your experiences with the WWW, who do you think should decide on consumer stereoscopic 3D standards and why? On a similar matter, what lessons did you walk away with from the HD DVD versus Blu-ray fiasco?

I already commented on the HD DVD/Blu-ray affair. The only time I can remember the industry doing it right was with the music CD in 1980. All manufacturers and publishers agreed to follow one single standard, well before any product was brought to market. I hope this effort will be repeated for home 3D, and I hope no one will want a replay of the Blu-ray battle.

The criterion is clear: if content needs to be played on a compatible device, there should be a worldwide standard for the content.

We have, by the way, another total scandal: the number of different video standards. MP4, QuickTime, AVI, WMV, … Not to speak of GSM: 900MHz, 1800MHz, 1900MHz, 3G, GPRS, whatnot. That should perhaps be cleaned up first. In still images it’s all JPEG.

I think an open consortium for 3D content would be the best way to go. And I say that not because I was involved in the setting up of the W3C, but because I want to put the consumer before the shareholder. There is no point in making things incompatible on purpose.

If I said it’s the end users that made the World Wide Web the success it is today more so than the companies that set the standards, would you agree with me? Why or why not?

Certainly the authors did it and now almost anyone is an author. They began to fill the web with useful stuff when the worst of the standards battles were over. To my knowledge not one company succeeded in setting web standards, not even Microsoft.

The Consortium did manage to get its authority and that’s good. The average user does not want to know the why’s and the who’s of the standards. He/she wants to know that the page just uploaded will work on any browser. If there are variants then that puts him off or makes him choose one and ignore the rest.

Therefore I think the really great participation of the average users in the web came after the standards issue was resolved by the creation of the consortium. Firefox became the touchstone for the masses, precisely because it implements the standards well. The standards were essential to get the users in.

For the past ten plus years, the S-3D gaming industry has been in limbo because there was never a direct relationship between the video game makers and stereoscopic 3D driver developers – so there was always a compatibility problem. I formed MTBS to create a catalyst for the game developers, the S-3D manufacturers, and the end consumers to have a meeting of minds to overcome this challenge. My biggest frustration is that many people are quick to sign away their empowerment to one or two corporate entities when it comes to determining industry success. When you worked with Tim Berners-Lee to develop the world wide web, you were all of four people in a CERN office, with one of this century’s greatest contributions being named over a beer. Did it ever occur to you that your invention’s success would be dependent on a company like Microsoft? Why or why not?

Did it depend on Microsoft? As late as 1995 they were still thinking that Microsoft Network would wipe out the Internet. I have never felt that the web’s expansion was driven by Microsoft. On the contrary, their browsers and servers always lagged behind. I certainly have not ever used a Microsoft product to build a site nor to look at one (the exception being FrontPage for a very short-lived experiment).

For 3D there should, as always, first be a phase of wide experimentation by the pioneering enthusiasts. I feel that’s what’s happening now and that phase may be coming towards its end. The changeover from experimentation to public acceptance and consumer use can go two ways: either a single company “wins” and floods the market with its proprietary technology, or the pioneers of today act like the internet community did. Their motto is: “we don’t believe in kings, presidents or voting, we believe in rough consensus and working code”. In other words, the community itself sets the standards and above all remains deeply involved so as to keep moving forward instead of dying in the rise of a single monopoly.

This raises another point. Which browser does your personal website have problems with and why? Can you relate this challenge to the issue of stereoscopic 3D empowerment? Is it dangerous to have rogue corporate super-powers in the 3D industry? Why?

I code my pages (and I need to renovate quite a lot of them still!!) in plain and simple XHTML (strict version). Likewise for CSS. There are no problems with Safari, there are minor type-rendering problems with Firefox, but there are ugly problems with Internet Explorer when it comes to positioning relative elements. I refuse to put in coding that would distinguish to which browser a page is sent and adapt to that browser’s quirks.

I think any industry should watch out for rogue superpowers. So, we need a good, simple, workable standard for 3D. It should not be too difficult, as this can be built on top of existing standards for JPEG, etc.

Similar to 1994 being “the year of the web”, I think that 2008 is the year of consumer stereoscopic 3D. What criteria will make stereoscopic 3D a success for consumers in gaming and cinema, and who or what groups of people do you see as having the most power to make it happen? Why?

As I understand it, at least some special hardware is needed to view 3D.

If the person who views the 3D does not need to wear anything then 3D will require something very sophisticated such as a color holographic screen, and then I think we’re not there yet. If the viewer needs a special set of spectacles, then he/she will probably not want a number of them but only one. In this respect, video is different from audio.

I do have a number of different headsets: some are earplugs, some are on the ear and two are noise-canceling over the ear. However, all of them are completely interchangeable. All my music sources can use all of my headsets.

The same may not be true for 3D eyewear, and then it will be a disaster. When I buy an iPod I get a pair of earphones with it. They may not be good, but I can start. I can go out to select and buy a headset without having to take my iPod along. If my next Mac has 3D movie capability, then it should also come with 3D eyewear. And I should be able to go buy 3D eyewear without having to worry whether it’s going to be compatible.

When I invite some people over to watch the album of our latest trip in 3D, they should be able to come with their own eyewear. The standard has to be there beforehand, in other words.

I have repeatedly read that you and Tim never patented the web and that you could have made riches beyond your dreams had you done so. Do you think a patent and/or licensing arrangements would have stifled the web’s success? Can similar forces hold back the consumer stereoscopic 3D industry? Why or why not?

Yes, a patent would definitely have relegated us to be a small competitor in a large pond full of incompatible systems. Those systems were already there anyway (Compuserve, AOL, Minitel, …).

I always say that what we did was not provide a service or make software, what we did was set a simple and workable standard: http + HTML. Then we had to work hard to keep that free of adulteration by Netscape and Microsoft. Fortunately the smaller companies joined quickly and supported the idea.

Congratulations on your recent retirement – though I’m guessing this “retirement” is just a facade. What are you really busy with these days?

I’m mainly answering requests for interviews and talks, but I try also to do some of the projects that were shelved during the web years. I really do want to have some fun after more than a decade of continuous hard work.

What key lessons and remarks would you like our members, both industry and consumer, to walk away with from this interview?

Briefly looking back over it, I think the main thing is: simple common standards. Your last few questions made me think how I would have to use the products of an industry, rather than making my own eyewear and tinkering with my own stuff. And that means I would really not want to find out that my new stereo glasses don’t work with my laptop. I may be wrong, but for mass consumption that seems to me to be key.

Dr. Robert Cailliau will be making a third appearance to demonstrate his personal stereoscopic 3D photography. Post your questions and comments HERE, and Robert will answer you in his follow-up interview. A final bonus is Dr. Cailliau has joined the MTBS advisory board, and I am confident his unique experience and mindset will help our efforts to drive the consumer stereoscopic 3D industry forward.

MTBS Interviews Dr. Robert Cailliau, Co-Developer of the WWW, Part One

The stereoscopic 3D industry almost seems to have exploded overnight. We have a growing number of stereoscopic 3D (S-3D) movie theaters, the S-3D gaming industry is taking off thanks to efforts by the MTBS membership, the S-3D manufacturers (e.g. iZ3D, TDVision Corp, etc.), and leading names like James Cameron, Ubisoft, and Electronic Arts Korea. Yet the more things change, the more things stay the same. What does the S-3D industry need to do to make stereoscopic 3D a success for the masses? What lessons can the past teach us to help move us along?

It is a great honor to be joined by Dr. Robert Cailliau, the co-developer of the World Wide Web. Dr. Cailliau’s work with Sir Tim Berners-Lee holds worldwide importance, and their mass-appeal invention demonstrates important lessons learned along the way. Furthermore, Dr. Cailliau has a personal fascination with stereoscopic 3D photography, and will be sharing samples of his library on MTBS in a later interview segment.

In part one of our exclusive interview, Dr. Cailliau shares some insights on his career, physics, the beginnings of the World Wide Web, and the significance of beer. As with all our previous interviews, Dr. Cailliau will make himself available to answer member questions.

Robert! Let’s set the record straight. Parlez-vous français?

Oui, bien sûr. But I was born in Tongeren, just above the horizontal line running through Belgium that marked the northern edge of the Roman Empire, so I am from the Germanic-languages region, not the Latin-languages region. Hence I was brought up in Dutch.

“Flemish” is not a language: just as “Scottish”, “Australian” and “Canadian” are slight varieties of English, “Flemish” is a slight variety of Dutch. There is a common Flanders-Netherlands commission that edits the dictionary and spelling used throughout the Dutch-speaking region, which runs from this horizontal line just under Brussels all the way up to the north of Holland.

But since I live in France, obviously I speak and write French. Quite well I should add.

What does it mean to be synaesthetic, and what does this have to do with the early logo for the WWW?

Synaesthesia is a condition whereby stimuli of one sense generate some trigger of another sense. You could call it “cross-talk” (for those who remember what that was in analog audio/video). So when I think of a symbol (a letter for example) then that triggers a colour. When I think of a “W” I see it in green. The early WWW logo therefore was a set of three overlapping W’s in green shades.

The form I have is very mild and the most common form. Some people “feel” shapes when they smell odours, others get tastes when they hear music, and so on. About one in ten to twenty thousand people is a synaesthete. But as most have been put off in early childhood by the reactions of parents and friends, most are “in the closet”, so to speak, and it’s only recently, mainly thanks to the web itself, that they have been able to understand their own condition and make contact with other synaesthetes. So I suspect that we will see a higher number turn up. Most synaesthetes are women, in a ratio of 12 to 1.

Let’s talk about your roots. It’s public record that you were born in Tongeren, Belgium and graduated from Ghent University in electrical and mechanical engineering. You later got your MSc from the University of Michigan in Computer, Information and Control Engineering. I understand you also have a military background! Can you share some of your experiences there?

Let’s set something straight here: in Anglo-Saxon countries an engineering degree is somewhat different from what it is in continental Europe. Here it is already equivalent to an MSc. I did most of my engineering work on electrical subjects, mainly power transfer. Then I needed to use computers, but no such courses were offered in Ghent at the time (1970), and instead of going to, say, Darmstadt or Zurich, I went to Ann Arbor.

I don’t have a military background: military service was obligatory in Belgium for all able males. I did mine in the infirmary, as some sort of first-aid person. But since I was stationed in the Royal Military Academy, there was also a computing centre. They soon found out I would be much more useful helping with the “War Game” programming than with dispensing medicine to ill cadets and I was transferred to the School of War.

It was the era of the Cold War, remember. There were frequent exercises of the Allied armies in Western Germany. But that was very expensive, so they sent out only the officers and kept the soldiers and equipment in the barracks. We (two programmers in the School of War) maintained a large troop movement simulation program, some sort of SimCity type thing (but text data only) that told the officers the results of their commands. The communication was by telex and paper tape, from the computing centre to the fields in Germany and back. Great fun.

Let’s talk about CERN. What do they do? What makes this institution unique?

There is a very good talk available HERE.

In short, CERN started out in 1954 as a common infrastructure for particle physics research. It is impossible for individual countries, let alone individual universities, to construct the complex, large machines needed to obtain the very high energy densities at which the interesting interactions happen.

The machines are particle accelerators. They take charged particles (mainly protons) and push them to high speeds using electric fields, keeping them in a circular orbit by magnetic fields. Then we let them collide to see what happens at high energy densities. The most powerful accelerator is the Large Hadron Collider (LHC) that lies 100m under the ground at CERN, near Geneva. It is about 10km in diameter.

I have to go a little into some concepts:

Energy is something most are familiar with. It can exist in a potential form, such as the energy in a battery or in a large suspended weight or in water behind a dam. It is measured in Joule (J). It can also be seen to flow, as in a lit bulb. Then it is obviously measured in Joules that flow out per second, or J/s better known as a Watt (W). A Watt is one Joule per second.

Energy density is another matter. It is usually very small, i.e. there are very few Joules in a cubic meter of anything. That is, if you don’t count the conversion of mass into energy (the famous E=mc2). Of the useful energy stores that we commonly use, the most dense is in petrol: 42,000,000 J/liter and it is extremely cheap (not a joke!). By comparison, a rechargeable battery such as used in your iPod holds only 1,500,000 J/liter (i.e. if you made a battery that would occupy 1 liter of space, it would then weigh more than 4kg instead of the petrol’s 0.8kg, which is why electric cars are heavy and run down fast).

In the experiments we do at CERN, the energy of the particles in the LHC is about that of an entire intercity train running at full speed. A single bunch of particles (there are 2,800 in the beam) has about 124,000 J but it is concentrated in a space smaller than a cubic millimeter. That means just a single bunch holds about 400,000,000,000 J/liter!
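
A back-of-the-envelope check of those densities (the petrol, battery and bunch-energy figures are the ones quoted above; the bunch volume is an assumption, since the interview only says “smaller than a cubic millimeter”):

    petrol_j_per_litre  = 42_000_000
    battery_j_per_litre = 1_500_000
    bunch_energy_j      = 124_000
    bunch_volume_litre  = 0.3e-6   # assumed ~0.3 cubic millimetres

    bunch_j_per_litre = bunch_energy_j / bunch_volume_litre
    print(f"{bunch_j_per_litre:.1e} J/litre")              # ~4e11 J/litre
    print(round(bunch_j_per_litre / petrol_j_per_litre))   # ~10,000 times denser than petrol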

When such a bunch collides with another one, you can imagine what happens: interactions at a violence that is close to what was going on at the Big Bang.

And that’s exactly what we’re looking for: how did particles behave at that time, a few microseconds after the Big Bang? Why do we do this? Because different phenomena begin to influence each other when they act at high enough energy densities.

Take electricity: you rub a glass bar with a catskin (poor cat) and the bar starts attracting small objects. That’s static electricity. Take a magnet and it will attract small objects made from iron: static magnetism. Now, put a compass next to a torchlight and switch the torchlight on. If it is powerful enough, the compass needle will get a kick. Moving electric particles generate a magnetic field. There is an influence from the electric stuff to the magnetic stuff (and vice-versa). But it does not happen if you wave the glass bar over the compass, because the energy density is not high enough to see the influence.

Once you have enough energy density you can come up with a description of both the electric and magnetic phenomena in one set of mathematical equations. For electromagnetism those are Maxwell’s famous equations that were first formulated in 1875 and for which we still find new applications each day.

The equations “unified” electricity and magnetism.

When CERN did the Z/W boson experiments in 1983 we had enough energy density to show that electromagnetism and radioactivity are also two phenomena that can be unified into what is now called the electro-weak force.

Can we go on? There are two more forces that we know of: the strong force (that keeps the atomic nucleus together) and gravity.

We have a theory that allows us to describe even the strong force and the electroweak force in a single set of equations: the “standard model”. However, the standard model contains 18 numbers whose origin we do not know, and at least one “hole” in its table of particles. We suspect that the “hole” is taken up by a particle that no-one has yet seen, the so-called Higgs boson. According to the theory, it must be a very heavy particle, and so requires a lot of energy density to create. That’s why we built the LHC.

Should we not detect the Higgs boson, then we have to demolish the standard model and find a different theory. If we do detect it, we hope that its behavior will explain the values of at least some of those 18 numbers.

The numbers, so we suspect, are a little like pi: first you observe that the ratio of the circumference of a circle and its diameter is always the same and about 3. You can obtain a better value of pi (3.14159…) by measuring the circumference of a large circle and dividing it by its diameter. The more accurately you measure, the better your value of pi will be. But when you study the circle and do enough geometry and math, you find that you don’t need to measure pi, you can calculate it from first principles.

Similarly, we suspect that at least some of the 18 numbers should be calculated from other numbers and the Higgs boson properties should help.

That’s my understanding of the physics, but I’m not a physicist.

Wow! How did you fit in all this? Can you give us a brief history of your career at CERN?

I started in the accelerator divisions, doing control systems software for the smaller accelerator that is now used as a first stage for the LHC. That was in 1974. We had Norsk Data computers. Beautiful machines: they already had hardware separation of instructions and data. No way to write a virus and send it off as a piece of data! Also no way a program could mess up its own code and thereby make debugging extremely hard.

We used those machines to write the documentation of our highly specific accelerator control programs, and a friend and I developed a text processing system with interesting relative markup. Tim Berners-Lee used it to write up what he did when he was a contract programmer for a few months in the same project. We did not work close together at that time but I knew who he was.

Tim left CERN and I went to the computing division to become head of Office Computing Systems. My group was responsible for office software and for installing and maintaining a large number of office machines, at the time all Macs (1985-1988). In that capacity I once had dinner with Bill Gates in Zurich, as we wanted a university-type deal for Microsoft’s Excel and Word.

Then in 1989 there was a big restructuring and I wanted to return to more basic things, especially exploring hypertext systems. I left the computing division for a newly created Electronics and Computing for Physics division where I was not sure whether I wanted to work on programming techniques (object oriented programming systems were being introduced) or concentrate on hypertexts. Tim had returned to CERN from a spell in the UK, and his boss, a friend of mine, wanted me to look at his networked hypertext proposal. Tim was in the computing division, in a building more than a kilometer from where I was. I liked very much what he proposed, dropped my own proposal and joined him.

This can all be read in “How the Web was Born” by James Gillies. Click HERE to learn more.

What was your personal role in this collaboration?

From the start I did more on the management and public relations side. I’m eight years older than Tim, who himself was considered “old” by the young programmers. I had a family and I could not work through the night and through the weekends, programming in C. So when I came in on Monday I would usually be a couple of versions behind the others. Plus, I fundamentally hate C. It is one of the lousiest programming languages on the planet.

So I went on to get management support (remember that I had been a group leader for some time and knew my way around the hierarchy). I also wanted to bring in the European Commission because it was obvious that CERN, a physics lab, could not afford to put much effort into an informatics project.

I did the first entirely WWW based project with the Commission and the Fraunhofer Gesellschaft in Darmstadt. Then I also forged links with INRIA, a true informatics lab, in the hope to find help there.

In 1993 I started organizing the first international WWW Conference (held in May 1994), and that brought in INRIA.

I worked for six months with the CERN Legal Service to make the document that put the web technology into the public domain on 30 April 1993 (now just 15 years ago!), which required convincing top management.

Then I also convinced the European Commission that the web was an instrument for schools and started the “Web for Schools” initiative that was very successful (though I had only time to follow it from the side). I was quite busy, I can tell you that.

Of that, I’m sure! Do you like beer? Tell us about the most important beer in your life.

I used to like beer, but don’t drink so much these days. There is a quote of mine, a reaction to a question from the floor at a plenary session of the Second International Conference, where a person asked why we had physical conferences on WWW when we should do it all through the internet and the web. I said: “People do want to meet in person, there is no such thing as a virtual beer.”

I generally do not like the very light stuff that is popular in warm countries like the US and Australia. I prefer the heavy dark ones. “St Sixtus 13” used to be a favorite but I have not had it in a long time. These beers also require time and proper setting: a quiet long evening after dinner for example. Those seem to have gone too, evaporated into answering e-mail.

But there was another occasion where beer played a (marginal) role: in the early and hot spring of 1990, we were re-writing the proposal for management so that we would get some time to work on the web. We had no good project name and could not find one. Before going home in the evening we used to go to the CERN cafeteria for a light beer after the hot day.

We then discussed the missing name. On one of those evenings, after I had firmly rejected names of mythical characters such as “Zeus”, “Pandora” and whatever, that were very popular as project names at the time, Tim proposed “World Wide Web”. I liked it, except for the fact that the abbreviation was longer than the name and that “WWW” was unpronounceable in Latin languages. We agreed it would be a temporary name, to be used for the proposal only, until we found something better.

Was the web originally designed for a small group of physicists? Can you explain why your co-invention was so critical to the academic world? What problems were you trying to solve?

No, it was never explicitly intended for physicists. What I had observed was that physicists were sending files by floppy disk in internal mail envelopes and could not get at someone else’s files if the other person was not in the office. There was a need for a storage system that would allow people to find documents without the need of interacting with the author. An automatic, electronic, networked library.

Tim’s idea was somewhat different: he wanted to organize thoughts and grow collaborative documents. That could be useful for physicists, but I think that was only a justification, not a goal.

It was not really a case of trying to solve someone else’s problems but rather a case of trying to understand what was possible and how to do hypertexts over a network. Obviously, once we had it, the linking mechanism allowed academics to build a set of papers with references and that was what they wanted.

The web as we know it has grown beyond measure in both size and functionality. What was the core idea behind the original World Wide Web that Tim and yourself created? What is the significance of hypertext?

The functionalities we see today are no different from the early ones. There still is nothing to link documents other than the simple link we had, and nothing to do other than filling in forms and getting a reaction from a database. The more “advanced” functions rely on programs that come with the page. They are done in Javascript, a language even worse than C.

Javascript fills the big hole that we left in the web: Tim was adamantly opposed to putting a programming language in. If you leave an obvious hole then it will very quickly be filled and it will be filled with something ugly and badly designed. Javascript was not even designed, it is an abomination.

The functionality of running a program on the server is very old. Even before the first server in the US was implemented (December 1991) there was a server at NIKHEF (Netherlands Institute for Nuclear (Kern) and High Energy Physics (Fysica)) that returned the square root of the number you gave it. It showed that one could do essentially anything on the server side.

Hypertext has a lot of functionalities that the web does not have. I can’t go into this here, but the web is arguably the simplest and dumbest hypertext system in existence.

As to the core idea: that simply was to let any document on any server link to any other document on any other server. Nothing more than that. The system depends entirely on the specification of the URL (or URI, as you like).
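
That dependence on the URL is easy to see with any modern toolkit; Python’s standard library, for instance, splits an address into exactly the pieces a browser needs to locate a document on any server (the address below is the one usually cited for the first web page, used here purely as an example):

    from urllib.parse import urlparse

    url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
    parts = urlparse(url)
    # The scheme says how to talk, the host says which server, the path says which document.
    print(parts.scheme, parts.netloc, parts.path)
    # -> http info.cern.ch /hypertext/WWW/TheProject.html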

When you were developing this technology, how big was your team? Did you have an army of teachers and students working with you to make this invention possible? Did you have unlimited resources? (Pay attention to this answer, fellow S-3D Advocates!)

We had very limited resources, and one of my worries was to get resources where I could. Tim was not very well supported in his division; I was better off, but in total we had one student each at any one time. That grew a little after 1993, and at the end of 1994 we had about four, i.e. the team was six people in total.

Inventions of this kind do not need vast armies, just a few bright kids and loads of support: the best network connections, machines, programming techniques, quiet offices and so on. There were of course a number of people working in other places too; after all, 600 people wanted to come to the first conference in May 1994. But most of those worked on the periphery. The core development stayed at CERN until the end of 1994.

Questioning the usefulness of the World Wide Web today would be like questioning the merits of sliced bread. How receptive were people to the idea in the early stages? Were your ideas welcomed and adopted with open arms, or met with cold shoulders? How do you think your experiences relate to the S-3D industry?

In Belgium most bread is sold unsliced, but there is a slicing machine in the shop. Bread gets sliced just before you take it away, if you want it sliced that is.

In 1984 I read an article in a computer magazine describing the first Mac and how a mouse worked. It was quite impossible to imagine what a mouse could do from a description; you had to use it yourself to understand. It was much the same with the web. Unless you put people in front of a machine and let them click around in web pages, they almost never understood how it worked, much less what its potential was.

The best illustration of this was my attempt at explaining it to the European Commission officer who was in charge of the grant for the Fraunhofer project. After about 10 minutes I gave up and suggested that we set up a special meeting at which I would show what it could do. To the credit of the Commission, they agreed to hold a grant discussion meeting outside their offices, in the computing centre of the Free University of Brussels.

I purposely showed the then newly created “Dinosaur Exhibition” site of the Honolulu Community College in Hawaii (Kevin Hughes’ work). When the officer began to understand that each time I clicked, the text and images actually travelled from Hawaii to the terminal, he took the mouse out of my hand and began to click around himself. But it was not always possible to give such a convincing demo, because managers were still reluctant to touch computers, especially ones with a mouse.

At a network conference near Paris I sat down for lunch next to a person from the French Telecom and when I asked if he had an internet mailbox, he dryly answered that I should not assume that the Telecom would ever support the internet protocols.

We know what happened: the internet spread anyway, despite resistance from the telecoms.

In a similar vein, I have not experienced 3D movies or video games, and I think it’s no use talking about them until I have actually seen one. No amount of words can describe the real experience, and that is sometimes a real obstacle to spreading a technology.

Tell us about Gopher and Mosaic. What gave them temporary advantages against your work? Were you happy about the development of HTML? Why or why not?

Gopher was there a little before, and by coincidence the basic protocol looked much the same. But that is not too surprising: both systems deliver a page of information from a server to a client. Gopher was extremely easy to install and also very easy to populate with information. But its information was a tree; there were no links. Our pages contained the addresses of the other pages they linked to. WWW required these links to be inserted by hand, so at least a little markup was required too.

The links made it much more useful to the reader: you could see from the context where a link was going to lead you, and authors could use this to draw attention to pieces of information. Instead of browsing through a tree as in Gopher, in WWW you could follow a path laid out by the author.
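
A toy comparison may help readers who never used Gopher. This is purely my own sketch, not either system’s real data format: a Gopher server offers a fixed menu tree that the reader walks down, while a web author can embed a link to any document on any server right where it is relevant in the text.

```python
# Toy sketch only (not Gopher's or HTTP's actual formats).
# Gopher: the reader navigates a menu tree chosen by the server.
gopher_menu = {
    "Physics division": {
        "Preprints": ["paper-001.txt", "paper-002.txt"],
        "Seminar schedule": ["spring-1993.txt"],
    },
}
print(gopher_menu["Physics division"]["Preprints"][0])

# Web: links are embedded in the text and may point to any server
# (the addresses here are made up for illustration).
web_page = (
    'These results extend <a href="http://other.lab.example/paper-001.html">'
    'earlier work</a>, discussed further '
    '<a href="http://cern.example/notes/discussion.html">here</a>.'
)
print(web_page)
```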

The markup we used was inspired by a markup used at CERN for physics papers. There was an SGML guru at CERN, Anders Berglund, who understood that SGML was a good framework for making the different markups that physicists needed.

Let me take the opportunity to point out that HTML is not a subset of SGML, and that in fact you can’t write anything in SGML. SGML is the superstructure that allows one to define a certain markup. The “grammar” of a markup language is technically called a “document type definition” or DTD. You can have a DTD for letters, a different one for reports and so on.

HTML is a very bad name for a very lousy grammar. It should technically have been called “the web DTD”. Calling it HTML implies that it is something like SGML, but of course it is not. A good analogy: HTML is to SGML as French is to the Indo-European languages.

SGML has since more or less gone away, superseded by XML. Again, one cannot write a document in XML as such, but one can write a document in an XML-compliant markup. XHTML is an XML-compliant markup language.

Back to HTML: apart from the bad acronym, it is also a bad language in itself. It is not well structured. For example, you can have headings of different importance (h1, h2, … h6), but there is no concept of chapters, sections and so on that can be nested. It is a really flat set of directives, and they are absolute to boot. I could go into many details, but it’s really not good. However, at the time it was a secondary worry. Unfortunately, once a large number of people have filled servers with HTML, it’s impossible to change. There are other examples of this kind of disaster: USB is a bad design but we’re stuck with it; IEEE 1394 (FireWire) is much better.

Once there were a number of servers out there, people needed browsers other than the very fancy one that ran on the NeXT system. We just did not have the manpower or the knowledge to port the browser to the X system, but a number of X programmers did make “primitive” X browsers. I say “primitive” because those browsers did not support authoring, did not allow multiple windows and had no support for laying out paths. They did attempt to introduce graphics, but usually bitmap graphics rather than vector graphics.

One of them came from NCSA, the National Center for Supercomputing Applications in Illinois. It was called X-Mosaic and was originally meant to do more integrated information access. Its main characteristic was that it came as a single executable bundle: you could just download it and start browsing. Until then, the X browsers had all required some installation procedure with linking to local resources such as fonts. X-Mosaic spread like wildfire; it was indeed a virus, no installation required!

That happened in 1993. Soon after there were versions for MacOS and for Windows. NCSA had a team of 20 people on Mosaic. They got very sure of themselves and started inventing additions to HTML. In a sense they were right, but I wished that we could have had a better collaboration. Anyway, the world began to see the web through Mosaic. I think that collapse into Mosaic (later Netscape) set us back by about five years in the implementation of style sheets (we had them on the NeXT) and XML.

When I tell people the co-developer of the World Wide Web is making an appearance on MTBS, like clockwork the first response is “I thought Al Gore invented the web”. Can you explain how this impression came to be, and PLEASE set the record straight?

My own understanding of the role of Al Gore is this:

1. From 1973 there was the internet: the infrastructure, with NO content, that carries the services – mail, download, chat and, since 1990, the web.

2. During the 80s, networks became important and there were several attempts at giving people access. The most successful of these was the Minitel in France(*).

3. In the US there was the NII, the National Information Infrastructure initiative, in 1991. When we saw it we smiled and thought “we have that…”, but the man behind the NII was Al Gore. He did not know about the web then, because the first server in the US only went online in December 1991, but he had the idea right. So, a bit like we said we were actually DOING the NII, he must later have said the web was what he had had in mind with the NII.

You can find all of that in Wikipedia and in the book “How the Web Was Born” by James Gillies and myself.

Stay tuned for part two of Dr. Robert Cailliau’s interview. Dr. Cailliau will share his thoughts on the stereoscopic 3D industry and its relationship to the web’s history. Is a stereoscopic 3D world wide web possible and what would it look like? What does the S-3D industry need to reach mass market success? Does our industry need standards, and who should set them?

Post your thoughts on this interview HERE. Dr. Cailliau will be answering our members’ questions and remarks in a special follow-up interview.

Samsung Follows Up!

By Interviews No Comments

Yesterday, we posted the follow-up interview with Electronic Arts Korea regarding their collaboration with Samsung. Today, Kevin Lee (Lee Kyung Sik), Vice President of the Visual Display Division for Samsung, takes his turn to answer MTBS questions about the new Samsung Plasma 3D HDTV.

All questions posed were from MTBS members sourced HERE. You can read the original interview HERE.

1. What is a PDP, and how does it display 3D?

A PDP TV is a flat-panel TV that uses a plasma display. Because it has a fast response time and smooth picture quality, it is popular among consumers who enjoy watching sports or playing video games.

To create a 3D image, the Samsung 3D Ready TV separates the 3D image data coming from the PC into left and right images. It then displays the left and right images alternately using 120 Hz technology. Combined with the shuttering of the glasses, this technique delivers the left image to the left eye and the right image to the right eye to produce a 3D image.
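
For the technically curious, here is a rough sketch of what the checkerboard format amounts to: the left-eye and right-eye images are interleaved pixel by pixel in a chess-board pattern within a single frame, and the TV then separates them again and flashes them alternately in sync with the shutter glasses. This is only an illustration of the format in Python/numpy, not Samsung’s actual processing.

```python
# Rough illustration of the checkerboard S-3D format (not Samsung's code):
# left-eye and right-eye pixels alternate in a chess-board pattern.
import numpy as np

def to_checkerboard(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Interleave two equal-sized images: pixels where (row + col) is even
    come from the left image, the rest from the right image."""
    assert left.shape == right.shape
    rows, cols = left.shape[:2]
    left_mask = (np.add.outer(np.arange(rows), np.arange(cols)) % 2) == 0
    if left.ndim == 3:                 # colour images: broadcast over channels
        left_mask = left_mask[..., None]
    return np.where(left_mask, left, right)

# Tiny example: an all-white left image and an all-black right image.
left = np.full((4, 4), 255, dtype=np.uint8)
right = np.zeros((4, 4), dtype=np.uint8)
print(to_checkerboard(left, right))
# The display reverses this step and shows the recovered left/right images
# alternately at 120 Hz, in sync with the shutter glasses.
```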

2. How affordable are Samsung PDP HDTVs – can you give an expected price range in US dollars? Does this include S-3D compatibility?

The PDP 450 series supports the S-3D function. The retail price for the 42-inch TV is $1,199.50 US and the 50-inch TV is $1,699 US – similar to the prices of existing 2D TVs.

In order to expand the 3D market, Samsung incorporated the 3D function into its relatively inexpensive volume models.

3. What are the exact Samsung 3D HDTV models that are supported?

For PDP TVs, the PN42A450 and PN50A450 series support 3D; among the sets sold at Best Buy and Circuit City, the DLP TVs also support 3D.

4. Will we see an EA/Samsung joint marketing campaign promoting S-3D to traditional 2D gamers?

Samsung is satisfied with the 3D graphics of EA’s game titles, and is planning to carry out promotions such as BTL marketing, sales guides and exhibitions together with EA.

5. iZ3D also recently announced their entry into the third-party S-3D driver market, which is great news considering the vast number of games they support and the excellent feedback from their users. Does EA/Samsung have any interest in working with them to create a more complete and compatible S-3D conversion offering?

EA/Samsung has not considered working with iZ3D yet. However, Samsung intends to cooperate with various content providers and other companies to expand the 3D market.

6. I’m a programmer. Will any game that can render in checkerboard S-3D work, or does the Samsung PDP HDTV need some special USB command to make it work in S-3D?

Samsung 3D Ready HDTVs support all checkerboard S-3D content. There is no need for a special USB command.

7. During 2008 CES, Samsung announced two series of PDP HDTV. It seems to me that only the lower-end 720p series are S-3D compatible, not the 1080p series. Will there be 1080p S-3D compatible PDP HDTVs from Samsung?

Samsung has supported 1080p S-3D in DLP TVs since 2007, and it introduced 720p 3D Ready PDP HDTVs to the market in 2008. It is currently considering producing PDP HDTVs that support 1080p in the future.

8. Are the Samsung S-3D DLP HDTVs being phased out? Will Samsung continue to equally support both S-3D DLP HDTV and S-3D PDP HDTV solutions?

Samsung plans to continue to support S-3D DLP HDTVs in the future. But, the degree of support may change depending on the market response.

9. If I get the 3D Ready DLP HDTV, will I still be able to play the same S-3D games coming in the future, like Crysis, on my TV? Or are they only viewable/playable on the PDP screens?

If the PC graphics card or the software supports games like Crysis with the checkerboard method, you can enjoy S-3D games on the 3D Ready DLP HDTV just as on the PDP.

Special thanks go to Samsung for making a second appearance on MTBS. Post your thoughts on this interview HERE!