Ok, here's my solution.
I'll break it down into a three-step description:
Step 1)
2x720x240 x30fps using a cheap TV capture card (the technique I personally use at the moment: certified working)
Shutter output (possibly better results with interlaced out, but I wasn't able to get it working)
Step 2)
2x1280x360 x60fps or 2x1920x540 x30fps using one HD HDMI capture card (the technique I intend to use next year)
Interlaced output (2x1280x720 x30fps possible with shutter mode, but not recommended)
Step 3)
2x whatever resolution your capture cards accept, using two HD capture cards (the technique you're looking for, but which I'll never use because it's too expensive for me)
Dual output (dual projector mode)
So here I go:
Overall Workflow
All three methods follow roughly the same workflow pattern:
-capture via a capture card
-separate raw left and right eye views into two distinct video files
-synchronize and edit as required
-render as two distinct video files : one for each eye
-assemble both files into S-3D compatible format and compress for internet broadcast.
Step 1)
2x720x240 x30fps using a cheap TV capture card (certified working by BlackShark, a.k.a. the technique I use)
IMPORTANT NOTE:
This tutorial requires the use of many different pieces of software, and some of them are quite complex. You will notice that I won't explain everything step by step, so prior knowledge of video editing is a huge plus.
I'm thinking about making a live video tutorial so that you can actually see how I do it in real time (well, I'll fast-forward through the video encoding parts, of course).
I'll do it as soon as I buy my mic.
Hardware :
1 powerful gaming PC with a GPU that has a TV-out and clone mode support (any NVIDIA or ATI card with 2 outputs + TV-out supports this)
1 TV capture card with S-video or composite input (S-video has better quality but composite is also fine); you can find this type of card for ~$50
For composite input, don't forget the 9-pin S-video to universal TV-out adapter (provided with your graphics card)
1 S-video or composite cable; the shorter the better, just make sure it's long enough to connect the TV-out to the capture card.
1 pair of anaglyph glasses
Software :
iZ3D drivers v1.09 or better in shutter-simple mode
Games that run at a steady 30fps in shutter mode with v-sync on during recording
Any FPS drop below 30fps ruins the recording. Small occasional drops may occur during loading screens; these can be corrected, but correction requires manual intervention after recording, so make sure framerate drops stay rare or fixing the video will become a nightmare.
A proper non-linear video editor (Adobe Premiere, Sony Vegas, or other)
VirtualDubMod (the .avi Swiss army knife; you can also use the regular VirtualDub, but I use the mod)
Avisynth (a very powerful script-based video pre-processing filter)
Megui (or any other H264 encoder able to use Avisynth input)
Optional hardware:
The TV capture card can be inserted into a second PC for better performance; this eliminates any influence of the recording process on the game framerate.
Setup :
Hardware connections:
-just plug the TV-out into the capture card input
-if you use a second computer to record, connect the speaker output of the gaming machine to the line-in plug of the recording machine's sound card to capture the sound. If the capture card has an auxiliary audio input, use it instead.
-if you use the same computer to play and record, you may not need this connection; depending on the recording software, you may be able to directly record the output mixer channel (a.k.a. the "what you hear" channel)
On the game machine
-set your display driver in clone mode between your main screen and the TV-out
-set the TV-out in NTSC mode (if you can, try to find a PAL 60Hz setting to get better colours, but I wasn't able to get this working)
-make the TV-out the primary display;
on my computer this unlocked a widescreen 960x540 resolution to record widescreen gaming, which was not available when the main monitor was the primary display (the resolution is automatically resized)
-set iZ3D driver in shutter-simple mode
[ img]images here coming later[ /img]
On the recording machine
Your capture card usually comes with some recording software. It may or may not give you enough control over your recording parameters.
If it doesn't, you can try some universal recording software. The one I use is an open-source recording program called DScaler (http://deinterlace.sourceforge.net/)
-set your capture card recording format as NTSC
-if your capture software allows you to choose your video recording codec: choose Lagarith, a powerful and fast lossless codec (http://lags.leetcode.net/codec.html). It lets you record the exact image without any quality loss while producing smaller files than uncompressed RGB. In the Lagarith settings I recommend checking multithreading, checking "allow null frames", and using YUY2 colour subsampling (to make sure nothing goes wrong with the interlacing)
-if your capture software allows you to resize before recording, you can try to spare some CPU resources by reducing the resolution from 720x480 to 640x480 (especially useful if you play and record on the same machine)
-if your capture software allows you to choose your audio recording format : choose uncompressed wave (sometimes also called PCM)
Ok, now comes the most annoying part of analog video capture: calibrating colours.
The image you grab from your capture card is never perfectly identical to the one you output.
What's more, you're sending a PC image over a very old standard initially designed for TVs, which means a lot of conversions and a huge potential for stuff to screw up.
Set a colourful background on your desktop, open Microsoft Paint and draw some pure black and pure white shapes; you don't need to save the image, it will just help you calibrate the colours and contrast.
In your capture card software, make sure that every setting is set to default.
And make sure the software does NOT deinterlace the footage (deinterlace off or weave)
In your display driver, locate the following settings; you will have to play with them:
-Video and TV colour range (there should be 2 settings : 0-255 or 16-235)
-Video and TV colour settings (brightness, contrast, saturation, gamma, etc...)
[ img]images here coming later[ /img]
Record a few seconds of video showing nothing but your desktop with that small ms-paint window.
Check whether the preview and the recorded footage are identical. If they aren't, you have to check the colour range and colour settings in both machines' display drivers and find the combination that makes the preview, the recorded footage and the main screen all display the same image.
Here are my recommendations (i.e. what I had to do):
-I had to set the video and TV range to 0-255 -> this gave the correct contrast; it also improved the contrast on many videos I watch that suffered from wrong colourspace conversions.
-I had to reduce the saturation value to around 35% -> this prevented the colours from being oversaturated
-I had to use the output displacement and zoom feature to get a fullscreen recording. The recording resolution is already low, I don't want to waste a single pixel of it!
[ img]images here coming later[ /img]
Once you get the recorded image to match the displayed one, you're ready to play (practice a bit first, playing in shutter mode at 60Hz without glasses is difficult) and then record (yeah!)
When playing, make sure you disable any external program that may interfere with performance (downloads, antivirus, internet, P2P apps) or software that may cause noise (instant messaging software is great for stupidly ruining a recording)
Once you have finished recording, you will get an interlaced Stereo 3D file (equivalent to HQFS), which you can view with iZ3D MPC.
You will have to use the swap view feature because there is ZERO guarantee that the fields are properly ordered. But you're nowhere close to having a file ready for broadcast: because you used shutter mode, any framerate drop causes eye swapping, which has to be corrected at the next stage, the editing process.
Viewing the raw footage in iZ3D MPC will give you a rough idea of how much work is required.
If you have just one or two spots where the eyes swap, you've got a great source and you're good to go.
If you have constant eye swapping, then you're screwed; your source is trash and useless, you can delete it.
Anything in between is up to you: if you really want this video badly then you can try, but don't complain that it's long and hard. I tried to fix a messy source like this (Half-Life 2: Lost Coast). I never finished it.
Why use shutter mode to record interlaced? Wouldn't interlaced mode be better?
In theory, if you output a 640x480 interlaced video through the TV-out into the capture card, you should get the interlaced image directly; this would avoid the framerate drop issue, the fields would always be in the correct order, and any framerate would still be ok.
Unfortunately, I've never been able to match the resolution perfectly, so it doesn't work (I get huge amounts of crosstalk/ghosting).
Shutter mode with vsync makes sure that each frame goes into the correct field and provides an extremely clean, zero-ghosting image.
If you are able to achieve this, tell me how you did it, because I'd really like to know.
Having a working interlaced output capture would allow you to completely skip the "Editing process part2", which is quite long (as you will soon discover).
[ img]images here coming later[ /img]
Now comes the editing process, part 1: separating fields
The first step is to separate both fields into two separate video files and extract the audio track.
This can be done easily using VirtualDubMod (http://virtualdubmod.sourceforge.net/)
-1 extract the audio data:
open your raw footage in VirtualDubMod, go to the Streams menu -> Stream list.
Click on the audio stream and choose "extract wave file".
Then double-click the audio stream to disable it (you don't need it anymore)
-2 get the two fields into different files
go to the Video menu -> set full processing mode
then Video menu -> Filters
add the filter called "deinterlace"
The deinterlacing filter has many options, but the one that interests us is "discard field"
select "discard field 2" (this will make VDM keep only field #1)
close the filters menu.
go to Video -> Compression
select the Lagarith codec (you can also use uncompressed, but Lagarith saves disk space with lossless compression).
And save the file; make sure you give it a name you'll easily recognize, like "myvideo-field1.avi"
[ img]images here coming later[ /img]
Then do the second field.
Go back to the filters menu and change the deinterlacing setting from "discard field 2" to "discard field 1" (this will keep only field #2)
and save with a name you'll easily recognize, like "myvideo-field2.avi"
And you're done. You now have your two fields in separate files plus the audio track; you're ready for the next step.
Note that VirtualDubMod can queue jobs, so you can process both fields in a row instead of waiting for each one. Just check the little checkbox at the bottom of the window when saving the file.
[ img]images here coming later[ /img]
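If you're curious what the "discard field" filter actually does under the hood, here's a tiny Python sketch (an illustration only, not the VirtualDub code): an interlaced frame stores one eye on the even scanlines and the other on the odd ones, so keeping every second row gives you one half-height view per eye.

```python
def split_fields(frame):
    """Split an interlaced frame (a list of scanlines) into its two fields."""
    field1 = frame[0::2]  # even scanlines = what "discard field 2" keeps
    field2 = frame[1::2]  # odd scanlines  = what "discard field 1" keeps
    return field1, field2

# Toy 480-line frame: each row is tagged with the eye that rendered it.
frame = [("eye1" if y % 2 == 0 else "eye2", y) for y in range(480)]
left, right = split_fields(frame)
print(len(left), len(right))   # 240 240 : two half-height fields
```

This is also why the project resolution later on is 720x240 and not 720x480: each eye only keeps half the scanlines.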
The editing process : part 2 re-synchronizing
Now is the time to correct the eye swapping and resync if necessary.
During recording, if the 1st eye's view goes into the 1st field and the 2nd eye's view goes into the 2nd field, everything is fine, the video is perfectly synchronized. But if there is any eye swapping, you'll get one eye in one frame while the other eye falls into the next frame.
This 1-frame lag doesn't sound like much, but whenever there is movement in your video it becomes visible. So it has to be corrected.
In order to do this you need a proper non-linear video editor (N.L.E.) like Adobe Premiere or Sony Vegas; some less expensive NLE software should also be able to do the job, but I don't know them all. For my part, I use Sony Vegas.
You might actually be able to do it using VirtualDub, but since it can't manage multiple video tracks, it would be hell, so don't even try.
Now comes the tricky part of this step-by-step guide. NLEs are very advanced programs that allow you to do amazing things; they have a huge amount of features, and we will use less than 1% of what they are capable of.
NLEs are all different from one another but have roughly the same features. Because of this, I won't tell you step by step which icons to click, I will only tell you what operations to make. Knowing how to perform them is up to you.
If you already know how to use an NLE, all I should have to say is "synchronize your sequences every time they unsync", you'd already know how to do it, and I'd move straight on to the next phase...
So keep in mind that everything we'll be required to do is basic stuff: drag and drop, setting the beginnings and ends of sequences, moving a sequence on a timeline, adding a simple colour filter (for the anaglyph preview), and removing the filter before making the final render.
NLEs have made significant improvements in their user interfaces and are now quite logically laid out; with a little logic you should be able to find what you are looking for quickly. If you are REALLY lost, I'll maybe make a small video tutorial (I said MAYBE!)
The first thing you need to do is to set up your project correctly.
NLEs can work with files at any resolution and any framerate while outputting a totally different format. Don't expect the NLE to autodetect what type of output you want (it's sure to guess wrong); you have to tell the NLE what your files are and how you want them transformed.
This is done in the project properties: they should open automatically when starting a new project (if they don't appear, look in the menus).
Here's what your project properties should look like :
You want the NLE to work natively at the exact same resolution of your files in order to preserve the maximum quality.
-resolution is 720x240 (or 640x240 if you recorded 640x480)
-progressive source (not interlaced)
-30.00 fps
-pixel ratio : you have to calculate it, here is how.
the video resolution and the display do not need to have the same width/height ratio; the pixel ratio stretches the pixels until you get the desired displayed ratio.
It is applied by the following formula :
( resolution width / resolution height ) * pixel ratio = ( display width / display height)
or if you prefer :
pixel ratio = ( display width / display height) / ( resolution width / resolution height )
[ img]images here coming later[ /img]
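To double-check your pixel ratio before typing it into the NLE, you can run the formula above through a few lines of Python (just arithmetic, nothing NLE-specific):

```python
def pixel_ratio(res_w, res_h, disp_w, disp_h):
    # pixel ratio = (display w/h) / (resolution w/h), as in the formula above
    return (disp_w / disp_h) / (res_w / res_h)

# A 720x240 field shown on a 4:3 display:
print(round(pixel_ratio(720, 240, 4, 3), 4))    # 0.4444
# The same field shown on a 16:9 display:
print(round(pixel_ratio(720, 240, 16, 9), 4))   # 0.5926
```

A ratio well below 1 means the pixels are much taller than wide, which is expected here since each field only kept half the scanlines.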
You're now ready to place your sequence on the timeline.
Bring the 3 files you've just created with VirtualDubMod onto the timeline. You'll get 2 video layers (one for each field) and 1 stereo audio layer.
The 3 sequences should have the same length (since you extracted all of them from the same source file).
Now, because the field1 and field2 video sequences are stored in .avi files, your NLE thinks they have square pixels (pixel ratio 1:1), which is wrong; in the video preview you should see these sequences squashed.
In order to correct this, go to the sequence properties and tell your NLE that these sequences use the pixel ratio you previously calculated. Your sequences will now play back normally.
[ img]images here coming later[ /img]
Let's work !
To make things clear, before explaining what to do, I'll first tell you what we won't do.
-we will NOT stick the views side by side now; this will be done later
-what we will do is use one track for each eye, sync both eyes and output each track separately.
-in order to preview what we are doing, we will use track filters to make an anaglyph video preview in real time.
Setting up our anaglyph preview.
First you'll need to find the track (or layer) blend controls:
You will have to set your video layers to "additive", just like in a photo editor.
Next is applying the colour filter to the whole track.
In Sony Vegas the colour filter needed is called "channel blend"; I don't know what it's called in other NLEs, but it should have a similar name.
This colour filter asks the user to enter a matrix of RGB (or RGB+Alpha) values.
You can use any working anaglyph colour matrix you want, it's just for your preview. You can find a number of different anaglyph matrices at 3dtv.at (http://3dtv.at/Knowhow/AnaglyphComparison_en.aspx)
I recommend creating your own presets in your NLE so that next time you work on a 3D project you can instantly get these values back without having to type them again.
[ img]images here coming later[ /img]
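If you wonder what the NLE computes with that matrix, here's a little Python model (my own illustration, not Vegas code; the matrix below is the simple "colour anaglyph" variant where the left eye supplies the red channel and the right eye the green and blue, one of the sets listed at 3dtv.at):

```python
# Per-eye 3x3 matrices: each output channel is a weighted sum of the
# left eye's RGB and the right eye's RGB.
LEFT_M  = [[1, 0, 0],
           [0, 0, 0],
           [0, 0, 0]]   # output red comes from the left eye's red
RIGHT_M = [[0, 0, 0],
           [0, 1, 0],
           [0, 0, 1]]   # output green/blue come from the right eye

def blend(left_rgb, right_rgb):
    """Blend one left-eye and one right-eye pixel into an anaglyph pixel."""
    out = []
    for row in range(3):
        v = sum(LEFT_M[row][c] * left_rgb[c] + RIGHT_M[row][c] * right_rgb[c]
                for c in range(3))
        out.append(min(255, max(0, round(v))))
    return tuple(out)

# A white pixel in the left eye and a black one in the right gives pure red:
print(blend((255, 255, 255), (0, 0, 0)))   # (255, 0, 0)
```

With the tracks set to "additive", the NLE effectively performs this sum for every pixel of every frame, which is why the preview works in real time.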
Now that you are able to see the 3D image, start by removing any unnecessary part of the video.
When you started recording, you were probably still under windows, the game not even loaded.
Start all sequences where you want the video to actually start and end them where you want it to end.
Make sure you keep your sequences in sync (especially the audio).
Now play through your sequence and find every eye swap.
Wherever you find an eye swap, split the sequence. Make sure you are frame-accurate for both the left and right eye tracks; both tracks may not swap eyes at the exact same frame (for example, in case of desynching). Put your anaglyph glasses on, or use the Mute Track feature, to make sure you are splitting the correct sequence.
[ img]images here coming later[ /img]
Once you have split your sequences at every eye swap, check with your anaglyph glasses where the eyes are correctly positioned and where they have to be switched, and swap the sequences wherever needed.
You will notice that some sequences will have a one-frame overlap with other sequences. This is perfectly normal since we haven't re-synchronized the sequences yet.
[ img]images here coming later[ /img]
Now let's resynchronize our sequences.
For every sequence pair, check very closely if you see any small lag between the left eye and the right eye.
If you don't see any, this means your sequences are synchronized : do not touch them.
If you see a difference (the 1-frame lag due to the interlacing) then you have to move one of the eyes by 1 frame (either forward or backwards) to resynchronize both eyes.
Try to choose which eye to move according to the small gaps created where you split your sequences.
You will notice that some splits can be filled perfectly, making the eye swap totally invisible when viewing the video, but others will resist and leave a single repeated frame identical in both eyes.
There is nothing you can do in these cases; they have to show up some way.
If you have very few of these, you can try to hide them by starting the next sequence earlier (overwriting the glitchy frame). This works a few times, but remember that every time you do this you make the video shorter while the audio keeps its original length.
So do not abuse this technique or you may completely desync the audio from the video. More than 2 or 3 frames of desync between the audio and the video becomes noticeable, so avoid doing this more than two or three times.
[ img]images here coming later[ /img]
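To see why a single one-frame shift fixes a desynched pair, here's a toy Python model (purely illustrative; the numbers stand for the scene instant each frame shows):

```python
def resync(left, right):
    """Align a right-eye track that runs one frame behind the left-eye track
    by dropping the left track's last frame and the right track's first one."""
    return left[:-1], right[1:]

left  = [0, 1, 2, 3, 4]     # scene instant shown by each left-eye frame
right = [-1, 0, 1, 2, 3]    # the right eye arrived one frame late
l, r = resync(left, right)
print(l == r)   # True: every remaining frame pair shows the same instant
```

Notice the trimmed frames at the edges: those are exactly the small gaps and repeated frames described above, which you then hide (or live with) at the split points.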
Once both eye tracks are complete and in sync, you are ready to render each track.
For a normal video, you would render all the audio and video at the same time, but since you are making a stereo 3D video, you should render each video track and the audio separately.
Start by rendering the audio: mute all video tracks and render the audio as an uncompressed .wav file (also called PCM), using the same sampling rate as the one you recorded (it should be 48000Hz, but 44100 is also fine)
Then do the video tracks: mute one of the video tracks, and deactivate the anaglyph filters to get the full-colour picture back.
Render the video. Be careful with the render settings: most NLEs will automatically display some random preset you don't want (like mini-DV).
Check every setting and make sure you are exporting to the same format as you specified earlier in the project properties; for file format and codec, use a .avi file with the Lagarith codec.
Do this for both video tracks and name the files accordingly: this time you really have left-eye and right-eye video files, so make sure you don't mix them up.
[ img]images here coming later[ /img]
Final part : Video encoding for the internet
Now that we have our two files, we can combine them into the final desired format.
There are multiple ways to distribute stereoscopic 3D content, but the format I recommend is side-by-side crosseyed views, since it is compatible with virtually all compression formats and reduces the amount of possible playback issues due to codec and player misconfiguration.
In this part of the guide, we will compress our video in the H264 format, the most powerful video compression format available at the present time, using the free open-source x264 encoder and Megui (for a friendlier user interface and its very useful presets), and use Avisynth to stack our views on the fly during encoding.
Making the avisynth image stacking script :
Avisynth is a free open-source pre-processing filter: it can open videos, work on them, and finally output an uncompressed video stream ready to be used by x264.
Avisynth does not have a graphical user interface (GUI); it uses text scripts as input commands.
In order to create a script, just create a simple Notepad text (.txt) file and rename it with a .avs extension. Below is the script you will have to write (use copy/paste and modify the filenames and directories according to your situation).
There are some automated Avisynth script creators out there, but I don't think any of them is really useful in our case.
Lines beginning with a # are comments to help you understand what we are doing; they are not processed by Avisynth and you don't need to remove them.
Code:
# Assign useable names to each video stream
VideoLeft = AVISource("C:\Myfolder\MyVideo-lefteye.avi", audio=false)
VideoRight = AVISource("C:\Myfolder\MyVideo-righteye.avi", audio=false)
# Stack our videos
# current used command is crosseyed horizontal stacking, for vertical stacking, switch the # symbols between the two following lines
VideoStacked = StackHorizontal(VideoRight,VideoLeft)
#VideoStacked = StackVertical(VideoRight,VideoLeft)
#YV12 colorspace conversion, x264 requires this, if you forget it, Megui will prompt you to add this command
ConvertToYV12(VideoStacked)
Compressing to H264
Megui is a free open-source graphical user interface for various encoding formats, but its main use is H264 encoding via the x264 encoder.
It is certainly not the most user-friendly H264 encoding software in the world (far from it), but it is constantly updated and has an auto-update feature that grabs the latest available version of every component each time you start it, which is why I recommend it. You can also use any other x264 graphical user interface you want, provided it accepts Avisynth input.
Megui's auto-update should start automatically; make sure you grab all the presets (tip: use the right-click menu to select all profiles)
Under the input tab, in the upper video part :
Put your Avisynth .avs script in the Avisynth script box. This will open an Avisynth preview window showing you what the image being sent to the H264 encoder looks like. You should see your side-by-side video with a pixel aspect ratio of 1:1 (square pixels), so if you used an anamorphic resolution your video will look squashed; don't worry, this will be taken care of later.
Under video output, select the name of the VIDEO output file, this is the compressed video stream only (no audio), it's a temporary file megui needs before assembling the audio and video together.
Under Encoder settings, select "Unrestricted 2pass HQ"
This setting provides almost the best of what x264 can do, without the insanely time-consuming options that slow down encoding too much for little gain.
Warning: THESE ARE ALREADY VERY INTENSIVE SETTINGS; you will immediately notice a significant encoding speed difference compared to the usual DivX encoders.
You will also notice the tempting hardware-assisted DXVA profiles. x264 does not feature hardware-accelerated encoding; these profiles exist to ensure you can make DXVA-compliant streams for video players. Unfortunately, your stream resolution is non-standard and thus will not be accelerated by current graphics cards. This may change in the future... or not. So there's no need to use these profiles for Stereo-3D content at the moment.
Under file format, select either mp4 or mkv depending on your desired final format.
(if you want to publish your final file as .mp4 use mp4, if you want to publish as .mkv use mkv)
Next in the audio part :
Put your uncompressed audio in the Audio input box
Under Audio output, select the name of the AUDIO output file, this is the compressed Audio stream only, it's also a temporary file megui needs before assembling the audio and video together.
Under Encoder settings, you have a wider range of options via different encoders (MP3, Vorbis, AAC, etc...); if you don't know what to use, select LAME MP3: MP3-128ABR
Now, click the "AUTO-ENCODE" button in the bottom right corner.
This will open the final summary of your encoding and the filesize/bitrate calculator.
Under Name of output, enter the name of your final file (this is the true video containing both video and audio)
Under "size and bitrate" comes one of the most important things you have to decide.
The bitrate determines how much data you allow x264 to use to store your video.
The smaller the bitrate, the smaller the file will be but also the lower the quality will be.
The higher the bitrate, the bigger the file will be but also the higher the quality will be.
If your internet host restricts the maximum filesize (and you don't want to split your video into multiple files), you can set the size you want (always keep a few MB of safety margin)
Otherwise, choose a bitrate.
Now, the ideal bitrate highly depends on the quality you wish to achieve and on the very video you are encoding.
A 3D-HD action movie will require much more bitrate than a Windows video tutorial, for example.
The only way to know for sure what bitrate you should use (what is too low and what is overkill) is to try: make the full encode with some setting and check whether the quality of the final file satisfies you.
But I know this is long and time-consuming, so to give you some guidelines for your first tries, here are the average bitrates I personally use to achieve almost transparent quality.
-SD 480p : 1500~2000 kbits/s
<- I use this for my Stereo3D captures with my composite capture card.
-HD 720p : 3000~5000 kbits/s
-HD 1080p : 6000~8000 kbits/s
Again, these are only my personal values; depending on your sources and your desired quality you may end up with completely different values.
Just make sure you don't overkill it, remember it's for distribution over the internet !
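If you want to predict the final filesize from your chosen bitrate (or the other way around), the arithmetic behind Megui's size/bitrate calculator is simple enough to check by hand; here's the back-of-the-envelope version in Python:

```python
def file_size_mb(video_kbps, audio_kbps, seconds):
    """Approximate final filesize in MB (ignores the small container overhead)."""
    total_bits = (video_kbps + audio_kbps) * 1000 * seconds
    return total_bits / 8 / (1024 * 1024)

# A 10-minute SD capture at 1750 kbit/s video + 128 kbit/s MP3 audio:
print(round(file_size_mb(1750, 128, 600), 1))   # 134.3 (MB)
```

Keep a few MB of safety margin on top of this if your host enforces a hard filesize limit, as mentioned above.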
Once you have decided on the bitrate, click the "QUEUE" button
Your encoding will not start immediately, it's only added to the job queue, so that you can add multiple encoding jobs and let your computer run them all during the night.
To start the actual encoding process, go to the Queue tab, and click the start button.
Some time later (depending on the length of your video)...
Your video is complete, go check it and enjoy your Stereo3D video file in any 3D enabled player supporting Side by Side videos.
Setting Aspect ratio tags for anamorphic video
Our video doesn't have square pixels; in order to display it with the correct aspect ratio, the video player needs to know what the correct ratio is.
With mkv files :
this can be done very easily using mkvmergeGUI, a tool that is part of the mkvtoolnix package (http://www.bunkus.org/videotools/mkvtoolnix/)
Add your video file to the "input files" box (you can drag and drop it, it's easier)
In the tracks box you should see a video and an audio track; select the video track.
Go to the "format specific options" tab and, under aspect ratio, enter the correct display ratio you want to use.
NOTICE : this is the display ratio of the entire side by side image (not the pixel ratio you used in the video editor)
if your video is 2x 16/9 side by side, enter 32/9 (two 16/9 images side by side)
if your video is 2x 16/10 side by side, enter 32/10, etc...
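Since getting this ratio wrong is easy, here's a two-line Python check of the "double the width" rule (note that Python's `Fraction` reduces 32/10 to 16/5, which is the same ratio):

```python
from fractions import Fraction

def sbs_display_ratio(eye_w, eye_h):
    """Display ratio of a side-by-side frame: one eye's ratio, doubled in width."""
    return Fraction(2 * eye_w, eye_h)

print(sbs_display_ratio(16, 9))    # 32/9
print(sbs_display_ratio(16, 10))   # 16/5 (the reduced form of 32/10)
```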
MkvmergeGUI keeps your original file untouched, so you have to save to a new file: enter a new name in the "output filename" box and click the "Start muxing" button.
You will notice the stereoscopic tag; I'll talk about it just below.
With mp4 files :
This can be done easily using YAMB, an easy-to-use graphical user interface for the mp4box command-line tool (http://yamb.unite-video.com/download.html)
In the left menu, select "Creation", and double-click on
"Create an MP4 file with multiple audio, video, subtitle and chapters streams"
Click on the "+" icon to add a file and go pick your .mp4 video.
In the input list you should see a video and an audio track; select the video track and click the "Properties" button
Under "pixel aspect ratio" enter the pixel ratio you used previously in the video editor (not the display ratio like with mkv files)
Click the OK button
Add an output name (like mkvmergeGUI, YAMB will create a new file with the new settings and leave the original untouched).
And click next and finish.
Your video file should now have the correct aspect ratio straight away when opened in a video player (except Stereoscopic Player, which asks about the aspect ratio for every single file, even when the aspect ratio tags are there).
Your file is now ready for broadcast over the internet.
Dual Stream Stereoscopic files
Dual stream stereoscopic files have both views stored in separate video streams. It's like having one video for each eye, but in one single file, which makes it easier to download and manage.
At the moment only two video formats allow dual stream stereoscopic files, and each one has issues which make dual stream impractical for now:
Matroska (.mkv) has an official Stereo3D tag (which you noticed just above in mkvmergeGUI), but VLC is the only player which supports this feature at the moment: VLC opens two windows with the two streams and keeps them in sync (there are no 3D display conversion plugins yet, so the only way to use it is to display with dual projectors)
Windows Media Video (.wmv) via non-official custom tags proposed by Peter Wimmer, but the only player which supports them is Peter Wimmer's Stereoscopic Player, and I personally don't like the Windows Media Video format.
Dual Stream Stereoscopic files have some advantages :
-only one file without any user configuration (the file knows which video stream corresponds to each eye since Left/Right tags are mandatory)
-the same file is used to play in Stereo3D but also plays as 2D if played in a standard non-3D enabled player
-built-in dual-core optimisation (two completely separate video streams to decode) improves performance with high resolution videos on multicore CPUs.
but also have some drawbacks :
-If you open the file in a standard 2D player, there is no way to tell the file has a S-3D track, you (the creator of the file) have to add a notice message to warn the user about it.
-There is no compression optimisation possible since the streams are completely separated
That said, the compression gain currently obtained from side by side is relatively low, since today's video compression algorithms don't take full advantage of the side-by-side layout; there is huge improvement potential in this area.
H264 playback : Free H264 codecs and multithreading
H264 is a very powerful video compression standard, it provides the best available video quality/filesize ratio in the world at the moment, but it requires more CPU power for playback than previous codecs (Mpeg2, quicktime, DivX, wmv, etc...) especially when dealing with fullHD video, where H264 shines.
H264 gained a terrible reputation with 1080p content: a single-core processor is barely enough to decode such a stream, so playing it smoothly requires an H264 decoder optimized for multi-core CPUs, which most decoders were not.
This led a lot of people to wrongly believe that hardware acceleration is required to play H264 streams.
Actually, unless you still use a very, very old PC (>8 years old), your PC is able to play standard definition H264 videos.
Today, iPod nanos can play SD H264 videos (640x480), and any dual-core CPU is able to decode 1080p H264 (yes, even 3-year-old budget AMD Athlon X2 or Intel Pentium D CPUs)
These are the reference H264 codecs for Windows that I recommend:
CoreAVC Pro (http://www.coreavc.com) - payware: $15
FFDshow (http://ffdshow-tryout.sourceforge.net) - FREE -> this codec is used in VLC and comes in absolutely every free codec pack for Windows you can find on the internet
The reason why I recommend a non-free software is because FFDshow still does not have a multicore-optimized H264 decoder, but it is coming
(at the time I write this tutorial, the "FFDshow-MT" patches are in beta testing with no official website; you may find some builds if you search the internet).
DivX Inc. is also developing an H264 decoder, which should be available in 2009 (price not announced yet); beta versions can be found on the DivX Labs website.
Anyway, if you are a 3D gamer, your current PC should be powerful enough to play high definition videos with ease, so you shouldn't worry about the H264 hardware acceleration marketing stuff.
-More to come later-
The little differences between my currently applied SD process and an HD workflow with HD capture cards
-coming soon-