Recording Games in 3D

warface
One Eyed Hopeful
Posts: 43
Joined: Tue Mar 06, 2007 11:41 pm
Location: San Diego, CA

Recording Games in 3D

Post by warface »

Hi All,

iZ3D is running a contest. This contest is very simple and open to all of you here at MTBS.

We're looking for ways to record a video game in 3D. As it stands right now, because of our support for 64-bit operating systems and applications, the iZ3D drivers are unable to use FRAPS for this purpose. This is why we're looking for alternate ways to record in 3D.

Here are the key points we are looking to accomplish by creating these options:

- The video needs to be in full resolution. The great accomplishment of the iZ3D Monitor is that we don't lose any resolution in 3D, so we need to retain full resolution in the recording as well.

- The video can't be choppy. Recording in 3D can be processor heavy, so we're looking for a way that isn't too demanding. If the video needs to be compressed to make it manageable, you will need to provide a compressor that doesn't sacrifice too much video quality.

- We need to be able to do this on any game that is supported by iZ3D. We want to pay special attention to the most recent releases of games.


So.......what is the pay off for coming up with this?

- A 22" iZ3D Monitor

If you can come up with a way to record a PC video game in 3D, have tested it, can assure us that it works, and we can confirm that it works......we will award that person an iZ3D Monitor.

You will need to submit your Final Ideas to rapp@iz3d.com

We will keep everyone up to date on who wins this contest.

Good Luck!

Regards,

Warface
yuriythebest
Petrif-Eyed
Posts: 2476
Joined: Mon Feb 04, 2008 12:35 pm
Location: Kiev, ukraine

Post by yuriythebest »

I spent 4 hours trying different screen capturing software, all to no avail; even the most promising programs would crash when starting a game with the iZ3D driver enabled. If there is a program that won't, I have not found it yet. Most likely you will have to modify the iZ3D code and build your own recording feature. Otherwise the only solution is a hardware recording method such as TV-out or filming the screen with a camera, preferably from a side-by-side output.
Oculus Rift / 3d Sucks - 2D FTW!!!
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

I agree with yuri. The only way to get full resolution without choppiness is to use an external recording device. This could be a second computer with a capture card that has VGA or DVI input.

I found some.
UFG-05 4E : Single input so you need two cards. Can capture 60 fps at 1280x720 and 1024x768.
XtremeRGB-II : Has two input channels so you need only one card. Supports 1920x1080 DVI or 2048x1536 Analog resolution. It can acquire up to 60 fps but maybe not at the highest resolution.
AccuStream 50, 75 & 170 : Single input so you need two cards. AccuStream can stream to memory 1280 x 1024 x 60 Hz video without dropping any frames.

By searching more, you could probably find others. I haven't tested any of them. I don't know whether HDCP issues arise with these cards and game output. They are probably all very expensive and require a very fast computer and fast hard drives in RAID. They might also have sync issues between the two captured inputs if you can't sync the cards or channels together.
Last edited by Tril on Thu Nov 06, 2008 3:59 pm, edited 1 time in total.
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift

BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Post by BlackShark »

Copy of my own message on the iZ3D board:

I have ideas and a whole workflow (hardware, programs and codec settings) using external capture hardware that I can share with you, but there are quite a few issues:

1) I don't know of any DVI-input capture card
2) I don't know of any capture card, whatever the input, that supports 1680x1050 resolution.
There may be some but I don't know of them.

3) It's expensive: you need at least 2 powerful PCs and some extra hardware
-1 to play
-1 to record
+ 2 DVI output clone devices

4) It's so expensive that I can't try it myself, and you won't find many gamers who can afford such an expensive experiment.
Standard HD capture cards can cost $1000+ each!!!

-------------------------------------------------------------------------------------------------

I have personally tested a working method for recording interlaced SD material using a cheap ($50) analog TV capture card.
http://www.mtbs3d.com/phpBB/viewtopic.php?t=2020

I also intend to step things up next year (when I buy a new PC along with the iZ3D 26" model).
I'll buy the cheapest HDMI HD capture card I can find (Blackmagic Intensity) and try to record some interlaced HD footage.
But that's as far as my budget allows me to go.

I can still send you a step-by-step description of what I've done and what I intend to do, if you wish.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
Tril
Certif-Eyed!
Posts: 655
Joined: Tue Jul 31, 2007 6:52 am
Location: Canada

Post by Tril »

BlackShark wrote:2) I don't know of any capture card, whatever the input, that supports 1680x1050 resolution.
That's the iZ3D monitor's resolution. If you record in iZ3D mode, you will only be able to watch the video on an iZ3D monitor.

You can use any other resolution with the dual output mode.
CPU : Intel i7-7700K
RAM : 32 GB ram
Video card : GeForce GTX 980 Ti
OS : Windows 10
Display : Samsung UN40JU7500 Curved 40-Inch UHD TV with shutter glasses
HMD : Oculus Rift

BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Post by BlackShark »

Tril wrote:You can use any other resolution with the dual output mode.
Yes, but who cares?
They want to record 1680x1050 full-resolution 3D.
I'm pretty sure some (if not all) DVI capture cards supporting higher resolutions should be able to record 1680x1050.
Unfortunately I have already had an unpleasant surprise with an SD capture card that was unable to record anything not already profiled by the developers.
It was a Pinnacle Studio 500 PCI card which officially supports "PAL, SECAM and NTSC" but was unable to record either PAL60 or NTSC output from a PAL game console (NTSC resolution and frequency with PAL colourspace). The settings were locked in the card's firmware, so no software fix worked.

With such expensive hardware I wouldn't take any risk: either it's officially announced, or confirmed via email by the hardware support team, or don't buy it.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
sharky
3D Angel Eyes (Moderator)
Posts: 1819
Joined: Fri May 25, 2007 4:08 am
Location: Italy

Post by sharky »

I have an important question. It's possible that I have a valid idea in mind, but I need one piece of information first.

Does it have to be idiot-proof? I mean, must everybody be able to do it, or just the iZ3D team?
Adam Savage: "I reject your reality and substitute it for my own."
Jamie Hyneman: "It's really cool, but really unusual."

scouser
iZ3D Developer
Posts: 1
Joined: Mon Feb 11, 2008 6:21 pm

Post by scouser »

It has to be iZ3D-team-proof, or to be more exact, it has to be "iZ3D idiot proof".

cheers :)
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Post by BlackShark »

OK, here's my solution.

I'll make it a 3-step description:
Step 1)
2x 720x240 at 30fps using a cheap TV capture card (the technique I personally use at the moment: certified working)
Shutter output (possibly better results with interlaced out, but I wasn't able to get it working)

Step 2)
2x 1280x360 at 60fps or 2x 1920x540 at 30fps using one HD HDMI capture card (the technique I intend to use next year)
Interlaced output (possibly 2x 1280x720 at 30fps with shutter mode, but not recommended)

Step 3)
2x whatever resolution your capture cards accept, using two HD capture cards (the technique you're looking for, but which I'll never use because it's too expensive for me)
Dual output (dual projector mode)




So here I go:

Overall workflow
All three methods follow roughly the same workflow pattern:
-capture via a capture card
-separate the raw left and right eye views into two distinct video files
-synchronize and edit as required
-render as two distinct video files: one for each eye
-assemble both files into an S-3D compatible format and compress for internet broadcast.


Step 1)
2x 720x240 at 30fps using a cheap TV capture card (certified working by BlackShark, a.k.a. the technique I use)


IMPORTANT NOTE:
This tutorial requires many different programs, some of them quite complex. You will notice that I won't explain everything step by step, so prior knowledge of video editing is a huge plus.
I'm thinking about making a live video tutorial so that you can actually see how I do it in real time (well, I'll fast forward when the computer does the video encoding, of course).
I'll do it as soon as I buy my mic.

Hardware:
1 powerful gaming PC with a GPU that has TV-out and clone mode support (any NVIDIA or ATI card with two outputs + TV-out supports this)
1 TV capture card with S-video or composite input (S-video has better quality, but composite is also fine); you can find this type of card for ~$50
For composite input, don't forget the 9-pin S-video to universal TV-out adapter (provided with your graphics card)
1 S-video or composite cable, the shorter the better; just make sure it's long enough to connect the TV-out to the capture card.
1 pair of anaglyph glasses

Software:
iZ3D drivers v1.09 or better, in shutter-simple mode
Games that run at a steady 30fps in shutter mode with v-sync on during recording
Any FPS drop below 30fps ruins the recording. Small occasional drops may occur during loading screens; these can be corrected, but they require manual intervention after recording, so make sure the framerate drops are not too frequent or correcting the video will become a nightmare.
A proper non-linear video editor (Adobe Premiere, Sony Vegas, or other)
VirtualDubMod (the .avi Swiss army knife; you can also use regular VirtualDub, but I use the mod)
Avisynth (a very powerful script-based video pre-processing filter)
MeGUI (or any other H264 encoder front-end able to use Avisynth input)

Optional hardware:
The TV capture card can be installed in a second PC for better performance; this eliminates any influence of the recording process on the game framerate.

Setup:
Hardware connections:

-just plug the TV-out into the capture card input
-if you use a second computer to record, connect the speaker output of the gaming machine to the line-in of the sound card in your recording machine to record sound. If the capture card has an auxiliary audio input, use that instead.
-if you use the same computer to play and record, you may not need this connection; depending on the recording software, you may be able to record the output mixer channel directly (a.k.a. the "what you hear" channel)

On the game machine
-set your display driver to clone mode between your main screen and the TV-out
-set the TV-out to NTSC mode (if you can, try to find a PAL 60Hz setting to get better colours, but I wasn't able to get that working)
-make the TV-out the primary display;
on my computer this unlocked a widescreen 960x540 resolution for recording widescreen gaming that was not available when the main monitor was the primary display (the resolution is automatically resized)
-set the iZ3D driver to shutter-simple mode

[ img]images here coming later[ /img]

On the recording machine
Your capture card usually comes with some recording software. It may or may not give you enough control over your recording parameters.
If it doesn't, you can try some universal recording software. The one I use is an open-source program called DScaler (http://deinterlace.sourceforge.net/)
-set your capture card recording format to NTSC
-if your capture software allows you to choose the video recording format: choose Lagarith, a powerful and fast lossless codec (http://lags.leetcode.net/codec.html). It records the exact image without any quality loss while producing smaller files than uncompressed RGB. In the Lagarith settings I recommend checking multithreading, checking allow null frames, and using YUY2 colour subsampling (to make sure nothing goes wrong with the interlacing)
-if your capture software allows you to resize before recording, you can save some CPU resources by reducing the resolution from 720x480 to 640x480 (especially useful if you play and record on the same machine)
-if your capture software allows you to choose the audio recording format: choose uncompressed wave (sometimes also called PCM)

OK, now comes the most annoying part of analog video capture: calibrating colours.
The image you grab from your capture card is never perfectly identical to the one you output.
What's more, you're sending a PC image over a very old standard initially designed for TVs, which means a lot of conversions and a huge potential for things to go wrong.

Set a colourful background on your desktop, then open Microsoft Paint and draw some pure black and pure white shapes. You don't need to save the image; it will just help you calibrate the colours and contrast.
In your capture card software, make sure every setting is at its default.
And make sure the software does NOT deinterlace the footage (deinterlacing off, or weave)

In your display driver, locate the following settings; you will have to play with them:
-video and TV colour range (there should be two options: 0-255 or 16-235)
-video and TV colour settings (brightness, contrast, saturation, gamma, etc.)

[ img]images here coming later[ /img]

Record a few seconds of video showing nothing but your desktop with that small MS Paint window.
Check whether the preview and the recorded footage are identical. If they aren't, check the colour range and colour settings in both machines' display drivers and find the combination that makes the preview, the recorded footage and the main screen all display the same image.

Here are my recommendations (i.e. what I had to do):
-I had to set the video and TV range to 0-255 -> this gave the correct contrast; it also improved the contrast on many videos I watch that suffered from wrong colourspace conversions.
-I had to reduce the saturation value to around 35% -> this prevented the colours from being oversaturated
-I had to use the output displacement and zoom feature to get a fullscreen recording. The recording resolution is already low; I don't want to waste a single pixel of it!

[ img]images here coming later[ /img]

Once you get the recorded image to match the displayed one, you're ready to play (get a little practice first: playing in shutter mode at 60Hz without glasses is difficult) and then record (yeah!)
When playing, make sure you disable any external programs that may interfere with performance (downloads, antivirus, internet, P2P apps) or software that may cause noise (instant messaging programs are great at stupidly ruining a recording)

Once you have finished recording, you will get an interlaced stereo 3D file (equivalent to HQFS), which you can view with iZ3D MPC.
You will have to use the swap view feature because there is ZERO guarantee that the fields are properly ordered. But you're nowhere close to having a file ready for broadcast. Because you used shutter mode, any framerate drop will cause eye swapping, which has to be corrected at the next stage: the editing process.
Viewing the raw footage in iZ3D MPC will give you a rough idea of how much work is required.
If the eyes swap only once or twice, you've got a great source and you're good to go.
If you have constant eye swapping, you're screwed: the source is trash and useless, you can delete it.
Anything in between is up to you. If you really want the video badly you can try, but don't complain that it's long and hard; I tried to fix a messy source like this once (Half-Life 2: Lost Coast) and never finished it.

Why use shutter mode to record interlaced? Wouldn't interlaced mode be better?
In theory, if you output 640x480 interlaced video through the TV-out and into the capture card, you should be able to capture the interlaced image directly. This would avoid the framerate drop issue: the fields would always be in the correct order and any framerate would still be fine.
Unfortunately, I've never been able to match the resolutions perfectly, so it doesn't work (I get huge amounts of crosstalk/ghosting).
Shutter mode with v-sync makes sure that each frame goes into the correct field and provides an extremely clean, zero-ghosting image.
If you are able to achieve this, tell me how you did it because I'd really like to know.
A working interlaced output capture would let you completely skip "Editing process: part 2", which is quite long (as you will soon discover).

[ img]images here coming later[ /img]

Now comes the editing process, part 1: separating fields
The first step is to separate the two fields into two separate video files and extract the audio track.
This can be done easily using VirtualDubMod (http://virtualdubmod.sourceforge.net/)
-1 extract the audio data:
open your raw footage in VirtualDubMod, go to the streams menu -> stream list.
Click on the audio stream and extract it as a WAV file.
Then double-click the audio stream to disable it (you don't need it anymore)
-2 get the two fields into different files
go to the video menu -> set full processing mode
then video menu -> filters
Add a filter called "deinterlace"
The deinterlacing filter has many options, but the one that interests us is called "discard field"
select discard field 2 (this will make VDM keep only field #1)
close the filters menu.
go to video -> compression
select the Lagarith codec (you can also use uncompressed, but Lagarith saves disk space with lossless compression).
And save the file; make sure you give it a name you easily understand, like "myvideo-field1.avi"

[ img]images here coming later[ /img]

Then do the second field:
go back to the filters menu and change the deinterlacing setting from discard field 2 to discard field 1 (this will keep only field #2)
and save with a name you easily understand, like "myvideo-field2.avi"

And you're done. You now have your two fields in separate files plus the audio track; you're ready for the next step.
Note that VirtualDubMod can add jobs to a queue, so you can set up both fields and process them in one run instead of waiting for each one. Just check the little checkbox at the bottom of the window when saving the file.
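If you prefer scripting over clicking, the same field separation can in principle be done with a tiny Avisynth script instead of the deinterlace filter (an untested sketch on my side, with example file names; the audio still gets extracted as described above):

Code: Select all

# Load the raw interlaced capture (adjust the path to your own file)
source = AVISource("C:\Myfolder\myvideo-raw.avi", audio=false)

# Split every frame into its two half-height fields (frame rate doubles)
fields = source.AssumeTFF().SeparateFields()

# Field #1 sits at the even positions, field #2 at the odd positions
field1 = fields.SelectEven()
field2 = fields.SelectOdd()

# Return one of them, open the script in VirtualDubMod and save it with Lagarith,
# then swap the two lines below and save the other field.
return field1
#return field2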

[ img]images here coming later[ /img]

The editing process, part 2: re-synchronizing
Now is the time to correct the eye swapping and resync if necessary.
During recording, if the 1st eye's view goes to the 1st field and the 2nd eye's view goes to the 2nd field, then everything is fine and the video is perfectly synchronized. But if there is any eye swapping, you'll get one eye in one frame while the other eye falls into the next frame.
This 1-frame lag doesn't sound like much, but whenever there is movement in your video it becomes visible, so it has to be corrected.
To do this you need a proper non-linear video editor (NLE) like Adobe Premiere or Sony Vegas; some less expensive NLE software should also be able to do the job, but I don't know them all. Personally I use Sony Vegas.
You might actually be able to do it using VirtualDub, but since it can't manage multiple video tracks it would be hell, so don't even try.

Now comes the tricky part of this step-by-step guide. NLEs are very advanced programs that allow you to do amazing things, so they have a huge number of features, and we will use less than 1% of what they are capable of.
NLEs are all different from one another but have roughly the same features. Because of this, I won't tell you step by step which icons to click; I will only tell you which operations to perform. Knowing how to perform them is up to you.
If you already know how to use an NLE, all I should have to say is "synchronize your sequences every time they go out of sync", you'd already know how to do it, and I could move straight on to the next phase...
So keep in mind that everything we'll need to do is basic stuff: drag and drop, set the beginnings and ends of sequences, move a sequence on a timeline, add a simple colour filter (for the anaglyph preview), and remove the filter before making the final render.
NLE user interfaces have improved significantly and are now quite logically laid out, so with a little logic you should be able to find what you are looking for quickly. If you are REALLY lost, I'll maybe make a small video tutorial (I said MAYBE!)

The first thing you need to do is set up your project correctly.
NLEs can work with files at any resolution and any framerate and output a totally different format. Don't expect the NLE to autodetect what type of output you want; it will surely get it wrong. You have to tell the NLE what your files are and how you want them transformed.
This is done in the project properties: they should open automatically when starting a new project (if they don't appear, look in the menus).
Here's what your project properties should look like:
You want the NLE to work natively at the exact same resolution as your files in order to preserve maximum quality.
-resolution is 720x240 (or 640x240 if you recorded at 640x480)
-progressive source (not interlaced)
-30.00 fps
-pixel ratio: you have to calculate it, here is how.
The video resolution and the display do not need to have the same width/height ratio; the pixel ratio stretches the pixels until you get the desired displayed ratio.
It is applied by the following formula:
( resolution width / resolution height ) * pixel ratio = ( display width / display height )
or if you prefer:
pixel ratio = ( display width / display height ) / ( resolution width / resolution height )
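A quick worked example with the numbers from this guide: a 720x240 field that should be displayed as 4:3 gives pixel ratio = ( 4 / 3 ) / ( 720 / 240 ) = 1.333 / 3 = 0.444, and for a 16:9 widescreen game it would be ( 16 / 9 ) / ( 720 / 240 ) = 0.593. A value below 1 simply means each pixel is displayed taller than it is wide, which is what stretches the half-height field back to a full-height picture.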

[ img]images here coming later[ /img]

You're now ready to place your sequences on the timeline.
Bring the 3 files you've just created with VirtualDub onto the timeline. You'll get 2 video layers (one for each field) and 1 stereo audio layer.
The 3 sequences should have the same length (since you extracted all of them from the same source file).

Now, because the field1 and field2 video sequences are stored in .avi files, your NLE thinks they have square pixels (pixel ratio 1:1), which is wrong; in the video preview you should see these sequences squashed.
To correct this, go to the sequence properties and tell your NLE that these sequences use the pixel ratio you previously calculated. Your sequences will now play back normally.

[ img]images here coming later[ /img]

Let's work!
To make things clear, before starting to explain what to do, I'll first tell you what we won't do.

-we will NOT stick the views side by side now, this will be done later
-what we will do is use one track for each eye, sync both eyes and output each track separately
-in order to preview what we are doing, we will use track filters to make an anaglyph video preview in real time

Set up the anaglyph preview.
First you'll need to find the track (or layer) blend controls:
you will have to set your video layers to "additive", just like in a photo editor.
Next, apply the colour filter to the whole track.
In Sony Vegas the colour filter needed is called "channel blend"; I don't know what it is called in other NLEs, but it should have a similar name.
This colour filter asks the user to enter a matrix of RGB (or RGB+alpha) values.
You can use any working anaglyph colour matrix you want; it's just for your preview. You can find a number of different anaglyph matrices at 3dtv.at (http://3dtv.at/Knowhow/AnaglyphComparison_en.aspx)
I recommend creating your own presets in your NLE so that the next time you work on a 3D project you can instantly get these values back without having to type them again.
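As a side note, if you just want a quick eye-sync check outside the NLE, a basic red/cyan anaglyph can also be thrown together in Avisynth (a rough, untested sketch with example file names, using a simple channel mix rather than the optimized matrices mentioned above):

Code: Select all

# Load the two field files created earlier (example names)
left  = AVISource("C:\Myfolder\myvideo-field1.avi", audio=false).ConvertToRGB32()
right = AVISource("C:\Myfolder\myvideo-field2.avi", audio=false).ConvertToRGB32()

# Simple red/cyan anaglyph: red channel from the left view, green and blue from the right view
anaglyph = MergeRGB(left, right, right)

# Open this script in a player or VirtualDub and put your glasses on
return anaglyph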

[ img]images here coming later[ /img]

Now that you can see the 3D image, start by removing any unnecessary parts of the video.
When you started recording, you were probably still on the Windows desktop, with the game not even loaded.
Start all sequences where you want the video to actually start and end them where you want the video to end.
Make sure you keep your sequences in sync (especially the audio).

Now play through your sequence and find every eye swap.
Wherever you find one, split the sequence. Make sure you are frame accurate for both left and right eye tracks; the two tracks may not swap eyes at exactly the same frame (for example if they have drifted out of sync). Put your anaglyph glasses on, or use the mute track feature, to make sure you are splitting the correct sequence.

[ img]images here coming later[ /img]

Once you have split your sequences at every eye swap, check with your anaglyph glasses where the eyes are correctly positioned and where they need to be switched, and swap the sequences wherever necessary.
You will notice that some sequences have a one-frame overlap with other sequences. This is perfectly normal, since we haven't re-synchronized the sequences yet.

[ img]images here coming later[ /img]

Now let's resynchronize our sequences.
For every sequence pair, check very closely whether you see any small lag between the left eye and the right eye.
If you don't see any, your sequences are synchronized: do not touch them.
If you see a difference (the 1-frame lag due to the interlacing), then you have to move one of the eyes by 1 frame (either forward or backward) to resynchronize both eyes.
Try to choose which eye to move according to the small gaps created where you split your sequences.
You will notice that some splits can be filled perfectly, so the eye swap becomes totally invisible when viewing the video, but others will resist and create a single repeated frame identical in both eyes.
There is nothing you can do in these cases; they have to show up somehow.
If you have very few of them, you can try to hide them by starting the next sequence earlier (overwriting the glitchy frame). This works a few times, but remember that every time you do it you make the video shorter while the audio stays the same length.
So do not abuse this technique or you may completely desync the audio from the video. More than 2 or 3 frames of desync between audio and video becomes noticeable, so avoid doing this more than two or three times over the whole video.

[ img]images here coming later[ /img]

Once both eye tracks are complete and in sync, you are ready to render each track.
For a normal video you would render all the audio and video at the same time, but since you are making a stereo 3D video, you should render each video track and the audio separately.
Start by rendering the audio: mute all video tracks and render the audio as an uncompressed .wav file (also called PCM); use the same sampling rate as the one you recorded at (it should be 48000Hz, but 44100 is also fine)
Then do the video tracks: mute one of the video tracks and deactivate the anaglyph filters to get the full colour picture back.
Render the video. Be careful with the render settings: most NLEs will automatically pick some random preset you don't want (like mini-DV).
Check every setting and make sure you are exporting to the same format you specified earlier in the project properties; for file format and codec, use an .avi file with the Lagarith codec.
Do this for both video tracks and name the files accordingly: this time you really do have left eye and right eye video files, so make sure you don't mix them up.

[ img]images here coming later[ /img]

Final part: video encoding for the internet
Now that we have our two files, we can combine them into the final desired format.
There are multiple ways to distribute stereoscopic 3D content, but the format I recommend is side-by-side cross-eyed views, since it is compatible with virtually all compression formats and reduces the number of possible playback issues due to codec and player misconfiguration.

In this part of the guide we will compress our video in the H264 format, the most powerful video compression format available at the present time, using the free open-source x264 encoder and MeGUI (for a friendlier user interface and its very useful presets), and use Avisynth to stack our views on the fly during encoding.

Making the Avisynth image stacking script:
Avisynth is a free open-source pre-processing filter, which means it can open videos and then work on them; finally, Avisynth outputs an uncompressed video stream ready to be used by x264.
Avisynth does not have a graphical user interface (GUI); it uses text scripts as input commands.
To create a script, just create a simple Notepad text (.txt) file and rename it with a .avs extension. Below is the script you will have to write (use copy/paste and modify the file names and directories according to your situation).
There are some automated Avisynth script creator programs out there, but I don't think any of them is really useful in our case.

Lines beginning with a # are comments to help you understand what we are doing; they are not processed by Avisynth and you don't need to remove them.

Code: Select all

# Assign usable names to each video stream
VideoLeft = AVISource("C:\Myfolder\MyVideo-lefteye.avi", audio=false)
VideoRight = AVISource("C:\Myfolder\MyVideo-righteye.avi", audio=false)

# Stack our videos
# the command currently used is cross-eyed horizontal stacking; for vertical stacking, swap the # symbol between the two following lines
VideoStacked = StackHorizontal(VideoRight,VideoLeft)
#VideoStacked = StackVertical(VideoRight,VideoLeft)

# YV12 colourspace conversion; x264 requires this, and if you forget it MeGUI will prompt you to add this command
ConvertToYV12(VideoStacked)

Compressing to H264
MeGUI is a free open-source graphical user interface for various encoding formats, but its main use is H264 encoding via the x264 encoder.
It is certainly not the most user-friendly H264 encoding software in the world (far from it), but it is constantly updated and has an auto-update feature that grabs the latest version of every tool it uses each time you start it, which is why I recommend it. You can also use any other x264 graphical user interface you want, provided it accepts Avisynth input.
MeGUI's auto-update should start automatically; make sure you grab all the presets (tip: use the right-click menu to select all profiles)

Under the input tab, in the upper video part:
Put your Avisynth .avs script in the Avisynth script box. This will trigger an Avisynth preview window showing you what the image being sent to the H264 encoder looks like. You should see your side-by-side video with a pixel aspect ratio of 1:1 (square pixels), so if you used an anamorphic resolution your video will look squashed; don't worry, this will be taken care of later.
Under video output, select the name of the VIDEO output file. This is the compressed video stream only (no audio); it's a temporary file MeGUI needs before assembling the audio and video together.
Under encoder settings, select "Unrestricted 2pass HQ".
This setting provides almost the best of what x264 can do, without the insanely time-consuming options which slow down encoding too much for very little gain.
Warning: THESE ARE ALREADY VERY INTENSIVE SETTINGS; you will immediately notice a significant encoding speed difference compared to the usual DivX encoders.
You will also notice the hardware-assisted DXVA profiles, which look tempting. x264 does not feature hardware encoder acceleration; these profiles just ensure you can make DXVA-compliant streams for video players. Unfortunately, your stream resolution is already non-standard and thus will not be accelerated by current graphics cards. This may change in the future... or not, so there's no need to use these profiles for stereo 3D content at the moment.

Under file format, select either mp4 or mkv depending on your desired final format.
(if you want to publish your final file as .mp4 use mp4, if you want to publish as .mkv use mkv)

Next, the audio part:
Put your uncompressed audio in the audio input box.
Under audio output, select the name of the AUDIO output file. This is the compressed audio stream only; it's also a temporary file MeGUI needs before assembling the audio and video together.
Under encoder settings you have a wider range of options via different encoders (MP3, Vorbis, AAC, etc.); if you don't know what to use, select LAME MP3: MP3-128ABR

Now click the "AUTO-ENCODE" button in the bottom right corner.
This will open the final summary of your encode and the filesize/bitrate calculator.
Under name of output, enter the name of your final file (this is the real video containing both video and audio)

Under "size and bitrate" comes one of the most important decisions you have to make.
The bitrate determines how much data you allow x264 to use to store your video.

The smaller the bitrate, the smaller the file will be, but also the lower the quality.
The higher the bitrate, the bigger the file will be, but also the higher the quality.

If your internet host restricts the maximum file size (and you don't want to split your video into multiple files), you can set the size you want (always keep a few MB of safety margin).
Otherwise, choose a bitrate.
The ideal bitrate depends highly on the quality you wish to achieve and on the very video you are encoding.
A 3D HD action movie will require much more bitrate than a Windows video tutorial, for example.
The only way to know for sure what bitrate you should use (what is too low or what is overkill) is to try: make the full encode with some setting and check whether the quality of the final file satisfies you.
But I know this is very long and time consuming, so to give you some guidelines for your first tries, here are the average bitrates I personally use to achieve nearly transparent quality:
-SD 480p: 1500~2000 kbit/s <- I use this for my stereo 3D captures with my composite capture card.
-HD 720p: 3000~5000 kbit/s
-HD 1080p: 6000~8000 kbit/s
Again, these are only my personal values; depending on your sources and your desired quality you may end up with completely different numbers.
Just make sure you don't overdo it, remember it's for distribution over the internet!
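To get an idea of the resulting file size, the arithmetic is simple: file size = ( video bitrate + audio bitrate ) * duration / 8. For example, a 10 minute (600 second) clip at 2000 kbit/s video plus 128 kbit/s MP3 audio gives roughly ( 2000 + 128 ) * 600 / 8 = 159600 kB, i.e. about 160 MB.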

Once you have decided on the bitrate, click the "QUEUE" button.
Your encode will not start immediately; it's only added to the job queue, so you can add multiple encoding jobs and let your computer run them all overnight.
To start the actual encoding process, go to the queue tab and click the start button.

Some time later (depending on the length of your video),
your video is complete. Go check it and enjoy your stereo 3D video file in any 3D-enabled player supporting side-by-side videos.

Setting aspect ratio tags for anamorphic video
Our video doesn't have square pixels, so in order to display it with the correct aspect ratio, the video player needs to know what that ratio is.

With mkv files:
this can be done very easily using mkvmergeGUI, a tool that is part of the MKVToolNix package: http://www.bunkus.org/videotools/mkvtoolnix/
add your video file to the "input files" box (you can drag and drop, it's easier)
In the tracks box you should see a video and an audio track; select the video track.
Go to the "format specific options" tab and, under aspect ratio, enter the display ratio you want to use.
NOTICE: this is the display ratio of the entire side-by-side image (not the pixel ratio you used in the video editor)
if your video is 2x 16/9 side by side, enter 32/9 (two 16/9 images side by side)
if your video is 2x 16/10 side by side, enter 32/10, etc.
MkvmergeGUI keeps your original file untouched; you have to save to a new file, so enter a new name in the "output filename" box and click the "Start muxing" button.

You will notice the stereoscopic tag; I'll talk about it just below.

With mp4 files:
This can be done easily using YAMB, an easy-to-use graphical user interface for the mp4box command line tool: http://yamb.unite-video.com/download.html
In the left menu, select "Creation" and double-click "create an MP4 file with multiple audio, video, subtitle and chapters streams"
Click on the "+" icon to add a file and go pick your .mp4 video.
In the input list you should see a video and an audio track; select the video track and click the "Properties" button.
Under "pixel aspect ratio" enter the pixel ratio you used previously in the video editor (not the display ratio as with mkv files)
Click the OK button.
Add an output name (like mkvmergeGUI, YAMB will create a new file with the new settings and leave the original untouched).
And click next and finish.

Your video file should now open with the correct aspect ratio straight away in a video player (except Stereoscopic Player, which asks about the aspect ratio for every single file even if the aspect ratio tags are there).
Your file is now ready for broadcast over the internet.

Dual stream stereoscopic files
Dual stream stereoscopic files have both views stored in separate video streams. It's like having one video for each eye, but in a single file, which makes it easier to download and manage.

At the moment there are only two video formats that allow dual stream stereoscopic files, and each one has issues which make dual stream impractical for now:

Matroska (.mkv) has an official stereo 3D tag (which you noticed just above in mkvmergeGUI), but VLC is the only player which supports this feature at the moment: VLC opens two windows with the two streams and keeps them in sync (there are no 3D display conversion plugins yet, so the only way to use it is to display with dual projectors)

Windows Media Video (.wmv), via non-official custom tags proposed by Peter Wimmer, but the only player which supports them is Peter Wimmer's Stereoscopic Player, and I personally don't like the Windows Media Video format.

Dual stream stereoscopic files have some advantages:
-only one file without any user configuration (the file knows which video stream corresponds to each eye since left/right tags are mandatory)
-the same file plays in stereo 3D but also plays as 2D in a standard non-3D-enabled player
-built-in dual-core optimisation (two completely separate video streams to decode) improves performance with high resolution videos on multicore CPUs.

but they also have some drawbacks:
-if you open the file in a standard 2D player, there is no way to tell that the file has an S-3D track; you (the creator of the file) have to add a notice to warn the user about it.
-there is no compression optimisation possible since the streams are completely separate.
Although the compression gain currently available from side by side is relatively low, since video compression algorithms don't yet take full advantage of it, there is huge improvement potential in this area.

H264 playback: free H264 codecs and multithreading
H264 is a very powerful video compression standard: it provides the best video quality/filesize ratio available in the world at the moment, but it requires more CPU power for playback than previous codecs (MPEG-2, QuickTime, DivX, WMV, etc.), especially when dealing with full HD video, which is where H264 shines.

H264 gained a terrible reputation in the case of 1080p content, since a single-core processor is barely enough to decode such a stream; decoding these streams requires an H264 decoder optimized for multi-core CPUs, which most decoders were not.
This led a lot of people to wrongly believe that hardware acceleration was required to play H264 streams.
Actually, unless you still use a very, very old PC (>8 years old), your PC is able to play standard definition H264 videos.
Today, iPod nanos can play SD H264 videos (640x480), and any dual-core CPU is able to decode 1080p H264 (yes, even 3-year-old budget AMD Athlon X2 or Intel Pentium D CPUs)

These are the reference H264 codecs for Windows that I recommend:
CoreAVC Pro (http://www.coreavc.com) - payware: $15
FFDshow (http://ffdshow-tryout.sourceforge.net) - FREE
-> this codec is used in VLC and comes in absolutely every free codec pack for Windows you can find on the internet

The reason why I recommend a non-free program is because FFDshow still does not have a multicore-optimized H264 decoder, but it is coming
(at the time I write this tutorial, the "FFDshow-MT" patches are in beta testing, with no official website; you may find some builds if you search the internet).
DivX Inc. is also developing an H264 decoder, which should be available in 2009 (price not announced yet); beta versions can be found at the DivX Labs website.

Anyway, if you are a 3D gamer your current PC should be powerful enough to play high definition videos with ease, so you shouldn't worry about the H264 hardware acceleration marketing stuff.

-More to come later-
The little differences between my currently applied SD process and an HD workflow with HD capture cards
-coming soon-
Last edited by BlackShark on Sat Dec 06, 2008 12:55 pm, edited 4 times in total.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
CraziFuzzy
One Eyed Hopeful
Posts: 44
Joined: Fri Oct 05, 2007 1:29 am

Re: Recording Games in 3D

Post by CraziFuzzy »

My solution requires some external hardware.

Use the iZ3D drivers in interlaced mode at 1920x1080 resolution (1080i). This output is then fed via component out into a Hauppauge HD-PVR, which encodes the video into an H.264 stream and sends it back to the PC via USB to be written to the hard drive.

Pros: Relatively cheap solution (about US$250), very high quality.
Cons: 1920x540 resolution per eye.

Alternative:

Use dual output mode, each output at 1920x1080, with TWO HD-PVRs.

Pros: Killer resolution per eye, still fairly inexpensive for what you get (US$500).
Cons: Two separate videos that then have to be synced and converted to whatever format you plan to play back with (should be relatively easy to sync, as the videos will be frame accurate to each other, being driven by the same video clock).
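A rough, untested sketch of how the two recordings could then be synced and combined with Avisynth (assuming the HD-PVR files open through DirectShow; the file names and the frame offset are just examples):

Code: Select all

# Open the two H.264 recordings (one per eye) through DirectShow
left  = DirectShowSource("C:\capture\left.ts",  fps=30, convertfps=true)
right = DirectShowSource("C:\capture\right.ts", fps=30, convertfps=true)

# Cut the lead-in so both clips start on the same game frame
# (the 12-frame offset is only an example; find the real value by stepping frame by frame)
left = left.Trim(12, 0)

# Side-by-side output; use StackVertical instead for over/under
StackHorizontal(left, right)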
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

Updated my post with the H264 encoding of the side-by-side S-3D file for local and internet broadcast.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

Added anamorphic video tagging, dual stream files and H264 codecs.

I have decided that I will make a complete live video tutorial when I record some Left 4 Dead footage.
I'll spend the week preparing the video tutorial and I'll try to record and publish it next weekend.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
Zerofool
Cross Eyed!
Posts: 119
Joined: Tue May 06, 2008 3:50 pm
Location: Sofia, Bulgaria

Re: Recording Games in 3D

Post by Zerofool »

Hi, the first thing I tried was that nifty little app, .kkapture. The great thing about it is that it slows the 3D app down to a given fps, so each frame can take as long as it needs to render. That way you get a perfectly smooth capture with a stable fps at any resolution (audio is synced perfectly too). The bad thing is that input gets laggy when the real fps is low, so it's more useful for games which have demo/replay recording and playback options.
Well, it doesn't work with the iZ3D drivers enabled (at least on Vista x64), but it's open source (I think; look for licensing info or contact the author), and if you have the time and manpower you could investigate where the problem is and (if possible) make it work. This would be a cheap alternative to capture cards.
nV 3DTV Play, Geo-11, Tridef on 2014 55" Samsung H7000 (aka H7150) @ 1080p60 Checkerboard-3D | Valve Index | RTX 2080Ti, i7-6700K, 32GB 3200CL14, 1+2TB SSDs | Win10 v2004 | DAN A4-SFX | G27
2umind
One Eyed Hopeful
Posts: 20
Joined: Sun Apr 05, 2009 10:01 am

Re: Recording Games in 3D

Post by 2umind »

Well, for me the best program I've seen so far is Growler Guncam. I'm using it right now on a 5-year-old Dell Dimension 4600i whose only upgrades are 1.5 GB of RAM and an NVIDIA GeForce 7800 GS AGP video card with 256 MB of RAM. The program works like a charm. It costs money of course, but if you're really broke you can try it out with the free trial, and I'm sure there are torrents or other ways around paying if you're that impatient or the pirate type. I was even able to play Crysis in 3D with hardly any fps drop using this program; it records sound and has a built-in editor, which is awesome. You can splice out what you don't want in a video before you ever turn it into an AVI. It's great. Right now I'm trying to get iZ3D activation because my trial ran out and OpenGL no longer works. Do you have to have a separate activation code for every type of 3D supported besides Anaglyph and iZ3D?

So far I've managed to get perfect 3D in these games: Project64 N64 emulator: Perfect Dark, Pilot Wings, StarFox, etc.; PC games: Crysis, Driv3r, GTA San Andreas, Diver - Deep Water Adventures, JAWS Unleashed, and a few others, all of which can be recorded with Growler Guncam easily.
System specs:
OS: 64-bit Windows 7 Home Premium
CPU: 2.6GHz Intel i7 920 Quad Core
RAM: DDR3 1333MHz 9GB triple channel
Video: Nvidia geForce 260 GTX oc PCI-e, 1.7GB Vram, latest 3D driver
MOBO: Dell XPS 9000
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm

out-of-date 3d recording advice

Post by iondrive »

This post is more for general recording than specifically for iZ3D but anywayz...

Wow Blackshark,

Holy Smackerel that was a long post. Anyway I see you use a different capture card now...

"The mighty Blackmagicdesign Intensity pro has arrived. Behold the power of 720p at 60fps"

But I have some questions and possible advice that might or might not be useful to you, and if not to you, then maybe someone else will find it useful or at least interesting.

First issue:
So, you are using simple shutterglass mode, right? And recording full-frames at 60 fps, so 30 fps per eye, right? So your game runs at 30 fps and your display is at 60 Hz (30 full-frames for each eye), yes?

Sorry to bother you about this, by the way. This info is probably in some other posts but I've just read many pages and I thought I'd just ask.

First advice: use BLC
If you still lose sync once in a while and need to fix things in editing, maybe you could try Blue-Line-Code and that way you will be able to tell which eye each frame belongs to. You might even be able to script something to sort them out automatically.

Second advice: (out of date useless interlaced mode advice)
"possible better results with interlaced out but wasn't able to get it working"
On the off chance anyone wants to try outputting interlaced mode to an NTSC capture card, the trick is to find the NVIDIA control panel window that lets you resize the display and click the zoom-out button to the max; that gets rid of the ghosting (it lines the lines up 1:1). Also adjust the slider labeled "flicker" to max and, of course, turn off any deinterlacing functions you find. Actually, I didn't try this with the iZ3D drivers, but it did work for me with old-school NVIDIA drivers.

Third advice: how to get full frames instead of half-frames
If someone is still using NTSC recording with shutterglass mode, then they record 30 fps interlaced, so that's like 60 half-frames per second. For newbies, make sure you understand that last sentence before moving on. Normal old analog NTSC works at 60 half-frames per second, but before we go on, let's describe the process. Say your game renders 30fps; the stereo shutterglass driver then renders 60 fps by alternating 30 L/R pairs from the original 30 game-generated frames. If your display shows 60 fps, all is well so far, but when you record your cloned output, it throws away half of each image and you get only 60 half-frames per second instead of 60 full frames per second. How can you get that other half? Well, it turns out there is a way.

Don't clone your output to analog; instead use a converter box that inputs VGA and outputs VGA plus composite or S-video, so it acts like a splitter or a "Y" cable. You then send the VGA out to your monitor and the composite to a recorder, but so far that gets you the same result as before. The first trick is to use a second converter box, get it 180 degrees out of phase with the first box, and record that with a second recorder. The second trick is to use a powered VGA splitter box, since two converter boxes in-line degrade the signal too much. It goes like this: computer video-out to a 4-output splitter box, then one output to your monitor and one to each VGA-composite converter box, and each converter box to its recording computer, or even a VCR if you like. You need some old TVs so you can tell when the boxes are out of phase: wear your shutterglasses and look at the TVs after you've connected them to your boxes. You get the boxes out of sync by deactivating/activating one until it just happens to be out of sync. This way, when one box is outputting the odd lines for frame L1, the other box is outputting the even lines for L1. Next they will output the even lines for R1 and the odd lines for R1, and so on for L2 odd/even, R2 even/odd, and so forth.

You then just have the task of remixing all the even and odd lines to get back the original full-frame frame-sequential video. It's tricky, but possibly not as hard as it sounds (see the sketch below). This way you get full-frame 60 fps frame-sequential 3D video, except that the resolution is only 640x480 or 720x480 or something like that, depending on how you do things. I played around with this stuff last summer and, from what I remember, it wasn't hard to get the boxes 180 degrees out of phase. I got them cheap from eBay.
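Here is a rough, untested Avisynth sketch of that remixing step (everything in it is an assumption: the file names, the trim offsets, and which box carries which lines for which frame; if the result looks combed or line-swapped, exchange the roles of a and b):

Code: Select all

# Recordings from the two out-of-phase converter boxes (placeholder names)
a = AVISource("C:\capture\boxA.avi", audio=false).AssumeTFF().SeparateFields()
b = AVISource("C:\capture\boxB.avi", audio=false).AssumeTFF().SeparateFields()

# Trim so that field n of each stream belongs to the same source frame n
# (offsets are examples; find the real values by stepping through the footage)
a = a.Trim(0, 0)
b = b.Trim(1, 0)

# Assumption: for even-numbered source frames, box B carried the top (even) lines
# and box A the bottom (odd) lines; for odd-numbered frames it is the reverse.
tops    = Interleave(b.SelectEven(), a.SelectOdd())
bottoms = Interleave(a.SelectEven(), b.SelectOdd())

# Re-weave tops and bottoms into full-height 60fps frame-sequential 3D video
Interleave(tops, bottoms).AssumeFieldBased().AssumeTFF().Weave()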

If you want to go really crazy, then use this method on each of 2 dual-output VGA signals so that you're recording 4 streams and combining them to get 60fps per eye, and so 120 fps frame-sequential 3D video, still at low res though. But that's not worth it if your game is only outputting 30 fps; you would just be recording duplicate frames. I guess it would be good if your game outputs 60 fps.

The better alternative for full-frame 60fps:
Use dual output and record each eye using a separate converter box and capture card. That will be much easier to process too. Each card captures 30 fps for its own eye. Of course, it helps if you have a dual-output 3D setup.

Comments:
Yes, it's a lot of stuff and trouble, which is why I pushed it to the back of my mind as far as posting about it goes. It's low res and low frequency by today's standards, but it might be good enough for YouTube.

I'll try the VirtualDub 3d recording approach sometime. I just downloaded it today.

May your headaches be minor and few.

--- iondrive ---
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

Hi iondrive.
I wrote that huge text over a year and a half ago and it now contains some outdated information, so I'll try to summarize what has changed.

No matter what I tried with scaling and everything, I was never able to match the resolutions perfectly between the games and the composite SD capture card, so I completely gave up trying to use interlaced mode on that card.
Shortly after I wrote this text, iZ3D added side-by-side outputs, which made video recording tremendously easier. I no longer had to use shutter mode, which was a relief: I could now record any game in 3D no matter what the framerate was, without fearing constant eye swaps. Fixing eye swapping is a horribly long process, as you have to do the corrections manually, frame by frame. It is quite easy (not much need for blue-line code), but it is just painfully long to do by hand. Side by side also unlocked 60fps recording, since I could now use BOB deinterlacing algorithms; I lost some sharpness in the process, but I consider the benefits of 60fps superior to the loss in quality.

More recently, I got myself the Blackmagic Design Intensity capture card, which is a wonderful little device.
It is not a magic card that does everything by itself, and it is less hackable than the SD capture card, but what it does suits my needs perfectly.

With the new workflow, I now play and record at 720p through HDMI, which allows pixel-perfect precision. This lets me play and record using the iZ3D interlaced (optimized) output. Since I have a Zalman display, I can record video while seeing the 3D picture on the monitor at the same time; the only requirement is to configure the ATI driver to display the picture without resizing on the Zalman display. What I end up with is a pure 720p primary output for the capture card and a 1680x1050 secondary output on the Zalman display, showing the pixel-perfect 720p picture in the middle of the screen surrounded by a black frame filling the rest of the 1680x1050 picture.

The Blackmagic Design capture card is mounted in my second computer (which is really required if you want any hope of good performance).
As for the recording software, I had some trouble finding the right one. The software that comes with the card is designed for professional video editing bays, almost all of which use giant 30" displays at 2560x1600, and its interface cannot be resized down, so it is pretty much unusable for me.
Fortunately, the capture card driver includes a DirectShow interface, which allowed me to use VirtualDub for capture.

However, this is not the end of the problems: since I do not have an HDD RAID array, I cannot record uncompressed HD video; I have to compress in real time. For some reason I was not able to choose which codec to use with the Intensity capture card, so my only choice is the included Blackmagic Design MJPEG codec. It isn't bad at all; actually it's a very high quality MJPEG compressor (it deals perfectly with the interlaced 3D video), but it can't be configured at all. As a result it is very heavy on the CPU, and my Core 2 Duo E4300 (1.8GHz) isn't powerful enough. I had to overclock it to 2.7GHz, which is right at the limit of what my system can do (I did not design my computer for overclocking), so I was quite lucky I wasn't forced to change the CPU.

Unfortunately there is also a tradeoff for being so close to the limit: the video compressor buffer constantly fills up and empties while recording, and when this happens VirtualDub automatically delays the video preview window by a few seconds to maintain a smooth preview. This makes it absolutely impossible to watch the preview window while gaming.
Another very bad surprise came from Windows 7 audio management: at the moment it is absolutely impossible to send sound to different audio devices at the same time, which means that I cannot send audio to the capture card through the HDMI and hear audio on the main computer.
In order to hear anything I have to plug my speakers into the recording computer, which is subject to the delay, which is absolutely unacceptable (it's better to have no sound than a sound delay of one full second).
Fortunately, after some tweaking in the numerous VirtualDub menus, I was able to find a configuration that prioritizes sound over video, so I now have immediate, real-time sound.

So in the end I managed to get a perfectly working recording method. But having the CPU on the edge like that means there is very little room for error: if anything happens on the computer other than video recording, I lose sync and drop frames. It can be anything: Windows Update suddenly waking up, a hard drive being just a little too slow because it is nearly full, or whatever. Any of it ruins the recording.
However, thanks to the audio configuration, whenever this happens I know it immediately because I hear an audio delay. It works like a feedback loop: as long as the audio is immediate, I know the recording is going well; as soon as the audio is delayed, I know the recording has gone wrong, and I can restart the recording within seconds and continue.

The final recorded picture is an interlaced 3D frame, which I then transform into whatever I need, using either VirtualDub to extract the two views into separate files, or Avisynth to convert the views directly to side-by-side or over/under.
Since my actual recorded resolution is 1280x360 per eye, I always go for over/under.
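To illustrate that last step (a rough numpy sketch of the idea, not the actual VirtualDub/Avisynth setup), converting one row-interleaved 3D frame into an over/under frame just means pulling out the even and odd scanlines and stacking them vertically.

import numpy as np

def interlaced_to_over_under(frame):
    """Take a 1280x720 row-interleaved 3D frame (the two eyes on
    alternating scanlines) and return a 1280x720 over/under frame:
    the even lines (one eye, 1280x360) stacked on top of the odd
    lines (the other eye, 1280x360)."""
    eye_a = frame[0::2]               # even scanlines
    eye_b = frame[1::2]               # odd scanlines
    return np.vstack([eye_a, eye_b])  # over/under, still 1280x720

frame = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
print(interlaced_to_over_under(frame).shape)   # (720, 1280, 3)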
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
User avatar
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm

Re: Recording Games in 3D

Post by iondrive »

Hi Blackshark,

I'm glad you noticed my post and decided to write such a long response. It really helped me understand your setup/process. I'm starting to explore this area of s3d gaming and the thread on the iZ3D forum was very helpful too. I can get good recordings with VirtualDub which is a new program to me so I'm glad you mentioned that it can separate fields of an interlaced recording. I have some things for you to try if you want but I want to talk about them on the iZ3D forum first. Here, I'll say some other things.

My "out-of-date 3d recording advice" subject heading was really meant to describe my own post so sorry if you thought it was aimed at your post although yes, it applies to both. Anyway, did you think it was interesting that you could use 2 converters to get full-frame image data by making them 180 degrees out of phase and recording/processing both streams? I thought it was pretty clever and maybe someone else can use that general idea in some other application.

OK, so you're recording 60fps with each frame having 1/2 image per eye. When you upload to YouTube, is that a 30fps O/U 720p converted recording? You are BlackSharkfr on YouTube, right? I am iondrive3d on YouTube since iondrive was taken already. I just created my account this week and you'll see links to my uploads later. I have to fix something first. My tests show that O/U gives sharper images on YouTube compared to R/L so that's the format I'll use too.

Hmmm, I think there might be more to say here, but I'll move to the iZ3D forum now since that's where the contest was and I have info that others will want to read there.

--- iondrive ---
User avatar
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

Yes, I am BlackSharkfr on YouTube. I currently upload my original 60fps over/under 720p files, and YouTube then converts them to 30fps with a significant drop in bitrate and image quality, which is why I also upload a higher quality copy to my free Mediafire account.

About the 180° phase trick with two capture devices and a VGA box: it might work, but I find it just too complex. It relies on consumer hardware whose internal workings you don't know much about, and the devices may just screw it up without warning.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
User avatar
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm

Re: Recording Games in 3D

Post by iondrive »

OK,

The dual-converter approach did work for me, but like you say, it's complicated to reassemble the L/R full frames from the two interlaced streams, though I'm sure I could automate that. Then again, like you say, it's possible the devices might lose sync after a while; my test recordings were short, but my sense is that they lock on and that it wouldn't happen. And again, someone putting a system like this together wouldn't really know if it works until they tried it. And finally, it's all out of date, so never mind. :)

Happy recording,

--- iondrive ---
User avatar
Likay
Petrif-Eyed
Posts: 2913
Joined: Sat Apr 07, 2007 4:34 pm
Location: Sweden

Re: Recording Games in 3D

Post by Likay »

I used to have two VGA-to-TV adapters with which I recorded the dual streams. Syncing was always a nightmare, but it works well on slowly moving scenes. When recording fast-paced scenes it's roulette whether you get a decent capture or not (partly mis-sync between the graphics cards' outputs, but mostly because of the 25Hz capturing and mis-sync from the VGA boxes). I've tried putting a common switch on the power supply of the converters (to make them boot up at exactly the same time), but the timing is still completely random. I could quite easily build a PLL circuit that reads the sync pulses from one VGA box and locks the other to that "master", but I don't find the effort worth it.
So: BlackShark's method is way better, but a separate computer and a decent capture card are needed.
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D
User avatar
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm

Re: Recording Games in 3D

Post by iondrive »

Hi Likay,

Your setup was different from what I was describing. You were using two converters on two different video sources. My idea was to use a VGA "Y" splitter cable and send the exact same signal to two different converters. They both sync to the same signal, but by chance you can get them 180 degrees out of phase, and then I don't see how they could lose sync. You see, the video card is outputting full frames at 60 Hz in this case (nvidia shutterglass mode), but the converters output interlaced half-frames at 60 Hz. So one converter outputs the odd lines of the current frame and then the even lines of the next frame, while the other converter always outputs the opposite lines since they're out of phase. Together, you have all the image data for full frames at 60Hz. Get it? It's a neat trick; too bad it's out of date.
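A small sketch of the reassembly step this implies (my own numpy illustration, under the assumption that at each tick converter A happens to deliver the even lines of the current frame and converter B the odd lines):

import numpy as np

def weave_fields(even_field, odd_field):
    """Rebuild one full frame from the two half-height fields captured
    at the same instant by the two out-of-phase converters: converter A
    supplies the even scanlines, converter B the odd scanlines."""
    h, w, c = even_field.shape
    full = np.empty((2 * h, w, c), dtype=even_field.dtype)
    full[0::2] = even_field   # even scanlines from converter A
    full[1::2] = odd_field    # odd scanlines from converter B
    return full

# two 360-line fields recombine into one 720-line full frame
a = np.zeros((360, 1280, 3), dtype=np.uint8)
b = np.ones((360, 1280, 3), dtype=np.uint8)
print(weave_fields(a, b).shape)   # (720, 1280, 3)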

Oh, yeah, I guess the recorders could lose frames but I would use dedicated computers for that. Maybe even VCRs if this was ten years ago. :)

Data transfer rates:
OK, so anyway, using VirtualDub I was able to record uncompressed video at 3.4GB/minute. I decided to do the math and it works out: 800x600x32bit@30fps x 60sec/minute gets me 3.456 decimal GB/minute. When I try recording 60fps, it stays under 4GB/min, so I know something's limiting my transfer rate. My SATA drive and motherboard are rated at SATA II, which is 3Gbit/s, and Wikipedia says SATA II has an "actual uncoded transfer rate of 2.4 Gbit/s", which translates (*60/8) to 18 GB/min. So theoretically, you should be able to record 1280x720x32bit@60fps x 60sec/min at a data rate of 13.3 GB/min, and so that should be doable on a single machine. I lose somewhere between 0 and 10% of my game's framerate when I'm recording, so I still think it's doable. OK, well, this talk is basically going nowhere. I guess I just wanted to talk numbers.

--- iondrive ---

PS: I forgot to mention that if I switch to 16-bit color for recording, then it's 1.7 GB/min, just as expected, so I should be able to record two full-screen 800x600 images at 30fps. Nice. Well, nice enough for me anyway. I can play the video back with full frames at 60 fps in shutterglass mode with mplayer. Yay.
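The arithmetic above generalizes to a one-liner; this small sketch (my own, just re-deriving the figures in this post) reproduces the measured 3.4 GB/min, the 16-bit figure, and the hoped-for 720p60 rate.

def gb_per_minute(width, height, bits_per_pixel, fps):
    """Uncompressed video data rate in decimal GB per minute."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps * 60 / 1e9

print(gb_per_minute(800, 600, 32, 30))    # 3.456  -> the measured ~3.4 GB/min
print(gb_per_minute(800, 600, 16, 30))    # 1.728  -> the 16-bit ~1.7 GB/min
print(gb_per_minute(1280, 720, 32, 60))   # ~13.27 -> the theoretical 720p60 rate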
User avatar
Likay
Petrif-Eyed
Posts: 2913
Joined: Sat Apr 07, 2007 4:34 pm
Location: Sweden

Re: Recording Games in 3D

Post by Likay »

Ok, I missed that part on a quick read. :D And yes: having both images in one frame effectively gets rid of any timing issues.
Mb: Asus P5W DH Deluxe
Cpu: C2D E6600
Gb: Nvidia 7900GT + 8800GTX
3D:100" passive projector polarized setup + 22" IZ3D
User avatar
Zerofool
Cross Eyed!
Posts: 119
Joined: Tue May 06, 2008 3:50 pm
Location: Sofia, Bulgaria

Re: Recording Games in 3D

Post by Zerofool »

@BlackShark
When you started using the Blackmagic card, I thought it was something temporary, just until you saved enough money for another PC and a second Intensity card, to get full-res 720p60 3D videos (1280x1440@60 fps), or 1080i60 per eye :). But I haven't heard anything like that from you yet. So, what is your plan for the future?
Btw, I guess future models of such cards will probably support HDMI 1.4a and its 3D modes; therefore a second card will become useless... just thinking out loud ;).
iondrive wrote:My SATA drive and motherboard are rated at SATA II, which is 3Gbit/s, and Wikipedia says SATA II has an "actual uncoded transfer rate of 2.4 Gbit/s", which translates (*60/8) to 18 GB/min. So theoretically, you should be able to record 1280x720x32bit@60fps x 60sec/min at a data rate of 13.3 GB/min, and so that should be doable on a single machine.
Don't get confused: that's just the theoretical maximum bandwidth; no regular HDD on the market comes even close. Some of them cross the SATA I 1.5Gbps speed, but that's it :). Try running a speed test on your HDD with HD Tune, HD Tach, or any other similar software to see the actual maximum speed of your drive (at the beginning of the platters). SSDs and 10,000 & 15,000 rpm drives are a different story ;).
Why don't you try FRAPS? Its lossless codec is very light (even in full RGB mode), and its bitrate consumption is... satisfactory :). If it works in those strange modes you described on the iZ3D forum, that is :).
nV 3DTV Play, Geo-11, Tridef on 2014 55" Samsung H7000 (aka H7150) @ 1080p60 Checkerboard-3D | Valve Index | RTX 2080Ti, i7-6700K, 32GB 3200CL14, 1+2TB SSDs | Win10 v2004 | DAN A4-SFX | G27
User avatar
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

iondrive wrote:Hi Likay,

Data transfer rates:
OK, so anyway, using VirtualDub I was able to record uncompressed video at 3.4GB/minute. I decided to do the math and it works out: 800x600x32bit@30fps x 60sec/minute gets me 3.456 decimal GB/minute. When I try recording 60fps, it stays under 4GB/min, so I know something's limiting my transfer rate.

PS: I forgot to mention that if I switch to 16-bit color for recording, then it's 1.7 GB/min, just as expected, so I should be able to record two full-screen 800x600 images at 30fps. Nice. Well, nice enough for me anyway. I can play the video back with full frames at 60 fps in shutterglass mode with mplayer. Yay.
You probably don't record 32-bit color depth but only 24-bit (8 bits per colour channel). Windows states 32-bit because of the extra 8-bit alpha channel, but it's actually only 24 bits of colour information.
Some professional capture cards can capture up to 10 bits per colour channel, for a total of 30-bit colour. Anything higher and you enter the realm of floating point accuracy and high dynamic range, which is clearly not available for video unless you have access to prototype specialized scientific hardware.
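To make the point concrete (my own arithmetic, not anything from the capture software): a "32-bit" frame still stores 4 bytes per pixel on disk even though only 3 of them carry colour, which is why the file sizes behave as if 32 bits per pixel were being written.

width, height = 800, 600
colour_bytes = width * height * 3    # 24 bits of real colour per pixel
stored_bytes = width * height * 4    # 24 bits colour + 8 bits unused alpha/padding
print(colour_bytes, stored_bytes)    # 1440000 vs 1920000 bytes per frame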
Zerofool wrote: @BlackShark
When you started using the Blackmagic card, I thought it was something temporary, just until you saved enough money for another PC and a second Intensity card, to get full-res 720p60 3D videos (1280x1440@60 fps), or 1080i60 per eye :). But I haven't heard anything like that from you yet. So, what is your plan for the future?
Btw, I guess future models of such cards will probably support HDMI 1.4a and its 3D modes; therefore a second card will become useless... just thinking out loud ;).
I have no current plans to go beyond my current Blackmagic card. Frame-compatible 720p is going to be pretty much the best the web can afford to offer in terms of bandwidth for the next few years (until we all get symmetric 50Mbps/100Mbps fiber connections and host our own videos at home).
Also, since neither the iZ3D nor the Tridef driver takes advantage of Crossfire, 1080p 60fps is out of reach on most recent games and all future games.

By the time everything works, I guess the 3D formats will all be standardized and capture cards will be able to capture the correct format directly without having to worry.
I've seen that Blackmagic announced a new Decklink card that can record two simultaneous 1080p inputs, which is pretty much what you'd need for recording stereo 1080p60. Using two separate computers, one for each stream, is tricky: you need specialized software to keep the video clocks in perfect sync or you get drifting, which is painful to correct.
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
User avatar
Zerofool
Cross Eyed!
Posts: 119
Joined: Tue May 06, 2008 3:50 pm
Location: Sofia, Bulgaria

Re: Recording Games in 3D

Post by Zerofool »

BlackShark wrote: I've seen that Blackmagic announced a new Decklink card that can record two simultaneous 1080p inputs, which is pretty much what you'd need for recording stereo 1080p60. Using two separate computers, one for each stream, is tricky: you need specialized software to keep the video clocks in perfect sync or you get drifting, which is painful to correct.
That's interesting. I just saw the info and specs for this Decklink HD 3D card... and that simultaneous dual-stream recording is only available on the dual-link SDI input. I'm not sure how this thing could be used with regular consumer-grade video cards. It's probably possible, but you'd have to spend another bag of money on the required converter boxes. HDMI 1.4a-aware cards would be much more user-friendly :).

If I understand correctly what you're saying, people using such dual-output configs (like dual projector) would experience these same drifting (mis-sync?) issues, is that right? Or is the problem in the capturing end of the chain?
nV 3DTV Play, Geo-11, Tridef on 2014 55" Samsung H7000 (aka H7150) @ 1080p60 Checkerboard-3D | Valve Index | RTX 2080Ti, i7-6700K, 32GB 3200CL14, 1+2TB SSDs | Win10 v2004 | DAN A4-SFX | G27
User avatar
BlackShark
Certif-Eyable!
Posts: 1156
Joined: Sat Dec 22, 2007 3:38 am
Location: Montpellier, France

Re: Recording Games in 3D

Post by BlackShark »

No, it's not about outputs, it's about clocks.

If there is only one clock in a system, everyone works with the same beat.
If you've got two different clocks then you're in trouble.

When you display 3D from one computer, you know it comes from one single clock in the computer. As long as the software and hardware can synchronize the outputs, you can have perfect synchronization. The time flow may not be perfect (your computer constantly accelerates and slows down, but you don't notice it), but as long as the two video streams and the audio are synchronized, you won't notice it.

The same goes for recording: the recording computer's clock may be slightly off from the playing computer's, but as long as the streams are all recorded with the same clock, you won't notice the tiny accelerations and slowdowns.

Synchronizing different clocks is not very complicated, but the system must be designed to allow it.
For example, pro AV equipment uses genlock ports to synchronize its internal clocks to one unique reference clock that drives all the equipment.
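To give a sense of scale (my own back-of-the-envelope numbers, assuming a typical 50 ppm crystal tolerance, not a measurement of any particular device): two free-running "60 Hz" clocks can slip a whole frame apart within minutes, which is exactly the drift genlock prevents.

nominal_fps = 60.0
ppm_error = 50                        # assumed crystal tolerance (parts per million)
actual_fps = nominal_fps * (1 + ppm_error / 1e6)

drift_per_second = actual_fps - nominal_fps   # 0.003 frames/s of slip
seconds_per_frame = 1 / drift_per_second
print(seconds_per_frame / 60)         # ~5.6 minutes to drift one full frame apart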
Passive 3D forever !
DIY polarised dual-projector setup :
2x Epson EH-TW3500 (2D 1080p)
Xtrem Screen Daylight 2.0, for polarized 3D
3D Vision gaming with signal converter : VNS Geobox 501
User avatar
Zerofool
Cross Eyed!
Posts: 119
Joined: Tue May 06, 2008 3:50 pm
Location: Sofia, Bulgaria

Re: Recording Games in 3D

Post by Zerofool »

BlackShark wrote: No, it's not about outputs, it's about clocks.

If there is only one clock in a system, everyone works with the same beat.
If you've got two different clocks then you're in trouble.

When you display 3D from one computer, you know it comes from one single clock in the computer. As long as the software and hardware can synchronize the outputs, you can have perfect synchronization. The time flow may not be perfect (your computer constantly accelerates and slows down, but you don't notice it), but as long as the two video streams and the audio are synchronized, you won't notice it.

The same goes for recording: the recording computer's clock may be slightly off from the playing computer's, but as long as the streams are all recorded with the same clock, you won't notice the tiny accelerations and slowdowns.

Synchronizing different clocks is not very complicated, but the system must be designed to allow it.
For example, pro AV equipment uses genlock ports to synchronize its internal clocks to one unique reference clock that drives all the equipment.
OK, thanks for clarifying that. It kind of sounds unbelievable, but it makes sense. I thought HDMI was like a universal link: no matter what (compliant) devices you connect with it, they'd all work in perfect sync. Obviously it's more complicated than that :).
nV 3DTV Play, Geo-11, Tridef on 2014 55" Samsung H7000 (aka H7150) @ 1080p60 Checkerboard-3D | Valve Index | RTX 2080Ti, i7-6700K, 32GB 3200CL14, 1+2TB SSDs | Win10 v2004 | DAN A4-SFX | G27
User avatar
iondrive
Sharp Eyed Eagle!
Posts: 367
Joined: Tue Feb 10, 2009 8:13 pm

Re: Recording Games in 3D

Post by iondrive »

Hi guys,

Likay, I understand. I don't blame you for skimming some of my posts since they can blabber on and on and on... :)

Zerofool,
Maybe I'll try those drive testers someday, but I have too many other distractions right now. In a sense I've already done some testing when I tried to record at 60 fps and maxed out under 4 GB/min, so I already have an idea of the speed limits of my setup, but like I said, I may use your suggestions when I want more details. I don't plan on any recordings anytime soon.

FRAPS: I've tried it before and was happy with it, but I chose VirtualDub since other people like it too and it was suggested in the iZ3D thread. I think it would be good for me to get more experience with it. It's also free, so that helps too.


Blackshark,
Since I get exactly half the file size when I record at 16-bit compared to 32-bit, I think it is recording 32 bits per pixel of data, but I believe you when you say it's only 24 bits of color data. So I think it's writing 24 bits of color plus 8 more bits of "junk" alpha-channel data, or some other kind of unused data. It's the math that makes me think it's recording 32 bits per pixel even if only 24 of them are actual color data. Anyway, whatever.

Happy recording y'all.

--- iondrive ---

PS: Shameless self-promotion: please check out my first 3D YouTube videos, since it's embarrassing how few people have seen them so far. They've only been viewed like 16 times, and half of those are mine. LOL.

"Marvel Ultimate Alliance in s3d"
http://www.youtube.com/watch?v=q97Qwd5S38g

and an MUA 3d slideshow
http://www.youtube.com/watch?v=VV9wbCxM800

Hope you like 'em. :)
User avatar
Neil
3D Angel Eyes (Moderator)
Posts: 6882
Joined: Wed Dec 31, 1969 6:00 pm
Contact:

Re: Recording Games in 3D

Post by Neil »

These are great (I'm view #17!).

Don't worry. I shamelessly plug my videos all the time! (Have you seen these: http://www.mtbs3d.com/index.php?option= ... &Itemid=57 - the second-to-last is my favourite!)

If you haven't already, make sure these games are in M3GA.

Regards,
Neil
Post Reply

Return to “iZ3D Legacy Drivers”