Using mplayer/mencoder with 3d videos

Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Hi all,

Well, I thought this day would come sooner or later, and here it is: the day I start a thread about how to use mplayer/mencoder for 3d video playing, encoding, and converting. Actually, I'm just going to start with playing and may never get to encoding, but maybe someone else will post about that. Anyway...

Mplayer and Mencoder are command-line programs, so if you're not into that, you can stop here. Although there are graphical front ends for these things, I'll be posting about the command-line options that you can use for 3d videos. These programs work on Linux and Windows, and maybe Mac, but I'm not really sure about that. I consider them two halves of the same program since they're so closely connected.

Before we go any further, let me say that Stereoscopic Player is a much better program for playback, so my advice is to just pay for that and have no/few problems. On the other hand, if you want multiple computers to be able to play 3d videos for free, this may be acceptable. The 3d modes I'm going to cover are interlaced, left/right, over/under, and frame-sequential, although that last one is the hardest to get working. So if you have a setup that can handle the first three modes, this should be OK for you. If you just want frame-sequential (standard shutterglass), you might be in for many hours of effort, so consider yourself warned. Onward...

My current test system was win98se with a GeForce FX 5200 or GeForce 4, but I've also done some testing on an XP system with a GF-7800-GTX, and things work there too except for some differences. My experience has taught me that the card/OS combination sometimes matters for how well frame-sequential works, but that's the most complicated setup so I'll save it for later. For starters, let's just go with interlaced 3d like from standard field-sequential DVDs.

Don't confuse frame-sequential with field-sequential, and don't confuse interlaced with interleaved. HQFS DVDs are High-Quality Field-Sequential and are interlaced horizontally. If a video has alternating frames of L/R views, then that's frame-sequential and is made up of interleaved L/R video streams. Mplayer can convert between the two on-the-fly in either direction, and we'll get to that eventually, but for now I will mostly give examples for playing interlaced 3d DVDs.

Also, if you want to shop for these things, search ebay for field-sequential 3d. There are a few good ones, but be advised that some are converted from 3d anaglyph. I believe such a conversion could be good, but it takes careful work, and I don't yet trust others to do a good job of it, although I haven't sampled the market for this.

I'm going to skip installation instructions for now since I want to get to the 3d stuff sooner, and people are smart enough to do that on their own, but it would be nice if someone else covered that for me, maybe just by pointing to a good website. First of all, I'd like to concentrate on "on-the-fly" processing, so let's get started with just playing a DVD. If you've got that down already, you can skip the next post and go straight to playing an interlaced DVD.

_________________
System specs:
OS: 32-bit WinXP Home SP3
CPU: 3.2GHz Athlon 64 X2 6400
RAM: 800MHz 4GB dual channel mode
Video: geForce 8800GTS PCI-e, 640MB ram, driver 196.21


Last edited by iondrive on Mon Nov 02, 2009 4:19 am, edited 1 time in total.



Sat Oct 03, 2009 11:06 pm
Sharp Eyed Eagle!
Playing a DVD:
So you want to play a DVD with mplayer in windows or linux? Get to a command line, browse to the mplayer dir and type:

mplayer dvd://1

This plays title 1 from your DVD unless it doesn't. If it doesn't, then you might need to tell mplayer which drive is your dvd drive. If it's E: then:

mplayer dvd://1 -dvd-device e:\video_ts

If this doesn't work, you might have DRM problems. In my case, with Sharkboy and Lavagirl (SBLG), I have to play the DVD with PowerDVD for a second; then I can close that and mplayer works. You shouldn't have this problem under Linux. Welcome to the first windows quirk. You should only have to do this once per boot; after that mplayer should work with that dvd fine, except that it takes around 90 seconds for playback to start. Very annoying and time-consuming for testing.

If you have the DVD on your hard drive at C:\movie1, then you can use:

mplayer dvd://1 -dvd-device c:\movie1\video_ts

This might help playback in some cases.

other play options:
for Sharkboy and Lavagirl, you need -alang eng or else it will default to French.
You can use -chapter # to start at some other chapter.
I like to use -nokeepaspect so I can change the aspect ratio by resizing the window. Otherwise, changing either width or height will automatically change the other to keep the window's proportions. This lets you get 4:3, 16:9 widescreen, or a custom aspect ratio.
-mc 2, -framedrop, and -autosync 30 can help to keep A/V in sync.
-cache 16384 helped when I was testing on XP when playing from a disc instead of a hard-drive.
-nocache helped when I used win98se with a GF4.
-fs for fullscreen, press the f key to toggle that during playback.

Putting most of these together, we could type:

mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -framedrop -cache 16384 -alang eng -nokeepaspect -fs -chapter 7

If this doesn't work then something is wrong. Maybe you didn't install codecs?

Advice:
Many of the things I show you may not be needed, but I tend to throw in more than I need to make sure it works well. Also, these command lines get very long and hairy, so for windows I suggest you make a bat file, edit it with notepad, and then just type the bat filename in your dos window.
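On Linux you can do the same thing with a small wrapper script. This is just a sketch: the build_cmd helper and the /dev/dvd path are my own illustration, not anything mplayer ships with, and it echoes the command so you can check it before swapping the echo for a real run. All the options come from the examples above.

```shell
#!/bin/sh
# build_cmd is a hypothetical helper: it assembles the long mplayer line
# from a title number, a dvd device path, and any extra options you pass.
build_cmd() {
    title=$1; device=$2; shift 2
    echo "mplayer dvd://$title -dvd-device $device -autosync 30 -mc 2 -framedrop -cache 16384 -alang eng -nokeepaspect -fs $*"
}

# print the command to check it; drop the echo inside build_cmd to actually run it
build_cmd 1 /dev/dvd -chapter 7
```

A bat file for windows would just be the same single mplayer line saved as, say, play3d.bat.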

Playback hotkeys:
Keys you should know are:
Q to quit. In windows, you might need alt-tab to focus on the player.
Space to toggle pause
arrows and page-up/down to skip forward/back
F to toggle fullscreen mode

That's it for now. For many more hotkeys, read the Man-page (Manual-page).

EDIT: the examples above show me using -dvd-device to point to the VIDEO_TS folder, but you should also be able to use its parent folder instead, and playback may start faster or slower depending on which you use :)


Last edited by iondrive on Mon Apr 25, 2011 11:01 pm, edited 1 time in total.



Sat Oct 03, 2009 11:11 pm
Sharp Eyed Eagle!
Playing interlaced 3D DVDs:

For non-DVDs, just use the path and filename instead of dvd://1. It should work the same as long as you have the proper codecs.

OK, by now you should be able to play DVDs and you want to play a 3D HQFS DVD. Here we go. The key is noaccel. Get your glasses shuttering and start line-blanking mode on your CRT or DLP, or use your other interlaced system (Zalman?). Don't use fullscreen just yet:

*** For windows:
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel

For Linux, use -vo x11 or -vo xv:noaccel instead of directx. Please post if you use some other method that works.

Be advised that mplayer likes to remember the last way you did something so if you want to play a regular DVD with HW acceleration after this, then you need to use -vo directx or -vo xv depending on your system.

Understanding: (-vo: video output)
The reason you need to disable hardware acceleration for playback of interlaced 3d is that the video card and driver don't know they're not supposed to mix info between two adjacent lines. They try to give you a better picture by default, doing some deinterlacing or filtering, and that lets one eye's data spill into the other eye's lines. So there are two possible sources of deinterlacing: the player and the video card driver. Both must be off to preserve the integrity of the interlacing.

Eye-sync / parallax inversion / eye-swapping:
Chances are 50/50 that each eye is seeing the wrong image. If it's wrong, drag the window up or down (be careful not to resize it) until it's right, or just use the functions of your setup like a hotkey or the taskbar tray icon from ED-Activator. There's an mplayer option for this that we'll cover later.

So now you've got a well-interlaced window, but you might not be happy with that. Stretching it horizontally is OK, but be careful of stretching it vertically because that screws up the interlacing. If you want it bigger or smaller, you'll have to use the scale filter:


*** A small 320x240 interlaced window:
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel -vf scale=320:240:1

I suggest always putting :1 after your new scale resolution because that selects a different scaling method that preserves interlacing integrity; otherwise it defaults to :0, which mixes data between adjacent lines. You won't always need to preserve interlacing in later 3d methods, but it's good to keep things simple and consistent.

Remember that normal DVD video is 720x480 (for NTSC), but if you're running an 800x600 desktop on a CRT, then try something like this for a fullscreen video:


*** Fullscreen interlaced DVD with an 800x600 desktop:
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -vo directx:noaccel -vf scale=800:600:1 -fs

If you have a Zalman 22" 3D LCD monitor, please consider trying this with your monitor's fullscreen res (scale=1680:1050:1 ?) and posting success or problems.


If your DVD is 16:9 widescreen then use this:

*** Fullscreen interlaced widescreen on an 800x600 display:
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -vo directx:noaccel -vf scale=800:450:1 -aspect 16/9 -fs

For other resolutions: divide width by 16, then multiply that by 9 to get your new height.
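That rule in shell arithmetic, for a few common widths (the widths here are just examples):

```shell
# 16:9 height = width / 16 * 9
for w in 640 800 1024 1280; do
    echo "scale=$w:$(( w * 9 / 16 )):1"
done
# prints scale=640:360:1, scale=800:450:1, scale=1024:576:1, scale=1280:720:1
```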


eye-swapping / line swapping: il=s
If you're playing in fullscreen and you want mplayer to swap the eye-views, then swap even and odd lines using il=s after -vf (video filter) and before scale=#:#:1. Order is important here.

*** Fullscreen swapped interlacing for 16/9 aspect DVDs on an 800x600 display:
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -vo directx:noaccel -vf il=s,scale=800:450:1 -aspect 16/9 -fs


*** For a widescreen window:
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -vo directx:noaccel -nokeepaspect -vf dsize=800:450,scale=800:450:1

dsize helps define new window sizes when your video driver insists on scaling to some preset size. Otherwise you can just stretch the window wider to get a correct widescreen view, thanks to -nokeepaspect.


Review:
OK so we've covered 4:3 and 16:9 aspect windows and fullscreen views as well as eye-swapping. I think that's all for this post. There's one more trick for interlaced viewing that I'll save for later. It's about how to change the overall depth of the video so it can appear further or closer than screen-depth. Further posts will be about how to get interlaced video into left/right and over/under and back again.

Whew. Did I miss anything? Maybe I'll try vertical interlacing if someone's interested.



Sat Oct 03, 2009 11:23 pm
Cross Eyed!

Joined: Sun Feb 15, 2009 12:50 pm
Posts: 131
From the other thread:
iondrive wrote:
L/R to O/U:
OK, start with a left/right video and try -vf ilpack,fil=i,il=d,scale=X:Y:1. You might need a dsize=X:Y right before the scale and you might not need the ilpack depending on your source file video format and encoding. Compute X and Y from your original L/R dimensions. Halve the original X and double the original Y to get the new values. "fil=i" interlaces L/R video and "il=d" turns the interlaced to over/under as you already know.


Wow! It works. I didn't need dsize or ilpack on the material I tested, and now I have left/right converted into over/under. The only problem now, of course, is that sync-doubling loses a few lines, so I need to add a few blank lines between the two pictures. To do this I use tfields=0 and tile=1:2:2:0:x in place of the il=d. x is the number of blank lines you need to insert to make the video fit, so my final command line looks something like this:

mplayer LeftRightInputFile.avi -vo gl -vf fil=i,tfields=0:1,tile=1:2:2:0:82

NB: this way generates the above/below video at the original resolution, then lets the GPU handle the scaling through OpenGL. This results in faster playback, but it means the number of lines to insert depends on the source resolution, and due to the scaling it may not be possible to get the vertical alignment 100% perfect.


Sun Oct 04, 2009 10:10 am
Sharp Eyed Eagle!
Hi Mickey,
I'm glad I could help and I'm very glad that you can help me. :D
I'll say a little more to you in a later post. Right now, more info for people unfamiliar with mplayer...

Left/Right and Over/Under (or Above/Below):

Hi again,

It's time to start talking about side-by-side 3d viewing modes. Mostly what I'm showing you is conversion from/to interlaced mode, because if you can do that, then you can convert anything to anything using interlaced mode as an intermediary. It turns out that mplayer/mencoder has many options for dealing with de/interlacing, and some of them are useful to us for 3d purposes. One option is il and another is fil.

-vf il=d makes two half-height pictures, one from the odd lines and one from the even lines, and places one picture above the other so that the final frame has the same resolution as the original. I believe the standard for HQFS DVDs is that odd lines are for the left eye and even lines are for the right eye, but it's possible that it's not really standardized. Anyway, I think that when you use il=d, the odd lines go into the top half-image, so the top shows the left-eye view and the bottom half-image is the right-eye view. The line numbering starts at the top of the screen, and I believe it starts with 1 and not 0, but I could be wrong about that. Corrections are welcome. If you want to swap eye-views, we've already covered il=s. Here's an example:

*** interlaced to over/under with eye-swapping
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel -vf ilpack,il=s,il=d,scale=720:480:1

ilpack:
ilpack fixes colors bleeding into the deinterlaced image. It may not be needed for some other file formats and codecs, but for HQFS DVDs you need it. Try it without to see the difference. The rule of thumb is: if you see weird/bad colors after using il, stick an ilpack before it and use scale after.

The final video is not interlaced, so you probably don't need the :1 after scale or the :noaccel after directx, but they don't really hurt, so my advice is to just keep them. Also, you're supposed to be able to do il=s:d instead of il=s,il=d, but it didn't work for me, so I just use the longer way. il=i is the opposite of il=d and interlaces the top and bottom halves, so if you have an O/U source and want an interlaced result, you do this:

*** over/under to interlaced
mplayer path\filename -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel -vf ilpack,il=i,scale=720:480:1


fil: Left/Right - interlaced conversion
fil is like il, but it puts the two images side-by-side left/right instead of above/below. This means the final image will have different dimensions from the original. fil has (i)nterlace and (d)einterlace options but no (s)wap option. You could use this to free-view 3d videos cross-eyed or wide-eyed, but you'll probably want to scale it down for that. Be advised that the man-page says fil sometimes doesn't work right; if that happens, I think you need to do some kind of preconversion: convert to a resolution that fil works with and go from there. Now for some examples:

*** interlaced to left/right
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel -vf ilpack,fil=d,scale=1440:240:1

If you try just -vf fil=d,scale it might work for arbitrary resolutions, but I prefer to specify the final scaled resolution. Just be advised that if your screen is only 800x600, the window won't fit; on my system it defaults to 800 pixels wide. You should have noticed that I got 1440:240 by doubling 720 and halving 480. For a 3D DVD source the images are squashed, so for freeviewing, shrink and rescale by choosing appropriate values for scale: 720:240 or 360:120 or other. This should result in an L/R image with the odd lines going to the left image; if you want R/L, use il=s before the fil=d, but I expect you knew that already. Finally, if your window is not the specified size, you can try sticking a dsize=width:height before the scale command, or else just resize the window manually thanks to -nokeepaspect. fil=i interlaces L/R images:

*** left/right to interlaced
mplayer path\filename -autosync 30 -mc 2 -nokeepaspect -vo directx:noaccel -vf ilpack,fil=i,scale=720:480:1
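As a quick sanity check for the numbers in the fil examples above, here's the dimension arithmetic in shell (720x480 is just the NTSC DVD example):

```shell
# fil=d: interlaced WxH -> side-by-side (2W) x (H/2)
W=720; H=480
echo "fil=d output: $(( W * 2 ))x$(( H / 2 ))"     # 1440x240

# fil=i: side-by-side WxH -> interlaced (W/2) x (2H)
LRW=1440; LRH=240
echo "fil=i output: $(( LRW / 2 ))x$(( LRH * 2 ))" # 720x480
```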


Those are the basics. That should be enough for you to do a lot, like L/R to O/U and O/U to L/R. Swap (il=s) after interlacing if needed and rescale as needed. Remember -aspect 16/9 or 4/3, and -fs for fullscreen if desired:

*** L/R to interlaced to O/U (L goes to O)
mplayer path\filename -autosync 30 -mc 2 -nokeepaspect -vo directx -vf ilpack,fil=i,il=d,scale=800:600:1

*** O/U to interlaced to L/R (O goes to L)
mplayer path\filename -autosync 30 -mc 2 -nokeepaspect -vo directx -vf ilpack,il=i,fil=d,scale=320:120:1

I'm pretty impressed with everything this seemingly little program can do but wait, there's more. Check out the next post for some tricks using expand.

Also remember that anything mplayer can play, mencoder can encode, so you can convert your 3d files into another format and save them that way instead of converting on-the-fly during playback. This should help old, slow computers play your files. You can get basic info on using mencoder from another website or the man-page.



Last edited by iondrive on Sun Oct 11, 2009 1:43 am, edited 1 time in total.



Mon Oct 05, 2009 12:10 am
Sharp Eyed Eagle!
Using expand to fix vertical parallax mismatch from sync-doubling:

OK, now we're at a point where things get interesting/confusing. Sometimes you want to change the L/R video streams by adding black bands on the sides or in the middle between O/U images, like when you have a CRT and want to use sync-doubling to view O/U 3d videos. The problem with that method is that if the left-eye view is on top, the right-eye view will sit too high, so things don't match up vertically. Mickeyjaw has already addressed this issue with his solution and it looks pretty good, but I'm going to explain a different way to do the same thing, although his is probably better in some ways.

expand:
expand is a video filter that you use after -vf, and it gives your video a bigger canvas. It's as if you put a poster on your fridge and then made the fridge bigger: the poster stays the same size, but now there's a bigger border around it. Only the canvas size has changed; the image size did not. If you then resize the whole thing back to the original size, you see a smaller image with a border around it.

What good is this for 3d? You can use it to fix that sync-doubling vertical mismatch problem. Here's how it works. Say your video is from a DVD and is 720x480. If you deinterlace to O/U mode and then expand to 720x500, the new video will be 720x500 with the old video O/U in the middle and a 10-pixel-wide black border on top and bottom. Got it so far? Good, but you want the black band in between the top and bottom halves, not on the top and bottom edges. This is where you do something clever: interlace, swap, and deinterlace. This swaps the top and bottom halves so that the top half is now on the bottom, which means the black band that used to be on top is now on top of the bottom half, which means it's in the middle. An analogous thing happens to the other half: the bottom band goes to the bottom of the top half, so it too ends up in the middle. So you've moved the top and bottom bands from the expand function to the middle.

Hurray, you've got the black band in the middle where you need it, but now the bottom image is for your left eye and you wanted the left-eye view on top. To fix this you need to swap images before the original deinterlace. Here's what the -vf part looks like when it's all put together:

-vf ilpack,il=s,il=d,scale=720:480:1,expand=720:500,ilpack,il=i,il=s,il=d,scale=720:480:1

In "English": swap, deinterlace to O/U, add borders to top/bottom, then swap the top/bottom halves (interlace/swap/deinterlace) to put the border bands in the middle. You might not need the first swap if you just want to use the reverse-stereo function of your shutterglasses with your Activator.
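To keep a chain that long readable, you can assemble it from named pieces in a shell script. This is just string-building; the variable names are my own, and the pieces are exactly the filters from the chain above:

```shell
# assemble the sync-doubling filter chain from its three logical steps
SWAP_DEINT="ilpack,il=s,il=d,scale=720:480:1"        # swap eyes, deinterlace to O/U
BORDERS="expand=720:500"                             # black bands top and bottom
SWAP_HALVES="ilpack,il=i,il=s,il=d,scale=720:480:1"  # move the bands to the middle

echo "-vf $SWAP_DEINT,$BORDERS,$SWAP_HALVES"
# prints: -vf ilpack,il=s,il=d,scale=720:480:1,expand=720:500,ilpack,il=i,il=s,il=d,scale=720:480:1
```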

So here's how you could play a 4:3 3D DVD in fullscreen in either 640x480 or 800x600 screen res using sync-doubling. These values work for me but your monitor may be different:

*** O/U-sync-doubling 3D DVD playback with 640x480 or 800x600 desktop at 60Hz
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -framedrop -vo directx -fs
-vf ilpack,il=s,il=d,scale=720:480:1,expand=720:500,ilpack,il=i,il=s,il=d,scale=720:480:1

*** for 16:9 widescreen 3D DVDs in 800x600 screen res
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -framedrop -vo directx -fs -sws 0 -vf ilpack=1,il=s,il=d,scale=720:480:1,expand=720:570,ilpack=1,il=i,il=s,il=d,scale=720:480:1 -aspect 16/9

The value for the expand function depends on your CRT res and freq. Here's how to find it:
Play the video in O/U sync-doubling mode without any black bands, estimate the height difference in pixels, and add that to the original video's vertical resolution to get the expand function's v-res. In my case, 20 pixels too high gives expand=720:500.
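In shell arithmetic, that's just an addition (480 and 20 are my values; measure your own offset):

```shell
VRES=480      # source vertical resolution
OFFSET=20     # measured vertical mismatch in pixels
echo "expand=720:$(( VRES + OFFSET ))"   # expand=720:500
```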

-framedrop
-framedrop was important for me this time since the previous default was -noframedrop. When -noframedrop was in effect, the video lagged behind the audio more and more as time progressed. You might not have this problem if your computer is fast enough. -framedrop allows dropping frames for better A/V sync, but we will be using -noframedrop later when we try frame-sequential shutterglass mode. Framedropping is fine with interlaced, O/U, or L/R modes, since every time a frame is dropped, two video images are actually dropped, one for each eye, because each frame holds both eye-views. This is very nice with O/U sync-doubling mode since the glasses never lose sync and the images are fullscreen, although still half-res. I suppose if your monitor can do 800x960 at 60Hz, then sync-doubling gets you 800x480 at 120Hz, and that might be OK for DVD-quality video. I never spent much time trying anything like that since I don't see much chance of success; the resolutions are so odd.

-sws: software scaler
If your computer is slow and you are using :noaccel, then you can try -sws 0, which selects a scaling method that is faster than the higher-quality ones.

Mickeyjaw, tfields and tile:
Hi mickey, feel free to post a more detailed explanation of how your approach works. I haven't used tile much and didn't think to use it that way. I'll bet you can come up with a similar method for adjusting overall depth too, like the one described in my next post; I think that would be interesting to see. Otherwise I'll have to figure it out myself someday, and who knows when that will be. But I have another challenge for you: can you figure out an easy way to use mplayer to convert 3d anaglyph video into over/under mode? I mean putting the green view on top and the magenta view on the bottom. I don't know if this is possible and I'm skeptical, but I thought you might like a challenge. I'm actually interested in this because my DLP projector doesn't work well with green/magenta movies like Coraline. I suspect a color mismatch between my PJ's colorwheel and the colored glasses, but if I can convert the video to green/magenta frame-sequential, then I can use my shutterglasses and there will be no ghosting due to colorwheel/filter mismatch. Crazy, I know, but just interesting enough to try.



Mon Oct 05, 2009 12:36 am
Sharp Eyed Eagle!
using expand for overall depth adjustment:

Now let's go back to interlaced mode for a sec. Say you deinterlace a 3D DVD to L/R mode, making the new video 1440x240. Then you expand to 1460x240 and reinterlace. What 3d effect will that have? Well, if the left-eye view was on the left side of the deinterlaced image, it gets a 10-pixel-wide black band on its left side, and the right-eye view gets a black band on its right side. After you reinterlace, the two images are blended together so that the left image has been shifted right relative to the right-eye view, which has been shifted left. Left-view right and right-view left means your eyes are crossing more than they used to, which means you've shifted the entire scene closer to you: it's now at less than screen depth. If you want the opposite, pushing the whole scene further away from you, you need to do things a little differently. Also, 10-pixel-wide borders aren't much, so let's use 16 on each side; the expand becomes expand=1472:240 and the result is a relative shift of 32 pixels before the final rescale.

*** shift interlaced DVD out of screen:
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -framedrop -nokeepaspect -vo directx:noaccel -vf ilpack,fil=d,expand=1472:240,fil=i,dsize=720:480,scale=720:480:1

*** shift interlaced DVD into screen:
mplayer dvd://1 -dvd-device e:\video_ts -autosync 30 -mc 2 -framedrop -nokeepaspect -vo directx:noaccel -vf ilpack,il=s,fil=d,expand=1472:240,fil=i,il=s,dsize=720:480,scale=720:480:1

You may have realized that adding the border and resizing makes the image narrower, but it should not be hard to fix: in windowed mode, change 720 to 736, or decrease 480 to something appropriate... stretch the width or shrink the height.
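The width arithmetic from the paragraphs above, as a shell sketch (16 pixels per side is just the example value):

```shell
BORDER=16                        # black band width per side, in pixels
LRW=1440                         # L/R width after fil=d on a 720-wide DVD
EXPANDED=$(( LRW + 2 * BORDER ))
FINAL=$(( EXPANDED / 2 ))        # fil=i folds the frame back to half width
echo "expand=$EXPANDED:240"      # expand=1472:240
echo "scale=$FINAL:480:1"        # scale=736:480:1 (the 736 mentioned above)
```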

In practice, you'd use push-back more often than pull-forward, because many of the old 3d movies have way too much popout in my opinion. "Comin' at Ya" and others have that problem. It might be OK in a theater where the audience sits far from the screen, but for home viewing it's too much. To fix that, use a smaller screen, sit further away, shrink the image, or push the video back with the above method or an alternate one. Surprisingly, I think these movies might be great on very small 3d devices, about the size of an iPhone, with vertical interlacing and either parallax barriers or lenticular lenses. There's a company that sells some kind of plastic overlay to make some small devices autostereoscopic. Please post if you know it.

3D visualization brain theory:
I've come to understand that it's the visual content the eyes receive that matters to the brain in reconstructing a 3d image, rather than the direction of your eyes. I'm saying the brain is more of an image-analysis engine than an eyeball-parallax decoder. So although I say that the above commands shift the 3d effect in or out, it should be clear that the content of the images doesn't change; it only gets shifted left or right. The way I look at this is from a level-of-comfort perspective. Movies with too much popout will still be annoying because the images are the same, but your eyes should be a little more comfortable with the high popout since they will be less crossed than before. What led me to think this way? Freeviewing. The above methods don't work with freeviewing because your eyes are already way crossed for R/L images, and adding black bands on the left and right sides changes the parallax for fusion but doesn't really make the image seem closer or further. In this case, it's the image content that matters most. From this perspective, classical convergence controls mostly help with eyeball comfort, although border cutoff (frustum) does affect the illusion of depth. It's still significant for helping the view feel natural, though.

ilpack:
If you think something is not needed, try it without and see the difference, although the difference ilpack makes might be subtle. Check it by pausing on some scene with a lot of popout and looking for color-ghosting; compare scenes with ilpack included and excluded. You can even take and compare screenshots using the s key, but you need to include the word screenshot in your -vf filter chain; mplayer will put the screenshots in the current working directory. ilpack changes the way color data is packed into the lines. Check the man-page for more info, but the description is not very detailed.

Whew. OK, I think I'm done except that I haven't even gotten to frame-sequential mode yet. I think I need a break so I don't know when I'll get to it. I want to work on my other projects now but maybe I'll change my mind and do more of this. Wait and see I guess.

Please post if you found any of this useful and tell us of your experiences.

--- iondrive ---



Last edited by iondrive on Sun Oct 11, 2009 1:46 am, edited 1 time in total.



Mon Oct 05, 2009 12:49 am
Profile
Cross Eyed!

Great stuff!! Nice to see you have another way to handle vertical alignment / horizontal convergence etc. To be honest, I had no idea half of this would be possible until about a week ago - all I knew was over/under to left/right and field-sequential to others. Gonna go add all your instructions to my linux3d wiki: http://linux3d.magicbox.org.uk - let me know if it's OK for me to copy/paste your more or less original posts straight in there?

As for anaglyph, I have no idea if it is possible, but I will delve back into the manpage and see what it throws up. What I really need to do now is pull my thumb out my a**e and finish teaching myself python so I can write a launcher front-end to handle all the options (maybe parse Stereoscopic Player svi files to set options too). I am a lazy wotsit though so it may take a while...


Mon Oct 05, 2009 2:49 am
Sharp Eyed Eagle!
Yes, go ahead and post any of my info anywhere.

gotta go.



Mon Oct 05, 2009 2:20 pm
Cross Eyed!

Just for the hell of it, here is how to output vertically interlaced from various different formats. Maybe it will be of some use to those following the DIY auto-stereo with parallax barriers thread.

Left/right to vertical interlace:
Code:
 mplayer filename -vf rotate=1,ilpack,il=i,dsize=y1:x1,scale=y2:x2:1,rotate=2
where y1 is the input file's y resolution, x1 is HALF the input file's x resolution, y2 is the desired output y size, and x2 is the desired output x size
e.g. for a 1440x480 L/R unsquashed side by side input file, output vertically interlaced and scaled in an interlace compatible way to 1024x768:
Code:
 mplayer filename -vf rotate=1,ilpack,il=i,dsize=480:720,scale=768:1024:1,rotate=2
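If you're scripting several of these conversions, the rule above can be captured in a tiny shell helper. This is my own sketch (the function name is mine, nothing mplayer-specific) - it just builds the filter string from the input and output sizes:

```shell
# Build the L/R-to-vertical-interlace filter chain from sizes.
# Assumes an unsquashed side-by-side input, per the rule above.
build_vf() {
  in_w=$1; in_h=$2; out_w=$3; out_h=$4
  y1=$in_h                # input height
  x1=$(( in_w / 2 ))      # HALF the input width (one eye's view)
  echo "rotate=1,ilpack,il=i,dsize=${y1}:${x1},scale=${out_h}:${out_w}:1,rotate=2"
}

# The 1440x480 -> 1024x768 example from above:
build_vf 1440 480 1024 768
# prints rotate=1,ilpack,il=i,dsize=480:720,scale=768:1024:1,rotate=2
```

Then pass the result straight to mplayer's -vf.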


Over/Under to vertical interlace:
e.g. for a 640x460 squashed above/below input file, output vertically interlaced and scaled in an interlace compatible way to 1024x768:
Code:
 mplayer filename -vf rotate=1,ilpack,fil=i,dsize=460:640,scale=768:1024:1,rotate=2



Horizontal interlace to vertical interlace:
e.g. for NTSC(720x480i) input file, output vertically interlaced and scaled in an interlace compatible way to 1024x768:
Code:
mplayer filename -vo gl -vf ilpack,il=d,scale=720:480,rotate=1,fil=i,dsize=480:720,scale=768:1024:1,rotate=2

The extra scale command in the middle of the chain is to perform colourspace conversion (I think?) as otherwise interlacing/deinterlacing and rotating messes with the colours.


Wed Oct 07, 2009 1:26 pm
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Ohhhhh, you beat me to interlaced mode. Nice work - spot on. I'll still post my own version even though it's pretty much the same, since I already typed it up. Regarding multiple scales in a -vf filter chain, I have also found them necessary or else mplayer exits. As before, if anyone thinks they aren't needed, try it without and see whether it works.

More gap and O/U control:
OK, so recently we've covered some gap control and O/U-to-L/R conversion, but there's another way to do that: a filter called down3dright. Just use -vf down3dright and your O/U will be turned into L/R. You can test this on a regular 2D video file - just use il=d first and then you have a 2D O/U:

mplayer videofile -vf ilpack,il=d,scale,down3dright,dsize

I've gotten better at using mplayer since I started this, and now I sometimes take advantage of scale's defaults, which lets you use plain "scale" instead of "scale=720:480:0". :0 is the default, and if you need :1 you can still use some of scale's defaults like this: "scale=::1". I'm more confident now that you only need the :1 after you interlace or resize an interlaced image, but not after you deinterlace. The same seems to apply to dsize, so you don't always need dsize=#:#. Compare the above results with this command:

mplayer videofile -vf ilpack,il=d,scale,down3dright

dsize helps prevent some possibly-unwanted resizing that your "-vo" driver does. This is what I prefer since it shows you the actual resolution you're working with, rather than an auto-rescaled output that you didn't ask for.

You should know the difference between down3dright and fil=d. If you start with a size of 720x480, then fil=d gives you 1440x240, while down3dright compresses the width for you, resulting in 720x240. That can mean unwanted pixel loss, which is why I prefer fil=d. down3dright may be more convenient sometimes, though, especially if you want to remove a gap band between the O/U images - that's a built-in option of down3dright. If your gap is 90 lines, use down3dright=45 and the gap is removed during the conversion. If you want to test this on a 3d file, you can start with an interlaced file, make it O/U with a gap using expand, and then add on the down3dright=45 option:
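To make the size arithmetic concrete, here's a quick shell check (my own calculation, just echoing the dimensions described above):

```shell
# Output sizes from a 720x480 source, per the behaviour described above:
src_w=720; src_h=480
# fil=d: the two fields end up side by side at full width each
fil_d_size="$(( src_w * 2 ))x$(( src_h / 2 ))"
# down3dright: width is compressed back to the original
down3dright_size="${src_w}x$(( src_h / 2 ))"
echo "$fil_d_size $down3dright_size"   # prints 1440x240 720x240
```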

*** gap creation and removal using expand and then down3dright
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -framedrop -aspect 16/9 -nokeepaspect -vf
ilpack=1,il=s,il=d,scale=720:480:1,expand=720:570,ilpack=1,il=i,il=s,il=d,scale=720:570:1,
down3dright=45,dsize=640:240,scale=640:240

You can remove down3dright and all after it to see the video before the down3dright function.

Now let's see another way to remove gaps, in case you need it for some reason: the crop command. Like before, we'll create a gapped O/U video, then undo the gap. This time we need to do an interlace/swap/deinterlace to move the center bands to the outside, then crop out the middle image. We leave it as a U/O video.

*** undoing gaps with crop=w:h:x:y. If no x,y is given, the default cropping location is centered.
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -framedrop -aspect 16/9 -nokeepaspect -vf ilpack=1,il=s,il=d,scale=720:480:1,expand=720:570,ilpack=1,il=i,il=s,il=d,scale=720:570:1,
ilpack,il=i,il=s,il=d,scale=720:570:1,crop=720:480

You can remove the last ilpack and all after it to see the video before the gap removal.

You can apply this strategy to remove black bands from 3d video that has depth-control gaps on the left/right sides or center of L/R images too, of course.



Last edited by iondrive on Sun Oct 11, 2009 4:59 am, edited 1 time in total.



Sun Oct 11, 2009 2:09 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Alternate gap creation technique:
This method uses tfields and tile and I credit mickeyjaw for showing it to me.

tfields: interlaced to frame-sequential with framerate doubling.
tfields takes an interlaced video and makes one full frame out of all the odd lines and another full frame out of all the even lines, so if your original video was 30-fps interlaced, tfields makes it 60-fps frame-sequential. We'll play with this later when we try frame-sequential shutterglass mode. I've made some progress with it, but it still has sync issues; if you can get it working, though, it looks a lot better than line-blanking mode on a projector. I think line-blanking on a CRT can be OK.

tile: making a grid of video images.
tile takes sequential frames of a video and makes a tile out of each frame in a grid you define and lays them out left to right, then top to bottom. Let's make a 3x2 grid of small video frames:

*** interlaced to 6-square grid: LRL/RLR
mplayer dvd://1 -dvd-device e:/video_ts -mc 2 -autosync 30 -framedrop -vo directx:noaccel -vf scale=200:150:1,tfields=0,tile=3:2:6:20:10,scale=660:500,dsize=660:500

NOTE: I need to use :noaccel for this to work or else I get 2 frames forward and 1 frame back when playing. If you suspect the same, use -nosound and then space to pause and "." to step one frame at a time. Sound gets screwed up for me if I pause and framestep forward. That's why I suggest using -nosound if you're troubleshooting video like this.

For tile=3:2:6:20:10, (x:y:frames:border:spacing)
3:2 means 3x2 (2x3 would give us LR/LR/LR),
6 means output one complete tiled frame for every 6 frames input. This means that the final video framerate will be divided by 6. You could have used an odd number like 5 and then the 6th tile would be blank.
20 means put a 20-pixel wide border around the tileset.
10 means put 10-pixel wide black gaps between tiles.
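If I've read the manpage right, the border goes on all four sides and the spacing only goes between tiles, so the tiled frame size works out like this (my own arithmetic - the 660 width matches the scale=660:500 in the example above, which then stretches the 350-line raw height to 500):

```shell
# Size of the tiled frame for tile=x:y:frames:border:spacing,
# given per-tile dimensions tw x th (assumption: border on all
# four sides, spacing only between adjacent tiles).
tile_size() {
  x=$1; y=$2; tw=$3; th=$4; border=$5; gap=$6
  w=$(( x * tw + (x - 1) * gap + 2 * border ))
  h=$(( y * th + (y - 1) * gap + 2 * border ))
  echo "${w}x${h}"
}

# The 3x2 grid of 200x150 tiles with border 20 and spacing 10:
tile_size 3 2 200 150 20 10   # prints 660x350
```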

So to make side-by-side videos with a 10-pixel wide gap use:

*** For L/R:
mplayer videofile -vf tfields=0,tile=2:1:2:0:10

*** For O/U:
mplayer videofile -vf tfields=0,tile=1:2:2:0:10

I don't need :noaccel for this to work, and this approach works out nicely because tfields doubles the framerate while tile=#:#:2 halves it, returning it to the original framerate. However, I suspect there might be a chance of A/V sync mismatch with this approach, although I haven't tested it enough to know for sure. Regarding tfields=0: you could use tfields=1 for slightly better image generation, but I prefer tfields=0 because I believe it is faster, although that is unproven. See the manpage for details. Next, back to our 3x2 grid for some checkerboard output...
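The framerate bookkeeping above can be sketched as plain arithmetic (my own sketch, assuming tile's third parameter N outputs one tiled frame per N incoming frames):

```shell
# Net framerate: tfields doubles the fps, tile=x:y:N divides by N.
net_fps() { echo $(( $1 * 2 / $2 )); }

net_fps 30 2   # L/R or O/U pair: 60/2, back to the original 30 fps
net_fps 30 6   # the 3x2 grid: 60/6 = 10 fps, 1/3 of the original
```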



Sun Oct 11, 2009 2:20 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Vertical interlacing and even diagonal interlacing (checkerboard 3d)

Checkerboard output?
OK, it's not great, but I decided to tackle this and this is the best I came up with. Use your 3x2 grid and crop out the left 4 tiles. That gets you LR/RL, and now we get to use the rotate command: rotate=1 rotates the video 90 degrees clockwise and rotate=2 undoes that (90 degrees counterclockwise). The strategy is interlace, rotate, interlace, unrotate, and you've got your checkerboard 3d pattern.

*** horizontal interlaced to checkerboard
mplayer dvd://1 -dvd-device e:/video_ts -mc 2 -autosync 30 -framedrop -vo directx:noaccel -vf scale=320:240:1,tfields=0,tile=3:2:6:0:0,crop=640:480:0:0,
ilpack,il=i,scale=640:480:1,rotate=1,ilpack,il=i,scale=480:640:1,rotate=2,dsize=640:480
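As a sanity check on why the double interlace gives a checkerboard (my own reasoning, not mplayer output): the first il=i makes the eye assignment alternate with row parity, and the second il=i, applied to the rotated frame, adds column parity, so pixel (X,Y) ends up belonging to eye (X+Y) mod 2 - which is exactly a checkerboard:

```shell
# Which eye owns pixel (X,Y) after row-interleave then
# column-interleave: eye = (X + Y) mod 2.
eye() { echo $(( ($1 + $2) % 2 )); }

row0=""; row1=""
for x in 0 1 2 3; do
  row0="$row0$(eye $x 0)"
  row1="$row1$(eye $x 1)"
done
echo "$row0"   # prints 0101
echo "$row1"   # prints 1010
```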

One reason it's bad is that it starts out with small images, and each tiled frame is built from 6 input frames (3 original interlaced frames), so the final framerate is 1/3 of the original. Scenes with a lot of motion will also look bad for the same reason. I don't have a checkerboard 3d input device, but if you do and decide to try this, tell us how it works. Why post this info if it's bad? It illustrates how you might do it if you decide to make or process your own 3d videos. You could do some processing outside of mplayer/mencoder, get your videos into a 2x2 grid of LR/RL, and then do the above interlace/rotate/interlace/derotate routine. I may post more on how to do that using Linux someday. Heck, I'll give you the basics now.


Basic strategy for processing interlaced to checkerboard using mencoder/mplayer and netpbm tools:

Basically, have a lot of hard drive space and output the video as one jpeg (or other format) per frame, then process from there with a bash script using netpbm tools like jpegtopnm, pamcut (not pnmcrop), pnmcat or pnmpaste, and finally pnmtojpeg. I know too many jpeg conversions can cause image degradation, but decide for yourself what is acceptable. Then put all those frames back together into a video file with sound extracted from the original video - though this step can also have A/V sync problems. When you save the video to individual pics, I suggest saving in a 2x2 format like LL/RR from il instead of fil, since il doesn't change the dimensions of the image like fil does. Then use your netpbm tools to make all the images LR/RL and save the video in that format. Then play it with mplayer doing the double interlacing on the fly; I think most computers will be fast enough for that, although you may want to rescale the video beforehand. I suggest the 2x2 format because you will have no interlacing problems that way during any conversions or encoding. By the way, I see there is a Windows version of the netpbm tools.

*** output 1 jpeg per frame for processing (interlaced to 2x2 format)
mkdir moviepics
mplayer path\file -nosound -vf ilpack,il=d,scale=1024:768,rotate=1,ilpack,il=d,scale=768:1024,rotate=2,dsize -vo jpeg:outdir=moviepics:quality=99:maxfiles=162000

Yes, 162000 images for a 90-minute movie at 30fps. Adjust as needed, but know that the default is 1000. Some old filesystems can't handle that many files in one directory, but I know that XFS on Linux can. You can also output to pnm, png, tga, and gif, but I think jpegs are fine and save a lot of space. I chose 1024x768 because I read somewhere that that was the resolution of a TV that takes checkerboard input.
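The maxfiles figure is just frames = minutes x 60 x fps:

```shell
# 90 minutes at ~30 fps (NTSC is really 29.97; rounded here)
frames=$(( 90 * 60 * 30 ))
echo "$frames"   # prints 162000
```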

After processing all the frames to LR/RL format, you can skip encoding and just play all the image files as if they were in a video file already even though they're not:

*** playback 2x2 files into checkerboard output to 1024x768 screensize
mplayer "mf://path/*.jpg" -mf type=jpeg:fps=29.97 -nosound -sws 0 -vo x11 -fs -vf ilpack,il=i,scale=::1,rotate=1,ilpack,il=i,scale=::1,rotate=2,dsize

Be advised that there might be a limit to how long the *.jpg file list can be; I believe 179000 works on my system, and the actual max is somewhat more than that. You can optimize this by excluding a pair of rotations (the last rotation from the first mplayer command and the first rotation from the second), and I don't know if you really need the ilpacks in the second mplayer. Also know that your framerate could be 24.97 or something else - you can find it in mplayer's output text. I also stuck in -sws 0 (software scaler) in an attempt to speed up rendering.

But there's no sound:
Right, start another mplayer with -novideo and sync the sound by hand like in the next post. This will have to do until we get into mencoder and can combine the sound with the video.

What doesn't work:
If you try deinterlacing twice using il=d,fil=d then you get LL/RR and if you change the order to fil=d,il=d then you get LR/LR when what you need is LR/RL. That's why we crop out the 2x2 grid from a 3x2 grid, because that gets us LR/RL.

Having fun yet? Next, vertical interlacing.

Vertical interlacing:
There are 2 ways to display vertical interlaced videos that I want to talk about. One is done by turning your monitor on its side and the other is not. Both use the rotation and interlacing trick except that this time it's deinterlace to O/U, rotate so it's sideways O/U, interlace as if it's L/R, then unrotate. For the sideways monitor display, you can use line-blanking to view it with shutterglasses on a CRT.

*** horizontal-to-vertical interlacing via sideways monitor rotation (800x600 res mode)
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -framedrop -vo directx:noaccel -fs -vf ilpack,il=d,scale,rotate=1,fil=i,scale=338:600:1,dsize=338:600 -nokeepaspect

*** horizontal to vertical interlacing for 16/9 widescreen DVDs (800x600 res mode)
mplayer dvd://1 -dvd-device e:/video_ts -autosync 30 -mc 2 -framedrop -vo directx:noaccel -fs -vf ilpack,il=d,scale,rotate=1,fil=i,scale=450:800:1,rotate=2,dsize=800:450 -nokeepaspect

Well that wasn't too bad after all.



Last edited by iondrive on Mon Apr 25, 2011 11:16 pm, edited 2 times in total.



Sun Oct 11, 2009 4:19 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Mirrored and dual projector modes:
Now for some awkward modes using dual displays and dual mplayers on linux.

Well, I tried to figure out how to do mirrored modes, but it looks like the only way is to run two mplayers simultaneously. First, though, let's do something easy: viewing 2D from a 3D interlaced video. Say you have a 3D interlaced DVD but you want to watch the 2D version and don't have it. No problem... use field. The field function makes full frames out of your choice of either the odd or even lines:

*** playing 2D from 3D interlaced
mplayer dvd://1 -dvd-device e:/video_ts -mc 2 -autosync 30 -framedrop -vf ilpack,field=0,scale

Yes, field needs ilpack for DVDs. The man-page says an even number gets you the even lines and an odd number gets you odd lines. I would just use 1 or 0.

Dual display 3D modes:
So if you could start one mplayer with field=1 and another mplayer with field=0, then each mplayer is showing a different eye's view. Now use a dual display system and put one mplayer on each display and pause/unpause the one that's playing ahead of the other to sync them up and you've got your 3d movie playing. Now for the problems:

Problem 1: displaying on both monitors/projectors
I was unable to solve this problem on windows. If someone knows how to do this, please post. Also, if someone could get mplayer to span dual displays on windows, that would be good info too. I was only able to play mplayer on the primary display. On the secondary, it shows a blank window.
Solution on Linux: you can use -xineramascreen 1 -fstype none -fs to get fullscreen on monitor 1 and use a 0 with the other mplayer's -xineramascreen for monitor 0. Also, "For some reason it only works if the -xineramascreen is at the start of the filter chain before anything else". Thanks to mickeyjaw for telling me about this multiscreen stuff.
UPDATE: if you are in spanning mode on Windows, then you can use -vo directx:noaccel as your video output driver and the video should be able to be seen on both displays. YAY! noaccel means don't use hardware acceleration so this might not be fast enough in HD but it's worth a try. You can use this idea with a parallel projector setup but not mirrored modes unless you process the video to be half-mirrored itself. AviSynth can help with this but I'm not covering that here. If -vo directx:noaccel isn't fast enough, try -vo gl.

Problem 2: too much sound.
Solution: use -nosound on one of the mplayers - but I have a hunch you might then lose sync between the video streams. If so, try -af volume=-20 or less to turn the volume down below hearing level. The reason is that some video players/codecs drop frames on purpose to keep the video in sync with the audio stream. If you use -nosound, that video stream will not drop frames while the other will. With sound playing but volume down, both videos will hopefully drop frames at the same times and stay in sync that way.

Problem 3: can't pause both mplayers easily.
Solution: Don't pause the movie. OK that's bad. The only other thing I can think of is if someone else can figure out how to control both mplayers with the same keystrokes. I think it might be possible if someone knows more about using FIFOs and then uses one or two "tee" functions under Linux to send the same keystrokes to both mplayers.

Problem 4: one of my displays is mirrored.
Solution: No problem. Use the -vf mirror function for left/right mirroring or else use the -vf flip function for up/down mirroring.

Spanning non-mirrored mode:
Why don't you just convert to L/R video and project that with your dual projectors using only one mplayer? Right. You can do that on Linux using "-xineramascreen -2 -fstype none -fs", but if you need mirrored modes then you still have to run two mplayers. I was unable to find any way to mirror half of a video once you've gotten it into a L/R or O/U mode. Some devices, like projectors, often have their own built-in mirroring functions so that might be useful to look for in some rare setups. As above, you can do this with Windows with -vo directx:noaccel or -vo gl.


Anaglyph, yuk:

UPDATE: mplayer now has anaglyph output modes. Search later posts or the man page for "stereo3d". I'll leave the following as it is just in case it's handy for someone for some odd reason.

In case you want to see a single color component of an anaglyph video, do this:

*** using eq2 to render only a single color component
red: mplayer videofile -vf eq2=1:1:0:1:1.0:0.1:0.1
green: mplayer videofile -vf eq2=1:1:0:1:0.1:1.0:0.1
blue: mplayer videofile -vf eq2=1:1:0:1:0.1:0.1:1.0

*** using eq2 to render only two color components
cyan: mplayer videofile -vf eq2=1:1:0:1:0.1:1.0:1.0
magenta: mplayer videofile -vf eq2=1:1:0:1:1.0:0.1:1.0
yellow: mplayer videofile -vf eq2=1:1:0:1:1.0:1.0:0.1

any: mplayer videofile -vf eq2=1:1:0:1:red:grn:blu

0 doesn't work to decrease components, you must use 0.1.

The eq2 filter consists of gamma:contrast:brightness:saturation:rg:gg:bg:weight.
Most default values are 1 or 1.0 but brightness is 0 for normal since it takes negative values for darkening. Read the man-page for more info.

anaglyph-polarized, anaglyph-shutterglass hybrid modes.
You can try this using a dual-display 3d setup on Linux by playing one mplayer on each display, with the correct color on each, and then using your polarized glasses to watch anaglyph 3d. It might be a bad idea, but in my case my projector's colors don't match the colors of my green/magenta glasses, and movies like Coraline look better in anaglyph on my old CRT because of too much ghosting on the projector due to cross-color leakage. On the other hand, frame-sequential mode on the projector gives me basically zero ghosting, so the question is: how can I use shutterglass mode with an anaglyph movie? I could do this by processing the movie or by changing the timing of the shutterglasses. Maybe I'll try both and tell you all how it works someday. We'll see.
UPDATE: it's the DVD player that causes the ghosting for Coraline. When I play it from my computer, it's much better.

I've done some tests and I'm not sure how good a job this eq2 function does, but if I were to process an anaglyph movie, I would rather use ppmtorgb3, as I have more confidence that it does a good job of color separation. That command is not part of mplayer; it's from the netpbm tools. Maybe I'll write more about how to do that someday.

ARRRRGH. We still haven't gotten to normal frame-sequential shutterglass mode yet!
I can't believe there was so much to say up to now.



Last edited by iondrive on Tue Apr 26, 2011 2:22 am, edited 2 times in total.



Sun Oct 11, 2009 4:56 am
Cross Eyed!

Joined: Sun Feb 15, 2009 12:50 pm
Posts: 131
Yeah, I couldn't figure out a way to do one screen mirrored/flipped in mplayer either. It should be possible to make X do per-display mirroring using XRandR and then just play the left/right dual output using -xineramascreen -2 -fstype none -fs. However, it seems the proprietary NVIDIA driver does not support RandR properly: when I query the display, RandR returns one single large display, not two separate outputs. Blast NVIDIA for removing a feature that even the most basic framebuffer-only / VESA drivers support. :evil:

However, I am sure this will probably work with the open source 2D acceleration only nv driver. I know for sure RandR works with plain framebuffer output because I use it to rotate the screen on my linux PDA (HTC Universal running Debian+X+IceWM+mplayer) for watching 2D movies and when I switch from tablet to clamshell mode. It may even be that it is the Xorg on Ubuntu that is stuffing XRandR, so maybe you could try on your Gentoo system?

If RandR works on your driver you should do something like:
Code:
xrandr --output n --reflect x
or
Code:
xrandr --output n --reflect y
.
You can also do --reflect xy though I don't know how useful it could be. There is also a grandr app (GTK frontend) and gnome-randr-applet (Gnome panel app) to take the hassle out of RandR (Provided you're not one of those KDE types :P )

I am quite surprised I managed to miss the eq options! I thought I had read the entire manpage and not found any way to do colour adjustments, but obviously not.


Sun Oct 11, 2009 7:25 am
Cross Eyed!

Joined: Sun Feb 15, 2009 12:50 pm
Posts: 131
OK If you are using the NVIDIA proprietary drivers you need to add
Code:
Option "RandRRotation" "true"
to your xorg.conf. This enables rotation, but still not reflection. Apparently reflection works with the Intel drivers though, and I still haven't tried with nv as I had issues getting it to work.


Tue Oct 13, 2009 6:15 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
hi again,

mickeyjaw,
too bad about xrandr not working so well. I'm using an nvidia driver and xrandr by itself tells me "Reflections possible - none" and it doesn't work when I try it so no luck there. On the plus side, I found a way to process things so you can then use -xineramascreen....

Ready, let's go. I wrote this earlier offline:


Half-mirrored modes: a new approach

Hey, good news - well, sort of. I've found a way to process 3d videos into half-mirrored modes. The bad news is that it's very slow, but at least it works, and you can process your existing interlaced (or other) 3d video into a format that should work on Linux systems - and maybe Windows too, if someone can figure out how to span 2 monitors on Windows with mplayer or some other player. The trick is to use the geq (general equation) filter. You can test it with mplayer, but to really use it to watch a video, you should process it overnight. geq took some figuring, but I made progress and am happy to report that it works fine for me, so if you have a 3d setup that is half-mirrored, give this a try. First just test it with mplayer and any video file you have handy:
(commas need escaping with a backslash, and do not lowercase the capital letters)

*** testing -geq for half-mirroring, (left half mirrored onto right half)
mplayer d:\video.avi -nosound -vf geq=p(X\,Y)*gt(W/2\,X)+p(W-1-X\,Y)*lt(W/2\,X)

Understanding -vf geq:
I'm not sure exactly which functions geq has available to it, but I'm guessing they are the same as for vrc_eq, which includes the following functions from the manpage:

max(a,b),min(a,b) maximum / minimum
gt(a,b) is 1 if a>b, 0 otherwise
lt(a,b) is 1 if a<b, 0 otherwise
eq(a,b) is 1 if a==b, 0 otherwise
sin, cos, tan, sinh, cosh, tanh, exp, log, abs, mod

NOTE: from my tests, it looks like "lt" actually does less-than-or-equal-to. Write your equations as if that's the case and I think you'll have fewer problems. My efforts showed me that gt(W/2,X) is not the same as lt(X,W/2), so use gt for the first half and lt for the second and they should work better. I guess it could be a programming bug that someone could check the source for, since mplayer/mencoder is open source. Otherwise you could do "gt(...)" for the first half and "(1-gt(...))" for the second half instead of an lt - then it would be a complementary function for sure - but I'm happy with the way it's shown here. My mplayer version is 1.0rc2 from October 2007, so maybe it's fixed in a newer version by now. Hmmm, maybe I should update, but that might also degrade my frame-sequential performance if they did something differently. I'll try it later.


manpage typo missing commas:
One other oddity is in the html version of the manpage where you might find something like this:

checkerboard invert with geq filter:
mplayer -vf geq="128+(p(XY)-128)*(0.5-gt(mod(X/SW128)64))*(0.5-gt(mod(Y/SH128)64))*4"

I realized that many commas got lost in translation from plain text to html. It should look like this with backslashes included too:
mplayer -vf geq=128+(p(X\,Y)-128)*(0.5-gt(mod(X/SW\,128)\,64))*(0.5-gt(mod(Y/SH\,128)\,64))*4

If you try this, you'll see that's not the checkerboard we want for 3d but let's save the 3d-checkerboard format for later.


Understanding the equation:
For clarity, I'll use "newpixel(X,Y)" instead of geq for the equation:
"newpixel(X,Y)=p(X\,Y)*gt(W/2\,X)+p(W-1-X\,Y)*lt(W/2\,X)"

You can see that there are two parts to the right-hand side of the equation. The first part (before the + ) describes the left half of the image and the second part describes the right half. Taking a step backwards you can try these things:

mplayer d:\video.avi -nosound -vf geq=p(X\,Y) --- 1:1 pixel-mapping
mplayer d:\video.avi -nosound -vf geq=p(X\,Y)*gt(W/2\,X) --- left half only
mplayer d:\video.avi -nosound -vf geq=p(X\,Y)*lt(W/2\,X) --- right half only
mplayer d:\video.avi -nosound -vf geq=p(W-1-X\,Y)*lt(W/2\,X) --- mirrored left on right

geq knows that W is the video's width and H is its height, so do not replace these variables with numbers, because there are complications depending on the video color format, like YUV 4:2:0.

Hopefully you see that gt and lt are used above as complementary 0 and 1 functions to zero-out one half of each video frame and insert pixel values using the appropriate p(x,y) function. Now let's try it for real on an interlaced 3d file although you can still use an ordinary 2d file at this point.

*** testing interlaced 3d to L/R-R-mirrored
mplayer d:\video.avi -nosound -vf ilpack,fil=d,scale,geq=p(X\,Y)*gt(W/2\,X)+p(1.5*W-1-X\,Y)*lt(W/2\,X)

The only thing that has changed from the above equations is the second p(x,y):
p(W-1-X\,Y) became p(1.5*W-1-X\,Y), since we have mirrored the full width of the video instead of only the left half. When making your own equations, this is the tricky part: just test your ideas with mplayer and tweak from there. Remember that X,Y starts at 0,0 and ends at W-1,H-1, so for the first half of each video frame you want pixels from 0 to (W/2-1) and not from 0 to W/2, because that would be 1 pixel over half. In other words, for an 800-pixel-wide image, you want pixels 0-399, and W/2 is 400. Beyond that, you may have noticed that I don't use an eq function; in my opinion it's not really needed. If you disagree, experiment and see. Actually, I started out using "lt" on both sides but settled on this equation because it gave the best results. If you try different equations, you will see what I mean: a vertical stripe shows up on the edges or in the middle of the frame, and it has something to do with the color subsampling of some video formats.
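To see the pixel-range logic in isolation, here's a tiny shell model of the half-mirror mapping (the function name is mine; it's pure arithmetic, not mplayer): pixels in the left half keep their own X, and pixels in the right half read from W-1-X.

```shell
# Source column for output column X in a W-wide half-mirrored frame:
# left half (X < W/2) is unchanged, right half mirrors the left.
mirror_src() {
  X=$1; W=$2
  if [ "$X" -lt $(( W / 2 )) ]; then
    echo "$X"
  else
    echo $(( W - 1 - X ))
  fi
}

# For an 800-wide frame: column 399 keeps itself, column 400 mirrors it.
mirror_src 399 800   # prints 399
mirror_src 400 800   # prints 399
```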

testing with mencoder and -endpos:
If the mplayer test looks good to you, then do a real conversion using mencoder. It should be included with your mplayer installation; if not, look for an installation that does include it. I suggest a 10-second test (or possibly 30 or 60 seconds) to get an idea of how long the entire conversion would take. Use -endpos to set your new video's duration in seconds.

*** converting interlaced 3d to L/R-R-mirrored (10 sec test)
mencoder d:\video.avi -o d:\newvideo.avi -endpos 10 -oac copy -vf ilpack,fil=d,scale,geq=p(X\,Y)*gt(W/2\,X)+p(1.5*W-1-X\,Y)*lt(W/2\,X) -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

Then just do mplayer d:\newvideo.avi to play it. See the manpage for more details on using mencoder. Basically you should specify at least three things:

-oac (output audio codec)
-ovc (output video codec)
-o (output filename)

I like a vbitrate of 5000, but others may think it's overkill, especially if you use other options to get better quality. Also, I suggest not rescaling this half-height video during encoding, since interlaced 3d is half-height-per-eye source and you don't really gain by rescaling during the encode. Just rescale during playback and it should be fine.

Estimating conversion time:
During the encode, the output text should tell you how many fps are being converted. On my Athlon XP 1800 I get about 2.5 fps, so with a 30fps source that's about 12 times slower than realtime, meaning each hour of original video will take 12 hours to convert. Got it? Otherwise, convert 1 minute, time how long it takes, and multiply by the number of minutes in the original video.
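That estimate is just source fps divided by encode fps; kept in integer shell arithmetic it looks like this:

```shell
# Slowdown factor = source_fps / encode_fps.
# 2.5 fps is written as 25/10 to stay in integer math.
ratio=$(( 30 * 10 / 25 ))
echo "${ratio}x slower than realtime"   # prints 12x slower than realtime
```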

Size issues: using -ss and -endpos to break up large videos
OK, so if you're happy with that new video, then go ahead and convert the whole thing - but with vbitrate=5000 your new file is going to get big. I believe you get about 1.2 GB per half-hour of original video, and that wouldn't be a problem except that the .avi container format is limited to 2GB, so one option is to convert your video into parts. I suggest about 45 minutes per file and using -fixed-vo to play multiple files in series. For converting into parts, you can use -chapter #-# (1-1 for only chapter 1), or else use -ss 0 -endpos 2700 for the first 45 minutes, -ss 2700 -endpos 2700 for the next 45 minutes, -ss 5400 -endpos 2700, and so on. OK, most modern users will want one file, but I'll leave it to others to talk details about that. In general, other options include trying to shrink the files with lower bitrates and other formats and settings, but I mention this method because it will even work on Windows 98 - although we still need someone to solve the dual-monitor playback problem under Windows. As mentioned in a previous post, under Linux you can use "-xineramascreen -2 -fstype none -fs" in order to span two displays.
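If you'd rather not compute the offsets by hand, a small shell loop can print the -ss/-endpos pairs for you (it only echoes them - substitute your movie's total length in seconds and paste each pair into your mencoder command):

```shell
# Print -ss/-endpos pairs for 45-minute (2700 s) parts.
# total is the movie length in seconds (2h15m here as an example).
total=8100; part=2700
n=1; ss=0
while [ "$ss" -lt "$total" ]; do
  echo "part$n: -ss $ss -endpos $part"
  ss=$(( ss + part ))
  n=$(( n + 1 ))
done
# prints:
# part1: -ss 0 -endpos 2700
# part2: -ss 2700 -endpos 2700
# part3: -ss 5400 -endpos 2700
```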

Playing multiple files with the -fixed-vo option:
If you try to play multiple video files with one mplayer command without the -fixed-vo option, mplayer will close the current window and open a new window for each new file and it looks annoying. Just use -fixed-vo like this:

*** playing multiple files without restarting new windows/mplayers:
mplayer -fixed-vo d:\part1.avi d:\part2.avi d:\part3.avi
or of course: mplayer -fixed-vo d:\part*.avi


--- Other mirroring setups

Over/Under with Over-view mirrored vertically (flipped):
I assume that most people with half-mirror setups have either L/R with the right view mirrored or O/U with the top view flipped. The L/R case was given above so here's the O/U-O-flipped equation:

*** converting interlaced 3d to O/U-O-flipped (10 second test)
mencoder d:\video.avi -o d:\newvideo.avi -endpos 10 -oac copy
-vf ilpack,il=d,scale,geq=p(X\,H/2-1-Y)*gt(H/2\,Y)+p(X\,Y)*lt(H/2\,Y)
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

Note that you need to use H/2-1-Y instead of H/2-Y in the first part of the equation; otherwise you lose a line of resolution, and with only 240 lines per eye, you'd better keep every one you can.
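You can check the off-by-one yourself; this little shell sketch evaluates the two candidate expressions for a 480-line frame (240 lines per eye):

```shell
# For the top half, Y runs 0..H/2-1 and should map onto rows H/2-1..0.
H=480
awk -v H=$H 'BEGIN {
    printf "H/2-1-Y: Y=0 -> %d, Y=239 -> %d\n", H/2-1-0, H/2-1-239   # 239 and 0: all 240 lines kept
    printf "H/2-Y:   Y=0 -> %d\n", H/2-0                             # 240: past the half, one line lost
}'
```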

Ask for help if you need something different for your setup and can't seem to get it yourself. OK, here's one more in case you have O/U monitors that are set up in software as L/R:

*** converting interlaced 3d to L/R-R-flipped (10 sec test)
mencoder d:\video.avi -o d:\newvideo.avi -endpos 10 -oac copy
-vf ilpack,fil=d,scale,geq=p(X\,Y)*gt(W/2\,X)+p(X\,H-1-Y)*lt(W/2\,X)
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

Next: 3d-checkerboard format:

_________________
System specs:
OS: 32-bit WinXP Home SP3
CPU: 3.2GHz Athlon 64 X2 6400
RAM: 800MHz 4GB dual channel mode
Video: geForce 8800GTS PCI-e, 640MB ram, driver 196.21


Last edited by iondrive on Mon Nov 02, 2009 5:20 am, edited 1 time in total.



Mon Nov 02, 2009 4:53 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Alright, so here we are again tackling the 3d-checkerboard (CB) format, but this time we've got it. It's just even slower than the half-mirrored modes, and it should be 60fps at either 1280x720 or 1920x1080, left-eye-view first (top-left corner pixel). Most formats seem to be left-eye first, by the way. The 60fps describes the video going to the TV, which then creates two frames from every one, so the TV outputs 120fps to your face, 60fps per eye. Anyway, here it is:

*** interlaced to LR/RL to 1920x1080 60fps checkerboard (10 second test)
mencoder d:\interlaced.avi -o d:\checkerboard.avi -ofps 60 -endpos 10 -oac copy
-vf ilpack,il=d,scale,rotate=1,ilpack,il=d,scale,rotate=2,
geq=p(X\,Y)*gt(W/2\,X)+p(X\,Y+H/2)*lt(W/2\,X)*gt(H/2\,Y)+p(X\,Y-H/2)*lt(W/2\,X)*lt(H/2\,Y),
rotate=1,ilpack,il=i,scale=1080:1920:1,rotate=2,ilpack,il=i,scale=::1
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

-ofps defines the output fps. Don't forget the "o" in -ofps or else you will be redefining the input fps which is normally not needed.

OK, that command is big and hairy so let's just look at the parts of -vf.

1) ilpack,il=d,scale,rotate=1,ilpack,il=d,scale,rotate=2
This is the deinterlace/rotate/deinterlace/unrotate routine that gets you from interlaced to a 2x2 format of LL/RR.

2) geq=p(X\,Y)*gt(W/2\,X)+p(X\,Y+H/2)*lt(W/2\,X)*gt(H/2\,Y)+p(X\,Y-H/2)*lt(W/2\,X)*lt(H/2\,Y)
changes LL/RR to LR/RL by swapping Quadrant 1 with Quadrant 4.
2a) p(X\,Y)*gt(W/2\,X) --- this gets you the unchanged left half of L/R
2b) p(X\,Y+H/2)*lt(W/2\,X)*gt(H/2\,Y) --- puts Q4 image data into Q1
2c) p(X\,Y-H/2)*lt(W/2\,X)*lt(H/2\,Y) --- puts Q1 image data into Q4

3) rotate=1,ilpack,il=i,scale=1080:1920:1,rotate=2,ilpack,il=i,scale=::1
Converts from LR/RL to CB format with a rescale via a rotate/interlace/scale/unrotate/interlace routine. Note that since the image is sideways during the scale, I use 1080:1920 instead of 1920:1080, so don't think it's a mistake. If you want 1280x720, then use 720:1280. Also, don't put a normal scale right after the geq, since it gave me bad results for some reason. And don't forget the :1 after the scale; it preserves interlacing integrity and you need it.


Or you could do the process in two steps, which is what I prefer:

2x2, LR/RL: an intermediary format
I recommend doing this conversion in two steps: one to get to a 2x2 format of LR/RL, and a second to rescale and get to your CB format.

*** interlaced to 60fps LR/RL
mencoder d:\interlaced.avi -o d:\LR-RL-60.avi -ofps 60 -endpos 10 -oac copy -vf ilpack,il=d,scale,rotate=1,ilpack,il=d,scale,rotate=2,
geq=p(X\,Y)*gt(W/2\,X)+p(X\,Y+H/2)*lt(W/2\,X)*gt(H/2\,Y)+p(X\,Y-H/2)*lt(W/2\,X)*lt(H/2\,Y)
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

OK, so that gets you to LR/RL at 60fps and original resolution. You can check your file and see how it looks, then proceed with the next step. Note that I convert to 60fps now instead of later just because I'm more confident there will be no ghosting from interpolated frames, in case the encoder does that. That said, it's probably OK to go to 60fps during the next step instead if you want, since the encoder probably won't interpolate new frames and will just duplicate whole frames to achieve the new framerate.

*** LR/RL to checkerboard with rescale to 1920x1080
mencoder d:\LR-RL-60.avi -o d:\CB.avi -ofps 60 -oac copy
-vf rotate=1,ilpack,il=i,scale=1080:1920:1,rotate=2,ilpack,il=i,scale=::1
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=5000

This way, if there are problems with the final video, you might be able to spot where they are more easily. Like before, use the scale in the place it's shown or you may have problems: ghosting if you scale later, and an odd colored stripe on a screen edge if you scale before.

Playback:
From what I've read, you need your desktop in the same res as the 3d input your tv takes, then play your video in fullscreen and enable 3d mode on your tv and it should work. One possible hitch is if your video driver decides to smooth out the image for you and cause massive ghosting so in that case use -vo directx:noaccel or, for linux users, -vo xv:noaccel or -vo x11. I may have to ask someone to test this for me if I give them a CB file I've created. It should play in other players too.

Final comments:
I've heard AviSynth is good for converting to CB, so it's probably much faster; maybe I'll learn it someday, but maybe not. Also, converting to jpegs and using netpbm tools would be faster too, but that technique has some potential A/V sync loss problems. As it stands, I kind of like this method since it's pretty simple once you have the equations and the time to do an overnight conversion. Still, I might post about how to do this with netpbm tools, but I'll leave that for next year. I think that's it for now.


Whew:
Am I done yet? This wasn't supposed to be so long, and I still have to talk about frame-sequential. I have been working on it, but it takes a lot of time to test different systems. It looks like OS/card/driver matter, but I've had great success on my XP system with an FX-5200 card. I just want to do some more testing on another system before I post.

see ya,

--- iondrive ---



Mon Nov 02, 2009 5:07 am
Petrif-Eyed

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
Hi guys, nice stuff, now I can convert all my 3D movies to field sequential format. That's the only format I can play under Linux with my Simuleyes VR glasses. Thanks!

I've made some modifications to MPlayer to display the White Line Code needed to activate my glasses. All the modifications are made in the libvo/vo_x11.c file; here they are:

After static int int_pause; write:
Code:
static int vo_x11_stereo = 0;

In the flip_page function, write this code between the two existing lines:
Code:
XSetForeground(mDisplay, vo_gc, BlackPixel(mDisplay, mScreen));
XFillRectangle(mDisplay, vo_window, vo_gc, 0, vo_dheight - 2, vo_dwidth, 2);
XSetForeground(mDisplay, vo_gc, WhitePixel(mDisplay, mScreen));
if (vo_x11_stereo)
{
    XFillRectangle(mDisplay, vo_window, vo_gc, 0, vo_dheight - 2, (vo_dwidth * 3) / 4, 1);
    XFillRectangle(mDisplay, vo_window, vo_gc, 0, vo_dheight - 1, (vo_dwidth * 1) / 4, 1);
}
else
{
    XFillRectangle(mDisplay, vo_window, vo_gc, 0, vo_dheight - 2, (vo_dwidth * 1) / 4, 1);
    XFillRectangle(mDisplay, vo_window, vo_gc, 0, vo_dheight - 1, (vo_dwidth * 3) / 4, 1);
}

After case VOCTRL_PAUSE: write:
Code:
vo_x11_stereo = !vo_x11_stereo;

Notes:
    - you can replace WhitePixel(mDisplay, mScreen) with 0x0000FF if you want Blue Line Code instead of White Line Code;
    - if the stereo isn't right, just pause with P or the spacebar to invert the effect.

I'm still trying to handle page flipping with White Line Code but without success for now...


Thu Feb 18, 2010 9:26 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Fredz, your new nickname is awesome-Fredz :)

Great stuff. I'm very glad you anticipated my request for BLC encoding. :D
Thanks.

(I had to rewrite these next few posts since they got too crazy. Anywayz...)

Now for more mencoder fun... s3d videos for YouTube

I'm still a YouTube newbie, but I've learned some things in my first week and made some s3d videos that you can see. My YouTube name is iondrive3d since iondrive was already taken. For CRT users with line-blanking shutterglass mode, I suggest a screen res of 1024x768 and hitting the double-arrow on the bottom-right of your YouTube video window to make it bigger (480p). That size seems good. Don't choose fullscreen viewing mode, because that doesn't work right yet for interlaced users. Anyway, you can see my first video and slideshow here:

Title: Marvel Ultimate Alliance in s3d
http://www.youtube.com/watch?v=q97Qwd5S38g

MUA s3d slideshow, Mandarin level
http://www.youtube.com/watch?v=VV9wbCxM800

UPDATE!!! YouTube has changed some of the tags you can use, so the info below may be out of date. See these threads for info on tags as of early 2011:
viewtopic.php?f=111&t=12846&p=57374&hilit=youtube+tags#p57374
viewtopic.php?f=3&t=12840&p=57596&hilit=youtube+tags#p57596
http://www.google.com/support/youtube/b ... wer=157640

Basically, from the YouTube page:
Quote:
If your video format is:

Side by side "half" or "squashed":
Add the tag yt3d:enable=LR to your video
If the video displays incorrectly or has a poor 3D effect, try changing it to yt3d:enable=RL
Top-bottom "half" or "squashed":
Add the tag yt3d:enable=LonR to your video
If the video displays incorrectly or has a poor 3D effect, try changing it to yt3d:enable=RonL


I will update the following posts now (April, 2011).

YouTube 3d Video Basics:
Firstly, some basics for posting your videos on YouTube. What you need to do is get your video into O/U or R/L format; then you can post it on YouTube using special YouTube-3d tags that tell your browser to display the video in some user-selected format. Viewing format choices are currently various anaglyph modes, interlaced, side-by-side LR or RL for free-viewing, some mirrored modes, and I've heard that iZ3D monitor owners can view in fullscreen. I will be using the 1280x720 standard HD res since 1920x1080 is too big: the maximum filesize YouTube allows is 2GB, and that's also the max filesize for .avi files. YouTube is going to re-encode your video on their end, so don't expect the quality of your upload to be the same as the quality of the final video you see on your YouTube page. Yeah, it can be disappointing, but I think they do it to try and save some bandwidth. Later I'll tell you how to use mplayer/mencoder to make 3d video slideshows from your screenshots, and then from 3d photos in separate files (L,R photo-pairs). It's pretty cool.

R/L or O/U?
YouTube is currently set up to assume that your upload is in R/L format, but you can use others, including mirrored formats. Anyway, I've wondered which is better to upload, R/L or O/U, so I've done some tests using 800x1200 O/U (800x600x2) source and packing the 3d video into the standard HD res of 1280x720. You can see the results of these uploads on YouTube and decide for yourself which is better, or if they're equivalent. Looking closely at the text in the video, I think that R/L is better unless you're viewing in interlaced mode, in which case I think O/U is slightly better. I'm choosing to upload in R/L unless I record videos in single-screen interlaced, in which case I'll use O/U, since I can only record half vertical res per eye when using interlaced mode.

I think the key is how much your video is compressed. In the case of sticking 800x600 views into 1280x360 res for the O/U format, vertical res loss is 40%. In the case of sticking 800x600 views into 640x720 res for the R/L format, horizontal res loss is 20%, so it's better mathematically anyway. Stretching a dimension doesn't really make you lose image data like compression does, so I'm ignoring that for the sake of simplicity. It's a judgment call anyway. If your target audience is going to be using interlaced, you should probably use the O/U format. Also, results may be different if your original source differs from my 800x600x2; for example, if you're recording in widescreen format, O/U might be better there too, although you should probably do your own tests. I suggest 10 second tests with text in them.

Anyway, here are some links to my tests so you don't have to do your own (for 800x600x2 source). You can open them in two separate tabs and scroll them up/down so that the two video windows are in the exact same physical location on your display. Pause both in the beginning and go back and forth between them to spot the differences more easily.
You can do this in monoscopic modes and then later in interlaced modes if that applies to you.
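The res-loss percentages above come out of simple ratios; here's the arithmetic for the 800x600-per-eye example, spelled out in shell:

```shell
# Each 800x600 view packed into half of a 1280x720 frame:
# O/U: 600 lines squeezed into 360; R/L: 800 columns squeezed into 640.
awk 'BEGIN {
    printf "O/U vertical loss:   %.0f%%\n", (1 - 360/600) * 100   # 40%
    printf "R/L horizontal loss: %.0f%%\n", (1 - 640/800) * 100   # 20%
}'
```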

MUA Over/Under 3d format quality test
http://www.youtube.com/watch?v=sKrGGXxHyr8

MUA Left/Right 3d format quality test
http://www.youtube.com/watch?v=4i7TUm_OEK4


Uploading to YouTube:
If you're a complete newcomer, just go to the YouTube site, create an account if you don't already have one, hit the upload button, and browse to your s3d video to select it. Then fill out some info for your video, save settings, and let it finish uploading. Rename the title to something better than your filename and include appropriate tags, plus some key YouTube3D tags (yt3d). There are 2 or 4 tags that we might want to use:

For LR side-by-side 4:3 aspect videos:
OLD: yt3d:enable=true yt3d:aspect=4:3 yt3d:left=0_0_0.5_1 yt3d:right=0.5_0_1_1
NEW: yt3d:enable=LR
Note: make your video LR so that each image is 50% of the width of its original full-framed size. If the original source videos were 4:3 for each view, then your side-by-side video should have an overall aspect of 4:3 and you should not need to specify aspect. RL format is discouraged, but you could use yt3d:enable=RL.

For L-on-R above/below videos with left-view on top and 16:9 aspect videos:
OLD: yt3d:enable=true yt3d:aspect=16:9 yt3d:left=0_0_1_0.5 yt3d:right=0_0.5_1_1
NEW: yt3d:enable=LonR
Note: make your video L-on-R so that each image is 50% of the height of its original full-framed size. The stacked video should then have an overall aspect of 16:9 and you should not need to specify aspect.

For R-on-L, use the same as above but use yt3d:enable=RonL

aspect: (out of date info)
Note that this uses a ":" instead of a "/", and you can experiment with different numbers. I've noticed that some people needed to use 8:9 in order for their videos to look right. Just stretch/squeeze your videos to completely fill the 1280x720 frame and I don't think you'll need to use odd ratios, just 4:3 or 16:9. Also, don't accidentally invert them into 3:4 or 9:16. If you have trouble remembering 16:9 but can remember 4:3, just square both 4 and 3 and you'll get 16:9. That's how I originally remembered it.

right and left: (out of date info)
The numbers following right and left describe rectangles inside our video images. The top left is (0,0) and the bottom right is (1,1) so...

0_0 are x,y coordinates for the top left of the video,
1_0.5 are x,y coordinates for the middle of the far right edge,
0_0.5 are x,y coordinates for the middle of the far left edge, and
1_1 are x,y coordinates for the bottom right corner of the video.

and so 0_0_1_0.5 describes the top half of your video and 0_0.5_1_1 describes the bottom half. "left" and "right" define which eye that rectangle is intended for. Adjust as needed.

Mirrored formats:
Note that these rectangles are defined from the upper left corner to the lower right corner. If you define the rectangle in the opposite order from lower right to upper left, then your image for that eye will be upside-down. If you define it from top right to bottom left, that view will be mirrored horizontally and finally, if you define it from bottom left to top right, that view will be mirrored vertically. I don't think you'll need that info but you might if your screenshots have one view mirrored since mencoder has trouble mirroring one half of a video. This is a good way around it.

next, slideshow generation with mplayer/mencoder.

--- iondrive ---


Last edited by iondrive on Tue Apr 26, 2011 3:35 am, edited 8 times in total.



Mon May 31, 2010 2:25 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
making 3D video-slideshows for YouTube: general info

YouTube tag info has changed since I first wrote this. I have updated the following as of April, 2011.

Introduction:

OK, so you've got some 3d screenshots you want to share with the world, and you're wondering if you can use mencoder to make a video slideshow out of them and post it on YouTube. Well, stop wondering. You can. I could talk about anaglyph 3d slideshow-making, but if I just explain how to do this, you'll be able to figure out anaglyph slideshows on your own. Anaglyph screenshots are generally jpg files and you can just treat them as 2d image files, which means you can make 2d image slideshows the same way. The only things to watch out for are making sure that gamma/contrast/brightness/saturation are how you want them and making sure that your images all have the same aspect ratio, like 4:3 or 16:9; the same goes for 3d screenshots. For now, let's assume all your screenshots have the same aspect but not necessarily the same resolution. Read about "eq2" in the documentation if you want to tweak brightness/color, and use "dsize" or "-aspect" for aspect ratio control.


File selection and management issues: to list.txt or not to list.txt

So it turns out that mencoder can make videos from individual image files if they are jpeg, png, tga, or sgi. So if you wanted to, you could put together an entire movie from individual frames, but besides that, you could make a nice little video slideshow from your screenshots too. Just choose which images you want to include in your slideshow and drag copies of them into your mplayer folder. Alternatively, put your copies in a temp folder by themselves and put that folder inside your mplayer folder as a new subdirectory; then use ..\mencoder instead of mencoder to make your new avi file. We will be using an option that looks something like "mf://*.jps". You should also be able to use "mf://path/*.jps", but that was buggy for me on winXP, so I recommend you avoid it and use one of the other two ideas just mentioned (files in the mplayer dir, or a subdir with the ..\mencoder command). One more option that does work is to use a text file in one-file-per-line format, like this: "mf://@list.txt". As an example, list.txt can look like this:

Code:

xblades031.jpg
xblades032.jpg

xblades033.jpg
xblades034.jpg


C:/NVSTEREO.IMG\xblades041.jpg
C:\NVSTEREO.IMG/xblades042.jpg
../xblades027.jpg



This works and shows that blank lines are ignored, even at the beginning or end. It also shows that relative and absolute paths work correctly and that you can use "/" or "\" interchangeably; they both work the same. Nice. You can generate a bare list.txt file in the image folder with the command "dir *.jpg /b >list.txt". If you want paths included in the filenames, then add "/s" like this: "dir *.jpg /b /s >list.txt". Linux users don't need help with something similar. :)


Putting your pics in the right order:

The real advantage of a list like this is that it's easy to get your slideshow pics into the order you want. Just cut and paste the lines where they belong. What would be even easier is if another viewing program like sView let you organize a kind of playlist graphically and then output a text file with all the filenames in the right order. sView currently can't do this; it does have a slideshow function, but it just goes through the files in a chosen folder. If you know of a 2d slideshow program that can do this, please post about it. Otherwise, you need to change filenames so that they are alphanumerically in the order you want when you use *.jps or *.whatever. I prefer to copy all my desired screenshots into the mplayer directory and change the names. It's pretty simple, although you may still need an image browser like sView to show you the actual image content so you know what they are.


Separate sections based on source images:

At this point, I'll split this into separate sections since it gets too big otherwise. The parts are based on the source images from which mencoder will make the 3d video slideshow:

Section 1: R/L jps screenshots from iZ3D or nVidia
Section 2: LO/UR jpg screenshots from TriDef
Section 3: real-life left and right photos in separate files
Section 4: interlaced png screenshots from nVidia or other

UPDATE: choosing 3d output display mode for YouTube videos
YouTube used to have a different menu system for choosing which type of 3d display you had and I think it was better before. Anyway, now it uses cookies and you need to set that from a special YouTube page whereas you used to be able to select any output display mode right from the page you're watching. Now to set your display mode, you have to go to a special YouTube page: http://www.youtube.com/select_3d_mode and click your choice there. Then your browser is supposed to remember the setting for all future 3d YouTube videos because of a cookie left on your computer. If your browser is not set to accept and remember that, then you will need to reset your selected 3d output mode when you next start your browser. For Firefox, you should look under Tools/Options/Privacy/History and change the text box to "Use custom settings for history", check the box "accept cookies from sites" and then click "Exceptions..." Enter http://www.youtube.com in the text box, click "Allow" and "Close" and you should be good to go. I suggest you bookmark/favorite the select_3d_mode web page if you sometimes use more than one mode like me :) Thanks to cybereality for the point in the right direction.


Last edited by iondrive on Tue Apr 26, 2011 3:39 am, edited 10 times in total.



Mon May 31, 2010 2:33 am
Sharp Eyed Eagle!

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
making s3D video-slideshows for YouTube: specific examples

Section 1, using iZ3D or nVidia jps screenshots


iZ3D or nVidia R/L jps screenshot source files:

These files are standardized in the sense that they are full-framed (non-squeezed) side-by-side with the right view first (on the left, i.e. cross-eyed). All images should have equal aspect ratios. Adjust the YouTube aspect tag as needed.


*** iZ3D/nVidia RL jps 16:9 aspect 3d screenshots to YouTube LR 3d video slideshow:

mencoder "mf://*.jps" -mf type=jpg -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=1280:720,ilpack,fil=i,il=s,fil=d,scale,dsize=16/9

YouTube tags: yt3d:enable=LR


Detailed Explanation:

"mf://*.jps" -mf type=jpg:
I think "mf" stands for "multiple-file", but that's just a guess. Anyway, regarding "-mf type=jpg": you don't need this if your files have extensions of .jpg, .png, .tga, or .sgi, but since we're using .jps, we have to specifically tell mencoder to treat these jps files as jpg files, or else it won't recognize them and the program will error out. We got kind of lucky in this regard: lucky that jps is a common screenshot format, and lucky that mencoder has a "type" option to help recognize it. The documentation says to use "type=jpeg", but I tried "type=jpg" instead and it works, so that's what I decided to use to keep things easier for you. Both "jpg" and "jpeg" work the same.

-o temp.avi:
That's your output file, but you already knew that, right? mencoder will overwrite it without asking if you run the command again, so make sure to change the name to something meaningful once you're happy with it. The input "video" in this case is just a group of image files being treated as if it were a video file. OK, sorry to over-explain things, but I want this to be clear even to newcomers.

-nosound:
This makes a video with no sound. You can use mencoder to add sound later if you want: maybe music for your slideshow, or your voice explaining and commenting on what's going on. Maybe I'll talk about that someday, but it's very low on my to-do list; maybe someone else will cover it for me. That would be great. YouTube complains a little when you upload a video without sound, saying that it doesn't recognize the audio format, but it still plays fine without audio, so don't worry about it. I guess another option would be to include an audio stream that is just silence. I think a silent slideshow can be fine.

-ovc x264 -x264encopts crf=20:
This refers to the h264 video compression codec, and I've heard it's good, so I decided to try it. It seems fine, but I've done no hard comparisons. The "crf" number determines image quality and can range from 1 to 50, with 50 being the worst quality and the smallest file size. I did some tests and I'm happy with crf=30 for videos, but if you have images with a lot of blue sky in them, they can look blocky, so that's why I bumped the quality up to crf=20. You can do your own tests if you want smaller filesizes at the expense of quality. Here are some links to my tests so you can judge for yourself:

MUA Left/Right 3d format quality test (same video as before, h264 crf=20)
http://www.youtube.com/watch?v=4i7TUm_OEK4
Uploaded Filesize: 7 MB

MUA Left/Right h264 crf=30 test
http://www.youtube.com/watch?v=KhLG2AgcsbE
Uploaded Filesize: 3 MB

-fps 0.25:
Since individual image files have no framerate and I want 4 seconds per image, I set the input fps to 1/4 frame per second (-fps 0.25). So decide how many seconds per image you want, invert that number, and put the decimal value after -fps. The manpage says to use "-mf fps=30" or whatever, but you can forget that and use the more generic "-fps 30", since it overrides "-mf fps=30". This defines the input fps. The framerate gets saved in the file, so when you play temp.avi, it will play at that rate automatically. In this case the video file consists of only 10 frames, so you can't get much more compact than that. The video will last 40 seconds since it's 4 seconds per frame. You can see this in the text output after you run the command; it says a bunch of stuff including "40.000 secs 10 frames" or whatever is appropriate for your case.
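The seconds-per-image inversion is trivial, but here it is as a one-liner so you can plug in your own value (SEC_PER_IMG is just a placeholder name):

```shell
# Convert a desired display time per image into the -fps value.
SEC_PER_IMG=4
awk -v s=$SEC_PER_IMG 'BEGIN { printf "use -fps %g\n", 1/s }'
# use -fps 0.25
```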

-ofps 30: This fixes 2 quirks...
OK, so "-fps 0.25" describes the fps of the input video, but what about the output framerate? That's what -ofps defines. If you left it out, it would default to the input fps of 0.25, but that would give you some quirks during playback. One quirk is that the mplayer window would be very unresponsive; it doesn't seem to notice when you try to pause it or drag the window. This is probably because input is processed between frames, and since each frame plays for 4 seconds, you only have a split second for your input to get noticed. The other quirk is that the last image is only on screen for a split second; you could fix that by adding an extra image file at the end of your list, but -ofps fixes it too, so you don't have to. I think 30 fps is a good rate for uploading to YouTube. Of course, change it to whatever you want.

-ofps 30: filesize concerns...
It may sound like a file at 30 fps has a lot more frames than a file at only 0.25 fps, and it does, but wait. The duration of your output video is determined by two properties of your input video: the number of input frames and the input fps defined by -fps. In this case the duration is still 40 seconds (10 frames at 0.25 fps, or 4 sec/frame), but the output will be 1200 frames (still 40 seconds, but at 30 fps). I've done some tests, and the result is that "-ofps 30" makes my temp.avi only 4% bigger. Yes, that's right. The two filesizes are very close because the encoder recognizes that the frames are exactly the same and doesn't really re-encode them; instead it just makes a note that says "repeat previous frame". That's how a video file with 120 times more frames than another is only 4% bigger when it "should" be 120 times bigger. Neat, huh?
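The duration and frame-count bookkeeping from the last two paragraphs, as a quick shell check:

```shell
# duration = input frames / input fps; output frames = duration * output fps
FRAMES=10; IN_FPS=0.25; OUT_FPS=30
awk -v n=$FRAMES -v i=$IN_FPS -v o=$OUT_FPS \
    'BEGIN { d = n / i; printf "duration: %g s, output frames: %g\n", d, d * o }'
# duration: 40 s, output frames: 1200
```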

scale=1280:720
This is a good size for s3d videos uploaded to YouTube, so that's why I'm using this resolution. If you want a different res for some other reason, just change the numbers as desired. As is common in image/video processing, there are trade-offs between filesize and image quality, and I find this res good enough for YouTube. Also, I only have a "slow" DSL modem and would like to keep the filesize down, so that's why I'm not using higher resolutions. Keep the aspect of the new video equal to the aspect of one of the unsqueezed image views.

dsize=16/9:
In the past we've used dsize="x-res:y-res", but I've learned that dsize can also take a ratio, and then when you play the video, that's the ratio you will see. It's a nice feature, but in this case I've decided to use 16/9, which is the ratio of 1280:720 with square pixels. I've chosen this because I like to see the image data as it is, squashed or squeezed, so that I can get some idea of where image quality is lost. If you want to see normal proportions when you play your temp.avi, you can use 8/3 for 4:3 aspect screenshots and 32/9 for 16:9 widescreen images. You see, I've doubled the first number, which represents width, since the frames here are two images side by side. For an O/U stacked format you would instead double the bottom number, giving 4/6 (i.e. 2/3) or 16/18 (i.e. 8/9). Note that this is a "/" and not a ":" when dsize is used in this way.


RL to R-on-L: you probably don't want these.

Going from RL to R-on-L can be easy or difficult depending on your situation. I'll leave out the complicated bits and just say, if your screenshots all have the same res, you should be fine with these.


*** iZ3D/nVidia RL jps 3d 16:9 aspect screenshots to YouTube R-on-L 3d video slideshow:

mencoder "mf://*.jps" -mf type=jpg -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf ilpack,fil=i,il=d,scale=1280:720,dsize=16/9

OLD: YouTube tags: yt3d:enable=true yt3d:aspect=16:9 yt3d:right=0_0_1_0.5 yt3d:left=0_0.5_1_1
NEW: YouTube tags: yt3d:enable=RonL


and finally,

*** iZ3D/nVidia RL jps 3d 16:9 aspect screenshots to YouTube L-on-R 3d video slideshow:

mencoder "mf://*.jps" -mf type=jpg -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf ilpack,fil=i,il=s,il=d,scale=1280:720,dsize=16/9

OLD: YouTube tags: yt3d:enable=true yt3d:aspect=16:9 yt3d:left=0_0_1_0.5 yt3d:right=0_0.5_1_1
NEW: YouTube tag: yt3d:enable=LonR


Last edited by iondrive on Tue Apr 26, 2011 3:55 am, edited 3 times in total.



Mon May 31, 2010 9:28 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
(To followers of this thread, I had to do some rewriting, take a second look at the last 3 posts)

Making s3D video-slideshows for YouTube: specific examples

Section 2, using TriDef O/U jpg screenshots


TriDef jpg L-on-R screenshot source files:

These files are standardized in the sense that they are full-framed (non-squashed) Over/Under with left-view first (on top). Otherwise things here are much the same as above so refer to Section 1 for details if needed. As usual, all images should have equal aspect ratios.


*** TriDef L-on-R 3d 16:9 aspect screenshots to L-on-R YouTube 3d video slideshow:

mencoder "mf://*.jpg" -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=1280:720,dsize=16/9

YouTube tags:
OLD: yt3d:enable=true yt3d:aspect=4:3 yt3d:left=0_0_1_0.5 yt3d:right=0_0.5_1_1
NEW: yt3d:enable=LonR

If you want the right-eye-view on top, just swap the top and bottom with ilpack,il=i,il=s,il=d,scale like this:

*** TriDef L-on-R 16:9 aspect screenshots to video slideshow for YouTube in R-on-L format

mencoder "mf://*.jpg" -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=1280:720,ilpack,il=i,il=s,il=d,scale,dsize=16/9

YouTube tags:
OLD: yt3d:enable=true yt3d:aspect=4:3 yt3d:right=0_0_1_0.5 yt3d:left=0_0.5_1_1
NEW: yt3d:enable=RonL

And finally L-on-R to LR:

*** 3d TriDef L-on-R 16:9 aspect screenshots to video slideshow for YouTube in LR format
mencoder "mf://*.jpg" -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=640:1440,ilpack,il=i,fil=d,scale=1280:720,dsize=16/9

YouTube tags:
OLD: yt3d:enable=true yt3d:aspect=4:3
NEW: yt3d:enable=LR


The first "scale" here is 640x1440 because "fil=d" is going to double the width and halve the height so the final res will be 1280x720 just like we want it. Note "fil=d" is not a rescaling. It is a deinterlacing so it takes all odd lines and puts them on the left and takes all even lines and puts them on the right. Line numbering starts with "1".
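If the odd/even bookkeeping is confusing, here's a rough text analogy of what "fil=d" does, treating each line of a text file as a scanline. This is just an illustration with awk and paste, not real video processing, and the L1/R1 names are made up:

```shell
# Toy "interlaced frame": odd lines belong to the left view, even lines
# to the right view, counting from 1 just like fil does.
printf 'L1\nR1\nL2\nR2\nL3\nR3\n' > frame.txt
awk 'NR % 2 == 1' frame.txt > left.txt    # odd scanlines -> left image
awk 'NR % 2 == 0' frame.txt > right.txt   # even scanlines -> right image
paste left.txt right.txt                  # "side by side": width doubles, height halves
```

Just like fil=d, the result has half the height and double the width of the original "frame".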


Last edited by iondrive on Tue Apr 26, 2011 4:04 am, edited 1 time in total.



Wed Jun 02, 2010 4:02 am
Profile
One Eyed Hopeful

Joined: Fri Jul 09, 2010 4:47 pm
Posts: 1
Wow, these are some great mplayer recipes, thank you. I'm trying to play a side-by-side 3d mpg as magenta/green anaglyph. With this command I can get the two sides overlaid, but I'm not sure how to adjust the colors for the anaglyph. Do you know?

mplayer germany-spain-futbol-3d.mpg -autosync 30 -mc 2 -nokeepaspect -vf ilpack,fil=i,scale=1920:1080:1,pp=lb


Fri Jul 09, 2010 4:52 pm
Profile
One Eyed Hopeful

Joined: Tue Nov 30, 2010 12:10 am
Posts: 3
Hi,

I am new to all this but what I was wondering is if it was possible to take two video inputs (Left and Right) and output them as Side-by-Side and Over-Under? This would really help me.

Thanks,

Josh


Tue Nov 30, 2010 12:14 am
Profile
Petrif-Eyed
User avatar

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
Unfortunately MPlayer is not able to open two files at the same time for now, you'd have to use something else.


Tue Nov 30, 2010 8:09 am
Profile WWW
Cross Eyed!

Joined: Sun Feb 15, 2009 12:50 pm
Posts: 131
jduncanator:

try bino:
http://nongnu.org/bino/

I had a play with it the other day - it looks quite promising and can do what you are looking for. You will need to build ffmpeg and libswscale from SVN first, as no distro I know of has a recent enough version.

Regards,

mickeyjaw


Tue Nov 30, 2010 10:36 am
Profile
One Eyed Hopeful

Joined: Tue Nov 30, 2010 12:10 am
Posts: 3
mickeyjaw wrote:
jduncanator:

try bino:
http://nongnu.org/bino/

I had a play with it the other day - it looks quite promising and can do what you are looking for. You will need to build ffmpeg and libswscale from SVN first, as no distro i know of has a recent enough version.

Regards,

mickeyjaw


Hi mickey,

Thanks! One question: is this software good enough to encode Full HD videos? The reason I'm asking is that I work with very high resolution PNGs (ones rendered directly from Sintel's render farm, one for each eye of course) and was wondering if it's near-lossless? I was also wondering how to go about encoding YouTube videos in 3D from scratch, as I'm completely new to this and had NO IDEA what you were talking about :O

Thanks,

JD


Tue Nov 30, 2010 11:43 pm
Profile
Petrif-Eyed
User avatar

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
You should be able to encode full HD videos in side-by-side or above-below format with mencoder using your PNGs, you'll just need to create side-by-side versions of them before. You can automate that by using ImageMagick in a batch with the append command. It would be very nice to see Sintel in 3D. :)
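To make that concrete, here's a hedged bash sketch that pairs up left/right PNGs by filename and writes the ImageMagick commands to a script for review before running. The left/, right/, and sbs/ folder names and the 0001.png numbering are just made up for the example; "convert +append" is ImageMagick's horizontal-append syntax (use -append for above-below):

```shell
# Dummy frames stand in for the real renders so the sketch runs anywhere.
mkdir -p left right sbs
touch left/0001.png left/0002.png right/0001.png right/0002.png
for f in left/*.png; do
    n=$(basename "$f")
    # +append joins the pair left-to-right into one side-by-side frame
    echo "convert left/$n right/$n +append sbs/$n"
done > make_sbs.sh
cat make_sbs.sh   # inspect, then run with: sh make_sbs.sh
```

Writing the commands to a file first lets you sanity-check the pairing before burning CPU time on thousands of frames.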


Wed Dec 01, 2010 5:51 am
Profile WWW
Petrif-Eyed
User avatar

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
Hum, thinking about it I guess you could also tell Blender to directly render side-by-side or above-and-below PNGs, that would be a more effective solution than converting PNGs afterward.


Wed Dec 01, 2010 5:56 am
Profile WWW
One Eyed Hopeful

Joined: Tue Nov 30, 2010 12:10 am
Posts: 3
Yes I agree, but I also don't. To save us the time of rendering it twice (side-by-side and single images), we were just going to render single images (one for left, one for right) and then we can do whatever we want with them! ImageMagick is a good idea, thanks ;)


Wed Dec 01, 2010 11:17 pm
Profile
Cross Eyed!

Joined: Sun Feb 15, 2009 12:50 pm
Posts: 131
Sorry, it looks like I misinterpreted your post - the software I referred to is a _player_, not an encoder.
I concur that ImageMagick is probably the way to go in your case, since you are already working with a set of still images.
Good to hear that Sintel is getting the stereoscopic treatment - I look forward to seeing the final result!

PS What viewing solution are you using for S3D?


Thu Dec 02, 2010 8:59 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
hi guys!!!

I'm still alive if any of you were wondering what happened to me. I just haven't been reading/posting for a very long time... distractions abound. I don't know if I will get back into the habit of posting more actively but I'll just do a little right now at least.

New stereo3d functions in mplayer source!!! YAY!
This happened late last year but I'm actually not that excited about it, since it doesn't really add formats that I couldn't do before, except for easy stereo-to-anaglyph conversion for those times when you want that. Since it's new code, you could have some trouble finding a compiled version of mplayer/mencoder that has it, but I found one for Windows at
http://oss.netfarm.it/mplayer-win32.php
and tried it and it worked. The file was for athlons: mplayer-athlon-svn-32848.7z

The code adds a video filter called stereo3d and you use it in your -vf segment as in
mplayer futbol-3d.mpg -vf stereo3d=sbsl:agmh,scale
to go from LR to green/magenta-half-color anaglyph, converting on-the-fly.
This works with mencoder too, so you can make 3d anaglyph videos if you want, but I would keep the originals and just convert on-the-fly as needed. My feelings about anaglyph are that it's not too repulsive, and it's a good option for printing out a screenshot on paper. Remember to use scale after stereo3d or else mplayer will crash.

Anyway, there's more info in the man-page at
http://www.mplayerhq.hu/DOCS/man/en/mplayer.1.txt
Once you are there, search for "stereo3d" and that should take you right to it.

NOTE: if you Google mplayer manpage, you can wind up at the TiVo mplayer manpage, which is missing a lot of the functions in the full source, so don't freak out if you don't see the options you're looking for. Just go to mplayerhq.hu instead.

Basically the usage is stereo3d=input-format:output-format, and the possible s3d formats for in/out are:
LR, RL, L/R, R/L, and squashed L/R, R/L.

Output also includes various anaglyph color-sets...
3 kinds of yellow/blue (gray, half-color, and color)
3 kinds of green/magenta (as above)
4 kinds of red/cyan (as above plus dubois)

as well as mono left or right view.

Anaglyph red/cyan dubois?...
I had to try that one and I think it did look the best of all the red/cyans but it might depend on your particular content. The manpage says it is "color optimized with the least squares projection of dubois". Oh, of course. I should have realized that myself. :)

What's missing from the stereo3d option: half-mirrored modes and checkerboard outputs.
I'm glad this is in the code, and there's a chance this function may do a better job at converting between formats, but as I said before, we could already do those conversions, so it doesn't add much except easy anaglyph conversion. Half-mirrored modes and checkerboard outputs would have been a more welcome addition to what we can already do. I will mostly not use this new function, since the old approach works with older mplayer versions too. Half-mirrored modes should be easy to add; maybe someone will contact the author of that subroutine and suggest it. Volunteers? I have other things on my mind. Also missing are frame-sequential, field-sequential, and vertical-interlaced outputs. Good thing we can do all those already.

Odd-colored glasses:
If your glasses are colored backwards for some reason, you should just be able to swap eye-views prior to the stereo3d function and it should then look right with those odd glasses. See above posts for examples using il=s for eye-view-swapping / parallax-inversion. The other thing missing is arbitrary user-color choices for glasses; a calibration test pattern would also be nice. OH, better yet, just lie to the program instead of swapping views: if you have cyan/red glasses instead of red/cyan, just pretend the format is opposite when you run the command (use RL when you really have LR, and use BA when you really have AB). Also, treat red/blue glasses as red/cyan. That should work well enough.

Onward...
Hmmmm, should I pick up where I left off or respond to posts I've missed out on? I will respond to posts.

---

to chinesebob:
Hi guy, how did you find this thread? It's kind of buried. I'll have to put all this in a web page some day rewriting some of the things I've learned. It looks like I'm accidentally writing a book. Anyway, nice to see a new poster. Regarding anaglyph generation, you see I've answered your question with the above info so there you go. Sorry to not reply sooner. Let's see, what else...

---

to jduncantor:
What you want is AviSynth. It's a really neat helper program that I started using late last year and I really like it. I'm not ready to do a full post about it yet, but it's pretty easy once you get the idea. You make a text file that references your two real video files, change its extension from .txt to .avs, and then treat that file as if it were a video file: mencoder file.avs will make your new combo file side-by-side or over/under.

http://www.avisynth.org bounces you to avisynth.org/mediawiki/Main_Page and you can go from there. What you want to click on is "filters", then "internal filters", then search/find StackHorizontal or StackVertical. Read/look/study/research the examples and I'm confident you'll be on your way to happiness and satisfaction. There could be some stumbling blocks for you though if your files are mpegs; convert them to avi's and that will help make things a little simpler. I would have liked to expand on this more, but I'm a little rusty with AviSynth right now. I've used it to add blue-line codes to my 3d videos that auto-trigger my shutterglasses, so it's pretty great!
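For the record, a minimal .avs file for this job could look like the following. The filenames are placeholders and I haven't run this exact script, but AVISource and StackHorizontal are the real AviSynth functions involved (use StackVertical for over/under):

```shell
# Write a bare-bones AviSynth script that stacks two clips side by side.
cat > combo.avs <<'EOF'
left  = AVISource("left.avi")
right = AVISource("right.avi")
StackHorizontal(left, right)
EOF
cat combo.avs
# then, on a machine with AviSynth installed:
#   mencoder combo.avs -o sbs.avi -ovc x264 -x264encopts crf=20
```

The point is that combo.avs is just text, but mencoder treats it as a video whose frames are already the stacked pair.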

Oh, darn. After all that I just remembered SMM (StereoMovieMaker). It can do what you want more easily. Google it. I don't know if there's a Linux version of it. Oh wait, I don't think it can input png files, just separate video files. Oh well. Just try AviSynth.

Oh yeah, AviSynth is for windows but I've heard that it works with Linux by using Wine although I haven't tried it myself.

mplayer and HD:
mencoder IS capable of HD since the user is in control of the quality settings depending on the codecs used, but for stereo3d I recommend the above-below format, since mencoder can crash if you try to make a video file that is over 2000 pixels wide. Side-by-side HD would be 1920x2=3840 pixels wide, although I believe you can edit the source and recompile mencoder so that it can handle that res. I would just use RO/UL, which I'm starting to call RoL for short (Right-over-Left). My videocard is older so I have trouble with playback at that high a res (1920x2160), but I think modern cards should handle it fine. Maybe I need a quad-core, which is what I've heard for Blu-ray 3d. Anyway, it's not mplayer's fault if my system can't handle the res.

mplayer, AviSynth and png files:
mplayer/mencoder can input png files but what I've suggested was AviSynth. I'm not sure if it can import png files. Wait let me check... Yes! It can! Find ImageSource in the internal filter page and go from there. Else you could use mencoder to make avi files and use AviSynth with them instead.

YouTube videos:
Just make a side-by-side video that is 1280x720 and RL instead of LR and things will be easier for you. Go ahead and squash/squeeze the video into that frame and fix the aspect with the aspect youtube-tag. Reread some of my above post and upload your video to YouTube and post a link here so we can see/find it easily. Good luck!

---

Hi Mickeyjaw, I'll put bino on my to-try list but who knows when I'll get to it...
Just took a look. It's a nice clean simple webpage just like I like. Seems like it should be decent but I see it also has no half-mirrored modes. Oh well, I don't need those anyway.

---

Hi Fredz,
Oh, you already answered JD's question. Oh well.

---

OK, all caught up. No wonder I took such a long break. I guess I get carried away with these posts and get burnt out. Hmmm, what else...?

Spanning mode display under Windows:
Earlier in this thread I wondered how to do dual-screen video in Windows, and it turns out the answer is to use -vo directx:noaccel or -vo gl, since -vo directx (the default) only works on one screen at a time when playing a video; the other screen shows green if you use spanning mode with two monitors/projectors. So now we know how to use mplayer to play 3d videos with dual projectors and horizontal or vertical spanning mode. It was just something simple like -vo directx:noaccel, although as mentioned earlier, your system might not be fast enough in software display mode to show HD 3d. If so, try -vo gl (openGL mode). If that doesn't work, then I guess we're stuck using some other program, or else two mplayer instances simultaneously?

Does anyone know how to use FIFO's to control two mplayer programs simultaneously using tee or something? That would be slick.
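I haven't tried it, but the shape of the idea would be something like this in bash, with cat standing in for the two mplayer instances so the sketch can run anywhere. With real players you would start each one as "mplayer -slave -input file=left.fifo" (those are genuine mplayer options, though I haven't tested this combination):

```shell
# Two named pipes, one per "player"; cat stands in for mplayer -slave here.
mkfifo left.fifo right.fifo
cat left.fifo  > left.log  &
cat right.fifo > right.log &
# tee duplicates one stream of slave commands into both control pipes,
# so both players would receive "pause", "seek 10 0", etc. together.
printf 'pause\nseek 10 0\n' | tee left.fifo > right.fifo
wait
```

Both logs end up with identical command streams, which is exactly what you'd want the two players to see.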

---

OK, that's it for now. I've more to say but let's call it a night. Good night all.

--- iondrive ---


Last edited by iondrive on Tue Mar 13, 2012 7:49 pm, edited 6 times in total.



Wed Feb 23, 2011 5:27 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
hey jduncantor and others,

You don't need to use AviSynth after all. To combine two video files into one using mencoder, you just have to do it the hard way and convert the videos into image files frame-by-frame. JD, you already have image files, so this solution is fine for you. Try to get all your image files into the same folder with a naming pattern like this:

00000001L.jpg
00000001R.jpg
00000002L.jpg
00000002R.jpg
00000003L.jpg
00000003R.jpg
00000004L.jpg
00000004R.jpg
.
.
.

OK, JD is using png's but my examples will use jpg's 'cause it's more generic. It still gives good results even though jpegs are lossy, and there's a quality option if you want to use it, for example: -vo jpeg:quality=99. Output file formats for mplayer include jpeg, pnm, png, tga, and animated gifs, with certain limitations; see the manpage for details in the -vo section. As far as inputs for mencoder go with mf://*.xxx, I think it's the same set, but I haven't tested all of those. What's interesting is that mplayer can input files with mf://*.jpg and output files with -vo jpeg in the same command, so you can use mplayer as a file-converter (image files in, image files out). Neat, huh? By the way, the mplayer versions I have can play MNG ("ming" like "ping") files. Those are like PNG image files except that one MNG holds multiple images. This may be of interest to you if you record game video with MAME's built-in recorder, since it records in MNG. It seems to make gameplay way too slow though, and it doesn't record sound. Oh well, onward.

Try to have nothing else in that directory that might cause problems. If you have 100's of thousands of images, it might be a problem for the filesystem to handle so many files in one folder. Anyway, once you've got this setup, get in that directory and use this command to get an avi using h264 codec.

*** frame-sequential image files to left-over-right full frames via tile
mencoder mf://*.jpg -vf tile=1:2:2:0:0,scale -fps 48 -ofps 24 -o left-over-right.avi -ovc x264 -x264encopts crf=20

This is for a video with 24 frames/sec. If you want 30 fps output, then use "-fps 60 -ofps 30" instead of "-fps 48 -ofps 24". Use whatever output fps is right for you, just make -fps double of -ofps where -ofps is your output fps. Output fps should be 1/2 of input fps because of the tile function... every two input images create one output image.

This stacks your images above/below with Left on top just because L comes before R. Adjust as needed.

*** frame-sequential image files to side-by-side LR via tile
For side-by-side, use "-vf tile=2:1:2:0:0,scale" instead of the above but be warned that you may crash mencoder if your output res is more than 2000 pixels wide. If you're up to it, try to edit the source and recompile in order to fix this limitation.

What about sound?
Yes, this has no sound. Do you need help adding sound? I don't want to get into sound right now but maybe you can just use -audiofile file.mp3 or whatever. You may have timing issues.

What if you don't have all those image files?
In that case use mplayer to make them:

*** 2 input files to frame-sequential image files in one folder (windows version)
Code:
mkdir left
mkdir right
mkdir combo
mplayer -nosound left.avi -vo jpeg:outdir=left
mplayer -nosound right.avi -vo jpeg:outdir=right
cd left
rem    renaming for windows (appending L or R to constant-length filenames)
ren ????????.jpg ????????L.jpg
move *.jpg ..\combo
cd ..
cd right
ren ????????.jpg ????????R.jpg
move *.jpg ..\combo
cd ..
cd combo


Dealing with unsynced input videos:
The above assumes that your videos are in perfect frame-by-frame sync. If they are not, then you have to find some work-around like cutting off the beginning of the earlier one or if it's only a few frames off, then you can delay one of the fields when your video is interlaced using -vf phase=t or phase=b. For example, use phase=t,phase=t,phase=t to delay the left-eye view by three frames. t is for top (odd lines) and b is for bottom (even lines) although some videos can have labels in them that designate the opposite relationship. Another option is to use a text file as input with all your image files listed in the order you want them. You need some experience with the text-manipulating gnu/linux/bash command line utilities to make it easier to rearrange long lists of filenames in a text file. Then you could use mf://@list.txt instead of mf://*.jpg

Doing a short test play/encode via -frames:
Use -frames 600 in the commands above if you just want a little test: 600 frames is 20 seconds at 30 fps. This is a good option to remember and it also works with mencoder. A 90-minute video at 30 fps gives you 162000 frames, so make sure you have the space by testing with shorter durations first if you think it's needed.

*** frame-sequential video encoded from 2 input video files
Just follow the above approach and encode it using mf://*.jpg and appropriate -fps. I need to expand on this in a later post since there can be complications involving both encoding and playback. It can get hairy but not always.

So now you have your files all set up in that combo directory ready to run the previously mentioned mencoder command. Linux users should be able to do something similar. My linux is getting rusty or else I'd give you an example straight away. In the past, I've kept left and right images in their own dirs and just used symlinks in the combo dir. Just get the files paired up in any way you can in an alternating LR order.
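Since my Linux is rusty, here is only a rough bash equivalent of the Windows batch above. It's untested against real mplayer output, so the touch lines fake the numbered frames that "mplayer -vo jpeg:outdir=..." would produce:

```shell
mkdir -p left right combo
# the real extraction steps, commented out so the sketch runs without videos:
#   mplayer -nosound left.avi  -vo jpeg:outdir=left
#   mplayer -nosound right.avi -vo jpeg:outdir=right
touch left/00000001.jpg left/00000002.jpg right/00000001.jpg right/00000002.jpg
# append L or R before the extension and gather everything in combo/
for f in left/*.jpg;  do mv "$f" "combo/$(basename "$f" .jpg)L.jpg"; done
for f in right/*.jpg; do mv "$f" "combo/$(basename "$f" .jpg)R.jpg"; done
ls combo
```

Because L sorts before R, a plain alphabetical listing of combo/ already gives the alternating LR order that mf://*.jpg needs.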

To get the audio from your video files, use -ao pcm:
mplayer left.avi -novideo -ao pcm
and this will make a file called audiodump.wav that you can use with -audiofile.
If this doesn't work for some reason, try replacing "-novideo" with "-vc null -vo null" instead.

OK, this needs better examples but I'm getting lazy. Sorry.

Yes, it's awkward to make all those image files just to put the videos together but on the other hand, you can script this and it can be pretty transparent.


It's still surprising to me what you can do with mplayer/mencoder, but wait there's more for later. Here's a clue: "-vf phase=b"
Can you guess what kind of 3d conversion that would be good for?

--- iondrive ---

PS: I also wanted to mention using -vf tinterlace=0 instead of tile in order to get to LR or L-on-R. tinterlace is the opposite of tfields: it takes sequential frames and combines each pair into one frame by interlacing the two into alternating horizontal lines, which makes your video twice as tall when you use the tinterlace=0 option. The top line of the output comes from the first frame and is considered odd, so the top line is line 1 and not line 0. Then you can use il=d to get L-on-R, or if you want LR, use fil=d. If you get weird colors, stick an ilpack before il or fil, and a scale afterwards if needed. The tests I did, did not need ilpack.

*** frame-sequential (jpeg) image files to Left-over-Right (L-on-R, LoR), full-frames (doubles height)
mencoder mf://*.jpg -vf tinterlace=0,il=d,scale,dsize -o tinterlaced-LonR.avi -ovc x264 -x264encopts crf=20 -fps 48 -ofps 24

As before, -ofps must be half of -fps since you are combining every two frames into one or else you get each pair of frames repeated in the output video. Scale as needed.

*** frame-sequential image files to side-by-side LR, full-frames (doubles width)
For LR instead of L-on-R, just replace il=d with fil=d and rename the output file appropriately. Just beware that the output width will be the double of a single input frame and if it's more than 2000 pixels wide, it will probably crash mencoder. I think that can only be fixed by editing the mencoder source and recompiling but I really haven't investigated that. It's just a guess. Otherwise it could be a codec format limitation.

*** frame-sequential image files to RL or R/L
If you want R-on-L or RL, just append ,il=s after =d in your -vf video filter list section.

*** frame-sequential image files to interlaced (or field-sequential) video file
tinterlace does that job for you, so if you just want to go from frame-sequential to field-sequential, just don't use il=d, and then make sure you use a codec that preserves interlacing integrity; I don't have examples of that handy, but mpeg2video and mpeg4 should work. That's why I don't save files in field-sequential format: it limits your choice of codecs. Instead, I use R-on-L and convert to field-sequential interlaced on the fly with mplayer. Use il=s,scale to swap fields as needed, of course. Also, you probably want to squash the double-framed height down to a single-framed height, so use scale=width:height:1. Don't forget the :1 or else you'll lose line integrity. You have to use the right number for the height value, since mencoder's scale filter doesn't have a generic halving option; read the text on the console for the original video's res when you play it with mplayer. The left-eye view will be in the top (odd) line in this case.

CONCLUSIONS:
So which approach should you use, tinterlace or tile? In this case I think tile is a little better since it doesn't go through an interlaced stage which sometimes causes color-spilling problems due to subsampling although it's easy enough to test your output and if it looks good, then it looks good. Too bad the stereo3d option includes neither frame-sequential nor field-sequential 3d formats. Oh well, maybe in the future. Bye now.


Last edited by iondrive on Tue Apr 26, 2011 4:12 am, edited 3 times in total.



Wed Feb 23, 2011 8:32 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Hi again,

Before I get to talking about a new topic using the phase filter, let me finish what I started before regarding making YouTube slideshow videos.

Making s3d video slideshows for YouTube: specific examples

Section 3: using real photos from separate left/right image files

Well, I sort of accidentally covered this in the above post. If you read it, then you can guess that you need to put your files in a temp folder so that they are arranged in pairs and then use mf://*.jpg and "-vf tile..." or "-vf tinterlace..." on them as above. Otherwise use a list file. Let's see. Suppose you had a list.txt file like this:

pic0-L.jpg
pic0-R.jpg
pic1-L.jpg
pic1-R.jpg
pic2-L.jpg
pic2-R.jpg
.
.
.
pic9-L.jpg
pic9-R.jpg

All your images should be the same res and aspect ratios. Notice that the left view is first. Then you could do the following:
(in the same directory as the image files and list.txt)

*** separate eye-view 16:9 aspect image files to YouTube LR 3d video slideshow
mencoder mf://@list.txt -o temp.avi -nosound -ovc x264 -x264encopts crf=20 -fps 0.5 -ofps 30 -vf scale=640:720,tile=2:1:2:0:0,scale=1280:720,dsize

YouTube tags:
OLD: yt3d:enable=true yt3d:aspect=4:3
NEW: yt3d:enable=LR

The tricky thing here is that -fps 0.5 makes it look like each image will last 2 seconds, but the tile filter combines every two images into one, so each image of your slideshow actually lasts 4 seconds. I use scale at the beginning in case your pics are 12 megapixels or something. The last scale could probably be left off or just be "scale", but I include it for clarity regarding the final dimensions of the video.

If your files are right-image first:
Then use -vf scale=640:720,tile=2:1:2:0:0,ilpack,fil=i,il=s,fil=d,scale,dsize
or else use -vf scale=640:720,tinterlace=0,ilpack,il=s,fil=d,scale,dsize

All the things mentioned have been explained in previous posts so I think I can skip explaining all things here. I think that's it for now. If I remember something, I'll just have to edit this.

--- iondrive ---


Fri Feb 25, 2011 6:37 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Hey there y'all,

Ready for more boring stuff? Let's finish this so we can get on to more interesting stuff.

Firstly, note that I've edited my previous posts regarding YouTube tags so that they're now up-to-date as of April 2011. Basically, I'll try to stick to side-by-side LR unless I have reason to use L-on-R or R-on-L. RL (crosseyed) format is discouraged. These are the current tags:
yt3d:enable=LR
yt3d:enable=LonR
yt3d:enable=RonL
yt3d:enable=RL

Single-eye views must be 50% squeezed horizontally or 50% squashed vertically so that the new video has the same aspect as a full-framed single-eye view. That's how they avoid needing to specify aspect with a tag. That means 4:3 aspect video should be 640x480 or something like that although I have not tried it yet. Hopefully it can take higher res like 1024x768 or 1280x960. Does anyone have more info on this? I think those should be fine. Anyway, onward...


Making s3d video slideshows for YouTube: specific examples

Section 4: using interlaced png files from nvidia or other

Old-school nvidia screenshots are normally jps files, but if you use interlaced or checkerboard 3d output formats, the screenshots are interlaced PNG files, since jpegs are lossy and not good for interlaced images. Most likely this would be used on a Zalman 3d interlaced display, which means the top line is meant for the right eye. The strategy is like in the previous examples, but in this case, when we deinterlace the screenshots, we get a R-on-L frame that already has its images 50% squashed, so it's pretty convenient. Examples:


*** nvidia (Zalman) interlaced-3d 16:9 aspect png screenshots to YouTube R-on-L 3d video slideshow:

mencoder "mf://*.png" -o temp.avi -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=1280:720:1,ilpack,il=d,scale=1280:720,dsize=16/9

YouTube tags: yt3d:enable=RonL

General notes:
If you decide to remove the first scale, then also remove ilpack or else it will give you some software ghosting. I stuck it in there just in case your screenshots do not have a res that is multiples of 16. It's normally not needed here but it's good practice to have it.
-mf type=png is unneeded since mencoder can autodetect it.
Other options are explained in previous posts.
For left-view-first interlaced images, use il=ds instead of il=d, or else use "yt3d:enable=LonR" — but not both, or they'll cancel each other out and you'll have inverted stereo.

And finally, if you want to "standardize" your interlaced images into a LR slideshow:
(This is not recommended due to image data loss.)

*** interlaced-3d 16:9 aspect png screenshots to YouTube LR 3d video slideshow:

mencoder "mf://*.png" -o temp.avi -ovc x264 -x264encopts crf=20 -fps 0.25 -ofps 30 -vf scale=640:1440:1,il=s,fil=d,scale=1280:720,dsize=16/9

YouTube tags: yt3d:enable=LR

fil quirks:
ilpack was not needed here for some reason; it probably has something to do with fil. If your screenshots were 800x600, you might have a little problem if you leave off the first scale: 800x600 does not have dimensions that are both multiples of 16 (600/16=37.5), and fil does not like that, so it puts a thin gray stripe down the middle of your image-pair. The solution is to scale to dimensions that are multiples of 16 before using fil. Just remember how fil changes the dimensions of the image and figure the scale dimensions accordingly.
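A quick shell-arithmetic way to pre-compute fil-friendly dimensions is to round each side down to the nearest multiple of 16. This is just a convenience sketch (the 800x600 numbers match the example above; any nearby multiple of 16 works too):

```shell
w=800; h=600
fw=$(( w / 16 * 16 ))   # 800 is already a multiple of 16
fh=$(( h / 16 * 16 ))   # 600 rounds down to 592
echo "scale=${fw}:${fh}:1"
```

Integer division throws away the remainder, so w/16*16 is the largest multiple of 16 that fits; the :1 keeps the scale interlace-safe as discussed earlier.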


Viewing the images without making a video file:
Before you make these videos, you may want to preview what the slideshow will look like and you can easily use mplayer to do that if you are an interlaced user. That's what the next post is all about.

--- iondrive ---


Last edited by iondrive on Thu Apr 28, 2011 8:45 am, edited 1 time in total.



Tue Apr 26, 2011 5:11 am
Profile
Sharp Eyed Eagle!
User avatar

Joined: Tue Feb 10, 2009 8:13 pm
Posts: 367
Hi all,

Lately I've been talking about making 3d slideshows from your files in various formats, but what if you just want to view some images without actually making a video file? Well, that's what this post is about: using mplayer to view files without mencoding anything. Why not just use a slideshow program like sView or TriDef's or nvidia's 3d image viewer? Well, IDK, maybe they don't work right for you, or you're on Linux and have no better options. Anyway, onward.

For interlaced users:
Interlaced users are really blessed with some nice conveniences like not worrying about timing issues, being able to pause a video and still have it be in 3d, viewing 3d in a window or web browser, and being able to play some games that only offer interlaced mode. Anyway, regarding slideshows, if you have interlaced png files it's pretty easy and this works fine for 2d slideshows too of course.

*** The general slideshow command: (1 second / image)
mplayer -fps 1 mf://*.png


Quirk-1: driver ghosting
For interlaced images, if you see more ghosting than normal, you may need a different output driver: some of them do you the "favor" of de-interlacing video, which ruins the 3d effect. You might find a way to turn that off, or just use another driver. I like the opengl output driver. Try -vo gl, or else -vo directx:noaccel; both should work on Windows. Linux users can also try -vo x11.


Quirk-2: Last image lost
Hmmm, darn, the last image is missing from view. I tried mf://*.png,filename.png, but that didn't work. If you care about the last image, make a text file like list.txt (or whatever you want to name it). On Windows: dir /b *.png > list.txt (or dir /b /s for full paths), then edit the txt file and duplicate the last entry. That fixes the problem.
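On Linux (or any POSIX shell) the same list-building trick looks like this. This is a sketch using throwaway files; in practice you'd run the two list commands in your actual image folder:

```shell
# Build list.txt with the last filename duplicated, so mplayer's
# missing-last-image quirk doesn't bite. Demonstrated on dummy files.
cd "$(mktemp -d)"
touch img1.png img2.png img3.png

ls *.png > list.txt                       # one filename per line
echo "$(tail -n 1 list.txt)" >> list.txt  # repeat the last entry

cat list.txt
```

Then play it with mplayer -fps 0.25 mf://@list.txt as shown below.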

*** general slideshow command using a list.txt file: (4 seconds / image)
mplayer -fps 0.25 mf://@list.txt -vo gl


Quirk-3: unresponsive keyboard controls
It seems that input is only checked in between frames, so a video playing at 30 fps is fine, but at 0.25 fps most input seems to be ignored. If this is intolerable, make a video file as shown before and play that instead. Unfortunately, mplayer cannot use an input/output fps pair like mencoder's -fps and -ofps, which would probably fix things. There is another great solution for interlaced users, though: use -fps 25 with -loop 0 and step through the images with the "." key. It's a nice, slick, convenient solution, and better still, 25 fps is the default for mf://, so you don't even need to specify it. Yay, a simpler command line.

*** user-controlled slideshow using "." to step through the images.
mplayer mf://*.png -loop 0 -vo gl

-loop 0 means loop the video continuously until you quit mplayer with "q" or Escape. That means you don't need to duplicate the last line of a list.txt file, or even make one, since this method also fixes the missing-last-image quirk; just use *.png. Nice. Note that when you reach the last image in the series, it behaves as if there were a second copy of it: press "." once and the image stays the same; press "." again and you're back at the first frame. It's a good way to tell which frame is the last one in the series. Also note that -loop should come after the mf:// option, not before; it just works better that way.


Quirk-4: correct interlacing
If you are viewing in a window, you have a 50/50 chance of correct interlacing. If it's wrong, you can slide the window up or down until it's right, or, if you're using a CRT with an EDim controller and ED-Activator, just use the hotkeys to flip the line-blanking sync. The other option is to rescale to fullscreen; then you'll always be in sync once you know whether you need il=s or not. For 22" Zalman monitors:

*** Fullscreen slideshows for 22" Zalman Trimons ("." steps through images)
mplayer -vf il=s,scale=1680:1050:1,dsize -fs mf://*.png -loop 0 -vo gl

Just drop the "il=s," if the lines are in the wrong place. I didn't need ilpack in my case.


Reminders:
In case you forgot, "-fs" is for fullscreen, and the ":1" on the scale does the rescale without mixing the interlacing. Of course, you can change the numbers to change the size of your window if you don't use -fs. The dsize is important here: it makes sure the image alignment on your screen matches pixel for pixel. When dsize has no "=" after it, it defaults to the image's pixel dimensions, which is exactly what we want. If the interlacing is still wrong, use the old ilpack,il=s, options before the scale; if you get weird colors, drop the ilpack. Also remember that mf:// can autodetect some filetypes like png and jpg. If your input files are jps files, you can still use them; just add -mf type=jpeg. If you have trouble remembering "mf", think "multiple files". I believe that's what it stands for, but I don't really know; it's just a guess.


Side-by-side and above/below input formats:
If you have jpegs in side-by-side or above/below formats, follow the examples from previous posts on interlacing with "-vf fil=i" or "-vf il=i". In some cases you don't need ilpack before the il or fil, but if you use it, always follow it with a scale somewhere after the il in the filter chain or you'll get an error message. If you get some other weirdness, make sure all your input files are the same size and aspect and that their dimensions are multiples of 16. Normally, things should just work right from the start.


using separate L and R image file-pairs as input:
For this kind of input, just follow the examples in the posts above about making slideshow videos. Get the files in pairs in one folder. They should all be the same size, with dimensions in multiples of 16 (800x600 screenshots need tweaking). If they are not sized right, tile and fil may not work correctly; tinterlace is probably your best option. Here are some examples in case you have 800x600 separate L and R input files:
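Before running the commands below, it's worth checking that the folder really holds complete pairs: an odd file count means a left or right frame is missing and every image after it will be eye-swapped. A quick sketch (the filenames are just examples):

```shell
# Verify the folder holds an even number of images, i.e. complete
# L/R pairs. Filenames here are made up for illustration.
cd "$(mktemp -d)"
touch shot1_L.jpg shot1_R.jpg shot2_L.jpg shot2_R.jpg

count=$(ls *.jpg | wc -l)
if [ $((count % 2)) -eq 0 ]; then
  echo "OK: $count files, $((count / 2)) stereo pairs"
else
  echo "WARNING: odd file count ($count), a frame is missing its mate"
fi
```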


tinterlacing:

*** slideshowing separate L and R 3d image files using tinterlace ("." steps through images)
mplayer mf://*.jpg -vf tinterlace=0,scale=800:600:1,dsize -loop 0 -vo gl

tinterlace makes the images doubly tall, so rescale before or after. You may need ilpack before tinterlace, with a scale after. The first image of each pair goes into the odd lines and the second into the even lines (the top line is odd, #1). This cuts the framerate in half, but we're not worried about that here, and the same goes for tile. Tile examples follow.


Vertical tiling:

*** tiling separate image files vertically and using il ("." steps through images)
mplayer mf://*.jpg -vf scale=640:480,tile=1:2:2:0:0,ilpack,il=i,scale=640:480:1,dsize -loop 0 -vo gl

This could be more efficient/effective, but I chose the numbers for clarity. You should understand that after the tile, the dimensions of the newly tiled image are 640x960, and after the il=i they're still 640x960, so you could have made the original scale 640:240 and it would be more efficient. "fil=i" is a little trickier.


Horizontal tiling:

*** tiling separate image files horizontally and using fil ("." steps through images)
mplayer mf://*.jpg -vf scale=640:480,tile=2:1:2:0:0,ilpack,fil=i,scale=640:480:1,dsize -loop 0 -vo gl

As above, this could be more efficient/effective. After the tile, the dimensions of the newly tiled image are 1280x480, and after the fil=i they're 640x960 again, so you could have made the original scale 640:240 and it would be more efficient.
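To see why the 640:240 pre-scale saves a rescale, walk the dimensions through both chains by hand. This is plain shell arithmetic, nothing mplayer-specific:

```shell
# tile=2:1 puts two frames side by side (width x2); fil=i then folds
# the side-by-side pair into interlaced lines (width /2, height x2).

# Starting from 640x480 per eye:
w=640; h=480
w=$((w * 2))                    # after tile=2:1 -> 1280x480
w=$((w / 2)); h=$((h * 2))      # after fil=i   -> 640x960
echo "${w}x${h}"                # 640x960, needs a final scale back down

# Starting from 640x240 per eye instead:
w2=640; h2=240
w2=$((w2 * 2))                  # after tile=2:1 -> 1280x240
w2=$((w2 / 2)); h2=$((h2 * 2))  # after fil=i   -> 640x480
echo "${w2}x${h2}"              # already 640x480, no extra rescale
```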


That's all for now, next let's start dipping our toes into frame-sequential page-flipping shutterglass mode.

--- iondrive ---


Thu Apr 28, 2011 7:37 am
One Eyed Hopeful

Joined: Tue Nov 15, 2011 4:08 pm
Posts: 2
Hello;
Sorry, I can't find the information I need. :oops: I have glasses with red and blue lenses, and a movie with left and right views. How do I set up mplayer to play it? Can you help?
Also, could you recommend the best glasses for watching movies with mplayer?
Thanks


Tue Nov 15, 2011 4:16 pm
Petrif-Eyed
User avatar

Joined: Sat Jan 09, 2010 2:06 pm
Posts: 2253
Location: Perpignan, France
Have a look at Bino; it's a full-featured 3D movie player for Linux, so you won't have all the hassle of the command line:
- http://bino3d.org/


Tue Nov 15, 2011 4:37 pm
One Eyed Hopeful

Joined: Tue Nov 15, 2011 4:08 pm
Posts: 2
Bino is too slow.


Tue Nov 15, 2011 5:18 pm
