360 video demo

che
One Eyed Hopeful
Posts: 35
Joined: Fri Mar 01, 2013 7:01 pm

360 video demo

Post by che »

Hi guys, can you tell me if this works at all with the rift?

http://we.tl/YbKDWLxfUu

Thx
3dcoffee
Cross Eyed!
Posts: 115
Joined: Thu Jul 19, 2012 1:01 am

Re: 360 video demo

Post by 3dcoffee »

che wrote:Hi guys, can you tell me if this works at all with the rift?

http://we.tl/YbKDWLxfUu

Thx

It does work WITHOUT the Rift. I see two videos side by side, and I get a message saying that no display (or tracker) was found. Then the message disappears and I can continue watching the video.
che
One Eyed Hopeful
Posts: 35
Joined: Fri Mar 01, 2013 7:01 pm

Re: 360 video demo

Post by che »

Thx, I know it works without the Rift; I want to know how it looks with the Rift, whether it gives the sensation of being inside the video or something.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

che wrote:Hi guys, can you tell me if this works at all with the rift?

http://we.tl/YbKDWLxfUu

Thx
That is AWESOME in my RiftDK! I see a black hole in the floor and ceiling. You could put your "trademark" (or avatar) in those unused spots... ;-)

One suggestion: I can see the black borders on the top and bottom, and horizontally at the outer edges, while looking straight forward with my eyes close to my lenses. There is no good reason to artificially restrict the FoV by drawing black pixels at the (sometimes visible) edges of its FoV. Also, the inner divider looks larger in the Rift than it needs to be, because your warp gave it a curved inner edge instead of clipping it to a straight line (as shown in Oculus demos).

Can you expand the borders a little (all around, with inner clipping) on your pre-warp shader? It would be nice to NOT waste pixels. IMHO, the border should touch the outer edges of the display.

Also, can I see your source code for this? Thanks!
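The border suggestion above amounts to tweaking the radial pre-warp so the barrel-distorted image reaches the display edges. A minimal sketch of such a warp, using the polynomial form commonly cited for the DK1 SDK; the coefficients and the `scale` knob are illustrative placeholders, not che's actual shader values:

```python
def warp(u, v, k=(1.0, 0.22, 0.24, 0.0), scale=1.0):
    """Barrel-distortion pre-warp for one eye.

    (u, v) are coordinates relative to the lens center, in [-1, 1].
    k are radial distortion coefficients (commonly cited DK1
    defaults -- treat them as placeholders).
    scale > 1 effectively oversizes the source image so the warped
    result reaches the display edges instead of leaving black
    borders."""
    r2 = u * u + v * v
    f = k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]))  # radial polynomial
    return u * f / scale, v * f / scale
```

Increasing `scale` is the "expand the borders" knob: a display-edge coordinate then samples from inside the (oversized) rendered image rather than from black.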
che
One Eyed Hopeful
Posts: 35
Joined: Fri Mar 01, 2013 7:01 pm

Re: 360 video demo

Post by che »

Thx geekmaster. I'm using the Oculus shader; I will see if I can expand the FoV, but without my kit it's guesswork.
I was really hoping this would work. That's awesome; it opens up so many possibilities for my work.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

che wrote:Thx geekmaster. I'm using the Oculus shader; I will see if I can expand the FoV, but without my kit it's guesswork.
I was really hoping this would work. That's awesome; it opens up so many possibilities for my work.
Do not change the rendering FoV. It looks natural now. Just do not clip so much of the image off at the outer edges. But it is not that important -- my eyelashes brush my lenses, and I would not see the borders at all if my eyes were farther from the lenses (they would not be visible when looking directly at the borders, only in my peripheral vision while looking straight forward). It just seems a shame to waste perfectly good peripheral vision pixels by making them black for no good reason. And with custom lenses that can see all the way to the top and bottom and outer edges, the image really should not waste anything. I would pre-warp an oversized image and clip it to fit the Rift screen, so there are little or no black pixels (except perhaps in the corners).

Source code?
lnrrgb
Binocular Vision CONFIRMED!
Posts: 294
Joined: Sat Jun 02, 2007 1:29 pm
Location: Wenatchee, WA.

Re: 360 video demo

Post by lnrrgb »

You forgot the obligatory Benny Hill Theme, or it would be perfect.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

Everything looks a little bit too large when viewed in the Rift. I think that the offset (IPD) between the eyes needs a little adjustment (closer together). Does this program have a hotkey to adjust IPD? Different people will need that adjusted a little differently, and it affects how far away and/or how large things appear. It makes me feel like a young child the way it is now (perhaps appropriate due to the camera height, that makes standing adults look really tall).

Too bad that stereoscopic surround video is such a problem. This would be even more awesome in 3D. But for now, only static panoramas made from slit-scan photography work for that. It is possible to use a pair of mirror balls stacked above and below the cameras, but then you need to create a depth map from that, and use the depth map to compute left and right eye views (and synthesize occluded pixels).

But stereo 3D is not essential, and this is very immersive as it is now.

Obligatory Benny Hill Theme (Yakety Sax):
http://www.youtube.com/watch?v=MK6TXMsvgQg
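Underlying all of this, a 360 player maps each head-tracked view direction to a pixel of the source frame; a per-user IPD adjustment then just offsets the two virtual cameras before this lookup. A minimal sketch of the direction-to-texture mapping, assuming an equirectangular source (the axis conventions are mine, not confirmed by the demo):

```python
import math

def equirect_uv(dx, dy, dz):
    """Map a unit view-direction vector to texture coordinates
    (u, v) in [0, 1] on an equirectangular 360 frame.

    Longitude (yaw) comes from atan2 of the horizontal components;
    latitude (pitch) comes from the vertical component. Assumes a
    right-handed frame with -z forward and +y up."""
    lon = math.atan2(dx, -dz)                  # [-pi, pi]
    lat = math.asin(max(-1.0, min(1.0, dy)))   # [-pi/2, pi/2]
    u = (lon / math.pi + 1.0) / 2.0
    v = (lat / (math.pi / 2) + 1.0) / 2.0
    return u, v
```

Looking straight ahead lands in the center of the frame; looking up or down walks toward the "black hole" poles geekmaster mentioned earlier.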
RoTaToR
Cross Eyed!
Posts: 135
Joined: Sun Aug 26, 2012 5:45 pm

Re: 360 video demo

Post by RoTaToR »

che wrote:Thx, I know it works without the Rift; I want to know how it looks with the Rift, whether it gives the sensation of being inside the video or something.
:shock: :shock: :woot :woot :woot :shock: :shock:

I don't have words... yes... "something" like being inside the video

HOW did you do it?
polygonwindow
One Eyed Hopeful
Posts: 13
Joined: Sat Apr 06, 2013 9:07 pm

Re: 360 video demo

Post by polygonwindow »

Hey guys, I've been actively following anything relating to 360 video on the Rift for a couple of months.
I have many plans for video on it, and I have been experimenting with a spherical 360 video cam.

Here is a pretty sweet demo video from the camera at around 4K res (though heavily compressed).

http://we.tl/JseE0ra8SV

Anyone want to get this going on a 360 player with the Rift? Can you integrate it into your demo che?
I'd love to hear some reactions to keep me excited until I get my Oculus DK.
Plagued
One Eyed Hopeful
Posts: 28
Joined: Tue Aug 14, 2012 3:31 pm

Re: 360 video demo

Post by Plagued »

I'm very interested in this, my Rift purchase was mainly for some 3d environments and potentially video projects.
As Geekmaster says, stereo 360 video is insanely complicated, and I'm not even sure how well it would work given the free neck and shoulder movement you'd want with the Rift. If you sat very still and just looked left and right, then it would probably be fine.
For me it seems that it's almost easier to recreate an environment via a mix of video, photogrammetry and good old 3D design.
A camera system that would allow you to immersively relive an event, like a wedding or a day out with the kids, would be great, but I can't see any of the current stereo 360 cameras fitting in my pocket just yet, or fitting in my bank account :)

I'm definitely interested to see how well these 2D videos work with head tracking, as they may be more than "good enough" as a photograph "replacement".
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

Plagued wrote:I'm very interested in this, my Rift purchase was mainly for some 3d environments and potentially video projects.
As Geekmaster says stereo 360 video is insanely complicated, and I'm not even that sure how well it would work given the free neck and shoulder movement you'd want with the rift. If you sat very still and just looked left and right, then it would probably be fine.
For me it seems that it's almost easier to recreate an environment via a mix of video, photogrammetry and good old 3D design.
A camera system that would allow you to immersively relive an event, like a wedding or a day out with the kids, would be great, but I can't see any of the current stereo 360 cameras fitting in my pocket just yet, or fitting in my bank account :)

I'm definitely interested to see how well these 2d videos work with head tracking, as they may be more than "good enough" as a photograph "replacement"
Where did I say "insanely complicated"? I do not remember that... It is not really particularly complicated, in fact. You just need to adjust the radial distortion correction coefficients a little to get the correct FoV (i.e. zoom factor into a fisheye view). And you adjust the left and right eye spheres for correct IPD and virtual sphere diameter.

I have used the 1-to-1 pixel mapping that creates a huge curved virtual viewing screen, borrowed from GMsphere, in my new code base for my "PixelBall Mentarium" project. I am keeping the central view at 1-to-1 pixel mapping to minimize distortion and maximize rendered text clarity, and applying pre-warp correction (radial distortion correction AND tangential/IPD distortion correction AND chromatic aberration correction) ONLY in the peripheral areas. The effect is a sphere, with the central viewing area a bit aspherical (due to the aspherical lenses in the RiftDK). This foveal aspheric rendering provides MUCH better text readability, IMHO. I just wish my subpixel wobulation did not cause dizziness in my RiftDK, because its resolution enhancement properties allow much smaller but still readable text...

The FIRST thing I am doing with my PixelBall Mentarium is embedding the chromiumembedded web browser, so that I can view 3D 360-degree panoramas in my RiftDK.

The SECOND thing I plan to do is to add video support (from webcams, and from movies rendering directly into my PixelBall Mentarium). I really LOVE the wraparound immersive video in the first post! I want that in my PixelBall Mentarium!
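The chromatic aberration correction mentioned above is typically done by sampling the red and blue channels at slightly different radii than green, so the lens's color fringing cancels out. A hedged sketch; the scale factors are illustrative placeholders, not values from any shipped shader:

```python
def chroma_sample_coords(u, v, r_scale=0.994, b_scale=1.014):
    """Per-channel source coordinates for chromatic aberration
    correction, relative to the lens center.

    Green is the reference; red is sampled slightly inward and
    blue slightly outward (placeholder factors) to counteract the
    lens spreading the channels by different amounts."""
    return ((u * r_scale, v * r_scale),   # red
            (u, v),                       # green (reference)
            (u * b_scale, v * b_scale))   # blue
```

In a real shader this runs per fragment after the radial pre-warp; the sketch just shows the three lookups per output pixel.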
che
One Eyed Hopeful
Posts: 35
Joined: Fri Mar 01, 2013 7:01 pm

Re: 360 video demo

Post by che »

Hey polygonwindow, here is your video, rifted:

http://we.tl/bZIwOO2sZ9

Great quality btw; too bad the Theora compression is not that great.

What camera did you use?
Plagued
One Eyed Hopeful
Posts: 28
Joined: Tue Aug 14, 2012 3:31 pm

Re: 360 video demo

Post by Plagued »

geekmaster wrote: Where did I say "insanely complicated"? I do not remember that... It is not really particularly complicated, in fact. You just need to adjust the radial distortion correction coefficients a little to get the correct FoV (i.e. zoom factor into a fisheye view). And you adjust the left and right eye spheres for correct IPD and virtual sphere diameter.
Sorry, I wasn't trying to put words into your mouth; you had said "Too bad that stereoscopic surround video is such a problem", but for a lot of people less educated in these things (like me) it seems insanely complicated. A single extreme fisheye or one of these more basic bubblescope-style mirrors will give a decent flat image, but the stereo prototypes I'd looked at had about 20+ individual cameras, mirrors and a whole load of post-processing needed.
I'm glad (and hopeful) you don't feel it's too complicated as I'd read a few papers on the difficulties in achieving it and they were going way over my head.

Cheers
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

Plagued wrote:
geekmaster wrote:Where did I say "insanely complicated"? I do not remember that... It is not really particularly complicated, in fact. You just need to adjust the radial distortion correction coefficients a little to get the correct FoV (i.e. zoom factor into a fisheye view). And you adjust the left and right eye spheres for correct IPD and virtual sphere diameter.
Sorry I wasn't trying to put words into your mouth, you had said "Too bad that stereoscopic surround video is such a problem" but for a lot of people less educated in these things (like me) it seems insanely complicated. A single extreme fisheye or one of these more basic bubblescope style mirrors will give a decent flat image, but the stereo prototypes i'd looked at had about 20+ individual cameras, mirrors and a whole load of post processing needed.
I'm glad (and hopeful) you don't feel it's too complicated as I'd read a few papers on the difficulties in achieving it and they were going way over my head.

Cheers
Ahh... you are talking about FILMING the video, not rendering it. A pair of fisheye cameras only gives a stereoscopic forward view. The closer you get to the sides, the less parallax. And sideways, both cameras are virtually collinear, giving no stereoscopic parallax at all.

I designed a minimal multi-camera array for full 360-degree stereoscopy, using fisheye cameras looking outward from a sphere (12 cameras symmetrically arranged in layers of 3, 6, 3 -- like stacking marbles). Then you use the unwarped outer edges from two cameras of different parallax that span your viewpoint to generate a stereoscopic view (with some interpolation and translation). I plan to eventually use three cameras for each view, filming every view from a triangular set of images, so we can generate a warped depth map that includes occlusion pixels in all directions. That way we can just discard occluded pixels appropriately for each eye while creating our stereoscopic view. My planned storage format is a spherical panorama with occluded pixels warped in, plus a warp map that includes depth and joint-stereo displacement.

Of course, that is all just theoretical, and will remain a personal "pipe dream" until I can afford all those cameras and fisheye lenses. For now, a "common" approach is to use a pair of mirror balls on a vertical axis, but you either need to view them sideways, or create a depth map so you can synthesize right and left eye views.

Another idea I had (based on the Pulfrich effect), was to film fisheye video in a forward-moving vehicle, and use the warped side images with temporal displacement to create 3D, where the forward eye gets a fresh video frame and the rearward eye gets a delayed video frame. This should provide nice stereoscopic parallax, but only while moving. You can experiment with this by filming video out a side car window while moving, then play it back to both eyes with an adjustable delay between eyes. If adjusted correctly (not reversed) you should see nice stereoscopic 3D. I just want to extend that to both sides using a single camera with fisheye lens. Then later when I can afford it, do it with the camera ball described above...
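The Pulfrich-style playback trick described above boils down to a frame-delay buffer: one eye gets the fresh frame, the other gets a frame a few steps older. A minimal sketch of just the buffering (the right delay depends on vehicle speed and framerate, and which eye leads depends on travel direction):

```python
from collections import deque

def pulfrich_pairs(frames, delay=2):
    """Pair each fresh frame (forward-motion eye) with a frame
    'delay' steps older (rearward eye). Yields (lead, lag) tuples
    once the delay buffer has filled."""
    buf = deque(maxlen=delay + 1)
    for frame in frames:
        buf.append(frame)
        if len(buf) == delay + 1:
            yield buf[-1], buf[0]
```

Adjusting `delay` at playback time is the "adjustable delay between eyes" experiment suggested in the post.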
Plagued
One Eyed Hopeful
Posts: 28
Joined: Tue Aug 14, 2012 3:31 pm

Re: 360 video demo

Post by Plagued »

Yes, sorry, filming to be "played back" on a VR headset.
I had also started designing a prototype based on a small cluster of 4 webcams fitted with 180-degree fisheye lenses. I was going to compensate for the low number of cameras by moving them on a set mounting and taking the same footage from the positions of the other cameras, to check it would work before building up the number of cameras. That was the point the Rift Kickstarter happened, and I thought the Rift would be the perfect headset, as it would be more accessible.
However in the 8+ months I've spent waiting, the software side of things has made me reconsider my abilities. I'm happy with hardware, but the idea of trying to cope with and process 10 or more video feeds would be too steep a learning curve, and I just don't have the free time (until the government here brings back the ability to sell your children... and wife).
So I'm definitely looking forward to seeing how effective the mono 360 video is with decent head tracking and a wide FOV.
I started making my own 3D digital cameras 12 years ago and used to take them to friends' weddings and then give them some pictures and a basic viewer as a gift. Although I don't think I'll be handing out Rifts as presents, I'd like to be able to move things up a notch.
The other field I'm interested in is rehabilitation and care homes, which is where I'm looking into building environments now rather than filming them.
BOLL
Binocular Vision CONFIRMED!
Posts: 295
Joined: Mon Aug 06, 2012 9:26 pm
Location: Sweden

Re: 360 video demo

Post by BOLL »

polygonwindow wrote:Here is a pretty sweet demo video from the camera at around 4K res (though heavily compressed).

http://we.tl/JseE0ra8SV
That is some sweet footage man! O.o Are you handling scientific, military or professional gear or is it somewhat acquirable by mere mortals? :O I'm fairly hyped on spherical video and the rift, my best bet now is the Geonaute 360, but I'm nagging them about higher framerates and stuff :P

While you have experimented with a camera, are you also experimenting with a microphone solution? As you can tell I'm quite interested in all of this...
polygonwindow
One Eyed Hopeful
Posts: 13
Joined: Sat Apr 06, 2013 9:07 pm

Re: 360 video demo

Post by polygonwindow »

BOLL wrote:
polygonwindow wrote:Here is a pretty sweet demo video from the camera at around 4K res (though heavily compressed).

http://we.tl/JseE0ra8SV
That is some sweet footage man! O.o Are you handling scientific, military or professional gear or is it somewhat acquirable by mere mortals? :O I'm fairly hyped on spherical video and the rift, my best bet now is the Geonaute 360, but I'm nagging them about higher framerates and stuff :P

While you have experimented with a camera, are you also experimenting with a microphone solution? As you can tell I'm quite interested in all of this...
Hey BOLL,
The camera used for this footage was this one:

http://360rig.com/

It's basically a shell that you attach 6 Hero3s to. There are a number of reasons why I think this is by far the best solution for shooting 360 video. It records a full sphere of video, so there is no nadir blind spot. You can 'boom' the camera itself with a simple boom pole, and other mounting options are easy. It's light enough to attach to a quad-copter. When you stitch the video streams together you get about 5.7K of resolution at 45 fps, or 4K at 100 fps (the Hero3 shoots at 960p100 and 1440p45). This is enough for use with the Rift.

I am investigating sound recording with it, but it should be a matter of installing very small omni mics on the surface of the camera itself... since you primarily operate it on a boom pole anyways! You can just record dialogue with wireless lavs, and do the ambient sound in post anyways...

The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock. This disparity in sync causes occasional glitches in the image, especially visible if you look at parallel lines like power lines, and it cannot be easily fixed in software. I'm investigating using the 3D sync cable to get maximum sync between all 6 cams. The problem is amplified the higher the framerate you go (like 100). Still, I think a solution will come along pretty soon.

If you've got enough $$ for 6 Hero3s you should be good to go ;)
polygonwindow
One Eyed Hopeful
Posts: 13
Joined: Sat Apr 06, 2013 9:07 pm

Re: 360 video demo

Post by polygonwindow »

che wrote:Hey polygonwindow, here is your video, rifted:

http://we.tl/bZIwOO2sZ9

Great quality btw; too bad the Theora compression is not that great.

What camera did you use?
Thanks for that, it looks great! See my post to BOLL for more info about the camera the video was shot with.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

polygonwindow wrote:... The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock. This disparity in sync causes occasional glitches in the image, especially visible if you look at parallel lines like power lines, and cannot be easily fixed in software. I'm investigating using the 3d sync cable to get maximum sync between all 6 cams. The problem is amplified the higher framerate you go (like 100). Still, I think a solution will come along pretty soon.
There is some EXCELLENT software out there to do FRUC (framerate upconversion):
http://viola.usc.edu/Research/Tanaphol_ ... ICCE09.pdf

I do not remember the name of the package I used for one project, but I upconverted my video 10x and the result was extremely good slow-motion. The software uses motion vector analysis (similar to MPEG encoding) to generate the missing in-between frames.

If you upconvert your videos to a higher framerate in post-processing, you can do a much better job of "genlocking" them after-the-fact, during post-processing.
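Real FRUC packages like the one referenced above interpolate along motion vectors; a naive linear blend is much cruder (it ghosts on fast motion) but still shows where the synthesized in-between frames sit in the timeline. In this sketch single floats stand in for whole frames:

```python
def upconvert(frames, factor=2):
    """Naive framerate upconversion by linear blending between
    consecutive frames. For each input pair, emit 'factor' frames
    interpolated at t = 0, 1/factor, ..., then keep the final
    input frame. Motion-compensated FRUC replaces the blend with
    per-block warping along motion vectors."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for i in range(factor):
            t = i / factor
            out.append(a * (1 - t) + b * t)
    out.append(frames[-1])
    return out
```

A 10x `factor`, as mentioned in the post, gives ten candidate frames per original interval to line up against the other cameras.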
polygonwindow
One Eyed Hopeful
Posts: 13
Joined: Sat Apr 06, 2013 9:07 pm

Re: 360 video demo

Post by polygonwindow »

geekmaster wrote:
polygonwindow wrote:... The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock. This disparity in sync causes occasional glitches in the image, especially visible if you look at parallel lines like power lines, and cannot be easily fixed in software. I'm investigating using the 3d sync cable to get maximum sync between all 6 cams. The problem is amplified the higher framerate you go (like 100). Still, I think a solution will come along pretty soon.
There is some EXCELLENT software out there to do FRUC (framerate upconversion):
http://viola.usc.edu/Research/Tanaphol_ ... ICCE09.pdf

I do not remember the name of the package I used for one project, but I upconverted my video 10x and the result was extremely good slow-motion. The software uses motion vector analysis (similar to MPEG encoding) to generate the missing in-between frames.

If you upconvert your videos to a higher framerate in post-processing, you can do a much better job of "genlocking" them after-the-fact, during post-processing.
Hmmn ok. I actually figured the opposite was true: that the higher the framerate, the harder it would be to sync. Would shooting at 100 fps and then conforming the footage to 60 have a similar effect in easing sync in post? Perhaps I'm a bit confused... figuring out a streamlined workflow for the footage is a big challenge at the moment!
t0pquark
One Eyed Hopeful
Posts: 35
Joined: Wed Mar 27, 2013 2:23 pm

Re: 360 video demo

Post by t0pquark »

I haven't checked this out in a Rift yet, but this is the sort of thing I always imagined as being featured more prominently in "adult" VR experiences than something like what Sinful Robot is going for.

The main reason being that they already know how to produce video (even if you need to make some adjustments for the rig) and this allows them to use known "actors", many of whom already have pretty strong brands. And yes, I realize you can do pretty high res 3D scans of people already, but you are still limited by two main factors: 1) number of people with systems fast enough to fully use that high res scan, 2) the amount of motion capture or key frame animation you are willing to produce to drive the model. By comparison, displaying high res video mapped to a sphere should be much easier on a system.

If you could have a full 2D sphere like this for head tracking sake, and then overlay a section of the sphere with video shot with a true 3D rig, I think that could make for a pretty compelling experience, regardless of content.
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

polygonwindow wrote:
geekmaster wrote:... If you upconvert your videos to a higher framerate in post-processing, you can do a much better job of "genlocking" them after-the-fact, during post-processing.
Hmmn ok. I actually figured the opposite was true, that the higher framerate the harder it would be to sync. Would shooting at 100fps, and then conforming the footage to 60 have a similar effect in easing sync in post? Perhaps Im a bit confused...figuring out a streamlined workflow for the footage is a big challenge at the moment!
The more frames per second you have, the closer in time you can find one to match. With a high enough frame rate, you should have no problem finding a close enough match. And with predictable differential speeds, you can use simple periodic frame dropping to keep them in sync. With FRUC, you can upsample 10x (in my experience) giving you a lot more control over multi-camera synchronization during post-processing.
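The "closer in time" matching above can be sketched as nearest-timestamp frame selection against a denser (upconverted) stream; timestamps are assumed monotonic:

```python
def nearest_frame_indices(ref_times, cam_times):
    """For each reference-camera frame time, pick the index of the
    closest frame (in time) from another camera's frame times.
    With a denser (upconverted) cam_times list, the worst-case
    mismatch shrinks proportionally. Both lists must be sorted."""
    out = []
    j = 0
    for t in ref_times:
        # advance while the next frame is at least as close to t
        while j + 1 < len(cam_times) and \
                abs(cam_times[j + 1] - t) <= abs(cam_times[j] - t):
            j += 1
        out.append(j)
    return out
```

With predictable drift this degenerates into the periodic frame dropping geekmaster describes: the selected indices skip ahead at a regular interval.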
BOLL
Binocular Vision CONFIRMED!
Posts: 295
Joined: Mon Aug 06, 2012 9:26 pm
Location: Sweden

Re: 360 video demo

Post by BOLL »

polygonwindow wrote:Hey BOLL,
The camera used for this footage was this one:

http://360rig.com/

< text >

If youve got enough $$ for 6 hero3's you should be good to go ;)
Interesting! I actually saw a similar product reported on from the NAB show. The brand is called 360 Heroes and essentially does the same thing. They have a number of different models but are also more expensive while looking less professional, to me at least.

I remember thinking when I saw the video: whoa, who will ever get six GoPros for this? Now that I have seen what can be done, I'm mentally drooling over this... What I think Freedom / 360 / Rig lacks is videos on their site about the product; I'd really like to see how it works with mounting the cameras etc.

About the sync, is the problem that they are out of sync by a sub-frame? Then it sounds like a higher frame rate would make it possible to do finer adjustments to get closer to a perfect sync, though unless you have an audio cue it will be _harder_ to sync, as the difference between frames is smaller. An alternative is to use a photo flash or something similar to sync the streams visually. In addition, a bad sync is much easier to see in playback at a higher frame rate, at least in my experience.

I was thinking of using the 3D sync cable myself to sync two cameras for non-spherical stereoscopic Rift video. But then I had a friend try what the cable synced, and I was disappointed. It doesn't sync exposure, white balance, or any camera settings at all. It also does not sync the recordings as well as you'd think. My friend got sync problems all the time, even with cards rated over the requirement... -_- I guess the cameras are just not that well matched. And actually, now with the Hero 3 the old cables do not seem to be compatible, and who knows if there will be new ones, as the 3D kit has not been remade for the new case design.

Oh, and for mics: the Hero 2 was awesome with a dedicated mic port, but for the Hero 3 there is the 3.5 mm adapter, which seems clunky. I have little sound knowledge, so adding stuff in post sounds complicated :) And what fascinates me about 360 recording is capturing a space as authentically as possible, including audio :D

Thanks for sharing :D I think the video looks absolutely fantastic. I presume it was recorded in Protune mode and elaborately tweaked and stitched together to look so good... I noticed on the site that all the video gets exported to image sequences, stitched as individual panoramas, then merged to video again >_> sure sounds like a chore, and like it would require quite some disk space, heh.

Also, I read on the Freedom / 360 / Rig site that there is also a workflow for storing video in a cubic format, like a cubemap. It feels like that would preserve detail better, as the distortion would be less and it also matches the layout of the cameras. Have you experimented with that?
Skaven252
Cross Eyed!
Posts: 120
Joined: Thu Aug 02, 2012 2:07 am
Location: Sweden

Re: 360 video demo

Post by Skaven252 »

polygonwindow wrote:The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock.
I've had a similar issue with my two VIO POV.HD cameras, and it turns out this was caused by variable framerate. That is, the timecode is actually accurate enough between the cameras, but when the cameras drop frames inconsistently, the streams go out of sync when the footage is multiplexed together and quantized to frames at a constant framerate.

Vegas Movie Studio actually refuses to pair two video clips if their average framerates differ too much (due to dropped frames). However, I tried the demo of another editing package, Magix Movie Maker Pro, and it appears to be able to handle dropped frames and variable framerates that are inconsistent between stereo pairs, and keep them in sync.

I haven't been able to try this with six video streams (!), but I can imagine the problem can be sixfold. :)
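The failure mode Skaven252 describes -- variable-framerate streams drifting once quantized to a constant framerate -- can be sketched as conforming capture timestamps onto a constant-framerate grid; a dropped frame shows up as a duplicated source index rather than as drift:

```python
def conform_to_cfr(timestamps, fps):
    """Quantize variable-framerate capture times (sorted, starting
    at 0) to a constant-framerate grid. For each output slot, reuse
    the latest source frame at or before the slot's time, so drops
    become duplicates. Returns one source-frame index per output
    frame."""
    step = 1.0 / fps
    out, j = [], 0
    n_out = int(round(timestamps[-1] * fps)) + 1
    for k in range(n_out):
        t = k * step
        while j + 1 < len(timestamps) and timestamps[j + 1] <= t + 1e-9:
            j += 1
        out.append(j)
    return out
```

Running this per camera against the same grid keeps multiplexed streams aligned even when each camera dropped different frames.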
geekmaster
Petrif-Eyed
Posts: 2708
Joined: Sat Sep 01, 2012 10:47 pm

Re: 360 video demo

Post by geekmaster »

Skaven252 wrote:
polygonwindow wrote:The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock.
I've had similar issue with my two VIO POV.HD cameras, and turns out this was caused by variable framerate. As in, the timecode is actually accurate enough between the cameras, but when the cameras drop frames inconsistently, the streams go out of sync when the footage is multiplexed together, and quantized to frames at a constant framerate.

Vegas Movie Studio actually refuses to pair two video clips if their average framerates differ too much (due to dropped frames). However, I tried the demo of another editing software, the Magix Movie Maker Pro, and it appears to be able to handle dropped frames and variable framerate that are inconsistent between stereo pairs, and keep them in sync.

I haven't been able to try this with six video streams (!), but I can imagine the problem can be sixfold. :)
This may be obvious to you, but in case it can help even a little, I will add my "two cents worth" to this discussion:

The "old school" solution to this back in the analog days was to record a timecode with your video. I have an old 8-bit ISA MIDI card that could read and write SMPTE timecodes to an audio track, but these days it is usually written to the VBI (where things like Closed Captions are also commonly stored). Digital cameras may be able to interleave timecodes into the digital video stream. Beware that the timecode needs to be sinewave modulated (like my MIDI card did) or the square wave harmonics will not record well on a compressed audio track.

Here are some people recording audio timecodes with goPro cameras:
http://jwsoundgroup.net/index.php?/topi ... o-a-gopro/

Essentially, you can use one of the audio tracks to hold a (sinewave modulated) timecode on each camera. It can even be subsonic (not accurate to an individual frame) so you can still put audio on all tracks, and filter out the timecodes later...

The problem with using the internal timecode generator in video cameras is that they are NOT genlocked and can drift. It is much better to use a single timecode generator to keep them all genlocked (often an output from one of the cameras that is fed into the others). For cameras that do not support genlock, these days you could use a little embedded processor like an arduino to generate the common SMPTE audio signal that gets recorded on one audio track on each of the video cameras.

More on timecodes:
http://campbellcameras.blogspot.com/201 ... -code.html
Last edited by geekmaster on Wed May 01, 2013 12:10 pm, edited 2 times in total.
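A toy illustration of the "sinewave-modulated timecode on an audio track" idea. Real SMPTE LTC uses biphase-mark coding at 80 bits per frame; this plain-FSK stand-in only shows the key point that smooth tones (rather than square waves) survive compressed audio recording:

```python
import math

def timecode_tone(bits, f0=800.0, f1=1600.0, sr=48000, bit_len=0.005):
    """FSK stand-in for an audio timecode track: each bit becomes a
    short burst of a low (0) or high (1) sine tone. Frequencies,
    sample rate and bit length are arbitrary illustrative values;
    real LTC uses biphase-mark coding instead of FSK."""
    samples = []
    n = int(sr * bit_len)  # samples per bit
    for bit in bits:
        f = f1 if bit else f0
        for i in range(n):
            samples.append(math.sin(2 * math.pi * f * i / sr))
    return samples
```

A shared generator (e.g. a small microcontroller) feeding this signal to one audio input per camera is the software analogue of the single-timecode-source setup described above.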
polygonwindow
One Eyed Hopeful
Posts: 13
Joined: Sat Apr 06, 2013 9:07 pm

Re: 360 video demo

Post by polygonwindow »

geekmaster wrote:
Skaven252 wrote:
polygonwindow wrote:The biggest drawback of this system is that the cameras do not perfectly sync. For that you need a genlock.
I've had similar issue with my two VIO POV.HD cameras, and turns out this was caused by variable framerate. As in, the timecode is actually accurate enough between the cameras, but when the cameras drop frames inconsistently, the streams go out of sync when the footage is multiplexed together, and quantized to frames at a constant framerate.

Vegas Movie Studio actually refuses to pair two video clips if their average framerates differ too much (due to dropped frames). However, I tried the demo of another editing software, the Magix Movie Maker Pro, and it appears to be able to handle dropped frames and variable framerate that are inconsistent between stereo pairs, and keep them in sync.

I haven't been able to try this with six video streams (!), but I can imagine the problem can be sixfold. :)
This may be obvious to you, but in case it can help even a little, I will add my "two cents worth" to this discussion:

The "old school" solution to this back in the analog days was to record a timecode with your video. I have an old 8-bit ISA MIDI card that could read and write SMPTE timecodes to an audio track, but these days it is usually written to the VBI (where things like Closed Captions are also commonly stored). Digital cameras may be able to interleave timecodes into the digital video stream. Beware that the timecode needs to be sinewave modulated (like my MIDI card did) or the square wave harmonics will not record well on a compressed audio track.

Here are some people recording audio timecodes with goPro cameras:
http://jwsoundgroup.net/index.php?/topi ... o-a-gopro/

Essentially, you can use one of the audio tracks to hold a (sinewave modulated) timecode on each camera. It can even be subsonic (not accurate to an individual frame) so you can still put audio on all tracks, and filter out the timecodes later...

The problem with using the internal timecode generator in video cameras is that they are NOT genlocked and can drift. It is much better to use a single timecode generator to keep them all genlocked (often an output from one of the cameras that is fed into the others). For cameras that do not support genlock, these days you could use a little embedded processor like an arduino to generate the common SMPTE audio signal that gets recorded on one audio track on each of the video cameras.

More on timecodes:
http://campbellcameras.blogspot.com/201 ... -code.html
Wow this is very useful info! Perhaps in v2 of the Freedom360 camera we can get a solution like this. I will share this with the inventor.