How to build head tracking into a VR headset like the Oculus

This is for discussion and development of non-commercial open source VR/AR projects (e.g. Kickstarter applicable, etc). Contact MTBS admins at customerservice@mtbs3d.com if you are unsure if your efforts qualify.
JDuncan
Cross Eyed!
Posts: 130
Joined: Wed Feb 09, 2011 3:30 pm
Location: My Left Hand
Contact:

How to build head tracking into a VR headset like the Oculus

Post by JDuncan »

This is a new approach to VR that builds in head tracking.

First I will detail what 3D is, then I will detail how to create a 3D image, and then I will detail how to create a VR headset like the Oculus.

What is 3D?
Imagine a horizontal number line being displayed on a television.
You have the number 0 in the middle and then the negative numbers on the left and the positive numbers on the right.

Now, sitting in front of the television and looking at the number line, hold a pencil in front of you.
Focus on the pencil and the number line will appear in double vision.
Because the number line is doubled, there will be two number 0's.

While focusing on the pencil, put it equally far from the two zeros on the number line.
This puts the pencil in between the two zeros in your peripheral vision.

Now hold a hand over one eye while still focusing on the pencil: the pencil will sit over one part of the number line, not zero.
Then move the hand to cover the other eye instead; with the pencil still in focus, it will sit over a different part of the number line.

The pencil's position in between the two zeros is lost when one eye is covered, so that eye's vision puts the pencil over a different part of the number line.

With both eyes seeing the pencil again, the 0 is in double vision and the pencil is in between the two zeros on the number line.
While focusing on the pencil with both eyes, the eyes receive two images of the pencil over different parts of the number line.
The brain joins these two different images into one, so the pencil looks as if it is in between the two zeros.
This is the principle of stereoscopic 3D.

By taking two different photos of something and showing one to each eye, the pencil-between-two-zeros phenomenon happens again, and your brain joins the two different pictures into one.
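The pencil-and-number-line geometry can be sketched in a few lines of code. This is a minimal model, not part of the original post: it assumes the eyes sit on the x axis separated by a typical interpupillary distance, the TV (the number line) lies at distance D, and the pencil sits on the midline at distance d, then projects the pencil onto the screen from each eye.

```python
# Sketch of the pencil-and-number-line geometry: project the pencil onto
# the screen plane from each eye and see which "number" each eye reads.
# Assumed setup: eyes at x = +/- IPD/2, pencil on the midline (x = 0).

IPD = 0.064   # interpupillary distance in metres (typical adult value)

def screen_hit(eye_x, pencil_z, screen_z):
    """Where the ray from one eye through the pencil crosses the screen."""
    t = screen_z / pencil_z              # ray parameter at the screen plane
    return eye_x + t * (0.0 - eye_x)     # pencil sits on the midline (x = 0)

def disparity(pencil_z, screen_z, ipd=IPD):
    """Separation on the number line between the two eyes' pencil images."""
    left = screen_hit(-ipd / 2, pencil_z, screen_z)
    right = screen_hit(+ipd / 2, pencil_z, screen_z)
    return left - right                  # positive: pencil in front of the screen

# Pencil halfway to a TV 2 m away: each eye sees the pencil over a
# different number, symmetric about zero, exactly as the post describes.
print(screen_hit(-IPD / 2, 1.0, 2.0))   # left eye's number: 0.032
print(screen_hit(+IPD / 2, 1.0, 2.0))   # right eye's number: -0.032
print(disparity(1.0, 2.0))              # 0.064
```

Note the symmetry: with the pencil centred, the two eyes' readings straddle zero, which is exactly the "pencil in between the two zeros" effect.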

When you move closer to the TV while holding the pencil in between the two zeros (you see two zeros because of the double vision from focusing on the pencil),
covering one eye now puts the pencil over a different, smaller number on the number line.
This shows that TV size and viewing distance affect how the stereoscopic pictures function.

If the stereoscopic images for the left and right eye were made for one TV size and one viewing distance, then changing either invalidates the stereoscopic effect.
Do the 3D TV and movie industries have a standard TV size and viewing distance? No.
But this is a problem that games and VR headsets can fix: by building games around the known size and distance of the headset's display, the stereoscopic images should always be valid.
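The size-and-distance dependence can be made concrete. The sketch below, with assumed numbers (not from the post), inverts the symmetric pencil-and-TV geometry: given an on-screen separation between the left and right images, it computes the depth at which they fuse, which changes when the viewer sits at a different distance.

```python
# Why screen size and viewing distance matter: the same on-screen
# separation between the left and right images fuses at a different
# depth depending on how far the viewer sits.

IPD = 0.064   # metres, assumed typical interpupillary distance

def fused_depth(screen_disparity, viewing_distance, ipd=IPD):
    """Distance at which a crossed on-screen disparity appears to float."""
    return ipd * viewing_distance / (ipd + screen_disparity)

# A 3 cm crossed disparity authored for a viewer 2 m away...
print(fused_depth(0.03, 2.0))   # ~1.36 m: the object floats in front
# ...viewed from 1 m instead: the perceived depth halves.
print(fused_depth(0.03, 1.0))   # ~0.68 m
```

A bigger TV has the same effect through another route: showing the image on a screen twice as wide doubles the physical disparity, which again changes the fused depth. A headset avoids both problems because its display size and distance are fixed and known.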

How to create 3D for use in a 3D VR headset?

Assume the user taking the photograph is wearing the glasses, is looking at the 3D TV, the image on the TV is not in 3D yet, and the TV is showing the same number line as before.

The user holds a pencil in between the eyes and the number line and sees a double vision of zero.
He then holds a hand over one eye, and the open eye sees the pencil over a particular number on the number line: this view is the first stereoscopic image.

Now the technology on the glasses is a swivelling laser pointer with a camera attached to the swivel.
The camera films wherever the laser points.
The camera has the same field of view as the person has through the glasses.

The laser for the uncovered eye points at the number on the number line that this eye sees the pencil over.
When the person agrees the laser position is true, the camera takes a photo; then the other eye is covered instead and the process repeats, producing the second stereoscopic photo.

Now the two photos are run through 3D photo software so they are viewable on the 3D TV in 3D mode. The lasers and camera are turned off, the glasses become 3D glasses, and the person looks at the 3D image the camera took of the pencil on the number line and judges whether it looks 3D when focusing on the pencil.

The moving pencil

Ignoring VR tracking for now.

Suppose the glasses take the two photos of the TV showing the number line, and the person looks at the 3D image and agrees the 3D looks fine.
The glasses also have a camera that can view the person's eyes, one camera per eye.
These cameras photograph the physical position of the eyes at the moment the photos of the pencil are taken.

Now this process of photographing the pencil is repeated, but with the pencil closer to the TV.
And this happens for every distance the pencil can sit between the TV and the glasses.

These values are plugged into the virtual environment.
Then inside VR, the person can sit still and view the pencil in between the glasses and the TV while the pencil is moved towards and away from the TV, and the person still sees it in 3D.
It is the same as if photos of the pencil moving towards and away from the TV had been taken in real life, shown in 3D mode on the 3D TV, and the person had judged whether the pencil looked 3D.
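The calibration pass just described can be tabulated. This sketch uses assumed numbers (a 2 m TV distance and a 64 mm eye separation, not values from the post); a real rig would measure these with the eye cameras instead of computing them:

```python
# Tabulate pencil distance -> on-screen separation, the values that get
# "plugged into the virtual environment" so the virtual pencil
# reproduces what the eyes saw at every real pencil distance.

IPD, SCREEN = 0.064, 2.0   # metres, assumed calibration constants

def disparity(pencil_z, screen_z=SCREEN, ipd=IPD):
    """On-screen separation of the two eyes' pencil images."""
    return ipd * (screen_z - pencil_z) / pencil_z

# Sample the range of distances the pencil can sit between glasses and TV.
table = {round(z, 2): round(disparity(z), 4)
         for z in (0.25, 0.5, 1.0, 1.5, 1.9)}
print(table)
# The separation shrinks toward zero as the pencil approaches the screen.
```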

How to create VR headset tracking part 1

Because the pencil's position on the number line changes depending on whether one or both eyes focus on it,
there is an 'x' shape where the lines of sight from the two eyes cross at the pencil and continue past it.

That x shape is made visible by the two intersecting beams drawn by the two lasers on the glasses.

Hold the pencil in between the glasses and the TV at one fixed distance from the glasses.
When the person moves closer to the TV, the laser segments behind the pencil become shorter.
When they move farther from the TV, the laser segments behind the pencil become longer.

When the lasers become shorter, the laser spots land closer to zero on the number line.

This laser distance can be measured and virtually recreated in VR.
Then, as the person moves closer to the virtual TV, the lasers behind the virtual pencil become shorter too.

This recreation makes the virtual environment match the real-world environment.
The shorter laser is a trackable value.
The virtual and real environments agree on the length of the laser behind the pencil.

As the pencil stays one distance from the glasses but a variable distance from the TV, the laser spots on the number line give the value that can be plugged into VR.
If the person moves in a straight line towards or away from the TV, the number line in VR should show the laser spots change exactly as much as they do in real life.

In the real world, the person is shining the two lasers from the glasses onto the number line displayed by the TV.
This is recreated in VR, so the virtual glasses beam the two lasers onto the number line the virtual TV displays.
Then, as the person in real life moves in a straight line towards the TV and the lasers change position on the number line,
in virtual reality the lasers change position on the number line and the person moves closer to the virtual TV.
So the exact distance the person moves in reality, which changes the numbers the two lasers are over, is mirrored in virtual reality.
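The tracking rule above can be inverted to show how the measurement would actually work. A sketch with assumed constants (laser sources at the eye positions, a 40 cm glasses-to-pencil distance): the lasers cross at the pencil, and the separation of the two spots on the number line tells you how far the head is from the TV.

```python
# Invert the crossed-laser geometry: measure the separation of the two
# laser spots on the number line, recover the head-to-TV distance.

IPD = 0.064     # metres, assumed separation of the two laser sources
PENCIL = 0.4    # metres, assumed fixed glasses-to-pencil distance

def spot_separation(head_to_tv_m, pencil=PENCIL, ipd=IPD):
    """Distance between the two laser spots on the number line."""
    return ipd * (head_to_tv_m - pencil) / pencil

def head_to_tv(separation, pencil=PENCIL, ipd=IPD):
    """Recover head distance from the measured spot separation."""
    return pencil * (1.0 + separation / ipd)

sep = spot_separation(2.0)    # head 2 m from the TV
print(sep)                    # spots 0.256 m apart on the number line
print(head_to_tv(sep))        # recovers 2.0 m
print(spot_separation(1.0))   # move closer and the spots bunch up: 0.096 m
```

This is just similar triangles, but it is the whole mechanism: the spot separation is the "trackable value" the real and virtual worlds agree on.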

How to create VR headset tracking part 2

Two different concepts:
The pencil is a static distance from the glasses, and the person changes the distance from the pencil to the TV.
The person and TV are static, and the pencil changes its distance from the TV and the eyes at the same time.

How to create a VR headset tracking part 3

Now the person physically uses the static-pencil setup, but the virtual reality the person sees is the moving-pencil setup with a static TV and person.
So the eyes see stereoscopic images that look 3D, while the head position is measured in reality.

The VR headset in the real world must move as the person moves their head.
The person doesn't look at the pencil, but a pencil is still needed for the lasers.
So something like a unicorn horn is mounted in front of the VR helmet to play the role of the pencil.
Then lasers on the VR headset beam past the pencil, creating the x shape I mentioned before.
The distance the lasers travel behind the pencil lets the virtual and real world agree on measurement.
The left and right cameras see the left and right lasers behind the pencil that is in between the glasses and the number line.
The cameras feed this into software, which finds where the lasers are hitting the number line.

Then the real world and virtual world put the person's head that far from the number line,
so in both the virtual and real world the lasers hit the number line that exact distance behind the pencil in front of the person's glasses.

The person doesn't see what the lasers are touching on the number line; they only look at what's in the virtual environment.

Now, when the person turns their head in the virtual environment of the moving pencil,
the virtual environment using the number line is used to find its position.
The virtual environment of the static pencil is used by the virtual environment of the dynamic pencil to find the head position and enable head tracking.

This may mean a number line circling the person, so that as they turn their head the lasers stay focused on the pencil but still hit the number line.

Because the VR environment the person sees uses the eye position, not the laser position,
the lasers can stay focused on one spot while the person still moves their head around.

http://www.youtube.com/watch?v=oKAqvs4KTkk


Image

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

In my previous post I mentioned the number line with the bar on it; this video is roughly what I was talking about. On YouTube the video should be in 3D soon. It's side by side, right side first.

https://www.youtube.com/watch?v=x8bM-yMdNxE

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

Tripod holds pole 1.
Pole 1 swivels on the tripod: up, down, and all around.

Pole 1 holds pole 2, and pole 2 moves forward and backward.
Pole 2 holds pole 3, and pole 3 moves forward and backward.

Pole 3 is attached to the virtual reality helmet;
the virtual reality helmet can move in all directions like pole 1.

Pole 3 goes through two doughnut shapes that are stacked vertically.

The player's virtual reality helmet is in the middle of the two doughnuts.

Poles 1, 2, and 3 create a tripod shape.

There's a PlayStation 3 controller on the tripod, in between pole 1 and the tripod.
The PlayStation 3 controller's left or right stick is touching pole 1.

The PlayStation 3 controller uses the MotionJoy PC drivers to read the PS3 stick motion.

When the person moves their head in the middle of the doughnuts,
pole 3, attached to the VR helmet, is moved forward or backward,
or to a different horizontal part of the doughnut.
This motion is relayed to pole 2,
then to pole 1,
then to the PS3 stick on the tripod.
From the PS3 stick it goes to the MotionJoy drivers,
where it is reflected in the game as the parent input: head tracking.
The child input is the arrow buttons, which move the player forward or backward, and forward is where the player is looking.

This is a crude way of getting the helmet's position relative to the doughnut for head tracking.
In my theory, this is the static pencil in front of the helmet that changes its distance from the pencil to the number line.
The number line in this crude example is the doughnuts.
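On the software side, the linkage boils down to: stick deflection is a roughly linear proxy for head displacement. A sketch, assuming the pad shows up as a generic joystick with axes normalized to [-1, 1]; the gain constant is a made-up calibration value, not something from MotionJoy.

```python
# Map normalized joystick axes to an approximate head offset.
# Hypothetical calibration: full stick travel ~ 40 mm of head motion.

MM_PER_FULL_DEFLECTION = 40.0

def head_offset_mm(stick_x, stick_y):
    """Convert normalized stick axes into a head offset in millimetres."""
    # Clamp first: the head can travel further than the stick (the flaw
    # the post points out below), which pins the axis at +/- 1.0.
    sx = max(-1.0, min(1.0, stick_x))
    sy = max(-1.0, min(1.0, stick_y))
    return (sx * MM_PER_FULL_DEFLECTION, sy * MM_PER_FULL_DEFLECTION)

print(head_offset_mm(0.5, -0.25))   # (20.0, -10.0)
print(head_offset_mm(1.8, 0.0))     # saturates at (40.0, 0.0): motion is lost
```

The saturation in the second call is exactly the over-travel problem: once the head outruns the stick, the reading stops changing.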

I figure if you can jimmy this mechanism together and see it working,
you might get the camera and lasers I talked about before in my theory working too.

See the picture for a diagram of the idea:

Image

This is a proof-of-concept design: the head moves further than the PlayStation 3 stick, so the head will pull on the stick or push it too far as the person looks around in virtual reality. If I knew how to correct this I would detail that here as well.

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

That example with the three poles was rather crude, so I thought up a fancier design.

The theory shows a static pencil that keeps the same distance from the pencil to the eyes but changes the distance from the pencil to the TV.

If an elastic band is cut, it becomes a long string.
One end of the elastic string is attached to the chin; this serves as the pencil in front of the eyes in the theory.
The other end of the elastic string is what shows the distance from the pencil to the TV.

On the person's shoulders sits a neck accessory that looks like a big ring, like the kind you put on donkeys or oxen so they can plow the ground.

This sits on the shoulders so the ring doesn't wobble very much and rests in one spot when the person moves their head.

The other end of the elastic string is attached to the ring sitting on the shoulders.

When the ring is pulled by the elastic string, a key press is made in software to indicate the head moved.

Then you attach multiple rubber bands from the chin to the ring, and when the software reads the key presses it can use if conditions to decide that a given combination of key presses means the head moved in a certain direction.

That's how the neck ring and chin, connected by rubber band strings, can input movement that software reads; using if statements, the software can derive head position for head tracking.
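Those "if conditions" can be sketched directly. Assume each rubber band ends at a tension sensor on the neck ring; comparing which bands are stretched tells you roughly which way the chin, and so the head, moved. The band names and the threshold are invented for illustration.

```python
# Classify head movement from four rubber-band tensions (0 = slack, 1 = taut).
# Hypothetical bands: front, back, left, right of the neck ring.

THRESHOLD = 0.2   # assumed normalized tension above resting

def head_direction(front, back, left, right):
    """Return a coarse head-movement label from the four band tensions."""
    if back - front > THRESHOLD:
        return "forward"      # chin moved away from the back band
    if front - back > THRESHOLD:
        return "backward"
    if right - left > THRESHOLD:
        return "left"         # turning left stretches the right-hand band
    if left - right > THRESHOLD:
        return "right"
    return "centred"

print(head_direction(0.1, 0.6, 0.3, 0.3))   # forward
print(head_direction(0.3, 0.3, 0.3, 0.3))   # centred
```

More bands would give finer directions (up/down, diagonals) with the same pattern of comparisons.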

The chin wears a scuba wetsuit mask so the rubber bands are attached to the wetsuit.

This means the ring needs some heft so the rubber bands don't jiggle it all about, and the ring has sensors that read when a rubber band is pulling. The ring then feeds the rubber band stimuli to software, so the neck ring needs to be connected to the computer too.
So the ring is probably plugged into the Oculus so they share the connection to the computer.

Then the person puts on the wetsuit mask with its dangling rubber bands.
The headset is then put on, with the neck ring still dangling.
Then the neck ring is put on, and the mask's rubber bands clip onto the neck ring.
A bit cumbersome to suit up like that, but it gets you head tracking in one clean design.
:)

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

That rubber band solution was more sophisticated than the three poles, but it was still pretty crude, so here is an advanced solution.

The rubber band solution uses the scuba wetsuit mask to hold one end of each rubber band, and the neck ring to hold the other end.

This creates the difference between the chin end of the rubber band and the end attached to the neck ring.
This uses the theory example where the pencil in front of the eyes is one distance from the eyes but a variable distance from the TV.

The TV in the theory was called the number line, and the number line was said to move from being only on the regular TV screen to circling the person in a doughnut shape.
This is why in the three-poles design the person sat in the middle of the doughnut.
This doughnut is static and does not move, just like the neck ring with the rubber band attached, which stays still on the person's shoulders.

I said that to show that the doughnut shape is the static part, while the distance to the pencil or rubber band on the chin is the dynamic part.

If the doughnut shape is made of lasers, then that is one half of the rubber band solution; the person's head has a laser on it too.

How is the doughnut shape made of lasers?
The 5 mW laser pointer that teachers use to point at projector screens in lectures makes a visible splash where the beam touches the screen, and if you shine a green laser onto a wall there is a visible splash where it hits.
Because there is a splash wherever the laser touches a surface, the splash can mark that a surface is there.

If the person sits on a chair and the chair sends lasers in four directions circling the person's chair, then the doughnut is made, and this serves to show the difference between the head and the doughnut. This is part one of the rubber band solution.

How does the laser on the head work with the doughnut made of laser splashes?

The laser on the head points towards the area of the four laser splashes on the ground; the four splashes stay still while the lasers from the head move.

If the lasers from the head are red and the lasers from the chair are green, then a camera can read the four green laser splashes and also see the red lasers move across the area they mark.

This way the head can have one initial laser position, and when the head turns, each turned pose has its own laser position values, which can be read and matched.

Then the person does some initial calibration to record the initial and turned positions of the head, so that when the red lasers shine around the four green splashes, each red laser position matches a head position: looking forward, looking left or right, up or down.

This way the head position can be used for head tracking.
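That calibration step amounts to: record where the red (head) laser falls, in coordinates framed by the green reference splashes, for a few known poses, then classify live readings by the nearest calibrated sample. A sketch; all the calibration numbers here are invented.

```python
# Nearest-neighbour pose lookup against calibrated laser-spot positions.
# Coordinates are normalized camera-image positions, with the four green
# splashes defining the frame. The table is hypothetical calibration data.

import math

CALIBRATION = {                 # head pose -> red-laser spot (x, y)
    "forward": (0.50, 0.50),
    "left":    (0.20, 0.50),
    "right":   (0.80, 0.50),
    "up":      (0.50, 0.25),
    "down":    (0.50, 0.75),
}

def classify_pose(spot):
    """Return the calibrated head pose nearest the observed red spot."""
    return min(CALIBRATION,
               key=lambda pose: math.dist(spot, CALIBRATION[pose]))

print(classify_pose((0.55, 0.48)))   # forward
print(classify_pose((0.25, 0.52)))   # left
```

A denser calibration table, or interpolation between samples, would turn the discrete labels into a continuous orientation estimate.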

Now for a super advanced solution: the four green lasers can be substituted with a camera that sees markings on the ground, some unique images, and the red lasers work with those four images surrounding the person the same way they worked with the green lasers shining from the chair.

This way maybe augmented reality can use this camera-and-unique-image method, since anything can serve as the four images surrounding the person. The person can then walk around and isn't tied to a chair, and the head position can allow augmented images to be painted into the person's view of the world.

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

This post is about why some people leave the Oculus VR and still feel the effect of playing.

First the technical reason, then the guess, or rather the deductive assumption.

The eyes get head tracking when the part attached to the head stays one distance from the head while being a variable distance from the TV.

I said the person moves his head closer to the TV and the lasers beaming from his head to the TV land on lesser values on the number line; the number line is an image on the TV, and the lasers beam onto the number line from the person's head.

This is the same logic as holding a sliding measuring tape, extending it to one distance, and then shortening that distance; the result is the measuring tape gets shorter.

This is used for head tracking: the thing attached to the head is at a static position on the head but a variable distance from the TV that is showing the number line.

The other way the device attached to the head is used for head tracking is that it moves closer to the TV while the person does not move and the TV does not move.
The lasers still shoot at the TV and the numbers still get smaller and smaller, but not because the person gets closer to the TV, only because the device does.

The person uses this method to view the 3D virtual reality world. When he throws an object, that object moves, but what he throws at does not move and he doesn't move. So this uses the principle of the thing moving while he stays still, not the principle of him moving while the object stays still.

Head tracking lets him hold the thing he threw one distance from his head and then walk to what he tried to hit, and what he sees is the number line effect I described.

What the Oculus VR does is keep the eyes at one stereoscopic position, one focus. You focus at infinity.

Then you walk around, and the effect is similar to throwing an object versus carrying it: in the Oculus VR, the effect you see is that you hold the object one distance in front of you and walk towards what you were trying to hit.

Next I will talk about what may be happening to the people getting sick.

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

I read things like

"
since I played with the Rift two days ago I feel dizzy, I have nauseas, trouble focusing my eyes, I feel exhausted and I have eye strain. I didn't touch the Rift since then, but the effects are not leaving. I am a little scared.
"
I got this quote from somebody on the Oculus VR forums.

Trouble focusing his eyes: the eyes in the Oculus VR are focused at infinity, while in real life a ball being held may be either carried or thrown to a target some distance away. In the Oculus VR it's similar to being forced to carry the object, because the focus is fixed at one point.

It looks like some way of focusing at something other than infinity is necessary for virtual reality not to create this mental trauma.

But why do some people not feel this effect?

Some people can carry things around for hours at a time and never feel a compulsive need to throw things rather than carry them.
But other people have a mental compulsion to try the throw technique as well as the carry technique.

Therefore,
if you force those who compulsively use the throw technique as well as the carry technique to use only the carry technique, they blend the throw technique into the carry technique.
Then, outside VR, when they try the throw technique they use the carry technique instead.
This is unnatural, as the carry technique in the real world is not the throw technique.

So, if people feel queasy in VR, maybe they are compulsively using the throw technique; the Oculus VR forces the focus to infinity and thus blocks it, so they feel the throw technique being forced into the carry technique, feel ill, and need to stop the VR.

To fix this, instead of mangling the throw technique by forcing focus at infinity, the eyes' focus needs to follow the object as it moves towards the thing being thrown at.
Cameras on the eye position help do this, as I described in my theory.

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

I wrote about nausea, and how focusing at infinity and nowhere else was the cause of the problem, and now I see that Sony wants an Oculus clone:
http://www.mtbs3d.com/phpbb/viewtopic.php?f=140&t=18591

I wrote about how I think a VR headset like the Oculus should be made so the focus can be at more than infinity:
http://www.mtbs3d.com/phpbb/viewtopic.php?f=120&t=18468

And the maker of Viiwok wants an Oculus clone to go with the treadmill he's making.

Doesn't Sony know that their headset will cause nausea like the Oculus does if they use the Oculus strategy of focusing only at infinity? It will be interesting to see if they fixed the nausea problem; they delayed the news launch, so I'm guessing the answer is no.
HeliumPhoenix
One Eyed Hopeful
Posts: 4
Joined: Wed Aug 28, 2013 10:50 am
Location: Atlanta GA
Contact:

Re: How to build head tracking into a VR headset like the Oc

Post by HeliumPhoenix »

You still seem to be working under the apprehension that parallax convergence (both eyes pointed toward an object) and focus (eye lens adjustment) are one and the same. Being focused at infinity (well, not infinity, but a large enough distance that the eye's focus is effectively the same as at infinity) reduces eye strain.

Yes, there is a disconnect physiologically and psychologically when parallax convergence and focus don't match up to what we are used to in reality. However, it cannot be solved simply. Adjusting focus distance dynamically to simulate such changes requires tech that simply isn't available at any reasonable price/precision. Adjustable focus lens assemblies require mechanical shifting of the distances as well as alignment, and fluid lenses are expensive and very difficult to control with any speed.

Simply tracking the eye isn't enough. That does not tell you with any precision WHAT that eye is focusing on. One eye may be focused on something in the distance, but the other eye has a near object occluding the view. Focus the eyes differently, and you get even WORSE physiological disturbances. So which is the right focus? You can't tell, short of trying to bounce an IR beam at an oblique angle off the cornea and interpolate based on the baseline 'infinite' focus values. And doing THAT is nigh on impossible with any accuracy or with simple or inexpensive parts.

Focusing at infinity is NOT the problem. It's the disparity between where the brain EXPECTS to focus, and where it is actually focusing. And as I've explained (here and elsewhere) it isn't really something that can be solved mechanically.
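The vergence/accommodation distinction above can be put into numbers. A sketch with assumed values (a 64 mm interpupillary distance and example distances, not figures from the post): vergence angle follows the fixation distance, while a typical HMD's optics hold accommodation near-fixed.

```python
# Vergence angle for a given fixation distance: the angle between the
# two eyes' lines of sight, from simple triangle geometry.

import math

IPD = 0.064   # metres, assumed interpupillary distance

def vergence_deg(fixation_m, ipd=IPD):
    """Angle between the eyes' lines of sight when fixating at a distance."""
    return math.degrees(2 * math.atan((ipd / 2) / fixation_m))

# A virtual object rendered 0.5 m away drives ~7.3 degrees of convergence...
print(round(vergence_deg(0.5), 2))
# ...while the lenses still demand focus at, say, a ~2 m optical distance,
# where vergence would normally be ~1.8 degrees. That gap is the conflict.
print(round(vergence_deg(2.0), 2))
```

The brain expects these two signals to agree, and in a fixed-focus headset they don't, which is the disparity described above.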

Re: How to build head tracking into a VR headset like the Oc

Post by JDuncan »

I was thinking the parallax could be used to let the person focus on a different parallax than only infinity.

So while the game or video may be mainly at infinity, every few minutes the parallax is moved to positive or negative parallax, which lets the person see something other than infinity. And so maybe not feel ill as a result?

This is a workaround for true eye tracking, a temporary solution. Is some parallax difference a good thing?

Re: How to build head tracking into a VR headset like the Oc

Post by HeliumPhoenix »

It isn't a bad idea. The real problem is implementing it in such a way that it helps, rather than makes the problem worse.

Most nausea and 'side-effects' of VR use come primarily from disparities between what our brain is used to doing and the changes VR makes to those (due to its limitations). Parallax convergence vs. focus distance is a perfect example. In RealLife™ we expect close objects (acute parallax convergence) to require our eyes to focus close. However, in a VR headset, the focus doesn't change. Our brain makes our eyes change, then relents when the image goes out of focus; it's this 'unlearning' that allows VR users to get their 'VR legs', where they don't have the side effects (or have them only to a lesser degree). This kind of disparity typically causes eye fatigue and headaches, and sometimes dizziness and disorientation.

Disparities in orientation are similar, though the typical symptoms reverse their prevalence. If you go around a curve at speed in a virtual car, your eyes are telling you to feel the pull of the g-forces, but they aren't there, and your brain expects your inner ear to tell it they are.

Most nausea is caused by visual or motion latency, however. Even relatively imperceptible visual latency, or refresh rates approaching the 'barfomatic' range, can cause nausea in most people. Getting your 'VR legs' is once again the brain adjusting to (learning to deal with) such changes to its inputs, though getting into the 8-10 fps range will almost ALWAYS make people sick if there is any significant motion occurring on screen.

Many forms of optics currently used in VR headsets also cause visual discrepancies our brains have to 'adjust' to: barrel distortion, fisheye distortion, color shifting, pixelation, peripheral field stretching. It all adds up.

And once your brain adjusts, when you remove the headset the brain has to re-adjust to normality again. It takes time for the brain to learn to operate in two modes; getting your 'VR legs' isn't just training the brain to deal with the new rules, but also training it to go back to normal as well. Once it learns to operate in both modes and switch between them appropriately, you no longer have issues staying in VR for extended periods. It can take quite a while, and any changes to your VR setup can force it to re-learn the parts that changed.