Augmented + head tracking reality using camera and sensors

Discussion of tools and products that add VR physicality. Samples include VR treadmills, special hand controllers, gesture technology and more.
JDuncan
Cross Eyed!
Posts: 130
Joined: Wed Feb 09, 2011 3:30 pm
Location: My Left Hand
Contact:

Augmented + head tracking reality using camera and sensors

Post by JDuncan »

There is one camera.
There are six sensors the camera can detect:
three sensors on the person, three sensors on the environment.

One of the sensors on the person is on the chest bone, which stays still when the person moves.

Two sensors are on the head. I don't know if the head is the best place, so some calibration would have to be done to know for sure, but it's a spot on the body that moves, unlike the chest bone, which stays still.

These sensors on the person are used for triangulation.

The sensors on the body relate to the sensors in the environment.
If the environment has a flat surface like a table, the chest sensor appears to the environment sensors as an axis that the other two sensors tilt around, like a seesaw.

Hold out your hand in front of you so you see the side of your hand, with your thumb facing you and your pinky facing outwards.
Now swing your elbow left and right under your hand so it swings like a pendulum.
While your elbow is swinging, hold your hand still, so that if your hand is lined up against a straight line in the background, it stays on the line.
This is the basic idea.

The line in the background is the environment sensors.
The still hand is the chest sensor; the swinging elbow is the sensors on the head or arms.

e.g. While the person is still, they see a virtual 3D triangle on a table surface.
As they move, the triangle stays on the surface of the table.
The triangle stays flat on the table and doesn't move because, like the hand held still while the elbow swings, the triangle is corrected in software to appear still while the person moves.
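That software correction can be sketched in code. This is only a minimal sketch under an assumption the post doesn't spell out — that the camera already reports 3D positions for the three environment sensors each frame — and the function names (`table_frame`, `pin_to_table`) are made up for illustration:

```python
import numpy as np

def table_frame(p0, p1, p2):
    """Build an orthonormal frame (origin + 3x3 rotation) from the three
    environment sensors as seen in camera coordinates."""
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)          # table surface normal
    y = np.cross(n, x)
    return p0, np.column_stack([x, y, n])

def pin_to_table(pt_table, markers):
    """Re-express a point that is fixed in table coordinates into camera
    coordinates, so it stays glued to the table as the camera moves."""
    origin, R = table_frame(*markers)
    return origin + R @ pt_table

# the triangle vertex is defined once, in table coordinates
vertex_table = np.array([0.1, 0.2, 0.0])

# frame 1: the camera sees the three environment sensors here
m1 = [np.array([0.0, 0.0, 1.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
# frame 2: the person (camera) moved, so the sensors appear shifted
m2 = [p + np.array([0.3, -0.1, 0.2]) for p in m1]

v1 = pin_to_table(vertex_table, m1)
v2 = pin_to_table(vertex_table, m2)
# the vertex moves with the markers, so on screen it appears still on the table
```

The vertex is never stored in camera coordinates, only in table coordinates, which is what makes it appear motionless while the head moves.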

Re: Augmented + head tracking reality using camera and sensors

Post by JDuncan »

After reading some posts in another of the mtbs3d forums about head tracking, I got to thinking about how this system could be used for head tracking, not just augmented reality.

The idea is that of a tripod's legs. As the tripod is set on an uneven surface, the legs are lengthened or shortened.

The amount the legs are lengthened or shortened is what allows for head tracking.

The sensors in the environment are what the legs of the tripod sit on, and the legs of the tripod are the sensors on the body that move.
The camera on top of the tripod is the axis sensor on the body; the camera sees an object on top of the tripod in augmented or virtual reality.

To see what I mean;
- Use your two hands: one hand has three fingers touching the table surface, like the three legs of the tripod.
- The other hand puts one finger on the back of the hand that has three fingers on the table.
- Now as the hand with three fingers on the table moves, the hand with one finger on its back stays still.
- The hand with one finger on top is what allows the augmented reality image to stay still, and the legs that move each have a positional value; as the legs move, this translates to positional head tracking.
- Most head tracking systems have lights on the headset visor and none on the body or environment, so it's like a hand-held camera versus a tripod-mounted camera.
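The leg-length idea maps directly onto trilateration: if you know the three floor anchor points (the tripod feet) and the distance from the headset to each one, you can solve for the head's position. A sketch of that math, assuming the anchors lie on the floor plane and the head is above it; the setup values are invented for the example:

```python
import numpy as np

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Recover a 3D position from its distances to three known anchors
    (the 'tripod feet'), taking the solution above the anchor plane."""
    ex = (p2 - p1) / np.linalg.norm(p2 - p1)
    i = ex @ (p3 - p1)
    ey = (p3 - p1) - i * ex
    ey = ey / np.linalg.norm(ey)
    ez = np.cross(ex, ey)
    d = np.linalg.norm(p2 - p1)
    j = ey @ (p3 - p1)
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + i**2 + j**2) / (2 * j) - (i / j) * x
    z = np.sqrt(max(r1**2 - x**2 - y**2, 0.0))
    return p1 + x * ex + y * ey + z * ez

# three "tripod feet" on the floor and a head 1.6 m above it
anchors = [np.array([0.0, 0.0, 0.0]),
           np.array([2.0, 0.0, 0.0]),
           np.array([0.0, 2.0, 0.0])]
head = np.array([0.5, 0.5, 1.6])

# the measured "leg lengths": headset-to-anchor distances
dists = [np.linalg.norm(head - a) for a in anchors]
recovered = trilaterate(*anchors, *dists)
```

As the "legs" lengthen and shorten, re-running `trilaterate` each frame gives positional head tracking.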

Re: Augmented + head tracking reality using camera and sensors

Post by JDuncan »

Here's a picture to see what I'm talking about.

Re: Augmented + head tracking reality using camera and sensors

Post by JDuncan »

Using that picture as a template, here is how a VR controller can use it.

A1 = -2 = head turning in a 2D circle.
A2 = the 0 between -1 and -2 = the head looking vertically up or down.
A3 = -1 = forward or backward movement.

A3 is the only thing on the hand-held controller that controls movement; the rest of the player's movement control comes from A1 and A2.

Since the player needs buttons to trigger actions in the game, the controller has buttons that control the player's tools, like the PlayStation circle, square, triangle and X buttons.

So the controller looks like a PlayStation controller with only one stick, the A3 control stick, plus the circle, square, triangle and X buttons.

Then as the person looks left or right this is from the head tracking A1.
Then as the person looks up or down this is from the head tracking A2.

Now if there's hand tracking, there's no PlayStation controller to hold onto, so A3 has to come from the 1 on the tripod, which is the feet on the floor.
So forward and backward movement comes from the foot moving forwards or backwards.

Now if there's something like the Omni treadmill, the feet will be walking, so the feet moving forward or backward on the treadmill control A3.

Now what if the head wants to move sideways? Then A3 has to control sideways movement too.

Now if the player moves A3 side to side while looking forward, A3 and A1 are used together; then the buttons on the controller operate the player's tools in the game, and you have the equivalent of a PlayStation controller in VR.
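The A1 + A3 combination described here is the standard "head-relative movement" mapping. A minimal sketch, using the post's A1/A2/A3 naming; `move_vector` and its parameters are invented for illustration:

```python
import math

def move_vector(yaw_deg, stick_forward, stick_strafe, speed=1.0):
    """Combine head yaw (A1, from head tracking) with the single control
    stick (A3) into a 2D world-space velocity, so 'forward' always means
    the direction the head is looking."""
    yaw = math.radians(yaw_deg)
    fx, fy = math.sin(yaw), math.cos(yaw)   # facing direction in the plane
    rx, ry = fy, -fx                        # right-hand (strafe) direction
    vx = speed * (stick_forward * fx + stick_strafe * rx)
    vy = speed * (stick_forward * fy + stick_strafe * ry)
    return vx, vy
```

With yaw 0 the player faces +y, so pushing the stick forward moves along +y; turn the head 90 degrees and the same stick push moves along +x, while pushing the stick sideways strafes without turning.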

Re: Augmented + head tracking reality using camera and sensors

Post by JDuncan »

After reading on the OculusVR board how eye tracking is being looked at for the Oculus VR, I thought about how that would be done.
The eyes in the VR world need to emulate the eyes in the real world, so when the person looks up in the real world, the eyes of the VR head move up.

In my diagram about head tracking I showed a tripod.
The tripod has three parts; the legs and the spot the legs go and the camera.

Thing 1) In stereo vision, the eyes have a sweet spot where the image is 3D; this is the camera on the tripod.
Thing 2) In stereo vision, the eyes move; this is the moving legs of the tripod.
Thing 3) On the VR headset, the eyes look at a spot on the screen, and the screen shows which spot the eyes are looking at.
The screen showing where the eyes are looking is what the legs of the tripod touch; that is, the monitor is the floor the tripod's feet stand on.

And that is what lets the eyes be tracked in VR, those three things.

How the three things fit together;
Now to get the eyes into VR, you have an overlay on the monitor that shows the sweet spot the eyes are looking at and the peripheral vision.
So when the eyes move over the screen, the sweet spot moves, surrounded on either side by the peripheral vision.
The sweet spot and peripheral vision form an image that's overlaid on the monitor, and this overlay is then drawn into the VR head as the eyes moving.

Now, as I showed in my head tracking thread, head tracking can capture the turning and the up-and-down motion, but not forward movement.
To get forward movement you need a joystick like the PS controller; the forward motion is like the car's wheels driving and the head is like the car's steering wheel.
The controller can move the player forward like car wheels, or sideways while the person looks forward.
So to get this without hand controls, maybe a tongue control would work for paralyzed people?
The tongue controls the wheels of the car and the side-to-side motion instead of the controller stick?
What I'm getting at is that I think the eyes can work fine for head-tracking-type control, but not forward or side-to-side control.
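Mapping the tracked gaze point on the screen back into avatar eye rotation can be sketched with basic trigonometry. This assumes a flat screen with a known horizontal field of view; the function name and default values are invented for the example:

```python
import math

def gaze_to_eye_angles(gaze_x, gaze_y, screen_w, screen_h, fov_h_deg=90.0):
    """Convert a tracked gaze point (in pixels) into yaw/pitch angles for
    the avatar's eyes, assuming the screen subtends fov_h_deg horizontally."""
    # eye-to-screen distance, expressed in pixel units
    d = (screen_w / 2) / math.tan(math.radians(fov_h_deg) / 2)
    dx = gaze_x - screen_w / 2
    dy = screen_h / 2 - gaze_y      # pixel y grows downward
    yaw = math.degrees(math.atan2(dx, d))
    pitch = math.degrees(math.atan2(dy, d))
    return yaw, pitch
```

Looking at the screen centre gives (0, 0); looking at the right edge of a 90-degree screen gives a 45-degree eye yaw. The same angles can also position the "sweet spot" overlay described above.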

Re: Augmented + head tracking reality using camera and sensors

Post by JDuncan »

[IMG ALT=""]http://www.avsforum.com/content/type/61 ... eight/1000[/IMG]

Previously I had an idea to tie a Gametrak to the neck and then link the Gametrak strings to the VR head visor, so when the head visor moved, the Gametrak strings would move too, which would move the controller sticks the strings are tied to.

http://www.youtube.com/watch?v=nObCJFLvqrQ

But now, since I have this idea to use the environment to help track head movement, as well as a sensor on the chest, I can describe how a Gametrak-type controller could be made for VR head tracking.

- The person wearing the VR headset is also wearing a checkered green and black vest.

- This person wearing the checkered vest has three red lasers in front of him, pointed at the vest.
The center laser hits a check on the vest; this is the starting point that shows the person is standing still.
The two lasers on either side of the center laser don't hit the vest, but are there so that the man can move his torso and still have at least one laser hit the vest at all times.

- There is a camera in front of the person; the camera sees the vest and the VR headset, and the VR headset has colored dots on it that the camera can see.

- When the person moves the torso, the check on the vest that the laser hits changes too.

- Because the laser hits the vest, the checks on the vest have number values.
So when the person moves so that the laser that was hitting check A now hits check B, where is check A now?
Check A is a measured distance from check B.

- Now suppose the dots on the head visor and the checks on the vest are all put onto a grid in a computer program.
Then when the vest is still and the person moves only the head, the grid sees the dots on the head move.

When the person's head moved, the dot on the visor that was at grid number 20 is gone, and a dot is now at grid number 50.
This means the dot that was at grid 20 moved a measured amount of space.

- Then if both the vest and the head move, there is a starting position that links the dots on the head to the checks on the vest, and this is put into a grid so they can be logically placed when the person moves.

Now the vest has a place on the grid at all times, and the head movement links to the vest checks, so the head movement is tracked as if the vest were still.

- In effect, the Gametrak string holds the checks on the vest in a grid constantly, so the vest is like a spring tied to the camera.
You bend the spring, but it goes back to its original form after you let go, so the vest and camera act like they don't move but are tied together.
Then when the head moves, this moves the string on the Gametrak, and the dots on the head visor are what move the Gametrak string.

- If the person turns in a circle, so he doesn't have to sit down all the time but can stand in an omni VR treadmill, then the vest has a pattern on it the camera can see, so it knows when the person's torso has turned; the dots on the headset then link to the new position of the torso, and when the head moves it changes its original position on the grid. So basically, color the vest and headset so that when the person moves, the grid system still works.

- So you need the colored vest, the lasers hitting the vest, and then a program to tie the vest checks to the dots on the head visor on a grid like I described.
5 mW red lasers are cheap, vests are cheap and replaceable, and the program looks like something a Carmack-type programmer could make easily. Then you could look at letting the person turn 360 degrees after you get the simple version going.
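The grid bookkeeping in the program part can be sketched very simply: number the cells row by row, and when a visor dot leaves one cell and appears in another (like grid 20 to grid 50 above), convert the two cell numbers into a physical displacement. The grid width and cell size here are invented for the example:

```python
GRID_COLS = 10      # assumed: grid cells per row
CELL_MM = 25.0      # assumed: physical size of one grid cell, in mm

def cell_center(index):
    """Centre of a grid cell, numbering cells row by row from 0."""
    row, col = divmod(index, GRID_COLS)
    return (col + 0.5) * CELL_MM, (row + 0.5) * CELL_MM

def displacement(old_index, new_index):
    """How far a visor dot moved when it vanished from one cell and
    reappeared in another, e.g. grid 20 -> grid 50 in the post above."""
    x0, y0 = cell_center(old_index)
    x1, y1 = cell_center(new_index)
    return x1 - x0, y1 - y0
```

With a 10-column grid, moving from cell 20 to cell 50 is three rows straight down, i.e. 75 mm at 25 mm per cell; doing the same bookkeeping for the vest checks and subtracting the two gives head motion relative to the torso.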