head tracking hand tracking ideas

Talk about Head Mounted Displays (HMDs), augmented reality, wearable computing, controller hardware, haptic feedback, motion tracking, and related topics here!
JDuncan
Cross Eyed!
Posts: 130
Joined: Wed Feb 09, 2011 3:30 pm
Location: My Left Hand

head tracking hand tracking ideas

Post by JDuncan »

This will be in three parts to make for easier reading. If you don't understand a part, say which one: section 1, 2, or 3.

Section 1 = what is holography
Section 2 = holographic technology
Section 3 = VR technology

I wrote something in the Oculus forum, but the idea has come along since then, so I am making this new thread to show only the best ideas.

Think of this post as the prelude.

Re: head tracking hand tracking ideas

Post by JDuncan »

Section 1

[Image: basic design of the holographic TV]
http://www.youtube.com/watch?v=Oa7QF-ItjZA
http://www.youtube.com/watch?v=aTctta2OMRc

The picture and two YouTube videos above show the basic design for the holographic TV.

The display and the eye form a line whose two endpoints are the eye point and the display pixel point. The pixel point sends light to the eye point along that straight line.

In real life we see an object because light bounces off it, and that redirected light then reaches the eye, which perceives the object.

So with the display-eye pair of points, the display acts as though light bounced off it and onto the eye, so the eye sees the display the way it would see a lit object.

In real life, though, when a person moves around an object they see its other sides, because light strikes the object from all directions; as the person walks around it, light bounces off whichever part of the object they are looking at.
But with a display, the TV has a diffuser that sends the same 2D image to every point the display sends light to, so there is no 3D and no hologram: the eye receives the same light no matter where it views the display from.

But if the display sent different light to the eye as the eye moved around it, so that the eye saw a different reflection of the object being shown, that would be 3D and a hologram.

So the principle of the 3D I am describing is this: the display sends light along the line to the eye while the eye is at one xy position, and when the eye's xy position changes, the display sends different light, so the eye sees the object in the display from a different side.
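
To make the principle concrete, here is a minimal Unity C# sketch (an illustration only, not the real optics): a tracked eye position drives the rendering camera, so each new eye xy position sees the on-screen object from a different side. trackedEyeXY is a hypothetical input standing in for whatever supplies the eyes' position.

[code]
// Minimal sketch: a tracked eye position drives the rendering camera,
// so moving the head shows the displayed object from a new angle.
// trackedEyeXY is a hypothetical input from an external eye tracker.
using UnityEngine;

public class ViewDependentCamera : MonoBehaviour
{
    public Transform displayObject;      // the object shown "in" the display
    public float viewingDistance = 0.6f; // assumed eye-to-screen distance (meters)
    public Vector2 trackedEyeXY;         // fed each frame by the tracker

    void LateUpdate()
    {
        // Put the camera where the tracked eye is, relative to the object,
        // then aim it at the object so each eye position sees a different side.
        transform.position = displayObject.position
            + new Vector3(trackedEyeXY.x, trackedEyeXY.y, -viewingDistance);
        transform.LookAt(displayObject);
    }
}
[/code]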

How does the display send light to the eye based on the eye's position, so that the eye receives light from the object as seen from its new viewpoint?

...

Re: head tracking hand tracking ideas

Post by JDuncan »

Section 2

...How the display decides what light to send to the eye, based on the eye's position relative to the display, is a technology I have thought out.

I will detail that technology below.

First off, this was designed for the Oculus VR, but I think it's applicable to 3D TV too, which is why I am making this thread. Substitute 3D glasses for the Oculus VR headset and the two are compatible, so both VR and 3D TV can use the technology I describe below.

The eye tracking needs to be precise enough that a change in the eyes' position creates the visual impression of a 3D image on the screen the eyes are looking at.

Hold your index finger in front of your face, about a foot or two from your eyes, and then, holding the finger still, move your head around it so you see the finger from different viewpoints.
That is the accuracy the tracking system needs.

So the system consists of: a camera called camera2; a display for that camera to look at, called monitor2; a tilting mirror; a stationary mirror; a green laser; and a program that redirects the tilting mirror so the laser shines onto a point on the display.

The person looking at the TV sees monitor1, and as they watch monitor1, camera1 captures their eyes.

camera1 sends the picture of the eyes onto monitor2.
camera2 sees the person on monitor2.
The stationary laser shines onto the stationary tilted mirror, and its light is reflected onto the dynamically tilting mirror.
From the dynamically tilting mirror, the laser light is reflected onto monitor2, onto the face of the person shown there.
The program tilts the dynamically tilting mirror to redirect the laser onto a point on the face on monitor2.
Once the program has correctly set the laser onto that point on the face, the laser keeps shining there even when the person's face changes xy position, because the program keeps redirecting the laser onto that exact point.
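
As a software illustration of that loop, here is a hedged Unity C# sketch: each frame the program compares where the laser spot currently lands on monitor2 with where the tracked face point is, and tilts the mirror to close the gap. GetFacePointXY and GetLaserSpotXY are hypothetical placeholders for the vision code that reads camera2's view of monitor2, and the gain would need hand tuning.

[code]
// Hedged sketch of the laser-chasing loop: tilt the mirror so the
// reflected spot converges on the tracked face point on monitor2.
using UnityEngine;

public class MirrorSteering : MonoBehaviour
{
    public Transform tiltingMirror;  // the dynamically tilting mirror
    public float gain = 2.0f;        // proportional gain (hand-tuned)

    void Update()
    {
        Vector2 target = GetFacePointXY();  // tracked point on the face
        Vector2 spot   = GetLaserSpotXY();  // where the laser lands now
        Vector2 error  = target - spot;

        // Tilt around two axes so the reflected beam chases the face point.
        tiltingMirror.Rotate(error.y * gain * Time.deltaTime,   // pitch
                             error.x * gain * Time.deltaTime,   // yaw
                             0f, Space.Self);
    }

    // Hypothetical hooks: in a real build these come from image processing.
    Vector2 GetFacePointXY() { return Vector2.zero; }
    Vector2 GetLaserSpotXY() { return Vector2.zero; }
}
[/code]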

Now the person's xy position is tracked. When they look around the image on the screen, the eyes at one xy position receive one set of light from the display and see one viewpoint of the image, while at a different xy position they receive different light, so the eyes see what's on the display from a different angle: a 3D holographic image.

And that's the basic idea.

Re: head tracking hand tracking ideas

Post by JDuncan »

Section 3

The TV would show a source that has a stereoscopic image for every possible point the eyes could view it from, chosen according to the xy position from which the eyes are looking at the TV.

And if this is for VR, then the yaw of the head (its turn as the person looks left or right) could be seen by camera1 if the person wears some identifiable lights on their head.

For example, a paper with LEDs at its four corner tips, with the plane of the paper held flat on top of the person's head.
Now when the person turns left or right, camera1 sees the LEDs and sends this image to camera2, which maps the LEDs on the paper to the eyes' xy coordinates so the two correspond.

Now if the person turns so their eyes aren't visible to camera1, what remains visible is the LEDs on top of their head, and since the LEDs correspond to the eyes' xy coordinates, camera2 shows the program the LEDs so the program can estimate the eyes' xy coordinates.
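
As a rough sketch of this fallback, assuming the four LED blobs have already been detected in camera1's image (ledPositions) and the LED-center-to-eyes offset was calibrated beforehand (eyeOffset is a made-up parameter):

[code]
// Rough sketch: when the eyes are hidden, guess their xy position and the
// head's yaw from the four LEDs on the paper worn on top of the head.
using UnityEngine;

public class HeadLedEstimator : MonoBehaviour
{
    public Vector2[] ledPositions = new Vector2[4]; // detected LED blobs (image xy)
    public Vector2 eyeOffset;                       // calibrated center-to-eyes offset

    // Yaw estimated from how the LED rectangle's front edge is rotated.
    public float EstimateYawDegrees()
    {
        Vector2 frontEdge = ledPositions[1] - ledPositions[0];
        return Vector2.SignedAngle(Vector2.right, frontEdge);
    }

    // Eye xy guessed as the LED centroid plus the calibrated offset.
    public Vector2 EstimateEyeXY()
    {
        Vector2 center = Vector2.zero;
        foreach (Vector2 p in ledPositions) center += p;
        return center / ledPositions.Length + eyeOffset;
    }
}
[/code]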

Now that you have the xy and yaw, you can add hand tracking, and it works inside the same coordinates the head and eyes use.
The hands use the same paper-and-LED approach: each hand goes through the plane of a paper, which puts an LED paper around each wrist, one paper per hand.
The paper acts as a wrist cuff.

Camera1 sees the hands and the LEDs surrounding them.
Camera2 sees the two hands' LEDs, and the program creates a virtual box around each hand's paper LEDs; this acts as the tracking system once a virtual box surrounds each hand.
The virtual box has xy and yaw coordinates. When the person moves their hands, the program slides each virtual box along with its LED cuff, so the cuff maintains its initial position inside the box.
The virtual boxes' coordinates exist inside the head-tracking coordinates, and when a virtual box moves, its position changes relative to the eyes.
This way the person can hold their hands out to their sides, then touch their nose in VR, and not miss.
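
A minimal sketch of the virtual-box idea, assuming the cuff LEDs have already been detected in world space (wristLeds is a hypothetical input): a box is fitted around one cuff's LEDs, and its center is expressed in the head-tracking frame, so hands and head share one coordinate system.

[code]
// Minimal sketch: fit a box around one wrist cuff's LEDs and express its
// center in the head-tracking frame, so hand and head share coordinates.
using UnityEngine;

public class WristBoxTracker : MonoBehaviour
{
    public Transform headFrame;  // origin of the head-tracking coordinates
    public Vector3[] wristLeds;  // detected cuff LED positions (world space)

    public Vector3 HandPositionInHeadSpace()
    {
        // The "virtual box" is an axis-aligned bounds around the cuff LEDs.
        Bounds box = new Bounds(wristLeds[0], Vector3.zero);
        foreach (Vector3 p in wristLeds) box.Encapsulate(p);

        // Reporting the box center relative to the head frame is what lets
        // the person touch their nose in VR and not miss.
        return headFrame.InverseTransformPoint(box.center);
    }
}
[/code]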

For haptic touch, the hands wear gloves, and the way the hand moves (up, down, left, right) changes how the fingers move.
So if the fingers have an initial position, they can use the hand's coordinates to plot changes in the fingers' coordinates; this way, if the VR world has items to touch, the person can touch them as they see them and feel them through the haptic gloves.

So you can see that without holography, VR positioning and haptic touch are not possible. I'll admit maybe they are; maybe some genius can figure out how, but I don't see how.

Re: head tracking hand tracking ideas

Post by JDuncan »

Mods, I put this in the wrong forum; I thought this was the research forum. Can you move this to the VR and augmented reality research forum, please?

Re: head tracking hand tracking ideas

Post by JDuncan »

[Image: wheel rig holding a man with poles attached to his arms and legs]

A wheel holds a man at its center using a harness attached to him.

The man's arms and legs have poles attached to them that act like oars going into and out of the water as they move a boat.
The poles have sensors that relay the man's position to the VR program, so his arms and legs move in VR as they do in the wheel.
Because the poles are held in the wheel by holes, they can move with the full freedom of movement the arms and legs have.

The person in the wheel must be able to crouch and roll as he does in real life, or crawl on the VR ground.
To do this, the wheel needs to use the poles so that they bear weight from the arms and legs.
So at one pole position the legs move as if in the air, and at another the legs touch the ground, so the pole bears weight.
The pole position is judged by leg extension: if the leg is extended, the pole bears weight; if the leg is curled, the pole moves freely.
This way, with the pole bearing weight, the person can have the sensation of walking in VR space.
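
A toy sketch of that rule, with a made-up normalized extension reading from a pole sensor (0 = fully curled, 1 = fully extended) and a guessed threshold:

[code]
// Toy sketch: leg extension decides whether a pole locks and bears weight,
// simulating the foot striking ground. Sensor and threshold are assumptions.
using UnityEngine;

public class PoleWeightLogic : MonoBehaviour
{
    public float legExtension;            // 0..1 from a pole-mounted sensor
    public float bearingThreshold = 0.8f; // guessed cut-off for "extended"

    // True when the pole should lock and carry the leg's weight.
    public bool PoleBearsWeight()
    {
        return legExtension >= bearingThreshold;
    }
}
[/code]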

And that's the basic idea, applied to the arms too: if the person moves their body to crawl on the ground in VR space, the wheel and poles move so the person is physically crawling in the wheel.

This idea includes weight distribution and weight bearing, so the person could possibly feel the force of gravity in VR as he or she walks around.

Re: head tracking hand tracking ideas

Post by JDuncan »

This post describes a software (SW) version of the tilting-mirror mechanism, which would allow for cheaper holography and augmented reality.

As described so far, the system uses a camera reading a screen to tilt the mirror so the laser strikes the tracked part of the monitor.
Built mechanically, this would be a large box that takes up a lot of room and is very expensive.

So I envision a virtual tilting-mirror design, which I describe below.

Monitor1 and camera1 are in the real world, but monitor2, camera2, and the other parts of the design are virtual.

The effect is that the laser is still guided by the program to hit the tracked part of the monitor, which yields the changing xyz coordinates of the tracked thing, but the laser exists in the video game world.

So the real-world picture is integrated into the video game world by beaming it onto a video-game-world TV. The program then reads the real-world picture shown on the virtual monitor, sees where the virtual laser is pointing, and redirects the laser onto the tracked part of the monitor. This way the program finds the changing xyz values of the person's tracked eyes without any physical mirror hardware (HW). The whole design then needs only one real monitor for the person to look at and one real physical camera watching the person look at it.

So I would need a way to broadcast real pictures in real time to a video-game TV, then have a program look at that TV and adjust the virtual tilting mirror to find the xyz values necessary for the holographic effect I described earlier.
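
One plausible way to do the real-time broadcast part in Unity is a WebCamTexture on a quad, which turns the quad into the virtual monitor2; the tracking program would then read the texture's pixels each frame. A hedged sketch, with the tracking itself left as a comment:

[code]
// Hedged sketch: stream the real camera (camera1) onto an in-game quad,
// making the quad the virtual monitor2 that the program can read.
// Attach to a quad that has a Renderer.
using UnityEngine;

public class VirtualMonitor2 : MonoBehaviour
{
    WebCamTexture feed;

    void Start()
    {
        // Use the first physical camera as camera1's feed.
        feed = new WebCamTexture();
        GetComponent<Renderer>().material.mainTexture = feed;
        feed.Play();
    }

    void Update()
    {
        // A tracking routine would read feed.GetPixels32() here and move a
        // virtual "laser" marker to the tracked face point on the quad.
    }
}
[/code]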

Monitor1 shows the stereoscopic eyes the xy coordinate of the thing they look at on the screen: an orange or a dog, etc.

Camera1 sees the eyes and records them so that the eyes' xy coordinates can be shown on monitor2.

Monitor2 shows camera2 the eyes' xy coordinates, and those coordinates change as the person looks at monitor1 from different physical perspectives.

Re: head tracking hand tracking ideas

Post by JDuncan »

This post shows the technology for augmented reality.

As camera2 sees the eyes' xy coordinates, the system shows a stereoscopic image based on those coordinates, so when the eyes move, monitor1 shows the eyes a new stereoscopic image for each new xy coordinate they reach.

This is the basic idea with a camera that records the person but is not on the person (camera1), and a camera watching camera1's video that is also not on the person's body (camera2). But what if both cameras are on the body of the person being recorded by camera1? That is what I describe below; it would work for robot navigation and augmented reality.

Monitor1 must show the stereoscopic eyes an image, and this image must have an xy coordinate held in a coordinate system. If monitor1 shows a face to the stereoscopic eyes, the face on monitor1 has an xy coordinate that is one piece of a greater whole of xy coordinates.

Remember that light must bounce off the image on monitor1, and the light that bounced off the object in monitor1 is then reflected onto the stereoscopic eyes. So if light can bounce onto the object in monitor1, other xy coordinates exist besides the one the eyes are looking at. Therefore the eyes look at an xy coordinate that is one coordinate within a larger system of xy coordinates.

So a camera must capture the coordinates (plural), then find the coordinate (singular) to show to the stereoscopic eyes.

Now camera2 sees the eyes, and the eyes looking at the image on monitor1 become the initial xy coordinate on monitor2. The eyes and the camera are on the same body, so the xy coordinates the eyes move through on monitor2 must be the same coordinates the eyes use to find the thing being looked at on monitor1. This works because the eyes have a position within the coordinate system of the thing they are looking at. This way, when the eyes see the image on monitor1, the eyes also have a coordinate in the same system as the thing seen on monitor1.

So now, as the eyes see the image on monitor1 (a single coordinate), the eyes are themselves a coordinate, tracked by camera2.

Now when the eyes look at the image coordinate on monitor1, the eyes are also a coordinate on monitor2 as seen by camera2. So if the coordinate on monitor1 is augmented reality, then when the eyes look at it and the eyes' own coordinate changes, the program shows the image on monitor1 from the new perspective from which the eyes view it.
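
A small sketch of that shared-frame idea, under the assumption that the tracker supplies the viewer's coordinate (viewerCoordinate is hypothetical): because the augmented object and the viewer occupy one coordinate system, moving the viewer re-renders the object from the new perspective while the object itself stays put.

[code]
// Small sketch: viewer and augmented object share one coordinate system,
// so a change in the eyes' coordinate shows the object from a new angle.
using UnityEngine;

public class SharedFrameViewer : MonoBehaviour
{
    public Transform augmentedObject; // fixed coordinate in the shared system
    public Vector3 viewerCoordinate;  // the eyes' coordinate, from tracking

    void LateUpdate()
    {
        // The camera occupies the viewer's coordinate and looks at the
        // object; since both live in one system, the object stays stable
        // instead of bouncing around as the person moves.
        transform.position = viewerCoordinate;
        transform.LookAt(augmentedObject);
    }
}
[/code]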

Now the important question is: are the coordinates accurate enough that the augmented image doesn't bounce around as the person views it from different perspectives? The y coordinate would relate to the person walking or standing, and the x coordinates would be mapped from that y coordinate, so each time the person walks or sits the y axis stays the same, and the x axis is derived from the y axis.

And for robotics, I think the image on monitor1 could have a skeleton wireframe fitted around it, and this skeleton could carry image-recognition traits; then when the program sees the skeleton, it decides what to do, or it copies actions onto the skeleton, using mimicry along the lines of the Baxter robot system.

So this post shows how to use the holographic system I described for augmented reality.

Re: head tracking hand tracking ideas

Post by JDuncan »

I posted this at GameDev to ask a few questions, but thought I would post it here too:
http://www.gamedev.net/topic/645585-cha ... -tracking/

"
What I have to say here is an idea for tracking in 3D space for use with the Oculus VR.

Bear in mind that I'm learning C# and Unity right now, and in a year's time I should be able to try things to do what I want. So this is really just a few questions to more experienced developers about what is possible.

I have thought through what I am trying to do and will describe it now.

A cube sits below a mirror; the mirror is above the cube and faces it, so the mirror faces downwards.

A light source like a laser pen shines upwards; the laser pen is beside the cube and below the mirror, and the mirror receives the light from the laser pen and reflects it onto the cube.

It looks like this:

See picture A.jpg
http://www.imagebam.com/image/f655ea265986546

Now that the laser is shining onto the cube: the cube also has a mirror, so the laser beam is redirected onto a different surface, and this is where it gets tricky.

See picture B:
http://www.imagebam.com/image/34b9e6265986548

The surface the cube reflects the laser light onto is called "destination".

"Destination" has a moving dot that is chased by the laser light from the cube, so that the laser light from the cube sits on top of the dot on the "destination".

See picture C to see the dot:
http://www.imagebam.com/image/9496e7265986549

So in pictures A, B, and C you see the overall mechanism I want to create.

Now the tricky thing is what happens when the cube redirects the laser beam light.

In order for the cube to redirect the laser light onto the "destination", the cube must have a surface that moves in xy positions, so the moving surface must sit on top of the cube.

See picture D:
http://www.imagebam.com/image/cc9086265986551

Now, as the moving surface redirects the laser light onto the "destination", the moving surface has changing xy coordinates.

Now, what I want to know is: is it possible to create this moving surface so that it chases the dot on the "destination", and as it chases the dot it has accurate xy coordinates that change every time it moves?

That is the basic design I have in mind.

Now the "destination" has a dot that the moving surface chases with the laser beam it is reflecting onto the "destination".

That moving dot on the "destination" is a facial point being tracked with a facial-recognition tracking point.

So the problem now is to video a face, apply facial tracking points to it, feed the face with the points on it to the "destination", and then have the moving surface send the laser light onto whichever facial tracking point I choose.

And as the facial point moves, the moving surface sends light onto that point and its xy coordinates change, so the face's movement produces xy coordinate data in the moving surface as it sends light from the mirror onto the "destination".

I was thinking of using Unity and C# to get the facial points, then somehow getting a Unity movie texture to show the face with the points on it in a game environment, then setting up the cube, shining the laser onto the movie texture, and somehow getting xy coordinates from this setup.

But I'm not sure the movie texture can be used like this. Since I am learning C# and Unity and will put a lot of time and effort into this, I thought I would ask whether what I want to do is possible before I put too much effort into the project.
"
zalo
Certif-Eyed!
Posts: 661
Joined: Sun Mar 25, 2012 12:33 pm

Re: head tracking hand tracking ideas

Post by zalo »

I recently saw a tracking system very similar to the one described here at SIGGRAPH (but they didn't use it for head tracking).

[youtube-hd]http://m.youtube.com/watch?v=9Q_lcFZOgVo[/youtube-hd]

[youtube-hd]http://youtube.com/watch?v=ac0Q5Bp2BSk[/youtube-hd]

They used a pan-tilt "Saccade Mirror" galvanometer tracking system (1 kHz refresh rate) to aim a non-contact laser vibrometer at moving objects and capture their haptic sensation from far away.

If I were you, I'd dig into their system to figure out just what manner of pan-tilt saccade-mirror galvanometer tracker they are using.

Re: head tracking hand tracking ideas

Post by JDuncan »

Hi zalo,

It sure looks like what I described.

This is from their website:

"
Summary
Broadcasts of sporting events (e.g. the FIFA World Cup, the Olympic Games, etc.) are quite popular, so high-quality, powerful video is in high demand. However, it is often hard for camera operators to keep their camera's direction trained on a dynamic object such as a particular player or the ball. In such cases, the shootable methods have been limited to either moving the camera's gaze slowly with a wide angle of view, or controlling the gaze not accurately but based on prediction, adopting whatever parts happen to be shot well by chance. Super-slow-motion and close-up videos of a remarkable player or the ball would be especially valuable, but camera operators have not been able to capture them.

To solve this issue, we developed the "1ms Auto Pan-Tilt" technology. This technology can automatically control the camera's pan-tilt angles to keep an object always at the center of the field of view, just as "autofocus" keeps an object in focus. Even a high-speed object like a bouncing ping-pong ball in play can be kept at the center, thanks to a high-speed optical gaze controller, the Saccade Mirror, and 1000-fps high-speed vision. The Saccade Mirror controls the camera's gazing direction not by moving the camera itself but by rotating small two-axis galvanometer mirrors. It controls the gaze over 60 degrees, the widest angle, for both pan and tilt, and steering the gaze by 40 degrees takes only 3.5 ms. The newest prototype system achieves Full HD image quality for an actual broadcasting service.

A photograph of the Saccade Mirror is shown in Fig. 1. An image sequence from a 1ms Auto Pan-Tilt movie of a ping-pong game is shown in Fig. 2. The movie was captured by a Full HD high-speed camera at 500 fps. In the figure, the ball can always be seen at the center of each image. A 1ms Auto Pan-Tilt movie is also shown as a video at the bottom of this page. Here we envision the system for broadcasting sports games, but we also expect it to record the detailed dynamics of a flying bird, an insect, a car, an aircraft, and so on.

"

http://www.k2.t.u-tokyo.ac.jp/mvf/SaccadeMirrorFullHD/

I like that, that's cool.