![Image](http://tech.yostengineering.com/3-space-sensor/experimental-projects/real-time-mocap-and-vr-in-udk/images/udk_suit_1.jpg)
http://www.youtube.com/watch?feature=pl ... STge5IDxF4
http://forums.epicgames.com/threads/928 ... EI-3-Space
Looks awesome, doesn't it?
![Woot! :woot](./images/smilies/woot.gif)
Namielus wrote:
> I can't see why not.

I'm just guessing here, but isn't this skeleton tracking based on relative angles? Meaning there's no absolute position data coming off the sensors, but the joints are rotated the same as the joints in your actual skeleton. This would mean that one part of the body would always be at ground level, no? I guess a jump could be calculated from accelerometer data; not sure how accurate that would be, but possible I guess... and as I don't know what I'm talking about, I'll stop, haha.
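As a rough illustration of the jump idea: vertical position can in principle be recovered by double-integrating vertical acceleration. This is only a minimal sketch, assuming a 60 Hz sample rate and readings already rotated into the world frame; real accelerometer data drifts badly, so this is only plausible over the fraction of a second a jump lasts:

```python
# Hypothetical sketch: estimate vertical displacement (e.g. a jump) by
# double-integrating accelerometer readings. The sample rate and the
# gravity-aligned input are assumptions, not the actual sensor format.

G = 9.81          # gravity, m/s^2
DT = 1.0 / 60.0   # sample period at an assumed 60 Hz

def integrate_jump(vertical_accel):
    """Integrate world-frame vertical acceleration (m/s^2) into height (m)."""
    velocity = 0.0
    height = 0.0
    for a in vertical_accel:
        velocity += (a - G) * DT   # remove gravity, integrate to velocity
        height += velocity * DT    # integrate velocity to position
    return height
```

Any bias in the readings gets integrated twice, so the error grows quadratically with time, which is why this trick only works for brief events.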
FingerFlinger wrote:
> Was it Chriky who was working on a skeletal-tracked walking thing with the Hydra? I can't find the thread. From this demo, it seems like that is definitely an idea worth exploring.

Well, it's a similar idea, but the solution is fundamentally different. A magnetic skeletal tracker would be comparatively trivial because you could basically just read the sensor coordinates as-is. This inertial solution, however, has to run all the drifty sensors through a skeletal constraint model, probably in the form of a very complex Kalman filter, to get stable coordinates out of the model. Anyway, I can see Chriky's idea working in conjunction with this system to improve performance. Right now I assume this inertial system requires that one foot be on the ground to act as a stable origin point, but if you added magnetic sensors into the mix you could have more than one point acting as a stabilizer.
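The "one foot as origin" idea can be sketched as simple forward kinematics: anchor the grounded foot and chain each bone's world orientation along its length to get positions out of an orientation-only system. The bone list, rest direction, and lengths below are invented; the actual constraint model is surely more involved:

```python
# Sketch of how orientation-only sensors can yield positions: pin one foot
# at the origin and accumulate each bone vector up the chain. Rotation
# matrices and bone lengths are illustrative placeholders.
import numpy as np

def chain_positions(bones, root=np.zeros(3)):
    """bones: list of (world_rotation_matrix, bone_length), foot upward.

    Returns joint positions starting from the grounded root."""
    positions = [root]
    up = np.array([0.0, 0.0, 1.0])   # assumed rest direction of each bone
    for rot, length in bones:
        positions.append(positions[-1] + (rot @ up) * length)
    return positions
```

With two such chains (one per foot), a second grounded point would over-constrain the model and could be used to correct drift, which is essentially the stabilizer idea above.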
FingerFlinger wrote:
> Well, my other project fell through due to my lack of antenna design ability, so I am looking for something new...

You gave up already?
FingerFlinger wrote:
> Well, my other project fell through due to my lack of antenna design ability, so I am looking for something new... I would love to get a magnetic tracker project off the ground, but I've really been thinking that a dead-reckoning system integrating optical flow and IMUs could be the way to go at this point.

I agree, but my gut tells me that optical flow is going to be difficult to integrate and customize. It's a pretty beefy topic that assumes an extensive background in EE, mathematics, and computer vision. I'm not convinced that a "canned" open-source implementation can just be used out of the box like an API.
EDIT: Sorry, getting a bit off topic here.
brantlew wrote:
> You are using some sort of 3-Space API to pull the sensor values. What is available to you through the API? Do they just supply individual sensor orientations, or do they provide sensor positions as well?

The API is a wrapper around the serial port to simplify communication with the sensor. It also allows the sensors to be bound into programs like UDK using its DLLBind. The sensors can currently output the orientation in a variety of formats, or the raw IMU data, plus a few other things like the button states or temperature. With a single sensor, the position can be estimated from the orientation and acceleration data.
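To give a feel for what "a wrapper of the serial port" means, here is a sketch of the kind of framing such an API handles. The 0xF7 start byte and command 0x00 (get tared orientation as a quaternion) follow the pattern of the published 3-Space serial protocol, but verify the exact format against the sensor manual before relying on it:

```python
# Illustrative framing/parsing helpers for a 3-Space-style serial protocol.
# The start byte, command number, and checksum scheme are stated here as
# assumptions; the real API hides all of this behind its DLL.
import struct

def build_command(cmd, payload=b""):
    """Frame a command: start byte, command byte, payload, 8-bit checksum."""
    body = bytes([cmd]) + payload
    checksum = sum(body) % 256
    return bytes([0xF7]) + body + bytes([checksum])

def parse_quaternion(raw):
    """Unpack a 16-byte big-endian four-float orientation response."""
    return struct.unpack(">4f", raw)
```

In use, the wrapper would write `build_command(0x00)` to the port and hand the 16-byte reply to `parse_quaternion`; wireless operation adds addressing on top of the same idea.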
brantlew wrote:
> I'm not familiar with the way UDK works, so does it provide a skeletal model with physical constraints - so that if you rotate the thigh upwards, the shin and foot come with it automatically? What custom code are you supplying - are you just gluing the sensor readings to a UDK skeleton, or are you computing a model of your own?

UDK's animation system provides a lot of things, but all I did was make an animation tree that allowed me to access the bones and set every orientation manually. There is a sensor on the thigh, shin, and foot, so each has its own orientation. The neck and head are a bit different in that they are connected but have a limited range of motion; I applied some of the head's rotation to the neck to make it move more correctly. The spine should interpolate between the chest and hip sensors, but I didn't get that working the way I wanted in time.
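The "apply some of the head's rotation to the neck" idea is naturally expressed as a spherical interpolation between two orientation quaternions. A minimal sketch; the 0.5 blend weight is an invented tuning value, not the one used in the demo:

```python
# Blend two unit quaternions with slerp, e.g. to give the neck a fraction
# of the head's rotation relative to the chest.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: lerp and renormalise
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
```

With chest and head orientations as quaternions, `slerp(chest, head, 0.5)` would rotate the neck halfway toward the head; the same call with varying `t` per vertebra is one way to do the chest-to-hip spine interpolation mentioned above.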
brantlew wrote:
> How many sensors are you using, and what is the data rate for that sensor mesh - as in, how many samples per second per sensor can you read?

There are 17 sensors, and they communicate at an average of 60 samples per second each. If a packet is dropped it's not resent, since it's already stale data. The current throughput can still be improved.
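For a sense of scale, 17 sensors at 60 samples per second adds up quickly. The arithmetic below assumes a 16-byte quaternion payload plus roughly 4 bytes of framing per packet; both numbers are guesses, not the actual wire format:

```python
# Back-of-envelope aggregate throughput for the sensor mesh.
SENSORS = 17
RATE_HZ = 60                 # samples per second per sensor
PACKET_BYTES = 16 + 4        # assumed quaternion payload + assumed framing

bytes_per_sec = SENSORS * RATE_HZ * PACKET_BYTES   # aggregate bytes/second
```

Under these assumptions that is about 20 kB/s, which already exceeds what a single 115200-baud serial link carries (roughly 11.5 kB/s with 8N1 framing), which may be part of why there is room to improve throughput.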
Fredz wrote:
> - did you happen to measure the latency (not update rate) for wired, embedded, and wireless?
> - is the frequency with the Kalman Filter AHRS functionality a solid 260Hz, or is that a best-case figure?
> - does this last mode correctly account for yaw drift?

The latency depends upon the filter mode selected and the communication method used.
cybereality wrote:
> How is this technology going to be used? Will you be releasing a game with controls like this, or is this just for demo purposes?

The technology is already in use, in applications ranging from spine analysis to sticking a sensor in a ball and getting rotation and acceleration data out of it.