Full Body Motion VR Ideas
Posted: Thu Aug 09, 2012 5:48 am
Hi guys, I'm new to the forum. I've been lurking for a bit, just trying to soak up all the info I can. VR is a topic that I've been thinking about on a number of levels for quite a while now, and although I don't have the technical skills, I'd like to be able to contribute something worthwhile to the community and the technology nonetheless. I'm also putting this out there because ideally, I'd like someone else to implement the idea to see how well it works. If there are no takers, then I'll do what I can myself after I get to grips with programming/scripting in general.
Aside from a wide FOV and in-game POV meshing with your own POV, one of the big draws of the VR idea is the ability to use your body to interact with the world - specifically as it relates to moving around it.
There have been a number of solutions for this difficult problem, ranging from exotic hardware like roller skates with sensors, to ball bearing treadmills, to even more exotic hardware like 'holodecks' - specialized rooms that allow for free action within the room. Some have taken VR outdoors, onto large open areas like basketball courts.
Being a VR geek, I think all that is very cool. But being a pragmatist, I can't see any of that stuff being realistically workable in a mass-market way - appealing enough to developers and audiences that they'll want to develop for it and spend money on that equipment and gear.
I mean... irrespective of how accurate a sensation of movement you can get with the tech - be it a roller ball with some sort of movement-limiting exoskeleton - I can't see this technology being something that will appeal to enough people to get the world excited about it - not like the Oculus Rift is doing.
To a large extent, I think the problem is like HMDs before the Oculus Rift - we're implicitly assuming a certain element of full body motion control that isn't necessary, and it's causing us to look down the wrong path. That implicit assumption is simply this: that you have to push your legs forward.
Drop that assumption - and suddenly EVERYTHING becomes easy. What's the solution to walking forwards? Walk on the spot. Running? Run on the spot. What about colliding into things? Well, you're jogging and walking on the spot.
While on one hand it's not quite as immersive as actual movement, on the other hand it still engages our sense of proprioception in a very big way. That's pretty much what VR is - a synchrony of sensory elements that enables the digital to replicate believable sensory experiences... and that's exactly what VR HMDs do - they synchronize the proprioception of the head and neck with the movement of the camera.
The big upside to this method is that you suddenly transform any game involving walking/running around into a low-to-moderate level physical workout, reversing a multi-decade, crippling health trend across the developed world - where our excess(ively good) entertainment is encouraging sedentary lifestyles at a frighteningly unhealthy pace.
And you do so in a way that is easy to get into and get out of - requiring very little in the way of specialized equipment - the technology to do so is already available at a consumer level (HMDs w/ head tracking + kinect motion tracking) - it simply needs the appropriate software to allow this to happen.
To be fair... what I'm saying may not be terribly revelatory - in some ways, it's pretty obvious after all - with at least one guy doing something like it for Skyrim (although the video has been taken down for some reason). But humour me if you will... just stand up for a moment, close your eyes, and walk and jog on the spot - imagining that you're moving forward in space.
It's not that bad is it? As long as there is sufficient synchrony between the movement of your limbs and the movement of the view port, it could be very immersive, and very practical.
So the details of the proposed solution are simple enough.
Get the Kinect, track the leg joints and their relationship: the faster and higher the knees move up, the faster the player moves in game.
Functionally, this would take the form of a piece of software that translates that movement into gamepad-style input - the faster you pump your legs, the greater the magnitude of the movement vector.
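As a rough sketch of how that translation might work (all names and thresholds here are my own assumptions, not from any real Kinect SDK - just a stand-in for whatever the skeleton tracker actually reports):

```python
# Hypothetical sketch: translate on-the-spot stepping into a forward
# movement magnitude, as a virtual analogue-stick axis (0.0 to 1.0).
# Knee lift values are assumed to come from a skeleton tracker each frame.

class StepSpeedEstimator:
    def __init__(self, lift_threshold=0.15, max_cadence=3.0):
        self.lift_threshold = lift_threshold  # metres above resting knee height
        self.max_cadence = max_cadence        # steps/sec that maps to full speed
        self.step_times = []                  # timestamps of recent knee lifts
        self.was_lifted = False

    def update(self, knee_lift, timestamp):
        """Feed the current knee lift (metres) each frame; returns speed 0..1."""
        lifted = knee_lift > self.lift_threshold
        if lifted and not self.was_lifted:    # rising edge = one step
            self.step_times.append(timestamp)
        self.was_lifted = lifted
        # keep only the last second of steps and map cadence to speed
        self.step_times = [t for t in self.step_times if timestamp - t < 1.0]
        cadence = len(self.step_times)        # steps in the last second
        return min(cadence / self.max_cadence, 1.0)
```

The important design point is that it's cadence (how fast you pump your legs), not leg position, that drives the speed - which is exactly the gamepad-style mapping described above.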
But a single vector of movement isn't sufficient - free motion requires allowing the player to move in any direction at any time. Games achieve this by allowing you to strafe and back-pedal.
The key to this working then is a straightforward intuitive design of how to translate 'on the spot' movement into strafing.
Leaning is... not desirable - it doesn't feel natural, and it can be ergonomically disastrous given enough time. Neither is using your arm to indicate the direction of travel - you want to leave your arms free for other gestures, movement and/or control.
You could link it up to a controller - have the player indicate the direction of movement with an analogue stick, and have the walk/run on the spot movement generate the magnitude of the vector.
But maybe the better way to do it would simply be to have the centre point the player stands on indicate forward movement, while a circle around them represents the direction of travel - if the player walks with at least one foot in that direction of travel, then they strafe in that direction. The central area should comfortably encompass an average-sized person, with a radius of about 1.2 feet. The outer circle should be easily accessible simply by having the player step to the side - that way, changing direction is never more than a couple of short steps, or one larger step, away.
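To make the centre-point scheme concrete, here's a minimal sketch (the coordinate convention and function name are my own assumptions) of classifying a tracked foot position into either forward movement or a strafe heading:

```python
import math

# Hypothetical sketch of the centre-point scheme: a foot inside the
# central circle means "move forward"; a foot outside it strafes in
# the direction of the offset. Distances in feet; x = right, z = ahead.

CENTRE_RADIUS = 1.2  # central circle comfortably covering the player

def movement_direction(foot_x, foot_z):
    """Return 'forward' inside the centre circle, otherwise a strafe
    heading in degrees (0 = straight ahead, 90 = right, 180 = back)."""
    dist = math.hypot(foot_x, foot_z)
    if dist <= CENTRE_RADIUS:
        return "forward"
    # angle of the offset from the centre point, measured clockwise
    return math.degrees(math.atan2(foot_x, foot_z)) % 360
```

The same offset from the centre point could also feed the drift indicator discussed below - it's one measurement doing double duty.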
One problem this does introduce is drift - all this side-stepping can cause a person to wander from the centre point if they're not careful - even trip over things that aren't quite where they expected them to be. The solution would be an on-screen display element that shows where the user is relative to the centre point whenever they stray outside it.
I've got a Logitech keyboard that has quite a nice OSD that overlays everything, any time, anywhere, when I hit the caps lock/num lock keys. I wonder if it'd be possible to do the same for a little circular indicator telling you where you are.
The upside is that this solution can work quite well both when you're tethered and untethered. If you're tethered (allowing you to use the full power of a desktop PC, which I imagine we're going to want to do for at least a few more years), the heading would have to be controlled with a traditional controller - which, according to preliminary impressions with the Oculus Rift, isn't too bad; it's still an extremely immersive experience. That could be a 360 gamepad (hold one side in one hand - not that bad), or more ergonomic devices like a PS Move navigation controller or the Razer Hydra.
If you're untethered, then the experience would be even more straightforward - the circle moves relative to the direction your torso is pointing. To be honest, I'm not sure how well the Kinect camera tracks the sides and backs of people - but assuming it's reliable, it should still be a relatively straightforward experience. Circle strafing would literally be like running around in a little circle!
To be honest, the gist of what I'm saying seems kinda obvious to me. It almost makes me feel like I'm missing something - is there a reason that this approach hasn't been done yet? The pros are quite significant, and the cons... well, not that bad.
Pros:
Low cost, not much equipment required (1x Kinect + software).
Accessible to everyone.
Easy to start up and use.
Relatively intuitive.
Allows a full body motion framework to develop that could reliably and easily be supported by software and content developers.
Easy enough to add to existing games.
Minimal space requirements.
Good fun exercise.
Cons:
Reduced sense of proprioception - no bodily sense of actual forward movement - but leg pumping with visual forward movement still gets you a long way there.
Requires a little training (centerpoint and strafing).
But that's not all - the above covers leg movement, and could feasibly be implemented by a talented coder. The next part of this post covers arm gesture/body movement.
So the ideal for VR arm movement is that it tracks our arms 1:1. With a Kinect, that isn't too bad. Even with latency, it's still a very workable solution. When the Leap Motion is released, it may be possible to further augment this sort of 1:1 arm control.
The only problem is, there's no software to support it! On the software side, it's a much more complicated process than what I've proposed above (i.e. you can't map complex arm/hand movements to a control stick) - which is also a large part of why I've separated this discussion from the leg movement part.
To be honest, without direct developer support, I would largely stick with hand-held control tech like the Move navigation controller and the Razer Hydra. The efficacy of quick button responses largely outweighs the novelty of gesture controls (as opposed to 1:1 arm mapping).
The problem with gesture controls is that they rely on the user to gesture first - the system recognizes the gesture, then plays the action - so there's a good 200-500ms gap between action and effect; enough to create a cognitive disconnect - enough for people to think that that's not me performing the action (and it isn't, really). In the end, it feels clunky, even foolish, to play like that.
But with developer support - where they build an arm/physics model and gameplay/interactions that are relevant to that full range of arm/hand movement - the sky is the limit...
The main problem with this, though, is collision and tactile feedback - or the lack thereof. Should your arm, or a weapon you're holding, collide with geometry in virtual space, how should the game respond? Just clip through the object and pretend it's not there? With clashing swords, do they just phase through each other?
Some people think that this is an intractable problem with VR if you don't have an appropriate exoskeleton arm or some such that can limit range of motion. Sure, that'd be pretty good if you can get hold of one - but assuming you can't, and still think we should push gaming into this more immersive and natural method of interaction, what other options are available?
I think without exception, even if you can't simulate the tactile feedback of the virtual world, you still want to respect the reality of the virtual world - in this sense, you can't have characters, arms, weapons, etc. clipping through each other, even when the player's input is demanding it!
But if there's a disjunct between the player's motion and the game's representation of that motion, then the illusion of reality that you've been attempting to cultivate with all this technology is lost.
So there obviously needs to be some sort of feedback. While tactile feedback is difficult to achieve on such a macro scale (at the size of the arms, rather than the hands/fingertips - which I believe is the domain usually termed haptics), you can use the sensory channels that the existing technology already stimulates - providing the player with visual and auditory feedback.
In practice, this means that as you wave your arm about, your in-game arm should wave about as freely as you do. When your in-game arm collides with something, it should be restricted by that object - but your actual arm still needs some representation to let you know where it is.
The trick then is to represent the physical arm that is out of sync with the virtual arm in a manner that motivates the player to 're-engage' the virtual arm - to resynchronize the arms in both spaces.
Although the method is up to the developer, one example is the idea of a red phantom limb - as the arms lose synchrony, the representation of the player's actual limb location grows stronger; the further it is from the virtual limb, the more it should glow red, perhaps even with visual indicators to show the player that they need to 'grab' the virtual arm.
The consequence of having it out of sync for too long is that the player loses control of the virtual limb - it falls limply to the side, at which point the player needs to re-engage it by placing their arm at their side.
You can of course use audio cues to reinforce this loss of synchrony between the virtual and the real arm.
The effect of this would be to allow the virtual arm to be affected by game physics - if your character is carrying a large heavy item, and you're swinging your arm all over the place, they drop the item, and you'll have to re-engage your arms to get them to move again. Similarly, if you continue swinging your arms wildly after your weapon has been parried in a sword fight - you lose control of the arm and the weapon.
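The desync logic described above could be sketched roughly like this (the thresholds and names are entirely my own invention, just to show the shape of the idea):

```python
# Hypothetical sketch of the phantom-limb desync idea: track how far
# the player's real hand is from the physics-constrained virtual hand,
# fade in a red "phantom" overlay as they diverge, and drop control of
# the virtual arm if they stay out of sync for too long.

DESYNC_START = 0.10   # metres of separation before the phantom appears
DESYNC_FULL = 0.40    # separation at which the phantom is fully red
DESYNC_TIMEOUT = 1.5  # seconds out of sync before the arm goes limp

class ArmSync:
    def __init__(self):
        self.out_of_sync_time = 0.0
        self.limp = False

    def update(self, separation, dt):
        """Feed real-vs-virtual hand separation (metres) each frame;
        returns phantom glow intensity 0..1 and updates self.limp."""
        glow = (separation - DESYNC_START) / (DESYNC_FULL - DESYNC_START)
        glow = max(0.0, min(1.0, glow))
        if glow > 0.0:
            self.out_of_sync_time += dt
            if self.out_of_sync_time >= DESYNC_TIMEOUT:
                self.limp = True   # virtual arm falls to the side
        else:
            # arms back together: re-engage and clear the timer
            self.out_of_sync_time = 0.0
            self.limp = False
        return glow
```

The glow ramp gives the player continuous feedback before the penalty kicks in, which is what makes the "grab your virtual arm back" behaviour learnable rather than punishing.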
On the flip side, if you keep your arm roughly in line with your virtual arm, there is a convincing proprioceptive sensation of weight and physical boundaries - try it for yourself: imagine a wall, punch it, and stop your arm as it meets this imaginary wall.
There is no wall, and yet the sudden abrupt jerking sensation is there as you'd expect it to be (even when it's you controlling your arm, and not because there's a wall stopping you from moving further).
I'd expect that initially it may not be super convincing - but with not all that much training to get used to the idea, your brain will be cued to halt your arms and extremities before collisions as convincingly as if the obstacle were there - the red phantom limbs and loss of control providing the mental incentive to stop the collision, in place of pain and actual physical obstruction.
This solution, like the movement solution, seems like a pretty obvious one to me - I might be missing something again, but I figure this is more just a case of it being a tricky software thing to do, with little reason or need to do it yet.
Combined, these ideas represent a low-cost but nonetheless effective solution to the problem of full body motion simulation - one that can both enhance the immersion factor of virtual reality and provide developers with incentives to create next-generation gameplay and interaction ideas, for games that are increasingly bumping up against the boundaries of diminishing returns.
Hopefully my explanations are fairly clear. I intend to sketch some diagrams to include with this post to clarify things further, but I thought I'd get some feedback on these ideas at this stage.
*edit* Text wall crits forum for 9999. I'll definitely have to draw up the diagrams and get them in there for clarity.