VR/AR Windows "Desktop" development

This is for discussion and development of non-commercial open source VR/AR projects (e.g. Kickstarter applicable, etc). Contact MTBS admins at customerservice@mtbs3d.com if you are unsure if your efforts qualify.
LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

VR/AR Windows "Desktop" development

Post by LeeN »

I have a prototype I am developing of a 3D windows environment:

[youtube-hd]http://www.youtube.com/watch?v=w9EOzPY8R0w[/youtube-hd]

In the video, I am showing an OpenGL view where I am running a gnome-terminal and VLC playing Underworld. I then open Chrome and navigate to MTBS3D (note that I have to double-click; single-clicking wasn't working for some reason). I play back a Flash video, go to Engadget, and then finally open Eclipse.

It's based on X Windows. I looked into doing this with Windows 7 (their 3D flip, Windows Key + Tab) and it was a dead end; if I recall correctly, that's because of Microsoft's DRM policy that you can't access the contents of other applications' windows.

I started off trying to create an X server from scratch, but that was a huge task and I ended up wasting a lot of time. Then, after playing around with various X servers, it dawned on me that I could use an X server with backing store enabled to provide textures to OpenGL. So I am using Xephyr as a backend server, and I used xwd and xshowdamage to learn how to fetch window images and how to be notified of updates (damage) to windows. I also found ways of sending key and mouse events to windows.
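For anyone curious what the xwd side of that pipeline looks like: an `xwd -id <window-id>` dump starts with a fixed header of 32-bit words, so the window's dimensions can be pulled out with a few lines. This is a hedged sketch, assuming the common big-endian version-7 header layout (the real header has many more fields, plus a byte_order field a robust reader should honor):

```python
import struct

def parse_xwd_header(data: bytes) -> dict:
    """Parse the first six CARD32 fields of an XWD (X Window Dump) header.

    Assumes the words are big-endian, as xwd commonly writes them; a real
    reader should also check the byte_order field further into the header.
    """
    header_size, version, pix_format, depth, width, height = struct.unpack(
        ">6I", data[:24]
    )
    if version != 7:
        raise ValueError("not an XWD version 7 dump")
    return {"header_size": header_size, "depth": depth,
            "width": width, "height": height}

# Synthetic example header: 100-byte header, version 7, ZPixmap (2),
# 24 bits deep, a 640x480 window:
fake = struct.pack(">6I", 100, 7, 2, 24, 640, 480)
info = parse_xwd_header(fake)
print(info["width"], info["height"])  # → 640 480
```

In the actual prototype the image bytes would of course come straight out of the backing store rather than through xwd; the dump format is just a convenient way to learn the plumbing.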

There is still quite a bit to do with this.

One big challenge is that X Windows is entirely 2D, so windows relate to each other in 2D. While X does have a window hierarchy, popup menus and tooltip/hint windows are not child windows (child windows are clipped by their parents), so they are disconnected from the window that spawned them, and it is difficult to determine what they should be attached to. You can see this problem when I first type into Chrome and the dropdown appears below the Chrome window, due to my temporary window-layout logic. I'm going to have to investigate ways of dealing with that.
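X does offer the WM_TRANSIENT_FOR hint for dialogs, but tooltips and menus often omit it, so one fallback (purely a hypothetical heuristic, not anything from the prototype) is geometric: attach an unparented popup to whichever top-level window it overlaps most. A sketch:

```python
def overlap_area(a, b):
    """Area of intersection of two rects given as (x, y, w, h) tuples."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def guess_owner(popup_rect, toplevels):
    """Pick the top-level window a popup most plausibly belongs to.

    toplevels: dict of window id -> rect. Returns None when the popup
    overlaps nothing (e.g. a tooltip floating off to the side).
    """
    best, best_area = None, 0
    for wid, rect in toplevels.items():
        area = overlap_area(popup_rect, rect)
        if area > best_area:
            best, best_area = wid, area
    return best

# A dropdown hanging just below an 800x600 browser window:
tops = {"chrome": (0, 0, 800, 600), "vlc": (900, 0, 400, 300)}
print(guess_owner((100, 580, 300, 120), tops))  # → chrome
```

This obviously fails when a menu is posted entirely outside its owner, which is exactly the dropdown-below-Chrome case in the video, so some mix of geometry, focus history, and client identity would likely be needed.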

Input still needs to be figured out; occasionally windows stop responding to mouse events. I need to dig through xtrace logs to understand what is happening.

Frames need to be added so I can grab/resize/rotate windows with the mouse, although I might try to jump straight to using the Razer Hydra to move windows around in 3D instead.

I plan to put the source code up somewhere when I get to a good point (I am an avid user of Mercurial, so probably Google Code); right now, though, the code is very volatile.



Ultimately the goal would be to create an entirely 3D/VR/AR protocol instead of a hack on top of X Windows: processes would connect to a scene server and create VBOs, FBOs, PBOs, shaders, and trigger/event boxes, while display and input devices would connect to the same scene server, stream the content, display the 3D scene, and let the user interact with the triggers. It would be interesting to have an augmented reality where you can see extended virtual controls for everything in your house, at work, in stores, etc.

For now, though, the X protocol is the best solution for supporting the most apps, and in the interim it would be awesome to have 360 degrees of space as a desktop.

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

Very cool. This should be combined with a head-tracking HMD for the full realization of how cool it could be. It's unfortunate that Windows tries to prevent development of alternate shells like this. For wide-FOV HMDs (i.e. the Rift) the Windows desktop is completely inadequate; you would need a true virtual window manager like this.

User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11394
Joined: Sat Apr 12, 2008 8:18 pm
Which stereoscopic 3D solution do you primarily use?: S-3D desktop monitor

Re: VR/AR Windows "Desktop" development

Post by cybereality »

Nice work man! Looking good.

This should actually be possible on Windows too, but I'm not sure how it would be done. I know there have been similar projects that work on Windows, like SphereXP ( http://www.spheresite.com/ ). I did some quick research on it and didn't find much help, but surely there is a way.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

I haven't seen that before.

It looks like they can't really interact with windows while they're in 3D, although researching it I've read blurbs about experimental 'active' windows, and I saw one YouTube video where it looked like they may have been interacting. You can also get similar 3D windows in X using Compiz, but it's just visual eye candy; you can't use a program while it's transformed.

It definitely gives me some ideas, thanks!

Sadhu
One Eyed Hopeful
Posts: 9
Joined: Wed Jun 06, 2012 4:25 pm
Location: Poland

Re: VR/AR Windows "Desktop" development

Post by Sadhu »

I'm not sure if I can get my head around this, but have you looked at Wayland on Linux? Maybe this could be implemented as a Wayland compositor/window manager? It's still in heavy development, but already quite usable.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

Wayland doesn't really do much more than I get from X, except *maybe* simplify it, but then I lose compatibility with older systems and applications. At my work, for example, we do cross-platform Linux builds, and it's a maintenance nightmare for our build scripts to support multiple distros and multiple versions of a distro (for the host platform), so we are stuck Waylandless. And while you can run Wayland on top of X, applications still have to be built against libraries that support it, which is something I don't want to deal with and I don't think end users want to deal with either.

I also kind of want to focus on the desktop paradigm with 3D head/hand tracking (the fun stuff); at this time I really don't want to get into the details of a display server beyond tricking applications into working in 3D. And Wayland is just an optimization of X, merging the compositor and window manager and stripping out legacy crap; it doesn't extend clients into a 3D paradigm.

druidsbane
Binocular Vision CONFIRMED!
Posts: 237
Joined: Thu Jun 07, 2012 8:40 am
Which stereoscopic 3D solution do you primarily use?: LCD shutter glasses
Location: New York
Contact:

Re: VR/AR Windows "Desktop" development

Post by druidsbane »

What about a plugin to a compositing manager like Compiz? (http://www.compiz.org/) It should still give you access to all the windows, but probably be cleaner, because then you don't have to worry about updates: it composites everything for you and already supports 3D effects, etc. before rendering. That seems like it might simplify support across distros and platforms while letting your code focus on just the 3D window-manager part. From their page, you can see the two most important parts you care about:
Compiz is an OpenGL compositing manager that uses GLX_EXT_texture_from_pixmap for binding redirected top-level windows to texture objects. It has a flexible plug-in system and it is designed to run well on most graphics hardware.
...
Compiz can also be a window manager, which means that it is the software between you and your desktop apps. It enables you to move or resize windows, to switch workspaces, to switch windows easily (using alt-tab or so), and so on.
Looking forward to seeing more of this :)
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

I tried Compiz a long time ago; I was looking at this plugin: http://wiki.compiz.org/Plugins/Headtracking . I came to the conclusion it wouldn't work because it leaves mouse input to the X server, so there is no 2D>3D>2D redirection. Basically everything is still 2D; some effects just make things look 3D temporarily.

User avatar
Chriky
Binocular Vision CONFIRMED!
Posts: 228
Joined: Fri Jan 27, 2012 11:24 am
Which stereoscopic 3D solution do you primarily use?: Head Mounted Display (HMD)

Re: VR/AR Windows "Desktop" development

Post by Chriky »

I haven't got my Linux machine to hand but I was sure you could grab a window and drag it around the 3D desktop cube...?

druidsbane
Binocular Vision CONFIRMED!
Posts: 237
Joined: Thu Jun 07, 2012 8:40 am
Which stereoscopic 3D solution do you primarily use?: LCD shutter glasses
Location: New York
Contact:

Re: VR/AR Windows "Desktop" development

Post by druidsbane »

Chriky wrote:I haven't got my Linux machine to hand but I was sure you could grab a window and drag it around the 3D desktop cube...?
You could. It is both a window manager AND a compositing manager. The former controls position and focus, etc... the latter lets you render any way you wish. The two go hand in hand really. The question is whether you can override the window management portion as well as compositing to get the full amount of information needed to make this useful with an HMD.
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex

bobv5
Certif-Eyed!
Posts: 529
Joined: Tue Jan 19, 2010 6:38 pm

Re: VR/AR Windows "Desktop" development

Post by bobv5 »

I'm no programmer, so sorry if this is irrelevant, but a function called "GetPixel" exists. I believe it is part of the Windows GDI. This is what the boblight guy uses for the Windows version of boblight.

http://blogger.xs4all.nl/loosen/articles/408184.aspx
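For what it's worth, whatever the capture call is (GetPixel on Windows, XGetImage on X), the boblight-style step after it is just averaging a sparse grid of sampled pixels into one ambient-light colour. A sketch of that reduction, with the capture itself left out (the sample values below are made up):

```python
def average_color(samples):
    """Average an iterable of (r, g, b) pixel samples into one color.

    In a boblight-style setup, `samples` would come from a sparse
    GetPixel/XGetImage grid over an edge region of the screen.
    """
    n = 0
    rs = gs = bs = 0
    for r, g, b in samples:
        rs, gs, bs, n = rs + r, gs + g, bs + b, n + 1
    if n == 0:
        return (0, 0, 0)  # nothing sampled: light off
    return (rs // n, gs // n, bs // n)

print(average_color([(255, 0, 0), (0, 0, 255)]))  # → (127, 0, 127)
```

Per-pixel GetPixel calls are slow, which is why sampling a sparse grid (rather than the whole frame) is the usual trick.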
"If you have a diabolical mind, the first thing that probably came to mind is that it will make an excellent trap: how do you get off a functional omni-directional treadmill?"

JohnF30
One Eyed Hopeful
Posts: 7
Joined: Sat Jun 30, 2012 3:01 am

Re: VR/AR Windows "Desktop" development

Post by JohnF30 »

You could use the upcoming Leap from Leap Motion as an accurate input device. It could be used to reproduce the "ARI" from Heavy Rain.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

[youtube-hd]https://www.youtube.com/watch?v=XP0LLfi ... ata_player[/youtube-hd]

This is an update. I'm using the Razer Hydra to demonstrate what head tracking would be like and to control a 3D mouse cursor. I'm also trying out having a shadow appear on a window when the cursor is in proximity, as a hint.

After trying to get popup/hint/tooltip/context-menu windows to work correctly, I've come to the conclusion that the best solution is to clump the windows from one application together into a panel. This means programs that have many windows, like Gimp, will not move independently in 3D but will move together.
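The clumping idea boils down to grouping windows by their owning application (in X, roughly the WM_CLIENT_LEADER property or the window-group field of WM_HINTS) and laying each group out inside one shared panel. A hypothetical layout pass, with made-up window names and a made-up 10px gutter:

```python
from collections import defaultdict

def clump_into_panels(windows):
    """Group windows by application id and lay each group out in a row.

    windows: list of (app_id, win_id, width, height) tuples.
    Returns {app_id: [(win_id, x_offset), ...]} where each offset is the
    window's position inside that application's shared panel.
    """
    groups = defaultdict(list)
    for app, win, w, h in windows:
        groups[app].append((win, w))
    panels = {}
    for app, wins in groups.items():
        x = 0
        placed = []
        for win, w in wins:
            placed.append((win, x))
            x += w + 10  # gutter between clumped windows
        panels[app] = placed
    return panels

wins = [("gimp", "toolbox", 200, 600), ("gimp", "canvas", 800, 600),
        ("vlc", "main", 640, 360)]
print(clump_into_panels(wins)["gimp"])  # → [('toolbox', 0), ('canvas', 210)]
```

Moving a clump in 3D then means transforming the one panel, with every window keeping its fixed offset inside it.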

I am still thinking of moving to an xtrace setup, with this I will get more information about applications and windows.

I am hoping to get a large amount of time to complete some things this coming week. In particular, I want to see if I can run this under Windows 7 using Cygwin and either Xephyr or VcXsrv. I then want to set up a real build environment, maybe with CMake, and from there I'd like to do a code release.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

The Leap Motion seems like the ideal input device, but it is something like 5 to 7 months away.

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

LeeN,

Looks like a very promising project. If you plan to open source it I will definitely give it a try, at least as a tester. I'm on Ubuntu but can install other flavors as well.

There are a few things that seem a bit questionable to me. I don't want to barge in with my opinions, but it looks like the usability of this interface is not that great. I'll cover that in items 1-3 below.

1. Windows floating randomly out in 3D is not very convenient because as a user I feel lost in empty space. Some sort of an order would make them better organized. For example, all windows may have their corners pinned to a sphere like a cockpit. Zooming in and out will keep the corners on the sphere too. In fact, having longitude and latitude lines (e.g. white lines on dark blue background) would help people orient themselves in 3D. It would help to have some sort of a marker as the initial point of view. Plus, it may also be helpful to prevent 2D windows from rotating by default, so that they are positioned upright when I zoom in on them. For 3D applications like google earth the restriction would not apply.

2. There is no virtual keyboard. Ideally, I'd want to see a flat panel with a virtual keyboard that displays the keys that I press. Otherwise, it's hard to type while you are wearing a VR visor. For example, the panel may look like a flat or tilted cut of the sphere's bottom, display a keyboard and icons of launched applications. Without the icons you'll easily get users lost floating in space among multiple windows.

3. Regarding OpenGL, are you using some sort of an OpenGL engine like OpenSceneGraph to position windows? If yes, lighting in 3D is very important to give the user some sense of a position and an orientation. In OSG you can create several light sources positioned above the user. Your demo has a directional light source from the bottom which is very confusing. Normally, people expect a light source (a lamp) at the top. If you have a panel at the bottom illuminated from the top it would give the impression of a work desk which is more practical.

4. Regarding Wayland: I've been communicating with Kristian Hoegsberg a while ago when I also thought that Wayland was 2D. He corrected me on that. A Wayland server does not enforce the content of a window buffer. Thus, conceptually you can use wl_surface buffer to pass polygon+color data of any 3D model. As long as that Wayland server is written to recognize the 3D data structure, it can render the window/model any way you want. That said, for compatibility with older 2D applications Wayland is not a good choice since there aren't many applications written for Wayland yet.

5. And one last point, do you really have any justifiable reason to use CMake? In my experience, for complex build systems it brings more trouble than it actually solves. Not saying that other solutions are better (they are not) but I encourage you to pick the simplest solution possible at this point and spend your time on something more useful. Just an opinion. Feel free to ignore it.
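For what it's worth, the cockpit-sphere layout in point 1 is just a latitude/longitude to Cartesian mapping: pin each window to a (lat, lon) on a viewer-centred sphere and recompute its position from the sphere radius. A sketch, with an assumed convention (lat 0, lon 0 straight ahead on +Z, +lat up, +lon right):

```python
import math

def sphere_point(radius, lat_deg, lon_deg):
    """Map latitude/longitude on a viewer-centred sphere to XYZ.

    Convention assumed: lat 0, lon 0 is straight ahead (+Z),
    positive latitude is up, positive longitude is to the right.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius * math.cos(lat) * math.sin(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.cos(lon)
    return (x, y, z)

# A window pinned dead ahead at 2m sits at (0, 0, 2):
x, y, z = sphere_point(2.0, 0, 0)
print(round(x, 6), round(y, 6), round(z, 6))  # → 0.0 0.0 2.0
```

Zooming is then just changing the radius while keeping each window's (lat, lon) fixed, which also gives the "corners stay on the sphere" behaviour for free.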

NK

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

@LeeN: Very cool. I'm interested to see how things go with cygwin.

@NickK: Good idea about the virtual keyboard for full-immersion headsets, but it only goes halfway. Seeing your key presses is one thing, but blindly finding the proper place for your fingers is another problem. Maybe a data glove is the only answer.

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

brantlew wrote:@LeeN: Very cool. I'm interested to see how things go with cygwin.

@NickK: Good idea about the virtual keyboard for full immersion headsets, but it only goes halfway. Seeing your key presses is one thing but blindly finding the proper place to place your fingers is another problem. Maybe a data glove is the only answer.
I'm not sure about the glove. You can't hold your hand up for an hour. How would you propose to make it usable for an extended period of time, say hours?

Alternatively, you can do something crazy -- but this is *really* crazy. Find 2 mice (left and right) with 4 buttons on top and 1 on their thumb sides. As you move mice on the desk, virtual reality displays you moving your fingers over a keyboard, e.g. using a different color. Not sure how ergonomic it is going to be though.

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

@NickK: The LEAP device might be a good solution. From what I have seen it is pretty good at tracking individual finger motions so just point one at your physical desk, and then render your fingers into virtual space. Depending on the technology, it might be able to even trace out your keyboard which would be even better. Render both your physical keyboard and fingers into the virtual space.

druidsbane
Binocular Vision CONFIRMED!
Posts: 237
Joined: Thu Jun 07, 2012 8:40 am
Which stereoscopic 3D solution do you primarily use?: LCD shutter glasses
Location: New York
Contact:

Re: VR/AR Windows "Desktop" development

Post by druidsbane »

brantlew wrote:@NickK: The LEAP device might be a good solution. From what I have seen it is pretty good at tracking individual finger motions so just point one at your physical desk, and then render your fingers into virtual space. Depending on the technology, it might be able to even trace out your keyboard which would be even better. Render both your physical keyboard and fingers into the virtual space.
Or just render a virtual keyboard and track the hands on the desk so that they type on it... might need to calibrate that or something, but would make for an awesome virtual workspace if we can get the resolution up on the Rift :)
Ibex 3D VR Desktop for the Oculus Rift: http://hwahba.com/ibex - https://bitbucket.org/druidsbane/ibex

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

brantlew wrote:@NickK: The LEAP device might be a good solution. From what I have seen it is pretty good at tracking individual finger motions so just point one at your physical desk, and then render your fingers into virtual space. Depending on the technology, it might be able to even trace out your keyboard which would be even better. Render both your physical keyboard and fingers into the virtual space.
I am a little skeptical of the LEAP device for the productivity or interface purpose. For productive long term work you need to keep your hands firmly on some solid 2D surface. Try holding your arm up for half an hour like they do in LEAP videos to see how tiring it becomes.

It's great for kids playing a video game, however. My understanding is that the LEAP device requires you to keep your hand up in the air; otherwise it won't be able to track your fingers. It is based on infrared detection/processing, and if you keep your fingers or hands on your desk, the desk will warm up and the infrared detectors will get confused.

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

@NickK: Good point. Very likely that the LEAP tech couldn't handle this. I guess we'll find out in a few months. I filled out an application a couple months ago for a LEAP dev kit, but they never responded :(

@druidsbane: Well as anyone who has used an iPad can attest, non-tactile keyboards really suck. So it would be cool if you could keep the tactile feel of a keyboard but be able to see it in virtual space as well.

Now that I think about it, you know what might be even easier? Some sort of AR tech using a green-screen technique. Put a green mat on your desk and a camera on your face. Then capture and mask everything within the green mat interior and render the masked video into your virtual space. You would be able to see your full hands and keyboard without any complex hardware.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

NickK wrote: 1. Windows floating randomly out in 3D is not very convenient because as a user I feel lost in empty space. Some sort of an order would make them better organized. For example, all windows may have their corners pinned to a sphere like a cockpit. Zooming in and out will keep the corners on the sphere too. In fact, having longitude and latitude lines (e.g. white lines on dark blue background) would help people orient themselves in 3D. It would help to have some sort of a marker as the initial point of view. Plus, it may also be helpful to prevent 2D windows from rotating by default, so that they are positioned upright when I zoom in on them. For 3D applications like google earth the restriction would not apply.
Those are good ideas. I was thinking of having a way of creating/using a 3D background as you would a wallpaper on a 2D desktop, that might help in this regards.
NickK wrote: 2. There is no virtual keyboard. Ideally, I'd want to see a flat panel with a virtual keyboard that displays the keys that I press. Otherwise, it's hard to type while you are wearing a VR visor. For example, the panel may look like a flat or tilted cut of the sphere's bottom, display a keyboard and icons of launched applications. Without the icons you'll easily get users lost floating in space among multiple windows.
I was thinking a solution might be to have a web cam aimed down at your hands and keyboard and display that in the 3d environment. I was also planning to try to use the Kinect for tracking and I was thinking I could also possibly use the video portion of it for displaying the keyboard and your hands to yourself. For the Hydra I was also thinking of using the Analog sticks as a way of choosing keys to type, there is an Android virtual keyboard that kind of works that way.
NickK wrote: 3. Regarding OpenGL, are you using some sort of an OpenGL engine like OpenSceneGraph to position windows? If yes, lighting in 3D is very important to give the user some sense of a position and an orientation. In OSG you can create several light sources positioned above the user. Your demo has a directional light source from the bottom which is very confusing. Normally, people expect a light source (a lamp) at the top. If you have a panel at the bottom illuminated from the top it would give the impression of a work desk which is more practical.
LOL, that is actually caused by my TV and camera. I have a DLP HDTV, and I was using my cell phone to record the video; it doesn't have a real zoom, so the angle and position of the camera produce uneven lighting from my TV. There is actually no lighting at all in the UI, although in the future I was going to add bling to window frames (reflections, refractions, etc.).

I need a better way of creating these videos in Linux.
NickK wrote: 4. Regarding Wayland: I've been communicating with Kristian Hoegsberg a while ago when I also thought that Wayland was 2D. He corrected me on that. A Wayland server does not enforce the content of a window buffer. Thus, conceptually you can use wl_surface buffer to pass polygon+color data of any 3D model. As long as that Wayland server is written to recognize the 3D data structure, it can render the window/model any way you want. That said, for compatibility with older 2D applications Wayland is not a good choice since there aren't many applications written for Wayland yet.
That is interesting, but from what I have seen of the specification, input is still 2D, so I am not sure what benefit you would get from that outside of visual effects.
NickK wrote: 5. And one last point, do you really have any justifiable reason to use CMake? In my experience, for complex build systems it brings more trouble than it actually solves. Not saying that other solutions are better (they are not) but I encourage you to pick the simplest solution possible at this point and spend your time on something more useful. Just an opinion. Feel free to ignore it.
I really dislike autoconf and configure scripts. I've worked with a lot of them, and when things go wrong (and for some reason I always run into problems with them), they're generally a pain to debug. I'm hoping to keep things simple enough that CMake will suffice. I've used it for some of my own projects and when working with WebKit, and I like that it generates build files outside of your source tree, that it can generate Visual C++ solutions/projects without much effort (although that probably won't happen for this project), and that you can use ccmake or cmake-gui to see all the available options and settings.

Thanks for the feedback Nick!

tgecho
One Eyed Hopeful
Posts: 1
Joined: Sat Jun 09, 2012 6:47 pm

Re: VR/AR Windows "Desktop" development

Post by tgecho »

This is beautiful. I work with a lot of virtual/remote machines at the same time, and I can totally envision this plus a high res rift being a great way to have them all scattered around me without a ton of real monitors.

I think a webcam aimed down is an elegantly simple way to handle the keyboard issue. A kinect would add possibilities for using gestures to manage windows and the like.

Keep it up!

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

LeeN wrote: Those are good ideas. I was thinking of having a way of creating/using a 3D background as you would a wallpaper on a 2D desktop, that might help in this regards.
My goal was not to tell you what to do but rather give a few examples. The key message was to keep 2 perspectives at the same time: as both a user and a developer. These are very different views. For example, Compiz was developed from the developer's perspective -- "Look what we can do! See how cool it is!". Users looked at it, agreed that it was cool, played around for a while and returned to their 2D GUI because there was no productivity gain or anything useful about it. I don't want your effort to go to waste the same way. I really hope that someone will finally come up with a usable 3D interface.

I suspect a dual perspective may be vital to create a good usable 3D interface because you are competing with 2D interfaces that have been polished and optimized over several decades. As you design and implement new features, you can put on your user's hat to evaluate the feature and its impact on convenience/speed/order/etc (completely forget about the underlying implementation and think of usability) and then put on your developer's hat to see if you can implement what the user wants. I think the user's perspective was missing in previous 3D interface implementations like Compiz. Just my opinion though. I'm sure Compiz guys would disagree. :)
LeeN wrote: I was thinking a solution might be to have a web cam aimed down at your hands and keyboard and display that in the 3d environment. I was also planning to try to use the Kinect for tracking and I was thinking I could also possibly use the video portion of it for displaying the keyboard and your hands to yourself. For the Hydra I was also thinking of using the Analog sticks as a way of choosing keys to type, there is an Android virtual keyboard that kind of works that way.
I've been doing a great deal of thinking on 3D interfaces lately. I suspect that the keyboard problem is best solved in hardware. IMHO, Microsoft engineers already solved this problem in 2009. They just hadn't figured out that their pressure sensitive keyboard was a great match for virtual reality. See this video around 1:19 when they start pressing multiple keys at the same time:
[youtube]http://www.youtube.com/watch?v=80FWh_fQ_Zg[/youtube]

In VR, a light touch (green in the video above) can be used to identify (1) that the user wants to type something, so VR needs to bring up a virtual keyboard, and (2) where the user's fingers are at the moment. The real click is registered with higher pressure (red in the video). When no fingers are touching the keyboard, the virtual keyboard disappears from the VR view.
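That two-threshold behaviour is easy to prototype in software even without such a keyboard; per-key pressure just needs to be split into "hovered" and "pressed" sets. A sketch with entirely made-up threshold values (not numbers from the Microsoft prototype):

```python
def classify_keys(pressures, touch=0.1, press=0.6):
    """Split per-key pressure readings into hovered vs pressed keys.

    pressures: dict of key name -> normalized pressure in [0, 1].
    Keys at or above `touch` show finger position in VR (the green
    state); keys at or above `press` register as real keystrokes (red).
    Thresholds here are arbitrary placeholders.
    """
    hovered = sorted(k for k, p in pressures.items() if touch <= p < press)
    pressed = sorted(k for k, p in pressures.items() if p >= press)
    return hovered, pressed

# Resting fingers on f and j while deliberately striking g:
hov, prs = classify_keys({"f": 0.2, "j": 0.25, "g": 0.8, "q": 0.0})
print(hov, prs)  # → ['f', 'j'] ['g']
```

When both sets are empty, the virtual keyboard can fade out, matching the "no fingers touching" case above.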

Unfortunately, the pressure sensitive keyboard was patented in 2010, so we won't likely see it in the nearest future. In the meantime, your software idea may be your best bet.
LeeN wrote: That is interesting, but from what I have seen of the specification, input was still 2D, so I am not sure what benefit you would get from that outside of visual effects.
Yes, you are right that the input buffers are 2D. But Kristian is also right that Wayland is 3D. Let me try to explain it:

Wayland is only a protocol. The underlying server that Kristian's team is developing, Weston, is one possible implementation of a Wayland-compliant server. Weston interprets the buffers as colors of pixels in a 2D window. But Wayland does not define the content of the buffer. Conceptually, you can put the entire TCP/IP protocol into the buffer and it will still be OK from the perspective of Wayland protocol.

Thus, you can implement your own Wayland server (let's call it Easton) that checks a certain flag in the buffer. If the flag is not set, Easton interprets the buffer as a 2D window of a 2D application. If the flag is raised, Easton interprets the data as a buffer of 3D model data. For example, one dimension could correspond to the vertex index, while the other corresponds to the XYZA and RGB data of that vertex. Your brand new Easton server would still be Wayland-compliant and work correctly with both 2D and 3D applications.
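The convention amounts to tagging the front of the buffer and choosing an interpretation from the tag. A toy encoding of that idea (entirely hypothetical, not real Wayland/Weston code; the flag values and 7-float vertex layout are invented for illustration):

```python
import struct

FLAG_2D, FLAG_3D = 0, 1

def pack_3d_buffer(vertices):
    """Pack (x, y, z, a, r, g, b) float tuples behind a 3D flag word."""
    out = struct.pack("<II", FLAG_3D, len(vertices))
    for v in vertices:
        out += struct.pack("<7f", *v)
    return out

def unpack_buffer(data):
    """Return ('3d', vertices) or ('2d', raw_pixel_bytes) from the tag."""
    (flag,) = struct.unpack_from("<I", data, 0)
    if flag != FLAG_3D:
        return "2d", data[4:]  # everything after the tag is pixel data
    (count,) = struct.unpack_from("<I", data, 4)
    verts = [struct.unpack_from("<7f", data, 8 + 28 * i) for i in range(count)]
    return "3d", verts

# One red, opaque vertex at the origin:
buf = pack_3d_buffer([(0, 0, 0, 1, 1, 0, 0)])
kind, verts = unpack_buffer(buf)
print(kind, len(verts))  # → 3d 1
```

A 2D-only client never sets the flag, so a server that ignores the convention still degrades gracefully, which is the crux of the "still Wayland-compliant" argument.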

In addition, Weston is relatively new, so it is easy to modify and extend, while X servers are very convoluted and it is hard to ensure compliant behavior.

That said, for virtual reality applications there are presently 3 disadvantages of going Wayland:
(1) There are almost no applications developed for it.
(2) Wayland is dependent on open source kernel mode setting drivers. The performance of the open source drivers is vastly inferior to the proprietary blobs from nVidia or AMD. The Wayland team appears to have decided to go with the open source Radeon and Nouveau drivers, which are good enough for a 2D GUI. But I'm afraid they'll be too slow for HMDs with head tracking, where the entire 3D world needs to be rendered quickly.
(3) The Wayland documentation is lacking. I have voiced my concern to Kristian that they need to consider improving their documentation. For example, initially I also thought that Wayland prevented implementations of a 3D server.
LeeN wrote: I really dislike autoconf and configure scripts. I've worked with a lot of them and when things go wrong with it (and for some reason I always run into problems with them), it's generally a pain to debug. I'm hoping to keep things simple enough that cmake will suffice and I've used it for some of my own projects and I've used it working with webkit, and I like the fact that it will generate projects outside of your source code and has the capability of generating Visual C++ Solutions/Projects with out much effort (although that probably won't happen for this project) and you can use ccmake or cmake-gui to see all the options and settings available. Thanks for the feedback Nick!
I'm not a fan of autotools myself. I was thinking along the lines of hand-written Makefiles plus VC++ project files, with minor help from python/perl scripts if needed at all. But this is a really minor issue compared to the general usability of your 3D interface. Now that I think about it, I shouldn't even have brought it up. CMake is good enough.

StreetRat
Two Eyed Hopeful
Posts: 65
Joined: Sun Oct 24, 2010 11:11 pm

Re: VR/AR Windows "Desktop" development

Post by StreetRat »

Found this post yesterday (I don't look around the forums much), and I've had a similar idea for a while now, I just never got around to it. I noticed in the first post you said you had problems with Windows 7.

Not sure if this will help or not, but it's VB code showing how to use the images from the 3D flip in your own program (not my code):
http://blogs.msdn.com/b/calvin_hsia/arc ... nager.aspx

It seems Windows 7 has a few APIs for manipulating the data.

From what I've learned since yesterday, every windowed application renders to its own individual back buffer, which is then rendered to the final desktop image (allowing for shaders and such), and that background image is rendered in DirectX onto 2 triangles. (Rendering might not be the correct word for all that, but you get the idea.)

Since you can access the back buffers through the API (example above), I was thinking about using that. You can get the size of the window as well, so you can set up your own quad at the correct size and then copy the back buffer to your own item. The image is also available while the application is minimized, and it's constantly updated, so you can use it to watch a movie, paint, etc.
My next step is to try to create an XNA/DX app and render the backgrounds to it to see if it's possible.

I'm not too sure how to get the interaction between my app and the rest of the applications working properly though.
I've found a way to manipulate the size of a window (the MoveWindow function) as well as to send individual clicks via PostMessage. Not sure about typing.
Not sure how it'll work in a 3D environment either.

I had the same idea as you of using the Hydra as an input method: left stick to control movement and emulate head tracking, and the right to control the mouse, allowing for rotation as well as movement on a 1:1 scale.
Not too sure about the interaction in 3D space though; that might be a bit beyond me.

Windows uses dwm.exe to render the desktop, and it's also possible to use DirectX to render to the desktop itself from an external program, though that usually removes the icons.
I had the idea of using DirectX to render all the applications through my app to the desktop, and hacking dwm.exe to comment out its own rendering calls (in memory only, as a trainer of sorts, not actually touching the exe). That way the original desktop never gets rendered. No idea if that would be possible though. It's the only way I could see to render your own desktop without any standard applications showing up on it.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

I have a feeling that is not going to work. I recall investigating the thumbnail API; this is the reason I think it's not possible:
http://stackoverflow.com/questions/3848 ... ike-flip3d

User avatar
coresnake
Two Eyed Hopeful
Posts: 75
Joined: Fri Jun 22, 2012 5:32 am

Re: VR/AR Windows "Desktop" development

Post by coresnake »

I agree that the virtual keyboard problem should be solved in hardware if you expect any kind of productivity from this. I think a combination of a data glove with a haptic device such as the Novint Falcon would be ideal to let you 'feel' virtual keys.

More tech of this kind should start cropping up when the first haptic-enabled cell phones become popular. I can't find the URL right now, but I know there is a company pushing a new gel-based haptic screen for multiple phones starting in 2013.

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

I remember a couple of years ago, when I was digging into VNC apps, I came across the concept of mirror drivers in Windows. They are a little bit old - based on the XP GDI model - but I think they still work to some degree in later Windows versions. I know this is what some of the VNC variants use to optimize their screen-grabbing code. This is how I thought about implementing a virtual desktop a while back: just taking one of the open source VNC code bases and modifying it to render the screen to a 3D rectangle instead of transmitting it across the network. Anyway - just thought the info could be helpful.

http://msdn.microsoft.com/en-us/library ... 85%29.aspx

http://www.uvnc.com/features/driver.html

StreetRat
Two Eyed Hopeful
Posts: 65
Joined: Sun Oct 24, 2010 11:11 pm

Re: VR/AR Windows "Desktop" development

Post by StreetRat »

Interesting find, LeeN.
I could be wrong, but I think the problem in the post you linked is that the guy assumes that just because the API call has the word thumbnail in it, it returns a thumbnail.
I modified the example I gave earlier and could get a perfect 1:1 replication of all my open windows within picture boxes under VB.net. If they were thumbnails or distorted in any way, the text would also look distorted, but it didn't; it looked exactly the same.
From what I've seen, you're getting access to the complete back buffer before it hits the 2D screen, and you can then manipulate it however you want.

I had a few problems converting some C# stuff to VB.net last night (2 hours to draw a triangle), but now I have that down pat; the next step is textures and seeing if I can get the app to draw.

The XNA API has some calls to get data straight out of a back buffer, so I'm not sure if they'll work the way I think they will.
Worst case scenario, I paint all the back buffers to a picture box (as I know that works) and then use that picture box as a texture.

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

LeeN,
What is the latency of your approach for capturing window content through xwd? Can it work efficiently at a 120Hz refresh rate when I move my head while typing or modifying the window? Does it get rendered sufficiently fast to eliminate visual artifacts? Thanks.
NK

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

Hi Nick, I'm using XGetImage (which I found by looking at the xwd source), and it runs at a decent speed streaming 720p video at half resolution without overlays (that's what I show in the videos). This is with almost no optimization except damage regions from XDamage, so there are at least two things I want to try to see if they improve performance: 1) use shared memory buffers (this probably wouldn't work on some platforms), and 2) use shaders (or GL_BGRA_EXT) to avoid having to swizzle the results of XGetImage. There are likely other optimizations besides those, but I have not yet started even measuring performance. Beyond this, moving to something like Wayland would probably be ideal.
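For illustration, the swizzle mentioned in (2) is the per-pixel pass that GL_BGRA_EXT would eliminate. A minimal sketch, assuming XGetImage returns 32-bit ZPixmap data in BGRA byte order (typical on little-endian x86); the function name is mine, not from the project:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert a BGRA pixel buffer (as XGetImage commonly returns for 32-bit
// ZPixmaps on little-endian hosts) into RGBA for glTexImage2D with GL_RGBA.
// Uploading with GL_BGRA_EXT as the source format skips this pass entirely.
std::vector<uint8_t> bgra_to_rgba(const std::vector<uint8_t>& src) {
    std::vector<uint8_t> dst(src.size());
    for (std::size_t i = 0; i + 3 < src.size(); i += 4) {
        dst[i + 0] = src[i + 2]; // R comes from the B slot
        dst[i + 1] = src[i + 1]; // G stays put
        dst[i + 2] = src[i + 0]; // B comes from the R slot
        dst[i + 3] = src[i + 3]; // A stays put
    }
    return dst;
}
```

Even this simple loop touches every pixel of every damaged region each frame, which is why pushing the swizzle into a shader (or avoiding it with GL_BGRA_EXT) is attractive.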

In the videos, the application was running on an HP Envy notebook, which has an i7, an AMD Mobility Radeon, dual SSDs, and a 1080p display. I think the Linux graphics driver sucks though; performance seems to degrade a lot when I increase the size of my application's window.

StreetRat
Two Eyed Hopeful
Posts: 65
Joined: Sun Oct 24, 2010 11:11 pm

Re: VR/AR Windows "Desktop" development

Post by StreetRat »

Unfortunately LeeN, I have to agree that using Win 7 won't be easy, if it's possible at all.
I spent a few days trying, and the examples I've found either hack the DX DLLs or are more illusions than usable.
Even though you can get the thumbnail data as a full-size image, it prints on top of the item, not to it, so you'd have to print on a form, have the form open on the screen, take a screenshot of the screen where the form is, and then use that screenshot as a texture in a 3D environment.
Doing that for 15 or so apps would be rather slow, and if one overlaps another, you'd capture the overlap.
No good if you're running a full-screen app.
On top of that, even when you do get the thumbnail, it doesn't include any menus or open windows etc; they're all separate.

I haven't given up (I'm a glutton for punishment).

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

Guys, I don't quite understand what you are trying to accomplish with Win 7 and the thumbnail API. If the goal is to get textures of most of the windows on the desktop, then it can easily be done through the Win32 API alone. Below I'm including demo code that I quickly assembled piecemeal (the code is dirty, just for demo purposes). This strategy will give you everything except hardware-accelerated applications.

For hardware-accelerated content like some movies and video games it won't work. However, I strongly suspect that the thumbnail API, or any other API, can't possibly work there, because the data is not available in system memory. The video data is stored in the local memory of the video card, which is not accessible by OS functions; you need to talk to the video card drivers to get that data. I am quite sure it is possible through OpenGL or DirectX. My only concern is the latency of the video data retrieval: since the graphics rendering pipeline is optimized in the forward direction, getting pixel data back may simply be unoptimized.

NK

Below is an example desktop the way it looks when I launch my WinSurf application. The windows are purposefully positioned to overlap and stick beyond the screen edges.
Image

When you click Run->Launch, the program extracts the window client area from all windows and places them into individual BMP files. I show almost all of them below, assembled separately into one file. The output BMPs include the Windows panel and the background desktop, which you can also map to 3D if you wish. For a 3D interface you would keep them all in memory instead and slap them as textures onto 3D quads. (Edit: I forgot to copy-paste a couple of windows, like the QT window, but it's also available among the 10 or so files.)
Image

The code below was run on an X220T laptop, a nearly virgin Windows 7 OS, and the Visual Studio 11 beta available on the Microsoft website.
To reproduce, follow these instructions:
  1. Create a simple Win32 program with the VC++ wizard. The first handful of functions below are autogenerated.
  2. In the project settings, turn off Unicode to avoid dealing with wide characters.
  3. In the resource editor, add a menu entry Run -> Launch to be used as the trigger for window texturing, and assign it the ID_RUN_LAUNCH identifier.
  4. Modify WndProc() below to dispatch the ID_RUN_LAUNCH event to the OnLaunch(lParam) function.
  5. Copy the functions CreateBMPFile, CreateBitmapInfoStruct, OnLaunch, and EnumWindowsProc verbatim into your CPP file from the code below.
  6. Add the needed forward declarations and the globalWnd and cnt global variables. Assign hWnd to globalWnd in InitInstance().
  7. Compile and run. It should create a handful of BMP files with all your window textures.

Code:

// WinSurf.cpp : Defines the entry point for the application.

#include "stdafx.h"
#include "WinSurf.h"
#include <stdio.h>
#include <iostream>

#define MAX_LOADSTRING 100

// Global Variables:
HINSTANCE hInst;								// current instance
TCHAR szTitle[MAX_LOADSTRING];					// The title bar text
TCHAR szWindowClass[MAX_LOADSTRING];			// the main window class name

// Forward declarations of functions included in this code module:
ATOM				MyRegisterClass(HINSTANCE hInstance);
BOOL				InitInstance(HINSTANCE, int);
LRESULT CALLBACK	WndProc(HWND, UINT, WPARAM, LPARAM);
INT_PTR CALLBACK	About(HWND, UINT, WPARAM, LPARAM);

// Forward declarations
void OnLaunch(LPARAM);
PBITMAPINFO CreateBitmapInfoStruct(HWND hwnd, HBITMAP hBmp);
void CreateBMPFile(HWND hwnd, LPTSTR pszFile, PBITMAPINFO pbi, 
                  HBITMAP hBMP, HDC hDC);
// For message box error reporting
static HWND globalWnd;

int APIENTRY _tWinMain(_In_ HINSTANCE hInstance,
                     _In_opt_ HINSTANCE hPrevInstance,
                     _In_ LPTSTR    lpCmdLine,
                     _In_ int       nCmdShow)
{
	UNREFERENCED_PARAMETER(hPrevInstance);
	UNREFERENCED_PARAMETER(lpCmdLine);

 	// TODO: Place code here.
	MSG msg;
	HACCEL hAccelTable;

	// Initialize global strings
	LoadString(hInstance, IDS_APP_TITLE, szTitle, MAX_LOADSTRING);
	LoadString(hInstance, IDC_WINSURF, szWindowClass, MAX_LOADSTRING);
	MyRegisterClass(hInstance);

	// Perform application initialization:
	if (!InitInstance (hInstance, nCmdShow)) {	return FALSE; }

	hAccelTable = LoadAccelerators(hInstance, MAKEINTRESOURCE(IDC_WINSURF));

	// Main message loop:
	while (GetMessage(&msg, NULL, 0, 0))
	{
		if (!TranslateAccelerator(msg.hwnd, hAccelTable, &msg))
		{
			TranslateMessage(&msg);
			DispatchMessage(&msg);
		}
	}
	return (int) msg.wParam;
}

//  FUNCTION: MyRegisterClass()
//  PURPOSE: Registers the window class.
ATOM MyRegisterClass(HINSTANCE hInstance)
{
	WNDCLASSEX wcex;
	wcex.cbSize = sizeof(WNDCLASSEX);
	wcex.style			= CS_HREDRAW | CS_VREDRAW;
	wcex.lpfnWndProc	= WndProc;
	wcex.cbClsExtra		= 0;
	wcex.cbWndExtra		= 0;
	wcex.hInstance		= hInstance;
	wcex.hIcon			= LoadIcon(hInstance, MAKEINTRESOURCE(IDI_WINSURF));
	wcex.hCursor		= LoadCursor(NULL, IDC_ARROW);
	wcex.hbrBackground	= (HBRUSH)(COLOR_WINDOW+1);
	wcex.lpszMenuName	= MAKEINTRESOURCE(IDC_WINSURF);
	wcex.lpszClassName	= szWindowClass;
	wcex.hIconSm		= LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_SMALL));

	return RegisterClassEx(&wcex);
}

//   FUNCTION: InitInstance(HINSTANCE, int)
//   PURPOSE: Saves instance handle and creates main window
//   COMMENTS:
//        In this function, we save the instance handle in a global variable and
//        create and display the main program window.
BOOL InitInstance(HINSTANCE hInstance, int nCmdShow)
{
   HWND hWnd;
   hInst = hInstance; // Store instance handle in our global variable
   hWnd = CreateWindow(szWindowClass, szTitle, WS_OVERLAPPEDWINDOW,
      CW_USEDEFAULT, 0, CW_USEDEFAULT, 0, NULL, NULL, hInstance, NULL);
   if (!hWnd) { return FALSE; }
   ShowWindow(hWnd, nCmdShow);
   UpdateWindow(hWnd);
   globalWnd = hWnd;
   return TRUE;
}

//  FUNCTION: WndProc(HWND, UINT, WPARAM, LPARAM)
//  PURPOSE:  Processes messages for the main window.
//  WM_COMMAND	- process the application menu
//  WM_PAINT	- Paint the main window
//  WM_DESTROY	- post a quit message and return
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
	int wmId, wmEvent;
	PAINTSTRUCT ps;
	HDC hdc;

	switch (message)
	{
	case WM_COMMAND:
		wmId    = LOWORD(wParam);
		wmEvent = HIWORD(wParam);
		// Parse the menu selections:
		switch (wmId)
		{
		case ID_RUN_LAUNCH:
			OnLaunch(lParam);
			break;
		case IDM_ABOUT:
			DialogBox(hInst, MAKEINTRESOURCE(IDD_ABOUTBOX), hWnd, About);
			break;
		case IDM_EXIT:
			DestroyWindow(hWnd);
			break;
		default:
			return DefWindowProc(hWnd, message, wParam, lParam);
		}
		break;
	case WM_PAINT:
		hdc = BeginPaint(hWnd, &ps);
		// TODO: Add any drawing code here...
		EndPaint(hWnd, &ps);
		break;
	case WM_DESTROY:
		PostQuitMessage(0);
		break;
	default:
		return DefWindowProc(hWnd, message, wParam, lParam);
	}
	return 0;
}

// Message handler for about box.
INT_PTR CALLBACK About(HWND hDlg, UINT message, WPARAM wParam, LPARAM lParam)
{
	UNREFERENCED_PARAMETER(lParam);
	switch (message)
	{
	case WM_INITDIALOG:
		return (INT_PTR)TRUE;

	case WM_COMMAND:
		if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
		{
			EndDialog(hDlg, LOWORD(wParam));
			return (INT_PTR)TRUE;
		}
		break;
	}
	return (INT_PTR)FALSE;
}

// Purpose: intercept window handles, select windows of interest, and convert
// them into bitmap files for future texturing.
static int cnt = 0;
static BOOL CALLBACK EnumWindowsProc(HWND someWnd, LPARAM lParam)
{
	char winTitle[255];
	HBITMAP hbitmap;
	PBITMAPINFO bitmapInfo;
	RECT rect;
	HDC hDC;
	HDC hDCMem;
	int width;
	int height;

	cnt++;
	sprintf(winTitle, "file%d.bmp\0", cnt);

	if (IsWindow(someWnd) &&             // get rid of applications w/o windows
		IsWindowVisible(someWnd) &&      // skip hidden windows
		::GetParent(someWnd) == NULL) {  // select only top level windows
		hDC = ::GetWindowDC(someWnd);
		hDCMem = ::CreateCompatibleDC(hDC);
		if (hDC && hDCMem) {
			::GetWindowRect(someWnd, &rect);
			width = rect.right - rect.left;
			height = rect.bottom - rect.top;
			if (width > 0 && height > 0) {
				hbitmap = ::CreateCompatibleBitmap(hDC, width, height);
				if (hbitmap) {
					::SelectObject(hDCMem, hbitmap);
					::PrintWindow(someWnd, hDCMem, 0);
					bitmapInfo = CreateBitmapInfoStruct(someWnd, hbitmap);
					CreateBMPFile(someWnd, winTitle, bitmapInfo, hbitmap, hDC);
					::DeleteObject(hbitmap);
				} else {
					::MessageBox(globalWnd, 
						         "CreateCompatibleBitmap failed", NULL, NULL); 
				}
			}
		} else {
			::MessageBox(globalWnd, "hDC or hDCMem failed", NULL, NULL); 
		}
	}
	return TRUE;
}

// Sets EnumWindowsProc() as an intercept function for all windows on desktop
void OnLaunch(LPARAM lParam)
{
	HWND desktopWnd = ::GetDesktopWindow();
	EnumChildWindows(desktopWnd, EnumWindowsProc, lParam);
}

// From Microsoft website
// Purpose: constructs necessary data structures to convert DDB into DIB
PBITMAPINFO CreateBitmapInfoStruct(HWND hwnd, HBITMAP hBmp)
{ 
    BITMAP bmp; 
    PBITMAPINFO pbmi; 
    WORD    cClrBits; 

    // Retrieve the bitmap color format, width, and height.  
	if (!GetObject(hBmp, sizeof(BITMAP), (LPSTR)&bmp)) {
		::MessageBox(globalWnd, "GetObject", NULL, NULL); 
		PostQuitMessage(0);
	}

    // Convert the color format to a count of bits.  
    cClrBits = (WORD)(bmp.bmPlanes * bmp.bmBitsPixel); 
    if (cClrBits == 1) 
        cClrBits = 1; 
    else if (cClrBits <= 4) 
        cClrBits = 4; 
    else if (cClrBits <= 8) 
        cClrBits = 8; 
    else if (cClrBits <= 16) 
        cClrBits = 16; 
    else if (cClrBits <= 24) 
        cClrBits = 24; 
    else cClrBits = 32; 

    // Allocate memory for the BITMAPINFO structure. (This structure  
    // contains a BITMAPINFOHEADER structure and an array of RGBQUAD  
    // data structures.)  

     if (cClrBits < 24) 
         pbmi = (PBITMAPINFO) LocalAlloc(LPTR, 
                    sizeof(BITMAPINFOHEADER) + 
                    sizeof(RGBQUAD) * (1<< cClrBits)); 

     // There is no RGBQUAD array for these formats: 24-bit-per-pixel or 32-bit-per-pixel 

     else 
         pbmi = (PBITMAPINFO) LocalAlloc(LPTR, 
                    sizeof(BITMAPINFOHEADER)); 

    // Initialize the fields in the BITMAPINFO structure.  

    pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER); 
    pbmi->bmiHeader.biWidth = bmp.bmWidth; 
    pbmi->bmiHeader.biHeight = bmp.bmHeight; 
    pbmi->bmiHeader.biPlanes = bmp.bmPlanes; 
    pbmi->bmiHeader.biBitCount = bmp.bmBitsPixel; 
    if (cClrBits < 24) 
        pbmi->bmiHeader.biClrUsed = (1<<cClrBits); 

    // If the bitmap is not compressed, set the BI_RGB flag.  
    pbmi->bmiHeader.biCompression = BI_RGB; 

    // Compute the number of bytes in the array of color  
    // indices and store the result in biSizeImage.  
    // The width must be DWORD aligned unless the bitmap is RLE 
    // compressed. 
    pbmi->bmiHeader.biSizeImage = ((pbmi->bmiHeader.biWidth * cClrBits +31) & ~31) /8
                                  * pbmi->bmiHeader.biHeight; 
    // Set biClrImportant to 0, indicating that all of the  
    // device colors are important.  
     pbmi->bmiHeader.biClrImportant = 0; 
     return pbmi; 
} 

// From Microsoft website
// Purpose: converts in-memory bitmap into DIB on disk
void CreateBMPFile(HWND hwnd, LPTSTR pszFile, PBITMAPINFO pbi, 
                  HBITMAP hBMP, HDC hDC) 
{ 
    HANDLE hf;                 // file handle  
    BITMAPFILEHEADER hdr;       // bitmap file-header  
    PBITMAPINFOHEADER pbih;     // bitmap info-header  
    LPBYTE lpBits;              // memory pointer  
    DWORD dwTotal;              // total count of bytes  
    DWORD cb;                   // incremental count of bytes  
    BYTE *hp;                   // byte pointer  
    DWORD dwTmp; 

    pbih = (PBITMAPINFOHEADER) pbi; 
    lpBits = (LPBYTE) GlobalAlloc(GMEM_FIXED, pbih->biSizeImage);

    if (!lpBits) 
		::MessageBox(globalWnd, "GlobalAlloc", NULL, NULL); 

    // Retrieve the color table (RGBQUAD array) and the bits  
    // (array of palette indices) from the DIB.  
    if (!GetDIBits(hDC, hBMP, 0, (WORD) pbih->biHeight, lpBits, pbi, 
        DIB_RGB_COLORS)) 
    {
		::MessageBox(globalWnd, "GetDIBits", NULL, NULL); 
    }

    // Create the .BMP file.  
    hf = CreateFile(pszFile, 
                   GENERIC_READ | GENERIC_WRITE, 
                   (DWORD) 0, 
                    NULL, 
                   CREATE_ALWAYS, 
                   FILE_ATTRIBUTE_NORMAL, 
                   (HANDLE) NULL); 
    if (hf == INVALID_HANDLE_VALUE) 
		::MessageBox(globalWnd, "CreateFile", NULL, NULL); 

    hdr.bfType = 0x4d42;        // 0x42 = "B" 0x4d = "M"  
    // Compute the size of the entire file.  
    hdr.bfSize = (DWORD) (sizeof(BITMAPFILEHEADER) + 
                 pbih->biSize + pbih->biClrUsed 
                 * sizeof(RGBQUAD) + pbih->biSizeImage); 
    hdr.bfReserved1 = 0; 
    hdr.bfReserved2 = 0; 

    // Compute the offset to the array of color indices.  
    hdr.bfOffBits = (DWORD) sizeof(BITMAPFILEHEADER) + 
                    pbih->biSize + pbih->biClrUsed 
                    * sizeof (RGBQUAD); 

    // Copy the BITMAPFILEHEADER into the .BMP file.  
    if (!WriteFile(hf, (LPVOID) &hdr, sizeof(BITMAPFILEHEADER), 
        (LPDWORD) &dwTmp,  NULL)) 
    {
	   ::MessageBox(globalWnd, "WriteFile", NULL, NULL);
    }

    // Copy the BITMAPINFOHEADER and RGBQUAD array into the file.  
    if (!WriteFile(hf, (LPVOID) pbih, sizeof(BITMAPINFOHEADER) 
                  + pbih->biClrUsed * sizeof (RGBQUAD), 
                  (LPDWORD) &dwTmp, NULL))
		::MessageBox(globalWnd, "WriteFile", NULL, NULL);

    // Copy the array of color indices into the .BMP file.  
    dwTotal = cb = pbih->biSizeImage; 
    hp = lpBits; 
    if (!WriteFile(hf, (LPSTR) hp, (int) cb, (LPDWORD) &dwTmp,NULL)) 
           ::MessageBox(globalWnd, "WriteFile", NULL, NULL); 

    // Close the .BMP file.  
     if (!CloseHandle(hf)) 
           ::MessageBox(globalWnd, "CloseHandle", NULL, NULL);

    // Free memory.  
    GlobalFree((HGLOBAL)lpBits);
}
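As an aside, the biSizeImage expression in CreateBitmapInfoStruct above rounds each scanline up to a DWORD boundary. The same math, isolated as a standalone helper (the names are mine, not part of the sample):

```cpp
#include <cstdint>

// Bytes per BMP scanline, padded to a 4-byte (DWORD) boundary; this is the
// ((width * bits + 31) & ~31) / 8 term used for biSizeImage above.
uint32_t bmp_stride(uint32_t widthPx, uint32_t bitsPerPixel) {
    return ((widthPx * bitsPerPixel + 31u) & ~31u) / 8u;
}

// Total pixel-array size for an uncompressed DIB.
uint32_t bmp_image_size(uint32_t widthPx, uint32_t heightPx,
                        uint32_t bitsPerPixel) {
    return bmp_stride(widthPx, bitsPerPixel) * heightPx;
}
```

For example, a 3-pixel-wide 24-bit row occupies 9 bytes of pixel data but a 12-byte stride, which is why GetDIBits output cannot be uploaded as a tightly packed texture without accounting for the padding.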

StreetRat
Two Eyed Hopeful
Posts: 65
Joined: Sun Oct 24, 2010 11:11 pm

Re: VR/AR Windows "Desktop" development

Post by StreetRat »

The thing with Windows 7 was using the thumbnail API because it allowed constant updates, so the copies weren't just a one-off bitmap: you could edit or resize an app on the screen and the same thing would happen in the copy. It worked for movies, highlighting, or just typing.
The thumbnail uses the app's back buffer, so the data is already in memory; the idea was to just use this as a texture in DX or OpenGL. I still think this is the way to go, it's just a matter of finding out where the data is.

If you can update your images on a regular basis (60-ish frames a second) while you have a full-screen app on top, then it might be something to look into.
Thanks for posting the code too. C++ isn't one of my strong points, but I'll be interested to see how it all works.

User avatar
brantlew
Petrif-Eyed
Posts: 2220
Joined: Sat Sep 17, 2011 9:23 pm
Location: Menlo Park, CA

Re: VR/AR Windows "Desktop" development

Post by brantlew »

@NickK: Nice to see that people still work with the Win32 API sometimes instead of the mountainous heap of wrapper libraries that Microsoft churns out.

User avatar
cybereality
3D Angel Eyes (Moderator)
Posts: 11394
Joined: Sat Apr 12, 2008 8:18 pm
Which stereoscopic 3D solution do you primarily use?: S-3D desktop monitor

Re: VR/AR Windows "Desktop" development

Post by cybereality »

Thanks for posting this code NickK.

NickK
Two Eyed Hopeful
Posts: 55
Joined: Thu Jun 14, 2012 10:59 pm

Re: VR/AR Windows "Desktop" development

Post by NickK »

@streetRat:
Continuous updates are not a problem. I used the QueryPerformanceCounter() and QueryPerformanceFrequency() Win32 API functions to benchmark the wall-clock time it takes to get the BMP contents of all windows; see this for the relevant usage:
http://stackoverflow.com/questions/1739 ... ncecounter
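A portable analogue of that measurement, using std::chrono as a stand-in for the Win32 timer calls (a sketch of the technique, not the original benchmark code):

```cpp
#include <chrono>

// Wall-clock time, in milliseconds, taken by an arbitrary capture routine;
// analogous to bracketing the EnumWindows capture pass with
// QueryPerformanceCounter and dividing by QueryPerformanceFrequency.
template <typename F>
double elapsed_ms(F&& capture) {
    auto t0 = std::chrono::steady_clock::now();
    capture();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Wrapping the whole window-to-DIB pass in something like this is how the 1.6 - 2.0 msec figure below would be obtained.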

Comment out the actual file write operation, since it is not needed when you work with textures. I did the benchmarking on my X220T with an i5-2520M CPU running at 2.5GHz, with the following applications running to imitate typical usage patterns:
- Visual Studio 11
- QT Creator
- Notepad
- MS Paint
- Chrome
- Command prompt
- File explorer

The wall-clock time to get all the window contents and convert them into DIBs in memory varied in the range of 1.6 - 2.0 msec. At a 120Hz refresh rate you have about 8 msec per frame for full rendering, so the 2 msec window capture leaves you roughly 6 msec to map these textures into the 3D world. At a 60Hz refresh rate, 2 msec of overhead is negligible.
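The frame-budget arithmetic above can be written out as a trivial helper (the 8 msec figure is 1000/120, rounded down):

```cpp
// Per-frame time budget at a given refresh rate, and what remains for the
// 3D mapping pass after window capture: 1000/120 is about 8.3 ms, so a 2 ms
// capture leaves roughly 6.3 ms; at 60 Hz the budget is about 16.7 ms.
double frame_budget_ms(double refreshHz) {
    return 1000.0 / refreshHz;
}

double render_budget_ms(double refreshHz, double captureMs) {
    return frame_budget_ms(refreshHz) - captureMs;
}
```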

@Brantlew:
I have been programming for Linux for the last 5+ years, and my primary desktop is Linux too. That's why I'm not up to date on Windows' latest programming packages and frameworks.

@Cybereality:
No problem. It's just a prototype code for testing, nothing fancy.

@LeeN:
Proprietary Linux drivers are reasonably good for both Nvidia and AMD. Please check that you have the latest drivers supporting OpenGL 3+, preferably OpenGL 3.3:
glxinfo | grep OpenGL
The problems with proprietary drivers usually occur when the Linux kernel is newly upgraded. I wouldn't recommend going with the latest kernel for that reason.

Open source graphics drivers (Nouveau for Nvidia and Radeon for AMD) are not good enough for virtual reality 3D. They are based on the Mesa/Gallium3D open source driver stack, which is only starting to enable shaders in OpenGL 3. In my opinion, it is not ready for 3D yet.

That said, are you sure the performance degradation is due to graphics drivers? X11 is not particularly fast and, to my knowledge, it does redundant copying of application window contents. Do you know whether the runtime is gated by window capture or by 3D mapping? In my experiments on GTX 9800 Nvidia hardware with the latest proprietary Linux drivers I can easily get a 500Hz frame rate for simple 3D scenes. Plus, quite a few people are actually gaming on Linux; I personally tested Doom 3 and it runs fine on my Ubuntu system with a GTX 9800. So my guess is that the proprietary drivers are not likely to be the bottleneck.

LeeN
Cross Eyed!
Posts: 140
Joined: Sat Jul 17, 2010 10:28 am
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV

Re: VR/AR Windows "Desktop" development

Post by LeeN »

NickK wrote: Proprietary Linux drivers are reasonably good for both Nvidia and AMD. Please check that you have the latest drivers supporting OpenGL 3+, preferably OpenGL 3.3:
glxinfo | grep OpenGL
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Mobility Radeon HD 5800 Series
OpenGL version string: 3.2.9756 Compatibility Profile Context
OpenGL shading language version string: 1.50

I do also have the Catalyst Control Center, but it looks like the settings are mostly set for performance.
NickK wrote: The problems with proprietary drivers usually occur when the Linux kernel is newly upgraded. I wouldn't recommend going with the latest kernel for that reason.

Open source graphics drivers (Nouveau for Nvidia and Radeon for AMD) are not good enough for virtual reality 3D. They are based on the Mesa/Gallium3D open source driver stack, which is only starting to enable shaders in OpenGL 3. In my opinion, it is not ready for 3D yet.

That said, are you sure the performance degradation is due to graphics drivers? X11 is not particularly fast and, to my knowledge, it does redundant copying of application window contents. Do you know whether the runtime is gated by window capture or by 3D mapping? In my experiments on GTX 9800 Nvidia hardware with the latest proprietary Linux drivers I can easily get a 500Hz frame rate for simple 3D scenes. Plus, quite a few people are actually gaming on Linux; I personally tested Doom 3 and it runs fine on my Ubuntu system with a GTX 9800. So my guess is that the proprietary drivers are not likely to be the bottleneck.
I'm not positive it's graphics-driver related; I'm guessing based entirely on the fact that changing the size of my application's window impacts performance more than almost anything else I've seen, even when drawing only a single terminal window in 3D.

I do have compiz installed, which could be degrading rendering performance in my application.

At this point, though, I am not going to do much more for performance, as I am confident it should not really be a problem in the future as long as my architecture itself doesn't cause performance degradation.


My current status:
I implemented normal crossing events, which fixed a lot of pointer input issues I was seeing; the only thing I do not currently support is grab events and the crossing events related to them.
CMake is done and working.
I'm working on modularizing/refactoring my code more (:oops: I'd be embarrassed if anyone saw it in its current state).
I want to have a set of modules for using the mouse only (no Hydra), plus the ability to configure and experiment with the Razer Hydra in different ways. I think this will let others play with it without needing any hardware, and provide additional examples for anyone who wants to integrate other hardware or libraries for head tracking and pointer devices.
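One way to sketch that module idea (purely hypothetical names and shapes; nothing here is from LeeN's code): the desktop polls whichever module is active for a head pose and a pointer ray each frame, so a mouse-only module and a Hydra module become interchangeable:

```cpp
#include <array>
#include <string>

// A pose: position plus orientation quaternion (x, y, z, w).
struct Pose {
    std::array<float, 3> position{};
    std::array<float, 4> orientation{0.f, 0.f, 0.f, 1.f};
};

// Hypothetical interface: head tracking and pointing are queried through the
// same abstraction regardless of the underlying hardware.
class InputModule {
public:
    virtual ~InputModule() = default;
    virtual Pose headPose() = 0;    // where the user's head is
    virtual Pose pointerRay() = 0;  // where the user is pointing
    virtual std::string name() const = 0;
};

// Mouse-only fallback: a fixed head pose, with the pointer derived from the
// 2D cursor, so no tracking hardware is required at all.
class MouseOnlyModule : public InputModule {
public:
    Pose headPose() override { return Pose{}; }
    Pose pointerRay() override { return Pose{}; }
    std::string name() const override { return "mouse-only"; }
};
```

A Hydra-backed module would implement the same interface on top of the controller SDK, which is what would let others swap in their own head-tracking or pointer hardware.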

StreetRat
Two Eyed Hopeful
Posts: 65
Joined: Sun Oct 24, 2010 11:11 pm

Re: VR/AR Windows "Desktop" development

Post by StreetRat »

@NickK
Nice to hear it can work well.
I tried PrintWindow, after finding it in your code, in VB.net using XNA, but it was kind of slow.
If it can work at a decent FPS I might have another look and try to speed everything up a bit.
I'm sticking with XNA/DirectX for now because it works with Nvidia's 3D Vision, but if it's for the Rift and side-by-side, then it wouldn't matter whether it's OpenGL or DirectX.

I did start creating an 'always on top', 'borderless', 'control-bar-less' form in VB and used the thumbnail API on that. I got it moving stuff around the screen and resizing things using the MoveWindow command, but it's slow, there are repaint issues, and I still haven't found a way to show or click menu items.
Regardless of how the items are displayed, it's pointless if you can't see or click on a menu.
In Windows, though, each menu item has its own hwnd, which can be obtained through some API calls, but I'm not sure if they can be displayed.

jbboehr
One Eyed Hopeful
Posts: 5
Joined: Sun Aug 05, 2012 3:19 pm
Which stereoscopic 3D solution do you primarily use?: S-3D HDTV
Location: Southern California
Contact:

Re: VR/AR Windows "Desktop" development

Post by jbboehr »

As soon as I saw the Oculus Rift, I immediately started looking for a project like this. While my experience with C is limited and with X nonexistent, I'm very interested in contributing to this project in any capacity.

Post Reply

Return to “VR/AR Research & Development”