EzDevInfo.com

vr.js

NPAPI plugin to expose fun VR devices to Javascript.

OpenGL Asymmetric Frustum for Desktop VR

I am making an OpenGL C++ application that tracks the user's location relative to the screen and then updates the rendered scene to the user's perspective. This is known as "desktop VR"; you can think of the screen as a diorama or fish tank. I am rather new to OpenGL and have only defined a very simple scene so far, just a cube, and it is initially rendered correctly.

The problem is that when I start moving and want to re-render the cube scene, the projection plane seems translated and I don't see what I think I should. I want this plane fixed. If I were writing a ray tracer, my window would always be fixed, but my eye would be allowed to wander. Can someone please explain how I can achieve the effect I desire (pinning the viewing window) while having my camera/eye wander at a non-origin coordinate? All of the examples I find demand that the camera/eye be at the origin, but this is not conceptually convenient for me. Also, because this is a "fish tank", I am setting my d_near to be the xy-plane, where z = 0.

In screen/world space, I assign the center of the screen to (0, 0, 0) and its 4 corners to TL(-44.25, 25, 0), TR(44.25, 25, 0), BR(44.25, -25, 0), BL(-44.25, -25, 0). These values are in cm for a 16:9 display.

I then calculate the user's eye position (actually a webcam on my face) using POSIT; it is usually somewhere in the range of (±40, ±40, 40-250). My POSIT method is accurate.
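From what I've read about fish-tank VR so far, I think the relationship I'm after is roughly the following (a sketch only, writing the screen bounds as $x_l, x_r, y_b, y_t$ and the tracked eye as $e = (e_x, e_y, e_z)$ with $e_z > 0$; the concrete bounds are the corner coordinates above):

\[
  l = \frac{d_{\text{near}}\,(x_l - e_x)}{e_z}, \qquad
  r = \frac{d_{\text{near}}\,(x_r - e_x)}{e_z}, \qquad
  b = \frac{d_{\text{near}}\,(y_b - e_y)}{e_z}, \qquad
  t = \frac{d_{\text{near}}\,(y_t - e_y)}{e_z}
\]

so the frustum becomes asymmetric as soon as the eye leaves the screen's center line, and with the screen plane itself used as the near plane ($d_{\text{near}} = e_z$) this simply becomes $l = x_l - e_x$, $r = x_r - e_x$, $b = y_b - e_y$, $t = y_t - e_y$.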

I am defining my own matrices for the perspective and viewing transforms and using shaders.

I initialize as follows:

float right = 44.25;
float left = -44.25;
float top = 25.00;
float bottom = -25.00; 

vec3 eye = vec3(0.0, 0.0, 100.0);
vec3 view_dir = vec3(0.0, 0.0, -1.0);
vec3 up = vec3(0.0, 1.0, 0.0);
vec3 n = normalize(-view_dir);
vec3 u = normalize(cross(up, n)); 
vec3 v = normalize(cross(n, u));

float d_x = -(dot(eye, u));
float d_y = -(dot(eye, v));
float d_z = -(dot(eye, n));

float d_near = eye.z;
float d_far = d_near + 50;

// perspective transform matrix
mat4 P = mat4((2.0*d_near)/(right-left ), 0, (right+left)/(right-left), 0, 
            0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
            0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
            0, 0, -1.0, 0);

// viewing transform matrix
mat4 V = mat4(u.x, u.y, u.z, d_x,
              v.x, v.y, v.z, d_y,
              n.x, n.y, n.z, d_z,
              0.0, 0.0, 0.0, 1.0);

mat4 MV = C * V;
//MV = V;

From what I gather looking on the web, my view_dir and up are to remain fixed. This means that I need only update d_near and d_far, as well as d_x, d_y, and d_z? I do this in my glutIdleFunc( idle );

void idle (void) {  

    hBuffer->getFrame(hFrame);
    if (hFrame->goodH && hFrame->timeStamp != timeStamp) {
        timeStamp = hFrame->timeStamp;
        std::cout << "(" << hFrame->eye.x << ", " <<
                    hFrame->eye.y << ", " <<
                    hFrame->eye.z << ") \n";

        eye = vec3(hFrame->eye.x, hFrame->eye.y, hFrame->eye.z);

        d_near = eye.z;
        d_far = eye.z + 50;

        P = mat4((2.0*d_near)/(right-left), 0, (right+left)/(right-left), 0, 
                 0, (2.0*d_near)/(top-bottom), (top+bottom)/(top-bottom), 0,
                 0, 0, -(d_far+d_near)/(d_far-d_near), -(2.0*d_far*d_near)/(d_far-d_near),
                 0, 0, -1.0, 0);

        d_x = -(dot(eye, u));
        d_y = -(dot(eye, v));
        d_z = -(dot(eye, n));

        C = mat4(1.0, 0.0, 0.0, eye.x,
                 0.0, 1.0, 0.0, eye.y,
                 0.0, 0.0, 1.0, 0.0,
                 0.0, 0.0, 0.0, 1.0);

        V = mat4(u.x, u.y, u.z, d_x,
                 v.x, v.y, v.z, d_y,
                 n.x, n.y, n.z, d_z,
                 0.0, 0.0, 0.0, 1.0);

        MV = C * V;
        //MV = V;

        glutPostRedisplay();
    }
}

Here is my shader code:

#version 150

uniform mat4 MV;
uniform mat4 P;
in vec4 vPosition;
in vec4 vColor;
out vec4 color;

void 
main() 
{ 
    gl_Position = P * MV * vPosition;
    color = vColor;
}

OK, I made some changes to my code, but without success. When I use V in place of MV in the vertex shader, everything looks as I want it to: the perspective is correct and the objects are the right size. However, the scene is translated by the displacement of the camera. When using C and V to obtain MV, my scene is rendered from the perspective of an observer straight on, and the rendered scene fills the window as it should, but the perspective of the eye/camera is lost.

Really, what I want is to translate the 2D pixels, the projection plane, by the appropriate x and y values of the eye/camera, so as to keep the center of the object (whose xy center is (0,0)) in the center of the rendered image. I am guided by the examples in the textbook "Interactive Computer Graphics: A Top-Down Approach with Shader-Based OpenGL (6th Edition)". Using the files paired with the book, freely available on the web, I am continuing with the row-major approach.
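To make concrete what I mean by pinning the window, here is a small sketch of how I understand the per-frame rebuild would look. This is not my actual code: it assumes GLM-style column-major helpers (glm::frustum, glm::translate) rather than the book's row-major mat4, and the function names are just placeholders.

// Sketch only: rebuild an off-axis frustum from the tracked eye each frame.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 fishTankProjection(const glm::vec3& eye)
{
    // Physical screen rectangle at z = 0, in the same units as the eye (cm).
    const float xl = -44.25f, xr = 44.25f;
    const float yb = -25.00f, yt = 25.00f;

    // Keep the screen plane as the near plane, matching my d_near = eye.z.
    const float zNear = eye.z;
    const float zFar  = zNear + 50.0f;

    // Frustum bounds on the near plane, shifted by the eye's x/y offset.
    // With zNear == eye.z these are simply (screen bound - eye offset).
    const float l = (xl - eye.x) * zNear / eye.z;
    const float r = (xr - eye.x) * zNear / eye.z;
    const float b = (yb - eye.y) * zNear / eye.z;
    const float t = (yt - eye.y) * zNear / eye.z;

    return glm::frustum(l, r, b, t, zNear, zFar);
}

glm::mat4 fishTankView(const glm::vec3& eye)
{
    // view_dir and up stay fixed, so the view transform is just the
    // translation that moves the eye back to the origin.
    return glm::translate(glm::mat4(1.0f), -eye);
}

// Per frame: P = fishTankProjection(eye); MV = fishTankView(eye);
// then gl_Position = P * MV * vPosition, with the cube's vertices
// already expressed in screen/world coordinates.

If that understanding is right, then left/right/top/bottom would have to be rebuilt from the eye every frame rather than held constant as in my initialization above.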

The following images are taken when not using the matrix C to create MV. When I do use C to create MV, all scenes look like the first image below. I desire no translation in z and so I leave that as 0. Because the projection plane and my camera plane are parallel, the conversion from one to the other coordinate system is simply a translation and inv(T) ~ -T.

Here is my image for the eye at (0,0,50):

Here is my image for the eye at (56,-16,50):


Source: (StackOverflow)

Tutorial for building a Safari HTML5 panorama?

Is there a tutorial out there for building an HTML5-based VR panorama like the one presented in Apple's technology demo? I'm asking because there are very few search results or tutorials on reproducing the VR effect. Here's the link:

VR Demo


Source: (StackOverflow)


Google Cardboard setVRModeEnabled(false) not displaying anything

I'm working on an app and am modifying the Treasure Hunt sample code. I have a visual effect that isn't working correctly and that I'd like to debug, which is difficult to do in stereo mode. It seems I could just call cardboardView.setVRModeEnabled(false); in onCreate(), and it would skip the distortion correction and render only one eye. That would be perfect.

However, all I see is black.

I reduced my onDrawEye() to simply:

glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

When setVRModeEnabled is true, my screen is red. When it is false, all I see is black. Any ideas?


Source: (StackOverflow)

HTML5 panoramic

I am using this Flash panorama viewer (http://flashpanoramas.com/player/), but for devices without Flash support, are there any good HTML alternatives? It needs to support proper 3D panoramas, such as Google Street View.


Source: (StackOverflow)

Samsung Gear VR - handle back button clicks

How can I handle back button clicks: a single click, a long click to pause the game, and destroying the game if it is pressed again?


Source: (StackOverflow)

MMORPG / VR Architecture [closed]

Could anyone provide a link to an article or blog post discussing the architecture of an MMORPG or virtual reality server, or another system with a rich 3D client?


Source: (StackOverflow)

Implement a Kalman filter to smooth data from deviceOrientation API

I'm trying to smooth the data I'm getting from the deviceOrientation API to make a Google Cardboard application in the browser.

I'm piping the accelerometer data straight into the Three.js camera rotation, but we're getting a lot of noise on the signal, which is causing the view to judder.

Someone suggested a Kalman filter as the best way to approach smoothing out signal noise, and I found this simple JavaScript library on GitHub:

https://github.com/itamarwe/kalman

However, it's really light on documentation.

I understand that I need to create a Kalman model by providing a vector and 3 matrices as arguments, and then update the model, again with a vector and matrices as arguments, over a time frame.

I also understand that a Kalman filter equation has several distinct parts: the current estimated position, the Kalman gain value, the current reading from the orientation API and the previous estimated position.

I can see that a point in 3D space can be described as a Vector so any of the position values, such as an estimated position, or the current reading can be a Vector.

What I don't understand is how these parts could be translated into Matrices to form the arguments for the Javascript library.
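To check my understanding of the pieces, here is how I currently think they map onto the standard filter equations, under the simplest possible assumptions (the state $x$ is just the orientation vector, the state transition $F$ and observation matrix $H$ are identity, and $Q$ and $R$ are noise covariances I would have to tune; none of this is specific to the library above):

\[
\begin{aligned}
\hat{x}_{k\mid k-1} &= F\,\hat{x}_{k-1\mid k-1} \\
P_{k\mid k-1} &= F\,P_{k-1\mid k-1}\,F^{\top} + Q \\
K_k &= P_{k\mid k-1}\,H^{\top}\bigl(H\,P_{k\mid k-1}\,H^{\top} + R\bigr)^{-1} \\
\hat{x}_{k\mid k} &= \hat{x}_{k\mid k-1} + K_k\bigl(z_k - H\,\hat{x}_{k\mid k-1}\bigr) \\
P_{k\mid k} &= \bigl(I - K_k H\bigr)\,P_{k\mid k-1}
\end{aligned}
\]

Here $\hat{x}$ is the estimated orientation, $z_k$ is the raw deviceOrientation reading, $K_k$ is the Kalman gain, and $P$ is the estimate covariance; with $F = H = I$ the update just blends the previous estimate with the new reading by the gain.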


Source: (StackOverflow)

Description-File for physical setup of Multi-Monitors

I need machine-readable descriptions of multi-monitor and VR setups, such as simple dual-screen computers, Powerwalls, and CAVEs. The description must include the sizes and placements of all outputs (displays or projections) in physical space.

The long-term goal is to combine user (head) tracking, device tracking for mobile devices, etc. with multi-display environments.

  • The simplest issue is being aware of the gaps between the screens of a multi-monitor setup caused by the bezels of the display cases.
  • The most complex setups would probably be CAVEs with polygonal or curved projection surfaces.

My impression is that every piece of VR software out there defines its own crackpot setup-config text-file format. Is there a common standard or common practice I am missing?


Source: (StackOverflow)

Oculus Rift VR - Samples projects error

I downloaded the SDK from the Oculus Rift website, and I'm trying to run the projects in the samples folder. When I build a project I get an error that says:

fatal error C1083: Cannot open include file: 'd3dcompiler.h': No such file or directory.

although the lib files are added in the linker settings. Has anyone else got this error in the sample projects?


Source: (StackOverflow)

Recenter or Reorient view with Cardboard SDK on Unity

With Unity, the CardboardHead script is added to the main camera and that handles everything quite nicely, but I need to be able to "recenter" the view on demand. The only option I see so far is to rotate the entire scene, and it seems like this is something the SDK would address first-hand, but I can't find anything in the docs.

With the Oculus Mobile SDK (Gear VR), it would be OVRCamera.ResetCameraPositionOrientation(Vector3.one, Vector3.zero, Vector3.up, Vector3.zero);, though they handle it nicely each time the viewer is put on, so it's rarely needed there.


Source: (StackOverflow)

Samsung Gear VR - open menu item from Gear VR tap click

I created a menu (canvas UI) with the render mode set to World Space; it consists of two buttons:

Start

Quit

I'm trying to trigger Start with Gear VR taps, but it doesn't work.

using UnityEngine;
using System.Collections;

public class LoadOnClick : MonoBehaviour {

    public void LoadScene(int Level){
        if (Input.GetButtonDown (0)) {
            Application.LoadLevel (Level);
        }

    }
}

Note that it works for me in play mode, with a mouse click.

Suggestion: should I add a tag to the button?

What I want is to make sure I understand well when OnClick is fired to run my script.

Can I specify the input here?



Source: (StackOverflow)

Three.js - VRControls integration - How to move in the scene?

I use Three.js to render and move (my OrbitControls changes camera.position) in a small scene.
Now I have an Oculus Rift, so I added VRControls and VREffect.
There is no problem moving the head.
But I can no longer move in the scene, because VRControls overrides the camera parameters:

object.quaternion.copy( state.orientation ); // object is the camera

I thought it would be easy to correct: I only have to update the camera instead of overriding it:

object.quaternion.copy(stateOrientationQuat.multiply(currentCameraQuat));

But it does not work: it renders a flickering, moving scene. VRControls and OrbitControls seem to fight...

Could you tell me what to do to integrate VRControls into an existing project? If you have the update code (I don't really know quaternions...), it would help a lot.

Thanks


Source: (StackOverflow)

Circular path model along the x-axis in Simulink with Virtual Reality

I designed a circular path model in Simulink; however, I want to rotate an object (e.g. a ball) along the X-axis, but it rotates along the Y-axis by default. How can I do this with the VR Signal Expander or by editing the Virtual Reality world? What I want is to model a spiral in which the object starts from the bottom and moves upwards. I have a model in which:

  • A clock is connected to two trigonometry function blocks, Sin and Cos; each output is multiplied by the clock value (the Sin value times the clock value, and likewise for Cos), and the two generated signals are muxed with a third clock signal in order to grow the circular path (see the parametric form after the .wrl listing below). I can't use the VR Signal Expander, since it doesn't support the 3 multiplexed signals, and the final translation requires 3 signals, so without the VR Signal Expander it works, but only along the Y-axis.

  • My virtual reality .wrl file:

#VRML V2.0 utf8

#Created with V-Realm Builder v2.0
#Integrated Data Systems Inc.
#www.ids-net.com


SpotLight {
    cutOffAngle 0.785398
    direction   0 -0.995037 -0.0995036
    location    0 30 0
    on  TRUE
}
Background {
    groundAngle [ 0.9, 1.5, 1.57 ]
    groundColor [ 0 0.8 0,
              0.174249 0.82 0.187362,
              0.467223 0.82 0.445801,
              0.621997 0.67 0.600279 ]
    skyAngle    [ 0.1, 1.2, 1.57 ]
    skyColor    [ 0.76238 0.8 0.1427,
              0.277798 0.219779 0.7,
              0.222549 0.390234 0.7,
              0.60094 0.662637 0.69 ]
}
Viewpoint {
    jump    TRUE
    position    0 10 40
    description "My_View"
}
DEF Ball Transform {
    translation -0.3753 13.4729 -10
    scale   1.5 1.5 1.5
    children Shape {
        appearance  Appearance {
            material    Material {
                diffuseColor    0.8 0.471841 0.503074
                emissiveColor   0.35 0.237062 0.271762
                specularColor   0.45 0.282414 0.308843
            }

        }

        geometry    Sphere {
            radius  1.5
        }

    }
}
Transform {
    translation 0 -0.1 0
    children Shape {
        appearance  Appearance {
            material    Material {
            }

            texture ImageTexture {
                url "texture/Brick_2.jpg"
            }

        }

        geometry    Box {
            size    20 0.1 20
        }

    }
}
DEF Top_View Viewpoint {
    fieldOfView 0.785398
    jump    TRUE
    orientation -1 0 0  0.753982
    position    0 34.6716 22.3133
    description "Top_View"
}
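For reference, the path I believe my Sin/Cos/clock products describe is the parametric spiral below (a sketch only, and I may have the axes mixed up; in VRML the Y axis is up). The axis the spiral advances along is simply whichever component receives the plain clock term in the Mux:

\[
\text{around } Y:\;
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} r\cos(\omega t) \\ k\,t \\ r\sin(\omega t) \end{pmatrix},
\qquad
\text{around } X:\;
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} k\,t \\ r\cos(\omega t) \\ r\sin(\omega t) \end{pmatrix}
\]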

Source: (StackOverflow)

Method to fix the video-projector deformation with GLSL/HLSL full-screen shader

I am working in the VR field, where good calibration of a projected screen is very important. Because of difficult-to-adjust ceiling mounts and other hardware specifics, I am looking for a full-screen shader method to "correct" the shape of the screen.

Most 2D or 3D engines allow you to apply a full-screen effect or deformation by redrawing the rendering result on a quad that you can deform or render in a custom way.

The first idea was to use a vertex shader to offset the corners of this screen quad, so the image is deformed as a quadrilateral (like the hardware keystone on a projector), but it won’t be enough for the requirements (this approach is described on math.stackexchange with a live fiddle demo).

In my target case:

  • The image deformation must be non-linear most of the time, so 9 or 16 control points are needed to get a finer adjustment.
  • The borders of the image are not straight (barrel or pincushion distortion), so even with few control points, the image must be distorted in a curved way in between; otherwise the deformation would show visible linear seams at each control point's limits. Ideally, knowing the corrected position of each control point of a 3x3 or 4x4 grid, the way would be to define a continuous transform for the texture coordinates of the image being drawn on the full-screen quad (see the sketch after this list):

u,v => corrected_u, corrected_v

You can find an illustration here.

  • I’ve saw some FFD algorithm that works in 2D or 3D that would allow to deform “softly” an image or mesh as if it was made of rubber, but the implementation seems heavy for a real-time shader.
  • I thought also of a weight-based deformation as we have in squeletal/soft-bodies animation, but seems uncertain to weight properly the control points. Do you know a method, algorithm or general approach that could help me solve the problem ?
  • I saw some mesh-based deformation like the new Oculus Rift DK2 requires for its own deformations, but most of the 2D/3D engine use a single quad made of 4 vertices only in standard.
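For the 4x4-grid case, the kind of lookup I have in mind is sketched below on the CPU side (hypothetical and untested; the type and function names are mine, and the same arithmetic would move into the fragment shader). It uses Catmull-Rom interpolation between the corrected control points, so the warp stays smooth across cell boundaries instead of producing linear seams:

// Sketch: remap (u, v) through a 4x4 grid of corrected texture coordinates
// using bicubic Catmull-Rom interpolation. grid[row][col] holds where the
// ideal control point (col/3, row/3) should actually be sampled from.
#include <algorithm>
#include <array>

struct Vec2 { float x, y; };
using WarpGrid = std::array<std::array<Vec2, 4>, 4>;

static Vec2 catmullRom(const Vec2& p0, const Vec2& p1,
                       const Vec2& p2, const Vec2& p3, float t)
{
    // Catmull-Rom basis evaluated component-wise for the segment p1..p2.
    const float t2 = t * t, t3 = t2 * t;
    auto interp = [&](float a, float b, float c, float d) {
        return 0.5f * (2.0f * b + (-a + c) * t +
                       (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                       (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { interp(p0.x, p1.x, p2.x, p3.x),
             interp(p0.y, p1.y, p2.y, p3.y) };
}

// u, v in [0, 1]  ->  corrected_u, corrected_v
Vec2 correctedUV(const WarpGrid& grid, float u, float v)
{
    const int N = 4;                                 // control points per axis
    auto clampIdx = [&](int i) { return std::clamp(i, 0, N - 1); };

    // Locate the cell and the local parameter inside it.
    const float fx = u * (N - 1), fy = v * (N - 1);
    const int ix = std::min(static_cast<int>(fx), N - 2);
    const int iy = std::min(static_cast<int>(fy), N - 2);
    const float tx = fx - ix, ty = fy - iy;

    // Interpolate along x for the four neighbouring rows, then along y.
    Vec2 col[4];
    for (int r = -1; r <= 2; ++r) {
        const auto& row = grid[clampIdx(iy + r)];
        col[r + 1] = catmullRom(row[clampIdx(ix - 1)], row[ix],
                                row[ix + 1], row[clampIdx(ix + 2)], tx);
    }
    return catmullRom(col[0], col[1], col[2], col[3], ty);
}

A 3x3 grid would be the same idea with N = 3, and in a shader the grid could live in a small uniform array or a float texture.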

Source: (StackOverflow)

Oculus Rift VR with C++

I have the Oculus Rift VR and I downloaded the SDK from their website. I'm using Visual Studio 2010 Pro, and I did everything mentioned in the wiki page "Minimal Oculus Application Tutorial".

I added the lib files and all the things they said. But I'm getting a lot of errors when I add the line #include "OVR.h"

It doesn't find all the header files referenced from this file, even though I did everything they mentioned TWICE!

Any help?


Source: (StackOverflow)