EzDevInfo.com

three.js interview questions

Top frequently asked three.js interview questions

Changing three.js background to transparent or other color

I've been trying to change what seems to be the default background color of my canvas from black to transparent, or to any other color, but with no luck.

My HTML:

<canvas id="canvasColor"></canvas>

My CSS:

<style type="text/css">
#canvasColor {
    z-index: 998;
    opacity: 1;
    background: red;
}
</style>

As you can see in the online example linked below, I have an animation appended to the canvas, so I can't just set opacity: 0; on the id.

Live preview: http://devsgs.com/preview/test/particle/

Any ideas how to overwrite the default black?
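
For reference, a minimal sketch of the usual fix, assuming a reasonably recent three.js build (older revisions used setClearColorHex instead):

// create the renderer with an alpha channel so the page behind the canvas shows through
var renderer = new THREE.WebGLRenderer({ alpha: true });

// fully transparent clear color; pass alpha 1 and any hex value for a solid color instead
renderer.setClearColor(0x000000, 0);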


Source: (StackOverflow)

Using textures in THREE.js

I am starting with THREE.js, and I am trying to draw a rectangle with a texture on it, lit by a single source of light. I think this is as simple as it gets (HTML omitted for brevity):

function loadScene() {
    var world = document.getElementById('world'),
        WIDTH = 1200,
        HEIGHT = 500,
        VIEW_ANGLE = 45,
        ASPECT = WIDTH / HEIGHT,
        NEAR = 0.1,
        FAR = 10000,

        renderer = new THREE.WebGLRenderer(),
        camera = new THREE.Camera(VIEW_ANGLE, ASPECT, NEAR, FAR),
        scene = new THREE.Scene(),
        texture = THREE.ImageUtils.loadTexture('crate.gif'),
        material = new THREE.MeshBasicMaterial({map: texture}),
        // material = new THREE.MeshPhongMaterial({color: 0xCC0000});
        geometry = new THREE.PlaneGeometry(100, 100),
        mesh = new THREE.Mesh(geometry, material),
        pointLight = new THREE.PointLight(0xFFFFFF);

    camera.position.z = 200;    
    renderer.setSize(WIDTH, HEIGHT);
    scene.addChild(mesh);
    world.appendChild(renderer.domElement);
    pointLight.position.x = 50;
    pointLight.position.y = 50;
    pointLight.position.z = 130;
    scene.addLight(pointLight); 
    renderer.render(scene, camera);
}

The problem is, I cannot see anything. If I change the material and use the commented one, a square appears as I would expect. Note that

  • The texture is 256x256, so its sides are a power of two
  • The function is actually called when the body is loaded; indeed it works with a different material.
  • It does not work even if I serve the file from a webserver, so it is not an issue of a cross-domain policy preventing the image from loading.

What am I doing wrong?
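
For context, one plausible cause (an assumption, not confirmed): loadTexture is asynchronous, and the scene is rendered exactly once, possibly before the image has arrived. A sketch using the callback parameter of this era's loader:

// render again once the texture image has actually loaded
texture = THREE.ImageUtils.loadTexture('crate.gif', undefined, function () {
    renderer.render(scene, camera);
});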


Source: (StackOverflow)


Improved Area Lighting in WebGL & ThreeJS

I have been working on an area lighting implementation in WebGL similar to this demo:

http://threejs.org/examples/webgldeferred_arealights.html

The above implementation in three.js was ported from the work of ArKano22 over on gamedev.net:

http://www.gamedev.net/topic/552315-glsl-area-light-implementation/

Though these solutions are very impressive, they both have a few limitations. The primary issue with ArKano22's original implementation is that the calculation of the diffuse term does not account for surface normals.

I have been augmenting this solution for some weeks now, working with the improvements by redPlant to address this problem. Currently I have normal calculations incorporated into the solution, BUT the result is also flawed.

Here is a sneak preview of my current implementation:

area lighting teaser

Introduction

The steps for calculating the diffuse term for each fragment are as follows (a code sketch follows the list):

  1. Project the vertex onto the plane that the area light sits on, so that the projected vector is coincident with the light's normal/direction.
  2. Check that the vertex is on the correct side of the area light plane by comparing the projection vector with the light's normal.
  3. Calculate the 2D offset of this projected point on the plane from the light's center/position.
  4. Clamp this 2D offset vector so that it sits inside the light's area (defined by its width and height).
  5. Derive the 3D world position of the projected and clamped 2D point. This is the nearest point on the area light to the vertex.
  6. Perform the usual diffuse calculations that you would for a point light by taking the dot product between the vertex-to-nearest-point vector (normalised) and the vertex normal.
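
A CPU-side sketch of these six steps using THREE.Vector3 (assuming a recent build; the real implementation lives in the fragment shader, and the light object here is a hypothetical stand-in for the uniforms listed further down):

// steps 1-6; `light` holds position, normal, right, up (unit vectors), width, height
function nearestPointDiffuse(vertexPos, vertexNormal, light) {
    // 1. project the vertex onto the light's plane along the light normal
    var toVertex = new THREE.Vector3().subVectors(vertexPos, light.position);
    var distToPlane = toVertex.dot(light.normal);
    var projected = vertexPos.clone().sub(light.normal.clone().multiplyScalar(distToPlane));

    // 2. reject fragments on the wrong side of the light's plane
    if (distToPlane < 0.0) return 0.0;

    // 3. 2D offset of the projected point from the light's centre
    var offset = new THREE.Vector3().subVectors(projected, light.position);
    var x = offset.dot(light.right);
    var y = offset.dot(light.up);

    // 4. clamp the offset to the light's half extents
    x = Math.max(-light.width * 0.5, Math.min(light.width * 0.5, x));
    y = Math.max(-light.height * 0.5, Math.min(light.height * 0.5, y));

    // 5. world position of the clamped point: the nearest point on the light
    var nearest = light.position.clone()
        .add(light.right.clone().multiplyScalar(x))
        .add(light.up.clone().multiplyScalar(y));

    // 6. standard point-light diffuse term from that nearest point
    var toLight = nearest.sub(vertexPos).normalize();
    return Math.max(toLight.dot(vertexNormal), 0.0);
}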

Problem

The issue with this solution is that the lighting calculations are done from the nearest point only and do not account for other points on the light's surface that could be illuminating the fragment even more strongly. Let me try to explain why…

Consider the following diagram:

problematic area lighting situation

The area light is both perpendicular to the surface and intersects it. Each fragment on the surface will always return a nearest point on the area light where the surface and the light intersect. Since the surface normal and the vertex-to-light vector are always perpendicular, their dot product is zero. Consequently, the diffuse contribution is calculated as zero despite a large area of light looming over the surface.

Potential Solution

I propose that rather than calculate the light from the nearest point on the area light, we calculate it from a point on the area light that yields the greatest dot product between the vertex-to-light vector (normalised) and the vertex normal. In the diagram above, this would be the purple dot, rather than the blue dot.

Help!

And so, this is where I need your help. In my head, I have a pretty good idea of how this point can be derived, but don't have the mathematical competence to arrive at the solution.

Currently I have the following information available in my fragment shader:

  • vertex position
  • vertex normal (unit vector)
  • light position, width and height
  • light normal (unit vector)
  • light right (unit vector)
  • light up (unit vector)
  • projected point from the vertex onto the light's plane (3D)
  • projected point offset from the light's center (2D)
  • clamped offset (2D)
  • world position of this clamped offset – the nearest point (3D)

To put all this information into a visual context, I created this diagram (hope it helps):

available lighting information

To test my proposal, I need the casting point on the area light (represented by the red dots) so that I can perform the dot product between the vertex-to-casting-point vector (normalised) and the vertex normal. Again, this should yield the maximum possible contribution value.

UPDATE!!!

I have created an interactive sketch over on CodePen that visualises the mathematics that I currently have implemented:

http://codepen.io/wagerfield/pen/ywqCp

codepen

The relevant code to focus on is line 318.

castingPoint.location is an instance of THREE.Vector3 and is the missing piece of the puzzle. You should also notice that there are two values at the lower left of the sketch – these are dynamically updated to display the dot product between the relevant vectors.

I imagine that the solution would require another pseudo plane that aligns with the direction of the vertex normal AND is perpendicular to the light's plane, but I could be wrong!

Anyway, I hope that some compassionate genius out there can help me solve this!

Many thanks in advance :D


Source: (StackOverflow)

How to rotate a 3D object on axis three.js?

I have a problem with rotation in three.js: I want to rotate the 3D cube in one of my games.

# init
geometry = new THREE.CubeGeometry grid, grid, grid
material = new THREE.MeshLambertMaterial {color:0xFFFFFF * Math.random(), shading:THREE.FlatShading, overdraw:true, transparent: true, opacity:0.8}
for i in [1...@shape.length]
    othergeo = new THREE.Mesh new THREE.CubeGeometry(grid, grid, grid)
    othergeo.position.x = grid * @shape[i][0]
    othergeo.position.y = grid * @shape[i][1]
    THREE.GeometryUtils.merge geometry, othergeo
@mesh = new THREE.Mesh geometry, material

# rotate
@mesh.rotation.y += y * Math.PI / 180
@mesh.rotation.x += x * Math.PI / 180
@mesh.rotation.z += z * Math.PI / 180

and (x, y, z) may be (1, 0, 0)

The cube rotates, but the problem is that it rotates around its own (local) axes, so after it has rotated once, further rotations don't behave as expected.

I found the page How to rotate a Three.js Vector3 around an axis?, but that only rotates a Vector3 point around a world axis.

I have also tried to use matrixRotationWorld:

@mesh.matrixRotationWorld.x += x * Math.PI / 180
@mesh.matrixRotationWorld.y += y * Math.PI / 180
@mesh.matrixRotationWorld.z += z * Math.PI / 180

but it doesn't work; I don't know whether I used it the wrong way or whether there is some other approach.

So, how do I make the 3D cube rotate around the world's axes?
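
For reference, a minimal sketch of one common approach, assuming a build that provides Object3D.rotateOnWorldAxis (newer revisions; older builds can instead premultiply the mesh's matrix with a world-axis rotation):

# rotate the mesh one degree around the world Y axis, independent of the
# mesh's current local orientation
worldY = new THREE.Vector3 0, 1, 0
@mesh.rotateOnWorldAxis worldY, Math.PI / 180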


Source: (StackOverflow)

Interior Mapping shader self shadowing

I'm tinkering with Joost van Dongen's interior mapping shader and trying to implement self-shadowing, but I couldn't quite figure out what coordinate space the shadow-casting light vectors need to be in. You can see a somewhat-working demo here. I've attached the light position to the camera position with an offset just to see what's happening, but obviously it doesn't look right either. The shader code is below; look for SHADOWS DEV in the fragment shader. The vectors in question are shad_E and shad_I.

vertex shader:

varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space

varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction

uniform vec3 lightPosition;

void main() {

    // inverse model-view matrix
    mat4 modelViewMatrixInverse = InverseMatrix( modelViewMatrix );

    // surface position in object space
    oP = position;

    // position of the eye in object space
    oE = modelViewMatrixInverse[3].xyz;

    // incident ray direction in object space
    oI = oP - oE; 

     // link the light position to camera for testing
     // need to find a way for world space directional light to work
    shad_E = oE - lightPosition;

     // light vector
    shad_I = oP - shad_E;

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}

fragment shader:

varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space

varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction

uniform vec3 wallFreq;

uniform float wallsBias;

uniform vec3 wallCeilingColor;
uniform vec3 wallFloorColor;
uniform vec3 wallXYColor;
uniform vec3 wallZYColor;

float checker(vec2 uv, float checkSize) {
  float fmodResult = mod( floor(checkSize * uv.x) + floor(checkSize * uv.y), 2.0);

  if (fmodResult < 1.0) {
    return 1.0;
  } else {
    return 0.85;
  }
}

void main() {

    // INTERIOR MAPPING by Joost van Dongen
    // http://interiormapping.oogst3d.net/
    // email: joost@ronimo-games.com
    // Twitter: @JoostDevBlog

    vec3 wallFrequencies = wallFreq / 2.0 - wallsBias;

    //calculate wall locations
    vec3 walls = ( floor( oP * wallFrequencies) + step( vec3( 0.0 ), oI )) / wallFrequencies;

    //how much of the ray is needed to get from oE to each of the walls
    vec3 rayFractions = ( walls - oE) / oI;

    //texture-coordinates of intersections
    vec2 intersectionXY = (oE + rayFractions.z * oI).xy;
    vec2 intersectionXZ = (oE + rayFractions.y * oI).xz;
    vec2 intersectionZY = (oE + rayFractions.x * oI).zy;

    //use the intersection as the texture coordinates for the ceiling
    vec3 ceilingColour = wallCeilingColor * checker( intersectionXZ, 2.0 );
    vec3 floorColour = wallFloorColor * checker( intersectionXZ, 2.0 );
    vec3 verticalColour = mix(floorColour, ceilingColour, step(0.0, oI.y));
    vec3 wallXYColour = wallXYColor * checker( intersectionXY, 2.0 );
    vec3 wallZYColour = wallZYColor * checker( intersectionZY, 2.0 );

    // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //

    vec3 shad_P = oP;  // just surface position in object space
    vec3 shad_walls = ( floor( shad_P * wallFrequencies) + step( vec3( 0.0 ), shad_I )) / wallFrequencies;
    vec3 shad_rayFr = ( shad_walls - shad_E ) / shad_I;

    // Cast shadow from ceiling planes (intersectionXZ)

    wallZYColour *= mix( 0.3, 1.0, step( shad_rayFr.x, shad_rayFr.y ));
    verticalColour *= mix( 0.3, 1.0, step( rayFractions.y, shad_rayFr.y ));
    wallXYColour *= mix( 0.3, 1.0, step( shad_rayFr.z, shad_rayFr.y ));

    // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //

    // intersect walls
    float xVSz = step(rayFractions.x, rayFractions.z);
    vec3 interiorColour = mix(wallXYColour, wallZYColour, xVSz);
    float rayFraction_xVSz = mix(rayFractions.z, rayFractions.x, xVSz);
    float xzVSy = step(rayFraction_xVSz, rayFractions.y);

    interiorColour = mix(verticalColour, interiorColour, xzVSy);

    gl_FragColor.xyz = interiorColour;  

}
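
One observation, offered as an assumption rather than a confirmed fix: oE and oP are in object space, while lightPosition is presumably set in world space, so shad_E = oE - lightPosition mixes spaces. A sketch of one way to keep everything in a single space, done on the JavaScript side each frame:

// hypothetical setup: convert a world-space light position into this mesh's
// object space before handing it to the shader as the lightPosition uniform
var objectSpaceLight = lightWorldPosition.clone().applyMatrix4(
    new THREE.Matrix4().getInverse(mesh.matrixWorld)
);
material.uniforms.lightPosition.value.copy(objectSpaceLight);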

Source: (StackOverflow)

Is there a working THREE.js API documentation?

I am trying to learn the basics of THREE.js. I have read a couple of tutorials, and I would like to start experimenting. My problem is that I am not able to find any documentation.

This is supposed to be an API browser, but I was not able to find even the most basic objects, like PlaneGeometry or SphereGeometry. Is there anywhere else to find an API reference?


Source: (StackOverflow)

Learning WebGL and three.js [closed]

I'm new to this and just starting to learn about 3D computer graphics in web browsers. I'm interested in making 3D games in a browser. For anyone who has learned both WebGL and three.js...

  1. Is knowledge of WebGL required to use three.js?

  2. What are the advantages of using three.js vs. WebGL?


Source: (StackOverflow)

"Cross origin requests are only supported for HTTP." error when loading a local file

I'm trying to load a 3D model into Three.js with JSONLoader, and that 3D model is in the same directory as the entire website.

I'm getting the "Cross origin requests are only supported for HTTP." error, but I don't know what's causing it nor how to fix it.
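
For what it's worth, the usual workaround (assuming Python is available) is to serve the directory over HTTP rather than opening the page via file://, e.g. by running python -m SimpleHTTPServer from the project root and browsing to http://localhost:8000.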


Source: (StackOverflow)

Online WebGL GLSL Shader editor [closed]

I'm looking for an online GLSL shader IDE/editor for writing GLSL shaders that uses WebGL for rendering. The tool should have features such as IntelliSense, syntax coloring, and basic debugging tools or at least trivial compilation-error highlighting.


Source: (StackOverflow)

Three.js: Plane visible only half the time

I've created a plane in Three.js, which I rotate. For some reason, the plane doesn't show half the time. I've created a fiddle here showing the behaviour.
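
For reference, one common cause (an assumption; the fiddle may show otherwise): plane materials are single-sided by default, so the plane is culled whenever its back face points at the camera. A one-line sketch:

// render both faces so the plane stays visible from either side
material.side = THREE.DoubleSide;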


Source: (StackOverflow)

Three.js - skinned skeletal mesh instances, animations and blending

I'm working on a small multiplayer game which has a single skinned player mesh used by many players. Some background: I've tried loading via Maya and Blender Collada export. Both seem to reference some form of animation data, but I couldn't get either working. I've tried the Maya JSON exporter, which spat out tiny 1k files with only a material line. Finally, the Blender JSON exporter worked. To those also trying to load skinned meshes, I found this very helpful: Model with bones animation (blender export) animating incorrectly in three.js

So now I have a geometry object and a materials array from the JSON loader.

I can set skinning=true on the materials, create a THREE.SkinnedMesh, add it to the scene, add animations via THREE.AnimationHandler.add (I'm quite unclear on what the AnimationHandler actually does), create a THREE.Animation, call play() and update(dt). Finally I have a single mesh and an animation playing in my scene.
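
For concreteness, a sketch of that setup against the animation API of this era (method names varied between revisions, so treat the details as assumptions):

// enable skinning on every material that came out of the JSON loader
materials.forEach(function (m) { m.skinning = true; });

var mesh = new THREE.SkinnedMesh(geometry, new THREE.MeshFaceMaterial(materials));
scene.add(mesh);

// register the keyframe data once, then drive it with a THREE.Animation
THREE.AnimationHandler.add(geometry.animation);
var animation = new THREE.Animation(mesh, geometry.animation.name);
animation.play();

// each frame:
animation.update(delta);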

Now what I want are these...

  1. Many instances - I want more than one player model running around in my scene.

    • I don't want the same mesh and animation data loaded many times.
    • Animation time should be per-instance (so they don't all animate in sync).

    Should I be creating many THREE.SkinnedMesh and THREE.Animation for the same model? Where does THREE.AnimationHandler come in?

  2. Many animations - I want idle/run cycles able to be played individually.

    AFAIK there's only a single timeline of animation keyframes. How does Three.js partition this up for me, or do I have to do it manually?

  3. Animation Blending - When a character stops running and stands still with the idle animation, I don't want an instant snap from one to the other. I'd like to pause the run animation and blend that state back into the idle animation.

    Is this currently possible with skinned meshes (not morph targets)? Are there examples or docs about this?

Any information would be greatly appreciated, even just a nudge in the right direction. I'm not after a full tutorial, I would like some higher level information about these features.

I could happily implement 2 and 3, but I'd like some information/descriptive docs about the three.js skinning and animation framework to get me started. For example, this isn't much to go on.

[EDIT]
Thanks, @NishchitDhanani, this page is quite good but doesn't mention multiple animations or blending skeletal animations: http://chimera.labs.oreilly.com/books/1234000000802/ch05.html#animating_characters_with_skinning

This page says multiple animations are still a current issue but not much more (discussed a little in the comments): http://devmatrix.wordpress.com/2013/02/27/creating-skeletal-animation-in-blender-and-exporting-it-to-three-js/

The current answers are...

  1. Use many THREE.SkinnedMesh instances; still not sure about THREE.AnimationHandler.
  2. Don't know. Perhaps there's a way to modify the start/end keyframes manually in the THREE.Animation.
  3. Not implemented AFAIK. I might try creating a custom shader that can take two THREE.Animations and interpolate between them.

Source: (StackOverflow)

three.js - mesh group example? (THREE.Object3D() advanced)

I'm attempting to understand how to group / link child meshes to a parent. I want to be able to:

  • drag the parent
  • rotate child elements relative to the parent
  • have parent rotation / translation do the right thing for children

My only background in this is using LSL in Second Life to manipulate linked prims in an object. I am thinking I don't want to merge meshes, because I want to maintain control (hover, texture, rotation, scaling, etc.) over each child.

Any good tutorials on this out there? This is achieved with THREE.Object3D(), yes?
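
For what it's worth, a minimal sketch of the pattern, assuming THREE.Object3D is indeed the right tool:

// an empty Object3D acts as the root; children keep their own
// transforms relative to it
var parent = new THREE.Object3D();
scene.add(parent);

var child = new THREE.Mesh(new THREE.CubeGeometry(10, 10, 10), material);
child.position.x = 20; // offset relative to the parent
parent.add(child);

// translating / rotating the parent carries every child along
parent.position.y = 50;
parent.rotation.z = Math.PI / 4;

// while each child can still be manipulated independently
child.rotation.y += 0.01;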

thanks, Daniel


Source: (StackOverflow)

Converting World coordinates to Screen coordinates in Three.js using Projection

There are several excellent Stack Overflow questions (1, 2) about unprojecting in Three.js, that is, how to convert (x, y) mouse coordinates in the browser to (x, y, z) coordinates in Three.js canvas space. Mostly they follow this pattern:

    var elem = renderer.domElement, 
        boundingRect = elem.getBoundingClientRect(),
        x = (event.clientX - boundingRect.left) * (elem.width / boundingRect.width),
        y = (event.clientY - boundingRect.top) * (elem.height / boundingRect.height);

    var vector = new THREE.Vector3( 
        ( x / WIDTH ) * 2 - 1, 
        - ( y / HEIGHT ) * 2 + 1, 
        0.5 
    );

    projector.unprojectVector( vector, camera );
    var ray = new THREE.Ray( camera.position, vector.subSelf( camera.position ).normalize() );
    var intersects = ray.intersectObjects( scene.children );

I have been attempting to do the reverse: instead of going from "screen to world" space, going from "world to screen" space. If I know the position of an object in Three.js, how do I determine its position on the screen?

There does not seem to be any published solution to this problem. Another question about this just showed up on Stack Overflow, but the author claims to have solved the problem with a function that is not working for me. Their solution does not use a projected Ray, and I am pretty sure that, since 2D-to-3D uses unprojectVector(), the 3D-to-2D solution will require projectVector().
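
For reference, a sketch of what I mean, mirroring the unproject pattern above and assuming the same Projector-era API:

    // world -> normalized device coordinates in [-1, 1]
    var vector = mesh.position.clone();
    projector.projectVector( vector, camera );

    // normalized device coordinates -> pixel coordinates on the canvas
    var screenX = ( vector.x + 1 ) / 2 * WIDTH;
    var screenY = - ( vector.y - 1 ) / 2 * HEIGHT;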

There is also this issue opened on Github.

Any help is appreciated.


Source: (StackOverflow)

ThreeJS predefined shader attributes / uniforms

I have started with ThreeJS's WebGL renderer after doing some "regular" WebGL with no additional libraries, plus GLSL shaders. I am trying to write custom shaders now in my ThreeJS program, and I noticed that ThreeJS takes care of a lot of the standard stuff, such as the projection and model-view matrices. My simple vertex shader now looks like this:

// All of these seem to be predefined:
// vec3 position;
// mat4 projectionMatrix;
// mat4 modelViewMatrix;
// mat3 normalMatrix;
// vec3 normal;

// I added this
varying vec3 vNormal;

void main() {
    vNormal = normalMatrix * normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

My question is: Which other variables (I'm assuming they're uniforms) are predefined for vertex and fragment shaders that I could use? Does ThreeJS help out with light vectors / light color for instance (of course assuming I've added one or more lights to my ThreeJS scene)?
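
For illustration, one pattern I have seen (an assumption about the intended usage, with the vertexShader/fragmentShader sources as hypothetical placeholders): merge the built-in light uniforms from THREE.UniformsLib into a custom ShaderMaterial and set lights: true so three.js fills them in.

// merge three.js's built-in light uniforms with a custom one
var uniforms = THREE.UniformsUtils.merge([
    THREE.UniformsLib['lights'],
    { myCustomUniform: { type: 'f', value: 1.0 } }
]);

var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShaderSource,     // hypothetical shader sources
    fragmentShader: fragmentShaderSource,
    lights: true                          // ask three.js to populate the light uniforms
});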

Update (Oct. 9, 2014): This question has been getting quite a few views, and the user Killah mentioned that the existing answers no longer led to a solution with the current version of three.js. I added and accepted my own answer; see it below.


Source: (StackOverflow)

Three.js ported to native code?

I've been playing with WebGL quite a bit lately and I really dig Three.js. It's really lightweight: it just makes wrangling most of the GL calls a bit easier and provides a quick way of creating basic primitives like a sphere.

Now, in native land, it seems that all the frameworks want to be so much more than that: things like Oolong, UDK, Unity, Cocos, etc. I did a bit of googling, and the closest thing I could find was iSGL3D, but I'm not thoroughly convinced it is the right answer.

Is there something more similar to Three.js that is written in native C, C++ or Objective-C that I can't find?


Source: (StackOverflow)