three.js
JavaScript 3D library.
I have been working on an area lighting implementation in WebGL similar to this demo:
http://threejs.org/examples/webgldeferred_arealights.html
The above implementation in three.js was ported from the work of ArKano22 over on gamedev.net:
http://www.gamedev.net/topic/552315-glsl-area-light-implementation/
Though these solutions are very impressive, they both have a few limitations. The primary issue with ArKano22's original implementation is that the calculation of the diffuse term does not account for surface normals.
I have been augmenting this solution for some weeks now, working with the improvements by redPlant to address this problem. Currently I have normal calculations incorporated into the solution, BUT the result is also flawed.
Here is a sneak preview of my current implementation:
Introduction
The steps for calculating the diffuse term for each fragment are as follows (a JavaScript sketch follows the list):
- Project the vertex onto the plane that the area light sits on, so that the projected vector is coincident with the light's normal/direction.
- Check that the vertex is on the correct side of the area light plane by comparing the projection vector with the light's normal.
- Calculate the 2D offset of this projected point on the plane from the light's center/position.
- Clamp this 2D offset vector so that it sits inside the light's area (defined by its width and height).
- Derive the 3D world position of the projected and clamped 2D point. This is the nearest point on the area light to the vertex.
- Perform the usual diffuse calculations that you would for a point light by taking the dot product between the vertex-to-nearest-point vector (normalised) and the vertex normal.
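To make these steps concrete, here is a minimal JavaScript sketch of them (names such as lightPos, lightRight, lightUp and vertexPos are illustrative stand-ins for the values listed further down):
function nearestPointOnAreaLight(vertexPos, lightPos, lightNormal, lightRight, lightUp, width, height) {
    // 1. Distance from the vertex to the light's plane, measured along the light normal
    var toVertex = new THREE.Vector3().subVectors(vertexPos, lightPos);
    var distToPlane = toVertex.dot(lightNormal);
    // 2. A negative distance means the vertex is behind the light's plane
    if (distToPlane < 0) return null;
    // Project the vertex onto the light's plane
    var projected = vertexPos.clone().sub(lightNormal.clone().multiplyScalar(distToPlane));
    // 3. Express the projected point as a 2D offset from the light's center
    var offset = new THREE.Vector3().subVectors(projected, lightPos);
    var x = offset.dot(lightRight);
    var y = offset.dot(lightUp);
    // 4. Clamp the offset to the light's rectangle
    x = Math.max(-width / 2, Math.min(width / 2, x));
    y = Math.max(-height / 2, Math.min(height / 2, y));
    // 5. Back to world space: the nearest point on the light to the vertex
    return lightPos.clone()
        .add(lightRight.clone().multiplyScalar(x))
        .add(lightUp.clone().multiplyScalar(y));
}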
Problem
The issue with this solution is that the lighting calculations are done from the nearest point and do not account for other points on the light's surface that could be illuminating the fragment even more. Let me try to explain why…
Consider the following diagram:
The area light is both perpendicular to the surface and intersects it. Each of the fragments on the surface will always return a nearest point on the area light where the surface and the light intersect. Since the surface normal and the vertex-to-light vectors are always perpendicular, the dot product between them is zero. Consequently, the calculation of the diffuse contribution is zero despite there being a large area of light looming over the surface.
Potential Solution
I propose that rather than calculate the light from the nearest point on the area light, we calculate it from a point on the area light that yields the greatest dot product between the vertex-to-light vector (normalised) and the vertex normal. In the diagram above, this would be the purple dot, rather than the blue dot.
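To be explicit, the quantity I want to maximise is the usual Lambert term, sketched here with castingPoint standing in for the unknown red dot (vertexPos and vertexNormal are from the list below):
var toLight = new THREE.Vector3().subVectors(castingPoint, vertexPos).normalize();
var diffuse = Math.max(0.0, toLight.dot(vertexNormal)); // maximise this over the light's area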
Help!
And so, this is where I need your help. In my head, I have a pretty good idea of how this point can be derived, but don't have the mathematical competence to arrive at the solution.
Currently I have the following information available in my fragment shader:
- vertex position
- vertex normal (unit vector)
- light position, width and height
- light normal (unit vector)
- light right (unit vector)
- light up (unit vector)
- projected point from the vertex onto the light's plane (3D)
- projected point offset from the light's center (2D)
- clamped offset (2D)
- world position of this clamped offset – the nearest point (3D)
To put all this information into a visual context, I created this diagram (hope it helps):
To test my proposal, I need the casting point on the area light – represented by the red dots, so that I can perform the dot product between the vertex-to-casting-point (normalised) and the vertex normal. Again, this should yield the maximum possible contribution value.
UPDATE!!!
I have created an interactive sketch over on CodePen that visualises the mathematics that I currently have implemented:
The relevant code that you should focus on is line 318. castingPoint.location is an instance of THREE.Vector3 and is the missing piece of the puzzle. You should also notice that there are 2 values at the lower left of the sketch – these are dynamically updated to display the dot product between the relevant vectors.
I imagine that the solution would require another pseudo plane that aligns with the direction of the vertex normal AND is perpendicular to the light's plane, but I could be wrong!
Anyway, I hope that some compassionate genius out there can help me solve this!
Many thanks in advance :D
Source: (StackOverflow)
I'm attempting to understand how to group / link child meshes to a parent. I want to be able to:
- drag the parent
- rotate child elements relative to the parent
- have parent rotation / translation do the right thing for children
My only background in this is using LSL in Second Life to manipulate linked prims in an object. I am thinking I don't want to merge meshes, because I want to maintain control (hover, texture, rotation, scaling, etc.) over each child.
Any good tutorials on this out there? This is achieved with THREE.Object3D(), yes?
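To show the kind of grouping I mean, here is a minimal, untested sketch of that THREE.Object3D approach (the cube child is just a placeholder):
var parent = new THREE.Object3D();
var child = new THREE.Mesh(
    new THREE.CubeGeometry(10, 10, 10),  // placeholder child geometry
    new THREE.MeshNormalMaterial()
);
child.position.set(20, 0, 0);            // position is relative to the parent
parent.add(child);                       // link the child to the parent
scene.add(parent);
// Dragging/rotating the parent carries the child along...
parent.rotation.y = Math.PI / 4;
// ...while the child keeps its own local transform and material
child.rotation.x = 0.3;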
thanks, Daniel
Source: (StackOverflow)
I'm trying to load a 3D model into Three.js with JSONLoader, and that 3D model is in the same directory as the entire website. I'm getting the "Cross origin requests are only supported for HTTP." error, but I don't know what's causing it nor how to fix it.
Source: (StackOverflow)
So I'm new and starting to learn about 3D computer graphics in web browsers. I'm interested in making 3D games in a browser. For anyone who has learned both WebGL and three.js...
Is knowledge of WebGL required to use three.js?
What are the advantages of using three.js vs. WebGL?
Source: (StackOverflow)
I'm tinkering with Joost van Dongen's Interior mapping shader and I'm trying to implement self-shadowing. Still, I couldn't quite figure out what coordinates the shadow-casting light vectors need to be in. You can see a somewhat working demo here. I've attached the light position with an offset to the camera position just to see what's happening, but obviously it doesn't look right either.
Shader code is below. Look for SHADOWS DEV in fragment shader. Vectors in question are: shad_E and shad_I.
vertex shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 lightPosition;
void main() {
// inverse view matrix
mat4 modelViewMatrixInverse = InverseMatrix( modelViewMatrix );
// surface position in object space
oP = position;
// position of the eye in object space
oE = modelViewMatrixInverse[3].xyz;
// incident ray direction in object space
oI = oP - oE;
// link the light position to camera for testing
// need to find a way for world space directional light to work
shad_E = oE - lightPosition;
// light vector
shad_I = oP - shad_E;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
fragment shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 wallFreq;
uniform float wallsBias;
uniform vec3 wallCeilingColor;
uniform vec3 wallFloorColor;
uniform vec3 wallXYColor;
uniform vec3 wallZYColor;
float checker(vec2 uv, float checkSize) {
float fmodResult = mod( floor(checkSize * uv.x) + floor(checkSize * uv.y), 2.0);
if (fmodResult < 1.0) {
return 1.0;
} else {
return 0.85;
}
}
void main() {
// INTERIOR MAPPING by Joost van Dongen
// http://interiormapping.oogst3d.net/
// email: joost@ronimo-games.com
// Twitter: @JoostDevBlog
vec3 wallFrequencies = wallFreq / 2.0 - wallsBias;
//calculate wall locations
vec3 walls = ( floor( oP * wallFrequencies) + step( vec3( 0.0 ), oI )) / wallFrequencies;
//how much of the ray is needed to get from oE to each of the walls
vec3 rayFractions = ( walls - oE) / oI;
//texture-coordinates of intersections
vec2 intersectionXY = (oE + rayFractions.z * oI).xy;
vec2 intersectionXZ = (oE + rayFractions.y * oI).xz;
vec2 intersectionZY = (oE + rayFractions.x * oI).zy;
//use the intersection as the texture coordinates for the ceiling
vec3 ceilingColour = wallCeilingColor * checker( intersectionXZ, 2.0 );
vec3 floorColour = wallFloorColor * checker( intersectionXZ, 2.0 );
vec3 verticalColour = mix(floorColour, ceilingColour, step(0.0, oI.y));
vec3 wallXYColour = wallXYColor * checker( intersectionXY, 2.0 );
vec3 wallZYColour = wallZYColor * checker( intersectionZY, 2.0 );
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
vec3 shad_P = oP; // just surface position in object space
vec3 shad_walls = ( floor( shad_P * wallFrequencies) + step( vec3( 0.0 ), shad_I )) / wallFrequencies;
vec3 shad_rayFr = ( shad_walls - shad_E ) / shad_I;
// Cast shadow from ceiling planes (intersectionXZ)
wallZYColour *= mix( 0.3, 1.0, step( shad_rayFr.x, shad_rayFr.y ));
verticalColour *= mix( 0.3, 1.0, step( rayFractions.y, shad_rayFr.y ));
wallXYColour *= mix( 0.3, 1.0, step( shad_rayFr.z, shad_rayFr.y ));
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
// intersect walls
float xVSz = step(rayFractions.x, rayFractions.z);
vec3 interiorColour = mix(wallXYColour, wallZYColour, xVSz);
float rayFraction_xVSz = mix(rayFractions.z, rayFractions.x, xVSz);
float xzVSy = step(rayFraction_xVSz, rayFractions.y);
interiorColour = mix(verticalColour, interiorColour, xzVSy);
gl_FragColor.xyz = interiorColour;
}
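Since everything else in the shader lives in object space, my current guess is that the light position needs the same treatment. On the JavaScript side that might look like this (a sketch, assuming a THREE.ShaderMaterial named material on the mesh and a world-space light position; older revisions use matrix.multiplyVector3 instead of Vector3.applyMatrix4):
var lightWorld = new THREE.Vector3(10, 20, 30); // illustrative world-space position
// transform into the mesh's object space with the inverse world matrix
var toObject = new THREE.Matrix4().getInverse(mesh.matrixWorld);
var lightObject = lightWorld.clone().applyMatrix4(toObject);
material.uniforms.lightPosition.value.copy(lightObject);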
Source: (StackOverflow)
I am starting with THREE.js, and I am trying to draw a rectangle with a texture on it, lit by a single source of light. I think this is as simple as it gets (HTML omitted for brevity):
function loadScene() {
var world = document.getElementById('world'),
WIDTH = 1200,
HEIGHT = 500,
VIEW_ANGLE = 45,
ASPECT = WIDTH / HEIGHT,
NEAR = 0.1,
FAR = 10000,
renderer = new THREE.WebGLRenderer(),
camera = new THREE.Camera(VIEW_ANGLE, ASPECT, NEAR, FAR),
scene = new THREE.Scene(),
texture = THREE.ImageUtils.loadTexture('crate.gif'),
material = new THREE.MeshBasicMaterial({map: texture}),
// material = new THREE.MeshPhongMaterial({color: 0xCC0000});
geometry = new THREE.PlaneGeometry(100, 100),
mesh = new THREE.Mesh(geometry, material),
pointLight = new THREE.PointLight(0xFFFFFF);
camera.position.z = 200;
renderer.setSize(WIDTH, HEIGHT);
scene.addChild(mesh);
world.appendChild(renderer.domElement);
pointLight.position.x = 50;
pointLight.position.y = 50;
pointLight.position.z = 130;
scene.addLight(pointLight);
renderer.render(scene, camera);
}
The problem is, I cannot see anything. If I change the material and use the commented one, a square appears as I would expect. Note that
- The texture is 256x256, so its sides are power of two
- The function is actually called when the body is loaded; indeed it works with a different material.
- It does not work even if I serve the file from a webserver, so it is not an issue of cross-domain policy not allowing to load the image.
What am I doing wrong?
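One thing I haven't ruled out yet: loadTexture is asynchronous, so my single render() call may fire before crate.gif has arrived. Rendering continuously would test that (a sketch):
function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}
animate();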
Source: (StackOverflow)
I've been trying to change what seems to be the default background color of my canvas from black to transparent / any other color - but no luck.
My HTML:
<canvas id="canvasColor">
My CSS:
<style type="text/css">
#canvasColor {
z-index: 998;
opacity:1;
background: red;
}
</style>
As you can see in the following online example, I have some animation appended to the canvas, so I can't just do opacity: 0; on the id.
Live preview:
http://devsgs.com/preview/test/particle/
Any ideas how to overwrite the default black?
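From what I understand, the black comes from the renderer's clear colour rather than from the canvas CSS, so something along these lines may be the knob to turn (a sketch, assuming a THREE.WebGLRenderer named renderer; setClearColorHex is the API of this era and was later renamed setClearColor):
// solid red background:
renderer.setClearColorHex(0xff0000, 1);
// or fully transparent, letting the CSS background show through:
renderer.setClearColorHex(0x000000, 0);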
Source: (StackOverflow)
I have quite a few objects in my scene, so rotating all of them could be a pain. So what is the easiest way to move the camera around the origin on mouse click and drag? This way all the lights and objects in the scene stay in the same location, and the only thing changing is the camera. Three.js does not provide a way to rotate a camera around a point, or does it?
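Something like this is the behaviour I mean, sketched for a horizontal drag only (deltaX would come from the mouse events; the examples folder also seems to ship ready-made controls for this):
// initialise the orbit angle from the camera's starting position
var theta = Math.atan2(camera.position.x, camera.position.z);
function onDrag(deltaX) {
    var r = Math.sqrt(camera.position.x * camera.position.x +
                      camera.position.z * camera.position.z);
    theta += deltaX * 0.01;              // swing around the Y axis
    camera.position.x = r * Math.sin(theta);
    camera.position.z = r * Math.cos(theta);
    camera.lookAt(scene.position);       // keep aiming at the origin
}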
Thank you
Source: (StackOverflow)
Can anyone who has used three.js tell me if it's possible to detect WebGL support and, if not present, fall back to a standard Canvas render?
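For what it's worth, the pattern I've seen suggested is a plain feature test before constructing the renderer (a sketch; the test itself is not three.js-specific):
function webglAvailable() {
    try {
        var canvas = document.createElement('canvas');
        return !!(window.WebGLRenderingContext &&
            (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
    } catch (e) {
        return false;
    }
}
var renderer = webglAvailable()
    ? new THREE.WebGLRenderer()
    : new THREE.CanvasRenderer();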
Source: (StackOverflow)
I'm looking for an online GLSL shader IDE/editor for writing GLSL shaders that uses WebGL for rendering. The tool should have features such as intellisense, syntax coloring, and basic debugging tools or trivial compilation-error highlighting.
Source: (StackOverflow)
I've asked this and got the answer:
var geom = new THREE.Geometry();
var v1 = new THREE.Vector3(0,0,0);
var v2 = new THREE.Vector3(0,500,0);
var v3 = new THREE.Vector3(0,500,500);
geom.vertices.push(new THREE.Vertex(v1));
geom.vertices.push(new THREE.Vertex(v2));
geom.vertices.push(new THREE.Vertex(v3));
var object = new THREE.Mesh( geom, new THREE.MeshNormalMaterial() );
scene.addObject(object);
I expected this to work but it didn't.
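My suspicion (a sketch, not verified against that old revision): the geometry has vertices but no face, so there is nothing to rasterise; newer revisions also dropped THREE.Vertex and renamed addObject to add:
var geom = new THREE.Geometry();
geom.vertices.push(new THREE.Vector3(0, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 500, 0));
geom.vertices.push(new THREE.Vector3(0, 500, 500));
geom.faces.push(new THREE.Face3(0, 1, 2)); // the missing face
geom.computeFaceNormals();                 // MeshNormalMaterial needs normals
var object = new THREE.Mesh(geom,
    new THREE.MeshNormalMaterial({ side: THREE.DoubleSide })); // visible from both sides
scene.add(object);                         // addObject was renamed to add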
Source: (StackOverflow)
I would like to assign a remote video to a texture in WebGL. Since the video source is different from the document source, I added Access-Control-Allow-Origin:* to the HTTP headers of the video source. In addition, I assigned an anonymous origin to the video tag by using video.crossOrigin = '';. Interestingly, the cross-domain attribute works with images, but NOT with the video tag. As soon as the WebGL texture is assigned to the video object, JavaScript throws the following exception:
Uncaught Error: SECURITY_ERR: DOM Exception 18
Here is a jsfiddle to reproduce this issue. This example is based on the webgl_kinect example of three.js:
http://jsfiddle.net/ZgeTU/2/
Here are the relevant sections:
// CROSS-ORIGIN VIDEO SOURCE
// REMOTE VIDEO SOURCE PROVIDES "Access-Control-Allow-Origin:*" HEADER
video.src =
'http://kammerl.de/threejs/three.js/examples/textures/kinect.webm';
// DEFINING ANONYMOUS ORIGIN
video.crossOrigin = '';
video.play();
Later the video tag is assigned to a Three.js texture:
texture = new THREE.Texture( video );
Apparently this problem with using a crossOrigin video in WebGL has been known for a while, but I haven't found any updates on it:
http://jbuckley.ca/2012/02/cross-origin-video/
Does anyone know what the status of this issue is? Is there any workaround to access remote videos in webGL? Any help is greatly appreciated!
Thanks!
Source: (StackOverflow)
I'm using ThreeJS to develop a web application that displays a list of entities, each with corresponding "View" and "Hide" buttons; e.g. entityName View Hide. When the user clicks the View button, the following function is called and the entity is drawn on screen successfully.
function loadOBJFile(objFile){
/* material of OBJ model */
var OBJMaterial = new THREE.MeshPhongMaterial({color: 0x8888ff});
var loader = new THREE.OBJLoader();
loader.load(objFile, function (object){
object.traverse (function (child){
if (child instanceof THREE.Mesh) {
child.material = OBJMaterial;
}
});
object.position.y = 0.1;
scene.add(object);
});
}
function addEntity(object) {
loadOBJFile(object.name);
}
And on clicking the Hide button, the following function is called:
function removeEntity(object){
scene.remove(object.name);
}
The problem is, the entity, once loaded, is not removed from the screen when the Hide button is clicked. What can I do to make the Hide button work?
I did a small experiment. I added scene.remove(object.name); right after scene.add(object); within the addEntity function, and as a result, when the View button was clicked, no entity was drawn (as expected), meaning that scene.remove(object.name); worked just fine within addEntity. But I'm still unable to figure out how to use it in removeEntity(object).
Also, I checked the contents of scene.children and it shows: [object Object],[object Object],[object Object],[object Object],[object Object],[object Object]
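A sketch of the fix I'm considering, on the assumption that scene.remove() expects the Object3D itself rather than a name string: tag the object on load and look it up with getObjectByName when hiding it.
function loadOBJFile(objFile) {
    var loader = new THREE.OBJLoader();
    loader.load(objFile, function (object) {
        object.name = objFile;             // tag the root so it can be found later
        scene.add(object);
    });
}
function removeEntity(object) {
    var selected = scene.getObjectByName(object.name);
    if (selected) scene.remove(selected);  // pass the object, not its name
}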
Complete code: http://devplace.in/~harman/model_display1.php.html
Please ask if more detail is needed. I tested with rev-59-dev and rev-60 of ThreeJS.
Thanks. :)
Source: (StackOverflow)
Is it possible to create shadows from a DirectionalLight? If I use SpotLight then I see a shadow, but if I use DirectionalLight it doesn't work.
Source: (StackOverflow)
I'm working on a small multiplayer game which has a single skinned player mesh with many players using it. Some background: I've tried loading via Maya and Blender Collada export. Both seem to reference some form of animation data, but I couldn't get it working. I've tried the Maya JSON exporter, which spat out tiny 1k files with only a material line. Finally, the Blender JSON exporter worked. To those also trying to load skinned meshes, I found this very helpful: Model with bones animation (blender export) animating incorrectly in three.js
So now I have a geometry object and a materials array from the JSON loader. I can set skinning=true on the materials, create a THREE.SkinnedMesh, add it to the scene, add animations via THREE.AnimationHandler.add (I'm quite unclear on what the AnimationHandler actually does), create a THREE.Animation, call play() and update(dt). Finally I have a single mesh and an animation playing in my scene.
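For reference, the sequence just described, sketched end to end (rev-59/60-era API; 'player.js' is an illustrative file name):
var loader = new THREE.JSONLoader();
loader.load('player.js', function (geometry, materials) {
    materials.forEach(function (m) { m.skinning = true; });
    var mesh = new THREE.SkinnedMesh(geometry, new THREE.MeshFaceMaterial(materials));
    scene.add(mesh);
    THREE.AnimationHandler.add(geometry.animation);        // register the keyframe data
    var animation = new THREE.Animation(mesh, geometry.animation.name);
    animation.play();
    // per frame in the render loop: animation.update(delta);
});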
Now what I want are these...
1. Many instances - I want more than one player model running around in my scene.
- I don't want the same mesh and animation data loaded many times.
- Animation time should be per-instance (so they don't all animate in sync).
Should I be creating many THREE.SkinnedMesh and THREE.Animation for the same model? Where does THREE.AnimationHandler come in?
2. Many animations - I want idle/run cycles able to be played individually.
AFAIK there's only a single timeline of animation keyframes. How does Three.js partition this up for me, or do I have to do it manually?
3. Animation Blending - When a character stops running and stands still with the idle animation, I don't want an instant snap from one to the other. I'd like to pause the run animation and blend that state back into the idle animation.
Is this currently possible with skinned meshes (not morph targets)? Are there examples or docs about this?
Any information would be greatly appreciated, even just a nudge in the right direction. I'm not after a full tutorial, I would like some higher level information about these features.
I could happily implement 2 and 3, but I'd like some information/descriptive docs about the threejs skinning and animation framework to get me started. For example, this isn't much to go on.
[EDIT]
Thanks, @NishchitDhanani, this page is quite good but doesn't mention multiple animations or blending skeletal animations: http://chimera.labs.oreilly.com/books/1234000000802/ch05.html#animating_characters_with_skinning
This page says multiple animations are still a current issue but not much more (discussed a little in the comments):
http://devmatrix.wordpress.com/2013/02/27/creating-skeletal-animation-in-blender-and-exporting-it-to-three-js/
The current answers are...
- Use many THREE.SkinnedMesh and still not sure about THREE.AnimationHandler.
- Don't know. Perhaps there's a way to modify the start/end keyframes manually in the THREE.Animation.
- Not implemented AFAIK. I might try creating a custom shader that can take two THREE.Animations and interpolate between them.
Source: (StackOverflow)