EzDevInfo.com

mesh.js

MeshJS - a featherlight data synchronization library for creating powerful applications; a universal, streamable interface for synchronizing data.

Android OpenGL .OBJ file loader [closed]

There seem to be quite a number of OBJ mesh file loaders out there that people have developed for use on the Android platform. I'm wondering if anyone has any experience with these and can offer a recommendation on which one seems to work best for them.

Here are my criteria:

  • Lightweight (small file size),
  • Optimized for speed,
  • Easy to implement,
  • Offers some sort of texture mapping support (not sure if I need this -- I haven't gotten far enough in my coding to know whether I need a library for it, or whether OpenGL ES can do all the work I need here), and
  • Can be used in Android apps that are being sold commercially.

Here are a few of the libraries I've found.

I'm also open to hearing about others not included on this list. Thanks!
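For context on what such a loader actually has to do, here is a minimal parsing sketch in plain Java (positions and triangular faces only; no materials, texture coordinates, or normals). The class and method names are illustrative, not taken from any of the libraries in question:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal OBJ parsing sketch: reads "v x y z" and triangular "f a b c" lines.
    // Real loaders also handle "vt"/"vn", quads, negative indices and materials.
    class SimpleObjLoader {
        final List<float[]> vertices = new ArrayList<>();
        final List<int[]> faces = new ArrayList<>();

        void load(String path) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] t = line.trim().split("\\s+");
                    if (t[0].equals("v")) {
                        vertices.add(new float[] {
                            Float.parseFloat(t[1]), Float.parseFloat(t[2]), Float.parseFloat(t[3]) });
                    } else if (t[0].equals("f")) {
                        int[] face = new int[3];
                        for (int i = 0; i < 3; i++) {
                            // "f 1/2/3 ..." -> take the index before the first slash; OBJ is 1-based
                            face[i] = Integer.parseInt(t[i + 1].split("/")[0]) - 1;
                        }
                        faces.add(face);
                    }
                }
            }
        }
    }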


Source: (StackOverflow)

Can I hide faces of a mesh in three.js?

I want to make parts of a mesh invisible at runtime. Can I set these parts invisible/transparent, e.g. by changing attributes of individual faces? The mesh itself uses only one material.


An illustration of the question as the editor understands it: imagine a mesh (here with a geometry of 20 vertices) where each quad of four vertices builds up a Face4. Now some parts of the mesh should be made invisible (here, two faces are invisible).



Source: (StackOverflow)


Calculating normals between 2 meshes ending up in seams

My Task

I am currently creating a terrain system for Unity3D, specialized for mobile devices where a running app has little memory. It allows a terrain 15,000 x 15,000 kilometers in size with heights from -1,000 meters to 10,000 meters, and its only limit is the space on the hard disk.

Situation

Everything is running fine right now, except that the normals between different meshes (each mesh has a subdivision level) are not calculated correctly. Here are two pictures which visualize the problem:

[Images: mesh with displayed triangles; mesh with normals only]

The problem only occurs at a transition from one subdivision level to another; if both meshes have the same level it works well. I first thought I was missing some faces when calculating the normals, but it seems they are all included in the calculation.

Some Code

Normal calculation of each face:

Vector3 u = vertices[item.Face1] - vertices[item.Face0];
Vector3 v = vertices[item.Face2] - vertices[item.Face0];

Vector3 fn = new Vector3((u.Y * v.Z) - (u.Z * v.Y), (u.Z * v.X) - (u.X * v.Z), (u.X * v.Y) - (u.Y * v.X));
fn.Normalize();

After calculating the normal of each face around a vertex, I add all the face normals to the vertex normal and normalize it. The result is shown in the pictures: as you can see in the background and on the meshes themselves, it works as long as there is no change in subdivision level.

Some more code

/// <summary>
/// This is a static indices array which contains all indices
/// for all possible meshes.
/// </summary>
private static readonly Int32[] // Subdivision
                             [] // All borders
                             [] Indicies = new Int32[8][][]; // Indices

Calculate each normal of the current mesh:

Int32 count = 0;
for (int y = 0; y < length; y++)
{
    for (int x = 0; x < length; x++)
    {
        ns[count++] = GetNormal(x, y, faces, vs);
    }
}

The GetNormal-method:

private unsafe Vector3 GetNormal(Int32 x, Int32 y, Int32[] indicies, Vector3* vertices)
{
    Vector3 normal = new Vector3();
    CalculateNormal(x, y, indicies, vertices, ref normal);
    normal.Normalize();
    // Calculate all face normals and normalize
    return normal;
}

The CalculateNormal-method:

private unsafe void CalculateNormal(Int32 x, Int32 y, Int32[] indicies, Vector3* vertices, ref Vector3 normal)
{
    Int32 p = ((y * Length) + x);
    Int32 length = Length - 1;

    foreach (Face item in FindFaces(this, indicies, p))
    {
        Vector3 u = vertices[item.Face1] - vertices[item.Face0];
        Vector3 v = vertices[item.Face2] - vertices[item.Face0];

        Vector3 fn = new Vector3((u.Y * v.Z) - (u.Z * v.Y), (u.Z * v.X) - (u.X * v.Z), (u.X * v.Y) - (u.Y * v.X));
        fn.Normalize();
        normal += fn;
    }

    SegmentHeighmap heightmap;
    if (x == 0 && y == 0)
    {
        foreach (Face item in FindFaces(Neighbor.Left, out heightmap, TranslateLeftX, TranslateLeftY, x, y))
        {
            Face f = item;
            AddFaceNormal(ref f, ref normal, heightmap);
        }

... /* A lot more code here, one branch for each possible combination */

The AddFaceNormal-method:

private static void AddFaceNormal(ref Face face, ref Vector3 normal, SegmentHeighmap heightmap)
{
    Vector3 v0;
    Vector3 v1;
    Vector3 v2;
    heightmap.CalculateVertex(face.Face0, out v0);
    heightmap.CalculateVertex(face.Face1, out v1);
    heightmap.CalculateVertex(face.Face2, out v2);

    Vector3 u = v1 - v0;
    Vector3 v = v2 - v0;

    Vector3 fn = new Vector3((u.Y * v.Z) - (u.Z * v.Y), (u.Z * v.X) - (u.X * v.Z), (u.X * v.Y) - (u.Y * v.X));
    fn.Normalize();
    normal += fn;
}

The FindFaces-methods:

private IEnumerable<Face> FindFaces(Neighbor neighbor, out SegmentHeighmap heightmap, TranslationHandler translateX, TranslationHandler translateY, Int32 x, Int32 y)
{
    Segment segment = Segment.GetNeighbor(neighbor);
    if (segment != null)
    {
        heightmap = segment.Heighmap;
        Int32 point = ((translateY(this, heightmap, y) * Length) + translateX(this, heightmap, x));

        return FindFaces(heightmap, null, point);
    }
    heightmap = null;
    return Enumerable.Empty<Face>();
}
private IEnumerable<Face> FindFaces(SegmentHeighmap heightmap, Int32[] indicies, Int32 point)
{
    indicies = indicies ?? Indicies[heightmap.Segment.SubdivisionLevel][heightmap.SideFlag];

    for (int i = 0; i < indicies.Length; i += 3)
    {
        Int32 a = indicies[i], b = indicies[i + 1], c = indicies[i + 2];
        if (a == point || b == point || c == point)
        {
            yield return new Face(a, b, c);
        }
    }
}

The TranslatePoint-method:

private Int32 TranslatePoint(Int32 point, Segment segment)
{
    Int32 subdiv = segment.SubdivisionLevel - Parent.SubdivisionLevel;
    if (subdiv == 0)
    {
        return point;
    }
    if (Math.Abs(subdiv) == 1)
    {
        if (subdiv > 0)
        {
            return point * 2;
        }
        return point / 2;
    }

    throw new InvalidOperationException("Subdivision difference is greater than 1");
}

And finally the TranslationHandler-delegate and 2 sample handlers:

/// <summary>
/// Handles the translation from one coordinate space into another
/// This handler is used internal only
/// </summary>
private delegate Int32 TranslationHandler(SegmentHeighmap @this, SegmentHeighmap other, Int32 v);

private static readonly TranslationHandler TranslateLeftX = (t, o, v) => o.Length - 1;
private static readonly TranslationHandler TranslateLeftY = (t, o, v) => t.TranslatePoint(v, o.Segment);

Question

The question is simple: why does it not work across different subdivision levels? Am I missing something in my calculation?


Source: (StackOverflow)

Decomposing a 3d mesh into a 2d net

Suppose you have a 3-dimensional object, represented as a 3D mesh in some common file format. How would you devise an algorithm to decompose the mesh into one or more 2D 'nets' - that is, a 2-dimensional representation that can be cut out and folded to create the original 3D object?

Amongst other things, the algorithm would need to account for:

  • Multiple possible decompositions for any given object
  • Fitting a mesh into fixed-size canvases (sheets of paper).
  • Recognizing when two panels in the net would overlap (and are thus invalid).
  • Breaking a mesh up into multiple nets if they can't be represented as a single one, due to overlap or page size constraints.
  • Generating tabs in the appropriate places, for attaching adjacent faces.

The obvious degenerate case is simply to create one net per face, with tabs on half the edges. This isn't ideal, obviously: The ideal case is a single continuous net. The reality for complex shapes is likely to be somewhere in the middle.

I realize that finding the optimal net (fewest nets / least pages) is probably computationally expensive, but a good heuristic for finding 'good enough' nets would suffice.
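As a concrete starting point: a common first step in unfolding algorithms is to build the dual graph (one node per face, one arc per shared edge) and choose a spanning tree of it; keeping the tree edges folded and cutting all others guarantees the surface unfolds into one connected net. A minimal sketch of that step in Java, assuming the mesh is given as vertex-index triples (2D placement, overlap testing and tab generation are the harder parts and are left out):

    import java.util.*;

    // Builds a BFS spanning tree over the face-adjacency (dual) graph of a
    // triangle mesh. parent[f] is the face that f unfolds from (-1 for the
    // root); edges kept in the tree stay folded, all other shared edges are cut.
    class UnfoldTree {
        static int[] spanningTree(int[][] tris) {
            // undirected edge (min,max) -> faces that share it
            Map<Long, List<Integer>> edgeToFaces = new HashMap<>();
            for (int f = 0; f < tris.length; f++) {
                for (int e = 0; e < 3; e++) {
                    int a = tris[f][e], b = tris[f][(e + 1) % 3];
                    long key = ((long) Math.min(a, b) << 32) | Math.max(a, b);
                    edgeToFaces.computeIfAbsent(key, k -> new ArrayList<>()).add(f);
                }
            }
            // face -> neighbouring faces across a shared edge
            List<List<Integer>> adj = new ArrayList<>();
            for (int f = 0; f < tris.length; f++) adj.add(new ArrayList<>());
            for (List<Integer> shared : edgeToFaces.values()) {
                if (shared.size() == 2) {
                    adj.get(shared.get(0)).add(shared.get(1));
                    adj.get(shared.get(1)).add(shared.get(0));
                }
            }
            int[] parent = new int[tris.length];
            Arrays.fill(parent, -2);            // -2 = not visited yet
            Deque<Integer> queue = new ArrayDeque<>();
            parent[0] = -1;                     // face 0 is the root of the net
            queue.add(0);
            while (!queue.isEmpty()) {
                int f = queue.poll();
                for (int g : adj.get(f))
                    if (parent[g] == -2) { parent[g] = f; queue.add(g); }
            }
            return parent;
        }
    }

Different spanning trees give different nets, which is one natural place to plug in a heuristic (e.g., prefer hinges with flat dihedral angles, or restart with another tree when the unfolded layout overlaps).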


Source: (StackOverflow)

procedurally generate a sphere mesh

I am looking for an algorithm (in pseudocode) that generates the 3D coordinates of a sphere mesh like this:


The number of horizontal and lateral slices should be configurable.

Thanks a lot in advance!
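Since the question asks for pseudocode, here is a hedged sketch in Java of the standard latitude/longitude ("UV sphere") construction, with the number of stacks (horizontal slices) and slices (segments around the vertical axis) configurable:

    // UV-sphere sketch: vertices lie on rings of constant latitude, indexed as
    // two triangles per quad. Each row repeats the seam column (slices + 1 vertices).
    class SphereMesh {
        static float[] vertices(float radius, int stacks, int slices) {
            float[] v = new float[(stacks + 1) * (slices + 1) * 3];
            int i = 0;
            for (int s = 0; s <= stacks; s++) {
                double phi = Math.PI * s / stacks;             // latitude, pole to pole
                for (int t = 0; t <= slices; t++) {
                    double theta = 2.0 * Math.PI * t / slices; // longitude around the axis
                    v[i++] = (float) (radius * Math.sin(phi) * Math.cos(theta));
                    v[i++] = (float) (radius * Math.cos(phi));
                    v[i++] = (float) (radius * Math.sin(phi) * Math.sin(theta));
                }
            }
            return v;
        }

        static int[] indices(int stacks, int slices) {
            int[] idx = new int[stacks * slices * 6];
            int i = 0;
            for (int s = 0; s < stacks; s++) {
                for (int t = 0; t < slices; t++) {
                    int row0 = s * (slices + 1) + t, row1 = row0 + slices + 1;
                    idx[i++] = row0;     idx[i++] = row1; idx[i++] = row0 + 1;
                    idx[i++] = row0 + 1; idx[i++] = row1; idx[i++] = row1 + 1;
                }
            }
            return idx;
        }
    }

The triangles touching the poles are degenerate quads (one edge has zero length), which is harmless for rendering; an icosphere subdivision avoids that if more uniform triangles are needed.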


Source: (StackOverflow)

Algorithm for generating a triangular mesh from a cloud of points

In some simulation program we generate object surfaces in terms of points; each point has 3D coordinates and a vector that represents the normal to the surface at that point. For visualization purposes we would like to generate a mesh composed of triangles, where every three close points form one triangle with its normal. Then we can send this information to a standard visualization program that renders the surface, such as VMD (Visual Molecular Dynamics).

We wonder which is the fastest available algorithm for doing this.


Source: (StackOverflow)

Algorithm for labeling edges of a triangular mesh

Introduction

As part of a larger program (related to rendering of volumetric graphics), I have a small but tricky subproblem where an arbitrary (but finite) triangular 2D mesh needs to be labeled in a specific way. A while ago I wrote a solution (see below) which was good enough for the test meshes I had at the time, even though I realized the approach would probably not work for every possible mesh one could think of. Now I have finally encountered a mesh for which the present solution does not perform well at all, and it looks like I should come up with a totally different kind of approach. Unfortunately, it seems that I am not really able to reset my lines of thinking, which is why I thought I'd ask here.

The problem

Consider the picture below. (The colors are not part of the problem; I just added them to improve (?) the visualization. Also the varying edge width is a totally irrelevant artifact.)

For every triangle (e.g., the orange ABC and the green ABD), each of the three edges needs to be given one of two labels, say "0" or "1". There are just two requirements:

  1. Not all the edges of a triangle can have the same label. In other words, for every triangle there must be two "0"s and one "1", or two "1"s and one "0".
  2. If an edge is shared by two triangles, it must have the same label for both. In other words, if the edge AB in the picture is labeled "0" for the triangle ABC, it must be labeled "0" for ABD, too.

The mesh is a genuine 2D one, and it is finite: i.e., it does not wrap, and it has a well-defined outer border. Obviously, on the border it is quite easy to satisfy the requirements -- but it gets more difficult inside.

Intuitively, it looks like at least one solution should always exist, even though I cannot prove it. (Usually there are several -- any one of them is enough.)

Current solution

My current solution is a really brute-force one (provided here just for completeness -- feel free to skip this section):

  • Maintain four sets of triangles -- one for each possible count (0..3) of edges remaining to be labeled. In the beginning, every triangle is in the set where three edges remain to be labeled.
  • For as long as there are triangles with non-labeled edges:
    Find the smallest non-zero number of unallocated edges for which there are still triangles left. In other words: at any given time, we try to minimize the number of triangles for which the labeling has been partially completed. The number of edges remaining will be anything between 1 and 3. Then, just pick one such triangle with this specific number of edges remaining to be allocated. For this triangle, do the following:
    • See if the labeling of any remaining edge is already imposed by the labeling of some other triangle. If so, assign the labels as implied by requirement #2 above.
    • If this results in a dead end (i.e., requirement #1 can no more be satisfied for the present triangle), then start over the whole process from the very beginning.
    • Allocate any remaining edges as follows:
      • If no edges have been labeled so far, assign the first one randomly.
      • When one edge is already allocated, assign the second one so that it has the opposite label.
      • When two edges are allocated: if they have the same label, assign the third one the opposite label (obviously); if the two have different labels, assign the third one randomly.
    • Update the sets of triangles for the different counts of unallocated edges.
  • If we ever get here, then we have a solution -- hooray!

Usually this approach finds a solution within just a couple of iterations, but recently I encountered a mesh for which the algorithm tends to terminate only after one or two thousand retries... which obviously suggests that there may be meshes for which it never terminates.


Now, I would love to have a deterministic algorithm that is guaranteed to always find a solution. Computational complexity is not that big an issue, because the meshes are not very large and the labeling basically only has to be done when a new mesh is loaded, which does not happen all the time -- so an algorithm with (for example) exponential complexity ought to be fine, as long as it works. (But of course: the more efficient, the better.)

Thank you for reading this far. Now, any help would be greatly appreciated!
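Since exponential complexity is acceptable, one deterministic fallback is plain backtracking over the edges: assign labels one edge at a time and undo on conflict. Unlike the randomized restart scheme above, it always terminates, and it finds a valid labeling whenever one exists. A minimal sketch, assuming the mesh is given as triangles of edge indices:

    // Deterministic backtracking over edge labels (0/1). triEdges[t] holds the
    // three edge indices of triangle t; label[] starts filled with -1.
    // Exponential in the worst case, but complete: it terminates, and it
    // returns a valid labeling whenever one exists.
    class EdgeLabeler {
        static boolean solve(int[] label, int e, int[][] triEdges) {
            if (e == label.length) return true;          // every edge labeled
            for (int v = 0; v <= 1; v++) {
                label[e] = v;
                if (consistent(label, triEdges) && solve(label, e + 1, triEdges))
                    return true;
            }
            label[e] = -1;                               // undo and backtrack
            return false;
        }

        // Requirement #1: no triangle may end up with three equal labels.
        // (Requirement #2 holds automatically, since labels live on shared edges.)
        static boolean consistent(int[] label, int[][] triEdges) {
            for (int[] t : triEdges) {
                int a = label[t[0]], b = label[t[1]], c = label[t[2]];
                if (a != -1 && a == b && b == c) return false;
            }
            return true;
        }
    }

Calling solve(label, 0, triEdges) on a label array filled with -1 either fills in a complete labeling or proves that none exists; propagating forced labels first (as in the current solution) would prune the search considerably.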



Edit: Results based on suggested solutions

Unfortunately, I cannot get the approach suggested by Dialecticus to work. Maybe I did not get it right... Anyway, consider the following mesh, with the start point indicated by a green dot. Let's zoom in a little bit and start the algorithm. After the first step, the labeling looks like this (red = "starred paths", blue = "ringed paths"). So far so good. After the second step... and the third... and the fourth... But now we have a problem! Let's do one more round, but please pay attention to the triangle plotted in magenta: according to my current implementation, all the edges of the magenta triangle are on a ring path, so they should be blue, which effectively makes this a counterexample. Now maybe I got it wrong somehow... But in any case the two edges that are nearest to the start node obviously cannot be red; and if the third one is labeled red, then it seems that the solution does not really fit the idea anymore.

Btw, here is the data used. Each row represents one edge, and the columns are to be interpreted as follows:

  1. Index of first node
  2. Index of second node
  3. x coordinate of first node
  4. y coordinate of first node
  5. x coordinate of second node
  6. y coordinate of second node

The start node is the one having index 1.


I guess that next I should try the method suggested by Rafał Dowgird... But perhaps I ought to do something completely different for a while :)


Source: (StackOverflow)

A Good 3D mesh library [closed]

I'm looking for a good 3D mesh library.

  • Should be able to read popular formats (OFF, OBJ...)
  • Should support both half-edge structure and a triangle soup
  • Should be tolerant to faults and illegal meshes.
  • Basic geometric operations - intersections, normal calculation, etc.
  • Most importantly - Should not be convoluted with endless template and inheritance hierarchies.

I've tried both CGAL and OpenMesh, but both fail miserably on the last point.

Specifically CGAL, which is impossible to follow even with the most advanced code-analysis tools.

So far I'm seriously considering rolling my own.

My preference is C++, but I'm open to other options.


Source: (StackOverflow)

c# XNA low frame rate

OK, I have 80,000 "Box" meshes with simple textures. I have set a view distance and only draw the ones you can see, which leaves 600 to 1000 meshes for the DrawModel function below. The problem is I only get 10 frames per second, and my view distance is poor. I have also profiled all my code, and mesh.Draw() alone takes 30 frames per second off; nothing else takes anywhere near that much. Any help?

    private void DrawModel(MeshHolder tmpMH)
    {
        Model tmpDrawModel = (Model)_Meshs[tmpMH.MeshFileName];
        Matrix[] transforms = new Matrix[tmpDrawModel.Bones.Count];
        tmpDrawModel.CopyAbsoluteBoneTransformsTo(transforms);

        foreach (ModelMesh mesh in tmpDrawModel.Meshes)
        {
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.LightingEnabled = false;
                effect.TextureEnabled = true;
                effect.Texture = (Texture2D)_Textures[tmpMH.GetTexture(Count)];

                effect.View = _MainCam.View;
                effect.Projection = _projection;
                effect.World =
                    transforms[mesh.ParentBone.Index] *
                    Matrix.CreateFromYawPitchRoll(tmpMH.Rotation.Y, tmpMH.Rotation.X, tmpMH.Rotation.Z) *
                    Matrix.CreateScale(tmpMH.Scale) *
                    Matrix.CreateTranslation(tmpMH.Position);
            }

            mesh.Draw();
        }
    }

Source: (StackOverflow)

Algorithm for determining whether a point is inside a 3D mesh

What is a fast algorithm for determining whether or not a point is inside a 3D mesh? For simplicity you can assume the mesh is all triangles and has no holes.

What I know so far is that one popular way of determining whether or not a ray has crossed a mesh is to count the number of ray/triangle intersections. It has to be fast because I am using it for a haptic medical simulation, so I cannot test all of the triangles for ray intersection. I need some kind of hashing or tree data structure to store the triangles in, to help determine which triangles are relevant.

Also, I know that if I have an arbitrary 2D projection of the vertices, a simple point/triangle intersection test is all that's necessary. However, I'd still need to know which triangles are relevant and, in addition, which triangles lie in front of the point, and only test those triangles.
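For reference, the counting test itself is compact. Here is a hedged brute-force sketch in Java (Moller-Trumbore ray/triangle intersection over every triangle); the hashing or tree structure the question asks about would sit on top of this to prune the triangle list:

    // Point-in-mesh by ray parity: cast a ray from p in a fixed direction and
    // count triangle crossings; an odd count means p is inside (watertight
    // mesh with no holes, as assumed in the question). No acceleration structure.
    class PointInMesh {
        static boolean inside(double[] p, double[][][] triangles) {
            double[] dir = { 0.577, 0.577, 0.577 }; // arbitrary; skewed to avoid edge grazing
            int hits = 0;
            for (double[][] t : triangles)
                if (rayHitsTriangle(p, dir, t[0], t[1], t[2])) hits++;
            return (hits % 2) == 1;
        }

        // Moller-Trumbore ray/triangle intersection, counting hits ahead of the origin only.
        static boolean rayHitsTriangle(double[] o, double[] d,
                                       double[] a, double[] b, double[] c) {
            double EPS = 1e-12;
            double[] e1 = sub(b, a), e2 = sub(c, a);
            double[] pv = cross(d, e2);
            double det = dot(e1, pv);
            if (Math.abs(det) < EPS) return false;   // ray parallel to triangle plane
            double inv = 1.0 / det;
            double[] tv = sub(o, a);
            double u = dot(tv, pv) * inv;
            if (u < 0 || u > 1) return false;
            double[] qv = cross(tv, e1);
            double v = dot(d, qv) * inv;
            if (v < 0 || u + v > 1) return false;
            return dot(e2, qv) * inv > EPS;          // intersection in front of the point
        }

        static double[] sub(double[] x, double[] y) {
            return new double[] { x[0] - y[0], x[1] - y[1], x[2] - y[2] };
        }
        static double[] cross(double[] x, double[] y) {
            return new double[] { x[1] * y[2] - x[2] * y[1],
                                  x[2] * y[0] - x[0] * y[2],
                                  x[0] * y[1] - x[1] * y[0] };
        }
        static double dot(double[] x, double[] y) {
            return x[0] * y[0] + x[1] * y[1] + x[2] * y[2];
        }
    }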


Source: (StackOverflow)

How to change width of CubeGeometry with Three.js?

I have a cube geometry and a mesh, and I don't know how to change the width (or height... I can change x, y and z, though). Here's a snippet of what I have right now:

geometry = new THREE.CubeGeometry( 200, 200, 200 );
material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );
mesh = new THREE.Mesh( geometry, material );
// WebGL renderer here

function render(){
    mesh.rotation.x += 0.01;
    mesh.rotation.y += 0.02;
    renderer.render( scene, camera );
}

function changeStuff(){
    mesh.geometry.width = 500; //Doesn't work.
    mesh.width = 500; // Doesn't work.
    geometry.width = 500; //Doesn't work.
    mesh.position.x = 500; // Works!!

    render();
}

Thanks!

EDIT

Found a solution:

mesh.scale.x = 500;

Note that scale is a multiplier of the original geometry, not an absolute size: this turns the 200-unit cube into one 100,000 units wide. For a width of exactly 500, use mesh.scale.x = 500 / 200 = 2.5.

Source: (StackOverflow)

Get border edges of mesh - in winding order

I have a triangulated mesh. Assume it looks like a bumpy surface. I want to be able to find all the edges that fall on the surrounding border of the mesh. (Forget about inner vertices.)

I know I have to find the edges that are connected to only one triangle and collect all of them together, and that is the answer. But I want to be sure that the vertices of these edges are ordered clockwise around the shape.

I want to do this because I would like to get a polygon line around the outside of the mesh.

I hope this is clear enough to understand. In a sense I am trying to "de-triangulate" the mesh, if there is such a term.
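If the triangles are consistently wound, there is a convenient shortcut: a directed edge a->b lies on the border exactly when its reverse b->a never occurs in any triangle, and following those directed edges already yields the border polygon in the mesh's winding order. A minimal sketch in Java, assuming a single border loop:

    import java.util.*;

    // Border extraction for a consistently wound triangle mesh. Each interior
    // edge appears twice with opposite directions, so a directed edge whose
    // reverse is missing lies on the border; chaining those edges gives the
    // loop in the mesh's winding order. Assumes exactly one border loop.
    class BorderLoop {
        static List<Integer> extract(int[][] tris) {
            Set<Long> directed = new HashSet<>();
            for (int[] t : tris)
                for (int e = 0; e < 3; e++)
                    directed.add(key(t[e], t[(e + 1) % 3]));

            Map<Integer, Integer> next = new HashMap<>();
            for (long k : directed) {
                int a = (int) (k >>> 32), b = (int) k;
                if (!directed.contains(key(b, a))) next.put(a, b); // border edge a->b
            }

            List<Integer> loop = new ArrayList<>();
            int start = next.keySet().iterator().next();
            for (int v = start; loop.isEmpty() || v != start; v = next.get(v))
                loop.add(v);
            return loop;
        }

        static long key(int a, int b) { return ((long) a << 32) | (b & 0xffffffffL); }
    }

Whether the resulting order is clockwise or counter-clockwise depends on the winding convention of the triangles and the side you view the surface from; if the opposite order is needed, reversing the list suffices.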


Source: (StackOverflow)

LibGDX mesh heightmap normals and lights

I am trying to get mesh normals and lights working in a LibGDX project.

I already have a textured mesh generated from heightmap texture pixels.

The problem is I cannot get the normals lit correctly, and I'm not 100% sure I have the normal vertices correctly set up in the TerrainChunk class.

Here's the main class code:

package com.me.terrain;

import com.badlogic.gdx.Game;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.files.FileHandle;
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.PerspectiveCamera;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.g3d.utils.CameraInputController;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;
import com.badlogic.gdx.math.Matrix3;
import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Vector3;

public class Terra extends Game {

private PerspectiveCamera camera;
private CameraInputController camController;

private TerrainChunk chunk;
private Mesh mesh;

private ShaderProgram shader;
private Texture terrainTexture;

private final Matrix3 normalMatrix = new Matrix3();

private static final float[] lightPosition = { 5, 35, 5 };
private static final float[] ambientColor = { 0.2f, 0.2f, 0.2f, 1.0f };
private static final float[] diffuseColor = { 0.5f, 0.5f, 0.5f, 1.0f };
private static final float[] specularColor = { 0.7f, 0.7f, 0.7f, 1.0f };

private static final float[] fogColor = { 0.2f, 0.1f, 0.6f, 1.0f };

private Matrix4 model = new Matrix4();
private Matrix4 modelView = new Matrix4();

private final String vertexShader =
        "attribute vec4 a_position; \n" +
        "attribute vec3 a_normal; \n" +
        "attribute vec2 a_texCoord; \n" +
        "attribute vec4 a_color; \n" +

        "uniform mat4 u_MVPMatrix; \n" +
        "uniform mat3 u_normalMatrix; \n" +

        "uniform vec3 u_lightPosition; \n" +

        "varying float intensity; \n" +
        "varying vec2 texCoords; \n" +
        "varying vec4 v_color; \n" +

        "void main() { \n" +
        "    vec3 normal = normalize(u_normalMatrix * a_normal); \n" +
        "    vec3 light = normalize(u_lightPosition); \n" +
        "    intensity = max( dot(normal, light) , 0.0); \n" +

        "    v_color = a_color; \n" +
        "    texCoords = a_texCoord; \n" +

        "    gl_Position = u_MVPMatrix * a_position; \n" +
        "}";

private final String fragmentShader =
        "#ifdef GL_ES \n" +
        "precision mediump float; \n" +
        "#endif \n" +

        "uniform vec4 u_ambientColor; \n" +
        "uniform vec4 u_diffuseColor; \n" +
        "uniform vec4 u_specularColor; \n" +

        "uniform sampler2D u_texture; \n" +
        "varying vec2 texCoords; \n" +
        "varying vec4 v_color; \n" +

        "varying float intensity; \n" +

        "void main() { \n" +
        "    gl_FragColor = v_color * intensity * texture2D(u_texture, texCoords); \n" +
        "}";

@Override
public void create() {

    // Terrain texture size is 128x128
    terrainTexture = new Texture(Gdx.files.internal("data/concrete2.png"));

    // Height map (black/white) texture size is 32x32
    String heightMapFile = "data/heightmap.png";


    // position, normal, color, texture
    int vertexSize = 3 + 3 + 1 + 2;  

    chunk = new TerrainChunk(32, 32, vertexSize, heightMapFile);



    mesh = new Mesh(true, chunk.vertices.length / 3, chunk.indices.length,
            new VertexAttribute(Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE),
            new VertexAttribute(Usage.Normal, 3, ShaderProgram.NORMAL_ATTRIBUTE),
            new VertexAttribute(Usage.ColorPacked, 4, ShaderProgram.COLOR_ATTRIBUTE),
            new VertexAttribute(Usage.TextureCoordinates, 2, ShaderProgram.TEXCOORD_ATTRIBUTE));

    mesh.setVertices(chunk.vertices);
    mesh.setIndices(chunk.indices);



    camera = new PerspectiveCamera(67, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    camera.position.set(5, 50, 5);
    camera.direction.set(3, 0, 0).sub(camera.position).nor();
    camera.near = 0.005f;
    camera.far = 300;
    camera.update();

    camController = new CameraInputController(camera);
    Gdx.input.setInputProcessor(camController);

    ShaderProgram.pedantic = false;

    shader = new ShaderProgram(vertexShader, fragmentShader);

}

@Override
public void render() {

    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);

    camController.update();
    camera.update();


    // This is wrong?
    model.setToRotation(new Vector3(0, 1, 0), 45f);
    modelView.set(camera.view).mul(model);


    terrainTexture.bind();

    shader.begin();

    shader.setUniformMatrix("u_MVPMatrix", camera.combined);
    shader.setUniformMatrix("u_normalMatrix", normalMatrix.set(modelView).inv().transpose());

    shader.setUniform3fv("u_lightPosition", lightPosition, 0, 3);
    shader.setUniform4fv("u_ambientColor", ambientColor, 0, 4);
    shader.setUniform4fv("u_diffuseColor", diffuseColor, 0, 4);
    shader.setUniform4fv("u_specularColor", specularColor, 0, 4);

    shader.setUniformi("u_texture", 0);

    mesh.render(shader, GL20.GL_TRIANGLES);

    shader.end();

}
}

TerrainChunk class code:

final static class TerrainChunk {

    public final float[] heightMap;
    public final short width;
    public final short height;
    public final float[] vertices;
    public final short[] indices;

    public final int vertexSize;
    private final int positionSize = 3;

    public TerrainChunk(int width, int height, int vertexSize, String heightMapTexture) {

        if ((width + 1) * (height + 1) > Short.MAX_VALUE) {
            throw new IllegalArgumentException(            
                    "Chunk size too big, (width + 1)*(height+1) must be <= 32767");
        }

        this.heightMap = new float[(width + 1) * (height + 1)];
        this.width = (short) width;
        this.height = (short) height;
        this.vertices = new float[heightMap.length * vertexSize];
        this.indices = new short[width * height * 6];
        this.vertexSize = vertexSize;

        buildHeightmap(heightMapTexture);

        buildIndices();
        buildVertices();

        calcNormals(indices, vertices);

    }

    public void buildHeightmap(String pathToHeightMap) {

        FileHandle handle = Gdx.files.internal(pathToHeightMap);
        Pixmap heightmapImage = new Pixmap(handle);
        Color color = new Color();
        int idh = 0;

        for (int x = 0; x < this.width + 1; x++) {
            for (int y = 0; y < this.height + 1; y++) {
                Color.rgba8888ToColor(color, heightmapImage.getPixel(x, y));
                this.heightMap[idh++] = color.r;
            }
        }
    }

    public void buildVertices() {
        int heightPitch = height + 1;
        int widthPitch = width + 1;

        int idx = 0;
        int hIdx = 0;
        int strength = 10; // multiplier for height map

        float scale = 4f;

        for (int z = 0; z < heightPitch; z++) {
            for (int x = 0; x < widthPitch; x++) {

                // POSITION
                vertices[idx++] = scale * x;
                vertices[idx++] = heightMap[hIdx++] * strength;
                vertices[idx++] = scale * z;

                // NORMAL, skip these for now
                idx += 3;

                // COLOR
                vertices[idx++] = Color.WHITE.toFloatBits();

                // TEXTURE
                vertices[idx++] = (x / (float) width);
                vertices[idx++] = (z / (float) height);

            }
        }
    }

    private void buildIndices() {
        int idx = 0;
        short pitch = (short) (width + 1);
        short i1 = 0;
        short i2 = 1;
        short i3 = (short) (1 + pitch);
        short i4 = pitch;

        short row = 0;

        for (int z = 0; z < height; z++) {
            for (int x = 0; x < width; x++) {
                indices[idx++] = i1;
                indices[idx++] = i2;
                indices[idx++] = i3;

                indices[idx++] = i3;
                indices[idx++] = i4;
                indices[idx++] = i1;

                i1++;
                i2++;
                i3++;
                i4++;
            }

            row += pitch;
            i1 = row;
            i2 = (short) (row + 1);
            i3 = (short) (i2 + pitch);
            i4 = (short) (row + pitch);
        }
    }

    // Gets the index of the first float of a normal for a specific vertex
    private int getNormalStart(int vertIndex) {
        return vertIndex * vertexSize + positionSize;
    }

    // Gets the index of the first float of a specific vertex
    private int getPositionStart(int vertIndex) {
        return vertIndex * vertexSize;
    }

    // Adds the provided value to the normal
    private void addNormal(int vertIndex, float[] verts, float x, float y, float z) {

        int i = getNormalStart(vertIndex);

        verts[i] += x;
        verts[i + 1] += y;
        verts[i + 2] += z;
    }

    /*
     * Normalizes normals
     */
    private void normalizeNormal(int vertIndex, float[] verts) {

        int i = getNormalStart(vertIndex);

        float x = verts[i];
        float y = verts[i + 1];
        float z = verts[i + 2];

        float num2 = ((x * x) + (y * y)) + (z * z);
        float num = 1f / (float) Math.sqrt(num2);
        x *= num;
        y *= num;
        z *= num;

        verts[i] = x;
        verts[i + 1] = y;
        verts[i + 2] = z;
    }

    /*
     * Calculates the normals
     */
    private void calcNormals(short[] indices, float[] verts) {

        for (int i = 0; i < indices.length; i += 3) {
            int i1 = getPositionStart(indices[i]);
            int i2 = getPositionStart(indices[i + 1]);
            int i3 = getPositionStart(indices[i + 2]);

            // p1
            float x1 = verts[i1];
            float y1 = verts[i1 + 1];
            float z1 = verts[i1 + 2];

            // p2
            float x2 = verts[i2];
            float y2 = verts[i2 + 1];
            float z2 = verts[i2 + 2];

            // p3
            float x3 = verts[i3];
            float y3 = verts[i3 + 1];
            float z3 = verts[i3 + 2];

            // u = p3 - p1
            float ux = x3 - x1;
            float uy = y3 - y1;
            float uz = z3 - z1;

            // v = p2 - p1
            float vx = x2 - x1;
            float vy = y2 - y1;
            float vz = z2 - z1;

            // n = cross(v, u)
            float nx = (vy * uz) - (vz * uy);
            float ny = (vz * ux) - (vx * uz);
            float nz = (vx * uy) - (vy * ux);

            // normalize(n)
            float num2 = ((nx * nx) + (ny * ny)) + (nz * nz);
            float num = 1f / (float) Math.sqrt(num2);
            nx *= num;
            ny *= num;
            nz *= num;

            addNormal(indices[i], verts, nx, ny, nz);
            addNormal(indices[i + 1], verts, nx, ny, nz);
            addNormal(indices[i + 2], verts, nx, ny, nz);
        }

        for (int i = 0; i < (verts.length / vertexSize); i++) {
            normalizeNormal(i, verts);
        }
    }

}

What I'm seeing is that when I move the camera, the lights don't show correctly when I'm above the terrain. They show more when I'm under the terrain, though even then incorrectly, I think.

pics:

  1. below: http://i.imgur.com/TocCLfA.png

  2. above: http://i.imgur.com/fwGhGDT.png


Source: (StackOverflow)

Sort a set of 3-D points in clockwise/counter-clockwise order

In 3-D space I have an unordered set of, say, 6 points; something like this:

           (A)*
                          (C)*
(E)*
                         (F)*
     (B)*

                  (D)*

The points form a 3-D contour but they are unordered. By "unordered" I mean that they are stored in an

unorderedList = [A - B - C - D - E - F]

I just want to reorganize this list starting from an arbitrary location (let's say point A) and traversing the points clockwise or counter-clockwise. Something like this:

orderedList = [A - E - B - D - F - C]

or

orderedList = [A - C - F - D - B - E]

I'm trying to implement an algorithm as simple as possible, since the set of points in question corresponds to the N-ring neighborhood of each vertex on a mesh of ~420,000 points, and I have to do this for each point on the mesh.

Some time ago there was a similar discussion regarding points in 2-D, but it is not yet clear to me how to go from that approach to my 3-D scenario.
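A minimal sketch of the usual approach, assuming a reference normal for the ring is available (in this setting the mesh vertex's own normal is the natural choice; all names here are illustrative): build a 2D basis perpendicular to that normal and sort the points by angle around their centroid.

    import java.util.*;

    // Sorts a roughly planar ring of 3-D points counter-clockwise around their
    // centroid, as seen from the side the reference normal points towards.
    class RingSort {
        static void sortAround(List<double[]> pts, double[] normal) {
            double[] c = centroid(pts);
            // build an orthonormal basis (u, w) spanning the plane of the ring
            double[] any = Math.abs(normal[0]) < 0.9 ? new double[] {1, 0, 0}
                                                     : new double[] {0, 1, 0};
            double[] u = normalize(cross(normal, any));
            double[] w = cross(normal, u);
            pts.sort(Comparator.comparingDouble((double[] p) -> {
                double[] d = { p[0] - c[0], p[1] - c[1], p[2] - c[2] };
                return Math.atan2(dot(d, w), dot(d, u));   // angle in the ring plane
            }));
        }

        static double[] centroid(List<double[]> pts) {
            double[] c = new double[3];
            for (double[] p : pts) { c[0] += p[0]; c[1] += p[1]; c[2] += p[2]; }
            for (int i = 0; i < 3; i++) c[i] /= pts.size();
            return c;
        }
        static double[] cross(double[] a, double[] b) {
            return new double[] { a[1]*b[2] - a[2]*b[1],
                                  a[2]*b[0] - a[0]*b[2],
                                  a[0]*b[1] - a[1]*b[0] };
        }
        static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
        static double[] normalize(double[] v) {
            double n = Math.sqrt(dot(v, v));
            return new double[] { v[0]/n, v[1]/n, v[2]/n };
        }
    }

Flipping the reference normal reverses the resulting order, which is how "clockwise" versus "counter-clockwise" is chosen.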


Source: (StackOverflow)

How to animate a 3d model (mesh) in OpenGL?

I want to animate a model (for example a human walking) in OpenGL. I know there is stuff like skeletal animation (with tricky math), but what about this...

  1. Create a model in Blender
  2. Create a skeleton for that model in Blender
  3. Now do a walking animation in Blender with that model and skeleton
  4. Take some "keyframes" of that animation and export every keyframe as a single model (for example as an OBJ file)
  5. Make an OBJ file loader for OpenGL (to get vertex, texture, normal and face data)
  6. Use a VBO to draw that animated model in OpenGL (and get some tricky ideas for how to change the current "keyframe"/model in the VBO... perhaps something with glMapBufferRange)

OK, I know this idea is only a rough sketch, but is it worth looking into further? What is a good concept for changing the "keyframe"/models in the VBO?

I know about the memory problem, but with small models (and not too many animations) it could be done, I think.
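The usual concept is exactly what step 6 hints at: keep two keyframe vertex arrays, blend them on the CPU, and re-upload the result each frame (glMapBufferRange or glBufferSubData both work; with shaders, both keyframes can instead be passed as vertex attributes and mixed on the GPU). A minimal sketch of the blend, in Java for consistency with the other examples on this page; it assumes both exports share the same vertex order:

    // Linear morph between two exported keyframes of the same model. The two
    // arrays must come from exports with identical vertex order; 'alpha' is
    // the normalized time between the frames (0 = frameA, 1 = frameB). The
    // output array is what gets re-uploaded into the VBO each frame.
    class KeyframeBlend {
        static void blend(float[] frameA, float[] frameB, float alpha, float[] out) {
            for (int i = 0; i < out.length; i++)
                out[i] = frameA[i] * (1.0f - alpha) + frameB[i] * alpha;
        }
    }

This per-vertex interpolation between whole-model snapshots is classic morph-target (keyframe) animation, as used for instance by the Quake MD2 format, so the approach is well established.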


Source: (StackOverflow)