EzDevInfo.com

opengl-es interview questions

Top opengl-es frequently asked interview questions

OpenGL ES Render to Texture

I have been having trouble finding straightforward code to render a scene to a texture in OpenGL ES (specifically for the iPhone, if that matters). I am interested in knowing the following:

  1. How do you render a scene to a texture in OpenGL ES?
  2. What parameters must you use to create a texture that is capable of being a render target in OpenGL ES?
  3. Are there any implications with applying this rendered texture to other primitives?
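A minimal sketch of the usual answer to these questions is a framebuffer object (FBO) with a texture as its color attachment. The sketch below uses the Android GLES20 bindings for illustration; the same entry points exist in the iPhone's C API without the class prefix. The helper name and the RGBA/UNSIGNED_BYTE format are assumptions, and a live GL context is required:

```java
import android.opengl.GLES20;

// Illustrative sketch only: create a texture that can serve as a render
// target, attach it to an FBO, and verify completeness. Call on the GL thread.
public final class RenderToTexture {
    /** Returns { framebufferId, textureId }. */
    public static int[] create(int width, int height) {
        int[] tex = new int[1];
        int[] fbo = new int[1];

        // 1. The color attachment: an ordinary 2D texture.
        //    RGBA + UNSIGNED_BYTE is the most widely renderable combination.
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        // No mipmaps are generated, so the min filter must not need them,
        // and non-power-of-two sizes require clamped wrap modes.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

        // 2. The framebuffer object, with the texture as color attachment.
        GLES20.glGenFramebuffers(1, fbo, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
                GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, tex[0], 0);

        if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                != GLES20.GL_FRAMEBUFFER_COMPLETE) {
            throw new IllegalStateException("framebuffer incomplete");
        }
        // Render the scene now; then rebind framebuffer 0 and sample tex[0]
        // like any other texture when texturing other primitives.
        return new int[] { fbo[0], tex[0] };
    }
}
```

The main implication when applying the result to other primitives is the filtering setup above: without mipmaps and with non-power-of-two sizes, only GL_LINEAR/GL_NEAREST and clamped wrapping are safe.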

Source: (StackOverflow)

Faster alternative to glReadPixels in iPhone OpenGL ES 2.0

Is there any faster way to access the frame buffer than using glReadPixels? I need read-only access to a small rectangular area of the frame buffer in order to process the data further on the CPU. Performance is important because I have to perform this operation repeatedly. I have searched the web and found approaches such as using a Pixel Buffer Object and glMapBuffer, but it seems that OpenGL ES 2.0 does not support them.


Source: (StackOverflow)


Tutorials and libraries for OpenGL-ES games on Android [closed]

What tutorials and libraries are available which can help beginners to develop 2D and 3D games on Android using OpenGL-ES? I'm looking for tutorials which can help me learn OpenGL-ES, and I'm looking for OpenGL-ES libraries which can make life easier for beginners in OpenGL-ES.

Since Android is still small, I guess it may be helpful to read iPhone OpenGL-ES tutorials as well, as I suppose the OpenGL-ES functionality is much the same.

I have found the following useful information which I would like to share:

Android tutorials:

Other Android OpenGL-ES information:

iPhone OpenGL-ES tutorials (where the OpenGl-ES information is probably useful):

As for libraries which a beginner might use to get a simpler hands-on experience with OpenGL-ES, I have only found Rokon, which was started recently and thus has many holes and bugs. It is also GNU GPL licensed (at the moment), which means it cannot be used if we wish to sell our games.

What else is out there?


Source: (StackOverflow)

Draw text in OpenGL ES (Android)

I'm currently developing a small OpenGL game for the Android platform, and I wonder if there's an easy way to render text on top of the rendered frame (like a HUD with the player's score, etc.). The text would also need to use a custom font. I've seen an example using a View as an overlay, but I don't know if I want to do that, since I might want to port the game to other platforms later. Any ideas?

Regards/Per
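One portable-in-spirit approach that avoids a View overlay is to bake the text into a texture once, then draw it as a blended, textured quad in an orthographic pass. A hedged sketch using Android's Canvas (the class name and sizes are made up for illustration, and this must run on the GL thread with a live context):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Sketch: render a string into a Bitmap with a Paint, then upload the
// Bitmap as a GL texture. Re-bake only when the text actually changes.
public final class TextTexture {
    public static int create(String text, float textSizePx) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setTextSize(textSizePx);
        paint.setColor(Color.WHITE);
        // Custom font: ship a .ttf in assets and set it here, e.g.
        // paint.setTypeface(Typeface.createFromAsset(assets, "fonts/hud.ttf"));

        // Size the bitmap to the measured text bounds.
        int w = (int) Math.ceil(paint.measureText(text));
        int h = (int) Math.ceil(paint.descent() - paint.ascent());
        Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        new Canvas(bitmap).drawText(text, 0, -paint.ascent(), paint);

        // Upload the bitmap into a texture.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        return tex[0]; // draw each frame as a blended quad over the scene
    }
}
```

The quad-drawing part is the same as any textured sprite, so only the baking step is platform-specific and would need porting.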


Source: (StackOverflow)

"This UIView seems to be the delegate of an NSISVariable it doesn't know anything about. This is an internal UIKit bug" Error

I am working on an OpenGL project. I have used some images (2, for the x and y scales) and labels (8) to display the scale on the screen. My first view is a tableView, from which I go to openglView. But whenever I go back from openglView to the tableView, it gives me this error and the app crashes.

"This UIView seems to be the delegate of an NSISVariable it doesn't know anything about. This is an internal UIKit bug."

Any suggestions? Is this happening because I am including too many UI elements? Besides those images and labels, I have used some buttons as well, and I am applying an affine transform to those images and labels, and to one button too.

Exact error is:

2013-01-31 12:20:18.743 EMtouch[50496:12203] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '{ Rows:
    UILayoutContainerView:0x9835660.Height == 480 + 1*0x7e53030.marker + 1*0x7e546c0.marker
    UILayoutContainerView:0x9835660.Width == 320 + 1*0x7e52f90.marker + 1*0x7e54330.marker
    UILayoutContainerView:0x9835660.minX == 0 + 1*0x7e52ca0.marker + -0.5*0x7e52f90.marker
    UILayoutContainerView:0x9835660.minY == 0 + 1*0x7e52fd0.marker + -0.5*0x7e53030.marker
    UINavigationTransitionView:0x9837ea0.Height == 480 + 1*0x7e51bf0.marker + 1*0x7e53030.marker + 1*0x7e546c0.marker
    UINavigationTransitionView:0x9837ea0.Width == 320 + 1*0x7e519c0.marker + 1*0x7e52f90.marker + 1*0x7e54330.marker
    UINavigationTransitionView:0x9837ea0.minX == 0 + 1*0x7e51940.marker + -0.5*0x7e519c0.marker
    UINavigationTransitionView:0x9837ea0.minY == 0 + 1*0x7e51b80.marker + -0.5*0x7e51bf0.marker
    UIWindow:0x7e1aea0.Height == 480 + 1*0x7e546c0.marker
    UIWindow:0x7e1aea0.Width == 320 + 1*0x7e54330.marker
    UIWindow:0x7e1aea0.minX == 0 + -0.5*0x7e54330.marker + 1*0x7e54410.marker
    UIWindow:0x7e1aea0.minY == 0 + 1*0x7e542d0.marker + -0.5*0x7e546c0.marker
    objective == <250:-0.000579834> + <250:-9.72015e-08>*UILabel:0x7b44bf0.Width + <250:9.72015e-08>*UILabel:0x7b45100.Width

  Constraints:
    <NSAutoresizingMaskLayoutConstraint:0x7e51940 h=-&- v=-&- UINavigationTransitionView:0x9837ea0.midX == UILayoutContainerView:0x9835660.midX>        Marker:0x7e51940.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e519c0 h=-&- v=-&- UINavigationTransitionView:0x9837ea0.width == UILayoutContainerView:0x9835660.width>      Marker:0x7e519c0.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e51b80 h=-&- v=-&- UINavigationTransitionView:0x9837ea0.midY == UILayoutContainerView:0x9835660.midY>        Marker:0x7e51b80.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e51bf0 h=-&- v=-&- UINavigationTransitionView:0x9837ea0.height == UILayoutContainerView:0x9835660.height>        Marker:0x7e51bf0.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e52ca0 h=-&- v=-&- UILayoutContainerView:0x9835660.midX == UIWindow:0x7e1aea0.midX>      Marker:0x7e52ca0.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e52f90 h=-&- v=-&- UILayoutContainerView:0x9835660.width == UIWindow:0x7e1aea0.width>        Marker:0x7e52f90.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e52fd0 h=-&- v=-&- UILayoutContainerView:0x9835660.midY == UIWindow:0x7e1aea0.midY>      Marker:0x7e52fd0.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e53030 h=-&- v=-&- UILayoutContainerView:0x9835660.height == UIWindow:0x7e1aea0.height>      Marker:0x7e53030.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e54330 h=--- v=--- H:[UIWindow:0x7e1aea0(320)]>      Marker:0x7e54330.marker
    <NSAutoresizingMaskLayoutConstraint:0x7e546c0 h=--- v=--- V:[UIWindow:0x7e1aea0(480)]>      Marker:0x7e546c0.marker
    <_UIWindowAnchoringConstraint:0x7e542d0 h=--- v=--- UIWindow:0x7e1aea0.midY == + 240>       Marker:0x7e542d0.marker
    <_UIWindowAnchoringConstraint:0x7e54410 h=--- v=--- UIWindow:0x7e1aea0.midX == + 160>       Marker:0x7e54410.marker
}: internal error.  Cannot find an outgoing row head for incoming head UILabel:0x7b44bf0.Width, which should never happen.'
*** First throw call stack:
(0x1fb1012 0x19f4e7e 0x1fb0deb 0x1599609 0x159c64f 0x159c753 0xe7e8f9 0x982b24 0x982783 0xbba3fe 0xbba698 0x97a3b6 0x97a554 0x28f7d8 0x1c2b014 0x1c1b7d5 0x1f57af5 0x1f56f44 0x1f56e1b 0x34d37e3 0x34d3668 0x93c65c 0xc56d 0x2b35 0x1)
libc++abi.dylib: terminate called throwing an exception

Source: (StackOverflow)

How to use onSensorChanged sensor data in combination with OpenGL

(Edit: I added the best-working approach to my augmented reality framework, and it now also takes the gyroscope into account, which makes it much more stable again: DroidAR framework)

I have written a TestSuite to find out how to calculate the rotation angles from the data you get in SensorEventListener.onSensorChanged(). I really hope you can complete my solution to help people who will have the same problems as me. Here is the code; I think you will understand it after reading it.

Feel free to change it; the main idea was to implement several methods that send the orientation angles to the OpenGL view or any other target that needs them.

Methods 1 to 4 are working; they send the rotationMatrix directly to the OpenGL view.

Method 6 works now too, but I have no explanation for why the rotation has to be done in y, x, z order.

All the other methods are not working or are buggy, and I hope someone knows how to get them working. I think the best method would be method 5, if it worked, because it would be the easiest to understand, but I'm not sure how efficient it is. The complete code isn't optimized, so I recommend not using it as-is in your project.

here it is:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.app.Activity;
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.os.Bundle;
import android.util.Log;
import android.view.WindowManager;

/**
 * This class provides a basic demonstration of how to use the
 * {@link android.hardware.SensorManager SensorManager} API to draw a 3D
 * compass.
 */
public class SensorToOpenGlTests extends Activity implements Renderer,
  SensorEventListener {

 private static final boolean TRY_TRANSPOSED_VERSION = false;

 /*
  * MODUS overview:
  * 
  * 1 - unbuffered data directly transferred from the rotation matrix to the
  * modelview matrix
  * 
  * 2 - buffered version of 1 where both acceleration and magnetometer are
  * buffered
  * 
  * 3 - buffered version of 1 where only magnetometer is buffered
  * 
  * 4 - buffered version of 1 where only acceleration is buffered
  * 
  * 5 - uses the orientation sensor and sets the angles for rotating the
  * camera with glRotatef()
  * 
  * 6 - uses the rotation matrix to calculate the angles
  * 
  * 7 to 12 - every possibility how the rotationMatrix could be constructed
  * in SensorManager.getRotationMatrix (see
  * http://www.songho.ca/opengl/gl_anglestoaxes.html#anglestoaxes for all
  * possibilities)
  */

 private static int MODUS = 2;

 private GLSurfaceView openglView;
 private FloatBuffer vertexBuffer;
 private ByteBuffer indexBuffer;
 private FloatBuffer colorBuffer;

 private SensorManager mSensorManager;
 private float[] rotationMatrix = new float[16];
 private float[] accelGData = new float[3];
 private float[] bufferedAccelGData = new float[3];
 private float[] magnetData = new float[3];
 private float[] bufferedMagnetData = new float[3];
 private float[] orientationData = new float[3];

 // private float[] mI = new float[16];

 private float[] resultingAngles = new float[3];

 private int mCount;

 final static float rad2deg = (float) (180.0f / Math.PI);

 private boolean landscape;

 public SensorToOpenGlTests() {
 }

 /** Called when the activity is first created. */
 @Override
 public void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);

  mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
  openglView = new GLSurfaceView(this);
  openglView.setRenderer(this);
  setContentView(openglView);
 }

 @Override
 protected void onResume() {
  // Ideally a game should implement onResume() and onPause()
  // to take appropriate action when the activity loses focus
  super.onResume();
  openglView.onResume();

  if (((WindowManager) getSystemService(WINDOW_SERVICE))
    .getDefaultDisplay().getOrientation() == 1) {
   landscape = true;
  } else {
   landscape = false;
  }

  mSensorManager.registerListener(this, mSensorManager
    .getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
    SensorManager.SENSOR_DELAY_GAME);
  mSensorManager.registerListener(this, mSensorManager
    .getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
    SensorManager.SENSOR_DELAY_GAME);
  mSensorManager.registerListener(this, mSensorManager
    .getDefaultSensor(Sensor.TYPE_ORIENTATION),
    SensorManager.SENSOR_DELAY_GAME);
 }

 @Override
 protected void onPause() {
  // Ideally a game should implement onResume() and onPause()
  // to take appropriate action when the activity loses focus
  super.onPause();
  openglView.onPause();
  mSensorManager.unregisterListener(this);
 }

 public int[] getConfigSpec() {
  // We want a depth buffer, don't care about the
  // details of the color buffer.
  int[] configSpec = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
  return configSpec;
 }

 public void onDrawFrame(GL10 gl) {

  // clear screen and color buffer:
  gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
  // set target matrix to modelview matrix:
  gl.glMatrixMode(GL10.GL_MODELVIEW);
  // init modelview matrix:
  gl.glLoadIdentity();
  // move camera away a little bit:

  if ((MODUS == 1) || (MODUS == 2) || (MODUS == 3) || (MODUS == 4)) {

   if (landscape) {
    // in landscape mode first remap the rotationMatrix before using
    // it with glMultMatrixf:
    float[] result = new float[16];
    SensorManager.remapCoordinateSystem(rotationMatrix,
      SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X,
      result);
    gl.glMultMatrixf(result, 0);
   } else {
    gl.glMultMatrixf(rotationMatrix, 0);
   }
  } else {
   //in all other modes do the rotation by hand
   //the order y x z is important!
   gl.glRotatef(resultingAngles[2], 0, 1, 0);
   gl.glRotatef(resultingAngles[1], 1, 0, 0);
   gl.glRotatef(resultingAngles[0], 0, 0, 1);
  }

  //move the axis to simulate augmented behaviour:
  gl.glTranslatef(0, 2, 0);

  // draw the 3 axes on the screen:
  gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
  gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);
  gl.glDrawElements(GL10.GL_LINES, 6, GL10.GL_UNSIGNED_BYTE, indexBuffer);
 }

 public void onSurfaceChanged(GL10 gl, int width, int height) {
  gl.glViewport(0, 0, width, height);
  float r = (float) width / height;
  gl.glMatrixMode(GL10.GL_PROJECTION);
  gl.glLoadIdentity();
  gl.glFrustumf(-r, r, -1, 1, 1, 10);
 }

 public void onSurfaceCreated(GL10 gl, EGLConfig config) {
  gl.glDisable(GL10.GL_DITHER);
  gl.glClearColor(1, 1, 1, 1);
  gl.glEnable(GL10.GL_CULL_FACE);
  gl.glShadeModel(GL10.GL_SMOOTH);
  gl.glEnable(GL10.GL_DEPTH_TEST);

  gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
  gl.glEnableClientState(GL10.GL_COLOR_ARRAY);

  // load the 3 axes and their colors:
  float vertices[] = { 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1 };
  float colors[] = { 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1 };
  byte indices[] = { 0, 1, 0, 2, 0, 3 };

  ByteBuffer vbb;
  vbb = ByteBuffer.allocateDirect(vertices.length * 4);
  vbb.order(ByteOrder.nativeOrder());
  vertexBuffer = vbb.asFloatBuffer();
  vertexBuffer.put(vertices);
  vertexBuffer.position(0);

  vbb = ByteBuffer.allocateDirect(colors.length * 4);
  vbb.order(ByteOrder.nativeOrder());
  colorBuffer = vbb.asFloatBuffer();
  colorBuffer.put(colors);
  colorBuffer.position(0);

  indexBuffer = ByteBuffer.allocateDirect(indices.length);
  indexBuffer.put(indices);
  indexBuffer.position(0);
 }

 public void onAccuracyChanged(Sensor sensor, int accuracy) {
 }

 public void onSensorChanged(SensorEvent event) {

  // load the new values:
  loadNewSensorData(event);

  if (MODUS == 1) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
  }

  if (MODUS == 2) {
   rootMeanSquareBuffer(bufferedAccelGData, accelGData);
   rootMeanSquareBuffer(bufferedMagnetData, magnetData);
   SensorManager.getRotationMatrix(rotationMatrix, null,
     bufferedAccelGData, bufferedMagnetData);
  }

  if (MODUS == 3) {
   rootMeanSquareBuffer(bufferedMagnetData, magnetData);
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     bufferedMagnetData);
  }

  if (MODUS == 4) {
   rootMeanSquareBuffer(bufferedAccelGData, accelGData);
   SensorManager.getRotationMatrix(rotationMatrix, null,
     bufferedAccelGData, magnetData);
  }

  if (MODUS == 5) {
   // this mode uses the sensor data received from the orientation
   // sensor
   resultingAngles = orientationData.clone();
   if ((-90 > resultingAngles[1]) || (resultingAngles[1] > 90)) {
    resultingAngles[1] = orientationData[0];
    resultingAngles[2] = orientationData[1];
    resultingAngles[0] = orientationData[2];
   }
  }

  if (MODUS == 6) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   final float[] anglesInRadians = new float[3];
   SensorManager.getOrientation(rotationMatrix, anglesInRadians);
   //TODO check for landscape mode
   resultingAngles[0] = anglesInRadians[0] * rad2deg;
   resultingAngles[1] = anglesInRadians[1] * rad2deg;
   resultingAngles[2] = anglesInRadians[2] * -rad2deg;
  }

  if (MODUS == 7) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);

   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in x y z
    * order Rx*Ry*Rz
    */

   resultingAngles[2] = (float) (Math.asin(rotationMatrix[2]));
   final float cosB = (float) Math.cos(resultingAngles[2]);
   resultingAngles[2] = resultingAngles[2] * rad2deg;
   resultingAngles[0] = -(float) (Math.acos(rotationMatrix[0] / cosB))
     * rad2deg;
   resultingAngles[1] = (float) (Math.acos(rotationMatrix[10] / cosB))
     * rad2deg;
  }

  if (MODUS == 8) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in z y x
    */

   resultingAngles[2] = (float) (Math.asin(-rotationMatrix[8]));
   final float cosB = (float) Math.cos(resultingAngles[2]);
   resultingAngles[2] = resultingAngles[2] * rad2deg;
   resultingAngles[1] = (float) (Math.acos(rotationMatrix[9] / cosB))
     * rad2deg;
   resultingAngles[0] = (float) (Math.asin(rotationMatrix[4] / cosB))
     * rad2deg;
  }

  if (MODUS == 9) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in z x y
    * 
    * note z axis looks good at this one
    */

   resultingAngles[1] = (float) (Math.asin(rotationMatrix[9]));
   final float minusCosA = -(float) Math.cos(resultingAngles[1]);
   resultingAngles[1] = resultingAngles[1] * rad2deg;
   resultingAngles[2] = (float) (Math.asin(rotationMatrix[8]
     / minusCosA))
     * rad2deg;
   resultingAngles[0] = (float) (Math.asin(rotationMatrix[1]
     / minusCosA))
     * rad2deg;
  }

  if (MODUS == 10) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in y x z
    */

   resultingAngles[1] = (float) (Math.asin(-rotationMatrix[6]));
   final float cosA = (float) Math.cos(resultingAngles[1]);
   resultingAngles[1] = resultingAngles[1] * rad2deg;
   resultingAngles[2] = (float) (Math.asin(rotationMatrix[2] / cosA))
     * rad2deg;
   resultingAngles[0] = (float) (Math.acos(rotationMatrix[5] / cosA))
     * rad2deg;
  }

  if (MODUS == 11) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in y z x
    */

   resultingAngles[0] = (float) (Math.asin(rotationMatrix[4]));
   final float cosC = (float) Math.cos(resultingAngles[0]);
   resultingAngles[0] = resultingAngles[0] * rad2deg;
   resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC))
     * rad2deg;
   resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC))
     * rad2deg;
  }

  if (MODUS == 12) {
   SensorManager.getRotationMatrix(rotationMatrix, null, accelGData,
     magnetData);
   rotationMatrix = transpose(rotationMatrix);
   /*
    * this assumes that the rotation matrices are multiplied in x z y
    */

   resultingAngles[0] = (float) (Math.asin(-rotationMatrix[1]));
   final float cosC = (float) Math.cos(resultingAngles[0]);
   resultingAngles[0] = resultingAngles[0] * rad2deg;
   resultingAngles[2] = (float) (Math.acos(rotationMatrix[0] / cosC))
     * rad2deg;
   resultingAngles[1] = (float) (Math.acos(rotationMatrix[5] / cosC))
     * rad2deg;
  }
  logOutput();
 }

 /**
  * Transposes the matrix (transposing equals inverting here, because it is a
  * pure rotation matrix) so that it can be used with OpenGL.
  * 
  * @param source the rotation matrix to transpose
  * @return the transposed copy
  */
 private float[] transpose(float[] source) {
  final float[] result = source.clone();
  if (TRY_TRANSPOSED_VERSION) {
   result[1] = source[4];
   result[2] = source[8];
   result[4] = source[1];
   result[6] = source[9];
   result[8] = source[2];
   result[9] = source[6];
  }
  // the other values in the matrix are not relevant for rotations
  return result;
 }

 private void rootMeanSquareBuffer(float[] target, float[] values) {

  final float amplification = 200.0f;
  float buffer = 20.0f;

  target[0] += amplification;
  target[1] += amplification;
  target[2] += amplification;
  values[0] += amplification;
  values[1] += amplification;
  values[2] += amplification;

  target[0] = (float) (Math
    .sqrt((target[0] * target[0] * buffer + values[0] * values[0])
      / (1 + buffer)));
  target[1] = (float) (Math
    .sqrt((target[1] * target[1] * buffer + values[1] * values[1])
      / (1 + buffer)));
  target[2] = (float) (Math
    .sqrt((target[2] * target[2] * buffer + values[2] * values[2])
      / (1 + buffer)));

  target[0] -= amplification;
  target[1] -= amplification;
  target[2] -= amplification;
  values[0] -= amplification;
  values[1] -= amplification;
  values[2] -= amplification;
 }

 private void loadNewSensorData(SensorEvent event) {
  final int type = event.sensor.getType();
  if (type == Sensor.TYPE_ACCELEROMETER) {
   accelGData = event.values.clone();
  }
  if (type == Sensor.TYPE_MAGNETIC_FIELD) {
   magnetData = event.values.clone();
  }
  if (type == Sensor.TYPE_ORIENTATION) {
   orientationData = event.values.clone();
  }
 }

 private void logOutput() {
  if (mCount++ > 30) {
   mCount = 0;
   Log.d("Compass", "yaw0: " + (int) (resultingAngles[0])
     + "  pitch1: " + (int) (resultingAngles[1]) + "  roll2: "
     + (int) (resultingAngles[2]));
  }
 }
}

Source: (StackOverflow)

Explicit vs Automatic attribute location binding for OpenGL shaders

When setting up attribute locations for an OpenGL shader program, you are faced with two options:

glBindAttribLocation() before linking to explicitly define an attribute location.

or

glGetAttribLocation() after linking to obtain an automatically assigned attribute location.

What is the utility of using one over the other?

And which one, if any, is preferred in practice?
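For illustration, a hedged sketch of both styles against the Android GLES20 bindings (the attribute names and index choices here are assumptions; a compiled shader pair is assumed to already be attached to `program`):

```java
import android.opengl.GLES20;

// Sketch: the two ways of establishing attribute locations for a program.
// Only one of the two approaches is needed in practice.
public final class AttribLocations {
    // Explicit: you choose the slot yourself, BEFORE glLinkProgram.
    public static void bindExplicit(int program) {
        GLES20.glBindAttribLocation(program, 0, "a_position");
        GLES20.glBindAttribLocation(program, 1, "a_texCoord");
        GLES20.glLinkProgram(program);
        // Every glVertexAttribPointer call can now hard-code 0 and 1, and
        // the same indices work across all shaders linked this way, which
        // lets one vertex setup serve many programs.
    }

    // Automatic: link first, then ask which slot the driver assigned.
    public static int[] queryAutomatic(int program) {
        GLES20.glLinkProgram(program);
        int pos = GLES20.glGetAttribLocation(program, "a_position");
        int tex = GLES20.glGetAttribLocation(program, "a_texCoord");
        // -1 means the attribute is absent or was optimized out.
        return new int[] { pos, tex };
    }
}
```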


Source: (StackOverflow)

How to deal with different aspect ratios in libGDX?

I have implemented some screens using libGDX that, naturally, use the Screen class provided by the libGDX framework. However, the implementation of these screens works only with predefined screen sizes. For example, if a sprite was meant for a 640 x 480 screen (4:3 aspect ratio), it won't work as intended on other screen sizes, because the sprites go past the screen boundaries and are not scaled to the screen size at all. Moreover, even if libGDX provided simple scaling, the issue I am facing would still be there, because that would change the aspect ratio of the game screen.

After researching on internet, I came across a blog/forum that had discussed the same issue. I have implemented it and so far it is working fine. But I want to confirm whether this is the best option to achieve this or whether there are better alternatives. Below is the code to show how I am dealing with this legitimate problem.

FORUM LINK: http://www.java-gaming.org/index.php?topic=25685.new

public class SplashScreen implements Screen {

    // Aspect Ratio maintenance
    private static final int VIRTUAL_WIDTH = 640;
    private static final int VIRTUAL_HEIGHT = 480;
    private static final float ASPECT_RATIO = (float) VIRTUAL_WIDTH / (float) VIRTUAL_HEIGHT;

    private Camera camera;
    private Rectangle viewport;
    // ------end------

    MainGame TempMainGame;

    public Texture splashScreen;
    public TextureRegion splashScreenRegion;
    public SpriteBatch splashScreenSprite;

    public SplashScreen(MainGame maingame) {
        TempMainGame = maingame;
    }

    @Override
    public void dispose() {
        splashScreenSprite.dispose();
        splashScreen.dispose();
    }

    @Override
    public void render(float arg0) {
        //----Aspect Ratio maintenance

        // update camera
        camera.update();
        camera.apply(Gdx.gl10);

        // set viewport
        Gdx.gl.glViewport((int) viewport.x, (int) viewport.y,
                (int) viewport.width, (int) viewport.height);

        // clear previous frame
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        // DRAW EVERYTHING
        //--maintenance end--

        splashScreenSprite.begin();
        splashScreenSprite.disableBlending();
        splashScreenSprite.draw(splashScreenRegion, 0, 0);
        splashScreenSprite.end();
    }

    @Override
    public void resize(int width, int height) {
        //--Aspect Ratio Maintenance--
        // calculate new viewport
        float aspectRatio = (float)width/(float)height;
        float scale = 1f;
        Vector2 crop = new Vector2(0f, 0f);

        if(aspectRatio > ASPECT_RATIO) {
            scale = (float) height / (float) VIRTUAL_HEIGHT;
            crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
        } else if(aspectRatio < ASPECT_RATIO) {
            scale = (float) width / (float) VIRTUAL_WIDTH;
            crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
        } else {
            scale = (float) width / (float) VIRTUAL_WIDTH;
        }

        float w = (float) VIRTUAL_WIDTH * scale;
        float h = (float) VIRTUAL_HEIGHT * scale;
        viewport = new Rectangle(crop.x, crop.y, w, h);
        //Maintenance ends here--
    }

    @Override
    public void show() {
        camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT); //Aspect Ratio Maintenance

        splashScreen = new Texture(Gdx.files.internal("images/splashScreen.png"));
        splashScreenRegion = new TextureRegion(splashScreen, 0, 0, 640, 480);
        splashScreenSprite = new SpriteBatch();

        if(Assets.load()) {
            this.dispose();
            TempMainGame.setScreen(TempMainGame.mainmenu);
        }
    }
}

UPDATE: I recently came to know that libGDX has some functionality of its own to maintain aspect ratios, which I would like to discuss here. While searching the aspect ratio issue across the internet, I came across several forums/developers who had the same problem of "How to maintain the aspect ratio on different screen sizes?" One of the solutions that really worked for me was posted above.

Later on, when I proceeded with implementing the touchDown() methods for the screen, I found that, due to scaling on resize, the coordinates on which I had implemented touchDown() would change by a great amount. After writing some code to translate the coordinates in accordance with the screen resize, I reduced this offset considerably, but I wasn't successful in maintaining them with pinpoint accuracy. For example, if I had implemented touchDown() on a texture, resizing the screen would shift the touch listener on the texture region some pixels to the right or left, depending on the resize, and this was obviously undesired.
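The coordinate shift described above can be undone exactly with a little arithmetic: map the raw touch point back through the letterboxed viewport into virtual coordinates. A sketch (the class and method names are made up for illustration; the viewport rectangle is the one computed in resize() above):

```java
// Sketch: invert the letterbox transform so touch handling stays accurate
// regardless of the window size. Screen y grows downwards; virtual
// (camera) y grows upwards, hence the flip in toVirtualY.
public class TouchMapper {
    public static float toVirtualX(float screenX, float viewportX,
                                   float viewportW, float virtualW) {
        return (screenX - viewportX) * virtualW / viewportW;
    }

    public static float toVirtualY(float screenY, float screenH,
                                   float viewportY, float viewportH,
                                   float virtualH) {
        return (screenH - screenY - viewportY) * virtualH / viewportH;
    }

    public static void main(String[] args) {
        // Example: an 800x480 screen letterboxing a 640x480 virtual
        // resolution gives scale = 1 and viewport = (80, 0, 640, 480).
        System.out.println(toVirtualX(400f, 80f, 640f, 640f));      // 320.0
        System.out.println(toVirtualY(240f, 480f, 0f, 480f, 480f)); // 240.0
    }
}
```

With this inversion in place, a touch on the center of the window maps to the center of the virtual screen no matter how the window was resized.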

Later on, I came to know that the Stage class has its own native functionality to maintain the aspect ratio (boolean stretch = false). Now that I have implemented my screen using the Stage class, the aspect ratio is maintained well. However, on resize or on different screen sizes, the black area that is generated always appears on the right side of the screen; that is, the screen is not centered, which makes it quite ugly if the black area is substantially large.

Can any community member help me out to resolve this problem?


Source: (StackOverflow)

OpenGL ES versus OpenGL

What are the differences between OpenGL ES and OpenGL?


Source: (StackOverflow)

OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?

I am working on a gaming framework of sorts, and am a newcomer to OpenGL. Most books seem to not give a terribly clear answer to this question, and I want to develop on my desktop using OpenGL, but execute the code in an OpenGL ES 2.0 environment. My question is twofold then:

  1. If I target my framework for OpenGL on the desktop, will it just run without modification in an OpenGL ES 2.0 environment?
  2. If not, then is there a good emulator out there, PC or Mac; is there a script that I can run that will convert my OpenGL code into OpenGL ES code, or flag things that won't work?

Source: (StackOverflow)

OpenGL ES iPhone - drawing anti aliased lines

Normally, you'd use something like:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
glEnable(GL_LINE_SMOOTH);

glLineWidth(2.0f);

glVertexPointer(2, GL_FLOAT, 0, points);
glEnableClientState(GL_VERTEX_ARRAY);

glDrawArrays(GL_LINE_STRIP, 0, num_points);

glDisableClientState(GL_VERTEX_ARRAY);

It looks good in the iPhone simulator, but on the iPhone itself the lines become extremely thin, without any anti-aliasing.

How do you get anti-aliasing on the iPhone?


Source: (StackOverflow)

Android OpenGL ES and 2D

Well, here's my request. I don't know OpenGL yet, and I'm not willing to learn it; I want to learn OpenGL ES directly, since I'm targeting my development at Android. I want to learn OpenGL ES in order to develop my 2D games. I chose it for performance reasons (since basic SurfaceView drawing isn't efficient enough when it comes to real-time games). My question is: where do I start? I've spent over a month browsing Google and reading/trying some tutorials and examples I've found, but to be honest it didn't help much, and that is for two reasons:

  1. Almost all the articles/tutorials I've come across are 3D-related (I only want to learn how to do my 2D sprite drawing)
  2. There's no base to start from, since each article targets a specific thing, like "How to draw a triangle (with vertices)", "How to create a Mesh", etc.

I've tried to read some source code too (e.g., Replica Island), but the code is too complicated and contains a lot of things that aren't necessary. The result: I get lost among 100 .java files with weird class names and stuff.

I guess there's no course like the one I'm looking for, but I'd be very glad if somebody could give me some guidelines and perhaps some links to learn what I'm after (only OpenGL ES 2D sprite rendering! Nothing 3D).


Source: (StackOverflow)

GLSurfaceView inside fragment not rendering when restarted

I have a GLSurfaceView set up and rendering as expected using a GLSurfaceView.Renderer. My app uses fragments from the Android support package. When I navigate to a new fragment, surfaceDestroyed is called, but when I come back to the fragment via the back stack, the GLSurfaceView will not render; calls to requestRender do not result in an onDrawFrame call.

I am aware that I need to call onResume and onPause on the surface view, and I am doing this from the hosting fragment, but it doesn't seem to solve the issue. All the examples about this method refer to the activity; could this be the issue? And if so, how do you use a GLSurfaceView inside a fragment?

Any insight greatly appreciated. I'm happy to post code, but it seems like more of a general question to me.

Thanks
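Since it does read as a general question, here is one hedged sketch of the usual arrangement: recreate the GLSurfaceView in onCreateView rather than caching it across back-stack navigation, and forward the lifecycle calls from the fragment. `SceneRenderer` is a stand-in for whatever Renderer the app already has:

```java
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

// Sketch: a support-library Fragment hosting a GLSurfaceView. The view is
// rebuilt each time the fragment's view hierarchy is created, so the EGL
// surface destroyed on navigation is cleanly replaced on return.
public class GlFragment extends Fragment {
    private GLSurfaceView glView;

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        glView = new GLSurfaceView(getActivity());
        glView.setEGLContextClientVersion(2);
        glView.setRenderer(new SceneRenderer()); // your existing renderer
        glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        return glView;
    }

    @Override
    public void onResume() {
        super.onResume();
        glView.onResume(); // restarts the render thread
    }

    @Override
    public void onPause() {
        glView.onPause(); // releases the EGL context/surface
        super.onPause();
    }
}
```

Note that GL state (textures, buffers) is lost with the context, so the renderer must reload resources in onSurfaceCreated, which will be called again on return.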


Source: (StackOverflow)

Tools for GLSL editing [closed]

I'm looking for some kind of tool to work with GLSL. I want to experiment with shaders in a WebGL application, so what I'm looking for is something like RenderMonkey. As far as I know, RenderMonkey is not supported anymore, so there must be some other tool that has taken its place.

The best would be a tool in which I could do both the "effect composing" of RenderMonkey and raw GLSL code development.


Source: (StackOverflow)

Want to display a 3D model on the iPhone: how to get started?

I want to display and rotate a single 3D model, preferably textured, on the iPhone. Doesn't have to zoom in and out, or have a background, or anything.

I have the following:

  • an iPhone
  • a MacBook
  • the iPhone SDK
  • Blender

My knowledge base:

  • I can make 3D models in various 3D programs (I'm most comfortable with 3D Studio Max, which I once took a course on, but I've used others)
  • General knowledge of procedural programming from years ago (QuickBasic - I'm old!)
  • Beginner's knowledge of object-oriented programming from going through simple Java and C# tutorials (Head Start C# book and my wife's intro to OOP course that used Java)
  • I have managed to display a 3D textured model and spin it using a tutorial in C# I got off the net (I didn't just copy and paste, I understand basically how it works) and the XNA game development library, using Visual Studio on Windows.

What I do not know:

  • Much about Objective C
  • Anything about OpenGL or OpenGL ES, which the iPhone apparently uses
  • Anything about XCode

My main problem is that I don't know where to start! All the iPhone books I found seem to be about creating GUI applications, not OpenGL apps. I found an OpenGL book, but I don't know how much of it, if any, applies to iPhone development. And I find the Objective-C syntax somewhat confusing, with the weird nested method naming, things like "id" that don't make sense, and the scary thought that I have to do manual memory management.

Where is the best place to start? I couldn't find any tutorials for this sort of thing, but maybe my Google-fu is weak. Or maybe I should start with learning Objective-C? I know of books like Aaron Hillegass's, but I've also read that they are outdated and much of the sample code doesn't work on the iPhone SDK, plus they seem geared towards the Model-View-Controller paradigm, which doesn't seem that well suited to 3D apps.

Basically I'm confused about what my first steps should be.


Source: (StackOverflow)