EzDevInfo.com

OpenGL interview questions

Top OpenGL frequently asked interview questions

How do you render primitives as wireframes in OpenGL?

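
For concreteness, the approach most commonly given uses glPolygonMode; a minimal sketch (desktop OpenGL only, since glPolygonMode is not part of OpenGL ES):

    #include <GL/gl.h>

    // Draw the scene with polygon edges only, then restore filled rendering.
    void drawSceneWireframe() {
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // rasterize edges only
        // ... issue the usual draw calls here ...
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // back to solid polygons
    }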


Source: (StackOverflow)

What does "immediate mode" mean in OpenGL?

What is "immediate mode"? Give a code example.

When do I have to use immediate mode instead of retained mode? What are the pros and cons of using each method?
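
Since a code example is requested, here is a minimal immediate-mode sketch using the legacy glBegin/glEnd calls (deprecated in OpenGL 3.0 and removed from core profiles). The retained-mode counterpart would upload the same vertices once into a vertex buffer object and draw them each frame with glDrawArrays.

    #include <GL/gl.h>

    // Immediate mode: every vertex is handed to the driver, call by call,
    // every single frame.
    void drawTriangleImmediate() {
        glBegin(GL_TRIANGLES);
        glColor3f(1.0f, 0.0f, 0.0f); glVertex3f(-0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex3f( 0.5f, -0.5f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();
    }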


Source: (StackOverflow)

How to make an OpenGL rendering context with transparent background?

Rendering contexts usually have a solid color on the background (black or whatever, see the image below):

[image: an OpenGL rendering context window with a solid black background]

I'm wondering if it's possible to set up a window with no decorations AND a transparent background, while still allowing me to render OpenGL content onto it.

This would give the illusion that the triangle is floating on the screen. The transparent background should allow you to see the desktop or any other applications that might be behind it.

Could you please illustrate this with source code?

Platform: Windows (win32 only)
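
A complete program is beyond a short sketch, but on Windows Vista and later one commonly cited ingredient is extending the DWM glass frame over the entire client area and choosing a pixel format that carries alpha bits, then clearing with alpha = 0 so the desktop shows through. A partial sketch of those two pieces (assumes DWM composition is enabled):

    #include <windows.h>
    #include <uxtheme.h>   // MARGINS
    #include <dwmapi.h>    // DwmExtendFrameIntoClientArea; link with dwmapi.lib

    void enableTransparentFrame(HWND hwnd) {
        MARGINS margins = { -1 };                    // -1 = "sheet of glass" over the whole window
        DwmExtendFrameIntoClientArea(hwnd, &margins);
    }

    // The pixel format must also carry an alpha channel, e.g.
    //   pfd.cColorBits = 32;  pfd.cAlphaBits = 8;
    // and each frame should clear with glClearColor(0.0f, 0.0f, 0.0f, 0.0f).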


Source: (StackOverflow)

Vertex shader vs Fragment Shader

I've read some tutorials regarding Cg, yet one thing is not quite clear to me. What exactly is the difference between vertex and fragment shaders? And for what situations is one better suited than the other?
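
The question mentions Cg, but the split is the same in GLSL; a minimal GLSL 3.30 pair (shown here as C++ string literals ready for glShaderSource) makes the division of labor concrete: the vertex shader runs once per vertex and decides where geometry ends up, while the fragment shader runs once per covered pixel and decides what color it gets.

    // Vertex shader: transforms each incoming vertex into clip space.
    const char* vertexSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 position;   // per-vertex attribute
    uniform mat4 mvp;                        // model-view-projection matrix
    void main() {
        gl_Position = mvp * vec4(position, 1.0);
    }
    )";

    // Fragment shader: computes the color of each rasterized fragment.
    const char* fragmentSrc = R"(
    #version 330 core
    out vec4 fragColor;
    void main() {
        fragColor = vec4(1.0, 0.5, 0.2, 1.0);   // a flat orange, per fragment
    }
    )";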


Source: (StackOverflow)

How are 3D games so efficient?

There is something I have never understood. How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps, while a DX demo of a rotating teapot @ 60fps uses a whopping 30%?


Source: (StackOverflow)

What is state-of-the-art for text rendering in OpenGL as of version 4.1?

There are already a number of questions about text rendering in OpenGL.

But mostly what is discussed is rendering textured quads using the fixed-function pipeline. Surely shaders must offer a better way.

I'm not really concerned about internationalization; most of my strings will be plot tick labels (date and time or purely numeric). But the plots will be re-rendered at the screen refresh rate, and there could be quite a bit of text (not more than a few thousand glyphs on-screen, but enough that hardware-accelerated layout would be nice).

What is the recommended approach for text-rendering using modern OpenGL? (Citing existing software using the approach is good evidence that it works well)

  • Geometry shaders that accept e.g. position and orientation and a character sequence and emit textured quads
  • Geometry shaders that render vector fonts
  • As above, but using tessellation shaders instead
  • A compute shader to do font rasterization
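
To make the first bullet above concrete, here is a rough sketch (the names vAtlasRect and glyphSize are illustrative, not taken from any cited implementation) of a geometry shader that expands one point per glyph into a textured quad; a companion fragment shader would sample the glyph atlas at texCoord.

    const char* glyphGeometrySrc = R"(
    #version 330 core
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    in vec4 vAtlasRect[];     // (u0, v0, u1, v1) of this glyph in the atlas
    uniform vec2 glyphSize;   // quad extent in clip-space units (illustrative)
    out vec2 texCoord;

    void main() {
        vec4 p = gl_in[0].gl_Position;
        vec4 r = vAtlasRect[0];
        // Emit the four corners of the glyph quad as a triangle strip.
        gl_Position = p;                                     texCoord = r.xy; EmitVertex();
        gl_Position = p + vec4(glyphSize.x, 0.0, 0.0, 0.0);  texCoord = r.zy; EmitVertex();
        gl_Position = p + vec4(0.0, glyphSize.y, 0.0, 0.0);  texCoord = r.xw; EmitVertex();
        gl_Position = p + vec4(glyphSize, 0.0, 0.0);         texCoord = r.zw; EmitVertex();
        EndPrimitive();
    }
    )";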

Source: (StackOverflow)

Once upon a time, when > was faster than < ... Wait, what?

I am reading an awesome OpenGL tutorial. It's really great, trust me. The topic I am currently on is the Z-buffer. Aside from explaining what it's all about, the author mentions that we can perform custom depth tests, such as GL_LESS, GL_ALWAYS, etc. He also explains that the actual meaning of the depth values (which end is closest and which isn't) can also be customized. I understand so far. And then the author says something unbelievable:

The range zNear can be greater than the range zFar; if it is, then the window-space values will be reversed, in terms of what constitutes closest or farthest from the viewer.

Earlier, it was said that the window-space Z value of 0 is closest and 1 is farthest. However, if our clip-space Z values were negated, the depth of 1 would be closest to the view and the depth of 0 would be farthest. Yet, if we flip the direction of the depth test (GL_LESS to GL_GREATER, etc), we get the exact same result. So it's really just a convention. Indeed, flipping the sign of Z and the depth test was once a vital performance optimization for many games.

If I understand correctly, performance-wise, flipping the sign of Z and the depth test is nothing but changing a < comparison to a > comparison. So, if I understand correctly and the author isn't lying or making things up, then changing < to > used to be a vital optimization for many games.
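
In API terms, the flip the passage describes is just a pair of state changes (assuming the projection's depth range is reversed as well, e.g. via swapped zNear/zFar or glDepthRange(1.0, 0.0)):

    #include <GL/gl.h>

    // Conventional setup: 0 is near, 1 is far, closer fragments win with <.
    void useConventionalDepth() {
        glClearDepth(1.0);        // clear to "farthest"
        glDepthFunc(GL_LESS);
    }

    // Flipped setup: 1 is near, 0 is far, closer fragments win with >.
    void useFlippedDepth() {
        glClearDepth(0.0);        // "farthest" is now 0
        glDepthFunc(GL_GREATER);
    }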

Is the author making things up, am I misunderstanding something, or is it indeed the case that once < was slower (vitally, as the author says) than >?

Thanks for clarifying this quite curious matter!

Disclaimer: I am fully aware that algorithm complexity is the primary source for optimizations. Furthermore, I suspect that nowadays it definitely wouldn't make any difference and I am not asking this to optimize anything. I am just extremely, painfully, maybe prohibitively curious.


Source: (StackOverflow)

Using OpenGL with C#? [closed]

Are there free OpenGL support libraries for C#? If so, which one should I use, and where do I find sample projects?

EDIT #1

Does C# provide classes for OpenGL?


Source: (StackOverflow)

What are the concepts of, and the differences between, 'FrameBuffer' and 'RenderBuffer' in OpenGL?

I'm confused about the concepts of FrameBuffer and RenderBuffer. I know that they're required for rendering, but I want to understand them before using them.

I know that some bitmap buffer is required to store the temporary drawing result: the back buffer. And another buffer needs to be visible on screen while that drawing is in progress: the front buffer. Then you flip them and draw again. I know this concept, but it's hard to connect those objects to it.

What are they conceptually, and how do they differ?
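
A short sketch may help separate the two objects: a framebuffer object is only a container of attachment points, while textures and renderbuffers are the actual pixel storage attached to it (OpenGL 3.0+, with an extension loader such as glad or GLEW assumed).

    #include <glad/glad.h>   // or GLEW; any loader exposing GL 3.x entry points

    GLuint createOffscreenTarget(int width, int height) {
        GLuint fbo, colorTex, depthRbo;

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);

        // Color attachment as a texture: can later be sampled in a shader.
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);

        // Depth attachment as a renderbuffer: plain storage, never sampled.
        glGenRenderbuffers(1, &depthRbo);
        glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, depthRbo);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            // handle an incomplete framebuffer here
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer
        return fbo;
    }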


Source: (StackOverflow)

What is the correct file extension for GLSL shaders?

I'm learning GLSL shading and I've come across different file extensions. I've seen people give their vertex and fragment shaders .vert and .frag extensions. But I've also seen .vsh and .fsh extensions, and even both shaders together in a single .glsl file. So I'm wondering whether there is a standard file extension, or which way is the 'correct' one?


Source: (StackOverflow)

GLSL/C++: Arrays of Uniforms?

I would like to move away from OpenGL's built-in lights and implement my own. I would like my shaders to allow for a variable number of lights.

Can we declare an array of uniforms in GLSL shaders? If so, how would we set the values of those uniforms?
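
Yes, GLSL allows arrays of uniforms, though the array size must be fixed at compile time, so a variable light count is usually expressed as a maximum plus a count uniform. A minimal sketch (the names lightPositions and lightCount are illustrative):

    // Fragment shader source as a C++ string literal.
    const char* fragmentSrc = R"(
    #version 330 core
    #define MAX_LIGHTS 8
    uniform vec3 lightPositions[MAX_LIGHTS];
    uniform int  lightCount;          // how many entries are actually in use
    out vec4 fragColor;
    void main() {
        vec3 accum = vec3(0.0);
        for (int i = 0; i < lightCount; ++i)
            accum += normalize(lightPositions[i]) * 0.1;   // placeholder lighting
        fragColor = vec4(accum, 1.0);
    }
    )";

    // Host side, after the program is linked and bound:
    //   GLfloat positions[3 * 8] = { /* x, y, z per light */ };
    //   glUniform3fv(glGetUniformLocation(program, "lightPositions"),
    //                activeLights, positions);
    //   glUniform1i(glGetUniformLocation(program, "lightCount"), activeLights);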


Source: (StackOverflow)

How to debug a GLSL shader?

I need to debug a GLSL program, but I don't know how to output intermediate results. Is it possible to produce some debug traces (as with printf) in GLSL?
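
There is no printf in stock GLSL, so one widely used fallback is to route the value under suspicion into the output color and read it off the screen; a minimal sketch:

    const char* debugFragmentSrc = R"(
    #version 330 core
    in vec3 normal;          // the intermediate value being inspected
    out vec4 fragColor;
    void main() {
        fragColor = vec4(normal * 0.5 + 0.5, 1.0);   // map [-1, 1] into a visible color
    }
    )";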


Source: (StackOverflow)

What is so bad about GL_QUADS?

I hear that GL_QUADS is going to be removed in OpenGL versions > 3.0; why is that? Will my old programs not work in the future then? I have benchmarked, and GL_TRIANGLES and GL_QUADS show no difference in render speed (GL_QUADS might even be faster). So what's the point?
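
For reference, what dropping GL_QUADS means in practice is submitting the same four vertices indexed as two triangles; a minimal sketch:

    #include <GL/gl.h>

    // One quad (vertices 0..3) expressed as two triangles sharing a diagonal.
    const GLuint quadAsTriangles[6] = {
        0, 1, 2,
        0, 2, 3
    };
    // ...drawn with glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, quadAsTriangles);
    // (in a core profile the indices would live in an element buffer object).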


Source: (StackOverflow)

The purpose of Model View Projection Matrix

For what purposes do we use the Model View Projection matrix? Why do shaders require the Model View Projection matrix?
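
In a typical vertex shader the three matrices carry a vertex from object space to world space (model), then to camera space (view), then to clip space (projection). A minimal sketch, with the composition done per vertex for clarity (in practice the product is often precomputed on the CPU and uploaded as a single mvp uniform):

    const char* vertexSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 model, view, projection;
    void main() {
        // object space -> world space -> camera space -> clip space
        gl_Position = projection * view * model * vec4(position, 1.0);
    }
    )";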


Source: (StackOverflow)

Why are quaternions used for rotations?

I'm a physicist, and have been learning some programming, and have come across a lot of people using quaternions for rotations instead of writing things in matrix/vector form.

In physics, there are very good reasons we don't use quaternions (despite the bizarre story that's occasionally told about Hamilton/Gibbs/etc). Physics requires that our descriptions have good analytic behavior (this has a precisely defined meaning, but in some rather technical ways that go far beyond what's taught in normal intro classes, so I won't go into any detail). It turns out that quaternions don't have this nice behavior and so they aren't useful, while vectors/matrices do, so we use them.

However, restricted to rigid rotations and descriptions that do not use any analytic structures, 3D rotations can be equivalently described either way (or a few other ways).

Generally, we just want a mapping of a point X=(x,y,z) to a new point X'=(x',y',z') subject to the constraint that X^2 = X'^2. And there are lots of things that do this.

The naive way is to just draw the triangles this defines and use trig; or to use the isomorphism between a point (x,y,z) and a vector (x,y,z), the function f(X) = X', and a matrix MX = X'; or to use quaternions; or to project out components of the old vector along the new one using some other method, (x, y, z)^T.(a,b,c) (x',y',z'); etc., etc.

From a math point of view, these descriptions are all equivalent in this setting (as a theorem). They all have the same number of degrees of freedom, the same number of constraints, etc.

So why do quaternions seem to be preferred over vectors?

The usual reasons I see are no gimbal lock, or numerical issues.

The no-gimbal-lock argument seems odd, since gimbal lock is only a problem with Euler angles. It is also only a coordinate problem (just like the singularity at r=0 in polar coordinates, where the Jacobian loses rank), which means it is only a local problem and can be resolved by switching coordinates, rotating out of the degeneracy, or using two overlapping coordinate systems.

I'm less sure about the numerical issues, since I don't know in detail how both of these (and any alternatives) would be implemented. I've read that re-normalizing a quaternion is easier than doing the same for a rotation matrix, but that is only true for a general matrix; a rotation has additional constraints that trivialize this (and these are built into the definition of quaternions). (In fact, this has to be true since they have the same number of degrees of freedom.)
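
On the renormalization point: the usual argument is that drift in a unit quaternion is repaired by a single 4-component normalize, whereas drift in a 3x3 rotation matrix calls for re-orthonormalizing its columns (e.g. Gram-Schmidt). A small sketch of the quaternion side, using an illustrative Quat type:

    #include <cmath>

    struct Quat { float w, x, y, z; };

    // Restore the unit-length constraint after accumulated floating-point drift.
    Quat normalize(Quat q) {
        float n = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
        return { q.w / n, q.x / n, q.y / n, q.z / n };
    }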

So what is the reason for the use of quaternions over vectors or other alternatives?


Source: (StackOverflow)