EzDevInfo.com

optimus

Id obfuscation based on Knuth's multiplicative hashing method for PHP.

NVIDIA Optimus card not switching under OpenGL

When I used "glGetString(GL_VERSION)" and "glGetString(GL_SHADING_LANGUAGE_VERSION)" to check the OpenGL version on my computer, I got the following information:

3.1.0 - Build 8.15.10.2538 for GL_VERSION

1.40 - Intel Build 8.15.10.2538 for GL_SHADING_LANGUAGE_VERSION

When I ran "Geeks3D GPU Caps Viewer", it shown the OpenGL version of my graphics cards(NVS 4200M) are

GL_VERSION: 4.3.0

GLSL version: 4.30 NVIDIA via Cg compiler

Does that mean my graphics card only supports some OpenGL 4.3.0 functions, and that I cannot create a 4.3 context?
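
One way to check is a small probe that explicitly requests a 4.3 core profile context and prints which driver answered. Below is a minimal sketch, assuming GLFW 3 is available (the question itself does not use GLFW); if the Intel driver still owns the context, window creation simply fails, and if the NVIDIA driver answers, GL_VERSION should report 4.3.

// Sketch: request a 4.3 core context and report which driver provided it.
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "GL 4.3 probe", NULL, NULL);
    if (!window) {
        std::printf("No 4.3 core context available from the active GPU\n");
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);
    std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));
    std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    glfwTerminate();
    return 0;
}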


Source: (StackOverflow)

How to start debug version of project in nsight with optirun command?

I've been writing some simple CUDA programs (I'm a student, so I need to practice). I can compile them with nvcc from the terminal (using Kubuntu 12.04 LTS) and then execute them with optirun ./a.out (the hardware is a GeForce GT 525M in a Dell Inspiron), and everything works fine. The major problem is that I can't do anything from Nsight. When I try to start the debug version of the code, the message is "Launch failed! Binaries not found!". I think it's about running the command with optirun, but I'm not sure. Any similar experiences? Thanks in advance for the help. :)


Source: (StackOverflow)


Forcing NVIDIA GPU programmatically in Optimus laptops

I'm programming a DirectX game, and when I run it on an Optimus laptop the Intel GPU is used, resulting in horrible performance. If I force the NVIDIA GPU using the context menu or by renaming my executable to bf3.exe or some other famous game executable name, performance is as expected.
Obviously neither is an acceptable solution for when I have to redistribute my game, so is there a way to programmatically force the laptop to use the NVIDIA GPU?

I've already tried using DirectX to enumerate adapters (IDirect3D9::GetAdapterCount, IDirect3D9::GetAdapterIdentifier) and it doesn't work: only 1 GPU is being reported (the one in use).
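
For reference, NVIDIA's "Optimus Rendering Policies" document describes an exported global that the driver looks for in the executable to decide which GPU to use. A minimal sketch follows (MSVC syntax; it is a no-op on systems without an Optimus driver, and whether the driver honors it depends on the installed driver version):

// Sketch of the export described in NVIDIA's "Optimus Rendering Policies" document.
// It must be exported from the .exe module itself (not from a DLL) and is simply
// ignored on machines that do not have an Optimus driver installed.
extern "C" {
    __declspec(dllexport) unsigned long NvOptimusEnablement = 0x00000001;
}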


Source: (StackOverflow)

Forcing hardware accelerated rendering

I have an OpenGL library written in C++ that is used from a C# application through C++/CLI adapters. My problem is that if the application is used on laptops with NVIDIA Optimus technology, the application does not use hardware acceleration and fails.

I have tried to use the information found in NVIDIA's document http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf about linking libraries into my C++ DLL and exporting NvOptimusEnablement from my OpenGL library, but that fails. I guess I have to do something with the .exe, not with the .dlls linked to the .exe.

Using driver profiles is not a good option for us, since we need to ensure that the NVIDIA hardware is used.

Is there some way a C# application can force Optimus to use the NVIDIA chipset instead of the integrated Intel chipset?


Source: (StackOverflow)

OpenGL 3.3: different results on two GPUs (NVIDIA Optimus) with shadow mapping

So I'm working on a project (both to learn and to create a game in the future) in C++, and for rendering I've chosen OpenGL 3.3. I've been working on the Intel HD 4000 built into my processor, since new apps open on it by default, and everything went smoothly. But then I tried to run it on my 2nd GPU, an NVIDIA GTX 660M, which is much, much faster, and I expected much higher FPS on it. But no: I ran into dozens upon dozens of bugs (for example, Intel was fine if I declared out vec3 as the color output in the fragment shader, but NVIDIA went completely crazy unless I used out vec4...). Of course there are no errors at compile time, so it's extremely hard to fix...

But now, having fixed most of them, I'm struggling with one problem that I cannot fix the way I would like (there may be some dirty workarounds, but... that's not the point).

In short: I generate valid depth maps on both GPUs, but on the NVIDIA GPU the map is not grey-scale but red-to-black, which is extremely odd, since the same code on (almost) the same machine, through the same API, should behave the same. Because of that, my fragment shader probably doesn't cope with it: on NVIDIA it does not detect lit areas and everything is completely dark (at least for the spotlight; the directional light doesn't work).

Pics: one image taken on the Intel HD 4000 (integrated in my i5 Ivy Bridge CPU), and one taken on the NVIDIA GTX 660M when running the app from the right-click menu; the latter shows no soft shadows on the "buildings" (big blocks) and no flashlight effect (spot light).

IMPORTANT: note that the depth maps on the GTX 660M use a red-to-black scale, not the grey scale seen on the Intel GPU. The top one is from the directional light and the bottom one is from the point light, of course.

My fragment shader:

#version 330 core

in vec2 UV;                         //Coords for standard texture (model)
in vec4 ShadowCoord;                //Coords for directional light (pos)
in vec4 POVShadowCoord;             //Coords for spot light (flashlight) (pos)

out vec4 color;                     //Output color

uniform sampler2D myTextureSampler; //Standard texture with data for models
uniform sampler2D shadowMap;        //Shadowmap for directional light
uniform sampler2D POVshadowMap;     //Shadowmap for spot light (flashlight)

void main(){
    vec3 tex_data = texture2D( myTextureSampler, UV ).rgb;

    float bias = 0.005;
    float visibility = 0.7;
    float decrease = 0.002;

    int early_bailing = 0;
    if ( texture2D( shadowMap, ShadowCoord.xy + vec2(0,0)/1850.0 ).z  <  ShadowCoord.z-bias ) {
        visibility -= decrease; early_bailing++;
    }
    if ( texture2D( shadowMap, ShadowCoord.xy + vec2(-2,-2)/1850.0 ).z  <  ShadowCoord.z-bias ) {
        visibility -= decrease; early_bailing++;
    }
    if ( texture2D( shadowMap, ShadowCoord.xy + vec2(-2, 2)/1850.0 ).z  <  ShadowCoord.z-bias ) {
        visibility -= decrease; early_bailing++;
    }
    if ( texture2D( shadowMap, ShadowCoord.xy + vec2( 2,-2)/1850.0 ).z  <  ShadowCoord.z-bias ) {
        visibility -= decrease; early_bailing++;
    }
    if ( texture2D( shadowMap, ShadowCoord.xy + vec2( 2, 2)/1850.0 ).z  <  ShadowCoord.z-bias ) {
        visibility -= decrease; early_bailing++;
    }
    if(early_bailing < 5) {
        if(early_bailing > 0) {
            for (int i=-2; i < 2; i++) {
                for(int j = -2; j < 2; j++) {
                    if(i ==  0 && j ==  0) continue;
                    if(i == -2 && j == -2) continue;
                    if ( texture2D( shadowMap, ShadowCoord.xy + vec2(i,j)/850.0 ).z  <  ShadowCoord.z-bias )
                        visibility -= decrease;
                }
            }
        }
    } else {
        visibility -= 14 * decrease;
    }

    float x = POVShadowCoord.x/POVShadowCoord.w;
    float y = POVShadowCoord.y/POVShadowCoord.w;
    bias = 0.0004;
    if(x < 0 || x > 1 || y < 0 || y > 1) {
        visibility -= 0.6;
    } else {
        float min_visibility = visibility - 0.6;
        if ( textureProj( POVshadowMap, POVShadowCoord.xyw).z < (POVShadowCoord.z - bias)/POVShadowCoord.w) {
            visibility = min_visibility;
        } else {
            //Flashlight effect
            float dx = 0.5 - x;
            float dy = 0.5 - y;
            visibility -= sqrt(dx*dx + dy*dy);
            if(visibility < min_visibility)
                visibility = min_visibility;
        }
    }

    color = vec4(visibility * tex_data, 1);
}

The first part is for the directional light: I pre-sample the depth map at 5 points; if they all agree I don't sample any more (early bailing, a big performance optimization), and if some differ I sample them all and calculate the shadow intensity for the current fragment.

The second part simply samples the point-light depth map and then checks the distance from the center of the beam to simulate a flashlight effect (stronger in the center).

I don't think anything more is needed, but if it is, please say so and I'll post the required code.

ALSO: with shadow maps at only 16-bit precision (GL_DEPTH_COMPONENT16), the Intel HD 4000 is faster than my GTX 660M (which is far more powerful), which is very weird. I think it's because I don't draw any high-poly models, just many very-low-poly ones. Am I correct?


Source: (StackOverflow)

TTS text received & processed but NOT HEARD on LG Optimus S

On one hand, this problem is tough because I have the exact same code working perfectly on 3 different Android 2.2 phones, but not working on an LG Optimus S (also running Android 2.2).

On the other hand, this problem is reproducible, so there may be some hope on the way to solving the mystery.

The problem manifests itself such that the first two text segments passed to the TTS engine (Pico) are processed (and heard through the speaker!) correctly on all phones, including the problematic one (LG Optimus S).

But the third and fourth segments, passed to the TTS engine after the speech RecognitionController's RECOGNIZED step, result in totally benign logs on all phones, except that on the problematic phone nothing is heard through the speaker! This is despite all onUtteranceCompleted() callbacks being received, even on the problematic phone.

I know the code is correct because it works perfectly on all other phones, so I am stumped as to what could be causing this.

Could this be inadequate CPU resources? inadequate memory resources?

If so, why does it work for the first 2 text segments, but not for the next 2 text segments?

In case it helps spot something "weird" in the system behavior, here is a sample logcat of the missing TTS speech on the problematic phone:

INFO/RecognitionController(1773): State change: RECOGNIZING -> RECOGNIZED
INFO/RecognitionController(1773): Final state: RECOGNIZED
INFO/ServerConnectorImpl(1773): ClientReport{session_id=040af29064d281350f1325c6a361f003,request_id=1,application_id=voice-search,client_perceived_request_status=0,request_ack_latency_ms=93,total_latency_ms=2179,user_perceived_latency_ms=213,network_type=1,endpoint_trigger_type=3,}
INFO/AudioService(121):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@45a4f450
DEBUG/AppRecognizer(2167): Proceed.
INFO/TTS received:(2167): Speaking text segment number three but NOTHING is coming out of the speaker!!! 
VERBOSE/TtsService(572): TTS service received Speaking text segment number three but NOTHING is coming out of the speaker!!! 
VERBOSE/TtsService(572): TTS processing: Speaking text segment number three but NOTHING is coming out of the speaker!!! 
VERBOSE/TtsService(572): TtsService.setLanguage(eng, USA, )
INFO/SVOX Pico Engine(572): Language already loaded (en-US == en-US)
INFO/SynthProxy(572): setting speech rate to 100
INFO/SynthProxy(572): setting pitch to 100
INFO/ClientReportSender(1773): Sending 1 client reports over HTTP
INFO/TTS received:(2167): Speaking text segment number four but NOTHING is coming out of the speaker!!!
VERBOSE/TtsService(572): TTS service received Speaking text segment number four but NOTHING is coming out of the speaker!!!
WARN/AudioTrack(572): obtainBuffer timed out (is the CPU pegged?) 0x5b3988 user=00062b40, server=00061b40
VERBOSE/TtsService(572): TTS callback: dispatch started
VERBOSE/TtsService(572): TTS callback: dispatch completed to 1
VERBOSE/TtsService(572): TTS processing: Speaking text segment number four but NOTHING is coming out of the speaker!!!
VERBOSE/onUtteranceCompleted(2167): segment #3
VERBOSE/TtsService(572): TtsService.setLanguage(eng, USA, )
INFO/SVOX Pico Engine(572): Language already loaded (en-US == en-US)
INFO/SynthProxy(572): setting speech rate to 100
INFO/SynthProxy(572): setting pitch to 100
WARN/AudioTrack(572): obtainBuffer timed out (is the CPU pegged?) 0x5b3988 user=0007dc00, server=0007cc00
VERBOSE/TtsService(572): TTS callback: dispatch started
VERBOSE/TtsService(572): TTS callback: dispatch completed to 1
VERBOSE/onUtteranceCompleted(2167): segment #4

The corresponding log on a phone that works perfectly looks like this:

INFO/RecognitionController(1773): State change: RECOGNIZING -> RECOGNIZED
INFO/RecognitionController(1773): Final state: RECOGNIZED
INFO/ServerConnectorImpl(1773): ClientReport{session_id=040af29064d281350f1325c6a361f003,request_id=1,application_id=voice-search,client_perceived_request_status=0,request_ack_latency_ms=96,total_latency_ms=2449,user_perceived_latency_ms=140,network_type=1,endpoint_trigger_type=3,}
INFO/AudioService(121):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46039d08
DEBUG/AppRecognizer(2167): Proceed.
INFO/TTS received:(2167): Speaking text segment number three (and I can hear it :) 
VERBOSE/TtsService(572): TTS service received Speaking text segment number three (and I can hear it :) 
VERBOSE/TtsService(572): TTS processing: Speaking text segment number three (and I can hear it :) 
INFO/ClientReportSender(1773): Sending 1 client reports over HTTP
VERBOSE/TtsService(572): TtsService.setLanguage(eng, USA, )
INFO/SVOX Pico Engine(572): TtsEngine::setLanguage found matching language(eng) but not matching country().
INFO/SVOX Pico Engine(572): Language already loaded (en-US == en-US)
INFO/SynthProxy(572): setting speech rate to 100
INFO/SynthProxy(572): setting pitch to 100
INFO/TTS received:(2167): Speaking text segment number four (and I can hear it :)
VERBOSE/TtsService(572): TTS service received Speaking text segment number four (and I can hear it :)
INFO/AudioHardwareQSD(121): AudioHardware pcm playback is going to standby.
DEBUG/dalvikvm(3262): GC_EXPLICIT freed 6946 objects / 326312 bytes in 76ms
WARN/AudioTrack(572): obtainBuffer timed out (is the CPU pegged?) 0x3ce730 user=00032e80, server=00031e80
WARN/AudioFlinger(121): write blocked for 170 msecs, 161 delayed writes, thread 0xdc08
VERBOSE/TtsService(572): TTS callback: dispatch started
VERBOSE/onUtteranceCompleted(2167): segment #3
VERBOSE/TtsService(572): TTS callback: dispatch completed to 1
VERBOSE/TtsService(572): TTS processing: Speaking text segment number four (and I can hear it :)
VERBOSE/TtsService(572): TtsService.setLanguage(eng, USA, )
INFO/SVOX Pico Engine(572): TtsEngine::setLanguage found matching language(eng) but not matching country().
INFO/SVOX Pico Engine(572): Language already loaded (en-US == en-US)
INFO/SynthProxy(572): setting speech rate to 100
INFO/SynthProxy(572): setting pitch to 100
WARN/KeyCharacterMap(2167): No keyboard for id 131074
WARN/KeyCharacterMap(2167): Using default keymap: /system/usr/keychars/qwerty.kcm.bin
DEBUG/dalvikvm(7137): GC_EXPLICIT freed 1585 objects / 93216 bytes in 67ms
DEBUG/dalvikvm(6697): GC_EXPLICIT freed 3108 objects / 178688 bytes in 59ms
VERBOSE/TtsService(572): TTS callback: dispatch started
VERBOSE/onUtteranceCompleted(2167): segment #4

UPDATE I: The problem (only on the LG Optimus S LS670) occurs only after the speech recognizer kicks in for the first time. I can send any number of text segments, some of them very long, and the TTS engine speaks them out loud perfectly. But the moment the phone goes into listening (not at the same time as speaking, of course), TTS stops being audible. It is as if the speaker were muted automatically as soon as the speech recognizer kicks in, but never unmuted once speech recognition is done.

I actually went ahead and tried inserting an audioManager.setMicrophoneMute(false); in RecognitionListener.onEndOfSpeech() but that didn't help.

UPDATE II: I even tried adding the following to RecognitionListener.onEndOfSpeech(), thinking that perhaps restarting the TTS engine could reset a bug somewhere; this didn't help either:

Intent checkIntent = new Intent();
checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(checkIntent, TTS_STATCHECK);    

Ideas? Suggestions?


Source: (StackOverflow)

GPU benchmark for nvidia Optimus cards

I need a GPGPU benchmark that loads the GPU so that I can measure parameters like temperature rise, battery drain, etc. Basically, I want to alert the user when the GPU is using a lot more power than in normal use, so I need to decide on threshold values for GPU temperature, clock frequency and battery drain rate above which the GPU is considered to be using more power than normal. I have tried several graphics benchmarks, but most of them don't use the GPU's resources to the fullest. Please point me to such a GPGPU benchmark.
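
As a side note on the measurement itself, the NVIDIA driver exposes temperature and power readings through NVML. Below is a minimal polling sketch, assuming the NVML headers and library (libnvidia-ml.so / nvml.lib) that ship with the driver or CUDA toolkit are available; power readings are not supported on every mobile GPU.

// Sketch: read GPU temperature and power draw via NVML while a stress
// workload runs in another process.
#include <nvml.h>
#include <cstdio>

int main() {
    if (nvmlInit() != NVML_SUCCESS) return 1;

    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS) {
        unsigned int tempC = 0, powerMilliwatts = 0;
        nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
        nvmlDeviceGetPowerUsage(dev, &powerMilliwatts);   // may return "not supported" on mobile GPUs
        std::printf("GPU temperature: %u C, power draw: %u mW\n", tempC, powerMilliwatts);
    }
    nvmlShutdown();
    return 0;
}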


Source: (StackOverflow)

Resigning system.img on a device

I am working on an automatic app updating solution for devices (LG p509 - Optimus 1) which we deploy to our customers. We have control of these devices and currently install a custom kernel on them (but not a full custom ROM). Since we are trying to do auto-updating of our app on the device, we need the system to be signed by a key which we control so we can sign our apps with the same key (to get the INSTALL_PACKAGES permission).
I have been having a few issues running AOSP builds on the device (using the LG released source for the device), and am trying to take a step back and evaluate our options. I have a few questions:

  1. Is it viable to just pull the system.img off the phone and resign the contents? If so, where is the system APK located? I poked through the PackageManager source, and it uses a system package (seemingly called "android") that apps are compared against to decide whether they are allowed to have system permissions.
  2. Has anyone here created a custom ROM for this device who could offer some advice on how to make our signature the system signature?

Any insight would be appreciated.


Source: (StackOverflow)

Can't run CUDA nor OpenCL on GeForce 540M

I have a problem running the samples provided by NVIDIA in their GPU Computing SDK (a library of precompiled sample programs).

For CUDA I get the message "No CUDA-capable device is detected"; for OpenCL there's an error from the function that should find OpenCL-capable devices.

I have installed all three parts from NVIDIA needed to develop with OpenCL: the developer driver for Windows 7 64-bit v301.27, CUDA Toolkit 4.2.9 and GPU Computing SDK 4.2.9.

I think this might have to do with Optimus technology, which routes the NVIDIA GPU's output through the Intel GPU for display (this notebook also has an Intel HD 3000), but in the NVIDIA control panel I have set the high-performance NVIDIA GPU to be used, set the power profile to prefer maximum performance, and switched PhysX from automatic selection to the NVIDIA processor. Nothing has changed, though; the samples won't run (not even those targeted at GeForce 8000 cards).

I would like to experiment with OpenCL and see what it is capable of, but without the ability to test things it's useless. I have found some information about this on forums, but it was mostly about Linux, where you need Bumblebee to access the NVIDIA GPU. There's no such problem on Windows, however; the drivers are better, so you can access the GPU without dark magic (or so I thought until I ran into this problem).
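
For diagnosis, it can help to ask the CUDA runtime directly what it sees instead of going through the SDK samples. A minimal sketch, assuming the CUDA runtime headers and library from the toolkit are installed:

// Sketch: report what the CUDA runtime itself can see, which helps separate
// "driver/GPU not reachable" from "SDK sample problem".
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("  %d: %s (compute capability %d.%d)\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}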


Source: (StackOverflow)

Weird VGL Notice - [VGL] NOTICE: Pixel format of 2D X server does not match pixel format of Pbuffer. Disabling PBO readback

I'm porting a game that I wrote from Windows to Linux. It uses GLFW and OpenGL. When I run it using optirun, to take advantage of my nVidia Optimus setup, it spits this out to the console:

[VGL] NOTICE: Pixel format of 2D X server does not match pixel format of
[VGL] Pbuffer.  Disabling PBO readback.

I've never seen this before, but my impression is that I'm loading my textures in GL_RGBA format, when they need to be in GL_BGRA or something like that. However, I'm using DevIL's ilutGLLoadImage function to obtain an OpenGL texture handle, so I never specify a format.
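
If you want to rule the format theory in or out, you can take the conversion out of ilutGLLoadImage's hands and upload the pixels yourself with an explicit format. A minimal sketch, assuming DevIL is initialized and a GL context is current; the file path is a placeholder, and this only changes how the texture is specified, which may have nothing to do with the VGL notice itself:

// Sketch: load with DevIL, force a known pixel format, upload manually.
// (On Windows, include <windows.h> before <GL/gl.h>.)
#include <IL/il.h>
#include <GL/gl.h>

GLuint loadTextureRGBA(const char* path) {   // assumes ilInit() was called and a GL context is current
    ILuint img;
    ilGenImages(1, &img);
    ilBindImage(img);
    if (!ilLoadImage(path)) { ilDeleteImages(1, &img); return 0; }
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);        // force RGBA, 8 bits per channel

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                 ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
                 0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());

    ilDeleteImages(1, &img);                          // GL keeps its own copy of the pixels
    return tex;
}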

Has anyone seen this before?


Source: (StackOverflow)

Xubuntu 12.04 display (with bumblebee) screwed up after upgrade

I recently left my machine (running Xubuntu 12.04) up while it upgraded some 300 different things. I have a Hybrid Graphics NVIDIA Optimus system, and it was working perfectly with Bumblebee 3.0 up until the upgrade. Since the upgrade, fonts don't render correctly (certain pixels in white letters turn black) and Bumblebee throws

[430.015513] [ERROR]The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect.
[430.015627] [ERROR]Could not connect to bumblebee daemon - is it running?

when I try to run anything with the NVIDIA chip.

I googled the errors, and a reboot seems to have fixed it for most people. I've rebooted, reinstalled Bumblebee, and updated my NVIDIA drivers, all to no avail.


Source: (StackOverflow)

Access LG Optimus X2 Flash Light

I'm learning Android app programming and working with the camera flash. My app contains the following code, which I copied from another post. It works perfectly fine on Galaxy devices but not on my LG Optimus X2. I did set the manifest permission; I don't have any clue about the issue, and any help is highly appreciated.

Camera mycam = Camera.open();
Parameters p = mycam.getParameters();
p.setFlashMode(Parameters.FLASH_MODE_TORCH);
mycam.setParameters(p);          // torch on
try {
    Thread.sleep(500);           // keep the torch on for half a second
} catch (InterruptedException e) {
    e.printStackTrace();
}
p.setFlashMode(Parameters.FLASH_MODE_OFF);
mycam.setParameters(p);          // apply the new parameters so FLASH_MODE_OFF actually takes effect
mycam.release();

By the way, is there any code that works on all Android devices that have a flash, or does it have to be device-specific? Where can I find that information? I haven't found much related info.


Source: (StackOverflow)

"The launch timed out and was terminated" error with Bumblebee on Linux

When running a long kernel (especially in debug mode with some memory checking) on a CUDA-enabled GeForce GPU with Bumblebee, I get the following error:

CUDA error 6: the launch timed out and was terminated

This seems to be caused by the NVIDIA driver's watchdog. A solution is available here. However, why is this happening while using Bumblebee and optirun to run a simple CUDA kernel (i.e. I do not use my NVIDIA GPU for display)?
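
One thing worth checking is whether the driver still reports the watchdog as active on that device. A minimal sketch using the CUDA runtime API: the kernelExecTimeoutEnabled flag is set whenever a display or X server is attached to the GPU, which Bumblebee's secondary X server may trigger even though nothing is visibly displayed on it.

// Sketch: check whether the driver watchdog applies to device 0.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("No CUDA device visible\n");
        return 1;
    }
    std::printf("%s: kernel execution timeout %s\n",
                prop.name, prop.kernelExecTimeoutEnabled ? "ENABLED" : "disabled");
    return 0;
}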

The command I used to launch the program is:

optirun [cuda-memcheck] ./my_program program_options

Source: (StackOverflow)

OpenGL glfwOpenWindow Doesn't work on Optimus video card

I have a GeForce GT 540M; my laptop uses Optimus, so it will 'switch' between the Intel GPU and the GeForce GPU depending on the application, settings, etc.

As far as I can tell, it fails on the line that opens the window, which returns false:

if( !glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ) )
{
    fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
    system("pause");
    glfwTerminate();
    return -1;
}

The system command was just to confirm the error message I received.

Is there a way to force the application to use my NVIDIA graphics card? My assumption is that it can only see my Intel GPU.
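
One way to confirm that assumption is a stripped-down probe that opens a window with no version hints at all and prints GL_RENDERER. A minimal sketch, assuming the same GLFW 2.7 API as the snippet above; if it reports the Intel GPU, the failure above is the Intel driver rejecting the 3.3 hints rather than a broken card.

// Sketch: accept whatever context the driver offers and report which GPU provided it.
#include <GL/glfw.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return -1;

    // no glfwOpenWindowHint() calls: take the default context
    if (!glfwOpenWindow(1024, 768, 0, 0, 0, 0, 32, 0, GLFW_WINDOW)) {
        glfwTerminate();
        return -1;
    }
    std::printf("GL_RENDERER: %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("GL_VERSION:  %s\n", (const char*)glGetString(GL_VERSION));

    glfwTerminate();
    return 0;
}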


Source: (StackOverflow)

Cannot call acpi_call method in Ubuntu 12.10

I have a notebook with an Optimus (NVIDIA) graphics card, and I want to use the acpi_call method to give the notebook a rest (power down the discrete card). But I get an error. I downloaded acpi_call-master from this web page: (https://github.com/mkottman/acpi_call), extracted the zip, and ran the following commands in the terminal. However, these errors are given:

dagli@dagli-Inspiron-N5110:~/acpi_call-master$ ls
acpi_call.c  examples  Makefile  README.md  support
dagli@dagli-Inspiron-N5110:~/acpi_call-master$ sudo make
make -C /lib/modules/3.5.0-17-generic/build M=/home/dagli/acpi_call-master modules
make: *** /lib/modules/3.5.0-17-generic/build: Böyle bir dosya ya da dizin yok. Durdu. [Turkish: "No such file or directory. Stopped."]
make: *** [default] Hata 2 [Turkish: "Error 2"]

Source: (StackOverflow)