When reprinting, please credit the KlayGE game engine. The permanent link of this article is http://www.klayge.org/?p=2052
NVIDIA Optimus technology balances power consumption and performance on a notebook by switching between GPUs automatically and seamlessly. But what if I don't want it to be automatic? On a ThinkPad T420s, the NVIDIA discrete GPU is an NVS 4200M with a feature level up to D3D 11.0, and the Intel integrated GPU is an HD 3000 with a feature level up to D3D 10.1. (If you are not familiar with feature levels, read this article.)
Control in BIOS
On platforms that support Optimus, you can find the option in the BIOS: use the NV card, use the Intel card, or switch automatically. However, this setting is static and requires a reboot for every switch, so it is definitely not what we want.
Right-click to control
Right-click the icon of an EXE file and choose "Run with graphics processor" from the shortcut menu; you can select either the NV card or the Intel card. Interestingly, if you enumerate the adapters in the program, two video cards are returned. If you select the NV card, both adapters are the NV card; if you select the Intel card, the first adapter is Intel and the second is NV.
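For reference, here is a minimal sketch (not from the original article) that enumerates the adapters through DXGI and prints their descriptions, which is how the behavior described above can be observed under each right-click choice:

#include <dxgi.h>
#include <iostream>

#pragma comment(lib, "dxgi.lib")

int main()
{
    // Create a DXGI factory and walk every adapter the application can see.
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1), reinterpret_cast<void**>(&factory))))
    {
        return 1;
    }

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        std::wcout << L"Adapter " << i << L": " << desc.Description << std::endl;
        adapter->Release();
    }

    factory->Release();
    return 0;
}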
Control in the driver
In NVIDIA's driver control panel, a global profile controls whether all programs run on the NV card or the Intel card. In addition, a per-application profile gives each program its own graphics card configuration. However, this method is still not flexible enough to be controlled from within the program.
Statically linking CUDA
The Intel card does not support CUDA, so CUDA programs can only run on the NV card. We accidentally discovered that the CUDA programs in the NVIDIA GPU Computing SDK are automatically switched to the NV card. After experimenting with various combinations, we found that statically linking cudart.lib triggers the switch to the NV card, while dynamically loading nvcuda.dll does not. This method drags in an otherwise useless library, which is not only inelegant but also troublesome on non-NVIDIA platforms.
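As an illustration, a minimal sketch of this trick might look like the following (the function name is mine, and it assumes the project statically links cudart.lib or cudart_static.lib); the call does nothing useful, its only purpose is to create a real dependency on the CUDA runtime so the driver activates the NV card:

#include <cuda_runtime.h>

// Referencing the statically linked CUDA runtime is enough for the Optimus
// driver to pick the NV card; the result of the call is irrelevant.
void ForceCudaRuntimeReference()
{
    int device_count = 0;
    cudaGetDeviceCount(&device_count);
}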
Enumerate all devices
As mentioned above, you can enumerate the adapters in the program. What happens if a device is forcibly created on the NV adapter? In my experiments this does enable the NV card, but the system seems to perform an extra copy behind the scenes, giving rendering a fixed overhead that becomes more obvious as the number of draw calls grows. Although not the best option, it is still a workable solution, as the sketch below shows.
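Here is a minimal sketch of this approach (the helper name is mine). Note that when an explicit adapter is passed to D3D11CreateDevice, the driver type must be D3D_DRIVER_TYPE_UNKNOWN:

#include <d3d11.h>
#include <dxgi.h>

#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

HRESULT CreateDeviceOnAdapter(UINT adapter_index, ID3D11Device** device,
                              ID3D11DeviceContext** context)
{
    IDXGIFactory1* factory = nullptr;
    HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1),
                                    reinterpret_cast<void**>(&factory));
    if (FAILED(hr))
    {
        return hr;
    }

    // Pick the adapter explicitly instead of letting the runtime choose.
    IDXGIAdapter1* adapter = nullptr;
    hr = factory->EnumAdapters1(adapter_index, &adapter);
    if (SUCCEEDED(hr))
    {
        D3D_FEATURE_LEVEL feature_level;
        hr = D3D11CreateDevice(adapter, D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                               nullptr, 0, D3D11_SDK_VERSION,
                               device, &feature_level, context);
        adapter->Release();
    }

    factory->Release();
    return hr;
}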
The amazing NvOptimusEnablement
On the NVIDIA developer website there is a new document about Optimus, which mentions that drivers from release R302 onward support a new way to request the NV card. By exporting a global variable named NvOptimusEnablement from the EXE and setting it to 1, the driver switches to the NV card:
Extern "C" {_ declspec (dllexport) DWORD nvoptimusenablement = 0x00000001 ;}
In this way, you can finally control which video card is used from within the program, and there is no performance degradation like in the previous method.
Another interesting finding: if the device is created in a DLL and the EXE links to that DLL through its import library, everything works normally; the device name is NVS 4200M and the feature level is D3D 11.0. However, if the EXE loads the DLL dynamically with LoadLibrary, the device name reported is Intel HD 3000, yet the feature level is still D3D 11.0. In either case the NV card is actually enabled and performance is not affected.
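The sketch below (the helper name is mine, and the device is assumed to have been created elsewhere) shows one way to check which adapter a created device reports, which is how the difference between the import-library and LoadLibrary cases can be observed:

#include <d3d11.h>
#include <dxgi.h>
#include <iostream>

void PrintAdapterOfDevice(ID3D11Device* device)
{
    // Ask the device for its DXGI interface, then for the adapter it was created on.
    IDXGIDevice* dxgi_device = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice),
                                         reinterpret_cast<void**>(&dxgi_device))))
    {
        IDXGIAdapter* adapter = nullptr;
        if (SUCCEEDED(dxgi_device->GetAdapter(&adapter)))
        {
            DXGI_ADAPTER_DESC desc;
            adapter->GetDesc(&desc);
            std::wcout << L"Device created on: " << desc.Description << std::endl;
            std::wcout << L"Feature level: 0x" << std::hex
                       << static_cast<int>(device->GetFeatureLevel()) << std::endl;
            adapter->Release();
        }
        dxgi_device->Release();
    }
}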
Summary
We have listed several ways to enable the NV card on the Optimus platform. The last two, especially the last one, are practical but far from perfect. Hopefully NVIDIA will decide whether to switch to the NV card, and will return the correct device name, based on the module in which CreateDevice is called, rather than on the EXE module alone.