The NVIDIA Accelerated Linux Graphics Driver consists of the following components (filenames in parentheses are the full names of the components after installation). Some paths may differ between systems (e.g., X modules may be installed in /usr/X11R6/ rather than /usr/lib/xorg/).
An X driver (/usr/lib/xorg/modules/drivers/nvidia_drv.so); this driver is needed by the X server to use your NVIDIA hardware.
A GLX extension module for X (/usr/lib/xorg/modules/extensions/libglx.so.375.20); this module is used by the X server to provide server-side GLX support.
An X module for wrapped software rendering (/usr/lib/xorg/modules/libnvidia-wfb.so.375.20 and optionally, /usr/lib/xorg/modules/libwfb.so); this module is used by the X driver to perform software rendering on GeForce 8 series GPUs. If libwfb.so already exists, nvidia-installer will not overwrite it. Otherwise, it will create a symbolic link from libwfb.so to libnvidia-wfb.so.375.20.
EGL and OpenGL ES libraries (/usr/lib/libEGL.so.1, /usr/lib/libGLESv1_CM.so.375.20, and /usr/lib/libGLESv2.so.375.20); these libraries provide the API entry points for all OpenGL ES and EGL function calls. They are loaded at run-time by applications.
Vendor neutral graphics libraries provided by libglvnd (/usr/lib/libOpenGL.so.0, /usr/lib/libGLX.so.0, and /usr/lib/libGLdispatch.so.0); these libraries are currently used to provide full OpenGL dispatching support to NVIDIA's implementation of EGL. Source code for libglvnd is available at https://github.com/NVIDIA/libglvnd
GLVND vendor implementation libraries for GLX (/usr/lib/libGLX_nvidia.so.0) and EGL (/usr/lib/libEGL_nvidia.so.0); these libraries provide NVIDIA implementations of OpenGL functionality which may be accessed using the GLVND client-facing libraries.
A GLX client library and Vulkan ICD (/usr/lib/libGL.so.1), either as part of the GLVND infrastructure or as a legacy, non-GLVND GLX client library. This library provides API entry points for all GLX function calls, and is loaded at run-time by applications. Users may choose one or the other at installation time by using either the --glvnd-glx-client or the --no-glvnd-glx-client command line option to nvidia-installer.
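For example, assuming the driver package was downloaded as NVIDIA-Linux-x86_64-375.20.run (the exact file name depends on your architecture and driver version), the legacy, non-GLVND client library can be selected at installation time with:

    # sh NVIDIA-Linux-x86_64-375.20.run --no-glvnd-glx-client

Passing --glvnd-glx-client instead selects the GLVND-based client library.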
Note that although both the GLVND and non-GLVND GLX client libraries share the same SONAME of libGL.so.1, only one of them may be installed at a time. /usr/lib/libGL.so.375.20 is the non-GLVND GLX client library, and /usr/lib/libGL.so.1.0.0 is the GLVND GLX client library.
This library is also used as the Vulkan ICD. Its configuration file is installed as /etc/vulkan/icd.d/nvidia_icd.json.
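To check that the Vulkan loader picks up the NVIDIA ICD, the configuration file can be inspected directly, or a diagnostic tool such as vulkaninfo (shipped with the Vulkan SDK or a distribution vulkan-tools package, not with this driver) can be run; VK_ICD_FILENAMES is a standard Vulkan loader environment variable that restricts the loader to a specific ICD:

    % cat /etc/vulkan/icd.d/nvidia_icd.json
    % VK_ICD_FILENAMES=/etc/vulkan/icd.d/nvidia_icd.json vulkaninfo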
Repackagers of the driver are encouraged to provide the GLVND-based driver stack to promote adoption of the new infrastructure, but those who choose to package the legacy GLX client library instead of, or as an alternative to, the GLVND GLX client library should be aware that the NVIDIA EGL driver depends upon GLVND for proper functionality. The legacy GLX client library may coexist with most GLVND libraries, with the exception of libGL.so.1 and libGLX.so.0, so it is possible to support both NVIDIA EGL and legacy, non-GLVND NVIDIA GLX by installing all of the GLVND libraries except for libGL and libGLX alongside the legacy libGL.
Various libraries that are used internally by other driver components. These include /usr/lib/libnvidia-cfg.so.375.20, /usr/lib/libnvidia-compiler.so.375.20, /usr/lib/libnvidia-eglcore.so.375.20, /usr/lib/libnvidia-glcore.so.375.20, and /usr/lib/libnvidia-glsi.so.375.20.
A VDPAU (Video Decode and Presentation API for Unix-like systems) library for the NVIDIA vendor implementation (/usr/lib/vdpau/libvdpau_nvidia.so.375.20); see Appendix G, VDPAU Support, for details.
The CUDA library (/usr/lib/libcuda.so.375.20), which provides runtime support for CUDA (high-performance computing on the GPU) applications.
The Fatbinary Loader library (/usr/lib/libnvidia-fatbinaryloader.so.375.20) provides support for the CUDA driver to work with CUDA fatbinaries. Fatbinary is a container format which can package multiple PTX and Cubin files compiled for different SM architectures.
The PTX JIT Compiler library (/usr/lib/libnvidia-ptxjitcompiler.so.375.20) is a JIT compiler which compiles PTX into GPU machine code and is used by the CUDA driver.
Two OpenCL libraries (/usr/lib/libOpenCL.so.1.0.0 and /usr/lib/libnvidia-opencl.so.375.20); the former is a vendor-independent Installable Client Driver (ICD) loader, and the latter is the NVIDIA Vendor ICD. A config file, /etc/OpenCL/vendors/nvidia.icd, is also installed to advertise the NVIDIA Vendor ICD to the ICD Loader.
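On a typical installation, the configuration file contains nothing more than the name of the NVIDIA Vendor ICD library, which can be verified with:

    % cat /etc/OpenCL/vendors/nvidia.icd

The ICD Loader reads the files in /etc/OpenCL/vendors/ at run time to discover which vendor implementations are available.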
The nvidia-cuda-mps-control and nvidia-cuda-mps-server applications, which allow MPI processes to run concurrently on a single GPU.
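As a minimal sketch (see the MPS documentation included with the CUDA toolkit for the supported configurations and environment variables), the control daemon can be started in background mode and later shut down as follows; nvidia-cuda-mps-server is spawned on demand by the control daemon rather than run directly:

    # nvidia-cuda-mps-control -d
    # echo quit | nvidia-cuda-mps-control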
A kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-modeset.ko); this kernel module is responsible for programming the display engine of the GPU. User-mode NVIDIA driver components such as the NVIDIA X driver, OpenGL driver, and VDPAU driver communicate with nvidia-modeset.ko through the /dev/nvidia-modeset device file.
A kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia.ko); this kernel module provides low-level access to your NVIDIA hardware for all of the above components. It is generally loaded into the kernel when the X server is started, and is used by the X driver and OpenGL. nvidia.ko consists of two pieces: the binary-only core, and a kernel interface that must be compiled specifically for your kernel version. Note that the Linux kernel does not have a consistent binary interface like the X server, so it is important that this kernel interface be matched with the version of the kernel that you are using. This can be accomplished either by compiling it yourself or by using precompiled binaries provided for the kernels shipped with some of the more common Linux distributions.
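To check which NVIDIA kernel modules are currently loaded, and that the loaded nvidia.ko matches the installed user-space components, standard tools can be used, for example:

    % lsmod | grep nvidia
    % modinfo nvidia | grep ^version
    % cat /proc/driver/nvidia/version

The first command lists the loaded NVIDIA kernel modules, the second reports the version of the installed nvidia.ko, and the third reports the version of the kernel module that is actually in use.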
NVIDIA Unified Memory kernel module (/lib/modules/`uname -r`/kernel/drivers/video/nvidia-uvm.ko); this kernel module provides functionality for sharing memory between the CPU and GPU in CUDA programs. It is generally loaded into the kernel when a CUDA program is started, and is used by the CUDA driver on supported platforms.
The nvidia-tls libraries (/usr/lib/libnvidia-tls.so.375.20 and /usr/lib/tls/libnvidia-tls.so.375.20); these files provide thread local storage support for the NVIDIA OpenGL libraries (libGL, libnvidia-glcore, and libglx). Each nvidia-tls library provides support for a particular thread local storage model (such as ELF TLS), and the one appropriate for your system will be loaded at run time.
The nvidia-ml library (/usr/lib/libnvidia-ml.so.375.20); the NVIDIA Management Library provides a monitoring and management API. See Chapter 25, The NVIDIA Management Library, for more information.
The application nvidia-installer (/usr/bin/nvidia-installer) is NVIDIA's tool for installing and updating NVIDIA drivers. See Chapter 4, Installing the NVIDIA Driver, for a more thorough description. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-installer/.
The application nvidia-modprobe (/usr/bin/nvidia-modprobe) is installed as setuid root and is used by processes (such as CUDA applications) that don't run with sufficient privileges to load the NVIDIA kernel module and create the /dev/nvidia* device nodes themselves. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-modprobe/.
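Normally nvidia-modprobe is invoked transparently by the driver components themselves, but it can also be run by hand. For example, the following commands load nvidia.ko, create the /dev/nvidia0 device node, and load the nvidia-uvm kernel module, respectively (see the nvidia-modprobe manual page for the complete option list):

    % nvidia-modprobe
    % nvidia-modprobe -c 0
    % nvidia-modprobe -u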
The application nvidia-xconfig (/usr/bin/nvidia-xconfig) is NVIDIA's tool for manipulating X server configuration files. See Chapter 6, Configuring X for the NVIDIA Driver, for more information. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-xconfig/.
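For example, running the tool with no arguments generates an X configuration file (or updates an existing one) so that the X server uses the NVIDIA X driver; back up any existing configuration beforehand:

    # nvidia-xconfig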
The application nvidia-settings (/usr/bin/nvidia-settings) is NVIDIA's tool for dynamic configuration while the X server is running. See Chapter 23, Using the nvidia-settings Utility, for more information.
The libnvidia-gtk libraries (/usr/lib/libnvidia-gtk2.so.375.20 and, on some platforms, /usr/lib/libnvidia-gtk3.so.375.20); these libraries are required to provide the nvidia-settings user interface. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-settings/.
The application nvidia-smi (/usr/bin/nvidia-smi) is the NVIDIA System Management Interface for management and monitoring functionality. See Chapter 24, Using the nvidia-smi Utility, for more information.
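For example, the following commands list the GPUs visible to the driver and print a detailed status query for them:

    % nvidia-smi -L
    % nvidia-smi -q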
The application nvidia-debugdump (/usr/bin/nvidia-debugdump) is NVIDIA's tool for collecting internal GPU state. It is normally invoked by the nvidia-bug-report.sh (/usr/bin/nvidia-bug-report.sh) script. See Chapter 26, Using the nvidia-debugdump Utility, for more information.
The daemon nvidia-persistenced (/usr/bin/nvidia-persistenced) is the NVIDIA Persistence Daemon, which allows the NVIDIA kernel module to maintain persistent state when no other NVIDIA driver components are running. See Chapter 27, Using the nvidia-persistenced Utility, for more information. Source code is available at ftp://download.nvidia.com/XFree86/nvidia-persistenced/.
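The daemon is normally started by an init script or service unit appropriate to your distribution (sample scripts are included in the nvidia-persistenced source package), but as a minimal sketch it can also be started by hand; here, foo is a hypothetical unprivileged account for the daemon to run as after it starts:

    # nvidia-persistenced --user foo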
The NVCUVID library (/usr/lib/libnvcuvid.so.375.20); the NVIDIA CUDA Video Decoder (NVCUVID) library provides an interface to hardware video decoding capabilities on NVIDIA GPUs with CUDA.
The NvEncodeAPI library (/usr/lib/libnvidia-encode.so.375.20); the NVENC Video Encoding library provides an interface to video encoder hardware on supported NVIDIA GPUs.
The NvIFROpenGL library (/usr/lib/libnvidia-ifr.so.375.20); the NVIDIA OpenGL-based Inband Frame Readback library provides an interface to capture and optionally encode an OpenGL framebuffer. NvIFROpenGL is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at [email protected] for more information.
The NvFBC library (/usr/lib/libnvidia-fbc.so.375.20); the NVIDIA Framebuffer Capture library provides an interface to capture and optionally encode the framebuffer of an X server screen. NvFBC is a private API that is only available to approved partners for use in remote graphics scenarios. Please contact NVIDIA at [email protected] for more information.
An X driver configuration file (/usr/share/X11/xorg.conf.d/nvidia-drm-outputclass.conf); if the X server is sufficiently new, this file will be installed to configure the X server to load the nvidia_drv.so driver automatically if it is started after the NVIDIA DRM kernel module (nvidia-drm.ko) is loaded. This feature is supported in X.Org xserver 1.16 and higher when running on Linux kernel 3.13 or higher with CONFIG_DRM enabled.
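To check whether this mechanism is active on a given system, verify that the configuration file is installed and that the DRM kernel module is loaded, for example:

    % ls /usr/share/X11/xorg.conf.d/nvidia-drm-outputclass.conf
    % lsmod | grep nvidia_drm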
Predefined application profile keys and documentation for those keys can be found in the following files in the directory /usr/share/nvidia/: nvidia-application-profiles-375.20-rc and nvidia-application-profiles-375.20-key-documentation. See Appendix J, Application Profiles, for more information.
Problems will arise if applications use the wrong version of a library. This can be the case if there are either old libGL libraries or stale symlinks left lying around. If you think there may be something awry in your installation, check that the following files are in place (these are all the files of the NVIDIA Accelerated Linux Graphics Driver, as well as their symlinks):
/usr/lib/xorg/modules/drivers/nvidia_drv.so

/usr/lib/xorg/modules/libwfb.so (if your X server is new enough), or
/usr/lib/xorg/modules/libnvidia-wfb.so and
/usr/lib/xorg/modules/libwfb.so -> libnvidia-wfb.so

/usr/lib/xorg/modules/extensions/libglx.so.375.20
/usr/lib/xorg/modules/extensions/libglx.so -> libglx.so.375.20

(the above may also be in /usr/lib/modules or /usr/X11R6/lib/modules)

/usr/lib/libGL.so.375.20
/usr/lib/libGL.so.1 -> libGL.so.375.20
/usr/lib/libGL.so -> libGL.so.1

(on GLVND-based installations, libGL.so.1 from GLVND may be used instead of libGL.so.375.20 as shown above.)

/usr/lib/libnvidia-glcore.so.375.20

/usr/lib/libcuda.so.375.20
/usr/lib/libcuda.so -> libcuda.so.375.20

/lib/modules/`uname -r`/video/nvidia.{o,ko}, or
/lib/modules/`uname -r`/kernel/drivers/video/nvidia.{o,ko}
If there are other libraries whose "soname" conflicts with that of the NVIDIA libraries, ldconfig may create the wrong symlinks. It is recommended that you manually remove or rename conflicting libraries (be sure to rename clashing libraries to something that ldconfig will not look at -- we have found that prepending "XXX" to a library name generally does the trick), rerun 'ldconfig', and check that the correct symlinks were made. An example of a library that often creates conflicts is "/usr/lib/mesa/libGL.so*".
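For example, assuming a conflicting Mesa build of libGL named /usr/lib/mesa/libGL.so.1 is installed (the exact path and file name vary by distribution), it could be moved aside and the linker cache rebuilt as follows:

    # mv /usr/lib/mesa/libGL.so.1 /usr/lib/mesa/XXXlibGL.so.1
    # ldconfig
    # ldconfig -p | grep libGL

The final command lists the libGL entries in the linker cache so that you can confirm they now point at the NVIDIA (or GLVND) libraries.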
If the libraries appear to be correct, then verify that the application is using the correct libraries. For example, to check that the application /usr/bin/glxgears is using the NVIDIA libraries, run:
% ldd /usr/bin/glxgears
        linux-gate.so.1 => (0xffffe000)
        libGL.so.1 => /usr/lib/libGL.so.1 (0xb7ed1000)
        libXext.so.6 => /usr/lib/libXext.so.6 (0xb7ec0000)
        libX11.so.6 => /usr/lib/libX11.so.6 (0xb7de0000)
        libpthread.so.0 => /lib/tls/libpthread.so.0 (0x00946000)
        libm.so.6 => /lib/tls/libm.so.6 (0x0075d000)
        libc.so.6 => /lib/tls/libc.so.6 (0x00631000)
        libnvidia-tls.so.375.20 => /usr/lib/tls/libnvidia-tls.so.375.20 (0xb7ddd000)
        libnvidia-glcore.so.375.20 => /usr/lib/libnvidia-glcore.so.375.20 (0xb5d1f000)
        libdl.so.2 => /lib/libdl.so.2 (0x00782000)
        /lib/ld-linux.so.2 (0x00614000)
In the example above, the list of libraries reported by ldd includes libnvidia-tls.so.375.20 and libnvidia-glcore.so.375.20: this is because glxgears links against libGL.so.1, which in this case is the legacy, non-GLVND NVIDIA GLX client library. When libGL.so.1 is provided by GLVND instead, libGLX.so.0 and libGLdispatch.so.0 should appear in the output of ldd. If the GLX client library is something other than the NVIDIA or GLVND libGL.so.1, then you will need to either remove the library that is getting in the way or adjust your dynamic loader search path using the LD_LIBRARY_PATH environment variable. You may want to consult the man pages for ldconfig and ldd.
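For example, if a conflicting copy earlier in the dynamic loader search path is being picked up, the loader can be steered to the correct directory for a single invocation (adjust /usr/lib to match where the NVIDIA libraries are installed on your system) and the result rechecked with ldd:

    % LD_LIBRARY_PATH=/usr/lib ldd /usr/bin/glxgears | grep libGL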