If your goal is to get up and running as fast as you can, this installer script does everything you will need.
It works on Ubuntu 18.04 and 20.04 and installs either CUDA 10.2 or CUDA 11.0. Pick CUDA 10.2 if you are not sure which to choose. Note that the script does not yet support CUDA 10.2 on Ubuntu 20.04.
sudo git clone https://github.com/DataCrunchIO/Install-CUDA.git ~/Install-CUDA
sudo chmod +x ~/Install-CUDA/installer.sh
sudo ~/Install-CUDA/installer.sh
Follow the instructions of the installer and your server will be running CUDA in mere minutes!
If you would like to learn how to install the NVIDIA driver and CUDA manually, these are the steps the installer takes.
First, we need the CUDA installer, which can be found on NVIDIA's website. The installer includes an appropriate driver as well.
Note that we do not strictly need to install CUDA itself: the NVIDIA driver is enough if you use conda environments, which bundle their own CUDA runtime. However, if you want to run CUDA-accelerated programs outside of conda, it is convenient to have it installed.
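For example, a conda environment created for a deep-learning framework typically pulls in its own CUDA runtime; the environment name and package versions below are only an illustration:
conda create -n ml pytorch torchvision cudatoolkit=10.2 -c pytorch
conda activate ml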
Here we will be installing CUDA 10.2 for Ubuntu 18.04:
wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux.run
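Optionally, you can verify the download by comparing its checksum against the value published on the NVIDIA download page (if one is provided for your installer version):
md5sum cuda_10.2.89_440.33.01_linux.run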
Before installing, we will need to install some dependencies:
sudo apt update
sudo apt install build-essential gcc-multilib dkms
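Before continuing, you can quickly confirm that the compiler toolchain is in place:
gcc --version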
Next, we make the file executable and run it:
sudo chmod +x cuda_10.2.89_440.33.01_linux.run
sudo ./cuda_10.2.89_440.33.01_linux.run
Follow the instructions given by the installer. Choose to install the driver and CUDA toolkit. The samples are optional.
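If you would rather script this step, the runfile also supports an unattended mode; the flags below are a sketch, so run the installer with --help to confirm the options available for your version:
sudo ./cuda_10.2.89_440.33.01_linux.run --silent --driver --toolkit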
Next, we configure the dynamic linker so it can find the CUDA runtime libraries:
sudo bash -c "echo /usr/local/cuda/lib64/ > /etc/ld.so.conf.d/cuda.conf"
sudo ldconfig
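To confirm the linker now knows about the CUDA libraries, you can search its cache (the exact libraries listed will depend on what the installer placed in /usr/local/cuda/lib64):
ldconfig -p | grep cuda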
We add "/usr/local/cuda/bin" to the PATH variable by editing /etc/environment:
sudo nano /etc/environment
Append ":/usr/local/cuda/bin" to the end of the existing PATH entry in the file.
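As a sketch, assuming the stock Ubuntu PATH (your existing entries may differ), the resulting line could look like this:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/cuda/bin"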
After adding it, press Ctrl+X to exit and save the file when prompted.
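Changes to /etc/environment only take effect for new login sessions. After logging out and back in (or after the reboot at the end of this guide), you can verify that the toolkit is on your PATH:
nvcc --version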
At this point, you can check the output of "nvidia-smi"; you should see your GPUs, the driver version, and the CUDA version. If all looks good, we will modify our startup script:
sudo nano /etc/rc.local
Paste the following:
#!/bin/bash
nvidia-smi -pm 1
nvidia-smi -e 0
exit 0
The three lines above are the complete contents of /etc/rc.local.
If you are wondering what the script does:
“#!/bin/bash”: the shebang line that tells the system to run the script with bash (this is not an ordinary comment, and it is not optional).
“nvidia-smi -pm 1”: enables persistence mode to keep the driver loaded, which speeds up some operations.
“nvidia-smi -e 0”: disables ECC (error correction) on the GPU memory. This is safe for most applications and makes more GPU memory available.
“exit 0”: ends the script with a success status. After adding these lines, save and close the file.
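Once the server has rebooted (next step), you can confirm that the script was picked up by systemd's rc-local compatibility service:
sudo systemctl status rc-local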
Let’s make the file executable and reboot:
sudo chmod +x /etc/rc.local
sudo reboot
And that’s it, you are ready to use your GPUs! You can confirm the status of persistence mode and ECC by running 'nvidia-smi'.
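You can also query these settings directly; the fields below are standard nvidia-smi query properties:
nvidia-smi --query-gpu=name,driver_version,persistence_mode,ecc.mode.current --format=csv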