Installing PyTorch with CUDA in Conda
The following guide shows how to install PyTorch with CUDA support inside a Conda virtual environment.
Assumptions
- Ubuntu OS
- NVIDIA GPU with CUDA support
- Conda (see installation instructions here)
- CUDA (installed by system admin)
Specifications
This guide is written for the following specs:
- Ubuntu 16.04
- Python 3.6
- CUDA 9.0
- cuDNN v7.1
- Miniconda 3
- OpenCV3
Guide
First, get cuDNN by following this cuDNN Guide.
Then we need to update the mkl package in the base environment to prevent this issue later on.
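Assuming you are still in the base environment and conda is on your PATH, the update can be run as:

```shell
# Update the mkl package in the base environment
conda update mkl
```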
Let’s create a virtual Conda environment called “pytorch”:
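For example, using the Python version from the specs above:

```shell
# Create a Conda environment named "pytorch" with Python 3.6
conda create -n pytorch python=3.6
```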
You may, of course, use a different environment name; just be sure to adjust the rest of this guide accordingly.
After it prepares the environment and installs the default packages, activate the virtual environment via:
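On Conda installs of this era, activation is done via:

```shell
source activate pytorch
```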
Now let’s install the necessary dependencies in our current PyTorch environment:
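As a sketch, the dependency set typically used when building PyTorch from source at this time looked like the following; the exact package list is an assumption, and magma-cuda90 matches the CUDA 9.0 spec above:

```shell
# Build-time dependencies (package list is illustrative)
conda install numpy pyyaml=3.12 mkl setuptools cmake cffi typing
# LAPACK support for CUDA 9.0; substitute magma-cudaxx for your CUDA version
conda install -c pytorch magma-cuda90
```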
- We specified pyyaml=3.12 because newer versions are incompatible with Detectron, should you use it with Caffe2. See this issue.
- For LAPACK support, install magma-cudaxx, where xx reflects your CUDA version, e.g. 91 corresponds to CUDA 9.1.
Let’s clone the PyTorch repo and its submodules into our home directory.
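For example:

```shell
cd ~
# --recursive pulls in the third-party submodules as well
git clone --recursive https://github.com/pytorch/pytorch
```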
Before we begin manually compiling the binaries, we need to first assign some environment variables.
Firstly, for our non-standard installation of cuDNN, we need to tell PyTorch where to look for libcudart via the environment variable $LD_LIBRARY_PATH. If you have followed my cuDNN Guide, you would have assigned this to be:
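For instance, if cuDNN was extracted to ~/cudnn (the exact path is an assumption; adjust to wherever you placed it):

```shell
# Prepend the cuDNN library directory (path assumes ~/cudnn from the cuDNN Guide)
export LD_LIBRARY_PATH="$HOME/cudnn/cuda/lib64:$LD_LIBRARY_PATH"
```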
Next, we need to tell CMake to look for packages in our Conda environment before looking in system install locations:
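One way to do this is to point CMAKE_PREFIX_PATH at the root of the active environment; $CONDA_PREFIX is set by Conda on activation, and the fallback path shown is only an example:

```shell
# Use the active Conda environment as the first place CMake searches
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-$HOME/miniconda3/envs/pytorch}"
```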
We are now ready to install PyTorch via the very convenient installer in the repo:
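A typical invocation, assuming the clone lives in ~/pytorch and an 8-core machine, might look like:

```shell
cd ~/pytorch
# MAX_JOBS=9 assumes 8 logical cores; see the note below on choosing this value
MAX_JOBS=9 python setup.py install
```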
- To determine the value for MAX_JOBS, use one more than the number reported by cat /proc/cpuinfo | grep processor | wc -l
You’d think we’re done, but not quite! We have to point the $PYTHONPATH environment variable to our build folder, like so:
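As an illustration, assuming the repo was cloned to ~/pytorch, the export would look something like this (the exact build path depends on your setup):

```shell
# Make Python pick up the locally built PyTorch (path is an assumption)
export PYTHONPATH="$HOME/pytorch/build:$PYTHONPATH"
```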
However, it would be tedious to type that every time we activate our environment.
You may append that line to .bash_profile or .bashrc, but some variables, such as $PYTHONPATH, are potentially used in many environments, and this could lead to Python import errors when the paths contain different modules sharing the same name. For instance, both Caffe and Caffe2 contain a module named ‘caffe’.
The solution is to write scripts that store our environment variables within our environment, so that they are set automatically every time we activate the environment and unset automatically when we deactivate it. The following steps are an adaptation of this guide stated in the official Conda documentation.
Let’s enter our environment directory and do the following:
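Assuming the environment lives under ~/miniconda3/envs/pytorch (the path is an assumption; adjust for your Conda install), create the activation and deactivation hook files:

```shell
# Path to the environment is an assumption; adjust for your Conda install
ENV_DIR="$HOME/miniconda3/envs/pytorch"
mkdir -p "$ENV_DIR/etc/conda/activate.d" "$ENV_DIR/etc/conda/deactivate.d"
touch "$ENV_DIR/etc/conda/activate.d/env_vars.sh"
touch "$ENV_DIR/etc/conda/deactivate.d/env_vars.sh"
```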
Edit ./etc/conda/activate.d/env_vars.sh
as follows:
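Based on the variables used earlier in this guide (the cuDNN and PyTorch paths are assumptions), the activation script would look something like:

```shell
#!/bin/sh
# Set on activation; paths assume the cuDNN and PyTorch locations used earlier
export LD_LIBRARY_PATH="$HOME/cudnn/cuda/lib64:$LD_LIBRARY_PATH"
export PYTHONPATH="$HOME/pytorch/build:$PYTHONPATH"
```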
Edit ./etc/conda/deactivate.d/env_vars.sh
as follows:
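A matching deactivation script, mirroring the variables set above, might be:

```shell
#!/bin/sh
# Unset on deactivation so other environments are not affected
unset LD_LIBRARY_PATH
unset PYTHONPATH
```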
Now let’s reload the current environment to reflect the variables:
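With Conda installs of this era, that is simply:

```shell
source deactivate
source activate pytorch
```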
We are now ready to test if PyTorch has been installed correctly with CUDA
To check if PyTorch was installed successfully:
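A quick sanity check is to print the installed version:

```shell
# Should print the installed PyTorch version without raising an ImportError
python -c "import torch; print(torch.__version__)"
```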
To check if GPU build was successful:
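This can be checked with torch.cuda.is_available():

```shell
# Should print True if PyTorch can see the GPU
python -c "import torch; print(torch.cuda.is_available())"
```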
Torchvision
If you are also installing torchvision:
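One option, so that torchvision builds against our locally compiled PyTorch, is to install it from source (the from-source route is an assumption; a pip or conda install also works):

```shell
# Build torchvision from source inside the activated pytorch environment
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```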