
The following guide shows you how to install PyTorch with CUDA under the Conda virtual environment.

Assumptions

  • Ubuntu OS
  • NVIDIA GPU with CUDA support
  • Conda (see installation instructions here)
  • CUDA (installed by system admin)

Specifications

This guide is written for the following specs:

  • Ubuntu 16.04
  • Python 3.6
  • CUDA 9.0
  • cuDNN v7.1
  • Miniconda 3
  • OpenCV3

Guide

First, get cuDNN by following this cuDNN Guide.

Then we need to update the mkl package in the base environment to prevent this issue later on.

conda update mkl

Let’s create a virtual Conda environment called “pytorch”:

conda create -n pytorch python=3

You may of course use a different environment name; just be sure to adjust accordingly for the rest of this guide.

After it prepares the environment and installs the default packages, activate the virtual environment via:

conda activate pytorch
# to deactivate: conda deactivate

Now let’s install the necessary dependencies in our current PyTorch environment:

# Install basic dependencies
conda install cffi cmake future gflags glog hypothesis lmdb mkl mkl-include numpy opencv protobuf pyyaml=3.12 setuptools scipy six snappy typing -y

# Install LAPACK support for the GPU
conda install -c pytorch magma-cuda90 -y
  • We specified pyyaml=3.12 because newer versions are incompatible with Detectron, should you use it with Caffe2. See this issue
  • For LAPACK support, install magma-cudaxx, where xx reflects your CUDA version, e.g. 91 corresponds to CUDA 9.1
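The mapping from CUDA version to package suffix can be derived mechanically. A small sketch (the "9.0" here is a placeholder; substitute the version that nvcc --version reports on your system):

```shell
# Map a CUDA version string to the corresponding magma package name.
# "9.0" is a placeholder; use the version nvcc reports on your system.
cuda_version="9.0"
# Strip the dot (bash substitution): 9.0 -> 90
magma_pkg="magma-cuda${cuda_version//./}"
echo "$magma_pkg"
#=> magma-cuda90
```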

Let’s clone PyTorch’s repo and its submodules into our home directory.

cd ~
git clone --recursive git@github.com:pytorch/pytorch.git
cd pytorch
git submodule update --init --recursive

Before we begin manually compiling the binaries, we need to first assign some environment variables.

Firstly, for our non-standard installation of cuDNN, we need to tell PyTorch where to look for the CUDA and cuDNN shared libraries (such as libcudart) via the environment variable $LD_LIBRARY_PATH. If you have followed my cuDNN Guide you would have assigned this to be:

export LD_LIBRARY_PATH=$CUDA_HOME/lib64

Next we need to tell CMake to look for packages in our Conda environment before looking in system install locations:

export CMAKE_PREFIX_PATH=$CONDA_PREFIX
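Putting the variables together, a sketch of a typical setup (the CUDA_HOME path is an assumption for a default CUDA 9.0 install; adjust it to your system — and note that appending to an existing $LD_LIBRARY_PATH is safer than overwriting it if it already holds other entries):

```shell
# Assumed install location for CUDA 9.0; adjust for your system
export CUDA_HOME=/usr/local/cuda-9.0
# Append rather than clobber any existing LD_LIBRARY_PATH entries
export LD_LIBRARY_PATH=$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export CMAKE_PREFIX_PATH=$CONDA_PREFIX
```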

We are now ready to install PyTorch via the very convenient installer in the repo:

CUDNN_LIB_DIR=$CUDA_HOME/lib64/ \
CUDNN_INCLUDE=$CUDA_HOME/include/ \
MAX_JOBS=25 \
python setup.py install
  • To determine the value for MAX_JOBS, use one more than the output of grep -c processor /proc/cpuinfo
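That bullet can be computed directly in the shell (Linux-specific, since it reads /proc/cpuinfo):

```shell
# MAX_JOBS = one more than the number of logical processors (Linux-specific)
num_procs=$(grep -c ^processor /proc/cpuinfo)
MAX_JOBS=$((num_procs + 1))
echo "$MAX_JOBS"
```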

You’d think we’re done, but not quite! We have to point the $PYTHONPATH environment variable to our build folder like so:

export PYTHONPATH=$HOME/pytorch/build:$PYTHONPATH

However, it would be tedious to type that every time we activate our environment. You may append that line to .bash_profile or .bashrc, but some variables such as $PYTHONPATH are potentially used in many environments, and this can lead to Python import errors when the paths contain different modules sharing the same name. For instance, both caffe and caffe2 contain a module named ‘caffe’.

The solution is to write scripts that set our environment variables within the environment itself, so that they get loaded automatically every time we activate the environment and unset automatically when we deactivate it. The following steps are an adaptation of this guide from the official Conda documentation.

Let’s enter our environment directory and do the following:

cd $CONDA_PREFIX
mkdir -p ./etc/conda/activate.d
mkdir -p ./etc/conda/deactivate.d
touch ./etc/conda/activate.d/env_vars.sh
touch ./etc/conda/deactivate.d/env_vars.sh

Edit ./etc/conda/activate.d/env_vars.sh as follows:

#!/bin/sh

export PYTHONPATH=$HOME/pytorch/build:$PYTHONPATH

Edit ./etc/conda/deactivate.d/env_vars.sh as follows:

#!/bin/sh

unset PYTHONPATH
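If you prefer, the steps above can be collapsed into a single script. A sketch (ENV_DIR stands in for $CONDA_PREFIX so it can be dry-run anywhere; with the environment active, the fallback path is never used):

```shell
# ENV_DIR stands in for $CONDA_PREFIX; the /tmp fallback is only for dry runs
ENV_DIR=${CONDA_PREFIX:-/tmp/conda-env-demo}
mkdir -p "$ENV_DIR/etc/conda/activate.d" "$ENV_DIR/etc/conda/deactivate.d"
# Activation hook: prepend the PyTorch build folder to PYTHONPATH
printf '%s\n' '#!/bin/sh' 'export PYTHONPATH=$HOME/pytorch/build:$PYTHONPATH' \
  > "$ENV_DIR/etc/conda/activate.d/env_vars.sh"
# Deactivation hook: drop PYTHONPATH again
printf '%s\n' '#!/bin/sh' 'unset PYTHONPATH' \
  > "$ENV_DIR/etc/conda/deactivate.d/env_vars.sh"
```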

Now let’s reload the current environment to pick up the new variables:

conda activate pytorch

We are now ready to test whether PyTorch has been installed correctly with CUDA.

To check if PyTorch was installed successfully:

# Basic test:
cd ~ && python -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"
#=> Success

# For a comprehensive test:
cd $HOME/pytorch
python test/run_test.py

To check if GPU build was successful:

# Check number of GPUs visible to PyTorch:
python -c 'import torch; print(torch.cuda.device_count())'
#=> 2

# See initial output from the following to ensure GPU is used:
cd $HOME/pytorch
python caffe2/python/operator_test/activation_ops_test.py

Torchvision

If you are also installing torchvision:

cd $HOME/pytorch
git clone https://github.com/pytorch/vision
cd vision
python setup.py install