My minimal TensorFlow / CUDA environment setup

As I said yesterday, I’ve been documenting my setups more carefully. Here’s what I’ve got for a minimal TensorFlow environment that works both for scripts and for Jupyter notebooks. (Your definition of what is “minimal” may vary widely from mine.)

Basic System:

  1. Ubuntu Linux 16.04
  2. CUDA-capable GPU
  3. NVIDIA driver already installed. (Handled by Ubuntu; current version 381.22.)
  4. Anaconda/Python already installed.


  1. Install CUDA. (Further details and troubleshooting are in NVIDIA’s CUDA installation guide.)
    Note that this is only necessary once per computer. If it’s already done and you just need to create a new environment, go straight to step 2.

    • Install kernel headers:
      sudo apt-get install linux-headers-$(uname -r)
    • Download CUDA toolkit: Select “deb (local)” filetype.
    • Run:
      sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
      from the directory containing the downloaded file (the file name will change over time with new CUDA and Ubuntu versions)
    • sudo apt-get update
    • sudo apt-get install cuda
  2. Create environment
    • Use Python 3.4 or 3.5, because 3.6 is still unsupported:
      conda create -n <envname> python=3.5
    • Switch to the new environment:
      source activate <envname>
    • Install TensorFlow using pip. Remember to use GPU-capable version:
      pip install tensorflow-gpu
    • Install other common packages (you might come up with your own list):
      conda install numpy pandas jupyter notebook
    • You need the notebook extensions to be able to select the environment within Jupyter:
      conda install anaconda-nb-extensions -c anaconda-nb-extensions
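Put together, step 2 looks like this as one copy-paste run (“tf-gpu” is just an example name standing in for <envname>):

```shell
# Create and populate the environment; "tf-gpu" is an example env name.
conda create -n tf-gpu python=3.5
source activate tf-gpu
pip install tensorflow-gpu                  # GPU-capable TensorFlow build
conda install numpy pandas jupyter notebook
conda install anaconda-nb-extensions -c anaconda-nb-extensions
```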
  3. Set up path variables in the environment
    These variables do not get saved unless you modify the activate.d and deactivate.d directories for the conda environment. Jupyter doesn’t care what is set up in the shell it is invoked from, as it always starts its own kernel. It is important, then, to find the directory where the conda environments live. For me it is ~/Apps/anaconda3/envs. (I believe the “standard” is ~/anaconda2 or ~/anaconda3.)
    Basically, you need to find <anaconda installation directory>/envs.

    • Find the directory for <envname>
    • cd <anaconda installation directory>/envs/<envname>/etc
    • cd conda; if it’s not there, create it.
    • Create activate.d and deactivate.d directories if they don’t exist.
    • Each must contain a bash script. Create one if it doesn’t exist.
    • activate.d/ should contain:
      export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
      export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
      (The LD_LIBRARY_PATH line is the standard CUDA post-install step, and it is what deactivate.d unsets below.) The file may contain other variables if it existed before, but for a clean environment this is unlikely.
    • deactivate.d/ should contain:
      unset PATH
      unset LD_LIBRARY_PATH
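The whole of step 3 can be scripted roughly like this. A sketch, not the post’s exact setup: ENV_PREFIX and the file name env_vars.sh are my assumptions, so adjust the path to match your own <anaconda installation directory>/envs/<envname>:

```shell
# Assumed env location; adjust to <anaconda installation directory>/envs/<envname>.
ENV_PREFIX="${ENV_PREFIX:-$HOME/anaconda3/envs/tf-gpu}"

mkdir -p "$ENV_PREFIX/etc/conda/activate.d" "$ENV_PREFIX/etc/conda/deactivate.d"

# On activation: put CUDA 8.0 on PATH, and its libraries on LD_LIBRARY_PATH
# (which the deactivate script below unsets).
cat > "$ENV_PREFIX/etc/conda/activate.d/env_vars.sh" <<'EOF'
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
EOF

# On deactivation: drop the variables again, as step 3 describes.
cat > "$ENV_PREFIX/etc/conda/deactivate.d/env_vars.sh" <<'EOF'
unset PATH
unset LD_LIBRARY_PATH
EOF
```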
  4. Test the environment with a TensorFlow script known to use the GPU.
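A minimal smoke test for step 4 could look like this; a sketch assuming the TensorFlow 1.x API that tensorflow-gpu installed at the time (it needs the GPU-capable install from step 2 to be meaningful):

```python
# Minimal check that TensorFlow can see and use the GPU.
# Assumes the tensorflow-gpu package from step 2 (TF 1.x API).
import tensorflow as tf

# log_device_placement=True makes TensorFlow print the device
# (e.g. "/gpu:0") that each operation is assigned to.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0, 3.0], name='a')
    b = tf.constant([4.0, 5.0, 6.0], name='b')
    print(sess.run(a + b))
```

If the device log shows only "/cpu:0", the GPU build or the CUDA paths from step 3 are not being picked up.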
  5. Test with a Jupyter notebook. I’m using one of the TFLearn exercises I discussed yesterday, which I’ve set up to take advantage of CUDA/GPU. It required the additional installation of tflearn in my environment.
    • Open notebook
    • Select the kernel you want to work with, in this case the one associated with <envname>.
      If no explicit selection is made, it will use the default Anaconda kernel, which in my case is 3.6 without TensorFlow. That won’t work.
    • Run the notebook and monitor GPU use (e.g. with nvidia-smi) to make sure the GPU is actually being used.
  6. Install any other necessary packages in the environment
  7. Keep notes somewhere telling you what you did for the specific environment. Someday you may need to re-create it!

That should do it.

I created a “core” TensorFlow env. I can clone that in the future and make variations with different packages or changes as required for the specific purpose.
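The cloning workflow looks like this (the env names here are examples, not from my actual setup):

```shell
# Clone the "core" env into a new variant, then customize it.
conda create --name tf-variant --clone tf-core
source activate tf-variant
pip install tflearn        # e.g. the extra package from step 5
```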