Installing Tensorflow for Rust on Ubuntu with NVIDIA GPU

Has anyone successfully installed and used the TensorFlow 0.21.0 (or later) Rust crate on an Ubuntu (or Debian) OS to support an NVIDIA GPU (I have an RTX 3070 running the NVIDIA 555 driver and CUDA 12.5)? If so, would you be able to provide complete step-by-step instructions for how you did this?

I've followed the NVIDIA installation instructions (which are pretty much tailored to Python users), and I seem to have installed TensorFlow okay; I get the appropriate test results from the small Python program they use to verify the install.

However, when I try to run a compiled Rust program (using examples from the GitHub repository attached to the crate), it doesn't seem to be able to find a file called libtensorflow.so.2. Does anyone have any idea how this all should work? Or should I just forget about it and use the arrayfire crate (which does seem to be working)?

Thanks for any and all help.

I will answer my own question because I suspect that others may also be trying to do this, and (with the help of ChatGPT 4.0) I finally got it to work. So here's how I did it:

I started with this question to ChatGPT: Can you give me step-by-step instructions to download TensorFlow on Ubuntu so that I can connect it to a Rust program and use my NVIDIA RTX 3070 GPU? I already have a working NVIDIA 555 driver installed for the GPU, so I don't need that part.

Here's the (quite long) answer it gave me (if you send me a message - email: wayne.materi@gmail.com - I'll be happy to share the chat with you):
Sure, I can guide you through the process of installing TensorFlow on Ubuntu and connecting it to a Rust program using your NVIDIA GPU. Here are the steps:
Step 1: Install CUDA and cuDNN

Download CUDA Toolkit:
    Visit the CUDA Toolkit Archive and download the appropriate version for your system.

Install CUDA Toolkit:
    Follow the installation instructions specific to the version you downloaded. This usually involves running a set of commands in the terminal.

sudo dpkg -i cuda-repo-__amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

Download and Install cuDNN:

Visit the cuDNN Archive and download the version compatible with your CUDA installation.
Extract and copy the files to the appropriate directories.

(in a Konsole terminal, type)
tar -xzvf cudnn--linux-x64-v.tgz
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*

Set Environment Variables:
Add the following lines to your ~/.bashrc or ~/.zshrc file to update your PATH and LD_LIBRARY_PATH.

(in a Konsole terminal, type; note the version number is missing from the cuda- paths below, so substitute your own, e.g. cuda-12.5)
export PATH=/usr/local/cuda-/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

Then, source the file to apply the changes or open a new Konsole terminal:

(In an open Konsole terminal, type)
source ~/.bashrc

Step 2: Install TensorFlow

Install Python and Pip:
    Make sure you have Python and Pip installed. You can install them using the following commands:

(in a Konsole terminal, type)
sudo apt-get update
sudo apt-get install python3 python3-pip

Install TensorFlow with GPU Support:

You can install TensorFlow using pip with GPU support.

(in a Konsole terminal, type; SEE THE NOTE BELOW ON DEALING WITH the externally-managed-environment error if that happens to you)

pip3 install tensorflow

Step 3: Verify TensorFlow Installation

Verify TensorFlow Installation:
    You can verify the installation by running a simple TensorFlow script.

python3

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

exit()

Install TensorFlow Rust Bindings:

You need to add the TensorFlow Rust bindings to your project. Add the following dependencies to your Cargo.toml file:

in Cargo.toml

[dependencies]
tensorflow = { version = "0.21.0", features = ["tensorflow_gpu"] }  # the "tensorflow_gpu" feature enables GPU support

Link TensorFlow C Library:
    Ensure that TensorFlow's C library is available. You may need to download the TensorFlow C library and set the LD_LIBRARY_PATH to include the path to the library.

How I did this (on ChatGPT's direction):
Step 1: Download the TensorFlow C Library

Download the Latest TensorFlow C Library from https://www.tensorflow.org/install/lang_c
Download the appropriate version for your system. For example, for Linux:

wget https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-cpu-linux-x86_64-2.15.0.tar.gz

Step 2: Extract the Library

Extract the Downloaded Archive:

sudo tar -C /usr/local -xzf libtensorflow-cpu-linux-x86_64-2.15.0.tar.gz

Verify the Extraction:
Ensure the files are extracted to /usr/local/lib and /usr/local/include:

ls /usr/local/lib - should show several "libtensorflow*" files
ls /usr/local/include - should have a tensorflow and a tsl subdirectory

Step 3: Set the LD_LIBRARY_PATH

Update LD_LIBRARY_PATH:
Add the TensorFlow library path to the LD_LIBRARY_PATH environment variable. You can do this by adding the following lines to your ~/.bashrc file:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

Reload the Shell Configuration:

source ~/.bashrc


Step 4: Write a Simple Rust Program to Use TensorFlow

Various programs suggested by ChatGPT didn't compile, so I copied one from a Medium article called "Introduction to TensorFlow for Rust: Building and Executing a graph" by Alfred Weirich. I'll try to post a simpler one (if possible) once I've learned a bit more about TensorFlow in Rust.

This should guide you through setting up TensorFlow on Ubuntu and using it with Rust while leveraging your NVIDIA GPU.

NOTE ON DEALING WITH the externally-managed-environment error

When I typed in "pip3 install tensorflow" as in Step 2: Install TensorFlow, substep "Install TensorFlow with GPU Support", I got the following error: error: externally-managed-environment

× This environment is externally managed
ChatGPT correctly instructed me to fix this as follows:

The externally-managed-environment error typically occurs in environments where the package manager (like pip) is being managed by the system package manager (like apt). This is a safety feature to prevent conflicts between system-installed packages and those installed via pip.

To resolve this, you can use a virtual environment to create an isolated Python environment where you can install packages independently of the system's Python installation. Here’s how you can do it:

Step 1: Install Virtual Environment Tools

If you haven't already got the venv module, you can install it using apt:

(in a Konsole terminal, type)
sudo apt update
sudo apt install python3-venv

Step 2: Create a Virtual Environment

Create a new virtual environment in a directory of your choice. Here, we'll create one in a directory named myenv.

(in a Konsole terminal, type)
python3 -m venv myenv

Step 3: Activate the Virtual Environment

Activate the virtual environment:

(in a Konsole terminal, type)
source myenv/bin/activate

Step 4: Install TensorFlow in the Virtual Environment

With the virtual environment activated, you can now install TensorFlow:

(in a Konsole terminal, type)
pip install tensorflow

Step 5: Verify the Installation

You can verify the installation by running a simple TensorFlow script:

(in a Konsole terminal, type)
python -c "import tensorflow as tf; print(tf.__version__)"

Step 6: Deactivate the Virtual Environment

When you're done working in the virtual environment, you can deactivate it:

(in a Konsole terminal, type)
deactivate

I hope that made sense and helps. Sorry that I haven't tried this under Windows or macOS; hopefully the same prompt in ChatGPT for either of those OSes would make sense and/or be easier.

If you don't mind, please mark that reply as the solution (the checkbox at the bottom of the reply) so that the topic will show as answered in the title, and others will see the solution prominently.

Done. Thanks for telling me about that.

