27

Is there a version of TensorFlow for 32-bit Linux? I only see the 64-bit wheel available, and didn't find anything about it on the site.

Jonathon Byrd
  • 525
  • 1
  • 5
  • 9
  • 1
    This question is not really about programming is it? – Bonatti Nov 10 '15 at 16:59
  • 1
    @Bonatti read http://stackoverflow.com/help/on-topic – Franck Dernoncourt Dec 06 '15 at 00:12
  • @FranckDernoncourt Yes, and Topic 4 of `Some questions are still off-topic, even if they fit into one of the categories listed above:` states this: `Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it` – Bonatti Dec 07 '15 at 09:50
  • 4
    @Bonatti I don't see the question as a recommendation request. – Franck Dernoncourt Dec 07 '15 at 14:58
  • @FranckDernoncourt As far as TensorFlow goes `TensorFlow is an open source software library for machine learning in various kinds of perceptual and language understanding tasks`.... this means that this is a tool, and the question seem like its asking for an alteration or a different version of that tool.... this is an off site resource, that is not directly related to programming or what the OP has tried to resolve. – Bonatti Dec 07 '15 at 16:57
  • @Bonatti It wasn't a recommendation request. I misread a comment in a Reddit thread, and thought someone said they pip-installed a 32-bit version. I asked, because I couldn't find one, and didn't want to deal with bazel. I actually ended up just installing 64-bit Ubuntu (for a different reason). I guess I didn't know what I was doing when I installed in the first place a long time ago! – Jonathon Byrd Dec 30 '15 at 20:05

4 Answers

26

We have only tested the TensorFlow distribution on 64-bit Linux and Mac OS X, and distribute binary packages for those platforms only. Try following the source installation instructions to build a version for your platform.

EDIT: One user has published instructions for running TensorFlow on a 32-bit ARM processor, which is promising for other 32-bit architectures. These instructions may have useful pointers for getting TensorFlow and Bazel to work in a 32-bit environment.

mrry
  • 125,488
  • 26
  • 399
  • 400
13

I've built a CPU-only version of TensorFlow on 32-bit Ubuntu (Xubuntu 16.04.1). It went a lot more smoothly than anticipated, for such a complex library that doesn't officially support 32-bit architectures.

It can be done by following a subset of the intersection of these two guides:

If I haven't forgotten anything, here are the steps I've taken:

  1. Install Oracle Java 8 JDK:

    $ sudo apt-get remove icedtea-8-plugin  #This is just in case
    $ sudo add-apt-repository ppa:webupd8team/java
    $ sudo apt-get update
    $ sudo apt-get install oracle-java8-installer
    

(This is all you need on a pristine Xubuntu install; otherwise, google the above keywords to read about selecting a default JRE and javac.)

  2. Dependencies:

    sudo apt-get update
    sudo apt-get install git zip unzip swig python-numpy python-dev python-pip python-wheel
    pip install --upgrade pip
    
  3. Following the instructions that come with Bazel, download a Bazel source zip (I got bazel-0.4.3-dist.zip), make a directory like ~/tf/bazel/ and unzip it there.

  4. I was getting an OutOfMemoryError during the following build, but this fix took care of it (i.e. adding -J-Xmx512m to the bootstrap build).

  5. Call bash ./compile.sh, and wait for a long time (overnight for me, but see the remarks at the end).

  6. $ git clone -b r0.12 https://github.com/tensorflow/tensorflow

  7. This seems to be the only change to the source code that was necessary:

    $ cd tensorflow
    $ grep -Rl "lib64"| xargs sed -i 's/lib64/lib/g'
    
  8. Then $ ./configure and say no to everything (accept defaults where relevant).

  9. The following took quite a few hours with my setup:

    $ bazel build -c opt --jobs 1 --local_resources 1024,0.5,1.0 --verbose_failures //tensorflow/tools/pip_package:build_pip_package
    $ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    $ pip install --user /tmp/tensorflow_pkg/ten<Press TAB here>
    
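For reference, the OutOfMemoryError workaround in the Bazel bootstrap step can be applied as an environment variable before running compile.sh. This is just a sketch: the 512 MB heap is the figure from the fix mentioned above, and another answer on this page uses -J-Xmx1g on a 2 GB machine, so tune it to your RAM.

```shell
# BAZEL_JAVAC_OPTS is the variable Bazel's compile.sh reads for the
# bootstrap javac invocation; a too-small default heap causes the
# OutOfMemoryError during the bootstrap build.
export BAZEL_JAVAC_OPTS="-J-Xmx512m"
# then, from the unzipped Bazel source directory:
#   bash ./compile.sh
```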

To check that it's installed, see if it works on the TensorFlow Beginners tutorial. I use jupyter qtconsole (the successor to the IPython Qt console). Run the code in mnist_softmax.py. It should take little time even on very limited machines.

For some reason, TensorFlow's guide to building from source doesn't suggest running the unit tests:

$ bazel test //tensorflow/...

(Yes, type in the ellipses.)

Though I couldn't run them — it spent 19 hours trying to link libtensorflow_cc.so, and then something killed the linker. This was with half a core and 1536 MB memory limit. Maybe someone else, with a larger machine, can report on how the unit tests go.

Why didn't we need to do the other things mentioned in those two walkthroughs? Firstly, most of that work is about taking care of GPU interfacing. Secondly, both Bazel and TensorFlow have become more self-contained since the first of those walkthroughs was written.

Note that the above settings provided to Bazel for the build are very conservative (1024 MB RAM, half a core, one job at a time), because I'm running this through VirtualBox using a single core of a $200 netbook of the type that Intel makes for disadvantaged kids in Venezuela, Pakistan and Nigeria. (By the way, if you do this, make sure the virtual HDD is 20 GB at the very least: trying to build the unit tests above took about 5 GB of space.) The build of the wheel took almost 20 hours, and the modest deep CNN from the second tutorial, which is quoted to take up to half an hour to run on modern desktop CPUs, takes about 80 hours under this setup.

One might wonder why I don't get a desktop, but the truth is that actual training with TensorFlow only makes sense on a high-end GPU (or a bunch thereof), and when we can hire an AWS spot instance with such a GPU for about 10 cents an hour without commitment and on a workable ad-hoc basis, it doesn't make a lot of sense to be training elsewhere. The 480000% speed-up is really noticeable. On the other hand, the convenience of having a local installation is well worth going through a process such as above.

Community
  • 1
  • 1
Evgeni Sergeev
  • 22,495
  • 17
  • 107
  • 124
  • 1
    I used the above steps to install tensorflow in my system, Ubuntu 16.04 LTS, 3.8 GB Memory 32 bit, Intel core i7. It was awesome... I was able to get it done in 4-5 hours. The MNIST tutorial files executed at good speed. The mentioned errors did appear but it did not come in the way. Thumbs Up... – pritywiz May 02 '17 at 11:41
  • Did anybody try it for Tenorflow 2.x? – soham Nov 30 '20 at 02:23
1

It appears that Google does not yet support tensorflow on 32-bit machines.

On a 32-bit machine running CentOS 6.5, the following error is raised by the "import tensorflow as tf" command: ImportError: tensorflow/python/_pywrap_tensorflow.so: wrong ELF class: ELFCLASS64

Until Google distributes a 32-bit version of tensorflow, I also recommend building tensorflow from source as specified here.
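As a quick way to confirm this kind of mismatch, the EI_CLASS byte of the ELF header tells you whether a binary was built for 32 or 64 bits. The helper below is just an illustration (it is not part of tensorflow; the standard `file` command reports the same information):

```python
# Hypothetical helper: report the ELF class of a binary, e.g. a .so file.
# Byte 4 (EI_CLASS) of the ELF header is 1 for 32-bit and 2 for 64-bit.
def elf_class(path):
    with open(path, "rb") as f:
        header = f.read(5)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file: %s" % path)
    return {1: "ELFCLASS32", 2: "ELFCLASS64"}.get(header[4], "unknown")
```

Running it against _pywrap_tensorflow.so on the failing machine should report ELFCLASS64, confirming that the installed wheel is a 64-bit build.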

dlk5730
  • 23
  • 4
1

I have used the information from the answers to this question to put together a detailed set of instructions to compile and install tensorflow on a 32-bit Linux system.

The latest version of the instructions is available on GitHub at: tensorflow-32-bits-linux

Instructions to install Tensorflow on a 32-bit Linux system

I used the following steps to install tensorflow on an old Asus Eee-Pc 1000H. Granted, it has been upgraded from the original 1 GB of RAM and 80 GB HDD to 2 GB of RAM and 480 GB of SSD storage.

I tested these instructions with the following OS versions, and they worked without problems:

* Xubuntu 16.04.6 Xenial Xerus 32 bits.
* Xubuntu 18.04.3 Bionic Beaver 32 bits.
* Debian 9.11 Stretch 32 bits.

Choose a convenient linux system

I have tested both the Ubuntu 16.04 (Xenial) and Debian 9.11 (Stretch) systems with 2 GB of RAM.

I set up the system to have 4 GB of SWAP space. With only 1 GB of SWAP, some compilations failed.
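For reference, a 4 GB swap file can be set up along these lines (the path and size below are just an example; skip this if your installer already created enough swap):

```shell
# Create and enable a 4 GB swap file (requires root).
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# To make it survive a reboot, add this line to /etc/fstab:
#   /swapfile none swap sw 0 0
```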

It's critical that the distribution has the version 8 of the Java SDK: openjdk-8-jdk

Install the Java 8 SDK and build tools

sudo apt-get update
sudo apt-get install openjdk-8-jdk
sudo apt-get install git zip unzip autoconf automake libtool curl zlib1g-dev swig build-essential

Install Python libraries

Next, we install the Python 3 development libraries and the keras module, which will be required by tensorflow.

sudo apt-get install python3-dev python3-pip python3-wheel
sudo python3 -m pip install --upgrade pip
python3 -m pip install --user keras

You can use either Python 3 or Python 2 and compile tensorflow for that version.

Install and compile Bazel from sources

We need the Bazel 0.19.2 source distribution. We can obtain it and unpack it in a new folder.

cd $HOME
wget https://github.com/bazelbuild/bazel/releases/download/0.19.2/bazel-0.19.2-dist.zip
mkdir Bazel-0.19.2
cd Bazel-0.19.2
unzip ../bazel-0.19.2-dist.zip

Before compiling, we need to remove line 30 of the ./src/tools/singlejar/mapped_file_posix.inc file (#error This code for 64 bit Unix.), which throws an error if we are not on a 64-bit machine. This Bazel version works fine on 32 bits.

vi  ./src/tools/singlejar/mapped_file_posix.inc
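If you prefer a non-interactive edit, the same change can be made by matching the guard's text instead of its line number (which may shift between Bazel releases). This is just an alternative to the vi edit above:

```shell
# Delete the 64-bit-only #error guard line from the file, if present.
FILE=./src/tools/singlejar/mapped_file_posix.inc
if [ -f "$FILE" ]; then
  sed -i '/#error This code for 64 bit Unix/d' "$FILE"
fi
```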

We also need to increase the Java memory available to Bazel, and then start compiling:

export BAZEL_JAVAC_OPTS="-J-Xmx1g"
./compile.sh

When it finishes (it can take several hours), we move the compiled bazel executable to a location in the current user's path:

sudo cp output/bazel /usr/local/bin

Compile Tensorflow from sources

Create a folder and clone tensorflow's 1.13.2 version into it. Starting from version 1.14, tensorflow uses the Intel MKL-DNN optimization library, which only works on 64-bit systems. So 1.13.2 is the last version that runs on 32 bits.

cd $HOME
mkdir Tensorflow-1.13.2
cd Tensorflow-1.13.2
git clone -b v1.13.2 --depth=1 https://github.com/tensorflow/tensorflow .

Before compiling, we replace the references to 64-bit libraries with the 32-bit ones.

grep -Rl "lib64"| xargs sed -i 's/lib64/lib/g'

We start the tensorflow configuration. We need to explicitly disable the use of several optional libraries that are not available or not supported on 32-bit systems.

export TF_NEED_CUDA=0
export TF_NEED_AWS=0
./configure

We have to take the following into account:

* When asked to specify the location of python [Default is /usr/bin/python]: respond /usr/bin/python3 to use Python 3.
* When asked to input the desired Python library path to use [Default is /usr/local/lib/python3.5/dist-packages]: just hit Enter.
* Respond N to all the Y/N questions.
* When asked to specify optimization flags to use during compilation when the bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]: just hit Enter.
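Most of these prompts can also be pre-answered through environment variables, since ./configure reads them before asking. Treat the variable names below as assumptions based on tensorflow 1.13's configure script, and verify them against the prompts you actually see:

```shell
# Pre-answer the ./configure prompts so the run is mostly non-interactive;
# configure skips any question whose variable is already set.
export PYTHON_BIN_PATH=/usr/bin/python3
export TF_NEED_CUDA=0
export TF_NEED_AWS=0
export TF_ENABLE_XLA=0
export TF_NEED_OPENCL_SYCL=0
export TF_NEED_ROCM=0
export TF_SET_ANDROID_WORKSPACE=0
export CC_OPT_FLAGS="-march=native -Wno-sign-compare"
# then run:
#   ./configure
```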

Now we start compiling tensorflow disabling optional components like aws, kafka, etc.

bazel build --config=noaws --config=nohdfs --config=nokafka --config=noignite --config=nonccl -c opt --verbose_failures //tensorflow/tools/pip_package:build_pip_package

If everything went ok, now we generate the pip package.

bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

And we install the pip package

python3 -m pip install --user /tmp/tensorflow_pkg/tensorflow-1.13.2-cp35-cp35m-linux_i686.whl

Test tensorflow

Now we run a small test to check that it works. We create a test.py file with the following contents:

import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation=tf.nn.relu),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

And we run the test

python3 test.py

Here is the output

Epoch 1/5
60000/60000 [==============================] - 87s 1ms/sample - loss: 0.2202 - acc: 0.9348
Epoch 2/5
60000/60000 [==============================] - 131s 2ms/sample - loss: 0.0963 - acc: 0.9703
Epoch 3/5
60000/60000 [==============================] - 135s 2ms/sample - loss: 0.0685 - acc: 0.9785
Epoch 4/5
60000/60000 [==============================] - 128s 2ms/sample - loss: 0.0526 - acc: 0.9828
Epoch 5/5
60000/60000 [==============================] - 128s 2ms/sample - loss: 0.0436 - acc: 0.9863
10000/10000 [==============================] - 3s 273us/sample - loss: 0.0666 - acc: 0.9800

Enjoy your new Tensorflow library!

Javier
  • 41
  • 4