I'm looking for a way to distribute the following code to a Windows 10 machine:

""" Neural Network with Eager API.

A 2-Hidden Layers Fully Connected Neural Network (a.k.a Multilayer Perceptron)
implementation with TensorFlow's Eager API. This example is using the MNIST database
of handwritten digits (http://yann.lecun.com/exdb/mnist/).

This example is using TensorFlow layers, see 'neural_network_raw' example for
a raw implementation with variables.

Links:
    [MNIST Dataset](http://yann.lecun.com/exdb/mnist/).

Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
"""
from __future__ import print_function

import tensorflow as tf
import tensorflow.contrib.eager as tfe
from datetime import datetime

from traits.api import HasTraits, CInt, CFloat
from traitsui.api import View, Item, Handler, Action

import matplotlib.pyplot as plt
import os
print('1') #Added for debugging along with the input, doesn't work.
input()

# Set Eager API
tfe.enable_eager_execution()
print('2')
input()
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=False)

# Parameters
learning_rate = 0.001
num_steps = 5
batch_size = 128
display_step = 10

# OODA tries
eps = 0.002
lower_bound = 0.005
upper_bound = 100
ooda_step = 5
epochs = 50

# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)

# Using TF Dataset to split data into batches
dataset = tf.data.Dataset.from_tensor_slices(
    (mnist.train.images, mnist.train.labels)).batch(batch_size)
dataset_iter = tfe.Iterator(dataset)

# Define the neural network. To use eager API and tf.layers API together,
# we must instantiate a tfe.Network class as follows:
class NeuralNet(tfe.Network):

    # Adam optimizer
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
    # Compute gradients
    #grad = 0# = tfe.implicit_gradients(NeuralNet.loss_fn)
    def __init__(self):
        # Define each layer
        super(NeuralNet, self).__init__()
        # Hidden fully connected layer with 256 neurons
        self.layer1 = self.track_layer(
            tf.layers.Dense(n_hidden_1, activation=tf.nn.relu))
        # Hidden fully connected layer with 256 neurons
        self.layer2 = self.track_layer(
            tf.layers.Dense(n_hidden_2, activation=tf.nn.relu))
        # Output fully connected layer with a neuron for each class
        self.out_layer = self.track_layer(tf.layers.Dense(num_classes))
        NeuralNet.grad = tfe.implicit_gradients(NeuralNet.loss_fn)

    def call(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        return self.out_layer(x)

    # Cross-Entropy loss function
    @staticmethod
    def loss_fn(inference_fn, inputs, labels):
        # Using sparse_softmax cross entropy
        return tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=inference_fn(inputs), labels=labels))


    # Calculate accuracy
    @staticmethod    
    def accuracy_fn(inference_fn, inputs, labels):
        prediction = tf.nn.softmax(inference_fn(inputs))
        correct_pred = tf.equal(tf.argmax(prediction, 1), labels)
        return tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    @staticmethod
    def trainNN():
        testId = 0
        #files
        if os.path.exists('tests.txt'):
            #get id from file
            file = open('tests.txt', 'r')
            lines = file.read().splitlines()
            if lines != []:
                testId = int(lines[-1].split()[0]) + 1
            file.close()

        logs = open('logs.txt', 'a+')
        graphslog = open('graphs.txt', 'a+')
        testslog = open('tests.txt', 'a+')

        # Training
        lossevol = []
        accevol = []
        dataset_iter = tfe.Iterator(dataset)
        neural_net = NeuralNet()
        average_loss = 0.
        average_acc = 0.
        start = datetime.now()
        for epoch in range(epochs):
            for step in range(num_steps):

                # Iterate through the dataset
                try:
                    d = dataset_iter.next()
                except StopIteration:
                    # Refill queue
                    dataset_iter = tfe.Iterator(dataset)
                    d = dataset_iter.next()

                # Images
                x_batch = d[0]
                # Labels
                y_batch = tf.cast(d[1], dtype=tf.int64)

                # Compute the batch loss
                batch_loss = NeuralNet.loss_fn(neural_net, x_batch, y_batch)
                average_loss += batch_loss
                # Compute the batch accuracy
                batch_accuracy = NeuralNet.accuracy_fn(neural_net, x_batch, y_batch)
                average_acc += batch_accuracy

                if step == 0:
                    # Display the initial cost, before optimizing
                    print("Initial loss= {:.9f}".format(average_loss))
                    logs.write("Initial loss= {:.9f}\n".format(average_loss))

                # Update the variables following gradients info
                if epoch < ooda_step or (batch_loss > lower_bound and batch_loss < upper_bound and epoch > ooda_step):
                    NeuralNet.optimizer.apply_gradients(NeuralNet.grad(neural_net, x_batch, y_batch))

                # Display info
                if (step + 1) % display_step == 0 or step == 0:
                    if step > 0:
                        average_loss /= display_step
                        average_acc /= display_step
                    lossevol.append(average_loss)
                    accevol.append(batch_accuracy)
                    print("Epoch:", 'd' % (epoch + 1),
                          "Step:", 'd' % (step + 1), " loss=",
                          "{:.9f}".format(average_loss), " accuracy=",
                          "{:.4f}".format(average_acc))
                    logs.write('Epoch: d ' % (epoch + 1) +
                       'Step: d ' % (step + 1) + 
                       'loss={:.9f} '.format(average_loss) +
                       'accuracy={:.4f}\n'.format(average_acc))
                    average_loss = 0.
                    average_acc = 0.

        end = datetime.now()
        print("Training took: ", (end - start), logs)
        trainingTime = end-start
        # Evaluate model on the test image set
        testX = mnist.test.images
        testY = mnist.test.labels


        test_acc = NeuralNet.accuracy_fn(neural_net, testX, testY)
        print("Testset Accuracy: {:.4f}".format(test_acc)) # 9694 9753 9763 9783 9762
        logs.write("Testset Accuracy: {:.4f}\n".format(test_acc))

        aux = range(len(lossevol))
        plt.figure()
        plt.xlabel("Display steps")
        plt.ylabel("Loss")
        plt.plot(aux, lossevol, color = "red")
        plt.show()
        plt.figure()
        plt.xlabel("Display steps")
        plt.ylabel("Batch Accuaracy")
        plt.plot(aux, accevol , color = "blue")
        plt.show()
        logs.write("-----------------------------------------\n")

        print("%d " % testId + "{:.4f} ".format(test_acc) + "\n")        
        graphslog.write("%d " % testId + "{:.4f} ".format(test_acc) + str(trainingTime) + "\n")
        testslog.write("%d " % testId + 
                       "Epochs: %d " % epochs +
                       "Steps number: %d " % num_steps +
                       "Batch size: %d " % batch_size +
                       "OODA step: %d " % ooda_step +
                       "Lower bound: %f " % lower_bound +
                       "Upper bound: %f " % upper_bound +
                       "Display step: %d\n" % display_step)
        logs.close()
        graphslog.close()
        testslog.close()

#GUI Configuration

class NNHandler(Handler):

    def setattr(self, info, object, name, value):
        Handler.setattr(self, info, object, name, value)
        info.object._updated = True

    def object__updated_changed(self, info):
        if info.initialized:
            info.ui.title += "*"
            #info.ui.camera.epochs = 500

    def test(self, info):
        global epochs
        global num_steps
        global batch_size
        global ooda_step
        global lower_bound
        global upper_bound
        global display_step
        epochs = info.epochs.value
        num_steps = info.num_steps.value
        batch_size = info.batch_size.value
        ooda_step = info.ooda_step.value
        lower_bound = info.lower_bound.value
        upper_bound = info.upper_bound.value
        display_step = info.display_step.value
        NeuralNet.trainNN()

    def graphs(self, info):
        if os.path.exists('graphs.txt') == False:
            return
        file = open('graphs.txt', 'r')
        tests = file.read().splitlines()

        test_num = range(len(tests))
        test_acc = []
        test_time = []

        for line in tests:
            words = line.split()
            test_acc.append(float(words[1]))
            test_time.append((datetime.strptime(words[2], '%H:%M:%S.%f') - datetime.strptime('1900 1 1', '%Y %m %d')).total_seconds())

        print(test_num)
        print(test_acc)
        print(test_time)
        plt.figure()
        plt.xlabel("Test ID")
        plt.ylabel("Time")
        plt.scatter(test_num, test_time, color = "red")
        plt.show()
        plt.figure()
        plt.xlabel("Test ID")
        plt.ylabel("Accuaracy")
        plt.scatter(test_num, test_acc , color = "blue")
        plt.show()


class Camera(HasTraits):
    epochs = CInt(50, label = "Epochs")
    num_steps = CInt(5, label = "Number of steps")
    batch_size = CInt(128, label = "Batch size")

    ooda_step = CInt(2, label = "OODA step")
    lower_bound = CFloat(0.005, label = "Lower bound")
    upper_bound = CFloat(100, label = "Upper bound")

    display_step = CInt(10, label = "Display step")

    train = Action(name = "Run NN",
                action = "test")

    generateGraph = Action(name = "Generate graphs",
                action = "graphs")

    view = View(
                Item('epochs'),
                Item('num_steps'),
                Item('batch_size'),
                Item('ooda_step'),
                Item('lower_bound'),
                Item('upper_bound'),
                Item('display_step'),
#                Item('figure', editor=MPLFigureEditor(),
#                                show_label=False),
                handler = NNHandler(),
                buttons = [train, generateGraph]
            )

cam = Camera()
#cam.configure_traits()

if __name__ == "__main__":
    print("wtf")
    cam.configure_traits()


input()

I'm using an Anaconda environment with Python 3.6, on an x64 machine. I've tried multiple options, but couldn't get any of them to work:

  1. cx_Freeze

Here's the setup:

import sys
from cx_Freeze import setup, Executable

# Dependencies are automatically detected, but it might need fine tuning.
build_exe_options = {
    "packages": ["os", "traits.api", "traitsui.api", "matplotlib.pyplot",
                 "tensorflow", "tensorflow.contrib.eager", "datetime", "__future__"],
    "excludes": ["tkinter"],
    "includes": ["numpy.core._methods", "numpy.lib.format"],
}

# GUI applications require a different base on Windows (the default is for a
# console application).
base = None
if sys.platform == "win32":
    base = "Win32GUI"

setup(  name = "ooda",
        version = "0.1",
        description = "My GUI application!",
        options = {"build_exe": build_exe_options},
        executables = [Executable("ooda.py", base = base)])

This builds the exe without errors, but when I try to run it I get: Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll.
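
For reference, one workaround I've seen suggested for this MKL error (I haven't verified that it fixes my build) is to copy the MKL DLLs from the conda environment next to the frozen exe via cx_Freeze's include_files option. The DLL glob and the libiomp5md.dll entry below are my assumptions about what an Anaconda environment ships:

import os
import sys
import glob
from cx_Freeze import setup, Executable

# Guess the conda environment's DLL directory; adjust if your layout differs.
dll_dir = os.path.join(sys.prefix, "Library", "bin")
extra_dlls = glob.glob(os.path.join(dll_dir, "mkl_*.dll"))
extra_dlls.append(os.path.join(dll_dir, "libiomp5md.dll"))  # Intel OpenMP runtime

build_exe_options = {
    "packages": ["os", "traits.api", "traitsui.api", "matplotlib.pyplot",
                 "tensorflow", "tensorflow.contrib.eager", "datetime", "__future__"],
    "excludes": ["tkinter"],
    "includes": ["numpy.core._methods", "numpy.lib.format"],
    # Copy the DLLs into the build folder, next to the executable.
    "include_files": [(dll, os.path.basename(dll)) for dll in extra_dlls],
}

setup(name="ooda",
      version="0.1",
      description="My GUI application!",
      options={"build_exe": build_exe_options},
      # Console base for now, so error messages stay visible.
      executables=[Executable("ooda.py", base=None)])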

  2. Simply use pyinstaller

Again, no errors when building, but when I run the executable from a console I actually get an error:

Traceback (most recent call last):
  File "ooda.py", line 23, in <module>
  File "<frozen>", line 971, in _find_and_load
  File "<frozen>", line 955, in _find_and_load_unlocked
  File "<frozen>", line 665, in _load_unlocked
  File "C:\Users\Roby\Anaconda3\envs\tensorflow\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
    exec(bytecode, module.__dict__)
  File "site-packages\traitsui\api.py", line 36, in <module>
  File "<frozen>", line 971, in _find_and_load
  File "<frozen>", line 955, in _find_and_load_unlocked
  File "<frozen>", line 665, in _load_unlocked
  File "C:\Users\Roby\Anaconda3\envs\tensorflow\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
    exec(bytecode, module.__dict__)
  File "site-packages\traitsui\editors\__init__.py", line 23, in <module>
  File "<frozen>", line 971, in _find_and_load
  File "<frozen>", line 955, in _find_and_load_unlocked
  File "<frozen>", line 665, in _load_unlocked
  File "C:\Users\Roby\Anaconda3\envs\tensorflow\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
    exec(bytecode, module.__dict__)
  File "site-packages\traitsui\editors\api.py", line 24, in <module>
  File "<frozen>", line 971, in _find_and_load
  File "<frozen>", line 955, in _find_and_load_unlocked
  File "<frozen>", line 665, in _load_unlocked
  File "C:\Users\Roby\Anaconda3\envs\tensorflow\lib\site-packages\PyInstaller\loader\pyimod03_importers.py", line 631, in exec_module
    exec(bytecode, module.__dict__)
  File "site-packages\traitsui\editors\code_editor.py", line 37, in <module>
  File "site-packages\traitsui\editors\code_editor.py", line 49, in ToolkitEditorFactory
  File "site-packages\traits\traits.py", line 522, in __call__
  File "site-packages\traits\traits.py", line 1236, in Color
  File "site-packages\traitsui\toolkit_traits.py", line 8, in ColorTrait
  File "site-packages\traitsui\toolkit.py", line 109, in toolkit
  File "site-packages\pyface\base_toolkit.py", line 281, in find_toolkit
  File "site-packages\pyface\base_toolkit.py", line 209, in import_toolkit
RuntimeError: No traitsui.toolkits plugin found for toolkit null
[9148] Failed to execute script ooda

I saw a few links with errors similar to this one, but they all said for toolkit qt4 while mine says for toolkit null.

After this, I tried adding ETS_TOOLKIT=qt4 as an environment variable, but I'm getting the same error.

Edit due to comments

As per the suggestions in the comments, I've tried to explicitly add the imports:

import pyface
import PyQt5

Simply by adding those two imports, the error changes to No traitsui.toolkits plugin found for toolkit qt4. If I also go and set

from traits.etsconfig.api import ETSConfig
ETSConfig.toolkit = 'pyqt'

It can't find the plugin for toolkit qt5.

So after some more research, I tried configuring pyface the following way:

os.environ['ETS_TOOLKIT'] = 'qt4'

import imp
try:
    imp.find_module('PySide') # test if PySide is available
except ImportError:
    os.environ['QT_API'] = 'pyqt' # signal to pyface that PyQt4 should be used

In this case, I get the same error for qt4.

Edit #2

Following the suggestions in the comments, I tried to explicitly import traitsui.qt4.toolkit, yet even with this in place I am getting the same error with pyinstaller, but for pyface instead of traitsui.
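
Concretely, the top of the script currently looks roughly like the sketch below, with the toolkit forced before anything from traits/traitsui/pyface is imported. The backend module names and the QT_API value are my best guesses from reading the pyface sources, not something I've confirmed is correct:

import os

# Must run before the first traits / traitsui / pyface import.
os.environ['ETS_TOOLKIT'] = 'qt4'   # ETS still names its Qt backend 'qt4'
os.environ['QT_API'] = 'pyqt5'      # tell pyface.qt to use PyQt5

from traits.etsconfig.api import ETSConfig
ETSConfig.toolkit = 'qt4'

# Explicit imports so the freezer can discover the backend statically.
import PyQt5
import pyface.qt
import traitsui.qt4.toolkit

from traits.api import HasTraits, CInt, CFloat
from traitsui.api import View, Item, Handler, Action

The intent is that neither PyInstaller nor cx_Freeze has to resolve any dynamic imports to find the Qt backend, but so far the error stays the same.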

Also, I've noticed that pyinstaller is giving me some warnings when building:

102773 INFO: Looking for dynamic libraries
102980 WARNING: lib not found: mpich2mpi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_mpich2_ilp64.dll
103690 WARNING: lib not found: impi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_intelmpi_lp64.dll
105931 WARNING: lib not found: mpich2mpi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_mpich2_lp64.dll
106129 WARNING: lib not found: msmpi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_msmpi_ilp64.dll
106356 WARNING: lib not found: msmpi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_msmpi_lp64.dll
107704 WARNING: lib not found: impi.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_blacs_intelmpi_ilp64.dll
109251 WARNING: lib not found: pgc14.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_pgi_thread.dll
109441 WARNING: lib not found: pgf90rtl.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_pgi_thread.dll
109639 WARNING: lib not found: pgf90.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_pgi_thread.dll
111431 WARNING: lib not found: tbb.dll dependency of C:\Users\Roby\Anaconda3\envs\tensorflow\Library\bin\mkl_tbb_thread.dll

After some research I found the following links:

creating standalone exe using pyinstaller with mayavi import

https://github.com/pyinstaller/pyinstaller/issues/3274

The cx_Freeze build gave me an executable which simply closes instantly without any message, and the PyInstaller build keeps giving me the same error.
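
For what it's worth, here is the kind of PyInstaller spec file I'm experimenting with instead of the plain command line. The hiddenimports entries are guesses based on the traitsui/pyface package layout and may not be the right module names:

# ooda.spec -- experimental; the hiddenimports names are my guesses, not verified
block_cipher = None

hidden = [
    'traitsui.qt4.toolkit',
    'pyface.ui.qt4',
    'pyface.ui.qt4.init',
    'PyQt5.QtCore', 'PyQt5.QtGui', 'PyQt5.QtWidgets',
]

a = Analysis(['ooda.py'],
             pathex=['.'],
             binaries=[],
             datas=[],
             hiddenimports=hidden,
             hookspath=[],
             runtime_hooks=[],
             excludes=['tkinter'],
             cipher=block_cipher)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(pyz, a.scripts,
          exclude_binaries=True,
          name='ooda',
          debug=False,
          strip=False,
          upx=False,
          console=True)
coll = COLLECT(exe, a.binaries, a.zipfiles, a.datas,
               strip=False, upx=False, name='ooda')

The idea is to build with pyinstaller ooda.spec so the backend modules get bundled even though the script never imports them directly.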

My environment:

  1. Windows 10 x64
  2. Python 3.6
  3. TensorFlow, latest available on Anaconda (not sure of the exact version)
  4. Matplotlib 2.x
  5. Traits 4.x
  6. Traitsui 6.0.0
  • Note that a StackOverflow question should include everything needed to answer **inside the question itself**, so it's still meaningful even if external links break or change. Links can be welcome, but only if the question is still complete and answerable without them. When adding code to the question itself, use the `{}` button in the editing UI to format it. – Charles Duffy Jul 08 '18 at 21:21
  • BTW, have you reviewed [Pyinstaller numpy “Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll”](https://stackoverflow.com/questions/35478526/pyinstaller-numpy-intel-mkl-fatal-error-cannot-load-mkl-intel-thread-dll)? – Charles Duffy Jul 08 '18 at 21:25
  • @CharlesDuffy There you go, I've edited the question. Yes, I've reviewed that question a few times now. Keep in mind, that's for Pyinstaller and I get that error on cx_freeze. The error I get with pyinstaller is totally different. – taigi100 Jul 08 '18 at 21:53
  • Thank you -- the added content is helpful. (*Ideally* we'd have a [mcve] -- the *shortest possible* runnable code that causes the same error -- but this is good enough). This generally means that there's a runtime import that pyinstaller isn't discovering -- if you can track down what it is, you can manually add an explicit import or a hook for it. – Charles Duffy Jul 08 '18 at 23:26
  • Note that `pyface.ui.null` is actually a package that really exists. Adding an explicit `import` for it and then rebuilding your package with pyinstaller may be all you need. – Charles Duffy Jul 08 '18 at 23:27
  • ...that said, the traitsui docs specify very explicitly that it *does* require one of four possible UI toolkits (PyQt, wxPython, PySide, or PyQt5) to work correctly. Figuring out which one you *intend* to use and adding an explicit import for it would also be a step in the right direction. – Charles Duffy Jul 09 '18 at 00:06
  • @CharlesDuffy Updated the question with the results from explicitly importing them and setting the a backend (Honestly, don't really care which UI toolkit I'm going to use, I simply want to manage to build it) – taigi100 Jul 09 '18 at 08:03
  • You also need an explicit `import traitsui.toolkits.qt4`. Basically, pyinstaller needs to be able to find all the imports you need **without** relying on any dynamic behavior (so calls to `__import__()` or `imp.find_module()` need to be supplemented with actual `import` statements with concrete arguments). – Charles Duffy Jul 09 '18 at 12:33
  • ...to be clear, btw, the extent that you don't care about the how is part of why I'm commenting and not answering (the other extent is related to the question's breadth -- if it were narrowed to a MCVE and a specific problem, one could have a complete, canonical and tested answer, but here you've got a bunch of distinct libraries mixed with a bunch of your own code). I actually think this question is too broad to be within SO's topic guidelines, but you're making a good-faith effort, so I'm trying to help to some extent regardless, hence the back-and-forth in the comments here. – Charles Duffy Jul 09 '18 at 12:34
  • ...our goal is to be a Q&A database, not a source of general debugging assistance *except* to the extent that said assistance can be distilled down to simple, unique questions with canonical answers that are likely to help other people. "Why do I get an error about pyface.ui.null when trying to use pyinstaller to bundle traitsui?" would be a great question, for example, coupled with the shortest code that generates that error in question. You've similarly got several different potential cx_Freeze minimal questions; asking them as one big block makes it impossible to write a canonical answer. – Charles Duffy Jul 09 '18 at 12:47
  • @CharlesDuffy Updated the question accordingly. Also tried a few more ways which didn't work. Sorry for the not well though through the question - It's my first python project, the initial agreement wasn't to make it a .exe and now they want a .exe asap. It's a little bit frustrating that packaging this takes over double the time of the entire project itself. – taigi100 Jul 09 '18 at 17:37
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/174671/discussion-between-taigi100-and-charles-duffy). – taigi100 Jul 09 '18 at 17:39
