Is it possible to train a tensorflow model, then export it as something accessible without tensorflow? I want to apply some machine learning to a school project in which the code is submitted on an online portal - it doesn't have tensorflow installed though, only standard libraries. I am able to upload additional files, but any tensorflow file would require tensorflow to make sense of it... Will I have to write my ML code from scratch?
3 Answers
Pretty much, unless you bring tensorflow and all of its files along with your application. Other than that, no, you cannot import tensorflow or include any tensorflow-dependent modules or code.

Yes, it is possible. If you are working with a fairly simple network, such as a 2- or 3-layer fully connected NN, you can extract the weight and bias terms from the .pb file into any format (e.g. .csv) and use them accordingly.
For example,
import tensorflow as tf
import numpy as np
from tensorflow.python.platform import gfile
from tensorflow.python.framework import tensor_util

# limit GPU memory usage and allow fallback to CPU placement
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.3)
config = tf.ConfigProto(allow_soft_placement=True,
                        log_device_placement=True,
                        gpu_options=gpu_options)

GRAPH_PB_PATH = "./YOUR.pb"
with tf.Session(config=config) as sess:
    print("load graph")
    with gfile.FastGFile(GRAPH_PB_PATH, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        sess.graph.as_default()
        tf.import_graph_def(graph_def, name='')

    # in a frozen graph, every weight/bias tensor is stored as a Const node
    graph_nodes = [n for n in graph_def.node]
    wts = [n for n in graph_nodes if n.op == 'Const']

    result = []
    result_name = []  # node names, useful for checking which constant is which
    for n in wts:
        result_name.append(n.name)
        result.append(tensor_util.MakeNdarray(n.attr['value'].tensor))

    # for a 2-layer fully connected network the constants come out in this order
    np.savetxt("layer1_weight.csv", result[0], delimiter=",")
    np.savetxt("layer1_bias.csv", result[1], delimiter=",")
    np.savetxt("layer2_weight.csv", result[2], delimiter=",")
    np.savetxt("layer2_bias.csv", result[3], delimiter=",")

So, do you mean recreating the network by hand with the weights and biases resulting from training the tensorflow model? – Harry Stuart May 22 '19 at 20:04
If you use only simple fully connected layers, you can implement them in numpy without much trouble. Save the kernels and biases to files (or inject the weights directly into your code as Python constants) and do the following for each layer:
# preallocate w once per layer before the loop
w = np.empty([len(x), layer['kernel'].shape[1]])
# x is the layer input; multiply it by the kernel, writing the result into w
x.dot(layer['kernel'], out=w)
w += layer['bias']       # add bias
out = np.maximum(w, 0)   # ReLU activation
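Here `layer` is simply a dict holding one layer's kernel and bias. A minimal sketch of the full forward pass, assuming hypothetical .npy files holding each layer's exported parameters:

import numpy as np

# hypothetical file names; each dict is one 'layer' as used above
layers = [
    {'kernel': np.load("layer1_kernel.npy"), 'bias': np.load("layer1_bias.npy")},
    {'kernel': np.load("layer2_kernel.npy"), 'bias': np.load("layer2_bias.npy")},
]

def forward(x):
    for i, layer in enumerate(layers):
        x = x.dot(layer['kernel']) + layer['bias']
        if i < len(layers) - 1:
            x = np.maximum(x, 0)  # ReLU on hidden layers only, logits at the end
    return x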
Or you can try this lib (for old tensorflow versions): https://github.com/riga/tfdeploy. It is written entirely in numpy, so you could also lift individual pieces of code from it.
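For reference, usage follows this rough pattern from the tfdeploy README (the model file and the tensor names "input"/"output" are placeholders for whatever you saved; check the project's docs):

import numpy as np
import tfdeploy as td

# model.pkl is a file previously created with tfdeploy from a tensorflow model
model = td.Model("model.pkl")
inp, outp = model.get("input", "output")

# evaluate the output tensor for a batch; the shape here is a placeholder
batch = np.random.rand(10, 784)
result = outp.eval({inp: batch})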

What is `layer`? I created a list of `tf.Variable`s for each layer's kernels and another list for each layer's biases. – Harry Stuart May 29 '19 at 12:12