
I have demo.sh working fine and I've looked at parser_eval.py and grokked it all to some extent. However, I don't see how to serve this model using TensorFlow Serving. There are two issues I can see off the top of my head:

1) There's no exported model for these graphs. The graph is built at each invocation using a graph builder (e.g. structured_graph_builder.py), a context protocol buffer, and a whole bunch of other stuff that I don't fully understand at this point (it seems to register additional syntaxnet.ops as well). So: is it possible, and if so how, to export these models into the "bundle" form required by Serving and the SessionBundleFactory? If not, it seems the graph-building logic will need to be re-implemented in C++, because Serving only runs in a C++ context.
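For reference, the "bundle" that SessionBundleFactory loads is, as I understand it, just a directory holding a serialized MetaGraphDef plus the sharded variable checkpoint. A sketch of that layout (`my_export` is a placeholder name; the files here are empty stand-ins, in a real export they are written by the SessionBundle exporter):

```shell
# Hypothetical SessionBundle export directory layout.
# "my_export" is a placeholder path, not anything from the SyntaxNet repo.
mkdir -p my_export/assets
touch my_export/export.meta            # serialized MetaGraphDef (graph + signatures)
touch my_export/export-00000-of-00001  # sharded variable checkpoint
ls my_export
```

SessionBundleFactory then points at this directory, deserializes the MetaGraphDef, and restores the variables into a C++ session, which is why any custom ops the graph uses (like syntaxnet.ops) must also be linked into the serving binary.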

2) demo.sh is actually two models literally piped together with a UNIX pipe, so any Servable would (probably) have to build two sessions and marshal the data from one to the other. Is this the correct approach? Or is it possible to build one "big" graph containing both models "patched" together and export that instead?
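If you do go the two-session route, the marshaling step is conceptually just function composition. A minimal sketch of the data flow (the model functions below are toy stand-ins for `Session.run` calls on the tagger and parser graphs, not SyntaxNet code):

```python
def serve_pipeline(first_model, second_model, request):
    """Run two models back to back, replacing the UNIX pipe in demo.sh.

    first_model / second_model stand in for Session.run on the POS-tagger
    and parser graphs; 'request' is the raw input sentence batch.
    """
    intermediate = first_model(request)  # e.g. tagger output (CoNLL-style rows)
    return second_model(intermediate)    # e.g. parser output


# Toy stand-ins to show the shape of the hand-off:
tagger = lambda sents: [(s, "TAGGED") for s in sents]
parser = lambda tagged: [(s, tag, "PARSED") for s, tag in tagged]

print(serve_pipeline(tagger, parser, ["hello world"]))
```

The real complication is not the composition itself but that the intermediate data crosses a serialization boundary (CoNLL text in demo.sh), whereas a single merged graph could pass tensors directly.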

dmansfield

1 Answer


So after a lot of learning and research, I ended up putting together a pull request for tensorflow/models and syntaxnet which achieves the goal of serving Parsey McParseface from TF Serving:

https://github.com/tensorflow/models/pull/250

What's NOT here is the actual "serving" code, but that is relatively trivial compared to the work needed to resolve the issues in the question above.

Dave
dmansfield
    And I've created a repository to house a simple (WIP) TF Serving artifact to serve the model. Comes with a nodejs gRPC test client. https://github.com/dmansfield/parsey-mcparseface-api – dmansfield Jul 27 '16 at 14:56