
Issue:

I have a successfully running nearest-neighbour TensorFlow model on Colab, named top_classify, but when it comes to saving it, I get the error message below:

KeyError: "Failed to add concrete function 
'b'__inference_model_layer_call_fn_158405'' to object-based SavedModel as it captures 
tensor <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>> which is 
unsupported or not reachable from root. One reason could be that a stateful object or 
a variable that the function depends on is not assigned to an attribute of the 
serialized trackable object (see SaveTest.test_captures_unreachable_variable)."

Details:

The nearest-neighbour model of interest uses the output of an already trained model (embedding_network): it compares the embedding of the input against the embeddings of the training set (nearest distance). In other words, we are not comparing the images directly, but their embeddings. Normally I wouldn't bother implementing nearest neighbour in TensorFlow, since the model is not iterative, but I need a TFLite model to use in an Android app, so I don't have much choice.

First, I trained the embedding_network model (via transfer learning and a Siamese setup), which gives an output of shape (None, 27). xc is the constant embedding matrix of the entire training set (752 examples, shape (752, 27)), and yc holds the correct labels of the training set; both are tf constants. The example below is for the best match (1-NN); the code can also be modified to return any desired number of matches (k-NN), as sketched after the code below.

import tensorflow as tf
from tensorflow import keras

# Precompute the embeddings and labels of the entire training set as constants.
xc = tf.constant(embedding_network(x_train))   # shape (752, 27)
yc = tf.constant(y_train)

inputs = keras.Input(shape=(TARGET_SIZE, TARGET_SIZE, 3))

# Embed the input and compute the L1 distance to every training embedding.
x0 = embedding_network(inputs, training=False)
distance = tf.reduce_sum(tf.abs(tf.add(xc, tf.negative(x0))), axis=1)

# Index of the closest training example (1-NN) and its label as a one-hot vector.
findKClosestTrImages = tf.argsort(distance, direction='ASCENDING')
closest0 = tf.gather(yc, findKClosestTrImages[0])
out = tf.one_hot(closest0, DEPTH)

top_classify = keras.Model(inputs=inputs, outputs=out)
top_classify.summary()
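
Here is a minimal k-NN sketch along those lines: it gathers the labels of the K nearest training embeddings and takes a simple majority vote over their one-hot encodings (K = 5 and the voting scheme are purely illustrative, not what the model above actually does):

K = 5  # illustrative number of neighbours
topk_idx = findKClosestTrImages[:K]                            # indices of the K nearest training embeddings
topk_labels = tf.gather(yc, topk_idx)                          # their labels, shape (K,)
votes = tf.reduce_sum(tf.one_hot(topk_labels, DEPTH), axis=0)  # per-class vote counts
out_knn = tf.one_hot(tf.argmax(votes), DEPTH)                  # majority-vote class as a one-hot vector

top_classify_knn = keras.Model(inputs=inputs, outputs=out_knn)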

As you can see, I'm not doing any training with this model: no loss function, no compile & fit, since I will only use it for inference.

Here is the summary of the new nearest-neighbour model, top_classify:

Model: "model_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_6 (InputLayer)        [(None, 224, 224, 3)]     0         
                                                                 
 model (Functional)          (None, 27)                2292571   
                                                                 
 tf.math.negative_1 (TFOpLam  (None, 27)               0         
 bda)                                                            
                                                                 
 tf.math.add_1 (TFOpLambda)  (752, 27)                 0         
                                                                 
 tf.math.abs_1 (TFOpLambda)  (752, 27)                 0         
                                                                 
 tf.math.reduce_sum_1 (TFOpL  (752,)                   0         
 ambda)                                                          
                                                                 
 tf.argsort_1 (TFOpLambda)   (752,)                    0         
                                                                 
 tf.__operators__.getitem_1   ()                       0         
 (SlicingOpLambda)                                               
                                                                 
 tf.compat.v1.gather_1 (TFOp  ()                       0         
 Lambda)                                                         
                                                                 
 tf.one_hot_1 (TFOpLambda)   (27,)                     0         
                                                                 
=================================================================
Total params: 2,292,571
Trainable params: 0
Non-trainable params: 2,292,571
_________________________________________________________________

Anyway, the model works like a charm on Colab and I'm getting really good results, but I can't save it, whether I use tf.saved_model.save or top_classify.save.

Error when trying tf.saved_model.save:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_serialization.py in serialize_concrete_function(concrete_function, node_ids, coder)
     64     for capture in concrete_function.captured_inputs:
---> 65       bound_inputs.append(node_ids[capture])
     66   except KeyError:

7 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/object_identity.py in __getitem__(self, key)
    138   def __getitem__(self, key):
--> 139     return self._storage[self._wrap_key(key)]
    140 

KeyError: <_ObjectIdentityWrapper wrapping <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>>>

During handling of the above exception, another exception occurred:

KeyError                                  Traceback (most recent call last)
<ipython-input-33-13ced446758f> in <module>()
     15 TOP_CLASSIFY_SAVE_LOC = "/content/top_classify"
     16 #top_classify.save("/content/top_classify")
---> 17 tf.saved_model.save(top_classify, TOP_CLASSIFY_SAVE_LOC)
     18 #top_classify.save("/content/top_classify", save_format='h5')
     19 # Convert the model

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save(obj, export_dir, signatures, options)
   1278   # pylint: enable=line-too-long
   1279   metrics.IncrementWriteApi(_SAVE_V2_LABEL)
-> 1280   save_and_return_nodes(obj, export_dir, signatures, options)
   1281   metrics.IncrementWrite(write_version="2")
   1282 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in save_and_return_nodes(obj, export_dir, signatures, options, experimental_skip_checkpoint)
   1313 
   1314   _, exported_graph, object_saver, asset_info, saved_nodes, node_paths = (
-> 1315       _build_meta_graph(obj, signatures, options, meta_graph_def))
   1316   saved_model.saved_model_schema_version = (
   1317       constants.SAVED_MODEL_SCHEMA_VERSION)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph(obj, signatures, options, meta_graph_def)
   1485 
   1486   with save_context.save_context(options):
-> 1487     return _build_meta_graph_impl(obj, signatures, options, meta_graph_def)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _build_meta_graph_impl(obj, signatures, options, meta_graph_def)
   1449 
   1450   object_graph_proto = _serialize_object_graph(
-> 1451       saveable_view, asset_info.asset_index)
   1452   meta_graph_def.object_graph_def.CopyFrom(object_graph_proto)
   1453 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/save.py in _serialize_object_graph(saveable_view, asset_file_def_index)
   1015     name = saveable_view.function_name_map.get(name, name)
   1016     serialized = function_serialization.serialize_concrete_function(
-> 1017         concrete_function, saveable_view.captured_tensor_node_ids, coder)
   1018     if serialized is not None:
   1019       proto.concrete_functions[name].CopyFrom(serialized)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_serialization.py in serialize_concrete_function(concrete_function, node_ids, coder)
     66   except KeyError:
     67     raise KeyError(
---> 68         f"Failed to add concrete function '{concrete_function.name}' to object-"
     69         f"based SavedModel as it captures tensor {capture!r} which is unsupported"
     70         " or not reachable from root. "

KeyError: "Failed to add concrete function 
'b'__inference_model_layer_call_fn_158405'' to object-based SavedModel as it captures 
tensor <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>> which is 
unsupported or not reachable from root. One reason could be that a stateful object or 
a variable that the function depends on is not assigned to an attribute of the 
serialized trackable object (see SaveTest.test_captures_unreachable_variable)."

Error when trying top_classify.save (with save_format='h5', followed by the TFLite conversion):

WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
/usr/local/lib/python3.7/dist-packages/keras/engine/functional.py:1410: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
  layer_config = serialize_layer_fn(layer)
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-34-7275f8fa7bba> in <module>()
     18 top_classify.save("/content/top_classify", save_format='h5')
     19 # Convert the model
---> 20 converter = tf.lite.TFLiteConverter.from_saved_model(TOP_CLASSIFY_SAVE_LOC) # path to the SavedModel directory
     21 tflite_model = converter.convert()
     22 

4 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/lite/python/lite.py in from_saved_model(cls, saved_model_dir, signature_keys, tags)
   1603 
   1604     with context.eager_mode():
-> 1605       saved_model = _load(saved_model_dir, tags)
   1606     if not signature_keys:
   1607       signature_keys = saved_model.signatures

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load.py in load(export_dir, tags, options)
    898     ValueError: If `tags` don't match a MetaGraph in the SavedModel.
    899   """
--> 900   result = load_internal(export_dir, tags, options)["root"]
    901   return result
    902 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/load.py in load_internal(export_dir, tags, options, loader_cls, filters)
    911     tags = nest.flatten(tags)
    912   saved_model_proto, debug_info = (
--> 913       loader_impl.parse_saved_model_with_debug_info(export_dir))
    914 
    915   if (len(saved_model_proto.meta_graphs) == 1 and

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/loader_impl.py in parse_saved_model_with_debug_info(export_dir)
     58     parsed. Missing graph debug info file is fine.
     59   """
---> 60   saved_model = _parse_saved_model(export_dir)
     61 
     62   debug_info_path = file_io.join(

/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/loader_impl.py in parse_saved_model(export_dir)
    117   else:
    118     raise IOError(
--> 119         f"SavedModel file does not exist at: {export_dir}{os.path.sep}"
    120         f"{{{constants.SAVED_MODEL_FILENAME_PBTXT}|"
    121         f"{constants.SAVED_MODEL_FILENAME_PB}}}")

OSError: SavedModel file does not exist at: /content/top_classify/{saved_model.pbtxt|saved_model.pb}

If you come up with a solution or suggestion, it would be really appreciated. Thank you.


1 Answer

After much detailed searching and many trials, I found this forum post:

https://github.com/keras-team/keras/issues/15699 (Error when Saving model with data augmentation layer on Tensorflow 2.7 #15699), which states that data augmentation layers may cause save issues.

It wasn't stated in the question, but here are the details of embedding_network in my TF code:

inputs = tf.keras.Input(shape=(TARGET_SIZE, TARGET_SIZE, 3))
x0 = rescale(inputs)                 # pixel rescaling layer
x0 = data_augmentation(x0)           # data augmentation layer (the culprit)
x0 = base_model(x0, training=False)  # frozen pretrained backbone
x0 = tf.keras.layers.GlobalAveragePooling2D()(x0)
x0 = tf.keras.layers.Dropout(DROPOUT_VAL)(x0)
outputs = tf.keras.layers.Dense(NUM_CLASSES)(x0)
embedding_network = tf.keras.Model(inputs, outputs)
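
(data_augmentation itself isn't shown above; it is a small Keras preprocessing pipeline, something along these lines, given here purely for illustration:)

# Illustrative only - the actual augmentation pipeline may differ.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])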

As you can see, it has an augmentation layer, which was blocking the save. So I created a very similar model, called embedding_network_cleany, without the augmentation:

# Same architecture, but without the data augmentation layer.
inputs = tf.keras.Input(shape=(TARGET_SIZE, TARGET_SIZE, 3))
x0 = rescale(inputs)
x2 = base_model(x0, training=False)
x3 = tf.keras.layers.GlobalAveragePooling2D()(x2)
x4 = tf.keras.layers.Dropout(DROPOUT_VAL)(x3)
outputs = tf.keras.layers.Dense(NUM_CLASSES)(x4)
embedding_network_cleany = tf.keras.Model(inputs, outputs)

I copied embedding_network's weights with embedding_network.save_weights, then loaded them onto embedding_network_cleany. Now I'm able to save both the top_classify and embedding_network_cleany models, and can convert to TFLite too.
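
A minimal sketch of that weight transfer and export, assuming top_classify is rebuilt on embedding_network_cleany first (the file paths are just placeholders):

# Transfer the trained weights to the augmentation-free model.
embedding_network.save_weights("/content/embedding_weights.h5")
embedding_network_cleany.load_weights("/content/embedding_weights.h5")

# With top_classify rebuilt on embedding_network_cleany, saving and TFLite conversion work:
tf.saved_model.save(top_classify, "/content/top_classify")
converter = tf.lite.TFLiteConverter.from_saved_model("/content/top_classify")
tflite_model = converter.convert()
with open("/content/top_classify.tflite", "wb") as f:
    f.write(tflite_model)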

A new solution:

A probably much better solution is given in this link by AloneTogether: Saving model on Tensorflow 2.7.0 with data augmentation layer

"A workaround would be to simply save your model with the older Keras H5 format model.save("test", save_format='h5')"
