How to convert frozen graph to TensorFlow lite

Time: 12-11

I have been trying all day to follow https://www.tensorflow.org/lite/examples/object_detection/overview#model_customization to convert any of the TensorFlow Zoo models to a TensorFlow Lite model for running on Android, with no luck.

I downloaded several of the models from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1_detection_zoo.md. (FYI, Chrome does not let you download from these links because they are not HTTPS; I had to right-click the link, choose Inspect, and click the link from inside the inspector.)

I have this script:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_graph.pb',
    input_shapes = {'normalized_input_image_tensor':[1,300,300,3]},
    input_arrays = ['normalized_input_image_tensor'],
    output_arrays = ['TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1', 'TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3']
)
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
  f.write(tflite_model)

but it fails with the error: ValueError: Invalid tensors 'normalized_input_image_tensor' were found

so these lines must be wrong:

input_shapes = {'normalized_input_image_tensor':[1,300,300,3]},
input_arrays = ['normalized_input_image_tensor'],
output_arrays = ['TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1', 'TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3']

They evidently need different tensor names or a different shape, but how do I find these for each of the zoo models? Or is there some pre-conversion code I need to run first?

Running the "code snippet" below, I get:

--------------------------------------------------
Frozen model layers:
name: "add/y"
op: "Const"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "value"
  value {
    tensor {
      dtype: DT_FLOAT
      tensor_shape {
      }
      float_val: 1.0
    }
  }
}

Input layer:  add/y
Output layer:  Postprocessor/BatchMultiClassNonMaxSuppression/map/while/NextIteration_1
--------------------------------------------------

But I don't see how this maps to the input_shape or helps with the conversion.

CodePudding user response:

This code snippet

import tensorflow as tf

def print_layers(graph_def):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    # Wrap the imported graph so its operations can be inspected
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph

    print("-" * 50)
    print("Frozen model layers: ")
    layers = [op.name for op in import_graph.get_operations()]
    ops = import_graph.get_operations()
    print(ops[0])  # full definition of the first op, including dtype and shape
    print("Input layer: ", layers[0])
    print("Output layer: ", layers[-1])
    print("-" * 50)

# Load the frozen graph using TensorFlow 1.x compatibility functions
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

print_layers(graph_def=graph_def)

prints the attributes of the input layer, including its shape, along with the names of the input and output layers:

--------------------------------------------------
Frozen model layers: 
name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}

Input layer:  image_tensor
Output layer:  detection_classes
--------------------------------------------------

You can then insert the correct layer names and shape into your code, and the conversion should work.
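Putting the discovered names together, the converter call might look like the sketch below. To keep it self-contained it builds a tiny stand-in graph with the same layer names as the printout; with a real zoo model you would point graph_def_file at its frozen .pb instead. The float32 dtype and the 300x300 input size are assumptions (pick the size your model was trained for). Note that in TensorFlow 2.x the frozen-graph converter lives under tf.compat.v1.lite.

```python
import tensorflow as tf

# Stand-in for a real frozen model: a placeholder graph with the same
# layer names as in the printout above. A graph with no variables is
# already "frozen", so it can be written straight to disk.
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(
        tf.float32, [None, None, None, 3], name="image_tensor")
    y = tf.identity(x * 2.0, name="detection_classes")

tf.io.write_graph(g.as_graph_def(), ".", "frozen_graph.pb", as_text=False)

# The TF1-style frozen-graph converter, accessed via compat.v1 in TF2.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_graph.pb",
    input_arrays=["image_tensor"],
    output_arrays=["detection_classes"],
    # Fix the dynamic (-1) dimensions to concrete values; 300x300 is an
    # assumed size matching common SSD-300 zoo models.
    input_shapes={"image_tensor": [1, 300, 300, 3]},
)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The key point is that input_arrays, output_arrays, and the input_shapes key must exactly match the op names printed by the inspection script, not the names from the tutorial.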

CodePudding user response:

I think this article can help you.
