TFLite model conversion shows warning and using the converted model with interpreter crashes python


I have a TensorFlow Keras model with the following architecture:

Model: "model_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_3 (InputLayer)        [(None, 199)]             0         
                                                                 
 token_and_position_embeddin  (None, 199, 256)         2611456   
 g_2 (TokenAndPositionEmbedd                                     
 ing)                                                            
                                                                 
 lstm_4 (LSTM)               (None, 199, 150)          244200    
                                                                 
 lstm_5 (LSTM)               (None, 150)               180600    
                                                                 
 dense_2 (Dense)             (None, 10001)             1510151   
                                                                 
=================================================================
Total params: 4,546,407
Trainable params: 4,546,407
Non-trainable params: 0
_________________________________________________________________


I am trying to convert it to TFLite like this:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(str(model_saved_dir))
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter.target_spec.supported_types = [tf.float16]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
# Save the model.
with open(model_home / 'model.tflite', 'wb') as f:
    f.write(tflite_model)
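
(As a quick check, the bytes returned by convert() can be loaded straight into an interpreter before writing them to disk; a minimal sketch, assuming the conversion above succeeded:)

# Minimal sanity check: load the converted flatbuffer from memory and
# print the input shape the converted model actually expects.
check = tf.lite.Interpreter(model_content=tflite_model)
print(check.get_input_details()[0]["shape"])            # static shape
print(check.get_input_details()[0]["shape_signature"])  # -1 marks a dynamic dimension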

During conversion it shows the following warnings:

2022-04-12 20:17:23.584937: I tensorflow/cc/saved_model/loader.cc:301] SavedModel load for tags { serve }; Status: success: OK. Took 329307 microseconds.
2022-04-12 20:17:23.771051: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:237] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2022-04-12 20:17:23.983377: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1892] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexTensorListFromTensor, FlexTensorListGetItem, FlexTensorListReserve, FlexTensorListSetItem, FlexTensorListStack
Details:
    tf.TensorListFromTensor(tensor<199x?x256xf32>, tensor<2xi32>) -> (tensor<!tf_type.variant<tensor<?x256xf32>>>) : {device = ""}
    tf.TensorListFromTensor(tensor<?x?x150xf32>, tensor<2xi32>) -> (tensor<!tf_type.variant<tensor<?x150xf32>>>) : {device = ""}
    tf.TensorListGetItem(tensor<!tf_type.variant<tensor<?x150xf32>>>, tensor<i32>, tensor<2xi32>) -> (tensor<?x150xf32>) : {device = ""}
    tf.TensorListGetItem(tensor<!tf_type.variant<tensor<?x256xf32>>>, tensor<i32>, tensor<2xi32>) -> (tensor<?x256xf32>) : {device = ""}
    tf.TensorListReserve(tensor<2xi32>, tensor<i32>) -> (tensor<!tf_type.variant<tensor<?x150xf32>>>) : {device = ""}
    tf.TensorListSetItem(tensor<!tf_type.variant<tensor<?x150xf32>>>, tensor<i32>, tensor<?x150xf32>) -> (tensor<!tf_type.variant<tensor<?x150xf32>>>) : {device = ""}
    tf.TensorListStack(tensor<!tf_type.variant<tensor<?x150xf32>>>, tensor<2xi32>) -> (tensor<?x?x150xf32>) : {device = "", num_elements = -1 : i64}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
WARNING:absl:Buffer deduplication procedure will be skipped when flatbuffer library is not properly loaded

But the file is created anyway, and when I try to use the model like this:

import numpy as np

interpreter = tf.lite.Interpreter(model_path=str(model_home / 'model.tflite'))
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

# dummy input
ip = [[0 for _ in range(198)] + [1]]
ip = np.array(ip, dtype=np.int32)

interpreter.set_tensor(input_index, ip)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
print(predictions)

my Jupyter notebook kernel dies and the Python program crashes. My TensorFlow version is 2.8.0 and my Python version is 3.8.10. When I try to use the model from Android Studio, it also crashes with the error "ByteBuffer is not a valid FlatBuffer model".
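
(Before invoke(), the shape and dtype the interpreter expects can be inspected, and a dynamic batch dimension can be pinned explicitly; a minimal sketch using the same interpreter object as above:)

# A minimal sketch: inspect what the interpreter expects before feeding data,
# and pin the batch dimension if it is reported as dynamic (-1).
in_details = interpreter.get_input_details()[0]
print(in_details["shape"], in_details["shape_signature"], in_details["dtype"])
if in_details["shape_signature"][0] == -1:
    interpreter.resize_tensor_input(in_details["index"], [1, 199])
    interpreter.allocate_tensors()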

CodePudding user response:

I have solved both the crashing Python kernel and the crash when loading in Android Studio by fixing the input size in the model signature before converting the model:

run_model = tf.function(lambda x: model(x))
# This is important, let's fix the input size.
BATCH_SIZE = 1
INPUT_SIZE = 199
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([BATCH_SIZE, INPUT_SIZE], model.inputs[0].dtype))

# Convert from the concrete function so the fixed [1, 199] input shape is used.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func], model)
tflite_model = converter.convert()
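
With the input size pinned this way, the converted model should report a static [1, 199] input; a minimal sketch to verify it runs, assuming tflite_model is the result of the conversion above:

# Verification sketch: the fixed-shape model should load and run without resizing.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

in_details = interpreter.get_input_details()[0]
print(in_details["shape"])  # expected: [  1 199]

ip = np.array([[0] * 198 + [1]], dtype=in_details["dtype"])
interpreter.set_tensor(in_details["index"], ip)
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]).shape)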