Optimize and (re)save saved-model with grappler

I've got a (TF2) saved-model full of training-op clutter that I'm trying to optimize for inference using grappler, but afterwards I want to save it back as a TF2 saved-model (to keep the general workflow away from TF1).

I currently have:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2_as_graph
from tensorflow.lite.python.util import run_graph_optimizations, get_grappler_config

# Load the saved-model and get the inference concrete function
sm = tf.saved_model.load('path/to/savedmodel/dir')
func = sm.signatures['serving_default']

# Replace variables with constants in order to get rid of the training clutter
frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(func)

# Use grappler to optimize the concrete function graph after replacing vars with constants
input_tensors = [tsr for tsr in frozen_func.inputs if tsr.dtype != tf.resource]
output_tensors = frozen_func.outputs
graph_def = run_graph_optimizations(graph_def, input_tensors, output_tensors,
                                    config=get_grappler_config(["constfold", "function"]),
                                    graph=frozen_func.graph)

# Here the intention is to somehow reconvert the optimized graph-def into a concrete function
# and subsequently re-save that as a TF2(not TF1!) saved-model, is there a way to do that?
frozen_func_graph = tf.Graph()
with frozen_func_graph.as_default():
    tf.import_graph_def(graph_def, name='')

# ... what now?

The issue is that direct tf.Graph usage is deprecated in TF2, so I want to convert the optimized graph back into a TF2 saved-model. I was thinking of doing that by somehow manually constructing a ConcreteFunction that wraps the optimized graph, but as far as I've researched, there seems to be no way to do that. This would basically mean I'd still have to use TF1 compat APIs, which I'd ideally like to avoid.

The ugly (ugly) option I'd really like to avoid would be (haven't tried it yet, but it would probably work; see the sketch after this list):

  • use v1 APIs to construct a TF1 saved-model using tf.compat.v1.saved_model.builder.SavedModelBuilder and save the TF1 saved-model
  • load the TF1 saved-model back using the v2 API (tf.saved_model.load rather than tf.compat.v1.saved_model.load; the former automatically converts a TF1 saved-model to a TF2 one)
  • (re)save the converted TF2 saved-model
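
Something like the following is what I have in mind for that detour (untested sketch; the paths are placeholders, and graph_def, input_tensors and output_tensors come from the snippet above):

import tensorflow as tf

v1 = tf.compat.v1

# Rebuild a TF1-style graph from the optimized graph_def
with v1.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    inputs = {t.name.split(':')[0]: graph.get_tensor_by_name(t.name)
              for t in input_tensors}
    outputs = {t.name.split(':')[0]: graph.get_tensor_by_name(t.name)
               for t in output_tensors}
    with v1.Session(graph=graph) as sess:
        builder = v1.saved_model.builder.SavedModelBuilder('path/to/tf1_savedmodel')
        signature = v1.saved_model.signature_def_utils.predict_signature_def(
            inputs=inputs, outputs=outputs)
        builder.add_meta_graph_and_variables(
            sess, [v1.saved_model.tag_constants.SERVING],
            signature_def_map={'serving_default': signature})
        builder.save()

# Load the TF1 saved-model back with the v2 loader (which converts it) and re-save
reloaded = tf.saved_model.load('path/to/tf1_savedmodel')
tf.saved_model.save(reloaded, 'path/to/tf2_savedmodel',
                    signatures=reloaded.signatures['serving_default'])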

Is there a way to do this nicely? Preferably also without being forced to dump the optimized saved-model to disk if I don't want to; it seems that constructing saved-models in memory is not possible? (That's not such a big issue, though.)

CodePudding user response:

Finally got it. It's not ideal, since I use (more or less) internal TF2 API calls, but at least no TF1 compat APIs are used at all. Here's the full code.

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2_as_graph
from tensorflow.lite.python.util import run_graph_optimizations, get_grappler_config
from tensorflow.python.tools.optimize_for_inference_lib import optimize_for_inference
from tensorflow.python.eager import context, wrap_function

# Load the saved-model and get the inference concrete function
sm = tf.saved_model.load('path/to/savedmodel/dir')
func = sm.signatures['serving_default'] # note: key might differ according to what your model's inference function is
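# If you're unsure of the key, the available signatures can be listed:
# print(list(sm.signatures.keys()))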

# Replace variables with constants in order to get rid of the training clutter
frozen_func, graph_def = convert_variables_to_constants_v2_as_graph(func)

# Use grappler to optimize the concrete function graph after replacing vars with constants
input_tensors = [tsr for tsr in frozen_func.inputs if tsr.dtype != tf.resource]
output_tensors = frozen_func.outputs
graph_def = run_graph_optimizations(graph_def, input_tensors, output_tensors,
                                    config=get_grappler_config(["constfold", "function"]),
                                    graph=frozen_func.graph)

# Optimize for inference
input_tsr_names = [tsr.name for tsr in input_tensors]
output_tsr_names = [tsr.name for tsr in output_tensors]
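# optimize_for_inference expects op (node) names, not tensor names, so strip
# the output-index suffix (e.g. ':0') from each tensor name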
input_node_names = list(set([tsr_name.rsplit(':', 1)[0] for tsr_name in input_tsr_names]))
output_node_names = list(set([tsr_name.rsplit(':', 1)[0] for tsr_name in output_tsr_names]))
graph_def = optimize_for_inference(input_graph_def=graph_def,
                                   input_node_names=input_node_names,
                                   output_node_names=output_node_names,
                                   placeholder_type_enum=tf.dtypes.float32.as_datatype_enum,
                                   toco_compatible=True)

# This next part inspired from _construct_concrete_function function here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/convert_to_constants.py#L1062

# Remove old functions to use updated functions from graph def - not sure if this is actually needed here, didn't look into it
for f in graph_def.library.function:
    if context.context().has_function(f.signature.name):
        context.context().remove_function(f.signature.name)

# GraphDef to concrete function
opt_frozen_func = wrap_function.function_from_graph_def(graph_def,
                                                        input_tsr_names,
                                                        output_tsr_names)

# Wrap concrete function into module to export as saved-model
class OptimizedFrozenModel(tf.Module):
    def __init__(self, name=None):
        super().__init__(name)

module = OptimizedFrozenModel()
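# Attaching the concrete function as an attribute tracks it on the module for
# export; the actual serving signature comes from the signatures= argument below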
module.__call__ = opt_frozen_func

# Export frozen & optimized saved-model
tf.saved_model.save(module, 'path/to/optimized_savedmodel/dir', signatures=opt_frozen_func)
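
As a quick sanity check, the optimized saved-model can be loaded back and its signature inspected (assuming the same path as above):

# Reload the optimized saved-model and verify the signature is present
reloaded = tf.saved_model.load('path/to/optimized_savedmodel/dir')
print(list(reloaded.signatures.keys()))  # should include 'serving_default'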