Does naming of the ops impact the memory/compute performance of TensorFlow?


To make the question clear, let's use an example. Assume we pass a huge tensor through a series of operations (reshape, transpose, etc.). Is it more memory/compute efficient to keep reusing the same variable name, or does it not matter? See the two cases below:

  • Case 1: change names
x = Conv2d(...)
x_transposed = tf.transpose(x)
x_expanded = tf.expand_dims(x_transposed, -1)
x_reshaped = tf.reshape(x_expanded, [...])
  • Case 2: keep names
x = Conv2d(...)
x = tf.transpose(x)
x = tf.expand_dims(x, -1)
x = tf.reshape(x, [...])

CodePudding user response:

Converting the lines from the provided snippet into two Python functions, wrapping each with tf.function to compile it into a callable TensorFlow graph (see here for more information), and printing the concrete graphs shows that the two graphs are identical: the Python variable names used do not affect the constructed graph. The example below (tweaked slightly from the provided snippet) illustrates this:

import tensorflow as tf


def same_name():
    x = tf.convert_to_tensor([1, 2, 3], dtype=tf.float32)
    x = tf.transpose(x)
    x = tf.expand_dims(x, -1)
    x = tf.reshape(x, [3, 1])
    x = tf.nn.relu(x)


def diff_name():
    x = tf.convert_to_tensor([1, 2, 3], dtype=tf.float32)
    x_transposed = tf.transpose(x)
    x_expanded = tf.expand_dims(x_transposed, -1)
    x_reshaped = tf.reshape(x_expanded, [3, 1])
    x_relued = tf.nn.relu(x_reshaped)


if __name__ == "__main__":
    print(tf.function(same_name).get_concrete_function().graph.as_graph_def())
    print(tf.function(diff_name).get_concrete_function().graph.as_graph_def())

The output in both cases is:

node {
  name: "Const"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_FLOAT
        tensor_shape {
          dim {
            size: 3
          }
        }
        tensor_content: "\000\000\200?\000\000\000@\000\000@@"
      }
    }
  }
}
node {
  name: "transpose/perm"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 1
          }
        }
        int_val: 0
      }
    }
  }
}
node {
  name: "transpose"
  op: "Transpose"
  input: "Const"
  input: "transpose/perm"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "Tperm"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "ExpandDims/dim"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
        }
        int_val: -1
      }
    }
  }
}
node {
  name: "ExpandDims"
  op: "ExpandDims"
  input: "transpose"
  input: "ExpandDims/dim"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "Tdim"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "Reshape/shape"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
          dim {
            size: 2
          }
        }
        tensor_content: "\003\000\000\000\001\000\000\000"
      }
    }
  }
}
node {
  name: "Reshape"
  op: "Reshape"
  input: "ExpandDims"
  input: "Reshape/shape"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "Tshape"
    value {
      type: DT_INT32
    }
  }
}
node {
  name: "Relu"
  op: "Relu"
  input: "Reshape"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
}
versions {
  producer: 440
}
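Rather than eyeballing the two printed dumps, the comparison can also be done programmatically: as_graph_def() returns a GraphDef protocol-buffer message, and proto messages compare field by field with ==, so a single equality check covers every node, attribute, and name in both graphs. A minimal sketch (repeating the two functions so the snippet stands alone):

```python
import tensorflow as tf


def same_name():
    x = tf.convert_to_tensor([1, 2, 3], dtype=tf.float32)
    x = tf.transpose(x)
    x = tf.expand_dims(x, -1)
    x = tf.reshape(x, [3, 1])
    x = tf.nn.relu(x)


def diff_name():
    x = tf.convert_to_tensor([1, 2, 3], dtype=tf.float32)
    x_transposed = tf.transpose(x)
    x_expanded = tf.expand_dims(x_transposed, -1)
    x_reshaped = tf.reshape(x_expanded, [3, 1])
    x_relued = tf.nn.relu(x_reshaped)


# Trace each function into a concrete graph and serialize it to a GraphDef.
g1 = tf.function(same_name).get_concrete_function().graph.as_graph_def()
g2 = tf.function(diff_name).get_concrete_function().graph.as_graph_def()

# Proto equality compares the whole message, including node names such as
# "transpose" and "Reshape", which come from the op types, not the Python
# variable names.
print(g1 == g2)
```

This also shows where graph node names actually come from: TensorFlow derives them from the op type (with numeric suffixes on repeats), so the Python identifiers on the left-hand side never reach the graph at all.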
