TensorFlow seems to modify both class and instance object


I have observed that TensorFlow methods like assign_add and assign_sub modify both the instance attribute and the class attribute (when both exist). Here is a simple snippet that reproduces my observation. Can anyone clarify this behavior (assign_sub and assign_add appearing to modify both class and instance attributes)?

#a python class
class myc_base():
    a=1.
    def __init__(self, b=1.):
        self.b=b
    def add(self, to_add=1.):
        self.a += to_add
        self.b += to_add
    def sub(self, to_sub=1.):
        self.a-=to_sub
        self.b-=to_sub

obj_base=myc_base()

print(f'Init.     -- class.a: {myc_base.a} | obj.a: {obj_base.a}, obj.b: {obj_base.b}')
obj_base.add(5.)
print(f'after add -- class.a: {myc_base.a} | obj.a: {obj_base.a}, obj.b: {obj_base.b}')
obj_base.sub(2.)
print(f'after sub -- class.a: {myc_base.a} | obj.a: {obj_base.a}, obj.b: {obj_base.b}')

Output:

Init.     -- class.a: 1.0 | obj.a: 1.0, obj.b: 1.0
after add -- class.a: 1.0 | obj.a: 6.0, obj.b: 6.0
after sub -- class.a: 1.0 | obj.a: 4.0, obj.b: 4.0

With TensorFlow:

import tensorflow as tf

#a class for tf operations
class myc_tf():
    a=tf.Variable(1.)
    def __init__(self, b=tf.Variable(1.)):
        self.b=b
    def add(self, to_add=1.):
        self.a.assign_add(to_add)
        self.b.assign_add(to_add)
    def sub(self, to_sub=1.):
        self.a.assign_sub(to_sub)
        self.b.assign_sub(to_sub)

obj_tf=myc_tf()

print(f'Init.     -- class.a: {myc_tf.a.numpy()} | obj.a: {obj_tf.a.numpy()}, obj.b: {obj_tf.b.numpy()}')
obj_tf.add(5.)
print(f'after add -- class.a: {myc_tf.a.numpy()} | obj.a: {obj_tf.a.numpy()}, obj.b: {obj_tf.b.numpy()}')
obj_tf.sub(2.)
print(f'after sub -- class.a: {myc_tf.a.numpy()} | obj.a: {obj_tf.a.numpy()}, obj.b: {obj_tf.b.numpy()}')

Output:

Init.     -- class.a: 1.0 | obj.a: 1.0, obj.b: 1.0
after add -- class.a: 6.0 | obj.a: 6.0, obj.b: 6.0
after sub -- class.a: 4.0 | obj.a: 4.0, obj.b: 4.0

CodePudding user response:

a is a class attribute. b is an instance attribute.

However, augmented assignments like

self.a += to_add
self.a -= to_sub

are not modifying the class attribute you think you are accessing via the instance. Since floats are immutable (they don't define __iadd__ or __isub__), these statements are really equivalent to

self.a = self.a + to_add
self.a = self.a - to_sub

so the first time one is used, the class attribute is read on the right-hand side, but the assignment then creates a new instance attribute, and that instance attribute shadows the class attribute in all future calls.
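
A quick way to see the shadowing in the plain Python version (a minimal check using the myc_base class from the question):

obj = myc_base()
print('a' in vars(obj))   # False: the instance has no attribute 'a' of its own yet
obj.add(5.)               # self.a += 5. reads myc_base.a, then binds a new instance attribute
print('a' in vars(obj))   # True: the instance attribute now shadows the class attribute
print(myc_base.a)         # 1.0: the class attribute is untouched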

If you want to modify a class attribute via an instance, you need to be explicit about it. One possible solution:

type(self).a += to_add
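
A minimal sketch of that approach (the class name myc_fixed is made up for illustration):

class myc_fixed():
    a = 1.
    def add(self, to_add=1.):
        type(self).a += to_add   # rebinds the class attribute instead of creating an instance attribute

obj = myc_fixed()
obj.add(5.)
print(myc_fixed.a, obj.a)   # 6.0 6.0 -- both lookups resolve to the same class attribute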

Your TensorFlow code doesn't make any assignments, augmented or otherwise. assign_add and assign_sub are method calls that mutate, in place, whatever Variable self.a resolves to, which here is the class attribute. No new instance attribute is ever created, so the class and the instance keep seeing the same Variable.
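
You can check this directly: the instance lookup and the class lookup yield the very same Variable object, and creating the Variables inside __init__ gives every instance its own copies. A sketch, assuming eager execution (the class name myc_tf_per_instance is just for illustration):

obj_tf2 = myc_tf()
print(obj_tf.a is myc_tf.a)    # True: one shared class-level Variable
print(obj_tf.b is obj_tf2.b)   # True as well: the default b=tf.Variable(1.) is evaluated once and shared

class myc_tf_per_instance():
    def __init__(self):
        # creating the Variables here gives each instance its own, independent copies
        self.a = tf.Variable(1.)
        self.b = tf.Variable(1.)
    def add(self, to_add=1.):
        self.a.assign_add(to_add)
        self.b.assign_add(to_add)

x = myc_tf_per_instance()
y = myc_tf_per_instance()
x.add(5.)
print(x.a.numpy(), y.a.numpy())   # 6.0 1.0 -- y is unaffected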

CodePudding user response:

You need to distinguish class attributes from instance attributes, and in-place updates from re-assignment. assign_add() and assign_sub() update a tf.Variable in place, so every name that refers to that Variable sees the change, while a plain Python assignment such as self.var2 = ... rebinds the name on the instance and leaves the class attribute alone.

Sample: var1 and var2 both start at 10. In __init__, self.var2 = self.var1 * 10.0 creates an instance attribute holding a tensor of 100s that shadows the class attribute var2. In call, assign_add adds 30 to var1 in place. So after one call the class-level var1 is 40, the class-level var2 is still 10, and the value the layer returns is the instance-level var2, i.e. 100.

import tensorflow as tf

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Class and Functions
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class MyDenseLayer(tf.keras.layers.Layer):
    var1 = tf.Variable([10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0])
    var2 = tf.Variable([10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0])
    
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs
        self.var2 = self.var1 * 10.0   # rebinds var2 on the instance; the class attribute var2 is untouched
        
    def build(self, input_shape):
        self.kernel = self.add_weight("kernel",
                                      shape=[int(input_shape[-1]), self.num_outputs])

    def call(self, inputs):
        # assign_add mutates the class-level Variable var1 in place
        self.var1.assign_add([30.0, 30.0, 30.0, 30.0, 30.0, 30.0, 30.0, 30.0, 30.0, 30.0])

        # self.var2 resolves to the instance attribute created in __init__ (the tensor of 100s)
        temp = tf.constant( self.var2 ).numpy()
        return temp

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
start = 3
limit = 33
delta = 3

# Create DATA
sample = tf.range(start, limit, delta)
sample = tf.cast( sample, dtype=tf.float32 )

# Initial shape ( 10, 1 )
sample = tf.constant( sample, shape=( 10, 1 ) )
layer = MyDenseLayer(10)
data = layer(sample)

print( tf.constant( MyDenseLayer.var1 ).numpy() )
print( tf.constant( MyDenseLayer.var2 ).numpy() )
print( data )

Output: class-level var1 (updated in place by assign_add), class-level var2 (left at its initial value, since __init__ only rebound the instance attribute) and data (the instance-level var2 returned by call):

[40. 40. 40. 40. 40. 40. 40. 40. 40. 40.]
[10. 10. 10. 10. 10. 10. 10. 10. 10. 10.]
[100. 100. 100. 100. 100. 100. 100. 100. 100. 100.]
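
The same shadowing can be checked directly on the layer instance (a quick check against the class above):

print(layer.var1 is MyDenseLayer.var1)   # True: assign_add mutated the shared class-level Variable in place
print(layer.var2 is MyDenseLayer.var2)   # False: __init__ rebound var2 on the instance, shadowing the class attribute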