Tensorflow to learn an input dependent and an input independent variable


I am trying to implement a physics-informed network for equation discovery of Burgers' equation (https://arxiv.org/abs/1711.10561). This involves two predictions: the velocity of the fluid, which depends on the position and time point (the inputs), and a diffusion coefficient nu, which is common throughout the whole profile.
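For reference, the viscous Burgers' equation being discovered is u_t + u * u_x = nu * u_xx, where subscripts denote partial derivatives with respect to t and x, so the residual computed in the code below is u_t + u*u_x - nu*u_xx.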

I set up a network like so:

import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def neural_network(train):
    inp_1 = Input(shape=(train.shape[1],))  # setting the size of the input layer
    initial = 'he_uniform'
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(inp_1)
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
    x = Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)

    x = Dense(1, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)

    # nu branch: a trainable variable with a hard-coded shape of (1, 1)
    nu = tf.Variable([[1.]], trainable=True, shape=(1, 1))
    nu = Dense(1, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(nu)

    out = tf.concat([x, nu], 1)

    return Model(inputs=inp_1, outputs=out)


model = neural_network(xt_train)

model.summary()
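For a quick check (assuming xt_train is a float32 array with two columns, one row per sample), a batch of one goes through but a batch of two already fails:

print(model(xt_train[:1]).shape)  # TensorShape([1, 2]) -- batch of one works
# model(xt_train[:2])             # raises InvalidArgumentError: concat of (2, 1) with (1, 1)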

and then tried to evaluate the PDE residual with the following code:

def residualValOfPDE(xt, nu):
    x = xt[:, 0:1]  # x coordinate
    t = xt[:, 1:2]  # t coordinate
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        tape.watch(t)

        u, nu = model(tf.stack([x[:, 0], t[:, 0]], axis=1))[0]
        u_x = tape.gradient(u, x)

    u_t = tape.gradient(u, t)
    u_xx = tape.gradient(u_x, x)

    return u_t + u*u_x - nu*u_xx
  

Here xt_f contains the positions and times as columns and the different points as rows. Now, when I try to evaluate the expression for one point:

print( residualValOfPDE(xt_f[1:2,:], nu))  

it works correctly. However, when I try to pass multiple points, like:

print( residualValOfPDE(xt_f[1:3,:], nu)) 

I get the following error:

InvalidArgumentError                          Traceback (most recent call last)
<ipython-input> in <module>
     19     return u_t + u*u_x - nu*u_xx
     20 #
---> 21 print( residualValOfPDE(xt_f[1:3,:], nu))  # calculate the residual value at each collocation point

<ipython-input> in residualValOfPDE(xt, nu)
     11         tape.watch(t)
     12
---> 13         u, nu  = model( tf.stack([x[:, 0], t[:, 0]], axis=1) )[0]
     14         u_x = tape.gradient(u, x)
     15

~/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
     65   except Exception as e:  # pylint: disable=broad-except
     66     filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67     raise e.with_traceback(filtered_tb) from None
     68   finally:
     69     del filtered_tb

~/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
   7105 def raise_from_not_ok_status(e, name):
   7106   e.message += (" name: " + name if name is not None else "")
-> 7107   raise core._status_to_exception(e) from None  # pylint: disable=protected-access
   7108
   7109

InvalidArgumentError: Exception encountered when calling layer "tf.concat_28" (type TFOpLambda).

ConcatOp : Dimensions of inputs should match: shape[0] = [2,1] vs. shape[1] = [1,1] [Op:ConcatV2] name: concat

Call arguments received:
  • values=['tf.Tensor(shape=(2, 1), dtype=float32)', 'tf.Tensor(shape=(1, 1), dtype=float32)']
  • axis=1
  • name=concat
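The error boils down to a shape mismatch that can be reproduced standalone: tf.concat along axis=1 requires the batch dimensions to match, but the nu branch always has a batch dimension of 1:

import tensorflow as tf

a = tf.ones((2, 1))        # network output for a batch of 2 points
b = tf.ones((1, 1))        # nu branch with its hard-coded batch of 1
tf.concat([a, b], axis=1)  # raises InvalidArgumentError: shape[0] = [2,1] vs. shape[1] = [1,1]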

Any ideas how to solve this? Thanks in advance.

CodePudding user response:

The problem is that you are creating the nu variable with a hard-coded batch size of 1, which is why it only works with a sample size of 1 and no more. It is hard to say exactly what you want to do, but you can try something like this:

import tensorflow as tf

class NuLayer(tf.keras.layers.Layer):

  def __init__(self, batch_dim):
    super(NuLayer, self).__init__()
    self.batch_dim = batch_dim

  def build(self, input_shape):
    # one trainable nu entry per sample in the (fixed) batch
    self.nu = tf.Variable(initial_value=tf.ones((self.batch_dim, 1)), trainable=True)

  def call(self, inputs):
    # ignores the inputs and always returns the trainable variable
    return self.nu

inp_1 = tf.keras.layers.Input(shape=(2,))  # setting the size of the input layer
initial = 'he_uniform'
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(inp_1)
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)
x = tf.keras.layers.Dense(20, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)

x = tf.keras.layers.Dense(1, kernel_initializer=initial, activation='tanh', bias_initializer=initial)(x)

nu = NuLayer(batch_dim=2)  # batch_dim must match the number of points you evaluate with
nu = nu(inp_1)
out = tf.keras.layers.Concatenate(axis=1)([x, nu])
model = tf.keras.Model(inputs=inp_1, outputs=out)

def residualValOfPDE(xt):
    x = xt[:, 0:1]  # x coordinate
    t = xt[:, 1:2]  # t coordinate
    with tf.GradientTape(persistent=True) as tape:
        tape.watch(x)
        tape.watch(t)
        u, nu = model(tf.stack([x[:, 0], t[:, 0]], axis=1))[0]
        u_x = tape.gradient(u, x)

    u_t = tape.gradient(u, t)
    u_xx = tape.gradient(u_x, x)

    return u_t + u*u_x - nu*u_xx

xt_f = tf.random.normal((10000, 2))

print( residualValOfPDE(xt_f[1:3,:])) 

tf.Tensor(
[[0.5751909]
 [0.       ]], shape=(2, 1), dtype=float32)

If you want to evaluate a different batch size, change batch_dim when constructing NuLayer and rebuild the model:

nu = NuLayer(batch_dim=4)(inp_1)
out = tf.keras.layers.Concatenate(axis=1)([x, nu])
model = tf.keras.Model(inputs=inp_1, outputs=out)

print( residualValOfPDE(xt_f[1:5,:]))

tf.Tensor(
[[-0.51205623]
 [ 0.        ]
 [ 0.        ]
 [ 0.        ]], shape=(4, 1), dtype=float32)
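Since nu is meant to be shared across the whole profile, an alternative (my variation, not part of the code above) is to keep a single trainable scalar and broadcast it to whatever batch size comes in, which removes the fixed batch_dim entirely:

class ScalarNu(tf.keras.layers.Layer):

  def build(self, input_shape):
    # a single shared diffusion coefficient for the whole profile
    self.nu = self.add_weight(name='nu', shape=(1, 1),
                              initializer='ones', trainable=True)

  def call(self, inputs):
    # tile the scalar so the output batch dimension matches the inputs
    return tf.tile(self.nu, [tf.shape(inputs)[0], 1])

With ScalarNu in place of NuLayer, the same model accepts any number of collocation points without being rebuilt.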