Can autograd handle repeated use of the same layer in the same depth of the computation graph?


I have a network that works as follows: the input is split in half; the first half is put through some convolutional layers l1, then the second half is put through the same layers l1 (after the output for the first half has been computed). The two output representations are then concatenated and put through additional layers l2 at once. My question is similar to "Can autograd in pytorch handle a repeated use of a layer within the same module?", but the setting is not quite the same: in that question, the same layer was reused at different depths of the computation graph, whereas here the same layer is used twice at the same depth. Does autograd handle this properly? That is, is the backpropagation error for l1 computed with respect to both of its forward passes, and are the weights adapted with respect to both of them at once? A minimal sketch of what I mean is shown below.
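For concreteness, here is a minimal sketch of the setup; the module name, layer shapes, and the split along the last dimension are illustrative placeholders, not part of the actual network:

```python
import torch
import torch.nn as nn

class SharedHalfNet(nn.Module):
    def __init__(self):
        super().__init__()
        # l1 is shared: it is applied to each half of the input in turn
        self.l1 = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # l2 processes the concatenated representations
        self.l2 = nn.Sequential(
            nn.Conv1d(16, 4, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, x):
        first, second = x.chunk(2, dim=-1)  # split the input in half
        out1 = self.l1(first)               # first pass through l1
        out2 = self.l1(second)              # second pass through the same l1
        return self.l2(torch.cat([out1, out2], dim=1))
```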

CodePudding user response:

Autograd does not care how many times you "use" a module; that is not how it works. It just builds a graph of the dependencies behind the scenes. Using something twice simply produces a graph that is not a straight line, but it does not affect the backward pass: the gradients from both uses of l1 are accumulated into its parameters' .grad, so the weights are adapted with respect to both forward passes at once.
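Here is a small self-contained check (using a Linear layer as a stand-in for l1) showing that the gradient from the combined forward pass equals the sum of the gradients from each individual use:

```python
import torch
import torch.nn as nn

# A shared layer used twice at the same depth of the graph.
l1 = nn.Linear(2, 2, bias=False)
a = torch.randn(1, 2)
b = torch.randn(1, 2)

# Both uses feed into one loss; backward accumulates into l1.weight.grad.
torch.cat([l1(a), l1(b)], dim=1).sum().backward()
grad_both = l1.weight.grad.clone()

# Gradient from the first use alone.
l1.weight.grad = None
l1(a).sum().backward()
grad_a = l1.weight.grad.clone()

# Gradient from the second use alone.
l1.weight.grad = None
l1(b).sum().backward()
grad_b = l1.weight.grad.clone()

print(torch.allclose(grad_both, grad_a + grad_b))  # True
```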
