Mismatch in expected results of convolution using Conv2d from Pytorch?

Time:11-12

I am experimenting with the Conv2d layer implemented in PyTorch. In the code sample below, the input is a tensor with 1 batch, 2 input channels, and a 3x3 spatial size. I build a convolution layer with a single output channel whose kernel is exactly the size of the input (so the stride doesn't matter), and I fix all the weights at 1.

Basically this should just sum up the input tensor. Why do the last two printed values differ slightly? Is this the result of some sort of floating-point error?

import torch
import torch.nn as nn

m = nn.Conv2d(2, 1, 3, stride=2)  # 2 in-channels, 1 out-channel, 3x3 kernel
input = torch.randn(1, 2, 3, 3)
m.weight = torch.nn.Parameter(torch.ones_like(m.weight))  # set every weight to 1

output = m(input)
print(input)
print(torch.sum(input))
print(output)

CodePudding user response:

By default, Conv2d includes a bias term, and it is not initialized to zero (it is drawn from a uniform distribution), so your output is the sum of the input plus that bias.

Try this

import torch
import torch.nn as nn

m = nn.Conv2d(2, 1, 3, stride=2)
input = torch.randn(1, 2, 3, 3)
m.weight = torch.nn.Parameter(torch.ones_like(m.weight))
nn.init.zeros_(m.bias)  # zero the bias so the layer computes a pure sum

output = m(input)
print(input)
print(torch.sum(input))
print(output)
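
Alternatively, you can construct the layer without a bias term at all by passing bias=False, which avoids the issue entirely. A minimal sketch (the torch.allclose check is mine, added to verify the output matches the plain sum up to floating-point rounding):

```python
import torch
import torch.nn as nn

# bias=False removes the bias parameter entirely, so no zeroing is needed.
m = nn.Conv2d(2, 1, 3, stride=2, bias=False)
m.weight = nn.Parameter(torch.ones_like(m.weight))

input = torch.randn(1, 2, 3, 3)
output = m(input)

# With all-ones weights and no bias, the convolution reduces to a sum
# over the whole input; allclose tolerates tiny float rounding differences.
print(torch.allclose(output.sum(), input.sum()))
```

Note that even with the bias handled, the printed values may still differ in the last few digits, since the convolution and torch.sum may accumulate the additions in different orders.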