[Key Topic] nn.Conv2d Parameter Settings

This article explains in detail the key parameters of a convolution layer in PyTorch: in_channels and out_channels are the numbers of input and output channels, kernel_size defines the size of the convolution kernel, stride is the step of the sliding window, padding controls image padding, dilation controls dilated convolution, and groups relates to grouped convolution. Examples illustrate how the groups parameter affects the convolution operation.


reference

 

in_channels
  This one is easy to understand: it is simply the C in the four-dimensional input tensor [N, C, H, W], i.e. the number of channels of the input tensor. This argument is required to determine the shape of the weights and the other learnable parameters.

out_channels
  Also easy to understand: the desired number of channels of the four-dimensional output tensor.
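
A minimal sketch (the tensor sizes here are made up for illustration) showing how in_channels and out_channels map the C dimension of an [N, C, H, W] input:

```python
import torch
import torch.nn as nn

# A batch of N=4 images with C=3 channels and H=W=32
x = torch.randn(4, 3, 32, 32)

# in_channels must match C of the input; out_channels sets C of the output
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

print(conv(x).shape)      # torch.Size([4, 16, 32, 32])
print(conv.weight.shape)  # torch.Size([16, 3, 3, 3]) = [out_channels, in_channels, kH, kW]
```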

kernel_size
  The size of the convolution kernel. We usually use kernels such as 5x5 or 3x3 whose two dimensions are equal, in which case it is enough to write kernel_size = 5. If the two dimensions differ, e.g. a 3x5 kernel, write kernel_size = (3, 5); note that this is written as a tuple rather than a list. A small sketch of both forms follows.
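
A sketch of both forms (the input size is arbitrary, chosen only to show the resulting shapes):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# Square kernel: a single int is enough
conv_square = nn.Conv2d(3, 8, kernel_size=5)
print(conv_square(x).shape)   # torch.Size([1, 8, 28, 28])

# Rectangular 3x5 kernel: pass a (kH, kW) tuple
conv_rect = nn.Conv2d(3, 8, kernel_size=(3, 5))
print(conv_rect(x).shape)     # torch.Size([1, 8, 30, 28])
```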

stride = 1
  The distance the kernel moves at each step as it slides over the image window, i.e. the stride. The concept is the same as in TensorFlow and other frameworks, so nothing more needs to be said.

padding = 0
  The biggest difference between the PyTorch and TensorFlow convolution layers lies in padding.
  Padding means padding the image; the int constant gives the amount of padding (in rows and columns) and defaults to 0. Note that padding is applied on all four sides of the image: with padding = 1, an original 32x32 image becomes 34x34 after padding, not 33x33.
  PyTorch differs from TensorFlow in that TensorFlow offers padding modes such as same and valid, each with its own formula for the output size, whereas PyTorch requires the amount of padding to be entered by hand. The advantage of the PyTorch approach is that there is a single formula for the output size:

O = floor((I + 2P - D × (K - 1) - 1) / S) + 1

where I is the input size, O the output size, K the kernel size, P the padding, S the stride and D the dilation.

  Of course, the formula above is rather involved and hard to remember. In most cases kernel_size and padding are the same in both dimensions and dilated convolution is not used (dilation defaults to 1), so it is enough to remember

O = floor((I - K + 2P) / S) + 1

i.e. the formula taught in deep-learning courses.
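
A quick sketch that checks this formula against an actual layer (the sizes below are just an example):

```python
import torch
import torch.nn as nn

I, K, P, S = 32, 3, 1, 2           # input size, kernel size, padding, stride
x = torch.randn(1, 3, I, I)

conv = nn.Conv2d(3, 8, kernel_size=K, stride=S, padding=P)
print(conv(x).shape)               # torch.Size([1, 8, 16, 16])

O = (I - K + 2 * P) // S + 1       # the simplified formula
print(O)                           # 16
```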

dilation = 1
  This parameter decides whether dilated (atrous) convolution is used; the default is 1 (not used). Intuitively, the value is the distance from one kernel element to the next when the kernel is laid over the input, so the default is naturally 1: two different kernel elements cannot occupy the same position (which a value of 0 would imply).
  For a more vivid and intuitive picture, see the Dilated convolution animations on GitHub, which illustrate the dilation = 2 case.
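
A minimal sketch of how dilation changes the output size, consistent with the formula above:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# Ordinary 3x3 convolution
conv = nn.Conv2d(3, 8, kernel_size=3, dilation=1)
print(conv(x).shape)    # torch.Size([1, 8, 30, 30])

# Dilated 3x3 convolution: the effective receptive field grows to 5x5
conv_d = nn.Conv2d(3, 8, kernel_size=3, dilation=2)
print(conv_d(x).shape)  # torch.Size([1, 8, 28, 28])
```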

groups = 1
  Decides whether grouped convolution is used; see the detailed explanation of the groups parameter below.

bias = True
  Whether to add a bias as one of the learnable parameters; the default is True.

padding_mode = 'zeros'
  The padding mode; zero padding is used by default.
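
A small sketch of these last two parameters ('reflect' is just one of the alternative padding modes PyTorch provides):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)

# bias=False: no bias parameter is created (common when a BatchNorm layer follows)
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
print(conv.bias)         # None

# padding_mode='reflect': pad with mirrored border values instead of zeros
conv_r = nn.Conv2d(3, 8, kernel_size=3, padding=1, padding_mode='reflect')
print(conv_r(x).shape)   # torch.Size([1, 8, 32, 32])
```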


Detailed explanation of the groups parameter

reference

Take MobileNet, which relies on depthwise (grouped) convolutions, as an example.

The idea is that the input channels and output channels are split into matching groups. The default value is 1, meaning all input and output channels together form a single group.

For example, suppose the input data has size 90x100x100x32, i.e. 32 channels, and is passed through a 3x3 convolution with 48 output channels.

With the default groups = 1 this is the fully connected convolution layer: every output channel is computed from all input channels.

If groups is 2, the 32 input channels are split into two groups of 16 and the 48 output channels into two groups of 24. The first group of 24 output channels is fully convolved with the first group of 16 input channels, and the second group of 24 output channels with the second group of 16 input channels.

In the extreme case where the numbers of input and output channels are equal, say 24, and groups is also 24, each output kernel convolves only with its single corresponding input channel; this is the depthwise convolution used in MobileNet.
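
A sketch of the weight shapes for the 32-to-48-channel example above (batch and spatial sizes are made up):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 32, 100, 100)

# groups=1: every output channel is connected to all 32 input channels
conv_full = nn.Conv2d(32, 48, kernel_size=3, padding=1, groups=1)
print(conv_full.weight.shape)     # torch.Size([48, 32, 3, 3])

# groups=2: each output channel only sees 32 / 2 = 16 input channels
conv_grouped = nn.Conv2d(32, 48, kernel_size=3, padding=1, groups=2)
print(conv_grouped.weight.shape)  # torch.Size([48, 16, 3, 3])

# The output shape is the same; only the connectivity and parameter count differ
print(conv_full(x).shape, conv_grouped(x).shape)  # both torch.Size([1, 48, 100, 100])
```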

For example, with input_size = [N, C, H, W] = [1, 6, 1, 1], suppose you build conv = nn.Conv2d(in_channels=6, out_channels=6, kernel_size=1, stride=1, padding=0, groups=?, bias=False).

When groups = 1 this is the default convolution layer: conv.weight.data.size() is [6, 6, 1, 1], i.e. 6 * 6 = 36 parameters in total. When groups = 3, each group involves only in_channels/groups = 2 input channels and out_channels/groups = 2 output channels, so the weight size becomes [6, 2, 1, 1], i.e. 6 * 2 = 12 parameters; the input is effectively split into 3 (= groups) groups that are convolved separately, and the group outputs are concatenated at the end.

groups determines how many groups the original input is split into; each group then produces out_channels / groups of the output channels, and the group outputs are concatenated. This also explains why both in_channels and out_channels must be divisible by groups.
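
A sketch that verifies the parameter counts described above for the [1, 6, 1, 1] example:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 6, 1, 1)

conv_g1 = nn.Conv2d(6, 6, kernel_size=1, stride=1, padding=0, groups=1, bias=False)
conv_g3 = nn.Conv2d(6, 6, kernel_size=1, stride=1, padding=0, groups=3, bias=False)

print(conv_g1.weight.size())  # torch.Size([6, 6, 1, 1]) -> 36 parameters
print(conv_g3.weight.size())  # torch.Size([6, 2, 1, 1]) -> 12 parameters

# Output shapes are identical; only the connectivity (and parameter count) differs
print(conv_g1(x).shape, conv_g3(x).shape)  # torch.Size([1, 6, 1, 1]) each
```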
