1. Overview
Andrew Ng's deep learning course covers how to compute the output size of convolutional layers. This post records those formulas and verifies each one with PyTorch code.
2. Standard Convolution
Let the input image size be $n \times n$, the kernel size $f \times f$, the stride $s$, and the padding $p$. The output size is
$$\left(\frac{n+2p-f}{s}+1\right) \times \left(\frac{n+2p-f}{s}+1\right)$$
Let $n=224, f=3, s=1, p=1$.
Plugging into the formula gives an output size of $224$.
Verified in code:
```python
import torch
import torch.nn as nn

class conv(nn.Module):
    def __init__(self, ch_in=3, ch_out=3):
        super(conv, self).__init__()
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1, bias=True)

    def forward(self, x):
        x = self.conv1(x)
        return x

t = conv()
a = torch.randn(32, 3, 224, 224)
print(t(a).shape)
# >>> torch.Size([32, 3, 224, 224])
```
Let $n=224, f=3, s=2, p=2$.
Plugging into the formula gives an output size of $113.5$.
Verified in code:
```python
import torch
import torch.nn as nn

class conv(nn.Module):
    def __init__(self, ch_in=3, ch_out=3):
        super(conv, self).__init__()
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=2, padding=2, bias=True)

    def forward(self, x):
        x = self.conv1(x)
        return x

t = conv()
a = torch.randn(32, 3, 224, 224)
print(t(a).shape)
# >>> torch.Size([32, 3, 113, 113])
```
Note that pixel counts cannot be fractional: PyTorch floors the result, which is why $113.5$ becomes $113$ above. Keep this in mind when choosing the parameters of a convolutional layer.
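As a quick sketch of this floor behavior (the helper name `conv_out_size` is my own), the output size PyTorch produces can be computed as:

```python
def conv_out_size(n, f, s, p):
    # PyTorch floors fractional output sizes: floor((n + 2p - f) / s) + 1
    return (n + 2 * p - f) // s + 1

print(conv_out_size(224, 3, 1, 1))  # 224
print(conv_out_size(224, 3, 2, 2))  # 113 (113.5 floored)
```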
3. Dilated Convolution
A dilated convolution inserts zeros into the kernel. The parameter dilation_rate (default 1, called `dilation` in PyTorch) enlarges the effective kernel size; everything else is computed as for a standard convolution. The effective kernel size is
$$kernel\_size = dilation\_rate \times (kernel\_size - 1) + 1$$
Let $n=224, f=3, s=1, p=1, dilation\_rate=2$.
Plugging into the formulas gives $kernel\_size = 5$ and an output size of $222$.
Verified in code:
```python
import torch
import torch.nn as nn

class conv(nn.Module):
    def __init__(self, ch_in=3, ch_out=3):
        super(conv, self).__init__()
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1, dilation=2, bias=True)

    def forward(self, x):
        x = self.conv1(x)
        return x

t = conv()
a = torch.randn(32, 3, 224, 224)
print(t(a).shape)
# >>> torch.Size([32, 3, 222, 222])
```
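The same calculation can be sketched in a small helper (names are my own) that first computes the effective kernel size and then applies the standard formula:

```python
def dilated_conv_out_size(n, f, s, p, d):
    # effective kernel size after dilation: d*(f - 1) + 1
    f_eff = d * (f - 1) + 1
    # then the standard convolution formula (floored, as PyTorch does)
    return (n + 2 * p - f_eff) // s + 1

print(dilated_conv_out_size(224, 3, 1, 1, 2))  # 222
```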
4. Transposed Convolution
A transposed convolution can be thought of as a standard convolution with its input and output swapped.
The output size is
$$s \times (n-1) + f - 2p$$
Let $n=5, f=3, s=2, p=0$.
Plugging in gives an output size of $11$.
Verified in code:
```python
import torch
import torch.nn as nn

class conv(nn.Module):
    def __init__(self, ch_in=3, ch_out=3):
        super(conv, self).__init__()
        self.conv1 = nn.ConvTranspose2d(in_channels=ch_in, out_channels=ch_out, kernel_size=3, stride=2, padding=0)

    def forward(self, x):
        x = self.conv1(x)
        return x

t = conv()
a = torch.randn(32, 3, 5, 5)
print(t(a).shape)
# >>> torch.Size([32, 3, 11, 11])
```
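As a sketch of the "swapped input and output" view (helper names are my own), a transposed convolution with the same $f$, $s$, $p$ maps a regular convolution's output size back to its input size. The round trip is exact here because flooring loses nothing; in general PyTorch's `output_padding` parameter on `ConvTranspose2d` exists to resolve the ambiguity when it does.

```python
def conv_out_size(n, f, s, p):
    # standard convolution: floor((n + 2p - f) / s) + 1
    return (n + 2 * p - f) // s + 1

def transposed_conv_out_size(n, f, s, p):
    # transposed convolution: s*(n - 1) + f - 2p
    return s * (n - 1) + f - 2 * p

# With f=3, s=2, p=0: a convolution maps 11 -> 5,
# and the transposed convolution maps 5 back -> 11.
print(conv_out_size(11, 3, 2, 0))            # 5
print(transposed_conv_out_size(5, 3, 2, 0))  # 11
```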