A Deep Dive into EfficientNet's PyTorch Implementation
Introduction
EfficientNet is an efficient convolutional neural network architecture proposed by the Google Research team in 2019. By systematically balancing a network's depth, width, and input resolution, it achieves strong accuracy under limited computational budgets. This article walks through a PyTorch implementation of EfficientNet in detail, to help readers understand both the design principles and the implementation details of this architecture.
Core Ideas of EfficientNet
EfficientNet's central innovation is compound scaling, a method that uses a single set of coefficients to jointly scale three dimensions of the network:
- Depth: the number of layers
- Width: the number of channels per layer
- Resolution: the spatial resolution of the input image
Scaling all three dimensions in balance lets the model extract the best possible accuracy from a fixed computational budget.
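Concretely, the paper scales each dimension exponentially with a single coefficient φ, using constants found by a small grid search on the base network:

depth: d = α^φ, width: w = β^φ, resolution: r = γ^φ, subject to α · β² · γ² ≈ 2 (with α = 1.2, β = 1.1, γ = 1.15)

Because FLOPs grow roughly in proportion to d · w² · r², each unit increase in φ approximately doubles the compute budget.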
Implementation Walkthrough
1. Base Configuration
The implementation starts by defining the base model configuration and the hyperparameters of each variant (b0 through b7):
base_model = [
    # expand_ratio, channels, repeats, stride, kernel_size
    [1, 16, 1, 1, 3],
    [6, 24, 2, 2, 3],
    [6, 40, 2, 2, 5],
    [6, 80, 3, 2, 3],
    [6, 112, 3, 1, 5],
    [6, 192, 4, 2, 5],
    [6, 320, 1, 1, 3],
]

phi_values = {
    # tuple of: (phi_value, resolution, drop_rate)
    "b0": (0, 224, 0.2),
    "b1": (0.5, 240, 0.2),
    "b2": (1, 260, 0.3),
    "b3": (2, 300, 0.3),
    "b4": (3, 380, 0.4),
    "b5": (4, 456, 0.4),
    "b6": (5, 528, 0.5),
    "b7": (6, 600, 0.5),
}
2. Core Components
The CNNBlock Module
import torch
import torch.nn as nn

class CNNBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride, padding, groups=1):
        super(CNNBlock, self).__init__()
        self.cnn = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size,
            stride,
            padding,
            groups=groups,  # groups=in_channels turns this into a depthwise conv
            bias=False,     # bias is redundant before BatchNorm
        )
        self.bn = nn.BatchNorm2d(out_channels)
        self.silu = nn.SiLU()  # SiLU <-> Swish

    def forward(self, x):
        return self.silu(self.bn(self.cnn(x)))
This basic convolution block consists of:
- a convolution layer (with optional grouped convolution)
- a batch normalization layer
- a SiLU activation (identical to the Swish activation)
A quick shape check follows.
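As a sanity check (the sizes below are chosen purely for illustration), the block halves the spatial resolution when stride=2, and setting groups equal to the channel count makes it a depthwise convolution:

block = CNNBlock(in_channels=3, out_channels=32, kernel_size=3, stride=2, padding=1)
x = torch.randn(1, 3, 224, 224)
print(block(x).shape)  # torch.Size([1, 32, 112, 112])

dw = CNNBlock(32, 32, kernel_size=3, stride=1, padding=1, groups=32)  # depthwise
print(dw(block(x)).shape)  # torch.Size([1, 32, 112, 112])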
The Squeeze-Excitation Module
class SqueezeExcitation(nn.Module):
    def __init__(self, in_channels, reduced_dim):
        super(SqueezeExcitation, self).__init__()
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # C x H x W -> C x 1 x 1
            nn.Conv2d(in_channels, reduced_dim, 1),
            nn.SiLU(),
            nn.Conv2d(reduced_dim, in_channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.se(x)
The SE module first captures channel-wise global information with adaptive average pooling, then learns inter-channel dependencies through two 1x1 convolutions (which act as fully connected layers on the pooled features), and finally produces per-channel attention weights with a Sigmoid.
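A quick shape check (channel sizes chosen purely for illustration) shows the attention weights collapsing the spatial dimensions to 1x1 and then broadcasting back over the feature map:

se = SqueezeExcitation(in_channels=40, reduced_dim=10)
x = torch.randn(2, 40, 56, 56)
print(se.se(x).shape)  # attention weights: torch.Size([2, 40, 1, 1])
print(se(x).shape)     # reweighted features: torch.Size([2, 40, 56, 56])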
Inverted Residual Block
class InvertedResidualBlock(nn.Module):
    def __init__(
        self,
        in_channels,
        out_channels,
        kernel_size,
        stride,
        padding,
        expand_ratio,
        reduction=4,        # reduction ratio for squeeze-excitation
        survival_prob=0.8,  # for stochastic depth
    ):
        super(InvertedResidualBlock, self).__init__()
        self.survival_prob = survival_prob
        self.use_residual = in_channels == out_channels and stride == 1
        hidden_dim = in_channels * expand_ratio
        self.expand = in_channels != hidden_dim
        reduced_dim = int(in_channels / reduction)

        if self.expand:
            # 1x1 pointwise convolution that expands the channel dimension
            self.expand_conv = CNNBlock(
                in_channels,
                hidden_dim,
                kernel_size=1,
                stride=1,
                padding=0,
            )

        self.conv = nn.Sequential(
            CNNBlock(
                hidden_dim,
                hidden_dim,
                kernel_size,
                stride,
                padding,
                groups=hidden_dim,  # depthwise convolution
            ),
            SqueezeExcitation(hidden_dim, reduced_dim),
            nn.Conv2d(hidden_dim, out_channels, 1, bias=False),  # 1x1 projection
            nn.BatchNorm2d(out_channels),
        )
The inverted residual block was introduced in MobileNetV2 and is further refined in EfficientNet. It proceeds in four stages (a numeric example follows the list):
- a 1x1 convolution first expands the number of channels (expand phase)
- a depthwise convolution then filters each channel independently
- an SE module reweights the channels
- a final 1x1 convolution projects the channels back down (project phase)
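For example, with in_channels = 24 and expand_ratio = 6, the block expands to hidden_dim = 144 channels, runs the depthwise convolution with groups = 144, squeezes through reduced_dim = 24 / 4 = 6 channels inside the SE module, and finally projects down to out_channels with the 1x1 convolution.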
3. Stochastic Depth
def stochastic_depth(self, x):
    if not self.training:
        return x
    # One Bernoulli sample per example: keep the branch with prob. survival_prob
    binary_tensor = (
        torch.rand(x.shape[0], 1, 1, 1, device=x.device) < self.survival_prob
    )
    # Rescaling by survival_prob keeps the expected activation unchanged
    return torch.div(x, self.survival_prob) * binary_tensor
Stochastic depth is a regularization technique that randomly drops the output of some residual blocks during training, which helps prevent overfitting; dividing by survival_prob rescales the surviving outputs so their expectation is unchanged. At inference time, every residual branch is kept.
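The article's snippets omit the block's forward method; a minimal sketch consistent with the components above applies stochastic depth only on the residual path:

def forward(self, inputs):
    x = self.expand_conv(inputs) if self.expand else inputs
    if self.use_residual:
        return self.stochastic_depth(self.conv(x)) + inputs
    return self.conv(x)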
4. The EfficientNet Main Network
from math import ceil

class EfficientNet(nn.Module):
    def __init__(self, version, num_classes):
        super(EfficientNet, self).__init__()
        width_factor, depth_factor, dropout_rate = self.calculate_factors(version)
        last_channels = ceil(1280 * width_factor)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.features = self.create_features(width_factor, depth_factor, last_channels)
        self.classifier = nn.Sequential(
            nn.Dropout(dropout_rate),
            nn.Linear(last_channels, num_classes),
        )
The main network class assembles the complete EfficientNet architecture by:
- computing the width and depth scaling factors for the requested version
- building the feature-extraction backbone via create_features (a sketch follows the list)
- attaching the dropout-plus-linear classification head
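create_features is referenced above but not shown. The sketch below is one plausible implementation consistent with base_model and the scaling factors; channel counts are rounded to a multiple of 4 so that the SE module's reduced dimension stays an integer:

def create_features(self, width_factor, depth_factor, last_channels):
    channels = int(32 * width_factor)
    # Stem: 3x3 conv with stride 2
    features = [CNNBlock(3, channels, kernel_size=3, stride=2, padding=1)]
    in_channels = channels

    for expand_ratio, channels, repeats, stride, kernel_size in base_model:
        out_channels = 4 * ceil(int(channels * width_factor) / 4)
        layers_repeats = ceil(repeats * depth_factor)
        for layer in range(layers_repeats):
            features.append(
                InvertedResidualBlock(
                    in_channels,
                    out_channels,
                    kernel_size=kernel_size,
                    stride=stride if layer == 0 else 1,  # downsample only once per stage
                    padding=kernel_size // 2,            # "same" padding for odd kernels
                    expand_ratio=expand_ratio,
                )
            )
            in_channels = out_channels

    # Head: 1x1 conv up to last_channels before pooling
    features.append(CNNBlock(in_channels, last_channels, kernel_size=1, stride=1, padding=0))
    return nn.Sequential(*features)

def forward(self, x):
    x = self.pool(self.features(x))
    return self.classifier(x.view(x.shape[0], -1))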
Implementing Compound Scaling
def calculate_factors(self, version, alpha=1.2, beta=1.1):
    phi, res, drop_rate = phi_values[version]
    depth_factor = alpha**phi
    width_factor = beta**phi
    return width_factor, depth_factor, drop_rate
This method implements EfficientNet's core innovation, compound scaling: the version-specific phi value uniformly controls how the network's depth and width are scaled. Note that in this implementation the input resolution is not derived from a scaling factor; it is read directly per version from the phi_values table.
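As a concrete example, version "b1" has φ = 0.5, giving depth_factor = 1.2^0.5 ≈ 1.10 and width_factor = 1.1^0.5 ≈ 1.05: every stage's repeat count is scaled by about 1.10 (then rounded up with ceil) and its channel count by about 1.05, while the b1 input resolution of 240 comes straight from the table.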
Usage Example
def test():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    version = "b0"
    phi, res, drop_rate = phi_values[version]
    num_examples, num_classes = 4, 10
    x = torch.randn((num_examples, 3, res, res)).to(device)
    model = EfficientNet(
        version=version,
        num_classes=num_classes,
    ).to(device)
    print(model(x).shape)  # (num_examples, num_classes)

if __name__ == "__main__":
    test()
The test code builds an EfficientNet-b0, feeds it a batch of four random images at the model's native 224x224 resolution, and prints the output shape, which should be torch.Size([4, 10]).
Summary
This article walked through a PyTorch implementation of EfficientNet, focusing on:
- how the compound scaling method is implemented
- the design of the inverted residual block and the SE module
- regularization techniques such as stochastic depth
- how the different model variants are configured
Through a carefully designed building block and a systematic scaling strategy, EfficientNet strikes an excellent balance between computational efficiency and model accuracy. Understanding this implementation makes it easier to apply or adapt the architecture in real projects.