Mask Scoring R-CNN Explained

This article walks through an enhanced version of Mask R-CNN that adds an extra MaskIoU head to predict the intersection-over-union between the predicted mask and the ground-truth mask, improving accuracy on object detection and instance segmentation. It explains the key function project_masks_on_boxes in mask_head/loss.py, and how the mask ratio and mask targets are computed, giving a clear view of how MaskIoU works.


Introduction

Compared with Mask R-CNN, Mask Scoring R-CNN adds one extra head that computes the MaskIoU.
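In the paper, this extra head takes the RoI feature concatenated with the predicted mask (max-pooled from 28×28 down to 14×14), passes it through four 3×3 convolutions and three fully connected layers, and regresses a per-class IoU score. Below is a rough PyTorch sketch based on that description; the layer names, the stride-2 placement on the last conv, and the channel counts are illustrative assumptions, not the repo's actual maskiou_head module:

```python
import torch
from torch import nn

class MaskIoUHead(nn.Module):
    """Sketch of the extra MaskIoU head, following the paper's description."""
    def __init__(self, num_classes, feat_channels=256):
        super().__init__()
        self.maxpool = nn.MaxPool2d(2, 2)  # predicted 28x28 mask -> 14x14
        convs = []
        in_ch = feat_channels + 1          # RoI feature + 1 mask channel
        for i in range(4):
            stride = 2 if i == 3 else 1    # last conv downsamples 14x14 -> 7x7
            convs.append(nn.Conv2d(in_ch, 256, 3, stride=stride, padding=1))
            convs.append(nn.ReLU(inplace=True))
            in_ch = 256
        self.convs = nn.Sequential(*convs)
        self.fcs = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, num_classes),  # one IoU score per class
        )

    def forward(self, roi_feature, mask_pred):
        # roi_feature: (N, 256, 14, 14); mask_pred: (N, 1, 28, 28)
        x = torch.cat([roi_feature, self.maxpool(mask_pred)], dim=1)
        return self.fcs(self.convs(x))
```

At inference time, the predicted IoU score is multiplied with the classification score to re-rank the masks.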

 

mask_head/loss.py

rle_for_fullarea computes the area of the full ground-truth mask
rle_for_box_area computes the area of the ground-truth mask cropped to the proposal box
mask_ratio is the fraction of the ground-truth mask that falls inside the proposal, i.e. the cropped area divided by the full area
mask_targets is the ground-truth mask cropped to the proposal box, not the full ground-truth mask; dividing its summed area by mask_ratio recovers the full ground-truth mask area
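A toy calculation (numbers made up for illustration, plain Python rather than torch) shows why dividing by the mask ratio recovers the full ground-truth area:

```python
# Toy numbers illustrating the ratio trick, not taken from the repo.
full_area = 200.0    # area of the entire ground-truth mask, in pixels
box_area = 150.0     # area of the GT mask clipped to the proposal box
mask_ratio = box_area / full_area  # fraction of the GT mask inside the proposal: 0.75

# mask_targets only cover the proposal, so their summed area equals box_area;
# dividing that sum by mask_ratio recovers the full ground-truth mask area.
recovered_full_area = box_area / mask_ratio  # 200.0 again
```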

 

# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import torch
from torch.nn import functional as F
import pycocotools.mask as mask_util

from maskrcnn_benchmark.layers import smooth_l1_loss
from maskrcnn_benchmark.modeling.matcher import Matcher
from maskrcnn_benchmark.structures.boxlist_ops import boxlist_iou
from maskrcnn_benchmark.modeling.utils import cat
import numpy as np
import cv2

def project_masks_on_boxes(segmentation_masks, proposals, discretization_size, maskiou_on):
    """
    Given segmentation masks and the bounding boxes corresponding
    to the location of the masks in the image, this function
    crops and resizes the masks in the position defined by the
    boxes. This prepares the masks for them to be fed to the
    loss computation as the targets. If the MaskIoU head is enabled,
    the mask ratios needed for the MaskIoU targets are also computed here.

    Arguments:
        segmentation_masks: an instance of SegmentationMask
        proposals: an instance of BoxList
    """
    masks = []
    mask_ratios = []
    M = discretization_size
    device = proposals.bbox.device
    proposals = proposals.convert("xyxy")
    assert segmentation_masks.size == proposals.size, "{}, {}".format(
        segmentation_masks, proposals
    )
    # TODO put the proposals on the CPU, as the representation for the
    # masks is not efficient GPU-wise (possibly several small tensors for
    # representing a single instance mask)
    proposals = proposals.bbox.to(torch.device("cpu"))
    for segmentation_mask, proposal in zip(segmentation_masks, proposals):
        # crop the masks, resize them to the desired resolution and
        # then convert them to the tensor representation,
        # instead of the list representation that was used
        cropped_mask = segmentation_mask.crop(proposal)
        scaled_mask = cropped_mask.resize((M, M))
        mask = scaled_mask.convert(mode="mask")
        masks.append(mask)
        if maskiou_on:
            
            x1 = int(proposal[0])
            y1 = int(proposal[1])
            x2 = int(proposal[2]) + 1
            y2 = int(proposal[3]) + 1
            for poly_ in segmentation_mask.polygons:
                poly = np.array(poly_, dtype=np.float32)
                x1 = np.minimum(x1, poly[0::2].min())
                x2 = np.maximum(x2, poly[0::2].max())
                y1 = np.minimum(y1, poly[1::2].min())
                y2 = np.maximum(y2, poly[1::2].max())
            img_h = segmentation_mask.size[1]
            img_w = segmentation_mask.size[0]
            x1 = np.maximum(x1, 0)
            x2 = np.minimum(x2, img_w-1)
            y1 = np.maximum(y1, 0)
            y2 = np.minimum(y2, img_h-1)
            segmentation_mask_for_maskratio =  segmentation_mask.crop([x1, y1, x2, y2])
            ''' 
            #type 1
            gt_img_mask = segmentation_mask_for_maskratio.convert(mode='mask')    
            gt_img_mask_area = gt_img_mask.sum().float()
            gt_box_mask = gt_img_mask[int(proposal[1]-y1):int(proposal[3]-y1)+1, int(proposal[0]-x1):int(proposal[2]-x1)+1]
            gt_box_mask_area = gt_box_mask.sum().float()
            mask_ratio = gt_box_mask_area / gt_img_mask_area
            '''
            #type 2
            rle_for_fullarea = mask_util.frPyObjects([p.numpy() for p in segmentation_mask_for_maskratio.polygons], y2-y1, x2-x1)
            full_area = torch.tensor(mask_util.area(rle_for_fullarea).sum().astype(float))
            rle_for_box_area = mask_util.frPyObjects([p.numpy() for p in cropped_mask.polygons], proposal[3]-proposal[1], proposal[2]-proposal[0])
            box_area = torch.tensor(mask_util.area(rle_for_box_area).sum().astype(float))
            mask_ratio = box_area / full_area

            mask_ratios.append(mask_ratio)
    if maskiou_on:
        mask_ratios = torch.stack(mask_ratios, dim=0).to(device, dtype=torch.float32)
    else:
        mask_ratios = None
    if len(masks) == 0:
        return torch.empty(0, dtype=torch.float32, device=device), torch.empty(0, dtype=torch.float32, device=device)
    return torch.stack(masks, dim=0).to(device, dtype=torch.float32), mask_ratios

class MaskRCNNLossComputation(object):
    def __init__(self, proposal_matcher, discretization_size, maskiou_on):
        """
        Arguments:
            proposal_matcher (Matcher)
            discretization_size (int)
        """
        self.proposal_matcher = proposal_matcher
        self.discretization_size = discretization_size
        self.maskiou_on = maskiou_on

    def match_targets_to_proposals(self, proposal, target):
        match_quality_matrix = boxlist_iou(target, proposal)
        matched_idxs = self.proposal_matcher(match_quality_matrix)
        # Mask RCNN needs "labels" and "masks" fields for creating the targets
        target = target.copy_with_fields(["labels", "masks"])
        # get the targets corresponding GT for each proposal
        # NB: need to clamp the indices because we can have a single
        # GT in the image, and matched_idxs can be -2, which goes
        # out of bounds
        matched_targets = target[matched_idxs.clamp(min=0)]
        matched_targets.add_field("matched_idxs", matched_idxs)
        return matched_targets

    def prepare_targets(self, proposals, targets):
        labels = []
        masks = []
        mask_ratios = []
        for proposals_per_image, targets_per_image in zip(proposals, targets):
            matched_targets = self.match_targets_to_proposals(
                proposals_per_image, targets_per_image
            )
            matched_idxs = matched_targets.get_field("matched_idxs")

            labels_per_image = matched_targets.get_field("labels")
            labels_per_image = labels_per_image.to(dtype=torch.int64)

            # this can probably be removed, but is left here for clarity
            # and completeness
            neg_inds = matched_idxs == Matcher.BELOW_LOW_THRESHOLD
            labels_per_image[neg_inds] = 0

            # mask scores are only computed on positive samples
            positive_inds = torch.nonzero(labels_per_image > 0).squeeze(1)
            
            segmentation_masks = matched_targets.get_field("masks")
            segmentation_masks = segmentation_masks[positive_inds]
            
            positive_proposals = proposals_per_image[positive_inds]

            masks_per_image, mask_ratios_per_image = project_masks_on_boxes(
                segmentation_masks, positive_proposals, self.discretization_size, self.maskiou_on
            )
            
            labels.append(labels_per_image)
            masks.append(masks_per_image)
            mask_ratios.append(mask_ratios_per_image)
        return labels, masks, mask_ratios

    def __call__(self, proposals, mask_logits, targets):
        """
        Arguments:
            proposals (list[BoxList])
            mask_logits (Tensor)
            targets (list[BoxList])

        Return:
            mask_loss (Tensor): scalar tensor containing the loss
            If we use maskiou head, we will return extra feature for maskiou head.
        """
        labels, mask_targets, mask_ratios = self.prepare_targets(proposals, targets)

        labels = cat(labels, dim=0)
        mask_targets = cat(mask_targets, dim=0)

        positive_inds = torch.nonzero(labels > 0).squeeze(1)
        labels_pos = labels[positive_inds]

        # torch.mean (in binary_cross_entropy_with_logits) doesn't
        # accept empty tensors, so handle it separately
        if mask_targets.numel() == 0:
            if not self.maskiou_on:
                return mask_logits.sum() * 0
            else:
                selected_index = torch.arange(mask_logits.shape[0], device=labels.device)
                selected_mask = mask_logits[selected_index, labels]
                mask_num, mask_h, mask_w = selected_mask.shape
                selected_mask = selected_mask.reshape(mask_num, 1, mask_h, mask_w)
                return mask_logits.sum() * 0, selected_mask, labels, None
        
        if self.maskiou_on:
            mask_ratios = cat(mask_ratios, dim=0)
            value_eps = 1e-10 * torch.ones(mask_targets.shape[0], device=labels.device)
            mask_ratios = torch.max(mask_ratios, value_eps)
            pred_masks = mask_logits[positive_inds, labels_pos]
            pred_masks[:] = pred_masks > 0
            mask_targets_full_area = mask_targets.sum(dim=[1,2]) / mask_ratios
            mask_ovr = pred_masks * mask_targets
            mask_ovr_area = mask_ovr.sum(dim=[1,2])
            mask_union_area = pred_masks.sum(dim=[1,2]) + mask_targets_full_area - mask_ovr_area
            value_1 = torch.ones(pred_masks.shape[0], device=labels.device)
            value_0 = torch.zeros(pred_masks.shape[0], device=labels.device)
            mask_union_area = torch.max(mask_union_area, value_1)
            mask_ovr_area = torch.max(mask_ovr_area, value_0)
            maskiou_targets = mask_ovr_area / mask_union_area
            
        mask_loss = F.binary_cross_entropy_with_logits(
            mask_logits[positive_inds, labels_pos], mask_targets
        )
        if not self.maskiou_on:
            return mask_loss
        else:
            selected_index = torch.arange(mask_logits.shape[0], device=labels.device)
            selected_mask = mask_logits[selected_index, labels]
            mask_num, mask_h, mask_w = selected_mask.shape
            selected_mask = selected_mask.reshape(mask_num, 1, mask_h, mask_w)
            selected_mask = selected_mask.sigmoid()
            return mask_loss, selected_mask, labels, maskiou_targets


def make_roi_mask_loss_evaluator(cfg):
    matcher = Matcher(
        cfg.MODEL.ROI_HEADS.FG_IOU_THRESHOLD,
        cfg.MODEL.ROI_HEADS.BG_IOU_THRESHOLD,
        allow_low_quality_matches=False,
    )

    loss_evaluator = MaskRCNNLossComputation(
        matcher, cfg.MODEL.ROI_MASK_HEAD.RESOLUTION, cfg.MODEL.MASKIOU_ON
    )

    return loss_evaluator
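The maskiou_targets arithmetic in __call__ can be traced with tiny toy masks. This sketch (plain Python, illustrative numbers only) mirrors the overlap/union computation above, including the division by mask_ratio to get the full ground-truth area:

```python
# 4x4 binary masks inside the proposal box, written as plain Python lists.
pred_mask = [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 0, 0],
             [0, 0, 0, 0]]
gt_mask_in_box = [[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]]
mask_ratio = 0.75  # assume the proposal box covers 75% of the full GT mask

area = lambda m: sum(sum(row) for row in m)
pred_area = area(pred_mask)                          # 4
box_gt_area = area(gt_mask_in_box)                   # 6
gt_full_area = box_gt_area / mask_ratio              # 6 / 0.75 = 8
ovr_area = sum(p * g
               for rp, rg in zip(pred_mask, gt_mask_in_box)
               for p, g in zip(rp, rg))              # 4
union_area = pred_area + gt_full_area - ovr_area     # 4 + 8 - 4 = 8
maskiou_target = ovr_area / union_area               # 0.5
```

Note that the union uses the full ground-truth area, so a prediction that matches the cropped mask perfectly but misses the part of the object outside the box still gets an IoU target below 1.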

Reference

http://openaccess.thecvf.com/content_CVPR_2019/papers/Huang_Mask_Scoring_R-CNN_CVPR_2019_paper.pdf

https://github.com/zjhuang22/maskscoring_rcnn
