PointPillars Code Walkthrough: Data Processing

PointPillars paper: https://arxiv.org/abs/1812.05784

Code: GitHub - open-mmlab/OpenPCDet: OpenPCDet Toolbox for LiDAR-based 3D Object Detection.

Training walkthrough: 基于kitti数据集的3D目标检测算法的训练流程_mini kitti 数据集-CSDN博客

Paper notes: PointPillars文献理解_pillar feature net-CSDN博客

Reproduction results: Pointpillar算法复现结果分析_kitti ap40 results-CSDN博客

Reference blog: (三)PointPillars论文的MMDetection3D代码解读——数据处理篇_pointpillars代码-CSDN博客


Contents

I. Introduction

II. Configuration Files

(1) kitti-3d-3class.py

(2) pointpillars_hv_secfpn_kitti.py

(3) cyclic-40e.py

(4) default_runtime.py

III. KITTI Dataset Processing

(1) kitti_dataset.py

(2) The base class Det3DDataset's parse_data_info() function

(3) KittiDataset's parse_ann_info() function

(4) The base class Det3DDataset's parse_ann_info() function


I. Introduction

Figure 1.1: PointPillars network architecture

PointPillars is a model from industry. Its overall idea is to reuse an image-based processing framework: the point cloud is divided, from the bird's-eye view, into vertical columns (pillars), which are encoded into a pseudo-image, and a 2D detection backbone then extracts features and predicts boxes. This design gives the model a good balance between speed and accuracy. The PointPillars network architecture is shown in Figure 1.1.
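
To make the pseudo-image idea concrete, below is a minimal NumPy sketch of the scatter step: per-pillar feature vectors are written back onto a dense BEV grid that a 2D CNN can consume. This is a simplified illustration, not the OpenMMLab implementation (which does this in the PointPillarsScatter module); the function name, feature dimension, and grid size here are assumptions for the example.

import numpy as np

def scatter_to_pseudo_image(pillar_features, coords, grid_h, grid_w):
    """Scatter per-pillar feature vectors onto a dense BEV grid.

    pillar_features: (P, C) array, one C-dim feature per non-empty pillar
    coords: (P, 2) integer array of (row, col) BEV indices per pillar
    Returns a (C, grid_h, grid_w) "pseudo image" for a 2D CNN.
    """
    num_pillars, channels = pillar_features.shape
    canvas = np.zeros((channels, grid_h, grid_w), dtype=pillar_features.dtype)
    # Empty pillars are simply left as zeros.
    canvas[:, coords[:, 0], coords[:, 1]] = pillar_features.T
    return canvas

# Toy usage: 3 non-empty pillars with 64-dim features on a 496 x 432 grid.
feats = np.random.rand(3, 64).astype(np.float32)
coords = np.array([[10, 20], [100, 200], [495, 431]])
pseudo_image = scatter_to_pseudo_image(feats, coords, 496, 432)
print(pseudo_image.shape)  # (64, 496, 432)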

II. Configuration Files

(1) kitti-3d-3class.py

This file lives at mmdetection3d/configs/_base_/datasets/kitti-3d-3class.py.

Figure 2.1: File location

The full file is as follows:

# dataset settings
dataset_type = 'KittiDataset'
data_root = 'data/kitti/'
class_names = ['Pedestrian', 'Cyclist', 'Car']
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
input_modality = dict(use_lidar=True, use_camera=False)
metainfo = dict(classes=class_names)

# Example to use different file client
# Method 1: simply set the data root and let the file I/O module
# automatically infer from prefix (not support LMDB and Memcache yet)

# data_root = 's3://openmmlab/datasets/detection3d/kitti/'

# Method 2: Use backend_args, file_client_args in versions before 1.1.0
# backend_args = dict(
#     backend='petrel',
#     path_mapping=dict({
#         './data/': 's3://openmmlab/datasets/detection3d/',
#          'data/': 's3://openmmlab/datasets/detection3d/'
#      }))
backend_args = None

db_sampler = dict(
    data_root=data_root,
    info_path=data_root + 'kitti_dbinfos_train.pkl',
    rate=1.0,
    prepare=dict(
        filter_by_difficulty=[-1],
        filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)),
    classes=class_names,
    sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6),
    points_loader=dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=backend_args),
    backend_args=backend_args)

train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,  # x, y, z, intensity
        use_dim=4,
        backend_args=backend_args),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(type='ObjectSample', db_sampler=db_sampler),
    dict(
        type='ObjectNoise',
        num_try=100,
        translation_std=[1.0, 1.0, 0.5],
        global_rot_range=[0.0, 0.0],
        rot_range=[-0.78539816, 0.78539816]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05]),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(
        type='Pack3DDetInputs',
        keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=backend_args),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1333, 800),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(
                type='GlobalRotScaleTrans',
                rot_range=[0, 0],
                scale_ratio_range=[1., 1.],
                translation_std=[0, 0, 0]),
            dict(type='RandomFlip3D'),
            dict(
                type='PointsRangeFilter', point_cloud_range=point_cloud_range)
        ]),
    dict(type='Pack3DDetInputs', keys=['points'])
]
# construct a pipeline for data and gt loading in show function
# please keep its loading function consistent with test_pipeline (e.g. client)
eval_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=backend_args),
    dict(type='Pack3DDetInputs', keys=['points'])
]
train_dataloader = dict(
    batch_size=6,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=2,
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            ann_file='kitti_infos_train.pkl',
            data_prefix=dict(pts='training/velodyne_reduced'),
            pipeline=train_pipeline,
            modality=input_modality,
            test_mode=False,
            metainfo=metainfo,
            # we use box_type_3d='LiDAR' in kitti and nuscenes dataset
            # and box_type_3d='Depth' in sunrgbd and scannet dataset.
            box_type_3d='LiDAR',
            backend_args=backend_args)))
val_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(pts='training/velodyne_reduced'),
        ann_file='kitti_infos_val.pkl',
        pipeline=test_pipeline,
        modality=input_modality,
        test_mode=True,
        metainfo=metainfo,
        box_type_3d='LiDAR',
        backend_args=backend_args))
test_dataloader = dict(
    batch_size=1,
    num_workers=1,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        data_prefix=dict(pts='training/velodyne_reduced'),
        ann_file='kitti_infos_val.pkl',
        pipeline=test_pipeline,
        modality=input_modality,
        test_mode=True,
        metainfo=metainfo,
        box_type_3d='LiDAR',
        backend_args=backend_args))
val_evaluator = dict(
    type='KittiMetric',
    ann_file=data_root + 'kitti_infos_val.pkl',
    metric='bbox',
    backend_args=backend_args)
test_evaluator = val_evaluator

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='Det3DLocalVisualizer', vis_backends=vis_backends, name='visualizer')

Below is a section-by-section walkthrough:

# dataset settings
dataset_type = 'KittiDataset'
data_root = 'data/kitti/'
class_names = ['Pedestrian', 'Cyclist', 'Car']
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
input_modality = dict(use_lidar=True, use_camera=False)
metainfo = dict(classes=class_names)

These are the KITTI dataset settings: the dataset type is KittiDataset, the data root is data/kitti/, and the detection classes are Pedestrian, Cyclist, and Car. point_cloud_range restricts the scene to [x_min, y_min, z_min, x_max, y_max, z_max] = [0, -40, -3, 70.4, 40, 1] in the LiDAR frame; input_modality declares that only LiDAR input is used (no camera); and metainfo stores the class names as dataset meta information.
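
To make the range concrete, here is a small self-contained sketch (not the library code, but the mask mirrors what the PointsRangeFilter transform does) that keeps only the points inside this box:

import numpy as np

# point_cloud_range = [x_min, y_min, z_min, x_max, y_max, z_max]
pcr = np.array([0, -40, -3, 70.4, 40, 1])

points = np.array([
    [10.0,  5.0, -1.0, 0.3],   # inside the range
    [-5.0,  0.0, -1.0, 0.1],   # behind the sensor (x < 0) -> dropped
    [30.0, 45.0, -1.0, 0.2],   # too far to the side (|y| > 40) -> dropped
])

# Keep only points whose (x, y, z) fall inside the range box.
mask = np.all((points[:, :3] >= pcr[:3]) & (points[:, :3] <= pcr[3:]), axis=1)
print(points[mask])  # only the first point survives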

db_sampler = dict(
    data_root=data_root,
    info_path=data_root + 'kitti_dbinfos_train.pkl',
    rate=1.0,
    prepare=dict(
        filter_by_difficulty=[-1],
        filter_by_min_points=dict(Car=5, Pedestrian=10, Cyclist=10)),
    classes=class_names,
    sample_groups=dict(Car=12, Pedestrian=6, Cyclist=6),
    points_loader=dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=backend_args),
    backend_args=backend_args)

db_sampler configures ground-truth database sampling (the GT-Aug augmentation applied by ObjectSample in the training pipeline): data_root is the dataset directory, and info_path points to kitti_dbinfos_train.pkl, a database of ground-truth objects cropped from the training scenes. rate=1.0 is the sampling rate. prepare filters the database, dropping entries with difficulty -1 and objects with too few points (fewer than 5 for Car, fewer than 10 for Pedestrian and Cyclist), and sample_groups specifies how many objects of each class to paste into every training scene (12 cars, 6 pedestrians, 6 cyclists). points_loader tells the sampler how to load the cropped object points.
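
Conceptually, the sampler behaves like the following simplified sketch. This is not the mmdetection3d implementation: the pickle layout, the field names (num_points_in_gt, points), and the helper name are assumptions, and the real sampler also stores object points in separate files and runs box collision tests before pasting.

import pickle
import numpy as np

def sample_gt_objects(dbinfo_path, scene_points, class_name,
                      num_to_sample, min_points):
    """Paste ground-truth objects from a database into a scene (simplified)."""
    with open(dbinfo_path, 'rb') as f:
        db_infos = pickle.load(f)  # assumed: dict of class name -> list of entries

    # prepare-style filtering: drop objects with too few points.
    candidates = [e for e in db_infos[class_name]
                  if e['num_points_in_gt'] >= min_points]

    chosen = np.random.choice(len(candidates), num_to_sample, replace=False)
    for idx in chosen:
        obj_points = candidates[idx]['points']  # assumed inline field
        # A real sampler also checks for box collisions with existing objects
        # and removes scene points inside the pasted box; omitted here.
        scene_points = np.concatenate([scene_points, obj_points], axis=0)
    return scene_points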

train_dataloader = dict(
    batch_size=6,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='RepeatDataset',
        times=2,
        dataset=dict(
            type=dataset_type,
            data_root=data_root,
            ann_file='kitti_infos_train.pkl',
            data_prefix=dict(pts='training/velodyne_reduced'),
            pipeline=train_pipeline,
            modality=input_modality,
            test_mode=False,
            metainfo=metainfo,
            # we use box_type_3d='LiDAR' in kitti and nuscenes dataset
            # and box_type_3d='Depth' in sunrgbd and scannet dataset.
            box_type_3d='LiDAR',
            backend_args=backend_args)))

train_dataloader configures data loading for training. batch_size=6 means each iteration draws 6 samples, num_workers=4 launches 4 worker processes that load data in parallel, and persistent_workers=True keeps those workers alive between epochs. The RepeatDataset wrapper with times=2 repeats the dataset twice per epoch to reduce epoch-restart overhead.

Inside the innermost dataset dict, data_root is the dataset directory, ann_file is the annotation file (kitti_infos_train.pkl), data_prefix points to the reduced point cloud files (training/velodyne_reduced), and pipeline is set to train_pipeline, which loads the point clouds and annotations and applies the data augmentations.
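
These loader options map directly onto PyTorch's DataLoader. The toy example below (an illustrative stand-in dataset, not KittiDataset) shows what batch_size, num_workers, and persistent_workers mean in plain PyTorch:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyPointDataset(Dataset):
    """Stand-in for KittiDataset: returns a random 'point cloud' per index."""
    def __len__(self):
        return 100

    def __getitem__(self, idx):
        return torch.rand(1000, 4)  # 1000 points with (x, y, z, intensity)

if __name__ == '__main__':  # guard needed for multi-process workers on spawn platforms
    loader = DataLoader(
        ToyPointDataset(),
        batch_size=6,             # 6 samples per iteration, as in the config
        num_workers=4,            # 4 subprocesses prefetch data in parallel
        persistent_workers=True,  # keep workers alive across epochs
        shuffle=True)             # what DefaultSampler(shuffle=True) amounts to

    for batch in loader:
        print(batch.shape)  # torch.Size([6, 1000, 4])
        break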

train_pipeline is defined as follows:

train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,  # x, y, z, intensity
        use_dim=4,
        backend_args=backend_args),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(type='ObjectSample', db_sampler=db_sampler),
    dict(
        type='ObjectNoise',
        num_try=100,
        translation_std=[1.0, 1.0, 0.5],
        global_rot_range=[0.0, 0.0],
        rot_range=[-0.78539816, 0.78539816]),
    dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-0.78539816, 0.78539816],
        scale_ratio_range=[0.95, 1.05]),
    dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range),
    dict(type='PointShuffle'),
    dict(
        type='Pack3DDetInputs',
        keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]

The first dict loads the point cloud file and the second loads the 3D boxes and labels. The third through ninth dicts are augmentations: ObjectSample pastes database objects into the scene, ObjectNoise jitters individual ground-truth boxes, RandomFlip3D flips the scene horizontally in BEV with probability 0.5, GlobalRotScaleTrans applies a global rotation (±45°, i.e. ±0.78539816 rad) and scaling (0.95 to 1.05), the two range filters drop points and boxes outside point_cloud_range, and PointShuffle randomizes the point order. The final dict packs the points, boxes, and labels into the model input format.
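
Each entry in the list is built into a transform object, and the transforms are applied one after another to a shared results dict. The sketch below is a simplified stand-in for mmengine's Compose with a made-up PointShuffle transform; it only illustrates the calling convention:

import numpy as np

class PointShuffle:
    """Toy transform: permute the point order in the results dict."""
    def __call__(self, results):
        results['points'] = np.random.permutation(results['points'])
        return results

class SimplePipeline:
    """Minimal stand-in for mmengine's Compose: apply transforms in order."""
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, results):
        for transform in self.transforms:
            results = transform(results)  # each step reads/updates the dict
            if results is None:           # a transform may drop the sample
                return None
        return results

pipeline = SimplePipeline([PointShuffle()])
out = pipeline({'points': np.arange(12, dtype=np.float32).reshape(3, 4)})
print(out['points'].shape)  # (3, 4), rows shuffled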

Both val_dataloader and test_dataloader use test_pipeline as their pipeline, which is defined as follows:

test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='LIDAR',
        load_dim=4,
        use_dim=4,
        backend_args=backend_args),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1333, 800),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(
                type='GlobalRotScaleTrans',
                rot_range=[0, 0],
                scale_ratio_range=[1., 1.],
                translation_std=[0, 0, 0]),
            dict(type='RandomFlip3D'),
            dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range)
        ]),
    dict(type='Pack3DDetInputs', keys=['points'])
]

This pipeline has three entries: the first dict loads the point cloud; the second (MultiScaleFlipAug3D) wraps the test-time transforms, which here are deterministic (identity rotation and scaling, no flip) plus range filtering; and the third packs the points for the model.

(2) pointpillars_hv_secfpn_kitti.py

Code path: mmdetection3d/configs/_base_/models/pointpillars_hv_secfpn_kitti.py. The file opens with the voxelization and pillar feature encoder settings:

voxel_size = [0.16, 0.16, 4]

model = dict(
    type='VoxelNet',
    data_preprocessor=dict(
        type='Det3DDataPreprocessor',
        voxel=True,
        voxel_layer=dict(
            max_num_points=32,  # max_points_per_voxel
            point_cloud_range=[0, -39.68, -3, 69.12, 39.68, 1],
            voxel_size=voxel_size,
            max_voxels=(16000, 40000))),
    voxel_encoder=dict(
        type='PillarFeatureNet',
        in_channels=4,
 
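voxel_size = [0.16, 0.16, 4] together with the model's point_cloud_range [0, -39.68, -3, 69.12, 39.68, 1] fixes the size of the BEV pseudo-image; because the z extent (4 m) equals the voxel height, every voxel is a full-height pillar. Note that the model uses a slightly tighter range than the dataset config's [0, -40, -3, 70.4, 40, 1] so that the grid divides evenly. max_num_points=32 caps the points kept per pillar, max_voxels=(16000, 40000) caps the number of non-empty pillars at train and test time, and in_channels=4 matches the four loaded point dimensions (x, y, z, intensity). The grid arithmetic, as a quick check in plain Python on the config values:

point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = [0.16, 0.16, 4]

grid_x = (point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0]
grid_y = (point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1]
grid_z = (point_cloud_range[5] - point_cloud_range[2]) / voxel_size[2]

print(grid_x, grid_y, grid_z)  # 432.0 496.0 1.0 -> a 432 x 496 pseudo-image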