Converting a RealSense D455 bag recording to a TUM dataset

This article references the following posts:

https://blog.csdn.net/m0_60355964/article/details/129518283?ops_request_misc=%257B%2522request%255Fid%2522%253A%252211559cdf09f5ff02d4b1d97f2b0744ee%2522%252C%2522scm%2522%253A%252220140713.130102334..%2522%257D&request_id=11559cdf09f5ff02d4b1d97f2b0744ee&biz_id=0&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduend~default-2-129518283-null-null.142^v101^pc_search_result_base9&utm_term=RealSense%20tum&spm=1018.2226.3001.4187

https://blog.csdn.net/neptune4751/article/details/137183817?ops_request_misc=&request_id=&biz_id=102&utm_term=RealSense%20tum&utm_medium=distribute.pc_search_result.none-task-blog-2~all~sobaiduweb~default-4-137183817.142^v101^pc_search_result_base9&spm=1018.2226.3001.4187

1. Record the video

Open the Intel RealSense Viewer.

Set the image resolution of both the Depth Stream and the Color Stream to 640 × 480.

Set the capture frame rate to 30 fps.

Click the Record button in the upper-left corner to start recording. Once recording has started, click the Stop button in the upper-left corner to stop and save the recording.

If an error appears after clicking Record, change the save path:

Click the gear icon in the upper-right corner, select Settings, change the storage path, and then click Apply and OK.

After recording ends, the .bag file is generated under the chosen storage path.

2. Extract the RGB and depth images and their timestamps

Step 1: enter the catkin_ws/src folder, open a terminal there, and clone the project:

git clone https://github.com/kinglintianxia/bag2tum.git

Step 2: create an image folder in the directory containing the bag file, then create depth and rgb subfolders inside image (see the sketch below).
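A minimal sketch of this step (the bag directory path is a placeholder for wherever your .bag file actually lives):

cd /path/to/bag/directory        # placeholder: directory containing the .bag file
mkdir -p image/{depth,rgb}       # creates image/ with depth/ and rgb/ subfolders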

Modify the save_folder, rgb_topic, and depth_topic parameters in the bag2tum.launch file:

 <param name="save_folder" value="/home/nv/zoe/bag2tum/image" />
 <param name="rgb_topic" value="/camera/color/image_raw" />
 <param name="depth_topic" value="/camera/aligned_depth_to_color/image_raw" />

Note: inspect the bag's topics to find the correct rgb_topic and depth_topic values:

rosbag info xxx.bag
[Screenshot in the original post: rosbag info output listing the bag's topics]
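For reference, the topic section of the rosbag info output looks roughly like this (illustrative message counts; the topic names in your bag may differ, which is exactly why you check here before editing the launch file):

topics: /camera/color/image_raw                      900 msgs : sensor_msgs/Image
        /camera/aligned_depth_to_color/image_raw     900 msgs : sensor_msgs/Image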
Step 3: create a build folder under bag2tum and build the project inside it (the combined commands are sketched after this list):

  1. cmake ..

  2. make
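Put together, and assuming the repository was cloned into ~/catkin_ws/src as in step 1, the build looks like this (the standard catkin alternative is to run catkin_make from the workspace root, which also produces the devel/setup.bash sourced in the next step):

cd ~/catkin_ws/src/bag2tum   # assumed clone location from step 1
mkdir -p build && cd build
cmake ..                     # configure the project
make                         # build the bag2tum node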

Step 4: start the launch file:

  1. source devel/setup.bash 

  2. roslaunch bag2tum bag2tum.launch
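These commands are run from the catkin workspace root (assuming ~/catkin_ws; devel/setup.bash only exists after the workspace has been built):

cd ~/catkin_ws
source devel/setup.bash
roslaunch bag2tum bag2tum.launch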

Step 5: play the bag file:

rosbag play XX.bag

Depth images, RGB images, and their timestamp files are then generated automatically in the image folder:
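Assuming the converter follows the TUM RGB-D convention (an assumption based on the dataset format, consistent with the next step reading rgb.txt and depth.txt), the resulting layout is roughly:

image/
├── rgb/        # color frames, named by timestamp
├── depth/      # depth frames, named by timestamp
├── rgb.txt     # one "timestamp filename" entry per line
└── depth.txt   # one "timestamp filename" entry per line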

Step 6: align the timestamps.

Because the timestamps of the depth and color images are not strictly aligned one-to-one, and there is a small time difference between them, the depth and color images must be paired according to the closest-timestamp rule. Save the associate.py script into the image folder.

The associate.py script:

"""
The RealSense provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples

    """
    with open(filename) as file:
        data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    parsed = [[v.strip() for v in line.split(" ") if v.strip() != ""] for line in lines if
              len(line) > 0 and line[0] != "#"]
    parsed = [(float(l[0]), l[1:]) for l in parsed if len(l) > 1]
    return dict(parsed)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))

    """
    # Use lists rather than dict key views: views in Python 3 do not support remove().
    first_keys = list(first_list.keys())
    second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))

    matches.sort()
    return matches


if __name__ == '__main__':

    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',
                        default=0.0)
    parser.add_argument('--max_difference',
                        help='maximally allowed time difference for matching entries (default: 0.02)', default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))

 

Open a terminal in that directory and run the following command to generate the pairing result, associate.txt:

python associate.py rgb.txt depth.txt > associate.txt
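Given the script's "%f %s %f %s" output format, each line of associate.txt pairs one RGB entry with its closest depth entry. An illustrative line (made-up timestamps and filenames):

1341841281.123456 rgb/1341841281.123456.png 1341841281.139876 depth/1341841281.139876.png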

At this point, the dataset is complete.
