```
Traceback (most recent call last):
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\multiprocessing\spawn.py", line 76, in _wrap
    fn(i, *args)
  File "C:\Users\lenovo\1.py", line 45, in train
    setup(rank, world_size)
  File "C:\Users\lenovo\1.py", line 30, in setup
    dist.init_process_group(
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\distributed\c10d_logger.py", line 79, in wrapper
    return func(*args, **kwargs)
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\distributed\c10d_logger.py", line 93, in wrapper
    func_return = func(*args, **kwargs)
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\distributed\distributed_c10d.py", line 1361, in init_process_group
    store, rank, world_size = next(rendezvous_iterator)
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\distributed\rendezvous.py", line 258, in _env_rendezvous_handler
    store = _create_c10d_store(master_addr, master_port, rank, world_size, timeout, use_libuv)
  File "D:\anaconda\envs\pytorch-cpu\lib\site-packages\torch\distributed\rendezvous.py", line 185, in _create_c10d_store
    return TCPStore(
RuntimeError: use_libuv was requested but PyTorch was build without libuv support
```

This error means that while initializing PyTorch distributed training, the rendezvous step requested `libuv` as the TCPStore transport, but the installed PyTorch build was compiled without libuv support (a common situation with the Windows CPU packages installed through Anaconda). Here is a step-by-step solution:

---

### **1. Root cause**

The error is raised while the distributed process group is being initialized:

```python
dist.init_process_group(...)
```

Since PyTorch 2.4, the `TCPStore` created during `env://` rendezvous defaults to the libuv transport (`use_libuv=True`). On a build without libuv, that default immediately raises this `RuntimeError`. Note from the traceback that the failure happens inside `_env_rendezvous_handler`, before any communication backend is touched, so changing the backend alone does not avoid it; the direct fix is to disable libuv explicitly.

---

### **2. Solution 1: disable libuv and use a supported backend**

**Recommended: set `USE_LIBUV=0` and use the `gloo` or `nccl` backend** (CPU-only environments can only use `gloo`):

```python
import os
os.environ["USE_LIBUV"] = "0"  # fall back to the non-libuv TCPStore

dist.init_process_group(
    backend="gloo",        # gloo on CPU; try nccl on GPU
    init_method="env://",  # initialize from environment variables
    rank=rank,
    world_size=world_size
)
```

Also make sure the following environment variables are set:

```bash
export MASTER_ADDR=localhost
export MASTER_PORT=12355
export USE_LIBUV=0
```

On Windows:

```cmd
set MASTER_ADDR=localhost
set MASTER_PORT=12355
set USE_LIBUV=0
```

---

### **3. Solution 2: check the PyTorch installation**

If you genuinely need libuv (rarely necessary):

- **Uninstall the current PyTorch**:
```bash
pip uninstall torch
```
- **Install a build with libuv support**:
```bash
# Build from source (involved; requires a full build environment,
# and the exact build flag may vary by version)
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
USE_LIBUV=1 python setup.py install
```
Or look for a prebuilt binary that includes libuv (check the official documentation for availability).

---

### **4. Verify the distributed configuration**

Add an environment check at the top of your code:

```python
import torch.distributed as dist
print("Available backends:", dist.Backend.backend_list)
# the output should include 'gloo' (and 'nccl' on CUDA builds)
```

---

### **5. Complete example (CPU environment)**

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def setup(rank, world_size):
    # Rendezvous settings must be identical in every process
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "12355")
    os.environ["USE_LIBUV"] = "0"  # this build lacks libuv support
    # Explicitly select the gloo backend
    dist.init_process_group(
        backend="gloo",
        init_method="env://",
        rank=rank,
        world_size=world_size
    )

def train(rank, world_size):
    setup(rank, world_size)
    # your training code here
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2  # number of processes
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

---

### **6. Notes**

- Make sure `MASTER_ADDR` and `MASTER_PORT` are identical across all processes.
- On GPU, switch the backend to `nccl` (requires CUDA; not available on Windows).
- Windows support for distributed training is limited; prefer testing on Linux.

With these adjustments the `libuv` dependency should no longer be hit. If the problem persists, please share the complete code and environment details.
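---

### **Addendum: quick smoke test**

Before rerunning the full training script, the workaround can be verified with a single-process smoke test. This is a minimal sketch, assuming a PyTorch ≥ 2.4 CPU build; the port `12356` is an arbitrary free port:

```python
import os
import torch
import torch.distributed as dist

# Rendezvous settings for a one-process "group"; the port just has to be free.
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "12356"
os.environ["USE_LIBUV"] = "0"  # skip the libuv TCPStore this build lacks

dist.init_process_group(backend="gloo", init_method="env://", rank=0, world_size=1)

# A trivial collective: with world_size=1 it leaves the tensor unchanged,
# but it exercises the TCPStore, the gloo backend, and the process group.
t = torch.ones(1)
dist.all_reduce(t)
print("gloo backend OK:", t.item() == 1.0)

dist.destroy_process_group()
```

If this script exits cleanly, setting the same environment variables in the real launch path should clear the original `RuntimeError`.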