```
[  1%] Generating Python from MSG carla_msgs/CarlaBoundingBox
Traceback (most recent call last):
  File "/opt/ros/noetic/share/genpy/cmake/../../../lib/genpy/genmsg_py.py", line 43, in <module>
    import genpy.generator
  File "/opt/ros/noetic/lib/python3/dist-packages/genpy/__init__.py", line 34, in <module>
    from . message import Message, SerializationError, DeserializationError, MessageException, struct_I
  File "/opt/ros/noetic/lib/python3/dist-packages/genpy/message.py", line 48, in <module>
    import yaml
ModuleNotFoundError: No module named 'yaml'
make[2]: *** [ros-bridge/carla_msgs/CMakeFiles/carla_msgs_generate_messages_py.dir/build.make:85: /home/dwh/carla-ros-bridge/catkin_ws/devel/lib/python3/dist-packages/carla_msgs/msg/_CarlaBoundingBox.py] Error 1
make[1]: *** [CMakeFiles/Makefile2:2546: ros-bridge/carla_msgs/CMakeFiles/carla_msgs_generate_messages_py.dir/all] Error 2
make: *** [Makefile:141: all] Error 2
Invoking "make -j1" failed
```

### **Problem Analysis**

The build fails because the Python environment ROS uses for message generation is missing the `PyYAML` library:

```bash
ModuleNotFoundError: No module named 'yaml'
```

The error occurs while the `carla_msgs` package's `CarlaBoundingBox` message type is being converted to Python code. ROS's `genpy` tool depends on `PyYAML` to parse message definition files (`.msg`), but the library is not installed in the current Python environment.

---

### **Solution**

#### **Step 1: Install PyYAML**

Pick the command that matches your Python environment:

```bash
# For the system default Python 3 (the environment ROS Noetic recommends):
sudo apt-get install python3-yaml

# Or install via pip (for virtual environments or custom Python paths):
pip install pyyaml
```

---

#### **Step 2: Check for Python environment conflicts**

If Anaconda or Miniconda is installed, environment-variable conflicts may prevent ROS from finding the system Python libraries. Try **leaving the virtual environment**:

```bash
conda deactivate  # if you are inside a conda environment
```

---

#### **Step 3: Clean and rebuild**

```bash
cd ~/carla-ros-bridge/catkin_ws
catkin_make clean  # remove stale build artifacts
catkin_make -j1    # single-threaded build to verify the fix
```

---

### **Common Follow-up Questions**

1. **Why is PyYAML needed?**
   ROS message (`.msg`) and service (`.srv`) files are turned into language bindings by the `genpy` tool, and `genpy` relies on `PyYAML` to parse their metadata.

2. **PyYAML is installed but the error persists?**
   Check whether the Python path is being overridden:
   ```bash
   # Show the current Python interpreter
   which python
   # Confirm where PyYAML is installed
   python -c "import yaml; print(yaml.__file__)"
   ```
   If the paths point into a virtual environment, make sure it is the same Python environment that ROS uses.

3. **Which Python version does ROS Noetic require?**
   ROS Noetic defaults to **Python 3.8**. If you install libraries with `pip`, confirm the target environment matches the Python version that `catkin_make` invokes.

---

### **Supplementary Commands**

- **Force message regeneration**:
  If cached files interfere with message generation, delete the generated directories:
  ```bash
  rm -rf devel build  # wipe the build directories completely
  ```

- **Verify the PyYAML installation**:
  ```bash
  python -c "import yaml; print('PyYAML version:', yaml.__version__)"
  ```

---

If the problem is still not resolved, please provide:
1. The output of `python --version`
2. The output of `pip list | grep -i yaml`
3. Whether you are using Anaconda or another virtual environment
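To tie the checks above together, here is a minimal diagnostic sketch (not part of the ROS tooling; the filename `check_genpy_deps.py` is arbitrary). Run it with the exact interpreter catkin uses (`/usr/bin/python3` on a stock Noetic install) to confirm that both `yaml` and `genpy` resolve:

```python
#!/usr/bin/env python3
"""Sanity check for the genpy/PyYAML setup (illustrative sketch).

Run with the interpreter ROS Noetic uses, e.g.:
    /usr/bin/python3 check_genpy_deps.py
"""
import sys

print("interpreter:", sys.executable)

# genpy imports yaml while generating .msg bindings, so this import
# failing here is exactly the failure seen in the build log above.
try:
    import yaml
    print("PyYAML OK:", yaml.__version__, "from", yaml.__file__)
except ModuleNotFoundError:
    print("PyYAML missing for this interpreter -> "
          "sudo apt-get install python3-yaml")

# genpy itself should resolve from /opt/ros/noetic once setup.bash is sourced.
try:
    import genpy
    print("genpy OK from", genpy.__file__)
except ModuleNotFoundError:
    print("genpy missing -> did you `source /opt/ros/noetic/setup.bash`?")
```

If the script prints an interpreter path inside a conda or venv directory, that environment, not the system Python, is what the build is using.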
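Relatedly, for Step 2: if you suspect a conda environment is shadowing the system interpreter, a short scan of `PATH` (again purely illustrative) makes the shadowing visible:

```python
#!/usr/bin/env python3
# Illustrative helper: list PATH entries and flag ones that typically
# belong to conda/virtualenv installs, which can shadow /usr/bin/python3.
import os

for i, entry in enumerate(os.environ.get("PATH", "").split(os.pathsep)):
    flag = ""
    if any(s in entry for s in ("conda", "virtualenv", ".venv")):
        flag = "   <-- may shadow the system Python"
    print(f"{i:2d}: {entry}{flag}")
```

Entries nearer the top of the list win the `which python` lookup; `conda deactivate` removes the conda entries for the current shell.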


nd for pyqt5== (env) [root@host-10-180-209-45 ~]# pip install pyqt5==5.15.0 -i https://pypi.douban.com/simple Looking in indexes: https://pypi.douban.com/simple Collecting pyqt5==5.15.0 Downloading https://pypi.doubanio.com/packages/8c/90/82c62bbbadcca98e8c6fa84f1a638de1ed1c89e85368241e9cc43fcbc320/PyQt5-5.15.0.tar.gz (3.3 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 2.5 MB/s eta 0:00:00 Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [22 lines of output] Traceback (most recent call last): File "/root/python/env/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/root/python/env/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File "/root/python/env/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) File "/tmp/pip-build-env-uzsc1n8a/overlay/lib64/python3.8/site-packages/sipbuild/api.py", line 46, in build_wheel project = AbstractProject.bootstrap('wheel', File "/tmp/pip-build-env-uzsc1n8a/overlay/lib64/python3.8/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap project.setup(pyproject, tool, tool_description) File "/tmp/pip-build-env-uzsc1n8a/overlay/lib64/python3.8/site-packages/sipbuild/project.py", line 586, in setup self.apply_user_defaults(tool) File "project.py", line 62, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-uzsc1n8a/overlay/lib/python3.8/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults super().apply_user_defaults(tool) File "/tmp/pip-build-env-uzsc1n8a/overlay/lib64/python3.8/site-packages/sipbuild/project.py", line 237, in apply_user_defaults self.builder.apply_user_defaults(tool) File "/tmp/pip-build-env-uzsc1n8a/overlay/lib/python3.8/site-packages/pyqtbuild/builder.py", line 69, in apply_user_defaults raise PyProjectOptionException('qmake', sipbuild.pyproject.PyProjectOptionException [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip.
