```
Traceback (most recent call last):
  File "train.py", line 11, in <module>
    config = read_config(args['config'])
  File "F:\D4PG\d4pg-pytorch-master\utils\utils.py", line 65, in read_config
    cfg = yaml.load(ymlfile)
TypeError: load() missing 1 required positional argument: 'Loader'
```

This error occurs when PyYAML's `load()` function is called without a `Loader` argument. Since PyYAML 5.1, calling `load()` without an explicit loader has been deprecated for security reasons, and in PyYAML 6.0 the `Loader` argument became mandatory, which is why the call now raises a `TypeError`. Use the safe variant, `safe_load()`, instead. To fix the problem, change the following code:
```
cfg = yaml.load(ymlfile)
```
to:
```
cfg = yaml.safe_load(ymlfile)
```
This should resolve the error. If you genuinely need full YAML tag support, pass a loader explicitly, e.g. `yaml.load(ymlfile, Loader=yaml.FullLoader)`. If you still run into problems, make sure you are using an up-to-date version of the PyYAML library.
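For context, here is a minimal sketch of a corrected `read_config` helper. The function and file names are taken from the traceback; the body is an assumption about what such a helper typically does:

```python
import yaml

def read_config(path):
    # Load a YAML config file safely; safe_load only constructs plain
    # Python types (dict, list, str, int, ...), never arbitrary objects.
    with open(path, "r", encoding="utf-8") as ymlfile:
        return yaml.safe_load(ymlfile)

cfg = read_config("config.yml")  # hypothetical config path
print(cfg)
```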
Related questions

```
Traceback (most recent call last):
  File "train.py", line 98, in <module>
    args, cfg = merge_config()
  File "/root/autodl-tmp/Ultra-Fast-Lane-Detection-master/utils/common.py", line 53, in merge_config
    cfg = Config.fromfile(args.config)
  File "/root/autodl-tmp/Ultra-Fast-Lane-Detection-master/utils/config.py", line 160, in fromfile
    cfg_dict, cfg_text = Config._file2dict(filename)
  File "/root/autodl-tmp/Ultra-Fast-Lane-Detection-master/utils/config.py", line 109, in _file2dict
    cfg_text += f.read()
  File "/root/miniconda3/envs/lane-det/lib/python3.7/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc8 in position 531: invalid continuation byte
```

This error is caused by reading the config file with the wrong character encoding: the file is being decoded as UTF-8 but contains bytes from another encoding. Check whether the config file is actually UTF-8; if not, either re-save it as UTF-8 or convert it with a command-line tool such as `iconv`. For example, to convert a GB2312-encoded file to UTF-8:
```
iconv -f gb2312 -t utf-8 input_file > output_file
```
Here `-f` specifies the encoding of the input file, `-t` the encoding of the output file, `input_file` the name of the input file, and `output_file` the name of the output file.
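Alternatively, you can re-encode the file from Python. This is a minimal sketch assuming the source file is GB2312/GBK-encoded (the path is a placeholder; GBK is a superset of GB2312, so it is usually the safer guess):

```python
# Re-encode a config file to UTF-8 in place; assumes the source is GBK/GB2312.
src = "configs/my_config.py"  # hypothetical path to the offending config file

with open(src, "r", encoding="gbk") as f:
    text = f.read()

with open(src, "w", encoding="utf-8") as f:
    f.write(text)
```

If you are unsure of the source encoding, a detection library such as `chardet` can guess it before you convert.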

Now I'm getting this error:
```
[INFO|2025-03-14 11:38:36] llamafactory.data.template:143 >> Add <|im_end|> to stop words.
Traceback (most recent call last):
  File "/usr/local/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/cli.py", line 118, in main
    run_exp()
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/tuner.py", line 103, in run_exp
    _training_function(config={"args": args, "callbacks": callbacks})
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/tuner.py", line 68, in _training_function
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 51, in run_sft
    dataset_module = get_dataset(template, model_args, data_args, training_args, stage="sft", **tokenizer_module)
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/loader.py", line 297, in get_dataset
    dataset = _get_merged_dataset(data_args.dataset, model_args, data_args, training_args, stage)
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/loader.py", line 171, in _get_merged_dataset
    for dataset_name, dataset_attr in zip(dataset_names, get_dataset_list(dataset_names, data_args.dataset_dir)):
  File "/mnt/workspace/.cache/modelscope/LLaMA-Factory/src/llamafactory/data/parser.py", line 129, in get_dataset_list
    raise ValueError(f"Undefined dataset {name} in {DATA_CONFIG}.")
ValueError: Undefined dataset /mnt/workspace/.cache/modelscope/datasets/liucong/Chinese-DeepSeek-R1-Distill-data-110k/distill_r1_110k in dataset_info.json.
```

### Solving the 'Undefined dataset' error during LLaMA Factory training

If you hit `ValueError: Undefined dataset` while fine-tuning with LLaMA Factory, the dataset loading or configuration is at fault. Note what the traceback itself says: the dataset was passed as a raw filesystem path, but LLaMA Factory only accepts dataset *names* that are registered as keys in `dataset_info.json`. Below are the likely causes and corresponding fixes.

#### Possible cause 1: the data path is not defined correctly

The error can be triggered when the specified data file path does not exist or cannot be accessed. Confirm that the dataset's actual location matches the path defined in your configuration.

- **Validate the path.** Using an absolute path instead of a relative one avoids problems caused by a different working directory.

```python
import os

dataset_path = "/absolute/path/to/your/dataset.csv"
if not os.path.exists(dataset_path):
    raise FileNotFoundError(f"The specified dataset path does not exist: {dataset_path}")
```

---

#### Possible cause 2: incompatible data format

Even with a correct path, data that does not match the expected format (for example, a CSV file missing required columns) can raise similar errors.

- **Check the data structure.** Confirm that the dataset contains every required field and that the field names match what the script expects.

```python
import pandas as pd

data = pd.read_csv(dataset_path)
required_columns = ["text", "label"]  # assuming these two columns are required
missing_cols = [col for col in required_columns if col not in data.columns]
if missing_cols:
    raise KeyError(f"Missing required columns in the dataset: {missing_cols}")
```

---

#### Possible cause 3: the dataset object was not initialized

In some cases the failure comes from dataset instantiation itself, usually due to bad parameters or missing dependencies.

- **Recreate the dataset object.** If you built custom data-handling logic, double-check each step of its implementation.

```python
from datasets import load_dataset, DatasetDict

try:
    raw_datasets = load_dataset("csv", data_files={"train": dataset_path})
except Exception as e:
    print(f"Failed to load dataset due to error: {e}")
else:
    train_val_split = raw_datasets["train"].train_test_split(test_size=0.1)
    final_datasets = DatasetDict({
        "train": train_val_split["train"],
        "validation": train_val_split["test"],
    })
```

---

#### Summary

When troubleshooting this class of problem:

1. verify that the data source exists and is readable;
2. compare the actual data format against the expected schema;
3. step through the loading pipeline to locate the failing stage.

For the specific traceback above, the direct fix is to register the dataset under a short name in `dataset_info.json` and pass that name (not the path) to `--dataset`; see the sketch below.
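A minimal sketch of registering the dataset, assuming the distilled data has been exported as a JSONL file inside LLaMA Factory's data directory. The file name and paths below are assumptions; the authoritative entry schema is documented in the repo's `data/README.md`:

```python
import json
import os

# Hypothetical locations -- adjust to your checkout and export path.
data_dir = "/mnt/workspace/.cache/modelscope/LLaMA-Factory/data"
info_path = os.path.join(data_dir, "dataset_info.json")

with open(info_path, "r", encoding="utf-8") as f:
    info = json.load(f)

# Register the file under a short name, then train with:
#   llamafactory-cli train ... --dataset distill_r1_110k
info["distill_r1_110k"] = {
    "file_name": "distill_r1_110k.jsonl",  # must live under data_dir
}

with open(info_path, "w", encoding="utf-8") as f:
    json.dump(info, f, ensure_ascii=False, indent=2)
```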

Related recommendations

```
/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/tyro/_parsers.py:332: UserWarning: The field model.action-expert-variant is annotated with type typing.Literal['dummy', 'gemma_300m', 'gemma_2b', 'gemma_2b_lora'], but the default value gemma_300m_lora has type <class 'str'>. We'll try to handle this gracefully, but it may cause unexpected behavior.
  warnings.warn(message)
19:07:30.004 [I] Running on: shuo-hp (10287:train.py:195)
INFO:2025-05-12 19:07:30,228:jax._src.xla_bridge:945: Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig'
19:07:30.228 [I] Unable to initialize backend 'rocm': module 'jaxlib.xla_extension' has no attribute 'GpuAllocatorConfig' (10287:xla_bridge.py:945)
INFO:2025-05-12 19:07:30,228:jax._src.xla_bridge:945: Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory
19:07:30.228 [I] Unable to initialize backend 'tpu': INTERNAL: Failed to open libtpu.so: libtpu.so: cannot open shared object file: No such file or directory (10287:xla_bridge.py:945)
19:07:30.500 [I] Wiped checkpoint directory /home/shuo/VLA/openpi/checkpoints/pi0_ours_aloha/your_experiment_name (10287:checkpoints.py:25)
19:07:30.500 [I] Created BasePyTreeCheckpointHandler: pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=None (10287:base_pytree_checkpoint_handler.py:332)
19:07:30.500 [I] Created BasePyTreeCheckpointHandler: pytree_metadata_options=PyTreeMetadataOptions(support_rich_types=False), array_metadata_store=None (10287:base_pytree_checkpoint_handler.py:332)
19:07:30.500 [I] [thread=MainThread] Failed to get flag value for EXPERIMENTAL_ORBAX_USE_DISTRIBUTED_PROCESS_ID. (10287:multihost.py:375)
19:07:30.500 [I] [process=0][thread=MainThread] CheckpointManager init: checkpointers=None, item_names=None, item_handlers={'assets': <openpi.training.checkpoints.CallbackHandler object at 0x72e5cae0ff50>, 'train_state': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa0e90>, 'params': <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa05d0>}, handler_registry=None (10287:checkpoint_manager.py:622)
19:07:30.501 [I] Deferred registration for item: "assets". Adding handler <openpi.training.checkpoints.CallbackHandler object at 0x72e5cae0ff50> for item "assets" and save args <class 'openpi.training.checkpoints.CallbackSave'> and restore args <class 'openpi.training.checkpoints.CallbackRestore'> to _handler_registry. (10287:composite_checkpoint_handler.py:239)
19:07:30.501 [I] Deferred registration for item: "train_state". Adding handler <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa0e90> for item "train_state" and save args <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'> and restore args <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'> to _handler_registry. (10287:composite_checkpoint_handler.py:239)
19:07:30.501 [I] Deferred registration for item: "params". Adding handler <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa05d0> for item "params" and save args <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'> and restore args <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'> to _handler_registry. (10287:composite_checkpoint_handler.py:239)
19:07:30.501 [I] Deferred registration for item: "metrics". Adding handler <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72e5cad7fd10> for item "metrics" and save args <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'> and restore args <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'> to _handler_registry. (10287:composite_checkpoint_handler.py:239)
19:07:30.501 [I] Initialized registry DefaultCheckpointHandlerRegistry({('assets', <class 'openpi.training.checkpoints.CallbackSave'>): <openpi.training.checkpoints.CallbackHandler object at 0x72e5cae0ff50>, ('assets', <class 'openpi.training.checkpoints.CallbackRestore'>): <openpi.training.checkpoints.CallbackHandler object at 0x72e5cae0ff50>, ('train_state', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa0e90>, ('train_state', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa0e90>, ('params', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeSaveArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa05d0>, ('params', <class 'orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeRestoreArgs'>): <orbax.checkpoint._src.handlers.pytree_checkpoint_handler.PyTreeCheckpointHandler object at 0x72e5cafa05d0>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonSaveArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72e5cad7fd10>, ('metrics', <class 'orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonRestoreArgs'>): <orbax.checkpoint._src.handlers.json_checkpoint_handler.JsonCheckpointHandler object at 0x72e5cad7fd10>}). (10287:composite_checkpoint_handler.py:508)
19:07:30.501 [I] orbax-checkpoint version: 0.11.1 (10287:abstract_checkpointer.py:35)
19:07:30.501 [I] [process=0][thread=MainThread] Using barrier_sync_fn: <function get_barrier_sync_fn.<locals>.<lambda> at 0x72e5cacb85e0> timeout: 7200 secs and primary_host=0 for async checkpoint writes (10287:async_checkpointer.py:80)
19:07:30.501 [I] Found 0 checkpoint steps in /home/shuo/VLA/openpi/checkpoints/pi0_ours_aloha/your_experiment_name (10287:checkpoint_manager.py:1528)
19:07:30.501 [I] Saving root metadata (10287:checkpoint_manager.py:1569)
19:07:30.501 [I] [process=0][thread=MainThread] Skipping global process sync, barrier name: CheckpointManager:save_metadata (10287:multihost.py:293)
19:07:30.501 [I] [process=0][thread=MainThread] CheckpointManager created, primary_host=0, CheckpointManagerOptions=CheckpointManagerOptions(save_interval_steps=1, max_to_keep=1, keep_time_interval=None, keep_period=5000, should_keep_fn=None, best_fn=None, best_mode='max', keep_checkpoints_without_metrics=True, step_prefix=None, step_format_fixed_length=None, step_name_format=None, create=False, cleanup_tmp_directories=False, save_on_steps=frozenset(), single_host_load_and_broadcast=False, todelete_subdir=None, enable_background_delete=False, read_only=False, enable_async_checkpointing=True, async_options=AsyncOptions(timeout_secs=7200, barrier_sync_fn=None, post_finalization_callback=None, create_directories_asynchronously=False), multiprocessing_options=MultiprocessingOptions(primary_host=0, active_processes=None, barrier_sync_key_prefix=None), should_save_fn=None, file_options=FileOptions(path_permission_mode=None), save_root_metadata=True, temporary_path_class=None, save_decision_policy=None), root_directory=/home/shuo/VLA/openpi/checkpoints/pi0_ours_aloha/your_experiment_name: <orbax.checkpoint.checkpoint_manager.CheckpointManager object at 0x72e5cadffd10> (10287:checkpoint_manager.py:797)
19:07:30.553 [I] Loaded norm stats from s3://openpi-assets/checkpoints/pi0_base/assets/trossen (10287:config.py:166)
Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None).
19:07:30.553 [W] Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None). (10287:_snapshot_download.py:213)
Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None).
19:07:30.554 [W] Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None). (10287:_snapshot_download.py:213)
Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None).
19:07:30.555 [W] Returning existing local_dir /home/shuo/VLA/lerobot/aloha-real-data as remote repo cannot be accessed in snapshot_download (None). (10287:_snapshot_download.py:213)
Traceback (most recent call last):
  File "/home/shuo/VLA/openpi/scripts/train.py", line 273, in <module>
    main(_config.cli())
  File "/home/shuo/VLA/openpi/scripts/train.py", line 226, in main
    batch = next(data_iter)
  File "/home/shuo/VLA/openpi/src/openpi/training/data_loader.py", line 177, in __iter__
    for batch in self._data_loader:
  File "/home/shuo/VLA/openpi/src/openpi/training/data_loader.py", line 257, in __iter__
    batch = next(data_iter)
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 708, in __next__
    data = self._next_data()
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1480, in _next_data
    return self._process_data(data)
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1505, in _process_data
    data.reraise()
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/_utils.py", line 733, in reraise
    raise exception
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 349, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/shuo/VLA/openpi/src/openpi/training/data_loader.py", line 47, in __getitem__
    return self._transform(self._dataset[index])
  File "/home/shuo/VLA/openpi/src/openpi/transforms.py", line 70, in __call__
    data = transform(data)
  File "/home/shuo/VLA/openpi/src/openpi/transforms.py", line 101, in __call__
    return jax.tree.map(lambda k: flat_item[k], self.structure)
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/jax/_src/tree.py", line 155, in map
    return tree_util.tree_map(f, tree, *rest, is_leaf=is_leaf)
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/jax/_src/tree_util.py", line 358, in tree_map
    return treedef.unflatten(f(*xs) for xs in zip(*all_leaves))
  File "/home/shuo/VLA/openpi/.venv/lib/python3.11/site-packages/jax/_src/tree_util.py", line 358, in <genexpr>
    return treedef.unflatten(f(*xs) for xs in zip(*all_leaves))
  File "/home/shuo/VLA/openpi/src/openpi/transforms.py", line 101, in <lambda>
    return jax.tree.map(lambda k: flat_item[k], self.structure)
KeyError: 'observation.images.cam_low'
```

```
[INFO|2025-05-27 09:38:24] llamafactory.hparams.parser:401 >> Process rank: 0, world size: 1, device: cuda:0, distributed training: False, compute dtype: torch.float16
Traceback (most recent call last):
  File "/root/miniconda3/envs/llama_factory/bin/llamafactory-cli", line 8, in <module>
    sys.exit(main())
  File "/home/ly/LLaMA-Factory/src/llamafactory/cli.py", line 115, in main
    COMMAND_MAP[command]()
  File "/home/ly/LLaMA-Factory/src/llamafactory/train/tuner.py", line 110, in run_exp
    _training_function(config={"args": args, "callbacks": callbacks})
  File "/home/ly/LLaMA-Factory/src/llamafactory/train/tuner.py", line 72, in _training_function
    run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
  File "/home/ly/LLaMA-Factory/src/llamafactory/train/sft/workflow.py", line 48, in run_sft
    tokenizer_module = load_tokenizer(model_args)
  File "/home/ly/LLaMA-Factory/src/llamafactory/model/loader.py", line 80, in load_tokenizer
    init_kwargs = _get_init_kwargs(model_args)
  File "/home/ly/LLaMA-Factory/src/llamafactory/model/loader.py", line 66, in _get_init_kwargs
    model_args.model_name_or_path = try_download_model_from_other_hub(model_args)
  File "/home/ly/LLaMA-Factory/src/llamafactory/extras/misc.py", line 266, in try_download_model_from_other_hub
    return snapshot_download(
  File "/root/miniconda3/envs/llama_factory/lib/python3.12/site-packages/modelscope/hub/snapshot_download.py", line 108, in snapshot_download
    return _snapshot_download(
  File "/root/miniconda3/envs/llama_factory/lib/python3.12/site-packages/modelscope/hub/snapshot_download.py", line 254, in _snapshot_download
    endpoint = _api.get_endpoint_for_read(
  File "/root/miniconda3/envs/llama_factory/lib/python3.12/site-packages/modelscope/hub/api.py", line 332, in get_endpoint_for_read
    if not self.repo_exists(
  File "/root/miniconda3/envs/llama_factory/lib/python3.12/site-packages/modelscope/hub/api.py", line 379, in repo_exists
    raise Exception('Invalid repo_id: %s, must be of format namespace/name' % repo_type)
Exception: Invalid repo_id: model, must be of format namespace/name
```
What's the cause?

```
2025-03-11 10:44:54.222546: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_blas.cc:654] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
  File "C:\Users\19124\anaconda3\envs\rl(tf1.x)\lib\site-packages\tensorflow\python\client\session.py", line 1322, in _do_call
    return fn(*args)
  File "C:\Users\19124\anaconda3\envs\rl(tf1.x)\lib\site-packages\tensorflow\python\client\session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "C:\Users\19124\anaconda3\envs\rl(tf1.x)\lib\site-packages\tensorflow\python\client\session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(128, 7), b.shape=(7, 128), m=128, n=128, k=7
  [[Node: Critic/dense/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_state_0_0/_5, Critic/dense/kernel/read)]]
  [[Node: Critic/dense_1/BiasAdd/_7 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_21_Critic/dense_1/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\threeMotorsProject\threeMotorsProject\RL\PPO\ppo.py", line 157, in <module>
    train()
  File "D:\threeMotorsProject\threeMotorsProject\RL\PPO\ppo.py", line 122, in train
    ppo.update(np.vstack(buffer_s), np.vstack(buffer_a), np.array(discounted_r)[:, np.newaxis])
  File "D:\threeMotorsProject\threeMotorsProject\RL\PPO\ppo.py", line 80, in update
    adv = self.sess.run(self.v, {self.S: s}) - r
  File "C:\Users\19124\anaconda3\envs\rl(tf1.x)\lib\site-packages\tensorflow\python\client\session.py",
```

```
Traceback (most recent call last):
  File "D:\yolov11\ultralytics\ultralytics\engine\trainer.py", line 562, in get_dataset
    data = check_det_dataset(self.args.data)  # /home/hk/workspace/yolov11/datasets/BoMai-balloon-train8data/images/val
  File "D:\yolov11\ultralytics\ultralytics\data\utils.py", line 329, in check_det_dataset
    raise FileNotFoundError(m)
FileNotFoundError: Dataset 'D://yolov11/ultralytics/ultralytics/cfg/datasets/BoMai-balloon.yaml' images not found ⚠️, missing path 'D:\yolov11\ultralytics\train13\datas\images\val'
Note dataset download directory is 'D:\yolov11\ultralytics\datasets'. You can update this in 'C:\Users\WangLei1\AppData\Roaming\Ultralytics\settings.json'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\yolov11\ultralytics\train.py", line 12, in <module>
    model.train(data=r'D:/yolov11/ultralytics/ultralytics/cfg/datasets/BoMai-balloon.yaml',\
        device=0,\
        ...<8 lines>...
        #workers=4
    )
  File "D:\yolov11\ultralytics\ultralytics\engine\model.py", line 800, in train
    self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
  File "D:\yolov11\ultralytics\ultralytics\engine\trainer.py", line 133, in __init__
    self.trainset, self.testset = self.get_dataset()
  File "D:\yolov11\ultralytics\ultralytics\engine\trainer.py", line 566, in get_dataset
    raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
RuntimeError: Dataset 'D://yolov11/ultralytics/ultralytics/cfg/datasets/BoMai-balloon.yaml' error Dataset 'D://yol
```

```
(mast3r-slam) root@autodl-container-89914fa1bb-a01dbef1:~/MASt3R-SLAM# bash ./scripts/eval_euroc.sh ~/autodl-fs/datasets/euroc/MH_05_difficult/
{'use_calib': True, 'single_thread': True, 'dataset': {'subsample': 2, 'img_downsample': 1, 'center_principle_point': True}, 'matching': {'max_iter': 10, 'lambda_init': 1e-08, 'convergence_thresh': 1e-06, 'dist_thresh': 0.1, 'radius': 3, 'dilation_max': 5}, 'tracking': {'min_match_frac': 0.05, 'max_iters': 50, 'C_conf': 0.0, 'Q_conf': 1.5, 'rel_error': 0.001, 'delta_norm': 0.001, 'huber': 1.345, 'match_frac_thresh': 0.333, 'sigma_ray': 0.003, 'sigma_dist': 10.0, 'sigma_pixel': 1.0, 'sigma_depth': 10.0, 'sigma_point': 0.05, 'pixel_border': -10, 'depth_eps': 1e-06, 'filtering_mode': 'weighted_pointmap', 'filtering_score': 'median'}, 'local_opt': {'pin': 1, 'window_size': 1000000.0, 'C_conf': 0.0, 'Q_conf': 1.5, 'min_match_frac': 0.1, 'pixel_border': -10, 'depth_eps': 1e-06, 'max_iters': 10, 'sigma_ray': 0.003, 'sigma_dist': 10.0, 'sigma_pixel': 1.0, 'sigma_depth': 10.0, 'sigma_point': 0.05, 'delta_norm': 1e-08, 'use_cuda': True}, 'retrieval': {'k': 3, 'min_thresh': 0.005}, 'reloc': {'min_match_frac': 0.3, 'strict': True}, 'inherit': 'config/base.yaml'}
Traceback (most recent call last):
  File "/root/MASt3R-SLAM/main.py", line 170, in <module>
    dataset = load_dataset(args.dataset)
  File "/root/MASt3R-SLAM/mast3r_slam/dataloader.py", line 325, in load_dataset
    return EurocDataset(dataset_path)
  File "/root/MASt3R-SLAM/mast3r_slam/dataloader.py", line 100, in __init__
    tstamp_rgb = np.loadtxt(rgb_list, delimiter=",", dtype=np.unicode_, skiprows=0)
  File "/root/miniconda3/envs/mast3r-slam/lib/python3.11/site-packages/numpy/lib/npyio.py", line 1373, in loadtxt
    arr = _read(fname, dtype=dtype, comment=comment, delimiter=delimiter,
```

```
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\externals\loky\process_executor.py", line 490, in _process_worker
    r = call_item()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\externals\loky\process_executor.py", line 291, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 606, in __call__
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 606, in <listcomp>
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\utils\parallel.py", line 139, in __call__
    return self.function(*args, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_validation.py", line 866, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py", line 1382, in wrapper
    estimator._validate_params()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py", line 436, in _validate_params
    validate_parameter_constraints(
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\utils\_param_validation.py", line 98, in validate_parameter_constraints
    raise InvalidParameterError(
sklearn.utils._param_validation.InvalidParameterError: The 'criterion' parameter of RandomForestRegressor must be a str among {'poisson', 'friedman_mse', 'squared_error', 'absolute_error'}. Got 'mae' instead.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\python\电力负荷预测.py", line 69, in <module>
    random_search.fit(X_train, y_train)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py", line 1389, in wrapper
    return fit_method(estimator, *args, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 1024, in fit
    self._run_search(evaluate_candidates)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 1951, in _run_search
    evaluate_candidates(
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 970, in evaluate_candidates
    out = parallel(
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\utils\parallel.py", line 77, in __call__
    return super().__call__(iterable_with_config)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 2071, in __call__
    return output if self.return_generator else list(output)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1681, in _get_outputs
    yield from self._retrieve()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1783, in _retrieve
    self._raise_error_fast()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1858, in _raise_error_fast
    error_job.get_result(self.timeout)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 757, in get_result
    return self._return_or_raise()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 772, in _return_or_raise
    raise self._result
sklearn.utils._param_validation.InvalidParameterError: The 'criterion' parameter of RandomForestRegressor must be a str among {'poisson', 'friedman_mse', 'squared_error', 'absolute_error'}. Got 'mae' instead.
```

```
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\externals\loky\process_executor.py", line 490, in _process_worker
    r = call_item()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\externals\loky\process_executor.py", line 291, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 606, in __call__
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 606, in <listcomp>
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\utils\parallel.py", line 139, in __call__
    return self.function(*args, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_validation.py", line 866, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py", line 1389, in wrapper
    return fit_method(estimator, *args, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\ensemble\_forest.py", line 448, in fit
    raise ValueError("Out of bag estimation only available if bootstrap=True")
ValueError: Out of bag estimation only available if bootstrap=True
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\python\电力负荷预测.py", line 77, in <module>
    random_search.fit(X_train, y_train)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\base.py", line 1389, in wrapper
    return fit_method(estimator, *args, **kwargs)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 1024, in fit
    self._run_search(evaluate_candidates)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 1951, in _run_search
    evaluate_candidates(
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\model_selection\_search.py", line 970, in evaluate_candidates
    out = parallel(
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\sklearn\utils\parallel.py", line 77, in __call__
    return super().__call__(iterable_with_config)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 2071, in __call__
    return output if self.return_generator else list(output)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1681, in _get_outputs
    yield from self._retrieve()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1783, in _retrieve
    self._raise_error_fast()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 1858, in _raise_error_fast
    error_job.get_result(self.timeout)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 757, in get_result
    return self._return_or_raise()
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python311\Lib\site-packages\joblib\parallel.py", line 772, in _return_or_raise
    raise self._result
ValueError: Out of bag estimation only available if bootstrap=True
```

```
D:\PythonProject\deepseekai\.venv\Scripts\python.exe D:\PythonProject\deepseekai\train_weather_model.py
模型文件已复制到: ./local-deepseek-model\model.safetensors
配置文件已创建: config.json
分词器配置文件已创建: tokenizer_config.json
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message
Traceback (most recent call last):
  File "D:\PythonProject\deepseekai\train_weather_model.py", line 68, in <module>
    tokenizer = AutoTokenizer.from_pretrained(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 1013, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2025, in from_pretrained
    return cls._from_pretrained(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2063, in _from_pretrained
    slow_tokenizer = (cls.slow_tokenizer_class)._from_pretrained(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2278, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 171, in __init__
    self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False))
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\models\llama\tokenization_llama.py", line 198, in get_spm_processor
    tokenizer.Load(self.vocab_file)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\sentencepiece\__init__.py", line 961, in Load
    return self.LoadFromFile(model_file)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\sentencepiece\__init__.py", line 316, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string

Process finished with exit code 1
```

I have already installed the tiktoken and protobuf libraries.
```
D:\PythonProject\deepseekai\.venv\Scripts\python.exe D:\PythonProject\deepseekai\train_weather_model.py
PyTorch 版本: 2.3.1+cu118
CUDA 可用: True
GPU 名称: NVIDIA GeForce GTX 1650 Ti
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Traceback (most recent call last):
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\convert_slow_tokenizer.py", line 1737, in convert_slow_tokenizer
    ).converted()
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\convert_slow_tokenizer.py", line 1631, in converted
    tokenizer = self.tokenizer()
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\convert_slow_tokenizer.py", line 1624, in tokenizer
    vocab_scores, merges = self.extract_vocab_merges_from_model(self.vocab_file)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\convert_slow_tokenizer.py", line 1600, in extract_vocab_merges_from_model
    bpe_ranks = load_tiktoken_bpe(tiktoken_url)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\tiktoken\load.py", line 148, in load_tiktoken_bpe
    contents = read_file_cached(tiktoken_bpe_file, expected_hash)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\tiktoken\load.py", line 48, in read_file_cached
    cache_key = hashlib.sha1(blobpath.encode()).hexdigest()
AttributeError: 'NoneType' object has no attribute 'encode'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\PythonProject\deepseekai\train_weather_model.py", line 31, in <module>
    tokenizer = AutoTokenizer.from_pretrained(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\models\auto\tokenization_auto.py", line 1032, in from_pretrained
    return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2025, in from_pretrained
    return cls._from_pretrained(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_base.py", line 2278, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\models\llama\tokenization_llama_fast.py", line 154, in __init__
    super().__init__(
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\tokenization_utils_fast.py", line 139, in __init__
    fast_tokenizer = convert_slow_tokenizer(self, from_tiktoken=True)
  File "D:\PythonProject\deepseekai\.venv\Lib\site-packages\transformers\convert_slow_tokenizer.py", line 1739, in convert_slow_tokenizer
    raise ValueError(
ValueError: Converting from SentencePiece and Tiktoken failed, if a converter for SentencePiece is available, provide a model path with a SentencePiece tokenizer.model file. Currently available slow->fast converters: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer', 'BertTokenizer', 'BigBirdTokenizer', 'BlenderbotTokenizer', 'CamembertTokenizer', 'CLIPTokenizer', 'CodeGenTokenizer', 'ConvBertTokenizer', 'DebertaTokenizer', 'DebertaV2Tokenizer', 'DistilBertTokenizer', 'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer', 'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FNetTokenizer', 'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer', 'LayoutLMv2Tokenizer', 'LayoutLMv3Tokenizer', 'LayoutXLMTokenizer', 'LongformerTokenizer', 'LEDTokenizer', 'LxmertTokenizer', 'MarkupLMTokenizer', 'MBartTokenizer', 'MBart50Tokenizer', 'MPNetTokenizer', 'MobileBertTokenizer', 'MvpTokenizer', 'NllbTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer', 'Qwen2Tokenizer', 'RealmTokenizer', 'ReformerTokenizer', 'RemBertTokenizer', 'RetriBertTokenizer', 'RobertaTokenizer', 'RoFormerTokenizer', 'SeamlessM4TTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer', 'UdopTokenizer', 'WhisperTokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer', 'SplinterTokenizer', 'XGLMTokenizer', 'LlamaTokenizer', 'CodeLlamaTokenizer', 'GemmaTokenizer', 'Phi3Tokenizer']

Process finished with exit code 1
```
