
Why do I get the error 'dict' object has no attribute 'encode'?

Posted: 2023-06-22 12:18:50 · Views: 317
This error occurs because the dict type has no encode() method: the code in question calls encode() directly on the dict-typed variable data, so Python raises the error. To fix it, convert data from a dict to a string first, and then encode the string. The urlencode() function in the urllib.parse module converts a dict into a URL-encoded query string:

```python
import requests
import urllib.parse

url = 'http://example.com'
headers = {'Content-Type': 'application/x-www-form-urlencoded; charset=EUC-KR'}
data = {'param1': 'value1', 'param2': 'value2'}

# Convert the dict to a query string, then encode that string as EUC-KR bytes
data_encoded = urllib.parse.urlencode(data).encode('EUC-KR')

response = requests.post(url, headers=headers, data=data_encoded)
```

Here urllib.parse.urlencode() first turns the dict into a string; the string's encode('EUC-KR') method then produces EUC-KR bytes, which are passed to the data parameter of requests.post().
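As a side note (not part of the original answer): when no special charset is required, requests can form-encode a dict passed to data by itself, so the manual urlencode().encode() step above is only needed when the server expects a specific encoding such as EUC-KR. A minimal sketch, using the same placeholder URL:

```python
import requests

# requests URL-encodes a dict passed to `data` automatically (UTF-8 by default),
# so no manual encode() call is needed in the common case.
response = requests.post('http://example.com',
                         data={'param1': 'value1', 'param2': 'value2'})
print(response.status_code)
```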
Related questions

'dict' object has no attribute 'encode'

### Solutions for the missing `encode` attribute on dict objects in Python

In Python, the error `'dict' object has no attribute 'encode'` occurs when the string-only method `encode()` is called on a dict object. `encode()` converts a string into a byte sequence; dicts do not support this operation.

If the intent is to encode particular values inside the dict, access those values by key first and encode them:

```python
my_dict = {'key': 'value'}
encoded_value = my_dict['key'].encode('utf-8')
print(encoded_value)
```

If the whole dict needs to be converted to another format before encoding (JSON, for example), the `json` library can do the conversion[^1]:

```python
import json

data = {"Name": "Runoob", "Age": 7}
json_str = json.dumps(data)           # convert the dict to a JSON string
byte_data = json_str.encode('utf-8')  # encode the JSON string
print(byte_data)
```

When the goal is to read data from a file and store it in encoded form, think about how to open the file correctly and which encoding to specify, rather than calling `.encode()` on a dict[^2].

More generally, for scenarios such as network transmission or saving to a database, first serialize the dict into a suitable data-interchange format such as JSON or XML, and only then perform the encoding step.
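To make the file case above concrete, here is a minimal sketch (the file name data.json is illustrative): opening the file with an explicit encoding lets json.dump write the dict without any manual .encode() call.

```python
import json

data = {"Name": "Runoob", "Age": 7}

# The file object performs the str -> bytes conversion using the encoding
# given to open(), so the dict only needs to be serialized, not encoded.
with open('data.json', 'w', encoding='utf-8') as f:
    json.dump(data, f, ensure_ascii=False)
```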

AttributeError: 'dict' object has no attribute 'encode'

When you call encode() on a dict object, Python raises AttributeError: 'dict' object has no attribute 'encode'. This happens because dict objects have no encode() method: encode() is a string method that encodes a string into a given character set. To encode a dict into a given character set, first convert it to a JSON string with json.dumps(), then call encode() on that JSON string.
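A minimal sketch of the failing call and the two-step fix (GBK is just an example charset):

```python
import json

data = {'param': 'value'}

# data.encode('gbk')                # AttributeError: 'dict' object has no attribute 'encode'
json_str = json.dumps(data)         # step 1: dict -> JSON string
byte_data = json_str.encode('gbk')  # step 2: string -> bytes in the target charset
print(byte_data)
```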

Related recommendations

```python
import logging
from flask import Flask, request, jsonify
import numpy as np
import requests
import time
from pymilvus import (
    Collection,
    connections,
    utility,
    AnnSearchRequest,  # added import
    Reranker,          # added import
    WeightedRanker     # added import
)
from config.config1 import MILVUS_CONFIG, MODEL_CONFIG

# Initialize logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[
        logging.FileHandler("logs/classfication_query.log"),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger("ClassficationQuery")

app = Flask(__name__)


class VectorServiceClient:
    """Encode dense and sparse vectors by calling the model service over HTTP"""

    def __init__(self):
        self.service_url = MODEL_CONFIG["model_service_url"]
        self.timeout = 120
        logger.info(f"Using vector service: {self.service_url}")

    def batch_encode_dense(self, texts):
        """Generate dense vectors in batch"""
        return self._call_vector_service(texts, "dense")

    def batch_encode_sparse(self, texts):
        """Generate sparse vectors in batch"""
        return self._call_vector_service(texts, "sparse")

    def _call_vector_service(self, texts, vector_type):
        """Shared helper for calling the vector service"""
        try:
            if not texts:
                return []
            payload = {"texts": texts, "type": vector_type}
            response = requests.post(
                self.service_url,
                headers={"Content-Type": "application/json"},
                json=payload,
                timeout=self.timeout
            )
            if response.status_code >= 400:
                error_detail = f"HTTP {response.status_code} Error: "
                try:
                    error_detail += response.json().get("error", response.text[:500])
                except:
                    error_detail += response.text[:500]
                logger.error(f"{vector_type} service error: {error_detail}")
                response.raise_for_status()
            result = response.json()
            if "error" in result:
                logger.error(f"Vector service error ({vector_type}): {result['error']}")
                raise ValueError(result["error"])
            if "vectors" not in result:
                logger.error(f"Invalid response from {vector_type} service: vectors not found")
                raise ValueError(f"Invalid response from {vector_type} service")
            return result["vectors"]
        except requests.exceptions.RequestException as e:
            logger.error(f"Request to {vector_type} service failed: {str(e)}")
            raise
        except Exception as e:
            logger.error(f"Encoding via {vector_type} service failed: {str(e)}")
            raise


class ClassficationSearchHandler:
    """Feature-value search handler"""

    def __init__(self):
        self.vector_service = VectorServiceClient()
        self.collection = None
        self.connect()
        self.load_collection()

    def connect(self):
        """Connect to Milvus"""
        try:
            connections.connect(
                host=MILVUS_CONFIG["host"],
                port=MILVUS_CONFIG["port"]
            )
            logger.info(f"Connected to Milvus: {MILVUS_CONFIG['host']}")
        except Exception as e:
            logger.error(f"Milvus connection failed: {str(e)}")
            raise

    def load_collection(self):
        """Load the collection"""
        collection_name = MILVUS_CONFIG["collection_name"]
        try:
            if not utility.has_collection(collection_name):
                logger.error(f"Collection {collection_name} does not exist")
                raise ValueError(f"Collection {collection_name} not found")
            self.collection = Collection(collection_name)
            self.collection.load()
            logger.info(f"Loaded collection: {collection_name}")
        except Exception as e:
            logger.error(f"Failed to load collection: {str(e)}")
            raise

    def hybrid_search(self, query_text, top_k=5, dense_weight=0.5, sparse_weight=0.5):
        """
        Run hybrid retrieval (dense vector + sparse vector)

        Args:
            query_text: query text (feature value)
            top_k: number of results to return
            dense_weight: weight of the dense vector
            sparse_weight: weight of the sparse vector

        Returns:
            Ranked result list
        """
        start_time = time.time()

        # 1. Encode the query text
        try:
            logger.info(f"Encoding query text: '{query_text}'")
            # Dense vector
            dense_vectors = self.vector_service.batch_encode_dense([query_text])
            if not dense_vectors or len(dense_vectors) == 0:
                logger.error("Dense vector encoding returned empty result")
                return []
            dense_vector = dense_vectors[0]
            logger.info(f"Dense vector generated, length: {len(dense_vector)}")

            # Sparse vector
            sparse_vectors = self.vector_service.batch_encode_sparse([query_text])
            if not sparse_vectors or len(sparse_vectors) == 0:
                logger.error("Sparse vector encoding returned empty result")
                return []
            sparse_vector = sparse_vectors[0]
            logger.info(f"Sparse vector generated, length: {len(sparse_vector)}")
        except Exception as e:
            logger.error(f"Vector encoding failed: {str(e)}")
            return []

        # 2. Build the search requests
        try:
            # Dense-vector search request
            dense_search_req = AnnSearchRequest(
                data=[dense_vector],  # note: must be a 2-D list
                anns_field="classfication_dense_vector",
                param={"metric_type": "IP", "params": {"nprobe": 16}},
                limit=top_k * 3,  # fetch extra candidates for fusion
                weight=dense_weight
            )
            # Sparse-vector search request
            sparse_search_req = AnnSearchRequest(
                data=[sparse_vector],  # note: must be a 2-D list
                anns_field="classfication_sparse_vector",
                param={"metric_type": "IP", "params": {}},  # sparse vectors need no nprobe
                limit=top_k * 3,
                weight=sparse_weight
            )

            # 3. Run the hybrid search
            logger.info("Executing hybrid search...")
            start_search = time.time()

            # Fuse results with RRF (Reciprocal Rank Fusion)
            rerank = Reranker(
                strategy="rrf",   # RRF strategy
                params={"k": 60}  # RRF parameter
            )

            results = self.collection.hybrid_search(
                [dense_search_req, sparse_search_req],  # list of search requests
                rerank=rerank,                          # rerank strategy
                limit=top_k,                            # final number of results
                output_fields=["matnr", "matkl", "maktx", "classfication"]
            )
            search_time = time.time() - start_search
            logger.info(f"Hybrid search completed in {search_time:.2f}s, found {len(results)} results")
        except Exception as e:
            logger.error(f"Hybrid search failed: {str(e)}", exc_info=True)
            return []

        # 4. Format and return the results
        formatted_results = []
        for i, hit in enumerate(results):
            entity = hit.entity
            formatted_results.append({
                "rank": i + 1,
                "matnr": entity.get("matnr", ""),
                "matkl": entity.get("matkl", ""),
                "maktx": entity.get("maktx", ""),
                "classfication": entity.get("classfication", ""),
                "score": hit.score
            })
            # Log the first five results
            if i < 5:
                logger.info(f"Result #{i + 1}: MATNR={entity.get('matnr')}, Score={hit.score:.4f}")

        total_time = time.time() - start_time
        logger.info(f"Total search time: {total_time:.2f}s")
        return formatted_results


# Initialize the search handler
search_handler = ClassficationSearchHandler()


@app.route('/query_similar_by_classfication', methods=['POST'])
def query_similar_by_classfication():
    """Feature-value similarity query endpoint"""
    try:
        data = request.json
        if not data or "query_text" not in data:
            return jsonify({"error": "Missing 'query_text' parameter"}), 400

        query_text = data["query_text"]
        top_k = data.get("top_k", 5)
        dense_weight = data.get("dense_weight", 0.5)
        sparse_weight = data.get("sparse_weight", 0.5)

        logger.info(f"New query: text='{query_text}', top_k={top_k}, "
                    f"dense_weight={dense_weight}, sparse_weight={sparse_weight}")

        # Run the hybrid search
        results = search_handler.hybrid_search(
            query_text,
            top_k=top_k,
            dense_weight=dense_weight,
            sparse_weight=sparse_weight
        )

        if not results:
            logger.info("No results found")
            return jsonify({"results": []})
        return jsonify({"results": results})
    except Exception as e:
        logger.error(f"API error: {str(e)}", exc_info=True)
        return jsonify({"error": "Internal server error"}), 500


@app.route('/health', methods=['GET'])
def health_check():
    """Health-check endpoint"""
    try:
        # Check the Milvus connection
        if not search_handler.collection:
            return jsonify({"status": "down", "reason": "Milvus not connected"}), 500
        # Check the model service
        try:
            response = requests.get(f"{MODEL_CONFIG['model_service_url']}/health", timeout=5)
            if response.status_code != 200:
                return jsonify({"status": "down", "reason": "Model service unavailable"}), 500
        except Exception as e:
            return jsonify({"status": "down", "reason": f"Model service error: {str(e)}"}), 500
        return jsonify({"status": "up"}), 200
    except Exception as e:
        return jsonify({"status": "down", "reason": str(e)}), 500


if __name__ == '__main__':
    try:
        logger.info("Starting Classfication Query Service on 0.0.0.0:8081")
        app.run(host='0.0.0.0', port=2379, debug=False)
    except Exception as e:
        logger.error(f"Failed to start service: {str(e)}")
```

This code fails with the following error:

2025-07-07 16:05:40,087 - INFO - Executing hybrid search...
2025-07-07 16:05:40,087 [ERROR][handler]: Unexpected error: [hybrid_search], 'dict' object has no attribute 'data', <Time: {'RPC start': '2025-07-07 16:05:40.087355', 'Exception': '2025-07-07 16:05:40.087428'}> (decorators.py:158)
2025-07-07 16:05:40,087 - ERROR - Hybrid search failed: <MilvusException: (code=1, message=Unexpected error, message=<'dict' object has no attribute 'data'>)>
2025-07-07 16:05:40,087 - INFO - No results found
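For reference (an assumption about the cause, not a confirmed answer from this thread): recent pymilvus releases expose ready-made rankers such as RRFRanker and WeightedRanker rather than a generic Reranker class, and AnnSearchRequest does not take a per-request weight argument; the weights belong to WeightedRanker. A minimal sketch under those assumptions, reusing the field names from the code above:

```python
from pymilvus import AnnSearchRequest, Collection, RRFRanker, WeightedRanker

def hybrid_search_fixed(collection: Collection, dense_vector, sparse_vector,
                        top_k: int, dense_weight: float, sparse_weight: float):
    """Hybrid-search sketch: fusion weights live in the ranker, not in the requests."""
    dense_req = AnnSearchRequest(
        data=[dense_vector],
        anns_field="classfication_dense_vector",
        param={"metric_type": "IP", "params": {"nprobe": 16}},
        limit=top_k * 3,
    )
    sparse_req = AnnSearchRequest(
        data=[sparse_vector],
        anns_field="classfication_sparse_vector",
        param={"metric_type": "IP", "params": {}},
        limit=top_k * 3,
    )
    # Weighted fusion of the two request scores ...
    rerank = WeightedRanker(dense_weight, sparse_weight)
    # ... or reciprocal-rank fusion instead: rerank = RRFRanker(60)
    results = collection.hybrid_search(
        [dense_req, sparse_req],
        rerank=rerank,
        limit=top_k,
        output_fields=["matnr", "matkl", "maktx", "classfication"],
    )
    return results[0]  # hybrid_search returns one hit list per query vector
```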

```python
import json
import re
from typing import Dict, List

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import transformers
from peft import LoraConfig, TaskType, get_peft_model
from torch.utils.data import DataLoader, Dataset, RandomSampler, SequentialSampler
from tqdm import tqdm
from transformers import AutoTokenizer, PreTrainedTokenizer, Trainer, TrainingArguments
from lora_plus import LoraPlusTrainer  # make sure the lora_plus package is installed
from swanlab.integration.transformers import SwanLabCallback
import swanlab

# Initialize SwanLab
swanlab.init("Finetune-Llama3.2-with-Encoder")
swanlab_callback = SwanLabCallback(
    project="Finetune-Llama3.2-with-Encoder",
    experiment_name="Finetune-Llama3.2-with-Encoder"
)

# Constants
CHEM_FORMULA_SIZE = r"([A-Z][a-z]*)([0-9]*)"
VALID_ELEMENTS = ["C", "N", "P", "O", "S", "Si", "I", "H", "Cl", "F", "Br",
                  "B", "Se", "Fe", "Co", "As", "K", "Na"]
element_to_idx = {elem: idx for idx, elem in enumerate(VALID_ELEMENTS)}

# Chemical formula -> dense vector
def formula_to_dense(chem_formula: str) -> torch.Tensor:
    dense_vec = torch.zeros(len(VALID_ELEMENTS), dtype=torch.float32)
    matches = re.findall(CHEM_FORMULA_SIZE, chem_formula)
    for chem_symbol, num_str in matches:
        num = 1 if num_str == "" else int(num_str)
        if chem_symbol in element_to_idx:
            idx = element_to_idx[chem_symbol]
            dense_vec[idx] += num
    return dense_vec

# Positional encoding (PyTorch implementation)
def positional_encoding(max_position: int, d_model: int, min_freq: float = 1e-4) -> torch.Tensor:
    position = torch.arange(max_position).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(min_freq)) / d_model))
    pos_enc = torch.zeros(max_position, d_model)
    pos_enc[:, 0::2] = torch.sin(position * div_term)
    pos_enc[:, 1::2] = torch.cos(position * div_term)
    return pos_enc

# Precompute the positional-encoding matrix
P = positional_encoding(2000000, 256)
dimn = 256  # must match the positional-encoding dimension

# Mass-spectrum encoding
def encode_spectra(rag_tensor: list, P: torch.Tensor, dimn: int) -> torch.Tensor:
    encoded_list = []
    for sample in rag_tensor:
        mz_list, intensity_list = sample
        # Base feature matrix [m/z, intensity]
        base_features = torch.tensor([mz_list, intensity_list], dtype=torch.float32).T
        # Positional-encoding features
        pos_enc = torch.stack([P[min(int(mz), P.size(0) - 1)] for mz in mz_list])
        # Combine all features [m/z, intensity, pos_enc...]
        features = torch.cat([base_features, pos_enc], dim=1)
        # Pad/truncate to a fixed length
        if features.size(0) < 501:
            padding = torch.zeros(501 - features.size(0), features.size(1))
            features = torch.cat([features, padding], dim=0)
        else:
            features = features[:501]
        encoded_list.append(features)
    return torch.stack(encoded_list)

# Mass-spectrum preprocessing
def preprocess_spectra(df: pd.DataFrame) -> list:
    spectra_list = []
    for idx, row in tqdm(df.iterrows(), total=len(df)):
        spectrum_str = row['Spectrum']
        total_mass = row['Total Exact Mass']
        # Parse the spectrum string
        pairs = spectrum_str.split()
        mz_list, intensity_list = [], []
        for pair in pairs:
            mz, intensity = pair.split(':')
            mz_list.append(float(mz))
            intensity_list.append(float(intensity))
        # Append the total exact mass
        mz_list.append(total_mass)
        intensity_list.append(0.0)
        # Round the values
        mz_list = [round(mz, 2) for mz in mz_list]
        intensity_list = [round(intensity, 2) for intensity in intensity_list]
        spectra_list.append([mz_list, intensity_list])
    return spectra_list

# Custom dataset
class MolecularDataset(Dataset):
    def __init__(self, csv_path: str, tokenizer: AutoTokenizer, max_seq_len: int = 512):
        self.df = pd.read_csv(csv_path)
        self.tokenizer = tokenizer
        self.max_seq_len = max_seq_len
        # Preprocess the mass-spectrum data
        spectra_data = preprocess_spectra(self.df)
        self.spec_encoded = encode_spectra(spectra_data, P, dimn)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx) -> dict:
        # Molecular-formula vector
        formula = self.df.iloc[idx]['Molecular Formula']
        formula_vec = formula_to_dense(formula)
        # Spectrum matrix
        spec_matrix = self.spec_encoded[idx]
        # SELFIES encoding
        selfies_str = self.df.iloc[idx]['SELFIES']
        encoding = self.tokenizer(
            selfies_str,
            padding='max_length',
            truncation=True,
            max_length=self.max_seq_len,
            return_tensors='pt'
        )
        return {
            'formula_vec': formula_vec,
            'spec_matrix': spec_matrix,
            'input_ids': encoding['input_ids'].squeeze(0),
            'attention_mask': encoding['attention_mask'].squeeze(0)
        }

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained('/root/workspace/checkpoint-2500')

# Build the dataset
dataset = MolecularDataset('/root/workspace/SELFIES-SFT.csv', tokenizer)
data_collator = transformers.DataCollatorForSeq2Seq(tokenizer=tokenizer)

# Custom model with extra encoders
class LlamaWithEncoder(nn.Module):
    def __init__(self, base_model, encoder1_dim=18, encoder2_dim=256, hidden_dim=512):
        super().__init__()
        self.base_model = base_model

        # First Transformer encoder
        encoder1_layer = nn.TransformerEncoderLayer(
            d_model=encoder1_dim,
            nhead=3,
            dim_feedforward=hidden_dim,
            batch_first=True
        )
        self.encoder1 = nn.TransformerEncoder(encoder1_layer, num_layers=2)

        # Second Transformer encoder
        encoder2_layer = nn.TransformerEncoderLayer(
            d_model=encoder2_dim,
            nhead=8,
            dim_feedforward=hidden_dim,
            batch_first=True
        )
        self.encoder2 = nn.TransformerEncoder(encoder2_layer, num_layers=2)

        # Projection layers
        self.proj1 = nn.Linear(encoder1_dim, base_model.config.hidden_size)
        self.proj2 = nn.Linear(encoder2_dim, base_model.config.hidden_size)

        # Fusion layer
        self.fusion = nn.Linear(2 * base_model.config.hidden_size, base_model.config.hidden_size)

    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
        return self.base_model.prepare_inputs_for_generation(
            input_ids, past_key_values=past_key_values, **kwargs
        )

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        encoder1_inputs=None,
        encoder2_inputs=None,
        labels=None,
        past_key_values=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
        **kwargs
    ):
        # Run the extra encoders
        enc1_out = self.encoder1(encoder1_inputs)
        enc1_out = enc1_out.mean(dim=1)
        enc1_proj = self.proj1(enc1_out)

        enc2_out = self.encoder2(encoder2_inputs)
        enc2_out = enc2_out.mean(dim=1)
        enc2_proj = self.proj2(enc2_out)

        # Fuse the encoder outputs
        fused = self.fusion(torch.cat([enc1_proj, enc2_proj], dim=1))
        fused = fused.unsqueeze(1)

        # Embedding-layer output
        embeddings = self.base_model.get_input_embeddings()(input_ids)

        # Blend the fused vector into the first token's embedding
        if embeddings.size(1) > 0:
            embeddings[:, 0, :] = (embeddings[:, 0, :] + fused[:, 0, :]) / 2

        # Call the base model with the modified embeddings
        return self.base_model(
            inputs_embeds=embeddings,
            attention_mask=attention_mask,
            labels=labels,
            past_key_values=past_key_values,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs
        )

# Load the pretrained model
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    "/root/workspace/checkpoint-2500",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = LlamaWithEncoder(base_model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules="all-linear",  # target all linear layers
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # example output: 0.3% of parameters trainable

training_args = TrainingArguments(
    output_dir="./llama3.2-SELFIES-SFT",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    learning_rate=5.0e-05,
    optim="adamw_torch",
    logging_steps=10,
    bf16=True,
    save_strategy="steps",
    lr_scheduler_type='cosine',
    max_grad_norm=1.0,
    save_steps=2000,
    warmup_steps=0
)

class CustomTrainer(LoraPlusTrainer):
    def get_train_dataloader(self) -> DataLoader:
        """Return the training dataloader, using shuffling to randomize the dataset."""
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            shuffle=True,
            collate_fn=self.data_collator,
            drop_last=False,
        )

# Use the modified CustomTrainer
lp_trainer = CustomTrainer(
    model,
    training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    callbacks=[swanlab_callback],
)
lp_trainer.train()
lp_trainer.save_model(output_dir='./llama3.2-SELFIES-SFT')
```

Fix the following error:

  File "/root/workspace/sft.py", line 332, in <module>
  File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2241, in train
    return inner_training_loop(
  File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 2318, in _inner_training_loop
    self.optimizer, self.lr_scheduler = deepspeed_init(self, num_training_steps=max_steps)
  File "/opt/conda/lib/python3.10/site-packages/transformers/integrations/deepspeed.py", line 398, in deepspeed_init
    hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)
  File "/opt/conda/lib/python3.10/site-packages/transformers/integrations/deepspeed.py", line 226, in trainer_config_finalize
    if hasattr(model.config, "hidden_size"):
  File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 857, in __getattr__
    return getattr(self.base_model, name)
  File "/opt/conda/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 367, in __getattr__
    return getattr(self.model, name)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LlamaWithEncoder' object has no attribute 'config'
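For reference (a plausible direction, not a confirmed fix from this thread): the traceback shows DeepSpeed's Trainer integration probing model.config.hidden_size, and a plain nn.Module wrapper exposes no config attribute. Forwarding the base model's config on the wrapper is a minimal sketch:

```python
import torch.nn as nn

class LlamaWithEncoder(nn.Module):
    """Wrapper sketch: only the attribute the DeepSpeed integration probes is shown."""
    def __init__(self, base_model):
        super().__init__()
        self.base_model = base_model
        # hasattr(model.config, "hidden_size") in the DeepSpeed integration
        # needs a real config object on the wrapper.
        self.config = base_model.config
```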

The following problem appears; please improve further:

(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
trainable params: 1,572,864 || all params: 1,838,401,536 || trainable%: 0.0856
Map: 100%|████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 76.55 examples/s]
Training-set sample:
{'anchor_input_ids': [56568, 118919, 116122, 11319], 'positive_input_ids': [116122, 20412, 107340, 9370, 100357, 102323, 3837, 109202, 104078, 103975, 100675, 101940, 100912, 105054, 6313], 'negative_input_ids': [100323, 104307, 99245, 9370, 106059, 104060, 3837, 104530, 115604, 99329, 11319]}
Validation-set sample:
{'anchor_input_ids': [56568, 118919, 116122, 11319], 'positive_input_ids': [116122, 20412, 107340, 9370, 100357, 102323, 3837, 109202, 104078, 103975, 100675, 101940, 100912, 105054, 6313], 'negative_input_ids': [100323, 104307, 99245, 9370, 106059, 104060, 3837, 104530, 115604, 99329, 11319]}
INFO:__main__:GPU memory usage: 1.77GB allocated, 1.81GB reserved
  0%|          | 0/3 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\checkpoint.py:87: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
  warnings.warn(
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 328, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2531, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 3676, in training_step
    loss = self.compute_loss(model, inputs)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 203, in compute_loss
    lm_labels[lm_labels == self.tokenizer.pad_token_id] = -100
AttributeError: 'NoneType' object has no attribute 'pad_token_id'
  0%|          | 0/3 [00:02<?, ?it/s]
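For reference, the deprecation notice in the log points at the likely cause: newer transformers releases store the tokenizer on Trainer.processing_class, leaving Trainer.tokenizer as None inside compute_loss. A minimal sketch of a version-tolerant lookup (the helper name is hypothetical):

```python
from transformers import Trainer

def resolve_tokenizer(trainer: Trainer):
    """Return the trainer's tokenizer on both old and new transformers versions.

    Newer releases deprecate Trainer.tokenizer in favour of
    Trainer.processing_class, which is why self.tokenizer can be None.
    """
    return getattr(trainer, "processing_class", None) or trainer.tokenizer

# Inside compute_loss, use the helper instead of self.tokenizer:
#     pad_id = resolve_tokenizer(self).pad_token_id
#     lm_labels[lm_labels == pad_id] = -100
```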

(.venv) PS D:\pycharm\daima\fg\login_reg> python manage.py makemigrations
Traceback (most recent call last):
  File "D:\pycharm\daima\fg\login_reg\manage.py", line 21, in <module>
    main()
  File "D:\pycharm\daima\fg\login_reg\manage.py", line 17, in main
    execute_from_command_line(sys.argv)
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\base.py", line 364, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\base.py", line 83, in wrapped
    res = handle_func(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\core\management\commands\makemigrations.py", line 101, in handle
    loader.check_consistent_history(connection)
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\migrations\loader.py", line 283, in check_consistent_history
    applied = recorder.applied_migrations()
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\migrations\recorder.py", line 73, in applied_migrations
    if self.has_table():
       ^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\migrations\recorder.py", line 56, in has_table
    return self.Migration._meta.db_table in self.connection.introspection.table_names(self.connection.cursor())
                                                                                      ^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\base\base.py", line 256, in cursor
    return self._cursor()
           ^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\base\base.py", line 233, in _cursor
    self.ensure_connection()
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\base\base.py", line 217, in ensure_connection
    self.connect()
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\base\base.py", line 197, in connect
    self.init_connection_state()
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\mysql\base.py", line 231, in init_connection_state
    if self.features.is_sql_auto_is_null_enabled:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\utils\functional.py", line 80, in __get__
    res = instance.__dict__[self.name] = self.func(instance)
                                         ^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\mysql\features.py", line 82, in is_sql_auto_is_null_enabled
    cursor.execute('SELECT @@SQL_AUTO_IS_NULL')
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\utils.py", line 103, in execute
    sql = self.db.ops.last_executed_query(self.cursor, sql, params)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\pycharm\daima\fg\.venv\Lib\site-packages\django\db\backends\mysql\operations.py", line 146, in last_executed_query
    query = query.decode(errors='replace')
            ^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'decode'. Did you mean: 'encode'?
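For context (an assumption from the version pairing, not a confirmed answer in this thread): this traceback is the known incompatibility between an older Django, whose MySQL backend assumes mysqlclient returns the executed query as bytes, and a newer mysqlclient, which returns str. Upgrading Django, or pinning mysqlclient to a version matching the installed Django, resolves it; later Django versions guard the decode roughly like this paraphrased sketch (not the exact upstream code):

```python
def last_executed_query(cursor):
    # mysqlclient exposes the last statement on the (undocumented) _executed
    # attribute; depending on the version it is bytes or str.
    query = getattr(cursor, '_executed', None)
    if isinstance(query, bytes):
        query = query.decode(errors='replace')  # only decode actual bytes
    return query
```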

```python
import json
import re
from typing import Dict, List

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import transformers
from peft import LoraConfig, TaskType, get_peft_model
from torch.utils.data import DataLoader, Dataset, RandomSampler, SequentialSampler
from tqdm import tqdm
from transformers import AutoTokenizer, PreTrainedTokenizer, Trainer, TrainingArguments
from lora_plus import LoraPlusTrainer
from swanlab.integration.transformers import SwanLabCallback
import swanlab

swanlab.init("Finetune-Llama3.2-with-Encoder")
swanlab_callback = SwanLabCallback(
    project="Finetune-Llama3.2-with-Encoder",
    experiment_name="Finetune-Llama3.2-with-Encoder"
)

# Constants
CHEM_FORMULA_SIZE = "([A-Z][a-z]*)([0-9]*)"
VALID_ELEMENTS = ["C", "N", "P", "O", "S", "Si", "I", "H", "Cl", "F", "Br",
                  "B", "Se", "Fe", "Co", "As", "K", "Na"]
ELEMENT_VECTORS = np.eye(len(VALID_ELEMENTS))
element_to_position = dict(zip(VALID_ELEMENTS, ELEMENT_VECTORS))

# Chemical formula -> dense vector
def formula_to_dense(chem_formula: str) -> np.ndarray:
    total_onehot = []
    for (chem_symbol, num) in re.findall(CHEM_FORMULA_SIZE, chem_formula):
        num = 1 if num == "" else int(num)
        one_hot = element_to_position[chem_symbol].reshape(1, -1)
        one_hot_repeats = np.repeat(one_hot, repeats=num, axis=0)
        total_onehot.append(one_hot_repeats)
    if len(total_onehot) == 0:
        dense_vec = np.zeros(len(VALID_ELEMENTS))
    else:
        dense_vec = np.vstack(total_onehot).sum(0)
    return dense_vec

# Sine embedding
def sine_embed(v, max_count=256):
    num_freqs = int(np.ceil(np.log2(max_count)))
    freqs = 0.5 ** torch.arange(num_freqs, dtype=torch.float32) * np.pi
    v_tensor = torch.tensor(v, dtype=torch.float32)[:, None]
    embedded = torch.sin(v_tensor * freqs[None, :])
    return torch.abs(embedded).numpy()

def positional_encoding(max_position, d_model, min_freq=1e-6):
    position = np.arange(max_position)
    freqs = min_freq ** (2 * (np.arange(d_model) // 2) / d_model)
    pos_enc = position.reshape(-1, 1) * freqs.reshape(1, -1)
    pos_enc[:, ::2] = np.cos(pos_enc[:, ::2])
    pos_enc[:, 1::2] = np.sin(pos_enc[:, 1::2])
    return pos_enc

# Build the positional encoding and convert it to a PyTorch tensor for later use
P = positional_encoding(2000000, 256, min_freq=1e2)
P = torch.tensor(P, dtype=torch.float32)
dimn = 255

# Mass-spectrum encoding (after the fix)
def encoding(rag_tensor, P, dimn):
    to_pad = []
    for sample in rag_tensor:
        # sample[0] and sample[1] are plain Python lists, so use them directly
        all_dim = [sample[0]]  # no .tolist() needed; it is already a list
        # Positional encodings (sample[1] is a list, iterate directly)
        pos_enc = [P[int(i) - 1] for i in sample[1]]
        for dim_idx in range(dimn):
            dim_vals = [i[dim_idx].item() for i in pos_enc]
            all_dim.append(dim_vals)
        to_pad.append(all_dim)
    # Sequence padding with PyTorch
    padded = []
    for i in to_pad:
        # Convert to a tensor
        tensor = torch.tensor(i, dtype=torch.float32)
        # How much padding is needed
        pad_length = max(0, 501 - tensor.size(1))
        # Pad at the end
        padded_tensor = torch.nn.functional.pad(tensor, (0, pad_length), mode='constant', value=0)
        # Truncate if longer than 501
        if padded_tensor.size(1) > 501:
            padded_tensor = padded_tensor[:, :501]
        padded.append(padded_tensor)
    # Stack and swap the axes
    to_pad = torch.stack(padded)
    to_pad = to_pad.permute(0, 2, 1)  # equivalent to numpy's swapaxes(to_pad, 1, -1)
    return to_pad

# Mass-spectrum preprocessing (PyTorch implementation)
def prepro_specs_train(df):
    df = df.reset_index(drop=True)
    valid = []
    mz_intensity = df['Spectrum'].to_list()

    def process_line(line):
        pairs = line.split()
        mz_list = []
        intensity_list = []
        for pair in pairs:
            mz, intensity = pair.split(':')
            mz_list.append(float(mz))
            intensity_list.append(float(intensity))
        return mz_list, intensity_list

    for idx, intensities in tqdm(enumerate(mz_intensity)):
        mz_list, intensity_list = process_line(intensities)
        # Append the total exact mass with a zero intensity value
        mz_list.append(float(df.at[idx, 'Total Exact Mass']))
        intensity_list.append(0.0)
        # Round the values
        round_mz_list = [round(float(mz), 2) for mz in mz_list]
        round_intensity_list = [round(float(intensity), 2) for intensity in intensity_list]
        valid.append([round_mz_list, round_intensity_list])
    return valid  # a list of lists

# Custom dataset
class CSVDataset(torch.utils.data.Dataset):
    def __init__(self, csv_path, tokenizer: PreTrainedTokenizer, max_selfies_len=512):
        self.df = pd.read_csv(csv_path)
        self.tokenizer = tokenizer
        self.max_selfies_len = max_selfies_len
        # Preprocess the mass-spectrum data
        spec_df = self.df[['Total Exact Mass', 'Spectrum']].copy()
        self.rag_tensor = prepro_specs_train(spec_df)
        self.spec_encoded = encoding(self.rag_tensor, P, dimn)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx) -> Dict[str, torch.Tensor]:
        # 1. Molecular formula
        formula = self.df.iloc[idx]['Molecular Formula']
        formula_vec = formula_to_dense(formula)  # shape: (18,)
        # 2. Mass-spectrum data
        spec_matrix = self.spec_encoded[idx]  # shape: (501, 257)
        # 3. SELFIES, now with an attention_mask
        selfies_str = self.df.iloc[idx]['SELFIES']
        # Get both input_ids and attention_mask
        encoding_result = self.tokenizer.encode_plus(
            selfies_str,
            add_special_tokens=True,  # add [CLS] and [SEP]
            max_length=self.max_selfies_len,
            padding='max_length',
            truncation=True,
            return_attention_mask=True,
            return_tensors='pt'
        )
        input_ids = encoding_result['input_ids'].squeeze(0)
        attention_mask = encoding_result['attention_mask'].squeeze(0)
        return {
            'formula_vec': torch.tensor(formula_vec, dtype=torch.float32),
            'spec_matrix': spec_matrix,  # already a tensor, no conversion needed
            'selfies_ids': input_ids,
            'attention_mask': attention_mask
        }

# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained('/root/workspace/checkpoint-2500')

# Build the dataset
dataset = CSVDataset('/root/workspace/SELFIES-SFT.csv', tokenizer)
data_collator = transformers.DataCollatorForSeq2Seq(tokenizer=tokenizer)

# Custom model with extra encoders
class LlamaWithEncoder(nn.Module):
    def __init__(self, base_model, encoder1_dim=18, encoder2_dim=256, hidden_dim=512):
        super().__init__()
        # Base Llama model
        self.base_model = base_model

        # First Transformer encoder (input shape (batch, 18))
        encoder1_layer = nn.TransformerEncoderLayer(
            d_model=encoder1_dim,
            nhead=3,  # 18 is divisible by 3
            dim_feedforward=hidden_dim,
            batch_first=True
        )
        self.encoder1 = nn.TransformerEncoder(encoder1_layer, num_layers=2)

        # Second Transformer encoder (input shape (batch, 501, 257))
        encoder2_layer = nn.TransformerEncoderLayer(
            d_model=encoder2_dim,
            nhead=8,  # d_model=256 is divisible by 8
            dim_feedforward=hidden_dim,
            batch_first=True
        )
        self.encoder2 = nn.TransformerEncoder(encoder2_layer, num_layers=2)

        # Projection layers mapping both encoder outputs to Llama's hidden size
        self.proj1 = nn.Linear(encoder1_dim, base_model.config.hidden_size)
        self.proj2 = nn.Linear(encoder2_dim, base_model.config.hidden_size)

        # Fusion layer
        self.fusion = nn.Linear(2 * base_model.config.hidden_size, base_model.config.hidden_size)

    def forward(self, input_ids=None, attention_mask=None,
                encoder1_inputs=None, encoder2_inputs=None, labels=None):
        # Run the extra encoders
        enc1_out = self.encoder1(encoder1_inputs)  # (batch, 18, 18)
        enc1_out = enc1_out.mean(dim=1)            # (batch, 18)
        enc1_proj = self.proj1(enc1_out)           # (batch, hidden_size)

        enc2_out = self.encoder2(encoder2_inputs)  # (batch, 501, 257)
        enc2_out = enc2_out.mean(dim=1)            # (batch, 257)
        enc2_proj = self.proj2(enc2_out)           # (batch, hidden_size)

        # Fuse the encoder outputs
        fused = self.fusion(torch.cat([enc1_proj, enc2_proj], dim=1))  # (batch, hidden_size)

        # Use the fused vector as the decoder's initial state
        batch_size = fused.size(0)
        fused = fused.unsqueeze(1)  # (batch, 1, hidden_size)

        # Prepare the Llama inputs
        outputs = self.base_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels,
        )

        # Fuse the encoder output with the Llama embedding-layer output
        embeddings = self.base_model.get_input_embeddings()(input_ids)  # (batch, seq_len, hidden_size)

        # Blend the encoder output into the first position's embedding
        if embeddings.size(1) > 0:
            embeddings[:, 0, :] = (embeddings[:, 0, :] + fused[:, 0, :]) / 2

        # Recompute the model outputs with the modified embeddings
        outputs = self.base_model(
            inputs_embeds=embeddings,
            attention_mask=attention_mask,
            labels=labels
        )
        return outputs

# Load the pretrained model
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    "/root/workspace/checkpoint-2500",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = LlamaWithEncoder(base_model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules="all-linear",  # target all linear layers
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # example output: 0.3% of parameters trainable

training_args = TrainingArguments(
    output_dir="./llama3.2-SELFIES-SFT",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    learning_rate=5.0e-05,
    optim="adamw_torch",
    logging_steps=10,
    bf16=True,
    save_strategy="steps",
    lr_scheduler_type='cosine',
    max_grad_norm=1.0,
    save_steps=2000,
    warmup_steps=0
)

class CustomTrainer(LoraPlusTrainer):
    def get_train_dataloader(self) -> DataLoader:
        """Return the training dataloader, using shuffling to randomize the dataset."""
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            shuffle=True,
            collate_fn=self.data_collator,
            drop_last=False,
        )

# Use the modified CustomTrainer
lp_trainer = CustomTrainer(
    model,
    training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    callbacks=[swanlab_callback],
)
lp_trainer.train()
lp_trainer.save_model(output_dir='./llama3.2-SELFIES-SFT')
```

Running this raises the following error; please fix it:

  File "/root/workspace/sft.py", line 278, in <module>
    model = get_peft_model(model, lora_config)
  File "/opt/conda/lib/python3.10/site-packages/peft/mapping_func.py", line 125, in get_peft_model
    return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
  File "/opt/conda/lib/python3.10/site-packages/peft/peft_model.py", line 1811, in __init__
    self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation
  File "/opt/conda/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 367, in __getattr__
    return getattr(self.model, name)
  File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'LlamaWithEncoder' object has no attribute 'prepare_inputs_for_generation'
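The earlier variant of this script (the first training code dump above) already contains the likely fix: PEFT's PeftModel.__init__ reads prepare_inputs_for_generation from the wrapped model, so the wrapper must delegate it to the base model. A minimal sketch showing just the parts PEFT touches:

```python
import torch.nn as nn

class LlamaWithEncoder(nn.Module):
    """Wrapper sketch: only the attributes PEFT looks up are shown here."""
    def __init__(self, base_model):
        super().__init__()
        self.base_model = base_model
        # Forwarding the config also prevents the later
        # "'LlamaWithEncoder' object has no attribute 'config'" failure.
        self.config = base_model.config

    def forward(self, *args, **kwargs):
        return self.base_model(*args, **kwargs)

    # PeftModel.__init__ reads this attribute, so delegate it.
    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
        return self.base_model.prepare_inputs_for_generation(
            input_ids, past_key_values=past_key_values, **kwargs
        )
```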

xyg@xyg-T6AD:~$ sudo apt install ros-humble-hardware-interface
[sudo] password for xyg:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  fonts-lyx libaom-dev libarmadillo-dev libarpack2-dev libblas-dev libblosc-dev
  libcfitsio-dev libcfitsio-doc libcharls-dev libdav1d-dev libde265-dev
  libdouble-conversion-dev libfontconfig-dev libfontconfig1-dev libfreexl-dev
  libfyba-dev libgdal-dev libgeos-dev libgeotiff-dev libgif-dev libgl2ps-dev
  libheif-dev libjson-c-dev libjsoncpp-dev libkml-dev libkmlconvenience1
  libkmlregionator1 libkmlxsd1 liblapack-dev liblbfgsb0 libnetcdf-c++4
  libnetcdf-cxx-legacy-dev libogdi-dev libogg-dev libopenjp2-7-dev libopenni-dev
  libopenni-sensor-pointclouds0 libopenni0 libopenni2-0 libopenni2-dev
  libpcl-apps1.12 libpcl-common1.12 libpcl-dev libpcl-features1.12
  libpcl-filters1.12 libpcl-io1.12 libpcl-kdtree1.12 libpcl-keypoints1.12
  libpcl-ml1.12 libpcl-octree1.12 libpcl-outofcore1.12 libpcl-people1.12
  libpcl-recognition1.12 libpcl-registration1.12 libpcl-sample-consensus1.12
  libpcl-search1.12 libpcl-segmentation1.12 libpcl-stereo1.12 libpcl-surface1.12
  libpcl-tracking1.12 libpcl-visualization1.12 libpoppler-dev
  libpoppler-private-dev libproj-dev libqt5designercomponents5
  libqt5qmlworkerscript5 libqt5quickparticles5 libqt5quickshapes5
  libqt5quicktest5 libqt5webkit5-dev librttopo-dev libspatialite-dev
  libsuperlu-dev libtheora-dev liburiparser-dev libutfcpp-dev libvtk9-dev
  libvtk9-java libvtk9-qt-dev libvtk9.1-qt libwebp-dev libx265-dev
  libxerces-c-dev libxft-dev libxml2-dev libxsimd-dev openni-utils pydocstyle
  pyflakes3 python-matplotlib-data python3-appdirs python3-beniget
  python3-brotli python3-cycler python3-decorator python3-flake8
  python3-fonttools python3-fs python3-gast python3-kiwisolver python3-lz4
  python3-matplotlib python3-mccabe python3-mpi4py python3-mpmath python3-ply
  python3-psutil python3-pycodestyle python3-pydocstyle python3-pyflakes
  python3-pythran python3-scipy python3-snowballstemmer python3-sympy
  python3-ufolib2 python3-unicodedata2 python3-vtk9 qdoc-qt5 qhelpgenerator-qt5
  qt5-assistant qtattributionsscanner-qt5 qtdeclarative5-dev
  qtdeclarative5-dev-tools qttools5-dev qttools5-dev-tools qttools5-private-dev
  ros-humble-action-tutorials-cpp ros-humble-action-tutorials-interfaces
  ros-humble-action-tutorials-py ros-humble-ament-cmake-auto
  ros-humble-ament-cmake-copyright ros-humble-ament-cmake-flake8
  ros-humble-ament-cmake-lint-cmake ros-humble-ament-cmake-pep257
  ros-humble-ament-cmake-xmllint ros-humble-ament-flake8
  ros-humble-ament-lint-auto ros-humble-ament-lint-cmake
  ros-humble-ament-lint-common ros-humble-ament-pep257 ros-humble-ament-xmllint
  ros-humble-composition ros-humble-demo-nodes-cpp
  ros-humble-demo-nodes-cpp-native ros-humble-demo-nodes-py
  ros-humble-depthimage-to-laserscan ros-humble-dummy-map-server
  ros-humble-dummy-robot-bringup ros-humble-dummy-sensors
  ros-humble-example-interfaces ros-humble-examples-rclcpp-minimal-action-client
  ros-humble-examples-rclcpp-minimal-action-server
  ros-humble-examples-rclcpp-minimal-client
  ros-humble-examples-rclcpp-minimal-composition
  ros-humble-examples-rclcpp-minimal-publisher
  ros-humble-examples-rclcpp-minimal-service
  ros-humble-examples-rclcpp-minimal-subscriber
  ros-humble-examples-rclcpp-minimal-timer
  ros-humble-examples-rclcpp-multithreaded-executor
  ros-humble-examples-rclpy-executors
  ros-humble-examples-rclpy-minimal-action-client
  ros-humble-examples-rclpy-minimal-action-server
  ros-humble-examples-rclpy-minimal-client
  ros-humble-examples-rclpy-minimal-publisher
  ros-humble-examples-rclpy-minimal-service
  ros-humble-examples-rclpy-minimal-subscriber ros-humble-geometry2
  ros-humble-image-geometry ros-humble-image-tools ros-humble-intra-process-demo
  ros-humble-keyboard-handler ros-humble-lifecycle ros-humble-logging-demo
  ros-humble-pcl-conversions ros-humble-pcl-msgs ros-humble-pendulum-control
  ros-humble-pendulum-msgs ros-humble-qt-gui-cpp ros-humble-qt-gui-py-common
  ros-humble-quality-of-service-demo-cpp ros-humble-quality-of-service-demo-py
  ros-humble-ros-environment ros-humble-ros2action ros-humble-ros2bag
  ros-humble-ros2component ros-humble-ros2interface ros-humble-ros2launch
  ros-humble-ros2lifecycle ros-humble-ros2multicast ros-humble-ros2topic
  ros-humble-rosbag2 ros-humble-rosbag2-compression
  ros-humble-rosbag2-compression-zstd ros-humble-rosbag2-cpp
  ros-humble-rosbag2-interfaces ros-humble-rosbag2-py ros-humble-rosbag2-storage
  ros-humble-rosbag2-storage-default-plugins ros-humble-rosbag2-transport
  ros-humble-rosidl-default-generators ros-humble-rqt-action ros-humble-rqt-bag
  ros-humble-rqt-bag-plugins ros-humble-rqt-common-plugins ros-humble-rqt-console
  ros-humble-rqt-gui-cpp ros-humble-rqt-image-view ros-humble-rqt-msg
  ros-humble-rqt-plot ros-humble-rqt-publisher ros-humble-rqt-py-common
  ros-humble-rqt-py-console ros-humble-rqt-reconfigure
  ros-humble-rqt-service-caller ros-humble-rqt-shell ros-humble-rqt-srv
  ros-humble-rqt-topic ros-humble-rttest ros-humble-shared-queues-vendor
  ros-humble-sros2 ros-humble-sros2-cmake ros-humble-teleop-twist-joy
  ros-humble-teleop-twist-keyboard ros-humble-tf2-bullet
  ros-humble-tf2-sensor-msgs ros-humble-tf2-tools ros-humble-tlsf
  ros-humble-tlsf-cpp ros-humble-topic-monitor ros-humble-turtlesim
  ros-humble-zstd-vendor tcl-dev tcl8.6-dev tk-dev tk8.6-dev unicode-data vtk9
Use 'sudo apt autoremove' to remove them.
The following packages will be upgraded:
  ros-humble-hardware-interface
1 upgraded, 0 newly installed, 0 to remove and 305 not upgraded.
Need to get 228 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ros2/ubuntu jammy/main amd64 ros-humble-hardware-interface amd64 2.51.0-1jammy.20250617.223440 [228 kB]
Fetched 228 kB in 11s (21.7 kB/s)
(Reading database ... 322880 files and directories currently installed.)
Preparing to unpack .../ros-humble-hardware-interface_2.51.0-1jammy.20250617.223440_amd64.deb ...
Unpacking ros-humble-hardware-interface (2.51.0-1jammy.20250617.223440) over (2.50.0-1jammy.20250429.212517) ...
Setting up ros-humble-hardware-interface (2.51.0-1jammy.20250617.223440) ...

xyg@xyg-T6AD:~$ sudo apt install ros-humble-control-msgs
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
  (same package list as above)
Use 'sudo apt autoremove' to remove them.
The following packages will be upgraded:
  ros-humble-control-msgs
1 upgraded, 0 newly installed, 0 to remove and 304 not upgraded.
Need to get 441 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 https://mirrors.tuna.tsinghua.edu.cn/ros2/ubuntu jammy/main amd64 ros-humble-control-msgs amd64 4.8.0-1jammy.20250617.205352 [441 kB]
Fetched 441 kB in 1s (634 kB/s)
(Reading database ... 322880 files and directories currently installed.)
Preparing to unpack .../ros-humble-control-msgs_4.8.0-1jammy.20250617.205352_amd64.deb ...
Unpacking ros-humble-control-msgs (4.8.0-1jammy.20250617.205352) over (4.8.0-1jammy.20250325.185909) ...
Setting up ros-humble-control-msgs (4.8.0-1jammy.20250617.205352) ...
Processing triggers for libc-bin (2.35-0ubuntu3.10) ...

xyg@xyg-T6AD:~$ sudo apt install ros-humble-control-toolbox
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ros-humble-control-toolbox is already the newest version (3.6.1-1jammy.20250617.230904).
ros-humble-control-toolbox set to manually installed.
The following packages were automatically installed and are no longer required:
  (same package list as above)
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 304 not upgraded.

xyg@xyg-T6AD:~$ sudo apt install ros-humble-ros2-control
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
ros-humble-ros2-control is already the newest version (2.51.0-1jammy.20250617.234359).
The following packages were automatically installed and are no longer required:
  (same package list as above)
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 304 not upgraded.

xyg@xyg-T6AD:~$ colcon build --cmake-clean-cache --packages-select pca9685_hardware
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.10/distutils/core.py", line 215, in run_setup
    exec(f.read(), g)
  File "<string>", line 8, in <module>
AttributeError: 'NoneType' object has no attribute 'split'
[2.803s] ERROR:colcon.colcon_core.package_identification:Exception in package identification extension 'python_setup_py' in 'rpi_kernel/linux-rpi-6.12.y/tools/perf/util': Command '['/usr/bin/python3', '-c', 'import sys;from contextlib import suppress;exec("with suppress(ImportError): from setuptools.extern.packaging.specifiers import SpecifierSet");exec("with suppress(ImportError): from packaging.specifiers import SpecifierSet");from distutils.core import run_setup;dist = run_setup( \'setup.py\', script_args=(\'--dry-run\',), stop_after=\'config\');skip_keys = (\'cmdclass\', \'distclass\', \'ext_modules\', \'metadata\');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith(\'_\') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data[\'metadata\'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in (\'license_files\', \'provides_extras\')};sys.stdout.buffer.write(repr(data).encode(\'utf-8\'))']' returned non-zero exit status 1.
Traceback (most recent call last): File "/usr/lib/python3/dist-packages/colcon_core/package_identification/__init__.py", line 144, in _identify retval = extension.identify(_reused_descriptor_instance) File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 48, in identify config = get_setup_information(setup_py) File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 249, in get_setup_information _setup_information_cache[hashable_env] = _get_setup_information( File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 296, in _get_setup_information result = subprocess.run( File "/usr/lib/python3.10/subprocess.py", line 526, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/usr/bin/python3', '-c', 'import sys;from contextlib import suppress;exec("with suppress(ImportError): from setuptools.extern.packaging.specifiers import SpecifierSet");exec("with suppress(ImportError): from packaging.specifiers import SpecifierSet");from distutils.core import run_setup;dist = run_setup( \'setup.py\', script_args=(\'--dry-run\',), stop_after=\'config\');skip_keys = (\'cmdclass\', \'distclass\', \'ext_modules\', \'metadata\');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith(\'_\') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data[\'metadata\'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in (\'license_files\', \'provides_extras\')};sys.stdout.buffer.write(repr(data).encode(\'utf-8\'))']' returned non-zero exit status 1. WARNING: Package name "yahboomcar_KCFTracker" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, underscores, and dashes. 
[3.734s] WARNING:colcon.colcon_core.prefix_path.colcon:The path '/home/xyg/ros2_ws/install' in the environment variable COLCON_PREFIX_PATH doesn't exist [3.734s] WARNING:colcon.colcon_ros.prefix_path.ament:The path '/home/xyg/ros2_ws/install' in the environment variable AMENT_PREFIX_PATH doesn't exist [3.734s] WARNING:colcon.colcon_ros.prefix_path.catkin:The path '/home/xyg/ros2_ws/install' in the environment variable CMAKE_PREFIX_PATH doesn't exist [3.759s] ERROR:colcon:colcon build: Duplicate package names not supported: - pca9685_hardware: - ros2_ws/src/pca9685_hardware - ros_env_backup/ros2_ws/src/pca9685_hardware - 下载/ros2_ws/src/pca9685_hardware xyg@xyg-T6AD:~$ colcon build --symlink-install --packages-select pca9685_hardware \ --cmake-args -DCMAKE_BUILD_TYPE=Release Traceback (most recent call last): File "<string>", line 1, in <module> File "/usr/lib/python3.10/distutils/core.py", line 215, in run_setup exec(f.read(), g) File "<string>", line 8, in <module> AttributeError: 'NoneType' object has no attribute 'split' [2.007s] ERROR:colcon.colcon_core.package_identification:Exception in package identification extension 'python_setup_py' in 'rpi_kernel/linux-rpi-6.12.y/tools/perf/util': Command '['/usr/bin/python3', '-c', 'import sys;from contextlib import suppress;exec("with suppress(ImportError): from setuptools.extern.packaging.specifiers import SpecifierSet");exec("with suppress(ImportError): from packaging.specifiers import SpecifierSet");from distutils.core import run_setup;dist = run_setup( \'setup.py\', script_args=(\'--dry-run\',), stop_after=\'config\');skip_keys = (\'cmdclass\', \'distclass\', \'ext_modules\', \'metadata\');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith(\'_\') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data[\'metadata\'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in (\'license_files\', \'provides_extras\')};sys.stdout.buffer.write(repr(data).encode(\'utf-8\'))']' returned non-zero exit status 1. 
Traceback (most recent call last): File "/usr/lib/python3/dist-packages/colcon_core/package_identification/__init__.py", line 144, in _identify retval = extension.identify(_reused_descriptor_instance) File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 48, in identify config = get_setup_information(setup_py) File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 249, in get_setup_information _setup_information_cache[hashable_env] = _get_setup_information( File "/usr/lib/python3/dist-packages/colcon_python_setup_py/package_identification/python_setup_py.py", line 296, in _get_setup_information result = subprocess.run( File "/usr/lib/python3.10/subprocess.py", line 526, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['/usr/bin/python3', '-c', 'import sys;from contextlib import suppress;exec("with suppress(ImportError): from setuptools.extern.packaging.specifiers import SpecifierSet");exec("with suppress(ImportError): from packaging.specifiers import SpecifierSet");from distutils.core import run_setup;dist = run_setup( \'setup.py\', script_args=(\'--dry-run\',), stop_after=\'config\');skip_keys = (\'cmdclass\', \'distclass\', \'ext_modules\', \'metadata\');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith(\'_\') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data[\'metadata\'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in (\'license_files\', \'provides_extras\')};sys.stdout.buffer.write(repr(data).encode(\'utf-8\'))']' returned non-zero exit status 1. WARNING: Package name "yahboomcar_KCFTracker" does not follow the naming conventions. It should start with a lower case letter and only contain lower case letters, digits, underscores, and dashes. [2.822s] WARNING:colcon.colcon_core.prefix_path.colcon:The path '/home/xyg/ros2_ws/install' in the environment variable COLCON_PREFIX_PATH doesn't exist [2.822s] WARNING:colcon.colcon_ros.prefix_path.ament:The path '/home/xyg/ros2_ws/install' in the environment variable AMENT_PREFIX_PATH doesn't exist [2.822s] WARNING:colcon.colcon_ros.prefix_path.catkin:The path '/home/xyg/ros2_ws/install' in the environment variable CMAKE_PREFIX_PATH doesn't exist [2.846s] ERROR:colcon:colcon build: Duplicate package names not supported: - pca9685_hardware: - ros2_ws/src/pca9685_hardware - ros_env_backup/ros2_ws/src/pca9685_hardware - 下载/ros2_ws/src/pca9685_hardware xyg@xyg-T6AD:~$ dpkg -L ros-humble-hardware-interface-types | grep cmake dpkg-query: 软件包 ros-humble-hardware-interface-types 没有被安装 通过 dpkg --contents (= dpkg-deb --contents) 来列出档案文件清单。
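
Two separate problems are visible in this log, both triggered by running `colcon build` from the home directory, which makes colcon scan every directory beneath it for packages: the `python_setup_py` package-identification extension crashes on the Raspberry Pi kernel tree (`rpi_kernel/linux-rpi-6.12.y/tools/perf/util` ships a `setup.py` that only works inside a kernel build, hence the `'NoneType' object has no attribute 'split'` error), and the build then aborts because three copies of `pca9685_hardware` are discovered. Below is a minimal sketch of one way to clear both, assuming the three duplicate paths reported by colcon and that only the copy under `~/ros2_ws` should be built; an empty `COLCON_IGNORE` marker file makes colcon skip a directory tree during discovery.

```bash
# Sketch under the assumptions above -- adjust the paths to your actual layout.
# An empty COLCON_IGNORE file tells colcon to skip that directory when
# discovering packages.
touch ~/rpi_kernel/COLCON_IGNORE                                   # stop scanning the kernel sources
touch ~/ros_env_backup/ros2_ws/src/pca9685_hardware/COLCON_IGNORE  # backup copy: ignore
touch ~/下载/ros2_ws/src/pca9685_hardware/COLCON_IGNORE            # downloaded copy: ignore

# Build from inside the workspace so only ~/ros2_ws/src is scanned at all.
cd ~/ros2_ws
colcon build --symlink-install --packages-select pca9685_hardware \
  --cmake-args -DCMAKE_BUILD_TYPE=Release

# Recreates ~/ros2_ws/install, which silences the COLCON_PREFIX_PATH /
# AMENT_PREFIX_PATH / CMAKE_PREFIX_PATH "doesn't exist" warnings on later runs.
source install/setup.bash
```

Separately, `dpkg` reports that `ros-humble-hardware-interface-types` is not installed. If the goal was to locate the hardware-interface CMake config for ros2_control, note that the Humble binary package is named `ros-humble-hardware-interface` (`sudo apt install ros-humble-hardware-interface`); the `-types` variant may simply not exist in this distribution.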
