
Explain this code: `loss = tf.reduce_mean(tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels, num_sampled, vocabulary_size))`

Posted: 2023-04-02 10:02:10
This code is the TensorFlow call for computing the NCE (noise-contrastive estimation) loss. NCE loss is a loss function for training neural networks and is commonly used to train word-embedding models. The call takes several arguments: the NCE weights, the NCE biases, the embedded inputs, the training labels, the number of negative samples to draw, and the vocabulary size. `tf.nn.nce_loss` computes one loss value per example from these arguments, and `tf.reduce_mean` averages them into a single scalar loss.
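For context, here is a minimal sketch of how this call is typically wired up in a word2vec-style model. All sizes and tensor contents are illustrative assumptions; note that in current TensorFlow the keyword order is `weights, biases, labels, inputs, ...`, while the snippet above passes `embed` before `train_labels` in the older positional order:

```python
import tensorflow as tf

vocabulary_size = 10000  # assumed
embedding_size = 128     # assumed
num_sampled = 64         # number of negative classes sampled per example
batch_size = 32

embeddings = tf.Variable(tf.random.uniform([vocabulary_size, embedding_size], -1.0, 1.0))
nce_weights = tf.Variable(tf.random.truncated_normal([vocabulary_size, embedding_size], stddev=0.1))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

train_inputs = tf.zeros([batch_size], dtype=tf.int32)     # dummy center-word IDs
train_labels = tf.zeros([batch_size, 1], dtype=tf.int64)  # context-word IDs, shape [batch, num_true]

embed = tf.nn.embedding_lookup(embeddings, train_inputs)  # [batch, embedding_size]

# tf.nn.nce_loss returns one loss value per example; reduce_mean averages them.
loss = tf.reduce_mean(
    tf.nn.nce_loss(
        weights=nce_weights,
        biases=nce_biases,
        labels=train_labels,
        inputs=embed,
        num_sampled=num_sampled,
        num_classes=vocabulary_size,
    )
)
```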
Related questions

Explain the following code:

```python
def forward(self, input_ids, attention_mask, token_type_ids,
            word_ids=None, word_mask=None, label=None, label1=None):
    batch_size = input_ids.shape[0]
    outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
    out = outputs[1]  # [CLS] [batch_size, embedding_size]
    if self.args.is_lstm or self.args.is_gru:
        sequence_output = outputs[0]  # [batch_size, max_sen_len, embedding_size]
        hiddens = self.rnn(sequence_output, attention_mask)
        # output = self.cnn(input_embed)  # (16, 768)
        out = torch.mean(hiddens, dim=1)
    word_emb = self.embedding(word_ids)
    cnn_out = self.cnn(word_emb)
    cnn_out = self.cnn_fc(cnn_out)
    out1 = torch.cat([out.unsqueeze(dim=1), cnn_out.unsqueeze(dim=1)], dim=1)
    out_gate = self.gate_fc(out1)
    out2 = torch.matmul(out1.transpose(1, 2), out_gate)
    out2 = out2.squeeze(dim=-1)
    # out = outputs[0]
    # if self.label_num == 35:
    #     out1 = self.fc3(outputs[1])
    #     # out = self.rnn(sequence_output, attention_mask)
    #     # out = torch.mean(out, dim=1)
    #     # out = self.cnn(out)
    #     # out2 = self.fc4(out)
    #     # out = torch.cat([out1, out2], dim=1)
    #     # out = out1 + out2
    #     # out_8 = self.fc_8_mlp(out)
    #     # out_8 = self.fc_8_mlp1(out_8)
    #     # out_35 = self.fc_35_mlp(out)
    #     # out_35 = self.fc_35_mlp1(out_35)
    #     # logits1 = self.fc1(out_8)
    #     # logits0 = self.fc2(out)
    # if self.label_num == 8:
    #     out = self.cnn(out)
    #     logits0 = self.fc(out)
    #     logits0 = F.log_softmax(logits0, dim=-1)
    logits0 = self.fc(out2)
    logits = logits0
    if label is not None:
        if self.label_num == 1:
            logits = logits.squeeze(-1)
            loss = self.loss_fct_bce(logits, label)
        else:
            # loss = self.loss_fct_cros(logits.view(-1, self.label_num), label.view(-1))
            loss = self.loss_fct_bce(logits, label)
        # label1 = label1.unsqueeze(dim=-1)
        # label1 = torch.zeros(batch_size, 8).to(label1.device).scatter_(1, label1, 1)
        # loss1 = self.loss_fct_bce(logits1, label1)
        # loss1 = self.loss_fct_cros(logits1, label1)
        # nce_loss = self.info_nce_loss(out, label)
        # if nce_loss:
        #     loss = loss + nce_loss
        #     print("nce loss", nce_loss)
        # loss = loss + loss1
        outputs = (loss,) + (logits,)
    else:
        outputs = logits
    return outputs  # (loss), logits
```

This code is a custom forward-propagation (`forward`) function for a deep-learning model that mixes BERT, an RNN (LSTM/GRU), a CNN, and a gating mechanism. It is mainly used for text classification or multi-label prediction. The key parts, explained step by step:

### 1. BERT encoding

```python
outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
```

This runs the pretrained BERT model over the input sentence; the return value includes the last layer's hidden states along with other auxiliary outputs.

- **`input_ids`**: the sequence of input token IDs;
- **`attention_mask`**: marks which positions hold real (non-padding) content;
- **`token_type_ids`**: segment identifiers distinguishing the different sentences.

### 2. Extracting the [CLS] vector

```python
out = outputs[1]  # [CLS] [batch_size, embedding_size]
```

This takes the pooled output associated with the `[CLS]` token from the BERT outputs as a fixed-length sentence- or document-level embedding.

### 3. RNN layer

If the configuration allows, an RNN further processes the sequence features:

```python
if self.args.is_lstm or self.args.is_gru:
    sequence_output = outputs[0]  # hidden state at each time step
    hiddens = self.rnn(sequence_output, attention_mask)  # the configured RNN type
    out = torch.mean(hiddens, dim=1)  # mean-pool over all time steps into a fixed-size vector
```

### 4. Word-level features

The separately provided word IDs are looked up in an embedding table and passed through a 1-D CNN to extract local features:

```python
word_emb = self.embedding(word_ids)
cnn_out = self.cnn(word_emb)
cnn_out = self.cnn_fc(cnn_out)
```

### 5. Feature fusion with a gating mechanism

The global `[CLS]`-derived feature and the local `cnn_out` feature are stacked into a richer structure, and a gating unit weights their contributions (a runnable sketch of this step follows this explanation):

```python
out1 = torch.cat([out.unsqueeze(dim=1), cnn_out.unsqueeze(dim=1)], dim=1)
# compute the gate weights
out_gate = self.gate_fc(out1)
# weighted sum
out2 = torch.matmul(out1.transpose(1, 2), out_gate).squeeze(dim=-1)
```

### 6. Final classification head

A fully connected layer maps the fused feature to the task outputs, e.g. binary or multi-class classification:

```python
logits0 = self.fc(out2)
logits = logits0
```

### 7. Loss computation

If gold labels are provided, the logits are reshaped as needed and compared with the labels to obtain the loss:

```python
if label is not None:
    if self.label_num == 1:
        logits = logits.squeeze(-1)
        loss = self.loss_fct_bce(logits, label)
    else:
        loss = self.loss_fct_bce(logits, label)
    outputs = (loss,) + (logits,)
else:
    outputs = logits
```

This forward pass shows how multiple techniques can be combined into one NLP system, fusing different feature sources and their interactions for a specific application.

---
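To make step 5 concrete, here is a minimal, self-contained sketch of just the gated-fusion step. The layer names mirror the snippet above; the hidden size of 768 is an assumption, and note that, as in the original, no softmax is applied to the gate weights:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse two feature vectors with learned scalar gates (sketch)."""
    def __init__(self, hidden_size=768):
        super().__init__()
        self.gate_fc = nn.Linear(hidden_size, 1)  # one scalar gate per branch

    def forward(self, cls_feat, cnn_feat):
        # Stack the two branch features: [batch, 2, hidden]
        out1 = torch.cat([cls_feat.unsqueeze(1), cnn_feat.unsqueeze(1)], dim=1)
        out_gate = self.gate_fc(out1)                        # [batch, 2, 1]
        out2 = torch.matmul(out1.transpose(1, 2), out_gate)  # weighted sum: [batch, hidden, 1]
        return out2.squeeze(-1)                              # [batch, hidden]

fused = GatedFusion()(torch.randn(4, 768), torch.randn(4, 768))
print(fused.shape)  # torch.Size([4, 768])
```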

The ContrastiveTrainer code is as follows; based on the analysis above, please return the modified result directly:

```python
# trainer/contrastive_trainer.py
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from transformers import Trainer, TrainingArguments
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.utils import PaddingStrategy
from typing import Any, Dict, List, Optional, Tuple, Union
import logging
import numpy as np
import os
from tqdm import tqdm
from dataclasses import dataclass


class ContrastiveTrainer(Trainer):
    def __init__(self, *args, contrastive_config=None, **kwargs):
        # Explicitly call the parent initializer
        super().__init__(*args, **kwargs)
        # Store the custom config
        self.contrastive_config = contrastive_config or {}
        # Keep the parent preprocessing method available
        self._prepare_for_training = super()._prepare_for_training


# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@dataclass
class ContrastiveDataCollator:
    """Contrastive-learning data collator; handles the positive/negative sample pairs."""
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    return_tensors: str = "pt"

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        """Process a batch and produce model inputs."""
        # Split out the three parts of each triplet
        anchor_features = [{"input_ids": f["anchor_input_ids"]} for f in features]
        positive_features = [{"input_ids": f["positive_input_ids"]} for f in features]
        negative_features = [{"input_ids": f["negative_input_ids"]} for f in features]
        # Pad each part separately
        batch_anchor = self.tokenizer.pad(
            anchor_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_positive = self.tokenizer.pad(
            positive_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_negative = self.tokenizer.pad(
            negative_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )

        # Build attention masks
        def create_attention_mask(input_ids):
            mask = torch.ones_like(input_ids)
            mask[input_ids == self.tokenizer.pad_token_id] = 0
            return mask

        # Return one dict containing all parts
        return {
            "anchor_input_ids": batch_anchor["input_ids"],
            "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]),
            "positive_input_ids": batch_positive["input_ids"],
            "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]),
            "negative_input_ids": batch_negative["input_ids"],
            "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]),
        }


class ContrastiveTrainer(Trainer):
    """Contrastive-learning trainer; implements the contrastive training logic."""

    def __init__(
        self,
        model: nn.Module = None,
        args: TrainingArguments = None,
        data_collator: Optional[ContrastiveDataCollator] = None,
        train_dataset: Optional[Dataset] = None,
        eval_dataset: Optional[Dataset] = None,
        tokenizer: Optional[PreTrainedTokenizerBase] = None,
        model_init: Optional[callable] = None,
        compute_metrics: Optional[callable] = None,
        callbacks: Optional[List[Any]] = None,
        optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
        preprocess_logits_for_metrics: Optional[callable] = None,
        contrastive_config: Optional[Dict] = None
    ):
        super().__init__(
            model=model,
            args=args,
            data_collator=data_collator,
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
            tokenizer=tokenizer,
            model_init=model_init,
            compute_metrics=compute_metrics,
            callbacks=callbacks,
            optimizers=optimizers,
            preprocess_logits_for_metrics=preprocess_logits_for_metrics
        )
        # Contrastive-learning config
        self.contrastive_config = contrastive_config or {}
        self.temperature = self.contrastive_config.get("temperature", 0.07)
        self.margin = self.contrastive_config.get("margin", 0.3)
        self.contrastive_weight = self.contrastive_config.get("weight", 0.8)
        self.repr_layer = self.contrastive_config.get("repr_layer", -1)  # default: last hidden layer
        # Loss function
        self.cross_entropy = nn.CrossEntropyLoss()

    def compute_contrastive_loss(self, anchor_emb, pos_emb, neg_emb):
        """Compute the contrastive loss (InfoNCE + triplet-margin combination)."""
        # Cosine similarities
        pos_sim = F.cosine_similarity(anchor_emb, pos_emb)
        neg_sim = F.cosine_similarity(anchor_emb, neg_emb)
        # InfoNCE loss
        numerator = torch.exp(pos_sim / self.temperature)
        denominator = numerator + torch.exp(neg_sim / self.temperature)
        info_nce_loss = -torch.log(numerator / denominator).mean()
        # Triplet loss
        triplet_loss = F.triplet_margin_loss(
            anchor_emb, pos_emb, neg_emb,
            margin=self.margin
        )
        # Weighted combination
        return info_nce_loss + triplet_loss

    def get_sequence_representation(self, outputs, attention_mask):
        """Get the sequence representation (hidden state of the last token)."""
        # Hidden states of the requested layer
        hidden_states = outputs.hidden_states[self.repr_layer]
        # Last non-padding token of each sequence
        # Note: in attention_mask, 1 marks valid tokens and 0 marks padding
        last_token_indices = attention_mask.sum(dim=1) - 1
        # Gather the hidden state of the last token of each sequence
        batch_size = hidden_states.size(0)
        sequence_representations = hidden_states[
            torch.arange(batch_size), last_token_indices
        ]
        return sequence_representations

    def compute_loss(self, model, inputs, return_outputs=False):
        """Compute the total loss (language-modeling loss + contrastive loss)."""
        # Unpack the inputs
        anchor_input_ids = inputs.get("anchor_input_ids")
        anchor_attention_mask = inputs.get("anchor_attention_mask")
        positive_input_ids = inputs.get("positive_input_ids")
        positive_attention_mask = inputs.get("positive_attention_mask")
        negative_input_ids = inputs.get("negative_input_ids")
        negative_attention_mask = inputs.get("negative_attention_mask")

        # Forward pass to get hidden states
        def get_embeddings(input_ids, attention_mask):
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
                output_hidden_states=True,
                return_dict=True
            )
            return self.get_sequence_representation(outputs, attention_mask)

        # Embeddings for the triplet
        anchor_emb = get_embeddings(anchor_input_ids, anchor_attention_mask)
        pos_emb = get_embeddings(positive_input_ids, positive_attention_mask)
        neg_emb = get_embeddings(negative_input_ids, negative_attention_mask)

        # Contrastive loss
        cl_loss = self.compute_contrastive_loss(anchor_emb, pos_emb, neg_emb)
        cl_loss = cl_loss * self.contrastive_weight

        # Language-modeling loss (on the positive reply only)
        lm_labels = positive_input_ids.clone()
        lm_labels[lm_labels == self.tokenizer.pad_token_id] = -100  # ignore padding tokens
        lm_outputs = model(
            input_ids=positive_input_ids,
            attention_mask=positive_attention_mask,
            labels=lm_labels
        )
        lm_loss = lm_outputs.loss

        # Total loss = LM loss + contrastive loss
        total_loss = lm_loss + cl_loss

        # Optionally return outputs along with the loss
        if return_outputs:
            outputs = {
                "lm_loss": lm_loss,
                "cl_loss": cl_loss,
                "total_loss": total_loss,
                "logits": lm_outputs.logits
            }
            return total_loss, outputs
        return total_loss

    def training_step(self, model, inputs):
        """Custom training step."""
        model.train()
        inputs = self._prepare_inputs(inputs)
        # Forward pass
        with self.compute_loss_context_manager():
            loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
        # Scale by the number of gradient-accumulation steps
        if self.args.gradient_accumulation_steps > 1:
            loss = loss / self.args.gradient_accumulation_steps
        # Backward pass
        loss.backward()
        # Logging
        self.log({
            "train/lm_loss": outputs["lm_loss"].item(),
            "train/cl_loss": outputs["cl_loss"].item(),
            "train/loss": loss.item(),
            "train/lr": self.lr_scheduler.get_last_lr()[0]
        })
        return loss.detach()

    def log(self, logs: Dict[str, float]):
        """Custom logging."""
        if self.state.epoch is not None:
            logs["epoch"] = round(self.state.epoch, 2)
        # Log every N steps
        if self.state.global_step % self.args.logging_steps == 0:
            logger.info(f"Step {self.state.global_step}: {logs}")

    def train(self, **kwargs):
        """Custom training loop."""
        # Prepare for training
        self._prepare_for_training()
        # Training loop
        for epoch in range(int(self.args.num_train_epochs)):
            logger.info(f"Starting epoch {epoch + 1}/{self.args.num_train_epochs}")
            # Build the dataloader
            train_dataloader = self.get_train_dataloader()
            # Train one epoch
            for step, inputs in enumerate(tqdm(train_dataloader, desc=f"Epoch {epoch+1}")):
                # Training step
                loss = self.training_step(self.model, inputs)
                # Gradient update
                if (step + 1) % self.args.gradient_accumulation_steps == 0:
                    self.optimizer.step()
                    self.lr_scheduler.step()
                    self.optimizer.zero_grad()
                    self.state.global_step += 1
                    # Checkpointing
                    if self.args.save_strategy == "steps" and self.state.global_step % self.args.save_steps == 0:
                        self._save_model(self.args.output_dir)
            # Save at the end of each epoch
            if self.args.save_strategy == "epoch":
                self._save_model(self.args.output_dir)

    def _save_model(self, output_dir: str):
        """Save the model adapter."""
        save_path = os.path.join(output_dir, f"checkpoint-{self.state.global_step}")
        os.makedirs(save_path, exist_ok=True)
        # Save the adapter weights
        self.model.save_pretrained(save_path)
        self.tokenizer.save_pretrained(save_path)
        logger.info(f"Model saved to {save_path}")

    def evaluate(self, **kwargs):
        """Custom evaluation."""
        self.model.eval()
        eval_dataloader = self.get_eval_dataloader()
        total_loss = 0.0
        total_lm_loss = 0.0
        total_cl_loss = 0.0
        num_batches = 0
        with torch.no_grad():
            for inputs in tqdm(eval_dataloader, desc="Evaluating"):
                inputs = self._prepare_inputs(inputs)
                loss, outputs = self.compute_loss(self.model, inputs, return_outputs=True)
                total_loss += loss.item()
                total_lm_loss += outputs["lm_loss"].item()
                total_cl_loss += outputs["cl_loss"].item()
                num_batches += 1
        avg_loss = total_loss / num_batches
        avg_lm_loss = total_lm_loss / num_batches
        avg_cl_loss = total_cl_loss / num_batches
        metrics = {
            "eval_loss": avg_loss,
            "eval_lm_loss": avg_lm_loss,
            "eval_cl_loss": avg_cl_loss
        }
        logger.info(f"Evaluation results: {metrics}")
        return metrics
```

To fix the `TypeError: Trainer.__init__() got an unexpected keyword argument 'contrastive_config'` raised when `ContrastiveTrainer` is initialized, the custom trainer's constructor must handle the argument that the parent class does not accept. The complete fix follows.

### Root cause

When a `ContrastiveTrainer` instance is created, the `contrastive_config` argument is forwarded straight to the parent `Trainer` constructor. Hugging Face's `Trainer` class does not accept this custom argument[^1], which causes the type error.

### Fix

Modify the `__init__` method of the `ContrastiveTrainer` class:

```python
class ContrastiveTrainer(Trainer):
    def __init__(self, *args, contrastive_config=None, **kwargs):
        # 1. Extract and store the custom argument
        self.contrastive_config = contrastive_config

        # 2. Call the parent initializer; the custom argument is safely excluded
        super().__init__(*args, **kwargs)  # kwargs no longer contains contrastive_config

        # 3. Optional: initialization that depends on the config
        if self.contrastive_config:
            print(f"Loaded contrastive config: {self.contrastive_config}")
```

### Key changes

1. **Declare the custom parameter explicitly**
   `contrastive_config=None` declares the parameter explicitly, so it is not swept into `**kwargs`
2. **Forward only the remaining, supported arguments**
   After the change, `**kwargs` contains only parameters the parent supports (such as `model`, `args`), avoiding the type error
3. **Config validation (optional)**
   Add validation logic to ensure the config is complete:
   ```python
   required_keys = ["temperature", "margin", "weight", "repr_layer"]
   if any(key not in self.contrastive_config for key in required_keys):
       raise ValueError("Missing required keys in contrastive_config")
   ```

### Full usage example

```python
# Initialize the trainer (with the corrected argument passing)
trainer = ContrastiveTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    contrastive_config=contrastive_config  # passed as a standalone argument
)
```

### What happens underneath

Hugging Face's `Trainer` effectively whitelists its `__init__` parameters[^2]: any undeclared keyword argument triggers a `TypeError`. Extracting the custom parameter explicitly:
1. keeps the parent's parameter space clean
2. preserves extensibility and compatibility
3. supports dynamic config reloading

### Caveats

- **Default value**: always give `contrastive_config` a default (such as `None`) so initialization works even without a config
- **Version compatibility**: Hugging Face Transformers upgrades may change the `Trainer` parameter list; check periodically[^3]
- **Error handling**: add a type check to ensure `contrastive_config` is a dict:
  ```python
  if not isinstance(contrastive_config, dict):
      raise TypeError("contrastive_config must be a dictionary")
  ```

### Extension

To use this config during training, override the `compute_loss` method:

```python
def compute_loss(self, model, inputs, return_outputs=False):
    # unpack anchor/positive/negative from inputs
    loss = ...  # compute the contrastive loss using self.contrastive_config
    return loss
```

[^1]: Hugging Face Trainer parameter restrictions
[^2]: Python keyword-argument handling
[^3]: Transformers documentation, version-compatibility notes
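An equivalent pattern is to pop the key out of `kwargs` before the parent sees it; a minimal sketch, with the same effect as the explicit keyword parameter above:

```python
from transformers import Trainer

class ContrastiveTrainer(Trainer):
    def __init__(self, *args, **kwargs):
        # Remove the custom key so the parent constructor never receives it
        self.contrastive_config = kwargs.pop("contrastive_config", None) or {}
        super().__init__(*args, **kwargs)
```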

Related recommendations

Modify the following two pieces of code separately so that they work together seamlessly:

Code 1:

```python
# Usage example
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer
from contrastive_trainer import ContrastiveTrainer, ContrastiveDataCollator
from datasets import load_dataset

class ContrastiveTrainer(Trainer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)  # make sure the parent is initialized
        self._prepare_for_training()  # explicit call

    def _prepare_for_training(self):
        # Custom logic, or simply reuse the parent method
        super()._prepare_for_training()  # call the parent implementation

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("model/Qwen/Qwen1.5-1.8B")
tokenizer = AutoTokenizer.from_pretrained("model/Qwen/Qwen1.5-1.8B")
tokenizer.pad_token = tokenizer.eos_token

train_dataset = load_dataset('json', data_files='data/processed/train_style_triplets.json')['train']
val_dataset = load_dataset('json', data_files='data/processed/val_style_triplets.json')['train']  # validation set

# Training arguments
training_args = TrainingArguments(
    output_dir="./model/lora_adapter",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    logging_steps=50,
    save_steps=500,
    fp16=True,
    gradient_accumulation_steps=4,
)

# Contrastive-learning config
contrastive_config = {
    "temperature": 0.07,
    "margin": 0.3,
    "weight": 0.8,
    "repr_layer": -1  # use the last hidden layer
}

# Initialize the data collator
data_collator = ContrastiveDataCollator(
    tokenizer=tokenizer,
    max_length=512,
    padding="max_length"
)

# Initialize the trainer (corrected argument passing)
trainer = ContrastiveTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    contrastive_config=contrastive_config  # passed as a standalone argument
)

# Start training
trainer.train()

# Evaluate the model
eval_results = trainer.evaluate()
```

Code 2:

```python
# trainer/contrastive_trainer.py
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
from transformers import Trainer, TrainingArguments
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.utils import PaddingStrategy
from typing import Any, Dict, List, Optional, Tuple, Union
import logging
import numpy as np
import os
from tqdm import tqdm
from dataclasses import dataclass


class ContrastiveTrainer(Trainer):
    def __init__(self, *args, contrastive_config=None, **kwargs):
        # 1. Explicitly declare and extract the custom argument
        self.contrastive_config = contrastive_config or {}
        # 2. Validate the config
        required_keys = ["temperature", "margin", "weight", "repr_layer"]
        if any(key not in self.contrastive_config for key in required_keys):
            raise ValueError("Missing required keys in contrastive_config")
        # 3. Call the parent initializer with the custom argument safely removed
        super().__init__(*args, **kwargs)  # kwargs no longer contains contrastive_config
        # 4. Initialize the config-dependent attributes
        self.temperature = self.contrastive_config["temperature"]
        self.margin = self.contrastive_config["margin"]
        self.contrastive_weight = self.contrastive_config["weight"]
        self.repr_layer = self.contrastive_config["repr_layer"]
        # 5. Initialize the loss function
        self.cross_entropy = nn.CrossEntropyLoss()


# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@dataclass
class ContrastiveDataCollator:
    """Contrastive-learning data collator; handles the positive/negative sample pairs."""
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    return_tensors: str = "pt"

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        """Process a batch and produce model inputs."""
        # Split out the three parts of each triplet
        anchor_features = [{"input_ids": f["anchor_input_ids"]} for f in features]
        positive_features = [{"input_ids": f["positive_input_ids"]} for f in features]
        negative_features = [{"input_ids": f["negative_input_ids"]} for f in features]
        # Pad each part separately
        batch_anchor = self.tokenizer.pad(
            anchor_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_positive = self.tokenizer.pad(
            positive_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_negative = self.tokenizer.pad(
            negative_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )

        # Build attention masks
        def create_attention_mask(input_ids):
            mask = torch.ones_like(input_ids)
            mask[input_ids == self.tokenizer.pad_token_id] = 0
            return mask

        # Return one dict containing all parts
        return {
            "anchor_input_ids": batch_anchor["input_ids"],
            "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]),
            "positive_input_ids": batch_positive["input_ids"],
            "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]),
            "negative_input_ids": batch_negative["input_ids"],
            "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]),
        }


class ContrastiveTrainer(Trainer):
    """Contrastive-learning trainer; implements the contrastive training logic."""

    def __init__(
        self,
        model: nn.Module = None,
        args: TrainingArguments = None,
        data_collator: Optional[ContrastiveDataCollator] = None,
        train_dataset: Optional[Dataset] = None,
        eval_dataset: Optional[Dataset] = None,
        tokenizer: Optional[PreTrainedTokenizerBase] = None,
        model_init: Optional[callable] = None,
        compute_metrics: Optional[callable] = None,
        callbacks: Optional[List[Any]] = None,
        optimizers: Tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR] = (None, None),
        preprocess_logits_for_metrics: Optional[callable] = None,
        contrastive_config: Optional[Dict] = None
    ):
        super().__init__(
            model=model,
            args=args,
            data_collator=data_collator,
            train_dataset=train_dataset,
            eval_dataset=eval_dataset,
            tokenizer=tokenizer,
            model_init=model_init,
            compute_metrics=compute_metrics,
            callbacks=callbacks,
            optimizers=optimizers,
            preprocess_logits_for_metrics=preprocess_logits_for_metrics
        )
        # Contrastive-learning config
        self.contrastive_config = contrastive_config or {}
        self.temperature = self.contrastive_config.get("temperature", 0.07)
        self.margin = self.contrastive_config.get("margin", 0.3)
        self.contrastive_weight = self.contrastive_config.get("weight", 0.8)
        self.repr_layer = self.contrastive_config.get("repr_layer", -1)  # default: last hidden layer
        # Loss function
        self.cross_entropy = nn.CrossEntropyLoss()

    def compute_contrastive_loss(self, anchor_emb, pos_emb, neg_emb):
        """Compute the contrastive loss (InfoNCE + triplet-margin combination)."""
        # Cosine similarities
        pos_sim = F.cosine_similarity(anchor_emb, pos_emb)
        neg_sim = F.cosine_similarity(anchor_emb, neg_emb)
        # InfoNCE loss
        numerator = torch.exp(pos_sim / self.temperature)
        denominator = numerator + torch.exp(neg_sim / self.temperature)
        info_nce_loss = -torch.log(numerator / denominator).mean()
        # Triplet loss
        triplet_loss = F.triplet_margin_loss(
            anchor_emb, pos_emb, neg_emb,
            margin=self.margin
        )
        # Weighted combination
        return info_nce_loss + triplet_loss

    def get_sequence_representation(self, outputs, attention_mask):
        """Get the sequence representation (hidden state of the last token)."""
        # Hidden states of the requested layer
        hidden_states = outputs.hidden_states[self.repr_layer]
        # Last non-padding token of each sequence
        # Note: in attention_mask, 1 marks valid tokens and 0 marks padding
        last_token_indices = attention_mask.sum(dim=1) - 1
        # Gather the hidden state of the last token of each sequence
        batch_size = hidden_states.size(0)
        sequence_representations = hidden_states[
            torch.arange(batch_size), last_token_indices
        ]
        return sequence_representations

    def compute_loss(self, model, inputs, return_outputs=False):
        """Compute the total loss (language-modeling loss + contrastive loss)."""
        # Unpack the inputs
        anchor_input_ids = inputs.get("anchor_input_ids")
        anchor_attention_mask = inputs.get("anchor_attention_mask")
        positive_input_ids = inputs.get("positive_input_ids")
        positive_attention_mask = inputs.get("positive_attention_mask")
        negative_input_ids = inputs.get("negative_input_ids")
        negative_attention_mask = inputs.get("negative_attention_mask")

        # Forward pass to get hidden states
        def get_embeddings(input_ids, attention_mask):
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
                output_hidden_states=True,
                return_dict=True
            )
            return self.get_sequence_representation(outputs, attention_mask)

        # Embeddings for the triplet
        anchor_emb = get_embeddings(anchor_input_ids, anchor_attention_mask)
        pos_emb = get_embeddings(positive_input_ids, positive_attention_mask)
        neg_emb = get_embeddings(negative_input_ids, negative_attention_mask)

        # Contrastive loss
        cl_loss = self.compute_contrastive_loss(anchor_emb, pos_emb, neg_emb)
        cl_loss = cl_loss * self.contrastive_weight

        # Language-modeling loss (on the positive reply only)
        lm_labels = positive_input_ids.clone()
        lm_labels[lm_labels == self.tokenizer.pad_token_id] = -100  # ignore padding tokens
        lm_outputs = model(
            input_ids=positive_input_ids,
            attention_mask=positive_attention_mask,
            labels=lm_labels
        )
        lm_loss = lm_outputs.loss

        # Total loss = LM loss + contrastive loss
        total_loss = lm_loss + cl_loss

        # Optionally return outputs along with the loss
        if return_outputs:
            outputs = {
                "lm_loss": lm_loss,
                "cl_loss": cl_loss,
                "total_loss": total_loss,
                "logits": lm_outputs.logits
            }
            return total_loss, outputs
        return total_loss

    def training_step(self, model, inputs):
        """Custom training step."""
        model.train()
        inputs = self._prepare_inputs(inputs)
        # Forward pass
        with self.compute_loss_context_manager():
            loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
        # Scale by the number of gradient-accumulation steps
        if self.args.gradient_accumulation_steps > 1:
            loss = loss / self.args.gradient_accumulation_steps
        # Backward pass
        loss.backward()
        # Logging
        self.log({
            "train/lm_loss": outputs["lm_loss"].item(),
            "train/cl_loss": outputs["cl_loss"].item(),
            "train/loss": loss.item(),
            "train/lr": self.lr_scheduler.get_last_lr()[0]
        })
        return loss.detach()

    def log(self, logs: Dict[str, float]):
        """Custom logging."""
        if self.state.epoch is not None:
            logs["epoch"] = round(self.state.epoch, 2)
        # Log every N steps
        if self.state.global_step % self.args.logging_steps == 0:
            logger.info(f"Step {self.state.global_step}: {logs}")

    def train(self, **kwargs):
        """Custom training loop."""
        # Prepare for training
        self._prepare_for_training()
        # Training loop
        for epoch in range(int(self.args.num_train_epochs)):
            logger.info(f"Starting epoch {epoch + 1}/{self.args.num_train_epochs}")
            # Build the dataloader
            train_dataloader = self.get_train_dataloader()
            # Train one epoch
            for step, inputs in enumerate(tqdm(train_dataloader, desc=f"Epoch {epoch+1}")):
                # Training step
                loss = self.training_step(self.model, inputs)
                # Gradient update
                if (step + 1) % self.args.gradient_accumulation_steps == 0:
                    self.optimizer.step()
                    self.lr_scheduler.step()
                    self.optimizer.zero_grad()
                    self.state.global_step += 1
                    # Checkpointing
                    if self.args.save_strategy == "steps" and self.state.global_step % self.args.save_steps == 0:
                        self._save_model(self.args.output_dir)
            # Save at the end of each epoch
            if self.args.save_strategy == "epoch":
                self._save_model(self.args.output_dir)

    def _save_model(self, output_dir: str):
        """Save the model adapter."""
        save_path = os.path.join(output_dir, f"checkpoint-{self.state.global_step}")
        os.makedirs(save_path, exist_ok=True)
        # Save the adapter weights
        self.model.save_pretrained(save_path)
        self.tokenizer.save_pretrained(save_path)
        logger.info(f"Model saved to {save_path}")

    def evaluate(self, **kwargs):
        """Custom evaluation."""
        self.model.eval()
        eval_dataloader = self.get_eval_dataloader()
        total_loss = 0.0
        total_lm_loss = 0.0
        total_cl_loss = 0.0
        num_batches = 0
        with torch.no_grad():
            for inputs in tqdm(eval_dataloader, desc="Evaluating"):
                inputs = self._prepare_inputs(inputs)
                loss, outputs = self.compute_loss(self.model, inputs, return_outputs=True)
                total_loss += loss.item()
                total_lm_loss += outputs["lm_loss"].item()
                total_cl_loss += outputs["cl_loss"].item()
                num_batches += 1
        avg_loss = total_loss / num_batches
        avg_lm_loss = total_lm_loss / num_batches
        avg_cl_loss = total_cl_loss / num_batches
        metrics = {
            "eval_loss": avg_loss,
            "eval_lm_loss": avg_lm_loss,
            "eval_cl_loss": avg_cl_loss
        }
        logger.info(f"Evaluation results: {metrics}")
        return metrics
```

Check this code for errors for me:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Trainer
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.utils import PaddingStrategy
from typing import Any, Dict, List, Optional, Tuple, Union
import logging
from dataclasses import dataclass

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

@dataclass
class ContrastiveDataCollator:
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    return_tensors: str = "pt"

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        # Split out the three parts of each triplet
        anchor_features = [{"input_ids": f["anchor_input_ids"]} for f in features]
        positive_features = [{"input_ids": f["positive_input_ids"]} for f in features]
        negative_features = [{"input_ids": f["negative_input_ids"]} for f in features]
        # Pad each part separately
        batch_anchor = self.tokenizer.pad(
            anchor_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_positive = self.tokenizer.pad(
            positive_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_negative = self.tokenizer.pad(
            negative_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )

        # Build attention masks
        def create_attention_mask(input_ids):
            return (input_ids != self.tokenizer.pad_token_id).int()

        return {
            "anchor_input_ids": batch_anchor["input_ids"],
            "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]),
            "positive_input_ids": batch_positive["input_ids"],
            "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]),
            "negative_input_ids": batch_negative["input_ids"],
            "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]),
        }

class ContrastiveTrainer(Trainer):
    def __init__(self, *args, contrastive_config=None, **kwargs):
        super().__init__(*args, **kwargs)
        if contrastive_config is None:
            contrastive_config = {}
        # Set defaults and read the config
        self.temperature = contrastive_config.get("temperature", 0.07)
        self.margin = contrastive_config.get("margin", 0.3)
        self.contrastive_weight = contrastive_config.get("weight", 0.8)
        self.repr_layer = contrastive_config.get("repr_layer", -1)
        # Validate the required model setting
        if not hasattr(self.model.config, "output_hidden_states") or not self.model.config.output_hidden_states:
            raise ValueError("模型必须设置output_hidden_states=True以获取隐藏状态")
        self.cross_entropy = nn.CrossEntropyLoss()

    def compute_contrastive_loss(self, anchor_emb, pos_emb, neg_emb):
        # Cosine similarities
        pos_sim = F.cosine_similarity(anchor_emb, pos_emb)
        neg_sim = F.cosine_similarity(anchor_emb, neg_emb)
        # InfoNCE loss
        numerator = torch.exp(pos_sim / self.temperature)
        denominator = numerator + torch.exp(neg_sim / self.temperature)
        info_nce_loss = -torch.log(numerator / (denominator + 1e-8)).mean()
        # Triplet loss
        triplet_loss = F.relu(neg_sim - pos_sim + self.margin).mean()
        # Weighted combination
        return info_nce_loss + triplet_loss

    def get_sequence_representation(self, outputs, attention_mask):
        # Hidden states of the requested layer
        hidden_states = outputs.hidden_states[self.repr_layer]
        # Last non-padding token of each sequence
        seq_lengths = attention_mask.sum(dim=1) - 1
        batch_indices = torch.arange(hidden_states.size(0))
        return hidden_states[batch_indices, seq_lengths]

    def compute_loss(self, model, inputs, return_outputs=False):
        # Unpack the inputs
        anchor_ids = inputs["anchor_input_ids"]
        anchor_mask = inputs["anchor_attention_mask"]
        positive_ids = inputs["positive_input_ids"]
        positive_mask = inputs["positive_attention_mask"]
        negative_ids = inputs["negative_input_ids"]
        negative_mask = inputs["negative_attention_mask"]

        # Forward pass to get hidden states
        def get_embeddings(input_ids, attention_mask):
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
                output_hidden_states=True,
                return_dict=True
            )
            return self.get_sequence_representation(outputs, attention_mask)

        # Embeddings for the triplet
        anchor_emb = get_embeddings(anchor_ids, anchor_mask)
        pos_emb = get_embeddings(positive_ids, positive_mask)
        neg_emb = get_embeddings(negative_ids, negative_mask)

        # Contrastive loss
        cl_loss = self.compute_contrastive_loss(anchor_emb, pos_emb, neg_emb)
        cl_loss = cl_loss * self.contrastive_weight

        # Language-modeling loss (on the positive reply only)
        lm_labels = positive_ids.clone()
        lm_labels[lm_labels == self.tokenizer.pad_token_id] = -100
        lm_outputs = model(
            input_ids=positive_ids,
            attention_mask=positive_mask,
            labels=lm_labels
        )
        lm_loss = lm_outputs.loss

        # Total loss = LM loss + contrastive loss
        total_loss = lm_loss + cl_loss

        # Log custom metrics
        self.log({"cl_loss": cl_loss.item(), "lm_loss": lm_loss.item()})

        return (total_loss, lm_outputs) if return_outputs else total_loss
```

Combine the following two pieces of code into one:

Code 1:

```python
# Usage example
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from contrastive_trainer import ContrastiveTrainer, ContrastiveDataCollator
from datasets import load_dataset

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "model/Qwen/Qwen1.5-1.8B",
    output_hidden_states=True  # add output_hidden_states
)
tokenizer = AutoTokenizer.from_pretrained("model/Qwen/Qwen1.5-1.8B")
tokenizer.pad_token = tokenizer.eos_token

train_dataset = load_dataset('json', data_files='data/processed/train_style_triplets.json')['train']
val_dataset = load_dataset('json', data_files='data/processed/val_style_triplets.json')['train']

# Training arguments
training_args = TrainingArguments(
    output_dir="./model/lora_adapter",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=2e-4,
    logging_steps=50,
    save_steps=500,
    fp16=True,
    gradient_accumulation_steps=4,
    report_to="none",  # disable default logging integrations
)

# Contrastive-learning config
contrastive_config = {
    "temperature": 0.07,
    "margin": 0.3,
    "weight": 0.8,
    "repr_layer": -1
}

# Initialize the data collator
data_collator = ContrastiveDataCollator(
    tokenizer=tokenizer,
    max_length=512,
    padding="max_length"
)

# Initialize the trainer
trainer = ContrastiveTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    tokenizer=tokenizer,  # add the tokenizer argument
    contrastive_config=contrastive_config
)

# Start training
trainer.train()

# Evaluate the model
eval_results = trainer.evaluate()
print(eval_results)
```

Code 2 is, verbatim, the same `contrastive_trainer.py` module shown in the preceding question ("Check this code for errors for me").

I am editing [python] code and ran into the following error:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
trainable params: 1,572,864 || all params: 1,838,401,536 || trainable%: 0.0856
Generating train split: 3 examples [00:00, 50.38 examples/s]
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 400, in <module>
    val_dataset = load_and_tokenize_dataset('data/processed/val_style_triplets.json', tokenizer)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 390, in load_and_tokenize_dataset
    tokenized_dataset = raw_dataset.map(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\datasets\arrow_dataset.py", line 560, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\datasets\arrow_dataset.py", line 3086, in map
    raise ValueError(
ValueError: Column to remove ['anchor', 'positive', 'negative'] not in the dataset. Current columns in the dataset: ['text']
```

Please check and fix the error. My original code is as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TrainingArguments,
    Trainer,
    PreTrainedTokenizerBase,
    BitsAndBytesConfig
)
from transformers.tokenization_utils_base import PreTrainedTokenizerBase
from transformers.utils import PaddingStrategy
from datasets import load_dataset, Dataset  # add the Dataset import
from typing import Any, Dict, List, Optional, Tuple, Union
import logging
from dataclasses import dataclass
import os
import gc
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Set up logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Memory-optimization helpers
def clear_memory():
    """Clear Python and CUDA caches."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.reset_peak_memory_stats()

def print_memory_usage():
    """Print the current memory usage."""
    if torch.cuda.is_available():
        allocated = torch.cuda.memory_allocated() / (1024 ** 3)
        reserved = torch.cuda.memory_reserved() / (1024 ** 3)
        logger.info(f"GPU内存使用: 已分配 {allocated:.2f}GB, 保留 {reserved:.2f}GB")
    else:
        logger.info("未检测到GPU")

def tokenize_function(examples, tokenizer, max_length=256):
    """Convert raw text into token IDs."""
    tokenized = {}
    # Tokenize each field
    for key in ['anchor', 'positive', 'negative']:
        if key in examples:
            # Run the tokenizer on the text
            result = tokenizer(
                examples[key],
                max_length=max_length,
                truncation=True,
                padding=False,
                return_tensors=None
            )
            tokenized[f"{key}_input_ids"] = result["input_ids"]
    return tokenized

@dataclass
class ContrastiveDataCollator:
    """Memory-optimized data collator."""
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    return_tensors: str = "pt"

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        # Split out the three parts of each triplet
        anchor_features = [{"input_ids": f["anchor_input_ids"]} for f in features]
        positive_features = [{"input_ids": f["positive_input_ids"]} for f in features]
        negative_features = [{"input_ids": f["negative_input_ids"]} for f in features]
        # Pad each part separately
        batch_anchor = self.tokenizer.pad(
            anchor_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_positive = self.tokenizer.pad(
            positive_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        batch_negative = self.tokenizer.pad(
            negative_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )

        # Build attention masks
        def create_attention_mask(input_ids):
            return (input_ids != self.tokenizer.pad_token_id).int()

        # Free intermediate buffers
        del anchor_features, positive_features, negative_features
        clear_memory()

        return {
            "anchor_input_ids": batch_anchor["input_ids"],
            "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]),
            "positive_input_ids": batch_positive["input_ids"],
            "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]),
            "negative_input_ids": batch_negative["input_ids"],
            "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]),
        }

@dataclass
class EvalDataCollator:
    """Evaluation-only data collator."""
    tokenizer: PreTrainedTokenizerBase
    padding: Union[bool, str, PaddingStrategy] = True
    max_length: Optional[int] = None
    pad_to_multiple_of: Optional[int] = None
    return_tensors: str = "pt"

    def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        # Evaluation uses only the positive samples (for language-modeling eval)
        input_features = []
        for f in features:
            # Make sure the necessary field exists
            if "positive_input_ids" in f:
                input_features.append({"input_ids": f["positive_input_ids"]})
            else:
                # Fall back to other fields if positive_input_ids is missing
                if "input_ids" in f:
                    input_features.append({"input_ids": f["input_ids"]})
                else:
                    # Skip the sample if neither is present
                    continue
        if not input_features:
            raise ValueError("评估数据中没有找到有效的输入特征")
        # Pad the samples
        batch = self.tokenizer.pad(
            input_features,
            padding=self.padding,
            max_length=self.max_length,
            pad_to_multiple_of=self.pad_to_multiple_of,
            return_tensors=self.return_tensors,
        )
        # Build the attention mask
        attention_mask = (batch["input_ids"] != self.tokenizer.pad_token_id).int()
        # Build labels (for language modeling)
        labels = batch["input_ids"].clone()
        labels[labels == self.tokenizer.pad_token_id] = -100
        return {
            "input_ids": batch["input_ids"],
            "attention_mask": attention_mask,
            "labels": labels
        }

class ContrastiveTrainer(Trainer):
    """Memory-optimized trainer."""
    def __init__(self, tokenizer=None, *args, contrastive_config=None, **kwargs):
        # Call the parent initializer first
        super().__init__(*args, **kwargs)
        # Key fix: set the tokenizer
        self.tokenizer = tokenizer
        if contrastive_config is None:
            contrastive_config = {}
        # Set defaults
        self.temperature = contrastive_config.get("temperature", 0.07)
        self.margin = contrastive_config.get("margin", 0.3)
        self.contrastive_weight = contrastive_config.get("weight", 0.8)
        self.repr_layer = contrastive_config.get("repr_layer", -1)
        # Validate the required model setting
        if not hasattr(self.model.config, "output_hidden_states") or not self.model.config.output_hidden_states:
            raise ValueError("模型必须设置output_hidden_states=True")
        self.cross_entropy = nn.CrossEntropyLoss()

    def compute_contrastive_loss(self, anchor_emb, pos_emb, neg_emb):
        """Compute the contrastive loss."""
        # Cosine similarities
        pos_sim = F.cosine_similarity(anchor_emb, pos_emb)
        neg_sim = F.cosine_similarity(anchor_emb, neg_emb)
        # InfoNCE loss
        numerator = torch.exp(pos_sim / self.temperature)
        denominator = numerator + torch.exp(neg_sim / self.temperature)
        info_nce_loss = -torch.log(numerator / (denominator + 1e-8)).mean()
        # Triplet loss
        triplet_loss = F.relu(neg_sim - pos_sim + self.margin).mean()
        return info_nce_loss + triplet_loss

    def get_sequence_representation(self, outputs, attention_mask):
        """Get the sequence representation (memory-optimized version)."""
        # Only the requested hidden-state layer
        hidden_states = outputs.hidden_states[self.repr_layer]
        # Last non-padding token of each sequence
        seq_lengths = attention_mask.sum(dim=1) - 1
        batch_indices = torch.arange(hidden_states.size(0))
        # Return the hidden state at that position
        return hidden_states[batch_indices, seq_lengths]

    def compute_loss(self, model, inputs, return_outputs=False):
        """Improved loss computation, compatible with both training and evaluation."""
        # Check the input format
        if "anchor_input_ids" in inputs:
            # Training mode: triplet data
            return self._compute_training_loss(model, inputs, return_outputs)
        else:
            # Evaluation mode: single-sample data
            return self._compute_evaluation_loss(model, inputs, return_outputs)

    def _compute_training_loss(self, model, inputs, return_outputs=False):
        """Training loss (triplets)."""
        # Unpack the inputs
        anchor_ids = inputs["anchor_input_ids"]
        anchor_mask = inputs["anchor_attention_mask"]
        positive_ids = inputs["positive_input_ids"]
        positive_mask = inputs["positive_attention_mask"]
        negative_ids = inputs["negative_input_ids"]
        negative_mask = inputs["negative_attention_mask"]

        # Forward pass to get hidden states
        def get_embeddings(input_ids, attention_mask):
            outputs = model(
                input_ids=input_ids,
                attention_mask=attention_mask,
                output_hidden_states=True,
                return_dict=True
            )
            return self.get_sequence_representation(outputs, attention_mask)

        # Embeddings for the triplet
        anchor_emb = get_embeddings(anchor_ids, anchor_mask)
        pos_emb = get_embeddings(positive_ids, positive_mask)
        neg_emb = get_embeddings(negative_ids, negative_mask)

        # Contrastive loss
        cl_loss = self.compute_contrastive_loss(anchor_emb, pos_emb, neg_emb)
        cl_loss = cl_loss * self.contrastive_weight

        # Key fix: make sure the tokenizer is set
        if self.tokenizer is None:
            raise ValueError("Tokenizer未设置!")

        # Language-modeling labels
        lm_labels = positive_ids.clone()
        # Key fix: use the tokenizer's pad_token_id
        pad_token_id = self.tokenizer.pad_token_id
        lm_labels[lm_labels == pad_token_id] = -100

        # Language-modeling loss
        lm_outputs = model(
            input_ids=positive_ids,
            attention_mask=positive_mask,
            labels=lm_labels
        )
        lm_loss = lm_outputs.loss

        # Total loss = LM loss + contrastive loss
        total_loss = lm_loss + cl_loss

        # Log memory usage
        print_memory_usage()

        return (total_loss, lm_outputs) if return_outputs else total_loss

    def _compute_evaluation_loss(self, model, inputs, return_outputs=False):
        """Evaluation loss (single samples)."""
        # Unpack the evaluation inputs
        input_ids = inputs["input_ids"]
        attention_mask = inputs["attention_mask"]
        labels = inputs["labels"]
        # Language-modeling loss
        outputs = model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels
        )
        loss = outputs.loss
        # Log memory usage
        print_memory_usage()
        return (loss, outputs) if return_outputs else loss

    def evaluate(
        self,
        eval_dataset: Optional[Dataset] = None,
        ignore_keys: Optional[List[str]] = None,
        metric_key_prefix: str = "eval",
    ) -> Dict[str, float]:
        """Override evaluate to use the evaluation-only data collator."""
        # Build the evaluation collator
        eval_data_collator = EvalDataCollator(
            tokenizer=self.tokenizer,
            max_length=256,
            padding="max_length"
        )
        # Temporarily save the original collator
        original_collator = self.data_collator
        try:
            # Swap in the evaluation collator
            self.data_collator = eval_data_collator
            # Delegate to the parent implementation
            return super().evaluate(
                eval_dataset=eval_dataset,
                ignore_keys=ignore_keys,
                metric_key_prefix=metric_key_prefix
            )
        finally:
            # Restore the original collator
            self.data_collator = original_collator

# ================ Main program ================ #
if __name__ == "__main__":
    # Quantization config to reduce memory usage
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,                    # 4-bit quantization
        bnb_4bit_quant_type="nf4",            # NF4 quantization type
        bnb_4bit_use_double_quant=True,       # double quantization
        bnb_4bit_compute_dtype=torch.float16  # FP16 compute
    )

    # Load the model and tokenizer (quantized)
    model = AutoModelForCausalLM.from_pretrained(
        "model/Qwen/Qwen1.5-1.8B",
        quantization_config=bnb_config,  # apply the quantization config
        device_map="auto",               # automatic device placement
        output_hidden_states=True,       # required to expose hidden states
        return_dict_in_generate=True,
        use_cache=False                  # disable the KV cache to save memory
    )
    tokenizer = AutoTokenizer.from_pretrained("model/Qwen/Qwen1.5-1.8B")
    tokenizer.pad_token = tokenizer.eos_token  # set the padding token

    # LoRA adapter for the quantized model
    lora_config = LoraConfig(
        r=8,
        lora_alpha=32,
        target_modules=["q_proj", "v_proj"],  # for the Qwen1.5-1.8B model
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM"
    )

    # Key fix: prepare the model for k-bit training
    model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True)

    # Attach the LoRA adapter
    model = get_peft_model(model, lora_config)

    # Key fix: explicitly enable gradients on the LoRA parameters
    for param in model.parameters():
        if param.requires_grad:
            param.requires_grad = True

    model.print_trainable_parameters()  # print the trainable-parameter count

    # Load the datasets
    def load_and_tokenize_dataset(file_path, tokenizer):
        """Load a dataset and tokenize it."""
        # Load the raw dataset
        dataset_dict = load_dataset('json', data_files=file_path)
        raw_dataset = dataset_dict['train']
        # Apply the tokenization function
        tokenized_dataset = raw_dataset.map(
            lambda ex: tokenize_function(ex, tokenizer, max_length=256),
            batched=True,
            batch_size=8,  # smaller batch size
            remove_columns=['anchor', 'positive', 'negative']
        )
        return tokenized_dataset

    train_dataset = load_and_tokenize_dataset('data/processed/train_style_triplets.json', tokenizer)
    val_dataset = load_and_tokenize_dataset('data/processed/val_style_triplets.json', tokenizer)

    # Sanity-check the dataset format
    print("训练集样本示例:", train_dataset[0])
    print("验证集样本示例:", val_dataset[0])

    # Training arguments (memory-optimized)
    training_args = TrainingArguments(
        output_dir="./model/lora_adapter",
        per_device_train_batch_size=1,  # smaller batch size
        gradient_accumulation_steps=8,  # more gradient-accumulation steps
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=10,               # log more often to monitor memory
        save_steps=500,
        fp16=True,
        report_to="none",
        remove_unused_columns=False,
        gradient_checkpointing=True,    # enable gradient checkpointing
        optim="adafactor",              # lower-memory optimizer
    )

    # Contrastive-learning config
    contrastive_config = {
        "temperature": 0.07,
        "margin": 0.3,
        "weight": 0.8,
        "repr_layer": -1
    }

    # Initialize the data collator
    data_collator = ContrastiveDataCollator(
        tokenizer=tokenizer,
        max_length=256,  # reduced maximum length
        padding="max_length"
    )

    # Initialize the trainer (key fix: pass the tokenizer)
    trainer = ContrastiveTrainer(
        model=model,
        args=training_args,
        tokenizer=tokenizer,  # pass the tokenizer
        data_collator=data_collator,
        train_dataset=train_dataset,
        eval_dataset=val_dataset,
        contrastive_config=contrastive_config
    )

    # Print the memory state before training
    print_memory_usage()

    # Key fix: verify the trainable parameters
    print("可训练参数列表:")
    for name, param in model.named_parameters():
        if param.requires_grad:
            print(f"- {name}")

    # Start training
    trainer.train()

    # Save the LoRA adapter
    model.save_pretrained("./model/lora_adapter")

    # Evaluate the model
    try:
        eval_results = trainer.evaluate()
        print("评估结果:", eval_results)
    except Exception as e:
        print(f"评估过程中发生错误: {e}")
        import traceback
        traceback.print_exc()
```
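For reference, a sketch of one defensive rewrite of the failing `map` call: passing the dataset's actual `column_names` avoids the hard-coded `remove_columns` list that raised the ValueError. This assumes the `tokenize_function` and `tokenizer` defined above, and it only removes the crash; the JSON rows must still actually contain `anchor`/`positive`/`negative` fields (the loader only found a `text` column) for the tokenized fields to be produced:

```python
from datasets import load_dataset

def load_and_tokenize_dataset(file_path, tokenizer):
    raw_dataset = load_dataset('json', data_files=file_path)['train']
    print("columns found:", raw_dataset.column_names)  # e.g. ['text'] instead of the triplet fields
    tokenized_dataset = raw_dataset.map(
        lambda ex: tokenize_function(ex, tokenizer, max_length=256),
        batched=True,
        batch_size=8,
        remove_columns=raw_dataset.column_names,  # remove whatever columns actually exist
    )
    return tokenized_dataset
```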

The code has a problem:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
INFO:accelerate.utils.modeling:We will use 90% of the memory on device 0 for storing the model, and 10% for the buffer to avoid OOM. You can set max_memory in to a higher value to use more memory (at your own risk).
trainable params: 1,572,864 || all params: 1,838,401,536 || trainable%: 0.0856
训练集样本示例: {'anchor_input_ids': [56568, 118919, 116122, 11319], 'positive_input_ids': [116122, 20412, 107340, 9370, 100357, 102323, 3837, 109202, 104078, 103975, 100675, 101940, 100912, 105054, 6313], 'negative_input_ids': [100323, 104307, 99245, 9370, 106059, 104060, 3837, 104530, 115604, 99329, 11319]}
验证集样本示例: {'anchor_input_ids': [56568, 118919, 116122, 11319], 'positive_input_ids': [116122, 20412, 107340, 9370, 100357, 102323, 3837, 109202, 104078, 103975, 100675, 101940, 100912, 105054, 6313], 'negative_input_ids': [100323, 104307, 99245, 9370, 106059, 104060, 3837, 104530, 115604, 99329, 11319]}
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class = processing_class instead.
INFO:__main__:GPU内存使用: 已分配 2.93GB, 保留 4.13GB
可训练参数列表:
- base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.0.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.0.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.1.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.1.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.1.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.1.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.2.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.2.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.2.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.2.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.3.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.3.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.3.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.3.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.4.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.4.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.4.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.4.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.5.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.5.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.5.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.5.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.6.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.6.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.6.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.6.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.7.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.7.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.7.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.7.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.8.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.8.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.8.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.8.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.9.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.9.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.9.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.9.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.10.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.10.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.10.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.10.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.11.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.11.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.11.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.11.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.12.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.12.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.12.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.12.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.13.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.13.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.13.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.13.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.14.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.14.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.14.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.14.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.15.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.15.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.15.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.15.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.16.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.16.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.16.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.16.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.17.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.17.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.17.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.17.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.18.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.18.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.18.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.18.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.19.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.19.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.19.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.19.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.20.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.20.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.20.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.20.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.21.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.21.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.21.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.21.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.22.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.22.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.22.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.22.self_attn.v_proj.lora_B.default.weight
- base_model.model.model.layers.23.self_attn.q_proj.lora_A.default.weight
- base_model.model.model.layers.23.self_attn.q_proj.lora_B.default.weight
- base_model.model.model.layers.23.self_attn.v_proj.lora_A.default.weight
- base_model.model.model.layers.23.self_attn.v_proj.lora_B.default.weight
0%| | 0/3 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.00GB, 保留 4.21GB
Could not estimate the number of tokens of the input, floating-point operations will not be computed
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.02GB, 保留 4.22GB
33%|████████████████████████████ | 1/3 [00:03<00:06, 3.25s/it]Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.01GB, 保留 4.25GB
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.02GB, 保留 4.26GB
67%|████████████████████████████████████████████████████████ | 2/3 [00:06<00:02, 2.98s/it]Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.01GB, 保留 4.25GB
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
Trainer.tokenizer is now deprecated. You should use Trainer.processing_class instead.
INFO:__main__:GPU内存使用: 已分配 4.02GB, 保留 4.26GB
{'train_runtime': 9.034, 'train_samples_per_second': 0.664, 'train_steps_per_second': 0.332, 'train_loss': 1.0772175788879395, 'epoch': 3.0}
100%|████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:09<00:00, 3.01s/it]
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 356, in <module>
    eval_results = trainer.evaluate()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 4076, in evaluate
    output = eval_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 4270, in evaluation_loop
    losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 4496, in prediction_step
    outputs = model(**inputs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\accelerate\utils\operations.py", line 818, in forward
    return model_forward(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\accelerate\utils\operations.py", line 806, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\amp\autocast_mode.py", line 44, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\peft\peft_model.py", line 1719, in forward
    return self.base_model(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\peft\tuners\tuners_utils.py", line 197, in forward
    return self.model.forward(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\models\qwen2\modeling_qwen2.py", line 816, in forward
    outputs = self.model(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\models\qwen2\modeling_qwen2.py", line 521, in forward
    raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
ValueError: You must specify exactly one of input_ids or inputs_embeds

(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 57, in <module>
    class ContrastiveTrainer(Trainer):
  File
```
"C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 63, in ContrastiveTrainer eval_dataset: Optional[Dataset] = None, NameError: name 'Dataset' is not defined 原代码如下:import torch import torch.nn as nn import torch.nn.functional as F from transformers import ( AutoModelForCausalLM, AutoTokenizer, TrainingArguments, Trainer, PreTrainedTokenizerBase, BitsAndBytesConfig ) from transformers.tokenization_utils_base import PreTrainedTokenizerBase from transformers.utils import PaddingStrategy from datasets import load_dataset from typing import Any, Dict, List, Optional, Tuple, Union import logging from dataclasses import dataclass import os import gc from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training @dataclass class EvalDataCollator: """评估专用的数据收集器""" tokenizer: PreTrainedTokenizerBase padding: Union[bool, str, PaddingStrategy] = True max_length: Optional[int] = None pad_to_multiple_of: Optional[int] = None return_tensors: str = "pt" def __call__(self, features: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]: # 评估时只使用正样本(用于语言建模评估) positive_features = [{"input_ids": f["positive_input_ids"]} for f in features] # 对正样本进行填充 batch_positive = self.tokenizer.pad( positive_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=self.return_tensors, ) # 创建注意力掩码 attention_mask = (batch_positive["input_ids"] != self.tokenizer.pad_token_id).int() # 创建标签(用于语言建模) labels = batch_positive["input_ids"].clone() labels[labels == self.tokenizer.pad_token_id] = -100 return { "input_ids": batch_positive["input_ids"], "attention_mask": attention_mask, "labels": labels } class ContrastiveTrainer(Trainer): """内存优化的训练器""" # ... [保持其他方法不变] ... def evaluate( self, eval_dataset: Optional[Dataset] = None, ignore_keys: Optional[List[str]] = None, metric_key_prefix: str = "eval", ) -> Dict[str, float]: """重写评估方法以使用专用的数据收集器""" # 创建评估专用的数据收集器 eval_data_collator = EvalDataCollator( tokenizer=self.tokenizer, max_length=256, padding="max_length" ) # 临时保存原始数据收集器 original_collator = self.data_collator try: # 使用评估专用的数据收集器 self.data_collator = eval_data_collator # 调用父类的评估方法 return super().evaluate( eval_dataset=eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix ) finally: # 恢复原始数据收集器 self.data_collator = original_collator # 设置日志 logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) # 内存优化工具函数 def clear_memory(): """清除Python和CUDA缓存""" gc.collect() if torch.cuda.is_available(): torch.cuda.empty_cache() torch.cuda.reset_peak_memory_stats() def print_memory_usage(): """打印当前内存使用情况""" if torch.cuda.is_available(): allocated = torch.cuda.memory_allocated() / (1024 ** 3) reserved = torch.cuda.memory_reserved() / (1024 ** 3) logger.info(f"GPU内存使用: 已分配 {allocated:.2f}GB, 保留 {reserved:.2f}GB") else: logger.info("未检测到GPU") def tokenize_function(examples, tokenizer, max_length=256): """将文本转换为token IDs""" tokenized = {} # 对每个字段进行分词 for key in ['anchor', 'positive', 'negative']: if key in examples: # 使用分词器处理文本 result = tokenizer( examples[key], max_length=max_length, truncation=True, padding=False, return_tensors=None ) tokenized[f"{key}_input_ids"] = result["input_ids"] return tokenized @dataclass class ContrastiveDataCollator: """内存优化的数据收集器""" tokenizer: PreTrainedTokenizerBase padding: Union[bool, str, PaddingStrategy] = True max_length: Optional[int] = None pad_to_multiple_of: Optional[int] = None return_tensors: str = "pt" def __call__(self, features: List[Dict[str, Any]]) 
-> Dict[str, torch.Tensor]: # 分离出三元组的各个部分 anchor_features = [{"input_ids": f["anchor_input_ids"]} for f in features] positive_features = [{"input_ids": f["positive_input_ids"]} for f in features] negative_features = [{"input_ids": f["negative_input_ids"]} for f in features] # 对每个部分分别进行填充 batch_anchor = self.tokenizer.pad( anchor_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=self.return_tensors, ) batch_positive = self.tokenizer.pad( positive_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=self.return_tensors, ) batch_negative = self.tokenizer.pad( negative_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors=self.return_tensors, ) # 创建注意力掩码 def create_attention_mask(input_ids): return (input_ids != self.tokenizer.pad_token_id).int() # 释放中间变量内存 del anchor_features, positive_features, negative_features clear_memory() return { "anchor_input_ids": batch_anchor["input_ids"], "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]), "positive_input_ids": batch_positive["input_ids"], "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]), "negative_input_ids": batch_negative["input_ids"], "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]), } class ContrastiveTrainer(Trainer): """内存优化的训练器""" def __init__(self, tokenizer=None, *args, contrastive_config=None, **kwargs): # 首先调用父类初始化 super().__init__(*args, **kwargs) # 关键修复:设置tokenizer self.tokenizer = tokenizer if contrastive_config is None: contrastive_config = {} # 设置默认值 self.temperature = contrastive_config.get("temperature", 0.07) self.margin = contrastive_config.get("margin", 0.3) self.contrastive_weight = contrastive_config.get("weight", 0.8) self.repr_layer = contrastive_config.get("repr_layer", -1) # 验证必要参数 if not hasattr(self.model.config, "output_hidden_states") or not self.model.config.output_hidden_states: raise ValueError("模型必须设置output_hidden_states=True") self.cross_entropy = nn.CrossEntropyLoss() def compute_contrastive_loss(self, anchor_emb, pos_emb, neg_emb): """计算对比损失""" # 计算余弦相似度 pos_sim = F.cosine_similarity(anchor_emb, pos_emb) neg_sim = F.cosine_similarity(anchor_emb, neg_emb) # 计算InfoNCE损失 numerator = torch.exp(pos_sim / self.temperature) denominator = numerator + torch.exp(neg_sim / self.temperature) info_nce_loss = -torch.log(numerator / (denominator + 1e-8)).mean() # 计算三元组损失 triplet_loss = F.relu(neg_sim - pos_sim + self.margin).mean() return info_nce_loss + triplet_loss def get_sequence_representation(self, outputs, attention_mask): """获取序列表示(内存优化版)""" # 只获取需要的隐藏状态层 hidden_states = outputs.hidden_states[self.repr_layer] # 获取每个序列的最后一个非填充token seq_lengths = attention_mask.sum(dim=1) - 1 batch_indices = torch.arange(hidden_states.size(0)) # 返回对应位置的隐藏状态 return hidden_states[batch_indices, seq_lengths] def compute_loss(self, model, inputs, return_outputs=False): """内存优化的损失计算""" # 确保模型处于训练模式 model.train() # 提取输入 anchor_ids = inputs["anchor_input_ids"] anchor_mask = inputs["anchor_attention_mask"] positive_ids = inputs["positive_input_ids"] positive_mask = inputs["positive_attention_mask"] negative_ids = inputs["negative_input_ids"] negative_mask = inputs["negative_attention_mask"] # 前向传播获取隐藏状态 def get_embeddings(input_ids, attention_mask): outputs = model( input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True, return_dict=True ) 
return self.get_sequence_representation(outputs, attention_mask) # 获取三元组的嵌入表示 anchor_emb = get_embeddings(anchor_ids, anchor_mask) pos_emb = get_embeddings(positive_ids, positive_mask) neg_emb = get_embeddings(negative_ids, negative_mask) # 计算对比损失 cl_loss = self.compute_contrastive_loss(anchor_emb, pos_emb, neg_emb) cl_loss = cl_loss * self.contrastive_weight # 关键修复:确保tokenizer已设置 if self.tokenizer is None: raise ValueError("Tokenizer未设置!") # 计算语言建模损失 lm_labels = positive_ids.clone() # 关键修复:使用tokenizer的pad_token_id pad_token_id = self.tokenizer.pad_token_id lm_labels[lm_labels == pad_token_id] = -100 # 计算语言建模损失 lm_outputs = model( input_ids=positive_ids, attention_mask=positive_mask, labels=lm_labels ) lm_loss = lm_outputs.loss # 总损失 = LM损失 + 对比损失 total_loss = lm_loss + cl_loss # 记录内存使用 print_memory_usage() return (total_loss, lm_outputs) if return_outputs else total_loss # ================ 主程序 ================ # if __name__ == "__main__": # 配置量化以减少内存使用 bnb_config = BitsAndBytesConfig( load_in_4bit=True, # 使用4位量化 bnb_4bit_quant_type="nf4", # 使用NF4量化类型 bnb_4bit_use_double_quant=True, # 双重量化 bnb_4bit_compute_dtype=torch.float16 # 计算使用FP16 ) # 加载模型和分词器(使用量化) model = AutoModelForCausalLM.from_pretrained( "model/Qwen/Qwen1.5-1.8B", quantization_config=bnb_config, # 应用量化配置 device_map="auto", # 自动选择设备 output_hidden_states=True, # 必须设置以获取隐藏状态 return_dict_in_generate=True, use_cache=False # 禁用缓存以节省内存 ) tokenizer = AutoTokenizer.from_pretrained("model/Qwen/Qwen1.5-1.8B") tokenizer.pad_token = tokenizer.eos_token # 设置填充token # 为量化模型添加LoRA适配器 lora_config = LoraConfig( r=8, lora_alpha=32, target_modules=["q_proj", "v_proj"], # 针对Qwen1.5-1.8B模型 lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) # 关键修复:准备模型用于k位训练 model = prepare_model_for_kbit_training(model, use_gradient_checkpointing=True) # 添加LoRA适配器 model = get_peft_model(model, lora_config) # 关键修复:显式启用LoRA参数的梯度 for param in model.parameters(): if param.requires_grad: param.requires_grad = True model.print_trainable_parameters() # 打印可训练参数数量 # 加载数据集 def load_and_tokenize_dataset(file_path, tokenizer): """加载数据集并进行分词处理""" # 加载原始数据集 dataset_dict = load_dataset('json', data_files=file_path) raw_dataset = dataset_dict['train'] # 应用分词函数 tokenized_dataset = raw_dataset.map( lambda ex: tokenize_function(ex, tokenizer, max_length=256), batched=True, batch_size=8, # 减小批处理大小 remove_columns=['anchor', 'positive', 'negative'] ) return tokenized_dataset train_dataset = load_and_tokenize_dataset('data/processed/train_style_triplets.json', tokenizer) val_dataset = load_and_tokenize_dataset('data/processed/val_style_triplets.json', tokenizer) # 验证数据集格式 print("训练集样本示例:", train_dataset[0]) print("验证集样本示例:", val_dataset[0]) # 训练参数配置(内存优化) training_args = TrainingArguments( output_dir="./model/lora_adapter", per_device_train_batch_size=1, # 减小批量大小 gradient_accumulation_steps=8, # 增加梯度累积步数 num_train_epochs=3, learning_rate=2e-4, logging_steps=10, # 更频繁的日志记录以监控内存 save_steps=500, fp16=True, report_to="none", remove_unused_columns=False, gradient_checkpointing=True, # 启用梯度检查点 optim="adafactor", # 使用内存更少的优化器 ) # 对比学习配置 contrastive_config = { "temperature": 0.07, "margin": 0.3, "weight": 0.8, "repr_layer": -1 } # 初始化数据收集器 data_collator = ContrastiveDataCollator( tokenizer=tokenizer, max_length=256, # 减少最大长度 padding="max_length" ) # 初始化训练器 - 关键修复:传递tokenizer trainer = ContrastiveTrainer( model=model, args=training_args, tokenizer=tokenizer, # 传递tokenizer data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset, contrastive_config=contrastive_config 
) # 开始训练前打印内存状态 print_memory_usage() # 关键修复:验证可训练参数 print("可训练参数列表:") for name, param in model.named_parameters(): if param.requires_grad: print(f"- {name}") # 开始训练 trainer.train() # 保存LoRA适配器 model.save_pretrained("./model/lora_adapter") # 评估模型 try: eval_results = trainer.evaluate() print("评估结果:", eval_results) except Exception as e: print(f"评估过程中发生错误: {e}") import traceback traceback.print_exc()
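The log shows two separate failures. The first run trains fine but crashes inside `trainer.evaluate()`: the default evaluation loop calls `model(**inputs)` with the training collator's triplet keys (`anchor_input_ids`, `positive_input_ids`, ...), none of which Qwen2's `forward()` accepts, hence "You must specify exactly one of input_ids or inputs_embeds". The `evaluate()` override with `EvalDataCollator` addresses exactly that, but the second run then dies on its type annotation, because `Dataset` is never imported. Note also that the script defines `ContrastiveTrainer` twice, so the second definition (without the override) silently replaces the first. A minimal sketch of the fix, assuming the rest of the script stays as in the listing above:

```python
# Import the missing name used in evaluate()'s signature.
from torch.utils.data import Dataset

class ContrastiveTrainer(Trainer):
    """Keep a single trainer class: merge the evaluate() override into the
    one definition instead of declaring ContrastiveTrainer twice."""

    # __init__ / compute_contrastive_loss / compute_loss as in the listing above

    def evaluate(
        self,
        eval_dataset: Optional[Dataset] = None,
        ignore_keys: Optional[List[str]] = None,
        metric_key_prefix: str = "eval",
    ) -> Dict[str, float]:
        # Swap in the eval-only collator so the model receives
        # input_ids / attention_mask / labels instead of triplet keys.
        eval_collator = EvalDataCollator(
            tokenizer=self.tokenizer, max_length=256, padding="max_length"
        )
        original_collator = self.data_collator
        try:
            self.data_collator = eval_collator
            return super().evaluate(
                eval_dataset=eval_dataset,
                ignore_keys=ignore_keys,
                metric_key_prefix=metric_key_prefix,
            )
        finally:
            # Always restore the training collator, even if evaluation raises.
            self.data_collator = original_collator
```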

A later run of the same script fails at the first training step:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
  0%|          | 0/3 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 349, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2531, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 3676, in training_step
    loss = self.compute_loss(model, inputs)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 191, in compute_loss
    anchor_ids = anchor_ids.requires_grad_()
RuntimeError: only Tensors of floating point dtype can require gradients
  0%|          | 0/3 [00:00<?, ?it/s]
```
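The failing line (`anchor_ids = anchor_ids.requires_grad_()`) does not appear in the listing above, so it was evidently added in a later edit. Token-ID tensors are `torch.long`, and only floating-point tensors can require gradients; the call is also unnecessary, since gradients flow through the model's floating-point embedding and LoRA weights, never through the integer inputs. A sketch of the corrected start of `compute_loss`, assuming everything else is unchanged:

```python
def compute_loss(self, model, inputs, return_outputs=False):
    model.train()
    # Token IDs stay as plain torch.long tensors. Do NOT call
    # requires_grad_() on them: only float tensors can require grad, and
    # the trainable (LoRA) weights already receive gradients via autograd.
    anchor_ids = inputs["anchor_input_ids"]
    anchor_mask = inputs["anchor_attention_mask"]
    positive_ids = inputs["positive_input_ids"]
    positive_mask = inputs["positive_attention_mask"]
    negative_ids = inputs["negative_input_ids"]
    negative_mask = inputs["negative_attention_mask"]
    # ... loss computation exactly as in the original listing ...
```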

The following traceback comes from a different project (MFON on the MOSI dataset):

```
Traceback (most recent call last):
  File "/ssd2/mjt/MFON-main/MOSI/main.py", line 33, in <module>
    main()
  File "/ssd2/mjt/MFON-main/MOSI/main.py", line 28, in main
    TVA_train_fusion(config, metrics, config.seed, train_data, valid_data)
  File "/ssd2/mjt/MFON-main/MOSI/train/TVA_train.py", line 83, in TVA_train_fusion
    pred, (loss_v, loss_a, loss_nce) = model(text, vision, audio, vision_text, audio_text, mode='train')
  File "/home/mjt/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/ssd2/mjt/MFON-main/MOSI/models/model.py", line 131, in forward
    h_tv = self.vision_with_text(vision_text, proj_vision, proj_vision)  # [seq-v, bs, 768] [seq-t, bs, 768]--> [seq, bs,h]
  File "/home/mjt/anaconda3/envs/pytorch/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/ssd2/mjt/MFON-main/MOSI/models/trans/transformer.py", line 66, in forward
    x += self.embed_positions(x_in.transpose(0, 1)[:, :, 0]).transpose(0, 1)  # Add positional embedding
IndexError: too many indices for tensor of dimension 2
```
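The indexing `x_in.transpose(0, 1)[:, :, 0]` assumes a 3-D input of shape `[seq_len, batch, embed_dim]` (the call-site comment expects `[seq-v, bs, 768]`), but the tensor that arrives is 2-D, so the positional-embedding lookup fails. The root cause is upstream: `vision_text` reaches `vision_with_text` without its feature dimension. A hypothetical guard to surface the real culprit; the names follow the traceback and are not verified against the MFON repository:

```python
# In models/trans/transformer.py, just before the failing line:
assert x_in.dim() == 3, (
    f"expected x_in of shape [seq_len, batch, embed_dim], got {tuple(x_in.shape)}; "
    "check that vision_text is embedded/projected to [seq, bs, 768] before the fusion call"
)
x += self.embed_positions(x_in.transpose(0, 1)[:, :, 0]).transpose(0, 1)  # add positional embedding
```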

The code above has the following problems; analyze and fix them:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
Map:   0%|          | 0/1 [00:00<?, ? examples/s]ERROR:__main__:无法解析anchor_input_ids: 你如何看待气候变化?
ERROR:__main__:无法解析positive_input_ids: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析negative_input_ids: 哈哈天气什么的随便啦,不如聊聊游戏?
Map: 100%|████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 13.02 examples/s]
Map:   0%|          | 0/1 [00:00<?, ? examples/s]ERROR:__main__:无法解析anchor_input_ids: 你如何看待气候变化?
ERROR:__main__:无法解析positive_input_ids: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析negative_input_ids: 哈哈天气什么的随便啦,不如聊聊游戏?
Map: 100%|████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 67.37 examples/s]
训练集样本示例: {'anchor_input_ids': '你如何看待气候变化?', 'positive_input_ids': '气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!', 'negative_input_ids': '哈哈天气什么的随便啦,不如聊聊游戏?'}
验证集样本示例: {'anchor_input_ids': '你如何看待气候变化?', 'positive_input_ids': '气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!', 'negative_input_ids': '哈哈天气什么的随便啦,不如聊聊游戏?'}
  0%|          | 0/3 [00:00<?, ?it/s]ERROR:__main__:无法解析token IDs: 你如何看待气候变化?
ERROR:__main__:无法解析token IDs: 气候变化是严峻的全球危机,我们需要立即采取行动减少碳排放!
ERROR:__main__:无法解析token IDs: 哈哈天气什么的随便啦,不如聊聊游戏?
You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 281, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2480, in _inner_training_loop
    batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 5156, in get_batch_samples
    batch_samples += [next(epoch_iterator)]
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\accelerate\data_loader.py", line 567, in __iter__
    current_batch = next(dataloader_iter)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 701, in __next__
    data = self._next_data()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 757, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\_utils\fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 96, in __call__
    "positive_attention_mask": create_to_attention_mask(batch_positive["input_ids"]),
NameError: name 'create_to_attention_mask' is not defined. Did you mean: 'create_attention_mask'?
  0%|          | 0/3 [00:00<?, ?it/s]
```
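Two problems are visible. The `NameError` is a plain typo: the collator defines `create_attention_mask`, but one call site says `create_to_attention_mask`. Separately, the `ERROR:__main__:无法解析...` lines and the printed samples show that every `*_input_ids` field still holds a raw string, so the tokenization step never produced ID lists; the JSON fields were apparently named `anchor_input_ids` etc. up front and mapped straight through. A sketch of both fixes, assuming the collator and dataset layout from the listing above (`encode_triplet` is a hypothetical helper name):

```python
# 1. Fix the typo in the collator's return dict.
return {
    "anchor_input_ids": batch_anchor["input_ids"],
    "anchor_attention_mask": create_attention_mask(batch_anchor["input_ids"]),
    "positive_input_ids": batch_positive["input_ids"],
    # was: create_to_attention_mask(...)  -> NameError
    "positive_attention_mask": create_attention_mask(batch_positive["input_ids"]),
    "negative_input_ids": batch_negative["input_ids"],
    "negative_attention_mask": create_attention_mask(batch_negative["input_ids"]),
}

# 2. Tokenize the raw strings before training, for a dataset whose text
#    already sits in the *_input_ids columns.
def encode_triplet(example, tokenizer, max_length=256):
    out = {}
    for key in ("anchor_input_ids", "positive_input_ids", "negative_input_ids"):
        if isinstance(example[key], str):
            out[key] = tokenizer(
                example[key], max_length=max_length, truncation=True
            )["input_ids"]
    return out

train_dataset = train_dataset.map(lambda ex: encode_triplet(ex, tokenizer))
```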

The code above then fails with the following error; please analyze and resolve it:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\generation\configuration_utils.py:818: UserWarning: return_dict_in_generate is NOT set to True, but output_hidden_states is. When return_dict_in_generate is not True, output_hidden_states is ignored.
  warnings.warn(
Generating train split: 1 examples [00:00, 6.57 examples/s]
Generating train split: 1 examples [00:00, 142.77 examples/s]
C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py:76: FutureWarning: tokenizer is deprecated and will be removed in version 5.0.0 for ContrastiveTrainer.__init__. Use processing_class instead.
  super().__init__(*args, **kwargs)
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 223, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2200, in _inner_training_loop
    train_dataloader = self.get_train_dataloader()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 1000, in get_train_dataloader
    train_dataset = self._remove_unused_columns(train_dataset, description="training")
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 926, in _remove_unused_columns
    raise ValueError(
ValueError: No columns in the dataset match the model's forward method signature. The following columns have been ignored: [negative_input_ids, positive_input_ids, anchor_input_ids]. Please check the dataset and model. You may need to set remove_unused_columns=False in TrainingArguments.
```
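The error message names its own fix: by default, `Trainer` drops dataset columns that do not match the model's `forward()` signature, which deletes all three triplet columns before the custom collator ever sees them. Keeping the columns is a one-line change; a sketch assuming the remaining arguments stay as in the listing above:

```python
training_args = TrainingArguments(
    output_dir="./model/lora_adapter",
    remove_unused_columns=False,  # keep anchor/positive/negative_input_ids for the custom collator
    # ... remaining arguments unchanged ...
)
```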

The code above then produces the following error; please analyze and resolve it:

```
(style_tune) C:\Users\28996\Desktop\AI\persona_contrastive_finetuning>python Contrastive_Training_LM.py
  0%|          | 0/3 [00:00<?, ?it/s]You're using a Qwen2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the __call__ method is faster than using a method to encode the text followed by a call to the pad method to get a padded encoding.
Traceback (most recent call last):
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 227, in <module>
    trainer.train()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2171, in train
    return inner_training_loop(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 2480, in _inner_training_loop
    batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\trainer.py", line 5156, in get_batch_samples
    batch_samples += [next(epoch_iterator)]
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\accelerate\data_loader.py", line 567, in __iter__
    current_batch = next(dataloader_iter)
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 701, in __next__
    data = self._next_data()
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\dataloader.py", line 757, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\torch\utils\data\_utils\fetch.py", line 55, in fetch
    return self.collate_fn(data)
  File "C:\Users\28996\Desktop\AI\persona_contrastive_finetuning\Contrastive_Training_LM.py", line 38, in __call__
    batch_anchor = self.tokenizer.pad(
  File "C:\Users\28996\miniconda3\envs\style_tune\lib\site-packages\transformers\tokenization_utils_base.py", line 3337, in pad
    raise ValueError(
ValueError: type of 你如何看待气候变化? unknown: <class 'str'>. Should be one of a python, numpy, pytorch or tensorflow object.
  0%|          | 0/3 [00:00<?, ?it/s]
```
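`tokenizer.pad()` accepts only encoded inputs (Python lists, NumPy arrays, or torch/TensorFlow tensors of token IDs), so passing the raw string 你如何看待气候变化? fails. The dataset must be tokenized before it reaches the collator, which is exactly what the script's `tokenize_function` is for. A sketch of wiring it in, assuming the field names and file paths from the listing above:

```python
raw_dataset = load_dataset("json", data_files="data/processed/train_style_triplets.json")["train"]
train_dataset = raw_dataset.map(
    lambda ex: tokenize_function(ex, tokenizer, max_length=256),  # text -> token-ID lists
    batched=True,
    remove_columns=["anchor", "positive", "negative"],  # drop the raw-text columns
)
# Each *_input_ids field is now a list[int], which tokenizer.pad() accepts.
```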
