Exploring the Code States Project with Jupyter Notebook

This is a project that uses Jupyter Notebook as its primary development and documentation tool. Jupyter Notebook is an open-source web application that lets users create and share documents containing live code, equations, visualizations, and explanatory text. It is widely used across data science for data cleaning and transformation, numerical simulation, statistical modeling, machine learning, and more.

### Detailed notes

#### A brief introduction to Jupyter Notebook
Jupyter Notebook grew out of IPython, a project started by Fernando Pérez in 2001. It lets users write and execute code interactively and view the results in a web browser. Jupyter Notebook supports many programming languages, Python being the most common. Its name combines three core languages: Julia, Python, and R, although it is no longer limited to these.

#### Core components
- **Kernel**: Jupyter Notebook uses a kernel to run and manage code. For Python, the most common kernel is IPython. The kernel executes code and provides completion, correction, and other interactive features.
- **Notebook**: a notebook is a sequence of cells; each cell can contain code, Markdown, or rich text. Users execute code inside a cell and see the results inline.
- **Front end**: users interact with a notebook through a browser-based front end, where they can edit text cells, run code cells, and view outputs such as charts and tables.

#### Workflow and applications
In a typical workflow, data scientists use Jupyter Notebook to iterate on code, record the analysis as they go, and share the results with others. This interactive style supports rapid experimentation and makes data easier to understand. Notebooks are also used to create teaching materials, demos, and reports.

The applications are broad, including but not limited to:
- data cleaning and analysis
- data visualization
- developing and testing machine learning models
- explaining and documenting complex algorithms
- education and teaching
- multidisciplinary research projects

#### Benefits of using Jupyter Notebook
- **Immediate feedback**: code input and output appear in the same notebook, which makes debugging and understanding code behavior easier.
- **Documentation and code together**: explanatory text sits next to code cells, improving readability and maintainability.
- **Easy sharing and collaboration**: notebooks export to many formats, such as HTML, PDF, and Markdown, so they can be shared and viewed in different environments; multiple users can also edit the same notebook collaboratively via JupyterHub.
- **Highly extensible**: a large ecosystem of extension plugins lets users add functionality as needed.

#### What "Code_States_Project" contains
The file information provided gives no further description of "Code_States_Project", so its exact contents and code cannot be detailed here. It is presumably a project involving programming, data processing, algorithm implementation, or machine learning, since Jupyter Notebook is especially popular in those areas. To inspect it, unzip "Code_States_Project-main" and open the corresponding .ipynb files in a local environment; the code and text cells reveal the implementation details and the data science methods used.

#### Closing remarks
Jupyter Notebook is a powerful tool that streamlines programming, data analysis, teaching, and many other tasks, and it is a valuable resource for beginners and professionals alike. As a Jupyter-based project, "Code_States_Project" likely contains code samples, data analysis results, or machine learning models that help in understanding specific programming problems, analysis methods, or machine learning techniques.
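The cell structure described above is plain JSON on disk. The following sketch (illustrative only, not taken from the project) builds a minimal two-cell notebook structure with the standard library to show the layout an .ipynb file uses:

```python
import json

# A notebook is just JSON: top-level format version plus a list of cells,
# each with a cell_type ("markdown" or "code") and a source string.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 4,
    "metadata": {},
    "cells": [
        {"cell_type": "markdown", "metadata": {}, "source": "# Analysis notes"},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [], "source": "print(2 + 2)"},
    ],
}

# Round-trip through JSON, as Jupyter does when saving/loading a notebook
serialized = json.dumps(notebook)
cells = json.loads(serialized)["cells"]
print([c["cell_type"] for c in cells])  # ['markdown', 'code']
```

Writing this dict with `json.dump` to a file ending in `.ipynb` yields a document Jupyter can open.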

Related recommendations


We can now use a method to plot the loss surface of the network by projecting the parameter updates into two dimensions. You can find more information on that here, but you can just use the provided code. The contour plot shows how the loss would change if you followed the two main directions of the past parameter updates. Think about the challenges and the optimization process of this landscape: what could impede the convergence of the net?

```python
# project states onto the main directions of the gradient updates using n samples
# over all steps starting from sample x; the directions are calculated using the
# last sample as a reference
directions, state_ids, loss_coordinates = get_state_directions(
    states, n_states=10, start_from=0, reference_id=-1)

# compute the losses over the main directions of the gradient updates
x, y, Z, _ = get_loss_grid(net, data_loader, loss_fn, directions=directions,
                           resolution=(20, 20),
                           scale=loss_coordinates.abs().max().item())

# plot the landscape as a contour plot
fig = plot_contour(np.copy(x), np.copy(y), np.copy(Z), scale=True)
fig.add_traces(go.Scatter(x=np.copy(loss_coordinates[0].cpu().numpy()),
                          y=np.copy(loss_coordinates[1].cpu().numpy())))
print('loss samples:', np.array(losses)[state_ids])
conf_pltly()
init_notebook_mode(connected=False)
iplot(fig)
```

Running this raises:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-62-26d05ea2d790> in <cell line: 3>()
      1 # project states onto the main directions of the gradient updates using n samples over all steps starting from sample x
      2 # the directions are calculated using the last sample as a reference
----> 3 directions, state_ids, loss_coordinates = get_state_directions(states, n_states=10, start_from=0, reference_id=-1)
      4
      5 # compute the losses over the main directions of the gradient updates

<ipython-input-60-6cc4aad7dcda> in get_state_directions(states, n_states, start_from, reference_id)
     15             params.append(param.view(-1))
     16
---> 17     params = torch.stack(params, dim=0)
     18     reference = params[-1]
     19

RuntimeError: stack expects each tensor to be equal size, but got [200704] at entry 0 and [256] at entry 1
```

How do I fix this error?
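A sketch of one possible fix (assuming each entry of `states` holds one parameter tensor per model parameter): `torch.stack` fails because it is handed one tensor *per parameter*, and parameters differ in size ([200704] vs [256]). Flattening and concatenating all parameters of a single state into one vector first, and then stacking the per-state vectors, gives tensors of equal length:

```python
import torch

def flatten_state(state):
    # Concatenate every parameter of ONE optimization state into a single
    # flat vector, so all states end up with the same length.
    return torch.cat([p.reshape(-1) for p in state.values()])

# Toy example: two states of a model with a (4, 3) weight and a (3,) bias
state_a = {"w": torch.zeros(4, 3), "b": torch.zeros(3)}
state_b = {"w": torch.ones(4, 3), "b": torch.ones(3)}

# Now stacking works: each row is one fully flattened parameter state
params = torch.stack([flatten_state(state_a), flatten_state(state_b)], dim=0)
print(params.shape)  # torch.Size([2, 15])
```

Inside `get_state_directions`, the analogous change is to `torch.cat` the per-parameter vectors of each state before appending to `params`, so the outer `torch.stack(params, dim=0)` sees equal-sized tensors.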


```python
import os
import torch
import transformers
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
    BitsAndBytesConfig,
    Trainer
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from datasets import load_dataset
import logging
import psutil
import gc
from datetime import datetime

# === Configuration ===
MODEL_NAME = "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/models/Yi-6B"
DATASET_PATH = "./data/train_lora_formatted.jsonl"
OUTPUT_DIR = "./yi6b-lora-optimized"
DEVICE_MAP = "auto"  # automatic device mapping

# Make sure the output directory exists
os.makedirs(OUTPUT_DIR, exist_ok=True)

# === Memory optimization ===
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"  # reduce memory fragmentation
torch.backends.cuda.cufft_plan_cache.clear()  # clear the CUDA cuFFT plan cache

# === Enhanced logging ===
def setup_logging(output_dir):
    """Configure logging to file and console, plus a TensorBoard writer."""
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)

    # File handler
    file_handler = logging.FileHandler(os.path.join(output_dir, "training.log"))
    file_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(file_handler)

    # Console handler
    console_handler = logging.StreamHandler()
    console_handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
    logger.addHandler(console_handler)

    # TensorBoard log directory
    tensorboard_log_dir = os.path.join(output_dir, "logs", datetime.now().strftime("%Y%m%d-%H%M%S"))
    os.makedirs(tensorboard_log_dir, exist_ok=True)

    tb_writer = None
    try:
        from torch.utils.tensorboard import SummaryWriter
        tb_writer = SummaryWriter(log_dir=tensorboard_log_dir)
        logger.info(f"TensorBoard日志目录: {tensorboard_log_dir}")
    except ImportError:
        logger.warning("TensorBoard未安装,可视化功能不可用")
    return logger, tb_writer

logger, tb_writer = setup_logging(OUTPUT_DIR)

# === Quantization config ===
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# === Load the model ===
logger.info("加载预训练模型...")
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    device_map=DEVICE_MAP,
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    attn_implementation="flash_attention_2"  # FlashAttention to reduce memory
)

# === Tokenizer ===
logger.info("加载分词器...")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
tokenizer.padding_side = "right"
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.pad_token_id = tokenizer.eos_token_id

# === Prepare the model for k-bit training ===
model = prepare_model_for_kbit_training(
    model,
    use_gradient_checkpointing=True  # gradient checkpointing to save memory
)

# === LoRA config, tuned for lower memory use ===
logger.info("配置LoRA...")
lora_config = LoraConfig(
    r=64,                                 # lower rank to reduce memory
    lora_alpha=32,                        # lower alpha accordingly
    target_modules=["q_proj", "v_proj"],  # fewer target modules
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)

# Log trainable parameter counts
trainable_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
total_params = sum(p.numel() for p in model.parameters())
logger.info(f"可训练参数: {trainable_params:,} / 总参数: {total_params:,} ({trainable_params / total_params:.2%})")

# === Load and preprocess the dataset ===
logger.info("加载和预处理数据集...")
dataset = load_dataset("json", data_files=DATASET_PATH, split="train")

def is_valid_text(example):
    text = example.get("text", "")
    return text is not None and len(text.strip()) > 200  # minimum length requirement

dataset = dataset.filter(is_valid_text)
logger.info(f"过滤后数据集大小: {len(dataset)} 条")

# Tokenization with dynamic padding to save memory
def tokenize_function(examples):
    tokenized = tokenizer(
        examples["text"],
        padding=True,      # dynamic padding
        truncation=True,
        max_length=1024,   # shorter context to reduce memory
    )
    # Causal LM: labels are a copy of input_ids
    tokenized["labels"] = tokenized["input_ids"].copy()
    return tokenized

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["text"],
    batch_size=64,  # smaller batches to reduce peak memory
    num_proc=4,     # fewer processes to limit memory overhead
)

# === Data collator ===
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False  # causal language modeling
)

# === Training arguments, tuned for lower memory use ===
report_to_list = ["tensorboard"] if tb_writer else []
training_args = TrainingArguments(
    output_dir=OUTPUT_DIR,
    per_device_train_batch_size=4,   # much smaller batch size
    gradient_accumulation_steps=4,   # keep the effective batch size via accumulation
    learning_rate=2e-5,
    num_train_epochs=3,
    logging_steps=50,
    save_strategy="steps",
    save_steps=500,
    bf16=True,
    optim="paged_adamw_32bit",
    report_to=report_to_list,
    warmup_ratio=0.05,
    gradient_checkpointing=True,
    fp16=False,
    max_grad_norm=0.3,               # lower gradient-clipping threshold
    remove_unused_columns=True,      # drop unused columns to save memory
    dataloader_num_workers=4,
    evaluation_strategy="steps",
    eval_steps=500,
    save_total_limit=2,              # keep fewer checkpoints
    logging_dir=os.path.join(OUTPUT_DIR, "logs"),
    load_best_model_at_end=True,
    ddp_find_unused_parameters=False,
    logging_first_step=True,
    group_by_length=True,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
)

# === GPU monitoring helper ===
def monitor_gpu():
    """Report current GPU memory usage."""
    if torch.cuda.is_available():
        device = torch.device("cuda")
        mem_alloc = torch.cuda.memory_allocated(device) / 1024 ** 3
        mem_reserved = torch.cuda.memory_reserved(device) / 1024 ** 3
        mem_total = torch.cuda.get_device_properties(device).total_memory / 1024 ** 3
        return {
            "allocated": f"{mem_alloc:.2f} GB",
            "reserved": f"{mem_reserved:.2f} GB",
            "total": f"{mem_total:.2f} GB",
            "utilization": f"{mem_alloc / mem_total * 100:.1f}%"
        }
    return {}

# === Trainer ===
eval_dataset = None
if len(tokenized_dataset) > 100:
    eval_dataset = tokenized_dataset.select(range(100))

trainer = Trainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=tokenized_dataset,
    eval_dataset=eval_dataset,
    data_collator=data_collator,
)

# === Pre-training validation ===
def validate_data_and_model():
    """Verify that the data and model are ready for training."""
    logger.info("\n=== 训练前验证 ===")
    # Check the sample format
    sample = tokenized_dataset[0]
    logger.info(f"样本键: {list(sample.keys())}")
    logger.info(f"input_ids 长度: {len(sample['input_ids'])}")

    # Build a single-sample test batch and move it to the model device
    test_batch = data_collator([sample])
    test_batch = {k: v.to(model.device) for k, v in test_batch.items()}

    # Forward-pass test
    model.train()
    outputs = model(**test_batch)
    loss_value = outputs.loss.item()
    logger.info(f"测试批次损失: {loss_value:.4f}")
    if tb_writer:
        tb_writer.add_scalar("debug/test_loss", loss_value, 0)

    # Backward-pass test
    outputs.loss.backward()
    logger.info("反向传播成功!")
    model.zero_grad()
    logger.info("验证完成,准备开始训练\n")

    # Record initial GPU usage
    gpu_status = monitor_gpu()
    logger.info(f"初始GPU状态: {gpu_status}")
    if tb_writer:
        tb_writer.add_text("system/initial_gpu", str(gpu_status), 0)

validate_data_and_model()

# === Custom callback: resource monitoring ===
class ResourceMonitorCallback(transformers.TrainerCallback):
    def __init__(self, tb_writer=None):
        self.tb_writer = tb_writer
        self.start_time = datetime.now()
        self.last_log_time = datetime.now()

    def on_step_end(self, args, state, control, **kwargs):
        current_time = datetime.now()
        time_diff = (current_time - self.last_log_time).total_seconds()
        # Log resource usage once a minute
        if time_diff > 60:
            self.last_log_time = current_time
            gpu_status = monitor_gpu()
            logger.info(f"Step {state.global_step} - GPU状态: {gpu_status}")
            cpu_percent = psutil.cpu_percent()
            mem = psutil.virtual_memory()
            logger.info(
                f"CPU使用率: {cpu_percent}%, 内存使用: {mem.used / 1024 ** 3:.2f}GB/{mem.total / 1024 ** 3:.2f}GB")
            if self.tb_writer:
                if torch.cuda.is_available():
                    device = torch.device("cuda")
                    mem_alloc = torch.cuda.memory_allocated(device) / 1024 ** 3
                    self.tb_writer.add_scalar("system/gpu_mem", mem_alloc, state.global_step)
                self.tb_writer.add_scalar("system/cpu_usage", cpu_percent, state.global_step)
                self.tb_writer.add_scalar("system/ram_usage", mem.used / 1024 ** 3, state.global_step)

    def on_log(self, args, state, control, logs=None, **kwargs):
        """Forward training metrics to TensorBoard."""
        if self.tb_writer and logs is not None:
            for metric_name, metric_value in logs.items():
                if "loss" in metric_name or "lr" in metric_name or "grad_norm" in metric_name:
                    self.tb_writer.add_scalar(f"train/{metric_name}", metric_value, state.global_step)

    def on_train_end(self, args, state, control, **kwargs):
        """Log total wall-clock time when training ends."""
        training_time = datetime.now() - self.start_time
        logger.info(f"训练总时间: {training_time}")
        if self.tb_writer:
            self.tb_writer.add_text("system/total_time", str(training_time))

trainer.add_callback(ResourceMonitorCallback(tb_writer=tb_writer))

# === Memory cleanup helper ===
def clear_memory():
    """Free Python and GPU caches."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
        torch.cuda.ipc_collect()
    logger.info("内存清理完成")

# === Start training ===
try:
    logger.info("开始训练...")
    # Train in chunks to reduce peak memory
    num_samples = len(tokenized_dataset)
    chunk_size = 1000  # 1000 samples at a time
    for i in range(0, num_samples, chunk_size):
        end_idx = min(i + chunk_size, num_samples)
        logger.info(f"训练样本 {i} 到 {end_idx - 1} / {num_samples}")
        chunk_dataset = tokenized_dataset.select(range(i, end_idx))
        trainer.train_dataset = chunk_dataset
        trainer.train()
        clear_memory()

    # Save training metrics
    metrics = trainer.evaluate()
    trainer.log_metrics("train", metrics)
    trainer.save_metrics("train", metrics)

    # Save the best model
    trainer.save_model(OUTPUT_DIR)
    tokenizer.save_pretrained(OUTPUT_DIR)
    logger.info(f"训练完成! 模型保存在: {OUTPUT_DIR}")

    if tb_writer:
        for metric_name, metric_value in metrics.items():
            tb_writer.add_scalar(f"final/{metric_name}", metric_value)
        tb_writer.close()

except Exception as e:
    logger.error(f"训练出错: {e}")
    import traceback
    logger.error(traceback.format_exc())

    # Retry with a much smaller dataset
    logger.info("\n尝试更小批量训练...")
    small_dataset = tokenized_dataset.select(range(50))
    trainer.train_dataset = small_dataset
    trainer.train()
    trainer.save_model(f"{OUTPUT_DIR}_small")
    tokenizer.save_pretrained(f"{OUTPUT_DIR}_small")
    logger.info(f"小批量训练完成! 模型保存在: {OUTPUT_DIR}_small")
    if tb_writer:
        tb_writer.add_text("error/exception", traceback.format_exc())

clear_memory()

# === Post-training validation ===
def validate_final_model():
    """Validate the trained model."""
    logger.info("\n=== 训练后验证 ===")
    from peft import PeftModel

    # Load only the base model
    base_model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME,
        device_map=DEVICE_MAP,
        quantization_config=quant_config,
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
        load_in_4bit=True
    )
    # Attach the LoRA adapter; do NOT merge, run inference on peft_model directly
    peft_model = PeftModel.from_pretrained(base_model, OUTPUT_DIR)
    peft_model.eval()

    # Generation test
    prompt = "中国的首都是"
    inputs = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
    outputs = peft_model.generate(
        **inputs,
        max_new_tokens=50,
        temperature=0.7,
        top_p=0.9,
        repetition_penalty=1.2,
        do_sample=True
    )
    generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
    logger.info(f"提示: {prompt}")
    logger.info(f"生成结果: {generated}")
    if tb_writer:
        tb_writer.add_text("validation/sample", f"提示: {prompt}\n生成: {generated}")

    # Broader generation tests
    test_prompts = [
        "人工智能的未来发展趋势是",
        "如何学习深度学习?",
        "写一个关于太空探索的短故事:"
    ]
    for i, test_prompt in enumerate(test_prompts):
        inputs = tokenizer(test_prompt, return_tensors="pt").to(peft_model.device)
        outputs = peft_model.generate(
            **inputs,
            max_new_tokens=100,
            temperature=0.7,
            top_p=0.9,
            repetition_penalty=1.2,
            do_sample=True
        )
        generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
        logger.info(f"\n提示: {test_prompt}\n生成: {generated_text}\n{'=' * 50}")
        if tb_writer:
            tb_writer.add_text(f"validation/test_{i}", f"提示: {test_prompt}\n生成: {generated_text}")

    logger.info("验证完成")

validate_final_model()

# Close the TensorBoard writer
if tb_writer:
    tb_writer.close()
    logger.info("TensorBoard日志已关闭")
```

2025-07-13 22:58:30,094 - INFO - 训练完成!
模型保存在: ./yi6b-lora-optimized
2025-07-13 22:58:30,351 - INFO - 内存清理完成
2025-07-13 22:58:30,351 - INFO -
=== 训练后验证 ===

```
Loading checkpoint shards: 100%|█████████████████████████████████████████████| 2/2 [00:34<00:00, 17.33s/it]
/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/peft/tuners/lora/bnb.py:213: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
  warnings.warn(
Traceback (most recent call last):
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/train_lora.py", line 477, in <module>
    validate_final_model()
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/train_lora.py", line 432, in validate_final_model
    outputs = merged_model.generate(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 1520, in generate
    return self.sample(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/generation/utils.py", line 2617, in sample
    outputs = self(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/accelerate/hooks.py", line 164, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1183, in forward
    outputs = self.model(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/accelerate/hooks.py", line 164, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 1070, in forward
    layer_outputs = decoder_layer(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/accelerate/hooks.py", line 164, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 798, in forward
    hidden_states, self_attn_weights, present_key_value = self.self_attn(
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/accelerate/hooks.py", line 164, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py", line 694, in forward
    key_states = self.k_proj(hidden_states)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/bitsandbytes/nn/modules.py", line 490, in forward
    return bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state).to(inp_dtype)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/bitsandbytes/autograd/_functions.py", line 393, in matmul_4bit
    return MatMul4Bit.apply(A, B, out, bias, quant_state)
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/torch/autograd/function.py", line 575, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/vipuser/ai_writer_project_final_with_fixed_output_ui/.venv/lib/python3.11/site-packages/bitsandbytes/autograd/_functions.py", line 322, in forward
    output = torch.nn.functional.linear(A, F.dequantize_4bit(B, quant_state).to(A.dtype).t(), bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (3x4096 and 512x4096)
```


```python
import re
import torch
import torch.nn as nn
import pandas as pd
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import TrainingArguments, AutoTokenizer
from peft import LoraConfig, get_peft_model
from lora_plus import LoraPlusTrainer  # make sure the lora_plus package is installed
from swanlab.integration.transformers import SwanLabCallback
import swanlab
from tqdm import tqdm

# Initialize SwanLab
swanlab.init("Finetune-Llama3.2-with-Encoder")
swanlab_callback = SwanLabCallback(
    project="Finetune-Llama3.2-with-Encoder",
    experiment_name="Finetune-Llama3.2-with-Encoder"
)

# Constants
CHEM_FORMULA_SIZE = r"([A-Z][a-z]*)([0-9]*)"
VALID_ELEMENTS = ["C", "N", "P", "O", "S", "Si", "I", "H", "Cl", "F", "Br", "B",
                  "Se", "Fe", "Co", "As", "K", "Na"]
element_to_idx = {elem: idx for idx, elem in enumerate(VALID_ELEMENTS)}

# Chemical formula -> dense element-count vector
def formula_to_dense(chem_formula: str) -> torch.Tensor:
    dense_vec = torch.zeros(len(VALID_ELEMENTS), dtype=torch.float32)
    for chem_symbol, num_str in re.findall(CHEM_FORMULA_SIZE, chem_formula):
        num = 1 if num_str == "" else int(num_str)
        if chem_symbol in element_to_idx:
            dense_vec[element_to_idx[chem_symbol]] += num
    return dense_vec

# Sinusoidal positional encoding (PyTorch implementation)
def positional_encoding(max_position: int, d_model: int, min_freq: float = 1e-4) -> torch.Tensor:
    position = torch.arange(max_position).unsqueeze(1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-torch.log(torch.tensor(min_freq)) / d_model))
    pos_enc = torch.zeros(max_position, d_model)
    pos_enc[:, 0::2] = torch.sin(position * div_term)
    pos_enc[:, 1::2] = torch.cos(position * div_term)
    return pos_enc

# Precomputed positional-encoding matrix
P = positional_encoding(2000000, 256)
dimn = 256  # must match the positional-encoding dimension

# Encode mass spectra into fixed-size feature matrices
def encode_spectra(rag_tensor: list, P: torch.Tensor, dimn: int) -> torch.Tensor:
    encoded_list = []
    for mz_list, intensity_list in rag_tensor:
        # Base feature matrix [m/z, intensity]
        base_features = torch.tensor([mz_list, intensity_list], dtype=torch.float32).T
        # Positional-encoding features indexed by integer m/z
        pos_enc = torch.stack([P[min(int(mz), P.size(0) - 1)] for mz in mz_list])
        # Combine all features [m/z, intensity, pos_enc...]
        features = torch.cat([base_features, pos_enc], dim=1)
        # Pad/truncate to a fixed length of 501 rows
        if features.size(0) < 501:
            padding = torch.zeros(501 - features.size(0), features.size(1))
            features = torch.cat([features, padding], dim=0)
        else:
            features = features[:501]
        encoded_list.append(features)
    return torch.stack(encoded_list)

# Preprocess raw spectrum strings
def preprocess_spectra(df: pd.DataFrame) -> list:
    spectra_list = []
    for idx, row in tqdm(df.iterrows(), total=len(df)):
        spectrum_str = row['Spectrum']
        total_mass = row['Total Exact Mass']
        # Parse the "mz:intensity" pairs
        mz_list, intensity_list = [], []
        for pair in spectrum_str.split():
            mz, intensity = pair.split(':')
            mz_list.append(float(mz))
            intensity_list.append(float(intensity))
        # Append the total exact mass as an extra peak with zero intensity
        mz_list.append(total_mass)
        intensity_list.append(0.0)
        # Round to two decimals
        mz_list = [round(mz, 2) for mz in mz_list]
        intensity_list = [round(intensity, 2) for intensity in intensity_list]
        spectra_list.append([mz_list, intensity_list])
    return spectra_list

# Custom dataset
class MolecularDataset(Dataset):
    def __init__(self, csv_path: str, tokenizer: AutoTokenizer, max_seq_len: int = 512):
        self.df = pd.read_csv(csv_path)
        self.tokenizer = tokenizer
        self.max_seq_len = max_seq_len
        # Preprocess and encode the spectra once up front
        spectra_data = preprocess_spectra(self.df)
        self.spec_encoded = encode_spectra(spectra_data, P, dimn)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx) -> dict:
        # Formula vector
        formula_vec = formula_to_dense(self.df.iloc[idx]['Molecular Formula'])
        # Spectrum matrix
        spec_matrix = self.spec_encoded[idx]
        # SELFIES encoding
        selfies_str = self.df.iloc[idx]['SELFIES']
        encoding = self.tokenizer(
            selfies_str,
            add_special_tokens=True,
            padding='max_length',
            truncation=True,
            return_attention_mask=True,
            max_length=self.max_seq_len,
            return_tensors='pt'
        )
        return {
            'formula_vec': formula_vec,
            'spec_matrix': spec_matrix,
            'input_ids': encoding['input_ids'].squeeze(0),
            'attention_mask': encoding['attention_mask'].squeeze(0)
        }

# Load the tokenizer and build the dataset
tokenizer = AutoTokenizer.from_pretrained('/root/workspace/checkpoint-2500')
dataset = MolecularDataset('/root/workspace/SELFIES-SFT.csv', tokenizer)
data_collator = transformers.DataCollatorForSeq2Seq(tokenizer=tokenizer)

# Custom model: Llama with two extra Transformer encoders
class LlamaWithEncoder(nn.Module):
    def __init__(self, base_model, encoder1_dim=18, encoder2_dim=256, hidden_dim=512):
        super().__init__()
        self.base_model = base_model
        self.config = base_model.config  # required by the Trainer/PEFT machinery

        # First Transformer encoder (formula vectors)
        encoder1_layer = nn.TransformerEncoderLayer(
            d_model=encoder1_dim, nhead=3, dim_feedforward=hidden_dim, batch_first=True)
        self.encoder1 = nn.TransformerEncoder(encoder1_layer, num_layers=2)

        # Second Transformer encoder (spectrum matrices)
        encoder2_layer = nn.TransformerEncoderLayer(
            d_model=encoder2_dim, nhead=8, dim_feedforward=hidden_dim, batch_first=True)
        self.encoder2 = nn.TransformerEncoder(encoder2_layer, num_layers=2)

        # Projection layers into the LM hidden size
        self.proj1 = nn.Linear(encoder1_dim, base_model.config.hidden_size)
        self.proj2 = nn.Linear(encoder2_dim, base_model.config.hidden_size)
        # Fusion layer
        self.fusion = nn.Linear(2 * base_model.config.hidden_size, base_model.config.hidden_size)

    def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):
        return self.base_model.prepare_inputs_for_generation(
            input_ids, past_key_values=past_key_values, **kwargs)

    def forward(self, input_ids=None, attention_mask=None,
                encoder1_inputs=None, encoder2_inputs=None, labels=None,
                past_key_values=None, output_attentions=None,
                output_hidden_states=None, return_dict=None, **kwargs):
        # Encode the formula and spectrum inputs
        enc1_out = self.encoder1(encoder1_inputs).mean(dim=1)
        enc1_proj = self.proj1(enc1_out)
        enc2_out = self.encoder2(encoder2_inputs).mean(dim=1)
        enc2_proj = self.proj2(enc2_out)

        # Fuse the two encoder outputs
        fused = self.fusion(torch.cat([enc1_proj, enc2_proj], dim=1)).unsqueeze(1)

        # Token embeddings; blend the fused features into the first token
        embeddings = self.base_model.get_input_embeddings()(input_ids)
        if embeddings.size(1) > 0:
            embeddings[:, 0, :] = (embeddings[:, 0, :] + fused[:, 0, :]) / 2

        # Call the base model with the modified embeddings
        return self.base_model(
            inputs_embeds=embeddings,
            attention_mask=attention_mask,
            labels=labels,
            past_key_values=past_key_values,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
            **kwargs
        )

# Load the pretrained base model and wrap it
base_model = transformers.AutoModelForCausalLM.from_pretrained(
    "/root/workspace/checkpoint-2500",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
model = LlamaWithEncoder(base_model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules="all-linear",
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # e.g. ~0.3% of parameters trainable

training_args = TrainingArguments(
    output_dir="./llama3.2-SELFIES-SFT",
    per_device_train_batch_size=16,
    gradient_accumulation_steps=16,
    num_train_epochs=10,
    learning_rate=5.0e-05,
    optim="adamw_torch",
    logging_steps=10,
    bf16=True,
    save_strategy="steps",
    lr_scheduler_type='cosine',
    max_grad_norm=1.0,
    save_steps=2000,
    warmup_steps=0
)

class CustomTrainer(LoraPlusTrainer):
    def get_train_dataloader(self) -> DataLoader:
        """Return the training dataloader with shuffling enabled."""
        return DataLoader(
            self.train_dataset,
            batch_size=self.args.train_batch_size,
            shuffle=True,
            collate_fn=self.data_collator,
            drop_last=False,
        )

# Train with the customized trainer
lp_trainer = CustomTrainer(
    model,
    training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    callbacks=[swanlab_callback],
)
lp_trainer.train()
lp_trainer.save_model(output_dir='./llama3.2-SELFIES-SFT')
```

Is the dataset construction in this code correct? My goal is to take mass-spectrum and chemical-formula features as input and autoregressively generate SELFIES; at inference time no string is fed in at all, only the spectrum and formula features.


{"openapi":"3.1.0","info":{"title":"Auto FA Task Center","version":"0.1.0"},"paths":{"/jobs/":{"get":{"tags":["Jobs"],"summary":"Read Jobs","operationId":"read_jobs_jobs__get","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/Job"},"type":"array","title":"Response Read Jobs Jobs Get"}}}}}}},"/jobs/remove/":{"post":{"tags":["Jobs"],"summary":"Remove Job Use Id","operationId":"remove_job_use_id_jobs_remove__post","parameters":[{"name":"job_id","in":"query","required":true,"schema":{"type":"string","title":"Job Id"}}],"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"type":"boolean","title":"Response Remove Job Use Id Jobs Remove Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/jobs/Add/update_machine_states":{"post":{"tags":["Jobs"],"summary":"Add Update Machine States Job","operationId":"add_update_machine_states_job_jobs_Add_update_machine_states_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Body_add_update_machine_states_job_jobs_Add_update_machine_states_post"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Job"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/jobs/Add/down_load_unit_test_detail":{"post":{"tags":["Jobs"],"summary":"Add Down Load Unit Test Detail Job","operationId":"add_down_load_unit_test_detail_job_jobs_Add_down_load_unit_test_detail_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Body_add_down_load_unit_test_detail_job_jobs_Add_down_load_unit_test_detail_post"}}},"required":true},"responses":{"200":{"description":"Successful 
Response","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Job"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/":{"get":{"tags":["Scheduler"],"summary":"App Stage","operationId":"app_stage__get","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}}}}},"/start_auto_run":{"get":{"tags":["Scheduler"],"summary":"App Start","operationId":"app_start_start_auto_run_get","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}}}}},"/pause_auto_run":{"get":{"tags":["Scheduler"],"summary":"App Pause","operationId":"app_pause_pause_auto_run_get","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}}}}},"/stop_auto_run":{"get":{"tags":["Scheduler"],"summary":"App Stop","operationId":"app_stop_stop_auto_run_get","parameters":[{"name":"wait","in":"query","required":false,"schema":{"type":"boolean","default":true,"title":"Wait"}}],"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records?system_fail={system_fail}":{"post":{"tags":["Records"],"summary":"Get Fail Records","operationId":"get_fail_records_records_system_fail__system_fail__post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/UnitTestDetail"},"type":"array","title":"Response Get Fail Records Records System Fail System Fail Post"}}}},"422":{"description":"Validation 
Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records/":{"post":{"tags":["Records"],"summary":"Get Fail Records","operationId":"get_fail_records_records__post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/UnitTestDetail"},"type":"array","title":"Response Get Fail Records Records Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records/download?system_fail={system_fail}":{"post":{"tags":["Records"],"summary":"Get Fail Records","operationId":"get_fail_records_records_download_system_fail__system_fail__post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records/download":{"post":{"tags":["Records"],"summary":"Get Fail Records","operationId":"get_fail_records_records_download_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records/delete":{"post":{"tags":["Records"],"summary":"Delete Fail 
Records","operationId":"delete_fail_records_records_delete_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"type":"integer","title":"Response Delete Fail Records Records Delete Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/records/Add":{"post":{"tags":["Records"],"summary":"Add Records","operationId":"add_records_records_Add_post","requestBody":{"content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/UnitTestDetail"},"type":"array","title":"Records"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/UnitTestDetail"},"type":"array","title":"Response Add Records Records Add Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/fa_items/Add":{"post":{"tags":["FA_Items"],"summary":"Fa Items Add","operationId":"fa_items_add_fa_items_Add_post","requestBody":{"content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/FailFAItem"},"type":"array","title":"Fail Items"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"type":"boolean","title":"Response Fa Items Add Fa Items Add Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/fa_items/read":{"post":{"tags":["FA_Items"],"summary":"Fa Items Read","operationId":"fa_items_read_fa_items_read_post","responses":{"200":{"description":"Successful 
Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/FailFAItem"},"type":"array","title":"Response Fa Items Read Fa Items Read Post"}}}}}}},"/fa_items/import":{"post":{"tags":["FA_Items"],"summary":"Import Mapping List","operationId":"import_mapping_list_fa_items_import_post","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/FailFAItem"},"type":"array","title":"Response Import Mapping List Fa Items Import Post"}}}}}}},"/dt_items/read":{"post":{"tags":["DT_Items"],"summary":"Dt Items Read","operationId":"dt_items_read_dt_items_read_post","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/ErrorCode"},"type":"array","title":"Response Dt Items Read Dt Items Read Post"}}}}}}},"/dt_items/Add":{"post":{"tags":["DT_Items"],"summary":"Dt Items Add","operationId":"dt_items_add_dt_items_Add_post","requestBody":{"content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/ErrorCode"},"type":"array","title":"Dt Items"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"type":"boolean","title":"Response Dt Items Add Dt Items Add Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/dt_items/import":{"post":{"tags":["DT_Items"],"summary":"Import Mapping List","operationId":"import_mapping_list_dt_items_import_post","responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/ErrorCode"},"type":"array","title":"Response Import Mapping List Dt Items Import Post"}}}}}}},"/machine_states/":{"post":{"tags":["MachineState"],"summary":"Get Machine 
States","operationId":"get_machine_states_machine_states__post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/MachineStateFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"items":{"$ref":"#/components/schemas/MachineStateDataQuery"},"type":"array","title":"Response Get Machine States Machine States Post"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/jobs/Add/down_load_raw_data_for_tossing":{"post":{"tags":["Jobs"],"summary":"Add Down Load Raw Data For Tossing Job","operationId":"add_down_load_raw_data_for_tossing_job_jobs_Add_down_load_raw_data_for_tossing_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Body_add_down_load_raw_data_for_tossing_job_jobs_Add_down_load_raw_data_for_tossing_post"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Job"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/jobs/Add/down_load_raw_data_for_auto_jmp":{"post":{"tags":["Jobs"],"summary":"Add Down Load Raw Data For Auto Jmp Job","operationId":"add_down_load_raw_data_for_auto_jmp_job_jobs_Add_down_load_raw_data_for_auto_jmp_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Body_add_down_load_raw_data_for_auto_jmp_job_jobs_Add_down_load_raw_data_for_auto_jmp_post"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Job"}}}},"422":{"description":"Validation 
Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/raw_data_headers":{"get":{"tags":["InsightRawData"],"summary":"Read Raw Data Headers","operationId":"read_raw_data_headers_raw_data_headers_get","parameters":[{"name":"skip","in":"query","required":false,"schema":{"type":"integer","default":0,"title":"Skip"}},{"name":"limit","in":"query","required":false,"schema":{"type":"integer","default":100,"title":"Limit"}}],"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/InsightRawDataResponse"},"title":"Response Read Raw Data Headers Raw Data Headers Get"}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/raw_data":{"get":{"tags":["InsightRawData"],"summary":"Read Raw Data","operationId":"read_raw_data_raw_data_get","parameters":[{"name":"skip","in":"query","required":false,"schema":{"type":"integer","default":0,"title":"Skip"}},{"name":"limit","in":"query","required":false,"schema":{"type":"integer","default":100,"title":"Limit"}}],"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}},"/tossing":{"post":{"tags":["Tossing"],"summary":"Get Tossing","description":"获取所有ExchangeUser","operationId":"get_tossing_tossing_post","requestBody":{"content":{"application/json":{"schema":{"$ref":"#/components/schemas/StationTypeFilterDict"}}},"required":true},"responses":{"200":{"description":"Successful Response","content":{"application/json":{"schema":{}}}},"422":{"description":"Validation 
Error","content":{"application/json":{"schema":{"$ref":"#/components/schemas/HTTPValidationError"}}}}}}}},"components":{"schemas":{"Body_add_down_load_raw_data_for_auto_jmp_job_jobs_Add_down_load_raw_data_for_auto_jmp_post":{"properties":{"trigger":{"$ref":"#/components/schemas/IntervalTriggerArgs"},"data_setting":{"$ref":"#/components/schemas/DownLoadRawDataJobSettingForTossing"}},"type":"object","required":["trigger","data_setting"],"title":"Body_add_down_load_raw_data_for_auto_jmp_job_jobs_Add_down_load_raw_data_for_auto_jmp_post"},"Body_add_down_load_raw_data_for_tossing_job_jobs_Add_down_load_raw_data_for_tossing_post":{"properties":{"trigger":{"$ref":"#/components/schemas/IntervalTriggerArgs"},"data_setting":{"$ref":"#/components/schemas/DownLoadRawDataJobSettingForTossing"}},"type":"object","required":["trigger","data_setting"],"title":"Body_add_down_load_raw_data_for_tossing_job_jobs_Add_down_load_raw_data_for_tossing_post"},"Body_add_down_load_unit_test_detail_job_jobs_Add_down_load_unit_test_detail_post":{"properties":{"trigger":{"$ref":"#/components/schemas/IntervalTriggerArgs"},"data_setting":{"$ref":"#/components/schemas/TestUnitDetailsJobSetting"}},"type":"object","required":["trigger","data_setting"],"title":"Body_add_down_load_unit_test_detail_job_jobs_Add_down_load_unit_test_detail_post"},"Body_add_update_machine_states_job_jobs_Add_update_machine_states_post":{"properties":{"trigger":{"$ref":"#/components/schemas/IntervalTriggerArgs"},"data_setting":{"$ref":"#/components/schemas/HiveMachineStateJobSetting"}},"type":"object","required":["trigger","data_setting"],"title":"Body_add_update_machine_states_job_jobs_Add_update_machine_states_post"},"CutTimeWeekOffset":{"properties":{"D1":{"type":"integer","title":"D1","default":0},"D2":{"type":"integer","title":"D2","default":0},"D3":{"type":"integer","title":"D3","default":0},"D4":{"type":"integer","title":"D4","default":0},"D5":{"type":"integer","title":"D5","default":0},"D6":{"type":"integer","title":"
D6","default":0},"D7":{"type":"integer","title":"D7","default":0}},"type":"object","title":"CutTimeWeekOffset"},"DownLoadRawDataJobSettingForTossing":{"properties":{"setting_index":{"type":"integer","title":"Setting Index","default":0},"ReportCommand":{"type":"string","title":"Reportcommand","default":"DownLoadRawData"},"report_id":{"type":"string","title":"Report Id","default":""},"FixedStartTime":{"type":"string","format":"date-time","title":"Fixedstarttime"},"FixedEndTime":{"type":"string","format":"date-time","title":"Fixedendtime"},"StartHourAnchor":{"type":"string","enum":["Start","End"],"title":"Starthouranchor","default":"Start"},"StartHourOffset":{"type":"integer","title":"Starthouroffset","default":0},"EndHourOffset":{"anyOf":[{"type":"integer"},{"type":"null"}],"title":"Endhouroffset"},"shift_hour":{"type":"integer","title":"Shift Hour","default":7},"shift_week_offset":{"$ref":"#/components/schemas/ShiftWeekOffset"},"cut_time_week_offset":{"$ref":"#/components/schemas/CutTimeWeekOffset"},"SITE":{"type":"string","title":"Site","default":"LZSH"},"SITENAME":{"items":{"type":"string"},"type":"array","title":"Sitename","default":[]},"PROJECT":{"type":"string","title":"Project","default":""},"PROJECT_CODE":{"type":"string","title":"Project Code","default":"D48"},"STATIONTYPE":{"items":{"type":"string"},"type":"array","title":"Stationtype","default":[]},"product_code":{"items":{"type":"string"},"type":"array","title":"Product Code","default":[]},"station_type_list":{"items":{"items":{"type":"string"},"type":"array"},"type":"array","title":"Station Type List","default":[]},"line_list":{"items":{"type":"string"},"type":"array","title":"Line List","default":[]},"mail_recipients":{"$ref":"#/components/schemas/ReportRecipients","default":{"bcc":["[email protected]","[email 
protected]"]}},"parametricType":{"items":{"$ref":"#/components/schemas/ParametricType"},"type":"array","title":"Parametrictype","default":[]},"attributes":{"items":{"type":"string"},"type":"array","title":"Attributes","default":[]},"modules":{"items":{"type":"string"},"type":"array","title":"Modules","default":[]}},"type":"object","required":["shift_week_offset","cut_time_week_offset"],"title":"DownLoadRawDataJobSettingForTossing"},"ErrorCode":{"properties":{"Error_ID":{"type":"string","title":"Error Id","default":""},"Station":{"type":"string","title":"Station"},"Error_description":{"type":"string","title":"Error Description"},"Error_type":{"type":"string","title":"Error Type"},"Component":{"type":"string","title":"Component"},"Sub_Component":{"type":"string","title":"Sub Component"},"Error_Index":{"type":"integer","title":"Error Index"},"Action_Index":{"type":"integer","title":"Action Index"},"RiskLevel":{"type":"string","enum":["High","Medium","Low"],"title":"Risklevel","default":"Low"},"ICT_DRI":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Ict Dri","default":""},"Allie_DRI":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Allie Dri","default":""},"Code":{"type":"string","title":"Code"},"category":{"type":"string","title":"Category"},"Error_description_ch":{"type":"string","title":"Error Description Ch"},"Issue_description":{"type":"string","title":"Issue Description"},"Root_cause":{"type":"string","title":"Root Cause"},"Analysis_step":{"type":"string","title":"Analysis Step"},"Short_term":{"type":"string","title":"Short Term"},"Long_term":{"type":"string","title":"Long Term"},"Action":{"type":"string","title":"Action"},"images_count":{"type":"integer","title":"Images 
Count"}},"type":"object","required":["Station","Error_description","Error_type","Component","Sub_Component","Error_Index","Action_Index","Code","category","Error_description_ch","Issue_description","Root_cause","Analysis_step","Short_term","Long_term","Action","images_count"],"title":"ErrorCode"},"FactoryMetricsFilterData":{"properties":{"station_type":{"items":{"type":"string"},"type":"array","title":"Station Type"},"line_type":{"items":{"type":"string"},"type":"array","title":"Line Type"},"product":{"items":{"type":"string"},"type":"array","title":"Product"},"site":{"items":{"type":"string"},"type":"array","title":"Site"},"apple_code":{"items":{"type":"string"},"type":"array","title":"Apple Code"}},"type":"object","title":"FactoryMetricsFilterData"},"FailFAItem":{"properties":{"failItemId":{"type":"string","title":"Failitemid","default":""},"equipmentType":{"type":"string","title":"Equipmenttype","default":""},"displayName":{"type":"string","title":"Displayname","default":""},"MainPart":{"type":"string","title":"Mainpart","default":""},"SmallPart":{"type":"string","title":"Smallpart","default":""},"test":{"type":"string","title":"Test","default":""},"message":{"type":"string","title":"Message","default":""},"RiskLevel":{"type":"string","enum":["High","Medium","Low"],"title":"Risklevel","default":"Low"},"ICT_DRI":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Ict Dri","default":""},"Allie_DRI":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Allie Dri","default":""},"image_count":{"type":"integer","title":"Image Count","default":0},"NG":{"type":"string","title":"Ng","default":""},"OK":{"type":"string","title":"Ok","default":""},"Issue_description":{"type":"string","title":"Issue Description","default":""},"Analysis_step":{"type":"string","title":"Analysis Step","default":""},"Root_cause":{"type":"string","title":"Root Cause","default":""},"Short_term":{"type":"string","title":"Short Term","default":""},"Long_term":{"type":"string","title":"Long 
Term","default":""},"Action":{"type":"string","title":"Action","default":""}},"type":"object","title":"FailFAItem"},"HTTPValidationError":{"properties":{"detail":{"items":{"$ref":"#/components/schemas/ValidationError"},"type":"array","title":"Detail"}},"type":"object","title":"HTTPValidationError"},"HiveMachineStateJobSetting":{"properties":{"setting_index":{"type":"integer","title":"Setting Index","default":0},"ReportCommand":{"type":"string","title":"Reportcommand","default":"HiveMachineState"},"StartHourOffset":{"type":"integer","title":"Starthouroffset","default":0},"EndHourOffset":{"anyOf":[{"type":"integer"},{"type":"null"}],"title":"Endhouroffset"},"shift_hour":{"type":"integer","title":"Shift Hour","default":7},"shift_week_offset":{"$ref":"#/components/schemas/ShiftWeekOffset"},"cut_time_week_offset":{"$ref":"#/components/schemas/CutTimeWeekOffset"},"kind":{"type":"string","enum":["MACHINESTATE","ERRORDATA","MACHINEDATA"],"title":"Kind","default":"MACHINESTATE"},"filter_data":{"$ref":"#/components/schemas/FactoryMetricsFilterData"}},"type":"object","required":["shift_week_offset","cut_time_week_offset","filter_data"],"title":"HiveMachineStateJobSetting"},"InitStation":{"properties":{"id":{"type":"integer","title":"Id"},"active":{"type":"boolean","title":"Active"},"last_updated_at":{"type":"string","format":"date-time","title":"Last Updated At"},"last_updated_by":{"type":"string","title":"Last Updated By"},"authorized":{"type":"boolean","title":"Authorized"},"publisher_id":{"type":"string","title":"Publisher Id"},"product":{"type":"string","title":"Product"},"build_type":{"type":"string","title":"Build Type"},"site":{"type":"string","title":"Site"},"building":{"type":"string","title":"Building"},"line":{"type":"string","title":"Line"},"line_name":{"type":"string","title":"Line Name","default":""},"line_type":{"type":"string","title":"Line Type"},"station_type":{"type":"string","title":"Station 
Type"},"vendor":{"type":"string","title":"Vendor"},"sequence":{"type":"integer","title":"Sequence"},"instance":{"type":"integer","title":"Instance"},"ip_address":{"type":"string","title":"Ip Address"},"registered_mac_addresses":{"type":"string","title":"Registered Mac Addresses"},"mac_addresses_non_usb":{"type":"string","title":"Mac Addresses Non Usb"},"mac_addresses_usb":{"type":"string","title":"Mac Addresses Usb"},"gateway_ip_address":{"type":"string","title":"Gateway Ip Address"},"rfid":{"type":"string","title":"Rfid"},"groundhog_id":{"type":"string","title":"Groundhog Id"},"station_id":{"type":"string","title":"Station Id","default":""},"entity":{"type":"string","title":"Entity","default":""}},"type":"object","required":["id","active","last_updated_at","last_updated_by","authorized","publisher_id","product","build_type","site","building","line","line_type","station_type","vendor","sequence","instance","ip_address","registered_mac_addresses","mac_addresses_non_usb","mac_addresses_usb","gateway_ip_address","rfid","groundhog_id"],"title":"InitStation"},"InsightModuleDataResponse":{"properties":{"ModuleName":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Modulename","default":""},"SerialNumber":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Serialnumber","default":""},"Vendor":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Vendor","default":""},"InfoCode":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Infocode","default":""},"LotCode":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Lotcode","default":""},"DateCode":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Datecode","default":""},"PartNumber":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Partnumber","default":""}},"type":"object","title":"InsightModuleDataResponse"},"InsightRawDataResponse":{"properties":{"SeriesID":{"type":"string","titl
e":"Seriesid"},"Site":{"type":"string","title":"Site","default":"PGPD"},"Product":{"type":"string","title":"Product","default":"D48"},"SerialNumber":{"type":"string","title":"Serialnumber","default":"C7CHDF004FR0000NBC"},"SpecialBuildName":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Specialbuildname","default":""},"SpecialBuildDescription":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Specialbuilddescription","default":""},"UnitNumber":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"null"}],"title":"Unitnumber","default":""},"StationID":{"type":"string","title":"Stationid","default":"PGPD_F03-4FGC-25_1_MMV"},"Line":{"type":"string","title":"Line","default":""},"StationType":{"type":"string","title":"Stationtype","default":""},"StationInstanceNumber":{"type":"integer","title":"Stationinstancenumber","default":0},"TestPassFailStatus":{"type":"string","title":"Testpassfailstatus","default":"PASS"},"StartTime":{"type":"string","format":"date-time","title":"Starttime","default":"45695.111875"},"EndTime":{"type":"string","format":"date-time","title":"Endtime","default":"45695.1118865741"},"Version":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Version","default":"BZ-03.41-01.02-03.21-NN.NN"},"ListOfFailingTests":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Listoffailingtests","default":""},"test_data":{"items":{"$ref":"#/components/schemas/InsightRawDataTestDataResponse"},"type":"array","title":"Test 
Data"},"module":{"items":{"$ref":"#/components/schemas/InsightModuleDataResponse"},"type":"array","title":"Module"}},"type":"object","required":["test_data","module"],"title":"InsightRawDataResponse"},"InsightRawDataTestDataResponse":{"properties":{"KeyType":{"type":"string","enum":["PARAMETRIC","ATTRIBUTE","HEADER","MODULE"],"title":"Keytype","default":"PARAMETRIC"},"KeyName":{"type":"string","title":"Keyname","default":""},"Value":{"type":"number","title":"Value","default":0}},"type":"object","title":"InsightRawDataTestDataResponse"},"IntervalTriggerArgs":{"properties":{"weeks":{"type":"integer","title":"Weeks","default":0},"days":{"type":"integer","title":"Days","default":0},"hours":{"type":"integer","title":"Hours","default":0},"minutes":{"type":"integer","title":"Minutes","default":0},"seconds":{"type":"integer","title":"Seconds","default":0},"start_date":{"anyOf":[{"type":"string","format":"date-time"},{"type":"string"},{"type":"null"}],"title":"Start Date"},"end_date":{"anyOf":[{"type":"string","format":"date-time"},{"type":"string"},{"type":"null"}],"title":"End Date"},"timezone":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Timezone","default":"Asia/Shanghai"},"jitter":{"anyOf":[{"type":"integer"},{"type":"null"}],"title":"Jitter"}},"type":"object","title":"IntervalTriggerArgs"},"Job":{"properties":{"id":{"type":"string","title":"Id"},"next_run_time":{"type":"string","format":"date-time","title":"Next Run Time"},"state":{"type":"object","title":"State"}},"type":"object","required":["id","next_run_time"],"title":"Job"},"MachineStateDataQuery":{"properties":{"message_id":{"type":"string","title":"Message Id"},"kind":{"type":"string","title":"Kind","default":"MACHINESTATE"},"station_type":{"type":"string","title":"Station Type"},"machine_state":{"type":"integer","title":"Machine State"},"state_change_time":{"type":"string","format":"date-time","title":"State Change 
Time"},"code":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Code"},"error_message":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Error Message"},"sub_station":{"anyOf":[{"type":"string"},{"type":"number"}],"title":"Sub Station","default":""},"error_id":{"type":"string","title":"Error Id","default":""},"publisher_id":{"type":"string","title":"Publisher Id"},"is_closed":{"type":"boolean","title":"Is Closed","default":false},"next_message_id":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Next Message Id"},"next_state_change_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"Next State Change Time"},"state_start_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"State Start Time"},"state_end_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"State End Time"},"state_duration_time":{"anyOf":[{"type":"number"},{"type":"null"}],"title":"State Duration Time"},"state_duration_minute":{"anyOf":[{"type":"number"},{"type":"null"}],"title":"State Duration Minute"},"station":{"$ref":"#/components/schemas/InitStation"},"error_detail":{"anyOf":[{"$ref":"#/components/schemas/ErrorCode"},{"type":"null"}]}},"type":"object","required":["message_id","station_type","machine_state","state_change_time","code","error_message","publisher_id","station"],"title":"MachineStateDataQuery"},"MachineStateFilterDict":{"properties":{"station_type":{"items":{"type":"string"},"type":"array","title":"Station Type"},"machine_state":{"items":{"type":"integer"},"type":"array","title":"Machine State"},"state_duration_time":{"prefixItems":[{"type":"integer"},{"type":"integer"}],"type":"array","maxItems":2,"minItems":2,"title":"State Duration Time","default":[0,999999]},"start_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"Start Time"},"end_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"End 
Time"}},"type":"object","title":"MachineStateFilterDict"},"ParametricType":{"properties":{"stationType":{"type":"string","title":"Stationtype","default":""},"overlayVersion":{"items":{"type":"string"},"type":"array","title":"Overlayversion","default":[]},"overlayVersionAll":{"type":"boolean","title":"Overlayversionall","default":true},"keys":{"items":{"type":"string"},"type":"array","title":"Keys","default":[]}},"type":"object","title":"ParametricType"},"ReportRecipients":{"properties":{"to":{"anyOf":[{"items":{"type":"string","format":"email"},"type":"array"},{"type":"null"}],"title":"To"},"cc":{"anyOf":[{"items":{"type":"string","format":"email"},"type":"array"},{"type":"null"}],"title":"Cc"},"bcc":{"anyOf":[{"items":{"type":"string","format":"email"},"type":"array"},{"type":"null"}],"title":"Bcc"}},"type":"object","title":"ReportRecipients"},"ShiftWeekOffset":{"properties":{"D1":{"type":"integer","title":"D1","default":0},"D2":{"type":"integer","title":"D2","default":0},"D3":{"type":"integer","title":"D3","default":0},"D4":{"type":"integer","title":"D4","default":0},"D5":{"type":"integer","title":"D5","default":0},"D6":{"type":"integer","title":"D6","default":0},"D7":{"type":"integer","title":"D7","default":0}},"type":"object","title":"ShiftWeekOffset"},"StationTypeFilterDict":{"properties":{"station_type":{"items":{"type":"string"},"type":"array","title":"Station Type"},"start_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"Start Time"},"end_time":{"anyOf":[{"type":"string","format":"date-time"},{"type":"null"}],"title":"End Time"},"isSystem_fail":{"type":"boolean","title":"Issystem Fail","default":false}},"type":"object","required":["station_type"],"title":"StationTypeFilterDict"},"TestUnitDetailsJobSetting":{"properties":{"setting_index":{"type":"integer","title":"Setting Index","default":0},"ReportCommand":{"type":"string","title":"Reportcommand","default":"TestUnitDetails"},"report_id":{"type":"string","title":"Report 
Id","default":""},"StartHourAnchor":{"type":"string","enum":["Start","End"],"title":"Starthouranchor","default":"Start"},"StartHourOffset":{"type":"integer","title":"Starthouroffset","default":0},"EndHourOffset":{"anyOf":[{"type":"integer"},{"type":"null"}],"title":"Endhouroffset"},"SITENAME":{"items":{"type":"string"},"type":"array","title":"Sitename"},"PROJECT_CODE":{"items":{"type":"string"},"type":"array","title":"Project Code"},"shift_hour":{"type":"integer","title":"Shift Hour","default":7},"shift_week_offset":{"$ref":"#/components/schemas/ShiftWeekOffset"},"cut_time_week_offset":{"$ref":"#/components/schemas/CutTimeWeekOffset"},"station_type_list":{"items":{"type":"string"},"type":"array","title":"Station Type List"},"line_list":{"items":{"type":"string"},"type":"array","title":"Line List"},"site":{"items":{"type":"string"},"type":"array","title":"Site"},"result_type":{"type":"string","enum":["MultiPass","FirstPass"],"title":"Result Type","default":"MultiPass"},"result":{"items":{"type":"string","enum":["PASS","FAIL","RETEST"]},"type":"array","title":"Result"}},"type":"object","required":["shift_week_offset","cut_time_week_offset"],"title":"TestUnitDetailsJobSetting"},"UnitTestDetail":{"properties":{"subSubTest":{"type":"string","title":"Subsubtest"},"masked":{"type":"boolean","title":"Masked"},"highlightTestId":{"type":"string","title":"Highlighttestid","default":""},"localFolderId":{"type":"string","title":"Localfolderid","default":""},"failItemId":{"type":"string","title":"Failitemid","default":""},"displayName":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Displayname","default":""},"configurationCode":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Configurationcode","default":""},"lineName":{"type":"string","title":"Linename"},"siteName":{"type":"string","title":"Sitename"},"productAssembly":{"type":"string","title":"Productassembly"},"units":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"numbe
r"},{"type":"null"}],"title":"Units","default":""},"equipmentId":{"type":"string","title":"Equipmentid"},"equipmentType":{"type":"string","title":"Equipmenttype"},"parentBuild":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Parentbuild","default":""},"Name":{"type":"string","title":"Name"},"Result":{"type":"string","title":"Result"},"subTest":{"type":"string","title":"Subtest"},"lock":{"type":"boolean","title":"Lock","default":false},"testEndTime":{"type":"string","format":"date-time","title":"Testendtime"},"startTime":{"type":"string","format":"date-time","title":"Starttime"},"value":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Value","default":""},"childBuild":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Childbuild","default":""},"mfgYearWeek":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Mfgyearweek","default":""},"isSecured":{"type":"boolean","title":"Issecured"},"test":{"type":"string","title":"Test"},"fixtureId":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Fixtureid","default":""},"hashedSerialNumber":{"type":"string","title":"Hashedserialnumber"},"overlayVersion":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Overlayversion","default":""},"lowerLimit":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Lowerlimit","default":""},"message":{"type":"string","title":"Message","default":""},"headId":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Headid","default":""},"productCode":{"type":"string","title":"Productcode"},"productName":{"type":"string","title":"Productname","default":""},"qualifier":{"anyOf":[{"type":"string"},{"type":"null"}],"title":"Qualifier","default":""},"upperLimit":{"anyOf":[{"type":"string"},{"type":"integer"},{"type":"number"},{"type":"null"}],"title":"Upperlimit","default":""},"isSystem_fail":{"type"
:"boolean","title":"Issystem Fail","default":true}},"type":"object","required":["subSubTest","masked","lineName","siteName","productAssembly","equipmentId","equipmentType","Name","Result","subTest","testEndTime","startTime","isSecured","test","hashedSerialNumber","productCode"],"title":"UnitTestDetail"},"ValidationError":{"properties":{"loc":{"items":{"anyOf":[{"type":"string"},{"type":"integer"}]},"type":"array","title":"Location"},"msg":{"type":"string","title":"Message"},"type":{"type":"string","title":"Error Type"}},"type":"object","required":["loc","msg","type"],"title":"ValidationError"}}}}

EngleSEN