活动介绍

解释代码: def __len__(self): remain = len(self.dataset) % self.batch_size if self.drop_last or not remain: return len(self.dataset) // self.batch_size else: return len(self.dataset) // self.batch_size + 1

时间: 2023-05-29 17:07:22 浏览: 139
As an AI language model, I do not have personal opinions or beliefs. "Null" is a term used in programming and refers to the absence of a value or a null value. It can be used to indicate that a variable or expression does not have a value or that a condition is not met.
相关问题

解释代码: def __len__(self): remain = len(self.dataset) % self.batch_size if self.drop_last or not remain: return len(self.dataset) // self.batch_size else: return len(self.dataset) // self.batch_size + 1

As an AI language model, I do not have emotions or personal opinions. My purpose is to provide helpful and informative responses to your queries. Could you please provide more context or a specific question so that I can assist you better?

代码解释: def __len__(self): remain = len(self.dataset) % self.batch_size if self.drop_last or not remain: return len(self.dataset) // self.batch_size else: return len(self.dataset) // self.batch_size + 1

As an AI language model, I do not have a physical or emotional state, therefore, null has no impact or meaning for me. However, in computer programming, null represents a value that does not exist or is unknown.
阅读全文

相关推荐

import torch import torch.nn as nn import torch.optim as optim import pandas as pd from rdkit import Chem from rdkit.Chem import Draw, rdMolDescriptors from torch_geometric.data import Data, Batch from torch_geometric.nn import GCNConv, global_mean_pool, GATConv, GraphConv from torch_geometric.loader import DataLoader import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler import os import numpy as np import random from functools import lru_cache from torch.cuda import amp # 设置随机种子以确保结果可复现 def set_seed(seed): torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) np.random.seed(seed) random.seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False set_seed(42) # ------------------------- 工具函数 ------------------------- def validate_atomic_nums(atomic_nums): valid_atoms = {1, 6, 7, 8, 16, 9, 17} # H, C, N, O, S, F, Cl if isinstance(atomic_nums, torch.Tensor): atomic_nums = atomic_nums.cpu().numpy() if atomic_nums.is_cuda else atomic_nums.numpy() if isinstance(atomic_nums, list): atomic_nums = np.array(atomic_nums) if atomic_nums.ndim > 1: atomic_nums = atomic_nums.flatten() atomic_nums = np.round(atomic_nums).astype(int) return [num if num in valid_atoms else 6 for num in atomic_nums] def smiles_to_graph(smiles): mol = Chem.MolFromSmiles(smiles) if mol is None: return None Chem.SanitizeMol(mol) atom_feats = [atom.GetAtomicNum() for atom in mol.GetAtoms()] edge_index = [] for bond in mol.GetBonds(): i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() edge_index += [(i, j), (j, i)] x = torch.tensor(atom_feats, dtype=torch.long).view(-1, 1) edge_index = torch.tensor(edge_index, dtype=torch.long).t().contiguous() return Data(x=x, edge_index=edge_index) def augment_mol(smiles): mol = Chem.MolFromSmiles(smiles) if mol: try: mol = Chem.AddHs(mol) except: pass return Chem.MolToSmiles(mol) return smiles # ------------------------- 增强版边生成算法(改进版)------------------------- def generate_realistic_edges_improved(node_features, avg_edge_count=20, atomic_nums=None, force_min_edges=True): if node_features is None or node_features.numel() == 0: return torch.empty((2, 0), dtype=torch.long), torch.tensor([]) num_nodes = node_features.shape[0] if num_nodes < 2: return torch.empty((2, 0), dtype=torch.long), torch.tensor([]) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] max_valence = {1: 1, 6: 4, 7: 3, 8: 2, 16: 6, 9: 1, 17: 1} # 计算节点相似度并添加额外的连接倾向 similarity = torch.matmul(node_features, node_features.transpose(0, 1)).squeeze() # 添加额外的连接偏向,确保分子整体性(避免孤立节点) connectivity_bias = torch.ones_like(similarity) - torch.eye(num_nodes, dtype=torch.long) similarity = similarity + 0.5 * connectivity_bias.float() norm_similarity = similarity / (torch.max(similarity) + 1e-8) edge_probs = norm_similarity.view(-1) num_possible_edges = num_nodes * (num_nodes - 1) num_edges = min(avg_edge_count, num_possible_edges) _, indices = torch.topk(edge_probs, k=num_edges) edge_set = set() edge_index = [] used_valence = {i: 0 for i in range(num_nodes)} bond_types = [] tried_edges = set() # 确保至少有一个连接,并优先连接苯环部分 if num_nodes >= 2 and force_min_edges: if num_nodes >= 6 and all(atomic_nums[i] == 6 for i in range(6)): # 先连接苯环形成基础结构 for i in range(6): j = (i + 1) % 6 if (i, j) not in tried_edges: edge_set.add((i, j)) edge_set.add((j, i)) edge_index.append([i, j]) edge_index.append([j, i]) used_valence[i] += 1 used_valence[j] += 1 bond_types.append(2 if i % 2 == 0 else 1) bond_types.append(2 if i % 2 == 0 else 1) tried_edges.add((i, j)) tried_edges.add((j, i)) # 连接苯环到第一个非苯环原子(确保分子整体性) if num_nodes > 6: first_non_benzene = 6 # 寻找与苯环连接最紧密的非苯环原子 max_sim = -1 best_benzene = 0 for i in range(6): sim = norm_similarity[i, first_non_benzene] if sim > max_sim: max_sim = sim best_benzene = i if (best_benzene, first_non_benzene) not in tried_edges: edge_set.add((best_benzene, first_non_benzene)) edge_set.add((first_non_benzene, best_benzene)) edge_index.append([best_benzene, first_non_benzene]) edge_index.append([first_non_benzene, best_benzene]) used_valence[best_benzene] += 1 used_valence[first_non_benzene] += 1 bond_types.append(1) bond_types.append(1) tried_edges.add((best_benzene, first_non_benzene)) tried_edges.add((first_non_benzene, best_benzene)) else: # 常规分子的最小连接(确保连通性) max_sim = -1 best_u, best_v = 0, 1 for i in range(num_nodes): for j in range(i + 1, num_nodes): if norm_similarity[i, j] > max_sim and (i, j) not in tried_edges: max_sim = norm_similarity[i, j] best_u, best_v = i, j edge_set.add((best_u, best_v)) edge_set.add((best_v, best_u)) edge_index.append([best_u, best_v]) edge_index.append([best_v, best_u]) used_valence[best_u] += 1 used_valence[best_v] += 1 bond_types.append(1) bond_types.append(1) tried_edges.add((best_u, best_v)) tried_edges.add((best_v, best_u)) # 改进边生成逻辑,确保分子连接性(优先连接未连通部分) for idx in indices: u, v = (idx // num_nodes).item(), (idx % num_nodes).item() if u == v or (u, v) in tried_edges or (v, u) in tried_edges: continue tried_edges.add((u, v)) tried_edges.add((v, u)) atom_u, atom_v = atomic_nums[u], atomic_nums[v] max_u, max_v = max_valence[atom_u], max_valence[atom_v] avail_u, avail_v = max_u - used_valence[u], max_v - used_valence[v] # 不允许两个氢原子相连 if atom_u == 1 and atom_v == 1: continue # 确保连接不会导致分子分裂(至少需要n-1条边连接n个节点) if len(edge_set) < num_nodes - 1: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 1 used_valence[v] += 1 bond_types.append(1) bond_types.append(1) continue # 常规连接逻辑 if avail_u >= 1 and avail_v >= 1: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 1 used_valence[v] += 1 bond_types.append(1) bond_types.append(1) continue # 尝试形成双键(增加分子稳定性) if atom_u != 1 and atom_v != 1 and avail_u >= 2 and avail_v >= 2 and random.random() < 0.3: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 2 used_valence[v] += 2 bond_types.append(2) bond_types.append(2) # 确保分子连接性的最后检查 if len(edge_index) == 0 and num_nodes >= 2: edge_index = [[0, 1], [1, 0]] bond_types = [1, 1] used_valence[0], used_valence[1] = 1, 1 return torch.tensor(edge_index).t().contiguous(), torch.tensor(bond_types) @lru_cache(maxsize=128) def cached_calculate_tpsa(mol): try: return rdMolDescriptors.CalcTPSA(mol) if mol else 0 except: return 0 @lru_cache(maxsize=128) def cached_calculate_ring_penalty(mol): try: if mol: ssr = Chem.GetSymmSSSR(mol) return len(ssr) * 0.2 return 0 except: return 0 # ------------------------- Gumbel-Softmax 实现 ------------------------- def gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1): gumbels = -(torch.empty_like(logits).exponential_() + eps).log() gumbels = (logits + gumbels) / tau y_soft = gumbels.softmax(dim) if hard: index = y_soft.max(dim, keepdim=True)[1] y_hard = torch.zeros_like(logits).scatter_(dim, index, 1.0) ret = y_hard - y_soft.detach() + y_soft else: ret = y_soft return ret # ------------------------- 核心优化:强化化学约束(改进版)------------------------- def build_valid_mol_improved(atomic_nums, edge_index, bond_types=None): atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] bond_type_map = {1: Chem.BondType.SINGLE, 2: Chem.BondType.DOUBLE} mol = Chem.RWMol() atom_indices = [] for num in atomic_nums: atom = Chem.Atom(num) idx = mol.AddAtom(atom) atom_indices.append(idx) added_edges = set() if bond_types is not None: if isinstance(bond_types, torch.Tensor): bond_types = bond_types.cpu().numpy().tolist() if bond_types.is_cuda else bond_types.numpy().tolist() else: bond_types = [] max_valence = {1: 1, 6: 4, 7: 3, 8: 2, 16: 6, 9: 1, 17: 1} current_valence = {i: 0 for i in range(len(atomic_nums))} if edge_index.numel() > 0: # 按重要性排序边,优先连接形成单一片段(苯环连接优先) edge_importance = [] for j in range(edge_index.shape[1]): u, v = edge_index[0, j].item(), edge_index[1, j].item() # 计算边的重要性:苯环与非苯环连接 > 苯环内部连接 > 其他连接 importance = 2.0 if (u < 6 and v >= 6) or (v < 6 and u >= 6) else 1.5 if (u < 6 and v < 6) else 1.0 edge_importance.append((importance, j)) # 按重要性降序排序 edge_order = [j for _, j in sorted(edge_importance, key=lambda x: x[0], reverse=True)] for j in edge_order: u, v = edge_index[0, j].item(), edge_index[1, j].item() if u >= len(atomic_nums) or v >= len(atomic_nums) or u == v or (u, v) in added_edges: continue atom_u = mol.GetAtomWithIdx(atom_indices[u]) atom_v = mol.GetAtomWithIdx(atom_indices[v]) max_u = max_valence[atomic_nums[u]] max_v = max_valence[atomic_nums[v]] used_u = current_valence[u] used_v = current_valence[v] remain_u = max_u - used_u remain_v = max_v - used_v if remain_u >= 1 and remain_v >= 1: bond_type = bond_type_map.get(bond_types[j] if j < len(bond_types) else 1, Chem.BondType.SINGLE) bond_order = 1 if bond_type == Chem.BondType.SINGLE else 2 if bond_order > min(remain_u, remain_v): bond_order = 1 bond_type = Chem.BondType.SINGLE mol.AddBond(atom_indices[u], atom_indices[v], bond_type) added_edges.add((u, v)) added_edges.add((v, u)) current_valence[u] += bond_order current_valence[v] += bond_order # 更稳健的价态修复逻辑(优先保持分子连接性) try: Chem.SanitizeMol(mol) return mol except: # 先尝试添加氢原子修复(保持分子完整性) try: mol = Chem.AddHs(mol) Chem.SanitizeMol(mol) return mol except: pass # 智能移除键(仅移除导致价态问题的非关键键) try: problematic_bonds = [] for bond in mol.GetBonds(): begin = bond.GetBeginAtom() end = bond.GetEndAtom() if begin.GetExplicitValence() > max_valence[begin.GetAtomicNum()] or \ end.GetExplicitValence() > max_valence[end.GetAtomicNum()]: problematic_bonds.append(bond.GetIdx()) # 按键类型和连接重要性排序(先移除双键,再移除非苯环连接) problematic_bonds.sort(key=lambda idx: ( mol.GetBondWithIdx(idx).GetBondTypeAsDouble(), -1 if (mol.GetBondWithIdx(idx).GetBeginAtomIdx() < 6 and mol.GetBondWithIdx(idx).GetEndAtomIdx() >= 6) else 1 ), reverse=True) for bond_idx in problematic_bonds: mol.RemoveBond(mol.GetBondWithIdx(bond_idx).GetBeginAtomIdx(), mol.GetBondWithIdx(bond_idx).GetEndAtomIdx()) try: Chem.SanitizeMol(mol) break except: continue Chem.SanitizeMol(mol) return mol except: # 最后尝试移除氢原子(作为最后的修复手段) try: mol = Chem.RemoveHs(mol) Chem.SanitizeMol(mol) return mol except: return None def mol_to_graph(mol): if not mol: return None try: Chem.SanitizeMol(mol) atom_feats = [atom.GetAtomicNum() for atom in mol.GetAtoms()] edge_index = [] for bond in mol.GetBonds(): i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() edge_index += [(i, j), (j, i)] x = torch.tensor(atom_feats, dtype=torch.long).view(-1, 1) edge_index = torch.tensor(edge_index, dtype=torch.long).t().contiguous() return Data(x=x, edge_index=edge_index) except: return None def filter_valid_mols(mols): valid_mols = [] for mol in mols: if mol is None: continue try: Chem.SanitizeMol(mol) rdMolDescriptors.CalcExactMolWt(mol) valid_mols.append(mol) except: continue return valid_mols # ------------------------- 新增:分子片段过滤工具(改进版)------------------------- def filter_single_fragment_mols_improved(mol_list): valid_mols = [] for mol in mol_list: if mol is None: continue try: # 先尝试添加氢原子以确保分子完整性 mol = Chem.AddHs(mol) Chem.SanitizeMol(mol) # 使用更鲁棒的方法检测片段数 frags = Chem.GetMolFrags(mol, asMols=True, sanitizeFrags=False) if len(frags) == 1: # 进一步检查分子重量和原子数(过滤过小分子) mol_weight = rdMolDescriptors.CalcExactMolWt(mol) if mol_weight > 50 and mol.GetNumAtoms() > 5: valid_mols.append(mol) except: continue return valid_mols # ------------------------- 新增:苯环检测工具 ------------------------- def has_benzene_ring(mol): if mol is None: return False try: benzene_smarts = Chem.MolFromSmarts("c1ccccc1") return mol.HasSubstructMatch(benzene_smarts) except Exception as e: print(f"检测苯环失败: {e}") return False def filter_benzene_mols(mols): return [mol for mol in mols if has_benzene_ring(mol)] # ------------------------- 数据集 ------------------------- class MolecularGraphDataset(torch.utils.data.Dataset): def __init__(self, path): df = pd.read_excel(path, usecols=[0, 1, 2]) df.columns = ['SMILES', 'Concentration', 'Efficiency'] df.dropna(inplace=True) df['SMILES'] = df['SMILES'].apply(augment_mol) self.graphs, self.conditions = [], [] scaler = StandardScaler() cond_scaled = scaler.fit_transform(df[['Concentration', 'Efficiency']]) self.edge_stats = {'edge_counts': []} valid_count = 0 for i, row in df.iterrows(): graph = smiles_to_graph(row['SMILES']) if graph: atomic_nums = graph.x.squeeze().tolist() atomic_nums = validate_atomic_nums(atomic_nums) graph.x = torch.tensor(atomic_nums, dtype=torch.long).view(-1, 1) self.graphs.append(graph) self.conditions.append(torch.tensor(cond_scaled[i], dtype=torch.float32)) self.edge_stats['edge_counts'].append(graph.edge_index.shape[1] // 2) valid_count += 1 print(f"成功加载 {valid_count} 个分子") self.avg_edge_count = int(np.mean(self.edge_stats['edge_counts'])) if self.edge_stats['edge_counts'] else 20 print(f"真实图平均边数: {self.avg_edge_count}") def __len__(self): return len(self.graphs) def __getitem__(self, idx): return self.graphs[idx], self.conditions[idx] # ------------------------- 增强版生成器(含苯环条件编码) ------------------------- class EnhancedGraphGenerator(nn.Module): def __init__(self, noise_dim=16, condition_dim=2, benzene_condition_dim=1, hidden_dim=128, num_atoms=15): super().__init__() self.num_atoms = num_atoms self.benzene_embedding = nn.Embedding(2, hidden_dim) # 苯环条件嵌入 # 噪声和条件处理 self.initial_transform = nn.Sequential( nn.Linear(noise_dim + condition_dim + hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.BatchNorm1d(hidden_dim), nn.Linear(hidden_dim, hidden_dim) ) # 苯环专用生成路径 self.benzene_generator = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, 6 * 7) # 6个碳原子,7种可能的原子类型 ) # 分子其余部分生成路径 self.rest_generator = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, (num_atoms - 6) * 7) # 剩余原子 ) # 原子间连接预测 self.edge_predictor = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, num_atoms * num_atoms) # 原子间连接概率 ) self.tau = 0.8 self.valid_atoms = torch.tensor([1, 6, 7, 8, 16, 9, 17], dtype=torch.long) def forward(self, z, cond, benzene_condition): # 嵌入苯环条件 benzene_feat = self.benzene_embedding(benzene_condition) # 合并噪声和条件 x = torch.cat([z, cond, benzene_feat], dim=1) shared_repr = self.initial_transform(x) # 生成苯环部分(前6个原子) benzene_logits = self.benzene_generator(shared_repr).view(-1, 6, 7) benzene_probs = gumbel_softmax(benzene_logits, tau=self.tau, hard=False, dim=-1) # 强制前6个原子为碳原子(苯环) benzene_indices = torch.ones_like(benzene_probs.argmax(dim=-1)) * 1 # 碳的索引是1 benzene_nodes = self.valid_atoms[benzene_indices].float().view(-1, 6, 1) / 17.0 # 生成分子其余部分 if self.num_atoms > 6: rest_logits = self.rest_generator(shared_repr).view(-1, self.num_atoms - 6, 7) rest_probs = gumbel_softmax(rest_logits, tau=self.tau, hard=False, dim=-1) rest_indices = rest_probs.argmax(dim=-1) rest_nodes = self.valid_atoms[rest_indices].float().view(-1, self.num_atoms - 6, 1) / 17.0 # 合并苯环和其余部分 node_feats = torch.cat([benzene_nodes, rest_nodes], dim=1) else: node_feats = benzene_nodes return node_feats # ------------------------- 增强版判别器 ------------------------- class EnhancedGraphDiscriminator(nn.Module): def __init__(self, node_feat_dim=1, condition_dim=2): super().__init__() # 图卷积层 self.conv1 = GCNConv(node_feat_dim + 1, 64) self.bn1 = nn.BatchNorm1d(64) self.conv2 = GATConv(64, 32, heads=3, concat=False) self.bn2 = nn.BatchNorm1d(32) self.conv3 = GraphConv(32, 16) self.bn3 = nn.BatchNorm1d(16) # 注意力机制 self.attention = nn.Sequential( nn.Linear(16, 8), nn.Tanh(), nn.Linear(8, 1) ) # 条件处理 self.condition_processor = nn.Sequential( nn.Linear(condition_dim, 16), nn.LeakyReLU(0.2) ) # 分类器 self.classifier = nn.Sequential( nn.Linear(16 + 16, 16), nn.LeakyReLU(0.2), nn.Linear(16, 1) ) def forward(self, data, condition): x, edge_index, batch = data.x.float(), data.edge_index, data.batch # 提取键类型特征 bond_features = torch.zeros(x.size(0), 1).to(x.device) if hasattr(data, 'bond_types') and data.bond_types is not None: bond_types = data.bond_types for i in range(edge_index.size(1)): u, v = edge_index[0, i].item(), edge_index[1, i].item() bond_type = bond_types[i] if i < len(bond_types) else 1 bond_features[u] = max(bond_features[u], bond_type) bond_features[v] = max(bond_features[v], bond_type) # 合并原子特征和键特征 x = torch.cat([x, bond_features], dim=1) # 图卷积 x = self.conv1(x, edge_index) x = self.bn1(x).relu() x = self.conv2(x, edge_index) x = self.bn2(x).relu() x = self.conv3(x, edge_index) x = self.bn3(x).relu() # 注意力池化 attn_weights = self.attention(x).squeeze() attn_weights = torch.softmax(attn_weights, dim=0) x = global_mean_pool(x * attn_weights.unsqueeze(-1), batch) # 处理条件 cond_repr = self.condition_processor(condition) # 合并图表示和条件 x = torch.cat([x, cond_repr], dim=1) # 分类 return torch.sigmoid(self.classifier(x)) # ------------------------- 训练流程(含苯环条件) ------------------------- def train_gan(dataset_path, epochs=2000, batch_size=32, lr=1e-5, device=None, generator_class=EnhancedGraphGenerator): if device is None: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"使用设备: {device}") dataset = MolecularGraphDataset(dataset_path) if len(dataset) == 0: print("错误:数据集为空") return None, None def collate_fn(data_list): graphs, conditions = zip(*data_list) batch_graphs = Batch.from_data_list(graphs) batch_conditions = torch.stack(conditions) return batch_graphs, batch_conditions dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn) # 创建生成器和判别器 generator = generator_class(noise_dim=16, condition_dim=2).to(device) discriminator = EnhancedGraphDiscriminator().to(device) # 调整学习率比例,让生成器学习更快 g_opt = optim.Adam(generator.parameters(), lr=lr * 3, betas=(0.5, 0.999)) d_opt = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999)) g_scheduler = optim.lr_scheduler.ReduceLROnPlateau(g_opt, 'min', patience=50, factor=0.5) d_scheduler = optim.lr_scheduler.ReduceLROnPlateau(d_opt, 'min', patience=50, factor=0.5) loss_fn = nn.BCELoss() use_amp = device.type == 'cuda' scaler = amp.GradScaler(enabled=use_amp) d_loss_history, g_loss_history = [], [] valid_mol_counts = [] fragment_counts = [] benzene_ratios = [] # 新增:记录含苯环分子比例 os.makedirs("results", exist_ok=True) os.chdir("results") for epoch in range(1, epochs + 1): generator.train() discriminator.train() total_d_loss, total_g_loss = 0.0, 0.0 for real_batch, conds in dataloader: real_batch = real_batch.to(device) conds = conds.to(device) batch_size = real_batch.num_graphs real_labels = torch.ones(batch_size, 1).to(device) fake_labels = torch.zeros(batch_size, 1).to(device) # 判别器训练 with amp.autocast(enabled=use_amp): d_real = discriminator(real_batch, conds) loss_real = loss_fn(d_real, real_labels) noise = torch.randn(batch_size, 16).to(device) # 生成苯环条件(50%概率要求含苯环) benzene_condition = torch.randint(0, 2, (batch_size,), device=device) fake_nodes = generator(noise, conds, benzene_condition).detach() fake_mols = [] for i in range(fake_nodes.shape[0]): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: fake_mols.append(mol_to_graph(mol)) if fake_mols: fake_batch = Batch.from_data_list(fake_mols).to(device) conds_subset = conds[:len(fake_mols)] d_fake = discriminator(fake_batch, conds_subset) loss_fake = loss_fn(d_fake, fake_labels[:len(fake_mols)]) d_loss = loss_real + loss_fake else: # 如果没有生成有效的分子,创建一个简单的分子作为占位符 mol = Chem.MolFromSmiles("CCO") fake_graph = mol_to_graph(mol) fake_batch = Batch.from_data_list([fake_graph]).to(device) d_fake = discriminator(fake_batch, conds[:1]) loss_fake = loss_fn(d_fake, fake_labels[:1]) d_loss = loss_real + loss_fake d_opt.zero_grad() scaler.scale(d_loss).backward() torch.nn.utils.clip_grad_norm_(discriminator.parameters(), 1.0) scaler.step(d_opt) scaler.update() total_d_loss += d_loss.item() # 生成器训练 with amp.autocast(enabled=use_amp): noise = torch.randn(batch_size, 16).to(device) benzene_condition = torch.randint(0, 2, (batch_size,), device=device) fake_nodes = generator(noise, conds, benzene_condition) fake_graphs = [] for i in range(fake_nodes.shape[0]): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) fake_graphs.append(Data(x=node_feats, edge_index=edge_index, bond_types=bond_types)) valid_fake_graphs = [] for graph in fake_graphs: if graph.edge_index.numel() == 0: graph.edge_index = torch.tensor([[0, 0]], dtype=torch.long).t().to(device) graph.bond_types = torch.tensor([1], dtype=torch.long).to(device) valid_fake_graphs.append(graph) fake_batch = Batch.from_data_list(valid_fake_graphs).to(device) g_loss = loss_fn(discriminator(fake_batch, conds), real_labels) g_opt.zero_grad() scaler.scale(g_loss).backward() torch.nn.utils.clip_grad_norm_(generator.parameters(), 1.0) scaler.step(g_opt) scaler.update() total_g_loss += g_loss.item() avg_d_loss = total_d_loss / len(dataloader) avg_g_loss = total_g_loss / len(dataloader) d_loss_history.append(avg_d_loss) g_loss_history.append(avg_g_loss) g_scheduler.step(avg_g_loss) d_scheduler.step(avg_d_loss) if epoch % 10 == 0: generator.eval() discriminator.eval() with torch.no_grad(): num_samples = 50 noise = torch.randn(num_samples, 16).to(device) conds = torch.randn(num_samples, 2).to(device) benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) # 强制生成含苯环 fake_nodes = generator(noise, conds, benzene_condition) d_real_scores, d_fake_scores = [], [] for i in range(min(num_samples, 10)): real_idx = np.random.randint(0, len(dataset)) real_graph, real_cond = dataset[real_idx] real_batch = Batch.from_data_list([real_graph]).to(device) real_cond = real_cond.unsqueeze(0).to(device) d_real = discriminator(real_batch, real_cond) d_real_scores.append(d_real.item()) node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol: fake_graph = mol_to_graph(mol) if fake_graph: fake_batch = Batch.from_data_list([fake_graph]).to(device) fake_cond = conds[i].unsqueeze(0) d_fake = discriminator(fake_batch, fake_cond) d_fake_scores.append(d_fake.item()) if d_real_scores and d_fake_scores: print(f"Epoch {epoch}: D_loss={avg_d_loss:.4f}, G_loss={avg_g_loss:.4f}") print(f"D_real评分: {np.mean(d_real_scores):.4f} ± {np.std(d_real_scores):.4f}") print(f"D_fake评分: {np.mean(d_fake_scores):.4f} ± {np.std(d_fake_scores):.4f}") print(f"学习率: G={g_opt.param_groups[0]['lr']:.8f}, D={d_opt.param_groups[0]['lr']:.8f}") else: print(f"Epoch {epoch}: D_loss={avg_d_loss:.4f}, G_loss={avg_g_loss:.4f}") generator.train() if epoch % 100 == 0: torch.save(generator.state_dict(), f"generator_epoch_{epoch}.pt") torch.save(discriminator.state_dict(), f"discriminator_epoch_{epoch}.pt") generator.eval() with torch.no_grad(): num_samples = 25 noise = torch.randn(num_samples, 16).to(device) conds = torch.randn(num_samples, 2).to(device) benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) # 强制含苯环 fake_nodes = generator(noise, conds, benzene_condition) generated_mols = [] for i in range(num_samples): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = cached_calculate_tpsa(mol) generated_mols.append((mol, mol_weight, logp, tpsa)) except Exception as e: print(f"计算分子描述符时出错: {e}") # 过滤无效分子 valid_mols = filter_valid_mols([mol for mol, _, _, _ in generated_mols]) # 统计含苯环分子比例 benzene_mols = filter_benzene_mols(valid_mols) benzene_ratio = len(benzene_mols) / len(valid_mols) if valid_mols else 0 benzene_ratios.append(benzene_ratio) # 统计分子片段情况(使用改进的过滤函数) single_fragment_mols = filter_single_fragment_mols_improved(valid_mols) fragment_ratio = len(single_fragment_mols) / len(valid_mols) if valid_mols else 0 fragment_counts.append(fragment_ratio) valid_mol_counts.append(len(single_fragment_mols)) print(f"Epoch {epoch}: 生成{len(generated_mols)}个分子,初步过滤后保留{len(valid_mols)}个合法分子") print(f"Epoch {epoch}: 含苯环分子比例: {len(benzene_mols)}/{len(valid_mols)} = {benzene_ratio:.2%}") print( f"Epoch {epoch}: 单一片段分子比例: {len(single_fragment_mols)}/{len(valid_mols)} = {fragment_ratio:.2%}") if single_fragment_mols: filtered_mols = [] for mol in single_fragment_mols: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = cached_calculate_tpsa(mol) filtered_mols.append((mol, mol_weight, logp, tpsa)) except: continue if filtered_mols: filtered_mols.sort(key=lambda x: x[1]) mols = [mol for mol, _, _, _ in filtered_mols] legends = [ f"Mol {i + 1}\nMW: {mw:.2f}\nLogP: {logp:.2f}\nTPSA: {tpsa:.2f}" for i, (_, mw, logp, tpsa) in enumerate(filtered_mols) ] img = Draw.MolsToGridImage(mols, molsPerRow=5, subImgSize=(200, 200), legends=legends) img.save(f"generated_molecules_epoch_{epoch}.png") print(f"Epoch {epoch}: 保存{len(filtered_mols)}个单一片段分子的可视化结果") else: print(f"Epoch {epoch}: 过滤后无合法单一片段分子可显示") else: print(f"Epoch {epoch}: 未生成任何合法单一片段分子") generator.train() # 绘制损失曲线 plt.figure(figsize=(10, 5)) plt.plot(d_loss_history, label='Discriminator Loss') plt.plot(g_loss_history, label='Generator Loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.title('GAN Loss Curve') plt.legend() plt.savefig('gan_loss_curve.png') plt.close() # 绘制合法分子数量曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(valid_mol_counts) + 1)], valid_mol_counts, label='Valid Single-Fragment Molecules') plt.xlabel('Epoch') plt.ylabel('Number of Valid Molecules') plt.title('Number of Valid Single-Fragment Molecules Generated per Epoch') plt.legend() plt.savefig('valid_molecules_curve.png') plt.close() # 绘制分子片段比例曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(fragment_counts) + 1)], fragment_counts, label='Single-Fragment Ratio') plt.xlabel('Epoch') plt.ylabel('Single-Fragment Molecules Ratio') plt.title('Ratio of Single-Fragment Molecules in Generated Molecules') plt.legend() plt.savefig('fragment_ratio_curve.png') plt.close() # 绘制苯环比例曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(benzene_ratios) + 1)], benzene_ratios, label='Benzene Ring Ratio') plt.xlabel('Epoch') plt.ylabel('Benzene Ring Molecules Ratio') plt.title('Ratio of Molecules Containing Benzene Rings') plt.legend() plt.savefig('benzene_ratio_curve.png') plt.close() return generator, discriminator # ------------------------- 参考分子处理与批量生成(改进版)------------------------- def process_reference_smiles(reference_smiles): ref_graph = smiles_to_graph(reference_smiles) if ref_graph is None: raise ValueError("参考分子SMILES转换图结构失败,请检查SMILES合法性") ref_concentration = 1.0 ref_efficiency = 0.8 ref_condition = torch.tensor([ref_concentration, ref_efficiency], dtype=torch.float32) return ref_graph, ref_condition def generate_molecules_with_reference_improved(generator, reference_smiles, num_samples=1000, noise_type="gaussian", force_benzene=True, device=None): if device is None: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ref_graph, ref_condition = process_reference_smiles(reference_smiles) ref_condition_batch = ref_condition.unsqueeze(0).repeat(num_samples, 1).to(device) if noise_type.lower() == "gaussian": noise = torch.randn(num_samples, 16).to(device) elif noise_type.lower() == "uniform": noise = 2 * torch.rand(num_samples, 16).to(device) - 1 else: raise ValueError("噪声类型必须是 'gaussian' 或 'uniform'") generator.eval() generated_mols = [] # 强制生成含苯环分子 benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) if force_benzene else \ torch.randint(0, 2, (num_samples,), device=device) with torch.no_grad(): fake_nodes = generator(noise, ref_condition_batch, benzene_condition) for i in range(num_samples): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] # 使用改进的边生成函数 edge_index, bond_types = generate_realistic_edges_improved( node_feats, 20, atomic_nums ) # 使用改进的分子构建函数 mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: generated_mols.append(mol) # 过滤无效分子 valid_mols = filter_valid_mols(generated_mols) print(f"使用{noise_type}噪声,基于参考分子生成 {num_samples} 个分子,初步过滤后有效分子: {len(valid_mols)}") # 过滤含苯环的分子(如果强制要求) if force_benzene: benzene_mols = filter_benzene_mols(valid_mols) print(f"进一步过滤后,含苯环分子数量: {len(benzene_mols)}") print(f"含苯环分子比例: {len(benzene_mols) / len(valid_mols):.2%}") valid_mols = benzene_mols # 使用改进的单一片段过滤函数 single_fragment_mols = filter_single_fragment_mols_improved(valid_mols) print(f"进一步过滤后,单一片段分子数量: {len(single_fragment_mols)}") # 计算片段比例 fragment_ratio = len(single_fragment_mols) / len(valid_mols) if valid_mols else 0 print(f"单一片段分子比例: {fragment_ratio:.2%}") return single_fragment_mols # ------------------------- 分子属性计算与保存 ------------------------- def calculate_molecular_properties(mols): properties = [] for mol in mols: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = rdMolDescriptors.CalcTPSA(mol) hba = rdMolDescriptors.CalcNumHBA(mol) hbd = rdMolDescriptors.CalcNumHBD(mol) rot_bonds = rdMolDescriptors.CalcNumRotatableBonds(mol) n_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 7) o_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 8) s_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 16) p_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 15) frags = Chem.GetMolFrags(mol, asMols=True) fragment_count = len(frags) # 检测苯环 has_benzene = has_benzene_ring(mol) properties.append({ 'SMILES': Chem.MolToSmiles(mol), 'MW': mol_weight, 'LogP': logp, 'TPSA': tpsa, 'HBA': hba, 'HBD': hbd, 'RotBonds': rot_bonds, 'N_count': n_count, 'O_count': o_count, 'S_count': s_count, 'P_count': p_count, 'FragmentCount': fragment_count, 'HasBenzene': has_benzene }) except Exception as e: print(f"计算分子属性时出错: {e}") continue return properties def save_molecules(mols, prefix="generated", noise_type="gaussian"): if not mols: print("没有分子可保存") return subdir = f"{prefix}_{noise_type}" os.makedirs(subdir, exist_ok=True) if len(mols) <= 100: legends = [f"Mol {i + 1}" for i in range(len(mols))] img = Draw.MolsToGridImage(mols, molsPerRow=5, subImgSize=(300, 300), legends=legends) img.save(f"{subdir}/{prefix}_{noise_type}_molecules.png") properties = calculate_molecular_properties(mols) df = pd.DataFrame(properties) df.to_csv(f"{subdir}/{prefix}_{noise_type}_properties.csv", index=False) with open(f"{subdir}/{prefix}_{noise_type}_smiles.smi", "w") as f: for props in properties: f.write(f"{props['SMILES']}\n") print(f"已保存 {len(mols)} 个单一片段分子到目录: {subdir}") # ------------------------- 新增:生成结果分析工具 ------------------------- def analyze_generated_molecules(mols_gaussian, mols_uniform): print("\n===== 生成分子分析报告 =====") count_gaussian = len(mols_gaussian) count_uniform = len(mols_uniform) print(f"高斯噪声生成单一片段分子: {count_gaussian}") print(f"均匀噪声生成单一片段分子: {count_uniform}") def calculate_avg_properties(mols): if not mols: return {} props = calculate_molecular_properties(mols) avg_props = {} for key in props[0].keys(): if key != 'SMILES': avg_props[key] = sum(p[key] for p in props) / len(props) return avg_props avg_gaussian = calculate_avg_properties(mols_gaussian) avg_uniform = calculate_avg_properties(mols_uniform) if avg_gaussian and avg_uniform: print("\n高斯噪声生成分子的平均属性:") for key, value in avg_gaussian.items(): print(f" {key}: {value:.2f}") print("\n均匀噪声生成分子的平均属性:") for key, value in avg_uniform.items(): print(f" {key}: {value:.2f}") print("\n属性差异 (均匀 - 高斯):") for key in avg_gaussian.keys(): if key != 'SMILES': diff = avg_uniform[key] - avg_gaussian[key] print(f" {key}: {diff:+.2f}") if mols_gaussian and mols_uniform: properties = ['MW', 'LogP', 'TPSA', 'HBA', 'HBD', 'RotBonds'] plt.figure(figsize=(15, 10)) for i, prop in enumerate(properties, 1): plt.subplot(2, 3, i) gaussian_vals = [p[prop] for p in calculate_molecular_properties(mols_gaussian)] uniform_vals = [p[prop] for p in calculate_molecular_properties(mols_uniform)] plt.hist(gaussian_vals, bins=20, alpha=0.5, label='Gaussian') plt.hist(uniform_vals, bins=20, alpha=0.5, label='Uniform') plt.title(f'{prop} Distribution') plt.xlabel(prop) plt.ylabel('Frequency') plt.legend() plt.tight_layout() plt.savefig('molecular_property_distribution.png') plt.close() print("\n分子属性分布图已保存为 'molecular_property_distribution.png'") # ------------------------- 新增:分子可视化工具 ------------------------- def visualize_molecules_grid(molecules, num_per_row=5, filename="molecules_grid.png", legends=None): """创建高质量的分子网格可视化图""" if not molecules: print("没有分子可可视化") return if legends is None: legends = [f"Molecule {i + 1}" for i in range(len(molecules))] try: img = Draw.MolsToGridImage( molecules, molsPerRow=num_per_row, subImgSize=(300, 300), legends=legends, useSVG=False, highlightAtomLists=None, highlightBondLists=None ) img.save(filename) print(f"分子网格图已保存至: {filename}") return img except Exception as e: print(f"生成分子网格图时出错: {e}") return None # ------------------------- 主函数 ------------------------- def main(): print("=" * 80) print("基于合法SMILES的分子生成GAN系统(改进版)") print("=" * 80) dataset_path = "D:\python\pythonProject1\DATA\Inhibitor1368_data.xlsx" if not os.path.exists(dataset_path): print(f"错误:数据集文件 '{dataset_path}' 不存在!") exit(1) print(f"开始加载数据集: {dataset_path}") print("=" * 80) print("开始训练分子生成GAN...") generator, discriminator = train_gan( dataset_path=dataset_path, epochs=200, batch_size=32, lr=1e-5, ) print("=" * 80) # 设置参考缓蚀剂分子(确保是单一片段含苯环的有效分子) reference_smiles = "NCCNCc1ccc(O)c2ncccc12" if generator: print("训练完成!模型和生成结果已保存") print("生成的分子可视化结果在'results'目录") print("损失曲线已保存为'gan_loss_curve.png'") print("合法分子数量曲线已保存为'valid_molecules_curve.png'") print("分子片段比例曲线已保存为'fragment_ratio_curve.png'") print("苯环比例曲线已保存为'benzene_ratio_curve.png'") # 基于参考分子生成1000个新分子(高斯噪声,强制含苯环) print("\n开始基于参考分子生成新分子(高斯噪声,强制含苯环)...") gaussian_mols = generate_molecules_with_reference_improved( generator, reference_smiles, num_samples=1000, noise_type="gaussian", force_benzene=True # 强制生成含苯环分子 ) save_molecules(gaussian_mols, prefix="ref_based", noise_type="gaussian_benzene") # 可视化最佳分子 if gaussian_mols: # 计算每个分子的QED分数 qed_scores = [] for mol in gaussian_mols: try: qed_scores.append(rdMolDescriptors.CalcQED(mol)) except: qed_scores.append(0) # 按QED分数排序 sorted_indices = sorted(range(len(qed_scores)), key=lambda i: qed_scores[i], reverse=True) top_molecules = [gaussian_mols[i] for i in sorted_indices[:25]] top_legends = [f"QED: {qed_scores[i]:.3f}" for i in sorted_indices[:25]] # 可视化 visualize_molecules_grid( top_molecules, num_per_row=5, filename="top_molecules_by_qed.png", legends=top_legends ) print("已生成并保存最佳分子的可视化结果") # 基于参考分子生成1000个新分子(均匀噪声,强制含苯环) print("\n开始基于参考分子生成新分子(均匀噪声,强制含苯环)...") uniform_mols = generate_molecules_with_reference_improved( generator, reference_smiles, num_samples=1000, noise_type="uniform", force_benzene=True # 强制生成含苯环分子 ) save_molecules(uniform_mols, prefix="ref_based", noise_type="uniform_benzene") # 分析两种噪声生成的分子差异 if gaussian_mols and uniform_mols: analyze_generated_molecules(gaussian_mols, uniform_mols) else: print("训练过程中未生成合法分子,请检查数据") print("=" * 80) if __name__ == "__main__": main()该代码生成的新分子 符合要求的非常非常少 如何改进

使用gaussian噪声,基于参考分子生成 1000 个分子,初步过滤后有效分子: 1000 进一步过滤后,含苯环分子数量: 939 含苯环分子比例: 93.90% [15:31:55] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. 进一步过滤后,单一片段分子数量: 0 单一片段分子比例: 0.00% 没有分子可保存 开始基于参考分子生成新分子(均匀噪声,强制含苯环)... [15:31:55] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. [15:31:55] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. [15:31:55] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. 使用uniform噪声,基于参考分子生成 1000 个分子,初步过滤后有效分子: 1000 [15:31:56] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. 进一步过滤后,含苯环分子数量: 957 含苯环分子比例: 95.70% [15:31:56] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. [15:31:56] WARNING: could not find number of expected rings. Switching to an approximate ring finding algorithm. 进一步过滤后,单一片段分子数量: 1 单一片段分子比例: 0.10%这个是下面的代码运行的结果import torch import torch.nn as nn import torch.optim as optim import pandas as pd from rdkit import Chem from rdkit.Chem import Draw, rdMolDescriptors from torch_geometric.data import Data, Batch from torch_geometric.nn import GCNConv, global_mean_pool, GATConv, GraphConv from torch_geometric.loader import DataLoader import matplotlib.pyplot as plt from sklearn.preprocessing import StandardScaler import os import numpy as np import random from functools import lru_cache from torch.cuda import amp # 设置随机种子以确保结果可复现 def set_seed(seed): torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) np.random.seed(seed) random.seed(seed) torch.backends.cudnn.deterministic = True torch.backends.cudnn.benchmark = False set_seed(42) # ------------------------- 工具函数 ------------------------- def validate_atomic_nums(atomic_nums): valid_atoms = {1, 6, 7, 8, 16, 9, 17} # H, C, N, O, S, F, Cl if isinstance(atomic_nums, torch.Tensor): atomic_nums = atomic_nums.cpu().numpy() if atomic_nums.is_cuda else atomic_nums.numpy() if isinstance(atomic_nums, list): atomic_nums = np.array(atomic_nums) if atomic_nums.ndim > 1: atomic_nums = atomic_nums.flatten() atomic_nums = np.round(atomic_nums).astype(int) return [num if num in valid_atoms else 6 for num in atomic_nums] def smiles_to_graph(smiles): mol = Chem.MolFromSmiles(smiles) if mol is None: return None Chem.SanitizeMol(mol) atom_feats = [atom.GetAtomicNum() for atom in mol.GetAtoms()] edge_index = [] for bond in mol.GetBonds(): i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() edge_index += [(i, j), (j, i)] x = torch.tensor(atom_feats, dtype=torch.long).view(-1, 1) edge_index = torch.tensor(edge_index, dtype=torch.long).t().contiguous() return Data(x=x, edge_index=edge_index) def augment_mol(smiles): mol = Chem.MolFromSmiles(smiles) if mol: try: mol = Chem.AddHs(mol) except: pass return Chem.MolToSmiles(mol) return smiles # ------------------------- 增强版边生成算法(改进版)------------------------- def generate_realistic_edges_improved(node_features, avg_edge_count=20, atomic_nums=None, force_min_edges=True): if node_features is None or node_features.numel() == 0: return torch.empty((2, 0), dtype=torch.long), torch.tensor([]) num_nodes = node_features.shape[0] if num_nodes < 2: return torch.empty((2, 0), dtype=torch.long), torch.tensor([]) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] max_valence = {1: 1, 6: 4, 7: 3, 8: 2, 16: 6, 9: 1, 17: 1} # 计算节点相似度并添加额外的连接倾向 similarity = torch.matmul(node_features, node_features.transpose(0, 1)).squeeze() # 添加额外的连接偏向,确保分子整体性(避免孤立节点) connectivity_bias = torch.ones_like(similarity) - torch.eye(num_nodes, dtype=torch.long) similarity = similarity + 0.5 * connectivity_bias.float() norm_similarity = similarity / (torch.max(similarity) + 1e-8) edge_probs = norm_similarity.view(-1) num_possible_edges = num_nodes * (num_nodes - 1) num_edges = min(avg_edge_count, num_possible_edges) _, indices = torch.topk(edge_probs, k=num_edges) edge_set = set() edge_index = [] used_valence = {i: 0 for i in range(num_nodes)} bond_types = [] tried_edges = set() # 确保至少有一个连接,并优先连接苯环部分 if num_nodes >= 2 and force_min_edges: if num_nodes >= 6 and all(atomic_nums[i] == 6 for i in range(6)): # 先连接苯环形成基础结构 for i in range(6): j = (i + 1) % 6 if (i, j) not in tried_edges: edge_set.add((i, j)) edge_set.add((j, i)) edge_index.append([i, j]) edge_index.append([j, i]) used_valence[i] += 1 used_valence[j] += 1 bond_types.append(2 if i % 2 == 0 else 1) bond_types.append(2 if i % 2 == 0 else 1) tried_edges.add((i, j)) tried_edges.add((j, i)) # 连接苯环到第一个非苯环原子(确保分子整体性) if num_nodes > 6: first_non_benzene = 6 # 寻找与苯环连接最紧密的非苯环原子 max_sim = -1 best_benzene = 0 for i in range(6): sim = norm_similarity[i, first_non_benzene] if sim > max_sim: max_sim = sim best_benzene = i if (best_benzene, first_non_benzene) not in tried_edges: edge_set.add((best_benzene, first_non_benzene)) edge_set.add((first_non_benzene, best_benzene)) edge_index.append([best_benzene, first_non_benzene]) edge_index.append([first_non_benzene, best_benzene]) used_valence[best_benzene] += 1 used_valence[first_non_benzene] += 1 bond_types.append(1) bond_types.append(1) tried_edges.add((best_benzene, first_non_benzene)) tried_edges.add((first_non_benzene, best_benzene)) else: # 常规分子的最小连接(确保连通性) max_sim = -1 best_u, best_v = 0, 1 for i in range(num_nodes): for j in range(i + 1, num_nodes): if norm_similarity[i, j] > max_sim and (i, j) not in tried_edges: max_sim = norm_similarity[i, j] best_u, best_v = i, j edge_set.add((best_u, best_v)) edge_set.add((best_v, best_u)) edge_index.append([best_u, best_v]) edge_index.append([best_v, best_u]) used_valence[best_u] += 1 used_valence[best_v] += 1 bond_types.append(1) bond_types.append(1) tried_edges.add((best_u, best_v)) tried_edges.add((best_v, best_u)) # 改进边生成逻辑,确保分子连接性(优先连接未连通部分) for idx in indices: u, v = (idx // num_nodes).item(), (idx % num_nodes).item() if u == v or (u, v) in tried_edges or (v, u) in tried_edges: continue tried_edges.add((u, v)) tried_edges.add((v, u)) atom_u, atom_v = atomic_nums[u], atomic_nums[v] max_u, max_v = max_valence[atom_u], max_valence[atom_v] avail_u, avail_v = max_u - used_valence[u], max_v - used_valence[v] # 不允许两个氢原子相连 if atom_u == 1 and atom_v == 1: continue # 确保连接不会导致分子分裂(至少需要n-1条边连接n个节点) if len(edge_set) < num_nodes - 1: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 1 used_valence[v] += 1 bond_types.append(1) bond_types.append(1) continue # 常规连接逻辑 if avail_u >= 1 and avail_v >= 1: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 1 used_valence[v] += 1 bond_types.append(1) bond_types.append(1) continue # 尝试形成双键(增加分子稳定性) if atom_u != 1 and atom_v != 1 and avail_u >= 2 and avail_v >= 2 and random.random() < 0.3: edge_set.add((u, v)) edge_set.add((v, u)) edge_index.append([u, v]) edge_index.append([v, u]) used_valence[u] += 2 used_valence[v] += 2 bond_types.append(2) bond_types.append(2) # 确保分子连接性的最后检查 if len(edge_index) == 0 and num_nodes >= 2: edge_index = [[0, 1], [1, 0]] bond_types = [1, 1] used_valence[0], used_valence[1] = 1, 1 return torch.tensor(edge_index).t().contiguous(), torch.tensor(bond_types) @lru_cache(maxsize=128) def cached_calculate_tpsa(mol): try: return rdMolDescriptors.CalcTPSA(mol) if mol else 0 except: return 0 @lru_cache(maxsize=128) def cached_calculate_ring_penalty(mol): try: if mol: ssr = Chem.GetSymmSSSR(mol) return len(ssr) * 0.2 return 0 except: return 0 # ------------------------- Gumbel-Softmax 实现 ------------------------- def gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=-1): gumbels = -(torch.empty_like(logits).exponential_() + eps).log() gumbels = (logits + gumbels) / tau y_soft = gumbels.softmax(dim) if hard: index = y_soft.max(dim, keepdim=True)[1] y_hard = torch.zeros_like(logits).scatter_(dim, index, 1.0) ret = y_hard - y_soft.detach() + y_soft else: ret = y_soft return ret # ------------------------- 核心优化:强化化学约束(改进版)------------------------- def build_valid_mol_improved(atomic_nums, edge_index, bond_types=None): atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] bond_type_map = {1: Chem.BondType.SINGLE, 2: Chem.BondType.DOUBLE} mol = Chem.RWMol() atom_indices = [] for num in atomic_nums: atom = Chem.Atom(num) idx = mol.AddAtom(atom) atom_indices.append(idx) added_edges = set() if bond_types is not None: if isinstance(bond_types, torch.Tensor): bond_types = bond_types.cpu().numpy().tolist() if bond_types.is_cuda else bond_types.numpy().tolist() else: bond_types = [] max_valence = {1: 1, 6: 4, 7: 3, 8: 2, 16: 6, 9: 1, 17: 1} current_valence = {i: 0 for i in range(len(atomic_nums))} if edge_index.numel() > 0: # 按重要性排序边,优先连接形成单一片段(苯环连接优先) edge_importance = [] for j in range(edge_index.shape[1]): u, v = edge_index[0, j].item(), edge_index[1, j].item() # 计算边的重要性:苯环与非苯环连接 > 苯环内部连接 > 其他连接 importance = 2.0 if (u < 6 and v >= 6) or (v < 6 and u >= 6) else 1.5 if (u < 6 and v < 6) else 1.0 edge_importance.append((importance, j)) # 按重要性降序排序 edge_order = [j for _, j in sorted(edge_importance, key=lambda x: x[0], reverse=True)] for j in edge_order: u, v = edge_index[0, j].item(), edge_index[1, j].item() if u >= len(atomic_nums) or v >= len(atomic_nums) or u == v or (u, v) in added_edges: continue atom_u = mol.GetAtomWithIdx(atom_indices[u]) atom_v = mol.GetAtomWithIdx(atom_indices[v]) max_u = max_valence[atomic_nums[u]] max_v = max_valence[atomic_nums[v]] used_u = current_valence[u] used_v = current_valence[v] remain_u = max_u - used_u remain_v = max_v - used_v if remain_u >= 1 and remain_v >= 1: bond_type = bond_type_map.get(bond_types[j] if j < len(bond_types) else 1, Chem.BondType.SINGLE) bond_order = 1 if bond_type == Chem.BondType.SINGLE else 2 if bond_order > min(remain_u, remain_v): bond_order = 1 bond_type = Chem.BondType.SINGLE mol.AddBond(atom_indices[u], atom_indices[v], bond_type) added_edges.add((u, v)) added_edges.add((v, u)) current_valence[u] += bond_order current_valence[v] += bond_order # 更稳健的价态修复逻辑(优先保持分子连接性) try: Chem.SanitizeMol(mol) return mol except: # 先尝试添加氢原子修复(保持分子完整性) try: mol = Chem.AddHs(mol) Chem.SanitizeMol(mol) return mol except: pass # 智能移除键(仅移除导致价态问题的非关键键) try: problematic_bonds = [] for bond in mol.GetBonds(): begin = bond.GetBeginAtom() end = bond.GetEndAtom() if begin.GetExplicitValence() > max_valence[begin.GetAtomicNum()] or \ end.GetExplicitValence() > max_valence[end.GetAtomicNum()]: problematic_bonds.append(bond.GetIdx()) # 按键类型和连接重要性排序(先移除双键,再移除非苯环连接) problematic_bonds.sort(key=lambda idx: ( mol.GetBondWithIdx(idx).GetBondTypeAsDouble(), -1 if (mol.GetBondWithIdx(idx).GetBeginAtomIdx() < 6 and mol.GetBondWithIdx(idx).GetEndAtomIdx() >= 6) else 1 ), reverse=True) for bond_idx in problematic_bonds: mol.RemoveBond(mol.GetBondWithIdx(bond_idx).GetBeginAtomIdx(), mol.GetBondWithIdx(bond_idx).GetEndAtomIdx()) try: Chem.SanitizeMol(mol) break except: continue Chem.SanitizeMol(mol) return mol except: # 最后尝试移除氢原子(作为最后的修复手段) try: mol = Chem.RemoveHs(mol) Chem.SanitizeMol(mol) return mol except: return None def mol_to_graph(mol): if not mol: return None try: Chem.SanitizeMol(mol) atom_feats = [atom.GetAtomicNum() for atom in mol.GetAtoms()] edge_index = [] for bond in mol.GetBonds(): i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx() edge_index += [(i, j), (j, i)] x = torch.tensor(atom_feats, dtype=torch.long).view(-1, 1) edge_index = torch.tensor(edge_index, dtype=torch.long).t().contiguous() return Data(x=x, edge_index=edge_index) except: return None def filter_valid_mols(mols): valid_mols = [] for mol in mols: if mol is None: continue try: Chem.SanitizeMol(mol) rdMolDescriptors.CalcExactMolWt(mol) valid_mols.append(mol) except: continue return valid_mols # ------------------------- 新增:分子片段过滤工具(改进版)------------------------- def filter_single_fragment_mols_improved(mol_list): valid_mols = [] for mol in mol_list: if mol is None: continue try: # 先尝试添加氢原子以确保分子完整性 mol = Chem.AddHs(mol) Chem.SanitizeMol(mol) # 使用更鲁棒的方法检测片段数 frags = Chem.GetMolFrags(mol, asMols=True, sanitizeFrags=False) if len(frags) == 1: # 进一步检查分子重量和原子数(过滤过小分子) mol_weight = rdMolDescriptors.CalcExactMolWt(mol) if mol_weight > 50 and mol.GetNumAtoms() > 5: valid_mols.append(mol) except: continue return valid_mols # ------------------------- 新增:苯环检测工具 ------------------------- def has_benzene_ring(mol): if mol is None: return False try: benzene_smarts = Chem.MolFromSmarts("c1ccccc1") return mol.HasSubstructMatch(benzene_smarts) except Exception as e: print(f"检测苯环失败: {e}") return False def filter_benzene_mols(mols): return [mol for mol in mols if has_benzene_ring(mol)] # ------------------------- 数据集 ------------------------- class MolecularGraphDataset(torch.utils.data.Dataset): def __init__(self, path): df = pd.read_excel(path, usecols=[0, 1, 2]) df.columns = ['SMILES', 'Concentration', 'Efficiency'] df.dropna(inplace=True) df['SMILES'] = df['SMILES'].apply(augment_mol) self.graphs, self.conditions = [], [] scaler = StandardScaler() cond_scaled = scaler.fit_transform(df[['Concentration', 'Efficiency']]) self.edge_stats = {'edge_counts': []} valid_count = 0 for i, row in df.iterrows(): graph = smiles_to_graph(row['SMILES']) if graph: atomic_nums = graph.x.squeeze().tolist() atomic_nums = validate_atomic_nums(atomic_nums) graph.x = torch.tensor(atomic_nums, dtype=torch.long).view(-1, 1) self.graphs.append(graph) self.conditions.append(torch.tensor(cond_scaled[i], dtype=torch.float32)) self.edge_stats['edge_counts'].append(graph.edge_index.shape[1] // 2) valid_count += 1 print(f"成功加载 {valid_count} 个分子") self.avg_edge_count = int(np.mean(self.edge_stats['edge_counts'])) if self.edge_stats['edge_counts'] else 20 print(f"真实图平均边数: {self.avg_edge_count}") def __len__(self): return len(self.graphs) def __getitem__(self, idx): return self.graphs[idx], self.conditions[idx] # ------------------------- 增强版生成器(含苯环条件编码) ------------------------- class EnhancedGraphGenerator(nn.Module): def __init__(self, noise_dim=16, condition_dim=2, benzene_condition_dim=1, hidden_dim=128, num_atoms=15): super().__init__() self.num_atoms = num_atoms self.benzene_embedding = nn.Embedding(2, hidden_dim) # 苯环条件嵌入 # 噪声和条件处理 self.initial_transform = nn.Sequential( nn.Linear(noise_dim + condition_dim + hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.BatchNorm1d(hidden_dim), nn.Linear(hidden_dim, hidden_dim) ) # 苯环专用生成路径 self.benzene_generator = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, 6 * 7) # 6个碳原子,7种可能的原子类型 ) # 分子其余部分生成路径 self.rest_generator = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, (num_atoms - 6) * 7) # 剩余原子 ) # 原子间连接预测 self.edge_predictor = nn.Sequential( nn.Linear(hidden_dim, hidden_dim), nn.LeakyReLU(0.2), nn.Linear(hidden_dim, num_atoms * num_atoms) # 原子间连接概率 ) self.tau = 0.8 self.valid_atoms = torch.tensor([1, 6, 7, 8, 16, 9, 17], dtype=torch.long) def forward(self, z, cond, benzene_condition): # 嵌入苯环条件 benzene_feat = self.benzene_embedding(benzene_condition) # 合并噪声和条件 x = torch.cat([z, cond, benzene_feat], dim=1) shared_repr = self.initial_transform(x) # 生成苯环部分(前6个原子) benzene_logits = self.benzene_generator(shared_repr).view(-1, 6, 7) benzene_probs = gumbel_softmax(benzene_logits, tau=self.tau, hard=False, dim=-1) # 强制前6个原子为碳原子(苯环) benzene_indices = torch.ones_like(benzene_probs.argmax(dim=-1)) * 1 # 碳的索引是1 benzene_nodes = self.valid_atoms[benzene_indices].float().view(-1, 6, 1) / 17.0 # 生成分子其余部分 if self.num_atoms > 6: rest_logits = self.rest_generator(shared_repr).view(-1, self.num_atoms - 6, 7) rest_probs = gumbel_softmax(rest_logits, tau=self.tau, hard=False, dim=-1) rest_indices = rest_probs.argmax(dim=-1) rest_nodes = self.valid_atoms[rest_indices].float().view(-1, self.num_atoms - 6, 1) / 17.0 # 合并苯环和其余部分 node_feats = torch.cat([benzene_nodes, rest_nodes], dim=1) else: node_feats = benzene_nodes return node_feats # ------------------------- 增强版判别器 ------------------------- class EnhancedGraphDiscriminator(nn.Module): def __init__(self, node_feat_dim=1, condition_dim=2): super().__init__() # 图卷积层 self.conv1 = GCNConv(node_feat_dim + 1, 64) self.bn1 = nn.BatchNorm1d(64) self.conv2 = GATConv(64, 32, heads=3, concat=False) self.bn2 = nn.BatchNorm1d(32) self.conv3 = GraphConv(32, 16) self.bn3 = nn.BatchNorm1d(16) # 注意力机制 self.attention = nn.Sequential( nn.Linear(16, 8), nn.Tanh(), nn.Linear(8, 1) ) # 条件处理 self.condition_processor = nn.Sequential( nn.Linear(condition_dim, 16), nn.LeakyReLU(0.2) ) # 分类器 self.classifier = nn.Sequential( nn.Linear(16 + 16, 16), nn.LeakyReLU(0.2), nn.Linear(16, 1) ) def forward(self, data, condition): x, edge_index, batch = data.x.float(), data.edge_index, data.batch # 提取键类型特征 bond_features = torch.zeros(x.size(0), 1).to(x.device) if hasattr(data, 'bond_types') and data.bond_types is not None: bond_types = data.bond_types for i in range(edge_index.size(1)): u, v = edge_index[0, i].item(), edge_index[1, i].item() bond_type = bond_types[i] if i < len(bond_types) else 1 bond_features[u] = max(bond_features[u], bond_type) bond_features[v] = max(bond_features[v], bond_type) # 合并原子特征和键特征 x = torch.cat([x, bond_features], dim=1) # 图卷积 x = self.conv1(x, edge_index) x = self.bn1(x).relu() x = self.conv2(x, edge_index) x = self.bn2(x).relu() x = self.conv3(x, edge_index) x = self.bn3(x).relu() # 注意力池化 attn_weights = self.attention(x).squeeze() attn_weights = torch.softmax(attn_weights, dim=0) x = global_mean_pool(x * attn_weights.unsqueeze(-1), batch) # 处理条件 cond_repr = self.condition_processor(condition) # 合并图表示和条件 x = torch.cat([x, cond_repr], dim=1) # 分类 return torch.sigmoid(self.classifier(x)) # ------------------------- 训练流程(含苯环条件) ------------------------- def train_gan(dataset_path, epochs=2000, batch_size=32, lr=1e-5, device=None, generator_class=EnhancedGraphGenerator): if device is None: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") print(f"使用设备: {device}") dataset = MolecularGraphDataset(dataset_path) if len(dataset) == 0: print("错误:数据集为空") return None, None def collate_fn(data_list): graphs, conditions = zip(*data_list) batch_graphs = Batch.from_data_list(graphs) batch_conditions = torch.stack(conditions) return batch_graphs, batch_conditions dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True, collate_fn=collate_fn) # 创建生成器和判别器 generator = generator_class(noise_dim=16, condition_dim=2).to(device) discriminator = EnhancedGraphDiscriminator().to(device) # 调整学习率比例,让生成器学习更快 g_opt = optim.Adam(generator.parameters(), lr=lr * 3, betas=(0.5, 0.999)) d_opt = optim.Adam(discriminator.parameters(), lr=lr, betas=(0.5, 0.999)) g_scheduler = optim.lr_scheduler.ReduceLROnPlateau(g_opt, 'min', patience=50, factor=0.5) d_scheduler = optim.lr_scheduler.ReduceLROnPlateau(d_opt, 'min', patience=50, factor=0.5) loss_fn = nn.BCELoss() use_amp = device.type == 'cuda' scaler = amp.GradScaler(enabled=use_amp) d_loss_history, g_loss_history = [], [] valid_mol_counts = [] fragment_counts = [] benzene_ratios = [] # 新增:记录含苯环分子比例 os.makedirs("results", exist_ok=True) os.chdir("results") for epoch in range(1, epochs + 1): generator.train() discriminator.train() total_d_loss, total_g_loss = 0.0, 0.0 for real_batch, conds in dataloader: real_batch = real_batch.to(device) conds = conds.to(device) batch_size = real_batch.num_graphs real_labels = torch.ones(batch_size, 1).to(device) fake_labels = torch.zeros(batch_size, 1).to(device) # 判别器训练 with amp.autocast(enabled=use_amp): d_real = discriminator(real_batch, conds) loss_real = loss_fn(d_real, real_labels) noise = torch.randn(batch_size, 16).to(device) # 生成苯环条件(50%概率要求含苯环) benzene_condition = torch.randint(0, 2, (batch_size,), device=device) fake_nodes = generator(noise, conds, benzene_condition).detach() fake_mols = [] for i in range(fake_nodes.shape[0]): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: fake_mols.append(mol_to_graph(mol)) if fake_mols: fake_batch = Batch.from_data_list(fake_mols).to(device) conds_subset = conds[:len(fake_mols)] d_fake = discriminator(fake_batch, conds_subset) loss_fake = loss_fn(d_fake, fake_labels[:len(fake_mols)]) d_loss = loss_real + loss_fake else: # 如果没有生成有效的分子,创建一个简单的分子作为占位符 mol = Chem.MolFromSmiles("CCO") fake_graph = mol_to_graph(mol) fake_batch = Batch.from_data_list([fake_graph]).to(device) d_fake = discriminator(fake_batch, conds[:1]) loss_fake = loss_fn(d_fake, fake_labels[:1]) d_loss = loss_real + loss_fake d_opt.zero_grad() scaler.scale(d_loss).backward() torch.nn.utils.clip_grad_norm_(discriminator.parameters(), 1.0) scaler.step(d_opt) scaler.update() total_d_loss += d_loss.item() # 生成器训练 with amp.autocast(enabled=use_amp): noise = torch.randn(batch_size, 16).to(device) benzene_condition = torch.randint(0, 2, (batch_size,), device=device) fake_nodes = generator(noise, conds, benzene_condition) fake_graphs = [] for i in range(fake_nodes.shape[0]): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) fake_graphs.append(Data(x=node_feats, edge_index=edge_index, bond_types=bond_types)) valid_fake_graphs = [] for graph in fake_graphs: if graph.edge_index.numel() == 0: graph.edge_index = torch.tensor([[0, 0]], dtype=torch.long).t().to(device) graph.bond_types = torch.tensor([1], dtype=torch.long).to(device) valid_fake_graphs.append(graph) fake_batch = Batch.from_data_list(valid_fake_graphs).to(device) g_loss = loss_fn(discriminator(fake_batch, conds), real_labels) g_opt.zero_grad() scaler.scale(g_loss).backward() torch.nn.utils.clip_grad_norm_(generator.parameters(), 1.0) scaler.step(g_opt) scaler.update() total_g_loss += g_loss.item() avg_d_loss = total_d_loss / len(dataloader) avg_g_loss = total_g_loss / len(dataloader) d_loss_history.append(avg_d_loss) g_loss_history.append(avg_g_loss) g_scheduler.step(avg_g_loss) d_scheduler.step(avg_d_loss) if epoch % 10 == 0: generator.eval() discriminator.eval() with torch.no_grad(): num_samples = 50 noise = torch.randn(num_samples, 16).to(device) conds = torch.randn(num_samples, 2).to(device) benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) # 强制生成含苯环 fake_nodes = generator(noise, conds, benzene_condition) d_real_scores, d_fake_scores = [], [] for i in range(min(num_samples, 10)): real_idx = np.random.randint(0, len(dataset)) real_graph, real_cond = dataset[real_idx] real_batch = Batch.from_data_list([real_graph]).to(device) real_cond = real_cond.unsqueeze(0).to(device) d_real = discriminator(real_batch, real_cond) d_real_scores.append(d_real.item()) node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol: fake_graph = mol_to_graph(mol) if fake_graph: fake_batch = Batch.from_data_list([fake_graph]).to(device) fake_cond = conds[i].unsqueeze(0) d_fake = discriminator(fake_batch, fake_cond) d_fake_scores.append(d_fake.item()) if d_real_scores and d_fake_scores: print(f"Epoch {epoch}: D_loss={avg_d_loss:.4f}, G_loss={avg_g_loss:.4f}") print(f"D_real评分: {np.mean(d_real_scores):.4f} ± {np.std(d_real_scores):.4f}") print(f"D_fake评分: {np.mean(d_fake_scores):.4f} ± {np.std(d_fake_scores):.4f}") print(f"学习率: G={g_opt.param_groups[0]['lr']:.8f}, D={d_opt.param_groups[0]['lr']:.8f}") else: print(f"Epoch {epoch}: D_loss={avg_d_loss:.4f}, G_loss={avg_g_loss:.4f}") generator.train() if epoch % 100 == 0: torch.save(generator.state_dict(), f"generator_epoch_{epoch}.pt") torch.save(discriminator.state_dict(), f"discriminator_epoch_{epoch}.pt") generator.eval() with torch.no_grad(): num_samples = 25 noise = torch.randn(num_samples, 16).to(device) conds = torch.randn(num_samples, 2).to(device) benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) # 强制含苯环 fake_nodes = generator(noise, conds, benzene_condition) generated_mols = [] for i in range(num_samples): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] edge_index, bond_types = generate_realistic_edges_improved( node_feats, dataset.avg_edge_count, atomic_nums ) mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = cached_calculate_tpsa(mol) generated_mols.append((mol, mol_weight, logp, tpsa)) except Exception as e: print(f"计算分子描述符时出错: {e}") # 过滤无效分子 valid_mols = filter_valid_mols([mol for mol, _, _, _ in generated_mols]) # 统计含苯环分子比例 benzene_mols = filter_benzene_mols(valid_mols) benzene_ratio = len(benzene_mols) / len(valid_mols) if valid_mols else 0 benzene_ratios.append(benzene_ratio) # 统计分子片段情况(使用改进的过滤函数) single_fragment_mols = filter_single_fragment_mols_improved(valid_mols) fragment_ratio = len(single_fragment_mols) / len(valid_mols) if valid_mols else 0 fragment_counts.append(fragment_ratio) valid_mol_counts.append(len(single_fragment_mols)) print(f"Epoch {epoch}: 生成{len(generated_mols)}个分子,初步过滤后保留{len(valid_mols)}个合法分子") print(f"Epoch {epoch}: 含苯环分子比例: {len(benzene_mols)}/{len(valid_mols)} = {benzene_ratio:.2%}") print( f"Epoch {epoch}: 单一片段分子比例: {len(single_fragment_mols)}/{len(valid_mols)} = {fragment_ratio:.2%}") if single_fragment_mols: filtered_mols = [] for mol in single_fragment_mols: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = cached_calculate_tpsa(mol) filtered_mols.append((mol, mol_weight, logp, tpsa)) except: continue if filtered_mols: filtered_mols.sort(key=lambda x: x[1]) mols = [mol for mol, _, _, _ in filtered_mols] legends = [ f"Mol {i + 1}\nMW: {mw:.2f}\nLogP: {logp:.2f}\nTPSA: {tpsa:.2f}" for i, (_, mw, logp, tpsa) in enumerate(filtered_mols) ] img = Draw.MolsToGridImage(mols, molsPerRow=5, subImgSize=(200, 200), legends=legends) img.save(f"generated_molecules_epoch_{epoch}.png") print(f"Epoch {epoch}: 保存{len(filtered_mols)}个单一片段分子的可视化结果") else: print(f"Epoch {epoch}: 过滤后无合法单一片段分子可显示") else: print(f"Epoch {epoch}: 未生成任何合法单一片段分子") generator.train() # 绘制损失曲线 plt.figure(figsize=(10, 5)) plt.plot(d_loss_history, label='Discriminator Loss') plt.plot(g_loss_history, label='Generator Loss') plt.xlabel('Epoch') plt.ylabel('Loss') plt.title('GAN Loss Curve') plt.legend() plt.savefig('gan_loss_curve.png') plt.close() # 绘制合法分子数量曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(valid_mol_counts) + 1)], valid_mol_counts, label='Valid Single-Fragment Molecules') plt.xlabel('Epoch') plt.ylabel('Number of Valid Molecules') plt.title('Number of Valid Single-Fragment Molecules Generated per Epoch') plt.legend() plt.savefig('valid_molecules_curve.png') plt.close() # 绘制分子片段比例曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(fragment_counts) + 1)], fragment_counts, label='Single-Fragment Ratio') plt.xlabel('Epoch') plt.ylabel('Single-Fragment Molecules Ratio') plt.title('Ratio of Single-Fragment Molecules in Generated Molecules') plt.legend() plt.savefig('fragment_ratio_curve.png') plt.close() # 绘制苯环比例曲线 plt.figure(figsize=(10, 5)) plt.plot([i * 100 for i in range(1, len(benzene_ratios) + 1)], benzene_ratios, label='Benzene Ring Ratio') plt.xlabel('Epoch') plt.ylabel('Benzene Ring Molecules Ratio') plt.title('Ratio of Molecules Containing Benzene Rings') plt.legend() plt.savefig('benzene_ratio_curve.png') plt.close() return generator, discriminator # ------------------------- 参考分子处理与批量生成(改进版)------------------------- def process_reference_smiles(reference_smiles): ref_graph = smiles_to_graph(reference_smiles) if ref_graph is None: raise ValueError("参考分子SMILES转换图结构失败,请检查SMILES合法性") ref_concentration = 1.0 ref_efficiency = 0.8 ref_condition = torch.tensor([ref_concentration, ref_efficiency], dtype=torch.float32) return ref_graph, ref_condition def generate_molecules_with_reference_improved(generator, reference_smiles, num_samples=1000, noise_type="gaussian", force_benzene=True, device=None): if device is None: device = torch.device("cuda" if torch.cuda.is_available() else "cpu") ref_graph, ref_condition = process_reference_smiles(reference_smiles) ref_condition_batch = ref_condition.unsqueeze(0).repeat(num_samples, 1).to(device) if noise_type.lower() == "gaussian": noise = torch.randn(num_samples, 16).to(device) elif noise_type.lower() == "uniform": noise = 2 * torch.rand(num_samples, 16).to(device) - 1 else: raise ValueError("噪声类型必须是 'gaussian' 或 'uniform'") generator.eval() generated_mols = [] # 强制生成含苯环分子 benzene_condition = torch.ones(num_samples, dtype=torch.long, device=device) if force_benzene else \ torch.randint(0, 2, (num_samples,), device=device) with torch.no_grad(): fake_nodes = generator(noise, ref_condition_batch, benzene_condition) for i in range(num_samples): node_feats = fake_nodes[i] atomic_nums = (node_feats.squeeze() * 17).cpu().numpy().round().astype(int) atomic_nums = validate_atomic_nums(atomic_nums) atomic_nums = [int(num) for num in atomic_nums] # 使用改进的边生成函数 edge_index, bond_types = generate_realistic_edges_improved( node_feats, 20, atomic_nums ) # 使用改进的分子构建函数 mol = build_valid_mol_improved(atomic_nums, edge_index, bond_types) if mol and mol.GetNumAtoms() > 0 and mol.GetNumBonds() > 0: generated_mols.append(mol) # 过滤无效分子 valid_mols = filter_valid_mols(generated_mols) print(f"使用{noise_type}噪声,基于参考分子生成 {num_samples} 个分子,初步过滤后有效分子: {len(valid_mols)}") # 过滤含苯环的分子(如果强制要求) if force_benzene: benzene_mols = filter_benzene_mols(valid_mols) print(f"进一步过滤后,含苯环分子数量: {len(benzene_mols)}") print(f"含苯环分子比例: {len(benzene_mols) / len(valid_mols):.2%}") valid_mols = benzene_mols # 使用改进的单一片段过滤函数 single_fragment_mols = filter_single_fragment_mols_improved(valid_mols) print(f"进一步过滤后,单一片段分子数量: {len(single_fragment_mols)}") # 计算片段比例 fragment_ratio = len(single_fragment_mols) / len(valid_mols) if valid_mols else 0 print(f"单一片段分子比例: {fragment_ratio:.2%}") return single_fragment_mols # ------------------------- 分子属性计算与保存 ------------------------- def calculate_molecular_properties(mols): properties = [] for mol in mols: try: mol_weight = rdMolDescriptors.CalcExactMolWt(mol) logp = rdMolDescriptors.CalcCrippenDescriptors(mol)[0] tpsa = rdMolDescriptors.CalcTPSA(mol) hba = rdMolDescriptors.CalcNumHBA(mol) hbd = rdMolDescriptors.CalcNumHBD(mol) rot_bonds = rdMolDescriptors.CalcNumRotatableBonds(mol) n_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 7) o_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 8) s_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 16) p_count = sum(1 for atom in mol.GetAtoms() if atom.GetAtomicNum() == 15) frags = Chem.GetMolFrags(mol, asMols=True) fragment_count = len(frags) # 检测苯环 has_benzene = has_benzene_ring(mol) properties.append({ 'SMILES': Chem.MolToSmiles(mol), 'MW': mol_weight, 'LogP': logp, 'TPSA': tpsa, 'HBA': hba, 'HBD': hbd, 'RotBonds': rot_bonds, 'N_count': n_count, 'O_count': o_count, 'S_count': s_count, 'P_count': p_count, 'FragmentCount': fragment_count, 'HasBenzene': has_benzene }) except Exception as e: print(f"计算分子属性时出错: {e}") continue return properties def save_molecules(mols, prefix="generated", noise_type="gaussian"): if not mols: print("没有分子可保存") return subdir = f"{prefix}_{noise_type}" os.makedirs(subdir, exist_ok=True) if len(mols) <= 100: legends = [f"Mol {i + 1}" for i in range(len(mols))] img = Draw.MolsToGridImage(mols, molsPerRow=5, subImgSize=(300, 300), legends=legends) img.save(f"{subdir}/{prefix}_{noise_type}_molecules.png") properties = calculate_molecular_properties(mols) df = pd.DataFrame(properties) df.to_csv(f"{subdir}/{prefix}_{noise_type}_properties.csv", index=False) with open(f"{subdir}/{prefix}_{noise_type}_smiles.smi", "w") as f: for props in properties: f.write(f"{props['SMILES']}\n") print(f"已保存 {len(mols)} 个单一片段分子到目录: {subdir}") # ------------------------- 新增:生成结果分析工具 ------------------------- def analyze_generated_molecules(mols_gaussian, mols_uniform): print("\n===== 生成分子分析报告 =====") count_gaussian = len(mols_gaussian) count_uniform = len(mols_uniform) print(f"高斯噪声生成单一片段分子: {count_gaussian}") print(f"均匀噪声生成单一片段分子: {count_uniform}") def calculate_avg_properties(mols): if not mols: return {} props = calculate_molecular_properties(mols) avg_props = {} for key in props[0].keys(): if key != 'SMILES': avg_props[key] = sum(p[key] for p in props) / len(props) return avg_props avg_gaussian = calculate_avg_properties(mols_gaussian) avg_uniform = calculate_avg_properties(mols_uniform) if avg_gaussian and avg_uniform: print("\n高斯噪声生成分子的平均属性:") for key, value in avg_gaussian.items(): print(f" {key}: {value:.2f}") print("\n均匀噪声生成分子的平均属性:") for key, value in avg_uniform.items(): print(f" {key}: {value:.2f}") print("\n属性差异 (均匀 - 高斯):") for key in avg_gaussian.keys(): if key != 'SMILES': diff = avg_uniform[key] - avg_gaussian[key] print(f" {key}: {diff:+.2f}") if mols_gaussian and mols_uniform: properties = ['MW', 'LogP', 'TPSA', 'HBA', 'HBD', 'RotBonds'] plt.figure(figsize=(15, 10)) for i, prop in enumerate(properties, 1): plt.subplot(2, 3, i) gaussian_vals = [p[prop] for p in calculate_molecular_properties(mols_gaussian)] uniform_vals = [p[prop] for p in calculate_molecular_properties(mols_uniform)] plt.hist(gaussian_vals, bins=20, alpha=0.5, label='Gaussian') plt.hist(uniform_vals, bins=20, alpha=0.5, label='Uniform') plt.title(f'{prop} Distribution') plt.xlabel(prop) plt.ylabel('Frequency') plt.legend() plt.tight_layout() plt.savefig('molecular_property_distribution.png') plt.close() print("\n分子属性分布图已保存为 'molecular_property_distribution.png'") # ------------------------- 新增:分子可视化工具 ------------------------- def visualize_molecules_grid(molecules, num_per_row=5, filename="molecules_grid.png", legends=None): """创建高质量的分子网格可视化图""" if not molecules: print("没有分子可可视化") return if legends is None: legends = [f"Molecule {i + 1}" for i in range(len(molecules))] try: img = Draw.MolsToGridImage( molecules, molsPerRow=num_per_row, subImgSize=(300, 300), legends=legends, useSVG=False, highlightAtomLists=None, highlightBondLists=None ) img.save(filename) print(f"分子网格图已保存至: {filename}") return img except Exception as e: print(f"生成分子网格图时出错: {e}") return None # ------------------------- 主函数 ------------------------- def main(): print("=" * 80) print("基于合法SMILES的分子生成GAN系统(改进版)") print("=" * 80) dataset_path = "D:\python\pythonProject1\DATA\Inhibitor1368_data.xlsx" if not os.path.exists(dataset_path): print(f"错误:数据集文件 '{dataset_path}' 不存在!") exit(1) print(f"开始加载数据集: {dataset_path}") print("=" * 80) print("开始训练分子生成GAN...") generator, discriminator = train_gan( dataset_path=dataset_path, epochs=200, batch_size=32, lr=1e-5, ) print("=" * 80) # 设置参考缓蚀剂分子(确保是单一片段含苯环的有效分子) reference_smiles = "NCCNCc1ccc(O)c2ncccc12" if generator: print("训练完成!模型和生成结果已保存") print("生成的分子可视化结果在'results'目录") print("损失曲线已保存为'gan_loss_curve.png'") print("合法分子数量曲线已保存为'valid_molecules_curve.png'") print("分子片段比例曲线已保存为'fragment_ratio_curve.png'") print("苯环比例曲线已保存为'benzene_ratio_curve.png'") # 基于参考分子生成1000个新分子(高斯噪声,强制含苯环) print("\n开始基于参考分子生成新分子(高斯噪声,强制含苯环)...") gaussian_mols = generate_molecules_with_reference_improved( generator, reference_smiles, num_samples=1000, noise_type="gaussian", force_benzene=True # 强制生成含苯环分子 ) save_molecules(gaussian_mols, prefix="ref_based", noise_type="gaussian_benzene") # 可视化最佳分子 if gaussian_mols: # 计算每个分子的QED分数 qed_scores = [] for mol in gaussian_mols: try: qed_scores.append(rdMolDescriptors.CalcQED(mol)) except: qed_scores.append(0) # 按QED分数排序 sorted_indices = sorted(range(len(qed_scores)), key=lambda i: qed_scores[i], reverse=True) top_molecules = [gaussian_mols[i] for i in sorted_indices[:25]] top_legends = [f"QED: {qed_scores[i]:.3f}" for i in sorted_indices[:25]] # 可视化 visualize_molecules_grid( top_molecules, num_per_row=5, filename="top_molecules_by_qed.png", legends=top_legends ) print("已生成并保存最佳分子的可视化结果") # 基于参考分子生成1000个新分子(均匀噪声,强制含苯环) print("\n开始基于参考分子生成新分子(均匀噪声,强制含苯环)...") uniform_mols = generate_molecules_with_reference_improved( generator, reference_smiles, num_samples=1000, noise_type="uniform", force_benzene=True # 强制生成含苯环分子 ) save_molecules(uniform_mols, prefix="ref_based", noise_type="uniform_benzene") # 分析两种噪声生成的分子差异 if gaussian_mols and uniform_mols: analyze_generated_molecules(gaussian_mols, uniform_mols) else: print("训练过程中未生成合法分子,请检查数据") print("=" * 80) if __name__ == "__main__": main()

zip

最新推荐

recommend-type

langchain4j-anthropic-spring-boot-starter-0.31.0.jar中文文档.zip

1、压缩文件中包含: 中文文档、jar包下载地址、Maven依赖、Gradle依赖、源代码下载地址。 2、使用方法: 解压最外层zip,再解压其中的zip包,双击 【index.html】 文件,即可用浏览器打开、进行查看。 3、特殊说明: (1)本文档为人性化翻译,精心制作,请放心使用; (2)只翻译了该翻译的内容,如:注释、说明、描述、用法讲解 等; (3)不该翻译的内容保持原样,如:类名、方法名、包名、类型、关键字、代码 等。 4、温馨提示: (1)为了防止解压后路径太长导致浏览器无法打开,推荐在解压时选择“解压到当前文件夹”(放心,自带文件夹,文件不会散落一地); (2)有时,一套Java组件会有多个jar,所以在下载前,请仔细阅读本篇描述,以确保这就是你需要的文件。 5、本文件关键字: jar中文文档.zip,java,jar包,Maven,第三方jar包,组件,开源组件,第三方组件,Gradle,中文API文档,手册,开发手册,使用手册,参考手册。
recommend-type

TMS320F28335电机控制程序详解:BLDC、PMSM无感有感及异步VF源代码与开发资料

TMS320F28335这款高性能数字信号处理器(DSP)在电机控制领域的应用,涵盖了BLDC(无刷直流电机)、PMSM(永磁同步电机)的无感有感控制以及异步VF(变频调速)程序。文章不仅解释了各类型的电机控制原理,还提供了完整的开发资料,包括源代码、原理图和说明文档,帮助读者深入了解其工作原理和编程技巧。 适合人群:从事电机控制系统开发的技术人员,尤其是对TMS320F28335感兴趣的工程师。 使用场景及目标:适用于需要掌握TMS320F28335在不同电机控制应用场景下具体实现方法的专业人士,旨在提高他们对该微控制器的理解和实际操作能力。 其他说明:文中提供的开发资料为读者提供了从硬件到软件的全面支持,有助于加速项目开发进程并提升系统性能。
recommend-type

基于爬山搜索法的风力发电MPPT控制Simulink仿真:定步长与变步长算法性能对比 - 爬山搜索法 最新版

基于爬山搜索法的风力发电最大功率点追踪(MPPT)控制的Simulink仿真模型,重点比较了定步长和变步长算法在不同风速条件下的表现。文中展示了两种算法的具体实现方法及其优缺点。定步长算法虽然结构简单、计算量小,但在风速突变时响应较慢,存在明显的稳态振荡。相比之下,变步长算法能够根据功率变化动态调整步长,表现出更快的响应速度和更高的精度,尤其在风速突变时优势明显。实验数据显示,变步长算法在风速从8m/s突增至10m/s的情况下,仅用0.3秒即可稳定,功率波动范围仅为±15W,而定步长算法则需要0.8秒,功率波动达到±35W。 适合人群:从事风力发电研究的技术人员、对MPPT控制感兴趣的工程技术人员以及相关专业的高校师生。 使用场景及目标:适用于风力发电系统的设计与优化,特别是需要提高系统响应速度和精度的场合。目标是在不同风速条件下,选择合适的MPPT算法以最大化风能利用率。 其他说明:文章还讨论了定步长算法在风速平稳情况下的优势,提出了根据不同应用场景灵活选择或组合使用这两种算法的建议。
recommend-type

Visual C++.NET编程技术实战指南

根据提供的文件信息,可以生成以下知识点: ### Visual C++.NET编程技术体验 #### 第2章 定制窗口 - **设置窗口风格**:介绍了如何通过编程自定义窗口的外观和行为。包括改变窗口的标题栏、边框样式、大小和位置等。这通常涉及到Windows API中的`SetWindowLong`和`SetClassLong`函数。 - **创建六边形窗口**:展示了如何创建一个具有特殊形状边界的窗口,这类窗口不遵循标准的矩形形状。它需要使用`SetWindowRgn`函数设置窗口的区域。 - **创建异形窗口**:扩展了定制窗口的内容,提供了创建非标准形状窗口的方法。这可能需要创建一个不规则的窗口区域,并将其应用到窗口上。 #### 第3章 菜单和控制条高级应用 - **菜单编程**:讲解了如何创建和修改菜单项,处理用户与菜单的交互事件,以及动态地添加或删除菜单项。 - **工具栏编程**:阐述了如何使用工具栏,包括如何创建工具栏按钮、分配事件处理函数,并实现工具栏按钮的响应逻辑。 - **状态栏编程**:介绍了状态栏的创建、添加不同类型的指示器(如文本、进度条等)以及状态信息的显示更新。 - **为工具栏添加皮肤**:展示了如何为工具栏提供更加丰富的视觉效果,通常涉及到第三方的控件库或是自定义的绘图代码。 #### 第5章 系统编程 - **操作注册表**:解释了Windows注册表的结构和如何通过程序对其进行读写操作,这对于配置软件和管理软件设置非常关键。 - **系统托盘编程**:讲解了如何在系统托盘区域创建图标,并实现最小化到托盘、从托盘恢复窗口的功能。 - **鼠标钩子程序**:介绍了钩子(Hook)技术,特别是鼠标钩子,如何拦截和处理系统中的鼠标事件。 - **文件分割器**:提供了如何将文件分割成多个部分,并且能够重新组合文件的技术示例。 #### 第6章 多文档/多视图编程 - **单文档多视**:展示了如何在同一个文档中创建多个视图,这在文档编辑软件中非常常见。 #### 第7章 对话框高级应用 - **实现无模式对话框**:介绍了无模式对话框的概念及其应用场景,以及如何实现和管理无模式对话框。 - **使用模式属性表及向导属性表**:讲解了属性表的创建和使用方法,以及如何通过向导性质的对话框引导用户完成多步骤的任务。 - **鼠标敏感文字**:提供了如何实现点击文字触发特定事件的功能,这在阅读器和编辑器应用中很有用。 #### 第8章 GDI+图形编程 - **图像浏览器**:通过图像浏览器示例,展示了GDI+在图像处理和展示中的应用,包括图像的加载、显示以及基本的图像操作。 #### 第9章 多线程编程 - **使用全局变量通信**:介绍了在多线程环境下使用全局变量进行线程间通信的方法和注意事项。 - **使用Windows消息通信**:讲解了通过消息队列在不同线程间传递信息的技术,包括发送消息和处理消息。 - **使用CriticalSection对象**:阐述了如何使用临界区(CriticalSection)对象防止多个线程同时访问同一资源。 - **使用Mutex对象**:介绍了互斥锁(Mutex)的使用,用以同步线程对共享资源的访问,保证资源的安全。 - **使用Semaphore对象**:解释了信号量(Semaphore)对象的使用,它允许一个资源由指定数量的线程同时访问。 #### 第10章 DLL编程 - **创建和使用Win32 DLL**:介绍了如何创建和链接Win32动态链接库(DLL),以及如何在其他程序中使用这些DLL。 - **创建和使用MFC DLL**:详细说明了如何创建和使用基于MFC的动态链接库,适用于需要使用MFC类库的场景。 #### 第11章 ATL编程 - **简单的非属性化ATL项目**:讲解了ATL(Active Template Library)的基础使用方法,创建一个不使用属性化组件的简单项目。 - **使用ATL开发COM组件**:详细阐述了使用ATL开发COM组件的步骤,包括创建接口、实现类以及注册组件。 #### 第12章 STL编程 - **list编程**:介绍了STL(标准模板库)中的list容器的使用,讲解了如何使用list实现复杂数据结构的管理。 #### 第13章 网络编程 - **网上聊天应用程序**:提供了实现基本聊天功能的示例代码,包括客户端和服务器的通信逻辑。 - **简单的网页浏览器**:演示了如何创建一个简单的Web浏览器程序,涉及到网络通信和HTML解析。 - **ISAPI服务器扩展编程**:介绍了如何开发ISAPI(Internet Server API)服务器扩展来扩展IIS(Internet Information Services)的功能。 #### 第14章 数据库编程 - **ODBC数据库编程**:解释了ODBC(开放数据库互联)的概念,并提供了使用ODBC API进行数据库访问的示例。 - **ADO编程**:介绍了ADO(ActiveX Data Objects)技术,讲解了如何使用ADO进行数据库的增删改查等操作。 #### 第15章 HTML帮助系统编程 - **创建HTML帮助文件**:讲解了如何创建HTML帮助文件(.chm),这是微软帮助系统的一种形式。 - **区分上下文帮助**:提供了在应用程序中实现上下文敏感帮助的技术示例。 #### 第16章 安装和部署项目 - **用安装程序向导创建安装项目**:介绍了如何使用Visual Studio中的安装和部署项目向导,快速创建安装程序。 - **部署应用程序**:阐述了部署应用程序到目标系统的过程,以及如何确保应用程序的正确安装和运行。 以上内容涵盖了Visual C++.NET编程技术体验一书中涉及的各个技术点,并对每个章节中的示例做了详细的说明。这些知识点对于理解和掌握Visual C++.NET编程非常重要,并能够帮助读者在实际项目开发中应用这些技术。
recommend-type

HarmonyOS内核深度探秘:优化自由行旅游系统的策略

# 1. HarmonyOS内核概述与旅游系统需求分析 ## 1.1 HarmonyOS内核概述 HarmonyOS(鸿蒙操作系统)是由华为开发的一个分布式操作系统,旨在提供跨多种设备的无缝体验。其内核设计理念强调高效、安全和模块化,以微内核架构和分布式能力
recommend-type

tkinter模块所有控件

<think>好的,我现在需要回答用户关于Tkinter模块中所有控件列表的问题。首先,我需要回忆Tkinter的基础知识,确保自己了解各个控件的名称和用途。用户希望得到所有支持的控件列表,所以必须全面覆盖,不能遗漏。 首先,我应该从Tkinter的标准控件开始。常见的控件包括Label、Button、Entry这些基础部件。然后是Frame,用于布局,还有Canvas用于绘图。接下来是Checkbutton、Radiobutton,这些属于选择类控件。Listbox和Scrollbar通常一起使用,处理滚动内容。还有Scale(滑块)、Spinbox、Menu、Menubutton这些可能
recommend-type

局域网五子棋游戏:娱乐与聊天的完美结合

标题“网络五子棋”和描述“适合于局域网之间娱乐和聊天!”以及标签“五子棋 网络”所涉及的知识点主要围绕着五子棋游戏的网络版本及其在局域网中的应用。以下是详细的知识点: 1. 五子棋游戏概述: 五子棋是一种两人对弈的纯策略型棋类游戏,又称为连珠、五子连线等。游戏的目标是在一个15x15的棋盘上,通过先后放置黑白棋子,使得任意一方先形成连续五个同色棋子的一方获胜。五子棋的规则简单,但策略丰富,适合各年龄段的玩家。 2. 网络五子棋的意义: 网络五子棋是指可以在互联网或局域网中连接进行对弈的五子棋游戏版本。通过网络版本,玩家不必在同一地点即可进行游戏,突破了空间限制,满足了现代人们快节奏生活的需求,同时也为玩家们提供了与不同对手切磋交流的机会。 3. 局域网通信原理: 局域网(Local Area Network,LAN)是一种覆盖较小范围如家庭、学校、实验室或单一建筑内的计算机网络。它通过有线或无线的方式连接网络内的设备,允许用户共享资源如打印机和文件,以及进行游戏和通信。局域网内的计算机之间可以通过网络协议进行通信。 4. 网络五子棋的工作方式: 在局域网中玩五子棋,通常需要一个客户端程序(如五子棋.exe)和一个服务器程序。客户端负责显示游戏界面、接受用户输入、发送落子请求给服务器,而服务器负责维护游戏状态、处理玩家的游戏逻辑和落子请求。当一方玩家落子时,客户端将该信息发送到服务器,服务器确认无误后将更新后的棋盘状态传回给所有客户端,更新显示。 5. 五子棋.exe程序: 五子棋.exe是一个可执行程序,它使得用户可以在个人计算机上安装并运行五子棋游戏。该程序可能包含了游戏的图形界面、人工智能算法(如果支持单机对战AI的话)、网络通信模块以及游戏规则的实现。 6. put.wav文件: put.wav是一个声音文件,很可能用于在游戏进行时提供声音反馈,比如落子声。在网络环境中,声音文件可能被用于提升玩家的游戏体验,尤其是在局域网多人游戏场景中。当玩家落子时,系统会播放.wav文件中的声音,为游戏增添互动性和趣味性。 7. 网络五子棋的技术要求: 为了确保多人在线游戏的顺利进行,网络五子棋需要具备一些基本的技术要求,包括但不限于稳定的网络连接、高效的数据传输协议(如TCP/IP)、以及安全的数据加密措施(如果需要的话)。此外,还需要有一个良好的用户界面设计来提供直观和舒适的用户体验。 8. 社交与娱乐: 网络五子棋除了是一个娱乐游戏外,它还具有社交功能。玩家可以通过游戏内的聊天系统进行交流,分享经验和策略,甚至通过网络寻找新的朋友。这使得网络五子棋不仅是一个个人娱乐工具,同时也是一种社交活动。 总结来说,网络五子棋结合了五子棋游戏的传统魅力和现代网络技术,使得不同地区的玩家能够在局域网内进行娱乐和聊天,既丰富了人们的娱乐生活,又加强了人际交流。而实现这一切的基础在于客户端程序的设计、服务器端的稳定运行、局域网的高效通信,以及音效文件增强的游戏体验。
recommend-type

自由行旅游新篇章:HarmonyOS技术融合与系统架构深度解析

# 1. HarmonyOS技术概述 ## 1.1 HarmonyOS的起源与发展 HarmonyOS(鸿蒙操作系统)由华为公司开发,旨在构建全场景分布式OS,以应对不同设备间的互联问题。自从2019年首次发布以来,HarmonyOS迅速成长,并迅速应用于智能手机、平板、智能穿戴、车载设备等多种平台。该系
recommend-type

足底支撑相到达73%是什么问题

<think>好的,我现在要处理用户的问题:“足底支撑相达到73%可能的原因及影响”。首先,需要理解用户提到的“足底支撑相73%”具体指什么。足底支撑相通常指步态周期中脚与地面接触的时间占比,正常范围大约在60%-62%左右。如果达到73%,明显超出正常范围,可能意味着步态异常。 接下来,需要确定可能的原因。可能的原因包括生物力学异常,比如足弓异常(扁平足或高弓足)、踝关节活动度受限,或者肌肉力量不足,特别是小腿和足部肌肉。另外,神经系统疾病如脑卒中或帕金森病可能影响步态控制。骨骼关节问题如关节炎、髋膝关节病变也可能导致支撑时间延长。还有代偿性步态,比如因疼痛或受伤而改变步态模式。 然后是
recommend-type

宾馆预约系统开发与优化建议

宾馆预约系统是一个典型的在线服务应用,它允许用户通过互联网平台预定宾馆房间。这种系统通常包含多个模块,比如用户界面、房态管理、预订处理、支付处理和客户评价等。从技术层面来看,构建一个宾馆预约系统涉及到众多的IT知识和技术细节,下面将详细说明。 ### 标题知识点 - 宾馆预约系统 #### 1. 系统架构设计 宾馆预约系统作为一个完整的应用,首先需要进行系统架构设计,决定其采用的软件架构模式,如B/S架构或C/S架构。此外,系统设计还需要考虑扩展性、可用性、安全性和维护性。一般会采用三层架构,包括表示层、业务逻辑层和数据访问层。 #### 2. 前端开发 前端开发主要负责用户界面的设计与实现,包括用户注册、登录、房间搜索、预订流程、支付确认、用户反馈等功能的页面展示和交互设计。常用的前端技术栈有HTML, CSS, JavaScript, 以及各种前端框架如React, Vue.js或Angular。 #### 3. 后端开发 后端开发主要负责处理业务逻辑,包括用户管理、房间状态管理、订单处理等。后端技术包括但不限于Java (使用Spring Boot框架), Python (使用Django或Flask框架), PHP (使用Laravel框架)等。 #### 4. 数据库设计 数据库设计对系统的性能和可扩展性至关重要。宾馆预约系统可能需要设计的数据库表包括用户信息表、房间信息表、预订记录表、支付信息表等。常用的数据库系统有MySQL, PostgreSQL, MongoDB等。 #### 5. 网络安全 网络安全是宾馆预约系统的重要考虑因素,包括数据加密、用户认证授权、防止SQL注入、XSS攻击、CSRF攻击等。系统需要实现安全的认证机制,比如OAuth或JWT。 #### 6. 云服务和服务器部署 现代的宾馆预约系统可能部署在云平台上,如AWS, Azure, 腾讯云或阿里云。在云平台上,系统可以按需分配资源,提高系统的稳定性和弹性。 #### 7. 付款接口集成 支付模块需要集成第三方支付接口,如支付宝、微信支付、PayPal等,需要处理支付请求、支付状态确认、退款等业务。 #### 8. 接口设计与微服务 系统可能采用RESTful API或GraphQL等接口设计方式,提供服务的微服务化,以支持不同设备和服务的接入。 ### 描述知识点 - 这是我个人自己做的 请大家帮忙修改哦 #### 个人项目经验与团队合作 描述中的这句话暗示了该宾馆预约系统可能是由一个个人开发者创建的。个人开发和团队合作在软件开发流程中有着显著的不同。个人开发者需要关注的方面包括项目管理、需求分析、代码质量保证、测试和部署等。而在团队合作中,每个成员会承担不同的职责,需要有效的沟通和协作。 #### 用户反馈与迭代 描述还暗示了该系统目前处于需要外部反馈和修改的阶段。这表明系统可能还处于开发或测试阶段,需要通过用户的实际使用反馈来不断迭代改进。 ### 标签知识点 - 200 #### 未提供信息 “200”这个标签可能指的是HTTP状态码中表示请求成功(OK)的200状态码。但是,由于没有提供更多的上下文信息,无法进一步分析其在本例中的具体含义。 ### 压缩包子文件的文件名称列表知识点 - 1111 #### 文件命名与管理 “1111”这个文件名称可能是一个版本号、日期标记或者是一个简单的标识符。文件命名应当遵循一定的规则,以确保文件的可追溯性和管理的便利性。在软件开发过程中,合理组织文件和版本控制(如使用Git)是必不可少的。 综上所述,宾馆预约系统的开发是一项复杂的工程,它涉及前后端的开发、数据库设计、系统安全、接口设计等多个方面。开发者在开发过程中需要不断学习和应用各类IT知识,以确保系统能够安全、高效、稳定地运行。而对于个人开发项目,如何合理利用有限资源、高效地管理和优化项目过程也是至关重要的。