Writing a Cell Array to a Text File with Matlab

Matlab is a powerful numerical computing environment widely used in engineering computation, control design, signal processing, and communications. The cell array is a special Matlab data type that can hold elements of different types or sizes. Writing a cell array to a text file is a common need, whether for logging data, exchanging it with other programs, or saving it in a human-readable format. The packaged document "Write Cell Array to Text File.rar_cell" walks through how to do this with Matlab code.

1. Cell array basics:

A cell array is made up of cells, each of which can store data of any type (numeric values, strings, structures, and so on). Cell arrays are defined with braces {}, for example:

```matlab
C = {'John', 'Doe'; 1, 2};
```

This creates a 2x2 cell array C whose cells hold different data types.

2. Writing a cell array to a text file:

Matlab's `writecell` function (available since R2019a) writes a cell array directly to a text file; its companion `writematrix` handles ordinary numeric matrices and rejects cell arrays. Basic usage:

```matlab
C = {'Name', 'Age'; 'John', 30; 'Jane', 25};
writecell(C, 'output.txt');
```

This saves the cell array C to a text file named "output.txt". Cells of mixed types are converted to their text representations as they are written.

3. File-output options:

`writecell` accepts name-value options that control the format of the output file, for example:

- `Delimiter`: the separator placed between fields (for delimited text files the default is a comma).
- `QuoteStrings`: controls whether text entries are enclosed in double quotes.
- `WriteMode`: set to `'append'` to add the data to the end of an existing file instead of overwriting it.

There is no name-value option for the numeric display format; when you need one (for example, two decimal places), write the file with low-level I/O (`fopen`/`fprintf`) instead, as sketched at the end of this article.

4. Error handling:

Writing to a text file can fail in various ways: the target path may not exist, file permissions may be insufficient, or a cell's contents may not be writable. Matlab's error-handling mechanism can catch and handle these failures, for example:

```matlab
try
    writecell(C, 'output.txt');
catch ME
    disp(ME.message);
end
```

This attempts to write the cell array C and uses a try-catch block to catch and display any error message that occurs.

5. A practical pattern:

In real applications you may need to gather data from several sources into one cell array and then write it out in a single pass. One flexible approach is to loop over the cell array and write each cell to the appropriate row or column of the text file yourself; this gives full control over layout and formatting and suits more complex file-output tasks. A minimal sketch of this loop appears below.

The packaged document "Write Cell Array to Text File.rar_cell" provides step-by-step instructions and code examples for saving cell arrays to text files efficiently. Working through it builds the basic data-persistence skills for the Matlab environment, which carry over to a wide range of engineering and research scenarios.
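To make point 3 concrete, here is a minimal sketch of the `Delimiter` and `WriteMode` options; the file name and data are illustrative, not taken from the packaged code:

```matlab
C = {'Name', 'Age'; 'John', 30; 'Jane', 25};

% Tab-delimited output instead of the default comma
writecell(C, 'output.txt', 'Delimiter', 'tab');

% Append two more rows to the same file rather than overwriting it
C2 = {'Alice', 28; 'Bob', 41};
writecell(C2, 'output.txt', 'Delimiter', 'tab', 'WriteMode', 'append');
```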
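And here is a minimal sketch of the loop-based approach from point 5, using low-level I/O to get per-cell control over the numeric format (all names are illustrative assumptions, not the packaged implementation):

```matlab
% Write a mixed cell array to a tab-separated text file, cell by cell.
C = {'Name', 'Age'; 'John', 30.456; 'Jane', 25.1};

fid = fopen('output_loop.txt', 'w');
if fid == -1
    error('Could not open file for writing.');
end

for row = 1:size(C, 1)
    for col = 1:size(C, 2)
        val = C{row, col};
        if ischar(val)
            fprintf(fid, '%s', val);    % write text as-is
        else
            fprintf(fid, '%.2f', val);  % numbers with two decimal places
        end
        if col < size(C, 2)
            fprintf(fid, '\t');         % tab between columns
        end
    end
    fprintf(fid, '\n');                 % newline after each row
end

fclose(fid);
```

Because every cell passes through your own `fprintf` call, this pattern extends naturally to other layouts, such as writing each source's data to its own column or skipping empty cells.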
