
Syntax of str.contains in pandas

Posted: 2023-05-31 16:19:16 · Views: 189
### Answer 1:

In pandas, `str.contains` is a string method used to check whether each string in a column contains another string. Its syntax is:

```python
DataFrame['column_name'].str.contains('string_to_find', na=False)
```

Here, `DataFrame` is a data frame, `['column_name']` selects the column to search, `'string_to_find'` is the substring to look for, and `na=False` makes missing values come out as `False` rather than `NaN`. To search for multiple strings at once, you can use a regular expression.

### Answer 2:

In pandas, the `str.contains` function tests whether a pattern occurs within each string of a Series, and it can be used on string columns of a DataFrame by selecting the column first. The syntax is:

```python
Series.str.contains(pat, case=True, flags=0, na=nan, regex=True)
```

The parameters mean the following:

- `pat`: the pattern to search for within each string; may be a plain string or a regular expression.
- `case`: whether matching is case-sensitive. Defaults to `True`.
- `flags`: regular-expression flags (e.g. from the `re` module). Defaults to `0`.
- `na`: the value used in place of missing values. Defaults to `NaN`.
- `regex`: whether to treat `pat` as a regular expression. Defaults to `True`.

When called, the function checks every row of the Series against `pat` and returns a boolean Series of the same length: `True` where the string contains the pattern, `False` where it does not.

A few examples:

1. Checking whether a string occurs in a Series

Suppose we have the following Series:

```python
import pandas as pd

s = pd.Series(['apple', 'banana', 'orange', 'pear'])
```

To test whether each element contains a given string, use `str.contains`:

```python
# Check whether the string "apple" occurs in s
result = s.str.contains('apple')
print(result)
# Output:
# 0     True
# 1    False
# 2    False
# 3    False
# dtype: bool
```

Here we pass the string `'apple'` as the `pat` argument. The output shows that `'apple'` occurs only in the first element of `s`, so the first element is `True` and the rest are `False`.

2. Matching a regular expression against the elements of a Series

Next, an example using a regular expression:

```python
# Check which strings in s start with the letter "a"
result = s.str.contains('^a')
print(result)
# Output:
# 0     True
# 1    False
# 2    False
# 3    False
# dtype: bool
```

Here we pass the regular expression `'^a'` as `pat`, which matches strings beginning with the letter "a". Only `'apple'` starts with "a" (note that `'orange'` contains an "a" but does not start with one), so only the first element is `True`.

3. Checking whether a string occurs in a DataFrame column

`str.contains` lives on the `.str` accessor of a Series, so to use it with a DataFrame you first select a column, which yields a Series:

```python
# Create a DataFrame
df = pd.DataFrame({'fruits': ['apple', 'banana', 'orange', 'pear'],
                   'prices': [2.5, 3.2, 4.1, 1.5]})
```

Now we can test whether a string occurs in one of the DataFrame's columns:

```python
# Check whether the string "apple" occurs in the fruits column
result = df['fruits'].str.contains('apple')
print(result)
# Output:
# 0     True
# 1    False
# 2    False
# 3    False
# Name: fruits, dtype: bool
```

Here `df['fruits']` selects the `fruits` column, and calling `str.contains` on it checks whether each value contains `'apple'`. Only the first element does, so it is `True` and the rest are `False`.

In summary, `str.contains` is a convenient lookup function when working with string data in Series and DataFrame columns. Note that it treats the pattern as a regular expression by default, so set `regex=False` when you want a literal substring match and the pattern contains regex metacharacters.

### Answer 3:

In pandas, `str.contains` is a method for string matching. It lets you search the string column of a DataFrame or Series for a pattern and returns booleans indicating whether each value contains it.

The syntax of `str.contains` is:

```python
Series.str.contains(pat, case=True, flags=0, na=nan, regex=True)
```

where `pat` is the pattern to search for, either a plain string or a regular expression; `case` indicates whether matching is case-sensitive (default `True`); `flags` passes regular-expression flags; `na` determines how missing values are handled (default `NaN`); and `regex` indicates whether `pat` is a regular expression (default `True`).

`str.contains` returns a boolean Series with the same length as the calling object. For each element it indicates whether the string contains the given pattern: `True` if a match is found, `False` otherwise.

A simple example:

```python
import pandas as pd

data = {'text': ['Hello, World!', 'Foo Bar']}
df = pd.DataFrame(data)

# Use str.contains to find rows containing "llo"
result = df['text'].str.contains('llo')
print(result)
# Output:
# 0     True
# 1    False
# Name: text, dtype: bool
```

In this example we search for rows containing `"llo"` by passing the string `'llo'` as the pattern. The data is stored in a DataFrame with two text rows. The output shows that the first row contains `"llo"` and the second does not.
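The answers above describe the `case`, `regex`, and `na` parameters but only demonstrate the defaults. As a small supplementary sketch (the sample data is made up for illustration), here is how those three parameters change the result, and how the returned boolean Series is typically used to filter rows:

```python
import pandas as pd

s = pd.Series(['Apple pie', 'banana split', None, 'C++ tutorial'])

# case=False makes matching case-insensitive, so "Apple pie" matches "apple"
print(s.str.contains('apple', case=False, na=False).tolist())
# [True, False, False, False]

# regex=False matches the literal substring; "C++" would otherwise be
# interpreted as a (broken) regular expression and raise an error
print(s.str.contains('C++', regex=False, na=False).tolist())
# [False, False, False, True]

# na sets the result for missing values; with na=True the None element
# is reported as a match instead of NaN
print(s.str.contains('banana', na=True).tolist())
# [False, True, True, False]

# The boolean Series can be used directly for row filtering
print(s[s.str.contains('split', na=False)].tolist())
# ['banana split']
```

Passing `na=False` is a common default choice in practice, since it keeps the result usable as a boolean mask even when the column contains missing values.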

A3SS_events['first_flank_start'] = first_flank_start A3SS_events['first_flank_end'] = first_flank_end A3SS_events['second_flank_start'] = second_flank_start A3SS_events['second_flank_end'] = second_flank_end #add event ids A3SS_events['AS ID'] = "3ASS_" + A3SS_events.index.astype(str) #check to make sure there is enough information to do multiprocessing if that is desired if PROCESSES*4 > A3SS_events.shape[0]: threeASS_processes = 1 else: threeASS_processes = PROCESSES spliced_events['3ASS'], threeASS_ptms = project_ptms_onto_splice_events(A3SS_events, annotate_original_df=True, ptm_coordinates = ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', region_start_col = 'event_start', region_end_col = 'event_end', event_id_col = 'AS ID', dPSI_col=dPSI_col, sig_col = sig_col, gene_col = 'geneSymbol', extra_cols = extra_cols, coordinate_type=coordinate_type, start_coordinate_system = '0-based', taskbar_label = "3' ASS events", separate_modification_types=separate_modification_types, PROCESSES = threeASS_processes) threeASS_ptms['Event Type'] = '3ASS' spliced_ptms.append(threeASS_ptms) #identify ptms with altered flanking sequences if identify_flanking_sequences: print("Identifying flanking sequences for 3' ASS events.") threeASS_flanks = fs.get_flanking_changes_from_splice_data(A3SS_events, ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', spliced_region_start_col = 'event_start', spliced_region_end_col = 'event_end', first_flank_start_col = 'first_flank_start', first_flank_end_col = 'first_flank_end', second_flank_start_col = 'second_flank_start', second_flank_end_col = 'second_flank_end', dPSI_col=dPSI_col, sig_col = dPSI_col, gene_col = 'geneSymbol', event_id_col = 'AS ID', extra_cols = extra_cols, coordinate_type=coordinate_type, start_coordinate_system='0-based') threeASS_flanks['Event Type'] = '3ASS' spliced_flanks.append(threeASS_flanks) else: print("3' ASS event data (A3SS_events) not provided, skipping") if RI_events is not None: if 
RI_events['chr'].str.contains('chr').any(): RI_events['chr'] = RI_events['chr'].apply(lambda x: x[3:]) #add event id RI_events['AS ID'] = "RI_" + RI_events.index.astype(str) #check to make sure there is enough information to do multiprocessing if that is desired if PROCESSES*4 > RI_events.shape[0]: RI_processes = 1 else: RI_processes = PROCESSES spliced_events['RI'], RI_ptms = project_ptms_onto_splice_events(RI_events, annotate_original_df=True, ptm_coordinates = ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', region_start_col = 'upstreamEE', region_end_col = 'downstreamES', event_id_col = 'AS ID', dPSI_col=dPSI_col, sig_col = sig_col, gene_col = 'geneSymbol', coordinate_type=coordinate_type, start_coordinate_system='0-based', extra_cols = extra_cols, taskbar_label = 'Retained Intron Events', separate_modification_types=separate_modification_types, PROCESSES = RI_processes) RI_ptms['Event Type'] = 'RI' spliced_ptms.append(RI_ptms) #identify ptms with altered flanking sequences if identify_flanking_sequences: print('Identifying flanking sequences for retained intron events.') RI_flanks = fs.get_flanking_changes_from_splice_data(RI_events, ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', spliced_region_start_col = 'upstreamEE', spliced_region_end_col = 'downstreamES', first_flank_start_col = 'upstreamES', first_flank_end_col = 'upstreamEE', second_flank_start_col = 'downstreamES', second_flank_end_col = 'downstreamEE', dPSI_col=dPSI_col, sig_col = sig_col, gene_col = 'geneSymbol', event_id_col = 'AS ID', extra_cols = extra_cols, coordinate_type=coordinate_type, start_coordinate_system='0-based') RI_flanks['Event Type'] = 'RI' spliced_flanks.append(RI_flanks) if MXE_events is not None: if MXE_events['chr'].str.contains('chr').any(): MXE_events['chr'] = MXE_events['chr'].apply(lambda x: x[3:]) #check to make sure there is enough information to do multiprocessing if that is desired if PROCESSES*4 > MXE_events.shape[0]: MXE_processes = 1 
else: MXE_processes = PROCESSES #add AS ID MXE_events['AS ID'] = "MXE_" + MXE_events.index.astype(str) mxe_ptms = [] #first mxe exon spliced_events['MXE_Exon1'], MXE_Exon1_ptms = project_ptms_onto_splice_events(MXE_events, annotate_original_df=True, ptm_coordinates = ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', region_start_col = '1stExonStart_0base', region_end_col = '1stExonEnd', event_id_col = 'AS ID', dPSI_col=dPSI_col, sig_col = sig_col, gene_col = 'geneSymbol', coordinate_type=coordinate_type, start_coordinate_system = '0-based', taskbar_label = 'MXE, First Exon', extra_cols=extra_cols, separate_modification_types=separate_modification_types, PROCESSES = MXE_processes) MXE_Exon1_ptms['Event Type'] = 'MXE (First Exon)' mxe_ptms.append(MXE_Exon1_ptms) #second mxe exon spliced_events['MXE_Exon2'], MXE_Exon2_ptms = project_ptms_onto_splice_events(MXE_events, annotate_original_df=True, ptm_coordinates = ptm_coordinates, chromosome_col = 'chr', strand_col = 'strand', region_start_col = '2ndExonStart_0base', region_end_col = '2ndExonEnd', event_id_col = 'AS ID', dPSI_col=dPSI_col, sig_col = sig_col, gene_col = 'geneSymbol', extra_cols=extra_cols, coordinate_type=coordinate_type, start_coordinate_system='0-based', taskbar_label = 'MXE, Second Exon', separate_modification_types=separate_modification_types, PROCESSES = MXE_processes) MXE_Exon2_ptms['Event Type'] = 'MXE (Second Exon)' mxe_ptms.append(MXE_Exon2_ptms) #combine mxe ptms, and then drop any PTMs that were found in both MXE's mxe_ptms = pd.concat([MXE_Exon1_ptms, MXE_Exon2_ptms]) columns_to_check = ['UniProtKB Accession', 'Source of PTM', 'Residue', 'PTM Position in Isoform', 'Modification', 'Modification Class', 'Gene'] if dPSI_col is not None: columns_to_check.append('dPSI') if sig_col is not None: columns_to_check.append('Significance') if extra_cols is not None: columns_to_check += extra_cols mxe_ptms = mxe_ptms.drop_duplicates(subset = columns_to_check, keep = False) #flip dPSI values 
for second exon if dPSI_col is not None: mxe_ptms['dPSI'] = mxe_ptms.apply(lambda x: x['dPSI']* -1 if x['Event Type'] == 'MXE (Second Exon)' else x['dPSI'], axis = 1) #add mxe ptms to spliced_ptms spliced_ptms.append(mxe_ptms) spliced_ptms = pd.concat(spliced_ptms) if identify_flanking_sequences: spliced_flanks = pd.concat(spliced_flanks) return spliced_events, spliced_ptms, spliced_flanks else: return spliced_events, spliced_ptms #def project_ptms_onto_MAJIQ_dPSI(majiq_data, ptm_coordinates = None, coordinate_type = 'hg38', identify_flanking_sequences = False, dPSI_col = 'dPSI', sig_col = 'FDR', separate_modification_types = False, PROCESSES = 1): # print('in progress') # pass def add_splicegraph_info(psi_data, splicegraph, purpose = 'inclusion'): psi_data = psi_data[psi_data['splice_type'] != 'ME'].copy() if purpose == 'inclusion': #split exons into individual exons psi_data['Individual exon'] = psi_data['exons'].apply(lambda x: x.split(':')) psi_data = psi_data.explode('Individual exon').drop_duplicates() psi_data['Individual exon'] = psi_data['Individual exon'].astype(float) #add gene location information to psi data from spliceseq psi_data = psi_data.merge(splicegraph, left_on = ['symbol', 'Individual exon'], right_on = ['Symbol', 'Exon'], how = 'left') psi_data = psi_data.rename(columns = {'Chr_Start': 'spliced_region_start', 'Chr_Stop': 'spliced_region_end'}) return psi_data elif purpose == 'flanking': print('Not yet active. Please check back later.') else: raise ValueError('Purpose must be either inclusion or flanking. 
Please provide the correct purpose for the splicegraph information.') def project_ptms_onto_SpliceSeq(psi_data, splicegraph, gene_col ='symbol', dPSI_col = None, sig_col = None, extra_cols = None, coordinate_type = 'hg19', separate_modification_types = False, identify_flanking_sequences = False, flank_size = 5, ptm_coordinates = None, PROCESSES = 1, **kwargs): """ Given splice event quantification from SpliceSeq (such as what can be downloaded from TCGASpliceSeq), annotate with PTMs that are found in the differentially included regions. Parameters ---------- psi_data: pandas.DataFrame dataframe containing splice event quantification from SpliceSeq. Must contain the following columns: 'symbol', 'exons', 'splice_type'. splicegraph: pandas.DataFrame dataframe containing exon information from the splicegraph used during splice event quantification. Must contain the following columns: 'Symbol', 'Exon', 'Chr_Start', 'Chr_Stop'. gene_col: str column name in psi_data that contains the gene name. Default is 'symbol'. dPSI_col: str column name in psi_data that contains the delta PSI value for the splice event. Default is None, which will not include this information in the output. sig_col: str column name in psi_data that contains the significance value for the splice event. Default is None, which will not include this information in the output. extra_cols: list list of additional columns to include in the output dataframe. Default is None, which will not include any additional columns. coordinate_type: str indicates the coordinate system used for the start and end positions. Either hg38 or hg19. Default is 'hg19'. separate_modification_types: bool Indicate whether to store PTM sites with multiple modification types as multiple rows. For example, if a site at K100 was both an acetylation and methylation site, these will be separated into unique rows with the same site number but different modification types. Default is True. 
identify_flanking_sequences: bool Indicate whether to identify and return the flanking sequences for the splice events. Default is False. flank_size: int Size of the flanking sequence to extract from the splice event. Default is 5, which will extract 5 bases upstream and downstream of the splice event. Only relevant if identify_flanking_sequences is True. PROCESSES: int Number of processes to use for multiprocessing. Default is 1 (single processing). **kwargs: additional keyword arguments Additional keyword arguments to pass to the find_ptms_in_many_regions function, which will be fed into the filter_ptms() function from the helper module. These will be used to filter ptms with lower evidence. For example, if you want to filter PTMs based on the number of MS observations, you can add 'min_MS_observations = 2' to the kwargs. This will filter out any PTMs that have less than 2 MS observations. See the filter_ptms() function for more options. """ #load ptm data from config if not provided if ptm_coordinates is None: ptm_coordinates = pose_config.ptm_coordinates.copy() #check for any keyword arguments to use for filtering if kwargs: filter_arguments = helpers.extract_filter_kwargs(**kwargs) #check any excess unused keyword arguments, report them helpers.check_filter_kwargs(filter_arguments) #filter ptm coordinates file to include only ptms with desired evidence ptm_coordinates = helpers.filter_ptms(ptm_coordinates, **filter_arguments) #remove ME events from this analysis overlapping_columns = set(psi_data.columns).intersection({'Chromosome', 'Strand', 'Chr_Start', 'Chr_Stop'}) if len(overlapping_columns) > 0: #drop columns that will be added from splicegraph psi_data = psi_data.drop(columns=overlapping_columns) print('Removing ME events from analysis') spliced_data = psi_data.copy() spliced_data = spliced_data[spliced_data['splice_type'] != 'ME'].copy() #split exons into individual exons spliced_data['Individual exon'] = spliced_data['exons'].apply(lambda x: 
x.split(':')) spliced_data = spliced_data.explode('Individual exon').drop_duplicates() spliced_data['Individual exon'] = spliced_data['Individual exon'].astype(float) #add gene location information to psi data from spliceseq spliced_data = spliced_data.merge(splicegraph.copy(), left_on = ['symbol', 'Individual exon'], right_on = ['Symbol', 'Exon'], how = 'left') spliced_data = spliced_data.rename(columns = {'Chr_Start': 'spliced_region_start', 'Chr_Stop': 'spliced_region_end'}) print('Projecting PTMs onto SpliceSeq data') spliced_data, spliced_ptms = project_ptms_onto_splice_events(spliced_data, chromosome_col = 'Chromosome', strand_col = 'Strand', gene_col = 'symbol', region_start_col = 'spliced_region_start', region_end_col = 'spliced_region_end', event_id_col = 'as_id',dPSI_col = dPSI_col, sig_col = sig_col, extra_cols = extra_cols, separate_modification_types = separate_modification_types, coordinate_type = coordinate_type, PROCESSES = PROCESSES) ## add code for extracting flanking sequences (to do) if identify_flanking_sequences: altered_flanks = fs.get_flanking_changes_from_splicegraph(psi_data, splicegraph, dPSI_col = dPSI_col, sig_col = sig_col, extra_cols = extra_cols, gene_col = gene_col, coordinate_type=coordinate_type, flank_size = flank_size) return spliced_data, spliced_ptms, altered_flanks else: return spliced_data, spliced_ptms #def project_ptms_onto_TCGA_SpliceSeq(tcga_cancer = 'PRAD'): # """ # In progress. Will download and process TCGA SpliceSeq data for a specific cancer type, and project PTMs onto the spliced regions. # """ # print('Not yet active. 
Please check back later.') # pass def check_columns(splice_data, chromosome_col = None, strand_col = None, region_start_col = None, region_end_col = None, first_flank_start_col = None, first_flank_end_col = None, second_flank_start_col = None, second_flank_end_col = None, gene_col = None, dPSI_col = None, sig_col = None, event_id_col = None, extra_cols = None): """ Function to quickly check if the provided column names exist in the dataset and if they are the correct type of data """ expected_cols = [chromosome_col, strand_col, region_start_col, region_end_col, first_flank_start_col, first_flank_end_col, second_flank_start_col, second_flank_end_col, gene_col, dPSI_col, sig_col, event_id_col] expected_dtypes = [[str, object], [str,int, object], [int,float], [int,float], [int,float], [int,float], [int,float], [int,float], [str, object], float, float, None] #remove cols with None and the corresponding dtype entry expected_dtypes = [dtype for col, dtype in zip(expected_cols, expected_dtypes) if col is not None] expected_cols = [col for col in expected_cols if col is not None] #add extra columns to the expected columns list if extra_cols is not None: expected_cols += extra_cols expected_dtypes += [None]*len(extra_cols) #extra columns do not have dtype requirement #check to make sure columns exist in the dataframe if not all([x in splice_data.columns for x in expected_cols]): raise ValueError('Not all expected columns are present in the splice data. 
Please check the column names and provide the correct names for the following columns: {}'.format([x for x in expected_cols if x not in splice_data.columns])) #check to make sure columns are the correct data type for col, data_type in zip(expected_cols, expected_dtypes): if data_type is None: continue elif isinstance(data_type, list): if splice_data[col].dtype not in data_type: #try converting to the expected data type try: splice_data[col] = splice_data[col].astype(data_type[0]) except: raise ValueError('Column {} is not the expected data type. Expected data type is one of {}, but found data type {}'.format(col, data_type, splice_data[col].dtype)) else: if splice_data[col].dtype != data_type: #try converting to the expected data type try: splice_data[col] = splice_data[col].astype(data_type) except: raise ValueError('Column {} is not the expected data type. Expected data type is {}, but found data type {}'.format(col, data_type, splice_data[col].dtype))
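check_columns leans on the fact that a numpy dtype compares equal to the matching built-in Python type, so `df[col].dtype in [int, float]` works as a membership test, with `astype` as the fallback conversion. A standalone illustration of that behavior (toy column names, not part of the module):

```python
import pandas as pd

df = pd.DataFrame({
    'start': pd.Series([100, 200], dtype='int64'),  # integer column
    'chrom': ['1', '2'],                            # object (string) column
    'dPSI': ['0.1', '-0.3'],                        # numeric data stored as strings
})

# a numpy dtype compares equal to the corresponding Python type
assert df['start'].dtype in [int, float]
assert df['chrom'].dtype == object

# strings are not a numeric dtype, but can be coerced, mirroring the fallback above
assert df['dPSI'].dtype not in [int, float]
df['dPSI'] = df['dPSI'].astype(float)
assert df['dPSI'].dtype == float
```

If the coercion fails (e.g. a chromosome name like 'X' converted to float), `astype` raises, which is exactly the case the function turns into a descriptive ValueError.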

import baostock as bs
import pandas as pd
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from ta.momentum import RSIIndicator

def get_all_a_shares():
    """Get the list of all non-ST A-share codes"""
    # log into baostock
    bs.login()
    # query stock codes
    rs = bs.query_all_stock()
    data_list = []
    while (rs.error_code == '0') & rs.next():
        data_list.append(rs.get_row_data())
    # convert to DataFrame and filter
    df = pd.DataFrame(data_list, columns=rs.fields)
    df = df[df['type'] == '1']  # type 1 = A shares
    df = df[~df['code_name'].str.contains('ST')]  # drop ST stocks
    bs.logout()
    return df[['code', 'code_name']].values.tolist()

def get_rsi(stock):
    """Get the RSI of a single stock"""
    code, name = stock
    try:
        # log in (each thread needs its own session)
        bs.login()
        # fetch roughly the last 60 days of data (baostock requires an explicit date range)
        end_date = pd.Timestamp.now().strftime('%Y-%m-%d')
        start_date = (pd.Timestamp.now() - pd.Timedelta(days=60)).strftime('%Y-%m-%d')
        rs = bs.query_history_k_data(
            code,
            fields="date,close",
            start_date=start_date,
            end_date=end_date,
            frequency="d",
            adjustflag="2"  # adjusted close prices
        )
        # convert to DataFrame
        df = pd.DataFrame(rs.data, columns=rs.fields)
        if len(df) < 30 or df.empty:
            return None
        # type conversion
        df['close'] = df['close'].astype(float)
        if df['close'].iloc[-1] == 0:  # skip suspended stocks
            return None
        # compute RSI
        rsi_indicator = RSIIndicator(close=df['close'], window=14)
        rsi = rsi_indicator.rsi().iloc[-1]
        return (code, name, rsi)
    except Exception:
        return None
    finally:
        bs.logout()  # close the current connection

def main():
    # get the stock list
    stocks = get_all_a_shares()
    print(f"Number of stocks to process: {len(stocks)}")
    # multithreaded processing
    results = []
    with ThreadPoolExecutor(max_workers=5) as executor:  # lower concurrency to respect baostock limits
        futures = [executor.submit(get_rsi, stock) for stock in stocks]
        for future in futures:
            result = future.result()
            if result and result[2] < 25:  # keep RSI < 25
                results.append(result)
    # print results
    print("\nStocks with RSI below 25:")
    print("Code\tName\t\tRSI")
    for item in sorted(results, key=lambda x: x[2]):
        print(f"{item[0]}\t{item[1][:6]}\t{item[2]:.2f}")

if __name__ == "__main__":
    main()

ModuleNotFoundError: No module named 'ta' — please fix the code.
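One way to resolve the `ModuleNotFoundError` without installing anything is to drop the `ta` dependency and compute Wilder's RSI directly with pandas. A minimal sketch — the `rsi` helper below is a stand-in written to mimic `ta.momentum.RSIIndicator`'s smoothing, not part of the original script:

```python
import pandas as pd

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """Wilder's RSI computed with pandas only."""
    delta = close.diff()
    gain = delta.clip(lower=0)    # positive moves
    loss = -delta.clip(upper=0)   # negative moves, as positive numbers
    # Wilder smoothing = exponential moving average with alpha = 1/window
    avg_gain = gain.ewm(alpha=1 / window, min_periods=window, adjust=False).mean()
    avg_loss = loss.ewm(alpha=1 / window, min_periods=window, adjust=False).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

In `get_rsi`, `RSIIndicator(close=df['close'], window=14).rsi().iloc[-1]` would then become `rsi(df['close'], 14).iloc[-1]`, and the `from ta.momentum import RSIIndicator` line can be deleted. Alternatively, `pip install ta` keeps the original code unchanged.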
