
Global Node Packages at a Glance: A global-packages Usage Guide

The "global-packages" in the title is the name of an npm package whose job is to list the Node.js packages that are installed globally. In the Node.js ecosystem, npm is the default package manager and hosts the largest registry of JavaScript packages; it lets developers install and manage project dependencies with a single command. "Globally installed" means a package is installed at the system level and is available to any project on the machine, rather than to one project only.

The description explains how to use the package. First, install it with `npm install --save global-packages`; the `--save` flag records the package as a dependency in the project's `package.json` (since npm 5 this is the default behaviour of `npm install`). After installation, import the module with `require('global-packages')` and fetch the list of globally installed packages asynchronously with `await globalPackages()`.

The accompanying code example shows how to process the returned list and print it to the console. A `try...catch` block wraps the call so that any error raised while fetching the list is caught and logged. (These steps are combined into a runnable sketch at the end of this article.)

As for the tags: "npm package" marks this as an npm package, "node" indicates it targets the Node.js runtime, "global-packages" is the package name, and "npmJavaScript" indicates a JavaScript package managed through npm.

The archive's file list contains a single entry, "global-packages-master", which indicates that the source code comes from a repository named "global-packages" in a version control system such as Git, exported from its main branch, "master".

Several related topics are worth spelling out:

1. **Installing and managing npm packages.** npm (Node Package Manager) is a command-line tool for installing Node.js packages, libraries and other dependencies. `npm install <name>` installs the named package; run without a name, it installs the dependencies declared in the project's `package.json`. Packages installed with `--save` are added to the `dependencies` section of `package.json`, while development-only dependencies installed with `--save-dev` go into `devDependencies`.

2. **Global versus local installation.** A package can be installed globally or inside a specific project. A global install makes the package available to every project on the machine and is typically used for command-line tools. A local install places the package in the project's `node_modules` folder and records it in the project's `package.json`. Local installation is the recommended default because it avoids dependency conflicts between projects and makes a project easier to move between machines.

3. **Asynchronous operations and Promises.** Asynchronous work in modern JavaScript is usually expressed with Promise objects, where a Promise represents the eventual completion or failure of an asynchronous operation. The description uses `async`/`await` syntax, which is a more concise and readable way to write Promise-based code: `await` pauses the async function until the Promise settles, and a `try...catch` block catches any rejection that occurs while waiting.

4. **The Node.js module system.** Node.js modules follow the CommonJS convention, which lets developers package reusable code and import it elsewhere with the `require` function. In the description, `require('global-packages')` is the CommonJS way of loading the `global-packages` module: `require` reads the module file, executes it, and returns whatever the module exports.

5. **Version control and code repositories.** Version control systems such as Git track how code changes over time. Developers host project source code in repositories (for example on GitHub or GitLab), which enables collaboration, branching, releases and code sharing. In the file information given here, "global-packages-master" most likely refers to the corresponding branch of the package's repository.

Taken together, these points show that developers can use the `global-packages` npm package to easily inspect and keep track of the Node.js packages installed globally on their machine, which is handy for day-to-day development and maintenance.
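Putting the steps from the description together, a minimal usage sketch might look like the following. The calling convention (a single exported async function, named `globalPackages` here) is taken from the description above; the exact shape of the resolved list is an assumption of this sketch.

```javascript
// Minimal usage sketch. Install first with:
//   npm install --save global-packages
const globalPackages = require('global-packages');

(async () => {
  try {
    // Resolves with a list describing globally installed packages
    // (treating it as an array is an assumption of this sketch).
    const packages = await globalPackages();
    console.log(`Found ${packages.length} globally installed packages:`);
    console.log(packages);
  } catch (error) {
    // Any failure while reading the global install tree ends up here.
    console.error('Failed to list global packages:', error);
  }
})();
```

The async IIFE is only needed because top-level `await` is not available in CommonJS modules; inside any other `async` function the `try...catch` around the `await` works the same way.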

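To connect points 3 and 4 above, here is a self-contained illustration of how a module with the same calling convention could be written using CommonJS exports and `async`/`await`. This is not the actual source of `global-packages`; it simply shells out to `npm ls -g --depth=0 --json` and parses the result.

```javascript
// Illustrative only — NOT the real implementation of global-packages.
// Demonstrates a CommonJS module that exposes one async function and
// resolves with the names of globally installed top-level packages.
const { exec } = require('child_process');
const { promisify } = require('util');

const execAsync = promisify(exec);

module.exports = async function listGlobalPackages() {
  // `npm ls -g --depth=0 --json` prints the global install tree as JSON.
  const { stdout } = await execAsync('npm ls -g --depth=0 --json');
  const tree = JSON.parse(stdout);
  // Top-level global packages appear under the `dependencies` key.
  return Object.keys(tree.dependencies || {});
};
```

A caller would consume this exactly like the sketch above, wrapping the `await` in `try...catch`, because a failing `npm ls` invocation rejects the returned Promise.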




