活动介绍

QFileDialog::getSaveFileName报错

时间: 2023-10-02 08:05:52 浏览: 361
QFileDialog::getSaveFileName是Qt中用于弹出保存文件对话框的函数,如果出现错误,可能是参数传递有误或者操作系统权限不足等原因导致的。具体的错误信息可以通过调试来查看,或者在代码中添加一些错误处理代码来捕获异常和输出错误信息。以下是一些可能引起该函数报错的原因: 1. 参数传递错误:函数的第一个参数应该是一个 QWidget 指针,表示该对话框的父窗口,如果传递的参数不是 QWidget 类型,就会报错。 2. 文件名长度过长:如果要保存的文件名过长,超过了操作系统的文件名长度限制,也会导致该函数报错。 3. 文件路径不存在:如果要保存的文件路径不存在,也会导致该函数报错。 4. 操作系统权限不足:如果当前用户没有足够的权限在指定的目录下创建文件,也会导致该函数报错。 你可以先检查一下你的代码中是否存在上述问题,如果还是无法解决,可以提供更多的信息和错误提示,以便更好地定位问题。
相关问题

void MainWindow::on_exportToExcel_triggered() { if (!db.isOpen()) { QMessageBox::critical(this, "错误", "没有已打开的数据库!"); return; } QStringList tables = db.tables(); if (tables.isEmpty()) { QMessageBox::critical(this, "错误", "数据库中没有任何可用的表!"); return; } QString tableName = tables.first(); QString fileName = QFileDialog::getSaveFileName( this, tr("导出至CSV"), "", tr("Comma Separated Values File (*.csv);;所有文件 (*.*)") ); if (fileName.isEmpty()) { return; } if (!fileName.endsWith(".csv", Qt::CaseInsensitive)) { fileName.append(".csv"); } QFile file(fileName); if (!file.open(QIODevice::WriteOnly | QIODevice::Text)) { QMessageBox::warning(this, "警告", "无法创建或打开指定文件以供写入!"); return; } QTextStream out(&file); QSqlQuery query(db); bool success = query.exec(QString("SELECT * FROM %1").arg(tableName)); if (!success) { QMessageBox::warning(this, "警告", query.lastError().text()); file.remove(); file.close(); return; } // 写入列标题,并添加额外处理防止特殊字符干扰 QStringList headers; int columnCount = query.record().count(); for (int i = 0; i < columnCount; ++i) { QString header = query.record().fieldName(i); if (header.contains(',') || header.contains('\"')) { header.replace("\"", "\"\""); header.prepend('"').append('"'); } headers << header; } out << headers.join(",") + "\n"; while (query.next()) { QStringList rowValues; for (int j = 0; j < columnCount; ++j) { QVariant valueVariant = query.value(j); // 如果是数字类型,则直接使用 toString() 避免自动加上双引号影响 Excel 解析 if (valueVariant.typeId() == QMetaType::Int || valueVariant.typeId() == QMetaType::Double) { rowValues << valueVariant.toString(); } else { QString value = valueVariant.toString(); // 对包含逗号、换行符或其他特殊符号的内容加双引号包裹 if (value.contains(',') || value.contains('\n') || value.contains('\"')) { value.replace("\"", "\"\""); value.prepend('"').append('"'); } rowValues << value.trimmed(); } } out << rowValues.join(",") + "\n"; } file.close(); QMessageBox::information(this, "成功", QString("数据已从表 '%1' 成功导出到 CSV 文件 '%2' 中!") .arg(tableName) .arg(fileName)); }修改一下,在用户点击下拉菜单中的关闭数据库后,可关闭当前已打开的数据库

为了实现在用户点击“关闭数据库”选项后能够正常关闭当前已经打开的数据库连接,我们需要对现有程序稍作调整。可以在主窗口类 `MainWindow` 中新增一个槽函数用于关闭数据库,并绑定到下拉菜单中的相应动作。 以下是修改后的完整代码及其说明: --- ### 修改方案 #### 1. 新增关闭数据库的功能 我们假设您已经在界面设计工具(如Qt Designer)中设置了下拉菜单项 “关闭数据库”,并将其信号 `triggered()` 连接到一个新的槽函数 `on_closeDatabase_triggered()`。 ```cpp void MainWindow::on_closeDatabase_triggered() { if (db.isOpen()) { // 检查是否有数据库处于打开状态 db.close(); // 关闭数据库连接 QMessageBox::information(this, "通知", "数据库已成功关闭!"); } else { QMessageBox::warning(this, "警告", "没有需要关闭的数据库!"); } } ``` 此段代码会在用户选择“关闭数据库”时检查是否有一个正在使用的数据库连接;如果有则关闭它,并弹出提示信息框告知用户操作结果。 --- #### 2. 确保其他功能不受影响 当数据库被关闭之后,应该避免再次尝试访问未开启的状态下的数据库以防报错。因此可以考虑进一步优化现有的导出Excel等依赖于数据库的操作逻辑,在每次使用前均加入类似下面这样的检测机制: ```cpp if (!db.isOpen()) { QMessageBox::critical(this, "错误", "尚未打开数据库或数据库已被关闭,请先选择打开!"); return; } ``` 这种预防措施已在原始提供的 `on_exportToExcel_triggered()` 函数中有体现。 --- #### 完整示例代码整合 将上述两部分结合起来后得到最终版本如下所示: ```cpp // 导出到CSV文件 void MainWindow::on_exportToExcel_triggered() { if (!db.isOpen()) { QMessageBox::critical(this, "错误", "没有已打开的数据库!"); return; } QStringList tables = db.tables(); if (tables.isEmpty()) { QMessageBox::critical(this, "错误", "数据库中没有任何可用的表!"); return; } QString tableName = tables.first(); QString fileName = QFileDialog::getSaveFileName( this, tr("导出至CSV"), "", tr("Comma Separated Values File (*.csv);;所有文件 (*.*)") ); if (fileName.isEmpty()) { return; } if (!fileName.endsWith(".csv", Qt::CaseInsensitive)) { fileName.append(".csv"); } QFile file(fileName); if (!file.open(QIODevice::WriteOnly | QIODevice::Text)) { QMessageBox::warning(this, "警告", "无法创建或打开指定文件以供写入!"); return; } QTextStream out(&file); QSqlQuery query(db); bool success = query.exec(QString("SELECT * FROM %1").arg(tableName)); if (!success) { QMessageBox::warning(this, "警告", query.lastError().text()); file.remove(); file.close(); return; } QStringList headers; int columnCount = query.record().count(); for (int i = 0; i < columnCount; ++i) { QString header = query.record().fieldName(i); if (header.contains(',') || header.contains('\"')) { header.replace("\"", "\"\""); header.prepend('"').append('"'); } headers << header; } out << headers.join(",") + "\n"; while (query.next()) { QStringList rowValues; for (int j = 0; j < columnCount; ++j) { QVariant valueVariant = query.value(j); if (valueVariant.typeId() == QMetaType::Int || valueVariant.typeId() == QMetaType::Double) { rowValues << valueVariant.toString(); } else { QString value = valueVariant.toString(); if (value.contains(',') || value.contains('\n') || value.contains('\"')) { value.replace("\"", "\"\""); value.prepend('"').append('"'); } rowValues << value.trimmed(); } } out << rowValues.join(",") + "\n"; } file.close(); QMessageBox::information(this, "成功", QString("数据已从表 '%1' 成功导出到 CSV 文件 '%2' 中!") .arg(tableName) .arg(fileName)); } // 用户点击"关闭数据库" void MainWindow::on_closeDatabase_triggered() { if (db.isOpen()) { // 检测是否存在开放着的数据链接。 db.close(); QMessageBox::information(this, "通知", "数据库已成功关闭!"); } else { QMessageBox::warning(this, "警告", "没有需要关闭的数据库!"); } } ``` ---

// 弹出文件保存对话框 QString fileName = QFileDialog::getSaveFileName(this, "保存文件", QDir::homePath() + "/Desktop/test.xlsx", "Excel 文件 (*.xlsx)"); if (fileName.isEmpty()) { qDebug() << "用户取消保存!"; return; // 用户取消保存 } qDebug() << "保存路径:" << fileName; // 创建 Excel 应用程序对象 QAxObject excel("Excel.Application"); if (excel.isNull()) { QMessageBox::critical(this, "错误", "无法启动 Excel 应用程序!"); qDebug() << "无法启动 Excel 应用程序!"; return; } else { qDebug() << "Excel 启动成功!"; } // 设置 Excel 不可见 excel.setProperty("Visible", false); // 获取工作簿集合 QAxObject* workbooks = excel.querySubObject("Workbooks"); if (!workbooks) { QMessageBox::critical(this, "错误", "无法获取工作簿集合!"); qDebug() << "无法获取工作簿集合!"; return; } else { qDebug() << "工作簿集合获取成功!"; } // 添加一个新的工作簿 QAxObject* workbook = workbooks->querySubObject("Add"); if (!workbook) { QMessageBox::critical(this, "错误", "无法创建工作簿!"); qDebug() << "无法创建工作簿!"; return; } else { qDebug() << "工作簿创建成功!"; } // 获取第一个工作表 QAxObject* worksheet = workbook->querySubObject("Worksheets(int)", 1); if (!worksheet) { QMessageBox::critical(this, "错误", "无法获取工作表!"); qDebug() << "无法获取工作表!"; return; } else { qDebug() << "工作表获取成功!"; } // 设置工作表名称 worksheet->setProperty("Name", "物流数据"); // 获取表格的行数和列数 int rowCount = model->rowCount(); int columnCount = model->columnCount(); // 写入表头 for (int col = 0; col < columnCount; ++col) { QAxObject* cell = worksheet->querySubObject("Cells(int,int)", 1, col + 1); if (cell) { QString header = model->headerData(col, Qt::Horizontal).toString(); cell->setProperty("Value", header); qDebug() << "写入表头:" << header << "到单元格 (1," << col + 1 << ")"; delete cell; } else { qDebug() << "无法写入表头到单元格 (1," << col + 1 << ")"; } } // 写入数据 for (int row = 0; row < rowCount; ++row) { for (int col = 0; col < columnCount; ++col) { QAxObject* cell = worksheet->querySubObject("Cells(int,int)", row + 2, col + 1); if (cell) { QString value = model->item(row, col)->text(); cell->setProperty("Value", value); qDebug() << "写入数据:" << value << "到单元格 (" << row + 2 << "," << col + 1 << ")"; delete cell; } else { qDebug() << "无法写入单元格 (" << row + 2 << "," << col + 1 << ")"; } } } // 保存文件 bool saveResult = workbook->dynamicCall("SaveAs(const QString&)", fileName).toBool(); if (saveResult) { qDebug() << "文件保存成功!"; } else { qDebug() << "文件保存失败!"; } // 关闭工作簿 workbook->dynamicCall("Close()"); // 退出 Excel 应用程序 excel.dynamicCall("Quit()"); // 释放资源 delete worksheet; delete workbook; delete workbooks; if (saveResult) { QMessageBox::information(this, "成功", "数据已导出到 Excel 文件!"); } else { QMessageBox::critical(this, "错误", "文件保存失败!"); }报错文件保存失败

<think>嗯,用户提供的这段代码是关于使用Qt和QAxObject导出数据到Excel文件,但遇到了保存失败的问题。我需要仔细分析代码,找出可能导致失败的原因。首先,用户提到“报错文件保存失败”,所以需要检查所有可能引发保存失败的地方。 首先,代码中使用的是QAxObject来操作Excel,这在Qt中属于ActiveX操作,可能需要系统安装了Excel才能正常运行。如果用户的电脑没有安装Excel,或者Excel组件未正确注册,那么QAxObject可能无法创建Excel.Application对象,导致后续的保存失败。例如,在创建Excel应用对象时,如果excel.isNull()为真,会提示无法启动Excel。但用户的问题出现在保存阶段,所以可能已经跳过了这个检查,说明Excel启动成功。 接下来,检查保存的步骤。保存使用的是workbook->dynamicCall("SaveAs(const QString&)", fileName).toBool(),这里需要注意fileName的路径是否合法,是否有写入权限。比如,如果用户选择的路径是只读的,或者文件名包含非法字符,可能会导致保存失败。此外,文件扩展名是否正确,比如是否确实是.xlsx,或者是否被其他程序占用。 另外,Excel在保存文件时可能会弹出警告对话框,比如覆盖已存在的文件时,但因为代码中设置了excel.setProperty("Visible", false),理论上Excel不可见,这些对话框可能被阻塞,导致保存操作无法完成。这时候可能需要设置Excel的DisplayAlerts属性为false,以避免弹出提示框影响自动化操作。用户的代码中没有设置这个属性,可能导致在覆盖文件时Excel弹出确认对话框,而自动化操作无法处理,导致保存失败。 还有,检查文件保存的路径是否存在。如果路径中的某个目录不存在,Excel可能无法创建文件。例如,用户如果选择了一个不存在的文件夹,或者路径中有拼写错误,就会导致保存失败。需要确保fileName的路径是有效的,并且程序有权限在该路径下创建文件。 另外,QAxObject在调用SaveAs时,可能需要完整的文件路径,包括正确的后缀名。用户代码中在弹出保存对话框时,指定了默认的过滤器为Excel文件(*.xlsx),所以fileName应该包含.xlsx扩展名。但如果用户手动输入了其他扩展名,或者未正确指定,可能导致保存失败。不过,Qt的文件对话框通常会根据过滤器自动添加扩展名,所以这个问题可能性较小。 还有可能是Excel的版本问题。某些版本的Excel可能在自动化接口上有差异,比如SaveAs方法的参数是否需要其他选项,例如文件格式参数。用户代码中使用的是SaveAs(const QString&),可能缺少必要的格式参数,导致保存失败。正确的做法是使用SaveAs的完整参数列表,例如指定文件格式为xlWorkbookDefault(即51),尤其是在保存为.xlsx时。例如,正确的调用可能需要传递更多的参数,如: workbook->dynamicCall("SaveAs(const QString&, int, ...)", fileName, 51); 如果代码中没有指定文件格式,可能导致保存失败,尤其是当文件名没有正确扩展名时。因此,可能需要调整SaveAs的参数,明确指定文件类型。 此外,在保存之后,用户立即关闭工作簿并退出Excel,但如果保存过程中出现错误,可能需要先处理异常情况再退出。例如,如果SaveAs返回false,但后续依然执行了Close和Quit,可能导致资源未正确释放。不过,用户代码中已经根据saveResult进行提示,这部分可能没有问题。 另外,QAxObject的dynamicCall返回的是QVariant,转换为bool可能不正确。例如,SaveAs方法在VBA中可能没有返回值,或者返回的是void,此时toBool()可能无法正确获取保存状态。因此,可能需要检查是否应该使用其他方式判断保存是否成功,比如捕获异常或者检查文件是否实际生成。 此外,在释放资源时,用户代码中delete了worksheet、workbook和workbooks,但QAxObject的管理可能需要更谨慎,尤其是在有多个对象引用的情况下,可能导致内存问题,进而影响保存操作。不过,这部分错误通常会导致崩溃而非保存失败。 最后,检查是否有防病毒软件或文件监控程序阻止了Excel文件的创建,或者用户账户权限不足,无法写入目标路径。 综上所述,可能的原因有: 1. Excel的DisplayAlerts未禁用,导致覆盖文件时弹出对话框阻塞进程。 2. SaveAs方法缺少必要的参数,如文件格式。 3. 文件路径无效或权限不足。 4. QAxObject的dynamicCall返回值处理不当。 5. Excel版本或安装问题。</think>以下是代码报错"文件保存失败"的可能原因及解决方案,按排查顺序排列: $\textbf{1. 未禁用 Excel 警告提示}$ $\textcolor{red}{关键问题}$:未设置`DisplayAlerts`属性,当覆盖已存在文件时会被 Excel 的对话框阻塞 $\text{解决方法}$: ```cpp // 在设置 Visible 属性后添加 excel.setProperty("DisplayAlerts", false); // 禁用警告提示 ``` $\textbf{2. SaveAs 参数不完整}$ $\text{正确调用方式}$应指定文件格式参数: ```cpp // 原错误代码: // workbook->dynamicCall("SaveAs(const QString&)", fileName) // 修改后代码: workbook->dynamicCall("SaveAs(const QString&, int)", QDir::toNativeSeparators(fileName), 51); // 51 表示 .xlsx 格式 ``` $\textbf{3. 路径权限问题}$ $\text{验证方法}$: ```cpp qDebug() << "是否可写:" << QFileInfo(fileName).isWritable(); // 输出示例:是否可写: false ``` $\text{处理方案}$: - 检查路径是否存在非法字符 - 确保目标目录有写入权限 - 尝试保存到其他路径(如文档目录) $\textbf{4. 文件被占用}$ $\text{现象}$:当重复执行时,前一次 Excel 进程未完全退出 $\text{优化资源释放}$: ```cpp // 修改释放顺序 delete worksheet; // 先释放子对象 delete workbook; delete workbooks; excel.dynamicCall("Quit()"); // 最后退出 Excel ``` $\textbf{5. 异常处理增强}$ $\text{建议增加}$: ```cpp // 在关键操作后添加错误信息获取 QAxObject* error = excel.querySubObject("Application.LastError"); if (error) { qDebug() << "Excel错误:" << error->property("Text").toString(); delete error; } ``` $\textbf{完整修改建议步骤}$: 1. 添加禁用警告设置 2. 修改 SaveAs 调用参数 3. 在保存前验证路径可写性 4. 增加错误信息输出 5. 调整资源释放顺序 $\text{补充建议}$: - 使用 `QDir::toNativeSeparators()` 转换路径分隔符 - 对于 Office 365 用户,建议测试不同文件格式代码(51/52/56) - 考虑使用 `QAxObject::generateDocumentation()` 检查 COM 接口可用性
阅读全文

相关推荐

# main_window.py(主窗口逻辑) import os import numpy as np from PySide6.QtWidgets import QMainWindow, QFileDialog, QGraphicsScene, QGraphicsView, QMessageBox, QGraphicsPathItem from PySide6.QtGui import QPainterPath, QPen, QBrush, QAction, QTransform, QImage, QPixmap, QColor, QPainter from PySide6.QtCore import Qt, QRectF, QPointF from osgeo import ogr, gdal, osr from PySide6.QtWidgets import QInputDialog # 新增输入对话框 # 新增自定义图形项类(用于存储属性) class FeatureItem(QGraphicsPathItem): def __init__(self, path, attributes): super().__init__(path) self.attributes = attributes # 存储属性字典 class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("GIS软件") self.setGeometry(100, 100, 800, 600) ogr.UseExceptions() self.init_ui() self.scene = QGraphicsScene(self) self.graphicsView.setScene(self.scene) # 新增:存储所有几何边界 self.total_bounds = QRectF() def init_ui(self): self.toolBar = self.addToolBar("工具") self.actionOpen_Vector_Data = QAction("打开矢量数据", self) self.toolBar.addAction(self.actionOpen_Vector_Data) # 新增栅格动作 self.actionOpen_Raster_Data = QAction("打开栅格数据", self) self.toolBar.addAction(self.actionOpen_Raster_Data) # 添加到工具栏 # 新增缓冲区分析按钮 self.actionBuffer_Analysis = QAction("缓冲区分析", self) self.toolBar.addAction(self.actionBuffer_Analysis) self.graphicsView = QGraphicsView() self.setCentralWidget(self.graphicsView) # 新增属性查询按钮 self.actionQuery_Attribute = QAction("属性查询", self) self.toolBar.addAction(self.actionQuery_Attribute) self.actionOpen_Vector_Data.triggered.connect(self.open_vector_data) self.actionOpen_Raster_Data.triggered.connect(self.open_raster_data) # 新增连接 self.actionBuffer_Analysis.triggered.connect(self.buffer_analysis) self.actionQuery_Attribute.triggered.connect(self.enable_query_mode) # 新增鼠标点击事件 self.graphicsView.setMouseTracking(True) self.is_query_mode = False # 新增波段组合按钮 self.actionBand_Combination = QAction("波段组合", self) self.toolBar.addAction(self.actionBand_Combination) self.actionBand_Combination.triggered.connect(self.open_band_combination) # 新增栅格裁剪按钮(在init_ui方法末尾添加) self.actionClip_Raster = QAction("栅格裁剪", self) self.toolBar.addAction(self.actionClip_Raster) self.actionClip_Raster.triggered.connect(self.clip_raster) # 新增连接 self.actionBand_Calculation = QAction("波段运算", self) self.toolBar.addAction(self.actionBand_Calculation) self.actionBand_Calculation.triggered.connect(self.band_calculation) # 新增质心绘制按钮(放在init_ui方法中) self.actionDraw_Centroids = QAction("绘制质心", self) self.toolBar.addAction(self.actionDraw_Centroids) self.actionDraw_Centroids.triggered.connect(self.draw_centroids) self.centroid_items = [] # 新增:存储质心图形项 # 新增空间查询按钮(放在init_ui方法中) self.actionSpatial_Query = QAction("空间查询", self) self.toolBar.addAction(self.actionSpatial_Query) self.actionSpatial_Query.triggered.connect(self.enable_spatial_query_mode) self.is_spatial_query_mode = False self.spatial_query_results = [] # 存储查询结果 # 新增保存按钮(放在init_ui方法中) self.actionSave_Vector = QAction("保存矢量数据", self) self.toolBar.addAction(self.actionSave_Vector) self.actionSave_Vector.triggered.connect(self.save_vector_data) # 新增栅格保存按钮(放在init_ui方法中) self.actionSave_Raster = QAction("保存栅格数据", self) self.toolBar.addAction(self.actionSave_Raster) self.actionSave_Raster.triggered.connect(self.save_raster_data) def open_vector_data(self): file_path, _ = QFileDialog.getOpenFileName( self, "打开矢量文件", "", "Shapefile (*.shp);;GeoJSON (*.geojson);;All Files (*)" ) if file_path: self.load_vector_data(file_path) # 新增:自动缩放视图 self.auto_zoom() def load_vector_data(self, file_path): self.scene.clear() self.total_bounds = QRectF() # 重置边界 try: data_source = ogr.Open(file_path, 0) layer = data_source.GetLayer(0) for feature in layer: geom = feature.GetGeometryRef() path = self.geometry_to_qpainterpath(geom) # 更新总边界 if path.boundingRect().isValid(): self.total_bounds = self.total_bounds.united(path.boundingRect()) pen = QPen(Qt.blue, 1) brush = QBrush(Qt.cyan) self.scene.addPath(path, pen, brush) data_source = None except Exception as e: print(f"加载失败: {str(e)}") self.current_vector_path = file_path # 新增这一行 data_source = None def geometry_to_qpainterpath(self, geom): path = QPainterPath() if geom.GetGeometryType() == ogr.wkbPolygon: for ring in range(geom.GetGeometryCount()): linear_ring = geom.GetGeometryRef(ring) points = linear_ring.GetPoints() if points: path.moveTo(points[0][0], points[0][1]) for p in points[1:]: path.lineTo(p[0], p[1]) path.closeSubpath() elif geom.GetGeometryType() == ogr.wkbLineString: points = geom.GetPoints() if points: path.moveTo(points[0][0], points[0][1]) for p in points[1:]: path.lineTo(p[0], p[1]) elif geom.GetGeometryType() == ogr.wkbPoint: x, y = geom.GetX(), geom.GetY() path.addEllipse(x - 2, y - 2, 4, 4) return path def auto_zoom(self): """自动缩放视图到数据范围并放大2倍""" if not self.total_bounds.isValid(): return # 设置场景边界 self.scene.setSceneRect(self.total_bounds) # 获取视图可视区域 view_rect = self.graphicsView.viewport().rect() # 计算缩放比例(自动适应 + 2倍放大) transform = QTransform() transform.scale(2, 2) # 先放大2倍 # 应用缩放并居中 self.graphicsView.setTransform(transform) self.graphicsView.fitInView(self.total_bounds, Qt.KeepAspectRatio) # 新增缓冲区分析方法 def buffer_analysis(self): """执行缓冲区分析""" if not hasattr(self, 'current_vector_path'): QMessageBox.warning(self, "警告", "请先打开矢量数据文件!") return # 获取缓冲距离 distance, ok = QInputDialog.getDouble( self, "缓冲区分析", "输入缓冲距离(单位与数据坐标系一致):", 0.0, 0 ) if not ok: return try: # 重新打开数据源获取几何 data_source = ogr.Open(self.current_vector_path, 0) layer = data_source.GetLayer(0) # 创建缓冲区路径 buffer_path = QPainterPath() pen = QPen(Qt.red, 2, Qt.DashLine) brush = QBrush(QColor(255, 0, 0, 50)) # 半透明红色填充 for feature in layer: geom = feature.GetGeometryRef() buffer_geom = geom.Buffer(distance) path = self.geometry_to_qpainterpath(buffer_geom) buffer_path.addPath(path) # 添加到场景 self.scene.addPath(buffer_path, pen, brush) # 更新视图边界 if buffer_path.boundingRect().isValid(): self.total_bounds = self.total_bounds.united(buffer_path.boundingRect()) self.auto_zoom() data_source = None except Exception as e: QMessageBox.critical(self, "错误", f"缓冲区分析失败: {str(e)}") def load_vector_data(self, file_path): self.scene.clear() self.total_bounds = QRectF() try: data_source = ogr.Open(file_path, 0) layer = data_source.GetLayer(0) # 获取字段定义 layer_defn = layer.GetLayerDefn() field_names = [layer_defn.GetFieldDefn(i).GetName() for i in range(layer_defn.GetFieldCount())] # 新增代码:获取并存储坐标系信息 srs = layer.GetSpatialRef() if srs: self.scene.srs = srs # 存储坐标系信息到场景对象 for feature in layer: geom = feature.GetGeometryRef() path = self.geometry_to_qpainterpath(geom) # 创建属性字典 attributes = { "FID": feature.GetFID(), **{name: feature.GetField(name) for name in field_names} } # 使用自定义图形项 item = FeatureItem(path, attributes) item.setPen(QPen(Qt.blue, 1)) item.setBrush(QBrush(Qt.cyan)) item.setZValue(100) # 新增:确保要素在最上层 self.scene.addItem(item) if path.boundingRect().isValid(): self.total_bounds = self.total_bounds.united(path.boundingRect()) data_source = None except Exception as e: print(f"加载失败: {str(e)}") self.current_vector_path = file_path data_source = None # 新增属性查询方法 def enable_query_mode(self): """启用属性查询模式""" self.is_query_mode = not self.is_query_mode self.actionQuery_Attribute.setText("退出查询" if self.is_query_mode else "属性查询") self.graphicsView.setCursor(Qt.CrossCursor if self.is_query_mode else Qt.ArrowCursor) # 新增鼠标事件处理 def mousePressEvent(self, event): # 优先处理空间查询模式 if self.is_spatial_query_mode and event.button() == Qt.LeftButton: scene_pos = self.graphicsView.mapToScene(event.pos()) items = self.scene.items(scene_pos, Qt.IntersectsItemShape, Qt.DescendingOrder) for item in items: if isinstance(item, FeatureItem): # 获取空间关系选择 relations = ["相交", "包含", "被包含", "接触", "重叠"] relation, ok = QInputDialog.getItem( self, "空间关系选择", "请选择空间关系:", relations, 0, False ) if ok: self.perform_spatial_query(item, relation) return return # 处理属性查询模式 if self.is_query_mode and event.button() == Qt.LeftButton: scene_pos = self.graphicsView.mapToScene(event.pos()) items = self.scene.items(scene_pos, Qt.IntersectsItemShape, Qt.DescendingOrder) for item in items: if isinstance(item, FeatureItem): # 确保有属性数据 if hasattr(item, 'attributes') and item.attributes: info = "\n".join([f"{k}: {v}" for k, v in item.attributes.items()]) QMessageBox.information(self, "要素属性", info) else: QMessageBox.warning(self, "提示", "该要素无属性信息") return # 未点击到要素时提示 QMessageBox.warning(self, "提示", "未选中任何要素") return # 其他情况交给父类处理 super().mousePressEvent(event) def load_vector_data(self, file_path): """加载矢量数据到场景""" try: # 清除现有数据 self.scene.clear() self.centroid_items.clear() # 打开数据源 data_source = ogr.Open(file_path, 0) if not data_source: QMessageBox.critical(self, "错误", "文件打开失败") return layer = data_source.GetLayer(0) layer_defn = layer.GetLayerDefn() field_names = [layer_defn.GetFieldDefn(i).GetName() for i in range(layer_defn.GetFieldCount())] # 新增代码:获取并存储坐标系信息 srs = layer.GetSpatialRef() if srs: self.scene.srs = srs # 存储坐标系信息到场景对象 # 遍历要素 for feature in layer: geom = feature.GetGeometryRef() path = self.geometry_to_qpainterpath(geom) # 创建属性字典 attributes = { "FID": feature.GetFID(), **{name: feature.GetField(name) for name in field_names} } # 创建图形项 item = FeatureItem(path, attributes) item.setPen(QPen(Qt.blue, 1)) item.setBrush(QBrush(Qt.cyan)) item.setZValue(100) self.scene.addItem(item) # 后续原有代码... self.current_vector_path = file_path self.auto_zoom() data_source = None except Exception as e: QMessageBox.critical(self, "错误", f"加载失败: {str(e)}") # 新增空间查询模式切换方法 def enable_spatial_query_mode(self): """启用空间查询模式""" self.is_spatial_query_mode = not self.is_spatial_query_mode self.actionSpatial_Query.setText("退出空间查询" if self.is_spatial_query_mode else "空间查询") self.graphicsView.setCursor(Qt.CrossCursor if self.is_spatial_query_mode else Qt.ArrowCursor) if not self.is_spatial_query_mode: self.clear_spatial_query_results() # 新增空间查询核心方法 def perform_spatial_query(self, source_item, relation): """执行空间查询并高亮结果""" self.clear_spatial_query_results() try: # 获取源要素几何 source_geom = self.item_to_geometry(source_item) if not source_geom: return # 获取所有要素 all_items = [item for item in self.scene.items() if isinstance(item, FeatureItem)] # 遍历检查空间关系 for target_item in all_items: target_geom = self.item_to_geometry(target_item) if not target_geom: continue # 执行空间关系判断 if relation == "相交" and source_geom.Intersects(target_geom): self.highlight_item(target_item) elif relation == "包含" and source_geom.Contains(target_geom): self.highlight_item(target_item) elif relation == "被包含" and target_geom.Contains(source_geom): self.highlight_item(target_item) elif relation == "接触" and source_geom.Touches(target_geom): self.highlight_item(target_item) elif relation == "重叠" and source_geom.Overlaps(target_geom): self.highlight_item(target_item) except Exception as e: QMessageBox.critical(self, "错误", f"空间查询失败: {str(e)}") # 新增辅助方法 def item_to_geometry(self, item): """将图形项转换为OGR几何对象""" path = item.path() elements = path.toSubpathPolygons(QTransform()) if not elements: return None # 创建多边形几何 geom = ogr.Geometry(ogr.wkbPolygon) ring = ogr.Geometry(ogr.wkbLinearRing) for point in elements[0]: ring.AddPoint(point.x(), point.y()) ring.CloseRings() geom.AddGeometry(ring) return geom def highlight_item(self, item): """高亮显示查询结果""" original_pen = item.pen() highlight_pen = QPen(Qt.yellow, 3) item.setPen(highlight_pen) self.spatial_query_results.append((item, original_pen)) def clear_spatial_query_results(self): """清除查询结果高亮""" for item, original_pen in self.spatial_query_results: item.setPen(original_pen) self.spatial_query_results.clear() def open_raster_data(self): """打开栅格数据文件""" file_path, _ = QFileDialog.getOpenFileName( self, "打开栅格文件", "", "GeoTIFF (*.tif);;JPEG (*.jpg *.jpeg);;PNG (*.png);;All Files (*)" ) if file_path: try: self.load_raster_data(file_path) self.auto_zoom() except Exception as e: QMessageBox.critical(self, "错误", f"加载栅格失败: {str(e)}") def load_raster_data(self, file_path): """加载栅格数据到视图""" # 打开栅格文件(需要用户修改路径的部分) dataset = gdal.Open(file_path) # 相对路径示例:"./data/raster.tif" # 读取第一个波段 band = dataset.GetRasterBand(1) width = dataset.RasterXSize height = dataset.RasterYSize # 转换为numpy数组 data = band.ReadAsArray() # 创建QImage(注意数据类型转换) if data.dtype == np.uint8: format = QImage.Format.Format_Grayscale8 else: format = QImage.Format.Format_ARGB32 q_img = QImage(data.tobytes(), width, height, format) # 创建像素图项 pixmap = QPixmap.fromImage(q_img) raster_item = self.scene.addPixmap(pixmap) # 处理地理坐标(如果存在) geotransform = dataset.GetGeoTransform() if geotransform: # 计算四个角的坐标 x_origin = geotransform[0] y_origin = geotransform[3] pixel_width = geotransform[1] pixel_height = geotransform[5] # 更新场景边界 x_min = x_origin x_max = x_origin + pixel_width * width y_min = y_origin + pixel_height * height y_max = y_origin self.total_bounds = QRectF( QPointF(x_min, y_min), QPointF(x_max, y_max) ) dataset = None # 关闭数据集 def open_band_combination(self): if not hasattr(self, 'current_raster_path'): QMessageBox.warning(self, "警告", "请先打开栅格数据文件!") return # 复用open_raster_data的逻辑 self.open_raster_data() def open_raster_data(self): file_path, _ = QFileDialog.getOpenFileName( self, "打开栅格文件", "", "GeoTIFF (*.tif);;JPEG (*.jpg *.jpeg);;PNG (*.png);;All Files (*)" ) if file_path: try: dataset = gdal.Open(file_path) num_bands = dataset.RasterCount # 获取用户输入的波段组合 red_band, ok1 = QInputDialog.getInt( self, "波段选择", f"红通道波段号 (1-{num_bands}):", 1, 1, num_bands ) green_band, ok2 = QInputDialog.getInt( self, "波段选择", f"绿通道波段号 (1-{num_bands}):", min(2, num_bands), 1, num_bands ) blue_band, ok3 = QInputDialog.getInt( self, "波段选择", f"蓝通道波段号 (1-{num_bands}):", min(3, num_bands), 1, num_bands ) if not (ok1 and ok2 and ok3): return self.load_raster_data(file_path, red_band, green_band, blue_band) self.auto_zoom() self.current_raster_path = file_path # 新增存储当前路径 except Exception as e: QMessageBox.critical(self, "错误", f"加载栅格失败: {str(e)}") def load_raster_data(self, file_path, red_band=1, green_band=2, blue_band=3): """加载栅格数据到视图(支持波段组合)""" dataset = gdal.Open(file_path) width = dataset.RasterXSize height = dataset.RasterYSize # 读取三个波段数据 def read_band(band_num): band = dataset.GetRasterBand(band_num) data = band.ReadAsArray() # 自动拉伸到0-255范围 data_min = data.min() data_max = data.max() return np.clip(((data - data_min) / (data_max - data_min) * 255), 0, 255).astype(np.uint8) # 合并波段 rgb_array = np.dstack([ read_band(red_band), read_band(green_band), read_band(blue_band) ]) # 创建QImage q_img = QImage( rgb_array.data, width, height, 3 * width, # 每像素3字节(RGB) QImage.Format.Format_RGB888 ) # 创建像素图项 pixmap = QPixmap.fromImage(q_img) self.scene.addPixmap(pixmap) # 处理地理坐标(保持原有逻辑) geotransform = dataset.GetGeoTransform() if geotransform: x_origin = geotransform[0] y_origin = geotransform[3] pixel_width = geotransform[1] pixel_height = geotransform[5] x_min = x_origin x_max = x_origin + pixel_width * width y_min = y_origin + pixel_height * height # 计算下边界 y_max = y_origin # 上边界 # 确保坐标顺序正确 if x_min > x_max: x_min, x_max = x_max, x_min if y_min > y_max: y_min, y_max = y_max, y_min self.total_bounds = QRectF(QPointF(x_min, y_min), QPointF(x_max, y_max)) dataset = None # 新增栅格裁剪方法(必须缩进在类内部) def clip_raster(self): """执行栅格裁剪功能""" if not hasattr(self, 'current_raster_path'): QMessageBox.warning(self, "警告", "请先打开栅格数据文件!") return # 选择裁剪矢量文件 vector_path, _ = QFileDialog.getOpenFileName( self, "选择裁剪区域文件", "", "Shapefile (*.shp);;GeoJSON (*.geojson);;All Files (*)" ) if not vector_path: return try: # 获取原始栅格信息 src_ds = gdal.Open(self.current_raster_path) geotransform = src_ds.GetGeoTransform() proj = src_ds.GetProjection() # 获取矢量范围 vector_ds = ogr.Open(vector_path) layer = vector_ds.GetLayer() feature = layer.GetNextFeature() geom = feature.GetGeometryRef() x_min, x_max, y_min, y_max = geom.GetEnvelope() # 创建临时裁剪结果文件 import os # 确保导入os模块 output_path = os.path.splitext(self.current_raster_path)[0] + "_clipped.tif" # 执行裁剪操作 options = gdal.WarpOptions( format='GTiff', outputBounds=[x_min, y_min, x_max, y_max], dstSRS=proj ) gdal.Warp(output_path, src_ds, options=options) # 加载裁剪结果 self.load_raster_data(output_path) self.auto_zoom() # 清理资源 src_ds = None vector_ds = None except Exception as e: QMessageBox.critical(self, "错误", f"栅格裁剪失败: {str(e)}") # 新增波段运算方法 def band_calculation(self): """执行波段运算(示例为NDVI计算)""" if not hasattr(self, 'current_raster_path'): QMessageBox.warning(self, "警告", "请先打开栅格数据文件!") return try: # 获取用户输入参数 red_band, ok1 = QInputDialog.getInt( self, "波段选择", "输入红波段编号 (1-based):", 1, 1, 100 ) nir_band, ok2 = QInputDialog.getInt( self, "波段选择", "输入近红外波段编号 (1-based):", 4, 1, 100 ) if not (ok1 and ok2): return # 读取栅格数据 dataset = gdal.Open(self.current_raster_path) red = dataset.GetRasterBand(red_band).ReadAsArray() nir = dataset.GetRasterBand(nir_band).ReadAsArray() # 执行NDVI计算 ndvi = np.where( (nir + red) == 0, 0, (nir - red) / (nir + red) ).astype(np.float32) # 创建输出文件 output_path, _ = QFileDialog.getSaveFileName( self, "保存结果", "", "GeoTIFF (*.tif)" ) if not output_path: return # 写入结果 driver = gdal.GetDriverByName('GTiff') out_ds = driver.Create( output_path, dataset.RasterXSize, dataset.RasterYSize, 1, gdal.GDT_Float32 ) out_ds.SetGeoTransform(dataset.GetGeoTransform()) out_ds.SetProjection(dataset.GetProjection()) out_ds.GetRasterBand(1).WriteArray(ndvi) out_ds.FlushCache() # 清理资源 dataset = None out_ds = None QMessageBox.information(self, "成功", f"NDVI计算结果已保存至:\n{output_path}") except Exception as e: QMessageBox.critical(self, "错误", f"波段运算失败: {str(e)}") # 新增保存方法 def save_vector_data(self): """保存当前场景中的矢量数据""" if not hasattr(self, 'current_vector_path'): QMessageBox.warning(self, "警告", "没有可保存的矢量数据!") return # 获取保存路径 file_path, _ = QFileDialog.getSaveFileName( self, "保存矢量文件", "", "Shapefile (*.shp);;GeoJSON (*.geojson);;All Files (*)" ) if not file_path: return try: # 获取原始数据源信息 src_ds = ogr.Open(self.current_vector_path) src_layer = src_ds.GetLayer(0) src_defn = src_layer.GetLayerDefn() # 创建目标数据源 driver = ogr.GetDriverByName("ESRI Shapefile" if file_path.endswith(".shp") else "GeoJSON") if os.path.exists(file_path): driver.DeleteDataSource(file_path) dst_ds = driver.CreateDataSource(file_path) # 创建图层(保持与原始数据相同的坐标系) dst_layer = dst_ds.CreateLayer( "saved_features", srs=src_layer.GetSpatialRef(), geom_type=ogr.wkbPolygon ) # 复制字段定义 for i in range(src_defn.GetFieldCount()): field_defn = src_defn.GetFieldDefn(i) dst_layer.CreateField(field_defn) # 遍历场景中的所有要素项 for item in self.scene.items(): if isinstance(item, FeatureItem): # 创建新要素 feature = ogr.Feature(dst_layer.GetLayerDefn()) # 复制属性 for key, value in item.attributes.items(): if key == "FID": continue # FID通常自动生成 if feature.GetFieldIndex(key) != -1: feature.SetField(key, str(value)) # 转换几何 geom = self.item_to_geometry(item) if geom: feature.SetGeometry(geom) dst_layer.CreateFeature(feature) feature = None # 添加缓冲区要素(如果存在) self.save_additional_features(dst_layer, "buffer") # 添加质心要素(如果存在) self.save_additional_features(dst_layer, "centroid") dst_ds = None src_ds = None QMessageBox.information(self, "成功", f"数据已保存至:\n{file_path}") except Exception as e: QMessageBox.critical(self, "错误", f"保存失败: {str(e)}") # 新增辅助保存方法 def save_additional_features(self, layer, feature_type): """保存附加要素(缓冲区/质心)""" items = [] if feature_type == "buffer": items = [item for item in self.scene.items() if item.pen().style() == Qt.DashLine and item.pen().color() == Qt.red] elif feature_type == "centroid": items = self.centroid_items for item in items: geom = self.item_to_geometry(item) if geom: feature = ogr.Feature(layer.GetLayerDefn()) feature.SetGeometry(geom) # 添加类型标识字段 feature.SetField("FEATURE_TYPE", feature_type.upper()) layer.CreateFeature(feature) feature = None # 在item_to_geometry方法中添加点要素支持 def item_to_geometry(self, item): """增强版几何转换(支持点要素)""" path = item.path() elements = path.toSubpathPolygons(QTransform()) if not elements: # 处理点要素 if isinstance(item, QGraphicsPathItem): path = item.path() if path.elementCount() == 1 and path.elementAt(0).isMoveTo(): pt = path.elementAt(0) geom = ogr.Geometry(ogr.wkbPoint) geom.AddPoint(pt.x, pt.y) return geom return None # 原有多边形处理逻辑 geom = ogr.Geometry(ogr.wkbPolygon) ring = ogr.Geometry(ogr.wkbLinearRing) for point in elements[0]: ring.AddPoint(point.x(), point.y()) ring.CloseRings() geom.AddGeometry(ring) return geom # 新增栅格保存方法 def save_raster_data(self): """保存当前显示的栅格数据""" if not hasattr(self, 'current_raster_path'): QMessageBox.warning(self, "警告", "没有可保存的栅格数据!") return # 获取保存路径和格式 file_path, selected_filter = QFileDialog.getSaveFileName( self, "保存栅格文件", "", "GeoTIFF (*.tif);;JPEG (*.jpg *.jpeg);;PNG (*.png);;All Files (*)" ) if not file_path: return try: # 获取当前显示的栅格数据 dataset = gdal.Open(self.current_raster_path) band_count = dataset.RasterCount width = dataset.RasterXSize height = dataset.RasterYSize # 创建输出数据集 driver = gdal.GetDriverByName(self.get_driver_name(file_path)) out_ds = driver.Create( file_path, width, height, band_count, self.get_gdal_datatype(dataset) ) # 复制地理参考信息 out_ds.SetGeoTransform(dataset.GetGeoTransform()) out_ds.SetProjection(dataset.GetProjection()) # 复制所有波段数据 for band_num in range(1, band_count + 1): in_band = dataset.GetRasterBand(band_num) out_band = out_ds.GetRasterBand(band_num) out_band.WriteArray(in_band.ReadAsArray()) out_band.FlushCache() # 清理资源 out_ds = None dataset = None QMessageBox.information(self, "成功", f"栅格数据已保存至:\n{file_path}") except Exception as e: QMessageBox.critical(self, "错误", f"栅格保存失败: {str(e)}") # 新增辅助方法 def get_driver_name(self, file_path): """根据文件扩展名获取GDAL驱动名称""" ext = os.path.splitext(file_path)[1].lower() return { '.tif': 'GTiff', '.jpg': 'JPEG', '.jpeg': 'JPEG', '.png': 'PNG' }.get(ext, 'GTiff') def get_gdal_datatype(self, dataset): """获取与原始数据集匹配的GDAL数据类型""" sample_band = dataset.GetRasterBand(1) return { gdal.GDT_Byte: gdal.GDT_Byte, gdal.GDT_UInt16: gdal.GDT_UInt16, gdal.GDT_Int16: gdal.GDT_Int16, gdal.GDT_UInt32: gdal.GDT_UInt32, gdal.GDT_Int32: gdal.GDT_Int32, gdal.GDT_Float32: gdal.GDT_Float32, gdal.GDT_Float64: gdal.GDT_Float64 }.get(sample_band.DataType, gdal.GDT_Float32) 上面这是我的完整代码,下面是报错的信息D:\Anaconda\envs\GIS-practice\python.exe D:\bighomework\main.py Traceback (most recent call last) : File "D:\bighomework\main.py", line 8, in <module> window = MainWindow() File "D:\bighomework\main_window.py", line 24, in _ self.init_ui() File "D:\bighomework\main_window.py", line 72, in init_ui self.actionDraw_Centroids. triggered. connect(self.draw_centroids) AAAAAAA AttributeError: 'MainWindow' object has no attribute 'draw_centroids init_ Process finished with exit code 1

#include "mainwindow.h" #include "ui_mainwindow.h" MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent) , ui(new Ui::MainWindow) { ui->setupUi(this); timer = new QTimer(this); connect(timer, &QTimer::timeout, this, &MainWindow::captureScreen); isSelecting = false; isRecording = false; frameCount = 0; // 默认录制全屏 QScreen *screen = QGuiApplication::primaryScreen(); selectionRect = screen->geometry(); // 初始化UI ui->fpsSpinBox->setValue(30); ui->qualitySpinBox->setValue(23); // 默认CRF值 ui->statusLabel->setText("就緒"); // 禁用停止按钮 ui->stopButton->setEnabled(false); // RTSP地址写死 rtspUrl = "rtsp://127.0.0.1:8554/live/test222"; } MainWindow::~MainWindow() { if (isRecording) { finalizeFFmpeg(); } delete ui; } void MainWindow::on_startButton_clicked() { if (isRecording) return; // 获取输出文件路径 outputFilePath = QFileDialog::getSaveFileName( this, "保存录制文件", QDir::homePath(), "MP4文件 (*.mp4)" ); if (outputFilePath.isEmpty()) return; // 确保文件扩展名为.mp4 if (!outputFilePath.endsWith(".mp4", Qt::CaseInsensitive)) outputFilePath += ".mp4"; // 获取帧率和质量设置 int fps = ui->fpsSpinBox->value(); int quality = ui->qualitySpinBox->value(); // 初始化FFmpeg if (!initializeFFmpeg(outputFilePath, selectionRect.width(), selectionRect.height(), fps)) { QMessageBox::critical(this, "错误", "无法初始化FFmpeg编码器"); return; } // 初始化RTSP推流 if (!initializeRTSP(selectionRect.width(), selectionRect.height(), fps)) { QMessageBox::warning(this, "警告", "无法初始化RTSP推流,将继续录制本地文件"); } // 开始录制 isRecording = true; frameCount = 0; startTime = av_gettime(); timer->start(1000 / fps); ui->startButton->setEnabled(false); ui->stopButton->setEnabled(true); ui->selectAreaButton->setEnabled(false); ui->statusLabel->setText("正在录制和推流..."); } void MainWindow::on_stopButton_clicked() { if (!isRecording) return; // 停止录制 timer->stop(); finalizeFFmpeg(); isRecording = false; ui->startButton->setEnabled(true); ui->stopButton->setEnabled(false); ui->selectAreaButton->setEnabled(true); ui->statusLabel->setText("录制已完成"); QMessageBox::information(this, "成功", QString("录制已完成,文件保存至:\n%1").arg(outputFilePath)); } void MainWindow::on_selectAreaButton_clicked() { if (isRecording) return; // 进入选择区域模式 isSelecting = true; setCursor(Qt::CrossCursor); ui->statusLabel->setText("请在屏幕上拖动选择录制区域"); } void MainWindow::captureScreen() { if (!isRecording) return; // 捕获屏幕 QScreen *screen = QGuiApplication::primaryScreen(); QPixmap pixmap = screen->grabWindow(0, selectionRect.x(), selectionRect.y(), selectionRect.width(), selectionRect.height()); QImage image = pixmap.toImage().convertToFormat(QImage::Format_RGB888); // 添加到FFmpeg编码器 addFrame(image); frameCount++; // 更新状态 ui->statusLabel->setText(QString("正在录制和推流... 已录制 %1 帧").arg(frameCount)); } void MainWindow::mousePressEvent(QMouseEvent *event) { if (isSelecting && event->button() == Qt::LeftButton) { startPoint = event->globalPos(); endPoint = startPoint; update(); } } void MainWindow::mouseMoveEvent(QMouseEvent *event) { if (isSelecting && (event->buttons() & Qt::LeftButton)) { endPoint = event->globalPos(); update(); } } void MainWindow::mouseReleaseEvent(QMouseEvent *event) { if (isSelecting && event->button() == Qt::LeftButton) { endPoint = event->globalPos(); // 计算选择区域 int x = qMin(startPoint.x(), endPoint.x()); int y = qMin(startPoint.y(), endPoint.y()); int width = qAbs(startPoint.x() - endPoint.x()); int height = qAbs(startPoint.y() - endPoint.y()); selectionRect = QRect(x, y, width, height); // 退出选择模式 isSelecting = false; setCursor(Qt::ArrowCursor); ui->statusLabel->setText(QString("已选择录制区域: %1x%2") .arg(selectionRect.width()).arg(selectionRect.height())); update(); } } void MainWindow::paintEvent(QPaintEvent *event) { Q_UNUSED(event); if (isSelecting) { QPainter painter(this); painter.setPen(QPen(Qt::red, 2)); painter.setBrush(QBrush(QColor(255, 0, 0, 50))); // 转换为窗口坐标 QPoint topLeft = mapFromGlobal(QPoint(qMin(startPoint.x(), endPoint.x()), qMin(startPoint.y(), endPoint.y()))); QPoint bottomRight = mapFromGlobal(QPoint(qMax(startPoint.x(), endPoint.x()), qMax(startPoint.y(), endPoint.y()))); painter.drawRect(QRect(topLeft, bottomRight)); } } bool MainWindow::initializeFFmpeg(const QString &filename, int width, int height, int fps) { // 查找编码器 const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264); if (!codec) { qDebug() << "找不到H.264编码器"; return false; } // 分配编码器上下文 codecContext = avcodec_alloc_context3(codec); if (!codecContext) { qDebug() << "无法分配编码器上下文"; return false; } // 设置编码器参数 codecContext->bit_rate = 4000000; // 4 Mbps codecContext->width = width; codecContext->height = height; codecContext->time_base = {1, fps}; codecContext->framerate = {fps, 1}; codecContext->gop_size = 10; // 每10帧一个I帧 codecContext->max_b_frames = 1; codecContext->pix_fmt = AV_PIX_FMT_YUV420P; // H.264 CRF设置 (0-51, 越小质量越高) av_opt_set(codecContext->priv_data, "crf", QString::number(ui->qualitySpinBox->value()).toUtf8(), 0); // 打开编码器 int ret = avcodec_open2(codecContext, codec, nullptr); if (ret < 0) { qDebug() << "无法打开编码器:" << ret; return false; } // 分配输出格式上下文 avformat_alloc_output_context2(&formatContext, nullptr, nullptr, filename.toUtf8().constData()); if (!formatContext) { qDebug() << "无法分配输出格式上下文"; return false; } // 添加视频流 stream = avformat_new_stream(formatContext, nullptr); if (!stream) { qDebug() << "无法创建视频流"; return false; } // 兼容旧版FFmpeg API #if LIBAVCODEC_VERSION_MAJOR < 58 stream->codec->codec_id = codecContext->codec_id; stream->codec->codec_type = AVMEDIA_TYPE_VIDEO; stream->codec->width = codecContext->width; stream->codec->height = codecContext->height; stream->codec->pix_fmt = codecContext->pix_fmt; stream->codec->bit_rate = codecContext->bit_rate; stream->codec->time_base = codecContext->time_base; #else stream->time_base = codecContext->time_base; ret = avcodec_parameters_from_context(stream->codecpar, codecContext); if (ret < 0) { qDebug() << "无法复制编码器参数:" << ret; return false; } #endif // 打开输出文件 if (!(formatContext->oformat->flags & AVFMT_NOFILE)) { ret = avio_open(&formatContext->pb, filename.toUtf8().constData(), AVIO_FLAG_WRITE); if (ret < 0) { qDebug() << "无法打开输出文件:" << ret; return false; } } // 写入文件头 ret = avformat_write_header(formatContext, nullptr); if (ret < 0) { qDebug() << "无法写入文件头:" << ret; return false; } // 分配AVFrame frame = av_frame_alloc(); if (!frame) { qDebug() << "无法分配AVFrame"; return false; } frame->format = codecContext->pix_fmt; frame->width = codecContext->width; frame->height = codecContext->height; ret = av_frame_get_buffer(frame, 0); if (ret < 0) { qDebug() << "无法分配AVFrame缓冲区:" << ret; return false; } // 分配AVPacket packet = av_packet_alloc(); if (!packet) { qDebug() << "无法分配AVPacket"; return false; } // 初始化SwsContext用于图像格式转换 swsContext = sws_getContext( width, height, AV_PIX_FMT_BGR24, // 输入格式 (Qt的QImage通常是BGR24) width, height, codecContext->pix_fmt, SWS_BILINEAR, nullptr, nullptr, nullptr ); if (!swsContext) { qDebug() << "无法初始化SwsContext"; return false; } return true; } bool MainWindow::initializeRTSP(int width, int height, int fps) { // 初始化RTSP输出上下文 int ret = avformat_alloc_output_context2(&rtspContext, nullptr, "rtsp", rtspUrl.toUtf8().constData()); if (ret < 0 || !rtspContext) { qDebug() << "无法分配RTSP输出格式上下文:" << ret; return false; } // 添加视频流 rtspStream = avformat_new_stream(rtspContext, nullptr); if (!rtspStream) { qDebug() << "无法创建RTSP视频流"; avformat_free_context(rtspContext); rtspContext = nullptr; return false; } // 兼容旧版FFmpeg API #if LIBAVCODEC_VERSION_MAJOR < 58 // 复制编码器参数 rtspStream->codec->codec_id = codecContext->codec_id; rtspStream->codec->codec_type = AVMEDIA_TYPE_VIDEO; rtspStream->codec->width = codecContext->width; rtspStream->codec->height = codecContext->height; rtspStream->codec->pix_fmt = codecContext->pix_fmt; rtspStream->codec->bit_rate = codecContext->bit_rate; rtspStream->codec->time_base = codecContext->time_base; // 设置RTSP流的时间基准 rtspStream->time_base = codecContext->time_base; #else // 复制编码器参数 ret = avcodec_parameters_copy(rtspStream->codecpar, stream->codecpar); if (ret < 0) { qDebug() << "无法复制编码器参数到RTSP流:" << ret; avformat_free_context(rtspContext); rtspContext = nullptr; return false; } // 设置RTSP流的时间基准 rtspStream->time_base = stream->time_base; #endif // 设置RTSP传输选项 AVDictionary *options = nullptr; av_dict_set(&options, "rtsp_transport", "tcp", 0); // 使用TCP传输 av_dict_set(&options, "stimeout", "2000000", 0); // 设置超时2秒 // 打开RTSP输出 if (!(rtspContext->oformat->flags & AVFMT_NOFILE)) { ret = avio_open2(&rtspContext->pb, rtspUrl.toUtf8().constData(), AVIO_FLAG_WRITE, nullptr, &options); if (ret < 0) { qDebug() << "无法打开RTSP输出:" << ret; av_dict_free(&options); avformat_free_context(rtspContext); rtspContext = nullptr; return false; } } // 写入RTSP头 ret = avformat_write_header(rtspContext, &options); if (ret < 0) { qDebug() << "无法写入RTSP头:" << ret; avio_closep(&rtspContext->pb); avformat_free_context(rtspContext); rtspContext = nullptr; av_dict_free(&options); return false; } av_dict_free(&options); return true; } void MainWindow::finalizeFFmpeg() { // 刷新编码器缓冲区 int ret; while (true) { ret = avcodec_send_frame(codecContext, nullptr); if (ret < 0) break; while (ret >= 0) { ret = avcodec_receive_packet(codecContext, packet); if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) break; else if (ret < 0) break; // 设置时间戳 av_packet_rescale_ts(packet, codecContext->time_base, stream->time_base); packet->stream_index = stream->index; // 写入本地文件 ret = av_interleaved_write_frame(formatContext, packet); if (ret < 0) { qDebug() << "写入本地文件数据包失败:" << ret; } // 推送到RTSP pushToRTSP(packet); av_packet_unref(packet); } } // 写入文件尾 av_write_trailer(formatContext); // 关闭RTSP流 if (rtspContext) { av_write_trailer(rtspContext); if (!(rtspContext->oformat->flags & AVFMT_NOFILE)) { avio_closep(&rtspContext->pb); } avformat_free_context(rtspContext); rtspContext = nullptr; } // 释放资源 if (swsContext) { sws_freeContext(swsContext); swsContext = nullptr; } if (packet) { av_packet_free(&packet); packet = nullptr; } if (frame) { av_frame_free(&frame); frame = nullptr; } if (codecContext) { avcodec_free_context(&codecContext); codecContext = nullptr; } if (formatContext) { if (!(formatContext->oformat->flags & AVFMT_NOFILE)) { avio_closep(&formatContext->pb); } avformat_free_context(formatContext); formatContext = nullptr; } } void MainWindow::addFrame(const QImage &image) { // 确保图像格式正确 if (image.format() != QImage::Format_RGB888) { qDebug() << "图像格式不是RGB24"; return; } // 准备输入数据 const uint8_t *src_data[4] = { image.bits(), nullptr, nullptr, nullptr }; int src_linesize[4] = { image.bytesPerLine(), 0, 0, 0 }; // 转换图像格式 sws_scale(swsContext, src_data, src_linesize, 0, image.height(), frame->data, frame->linesize); // 设置时间戳 frame->pts = frameCount; // 发送帧到编码器 int ret = avcodec_send_frame(codecContext, frame); if (ret < 0) { qDebug() << "发送帧到编码器失败:" << ret; return; } // 接收编码后的数据包 while (ret >= 0) { ret = avcodec_receive_packet(codecContext, packet); if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) break; else if (ret < 0) { qDebug() << "接收编码数据包失败:" << ret; break; } // 调整时间戳 av_packet_rescale_ts(packet, codecContext->time_base, stream->time_base); packet->stream_index = stream->index; // 写入本地文件 ret = av_interleaved_write_frame(formatContext, packet); if (ret < 0) { qDebug() << "写入本地文件数据包失败:" << ret; } // 推送到RTSP pushToRTSP(packet); av_packet_unref(packet); } } void MainWindow::pushToRTSP(AVPacket *packet) { if (!rtspContext) return; // 复制数据包(RTSP流需要独立的时间戳) AVPacket *rtspPacket = av_packet_clone(packet); if (!rtspPacket) { qDebug() << "无法复制RTSP数据包"; return; } // 调整RTSP流的时间戳 av_packet_rescale_ts(rtspPacket, stream->time_base, rtspStream->time_base); rtspPacket->stream_index = rtspStream->index; // 发送到RTSP服务器 int ret = av_interleaved_write_frame(rtspContext, rtspPacket); if (ret < 0) { qDebug() << "RTSP推流失败:" << ret; } av_packet_free(&rtspPacket); } int ret = av_interleaved_write_frame(rtspContext, rtspPacket);这里返回-22

# -*- coding: utf-8 -*- import sys import os import cv2 import numpy as np from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton, QWidget, QVBoxLayout, QHBoxLayout, QMessageBox, QLabel, QFileDialog, QToolBar, QComboBox, QStatusBar) from PyQt5.QtCore import QRect, Qt, QSettings, QThread, pyqtSignal from PyQt5.QtGui import QImage, QPixmap from CamOperation_class import CameraOperation sys.path.append("D:\\海康\\MVS\\Development\\Samples\\Python\\BasicDemo") from MvCameraControl_class import * from MvErrorDefine_const import * from CameraParams_header import * from PyUICBasicDemo import Ui_MainWindow import ctypes from datetime import datetime import logging import platform # 配置日志系统 logging.basicConfig( level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', handlers=[ logging.FileHandler("cloth_inspection.log"), logging.StreamHandler() ] ) logging.info("布料印花检测系统启动") # 全局变量 current_sample_path = "" # 当前使用的样本路径 detection_history = [] # 检测历史记录 # 帧监控线程 class FrameMonitorThread(QThread): frame_status = pyqtSignal(str) def __init__(self, cam_operation): super().__init__() self.cam_operation = cam_operation self.running = True def run(self): while self.running: if self.cam_operation: status = self.cam_operation.get_frame_status() frame_text = "有帧" if status.get('current_frame', False) else "无帧" self.frame_status.emit(f"帧状态: {frame_text}") QThread.msleep(500) def stop(self): self.running = False # 布料印花检测函数(修复版) def check_print_quality(sample_image_path, test_image_path, threshold=0.05): """ 检测布料印花是否合格,并在合格样本上标出错误位置 :param sample_image_path: 合格样本图像路径 :param test_image_path: 待检测图像路径 :param threshold: 差异阈值,超过该值则认为印花不合格 :return: 是否合格,差异值,带有错误标记的合格样本图像 """ # 验证文件存在性 if not os.path.exists(sample_image_path): logging.error(f"样本图像不存在: {sample_image_path}") return None, None, None if not os.path.exists(test_image_path): logging.error(f"测试图像不存在: {test_image_path}") return None, None, None # 验证文件大小 sample_size = os.path.getsize(sample_image_path) test_size = os.path.getsize(test_image_path) if sample_size < 1024: # 小于1KB视为无效 logging.error(f"样本图像文件大小异常: {sample_size} 字节") return None, None, None if test_size < 1024: logging.error(f"测试图像文件大小异常: {test_size} 字节") return None, None, None # 读取图像(使用安全方法) try: # 使用imdecode避免路径编码问题 sample_image = cv2.imdecode(np.fromfile(sample_image_path, dtype=np.uint8), cv2.IMREAD_GRAYSCALE) test_image = cv2.imdecode(np.fromfile(test_image_path, dtype=np.uint8), cv2.IMREAD_GRAYSCALE) if sample_image is None: logging.error(f"无法解码样本图像: {sample_image_path}") return None, None, None if test_image is None: logging.error(f"无法解码测试图像: {test_image_path}") return None, None, None except Exception as e: logging.exception(f"图像读取异常: {str(e)}") return None, None, None # 确保两个图像大小一致 try: test_image = cv2.resize(test_image, (sample_image.shape[1], sample_image.shape[0])) except Exception as e: logging.error(f"图像调整大小失败: {str(e)}") return None, None, None # 计算两个图像之间的差异 diff = cv2.absdiff(sample_image, test_image) # 将差异图像二值化 ret, diff_binary = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY) # 计算差异的占比 diff_ratio = np.sum(diff_binary) / (diff_binary.shape[0] * diff_binary.shape[1] * 255) # 判断是否合格 is_qualified = diff_ratio < threshold # 在合格样本上标出错误位置 if is_qualified: marked_image = cv2.cvtColor(sample_image, cv2.COLOR_GRAY2BGR) else: marked_image = cv2.cvtColor(sample_image, cv2.COLOR_GRAY2BGR) contours, _ = cv2.findContours(diff_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(marked_image, contours, -1, (0, 0, 255), 2) return is_qualified, diff_ratio, marked_image # 布料印花检测功能 def check_print(): global isGrabbing, obj_cam_operation, current_sample_path, detection_history if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查样本路径是否有效 if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return # 检查当前帧是否可用 if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "错误", "当前无有效图像帧,请检查相机状态!", QMessageBox.Ok) return # 创建临时文件目录(如果不存在) temp_dir = os.path.abspath(os.path.join(os.getcwd(), "temp_images")) os.makedirs(temp_dir, exist_ok=True) # 生成唯一的临时文件名(仅使用ASCII字符) timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") test_path = os.path.join(temp_dir, f"test_{timestamp}.bmp") # 保存当前帧作为测试图像 try: ret = obj_cam_operation.save_image(test_path, "bmp") if ret != MV_OK: QMessageBox.warning(mainWindow, "错误", f"保存测试图像失败!错误码: {hex(ret)}", QMessageBox.Ok) return # 验证保存结果 if not os.path.exists(test_path) or os.path.getsize(test_path) < 1024: error_msg = f"测试图像保存验证失败: {test_path}" if os.path.exists(test_path): error_msg += f" (大小: {os.path.getsize(test_path)} 字节)" logging.error(error_msg) QMessageBox.critical(mainWindow, "保存错误", error_msg, QMessageBox.Ok) return except Exception as e: error_msg = f"保存测试图像时发生错误: {str(e)}" QMessageBox.critical(mainWindow, "保存错误", error_msg, QMessageBox.Ok) logging.exception("保存测试图像失败") return # 执行印花检测 try: is_qualified, diff_ratio, marked_image = check_print_quality(current_sample_path, test_path) except FileNotFoundError as e: QMessageBox.critical(mainWindow, "文件未找到", f"无法找到图像文件: {str(e)}", QMessageBox.Ok) logging.error(f"图像文件未找到: {str(e)}") return except Exception as e: QMessageBox.critical(mainWindow, "检测错误", f"检测过程中发生错误: {str(e)}", QMessageBox.Ok) logging.exception("印花检测失败") return # 显示结果 if marked_image is not None: # 显示结果 result_text = f"印花是否合格: {'合格' if is_qualified else '不合格'}\n差异占比: {diff_ratio:.4f}" QMessageBox.information(mainWindow, "检测结果", result_text, QMessageBox.Ok) # 显示标记图像 cv2.imshow("缺陷标记结果", marked_image) cv2.waitKey(0) cv2.destroyAllWindows() # 记录检测结果 detection_result = { 'timestamp': datetime.now(), 'qualified': is_qualified, 'diff_ratio': diff_ratio, 'sample_path': current_sample_path, 'test_path': test_path } detection_history.append(detection_result) update_history_display() # 保存标准样本函数 def save_sample_image(): global isGrabbing, obj_cam_operation, current_sample_path if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查是否有有效图像 if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "无有效图像", "未捕获到有效图像,请检查相机状态!", QMessageBox.Ok) return # 读取上次使用的路径 settings = QSettings("ClothInspection", "CameraApp") last_dir = settings.value("last_save_dir", os.path.join(os.getcwd(), "captures")) # 创建默认文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") default_filename = f"sample_{timestamp}" # 弹出文件保存对话框 file_path, selected_filter = QFileDialog.getSaveFileName( mainWindow, "保存标准样本图像", os.path.join(last_dir, default_filename), "BMP Files (*.bmp);;PNG Files (*.png);;JPEG Files (*.jpg);;所有文件 (*)", options=QFileDialog.DontUseNativeDialog ) if not file_path: logging.info("用户取消了图像保存操作") return # 用户取消保存 # 处理文件扩展名 file_extension = os.path.splitext(file_path)[1].lower() if not file_extension: # 根据选择的过滤器添加扩展名 if "BMP" in selected_filter: file_path += ".bmp" elif "PNG" in selected_filter: file_path += ".png" elif "JPEG" in selected_filter or "JPG" in selected_filter: file_path += ".jpg" else: # 默认使用BMP格式 file_path += ".bmp" file_extension = os.path.splitext(file_path)[1].lower() # 根据扩展名设置保存格式 format_mapping = { ".bmp": "bmp", ".png": "png", ".jpg": "jpg", ".jpeg": "jpg" } save_format = format_mapping.get(file_extension) if not save_format: QMessageBox.warning(mainWindow, "错误", "不支持的文件格式!", QMessageBox.Ok) return # 确保目录存在 directory = os.path.dirname(file_path) if directory and not os.path.exists(directory): try: os.makedirs(directory, exist_ok=True) logging.info(f"创建目录: {directory}") except OSError as e: error_msg = f"无法创建目录 {directory}: {str(e)}" QMessageBox.critical(mainWindow, "目录创建错误", error_msg, QMessageBox.Ok) return # 保存当前帧作为标准样本 try: ret = obj_cam_operation.save_image(file_path, save_format) if ret != MV_OK: strError = f"保存样本图像失败: {hex(ret)}" QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: success_msg = f"标准样本已保存至:\n{file_path}" QMessageBox.information(mainWindow, "成功", success_msg, QMessageBox.Ok) # 更新当前样本路径 current_sample_path = file_path update_sample_display() # 保存当前目录 settings.setValue("last_save_dir", os.path.dirname(file_path)) except Exception as e: error_msg = f"保存图像时发生错误: {str(e)}" QMessageBox.critical(mainWindow, "异常错误", error_msg, QMessageBox.Ok) logging.exception("保存样本图像时发生异常") # 预览当前样本 def preview_sample(): global current_sample_path if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return try: # 使用安全方法读取图像 img_data = np.fromfile(current_sample_path, dtype=np.uint8) sample_img = cv2.imdecode(img_data, cv2.IMREAD_COLOR) if sample_img is None: raise Exception("无法加载图像") cv2.imshow("标准样本预览", sample_img) cv2.waitKey(0) cv2.destroyAllWindows() except Exception as e: QMessageBox.warning(mainWindow, "错误", f"预览样本失败: {str(e)}", QMessageBox.Ok) # 更新样本路径显示 def update_sample_display(): global current_sample_path if current_sample_path: ui.lblSamplePath.setText(f"当前样本: {os.path.basename(current_sample_path)}") ui.lblSamplePath.setToolTip(current_sample_path) ui.bnPreviewSample.setEnabled(True) else: ui.lblSamplePath.setText("当前样本: 未设置样本") ui.bnPreviewSample.setEnabled(False) # 更新历史记录显示 def update_history_display(): global detection_history ui.cbHistory.clear() for i, result in enumerate(detection_history[-10:]): # 显示最近10条记录 timestamp = result['timestamp'].strftime("%H:%M:%S") status = "合格" if result['qualified'] else "不合格" ratio = f"{result['diff_ratio']:.4f}" ui.cbHistory.addItem(f"[{timestamp}] {status} - 差异: {ratio}") # 获取选取设备信息的索引,通过[]之间的字符去解析 def TxtWrapBy(start_str, end, all): start = all.find(start_str) if start >= 0: start += len(start_str) end = all.find(end, start) if end >= 0: return all[start:end].strip() # 将返回的错误码转换为十六进制显示 def ToHexStr(num): """将错误码转换为十六进制字符串""" # 处理非整数输入 if not isinstance(num, int): try: # 尝试转换为整数 num = int(num) except: # 无法转换时返回类型信息 return f"<非整数:{type(num)}>" chaDic = {10: 'a', 11: 'b', 12: 'c', 13: 'd', 14: 'e', 15: 'f'} hexStr = "" # 处理负数 if num < 0: num = num + 2 ** 32 # 转换为十六进制 while num >= 16: digit = num % 16 hexStr = chaDic.get(digit, str(digit)) + hexStr num //= 16 hexStr = chaDic.get(num, str(num)) + hexStr return "0x" + hexStr # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() global deviceList deviceList = MV_CC_DEVICE_INFO_LIST() global cam cam = MvCamera() global nSelCamIndex nSelCamIndex = 0 global obj_cam_operation obj_cam_operation = 0 global isOpen isOpen = False global isGrabbing isGrabbing = False global isCalibMode # 是否是标定模式(获取原始图像) isCalibMode = True global frame_monitor_thread # 绑定下拉列表至设备信息索引 def xFunc(event): global nSelCamIndex nSelCamIndex = TxtWrapBy("[", "]", ui.ComboDevices.get()) # Decoding Characters def decoding_char(c_ubyte_value): c_char_p_value = ctypes.cast(c_ubyte_value, ctypes.c_char_p) try: decode_str = c_char_p_value.value.decode('gbk') # Chinese characters except UnicodeDecodeError: decode_str = str(c_char_p_value.value) return decode_str # ch:枚举相机 | en:enum devices def enum_devices(): global deviceList global obj_cam_operation deviceList = MV_CC_DEVICE_INFO_LIST() n_layer_type = (MV_GIGE_DEVICE | MV_USB_DEVICE | MV_GENTL_CAMERALINK_DEVICE | MV_GENTL_CXP_DEVICE | MV_GENTL_XOF_DEVICE) ret = MvCamera.MV_CC_EnumDevices(n_layer_type, deviceList) if ret != 0: strError = "Enum devices fail! ret = :" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) return ret if deviceList.nDeviceNum == 0: QMessageBox.warning(mainWindow, "Info", "Find no device", QMessageBox.Ok) return ret print("Find %d devices!" % deviceList.nDeviceNum) devList = [] for i in range(0, deviceList.nDeviceNum): mvcc_dev_info = cast(deviceList.pDeviceInfo[i], POINTER(MV_CC_DEVICE_INFO)).contents if mvcc_dev_info.nTLayerType == MV_GIGE_DEVICE or mvcc_dev_info.nTLayerType == MV_GENTL_GIGE_DEVICE: print("\ngige device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) nip1 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >> 24) nip2 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >> 16) nip3 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >> 8) nip4 = (mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff) print("current ip: %d.%d.%d.%d " % (nip1, nip2, nip3, nip4)) devList.append( "[" + str(i) + "]GigE: " + user_defined_name + " " + model_name + "(" + str(nip1) + "." + str( nip2) + "." + str(nip3) + "." + str(nip4) + ")") elif mvcc_dev_info.nTLayerType == MV_USB_DEVICE: print("\nu3v device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stUsb3VInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]USB: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CAMERALINK_DEVICE: print("\nCML device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCMLInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]CML: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CXP_DEVICE: print("\nCXP device: [%极d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chModel极Name) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCXPInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: "+strSerialNumber) devList.append("[" + str(i) + "]CXP: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_XOF_DEVICE: print("\nXoF device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stXoFInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]XoF: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") ui.ComboDevices.clear() ui.ComboDevices.addItems(devList) ui.ComboDevices.setCurrentIndex(0) # ch:打开相机 | en:open device def open_device(): global deviceList global nSelCamIndex global obj_cam_operation global isOpen global frame_monitor_thread if isOpen: QMessageBox.warning(mainWindow, "Error", 'Camera is Running!', QMessageBox.Ok) return MV_E_CALLORDER nSelCamIndex = ui.ComboDevices.currentIndex() if nSelCamIndex < 0: QMessageBox.warning(mainWindow, "Error", 'Please select a camera!', QMessageBox.Ok) return MV_E_CALLORDER obj_cam_operation = CameraOperation(cam, deviceList, nSelCamIndex) ret = obj_cam_operation.open_device() if 0 != ret: strError = "Open device failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) isOpen = False else: set_continue_mode() get_param() isOpen = True enable_controls() # 启动帧监控线程 frame_monitor_thread = FrameMonitorThread(obj_cam_operation) frame_monitor_thread.frame_status.connect(ui.statusBar.showMessage) frame_monitor_thread.start() # ch:开始取流 | en:Start grab image def start_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.start_grabbing(ui.widgetDisplay.winId()) if ret != 0: strError = "Start grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = True enable_controls() # ch:停止取流 | en:Stop grab image def stop_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.Stop_grabbing() if ret != 0: strError = "Stop grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = False enable_controls() # ch:关闭设备 | Close device def close_device(): global isOpen global isGrabbing global obj_cam_operation global frame_monitor_thread # 停止帧监控线程 if frame_monitor_thread and frame_monitor_thread.isRunning(): frame_monitor_thread.stop() frame_monitor_thread.wait(2000) if isOpen: obj_cam_operation.close_device() isOpen = False isGrabbing = False enable_controls() # ch:设置触发模式 | en:set trigger mode def set_continue_mode(): ret = obj_cam_operation.set_trigger_mode(False) if ret != 0: strError = "Set continue mode failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(True) ui.radioTriggerMode.setChecked(False) ui.bnSoftwareTrigger.setEnabled(False) # ch:设置软触发模式 | en:set software trigger mode def set_software_trigger_mode(): ret = obj_cam_operation.set_trigger_mode(True) if ret != 0: strError = "Set trigger mode failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(False) ui.radioTriggerMode.setChecked(True) ui.bnSoftwareTrigger.setEnabled(isGrabbing) # ch:设置触发命令 | en:set trigger software def trigger_once(): ret = obj_cam_operation.trigger_once() if ret != 0: strError = "TriggerSoftware failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) # 保存图像对话框 def save_image_dialog(): """ 打开保存图像对话框并保存当前帧 """ global isGrabbing, obj_cam_operation # 检查相机状态 if not isGrabbing: QMessageBox.warning(mainWindow, "相机未就绪", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查是否有有效图像 if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "无有效图像", "未捕获到有效图像,请检查相机状态!", QMessageBox.Ok) return # 读取上次使用的路径 settings = QSettings("ClothInspection", "CameraApp") last_dir = settings.value("last_save_dir", os.path.join(os.getcwd(), "captures")) # 创建默认文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") default_filename = f"capture_{timestamp}" # 弹出文件保存对话框 file_path, selected_filter = QFileDialog.getSaveFileName( mainWindow, "保存图像", os.path.join(last_dir, default_filename), # 初始路径 "BMP 图像 (*.bmp);;JPEG 图像 (*.jpg);;PNG 图像 (*.png);;TIFF 图像 (*.tiff);;所有文件 (*)", options=QFileDialog.DontUseNativeDialog ) # 用户取消操作 if not file_path: logging.info("用户取消了图像保存操作") return # 处理文件扩展名 file_extension = os.path.splitext(file_path)[1].lower() if not file_extension: # 根据选择的过滤器添加扩展名 if "BMP" in selected_filter: file_path += ".bmp" elif "JPEG" in selected_filter or "JPG" in selected_filter: file_path += ".jpg" elif "PNG" in selected_filter: file_path += ".png" elif "TIFF" in selected_filter: file_path += ".tiff" else: # 默认使用BMP格式 file_path += ".bmp" # 确定保存格式 format_mapping = { ".bmp": "bmp", ".jpg": "jpg", ".jpeg": "jpg", ".png": "png", ".tiff": "tiff", ".tif": "tiff" } file_extension = os.path.splitext(file_path)[1].lower() save_format = format_mapping.get(file_extension, "bmp") # 确保目录存在 directory = os.path.dirname(file_path) if directory and not os.path.exists(directory): try: os.makedirs(directory, exist_ok=True) except OSError as e: QMessageBox.critical(mainWindow, "目录错误", f"无法创建目录:\n{str(e)}", QMessageBox.Ok) return # 保存图像 try: ret = obj_cam_operation.save_image(file_path, save_format) if ret == MV_OK: QMessageBox.information(mainWindow, "保存成功", f"图像已保存至:\n{file_path}", QMessageBox.Ok) logging.info(f"图像保存成功: {file_path}") # 保存当前目录 settings.setValue("last_save_dir", os.path.dirname(file_path)) else: error_msg = f"保存失败! 错误代码: {hex(ret)}" QMessageBox.warning(mainWindow, "保存失败", error_msg, QMessageBox.Ok) logging.error(f"图像保存失败: {file_path}, 错误代码: {hex(ret)}") except Exception as e: QMessageBox.critical(mainWindow, "保存错误", f"保存图像时发生错误:\n{str(e)}", QMessageBox.Ok) logging.exception(f"保存图像时发生异常: {file_path}") def is_float(str): try: float(str) return True except ValueError: return False # ch: 获取参数 | en:get param def get_param(): try: # 调用方法获取参数 ret = obj_cam_operation.get_parameters() # 记录调用结果(调试用) logging.debug(f"get_param() 返回: {ret} (类型: {type(ret)})") # 处理错误码 if ret != MV_OK: strError = "获取参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: # 成功获取参数后更新UI ui.edtExposureTime.setText("{0:.2f}".format(obj_cam_operation.exposure_time)) ui.edtGain.setText("{0:.2f}".format(obj_cam_operation.gain)) ui.edtFrameRate.setText("{0:.2f}".format(obj_cam_operation.frame_rate)) # 记录成功信息 logging.info("成功获取相机参数") except Exception as e: # 处理所有异常 error_msg = f"获取参数时发生错误: {str(e)}" logging.error(error_msg) QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) # ch: 设置参数 | en:set param def set_param(): frame_rate = ui.edtFrameRate.text() exposure = ui.edtExposureTime.text() gain = ui.edtGain.text() if not (is_float(frame_rate) and is_float(exposure) and is_float(gain)): strError = "设置参数失败: 参数必须是有效的浮点数" QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) return MV_E_PARAMETER try: # 使用正确的参数顺序和关键字 ret = obj_cam_operation.set_param( frame_rate=float(frame_rate), exposure_time=float(exposure), gain=float(gain) ) if ret != MV_OK: strError = "设置参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: logging.info("参数设置成功") return MV_OK except Exception as e: error_msg = f"设置参数时发生错误: {str(e)}" logging.error(error_msg) QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) return MV_E_STATE # ch: 设置控件状态 | en:set enable status def enable_controls(): global isGrabbing global isOpen # 先设置group的状态,再单独设置各控件状态 ui.groupGrab.setEnabled(isOpen) ui.groupParam.setEnabled(isOpen) ui.bnOpen.setEnabled(not isOpen) ui.bnClose.setEnabled(isOpen) ui.bnStart.setEnabled(isOpen and (not isGrabbing)) ui.bnStop.setEnabled(isOpen and isGrabbing) ui.bnSoftwareTrigger.setEnabled(isGrabbing and ui.radioTriggerMode.isChecked()) ui.bnSaveImage.setEnabled(isOpen and isGrabbing) # 添加检测按钮控制 ui.bnCheckPrint.setEnabled(isOpen and isGrabbing) ui.bnSaveSample.setEnabled(isOpen and isGrabbing) ui.bnPreviewSample.setEnabled(bool(current_sample_path)) if __name__ == "__main__": # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() deviceList = MV_CC_DEVICE_INFO_LIST() cam = MvCamera() nSelCamIndex = 0 obj_cam_operation = 0 isOpen = False isGrabbing = False isCalibMode = True # 是否是标定模式(获取原始图像) frame_monitor_thread = None # 初始化UI app = QApplication(sys.argv) mainWindow = QMainWindow() ui = Ui_MainWindow() ui.setupUi(mainWindow) # 扩大主窗口尺寸 mainWindow.resize(1200, 800) # 宽度1200,高度800 # 创建工具栏 toolbar = mainWindow.addToolBar("检测工具") # 添加检测按钮 ui.bnCheckPrint = QPushButton("检测印花质量") toolbar.addWidget(ui.bnCheckPrint) # 添加保存样本按钮 ui.bnSaveSample = QPushButton("保存标准样本") toolbar.addWidget(ui.bnSaveSample) # 添加预览样本按钮 ui.bnPreviewSample = QPushButton("预览样本") toolbar.addWidget(ui.bnPreviewSample) # 添加历史记录下拉框 ui.cbHistory = QComboBox() ui.cbHistory.setMinimumWidth(300) toolbar.addWidget(QLabel("历史记录:")) toolbar.addWidget(ui.cbHistory) # 添加当前样本显示标签 ui.lblSamplePath = QLabel("当前样本: 未设置样本") status_bar = mainWindow.statusBar() status_bar.addPermanentWidget(ui.lblSamplePath) # 绑定按钮事件 ui.bnCheckPrint.clicked.connect(check_print) ui.bnSaveSample.clicked.connect(save_sample_image) ui.bnPreviewSample.clicked.connect(preview_sample) # 绑定其他按钮事件 ui.bnEnum.clicked.connect(enum_devices) ui.bnOpen.clicked.connect(open_device) ui.bnClose.clicked.connect(close_device) ui.bnStart.clicked.connect(start_grabbing) ui.bnStop.clicked.connect(stop_grabbing) ui.bnSoftwareTrigger.clicked.connect(trigger_once) ui.radioTriggerMode.clicked.connect(set_software_trigger_mode) ui.radioContinueMode.clicked.connect(set_continue_mode) ui.bnGetParam.clicked.connect(get_param) ui.bnSetParam.clicked.connect(set_param) # 修改保存图像按钮连接 ui.bnSaveImage.clicked.connect(save_image_dialog) # 显示主窗口 mainWindow.show() # 执行应用 app.exec_() # 关闭设备 close_device() # ch:反初始化SDK | en: finalize SDK MvCamera.MV_CC_Finalize() sys.exit() 这是one.py的完整代码把上个问题的解决方案整合起来并完整展示

# -*- coding: utf-8 -*- import sys import os import cv2 import numpy as np from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton, QWidget, QVBoxLayout, QHBoxLayout, QMessageBox, QLabel, QFileDialog, QToolBox, QComboBox, QStatusBar, QGroupBox, QSlider, QDockWidget, QProgressDialog) from PyQt5.QtCore import QRect, Qt, QSettings, QThread, pyqtSignal sys.path.append("D:\\海康\\MVS\\Development\\Samples\\Python\\BasicDemo") from MvCameraControl_class import * from MvErrorDefine_const import * from CameraParams_header import * from PyUICBasicDemo import Ui_MainWindow import ctypes from datetime import datetime import logging import platform # 配置日志系统 logging.basicConfig( level=logging.DEBUG, # 设置为DEBUG级别获取更多信息 format='%(asctime)s - %(name)s - %(levelname)s - %message)s', handlers=[ logging.FileHandler("cloth_inspection_debug.log"), logging.StreamHandler() ] ) logging.info("布料印花检测系统启动") # 全局变量 current_sample_path = "" # 当前使用的样本路径 detection_history = [] # 检测历史记录 # ==================== 核心功能类 ==================== class CameraOperation: def __init__(self, cam, device_list, index): self.cam = cam self.device_list = device_list self.index = index self.g_bFrameAvailable = False # 帧数据可用标志 self.current_frame = None # 当前帧数据 self.exposure_time = 0.0 # 曝光时间 self.gain = 0.0 # 增益 self.frame_rate = 0.0 # 帧率 def __getattr__(self, name): """处理未定义属性的访问""" raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'") def is_frame_available(self): """检查是否有可用的帧数据""" return self.g_bFrameAvailable def get_current_frame(self): """获取当前帧数据""" return self.current_frame def open_device(self): """打开设备""" # 打开设备代码... # 在成功打开设备后设置初始状态 self.g_bFrameAvailable = False return MV_OK def start_grabbing(self, hwnd): """开始取流""" # 取流代码... # 在成功开始取流后设置标志 self.g_bFrameAvailable = True return MV_OK def Stop_grabbing(self): """停止取流""" # 停止取流代码... # 在停止取流后重置标志 self.g_bFrameAvailable = False return MV_OK def close_device(self): """关闭设备""" # 关闭设备代码... # 在关闭设备后重置标志 self.g_bFrameAvailable = False # 修复:self极_bFrameAvailable -> self.g_bFrameAvailable return MV_OK def get_one_frame(self): """获取一帧数据""" # 获取帧数据代码... # 在成功获取帧后更新当前帧 self.current_frame = frame_data self.g_bFrameAvailable = True return frame_data # 添加触发模式设置方法 def set_trigger_mode(self, is_trigger_mode): """ 设置相机触发模式 :param is_trigger_mode: True-触发模式, False-连续模式 :return: 状态码 """ try: # 设置触发模式 if is_trigger_mode: # 触发模式开启 ret = self.cam.MV_CC_SetEnumValue("TriggerMode", MV_TRIGGER_MODE_ON) if ret != MV_OK: logging.error(f"设置触发模式开启失败: {ToHexStr(ret)}") return ret # 设置触发源为软触发 ret = self.cam.MV_CC_SetEnumValue("TriggerSource", MV_TRIGGER_SOURCE_SOFTWARE) if ret != MV_OK: logging.error(f"设置软触发源失败: {ToHexStr(ret)}") return ret logging.info("已设置为触发模式(软触发)") else: # 触发模式关闭 ret = self.cam.MV_CC_SetEnumValue("TriggerMode", MV_TRIGGER_MODE_OFF) if ret != MV_OK: logging.error(f"设置触发模式关闭失败: {ToHexStr(ret)}") return ret logging.info极("已设置为连续模式") return MV_OK except Exception as e: logging.exception("设置触发模式时发生异常") return MV_E_ABNORMAL_IMAGE def trigger_once(self): """ 执行一次软触发 :return: 状态码 """ try: ret = self.cam.MV_极_CC_SetCommandValue("TriggerSoftware") if ret != MV_OK: logging.error(f"软触发失败: {ToHexStr(ret)}") else: logging.info("软触发成功") return ret except Exception as e: logging.exception("执行软触发时发生异常") return MV_E_ABNORMAL_IMAGE # 添加获取参数方法 def get_parameters(self): """ 获取相机参数(曝光时间、增益、帧率) :return: 状态码 """ try: # 获取曝光时间 st_float_param = MVCC_FLOATVALUE() ret = self.cam.MV_CC_GetFloatValue("ExposureTime", st_float_param) if ret != MV_OK: logging.error(f"获取曝光时间失败: {ToHexStr(ret)}") return ret self.exposure_time = st_float_param.fCurValue # 获取增益 st_float_param = MVCC_FLOATVALUE() ret = self.cam.MV_CC_GetFloatValue("Gain", st_float_param) if ret != MV_OK: logging.error(f"获取增益失败: {ToHexStr(ret)}") return ret self.gain = st_float_param.fCurValue # 获取帧率 st_float_param = MVCC_FLOATVALUE() ret = self.cam.MV_CC_GetFloatValue("AcquisitionFrameRate", st_float_param) if ret != MV_OK: logging.error(f"获取帧率失败: {ToHexStr(ret)}") return ret self.frame_rate = st_float_param.fCurValue logging.info(f"获取参数成功: 曝光={self.exposure_time} 增益={self.gain} 帧率={self.frame_rate}") return MV_OK except Exception as e: logging.exception("获取参数时发生异常") return MV_E_ABNORMAL_IMAGE # 添加设置参数方法 def set_param(self, frame_rate, exposure_time, gain): """ 设置相机参数 :param frame_rate: 帧率 :param exposure_time: 曝光时间 :param gain: 增益 :return: 状态码 """ try: # 设置帧率 ret = self.cam.MV_CC_SetFloatValue("AcquisitionFrameRate", frame_rate) if ret != MV_OK: logging.error(f"设置帧率失败: {ToHexStr(ret)}") return ret # 设置曝光时间 ret = self.cam.MV_CC_SetFloatValue("ExposureTime", exposure_time) if ret != MV_OK: logging.error(f"设置曝光时间失败: {ToHexStr(ret)}") return ret # 设置增益 ret = self.cam.MV_CC_SetFloatValue("Gain", gain) if ret != MV_OK: logging.error(f"设置增益失败: {ToHexStr(ret)}") return ret # 更新对象属性 self.frame_rate = frame_rate self.exposure_time = exposure_time self.gain = gain logging.info(f"参数设置成功: 帧率={frame_rate} 曝光={exposure_time} 增益={gain}") return MV_OK except Exception as e: logging.exception("设置参数时发生异常") return MV_E_ABNORMAL_IMAGE # 添加保存图像方法 def save_image(self, file_path, save_format): """ 保存当前帧到文件 :param file_path: 文件路径 :param save_format: 保存格式 ("bmp", "jpg", "png") :return: 状态码 """ try: if self.current_frame is None: logging.error("无法保存图像: 当前帧为空") return MV_E_NODATA # 根据格式保存图像 if save_format == "bmp": cv2.imwrite(file_path, self.current_frame) elif save_format == "jpg": cv2.imwrite(file_path, self.current_frame, [int(cv2.IMWRITE_JPEG_QUALITY), 95]) elif save_format == "png": cv2.imwrite(file_path, self.current_frame, [int(cv2.IMWRITE_PNG_COMPRESSION), 9]) else: logging.error(f"不支持的保存格式: {save_format}") return MV_E_UNSUPPORTED logging.info(f"图像保存成功: {file_path}") return MV_OK except Exception as e: logging.exception(f"保存图像时发生异常: {file_path}") return MV_E_ABNORMAL_IMAGE # ==================== 线程类 ==================== # 帧监控线程(修复版) class FrameMonitorThread(QThread): frame_status = pyqtSignal(str) def __init__(self, cam_operation): super().__init__() self.cam_operation = cam_operation self.running = True def run(self): while self.running: if self.cam_operation: # 使用正确的方法检查帧状态 frame_available = self.cam_operation.is_frame_available() frame_text = "有帧" if frame_available else "无帧" self.frame_status.emit(f"帧状态: {frame_text}") QThread.msleep(500) def stop(self): self.running = False # ==================== 核心检测功能 ==================== # 布料印花检测函数(修复版) def check_print_quality(sample_image_path, test_image, threshold=0.05): """ 检测布料印花是否合格,直接使用内存中的测试图像 :param sample_image_path: 合格样本图像路径 :param test_image: 内存中的测试图像 (numpy数组) :param threshold: 差异阈值 :return: 是否合格,差异值,标记图像 """ # 读取样本图像 try: sample_img_data = np.fromfile(sample_image_path, dtype=np.uint8) sample_image = cv2.imdecode(sample_img_data, cv2.IMREAD_GRAYSCALE) # 修复:cv2.READ_GRAYSCALE -> cv2.IMREAD_GRAYSCALE if sample_image is None: logging.error(f"无法解码样本图像: {sample_image_path}") return None, None, None except Exception as e: logging.exception(f"样本图像读取异常: {str(e)}") return None, None, None # 确保测试图像是灰度图 if len(test_image.shape) == 3: # 如果是彩色图像 test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY) # 确保两个图像大小一致 try: test_image = cv2.resize(test_image, (sample_image.shape[1], sample_image.shape[0])) except Exception as e: logging.error(f"图像调整大小失败: {str(e)}") return None, None, None # 计算差异 diff = cv2.absdiff(sample_image, test_image) # 二值化差异 _, thresholded = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY) # 计算差异比例 diff_pixels = np.count_nonzero(thresholded) total_pixels = sample_image.size diff_ratio = diff_pixels / total_pixels # 判断是否合格 is_qualified = diff_ratio <= threshold # 创建标记图像(红色标记差异区域) marked_image = cv2.cvtColor(test_image, cv2.COLOR_GRAY2BGR) marked_image[thresholded == 255] = [0, 0, 255] # 红色标记 return is_qualified, diff_ratio, marked_image # ==================== UI更新函数 ==================== # 更新检测结果显示 def update_diff_display(diff_ratio, is_qualified): """ 更新差异度显示控件 """ # 更新当前差异度显示 ui.lblCurrentDiff.setText(f"当前差异度: {diff_ratio*100:.2f}%") # 根据合格状态设置颜色 if is_qualified: ui.lblDiffStatus.setText("状态: 合格") ui.lblDiffStatus.setStyleSheet("color: green; font-size: 12px;") else: ui.lblDiffStatus.setText("状态: 不合格") ui.lblDiffStatus.setStyleSheet("color: red; font-size: 12px;") # 更新差异度阈值显示 def update_diff_threshold(value): """ 当滑块值改变时更新阈值显示 """ ui.lblDiffValue.setText(f"{value}%") # 布料印花检测功能(修复版) def check_print(): global isGrabbing, obj_cam_operation, current_sample_path, detection_history logging.info("检测印花质量按钮按下") # 1. 检查相机状态 if not isGrabbing: logging.warning("相机未取流") QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 2. 检查相机操作对象 if not obj_cam_operation: logging.error("相机操作对象未初始化") QMessageBox.warning(mainWindow, "错误", "相机未正确初始化!", QMessageBox.Ok) return # 3. 检查样本路径 if not current_sample_path or not os.path.exists(current_sample_path): logging.warning(f"无效样本路径: {current_sample_path}") QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return # 使用进度对话框防止UI阻塞 progress = QProgressDialog("正在检测...", "取消", 0, 100, mainWindow) progress.setWindowModality(Qt.WindowModal) progress.setValue(10) try: # 4. 获取当前帧 logging.info("尝试获取当前极") test_image = obj_cam_operation.get_current_frame() progress.setValue(30) if test_image is None: logging.warning("获取当前帧失败") QMessageBox.warning(mainWindow, "错误", "无法获取当前帧图像!", QMessageBox.Ok) return # 5. 获取差异度阈值 diff_threshold = ui.sliderDiffThreshold.value() / 100.0 logging.info(f"使用差异度阈值: {diff_threshold}") progress.setValue(50) # 6. 执行检测 is_qualified, diff_ratio, marked_image = check_print_quality( current_sample_path, test_image, threshold=diff_threshold ) progress.setValue(70) # 检查返回结果是否有效 if is_qualified is None: logging.error("检测函数返回无效结果") QMessageBox.critical(mainWindow, "检测错误", "检测失败,请检查日志", QMessageBox.Ok) return logging.info(f"检测结果: 合格={is_qualified}, 差异={diff_ratio}") progress.setValue(90) # 7. 更新UI update_diff_display(diff_ratio, is_qualified) result_text = f"印花是否合格: {'合格' if is_qualified else '不合格'}\n差异占比: {diff_ratio*100:.2f}%\n阈值: {diff_threshold*100:.2f}%" QMessageBox.information(mainWindow, "检测结果", result_text, QMessageBox.Ok) if marked_image is not None: cv2.imshow("缺陷标记结果", marked_image) cv2.waitKey(0) cv2.destroyAllWindows() else: logging.warning("标记图像为空") # 8. 记录检测结果 detection_result = { 'timestamp': datetime.now(), 'qualified': is_qualified, 'diff_ratio': diff_ratio, 'threshold': diff_threshold } detection_history.append(detection_result) update_history_display() progress.setValue(100) except Exception as e: logging.exception("印花检测失败") QMessageBox.critical(mainWindow, "检测错误", f"检测过程中发生错误: {str(e)}", QMessageBox.Ok) finally: progress.close() # 保存标准样本函数 def save_sample_image(): global isGrabbing, obj_cam_operation, current_sample_path if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查是否有有效图像 if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "无有效图像", "未捕获到有效图像,请检查相机状态!", QMessageBox.Ok) return # 读取上次使用的路径 settings = QSettings("ClothInspection", "CameraApp") last_dir = settings.value("last_save_dir", os.path.join(os.getcwd(), "captures")) # 创建默认文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") default_filename = f"sample_{timestamp}" # 弹出文件保存对话框 file_path, selected_filter = QFileDialog.getSaveFileName( mainWindow, "保存标准样本图像", os.path.join(last_dir, default_filename), "BMP Files (*.bmp);;PNG Files (*.png);;JPEG Files (*.jpg);;所有文件 (*)", options=QFileDialog.DontUseNativeDialog ) if not file_path: logging.info("用户取消了图像保存操作") return # 用户取消保存 # 处理文件扩展名 file_extension = os.path.splitext(file_path)[1].lower() if not file_extension: # 根据选择的过滤器添加扩展名 if "BMP" in selected_filter: file_path += ".bmp" elif "PNG" in selected_filter: file_path += ".png" elif "JPEG" in selected_filter or "JPG" in selected_filter: file_path += ".jpg" else: # 默认使用BMP格式 file_path += ".bmp" file_extension = os.path.splitext(file_path)[1].lower() # 根据扩展名设置保存格式 format_mapping = { ".bmp": "bmp", ".png": "png", ".jpg": "jpg", ".jpeg": "jpg" } save_format = format_mapping.get(file_extension) if not save_format: QMessageBox.warning(mainWindow, "错误", "不支持的文件格式!", QMessageBox.Ok) return # 确保目录存在 directory = os.path.dirname(file_path) if directory and not os.path.exists(directory): try: os.makedirs(directory, exist_ok=True) logging.info(f"创建目录: {directory}") except OSError as e: error_msg = f"无法创建目录 {directory}: {str(e)}" QMessageBox.critical(mainWindow, "目录创建错误", error_msg, QMessageBox.Ok) return # 保存当前帧作为标准样本 try: ret = obj_cam_operation.save_image(file_path, save_format) if ret != MV_OK: strError = f"保存样本图像失败: {hex(ret)}" QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: success_msg = f"标准样本已保存至:\n{file_path}" QMessageBox.information(mainWindow, "成功", success_msg, QMessageBox.Ok) # 更新当前样本路径 current_sample_path = file_path update_sample_display() # 保存当前目录 settings.setValue("last_save_dir", os.path.dirname(file_path)) except Exception as e: error_msg = f"保存图像时发生错误: {str(e)}" QMessageBox.critical(mainWindow, "异常错误", error_msg, QMessageBox.Ok) logging.exception("保存样本图像时发生异常") # 预览当前样本 def preview_sample(): global current_sample_path if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return try: # 使用安全方法读取图像 img_data = np.fromfile(current_sample_path, dtype=np.uint8) sample_img = cv2.imdecode(img_data, cv2.IMREAD_COLOR) if sample_img is None: raise Exception("无法加载图像") cv2.imshow("标准样本预览", sample_img) cv2.waitKey(0) cv2.destroyAllWindows() except Exception as e: QMessageBox.warning(mainWindow, "错误", f"预览样本失败: {str(e)}", QMessageBox.Ok) # 更新样本路径显示 def update_sample_display(): global current_sample_path if current_sample_path: ui.lblSamplePath.setText(f"当前样本: {os.path.basename(current_sample_path)}") ui.lblSamplePath.setToolTip(current_sample_path) ui.bnPreviewSample.setEnabled(True) else: ui.lblSamplePath.setText("当前样本: 未设置样本") ui.bnPreviewSample.setEnabled(False) # 更新历史记录显示 def update_history_display(): global detection_history ui.cbHistory.clear() for i, result in enumerate(detection_history[-10:]): # 显示最近10条记录 timestamp = result['timestamp'].strftime("%H:%M:%S") status = "合格" if result['qualified'] else "不合格" ratio = f"{result['diff_ratio']*100:.2f}%" ui.cbHistory.addItem(f"[{timestamp}] {status} - 差异: {ratio}") # 获取选取设备信息的索引,通过[]之间的字符去解析 def TxtWrapBy(start_str, end, all): start = all.find(start_str) if start >= 0: start += len(start_str) end = all.find(end, start) if end >= 0: return all[start:end].strip() # 将返回的错误码转换为十六进制显示 def ToHexStr(num): """将错误码转换为十六进制字符串""" # 处理非整数输入 if not isinstance(num, int): try: # 尝试转换为整数 num = int(num) except: # 无法转换时返回类型信息 return f"<非整数:{type(num)}>" chaDic = {10: 'a', 11: 'b', 12: 'c', 13: 'd', 14: 'e', 15: 'f'} hexStr = "" # 处理负数 if num < 0: num = num + 2 ** 32 # 转换为十六进制 while num >= 16: digit = num % 16 hexStr = chaDic.get(digit, str(digit)) + hexStr num //= 16 hexStr = chaDic.get(num, str(num)) + hexStr return "0x" + hexStr # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() # 全局变量声明 global deviceList global cam global nSelCamIndex global obj_cam_operation global isOpen global isGrabbing global isCalibMode global frame_monitor_thread # 初始化全局变量 deviceList = MV_CC_DEVICE_INFO_LIST() cam = MvCamera() nSelCamIndex = 0 obj_cam_operation = None isOpen = False isGrabbing = False isCalibMode = True # 是否是标定模式(获取原始图像) frame_monitor_thread = None # 绑定下拉列表至设备信息索引 def xFunc(event): global nSelCamIndex nSelCamIndex = TxtWrapBy("[", "]", ui.ComboDevices.get()) # Decoding Characters def decoding_char(c_ubyte_value): c_char_p_value = ctypes.cast(c_ubyte_value, ctypes.c_char_p) try: decode_str = c_char_p_value.value.decode('gbk') # Chinese characters except UnicodeDecodeError: decode_str = str(c_char_p_value.value) return decode_str # ch:枚举相机 | en:enum devices def enum_devices(): global deviceList global obj_cam_operation deviceList = MV_CC_DEVICE_INFO_LIST() n_layer_type = (MV_GIGE_DEVICE | MV_USB_DEVICE | MV_GENTL_CAMERALINK_DEVICE | MV_GENTL_CXP_DEVICE | MV_GENTL_XOF_DEVICE) ret = MvCamera.MV_CC_EnumDevices(n_layer_type, deviceList) if ret != 0: strError = "Enum devices fail! ret = :" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) return ret if deviceList.nDeviceNum == 0: QMessageBox.warning(mainWindow, "Info", "Find no device", QMessageBox.Ok) return ret print("Find %d devices!" % deviceList.nDeviceNum) devList = [] for i in range(0, deviceList.nDeviceNum): # 修复:device.nDeviceNum -> deviceList.nDeviceNum mvcc_dev_info = cast(deviceList.pDeviceInfo[i], POINTER(MV_CC_DEVICE_INFO)).contents if mvcc_dev_info.nTLayerType == MV_GIGE_DEVICE or mvcc_dev_info.nTLayerType == MV_GENTL_GIGE_DEVICE: print("\ngige device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) nip1 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >> 24) nip2 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >> 16) nip3 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >> 8) nip4 = (mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff) print("current ip: %d.%d.%d.%d " % (nip1, nip2, nip3, nip4)) devList.append( "[" + str(i) + "]GigE: " + user_defined_name + " " + model_name + "(" + str(nip1) + "." + str( nip2) + "." + str(nip3) + "." + str(nip4) + ")") elif mvcc_dev_info.nTLayerType == MV_USB_DEVICE: print("\nu3v device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stUsb3VInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]USB: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CAMERALINK_DEVICE: print("\nCML device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCMLInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]CML: " + user_defined_name + " "+ model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CXP_DEVICE: print("\nCXP device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCXPInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: "+strSerialNumber) devList.append("[" + str(i) + "]CXP: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_XOF_DEVICE: print("\nXoF device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stXoFInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]XoF: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") ui.ComboDevices.clear() ui.ComboDevices.addItems(devList) ui.ComboDevices.setCurrentIndex(0) # ch:打开相机 | en:open device def open_device(): global deviceList global nSelCamIndex global obj_cam_operation global isOpen global frame_monitor_thread if isOpen: QMessageBox.warning(mainWindow, "Error", 'Camera is Running!', QMessageBox.Ok) return MV_E_CALLORDER nSelCamIndex = ui.ComboDevices.currentIndex() if nSelCamIndex < 0: QMessageBox.warning(mainWindow, "Error", 'Please select a camera!', QMessageBox.Ok) return MV_E_CALLORDER obj_cam_operation = CameraOperation(cam, deviceList, nSelCamIndex) ret = obj_cam_operation.open_device() if 0 != ret: strError = "Open device failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) isOpen = False else: set_continue_mode() get_param() isOpen = True enable_controls() # 启动帧监控线程(修复版) frame_monitor_thread = FrameMonitorThread(obj_cam_operation) frame_monitor_thread.frame_status.connect(ui.statusBar.showMessage) frame_monitor_thread.start() # ch:开始取流 | en:Start grab image def start_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.start_grabbing(ui.widgetDisplay.winId()) if ret != 0: strError = "Start grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = True enable_controls() # ch:停止取流 | en:Stop grab image def stop_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.Stop_grabbing() if ret != 0: strError = "Stop grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = False enable_controls() # ch:关闭设备 | Close device def close_device(): global isOpen global isGrabbing global obj_cam_operation global frame_monitor_thread # 停止帧监控线程 if frame_monitor_thread and frame_monitor_thread.isRunning(): frame_monitor_thread.stop() frame_monitor_thread.wait(2000) if isOpen: obj_cam_operation.close_device() isOpen = False isGrabbing = False enable_controls() # ch:设置触发模式 | en:set trigger mode def set_continue_mode(): ret = obj_cam_operation.set_trigger_mode(False) if ret != 0: strError = "Set continue mode failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(True) ui.radioTriggerMode.setChecked(False) ui.bnSoftwareTrigger.setEnabled(False) # ch:设置软触发模式 | en:set software trigger mode def set_software_trigger_mode(): ret = obj_cam_operation.set_trigger_mode(True) if ret != 0: strError = "Set trigger mode failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(False) ui.radioTriggerMode.setChecked(True) ui.bnSoftwareTrigger.setEnabled(isGrabbing) # ch:设置触发命令 | en:set trigger software def trigger_once(): ret = obj_cam_operation.trigger_once() if ret != 0: strError = "TriggerSoftware failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) # 保存图像对话框 def save_image_dialog(): """ 打开保存图像对话框并保存当前帧 """ global isGrabbing, obj_cam_operation # 检查相机状态 if not isGrabbing: QMessageBox.warning(mainWindow, "相机未就绪", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查是否有有效图像 if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "无有效图像", "未捕获到有效图像,请检查相机状态!", QMessageBox.Ok) return # 读取上次使用的路径 settings = QSettings("ClothInspection", "CameraApp") last_dir = settings.value("last_save_dir", os.path.join(os.getcwd(), "captures")) # 创建默认文件名 timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") # 修复:%H%M%极 -> %H%M%S default_filename = f"capture_{timestamp}" # 弹出文件保存对话框 file_path, selected_filter = QFileDialog.getSaveFileName( mainWindow, "保存图像", os.path.join(last_dir, default_filename), # 初始路径 "BMP 图像 (*.bmp);;JPEG 图像 (*.jpg);;PNG 图像 (*.png);;TIFF 图像 (*.tiff);;所有文件 (*)", options=QFileDialog.DontUseNativeDialog ) # 用户取消操作 if not file_path: logging.info("用户取消了图像保存操作") return # 处理文件扩展名 file_extension = os.path.splitext(file_path)[1].lower() if not file_extension: # 根据选择的过滤器添加扩展名 if "BMP" in selected_filter: file_path += ".bmp" elif "JPEG" in selected_filter or "JPG" in selected_filter: file_path += ".jpg" elif "PNG" in selected_filter: file_path += ".png" elif "TIFF" in selected_filter: file_path += ".tiff" else: # 默认使用BMP格式 file_path += ".bmp" # 确定保存格式 format_mapping = { ".bmp": "bmp", ".jpg": "jpg", ".jpeg": "jpg", ".png": "png", ".tiff": "tiff", ".tif": "tiff" } file_extension = os.path.splitext(file_path)[1].lower() save_format = format_mapping.get(file_extension, "bmp") # 确保目录存在 directory = os.path.dirname(file_path) if directory and not os.path.exists(directory): try: os.makedirs(directory, exist_ok=True) except OSError as e: QMessageBox.critical(mainWindow, "目录错误", f"无法创建目录:\n{str(e)}", QMessageBox.Ok) return # 保存图像 try: ret = obj_cam_operation.save_image(file_path, save_format) if ret == MV_OK: QMessageBox.information(mainWindow, "保存成功", f"图像已保存至:\n{file_path}", QMessageBox.Ok) logging.info(f"图像保存成功: {file_path}") # 保存当前目录 settings.setValue("last_save_dir", os.path.dirname(file_path)) else: error_msg = f"保存失败! 错误代码: {hex(ret)}" QMessageBox.warning(mainWindow, "保存失败", error_msg, QMessageBox.Ok) logging.error(f"图像保存失败: {file_path}, 错误代码: {hex(ret)}") except Exception as e: QMessageBox.critical(mainWindow, "保存错误", f"保存图像时发生错误:\n{str(e)}", QMessageBox.Ok) logging.exception(f"保存图像时发生异常: {file_path}") def is_float(str): try: float(str) return True except ValueError: return False # ch: 获取参数 | en:get param def get_param(): try: # 调用方法获取参数 ret = obj_cam_operation.get_parameters() # 记录调用结果(调试用) logging.debug(f"get_param() 返回: {ret} (类型: {type(ret)})") # 处理错误码 if ret != MV_OK: strError = "获取参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: # 成功获取参数后更新UI ui.edtExposureTime.setText("{0:.2f}".format(obj_cam_operation.exposure_time)) ui.edtGain.setText("{0:.2f}".format(obj_cam_operation.gain)) ui.edtFrameRate.setText("{0:.2f}".format(obj_cam_operation.frame_rate)) # 记录成功信息 logging.info("成功获取相机参数") except Exception as e: # 处理所有异常 error_msg = f"获取参数时发生错误: {str(e)}" logging.error(error_msg) QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) # ch: 设置参数 | en:set param def set_param(): frame_rate = ui.edtFrameRate.text() exposure = ui.edtExposureTime.text() gain = ui.edtGain.text() if not (is_float(frame_rate) and is_float(exposure) and is_float(gain)): strError = "设置参数失败: 参数必须是有效的浮点数" QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) return MV_E_PARAMETER try: # 使用正确的参数顺序和关键字 ret = obj_cam_operation.set_param( frame_rate=float(frame_rate), exposure_time=float(exposure), gain=float(gain) ) if ret != MV_OK: strError = "设置参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: logging.info("参数设置成功") return MV_OK except Exception as e: error_msg = f"设置参数时发生错误: {str(e)}" logging.error(error_msg) QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) return MV_E_STATE # ch: 设置控件状态 | en:set enable status def enable_controls(): global isGrabbing global isOpen # 先设置group的状态,再单独设置各控件状态 ui.groupGrab.setEnabled(isOpen) ui.groupParam.setEnabled(isOpen) ui.bnOpen.setEnabled(not isOpen) ui.bnClose.setEnabled(isOpen) ui.bnStart.setEnabled(isOpen and (not isGrabbing)) ui.bnStop.setEnabled(isOpen and isGrabbing) ui.bnSoftwareTrigger.setEnabled(isGrabbing and ui.radioTriggerMode.isChecked()) ui.bnSaveImage.setEnabled(isOpen and isGrabbing) # 添加检测按钮控制 ui.bnCheckPrint.setEnabled(isOpen and isGrabbing) ui.bnSaveSample.setEnabled(isOpen and isGrabbing) ui.bnPreviewSample.setEnabled(bool(current_sample_path)) if __name__ == "__main__": # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() # 初始化全局变量 deviceList = MV_CC_DEVICE_INFO_LIST() cam = MvCamera() nSelCamIndex = 0 obj_cam_operation = None isOpen = False isGrabbing = False isCalibMode = True # 是否是标定模式(获取原始图像) frame_monitor_thread = None # 初始化UI app = QApplication(sys.argv) mainWindow = QMainWindow() ui = Ui_MainWindow() ui.setupUi(mainWindow) # 扩大主窗口尺寸 mainWindow.resize(1200, 800) # 宽度1200,高度800 # 创建工具栏 toolbar = mainWindow.addToolBar("检测工具") # 添加检测按钮 ui.bnCheckPrint = QPushButton("检测印花质量") toolbar.addWidget(ui.bnCheckPrint) # 添加保存样本按钮 ui.bnSaveSample = QPushButton("保存标准样本") toolbar.addWidget(ui.bnSaveSample) # 添加预览样本按钮 ui.bnPreviewSample = QPushButton("预览样本") toolbar.addWidget(ui.bnPreviewSample) # 添加历史记录下拉框 ui.cbHistory = QComboBox() ui.cbHistory.setMinimumWidth(300) toolbar.addWidget(QLabel("历史记录:")) toolbar.addWidget(ui.cbHistory) # 添加当前样本显示标签 ui.lblSamplePath = QLabel("当前样本: 未设置样本") status_bar = mainWindow.statusBar() status_bar.addPermanentWidget(ui.lblSamplePath) # === 新增差异度调整控件 === # 创建右侧面板容器 right_panel = QWidget() right_layout = QVBoxLayout(right_panel) right_layout.setContentsMargins(10, 10, 10, 10) # 创建差异度调整组 diff_group = QGroupBox("差异度调整") diff_layout = QVBoxLayout(diff_group) # 差异度阈值控制 ui.lblDiffThreshold = QLabel("差异度阈值 (0-100%):") ui.sliderDiffThreshold = QSlider(Qt.Horizontal) ui.sliderDiffThreshold.setRange(0, 100) # 0-100% ui.sliderDiffThreshold.setValue(5) # 默认5% ui.lblDiffValue = QLabel("5%") # 当前差异度显示 ui.lblCurrentDiff = QLabel("当前差异度: -") ui.lblCurrentDiff.setStyleSheet("font-size: 14px; font-weight: bold;") # 差异度状态指示器 ui.lblDiffStatus = QLabel("状态: 未检测") ui.lblDiffStatus.setStyleSheet("font-size: 12px;") # 布局控件 diff_layout.addWidget(ui.lblDiffThreshold) diff_layout.addWidget(ui.sliderDiffThreshold) diff_layout.addWidget(ui.lblDiffValue) diff_layout.addWidget(ui.lblCurrentDiff) diff_layout.addWidget(ui.lblDiffStatus) # 添加差异度组到右侧布局 right_layout.addWidget(diff_group) # 添加拉伸项使控件靠上 right_layout.addStretch(1) # 创建停靠窗口 dock = QDockWidget("检测控制面板", mainWindow) dock.setWidget(right_panel) dock.setFeatures(QDockWidget.DockWidgetMovable | QDockWidget.DockWidgetFloatable) mainWindow.addDockWidget(Qt.RightDockWidgetArea, dock) # === 差异度调整功能实现 === # 更新差异度阈值显示 def update_diff_threshold(value): ui.lblDiffValue.setText(f"{value}%") # 修复:ui.lbl极DiffValue -> ui.lblDiffValue # 连接滑块信号 ui.sliderDiffThreshold.valueChanged.connect(update_diff_threshold) # 更新检测结果显示 def update_diff_display(diff_ratio, is_qualified): # 更新当前差异度显示 ui.lblCurrentDiff.setText(f"当前差异度: {diff_ratio*100:.2f}%") # 根据合格状态设置颜色 if is_qualified: ui.lblDiffStatus.setText("状态: 合格") ui.lblDiffStatus.setStyleSheet("color: green; font-size: 12px;") else: ui.lblDiffStatus.setText("状态: 不合格") ui.lblDiffStatus.setStyleSheet("color: red; font-size: 12px;") # 绑定按钮事件 ui.bnCheckPrint.clicked.connect(check_print) ui.bnSaveSample.clicked.connect(save_sample_image) ui.bnPreviewSample.clicked.connect(preview_sample) # 绑定其他按钮事件 ui.bnEnum.clicked.connect(enum_devices) ui.bnOpen.clicked.connect(open_device) ui.bnClose.clicked.connect(close_device) ui.bnStart.clicked.connect(start_grabbing) ui.bnStop.clicked.connect(stop_grabbing) ui.bnSoftwareTrigger.clicked.connect(trigger_once) ui.radioTriggerMode.clicked.connect(set_software_trigger_mode) ui.radioContinueMode.clicked.connect(set_continue_mode) ui.bnGetParam.clicked.connect(get_param) ui.bnSetParam.clicked.connect(set_param) # 修改保存图像按钮连接 ui.bnSaveImage.clicked.connect(save_image_dialog) # 显示主窗口 mainWindow.show() # 执行应用 app.exec_() # 关闭设备 close_device() # ch:反初始化SDK | en: finalize SDK MvCamera.MV_CC_Finalize() sys.exit() 把你刚刚的解决方案用来改进这个代码并完整展示

# -*- coding: utf-8 -*- import sys import os import cv2 import numpy as np from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton, QWidget, QVBoxLayout, QHBoxLayout, QMessageBox, QLabel, QFileDialog, QToolBar, QComboBox, QStatusBar, QGroupBox, QSlider, QDockWidget, QProgressDialog, QLineEdit, QCheckBox, QGridLayout, QSpinBox, QRadioButton) from PyQt5.QtCore import QRect, Qt, QSettings, QThread, pyqtSignal, QTimer from PyQt5.QtGui import QPixmap, QImage, QFont import time import datetime import logging import platform import random from skimage.metrics import structural_similarity as ssim import json import threading import ctypes # 尝试导入海康SDK try: from MvCameraControl_class import * except ImportError: logging.error("未找到海康SDK库,请安装MVS SDK") # 如果没有安装SDK,使用模拟模式 class MvCamera: MV_CC_DEVICE_INFO_LIST = type('MV_CC_DEVICE_INFO_LIST', (object,), {}) MV_GIGE_DEVICE = 1 MV_USB_DEVICE = 4 MV_ACCESS_Exclusive = 1 @staticmethod def MV_CC_EnumDevices(nTLayerType, stDeviceList): return 0 @staticmethod def MV_CC_CreateHandle(stDeviceInfo): return 0 @staticmethod def MV_CC_OpenDevice(stCamHandle, nAccessMode, nSwitchoverKey): return 0 @staticmethod def MV_CC_StartGrabbing(stCamHandle): return 0 @staticmethod def MV_CC_StopGrabbing(stCamHandle): return 0 @staticmethod def MV_CC_CloseDevice(stCamHandle): return 0 @staticmethod def MV_CC_DestroyHandle(stCamHandle): return 0 @staticmethod def MV_CC_RegisterImageCallBack(stCamHandle, cbOutput, pUser): return 0 # 配置日志系统 logging.basicConfig( level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', handlers=[ logging.FileHandler("cloth_inspection_debug.log"), logging.StreamHandler() ] ) logging.info("布料印花检测系统启动") # 全局变量 current_sample_path = "" # 当前使用的样本路径 detection_history = [] # 检测历史记录 is_processing = False # 防止重复处理 discovered_cameras = [] # 存储发现的相机列表 # ====================== 虚拟传感器类 ====================== class VirtualSensor: """模拟真实传感器输入的虚拟传感器""" def __init__(self): self.state = False # 传感器状态(触发/未触发) self.trigger_delay = 0.5 # 默认触发延迟(秒) self.trigger_count = 0 # 触发计数器 self.sensor_type = "光电传感器" # 传感器类型 self.mock_mode = False # 模拟模式 def trigger(self): """模拟传感器触发""" self.state = True self.trigger_count += 1 logging.info(f"传感器触发 #{self.trigger_count}") time.sleep(self.trigger_delay) self.state = False def set_delay(self, delay): """设置触发延迟时间""" self.trigger_delay = max(0.1, min(delay, 5.0)) # 限制在0.1-5秒之间 def set_type(self, sensor_type): """设置传感器类型""" self.sensor_type = sensor_type def enable_mock(self, enable): """启用/禁用模拟模式""" self.mock_mode = enable if enable: logging.info("传感器模拟模式已启用") def mock_trigger(self): """模拟传感器触发(随机间隔)""" if self.mock_mode: interval = random.uniform(0.5, 3.0) threading.Timer(interval, self.trigger).start() # 创建虚拟传感器实例 virtual_sensor = VirtualSensor() # ====================== 传感器信号处理线程 ====================== class SensorThread(QThread): """处理传感器信号的线程""" sensor_triggered = pyqtSignal() def __init__(self, sensor): super().__init__() self.sensor = sensor self.running = True self.mock_timer = QTimer() self.mock_timer.timeout.connect(self.mock_sensor_check) def run(self): while self.running: if self.sensor.state: self.sensor_triggered.emit() # 等待传感器复位 while self.sensor.state: time.sleep(0.01) time.sleep(0.05) # 减少CPU占用 def start_mock(self, interval=1000): """启动模拟传感器触发""" self.mock_timer.start(interval) def stop_mock(self): """停止模拟传感器触发""" self.mock_timer.stop() def mock_sensor_check(self): """检查并触发模拟传感器""" if self.sensor.mock_mode: self.sensor.trigger() # ====================== 图像处理线程 ====================== class ImageProcessingThread(QThread): """图像处理线程,避免阻塞UI""" processing_complete = pyqtSignal(bool, float, np.ndarray) def __init__(self, sample_path, test_image, threshold, use_ssim): super().__init__() self.sample_path = sample_path self.test_image = test_image self.threshold = threshold self.use_ssim = use_ssim def run(self): try: # 执行检测 is_qualified, diff_ratio, marked_image = self.check_print_quality( self.sample_path, self.test_image, self.threshold, self.use_ssim ) # 发出信号 self.processing_complete.emit(is_qualified, diff_ratio, marked_image) except Exception as e: logging.exception(f"图像处理线程错误: {str(e)}") self.processing_complete.emit(None, None, None) def check_print_quality(self, sample_image_path, test_image, threshold=0.05, use_ssim=True): """ 优化的布料印花检测算法 :param sample_image_path: 合格样本图像路径 :param test_image: 测试图像 (numpy数组) :param threshold: 差异阈值 :param use_ssim: 是否使用SSIM结构相似性指标 :return: 是否合格,差异值,标记图像 """ try: # 读取样本图像 sample_img_data = np.fromfile(sample_image_path, dtype=np.uint8) sample_image = cv2.imdecode(sample_img_data, cv2.IMREAD_GRAYSCALE) if sample_image is None: logging.error(f"无法解码样本图像: {sample_image_path}") return None, None, None # 确保测试图像是灰度图 if len(test_image.shape) == 3: # 如果是彩色图像 test_image_gray = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY) else: test_image_gray = test_image.copy() # 图像配准 - 使用特征匹配解决轻微位移问题 aligned_image = self.align_images(sample_image, test_image_gray) if aligned_image is None: aligned_image = test_image_gray # 配准失败则使用原始图像 logging.warning("图像配准失败,使用原始图像") # 确保两个图像大小一致 if aligned_image.shape != sample_image.shape: aligned_image = cv2.resize(aligned_image, (sample_image.shape[1], sample_image.shape[0])) # 方法1: 极速SSIM算法 (优化版) if use_ssim: # 使用优化的SSIM计算 score = self.fast_ssim(sample_image, aligned_image) diff_ratio = 1.0 - score # 差异比例 # 计算绝对差异作为差异图 diff = cv2.absdiff(sample_image, aligned_image) _, thresholded = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY) else: # 方法2: 传统绝对差异法 diff = cv2.absdiff(sample_image, aligned_image) # 自适应阈值处理 thresholded = cv2.adaptiveThreshold( diff, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2 ) # 计算差异比例 diff_pixels = np.count_nonzero(thresholded) total_pixels = sample_image.size diff_ratio = diff_pixels / total_pixels # 形态学操作去除噪声 kernel = np.ones((3, 3), np.uint8) thresholded = cv2.morphologyEx(thresholded, cv2.MORPH_OPEN, kernel) thresholded = cv2.morphologyEx(thresholded, cv2.MORPH_CLOSE, kernel) # 多尺度缺陷检测 marked_image = self.detect_defects(aligned_image, thresholded) # 判断是否合格 is_qualified = diff_ratio <= threshold return is_qualified, diff_ratio, marked_image except Exception as e: logging.exception(f"检测过程中发生错误: {str(e)}") return None, None, None def fast_ssim(self, img1, img2): """优化的SSIM计算,提高性能""" # 图像下采样以提高速度 if img1.shape[0] > 512 or img1.shape[1] > 512: scale = 0.5 img1 = cv2.resize(img1, (0, 0), fx=scale, fy=scale) img2 = cv2.resize(img2, (0, 0), fx=scale, fy=scale) # 计算SSIM score = ssim(img1, img2, win_size=3, data_range=img1.max() - img1.min()) return max(0.0, min(1.0, score)) # 确保在0-1范围内 def align_images(self, image1, image2): """ 使用特征匹配对齐两幅图像 :param image1: 参考图像 :param image2: 待对齐图像 :return: 对齐后的图像 """ # 使用ORB检测器(比SIFT更快) orb = cv2.ORB_create() # 查找关键点和描述符 kp1, des1 = orb.detectAndCompute(image1, None) kp2, des2 = orb.detectAndCompute(image2, None) # 如果关键点不足,尝试使用SIFT if des1 is None or des2 is None or len(des1) < 4 or len(des2) < 4: sift = cv2.SIFT_create() kp1, des1 = sift.detectAndCompute(image1, None) kp2, des2 = sift.detectAndCompute(image2, None) # 如果还是没有足够的关键点,返回None if des1 is None or des2 is None or len(des1) < 4 or len(des2) < 4: return None # 使用BFMatcher进行特征匹配 bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True) matches = bf.match(des1, des2) # 至少需要4个点计算变换矩阵 if len(matches) < 4: return None # 提取匹配点坐标 src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2) dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2) # 计算变换矩阵(使用RANSAC) M, mask = cv2.findHomography(dst_pts, src_pts, cv2.RANSAC, 5.0) # 应用变换 aligned_image = cv2.warpPerspective( image2, M, (image1.shape[1], image1.shape[0]), flags=cv2.INTER_LINEAR ) return aligned_image def detect_defects(self, image, mask): """ 多尺度缺陷检测和标记 :param image: 原始图像 :param mask: 差异掩码 :return: 标记后的图像 """ # 创建彩色标记图像 marked_image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR) # 查找轮廓 contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # 设置最小缺陷尺寸(避免标记小噪点) min_defect_area = max(10, image.size * 0.0001) # 自适应最小面积 # 标记缺陷区域 defect_count = 0 for cnt in contours: area = cv2.contourArea(cnt) if area > min_defect_area: defect_count += 1 # 计算轮廓的边界框 x, y, w, h = cv2.boundingRect(cnt) # 绘制边界框 cv2.rectangle(marked_image, (x, y), (x+w, y+h), (0, 0, 255), 2) # 在缺陷中心添加文本标签 cv2.putText( marked_image, f"Defect {defect_count}: {area}px", (x, y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1 ) # 添加缺陷统计信息 cv2.putText( marked_image, f"Total Defects: {defect_count}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2 ) return marked_image # ====================== 网络配置检查 ====================== def ctypes_array_to_str(array): """将ctypes字符数组转换为Python字符串""" # 转换为字节串 byte_str = bytes(array) # 找到第一个空字节的位置 null_index = byte_str.find(b'\0') if null_index >= 0: # 截取到第一个空字节 byte_str = byte_str[:null_index] # 尝试UTF-8解码 try: return byte_str.decode('utf-8') except UnicodeDecodeError: # 尝试GBK解码(常见于中文设备) try: return byte_str.decode('gbk') except UnicodeDecodeError: # 最终回退方案 return byte_str.decode('latin1', errors='replace') def check_network_configuration(): """检查网络配置是否适合海康相机""" global discovered_cameras # 尝试使用海康SDK枚举设备 device_list = MV_CC_DEVICE_INFO_LIST() ret = MvCamera.MV_CC_EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, device_list) if ret == 0 and device_list.nDeviceNum > 0: discovered_cameras = [] for i in range(device_list.nDeviceNum): # 获取设备信息指针 device_info_ptr = device_list.pDeviceInfo[i] # 解引用指针获取实际设备信息 device_info = device_info_ptr.contents # 检查设备类型 if device_info.nTLayerType == MV_GIGE_DEVICE: # 处理GigE设备 - 正确转换IP地址 ip_int = device_info.SpecialInfo.stGigEInfo.nCurrentIp # 将32位整数分解为4个字节 ip_bytes = [ (ip_int >> 24) & 0xFF, # 最高字节 (ip_int >> 16) & 0xFF, # 次高字节 (ip_int >> 8) & 0xFF, # 次低字节 ip_int & 0xFF # 最低字节 ] ip = ".".join(map(str, ip_bytes)) # 正确解码模型名和序列号 model = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chSerialNumber) discovered_cameras.append({"ip": ip, "model": model, "serial": serial}) elif device_info.nTLayerType == MV_USB_DEVICE: # 处理USB设备 model = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chSerialNumber) discovered_cameras.append({"ip": "USB", "model": model, "serial": serial}) logging.info(f"发现 {len(discovered_cameras)} 台真实相机") return True else: # 模拟相机发现 discovered_cameras = [ {"ip": "192.168.1.101", "model": "MV-CA016-10GC", "serial": "SN123456"}, {"ip": "192.168.1.102", "model": "MV-CA020-10GC", "serial": "SN789012"} ] logging.info(f"使用模拟相机数据: {len(discovered_cameras)} 台网络相机") return bool(discovered_cameras) # ====================== 主窗口类 ====================== class MainWindow(QMainWindow): def __init__(self, parent=None): super(MainWindow, self).__init__(parent) # 初始化顺序保持不变 self.setup_ui() self.setup_variables() self.setup_connections() self.load_settings() def setup_variables(self): """初始化变量""" self.isOpen = False # 相机是否打开 self.isGrabbing = False # 相机是否在取流 self.cam = None # 相机对象 self.current_frame = None # 当前帧 self.settings = None # 用于存储设置 self.image_thread = None # 图像处理线程 self.progress = None # 进度对话框 # 创建传感器线程 self.sensor_thread = SensorThread(virtual_sensor) def setup_ui(self): """初始化用户界面""" # 主窗口设置 self.setWindowTitle("布料印花检测系统") self.setGeometry(100, 100, 1200, 800) # 创建主窗口部件和布局 central_widget = QWidget() self.setCentralWidget(central_widget) main_layout = QVBoxLayout(central_widget) # ===== 顶部工具栏 ===== toolbar = QToolBar("主工具栏") self.addToolBar(toolbar) # 创建工具栏按钮 self.bnEnum = QPushButton("枚举设备") self.bnOpen = QPushButton("打开设备") self.bnClose = QPushButton("关闭设备") self.bnStart = QPushButton("开始取流") self.bnStop = QPushButton("停止取流") self.bnSaveImage = QPushButton("保存图像") # 添加到工具栏 toolbar.addWidget(self.bnEnum) toolbar.addWidget(self.bnOpen) toolbar.addWidget(self.bnClose) toolbar.addWidget(self.bnStart) toolbar.addWidget(self.bnStop) toolbar.addWidget(self.bnSaveImage) # ===== 设备选择区域 ===== device_group = QGroupBox("设备控制") device_layout = QHBoxLayout(device_group) self.ComboDevices = QComboBox() self.ComboDevices.setMinimumWidth(300) device_layout.addWidget(QLabel("选择设备:")) device_layout.addWidget(self.ComboDevices) # 触发模式 trigger_group = QGroupBox("采集模式") trigger_layout = QVBoxLayout(trigger_group) self.radioContinueMode = QRadioButton("连续采集模式") self.radioTriggerMode = QRadioButton("触发采集模式") self.radioContinueMode.setChecked(True) trigger_layout.addWidget(self.radioContinueMode) trigger_layout.addWidget(self.radioTriggerMode) # 软件触发按钮 self.bnSoftwareTrigger = QPushButton("软触发") self.bnSoftwareTrigger.setEnabled(False) device_layout.addWidget(trigger_group) device_layout.addWidget(self.bnSoftwareTrigger) main_layout.addWidget(device_group) # ===== 图像显示区域 ===== image_group = QGroupBox("图像显示") image_layout = QHBoxLayout(image_group) self.image_label = QLabel() self.image_label.setAlignment(Qt.AlignCenter) self.image_label.setStyleSheet("background-color: black;") image_layout.addWidget(self.image_label) # 参数控制 param_group = QGroupBox("相机参数") param_layout = QGridLayout(param_group) param_layout.addWidget(QLabel("曝光时间(μs):"), 0, 0) self.edtExposureTime = QLineEdit("10000") param_layout.addWidget(self.edtExposureTime, 0, 1) param_layout.addWidget(QLabel("增益(dB):"), 1, 0) self.edtGain = QLineEdit("0") param_layout.addWidget(self.edtGain, 1, 1) param_layout.addWidget(QLabel("帧率(fps):"), 2, 0) self.edtFrameRate = QLineEdit("30") param_layout.addWidget(self.edtFrameRate, 2, 1) self.bnGetParam = QPushButton("获取参数") self.bnSetParam = QPushButton("设置参数") param_layout.addWidget(self.bnGetParam, 0, 2) param_layout.addWidget(self.bnSetParam, 1, 2) image_layout.addWidget(param_group) main_layout.addWidget(image_group) # ===== 检测控制区域 ===== detection_group = QGroupBox("检测控制") detection_layout = QGridLayout(detection_group) # 样本控制 self.bnSaveSample = QPushButton("保存样本") self.bnPreviewSample = QPushButton("预览样本") self.lblSamplePath = QLabel("当前样本: 未设置样本") detection_layout.addWidget(QLabel("样本控制:"), 0, 0) detection_layout.addWidget(self.bnSaveSample, 0, 1) detection_layout.addWidget(self.bnPreviewSample, 0, 2) detection_layout.addWidget(self.lblSamplePath, 0, 3) # 检测参数 detection_layout.addWidget(QLabel("差异阈值:"), 1, 0) self.sliderDiffThreshold = QSlider(Qt.Horizontal) self.sliderDiffThreshold.setRange(1, 20) # 1% - 20% self.sliderDiffThreshold.setValue(5) detection_layout.addWidget(self.sliderDiffThreshold, 1, 1, 1, 2) self.lblDiffValue = QLabel("5%") detection_layout.addWidget(self.lblDiffValue, 1, 3) # 检测方法 self.cbUseSSIM = QCheckBox("使用SSIM算法") self.cbUseSSIM.setChecked(True) detection_layout.addWidget(self.cbUseSSIM, 2, 0) # 检测按钮 self.bnCheckPrint = QPushButton("检测布料印花") detection_layout.addWidget(self.bnCheckPrint, 2, 1, 1, 3) # 检测结果显示 self.lblCurrentDiff = QLabel("当前差异度: --") self.lblDiffStatus = QLabel("状态: --") self.lblDiffStatus.setStyleSheet("font-weight: bold;") detection_layout.addWidget(self.lblCurrentDiff, 3, 0, 1, 2) detection_layout.addWidget(self.lblDiffStatus, 3, 2, 1, 2) main_layout.addWidget(detection_group) # ===== 传感器控制区域 ===== sensor_group = QGroupBox("传感器控制") sensor_layout = QGridLayout(sensor_group) # 传感器类型 sensor_layout.addWidget(QLabel("传感器类型:"), 0, 0) self.comboSensorType = QComboBox() self.comboSensorType.addItems(["光电传感器", "接近传感器", "编码器"]) sensor_layout.addWidget(self.comboSensorType, 0, 1) # 传感器延迟 sensor_layout.addWidget(QLabel("触发延迟(秒):"), 1, 0) self.edtSensorDelay = QLineEdit("0.5") sensor_layout.addWidget(self.edtSensorDelay, 1, 1) self.bnSetSensorDelay = QPushButton("设置延迟") sensor_layout.addWidget(self.bnSetSensorDelay, 1, 2) # 传感器模拟 self.cbMockSensor = QCheckBox("启用传感器模拟") self.cbMockSensor.setChecked(False) sensor_layout.addWidget(self.cbMockSensor, 2, 0) self.spinMockInterval = QSpinBox() self.spinMockInterval.setRange(500, 5000) self.spinMockInterval.setValue(1000) self.spinMockInterval.setSuffix("ms") sensor_layout.addWidget(self.spinMockInterval, 2, 1) self.bnStartMock = QPushButton("开始模拟") self.bnStartMock.setEnabled(False) self.bnStopMock = QPushButton("停止模拟") self.bnStopMock.setEnabled(False) sensor_layout.addWidget(self.bnStartMock, 2, 2) sensor_layout.addWidget(self.bnStopMock, 2, 3) # 手动触发 self.bnManualTrigger = QPushButton("手动触发传感器") sensor_layout.addWidget(self.bnManualTrigger, 3, 0, 1, 4) main_layout.addWidget(sensor_group) # ===== 历史记录区域 ===== history_group = QGroupBox("检测历史") history_layout = QVBoxLayout(history_group) self.cbHistory = QComboBox() history_layout.addWidget(self.cbHistory) main_layout.addWidget(history_group) # ===== 状态栏 ===== self.statusBar = QStatusBar() self.setStatusBar(self.statusBar) self.statusBar.showMessage("就绪", 5000) # 帧状态标签 self.lblFrameStatus = QLabel("帧状态: --") self.statusBar.addPermanentWidget(self.lblFrameStatus) # 启用控件 self.enable_controls() # ====================== 相机操作方法 ====================== def enum_devices(self): """枚举设备""" self.ComboDevices.clear() # 多次尝试枚举 max_retries = 3 device_list = None for i in range(max_retries): device_list = MV_CC_DEVICE_INFO_LIST() ret = MvCamera.MV_CC_EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, device_list) if ret == 0 and device_list.nDeviceNum > 0: break else: logging.warning(f"枚举设备失败 (尝试 {i+1}/{max_retries}), 错误码: {ret}") time.sleep(1) # 等待1秒后重试 if device_list is None or device_list.nDeviceNum == 0: self.statusBar.showMessage("未找到任何设备", 5000) return # 将设备添加到下拉框 for i in range(device_list.nDeviceNum): device_info_ptr = device_list.pDeviceInfo[i] device_info = device_info_ptr.contents # 检查设备类型并添加 if device_info.nTLayerType == MV_GIGE_DEVICE: ip_int = device_info.SpecialInfo.stGigEInfo.nCurrentIp ip_bytes = [(ip_int >> 24) & 0xFF, (ip_int >> 16) & 0xFF, (ip_int >> 8) & 0xFF, ip_int & 0xFF] ip = ".".join(map(str, ip_bytes)) model = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chSerialNumber) device_str = f"[{i}]GigE: {model} ({ip})" self.ComboDevices.addItem(device_str, i) elif device_info.nTLayerType == MV_USB_DEVICE: model = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chSerialNumber) device_str = f"[{i}]USB: {model} (SN:{serial})" self.ComboDevices.addItem(device_str, i) self.statusBar.showMessage(f"已发现 {device_list.nDeviceNum} 个设备", 5000) # 自动选择第一个设备 if device_list.nDeviceNum > 0: self.ComboDevices.setCurrentIndex(0) def open_device(self): """打开真实海康设备""" if self.isOpen: QMessageBox.warning(self, "错误", "设备已打开", QMessageBox.Ok) return if self.ComboDevices.currentIndex() < 0: QMessageBox.warning(self, "错误", "请先选择设备", QMessageBox.Ok) return # 重新枚举设备以确保列表最新 device_list = MV_CC_DEVICE_INFO_LIST() ret = MvCamera.MV_CC_EnumDevices(MV_GIGE_DEVICE | MV_USB_DEVICE, device_list) if ret != 0: QMessageBox.warning(self, "错误", f"枚举设备失败! 错误码: {ret}", QMessageBox.Ok) return device_index = self.ComboDevices.currentIndex() if device_index >= device_list.nDeviceNum: QMessageBox.warning(self, "错误", "选择的设备索引无效", QMessageBox.Ok) return # 获取设备信息 device_info = device_list.pDeviceInfo[device_index] try: # 创建相机实例 self.cam = MvCamera() # 创建句柄 ret = self.cam.MV_CC_CreateHandle(device_info) if ret != 0: error_msg = self.get_camera_error(ret) QMessageBox.critical( self, "创建句柄失败", f"错误码: {ret}\n{error_msg}\n\n" "可能原因:\n" "1. 设备未正确连接\n" "2. 设备被其他程序占用\n" "3. 驱动程序未正确安装", QMessageBox.Ok ) self.cam = None return # 打开设备 ret = self.cam.MV_CC_OpenDevice(MV_ACCESS_Exclusive, 0) if ret != 0: error_msg = self.get_camera_error(ret) QMessageBox.critical( self, "打开设备失败", f"错误码: {ret}\n{error_msg}\n\n" "解决方案:\n" "1. 检查设备电源和连接\n" "2. 重启设备和计算机\n" "3. 确保设备未被其他程序占用", QMessageBox.Ok ) self.cam.MV_CC_DestroyHandle() self.cam = None return # 注册图像回调 - 修复语法错误的位置 ret = self.cam.MV_CC_RegisterImageCallBack(self.image_callback, None) if ret != 0: logging.warning(f"注册图像回调失败! 错误码: {ret}") # 回调注册失败不是致命错误,继续执行 # 设置基本参数 self.set_reasonable_default_params() self.isOpen = True self.enable_controls() self.statusBar.showMessage("设备已成功打开", 5000) except Exception as e: logging.exception(f"打开设备时发生异常: {str(e)}") QMessageBox.critical( self, "严重错误", f"打开设备时发生异常:\n{str(e)}\n\n" "请检查:\n" "1. SDK是否正确安装\n" "2. Python环境是否匹配SDK架构(32/64位)\n" "3. 依赖库是否完整", QMessageBox.Ok ) if self.cam: self.cam.MV_CC_DestroyHandle() self.cam = None def log_device_info(self, device_info): """记录设备详细信息用于调试""" if device_info.nTLayerType == MV_GIGE_DEVICE: ip_int = device_info.SpecialInfo.stGigEInfo.nCurrentIp ip_bytes = [ (ip_int >> 24) & 0xFF, (ip_int >> 16) & 0xFF, (ip_int >> 8) & 0xFF, ip_int & 0xFF ] ip = ".".join(map(str, ip_bytes)) model = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stGigEInfo.chSerialNumber) logging.info(f"尝试打开设备: GigE相机 - 型号: {model}, IP: {ip}, SN: {serial}") elif device_info.nTLayerType == MV_USB_DEVICE: model = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chModelName) serial = ctypes_array_to_str(device_info.SpecialInfo.stUsb3VInfo.chSerialNumber) logging.info(f"尝试打开设备: USB相机 - 型号: {model}, SN: {serial}") def get_camera_error(self, error_code): """获取相机错误信息描述(增强版)""" error_map = { 0x80000000: "错误或无效参数", 0x80000001: "不支持的功能", 0x80000002: "相机未初始化", 0x80000003: "相机资源不足", 0x80000004: "设备未找到 - 请检查物理连接和IP配置", 0x80000005: "设备已被占用 - 关闭其他使用相机的程序", 0x80000006: "参数超出范围", 0x80000007: "参数类型错误", 0x80000008: "参数值错误", 0x80000009: "超时 - 操作未在规定时间内完成", 0x8000000A: "设备已打开", 0x8000000B: "设备未打开", 0x8000000C: "设备未连接 - 检查物理连接", 0x8000000D: "资源锁定 - 资源被其他操作占用", 0x8000000E: "无数据 - 没有可用的图像数据", 0x8000000F: "内存不足 - 系统内存不足", 0x80000010: "不支持的操作系统", 0x80000011: "内部错误 - SDK内部错误", 0x80000012: "文件访问错误", 0x80000013: "文件格式错误", 0x80000014: "访问被拒绝 - 权限不足", 0x80000015: "缓冲区太小 - 提供的缓冲区不足以容纳数据", 0x80000016: "无效句柄", 0x80000017: "无效ID", 0x80000018: "无效访问权限", 0x80000019: "无效IP配置 - 检查相机IP设置", 0x8000001A: "无效注册表类型", 0x8000001B: "无效注册表大小", 0x8000001C: "无效属性", 0x8000001D: "设备已被其他程序占用 - 关闭其他相机应用程序", } def set_reasonable_default_params(self): """设置合理的默认参数""" if not self.isOpen or self.cam is None: return try: # 设置合理的曝光时间 ret = self.cam.MV_CC_SetFloatValue("ExposureTime", 10000) if ret == 0: self.edtExposureTime.setText("10000") # 设置合理的增益 ret = self.cam.MV_CC_SetFloatValue("Gain", 0) if ret == 0: self.edtGain.setText("0") # 设置合理的帧率 ret = self.cam.MV_CC_SetFloatValue("AcquisitionFrameRate", 30) if ret == 0: self.edtFrameRate.setText("30") # 设置像素格式为Mono8(灰度) ret = self.cam.MV_CC_SetEnumValue("PixelFormat", PixelType_Gvsp_Mono8) if ret != 0: logging.warning(f"设置像素格式失败! 错误码: {ret}") # 设置自动白平衡(如果是彩色相机) ret = self.cam.MV_CC_SetEnumValue("BalanceWhiteAuto", 1) # 1 = Continuous if ret != 0: logging.debug("此相机可能不支持自动白平衡") # 设置自动曝光 ret = self.cam.MV_CC_SetEnumValue("ExposureAuto", 2) # 2 = Continuous if ret != 0: logging.debug("此相机可能不支持自动曝光") self.statusBar.showMessage("已设置默认参数", 2000) except Exception as e: logging.error(f"设置默认参数时出错: {str(e)}") def close_device(self): """关闭真实设备""" if not self.isOpen or self.cam is None: return # 停止取流 if self.isGrabbing: self.stop_grabbing() # 关闭设备 ret = self.cam.MV_CC_CloseDevice() if ret != 0: logging.error(f"关闭设备失败! 错误码: {ret}") # 销毁句柄 self.cam.MV_CC_DestroyHandle() self.cam = None self.isOpen = False self.isGrabbing = False self.enable_controls() self.statusBar.showMessage("设备已关闭", 3000) def start_grabbing(self): """开始真实取流""" if not self.isOpen: QMessageBox.warning(self, "错误", "请先打开设备", QMessageBox.Ok) return # 开始取流 ret = self.cam.MV_CC_StartGrabbing() if ret != 0: QMessageBox.warning(self, "错误", f"开始取流失败! 错误码: {ret}", QMessageBox.Ok) return self.isGrabbing = True self.enable_controls() self.statusBar.showMessage("已开始取流", 3000) def stop_grabbing(self): """停止真实取流""" if not self.isGrabbing or self.cam is None: return # 停止取流 ret = self.cam.MV_CC_StopGrabbing() if ret != 0: logging.error(f"停止取流失败! 错误码: {ret}") self.isGrabbing = False self.enable_controls() self.statusBar.showMessage("已停止取流", 3000) def set_continue_mode(self): """设置连续采集模式""" if self.isOpen: # 实际设置相机为连续模式 if self.cam: ret = self.cam.MV_CC_SetEnumValue("TriggerMode", 0) if ret == 0: self.statusBar.showMessage("已设置为连续采集模式", 3000) else: self.statusBar.showMessage(f"设置连续模式失败! 错误码: {ret}", 3000) self.bnSoftwareTrigger.setEnabled(False) def set_software_trigger_mode(self): """设置触发采集模式""" if self.isOpen: # 实际设置相机为软件触发模式 if self.cam: ret = self.cam.MV_CC_SetEnumValue("TriggerMode", 1) # 1 = On if ret == 0: self.statusBar.showMessage("已设置为触发采集模式", 3000) else: self.statusBar.showMessage(f"设置触发模式失败! 错误码: {ret}", 3000) self.bnSoftwareTrigger.setEnabled(self.isGrabbing) def trigger_once(self): """执行软触发""" if self.isOpen and self.isGrabbing and self.cam: # 执行软触发 ret = self.cam.MV_CC_SetCommandValue("TriggerSoftware") if ret == 0: self.statusBar.showMessage("已执行软触发", 3000) else: self.statusBar.showMessage(f"软触发失败! 错误码: {ret}", 3000) def get_param(self): """获取相机参数""" if not self.isOpen or self.cam is None: QMessageBox.warning(self, "错误", "设备未打开", QMessageBox.Ok) return try: # 获取曝光时间 exposure = ctypes.c_float() ret = self.cam.MV_CC_GetFloatValue("ExposureTime", exposure) if ret == 0: self.edtExposureTime.setText(f"{exposure.value:.2f}") # 获取增益 gain = ctypes.c_float() ret = self.cam.MV_CC_GetFloatValue("Gain", gain) if ret == 0: self.edtGain.setText(f"{gain.value:.2f}") # 获取帧率 frame_rate = ctypes.c_float() ret = self.cam.MV_CC_GetFloatValue("AcquisitionFrameRate", frame_rate) if ret == 0: self.edtFrameRate.setText(f"{frame_rate.value:.2f}") self.statusBar.showMessage("已获取相机参数", 3000) except Exception as e: logging.error(f"获取参数错误: {str(e)}") self.statusBar.showMessage("获取参数失败", 3000) def set_param(self): """设置相机参数""" if not self.isOpen or self.cam is None: QMessageBox.warning(self, "错误", "设备未打开", QMessageBox.Ok) return try: exposure = float(self.edtExposureTime.text()) gain = float(self.edtGain.text()) frame_rate = float(self.edtFrameRate.text()) # 验证参数范围 if not (5000 <= exposure <= 20000): raise ValueError("曝光时间应在5000-20000μs之间") if not (0 <= gain <= 20): raise ValueError("增益应在0-20dB之间") if not (10 <= frame_rate <= 60): raise ValueError("帧率应在10-60fps之间") # 设置曝光时间 ret = self.cam.MV_CC_SetFloatValue("ExposureTime", exposure) if ret != 0: logging.error(f"设置曝光失败! 错误码: {ret}") # 设置增益 ret = self.cam.MV_CC_SetFloatValue("Gain", gain) if ret != 0: logging.error(f"设置增益失败! 错误码: {ret}") # 设置帧率 ret = self.cam.MV_CC_SetFloatValue("AcquisitionFrameRate", frame_rate) if ret != 0: logging.error(f"设置帧率失败! 错误码: {ret}") self.statusBar.showMessage(f"已设置参数: 曝光={exposure}μs, 增益={gain}dB, 帧率={frame_rate}fps", 3000) except ValueError as e: QMessageBox.warning(self, "输入错误", str(e), QMessageBox.Ok) def save_image_dialog(self): """保存图像对话框""" if not self.isGrabbing: QMessageBox.warning(self, "错误", "请先开始取流", QMessageBox.Ok) return file_path, _ = QFileDialog.getSaveFileName( self, "保存图像", os.path.join(os.getcwd(), "capture.bmp"), "BMP Files (*.bmp);;All Files (*)" ) if file_path: # 在实际应用中这里会保存真实图像 # 需要从相机获取当前帧并保存 self.statusBar.showMessage(f"图像已保存至: {file_path}", 5000) def enable_controls(self): """设置控件状态""" # 相机控制 self.bnOpen.setEnabled(not self.isOpen) self.bnClose.setEnabled(self.isOpen) self.bnStart.setEnabled(self.isOpen and not self.isGrabbing) self.bnStop.setEnabled(self.isOpen and self.isGrabbing) self.bnSaveImage.setEnabled(self.isGrabbing) self.bnSoftwareTrigger.setEnabled(self.isGrabbing and self.radioTriggerMode.isChecked()) # 检测控制 self.bnCheckPrint.setEnabled(self.isGrabbing and bool(current_sample_path)) self.bnSaveSample.setEnabled(self.isGrabbing) self.bnPreviewSample.setEnabled(bool(current_sample_path)) # 参数控制 self.bnGetParam.setEnabled(self.isOpen) self.bnSetParam.setEnabled(self.isOpen) # ====================== 检测相关函数 ====================== def save_sample_image(self): """保存标准样本""" if not self.isGrabbing: QMessageBox.warning(self, "错误", "请先开始取流", QMessageBox.Ok) return file_path, _ = QFileDialog.getSaveFileName( self, "保存标准样本", os.path.join(os.getcwd(), "sample.bmp"), "BMP Files (*.bmp);;All Files (*)" ) if file_path: global current_sample_path current_sample_path = file_path # 保存当前帧作为样本 if hasattr(self, 'current_frame') and self.current_frame is not None: cv2.imwrite(file_path, self.current_frame) self.statusBar.showMessage(f"标准样本已保存: {file_path}", 5000) else: self.statusBar.showMessage("无法保存样本: 无有效图像", 5000) self.update_sample_display() self.save_settings() def preview_sample(self): """预览样本""" global current_sample_path if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(self, "错误", "请先设置有效的标准样本图像", QMessageBox.Ok) return # 显示样本图像 sample_image = cv2.imread(current_sample_path) if sample_image is not None: # 转换为QPixmap并显示 if len(sample_image.shape) == 2: h, w = sample_image.shape bytes_per_line = w q_img = QImage(sample_image.data, w, h, bytes_per_line, QImage.Format_Grayscale8) else: h, w, ch = sample_image.shape bytes_per_line = ch * w q_img = QImage(sample_image.data, w, h, bytes_per_line, QImage.Format_RGB888) pixmap = QPixmap.fromImage(q_img) self.lblImageDisplay.setPixmap(pixmap.scaled( self.lblImageDisplay.width(), self.lblImageDisplay.height(), Qt.KeepAspectRatio )) self.statusBar.showMessage("正在预览样本图像", 3000) else: QMessageBox.warning(self, "错误", "无法加载样本图像", QMessageBox.Ok) def update_diff_threshold(self, value): """更新差异度阈值显示""" self.lblDiffValue.setText(f"{value}%") def check_print(self): """执行检测""" global is_processing, current_sample_path, detection_history if is_processing: return is_processing = True # 检查条件 if not self.isGrabbing: QMessageBox.warning(self, "错误", "请先开始取流", QMessageBox.Ok) is_processing = False return if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(self, "错误", "请先设置有效的标准样本图像", QMessageBox.Ok) is_processing = False return # 获取当前帧 if not hasattr(self, 'current_frame') or self.current_frame is None: QMessageBox.warning(self, "错误", "无有效图像可用于检测", QMessageBox.Ok) is_processing = False return test_image = self.current_frame # 显示进度对话框 self.progress = QProgressDialog("正在检测布料质量...", "取消", 0, 100, self) self.progress.setWindowModality(Qt.WindowModal) self.progress.setValue(30) # 获取参数 diff_threshold = self.sliderDiffThreshold.value() / 100.0 use_ssim = self.cbUseSSIM.isChecked() # 启动图像处理线程 self.image_thread = ImageProcessingThread( current_sample_path, test_image, diff_threshold, use_ssim ) self.image_thread.processing_complete.connect(self.handle_processing_result) self.image_thread.start() # ====================== 传感器相关函数 ====================== def sensor_triggered(self): """处理传感器触发事件""" if not self.cbEnableSensor.isChecked(): logging.debug("传感器触发但未启用,忽略") return if not self.isGrabbing: logging.warning("传感器触发时相机未就绪") self.statusBar.showMessage("传感器触发但相机未就绪", 3000) return # 确保相机已连接 if not self.isOpen or self.cam is None: logging.warning("传感器触发时相机未连接") self.statusBar.showMessage("传感器触发但相机未连接", 3000) return # 设置触发模式(如果是触发模式) if self.radioTriggerMode.isChecked(): self.set_software_trigger_mode() # 在实际系统中,这里会确保布料移动到正确位置 logging.info("传感器触发 - 开始检测") self.statusBar.showMessage("传感器触发 - 开始检测", 3000) # 执行检测 self.check_print() def manual_sensor_trigger(self): """手动触发传感器""" virtual_sensor.trigger() self.statusBar.showMessage("手动触发传感器", 3000) def set_sensor_delay(self): """设置传感器触发延迟""" try: delay = float(self.edtSensorDelay.text()) virtual_sensor.set_delay(delay) self.save_settings() self.statusBar.showMessage(f"传感器延迟已设置为 {delay} 秒", 3000) except ValueError: QMessageBox.warning(self, "输入错误", "请输入有效的数字(0.1-5.0)", QMessageBox.Ok) def set_sensor_type(self, sensor_type): """设置传感器类型""" virtual_sensor.set_type(sensor_type) self.save_settings() self.statusBar.showMessage(f"传感器类型已设置为 {sensor_type}", 3000) def enable_sensor_mock(self, state): """启用/禁用传感器模拟""" virtual_sensor.enable_mock(state == Qt.Checked) self.bnStartMock.setEnabled(state == Qt.Checked) self.bnStopMock.setEnabled(state == Qt.Checked) self.spinMockInterval.setEnabled(state == Qt.Checked) def start_mock_sensor(self): """启动模拟传感器""" interval = self.spinMockInterval.value() self.sensor_thread.start_mock(interval) self.statusBar.showMessage(f"传感器模拟已启动,间隔 {interval}ms", 3000) def stop_mock_sensor(self): """停止模拟传感器""" self.sensor_thread.stop_mock() self.statusBar.showMessage("传感器模拟已停止", 3000) # ====================== 辅助方法 ====================== def load_settings(self): """加载应用程序设置""" self.settings = QSettings("ClothInspection", "CameraApp") # 加载样本路径 sample_path = self.settings.value("current_sample_path", "") if sample_path: global current_sample_path current_sample_path = sample_path self.update_sample_display() # 加载检测参数 diff_threshold = self.settings.value("diff_threshold", 5, type=int) self.sliderDiffThreshold.setValue(diff_threshold) self.update_diff_threshold(diff_threshold) # 加载传感器设置 sensor_delay = self.settings.value("sensor_delay", 0.5, type=float) self.edtSensorDelay.setText(str(sensor_delay)) virtual_sensor.set_delay(sensor_delay) sensor_type = self.settings.value("sensor_type", "光电传感器") self.comboSensorType.setCurrentText(sensor_type) virtual_sensor.set_type(sensor_type) def save_settings(self): """保存应用程序设置""" # 保存样本路径 self.settings.setValue("current_sample_path", current_sample_path) # 保存检测参数 self.settings.setValue("diff_threshold", self.sliderDiffThreshold.value()) # 保存传感器设置 self.settings.setValue("sensor_delay", float(self.edtSensorDelay.text())) self.settings.setValue("sensor_type", self.comboSensorType.currentText()) def update_sample_display(self): """更新样本路径显示""" global current_sample_path if current_sample_path: self.lblSamplePath.setText(f"当前样本: {os.path.basename(current_sample_path)}") self.lblSamplePath.setToolTip(current_sample_path) else: self.lblSamplePath.setText("当前样本: 未设置样本") def update_history_display(self): """更新历史记录显示""" global detection_history self.cbHistory.clear() for i, result in enumerate(detection_history[-10:]): # 显示最近10条记录 timestamp = result['timestamp'].strftime("%H:%M:%S") status = "合格" if result['qualified'] else "不合格" ratio = f"{result['diff_ratio']*100:.2f}%" self.cbHistory.addItem(f"[{timestamp}] {status} - 差异: {ratio}") def handle_processing_result(self, is_qualified, diff_ratio, marked_image): """处理检测结果""" global is_processing, detection_history self.progress.setValue(100) self.progress.close() if is_qualified is None: QMessageBox.critical(self, "检测错误", "检测过程中发生错误", QMessageBox.Ok) is_processing = False return # 更新UI显示 self.update_diff_display(diff_ratio, is_qualified) # 显示结果 result_text = f"布料印花 {'合格' if is_qualified else '不合格'}\n差异度: {diff_ratio*100:.2f}%\n阈值: {self.sliderDiffThreshold.value()}%" QMessageBox.information(self, "检测结果", result_text, QMessageBox.Ok) # 显示标记图像 if marked_image is not None: # 转换为QPixmap并显示 if len(marked_image.shape) == 2: h, w = marked_image.shape bytes_per_line = w q_img = QImage(marked_image.data, w, h, bytes_per_line, QImage.Format_Grayscale8) else: h, w, ch = marked_image.shape bytes_per_line = ch * w q_img = QImage(marked_image.data, w, h, bytes_per_line, QImage.Format_RGB888) pixmap = QPixmap.fromImage(q_img) self.lblImageDisplay.setPixmap(pixmap.scaled( self.lblImageDisplay.width(), self.lblImageDisplay.height(), Qt.KeepAspectRatio )) self.statusBar.showMessage("已显示缺陷标记图像", 5000) # 记录检测结果 detection_result = { 'timestamp': datetime.datetime.now(), 'qualified': is_qualified, 'diff_ratio': diff_ratio, 'threshold': self.sliderDiffThreshold.value() } detection_history.append(detection_result) self.update_history_display() is_processing = False def update_diff_display(self, diff_ratio, is_qualified): """更新差异度显示""" self.lblCurrentDiff.setText(f"当前差异度: {diff_ratio*100:.2f}%") if is_qualified: self.lblDiffStatus.setText("状态: 合格") self.lblDiffStatus.setStyleSheet("color: green;") else: self.lblDiffStatus.setText("状态: 不合格") self.lblDiffStatus.setStyleSheet("color: red;") def image_callback(self, pData, pFrameInfo, pUser): """图像数据回调函数""" try: if pFrameInfo.contents.nFrameLen <= 0: return # 将原始数据转换为numpy数组 data = (ctypes.c_ubyte * pFrameInfo.contents.nFrameLen).from_address(pData) image = np.frombuffer(data, dtype=np.uint8) # 根据帧信息解码图像 if pFrameInfo.contents.enPixelType == PixelType_Gvsp_Mono8: # 单通道灰度图 image = image.reshape(pFrameInfo.contents.nHeight, pFrameInfo.contents.nWidth) elif pFrameInfo.contents.enPixelType == PixelType_Gvsp_RGB8_Packed: # RGB24 image = image.reshape(pFrameInfo.contents.nHeight, pFrameInfo.contents.nWidth, 3) image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) else: # 其他格式需要转换 image = cv2.imdecode(image, cv2.IMREAD_UNCHANGED) # 显示图像 self.display_real_image(image) # 保存当前帧用于检测 self.current_frame = image.copy() # 更新帧状态 self.lblFrameStatus.setText(f"帧状态: {pFrameInfo.contents.nWidth}x{pFrameInfo.contents.nHeight}") except Exception as e: logging.exception(f"图像回调错误: {str(e)}") def display_real_image(self, image): """显示真实相机图像""" if len(image.shape) == 2: # 灰度图 h, w = image.shape bytes_per_line = w q_img = QImage(image.data, w, h, bytes_per_line, QImage.Format_Grayscale8) elif len(image.shape) == 3: # 彩色图 h, w, ch = image.shape bytes_per_line = ch * w q_img = QImage(image.data, w, h, bytes_per_line, QImage.Format_RGB888) else: return pixmap = QPixmap.fromImage(q_img) self.lblImageDisplay.setPixmap(pixmap.scaled( self.lblImageDisplay.width(), self.lblImageDisplay.height(), Qt.KeepAspectRatio )) # ====================== 连接方法 ====================== def setup_connections(self): """连接信号和槽""" # 相机控制 self.bnEnum.clicked.connect(self.enum_devices) self.bnOpen.clicked.connect(self.open_device) self.bnClose.clicked.connect(self.close_device) self.bnStart.clicked.connect(self.start_grabbing) self.bnStop.clicked.connect(self.stop_grabbing) self.bnSaveImage.clicked.connect(self.save_image_dialog) # 参数控制 self.bnGetParam.clicked.connect(self.get_param) self.bnSetParam.clicked.connect(self.set_param) # 触发模式 self.radioContinueMode.clicked.connect(self.set_continue_mode) self.radioTriggerMode.clicked.connect(self.set_software_trigger_mode) self.bnSoftwareTrigger.clicked.connect(self.trigger_once) # 检测控制 self.bnCheckPrint.clicked.connect(self.check_print) self.bnSaveSample.clicked.connect(self.save_sample_image) self.bnPreviewSample.clicked.connect(self.preview_sample) self.sliderDiffThreshold.valueChanged.connect(self.update_diff_threshold) # 传感器控制 self.bnSetSensorDelay.clicked.connect(self.set_sensor_delay) self.bnManualTrigger.clicked.connect(self.manual_sensor_trigger) self.comboSensorType.currentTextChanged.connect(self.set_sensor_type) self.cbMockSensor.stateChanged.connect(self.enable_sensor_mock) self.bnStartMock.clicked.connect(self.start_mock_sensor) self.bnStopMock.clicked.connect(self.stop_mock_sensor) # 连接传感器信号 self.sensor_thread.sensor_triggered.connect(self.sensor_triggered) def closeEvent(self, event): """关闭应用程序时执行清理""" self.save_settings() # 停止传感器线程 if self.sensor_thread.isRunning(): self.sensor_thread.stop_mock() self.sensor_thread.quit() self.sensor_thread.wait(2000) # 关闭相机 if self.cam: self.close_device() event.accept() # ====================== 主程序入口 ====================== if __name__ == "__main__": # 首先检查网络配置 if not check_network_configuration(): # 创建临时QApplication用于显示错误消息 app_temp = QApplication(sys.argv) error_msg = "网络配置检查失败,无法检测到海康相机。请检查:\n\n" error_msg += "1. 相机是否已正确连接并上电\n" error_msg += "2. 计算机和相机是否在同一子网\n" error_msg += "3. 防火墙是否阻止了相机通信\n" error_msg += "4. 网线连接是否正常\n\n" # 添加发现的相机信息(如果有) if discovered_cameras: error_msg += "发现的相机:\n" for cam in discovered_cameras: error_msg += f"- {cam['model']} (IP: {cam['ip']}, SN: {cam['serial']})\n" QMessageBox.critical(None, "网络错误", error_msg, QMessageBox.Ok) sys.exit(1) # 如果网络检查通过,继续运行主应用 app = QApplication(sys.argv) # 设置应用程序样式 app.setStyle("Fusion") # 创建主窗口 main_window = MainWindow() # 启动传感器线程 main_window.sensor_thread.start() # 显示主窗口 main_window.show() # 执行应用程序 sys.exit(app.exec_()) 这个程序出现下面的问题 Traceback (most recent call last): File "d:\海康\MVS\Development\Samples\Python\MvImport\2.py", line 1532, in <module> main_window = MainWindow() File "d:\海康\MVS\Development\Samples\Python\MvImport\2.py", line 449, in __init__ self.setup_ui() File "d:\海康\MVS\Development\Samples\Python\MvImport\2.py", line 665, in setup_ui self.enable_controls() File "d:\海康\MVS\Development\Samples\Python\MvImport\2.py", line 1088, in enable_controls self.bnOpen.setEnabled(not self.isOpen) AttributeError: 'MainWindow' object has no attribute 'isOpen'

import sys import serial from PyQt5 import QtWidgets import pyqtgraph as pg import numpy as np from threading import Thread, Event class RealTimePlot(QtWidgets.QWidget): def __init__(self): super().__init__() # 串口配置 self.ser = serial.Serial('COM10', 115200, timeout=1) self.data_buffer = np.zeros(500) self.ptr = 0 # 界面布局 self.setWindowTitle('ADC实时监测系统') self.layout = QtWidgets.QVBoxLayout() # 绘图区域 self.plot_widget = pg.PlotWidget() self.curve = self.plot_widget.plot(pen='y') self.plot_widget.setLabel('left', 'ADC值', '') self.plot_widget.setLabel('bottom', '时间', 's') self.plot_widget.setYRange(0, 4095) self.layout.addWidget(self.plot_widget) # 控制面板 control_panel = QtWidgets.QHBoxLayout() self.start_btn = QtWidgets.QPushButton('开始') self.stop_btn = QtWidgets.QPushButton('停止') self.save_btn = QtWidgets.QPushButton('保存数据') control_panel.addWidget(self.start_btn) control_panel.addWidget(self.stop_btn) control_panel.addWidget(self.save_btn) self.layout.addLayout(control_panel) self.setLayout(self.layout) # 事件绑定 self.start_btn.clicked.connect(self.start_acquisition) self.stop_btn.clicked.connect(self.stop_acquisition) self.save_btn.clicked.connect(self.save_data) # 线程控制 self.running = Event() self.thread = None def start_acquisition(self): if not self.running.is_set(): self.running.set() self.thread = Thread(target=self.data_reader) self.thread.start() self.start_btn.setEnabled(False) self.stop_btn.setEnabled(True) def stop_acquisition(self): self.running.clear() self.thread.join() self.start_btn.setEnabled(True) self.stop_btn.setEnabled(False) def data_reader(self): while self.running.is_set(): if self.ser.in_waiting: line = self.ser.readline().decode().strip() if line.startswith('ADC:'): try: value = int(line.split(':')[1]) self.data_buffer[:-1] = self.data_buffer[1:] self.data_buffer[-1] = value self.ptr += 1 QtWidgets.QApplication.processEvents() self.curve.setData(self.data_buffer) except ValueError: pass def save_data(self): filename, _ = QtWidgets.QFileDialog.getSaveFileName( self, "保存数据", "", "CSV文件 (*.csv)") if filename: np.savetxt(filename, self.data_buffer, delimiter=',', header='ADC Value', comments='') def closeEvent(self, event): self.running.clear() if self.thread is not None: self.thread.join() self.ser.close() event.accept() if __name__ == '__main__': app = QtWidgets.QApplication(sys.argv) window = RealTimePlot() window.show() sys.exit(app.exec_()) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 95 93 if __name__ == '__main__': 94 app = QtWidgets.QApplication(sys.argv) ---> 95 window = RealTimePlot() 96 window.show() 97 sys.exit(app.exec_()) Cell In[1], line 13, in RealTimePlot.__init__(self) 10 super().__init__() 12 # 串口配置 ---> 13 self.ser = serial.Serial('COM10', 115200, timeout=1) 14 self.data_buffer = np.zeros(500) 15 self.ptr = 0 AttributeError: module 'serial' has no attribute 'Serial'

这个程序里的Q_ARG一直是报错的情况该怎么解决 # -*- coding: utf-8 -*- import sys import os import cv2 import numpy as np import math import time import logging import threading from collections import deque from PyQt5.QtWidgets import ( QApplication, QMainWindow, QPushButton, QWidget, QVBoxLayout, QHBoxLayout, QMessageBox, QLabel, QFileDialog, QToolBox, QComboBox, QStatusBar, QGroupBox, QSlider, QDockWidget, QProgressDialog, QLineEdit, QRadioButton, QGridLayout, QSpinBox, QCheckBox, QDialog, QDialogButtonBox, QDoubleSpinBox, QProgressBar ) from PyQt5.QtCore import QRect, Qt, QSettings, QThread, pyqtSignal, QTimer, QMetaObject, pyqtSlot from PyQt5.QtGui import QImage, QPixmap from CamOperation_class import CameraOperation from MvCameraControl_class import * import ctypes from ctypes import cast, POINTER from datetime import datetime import skimage import platform from CameraParams_header import ( MV_GIGE_DEVICE, MV_USB_DEVICE, MV_GENTL_CAMERALINK_DEVICE, MV_GENTL_CXP_DEVICE, MV_GENTL_XOF_DEVICE ) # ===== 全局配置 ===== # 模板匹配参数 MATCH_THRESHOLD = 0.75 # 降低匹配置信度阈值以提高灵敏度 MIN_MATCH_COUNT = 10 # 最小匹配特征点数量 MIN_FRAME_INTERVAL = 0.1 # 最小检测间隔(秒) # ===== 全局变量 ===== current_sample_path = "" detection_history = [] isGrabbing = False isOpen = False obj_cam_operation = None frame_monitor_thread = None template_matcher_thread = None MV_OK = 0 MV_E_CALLORDER = -2147483647 # ==================== 优化后的质量检测算法 ==================== def enhanced_check_print_quality(sample_image_path, test_image, threshold=0.05): # 不再使用传感器数据调整阈值 adjusted_threshold = threshold try: sample_img_data = np.fromfile(sample_image_path, dtype=np.uint8) sample_image = cv2.imdecode(sample_img_data, cv2.IMREAD_GRAYSCALE) if sample_image is None: logging.error(f"无法解码样本图像: {sample_image_path}") return None, None, None except Exception as e: logging.exception(f"样本图像读取异常: {str(e)}") return None, None, None if len(test_image.shape) == 3: test_image_gray = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY) else: test_image_gray = test_image.copy() sample_image = cv2.GaussianBlur(sample_image, (5, 5), 0) test_image_gray = cv2.GaussianBlur(test_image_gray, (5, 5), 0) try: # 使用更鲁棒的SIFT特征检测器 sift = cv2.SIFT_create() keypoints1, descriptors1 = sift.detectAndCompute(sample_image, None) keypoints2, descriptors2 = sift.detectAndCompute(test_image_gray, None) if descriptors1 is None or descriptors2 is None: logging.warning("无法提取特征描述符,跳过配准") aligned_sample = sample_image else: # 使用FLANN匹配器提高匹配精度 FLANN_INDEX_KDTREE = 1 index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5) search_params = dict(checks=50) flann = cv2.FlannBasedMatcher(index_params, search_params) matches = flann.knnMatch(descriptors1, descriptors2, k=2) # 应用Lowe's比率测试筛选优质匹配 good_matches = [] for m, n in matches: if m.distance < 0.7 * n.distance: good_matches.append(m) if len(good_matches) > MIN_MATCH_COUNT: src_pts = np.float32([keypoints1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2) dst_pts = np.float32([keypoints2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2) H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0) if H is not None: aligned_sample = cv2.warpPerspective( sample_image, H, (test_image_gray.shape[1], test_image_gray.shape[0]) ) logging.info("图像配准成功,使用配准后样本") else: aligned_sample = sample_image logging.warning("无法计算单应性矩阵,使用原始样本") else: aligned_sample = sample_image logging.warning(f"特征点匹配不足({len(good_matches)}/{MIN_MATCH_COUNT}),跳过图像配准") except Exception as e: logging.error(f"图像配准失败: {str(e)}") aligned_sample = sample_image try: if aligned_sample.shape != test_image_gray.shape: test_image_gray = cv2.resize(test_image_gray, (aligned_sample.shape[1], aligned_sample.shape[0])) except Exception as e: logging.error(f"图像调整大小失败: {str(e)}") return None, None, None try: from skimage.metrics import structural_similarity as compare_ssim ssim_score, ssim_diff = compare_ssim( aligned_sample, test_image_gray, full=True, gaussian_weights=True, data_range=255 ) except ImportError: from skimage.measure import compare_ssim ssim_score, ssim_diff = compare_ssim( aligned_sample, test_image_gray, full=True, gaussian_weights=True ) except Exception as e: logging.error(f"SSIM计算失败: {str(e)}") abs_diff = cv2.absdiff(aligned_sample, test_image_gray) ssim_diff = abs_diff.astype(np.float32) / 255.0 ssim_score = 1.0 - np.mean(ssim_diff) ssim_diff = (1 - ssim_diff) * 255 abs_diff = cv2.absdiff(aligned_sample, test_image_gray) combined_diff = cv2.addWeighted(ssim_diff.astype(np.uint8), 0.7, abs_diff, 0.3, 0) _, thresholded = cv2.threshold(combined_diff, 30, 255, cv2.THRESH_BINARY) kernel = np.ones((3, 3), np.uint8) thresholded = cv2.morphologyEx(thresholded, cv2.MORPH_OPEN, kernel) thresholded = cv2.morphologyEx(thresholded, cv2.MORPH_CLOSE, kernel) diff_pixels = np.count_nonzero(thresholded) total_pixels = aligned_sample.size diff_ratio = diff_pixels / total_pixels is_qualified = diff_ratio <= adjusted_threshold marked_image = cv2.cvtColor(test_image_gray, cv2.COLOR_GRAY2BGR) marked_image[thresholded == 255] = [0, 0, 255] # 放大缺陷标记 scale_factor = 2.0 # 放大2倍 marked_image = cv2.resize(marked_image, None, fx=scale_factor, fy=scale_factor, interpolation=cv2.INTER_LINEAR) labels = skimage.measure.label(thresholded) properties = skimage.measure.regionprops(labels) for prop in properties: if prop.area > 50: y, x = prop.centroid # 根据放大比例调整坐标 x_scaled = int(x * scale_factor) y_scaled = int(y * scale_factor) cv2.putText(marked_image, f"Defect", (x_scaled, y_scaled), cv2.FONT_HERSHEY_SIMPLEX, 0.5 * scale_factor, (0, 255, 255), int(scale_factor)) return is_qualified, diff_ratio, marked_image # ==================== 视觉触发的质量检测流程 ==================== def vision_controlled_check(capture_image=None, match_score=0.0): """修改为接受图像帧和匹配分数""" global current_sample_path, detection_history logging.info("视觉触发质量检测启动") # 如果没有提供图像,使用当前帧 if capture_image is None: frame = obj_cam_operation.get_current_frame() else: frame = capture_image if frame is None: QMessageBox.warning(mainWindow, "错误", "无法获取当前帧图像!", QMessageBox.Ok) return progress = QProgressDialog("正在检测...", "取消", 0, 100, mainWindow) progress.setWindowModality(Qt.WindowModal) progress.setValue(10) try: diff_threshold = mainWindow.sliderDiffThreshold.value() / 100.0 logging.info(f"使用差异度阈值: {diff_threshold}") progress.setValue(30) is_qualified, diff_ratio, marked_image = enhanced_check_print_quality( current_sample_path, frame, threshold=diff_threshold ) progress.setValue(70) if is_qualified is None: QMessageBox.critical(mainWindow, "检测错误", "检测失败,请检查日志", QMessageBox.Ok) return logging.info(f"检测结果: 合格={is_qualified}, 差异={diff_ratio}") progress.setValue(90) update_diff_display(diff_ratio, is_qualified) result_text = f"印花是否合格: {'合格' if is_qualified else '不合格'}\n差异占比: {diff_ratio*100:.2f}%\n阈值: {diff_threshold*100:.2f}%" QMessageBox.information(mainWindow, "检测结果", result_text, QMessageBox.Ok) if marked_image is not None: # 创建可调整大小的窗口 cv2.namedWindow("缺陷标记结果", cv2.WINDOW_NORMAL) cv2.resizeWindow("缺陷标记结果", 800, 600) # 初始大小 cv2.imshow("缺陷标记结果", marked_image) cv2.waitKey(0) cv2.destroyAllWindows() detection_result = { 'timestamp': datetime.now(), 'qualified': is_qualified, 'diff_ratio': diff_ratio, 'threshold': diff_threshold, 'trigger_type': 'vision' if capture_image else 'manual' } detection_history.append(detection_result) update_history_display() progress.setValue(100) except Exception as e: logging.exception("印花检测失败") QMessageBox.critical(mainWindow, "检测错误", f"检测过程中发生错误: {str(e)}", QMessageBox.Ok) finally: progress.close() # ==================== 相机操作函数 ==================== def open_device(): global deviceList, nSelCamIndex, obj_cam_operation, isOpen, frame_monitor_thread, mainWindow if isOpen: QMessageBox.warning(mainWindow, "Error", '相机已打开!', QMessageBox.Ok) return MV_E_CALLORDER nSelCamIndex = mainWindow.ComboDevices.currentIndex() if nSelCamIndex < 0: QMessageBox.warning(mainWindow, "Error", '请选择相机!', QMessageBox.Ok) return MV_E_CALLORDER # 创建相机控制对象 cam = MvCamera() # 初始化相机操作对象 obj_cam_operation = CameraOperation(cam, deviceList, nSelCamIndex) ret = obj_cam_operation.open_device() if 0 != ret: strError = "打开设备失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) isOpen = False else: set_continue_mode() get_param() isOpen = True enable_controls() # 创建并启动帧监控线程 frame_monitor_thread = FrameMonitorThread(obj_cam_operation) frame_monitor_thread.frame_status.connect(mainWindow.statusBar().showMessage) frame_monitor_thread.start() def start_grabbing(): global obj_cam_operation, isGrabbing, template_matcher_thread ret = obj_cam_operation.start_grabbing(mainWindow.widgetDisplay.winId()) if ret != 0: strError = "开始取流失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = True enable_controls() # 等待第一帧到达 QThread.msleep(500) if not obj_cam_operation.is_frame_available(): QMessageBox.warning(mainWindow, "警告", "开始取流后未接收到帧,请检查相机连接!", QMessageBox.Ok) # 如果启用了自动检测,启动检测线程 if mainWindow.chkContinuousMatch.isChecked(): toggle_template_matching(True) def stop_grabbing(): global obj_cam_operation, isGrabbing, template_matcher_thread ret = obj_cam_operation.Stop_grabbing() if ret != 0: strError = "停止取流失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = False enable_controls() # 停止模板匹配线程 if template_matcher_thread and template_matcher_thread.isRunning(): template_matcher_thread.stop() def close_device(): global isOpen, isGrabbing, obj_cam_operation, frame_monitor_thread, template_matcher_thread if frame_monitor_thread and frame_monitor_thread.isRunning(): frame_monitor_thread.stop() frame_monitor_thread.wait(2000) # 停止模板匹配线程 if template_matcher_thread and template_matcher_thread.isRunning(): template_matcher_thread.stop() template_matcher_thread.wait(2000) template_matcher_thread = None if isOpen and obj_cam_operation: obj_cam_operation.close_device() isOpen = False isGrabbing = False enable_controls() # ==================== 连续帧匹配检测器 ==================== class ContinuousFrameMatcher(QThread): frame_processed = pyqtSignal(np.ndarray, float, bool) # 处理后的帧, 匹配分数, 是否匹配 match_score_updated = pyqtSignal(float) # 匹配分数更新信号 match_success = pyqtSignal(np.ndarray, float) # 匹配成功信号 (帧, 匹配分数) def __init__(self, cam_operation, parent=None): super().__init__(parent) self.cam_operation = cam_operation self.running = True self.sample_template = None self.min_match_count = MIN_MATCH_COUNT self.match_threshold = MATCH_THRESHOLD self.sample_kp = None self.sample_des = None self.current_match_score = 0.0 self.last_match_time = 0 self.frame_counter = 0 self.consecutive_fail_count = 0 self.last_trigger_time = 0 # 上次触发时间 self.cool_down = 0.2 # 冷却时间(秒) # 特征检测器 - 使用SIFT self.sift = cv2.SIFT_create() # 特征匹配器 - 使用FLANN提高匹配精度 FLANN_INDEX_KDTREE = 1 index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5) search_params = dict(checks=50) self.flann = cv2.FlannBasedMatcher(index_params, search_params) # 性能监控 self.processing_times = deque(maxlen=100) self.frame_rates = deque(maxlen=100) # 黑白相机优化 self.clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8)) def preprocess_image(self, image): """增强黑白图像特征提取""" # 如果是单通道图像,转换为三通道 if len(image.shape) == 2: image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR) # 对比度增强 (LAB空间) lab = cv2.cvtColor(image, cv2.COLOR_BGR2LAB) l, a, b = cv2.split(lab) cl = self.clahe.apply(l) limg = cv2.merge((cl, a, b)) enhanced = cv2.cvtColor(limg, cv2.COLOR_LAB2BGR) # 边缘增强 gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY) edges = cv2.Canny(gray, 50, 150) # 组合特征 return cv2.addWeighted(enhanced, 0.7, cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR), 0.3, 0) def set_sample(self, sample_img): """设置标准样本并提取特征""" # 保存样本图像 self.sample_img = sample_img # 预处理增强特征 processed_sample = self.preprocess_image(sample_img) # 提取样本特征点 self.sample_kp, self.sample_des = self.sift.detectAndCompute(processed_sample, None) if self.sample_des is None or len(self.sample_kp) < self.min_match_count: logging.warning("样本特征点不足") return False logging.info(f"样本特征提取成功: {len(self.sample_kp)}个关键点") return True def process_frame(self, frame): """处理帧:特征提取、匹配和可视化""" is_matched = False match_score = 0.0 processed_frame = frame.copy() # 检查是否已设置样本 if self.sample_kp is None or self.sample_des is None: return processed_frame, match_score, is_matched # 预处理当前帧 processed_frame = self.preprocess_image(frame) # 转换为灰度图像用于特征提取 gray_frame = cv2.cvtColor(processed_frame, cv2.COLOR_BGR2GRAY) try: # 提取当前帧的特征点 kp, des = self.sift.detectAndCompute(gray_frame, None) if des is None or len(kp) < 10: # 特征点不足 return processed_frame, match_score, is_matched # 匹配特征点 matches = self.flann.knnMatch(self.sample_des, des, k=2) # 应用Lowe's比率测试 good_matches = [] for m, n in matches: if m.distance < 0.7 * n.distance: good_matches.append(m) # 计算匹配分数(匹配点数量占样本特征点数量的比例) if len(self.sample_kp) > 0: match_score = len(good_matches) / len(self.sample_kp) match_score = min(1.0, max(0.0, match_score)) # 确保在0-1范围内 else: match_score = 0.0 # 判断是否匹配成功 if len(good_matches) >= self.min_match_count and match_score >= self.match_threshold: is_matched = True # 在图像上绘制匹配结果 if len(gray_frame.shape) == 2: processed_frame = cv2.cvtColor(gray_frame, cv2.COLOR_GRAY2BGR) # 绘制匹配点 processed_frame = cv2.drawMatches( self.sample_img, self.sample_kp, processed_frame, kp, good_matches, None, flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS ) # 在图像上显示匹配分数 cv2.putText(processed_frame, f"Match Score: {match_score:.2f}", (20, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2) # 更新当前匹配分数 self.current_match_score = match_score self.match_score_updated.emit(match_score) # 检查是否匹配成功且超过冷却时间 current_time = time.time() if is_matched and (current_time - self.last_trigger_time) > self.cool_down: self.last_trigger_time = current_time logging.info(f"匹配成功! 分数: {match_score:.2f}, 触发质量检测") # 发出匹配成功信号 (传递当前帧) self.match_success.emit(frame.copy(), match_score) except Exception as e: logging.error(f"帧处理错误: {str(e)}") return processed_frame, match_score, is_matched def set_threshold(self, threshold): """更新匹配阈值""" self.match_threshold = max(0.0, min(1.0, threshold)) logging.info(f"更新匹配阈值: {self.match_threshold:.2f}") def run(self): """主处理循环 - 连续处理每一帧""" logging.info("连续帧匹配线程启动") self.last_match_time = time.time() self.consecutive_fail_count = 0 while self.running: start_time = time.time() # 检查相机状态 if not self.cam_operation or not self.cam_operation.is_grabbing: if self.consecutive_fail_count % 10 == 0: logging.debug("相机未取流,等待...") time.sleep(0.1) self.consecutive_fail_count += 1 continue # 获取当前帧 frame = self.cam_operation.get_current_frame() if frame is None: self.consecutive_fail_count += 1 if self.consecutive_fail_count % 10 == 0: logging.warning(f"连续{self.consecutive_fail_count}次获取帧失败") time.sleep(0.05) continue self.consecutive_fail_count = 0 try: # 处理帧 processed_frame, match_score, is_matched = self.process_frame(frame) # 发送处理结果 self.frame_processed.emit(processed_frame, match_score, is_matched) except Exception as e: logging.error(f"帧处理错误: {str(e)}") # 控制处理频率 processing_time = time.time() - start_time sleep_time = max(0.01, MIN_FRAME_INTERVAL - processing_time) time.sleep(sleep_time) logging.info("连续帧匹配线程退出") # ==================== 模板匹配控制函数 ==================== def toggle_template_matching(state): global template_matcher_thread, current_sample_path logging.debug(f"切换连续匹配状态: {state}") if state == Qt.Checked and isGrabbing: # 确保已设置样本 if not current_sample_path: logging.warning("尝试启动连续匹配但未设置样本") QMessageBox.warning(mainWindow, "错误", "请先设置标准样本", QMessageBox.Ok) mainWindow.chkContinuousMatch.setChecked(False) return if template_matcher_thread is None: logging.info("创建新的连续帧匹配线程") template_matcher_thread = ContinuousFrameMatcher(obj_cam_operation) template_matcher_thread.frame_processed.connect(update_frame_display) template_matcher_thread.match_score_updated.connect(update_match_score_display) # 正确连接匹配成功信号到质量检测函数 template_matcher_thread.match_success.connect( lambda frame, score: vision_controlled_check(frame, score) ) # 加载样本图像 sample_img = cv2.imread(current_sample_path) if sample_img is None: logging.error("无法加载标准样本图像") QMessageBox.warning(mainWindow, "错误", "无法加载标准样本图像", QMessageBox.Ok) mainWindow.chkContinuousMatch.setChecked(False) return if not template_matcher_thread.set_sample(sample_img): logging.warning("标准样本特征不足") QMessageBox.warning(mainWindow, "错误", "标准样本特征不足", QMessageBox.Ok) mainWindow.chkContinuousMatch.setChecked(False) return if not template_matcher_thread.isRunning(): logging.info("启动连续帧匹配线程") template_matcher_thread.start() elif template_matcher_thread and template_matcher_thread.isRunning(): logging.info("停止连续帧匹配线程") template_matcher_thread.stop() # 重置匹配分数显示 update_match_score_display(0.0) # 重置帧显示 if obj_cam_operation and obj_cam_operation.is_frame_available(): frame = obj_cam_operation.get_current_frame() if frame is not None: display_frame = frame.copy() # 添加状态信息 cv2.putText(display_frame, "Continuous Matching Disabled", (20, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2) update_frame_display(display_frame, 0.0, False) def update_frame_display(frame, match_score, is_matched): """更新主显示窗口(线程安全)""" # 确保在GUI线程中执行 if QThread.currentThread() != QApplication.instance().thread(): QMetaObject.invokeMethod( mainWindow, "updateDisplay", Qt.QueuedConnection, Q_ARG(np.ndarray, frame), Q_ARG(float, match_score), Q_ARG(bool, is_matched) ) return # 如果已经在主线程,直接调用主窗口的更新方法 mainWindow.updateDisplay(frame, match_score, is_matched) def update_match_score_display(score): """更新匹配分数显示""" # 将分数转换为百分比显示 score_percent = score * 100 mainWindow.lblMatchScoreValue.setText(f"{score_percent:.1f}%") # 根据分数设置颜色 if score > 0.8: # 高于80%显示绿色 color = "green" elif score > 0.6: # 60%-80%显示黄色 color = "orange" else: # 低于60%显示红色 color = "red" mainWindow.lblMatchScoreValue.setStyleSheet(f"color: {color}; font-weight: bold;") def update_diff_display(diff_ratio, is_qualified): mainWindow.lblCurrentDiff.setText(f"当前差异度: {diff_ratio*100:.2f}%") if is_qualified: mainWindow.lblDiffStatus.setText("状态: 合格") mainWindow.lblDiffStatus.setStyleSheet("color: green; font-size: 12px;") else: mainWindow.lblDiffStatus.setText("状态: 不合格") mainWindow.lblDiffStatus.setStyleSheet("color: red; font-size: 12px;") def update_diff_threshold(value): mainWindow.lblDiffValue.setText(f"{value}%") def update_sample_display(): global current_sample_path if current_sample_path: mainWindow.lblSamplePath.setText(f"当前样本: {os.path.basename(current_sample_path)}") mainWindow.lblSamplePath.setToolTip(current_sample_path) mainWindow.bnPreviewSample.setEnabled(True) else: mainWindow.lblSamplePath.setText("当前样本: 未设置样本") mainWindow.bnPreviewSample.setEnabled(False) def update_history_display(): global detection_history mainWindow.cbHistory.clear() for i, result in enumerate(detection_history[-10:]): timestamp = result['timestamp'].strftime("%H:%M:%S") status = "合格" if result['qualified'] else "不合格" ratio = f"{result['diff_ratio']*100:.2f}%" trigger = "视觉" if result['trigger_type'] == 'vision' else "手动" mainWindow.cbHistory.addItem(f"[{trigger} {timestamp}] {status} - 差异: {ratio}") def update_match_threshold(value): """更新匹配阈值显示并应用到匹配器""" global template_matcher_thread # 更新UI显示 if mainWindow: mainWindow.lblThresholdValue.setText(f"{value}%") # 如果匹配线程存在,更新其匹配阈值 if template_matcher_thread: # 转换为0-1范围的浮点数 threshold = value / 100.0 template_matcher_thread.set_threshold(threshold) logging.debug(f"更新匹配阈值: {threshold:.2f}") # ==================== 主窗口类 ==================== class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("布料印花检测系统 - 连续匹配版") self.resize(1200, 800) central_widget = QWidget() self.setCentralWidget(central_widget) main_layout = QVBoxLayout(central_widget) # 设备枚举区域 device_layout = QHBoxLayout() self.ComboDevices = QComboBox() self.bnEnum = QPushButton("枚举设备") self.bnOpen = QPushButton("打开设备") self.bnClose = QPushButton("关闭设备") device_layout.addWidget(self.ComboDevices) device_layout.addWidget(self.bnEnum) device_layout.addWidget(self.bnOpen) device_layout.addWidget(self.bnClose) main_layout.addLayout(device_layout) # 取流控制组 self.groupGrab = QGroupBox("取流控制") grab_layout = QHBoxLayout(self.groupGrab) self.bnStart = QPushButton("开始取流") self.bnStop = QPushButton("停止取流") self.radioContinueMode = QRadioButton("连续模式") self.radioTriggerMode = QRadioButton("触发模式") self.bnSoftwareTrigger = QPushButton("软触发") grab_layout.addWidget(self.bnStart) grab_layout.addWidget(self.bnStop) grab_layout.addWidget(self.radioContinueMode) grab_layout.addWidget(self.radioTriggerMode) grab_layout.addWidget(self.bnSoftwareTrigger) main_layout.addWidget(self.groupGrab) # 参数设置组 self.paramgroup = QGroupBox("相机参数") param_layout = QGridLayout(self.paramgroup) self.edtExposureTime = QLineEdit() self.edtGain = QLineEdit() self.edtFrameRate = QLineEdit() self.bnGetParam = QPushButton("获取参数") self.bnSetParam = QPushButton("设置参数") self.bnSaveImage = QPushButton("保存图像") param_layout.addWidget(QLabel("曝光时间:"), 0, 0) param_layout.addWidget(self.edtExposureTime, 0, 1) param_layout.addWidget(self.bnGetParam, 0, 2) param_layout.addWidget(QLabel("增益:"), 1, 0) param_layout.addWidget(self.edtGain, 1, 1) param_layout.addWidget(self.bnSetParam, 1, 2) param_layout.addWidget(QLabel("帧率:"), 2, 0) param_layout.addWidget(self.edtFrameRate, 2, 1) param_layout.addWidget(self.bnSaveImage, 2, 2) main_layout.addWidget(self.paramgroup) # 图像显示区域 self.widgetDisplay = QLabel() self.widgetDisplay.setMinimumSize(640, 480) self.widgetDisplay.setStyleSheet("background-color: black;") self.widgetDisplay.setAlignment(Qt.AlignCenter) self.widgetDisplay.setText("相机预览区域") main_layout.addWidget(self.widgetDisplay, 1) # 创建自定义UI组件 self.setup_custom_ui() # 添加阈值自适应定时器 self.threshold_timer = QTimer() self.threshold_timer.timeout.connect(self.auto_adjust_threshold) self.threshold_timer.start(2000) # 每2秒调整一次 def auto_adjust_threshold(self): """根据环境亮度自动调整匹配阈值""" if not obj_cam_operation or not isGrabbing: return # 获取当前帧并计算平均亮度 frame = obj_cam_operation.get_current_frame() if frame is None: return gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) brightness = np.mean(gray) # 根据亮度动态调整阈值 (亮度低时降低阈值要求) if brightness < 50: # 暗环境 new_threshold = 40 # 40% elif brightness > 200: # 亮环境 new_threshold = 65 # 65% else: # 正常环境 new_threshold = 55 # 55% # 更新UI self.sliderThreshold.setValue(new_threshold) self.lblThresholdValue.setText(f"{new_threshold}%") # 更新匹配器阈值 update_match_threshold(new_threshold) # 状态栏显示调整信息 self.statusBar().showMessage(f"亮度: {brightness:.1f}, 自动调整阈值至: {new_threshold}%", 3000) def setup_custom_ui(self): # 工具栏 toolbar = self.addToolBar("检测工具") self.bnCheckPrint = QPushButton("手动检测") self.bnSaveSample = QPushButton("保存标准样本") self.bnPreviewSample = QPushButton("预览样本") self.cbHistory = QComboBox() self.cbHistory.setMinimumWidth(300) toolbar.addWidget(self.bnCheckPrint) toolbar.addWidget(self.bnSaveSample) toolbar.addWidget(self.bnPreviewSample) toolbar.addWidget(QLabel("历史记录:")) toolbar.addWidget(self.cbHistory) # 状态栏样本路径 self.lblSamplePath = QLabel("当前样本: 未设置样本") self.statusBar().addPermanentWidget(self.lblSamplePath) # 右侧面板 right_panel = QWidget() right_layout = QVBoxLayout(right_panel) right_layout.setContentsMargins(10, 10, 10, 10) # 差异度调整组 diff_group = QGroupBox("差异度调整") diff_layout = QVBoxLayout(diff_group) self.lblDiffThreshold = QLabel("差异度阈值 (0-100%):") self.sliderDiffThreshold = QSlider(Qt.Horizontal) self.sliderDiffThreshold.setRange(0, 100) self.sliderDiffThreshold.setValue(5) self.lblDiffValue = QLabel("5%") self.lblCurrentDiff = QLabel("当前差异度: -") self.lblCurrentDiff.setStyleSheet("font-size: 14px; font-weight: bold;") self.lblDiffStatus = QLabel("状态: 未检测") self.lblDiffStatus.setStyleSheet("font-size: 12px;") diff_layout.addWidget(self.lblDiffThreshold) diff_layout.addWidget(self.sliderDiffThreshold) diff_layout.addWidget(self.lblDiffValue) diff_layout.addWidget(self.lblCurrentDiff) diff_layout.addWidget(self.lblDiffStatus) right_layout.addWidget(diff_group) # ===== 连续匹配面板 ===== match_group = QGroupBox("连续帧匹配") match_layout = QVBoxLayout(match_group) # 样本设置 sample_layout = QHBoxLayout() self.bnSetSample = QPushButton("设置标准样本") self.bnPreviewSample = QPushButton("预览样本") self.lblSampleStatus = QLabel("状态: 未设置样本") sample_layout.addWidget(self.bnSetSample) sample_layout.addWidget(self.bnPreviewSample) sample_layout.addWidget(self.lblSampleStatus) match_layout.addLayout(sample_layout) # 匹配参数 param_layout = QHBoxLayout() self.lblMatchThreshold = QLabel("匹配阈值:") self.sliderThreshold = QSlider(Qt.Horizontal) self.sliderThreshold.setRange(50, 100) self.sliderThreshold.setValue(75) # 降低默认阈值 self.lblThresholdValue = QLabel("75%") param_layout.addWidget(self.lblMatchThreshold) param_layout.addWidget(self.sliderThreshold) param_layout.addWidget(self.lblThresholdValue) match_layout.addLayout(param_layout) # 匹配分数显示 match_score_layout = QHBoxLayout() self.lblMatchScore = QLabel("实时匹配分数:") self.lblMatchScoreValue = QLabel("0.0%") self.lblMatchScoreValue.setStyleSheet("font-weight: bold;") match_score_layout.addWidget(self.lblMatchScore) match_score_layout.addWidget(self.lblMatchScoreValue) match_layout.addLayout(match_score_layout) # 连续匹配开关 self.chkContinuousMatch = QCheckBox("启用连续帧匹配") self.chkContinuousMatch.setChecked(False) match_layout.addWidget(self.chkContinuousMatch) right_layout.addWidget(match_group) right_layout.addStretch(1) # 停靠窗口 dock = QDockWidget("检测控制面板", self) dock.setWidget(right_panel) dock.setFeatures(QDockWidget.DockWidgetMovable | QDockWidget.DockWidgetFloatable) self.addDockWidget(Qt.RightDockWidgetArea, dock) @pyqtSlot(np.ndarray, float, bool) def updateDisplay(self, frame, match_score, is_matched): """线程安全的显示更新方法(添加可视化仪表盘)""" # 创建可视化覆盖层 overlay = frame.copy() height, width = frame.shape[:2] # 绘制匹配度仪表盘 center_x, center_y = width - 100, 100 radius = 80 cv2.circle(overlay, (center_x, center_y), radius, (100, 100, 100), 3) # 绘制指针 (根据匹配度旋转) angle = 240 * match_score # 0-100% 对应 240度范围 end_x = center_x + int(radius * 0.8 * math.cos(math.radians(angle - 90))) end_y = center_y + int(radius * 0.8 * math.sin(math.radians(angle - 90))) cv2.line(overlay, (center_x, center_y), (end_x, end_y), (0, 200, 0), 3) # 添加文本标签 cv2.putText(overlay, f"匹配度: {match_score*100:.1f}%", (center_x - 70, center_y + radius + 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0) if match_score > 0.8 else (0, 0, 255) if match_score < 0.6 else (0, 165, 255), 2) # 添加状态标签 status_text = "匹配成功!" if is_matched else "检测中..." cv2.putText(overlay, status_text, (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0) if is_matched else (200, 200, 0), 2) # 将OpenCV图像转换为Qt图像 if len(overlay.shape) == 3: # 彩色图像 h, w, ch = overlay.shape bytes_per_line = ch * w q_img = QImage(overlay.data, w, h, bytes_per_line, QImage.Format_RGB888) q_img = q_img.rgbSwapped() # BGR -> RGB else: # 灰度图像 h, w = overlay.shape bytes_per_line = w q_img = QImage(overlay.data, w, h, bytes_per_line, QImage.Format_Grayscale8) # 创建QPixmap并缩放 pixmap = QPixmap.fromImage(q_img) scaled_pixmap = pixmap.scaled( self.widgetDisplay.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation ) # 更新显示 self.widgetDisplay.setPixmap(scaled_pixmap) self.widgetDisplay.setAlignment(Qt.AlignCenter) def closeEvent(self, event): logging.info("主窗口关闭,执行清理...") close_device() event.accept() # ===== 辅助函数 ===== def ToHexStr(num): if not isinstance(num, int): try: num = int(num) except: return f"<非整数:{type(num)}>" chaDic = {10: 'a', 11: 'b', 12: 'c', 13: 'd', 14: 'e', 15: 'f'} hexStr = "" if num < 0: num = num + 2 ** 32 while num >= 16: digit = num % 16 hexStr = chaDic.get(digit, str(digit)) + hexStr num //= 16 hexStr = chaDic.get(num, str(num)) + hexStr return "0x" + hexStr def enum_devices(): global deviceList, obj_cam_operation n_layer_type = ( MV_GIGE_DEVICE | MV_USB_DEVICE | MV_GENTL_CAMERALINK_DEVICE | MV_GENTL_CXP_DEVICE | MV_GENTL_XOF_DEVICE ) # 创建设备列表 deviceList = MV_CC_DEVICE_INFO_LIST() # 枚举设备 ret = MvCamera.MV_CC_EnumDevices(n_layer_type, deviceList) if ret != MV_OK: error_msg = f"枚举设备失败! 错误码: 0x{ret:x}" logging.error(error_msg) QMessageBox.warning(mainWindow, "错误", error_msg, QMessageBox.Ok) return ret if deviceList.nDeviceNum == 0: QMessageBox.warning(mainWindow, "提示", "未找到任何设备", QMessageBox.Ok) return MV_OK logging.info(f"找到 {deviceList.nDeviceNum} 个设备") # 处理设备信息 devList = [] for i in range(deviceList.nDeviceNum): # 获取设备信息 mvcc_dev_info = ctypes.cast( deviceList.pDeviceInfo[i], ctypes.POINTER(MV_CC_DEVICE_INFO) ).contents # 根据设备类型提取信息 if mvcc_dev_info.nTLayerType == MV_GIGE_DEVICE: st_gige_info = mvcc_dev_info.SpecialInfo.stGigEInfo ip_addr = ( f"{(st_gige_info.nCurrentIp >> 24) & 0xFF}." f"{(st_gige_info.nCurrentIp >> 16) & 0xFF}." f"{(st_gige_info.nCurrentIp >> 8) & 0xFF}." f"{st_gige_info.nCurrentIp & 0xFF}" ) # 修复:将c_ubyte_Array_16转换为字节串再解码 user_defined_bytes = bytes(st_gige_info.chUserDefinedName) dev_name = f"GigE: {user_defined_bytes.decode('gbk', 'ignore')}" devList.append(f"[{i}] {dev_name} ({ip_addr})") elif mvcc_dev_info.nTLayerType == MV_USB_DEVICE: st_usb_info = mvcc_dev_info.SpecialInfo.stUsb3VInfo serial = bytes(st_usb_info.chSerialNumber).decode('ascii', 'ignore').rstrip('\x00') # 修复:同样处理用户自定义名称 user_defined_bytes = bytes(st_usb_info.chUserDefinedName) dev_name = f"USB: {user_defined_bytes.decode('gbk', 'ignore')}" devList.append(f"[{i}] {dev_name} (SN: {serial})") else: devList.append(f"[{i}] 未知设备类型: {mvcc_dev_info.nTLayerType}") # 更新UI mainWindow.ComboDevices.clear() mainWindow.ComboDevices.addItems(devList) if devList: mainWindow.ComboDevices.setCurrentIndex(0) mainWindow.statusBar().showMessage(f"找到 {deviceList.nDeviceNum} 个设备", 3000) return MV_OK def set_continue_mode(): ret = obj_cam_operation.set_trigger_mode(False) if ret != 0: strError = "设置连续模式失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: mainWindow.radioContinueMode.setChecked(True) mainWindow.radioTriggerMode.setChecked(False) mainWindow.bnSoftwareTrigger.setEnabled(False) def set_software_trigger_mode(): ret = obj_cam_operation.set_trigger_mode(True) if ret != 0: strError = "设置触发模式失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: mainWindow.radioContinueMode.setChecked(False) mainWindow.radioTriggerMode.setChecked(True) mainWindow.bnSoftwareTrigger.setEnabled(isGrabbing) def trigger_once(): ret = obj_cam_operation.trigger_once() if ret != 0: strError = "软触发失败 ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) def save_sample_image(): global isGrabbing, obj_cam_operation, current_sample_path if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 尝试捕获当前帧 frame = obj_cam_operation.capture_frame() if frame is None: QMessageBox.warning(mainWindow, "无有效图像", "未捕获到有效图像,请检查相机状态!", QMessageBox.Ok) return # 确保图像有效 if frame.size == 0 or frame.shape[0] == 0 or frame.shape[1] == 0: QMessageBox.warning(mainWindow, "无效图像", "捕获的图像无效,请检查相机设置!", QMessageBox.Ok) return settings = QSettings("ClothInspection", "CameraApp") last_dir = settings.value("last_save_dir", os.path.join(os.getcwd(), "captures")) timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") default_filename = f"sample_{timestamp}" file_path, selected_filter = QFileDialog.getSaveFileName( mainWindow, "保存标准样本图像", os.path.join(last_dir, default_filename), "BMP Files (*.bmp);;PNG Files (*.png);;JPEG Files (*.jpg);;所有文件 (*)", options=QFileDialog.DontUseNativeDialog ) if not file_path: return # 确保文件扩展名正确 file_extension = os.path.splitext(file_path)[1].lower() if not file_extension: if "BMP" in selected_filter: file_path += ".bmp" elif "PNG" in selected_filter: file_path += ".png" elif "JPEG" in selected_filter or "JPG" in selected_filter: file_path += ".jpg" else: file_path += ".bmp" file_extension = os.path.splitext(file_path)[1].lower() # 创建目录(如果不存在) directory = os.path.dirname(file_path) if directory and not os.path.exists(directory): try: os.makedirs(directory, exist_ok=True) except OSError as e: QMessageBox.critical(mainWindow, "目录创建错误", f"无法创建目录 {directory}: {str(e)}", QMessageBox.Ok) return # 保存图像 try: # 使用OpenCV保存图像 if not cv2.imwrite(file_path, frame): raise Exception("OpenCV保存失败") # 更新状态 current_sample_path = file_path update_sample_display() settings.setValue("last_save_dir", os.path.dirname(file_path)) # 显示成功消息 QMessageBox.information(mainWindow, "成功", f"标准样本已保存至:\n{file_path}", QMessageBox.Ok) # 更新样本状态 mainWindow.lblSampleStatus.setText("状态: 样本已设置") mainWindow.lblSampleStatus.setStyleSheet("color: green;") except Exception as e: logging.error(f"保存图像失败: {str(e)}") QMessageBox.critical(mainWindow, "保存错误", f"保存图像时发生错误:\n{str(e)}", QMessageBox.Ok) def preview_sample(): global current_sample_path if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return try: # 直接使用OpenCV加载图像 sample_img = cv2.imread(current_sample_path) if sample_img is None: raise Exception("无法加载图像") # 显示图像 cv2.namedWindow("标准样本预览", cv2.WINDOW_NORMAL) cv2.resizeWindow("标准样本预览", 800, 600) cv2.imshow("标准样本预览", sample_img) cv2.waitKey(0) cv2.destroyAllWindows() except Exception as e: QMessageBox.warning(mainWindow, "错误", f"预览样本失败: {str(e)}", QMessageBox.Ok) def is_float(str): try: float(str) return True except ValueError: return False def get_param(): try: ret = obj_cam_operation.get_parameters() if ret != MV_OK: strError = "获取参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: mainWindow.edtExposureTime.setText("{0:.2f}".format(obj_cam_operation.exposure_time)) mainWindow.edtGain.setText("{0:.2f}".format(obj_cam_operation.gain)) mainWindow.edtFrameRate.setText("{0:.2f}".format(obj_cam_operation.frame_rate)) except Exception as e: error_msg = f"获取参数时发生错误: {str(e)}" QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) def set_param(): frame_rate = mainWindow.edtFrameRate.text() exposure = mainWindow.edtExposureTime.text() gain = mainWindow.edtGain.text() if not (is_float(frame_rate) and is_float(exposure) and is_float(gain)): strError = "设置参数失败: 参数必须是有效的浮点数" QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) return MV_E_PARAMETER try: ret = obj_cam_operation.set_param( frame_rate=float(frame_rate), exposure_time=float(exposure), gain=float(gain) ) if ret != MV_OK: strError = "设置参数失败,错误码: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) except Exception as e: error_msg = f"设置参数时发生错误: {str(e)}" QMessageBox.critical(mainWindow, "严重错误", error_msg, QMessageBox.Ok) def enable_controls(): global isGrabbing, isOpen mainWindow.groupGrab.setEnabled(isOpen) mainWindow.paramgroup.setEnabled(isOpen) mainWindow.bnOpen.setEnabled(not isOpen) mainWindow.bnClose.setEnabled(isOpen) mainWindow.bnStart.setEnabled(isOpen and (not isGrabbing)) mainWindow.bnStop.setEnabled(isOpen and isGrabbing) mainWindow.bnSoftwareTrigger.setEnabled(isGrabbing and mainWindow.radioTriggerMode.isChecked()) mainWindow.bnSaveImage.setEnabled(isOpen and isGrabbing) mainWindow.bnCheckPrint.setEnabled(isOpen and isGrabbing) mainWindow.bnSaveSample.setEnabled(isOpen and isGrabbing) mainWindow.bnPreviewSample.setEnabled(bool(current_sample_path)) # 连续匹配控制 mainWindow.chkContinuousMatch.setEnabled(bool(current_sample_path) and isGrabbing) # ===== 相机帧监控线程 ===== class FrameMonitorThread(QThread): frame_status = pyqtSignal(str) # 用于发送状态消息的信号 def __init__(self, cam_operation): super().__init__() self.cam_operation = cam_operation self.running = True self.frame_count = 0 self.last_time = time.time() def run(self): """监控相机帧状态的主循环""" while self.running: try: if self.cam_operation and self.cam_operation.is_grabbing: # 获取帧统计信息 frame_info = self.get_frame_info() if frame_info: fps = frame_info.get('fps', 0) dropped = frame_info.get('dropped', 0) status = f"FPS: {fps:.1f} | 丢帧: {dropped}" self.frame_status.emit(status) else: self.frame_status.emit("取流中...") else: self.frame_status.emit("相机未取流") except Exception as e: self.frame_status.emit(f"监控错误: {str(e)}") # 每500ms检查一次 QThread.msleep(500) def stop(self): """停止监控线程""" self.running = False self.wait(1000) # 等待线程结束 def calculate_fps(self): """计算当前帧率""" current_time = time.time() elapsed = current_time - self.last_time if elapsed > 0: fps = self.frame_count / elapsed self.frame_count = 0 self.last_time = current_time return fps return 0 def get_frame_info(self): """获取帧信息""" try: # 更新帧计数 self.frame_count += 1 # 返回帧信息 return { 'fps': self.calculate_fps(), 'dropped': 0 # 实际应用中需要从相机获取真实丢帧数 } except Exception as e: logging.error(f"获取帧信息失败: {str(e)}") return None # ===== 主程序入口 ===== if __name__ == "__main__": # 配置日志系统 logging.basicConfig( level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', handlers=[ logging.FileHandler("cloth_inspection_continuous.log"), logging.StreamHandler() ] ) logging.info("布料印花检测系统(连续匹配版)启动") app = QApplication(sys.argv) mainWindow = MainWindow() mainWindow.sliderThreshold.valueChanged.connect( lambda value: update_match_threshold(value) ) # 信号连接 mainWindow.sliderDiffThreshold.valueChanged.connect(update_diff_threshold) mainWindow.bnCheckPrint.clicked.connect(lambda: vision_controlled_check(None)) mainWindow.bnSaveSample.clicked.connect(save_sample_image) mainWindow.bnPreviewSample.clicked.connect(preview_sample) mainWindow.bnEnum.clicked.connect(enum_devices) mainWindow.bnOpen.clicked.connect(open_device) mainWindow.bnClose.clicked.connect(close_device) mainWindow.bnStart.clicked.connect(start_grabbing) mainWindow.bnStop.clicked.connect(stop_grabbing) mainWindow.bnSoftwareTrigger.clicked.connect(trigger_once) mainWindow.radioTriggerMode.clicked.connect(set_software_trigger_mode) mainWindow.radioContinueMode.clicked.connect(set_continue_mode) mainWindow.bnGetParam.clicked.connect(get_param) mainWindow.bnSetParam.clicked.connect(set_param) mainWindow.bnSaveImage.clicked.connect(save_sample_image) # 连续匹配信号连接 mainWindow.sliderThreshold.valueChanged.connect(update_match_score_display) mainWindow.chkContinuousMatch.stateChanged.connect(toggle_template_matching) mainWindow.show() app.exec_() close_device() sys.exit()

Traceback (most recent call last): File “d:\海康\MVS\Development\Samples\Python\MvImport\two.py”, line 15, in from CamOperation_class import CameraOperation File “d:\海康\MVS\Development\Samples\Python\MvImport\CamOperation_class.py”, line 40 if not file_path.startswith(r"\?"): ^ SyntaxError: EOL while scanning string literal这是我运行下面这个代码出现的错误 # -*- coding: utf-8 -*- import sys import os import cv2 import numpy as np from PyQt5.QtWidgets import (QApplication, QMainWindow, QPushButton, QWidget, QVBoxLayout, QHBoxLayout, QMessageBox, QLabel, QFileDialog, QToolBar, QComboBox, QStatusBar) from PyQt5.QtCore import QRect, Qt from CamOperation_class import CameraOperation sys.path.append("D:\\海康\\MVS\\Development\\Samples\\Python\\BasicDemo") from MvCameraControl_class import * from MvErrorDefine_const import * from CameraParams_header import * from PyUICBasicDemo import Ui_MainWindow import ctypes from datetime import datetime # 全局变量 current_sample_path = "" # 当前使用的样本路径 detection_history = [] # 检测历史记录 # 布料印花检测函数 def check_print_quality(sample_image_path, test_image_path, threshold=0.05): """ 检测布料印花是否合格,并在合格样本上标出错误位置 :param sample_image_path: 合格样本图像路径 :param test_image_path: 待检测图像路径 :param threshold: 差异阈值,超过该值则认为印花不合格 :return: 是否合格,差异值,带有错误标记的合格样本图像 """ # 读取图像 sample_image = cv2.imread(sample_image_path, cv2.IMREAD_GRAYSCALE) test_image = cv2.imread(test_image_path, cv2.IMREAD_GRAYSCALE) if sample_image is None or test_image is None: print("无法加载图像,请检查路径是否正确!") return None, None, None # 确保两个图像大小一致 test_image = cv2.resize(test_image, (sample_image.shape[1], sample_image.shape[0])) # 计算两个图像之间的差异 diff = cv2.absdiff(sample_image, test_image) # 将差异图像二值化 ret, diff_binary = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY) # 计算差异的占比 diff_ratio = np.sum(diff_binary) / (diff_binary.shape[0] * diff_binary.shape[1] * 255) # 修正计算方式 # 判断是否合格 is_qualified = diff_ratio < threshold # 在合格样本上标出错误位置 if is_qualified: marked_image = cv2.cvtColor(sample_image, cv2.COLOR_GRAY2BGR) else: marked_image = cv2.cvtColor(sample_image, cv2.COLOR_GRAY2BGR) contours, _ = cv2.findContours(diff_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cv2.drawContours(marked_image, contours, -1, (0, 0, 255), 2) return is_qualified, diff_ratio, marked_image # 布料印花检测功能 def check_print(): global isGrabbing, obj_cam_operation, current_sample_path, detection_history if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 检查样本路径是否有效 if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return # 保存当前帧作为测试图像 test_path = "temp_test_image.bmp" ret = obj_cam_operation.Save_Bmp() if ret != MV_OK: QMessageBox.warning(mainWindow, "错误", "保存测试图像失败!", QMessageBox.Ok) return # 执行印花检测 is_qualified, diff_ratio, marked_image = check_print_quality(current_sample_path, test_path) if marked_image is not None: # 显示结果 result_text = f"印花是否合格: {'合格' if is_qualified else '不合格'}\n差异占比: {diff_ratio:.4f}" QMessageBox.information(mainWindow, "检测结果", result_text, QMessageBox.Ok) # 显示标记图像 cv2.imshow("缺陷标记结果", marked_image) cv2.waitKey(0) cv2.destroyAllWindows() # 记录检测结果 detection_result = { 'timestamp': datetime.now(), 'qualified': is_qualified, 'diff_ratio': diff_ratio, 'sample_path': current_sample_path, 'test_path': test_path } detection_history.append(detection_result) update_history_display() # 保存标准样本函数 def save_sample_image(): global isGrabbing, obj_cam_operation, current_sample_path if not isGrabbing: QMessageBox.warning(mainWindow, "错误", "请先开始取流并捕获图像!", QMessageBox.Ok) return # 弹出文件保存对话框 file_path, _ = QFileDialog.getSaveFileName( mainWindow, "保存标准样本图像", "", "BMP Files (*.bmp);;PNG Files (*.png);;JPEG Files (*.jpg)" ) if not file_path: return # 用户取消保存 # 获取文件扩展名 file_extension = os.path.splitext(file_path)[1].lower() # 根据扩展名设置保存格式 if file_extension == ".bmp": save_format = "bmp" elif file_extension == ".png": save_format = "png" elif file_extension == ".jpg" or file_extension == ".jpeg": save_format = "jpeg" else: QMessageBox.warning(mainWindow, "错误", "不支持的文件格式!", QMessageBox.Ok) return # 保存当前帧作为标准样本 ret = obj_cam_operation.Save_Image(file_path, save_format) if ret != MV_OK: strError = "保存样本图像失败: " + ToHexStr(ret) QMessageBox.warning(mainWindow, "错误", strError, QMessageBox.Ok) else: QMessageBox.information(mainWindow, "成功", f"标准样本已保存至:\n{file_path}", QMessageBox.Ok) # 更新当前样本路径 current_sample_path = file_path update_sample_display() # 预览当前样本 def preview_sample(): global current_sample_path if not current_sample_path or not os.path.exists(current_sample_path): QMessageBox.warning(mainWindow, "错误", "请先设置有效的标准样本图像!", QMessageBox.Ok) return try: sample_img = cv2.imread(current_sample_path) if sample_img is None: raise Exception("无法加载图像") cv2.imshow("标准样本预览", sample_img) except Exception as e: QMessageBox.warning(mainWindow, "错误", f"预览样本失败: {str(e)}", QMessageBox.Ok) # 更新样本路径显示 def update_sample_display(): global current_sample_path if current_sample_path: ui.lblSamplePath.setText(f"当前样本: {os.path.basename(current_sample_path)}") ui.lblSamplePath.setToolTip(current_sample_path) else: ui.lblSamplePath.setText("当前样本: 未设置样本") # 更新历史记录显示 def update_history_display(): global detection_history ui.cbHistory.clear() for i, result in enumerate(detection_history[-10:]): # 显示最近10条记录 timestamp = result['timestamp'].strftime("%H:%M:%S") status = "合格" if result['qualified'] else "不合格" ratio = f"{result['diff_ratio']:.4f}" ui.cbHistory.addItem(f"[{timestamp}] {status} - 差异: {ratio}") # 获取选取设备信息的索引,通过[]之间的字符去解析 def TxtWrapBy(start_str, end, all): start = all.find(start_str) if start >= 0: start += len(start_str) end = all.find(end, start) if end >= 0: return all[start:end].strip() # 将返回的错误码转换为十六进制显示 def ToHexStr(num): chaDic = {10: 'a', 11: 'b', 12: 'c', 13: 'd', 14: 'e', 15: 'f'} hexStr = "" if num < 0: num = num + 2 ** 32 while num >= 16: digit = num % 16 hexStr = chaDic.get(digit, str(digit)) + hexStr num //= 16 hexStr = chaDic.get(num, str(num)) + hexStr return hexStr # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() global deviceList deviceList = MV_CC_DEVICE_INFO_LIST() global cam cam = MvCamera() global nSelCamIndex nSelCamIndex = 0 global obj_cam_operation obj_cam_operation = 0 global isOpen isOpen = False global isGrabbing isGrabbing = False global isCalibMode # 是否是标定模式(获取原始图像) isCalibMode = True # 绑定下拉列表至设备信息索引 def xFunc(event): global nSelCamIndex nSelCamIndex = TxtWrapBy("[", "]", ui.ComboDevices.get()) # Decoding Characters def decoding_char(c_ubyte_value): c_char_p_value = ctypes.cast(c_ubyte_value, ctypes.c_char_p) try: decode_str = c_char_p_value.value.decode('gbk') # Chinese characters except UnicodeDecodeError: decode_str = str(c_char_p_value.value) return decode_str # ch:枚举相机 | en:enum devices def enum_devices(): global deviceList global obj_cam_operation deviceList = MV_CC_DEVICE_INFO_LIST() n_layer_type = (MV_GIGE_DEVICE | MV_USB_DEVICE | MV_GENTL_CAMERALINK_DEVICE | MV_GENTL_CXP_DEVICE | MV_GENTL_XOF_DEVICE) ret = MvCamera.MV_CC_EnumDevices(n_layer_type, deviceList) if ret != 0: strError = "Enum devices fail! ret = :" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) return ret if deviceList.nDeviceNum == 0: QMessageBox.warning(mainWindow, "Info", "Find no device", QMessageBox.Ok) return ret print("Find %d devices!" % deviceList.nDeviceNum) devList = [] for i in range(0, deviceList.nDeviceNum): mvcc_dev_info = cast(deviceList.pDeviceInfo[i], POINTER(MV_CC_DEVICE_INFO)).contents if mvcc_dev_info.nTLayerType == MV_GIGE_DEVICE or mvcc_dev_info.nTLayerType == MV_GENTL_GIGE_DEVICE: print("\ngige device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stGigEInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) nip1 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0xff000000) >> 24) nip2 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x00ff0000) >> 16) nip3 = ((mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x0000ff00) >> 8) nip4 = (mvcc_dev_info.SpecialInfo.stGigEInfo.nCurrentIp & 0x000000ff) print("current ip: %d.%d.%d.%d " % (nip1, nip2, nip3, nip4)) devList.append( "[" + str(i) + "]GigE: " + user_defined_name + " " + model_name + "(" + str(nip1) + "." + str( nip2) + "." + str(nip3) + "." + str(nip4) + ")") elif mvcc_dev_info.nTLayerType == MV_USB_DEVICE: print("\nu3v device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stUsb3VInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stUsb3VInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]USB: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CAMERALINK_DEVICE: print("\nCML device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCMLInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCMLInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]CML: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_CXP_DEVICE: print("\nCXP device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stCXPInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stCXPInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]CXP: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") elif mvcc_dev_info.nTLayerType == MV_GENTL_XOF_DEVICE: print("\nXoF device: [%d]" % i) user_defined_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chUserDefinedName) model_name = decoding_char(mvcc_dev_info.SpecialInfo.stXoFInfo.chModelName) print("device user define name: " + user_defined_name) print("device model name: " + model_name) strSerialNumber = "" for per in mvcc_dev_info.SpecialInfo.stXoFInfo.chSerialNumber: if per == 0: break strSerialNumber = strSerialNumber + chr(per) print("user serial number: " + strSerialNumber) devList.append("[" + str(i) + "]XoF: " + user_defined_name + " " + model_name + "(" + str(strSerialNumber) + ")") ui.ComboDevices.clear() ui.ComboDevices.addItems(devList) ui.ComboDevices.setCurrentIndex(0) # ch:打开相机 | en:open device def open_device(): global deviceList global nSelCamIndex global obj_cam_operation global isOpen if isOpen: QMessageBox.warning(mainWindow, "Error", 'Camera is Running!', QMessageBox.Ok) return MV_E_CALLORDER nSelCamIndex = ui.ComboDevices.currentIndex() if nSelCamIndex < 0: QMessageBox.warning(mainWindow, "Error", 'Please select a camera!', QMessageBox.Ok) return MV_E_CALLORDER obj_cam_operation = CameraOperation(cam, deviceList, nSelCamIndex) ret = obj_cam_operation.Open_device() if 0 != ret: strError = "Open device failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) isOpen = False else: set_continue_mode() get_param() isOpen = True enable_controls() # ch:开始取流 | en:Start grab image def start_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.Start_grabbing(ui.widgetDisplay.winId()) if ret != 0: strError = "Start grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = True enable_controls() # ch:停止取流 | en:Stop grab image def stop_grabbing(): global obj_cam_operation global isGrabbing ret = obj_cam_operation.Stop_grabbing() if ret != 0: strError = "Stop grabbing failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: isGrabbing = False enable_controls() # ch:关闭设备 | Close device def close_device(): global isOpen global isGrabbing global obj_cam_operation if isOpen: obj_cam_operation.Close_device() isOpen = False isGrabbing = False enable_controls() # ch:设置触发模式 | en:set trigger mode def set_continue_mode(): ret = obj_cam_operation.Set_trigger_mode(False) if ret != 0: strError = "Set continue mode failed ret:" + ToHexStr(ret) + " mode is " + str(is_trigger_mode) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(True) ui.radioTriggerMode.setChecked(False) ui.bnSoftwareTrigger.setEnabled(False) # ch:设置软触发模式 | en:set software trigger mode def set_software_trigger_mode(): ret = obj_cam_operation.Set_trigger_mode(True) if ret != 0: strError = "Set trigger mode failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.radioContinueMode.setChecked(False) ui.radioTriggerMode.setChecked(True) ui.bnSoftwareTrigger.setEnabled(isGrabbing) # ch:设置触发命令 | en:set trigger software def trigger_once(): ret = obj_cam_operation.Trigger_once() if ret != 0: strError = "TriggerSoftware failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) # ch:存图 | en:save image def save_bmp(): ret = obj_cam_operation.Save_Bmp() if ret != MV_OK: strError = "Save BMP failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: print("Save image success") def is_float(str): try: float(str) return True except ValueError: return False # ch: 获取参数 | en:get param def get_param(): ret = obj_cam_operation.Get_parameter() if ret != MV_OK: strError = "Get param failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) else: ui.edtExposureTime.setText("{0:.2f}".format(obj_cam_operation.exposure_time)) ui.edtGain.setText("{0:.2f}".format(obj_cam_operation.gain)) ui.edtFrameRate.setText("{0:.2f}".format(obj_cam_operation.frame_rate)) # ch: 设置参数 | en:set param def set_param(): frame_rate = ui.edtFrameRate.text() exposure = ui.edtExposureTime.text() gain = ui.edtGain.text() if is_float(frame_rate)!=True or is_float(exposure)!=True or is_float(gain)!=True: strError = "Set param failed ret:" + ToHexStr(MV_E_PARAMETER) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) return MV_E_PARAMETER ret = obj_cam_operation.Set_parameter(frame_rate, exposure, gain) if ret != MV_OK: strError = "Set param failed ret:" + ToHexStr(ret) QMessageBox.warning(mainWindow, "Error", strError, QMessageBox.Ok) return MV_OK # ch: 设置控件状态 | en:set enable status def enable_controls(): global isGrabbing global isOpen # 先设置group的状态,再单独设置各控件状态 ui.groupGrab.setEnabled(isOpen) ui.groupParam.setEnabled(isOpen) ui.bnOpen.setEnabled(not isOpen) ui.bnClose.setEnabled(isOpen) ui.bnStart.setEnabled(isOpen and (not isGrabbing)) ui.bnStop.setEnabled(isOpen and isGrabbing) ui.bnSoftwareTrigger.setEnabled(isGrabbing and ui.radioTriggerMode.isChecked()) ui.bnSaveImage.setEnabled(isOpen and isGrabbing) # 添加检测按钮控制 ui.bnCheckPrint.setEnabled(isOpen and isGrabbing) ui.bnSaveSample.setEnabled(isOpen and isGrabbing) ui.bnPreviewSample.setEnabled(bool(current_sample_path)) if __name__ == "__main__": # ch:初始化SDK | en: initialize SDK MvCamera.MV_CC_Initialize() deviceList = MV_CC_DEVICE_INFO_LIST() cam = MvCamera() nSelCamIndex = 0 obj_cam_operation = 0 isOpen = False isGrabbing = False isCalibMode = True # 是否是标定模式(获取原始图像) # 初始化UI app = QApplication(sys.argv) mainWindow = QMainWindow() ui = Ui_MainWindow() ui.setupUi(mainWindow) # 扩大主窗口尺寸 mainWindow.resize(1200, 800) # 宽度1200,高度800 # 创建工具栏 toolbar = mainWindow.addToolBar("检测工具") # 添加检测按钮 ui.bnCheckPrint = QPushButton("检测印花质量") toolbar.addWidget(ui.bnCheckPrint) # 添加保存样本按钮 ui.bnSaveSample = QPushButton("保存标准样本") toolbar.addWidget(ui.bnSaveSample) # 添加预览样本按钮 ui.bnPreviewSample = QPushButton("预览样本") toolbar.addWidget(ui.bnPreviewSample) # 添加历史记录下拉框 ui.cbHistory = QComboBox() ui.cbHistory.setMinimumWidth(300) toolbar.addWidget(QLabel("历史记录:")) toolbar.addWidget(ui.cbHistory) # 添加当前样本显示标签 ui.lblSamplePath = QLabel("当前样本: 未设置样本") status_bar = mainWindow.statusBar() status_bar.addPermanentWidget(ui.lblSamplePath) # 绑定按钮事件 ui.bnCheckPrint.clicked.connect(check_print) ui.bnSaveSample.clicked.connect(save_sample_image) ui.bnPreviewSample.clicked.connect(preview_sample) # 绑定其他按钮事件 ui.bnEnum.clicked.connect(enum_devices) ui.bnOpen.clicked.connect(open_device) ui.bnClose.clicked.connect(close_device) ui.bnStart.clicked.connect(start_grabbing) ui.bnStop.clicked.connect(stop_grabbing) ui.bnSoftwareTrigger.clicked.connect(trigger_once) ui.radioTriggerMode.clicked.connect(set_software_trigger_mode) ui.radioContinueMode.clicked.connect(set_continue_mode) ui.bnGetParam.clicked.connect(get_param) ui.bnSetParam.clicked.connect(set_param) ui.bnSaveImage.clicked.connect(save_bmp) # 显示主窗口 mainWindow.show() # 执行应用 app.exec_() # 关闭设备 close_device() # ch:反初始化SDK | en: finalize SDK MvCamera.MV_CC_Finalize() sys.exit()

最新推荐

recommend-type

2022年网站美工个人年度工作总结(1).doc

2022年网站美工个人年度工作总结(1).doc
recommend-type

财务软件销售实习报告格式范文-实习报告格式(1).doc

财务软件销售实习报告格式范文-实习报告格式(1).doc
recommend-type

【航迹关联】基于标准 Hough 变换、修正 Hough 变换和序列 Hough 变换实现航迹起始算法研究Matlab代码.rar

【航迹关联】基于标准 Hough 变换、修正 Hough 变换和序列 Hough 变换实现航迹起始算法研究Matlab代码
recommend-type

Windows系统修复工具

Windows 系统修复工具主要用于解决 Windows 11/10 系统中的各种常见问题,具有操作简单、功能全面等特点: 文件资源管理器修复:可解决文件资源管理器卡死、崩溃、无响应等问题,能终止崩溃循环。还可修复右键菜单无响应或选项缺失问题,以及重建缩略图缓存,让图片、视频等文件的缩略图正常显示,此外,还能处理桌面缺少回收站图标、回收站损坏等问题。 互联网和连接修复:能够刷新 DNS 缓存,加速网页加载速度,减少访问延迟。可重置 TCP/IP 协议栈,增强网络连接稳定性,减少网络掉线情况,还能还原 Hosts 文件,清除恶意程序对网络设置的篡改,保障网络安全,解决电脑重装系统后网络无法连接、浏览器主页被篡改等问题。 系统修复:集成系统文件检查器(SFC),可自动扫描并修复受损的系统文件。能解决 Windows 激活状态异常的问题,还可重建 DLL 注册库,恢复应用程序兼容性,解决部分软件无法正常运行的问题,同时也能处理如 Windows 沙箱无法启动、Windows 将 JPG 或 JPEG 保存为 JFIF 等系统问题。 系统工具维护:提供启动管理器、服务管理器和进程管理器等工具,用户可控制和管理启动程序、系统服务和当前运行的进程,提高系统的启动和运行速度,防止不必要的程序和服务占用系统资源。还能查看系统规格,如处理器线程数、最大显示分辨率等。 故障排除:集成超过 20 个微软官方诊断工具,可对系统问题进行专业排查,还能生成硬件健康状态报告。能解决搜索和索引故障、邮件和日历应用程序崩溃、设置应用程序无法启动等问题,也可处理打印机、网络适配器、Windows 更新等相关故障。 其他修复功能:可以重置组策略设置、catroot2 文件夹、记事本等多种系统设置和组件,如重置 Windows 应用商店缓存、Windows 防火墙设置等。还能添加重建图标缓存支持,恢复粘滞便笺删除
recommend-type

获取本机IP地址的程序源码分析

从给定文件信息中我们可以提取出的关键知识点是“取本机IP”的实现方法以及与之相关的编程技术和源代码。在当今的信息技术领域中,获取本机IP地址是一项基本技能,广泛应用于网络通信类的软件开发中,下面将详细介绍这一知识点。 首先,获取本机IP地址通常需要依赖于编程语言和操作系统的API。不同的操作系统提供了不同的方法来获取IP地址。在Windows操作系统中,可以通过调用Windows API中的GetAdaptersInfo()或GetAdaptersAddresses()函数来获取网络适配器信息,进而得到IP地址。在类Unix操作系统中,可以通过读取/proc/net或是使用系统命令ifconfig、ip等来获取网络接口信息。 在程序设计过程中,获取本机IP地址的源程序通常会用到网络编程的知识,比如套接字编程(Socket Programming)。网络编程允许程序之间进行通信,套接字则是在网络通信过程中用于发送和接收数据的接口。在许多高级语言中,如Python、Java、C#等,都提供了内置的网络库和类来简化网络编程的工作。 在网络通信类中,IP地址是区分不同网络节点的重要标识,它是由IP协议规定的,用于在网络中唯一标识一个网络接口。IP地址可以是IPv4,也可以是较新的IPv6。IPv4地址由32位二进制数表示,通常分为四部分,每部分由8位构成,并以点分隔,如192.168.1.1。IPv6地址则由128位二进制数表示,其表示方法与IPv4有所不同,以冒号分隔的8组16进制数表示,如2001:0db8:85a3:0000:0000:8a2e:0370:7334。 当编写源代码以获取本机IP地址时,通常涉及到以下几个步骤: 1. 选择合适的编程语言和相关库。 2. 根据目标操作系统的API或系统命令获取网络接口信息。 3. 分析网络接口信息,提取出IP地址。 4. 将提取的IP地址转换成适合程序内部使用的格式。 5. 在程序中提供相应功能,如显示IP地址或用于网络通信。 例如,在Python中,可以使用内置的socket库来获取本机IP地址。一个简单的示例代码如下: ```python import socket # 获取主机名 hostname = socket.gethostname() # 获取本机IP local_ip = socket.gethostbyname(hostname) print("本机IP地址是:", local_ip) ``` 在实际应用中,获取本机IP地址通常是为了实现网络通信功能,例如建立客户端与服务器的连接,或者是在开发涉及到IP地址的其他功能时使用。 关于文件名称“getIP”,这是一个自解释的文件名,明显表示该文件或程序的作用是获取本机的IP地址。从标签“控件 源码 网络通信类 资源”中我们可以看出,这个文件很可能是一个可以嵌入其他软件中的代码片段,用以实现网络通信功能的一部分,具有较高的灵活性和重用性。 综上所述,获取本机IP地址是一个涉及到操作系统API、网络编程以及特定编程语言知识的技能。它不仅需要对编程语言提供的库函数有深入的理解,还要对操作系统的网络配置和IP协议有足够的认识。通过阅读和分析相关的源代码,如“getIP”,可以加深对网络编程实现细节的理解,并能够在实际开发中快速地应用这一技术。
recommend-type

【权威指南】:Win11笔记本上RTX4080驱动的安装与调试秘籍

# 1. Win11笔记本与RTX4080显卡介绍 随着技术的不断发展,笔记本电脑已不再是仅仅满足办公需求的移动设备,而是逐渐成为可以胜任复杂图形处理和游戏任务的强大工具。微软推出的Windows 11操作系统,以其全新的用户界面和改进的系统性能,为用户提供了更为流畅的操作体验。而在硬件方面,NVIDIA的RTX 4080显卡以其卓越的图形处理能力和AI支持,为笔记本电脑带来了前所未有的性能提升。本章节将带您初步了解Windows 11笔记本与NVIDIA RTX 4080显卡的基础信息,为进一步深入探讨它们的优化和调试技巧奠定基础。 ## 1.1 Win11笔记本的优势 Windows
recommend-type

windows环境举例

<think>好的,用户现在想在Windows环境下创建C函数库,需要具体的方法和示例代码。我需要结合之前的回答和用户提供的引用内容来给出合适的指导。 首先回顾之前的回答,用户已经了解了在Linux下创建静态库和动态库的方法,现在需要切换到Windows环境。根据引用[2],Windows下的C标准库和动态链接库的处理与Linux不同,比如使用dlfcn.h在Linux,而Windows可能需要其他方式。另外,引用[1]提到了在Windows下配置gcc环境(MinGW-w64),这可能是一个关键点,因为用户可能需要使用MinGW来编译库。 用户提供的引用[3]提到了使用MSVC编译器,这
recommend-type

QQ自动发送/回复系统源代码开放

根据提供的文件信息,我们可以了解到以下几点关键的知识点: ### 标题:“qqhelp” 1. **项目类型**: 标题“qqhelp”暗示这是一个与QQ相关的帮助工具或项目。QQ是中国流行的即时通讯软件,因此这个标题表明项目可能提供了对QQ客户端功能的辅助或扩展。 2. **用途**: “help”表明此项目的主要目的是提供帮助或解决问题。由于它提到了QQ,并且涉及“autosend/reply”功能,我们可以推测该项目可能用于自动化发送消息回复,或提供某种形式的自动回复机制。 ### 描述:“I put it to my web, but nobody sendmessage to got the source, now I public it. it supply qq,ticq autosend/reply ,full sourcecode use it as you like” 1. **发布情况**: 描述提到该项目原先被放置在某人的网站上,并且没有收到请求源代码的消息。这可能意味着项目不够知名或者需求不高。现在作者决定公开发布,这可能是因为希望项目能够被更多人了解和使用,或是出于开源共享的精神。 2. **功能特性**: 提到的“autosend/reply”表明该项目能够实现自动发送和回复消息。这种功能对于需要进行批量或定时消息沟通的应用场景非常有用,例如客户服务、自动化的营销通知等。 3. **代码可用性**: 作者指出提供了“full sourcecode”,意味着源代码完全开放,用户可以自由使用,无论是查看、学习还是修改,用户都有很大的灵活性。这对于希望学习编程或者有特定需求的开发者来说是一个很大的优势。 ### 标签:“综合系统类” 1. **项目分类**: 标签“综合系统类”表明这个项目可能是一个多功能的集成系统,它可能不仅限于QQ相关的功能,还可能包含了其他类型的综合服务或特性。 2. **技术范畴**: 这个标签可能表明该项目的技术实现比较全面,可能涉及到了多个技术栈或者系统集成的知识点,例如消息处理、网络编程、自动化处理等。 ### 压缩包子文件的文件名称列表: 1. **Unit1.dfm**: 这是一个Delphi或Object Pascal语言的窗体定义文件,用于定义应用程序中的用户界面布局。DFM文件通常用于存储组件的属性和位置信息,使得开发者可以快速地进行用户界面的设计和调整。 2. **qqhelp.dpr**: DPR是Delphi项目文件的扩展名,包含了Delphi项目的核心设置,如程序入口、使用的单元(Units)等。这个文件是编译和构建Delphi项目的起点,它能够帮助开发者了解项目的组织结构和编译指令。 3. **Unit1.pas**: PAS是Delphi或Object Pascal语言的源代码文件。这个文件可能包含了与QQ帮助工具相关的核心逻辑代码,例如处理自动发送和回复消息的算法等。 4. **readme.txt**: 这是一个常见的文本文件,包含项目的基本说明和使用指导,帮助用户了解如何获取、安装、运行和定制该项目。README文件通常是用户与项目首次交互时首先阅读的文件,因此它对于一个开源项目的用户友好度有着重要影响。 通过以上分析,我们可以看出“qqhelp”项目是一个针对QQ通讯工具的自动化消息发送与回复的辅助工具。项目包含完整的源代码,用户可以根据自己的需要进行查看、修改和使用。它可能包含Delphi语言编写的窗体界面和后端逻辑代码,具有一定的综合系统特性。项目作者出于某种原因将其开源,希望能够得到更广泛的使用和反馈。
recommend-type

【7步打造Win11深度学习利器】:Tensorflow-GPU与RTX4080终极优化指南

# 1. 深度学习与Windows 11的融合 在人工智能时代,深度学习已渗透到生活的方方面面,而Windows 11作为最新一代的操作系统,为深度学习提供了一个高效的工作平台。本章节将探讨深度学习技术如何与Windows 11系统融合,以及这一
recommend-type

ue画笔画画蓝图版

<think>我们正在讨论的是在UnrealEngine中使用蓝图实现画笔绘画功能。根据用户需求,重点在于通过蓝图系统实现类似毛笔的绘画效果。结合之前的回答和引用内容,我们将详细展开实现方法。核心思路:通过捕捉输入轨迹,动态生成笔触网格,并应用材质模拟墨迹效果。###详细实现步骤####1.创建绘画蓝图创建一个名为`BP_PaintBrush`的Actor蓝图:-**根组件**:SceneComponent-**关键组件**:-`SplineComponent`:用于存储绘画路径点-`InstancedStaticMeshComponent`:高效渲染重复笔触段(替代单个SplineMesh组