Installing NG-ZORRO Components and Customizing Themes in an Angular Project

This project is a front-end application layout built on the Angular framework, using the NG-ZORRO component library and a customized theme to construct its interface. NG-ZORRO is a high-quality component library for Angular that provides a rich set of UI components designed to stay consistent with the Ant Design design language.

Key points in detail:

1. Angular core: Angular is an open-source front-end framework that lets developers build dynamic web applications in TypeScript. In the "tf-app-layout" project, the core change is taking a conventional Angular project and, by installing a new component library and configuring it, turning it into an application with a modern UI.

2. NG-ZORRO components: NG-ZORRO is the Angular implementation of the Ant Design component library. It provides many commonly used UI components, such as buttons, form controls, tables, and navigation menus. Running "ng add ng-zorro-antd" installs NG-ZORRO into an existing Angular project, pulling in these prebuilt components to speed up development and keep the interface style consistent (see the installation sketch below).

3. Configuration options during installation: when installing NG-ZORRO, the developer makes several configuration choices:
- Enable dynamic icon loading: loading icons on demand reduces the application's bundle size, since icons are fetched only when needed. See the official documentation for details.
- Set up a custom theme file: this lets the developer tailor the theme styles to the project's needs; the official documentation describes the setup in detail (a theme sketch follows below).
- Choose a locale code: this relates to the component library's multi-language support; choosing `en_GB`, for example, means the application will use British English (a locale sketch follows below).
- Choose a project template: the installer offers different project templates; the `blank` template mentioned here is a basic empty template, suitable for developers who want to build from scratch.

4. Starting the dev server: once the required components are installed, the developer can start a local server and begin work. This typically means using the Angular CLI command `ng serve`, which compiles the application and runs it on the default local development server for live preview and debugging (a sketch follows below).

5. The HTML tag: although the tag list mentions only "HTML", note that while NG-ZORRO components are used within the Angular framework, HTML is the foundation of Angular templates, so using NG-ZORRO components still means writing HTML to define the layout. NG-ZORRO component templates use Angular data binding and directives to interact with HTML tags and render content dynamically (a template sketch follows below).

6. Archive file listing: "tf-app-layout-master" is likely the name of the project folder, i.e., the root directory of the source code. In practice a project structure contains many subdirectories and files holding components, stylesheets, images, and other resources, and the developer needs to be familiar with this structure to add, modify, or reference resources correctly.

In summary, "tf-app-layout" is an example front-end application layout that combines core Angular techniques, the NG-ZORRO component library, and a personalized theme. With these concepts and steps, developers can build and customize web applications with a professional look and a consistent user experience.
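As a concrete illustration of points 2 and 3, an installation session looks roughly like the following. This is only a sketch: the exact prompt wording, defaults, and available templates vary between ng-zorro-antd versions, and the prompt text here is paraphrased.

```
$ ng add ng-zorro-antd
# The schematic then asks the configuration questions described above:
? Enable icon dynamic loading? Yes
? Set up custom theme file? Yes
? Choose your locale code: en_GB
? Choose template to create project: blank
```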

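For the custom theme option, the installer can generate a Less file that imports NG-ZORRO's styles and lets you override Ant Design's theme variables. A minimal sketch, assuming a src/theme.less that the installer has registered under "styles" in angular.json (the variable names come from Ant Design's Less API; the values here are arbitrary examples):

```less
/* src/theme.less -- assumed to be listed under "styles" in angular.json */
@import "../node_modules/ng-zorro-antd/ng-zorro-antd.less";

/* Override Ant Design Less variables to customize the theme */
@primary-color: #1d39c4;   /* brand color picked up by buttons, links, etc. */
@border-radius-base: 4px;  /* corner rounding applied across components */
```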

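Choosing a locale such as en_GB wires the selected language into NG-ZORRO's i18n provider. A minimal sketch of the root-module wiring, assuming a recent ng-zorro-antd where locales are exported from ng-zorro-antd/i18n:

```typescript
// app.module.ts (excerpt) -- locale setup for en_GB
import { registerLocaleData } from '@angular/common';
import en from '@angular/common/locales/en-GB';
import { NgModule } from '@angular/core';
import { NZ_I18N, en_GB } from 'ng-zorro-antd/i18n';

// Register Angular's own locale data (date, number, currency formats)
registerLocaleData(en);

@NgModule({
  providers: [
    // NG-ZORRO component texts (pagination, date picker, etc.) in British English
    { provide: NZ_I18N, useValue: en_GB },
  ],
})
export class AppModule {}
```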

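As point 5 notes, NG-ZORRO components are used as ordinary tags and attribute directives inside Angular HTML templates. A small sketch (it assumes NzButtonModule and NzAlertModule are imported in the module, and `saved` is a hypothetical field on the component class):

```html
<!-- app.component.html (sketch) -->
<!-- nz-button is an attribute directive applied to a native button element -->
<button nz-button nzType="primary" (click)="saved = true">Save</button>

<!-- nz-alert is a component; *ngIf and (click) are plain Angular template syntax -->
<nz-alert *ngIf="saved" nzType="success" nzMessage="Saved successfully"></nz-alert>
```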

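Finally, for point 4, starting the development server is a single CLI command; by default the Angular CLI serves on port 4200:

```
$ ng serve --open   # compile the app, serve it at http://localhost:4200/, and open the browser
```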

img def predict(self, image): """ 执行预测 :param image: 输入图像 (numpy数组或文件路径) :return: 预测结果 (类别名称, 置信度) """ if self.interpreter is not None: # TFLite模型推理 return self.predict_tflite(image) else: # Keras模型推理 return self.predict_keras(image) def predict_keras(self, image): """使用Keras模型预测""" # 预处理 img = self.preprocess_image(image, self.input_shape, np.float32) # 预测 predictions = self.model.predict(img, verbose=0)[0] class_idx = np.argmax(predictions) confidence = predictions[class_idx] class_name = self.class_labels[class_idx] return class_name, confidence def predict_tflite(self, image): """使用TFLite模型预测""" # 获取输入数据类型 input_dtype = self.input_details[0]['dtype'] # 预处理 img = self.preprocess_image(image, self.input_shape, input_dtype) # 设置输入张量 self.interpreter.set_tensor(self.input_details[0]['index'], img) # 执行推理 self.interpreter.invoke() # 获取输出 output_data = self.interpreter.get_tensor(self.output_details[0]['index']) predictions = output_data[0] # 解析结果 class_idx = np.argmax(predictions) confidence = predictions[class_idx] # 如果输出是量化数据,需要反量化 if self.output_details[0]['dtype'] == np.uint8: # 反量化输出 scale, zero_point = self.output_details[0]['quantization'] confidence = scale * (confidence - zero_point) class_name = self.class_labels[class_idx] return class_name, confidence def benchmark(self, image, runs=100): """ 模型性能基准测试 :param image: 测试图像 :param runs: 运行次数 :return: 平均推理时间(ms), 内存占用(MB) """ # 预热运行 self.predict(image) # 计时测试 start_time = tf.timestamp() for _ in range(runs): self.predict(image) end_time = tf.timestamp() avg_time_ms = (end_time - start_time).numpy() * 1000 / runs # 内存占用 if self.interpreter: # 计算输入张量内存占用 input_size = self.input_details[0]['shape'] dtype_size = np.dtype(self.input_details[0]['dtype']).itemsize mem_usage = np.prod(input_size) * dtype_size / (1024 * 1024) else: # 估算Keras模型内存 mem_usage = self.model.count_params() * 4 / (1024 * 1024) # 假设32位浮点数 return avg_time_ms, mem_usage def create_metadata(self, output_path): """ 创建并保存模型元数据文件 :param output_path: 元数据文件输出路径 """ metadata = { "model_type": "tflite" if self.model_path.endswith('.tflite') else "keras", "class_labels": self.class_labels, "input_size": self.input_shape, "input_dtype": str(self.input_details[0]['dtype']) if self.interpreter else "float32", "quantization": None } if self.interpreter and self.input_details[0]['dtype'] == np.uint8: metadata["quantization"] = { "input_scale": float(self.input_details[0]['quantization'][0]), "input_zero_point": int(self.input_details[0]['quantization'][1]), "output_scale": float(self.output_details[0]['quantization'][0]), "output_zero_point": int(self.output_details[0]['quantization'][1]) } with open(output_path, 'w') as f: json.dump(metadata, f, indent=4) return metadata def convert_to_tflite_with_metadata(self, output_path, quantize=False, representative_data_dir=None): """ 将Keras模型转换为TFLite格式并添加元数据 :param output_path: 输出TFLite文件路径 :param quantize: 是否进行量化 :param representative_data_dir: 代表性数据集目录 """ if not self.model_path.endswith(('.keras', '.h5')): raise ValueError("需要Keras模型格式进行转换") # 加载Keras模型 keras_model = tf.keras.models.load_model(self.model_path) # 创建转换器 converter = tf.lite.TFLiteConverter.from_keras_model(keras_model) if quantize: # 量化配置 converter.optimizations = [tf.lite.Optimize.DEFAULT] # 设置代表性数据集生成器 converter.representative_dataset = lambda: self.representative_dataset( representative_data_dir, input_size=self.input_shape ) # 设置输入输出类型 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] converter.inference_input_type = tf.uint8 
converter.inference_output_type = tf.uint8 # 转换模型 tflite_model = converter.convert() # 保存模型 with open(output_path, 'wb') as f: f.write(tflite_model) print(f"TFLite模型已保存到: {output_path}") # 添加元数据 self.add_tflite_metadata(output_path) return output_path def representative_dataset(self, data_dir=None, input_size=(224, 224), num_samples=100): """ 生成代表性数据集用于量化 :param data_dir: 真实数据目录 :param input_size: 输入尺寸 (height, width) :param num_samples: 样本数量 """ # 优先使用真实数据 if data_dir and os.path.exists(data_dir): image_files = [os.path.join(data_dir, f) for f in os.listdir(data_dir) if f.lower().endswith(('.png', '.jpg', '.jpeg'))] # 限制样本数量 image_files = image_files[:min(len(image_files), num_samples)] print(f"使用 {len(image_files)} 张真实图像进行量化校准") for img_path in tqdm(image_files, desc="量化校准"): try: # 读取并预处理图像 img = cv2.imread(img_path) if img is None: continue img = cv2.resize(img, (input_size[1], input_size[0])) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = img.astype(np.float32) / 255.0 # 转换为float32并归一化 img = np.expand_dims(img, axis=0) yield [img] except Exception as e: print(f"处理图像 {img_path} 时出错: {str(e)}") else: # 使用随机数据作为备选 print(f"使用随机数据生成 {num_samples} 个样本进行量化校准") for _ in range(num_samples): # 生成随机图像,归一化到[0,1]范围,使用float32类型 data = np.random.rand(1, input_size[0], input_size[1], 3).astype(np.float32) yield [data] def add_tflite_metadata(self, model_path): """为TFLite模型添加元数据""" # 创建标签文件 labels_path = os.path.join(os.path.dirname(model_path), "labels.txt") with open(labels_path, 'w') as f: for label in self.class_labels: f.write(f"{label}\n") # 创建元数据 metadata_path = os.path.join(os.path.dirname(model_path), "metadata.json") self.create_metadata(metadata_path) print(f"元数据已创建: {metadata_path}") print(f"标签文件已创建: {labels_path}") # 使用示例 if __name__ == "__main__": # 类别标签 CLASS_LABELS = ['book', 'cup', 'glasses', 'phone', 'shoe'] # 初始化部署器 deployer = ObjectRecognitionDeployer( model_path='optimized_model.keras', class_labels=CLASS_LABELS ) # 转换为带元数据的TFLite格式 tflite_path = 'model_quantized.tflite' # 使用真实数据目录进行量化校准 REPRESENTATIVE_DATA_DIR = 'path/to/representative_dataset' # 替换为实际路径 deployer.convert_to_tflite_with_metadata( tflite_path, quantize=True, representative_data_dir=REPRESENTATIVE_DATA_DIR ) # 重新加载带元数据的模型 tflite_deployer = ObjectRecognitionDeployer( model_path=tflite_path, class_labels=CLASS_LABELS ) # 测试预测 test_image = 'test_image.jpg' class_name, confidence = tflite_deployer.predict(test_image) print(f"预测结果: {class_name}, 置信度: {confidence:.2f}") # 性能测试 avg_time, mem_usage = tflite_deployer.benchmark(test_image) print(f"平均推理时间: {avg_time:.2f} ms") print(f"内存占用: {mem_usage:.2f} MB") # 创建元数据文件 metadata = deployer.create_metadata('model_metadata.json') print("模型元数据:", json.dumps(metadata, indent=4)) 上述代码我已经成功执行,并且我的ObjectRecognitionDeployer类路径导入代码是from 计算机视觉.test2 import ObjectRecognitionDeployer
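Since the script requests full-integer quantization (`inference_input_type = tf.uint8`), a quick sanity check that the conversion actually produced uint8 tensors can save debugging time later. A minimal sketch, assuming only the `model_quantized.tflite` path used above:

```
# Sanity check (sketch): confirm the converted model really expects uint8 input.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_quantized.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input:", inp['dtype'], "quantization:", inp['quantization'])
print("output:", out['dtype'], "quantization:", out['quantization'])
assert inp['dtype'] == np.uint8, "full-integer quantization did not take effect"
```

If the assertion fails, the converter silently fell back to float kernels (typically because the representative dataset generator yielded nothing), and `ObjectRecognitionDeployer.preprocess_image` will then normalize to float32 instead of passing raw uint8 pixels.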


``` import numpy as np import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout import matplotlib.pyplot as plt import cv2 tf.random.set_seed(42) # 定义路径 train_dir = r'C:\Users\29930\Desktop\结构参数图' validation_dir = r'C:\Users\29930\Desktop\测试集路径' # 需要实际路径 # 图像参数 img_width, img_height = 150, 150 batch_size = 32 # 数据生成器 train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_dir, # 修改为实际路径 target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, # 修改为实际路径 target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') # 模型构建 model = Sequential([ Conv2D(32, (3,3), activation='relu', input_shape=(img_width, img_height, 3)), MaxPooling2D(2,2), Conv2D(64, (3,3), activation='relu'), MaxPooling2D(2,2), Flatten(), Dense(128, activation='relu'), Dropout(0.5), Dense(1, activation='sigmoid') ]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # 训练模型 history = model.fit( train_generator, steps_per_epoch=len(train_generator), epochs=20, validation_data=validation_generator, validation_steps=len(validation_generator)) # Grad-CAM函数修正版 def generate_grad_cam(model, img_array, layer_name): # 创建梯度模型 grad_model = tf.keras.models.Model( inputs=model.input, outputs=[model.get_layer(layer_name).output, model.output] ) # 计算梯度 with tf.GradientTape() as tape: conv_outputs, predictions = grad_model(img_array) loss = predictions[:, 0] # 获取梯度 grads = tape.gradient(loss, conv_outputs) pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) # 生成热力图 conv_outputs = conv_outputs[0] heatmap = tf.reduce_sum(conv_outputs * pooled_grads, axis=-1) # 归一化处理 heatmap = np.maximum(heatmap, 0) heatmap /= np.max(heatmap) return heatmap # 可视化部分 X, y = next(validation_generator) sample_image = X[0] img_array = np.expand_dims(sample_image, axis=0) # 获取最后一个卷积层(根据模型结构调整索引) last_conv_layer = model.layers[2] # 第二个Conv2D层 heatmap = generate_grad_cam(model, img_array, last_conv_layer.name) # 调整热力图尺寸 heatmap = cv2.resize(heatmap, (img_width, img_height)) heatmap = np.uint8(255 * heatmap) # 颜色映射 jet = plt.colormaps.get_cmap('jet') jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] # 叠加显示 superimposed_img = jet_heatmap * 0.4 + sample_image superimposed_img = np.clip(superimposed_img, 0, 1) # 绘制结果 plt.figure(figsize=(12, 4)) plt.subplot(131) plt.imshow(sample_image) plt.title('原始图像') plt.axis('off') plt.subplot(132) plt.imshow(heatmap, cmap='jet') plt.title('热力图') plt.axis('off') plt.subplot(133) plt.imshow(superimposed_img) plt.title('叠加效果') plt.axis('off') plt.tight_layout() plt.show()```
File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 268, in input return self._get_node_attribute_at_index(0, "input_tensors", "input") File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 299, in _get_node_attribute_at_index raise AttributeError( AttributeError: The layer sequential has never been called and thus has no defined input... Did you mean: 'inputs'?
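This AttributeError is a Keras 3 behavior: a `Sequential` model that was only ever run through `fit()` has no symbolic `model.input`/`model.output` tensors to wire a second `Model` from. One common fix, sketched below, is to start the `Sequential` with an explicit `Input` layer at definition time (before training), so the graph exists up front. The layer name `'conv2d_1'` is a placeholder; check `model.summary()` for the real one.

```
# Sketch: an explicit Input gives the Sequential defined input/output
# tensors, so the gradient model can be constructed after training.
import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

img_width, img_height = 150, 150
model = Sequential([
    Input(shape=(img_width, img_height, 3)),   # explicit input tensor
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(1, activation='sigmoid'),
])

grad_model = tf.keras.models.Model(
    inputs=model.input,                        # defined now that Input exists
    outputs=[model.get_layer('conv2d_1').output, model.output],
)
```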


``` import numpy as np import tensorflow as tf from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout import matplotlib.pyplot as plt import cv2 tf.random.set_seed(42) # 定义路径 train_dir = r'C:\Users\29930\Desktop\结构参数图' validation_dir = r'C:\Users\29930\Desktop\测试集路径' # 需要实际路径 # 图像参数 img_width, img_height = 150, 150 batch_size = 32 # 数据生成器 train_datagen = ImageDataGenerator( rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1./255) train_generator = train_datagen.flow_from_directory( train_dir, # 修改为实际路径 target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( validation_dir, # 修改为实际路径 target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') # 模型构建 model = Sequential([ Conv2D(32, (3,3), activation='relu', input_shape=(img_width, img_height, 3)), MaxPooling2D(2,2), Conv2D(64, (3,3), activation='relu'), MaxPooling2D(2,2), Flatten(), Dense(128, activation='relu'), Dropout(0.5), Dense(1, activation='sigmoid') ]) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) # 训练模型 history = model.fit( train_generator, steps_per_epoch=len(train_generator), epochs=20, validation_data=validation_generator, validation_steps=len(validation_generator)) # Grad-CAM函数修正版 def generate_grad_cam(model, img_array, layer_name): # 创建梯度模型 grad_model = tf.keras.models.Model( inputs=[model.inputs], outputs=[model.get_layer(layer_name).output, model.output] ) # 计算梯度 with tf.GradientTape() as tape: conv_outputs, predictions = grad_model(img_array) loss = predictions[:, 0] # 获取梯度 grads = tape.gradient(loss, conv_outputs) pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) # 生成热力图 conv_outputs = conv_outputs[0] heatmap = tf.reduce_sum(conv_outputs * pooled_grads, axis=-1) # 归一化处理 heatmap = np.maximum(heatmap, 0) heatmap /= np.max(heatmap) return heatmap # 可视化部分 X, y = next(validation_generator) sample_image = X[0] img_array = np.expand_dims(sample_image, axis=0) # 获取最后一个卷积层(根据模型结构调整索引) last_conv_layer = model.layers[2] # 第二个Conv2D层 heatmap = generate_grad_cam(model, img_array, last_conv_layer.name) # 调整热力图尺寸 heatmap = cv2.resize(heatmap, (img_width, img_height)) heatmap = np.uint8(255 * heatmap) # 颜色映射 jet = plt.colormaps.get_cmap('jet') jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] # 叠加显示 superimposed_img = jet_heatmap * 0.4 + sample_image superimposed_img = np.clip(superimposed_img, 0, 1) # 绘制结果 plt.figure(figsize=(12, 4)) plt.subplot(131) plt.imshow(sample_image) plt.title('原始图像') plt.axis('off') plt.subplot(132) plt.imshow(heatmap, cmap='jet') plt.title('热力图') plt.axis('off') plt.subplot(133) plt.imshow(superimposed_img) plt.title('叠加效果') plt.axis('off') plt.tight_layout() plt.show()```File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 268, in input return self._get_node_attribute_at_index(0, "input_tensors", "input") File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 299, in _get_node_attribute_at_index raise AttributeError( AttributeError: The layer sequential has never been called and thus has no defined input.. Did you mean: 'inputs'?
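Here the wrapping is also off (`inputs=[model.inputs]` nests a list inside a list, since `model.inputs` is already a list), but the underlying failure is the same unbuilt-`Sequential` issue as above. An alternative workaround that avoids `Sequential` internals entirely is to re-apply the trained layers to a fresh symbolic input; the layers (and their trained weights) are shared, only the wiring is new. A sketch, assuming the 150x150x3 input used in this script:

```
# Sketch: rebuild the forward graph functionally on a new Input.
import tensorflow as tf

def build_grad_model(model, layer_name, input_shape=(150, 150, 3)):
    inputs = tf.keras.Input(shape=input_shape)
    x, conv_output = inputs, None
    for layer in model.layers:
        x = layer(x)
        if layer.name == layer_name:
            conv_output = x              # tensor right after the target conv layer
    return tf.keras.Model(inputs, [conv_output, x])
```

`generate_grad_cam` can then use `grad_model = build_grad_model(model, layer_name)` in place of the failing `tf.keras.models.Model(...)` call.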


``` # 图像尺寸可以根据实际情况调整 img_width, img_height = 150, 150 batch_size = 32 train_datagen = ImageDataGenerator(rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True) test_datagen = ImageDataGenerator(rescale=1. / 255) train_generator = train_datagen.flow_from_directory( train_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') validation_generator = test_datagen.flow_from_directory( test_dir, # 验证集路径 target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=(img_width, img_height, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3), activation='relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(128, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) history = model.fit( train_generator, steps_per_epoch=len(train_generator), epochs=20, validation_data=validation_generator, validation_steps=len(validation_generator)) # 定义生成Grad-CAM热图的函数 def generate_grad_cam(model, img_array, layer_name): grad_model = tf.keras.models.Model( [model.inputs], [model.get_layer(layer_name).output, model.output] ) with tf.GradientTape() as tape: conv_output, predictions = grad_model(img_array) loss = predictions[:, 0] # 二分类任务取唯一输出值 grads = tape.gradient(loss, conv_output) pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2)) conv_output = conv_output[0] heatmap = tf.reduce_sum(conv_output * pooled_grads, axis=-1) heatmap = np.maximum(heatmap, 0) heatmap /= np.max(heatmap) # 归一化 return heatmap X, y = next(validation_generator) sample_image = X[0] img_array = np.expand_dims(sample_image, axis=0) # 扩展为(1, 150, 150, 3) last_conv_layer = model.layers[2] last_conv_layer_name = last_conv_layer.name heatmap = generate_grad_cam(model, img_array, last_conv_layer_name) heatmap = cv2.resize(heatmap, (img_width, img_height)) heatmap = np.uint8(255 * heatmap) jet = plt.colormaps.get_cmap('jet') jet_colors = jet(np.arange(256))[:, :3] jet_heatmap = jet_colors[heatmap] superimposed_img = jet_heatmap * 0.4 + sample_image superimposed_img = np.clip(superimposed_img, 0, 1) plt.figure(figsize=(12, 4)) plt.subplot(131) plt.imshow(sample_image) plt.title('原始图像') plt.axis('off') plt.subplot(132) plt.imshow(heatmap, cmap='jet') plt.title('热力图') plt.axis('off') plt.subplot(133) plt.imshow(superimposed_img) plt.title('叠加效果') plt.axis('off') plt.tight_layout() plt.show()```Traceback (most recent call last): File "D:\建模\cnn3.py", line 108, in <module> heatmap = generate_grad_cam(model, img_array, last_conv_layer_name) File "D:\建模\cnn3.py", line 81, in generate_grad_cam [model.inputs], [model.get_layer(layer_name).output, model.output] File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 280, in output return self._get_node_attribute_at_index(0, "output_tensors", "output") File "C:\Users\29930\AppData\Local\Programs\Python\Python311\Lib\site-packages\keras\src\ops\operation.py", line 299, in _get_node_attribute_at_index raise AttributeError( AttributeError: The layer sequential has never been called and thus has no defined output.. Did you mean: 'outputs'?解决这个问题输出可用完整代码
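Since the same unbuilt-model problem keeps resurfacing, here is a compact version of `generate_grad_cam` that works once the model has defined tensors (for example, built with an explicit `Input` layer as sketched earlier); it also guards against the division by zero that `heatmap /= np.max(heatmap)` hits on an all-zero map. This is a sketch for the single-sigmoid-output setup in this script, not a drop-in for every architecture.

```
# Sketch: Grad-CAM for a binary classifier with one sigmoid output unit.
import numpy as np
import tensorflow as tf

def generate_grad_cam(model, img_array, layer_name):
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(layer_name).output, model.outputs[0]]
    )
    with tf.GradientTape() as tape:
        conv_outputs, predictions = grad_model(img_array)
        loss = predictions[:, 0]                     # single sigmoid unit
    grads = tape.gradient(loss, conv_outputs)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.reduce_sum(conv_outputs[0] * pooled_grads, axis=-1).numpy()
    heatmap = np.maximum(heatmap, 0)
    peak = heatmap.max()
    return heatmap / peak if peak > 0 else heatmap   # avoid division by zero
```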


import tkinter as tk from tkinter import ttk, filedialog, messagebox import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.font_manager import FontProperties from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg from sklearn.preprocessing import MinMaxScaler import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import LSTM, Dense from tensorflow.keras.optimizers import Adam from tensorflow.keras.callbacks import EarlyStopping import os plt.rcParams['font.sans-serif'] = ['SimHei'] # 使用黑体 plt.rcParams['axes.unicode_minus'] = False class DamSeepageModel: def __init__(self, root): self.root = root self.root.title("大坝渗流预测模型") self.root.geometry("1200x800") # 初始化数据 self.train_df = None self.test_df = None self.model = None self.scaler = MinMaxScaler(feature_range=(0, 1)) self.evaluation_metrics = {} # 存储评估指标结果 # 创建主界面 self.create_widgets() def create_widgets(self): # 创建主框架 main_frame = ttk.Frame(self.root, padding=10) main_frame.pack(fill=tk.BOTH, expand=True) # 左侧控制面板 control_frame = ttk.LabelFrame(main_frame, text="模型控制", padding=10) control_frame.pack(side=tk.LEFT, fill=tk.Y, padx=5, pady=5) # 文件选择部分 file_frame = ttk.LabelFrame(control_frame, text="数据文件", padding=10) file_frame.pack(fill=tk.X, pady=5) # 训练集选择 ttk.Label(file_frame, text="训练集:").grid(row=0, column=0, sticky=tk.W, pady=5) self.train_file_var = tk.StringVar() ttk.Entry(file_frame, textvariable=self.train_file_var, width=30, state='readonly').grid(row=0, column=1, padx=5) ttk.Button(file_frame, text="选择文件", command=lambda: self.select_file("train")).grid(row=0, column=2) # 测试集选择 ttk.Label(file_frame, text="测试集:").grid(row=1, column=0, sticky=tk.W, pady=5) self.test_file_var = tk.StringVar() ttk.Entry(file_frame, textvariable=self.test_file_var, width=30, state='readonly').grid(row=1, column=1, padx=5) ttk.Button(file_frame, text="选择文件", command=lambda: self.select_file("test")).grid(row=1, column=2) # 参数设置部分 param_frame = ttk.LabelFrame(control_frame, text="模型参数", padding=10) param_frame.pack(fill=tk.X, pady=10) # 时间窗口大小 ttk.Label(param_frame, text="时间窗口大小:").grid(row=0, column=0, sticky=tk.W, pady=5) self.window_size_var = tk.IntVar(value=60) ttk.Spinbox(param_frame, from_=10, to=200, increment=5, textvariable=self.window_size_var, width=10).grid(row=0, column=1, padx=5) # LSTM单元数量 ttk.Label(param_frame, text="LSTM单元数:").grid(row=1, column=0, sticky=tk.W, pady=5) self.lstm_units_var = tk.IntVar(value=50) ttk.Spinbox(param_frame, from_=10, to=200, increment=10, textvariable=self.lstm_units_var, width=10).grid(row=1, column=1, padx=5) # 训练轮次 ttk.Label(param_frame, text="训练轮次:").grid(row=2, column=0, sticky=tk.W, pady=5) self.epochs_var = tk.IntVar(value=100) ttk.Spinbox(param_frame, from_=10, to=500, increment=10, textvariable=self.epochs_var, width=10).grid(row=2, column=1, padx=5) # 批处理大小 ttk.Label(param_frame, text="批处理大小:").grid(row=3, column=0, sticky=tk.W, pady=5) self.batch_size_var = tk.IntVar(value=32) ttk.Spinbox(param_frame, from_=16, to=128, increment=16, textvariable=self.batch_size_var, width=10).grid(row=3, column=1, padx=5) # 控制按钮 btn_frame = ttk.Frame(control_frame) btn_frame.pack(fill=tk.X, pady=10) ttk.Button(btn_frame, text="训练模型", command=self.train_model).pack(side=tk.LEFT, padx=5) ttk.Button(btn_frame, text="预测结果", command=self.predict).pack(side=tk.LEFT, padx=5) ttk.Button(btn_frame, text="保存结果", command=self.save_results).pack(side=tk.LEFT, padx=5) ttk.Button(btn_frame, text="重置", 
command=self.reset).pack(side=tk.RIGHT, padx=5) # 状态栏 self.status_var = tk.StringVar(value="就绪") status_bar = ttk.Label(control_frame, textvariable=self.status_var, relief=tk.SUNKEN, anchor=tk.W) status_bar.pack(fill=tk.X, side=tk.BOTTOM) # 右侧结果显示区域 result_frame = ttk.Frame(main_frame) result_frame.pack(side=tk.RIGHT, fill=tk.BOTH, expand=True, padx=5, pady=5) # 创建标签页 self.notebook = ttk.Notebook(result_frame) self.notebook.pack(fill=tk.BOTH, expand=True) # 损失曲线标签页 self.loss_frame = ttk.Frame(self.notebook) self.notebook.add(self.loss_frame, text="训练损失") # 预测结果标签页 self.prediction_frame = ttk.Frame(self.notebook) self.notebook.add(self.prediction_frame, text="预测结果") # 添加指标文本框 self.metrics_var = tk.StringVar() metrics_label = ttk.Label( self.prediction_frame, textvariable=self.metrics_var, font=('TkDefaultFont', 10, 'bold'), relief='ridge', padding=5 ) metrics_label.pack(fill=tk.X, padx=5, pady=5) # 初始化绘图区域 self.fig, self.ax = plt.subplots(figsize=(10, 6)) self.canvas = FigureCanvasTkAgg(self.fig, master=self.prediction_frame) self.canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True) self.loss_fig, self.loss_ax = plt.subplots(figsize=(10, 4)) self.loss_canvas = FigureCanvasTkAgg(self.loss_fig, master=self.loss_frame) self.loss_canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True) # 文件选择 def select_file(self, file_type): """选择Excel文件""" file_path = filedialog.askopenfilename( title=f"选择{file_type}集Excel文件", filetypes=[("Excel文件", "*.xlsx *.xls"), ("所有文件", "*.*")] ) if file_path: try: # 读取Excel文件 df = pd.read_excel(file_path) # 时间特征列 time_features = ['year', 'month', 'day'] missing_time_features = [feat for feat in time_features if feat not in df.columns] if '水位' not in df.columns: messagebox.showerror("列名错误", "Excel文件必须包含'水位'列") return if missing_time_features: messagebox.showerror("列名错误", f"Excel文件缺少预处理后的时间特征列: {', '.join(missing_time_features)}\n" "请确保已使用预处理功能添加这些列") return # 创建完整的时间戳列 # 处理可能缺失的小时、分钟、秒数据 if 'hour' in df.columns and 'minute' in df.columns and 'second' in df.columns: df['datetime'] = pd.to_datetime( df[['year', 'month', 'day', 'hour', 'minute', 'second']] ) elif 'hour' in df.columns and 'minute' in df.columns: df['datetime'] = pd.to_datetime( df[['year', 'month', 'day', 'hour', 'minute']].assign(second=0) ) else: df['datetime'] = pd.to_datetime(df[['year', 'month', 'day']]) # 设置时间索引 df = df.set_index('datetime') # 保存数据 if file_type == "train": self.train_df = df self.train_file_var.set(os.path.basename(file_path)) self.status_var.set(f"已加载训练集: {len(self.train_df)}条数据") else: self.test_df = df self.test_file_var.set(os.path.basename(file_path)) self.status_var.set(f"已加载测试集: {len(self.test_df)}条数据") except Exception as e: messagebox.showerror("文件错误", f"读取文件失败: {str(e)}") # 添加评估指标计算函数 def calculate_metrics(self, y_true, y_pred): """计算各种评估指标""" from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score mse = mean_squared_error(y_true, y_pred) rmse = np.sqrt(mse) mae = mean_absolute_error(y_true, y_pred) # 避免除以零错误 non_zero_idx = np.where(y_true != 0)[0] if len(non_zero_idx) > 0: mape = np.mean(np.abs((y_true[non_zero_idx] - y_pred[non_zero_idx]) / y_true[non_zero_idx])) * 100 else: mape = float('nan') r2 = r2_score(y_true, y_pred) return { 'MSE': mse, 'RMSE': rmse, 'MAE': mae, 'MAPE': mape, 'R2': r2 } def create_dataset(self, data, window_size): """创建时间窗口数据集""" X, y = [], [] for i in range(len(data) - window_size): X.append(data[i:(i + window_size), 0]) y.append(data[i + window_size, 0]) return np.array(X), np.array(y) def create_dynamic_plot_callback(self): 
"""创建动态绘图回调实例,用于实时显示训练损失曲线""" class DynamicPlotCallback(tf.keras.callbacks.Callback): def __init__(self, gui_app): self.gui_app = gui_app # 引用主GUI实例 self.train_loss = [] # 存储训练损失 self.val_loss = [] # 存储验证损失 def on_epoch_end(self, epoch, logs=None): """每个epoch结束时更新图表""" logs = logs or {} # 收集损失数据 self.train_loss.append(logs.get('loss')) self.val_loss.append(logs.get('val_loss')) # 更新GUI中的图表(在主线程中执行) self.gui_app.root.after(0, self._update_plot) def _update_plot(self): """实际更新图表的函数""" try: # 清除现有图表 self.gui_app.loss_ax.clear() # 绘制训练和验证损失曲线 epochs = range(1, len(self.train_loss) + 1) self.gui_app.loss_ax.plot(epochs, self.train_loss, 'b-', label='训练损失') self.gui_app.loss_ax.plot(epochs, self.val_loss, 'r-', label='验证损失') # 设置图表属性 self.gui_app.loss_ax.set_title('模型训练损失') self.gui_app.loss_ax.set_xlabel('轮次') self.gui_app.loss_ax.set_ylabel('损失', rotation=0) self.gui_app.loss_ax.legend(loc='upper right') self.gui_app.loss_ax.grid(True, alpha=0.3) # 自动调整Y轴范围 all_losses = self.train_loss + self.val_loss min_loss = max(0, min(all_losses) * 0.9) max_loss = max(all_losses) * 1.1 self.gui_app.loss_ax.set_ylim(min_loss, max_loss) # 刷新画布 self.gui_app.loss_canvas.draw() # 更新状态栏显示最新损失 current_epoch = len(self.train_loss) if current_epoch > 0: latest_train_loss = self.train_loss[-1] latest_val_loss = self.val_loss[-1] if self.val_loss else 0 self.gui_app.status_var.set( f"训练中 | 轮次: {current_epoch} | " f"训练损失: {latest_train_loss:.6f} | " f"验证损失: {latest_val_loss:.6f}" ) self.gui_app.root.update() except Exception as e: print(f"更新图表时出错: {str(e)}") # 返回回调实例 return DynamicPlotCallback(self) def train_model(self): """训练LSTM模型""" if self.train_df is None: messagebox.showwarning("警告", "请先选择训练集文件") return try: self.status_var.set("正在预处理数据...") self.root.update() # 数据预处理 train_scaled = self.scaler.fit_transform(self.train_df[['水位']]) # 创建时间窗口数据集 window_size = self.window_size_var.get() X_train, y_train = self.create_dataset(train_scaled, window_size) # 调整LSTM输入格式 X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1)) # 构建LSTM模型 self.model = Sequential() self.model.add(LSTM( self.lstm_units_var.get(), return_sequences=True, input_shape=(window_size, 1) )) self.model.add(LSTM(self.lstm_units_var.get())) self.model.add(Dense(1)) self.model.compile( optimizer=Adam(learning_rate=0.001), loss='mean_squared_error' ) # 创建验证集(在训练之前) val_size = int(0.2 * len(X_train)) X_val, y_val = X_train[:val_size], y_train[:val_size] X_train, y_train = X_train[val_size:], y_train[val_size:] # 定义评估回调类 class MetricsCallback(tf.keras.callbacks.Callback): def __init__(self, X_val, y_val, scaler, gui_app): # 添加gui_app参数 super().__init__() self.X_val = X_val self.y_val = y_val self.scaler = scaler self.gui_app = gui_app # 直接存储引用 self.best_r2 = -float('inf') self.best_weights = None def on_epoch_end(self, epoch, logs=None): # 预测验证集(添加verbose=0避免输出) val_pred = self.model.predict(self.X_val, verbose=0) # 反归一化 val_pred_orig = self.scaler.inverse_transform(val_pred) y_val_orig = self.scaler.inverse_transform(self.y_val.reshape(-1, 1)) # 计算指标(使用self.gui_app) metrics = self.gui_app.calculate_metrics(y_val_orig, val_pred_orig) # 更新日志 logs = logs or {} logs.update({f'val_{k}': v for k, v in metrics.items()}) # 保存最佳权重(基于R²) if metrics['R2'] > self.best_r2: self.best_r2 = metrics['R2'] self.best_weights = self.model.get_weights() # 更新状态栏(使用self.gui_app) status = (f"训练中 | 轮次: {epoch + 1} | " f"损失: {logs.get('loss', 0):.6f} | " f"验证R²: {metrics['R2']:.4f}") self.gui_app.status_var.set(status) self.gui_app.root.update() # 添加回调(传递所有四个参数) 
metrics_callback = MetricsCallback(X_val, y_val, self.scaler, self) # 添加self参数 # 添加早停机制 early_stopping = EarlyStopping( monitor='val_loss', # 监控验证集损失 patience=self.epochs_var.get()/3, # 连续20轮无改善则停止 min_delta=0.0001, # 最小改善阈值 restore_best_weights=True, # 恢复最佳权重 verbose=1 # 显示早停信息 ) # 在model.fit中添加回调 history = self.model.fit( X_train, y_train, epochs=self.epochs_var.get(), batch_size=self.batch_size_var.get(), validation_data=(X_val, y_val), callbacks=[early_stopping, metrics_callback], # 添加新回调 verbose=0 ) # 训练结束后恢复最佳权重 if metrics_callback.best_weights is not None: self.model.set_weights(metrics_callback.best_weights) # 绘制损失曲线 self.loss_ax.clear() self.loss_ax.plot(history.history['loss'], label='训练损失') self.loss_ax.plot(history.history['val_loss'], label='验证损失') self.loss_ax.set_title('模型训练损失') self.loss_ax.set_xlabel('轮次') self.loss_ax.set_ylabel('损失',rotation=0) self.loss_ax.legend() self.loss_ax.grid(True) self.loss_canvas.draw() # 根据早停情况更新状态信息 if early_stopping.stopped_epoch > 0: stopped_epoch = early_stopping.stopped_epoch best_epoch = early_stopping.best_epoch final_loss = history.history['loss'][-1] best_loss = min(history.history['val_loss']) self.status_var.set( f"训练在{stopped_epoch + 1}轮提前终止 | " f"最佳模型在第{best_epoch + 1}轮 | " f"最终损失: {final_loss:.6f} | " f"最佳验证损失: {best_loss:.6f}" ) messagebox.showinfo( "训练完成", f"模型训练提前终止!\n" f"最佳模型在第{best_epoch + 1}轮\n" f"最佳验证损失: {best_loss:.6f}" ) else: final_loss = history.history['loss'][-1] self.status_var.set(f"模型训练完成 | 最终损失: {final_loss:.6f}") messagebox.showinfo("训练完成", "模型训练成功完成!") except Exception as e: messagebox.showerror("训练错误", f"模型训练失败:\n{str(e)}") self.status_var.set("训练失败") def predict(self): """使用模型进行预测""" if self.model is None: messagebox.showwarning("警告", "请先训练模型") return if self.test_df is None: messagebox.showwarning("警告", "请先选择测试集文件") return try: self.status_var.set("正在生成预测...") self.root.update() # 预处理测试数据 test_scaled = self.scaler.transform(self.test_df[['水位']]) # 创建测试集时间窗口 window_size = self.window_size_var.get() X_test, y_test = self.create_dataset(test_scaled, window_size) X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[1], 1)) # 进行预测 test_predict = self.model.predict(X_test) # 反归一化 test_predict = self.scaler.inverse_transform(test_predict) y_test_orig = self.scaler.inverse_transform(y_test.reshape(-1, 1)) # 创建时间索引 test_time = self.test_df.index[window_size:window_size + len(test_predict)] # 绘制图表 self.fig, self.ax = plt.subplots(figsize=(12, 6)) # 使用时间索引作为x轴 self.ax.plot(test_time, y_test_orig, label='真实值') self.ax.plot(test_time, test_predict, label='预测值', linestyle='--') self.ax.set_title('大坝渗流水位预测结果') self.ax.set_xlabel('时间') self.ax.set_ylabel('测压管水位') self.ax.legend() self.ax.grid(True) self.ax.tick_params(axis='x', rotation=45) # 计算并添加评估指标文本 self.evaluation_metrics = self.calculate_metrics( y_test_orig.flatten(), test_predict.flatten() ) metrics_text = ( f"MSE: {self.evaluation_metrics['MSE']:.4f} | " f"RMSE: {self.evaluation_metrics['RMSE']:.4f} | " f"MAE: {self.evaluation_metrics['MAE']:.4f} | " f"MAPE: {self.evaluation_metrics['MAPE']:.2f}% | " f"R²: {self.evaluation_metrics['R2']:.4f}" ) self.ax.text( 0.5, 1.05, metrics_text, transform=self.ax.transAxes, ha='center', fontsize=10, bbox=dict(facecolor='white', alpha=0.8) ) # 添加分隔线(移至绘图设置之后) # 注意:这里使用数值索引而不是时间对象 split_point = 0 # 测试集开始位置 self.ax.axvline(x=split_point, color='k', linestyle='--', alpha=0.5) self.ax.text( split_point, np.min(y_test_orig) * 0.9, ' 训练/测试分界', rotation=90, verticalalignment='bottom' ) # 调整布局并显示图表 plt.tight_layout() if 
hasattr(self, 'canvas'): self.canvas.draw() else: plt.show() self.status_var.set("预测完成,结果已显示") except Exception as e: messagebox.showerror("预测错误", f"预测失败:\n{str(e)}") self.status_var.set("预测失败") def save_results(self): """保存预测结果""" if not hasattr(self, 'test_predict') or self.test_predict is None: messagebox.showwarning("警告", "请先生成预测结果") return save_path = filedialog.asksaveasfilename( defaultextension=".xlsx", filetypes=[("Excel文件", "*.xlsx"), ("所有文件", "*.*")] ) if not save_path: return try: # 创建包含预测结果和评估指标的DataFrame window_size = self.window_size_var.get() test_time = self.test_df.index[window_size:window_size + len(self.test_predict)] metrics_df = pd.DataFrame([self.evaluation_metrics]) result_df = pd.DataFrame({ '时间': test_time, '实际水位': self.test_df['水位'][window_size:window_size + len(self.test_predict)].values, '预测水位': self.test_predict.flatten() }) # 保存到Excel的不同sheet with pd.ExcelWriter(save_path) as writer: result_df.to_excel(writer, sheet_name='预测结果', index=False) metrics_df.to_excel(writer, sheet_name='评估指标', index=False) # 保存图表 chart_path = os.path.splitext(save_path)[0] + "_chart.png" self.fig.savefig(chart_path, dpi=300) self.status_var.set(f"结果已保存至: {os.path.basename(save_path)}") messagebox.showinfo("保存成功", f"预测结果和图表已保存至:\n{save_path}\n{chart_path}") except Exception as e: messagebox.showerror("保存错误", f"保存结果失败:\n{str(e)}") def reset(self): """重置程序状态""" self.train_df = None self.test_df = None self.model = None self.train_file_var.set("") self.test_file_var.set("") self.ax.clear() self.loss_ax.clear() self.canvas.draw() self.loss_canvas.draw() self.data_text.delete(1.0, tk.END) self.status_var.set("已重置,请选择新数据") messagebox.showinfo("重置", "程序已重置,可以开始新的分析") if __name__ == "__main__": root = tk.Tk() app = DamSeepageModel(root) root.mainloop() 整个代码逐行检查一下
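Going through the dump, three concrete problems stand out. First, `predict()` keeps the prediction in a local `test_predict`, while `save_results()` checks `hasattr(self, 'test_predict')`, so saving always reports "请先生成预测结果". Second, `predict()` calls `plt.subplots()` again, rebinding `self.fig`/`self.ax` to a brand-new figure that the `FigureCanvasTkAgg` created at startup never displays, so no chart appears inside the GUI. Third, `reset()` calls `self.data_text.delete(1.0, tk.END)` although no `data_text` widget is ever created, raising an AttributeError on reset. A sketch of the minimal patches (lines shown in isolation; surrounding code unchanged):

```
# Inside predict(): keep the prediction on self so save_results() can see it,
# and draw on the axes already bound to the embedded canvas -- do not call
# plt.subplots() again, since that figure is never attached to the GUI.
self.test_predict = test_predict
self.ax.clear()
self.ax.plot(test_time, y_test_orig, label='真实值')
self.ax.plot(test_time, test_predict, label='预测值', linestyle='--')
self.ax.legend()
self.ax.grid(True)
self.canvas.draw()                        # render into the Tk canvas

# Inside reset(): delete the line below -- no data_text widget exists.
# self.data_text.delete(1.0, tk.END)
```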


The code of dorm_face_recognition_gui.py is as follows: import pickle import sys import os import cv2 import numpy as np import torch from PyQt5.QtWidgets import QListWidget, QProgressDialog from facenet_pytorch import MTCNN, InceptionResnetV1 from PIL import Image from PyQt5.QtWidgets import (QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout, QPushButton, QLabel, QFileDialog, QComboBox, QSlider, QMessageBox, QTextEdit, QGroupBox, QScrollArea, QDialog, QDialogButtonBox, QTableWidget, QTableWidgetItem, QHeaderView, QGridLayout) from PyQt5.QtCore import Qt, QTimer from PyQt5.QtGui import QImage, QPixmap, QIcon, QFont, QColor import joblib import logging import json from datetime import datetime # 在 dorm_face_recognition_gui.py 顶部添加导入 from face_recognition import FaceRecognition # 配置日志 logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') logger = logging.getLogger(__name__) class FeedbackDialog(QDialog): """反馈对话框""" def __init__(self, parent=None, last_results=None, dorm_members=None): super().__init__(parent) self.setWindowTitle("识别错误反馈") self.setFixedSize(500, 400) self.last_results = last_results or [] self.dorm_members = dorm_members or [] self.init_ui() def init_ui(self): layout = QVBoxLayout(self) # 添加当前识别结果 result_label = QLabel("当前识别结果:") layout.addWidget(result_label) # 使用表格显示结果 self.results_table = QTableWidget() self.results_table.setColumnCount(4) self.results_table.setHorizontalHeaderLabels(["ID", "识别结果", "置信度", "位置和大小"]) self.results_table.setSelectionBehavior(QTableWidget.SelectRows) self.results_table.setEditTriggers(QTableWidget.NoEditTriggers) # 填充表格数据 self.results_table.setRowCount(len(self.last_results)) for i, result in enumerate(self.last_results): x, y, w, h = result["box"] self.results_table.setItem(i, 0, QTableWidgetItem(str(i + 1))) self.results_table.setItem(i, 1, QTableWidgetItem(result["label"])) self.results_table.setItem(i, 2, QTableWidgetItem(f"{result['confidence']:.2f}")) self.results_table.setItem(i, 3, QTableWidgetItem(f"({x}, {y}) - {w}x{h}")) # 设置表格样式 self.results_table.horizontalHeader().setSectionResizeMode(QHeaderView.Stretch) self.results_table.verticalHeader().setVisible(False) layout.addWidget(self.results_table) # 添加正确身份选择 correct_layout = QGridLayout() correct_label = QLabel("正确身份:") correct_layout.addWidget(correct_label, 0, 0) self.correct_combo = QComboBox() self.correct_combo.addItem("选择正确身份", None) for member in self.dorm_members: self.correct_combo.addItem(member, member) self.correct_combo.addItem("陌生人", "stranger") self.correct_combo.addItem("不在列表中", "unknown") correct_layout.addWidget(self.correct_combo, 0, 1) # 添加备注 note_label = QLabel("备注:") correct_layout.addWidget(note_label, 1, 0) self.note_text = QTextEdit() self.note_text.setPlaceholderText("可添加额外说明...") self.note_text.setMaximumHeight(60) correct_layout.addWidget(self.note_text, 1, 1) layout.addLayout(correct_layout) # 添加按钮 button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel) button_box.accepted.connect(self.accept) button_box.rejected.connect(self.reject) layout.addWidget(button_box) def get_selected_result(self): """获取选择的识别结果""" selected_row = self.results_table.currentRow() if selected_row >= 0 and selected_row < len(self.last_results): return self.last_results[selected_row] return None def get_feedback_data(self): """获取反馈数据""" selected_result = self.get_selected_result() if not selected_result: return None return { "timestamp": datetime.now().isoformat(), "original_label": selected_result["label"], "correct_label": 
self.correct_combo.currentData(), "confidence": selected_result["confidence"], "box": selected_result["box"], # 保存完整的框信息 "note": self.note_text.toPlainText().strip() } class FaceRecognitionSystem(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("寝室人脸识别系统") self.setGeometry(100, 100, 1200, 800) # 初始化变量 self.model_loaded = False self.camera_active = False self.video_capture = None self.timer = QTimer() self.current_image = None self.last_results = [] # 存储上次识别结果 self.dorm_members = [] # 寝室成员列表 # 创建主界面 self.main_widget = QWidget() self.setCentralWidget(self.main_widget) self.layout = QHBoxLayout(self.main_widget) # 左侧控制面板 - 占40%宽度 self.control_panel = QWidget() self.control_layout = QVBoxLayout(self.control_panel) self.control_layout.setAlignment(Qt.AlignTop) self.control_panel.setMaximumWidth(400) self.layout.addWidget(self.control_panel, 40) # 40%宽度 # 右侧图像显示区域 - 占60%宽度 self.image_panel = QWidget() self.image_layout = QVBoxLayout(self.image_panel) self.image_label = QLabel() self.image_label.setAlignment(Qt.AlignCenter) self.image_label.setMinimumSize(800, 600) self.image_label.setStyleSheet("background-color: #333; border: 1px solid #555;") self.image_layout.addWidget(self.image_label) self.layout.addWidget(self.image_panel, 60) # 60%宽度 # 状态栏 self.status_bar = self.statusBar() self.status_bar.showMessage("系统初始化中...") # 初始化人脸识别器 - 关键修复 self.face_recognition = FaceRecognition() # 初始化UI组件 self.init_ui() # 添加工具栏(必须在UI初始化后) self.toolbar = self.addToolBar('工具栏') # 添加反馈按钮 self.add_feedback_button() # 初始化模型 self.init_models() def init_ui(self): """初始化用户界面组件""" # 标题 title_label = QLabel("寝室人脸识别系统") title_label.setFont(QFont("Arial", 18, QFont.Bold)) title_label.setAlignment(Qt.AlignCenter) title_label.setStyleSheet("color: #2c3e50; padding: 10px;") self.control_layout.addWidget(title_label) # 模型加载 model_group = QGroupBox("模型设置") model_layout = QVBoxLayout(model_group) self.load_model_btn = QPushButton("加载模型") self.load_model_btn.setIcon(QIcon.fromTheme("document-open")) self.load_model_btn.setStyleSheet("background-color: #3498db;") self.load_model_btn.clicked.connect(self.load_model) model_layout.addWidget(self.load_model_btn) self.model_status = QLabel("模型状态: 未加载") model_layout.addWidget(self.model_status) self.control_layout.addWidget(model_group) # 在模型设置部分添加重新训练按钮 self.retrain_btn = QPushButton("重新训练模型") self.retrain_btn.setIcon(QIcon.fromTheme("view-refresh")) self.retrain_btn.setStyleSheet("background-color: #f39c12;") self.retrain_btn.clicked.connect(self.retrain_model) self.retrain_btn.setEnabled(False) # 初始不可用 model_layout.addWidget(self.retrain_btn) # 识别设置 settings_group = QGroupBox("识别设置") settings_layout = QVBoxLayout(settings_group) # 置信度阈值 threshold_layout = QHBoxLayout() threshold_label = QLabel("置信度阈值:") threshold_layout.addWidget(threshold_label) self.threshold_slider = QSlider(Qt.Horizontal) self.threshold_slider.setRange(0, 100) self.threshold_slider.setValue(70) self.threshold_slider.valueChanged.connect(self.update_threshold) threshold_layout.addWidget(self.threshold_slider) self.threshold_value = QLabel("0.70") threshold_layout.addWidget(self.threshold_value) settings_layout.addLayout(threshold_layout) # 显示选项 display_layout = QHBoxLayout() display_label = QLabel("显示模式:") display_layout.addWidget(display_label) self.display_combo = QComboBox() self.display_combo.addItems(["原始图像", "检测框", "识别结果"]) self.display_combo.setCurrentIndex(2) display_layout.addWidget(self.display_combo) settings_layout.addLayout(display_layout) self.control_layout.addWidget(settings_group)
# 识别功能 recognition_group = QGroupBox("识别功能") recognition_layout = QVBoxLayout(recognition_group) # 图片识别 self.image_recognition_btn = QPushButton("图片识别") self.image_recognition_btn.setIcon(QIcon.fromTheme("image-x-generic")) self.image_recognition_btn.setStyleSheet("background-color: #9b59b6;") self.image_recognition_btn.clicked.connect(self.open_image) self.image_recognition_btn.setEnabled(False) recognition_layout.addWidget(self.image_recognition_btn) # 摄像头识别 self.camera_recognition_btn = QPushButton("启动摄像头识别") self.camera_recognition_btn.setIcon(QIcon.fromTheme("camera-web")) self.camera_recognition_btn.setStyleSheet("background-color: #e74c3c;") self.camera_recognition_btn.clicked.connect(self.toggle_camera) self.camera_recognition_btn.setEnabled(False) recognition_layout.addWidget(self.camera_recognition_btn) self.control_layout.addWidget(recognition_group) # 结果展示区域 - 使用QTextEdit替代QLabel results_group = QGroupBox("识别结果") results_layout = QVBoxLayout(results_group) self.results_text = QTextEdit() self.results_text.setReadOnly(True) self.results_text.setFont(QFont("Microsoft YaHei", 12)) # 使用支持中文的字体 self.results_text.setStyleSheet("background-color: #f8f9fa; border: 1px solid #ddd; padding: 10px;") self.results_text.setPlaceholderText("识别结果将显示在这里") # 添加滚动区域 scroll_area = QScrollArea() scroll_area.setWidgetResizable(True) scroll_area.setWidget(self.results_text) results_layout.addWidget(scroll_area) self.control_layout.addWidget(results_group, 1) # 占据剩余空间 # 系统信息 info_group = QGroupBox("系统信息") info_layout = QVBoxLayout(info_group) self.device_label = QLabel(f"计算设备: {'GPU' if torch.cuda.is_available() else 'CPU'}") info_layout.addWidget(self.device_label) self.model_info = QLabel("加载模型以显示信息") info_layout.addWidget(self.model_info) self.control_layout.addWidget(info_group) # 退出按钮 exit_btn = QPushButton("退出系统") exit_btn.setIcon(QIcon.fromTheme("application-exit")) exit_btn.clicked.connect(self.close) exit_btn.setStyleSheet("background-color: #ff6b6b; color: white;") self.control_layout.addWidget(exit_btn) def add_feedback_button(self): """添加反馈按钮到界面""" # 创建反馈按钮 self.feedback_button = QPushButton("提供反馈", self) self.feedback_button.setFixedSize(120, 40) # 设置固定大小 self.feedback_button.setStyleSheet( "QPushButton {" " background-color: #4CAF50;" " color: white;" " border-radius: 5px;" " font-weight: bold;" "}" "QPushButton:hover {" " background-color: #45a049;" "}" ) # 连接按钮点击事件 self.feedback_button.clicked.connect(self.open_feedback_dialog) # 添加到工具栏 self.toolbar.addWidget(self.feedback_button) def open_feedback_dialog(self): """打开反馈对话框""" if not self.last_results: QMessageBox.warning(self, "无法反馈", "没有可反馈的识别结果") return dialog = FeedbackDialog( self, last_results=self.last_results, dorm_members=self.dorm_members ) if dialog.exec_() == QDialog.Accepted: feedback_data = dialog.get_feedback_data() if feedback_data: # 修复:调用 FaceRecognition 实例的 save_feedback 方法 selected_result = dialog.get_selected_result() if selected_result: # 获取检测框 detected_box = [ selected_result["box"][0], selected_result["box"][1], selected_result["box"][0] + selected_result["box"][2], selected_result["box"][1] + selected_result["box"][3] ] # 调用保存反馈方法 self.face_recognition.save_feedback( self.current_image, detected_box, feedback_data["original_label"], feedback_data["correct_label"] ) QMessageBox.information(self, "反馈提交", "感谢您的反馈!数据已保存用于改进模型") else: QMessageBox.warning(self, "反馈错误", "未选择要反馈的人脸结果") def init_models(self): """初始化模型组件""" # 设置设备 self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') 
self.device_label.setText(f"计算设备: {'GPU' if torch.cuda.is_available() else 'CPU'}") # 初始化人脸检测器 try: self.detector = MTCNN( keep_all=True, post_process=False, device=self.device ) self.status_bar.showMessage("MTCNN 检测器初始化完成") logger.info("MTCNN 检测器初始化完成") except Exception as e: self.status_bar.showMessage(f"MTCNN 初始化失败: {str(e)}") logger.error(f"MTCNN 初始化失败: {str(e)}") return # 初始化人脸特征提取器 try: self.embedder = InceptionResnetV1( pretrained='vggface2', classify=False, device=self.device ).eval() self.status_bar.showMessage("FaceNet 特征提取器初始化完成") logger.info("FaceNet 特征提取器初始化完成") except Exception as e: self.status_bar.showMessage(f"FaceNet 初始化失败: {str(e)}") logger.error(f"FaceNet 初始化失败: {str(e)}") def load_model(self): """加载预训练的SVM分类器""" options = QFileDialog.Options() file_path, _ = QFileDialog.getOpenFileName( self, "选择模型文件", "", "模型文件 (*.pkl);;所有文件 (*)", options=options ) if file_path: try: # 加载模型 model_data = joblib.load(file_path) self.classifier = model_data['classifier'] self.label_encoder = model_data['label_encoder'] self.dorm_members = model_data['dorm_members'] # 启用重新训练按钮 self.retrain_btn.setEnabled(True) # 更新UI状态 self.model_loaded = True self.model_status.setText("模型状态: 已加载") self.model_info.setText(f"寝室成员: {', '.join(self.dorm_members)}") self.image_recognition_btn.setEnabled(True) self.camera_recognition_btn.setEnabled(True) # 状态栏消息 self.status_bar.showMessage(f"模型加载成功: {os.path.basename(file_path)}") # 显示成功消息 QMessageBox.information( self, "模型加载", f"模型加载成功!\n识别成员: {len(self.dorm_members)}人\n置信度阈值: {self.threshold_slider.value() / 100:.2f}" ) except Exception as e: QMessageBox.critical(self, "加载错误", f"模型加载失败: {str(e)}") self.status_bar.showMessage(f"模型加载失败: {str(e)}") def update_threshold(self, value): """更新置信度阈值""" threshold = value / 100 self.threshold_value.setText(f"{threshold:.2f}") self.status_bar.showMessage(f"置信度阈值更新为: {threshold:.2f}") def open_image(self): """打开图片文件进行识别""" if not self.model_loaded: QMessageBox.warning(self, "警告", "请先加载模型!") return options = QFileDialog.Options() file_path, _ = QFileDialog.getOpenFileName( self, "选择识别图片", "", "图片文件 (*.jpg *.jpeg *.png);;所有文件 (*)", options=options ) if file_path: # 读取图片 image = cv2.imread(file_path) if image is None: QMessageBox.critical(self, "错误", "无法读取图片文件!") return # 保存当前图片 self.current_image = image.copy() # 进行识别 self.recognize_faces(image) def toggle_camera(self): """切换摄像头状态""" if not self.model_loaded: QMessageBox.warning(self, "警告", "请先加载模型!") return if not self.camera_active: # 尝试打开摄像头 self.video_capture = cv2.VideoCapture(0) if not self.video_capture.isOpened(): QMessageBox.critical(self, "错误", "无法打开摄像头!") return # 启动摄像头 self.camera_active = True self.camera_recognition_btn.setText("停止摄像头识别") self.camera_recognition_btn.setIcon(QIcon.fromTheme("media-playback-stop")) self.timer.timeout.connect(self.process_camera_frame) self.timer.start(30) # 约33 FPS self.status_bar.showMessage("摄像头已启动") else: # 停止摄像头 self.camera_active = False self.camera_recognition_btn.setText("启动摄像头识别") self.camera_recognition_btn.setIcon(QIcon.fromTheme("camera-web")) self.timer.stop() if self.video_capture: self.video_capture.release() self.status_bar.showMessage("摄像头已停止") def process_camera_frame(self): """处理摄像头帧""" ret, frame = self.video_capture.read() if ret: # 保存当前帧 self.current_image = frame.copy() # 进行识别 self.recognize_faces(frame) def retrain_model(self): """使用反馈数据重新训练模型""" # 获取所有反馈数据 feedback_dir = os.path.join(os.getcwd(), "data", "feedback_data") # 修复1:支持多种文件扩展名 feedback_files = [] for f in os.listdir(feedback_dir): filepath = 
os.path.join(feedback_dir, f) if os.path.isfile(filepath) and (f.endswith('.pkl') or f.endswith('.json')): feedback_files.append(f) # 修复2:添加目录存在性检查 if not os.path.exists(feedback_dir): QMessageBox.warning(self, "目录不存在", f"反馈数据目录不存在: {feedback_dir}") return if not feedback_files: QMessageBox.information(self, "无反馈数据", "没有找到反馈数据,无法重新训练") return # 确认对话框 reply = QMessageBox.question( self, '确认重新训练', f"将使用 {len(feedback_files)} 条反馈数据重新训练模型。此操作可能需要几分钟时间,确定继续吗?", QMessageBox.Yes | QMessageBox.No, QMessageBox.No ) if reply != QMessageBox.Yes: return try: # 创建进度对话框 progress = QProgressDialog("正在重新训练模型...", "取消", 0, len(feedback_files), self) progress.setWindowTitle("模型重新训练") progress.setWindowModality(Qt.WindowModal) progress.setMinimumDuration(0) progress.setValue(0) # 收集所有反馈数据 feedback_data = [] for i, filename in enumerate(feedback_files): filepath = os.path.join(feedback_dir, filename) # 修复3:根据文件扩展名使用不同的加载方式 if filename.endswith('.pkl'): with open(filepath, 'rb') as f: # 二进制模式读取 data = pickle.load(f) elif filename.endswith('.json'): with open(filepath, 'r', encoding='utf-8') as f: data = json.load(f) else: continue # 跳过不支持的文件类型 feedback_data.append(data) progress.setValue(i + 1) QApplication.processEvents() # 保持UI响应 if progress.wasCanceled(): return progress.setValue(len(feedback_files)) # 重新训练模型 self.status_bar.showMessage("正在重新训练模型...") # 修复4:添加详细的日志记录 logger.info(f"开始重新训练,使用 {len(feedback_data)} 条反馈数据") # 调用重新训练方法 success = self.face_recognition.retrain_with_feedback(feedback_data) if success: # 更新UI状态 self.model_status.setText("模型状态: 已重新训练") self.dorm_members = self.face_recognition.dorm_members self.model_info.setText(f"寝室成员: {', '.join(self.dorm_members)}") # 保存更新后的模型 model_path = os.path.join("models", "updated_model.pkl") self.face_recognition.save_updated_model(model_path) QMessageBox.information(self, "训练完成", "模型已成功使用反馈数据重新训练!") else: QMessageBox.warning(self, "训练失败", "重新训练过程中出现问题") except Exception as e: logger.error(f"重新训练失败: {str(e)}") QMessageBox.critical(self, "训练错误", f"重新训练模型时出错: {str(e)}") def recognize_faces(self, image): """识别人脸并在图像上标注结果""" # 清空上次结果 self.last_results = [] # 转换为 PIL 图像 pil_image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB)) # 检测人脸 boxes, probs, _ = self.detector.detect(pil_image, landmarks=True) # 获取显示选项 display_mode = self.display_combo.currentIndex() # 准备显示图像 display_image = image.copy() # 如果没有检测到人脸 if boxes is None: if display_mode == 2: # 识别结果模式 cv2.putText(display_image, "未检测到人脸", (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2) self.results_text.setText("未检测到人脸") else: # 提取每个人脸 faces = [] for box in boxes: x1, y1, x2, y2 = box face = pil_image.crop((x1, y1, x2, y2)) faces.append(face) # 提取特征 embeddings = [] if faces and self.model_loaded: # 批量处理所有人脸 face_tensors = [self.preprocess_face(face) for face in faces] if face_tensors: face_tensors = torch.stack(face_tensors).to(self.device) with torch.no_grad(): embeddings = self.embedder(face_tensors).cpu().numpy() # 处理每个人脸 for i, (box, prob) in enumerate(zip(boxes, probs)): x1, y1, x2, y2 = box w, h = x2 - x1, y2 - y1 # 在图像上绘制结果 if display_mode == 0: # 原始图像 # 不绘制任何内容 pass elif display_mode == 1: # 检测框 # 绘制人脸框 cv2.rectangle(display_image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2) elif display_mode == 2: # 识别结果 # 绘制人脸框 color = (0, 255, 0) # 绿色 # 如果有嵌入向量,则进行识别 if i < len(embeddings): # 预测 probabilities = self.classifier.predict_proba([embeddings[i]])[0] max_prob = np.max(probabilities) pred_class = self.classifier.predict([embeddings[i]])[0] pred_label = 
self.label_encoder.inverse_transform([pred_class])[0] # 获取置信度阈值 threshold = self.threshold_slider.value() / 100 # 判断是否为陌生人 if max_prob < threshold or pred_label == 'stranger': label = "陌生人" color = (0, 0, 255) # 红色 else: label = pred_label color = (0, 255, 0) # 绿色 # 保存结果用于文本显示 - 修复:保存完整的框信息 result = { "box": [int(x1), int(y1), int(x2 - x1), int(y2 - y1)], # [x, y, width, height] "label": label, "confidence": max_prob } self.last_results.append(result) # 绘制标签 cv2.rectangle(display_image, (int(x1), int(y1)), (int(x2), int(y2)), color, 2) cv2.putText(display_image, f"{label} ({max_prob:.2f})", (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2) else: # 无法识别的处理 cv2.rectangle(display_image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 165, 255), 2) cv2.putText(display_image, "处理中...", (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 165, 255), 2) # 更新结果文本 self.update_results_text() # 在图像上显示FPS(摄像头模式下) if self.camera_active: fps = self.timer.interval() if fps > 0: cv2.putText(display_image, f"FPS: {1000 / fps:.1f}", (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 255), 2) # 显示图像 self.display_image(display_image) def update_results_text(self): """更新结果文本区域""" if not self.last_results: self.results_text.setText("未识别到任何人脸") return # 构建结果文本 result_text = "<h3>识别结果:</h3>" for i, result in enumerate(self.last_results, 1): x, y, w, h = result["box"] label = result["label"] confidence = result["confidence"] # 处理中文显示问题 if label in self.dorm_members: result_text += ( f"<p>人脸 #{i}: " f"<b>寝室成员 - {label}</b><br>" f"位置: ({x}, {y}), 大小: {w}x{h}, 置信度: {confidence:.2f}</p>" ) else: result_text += ( f"<p>人脸 #{i}: " f"<b>陌生人</b><br>" f"位置: ({x}, {y}), 大小: {w}x{h}, 置信度: {confidence:.2f}</p>" ) self.results_text.setHtml(result_text) def preprocess_face(self, face_img): """预处理人脸图像""" # 调整大小 face_img = face_img.resize((160, 160)) # 转换为张量并归一化 face_img = np.array(face_img).astype(np.float32) / 255.0 face_img = (face_img - 0.5) / 0.5 # 归一化到[-1, 1] face_img = torch.tensor(face_img).permute(2, 0, 1) # HWC to CHW return face_img def display_image(self, image): """在QLabel中显示图像""" # 将OpenCV图像转换为Qt格式 height, width, channel = image.shape bytes_per_line = 3 * width q_img = QImage(image.data, width, height, bytes_per_line, QImage.Format_RGB888).rgbSwapped() # 缩放图像以适应标签 pixmap = QPixmap.fromImage(q_img) self.image_label.setPixmap(pixmap.scaled( self.image_label.width(), self.image_label.height(), Qt.KeepAspectRatio, Qt.SmoothTransformation )) def closeEvent(self, event): """关闭事件处理""" if self.camera_active: self.timer.stop() if self.video_capture: self.video_capture.release() # 确认退出 reply = QMessageBox.question( self, '确认退出', "确定要退出系统吗?", QMessageBox.Yes | QMessageBox.No, QMessageBox.No ) if reply == QMessageBox.Yes: event.accept() else: event.ignore() if __name__ == "__main__": app = QApplication(sys.argv) # 设置全局异常处理 def handle_exception(exc_type, exc_value, exc_traceback): """全局异常处理""" import traceback error_msg = "".join(traceback.format_exception(exc_type, exc_value, exc_traceback)) print(f"未捕获的异常:\n{error_msg}") # 记录到文件 with open("error.log", "a") as f: f.write(f"\n\n{datetime.now()}:\n{error_msg}") # 显示给用户 QMessageBox.critical(None, "系统错误", f"发生未处理的异常:\n{str(exc_value)}") sys.exit(1) sys.excepthook = handle_exception window = FaceRecognitionSystem() window.show() sys.exit(app.exec_()) The code of face_model.py is as follows: import os os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # 禁用 TensorFlow 日志(如果仍有依赖) import cv2 import numpy as np import time import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader from torchvision import transforms from sklearn.svm import SVC from sklearn.preprocessing import LabelEncoder from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score import joblib import logging import sys import glob from facenet_pytorch import MTCNN, InceptionResnetV1 from PIL import Image import gc # 配置日志 logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s') logger = logging.getLogger(__name__) def check_gpu_environment(): """检查 GPU 环境""" print("=" * 60) print("GPU 环境检查") print("=" * 60) # 检查 CUDA 是否可用 print(f"PyTorch 版本: {torch.__version__}") print(f"CUDA 可用: {torch.cuda.is_available()}") if torch.cuda.is_available(): print(f"GPU 数量: {torch.cuda.device_count()}") for i in range(torch.cuda.device_count()): print(f"GPU {i}: {torch.cuda.get_device_name(i)}") print(f" 显存总量: {torch.cuda.get_device_properties(i).total_memory / 1024 ** 3:.2f} GB") print("=" * 60) class FaceDataset(Dataset): """人脸数据集类""" def __init__(self, data_dir, min_samples=10, transform=None): self.data_dir = data_dir self.transform = transform self.faces = [] self.labels = [] self.label_map = {} self.dorm_members = [] self._load_dataset(min_samples) def _load_dataset(self, min_samples): """加载数据集""" # 遍历每个成员文件夹 for member_dir in os.listdir(self.data_dir): member_path = os.path.join(self.data_dir, member_dir) if not os.path.isdir(member_path): continue # 记录寝室成员 self.dorm_members.append(member_dir) self.label_map[member_dir] = len(self.label_map) # 遍历成员的所有照片 member_faces = [] for img_file in os.listdir(member_path): img_path = os.path.join(member_path, img_file) try: # 使用 PIL 加载图像 img = Image.open(img_path).convert('RGB')
face_model.py is as follows:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # silence TensorFlow logging (if it is still a dependency)

import cv2
import numpy as np
import time
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from sklearn.svm import SVC
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import joblib
import logging
import sys
import glob
from facenet_pytorch import MTCNN, InceptionResnetV1
from PIL import Image
import gc

# configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


def check_gpu_environment():
    """Print a summary of the GPU environment."""
    print("=" * 60)
    print("GPU 环境检查")
    print("=" * 60)
    print(f"PyTorch 版本: {torch.__version__}")
    print(f"CUDA 可用: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"GPU 数量: {torch.cuda.device_count()}")
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
            print(f"  显存总量: {torch.cuda.get_device_properties(i).total_memory / 1024 ** 3:.2f} GB")
    print("=" * 60)


class FaceDataset(Dataset):
    """Face dataset built from one folder per dorm member."""

    def __init__(self, data_dir, min_samples=10, transform=None):
        self.data_dir = data_dir
        self.transform = transform
        self.faces = []
        self.labels = []
        self.label_map = {}
        self.dorm_members = []
        self._load_dataset(min_samples)

    def _load_dataset(self, min_samples):
        """Load all member photos plus synthetic stranger samples."""
        # walk every member folder
        for member_dir in os.listdir(self.data_dir):
            member_path = os.path.join(self.data_dir, member_dir)
            if not os.path.isdir(member_path):
                continue

            # load every photo of this member
            member_faces = []
            for img_file in os.listdir(member_path):
                img_path = os.path.join(member_path, img_file)
                try:
                    img = Image.open(img_path).convert('RGB')
                    member_faces.append(img)
                except Exception as e:
                    logger.warning(f"无法加载图像 {img_path}: {str(e)}")

            # require a minimum number of samples per member
            if len(member_faces) < min_samples:
                logger.warning(f"{member_dir} 只有 {len(member_faces)} 个有效样本,至少需要 {min_samples} 个")
                continue

            # register the member only after the sample check passes
            # (the original code registered first, leaving orphan labels for skipped members)
            self.dorm_members.append(member_dir)
            self.label_map[member_dir] = len(self.label_map)
            self.faces.extend(member_faces)
            self.labels.extend([self.label_map[member_dir]] * len(member_faces))

        # add stranger samples
        stranger_faces = self._generate_stranger_samples(len(self.faces) // 4)
        self.faces.extend(stranger_faces)
        self.labels.extend([len(self.label_map)] * len(stranger_faces))
        self.label_map['stranger'] = len(self.label_map)

        logger.info(f"数据集加载完成: {len(self.faces)} 个样本, {len(self.dorm_members)} 个成员")

    def _generate_stranger_samples(self, num_samples):
        """Generate placeholder 'stranger' samples.

        Random noise stands in for real stranger photos here; a public
        dataset such as LFW should be used in a real deployment.
        """
        stranger_faces = []
        for _ in range(num_samples):
            random_face = Image.fromarray(np.uint8(np.random.rand(160, 160, 3) * 255))
            stranger_faces.append(random_face)
        return stranger_faces

    def __len__(self):
        return len(self.faces)

    def __getitem__(self, idx):
        face = self.faces[idx]
        label = self.labels[idx]
        if self.transform:
            face = self.transform(face)
        return face, label


class DormFaceRecognizer:
    """Dorm face recognition system (PyTorch implementation)."""

    def __init__(self, threshold=0.7, device=None):
        # pick the device
        self.device = device or ('cuda' if torch.cuda.is_available() else 'cpu')
        logger.info(f"使用设备: {self.device}")

        # face detector
        self.detector = MTCNN(keep_all=True, post_process=False, device=self.device)
        logger.info("MTCNN 检测器初始化完成")

        # face feature extractor, in eval mode
        self.embedder = InceptionResnetV1(
            pretrained='vggface2', classify=False, device=self.device
        ).eval()
        logger.info("FaceNet 特征提取器初始化完成")

        self.classifier = None
        self.label_encoder = None
        self.threshold = threshold
        self.dorm_members = []

        # preprocessing pipeline
        self.transform = transforms.Compose([
            transforms.Resize((160, 160)),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
        ])

    def create_dataset(self, data_dir, min_samples=10, batch_size=32, num_workers=4):
        """Create the dataset and its DataLoader."""
        dataset = FaceDataset(data_dir, min_samples=min_samples, transform=self.transform)

        # keep member information; the encoder's classes_ must follow the
        # numeric ids in label_map (a plain LabelEncoder.fit would sort the
        # names alphabetically and break inverse_transform later)
        self.dorm_members = dataset.dorm_members
        self.label_encoder = LabelEncoder()
        self.label_encoder.classes_ = np.array(
            sorted(dataset.label_map, key=dataset.label_map.get)
        )

        dataloader = DataLoader(
            dataset, batch_size=batch_size, shuffle=False,
            num_workers=num_workers, pin_memory=True
        )
        return dataset, dataloader

    def extract_features(self, dataloader):
        """Extract embedding vectors for the whole dataset."""
        embeddings = []
        labels = []
        logger.info("开始提取特征...")
        start_time = time.time()

        with torch.no_grad():
            for batch_idx, (faces, batch_labels) in enumerate(dataloader):
                faces = faces.to(self.device)
                batch_embeddings = self.embedder(faces)
                embeddings.append(batch_embeddings.cpu().numpy())
                labels.append(batch_labels.numpy())

                # report progress every 10 batches
                if (batch_idx + 1) % 10 == 0:
                    elapsed = time.time() - start_time
                    logger.info(f"已处理 {batch_idx + 1}/{len(dataloader)} 批次, 耗时: {elapsed:.2f}秒")

        embeddings = np.vstack(embeddings)
        labels = np.hstack(labels)
        logger.info(f"特征提取完成: {embeddings.shape[0]} 个样本, 耗时: {time.time() - start_time:.2f}秒")
        return embeddings, labels

    def train_classifier(self, embeddings, labels):
        """Train the SVM classifier."""
        logger.info("开始训练分类器...")
        start_time = time.time()

        # split into train and test sets
        X_train, X_test, y_train, y_test = train_test_split(
            embeddings, labels, test_size=0.2, random_state=42
        )

        # create and fit the SVM
        self.classifier = SVC(kernel='linear', probability=True, C=1.0)
        self.classifier.fit(X_train, y_train)

        # evaluate
        y_pred = self.classifier.predict(X_test)
        accuracy = accuracy_score(y_test, y_pred)
        logger.info(f"分类器训练完成, 准确率: {accuracy:.4f}, 耗时: {time.time() - start_time:.2f}秒")
        return accuracy

    def recognize_face(self, image):
        """Recognize all faces in a single image."""
        # convert to a PIL image
        if isinstance(image, np.ndarray):
            image = Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

        # detect faces
        boxes, probs, landmarks = self.detector.detect(image, landmarks=True)
        recognitions = []

        if boxes is not None:
            # crop every detected face
            faces = []
            for box in boxes:
                x1, y1, x2, y2 = box
                faces.append(image.crop((x1, y1, x2, y2)))

            # preprocess and embed
            face_tensors = torch.stack([self.transform(face) for face in faces]).to(self.device)
            with torch.no_grad():
                embeddings = self.embedder(face_tensors).cpu().numpy()

            # classify
            probabilities = self.classifier.predict_proba(embeddings)
            pred_classes = self.classifier.predict(embeddings)

            for i, (box, prob) in enumerate(zip(boxes, probs)):
                max_prob = np.max(probabilities[i])
                pred_label = self.label_encoder.inverse_transform([pred_classes[i]])[0]
                # treat low-confidence predictions as strangers
                if max_prob < self.threshold or pred_label == 'stranger':
                    recognitions.append(("陌生人", max_prob, box))
                else:
                    recognitions.append((pred_label, max_prob, box))

        return recognitions

    def save_model(self, file_path):
        """Save the model."""
        model_data = {
            'classifier': self.classifier,
            'label_encoder': self.label_encoder,
            'threshold': self.threshold,
            'dorm_members': self.dorm_members
        }
        joblib.dump(model_data, file_path)
        logger.info(f"模型已保存至: {file_path}")

    def load_model(self, file_path):
        """Load the model."""
        model_data = joblib.load(file_path)
        self.classifier = model_data['classifier']
        self.label_encoder = model_data['label_encoder']
        self.threshold = model_data['threshold']
        self.dorm_members = model_data['dorm_members']
        logger.info(f"模型已加载,寝室成员: {', '.join(self.dorm_members)}")


def main():
    """Entry point: build the dataset, train the classifier, run a test."""
    print(f"[{time.strftime('%H:%M:%S')}] 程序启动")

    # check the GPU environment
    check_gpu_environment()

    # make sure the data directory exists
    os.makedirs('data/dorm_faces', exist_ok=True)

    # initialize the recognizer
    try:
        recognizer = DormFaceRecognizer(threshold=0.6)
        logger.info("人脸识别器初始化成功")
    except Exception as e:
        logger.error(f"初始化失败: {str(e)}")
        print("程序将在10秒后退出...")
        time.sleep(10)
        return

    # dataset path
    data_dir = "data/dorm_faces"

    # verify the dataset exists
    if not os.path.exists(data_dir) or not os.listdir(data_dir):
        logger.warning(f"数据集目录 '{data_dir}' 不存在或为空")
        print("请创建以下结构的目录:")
        print("dorm_faces/")
        print("├── 成员1/")
        print("│   ├── 照片1.jpg")
        print("│   ├── 照片2.jpg")
        print("│   └── ...")
        print("├── 成员2/")
        print("│   └── ...")
        print("└── ...")
        print("\n程序将在10秒后退出...")
        time.sleep(10)
        return

    # step 1: create the dataset
    try:
        dataset, dataloader = recognizer.create_dataset(
            data_dir, min_samples=10, batch_size=64, num_workers=4
        )
    except Exception as e:
        logger.error(f"数据集创建失败: {str(e)}")
        return

    # step 2: extract features
    try:
        embeddings, labels = recognizer.extract_features(dataloader)
    except Exception as e:
        logger.error(f"特征提取失败: {str(e)}")
        return

    # step 3: train the classifier
    try:
        accuracy = recognizer.train_classifier(embeddings, labels)
    except Exception as e:
        logger.error(f"分类器训练失败: {str(e)}")
        return

    # save the model (create the models/ directory first; the original
    # code would fail here if it did not exist)
    model_path = "models/dorm_face_model_pytorch.pkl"
    os.makedirs(os.path.dirname(model_path), exist_ok=True)
    try:
        recognizer.save_model(model_path)
    except Exception as e:
        logger.error(f"模型保存失败: {str(e)}")

    # test recognition
    test_image_path = "test_photo.jpg"
    if not os.path.exists(test_image_path):
        logger.warning(f"测试图片 '{test_image_path}' 不存在,跳过识别测试")
    else:
        logger.info(f"正在测试识别: {test_image_path}")
        try:
            test_image = cv2.imread(test_image_path)
            if test_image is None:
                logger.error(f"无法读取图片: {test_image_path}")
            else:
                recognitions = recognizer.recognize_face(test_image)
                if not recognitions:
                    logger.info("未检测到人脸")
                else:
                    # draw the results on the image
                    for name, confidence, box in recognitions:
                        x1, y1, x2, y2 = box
                        label = f"{name} ({confidence:.2f})"
                        color = (0, 255, 0) if name != "陌生人" else (0, 0, 255)
                        # bounding box
                        cv2.rectangle(test_image, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
                        # label text
                        cv2.putText(test_image, label, (int(x1), int(y1) - 10),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

                    # show the result
                    cv2.imshow("人脸识别结果", test_image)
                    cv2.waitKey(0)
                    cv2.destroyAllWindows()

                    # save the result image
                    result_path = "recognition_result_pytorch.jpg"
                    cv2.imwrite(result_path, test_image)
                    logger.info(f"识别结果已保存至: {result_path}")
        except Exception as e:
            logger.error(f"人脸识别失败: {str(e)}")

    logger.info("程序执行完成")


if __name__ == "__main__":
    main()
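As the comment in `_generate_stranger_samples` itself concedes, random noise is only a placeholder and teaches the classifier nothing about real unknown faces. One possible substitute, sketched below, draws real faces from scikit-learn's bundled LFW loader; this is an illustration, not part of the files above (`lfw_stranger_samples` is a hypothetical name, and the loader downloads LFW on first use):

import numpy as np
from PIL import Image
from sklearn.datasets import fetch_lfw_people

def lfw_stranger_samples(num_samples, seed=42):
    """Draw real LFW face crops to serve as 'stranger' training samples."""
    lfw = fetch_lfw_people(min_faces_per_person=0, resize=0.5, color=True)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(lfw.images), size=num_samples, replace=False)
    faces = []
    for i in idx:
        # sklearn stores LFW pixels as floats; clip and cast back to uint8
        img = np.clip(lfw.images[i] * 255, 0, 255).astype(np.uint8)
        faces.append(Image.fromarray(img).resize((160, 160)))
    return faces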
face_recognition.py is as follows:

import json
import cv2
import numpy as np
import torch
import insightface
from insightface.app import FaceAnalysis
from facenet_pytorch import InceptionResnetV1
from PIL import Image
import joblib
import os
import pickle
import logging
from datetime import datetime
import random
import torch.nn as nn
import torch.optim as optim
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from torch.utils.data import Dataset, DataLoader

# module logger; the original file referenced `logger` in save_feedback
# without ever defining it
logger = logging.getLogger(__name__)


class FaceRecognition:
    def __init__(self, device=None):
        self.device = device or torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.model_loaded = False
        self.training_data = {}   # original training embeddings per label
        self.dorm_members = []    # known member names
        self.label_encoder = LabelEncoder()
        self.init_models()

    def init_models(self):
        """Initialize the face recognition models."""
        try:
            # ArcFace model via insightface's FaceAnalysis
            self.arcface_model = FaceAnalysis(providers=['CPUExecutionProvider'])
            self.arcface_model.prepare(ctx_id=0, det_size=(640, 640))

            # FaceNet model as a fallback embedder
            self.facenet_model = InceptionResnetV1(
                pretrained='vggface2', classify=False, device=self.device
            ).eval()

            # status flag
            self.models_initialized = True
            print("模型初始化完成")
        except Exception as e:
            print(f"模型初始化失败: {str(e)}")
            self.models_initialized = False

    def load_classifier(self, model_path):
        """Load the classifier model."""
        try:
            model_data = joblib.load(model_path)
            self.classifier = model_data['classifier']
            self.label_encoder = model_data['label_encoder']
            self.dorm_members = model_data['dorm_members']
            # training_data may be absent in older model files
            self.training_data = model_data.get('training_data', {})
            self.model_loaded = True
            print(f"分类器加载成功,成员: {', '.join(self.dorm_members)}")
            print(f"训练数据包含 {len(self.training_data)} 个类别")
            return True
        except Exception as e:
            print(f"分类器加载失败: {str(e)}")
            self.model_loaded = False
            return False

    def extract_features(self, face_img):
        """Extract face features with ArcFace."""
        try:
            if face_img.size == 0:
                print("错误:空的人脸图像")
                return None
            # BGR -> RGB
            rgb_img = cv2.cvtColor(face_img, cv2.COLOR_BGR2RGB)
            faces = self.arcface_model.get(rgb_img)
            if faces:
                return faces[0].embedding
            print("未检测到人脸特征")
            return None
        except Exception as e:
            print(f"特征提取失败: {str(e)}")
            return None

    def extract_features_facenet(self, face_img):
        """Extract face features with FaceNet (fallback)."""
        try:
            # convert to PIL and preprocess
            face_pil = Image.fromarray(cv2.cvtColor(face_img, cv2.COLOR_BGR2RGB))
            face_tensor = self.preprocess_face(face_pil).to(self.device)
            with torch.no_grad():
                features = self.facenet_model(face_tensor.unsqueeze(0)).cpu().numpy()[0]
            return features
        except Exception as e:
            print(f"FaceNet特征提取失败: {str(e)}")
            return None

    def preprocess_face(self, face_img):
        """Preprocess a face image for FaceNet."""
        face_img = face_img.resize((160, 160))
        # to float tensor, normalized to [-1, 1]
        face_img = np.array(face_img).astype(np.float32) / 255.0
        face_img = (face_img - 0.5) / 0.5
        face_img = torch.tensor(face_img).permute(2, 0, 1)  # HWC -> CHW
        return face_img

    def retrain_with_feedback(self, feedback_data):
        """Retrain the model with user feedback."""
        # original training data is required
        if not self.training_data:
            print("错误:没有可用的原始训练数据")
            return False

        # collect the original features and labels
        original_features = []
        original_labels = []
        for member, embeddings in self.training_data.items():
            for emb in embeddings:
                original_features.append(emb)
                original_labels.append(member)

        # collect the feedback data
        feedback_features = []
        feedback_labels = []
        for feedback in feedback_data:
            # correct label
            correct_label = feedback.get("correct_label")
            if not correct_label or correct_label == "unknown":
                continue

            # original image and face box
            image_path = feedback.get("image_path", "")
            if not image_path or not os.path.exists(image_path):
                print(f"图像路径无效: {image_path}")
                continue

            box = feedback.get("box", [])
            if len(box) != 4:
                print(f"无效的人脸框: {box}")
                continue

            image = cv2.imread(image_path)
            if image is None:
                print(f"无法读取图像: {image_path}")
                continue

            # crop the face region
            x1, y1, x2, y2 = map(int, box)
            face_img = image[y1:y2, x1:x2]
            if face_img.size == 0:
                print(f"裁剪后的人脸图像为空: {image_path}")
                continue

            # extract features
            embedding = self.extract_features(face_img)
            if embedding is None:
                print(f"无法提取特征: {image_path}")
                continue

            # add to the training data
            feedback_features.append(embedding)
            feedback_labels.append(correct_label)
            print(f"添加反馈数据: {correct_label} - {image_path}")

        # require at least one valid feedback sample
        if not feedback_features:
            print("错误:没有有效的反馈数据")
            return False

        # merge old and new data
        all_features = np.vstack([original_features, feedback_features])
        all_labels = original_labels + feedback_labels

        # retrain the classifier
        self.classifier = SVC(kernel='linear', probability=True)
        self.classifier.fit(all_features, all_labels)

        # refresh the label encoder and the member list
        self.label_encoder = LabelEncoder()
        self.label_encoder.fit(all_labels)
        self.dorm_members = list(self.label_encoder.classes_)

        # update the stored training data
        self.training_data = {}
        for label, feature in zip(all_labels, all_features):
            self.training_data.setdefault(label, []).append(feature)

        print(f"重新训练完成! 新模型包含 {len(self.dorm_members)} 个成员")
        return True
    def recognize(self, image, threshold=0.7):
        """Recognize faces in an image."""
        if not self.model_loaded or not self.models_initialized:
            return [], image.copy()

        # detect faces with ArcFace
        rgb_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        faces = self.arcface_model.get(rgb_img)

        results = []
        display_img = image.copy()

        if faces:
            for face in faces:
                # face box and embedding
                x1, y1, x2, y2 = face.bbox.astype(int)
                embedding = face.embedding

                # predict
                probabilities = self.classifier.predict_proba([embedding])[0]
                max_prob = np.max(probabilities)
                pred_class = self.classifier.predict([embedding])[0]
                pred_label = self.label_encoder.inverse_transform([pred_class])[0]

                # stranger check
                if max_prob < threshold or pred_label == 'stranger':
                    label = "陌生人"
                    color = (0, 0, 255)  # red
                else:
                    label = pred_label
                    color = (0, 255, 0)  # green

                results.append({
                    "box": [x1, y1, x2, y2],
                    "label": label,
                    "confidence": max_prob
                })

                # draw on the image
                cv2.rectangle(display_img, (x1, y1), (x2, y2), color, 2)
                cv2.putText(display_img, f"{label} ({max_prob:.2f})",
                            (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, color, 2)

        return results, display_img

    def save_feedback(self, image, detected_box, incorrect_label, correct_label):
        """Save user feedback; stores the image path rather than the full image."""
        feedback_dir = "data/feedback_data"
        os.makedirs(feedback_dir, exist_ok=True)

        # unique file name
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

        # save the cropped face image
        face_img_dir = os.path.join(feedback_dir, "faces")
        os.makedirs(face_img_dir, exist_ok=True)
        face_img_path = os.path.join(face_img_dir, f"face_{timestamp}.jpg")

        x1, y1, x2, y2 = map(int, detected_box)
        # fix 1: make sure the crop region is valid
        if y2 > y1 and x2 > x1:
            face_img = image[y1:y2, x1:x2]
            if face_img.size > 0:
                cv2.imwrite(face_img_path, face_img)
            else:
                logger.warning(f"裁剪的人脸区域无效: {detected_box}")
                face_img_path = None
        else:
            logger.warning(f"无效的检测框: {detected_box}")
            face_img_path = None

        # fix 2: store feedback metadata as JSON
        filename = f"feedback_{timestamp}.json"
        filepath = os.path.join(feedback_dir, filename)

        feedback_data = {
            "image_path": face_img_path,  # path instead of the full image
            "detected_box": detected_box,
            "incorrect_label": incorrect_label,
            "correct_label": correct_label,
            "timestamp": timestamp
        }

        # fix 3: JSON is easy to read and debug
        with open(filepath, 'w', encoding='utf-8') as f:
            json.dump(feedback_data, f, ensure_ascii=False, indent=2)

        return True

    def save_updated_model(self, output_path):
        """Save the updated model."""
        model_data = {
            'classifier': self.classifier,
            'label_encoder': self.label_encoder,
            'dorm_members': self.dorm_members,
            'training_data': self.training_data  # include the training data
        }
        joblib.dump(model_data, output_path)
        print(f"更新后的模型已保存到: {output_path}")


class TripletFaceDataset(Dataset):
    """Triplet face dataset: yields (anchor, positive, negative) embeddings."""

    def __init__(self, embeddings, labels):
        self.embeddings = embeddings
        self.labels = labels
        self.label_to_indices = {}
        # map each label to the indices that carry it
        for idx, label in enumerate(labels):
            self.label_to_indices.setdefault(label, []).append(idx)

    def __getitem__(self, index):
        anchor_label = self.labels[index]

        # pick a positive sample (same label, different index)
        positive_idx = index
        while positive_idx == index:
            positive_idx = random.choice(self.label_to_indices[anchor_label])

        # pick a negative sample (any other label)
        negative_label = random.choice([l for l in set(self.labels) if l != anchor_label])
        negative_idx = random.choice(self.label_to_indices[negative_label])

        return (
            self.embeddings[index],
            self.embeddings[positive_idx],
            self.embeddings[negative_idx]
        )

    def __len__(self):
        return len(self.embeddings)


class TripletLoss(nn.Module):
    """Triplet loss with squared Euclidean distances."""

    def __init__(self, margin=1.0):
        super(TripletLoss, self).__init__()
        self.margin = margin

    def forward(self, anchor, positive, negative):
        distance_positive = (anchor - positive).pow(2).sum(1)
        distance_negative = (anchor - negative).pow(2).sum(1)
        losses = torch.relu(distance_positive - distance_negative + self.margin)
        return losses.mean()


def train_triplet_model(embeddings, labels, epochs=100):
    """Train a small projection head with the triplet loss."""
    dataset = TripletFaceDataset(embeddings, labels)
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)

    model = nn.Sequential(
        nn.Linear(embeddings.shape[1], 256),
        nn.ReLU(),
        nn.Linear(256, 128)
    )

    criterion = TripletLoss(margin=0.5)
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    for epoch in range(epochs):
        total_loss = 0.0
        for anchor, positive, negative in dataloader:
            optimizer.zero_grad()
            anchor_embed = model(anchor)
            positive_embed = model(positive)
            negative_embed = model(negative)
            loss = criterion(anchor_embed, positive_embed, negative_embed)
            loss.backward()
            optimizer.step()
            total_loss += loss.item()
        print(f"Epoch {epoch + 1}/{epochs}, Loss: {total_loss / len(dataloader):.4f}")

    return model
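The `TripletLoss` above is the standard hinge max(0, d²(a,p) − d²(a,n) + margin) on squared Euclidean distances. A quick self-contained sanity check of that arithmetic: with d²(a,p) = 0.09, d²(a,n) = 1.0 and margin 0.5, the hinge gives max(0, 0.09 − 1.0 + 0.5) = 0, meaning the triplet is already satisfied:

import torch

anchor   = torch.tensor([[0.0, 0.0]])
positive = torch.tensor([[0.3, 0.0]])  # squared distance to anchor: 0.09
negative = torch.tensor([[1.0, 0.0]])  # squared distance to anchor: 1.00
margin = 0.5

# mirrors TripletLoss.forward
d_pos = (anchor - positive).pow(2).sum(1)
d_neg = (anchor - negative).pow(2).sum(1)
loss = torch.relu(d_pos - d_neg + margin).mean()
print(loss.item())  # 0.0 -- the negative already clears the margin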
main.py is as follows:

import sys
from dorm_face_recognition_gui import FaceRecognitionSystem
from PyQt5.QtWidgets import QApplication

if __name__ == "__main__":
    # give the process an explicit AppUserModelID on Windows
    # (affects taskbar grouping and icon)
    if sys.platform == "win32":
        import ctypes
        ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID("dorm.face.recognition")

    app = QApplication(sys.argv)
    app.setStyle("Fusion")  # use the Fusion style

    # application-wide stylesheet
    app.setStyleSheet("""
        QMainWindow { background-color: #ecf0f1; }
        QGroupBox {
            border: 1px solid #bdc3c7;
            border-radius: 8px;
            margin-top: 20px;
            padding: 10px;
            font-weight: bold;
            background-color: #ffffff;
        }
        QGroupBox::title {
            subcontrol-origin: margin;
            subcontrol-position: top center;
            padding: 0 5px;
        }
        QPushButton {
            background-color: #3498db;
            color: white;
            border: none;
            padding: 10px 15px;
            font-size: 14px;
            margin: 5px;
            border-radius: 5px;
        }
        QPushButton:hover { background-color: #2980b9; }
        QPushButton:pressed { background-color: #1c6ea4; }
        QPushButton:disabled { background-color: #bdc3c7; }
        QLabel { font-size: 14px; padding: 3px; }
        QComboBox, QSlider { padding: 4px; background-color: #ffffff; }
        QTextEdit { font-family: "Microsoft YaHei"; font-size: 12px; }
    """)

    window = FaceRecognitionSystem()
    window.show()
    sys.exit(app.exec_())
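Before ui.py, one subtlety shared by `recognize_face` in face_model.py and `FaceRecognition.recognize` in face_recognition.py above: scikit-learn's `SVC.predict` and `SVC.predict_proba` use different machinery (the probabilities come from Platt scaling), so on borderline faces the predicted class need not be the arg-max of the probabilities. A hedged sketch that derives both label and confidence from `predict_proba` alone, so the two always agree (`classify_embeddings` is an illustrative helper, not part of the posted code; it assumes a fitted `classifier` and a matching `label_encoder`):

import numpy as np

def classify_embeddings(classifier, label_encoder, embeddings, threshold=0.6):
    """Label embeddings using predict_proba only, so the reported
    confidence always belongs to the reported class."""
    proba = classifier.predict_proba(embeddings)   # shape (n, n_classes)
    best = np.argmax(proba, axis=1)                # arg-max column per row
    # predict_proba columns are ordered by classifier.classes_
    names = label_encoder.inverse_transform(classifier.classes_[best])
    confidences = proba[np.arange(len(best)), best]
    return [("stranger", p) if p < threshold else (name, p)
            for name, p in zip(names, confidences)]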
ui.py is as follows:

from PyQt5.QtWidgets import (QApplication, QMainWindow, QWidget, QVBoxLayout,
                             QHBoxLayout, QPushButton, QLabel, QFileDialog,
                             QComboBox, QSlider, QMessageBox, QTextEdit,
                             QGroupBox, QScrollArea, QDialog, QListWidget)
from PyQt5.QtCore import Qt, QTimer
from PyQt5.QtGui import QImage, QPixmap, QIcon, QFont

from face_recognition import FaceRecognition


class FaceRecognitionSystem(QMainWindow):
    def __init__(self):
        super().__init__()
        # ... original initialization code ...

        # face recognizer
        self.face_recognition = FaceRecognition()

        # feedback button
        self.add_feedback_button()

    def add_feedback_button(self):
        """Add the feedback button to the UI."""
        self.feedback_btn = QPushButton("反馈识别错误")
        self.feedback_btn.setIcon(QIcon.fromTheme("dialog-warning"))
        self.feedback_btn.setStyleSheet("background-color: #f39c12;")
        self.feedback_btn.clicked.connect(self.handle_feedback)

        # find the "识别功能" group box and append the button
        for i in range(self.control_layout.count()):
            widget = self.control_layout.itemAt(i).widget()
            if isinstance(widget, QGroupBox) and widget.title() == "识别功能":
                layout = widget.layout()
                layout.addWidget(self.feedback_btn)
                break

    def handle_feedback(self):
        """Handle user feedback."""
        if not hasattr(self, 'last_results') or not self.last_results:
            QMessageBox.warning(self, "警告", "没有可反馈的识别结果")
            return

        # feedback dialog
        dialog = QDialog(self)
        dialog.setWindowTitle("识别错误反馈")
        dialog.setFixedSize(400, 300)
        layout = QVBoxLayout(dialog)

        # current recognition results
        result_label = QLabel("当前识别结果:")
        layout.addWidget(result_label)

        self.feedback_list = QListWidget()
        for i, result in enumerate(self.last_results, 1):
            label = result["label"]
            confidence = result["confidence"]
            self.feedback_list.addItem(f"人脸 #{i}: {label} (置信度: {confidence:.2f})")
        layout.addWidget(self.feedback_list)

        # correct-identity selector
        correct_label = QLabel("正确身份:")
        layout.addWidget(correct_label)

        self.correct_combo = QComboBox()
        self.correct_combo.addItems(["选择正确身份"] +
                                    self.face_recognition.dorm_members +
                                    ["陌生人", "不在列表中"])
        layout.addWidget(self.correct_combo)

        # buttons
        btn_layout = QHBoxLayout()
        submit_btn = QPushButton("提交反馈")
        submit_btn.clicked.connect(lambda: self.submit_feedback(dialog))
        btn_layout.addWidget(submit_btn)

        cancel_btn = QPushButton("取消")
        cancel_btn.clicked.connect(dialog.reject)
        btn_layout.addWidget(cancel_btn)
        layout.addLayout(btn_layout)

        dialog.exec_()

    def submit_feedback(self, dialog):
        """Submit the feedback and store it for later model improvement."""
        selected_index = self.feedback_list.currentRow()
        if selected_index < 0:
            QMessageBox.warning(self, "警告", "请选择一个识别结果")
            return

        result = self.last_results[selected_index]
        correct_identity = self.correct_combo.currentText()
        if correct_identity == "选择正确身份":
            QMessageBox.warning(self, "警告", "请选择正确身份")
            return

        # save the feedback data
        self.face_recognition.save_feedback(
            self.current_image.copy(),
            result["box"],
            result["label"],
            correct_identity
        )
        QMessageBox.information(self, "反馈提交", "感谢您的反馈!数据已保存用于改进模型")
        dialog.accept()

    def recognize_faces(self, image):
        """Recognize faces and annotate the results on the image."""
        # run the recognizer
        self.last_results, display_image = self.face_recognition.recognize(
            image, threshold=self.threshold_slider.value() / 100
        )
        # update the result text and show the image
        self.update_results_text()
        self.display_image(display_image)

    def update_results_text(self):
        """Update the results text area."""
        if not self.last_results:
            self.results_text.setText("未识别到任何人脸")
            return

        # build the HTML result text
        result_text = "<h3>识别结果:</h3>"
        for i, result in enumerate(self.last_results, 1):
            x1, y1, x2, y2 = result["box"]
            label = result["label"]
            confidence = result["confidence"]
            # members are reported by name, everyone else as a stranger
            if label in self.face_recognition.dorm_members:
                result_text += (
                    f"<br/>人脸 #{i}: "
                    f"寝室成员 - {label}<br/>"
                    f"位置: ({x1}, {y1}), 置信度: {confidence:.2f}<br/>"
                )
            else:
                result_text += (
                    f"<br/>人脸 #{i}: "
                    f"陌生人<br/>"
                    f"位置: ({x1}, {y1}), 置信度: {confidence:.2f}<br/>"
                )
        self.results_text.setHtml(result_text)

    # ... remaining original methods ...


The model-retraining and feedback parts need to be removed entirely.