# A Deep-Learning-Based Garbage Detection System for Scenic Areas

**Abstract** — With the rapid growth of tourism, environmental pollution in scenic areas has become increasingly serious, and litter is among the worst offenders. Traditional cleanup methods are slow and expensive and cannot meet the demands of modern scenic-area management. This article presents a deep-learning-based garbage detection system that integrates several state-of-the-art object detectors (YOLOv5, YOLOv6, YOLOv7, and YOLOv8) behind a user-friendly graphical interface built with PySide6. It walks through the design and implementation of the system, covering dataset construction, model training, performance optimization, and deployment, offering a complete solution for intelligent scenic-area management.

**Contents**

- 1. Research Background and Significance
  - 1.1 Environmental Challenges in Scenic Areas
  - 1.2 Limitations of Traditional Methods
  - 1.3 Advantages of Deep Learning for Environmental Monitoring
- 2. Related Work
  - 2.1 Evolution of the YOLO Family (2.1.1 YOLOv5, 2.1.2 YOLOv6, 2.1.3 YOLOv7, 2.1.4 YOLOv8)
  - 2.2 The PySide6 Framework
- 3. System Design and Architecture
  - 3.1 Overall Architecture
  - 3.2 Functional Modules
- 4. Dataset Construction and Processing
  - 4.1 Data Sources
  - 4.2 Annotation Format
  - 4.3 Data Augmentation
- 5. Model Training and Optimization
  - 5.1 YOLOv8 Training Configuration
  - 5.2 Training Code
  - 5.3 Multi-Model Comparison
- 6. GUI Development with PySide6
  - 6.1 Main Window
  - 6.2 Feature Extensions
- 7. Deployment and Optimization
  - 7.1 Deployment Environment
  - 7.2 Performance Optimization Strategies

## 1. Research Background and Significance

### 1.1 Environmental Challenges in Scenic Areas

Tourism has boomed worldwide in recent years, and visitor numbers keep rising. The environmental problems that come with them are growing just as fast. According to statistics, major Chinese scenic areas generate more than a million tons of garbage per year, roughly 30% of which is not cleared in time, degrading the environment and upsetting the local ecology.

### 1.2 Limitations of Traditional Methods

Traditional cleanup relies on manual patrols and fixed collection points, which suffer from:

- **Low efficiency** — manual patrols cover only a limited area
- **Slow response** — litter is not discovered and removed promptly
- **High cost** — large amounts of manual labor are required
- **No intelligence** — no data analysis or trend prediction is possible

### 1.3 Advantages of Deep Learning for Environmental Monitoring

Deep-learning-based object detection offers a new approach to these problems:

- **Real-time monitoring** — uninterrupted 24/7 surveillance
- **High accuracy** — reliable recognition of many litter types
- **Automatic alerts** — environmental problems are flagged immediately
- **Data analysis** — statistics to support management decisions

## 2. Related Work

### 2.1 Evolution of the YOLO Family

#### 2.1.1 YOLOv5

YOLOv5, developed by Ultralytics, is one of the most widely used YOLO versions. Its main features include:

- Adaptive anchor-box computation
- A rich set of data-augmentation strategies
- Five model sizes (n, s, m, l, x)
- Multiple training and deployment options

#### 2.1.2 YOLOv6

YOLOv6, from Meituan's vision intelligence department, improves on its predecessors with:

- A more efficient network architecture
- RepVGG-style re-parameterization
- Optimized training strategies

#### 2.1.3 YOLOv7

YOLOv7 pushed both speed and accuracy forward with:

- Extended efficient layer aggregation networks
- Model re-parameterization
- Dynamic label assignment

#### 2.1.4 YOLOv8

YOLOv8, the newest YOLO version covered here, features:

- An anchor-free design
- A more flexible network structure
- Improved training strategies

### 2.2 The PySide6 Framework

PySide6 is the official Qt for Python library and provides complete bindings for the Qt 6 API. Its advantages include:

- Cross-platform support
- A rich set of UI components
- Good documentation
- An active community ecosystem

## 3. System Design and Architecture

### 3.1 Overall Architecture

```text
Scenic Area Garbage Detection System
1. Data acquisition layer  — cameras, drones, user uploads
2. Data processing layer   — image preprocessing, augmentation
3. Model inference layer   — YOLO detection
4. Business logic layer    — garbage classification, counting, localization
5. Presentation layer      — PySide6 graphical interface
6. Storage layer           — detection results, statistics
```

### 3.2 Functional Modules

- **Real-time detection** — live camera-stream detection
- **Image detection** — single images or batches
- **Video detection** — video-file processing
- **Data management** — saving and querying detection results
- **Statistics** — analysis of garbage distribution
- **Settings** — model and parameter configuration
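The business-logic layer in the architecture above is responsible for classifying and counting detections. As a minimal sketch of the counting step (the function name and the `(class_name, confidence)` input format are illustrative assumptions, not part of the original system):

```python
from collections import Counter

def summarize_detections(detections):
    """Aggregate raw detections into a total and per-class counts.

    `detections` is a list of (class_name, confidence) tuples, as the
    inference layer might hand over to the business-logic layer.
    (Illustrative interface; not from the article's codebase.)
    """
    counts = Counter(name for name, _ in detections)
    return {"total": len(detections), "by_class": dict(counts)}

summary = summarize_detections([
    ("plastic_bottle", 0.91),
    ("can", 0.78),
    ("plastic_bottle", 0.64),
])
# summary["total"] == 3; summary["by_class"]["plastic_bottle"] == 2
```

The same aggregation would feed both the statistics module and the storage layer.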
## 4. Dataset Construction and Processing

### 4.1 Data Sources

The scenic-area garbage dataset is built from several sources:

**Public datasets**

- TACO (Trash Annotations in Context): 60 garbage categories, 1,500 images
- Waste Classification Dataset: focused on recyclables

**Self-collected data**

- Scene-specific images captured in several scenic areas

**Collection standards**

- Resolution: at least 1920×1080
- Lighting: multiple weather conditions and times of day
- Angles: multi-angle coverage
- Categories: plastic, paper, metal, glass, and more

### 4.2 Annotation Format

```text
# Example annotation file (YOLO format)
# class_id center_x center_y width height
0 0.5 0.5 0.2 0.3
1 0.3 0.4 0.1 0.2
```

Class definitions:

```text
0: plastic_bottle   # plastic bottle
1: paper_cup        # paper cup
2: cigarette_butt   # cigarette butt
3: plastic_bag      # plastic bag
4: food_wrapper     # food wrapper
5: can              # drink can
6: glass_bottle     # glass bottle
7: paper            # paper
8: fruit_peel       # fruit peel
```

### 4.3 Data Augmentation

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

def get_augmentations():
    """Define the training augmentation pipeline."""
    train_transform = A.Compose([
        A.RandomResizedCrop(640, 640, scale=(0.8, 1.0)),
        A.HorizontalFlip(p=0.5),
        A.VerticalFlip(p=0.1),
        A.RandomBrightnessContrast(p=0.3),
        A.HueSaturationValue(p=0.3),
        A.GaussianBlur(p=0.1),
        A.CLAHE(p=0.2),
        A.RandomShadow(p=0.1),
        A.RandomFog(p=0.1),
        A.Rotate(limit=15, p=0.5),
        ToTensorV2()
    ], bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]))
    return train_transform
```
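The YOLO annotation format stores boxes as class id plus center/size coordinates normalized to [0, 1]. A small helper makes that concrete — this is an illustrative sketch, not part of the article's codebase, and `yolo_to_pixel` is a name introduced here:

```python
def yolo_to_pixel(line, img_w, img_h):
    """Convert one YOLO label line 'class cx cy w h' (normalized)
    to (class_id, x1, y1, x2, y2) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    x1, y1 = cx - w / 2, cy - h / 2
    return int(cls), round(x1), round(y1), round(x1 + w), round(y1 + h)

# On a 1920x1080 image, the first example line above maps to a
# 384x324-pixel box centered in the frame:
box = yolo_to_pixel("0 0.5 0.5 0.2 0.3", 1920, 1080)
# box == (0, 768, 378, 1152, 702)
```

Note that width and height are normalized by the image width and height respectively, which is why a "square" normalized box is not square in pixels.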
## 5. Model Training and Optimization

### 5.1 YOLOv8 Training Configuration

```yaml
# yolov8n-scenic-garbage.yaml
path: ./datasets/scenic_garbage
train: images/train
val: images/val
test: images/test

# number of classes
nc: 9

# class names
names:
  0: plastic_bottle
  1: paper_cup
  2: cigarette_butt
  3: plastic_bag
  4: food_wrapper
  5: can
  6: glass_bottle
  7: paper
  8: fruit_peel
```

### 5.2 Training Code

```python
import os
import yaml
import torch
from datetime import datetime
from ultralytics import YOLO


class ScenicGarbageTrainer:
    def __init__(self, model_type="yolov8n", device="cuda"):
        """Initialize the trainer.

        Args:
            model_type: one of yolov8n, yolov8s, yolov8m, yolov8l, yolov8x
            device: training device
        """
        self.model_type = model_type
        self.device = device if torch.cuda.is_available() else "cpu"
        self.model = None

    def setup_dataset(self, data_yaml_path):
        """Load and validate the dataset configuration."""
        with open(data_yaml_path, "r") as f:
            data_config = yaml.safe_load(f)
        # Make sure the dataset actually exists
        if not os.path.exists(data_config["path"]):
            raise FileNotFoundError(f"Dataset path does not exist: {data_config['path']}")
        return data_config

    def train_model(self, data_yaml, epochs=100, imgsz=640, batch=16):
        """Train the model."""
        print(f"Starting training for {self.model_type}...")
        print(f"Device: {self.device}")
        print(f"Epochs: {epochs}")
        print(f"Image size: {imgsz}")
        print(f"Batch size: {batch}")

        # Load a pretrained checkpoint
        self.model = YOLO(f"{self.model_type}.pt")

        # Training hyperparameters
        train_args = {
            "data": data_yaml,
            "epochs": epochs,
            "imgsz": imgsz,
            "batch": batch,
            "device": self.device,
            "workers": 8,
            "patience": 50,
            "save": True,
            "save_period": 10,
            "exist_ok": True,
            "pretrained": True,
            "optimizer": "AdamW",
            "lr0": 0.001,
            "lrf": 0.01,
            "momentum": 0.937,
            "weight_decay": 0.0005,
            "warmup_epochs": 3,
            "warmup_momentum": 0.8,
            "warmup_bias_lr": 0.1,
            "box": 7.5,
            "cls": 0.5,
            "dfl": 1.5,
            "hsv_h": 0.015,
            "hsv_s": 0.7,
            "hsv_v": 0.4,
            "degrees": 0.0,
            "translate": 0.1,
            "scale": 0.5,
            "shear": 0.0,
            "perspective": 0.0,
            "flipud": 0.0,
            "fliplr": 0.5,
            "mosaic": 1.0,
            "mixup": 0.0,
            "copy_paste": 0.0,
            "name": f"{self.model_type}_scenic_garbage_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
        }

        # Run training
        results = self.model.train(**train_args)

        # Report where the best checkpoint was saved
        best_model_path = results.save_dir / "weights" / "best.pt"
        print(f"Training finished. Best model saved to: {best_model_path}")
        return results

    def evaluate_model(self, data_yaml):
        """Evaluate model performance."""
        if self.model is None:
            raise ValueError("Train or load a model first")
        metrics = self.model.val(data=data_yaml)

        # Print the evaluation summary (mp/mr are the means over classes)
        print("Evaluation results:")
        print(f"mAP@0.5: {metrics.box.map50:.4f}")
        print(f"mAP@0.5:0.95: {metrics.box.map:.4f}")
        print(f"Precision: {metrics.box.mp:.4f}")
        print(f"Recall: {metrics.box.mr:.4f}")
        return metrics

    def export_model(self, format="onnx"):
        """Export the trained model."""
        if self.model is None:
            raise ValueError("Train or load a model first")
        export_path = f"./exports/{self.model_type}_scenic_garbage.{format}"
        success = self.model.export(format=format, simplify=True)
        if success:
            print(f"Model exported: {export_path}")
        else:
            print("Model export failed")
        return success


# Usage example
if __name__ == "__main__":
    # Initialize the trainer
    trainer = ScenicGarbageTrainer(model_type="yolov8n")

    # Validate the dataset
    data_config = trainer.setup_dataset("data/scenic_garbage.yaml")

    # Train
    results = trainer.train_model(
        data_yaml="data/scenic_garbage.yaml",
        epochs=100,
        imgsz=640,
        batch=16,
    )

    # Evaluate and export
    metrics = trainer.evaluate_model("data/scenic_garbage.yaml")
    trainer.export_model(format="onnx")
```

### 5.3 Multi-Model Comparison

```python
class MultiModelComparer:
    def __init__(self, models=None):
        self.models = models or ["yolov5", "yolov6", "yolov7", "yolov8"]
        self.results = {}

    def compare_models(self, data_yaml, epochs=50):
        """Compare the performance of the different models."""
        for model_name in self.models:
            print(f"\n{'=' * 50}")
            print(f"Training {model_name}")
            print("=" * 50)

            if model_name == "yolov8":
                trainer = ScenicGarbageTrainer(model_type="yolov8n")
            elif model_name == "yolov7":
                # YOLOv7 training code (separate repository, omitted)
                continue
            elif model_name == "yolov6":
                # YOLOv6 training code (separate repository, omitted)
                continue
            elif model_name == "yolov5":
                # YOLOv5 training code (separate repository, omitted)
                continue

            # Train and evaluate
            results = trainer.train_model(data_yaml, epochs=epochs)
            metrics = trainer.evaluate_model(data_yaml)

            self.results[model_name] = {
                "metrics": metrics,
                "params": sum(p.numel() for p in trainer.model.model.parameters()),
                "inference_time": self.measure_inference_time(trainer.model),
            }

        # Produce the comparison report
        self.generate_comparison_report()

    def measure_inference_time(self, model, img_size=(640, 640)):
        """Measure average inference latency in milliseconds."""
        import time
        dummy_input = torch.randn(1, 3, *img_size).to(model.device)

        # Warmup
        for _ in range(10):
            _ = model(dummy_input)

        # Timed runs
        iterations = 100
        start_time = time.time()
        for _ in range(iterations):
            _ = model(dummy_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        end_time = time.time()

        avg_time = (end_time - start_time) * 1000 / iterations  # milliseconds
        return avg_time

    def generate_comparison_report(self):
        """Print a model comparison table."""
        print("\n" + "=" * 60)
        print("Model performance comparison")
        print("=" * 60)

        headers = ["Model", "mAP@0.5", "mAP@0.5:0.95", "Params (M)", "Latency (ms)"]
        print(f"{headers[0]:<10} {headers[1]:<10} {headers[2]:<15} {headers[3]:<12} {headers[4]:<15}")
        print("-" * 60)

        for model_name, result in self.results.items():
            metrics = result["metrics"]
            params = result["params"] / 1e6  # millions
            inference_time = result["inference_time"]
            print(f"{model_name:<10} {metrics.box.map50:<10.4f} {metrics.box.map:<15.4f} "
                  f"{params:<12.1f} {inference_time:<15.2f}")
```
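The evaluation above reports mAP@0.5, which counts a prediction as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5; the same quantity drives the IoU threshold used during non-maximum suppression. A minimal worked example of the IoU computation (illustrative only, not from the article's codebase):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes offset by 5 px in x: intersection 50, union 150, IoU ~ 0.333
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Under the mAP@0.5 criterion this pair would not count as a match, since 0.333 < 0.5.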
## 6. GUI Development with PySide6

### 6.1 Main Window

```python
import sys
import json
import cv2
import numpy as np
import pandas as pd
import torch
import matplotlib.pyplot as plt
from pathlib import Path
from datetime import datetime
from PySide6.QtWidgets import *
from PySide6.QtCore import *
from PySide6.QtGui import *
# PySide6 requires the generic Qt Agg backend, not the Qt5-specific one
from matplotlib.backends.backend_qtagg import FigureCanvasQTAgg as FigureCanvas
from ultralytics import YOLO


class ScenicGarbageDetectionApp(QMainWindow):
    def __init__(self):
        super().__init__()
        self.model = None
        self.current_video_path = None
        self.video_capture = None
        self.timer = QTimer()
        self.detection_results = []
        self.setup_ui()
        self.setup_shortcuts()

    def setup_ui(self):
        """Build the user interface."""
        self.setWindowTitle("Scenic Area Garbage Detection System v1.0")
        self.setGeometry(100, 100, 1400, 900)

        # Central widget
        central_widget = QWidget()
        self.setCentralWidget(central_widget)
        main_layout = QHBoxLayout(central_widget)

        # Left control panel
        control_panel = QFrame()
        control_panel.setFrameShape(QFrame.StyledPanel)
        control_panel.setFixedWidth(300)
        control_layout = QVBoxLayout(control_panel)

        # Model selection
        model_group = QGroupBox("Model")
        model_layout = QVBoxLayout()
        self.model_combo = QComboBox()
        self.model_combo.addItems(["YOLOv8n", "YOLOv8s", "YOLOv8m", "YOLOv8l",
                                   "YOLOv8x", "YOLOv7", "YOLOv6", "YOLOv5"])
        self.model_combo.setCurrentText("YOLOv8n")
        model_layout.addWidget(QLabel("Select model:"))
        model_layout.addWidget(self.model_combo)
        self.load_model_btn = QPushButton("Load model")
        self.load_model_btn.clicked.connect(self.load_model)
        model_layout.addWidget(self.load_model_btn)
        model_group.setLayout(model_layout)
        control_layout.addWidget(model_group)

        # Detection mode
        mode_group = QGroupBox("Detection mode")
        mode_layout = QVBoxLayout()
        self.mode_combo = QComboBox()
        self.mode_combo.addItems(["Image", "Live camera", "Video file"])
        mode_layout.addWidget(QLabel("Select mode:"))
        mode_layout.addWidget(self.mode_combo)
        self.mode_combo.currentTextChanged.connect(self.on_mode_changed)

        self.camera_combo = QComboBox()
        self.refresh_cameras()
        mode_layout.addWidget(QLabel("Select camera:"))
        mode_layout.addWidget(self.camera_combo)

        self.start_camera_btn = QPushButton("Start camera detection")
        self.start_camera_btn.clicked.connect(self.start_camera_detection)
        self.start_camera_btn.setEnabled(False)
        mode_layout.addWidget(self.start_camera_btn)

        self.select_video_btn = QPushButton("Select video file")
        self.select_video_btn.clicked.connect(self.select_video_file)
        mode_layout.addWidget(self.select_video_btn)

        self.select_image_btn = QPushButton("Select image file")
        self.select_image_btn.clicked.connect(self.select_image_file)
        mode_layout.addWidget(self.select_image_btn)

        mode_group.setLayout(mode_layout)
        control_layout.addWidget(mode_group)

        # Detection parameters
        param_group = QGroupBox("Parameters")
        param_layout = QFormLayout()
        self.conf_slider = QSlider(Qt.Horizontal)
        self.conf_slider.setRange(10, 100)
        self.conf_slider.setValue(25)
        self.conf_slider.valueChanged.connect(self.update_conf_label)
        param_layout.addRow("Confidence threshold:", self.conf_slider)
        self.conf_label = QLabel("0.25")
        param_layout.addRow("Current value:", self.conf_label)
        self.iou_slider = QSlider(Qt.Horizontal)
        self.iou_slider.setRange(10, 100)
        self.iou_slider.setValue(45)
        param_layout.addRow("IoU threshold:", self.iou_slider)
        param_group.setLayout(param_layout)
        control_layout.addWidget(param_group)

        # Statistics
        stats_group = QGroupBox("Statistics")
        stats_layout = QVBoxLayout()
        self.total_detections_label = QLabel("Total detections: 0")
        stats_layout.addWidget(self.total_detections_label)
        self.class_stats_label = QLabel("Per-class counts:")
        stats_layout.addWidget(self.class_stats_label)
        self.class_stats_text = QTextEdit()
        self.class_stats_text.setReadOnly(True)
        self.class_stats_text.setMaximumHeight(150)
        stats_layout.addWidget(self.class_stats_text)
        stats_group.setLayout(stats_layout)
        control_layout.addWidget(stats_group)

        # Control buttons
        self.detect_btn = QPushButton("Start detection")
        self.detect_btn.clicked.connect(self.start_detection)
        self.detect_btn.setEnabled(False)
        self.stop_btn = QPushButton("Stop detection")
        self.stop_btn.clicked.connect(self.stop_detection)
        self.stop_btn.setEnabled(False)
        self.save_btn = QPushButton("Save results")
        self.save_btn.clicked.connect(self.save_results)
        self.save_btn.setEnabled(False)
        control_layout.addWidget(self.detect_btn)
        control_layout.addWidget(self.stop_btn)
        control_layout.addWidget(self.save_btn)
        control_layout.addStretch()

        # Right display area
        display_area = QTabWidget()

        # Detection tab
        self.image_label = QLabel()
        self.image_label.setAlignment(Qt.AlignCenter)
        self.image_label.setStyleSheet("border: 2px solid #ccc;")
        display_area.addTab(self.image_label, "Detection")

        # Statistics chart tab
        self.stats_canvas = plt.figure(figsize=(8, 6))
        self.stats_canvas_widget = FigureCanvas(self.stats_canvas)
        display_area.addTab(self.stats_canvas_widget, "Statistics")

        # History tab
        self.history_table = QTableWidget()
        self.history_table.setColumnCount(5)
        self.history_table.setHorizontalHeaderLabels(
            ["Time", "File", "Detections", "Main class", "Confidence"])
        display_area.addTab(self.history_table, "History")

        main_layout.addWidget(control_panel)
        main_layout.addWidget(display_area, 1)

        # Status bar
        self.status_bar = QStatusBar()
        self.setStatusBar(self.status_bar)
        self.status_bar.showMessage("Ready")

        # Progress bar
        self.progress_bar = QProgressBar()
        self.status_bar.addPermanentWidget(self.progress_bar)
        self.progress_bar.setVisible(False)

    def setup_shortcuts(self):
        """Register keyboard shortcuts."""
        QShortcut(QKeySequence("Ctrl+O"), self, self.select_image_file)
        QShortcut(QKeySequence("Ctrl+V"), self, self.select_video_file)
        QShortcut(QKeySequence("Space"), self, self.start_detection)
        QShortcut(QKeySequence("Esc"), self, self.stop_detection)
        QShortcut(QKeySequence("Ctrl+S"), self, self.save_results)

    def update_conf_label(self, value):
        """Show the slider position as a 0-1 confidence value."""
        self.conf_label.setText(f"{value / 100:.2f}")

    def refresh_cameras(self):
        """Rescan for available cameras."""
        self.camera_combo.clear()
        self.camera_combo.addItem("Select a camera")
        # Probe the first few device indices
        for i in range(10):
            cap = cv2.VideoCapture(i)
            if cap.isOpened():
                self.camera_combo.addItem(f"Camera {i}")
            cap.release()

    def load_model(self):
        """Load the selected detection model."""
        try:
            model_name = self.model_combo.currentText()
            model_map = {
                "YOLOv8n": "yolov8n.pt",
                "YOLOv8s": "yolov8s.pt",
                "YOLOv8m": "yolov8m.pt",
                "YOLOv8l": "yolov8l.pt",
                "YOLOv8x": "yolov8x.pt",
            }
            if model_name in model_map:
                model_path = Path(f"models/{model_map[model_name]}")
                if not model_path.exists():
                    self.show_message("Error", f"Model file not found: {model_path}")
                    return
                self.model = YOLO(model_path)
                self.status_bar.showMessage(f"Model loaded: {model_name}")
                self.detect_btn.setEnabled(True)
                # In camera mode, also enable the camera button
                if self.mode_combo.currentText() == "Live camera":
                    self.start_camera_btn.setEnabled(True)
            else:
                self.show_message("Notice", f"Loading {model_name} is still under development")
        except Exception as e:
            self.show_message("Error", f"Failed to load model: {str(e)}")

    def on_mode_changed(self, mode):
        """Handle a detection-mode change."""
        is_camera_mode = (mode == "Live camera")
        self.camera_combo.setEnabled(is_camera_mode)
        self.start_camera_btn.setEnabled(is_camera_mode and self.model is not None)

    def select_image_file(self):
        """Pick an image file."""
        file_path, _ = QFileDialog.getOpenFileName(
            self, "Select image file", str(Path.home()),
            "Images (*.jpg *.jpeg *.png *.bmp)")
        if file_path:
            self.current_image_path = file_path
            self.show_image(file_path)
            self.status_bar.showMessage(f"Image selected: {Path(file_path).name}")

    def select_video_file(self):
        """Pick a video file."""
        file_path, _ = QFileDialog.getOpenFileName(
            self, "Select video file", str(Path.home()),
            "Videos (*.mp4 *.avi *.mov *.mkv)")
        if file_path:
            self.current_video_path = file_path
            self.status_bar.showMessage(f"Video selected: {Path(file_path).name}")

    def show_image(self, image_path):
        """Display an image in the preview area."""
        pixmap = QPixmap(image_path)
        if not pixmap.isNull():
            scaled_pixmap = pixmap.scaled(
                self.image_label.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)
            self.image_label.setPixmap(scaled_pixmap)

    def start_camera_detection(self):
        """Start detection on the live camera stream."""
        camera_index = self.camera_combo.currentIndex() - 1
        if camera_index < 0:
            self.show_message("Notice", "Please select a camera")
            return
        self.video_capture = cv2.VideoCapture(camera_index)
        if not self.video_capture.isOpened():
            self.show_message("Error", "Cannot open camera")
            return
        self.timer.timeout.connect(self.update_camera_frame)
        self.timer.start(30)  # one frame every 30 ms
        self.detect_btn.setEnabled(False)
        self.stop_btn.setEnabled(True)
        self.status_bar.showMessage("Camera detection running...")

    def update_camera_frame(self):
        """Grab, detect, and display one camera frame."""
        if self.video_capture is None:
            return
        ret, frame = self.video_capture.read()
        if ret:
            # Run detection
            results = self.model(frame,
                                 conf=self.conf_slider.value() / 100,
                                 iou=self.iou_slider.value() / 100)
            # Draw the boxes
            annotated_frame = results[0].plot()
            # Convert to a Qt image
            rgb_image = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)
            h, w, ch = rgb_image.shape
            bytes_per_line = ch * w
            qt_image = QImage(rgb_image.data, w, h, bytes_per_line,
                              QImage.Format_RGB888)
            pixmap = QPixmap.fromImage(qt_image)
            # Display
            scaled_pixmap = pixmap.scaled(
                self.image_label.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)
            self.image_label.setPixmap(scaled_pixmap)
            # Refresh statistics
            self.update_statistics(results[0])

    def start_detection(self):
        """Dispatch detection according to the current mode."""
        if self.model is None:
            self.show_message("Notice", "Please load a model first")
            return
        mode = self.mode_combo.currentText()
        if mode == "Image" and hasattr(self, "current_image_path"):
            self.detect_image(self.current_image_path)
        elif mode == "Video file" and self.current_video_path:
            self.detect_video(self.current_video_path)

    def detect_image(self, image_path):
        """Detect garbage in a single image."""
        try:
            self.progress_bar.setVisible(True)
            self.progress_bar.setValue(30)
            # Run detection
            results = self.model(image_path,
                                 conf=self.conf_slider.value() / 100,
                                 iou=self.iou_slider.value() / 100)
            self.progress_bar.setValue(70)
            # Get the annotated image and save it
            annotated_image = results[0].plot()
            output_path = f"results/{Path(image_path).stem}_result.jpg"
            cv2.imwrite(output_path, annotated_image)
            # Display the result
            rgb_image = cv2.cvtColor(annotated_image, cv2.COLOR_BGR2RGB)
            h, w, ch = rgb_image.shape
            bytes_per_line = ch * w
            qt_image = QImage(rgb_image.data, w, h, bytes_per_line,
                              QImage.Format_RGB888)
            pixmap = QPixmap.fromImage(qt_image)
            scaled_pixmap = pixmap.scaled(
                self.image_label.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation)
            self.image_label.setPixmap(scaled_pixmap)
            # Update statistics and history
            self.update_statistics(results[0])
            self.add_to_history(image_path, results[0])
            self.progress_bar.setValue(100)
            QTimer.singleShot(500, lambda: self.progress_bar.setVisible(False))
            self.status_bar.showMessage(f"Detection finished: {Path(image_path).name}")
            self.save_btn.setEnabled(True)
        except Exception as e:
            self.show_message("Error", f"Image detection failed: {str(e)}")
            self.progress_bar.setVisible(False)

    def detect_video(self, video_path):
        """Detect garbage frame by frame in a video file."""
        try:
            cap = cv2.VideoCapture(video_path)
            if not cap.isOpened():
                self.show_message("Error", "Cannot open video file")
                return
            # Video metadata
            fps = cap.get(cv2.CAP_PROP_FPS)
            frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
            # Output writer is created lazily (frame size known after first frame)
            output_path = f"results/{Path(video_path).stem}_result.mp4"
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            out = None
            self.progress_bar.setVisible(True)
            self.stop_btn.setEnabled(True)
            self.detect_btn.setEnabled(False)
            frame_num = 0
            while cap.isOpened():
                ret, frame = cap.read()
                if not ret:
                    break
                # Run detection
                results = self.model(frame,
                                     conf=self.conf_slider.value() / 100,
                                     iou=self.iou_slider.value() / 100)
                annotated_frame = results[0].plot()
                # Initialize the output writer
                if out is None:
                    h, w = annotated_frame.shape[:2]
                    out = cv2.VideoWriter(output_path, fourcc, fps, (w, h))
                out.write(annotated_frame)
                # Update progress
                frame_num += 1
                progress = int((frame_num / frame_count) * 100)
                self.progress_bar.setValue(progress)
                # Show every 10th frame to keep the UI responsive
                if frame_num % 10 == 0:
                    rgb_image = cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)
                    h, w, ch = rgb_image.shape
                    bytes_per_line = ch * w
                    qt_image = QImage(rgb_image.data, w, h, bytes_per_line,
                                      QImage.Format_RGB888)
                    pixmap = QPixmap.fromImage(qt_image)
                    scaled_pixmap = pixmap.scaled(
                        self.image_label.size(), Qt.KeepAspectRatio,
                        Qt.SmoothTransformation)
                    self.image_label.setPixmap(scaled_pixmap)
                    QApplication.processEvents()
                if not self.stop_btn.isEnabled():
                    # The user pressed stop
                    break
            cap.release()
            if out is not None:
                out.release()
            self.progress_bar.setVisible(False)
            self.status_bar.showMessage(f"Video detection finished: {output_path}")
        except Exception as e:
            self.show_message("Error", f"Video detection failed: {str(e)}")

    def update_statistics(self, results):
        """Refresh the statistics panel from one detection result."""
        if hasattr(results, "boxes") and results.boxes is not None:
            boxes = results.boxes
            total = len(boxes)
            # Total count
            self.total_detections_label.setText(f"Total detections: {total}")
            # Per-class counts
            class_stats = {}
            if hasattr(boxes, "cls") and boxes.cls is not None:
                classes = boxes.cls.cpu().numpy()
                for cls in classes:
                    cls_name = self.model.names[int(cls)]
                    class_stats[cls_name] = class_stats.get(cls_name, 0) + 1
            # Show the counts
            stats_text = ""
            for cls_name, count in class_stats.items():
                stats_text += f"{cls_name}: {count}\n"
            self.class_stats_text.setText(stats_text)
            # Refresh the chart
            self.update_statistics_chart(class_stats)

    def update_statistics_chart(self, class_stats):
        """Redraw the class-distribution bar chart."""
        self.stats_canvas.clear()
        ax = self.stats_canvas.add_subplot(111)
        if class_stats:
            labels = list(class_stats.keys())
            values = list(class_stats.values())
            colors = plt.cm.Set3(np.linspace(0, 1, len(labels)))
            # Bar chart
            bars = ax.bar(labels, values, color=colors)
            ax.set_ylabel("Count")
            ax.set_title("Garbage class distribution")
            # Value labels on top of each bar
            for bar in bars:
                height = bar.get_height()
                ax.text(bar.get_x() + bar.get_width() / 2., height,
                        f"{int(height)}", ha="center", va="bottom")
            # Rotate the x-axis labels
            plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
        else:
            ax.text(0.5, 0.5, "No detections yet", ha="center", va="center",
                    transform=ax.transAxes, fontsize=12)
        self.stats_canvas.tight_layout()
        self.stats_canvas_widget.draw()

    def add_to_history(self, file_path, results):
        """Append one detection to the history table."""
        current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        file_name = Path(file_path).name
        # Detection count
        detection_count = len(results.boxes) if results.boxes is not None else 0
        # Main class and average confidence
        main_class = "none"
        avg_confidence = 0
        if results.boxes is not None and len(results.boxes) > 0:
            classes = results.boxes.cls.cpu().numpy()
            confidences = results.boxes.conf.cpu().numpy()
            if len(classes) > 0:
                main_class = self.model.names[int(classes[0])]
                avg_confidence = np.mean(confidences)
        # Add the table row
        row_position = self.history_table.rowCount()
        self.history_table.insertRow(row_position)
        self.history_table.setItem(row_position, 0, QTableWidgetItem(current_time))
        self.history_table.setItem(row_position, 1, QTableWidgetItem(file_name))
        self.history_table.setItem(row_position, 2, QTableWidgetItem(str(detection_count)))
        self.history_table.setItem(row_position, 3, QTableWidgetItem(main_class))
        self.history_table.setItem(row_position, 4, QTableWidgetItem(f"{avg_confidence:.3f}"))

    def stop_detection(self):
        """Stop any running detection."""
        if self.timer.isActive():
            self.timer.stop()
        if self.video_capture is not None:
            self.video_capture.release()
            self.video_capture = None
        self.detect_btn.setEnabled(True)
        self.stop_btn.setEnabled(False)
        self.status_bar.showMessage("Detection stopped")

    def save_results(self):
        """Save a JSON and Excel report of the session."""
        try:
            report_data = {
                "timestamp": datetime.now().isoformat(),
                "model": self.model_combo.currentText(),
                "confidence_threshold": self.conf_slider.value() / 100,
                "iou_threshold": self.iou_slider.value() / 100,
                "total_detections": self.total_detections_label.text(),
                "detection_results": self.detection_results,
            }
            # JSON report
            report_path = f"reports/report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
            with open(report_path, "w", encoding="utf-8") as f:
                json.dump(report_data, f, ensure_ascii=False, indent=2)
            # Excel report
            excel_path = f"reports/report_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
            df = pd.DataFrame(self.detection_results)
            if not df.empty:
                df.to_excel(excel_path, index=False)
            self.show_message("Success", f"Results saved to:\n{report_path}\n{excel_path}")
        except Exception as e:
            self.show_message("Error", f"Save failed: {str(e)}")

    def show_message(self, title, message):
        """Show a modal message box."""
        msg_box = QMessageBox(self)
        msg_box.setWindowTitle(title)
        msg_box.setText(message)
        msg_box.exec()

    def closeEvent(self, event):
        """Clean up on window close."""
        self.stop_detection()
        event.accept()


def main():
    """Application entry point."""
    app = QApplication(sys.argv)
    app.setStyle("Fusion")  # modern look
    # Create the working directories
    for directory in ["models", "results", "reports"]:
        Path(directory).mkdir(exist_ok=True)
    # Create and show the main window
    window = ScenicGarbageDetectionApp()
    window.show()
    sys.exit(app.exec())


if __name__ == "__main__":
    main()
```

### 6.2 Feature Extensions

```python
class AdvancedFeatures:
    """Planned advanced features of the system."""

    @staticmethod
    def batch_processing(image_folder, output_folder):
        """Process a whole folder of images in one batch."""
        pass

    @staticmethod
    def export_statistics(start_date, end_date):
        """Export a statistics report for a date range."""
        pass

    @staticmethod
    def setup_alerts(thresholds):
        """Configure alert thresholds for garbage counts."""
        pass

    @staticmethod
    def integrate_with_map(coordinates, detection_data):
        """Integrate detections with a map system."""
        pass
```
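The alerting feature above is only stubbed out. One way the threshold check itself could look is sketched below; the `class_counts` and `thresholds` dict shapes are assumptions for illustration, not interfaces defined by the article:

```python
def check_alerts(class_counts, thresholds):
    """Return the sorted list of classes whose detected count meets or
    exceeds its configured alert threshold.

    `class_counts` maps class name -> count in the current scene;
    `thresholds` maps class name -> alert limit. Classes without a
    configured threshold never trigger an alert. (Illustrative sketch.)
    """
    return sorted(
        name for name, count in class_counts.items()
        if count >= thresholds.get(name, float("inf"))
    )

alerts = check_alerts(
    {"plastic_bottle": 12, "can": 3, "paper": 1},
    {"plastic_bottle": 10, "can": 5},
)
# alerts == ["plastic_bottle"]
```

In the full system this check would run after `update_statistics()` and feed the automatic-alert path described in section 1.3.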
## 7. Deployment and Optimization

### 7.1 Deployment Environment

```dockerfile
# Dockerfile
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# Working directory
WORKDIR /app

# System dependencies
RUN apt-get update && apt-get install -y \
    python3-pip \
    libgl1-mesa-glx \
    libglib2.0-0 \
    && rm -rf /var/lib/apt/lists/*

# Project files
COPY requirements.txt .
COPY . .

# Python dependencies
RUN pip3 install --no-cache-dir -r requirements.txt

# Expose the application port
EXPOSE 8080

# Entry point
CMD ["python3", "main.py"]
```

### 7.2 Performance Optimization Strategies

```python
import torch


class PerformanceOptimizer:
    """Inference performance optimizer."""

    def __init__(self):
        self.optimization_strategies = []

    def apply_quantization(self, model):
        """Apply dynamic INT8 quantization to linear layers."""
        quantized_model = torch.quantization.quantize_dynamic(
            model, {torch.nn.Linear}, dtype=torch.qint8
        )
        return quantized_model

    def apply_pruning(self, model, pruning_rate=0.2):
        """Apply weight pruning (not implemented here)."""
        pass

    def optimize_inference(self, model, input_size=(640, 640)):
        """Speed up inference, e.g. via TensorRT (not implemented here)."""
        pass

    def memory_optimization(self):
        """Release cached GPU memory and run garbage collection."""
        torch.cuda.empty_cache()
        import gc
        gc.collect()
```
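Besides model-level optimizations like quantization and pruning, a cheap application-level optimization for the live-camera path is to run the detector only on every Nth frame and reuse the previous result in between, since detection is expensive while display is cheap. A sketch, assuming a generic `detect` callable; the `FrameThrottler` helper and its names are illustrative, not part of the original system:

```python
class FrameThrottler:
    """Run detection only every Nth frame; reuse the last result otherwise.

    `detect` is any callable mapping a frame to a detection result.
    (Illustrative helper, not from the article's codebase.)
    """

    def __init__(self, detect, every_n=3):
        self.detect = detect
        self.every_n = every_n
        self.frame_idx = 0
        self.last_result = None

    def process(self, frame):
        # Only every_n-th frame pays the cost of a real detection pass
        if self.frame_idx % self.every_n == 0:
            self.last_result = self.detect(frame)
        self.frame_idx += 1
        return self.last_result

# Demo with a fake detector that records which frames it actually saw
calls = []
throttler = FrameThrottler(lambda f: calls.append(f) or f, every_n=3)
results = [throttler.process(i) for i in range(7)]
# detection ran on frames 0, 3, 6 only; the frames in between reuse the result
```

In the GUI, `update_camera_frame()` could wrap its `self.model(frame, ...)` call in such a throttler to trade a little detection latency for a smoother preview.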
