

Fire Detection with YOLOv5 and YOLOv8

Table of Contents
1. Introduction
2. The YOLO Algorithm
3. Fire Detection with YOLOv5 and YOLOv8

A demo video is on Bilibili: YOLOv5/YOLOv8的火灾检测（超实用项目）_哔哩哔哩_bilibili

This post is part of a series of columns covering YOLO detection projects, tracking projects, binocular vision, and depth/structured-light camera projects for ranging, speed measurement, and 3D measurement. The series is updated continuously; readers who need custom project work can contact the author privately.

1. Introduction

As technology advances, artificial intelligence is being applied across many fields. Computer vision, an important branch of AI, studies how machines can "see" and "understand" images and video. Within computer vision, object detection is a key problem: identifying specific objects in an image or video and determining their locations. Fire detection is an important application of object detection, because spotting a fire promptly can reduce casualties and property damage.

This project develops fire detection on top of two advanced object detection models, YOLOv5 and YOLOv8. YOLO (You Only Look Once) is a real-time object detection algorithm that casts detection as a regression problem: a single forward pass of the network yields the classes and locations of all objects in the image. YOLOv5 strikes a good balance between accuracy and speed and is widely used for real-time detection tasks; YOLOv8 is its more recent successor.

In this project we discuss the challenges and requirements of fire detection, introduce the basic principles and architectures of YOLOv5 and YOLOv8, and demonstrate their application in real fire-detection scenarios. We hope this work contributes to more accurate and efficient fire detection, helps protect lives and property, and serves as a useful reference for further research and application of AI in fire safety.

2. The YOLO Algorithm

YOLO (You Only Look Once) is an efficient real-time object detection algorithm that turns detection into a regression problem: one forward pass of the network produces the classes and locations of all objects in the image. Compared with traditional detection methods, YOLO is both faster and highly accurate, which has made it one of the most important algorithms in computer vision.

The basic idea is to divide the input image into a fixed-size grid; each cell predicts whether it contains an object and, if so, the object's location and class. Unlike sliding-window approaches, YOLO predicts the locations and classes of all objects simultaneously, avoiding redundant computation and therefore running much faster.

The main steps of the YOLO algorithm are:

Grid division: the input image is divided into an S×S grid, and each cell is responsible for predicting whether it contains an object.

Box and class prediction: each cell predicts B bounding boxes, a confidence score for each box, and class probabilities. The confidence reflects how accurate the box is; the class probabilities give the likelihood that the object belongs to each class.

Loss computation: YOLO uses a multi-task loss combining bounding-box regression loss, confidence loss (covering both objectness and localization accuracy), and classification loss. Minimizing these losses teaches the network accurate locations and classes.

Non-maximum suppression (NMS): the raw predictions may contain several boxes for the same object. NMS keeps the box with the highest confidence and removes any other box whose IoU (intersection over union) with it exceeds a threshold.

Output: finally, YOLO outputs the locations, classes, and confidence scores of all objects in the image.

YOLO's strengths are its speed and accuracy: it can process high-resolution images in real time and generalizes well across object scales and sizes. This is why it is widely used in real-time detection, video analytics, autonomous driving, and related fields.

3. Fire Detection with YOLOv5 and YOLOv8

Selected code. Main GUI code:

from PyQt5.QtWidgets import QApplication, QMainWindow, QFileDialog, QMenu, QAction
from main_win.win import Ui_mainWindow
from PyQt5.QtCore import Qt, QPoint, QTimer, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap, QPainter, QIcon

import sys
import os
import json
import numpy as np
import torch
import torch.backends.cudnn as cudnn
import time
import cv2

from models.experimental import attempt_load
from utils.datasets import LoadImages, LoadWebcam
from utils.CustomMessageBox import MessageBox  # the last return value of LoadWebcam was changed to self.cap
from utils.general import check_img_size, check_requirements, check_imshow, colorstr, non_max_suppression, \
    apply_classifier, scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path, save_one_box
from utils.plots import colors, plot_one_box, plot_one_box_PIL
from utils.torch_utils import select_device, load_classifier, time_sync
from utils.capnums import Camera
from dialog.rtsp_win import Window


class DetThread(QThread):
    send_img = pyqtSignal(np.ndarray)
    send_raw = pyqtSignal(np.ndarray)
    send_statistic = pyqtSignal(dict)
    # signals: detecting / paused / stopped / finished / error report
    send_msg = pyqtSignal(str)
    send_percent = pyqtSignal(int)
    send_fps = pyqtSignal(str)

    def __init__(self):
        super(DetThread, self).__init__()
        self.weights = './yolov5s.pt'         # weights
        self.current_weight = './yolov5s.pt'  # current weights
        self.source = '0'                     # video source
        self.conf_thres = 0.25                # confidence threshold
        self.iou_thres = 0.45                 # IoU threshold
        self.jump_out = False                 # break out of the loop
        self.is_continue = True               # continue / pause
        self.percent_length = 1000            # progress bar length
        self.rate_check = True                # whether to enable the frame delay
        self.rate = 100                       # delay (Hz)
        self.save_fold = './result'           # save folder

    @torch.no_grad()
    def run(self,
            imgsz=640,  # inference size (pixels)
            max_det=1000,  # maximum detections per image
            device='',  # cuda device, i.e. 0 or 0,1,2,3 or cpu
            view_img=True,  # show results
            save_txt=False,  # save results to *.txt
            save_conf=False,  # save confidences in --save-txt labels
            save_crop=False,  # save cropped prediction boxes
            nosave=False,  # do not save images/videos
            classes=None,  # filter by class: --class 0, or --class 0 2 3
            agnostic_nms=False,  # class-agnostic NMS
            augment=False,  # augmented inference
            visualize=False,  # visualize features
            update=False,  # update all models
            project='runs/detect',  # save results to project/name
            name='exp',  # save results to project/name
            exist_ok=False,  # existing project/name ok, do not increment
            line_thickness=3,  # bounding box thickness (pixels)
            hide_labels=False,  # hide labels
            hide_conf=False,  # hide confidences
            half=False,  # use FP16 half-precision inference
            ):

        # Initialize
        try:
            device = select_device(device)
            half &= device.type != 'cpu'  # half precision only supported on CUDA

            # Load model
            model = attempt_load(self.weights, map_location=device)  # load FP32 model
            num_params = 0
            for param in model.parameters():
                num_params += param.numel()
            stride = int(model.stride.max())  # model stride
            imgsz = check_img_size(imgsz, s=stride)  # check image size
            names = model.module.names if hasattr(model, 'module') else model.names  # get class names
            if half:
                model.half()  # to FP16

            # Dataloader
            if self.source.isnumeric() or self.source.lower().startswith(('rtsp://', 'rtmp://', 'http://', 'https://')):
                view_img = check_imshow()
                cudnn.benchmark = True  # set True to speed up constant image size inference
                dataset = LoadWebcam(self.source, img_size=imgsz, stride=stride)
                # bs = len(dataset)  # batch_size
            else:
                dataset = LoadImages(self.source, img_size=imgsz, stride=stride)

            # Run inference
            if device.type != 'cpu':
                model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
            count = 0
            # skip-frame detection
            jump_count = 0
            start_time = time.time()
            dataset = iter(dataset)

            while True:
                # manual stop
                if self.jump_out:
                    self.vid_cap.release()
                    self.send_percent.emit(0)
                    self.send_msg.emit('Stopped')
                    if hasattr(self, 'out'):
                        self.out.release()
                    break
                # hot-swap the model
                if self.current_weight != self.weights:
                    # Load model
                    model = attempt_load(self.weights, map_location=device)  # load FP32 model
                    num_params = 0
                    for param in model.parameters():
                        num_params += param.numel()
                    stride = int(model.stride.max())  # model stride
                    imgsz = check_img_size(imgsz, s=stride)  # check image size
                    names = model.module.names if hasattr(model, 'module') else model.names  # get class names
                    if half:
                        model.half()  # to FP16
                    # Run inference
                    if device.type != 'cpu':
                        model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
                    self.current_weight = self.weights
                # pause switch
                if self.is_continue:
                    path, img, im0s, self.vid_cap = next(dataset)
                    # jump_count += 1
                    # if jump_count % 5 != 0:
                    #     continue
                    count += 1
                    # refresh the output frame rate every 30 frames
                    if count % 30 == 0 and count >= 30:
                        fps = int(30 / (time.time() - start_time))
                        self.send_fps.emit('fps:' + str(fps))
                        start_time = time.time()
                    if self.vid_cap:
                        percent = int(count / self.vid_cap.get(cv2.CAP_PROP_FRAME_COUNT) * self.percent_length)
                        self.send_percent.emit(percent)
                    else:
                        percent = self.percent_length

                    statistic_dic = {name: 0 for name in names}
                    img = torch.from_numpy(img).to(device)
                    img = img.half() if half else img.float()  # uint8 to fp16/32
                    img /= 255.0  # 0 - 255 to 0.0 - 1.0
                    if img.ndimension() == 3:
                        img = img.unsqueeze(0)

                    pred = model(img, augment=augment)[0]

                    # Apply NMS
                    pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes, agnostic_nms,
                                               max_det=max_det)
                    # Process detections
                    for i, det in enumerate(pred):  # detections per image
                        im0 = im0s.copy()
                        if len(det):
                            # Rescale boxes from img_size to im0 size
                            det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

                            # Write results
                            for *xyxy, conf, cls in reversed(det):
                                c = int(cls)  # integer class
                                statistic_dic[names[c]] += 1
                                label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                                # im0 = plot_one_box_PIL(xyxy, im0, label=label, color=colors(c, True), line_thickness=line_thickness)  # draws Chinese labels, but slower
                                plot_one_box(xyxy, im0, label=label, color=colors(c, True),
                                             line_thickness=line_thickness)

                    # throttle how often frames are emitted
                    if self.rate_check:
                        time.sleep(1 / self.rate)
                    self.send_img.emit(im0)
                    self.send_raw.emit(im0s if isinstance(im0s, np.ndarray) else im0s[0])
                    self.send_statistic.emit(statistic_dic)
                    # automatic recording
                    if self.save_fold:
                        os.makedirs(self.save_fold, exist_ok=True)  # create the folder if it does not exist
                        # if the input is an image
                        if self.vid_cap is None:
                            save_path = os.path.join(self.save_fold,
                                                     time.strftime('%Y_%m_%d_%H_%M_%S',
                                                                   time.localtime()) + '.jpg')
                            cv2.imwrite(save_path, im0)
                        else:
                            if count == 1:  # initialise the recorder on the first frame
                                # record at the original frame rate
                                ori_fps = int(self.vid_cap.get(cv2.CAP_PROP_FPS))
                                if ori_fps == 0:
                                    ori_fps = 25
                                # width = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                                # height = int(self.vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                                width, height = im0.shape[1], im0.shape[0]
                                save_path = os.path.join(self.save_fold,
                                                         time.strftime('%Y_%m_%d_%H_%M_%S',
                                                                       time.localtime()) + '.mp4')
                                self.out = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), ori_fps,
                                                           (width, height))
                            self.out.write(im0)

                    if percent == self.percent_length:
                        print(count)
                        self.send_percent.emit(0)
                        self.send_msg.emit('Finished')
                        if hasattr(self, 'out'):
                            self.out.release()
                        # normal exit from the loop
                        break

        except Exception as e:
            self.send_msg.emit('%s' % e)

Main YOLOv5 training code:

"""Train a YOLOv5 model on a custom dataset

Usage:
    $ python path/to/train.py --data coco128.yaml --weights yolov5s.pt --img 640
"""

import argparse
import logging
import os
import random
import sys
import time
import warnings
from copy import deepcopy
from pathlib import Path
from threading import Thread

import math
import numpy as np
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
import torch.utils.data
import yaml
from torch.cuda import amp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.tensorboard import SummaryWriter
from tqdm import tqdm

FILE = Path(__file__).absolute()
sys.path.append(FILE.parents[0].as_posix())  # add yolov5/ to path

import val  # for end-of-epoch mAP
from models.experimental import attempt_load
from models.yolo import Model
from utils.autoanchor import check_anchors
from utils.datasets import create_dataloader
from utils.general import labels_to_class_weights, increment_path, labels_to_image_weights, init_seeds, \
    strip_optimizer, get_latest_run, check_dataset, check_file, check_git_status, check_img_size, \
    check_requirements, print_mutation, set_logging, one_cycle, colorstr
from utils.google_utils import attempt_download
from utils.loss import ComputeLoss
from utils.plots import plot_images, plot_labels, plot_results, plot_evolution
from utils.torch_utils import ModelEMA, select_device, intersect_dicts, torch_distributed_zero_first, de_parallel
from utils.wandb_logging.wandb_utils import WandbLogger, check_wandb_resume
from utils.metrics import fitness

LOGGER = logging.getLogger(__name__)
LOCAL_RANK = int(os.getenv('LOCAL_RANK', -1))  # https://pytorch.org/docs/stable/elastic/run.html
RANK = int(os.getenv('RANK', -1))
WORLD_SIZE = int(os.getenv('WORLD_SIZE', 1))


def train(hyp,  # path/to/hyp.yaml or hyp dictionary
          opt,
          device,
          ):
    save_dir, epochs, batch_size, weights, single_cls, evolve, data, cfg, resume, noval, nosave, workers, = \
        opt.save_dir, opt.epochs, opt.batch_size, opt.weights, opt.single_cls, opt.evolve, opt.data, opt.cfg, \
        opt.resume, opt.noval, opt.nosave, opt.workers

    # Directories
    save_dir = Path(save_dir)
    wdir = save_dir / 'weights'
    wdir.mkdir(parents=True, exist_ok=True)  # make dir
    last = wdir / 'last.pt'
    best = wdir / 'best.pt'
    results_file = save_dir / 'results.txt'

    # Hyperparameters
    if isinstance(hyp, str):
        with open(hyp) as f:
            hyp = yaml.safe_load(f)  # load hyps dict
    LOGGER.info(colorstr('hyperparameters: ') + ', '.join(f'{k}={v}' for k, v in hyp.items()))

    # Save run settings
    with open(save_dir / 'hyp.yaml', 'w') as f:
        yaml.safe_dump(hyp, f, sort_keys=False)
    with open(save_dir / 'opt.yaml', 'w') as f:
        yaml.safe_dump(vars(opt), f, sort_keys=False)

    # Configure
    plots = not evolve  # create plots
    cuda = device.type != 'cpu'
    init_seeds(1 + RANK)
    with open(data) as f:
        data_dict = yaml.safe_load(f)  # data dict

    # Loggers
    loggers = {'wandb': None, 'tb': None}  # loggers dict
    if RANK in [-1, 0]:
        # TensorBoard
        if not evolve:
            prefix = colorstr('tensorboard: ')
            LOGGER.info(f"{prefix}Start with 'tensorboard --logdir {opt.project}', view at http://localhost:6006/")
            loggers['tb'] = SummaryWriter(str(save_dir))

        # W&B
        opt.hyp = hyp  # add hyperparameters
        run_id = torch.load(weights).get('wandb_id') if weights.endswith('.pt') and os.path.isfile(weights) else None
        run_id = run_id if opt.resume else None  # start fresh run if transfer learning
        wandb_logger = WandbLogger(opt, save_dir.stem, run_id, data_dict)
        loggers['wandb'] = wandb_logger.wandb
        if loggers['wandb']:
            data_dict = wandb_logger.data_dict
            weights, epochs, hyp = opt.weights, opt.epochs, opt.hyp  # may update weights, epochs if resuming

    nc = 1 if single_cls else int(data_dict['nc'])  # number of classes
    names = ['item'] if single_cls and len(data_dict['names']) != 1 else data_dict['names']  # class names
    assert len(names) == nc, '%g names found for nc=%g dataset in %s' % (len(names), nc, data)  # check
    is_coco = data.endswith('coco.yaml') and nc == 80  # COCO dataset

    # Model
    pretrained = weights.endswith('.pt')
    if pretrained:
        with torch_distributed_zero_first(RANK):
            weights = attempt_download(weights)  # download if not found locally
        ckpt = torch.load(weights, map_location=device)  # load checkpoint
        model = Model(cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
        exclude = ['anchor'] if (cfg or hyp.get('anchors')) and not resume else []  # exclude keys
        state_dict = ckpt['model'].float().state_dict()  # to FP32
        state_dict = intersect_dicts(state_dict, model.state_dict(), exclude=exclude)  # intersect
        model.load_state_dict(state_dict, strict=False)  # load
        LOGGER.info('Transferred %g/%g items from %s' % (len(state_dict), len(model.state_dict()), weights))  # report
    else:
        model = Model(cfg, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)  # create
    with torch_distributed_zero_first(RANK):
        check_dataset(data_dict)  # check
    train_path = data_dict['train']
    val_path = data_dict['val']

    # Freeze
    freeze = []  # parameter names to freeze (full or partial)
    for k, v in model.named_parameters():
        v.requires_grad = True  # train all layers
        if any(x in k for x in freeze):
            print('freezing %s' % k)
            v.requires_grad = False

    # Optimizer
    nbs = 64  # nominal batch size
    accumulate = max(round(nbs / batch_size), 1)  # accumulate loss before optimizing
    hyp['weight_decay'] *= batch_size * accumulate / nbs  # scale weight_decay
    LOGGER.info(f"Scaled weight_decay = {hyp['weight_decay']}")

    pg0, pg1, pg2 = [], [], []  # optimizer parameter groups
    for k, v in model.named_modules():
        if hasattr(v, 'bias') and isinstance(v.bias, nn.Parameter):
            pg2.append(v.bias)  # biases
        if isinstance(v, nn.BatchNorm2d):
            pg0.append(v.weight)  # no decay
        elif hasattr(v, 'weight') and isinstance(v.weight, nn.Parameter):
            pg1.append(v.weight)  # apply decay

    if opt.adam:
        optimizer = optim.Adam(pg0, lr=hyp['lr0'], betas=(hyp['momentum'], 0.999))  # adjust beta1 to momentum
    else:
        optimizer = optim.SGD(pg0, lr=hyp['lr0'], momentum=hyp['momentum'], nesterov=True)

    optimizer.add_param_group({'params': pg1, 'weight_decay': hyp['weight_decay']})  # add pg1 with weight_decay
    optimizer.add_param_group({'params': pg2})  # add pg2 (biases)
    LOGGER.info('Optimizer groups: %g .bias, %g conv.weight, %g other' % (len(pg2), len(pg1), len(pg0)))
    del pg0, pg1, pg2

    # Scheduler https://arxiv.org/pdf/1812.01187.pdf
    # https://pytorch.org/docs/stable/_modules/torch/optim/lr_scheduler.html#OneCycleLR
    if opt.linear_lr:
        lf = lambda x: (1 - x / (epochs - 1)) * (1.0 - hyp['lrf']) + hyp['lrf']  # linear
    else:
        lf = one_cycle(1, hyp['lrf'], epochs)  # cosine 1->hyp['lrf']
    scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lf)
    # plot_lr_scheduler(optimizer, scheduler, epochs)

    # EMA
    ema = ModelEMA(model) if RANK in [-1, 0] else None

    # Resume
    start_epoch, best_fitness = 0, 0.0
    if pretrained:
        # Optimizer
        if ckpt['optimizer'] is not None:
            optimizer.load_state_dict(ckpt['optimizer'])
            best_fitness = ckpt['best_fitness']

        # EMA
        if ema and ckpt.get('ema'):
            ema.ema.load_state_dict(ckpt['ema'].float().state_dict())
            ema.updates = ckpt['updates']

        # Results
        if ckpt.get('training_results') is not None:
            results_file.write_text(ckpt['training_results'])  # write results.txt

        # Epochs
        start_epoch = ckpt['epoch'] + 1
        if resume:
            assert start_epoch > 0, '%s training to %g epochs is finished, nothing to resume.' % (weights, epochs)
        if epochs < start_epoch:
            LOGGER.info('%s has been trained for %g epochs. Fine-tuning for %g additional epochs.' %
                        (weights, ckpt['epoch'], epochs))
            epochs += ckpt['epoch']  # finetune additional epochs

        del ckpt, state_dict

    # Image sizes
    gs = max(int(model.stride.max()), 32)  # grid size (max stride)
    nl = model.model[-1].nl  # number of detection layers (used for scaling hyp['obj'])
    imgsz = check_img_size(opt.imgsz, gs)  # verify imgsz is gs-multiple

    # DP mode
    if cuda and RANK == -1 and torch.cuda.device_count() > 1:
        logging.warning('DP not recommended, instead use torch.distributed.run for best DDP Multi-GPU results.\n'
                        'See Multi-GPU Tutorial at https://github.com/ultralytics/yolov5/issues/475 to get started.')
        model = torch.nn.DataParallel(model)

    # SyncBatchNorm
    if opt.sync_bn and cuda and RANK != -1:
        raise Exception('can not train with --sync-bn, known issue https://github.com/ultralytics/yolov5/issues/3998')
        model = torch.nn.SyncBatchNorm.convert_sync_batchnorm(model).to(device)
        LOGGER.info('Using SyncBatchNorm()')

    # Trainloader
    train_loader, dataset = create_dataloader(train_path, imgsz, batch_size // WORLD_SIZE, gs, single_cls,
                                              hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=RANK,
                                              workers=workers, image_weights=opt.image_weights, quad=opt.quad,
                                              prefix=colorstr('train: '))
    mlc = np.concatenate(dataset.labels, 0)[:, 0].max()  # max label class
    nb = len(train_loader)  # number of batches
    assert mlc < nc, 'Label class %g exceeds nc=%g in %s. Possible class labels are 0-%g' % (mlc, nc, data, nc - 1)

    # Process 0
    if RANK in [-1, 0]:
        val_loader = create_dataloader(val_path, imgsz, batch_size // WORLD_SIZE * 2, gs, single_cls,
                                       hyp=hyp, cache=opt.cache_images and not noval, rect=True, rank=-1,
                                       workers=workers, pad=0.5,
                                       prefix=colorstr('val: '))[0]

        if not resume:
            labels = np.concatenate(dataset.labels, 0)
            # c = torch.tensor(labels[:, 0])  # classes
            # cf = torch.bincount(c.long(), minlength=nc) + 1.  # frequency
            # model._initialize_biases(cf.to(device))
            if plots:
                plot_labels(labels, names, save_dir, loggers)

            # Anchors
            if not opt.noautoanchor:
                check_anchors(dataset, model=model, thr=hyp['anchor_t'], imgsz=imgsz)
            model.half().float()  # pre-reduce anchor precision

    # DDP mode
    if cuda and RANK != -1:
        model = DDP(model, device_ids=[LOCAL_RANK], output_device=LOCAL_RANK)

    # Model parameters
    hyp['box'] *= 3. / nl  # scale to layers
    hyp['cls'] *= nc / 80. * 3. / nl  # scale to classes and layers
    hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl  # scale to image size and layers
    hyp['label_smoothing'] = opt.label_smoothing
    model.nc = nc  # attach number of classes to model
    model.hyp = hyp  # attach hyperparameters to model
    model.gr = 1.0  # iou loss ratio (obj_loss = 1.0 or iou)
    model.class_weights = labels_to_class_weights(dataset.labels, nc).to(device) * nc  # attach class weights
    model.names = names

    # Start training
    t0 = time.time()
    nw = max(round(hyp['warmup_epochs'] * nb), 1000)  # number of warmup iterations, max(3 epochs, 1k iterations)
    # nw = min(nw, (epochs - start_epoch) / 2 * nb)  # limit warmup to < 1/2 of training
    last_opt_step = -1
    maps = np.zeros(nc)  # mAP per class
    results = (0, 0, 0, 0, 0, 0, 0)  # P, R, mAP@.5, mAP@.5-.95, val_loss(box, obj, cls)
    scheduler.last_epoch = start_epoch - 1  # do not move
    scaler = amp.GradScaler(enabled=cuda)
    compute_loss = ComputeLoss(model)  # init loss class
    LOGGER.info(f'Image sizes {imgsz} train, {imgsz} val\n'
                f'Using {train_loader.num_workers} dataloader workers\n'
                f'Logging results to {save_dir}\n'
                f'Starting training for {epochs} epochs...')
    for epoch in range(start_epoch, epochs):  # epoch ------------------------------------------------------------------
        model.train()

        # Update image weights (optional)
        if opt.image_weights:
            # Generate indices
            if RANK in [-1, 0]:
                cw = model.class_weights.cpu().numpy() * (1 - maps) ** 2 / nc  # class weights
                iw = labels_to_image_weights(dataset.labels, nc=nc, class_weights=cw)  # image weights
                dataset.indices = random.choices(range(dataset.n), weights=iw, k=dataset.n)  # rand weighted idx
            # Broadcast if DDP
            if RANK != -1:
                indices = (torch.tensor(dataset.indices) if RANK == 0 else torch.zeros(dataset.n)).int()
                dist.broadcast(indices, 0)
                if RANK != 0:
                    dataset.indices = indices.cpu().numpy()

        # Update mosaic border
        # b = int(random.uniform(0.25 * imgsz, 0.75 * imgsz + gs) // gs * gs)
        # dataset.mosaic_border = [b - imgsz, -b]  # height, width borders

        mloss = torch.zeros(4, device=device)  # mean losses
        if RANK != -1:
            train_loader.sampler.set_epoch(epoch)
        pbar = enumerate(train_loader)
        LOGGER.info(('\n' + '%10s' * 8) % ('Epoch', 'gpu_mem', 'box', 'obj', 'cls', 'total', 'labels', 'img_size'))
        if RANK in [-1, 0]:
            pbar = tqdm(pbar, total=nb)  # progress bar
        optimizer.zero_grad()
        for i, (imgs, targets, paths, _) in pbar:  # batch -------------------------------------------------------------
            ni = i + nb * epoch  # number integrated batches (since train start)
            imgs = imgs.to(device, non_blocking=True).float() / 255.0  # uint8 to float32, 0-255 to 0.0-1.0

            # Warmup
            if ni <= nw:
                xi = [0, nw]  # x interp
                # model.gr = np.interp(ni, xi, [0.0, 1.0])  # iou loss ratio (obj_loss = 1.0 or iou)
                accumulate = max(1, np.interp(ni, xi, [1, nbs / batch_size]).round())
                for j, x in enumerate(optimizer.param_groups):
                    # bias lr falls from 0.1 to lr0, all other lrs rise from 0.0 to lr0
                    x['lr'] = np.interp(ni, xi, [hyp['warmup_bias_lr'] if j == 2 else 0.0, x['initial_lr'] * lf(epoch)])
                    if 'momentum' in x:
                        x['momentum'] = np.interp(ni, xi, [hyp['warmup_momentum'], hyp['momentum']])

            # Multi-scale
            if opt.multi_scale:
                sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
                sf = sz / max(imgs.shape[2:])  # scale factor
                if sf != 1:
                    ns = [math.ceil(x * sf / gs) * gs for x in imgs.shape[2:]]  # new shape (stretched to gs-multiple)
                    imgs = F.interpolate(imgs, size=ns, mode='bilinear', align_corners=False)

            # Forward
            with amp.autocast(enabled=cuda):
                pred = model(imgs)  # forward
                loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
                if RANK != -1:
                    loss *= WORLD_SIZE  # gradient averaged between devices in DDP mode
                if opt.quad:
                    loss *= 4.

            # Backward
            scaler.scale(loss).backward()

            # Optimize
            if ni - last_opt_step >= accumulate:
                scaler.step(optimizer)  # optimizer.step
                scaler.update()
                optimizer.zero_grad()
                if ema:
                    ema.update(model)
                last_opt_step = ni

            # Print
            if RANK in [-1, 0]:
                mloss = (mloss * i + loss_items) / (i + 1)  # update mean losses
                mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9 if torch.cuda.is_available() else 0)  # (GB)
                s = ('%10s' * 2 + '%10.4g' * 6) % (
                    f'{epoch}/{epochs - 1}', mem, *mloss, targets.shape[0], imgs.shape[-1])
                pbar.set_description(s)

                # Plot
                if plots and ni < 3:
                    f = save_dir / f'train_batch{ni}.jpg'  # filename
                    Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start()
                    if loggers['tb'] and ni == 0:  # TensorBoard
                        with warnings.catch_warnings():
                            warnings.simplefilter('ignore')  # suppress jit trace warning
                            loggers['tb'].add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), [])
                elif plots and ni == 10 and loggers['wandb']:
                    wandb_logger.log({'Mosaics': [loggers['wandb'].Image(str(x), caption=x.name) for x in
                                                  save_dir.glob('train*.jpg') if x.exists()]})

            # end batch ------------------------------------------------------------------------------------------------

        # Scheduler
        lr = [x['lr'] for x in optimizer.param_groups]  # for loggers
        scheduler.step()

        # DDP process 0 or single-GPU
        if RANK in [-1, 0]:
            # mAP
            ema.update_attr(model, include=['yaml', 'nc', 'hyp', 'gr', 'names', 'stride', 'class_weights'])
            final_epoch = epoch + 1 == epochs
            if not noval or final_epoch:  # Calculate mAP
                wandb_logger.current_epoch = epoch + 1
                results, maps, _ = val.run(data_dict,
                                           batch_size=batch_size // WORLD_SIZE * 2,
                                           imgsz=imgsz,
                                           model=ema.ema,
                                           single_cls=single_cls,
                                           dataloader=val_loader,
                                           save_dir=save_dir,
                                           save_json=is_coco and final_epoch,
                                           verbose=nc < 50 and final_epoch,
                                           plots=plots and final_epoch,
                                           wandb_logger=wandb_logger,
                                           compute_loss=compute_loss)

            # Write
            with open(results_file, 'a') as f:
                f.write(s + '%10.4g' * 7 % results + '\n')  # append metrics, val_loss

            # Log
            tags = ['train/box_loss', 'train/obj_loss', 'train/cls_loss',  # train loss
                    'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95',
                    'val/box_loss', 'val/obj_loss', 'val/cls_loss',  # val loss
                    'x/lr0', 'x/lr1', 'x/lr2']  # params
            for x, tag in zip(list(mloss[:-1]) + list(results) + lr, tags):
                if loggers['tb']:
                    loggers['tb'].add_scalar(tag, x, epoch)  # TensorBoard
                if loggers['wandb']:
                    wandb_logger.log({tag: x})  # W&B

            # Update best mAP
            fi = fitness(np.array(results).reshape(1, -1))  # weighted combination of [P, R, mAP@.5, mAP@.5-.95]
            if fi > best_fitness:
                best_fitness = fi
            wandb_logger.end_epoch(best_result=best_fitness == fi)

            # Save model
            if (not nosave) or (final_epoch and not evolve):  # if save
                ckpt = {'epoch': epoch,
                        'best_fitness': best_fitness,
                        'training_results': results_file.read_text(),
                        'model': deepcopy(de_parallel(model)).half(),
                        'ema': deepcopy(ema.ema).half(),
                        'updates': ema.updates,
                        'optimizer': optimizer.state_dict(),
                        'wandb_id': wandb_logger.wandb_run.id if loggers['wandb'] else None}

                # Save last, best and delete
                torch.save(ckpt, last)
                if best_fitness == fi:
                    torch.save(ckpt, best)
                if loggers['wandb']:
                    if ((epoch + 1) % opt.save_period == 0 and not final_epoch) and opt.save_period != -1:
                        wandb_logger.log_model(last.parent, opt, epoch, fi, best_model=best_fitness == fi)
                del ckpt

        # end epoch ----------------------------------------------------------------------------------------------------
    # end training -----------------------------------------------------------------------------------------------------
    if RANK in [-1, 0]:
        LOGGER.info(f'{epoch - start_epoch + 1} epochs completed in {(time.time() - t0) / 3600:.3f} hours.\n')
        if plots:
            plot_results(save_dir=save_dir)  # save as results.png
            if loggers['wandb']:
                files = ['results.png', 'confusion_matrix.png', *[f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R')]]
                wandb_logger.log({"Results": [loggers['wandb'].Image(str(save_dir / f), caption=f) for f in files
                                              if (save_dir / f).exists()]})

        if not evolve:
            if is_coco:  # COCO dataset
                for m in [last, best] if best.exists() else [last]:  # speed, mAP tests
                    results, _, _ = val.run(data_dict,
                                            batch_size=batch_size // WORLD_SIZE * 2,
                                            imgsz=imgsz,
                                            model=attempt_load(m, device).half(),
                                            single_cls=single_cls,
                                            dataloader=val_loader,
                                            save_dir=save_dir,
                                            save_json=True,
                                            plots=False)

            # Strip optimizers
            for f in last, best:
                if f.exists():
                    strip_optimizer(f)  # strip optimizers
            if loggers['wandb']:  # Log the stripped model
                loggers['wandb'].log_artifact(str(best if best.exists() else last), type='model',
                                              name='run_' + wandb_logger.wandb_run.id + '_model',
                                              aliases=['latest', 'best', 'stripped'])
        wandb_logger.finish_run()
        torch.cuda.empty_cache()
    return results
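The NMS step described in Section 2 can be sketched in a few lines of NumPy. This is an illustrative, simplified single-class version only; the `non_max_suppression` used in the code above (from YOLOv5's `utils.general`) additionally handles batched, multi-class predictions. The box data and threshold below are made-up demo values.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)  # intersection area
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it, repeat."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thres]  # discard heavy overlaps
    return keep

# Demo: two heavily overlapping detections and one separate one
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]: the 0.8 box overlaps the 0.9 box and is suppressed
```

The 0.45 default mirrors the `iou_thres` used by the GUI thread above; raising it keeps more overlapping boxes, lowering it suppresses more aggressively.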
