Preface
Hello everyone, I'm Snu77, and this is the RT-DETR effective-improvements column.
This column improves on the ultralytics version of RT-DETR. The content is continuously updated, with 3-10 new articles per week.
The column uses ResNet18 and ResNet50 as the base versions for modification, and the changes also support the ResNet32, ResNet101, and PPHGNet versions. The ResNet backbones are ported 1:1 from the official RT-DETR implementation, so the parameter counts are essentially identical (the error is tiny, tiny), unlike the ResNet shipped in the ultralytics repository, which is not the official version. Some of the ultralytics repository's parameters also conflict with RT-DETR, so I will show you how to adjust those parameters and the code, so that running under ultralytics is genuinely no different from running the official RT-DETR.
Welcome to subscribe to this column and learn RT-DETR together.

一、Introduction
The improvement presented in this article is EfficientNetV1, a backbone network released by Google. Its core idea is to scale a network's depth, width, and resolution in a balanced way to improve the performance of convolutional neural networks: a single compound coefficient scales all three dimensions uniformly, achieving a more balanced expansion and thus better performance. I also provide multiple versions of this network to suit different readers' needs. The article first introduces the main framework and principles, then shows step by step how to add the network to the model, with the yaml file and run script attached. With this backbone, the parameter count drops by more than 30% while accuracy rises by about two points. Column link: RT-DETR paper-focused column, continuously reproducing top-conference content — the "paper harvester" RT-DETR.

Table of Contents
一、Introduction
二、Framework and principles of EfficientNetV1
三、Core code of EfficientNetV1
四、Step-by-step: adding the EfficientNetV1 mechanism
4.1 Modification 1
4.2 Modification 2
4.3 Modification 3
4.4 Modification 4
4.5 Modification 5
4.6 Modification 6
4.7 Modification 7
4.8 Modification 8
4.9 Solving RT-DETR's failure to print FLOPs
4.10 Optional modification
五、The EfficientNetV1 yaml file
5.1 The yaml file
5.2 The run script
5.3 Screenshot of successful training
六、Summary

二、Framework and principles of EfficientNetV1
Official paper address: click to open.
Official code address: click to open.
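As a quick reference for the discussion below, the compound scaling rule proposed in the EfficientNet paper (this formula comes from the paper itself, not from the code in this article) scales all three dimensions with one coefficient φ:

    depth: d = α^φ    width: w = β^φ    resolution: r = γ^φ
    subject to α · β² · γ² ≈ 2,  with α ≥ 1, β ≥ 1, γ ≥ 1

Here φ is the user-chosen compound coefficient (how much extra compute to spend), while α, β, γ are constants found by a small grid search on the EfficientNet-B0 baseline.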
The main idea of EfficientNetV1 is to scale a network's depth, width, and resolution in a balanced way to improve convolutional neural network performance. The method uses a simple but effective compound coefficient to adjust all dimensions uniformly. EfficientNet outperforms existing ConvNets in several respects; in particular, on ImageNet the EfficientNet-B7 model reaches 84.3% top-1 accuracy while remaining smaller and faster at inference. EfficientNet also shows excellent transfer-learning performance on other datasets such as CIFAR-100 and Flowers, with a greatly reduced number of parameters.

Summary: the main innovation of EfficientNetV1 is a new model-scaling method that uses a compound coefficient to scale network depth, width, and resolution uniformly, achieving a more balanced network expansion.

The figure illustrates the model-scaling methods proposed by EfficientNet: (a) is the baseline network, (b)-(d) are conventional scaling methods that each increase only one dimension of the network (width, depth, or resolution), and (e) is EfficientNet's innovation, the compound scaling method, which scales all three dimensions of the network uniformly at a fixed ratio.

三、Core code of EfficientNetV1
import re
import math
import collections
from functools import partial

import torch
from torch import nn
from torch.nn import functional as F
from torch.utils import model_zoo

__all__ = ['efficient']

# Parameters for the entire model (stem, all blocks, and head)
GlobalParams = collections.namedtuple('GlobalParams', [
    'width_coefficient', 'depth_coefficient', 'image_size', 'dropout_rate',
    'num_classes', 'batch_norm_momentum', 'batch_norm_epsilon',
    'drop_connect_rate', 'depth_divisor', 'min_depth', 'include_top'])

# Parameters for an individual model block
BlockArgs = collections.namedtuple('BlockArgs', [
    'num_repeat', 'kernel_size', 'stride', 'expand_ratio',
    'input_filters', 'output_filters', 'se_ratio', 'id_skip'])

# Set GlobalParams and BlockArgs's defaults
GlobalParams.__new__.__defaults__ = (None,) * len(GlobalParams._fields)
BlockArgs.__new__.__defaults__ = (None,) * len(BlockArgs._fields)

# Swish activation function
if hasattr(nn, 'SiLU'):
    Swish = nn.SiLU
else:
    # For compatibility with old PyTorch versions
    class Swish(nn.Module):
        def forward(self, x):
            return x * torch.sigmoid(x)


# A memory-efficient implementation of Swish function
class SwishImplementation(torch.autograd.Function):
    @staticmethod
    def forward(ctx, i):
        result = i * torch.sigmoid(i)
        ctx.save_for_backward(i)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        i = ctx.saved_tensors[0]
        sigmoid_i = torch.sigmoid(i)
        return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))


class MemoryEfficientSwish(nn.Module):
    def forward(self, x):
        return SwishImplementation.apply(x)


def round_filters(filters, global_params):
    """Calculate and round number of filters based on width multiplier.
       Use width_coefficient, depth_divisor and min_depth of global_params.

    Args:
        filters (int): Filters number to be calculated.
        global_params (namedtuple): Global params of the model.

    Returns:
        new_filters: New filters number after calculating.
    """
    multiplier = global_params.width_coefficient
    if not multiplier:
        return filters
    # TODO: modify the params names.
    #       maybe the names (width_divisor, min_width)
    #       are more suitable than (depth_divisor, min_depth).
    divisor = global_params.depth_divisor
    min_depth = global_params.min_depth
    filters *= multiplier
    min_depth = min_depth or divisor  # pay attention to this line when using min_depth
    # follow the formula transferred from official TensorFlow implementation
    new_filters = max(min_depth, int(filters + divisor / 2) // divisor * divisor)
    if new_filters < 0.9 * filters:  # prevent rounding by more than 10%
        new_filters += divisor
    return int(new_filters)


def round_repeats(repeats, global_params):
    """Calculate module's repeat number of a block based on depth multiplier.
       Use depth_coefficient of global_params.

    Args:
        repeats (int): num_repeat to be calculated.
        global_params (namedtuple): Global params of the model.

    Returns:
        new repeat: New repeat number after calculating.
    """
    multiplier = global_params.depth_coefficient
    if not multiplier:
        return repeats
    # follow the formula transferred from official TensorFlow implementation
    return int(math.ceil(multiplier * repeats))


def drop_connect(inputs, p, training):
    """Drop connect.

    Args:
        inputs (tensor: BCWH): Input of this structure.
        p (float: 0.0~1.0): Probability of drop connection.
        training (bool): The running mode.

    Returns:
        output: Output after drop connection.
    """
    assert 0 <= p <= 1, 'p must be in range of [0,1]'

    if not training:
        return inputs

    batch_size = inputs.shape[0]
    keep_prob = 1 - p

    # generate binary_tensor mask according to probability (p for 0, 1-p for 1)
    random_tensor = keep_prob
    random_tensor += torch.rand([batch_size, 1, 1, 1], dtype=inputs.dtype, device=inputs.device)
    binary_tensor = torch.floor(random_tensor)

    output = inputs / keep_prob * binary_tensor
    return output


def get_width_and_height_from_size(x):
    """Obtain height and width from x.

    Args:
        x (int, tuple or list): Data size.

    Returns:
        size: A tuple or list (H, W).
    """
    if isinstance(x, int):
        return x, x
    if isinstance(x, list) or isinstance(x, tuple):
        return x
    else:
        raise TypeError()


def calculate_output_image_size(input_image_size, stride):
    """Calculates the output image size when using Conv2dSamePadding with a stride.
       Necessary for static padding. Thanks to mannatsingh for pointing this out.

    Args:
        input_image_size (int, tuple or list): Size of input image.
        stride (int, tuple or list): Conv2d operation's stride.

    Returns:
        output_image_size: A list [H, W].
    """
    if input_image_size is None:
        return None
    image_height, image_width = get_width_and_height_from_size(input_image_size)
    stride = stride if isinstance(stride, int) else stride[0]
    image_height = int(math.ceil(image_height / stride))
    image_width = int(math.ceil(image_width / stride))
    return [image_height, image_width]


# Note:
# The following 'SamePadding' functions make output size equal ceil(input size/stride).
# Only when stride equals 1, can the output size be the same as input size.
# Don't be confused by their function names ! ! !

def get_same_padding_conv2d(image_size=None):
    """Chooses static padding if you have specified an image size, and dynamic padding otherwise.
       Static padding is necessary for ONNX exporting of models.

    Args:
        image_size (int or tuple): Size of the image.

    Returns:
        Conv2dDynamicSamePadding or Conv2dStaticSamePadding.
    """
    if image_size is None:
        return Conv2dDynamicSamePadding
    else:
        return partial(Conv2dStaticSamePadding, image_size=image_size)


class Conv2dDynamicSamePadding(nn.Conv2d):
    """2D Convolutions like TensorFlow, for a dynamic image size.
       The padding is operated in forward function by calculating dynamically.
    """

    # Tips for 'SAME' mode padding.
    #     Given the following:
    #         i: width or height
    #         s: stride
    #         k: kernel size
    #         d: dilation
    #         p: padding
    #     Output after Conv2d:
    #         o = floor((i+p-((k-1)*d+1))/s+1)
    # If o equals i, i = floor((i+p-((k-1)*d+1))/s+1),
    # => p = (i-1)*s+((k-1)*d+1)-i

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1, groups=1, bias=True):
        super().__init__(in_channels, out_channels, kernel_size, stride, 0, dilation, groups, bias)
        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2

    def forward(self, x):
        ih, iw = x.size()[-2:]
        kh, kw = self.weight.size()[-2:]
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)  # change the output size according to stride ! ! !
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
        return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)


class Conv2dStaticSamePadding(nn.Conv2d):
    """2D Convolutions like TensorFlow's 'SAME' mode, with the given input image size.
       The padding module is calculated in construction function, then used in forward.
    """

    # With the same calculation as Conv2dDynamicSamePadding

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, image_size=None, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size, stride, **kwargs)
        self.stride = self.stride if len(self.stride) == 2 else [self.stride[0]] * 2

        # Calculate padding based on image size and save it
        assert image_size is not None
        ih, iw = (image_size, image_size) if isinstance(image_size, int) else image_size
        kh, kw = self.weight.size()[-2:]
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            self.static_padding = nn.ZeroPad2d((pad_w // 2, pad_w - pad_w // 2,
                                                pad_h // 2, pad_h - pad_h // 2))
        else:
            self.static_padding = nn.Identity()

    def forward(self, x):
        x = self.static_padding(x)
        x = F.conv2d(x, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups)
        return x


def get_same_padding_maxPool2d(image_size=None):
    """Chooses static padding if you have specified an image size, and dynamic padding otherwise.
       Static padding is necessary for ONNX exporting of models.

    Args:
        image_size (int or tuple): Size of the image.

    Returns:
        MaxPool2dDynamicSamePadding or MaxPool2dStaticSamePadding.
    """
    if image_size is None:
        return MaxPool2dDynamicSamePadding
    else:
        return partial(MaxPool2dStaticSamePadding, image_size=image_size)


class MaxPool2dDynamicSamePadding(nn.MaxPool2d):
    """2D MaxPooling like TensorFlow's 'SAME' mode, with a dynamic image size.
       The padding is operated in forward function by calculating dynamically.
    """

    def __init__(self, kernel_size, stride, padding=0, dilation=1, return_indices=False, ceil_mode=False):
        super().__init__(kernel_size, stride, padding, dilation, return_indices, ceil_mode)
        self.stride = [self.stride] * 2 if isinstance(self.stride, int) else self.stride
        self.kernel_size = [self.kernel_size] * 2 if isinstance(self.kernel_size, int) else self.kernel_size
        self.dilation = [self.dilation] * 2 if isinstance(self.dilation, int) else self.dilation

    def forward(self, x):
        ih, iw = x.size()[-2:]
        kh, kw = self.kernel_size
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])
        return F.max_pool2d(x, self.kernel_size, self.stride, self.padding,
                            self.dilation, self.ceil_mode, self.return_indices)


class MaxPool2dStaticSamePadding(nn.MaxPool2d):
    """2D MaxPooling like TensorFlow's 'SAME' mode, with the given input image size.
       The padding module is calculated in construction function, then used in forward.
    """

    def __init__(self, kernel_size, stride, image_size=None, **kwargs):
        super().__init__(kernel_size, stride, **kwargs)
        self.stride = [self.stride] * 2 if isinstance(self.stride, int) else self.stride
        self.kernel_size = [self.kernel_size] * 2 if isinstance(self.kernel_size, int) else self.kernel_size
        self.dilation = [self.dilation] * 2 if isinstance(self.dilation, int) else self.dilation

        # Calculate padding based on image size and save it
        assert image_size is not None
        ih, iw = (image_size, image_size) if isinstance(image_size, int) else image_size
        kh, kw = self.kernel_size
        sh, sw = self.stride
        oh, ow = math.ceil(ih / sh), math.ceil(iw / sw)
        pad_h = max((oh - 1) * self.stride[0] + (kh - 1) * self.dilation[0] + 1 - ih, 0)
        pad_w = max((ow - 1) * self.stride[1] + (kw - 1) * self.dilation[1] + 1 - iw, 0)
        if pad_h > 0 or pad_w > 0:
            self.static_padding = nn.ZeroPad2d((pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2))
        else:
            self.static_padding = nn.Identity()

    def forward(self, x):
        x = self.static_padding(x)
        x = F.max_pool2d(x, self.kernel_size, self.stride, self.padding,
                         self.dilation, self.ceil_mode, self.return_indices)
        return x


################################################################################
# Helper functions for loading model params
################################################################################

# BlockDecoder: A Class for encoding and decoding BlockArgs
# efficientnet_params: A function to query compound coefficient
# get_model_params and efficientnet:
#     Functions to get BlockArgs and GlobalParams for efficientnet
# url_map and url_map_advprop: Dicts of url_map for pretrained weights
# load_pretrained_weights: A function to load pretrained weights


class BlockDecoder(object):
    """Block Decoder for readability,
       straight from the official TensorFlow repository.
    """

    @staticmethod
    def _decode_block_string(block_string):
        """Get a block through a string notation of arguments.

        Args:
            block_string (str): A string notation of arguments.
                                Examples: 'r1_k3_s11_e1_i32_o16_se0.25_noskip'.

        Returns:
            BlockArgs: The namedtuple defined at the top of this file.
        """
        assert isinstance(block_string, str)

        ops = block_string.split('_')
        options = {}
        for op in ops:
            splits = re.split(r'(\d.*)', op)
            if len(splits) >= 2:
                key, value = splits[:2]
                options[key] = value

        # Check stride
        assert (('s' in options and len(options['s']) == 1) or
                (len(options['s']) == 2 and options['s'][0] == options['s'][1]))

        return BlockArgs(
            num_repeat=int(options['r']),
            kernel_size=int(options['k']),
            stride=[int(options['s'][0])],
            expand_ratio=int(options['e']),
            input_filters=int(options['i']),
            output_filters=int(options['o']),
            se_ratio=float(options['se']) if 'se' in options else None,
            id_skip=('noskip' not in block_string))

    @staticmethod
    def _encode_block_string(block):
        """Encode a block to a string.

        Args:
            block (namedtuple): A BlockArgs type argument.

        Returns:
            block_string: A String form of BlockArgs.
        """
        args = [
            'r%d' % block.num_repeat,
            'k%d' % block.kernel_size,
            's%d%d' % (block.strides[0], block.strides[1]),
            'e%s' % block.expand_ratio,
            'i%d' % block.input_filters,
            'o%d' % block.output_filters
        ]
        if 0 < block.se_ratio <= 1:
            args.append('se%s' % block.se_ratio)
        if block.id_skip is False:
            args.append('noskip')
        return '_'.join(args)

    @staticmethod
    def decode(string_list):
        """Decode a list of string notations to specify blocks inside the network.

        Args:
            string_list (list[str]): A list of strings, each string is a notation of block.

        Returns:
            blocks_args: A list of BlockArgs namedtuples of block args.
        """
        assert isinstance(string_list, list)
        blocks_args = []
        for block_string in string_list:
            blocks_args.append(BlockDecoder._decode_block_string(block_string))
        return blocks_args

    @staticmethod
    def encode(blocks_args):
        """Encode a list of BlockArgs to a list of strings.

        Args:
            blocks_args (list[namedtuples]): A list of BlockArgs namedtuples of block args.

        Returns:
            block_strings: A list of strings, each string is a notation of block.
        """
        block_strings = []
        for block in blocks_args:
            block_strings.append(BlockDecoder._encode_block_string(block))
        return block_strings


def efficientnet_params(model_name):
    """Map EfficientNet model name to parameter coefficients.

    Args:
        model_name (str): Model name to be queried.

    Returns:
        params_dict[model_name]: A (width,depth,res,dropout) tuple.
    """
    params_dict = {
        # Coefficients:   width,depth,res,dropout
        'efficientnet-b0': (1.0, 1.0, 224, 0.2),
        'efficientnet-b1': (1.0, 1.1, 240, 0.2),
        'efficientnet-b2': (1.1, 1.2, 260, 0.3),
        'efficientnet-b3': (1.2, 1.4, 300, 0.3),
        'efficientnet-b4': (1.4, 1.8, 380, 0.4),
        'efficientnet-b5': (1.6, 2.2, 456, 0.4),
        'efficientnet-b6': (1.8, 2.6, 528, 0.5),
        'efficientnet-b7': (2.0, 3.1, 600, 0.5),
        'efficientnet-b8': (2.2, 3.6, 672, 0.5),
        'efficientnet-l2': (4.3, 5.3, 800, 0.5),
    }
    return params_dict[model_name]


def efficientnet(width_coefficient=None, depth_coefficient=None, image_size=None,
                 dropout_rate=0.2, drop_connect_rate=0.2, num_classes=1000, include_top=True):
    """Create BlockArgs and GlobalParams for efficientnet model.

    Args:
        width_coefficient (float)
        depth_coefficient (float)
        image_size (int)
        dropout_rate (float)
        drop_connect_rate (float)
        num_classes (int)

        Meaning as the name suggests.

    Returns:
        blocks_args, global_params.
    """

    # Blocks args for the whole model (efficientnet-b0 by default)
    # It will be modified in the construction of EfficientNet Class according to model
    blocks_args = [
        'r1_k3_s11_e1_i32_o16_se0.25',
        'r2_k3_s22_e6_i16_o24_se0.25',
        'r2_k5_s22_e6_i24_o40_se0.25',
        'r3_k3_s22_e6_i40_o80_se0.25',
        'r3_k5_s11_e6_i80_o112_se0.25',
        'r4_k5_s22_e6_i112_o192_se0.25',
        'r1_k3_s11_e6_i192_o320_se0.25',
    ]
    blocks_args = BlockDecoder.decode(blocks_args)

    global_params = GlobalParams(
        width_coefficient=width_coefficient,
        depth_coefficient=depth_coefficient,
        image_size=image_size,
        dropout_rate=dropout_rate,
        num_classes=num_classes,
        batch_norm_momentum=0.99,
        batch_norm_epsilon=1e-3,
        drop_connect_rate=drop_connect_rate,
        depth_divisor=8,
        min_depth=None,
        include_top=include_top,
    )

    return blocks_args, global_params


def get_model_params(model_name, override_params):
    """Get the block args and global params for a given model name.

    Args:
        model_name (str): Model's name.
        override_params (dict): A dict to modify global_params.

    Returns:
        blocks_args, global_params
    """
    if model_name.startswith('efficientnet'):
        w, d, s, p = efficientnet_params(model_name)
        # note: all models have drop connect rate = 0.2
        blocks_args, global_params = efficientnet(
            width_coefficient=w, depth_coefficient=d, dropout_rate=p, image_size=s)
    else:
        raise NotImplementedError('model name is not pre-defined: {}'.format(model_name))
    if override_params:
        # ValueError will be raised here if override_params has fields not included in global_params.
        global_params = global_params._replace(**override_params)
    return blocks_args, global_params


# train with Standard methods
# check more details in paper(EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks)
url_map = {
    'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b0-355c32eb.pth',
    'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b1-f1951068.pth',
    'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b2-8bb594d6.pth',
    'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b3-5fb5a3c3.pth',
    'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b4-6ed6700e.pth',
    'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b5-b6417697.pth',
    'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b6-c76e70fd.pth',
    'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/efficientnet-b7-dcc49843.pth',
}

# train with Adversarial Examples (AdvProp)
# check more details in paper(Adversarial Examples Improve Image Recognition)
url_map_advprop = {
    'efficientnet-b0': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b0-b64d5a18.pth',
    'efficientnet-b1': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b1-0f3ce85a.pth',
    'efficientnet-b2': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b2-6e9d97e5.pth',
    'efficientnet-b3': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b3-cdd7c0f4.pth',
    'efficientnet-b4': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b4-44fb3a87.pth',
    'efficientnet-b5': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b5-86493f6b.pth',
    'efficientnet-b6': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b6-ac80338e.pth',
    'efficientnet-b7': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b7-4652b6dd.pth',
    'efficientnet-b8': 'https://github.com/lukemelas/EfficientNet-PyTorch/releases/download/1.0/adv-efficientnet-b8-22a8fe65.pth',
}

# TODO: add the pretrained weights url map of 'efficientnet-l2'


def load_pretrained_weights(model, model_name, weights_path=None, load_fc=True, advprop=False, verbose=True):
    """Loads pretrained weights from weights path or download using url.

    Args:
        model (Module): The whole model of efficientnet.
        model_name (str): Model name of efficientnet.
        weights_path (None or str):
            str: path to pretrained weights file on the local disk.
            None: use pretrained weights downloaded from the Internet.
        load_fc (bool): Whether to load pretrained weights for fc layer at the end of the model.
        advprop (bool): Whether to load pretrained weights
                        trained with advprop (valid when weights_path is None).
    """
    if isinstance(weights_path, str):
        state_dict = torch.load(weights_path)
    else:
        # AutoAugment or Advprop (different preprocessing)
        url_map_ = url_map_advprop if advprop else url_map
        state_dict = model_zoo.load_url(url_map_[model_name])

    if load_fc:
        ret = model.load_state_dict(state_dict, strict=False)
        assert not ret.missing_keys, 'Missing keys when loading pretrained weights: {}'.format(ret.missing_keys)
    else:
        state_dict.pop('_fc.weight')
        state_dict.pop('_fc.bias')
        ret = model.load_state_dict(state_dict, strict=False)
        assert set(ret.missing_keys) == set(
            ['_fc.weight', '_fc.bias']), 'Missing keys when loading pretrained weights: {}'.format(ret.missing_keys)
    assert not ret.unexpected_keys, 'Missing keys when loading pretrained weights: {}'.format(ret.unexpected_keys)

    if verbose:
        print('Loaded pretrained weights for {}'.format(model_name))


VALID_MODELS = (
    'efficientnet-b0', 'efficientnet-b1', 'efficientnet-b2', 'efficientnet-b3',
    'efficientnet-b4', 'efficientnet-b5', 'efficientnet-b6', 'efficientnet-b7',
    'efficientnet-b8',

    # Support the construction of 'efficientnet-l2' without pretrained weights
    'efficientnet-l2'
)


class MBConvBlock(nn.Module):
    """Mobile Inverted Residual Bottleneck Block.

    Args:
        block_args (namedtuple): BlockArgs, defined in utils.py.
        global_params (namedtuple): GlobalParam, defined in utils.py.
        image_size (tuple or list): [image_height, image_width].

    References:
        [1] https://arxiv.org/abs/1704.04861 (MobileNet v1)
        [2] https://arxiv.org/abs/1801.04381 (MobileNet v2)
        [3] https://arxiv.org/abs/1905.02244 (MobileNet v3)
    """

    def __init__(self, block_args, global_params, image_size=None):
        super().__init__()
        self._block_args = block_args
        self._bn_mom = 1 - global_params.batch_norm_momentum  # pytorch's difference from tensorflow
        self._bn_eps = global_params.batch_norm_epsilon
        self.has_se = (self._block_args.se_ratio is not None) and (0 < self._block_args.se_ratio <= 1)
        self.id_skip = block_args.id_skip  # whether to use skip connection and drop connect

        # Expansion phase (Inverted Bottleneck)
        inp = self._block_args.input_filters  # number of input channels
        oup = self._block_args.input_filters * self._block_args.expand_ratio  # number of output channels
        if self._block_args.expand_ratio != 1:
            Conv2d = get_same_padding_conv2d(image_size=image_size)
            self._expand_conv = Conv2d(in_channels=inp, out_channels=oup, kernel_size=1, bias=False)
            self._bn0 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)
            # image_size = calculate_output_image_size(image_size, 1)  # -- this wouldn't modify image_size

        # Depthwise convolution phase
        k = self._block_args.kernel_size
        s = self._block_args.stride
        Conv2d = get_same_padding_conv2d(image_size=image_size)
        self._depthwise_conv = Conv2d(
            in_channels=oup, out_channels=oup, groups=oup,  # groups makes it depthwise
            kernel_size=k, stride=s, bias=False)
        self._bn1 = nn.BatchNorm2d(num_features=oup, momentum=self._bn_mom, eps=self._bn_eps)
        image_size = calculate_output_image_size(image_size, s)

        # Squeeze and Excitation layer, if desired
        if self.has_se:
            Conv2d = get_same_padding_conv2d(image_size=(1, 1))
            num_squeezed_channels = max(1, int(self._block_args.input_filters * self._block_args.se_ratio))
            self._se_reduce = Conv2d(in_channels=oup, out_channels=num_squeezed_channels, kernel_size=1)
            self._se_expand = Conv2d(in_channels=num_squeezed_channels, out_channels=oup, kernel_size=1)

        # Pointwise convolution phase
        final_oup = self._block_args.output_filters
        Conv2d = get_same_padding_conv2d(image_size=image_size)
        self._project_conv = Conv2d(in_channels=oup, out_channels=final_oup, kernel_size=1, bias=False)
        self._bn2 = nn.BatchNorm2d(num_features=final_oup, momentum=self._bn_mom, eps=self._bn_eps)
        self._swish = MemoryEfficientSwish()

    def forward(self, inputs, drop_connect_rate=None):
        """MBConvBlock's forward function.

        Args:
            inputs (tensor): Input tensor.
            drop_connect_rate (bool): Drop connect rate (float, between 0 and 1).

        Returns:
            Output of this block after processing.
        """

        # Expansion and Depthwise Convolution
        x = inputs
        if self._block_args.expand_ratio != 1:
            x = self._expand_conv(inputs)
            x = self._bn0(x)
            x = self._swish(x)

        x = self._depthwise_conv(x)
        x = self._bn1(x)
        x = self._swish(x)

        # Squeeze and Excitation
        if self.has_se:
            x_squeezed = F.adaptive_avg_pool2d(x, 1)
            x_squeezed = self._se_reduce(x_squeezed)
            x_squeezed = self._swish(x_squeezed)
            x_squeezed = self._se_expand(x_squeezed)
            x = torch.sigmoid(x_squeezed) * x

        # Pointwise Convolution
        x = self._project_conv(x)
        x = self._bn2(x)

        # Skip connection and drop connect
        input_filters, output_filters = self._block_args.input_filters, self._block_args.output_filters
        if self.id_skip and self._block_args.stride == 1 and input_filters == output_filters:
            # The combination of skip connection and drop connect brings about stochastic depth.
            if drop_connect_rate:
                x = drop_connect(x, p=drop_connect_rate, training=self.training)
            x = x + inputs  # skip connection
        return x

    def set_swish(self, memory_efficient=True):
        """Sets swish function as memory efficient (for training) or standard (for export).

        Args:
            memory_efficient (bool): Whether to use memory-efficient version of swish.
        """
        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()


class EfficientNet(nn.Module):
    def __init__(self, blocks_args=None, global_params=None):
        super().__init__()
        assert isinstance(blocks_args, list), 'blocks_args should be a list'
        assert len(blocks_args) > 0, 'block args must be greater than 0'
        self._global_params = global_params
        self._blocks_args = blocks_args

        # Batch norm parameters
        bn_mom = 1 - self._global_params.batch_norm_momentum
        bn_eps = self._global_params.batch_norm_epsilon

        # Get stem static or dynamic convolution depending on image size
        image_size = global_params.image_size
        Conv2d = get_same_padding_conv2d(image_size=image_size)

        # Stem
        in_channels = 3  # rgb
        out_channels = round_filters(32, self._global_params)  # number of output channels
        self._conv_stem = Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=False)
        self._bn0 = nn.BatchNorm2d(num_features=out_channels, momentum=bn_mom, eps=bn_eps)
        image_size = calculate_output_image_size(image_size, 2)

        # Build blocks
        self._blocks = nn.ModuleList([])
        for block_args in self._blocks_args:

            # Update block input and output filters based on depth multiplier.
            block_args = block_args._replace(
                input_filters=round_filters(block_args.input_filters, self._global_params),
                output_filters=round_filters(block_args.output_filters, self._global_params),
                num_repeat=round_repeats(block_args.num_repeat, self._global_params)
            )

            # The first block needs to take care of stride and filter size increase.
            self._blocks.append(MBConvBlock(block_args, self._global_params, image_size=image_size))
            image_size = calculate_output_image_size(image_size, block_args.stride)
            if block_args.num_repeat > 1:  # modify block_args to keep same output size
                block_args = block_args._replace(input_filters=block_args.output_filters, stride=1)
            for _ in range(block_args.num_repeat - 1):
                self._blocks.append(MBConvBlock(block_args, self._global_params, image_size=image_size))
                # image_size = calculate_output_image_size(image_size, block_args.stride)  # stride = 1

        # Head
        in_channels = block_args.output_filters  # output of final block
        out_channels = round_filters(1280, self._global_params)
        Conv2d = get_same_padding_conv2d(image_size=image_size)
        self._conv_head = Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self._bn1 = nn.BatchNorm2d(num_features=out_channels, momentum=bn_mom, eps=bn_eps)

        # Final linear layer
        self._avg_pooling = nn.AdaptiveAvgPool2d(1)
        if self._global_params.include_top:
            self._dropout = nn.Dropout(self._global_params.dropout_rate)
            self._fc = nn.Linear(out_channels, self._global_params.num_classes)

        # set activation to memory efficient swish by default
        self._swish = MemoryEfficientSwish()

        self.width_list = [i.size(1) for i in self.forward(torch.randn(1, 3, 640, 640))]

    def set_swish(self, memory_efficient=True):
        """Sets swish function as memory efficient (for training) or standard (for export).

        Args:
            memory_efficient (bool): Whether to use memory-efficient version of swish.
        """
        self._swish = MemoryEfficientSwish() if memory_efficient else Swish()
        for block in self._blocks:
            block.set_swish(memory_efficient)

    def extract_endpoints(self, inputs):
        # Use convolution layer to extract features
        # from reduction levels i in [1, 2, 3, 4, 5].
        #
        # Args:
        #     inputs (tensor): Input tensor.
        #
        # Returns:
        #     Dictionary of last intermediate features
        #     with reduction levels i in [1, 2, 3, 4, 5].
        #     Example:
        #         import torch
        #         from efficientnet.model import EfficientNet
        #         inputs = torch.rand(1, 3, 224, 224)
        #         model = EfficientNet.from_pretrained('efficientnet-b0')
        #         endpoints = model.extract_endpoints(inputs)
        #         print(endpoints['reduction_1'].shape)  # torch.Size([1, 16, 112, 112])
        #         print(endpoints['reduction_2'].shape)  # torch.Size([1, 24, 56, 56])
        #         print(endpoints['reduction_3'].shape)  # torch.Size([1, 40, 28, 28])
        #         print(endpoints['reduction_4'].shape)  # torch.Size([1, 112, 14, 14])
        #         print(endpoints['reduction_5'].shape)  # torch.Size([1, 320, 7, 7])
        #         print(endpoints['reduction_6'].shape)  # torch.Size([1, 1280, 7, 7])
        endpoints = dict()

        # Stem
        x = self._swish(self._bn0(self._conv_stem(inputs)))
        prev_x = x

        # Blocks
        for idx, block in enumerate(self._blocks):
            drop_connect_rate = self._global_params.drop_connect_rate
            if drop_connect_rate:
                drop_connect_rate *= float(idx) / len(self._blocks)  # scale drop connect_rate
            x = block(x, drop_connect_rate=drop_connect_rate)
            if prev_x.size(2) > x.size(2):
                endpoints['reduction_{}'.format(len(endpoints) + 1)] = prev_x
            elif idx == len(self._blocks) - 1:
                endpoints['reduction_{}'.format(len(endpoints) + 1)] = x
            prev_x = x

        # Head
        x = self._swish(self._bn1(self._conv_head(x)))
        endpoints['reduction_{}'.format(len(endpoints) + 1)] = x

        return endpoints

    def forward(self, inputs):
        """Use convolution layer to extract feature.

        Args:
            inputs (tensor): Input tensor.

        Returns:
            Output of the final convolution
            layer in the efficientnet model.
        """
        # Stem
        x = self._swish(self._bn0(self._conv_stem(inputs)))
        unique_tensors = {}
        # Blocks
        for idx, block in enumerate(self._blocks):
            drop_connect_rate = self._global_params.drop_connect_rate
            if drop_connect_rate:
                drop_connect_rate *= float(idx) / len(self._blocks)  # scale drop connect_rate
            x = block(x, drop_connect_rate=drop_connect_rate)
            width, height = x.shape[2], x.shape[3]
            unique_tensors[(width, height)] = x
        result_list = list(unique_tensors.values())[-4:]
        # Head
        return result_list

    @classmethod
    def from_name(cls, model_name, in_channels=3, **override_params):
        """Create an efficientnet model according to name.

        Args:
            model_name (str): Name for efficientnet.
            in_channels (int): Input data's channel number.
            override_params (other key word params):
                Params to override model's global_params.
                Optional key:
                    'width_coefficient', 'depth_coefficient',
                    'image_size', 'dropout_rate',
                    'num_classes', 'batch_norm_momentum',
                    'batch_norm_epsilon', 'drop_connect_rate',
                    'depth_divisor', 'min_depth'

        Returns:
            An efficientnet model.
        """
        cls._check_model_name_is_valid(model_name)
        blocks_args, global_params = get_model_params(model_name, override_params)
        model = cls(blocks_args, global_params)
        model._change_in_channels(in_channels)
        return model

    @classmethod
    def from_pretrained(cls, model_name, weights_path=None, advprop=False,
                        in_channels=3, num_classes=1000, **override_params):
        """Create an efficientnet model according to name.

        Args:
            model_name (str): Name for efficientnet.
            weights_path (None or str):
                str: path to pretrained weights file on the local disk.
                None: use pretrained weights downloaded from the Internet.
            advprop (bool):
                Whether to load pretrained weights
                trained with advprop (valid when weights_path is None).
            in_channels (int): Input data's channel number.
            num_classes (int):
                Number of categories for classification.
                It controls the output size for final linear layer.
            override_params (other key word params):
                Params to override model's global_params.
                Optional key:
                    'width_coefficient', 'depth_coefficient',
                    'image_size', 'dropout_rate',
                    'batch_norm_momentum',
                    'batch_norm_epsilon', 'drop_connect_rate',
                    'depth_divisor', 'min_depth'

        Returns:
            A pretrained efficientnet model.
        """
        model = cls.from_name(model_name, num_classes=num_classes, **override_params)
        load_pretrained_weights(model, model_name, weights_path=weights_path,
                                load_fc=(num_classes == 1000), advprop=advprop)
        model._change_in_channels(in_channels)
        return model

    @classmethod
    def get_image_size(cls, model_name):
        """Get the input image size for a given efficientnet model.

        Args:
            model_name (str): Name for efficientnet.

        Returns:
            Input image size (resolution).
        """
        cls._check_model_name_is_valid(model_name)
        _, _, res, _ = efficientnet_params(model_name)
        return res

    @classmethod
    def _check_model_name_is_valid(cls, model_name):
        """Validates model name.

        Args:
            model_name (str): Name for efficientnet.

        Returns:
            bool: Is a valid name or not.
        """
        if model_name not in VALID_MODELS:
            raise ValueError('model_name should be one of: ' + ', '.join(VALID_MODELS))

    def _change_in_channels(self, in_channels):
        """Adjust model's first convolution layer to in_channels, if in_channels not equals 3.

        Args:
            in_channels (int): Input data's channel number.
        """
        if in_channels != 3:
            Conv2d = get_same_padding_conv2d(image_size=self._global_params.image_size)
            out_channels = round_filters(32, self._global_params)
            self._conv_stem = Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=False)


def efficient(model_name='efficientnet-b0', pretrained=False):
    if pretrained:
        model = EfficientNet.from_pretrained('{}'.format(model_name))
    else:
        model = EfficientNet.from_name('{}'.format(model_name))
    return model


if __name__ == '__main__':
    # VALID_MODELS = (
    #     'efficientnet-b0', 'efficientnet-b1', 'efficientnet-b2', 'efficientnet-b3',
    #     'efficientnet-b4', 'efficientnet-b5', 'efficientnet-b6', 'efficientnet-b7',
    #     'efficientnet-b8',
    #     # Support the construction of 'efficientnet-l2' without pretrained weights
    #     'efficientnet-l2'
    # )

    # Generating Sample image
    image_size = (1, 3, 640, 640)
    image = torch.rand(*image_size)

    # Model
    model = efficient('efficientnet-b0')

    out = model(image)
    print(len(out))
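One usage note on the code above: the yaml in Section 5.1 passes no arguments to efficient, so the variant is selected by the model_name default of the factory function at the bottom of the file. A minimal sketch of switching to another variant (efficientnet-b1 here is only an example; pretrained=True downloads the ImageNet weights listed in url_map):

def efficient(model_name='efficientnet-b1', pretrained=False):
    # Sketch: change the default to pick a different EfficientNet variant.
    if pretrained:
        model = EfficientNet.from_pretrained(model_name)
    else:
        model = EfficientNet.from_name(model_name)
    return model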
四、Step-by-step: adding the EfficientNetV1 mechanism
Below I show how to modify the network structure. Changing the backbone is fairly involved, so I will also upload my tasks.py file to CSDN. If your own modification does not work, you can try replacing your tasks.py with mine, after which you only need to perform steps 1, 2, 3, and 5.
⭐ Be careful at every step of the modification process ⭐

4.1 Modification 1
First, go to the directory ultralytics/nn and create a new directory inside it named Addmodules; from now on this directory will hold all of our improvement mechanisms. Then create a new py file inside it and paste in the code from Section III. You can name the file after the improvement in the article; use whatever naming habit you prefer.
4.2 Modification 2

Second, inside the directory we just created, create a new py file named __init__.py (only one is needed), and import this article's improvement mechanism inside it. The other imports you may see in my screenshots are for improvements that have not been published yet; ignore any you don't have. A minimal sketch follows.
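For example, assuming the file from Modification 1 was named EfficientNetV1.py (this file name is a placeholder — use whatever you named yours), the whole __init__.py is a single import:

# ultralytics/nn/Addmodules/__init__.py
# 'EfficientNetV1' is a placeholder for the file name you chose in Modification 1.
from .EfficientNetV1 import *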
4.3 Modification 3

Third, open the file ultralytics/nn/tasks.py and import all of our improvement mechanisms at the top. If you use several of my improvements, this step only needs to be done once; a sketch is shown below.
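A minimal sketch of that import (one line near the top of ultralytics/nn/tasks.py covers every module exported by the Addmodules package):

# near the top of ultralytics/nn/tasks.py
from ultralytics.nn.Addmodules import *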
4.4 Modification 4

Add the two lines of code shown in the screenshot of the original post (the image is not reproduced here).
4.5 Modification 5
Go to roughly line 700 (see the screenshot in the original post for the exact spot) and modify it as shown there, adding the part in the red box. Note that there are no parentheses inside the braces, just the bare function names. My own file already contains many additions that will be published later; ignore any you don't have.

                elif m in {efficient}:  # add the corresponding modules here; they are all handled the same way
                    m = m(*args)
                    c2 = m.width_list  # return the channel list
                    backbone = True
4.6 Modification 6

Replace the content inside the red box (see the screenshot in the original post) with the following code.
        if isinstance(c2, list):
            m_ = m
            m_.backbone = True
        else:
            m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        m.np = sum(x.numel() for x in m_.parameters())  # number params
        m_.i, m_.f, m_.type = i + 4 if backbone else i, f, t  # attach index, 'from' index, type
        if verbose:
            LOGGER.info(f'{i:>3}{str(f):>20}{n_:>3}{m.np:10.0f}  {t:<45}{str(args):<30}')  # print
        save.extend(x % (i + 4 if backbone else i) for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        if isinstance(c2, list):
            ch.extend(c2)
            if len(c2) != 5:
                ch.insert(0, 0)
        else:
            ch.append(c2)
4.7 Modification 7

Modification 7: pay very close attention here. The function to change is NOT the YOLOv8 predict near the top of the file, but the RT-DETR predict at around line 400. The unmodified model's predict looks like the code below; simply replace it with the version I provide.
The code is as follows:

    def predict(self, x, profile=False, visualize=False, batch=None, augment=False, embed=None):
        """
        Perform a forward pass through the model.

        Args:
            x (torch.Tensor): The input tensor.
            profile (bool, optional): If True, profile the computation time for each layer. Defaults to False.
            visualize (bool, optional): If True, save feature maps for visualization. Defaults to False.
            batch (dict, optional): Ground truth data for evaluation. Defaults to None.
            augment (bool, optional): If True, perform data augmentation during inference. Defaults to False.
            embed (list, optional): A list of feature vectors/embeddings to return.

        Returns:
            (torch.Tensor): Model's output tensor.
        """
        y, dt, embeddings = [], [], []  # outputs
        for m in self.model[:-1]:  # except the head part
            if m.f != -1:  # if not from previous layer
                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
            if profile:
                self._profile_one_layer(m, x, dt)
            if hasattr(m, 'backbone'):
                x = m(x)
                if len(x) != 5:  # 0 - 5
                    x.insert(0, None)
                for index, i in enumerate(x):
                    if index in self.save:
                        y.append(i)
                    else:
                        y.append(None)
                x = x[-1]  # the last output is passed on to the next layer
            else:
                x = m(x)  # run
                y.append(x if m.i in self.save else None)  # save output
            if visualize:
                feature_visualization(x, m.type, m.i, save_dir=visualize)
            if embed and m.i in embed:
                embeddings.append(nn.functional.adaptive_avg_pool2d(x, (1, 1)).squeeze(-1).squeeze(-1))  # flatten
                if m.i == max(embed):
                    return torch.unbind(torch.cat(embeddings, 1), dim=0)
        head = self.model[-1]
        x = head([y[j] for j in head.f], batch)  # head inference
        return x
4.8 Modification 8

Replace the 's' shown in the screenshot with 640. For some backbones this step can be skipped, but others will throw an error without it, so we make the change anyway.
4.9 Solving RT-DETR's failure to print FLOPs

The GFLOPs computation misbehaves and nothing is printed, so one more place needs to be modified. Open the file ultralytics/utils/torch_utils.py and locate the function below (match it by name; the 640 in the red box of the screenshot may differ from yours), then replace the entire function with the code I provide. A quick verification sketch follows the code.

def get_flops(model, imgsz=640):
    """Return a YOLO model's FLOPs."""
    try:
        model = de_parallel(model)
        p = next(model.parameters())
        # stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32  # max stride
        stride = 640
        im = torch.empty((1, 3, stride, stride), device=p.device)  # input image in BCHW format
        flops = thop.profile(deepcopy(model), inputs=[im], verbose=False)[0] / 1E9 * 2 if thop else 0  # stride GFLOPs
        imgsz = imgsz if isinstance(imgsz, list) else [imgsz, imgsz]  # expand if int/float
        return flops * imgsz[0] / stride * imgsz[1] / stride  # 640x640 GFLOPs
    except Exception:
        return 0
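After replacing the function, a quick sanity check (a sketch — the yaml path is a placeholder for the file from Section 5.1) is to build the model and print its FLOPs; a nonzero value means the fix took effect:

from ultralytics import RTDETR
from ultralytics.utils.torch_utils import get_flops

model = RTDETR('rtdetr-EfficientNetV1.yaml')  # placeholder yaml path
print(f'{get_flops(model.model, imgsz=640):.1f} GFLOPs')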
4.10 Optional modification

Some readers' datasets contain a few unusual images that trigger shape-mismatch errors during validation. If you hit such an error while validating, you can pin the validation image size as follows:
Open the file ultralytics/models/yolo/detect/train.py. Inside the DetectionTrainer class, in the build_dataset function, change the parameter rect=mode == 'val' to rect=False, as sketched below.
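A sketch of the function after the change, based on the ultralytics source at the time of writing (your local copy may differ slightly; the only intended change is the rect argument):

# ultralytics/models/yolo/detect/train.py -- inside DetectionTrainer
def build_dataset(self, img_path, mode='train', batch=None):
    """Build YOLO Dataset."""
    gs = max(int(de_parallel(self.model).stride.max() if self.model else 0), 32)
    # was: rect=mode == 'val' -- pinned to False to avoid shape-mismatch errors at validation
    return build_yolo_dataset(self.args, img_path, batch, self.data, mode=mode, rect=False, stride=gs)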
五、The EfficientNetV1 yaml file

5.1 The yaml file
Copy the yaml file below and run it with the training script I provide. The RT-DETR parameter-tuning part will be covered in later articles; for now this free portion of the column does not include it.
# Ultralytics YOLO 🚀, AGPL-3.0 license
# RT-DETR-l object detection model with P3-P5 outputs. For details see https://docs.ultralytics.com/models/rtdetr

# Parameters
nc: 80  # number of classes
scales:  # model compound scaling constants, i.e. 'model=yolov8n-cls.yaml' will call yolov8-cls.yaml with scale 'n'
  # [depth, width, max_channels]
  l: [1.00, 1.00, 1024]

backbone:
  # [from, repeats, module, args]
  - [-1, 1, efficient, []]  # 4

head:
  - [-1, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 5 input_proj.2
  - [-1, 1, AIFI, [1024, 8]]  # 6
  - [-1, 1, Conv, [256, 1, 1]]  # 7, Y5, lateral_convs.0

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]  # 8
  - [3, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 9 input_proj.1
  - [[-2, -1], 1, Concat, [1]]  # 10
  - [-1, 3, RepC3, [256, 0.5]]  # 11, fpn_blocks.0
  - [-1, 1, Conv, [256, 1, 1]]  # 12, Y4, lateral_convs.1

  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]  # 13
  - [2, 1, Conv, [256, 1, 1, None, 1, 1, False]]  # 14 input_proj.0
  - [[-2, -1], 1, Concat, [1]]  # 15 cat backbone P4
  - [-1, 3, RepC3, [256, 0.5]]  # X3 (16), fpn_blocks.1

  - [-1, 1, Conv, [256, 3, 2]]  # 17, downsample_convs.0
  - [[-1, 12], 1, Concat, [1]]  # 18 cat Y4
  - [-1, 3, RepC3, [256, 0.5]]  # F4 (19), pan_blocks.0

  - [-1, 1, Conv, [256, 3, 2]]  # 20, downsample_convs.1
  - [[-1, 7], 1, Concat, [1]]  # 21 cat Y5
  - [-1, 3, RepC3, [256, 0.5]]  # F5 (22), pan_blocks.1

  - [[16, 19, 22], 1, RTDETRDecoder, [nc, 256, 300, 4, 8, 3]]  # Detect(P3, P4, P5)
5.2 The run script

Create a train.py file, paste the code below into it, replace the placeholder paths with your own, and run it to start training.
import warnings
from ultralytics import RTDETR
warnings.filterwarnings('ignore')

if __name__ == '__main__':
    model = RTDETR('path/to/the/yaml/you/want/to/run.yaml')  # replace with the yaml file you want to run
    # model.load('')  # optionally load pretrained weights for your version
    model.train(data=r'path/to/your/dataset/data.yaml',  # replace with your dataset path
                cache=False,
                imgsz=640,
                epochs=72,
                batch=4,
                workers=0,
                device='0',
                project='runs/RT-DETR-train',
                name='exp',
                # amp=True
                )
5.3 Screenshot of successful training

Below is a screenshot of a successful run, confirming that my improvement mechanism works; one full epoch of training has completed (the image is too large to also capture the second epoch).
六、Summary

Starting today, the RT-DETR paper-focused column officially begins updating. Its content will roll out quickly, with a large volume of updates in the short term, and the price will rise in steps, so if you want to study RT-DETR improvements with me, subscribe to the column early. This column aims to be the best RT-DETR column on the web and to serve readers who want to publish papers. Column link: RT-DETR paper-focused column, continuously reproducing top-conference content — the "paper harvester" RT-DETR.