SqueezeNet in Caffe

I previously built SqueezeNet with PyTorch, which I personally find the most convenient framework, but some projects specifically require a Caffe model, so this post builds a SqueezeNet network in Caffe as well.

Data processing

The data has to be prepared first. Unlike PyTorch, which reads a dataset directly given its root directory, Caffe reads from a txt file that lists each image's path together with its class label. A script to generate this txt file:

import os
import random

folder = 'cotta'  # dataset directory (relative path)
names = os.listdir(folder)

f1 = open('/train_txt/train_cotta.txt', 'a')  # path of the generated txt
f2 = open('/train_txt/test_water_workcloth.txt', 'a')

for name in names:
    imgnames = os.listdir(folder + '/' + name)
    random.shuffle(imgnames)
    numimg = len(imgnames)
    for i in range(numimg):
        # class directories are named like "0_other", so the first
        # character of the directory name is the class label
        f1.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))
        # to split off 10% as a test set instead:
        # if i < int(0.9 * numimg):
        #     f1.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))
        # else:
        #     f2.write('%s %s\n' % (folder + '/' + name + '/' + imgnames[i], name[0]))
# f2.close()
f1.close()

The dataset layout is the same as for PyTorch: the images of each class sit in one directory whose name is the class name, and the script sits at the same level as that directory. Running the script produces a txt file with contents like:

/cotta/0_other/0_1_391_572_68_68.jpg 0
/cotta/1_longSleeves/9605_1_5_565_357_82_70.jpg 1
/cotta/2_cotta/713_0.99796_1_316_162_96_87.jpg 2
......
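Before handing the list file to Caffe it can be worth a sanity check: every line must be an image path followed by an integer label, and the per-class counts reveal imbalance. The sketch below is my own addition (the helper name `check_list_file` is hypothetical; the `path label` line format comes from the script above):

```python
# Sanity-check a Caffe ImageData list file: each line must be
# "<image path> <integer label>"; returns the per-class line counts.
from collections import Counter

def check_list_file(txt_path):
    counts = Counter()
    with open(txt_path) as f:
        for lineno, line in enumerate(f, 1):
            parts = line.split()
            if len(parts) != 2 or not parts[1].isdigit():
                raise ValueError('bad line %d: %r' % (lineno, line))
            counts[int(parts[1])] += 1
    return counts

# counts = check_list_file('/train_txt/train_cotta.txt')
# print(counts)
```

A malformed line (missing label, stray spaces in a filename) would otherwise only surface as a cryptic error inside the ImageData layer at training time.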
(Each line: image path, then its class label.)

Network definition file: trainval.prototxt

layer {
  name: "data"
  type: "ImageData"
  top: "data"
  top: "label"
  transform_param {
    mirror: true
    crop_size: 96
  }
  image_data_param {
    source: "/train_txt/train_cotta.txt"  # path of the generated txt
    root_folder: "/data/"                 # path of the directory holding the dataset
    batch_size: 64
    shuffle: true
    new_height: 96
    new_width: 96
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param { num_output: 96  kernel_size: 3  stride: 1  pad: 1  weight_filler { type: "xavier" } }
}
layer { name: "BatchNorm1" type: "BatchNorm" bottom: "conv1" top: "BatchNorm1" }
layer { name: "relu_conv1" type: "ReLU" bottom: "BatchNorm1" top: "BatchNorm1" }
layer { name: "pool1" type: "Pooling" bottom: "BatchNorm1" top: "pool1" pooling_param { pool: MAX  kernel_size: 2  stride: 2 } }
layer {
  name: "fire2/squeeze1x1"
  type: "Convolution"
  bottom: "pool1"
  top: "fire2/squeeze1x1"
  convolution_param { num_output: 16  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire2/bn_squeeze1x1" type: "BatchNorm" bottom: "fire2/squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer { name: "fire2/relu_squeeze1x1" type: "ReLU" bottom: "fire2/bn_squeeze1x1" top: "fire2/bn_squeeze1x1" }
layer {
  name: "fire2/expand1x1"
  type: "Convolution"
  bottom: "fire2/bn_squeeze1x1"
  top: "fire2/expand1x1"
  convolution_param { num_output: 64  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire2/bn_expand1x1" type: "BatchNorm" bottom: "fire2/expand1x1" top: "fire2/bn_expand1x1" }
layer { name: "fire2/relu_expand1x1" type: "ReLU" bottom: "fire2/bn_expand1x1" top: "fire2/bn_expand1x1" }
layer {
  name: "fire2/expand3x3"
  type: "Convolution"
  bottom: "fire2/bn_expand1x1"
  top: "fire2/expand3x3"
  convolution_param { num_output: 64  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire2/bn_expand3x3" type: "BatchNorm" bottom: "fire2/expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/relu_expand3x3" type: "ReLU" bottom: "fire2/bn_expand3x3" top: "fire2/bn_expand3x3" }
layer { name: "fire2/concat" type: "Concat" bottom: "fire2/bn_expand1x1" bottom: "fire2/bn_expand3x3" top: "fire2/concat" }
#fire2 ends: 128 channels
layer {
  name: "fire3/squeeze1x1"
  type: "Convolution"
  bottom: "fire2/concat"
  top: "fire3/squeeze1x1"
  convolution_param { num_output: 16  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire3/bn_squeeze1x1" type: "BatchNorm" bottom: "fire3/squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer { name: "fire3/relu_squeeze1x1" type: "ReLU" bottom: "fire3/bn_squeeze1x1" top: "fire3/bn_squeeze1x1" }
layer {
  name: "fire3/expand1x1"
  type: "Convolution"
  bottom: "fire3/bn_squeeze1x1"
  top: "fire3/expand1x1"
  convolution_param { num_output: 64  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire3/bn_expand1x1" type: "BatchNorm" bottom: "fire3/expand1x1" top: "fire3/bn_expand1x1" }
layer { name: "fire3/relu_expand1x1" type: "ReLU" bottom: "fire3/bn_expand1x1" top: "fire3/bn_expand1x1" }
layer {
  name: "fire3/expand3x3"
  type: "Convolution"
  bottom: "fire3/bn_expand1x1"
  top: "fire3/expand3x3"
  convolution_param { num_output: 64  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire3/bn_expand3x3" type: "BatchNorm" bottom: "fire3/expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/relu_expand3x3" type: "ReLU" bottom: "fire3/bn_expand3x3" top: "fire3/bn_expand3x3" }
layer { name: "fire3/concat" type: "Concat" bottom: "fire3/bn_expand1x1" bottom: "fire3/bn_expand3x3" top: "fire3/concat" }
#fire3 ends: 128 channels
layer { name: "bypass_23" type: "Eltwise" bottom: "fire2/concat" bottom: "fire3/concat" top: "fire3_EltAdd" }
layer {
  name: "fire4/squeeze1x1"
  type: "Convolution"
  bottom: "fire3_EltAdd"
  top: "fire4/squeeze1x1"
  convolution_param { num_output: 32  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire4/bn_squeeze1x1" type: "BatchNorm" bottom: "fire4/squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer { name: "fire4/relu_squeeze1x1" type: "ReLU" bottom: "fire4/bn_squeeze1x1" top: "fire4/bn_squeeze1x1" }
layer {
  name: "fire4/expand1x1"
  type: "Convolution"
  bottom: "fire4/bn_squeeze1x1"
  top: "fire4/expand1x1"
  convolution_param { num_output: 128  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire4/bn_expand1x1" type: "BatchNorm" bottom: "fire4/expand1x1" top: "fire4/bn_expand1x1" }
layer { name: "fire4/relu_expand1x1" type: "ReLU" bottom: "fire4/bn_expand1x1" top: "fire4/bn_expand1x1" }
layer {
  name: "fire4/expand3x3"
  type: "Convolution"
  bottom: "fire4/bn_expand1x1"
  top: "fire4/expand3x3"
  convolution_param { num_output: 128  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire4/bn_expand3x3" type: "BatchNorm" bottom: "fire4/expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/relu_expand3x3" type: "ReLU" bottom: "fire4/bn_expand3x3" top: "fire4/bn_expand3x3" }
layer { name: "fire4/concat" type: "Concat" bottom: "fire4/bn_expand1x1" bottom: "fire4/bn_expand3x3" top: "fire4/concat" }
#fire4 ends: 256 channels
layer { name: "pool4" type: "Pooling" bottom: "fire4/concat" top: "pool4" pooling_param { pool: MAX  kernel_size: 2  stride: 2 } }
#fire4 ends: 256 channels / pooled
layer {
  name: "fire5/squeeze1x1"
  type: "Convolution"
  bottom: "pool4"
  top: "fire5/squeeze1x1"
  convolution_param { num_output: 32  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire5/bn_squeeze1x1" type: "BatchNorm" bottom: "fire5/squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer { name: "fire5/relu_squeeze1x1" type: "ReLU" bottom: "fire5/bn_squeeze1x1" top: "fire5/bn_squeeze1x1" }
layer {
  name: "fire5/expand1x1"
  type: "Convolution"
  bottom: "fire5/bn_squeeze1x1"
  top: "fire5/expand1x1"
  convolution_param { num_output: 128  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire5/bn_expand1x1" type: "BatchNorm" bottom: "fire5/expand1x1" top: "fire5/bn_expand1x1" }
layer { name: "fire5/relu_expand1x1" type: "ReLU" bottom: "fire5/bn_expand1x1" top: "fire5/bn_expand1x1" }
layer {
  name: "fire5/expand3x3"
  type: "Convolution"
  bottom: "fire5/bn_expand1x1"
  top: "fire5/expand3x3"
  convolution_param { num_output: 128  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire5/bn_expand3x3" type: "BatchNorm" bottom: "fire5/expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/relu_expand3x3" type: "ReLU" bottom: "fire5/bn_expand3x3" top: "fire5/bn_expand3x3" }
layer { name: "fire5/concat" type: "Concat" bottom: "fire5/bn_expand1x1" bottom: "fire5/bn_expand3x3" top: "fire5/concat" }
#fire5 ends: 256 channels
layer { name: "bypass_45" type: "Eltwise" bottom: "pool4" bottom: "fire5/concat" top: "fire5_EltAdd" }
layer {
  name: "fire6/squeeze1x1"
  type: "Convolution"
  bottom: "fire5_EltAdd"
  top: "fire6/squeeze1x1"
  convolution_param { num_output: 48  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire6/bn_squeeze1x1" type: "BatchNorm" bottom: "fire6/squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer { name: "fire6/relu_squeeze1x1" type: "ReLU" bottom: "fire6/bn_squeeze1x1" top: "fire6/bn_squeeze1x1" }
layer {
  name: "fire6/expand1x1"
  type: "Convolution"
  bottom: "fire6/bn_squeeze1x1"
  top: "fire6/expand1x1"
  convolution_param { num_output: 192  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire6/bn_expand1x1" type: "BatchNorm" bottom: "fire6/expand1x1" top: "fire6/bn_expand1x1" }
layer { name: "fire6/relu_expand1x1" type: "ReLU" bottom: "fire6/bn_expand1x1" top: "fire6/bn_expand1x1" }
layer {
  name: "fire6/expand3x3"
  type: "Convolution"
  bottom: "fire6/bn_expand1x1"
  top: "fire6/expand3x3"
  convolution_param { num_output: 192  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire6/bn_expand3x3" type: "BatchNorm" bottom: "fire6/expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/relu_expand3x3" type: "ReLU" bottom: "fire6/bn_expand3x3" top: "fire6/bn_expand3x3" }
layer { name: "fire6/concat" type: "Concat" bottom: "fire6/bn_expand1x1" bottom: "fire6/bn_expand3x3" top: "fire6/concat" }
#fire6 ends: 384 channels
layer {
  name: "fire7/squeeze1x1"
  type: "Convolution"
  bottom: "fire6/concat"
  top: "fire7/squeeze1x1"
  convolution_param { num_output: 48  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire7/bn_squeeze1x1" type: "BatchNorm" bottom: "fire7/squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer { name: "fire7/relu_squeeze1x1" type: "ReLU" bottom: "fire7/bn_squeeze1x1" top: "fire7/bn_squeeze1x1" }
layer {
  name: "fire7/expand1x1"
  type: "Convolution"
  bottom: "fire7/bn_squeeze1x1"
  top: "fire7/expand1x1"
  convolution_param { num_output: 192  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire7/bn_expand1x1" type: "BatchNorm" bottom: "fire7/expand1x1" top: "fire7/bn_expand1x1" }
layer { name: "fire7/relu_expand1x1" type: "ReLU" bottom: "fire7/bn_expand1x1" top: "fire7/bn_expand1x1" }
layer {
  name: "fire7/expand3x3"
  type: "Convolution"
  bottom: "fire7/bn_expand1x1"
  top: "fire7/expand3x3"
  convolution_param { num_output: 192  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire7/bn_expand3x3" type: "BatchNorm" bottom: "fire7/expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/relu_expand3x3" type: "ReLU" bottom: "fire7/bn_expand3x3" top: "fire7/bn_expand3x3" }
layer { name: "fire7/concat" type: "Concat" bottom: "fire7/bn_expand1x1" bottom: "fire7/bn_expand3x3" top: "fire7/concat" }
#fire7 ends: 384 channels
layer { name: "bypass_67" type: "Eltwise" bottom: "fire6/concat" bottom: "fire7/concat" top: "fire7_EltAdd" }
layer {
  name: "fire8/squeeze1x1"
  type: "Convolution"
  bottom: "fire7_EltAdd"
  top: "fire8/squeeze1x1"
  convolution_param { num_output: 64  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire8/bn_squeeze1x1" type: "BatchNorm" bottom: "fire8/squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer { name: "fire8/relu_squeeze1x1" type: "ReLU" bottom: "fire8/bn_squeeze1x1" top: "fire8/bn_squeeze1x1" }
layer {
  name: "fire8/expand1x1"
  type: "Convolution"
  bottom: "fire8/bn_squeeze1x1"
  top: "fire8/expand1x1"
  convolution_param { num_output: 256  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire8/bn_expand1x1" type: "BatchNorm" bottom: "fire8/expand1x1" top: "fire8/bn_expand1x1" }
layer { name: "fire8/relu_expand1x1" type: "ReLU" bottom: "fire8/bn_expand1x1" top: "fire8/bn_expand1x1" }
layer {
  name: "fire8/expand3x3"
  type: "Convolution"
  bottom: "fire8/bn_expand1x1"
  top: "fire8/expand3x3"
  convolution_param { num_output: 256  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire8/bn_expand3x3" type: "BatchNorm" bottom: "fire8/expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/relu_expand3x3" type: "ReLU" bottom: "fire8/bn_expand3x3" top: "fire8/bn_expand3x3" }
layer { name: "fire8/concat" type: "Concat" bottom: "fire8/bn_expand1x1" bottom: "fire8/bn_expand3x3" top: "fire8/concat" }
#fire8 ends: 512 channels
layer { name: "pool8" type: "Pooling" bottom: "fire8/concat" top: "pool8" pooling_param { pool: MAX  kernel_size: 2  stride: 2 } }
#fire8 ends: 512 channels / pooled
layer {
  name: "fire9/squeeze1x1"
  type: "Convolution"
  bottom: "pool8"
  top: "fire9/squeeze1x1"
  convolution_param { num_output: 64  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire9/bn_squeeze1x1" type: "BatchNorm" bottom: "fire9/squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer { name: "fire9/relu_squeeze1x1" type: "ReLU" bottom: "fire9/bn_squeeze1x1" top: "fire9/bn_squeeze1x1" }
layer {
  name: "fire9/expand1x1"
  type: "Convolution"
  bottom: "fire9/bn_squeeze1x1"
  top: "fire9/expand1x1"
  convolution_param { num_output: 256  kernel_size: 1  weight_filler { type: "xavier" } }
}
layer { name: "fire9/bn_expand1x1" type: "BatchNorm" bottom: "fire9/expand1x1" top: "fire9/bn_expand1x1" }
layer { name: "fire9/relu_expand1x1" type: "ReLU" bottom: "fire9/bn_expand1x1" top: "fire9/bn_expand1x1" }
layer {
  name: "fire9/expand3x3"
  type: "Convolution"
  bottom: "fire9/bn_expand1x1"
  top: "fire9/expand3x3"
  convolution_param { num_output: 256  pad: 1  kernel_size: 3  weight_filler { type: "xavier" } }
}
layer { name: "fire9/bn_expand3x3" type: "BatchNorm" bottom: "fire9/expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/relu_expand3x3" type: "ReLU" bottom: "fire9/bn_expand3x3" top: "fire9/bn_expand3x3" }
layer { name: "fire9/concat" type: "Concat" bottom: "fire9/bn_expand1x1" bottom: "fire9/bn_expand3x3" top: "fire9/concat" }
#fire9 ends: 512 channels
layer {
  name: "conv10_new"
  type: "Convolution"
  bottom: "fire9/concat"
  top: "conv10"
  convolution_param { num_output: 3  kernel_size: 1  weight_filler { type: "gaussian"  mean: 0.0  std: 0.01 } }
}
layer { name: "pool10" type: "Pooling" bottom: "conv10" top: "pool10" pooling_param { pool: AVE  global_pooling: true } }
# loss, top1, top5
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "pool10"
  bottom: "label"
  top: "loss"
  include {
    # phase: TRAIN
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "pool10"
  bottom: "label"
  top: "accuracy"
  #include {
  #  phase: TEST
  #}
}

Set num_output of the last convolution layer, conv10, to the number of classes.

Model hyper-parameter file: solver.prototxt

test_iter: 2000        # not subject to iter_size
test_interval: 1000000
# base_lr: 0.0001
base_lr: 0.005         # learning rate
display: 40
# max_iter: 600000
max_iter: 200000       # number of iterations
iter_size: 2           # global batch size = batch_size * iter_size
lr_policy: "poly"
power: 1.0             # linearly decrease LR
momentum: 0.9
weight_decay: 0.0002
snapshot: 10000        # save a model every this many iterations
snapshot_prefix: "/data/zxc/classfication/model/model_cotta/cotta_"  # model save path
solver_mode: GPU
random_seed: 42
net: "./trainNets_drive/trainval.prototxt"  # path of the network definition file
test_initialization: false
average_loss: 40

max_iter: Caffe counts iterations, not PyTorch-style epochs. In PyTorch, one epoch is a full pass over the training set; in Caffe, one iteration processes one batch_size of data. To convert, one epoch equals len(train_data) / batch_size iterations. If there is a remainder, whether to round down or up depends on whether the PyTorch DataLoader is set to drop the last incomplete batch: round down if it is dropped, round up if it is kept.

snapshot_prefix: the last path component is the filename prefix of every saved model.

Run command

Write the run command into a bash file, train.sh:

/home/seg/anaconda3/envs/zxc/bin/caffe train -gpu 1 -solver ./solvers/solver_3.prototxt -weights /data/classfication/model/model_cotta/cotta__iter_200000.caffemodel 2>&1 | tee log_3_4_class.txt

-gpu: which GPU to use (0 if there is only one);
-solver: path of the solver (hyper-parameter) configuration file;
-weights: a pretrained model; the official Caffe version of the SqueezeNet pretrained model can be used here. In my case training had been interrupted, so I resume from the checkpoint.

Once the file is written, run source activate <env name> to enter the environment, then source train.sh to execute the bash file and start training.
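The epoch-to-iteration arithmetic above can be sketched as a small helper (a hypothetical function of my own; the batch size 64 in the example echoes batch_size in trainval.prototxt):

```python
import math

def epochs_to_iters(num_images, batch_size, epochs, drop_last=False):
    """Convert PyTorch-style epochs to Caffe-style iterations.

    One Caffe iteration consumes one batch_size of images, so one epoch is
    len(train_data) / batch_size iterations; drop_last decides whether a
    trailing partial batch is skipped (round down) or kept (round up).
    """
    if drop_last:
        iters_per_epoch = num_images // batch_size        # floor
    else:
        iters_per_epoch = math.ceil(num_images / batch_size)  # ceil
    return iters_per_epoch * epochs

# e.g. 100000 training images, batch_size 64, 128 epochs:
# epochs_to_iters(100000, 64, 128)                  -> 200064 (1563 iters/epoch)
# epochs_to_iters(100000, 64, 128, drop_last=True)  -> 199936 (1562 iters/epoch)
```

Note also that with iter_size: 2 the gradient is accumulated over two batches before each update, so the effective global batch size is batch_size * iter_size = 128, as the solver comment states.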