Preface

In the previous post we went through the most basic usage of the Keras API running on the TensorFlow backend. This post continues from there: how to write a convolutional classification model, and the various ways of saving it (weights only, or weights and network structure together).

As usual, the reference: the official tutorial.

[Note] You don't really need to read other blogs — just jump to my colab at the end, which walks through the whole learning process, including the mistakes I made and some short notes. For ease of presentation, the network structure is redefined from scratch every time below; if you are comfortable with Python you can simply wrap the architecture in a `def create_model():` function and call it wherever it is needed (a sketch of such a helper follows the model definition below).

Building a convolutional classification model

Recall from the previous post that there are two ways to build a model:

```python
model = keras.models.Sequential([
    keras.layers.Flatten(...),
    keras.layers.Dense(...),
    ...
])
```

```python
model = keras.models.Sequential()
model.add(keras.layers.Flatten(...))
model.add(keras.layers.Dense(...))
```

The first is more compact, the second more comfortable to work with. This post uses the second style to build a simple convolutional network.

Importing the required packages

Saving models needs path handling (import os), and the data needs normalization (import numpy). Also note that even though we are learning Keras, we import not only keras but also tensorflow itself — the reason will become clear later.

```python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import os
```

Preparing the dataset

We stick with MNIST; depending on demand, a later tutorial may cover training on local image data and whether that requires a data-pipeline setup. Remember to convert the labels to one-hot encoding and to normalize the images:

```python
mnist_dataset = keras.datasets.mnist
(train_x, train_y), (test_x, test_y) = mnist_dataset.load_data()
train_y = keras.utils.to_categorical(train_y, 10)
test_y = keras.utils.to_categorical(test_y, 10)
train_x = train_x / 255.0
test_x = test_x / 255.0
```

Also note that the Keras convolution layers expect a four-dimensional tensor, and you have to know whether the layout is channels_first, i.e. (samples, channels, rows, cols), or channels_last, i.e. (samples, rows, cols, channels). By default the last dimension is the channel axis (channels_last):

```python
train_x = train_x[..., np.newaxis]
test_x = test_x[..., np.newaxis]
print(train_x.shape)  # (60000, 28, 28, 1)
```
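The np.newaxis trick above produces the default channels_last layout. If your data happened to be stored channels-first instead, you would not have to transpose it by hand: the convolution and pooling layers accept a data_format argument. A minimal sketch (model_cf and the 3×3 kernel here are illustrative, not part of this post's model; note that channels_first convolutions may not be supported on CPU-only TensorFlow builds):

```python
# The global default layout used when data_format is not given:
print(keras.backend.image_data_format())   # usually 'channels_last'

# Hypothetical channels-first variant: data stored as (samples, channels, rows, cols).
model_cf = keras.models.Sequential()
model_cf.add(keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu',
                                 data_format='channels_first',
                                 input_shape=(1, 28, 28)))      # (channels, rows, cols)
model_cf.add(keras.layers.MaxPool2D(pool_size=(2, 2), data_format='channels_first'))
model_cf.add(keras.layers.Flatten())
model_cf.add(keras.layers.Dense(10, activation='softmax'))
```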
Building the model

We build a simple AlexNet. Using that structure unchanged would be a problem: the input images are only 28×28, and after several rounds of convolution and pooling they shrink so much that there may not be enough left to convolve or pool, so I tweaked it a little:

```python
model = keras.models.Sequential()
model.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1),
                              padding='valid', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1),
                              padding='same', activation=tf.keras.activations.relu))
model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model.add(keras.layers.Dropout(rate=0.5))

model.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))
```
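As promised in the preface, here is a minimal sketch of the create_model() helper idea, so that the repeated hand-written definitions in the rest of this post could be replaced by a single call (the optional input_shape argument anticipates the full-model save section later, where the first layer has to know its input size):

```python
def create_model(input_shape=None):
    """Build the small AlexNet-style classifier used throughout this post."""
    model = keras.models.Sequential()
    first_conv_kwargs = dict(filters=64, kernel_size=(11, 11), strides=(1, 1),
                             padding='valid', activation='relu')
    if input_shape is not None:           # only needed when the full model will be saved
        first_conv_kwargs['input_shape'] = input_shape
    model.add(keras.layers.Conv2D(**first_conv_kwargs))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(keras.layers.Conv2D(192, (5, 5), padding='same', activation='relu'))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(keras.layers.Conv2D(384, (3, 3), padding='same', activation='relu'))
    model.add(keras.layers.Conv2D(384, (3, 3), padding='same', activation='relu'))
    model.add(keras.layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
    model.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(keras.layers.Flatten())
    model.add(keras.layers.Dense(4096, activation='relu'))
    model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(4096, activation='relu'))
    model.add(keras.layers.Dropout(0.5))
    model.add(keras.layers.Dense(10, activation='softmax'))
    return model

# usage, equivalent to the hand-written definition above:
# model = create_model()              # for weight-only checkpoints
# model = create_model((28, 28, 1))   # when the whole model will be saved
```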
Compiling and training the model

Keras has two cross-entropy losses for classification, sparse_categorical_crossentropy and categorical_crossentropy, and here comes the first pitfall: if you feed labels of shape [batch_size, 10] to a model compiled with the sparse_... loss, it raises an error:

```
logits and labels must have the same first dimension, got logits shape [200,10] and labels shape [2000]
```

Apparently the one-hot labels get flattened and concatenated, so with one-hot labels we have to use the latter loss:

```python
model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])
```
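For completeness, the sparse loss is not wrong in itself — it simply expects integer class indices of shape (batch,) instead of one-hot rows. A small sketch of that alternative setup (not used in the rest of this post), assuming the raw integer labels returned by load_data():

```python
# Alternative: keep the integer labels from load_data() and pair them with the sparse loss.
(_, train_y_int), (_, _) = keras.datasets.mnist.load_data()   # train_y_int shape: (60000,)

# model.compile(optimizer=keras.optimizers.Adam(),
#               loss=keras.losses.sparse_categorical_crossentropy,  # expects (batch,) int labels
#               metrics=['accuracy'])
# model.fit(train_x, train_y_int, epochs=2, batch_size=200)
```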
Then we can train the model:

```python
model.fit(train_x, train_y, epochs=2, batch_size=200)
```

```
Epoch 1/2
60000/60000 [==============================] - 26s 435us/step - loss: 0.2646 - acc: 0.9110
Epoch 2/2
60000/60000 [==============================] - 24s 407us/step - loss: 0.0510 - acc: 0.9855
<tensorflow.python.keras.callbacks.History at 0x7f65fe2de940>
```

We can also inspect the network structure and parameter counts with the summary() function:

```python
model.summary()
```

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              multiple                  7808      
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple                  0         
_________________________________________________________________
conv2d_1 (Conv2D)            multiple                  307392    
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple                  0         
_________________________________________________________________
conv2d_2 (Conv2D)            multiple                  663936    
_________________________________________________________________
conv2d_3 (Conv2D)            multiple                  1327488   
_________________________________________________________________
conv2d_4 (Conv2D)            multiple                  884992    
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple                  0         
_________________________________________________________________
flatten (Flatten)            multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  4198400   
_________________________________________________________________
dropout (Dropout)            multiple                  0         
_________________________________________________________________
dense_1 (Dense)              multiple                  16781312  
_________________________________________________________________
dropout_1 (Dropout)          multiple                  0         
_________________________________________________________________
dense_2 (Dense)              multiple                  40970     
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
```
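The parameter counts in the table are easy to verify by hand: a Conv2D layer has kernel_h × kernel_w × in_channels × filters weights plus one bias per filter, and a Dense layer has in_features × out_features weights plus out_features biases. With 28×28 inputs the last pooling output is 2×2×256 = 1024 values, so for example:

```python
# Conv2D params = kernel_h * kernel_w * in_channels * filters + filters (biases)
print(11 * 11 * 1 * 64 + 64)        # 7808     -> conv2d
print(5 * 5 * 64 * 192 + 192)       # 307392   -> conv2d_1
# Dense params = in_features * out_features + out_features (biases)
print(2 * 2 * 256 * 4096 + 4096)    # 4198400  -> dense
print(4096 * 4096 + 4096)           # 16781312 -> dense_1
print(4096 * 10 + 10)               # 40970    -> dense_2
```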
We can evaluate the model on the test set:

```python
print(test_x.shape, test_y.shape)
model.evaluate(test_x, test_y)
```

```
(10000, 28, 28, 1) (10000, 10)
10000/10000 [==============================] - 3s 283us/step
[0.03784323987539392, 0.9897]
```

We can also predict a single image; just remember that the first dimension is the sample index, so add an extra axis:

```python
test_img_idx = 1000
test_img = test_x[test_img_idx, ...]
test_img = test_img[np.newaxis, ...]
img_prob = model.predict(test_img)

plt.figure()
plt.imshow(np.squeeze(test_img))
plt.title(img_prob.argmax())
```

Saving the model

Saving checkpoints during training

The relevant function is keras.callbacks.ModelCheckpoint:

```python
checkpoint_path = './train_save/mnist.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path,
                                              save_weights_only=True,
                                              verbose=1)

model.fit(train_x, train_y, epochs=2,
          validation_data=(test_x, test_y),
          callbacks=[cp_callback])
```

```
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.1442 - acc: 0.9681
Epoch 00001: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7faff48fae80>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved. Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 93s 2ms/step - loss: 0.1442 - acc: 0.9681 - val_loss: 0.0693 - val_acc: 0.9811
Epoch 2/2
59968/60000 [============================>.] - ETA: 0s - loss: 0.0757 - acc: 0.9840
Epoch 00002: saving model to ./train_save/mnist.ckpt
WARNING:tensorflow:This model was compiled with a Keras optimizer (<tensorflow.python.keras.optimizers.Adam object at 0x7faff48fae80>) but is being saved in TensorFlow format with `save_weights`. The model's weights will be saved, but unlike with TensorFlow optimizers in the TensorFlow format the optimizer's state will not be saved. Consider using a TensorFlow optimizer from `tf.train`.
60000/60000 [==============================] - 92s 2ms/step - loss: 0.0757 - acc: 0.9839 - val_loss: 0.0489 - val_acc: 0.9876
<tensorflow.python.keras.callbacks.History at 0x7fafead25f98>
```

There is a warning: the model was compiled with a Keras optimizer, so although the weights are saved in TensorFlow format, the optimizer state is not, and the checkpoint is missing that piece; we should use one of TensorFlow's own optimizers from tf.train. Fine — adjust the code:

```python
import os
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss=keras.losses.categorical_crossentropy,
              metrics=['accuracy'])

checkpoint_path = './train_save2/mnist.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
# create the checkpoint callback
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path,
                                              save_weights_only=True,
                                              verbose=1)

model.fit(train_x, train_y, epochs=2,
          validation_data=(test_x, test_y),
          callbacks=[cp_callback])
```

```
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.2241 - acc: 0.9325
Epoch 00001: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 60s 1ms/step - loss: 0.2239 - acc: 0.9326 - val_loss: 0.1009 - val_acc: 0.9765
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0866 - acc: 0.9801
Epoch 00002: saving model to ./train_save2/mnist.ckpt
60000/60000 [==============================] - 56s 930us/step - loss: 0.0867 - acc: 0.9800 - val_loss: 0.0591 - val_acc: 0.9855
<tensorflow.python.keras.callbacks.History at 0x7fad1a90d7b8>
```

No warning this time. Now let's build an untrained model and load the saved weights into it:

```python
model_test = keras.models.Sequential()
model_test.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test.add(keras.layers.Flatten())
model_test.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test.add(keras.layers.Dropout(rate=0.5))
model_test.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test.add(keras.layers.Dropout(rate=0.5))

model_test.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test.compile(optimizer=tf.train.RMSPropOptimizer(learning_rate=0.01),
                   loss=keras.losses.categorical_crossentropy,
                   metrics=['accuracy'])
```

Loading the latest checkpoint:

```python
! ls train_save
# checkpoint  mnist.ckpt.data-00000-of-00001  mnist.ckpt.index
latest = tf.train.latest_checkpoint('train_save2')
type(latest)  # str

loss, acc = model_test.evaluate(test_x, test_y)
print('Accuracy before loading weights: {:5.2f}%'.format(100 * acc))
model_test.load_weights(latest)
loss, acc = model_test.evaluate(test_x, test_y)
print('Accuracy after loading weights: {:5.2f}%'.format(100 * acc))
```

```
10000/10000 [==============================] - 3s 308us/step
Accuracy before loading weights:  9.60%
10000/10000 [==============================] - 3s 264us/step
Accuracy after loading weights: 98.60%
```

Saving at intervals

You can also save a checkpoint every so many epochs. This is an effective safeguard against overfitting, since you can later pick whichever set of weights turned out best:

```python
checkpoint_path = 'train_save3/cp-{epoch:04d}.ckpt'
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=1,
                                              save_weights_only=True, period=1)
model.fit(train_x, train_y, epochs=2, callbacks=[cp_callback],
          validation_data=[test_x, test_y], verbose=1)
```

```
Train on 60000 samples, validate on 10000 samples
Epoch 1/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0447 - acc: 0.9897
Epoch 00001: saving model to train_save3/cp-0001.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0446 - acc: 0.9897 - val_loss: 0.0421 - val_acc: 0.9920
Epoch 2/2
59936/60000 [============================>.] - ETA: 0s - loss: 0.0478 - acc: 0.9885
Epoch 00002: saving model to train_save3/cp-0002.ckpt
60000/60000 [==============================] - 61s 1ms/step - loss: 0.0478 - acc: 0.9884 - val_loss: 0.0590 - val_acc: 0.9859
<tensorflow.python.keras.callbacks.History at 0x7fafea497dd8>
```
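If you would rather let Keras pick the best epoch for you instead of keeping every checkpoint, ModelCheckpoint can also monitor a validation metric and only overwrite the file when it improves. A small sketch (the path and epoch count here are illustrative, not from the original post):

```python
# Only overwrite the checkpoint when validation accuracy improves.
best_path = 'train_save_best/best.ckpt'          # illustrative path
best_callback = keras.callbacks.ModelCheckpoint(best_path,
                                                monitor='val_acc',   # metric name as logged above
                                                save_best_only=True,
                                                save_weights_only=True,
                                                verbose=1)
# model.fit(train_x, train_y, epochs=10,
#           validation_data=(test_x, test_y), callbacks=[best_callback])
```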
Rebuild another untrained model and load the results of the first epoch:

```python
model_test1 = keras.models.Sequential()
model_test1.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test1.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test1.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test1.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test1.add(keras.layers.Flatten())
model_test1.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test1.add(keras.layers.Dropout(rate=0.5))
model_test1.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test1.add(keras.layers.Dropout(rate=0.5))

model_test1.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test1.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
```

Pick the first checkpoint and load it:

```python
loss, acc = model_test1.evaluate(test_x, test_y)
print('Accuracy before loading weights: {:5.2f}%'.format(100 * acc))
model_test1.load_weights('train_save3/cp-0001.ckpt')
loss, acc = model_test1.evaluate(test_x, test_y)
print('Accuracy after loading weights: {:5.2f}%'.format(100 * acc))
```

```
10000/10000 [==============================] - 3s 260us/step
Accuracy before loading weights: 10.28%
10000/10000 [==============================] - 2s 244us/step
Accuracy after loading weights: 98.30%
```

Saving weights manually

After training you can also call save_weights yourself:

```python
model.save_weights('./train_save3/mnist_checkpoint')
```

Build an untrained model:

```python
model_test2 = keras.models.Sequential()
model_test2.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test2.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test2.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test2.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test2.add(keras.layers.Flatten())
model_test2.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test2.add(keras.layers.Dropout(rate=0.5))
model_test2.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test2.add(keras.layers.Dropout(rate=0.5))

model_test2.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test2.compile(optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
```

Load the weights and evaluate the model:

```python
loss, acc = model_test2.evaluate(test_x, test_y)
print('Accuracy before loading weights: {:5.2f}%'.format(100 * acc))
model_test2.load_weights('./train_save3/mnist_checkpoint')
loss, acc = model_test2.evaluate(test_x, test_y)
print('Accuracy after loading weights: {:5.2f}%'.format(100 * acc))
```

```
10000/10000 [==============================] - 3s 303us/step
Accuracy before loading weights: 12.15%
10000/10000 [==============================] - 3s 260us/step
Accuracy after loading weights: 98.59%
```
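All of the weight-only approaches above require the network to be redefined by hand (or via a helper like create_model()) before load_weights can be called. A related option, not used in this post, is to serialize just the architecture to JSON with to_json()/model_from_json() and pair it with any of the weight checkpoints — a hedged sketch, with an illustrative filename:

```python
# Save the architecture alone (no weights, no optimizer state) ...
arch_json = model.to_json()
with open('model_arch.json', 'w') as f:      # illustrative filename
    f.write(arch_json)

# ... later: rebuild the network from the JSON, then compile and restore weights as above.
with open('model_arch.json') as f:
    rebuilt = keras.models.model_from_json(f.read())
rebuilt.compile(optimizer=tf.train.AdamOptimizer(),
                loss=keras.losses.categorical_crossentropy,
                metrics=['accuracy'])
# rebuilt.load_weights('./train_save3/mnist_checkpoint')   # restore weights as shown earlier
```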
Saving everything

This saves the model structure and the weights together. Build an untrained model:

```python
model_test3 = keras.models.Sequential()
model_test3.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu))
model_test3.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test3.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test3.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test3.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test3.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test3.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test3.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test3.add(keras.layers.Flatten())
model_test3.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test3.add(keras.layers.Dropout(rate=0.5))
model_test3.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test3.add(keras.layers.Dropout(rate=0.5))

model_test3.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test3.compile(optimizer=tf.train.AdamOptimizer(),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
model_test3.fit(train_x, train_y, batch_size=200, epochs=2)
```

Save it:

```python
model_test3.save('my_model.h5')
```

```
Currently `save` requires model to be a graph network. Consider using `save_weights`, in order to save the weights of the model.
```

This raises an error: the model has to be a graph network, otherwise only the weights can be saved. The real cause is that our first layer never declared its input size, so let's define it and try again:

```python
model_test4 = keras.models.Sequential()
model_test4.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu, input_shape=(28, 28, 1)))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test4.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test4.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test4.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test4.add(keras.layers.Flatten())
model_test4.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test4.add(keras.layers.Dropout(rate=0.5))
model_test4.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test4.add(keras.layers.Dropout(rate=0.5))

model_test4.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test4.compile(optimizer=tf.train.AdamOptimizer(),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
model_test4.fit(train_x, train_y, batch_size=200, epochs=2)
```

```
Epoch 1/2
60000/60000 [==============================] - 22s 364us/step - loss: 0.4112 - acc: 0.8556
Epoch 2/2
60000/60000 [==============================] - 21s 342us/step - loss: 0.0584 - acc: 0.9838
<tensorflow.python.keras.callbacks.History at 0x7f137972f4a8>
```

Try saving it:

```python
model_test4.save('my_model.h5')
```

```
WARNING:tensorflow:TensorFlow optimizers do not make it possible to access optimizer attributes or optimizer state after instantiation. As a result, we cannot save the optimizer as part of the model save file. You will have to compile your model again after loading it. Prefer using a Keras optimizer instead (see keras.io/optimizers).
```

Another warning: a TensorFlow optimizer cannot be saved as part of the model file, so here we should use one of Keras's own optimizers. Fine — change it:

```python
model_test5 = keras.models.Sequential()
model_test5.add(keras.layers.Conv2D(filters=64, kernel_size=(11, 11), strides=(1, 1), padding='valid', activation=tf.keras.activations.relu, input_shape=(28, 28, 1)))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test5.add(keras.layers.Conv2D(filters=192, kernel_size=(5, 5), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model_test5.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.Conv2D(filters=384, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), padding='same', activation=tf.keras.activations.relu))
model_test5.add(keras.layers.MaxPool2D(pool_size=(2, 2), strides=(2, 2)))

model_test5.add(keras.layers.Flatten())
model_test5.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test5.add(keras.layers.Dropout(rate=0.5))
model_test5.add(keras.layers.Dense(units=4096, activation=keras.activations.relu))
model_test5.add(keras.layers.Dropout(rate=0.5))

model_test5.add(keras.layers.Dense(units=10, activation=keras.activations.softmax))

model_test5.compile(optimizer=tf.keras.optimizers.Adam(),
                    loss=keras.losses.categorical_crossentropy,
                    metrics=['accuracy'])
model_test5.fit(train_x, train_y, batch_size=200, epochs=2)
model_test5.save('my_model2.h5')
```

```
Epoch 1/2
60000/60000 [==============================] - 26s 434us/step - loss: 0.2850 - acc: 0.9043
Epoch 2/2
60000/60000 [==============================] - 25s 409us/step - loss: 0.0555 - acc: 0.9847
<tensorflow.python.keras.callbacks.History at 0x7f661033a9e8>
```

No more errors. Now let's load the model together with its parameters — since both the structure and the weights were saved, there is no need to redefine the network:

```python
model_test6 = keras.models.load_model('my_model2.h5')
model_test6.summary()
```

```
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_10 (Conv2D)           (None, 18, 18, 64)        7808      
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 9, 9, 64)          0         
_________________________________________________________________
conv2d_11 (Conv2D)           (None, 9, 9, 192)         307392    
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 192)         0         
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 4, 4, 384)         663936    
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 4, 4, 384)         1327488   
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 4, 4, 256)         884992    
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 2, 2, 256)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_6 (Dense)              (None, 4096)              4198400   
_________________________________________________________________
dropout_4 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_7 (Dense)              (None, 4096)              16781312  
_________________________________________________________________
dropout_5 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_8 (Dense)              (None, 10)                40970     
=================================================================
Total params: 24,212,298
Trainable params: 24,212,298
Non-trainable params: 0
_________________________________________________________________
```
Rock solid. Let's run a couple of tests.

Evaluation on the test set:

```python
model_test6.evaluate(test_x, test_y)
```

```
10000/10000 [==============================] - 3s 302us/step
[0.05095283883444499, 0.9868]
```

Testing a single image:

```python
test_img = test_x[5000, ...]
test_img = test_img[np.newaxis, ...]
pred_label = model_test6.predict_classes(test_img)
plt.figure()
plt.imshow(np.squeeze(test_img))
plt.title(pred_label)
```

Postscript

This post covered how to build a simple convolutional network and several ways of saving it: weights only, and weights together with the model structure. The main thing to remember: when saving only the weights, use one of TensorFlow's own optimizers; when saving the network and the weights together, use a Keras optimizer.

The next post will work through some deep-learning theory and experiments, including BatchNorm, ResNet, and so on.

The colab notebook with all the code for this post is linked here.