Deep Learning Day 20: Optimizer Comparison Experiment


Table of Contents

  • Deep Learning Day 20: Optimizer Comparison Experiment
    • I. Introduction
    • II. My Environment
    • III. Preliminary Work
      • 1. Setting up the GPU
      • 2. Importing the Data
      • 3. Configuring the Dataset
      • 4. Data Visualization
    • IV. Building the Model
    • V. Training the Model
    • VI. Model Evaluation
      • 1. Accuracy and Loss Curves
      • 2. Evaluating the Model
    • VII. Final Thoughts

I. Introduction

  • 🍨 This post is a study log from the 🔗 365天深度学习训练营 (365-Day Deep Learning Training Camp)
  • 🍦 Reference: Week 11 of the camp, "Optimizer Comparison Experiment" (visible to camp members only)
  • 🍖 Original author: K同学啊 (tutoring and custom projects available)

In the previous post on data augmentation we upgraded TensorFlow to 2.4.0. Some libraries may raise compatibility errors with this version, so make sure your package versions match.
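Before running anything else, a minimal sanity check like the following sketch can confirm that your installation matches the versions assumed in this post:

import tensorflow as tf

# Quick environment check; this post assumes TensorFlow 2.4.0
print("TensorFlow version:", tf.__version__)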

In this post, we study and compare different optimizers used in deep learning.

II. My Environment

  • OS: Windows 11
  • Language environment: Python 3.8.5
  • IDE: DataSpell 2022.2
  • Deep learning framework: TensorFlow 2.4.0
  • GPU: RTX 3070 (8 GB VRAM)

III. Preliminary Work

1. Setting up the GPU

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")

if gpus:
    gpu0 = gpus[0]  # If there are multiple GPUs, use only the first one
    tf.config.experimental.set_memory_growth(gpu0, True)  # Allocate GPU memory on demand
    tf.config.set_visible_devices([gpu0], "GPU")

from tensorflow import keras
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import warnings, os, PIL, pathlib

warnings.filterwarnings("ignore")  # Suppress warning messages

plt.rcParams['font.sans-serif'] = ['SimHei']  # Render Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False    # Render minus signs correctly

2. Importing the Data

This post uses the same dataset as the earlier Hollywood celebrity recognition experiment.

data_dir    = "/content/gdrive/MyDrive/data"
data_dir    = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*')))
print("Total number of images:", image_count)

Total number of images: 1800

batch_size = 16
img_height = 336
img_width  = 336

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=12,
    image_size=(img_height, img_width),
    batch_size=batch_size)
Found 1800 files belonging to 17 classes.
Using 1440 files for training.
Found 1800 files belonging to 17 classes.
Using 360 files for validation.

Let's take a look at the class labels:

class_names = train_ds.class_names
print(class_names)
['Angelina Jolie', 'Brad Pitt', 'Denzel Washington', 'Hugh Jackman', 'Jennifer Lawrence', 'Johnny Depp', 'Kate Winslet', 'Leonardo DiCaprio', 'Megan Fox', 'Natalie Portman', 'Nicole Kidman', 'Robert Downey Jr', 'Sandra Bullock', 'Scarlett Johansson', 'Tom Cruise', 'Tom Hanks', 'Will Smith']

3. Configuring the Dataset

AUTOTUNE = tf.data.AUTOTUNE

def train_preprocessing(image, label):
    return (image / 255.0, label)

train_ds = (
    train_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # Preprocessing functions can be set here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

val_ds = (
    val_ds.cache()
    .shuffle(1000)
    .map(train_preprocessing)    # Preprocessing functions can be set here
#     .batch(batch_size)         # batch_size was already set in image_dataset_from_directory
    .prefetch(buffer_size=AUTOTUNE)
)

4. Data Visualization

plt.figure(figsize=(10, 8))  # Figure width 10, height 8
plt.suptitle("Data samples")

for images, labels in train_ds.take(1):
    for i in range(15):
        plt.subplot(4, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.grid(False)
        # Show the image
        plt.imshow(images[i])
        # Show the label (labels are 0-indexed class indices, so no offset is needed)
        plt.xlabel(class_names[labels[i]])

plt.show()

[Figure: a 4x5 grid of sample training images with their class labels]

IV. Building the Model

from tensorflow.keras.layers import Dropout, Dense, BatchNormalization
from tensorflow.keras.models import Model

def create_model(optimizer='adam'):
    # Load the pre-trained model
    vgg16_base_model = tf.keras.applications.vgg16.VGG16(
        weights='imagenet',
        include_top=False,
        input_shape=(img_width, img_height, 3),
        pooling='avg')

    # Freeze the convolutional base
    for layer in vgg16_base_model.layers:
        layer.trainable = False

    X = vgg16_base_model.output
    X = Dense(170, activation='relu')(X)
    X = BatchNormalization()(X)
    X = Dropout(0.5)(X)

    output = Dense(len(class_names), activation='softmax')(X)
    vgg16_model = Model(inputs=vgg16_base_model.input, outputs=output)

    vgg16_model.compile(optimizer=optimizer,
                        loss='sparse_categorical_crossentropy',
                        metrics=['accuracy'])
    return vgg16_model

model1 = create_model(optimizer=tf.keras.optimizers.Adam())
model2 = create_model(optimizer=tf.keras.optimizers.SGD())
model2.summary()

The printed network structure:

Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 8s 0us/step
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_2 (InputLayer)         [(None, 336, 336, 3)]     0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 336, 336, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 336, 336, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 168, 168, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 168, 168, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 168, 168, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 84, 84, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 84, 84, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 84, 84, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 84, 84, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 42, 42, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 42, 42, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 42, 42, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 42, 42, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 21, 21, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 21, 21, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 10, 10, 512)       0         
_________________________________________________________________
global_average_pooling2d_1 ( (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 170)               87210     
_________________________________________________________________
batch_normalization_1 (Batch (None, 170)               680       
_________________________________________________________________
dropout_1 (Dropout)          (None, 170)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 17)                2907      
=================================================================
Total params: 14,805,485
Trainable params: 90,457
Non-trainable params: 14,715,028
_________________________________________________________________

Here the VGG16 weights are downloaded directly from the web, so the download may fail, for example:

Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5: None -- [WinError 10054] An existing connection was forcibly closed by the remote host.

This is a network problem that prevents the download, so retrying a few times may help. If it keeps failing, open the URL from the error message directly in your browser: https://storage.googleapis.com/tensorflow/keras-applications/vgg16/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5. The weights file will download automatically. Save it in your project folder, then set the weights parameter in the VGG16 call above to the path of the downloaded file.
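A sketch of that workaround (the local filename below is an assumption; point it at wherever you actually saved the file):

# Hypothetical local path: adjust to where you saved the downloaded weights
local_weights = "./vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"

vgg16_base_model = tf.keras.applications.vgg16.VGG16(
    weights=local_weights,   # a local .h5 path instead of 'imagenet'
    include_top=False,
    input_shape=(img_width, img_height, 3),
    pooling='avg')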

Here we compare two optimizers, Adam and SGD. A brief introduction to each (a configuration sketch follows the list):

  • Adam

    keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
    

    It uses first-moment and second-moment estimates of the gradients to dynamically adapt the learning rate of each parameter. Adam's main advantage is that, after bias correction, each iteration's effective step size stays within a bounded range, which keeps parameter updates stable. Its memory requirements are modest, and it also suits large datasets and high-dimensional models.

  • SGD

    keras.optimizers.SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)
    

    This is the stochastic gradient descent optimizer: each iteration computes the gradient on a mini-batch and then updates the parameters. It is the most common optimization method.
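As promised above, here is a minimal sketch of passing explicitly configured optimizers into create_model. The hyperparameter values are illustrative, not tuned for this dataset; note that TF 2.x prefers the learning_rate argument over the older lr shown in the signatures above:

# Illustrative hyperparameters, not tuned for this task
adam = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999)
sgd  = tf.keras.optimizers.SGD(learning_rate=1e-2, momentum=0.9, nesterov=True)

model_adam = create_model(optimizer=adam)
model_sgd  = create_model(optimizer=sgd)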

V. Training the Model

NO_EPOCHS = 50

history_model1  = model1.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
history_model2  = model2.fit(train_ds, epochs=NO_EPOCHS, verbose=1, validation_data=val_ds)
Epoch 1/50
90/90 [==============================] - 113s 1s/step - loss: 2.8072 - accuracy: 0.1535 - val_loss: 2.7235 - val_accuracy: 0.0556
Epoch 2/50
90/90 [==============================] - 20s 221ms/step - loss: 2.0860 - accuracy: 0.3243 - val_loss: 2.4607 - val_accuracy: 0.2833
Epoch 3/50
90/90 [==============================] - 21s 238ms/step - loss: 1.8125 - accuracy: 0.4132 - val_loss: 2.2316 - val_accuracy: 0.2972
Epoch 4/50
90/90 [==============================] - 20s 224ms/step - loss: 1.5680 - accuracy: 0.5146 - val_loss: 1.9419 - val_accuracy: 0.4361
Epoch 5/50
90/90 [==============================] - 20s 225ms/step - loss: 1.4038 - accuracy: 0.5681 - val_loss: 1.6831 - val_accuracy: 0.4833
Epoch 6/50
90/90 [==============================] - 20s 224ms/step - loss: 1.2327 - accuracy: 0.6153 - val_loss: 1.6376 - val_accuracy: 0.4944
Epoch 7/50
90/90 [==============================] - 20s 223ms/step - loss: 1.1563 - accuracy: 0.6486 - val_loss: 1.6727 - val_accuracy: 0.4417
Epoch 8/50
90/90 [==============================] - 20s 224ms/step - loss: 1.0707 - accuracy: 0.6694 - val_loss: 1.4806 - val_accuracy: 0.5250
Epoch 9/50
90/90 [==============================] - 20s 224ms/step - loss: 0.9549 - accuracy: 0.7125 - val_loss: 1.6010 - val_accuracy: 0.4889
Epoch 10/50
90/90 [==============================] - 20s 224ms/step - loss: 0.8829 - accuracy: 0.7347 - val_loss: 1.7179 - val_accuracy: 0.4611
Epoch 11/50
90/90 [==============================] - 20s 223ms/step - loss: 0.8417 - accuracy: 0.7389 - val_loss: 1.7174 - val_accuracy: 0.4833
Epoch 12/50
90/90 [==============================] - 20s 225ms/step - loss: 0.7601 - accuracy: 0.7708 - val_loss: 1.5996 - val_accuracy: 0.4833
Epoch 13/50
90/90 [==============================] - 20s 224ms/step - loss: 0.7254 - accuracy: 0.7757 - val_loss: 1.6183 - val_accuracy: 0.5278
Epoch 14/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6863 - accuracy: 0.8014 - val_loss: 1.7551 - val_accuracy: 0.4722
Epoch 15/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6336 - accuracy: 0.8069 - val_loss: 1.8830 - val_accuracy: 0.4639
Epoch 16/50
90/90 [==============================] - 20s 224ms/step - loss: 0.5819 - accuracy: 0.8319 - val_loss: 1.4917 - val_accuracy: 0.5389
Epoch 17/50
90/90 [==============================] - 20s 224ms/step - loss: 0.5748 - accuracy: 0.8340 - val_loss: 1.8751 - val_accuracy: 0.4694
Epoch 18/50
90/90 [==============================] - 20s 223ms/step - loss: 0.5219 - accuracy: 0.8396 - val_loss: 2.0875 - val_accuracy: 0.4861
Epoch 19/50
90/90 [==============================] - 20s 224ms/step - loss: 0.4934 - accuracy: 0.8556 - val_loss: 1.9038 - val_accuracy: 0.5028
Epoch 20/50
90/90 [==============================] - 20s 224ms/step - loss: 0.4942 - accuracy: 0.8514 - val_loss: 1.6452 - val_accuracy: 0.5444
Epoch 21/50
90/90 [==============================] - 20s 224ms/step - loss: 0.4933 - accuracy: 0.8431 - val_loss: 2.1585 - val_accuracy: 0.4472
Epoch 22/50
90/90 [==============================] - 20s 225ms/step - loss: 0.4514 - accuracy: 0.8701 - val_loss: 2.0218 - val_accuracy: 0.4972
Epoch 23/50
90/90 [==============================] - 20s 223ms/step - loss: 0.4458 - accuracy: 0.8694 - val_loss: 1.6499 - val_accuracy: 0.5417
Epoch 24/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3927 - accuracy: 0.8917 - val_loss: 2.3310 - val_accuracy: 0.4222
Epoch 25/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3870 - accuracy: 0.8854 - val_loss: 1.6200 - val_accuracy: 0.5583
Epoch 26/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3800 - accuracy: 0.8861 - val_loss: 1.9285 - val_accuracy: 0.5361
Epoch 27/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3792 - accuracy: 0.8771 - val_loss: 2.3675 - val_accuracy: 0.4806
Epoch 28/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3321 - accuracy: 0.8986 - val_loss: 1.7445 - val_accuracy: 0.5500
Epoch 29/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3185 - accuracy: 0.9076 - val_loss: 1.7202 - val_accuracy: 0.5639
Epoch 30/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3436 - accuracy: 0.8958 - val_loss: 1.6614 - val_accuracy: 0.5667
Epoch 31/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2917 - accuracy: 0.9118 - val_loss: 2.0079 - val_accuracy: 0.5500
Epoch 32/50
90/90 [==============================] - 20s 224ms/step - loss: 0.3325 - accuracy: 0.8868 - val_loss: 2.0677 - val_accuracy: 0.5028
Epoch 33/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2879 - accuracy: 0.9146 - val_loss: 1.6412 - val_accuracy: 0.6028
Epoch 34/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2856 - accuracy: 0.9111 - val_loss: 2.1213 - val_accuracy: 0.5222
Epoch 35/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2645 - accuracy: 0.9153 - val_loss: 2.0940 - val_accuracy: 0.5222
Epoch 36/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2528 - accuracy: 0.9160 - val_loss: 1.8489 - val_accuracy: 0.5389
Epoch 37/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2553 - accuracy: 0.9208 - val_loss: 1.8388 - val_accuracy: 0.5583
Epoch 38/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2362 - accuracy: 0.9285 - val_loss: 1.8624 - val_accuracy: 0.5667
Epoch 39/50
90/90 [==============================] - 20s 223ms/step - loss: 0.2245 - accuracy: 0.9229 - val_loss: 1.9156 - val_accuracy: 0.5639
Epoch 40/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2198 - accuracy: 0.9333 - val_loss: 2.2192 - val_accuracy: 0.5556
Epoch 41/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2144 - accuracy: 0.9278 - val_loss: 1.8951 - val_accuracy: 0.5833
Epoch 42/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2074 - accuracy: 0.9389 - val_loss: 2.0159 - val_accuracy: 0.5500
Epoch 43/50
90/90 [==============================] - 20s 225ms/step - loss: 0.2166 - accuracy: 0.9257 - val_loss: 2.2641 - val_accuracy: 0.5111
Epoch 44/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2312 - accuracy: 0.9264 - val_loss: 2.0438 - val_accuracy: 0.5750
Epoch 45/50
90/90 [==============================] - 20s 223ms/step - loss: 0.2248 - accuracy: 0.9257 - val_loss: 2.2686 - val_accuracy: 0.5472
Epoch 46/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2102 - accuracy: 0.9375 - val_loss: 2.2441 - val_accuracy: 0.5583
Epoch 47/50
90/90 [==============================] - 20s 224ms/step - loss: 0.2120 - accuracy: 0.9340 - val_loss: 2.3860 - val_accuracy: 0.5361
Epoch 48/50
90/90 [==============================] - 20s 224ms/step - loss: 0.1959 - accuracy: 0.9354 - val_loss: 2.4052 - val_accuracy: 0.5167
Epoch 49/50
90/90 [==============================] - 20s 224ms/step - loss: 0.1699 - accuracy: 0.9521 - val_loss: 2.5167 - val_accuracy: 0.5250
Epoch 50/50
90/90 [==============================] - 20s 224ms/step - loss: 0.1645 - accuracy: 0.9528 - val_loss: 2.1405 - val_accuracy: 0.5722
Epoch 1/50
90/90 [==============================] - 21s 226ms/step - loss: 3.0785 - accuracy: 0.0986 - val_loss: 2.7949 - val_accuracy: 0.1000
Epoch 2/50
90/90 [==============================] - 20s 223ms/step - loss: 2.5472 - accuracy: 0.1924 - val_loss: 2.6379 - val_accuracy: 0.1583
Epoch 3/50
90/90 [==============================] - 20s 225ms/step - loss: 2.2651 - accuracy: 0.2694 - val_loss: 2.4596 - val_accuracy: 0.2528
Epoch 4/50
90/90 [==============================] - 20s 224ms/step - loss: 2.0612 - accuracy: 0.3389 - val_loss: 2.2347 - val_accuracy: 0.3389
Epoch 5/50
90/90 [==============================] - 20s 224ms/step - loss: 1.9508 - accuracy: 0.3653 - val_loss: 2.0695 - val_accuracy: 0.3972
Epoch 6/50
90/90 [==============================] - 20s 224ms/step - loss: 1.8406 - accuracy: 0.4021 - val_loss: 1.9282 - val_accuracy: 0.3917
Epoch 7/50
90/90 [==============================] - 20s 224ms/step - loss: 1.7565 - accuracy: 0.4451 - val_loss: 1.8469 - val_accuracy: 0.4111
Epoch 8/50
90/90 [==============================] - 20s 223ms/step - loss: 1.6587 - accuracy: 0.4667 - val_loss: 1.7935 - val_accuracy: 0.4306
Epoch 9/50
90/90 [==============================] - 20s 224ms/step - loss: 1.5934 - accuracy: 0.4889 - val_loss: 1.6561 - val_accuracy: 0.4528
Epoch 10/50
90/90 [==============================] - 20s 223ms/step - loss: 1.5516 - accuracy: 0.4854 - val_loss: 1.7235 - val_accuracy: 0.3944
Epoch 11/50
90/90 [==============================] - 20s 224ms/step - loss: 1.4753 - accuracy: 0.5403 - val_loss: 1.6903 - val_accuracy: 0.4333
Epoch 12/50
90/90 [==============================] - 20s 223ms/step - loss: 1.4309 - accuracy: 0.5389 - val_loss: 1.6633 - val_accuracy: 0.4556
Epoch 13/50
90/90 [==============================] - 20s 225ms/step - loss: 1.4168 - accuracy: 0.5437 - val_loss: 1.6759 - val_accuracy: 0.4667
Epoch 14/50
90/90 [==============================] - 20s 223ms/step - loss: 1.3726 - accuracy: 0.5701 - val_loss: 1.7004 - val_accuracy: 0.4667
Epoch 15/50
90/90 [==============================] - 20s 224ms/step - loss: 1.2890 - accuracy: 0.5924 - val_loss: 1.6371 - val_accuracy: 0.4639
Epoch 16/50
90/90 [==============================] - 20s 223ms/step - loss: 1.2669 - accuracy: 0.6139 - val_loss: 1.5207 - val_accuracy: 0.4806
Epoch 17/50
90/90 [==============================] - 20s 223ms/step - loss: 1.2238 - accuracy: 0.6097 - val_loss: 1.5294 - val_accuracy: 0.4972
Epoch 18/50
90/90 [==============================] - 20s 224ms/step - loss: 1.1582 - accuracy: 0.6375 - val_loss: 1.4838 - val_accuracy: 0.5111
Epoch 19/50
90/90 [==============================] - 20s 223ms/step - loss: 1.1518 - accuracy: 0.6271 - val_loss: 1.5244 - val_accuracy: 0.5111
Epoch 20/50
90/90 [==============================] - 20s 224ms/step - loss: 1.1324 - accuracy: 0.6438 - val_loss: 1.5217 - val_accuracy: 0.4917
Epoch 21/50
90/90 [==============================] - 21s 237ms/step - loss: 1.0931 - accuracy: 0.6590 - val_loss: 1.4744 - val_accuracy: 0.5056
Epoch 22/50
90/90 [==============================] - 20s 224ms/step - loss: 1.0524 - accuracy: 0.6667 - val_loss: 1.4386 - val_accuracy: 0.5167
Epoch 23/50
90/90 [==============================] - 20s 224ms/step - loss: 1.0196 - accuracy: 0.6729 - val_loss: 1.4282 - val_accuracy: 0.5278
Epoch 24/50
90/90 [==============================] - 20s 224ms/step - loss: 1.0143 - accuracy: 0.6924 - val_loss: 1.5158 - val_accuracy: 0.5361
Epoch 25/50
90/90 [==============================] - 20s 224ms/step - loss: 0.9708 - accuracy: 0.6875 - val_loss: 1.5623 - val_accuracy: 0.4806
Epoch 26/50
90/90 [==============================] - 20s 223ms/step - loss: 0.9651 - accuracy: 0.6875 - val_loss: 1.3693 - val_accuracy: 0.5611
Epoch 27/50
90/90 [==============================] - 20s 223ms/step - loss: 0.9384 - accuracy: 0.7076 - val_loss: 1.4377 - val_accuracy: 0.5556
Epoch 28/50
90/90 [==============================] - 20s 224ms/step - loss: 0.8951 - accuracy: 0.7285 - val_loss: 1.4171 - val_accuracy: 0.5222
Epoch 29/50
90/90 [==============================] - 20s 224ms/step - loss: 0.8706 - accuracy: 0.7340 - val_loss: 1.6458 - val_accuracy: 0.5167
Epoch 30/50
90/90 [==============================] - 20s 224ms/step - loss: 0.8520 - accuracy: 0.7375 - val_loss: 1.4419 - val_accuracy: 0.5139
Epoch 31/50
90/90 [==============================] - 20s 223ms/step - loss: 0.8547 - accuracy: 0.7188 - val_loss: 1.2940 - val_accuracy: 0.5889
Epoch 32/50
90/90 [==============================] - 20s 223ms/step - loss: 0.8222 - accuracy: 0.7424 - val_loss: 1.4509 - val_accuracy: 0.5528
Epoch 33/50
90/90 [==============================] - 20s 223ms/step - loss: 0.8406 - accuracy: 0.7299 - val_loss: 1.4598 - val_accuracy: 0.5306
Epoch 34/50
90/90 [==============================] - 20s 225ms/step - loss: 0.7983 - accuracy: 0.7528 - val_loss: 1.5114 - val_accuracy: 0.5472
Epoch 35/50
90/90 [==============================] - 20s 224ms/step - loss: 0.7992 - accuracy: 0.7403 - val_loss: 1.4475 - val_accuracy: 0.5750
Epoch 36/50
90/90 [==============================] - 20s 224ms/step - loss: 0.7557 - accuracy: 0.7569 - val_loss: 1.5024 - val_accuracy: 0.5389
Epoch 37/50
90/90 [==============================] - 20s 224ms/step - loss: 0.7298 - accuracy: 0.7681 - val_loss: 1.4272 - val_accuracy: 0.5389
Epoch 38/50
90/90 [==============================] - 20s 223ms/step - loss: 0.7378 - accuracy: 0.7632 - val_loss: 1.3973 - val_accuracy: 0.5778
Epoch 39/50
90/90 [==============================] - 20s 224ms/step - loss: 0.7025 - accuracy: 0.7875 - val_loss: 1.3738 - val_accuracy: 0.5500
Epoch 40/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6812 - accuracy: 0.7958 - val_loss: 1.5651 - val_accuracy: 0.5361
Epoch 41/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6646 - accuracy: 0.7854 - val_loss: 1.4765 - val_accuracy: 0.5667
Epoch 42/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6477 - accuracy: 0.8021 - val_loss: 1.5985 - val_accuracy: 0.5361
Epoch 43/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6508 - accuracy: 0.8042 - val_loss: 1.3467 - val_accuracy: 0.5667
Epoch 44/50
90/90 [==============================] - 20s 225ms/step - loss: 0.6539 - accuracy: 0.7889 - val_loss: 1.3919 - val_accuracy: 0.5778
Epoch 45/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6402 - accuracy: 0.8104 - val_loss: 1.3426 - val_accuracy: 0.5917
Epoch 46/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6178 - accuracy: 0.8076 - val_loss: 1.4094 - val_accuracy: 0.5833
Epoch 47/50
90/90 [==============================] - 20s 223ms/step - loss: 0.6083 - accuracy: 0.8000 - val_loss: 1.3747 - val_accuracy: 0.5750
Epoch 48/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6079 - accuracy: 0.8028 - val_loss: 1.5148 - val_accuracy: 0.5583
Epoch 49/50
90/90 [==============================] - 20s 224ms/step - loss: 0.6115 - accuracy: 0.8000 - val_loss: 1.9661 - val_accuracy: 0.4556
Epoch 50/50
90/90 [==============================] - 20s 224ms/step - loss: 0.5785 - accuracy: 0.8146 - val_loss: 1.4971 - val_accuracy: 0.5500

VI. Model Evaluation

1. Accuracy and Loss Curves

from matplotlib.ticker import MultipleLocator

plt.rcParams['savefig.dpi'] = 300  # Saved-figure resolution
plt.rcParams['figure.dpi']  = 300  # Display resolution

acc1     = history_model1.history['accuracy']
acc2     = history_model2.history['accuracy']
val_acc1 = history_model1.history['val_accuracy']
val_acc2 = history_model2.history['val_accuracy']

loss1     = history_model1.history['loss']
loss2     = history_model2.history['loss']
val_loss1 = history_model1.history['val_loss']
val_loss2 = history_model2.history['val_loss']

epochs_range = range(len(acc1))

plt.figure(figsize=(16, 4))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc1, label='Training Accuracy-Adam')
plt.plot(epochs_range, acc2, label='Training Accuracy-SGD')
plt.plot(epochs_range, val_acc1, label='Validation Accuracy-Adam')
plt.plot(epochs_range, val_acc2, label='Validation Accuracy-SGD')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
# Set the tick interval: one x-axis tick per epoch
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss1, label='Training Loss-Adam')
plt.plot(epochs_range, loss2, label='Training Loss-SGD')
plt.plot(epochs_range, val_loss1, label='Validation Loss-Adam')
plt.plot(epochs_range, val_loss2, label='Validation Loss-SGD')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
# Set the tick interval: one x-axis tick per epoch
ax = plt.gca()
ax.xaxis.set_major_locator(MultipleLocator(1))

plt.show()

[Figure: training/validation accuracy (left) and loss (right) over 50 epochs, Adam vs. SGD]

2. Evaluating the Model

def test_accuracy_report(model):
    score = model.evaluate(val_ds, verbose=0)
    print('Loss function: %s, accuracy:' % score[0], score[1])

test_accuracy_report(model2)
Loss function: 1.49705171585083, accuracy: 0.550000011920929
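The report above covers model2 (SGD) only; a small loop like this sketch would put both optimizers side by side on the validation set (exact numbers will vary between runs):

# Evaluate both trained models on the validation set for a direct comparison
for name, model in [("Adam", model1), ("SGD", model2)]:
    loss, acc = model.evaluate(val_ds, verbose=0)
    print(f"{name}: val_loss = {loss:.4f}, val_accuracy = {acc:.4f}")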

VII. Final Thoughts

That wraps up this post. My computer has been having problems lately that prevent me from training complex models, so I am planning to reinstall the system and clean things up. I ran this experiment on Google Colaboratory; the platform has been a good experience so far, and its free compute is enough for my needs. If your own machine is underpowered, it is worth a try.
