Caffe Notes: Running the Handwritten Digit Recognition Example

2017-02-08 Lu Huang

Original post: https://hlthu.github.io/caffe/2017/02/08/caffe-example-minist.html


This post shows how to run a simple example after installing Caffe, handwritten digit recognition, in order to get familiar with the basic Caffe workflow. For installing Caffe itself, see my earlier post: Configuring a CUDA + Caffe environment on Ubuntu 16.04.

1. The MNIST Dataset

1.1 Overview

MNIST is a database of handwritten digits built by Corinna Cortes of Google Labs, Yann LeCun of the Courant Institute at NYU, and others. It contains 60,000 training samples and 10,000 test samples. The official site is http://yann.lecun.com/exdb/mnist/

1.2 The MNIST Data Format

The four compressed files can be downloaded directly with the following command, run from the Caffe root.

$ ./data/mnist/get_mnist.sh 

The downloaded files are the training images, training labels, test images, and test labels. After decompressing them you will find that each file packs all of its images (or labels) into one binary blob, as listed below.

File                         Contents
train-images-idx3-ubyte.gz   training-set images (60,000 images)
train-labels-idx1-ubyte.gz   digit labels for the training images
t10k-images-idx3-ubyte.gz    test-set images (10,000 images)
t10k-labels-idx1-ubyte.gz    digit labels for the test images
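
To make the format concrete, here is a minimal Python sketch that parses an IDX image file directly. This is an illustration rather than part of the Caffe example; it relies only on numpy, and the 16-byte big-endian header layout is the one documented on the MNIST page.

import gzip
import struct
import numpy as np

def load_idx_images(path):
    # IDX image files begin with a 16-byte header of four big-endian
    # uint32 values: magic (2051 for images), count, rows, cols.
    with gzip.open(path, 'rb') as f:
        magic, num, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(num, rows, cols)

images = load_idx_images('data/mnist/train-images-idx3-ubyte.gz')
print(images.shape)   # (60000, 28, 28)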

1.3 Converting the Format

The files above are raw binaries; they must be converted to LEVELDB or LMDB before Caffe can read them. The Caffe examples provide a script for this; simply run the following from the Caffe root.

$ ./examples/mnist/create_mnist.sh 
Creating lmdb...
I0208 00:25:42.315726 28021 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0208 00:25:42.316124 28021 convert_mnist_data.cpp:88] A total of 60000 items.
I0208 00:25:42.316164 28021 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0208 00:25:47.649130 28021 convert_mnist_data.cpp:108] Processed 60000 files.
I0208 00:25:47.715617 28041 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0208 00:25:47.716063 28041 convert_mnist_data.cpp:88] A total of 10000 items.
I0208 00:25:47.716094 28041 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0208 00:25:48.420140 28041 convert_mnist_data.cpp:108] Processed 10000 files.
Done.
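
To confirm that the conversion worked, the new LMDB can be read back from Python. Below is a sketch, assuming the lmdb package and Caffe's Python bindings (which provide caffe_pb2) are installed:

import lmdb
import numpy as np
from caffe.proto import caffe_pb2

# open the freshly created training database read-only
env = lmdb.open('examples/mnist/mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()
    datum.ParseFromString(value)          # each record is a serialized Datum
    img = np.frombuffer(datum.data, dtype=np.uint8)
    img = img.reshape(datum.channels, datum.height, datum.width)
    print(key, datum.label, img.shape)    # e.g. b'00000000' 5 (1, 28, 28)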

2. The LeNet-5 Model

Next we meet a classic deep learning model: LeNet-5. It was first proposed by Yann LeCun (then at AT&T Bell Labs) and was applied, among other things, to postal zip code recognition.

2.1 The LeNet-5 Model Definition

The LeNet-5 model is defined in examples/mnist/lenet_train_test.prototxt, whose content is as follows:

name: "LeNet"       #网络名称
layer {             #定义一个网络层
  name: "mnist"     #层的名称叫做mnist
  type: "Data"      #类型为数据层
  top: "data"       #层的输出blob有两个:①data
  top: "label"      #②label
  include {
    phase: TRAIN    #该层只在训练阶段有效
  }
  transform_param { #数据变换时使用的缩放因子
    scale: 0.00390625
  }
  data_param {      #数据层参数,包括路径和格式
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {             # a second data layer
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST     # active only during testing
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {            # convolution layer 1
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1     # weight learning-rate multiplier; 1 means the global rate
  }
  param {
    lr_mult: 2     # bias learning-rate multiplier; twice the global rate
  }
  convolution_param {   # convolution parameters
    num_output: 20      # produce 20 feature maps
    kernel_size: 5      # 5x5 convolution kernel
    stride: 1           # stride 1, i.e. dense output
    weight_filler {     # initialize weights with the xavier filler
      type: "xavier"
    }
    bias_filler {       # initialize biases with the constant filler (zero by default)
      type: "constant"
    }
  }
}
layer {                 # a pooling layer
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {       # pooling parameters
    pool: MAX           # max pooling
    kernel_size: 2      # 2x2 pooling window
    stride: 2           # step of 2, i.e. non-overlapping windows
  }
}
layer {                 # a second convolution layer
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {                 # a second pooling layer
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {                 # a fully connected (InnerProduct) layer
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500    # this layer has 500 output units
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {                # a non-linearity layer
  name: "relu1"
  type: "ReLU"         # using the ReLU activation
  bottom: "ip1"
  top: "ip1"
}
layer {                # a second fully connected layer
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}                   
layer {              # classification accuracy layer
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST      # active only in the test phase
  }
}
layer {              # loss layer
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
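
Before visualizing the net, it is worth sanity-checking the layer sizes. The short Python sketch below (an illustration of the arithmetic, not part of the Caffe example) reproduces the feature-map shapes that the initialization log in section 2.5 reports:

# valid convolution and pooling both shrink a map as (size - kernel) / stride + 1
def out_size(size, kernel, stride):
    return (size - kernel) // stride + 1

s = 28                        # input: 1x28x28
s = out_size(s, 5, 1)         # conv1: 20 maps of 24x24
s = out_size(s, 2, 2)         # pool1: 20 maps of 12x12
s = out_size(s, 5, 1)         # conv2: 50 maps of 8x8
s = out_size(s, 2, 2)         # pool2: 50 maps of 4x4
print(50 * s * s)             # 800 values feed the 500-unit ip1 layer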

2.2 Visualizing LeNet-5

Open http://ethereon.github.io/netscope/#/editor, paste the prototxt above into the left-hand pane, and press Shift+Enter to render the network. (Caffe also ships python/draw_net.py for drawing a network offline.)

(Figure: the network in the training phase.)

(Figure: the network in the test phase; it has an extra layer that computes accuracy.)

(Figure: both phases drawn in a single diagram.)

2.3 Training Hyper-parameters

The file lenet_solver.prototxt under examples/mnist/ specifies the training hyper-parameters. Its content is as follows:

# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results every 5000 iterations
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
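
The "inv" learning-rate policy decays the rate as lr = base_lr * (1 + gamma * iter)^(-power). A few lines of Python (an illustration, not part of the example) trace the schedule; the value at iteration 9800 matches the lr printed in the training log below:

# Caffe's "inv" policy: lr = base_lr * (1 + gamma * iter) ** (-power)
base_lr, gamma, power = 0.01, 0.0001, 0.75

for it in (0, 100, 5000, 9800):
    lr = base_lr * (1 + gamma * it) ** (-power)
    print(it, lr)   # iteration 9800 gives ~0.00599102, as in the log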

2.4 Training the Network

Run the following command:

$ ./examples/mnist/train_lenet.sh

The script's content is essentially:

#!/usr/bin/env sh
set -e

./build/tools/caffe train --gpu=1 --solver=examples/mnist/lenet_solver.prototxt $@

Running this script prints a lot of information, roughly as follows (with much of the middle omitted):

I0208 01:16:07.225497 28140 caffe.cpp:217] Using GPUs 1
I0208 01:16:07.306118 28140 caffe.cpp:222] GPU 1: TITAN X (Pascal)
I0208 01:16:07.701035 28140 solver.cpp:48] Initializing solver from parameters: 
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: GPU
device_id: 1
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
....................................
....................................
I0208 01:16:22.695086 28140 sgd_solver.cpp:106] Iteration 9800, lr = 0.00599102
I0208 01:16:22.812600 28140 solver.cpp:228] Iteration 9900, loss = 0.00607422
I0208 01:16:22.812639 28140 solver.cpp:244]     Train net output #0: loss = 0.00607403 (* 1 = 0.00607403 loss)
I0208 01:16:22.812645 28140 sgd_solver.cpp:106] Iteration 9900, lr = 0.00596843
I0208 01:16:22.928777 28140 solver.cpp:454] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodel
I0208 01:16:22.934314 28140 sgd_solver.cpp:273] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstate
I0208 01:16:22.937324 28140 solver.cpp:317] Iteration 10000, loss = 0.00261267
I0208 01:16:22.937355 28140 solver.cpp:337] Iteration 10000, Testing net (#0)
I0208 01:16:23.027426 28140 blocking_queue.cpp:50] Data layer prefetch queue empty
I0208 01:16:23.038450 28140 solver.cpp:404]     Test net output #0: accuracy = 0.9904
I0208 01:16:23.038494 28140 solver.cpp:404]     Test net output #1: loss = 0.0312146 (* 1 = 0.0312146 loss)
I0208 01:16:23.038502 28140 solver.cpp:322] Optimization Done.
I0208 01:16:23.038508 28140 caffe.cpp:254] Optimization Done.
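
The same run can also be driven from Python. Here is a minimal sketch, assuming pycaffe has been built and is on PYTHONPATH, executed from the Caffe root:

import caffe

caffe.set_device(1)   # same GPU as in train_lenet.sh above; use 0 on a single-GPU machine
caffe.set_mode_gpu()

# the solver reads the net definition and all hyper-parameters from the prototxt
solver = caffe.SGDSolver('examples/mnist/lenet_solver.prototxt')
solver.solve()        # runs all 10000 iterations, snapshotting as configured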

2.5 Predicting with the Trained Model

Run the following command:

./build/tools/caffe test --gpu=1 \
-model examples/mnist/lenet_train_test.prototxt \
-weights examples/mnist/lenet_iter_10000.caffemodel \
-iterations 100

The output looks roughly like this:

I0208 01:21:45.381108 28201 caffe.cpp:270] Use GPU with device ID 1
I0208 01:21:45.463779 28201 caffe.cpp:274] GPU device name: TITAN X (Pascal)
I0208 01:21:45.878170 28201 net.cpp:322] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0208 01:21:45.878314 28201 net.cpp:58] Initializing net from parameters: 
name: "LeNet"
state {
  phase: TEST
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0208 01:21:45.878454 28201 layer_factory.hpp:77] Creating layer mnist
I0208 01:21:45.878931 28201 net.cpp:100] Creating Layer mnist
I0208 01:21:45.878943 28201 net.cpp:408] mnist -> data
I0208 01:21:45.878965 28201 net.cpp:408] mnist -> label
I0208 01:21:45.879957 28228 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0208 01:21:45.903156 28201 data_layer.cpp:41] output data size: 100,1,28,28
I0208 01:21:45.950862 28201 net.cpp:150] Setting up mnist
I0208 01:21:45.950908 28201 net.cpp:157] Top shape: 100 1 28 28 (78400)
I0208 01:21:45.950917 28201 net.cpp:157] Top shape: 100 (100)
I0208 01:21:45.950920 28201 net.cpp:165] Memory required for data: 314000
I0208 01:21:45.950932 28201 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0208 01:21:45.950948 28201 net.cpp:100] Creating Layer label_mnist_1_split
I0208 01:21:45.950955 28201 net.cpp:434] label_mnist_1_split <- label
I0208 01:21:45.950969 28201 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_0
I0208 01:21:45.950980 28201 net.cpp:408] label_mnist_1_split -> label_mnist_1_split_1
I0208 01:21:45.951057 28201 net.cpp:150] Setting up label_mnist_1_split
I0208 01:21:45.951071 28201 net.cpp:157] Top shape: 100 (100)
I0208 01:21:45.951078 28201 net.cpp:157] Top shape: 100 (100)
I0208 01:21:45.951082 28201 net.cpp:165] Memory required for data: 314800
I0208 01:21:45.951087 28201 layer_factory.hpp:77] Creating layer conv1
I0208 01:21:45.951108 28201 net.cpp:100] Creating Layer conv1
I0208 01:21:45.951113 28201 net.cpp:434] conv1 <- data
I0208 01:21:45.951119 28201 net.cpp:408] conv1 -> conv1
I0208 01:21:48.778815 28201 net.cpp:150] Setting up conv1
I0208 01:21:48.778846 28201 net.cpp:157] Top shape: 100 20 24 24 (1152000)
I0208 01:21:48.778851 28201 net.cpp:165] Memory required for data: 4922800
I0208 01:21:48.778899 28201 layer_factory.hpp:77] Creating layer pool1
I0208 01:21:48.778919 28201 net.cpp:100] Creating Layer pool1
I0208 01:21:48.778925 28201 net.cpp:434] pool1 <- conv1
I0208 01:21:48.778936 28201 net.cpp:408] pool1 -> pool1
I0208 01:21:48.778980 28201 net.cpp:150] Setting up pool1
I0208 01:21:48.778987 28201 net.cpp:157] Top shape: 100 20 12 12 (288000)
I0208 01:21:48.778995 28201 net.cpp:165] Memory required for data: 6074800
I0208 01:21:48.779000 28201 layer_factory.hpp:77] Creating layer conv2
I0208 01:21:48.779014 28201 net.cpp:100] Creating Layer conv2
I0208 01:21:48.779018 28201 net.cpp:434] conv2 <- pool1
I0208 01:21:48.779024 28201 net.cpp:408] conv2 -> conv2
I0208 01:21:48.780875 28201 net.cpp:150] Setting up conv2
I0208 01:21:48.780892 28201 net.cpp:157] Top shape: 100 50 8 8 (320000)
I0208 01:21:48.780896 28201 net.cpp:165] Memory required for data: 7354800
I0208 01:21:48.780905 28201 layer_factory.hpp:77] Creating layer pool2
I0208 01:21:48.780913 28201 net.cpp:100] Creating Layer pool2
I0208 01:21:48.780920 28201 net.cpp:434] pool2 <- conv2
I0208 01:21:48.780925 28201 net.cpp:408] pool2 -> pool2
I0208 01:21:48.780959 28201 net.cpp:150] Setting up pool2
I0208 01:21:48.780966 28201 net.cpp:157] Top shape: 100 50 4 4 (80000)
I0208 01:21:48.780971 28201 net.cpp:165] Memory required for data: 7674800
I0208 01:21:48.780974 28201 layer_factory.hpp:77] Creating layer ip1
I0208 01:21:48.780984 28201 net.cpp:100] Creating Layer ip1
I0208 01:21:48.780988 28201 net.cpp:434] ip1 <- pool2
I0208 01:21:48.780993 28201 net.cpp:408] ip1 -> ip1
I0208 01:21:48.783550 28201 net.cpp:150] Setting up ip1
I0208 01:21:48.783573 28201 net.cpp:157] Top shape: 100 500 (50000)
I0208 01:21:48.783577 28201 net.cpp:165] Memory required for data: 7874800
I0208 01:21:48.783587 28201 layer_factory.hpp:77] Creating layer relu1
I0208 01:21:48.783596 28201 net.cpp:100] Creating Layer relu1
I0208 01:21:48.783602 28201 net.cpp:434] relu1 <- ip1
I0208 01:21:48.783608 28201 net.cpp:395] relu1 -> ip1 (in-place)
I0208 01:21:48.783787 28201 net.cpp:150] Setting up relu1
I0208 01:21:48.783795 28201 net.cpp:157] Top shape: 100 500 (50000)
I0208 01:21:48.783799 28201 net.cpp:165] Memory required for data: 8074800
I0208 01:21:48.783803 28201 layer_factory.hpp:77] Creating layer ip2
I0208 01:21:48.783815 28201 net.cpp:100] Creating Layer ip2
I0208 01:21:48.783820 28201 net.cpp:434] ip2 <- ip1
I0208 01:21:48.783828 28201 net.cpp:408] ip2 -> ip2
I0208 01:21:48.784708 28201 net.cpp:150] Setting up ip2
I0208 01:21:48.784719 28201 net.cpp:157] Top shape: 100 10 (1000)
I0208 01:21:48.784723 28201 net.cpp:165] Memory required for data: 8078800
I0208 01:21:48.784729 28201 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0208 01:21:48.784736 28201 net.cpp:100] Creating Layer ip2_ip2_0_split
I0208 01:21:48.784739 28201 net.cpp:434] ip2_ip2_0_split <- ip2
I0208 01:21:48.784744 28201 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0208 01:21:48.784750 28201 net.cpp:408] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0208 01:21:48.784781 28201 net.cpp:150] Setting up ip2_ip2_0_split
I0208 01:21:48.784787 28201 net.cpp:157] Top shape: 100 10 (1000)
I0208 01:21:48.784792 28201 net.cpp:157] Top shape: 100 10 (1000)
I0208 01:21:48.784796 28201 net.cpp:165] Memory required for data: 8086800
I0208 01:21:48.784801 28201 layer_factory.hpp:77] Creating layer accuracy
I0208 01:21:48.784811 28201 net.cpp:100] Creating Layer accuracy
I0208 01:21:48.784816 28201 net.cpp:434] accuracy <- ip2_ip2_0_split_0
I0208 01:21:48.784821 28201 net.cpp:434] accuracy <- label_mnist_1_split_0
I0208 01:21:48.784826 28201 net.cpp:408] accuracy -> accuracy
I0208 01:21:48.784838 28201 net.cpp:150] Setting up accuracy
I0208 01:21:48.784843 28201 net.cpp:157] Top shape: (1)
I0208 01:21:48.784847 28201 net.cpp:165] Memory required for data: 8086804
I0208 01:21:48.784852 28201 layer_factory.hpp:77] Creating layer loss
I0208 01:21:48.784860 28201 net.cpp:100] Creating Layer loss
I0208 01:21:48.784864 28201 net.cpp:434] loss <- ip2_ip2_0_split_1
I0208 01:21:48.784888 28201 net.cpp:434] loss <- label_mnist_1_split_1
I0208 01:21:48.784894 28201 net.cpp:408] loss -> loss
I0208 01:21:48.784907 28201 layer_factory.hpp:77] Creating layer loss
I0208 01:21:48.785356 28201 net.cpp:150] Setting up loss
I0208 01:21:48.785369 28201 net.cpp:157] Top shape: (1)
I0208 01:21:48.785373 28201 net.cpp:160]     with loss weight 1
I0208 01:21:48.785390 28201 net.cpp:165] Memory required for data: 8086808
I0208 01:21:48.785395 28201 net.cpp:226] loss needs backward computation.
I0208 01:21:48.785399 28201 net.cpp:228] accuracy does not need backward computation.
I0208 01:21:48.785404 28201 net.cpp:226] ip2_ip2_0_split needs backward computation.
I0208 01:21:48.785408 28201 net.cpp:226] ip2 needs backward computation.
I0208 01:21:48.785413 28201 net.cpp:226] relu1 needs backward computation.
I0208 01:21:48.785416 28201 net.cpp:226] ip1 needs backward computation.
I0208 01:21:48.785421 28201 net.cpp:226] pool2 needs backward computation.
I0208 01:21:48.785425 28201 net.cpp:226] conv2 needs backward computation.
I0208 01:21:48.785429 28201 net.cpp:226] pool1 needs backward computation.
I0208 01:21:48.785434 28201 net.cpp:226] conv1 needs backward computation.
I0208 01:21:48.785439 28201 net.cpp:228] label_mnist_1_split does not need backward computation.
I0208 01:21:48.785445 28201 net.cpp:228] mnist does not need backward computation.
I0208 01:21:48.785449 28201 net.cpp:270] This network produces output accuracy
I0208 01:21:48.785454 28201 net.cpp:270] This network produces output loss
I0208 01:21:48.785464 28201 net.cpp:283] Network initialization done.
I0208 01:21:48.786916 28201 caffe.cpp:285] Running for 100 iterations.
I0208 01:21:48.789531 28201 caffe.cpp:308] Batch 0, accuracy = 1
I0208 01:21:48.789556 28201 caffe.cpp:308] Batch 0, loss = 0.0138564
I0208 01:21:48.790132 28201 caffe.cpp:308] Batch 1, accuracy = 0.99
I0208 01:21:48.790144 28201 caffe.cpp:308] Batch 1, loss = 0.0119744
I0208 01:21:48.790709 28201 caffe.cpp:308] Batch 2, accuracy = 0.99
I0208 01:21:48.790719 28201 caffe.cpp:308] Batch 2, loss = 0.0241577
I0208 01:21:48.791332 28201 caffe.cpp:308] Batch 3, accuracy = 0.99
I0208 01:21:48.791347 28201 caffe.cpp:308] Batch 3, loss = 0.0330763
I0208 01:21:48.791908 28201 caffe.cpp:308] Batch 4, accuracy = 0.98
I0208 01:21:48.791919 28201 caffe.cpp:308] Batch 4, loss = 0.0518114
I0208 01:21:48.791925 28201 blocking_queue.cpp:50] Data layer prefetch queue empty
I0208 01:21:48.793009 28201 caffe.cpp:308] Batch 5, accuracy = 0.99
I0208 01:21:48.793020 28201 caffe.cpp:308] Batch 5, loss = 0.0544784
I0208 01:21:48.794299 28201 caffe.cpp:308] Batch 6, accuracy = 0.97
I0208 01:21:48.794311 28201 caffe.cpp:308] Batch 6, loss = 0.0614181
I0208 01:21:48.795655 28201 caffe.cpp:308] Batch 7, accuracy = 0.99
I0208 01:21:48.795673 28201 caffe.cpp:308] Batch 7, loss = 0.0253358
I0208 01:21:48.796913 28201 caffe.cpp:308] Batch 8, accuracy = 0.99
I0208 01:21:48.796928 28201 caffe.cpp:308] Batch 8, loss = 0.0210847
I0208 01:21:48.798233 28201 caffe.cpp:308] Batch 9, accuracy = 0.99
I0208 01:21:48.798244 28201 caffe.cpp:308] Batch 9, loss = 0.0288414
I0208 01:21:48.799496 28201 caffe.cpp:308] Batch 10, accuracy = 0.98
I0208 01:21:48.799510 28201 caffe.cpp:308] Batch 10, loss = 0.0534136
I0208 01:21:48.800758 28201 caffe.cpp:308] Batch 11, accuracy = 0.99
I0208 01:21:48.800778 28201 caffe.cpp:308] Batch 11, loss = 0.0329061
I0208 01:21:48.802098 28201 caffe.cpp:308] Batch 12, accuracy = 0.96
I0208 01:21:48.802112 28201 caffe.cpp:308] Batch 12, loss = 0.146791
I0208 01:21:48.803298 28201 caffe.cpp:308] Batch 13, accuracy = 0.98
I0208 01:21:48.803313 28201 caffe.cpp:308] Batch 13, loss = 0.0417129
I0208 01:21:48.804535 28201 caffe.cpp:308] Batch 14, accuracy = 0.99
I0208 01:21:48.804551 28201 caffe.cpp:308] Batch 14, loss = 0.0319833
I0208 01:21:48.805675 28201 caffe.cpp:308] Batch 15, accuracy = 0.98
I0208 01:21:48.805691 28201 caffe.cpp:308] Batch 15, loss = 0.0370322
I0208 01:21:48.806882 28201 caffe.cpp:308] Batch 16, accuracy = 0.98
I0208 01:21:48.806895 28201 caffe.cpp:308] Batch 16, loss = 0.0414009
I0208 01:21:48.808073 28201 caffe.cpp:308] Batch 17, accuracy = 0.99
I0208 01:21:48.808091 28201 caffe.cpp:308] Batch 17, loss = 0.0309207
I0208 01:21:48.809263 28201 caffe.cpp:308] Batch 18, accuracy = 0.99
I0208 01:21:48.809283 28201 caffe.cpp:308] Batch 18, loss = 0.0179768
I0208 01:21:48.810477 28201 caffe.cpp:308] Batch 19, accuracy = 0.98
I0208 01:21:48.810497 28201 caffe.cpp:308] Batch 19, loss = 0.0688353
I0208 01:21:48.811635 28201 caffe.cpp:308] Batch 20, accuracy = 0.98
I0208 01:21:48.811653 28201 caffe.cpp:308] Batch 20, loss = 0.0801105
I0208 01:21:48.812798 28201 caffe.cpp:308] Batch 21, accuracy = 0.97
I0208 01:21:48.812816 28201 caffe.cpp:308] Batch 21, loss = 0.0672125
I0208 01:21:48.813985 28201 caffe.cpp:308] Batch 22, accuracy = 0.99
I0208 01:21:48.814002 28201 caffe.cpp:308] Batch 22, loss = 0.0251008
I0208 01:21:48.815296 28201 caffe.cpp:308] Batch 23, accuracy = 0.99
I0208 01:21:48.815310 28201 caffe.cpp:308] Batch 23, loss = 0.0262996
I0208 01:21:48.816368 28201 caffe.cpp:308] Batch 24, accuracy = 0.99
I0208 01:21:48.816387 28201 caffe.cpp:308] Batch 24, loss = 0.0397217
I0208 01:21:48.817464 28201 caffe.cpp:308] Batch 25, accuracy = 0.99
I0208 01:21:48.817482 28201 caffe.cpp:308] Batch 25, loss = 0.0802493
I0208 01:21:48.818650 28201 caffe.cpp:308] Batch 26, accuracy = 0.99
I0208 01:21:48.818670 28201 caffe.cpp:308] Batch 26, loss = 0.11885
I0208 01:21:48.819720 28201 caffe.cpp:308] Batch 27, accuracy = 0.99
I0208 01:21:48.819738 28201 caffe.cpp:308] Batch 27, loss = 0.0252997
I0208 01:21:48.820829 28201 caffe.cpp:308] Batch 28, accuracy = 0.99
I0208 01:21:48.820847 28201 caffe.cpp:308] Batch 28, loss = 0.0436521
I0208 01:21:48.821900 28201 caffe.cpp:308] Batch 29, accuracy = 0.96
I0208 01:21:48.821915 28201 caffe.cpp:308] Batch 29, loss = 0.118211
I0208 01:21:48.823006 28201 caffe.cpp:308] Batch 30, accuracy = 0.98
I0208 01:21:48.823019 28201 caffe.cpp:308] Batch 30, loss = 0.0249729
I0208 01:21:48.824134 28201 caffe.cpp:308] Batch 31, accuracy = 1
I0208 01:21:48.824154 28201 caffe.cpp:308] Batch 31, loss = 0.00354612
I0208 01:21:48.825242 28201 caffe.cpp:308] Batch 32, accuracy = 0.99
I0208 01:21:48.825261 28201 caffe.cpp:308] Batch 32, loss = 0.0254843
I0208 01:21:48.826396 28201 caffe.cpp:308] Batch 33, accuracy = 1
I0208 01:21:48.826411 28201 caffe.cpp:308] Batch 33, loss = 0.00646514
I0208 01:21:48.827446 28201 caffe.cpp:308] Batch 34, accuracy = 0.99
I0208 01:21:48.827461 28201 caffe.cpp:308] Batch 34, loss = 0.0563451
I0208 01:21:48.828557 28201 caffe.cpp:308] Batch 35, accuracy = 0.96
I0208 01:21:48.828574 28201 caffe.cpp:308] Batch 35, loss = 0.156729
I0208 01:21:48.829602 28201 caffe.cpp:308] Batch 36, accuracy = 1
I0208 01:21:48.829622 28201 caffe.cpp:308] Batch 36, loss = 0.0036817
I0208 01:21:48.830647 28201 caffe.cpp:308] Batch 37, accuracy = 0.98
I0208 01:21:48.830667 28201 caffe.cpp:308] Batch 37, loss = 0.0599181
I0208 01:21:48.831712 28201 caffe.cpp:308] Batch 38, accuracy = 0.99
I0208 01:21:48.831732 28201 caffe.cpp:308] Batch 38, loss = 0.0266392
I0208 01:21:48.832762 28201 caffe.cpp:308] Batch 39, accuracy = 0.98
I0208 01:21:48.832782 28201 caffe.cpp:308] Batch 39, loss = 0.0626804
I0208 01:21:48.833812 28201 caffe.cpp:308] Batch 40, accuracy = 0.98
I0208 01:21:48.833835 28201 caffe.cpp:308] Batch 40, loss = 0.0712561
I0208 01:21:48.834872 28201 caffe.cpp:308] Batch 41, accuracy = 0.98
I0208 01:21:48.834890 28201 caffe.cpp:308] Batch 41, loss = 0.0886098
I0208 01:21:48.835901 28201 caffe.cpp:308] Batch 42, accuracy = 0.98
I0208 01:21:48.835918 28201 caffe.cpp:308] Batch 42, loss = 0.0506899
I0208 01:21:48.836942 28201 caffe.cpp:308] Batch 43, accuracy = 1
I0208 01:21:48.836961 28201 caffe.cpp:308] Batch 43, loss = 0.0105805
I0208 01:21:48.837978 28201 caffe.cpp:308] Batch 44, accuracy = 0.99
I0208 01:21:48.837997 28201 caffe.cpp:308] Batch 44, loss = 0.0195225
I0208 01:21:48.839022 28201 caffe.cpp:308] Batch 45, accuracy = 0.99
I0208 01:21:48.839041 28201 caffe.cpp:308] Batch 45, loss = 0.0504271
I0208 01:21:48.840080 28201 caffe.cpp:308] Batch 46, accuracy = 1
I0208 01:21:48.840122 28201 caffe.cpp:308] Batch 46, loss = 0.00548156
I0208 01:21:48.841060 28201 caffe.cpp:308] Batch 47, accuracy = 0.99
I0208 01:21:48.841078 28201 caffe.cpp:308] Batch 47, loss = 0.0158982
I0208 01:21:48.842052 28201 caffe.cpp:308] Batch 48, accuracy = 0.97
I0208 01:21:48.842069 28201 caffe.cpp:308] Batch 48, loss = 0.0968074
I0208 01:21:48.843040 28201 caffe.cpp:308] Batch 49, accuracy = 1
I0208 01:21:48.843058 28201 caffe.cpp:308] Batch 49, loss = 0.00292469
I0208 01:21:48.844036 28201 caffe.cpp:308] Batch 50, accuracy = 1
I0208 01:21:48.844054 28201 caffe.cpp:308] Batch 50, loss = 0.000156528
I0208 01:21:48.845022 28201 caffe.cpp:308] Batch 51, accuracy = 0.99
I0208 01:21:48.845042 28201 caffe.cpp:308] Batch 51, loss = 0.0228834
I0208 01:21:48.846012 28201 caffe.cpp:308] Batch 52, accuracy = 1
I0208 01:21:48.846031 28201 caffe.cpp:308] Batch 52, loss = 0.00443821
I0208 01:21:48.847002 28201 caffe.cpp:308] Batch 53, accuracy = 1
I0208 01:21:48.847020 28201 caffe.cpp:308] Batch 53, loss = 0.000234111
I0208 01:21:48.847992 28201 caffe.cpp:308] Batch 54, accuracy = 1
I0208 01:21:48.848011 28201 caffe.cpp:308] Batch 54, loss = 0.00645191
I0208 01:21:48.848975 28201 caffe.cpp:308] Batch 55, accuracy = 1
I0208 01:21:48.848994 28201 caffe.cpp:308] Batch 55, loss = 0.000219036
I0208 01:21:48.849952 28201 caffe.cpp:308] Batch 56, accuracy = 1
I0208 01:21:48.849970 28201 caffe.cpp:308] Batch 56, loss = 0.00757466
I0208 01:21:48.850945 28201 caffe.cpp:308] Batch 57, accuracy = 1
I0208 01:21:48.850965 28201 caffe.cpp:308] Batch 57, loss = 0.00884481
I0208 01:21:48.851925 28201 caffe.cpp:308] Batch 58, accuracy = 0.99
I0208 01:21:48.851950 28201 caffe.cpp:308] Batch 58, loss = 0.010875
I0208 01:21:48.852876 28201 caffe.cpp:308] Batch 59, accuracy = 0.98
I0208 01:21:48.852900 28201 caffe.cpp:308] Batch 59, loss = 0.0964789
I0208 01:21:48.853785 28201 caffe.cpp:308] Batch 60, accuracy = 1
I0208 01:21:48.853799 28201 caffe.cpp:308] Batch 60, loss = 0.0058882
I0208 01:21:48.854725 28201 caffe.cpp:308] Batch 61, accuracy = 0.99
I0208 01:21:48.854737 28201 caffe.cpp:308] Batch 61, loss = 0.0166666
I0208 01:21:48.855672 28201 caffe.cpp:308] Batch 62, accuracy = 1
I0208 01:21:48.855686 28201 caffe.cpp:308] Batch 62, loss = 2.94048e-05
I0208 01:21:48.856607 28201 caffe.cpp:308] Batch 63, accuracy = 1
I0208 01:21:48.856621 28201 caffe.cpp:308] Batch 63, loss = 0.000171601
I0208 01:21:48.857569 28201 caffe.cpp:308] Batch 64, accuracy = 1
I0208 01:21:48.857589 28201 caffe.cpp:308] Batch 64, loss = 0.000294489
I0208 01:21:48.858522 28201 caffe.cpp:308] Batch 65, accuracy = 0.95
I0208 01:21:48.858542 28201 caffe.cpp:308] Batch 65, loss = 0.163089
I0208 01:21:48.859465 28201 caffe.cpp:308] Batch 66, accuracy = 0.98
I0208 01:21:48.859484 28201 caffe.cpp:308] Batch 66, loss = 0.0662245
I0208 01:21:48.860394 28201 caffe.cpp:308] Batch 67, accuracy = 0.99
I0208 01:21:48.860409 28201 caffe.cpp:308] Batch 67, loss = 0.024306
I0208 01:21:48.861327 28201 caffe.cpp:308] Batch 68, accuracy = 0.99
I0208 01:21:48.861340 28201 caffe.cpp:308] Batch 68, loss = 0.0106896
I0208 01:21:48.862258 28201 caffe.cpp:308] Batch 69, accuracy = 1
I0208 01:21:48.862272 28201 caffe.cpp:308] Batch 69, loss = 0.00092099
I0208 01:21:48.863180 28201 caffe.cpp:308] Batch 70, accuracy = 1
I0208 01:21:48.863194 28201 caffe.cpp:308] Batch 70, loss = 0.000953357
I0208 01:21:48.864089 28201 caffe.cpp:308] Batch 71, accuracy = 1
I0208 01:21:48.864114 28201 caffe.cpp:308] Batch 71, loss = 0.000292151
I0208 01:21:48.865034 28201 caffe.cpp:308] Batch 72, accuracy = 1
I0208 01:21:48.865053 28201 caffe.cpp:308] Batch 72, loss = 0.00534683
I0208 01:21:48.865881 28201 caffe.cpp:308] Batch 73, accuracy = 1
I0208 01:21:48.865895 28201 caffe.cpp:308] Batch 73, loss = 0.000134651
I0208 01:21:48.866792 28201 caffe.cpp:308] Batch 74, accuracy = 1
I0208 01:21:48.866804 28201 caffe.cpp:308] Batch 74, loss = 0.00460627
I0208 01:21:48.867673 28201 caffe.cpp:308] Batch 75, accuracy = 1
I0208 01:21:48.867687 28201 caffe.cpp:308] Batch 75, loss = 0.000638226
I0208 01:21:48.868602 28201 caffe.cpp:308] Batch 76, accuracy = 1
I0208 01:21:48.868641 28201 caffe.cpp:308] Batch 76, loss = 0.000240971
I0208 01:21:48.869451 28201 caffe.cpp:308] Batch 77, accuracy = 1
I0208 01:21:48.869465 28201 caffe.cpp:308] Batch 77, loss = 0.000451964
I0208 01:21:48.870386 28201 caffe.cpp:308] Batch 78, accuracy = 1
I0208 01:21:48.870405 28201 caffe.cpp:308] Batch 78, loss = 0.00177418
I0208 01:21:48.871278 28201 caffe.cpp:308] Batch 79, accuracy = 1
I0208 01:21:48.871292 28201 caffe.cpp:308] Batch 79, loss = 0.00315147
I0208 01:21:48.872150 28201 caffe.cpp:308] Batch 80, accuracy = 0.99
I0208 01:21:48.872165 28201 caffe.cpp:308] Batch 80, loss = 0.0214662
I0208 01:21:48.873039 28201 caffe.cpp:308] Batch 81, accuracy = 1
I0208 01:21:48.873052 28201 caffe.cpp:308] Batch 81, loss = 0.00210903
I0208 01:21:48.873927 28201 caffe.cpp:308] Batch 82, accuracy = 1
I0208 01:21:48.873941 28201 caffe.cpp:308] Batch 82, loss = 0.00501597
I0208 01:21:48.874831 28201 caffe.cpp:308] Batch 83, accuracy = 1
I0208 01:21:48.874843 28201 caffe.cpp:308] Batch 83, loss = 0.00763634
I0208 01:21:48.875691 28201 caffe.cpp:308] Batch 84, accuracy = 0.99
I0208 01:21:48.875705 28201 caffe.cpp:308] Batch 84, loss = 0.0317567
I0208 01:21:48.876582 28201 caffe.cpp:308] Batch 85, accuracy = 0.99
I0208 01:21:48.876601 28201 caffe.cpp:308] Batch 85, loss = 0.0255428
I0208 01:21:48.877404 28201 caffe.cpp:308] Batch 86, accuracy = 1
I0208 01:21:48.877418 28201 caffe.cpp:308] Batch 86, loss = 7.02248e-05
I0208 01:21:48.878264 28201 caffe.cpp:308] Batch 87, accuracy = 1
I0208 01:21:48.878278 28201 caffe.cpp:308] Batch 87, loss = 0.000133007
I0208 01:21:48.879115 28201 caffe.cpp:308] Batch 88, accuracy = 1
I0208 01:21:48.879128 28201 caffe.cpp:308] Batch 88, loss = 3.45387e-05
I0208 01:21:48.879963 28201 caffe.cpp:308] Batch 89, accuracy = 1
I0208 01:21:48.879976 28201 caffe.cpp:308] Batch 89, loss = 2.35079e-05
I0208 01:21:48.880820 28201 caffe.cpp:308] Batch 90, accuracy = 0.97
I0208 01:21:48.880833 28201 caffe.cpp:308] Batch 90, loss = 0.11273
I0208 01:21:48.881669 28201 caffe.cpp:308] Batch 91, accuracy = 1
I0208 01:21:48.881680 28201 caffe.cpp:308] Batch 91, loss = 2.63113e-05
I0208 01:21:48.882520 28201 caffe.cpp:308] Batch 92, accuracy = 1
I0208 01:21:48.882534 28201 caffe.cpp:308] Batch 92, loss = 0.00162566
I0208 01:21:48.883379 28201 caffe.cpp:308] Batch 93, accuracy = 1
I0208 01:21:48.883390 28201 caffe.cpp:308] Batch 93, loss = 0.00113337
I0208 01:21:48.884224 28201 caffe.cpp:308] Batch 94, accuracy = 1
I0208 01:21:48.884238 28201 caffe.cpp:308] Batch 94, loss = 0.0004182
I0208 01:21:48.885083 28201 caffe.cpp:308] Batch 95, accuracy = 0.99
I0208 01:21:48.885097 28201 caffe.cpp:308] Batch 95, loss = 0.0117184
I0208 01:21:48.885911 28201 caffe.cpp:308] Batch 96, accuracy = 0.98
I0208 01:21:48.885924 28201 caffe.cpp:308] Batch 96, loss = 0.0453633
I0208 01:21:48.886788 28201 caffe.cpp:308] Batch 97, accuracy = 0.98
I0208 01:21:48.886801 28201 caffe.cpp:308] Batch 97, loss = 0.0898979
I0208 01:21:48.887632 28201 caffe.cpp:308] Batch 98, accuracy = 1
I0208 01:21:48.887645 28201 caffe.cpp:308] Batch 98, loss = 0.00376211
I0208 01:21:48.888443 28201 caffe.cpp:308] Batch 99, accuracy = 1
I0208 01:21:48.888456 28201 caffe.cpp:308] Batch 99, loss = 0.00459322
I0208 01:21:48.888466 28201 caffe.cpp:313] Loss: 0.0312146
I0208 01:21:48.888478 28201 caffe.cpp:325] accuracy = 0.9904
I0208 01:21:48.888489 28201 caffe.cpp:325] loss = 0.0312146 (* 1 = 0.0312146 loss)
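
The trained weights can also be loaded for single-image prediction through pycaffe. Below is a sketch; it assumes pycaffe is available and uses the deploy definition examples/mnist/lenet.prototxt, which replaces the data layers with an Input layer and ends in a Softmax layer named "prob":

import gzip
import struct
import numpy as np
import caffe

caffe.set_mode_gpu()
net = caffe.Net('examples/mnist/lenet.prototxt',             # deploy net
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)

# load the first test image and scale it by 1/256, as transform_param did
with gzip.open('data/mnist/t10k-images-idx3-ubyte.gz', 'rb') as f:
    _, _, rows, cols = struct.unpack('>IIII', f.read(16))
    img = np.frombuffer(f.read(rows * cols), dtype=np.uint8).reshape(rows, cols) / 256.0

net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[0, 0] = img
prob = net.forward()['prob'][0]
print(prob.argmax())   # the first MNIST test image is a 7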

References

  1. Configuring a CUDA + Caffe environment on Ubuntu 16.04 (ubuntu 16.04上配置cuda+caffe环境)
  2. Zhao Yongke (赵永科). Deep Learning: 21 Days of Hands-on Caffe (深度学习——21天实战caffe). Publishing House of Electronics Industry, July 2016.