Image Regression with the Caffe Framework

References:

1. http://stackoverflow.com/questions/33766689/caffe-hdf5-pre-processing

2. http://corpocrat.com/2015/02/24/facial-keypoints-extraction-using-deep-learning-with-caffe/

3. http://stackoverflow.com/questions/31774953/test-labels-for-regression-caffe-float-not-allowed/31808324#31808324

This post explains how to perform regression on images within the Caffe framework, given a set of images with their scores and standard deviations. caffe_model is a model trained on ImageNet, together with solver.prototxt and train_val.prototxt. The data comes from: http://live.ece.utexas.edu/. Note: all project setup is carried out from the Caffe root directory.

 

Author: jinxj

Email:   

Date:     2016.3.30

Version: First Version

I. Environment:

A successfully compiled Caffe installation; an image database.

 

II. Data Preprocessing:

Create a folder "face_key_point" under data. All subsequent data processing happens inside "face_key_point", which contains an images folder plus the other files (.py and .txt files).

1. The images folder contains all train, test, and val images;

2. The other files

(1) meancreat_hdf5.py

Generates train.h5 and train_h5_list.txt (check that the path to train.h5 written into that file is absolute; if not, change it). You can open train.h5 with HDFView to verify the data: ImageNum*3*227*227 (SIZE, best kept consistent with the ImageNet model). It handles all image types (.JPG and .bmp together), and the code already performs mean subtraction on the images. The code is as follows:

######################################################################################

import h5py, os

import caffe

import numpy as np

 

SIZE = 227  # fixed size for all images

with open('train.txt', 'r') as T:

    lines = T.readlines()

 

with open('num.txt', 'r') as M:

    lines_Num = M.readlines()

 

# If you do not have enough memory, split the data into

# multiple batches and generate multiple separate h5 files

X = np.zeros((len(lines), 3, SIZE, SIZE), dtype='f4')

y = np.zeros((len(lines_Num),), dtype='f4')

 

print(len(lines))

 

for i, l in enumerate(lines):

    path = l.strip()  # strip the trailing newline (handles both \n and \r\n)

    img = caffe.io.load_image(path)

    img = caffe.io.resize_image(img, (SIZE, SIZE), 3)  # resize to fixed size

    img = np.transpose(img, (2, 0, 1))  # HxWxC -> CxHxW

    # you may apply other input transformations here...

    X[i] = img

# Compute the mean image over the whole training set and subtract it

XMean = X.mean(axis=0)

X = X - XMean

print('image OK')

 

for j, eachLine in enumerate(lines_Num):

    y[j] = float(eachLine) / 92.432  # normalize scores by the maximum score

print('Num OK')

 

with h5py.File('train.h5', 'w') as H:

    H.create_dataset('X', data=X)  # note the name X given to the dataset!

    H.create_dataset('y', data=y)  # note the name y given to the dataset!

with open('train_h5_list.txt', 'w') as L:

    # list all h5 files you are going to use (absolute path)

    L.write('/home2/xj_jin/experiment/caffe-master/data/face_key_point/train.h5')

######################################################################################
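The mean subtraction step above can be sanity-checked in isolation: after subtracting the mean image, the per-pixel mean across the dataset should be (numerically) zero. A minimal NumPy sketch using a small dummy tensor in place of the real image data:

```python
import numpy as np

# Dummy stand-in for the image tensor built above: 4 "images", 3 x 8 x 8
X = np.random.rand(4, 3, 8, 8).astype('f4')

# Mean image over the dataset, as in the script
XMean = X.mean(axis=0)
X_centered = X - XMean

# The residual per-pixel mean should be close to zero (float32 rounding aside)
residual = np.abs(X_centered.mean(axis=0)).max()
print(residual)
```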

(2) meancreat_hdf5test.py

Similarly, this generates test.h5 and test_h5_list.txt (check the path); the code also performs mean subtraction on the images. The code is as follows:

######################################################################################

import h5py, os

import caffe

import numpy as np

 

SIZE = 227  # fixed size for all images

with open('test.txt', 'r') as T:

    lines = T.readlines()

 

with open('num_test.txt', 'r') as M:

    lines_Num = M.readlines()

 

# If you do not have enough memory, split the data into

# multiple batches and generate multiple separate h5 files

X = np.zeros((len(lines), 3, SIZE, SIZE), dtype='f4')

y = np.zeros((len(lines_Num),), dtype='f4')

print(len(lines))

for i, l in enumerate(lines):

    path = l.strip()  # strip the trailing newline (handles both \n and \r\n)

    img = caffe.io.load_image(path)

    img = caffe.io.resize_image(img, (SIZE, SIZE), 3)  # resize to fixed size

    img = np.transpose(img, (2, 0, 1))  # HxWxC -> CxHxW

    # you may apply other input transformations here...

    X[i] = img

 

# Compute the mean image over the whole test set and subtract it

XMean = X.mean(axis=0)

X = X - XMean

print('image OK')

 

for j, eachLine in enumerate(lines_Num):

    y[j] = float(eachLine) / 87.0656  # normalize scores by the maximum score

print('Num OK')

 

with h5py.File('test.h5', 'w') as H:

    H.create_dataset('X', data=X)  # note the name X given to the dataset!

    H.create_dataset('y', data=y)  # note the name y given to the dataset!

with open('test_h5_list.txt', 'w') as L:

    # list all h5 files you are going to use (absolute path);

    # the original wrote to train_h5_list.txt here, which was a bug

    L.write('/home2/xj_jin/experiment/caffe-master/data/face_key_point/test.h5')

######################################################################################
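Besides HDFView, the generated files can be checked programmatically with h5py. A minimal round-trip sketch (using a small dummy file named check.h5, not one of the files above) showing the dataset names and shapes Caffe's HDF5Data layer will see:

```python
import numpy as np
import h5py

# Write a tiny dummy dataset with the same layout the scripts produce
X = np.zeros((2, 3, 227, 227), dtype='f4')
y = np.array([0.5, 0.7], dtype='f4')
with h5py.File('check.h5', 'w') as H:
    H.create_dataset('X', data=X)
    H.create_dataset('y', data=y)

# Read it back: the dataset names ('X', 'y') must match the tops
# of the HDF5Data layer in train_val.prototxt
with h5py.File('check.h5', 'r') as H:
    print(list(H.keys()), H['X'].shape, H['y'].shape)
```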

(3) train.txt and num.txt

train.txt lists the images to train on; num.txt holds the score of each training image. The format is as follows:

train.txt (check that the paths contain no spaces):

/home2/xj_jin/experiment/caffe-master/data/face_key_point/images/00t1.bmp

/home2/xj_jin/experiment/caffe-master/data/face_key_point/images/00t2.bmp

/home2/xj_jin/experiment/caffe-master/data/face_key_point/images/00t3.bmp

/home2/xj_jin/experiment/caffe-master/data/face_key_point/images/00t4.bmp

/home2/xj_jin/experiment/caffe-master/data/face_key_point/images/00t5.bmp

num.txt:

63.9634

25.3353

48.9366

35.8863

66.5092

(4) test.txt and num_test.txt

test.txt lists the images to test on; num_test.txt holds their scores. The format is the same as above.
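Since the scripts pair the i-th image with the i-th score by position, it is worth checking that the list files line up before generating the .h5 files. A small sketch (the train_demo.txt / num_demo.txt names and paths are illustrative, not from the original):

```python
# Build tiny dummy list files in the format described above
with open('train_demo.txt', 'w') as f:
    f.write('/path/to/images/00t1.bmp\n/path/to/images/00t2.bmp\n')
with open('num_demo.txt', 'w') as f:
    f.write('63.9634\n25.3353\n')

# Read both lists back, skipping blank lines
with open('train_demo.txt') as f:
    paths = [l.strip() for l in f if l.strip()]
with open('num_demo.txt') as f:
    scores = [float(l) for l in f if l.strip()]

# One score per image, and no spaces inside the paths
assert len(paths) == len(scores)
assert all(' ' not in p for p in paths)
print('OK:', len(paths), 'image/score pairs')
```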

3. Generating the train.h5 and test.h5 data

Run the following in caffe/data/face_key_point:

Commands: python meancreat_hdf5.py and python meancreat_hdf5test.py

 

III. Model Modification

Create your own folder "face_key_point" under caffe/models, and place the downloaded ImageNet model and network definition there: bvlc_reference_caffenet.caffemodel, train_val.prototxt, and solver.prototxt.

1. Modify solver.prototxt

#######################################################################################

net: "models/face_key_point/train_val.prototxt"                             ----------------------- update this path

test_iter: 100

test_interval: 1000

# lr for fine-tuning should be lower than when starting from scratch

base_lr: 0.001                                                              ----------------------- adjust the base learning rate as needed

lr_policy: "step"

gamma: 0.1

# stepsize should also be lower, as we're closer to being done

stepsize: 20000

display: 20

max_iter: 100000

momentum: 0.9

weight_decay: 0.0005

snapshot: 10000

snapshot_prefix: "models/face_key_point/finetune_face_key_point"     --------------------------- path + the name you choose for the newly generated model

# uncomment the following to default to CPU mode solving

# solver_mode: CPU                                                   --------------------------- set according to your hardware

#######################################################################################

2. Modify the train_val.prototxt network definition

Only part of the network definition is shown here.

#######################################################################################

name: "FaceKeyPointCaffeNet"                                         --------------------------- the name of the network you are training

layer {                                                              --------------------------- data layer that reads the .h5 data

  name: "data"

  type: "HDF5Data"

  top: "X"                                                           --------------------------- must match the dataset name written by the .py scripts above; originally data, now X

  top: "y"                                                           --------------------------- must match the dataset name written by the .py scripts above; originally label, now y

  include {

    phase: TRAIN

  }

  hdf5_data_param {

    source: "data/face_key_point/train_h5_list.txt"            --------------------------- text file listing the absolute path to train.h5; multiple .h5 files can be listed

    batch_size: 64

  }

}

layer {

  name: "data"

  type: "HDF5Data"

  top: "X"

  top: "y"

  include {

    phase: TEST

  }

  hdf5_data_param {

    source: "data/face_key_point/test_h5_list.txt"

    batch_size: 100

  }

}

Convolution, pooling, fully connected, loss layers, etc.

#######################################################################################

Finally, since this is a regression problem, the final loss function should be a regression loss rather than a classification loss; make sure the bottom of the last layers is y, not label.
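For illustration, the final layers might look like the following sketch, which replaces the 1000-way classification output with a single regressed value and uses Caffe's EuclideanLoss layer (the layer name "fc8_reg" and its single output are assumptions, not from the original post):

```
layer {
  name: "fc8_reg"            # replaces the 1000-way classification layer
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8_reg"
  inner_product_param {
    num_output: 1            # one regressed score per image
  }
}
layer {
  name: "loss"
  type: "EuclideanLoss"      # regression loss instead of SoftmaxWithLoss
  bottom: "fc8_reg"
  bottom: "y"                # the HDF5 label dataset, not "label"
  top: "loss"
}
```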

 

IV. Training

Just update the corresponding paths:

./build/tools/caffe train --solver=models/face_key_point/solver.prototxt --weights=models/face_key_point/bvlc_reference_caffenet.caffemodel -gpu 0

 

 

 

 

Reposted from: https://www.cnblogs.com/holidaystudy/p/5337804.html
