
Purpose

Perform simple binary image classification with the Keras framework.

Data preparation

Use the Dogs vs. Cats dataset from Kaggle for a simple binary classification task; the data can be downloaded from Kaggle.
The downloaded dataset contains two folders, train and test. Inside train, the cat folder holds 12,500 labeled cat photos (labeled meaning the file name contains the string "cat") and the dog folder holds 12,500 labeled dog photos. The test folder holds 12,500 unlabeled cat and dog images.

Because the images in the test folder are unlabeled, they are of no use during training and can only be used for testing.
The files in train can be used for both training and validation. Here we split the images in train into a training set and a validation set at a 6:4 ratio (a minimal split script is sketched after the directory tree). The final folder structure is as follows:

--data
    -- train
        -- cat
        -- dog
    -- validation
        -- cat
        -- dog
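
A minimal sketch of the 6:4 split, assuming the downloaded train folder (with its cat/ and dog/ subfolders) has been unpacked next to the target data directory; the raw_dir path below is illustrative, not from the original post:

import glob
import os
import random
import shutil

raw_dir = '../../Data/dog_vs_cat/raw_train'   # downloaded train folder with cat/ and dog/ subfolders (illustrative path)
out_base = '../../Data/dog_vs_cat'
split_ratio = 0.6                             # 6:4 train/validation split

random.seed(0)
for label in ('cat', 'dog'):
    files = sorted(glob.glob(os.path.join(raw_dir, label, '*.jpg')))
    random.shuffle(files)
    n_train = int(len(files) * split_ratio)
    for subset, subset_files in (('train', files[:n_train]),
                                 ('validation', files[n_train:])):
        dest = os.path.join(out_base, subset, label)
        os.makedirs(dest, exist_ok=True)
        for f in subset_files:
            shutil.copy(f, dest)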

Code

  • Import the Keras libraries
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
  • Input settings
# dimensions of our images.
img_width, img_height = 150, 150        # example input size; adjust to taste

train_data_dir = r'../../Data/dog_vs_cat/train'
validation_data_dir = r'../../Data/dog_vs_cat/validation'
nb_train_samples = 15000                # 60% of the 25,000 labeled images
nb_validation_samples = 10000           # the remaining 40%
epochs = 50                             # example training length
batch_size = 16                         # example batch size
  • Network definition
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

# a small three-block CNN; the filter counts, kernel sizes and dense width
# below are typical example choices
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))                     # a single sigmoid unit for binary classification
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])
  • View the network structure
model.summary()
  • Network input settings
# this is the augmentation configuration we will use for training
train_datagen = ImageDataGenerator(
    rescale=1. / 255,                   # map pixel values to [0, 1]
    shear_range=0.2,                    # example augmentation strengths
    zoom_range=0.2,
    horizontal_flip=True)

# this is the augmentation configuration we will use for testing:
# only rescaling
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    shuffle=True,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')
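
flow_from_directory assigns class indices to the sub-folder names in alphabetical order, so here cat maps to 0 and dog maps to 1. The mapping can be confirmed directly on the generator:

# check which label each class index corresponds to
print(train_generator.class_indices)    # {'cat': 0, 'dog': 1}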
  • Model training
model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
#     nb_val_samples = 10000,
    verbose=1,
    validation_steps=nb_validation_samples // batch_size
    )
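
Optionally, the best weights seen on the validation set can be kept automatically during training by passing a ModelCheckpoint callback to fit_generator via callbacks=[checkpoint]; the file name below is illustrative:

from keras.callbacks import ModelCheckpoint

# save the weights whenever the validation loss improves
checkpoint = ModelCheckpoint('best_weights.h5',
                             monitor='val_loss',
                             save_best_only=True,
                             save_weights_only=True)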
  • Save the network
model.save_weights('first_try.h5')
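
save_weights stores only the weights, so the same architecture has to be rebuilt (or still be in memory, as above) before loading them again. Below is a minimal sketch of running the trained model on one of the unlabeled test images; the file name is illustrative:

import numpy as np
from keras.preprocessing import image

model.load_weights('first_try.h5')

# point this at any image from the unlabeled test folder
img = image.load_img('../../Data/dog_vs_cat/test/1.jpg',
                     target_size=(img_width, img_height))
x = image.img_to_array(img) / 255.      # apply the same rescaling as the generators
x = np.expand_dims(x, axis=0)           # add the batch dimension

prob = model.predict(x)[0][0]           # sigmoid output: near 1 means dog, near 0 means cat
print('dog' if prob > 0.5 else 'cat', prob)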