## Multi scale CNN in Keras Python

I have built a multi-scale CNN in Keras (Python). The network architecture is similar to the diagram below: the same image is fed into 3 CNNs with different architectures, and the weights are not shared.

![Multi scale CNN architecture](https://img.devbox.cn/3cccf/16086/243/6db233c03a276edb.png)
My code is below. The problem is that when I run it, even with only 10 images in train_dir, the network needs about 40 GB of RAM and is eventually killed by the OS with an out-of-memory error. I am running it on the CPU. Any idea why this happens in Keras?
I am using Theano 0.9.0.dev5 | Keras 1.2.1 | Python 2.7.12 | OSX Sierra 10.12.3 (16D32).
```python
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Activation, Flatten, Dense, Dropout, Merge
from keras.preprocessing.image import ImageDataGenerator

# main CNN model - CNN1
main_model = Sequential()
main_model.add(Convolution2D(32, 3, 3, input_shape=(3, 224, 224)))
main_model.add(Activation('relu'))
main_model.add(MaxPooling2D(pool_size=(2, 2)))

main_model.add(Convolution2D(32, 3, 3))
main_model.add(Activation('relu'))
main_model.add(MaxPooling2D(pool_size=(2, 2)))

main_model.add(Convolution2D(64, 3, 3))
main_model.add(Activation('relu'))
main_model.add(MaxPooling2D(pool_size=(2, 2)))

# the main_model so far outputs 3D feature maps (height, width, features)
main_model.add(Flatten())

# lower features model - CNN2
lower_model1 = Sequential()
lower_model1.add(Convolution2D(32, 3, 3, input_shape=(3, 224, 224)))
lower_model1.add(Activation('relu'))
lower_model1.add(MaxPooling2D(pool_size=(2, 2)))
lower_model1.add(Flatten())

# lower features model - CNN3
lower_model2 = Sequential()
lower_model2.add(Convolution2D(32, 3, 3, input_shape=(3, 224, 224)))
lower_model2.add(Activation('relu'))
lower_model2.add(MaxPooling2D(pool_size=(2, 2)))
lower_model2.add(Flatten())

# merged model
merged_model = Merge([main_model, lower_model1, lower_model2], mode='concat')

final_model = Sequential()
final_model.add(merged_model)
final_model.add(Dense(64))
final_model.add(Activation('relu'))
final_model.add(Dropout(0.5))
final_model.add(Dense(1))
final_model.add(Activation('sigmoid'))

final_model.compile(loss='binary_crossentropy',
                    optimizer='rmsprop',
                    metrics=['accuracy'])

print 'About to start training merged CNN'

train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(train_data_dir,
                                                    target_size=(224, 224),
                                                    batch_size=32,
                                                    class_mode='binary')

test_datagen = ImageDataGenerator(rescale=1. / 255)

test_generator = test_datagen.flow_from_directory(args.test_images,
                                                  target_size=(224, 224),
                                                  batch_size=32,
                                                  class_mode='binary')

final_train_generator = zip(train_generator, train_generator, train_generator)
final_test_generator = zip(test_generator, test_generator, test_generator)

final_model.fit_generator(final_train_generator,
                          samples_per_epoch=nb_train_samples,
                          nb_epoch=nb_epoch,
                          validation_data=final_test_generator,
                          nb_val_samples=nb_validation_samples)
```
The number of nodes after the `Flatten` in `lower_model1` and `lower_model2` is roughly 32 * 112 * 112 = 401 408 each. This is followed by a fully connected layer with 64 nodes, which gives about 401 408 * 2 * 64 = 51 380 224 parameters, quite a large number. I suggest reconsidering the size of the images you feed to your "lower" models. Do you really need 224 x 224? Look carefully at the attached diagram: there, the first step of the second and third models is subsampling, 8:1 and 4:1 respectively. This is the step you left out in your implementation.
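If you want to keep feeding the full 224 x 224 image to all three branches, the subsampling can be done inside the lower models themselves. A minimal sketch, assuming the Keras 1.2.1 API from your code; I use `AveragePooling2D` as the subsampling operation, which is my own choice, since the diagram only specifies the ratios:

```python
from keras.models import Sequential
from keras.layers import Convolution2D, Activation, MaxPooling2D, AveragePooling2D, Flatten

# lower features model - CNN2, with the 8:1 subsampling step in front
lower_model1 = Sequential()
lower_model1.add(AveragePooling2D(pool_size=(8, 8),
                                  input_shape=(3, 224, 224)))  # 224x224 -> 28x28
lower_model1.add(Convolution2D(32, 3, 3))
lower_model1.add(Activation('relu'))
lower_model1.add(MaxPooling2D(pool_size=(2, 2)))
lower_model1.add(Flatten())  # 32 * 13 * 13 = 5 408 nodes instead of ~400 000

# lower features model - CNN3, with the 4:1 subsampling step in front
lower_model2 = Sequential()
lower_model2.add(AveragePooling2D(pool_size=(4, 4),
                                  input_shape=(3, 224, 224)))  # 224x224 -> 56x56
lower_model2.add(Convolution2D(32, 3, 3))
lower_model2.add(Activation('relu'))
lower_model2.add(MaxPooling2D(pool_size=(2, 2)))
lower_model2.add(Flatten())  # 32 * 27 * 27 = 23 328 nodes
```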
Your `main_model` is fine as it is, because it has enough max-pooling layers to keep the number of parameters down.
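To see where the memory goes, you can ask Keras directly. A quick check, assuming `final_model` has been compiled as in your code:

```python
# Prints the output shape and parameter count of every layer; the Dense(64)
# after the merge dominates, because it connects every flattened node to
# all 64 units.
final_model.summary()

# Back-of-the-envelope check for that Dense layer:
#   params = (total flattened nodes + 1 bias) * 64 units
# With your original lower models this is over 50 million weights; with the
# subsampled sketch above it drops to a few million.
```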