Deep Learning with TensorFlow: Image Augmentation Example
Image Augmentation
Using Cats v Dogs, the exercise suggests modeling and training as follows:
- 4 convolutional layers with 32, 64, 128 and 128 convolutions
- train for 100 epochs
(The code below actually builds a smaller three-convolution model and trains for 15 epochs.)
Augment the images with a data generator
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
- rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
- width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
- shear_range is for randomly applying shearing transformations.
- zoom_range is for randomly zooming inside pictures.
- horizontal_flip is for randomly flipping half of the images horizontally. This is relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures).
- fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
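To see what these settings do in isolation, here is a minimal sketch that pulls one random augmented variant of a single image. The sample file name is hypothetical; any local RGB image works.

import numpy as np
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array

datagen = ImageDataGenerator(rotation_range=40, width_shift_range=0.2,
                             height_shift_range=0.2, shear_range=0.2,
                             zoom_range=0.2, horizontal_flip=True,
                             fill_mode='nearest')

img = img_to_array(load_img('cat.0.jpg', target_size=(150, 150)))  # hypothetical sample file
batch = img[np.newaxis, ...]                  # shape (1, 150, 150, 3)
augmented = next(datagen.flow(batch, batch_size=1))[0]
# 'augmented' is a new randomly rotated/shifted/sheared/zoomed/flipped copy;
# every call to next() yields a different random variant of the same image.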
In [17]:
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
--2022-12-30 06:44:42--  https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 108.177.97.128, 108.177.125.128, 74.125.203.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|108.177.97.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 68606236 (65M) [application/zip]
Saving to: ‘/tmp/cats_and_dogs_filtered.zip’

/tmp/cats_and_dogs_ 100%[===================>]  65.43M  22.4MB/s    in 2.9s

2022-12-30 06:44:45 (22.4 MB/s) - ‘/tmp/cats_and_dogs_filtered.zip’ saved [68606236/68606236]
In [18]:
import zipfile
In [19]:
file = zipfile.ZipFile('/tmp/cats_and_dogs_filtered.zip')
In [20]:
file.extractall('/tmp')
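A quick sanity check of the extracted layout; the train/validation split into cats and dogs subfolders is confirmed by the generator output further below.

import os
# Count the images in each class folder to confirm the extraction worked.
for sub in ['train/cats', 'train/dogs', 'validation/cats', 'validation/dogs']:
    path = os.path.join('/tmp/cats_and_dogs_filtered', sub)
    print(sub, len(os.listdir(path)), 'images')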
In [21]:
base_dir = '/tmp/cats_and_dogs_filtered'
In [22]:
train_dir = base_dir + '/train'
In [23]:
test_dir = base_dir + '/validation'
In [24]:
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
In [38]:
def build_model():
    model = Sequential()
    model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)))
    model.add(MaxPooling2D((2, 2), strides=2))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=2))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D((2, 2), strides=2))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))  # binary output: cat vs dog
    model.compile(optimizer='rmsprop', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
In [39]:
model = build_model()
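Before training, the architecture can be inspected with a one-liner:

model.summary()   # prints each layer's output shape and parameter count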
In [40]:
# Augment the image data and train on it.
# The generator applies random transforms on the fly each epoch;
# see the Keras documentation for how much augmentation is applied.
In [41]:
from keras.preprocessing.image import ImageDataGenerator
In [42]:
train_datagen = ImageDataGenerator(rescale=1/255.0,
                                   rotation_range=30,
                                   width_shift_range=0.4,
                                   height_shift_range=0.2,
                                   shear_range=0.3,
                                   zoom_range=0.5,
                                   horizontal_flip=True)
In [43]:
test_datagen = ImageDataGenerator(rescale=1/255.0)  # validation data: rescale only, never augment
In [44]:
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(150, 150),
                                                    class_mode='binary', batch_size=20)
Found 2000 images belonging to 2 classes.
In [45]:
test_generator = test_datagen.flow_from_directory(test_dir, target_size=(150, 150),
                                                  class_mode='binary', batch_size=20)
Found 1000 images belonging to 2 classes.
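To confirm the generators yield what the model expects, one batch can be pulled directly:

x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)   # (20, 150, 150, 3) (20,)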
In [46]:
epoch_history = model.fit(train_generator, epochs=15, validation_data=test_generator, steps_per_epoch=100)
# In practice, callbacks are a must: running every epoch to completion does not
# guarantee the best model (use EarlyStopping, ModelCheckpoint, etc.; see the
# sketch after the training log below)
Epoch 1/15
100/100 [==============================] - 77s 756ms/step - loss: 0.7559 - accuracy: 0.5180 - val_loss: 0.6784 - val_accuracy: 0.5880
Epoch 2/15
100/100 [==============================] - 72s 716ms/step - loss: 0.6881 - accuracy: 0.5555 - val_loss: 0.6512 - val_accuracy: 0.5720
Epoch 3/15
100/100 [==============================] - 72s 719ms/step - loss: 0.6756 - accuracy: 0.5925 - val_loss: 0.6527 - val_accuracy: 0.6260
Epoch 4/15
100/100 [==============================] - 75s 744ms/step - loss: 0.6736 - accuracy: 0.6230 - val_loss: 0.6042 - val_accuracy: 0.6490
Epoch 5/15
100/100 [==============================] - 72s 719ms/step - loss: 0.6522 - accuracy: 0.6140 - val_loss: 0.6599 - val_accuracy: 0.5830
Epoch 6/15
100/100 [==============================] - 72s 719ms/step - loss: 0.6559 - accuracy: 0.6425 - val_loss: 0.5974 - val_accuracy: 0.6610
Epoch 7/15
100/100 [==============================] - 74s 744ms/step - loss: 0.6281 - accuracy: 0.6490 - val_loss: 0.6170 - val_accuracy: 0.6310
Epoch 8/15
100/100 [==============================] - 72s 716ms/step - loss: 0.6320 - accuracy: 0.6415 - val_loss: 0.5858 - val_accuracy: 0.6960
Epoch 9/15
100/100 [==============================] - 72s 718ms/step - loss: 0.6230 - accuracy: 0.6425 - val_loss: 0.8048 - val_accuracy: 0.6220
Epoch 10/15
100/100 [==============================] - 74s 740ms/step - loss: 0.6271 - accuracy: 0.6395 - val_loss: 0.6399 - val_accuracy: 0.6690
Epoch 11/15
100/100 [==============================] - 71s 713ms/step - loss: 0.6095 - accuracy: 0.6660 - val_loss: 0.6034 - val_accuracy: 0.6720
Epoch 12/15
100/100 [==============================] - 71s 712ms/step - loss: 0.6115 - accuracy: 0.6585 - val_loss: 0.7509 - val_accuracy: 0.6440
Epoch 13/15
100/100 [==============================] - 74s 741ms/step - loss: 0.6138 - accuracy: 0.6660 - val_loss: 0.6570 - val_accuracy: 0.6490
Epoch 14/15
100/100 [==============================] - 71s 711ms/step - loss: 0.6047 - accuracy: 0.6695 - val_loss: 0.5661 - val_accuracy: 0.7070
Epoch 15/15
100/100 [==============================] - 74s 735ms/step - loss: 0.6031 - accuracy: 0.6730 - val_loss: 0.5531 - val_accuracy: 0.7150
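As noted above, real projects add callbacks to training. A minimal sketch of how they would be wired in; the file path and patience value here are illustrative, not from the original run.

from keras.callbacks import EarlyStopping, ModelCheckpoint

# Hypothetical callback setup: stop when val_loss stops improving and
# keep only the best weights seen so far.
callbacks = [
    EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True),
    ModelCheckpoint('/tmp/best_model.h5', monitor='val_loss',
                    save_best_only=True),
]

# The epoch count becomes an upper bound; EarlyStopping decides when to stop.
epoch_history = model.fit(train_generator, epochs=100,
                          validation_data=test_generator,
                          steps_per_epoch=100, callbacks=callbacks)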
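Finally, epoch_history can be plotted to compare the training and validation curves; a common sketch, assuming matplotlib is available. The key names match the metrics printed in the log above.

import matplotlib.pyplot as plt

# Plot train vs. validation accuracy to spot over- or underfitting.
plt.plot(epoch_history.history['accuracy'], label='train accuracy')
plt.plot(epoch_history.history['val_accuracy'], label='val accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()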