DataScience/TensorFlow[CNN]
Deep learning with TensorFlow: a CNN for human/horse classification, unzipping files in Python, loading image files as NumPy arrays
leopard4
2022. 12. 30. 13:03
Download the image files: photos of horses and humans for the classification task.
In [1]:
# !wget: shell command for fetching a file from the web in a Linux Jupyter notebook
# -O: save the download to the given path
!wget --no-check-certificate \
  https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip \
  -O /tmp/horse-or-human.zip
--2022-12-30 02:05:48--  https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 172.217.194.128, 74.125.68.128, 74.125.24.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|172.217.194.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 149574867 (143M) [application/zip]
Saving to: ‘/tmp/horse-or-human.zip’

/tmp/horse-or-human 100%[===================>] 142.65M  22.7MB/s    in 7.3s

2022-12-30 02:05:56 (19.6 MB/s) - ‘/tmp/horse-or-human.zip’ saved [149574867/149574867]
In [2]:
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip \
-O /tmp/validation-horse-or-human.zip
--2022-12-30 02:09:34--  https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.68.128, 74.125.24.128, 142.250.4.128, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.68.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11480187 (11M) [application/zip]
Saving to: ‘/tmp/validation-horse-or-human.zip’

/tmp/validation-hor 100%[===================>]  10.95M  7.10MB/s    in 1.5s

2022-12-30 02:09:36 (7.10 MB/s) - ‘/tmp/validation-horse-or-human.zip’ saved [11480187/11480187]
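If wget is not available (for example on Windows), the same two files can be fetched with Python's standard library; a minimal sketch using the same URLs and /tmp paths as above.

import urllib.request

# download both archives with the standard library instead of wget
downloads = {
    'https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip':
        '/tmp/horse-or-human.zip',
    'https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip':
        '/tmp/validation-horse-or-human.zip',
}
for url, path in downloads.items():
    urllib.request.urlretrieve(url, path)  # save the response body to path
    print('saved', path)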
Unzip the downloaded archives
In [3]:
import zipfile
In [4]:
file = zipfile.ZipFile('/tmp/horse-or-human.zip') # open the zip file and store it in a variable
In [5]:
file.extractall('/tmp/horse-or-human') # extract the loaded zip into this folder
In [6]:
file = zipfile.ZipFile('/tmp/validation-horse-or-human.zip')
In [7]:
file.extractall('/tmp/validation-horse-or-human')
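A small aside: the same extraction can be written with a context manager so each zip file handle is closed automatically; a sketch using the same paths as above.

import zipfile

# extract both archives, closing each handle automatically
for zip_path, out_dir in [('/tmp/horse-or-human.zip', '/tmp/horse-or-human'),
                          ('/tmp/validation-horse-or-human.zip', '/tmp/validation-horse-or-human')]:
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)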
In [ ]:
Define the folder paths where the photos are stored
In [8]:
train_horse_dir = '/tmp/horse-or-human/horses'
In [9]:
train_human_dir = '/tmp/horse-or-human/humans'
In [10]:
validation_horse_dir = '/tmp/validation-horse-or-human/horses'
In [11]:
validation_human_dir = '/tmp/validation-horse-or-human/humans'
In [ ]:
Print the names of the image files stored in each folder
In [13]:
import os
In [14]:
os.listdir(train_horse_dir) # returns the folder contents as a list
Out[14]:
['horse21-3.png', 'horse04-7.png', 'horse43-9.png', 'horse19-8.png', 'horse18-2.png', 'horse27-1.png', 'horse17-0.png', 'horse30-8.png', 'horse10-9.png', 'horse48-7.png', 'horse48-0.png', 'horse01-1.png', 'horse23-4.png', 'horse21-1.png', 'horse42-0.png', 'horse50-6.png', 'horse39-9.png', 'horse41-3.png', 'horse15-6.png', 'horse33-8.png', 'horse31-1.png', 'horse28-0.png', 'horse49-3.png', 'horse34-2.png', 'horse25-7.png', 'horse40-9.png', 'horse36-0.png', 'horse04-2.png', 'horse19-0.png', 'horse13-9.png', 'horse39-5.png', 'horse22-2.png', 'horse28-7.png', 'horse26-5.png', 'horse25-5.png', 'horse03-9.png', 'horse15-0.png', 'horse35-6.png', 'horse30-7.png', 'horse45-0.png', 'horse24-4.png', 'horse27-9.png', 'horse23-1.png', 'horse42-5.png', 'horse48-8.png', 'horse01-5.png', 'horse49-1.png', 'horse20-0.png', 'horse35-5.png', 'horse46-2.png', 'horse22-6.png', 'horse23-2.png', 'horse20-8.png', 'horse35-0.png', 'horse49-2.png', 'horse08-2.png', 'horse36-1.png', 'horse17-5.png', 'horse15-1.png', 'horse29-9.png', 'horse32-1.png', 'horse07-0.png', 'horse11-8.png', 'horse25-9.png', 'horse44-5.png', 'horse13-5.png', 'horse34-9.png', 'horse02-6.png', 'horse35-3.png', 'horse43-8.png', 'horse28-4.png', 'horse16-5.png', 'horse16-3.png', 'horse39-1.png', 'horse03-3.png', 'horse14-6.png', 'horse27-0.png', 'horse16-1.png', 'horse02-0.png', 'horse27-6.png', 'horse14-9.png', 'horse10-2.png', 'horse46-1.png', 'horse25-3.png', 'horse09-1.png', 'horse12-1.png', 'horse14-2.png', 'horse32-6.png', 'horse43-7.png', 'horse20-6.png', 'horse43-0.png', 'horse19-1.png', 'horse09-2.png', 'horse28-2.png', 'horse39-8.png', 'horse02-9.png', 'horse38-0.png', 'horse47-5.png', 'horse42-8.png', 'horse07-8.png', 'horse17-9.png', 'horse31-0.png', 'horse34-5.png', 'horse15-5.png', 'horse08-6.png', 'horse38-4.png', 'horse21-4.png', 'horse02-4.png', 'horse25-2.png', 'horse22-9.png', 'horse13-2.png', 'horse26-0.png', 'horse04-4.png', 'horse12-3.png', 'horse11-0.png', 'horse39-2.png', 'horse12-4.png', 'horse22-0.png', 'horse13-1.png', 'horse32-8.png', 'horse31-2.png', 'horse29-8.png', 'horse20-4.png', 'horse31-9.png', 'horse23-3.png', 'horse34-4.png', 'horse01-6.png', 'horse29-7.png', 'horse24-5.png', 'horse40-3.png', 'horse15-9.png', 'horse15-2.png', 'horse19-3.png', 'horse32-2.png', 'horse08-3.png', 'horse33-1.png', 'horse50-8.png', 'horse40-7.png', 'horse18-7.png', 'horse10-5.png', 'horse41-0.png', 'horse05-5.png', 'horse40-5.png', 'horse19-9.png', 'horse43-5.png', 'horse18-1.png', 'horse33-2.png', 'horse27-3.png', 'horse23-9.png', 'horse01-7.png', 'horse38-7.png', 'horse27-4.png', 'horse49-7.png', 'horse50-1.png', 'horse33-9.png', 'horse45-1.png', 'horse21-9.png', 'horse21-5.png', 'horse19-4.png', 'horse06-8.png', 'horse11-1.png', 'horse16-4.png', 'horse22-3.png', 'horse44-7.png', 'horse39-0.png', 'horse26-6.png', 'horse44-1.png', 'horse17-4.png', 'horse25-8.png', 'horse26-8.png', 'horse46-7.png', 'horse24-9.png', 'horse12-2.png', 'horse24-0.png', 'horse25-6.png', 'horse30-3.png', 'horse17-7.png', 'horse44-6.png', 'horse13-4.png', 'horse41-7.png', 'horse03-2.png', 'horse33-0.png', 'horse49-0.png', 'horse16-8.png', 'horse04-0.png', 'horse35-8.png', 'horse04-8.png', 'horse41-8.png', 'horse06-1.png', 'horse14-0.png', 'horse36-6.png', 'horse41-5.png', 'horse06-5.png', 'horse02-8.png', 'horse06-3.png', 'horse38-5.png', 'horse47-3.png', 'horse36-4.png', 'horse36-3.png', 'horse11-4.png', 'horse05-6.png', 'horse30-9.png', 'horse34-8.png', 'horse27-8.png', 'horse33-4.png', 'horse29-5.png', 'horse21-8.png', 'horse12-0.png', 'horse30-6.png', 
'horse35-1.png', 'horse24-1.png', 'horse23-6.png', 'horse18-6.png', 'horse46-4.png', 'horse12-5.png', 'horse15-8.png', 'horse01-0.png', 'horse08-1.png', 'horse18-9.png', 'horse13-6.png', 'horse04-1.png', 'horse12-8.png', 'horse26-7.png', 'horse50-5.png', 'horse21-2.png', 'horse33-6.png', 'horse30-4.png', 'horse40-2.png', 'horse30-5.png', 'horse07-7.png', 'horse09-4.png', 'horse47-9.png', 'horse23-0.png', 'horse05-2.png', 'horse08-5.png', 'horse12-9.png', 'horse09-8.png', 'horse29-4.png', 'horse31-4.png', 'horse21-0.png', 'horse14-1.png', 'horse27-5.png', 'horse19-2.png', 'horse05-3.png', 'horse49-8.png', 'horse45-7.png', 'horse46-3.png', 'horse50-7.png', 'horse05-4.png', 'horse32-4.png', 'horse06-4.png', 'horse24-2.png', 'horse17-2.png', 'horse34-6.png', 'horse34-3.png', 'horse08-4.png', 'horse12-6.png', 'horse17-6.png', 'horse24-7.png', 'horse44-3.png', 'horse24-8.png', 'horse05-8.png', 'horse04-6.png', 'horse44-0.png', 'horse32-3.png', 'horse28-8.png', 'horse03-1.png', 'horse31-6.png', 'horse03-7.png', 'horse07-3.png', 'horse01-2.png', 'horse08-7.png', 'horse22-8.png', 'horse28-3.png', 'horse05-1.png', 'horse37-3.png', 'horse31-3.png', 'horse43-4.png', 'horse43-3.png', 'horse37-8.png', 'horse13-8.png', 'horse10-3.png', 'horse30-0.png', 'horse45-8.png', 'horse23-8.png', 'horse34-7.png', 'horse46-0.png', 'horse19-5.png', 'horse07-9.png', 'horse25-1.png', 'horse43-6.png', 'horse03-0.png', 'horse13-3.png', 'horse20-7.png', 'horse16-2.png', 'horse14-5.png', 'horse02-3.png', 'horse50-2.png', 'horse35-9.png', 'horse11-9.png', 'horse20-1.png', 'horse34-0.png', 'horse03-4.png', 'horse29-6.png', 'horse11-2.png', 'horse16-9.png', 'horse41-4.png', 'horse02-5.png', 'horse27-2.png', 'horse41-6.png', 'horse07-6.png', 'horse11-6.png', 'horse05-7.png', 'horse18-3.png', 'horse22-5.png', 'horse38-3.png', 'horse31-8.png', 'horse37-7.png', 'horse23-5.png', 'horse44-9.png', 'horse28-5.png', 'horse28-1.png', 'horse22-1.png', 'horse49-9.png', 'horse50-0.png', 'horse50-4.png', 'horse25-0.png', 'horse21-6.png', 'horse45-3.png', 'horse09-0.png', 'horse39-6.png', 'horse06-7.png', 'horse15-3.png', 'horse38-9.png', 'horse37-6.png', 'horse46-6.png', 'horse39-4.png', 'horse07-5.png', 'horse20-2.png', 'horse47-2.png', 'horse40-0.png', 'horse06-2.png', 'horse39-3.png', 'horse47-7.png', 'horse40-6.png', 'horse28-6.png', 'horse10-6.png', 'horse50-3.png', 'horse43-2.png', 'horse18-0.png', 'horse23-7.png', 'horse04-5.png', 'horse48-9.png', 'horse37-5.png', 'horse30-2.png', 'horse15-7.png', 'horse06-6.png', 'horse26-3.png', 'horse05-9.png', 'horse32-7.png', 'horse46-8.png', 'horse14-7.png', 'horse09-3.png', 'horse48-2.png', 'horse41-2.png', 'horse36-7.png', 'horse42-6.png', 'horse48-3.png', 'horse11-3.png', 'horse04-9.png', 'horse37-0.png', 'horse39-7.png', 'horse45-2.png', 'horse10-1.png', 'horse47-8.png', 'horse41-1.png', 'horse14-4.png', 'horse08-9.png', 'horse29-0.png', 'horse42-7.png', 'horse42-9.png', 'horse03-6.png', 'horse38-1.png', 'horse25-4.png', 'horse42-2.png', 'horse38-8.png', 'horse26-9.png', 'horse22-4.png', 'horse47-1.png', 'horse32-5.png', 'horse47-6.png', 'horse35-4.png', 'horse30-1.png', 'horse07-2.png', 'horse07-1.png', 'horse19-7.png', 'horse16-6.png', 'horse47-0.png', 'horse20-5.png', 'horse45-6.png', 'horse42-1.png', 'horse14-8.png', 'horse33-5.png', 'horse47-4.png', 'horse28-9.png', 'horse45-4.png', 'horse11-5.png', 'horse37-9.png', 'horse37-1.png', 'horse48-6.png', 'horse18-5.png', 'horse43-1.png', 'horse46-9.png', 'horse16-0.png', 'horse01-4.png', 'horse49-4.png', 'horse36-2.png', 'horse20-9.png', 
'horse06-9.png', 'horse38-2.png', 'horse10-7.png', 'horse26-1.png', 'horse02-1.png', 'horse37-4.png', 'horse03-8.png', 'horse02-7.png', 'horse18-8.png', 'horse05-0.png', 'horse36-9.png', 'horse32-0.png', 'horse01-9.png', 'horse34-1.png', 'horse44-2.png', 'horse12-7.png', 'horse42-4.png', 'horse08-0.png', 'horse15-4.png', 'horse26-4.png', 'horse19-6.png', 'horse09-9.png', 'horse48-4.png', 'horse01-8.png', 'horse36-8.png', 'horse48-1.png', 'horse18-4.png', 'horse02-2.png', 'horse48-5.png', 'horse29-3.png', 'horse50-9.png', 'horse09-6.png', 'horse14-3.png', 'horse01-3.png', 'horse45-5.png', 'horse40-8.png', 'horse29-2.png', 'horse38-6.png', 'horse24-3.png', 'horse03-5.png', 'horse46-5.png', 'horse16-7.png', 'horse11-7.png', 'horse41-9.png', 'horse10-0.png', 'horse36-5.png', 'horse27-7.png', 'horse08-8.png', 'horse33-3.png', 'horse06-0.png', 'horse20-3.png', 'horse40-4.png', 'horse09-5.png', 'horse13-7.png', 'horse49-6.png', 'horse04-3.png', 'horse07-4.png', 'horse24-6.png', 'horse13-0.png', 'horse35-7.png', 'horse35-2.png', 'horse31-7.png', 'horse22-7.png', 'horse10-4.png', 'horse42-3.png', 'horse33-7.png', 'horse45-9.png', 'horse32-9.png', 'horse44-4.png', 'horse40-1.png', 'horse17-1.png', 'horse26-2.png', 'horse10-8.png', 'horse21-7.png', 'horse17-8.png', 'horse49-5.png', 'horse17-3.png', 'horse29-1.png', 'horse44-8.png', 'horse31-5.png', 'horse37-2.png', 'horse09-7.png']
In [ ]:
Check the number of files stored in each directory
In [15]:
len( os.listdir(train_horse_dir) )
Out[15]:
500
In [16]:
len( os.listdir(train_human_dir) )
Out[16]:
527
In [17]:
len( os.listdir(validation_horse_dir) )
Out[17]:
128
In [18]:
len( os.listdir(validation_human_dir) )
Out[18]:
128
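The same four counts can also be gathered in one loop; a small sketch reusing the directory variables defined above.

import os

# print the number of files in each of the four directories
for name, folder in [('train horses', train_horse_dir),
                     ('train humans', train_human_dir),
                     ('validation horses', validation_horse_dir),
                     ('validation humans', validation_human_dir)]:
    print(name, ':', len(os.listdir(folder)))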
In [ ]:
Building a Small Model from Scratch
The images are 300x300 color images. Build a simple model. Since each photo belongs to one of two classes, the final activation function is sigmoid.
In [20]:
import tensorflow as tf
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from keras.models import Sequential
In [21]:
def build_model() :
    model = Sequential()
    model.add( Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3) ) )
    model.add( MaxPooling2D((2,2) , 2 ) ) # a pool size of 2 and (2,2) are treated the same
    model.add( Conv2D(32, (3,3), activation='relu' ) )
    model.add( MaxPooling2D((2,2) , 2 ) )
    model.add( Conv2D(64, (3,3), activation='relu' ) )
    model.add( MaxPooling2D((2,2) , 2 ) )
    model.add( Flatten() )
    model.add( Dense(512, 'relu') )
    model.add( Dense(1, 'sigmoid') )
    model.compile('rmsprop', 'binary_crossentropy', metrics=['accuracy'])
    return model
In [22]:
model = build_model()
prints a summary of the NN
In [23]:
model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 298, 298, 16) 448 max_pooling2d (MaxPooling2D (None, 149, 149, 16) 0 ) conv2d_1 (Conv2D) (None, 147, 147, 32) 4640 max_pooling2d_1 (MaxPooling (None, 73, 73, 32) 0 2D) conv2d_2 (Conv2D) (None, 71, 71, 64) 18496 max_pooling2d_2 (MaxPooling (None, 35, 35, 64) 0 2D) flatten (Flatten) (None, 78400) 0 dense (Dense) (None, 512) 40141312 dense_1 (Dense) (None, 1) 513 ================================================================= Total params: 40,165,409 Trainable params: 40,165,409 Non-trainable params: 0 _________________________________________________________________
NOTE: reference pages for each algorithm
RMSprop optimization algorithm: RMSprop automates learning-rate tuning for us.
Stochastic gradient descent (SGD).
Adam and Adagrad also automatically adapt the learning rate during training, and would work equally well here (a sketch of swapping in Adam follows below).
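For reference, switching optimizers only changes the compile call. A minimal sketch, assuming the build_model function above; the learning rate of 1e-4 is an illustrative value, not something tuned in this notebook.

from tensorflow.keras.optimizers import Adam

# same architecture, recompiled with Adam instead of RMSprop
# (learning_rate=1e-4 is an illustrative choice)
model_adam = build_model()
model_adam.compile(optimizer=Adam(learning_rate=1e-4),
                   loss='binary_crossentropy',
                   metrics=['accuracy'])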
Data Preprocessing
In [ ]:
# We are not ready to train yet,
# because the fit function expects NumPy arrays as its input,
# but the data we have are image files (png).
# Therefore, training with fit is not possible in the current state.
In [ ]:
# Use the ImageDataGenerator class, which converts image files into NumPy arrays.
In [25]:
from keras.preprocessing.image import ImageDataGenerator
In [28]:
train_datagen = ImageDataGenerator(rescale= 1/255.0 ) # rescale == feature scaling applied when the images are read; a value of 2 would multiply pixel values by 2
In [29]:
validation_datagen = ImageDataGenerator(rescale= 1/255.0)
In [ ]:
# Once the generator object is created, the next step is to tell it
# the directory that holds the images, the image size,
# and how many classes to split them into.
In [ ]:
# The generator's target_size and the model's input_shape must use the same width and height.
In [ ]:
# class_mode: use binary when classifying into 2 classes, categorical for 3 or more.
In [31]:
train_generator = train_datagen.flow_from_directory('/tmp/horse-or-human', target_size=(300, 300), class_mode='binary' ) # target_size == how many rows and columns each file gets when converted to a NumPy array
Found 1027 images belonging to 2 classes.
In [33]:
validation_generator = validation_datagen.flow_from_directory('/tmp/validation-horse-or-human', target_size=(300,300), class_mode= 'binary')
Found 256 images belonging to 2 classes.
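Before training, it can help to pull one batch from the generator to confirm the shapes and the label mapping; a quick sketch (batch_size defaults to 32 in flow_from_directory).

# inspect one batch produced by the training generator
x_batch, y_batch = next(train_generator)
print(x_batch.shape)                  # (32, 300, 300, 3), pixel values rescaled to 0~1
print(y_batch.shape)                  # (32,) of binary labels
print(train_generator.class_indices)  # e.g. {'horses': 0, 'humans': 1}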
Training
Let's train for 15 epochs
In [34]:
epoch_history = model.fit(train_generator, epochs= 15, validation_data=validation_generator) # train_generator also carries the y_train (label) information
Epoch 1/15
33/33 [==============================] - 17s 246ms/step - loss: 1.9648 - accuracy: 0.7439 - val_loss: 2.1544 - val_accuracy: 0.6719
Epoch 2/15
33/33 [==============================] - 8s 242ms/step - loss: 0.2418 - accuracy: 0.9231 - val_loss: 1.3613 - val_accuracy: 0.8047
Epoch 3/15
33/33 [==============================] - 9s 269ms/step - loss: 0.2861 - accuracy: 0.9581 - val_loss: 1.7844 - val_accuracy: 0.7969
Epoch 4/15
33/33 [==============================] - 8s 260ms/step - loss: 0.1087 - accuracy: 0.9698 - val_loss: 1.6545 - val_accuracy: 0.8477
Epoch 5/15
33/33 [==============================] - 8s 240ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 2.5168 - val_accuracy: 0.8320
Epoch 6/15
33/33 [==============================] - 8s 239ms/step - loss: 0.4770 - accuracy: 0.9757 - val_loss: 2.2515 - val_accuracy: 0.8359
Epoch 7/15
33/33 [==============================] - 8s 239ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 2.5181 - val_accuracy: 0.8438
Epoch 8/15
33/33 [==============================] - 8s 242ms/step - loss: 1.3202 - accuracy: 0.9825 - val_loss: 1.2713 - val_accuracy: 0.8594
Epoch 9/15
33/33 [==============================] - 8s 241ms/step - loss: 0.0191 - accuracy: 0.9932 - val_loss: 1.9808 - val_accuracy: 0.8555
Epoch 10/15
33/33 [==============================] - 8s 242ms/step - loss: 0.0646 - accuracy: 0.9903 - val_loss: 13.3508 - val_accuracy: 0.6445
Epoch 11/15
33/33 [==============================] - 8s 241ms/step - loss: 0.2081 - accuracy: 0.9737 - val_loss: 2.3280 - val_accuracy: 0.8672
Epoch 12/15
33/33 [==============================] - 9s 259ms/step - loss: 1.5230e-04 - accuracy: 1.0000 - val_loss: 2.8845 - val_accuracy: 0.8555
Epoch 13/15
33/33 [==============================] - 8s 240ms/step - loss: 2.7582e-05 - accuracy: 1.0000 - val_loss: 3.5825 - val_accuracy: 0.8438
Epoch 14/15
33/33 [==============================] - 8s 242ms/step - loss: 4.7661e-06 - accuracy: 1.0000 - val_loss: 4.0287 - val_accuracy: 0.8477
Epoch 15/15
33/33 [==============================] - 8s 241ms/step - loss: 2.8313 - accuracy: 0.9727 - val_loss: 3.3772 - val_accuracy: 0.8164
In [35]:
# evaluate the model
model.evaluate(validation_generator)
8/8 [==============================] - 1s 119ms/step - loss: 3.3772 - accuracy: 0.8164
Out[35]:
[3.3771982192993164, 0.81640625]
In [36]:
import matplotlib.pyplot as plt
In [37]:
plt.plot(epoch_history.history['accuracy'])
plt.plot(epoch_history.history['val_accuracy'])
plt.legend(['train','validation'])
plt.show()
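The loss curves can be drawn the same way from the same history object; a short sketch.

# plot training and validation loss from the recorded history
plt.plot(epoch_history.history['loss'])
plt.plot(epoch_history.history['val_loss'])
plt.legend(['train', 'validation'])
plt.show()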
Running the Model
In [38]:
# test that the trained model works on a new image
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys() :
    path = '/content/' + fn # the uploaded file is saved in the /content directory
    img = image.load_img(path, target_size=(300,300)) # load the image resized to 300x300
    x = image.img_to_array(img) / 255.0
    print(x)
    print(x.shape)
    x = np.expand_dims(x, axis = 0) # add a batch dimension
    print(x.shape)
    images = np.vstack( [x] )
    classes = model.predict( images, batch_size = 10 )
    print(classes)
    if classes[0] > 0.5 :
        print(fn + " is a human")
    else :
        print(fn + " is a horse")
Saving aaa.jpg to aaa.jpg
[[[0.37254903 0.65882355 0.0627451 ] [0.54509807 0.7529412 0.20392157] [0.4509804 0.7019608 0.08627451] ... [0.627451 0.6039216 0.41568628] [0.5529412 0.5411765 0.34117648] [0.4509804 0.42352942 0.2509804 ]]
 [[0.4509804 0.68235296 0.13333334] [0.70980394 0.8745098 0.4509804 ] [0.56078434 0.77254903 0.3372549 ] ... [0.3647059 0.3882353 0.20784314] [0.53333336 0.5647059 0.3647059 ] [0.5529412 0.58431375 0.39215687]]
 [[0.4392157 0.6745098 0.15686275] [0.6627451 0.84705883 0.39215687] [0.68235296 0.8901961 0.44313726] ... [0.23921569 0.3254902 0.13333334] [0.34117648 0.4392157 0.22352941] [0.21960784 0.31764707 0.09411765]]
 ...
 [[0.77254903 0.7647059 0.7764706 ] [0.77254903 0.7647059 0.7764706 ] [0.78431374 0.7764706 0.7882353 ] ... [0.80784315 0.8 0.8117647 ] [0.79607844 0.7882353 0.7921569 ] [0.80784315 0.8 0.8039216 ]]
 [[0.8 0.7921569 0.8039216 ] [0.8039216 0.79607844 0.80784315] [0.7921569 0.78431374 0.79607844] ... [0.80784315 0.8 0.8117647 ] [0.79607844 0.7882353 0.7921569 ] [0.79607844 0.7882353 0.7921569 ]]
 [[0.80784315 0.8 0.8117647 ] [0.8 0.7921569 0.8039216 ] [0.80784315 0.8 0.8117647 ] ... [0.8039216 0.79607844 0.8 ] [0.8 0.7921569 0.79607844] [0.78039217 0.77254903 0.7764706 ]]]
(300, 300, 3)
(1, 300, 300, 3)
1/1 [==============================] - 0s 204ms/step
[[1.4245357e-18]]
aaa.jpg is a horse
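Outside Colab, where files.upload() is not available, the same prediction can be run on a local file; a minimal sketch, where '/tmp/test.jpg' is a hypothetical path.

import numpy as np
from tensorflow.keras.preprocessing import image

path = '/tmp/test.jpg'                               # hypothetical local image path
img = image.load_img(path, target_size=(300, 300))   # resize to the model's input size
x = image.img_to_array(img) / 255.0                  # same 1/255 scaling as the generators
x = np.expand_dims(x, axis=0)                        # add the batch dimension
prob = model.predict(x)[0][0]                        # sigmoid output: probability of "human"
print('human' if prob > 0.5 else 'horse', prob)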
In [ ]:
In [ ]:
# X_train, y_train == horse-or-human
# We only need to look at two folders: horse-or-human and validation-horse-or-human.
# Since we are classifying into 2 classes here, there are also 2 folders.
# Each folder holds images of the same label (horse photos in the horses folder, human photos in the humans folder).
# If there were 3 classes to classify, we would make 3 folders, following the same idea.
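To see that layout on disk, you can walk the two dataset roots; a quick sketch that prints each class subfolder and its file count.

import os

# print each class subfolder and how many image files it contains
for root_dir in ['/tmp/horse-or-human', '/tmp/validation-horse-or-human']:
    for class_name in sorted(os.listdir(root_dir)):
        class_dir = os.path.join(root_dir, class_name)
        if os.path.isdir(class_dir):
            print(root_dir, '->', class_name, ':', len(os.listdir(class_dir)), 'files')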
In [2]:
from IPython.display import Image
In [4]:
Image('C:\\Users\\5-10\\Desktop\\캡쳐\\20221230_124245.png')
Out[4]:
In [5]:
# Horses whose filenames start with horse01 are all photos of horse number 1.
# There are several photos of the same horse because they vary in angle, background, size, and so on.
# That variety is what pushes the recognition rate up (a synthetic way to add such variety is sketched after the screenshot below).
Image('C:\\Users\\5-10\\Desktop\\캡쳐\\20221230_124254.png')
Out[5]:
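A synthetic way to add the same kind of variety is ImageDataGenerator's augmentation options; this notebook does not use them, so the parameters below are purely illustrative.

from keras.preprocessing.image import ImageDataGenerator

# illustrative augmentation settings: random rotations, shifts, zoom and horizontal flips
augmented_datagen = ImageDataGenerator(
    rescale=1/255.0,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)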
In [ ]: