
Deep Learning Study

Chapter 8: Image Classification - Convolutional Neural Networks (5)


- Training a convolutional neural network

1. Load the dataset (Fashion MNIST)

import tensorflow as tf

(x_train_all, y_train_all), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()


## Output:
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz
29515/29515 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz
26421880/26421880 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz
5148/5148 [==============================] - 0s 0us/step
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz
4422102/4422102 [==============================] - 0s 0us/step

 

2. Split the training data into training and validation sets

from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(x_train_all, y_train_all, stratify=y_train_all, 
                                                  test_size=0.2, random_state=42)
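As an aside, the effect of the `stratify` argument can be checked on a small synthetic label array (the 80/20 class ratio below is made up purely for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic labels: 80 samples of class 0, 20 of class 1 (illustrative only).
y = np.array([0] * 80 + [1] * 20)
X = np.arange(100).reshape(-1, 1)  # dummy features

X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=42)

# Stratification preserves the 20% minority-class ratio in both splits.
print((y_tr == 1).mean())  # 0.2
print((y_va == 1).mean())  # 0.2
```

Without `stratify`, a random split could leave the validation set with a skewed class distribution, which would make the validation metrics less reliable.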

 

3. Convert the targets to one-hot encoding

y_train_encoded = tf.keras.utils.to_categorical(y_train)
y_val_encoded = tf.keras.utils.to_categorical(y_val)
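`to_categorical` maps each integer label to a one-hot vector; a NumPy equivalent (a sketch of the idea, not the Keras implementation) is:

```python
import numpy as np

def one_hot(y, num_classes):
    """Map integer labels to one-hot rows, e.g. 2 -> [0, 0, 1, 0, ...]."""
    return np.eye(num_classes)[y]

labels = np.array([0, 2, 9])
print(one_hot(labels, 10))
# Row 0 has its 1 at index 0, row 1 at index 2, row 2 at index 9.
```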

 

4. Prepare the input data

x_train = x_train.reshape(-1, 28, 28, 1)
x_val = x_val.reshape(-1, 28, 28, 1)

x_train.shape

## Output: (48000, 28, 28, 1)
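The -1 in `reshape` lets NumPy infer the batch dimension from the total number of elements; a small shape-only sketch:

```python
import numpy as np

# 48 fake grayscale "images" of 28x28 (zeros, for shape checking only).
imgs = np.zeros((48, 28, 28))

# -1 tells NumPy to infer that axis: 48*28*28 / (28*28*1) = 48.
imgs4d = imgs.reshape(-1, 28, 28, 1)
print(imgs4d.shape)  # (48, 28, 28, 1)
```

The trailing channel axis of size 1 is what Conv2D expects for grayscale input.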

 

5. Preprocess the input data by scaling pixel values to the [0, 1] range

x_train = x_train / 255
x_val = x_val / 255

 

6. Train the model (ConvolutionNetwork is the NumPy CNN class implemented earlier in this chapter's series)

cn = ConvolutionNetwork(n_kernels=10, units=100, batch_size=128, learning_rate=0.01)
cn.fit(x_train, y_train_encoded, 
       x_val=x_val, y_val=y_val_encoded, epochs=20)
       
     
## Output:
/usr/local/lib/python3.8/dist-packages/keras/initializers/initializers_v2.py:120: UserWarning: The initializer GlorotUniform is unseeded and being called multiple times, which will return identical values each time (even if the initializer is unseeded). Please update your code to provide a seed to the initializer, or avoid using the same initializer instance more than once.
  warnings.warn(
Epoch 0 ..........
(progress dots for epochs 1-18 omitted)
Epoch 19 ..........

 

7. Plot the training and validation loss, and check the validation-set accuracy

import matplotlib.pyplot as plt

plt.plot(cn.losses)
plt.plot(cn.val_losses)
plt.ylabel('loss')
plt.xlabel('iteration')
plt.legend(['train_loss', 'val_loss'])
plt.show()

cn.score(x_val, y_val_encoded)


## Output: 0.88325

8-5 Building a Convolutional Neural Network with Keras

- Implementing a CNN with Keras

The convolutional layer in Keras is the Conv2D class.

Max pooling is the MaxPooling2D class.

Flattening the feature maps into a 1-D vector is done with the Flatten class.

 

1. Import the required classes

2. Stack the convolutional layer

3. Stack the pooling layer

4. Flatten the feature maps so they can be fed into the fully connected layer

5. Stack the fully connected layers

6. Inspect the model structure

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

conv1 = tf.keras.Sequential()
conv1.add(Conv2D(10, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))

conv1.add(MaxPooling2D((2, 2)))

conv1.add(Flatten())

conv1.add(Dense(100, activation='relu'))
conv1.add(Dense(10, activation='softmax'))

conv1.summary()


## Output:
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_1 (Conv2D)           (None, 28, 28, 10)        100       
                                                                 
 max_pooling2d_1 (MaxPooling  (None, 14, 14, 10)       0         
 2D)                                                             
                                                                 
 flatten_1 (Flatten)         (None, 1960)              0         
                                                                 
 dense_2 (Dense)             (None, 100)               196100    
                                                                 
 dense_3 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 197,210
Trainable params: 197,210
Non-trainable params: 0
_________________________________________________________________
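The parameter counts in the summary can be verified by hand; the arithmetic below recomputes each layer's count:

```python
# Conv2D: (kernel_h * kernel_w * in_channels + 1 bias) per kernel.
conv_params = (3 * 3 * 1 + 1) * 10        # 100

# MaxPooling2D and Flatten have no weights.
flat_size = 14 * 14 * 10                  # 1960 (28x28 halved by 2x2 pooling)

# Dense: inputs * units + one bias per unit.
dense1_params = flat_size * 100 + 100     # 196100
dense2_params = 100 * 10 + 10             # 1010

total = conv_params + dense1_params + dense2_params
print(total)  # 197210
```

Note that almost all of the parameters sit in the first Dense layer, which is why flattening large feature maps directly into a fully connected layer is expensive.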

- Training the CNN model

1. Pass 'accuracy' in a list to the metrics parameter to monitor accuracy

conv1.compile(optimizer='adam', loss='categorical_crossentropy',  metrics=['accuracy'])

 

2. Train the model with the Adam optimizer (Adaptive Moment Estimation)

history = conv1.fit(x_train, y_train_encoded, epochs=20, 
                    validation_data=(x_val, y_val_encoded))
                    
                    
## Output:
Epoch 1/20
1500/1500 [==============================] - 30s 18ms/step - loss: 0.4541 - accuracy: 0.8419 - val_loss: 0.3870 - val_accuracy: 0.8583
Epoch 2/20
1500/1500 [==============================] - 25s 17ms/step - loss: 0.3140 - accuracy: 0.8869 - val_loss: 0.3006 - val_accuracy: 0.8932
Epoch 3/20
1500/1500 [==============================] - 29s 19ms/step - loss: 0.2710 - accuracy: 0.9015 - val_loss: 0.2760 - val_accuracy: 0.9014
Epoch 4/20
1500/1500 [==============================] - 25s 17ms/step - loss: 0.2367 - accuracy: 0.9126 - val_loss: 0.2634 - val_accuracy: 0.9066
Epoch 5/20
1500/1500 [==============================] - 25s 17ms/step - loss: 0.2115 - accuracy: 0.9201 - val_loss: 0.2582 - val_accuracy: 0.9098
Epoch 6/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.1908 - accuracy: 0.9302 - val_loss: 0.2454 - val_accuracy: 0.9137
Epoch 7/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.1708 - accuracy: 0.9369 - val_loss: 0.2535 - val_accuracy: 0.9108
Epoch 8/20
1500/1500 [==============================] - 27s 18ms/step - loss: 0.1528 - accuracy: 0.9445 - val_loss: 0.2502 - val_accuracy: 0.9163
Epoch 9/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.1354 - accuracy: 0.9506 - val_loss: 0.2585 - val_accuracy: 0.9172
Epoch 10/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.1214 - accuracy: 0.9549 - val_loss: 0.2568 - val_accuracy: 0.9177
Epoch 11/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.1084 - accuracy: 0.9600 - val_loss: 0.2679 - val_accuracy: 0.9189
Epoch 12/20
1500/1500 [==============================] - 24s 16ms/step - loss: 0.0971 - accuracy: 0.9647 - val_loss: 0.2785 - val_accuracy: 0.9168
Epoch 13/20
1500/1500 [==============================] - 24s 16ms/step - loss: 0.0846 - accuracy: 0.9699 - val_loss: 0.3072 - val_accuracy: 0.9125
Epoch 14/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.0766 - accuracy: 0.9721 - val_loss: 0.3149 - val_accuracy: 0.9144
Epoch 15/20
1500/1500 [==============================] - 24s 16ms/step - loss: 0.0690 - accuracy: 0.9754 - val_loss: 0.3250 - val_accuracy: 0.9155
Epoch 16/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.0610 - accuracy: 0.9786 - val_loss: 0.3486 - val_accuracy: 0.9119
Epoch 17/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.0551 - accuracy: 0.9807 - val_loss: 0.3716 - val_accuracy: 0.9091
Epoch 18/20
1500/1500 [==============================] - 25s 17ms/step - loss: 0.0487 - accuracy: 0.9827 - val_loss: 0.3686 - val_accuracy: 0.9158
Epoch 19/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.0427 - accuracy: 0.9846 - val_loss: 0.3842 - val_accuracy: 0.9128
Epoch 20/20
1500/1500 [==============================] - 25s 16ms/step - loss: 0.0414 - accuracy: 0.9850 - val_loss: 0.4082 - val_accuracy: 0.9156

 

3. Plot the loss and accuracy graphs

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss', 'val_loss'])
plt.show()

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy', 'val_accuracy'])
plt.show()

- Dropout

A technique for reducing overfitting in neural networks.

It randomly deactivates some neurons during training.

TensorFlow uses inverted dropout: during training it scales the surviving neurons' outputs up by 1/(1 - dropout rate), so no rescaling is needed at inference time.
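TensorFlow's Dropout layer uses "inverted dropout" at training time; the scaling idea can be sketched in NumPy (a simplified illustration, not TensorFlow's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout_train(x, rate):
    """Inverted dropout: zero neurons with probability `rate` and scale
    survivors by 1/(1 - rate) so the expected output is unchanged."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(10000)
out = dropout_train(x, rate=0.5)

# Roughly half the outputs are zero and the rest are 2.0,
# so the mean stays close to the original mean of 1.0.
print(out.mean())
```

Because the scaling happens during training, the layer can simply be a no-op at inference time.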

- Implementing a CNN with dropout

Add the Dropout class.

1. Apply dropout to the Keras CNN

from tensorflow.keras.layers import Dropout

conv2 = tf.keras.Sequential()
conv2.add(Conv2D(10, (3, 3), activation='relu', padding='same', input_shape=(28, 28, 1)))
conv2.add(MaxPooling2D((2, 2)))
conv2.add(Flatten())
conv2.add(Dropout(0.5))
conv2.add(Dense(100, activation='relu'))
conv2.add(Dense(10, activation='softmax'))

 

2. Check the dropout layer

conv2.summary()


## Output:
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_2 (Conv2D)           (None, 28, 28, 10)        100       
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 14, 14, 10)       0         
 2D)                                                             
                                                                 
 flatten_2 (Flatten)         (None, 1960)              0         
                                                                 
 dropout (Dropout)           (None, 1960)              0         
                                                                 
 dense_4 (Dense)             (None, 100)               196100    
                                                                 
 dense_5 (Dense)             (None, 10)                1010      
                                                                 
=================================================================
Total params: 197,210
Trainable params: 197,210
Non-trainable params: 0
_________________________________________________________________

 

3. Train the model

conv2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = conv2.fit(x_train, y_train_encoded, epochs=20, validation_data=(x_val, y_val_encoded))


## Output:
Epoch 1/20
1500/1500 [==============================] - 28s 18ms/step - loss: 0.5001 - accuracy: 0.8198 - val_loss: 0.3494 - val_accuracy: 0.8721
Epoch 2/20
1500/1500 [==============================] - 27s 18ms/step - loss: 0.3699 - accuracy: 0.8655 - val_loss: 0.3032 - val_accuracy: 0.8928
Epoch 3/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.3328 - accuracy: 0.8778 - val_loss: 0.2987 - val_accuracy: 0.8928
Epoch 4/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.3040 - accuracy: 0.8866 - val_loss: 0.2622 - val_accuracy: 0.9048
Epoch 5/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2883 - accuracy: 0.8923 - val_loss: 0.2561 - val_accuracy: 0.9058
Epoch 6/20
1500/1500 [==============================] - 27s 18ms/step - loss: 0.2707 - accuracy: 0.8994 - val_loss: 0.2479 - val_accuracy: 0.9083
Epoch 7/20
1500/1500 [==============================] - 28s 19ms/step - loss: 0.2564 - accuracy: 0.9044 - val_loss: 0.2431 - val_accuracy: 0.9115
Epoch 8/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2471 - accuracy: 0.9075 - val_loss: 0.2419 - val_accuracy: 0.9111
Epoch 9/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2360 - accuracy: 0.9110 - val_loss: 0.2367 - val_accuracy: 0.9120
Epoch 10/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2253 - accuracy: 0.9165 - val_loss: 0.2358 - val_accuracy: 0.9124
Epoch 11/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2204 - accuracy: 0.9179 - val_loss: 0.2403 - val_accuracy: 0.9143
Epoch 12/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2149 - accuracy: 0.9189 - val_loss: 0.2280 - val_accuracy: 0.9161
Epoch 13/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2092 - accuracy: 0.9212 - val_loss: 0.2287 - val_accuracy: 0.9196
Epoch 14/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.2049 - accuracy: 0.9218 - val_loss: 0.2309 - val_accuracy: 0.9178
Epoch 15/20
1500/1500 [==============================] - 28s 18ms/step - loss: 0.1986 - accuracy: 0.9245 - val_loss: 0.2282 - val_accuracy: 0.9187
Epoch 16/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.1928 - accuracy: 0.9265 - val_loss: 0.2314 - val_accuracy: 0.9168
Epoch 17/20
1500/1500 [==============================] - 25s 17ms/step - loss: 0.1932 - accuracy: 0.9270 - val_loss: 0.2259 - val_accuracy: 0.9210
Epoch 18/20
1500/1500 [==============================] - 26s 18ms/step - loss: 0.1855 - accuracy: 0.9288 - val_loss: 0.2287 - val_accuracy: 0.9180
Epoch 19/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.1815 - accuracy: 0.9313 - val_loss: 0.2299 - val_accuracy: 0.9182
Epoch 20/20
1500/1500 [==============================] - 26s 17ms/step - loss: 0.1802 - accuracy: 0.9307 - val_loss: 0.2285 - val_accuracy: 0.9217

 

4. Plot the loss and accuracy graphs

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train_loss', 'val_loss'])
plt.show()

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy', 'val_accuracy'])
plt.show()

With dropout, the epoch at which the validation loss starts to rise comes later, and its gap from the training loss narrows.

Accuracy increased.

-> In classification problems accuracy cannot be optimized directly, so the cross-entropy loss function is optimized as a surrogate.
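For one-hot targets, the categorical cross-entropy being optimized reduces to the negative log of the predicted probability of the true class; a minimal NumPy check:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred):
    """Mean over samples of -sum(y_true * log(y_pred))."""
    eps = 1e-12  # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# One sample, true class 1, predicted with probability 0.9.
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.05, 0.9, 0.05]])
print(categorical_crossentropy(y_true, y_pred))  # ~0.105 (= -ln 0.9)
```

Unlike accuracy, this quantity is differentiable in the predicted probabilities, which is what makes gradient-based training possible.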

loss, accuracy = conv2.evaluate(x_val, y_val_encoded, verbose=0)
print(accuracy)


## Output: 0.92166668176651

 

 

 

 

 

※ These notes were written and organized while studying the book <Do it! 딥러닝 입문>.
