Notes from testing this site: http://krasserm.github.io/2018/02/07/deep-face-recognition/
During training, the training did run, but the predictions afterwards were strange, as if the model had never been trained at all, and repeated tests gave exactly the same result. So I went back and checked every step. The results in the online Jupyter notebook and in Spyder were the same, but Spyder seems better at surfacing error messages: Jupyter may still run code that is actually broken, so it is better to run it in Spyder.
I ran the program in segments. The first segment is the training part:
```python
# -*- coding: utf-8 -*-
"""
Created on Fri Apr 12 02:35:59 2019
@author: User
"""
from model import create_model

nn4_small2 = create_model()

from keras import backend as K
from keras.models import Model
from keras.layers import Input, Layer

# Input for anchor, positive and negative images
in_a = Input(shape=(96, 96, 3))
in_p = Input(shape=(96, 96, 3))
in_n = Input(shape=(96, 96, 3))

# Output for anchor, positive and negative embedding vectors
# The nn4_small model instance is shared (Siamese network)
emb_a = nn4_small2(in_a)
emb_p = nn4_small2(in_p)
emb_n = nn4_small2(in_n)

class TripletLossLayer(Layer):
    def __init__(self, alpha, **kwargs):
        self.alpha = alpha
        super(TripletLossLayer, self).__init__(**kwargs)

    def triplet_loss(self, inputs):
        a, p, n = inputs
        p_dist = K.sum(K.square(a - p), axis=-1)
        n_dist = K.sum(K.square(a - n), axis=-1)
        return K.sum(K.maximum(p_dist - n_dist + self.alpha, 0), axis=0)

    def call(self, inputs):
        loss = self.triplet_loss(inputs)
        self.add_loss(loss)
        return loss

# Layer that computes the triplet loss from anchor, positive and negative embedding vectors
triplet_loss_layer = TripletLossLayer(alpha=0.2, name='triplet_loss_layer')([emb_a, emb_p, emb_n])

# Model that can be trained with anchor, positive and negative images
nn4_small2_train = Model([in_a, in_p, in_n], triplet_loss_layer)

from data import triplet_generator

# triplet_generator() creates a generator that continuously returns
# ([a_batch, p_batch, n_batch], None) tuples where a_batch, p_batch
# and n_batch are batches of anchor, positive and negative RGB images
# each having a shape of (batch_size, 96, 96, 3).
generator = triplet_generator()

nn4_small2_train.compile(loss=None, optimizer='adam')
nn4_small2_train.fit_generator(generator, epochs=10, steps_per_epoch=100)

# Please note that the current implementation of the generator only generates
# random image data. The main goal of this code snippet is to demonstrate
# the general setup for model training. In the following, we will anyway
# use a pre-trained model so we don't need a generator here that operates
# on real training data. I'll maybe provide a fully functional generator
# later.
```
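To see what `TripletLossLayer` actually computes, here is a small NumPy sketch of the same formula (this example is mine, not from the article): the loss only penalizes triplets where the anchor-positive distance is not at least `alpha` smaller than the anchor-negative distance.

```python
import numpy as np

def triplet_loss(a, p, n, alpha=0.2):
    """Same computation as TripletLossLayer, in plain NumPy:
    sum over the batch of max(||a-p||^2 - ||a-n||^2 + alpha, 0)."""
    p_dist = np.sum(np.square(a - p), axis=-1)  # squared anchor-positive distance
    n_dist = np.sum(np.square(a - n), axis=-1)  # squared anchor-negative distance
    return np.sum(np.maximum(p_dist - n_dist + alpha, 0.0), axis=0)

# Toy 1-D embeddings: positives close to the anchor, negatives far away.
a = np.array([[0.0], [0.0]])
p = np.array([[0.1], [0.0]])
n = np.array([[1.0], [2.0]])
print(triplet_loss(a, p, n))  # easy triplets -> 0.0

# A hard triplet (positive farther than negative) contributes a positive loss.
print(triplet_loss(np.array([[0.0]]), np.array([[1.0]]), np.array([[0.1]])))  # 1.19
```

If the embedding network were learning, this value should shrink over the epochs, which is exactly what failed to happen below.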
This segment ran, but something was apparently already wrong; I just didn't look closely at first, so I missed it.
The second segment threw an error as soon as I ran it~
```python
# -*- coding: utf-8 -*-
"""
Created on Mon Apr 22 11:19:36 2019
@author: User
"""
from model import create_model

nn4_small2_pretrained = create_model()
nn4_small2_pretrained.load_weights('weights/nn4.small2.v1.h5')
```
The error said the file could not be opened: file signature not found.
I started thinking through the ways a file can fail to open. First, the file might not exist, or the path might be wrong, so I checked the path. Second, the file might be locked because another program already has it open; in code this comes down to the difference between load and open (and open has different modes such as overwrite, etc.). After checking, neither of these was the problem, so I took it to Google and started to suspect a version issue, since online it seemed only Windows users hit this error. That still felt odd, so I went back to where I had originally downloaded the h5 file and downloaded it again, and found the file sizes were very different. With the fresh download, I skipped training and ran this segment directly, and the problem was solved.
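The "file signature not found" message can be checked directly: every valid HDF5 file starts with the same fixed 8-byte signature. Here is a small sketch (the helper name and the `fake.h5` example are mine) that would have distinguished a corrupted download from a path or locking problem:

```python
import os

HDF5_SIGNATURE = b'\x89HDF\r\n\x1a\n'  # first 8 bytes of every valid HDF5 file

def looks_like_hdf5(path):
    """Return True if the file exists and starts with the HDF5 signature."""
    if not os.path.isfile(path):
        return False
    with open(path, 'rb') as f:
        return f.read(8) == HDF5_SIGNATURE

# Example: a bad download (e.g. an HTML error page saved as .h5)
# fails the signature check even though the file exists.
with open('fake.h5', 'wb') as f:
    f.write(b'<html>404 not found</html>')
print(looks_like_hdf5('fake.h5'))  # False
```

A check like this before `load_weights` separates "wrong path" from "file exists but its contents are not HDF5", which was exactly the case here.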
So we can infer that during training something was written into the h5 file, but it was written incorrectly, which made the file unopenable afterwards. That sent me back to look at the original training step.
Sure enough, there was this warning. The code still runs, but part of it is broken: it says that triplet_loss_layer is missing from the loss dictionary, so no data is expected to be passed through triplet_loss_layer during training. No wonder the model never learned anything QQ
The loss value never dropped either. In theory it should decrease with each epoch, but here the loss stayed at 0.8, which basically means no learning at all~~~~~~~~~
So in the end, downloading that h5 file again from GitHub fixed it!
After the fix, the distance values changed; before, they were all reported as zero. That solved it. Finally, going back to the original page, there was one more thing I hadn't noticed at first:
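The distance the article compares faces with is just the squared Euclidean distance between two embedding vectors. A tiny NumPy sketch (toy 3-D vectors of my own, real embeddings are 128-D) shows why "all zeros" was a red flag: only identical embeddings have distance 0, so every pair coming out at 0 meant the network was mapping everything to the same point.

```python
import numpy as np

def distance(emb1, emb2):
    """Squared Euclidean distance between two embedding vectors."""
    return np.sum(np.square(emb1 - emb2))

emb_same = np.array([0.1, 0.2, 0.3])
emb_other = np.array([0.9, 0.1, 0.5])

print(distance(emb_same, emb_same))   # identical embeddings -> 0.0
print(distance(emb_same, emb_other))  # different embeddings -> positive
```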
This part of the article says that the code above only demonstrates the training procedure; to train an accurate model you would still need a huge number of photos, so the following steps use his pre-trained model instead.
And that's it~~~~ finally fixed this problem (exhausted).
So when you hit an error, calm down and read it carefully, and you will find the solution.