What exactly is the Keras loss in history?

MLStudent94
User
Posts: 1
Registered: Saturday 19 December 2020, 13:19

Hello AI-Friends,

I am using a callback function to calculate the train and test error at the end of each epoch by calling model.evaluate(). However, the train loss I compute with model.evaluate(x_train, y_train) differs from the loss saved in history.history['loss'], while the test loss computed by my callback is identical to history.history['val_loss'].

So I wonder: how does Keras calculate the train loss, and what exactly is saved in history.history['loss']?

I have to calculate the loss after each epoch for different datasets, since I want to compare the loss curves of a sequential training run over multiple datasets.

Does anybody have an idea why these losses on the training data are not identical? Is there a better way to do this?
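My current guess (just an assumption on my part, not something I have found confirmed): history.history['loss'] is an average of the batch losses computed *during* the epoch, while the weights are still being updated, whereas my callback's model.evaluate() runs *after* the epoch with the final weights. A small NumPy sketch with made-up batch losses shows how those two numbers would then diverge:

```python
import numpy as np

# Hypothetical per-batch training losses within one epoch;
# the loss drops as the weights improve from batch to batch.
batch_losses = np.array([1.0, 0.8, 0.6, 0.4])

# What (I assume) ends up in history.history['loss']:
# the average over all batches of the epoch.
epoch_loss = batch_losses.mean()      # 0.7

# What model.evaluate() on the training data would roughly report:
# the loss of the final weights, i.e. close to the last batch's loss.
end_of_epoch_loss = batch_losses[-1]  # 0.4

print(epoch_loss, end_of_epoch_loss)
```

If that assumption is right, the two values can only agree when the loss barely changes within an epoch, which would explain why they never match exactly during actual training.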

This is my code:

Code:

class MyCustomCallback(keras.callbacks.Callback):

    def __init__(self):
        super().__init__()
        # Lists, so the per-epoch results can be appended
        self.results = {
            'eval_train': [],
            'eval_test': []
        }

    def on_epoch_end(self, epoch, logs=None):
        # Re-evaluate both datasets with the weights as they are
        # at the end of the epoch
        eval_train = self.model.evaluate(x_train, y_train, verbose=1)
        eval_test = self.model.evaluate(x_test, y_test, verbose=1)
        self.results['eval_train'].append(eval_train)
        self.results['eval_test'].append(eval_test)


myCallback = MyCustomCallback()

history = model.fit(x_train, y_train,
                    epochs=10, batch_size=256, verbose=1,
                    validation_data=(x_test, y_test),
                    callbacks=[myCallback])