Since yesterday I no longer get the precision from my neural network that I was reaching before.
My accuracy and val_accuracy look fine, but my val_loss is suddenly much higher than my loss value; before, it was always only a few percent above the loss.

Found 30000 images belonging to 3 classes.
Found 9507 images belonging to 3 classes.
1-conv-128-nodes-0-dense-1636977515
2021-11-15 12:58:35.259656: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-15 12:58:35.785197: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1319 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 126, 126, 16)      448
max_pooling2d (MaxPooling2D) (None, 63, 63, 16)        0
flatten (Flatten)            (None, 63504)             0
dense (Dense)                (None, 3)                 190515
=================================================================
Total params: 190,963
Trainable params: 190,963
Non-trainable params: 0
_________________________________________________________________
Epoch 1/2
2021-11-15 12:58:42.877218: I tensorflow/stream_executor/cuda/cuda_dnn.cc:366] Loaded cuDNN version 8202
469/469 [==============================] - 3794s 8s/step - loss: 0.2431 - accuracy: 0.9434 - val_loss: 0.4150 - val_accuracy: 0.9294
Epoch 2/2
469/469 [==============================] - 2670s 6s/step - loss: 0.0414 - accuracy: 0.9870 - val_loss: 0.5484 - val_accuracy: 0.9408
2021-11-15 14:46:24.090630: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
Process finished with exit code 0
On top of that, I got the TensorFlow warning about sets that is shown in the log above.
My code:
import os
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import TensorBoard
from PIL import ImageFile

ImageFile.LOAD_TRUNCATED_IMAGES = True

CLASS_NAME = ["Klasse 1", "Klasse 2", "Klasse 3"]

# Augment and rescale the training data
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

# Preprocess the test data (rescaling only)
test_datagen = ImageDataGenerator(rescale=1. / 255)

BS = 64  # batch size: how many images are used per step

# Create the training data
training_set = train_datagen.flow_from_directory(r"D:/Bilder/Train",
                                                 target_size=(128, 128),
                                                 batch_size=BS,
                                                 classes=CLASS_NAME,
                                                 class_mode="categorical")

# Create the test data
test_set = test_datagen.flow_from_directory(r"D:/Bilder/Test",
                                            target_size=(128, 128),
                                            batch_size=BS,
                                            classes=CLASS_NAME,
                                            class_mode="categorical")

dense_layers = [0]   # number of additional dense (hidden) layers
layer_sizes = [512]  # number of neurons per layer
conv_layers = [1]    # number of convolutional layers

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for conv_layer in conv_layers:
            NAME = "{}-conv-{}-nodes-{}-dense-{}".format(conv_layer, layer_size, dense_layer, int(time.time()))
            print(NAME)
            tensorBoard = TensorBoard(log_dir="logs/{}".format(NAME))

            model = Sequential()
            # Input layer
            model.add(Conv2D(16, (3, 3), activation="relu", input_shape=(128, 128, 3)))
            model.add(MaxPooling2D(2, 2))

            # Additional convolutional layers
            for l in range(conv_layer - 1):
                model.add(Conv2D(layer_size, (3, 3), activation="relu"))
                model.add(MaxPooling2D(2, 2))

            model.add(Flatten())

            # Additional dense layers
            for l in range(dense_layer):
                model.add(Dropout(0.2))
                model.add(Dense(layer_size, activation="relu"))

            model.add(Dense(3, activation="softmax"))
            model.summary()

            opt = RMSprop(learning_rate=0.001)
            model.compile(loss="categorical_crossentropy",
                          optimizer=opt,
                          metrics=["accuracy"])

            # note: the generators above already define the batch size
            model.fit(training_set, batch_size=BS, epochs=2, callbacks=[tensorBoard], verbose=1, validation_data=test_set)

            # Save the model
            model.save(NAME + ".model")
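For completeness, here is a minimal sketch (not part of my original script) of how the fit call could be changed to keep the History object that model.fit() returns and print the per-epoch difference between loss and val_loss, i.e. exactly the gap described at the top. The variable name history and the printout are my own additions; everything else assumes the objects defined above.

history = model.fit(training_set,
                    epochs=2,
                    callbacks=[tensorBoard],
                    verbose=1,
                    validation_data=test_set)

# Print the gap between training and validation loss for every epoch
for epoch, (loss, val_loss) in enumerate(zip(history.history["loss"],
                                             history.history["val_loss"]),
                                         start=1):
    print(f"Epoch {epoch}: loss={loss:.4f}  val_loss={val_loss:.4f}  gap={val_loss - loss:.4f}")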