I am new to TensorFlow and deep learning, and my first task is a binary classification of arrays derived from greyscale images. The arrays in this code (array_a to array_h) serve as an example.
I have many arrays of shape (300, 300) containing zeros and ones. These arrays should be my input as training data (x_train). I want to label them with another array (label_array_train) of shape (8,), which should be my input as y_train, the target data.
My idea was to concatenate the arrays into one array with np.concatenate, convert it to a tensor, and feed it to my network. I also use tf.convert_to_tensor on label_array_train to get the label tensor. Each array should get one label from label_array_train (0 or 1): arrays of zeros should be labeled 0 and arrays of ones should be labeled 1. I get an error because of the mismatched data sizes, but I don't know how to prepare my input data so that the shapes of train_data and the labels line up.
I get this error:
Data cardinality is ambiguous:
x sizes: 2400
y sizes: 8
Make sure all arrays contain the same number of samples.
File "C:\Users\Niklas\Desktop\Python\Versuch 3.py", line 94, in <module>
model.fit(train_data,...
My code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, AvgPool2D
from tensorflow.keras.optimizers import Adam
import numpy as np
# my training arrays with zeros and ones, shape (300, 300)
array_a= np.zeros((300,300))
array_b= np.ones((300,300))
array_c= np.ones((300,300))
array_d= np.zeros((300,300))
array_e= np.zeros((300,300))
array_f= np.zeros((300,300))
array_g= np.zeros((300,300))
array_h= np.ones((300,300))
# my array for labeling all arrays of the train_data
label_array_train = np.array([0, 1, 1, 0, 0, 0, 0, 1], dtype=float)
# concatenate all arrays into one
train_array = np.concatenate((array_a,array_b,array_c,array_d,array_e,array_f,array_g,array_h), axis=0)
# transform to tensors for compatibility with TensorFlow
train_data = tf.convert_to_tensor(train_array, np.float32)
labels = tf.convert_to_tensor(label_array_train, np.float32)
# my model
model = Sequential([Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=(300, 300, 1)),
                    AvgPool2D(2, 2),
                    Conv2D(filters=32, kernel_size=(3, 3), activation='relu'),
                    AvgPool2D(2, 2),
                    Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
                    AvgPool2D(2, 2),
                    Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),
                    AvgPool2D(2, 2),
                    Flatten(),
                    Dense(units=512, activation='relu'),
                    Dense(units=1, activation='sigmoid')])
model.compile(optimizer=Adam(learning_rate=0.0005),
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(train_data,
          labels,
          epochs=5,
          verbose=1)
Shape of my training data from print(train_data):
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]], shape=(2400, 300), dtype=float32)
Shape of my labels from print(labels):
tf.Tensor([0. 1. 1. 0. 0. 0. 0. 1.], shape=(8,), dtype=float32)
I hope someone can point out my mistakes or show me how to do it right.
CodePudding user response:
Use np.stack instead of np.concatenate and everything should work:
train_array = np.stack((array_a,array_b,array_c,array_d,array_e,array_f,array_g,array_h), axis=0)
print(train_array.shape)
# (8, 300, 300)
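For completeness, here is a minimal end-to-end sketch of this fix (my addition, assuming the model from the question): besides stacking, Conv2D with input_shape=(300, 300, 1) also expects a trailing channel axis, which np.expand_dims can add before converting to tensors.

# stack the eight (300, 300) arrays into one (8, 300, 300) array
train_array = np.stack((array_a, array_b, array_c, array_d,
                        array_e, array_f, array_g, array_h), axis=0)
# add the channel axis expected by Conv2D -> (8, 300, 300, 1)
train_array = np.expand_dims(train_array, axis=-1)
train_data = tf.convert_to_tensor(train_array, np.float32)
labels = tf.convert_to_tensor(label_array_train, np.float32)
model.fit(train_data, labels, epochs=5, verbose=1)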
CodePudding user response:
from tensorflow.keras.utils import to_categorical
...
model.fit(train_data,
to_categorical(labels),
epochs=5,
verbose=1)
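Note that to_categorical one-hot encodes the labels to shape (8, 2), so this variant presumably assumes a two-unit softmax output with categorical_crossentropy loss; with the single sigmoid unit from the question, the 0/1 labels can be passed unchanged. A small sketch of what the encoding produces and what would have to change:

from tensorflow.keras.utils import to_categorical

y_onehot = to_categorical(label_array_train)  # shape (8, 2), one column per class
# the final layer and loss then need to match the one-hot labels:
#   Dense(units=2, activation='softmax') and loss='categorical_crossentropy'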
CodePudding user response:
The X shape is the problem. You have to initialize the arrays as:
array_a= np.zeros((1,300,300))
array_b= np.ones((1,300,300))
array_c= np.ones((1,300,300))
array_d= np.zeros((1,300,300))
array_e= np.zeros((1,300,300))
array_f= np.zeros((1,300,300))
array_g= np.zeros((1,300,300))
array_h= np.ones((1,300,300))
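With that leading axis of size 1, the np.concatenate call from the question then builds the sample dimension itself (a sketch; equivalent to the np.stack answer above):

train_array = np.concatenate((array_a, array_b, array_c, array_d,
                              array_e, array_f, array_g, array_h), axis=0)
print(train_array.shape)
# (8, 300, 300)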