Creating a one-hot encoding for the Fashion-MNIST dataset with TensorFlow


Is the case below a scenario where one should create a one-hot encoding for the labels?

I also tried to create a one-hot encoding but kept getting errors. How is this done?

Note: I'm working in Google Colab.

Thank you.

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt

fashion = keras.datasets.fashion_mnist
(train_images,train_labels),(test_images,test_labels) = fashion.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress','Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

train_images =  tf.cast(train_images, tf.float32) / 255.0
test_images = tf.cast(test_images, tf.float32) / 255.0

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])


model.fit(train_images, train_labels, epochs=10, batch_size=512, shuffle=True, validation_split=0.1)


To add the one-hot encoding I tried changing the data to:

train_images =  tf.cast(train_images, tf.float32) / 255.0
test_images = tf.cast(test_images, tf.float32) / 255.0

train_labels = tf.one_hot(tf.cast(train_labels, tf.int64), depth=10)
test_labels = tf.one_hot(tf.cast(test_labels, tf.int64), depth=10)

Which gave the error:

InvalidArgumentError                      Traceback (most recent call last)
in ()
     27
     28
---> 29 model.fit(train_images, train_labels, epochs=10, batch_size=512, shuffle=True, validation_split=0.1)
     30
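The encoding itself seems fine as far as I can tell; running just the label part in a fresh cell (one_hot_labels is just the name I used for the result) gives the expected shapes:

import tensorflow as tf
from tensorflow import keras

(_, train_labels), _ = keras.datasets.fashion_mnist.load_data()

# Encode the integer class ids (0-9) as length-10 vectors
one_hot_labels = tf.one_hot(tf.cast(train_labels, tf.int64), depth=10)

print(train_labels.shape)     # (60000,) integer class ids 0-9
print(one_hot_labels.shape)   # (60000, 10)
print(one_hot_labels[0])      # a length-10 vector with a single 1.0

So the error only shows up once I call model.fit with the encoded labels.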

CodePudding user response:

I think this code should work. It does not use one-hot encoding, but it works perfectly.

import tensorflow as tf    
from tensorflow import keras    
import numpy as np    
import matplotlib.pyplot as plt 
       
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data() 
   
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot'] 
train_images = train_images / 255.0    
test_images = test_images / 255.0
        
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

# keras.optimizers.Adam() is the TF2 replacement for the old tf.train.AdamOptimizer()
model.compile(optimizer=keras.optimizers.Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
    
model.fit(train_images, train_labels, epochs=20)

CodePudding user response:

I have found the answer. Please see Sparse_categorical_crossentropy vs categorical_crossentropy (keras, accuracy)

To make the one-hot encoding work, change the loss in model.compile from 'sparse_categorical_crossentropy' (which expects integer labels) to 'categorical_crossentropy' (which expects one-hot labels):

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

and pass the one-hot encoded labels to model.fit:

model.fit(train_images, one_hot_train_labels, epochs=10, batch_size=128, shuffle=True, validation_split=0.1)
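Putting it together, here is a minimal end-to-end sketch of the one-hot version (same model as in the question; one_hot_train_labels and one_hot_test_labels are just the names I'm using for the encoded labels):

import tensorflow as tf
from tensorflow import keras

fashion = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion.load_data()

# Scale pixel values to [0, 1]
train_images = tf.cast(train_images, tf.float32) / 255.0
test_images = tf.cast(test_images, tf.float32) / 255.0

# One-hot encode the integer labels (0-9) into length-10 vectors
one_hot_train_labels = tf.one_hot(tf.cast(train_labels, tf.int64), depth=10)
one_hot_test_labels = tf.one_hot(tf.cast(test_labels, tf.int64), depth=10)

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])

# categorical_crossentropy matches one-hot targets;
# sparse_categorical_crossentropy matches integer targets
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

model.fit(train_images, one_hot_train_labels, epochs=10, batch_size=128,
          shuffle=True, validation_split=0.1)

model.evaluate(test_images, one_hot_test_labels)

If you prefer NumPy arrays over tensors, keras.utils.to_categorical(train_labels, 10) builds the same one-hot labels.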