Train neural network model on multiple datasets


What I have:

  1. A neural network model
  2. 10 identically structured datasets

What I want:

  1. Train model on all the datasets separately
  2. Save their models separately

I can train on each dataset separately and save each resulting model one at a time, but I want to load all 10 datasets and produce 10 models from them in a single run. The solution may be obvious, but I am fairly new to this. How do I achieve this?

Thanks in advance.

CodePudding user response:

You can achieve this with one of the concepts of concurrency and parallelism, namely multi-threading (or, in some cases, multi-processing). The easiest way to code it is with Python's concurrent.futures module.

You can submit the training function once per dataset to a ThreadPoolExecutor, which fires a separate thread for each individual training.
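For reference, the general submit / as_completed pattern of concurrent.futures looks like this (a toy example that is not tied to Keras):

from concurrent.futures import ThreadPoolExecutor, as_completed

def work(x):                                                        # any function you want to run in parallel
    return x * x

with ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(work, x) for x in range(4)]          # one future per task
    for future in as_completed(futures):                            # yields each future as soon as it finishes
        print(future.result())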

The code for your case can look something like this:


Step 1: Necessary imports
from concurrent.futures import ThreadPoolExecutor, as_completed

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten

Step 2: Creating and building model
def create_model():                                                 # responsible for creating the model
    model = Sequential()
    model.add(Flatten())                                            # adding NN layers
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dense(10, activation='softmax'))                      # example output layer for the 10 MNIST classes - adjust to your task
    # ........ add further layers as needed
    model.compile(optimizer='adam',                                 # example compile settings - substitute your own
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model                                                    # finally return the model

Step 3: Define fit function (performs model training)
def fit(model, XY_train):                                      # performs model.fit(...parameters...)
    model.fit(XY_train[0], XY_train[1], epochs=5, validation_split=0.3)     # use your already defined x_train, y_train
    return model                                                    # finally returns trained model

Step 4: Parallel trainer method, which fires the simultaneous trainings inside a ThreadPoolExecutor context manager
# trains a separate, freshly built model on each dataset in parallel using multi-threading
def parallel_trainer(model_builder, XY_train_datasets: list[tuple]):
    with ThreadPoolExecutor(max_workers=len(XY_train_datasets)) as executor:
        future_objs = [
            executor.submit(fit, model_builder(), dataset)          # submit one training task per dataset
            for dataset in XY_train_datasets
        ]

        for i, obj in enumerate(as_completed(future_objs)):         # iterate over the trainings as they finish
            obj.result().save(f"{i}.model")                         # save each trained model (i is just a running index)
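Note that every executor.submit call builds a brand-new model via model_builder(), so the ten trainings do not share any weights and you end up with ten independent saved models. Multi-threading can still give real parallelism here because TensorFlow's underlying ops release Python's GIL while they execute.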

Step 5: Load the dataset and call the parallel trainer
mnist = tf.keras.datasets.mnist                                     # example dataset - MNIST
(x_train, y_train), (x_test, y_test) = mnist.load_data()            # get (x_train, y_train), (x_test, y_test)
datasets = [(x_train, y_train)] * 10                                # list of (x_train, y_train) tuples - here the same dataset is reused 10 times

parallel_trainer(create_model, datasets)                            # pass the model builder and the list of datasets
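In your real case the ten datasets are different, so the list would be built from your own data instead of repeating MNIST. A rough sketch, assuming purely for illustration that each dataset is stored as a pair of .npy files:

import numpy as np

# hypothetical file names - replace with however your 10 datasets are actually stored
datasets = [
    (np.load(f"x_train_{i}.npy"), np.load(f"y_train_{i}.npy"))
    for i in range(10)
]

parallel_trainer(create_model, datasets)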



Whole program

from concurrent.futures import ThreadPoolExecutor, as_completed

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Flatten


def create_model():                                                 # responsible for creating the model
    model = Sequential()
    model.add(Flatten())                                            # adding NN layers
    model.add(Dense(64))
    model.add(Activation('relu'))
    model.add(Dense(10, activation='softmax'))                      # example output layer for the 10 MNIST classes - adjust to your task
    # ........ add further layers as needed
    model.compile(optimizer='adam',                                 # example compile settings - substitute your own
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model                                                    # finally return the model


def fit(model, XY_train):                                      # performs model.fit(...parameters...)
    model.fit(XY_train[0], XY_train[1], epochs=5, validation_split=0.3)     # use your already defined x_train, y_train
    return model                                                    # finally returns trained model


# trains a separate, freshly built model on each dataset in parallel using multi-threading
def parallel_trainer(model_builder, XY_train_datasets: list[tuple]):
    with ThreadPoolExecutor(max_workers=len(XY_train_datasets)) as executor:
        future_objs = [
            executor.submit(fit, model_builder(), dataset)          # submit one training task per dataset
            for dataset in XY_train_datasets
        ]

        for i, obj in enumerate(as_completed(future_objs)):         # iterate over the trainings as they finish
            obj.result().save(f"{i}.model")                         # save each trained model (i is just a running index)



mnist = tf.keras.datasets.mnist                                     # example dataset - MNIST
(x_train, y_train), (x_test, y_test) = mnist.load_data()            # get (x_train, y_train), (x_test, y_test)
datasets = [(x_train, y_train)] * 10                                # list of (x_train, y_train) tuples - here the same dataset is reused 10 times

parallel_trainer(create_model, datasets)                            # pass the model builder and the list of datasets
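As mentioned at the top, multi-processing is another option, and it sidesteps the GIL entirely. Below is a rough sketch with ProcessPoolExecutor, assuming the create_model and datasets definitions from above; the helper names are purely illustrative. With processes, each model has to be built, trained and saved entirely inside the worker, because a compiled Keras model does not transfer cleanly between processes, and the script's top-level calls should sit under an if __name__ == "__main__": guard.

from concurrent.futures import ProcessPoolExecutor

def train_and_save(index, XY_train):                                # hypothetical helper name
    # build, train and save entirely inside the worker process
    model = create_model()
    model.fit(XY_train[0], XY_train[1], epochs=5, validation_split=0.3)
    model.save(f"{index}.model")
    return index

def process_parallel_trainer(XY_train_datasets):
    with ProcessPoolExecutor(max_workers=len(XY_train_datasets)) as executor:
        futures = [executor.submit(train_and_save, i, ds)           # one worker process per dataset
                   for i, ds in enumerate(XY_train_datasets)]
        for future in futures:
            future.result()                                         # re-raises any exception from a worker

if __name__ == "__main__":                                          # needed so worker processes can import this module safely
    process_parallel_trainer(datasets)                              # datasets built as in Step 5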