Training built-in convnets on array data in R keras


I am trying to train a built-in convnet architecture on my own data in R keras. My data is stored in an array in R, rather than in individual image files, which seems to be the standard.

I think my main problem is that I don't know how to preprocess my feature data correctly.

Here is a simple example of the data and model definition (which works):

#simulate data resembling images, but in array format:
p <- 32 # note: minimum height/width for resnet
toy_x <- array(runif(p*p*100*3), c(100, p, p, 3))
toy_y <- runif(100)

#define and compile model
input <- layer_input(shape = c(p, p, 3))
N1 <- application_resnet50(weights = NULL,
                               input_tensor = input,
                               include_top = FALSE)
output_layer_instance <- layer_dense(units = 1, activation = 'sigmoid')
output <- input %>% N1() %>% output_layer_instance()
model <- keras_model(input, output)
model %>% compile(loss = "binary_crossentropy", optimizer = "adam")

But when I try to fit the model using the following code, I get an error:

model %>% fit(toy_x, toy_y, epochs = 1)

I'm not sure the error is very informative, but here it is:

 Error in py_call_impl(callable, dots$args, dots$keywords) : 
  ValueError: in user code:

    /root/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /root/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /root/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /root/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /root/.local/share/r-miniconda/envs/r-reticulate/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:533 train_step 

I have tried a few alternatives. As mentioned above, I suspect the issue may be a lack of preprocessing of the feature data. I tried the built-in preprocessing function, but without luck; I get the same error as above from running the following:

toy_x_preproc <- imagenet_preprocess_input(toy_x)
model %>% fit(toy_x_preproc, toy_y, epochs = 1)

I have also verified that the code runs when I replace the built-in resnet with a simple convnet (still using the functional API):

#define & compile model
model2_input <- layer_input(shape = c(p, p, 3))
model2_output <- model2_input %>% 
  layer_conv_2d(filters = 25, kernel_size = c(2,2), activation = "relu") %>% 
  layer_max_pooling_2d(pool_size = c(2, 2)) %>% 
  layer_flatten() %>% 
  layer_dense(units = 1, activation = 'sigmoid')  
model2 <- keras_model(model2_input, model2_output)
model2 %>% compile(
  loss = "binary_crossentropy",
  optimizer = "adam")

#train on "raw" toy_x -- works
model2 %>% fit(toy_x, toy_y, epochs = 1)

This runs without an error. It also works if I rerun the entire chunk but fit on toy_x_preproc instead.

Thank you for reading - and I will greatly appreciate any help.

CodePudding user response:

Your model output has shape shape(NULL, 1, 1, 1), while your training labels have shape shape(NULL). If you're building a custom top, you probably want to include a dimensionality-reduction layer in your model, e.g. layer_flatten(), layer_global_max_pooling_2d(), or something else that reduces the rank of the output. You probably also want to call k_expand_dims(), or manually add a trailing dimension of size 1 to your training labels, to take them from shape(batch_size) to shape(batch_size, 1).
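
Putting that together, here is a minimal sketch of one possible fix for the toy example above (it assumes layer_global_average_pooling_2d() as the dimensionality-reduction step; layer_flatten() would work just as well, and toy_y_mat is only an illustrative name):

#define & compile a fixed model: pool away the two unit spatial
#dimensions of the resnet output before the dense head
input <- layer_input(shape = c(p, p, 3))
N1 <- application_resnet50(weights = NULL,
                           input_tensor = input,
                           include_top = FALSE)
output <- input %>%
  N1() %>%                                      #shape(NULL, 1, 1, 2048)
  layer_global_average_pooling_2d() %>%         #shape(NULL, 2048)
  layer_dense(units = 1, activation = 'sigmoid') #shape(NULL, 1)
model <- keras_model(input, output)
model %>% compile(loss = "binary_crossentropy", optimizer = "adam")

#give the labels a matching trailing dimension: shape(100) -> shape(100, 1)
toy_y_mat <- array(toy_y, c(length(toy_y), 1))
model %>% fit(toy_x, toy_y_mat, epochs = 1)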

Side note: the error printed by default is truncated if the call stack is large. You can still get the full error message by calling reticulate::py_last_error(), which usually contains the requisite clue. For example, immediately after hitting the error in the fit() call, running purrr::walk(reticulate::py_last_error(), cat) prints a long traceback, which ends with this line:

  ValueError: `logits` and `labels` must have the same shape, received ((None, 1, 1, 1) vs (None, 1)).
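
As a self-contained snippet (run immediately after the failing fit() call, in the same R session; err is only an illustrative name):

#retrieve the full Python error from the last failed call
err <- reticulate::py_last_error()
#print each component of the error object
purrr::walk(err, cat)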