I am trying to build test and train groups for cross-validation. I have a pool of 95 individual IDs in total and tried to get the task done like this:
# create 95 unique IDs as individuals
set.seed(1)
indv <- stringi::stri_rand_strings(95, 4)
# specify Kfold
n.folds <- 5
folds <- cut(1:length(indv), breaks = n.folds, labels = FALSE)
# randomise the folds
folds <- sample(folds, length(folds))
samples.train <- list()
samples.test <- list()
foldSet <- list()
kfold.df <- data.frame("IID" = indv)
for (f in 1:n.folds) {
  samples.train[[f]] <- indv[folds != f]
  samples.test[[f]]  <- indv[folds == f]
  # label an ID "test" if it falls in this fold's test set, otherwise "train"
  foldSet[[f]] <- ifelse(kfold.df$IID %in% samples.test[[f]], "test", "train")
  # combine foldSet with the data frame
  kfold.df[[f]] <- cbind(kfold.df, foldSet[[f]])
}
The goal is to prepare 5 testing and training sets of samples for the modelling, but I ran into this error message:
Error in data.frame(..., check.names = FALSE) :
arguments imply differing number of rows: 95, 2
Besides, the foldSet output is not as expected, although samples.train and samples.test are correct. Could you please help me get this loop working?
UPDATE:
Here is the for-loop without using wildcards in creating foldSet:
for (f in 1:n.folds) {
  samples.train[[f]] <- indv[folds != f]
  samples.test[[f]]  <- indv[folds == f]
  foldSet <<- ifelse(kfold.df$IID %in% samples.test[[f]], "test", "train")
  # combine foldSet with the data frame
  kfold.df <<- cbind(kfold.df, foldSet)
}
By executing the loop you will find kfold.df as a data frame listing all five folds' test/train random sets. I expect each iteration to create the testing and training sets corresponding to fold f, so that after five iterations I have access to each fold's training/testing sets for the next operations inside the loop, like kfold.df[foldSet == "train", "IID"]. I need this access because I want to use it to subset another, bigger matrix based on the train and test indv of each fold, preparing it for the regression model. That's why I used the wildcards for foldSet, so the loop could create everything by itself, but I failed to manage it.
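For illustration, assuming a hypothetical matrix big.mat whose row names are the indv IDs (big.mat is not part of the code above), the intended use inside the loop for each fold f would be roughly:
train.ids <- kfold.df[foldSet == "train", "IID"]
train.mat <- big.mat[rownames(big.mat) %in% train.ids, ]  # hypothetical bigger matrix, training rows of fold f
test.mat  <- big.mat[rownames(big.mat) %in% kfold.df[foldSet == "test", "IID"], ]  # testing rows of fold f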
CodePudding user response:
I think you may be overcomplicating things (which is something I do all the time...)
You don't need to go to great lengths to build what you're trying to make. This answer is broken down into three parts.
- Building the data frame you're looking for (I think!)
- Why you really don't need this data frame to be built
- Why not use what's already out there?
Part 1
If I understand correctly, this is about what you're looking for (less the strings). I also included how you might use it with your actual data.
library(tidyverse)
giveMe <- function(rowCt, nfolds){
  # set.seed(235) # seed removed once the function was working,
  # to restore the expected randomness
  folds <- cut(1:rowCt, breaks = nfolds, labels = F)
  # randomise the folds
  folds <- sample(folds, length(folds))
  # create the folds' sets: TRUE marks a training row for that fold
  kfold.df <- map_dfc(1:nfolds,
                      ~ifelse(folds != .x, T, F)) %>%
    setNames(., paste0("foldSet_", 1:nfolds)) %>%  # name each field
    add_column(IID = 1:rowCt, .before = 1)         # add indices to the left
  return(kfold.df)  # return a data frame
}
given <- giveMe(95, 5)
giveMore <- giveMe(nrow(iris), 5) # uses the built-in iris data set
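If you need a particular fold's training or testing rows (as in your loop), you can pull them straight out of that data frame. A minimal sketch using the given object created above, where TRUE marks a training row:
# training and testing indices for fold 1, from the data frame built above
train.idx <- given$IID[given$foldSet_1]   # rows where foldSet_1 is TRUE (train)
test.idx  <- given$IID[!given$foldSet_1]  # rows where foldSet_1 is FALSE (test)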
Part 2
You could just create your random fold sequence and use it with a model directly; you don't need to stack the folds in a data frame. You have to loop over the folds to fit the models anyway, so why not do it all at the same time?
folds <- sample(cut(1:nrow(iris), 5,  # no seed -- random on purpose
                    labels = F))
tellMe <- map(1:5,  # fit one model per fold, trained on the rows outside that fold
              ~lm(Sepal.Length~.,
                  iris[ifelse(folds != .x,
                              T, F),
                       1:4]))  # dropped 'Species' to sidestep the groups issue
To check out the model performance:
map_dfr(1:5, .f = function(x){
  y = tellMe[[x]]
  sigma = sigma(y)
  rsq = summary(y)$adj.r.squared
  c(sigma = sigma, rsq = rsq)
})
# # A tibble: 5 × 2
# sigma rsq
# <dbl> <dbl>
# 1 0.334 0.844
# 2 0.309 0.869
# 3 0.302 0.846
# 4 0.330 0.847
# 5 0.295 0.872
Predict and inspect the testing performance
# create a list of the predicted values from the test data
showMe <- map(1:5,
              ~predict(tellMe[[.x]],
                       iris[ifelse(folds == .x,
                                   T, F), 1:4]))
# Grab comparable metrics like those from the models
map_dfr(1:5,
        .f = function(x){
          A = iris[ifelse(folds == x, T, F), ]$Sepal.Length
          P = showMe[[x]]
          sigma = sqrt(sum((A - P)^2) / length(A))
          rsq = cor(A, P)^2
          c(sigma = sigma, rsq = rsq)
        })
# # A tibble: 5 × 2
# sigma rsq
# <dbl> <dbl>
# 1 0.232 0.919
# 2 0.342 0.774
# 3 0.366 0.884
# 4 0.250 0.906
# 5 0.384 0.790
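If you want a single cross-validated estimate rather than five rows, save that tibble and average over the folds. A minimal sketch (testPerf is just a name I'm introducing for the result of the map_dfr() call above):
# same test metrics as above, but stored so the folds can be averaged
testPerf <- map_dfr(1:5,
                    .f = function(x){
                      A = iris[ifelse(folds == x, T, F), ]$Sepal.Length
                      P = showMe[[x]]
                      c(sigma = sqrt(sum((A - P)^2) / length(A)),
                        rsq = cor(A, P)^2)
                    })
colMeans(testPerf)  # one pooled estimate of out-of-fold sigma and R-squared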
Part 3
Here I'm going to use the caret library. However, there are a lot of other options.
library(caret)
set.seed(1)
# split training and testing 70/30%
tr <- createDataPartition(iris$Species, p = .7, list = F)
# set up 5-fold cross-validation
trC <- trainControl(method = "cv", number = 5)
# train the model
fit <- train(Sepal.Length~., iris[tr, ],
             method = "lm",
             trControl = trC)
summary(fit)
# truncated results for the best model:
# Residual standard error: 0.2754 on 39 degrees of freedom
# Multiple R-squared: 0.9062, Adjusted R-squared: 0.8941
fit.p <- predict(fit, iris[-tr,])
postResample(fit.p, iris[-tr, ]$Sepal.Length)
# RMSE Rsquared MAE
# 0.2795920 0.8925574 0.2302402
If you want to see each of the folds' performance, you can do that, too.
fit$resample
# RMSE Rsquared MAE Resample
# 1 0.3629901 0.7911634 0.2822708 Fold1
# 2 0.3680954 0.8888947 0.2960464 Fold2
# 3 0.3508317 0.8394489 0.2709989 Fold3
# 4 0.2548549 0.8954633 0.1960375 Fold4
# 5 0.3396910 0.8661239 0.3187768 Fold5
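caret also pools those folds for you; the cross-validated metrics averaged over the five folds (plus their standard deviations across folds) are stored on the fit object:
fit$results
# columns include RMSE, Rsquared and MAE averaged across the folds,
# along with RMSESD, RsquaredSD and MAESD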