I have this data set:
var_1 = rnorm(1000,1000,1000)
var_2 = rnorm(1000,1000,1000)
var_3 = rnorm(1000,1000,1000)
sample_data = data.frame(var_1, var_2, var_3)
I would like to split this data set into 10 different datasets (each containing 100 rows) and then upload them onto a server.
I know how to do this by hand:
sample_1 = sample_data[1:100,]
sample_2 = sample_data[101:200,]
sample_3 = sample_data[201:300,]
# etc.
library(DBI)
#establish connection (my_connection)
dbWriteTable(my_connection, SQL("sample_1"), sample_1)
dbWriteTable(my_connection, SQL("sample_2"), sample_2)
dbWriteTable(my_connection, SQL("sample_3"), sample_3)
# etc
Is there a way to do this "quicker"?
I thought of a general approach, but I am not sure how to write the code correctly:
for (i in seq(1, 1000, by = 100)) {
    j = i + 99
    sample_i = sample_data[i:j, ]
    dbWriteTable(my_connection, SQL("sample_i"), sample_i)  # the table name needs to change on each iteration
}
Can someone please help me with this?
Thank you!
CodePudding user response:
Here's an example using the SQLite database engine. We'll start with your sample data set:
var_1 = rnorm(1000,1000,1000)
var_2 = rnorm(1000,1000,1000)
var_3 = rnorm(1000,1000,1000)
sample_data = data.frame(var_1, var_2, var_3)
Now we'll break your large data frame into a list of ten 100-row data frames using split():
# Integer division of the zero-based row index by 100 puts
# rows 1-100 in group 0, rows 101-200 in group 1, and so on
list_of_dfs <- split(
  sample_data, (seq(nrow(sample_data)) - 1) %/% 100
)
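As a quick, optional sanity check, you can confirm that the split produced ten pieces of 100 rows each:
length(list_of_dfs)        # should be 10
sapply(list_of_dfs, nrow)  # should be 100 for every element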
Next, we'll create a vector with the names the tables should have in the database. Here, I'm just making a simple vector with the names sample_1, sample_2, and so on:
table_names <- paste0("sample_", 1:10)
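One optional variation: if you ever create more than nine tables and care about how they sort in a database browser, zero-padded names avoid sample_10 sorting before sample_2:
# Optional: zero-padded names sort correctly past single digits
table_names <- sprintf("sample_%02d", 1:10)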
Now we're ready to write to the database. We'll make a connection and then iterate over the vector of table names and the list of data frames simultaneously with purrr's map2(), calling dbWriteTable() each time:
library(DBI)
library(purrr)

# Open (or create) a local SQLite database file
connection <- dbConnect(RSQLite::SQLite(), dbname = "test.db")

# Write each data frame to the table with the matching name
map2(
  table_names,
  list_of_dfs,
  function(x, y) dbWriteTable(connection, x, y)
)
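map2() is used here purely for its side effect, so it will also print a list of TRUE values at the console; purrr's walk2() has the same interface but returns invisibly. If you'd rather not depend on purrr at all, a base-R loop does the same job, and dbListTables() lets you confirm the upload. A minimal sketch against the same connection:
# Base-R alternative to map2(): loop over the indices
for (k in seq_along(table_names)) {
  dbWriteTable(connection, table_names[k], list_of_dfs[[k]])
}

dbListTables(connection)  # should list sample_1 through sample_10
dbDisconnect(connection)  # close the connection when finished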