How to sample (with replacement and weights) more than .Machine$integer.max rows from a data.table?


I want to draw a sample larger than R's integer limit (available via .Machine$integer.max) from a data.table. Here is what I have tried:

library(bit64)
library(data.table)
library(dplyr)
irisdt <- as.data.table(iris)
test <- slice_sample(irisdt, n = .Machine$integer.max + 100, weight_by = Sepal.Length, replace = TRUE)
Error in sample.int(n, size, prob = wt, replace = TRUE) : 
  invalid 'size' argument
In addition: Warning message:
In sample.int(n, size, prob = wt, replace = TRUE) :
  NAs introduced by coercion to integer range
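
The warning suggests that the requested size is coerced to a regular integer and overflows; the coercion can be reproduced on its own:

> as.integer(.Machine$integer.max + 100)
[1] NA
Warning message:
NAs introduced by coercion to integer range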

If I convert the n argument of slice_sample to integer64, I get an empty sample.

> test <- slice_sample(irisdt, n = as.integer64(.Machine$integer.max + 100),
                       weight_by = Sepal.Length, replace = TRUE)
> nrow(test)
[1] 0

I cannot take several smaller samples, which would be the obvious solution to the problem.

Do you have any other ideas? Thank you!

CodePudding user response:

I think there are two problems here:

  • The first is, as @Waldi commented, data.table's row-number limitation.
  • The second comes from the sample function, whose size argument must not exceed .Machine$integer.max; see the documentation:

Non-integer positive numerical values of n or x will be truncated to the next smallest integer, which has to be no larger than .Machine$integer.max.

You can try any size less than or equal to .Machine$integer.max:

irisdt[sample(.N, .Machine$integer.max - 2e9, replace = TRUE), ]

That works for me (subtracting 2e9 to stay within memory limits).
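
If you also need the sampling weights from the question, they can be passed through sample()'s prob argument while staying under the same size limit. A minimal sketch along the same lines, using Sepal.Length as the weight as in the question (the weights do not need to sum to 1):

# Draw weighted row indices, keeping the size below .Machine$integer.max
idx <- sample(nrow(irisdt), .Machine$integer.max - 2e9, replace = TRUE,
              prob = irisdt$Sepal.Length)
test <- irisdt[idx]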

CodePudding user response:

As data.table doesn't allow more than .Machine$integer.max rows, you could as a workaround use arrow with dplyr and furrr:

library(bit64)
library(data.table)

library(arrow)
library(dplyr)
library(furrr)

irisdt <- as.data.table(iris)

# Split job
target <- .Machine$integer.max + 1000
split <- 100

# Distribute calculations
numcalc <- rep(round(target/split), split)
# put the rounding remainder into the last chunk so the chunk sizes sum to target
numcalc[split] <- numcalc[split] + target - sum(numcalc)

plan(multisession, workers = availableCores() - 1)

# Generate files in parallel
numcalc %>% furrr::future_iwalk(~{
  test <- irisdt %>% slice_sample(n = .x, weight_by = Sepal.Length, replace = TRUE)
  write_dataset(test, paste0('D:/test/test', .y, '.parquet'), format = 'parquet')
}, .options = furrr_options(seed = TRUE))

# Open dataset
ds <- open_dataset('D:/test',format='parquet')
ds
#FileSystemDataset with 100 Parquet files
#Sepal.Length: double
#Sepal.Width: double
#Petal.Length: double
#Petal.Width: double
#Species: dictionary<values=string, indices=int32>

result <- ds %>% group_by(Species) %>% summarize(n=n()) %>% collect() 

result
# A tibble: 3 x 2
#  Species            n
#  <fct>          <int>
#1 virginica  807049123
#2 versicolor 727198323
#3 setosa     613237201


sum(result$n)-.Machine$integer.max
#[1] 1000
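
The combined dataset can also be queried lazily with further dplyr verbs before collecting. A sketch of one such query (a hypothetical follow-up, not part of the workaround itself; whether a given expression is pushed down to arrow depends on the arrow version):

# Aggregate on disk, bringing only the small summary into memory
ds %>%
  group_by(Species) %>%
  summarize(mean_sepal = mean(Sepal.Length)) %>%
  collect()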