I have a data frame with calibration curves for multiple samples (signal s as a function of concentration c):
cal <- data.frame(sample = c(rep("A", 8), rep("B", 8), rep("C", 8)),
                  c_std = rep(c(0, 1, 5, 10, 25, 50, 100, 200), 3),
                  s_std = c(40341, 24196, 13403, 6956, 3000, 1507, 312, 12,
                            40261, 24250, 13537, 6977, 2940, 1465, 304, 12,
                            40075, 24469, 13696, 7060, 2972, 1487, 307, 12))
Additionally, I have a dataframe with observations made on the same samples:
obs <- data.frame(sample = c("A", "B", "C"),
                  s_sample = c(1364, 4726, 521))
First, I fit a model for each calibration curve using drm() from the drc package:
model <- function(df) drc::drm(c_std ~ s_std, fct = drc::LL2.3(), data = df)
library(dplyr)
library(purrr)
library(tidyr)

cal_models <- cal %>%
  group_by(sample) %>%
  nest() %>%
  mutate(model = map(data, model)) %>%
  unnest(sample) %>%
  distinct(sample, model)
From the fitted models I now want to get predictions for the observations made on my samples. This is where I'm stuck. My idea was to merge the object holding the models with the observations by the sample ID and then apply predict() from the stats package with map, similar to how I fitted the models. I don't know how to do this, though. This is how I merge the data frames:
dat <- merge(cal_models, obs)
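Conceptually, I imagine something like this, but I'm not sure it is correct (untested, and I don't know how predict() wants the new data):
dat %>%
  mutate(pred = map2_dbl(model, s_sample,
                         ~predict(.x, data.frame(s_std = .y)))) %>%
  select(sample, s_sample, pred)
# if merge() drops the model list-column, dplyr::left_join(cal_models, obs, by = "sample")
# might be needed instead of merge()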
For a single sample I can get a prediction like this, but I would like to do it in one step for all samples:
cal_A <- subset(cal, sample == "A")
model_A <- drc::drm(c_std ~ s_std, data = cal_A, fct = drc::LL2.3())
predicted <- stats::predict(model_A, data.frame(s_std = obs$s_sample[1]))
predicted
CodePudding user response:
cal <- data.frame(sample = c(rep("A", 8), rep("B", 8), rep("C", 8)),
                  c_std = rep(c(0, 1, 5, 10, 25, 50, 100, 200), 3),
                  s_std = c(40341, 24196, 13403, 6956, 3000, 1507, 312, 12,
                            40261, 24250, 13537, 6977, 2940, 1465, 304, 12,
                            40075, 24469, 13696, 7060, 2972, 1487, 307, 12))
obs <- data.frame(sample = c("A", "B", "C"),
                  s_std = c(1364, 4726, 521))
Note that the observed signals go into a column named s_std here, matching the predictor used in the calibration fits.
library(purrr)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tidyr)
A single model fitted to the pooled calibration data predicts all three observations at once:
setNames(predict(drc::drm(c_std ~ s_std, fct = drc::LL2.3(), data = cal), obs), obs$sample)
#> A B C
#> 44.78515 18.20520 79.81111
Fitting one model per sample and predicting each sample's own observation:
cal %>%
  group_by(sample) %>%
  nest() %>%
  mutate(
    model = map(data, ~drc::drm(c_std ~ s_std, fct = drc::LL2.3(), data = .x)),
    pred = map2(model, sample, ~predict(.x, filter(obs, sample == .y)))
  ) %>%
  unnest(pred)
#> # A tibble: 3 x 4
#> # Groups: sample [3]
#> sample data model pred
#> <chr> <list> <list> <dbl>
#> 1 A <tibble [8 x 2]> <drc> 45.1
#> 2 B <tibble [8 x 2]> <drc> 18.1
#> 3 C <tibble [8 x 2]> <drc> 79.8
Created on 2022-03-07 by the reprex package (v2.0.0)
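If only the sample IDs and predicted concentrations are needed, the list-columns can be dropped afterwards; a small sketch, assuming the result of the pipeline above was stored in a variable called res (a name used here only for illustration):
res %>%
  ungroup() %>%
  select(sample, pred)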
CodePudding user response:
It's probably easiest to make a list of models (one per sample) from your calibration data, reusing the model() function you already defined:
models <- by(cal, INDICES = cal$sample, FUN = model)
Then mapply() predict over your obs data as follows:
mapply(FUN = function(x, y) predict(models[[x]], data.frame(y)),
       x = obs$sample,
       y = obs$s_sample)
A.Prediction B.Prediction C.Prediction
45.09688 18.07154 79.75994
If instead you want to add this to the existing data frame:
cbind(obs, prediction = mapply(FUN = function(x, y) predict(models[[x]], data.frame(y)),
                               x = obs$sample,
                               y = obs$s_sample))
sample s_sample prediction
A.Prediction A 1364 45.09688
B.Prediction B 4726 18.07154
C.Prediction C 521 79.75994
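The A.Prediction etc. row names are carried over from the names of the mapply() result; if they are unwanted, they can simply be reset after the cbind() (res below is just an illustrative name for the combined data frame):
res <- cbind(obs, prediction = mapply(FUN = function(x, y) predict(models[[x]], data.frame(y)),
                                      x = obs$sample,
                                      y = obs$s_sample))
rownames(res) <- NULL  # restore the default 1, 2, 3 row names
res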