If I use the built-in function optim without specifying method=, which method does the algorithm use?
set.seed(93420) # Creating random data
x <- rnorm(500)
y <- rnorm(500) + 0.7 * x
data <- data.frame(x, y)
head(data) # Print head of data
# x y
# 1 -0.21492991 -0.06814474
# 2 -0.02217756 -0.84956484
# 3 0.55175788 0.11247758
# 4 -0.33581492 -0.86346317
# 5 -0.02489514 0.44307381
# 6 -1.44784931 -2.49701457
my_function <- function(data, par) { # Own function for residual sum of squares
  with(data, sum((par[1] + par[2] * x - y)^2))
}
optim_output <- optim(par = c(0, 1), # Applying optim
fn = my_function,
data = data)
CodePudding user response:
When in doubt about a function, see its documentation: ?optim
Arguments
method: The method to be used. See ‘Details’. Can be abbreviated.
Details
By default optim performs minimization, but it will maximize if control$fnscale is negative. optimHess is an auxiliary function to compute the Hessian at a later stage if hessian = TRUE was forgotten.
The default method is an implementation of that of Nelder and Mead (1965), that uses only function values and is robust but relatively slow. It will work reasonably well for non-differentiable functions.
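As a quick sanity check, a minimal sketch (rebuilding the question's data and RSS function, with the dropped + operators restored) shows that calling optim with no method gives the same fit as asking for "Nelder-Mead" explicitly:

```r
set.seed(93420)                        # same seed as in the question
x <- rnorm(500)
y <- rnorm(500) + 0.7 * x
data <- data.frame(x, y)

# Residual sum of squares for an intercept/slope pair
rss <- function(data, par) {
  with(data, sum((par[1] + par[2] * x - y)^2))
}

fit_default <- optim(par = c(0, 1), fn = rss, data = data)
fit_nm      <- optim(par = c(0, 1), fn = rss, data = data,
                     method = "Nelder-Mead")

# Nelder-Mead is deterministic from a fixed start, so both runs agree
identical(fit_default$par, fit_nm$par)  # TRUE
```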
CodePudding user response:
Using args, we may see that optim offers six choices for its method argument.
args(optim)
# function (par, fn, gr = NULL, ..., method = c("Nelder-Mead",
# "BFGS", "CG", "L-BFGS-B", "SANN", "Brent"), lower = -Inf,
# upper = Inf, control = list(), hessian = FALSE)
In the first lines of the function's code, match.arg is used:
method <- match.arg(method)
where
args(match.arg)
# function (arg, choices, several.ok = FALSE)
In the ?match.arg documentation, we may read:

…default argument matching will set arg to choices, this is allowed as an exception to the 'length one unless several.ok is TRUE' rule, and returns the first element.
which means that, since choices= was not supplied, optim by default uses the first element of method, which is "Nelder-Mead".
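To see this mechanism in isolation, here is a small sketch; pick_method is a hypothetical toy function, not part of optim, that merely declares its default vector the same way optim declares method:

```r
# Toy function mimicking how optim declares its method argument
pick_method <- function(method = c("Nelder-Mead", "BFGS", "CG",
                                   "L-BFGS-B", "SANN", "Brent")) {
  # With no argument supplied, match.arg collapses the full default
  # vector to its first element; with an argument, it partial-matches
  # against the choices.
  match.arg(method)
}

pick_method()        # "Nelder-Mead"
pick_method("SANN")  # "SANN"
```

Supplying an ambiguous abbreviation (e.g. "B", which matches both "BFGS" and "Brent") raises an error, which is why the documentation says the method "can be abbreviated" only as far as it stays unique.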