Confused about the anonymous function in julia


I have this code:

using ForwardDiff: gradient
derivative(f, x) = gradient(x -> f(x[1]), [x])

I know gradient in ForwardDiff takes a vector as its argument, but what does this mean: (x -> f(x[1]))? I am confused by x[1].

CodePudding user response:

I am confused by x[1]

It means to take the first element of the vector x (for example, if x = [3.0, 5.0], then x[1] is 3.0).

This code doesn't really make much sense: the [x] as the last argument wraps the scalar x into a vector only so that x[1] can unwrap it again. You should probably just use e.g.:

julia> ForwardDiff.derivative(sin, 1.0)
0.5403023058681398
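
For comparison, here is what the gradient-based wrapper from the question returns for the same input (a minimal sketch; note that it yields a one-element vector rather than a scalar, which is another reason to prefer ForwardDiff.derivative for scalar functions):

julia> using ForwardDiff: gradient

julia> derivative(f, x) = gradient(x -> f(x[1]), [x])
derivative (generic function with 1 method)

julia> derivative(sin, 1.0)
1-element Vector{Float64}:
 0.5403023058681398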

CodePudding user response:

I am answering the question in your title, not the specific ForwardDiff API.

Anonymous functions allow you to define functions "on the fly", without them existing outside the scope where they are defined. They are typically used when another function asks for a function as a parameter. I personally employ them often when I need to use a function that is defined with many arguments but want to vary only one of them.

Take for example the following function:

foo(x,y) = x + y

And assume that you want to iterate just over its first parameter. You can use the map function whose first parameter is indeed a function:

map(x->foo(x,10),[1,2,3])

This calls the function foo with 1, then 2, and finally 3 as the first argument, keeping 10 as the second argument, producing [11, 12, 13].

This is equivalent to defining a new function foo2(x) = foo(x,10) and broadcasting over it:

foo2.([1,2,3])

Of course, in this simple example you could have just used default parameters or broadcast directly over the first parameter with foo.([1,2,3],10), but sometimes this is not possible, or you don't have control over the function definition to set default parameters, so anonymous functions can become handy. The sketch below puts these forms side by side.
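
A minimal sketch of the three equivalent forms, using foo and foo2 as defined above:

julia> foo(x, y) = x + y;

julia> foo2(x) = foo(x, 10);

julia> map(x -> foo(x, 10), [1, 2, 3])   # anonymous function
3-element Vector{Int64}:
 11
 12
 13

julia> foo2.([1, 2, 3]) == foo.([1, 2, 3], 10) == map(x -> foo(x, 10), [1, 2, 3])
true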

Another context where you want to use an anonymous function is when you are developing a library and one function requires a function as an argument, and you want to provide a default without explicitly defining yet another named function. For a real-world example, in the BetaML machine-learning toolkit I defined the Stochastic Gradient Descent constructor as:

SGD(; η = t -> 1/(1+t), λ = 2) = ...

where η is the learning rate as a function of the epoch.

I could have defined a new function learningRateAdjustmentDefault(t) = 1/(1+t) and set η=learningRateAdjustmentDefault in the SGD parameter definition, but using an anonymous function is more handy and transparent in these cases...
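
Here is a minimal sketch of this pattern (the body of SGD below is hypothetical and only returns its parameters; the real BetaML constructor does much more):

julia> function SGD(; η = t -> 1/(1+t), λ = 2)
           # a real optimiser would call η(epoch) at each epoch
           return η, λ
       end;

julia> η, λ = SGD();               # keep the default decaying rate

julia> η(1)                        # learning rate at epoch 1
0.5

julia> η, λ = SGD(η = t -> 0.01);  # caller overrides with a constant rate

julia> η(5)
0.01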
