Optimization of a nonlinear function (spline approximation) with `Optim` gives low performance


I have a problem regarding the Optim package.

My optimization problem involves a non-linear function.

The objective function and its gradient are computed via spline evaluations.

Hence, I use `bs()` from the Splines2 package for both of them.

However, this process takes a lot of time, even though it produces the intended solution.

It takes as much as 4 times longer than its R counterpart.

(I can post the R code if it is needed)

I'm wondering whether Julia's Optim package performs poorly on this kind of problem, i.e. where the gradient contains non-algebraic (spline-based) parts.

Here is a working example of my code.

using Splines2
using LinearAlgebra
using Optim
using Distributions



## basic settings (this part is not relevant to the question)
n=200
p=100
q=100
r=10

X = randn(n, p)
Y = randn(n, q)
B = ones(p, r) ./ p
G = ones(q, r) ./ q
Z = X * B + Y * G
Z_c = Z .- mean(Z, dims = 1)
Z = Z_c ./ mapslices(std, Z_c, dims = 1)
theta = convert(Array{Float64}, LinRange(-3.8, 4, 5))
knots = convert(Array{Float64}, LinRange(-7.2, 7.5, 9))
coef = (3*diff(vcat(0, theta, 0)) ./ (knots[4:end] - knots[1:end-3]))[2:end-1]

function link_approx(x_v::Array)
    # B-spline basis (order 4) for the link function and its derivative (order 3)
    est = bs(x_v, knots = knots, order = 4)[:, 3:end-3] * theta
    der = bs(x_v, knots = knots, order = 3)[:, 3:end-3] * coef
    return Dict{Symbol, Array{Float64}}(:est => est, :der => der)
end


## Why does it take so long?

@time for j in 1:r
# for update G
    function grad!(storage, gamma)
      local linkfit
      linkfit = link_approx(Y*gamma)

      output = transpose(Y) * ((X*B[:,j] + linkfit[:est] - Z[:,j]) .* linkfit[:der]) ./ n

      for i in 1:size(Y)[2]
        storage[i] = output[i]
      end
    end
  
    function obj(gamma)
      return norm(Z[:,j] - X*B[:,j] - link_approx(Y*gamma)[:est], 2)^2/(2*n)
    end
  
    temp = optimize(obj, grad!, G[:,j], BFGS(), Optim.Options(iterations = Int(5e1)))
    G[:,j] = Optim.minimizer(temp)

end

This code takes about 5 seconds on my laptop (9th-gen Intel Core i7-9750H, 16 GB memory).
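(For reference, the 5 seconds comes from a bare @time at the top level, so it also includes compiling the closures. A rough sketch of a steadier measurement, assuming the BenchmarkTools package is available and using a hypothetical fit_column wrapper around a single column update:)

using BenchmarkTools

# wrapping one column update in a function and timing it with @btime
# excludes compilation and some global-scope overhead from the measurement
function fit_column(j)
    obj(gamma) = norm(Z[:,j] - X*B[:,j] - link_approx(Y*gamma)[:est], 2)^2/(2*n)
    function grad!(storage, gamma)
        linkfit = link_approx(Y*gamma)
        output = transpose(Y) * ((X*B[:,j] + linkfit[:est] - Z[:,j]) .* linkfit[:der]) ./ n
        for i in 1:size(Y)[2]
            storage[i] = output[i]
        end
    end
    return optimize(obj, grad!, G[:,j], BFGS(), Optim.Options(iterations = Int(5e1)))
end

@btime fit_column(1);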

I think grad! in my code is the problem.

As far as I know, I have to fill its storage entry by entry to avoid errors, so I used a for-loop.
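
(Though, if I understand broadcasting correctly, an in-place assignment should do the same thing as the explicit loop; a minimal variant of grad! for use inside the same loop:)

    function grad!(storage, gamma)
      linkfit = link_approx(Y*gamma)
      # .= writes the result into the preallocated storage in place,
      # element by element, just like the explicit for-loop above
      storage .= transpose(Y) * ((X*B[:,j] + linkfit[:est] - Z[:,j]) .* linkfit[:der]) ./ n
    end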

CodePudding user response:

You might also consider using another implementation of L-BFGS. I tried the following, which seems faster.

using JSOSolvers, ManualNLPModels

@time for j in 1:r
    function obj(gamma)
      return norm(Z[:,j] - X*B[:,j] - link_approx(Y*gamma)[:est], 2)^2/(2*n)
    end

    function grad!(storage, gamma)
        local linkfit
        linkfit = link_approx(Y*gamma)
    
        output = transpose(Y) * ((X*B[:,j] + linkfit[:est] - Z[:,j]) .* linkfit[:der]) ./ n
    
        for i in 1:size(Y)[2]
          storage[i] = output[i]
        end
        return storage
    end
  
    nvar = size(Y)[2]
    x0 = zeros(nvar)
    nlp = NLPModel(x0, obj, grad = grad!)
    stats = lbfgs(nlp, max_eval = Int(5e1))
    G[:,j] = stats.solution

end
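
Another thing that might help, independent of the solver: obj and grad! each call link_approx(Y*gamma), so every iteration evaluates the splines twice. With Optim you can compute the objective and gradient together through only_fg!. A sketch of the body of the for j in 1:r loop (same setup as the question, not benchmarked):

    # fg! evaluates the splines once and uses that single evaluation both for
    # the objective (when F !== nothing) and the gradient (when Gstor !== nothing)
    function fg!(F, Gstor, gamma)
      linkfit = link_approx(Y*gamma)
      resid = X*B[:,j] + linkfit[:est] - Z[:,j]      # = -(Z[:,j] - X*B[:,j] - est)
      if Gstor !== nothing
        Gstor .= transpose(Y) * (resid .* linkfit[:der]) ./ n
      end
      if F !== nothing
        return sum(abs2, resid) / (2*n)              # same value as obj(gamma)
      end
      return nothing
    end

    temp = optimize(Optim.only_fg!(fg!), G[:,j], BFGS(), Optim.Options(iterations = Int(5e1)))
    G[:,j] = Optim.minimizer(temp)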