Speeding up python computation time (solving differential equations)


Some time ago I was assigned a project to find the position over time of a simulated pendulum on a freely moving cart. I managed to derive some equations describing this motion and tried to simulate it in Python to make sure they are correct. The program I made runs and plots the position correctly, but it is quite slow, especially when I try to plot it with higher accuracy. How can I improve this program? Any tips are greatly appreciated.

The program:

from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt

# These values can be changed
masstot = 5
mass = 2
g= 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1

def afad(lah): # Find first constant
    wan = 1
    vuk = 0.1
    oug = 1
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)

intsolua = sum(solua) 

def kfad(solua, wan, vuk): # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)

def deg(t, solua, vuk, ksol): # Find angle of pendulum relative to time
    res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t))
    return(res)

def chandeg(t, solua, vuk, ksol): # Find velocity of pendulum relative to time
    res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2))))
    return(res)

xs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length

def dinte1(deg, bond, solua, vuk, ksol): # used to plot angle at a certain time
    res = []
    for x in (bond):
        res.append(deg(x, solua, vuk, ksol))
    return res

def dinte2(chandeg, bond, solua, vuk, ksol): # used to plot angular velocity at a certain time
    res = []
    for x in (bond):
        res.append(chandeg(x, solua, vuk, ksol))
    return res

def dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot ): # used to plot acceleration of system at certain time
    res = []
    for x in (bond):
        res.append(a(x, mass, l, solua, vuk, ksol, g, masstot))
    return res

def a(t, mass, l, solua, vuk, ksol, g, masstot): # define acceleration of system to time
    return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot)

def j(t):
    return sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot))

def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

def int2(f, bond): # Integrates system acceleration twice to get position relative to time
    res = []
    for x in (bond):
        res.append(f(x))
        print(res)
    return res

plt.plot(xs, int2(f, xs)) # This part of the program runs quite slowly
#plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot))
#plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol))
#plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol))
plt.show()

I ran the program and it works correctly, just very slowly. Disclaimer: I am new to Python and SciPy, so it's probably a very inefficient program.

CodePudding user response:

You can try to calculate values only once and then reuse them.

from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt

# These values can be changed
masstot = 5
mass = 2
g = 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1

def afad(lah): # Find first constant
    wan = 1
    vuk = 0.1
    oug = 1
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)[0]

def kfad(solua, wan, vuk): # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)



res_a = {}
def a(t): # define acceleration of system to time
    if t in res_a:
        return res_a[t]

    vuk_t = vuk * t
    macro1 = 2 * solua * np.exp(vuk_t)
    ksol_t = ksol * t
    sin_ksol_t = np.sin(ksol_t)
    deg = 2 * np.arctan(solua * np.exp(-1 * vuk_t) * sin_ksol_t)
    chandeg = macro1 * (-vuk * sin_ksol_t + ksol * np.cos(ksol_t)) / (np.exp(2*vuk_t) + ((solua**2) * sin_ksol_t**2))

    res = (((l * (chandeg**2)) + (g * np.cos(deg))) * mass * np.sin(deg) / masstot)

    res_a[t] = res
    return res

res_j = {}
def j(t):
    if t in res_j:
        return res_j[t]

    res = a(t)

    res_j[t] = res
    return res

def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

def int2(bond):
    res = []
    for x in (bond):
        res.append(f(x))
        print(res)

    return res

xs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length
plt.plot(xs, int2(xs)) # This part of the program runs quite slowly
plt.show()

This example looks to be 6x faster.
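
The same caching can also be done without managing the dictionaries by hand: for scalar arguments, functools.lru_cache memoizes results keyed on the argument. A minimal sketch of that variant, assuming the constants (solua, ksol, vuk, l, g, mass, masstot) and the numpy import are defined as in the code above:

from functools import lru_cache

@lru_cache(maxsize=None)  # quad passes plain floats, which are hashable, so they can key the cache
def a(t): # acceleration of the system at time t, same formula as above
    vuk_t = vuk * t
    sin_ksol_t = np.sin(ksol * t)
    deg = 2 * np.arctan(solua * np.exp(-vuk_t) * sin_ksol_t)
    chandeg = (2 * solua * np.exp(vuk_t) * (-vuk * sin_ksol_t + ksol * np.cos(ksol * t))
               / (np.exp(2 * vuk_t) + (solua**2) * sin_ksol_t**2))
    return ((l * chandeg**2) + (g * np.cos(deg))) * mass * np.sin(deg) / masstot

j = a  # j(t) is just a(t) here, so it can share the same cache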

CodePudding user response:

An alternative to the solution of @IvanPerehiniak is to use a JIT compiler like Numba, which applies many low-level optimizations that the CPython interpreter does not. Indeed, numerically intensive pure-Python code running on CPython is generally very inefficient. NumPy can provide relatively good performance for large arrays, but it is very slow for small ones, and this code uses a lot of small arrays and pure-Python scalar operations. Numba is not a silver bullet though: it only mitigates much of the overhead of NumPy and CPython. You still have to optimize the code further if you want it to be very fast. Fortunately, this method can be combined with the one of @IvanPerehiniak (though the cache dictionary would not need to be global, which is cumbersome in many cases). Note that Numba can pre-compute global constants for you. Compilation happens during the first call, or at decoration time when the function has a user-defined explicit signature.

import numba as nb
from scipy.integrate import quad
from scipy.optimize import fsolve
import numpy as np
import matplotlib.pyplot as plt
import scipy

# These values can be changed
masstot = 5.0
mass = 2.0
g= 9.8
l = 9.8
wan = (g/l)**(1/2)
vuk = 0.1
oug = 1.0

@nb.njit
def afad(lah): # Find first constant
    wan = 1.0
    vuk = 0.1
    oug = 1.0
    kan = (12*(lah**4)*((3*(vuk**2)-(wan**2))))-((16*((wan**2)-(vuk**2))-(5*oug**2))*(lah**2))+(4*(oug**2))
    return (kan)

solua = fsolve(afad, 1)

intsolua = np.sum(solua) 

@nb.njit
def kfad(solua, wan, vuk): # Find second constant
    res = ((wan**2)-(vuk**2)-((2*(solua**2)*((2*(vuk**2))+(wan**2)))/((5*(solua**2))+4)))**(1/2)
    return (res)

ksol = kfad(solua, wan, vuk)

@nb.njit
def deg(t, solua, vuk, ksol): # Find angle of pendulum relative to time
    res = 2*np.arctan(solua*np.exp(-1*vuk*t)*np.sin(ksol*t))
    return(res)

@nb.njit
def chandeg(t, solua, vuk, ksol): # Find velocity of pendulum relative to time
    res = (((-2*solua*vuk*np.exp(vuk*t)*np.sin(ksol*t))+(2*solua*ksol*np.exp(vuk*t)*np.cos(ksol*t)))/(np.exp(2*vuk*t)+((solua**2)*(np.sin(ksol*t)**2))))
    return(res)

xs = np.linspace(0, 60, 20) # Value can be changed to alter plotting accuracy and length

@nb.njit
def dinte1(deg, bond, solua, vuk, ksol): # used to plot angle at a certain time
    res = []
    for x in (bond):
        res.append(deg(x, solua, vuk, ksol))
    return res

@nb.njit
def dinte2(chandeg, bond, solua, vuk, ksol): # used to plot angular velocity at a certain time
    res = []
    for x in (bond):
        res.append(chandeg(x, solua, vuk, ksol))
    return res

@nb.njit
def dinte(a, bond, mass, l, solua, vuk, ksol, g, masstot ): # used to plot acceleration of system at certain time
    res = []
    for x in (bond):
        res.append(a(x, mass, l, solua, vuk, ksol, g, masstot))
    return res

@nb.njit
def a(t, mass, l, solua, vuk, ksol, g, masstot): # define acceleration of system to time
    return (((mass*l*(chandeg(t, solua, vuk, ksol)**2))+(mass*g*np.cos(deg(t, solua, vuk, ksol))))*np.sin(deg(t, solua, vuk, ksol))/masstot)

# See: https://stackoverflow.com/questions/71244504/reducing-redundancy-for-calculating-large-number-of-integrals-numerically/71245570#71245570
@nb.cfunc('float64(float64)')
def j(t):
    return np.sum(a(t, mass, l, intsolua, vuk, ksol, g, masstot))
j = scipy.LowLevelCallable(j.ctypes)

# Cannot be jitted due to "quad"
def f(ub):
    return quad(lambda ub: quad(j, 0, ub)[0], 0, ub)[0]

# Cannot be jitted due to "f" not being jitted
def int2(f, bond): # Integrates system acceleration twice to get position relative to time
    res = []
    for x in (bond):
        res.append(f(x))
        print(res)
    return res

plt.plot(xs, int2(f, xs)) # This part of the program runs quite slowly
#plt.plot(xs, dinte(a, xs, mass, l, solua, vuk, ksol, g, masstot))
#plt.plot(xs, dinte2(chandeg, xs, solua, vuk, ksol))
#plt.plot(xs, dinte1(deg, xs, solua, vuk, ksol))
plt.show()
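
The scipy.LowLevelCallable wrapper above matters because quad otherwise calls back into a Python function at every evaluation point, which is costly; wrapping a compiled nb.cfunc lets quad call the machine code directly. A standalone sketch of the same pattern with a hypothetical integrand, only to show the wiring:

import numba as nb
import scipy
from scipy.integrate import quad

# A cfunc compiled with the float64(float64) signature can be wrapped in a
# LowLevelCallable so quad evaluates it without any Python-call overhead.
@nb.cfunc('float64(float64)')
def integrand(x):  # hypothetical integrand, not taken from the code above
    return x * x

result, error = quad(scipy.LowLevelCallable(integrand.ctypes), 0.0, 1.0)  # result is about 1/3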

Here are results:

Initial solution:              35.5 s
Ivan Perehiniak's solution:     5.9 s
This solution (first run):      3.1 s
This solution (second run):     1.5 s

This solution is slower the first time the script is run because the JIT needs to compile all the functions. Subsequent calls to the functions are significantly faster; in fact, int2 takes only 0.5 seconds on my machine the second time.
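
If that one-off compilation cost matters (for example when the script is run many times), Numba can compile eagerly when given an explicit signature, and cache=True stores the compiled code on disk so later runs skip compilation entirely. A minimal sketch with an illustrative function, not taken from the code above:

import numba as nb
import numpy as np

# The explicit signature makes Numba compile at decoration time instead of on the
# first call, and cache=True writes the compiled result to disk (a __pycache__
# folder next to the script) so subsequent runs of the script reuse it.
@nb.njit('float64(float64, float64)', cache=True)
def damped_sine(t, vuk):  # illustrative helper, not part of the solution above
    return np.exp(-vuk * t) * np.sin(t)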
