I needed to write a program that builds the Lagrange interpolation polynomial g(x) for the function f(x) = exp(x).
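For reference, the code below evaluates the standard Lagrange form of the interpolating polynomial,

g(t) = \sum_{j=0}^{n-1} y_j \prod_{i \ne j} \frac{t - x_i}{x_j - x_i},

which by construction reproduces f exactly at the nodes x_j.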
import math
n = 10
x = []
y = []
def f(x):
    return math.exp(x)
def lagranz(x, y, t):
    z = 0
    for j in range(len(y)):
        p1 = 1  # numerator:   product over i != j of (t - x[i])
        p2 = 1  # denominator: product over i != j of (x[j] - x[i])
        for i in range(len(x)):
            if i != j:
                p1 *= (t - x[i])
                p2 *= (x[j] - x[i])
        z = z + y[j] * p1 / p2
    return z
# uniform grid of n nodes on [-1, 1]
print('Uniform grid')
for i in range(1, n + 1):
    temp = -1 + 2 * (i - 1) / (n - 1)
    x.append(temp)
    y.append(f(temp))
ynew = [lagranz(x, y, i) for i in x]
# error at each node
print('#\tx\tf(x)\tg(x)\terror')
for i in range(0, n):
    print(i, '\t', x[i], '\t', y[i], '\t', ynew[i], '\t', ynew[i] - y[i])
Now I need to investigate the behavior of the error
Δg(x) = g(x) - f(x).
The problem is that at the interpolation nodes the error is exactly zero.
I need to investigate the error on a denser grid, but I don't understand how to do this.
CodePudding user response:
Just evaluate the interpolant at points other than the nodes:
import math
def f(x):
    return math.exp(x)
def lagranz(x, y, t):
    z = 0
    for j in range(len(y)):
        p1 = 1
        p2 = 1
        for i in range(len(x)):
            if i != j:
                p1 *= (t - x[i])
                p2 *= (x[j] - x[i])
        z = z + y[j] * p1 / p2
    return z
# This is the grid where you *define* the interpolant
n = 10
xnode = []
fnode = []
for i in range(1, n + 1):
    temp = -1 + 2 * (i - 1) / (n - 1)
    xnode.append(temp)
    fnode.append(f(temp))
# This is where you calculate interpolant values
# at arbitrary points (here a fixed list of them)
xs = [0.5, 1.5, 3.0]
finter = [lagranz(xnode, fnode, x) for x in xs]
# Benchmark
for (x, y) in zip(xs, finter):
    print(x, y, f(x), f(x) - y)
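If the goal is to study the error away from the nodes (as in the question), the same idea works with a denser uniform grid inside the node interval [-1, 1]; the grid size m = 101 below is an arbitrary illustrative choice, it only needs to be larger than n.
# denser uniform grid on [-1, 1]; m = 101 is an arbitrary choice
m = 101
xdense = [-1 + 2 * k / (m - 1) for k in range(m)]
errors = [lagranz(xnode, fnode, t) - f(t) for t in xdense]
print('max |g(x) - f(x)| on the dense grid:', max(abs(e) for e in errors))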
You may also define a closure to make evaluating the interpolant clearer:
finter = lambda x: lagranz(xnode, fnode, x)
for x in xs:
    print(x, finter(x), f(x), finter(x) - f(x))
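With the closure, the same kind of dense-grid check stays compact; as one illustrative choice, look at the midpoints between neighbouring nodes, where the error is no longer forced to zero:
# error at the midpoints between adjacent nodes (illustrative choice)
xmid = [(a + b) / 2 for a, b in zip(xnode, xnode[1:])]
for x in xmid:
    print(x, finter(x) - f(x))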