I am trying to iteratively calculate n in the equation f = n^2 so that f gets close to 3 with a precision of 3 decimal places, approaching from below. For illustration, here is code which successfully does what I want. The problem with this code is that instead of f = n^2 I am actually starting an external script, and with that approach the entire iteration becomes extremely slow and therefore cannot be used: it requires too many cycles to converge to the given precision. So I am searching for a faster way to achieve convergence; by "faster" I mean converging in fewer cycles. Someone recommended a binary search algorithm to me. Could you help me create one for this little example f = n^2?
n = 1;                % initial value
f = n^2;              % function to calculate
tol = 3;              % target value for f
accuracy = 0.0000001; % step size
while f <= tol
    n = n + accuracy;
    f = n^2;
end
n = n - accuracy;     % step back to the last n with f <= tol
f = n^2;
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])
Update: Here is my attempt at setting up the function and its derivative for the Newton-Raphson algorithm. Unfortunately, it is probably incorrect, so I would like to ask for your help.
n = 7.76; % initial guess
f_error = [7.73 9 12.404 7.659 8.66];
df_error = diff(f_error)
xNew = n - f_error./df_error
Answer:
Newton-Raphson Method
f_error(x) = x^(1/2) - N
f'_error(x) = (1/2)*x^(-1/2)
xNew = x - f_error(x)/f'_error(x)
repeat:
xNew = x - (x^(1/2) - N) / ((1/2)*x^(-1/2))
This converges to the targeted value far quicker than trying to scan every floating-point-representable value in between:
n = 3;             % initial guess
nNew = n;
val = 3;           % target value N
f = val^2;         % function to calculate
accuracy = 0.0001; % convergence tolerance on the step size
error = 10000;
count = 0;
while abs(error) > accuracy
    nNew = n - (n^(1/2) - val)/((1/2)*n^(-1/2));
    error = nNew - n;
    n = nNew;
    count = count + 1;
end
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])
output:
c = 5 <--- only 5 iterations
n = 9
f = 9
Your method fails to complete the calculation within the time limit of the https://octave-online.net server:
n = 1;                % initial value
f = n^2;              % function to calculate
tol = 3;              % target value for f
accuracy = 0.0000001; % step size
count = 0;
while f <= tol
    n = n + accuracy;
    f = n^2;
    count = count + 1;
end
n = n - accuracy;
f = n^2;
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(f,10)])
output:
!!! OUT OF TIME !!!
c = 847309
n = 1.0847309
f = 1.176641126
It moved only 0.1766 within 5 seconds, so it needs at least 1-2 minutes to complete, while the Newton-Raphson method took only 5 iterations. The n = n + accuracy method also breaks down for big n values because of rounding error: if the gap to the next binary-representable floating-point value above n is bigger than accuracy, then n + accuracy rounds back to n and the loop gets stuck infinitely.
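A quick way to see this stall (a minimal Octave sketch; eps(n) returns the spacing between n and the next representable double, and the value n = 1e10 is chosen only for illustration):

n = 1e10;               % a large n
accuracy = 0.0000001;
disp(eps(n))            % ~1.907e-06, already larger than accuracy
disp(n + accuracy == n) % prints 1: the addition rounds away, the loop never advances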
The Newton-Raphson method above uses a square root. Many x86 CPUs have a dedicated sqrt instruction, so it is most likely hardware-accelerated, but you can also find the square root with another Newton-Raphson iteration:
f_error(x) = x^2 - N
f'_error(x) = 2*x
xNew = x - f_error(x)/f'_error(x)
xNew = x - (x^2 - N)/(2*x) = (1/2)*x + (1/2)*(N/x)
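As a minimal Octave sketch of that iteration (the target N = 3 and the starting guess are only illustrative):

N = 3;  % find sqrt(3)
x = 3;  % initial guess
for k = 1:6
    x = (1/2)*x + (1/2)*(N/x); % Newton step for x^2 - N = 0
end
disp(num2str(x,10)) % 1.732050808 = sqrt(3)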
Note that it still has a division for N/x. If the CPU has no division instruction, then you can use yet another Newton-Raphson iteration for the reciprocal:
f_error(x) = 1/x - N
f'_error(x) = -1/(x*x)
xNew = x - f_error(x)/f'_error(x)
xNew = x*(2-N*x)
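A minimal sketch of this division-free reciprocal iteration (computing 1/N with multiplications only; the starting guess must lie in (0, 2/N) for it to converge):

N = 3;   % compute 1/3 without any division
x = 0.3; % initial guess in (0, 2/N)
for k = 1:5
    x = x*(2 - N*x); % error squares each pass: quadratic convergence
end
disp(num2str(x,10)) % 0.3333333333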
Applying this inside both of the other methods would give a fully multiplication-based version, but it would probably be slower: the iteration counts of the nested loops multiply (roughly the cube of a single loop's count), so it could need hundreds of iterations in total.
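For completeness, the binary search (bisection) asked about in the question could look like this minimal sketch for f = n^2 close to 3 from below; it halves the bracket [lo, hi] each pass, so narrowing an interval of width 1 down to 1e-7 takes about 24 iterations instead of the roughly 7 million steps of the linear scan:

lo = 1; hi = 2;           % bracket the root: 1^2 < 3 < 2^2
accuracy = 0.0000001;
count = 0;
while hi - lo > accuracy
    n = (lo + hi)/2;
    if n^2 <= 3           % here n^2 stands in for the external script's output
        lo = n;           % keep the estimate below the target, as required
    else
        hi = n;
    end
    count = count + 1;
end
n = lo;                   % largest tested n with n^2 <= 3
disp(['c = ' num2str(count,10)])
disp(['n = ' num2str(n,10)])
disp(['f = ' num2str(n^2,10)])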