Modified Secant Method



Figure: The first two iterations of the secant method. The red curve shows the function f, and the blue lines are the secants. For this particular case, the secant method will not converge to the visible root.

In numerical analysis, the secant method is a root-finding algorithm that uses a succession of roots of secant lines to better approximate a root of a function f. The secant method can be thought of as a finite-difference approximation of Newton's method. However, the secant method predates Newton's method by over 3000 years.[1]

The method

The modified secant method is one of the open (non-bracketing) numerical methods, and it relies on a single starting point (initial guess). It is a modified or improved version of the regular secant method: it asks for only one initial guess and a small (fractional) constant, and it generally converges to the true root faster than the regular secant method. At first glance the secant method may seem similar to the linear interpolation (false position) method, but there is a major difference between the two: in the secant method, the two starting points do not have to give function values of opposite sign. The secant method is therefore not a bracketing method but an open method.
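A common way to write the modified secant update (the Chapra-style form, quoted here for reference rather than taken from the text above) perturbs the single guess x_i by a small fraction δ:

x_{i+1} = x_i - \frac{\delta x_i f(x_i)}{f(x_i + \delta x_i) - f(x_i)}.

A minimal Python sketch of that iteration follows; the function name, tolerance, and stopping rule are illustrative assumptions, not part of the original notes.

```python
def modified_secant(f, x0, delta=0.01, tol=1e-8, max_iter=50):
    """Modified secant method: a single initial guess x0 plus a small
    fractional perturbation delta used to approximate the derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        # Finite-difference slope built from the perturbed point x + delta*x
        slope = (f(x + delta * x) - fx) / (delta * x)
        x_new = x - fx / slope
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, modified_secant(lambda x: x**2 - 612, 10.0) approximates the positive root of x² − 612 (about 24.7386).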

The secant method is defined by the recurrence relation

x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})} = \frac{x_{n-2} f(x_{n-1}) - x_{n-1} f(x_{n-2})}{f(x_{n-1}) - f(x_{n-2})}.

As can be seen from the recurrence relation, the secant method requires two initial values, x0 and x1, which should ideally be chosen to lie close to the root.

Derivation of the method

Starting with initial values x0 and x1, we construct a line through the points (x0, f(x0)) and (x1, f(x1)), as shown in the picture above. In point–slope form, the equation of this line is

y = \frac{f(x_1) - f(x_0)}{x_1 - x_0} (x - x_1) + f(x_1).

The root of this linear function, that is, the value of x such that y = 0, is

x = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)}.

We then use this new value of x as x2 and repeat the process, using x1 and x2 instead of x0 and x1. We continue this process, solving for x3, x4, etc., until we reach a sufficiently high level of precision (a sufficiently small difference between xn and xn−1):

x_2 = x_1 - f(x_1) \frac{x_1 - x_0}{f(x_1) - f(x_0)},
x_3 = x_2 - f(x_2) \frac{x_2 - x_1}{f(x_2) - f(x_1)},
  \vdots
x_n = x_{n-1} - f(x_{n-1}) \frac{x_{n-1} - x_{n-2}}{f(x_{n-1}) - f(x_{n-2})}.

Convergence

The iterates x_n of the secant method converge to a root of f if the initial values x_0 and x_1 are sufficiently close to the root. The order of convergence is φ, where

\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618

is the golden ratio. In particular, the convergence is superlinear, but not quite quadratic.

This result only holds under some technical conditions, namely that f be twice continuously differentiable and the root in question be simple (i.e., with multiplicity 1).

If the initial values are not close enough to the root, then there is no guarantee that the secant method converges. There is no general definition of 'close enough', but the criterion has to do with how 'wiggly' the function is on the interval [x_0, x_1]. For example, if f is differentiable on that interval and there is a point where f' = 0 on the interval, then the algorithm may not converge.

Comparison with other root-finding methods

The secant method does not require that the root remain bracketed, like the bisection method does, and hence it does not always converge. The false position method (or regula falsi) uses the same formula as the secant method. However, it does not apply the formula to x_{n-1} and x_{n-2}, like the secant method, but to x_{n-1} and the last iterate x_k such that f(x_k) and f(x_{n-1}) have opposite signs. This means that the false position method always converges.

The recurrence formula of the secant method can be derived from the formula for Newton's method

x_n = x_{n-1} - \frac{f(x_{n-1})}{f'(x_{n-1})}

by using the finite-difference approximation

f'(x_{n-1}) \approx \frac{f(x_{n-1}) - f(x_{n-2})}{x_{n-1} - x_{n-2}}.

The secant method can be interpreted as a method in which the derivative is replaced by an approximation and is thus a quasi-Newton method.

If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against φ ≈ 1.6). However, Newton's method requires the evaluation of both f and its derivative f' at every step, while the secant method only requires the evaluation of f. Therefore, the secant method may occasionally be faster in practice. For instance, if we assume that evaluating f takes as much time as evaluating its derivative and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor φ² ≈ 2.6) for the same cost as one step of Newton's method (decreasing the logarithm of the error by a factor 2), so the secant method is faster. If, however, we consider parallel processing for the evaluation of the derivative, Newton's method proves its worth, being faster in wall-clock time, though it spends more function evaluations in total.

Generalizations

Broyden's method is a generalization of the secant method to more than one dimension.

Figure: the function f in red and the last secant line in bold blue; the x-intercept of the secant line is a good approximation of the root of f.

Computational example

Below, the secant method is implemented in the Python programming language.

It is then applied to find a root of the function f(x) = x² − 612 with initial points x_0 = 10 and x_1 = 30.
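The implementation itself did not survive in the text above; the following is a minimal sketch of what such a Python secant routine could look like (function and variable names are illustrative assumptions).

```python
def secant_method(f, x0, x1, iterations):
    """Return an approximate root of f using the secant recurrence."""
    for _ in range(iterations):
        # Secant update using the two most recent iterates
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x2
    return x1

def f_example(x):
    return x ** 2 - 612

root = secant_method(f_example, 10, 30, 10)
print(root)  # approaches sqrt(612) = 24.7386...
```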

Notes

  1. Papakonstantinou, J., The Historical Development of the Secant Method in 1-D, retrieved 2011-06-29.



External links

  • Secant Method Notes, PPT, Mathcad, Maple, Mathematica, Matlab at Holistic Numerical Methods Institute
  • Weisstein, Eric W. 'Secant Method'. MathWorld.

Chapter 5: 5.1, 5.6, 5.7, 5.13, 5.16

Chapter 6: 6.1, 6.7, 6.10, 6.15, 6.16, 6.18

Determine the real roots of f(x) = −0.6x² + 2.4x + 5.5:
(a) Graphically.
(b) Using the quadratic formula.
(c) Using three iterations of the bisection method to determine the highest root. Employ initial guesses of xl = 5 and xu = 10. Compute the estimated error εa and the true error εt after each iteration.
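A minimal Python sketch of the bisection loop asked for in part (c); the helper name and the error bookkeeping are illustrative assumptions rather than the course's own code.

```python
def bisect(f, xl, xu, n_iter, x_true=None):
    """Bisection with approximate (eps_a) and true (eps_t) percent relative errors."""
    xr_old = None
    for i in range(1, n_iter + 1):
        xr = 0.5 * (xl + xu)
        eps_a = abs((xr - xr_old) / xr) * 100 if xr_old is not None else float("nan")
        eps_t = abs((x_true - xr) / x_true) * 100 if x_true is not None else float("nan")
        print(f"iter {i}: xr = {xr:.4f}, eps_a = {eps_a:.2f}%, eps_t = {eps_t:.2f}%")
        # Keep the half-interval whose endpoints bracket the root
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
        xr_old = xr
    return xr

# Part (c): highest root of -0.6x^2 + 2.4x + 5.5 with xl = 5 and xu = 10
bisect(lambda x: -0.6 * x**2 + 2.4 * x + 5.5, 5.0, 10.0, 3)
```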

Answer

Determine the positive real root of ln(x⁴) = 0.7 (a) graphically, (b) using three iterations of the bisection method, with initial guesses of xl = 0.5 and xu = 2, and (c) using three iterations of the false-position method, with the same initial guesses as in (b).

Answer

(A) Actual value = 1.1912

Here is the Graph of these equations with the root estimations plotted:
Red is through Bisection method
Blue is through False-Position Method
Zoomed in version:
The calculated roots for this problem are:
Bisection Method: 1.0625
False-Position Method: 1.2175
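A minimal Python sketch of the false-position update used in part (c); the helper name is an illustrative assumption, and the root function for ln(x⁴) = 0.7 is written as f(x) = 4 ln x − 0.7.

```python
import math

def false_position(f, xl, xu, n_iter):
    """Regula falsi: the secant formula applied to a bracketing pair (xl, xu)."""
    for _ in range(n_iter):
        # Secant-style estimate using the two bracketing endpoints
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        # Retain the subinterval whose endpoints straddle the root
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
    return xr

root = false_position(lambda x: 4 * math.log(x) - 0.7, 0.5, 2.0, 3)
print(root)  # 1.2175 after three iterations, vs. the actual value 1.1912
```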

Determine the real root of f(x) = (0.8 − 0.3x)/x:
(a) Analytically.

(b) Graphically.

(c) Using three iterations of the false-position method and initial guesses of 1 and 3. Compute the approximate error εa and the true error εt after each iteration. Is there a problem with the result?
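For reference, the approximate and true percent relative errors used in these answers follow the standard definitions:

\varepsilon_a = \left| \frac{x_r^{\text{new}} - x_r^{\text{old}}}{x_r^{\text{new}}} \right| \times 100\%, \qquad \varepsilon_t = \left| \frac{x_{\text{true}} - x_r}{x_{\text{true}}} \right| \times 100\%.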

Answer:

(A) Analytically, the root occurs where the numerator vanishes: 0.8 − 0.3x = 0, so x = 0.8/0.3 = 8/3 ≈ 2.6667.

(B)

Graph of equation:

Graphical Solution:

Looks like the root is at 2.6667
(C)

Error after each iteration where e_a = approximate error and e_t = true error:

Iteration 1:

e_a = 0.0725 = 7.25%

e_t = 0.0768 = 7.68%


Iteration 2:
e_a = 0.0279 = 2.79%

e_t = 0.0475 = 4.75%

Iteration 3:
e_a = 0.0178 = 1.78%

e_t = 0.0292 = 2.92%

We can see the algorithm working in the zoomed-out view, but the zoomed-in view shows that three iterations are not enough to reach 2.6667. With 20 iterations, we reach the result we were looking for.
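Reusing the false_position sketch from the previous problem (an illustrative helper, not the original code), the 20-iteration run could be reproduced as:

```python
root = false_position(lambda x: (0.8 - 0.3 * x) / x, 1.0, 3.0, 20)
print(root)  # slowly approaches the true root 8/3 = 2.6667 from above
```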

The velocity v of a falling parachutist is given by

v = \frac{gm}{c}\left(1 - e^{-(c/m)t}\right),

where g = 9.8 m/s². For a parachutist with a drag coefficient c = 15 kg/s, compute the mass m so that the velocity is v = 35 m/s at t = 9 s. Use the false-position method to determine m to a level of εs = 0.1%.

Answer

To calculate mass:

m = 59.8411
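A hedged sketch of how this could be reproduced, reusing the false_position helper from above; the residual function name and the bracketing guesses are assumptions, not part of the original solution.

```python
import math

def velocity_residual(m, g=9.8, c=15.0, t=9.0, v_target=35.0):
    """Model velocity at time t minus the target velocity (zero at the sought m)."""
    return g * m / c * (1.0 - math.exp(-c * t / m)) - v_target

# Bracket chosen by inspection (an assumption, not from the original solution)
m = false_position(velocity_residual, 40.0, 80.0, 20)
print(m)  # approaches the reported answer m = 59.8411
```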

Water is flowing in a trapezoidal channel at a rate of Q = 20 m³/s. The critical depth y for such a channel must satisfy the equation

0 = 1 - \frac{Q^2}{g A_c^3} B,

where g = 9.81 m/s², Ac = the cross-sectional area (m²), and B = the width of the channel at the surface (m). For this case, the width and the cross-sectional area can be related to depth y by

Solve for the critical depth using (a) the graphical method, (b) bisection, and (c) false position. For (b) and (c) use initial guesses of xl = 0.5 and xu = 2.5, and iterate until the approximate error falls below 1% or the number of iterations exceeds 10. Discuss your results.

Answer


Here they are zoomed in: red being bisection and blue being false position
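The channel-geometry relations did not survive extraction above. Purely as an illustration of the setup, the sketch below combines the critical-flow condition 0 = 1 − Q²B/(gA_c³) with assumed example relations B(y) = 3 + y and A_c(y) = 3y + y²/2; these geometric relations are assumptions, not necessarily the ones from the original problem.

```python
def critical_depth_residual(y, Q=20.0, g=9.81):
    B = 3.0 + y                 # surface width in m (assumed relation)
    Ac = 3.0 * y + y**2 / 2.0   # cross-sectional area in m^2 (assumed relation)
    # Critical-flow condition: 0 = 1 - Q^2 * B / (g * Ac^3)
    return 1.0 - Q**2 * B / (g * Ac**3)

# Reusing the false_position sketch from earlier with the stated guesses
y_crit = false_position(critical_depth_residual, 0.5, 2.5, 10)
print(y_crit)
```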

Use simple fixed-point iteration to locate the root of
f(x) = 2 sin(√x) − x
Use an initial guess of x0 = 0.5 and iterate until εa ≤ 0.001%. Verify that the process is linearly convergent as described in Box 6.1.

Answer

root = 1.9724

approximate error = 0.00019419%
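A minimal Python sketch of the fixed-point iteration, rearranging f(x) = 0 into the form x = g(x) = 2 sin(√x); the helper name and stopping rule are illustrative.

```python
import math

def fixed_point(g, x0, tol_percent=0.001, max_iter=100):
    """Fixed-point iteration x_{i+1} = g(x_i) with a percent-error stopping test."""
    x = x0
    eps_a = float("inf")
    for _ in range(max_iter):
        x_new = g(x)
        eps_a = abs((x_new - x) / x_new) * 100.0
        x = x_new
        if eps_a <= tol_percent:
            break
    return x, eps_a

root, err = fixed_point(lambda x: 2.0 * math.sin(math.sqrt(x)), 0.5)
print(root, err)  # the answer above reports root = 1.9724
```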

Locate the first positive root of
f(x) = sin x + cos(1 + x²) − 1
where x is in radians. Use four iterations of the secant method with initial guesses of (a) xi–1 = 1.0 and xi = 3.0; (b) xi–1 = 1.5 and xi = 2.5, and (c) xi–1 = 1.5 and xi = 2.25 to locate the root. (d) Use the graphical method to explain your results.

Answer:


As we can see from this graph, the estimate from (a), shown in black, is not actually at a root. The estimate from (b), shown in red, is a root, but it is too high to be the first positive one. The estimate from (c), shown in green, is our lowest positive root.

(a) xn = 0.3964

(b) xn = 2.5321

(c) xn = 1.9446
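Reusing the secant_method sketch from the computational example above (an illustrative helper), the three four-iteration runs could be set up as:

```python
import math

def f_sec(x):
    return math.sin(x) + math.cos(1.0 + x**2) - 1.0

for xa, xb in [(1.0, 3.0), (1.5, 2.5), (1.5, 2.25)]:
    print(xa, xb, secant_method(f_sec, xa, xb, 4))
```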

Determine the lowest positive root of f(x) = 8 sin(x) e^(−x) − 1:
(a) Graphically.
(b) Using the Newton-Raphson method (three iterations, xi = 0.3).
(c) Using the secant method (five iterations, xi–1 = 0.5 and xi = 0.4).
(d) Using the modified secant method (three iterations, xi = 0.3, δ = 0.01).

Answer:

(a)

[Graph of f(x) = 8 sin(x) e^(−x) − 1 with the secant and modified secant estimates, plus a zoomed-in view.]

As we can see from the zoomed-in image, the lowest positive root is roughly 0.1452.

(b)

[Newton-Raphson iterations shown graphically.]

(c)

[Secant iterations shown graphically; zooming in shows the iterates beginning to converge.]

(d)

[Modified secant iterations shown graphically; successive zooms show the zeros.]
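Part (d) could be reproduced with the modified_secant sketch from the top of these notes (illustrative only); the function name below is an assumption.

```python
import math

def f_d(x):
    return 8.0 * math.sin(x) * math.exp(-x) - 1.0

# Three iterations from x0 = 0.3 with delta = 0.01, as specified in part (d)
root = modified_secant(f_d, 0.3, delta=0.01, max_iter=3)
print(root)  # consistent with the root near 0.1452 read off the zoomed-in graph
```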

Determine the roots of the following simultaneous nonlinear equations using (a) fixed-point iteration and (b) the Newton-Raphson method:
y = −x² + x + 0.75
y + 5xy = x²
Employ initial guesses of x = y = 1.2 and discuss the results.


Here we see the fixed-point iterations in black and the Newton-Raphson iterations in blue.

Roots for Fixed Point:

nx = 0.8660

ny = 0.0400
Roots for Newton Raphson:

nx = 1.3721

ny = 0.2395
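A minimal Python sketch of the two-equation Newton-Raphson iteration for this system, writing u(x, y) = −x² + x + 0.75 − y and v(x, y) = y + 5xy − x² with their partial derivatives worked out by hand; the helper is illustrative, while the starting guesses follow the problem statement.

```python
def newton_raphson_2d(x, y, n_iter=20):
    """Two-equation Newton-Raphson for u = -x^2 + x + 0.75 - y = 0
    and v = y + 5xy - x^2 = 0."""
    for _ in range(n_iter):
        u = -x**2 + x + 0.75 - y
        v = y + 5.0 * x * y - x**2
        du_dx, du_dy = -2.0 * x + 1.0, -1.0
        dv_dx, dv_dy = 5.0 * y - 2.0 * x, 1.0 + 5.0 * x
        det = du_dx * dv_dy - du_dy * dv_dx
        # Cramer's-rule form of the 2x2 Newton-Raphson update
        x -= (u * dv_dy - v * du_dy) / det
        y -= (v * du_dx - u * dv_dx) / det
    return x, y

print(newton_raphson_2d(1.2, 1.2))  # the notes above report (1.3721, 0.2395)
```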

Determine the roots of the simultaneous nonlinear equations
(x − 4)² + (y − 4)² = 5
x² + y² = 16
Use a graphical approach to obtain your initial guesses. Determine refined estimates with the two-equation Newton-Raphson method described in Sec. 6.6.2.

Here is a graph of the functions with the roots shown as red and blue asterisks:

Here is a 3D graph to show the roots

Here is a rotated 3D graph to better show the roots:

Here are the roots:


(1.8058, 3.5692)

(3.5692, 1.8058)
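The same two-equation Newton-Raphson scheme can be sketched for this system with u = (x − 4)² + (y − 4)² − 5 and v = x² + y² − 16; the helper is illustrative, and the starting guesses below are read off the graph (assumed values), as the problem asks.

```python
def newton_circles(x, y, n_iter=20):
    """Newton-Raphson for u = (x-4)^2 + (y-4)^2 - 5 = 0 and v = x^2 + y^2 - 16 = 0."""
    for _ in range(n_iter):
        u = (x - 4.0)**2 + (y - 4.0)**2 - 5.0
        v = x**2 + y**2 - 16.0
        du_dx, du_dy = 2.0 * (x - 4.0), 2.0 * (y - 4.0)
        dv_dx, dv_dy = 2.0 * x, 2.0 * y
        det = du_dx * dv_dy - du_dy * dv_dx
        x -= (u * dv_dy - v * du_dy) / det
        y -= (v * du_dx - u * dv_dx) / det
    return x, y

print(newton_circles(2.0, 3.5))  # refines toward (1.8058, 3.5692)
print(newton_circles(3.5, 2.0))  # refines toward (3.5692, 1.8058)
```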

A mass balance for a pollutant in a well-mixed lake can be written as

Given the parameter values V = 1 × 10⁶ m³, Q = 1 × 10⁵ m³/yr, W = 1 × 10⁶ g/yr, and k = 0.25 m^0.5/g^0.5/yr, use the modified secant method to solve for the steady-state concentration. Employ an initial guess of c = 4 g/m³ and δ = 0.5. Perform three iterations and determine the percent relative error after the third iteration.
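The mass-balance expression itself did not survive in the text above. Purely as an illustration of the modified-secant setup, the sketch below assumes a steady-state balance of the form 0 = W − Qc − kV√c (an assumed form, not necessarily the original equation) and reuses the modified_secant helper from earlier.

```python
import math

def steady_state_residual(c, W=1.0e6, Q=1.0e5, k=0.25, V=1.0e6):
    # Assumed steady-state mass balance (illustrative only)
    return W - Q * c - k * V * math.sqrt(c)

# Three iterations from c = 4 g/m^3 with delta = 0.5, as the problem specifies
c = modified_secant(steady_state_residual, 4.0, delta=0.5, max_iter=3)
print(c)
```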

And here the function is zoomed in

I find it odd that the plots weren’t on the line… There seems to be some error on my part that I need to revisit


This plot generated 0% error