Thursday, May 25, 2017

Bisection Method with Python


The Bisection Method


     Suppose $f$ is a continuous function defined on the interval $[a, b]$, with $f(a)$ and $f(b)$ of opposite sign. The Intermediate Value Theorem implies that a number $p$ exists in $(a, b)$ with $f(p) = 0$. Although the procedure will work when there is more than one root in the interval $(a, b)$, we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving (or bisecting) of subintervals of $[a, b]$ and, at each step, locating the half containing $p$.
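Since each step halves the interval that brackets the root, the accuracy after $n$ steps can be estimated directly. Writing $p_n$ for the midpoint produced at step $n$, the standard bound is

$$|p_n - p| \le \frac{b - a}{2^n}, \qquad n \ge 1,$$

so guaranteeing $|p_n - p| < TOL$ takes roughly $n \ge \log_2\frac{b - a}{TOL}$ bisections in the worst case.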


Algorithm


To find a solution to $f(x) = 0$ given the continuous function $f$ on the interval $[a, b]$, where $f(a)$ and $f(b)$ have opposite signs:

INPUT endpoints $a$, $b$; tolerance $TOL$; maximum number of iterations $N_0$.

STEP 1 Set $i = 1$; $FA = f(a)$.

STEP 2 While $i \le N_0$ do Steps 3-6.

        STEP 3 Set $p = a + (b - a)/2$; $FP = f(p)$.

        STEP 4 If $FP = 0$ or $(b - a)/2 < TOL$ then STOP;
                    else OUTPUT($p$).

        STEP 5 Set $i = i + 1$.

        STEP 6 If $FA \times FP > 0$ then set $a = p$; $FA = FP$;
                    else set $b = p$.

STEP 7 OUTPUT("Method failed after $N_0$ iterations.")


Sample Problem:


Show that $f(x) = x^3 + 4x^2 - 10 = 0$ has a root in $[1, 2]$, and use the Bisection method to determine an approximation to the root that is accurate to at least within $10^{-4}$.

Solution: Because $f(1) = -5$ and $f(2) = 14$, the Intermediate Value Theorem ensures that this continuous function has a root in $[1, 2]$.
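Applying the error bound above with $b - a = 1$ and $TOL = 10^{-4}$ gives a rough iteration estimate: $1/2^n < 10^{-4}$ requires $2^n > 10^4$, that is $n > \log_2 10^4 \approx 13.29$, so about 14 bisections suffice in the worst case (fewer may happen to be enough in practice).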

For the approximation, see the output below:

     n       $a_n$              $b_n$              $p_n$              $f(p_n)$

     1       1                  2                  1.5                 2.375
     2       1                  1.5                1.25               -1.796875
     3       1.25               1.5                1.375               0.16210938
     4       1.25               1.375              1.3125             -0.84838867
     5       1.3125             1.375              1.34375            -0.35098267
     6       1.34375            1.375              1.359375           -0.096408844
     7       1.359375           1.375              1.3671875           0.032355785
     8       1.359375           1.3671875          1.3632812          -0.032149971
     9       1.3632812          1.3671875          1.3652344           7.2024763e-05
    10       1.3632812          1.3652344          1.3642578          -0.016046691
    11       1.3642578          1.3652344          1.3647461          -0.0079892628
    12       1.3647461          1.3652344          1.3649902          -0.0039591015
    13       1.3649902          1.3652344          1.3651123          -0.001943659


Python Code:


def f(x):
    return x**3 + 4*x**2 - 10


print("Sample input: bisectionMethod(1, 2, 10**-4, 100)")


def bisectionMethod(a, b, TOL, N):
    i = 1
    FA = f(a)

    print("%-20s %-20s %-20s %-20s %-20s" % ("n", "a_n", "b_n", "p_n", "f(p_n)"))

    while i <= N:
        # Step 3: bisect the current interval.
        p = a + (b - a) / 2
        FP = f(p)

        # Step 4: stop when the root is hit exactly or the half-interval
        # is below the tolerance; otherwise report this iteration.
        if FP == 0 or (b - a) / 2 < TOL:
            return p
        else:
            print("%-20d %-20.8g %-20.8g %-20.8g %-20.8g\n" % (i, a, b, p, FP))

        # Step 5: advance the iteration counter.
        i = i + 1

        # Step 6: keep the half of [a, b] whose endpoints still bracket the root.
        if FA * FP > 0:
            a = p
            FA = FP
        else:
            b = p

    # Step 7: the tolerance was not met within N iterations.
    print("Method failed after %d iterations" % N)
    return None
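
For example, running the sample input above and printing the result:

approximation = bisectionMethod(1, 2, 10**-4, 100)
print("Approximate root: %.8g" % approximation)

The returned value is the first midpoint whose half-interval drops below TOL, and it should agree with the root of $f$ (approximately $1.365230$) to within $10^{-4}$.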


Final Note: 


The Bisection method, though conceptually clear, has significant drawbacks. It is relatively slow to converge (that is, $N$ may become quite large before $|p - p_N|$ is sufficiently small), and a good intermediate approximation might be inadvertently discarded. However, the method has the important property that it always converges to a solution, and for that reason it is often used as a starter for more efficient methods.
