Numerical Methods

Key Takeaways

  • Numerical methods approximate solutions when exact analytical solutions are impractical or impossible.
  • Newton-Raphson method iterates x_{n+1} = x_n - f(x_n)/f'(x_n) to find roots of equations.
  • The bisection method is guaranteed to converge (if a root exists in the interval) but converges slowly.
  • Trapezoidal rule and Simpson's rule approximate definite integrals from discrete data points.
  • Euler's method approximates ODE solutions: y_{n+1} = y_n + h·f(x_n, y_n) with step size h.
  • Convergence, precision limits, and error estimation are key concepts tested on the FE exam.
Last updated: March 2026

Numerical methods provide approximate solutions to mathematical problems that cannot be solved analytically. The FE exam tests your understanding of these techniques, their convergence properties, and error characteristics.

Root-Finding Methods

Bisection Method

Given f(a) and f(b) with opposite signs (f(a)·f(b) < 0), a root exists in [a, b].

Algorithm:

  1. Compute midpoint: c = (a + b)/2
  2. If f(c) ≈ 0, stop — c is the root
  3. If f(a)·f(c) < 0, the root is in [a, c] → replace b = c
  4. If f(c)·f(b) < 0, the root is in [c, b] → replace a = c
  5. Repeat

Convergence: Guaranteed but slow — error is halved each iteration. After n iterations, error ≤ (b-a)/2ⁿ.
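The algorithm above can be sketched in a few lines of Python (function names are illustrative, not from any particular library):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: repeatedly halve [a, b] until the bracket is narrower than tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint
        if f(c) == 0 or (b - a) / 2 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c                # root is in [a, c]
        else:
            a = c                # root is in [c, b]
    return (a + b) / 2

# Root of x^2 - 2 on [1, 2] -> sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Note that the stopping test uses the bracket width (b - a)/2, which matches the error bound stated above: after n halvings the midpoint is within (b - a)/2ⁿ of the root.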

Newton-Raphson Method

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

Advantages: Very fast convergence (quadratic) near the root.

Disadvantages:

  • Requires f'(x) — the derivative must be known
  • May diverge if the initial guess is poor
  • Fails if f'(xₙ) = 0 at any iteration

Example: Find √2 using Newton-Raphson on f(x) = x² - 2.

  • f'(x) = 2x
  • Starting with x₀ = 1.5: x₁ = 1.5 - (2.25-2)/(3) = 1.5 - 0.0833 = 1.4167
  • x₂ = 1.4167 - (2.0070-2)/(2.8334) = 1.4142 ≈ √2
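The iteration in the example above can be reproduced with a short Python sketch (names are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0 -- Newton-Raphson fails here")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# sqrt(2) via f(x) = x^2 - 2, f'(x) = 2x, starting at x0 = 1.5
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

The zero-derivative check mirrors the failure mode listed above: if f'(xₙ) = 0, the update is undefined and the method breaks down.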

Secant Method

x_{n+1} = x_n - f(x_n) \cdot \frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}

Does not require the derivative — approximates it using finite differences. Convergence is superlinear (order ≈ 1.618).
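A minimal Python sketch of the secant update, replacing f'(x) with the finite-difference slope through the last two iterates (names are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: needs two starting points but no derivative."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break                # flat secant line -- cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1          # shift the two-point window forward
        x1, f1 = x2, f(x2)
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```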

Numerical Integration

Trapezoidal Rule

\int_a^b f(x) \, dx \approx \frac{h}{2}\left[f(x_0) + 2f(x_1) + 2f(x_2) + \cdots + 2f(x_{n-1}) + f(x_n)\right]

where h = (b - a)/n and xᵢ = a + ih.

Error: O(h²) — error decreases quadratically with step size.
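As a Python sketch, the rule weights the two endpoints once and every interior point twice (function name is illustrative):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)              # endpoints, weight 1
    for i in range(1, n):
        total += 2 * f(a + i * h)    # interior points, weight 2
    return h / 2 * total

# Integral of x^2 on [0, 1]; exact value is 1/3
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 100)
```

With n = 100 (h = 0.01), the O(h²) error keeps the result within roughly 10⁻⁵ of the exact 1/3.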

Simpson's 1/3 Rule (requires even number of intervals)

\int_a^b f(x) \, dx \approx \frac{h}{3}\left[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + \cdots + 4f(x_{n-1}) + f(x_n)\right]

Error: O(h⁴) — much more accurate than the trapezoidal rule for smooth functions.
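A sketch of the alternating 4-2 weight pattern in Python (name is illustrative), including the even-interval check noted in the heading:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)  # odd index: 4, even: 2
    return h / 3 * total

# Simpson's rule is exact for cubics: integral of x^3 on [0, 1] is 0.25
approx = simpson(lambda x: x ** 3, 0.0, 1.0, 2)
```

Because the O(h⁴) error term involves the fourth derivative, Simpson's rule integrates polynomials up to degree 3 exactly, as the test case illustrates.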

Simpson's 3/8 Rule (requires intervals in multiples of 3)

\int_a^b f(x) \, dx \approx \frac{3h}{8}\left[f(x_0) + 3f(x_1) + 3f(x_2) + 2f(x_3) + 3f(x_4) + \cdots + f(x_n)\right]

Numerical ODE Solutions

Euler's Method (First-Order)

Given dy/dx = f(x, y) with y(x₀) = y₀:

y_{n+1} = y_n + h \cdot f(x_n, y_n)

Error: local error is O(h²) per step, giving a global error of O(h) — first-order accurate. Simple but not very accurate.
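The update rule above is a one-liner per step; a minimal Python sketch (names are illustrative):

```python
def euler(f, x0, y0, h, steps):
    """Forward Euler: y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # slope at the left endpoint only
        x += h
    return y

# dy/dx = y, y(0) = 1; exact solution y(1) = e ~ 2.71828
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

Even with 1000 steps, Euler's first-order error leaves a visible gap from e — the motivation for the higher-order methods below.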

Improved Euler Method (Heun's Method)

k_1 = f(x_n, y_n)

k_2 = f(x_n + h, y_n + h \cdot k_1)

y_{n+1} = y_n + \frac{h}{2}(k_1 + k_2)

Error: O(h²) — second-order accurate.
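A single Heun step in Python, averaging the slope at the start of the interval with the slope at an Euler-predicted endpoint (names are illustrative):

```python
def heun_step(f, x, y, h):
    """One step of the improved Euler (Heun) method."""
    k1 = f(x, y)                  # slope at the left endpoint
    k2 = f(x + h, y + h * k1)     # slope at the Euler-predicted right endpoint
    return y + h / 2 * (k1 + k2)  # average the two slopes

# dy/dx = y, y(0) = 1, ten steps of h = 0.1 toward y(1) = e
x, y = 0.0, 1.0
for _ in range(10):
    y = heun_step(lambda x, y: y, x, y, 0.1)
    x += 0.1
```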

Runge-Kutta 4th Order (RK4)

The most commonly used method for ODE solutions:

k_1 = f(x_n, y_n)

k_2 = f(x_n + h/2, \; y_n + h k_1/2)

k_3 = f(x_n + h/2, \; y_n + h k_2/2)

k_4 = f(x_n + h, \; y_n + h k_3)

y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4)

Error: O(h⁴) — fourth-order accurate. Excellent balance of accuracy and computational effort.
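The four-stage update translates directly into Python (names are illustrative):

```python
def rk4_step(f, x, y, h):
    """One step of classical 4th-order Runge-Kutta."""
    k1 = f(x, y)                        # slope at the start
    k2 = f(x + h / 2, y + h * k1 / 2)   # slope at midpoint, using k1
    k3 = f(x + h / 2, y + h * k2 / 2)   # slope at midpoint, using k2
    k4 = f(x + h, y + h * k3)           # slope at the end, using k3
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# dy/dx = y, y(0) = 1, ten steps of h = 0.1 toward y(1) = e
x, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, 0.1)
    x += 0.1
```

With the same ten steps used in the Heun sketch, RK4's O(h⁴) accuracy brings the result within about 10⁻⁶ of e, illustrating the accuracy-per-step trade-off the text describes.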

Error Analysis

Error types:

  • Absolute Error: |true value − approximate value|
  • Relative Error: |true value − approximate value| / |true value|
  • Percent Error: Relative Error × 100%
  • Round-off Error: due to finite precision of computer arithmetic
  • Truncation Error: due to approximating infinite processes (series, derivatives) with finite steps
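The three error measures can be computed in a few lines of Python (function name is illustrative):

```python
def error_measures(true_value, approx):
    """Return (absolute, relative, percent) error of an approximation."""
    abs_err = abs(true_value - approx)
    rel_err = abs_err / abs(true_value)   # assumes true_value != 0
    return abs_err, rel_err, rel_err * 100

# Approximating pi by 3.14
abs_err, rel_err, pct_err = error_measures(3.14159265, 3.14)
```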

Significant Figures: The number of meaningful digits in a result. When multiplying/dividing, the result has the same number of significant figures as the input with the fewest.

Test Your Knowledge

  1. Using Newton-Raphson with f(x) = x² - 4 and x₀ = 3, what is x₁?
  2. Which numerical integration method has O(h⁴) error?
  3. Under what condition is the bisection method guaranteed to converge?