Numerical Methods
Key Takeaways
- Numerical methods approximate solutions when exact analytical solutions are impractical or impossible.
- Newton-Raphson method iterates x_{n+1} = x_n - f(x_n)/f'(x_n) to find roots of equations.
- The bisection method is guaranteed to converge (if a root exists in the interval) but converges slowly.
- Trapezoidal rule and Simpson's rule approximate definite integrals from discrete data points.
- Euler's method approximates ODE solutions: y_{n+1} = y_n + h·f(x_n, y_n) with step size h.
- Convergence, precision limits, and error estimation are key concepts tested on the FE exam.
Numerical Methods
Numerical methods provide approximate solutions to mathematical problems that cannot be solved analytically. The FE exam tests your understanding of these techniques, their convergence properties, and error characteristics.
Root-Finding Methods
Bisection Method
Given f(a) and f(b) with opposite signs (f(a)·f(b) < 0), a root exists in [a, b].
Algorithm:
- Compute midpoint: c = (a + b)/2
- If f(c) ≈ 0, stop — c is the root
- If f(a)·f(c) < 0, the root is in [a, c] → replace b = c
- If f(c)·f(b) < 0, the root is in [c, b] → replace a = c
- Repeat
Convergence: Guaranteed but slow — error is halved each iteration. After n iterations, error ≤ (b-a)/2ⁿ.
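The algorithm above can be sketched in Python (a minimal illustration; the function name `bisect`, the tolerance, and the iteration cap are assumptions, not part of the source):

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Bisection: requires f(a) and f(b) to have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2.0               # midpoint
        if f(c) == 0 or (b - a) / 2.0 < tol:
            return c
        if f(a) * f(c) < 0:
            b = c                       # root lies in [a, c]
        else:
            a = c                       # root lies in [c, b]
    return (a + b) / 2.0

# Root of x² - 2 on [1, 2], i.e. √2 ≈ 1.41421356
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```

Because the error bound (b-a)/2ⁿ halves each pass, about 27 iterations already guarantee the 1e-8 tolerance on an interval of width 1.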
Newton-Raphson Method
Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) from an initial guess x₀.
Advantages: Very fast convergence (quadratic) near the root.
Disadvantages:
- Requires f'(x) — the derivative must be known
- May diverge if the initial guess is poor
- Fails if f'(xₙ) = 0 at any iteration
Example: Find √2 using Newton-Raphson on f(x) = x² - 2.
- f'(x) = 2x
- Starting with x₀ = 1.5: x₁ = 1.5 - (2.25-2)/(3) = 1.5 - 0.0833 = 1.4167
- x₂ = 1.4167 - (2.0070-2)/(2.8334) = 1.4142 ≈ √2
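The worked example can be reproduced with a short Python sketch (the function name `newton` and the stopping tolerance are assumptions, not from the source):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) = 0; Newton-Raphson fails")
        x_new = x - f(x) / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# √2 from f(x) = x² - 2, f'(x) = 2x, x₀ = 1.5 (matches the example above)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Note how few iterations are needed: quadratic convergence roughly doubles the number of correct digits each step.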
Secant Method
Does not require the derivative — approximates it using finite differences. Convergence is superlinear (order ≈ 1.618).
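A minimal secant-method sketch in Python (names and tolerances are illustrative assumptions): it replaces f'(xₙ) with the finite-difference slope through the last two iterates.

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: derivative replaced by a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:
            break                       # flat secant line; cannot proceed
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                 # keep the two most recent iterates
    return x1

# √2 again, starting from the bracket endpoints 1 and 2
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```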
Numerical Integration
Trapezoidal Rule
∫ₐᵇ f(x) dx ≈ (h/2)[f(x₀) + 2f(x₁) + 2f(x₂) + ... + 2f(x_{n-1}) + f(xₙ)]
where h = (b - a)/n and xᵢ = a + ih.
Error: O(h²) — error decreases quadratically with step size.
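A compact Python sketch of the composite trapezoidal rule (the function name `trapezoid` is an assumption for illustration):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform subintervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))         # endpoints weighted by 1/2
    for i in range(1, n):
        total += f(a + i * h)           # interior points weighted by 1
    return h * total

# ∫₀¹ x² dx = 1/3; with n = 1000 the O(h²) error is about 1.7e-7
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```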
Simpson's 1/3 Rule (requires an even number of intervals)
∫ₐᵇ f(x) dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ... + 4f(x_{n-1}) + f(xₙ)]
Error: O(h⁴) — much more accurate than the trapezoidal rule for smooth functions.
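The alternating 4-2 weight pattern can be sketched as follows (the function name `simpson` is an illustrative assumption):

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of intervals")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # odd interior points get weight 4, even interior points get weight 2
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return (h / 3.0) * total

# Simpson's rule is exact for cubics: ∫₀¹ x³ dx = 0.25 even with n = 2
approx = simpson(lambda x: x ** 3, 0.0, 1.0, 2)
```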
Simpson's 3/8 Rule (requires the number of intervals to be a multiple of 3)
∫ₐᵇ f(x) dx ≈ (3h/8)[f(x₀) + 3f(x₁) + 3f(x₂) + 2f(x₃) + 3f(x₄) + ... + f(xₙ)]
Error: O(h⁴) — same order as the 1/3 rule; useful when n is a multiple of 3 rather than even.
Numerical ODE Solutions
Euler's Method (First-Order)
Given dy/dx = f(x, y) with y(x₀) = y₀:
y_{n+1} = y_n + h·f(x_n, y_n),  x_{n+1} = x_n + h
Error: O(h²) local truncation error per step, O(h) global — first-order accurate. Simple but not very accurate.
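Euler's update rule can be sketched in a few lines of Python (the function name `euler` and the test problem are illustrative assumptions):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)    # step along the tangent line
        x += h
    return y

# dy/dx = y, y(0) = 1 → y(1) = e ≈ 2.71828; Euler slightly undershoots
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

With h = 0.001 the answer is good to only about three digits, which is the O(h) global error in action.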
Improved Euler Method (Heun's Method)
Predictor: y* = y_n + h·f(x_n, y_n)
Corrector: y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y*)]
Error: O(h²) — second-order accurate.
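The predictor-corrector structure of Heun's method, sketched in Python (function name and test problem are illustrative assumptions):

```python
def heun(f, x0, y0, h, steps):
    """Improved Euler (Heun): Euler predictor + trapezoidal corrector."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        y_pred = y + h * k1              # predictor: plain Euler step
        k2 = f(x + h, y_pred)
        y += (h / 2.0) * (k1 + k2)       # corrector: average the two slopes
        x += h
    return y

# dy/dx = y, y(0) = 1 → y(1) = e, now with only 10 steps of h = 0.1
approx = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Averaging the slope at both ends of the step is what lifts the accuracy from O(h) to O(h²).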
Runge-Kutta 4th Order (RK4)
The most commonly used method for ODE solutions:
k₁ = f(x_n, y_n)
k₂ = f(x_n + h/2, y_n + (h/2)k₁)
k₃ = f(x_n + h/2, y_n + (h/2)k₂)
k₄ = f(x_n + h, y_n + h·k₃)
y_{n+1} = y_n + (h/6)(k₁ + 2k₂ + 2k₃ + k₄)
Error: O(h⁴) — fourth-order accurate. Excellent balance of accuracy and computational effort.
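The four-stage update can be sketched directly in Python (the function name `rk4` and test problem are illustrative assumptions):

```python
def rk4(f, x0, y0, h, steps):
    """Classical 4th-order Runge-Kutta."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)                          # slope at the start
        k2 = f(x + h / 2, y + h / 2 * k1)     # slope at midpoint (using k1)
        k3 = f(x + h / 2, y + h / 2 * k2)     # slope at midpoint (using k2)
        k4 = f(x + h, y + h * k3)             # slope at the end
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

# dy/dx = y, y(0) = 1 → y(1) = e; h = 0.1 already gives ~6 correct digits
approx = rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

Compare with Euler above: for the same problem, RK4 with 10 steps beats Euler with 1000.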
Error Analysis
| Type | Definition |
|---|---|
| Absolute Error | \|x_approx − x_true\| |
| Relative Error | \|x_approx − x_true\| / \|x_true\| |
| Percent Error | Relative Error × 100% |
| Round-off Error | Due to finite precision of computer arithmetic |
| Truncation Error | Due to approximating infinite processes (series, derivatives) with finite steps |
Significant Figures: The number of meaningful digits in a result. When multiplying/dividing, the result has the same number of significant figures as the input with the fewest.
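The error definitions in the table can be checked numerically; here the Newton-Raphson first iterate x₁ = 1.4167 from the example above is compared against √2 (helper names are illustrative assumptions):

```python
def abs_error(approx, true):
    """Absolute error: |x_approx - x_true|."""
    return abs(approx - true)

def rel_error(approx, true):
    """Relative error: |x_approx - x_true| / |x_true|."""
    return abs(approx - true) / abs(true)

# Newton-Raphson x₁ = 1.4167 vs the true value √2 ≈ 1.41421356
a_err = abs_error(1.4167, 1.41421356)   # ≈ 0.00249
r_err = rel_error(1.4167, 1.41421356)   # ≈ 0.00176, i.e. about 0.18%
```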
Practice Questions
Using Newton-Raphson with f(x) = x² - 4 and x₀ = 3, what is x₁?
Which numerical integration method has O(h⁴) error?
The bisection method is guaranteed to converge if: