This text presents the central ideas of modern numerical analysis. Stewart designed this volume while teaching an upper-division course in introductory numerical analysis; to clarify what he was teaching, he wrote down each lecture immediately after it was given. The result reflects the wit, insight, and verbal craftsmanship that are hallmarks of the author. Simple examples introduce each topic, after which the author moves on to a discussion of important methods and techniques. With its mixture of graphs and code segments, the book provides insights and advice that help the reader avoid the many pitfalls in numerical computation that can easily trap an unwary beginner.
Part One: Nonlinear Equations
Lecture 1: By the Dawn's Early Light; Interval Bisection; Relative Error
Lecture 2: Newton's Method; Reciprocals and Square Roots; Local Convergence Analysis; Slow Death
Lecture 3: A Quasi-Newton Method; Rates of Convergence; Iterating for a Fixed Point; Multiple Zeros; Ending with a Proposition
Lecture 4: The Secant Method; Convergence; Rate of Convergence; Multipoint Methods; Muller's Method; The Linear-Fractional Method
Lecture 5: A Hybrid Method; Errors, Accuracy, and Condition Numbers

Part Two: Floating-Point Arithmetic
Lecture 6: Floating-Point Numbers; Overflow and Underflow; Rounding Error; Floating-Point Arithmetic
Lecture 7: Computing Sums; Backward Error Analysis; Perturbation Analysis; Cheap and Chippy Chopping
Lecture 8: Cancellation; The Quadratic Equation; That Fatal Bit of Rounding Error; Envoi

Part Three: Linear Equations
Lecture 9: Matrices, Vectors, and Scalars; Operations with Matrices; Rank-One Matrices; Partitioned Matrices
Lecture 10: The Theory of Linear Systems; Computational Generalities; Triangular Systems; Operation Counts
Lecture 11: Memory Considerations; Row-Oriented Algorithms; A Column-Oriented Algorithm; General Observations on Row and Column Orientation; Basic Linear Algebra Subprograms
Lecture 12: Positive-Definite Matrices; The Cholesky Decomposition; Economics
Lecture 13: Inner-Product Form of the Cholesky Algorithm; Gaussian Elimination
Lecture 14: Pivoting; BLAS; Upper Hessenberg and Tridiagonal Systems
Lecture 15: Vector Norms; Matrix Norms; Relative Error; Sensitivity of Linear Systems
Lecture 16: The Condition of a Linear System; Artificial Ill-Conditioning; Rounding Error and Gaussian Elimination; Comments on the Analysis
Lecture 17: Introduction to a Project; More on Norms; The Wonderful Residual; Matrices with Known Condition Numbers; Invert and Multiply; Cramer's Rule; Submission

Part Four: Polynomial Interpolation
Lecture 18: Quadratic Interpolation; Shifting; Polynomial Interpolation; Lagrange Polynomials and Existence; Uniqueness
Lecture 19: Synthetic Division; The Newton Form of the Interpolant; Evaluation; Existence and Uniqueness; Divided Differences
Lecture 20: Error in Interpolation; Error Bounds; Convergence; Chebyshev Points

Part Five: Numerical Integration
Lecture 21: Numerical Integration; Change of Intervals; The Trapezoidal Rule; The Composite Trapezoidal Rule; Newton-Cotes Formulas; Undetermined Coefficients and Simpson's Rule
Lecture 22: The Composite Simpson Rule; Errors in Simpson's Rule; Treatment of Singularities; Gaussian Quadrature: The Idea
Lecture 23: Gaussian Quadrature: The Setting; Orthogonal Polynomials; Existence; Zeros of Orthogonal Polynomials; Gaussian Quadrature; Error and Convergence; Examples
Lecture 24: Numerical Differentiation and Integration; Formulas From Power Series; Limitations

Bibliography: Introduction; References