
Convergence rate of Newton's method


For 1D optimization, when we use Newton's method to find a root of f'(x), the target function's first derivative, why is it the case that "when it converges, and if f is quadratic, Newton's converges in 1 iteration"?

Does this mean that if f is quadratic, then Newton's method for finding roots of f will also converge in 1 iteration? Or does it mean that if F, the indefinite integral of f, is quadratic, then it will converge in 1 iteration?


Newton's method for finding the root of a nonlinear function, if applied to a linear function, will find that root in one iteration. This is because Newton's method linearizes the function by taking its tangent. When the function is already linear, the tangent is the function itself, so Newton's method hits the x-axis in one iteration.

Newton's method for optimization works on the derivative of f. So if f is quadratic, f' is linear, and Newton's method converges in one iteration.
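To make this concrete, here is a minimal sketch of a single Newton step for 1D optimization. The function names and the example quadratic are my own illustration, not from the course material. The update is x_new = x - f'(x)/f''(x), and because f' is linear for a quadratic f, one step lands exactly on the minimizer from any starting point:

```python
# Sketch: one Newton step for 1D optimization (solving f'(x) = 0).
# Example quadratic and names are illustrative, chosen for this demo.

def newton_step(x, fprime, fsecond):
    """One Newton update on f'(x) = 0: x - f'(x) / f''(x)."""
    return x - fprime(x) / fsecond(x)

# Example: f(x) = 3x^2 - 4x + 1, so f'(x) = 6x - 4 (linear)
# and f''(x) = 6 (constant). The minimizer is x* = 2/3.
fprime = lambda x: 6 * x - 4
fsecond = lambda x: 6.0

x0 = 10.0                       # arbitrary starting point
x1 = newton_step(x0, fprime, fsecond)
print(x1)                       # lands on x* = 2/3 in a single step
```

Running the same step from any other x0 gives the same result, since dividing the linear f' by the constant f'' solves f'(x) = 0 exactly.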