How to find the second derivative of a function


Differential calculus is the branch of mathematical analysis that studies derivatives of the first and higher orders as one of the tools for investigating functions. The second derivative of a function is obtained from the first derivative by repeated differentiation.

Instructions

1. The derivative of a function has a definite value at each point, so differentiating it produces a new function, which can itself be differentiated. The derivative of this new function is called the second derivative of the initial function and is denoted F''(x).

2. The first derivative is defined as the limit of the ratio of the increment of the function to the increment of the argument, i.e.: F'(x_0) = lim (F(x) - F(x_0)) / (x - x_0) as x → x_0. The second derivative of the initial function is the derivative of the function F'(x) at the same point x_0, namely: F''(x_0) = lim (F'(x) - F'(x_0)) / (x - x_0) as x → x_0.
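
As a quick check of these definitions, take F(x) = x^3: F'(x_0) = lim (x^3 - x_0^3) / (x - x_0) = lim (x^2 + x*x_0 + x_0^2) = 3*x_0^2 as x → x_0, and applying the same definition to F'(x) = 3*x^2 gives F''(x_0) = lim (3*x^2 - 3*x_0^2) / (x - x_0) = lim 3*(x + x_0) = 6*x_0.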

3. Apply methods of numerical differentiation to find the second derivative of functions that are difficult to differentiate in the usual way. The calculation uses the approximate formulas: F''(x) = (F(x + h) - 2*F(x) + F(x - h)) / h^2 + α(h^2) and F''(x) = (-F(x + 2*h) + 16*F(x + h) - 30*F(x) + 16*F(x - h) - F(x - 2*h)) / (12*h^2) + α(h^4).
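
Both formulas are easy to try directly. The sketch below (a minimal Python illustration; the helper names second_derivative_3pt and second_derivative_5pt are not from the article) applies them to F(x) = sin(x), whose exact second derivative is -sin(x).

import math

def second_derivative_3pt(F, x, h=1e-4):
    # Three-point central difference; truncation error of order h^2.
    return (F(x + h) - 2.0 * F(x) + F(x - h)) / h**2

def second_derivative_5pt(F, x, h=1e-2):
    # Five-point central difference; truncation error of order h^4.
    return (-F(x + 2*h) + 16*F(x + h) - 30*F(x)
            + 16*F(x - h) - F(x - 2*h)) / (12 * h**2)

x0 = 1.0
print(second_derivative_3pt(math.sin, x0))   # exact value: -sin(1) ≈ -0.8414709848
print(second_derivative_5pt(math.sin, x0))   # typically closer even with a larger step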

4. Numerical differentiation methods are based on approximating the function by an interpolation polynomial. The formulas above are obtained by differentiating the Newton and Stirling interpolation polynomials twice.
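
A minimal sketch of this idea, using a generic NumPy polynomial fit rather than the Newton or Stirling form (the function name is illustrative): interpolate F at degree + 1 equally spaced nodes around x, differentiate the interpolating polynomial twice, and evaluate at x. With degree = 2 this reproduces the three-point formula; with degree = 4, the five-point one.

import numpy as np

def second_derivative_via_interpolation(F, x, h=1e-2, degree=2):
    # Nodes centred on x, spaced by h (degree + 1 of them).
    offsets = h * (np.arange(degree + 1) - degree / 2.0)
    values = np.array([F(x + t) for t in offsets])
    coeffs = np.polyfit(offsets, values, degree)   # exact interpolation: degree + 1 points
    return np.polyval(np.polyder(coeffs, 2), 0.0)  # differentiate twice, evaluate at the centre

print(second_derivative_via_interpolation(np.sin, 1.0))            # matches the three-point result
print(second_derivative_via_interpolation(np.sin, 1.0, degree=4))  # analogue of the five-point formula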

5. The parameter h is the approximation step used in the calculation, and α(h^2) is the approximation error. As with the error α(h) for the first derivative, the computed value also carries a round-off component that is inversely proportional to h^2; accordingly, it grows as the step length shrinks. To minimize the total error it is therefore important to choose the most suitable value of h; this choice of an optimal h is called regularization with respect to the step. It is assumed that there is a value of h for which |F(x + h) - F(x)| > ε, where ε is some small quantity.
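
The trade-off is easy to see numerically. A small sweep over the step (a sketch; the exact numbers depend on machine precision) shows the error of the three-point formula first falling as h decreases and then rising again once round-off dominates, which is why an optimal h exists.

import math

def second_derivative_3pt(F, x, h):
    return (F(x + h) - 2.0 * F(x) + F(x - h)) / h**2

exact = -math.sin(1.0)
for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7):
    approx = second_derivative_3pt(math.sin, 1.0, h)
    # Truncation error shrinks like h^2, round-off grows roughly like 1/h^2.
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")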

6. There is another algorithm for minimizing the approximation error. It consists in choosing several points near the initial point x_0, computing the values of the function F at these points, and constructing from them a regression curve that smooths F on a small interval.
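
A sketch of this smoothing approach, assuming noisy samples of F near x_0 (the helper name and the choice of a degree-4 least-squares polynomial are illustrative): fit the regression curve G to the sampled values and take the second derivative of the fit at x_0.

import numpy as np

def second_derivative_smoothed(F, x0, half_width=0.1, n_points=21, degree=4):
    # Sample F around x0, fit a low-degree least-squares polynomial (the
    # smoothing curve G), and return G''(x0).
    xs = np.linspace(x0 - half_width, x0 + half_width, n_points)
    ys = np.array([F(x) for x in xs])
    coeffs = np.polyfit(xs - x0, ys, degree)       # fit in centred coordinates
    return np.polyval(np.polyder(coeffs, 2), 0.0)

rng = np.random.default_rng(0)
noisy_sin = lambda x: np.sin(x) + 1e-6 * rng.standard_normal()   # "measured" F
print(second_derivative_smoothed(noisy_sin, 1.0))   # close to -sin(1) ≈ -0.8415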

7. The resulting smoothed values of the function F represent a partial sum of a Taylor series: G(x) = F(x) + R, where G(x) is the smoothed function and R is the approximation error. After differentiating twice we obtain G''(x) = F''(x) + R'', whence R'' = G''(x) - F''(x). The quantity R'', the deviation of the approximate value from the true one, is then the minimal approximation error.
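
Continuing the sketch above (same illustrative helper), the relation of this step can be checked on a function whose second derivative is known exactly, so that R'' = G''(x_0) - F''(x_0) can be evaluated directly.

import numpy as np

def second_derivative_smoothed(F, x0, half_width=0.1, n_points=21, degree=4):
    xs = np.linspace(x0 - half_width, x0 + half_width, n_points)
    ys = np.array([F(x) for x in xs])
    return np.polyval(np.polyder(np.polyfit(xs - x0, ys, degree), 2), 0.0)

x0 = 1.0
g_second = second_derivative_smoothed(np.sin, x0)   # G''(x0), the smoothed estimate
f_second = -np.sin(x0)                              # F''(x0), known exactly here
print("R'' =", g_second - f_second)                 # the deviation = approximation error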

Author: «MirrorInfo» Dream Team
