All people by nature strive for knowledge. (Aristotle. Metaphysics)

Numerical methods: solving nonlinear equations

Problems of solving equations constantly arise in practice. For example, in economics, when developing a business, you want to know when profit will reach a certain value; in medicine, when studying the effect of a drug, it is important to know when the concentration of a substance will reach a given level; and so on.

In optimization problems, it is often necessary to determine the points at which the derivative of a function becomes 0, which is a necessary condition for a local extremum.

In statistics, when constructing estimates by the method of least squares or by the maximum likelihood method, one also has to solve nonlinear equations and systems of equations.

So, a whole class of problems arises related to finding solutions of nonlinear equations of the form f(x) = 0.

In the simplest case, we have a function f defined on the interval (a, b) and taking certain values on it.

To each value x from this interval we can assign a number f(x); this is a functional dependence, a key concept in mathematics.

We need to find the values x at which f(x) = 0; such values are called the roots of the function.

Visually, we need to determine the points where the graph of the function intersects the abscissa axis.

Halving method

The simplest method for finding the roots of an equation is the halving method, or dichotomy.

This method is intuitive and everyone would act in a similar way when solving a problem.

The algorithm is as follows.

Suppose we find two points a and b such that f(a) and f(b) have different signs; then between these points there is at least one root of the function.

Divide the segment in half and introduce the middle point c = (a + b)/2.

Then either f(a)·f(c) ≤ 0, or f(c)·f(b) ≤ 0.

Let us keep that half of the segment for which the values at the ends have different signs. Now we divide this segment in half again, keep the part at whose boundaries the function has different signs, and so on, until the required accuracy is achieved.

Obviously, we will gradually narrow the area where the root of the function is located, and, therefore, we will determine it with a certain degree of accuracy.

Note that the described algorithm is applicable for any continuous function.

The advantages of the halving method include its high reliability and simplicity.

The disadvantage of the method is that before you start using it, you need to find two points where the function values have different signs. Obviously, the method is not applicable to roots of even multiplicity, and it also cannot be generalized to complex roots or to systems of equations.

The order of convergence of the method is linear: at each step the error is halved, so the more iterations are done, the more accurately the root is determined.

Newton's method: theoretical foundations

Newton's classical method, or the method of tangents, consists in the following: if x_n is some approximation to the root of the equation f(x) = 0, then the next approximation is defined as the root of the tangent to the function f(x) drawn at the point x_n.

The tangent equation to the function f(x) at the point x_n has the form:

y − f(x_n) = f'(x_n)·(x − x_n)

In the tangent equation we put y = 0 and x = x_{n+1}.

Then the algorithm of sequential calculations in Newton's method is as follows:

x_{n+1} = x_n − f(x_n)/f'(x_n)

The convergence of the tangent method is quadratic: the order of convergence is 2.

Thus, the convergence of Newton's tangent method is very fast.

Remember this wonderful fact!

Without any changes, the method is generalized to the complex case.

If the root is a root of multiplicity two or higher, then the order of convergence drops and becomes linear.

Exercise 1. Using the tangent method, find a solution to the equation on the segment (0, 2).

Exercise 2. Using the tangent method, find a solution to the equation on the segment (1, 3).

The disadvantages of Newton's method include its locality: it is guaranteed to converge from an arbitrary starting approximation only when a condition such as f(x)·f''(x) > 0 is satisfied everywhere; otherwise, convergence is obtained only in a certain neighborhood of the root.

The disadvantage of Newton's method is the need to calculate derivatives at each step.

Visualization of Newton's method

Newton's method (tangent method) is used if the equation f(x) = 0 has a root and the following conditions are met:

1) the function y = f(x) is defined and continuous on the segment [a; b];

2) f(a)·f(b) < 0 (the function takes values of different signs at the ends of the segment [a; b]);

3) the derivatives f'(x) and f''(x) preserve their sign on the segment [a; b] (i.e., the function f(x) either increases or decreases on [a; b] while maintaining the direction of its convexity).

The main idea of the method is as follows: on the segment [a; b] a number x_0 is selected at which f(x_0) has the same sign as f''(x_0), i.e., the condition f(x_0)·f''(x_0) > 0 is satisfied. Thus, a point with abscissa x_0 is selected at which the tangent to the curve y = f(x) on the segment [a; b] intersects the axis Ox. As the point x_0, it is convenient first to select one of the ends of the segment.

Let's consider Newton's method using a specific example.

Let us be given an increasing function y = f(x) = x^2 − 2, continuous on the segment (0; 2) and having f'(x) = 2x > 0 and f''(x) = 2 > 0.

Figure 1. f(x) = x^2 − 2

The tangent equation in general form is:

y − y_0 = f'(x_0)·(x − x_0).

In our case: y − y_0 = 2x_0·(x − x_0). As the point x_0 we choose the point B_1(b; f(b)) = (2; 2). We draw a tangent to the function y = f(x) at the point B_1 and denote the point of intersection of the tangent with the axis Ox by x_1. We obtain the equation of the first tangent: y − 2 = 2·2·(x − 2), i.e. y = 4x − 6.

The point of intersection of the tangent with the axis Ox: x_1 = 6/4 = 1.5.

Figure 2. Result of the first iteration

Then we find the point of intersection of the function y = f(x) with the perpendicular to the axis Ox drawn through the point x_1, and we obtain the point B_2 = (1.5; 0.25). Again we draw a tangent to the function y = f(x) at the point B_2 and denote the point of intersection of the tangent with the axis Ox by x_2.

Equation of the second tangent: y − 0.25 = 2·1.5·(x − 1.5), i.e. y = 3x − 4.25.

The point of intersection of the tangent with the axis Ox: x_2 = 4.25/3 ≈ 1.4167.

Figure 3. Second iteration of Newton's method

Then we find the point of intersection of the function y = f(x) with the perpendicular drawn to the axis Ox through the point x_2, obtain the point B_3, and so on.

Figure 4. Third step of the tangent method

The first approximation of the root is determined by the formula:

x_1 = x_0 − f(x_0)/f'(x_0) = 2 − 2/4 = 1.5.

The second approximation of the root is determined by the formula:

x_2 = x_1 − f(x_1)/f'(x_1) = 1.5 − 0.25/3 ≈ 1.4167.

The third approximation of the root is determined by the formula:

x_3 = x_2 − f(x_2)/f'(x_2) ≈ 1.41422.

Thus, the i-th approximation of the root is determined by the formula:

x_i = x_{i−1} − f(x_{i−1})/f'(x_{i−1}).

Calculations are carried out until the required decimal places of the answer stop changing, or until the specified precision e is achieved, that is, until the inequality |x_i − x_{i−1}| < e is satisfied.

In our case, let’s compare the approximation obtained in the third step with the real answer calculated on a calculator:

Figure 5. Root of 2, calculated on a calculator

As you can see, already at the third step we obtained an error of about 0.000002.

In this way, you can calculate the value of the square root of 2 with any degree of accuracy. This remarkable method was invented by Newton and allows you to find the roots of very complex equations.

Newton's method: application in C++

In this article, we will automate the process of calculating the roots of equations by writing a console application in C++. We will develop it in Visual C++ 2010 Express, this is a free and very convenient C++ development environment.

First, let's launch Visual C++ 2010 Express. The program start window will appear. In the left corner, click “Create a project”.

Fig. 1. Start page of Visual C++ 2010 Express

In the menu that appears, select “Win32 Console Application” and enter the application name “Newton_Method”.

Rice. 2. Create a project

// Newton_Method.cpp : defines the entry point for the console application

#include "stdafx.h"
#include <iostream>
#include <cmath>

using namespace std;

float f(double x) { return x*x - 2; }  // returns the value of the function f(x) = x^2-2
float df(float x) { return 2*x; }      // returns the derivative value
float d2f(float x) { return 2; }       // value of the second derivative

int _tmain(int argc, _TCHAR* argv[])
{
    int exit = 0, i;  // variables for exit and for the iteration counter
    double x0, xn;    // successive approximations to the root
    double a, b, eps; // boundaries of the segment and required accuracy
    do
    {
        i = 0;
        cout<<"Please input a, b\n=>";
        cin>>a>>b; // enter the boundaries of the segment on which we will look for the root
        cout<<"\nPlease input epsilon\n=>";
        cin>>eps;  // enter the required calculation accuracy
        if (a > b) swap(a, b); // if the user has mixed up the boundaries of the segment, swap them
        if (f(a)*f(b)>0) // if the signs of the function at the edges of the segment are the same, then there is no root here
            cout<<"\nError! No roots in this interval\n";
        else
        {
            if (f(a)*d2f(a)>0) x0 = a; else x0 = b; // to select the starting point, check f(x0)*d2f(x0)>0
            xn = x0 - f(x0)/df(x0); // compute the first approximation
            cout<<++i<<"-th iteration = "<<xn<<"\n";
            while (fabs(x0-xn) > eps) // continue to calculate until we reach the required accuracy
            {
                x0 = xn;
                xn = x0 - f(x0)/df(x0); // Newton's formula itself
                cout<<++i<<"-th iteration = "<<xn<<"\n";
            }
            cout<<"\nRoot = "<<xn<<"\n";
        }
        cout<<"\nExit?=>";
        cin>>exit;
    } while (exit!=1); // until the user enters exit = 1
    return 0;
}

Let's see how it works. Click on the green triangle in the upper left corner of the screen, or press the F5 key.

If the compilation error “LNK1123: failure during conversion to COFF: file invalid or corrupt” occurs, it can be cured either by installing Service Pack 1 or by disabling incremental linking in the project settings (Properties -> Linker).

Fig. 4. Fixing the project compilation error

We will look for the roots of the function f(x) = x^2 − 2.

First, let's check how the application behaves on “wrong” input data. If there are no roots on the segment, our program should display an error message.

We now have an application window:

Fig. 5. Entering input data

Let us enter the segment boundaries 3 and 5 and the accuracy 0.05. The program, as expected, produced an error message saying that there are no roots on this segment.

Fig. 6. Error “There are no roots on this segment!”

We are not going to exit yet, so in response to the message “Exit?” we enter “0”.

Now let's check the application using the correct input data. Let's enter the segment and accuracy 0.0001.

Fig. 7. Calculation of the root with the required accuracy

As we can see, the required accuracy was achieved already at the 4th iteration.

To exit the application, enter “Exit?” => 1.

Secant method

To avoid calculating the derivative, Newton's method can be simplified by replacing the derivative with an approximation calculated from the previous two points:

f'(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1})

The iterative process then looks like:

x_{n+1} = x_n − f(x_n)·(x_n − x_{n−1}) / (f(x_n) − f(x_{n−1}))

This is a two-step iterative process, because it uses the previous two points to find the next approximation.

The order of convergence of the secant method is lower than that of the tangent method and in the case of a simple root is equal to

φ = (1 + √5)/2 ≈ 1.618.

This remarkable quantity is called the golden ratio.

Let us verify this, assuming for convenience that the root is x* = 0 and that f'(x*) ≠ 0.

Then, up to infinitesimals of higher order, the errors satisfy

x_{n+1} ≈ C·x_n·x_{n−1}, where C = f''(x*)/(2f'(x*)).

Discarding the remainder term, we obtain a recurrence relation whose solution is naturally sought in the form x_n ~ c·x_{n−1}^α.

After substitution we have α = 1 + 1/α, i.e. α² = α + 1, whence α = (1 ± √5)/2.

For convergence the exponent α must be positive, therefore α = (1 + √5)/2 ≈ 1.618.

Since knowledge of the derivative is not required, with the same amount of calculations in the secant method (despite the lower order of convergence) one can achieve greater accuracy than in the tangent method.

Note that near the root one has to divide by a small number, and this leads to a loss of accuracy (especially in the case of multiple roots); therefore, having chosen a relatively small ε, one continues the calculations while the modulus of the difference between neighboring approximations decreases.

As soon as it begins to grow, the calculations are stopped and the last iteration is not used.

This procedure for determining the end of iterations is called Garwick's technique.

Parabola method

Let's consider a three-step method in which the next approximation is determined from the three previous points x_n, x_{n−1} and x_{n−2}.

To do this, similarly to the secant method, we replace the function f(x) with an interpolation parabola passing through the points x_n, x_{n−1} and x_{n−2}.

In Newton's form it looks like:

y = f(x_n) + f[x_n, x_{n−1}]·(x − x_n) + f[x_n, x_{n−1}, x_{n−2}]·(x − x_n)·(x − x_{n−1}),

where f[x_n, x_{n−1}] and f[x_n, x_{n−1}, x_{n−2}] are the divided differences.

The point x_{n+1} is defined as that root of this polynomial which is closer to the point x_n.

The order of convergence of the parabola method is higher than that of the secant method, but lower than that of Newton's method.

An important difference from the previously considered methods is the fact that even if f(x) is real for real x and the starting approximations are chosen real, the parabola method can lead to a complex root of the original problem.

This method is very convenient for finding roots of high degree polynomials.

Simple iteration method

The problem of finding solutions to equations can be formulated as a problem of finding roots, f(x) = 0, or as a problem of finding a fixed point, x = φ(x).

Let φ map the segment [a, b] into itself and be a contraction: |φ(x_1) − φ(x_2)| ≤ α·|x_1 − x_2| with α < 1 (in particular, the fact that φ is a contraction, as is easy to see, means that φ is continuous).

According to Banach's theorem, there is a unique fixed point x* = φ(x*).

It can be found as the limit of the simple iterative procedure

x_{n+1} = φ(x_n),

where the initial approximation x_0 is an arbitrary point of the segment.

If the function φ is differentiable, then a convenient contraction criterion is the number α = sup |φ'(x)| < 1. Indeed, by Lagrange's mean value theorem,

|φ(x_1) − φ(x_2)| = |φ'(ξ)|·|x_1 − x_2| ≤ α·|x_1 − x_2|.

Thus, if the derivative is everywhere less than one in absolute value, then φ is a contraction.

The condition that φ map the segment into itself is essential: if, for example, φ(x) = 2 on [0, 1], then φ has no fixed point on this segment, even though the derivative is equal to zero. The speed of convergence depends on the value of α: the smaller α, the faster the convergence.

Let the root of the equation f(x) = 0 be separated on the segment [a, b], with the first and second derivatives f'(x) and f''(x) continuous and of constant sign for x ∈ [a, b].

Suppose that at some step of root refinement the next approximation to the root, x_n, has been obtained (selected). Then suppose that the approximation obtained with the help of a correction h_n leads to the exact value of the root:

x = x_n + h_n. (1.2.3-6)

Considering h_n a small value, we represent f(x_n + h_n) in the form of a Taylor series, limiting ourselves to the linear terms:

f(x_n + h_n) ≈ f(x_n) + h_n·f'(x_n). (1.2.3-7)

Considering that f(x) = f(x_n + h_n) = 0, we obtain f(x_n) + h_n·f'(x_n) ≈ 0.

Hence h_n ≈ −f(x_n)/f'(x_n). Substituting this value of h_n into (1.2.3-6), instead of the exact value of the root x we obtain the next approximation:

x_{n+1} = x_n − f(x_n)/f'(x_n). (1.2.3-8)

Formula (1.2.3-8) allows us to obtain a sequence of approximations x_1, x_2, x_3, ..., which, under certain conditions, converges to the exact value of the root x, that is, x_n → x as n → ∞.

The geometric interpretation of Newton's method is as follows (Fig. 1.2.3-6). Let us take the right end of the segment, b, as the initial approximation x_0 and construct the tangent at the corresponding point B_0 on the graph of the function y = f(x). The point of intersection of the tangent with the x-axis is taken as the new, more accurate approximation x_1. Repeating this procedure many times allows us to obtain a sequence of approximations x_0, x_1, x_2, ..., which tends to the exact value of the root x.

The calculation formula of Newton's method (1.2.3-8) can also be obtained from this geometric construction. In the right triangle x_0B_0x_1 the leg x_0x_1 = x_0B_0/tg α. Considering that the point B_0 lies on the graph of the function f(x), so that x_0B_0 = f(x_0), and the hypotenuse is formed by the tangent to the graph of f(x) at the point B_0, so that tg α = f'(x_0), we get

x_1 = x_0 − f(x_0)/f'(x_0), (1.2.3-9)

and in general

x_{n+1} = x_n − f(x_n)/f'(x_n). (1.2.3-10)

This formula coincides with (1.2.3-8) for the nth approximation.

From Fig. 1.2.3-6 it is clear that choosing the point a as the initial approximation can lead to the next approximation x_1 lying outside the segment on which the root x is separated. In this case, the convergence of the process is not guaranteed. In the general case, the choice of the initial approximation is made in accordance with the following rule: the initial approximation should be a point x_0 ∈ [a, b] at which f(x_0)·f''(x_0) > 0, that is, at which the signs of the function and its second derivative coincide.

The conditions for the convergence of Newton's method are formulated in the following theorem.

If the root of the equation is separated on the segment [a, b], and f'(x) and f''(x) are different from zero and retain their signs for x ∈ [a, b], then, if we choose as the initial approximation a point x_0 ∈ [a, b] such that f(x_0)·f''(x_0) > 0, the root of the equation f(x) = 0 can be calculated with any degree of accuracy.

The error estimate of Newton's method is determined by the following expression:

|x − x_n| ≤ M_2·(x_n − x_{n−1})² / (2m_1), (1.2.3-11)

where m_1 is the smallest value of |f'(x)| on [a, b], and M_2 is the largest value of |f''(x)| on [a, b].

The calculation process stops if |x_n − x_{n−1}| < ε, where ε is the specified accuracy.

In addition, the inequality |f(x_n)| < ε can also serve as a condition for achieving a given accuracy when refining the root by Newton's method.

The diagram of the Newton method algorithm is shown in Fig. 1.2.3-7.

The left side of the original equation f(x) and its derivative f’(x) in the algorithm are designed as separate software modules.

Fig. 1.2.3-7. Diagram of the Newton method algorithm

Example 1.2.3-3. Refine the roots of the equation x − ln(x+2) = 0 using Newton's method, given that the roots of this equation are separated on the segments x_1 ∈ [−1.9; −1.1] and x_2 ∈ [−0.9; 2].

The first derivative f'(x) = 1 − 1/(x+2) retains its sign on each of the segments:

f'(x) < 0 for x ∈ [−1.9; −1.1],

f'(x) > 0 for x ∈ [−0.9; 2].

The second derivative f''(x) = 1/(x+2)² > 0 for any x.

Thus, the convergence conditions are satisfied. Since f''(x) > 0 over the entire range of permissible values, to refine the root x_1 we choose x_0 = −1.9 as the initial approximation (since f(−1.9)·f''(−1.9) > 0).

Continuing the calculations, we obtain the following sequence of the first four approximations: −1.9; −1.8552; −1.8421; −1.8414. The value of the function f(x) at the point x = −1.8414 is f(−1.8414) = −0.00003.

To refine the root x_2 ∈ [−0.9; 2] we choose x_0 = 2 as the initial approximation (f(2)·f''(2) > 0). Starting from x_0 = 2, we obtain the sequence of approximations: 2.0; 1.1817; 1.1462; 1.1461. The value of the function f(x) at the point x = 1.1461 is f(1.1461) = −0.00006.

Newton's method has a high convergence rate, but at each step it requires calculating not only the value of the function, but also its derivative.

Chord method

The geometric interpretation of the chord method is as follows (Fig. 1.2.3-8).

Let us draw a chord through the points A(a, f(a)) and B(b, f(b)). The next approximation x_1 is the abscissa of the point of intersection of the chord with the 0x axis. Let us construct the equation of this straight line:

(y − f(a)) / (f(b) − f(a)) = (x − a) / (b − a).

Setting y = 0, we find the value x = x_1 (the next approximation):

x_1 = a − f(a)·(b − a) / (f(b) − f(a)).

Let us repeat the calculation process to obtain the next approximation to the root, x_2:

x_2 = x_1 − f(x_1)·(b − x_1) / (f(b) − f(x_1)).

In our case (Fig. 1.2.3-8) the calculation formula of the chord method will have the form

x_{n+1} = x_n − f(x_n)·(b − x_n) / (f(b) − f(x_n)).

This formula is valid when point b is taken as a fixed point, and point a acts as an initial approximation.

Let us consider the other case (Fig. 1.2.3-9), when the fixed end of the segment is the point a.

The straight-line equation for this case has the form

(y − f(x_0)) / (f(a) − f(x_0)) = (x − x_0) / (a − x_0).

The next approximation x_1 at y = 0:

x_1 = x_0 − f(x_0)·(a − x_0) / (f(a) − f(x_0)).

Then the recurrent formula of the chord method for this case has the form

x_{n+1} = x_n − f(x_n)·(a − x_n) / (f(a) − f(x_n)).

It should be noted that the fixed point in the chord method is chosen to be the end of the segment for which the condition f(x)·f''(x) > 0 is satisfied.

Thus, if point a is taken as a fixed point , then x 0 = b acts as the initial approximation, and vice versa.

The sufficient conditions that ensure the calculation of the root of the equation f(x) = 0 using the chord formula will be the same as for the tangent method (Newton's method), only instead of the initial approximation, a fixed point is chosen. The chord method is a modification of Newton's method. The difference is that the next approximation in Newton's method is the point of intersection of the tangent with the 0X axis, and in the chord method - the point of intersection of the chord with the 0X axis - the approximations converge to the root from different sides.

The error estimate for the chord method is given by the expression

|x − x_n| ≤ ((M_1 − m_1)/m_1)·|x_n − x_{n−1}|, (1.2.3-15)

where m_1 and M_1 are the smallest and largest values of |f'(x)| on the segment [a, b].

The condition for ending the iteration process using the chord method is

|x_n − x_{n−1}| ≤ (m_1/(M_1 − m_1))·ε. (1.2.3-16)

If M_1 < 2m_1, then the simpler formula |x_n − x_{n−1}| ≤ ε can be used to estimate the error of the method.

Example 1.2.3-4. Refine the root of the equation e^x − 3x = 0, separated on the segment [0; 1], with an accuracy of 10^-4.

Let us check the convergence conditions: f(0) = 1 > 0 and f(1) = e − 3 ≈ −0.28 < 0, so f(0)·f(1) < 0, while f''(x) = e^x > 0 on the entire segment.

Consequently, a = 0 should be chosen as the fixed point, and x_0 = 1 should be taken as the initial approximation, since f(0) = 1 > 0 and f(0)·f''(0) > 0.

While struggling with equations in school mathematics lessons, many students become convinced that they are wasting their time, and yet such a skill will be useful in life not only to those who decide to follow in the footsteps of Descartes, Euler or Lobachevsky.

In practice, for example in medicine or economics, there are often situations when a specialist needs to find out when the concentration of the active substance of a drug will reach the required level in the patient's blood, or to calculate the time required for a particular business to become profitable.

Most often we are talking about solving nonlinear equations of various types. Numerical methods allow this to be done as quickly as possible, especially using a computer. They are well studied and have long proven their effectiveness. These include Newton's tangent method, which is the subject of this article.

Formulation of the problem

In this case, there is a function g which is defined on the segment (a, b) and takes certain values on it, i.e., each x belonging to (a, b) can be associated with a specific number g(x).

It is required to find all the roots of the equation g(x) = 0 in the interval between the points a and b (including the ends), i.e., all points at which the function vanishes. Obviously, these will be the points of intersection of y = g(x) with OX.

In some cases it is more convenient to replace g(x) = 0 with an equivalent equation of the form g_1(x) = g_2(x). In this case, the abscissas (x values) of the points of intersection of the graphs of g_1(x) and g_2(x) act as the roots.

The solution of nonlinear equations is also important in optimization problems, where the condition for a local extremum is that the derivative of the function turn to 0. In other words, such a problem reduces to finding the roots of the equation p(x) = 0, where p(x) is identical to g'(x).

Solution methods

For some types of nonlinear equations, such as quadratic or simple trigonometric equations, roots can be found in fairly simple ways. In particular, every schoolchild knows formulas that can be used to easily find the values ​​of the argument of the points where the quadratic trinomial vanishes.

Methods for extracting roots of nonlinear equations are usually divided into analytical (direct) and iterative. In the first case, the desired solution has the form of a formula, using which, in a certain number of arithmetic operations, one can find the value of the desired roots. Similar methods have been developed for exponential, trigonometric, logarithmic and simple algebraic equations. For the rest, you have to use special numerical methods. They are easy to implement using computers, which allow you to find the roots with the required accuracy.

These include the so-called numerical tangent method. The latter was proposed by the great scientist Isaac Newton at the end of the 17th century. In subsequent centuries, the method was repeatedly improved.

Localization

Numerical methods for solving complex equations that have no analytical solutions usually proceed in 2 stages. First you need to localize the roots. This operation consists of finding segments on OX each of which contains exactly one root of the equation being solved.

Let's consider a segment [a, b]. If g(x) has no discontinuities on it and takes values of different signs at the end points, then between a and b (or at one of them) there is at least 1 root of the equation g(x) = 0. For it to be unique, it is required that g(x) be monotonic. As is known, it will have this property provided that the sign of g'(x) is constant.

In other words, if g(x) has no discontinuities and monotonically increases or decreases, and its values at the end points have different signs, then there is 1 and only 1 root of g(x).

However, you should know that this criterion does not apply to multiple roots of equations.

Solving an equation by halving

Before considering the more complex tangent method and its varieties, it is worth getting acquainted with the simplest way of identifying roots. It is called dichotomy and is an intuitive way of finding roots, based on the theorem that if a continuous g(x) takes values of different signs at the ends of a segment, then on the segment under consideration there is at least 1 root of g(x) = 0.

To find it, you need to divide the segment [x_0, x_1] in half and designate the midpoint as x_2. Then two options are possible: either g(x_0)·g(x_2) or g(x_2)·g(x_1) is equal to or less than 0. We choose the half for which one of these inequalities is true. We repeat the procedure described above until the length of the segment becomes less than a certain pre-selected value that determines the accuracy of the root of the equation.

The advantages of the method include its reliability and simplicity, but the disadvantage is the need to initially identify points at which g(x) takes different signs, so it cannot be used for roots of even multiplicity. In addition, it does not generalize to the case of a system of equations or if we are talking about complex roots.

Example 1

Suppose we want to solve the equation g(x) = 2x^5 + x − 1 = 0. In order not to spend a long time searching for a suitable segment, we plot the graph using, for example, the well-known Excel program. We see that it is better to take values from the interval [0, 1] as the segment for localizing the root. We can be sure that at least one root of the required equation exists on it.

g'(x) = 10x^4 + 1 > 0, i.e., g(x) is a monotonically increasing function, so there is only 1 root on the selected segment.

We substitute the end points into the equation: g(0) = −1 and g(1) = 2. At the first step, we take the point 0.5 as the approximate solution. Then g(0.5) = −0.4375. This means that the next segment for halving will be [0.5, 1]. Its midpoint is 0.75, where the value of the function is about 0.225. We next consider the segment [0.5, 0.75] and its middle, which is located at the point 0.625. We calculate g(0.625) ≈ −0.184, i.e., it is negative. Based on this result, we select the segment [0.625, 0.75]. We get x = 0.6875. Then g(x) = −0.00532. If the accuracy of the solution is 0.01, then we can take 0.6875 as the desired result.

Theoretical basis

This method of finding roots, Newton's tangent method, is popular because of its very fast convergence.

It is based on the proven fact that if x_n is an approximation to a root of f(x) = 0, where f ∈ C¹, then the next approximation is the point where the tangent to f(x) vanishes, i.e., the line

y = f(x_n) + f'(x_n)·(x − x_n).

Substituting x = x_{n+1} and setting y to zero, the tangent-method iteration looks like this:

x_{n+1} = x_n − f(x_n)/f'(x_n).

Example 2

Let's try to use Newton's classical tangent method and find a solution to some nonlinear equation that is difficult or impossible to find analytically.

Let it be necessary to find the roots of x^3 + 4x − 3 = 0 with some accuracy, for example 0.001. As is known, the graph of any polynomial of odd degree must intersect the OX axis at least once, i.e., there is no doubt about the existence of roots.

Before solving our example by the tangent method, we plot f(x) = x^3 + 4x − 3 pointwise. This is very easy to do, for example, using an Excel spreadsheet. From the resulting graph it will be clear that the function intersects the OX axis exactly once and that y = x^3 + 4x − 3 increases monotonically. We can be sure that the equation x^3 + 4x − 3 = 0 has a solution and it is unique.

Algorithm

Any solution of an equation by the tangent method begins with the calculation of f'(x). We have:

f'(x) = 3x^2 + 4.

The second derivative will be f''(x) = 6x.

Using these expressions, we can write the formula for identifying the roots of the equation by the tangent method in the form:

x_{n+1} = x_n − (x_n^3 + 4x_n − 3) / (3x_n^2 + 4).

Next, we need to choose an initial approximation, i.e., decide which point to take as the starting point x_0 for the iterative process. We consider the ends of the segment [0, 1]. We will use the one for which the function and its 2nd derivative have the same sign at x_0. As we can see, this condition is violated when substituting x_0 = 0, but x_0 = 1 is quite suitable.

Then, if we are interested in solving by the tangent method with accuracy e, the value x_n can be considered to satisfy the requirements of the problem once the inequality |f(x_n)/f'(x_n)| < e is satisfied.

At the first tangent step we have:

  • x 1 = x 0 - (x 0 3 + 4x 0 - 3) / (3x 0 2 + 4) = 1- 0.2857 = 0.71429;
  • since the condition is not met, we move on;
  • we get a new value for x 2, which is equal to 0.674;
  • we notice that the ratio of the function value to its derivative at x_2 is about 0.00063, which is less than the required 0.001, so we stop the process.

Tangent Method in Excel

You can solve the previous example much easier and faster if you do not perform calculations manually (on a calculator), but use the capabilities of a spreadsheet processor from Microsoft.

To do this, you need to create a new sheet in Excel and fill its cells with the following formulas:

  • in C7 we write "=POWER(B7;3) + 4*B7 - 3" (the value of the function);
  • in D7 we enter "=4 + 3*POWER(B7;2)" (the value of the derivative);
  • in E7 we write "=(POWER(B7;3) - 3 + 4*B7) / (3*POWER(B7;2) + 4)" (the ratio f(x)/f'(x));
  • in F7 we enter the expression "=B7 - E7" (the next approximation);
  • in B8 we enter the condition formula "=IF(E7 < 0.001; "Iterations complete"; F7)".

In this particular problem, the inscription "Iterations complete" will appear in cell B10, and to solve the problem you will need to take the number written in the cell one line above. You can also allocate a separate "stretchable" column for it by entering a condition formula there, so that the result is written there as soon as the contents of a cell in column B take the form "Iterations complete".

Implementation in Pascal

Let's try to obtain a solution of the nonlinear equation y = x^4 − 4 − 2x using the tangent method in Pascal.

We use an auxiliary function that performs an approximate calculation of the derivative: f'(x) ≈ (f(x + delta) − f(x)) / delta. As the condition for completing the iterative process, we choose the fulfillment of the inequality |x_0 − x_1| < some small number; in Pascal we write it as abs(x0 - x1) <= epsilon.

The program is notable for the fact that it does not require manual calculation of the derivative.

Chord method

Let's consider another way to identify the roots of nonlinear equations. The iterative process consists in taking, as successive approximations x_1, ..., x_n to the desired root of f(x) = 0, the abscissas of the points of intersection with OX of the chord drawn through the end points of the segment. The chord through the points (a, f(a)) and (b, f(b)) has the equation:

(y − f(a)) / (f(b) − f(a)) = (x − a) / (b − a).

For the point where the chord intersects the OX axis (y = 0), the expression will be written as:

x = a − f(a)·(b − a) / (f(b) − f(a)).

Let the second derivative be positive for x ∈ [a, b] (the opposite case reduces to the one under consideration if we write −f(x) = 0). In this case, the graph of y = f(x) is a curve that is convex downward and lies below the chord AB. There can be 2 cases: the function takes a positive value either at the point a or at the point b.

In the first case, we choose end a as the fixed one, and take point b as x 0. Then successive approximations according to the formula presented above form a sequence that decreases monotonically.

In the second case the end b is fixed and x0 = a. The values of x obtained at each iteration step form a monotonically increasing sequence.

Thus, we can state that:

  • in the chord method the fixed end of the segment is the one where the signs of the function and its second derivative coincide;
  • the approximations x_m to the root x lie on the side where the sign of f(x) does not coincide with the sign of f''(x).

Iterations can be continued until the approximations at the current and previous steps are sufficiently close: |x_m - x_{m-1}| < e.
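The chord formula referenced above was lost in this copy, so here is a hedged sketch of the iteration in Python for the same example equation. Fixing the end b matches the rule just stated, because for f(x) = x^4 - 4 - 2x on [1, 2] both f(b) and f''(x) are positive.

```python
def chord_method(f, a, b, eps=1e-8, max_iter=200):
    """Chord method with fixed end b: each new approximation is the point
    where the chord through (x_n, f(x_n)) and (b, f(b)) crosses the OX axis.
    Stops when |x_m - x_{m-1}| < eps, as in the text."""
    x = a          # moving end; the sequence increases monotonically here
    fb = f(b)
    for _ in range(max_iter):
        x_new = x - f(x) * (b - x) / (fb - f(x))
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

f = lambda x: x**4 - 4 - 2*x      # f(1) < 0 < f(2), f''(x) > 0 on [1, 2]
root = chord_method(f, a=1.0, b=2.0)
```

The approximations climb from a = 1 toward the root from below, in agreement with the second case described above.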

Modified method

The combined method of chords and tangents locates the roots of an equation by approaching them from both sides at once. Closing in on the point where the graph of f(x) crosses OX from two directions refines the solution much faster than either method used separately.

Suppose we need to find the roots of f(x) = 0 on a segment where they exist. Any of the methods described above can be applied, but a combination of them is better: it significantly improves the accuracy of the root.

We consider the case with an initial approximation satisfying the condition that the first and second derivatives have the same sign at that point.

Under these conditions, solving the nonlinear equation by the tangent method with x0 = b finds the root with an excess, while the chord method with fixed end b yields an approximate root with a deficiency.

Formulas used:

The required root x must now be sought in the narrowed interval just obtained. At the next step the combined method is applied to this new segment. Proceeding in this way, we obtain formulas of the form:

If the first and second derivatives have different signs, then, reasoning in a similar way, to clarify the root we obtain the following recurrent formulas:

The stopping condition is the inequality |b_{n+1} - a_{n+1}| < e. In other words, in practice one finds the solution by the two methods simultaneously, checking at each step how close the two results are to each other.

If this inequality holds, then the root of the nonlinear equation on the given segment is taken to be the point exactly halfway between the solutions found at that iteration step.
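A minimal Python sketch of this combined scheme (the exact recurrences in the source were lost, so the standard chord and tangent updates are used; the example function is again x^4 - 4 - 2x): the chord step moves a up toward the root, the tangent step moves b down toward it, and the midpoint is returned once |b - a| < e.

```python
def combined_method(f, df, a, b, eps=1e-10, max_iter=100):
    """Combined chord/tangent method: a approaches the root from below
    (deficiency), b from above (excess); stop when |b - a| < eps and
    take the midpoint of the last two approximations."""
    for _ in range(max_iter):
        a = a - f(a) * (b - a) / (f(b) - f(a))   # chord step from a
        b = b - f(b) / df(b)                     # tangent (Newton) step from b
        if abs(b - a) < eps:
            break
    return (a + b) / 2

f  = lambda x: x**4 - 4 - 2*x
df = lambda x: 4*x**3 - 2                        # derivative, computed by hand here
root = combined_method(f, df, 1.0, 2.0)
```

Because the two ends always bracket the root, the midpoint is guaranteed to be within eps/2 of it once the loop exits via the break.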

The combined method is easy to implement in the TURBO PASCAL environment. If desired, all the calculations can also be carried out in tabular form in Excel.

In the latter case, several columns are allocated for solving the problem using chords and separately for the method proposed by Isaac Newton.

Each row then records the calculations of one iteration step for the two methods. Next to the solution area, a column is set aside in which the modulus of the difference between the values of successive iteration steps is computed for each method. One more column can hold the result of the logical construct IF, used to check whether the stopping condition is satisfied.

Now you know how to solve complex equations. The tangent method, as you have already seen, is implemented quite simply, both in Pascal and Excel. Therefore, you can always determine the roots of an equation that is difficult or impossible to solve using formulas.


Newton's method (Newton's algorithm; also known as the tangent method) is an iterative numerical method for finding a root (zero) of a given function. The method was first proposed by the English physicist, mathematician and astronomer Isaac Newton. (Wikipedia)


Numerical solution of equations and systems of equations consists of approximately determining the root or roots and is used when it is impossible or very laborious to calculate the exact value. (Wikipedia)

Successive approximation method- a method for solving mathematical problems using a sequence of approximations that converges to a solution and is constructed recursively (i.e., each new approximation is calculated based on the previous one; the initial approximation is chosen in ... ... Great Soviet Encyclopedia

In the problem of minimizing a function, a good choice of the initial approximation is of paramount importance. Of course, no general rule can be devised that is satisfactory in all cases, that is, for all possible nonlinear functions; each time one has to find one's own solution. Below we propose several methods for finding rough initial approximations, which in practice can serve as a starting point for the search for satisfactory approximations in a particular problem.

9.6.1. Grid search. This method is especially effective when the number of genuinely nonlinear parameters is small. Functions are often constructed so that when the values of some parameters (which we call nonlinear) are fixed, the remaining parameters enter linearly.

Having then specified lower and upper bounds for the nonlinear parameters, one can enumerate, with a certain step, the points of the resulting grid of values of these nonlinear parameters and identify the linear regression that yields the minimum sum of squares.

As an example, consider the function

Here the actual nonlinear parameter will be . Let's say it is known that . Let h be the step for the parameter. Let's calculate linear regressions

and find the minimum sum of squares for each of them. The smallest of these corresponds to the optimal initial approximation. In principle the step h, which determines the "density" of the grid, can be varied, so that by decreasing h the parameter values can be found to any accuracy.
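The example function itself did not survive in this copy, so the sketch below uses a hypothetical model y = a·exp(-c·x) + b, where c is the only nonlinear parameter: for each grid value of c the model is linear in (a, b), so a and b are found by ordinary least squares and the c with the smallest residual sum of squares is kept.

```python
import math

def grid_search_init(xs, ys, c_lo, c_hi, h):
    """Grid search over the nonlinear parameter c of y = a*exp(-c*x) + b."""
    best = None
    steps = int(round((c_hi - c_lo) / h))
    for k in range(steps + 1):
        c = c_lo + k * h
        u = [math.exp(-c * x) for x in xs]       # regressor for this c
        n, su, sy = len(xs), sum(u), sum(ys)
        suu = sum(v * v for v in u)
        suy = sum(v * y for v, y in zip(u, ys))
        # closed-form two-parameter least squares for y ~ a*u + b
        a = (n * suy - su * sy) / (n * suu - su * su)
        b = (sy - a * su) / n
        rss = sum((y - (a * v + b)) ** 2 for v, y in zip(u, ys))
        if best is None or rss < best[0]:
            best = (rss, a, b, c)
    return best[1], best[2], best[3]             # rough initial (a, b, c)

# synthetic noise-free data generated with a = 2, b = 0.5, c = 1.3
xs = [0.1 * i for i in range(20)]
ys = [2 * math.exp(-1.3 * x) + 0.5 for x in xs]
a0, b0, c0 = grid_search_init(xs, ys, c_lo=0.5, c_hi=2.0, h=0.05)
```

On this noise-free data the grid contains the true c, so the procedure recovers all three parameters; with real data it only supplies a rough starting point for the iterative minimization.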

9.6.2. Model transformation.

Sometimes, by some transformation, the model can be reduced to linear or the number of actual nonlinear parameters can be reduced (see section 6.2.3). Let us show how this can be achieved using the example of a logistic curve

Performing the inverse transformation on the corresponding regression equations, we obtain

Introducing new notation, we arrive at a new function, the number of linear parameters of which has increased from one to two. The estimate for the remaining nonlinear parameter in the new model can be found, for example, using the previous (grid search) method.
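The logistic formulas were lost in this copy, so the sketch below assumes the common form y = a / (1 + b·e^(-cx)). Taking reciprocals gives 1/y = 1/a + (b/a)·e^(-cx): for a fixed c this is linear in α = 1/a and β = b/a, i.e. two linear parameters instead of one, matching the remark above; c itself could then be found by the grid search of 9.6.1.

```python
import math

# hypothetical logistic curve and noise-free data: y = a / (1 + b*exp(-c*x))
a, b, c = 4.0, 3.0, 0.8
xs = [0.5 * i for i in range(10)]
ys = [a / (1 + b * math.exp(-c * x)) for x in xs]

# transformed model: z = 1/y = alpha + beta * exp(-c*x), linear in alpha, beta
u = [math.exp(-c * x) for x in xs]
z = [1.0 / y for y in ys]
n, su, sz = len(xs), sum(u), sum(z)
suu = sum(v * v for v in u)
suz = sum(v * w for v, w in zip(u, z))
beta = (n * suz - su * sz) / (n * suu - su * su)
alpha = (sz - beta * su) / n
a_hat, b_hat = 1.0 / alpha, beta / alpha   # back to the original parameters
```

On noise-free data the transform recovers a and b exactly; with real data, the remark below about how the transformation changes the error structure applies.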

Here it is appropriate to make the following remark about transformations of regression models. It should be borne in mind that the error that was additive in the original equation will, generally speaking, no longer be additive after the transformation.

Using the Taylor series expansion and denoting the transformation by, we obtain, neglecting terms of order

It follows that

The last equality can be taken as a basis for analyzing the problem with a transformed model.

9.6.3. Dividing the sample into subsamples.

To find the initial approximation, you can divide the entire sample into subsamples (with approximately equal volumes), where is the number of unknown parameters. For each subsample, we find the averages over y and over X, which we denote by m, respectively. Let us solve the system of nonlinear equations for

The solution to this system will be the initial approximation of the parameters. Obviously, in order for this method to “work”, it is necessary that this system of nonlinear equations be solved quite easily, for example analytically.

9.6.4. Taylor series expansion in independent variables.

The basis of iterative minimization of the sum of squares is the expansion of the regression function in a Taylor series to terms linear in the parameters. To find a rough initial approximation, it is sometimes useful to approximate the regression by expanding it in a Taylor series in the independent variables. For simplicity we assume the independent variable is one-dimensional. Let be the average value; then approximately

We denote , thus we arrive at the linear model

Let be the least squares estimates of the parameters of this linear regression. As initial approximations, we will take the solution of a nonlinear system of equations with respect to