"Give me one bone and I will restore the animal"

Georges Cuvier

Georges Cuvier published a five-volume work on comparative animal anatomy, Leçons d'anatomie comparée (after his death, his students published an expanded edition in eight volumes).

One of the scientist's major achievements was his demonstration of how closely all the structural and functional features of the body are connected and determine one another:

“Each animal is adapted to the environment in which it lives, finds food, hides from enemies, and takes care of its offspring. If the animal is a herbivore, its front teeth are adapted to pluck grass, and its molars to grind it. Massive teeth that grind grass require large and powerful jaws and corresponding chewing muscles. Such an animal must therefore have a heavy, large head, and since it has neither sharp claws nor long fangs to fight off a predator, it defends itself with its horns. To support the heavy head and horns, a strong neck is needed, with large cervical vertebrae bearing long processes to which the muscles attach. To digest a large amount of low-nutrient grass, a voluminous stomach and a long intestine are required, and hence a large belly and wide ribs. This is how the appearance of a herbivorous mammal emerges.” “An organism,” said Cuvier, “is a coherent whole. Individual parts of it cannot be changed without causing changes in the others.” Cuvier called this constant connection of organs with each other “the relationship between the parts of the organism.”

The task of morphology is to reveal the patterns to which the structure of an organism is subject, and the method that allows us to establish the canons and norms of organization is the systematic comparison of the same organ (or the same system of organs) across all branches of the animal kingdom. What does this comparison give? It establishes precisely, first, the place occupied by a given organ in the animal's body; second, all the modifications this organ undergoes at the various levels of the zoological ladder; and third, the relationships between individual organs, on the one hand, and between those organs and the body as a whole, on the other. It was this relationship that Cuvier designated with the term “organic correlations” and formulated as follows: “Each organism forms a single closed whole, in which not one of the parts can change without the others also changing.”

“A change in one part of the body,” he says in another of his works, “affects the change in all others.”

Any number of examples can be given to illustrate the “law of correlation.” And this is not surprising, says Cuvier: after all, the entire organization of animals rests on it. Take any large predator: the connection between the individual parts of its body is striking in its obviousness. Keen hearing, sharp vision, a well-developed sense of smell, strong limb muscles that allow it to leap at its prey, retractable claws, agility and speed of movement, strong jaws, sharp teeth, a simple digestive tract, and so on: who does not know these correlatively developed features of the lion, tiger, leopard, or panther? And look at any bird: its entire organization constitutes a “single, closed whole,” and in this case that unity manifests itself as an adaptation to life in the air, to flight. The wing, the muscles that move it, the highly developed ridge (keel) on the sternum, the cavities in the bones, the peculiar structure of the lungs that form air sacs, the high tone of cardiac activity, the well-developed cerebellum that regulates the bird's complex movements, and so on. Try to change something in this complex of structural and functional features of the bird: any such change, says Cuvier, inevitably tells, to one degree or another, on if not all, then many of the bird's other features.

In parallel with correlations of a morphological nature, there are physiological correlations. The structure of an organ is related to its functions. Morphology is not divorced from physiology. Everywhere in the body, along with the correlation, another pattern is observed. Cuvier qualifies it as a subordination of organs and a subordination of functions.

The subordination of organs is associated with the subordination of the functions those organs perform. Both, however, are equally related to the animal's way of life. Everything here must be in a certain harmonious balance. Once this relative harmony is shaken, the continued existence of an animal that has fallen victim to a disturbed balance between its organization, its functions, and its conditions of existence becomes unthinkable. “During life, organs are not just united,” writes Cuvier, “but they also influence each other and work together in the name of a common goal. There is not a single function that does not require the help and participation of almost all the other functions and does not feel, to a greater or lesser extent, the degree of their energy […] It is obvious that proper harmony between the mutually acting organs is a necessary condition for the existence of the animal to which they belong, and that if any of these functions is changed out of conformity with the changes in the other functions of the organism, it cannot exist.”

So, familiarity with the structure and functions of several organs - and often just one organ - allows us to judge not only the structure, but also the way of life of the animal. And vice versa: knowing the conditions of existence of a particular animal, we can imagine its organization. However, Cuvier adds, it is not always possible to judge the organization of an animal on the basis of its lifestyle: how, in fact, can one connect the rumination of an animal with the presence of two hooves or horns?

The extent to which Cuvier was imbued with an awareness of the constant connectedness of the parts of an animal's body can be seen from the following anecdote. One of his students decided to play a joke on him. He dressed up in the skin of a wild sheep, entered Cuvier's bedroom at night and, standing by his bed, shouted in a wild voice: “Cuvier, Cuvier, I will eat you!” The great naturalist woke up, stretched out his hand, felt the horns and, examining the hooves in the semi-darkness, calmly answered: “Hooves, horns: a herbivore; you can't eat me!”

Having created a new field of knowledge, the comparative anatomy of animals, Cuvier paved new paths of research in biology. Thus the triumph of evolutionary teaching was prepared.

Samin D. K., 100 Great Scientific Discoveries, Moscow: Veche, 2008, pp. 334-336.

1) correlation analysis as a means of obtaining information;

2) features of the procedures for determining linear and rank correlation coefficients.

Correlation analysis (from the Latin correlatio, “relationship”) is used to test hypotheses about the statistical dependence of the values of two or more variables in cases where the researcher can record (measure) them but not control (change) them.

When an increase in the level of one variable is accompanied by an increase in the level of another, we speak of a positive correlation. If an increase in one variable is accompanied by a decrease in the level of the other, we speak of a negative correlation. In the absence of a connection between the variables, we are dealing with a zero correlation.
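These three cases can be illustrated with a minimal sketch; the series below are invented for illustration, and the coefficient is computed with the standard Pearson formula discussed later in the text:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson linear correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))  # +1.0: positive correlation
print(pearson_r(x, [10, 8, 6, 4, 2]))  # -1.0: negative correlation
print(pearson_r(x, [3, 5, 3, 5, 3]))   #  0.0: zero correlation
```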

The variables here can be test data, observations, experimental results, socio-demographic characteristics, physiological parameters, behavioral characteristics, and so on. For example, the method allows a quantitative assessment of the relationship between such characteristics as: success in studying at a university and the degree of professional achievement after graduation, level of aspiration and stress, number of children in a family and their intelligence, personality traits and professional orientation, duration of loneliness and the dynamics of self-esteem, anxiety and intragroup status, social adaptation and aggressiveness in conflict...

As an auxiliary tool, correlation procedures are indispensable in the construction of tests (for determining the validity and reliability of the measurement), as well as in pilot studies testing the suitability of experimental hypotheses (the absence of a correlation allows us to reject the assumption of a cause-and-effect relationship between the variables).

The growing interest of psychological science in the potential of correlation analysis is due to a number of reasons. First, it becomes possible to study a wide range of variables whose experimental verification is difficult or impossible: for ethical reasons, for instance, one cannot conduct experimental studies of suicide, drug addiction, destructive parental influences, or the influence of authoritarian sects. Second, valuable generalizations can be obtained in a short time from data on large numbers of studied individuals. Third, many phenomena are known to change their specific character during rigorous laboratory experiments, whereas correlation analysis lets the researcher work with information obtained under conditions as close as possible to real ones. Fourth, a statistical study of the dynamics of a particular dependence often creates the prerequisites for reliable prediction of psychological processes and phenomena.

However, it should be borne in mind that the use of the correlation method is also associated with very significant fundamental limitations.

Thus, it is known that variables may well correlate even in the absence of a cause-and-effect relationship with each other.

This can happen by chance, because of sample heterogeneity, or because the research tools are inadequate to the tasks at hand. Such a spurious correlation can become, say, “proof” that women are more disciplined than men, that teenagers from single-parent families are more prone to delinquency, that extroverts are more aggressive than introverts, and so on. Indeed, it is enough to select into one group men working in higher education and, into the other, women from, say, the service sector, and then to test both groups on their knowledge of scientific methodology, to obtain an apparent dependence of the quality of knowledge on gender. Can such a correlation be trusted?

Even more often, perhaps, in research practice there are cases when both variables change under the influence of some third or even several hidden determinants.

If we denote the variables with numbers and the directions from causes to effects with arrows, several possible configurations emerge: the first variable may determine the second; the second may determine the first; both may be effects of a third, hidden variable; or the variables may be linked through a longer causal chain, and so on. [In the source, these options were shown as arrow diagrams over the variables 1, 2, 3, 4; the arrows are not reproduced here.]

Inattention to the influence of real factors not taken into account by researchers has made it possible to present “justifications” that intelligence is a purely inherited formation (the psychogenetic approach) or, on the contrary, that it is due solely to the social components of development (the sociogenetic approach). It should be noted that in psychology, phenomena with a single, unambiguous root cause are not common.

In addition, the fact that variables are interconnected does not make it possible to identify cause and effect based on the results of a correlation study, even in cases where there are no intermediate variables.

For example, when studying the aggressiveness of children, it was found that children prone to cruelty are more likely than their peers to watch films with scenes of violence. Does this mean that such scenes develop aggressive reactions or, on the contrary, such films attract the most aggressive children? It is impossible to give a legitimate answer to this question within the framework of a correlation study.

It is necessary to remember: the presence of correlations is not an indicator of the severity and direction of cause-and-effect relationships.

In other words, having established the correlation of variables, we can judge not about determinants and derivatives, but only about how closely interrelated changes in variables are and how one of them reacts to the dynamics of the other.

When using this method, one or another type of correlation coefficient is used. Its numerical value usually varies from -1 (inverse dependence of variables) to +1 (direct dependence). In this case, a zero value of the coefficient corresponds to a complete absence of interrelation between the dynamics of the variables.

For example, a correlation coefficient of +0.80 reflects a more pronounced relationship between variables than a coefficient of +0.25. Likewise, a relationship characterized by a coefficient of -0.95 is much closer than one characterized by +0.80 or +0.25 (the minus sign only tells us that an increase in one variable is accompanied by a decrease in the other).

In the practice of psychological research, correlation coefficients usually do not reach +1 or -1; one can speak only of a greater or lesser degree of approximation to these values. A correlation is often considered strong if its coefficient exceeds 0.60 in absolute value, while coefficients in the range from -0.30 to +0.30 are, as a rule, regarded as indicating an insufficient correlation.

However, it should be stipulated at once that interpreting the presence of a correlation always involves determining the critical values of the corresponding coefficient. Let us consider this point in more detail.

It may well turn out that a correlation coefficient of +0.50 will in some cases not be considered reliable, while under certain conditions a coefficient of +0.30 will characterize an undoubted correlation. Much here depends on the length of the series of variables (i.e., on the number of compared indicators) and on the chosen significance level (i.e., on the accepted probability of error in the calculations).

After all, on the one hand, the larger the sample, the smaller the coefficient that will be considered reliable evidence of a correlation. On the other hand, if we are willing to accept a substantial probability of error, we may treat quite a small value of the correlation coefficient as meaningful.

There are standard tables of critical values of correlation coefficients. If the coefficient we obtain is lower than the value indicated in the table for the given sample at the established significance level, it is considered statistically unreliable.

When working with such a table, you should know that the threshold significance level in psychological research is usually taken to be 0.05 (five percent). The risk of error is, of course, even smaller when this probability is 1 in 100 or, better still, 1 in 1000.

So, it is not the value of the calculated correlation coefficient itself that serves as the basis for assessing the quality of the relationship between variables, but a statistical decision about whether the calculated coefficient indicator can be considered reliable.
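The logic of such a statistical decision can also be illustrated without a printed table. The permutation test below is a sketch on invented data, not the table-lookup procedure described in the text: one series is shuffled many times, and we count how often chance alone produces a coefficient at least as extreme as the observed one:

```python
import random
from math import sqrt

def pearson_r(x, y):
    """Pearson linear correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

x = list(range(10))
y = [0, 2, 1, 4, 3, 6, 5, 8, 7, 9]   # strongly, though imperfectly, related to x
observed = pearson_r(x, y)

random.seed(0)                       # fixed seed so the sketch is reproducible
trials = 2000
extreme = sum(
    1 for _ in range(trials)
    if abs(pearson_r(x, random.sample(y, len(y)))) >= abs(observed)
)
p_value = extreme / trials           # share of shuffles at least this extreme
print(round(observed, 3), p_value < 0.05)
```

A p-value below the conventional 0.05 threshold corresponds to the statistical decision that the coefficient is reliable; raising the bar to 0.01 or 0.001 reduces the risk of error, exactly as with the tabulated critical values.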

Knowing this, let us turn to studying specific methods for determining correlation coefficients.

A significant contribution to the development of the statistical apparatus of correlation studies was made by the English mathematician and biologist Karl Pearson (1857-1936), who at one time was engaged in testing the evolutionary theory of Charles Darwin.

The designation of the Pearson correlation coefficient (r) comes from the concept of regression: an operation that reduces the set of particular dependencies between individual values of the variables to their continuous (linear) averaged dependence.

The formula for calculating the Pearson coefficient is as follows:

r = ∑(x - x̄)(y - ȳ) / √( ∑(x - x̄)² · ∑(y - ȳ)² ),

where x, y are particular values of the variables, ∑ (sigma) denotes summation, and x̄, ȳ are the average values of the same variables. Let us consider how to use the table of critical values of Pearson coefficients. As we see, the number of degrees of freedom is indicated in its left column. When determining the row we need, we proceed from the fact that the required number of degrees of freedom equals n - 2, where n is the number of data points in each of the correlated series. The columns on the right give the absolute values of the coefficients.

[The standard table lists critical values of the Pearson coefficient: its left column gives the number of degrees of freedom, and the remaining columns correspond to significance levels.]

Moreover, the further to the right the column of numbers is located, the higher the reliability of the correlation and the more confident the statistical decision about its significance.

If, for example, we have two correlated series of 10 values each, and the Pearson formula yields a coefficient of +0.65, then it will be considered significant at the 0.05 level (since it is greater than the critical value of 0.632 for a probability of 0.05 and less than the critical value of 0.715 for a probability of 0.02). This level of significance indicates a substantial likelihood that the correlation would recur in similar studies.

Now let us give an example of calculating the Pearson correlation coefficient. Suppose we need to determine the nature of the connection between the performance of two tests by the same persons. The data for the first test are designated as x, and for the second as y.

To simplify the calculations, auxiliary identities are introduced for the sums involved. The subjects' results on the two tests were given in test scores.

[The table of the subjects' scores and the intermediate calculations leading to the coefficient are not reproduced here.]
Note that the number of degrees of freedom in our case is 10. Referring to the table of critical values of Pearson coefficients, we find that with this number of degrees of freedom, at a significance level of 0.001, any correlation coefficient higher than 0.823 is considered reliable. This gives us the right to treat the obtained coefficient as evidence of an undoubted correlation between the series x and y.
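Since the data table of this worked example is not reproduced above, here is a sketch of the same procedure on hypothetical scores of ten subjects, reusing the critical value 0.632 (df = 8, significance level 0.05) quoted earlier:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's r computed directly from the formula in the text."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Hypothetical scores of ten subjects on two tests (not the source's data).
x = [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y = [7, 6, 9, 8, 11, 10, 13, 12, 15, 14]

r = pearson_r(x, y)
df = len(x) - 2          # degrees of freedom: n - 2 = 8
CRITICAL_R_05 = 0.632    # critical value for df = 8 at the 0.05 level (from the text)
print(round(r, 3), r > CRITICAL_R_05)   # 0.939 True: the correlation is significant
```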

The use of the linear correlation coefficient becomes unjustified when the calculations are made on an ordinal rather than an interval measurement scale. In that case, rank correlation coefficients are used. The results are, of course, less precise, since it is not the quantitative characteristics themselves that are compared, but only the order in which they follow one another.

Among the rank correlation coefficients in the practice of psychological research, the one proposed by the English scientist Charles Spearman (1863-1945), the famous developer of the two-factor theory of intelligence, is often used.

Using an appropriate example, let's look at the steps required to determine Spearman's rank correlation coefficient.

The formula for calculating it is as follows:

ρ = 1 - 6∑d² / (n(n² - 1)),

where d is the difference between the ranks of each pair of values from the series x and y, and n is the number of compared pairs.

Let x and y be indicators of the subjects' success in performing certain types of activity (assessments of individual achievements). Suppose we have the following data:

[The table of the subjects' scores and their ranks is not reproduced here.]
Note that the indicators are first ranked separately within the series x and within the series y. If several equal values are encountered, they are assigned the same, averaged rank.

Then the difference in ranks is determined for each pair. The sign of the difference does not matter, since the formula squares it.

In our example, the sum of the squared rank differences ∑d² is equal to 178. Substituting this number into the formula yields the coefficient.

As we can see, the correlation coefficient in this case is negligibly small. Nevertheless, let us compare it with the critical values of the Spearman coefficient from the standard table.

Conclusion: there is no correlation between the indicated series of variables x and y.
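The ranking-and-differencing procedure above can be sketched as follows. The scores are hypothetical; tied values receive the average rank, as described in the text:

```python
def average_ranks(values):
    """Ascending ranks (1-based); tied values share the average of their positions."""
    ordered = sorted(values)
    ranks = []
    for v in values:
        first = ordered.index(v) + 1           # first position occupied by v
        last = first + ordered.count(v) - 1    # last position occupied by v
        ranks.append((first + last) / 2)
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation via the d-squared formula from the text."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

x = [3, 5, 5, 7, 9]    # the two 5s share the average rank (2 + 3) / 2 = 2.5
y = [4, 6, 5, 8, 10]
print(average_ranks(x))                # [1.0, 2.5, 2.5, 4.0, 5.0]
print(round(spearman_rho(x, y), 3))    # 0.975: the rank orders almost coincide
```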

It should be noted that rank correlation procedures give the researcher the opportunity to determine the relationships not only of quantitative but also of qualitative characteristics, provided, of course, that the latter can be ordered by increasing severity (i.e., ranked).

We have examined what are perhaps the most common practical methods for determining correlation coefficients. Other, more complex or less commonly used variants of the method can, if necessary, be found in manuals devoted to measurement in scientific research.

BASIC CONCEPTS: correlation; correlation analysis; Pearson linear correlation coefficient; Spearman rank correlation coefficient; critical values of correlation coefficients.

Questions for discussion:

1. What are the possibilities of correlation analysis in psychological research? What can and cannot be detected using this method?

2. What is the sequence of actions when determining the Pearson linear correlation coefficients and Spearman rank correlation coefficients?

Exercise 1:

Determine whether the following indicators of correlation between variables are statistically significant:

a) Pearson coefficient +0.445 for data from two tests in a group of 20 subjects;

b) Pearson coefficient -0.810 with the number of degrees of freedom equal to 4;

c) Spearman coefficient +0.415 for a group of 26 people;

d) Spearman coefficient +0.318 with the number of degrees of freedom equal to 38.

Exercise 2:

Determine the linear correlation coefficient between two series of indicators.

Row 1: 2, 4, 5, 5, 3, 6, 6, 7, 8, 9

Row 2: 2, 3, 3, 4, 5, 6, 3, 6, 7, 7

Exercise 3:

Draw conclusions about the statistical reliability and degree of expression of the correlation relationships with the number of degrees of freedom equal to 25, if it is known that the sum of squared rank differences ∑d² is: a) 1200; b) 1555; c) 2300.

Exercise 4:

Perform the entire sequence of actions necessary to determine the rank correlation coefficient between very general indicators of schoolchildren's performance (“excellent student,” “good student,” etc.) and their scores on a mental development test (MDT). Interpret the obtained indicators.

Exercise 5:

Using the linear correlation coefficient, calculate the test-retest reliability of the intelligence test at your disposal. Perform a study in a student group with a time interval between tests of 7-10 days. Formulate your conclusions.

A living organism is a single whole in which all parts and organs are interconnected. When the structure and functions of one organ change in the evolutionary process, this inevitably entails corresponding or, as they say, correlative changes in other organs related to the first physiologically, morphologically, through heredity, etc.

The law of correlation, or of the correlative development of organs, was discovered by G. Cuvier (1812). Using this law, it is often possible to reconstruct an entire fossil organism from its parts, for example, from parts of the skeleton.

Let us give examples of correlative dependencies. One of the most significant, progressive changes in the evolution of arthropods was the appearance of a powerful external cuticular skeleton. This inevitably affected many other organs - the continuous skin-muscular sac could not function with a hard outer shell and broke up into separate muscle bundles; the secondary body cavity lost its supporting significance and was replaced by a mixed body cavity (mixocoel) of a different origin, which performs mainly a trophic function; body growth became periodic and began to be accompanied by molting, etc. In insects, there is a clear correlation between the respiratory organs and blood vessels. With the strong development of tracheas that deliver oxygen directly to the place of its consumption, blood vessels become redundant and disappear. An equally clear correlation is observed in

The purpose of correlation analysis is to obtain an estimate of the strength of the connection between random variables (characteristics) that characterize some real process.
Problems of correlation analysis:
a) Measuring the degree of coherence (closeness, strength, severity, intensity) of two or more phenomena.
b) Selection of factors that have the most significant impact on the resulting attribute, based on measuring the degree of connectivity between phenomena. Factors that are significant in this aspect are used further in regression analysis.
c) Detection of unknown causal relationships.

The forms of manifestation of relationships are very diverse. The most common types are functional (complete) and correlation (incomplete) connection.
Correlation manifests itself on average over mass observations, when a given value of the factor variable corresponds to a whole series of probabilistic values of the dependent variable. A relationship is called functional if each value of the factor characteristic corresponds to a well-defined, non-random value of the resultant characteristic.
A visual representation of a correlation table is the correlation field: a graph in which the X values are plotted on the abscissa, the Y values on the ordinate, and each (X, Y) combination is shown as a dot. The presence of a connection can be judged from the location of the dots.
Indicators of connection closeness make it possible to characterize the dependence of the variation of the resulting trait on the variation of the factor trait.
A more refined indicator of the closeness of a correlation connection is the linear correlation coefficient. When calculating this indicator, not only the deviations of individual values of a characteristic from the average are taken into account, but also the magnitudes of those deviations.

The key questions of this topic are the equations of the regression relationship between the effective characteristic and the explanatory variable, the least squares method for estimating the parameters of the regression model, analyzing the quality of the resulting regression equation, constructing confidence intervals for predicting the values ​​of the effective characteristic using the regression equation.

Example 2


System of normal equations:
a·n + b·∑x = ∑y
a·∑x + b·∑x² = ∑xy
For our data, the system of equations has the form
30a + 5763b = 21460
5763a + 1200261b = 3800360
From the first equation we express a and substitute it into the second equation; we get b = -3.46, a = 1379.33.
Regression equation:
y = -3.46x + 1379.33
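As a check, the system can be solved programmatically. The sketch below uses the closed-form least-squares expressions for b and a, which are algebraically equivalent to eliminating a from the two normal equations:

```python
# Sums from the example above: n, ∑x, ∑y, ∑x², ∑xy.
n, sum_x, sum_y, sum_x2, sum_xy = 30, 5763, 21460, 1200261, 3800360

# b = (n·∑xy - ∑x·∑y) / (n·∑x² - (∑x)²), then a = (∑y - b·∑x) / n.
b = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
a = (sum_y - b * sum_x) / n
print(round(b, 2), round(a, 2))   # -3.46 1379.33, matching the equation above
```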

2. Calculation of regression equation parameters.
Sample means:
x̄ = ∑x / n = 5763 / 30 = 192.1
ȳ = ∑y / n = 21460 / 30 ≈ 715.33
Sample variance of x:
S²x = ∑x² / n - x̄² = 1200261 / 30 - 192.1² ≈ 3106.3
Standard deviation of x:
Sx = √3106.3 ≈ 55.7
(∑y² is not reproduced in the source; the standard deviation Sy ≈ 260.4 is the value consistent with the correlation coefficient r ≈ -0.74 obtained below.)
1.1. Correlation coefficient.
Covariance:
cov(x, y) = ∑xy / n - x̄·ȳ = 3800360 / 30 - 192.1 · 715.33 ≈ -10736.8
We calculate the indicator of the closeness of the connection: the sample linear correlation coefficient, computed by the formula
r_xy = cov(x, y) / (Sx · Sy) ≈ -10736.8 / (55.7 · 260.4) ≈ -0.74,
where Sx and Sy are the standard deviations of x and y.
The linear correlation coefficient takes values from -1 to +1.
Connections between characteristics can be weak or strong (close). Their strength is assessed on the Chaddock scale:
0.1 < |r_xy| < 0.3: weak;
0.3 < |r_xy| < 0.5: moderate;
0.5 < |r_xy| < 0.7: noticeable;
0.7 < |r_xy| < 0.9: high;
0.9 < |r_xy| < 1: very high.
In our example, the connection between characteristic Y and factor X is high and inverse.
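The scale can be expressed as a small helper function. Assigning boundary values to the higher category is a convention assumed here, not something fixed by the scale itself:

```python
def chaddock(r):
    """Verbal strength of a correlation coefficient on the Chaddock scale."""
    strength = abs(r)   # the scale grades the magnitude; the sign gives the direction
    for bound, label in [(0.9, "very high"), (0.7, "high"), (0.5, "noticeable"),
                         (0.3, "moderate"), (0.1, "weak")]:
        if strength >= bound:
            return label
    return "negligible"

print(chaddock(-0.74))   # high (and inverse, because the coefficient is negative)
```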
In addition, the linear pair correlation coefficient can be determined through the regression coefficient b: r_xy = b · (Sx / Sy).

1.2. Regression equation (estimation of the regression equation).

The linear regression equation is y = -3.46 x + 1379.33

The coefficient b = -3.46 shows the average change in the resultant indicator (in units of y) per unit increase or decrease of the factor x. In this example, when x increases by 1 unit, y decreases by 3.46 on average.
The coefficient a = 1379.33 formally shows the predicted level of y at x = 0, but this is meaningful only if x = 0 is close to the sample values of x.
If x = 0 is far from the sample values of x, a literal interpretation may lead to incorrect results; even if the regression line describes the observed sample values fairly accurately, there is no guarantee that this will remain so when extrapolating to the left or right.
By substituting the appropriate values of x into the regression equation, we can determine the aligned (predicted) values of the resultant indicator y(x) for each observation.
The sign of the regression coefficient b determines the direction of the relationship between y and x (b > 0: direct relationship; b < 0: inverse). In our example, the relationship is inverse.
1.3. Elasticity coefficient.
It is not advisable to use regression coefficients (in our example, b) to assess the influence of factors on the resultant characteristic directly when the units of measurement of the resultant indicator y and the factor characteristic x differ.
For these purposes, elasticity coefficients and beta coefficients are calculated.
The average elasticity coefficient E shows by what percentage, on average, the result y will change from its average value when the factor x changes by 1% of its average value.
The elasticity coefficient is found by the formula:

E = b · (x̄ / ȳ) = -3.46 · (192.1 / 715.33) ≈ -0.93
The elasticity coefficient is less than 1 in absolute value. Therefore, if X changes by 1%, Y will change by less than 1%. In other words, the influence of X on Y is not substantial.
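For the worked example, the elasticity can be recomputed from the sums already given (n = 30, ∑x = 5763, ∑y = 21460), with b taken from the regression equation above:

```python
n, sum_x, sum_y, b = 30, 5763, 21460, -3.46

x_mean = sum_x / n         # 192.1
y_mean = sum_y / n         # about 715.33
E = b * (x_mean / y_mean)  # elasticity: % change in y per 1% change in x
print(round(E, 2))         # -0.93
```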
The beta coefficient shows by what fraction of its standard deviation the average value of the resulting characteristic will change when the factor characteristic changes by one of its standard deviations, with the remaining independent variables fixed at a constant level:

β = b · (Sx / Sy) ≈ -3.46 · (55.7 / 260.4) ≈ -0.74

That is, an increase in x by one standard deviation Sx leads, on average, to a decrease in Y by 0.74 of the standard deviation Sy.
1.4. Approximation error.
Let us evaluate the quality of the regression equation using the average approximation error, the average relative deviation of the calculated values from the actual ones:

Ā = (1/n) · ∑ |(y - y(x)) / y| · 100%

Since the error is less than 15%, this equation can be used as a regression.
1.5. Analysis of variance.
The purpose of analysis of variance is to decompose the variance of the dependent variable:
∑(yᵢ - ȳ)² = ∑(y(x) - ȳ)² + ∑(y - y(x))²
where
∑(yᵢ - ȳ)² is the total sum of squared deviations;
∑(y(x) - ȳ)² is the sum of squared deviations due to regression (“explained” or “factorial”);
∑(y - y(x))² is the residual sum of squared deviations.
For a linear connection, the theoretical correlation ratio is equal to the correlation coefficient r_xy.
For any form of dependence, the closeness of the connection is determined using the multiple correlation coefficient:

R = √( 1 - ∑(y - y(x))² / ∑(y - ȳ)² )
This coefficient is universal, as it reflects the closeness of the connection and the accuracy of the model, and can also be used for any form of connection between variables. When constructing a one-factor correlation model, the multiple correlation coefficient is equal to the pair correlation coefficient r xy.
1.6. Coefficient of determination.
The square of the (multiple) correlation coefficient is called the coefficient of determination; it shows the proportion of the variation in the resultant attribute explained by the variation in the factor attribute.
Most often, when interpreting the coefficient of determination, it is expressed as a percentage.
R² = (-0.74)² = 0.5413
That is, 54.13% of the variation in y is explained by the variation in x; the accuracy of the fit of the regression equation is average. The remaining 45.87% of the variation in Y is explained by factors not taken into account in the model.
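The decomposition and the coefficient of determination can be verified on a small hypothetical sample: fit the line by least squares, split the total sum of squares, and confirm that the explained share equals the squared correlation coefficient:

```python
# Hypothetical data (not from the example above).
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
a0 = my - b * mx                      # intercept
fitted = [a0 + b * v for v in x]      # y(x), the aligned values

ss_total = sum((v - my) ** 2 for v in y)                # total sum of squares
ss_reg = sum((f - my) ** 2 for f in fitted)             # explained by regression
ss_res = sum((v - f) ** 2 for v, f in zip(y, fitted))   # residual

r2 = ss_reg / ss_total                # coefficient of determination
print(round(r2, 4))                   # 0.7273: about 73% of the variation explained
```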


Page 17. Remember

Jean-Baptiste Lamarck. He mistakenly believed that all organisms strive for perfection; on this view, for example, a cat would strive to become a human. His other mistake was that he considered only the external environment to be a factor of evolution.

2. What biological discoveries were made by the middle of the 19th century?

The most significant events of the first half of the 19th century were the formation of paleontology and of the biological foundations of stratigraphy, the emergence of the cell theory, the formation of comparative anatomy and comparative embryology, the development of biogeography, and the widespread dissemination of transformist ideas. The central events of the second half of the 19th century were the publication of “The Origin of Species” by Charles Darwin and the spread of the evolutionary approach across many biological disciplines (paleontology, systematics, comparative anatomy and comparative embryology), the formation of phylogenetics, the development of cytology and microscopic anatomy as well as of experimental physiology and experimental embryology, the formation of the concept of a specific pathogen of infectious diseases, and the proof that spontaneous generation of life is impossible under present-day natural conditions.

Page 21. Questions for review and assignments.

1. What geological data served as a prerequisite for Charles Darwin’s evolutionary theory?

The English geologist C. Lyell proved the inconsistency of J. Cuvier's ideas about sudden catastrophes changing the surface of the Earth, and substantiated the opposite point of view: the surface of the planet changes gradually, continuously under the influence of ordinary everyday factors.

2. Name the discoveries in biology that contributed to the formation of Charles Darwin’s evolutionary views.

The following biological discoveries contributed to the formation of Charles Darwin's views. T. Schwann created the cell theory, which postulated that living organisms consist of cells whose general features are the same in all plants and animals; this served as strong evidence of the unity of origin of the living world. K. M. Baer showed that the development of all organisms begins with the egg, and that at the early stages of embryonic development vertebrates belonging to different classes show a clear similarity of embryos. While studying the structure of vertebrates, J. Cuvier established that all the organs of an animal are parts of one integral system: the structure of each organ corresponds to the plan of structure of the whole organism, and a change in one part of the body must cause changes in other parts.

3. Characterize the natural scientific prerequisites for the formation of Charles Darwin’s evolutionary views.

1. Heliocentric system.

2. Kant-Laplace theory.

3. Law of conservation of matter.

4. Achievements of descriptive botany and zoology.

5. Great geographical discoveries.

6. Discovery of the law of germinal similarity by K. Baer: “Embryos exhibit a certain similarity within the type.”

7. Achievements in the field of chemistry: Wöhler synthesized urea, Butlerov synthesized carbohydrates, and Mendeleev created the periodic table.

8. Cell theory of T. Schwann.

9. A large number of paleontological finds.

10. Expedition material of Charles Darwin.

Thus, scientific facts collected in various fields of natural science contradicted previously existing theories of the origin and development of life on Earth. The English scientist Charles Darwin was able to correctly explain and generalize them, creating the theory of evolution.

4. What is the essence of J. Cuvier’s correlation principle? Give examples.

This is the law of the relationship between the parts of a living organism: all parts of the body are naturally interconnected, so if any part of the body changes, corresponding changes occur in other parts (organs or organ systems). Cuvier is the founder of comparative anatomy and paleontology. He reasoned, for example, that if an animal has a large head, it should have horns to defend itself from enemies; if it has horns, it has no fangs and is therefore a herbivore; if it is a herbivore, it has a complex multi-chambered stomach; and since plant food has little energy value, it must also have a very long intestine, and so on.

5. What role did the development of agriculture play in the formation of evolutionary theory?

In agriculture, various methods of improving old breeds and developing new, more productive breeds of animals and high-yielding varieties of plants came into ever wider use, which undermined the belief in the immutability of living nature. These advances strengthened Charles Darwin's evolutionary views and helped him establish the principles of selection that underlie his theory.