Qualitative methods (ethnographic and historical research as methods of qualitative analysis of local microsocieties, the case study method, the biographical method, the narrative method) rely on the semantic interpretation of data. When qualitative methods are used, there is no link of formalized mathematical operations between the stage of obtaining primary data and the stage of substantive analysis, which distinguishes them from quantitative methods - the widely known and widely used methods of statistical data processing.

However, qualitative research also includes certain quantitative methods of collecting and processing information: content analysis, observation, interviewing, etc.

When making important decisions, the so-called “decision tree” or “goal tree” is used to select the best course of action from the available options; it is a schematic description of the decision-making problem. Structural diagrams of goals can be presented in tabular or graph form. The graph form has a number of advantages over the tabular one: firstly, it allows information to be recorded and processed most economically; secondly, a development algorithm can be created quickly; thirdly, the graph form is highly visual. The “tree of goals” serves as the basis for selecting the most preferable alternatives, as well as for assessing the state of the systems being developed and their relationships.
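As an illustration only, a goal tree can be represented as a simple nested data structure. The sketch below is a minimal Python example; the goal names, weights and traversal logic are hypothetical and not taken from the text.

    # Minimal sketch of a "goal tree": each node holds a goal and its subgoals.
    # The goals and weights below are hypothetical illustrations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Goal:
        name: str
        weight: float = 1.0                      # relative importance of this subgoal
        subgoals: List["Goal"] = field(default_factory=list)

    def print_tree(goal: Goal, level: int = 0) -> None:
        """Print the goal hierarchy, one level of indentation per tier."""
        print("  " * level + f"{goal.name} (weight={goal.weight})")
        for sub in goal.subgoals:
            print_tree(sub, level + 1)

    tree = Goal("Improve management quality", 1.0, [
        Goal("Raise staff qualifications", 0.6, [Goal("Training programme", 0.6)]),
        Goal("Streamline decision-making", 0.4),
    ])
    print_tree(tree)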

Other methods of qualitative analysis are constructed in a similar way, including qualitative analogues of quantitative techniques such as factor analysis.

As D.S. Klementyev rightly notes (21), qualitative methods of sociological research are effective only if ethical standards dominate in the reflection of social factors. A sociologist, selecting information from the mass of available data, should not be limited only by his own preferences. In addition, when trying to answer the question about the actual state of affairs in the management environment, collecting specific information - empirical data - and turning to the properties of the phenomenon under study, the sociologist should not operate with generally accepted notions of “common sense” or “ordinary logic”, or appeal to the works of religious and political authorities. When compiling tests, a sociologist must avoid distortions that reflect manipulation rather than control. Another fundamental norm for a sociologist is honesty. This means that a researcher presenting the results of a study, even if they do not satisfy him, should neither hide nor embellish anything. The requirement of honesty also includes providing complete documentation relevant to the case: the researcher must take responsibility for all information that other people use for a critical assessment of the method and the results of the study. This is especially important in order to avoid the temptation to misrepresent information, which would undermine the credibility of the findings.

Quantitative methods. The quantitative certainty of social phenomena and processes is studied using specific tools and methods: observation (non-participant and participant), survey (conversation, questionnaire and interview), document analysis (quantitative), and experiment (controlled and uncontrolled).

Observation, as a classical method of the natural sciences, is a specially organized perception of the object being studied. Organizing an observation includes determining the characteristics of the object, the goals and objectives of the observation, choosing the type of observation, developing a program and procedure of observation, establishing observation parameters, developing techniques for recording the results, and analyzing the results and drawing conclusions. With non-participant observation, interaction between the observer and the object of study (for example, a management system) is minimized. With participant observation, the observer enters the observed process as a participant, i.e. achieves maximum interaction with the object of observation while, as a rule, not revealing his research intentions. In practice, observation is most often used in combination with other research methods.

Surveys may be continuous or sample-based. If a survey covers the entire population of respondents (for example, all members of a social organization), it is called continuous. A sample survey is based on a sample population as a reduced copy of the general population. The general population is the entire population, or that part of it, which the sociologist intends to study; the sample is the set of people whom the sociologist actually interviews (22).
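As a simple illustration of the relation between the general population and the sample, the sketch below draws a random sample in Python; the population size and the identifiers are hypothetical.

    # Drawing a random sample from a general population (hypothetical data).
    import random

    general_population = [f"employee_{i}" for i in range(1, 501)]   # 500 members
    sample_size = 50

    random.seed(42)                       # reproducible illustration
    sample = random.sample(general_population, sample_size)
    print(len(sample), sample[:5])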

A survey can be conducted using questionnaires or interviews. An interview is a formalized type of conversation. Interviews, in turn, can be standardized or non-standardized; sometimes telephone interviews are used. The person who conducts the interview is called the interviewer.

A questionnaire is a written type of survey. Like an interview, a questionnaire involves a set of clearly formulated questions presented to the respondent in writing. Questions may require answers in free form (an “open” questionnaire) or in a given form (a “closed” questionnaire), where the respondent selects one of the proposed answer options (23).

Owing to its characteristics, the questionnaire has a number of advantages over other survey methods: the time needed to record respondents’ answers is reduced because they complete the form themselves; the formalization of responses makes mechanized and automated processing of questionnaires possible; and, thanks to anonymity, sincerity of answers can be achieved.

To develop questionnaires further, the scaled rating method is often applied. The method is aimed at obtaining quantitative information by measuring the attitude of specialists to the subject of examination on one or another scale - nominal, rank, or metric. Constructing a rating scale that adequately measures the phenomena being studied is a very difficult task, but processing the results of such an examination with mathematical methods, drawing on the apparatus of mathematical statistics, can provide valuable analytical information in quantitative terms.
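By way of illustration, processing scaled ratings usually begins with simple summary statistics. The sketch below is a minimal Python example; the expert ratings in it are hypothetical.

    # Summarizing expert ratings given on a 1-5 rank scale (hypothetical data).
    from statistics import mean, median, mode

    ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]    # attitudes of ten specialists

    print("mean:", mean(ratings))      # average attitude
    print("median:", median(ratings))  # central value, robust to outliers
    print("mode:", mode(ratings))      # most frequent rating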

The method of document analysis makes it possible to obtain factual data about the object being studied quickly.

Formalized analysis of documentary sources (content analysis), designed to extract sociological information from large arrays of documentary sources inaccessible to traditional intuitive analysis, is based on identifying certain quantitative characteristics of texts (or messages). It is assumed that the quantitative characteristics of the content of documents reflect the essential features of the phenomena and processes being studied.
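A rudimentary form of content analysis is counting how often selected categories of words occur in a corpus of documents. The Python sketch below illustrates only the idea; the documents and the category dictionary are hypothetical.

    # Counting category mentions across documents (hypothetical texts and categories).
    from collections import Counter
    import re

    documents = [
        "The director praised the new management initiative.",
        "Employees criticized management decisions and the lack of control.",
    ]
    categories = {"management": ["management", "director"],
                  "control": ["control"]}

    counts = Counter()
    for text in documents:
        words = re.findall(r"[a-z]+", text.lower())
        for category, markers in categories.items():
            counts[category] += sum(words.count(m) for m in markers)

    print(dict(counts))   # e.g. {'management': 3, 'control': 1}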

Having established the quantitative influence of the factors under study on the process under study, it is possible to construct a probabilistic model of the relationship between these factors. In such models, the phenomenon under study acts as a function, and the factors that determine it act as arguments. Assigning particular values to these argument factors yields a particular value of the function, and these values will be correct only with a certain degree of probability. To obtain specific numerical values of the parameters of such a model, the questionnaire survey data must be processed appropriately and a multifactor correlation model built on their basis.
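One common way to build such a multifactor model is multiple linear regression estimated by least squares. The sketch below shows the idea in Python with NumPy; the survey-derived numbers are hypothetical.

    # Fitting a multifactor linear model y = b0 + b1*x1 + b2*x2 (hypothetical survey data).
    import numpy as np

    # Two explanatory factors (e.g. scaled ratings) and the studied outcome.
    x1 = np.array([3.0, 4.0, 2.0, 5.0, 4.0, 3.0])
    x2 = np.array([1.0, 2.0, 1.0, 3.0, 2.0, 2.0])
    y  = np.array([2.1, 3.0, 1.7, 4.2, 3.1, 2.6])

    X = np.column_stack([np.ones_like(x1), x1, x2])   # design matrix with intercept
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)    # least-squares estimates

    print("intercept and coefficients:", coeffs)
    print("predicted values:", X @ coeffs)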

An experiment, like a survey, is a test, but unlike the latter it aims to prove a particular assumption or hypothesis. An experiment, therefore, is a one-time test of a given pattern of behavior (thinking, phenomenon).

Experiments can be carried out in various forms. A distinction is made between thought experiments and “natural” experiments, the latter being divided into laboratory and field experiments. A thought experiment is a special technique for interpreting the information received about the object being studied that excludes the researcher’s intervention in the processes occurring in the object. Methodologically, the sociological experiment is based on the concept of social determinism. In the system of variables, an experimental factor is singled out, otherwise designated as the independent variable.

The experimental study of social forms is carried out in the course of their functioning, which makes it possible to solve problems inaccessible to other methods. In particular, an experiment makes it possible to investigate how the connections between a social phenomenon and management can be combined. It allows the study not only of individual aspects of social phenomena but of the totality of social connections and relationships. Finally, an experiment makes it possible to study the entire set of reactions of a social subject to changes in the conditions of activity (reactions expressed in changes in the results of activity, its nature, relations between people, changes in their assessments and behavior, etc.). The changes made during an experiment can represent either the creation of fundamentally new social forms or a more or less significant modification of existing ones. In all cases, an experiment represents a practical transformation of a specific area of management.

In general, the algorithmic nature of quantitative methods in a number of cases makes it possible to arrive at highly “accurate” and well-founded decisions, or at least to simplify a problem by reducing it to a step-by-step search for solutions to a set of simpler problems.

The final result of any sociological research is the identification and explanation of patterns and the construction of a scientific theory on this basis, which makes it possible to predict future phenomena and develop practical recommendations.

Issues for discussion

1. What is the method of sociology of management?

2. What is the specificity of the methods of management sociology?

3. List the classifications of management sociology methods known to you.

4. How do qualitative and quantitative sociological research methods differ?

5. Determine the essence of interviews, questionnaires, the method of scaled assessments, etc.

21 Klementyev D.S. Sociology of Management: textbook. - 3rd ed., revised and enlarged. - Moscow: Moscow State University Publishing House, 2010. - P. 124.

22 Yadov V.A. Sociological Research: Methodology, Program, Methods. - Moscow, 1987. - Pp. 22-28.

23 Ilyin G.L. Sociology and Psychology of Management: a textbook for students of higher educational institutions. - 3rd ed., stereotyped. - Moscow: Academia Publishing Center, 2010. - P. 19.


V. V. NIKANDROV

NON-EMPIRICAL METHODS OF PSYCHOLOGY

RECH

St. Petersburg 2003

BBK 88.5 N62

Published by decision of the editorial and publishing council of St. Petersburg State University

Reviewers: Doctor of Psychology L. V. Kulikov, Candidate of Psychological Sciences Yu. I. Filimonenko.

Nikandrov V. V. Non-empirical Methods of Psychology: Textbook. - St. Petersburg: Rech, 2003. - 53 p.

The manual contains basic information about the methods of organizing psychological research, processing empirical material and interpreting results, united under the name “non-empirical methods of psychology.” The manual is addressed to students, graduate students and other categories of learners in psychological fields of study.

BBK 88.5
ISBN 5-9268-0174-5
© V. V. Nikandrov, 2003
© Rech Publishing House, 2003
© P. V. Borozenets, cover design, 2003

Introduction
1. Organizational methods
1.1. Comparative method
1.2. Longitudinal method
1.3. Complex method
2. Data processing methods
2.1. Quantitative methods
2.1.1. Primary processing methods
2.1.2. Secondary processing methods
2.1.2.1. General overview of secondary processing
2.1.2.2. Complex calculation of statistics
2.1.2.3. Correlation analysis
2.1.2.4. Analysis of variance
2.1.2.5. Factor analysis
2.1.2.6. Regression analysis
2.1.2.7. Taxonomic analysis
2.1.2.8. Scaling
2.2. Qualitative methods
2.2.1. Classification
2.2.2. Typology
2.2.3. Systematization
2.2.4. Periodization
2.2.5. Psychological casuistry
3. Interpretive methods
3.1. Genetic method
3.2. Structural method
3.3. Functional method
3.4. Complex method
3.5. System method
References

Introduction

Non-empirical methods of psychology are techniques of scientific psychological work that lie outside the framework of the researcher’s contact (direct or indirect) with the object of research. These techniques, firstly, help to organize the obtaining of psychological information by empirical methods and, secondly, make it possible to transform this information into reliable scientific knowledge. As is known, to a first approximation any scientific research, including psychological research, goes through three stages: 1) preparatory; 2) main; 3) final. At the first stage, the goals and objectives of the research are formulated, the researcher orients himself in the body of knowledge in the area, a program of action is drawn up, and organizational, material and financial issues are resolved. At the main stage, the actual research process is carried out: the scientist, using special methods, comes into contact (directly or indirectly) with the object being studied and collects data about it. It is this stage that usually best reflects the specifics of the research: the reality being studied in the form of the object and subject under investigation, the area of knowledge, the type of research, and the methodological equipment. At the final stage, the received data are processed and converted into the desired result. The results are correlated with the stated goals, explained, and included in the existing system of knowledge in the field. These stages can be subdivided further, producing a more detailed scheme, analogues of which are given in one form or another in the scientific literature:

I. Preparatory stage:

1. Statement of the problem.
2. Formulation of a hypothesis.
3. Planning of the study.
II. Main (empirical) stage:
4. Data collection.
III. Final stage:
5. Data processing.
6. Interpretation of results.
7. Conclusions and inclusion of the results in the knowledge system.

Non-empirical methods are used at the first and third stages of research, empirical methods at the second. There are many classifications of psychological methods in science, but most of them concern empirical methods. Non-empirical methods appear in only a few classifications, of which the most convenient are those based on the criterion of the stages of the research process. Among them, the most successful and widely recognized is the classification of psychological methods proposed by B. G. Ananyev, who in turn relied on the classification of the Bulgarian scientist G. Pirov. It is believed that B. G. Ananyev “developed a classification that corresponds to the modern level of science and stimulated further research on this central problem for the methodology of psychology.” The breakdown of the course of psychological research into stages according to B. G. Ananyev, although it does not completely coincide with that given above, is still very close to it: A) organizational stage (planning); B) empirical stage (data collection); C) data processing; D) interpretation of results. Having slightly changed and supplemented B. G. Ananyev’s classification, we obtain a detailed system of methods, which we recommend as a reference when studying psychological tools:

I. Organizational methods (approaches).

1. Comparative. 2. Longitudinal. 3. Comprehensive.

II. Empirical methods.

1. Observational (observation):
a) objective observation;
b) self-observation (introspection).
2. Verbal-communicative methods:
a) conversation;
b) survey (interview and questionnaire).
3. Experimental methods:
a) laboratory experiment;
b) natural experiment;
c) formative experiment.
4. Psychodiagnostic methods:
a) psychodiagnostic tests;
b) psychosemantic methods;
c) psychomotor methods;
d) methods of socio-psychological diagnostics of personality.
5. Psychotherapeutic methods.
6. Methods of studying the products of activity:
a) reconstruction method;
b) study of documents (archival method);
c) graphology.
7. Biographical methods.
8. Psychophysiological methods:
a) methods of studying the functioning of the autonomic nervous system;
b) methods of studying the functioning of the somatic nervous system;
c) methods of studying the functioning of the central nervous system.
9. Praximetric methods:
a) general methods of studying individual movements and actions;
b) special methods of studying labor operations and activities.
10. Modeling.
11. Specific methods of the branch psychological sciences.

III. Data processing methods:

1. Quantitative methods; 2. Qualitative methods.

IV. Interpretive methods (approaches):

1. Genetic; 2. Structural; 3. Functional; 4. Comprehensive; 5. Systemic. The above classification does not pretend to be exhaustive or strictly systematic. Following B. G. Ananyev, we can say that “the contradictions of modern methodology, methods and techniques of psychology are reflected quite deeply in the proposed classification.” Nevertheless, it still gives a general idea of the system of methods used in psychology, with designations and names well established in the practice of their use. So, based on the proposed classification, we have three groups of non-empirical methods: organizational, data processing and interpretive. Let us consider them in order.

1. ORGANIZATIONAL METHODS

These methods should rather be called approaches, since they represent not so much a specific method of research as a procedural strategy. The choice of one or another method of organizing research is predetermined by its objectives. And the chosen approach, in turn, determines the set and order of application of specific methods for collecting data about the object and subject of study.

1.1. Comparative method

The comparative method consists in comparing different objects, or different aspects of one object of study, at a certain point in time. The data taken from these objects are compared with each other, which leads to the identification of relationships between them. The approach makes it possible to study the spatial diversity, interrelations and evolution of mental phenomena. Diversity and interrelations are studied either by comparing various manifestations of the psyche in one object (a person, an animal, a group) at a certain moment in time, or by simultaneously comparing different people (animals, groups) with respect to one type (or complex) of mental manifestations. For example, the dependence of reaction speed on the type of signal modality is studied on a single individual, while its dependence on gender, ethnic or age characteristics is studied on several individuals. It is clear that “simultaneity,” like “a certain moment in time,” is a relative concept here. These are determined by the duration of the study, which can be measured in hours, days and even weeks, but which will be negligible compared to the life cycle of the object being studied. The comparative method manifests itself especially clearly in the evolutionary study of the psyche. Objects (and their indicators) corresponding to certain stages of phylogenesis are subject to comparison. Primates, archanthropes and paleoanthropes are compared with modern humans; the data about them are supplied by zoopsychology, anthropology, paleopsychology, archeology, ethology and other sciences about animals and the origin of man. The science that deals with such analysis and generalization is called comparative psychology. Outside the comparative method, the entire psychology of differences (differential psychology) is unthinkable. An interesting modification of the comparative method, widespread in developmental psychology, is called the “cross-sectional method.” Cross-sections are a collection of data about a person at certain stages of his ontogenesis (infancy, childhood, old age, etc.), obtained in studies of the relevant populations. Such data in generalized form can serve as standards of the level of mental development of a person of a certain age in a particular population. The comparative method allows the use of any empirical method when collecting data about the object of study.

1.2. Longitudinal method

The longitudinal method (from Lat. longitudo - length) is the long-term and systematic study of the same object. Such long-term tracking of an object (usually according to a pre-compiled program) makes it possible to identify the dynamics of its existence and to predict its further development. In psychology, longitudinal research is widely used in the study of age dynamics, mainly in childhood. A specific form of implementation is the method of “longitudinal sections.” Longitudinal sections are a collection of data about an individual over a certain period of his life. These periods can be measured in months, years or even decades. The result of the longitudinal method, as a way of organizing a multi-year research cycle, “is an individual monograph or a set of such monographs describing the course of mental development, covering a number of phases of periods of human life. A comparison of such individual monographs makes it possible to present fairly fully the range of fluctuations in age norms and the moments of transition from one phase of development to another.” However, constructing a series of functional tests and experimental methods, periodically repeated when studying the same person, is an extremely difficult matter, since the subject’s adaptation to the experimental conditions and special training can influence the picture of development. In addition, the narrow base of such a study, limited to a small number of selected objects, does not provide grounds for constructing age-related syndromes, which is successfully done through the comparative method of “cross-sections.” It is therefore advisable to combine, whenever possible, the longitudinal and comparative methods. J. Shvantsara and V. Smekal offer the following classification of types of longitudinal research: A. Depending on the duration of the study: 1. Short-term observation; 2. Long-term observation; 3. Accelerated observation. B. Depending on the direction of the study: 1. Retrospective observation; 2. Prospective observation; 3. Combined observation. C. Depending on the methods used: 1. True longitudinal observation; 2. Mixed observation; 3. Pseudo-longitudinal observation. Short-term observation is recommended for studying stages of ontogenesis rich in changes and leaps in development: for example, infancy, or the period of maturation in adolescence and youth. If the purpose of the study is to investigate the dynamics of large-scale periods of development, the relationships between individual periods and individual changes, then long-term longitudinal research is recommended. The accelerated variant is intended for studying long periods of development within a short time. It is used mainly in child psychology: several age groups are observed simultaneously. The age range of each group depends on the purpose of the study; in the practice of monitoring children it is usually 3-4 years. Adjacent groups overlap each other by one or two years. Parallel observation of a number of such groups makes it possible to link the data of all groups into a single cycle covering the entire set of these groups from the youngest to the oldest. Thus, a study conducted over, say, 2-3 years can provide a longitudinal slice covering 10-20 years of ontogenesis. The retrospective form allows the development of a person, or of his individual qualities, to be traced in the past. It is carried out by collecting biographical information and analyzing the products of activity.
For children, these are primarily autobiographical conversations, testimony from parents, and anamnesis data. The prospective method consists of ongoing observations of the development of a person (animal, group) up to a certain age. A combined study assumes the inclusion of retrospective elements in a prospective longitudinal study. True longitudinal research is the classic long-term observation of one object. Mixed longitudinal research is one in which true longitudinal observation at some stages is supplemented by cross-sections providing comparative information about other objects of the same type as the one being studied. This method is advantageous when observing groups that “melt” over time, that is, whose composition decreases from period to period. Pseudo-longitudinal research consists of obtaining “norms” for different age groups and chronologically ordering these indicators. The norm is obtained through cross-sections of a group, i.e. through averaged data for each group. Here the inadmissibility of contrasting cross-sections and longitudinal sections is clearly demonstrated, since the latter, as we see, can be obtained through a sequential (chronological) series of cross-sections. By the way, it is in this way that “most of the hitherto known norms of ontogenetic psychology were obtained.”

1.3. Complex method

The integrated (complex) method, or approach, involves organizing a comprehensive study of an object. In essence, this is, as a rule, an interdisciplinary study devoted to an object common to several sciences: the object is one, but the subjects of research are different.

2. DATA PROCESSING METHODS

Data processing is aimed at solving the following problems: 1) organizing the source material, transforming a set of data into a holistic system of information on the basis of which further description and explanation of the object and subject being studied is possible; 2) detecting and eliminating errors, shortcomings and gaps in the information; 3) identifying trends, patterns and connections hidden from direct perception; 4) discovering new facts that were not expected and were not noticed during the empirical process; 5) determining the level of reliability, validity and accuracy of the collected data and obtaining scientifically grounded results on their basis. Data processing has quantitative and qualitative aspects. Quantitative processing is the manipulation of the measured characteristics of the object (objects) being studied, of its properties “objectified” in external manifestation. Qualitative processing is a method of preliminary penetration into the essence of an object by identifying its unmeasurable properties on the basis of quantitative data. Quantitative processing is aimed mainly at a formal, external study of an object, while qualitative processing is aimed mainly at a meaningful, internal study of it. In quantitative research, the analytical component of cognition dominates, which is reflected in the names of the quantitative methods for processing empirical material, which contain the category “analysis”: correlation analysis, factor analysis, etc. The main result of quantitative processing is an ordered set of “external” indicators of an object (objects). Quantitative processing is carried out using mathematical and statistical methods. In qualitative processing, the synthetic component of cognition dominates, and in this synthesis the component of unification prevails while the component of generalization is present to a lesser extent; generalization is the prerogative of the next, interpretive, stage of the research process. In the phase of qualitative data processing, the main point is not yet to reveal the essence of the phenomenon being studied, but only to present the information about it appropriately, ensuring its further theoretical study. Typically, the result of qualitative processing is an integrated representation of the set of properties of an object, or of a set of objects, in the form of classifications and typologies. Qualitative processing largely appeals to the methods of logic. The contrast between qualitative and quantitative processing (and, consequently, the corresponding methods) is rather arbitrary. They form an organic whole. Quantitative analysis without subsequent qualitative processing is meaningless, since by itself it cannot transform empirical data into a system of knowledge. And a qualitative study of an object without basic quantitative data is unthinkable in scientific cognition. Without quantitative data, qualitative cognition is a purely speculative procedure, not characteristic of modern science. In philosophy, the categories “quality” and “quantity” are, as is known, combined in the category “measure.” The unity of the quantitative and qualitative understanding of empirical material appears clearly in many methods of data processing: factor and taxonomic analysis, scaling, classification, etc.
But since science traditionally divides characteristics, research methods and descriptions into quantitative and qualitative, we will treat the quantitative and qualitative aspects of data processing as independent phases of one research stage, to which certain quantitative and qualitative methods correspond. Qualitative processing naturally results in the description and explanation of the phenomena being studied, which constitutes the next level of their study, carried out at the stage of interpreting the results. Quantitative processing belongs entirely to the data processing stage.

2.1. Quantitative methods

The quantitative data processing process has two phases: primary and secondary.

2.1.1. Primary processing methods

Primary processing aims at arranging the information about the object and subject of study obtained at the empirical stage of research. At this stage, “raw” information is grouped according to certain criteria, entered into summary tables, and presented graphically for clarity. All these manipulations make it possible, firstly, to detect and eliminate errors made when recording the data and, secondly, to identify and remove from the general array spurious data obtained as a result of violations of the examination procedure, subjects’ failure to follow instructions, and the like. In addition, the initially processed data, presented in a form convenient for review, give the researcher a first-approximation idea of the nature of the entire data set as a whole: their homogeneity or heterogeneity, compactness or scatter, clarity or blurriness, etc. This information is easily read from visual forms of data presentation and is associated with the concept of “data distribution.” The main methods of primary processing include tabulation, i.e. the presentation of quantitative information in tabular form, and the plotting of diagrams (Fig. 1), histograms (Fig. 2), distribution polygons (Fig. 3) and distribution curves (Fig. 4). Diagrams reflect the distribution of discrete data; the other graphical forms are used to represent the distribution of continuous data. It is easy to move from a histogram to a frequency distribution polygon, and from the latter to a distribution curve. A frequency polygon is constructed by connecting the upper points of the central axes of all sections of the histogram with straight segments. If the vertices of the sections are connected with smooth curved lines, a distribution curve of the primary results is obtained. The transition from a histogram to a distribution curve makes it possible, by interpolation, to find values of the variable under study that were not obtained in the experiment.
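To illustrate the tabulation step, the sketch below groups raw scores into a simple frequency table in Python; the raw data and the bin width are hypothetical.

    # Grouping "raw" scores into a frequency table (hypothetical data).
    from collections import Counter

    raw_scores = [3, 5, 5, 6, 7, 7, 7, 8, 9, 9, 10, 12, 12, 14]
    bin_width = 3

    bins = Counter((score // bin_width) * bin_width for score in raw_scores)
    for start in sorted(bins):
        print(f"{start}-{start + bin_width - 1}: {bins[start]}")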

2.1.2. Secondary processing methods

2.1.2.1. General overview of secondary processing

Secondary processing consists mainly in the statistical analysis of the results of primary processing. Tabulation and the plotting of graphs are, strictly speaking, also statistical processing, which, together with the calculation of measures of central tendency and dispersion, belongs to one of the sections of statistics, namely descriptive statistics. Another section, inductive statistics, checks the correspondence of the sample data to the entire population, i.e. it solves the problem of the representativeness of the results and of the possibility of moving from particular knowledge to general knowledge. A third large section, correlation statistics, identifies connections between phenomena. In general, one must understand that “statistics is not mathematics, but, first of all, a way of thinking, and to apply it you only need to have a little common sense and know the basics of mathematics.” Statistical analysis of the entire set of data obtained in a study makes it possible to characterize it in an extremely compressed form, since it allows three main questions to be answered: 1) which value is most typical for the sample?; 2) is the spread of the data relative to this typical value large, i.e. what is the “fuzziness” of the data?; 3) is there a relationship between individual data in the existing set, and what are the nature and strength of these connections? The answers to these questions are given by certain statistical indicators of the sample under study. To answer the first question, measures of central tendency (or localization) are calculated; the second, measures of variability (or dispersion, scatter); the third, measures of connection (or correlation). These statistical indicators are applicable to quantitative data (ordinal, interval, proportional). Measures of central tendency (m.c.t.) are the values around which the rest of the data are grouped. They are, as it were, indicators generalizing the entire sample, which, firstly, makes it possible to judge the whole sample by them and, secondly, makes it possible to compare different samples and different series with each other. Measures of central tendency include the arithmetic mean, the median, the mode, the geometric mean and the harmonic mean. In psychology, the first three are usually used. The arithmetic mean (M) is the result of dividing the sum of all values (X) by their number (N): M = ΣX / N. The median (Me) is the value above and below which the number of different values is the same, i.e. it is the central value in a sequential series of data. Examples: 3, 5, 7, 9, 11, 13, 15; Me = 9. 3, 5, 7, 9, 11, 13, 15, 17; Me = 10. It is clear from the examples that the median does not have to coincide with an existing measurement; it is a point on the scale. A coincidence occurs with an odd number of values (answers) on the scale, a discrepancy with an even number. The mode (Mo) is the value that occurs most frequently in the sample, i.e. the value with the highest frequency. Example: 2, 6, 6, 8, 9, 9, 9, 10; Mo = 9. If all values in a group occur equally often, it is considered that there is no mode (for example: 1, 1, 5, 5, 8, 8). If two adjacent values have the same frequency and it is greater than the frequency of any other value, the mode is the average of these two values (for example: 1, 2, 2, 2, 4, 4, 4, 5, 5, 7; Mo = 3). If the same applies to two non-adjacent values, then there are two modes and the group of estimates is bimodal (for example: 0, 1, 1, 1, 2, 3, 4, 4, 4, 7; Mo = 1 and 4).
Usually the arithmetic mean is used when the greatest accuracy is sought and when the standard deviation will later need to be calculated; the median, when the series contains “atypical” data that sharply affect the mean (for example: 1, 3, 5, 7, 9, 26, 13); the mode, when high accuracy is not needed but the speed of determining the m.c.t. is important. Measures of variability (dispersion, spread) are statistical indicators characterizing the differences between the individual values of a sample. They make it possible to judge the degree of homogeneity of the resulting set, its compactness and, indirectly, the reliability of the data obtained and of the results arising from them. The indicators most used in psychological research are the range, the mean deviation, the variance, the standard deviation and the semiquartile deviation. The range (R) is the interval between the maximum and minimum values of the characteristic. It is determined easily and quickly, but is sensitive to randomness, especially with a small number of data. Examples: (0, 2, 3, 5, 8; R = 8); (-0.2, 1.0, 1.4, 2.0; R = 2.2). The mean deviation (MD) is the arithmetic mean of the differences (in absolute value) between each value in the sample and its mean: MD = Σ|d| / N, where d = X - M; M is the sample mean; X is a specific value; N is the number of values. The set of all specific deviations from the mean characterizes the variability of the data, but if they are not taken in absolute value, their sum will be equal to zero and we will receive no information about their variability. MD shows the degree of crowding of the data around the mean. By the way, sometimes when determining this characteristic of a sample, other measures of central tendency - the mode or the median - are taken instead of the mean (M). The variance, or dispersion (D) (from Lat. dispersus - scattered), is another way of measuring the degree of crowding of the data; it avoids the zero sum of the specific differences (d = X - M) not through their absolute values but through their squaring: D = Σd² / N for large samples (N > 30); D = Σd² / (N - 1) for small samples (N < 30). Standard deviation (δ). Because of the squaring of the individual deviations d when calculating the variance, the resulting value turns out to be far from the initial deviations and therefore does not give a clear idea of them. To avoid this and to obtain a characteristic comparable to the mean deviation, the inverse mathematical operation is performed: the square root is extracted from the variance. Its positive value is taken as the measure of variability called the root-mean-square, or standard, deviation. MD, D and δ are applicable to interval and proportional data. For ordinal data, the measure of variability is usually the semiquartile deviation (Q), also called the semiquartile coefficient or the half-interquartile range. This indicator is calculated as follows. The entire data distribution area is divided into four equal parts. If observations are counted starting from the minimum value on the measuring scale (on graphs, polygons and histograms the counting usually goes from left to right), then the first quarter of the scale is called the first quartile, and the point separating it from the rest of the scale is designated by the symbol Q1. The second 25% of the distribution is the second quartile, and the corresponding point on the scale is Q2. Between the third and fourth quarters of the distribution lies the point Q3.
The semiquartile coefficient is defined as half the interval between the first and third quartiles: Q = (Q3 - Q1) / 2. It is clear that with a symmetric distribution the point Q2 coincides with the median (and therefore with the mean), and the coefficient Q can then be calculated to characterize the spread of the data relative to the middle of the distribution. With an asymmetric distribution this is not enough, and the coefficients for the left and right sections are then calculated additionally: Qleft = (Q2 - Q1) / 2; Qright = (Q3 - Q2) / 2. Measures of connection. The previous indicators, called statistics, characterize a set of data according to one particular characteristic. This changing characteristic is called a variable quantity, or simply a “variable.” Measures of connection reveal relationships between two variables or between two samples. These connections, or correlations (from Lat. correlatio - “correlation, relationship”), are determined by calculating correlation coefficients (R), provided that the variables are related linearly. It is believed that most mental phenomena are subject precisely to linear dependencies, which has predetermined the widespread use of correlation analysis methods. But the presence of a correlation does not mean that a causal (or functional) connection exists between the variables. Functional dependence is a special case of correlation. Even if a relationship is causal, correlation indicators cannot show which of the two variables is the cause and which is the effect. In addition, any connection discovered in psychology exists, as a rule, owing to other variables as well, and not only to the two under consideration. Moreover, the interconnections of psychological attributes are so complex that their determination by a single cause is hardly plausible; they are determined by many causes. Types of correlation. I. According to the closeness of the connection: 1) complete (perfect): R = 1 - an obligatory interdependence between the variables is stated, and one can already speak of functional dependence; 2) no connection identified: R = 0; 3) partial: 0 < R < 1. II. According to form, a correlation may be: 1) rectilinear (linear); 2) curvilinear.

A curvilinear correlation is a relationship in which a uniform change in one characteristic is combined with an uneven change in another; this situation is typical for psychology. Formulas for the correlation coefficients: when comparing ordinal data, the rank correlation coefficient of Ch. Spearman (ρ) is applied: ρ = 1 - 6Σd² / [N(N² - 1)], where d is the difference in the ranks (ordinal places) of two quantities, and N is the number of compared pairs of values of the two variables (X and Y). When comparing metric data, the product-moment correlation coefficient of K. Pearson (r) is used: r = Σxy / (N σx σy), where x is the deviation of an individual value of X from the sample mean (Mx), y is the same for Y, σx is the standard deviation for X, σy is the same for Y, and N is the number of pairs of values of X and Y. The introduction of computer technology into scientific research makes it possible to determine quickly and accurately any quantitative characteristics of any data arrays. Various computer programs have been developed with which appropriate statistical analysis of almost any sample can be carried out. Of the mass of statistical techniques, the following are the most widespread in psychology: 1) complex calculation of statistics; 2) correlation analysis; 3) analysis of variance; 4) regression analysis; 5) factor analysis; 6) taxonomic (cluster) analysis; 7) scaling.
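As an illustration of the two formulas above, the sketch below computes the Pearson and Spearman coefficients directly from their definitions in Python; the paired observations are hypothetical.

    # Pearson r and Spearman rho computed from their textbook formulas (hypothetical data).
    import math

    X = [2.0, 4.0, 5.0, 7.0, 9.0]
    Y = [1.0, 3.0, 4.0, 8.0, 9.0]
    N = len(X)

    mx, my = sum(X) / N, sum(Y) / N
    sx = math.sqrt(sum((x - mx) ** 2 for x in X) / N)
    sy = math.sqrt(sum((y - my) ** 2 for y in Y) / N)
    r = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / (N * sx * sy)

    def ranks(values):
        """Rank values from 1 upward (no ties in this illustration)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        out = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            out[i] = rank
        return out

    d = [rx - ry for rx, ry in zip(ranks(X), ranks(Y))]
    rho = 1 - 6 * sum(di ** 2 for di in d) / (N * (N ** 2 - 1))

    print("Pearson r:", round(r, 3))
    print("Spearman rho:", round(rho, 3))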

2.1.2.2. Complex calculation of statistics

Using standard programs, both the main sets of statistics presented above and additional ones not included in our review are calculated. Sometimes the researcher limits himself to obtaining these characteristics, but more often the set of these statistics is only a block included in a wider set of indicators of the sample under study, obtained using more complex programs, including programs that implement the methods of statistical analysis described below.

2.1.2.3. Correlation analysis

Correlation analysis reduces to calculating correlation coefficients for a wide variety of relationships between variables. The relationships are specified by the researcher, and the variables are equivalent, i.e. what is the cause and what is the effect cannot be established through correlation. In addition to the closeness and direction of connections, the method makes it possible to establish the form of the connection (linearity, nonlinearity). It should be noted that nonlinear connections cannot be analyzed by the mathematical and statistical methods generally accepted in psychology. Data relating to nonlinear zones (for example, points where connections break off, places of abrupt changes) are characterized through substantive descriptions, refraining from their formal quantitative presentation. Sometimes nonparametric mathematical and statistical methods and models can be used to describe nonlinear phenomena in psychology; for example, mathematical catastrophe theory is used.

2.1.2.4. Analysis of variance

Unlike correlation analysis, this method makes it possible to identify not only relationships but also dependencies between variables, i.e. the influence of various factors on the characteristic being studied. This influence is assessed through relations of variances. Changes in the characteristic being studied (its variability) can be caused by the action of individual factors known to the researcher, by their interaction, and by the effects of unknown factors. Analysis of variance makes it possible to detect and evaluate the contribution of each of these influences to the overall variability of the trait under study. The method makes it possible to narrow quickly the field of conditions influencing the phenomenon under study, highlighting the most significant of them. Thus, analysis of variance is “the study of the influence of variable factors on the variable being studied by means of variances.” Depending on the number of influencing variables, one-way, two-way and multivariate analyses are distinguished, and depending on the nature of these variables, analyses with fixed, random or mixed effects. Analysis of variance is widely used in experimental design.
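A minimal illustration of the idea of decomposing variability into between-group and within-group parts is sketched below in Python (a one-way layout with an F ratio); the three groups of scores are hypothetical.

    # One-way analysis of variance: between- vs. within-group variability (hypothetical data).
    groups = [
        [4.0, 5.0, 6.0, 5.0],   # condition A
        [7.0, 8.0, 6.0, 7.0],   # condition B
        [5.0, 4.0, 4.0, 5.0],   # condition C
    ]

    all_values = [v for g in groups for v in g]
    grand_mean = sum(all_values) / len(all_values)

    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)

    df_between = len(groups) - 1
    df_within = len(all_values) - len(groups)
    F = (ss_between / df_between) / (ss_within / df_within)

    print("SS between:", ss_between, "SS within:", ss_within, "F:", round(F, 2))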

2.1.2.5. Factor analysis

The method makes it possible to reduce the dimensionality of the data space, i.e. to reduce reasonably the number of measured features (variables) by combining them into aggregates that act as integral units characterizing the object being studied. These composite units are called factors, and they must be distinguished from the factors of analysis of variance, which are individual features (variables). It is believed that it is the totality of features in certain combinations that can characterize a mental phenomenon or the pattern of its development, whereas individually or in other combinations these features provide no information. As a rule, factors are not visible to the eye and are hidden from direct observation. Factor analysis is especially productive in preliminary research, when it is necessary to identify, to a first approximation, the hidden patterns in the area under study. The basis of the analysis is the correlation matrix, i.e. the table of correlation coefficients of each feature with all the others (the “all with all” principle). Depending on the number of factors in the correlation matrix, single-factor (according to Spearman), bi-factor (according to Holzinger) and multifactor (according to Thurston) analyses are distinguished. Based on the nature of the relationship between the factors, the method is divided into analysis with orthogonal (independent) and with oblique (dependent) factors. There are other varieties of the method as well. The very complex mathematical and logical apparatus of factor analysis often makes it difficult to choose a variant of the method adequate to the research tasks. Nevertheless, its popularity in the scientific world grows every year.
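To give a flavour of how a correlation matrix is condensed into factors, the sketch below extracts principal components from a small correlation matrix in Python with NumPy; principal-component extraction is only one of several variants of factor analysis, and the correlation values are hypothetical.

    # Extracting a "factor" from a correlation matrix via its eigendecomposition (hypothetical matrix).
    import numpy as np

    # Correlations of three measured features with each other ("all with all").
    R = np.array([
        [1.00, 0.80, 0.10],
        [0.80, 1.00, 0.15],
        [0.10, 0.15, 1.00],
    ])

    eigenvalues, eigenvectors = np.linalg.eigh(R)      # ascending order
    order = np.argsort(eigenvalues)[::-1]              # largest eigenvalue first
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

    # Loadings of each feature on the first factor.
    loadings = eigenvectors[:, 0] * np.sqrt(eigenvalues[0])
    print("explained share:", round(eigenvalues[0] / eigenvalues.sum(), 2))
    print("first-factor loadings:", np.round(loadings, 2))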

2.1.2.6. Regression analysis

The method makes it possible to study the dependence of the mean value of one quantity on variations of another quantity (or other quantities). The specificity of the method lies in the fact that the quantities under consideration (or at least one of them) are random in nature. The description of the dependence is then divided into two tasks: 1) identifying the general form of the dependence and 2) refining this form by calculating estimates of the parameters of the dependence. There are no standard methods for solving the first task; here a visual analysis of the correlation matrix is carried out in combination with a qualitative analysis of the nature of the quantities (variables) being studied, which requires high qualifications and erudition from the researcher. The second task is essentially the search for an approximating curve; most often this approximation is carried out using the mathematical method of least squares. The idea of the method belongs to F. Galton, who noticed that very tall parents had somewhat shorter children, while very short parents had taller children; he called this pattern regression.
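The least-squares step mentioned above can be illustrated by fitting a straight line to paired observations. The Python sketch below uses the closed-form formulas for the slope and intercept; the data are hypothetical.

    # Fitting y = a + b*x by the method of least squares (hypothetical paired data).
    X = [1.0, 2.0, 3.0, 4.0, 5.0]
    Y = [2.1, 2.9, 3.8, 5.2, 5.9]
    N = len(X)

    mx, my = sum(X) / N, sum(Y) / N
    b = sum((x - mx) * (y - my) for x, y in zip(X, Y)) / sum((x - mx) ** 2 for x in X)
    a = my - b * mx

    print(f"approximating line: y = {a:.2f} + {b:.2f} * x")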

2.1.2.7. Taxonomic analysis

The method is a mathematical technique for grouping data into classes (taxa, clusters) in such a way that objects included in one class are more homogeneous in some respect than objects included in other classes. As a result, it becomes possible to determine, in one metric or another, the distance between the objects being studied and to give an ordered description of their relationships at a quantitative level. Because the criteria for the effectiveness and admissibility of cluster procedures are insufficiently developed, this method is usually used in combination with other methods of quantitative data analysis. On the other hand, taxonomic analysis itself is used as additional insurance of the reliability of results obtained with other quantitative methods, in particular factor analysis. The essence of cluster analysis allows us to consider it a method that explicitly combines the quantitative processing of data with their qualitative analysis. It is therefore apparently not legitimate to classify it unambiguously as a quantitative method. But since the procedure of the method is predominantly mathematical and the results can be presented numerically, the method as a whole will be classified as quantitative.
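As a toy illustration of grouping objects by distance, the sketch below runs a few iterations of the k-means procedure in pure Python on two-dimensional points; the points and the number of clusters are hypothetical, and k-means is only one of many clustering procedures.

    # A few iterations of k-means clustering on 2-D points (hypothetical data).
    points = [(1.0, 1.2), (1.3, 0.9), (0.8, 1.1), (5.0, 5.2), (5.4, 4.8), (4.9, 5.1)]
    centroids = [points[0], points[3]]           # crude initial guesses

    def dist2(p, q):
        """Squared Euclidean distance between two points."""
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    for _ in range(5):
        clusters = [[], []]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters
        ]

    print("centroids:", centroids)
    print("clusters:", clusters)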

2.1.2.8. Scaling

Scaling combines the features of quantitative and qualitative study of reality to an even greater extent than taxonomic analysis. The quantitative aspect of scaling is that its procedure in the vast majority of cases includes measurement and the numerical representation of data. The qualitative aspect of scaling is expressed in the fact that, firstly, it makes it possible to manipulate not only quantitative data but also data without common units of measurement and, secondly, it includes elements of qualitative methods (classification, typology, systematization). Another fundamental feature of scaling, which makes it difficult to determine its place in the general system of scientific methods, is the combination of data-collection and data-processing procedures. One can even speak of the unity of empirical and analytical procedures in scaling. Not only is it difficult in a specific study to indicate the sequence and separation of these procedures (they are often performed simultaneously and jointly), but in theoretical terms, too, no staged hierarchy can be detected (it is impossible to say what is primary and what is secondary). A third point that does not allow scaling to be unambiguously assigned to one or another group of methods is its organic “growing into” specific areas of knowledge, whereby it acquires, along with the features of a general scientific method, highly specific features. If other methods of general scientific significance (for example, observation or experiment) can fairly easily be presented both in general form and in specific modifications, then scaling at a general level is very difficult to characterize without losing the necessary information. The reason for this is obvious: the combination of empirical procedures with data processing in scaling. Empirics are concrete, mathematics is abstract, so the fusion of the general principles of mathematical analysis with specific methods of data collection produces the indicated effect. For the same reason, the scientific origins of scaling have not been precisely defined: several sciences lay claim to the title of its “parent.” Among them is psychology, where such outstanding scientists as L. Thurston, S. Stevens, V. Torgerson and A. Pieron worked on the theory and practice of scaling. Having taken all these factors into account, we still place scaling in the category of quantitative methods of data processing, since in the practice of psychological research scaling occurs in two situations: the first is the construction of scales, the second is their use. In the case of construction, all the mentioned features of scaling are fully manifested. When scales are used, these features fade into the background, since the use of ready-made scales (for example, “standard” scales for testing) simply involves comparing with them the indicators obtained at the data-collection stage. Thus, here the psychologist only uses the fruits of scaling, and only at the stages following the collection of data. This situation is a common phenomenon in psychology. In addition, the formal construction of scales is, as a rule, carried out beyond the scope of direct measurements and the collection of data about an object, i.e. the main scale-forming actions of a mathematical nature are carried out after the data have been collected, which is comparable to the stage of their processing. In the most general sense, scaling is a way of cognizing the world by modeling reality with the help of formal (primarily numerical) systems.
This method is used in almost all areas of scientific knowledge (the natural, exact, human, social and technical sciences) and has wide applied significance. The most rigorous definition seems to be the following: scaling is the process of mapping empirical sets onto formal sets according to given rules. An empirical set is any set of real objects (people, animals, phenomena, properties, processes, events) that are in certain relations with each other. These relations can be represented by four types (empirical operations): 1) equality (equal - not equal); 2) rank order (more - less); 3) equality of intervals; 4) equality of ratios. According to the nature of the empirical set, scaling is divided into two types: physical and psychological. In the first case, the objective (physical) characteristics of objects are scaled; in the second, the subjective (psychological) ones. A formal set is an arbitrary set of symbols (signs, numbers) interconnected by certain relations which, corresponding to the empirical relations, are described by four types of formal (mathematical) operations: 1) “equal - not equal” (= ≠); 2) “more - less” (> <); 3) “addition - subtraction” (+ -); 4) “multiplication - division” (* :). In scaling, a mandatory condition is a one-to-one correspondence between the elements of the empirical and formal sets. This means that each element of the first set must correspond to only one element of the second, and vice versa. In this case, a one-to-one correspondence of the types of relations between the elements of both sets (isomorphism of structures) is not obligatory. If these structures are isomorphic, so-called direct (subjective) scaling is carried out; in the absence of isomorphism, indirect (objective) scaling is carried out. The result of scaling is the construction of scales (Lat. scala - “ladder”), i.e. certain sign (numerical) models of the reality under study, with the help of which this reality can be measured. Thus, scales are measuring instruments. A general idea of the whole variety of scales can be obtained from works in which their classification is given together with brief descriptions of each type of scale. The relations between the elements of the empirical set and the corresponding admissible mathematical operations (admissible transformations) determine the level of scaling and the type of the resulting scale (according to the classification of S. Stevens). The first, simplest type of relation (= ≠) corresponds to the least informative nominal scales; the second (> <) to ordinal scales; the third (+ -) to interval scales; the fourth (* :) to the most informative ratio scales. The process of psychological scaling can be conditionally divided into two main stages: the empirical stage, at which data are collected about the empirical set (in this case, about the set of psychological characteristics of the objects or phenomena being studied), and the formalization stage, i.e. the mathematical and statistical processing of the data of the first stage. The features of each stage determine the methodological techniques of the specific implementation of scaling. Depending on the objects of study, psychological scaling comes in two varieties: psychophysical and psychometric. Psychophysical scaling consists in constructing scales for measuring the subjective (psychological) characteristics of objects (phenomena) that have physical correlates with corresponding physical units of measurement.
For example, the subjective characteristics of sound (loudness, pitch, timbre) correspond to physical parameters of sound vibrations: amplitude (in decibels), frequency (in hertz), and spectrum (in terms of component tones and envelope). Thus, psychophysical scaling makes it possible to identify the relationship between the magnitude of physical stimulation and the mental reaction, and to express this reaction in objective units of measurement. As a result, any types of indirect and direct scales of all levels of measurement are obtained: scales of names, order, intervals and ratios. Psychometric scaling consists in constructing scales for measuring the subjective characteristics of objects (phenomena) that have no physical correlates: for example, personality characteristics, the popularity of artists, team cohesion, the expressiveness of images, etc. Psychometric scaling is implemented using certain indirect (objective) scaling methods. As a result, judgment scales are obtained which, according to the typology of admissible transformations, usually belong to order scales and, less often, to interval scales. In the latter case, the units of measurement are indicators of the variability of the judgments (answers, assessments) of the respondents. The most characteristic and widespread psychometric scales are rating scales and the attitude scales based on them. Psychometric scaling underlies the development of most psychological tests, as well as of measurement methods in social psychology (sociometric methods) and in applied psychological disciplines. Since the judgments underlying the psychometric scaling procedure can also be applied to physical sensory stimulation, these procedures are also applicable for identifying psychophysical dependencies, but in this case the resulting scales will not have objective units of measurement. Both physical and psychological scaling can be one-dimensional or multidimensional. One-dimensional scaling is the process of mapping an empirical set onto a formal set according to one criterion. The resulting one-dimensional scales reflect either relations between one-dimensional empirical objects (or the same properties of multidimensional objects), or changes in one property of a multidimensional object. One-dimensional scaling is implemented using both direct (subjective) and indirect (objective) scaling methods. Multidimensional scaling is understood as the process of mapping an empirical set onto a formal set simultaneously according to several criteria. Multidimensional scales reflect either relations between multidimensional objects or simultaneous changes in several characteristics of one object. The process of multidimensional scaling, in contrast to one-dimensional scaling, is characterized by the greater labor-intensiveness of the second stage, i.e. the formalization of the data. In this connection, a powerful statistical and mathematical apparatus is used, for example cluster or factor analysis, which forms an integral part of multidimensional scaling methods. The study of multidimensional scaling problems is associated with the names of Richardson and Torgerson, who proposed its first models. Shepard initiated the development of non-metric multidimensional scaling methods. The most widespread and first theoretically substantiated multidimensional scaling algorithm was proposed by Kruskal. M. Davison summarized the information on multidimensional scaling. The specifics of multidimensional scaling in psychology are reflected in the work of G. V. Paramei.
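As an aside to the classification of S. Stevens mentioned above, the short Python sketch below records which descriptive statistics are conventionally regarded as admissible at each level of measurement; the mapping is a simplified textbook convention, and the dictionary structure is purely illustrative.

    # Scale types (after S. Stevens) and statistics conventionally admissible for them.
    admissible_statistics = {
        "nominal":  ["frequencies", "mode"],
        "ordinal":  ["frequencies", "mode", "median", "quartiles"],
        "interval": ["frequencies", "mode", "median", "quartiles", "mean", "standard deviation"],
        "ratio":    ["frequencies", "mode", "median", "quartiles", "mean", "standard deviation",
                     "coefficient of variation"],
    }

    for scale, stats in admissible_statistics.items():
        print(f"{scale:8s}: {', '.join(stats)}")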
Let us expand on the concepts of "indirect" and "direct" scaling mentioned earlier. Indirect, or objective, scaling is the process of mapping an empirical set onto a formal one when the structures of these sets do not correspond to each other (there is no isomorphism). In psychology this discrepancy is based on Fechner's first postulate about the impossibility of a direct subjective assessment of the magnitude of one's sensations. To quantify sensations, external (indirect) units of measurement are used, based on various responses of the subjects: barely noticeable differences, reaction time (RT), the variance of discrimination, the spread of categorical assessments. Indirect psychological scales, according to the methods of their construction, initial assumptions and units of measurement, form several groups, the main of which are: 1) accumulation scales, or logarithmic scales; 2) scales based on the measurement of reaction time; 3) judgment scales (comparative and categorical). The analytical expressions of these scales have been given the status of laws whose names are associated with their authors: 1) the Weber-Fechner logarithmic law; 2) Piéron's law (for a simple sensorimotor reaction); 3) Thurstone's law of comparative judgments; 4) Torgerson's law of categorical judgments. Judgment scales have the greatest applied potential. They make it possible to measure any mental phenomena, implement both psychophysical and psychometric scaling, and provide the possibility of multidimensional scaling. According to the typology of permissible transformations, indirect scales are mainly represented by ordinal and interval scales.

Direct, or subjective, scaling is the process of mapping an empirical set onto a formal one with a one-to-one correspondence (isomorphism) of the structures of these sets. In psychology this correspondence is based on the assumption that a direct subjective assessment of the magnitude of one's sensations is possible (the denial of Fechner's first postulate). Subjective scaling is implemented using procedures that determine how many times (or by how much) the sensation caused by one stimulus is greater or less than the sensation caused by another. If such a comparison is made for sensations of different modalities, one speaks of cross-modal subjective scaling. Direct scales, according to the method of their construction, form two main groups: 1) scales based on the determination of sensory ratios; 2) scales based on the determination of stimulus magnitudes. The second option opens the way to multidimensional scaling. A significant part of direct scales is well approximated by a power function, which S. Stevens demonstrated on a large body of empirical material; the analytical expression of direct scales is therefore named after him - Stevens' power law. To quantify sensations during subjective scaling, psychological units of measurement are used, specialized for particular modalities and experimental conditions. Many of these units have generally accepted names: "sones" for loudness, "brils" for brightness, "gusts" for taste, "vegs" for heaviness, and so on. According to the typology of permissible transformations, direct scales are represented mainly by interval and ratio scales. To conclude this review of the scaling method, it is necessary to point out the problem of its relationship to measurement.
In our opinion, this problem stems from the features of scaling noted above: 1) the combination of empirical procedures of data collection with analytical procedures of data processing; 2) the unity of the quantitative and qualitative aspects of the scaling process; 3) the combination of the general-scientific and the narrowly specialized, i.e. the "fusion" of general principles of scaling with the specific procedures of particular techniques. Some researchers explicitly or implicitly equate the concepts of "scaling" and "measurement". This point of view is supported especially strongly by the authority of S. Stevens, who defined measurement as "the attribution of numerical forms to objects or events in accordance with certain rules" and immediately pointed out that such a procedure leads to the construction of scales. But since the process of constructing a scale is the process of scaling, we arrive at the conclusion that measurement and scaling are one and the same thing. The opposite position is that only metric scaling, associated with the construction of interval and ratio scales, can be equated with measurement. The second position seems stricter, since measurement presupposes a quantitative expression of what is measured and, therefore, the presence of a metric. The sharpness of the discussion can be removed if measurement is understood not as a research method but as the instrumental support of one method or another, including scaling. Metrology (the science of measurement), incidentally, includes the measuring instrument in the concept of "measurement" as its obligatory attribute. For scaling (at least for non-metric scaling) measuring instruments are not necessary. True, metrology is interested mainly in the physical parameters of objects, not the psychological ones. Psychology, on the contrary, is concerned primarily with subjective characteristics (large, heavy, bright, pleasant, etc.). This allows some authors to take the person himself as a means of measurement. What is meant is not so much the use of parts of the human body as units of measurement (elbow, arshin, fathom, stade, foot, inch, etc.) as the person's ability to subjectively quantify any phenomena. But the infinite variability of individual differences among people, including the variability of their evaluative abilities, cannot provide generally accepted units of measurement at the stage of collecting data about an object. In other words, in the empirical part of scaling the subject cannot be regarded as a measuring instrument. This role can be attributed to him, with a considerable stretch, only after manipulations no longer with empirical but with formal sets. Then a subjective metric is obtained artificially, most often in the form of interval values. G. V. Sukhodolsky points to these facts when he says that ordering (which is what the subject does at the stage of "evaluating" empirical objects) "is a preparatory, but not a measuring operation", and that only afterwards, at the stage of processing the primary subjective data, do the corresponding scale-forming actions (for Sukhodolsky, ranking) "metrize the one-dimensional topological space of ordered objects and thereby measure the 'magnitude' of the objects". The ambiguity of the relationship between the concepts of "scaling" and "measurement" in psychology increases when they are compared with the concepts of "test" and "testing".
There is no doubt that tests belong among measuring instruments; however, their use in psychology has two aspects. The first is the use of the test in the testing process, i.e. the examination (psychodiagnostics) of specific psychological objects. The second is the development, or construction, of the test. In the first case we can speak, with certain grounds, of measurement, since a reference measure - a standard scale - is "applied" to the object being examined (the test subject). In the second case it is obviously more correct to speak of scaling, since the quintessence of constructing a test is the process of constructing a standard scale and the issues related to it. These include the operations of defining the empirical and formal sets, whose reliability and isomorphism are ensured not least by the standardization of the procedure for collecting empirical data and by the accumulation of reliable "statistics". Another aspect of the problem arises from the fact that the test as a measuring instrument consists of two parts: 1) a set of tasks (questions) with which the subject deals directly at the stage of data collection, and 2) a standard scale with which the empirical data are compared at the stage of interpretation. Where should we speak of measurement and where of scaling, if they are not the same thing? It seems to us that the empirical part of the testing process, i.e. the subject's performance of the test task, is not a purely measuring procedure, but it is necessary for scaling. The argument is as follows: the actions performed by the subject are not in themselves a measure of the severity of the qualities being diagnosed. Only the result of these actions (time spent, number of errors, type of answers, etc.), determined not by the test subject but by the diagnostician, represents a "raw" scale value, which is subsequently compared with standard values. The indicators of the results of the subject's actions are called "raw" here for two reasons. First, they are, as a rule, subject to translation into other units of expression - often into "faceless", abstract points, stens and the like. Second, the multidimensionality of the mental phenomenon being studied is common in testing, which presupposes, for its assessment, the registration of several varying parameters that are subsequently synthesized into a single indicator. Thus, only the stages of data processing and interpretation of test results, where "raw" empirical data are converted into comparable data and the latter are applied to a "measuring ruler", i.e. a standard scale, can be called measurement without reservations. This problematic knot is tightened even more by the separation and development of such scientific sections as psychometry and mathematical psychology into independent disciplines. Each of them treats the concepts we are discussing as its own key categories. Psychometry can be considered psychological metrology, covering "the whole range of issues related to measurement in psychology". It is therefore not surprising that scaling is included in this "range of issues". But psychometry does not clarify its relationship with measurement. Moreover, the matter is confused by the variety of interpretations of psychometric science itself and of its subject. For example, psychometry is considered in the context of psychodiagnostics. "Often the terms 'psychometry' and 'psychological experiment' are used as synonyms...
It is a very popular opinion that psychometry is mathematical statistics taking into account the specifics of psychology... A stable understanding of psychometry is as the mathematical apparatus of psychodiagnostics... Psychometry is the science of using mathematical models in the study of mental phenomena." As for mathematical psychology, its status is even vaguer. "The content and structure of mathematical psychology have not yet acquired a generally accepted form; the choice and systematization of mathematical-psychological models and methods are to some extent arbitrary." Nevertheless, there is already a tendency for mathematical psychology to absorb psychometry. It is still difficult to say whether this will affect the problem discussed here of the relationship between scaling and measurement, and whether their place in the general system of psychological methods will become clearer.

2.2. Qualitative methods

Qualitative methods (QM) make it possible to identify the most essential aspects of the objects being studied, which allows knowledge about them to be generalized and systematized and their essence to be comprehended. Very often QM rely on quantitative information. The most common techniques are classification, typologization, systematization, periodization and casuistry.

2.2.1. Classification

Classification (Lat. classis - rank, facere - to make) is the distribution of a multitude of objects into groups (classes) depending on their common characteristics. Assignment to classes can be made both by the presence of a generalizing characteristic and by its absence. The result of such a procedure is a set of classes which, like the grouping process itself, is called a classification. The classification procedure is essentially a deductive division operation (decomposition): a known set of elements is divided into subsets (classes) according to some criterion. Classes are built by defining the boundaries of subsets and including certain elements within these boundaries. Elements whose characteristics fall outside the boundaries of a given class are placed in other classes or drop out of the classification altogether.

The opinion found in science that there are two possible ways of carrying out the classification procedure, deductive and inductive, seems to us incorrect. Only a known, i.e. "closed", set of objects can be subject to classification, since the classification criterion is chosen in advance and is the same for all elements of the set. Consequently, one can only divide into classes. It is impossible to "add" one class to another, since in such a procedure it is not known in advance whether the subsequent objects will possess characteristics corresponding to the chosen criterion, and the process of such group formation becomes impractical and meaningless. But if in such a procedure it is possible to change the criteria for combining (or separating) elements, then we obtain a process of specific group formation based not on induction (and certainly not on deduction) but on traduction. That is why such a procedure yields "adjacent groupings", while the deductive procedure yields mainly "hierarchical classifications".

According to G. Selye, "classification is the most ancient and simplest scientific method. It serves as a prerequisite for all types of theoretical constructions, including the complex procedure of establishing the cause-and-effect relationships that connect classified objects. Without classification we would not even be able to talk. In fact, the basis of any common noun (man, kidney, star) is the recognition of the class of objects behind it. To define a certain class of objects (for example, vertebrates) means to establish those essential characteristics (a spine) that are common to all the elements making up this class. Thus, classification involves identifying those smaller elements that are part of a larger element (the class itself). All classifications are based on the discovery of some order or other. Science deals not with individual objects as such but with generalizations, i.e. with classes and with those laws in accordance with which the objects forming a class are ordered. This is why classification is a fundamental mental process. It is, as a rule, the first step in the development of a science."

If the classification is based on a feature that is essential for the objects themselves, the classification is called natural: for example, a subject catalogue in a library, or the classification of sensations by modality. If the criterion is not essential for the objects themselves but is merely convenient for ordering them, we obtain an artificial classification: for example, an alphabetical library catalogue, or the classification of sensations by the location of the receptors.

2.2.2. Typology

Typologization is the grouping of objects according to the systems of characteristics most significant for them. It is based on the understanding of a type as a unit of division of the reality under study and as a concrete, ideal model of objects of reality. As a result of typologization we obtain a typology, i.e. a set of types. The process of typologization, in contrast to classification, is an inductive (compositional) operation: the elements of a certain set are grouped around one or several elements possessing the reference characteristics. In identifying types, boundaries between them are not established; instead, the structure of the type is specified, and the other elements are related to it on the basis of equality or similarity. Thus, if classification is grouping based on differences, typologization is grouping based on similarity.

There are two fundamental approaches to understanding and describing a type: 1) the type as the average (the maximally generalized) and 2) the type as the extreme (the maximally distinctive). In the first case a typical object is one whose properties are close in their expression to the average value for the sample; in the second, one with maximally pronounced properties. In the first case one speaks of a typical representative of a particular group (subset); in the second, of a vivid representative of the group, a representative with a strong manifestation of the qualities specific to that group. Thus the description "a typical representative of the intelligentsia" belongs to the first variant, and "a refined intellectual" to the second. The first understanding of type is characteristic of fiction and art, where types are derived; the second interpretation is inherent in scientific descriptions of type. Both approaches are found in everyday practice. Either variant leads to the formation of a holistic image - a standard with which real objects are compared.

Both varieties of type are identical in composition, since they manifest themselves in ideas about the structure of the leading characteristics of the type. The differences between them arise at the stage of relating real objects to them. The type as the average (the artistic type) acts as a model with which the degree of similarity and closeness of a particular object must be established; the "similarity" of the latter can be determined both from the side of insufficient expression of the quality (it "falls short" of the standard) and from the side of excessive expression (it exceeds the standard). The type as the extreme (the scientific type) serves as a standard by which the difference of a particular object from it is determined, the extent to which the object falls short of it. Thus the scientific type is an ideal, something like a role model.

So, the artistic type is a maximally generalized model for combining objects on the basis of the degree of similarity of the systems of their essential characteristics, while the scientific type is a maximally distinctive standard for combining objects on the basis of the degree of difference of the systems of their essential characteristics, which formally (but not in essence!) brings typologization closer to classification. Analysis of psychological typologies shows that psychological scientific types have a number of specific features. They have no metric, i.e. no measure of the severity of their characteristics - all such descriptions are qualitative. There is no hierarchy of characteristics, no indication of leading and subordinate, basic and additional qualities. The image is amorphous and subjective.
For this reason it is very difficult to assign a real object to any one type. Such descriptions are characterized by terminological ambiguity. The so-called "halo" is also common, when the characteristics of a type are taken to be not its qualities but the consequences arising from them: for example, when describing temperament types, the areas of activity in which people of a similar temperament are effective are given instead. In psychological science four kinds of typologies are known: 1) constitutional (the typologies of E. Kretschmer and W. Sheldon); 2) psychological (the typologies of C. G. Jung, K. Leonhard, A. E. Lichko, G. Schmieschek, H. Eysenck); 3) social (types of management and leadership); 4) astropsychological (horoscopes). Understanding a psychological type as a set of maximally expressed properties "allows us to represent the psychological status of any specific person as a result of the intersection of the properties of universal human types".

As we see, classification and typologization are two different ways of qualitatively processing empirical data, leading to two completely different forms of representing research results - a classification as a set of groups (classes) and a typology as a set of types. It is therefore impossible to agree with the rather widespread confusion of these concepts, still less with their identification. A class is a certain set of similar real objects, whereas a type is an ideal model which real objects resemble to one degree or another. The fundamental difference between a class and a type predetermines the fundamental separation of the procedures of typologization and classification and the categorical distinction between the results of these procedures - a typology and a classification.

In this respect the position of some sociologists is unclear: on the one hand, they are skeptical about the failure to distinguish between classification and typologization, and on the other, they consider it possible to regard classification as a way of constructing a typology: "if the term 'typology' used is closely related to the meaningful nature of the corresponding division of the population into groups, to a certain level of knowledge, then the term 'classification' does not have a similar property. We do not put any epistemological meaning into it. We need it only for convenience, so that we can speak of the correspondence of formal methods of dividing a population into groups to a meaningful idea of the types of objects." Such "convenience", however, leads to the actual identification of two completely different and oppositely directed processes: the classification procedure is defined "as the division of the original set of objects into classes", and "the process of typologization as the process of dividing a certain genus into species, concepts into their corresponding elements". The only difference here is that classes apparently mean single-level groups, while genera and species mean multi-level groups. The essence of both processes is the same: the partition of a set into subsets. It is therefore not surprising that these researchers complain that "when typology problems are solved with formal classification methods, the resulting classes do not always correspond to types in the meaningful sense of interest to the sociologist".

2.2.3. Systematization

Systematization is the ordering of objects within classes, classes among themselves, and sets of classes with other sets of classes. This is the structuring of elements within systems of different levels (objects in classes, classes in their set, etc.) and the coupling of these systems with other single-level systems, which allows us to obtain systems of a higher level of organization and generality. In the extreme, systematization is the identification and visual representation of the maximum possible number of connections of all levels in a set of objects. In practice, this results in a multi-level classification. Examples: taxonomy of flora and fauna; systematics of sciences (in particular, human sciences); taxonomy of psychological methods; taxonomy of mental processes; taxonomy of personality properties; taxonomy of mental states.

2.2.4. Periodization

Periodization is the chronological ordering of the existence of the object (phenomenon) being studied. It consists in dividing the life cycle of the object into significant stages (periods). Each stage usually corresponds to significant changes (quantitative or qualitative) in the object, which can be correlated with the philosophical category of the "leap". Examples of periodization in psychology: the periodization of human ontogenesis; the stages of personality socialization; the periodization of anthropogenesis; the stages and phases of group development (group dynamics), etc.

2.2.5. Psychological casuistry

Psychological casuistry is the description and analysis of both the most typical and the exceptional cases for the reality under study. This technique is characteristic of research in differential psychology. The individual approach in psychological work with people also predetermines the widespread use of casuistry in practical psychology. A clear example of the use of psychological casuistry is the incident method used in the study of professions.

3. INTERPRETATION METHODS

Even more than the organizational methods, these methods deserve to be called approaches, since they are first of all explanatory principles that predetermine the direction in which the results of a study are interpreted. In scientific practice the genetic, structural, functional, complex and systemic approaches have developed. Using one or another of them does not mean excluding the others; on the contrary, combinations of approaches are common in psychology. And this applies not only to research practice but also to psychodiagnostics, psychological counseling and psychocorrection.

3.1. Genetic method

The genetic method is a way of studying and explaining phenomena (including mental ones) based on the analysis of their development in both the ontogenetic and the phylogenetic plane. It requires establishing: 1) the initial conditions for the emergence of the phenomenon, 2) the main stages and 3) the main trends of its development. The purpose of the method is to reveal the connection of the phenomena being studied over time and to trace the transition from lower to higher forms. So wherever it is necessary to reveal the temporal dynamics of mental phenomena, the genetic method is an indispensable research tool for the psychologist. Even when research is aimed at studying the structural and functional characteristics of a phenomenon, the effective use of this method cannot be ruled out. Thus, the developers of the well-known theory of perceptual actions noted that in the microstructural analysis of perception "the genetic research method turned out to be the most suitable". Naturally, the genetic method is especially characteristic of the various branches of developmental psychology: comparative, age, and historical psychology. It is clear that any longitudinal study presupposes the use of the method in question. The genetic approach can in general be regarded as the methodological implementation of one of the basic principles of psychology, namely the principle of development. From this point of view, other ways of implementing the principle of development can be considered modifications of the genetic approach, for example the historical and evolutionary approaches.

3.2. Structural method

The structural approach is a direction focused on identifying and describing the structure of objects (phenomena). It is characterized by close attention to the description of the current state of objects, by the elucidation of their inherent timeless properties, and by an interest not in isolated facts but in the relations between them. As a result, a system of relations is built between the elements of the object at various levels of its organization. Usually, with the structural approach, the relation between the parts and the whole in the object and the dynamics of the identified structures are not emphasized, and the decomposition of the whole into parts can be carried out in various ways. An important advantage of the structural method is the relative ease of presenting the results visually in the form of various models, which can be given as descriptions, lists of elements, graphic diagrams, classifications, and so on. An inexhaustible example of such modeling is the representation of the structure and types of personality: the three-element model of S. Freud; Jung's personality types; the "Eysenck circle"; the multifactorial model of R. Assagioli. Our domestic science has not lagged behind foreign psychology in this respect: the endo- and exopsychics of A. F. Lazursky and the development of his views by V. D. Balin; the personality structure of four complex complexes according to B. G. Ananyev; the individual-individuality scheme of V. S. Merlin; the lists of A. G. Kovalev and P. I. Ivanov; the dynamic functional structure of personality according to K. K. Platonov; the scheme of A. I. Shcherbakov, and others. The structural approach is an attribute of any research devoted to the constitutional organization of the psyche and the structure of its material substrate, the nervous system. Here we can mention the typology of higher nervous activity of I. P. Pavlov and its development by B. M. Teplov, V. D. Nebylitsyn and others. The models of V. M. Rusalov, reflecting the morphological, neuro- and psychodynamic constitution of the human being, have received wide recognition. Structural models of the human psyche in the spatial and functional aspects are presented in a number of works. Classic examples of the approach under consideration are the associative psychology of D. Hartley and its consequences (in particular, the psychophysics of "pure sensations" of the nineteenth century), as well as the structural psychology of W. Wundt and E. Titchener. A specific concretization of the approach is the method of microstructural analysis, which includes elements of the genetic, functional and systemic approaches.

3.3. Functional method

The functional approach is naturally focused on identifying and studying the functions of objects (phenomena). The ambiguity of the concept of "function" in science makes it difficult to define this approach and to identify particular lines of psychological research with it. We will hold to the view that a function is the manifestation of the properties of objects in a certain system of relations, while properties are the manifestation of the quality of an object in its interaction with other objects. A function is thus the realization of the relation between an object and its environment, and also "the correspondence between the environment and the system". Therefore, the functional approach is interested chiefly in the connections between the object being studied and its environment. It proceeds from the principle of self-regulation and the maintenance of the equilibrium of objects of reality (including the psyche and its carriers). Examples of the implementation of the functional approach in the history of science are such well-known directions as "functional psychology" and "behaviorism". A classic example of the embodiment of the functional idea in psychology is the famous dynamic field theory of K. Lewin. In modern psychology the functional approach is enriched with components of structural and genetic analysis. Thus, the idea of the multi-level and multi-phase nature of all human mental functions, operating simultaneously at all levels as a single whole, is already firmly established. The above examples of structures of personality, the nervous system and the psyche may rightfully be taken as illustrations of the functional approach as well, since most authors of the corresponding models also regard the elements of these structures as functional units embodying certain connections between the person and reality.

3.4. Complex method

The complex approach is a direction that regards the object of research as a set of components to be studied with a corresponding set of methods. The components may be both relatively homogeneous parts of the whole and its heterogeneous sides, characterizing the object under study in different aspects. Often the complex approach involves studying a complex object with the methods of a complex of sciences, i.e. organizing interdisciplinary research. Obviously, the complex approach presupposes the use, to one degree or another, of all the previous interpretive methods. A striking example of the implementation of the complex approach in science is the concept of human knowledge, according to which the human being, as the most complex object of study, is subject to the coordinated study of a large complex of sciences. In psychology this idea of the complex study of the human being was clearly formulated by B. G. Ananyev. The human being is considered simultaneously as a representative of the biological species Homo sapiens (an individual), as a carrier of consciousness and an active element of cognitive and reality-transforming activity (a subject), as a subject of social relations (a personality), and as a unique unity of socially significant biological, social and psychological characteristics (an individuality). This view makes it possible to study a person's psychological content in terms of subordination (hierarchy) and coordination. In the first case mental phenomena are considered as subordinated systems: the more complex and general ones subordinate and include the simpler and more elementary ones. In the second, mental phenomena are considered as relatively autonomous formations that are nevertheless closely connected and interact with one another. Such a comprehensive and balanced study of the human being and his psyche is in fact already connected with the systems approach.

3.5. System method

The systems approach is a methodological direction in the study of reality that regards any fragment of it as a system. The most tangible impetus toward recognizing the systems approach as an integral methodological and methodical component of scientific knowledge, and toward its rigorous scientific formulation, was the work of the Austrian-American scientist L. Bertalanffy (1901-1972), in which he developed a general theory of systems. A system is a certain integrity that interacts with the environment and consists of many elements standing in certain relations and connections with one another. The organization of these connections between the elements is called the structure. Sometimes structure is interpreted broadly, its understanding being extended to the scope of the whole system. Such an interpretation is typical of everyday practice: "commercial structures", "state structures", "political structures", and so on. Occasionally such a view of structure is also found in science, though with certain reservations. An element is the smallest part of a system that retains its properties within the given system; further division of this part leads to the loss of the corresponding properties. Thus an atom is an element with certain physical properties, a molecule an element with chemical properties, a cell an element with the properties of life, and a human being (a personality) an element of social relations. The properties of the elements are determined by their position in the structure and, in turn, determine the properties of the system. But the properties of the system are not reducible to the sum of the properties of the elements: the system as a whole synthesizes (combines and generalizes) the properties of its parts and elements, as a result of which it possesses properties of a higher level of organization which, in interaction with other systems, can appear as its functions. Any system can be considered, on the one hand, as a union of simpler (smaller) subsystems with their own properties and functions and, on the other, as a subsystem of more complex (larger) systems. For example, any living organism is a system of organs, tissues and cells; it is also an element of the corresponding population, which in turn is a subsystem of the animal or plant world, and so on.

Systemic research is carried out by means of systemic analysis and synthesis. In the process of analysis the system is separated from its environment, and its composition (the set of elements), structure, functions, integral properties and characteristics, system-forming factors and relations with the environment are determined. In the process of synthesis a model of the real system is created, the level of generalization and abstraction of the description of the system is raised, and the completeness of its composition and structure and the patterns of its development and behavior are determined. The description of objects as systems, i.e. systemic description, performs the same functions as any other scientific description: explanatory and predictive. But, more importantly, systemic descriptions perform the function of integrating knowledge about objects. The systems approach in psychology makes it possible to reveal what mental phenomena have in common with other phenomena of reality. This makes it possible to enrich psychology with the ideas, facts and methods of other sciences and, conversely, allows psychological data to penetrate other areas of knowledge.
It makes it possible to integrate and systematize psychological knowledge, to eliminate redundancy in the accumulated information, to reduce the volume and increase the clarity of descriptions, and to reduce subjectivity in the interpretation of mental phenomena. It helps to see gaps in knowledge about specific objects, to detect their incompleteness, to determine the tasks of further research, and sometimes to predict the properties of objects about which there is no information by extrapolating and interpolating the available information. In educational activity, systemic methods of description make it possible to present educational information in a form that is more visual and better suited for perception and memorization, to give a more holistic picture of the objects and phenomena being covered, and, finally, to move from an inductive presentation of psychology to a deductive-inductive one. The previous approaches are in fact organic components of the systems approach; sometimes they are even regarded as its varieties. Some authors correlate these approaches with the corresponding levels of human qualities that constitute the subject matter of psychological research. At present most scientific research is carried out in line with the systems approach. The systems approach has received its most complete coverage in relation to psychology in a number of works.

Literature

    Ananyev B. G. On the problems of modern human science. M., 1977. Ananyev B.G. On the methods of modern psychology // Psychological methods in a comprehensive longitudinal study of students. L., 1976. Ananyev B. G. Man as an object of knowledge. L.. 1968. Balin V.D. Mental reflection: Elements of theoretical psychology. St. Petersburg, 2001. Balin V.D. Theory and methodology of psychological research. L., 1989. Bendatalafanri L. Application of correlation and spectral analysis. M., 1983. Bertalanfanry L. History and status of general systems theory // System Research. M.. 1973. Bertalanffy L. General systems theory - review of problems and results // Systems research. M., 1969. Blagush P. Factor analysis with generalizations. M, 1989. Borovkov A. A. Mathematical statistics: Estimation of parameters. Testing hypotheses. M.. 1984. Braverman E.M.,Muchnik I. B. Structural methods for processing empirical data, M.. 1983. Burdun G.V., Markov, S.M. Fundamentals of metrology. M., 1972. Ganzen V. A. Guidelines for the course “System methods in psychology.” L., 1987. Ganzen V. A. System descriptions in psychology. L., 1984. Ganzen V. A. Systematic approach in psychology. L., 1983. Ganzen V. A., Fomin A. A. On the concept of type in psychology // Bulletin of SNbSU. ser. 6, 1993, issue. 1 (No. 6). Ganzen V. A., Khoroshilov B. M. The problem of systematic description of qualitative changes in psychological objects. Dep. VINITI, 1984, No. 6174-84. Glass J., Stanley J. Statistical methods in pedagogy and psychology. M.. 1976. Godefroy J. What is psychology? T. 1-2. M, 1992. Gordon V. M., Zinchenko V. P. System-structural analysis of cognitive activity // Ergonomics, vol. 8. M., 1974. Gusev E. K., Nikandrov V. V. Psychophysics. L., 1987. Gusev E.K., Nikandrov V.V. Psychophysics. Part P. Psychological scaling. L., 1985. Draneper I.. Smith G. Applied regression analysis. In 2 books. 2nd ed. M.. 1987. Druzhinin V.I. Experimental psychology. M.. 1997. Davison M. Multidimensional scaling. Methods for visual presentation of data. M., 1988. Durand B., Odell P. Cluster analysis. M., 1977. Ezekiel M., Fox K.A. Methods for analyzing correlations and regressions. M.. 1966. Zarochentsev K.D., Khudyakov A.I. Basics of psychometrics. St. Petersburg, 1996. Zinchenko V. P. On the microstructural method of studying cognitive activity//Ergonomics, vy. 3. M., 1972. Zinchenko V. P., Zinchenko T. P. Perception//General Psychology/Ed. L. V. Petrovsky. Ed. 2nd. M.. 1976. Iberla K. Factor analysis. M., 1980. Itelson L.B. Mathematical and cybernetic methods in pedagogy. M., 1964. Kagan M.S. Systematic approach and humanitarian knowledge. L.. 1991. Kolkot E. Significance check. M.. 1978. Kornilova G.V. Introduction to psychological experiment. M., 1997. Koryukin V.I. Concepts of levels in modern scientific knowledge. Sver-dlovsk, 1991. Krylov A.A. Systematic approach as the basis for research in engineering psychology and labor psychology // Methodology of research in engineering psychology and labor psychology, part 1. Leningrad, 1974. Kuzmin V.P. Systematic principles in the theory and methodology of K. Marx. Ed. 2nd. M.. 1980. Kuzmin V.P. Various directions in the development of a systematic approach and their epistemological foundations // Questions of Philosophy, 1983, No. 3. Kulikov L.V. Psychological research. Methodological recommendations for carrying out. 6th ed. St. Petersburg, 2001. Kyun Yu. Descriptive and inductive statistics. M., 1981. Leman E. L. Testing statistical hypotheses. 2nd ed. M., 1979. Lomov B.F. 
Methodological and theoretical problems of psychology. M., 1984. Lomov B.F. On the systems approach in psychology // Questions of psychology, 1975, No. 2. Lomov B.F. On the ways of development of psychology // Questions of psychology. 1978. No. 5. Lawley D., Maxwell L. Factor analysis as a statistical method. M., 1967. Mazilov V. A. On the relationship between theory and method in psychology // Ananyevye readings - 98 / Materials of scientific and practical studies. conferences. St. Petersburg, 1998. Malikov S. F., Tyurin N. I. Introduction to metrology. M, 1965. Mathematical psychology: theory, methods, models. M, 1985. Mirkin B. G. Analysis of qualitative features and structures. M.. 1980. Miroshnikov S. A. Study of the levels of organization of human mental activity // Theoretical and applied issues of psychology, vol. 1, part II. St. Petersburg, 1995. Mondel I. D. Cluster analysis. M., 1988. Nikaidrov V.V. On a systematic description of the functional structure of the psyche // Theoretical and applied issues of psychology, vol. 1. St. Petersburg, 1995. Nikandrov V.V. Historical psychology as an independent scientific discipline//Bulletin of Leningrad State University, ser. 6. 1991, issue. 1 (No. 6). Nikandrov V.V. On the relationship between psychological macrocharacteristics of a person // Bulletin of St. Petersburg State University, vol. 3. 1998. Nikandrov V.V. Spatial model of the functional structure of the human psyche // Bulletin of St. Petersburg State University, 1999, no. 3, no. 20. Okun Ya. Factor analysis. M., 1974. Paramey G.V. Application of multidimensional scaling in psychological research // Bulletin of Moscow State University, ser. 14. 1983, no. 2. Pir'ov G. D. Experimental psychology. Sofia, 1968. Pir'ov G. D. Classification of methods in psychology // Psychodiagnostics in socialist countries. Bratislava, 1985. Plokhinsky N. A. Biometrics. 2nd ed. M., 1970. Poston T., Stewart I. Catastrophe theory and its applications. M., 1980. Workshop on psychodiagnostics. Differential psychometrics/Ed. V. V. Stolina, A. G. Shmeleva. M., 1984. The principle of development in psychology / Rep. ed. L. I. Antsyferova. M., 1978. The problem of levels and systems in scientific knowledge. Minsk, 1970. Pfanzagl I. Theory of measurements. M., 1976. PierroiA. Psychophysics//Experimental psychology, vol. 1-2. M.. 1966. Rappoport A. Systematic approach in psychology // Psychological journal, 1994, No. 3. Rogovin M. S. Structural-level theories in psychology. Yaroslavl, 1977. Rudestam K. Group psychotherapy. M., 1980. Rusalov V. M. Biological bases of individual psychological differences. M., 1979. Selye G. From dream to discovery: How to become a scientist. M., 1987. Sergeants V.F. Introduction to the methodology of modern biology. L., 1972. Sergeants V.F. Man, his nature and the meaning of existence. L., 1990. Sidorenko E. V. Methods of mathematical processing in psychology. St. Petersburg, 2001. Systematic approach to the psychophysiological problem / Rep. ed. V. B. Shvyrkov. M., 1982. Steven S S. Mathematics, measurement and psychophysics // Experimental psychology / Ed. S. S. Stephen. T. 1. M.. 1960. Stephen S.S. On the psychophysical law // Problems and methods of psychophysics. M., 1974. Sukhodolsky G.V. Mathematical psychology. St. Petersburg.. 1997. Sukhodolsky G.V. Fundamentals of mathematical statistics for psychologists. L., 1972. Thurston L.L. Psychological analysis // Problems and methods of psychophysics. M., 1974. 
Typology and classification in sociological research / Ed. V. G. Andreenkov, Yu. N. Tolstova. M., 1982. Uemov A. I. Systems approach and general systems theory. M., 1978. Factor, discriminant and cluster analysis / Ed. I. S. Enyukov. M., 1989. Harman G. G. Modern factor analysis. M., 1972. Švancara J. et al. Diagnostics of mental development. Prague, 1978. Scheffe G. Analysis of variance. M., 1963. Schreiber D. Problems of scaling // The process of social research. M., 1975. Bertalanffy L. General System Theory. Foundations, Development, Applications. N.Y., 1968. Choynowski M. Die Messung in der Psychologie // Die Probleme der mathematischen Psychologie. Warschau, 1971. Guthjahr W. Die Messung psychischer Eigenschaften. Berlin, 1971. Leinfellner W. Einführung in die Erkenntnis- und Wissenschaftstheorie. Mannheim, 1965. Lewin K. A dynamic theory of personality. N.Y., 1935. Lewin K. Principles of topological psychology. N.Y., 1936. Sixtl F. Messmethoden der Psychologie. Weinheim, 1966, 1967. Stevens S. S. Sensory scales of taste intensity // Perception and Psychophysics. 1969. Vol. 6. Torgerson W. S. Theory and methods of scaling. N.Y., 1958.

Quantitative and qualitative data in experiment and other research methods.

Qualitative data are text, a description in natural language. They can be obtained through the use of qualitative methods (observation, survey, etc.).

Quantitative data are the next step in the organization of qualitative data.

A distinction is made between the quantitative processing of results and the measurement of variables.

A qualitative method is, for example, observation. The postulate of the immediacy of observational data holds that psychological reality is presented directly to observation. The observer is active both in organizing the observation process and in interpreting the facts obtained.

Different approaches to the essence of psychological measurement:

1. The problem is posed as that of assigning numbers on a scale to a psychological variable in order to order psychological objects and the psychological properties being assessed. It is assumed that the properties of the measuring scale correspond to the empirically obtained measurement results, and that the statistical criteria applied to the data are adequate to the researchers' understanding of the different types of scales, though the assumptions themselves are usually left implicit.

2. The second approach goes back to the traditions of the psychophysical experiment, where the measurement procedure has the ultimate goal of describing phenomenal properties in terms of changes in the objective (stimulus) characteristics (the merit of S. Stevens).

He introduced a distinction between types of scales:

nominal; ordinal (the monotonicity condition is fulfilled, so ranking is possible); interval (for example, IQ scores; here the question "by how much?" can be answered); ratio (here the question "how many times?" can be answered; there is an absolute zero and units of measurement, as in psychophysics).

Thanks to this, psychological measurement began to appear not only as the establishment of quantitative psychophysical dependencies but also in the broader context of the measurement of psychological variables.
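The sketch below is illustrative and not from the original text: it simply summarizes Stevens' four scale types together with the relations and the typical statistics usually regarded as permissible at each level.

```python
# Illustrative summary of Stevens' scale types (an assumption-free reminder,
# not a formal implementation of any particular measurement procedure).
STEVENS_SCALES = {
    "nominal":  {"relations": "= / !=",                  "typical statistics": "frequencies, mode"},
    "ordinal":  {"relations": "> / < (ranking)",         "typical statistics": "median, rank correlation"},
    "interval": {"relations": "+ / - (differences)",     "typical statistics": "mean, standard deviation"},
    "ratio":    {"relations": "* / : (ratios, true zero)", "typical statistics": "geometric mean, coefficient of variation"},
}

def describe(level: str) -> str:
    """Return a short reminder of what a given scale level allows."""
    info = STEVENS_SCALES[level]
    return f"{level}: relations {info['relations']}; {info['typical statistics']}"

if __name__ == "__main__":
    for level in STEVENS_SCALES:
        print(describe(level))
```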

Qualitative description comes in two forms: description in the dictionary of a natural language and the development of systems of symbols, signs and units of observation. Categorized observation is the reduction of units into categories, i.e. generalization. An example is Bales's standardized observation procedure for describing the interaction of the members of a small group in solving a problem. A category system (in the narrow sense) is a set of categories covering all theoretically permissible manifestations of the process being studied.

Quantitative assessment: 1) event sampling - a complete verbal description of behavioral events with their subsequent reading and psychological reconstruction; in the narrow sense of the term, the observer's exact temporal or frequency recording of the "units" of description; 2) time sampling - the observer records particular time intervals, i.e. determines the duration of events (the time-sampling technique). Subjective scales have also been specially developed for quantitative assessment (an example is Sheldon's somatotypes and the temperaments associated with them).

Data processing methods can be divided into qualitative and quantitative. Qualitative processing is a special way of penetrating into the essence of an object by identifying its unmeasurable properties; it is aimed primarily at a meaningful, internal study of the object. In the qualitative processing of research results, synthetic methods of cognition and logical methods dominate. Qualitative processing of the research results goes into the description and explanation of the phenomena being studied, which constitutes the next level of their study at the stage of interpretation of the results.

Primary data processing may include compiling summary tables of the results obtained, which record quantitative and qualitative data (frequencies of their occurrence, indicators converted into ranks, numerical codes of qualitative parameters, etc.). The data obtained as a result of the study, grouped into tables, can be easily and conveniently processed using statistical data processing methods, i.e. with the help of mathematical formulas, certain methods of quantitative calculations, thanks to which indicators can be generalized, brought into the system, revealing patterns hidden in them.

All methods of statistical data processing can be divided into primary and secondary. Primary methods of statistical processing are those by which indicators are obtained that directly reflect the results of psychodiagnostic measurements. The primary methods of statistical processing include the following.

1. Calculation of the sample mean, i.e. the average value of the psychological quality studied in the research. The sample mean is determined by the formula

x̄ = (x_1 + x_2 + ... + x_n) / n = (1/n) · Σ x_k,

where x̄ is the sample mean (the arithmetic mean for the sample);

n is the number of subjects in the sample, or the number of individual psychodiagnostic indicators from which the mean is calculated;

x_k are the individual values of the indicator for individual subjects; there are n such indicators, so the index k takes values from 1 to n;

Σ is the sign accepted in mathematics for summing the values of the variables standing to the right of it. The expression Σ x_k accordingly means the sum of all x_k with index k from 1 to n.

2. The sample variance is a quantity characterizing the degree to which individual values deviate from the mean value in a given sample. The greater the variance, the greater the deviation, or spread, of the data, and vice versa. The variance is determined by the formula

S² = (1/n) · Σ (x_k − x̄)²,

where S² is the sample variance, or simply the variance;

Σ (x_k − x̄)² means that for every x_k, from the first to the last in the given sample, the difference between the individual value and the mean value is calculated, these differences are squared and then summed;

n is the number of subjects in the sample, or the number of primary values from which the variance is calculated.
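As an illustration, here is a minimal Python sketch of the two formulas above; the scores are hypothetical, and the variance is computed by dividing by n, as in the text (some sources divide by n − 1 for an unbiased estimate).

```python
# Sample mean and sample variance as defined above.
from typing import Sequence

def sample_mean(x: Sequence[float]) -> float:
    return sum(x) / len(x)

def sample_variance(x: Sequence[float]) -> float:
    m = sample_mean(x)
    return sum((xk - m) ** 2 for xk in x) / len(x)   # division by n, as in the text

scores = [12, 15, 11, 14, 13, 16, 12]   # hypothetical test scores
print(sample_mean(scores))               # about 13.29
print(sample_variance(scores))           # spread of the scores around the mean
```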

3. The sample mode is the value of the characteristic being studied that occurs most frequently in the sample. For data grouped into intervals the mode is determined by the formula

Mo = x₀ + h · (f_Mo − f_Mo−1) / ((f_Mo − f_Mo−1) + (f_Mo − f_Mo+1)),

where Mo is the mode;

x₀ is the value of the beginning of the modal interval;

h is the size of the modal interval;

f_Mo is the frequency of the modal interval;

f_Mo−1 is the frequency of the interval preceding the modal one;

f_Mo+1 is the frequency of the interval following the modal one.
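A small Python sketch of the grouped-data mode formula above; the interval boundaries and frequencies are hypothetical.

```python
# Mode of grouped data: Mo = x0 + h * (f_mo - f_prev) / ((f_mo - f_prev) + (f_mo - f_next)).
def grouped_mode(x0: float, h: float, f_mo: float, f_prev: float, f_next: float) -> float:
    return x0 + h * (f_mo - f_prev) / ((f_mo - f_prev) + (f_mo - f_next))

# Hypothetical example: modal interval 10-15 (width 5) with frequency 20,
# neighbouring intervals with frequencies 12 and 8.
print(grouped_mode(x0=10, h=5, f_mo=20, f_prev=12, f_next=8))  # 12.0
```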

4. The sample median is the value of the characteristic being studied that divides the sample, ordered by the value of this characteristic, in half. If the number of values N is odd, the median corresponds to the central value of the ordered series, whose number is determined by the formula

No_Me = (N + 1) / 2,

where No_Me is the number of the value corresponding to the median and N is the number of values in the data set. The median is then the value with this number, Me = x_(N+1)/2. If the number of values is even, i.e. there are two central values instead of one, the arithmetic mean of the two central values is taken:

Me = (x_N/2 + x_N/2+1) / 2.
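A minimal Python sketch of the median rule just described, covering both the odd and the even case; the data are hypothetical.

```python
# Sample median: the central value for an odd number of observations,
# the mean of the two central values for an even number.
def sample_median(values: list[float]) -> float:
    ordered = sorted(values)
    n = len(ordered)
    if n % 2 == 1:                        # odd: element number (n + 1) / 2
        return ordered[(n + 1) // 2 - 1]  # -1 because Python indexes from 0
    mid = n // 2                          # even: average the two central values
    return (ordered[mid - 1] + ordered[mid]) / 2

print(sample_median([3, 1, 4, 1, 5]))      # 3
print(sample_median([3, 1, 4, 1, 5, 9]))   # 3.5
```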

Secondary methods of statistical processing are those by which statistical patterns hidden in the primary data are revealed. The secondary methods most commonly used in psychological research include the following.

1. Comparison of the sample means of two populations and determination of the statistical significance of the difference between them using Student's t-test. It is calculated by the formula

t = (x̄₁ − x̄₂) / √(m₁² + m₂²),

where x̄₁ is the mean value of the variable in one data sample;

x̄₂ is the mean value of the variable in the other data sample;

m₁ and m₂ are integrated indicators of the deviation of the individual values in the two compared samples from their corresponding mean values; they are calculated, in turn, by the formulas

m₁² = S₁² / n₁,   m₂² = S₂² / n₂,

where S₁² is the sample variance of the variable in the first sample;

S₂² is the sample variance of the variable in the second sample;

n₁ is the number of individual values of the variable in the first sample;

n₂ is the number of individual values of the variable in the second sample.

After the value of t has been calculated by this formula, Table 5 is consulted for the given number of degrees of freedom, equal to n₁ + n₂ − 2, and for the chosen probability of acceptable error; the required critical value is found and compared with the calculated value of t. If the calculated value of t is greater than or equal to the critical value, it is concluded that the compared mean values of the two samples differ statistically significantly, with a probability of error less than or equal to the chosen one.

Table 5. Critical values of Student's t-test for a given number of degrees of freedom and probabilities of acceptable error equal to 0.05, 0.01 and 0.001

Degrees of freedom (n₁ + n₂ − 2)   p = 0.05   p = 0.01   p = 0.001
 4     2.78   4.60   8.61
 5     2.58   4.03   6.87
 6     2.45   3.71   5.96
 7     2.37   3.50   5.41
 8     2.31   3.36   5.04
 9     2.26   3.25   4.78
10     2.23   3.17   4.59
11     2.20   3.11   4.44
12     2.18   3.05   4.32
13     2.16   3.01   4.22
14     2.14   2.98   4.14
15     2.13   2.96   4.07
16     2.12   2.92   4.02
17     2.11   2.90   3.97
18     2.10   2.88   3.92
19     2.09   2.86   3.88
20     2.09   2.85   3.85
21     2.08   2.83   3.82
22     2.07   2.82   3.79
23     2.07   2.81   3.77
24     2.06   2.80   3.75
25     2.06   2.79   3.73
26     2.06   2.78   3.71
27     2.05   2.77   3.69
28     2.05   2.76   3.67
29     2.05   2.76   3.66
30     2.04   2.75   3.65
40     2.02   2.70   3.55
50     2.01   2.68   3.50
60     2.00   2.66   3.46
80     1.99   2.64   3.42
100    1.98   2.63   3.39
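For illustration, a minimal Python sketch of the comparison described above: t is computed by the formula given in the text (with variances divided by n, as defined earlier), and |t| is then compared with the critical value from Table 5 for n₁ + n₂ − 2 degrees of freedom. The data are hypothetical.

```python
# Student's t-test for two independent samples, following the formula in the text.
import math

def t_statistic(sample1: list[float], sample2: list[float]) -> tuple[float, int]:
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    s1 = sum((x - m1) ** 2 for x in sample1) / n1   # variances as defined above
    s2 = sum((x - m2) ** 2 for x in sample2) / n2
    t = (m1 - m2) / math.sqrt(s1 / n1 + s2 / n2)
    return t, n1 + n2 - 2                           # t value and degrees of freedom

group_a = [14, 16, 15, 13, 17, 15]   # hypothetical scores, condition A
group_b = [11, 12, 13, 10, 12, 11]   # hypothetical scores, condition B
t, df = t_statistic(group_a, group_b)
print(f"t = {t:.2f}, df = {df}")     # compare |t| with the critical value from Table 5
```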

2. Comparison of frequency distributions of data (for example, percentage distributions) using Pearson's chi-square (χ²) test. It is calculated by the formula

χ² = Σ (P_k − V_k)² / V_k,

where P_k is the frequency of the results observed before the experiment;

V_k is the frequency of the results observed after the experiment;

m is the total number of groups into which the observation results were divided (the summation runs over k from 1 to m).

After the value of χ² has been calculated by this formula, the table of critical values of χ² is consulted for the given number of degrees of freedom and the chosen probability of acceptable error, and the required critical value is found and compared with the calculated one. If the calculated value of χ² is greater than or equal to the critical value, it is concluded that the compared frequency distributions of the two samples differ statistically significantly, with a probability of error less than or equal to the chosen one.
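A minimal Python sketch of the chi-square comparison by the formula above; the frequencies are hypothetical (for example, percentages of answers before and after an intervention).

```python
# Chi-square comparison of two frequency distributions: chi2 = sum((P_k - V_k)^2 / V_k).
def chi_square(before: list[float], after: list[float]) -> float:
    assert len(before) == len(after), "both distributions need the same number of groups"
    return sum((p - v) ** 2 / v for p, v in zip(before, after))

p_before = [20, 30, 50]   # P_k: frequencies observed before the experiment
v_after  = [25, 35, 40]   # V_k: frequencies observed after the experiment
print(chi_square(p_before, v_after))   # compare with the critical chi-square value
```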

3. The Spearman rank correlation method makes it possible to determine the closeness (strength) and direction of the correlation between two characteristics or two profiles (hierarchies) of characteristics. Its formula is as follows:

Rs = 1 - 6 · Σ di² / (n · (n² - 1)),

where Rs is the Spearman rank correlation coefficient;

di is the difference between the ranks of the indicators of the same subjects in the ordered series;

n is the number of subjects or digital data (ranks) in the correlated series.
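A minimal Python sketch of this coefficient for two series of ranks; the ranks are illustrative and assumed to contain no ties:

```python
# A minimal sketch of Spearman's rank correlation: di are the differences between
# the ranks of the same subjects in the two ordered series, n the number of subjects.
ranks_x = [1, 2, 3, 4, 5]   # illustrative ranks on the first characteristic
ranks_y = [2, 1, 4, 3, 5]   # illustrative ranks on the second characteristic

n = len(ranks_x)
d_squared = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
r_s = 1 - 6 * d_squared / (n * (n ** 2 - 1))
print(f"Spearman rank correlation Rs = {r_s:.2f}")   # -> 0.80
```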

4. Factor analysis is a method for determining the totality of internal relationships and possible cause-and-effect relationships in the research material. As a result of factor analysis, factors are identified, understood here as the causes that explain a multitude of partial (pairwise) correlation dependencies. Factor analysis involves calculating a correlation matrix for all variables involved in the analysis, extracting the factors, rotating the factors to obtain a simplified structure, and interpreting the factors. The mathematical model of factor analysis can be presented as follows:

V i = A i,1 F 1 + A i,2 F 2 + ... + A i,k F k + U,

where V i is the value of the i-th variable, expressed as a linear combination of k common factors; A i,1 ... A i,k are regression coefficients (factor loadings) showing the contribution of each of the k factors to this variable; F 1 ... F k are the factors common to all variables; U is a factor characteristic only of the variable V i.
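As an illustration, here is a minimal Python sketch of the first steps described above (correlation matrix and factor extraction); it uses a simple eigen-decomposition without rotation, and the synthetic data are purely illustrative:

```python
# A sketch of factor extraction: build the correlation matrix of the observed
# variables and take the loadings of the strongest factors (no rotation here).
import numpy as np

rng = np.random.default_rng(0)
latent1, latent2 = rng.normal(size=(2, 100))   # two hidden "causes"
data = np.column_stack([
    latent1 + 0.3 * rng.normal(size=100),      # variables 1 and 2 share factor 1
    latent1 + 0.3 * rng.normal(size=100),
    latent2 + 0.3 * rng.normal(size=100),      # variables 3 and 4 share factor 2
    latent2 + 0.3 * rng.normal(size=100),
])

corr = np.corrcoef(data, rowvar=False)          # correlation matrix of all variables
eigenvalues, eigenvectors = np.linalg.eigh(corr)
order = np.argsort(eigenvalues)[::-1]           # strongest factors first
loadings = eigenvectors[:, order] * np.sqrt(eigenvalues[order])
print("Loadings (A i,k) on the two strongest factors:")
print(np.round(loadings[:, :2], 2))
```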

Workshop

Task 1. Define the experiment as a method of psychological research. How does an experiment differ from other research methods (observation, correlational research)?

Task 2. Define an experimental hypothesis. What types of hypotheses do you know (at least 5)? Give examples of these hypotheses.

Task 3. What types of variables do you know? Identify them. What variables are the main ones and are included in the formulation of the main experimental hypothesis? Give examples of variables.

Task 4. Indicate the independent and dependent variables, describe the features of the independent variable (between-subjects or within-subjects; manipulated or subject-based), and state which experimental design was used.

To study the effects of crowding on problem solving, participants were asked to solve a series of word puzzles while in either large or small rooms. To get the same average verbal IQ across the groups, the researchers measured the participants' verbal intelligence and then assigned them to the two conditions.

Task 5. How does a single-factor experiment differ from a multi-factor experiment? Give examples.

Task 6. Using the text provided, indicate which methods in psychology F. Galton can be considered the founder of. Do you agree that the results of sensory discrimination tests can help assess intelligence?

In 1884, at the World's Fair in London, Francis Galton organized an anthropometric laboratory where, for a fee of 3 pence, visitors could have their visual acuity, hearing and muscle strength tested and some of their physical characteristics measured. F. Galton believed that sensory discrimination tests could serve as a means of assessing intelligence (in particular, he found that in idiocy the ability to distinguish between heat, cold and pain is impaired).

Task 7. Combine the listed parameters into two groups, characterizing the features of individual and group testing. Explain the advantages and disadvantages of both types of examination.

Taking into account individual characteristics; freedom of subjects in answering questions and tasks; the ability to cover large groups of subjects; impossibility of taking into account random factors (illness, fatigue, emotional discomfort); the ability to achieve mutual understanding with the subject; presenting tasks through a microphone; obtaining a large amount of data; the ability to monitor how a task is performed; presenting tasks in the most formalized form; projective techniques; simplification of instructions; objectivity in data processing; saving test material; ease of data collection; speed of data collection (saving time); use of flexible test tasks.

Task 8. Correct the errors in the given text.

The purpose of observation is to accurately and in detail describe experiences, mental states and behavior. It should be limited to impartial recording of facts of behavior, without attempting to penetrate into their causes. Observation performs only auxiliary functions, allowing the accumulation of empirical material, and is practically not used as an independent method. There are no situations where observation can be used as the only objective method.

Task 9. Formulate your attitude to the statement:

“Method is the very first, basic thing. The seriousness of the research depends on the method, on the method of action. It's all about good method. With a good method, even a not very talented person can do a lot. And with a bad method, even a brilliant person will work in vain and will not receive any valuable, accurate knowledge.”

Within psychology, there are two main approaches to data collection - qualitative and quantitative. In the quantitative approach, information is converted into numbers. Examples include filling out a questionnaire or answering questions about the extent to which people agree or disagree with certain statements. Answers can be scored in points reflecting the respondents' views. One advantage of the quantitative method is that it allows hypotheses to be tested and comparisons to be made easily between different social groups, for example the employed and the unemployed. Its main drawback is that people's real statements are hidden behind abstract numbers.

Qualitative research preserves the richness and diversity of people's feelings and thoughts. Surveys are also widely used here, but what matters is what is done later with the data obtained, which can also be converted into numbers. For example, analyzing John's responses quantitatively, one can count the number of words he used that indicate a depressed psychological state. Qualitative analysis consists in analyzing the meaning of these answers, for example what John means by the word "unemployment". Qualitative methodology examines the connections between events and activities and explores how people perceive these connections.

Personality can also be studied using both quantitative and qualitative analysis. Quantitative, or variational-statistical, analysis consists in calculating coefficients of correct problem solving and the frequency of repetition of observed mental phenomena. To compare the results of studies involving different numbers of tasks or groups of different sizes, relative, mainly percentage, indicators are used rather than absolute ones. In the quantitative analysis of research results, the arithmetic mean of all studies of a particular mental process or individual psychological feature is most often used. In order to draw conclusions about the reliability of the arithmetic mean, the coefficient of deviation of individual indicators from it is calculated. The smaller the deviation of the individual indicators from the arithmetic mean, the more representative it is of the psychological characteristics of the individual under study.
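A small Python sketch of these quantitative indicators; the deviation coefficient is taken here, as one possible reading, to be the coefficient of variation, and the scores are illustrative:

```python
# Illustrative per-study percentages of correctly solved tasks.
scores = [70, 75, 72, 68, 74]

mean = sum(scores) / len(scores)                                   # arithmetic mean
std = (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5  # standard deviation
variation = std / mean * 100                                       # deviation as % of the mean
print(f"mean = {mean:.1f}%, coefficient of variation = {variation:.1f}%")
# The smaller this deviation coefficient, the more representative the mean is
# of the individual's psychological characteristics.
```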

Qualitative analysis is performed on the basis of quantitative analysis but is not limited to it. In qualitative analysis, the reasons for high or low indicators are clarified, along with their dependence on the age and individual characteristics of the person, living and learning conditions, relationships within the group, attitude toward the activity, and so on.

Quantitative and qualitative analysis of research data together provide the basis for compiling a psychological and pedagogical characterization of the individual and for drawing conclusions about educational activities.