
Types of Design Research Every Designer Should Know

UI/UX, Design

Some people think that design is an absolutely creative profession. But inspiration and a sense of beauty are not enough to create a professional design.

To do their job effectively, professionals must not only master the craft of design but also apply principles from many other fields. Psychology is one of the basic sciences that helps designers better understand users and analyze their behavior. Today we will look at what role psychology plays in design and which of its principles are important to consider in the design process.

The role of psychology in design

Due to the trend of user-centric design, experts began to reconsider approaches to work, trying to better understand the target audience. Donald Norman, in his book The Design of Everyday Things, defined the concept of design as an act of communication that involves a deep understanding of the person with whom the designer is communicating.

To understand user requirements, designers are encouraged to look at the psychological principles that shape human behavior, aspirations, and motivation. By applying psychological principles when creating a design, you can improve the result, because the product becomes much closer to the actual requirements of its users. In addition, knowledge of psychology helps to create a design that encourages people to take actions that are expected of them, for example, to buy a product or contact a company.

Psychology can seem complicated and boring to designers, so some skip the stage of analyzing the target audience and decide to rely on instinct alone. But to apply the principles of psychology effectively, you do not need a doctorate in the field. What matters is studying the key principles that most affect user interaction. Based on practical experience and research on the subject, we have identified six effective psychological principles that are often applied in design.

Gestalt principles

This theory from the field of psychology is more than a hundred years old, but it does not lose its relevance. The word "gestalt" means "one whole", and the theory itself explores the visual perception of elements in relation to each other. In other words, the principles of gestalt show the tendency of people to combine individual elements into groups. The principles by which users form groups include:

Similarity. When users notice some similarity between objects, they automatically perceive them as belonging to the same group. The similarity of objects is usually determined by their shape, color, size, or texture. The similarity principle gives users a sense of coherence between design elements.

Continuity. This principle states that people tend to interpret visual elements as a continuous chain of information. Even when the elements are arranged in a broken line, our eyes follow naturally from one object to another.

Closure. This law is based on the tendency of the human eye to complete unfinished figures. When we see an unfinished figure, we automatically perceive it as a whole. The principle has found frequent application in logo design.

Proximity. When objects are located nearby, people are more likely to perceive them as a group than as individual objects, even if they are completely different.

Figure and ground. This principle demonstrates the human eye's tendency to separate objects from their background. There are many examples of images that are perceived differently depending on which object the eye focuses on.

Gestalt principles in practice confirm that our brain tends to play with our visual perception. Therefore, designers need to consider these factors when creating digital products in order to avoid possible misunderstandings.

Visceral reaction

Have you ever had the feeling that you fell in love with a website the second you opened it? Or maybe some application disgusted you after a single glance? If so, then you are already familiar with the visceral reaction. This kind of response comes from a part of our brain called the "old brain", which is responsible for instincts and reacts faster than our conscious mind. Visceral reactions are rooted in human DNA and are fairly easy to predict.

How do designers use this knowledge? First of all, they seek to evoke positive aesthetic sensations. It's not that hard to predict what looks good if you know your target audience and their needs. Therefore, the trend to use high-quality beautiful photos or nice color pictures on landing pages, websites and other digital products is not accidental.

The psychology of color

The science that studies the influence of color on human consciousness, behavior and reactions is called color psychology. Today we will not delve into all its aspects, since it is quite complex and voluminous, and therefore deserves a separate article (which, by the way, you can already find in the English version of our site).

In short, the main idea is that colors have a significant impact on user experience. For this reason, designers need to be conscious in choosing colors for their projects in order to properly convey the message and mood of each of them.

We have compiled a list of base colors and the meanings they are commonly associated with.

Red. This color is associated with passionate, strong, and aggressive feelings. It can symbolize both positive and negative emotions, including love, confidence, passion, and anger.
Orange. Energetic and warm color that evokes a feeling of pleasant excitement.
Yellow. This is the color of happiness. Symbolizes sunlight, joy and warmth.
Green. The color of nature. Brings feelings of calm and renewal. It may also be associated with inexperience.
Blue. Often represents corporate images. Usually means calm, but being a cold color, it is also associated with parting and sadness.
Purple. Long associated with royalty and wealth, since many kings wore purple robes. It is also called the color of mystery and magic.
Black. This color is very meaningful. Often associated with tragedy and death, it also signifies mystery. It can be considered both traditional and modern. It all depends on how you use it and what colors you combine with.
White. The color of purity and innocence.

Recognizable patterns

You may have noticed that websites and apps that share the same theme often use similar design patterns. It's all about the psychology of the users: when they visit a website or use an app, people expect to see certain elements that are inherent to that kind of product.

For example, when visiting the website of an upscale barbershop, users are unlikely to expect bright colors, pictures of cats, or anything of the sort. Such elements will only scare customers away, as they will look strange and out of place.

But it's not just about colors or pictures. Such obvious and common elements as the list of articles on a blog or filters on commercial sites are also important for successful navigation. Users quickly become accustomed to certain patterns, and in the absence of some standard elements, people may feel uncomfortable.

Text scanning patterns

In our article "Tips on Applying Copy Content in User Interfaces", we already talked about this: before reading the text on a web page, people quickly scan it to understand whether it is interesting to them or not. According to various studies, including publications by the Nielsen Norman Group, the UXPin team, and others, there are several popular scanning patterns for web pages, including the "F" and "Z" patterns.

The F-pattern is considered the most common scanning pattern, especially for web pages with a lot of content. The user first views a horizontal line across the top of the screen, where headings and other important information are usually located, then moves down the page a bit and scans a shorter horizontal area. Finally, the user's eyes slide down a vertical line covering the left side of the text, where readers can find keywords in the first sentences of each paragraph. This pattern is typical for text-heavy pages such as blogs.

The Z-pattern applies to pages that are not focused on text. The user first scans the top of the page, starting in the top left corner, hoping to find important information, then moves diagonally down to the opposite corner, and finishes scanning along a horizontal line at the bottom of the page, again from left to right. This pattern is typical for websites that are not loaded with text and do not require scrolling, where all the main content is visible at once.

The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect that variation. The term is usually associated with experiments in which the design introduces conditions that directly affect the variation, but it can also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.

In its simplest form, an experiment aims at predicting the outcome by introducing a change in the preconditions, represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to keep external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but also planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are several approaches for determining the set of design points (unique combinations of the settings of the explanatory variables) to be used in the experiment.
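As a hedged illustration of one way to enumerate a set of design points (the factor names and levels below are hypothetical, not from the original text), a full factorial design simply takes every combination of factor levels:

```python
from itertools import product

# Hypothetical explanatory variables, each with two levels.
factors = {
    "temperature": [150, 180],   # independent variable, two levels
    "pressure":    [1.0, 2.0],   # independent variable, two levels
    "catalyst":    ["A", "B"],   # independent variable, two levels
}

# Each design point is one unique combination of explanatory-variable
# settings: the Cartesian product of all factor levels.
design_points = [dict(zip(factors, combo))
                 for combo in product(*factors.values())]

assert len(design_points) == 2 * 2 * 2  # 8 runs for a 2^3 factorial
```

Other approaches (fractional factorials, optimal designs) select a subset of these points to save experimental runs.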

The main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partly addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.

Properly designed experiments advance knowledge in the natural and social sciences and engineering. Other applications include marketing and policy development.

History

Systematic clinical trials

In 1747, while serving as surgeon on HMS Salisbury, James Lind carried out a systematic clinical trial to compare remedies for scurvy. A systematic clinical trial of this kind is a type of design of experiments.

Lind selected 12 men from the ship, all suffering from scurvy. Lind limited his subjects to men who "were as similar as I could have them"; that is, he provided strict entry requirements to reduce extraneous variation. He divided them into six pairs, giving each pair a different supplement to their basic diet for two weeks. The treatments were all remedies that had been proposed:

  • A quart of cider every day.
  • Twenty-five gutts (drops) of vitriol (sulfuric acid) three times a day on an empty stomach.
  • One half-pint of seawater every day.
  • A mixture of garlic, mustard, and horseradish in a lump the size of a nutmeg.
  • Two spoonfuls of vinegar three times a day.
  • Two oranges and one lemon every day.

The citrus treatment stopped after six days when they ran out of fruit, but by that time one sailor was fit for duty while the other had almost recovered. Apart from that, only one other group (cider) showed some effect of its treatment. The remainder of the crew presumably served as a control, but Lind did not report results from any control (untreated) group.

Statistical experiments, following Charles S. Peirce

A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877-1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.

Randomized experiments

Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories and specialized textbooks in the 1800s.

Principles of experimental design

Comparison
In some fields of study it is not possible to have independent measurements traceable to a metrological standard. Comparisons between treatments are then much more valuable and are usually preferable, often comparing against a scientific control or a traditional treatment that acts as a baseline.

Randomization
Randomization is the process of assigning individuals at random to groups, or to different conditions within a group, so that each member of the population has the same chance of becoming a participant in the study. Random assignment of individuals to groups (or to conditions within a group) distinguishes a rigorous "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory exploring the consequences of allocating units to treatments by some random mechanism (such as tables of random numbers, or randomizing devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, which makes effects due to factors other than the treatment appear to result from the treatment. The risks associated with random allocation (for example, a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and can therefore be managed down to an acceptable level by using enough experimental units. If the population is divided into several subpopulations that differ in some way, and the research requires that each subpopulation be represented equally, stratified sampling can be used; the units within each subpopulation are then randomized, but not the whole sample. The results of an experiment can be reliably generalized from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends, among other things, on the sample size.

Statistical replication
Measurements are usually subject to variation and measurement uncertainty; measurements are therefore repeated, and full experiments are replicated, to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before replication of an experiment is begun: the original research question has been published in a peer-reviewed journal or is widely cited; the researcher is independent of the original experiment; the researcher must first try to reproduce the original findings using the original data; and the write-up should state that the study conducted is a replication study that tried to follow the original study as closely as possible.

Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in estimating the source of variation under study.

Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal contrast provides information that is different from that of the others. If there are T treatments, then T - 1 orthogonal contrasts capture all of the information that can be obtained from the experiment.

Factorial experiments
Factorial experiments are used instead of one-factor-at-a-time methods. They are efficient at evaluating the effects, and possible interactions, of several factors (independent variables).
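The randomization and stratified-sampling ideas above can be sketched in a few lines of Python; the participant records, group count, and fixed seed below are illustrative assumptions, not part of the original text:

```python
import random

# Hypothetical participants: six men and six women.
participants = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(12)]

def randomize(units, n_groups=2, seed=42):
    """Simple randomization: shuffle the units, then deal them
    round-robin into the groups."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    groups = [[] for _ in range(n_groups)]
    for k, unit in enumerate(shuffled):
        groups[k % n_groups].append(unit)
    return groups

def stratified_randomize(units, key, n_groups=2, seed=42):
    """Stratified randomization: randomize within each subpopulation,
    so every group receives an equal share of each stratum
    (here: the same number of men and women)."""
    strata = {}
    for unit in units:
        strata.setdefault(key(unit), []).append(unit)
    groups = [[] for _ in range(n_groups)]
    for stratum in strata.values():
        for k, part in enumerate(randomize(stratum, n_groups, seed)):
            groups[k].extend(part)
    return groups

treatment, control = stratified_randomize(participants, key=lambda u: u["sex"])
# Each group now contains three men and three women.
```

With simple randomization an imbalance in sex between the groups is possible by chance; the stratified version rules it out by construction.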
The analysis of a designed experiment is built on the foundation of the analysis of variance (ANOVA), a collection of models that partition the observed variance into components according to the factors the experiment must estimate or test.
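As a rough sketch of that variance partition (the measurements below are made-up numbers), a one-way ANOVA splits the total sum of squares into a between-group component and a within-group component:

```python
# Hypothetical responses for three treatment conditions, four units each.
groups = {
    "control":     [4.1, 3.9, 4.3, 4.0],
    "treatment_a": [5.0, 5.2, 4.8, 5.1],
    "treatment_b": [6.1, 5.8, 6.0, 6.2],
}

values = [x for g in groups.values() for x in g]
grand_mean = sum(values) / len(values)
means = {name: sum(g) / len(g) for name, g in groups.items()}

# Between-group sum of squares: variation attributable to the factor.
ss_between = sum(len(g) * (means[name] - grand_mean) ** 2
                 for name, g in groups.items())
# Within-group sum of squares: residual (error) variation.
ss_within = sum((x - means[name]) ** 2
                for name, g in groups.items() for x in g)
# The total variation decomposes exactly into these two components.
ss_total = sum((x - grand_mean) ** 2 for x in values)

# F statistic: between-group mean square over within-group mean square.
f_stat = ((ss_between / (len(groups) - 1))
          / (ss_within / (len(values) - len(groups))))
```

The identity ss_total = ss_between + ss_within is exactly the "partition of the observed variance into components" referred to above.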

Example

This example is attributed to Harold Hotelling. It conveys some of the flavor of those aspects of the subject that involve combinatorial designs.

The weights of eight objects are measured using a pan balance and a set of standard weights. Each weighing measures the difference in weight between the objects placed in the left pan and those in the right pan, adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings, and errors on different weighings are independent. Denote the true weights by

θ1, ..., θ8.

We will consider two different experiments:

  1. Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of object i, for i = 1, ..., 8.
  2. Do the eight weighings according to the following schedule, and let Yi be the measured difference for i = 1, ..., 8:
              left pan         right pan
1st weighing: 1 2 3 4 5 6 7 8  (empty)
2nd:          1 2 3 8          4 5 6 7
3rd:          1 4 5 8          2 3 6 7
4th:          1 6 7 8          2 3 4 5
5th:          2 4 6 8          1 3 5 7
6th:          2 5 7 8          1 3 4 6
7th:          3 4 7 8          1 2 5 6
8th:          3 5 6 8          1 2 4 7

Then the estimated value of the weight θ1 is

θ̂1 = (Y1 + Y2 + Y3 + Y4 - Y5 - Y6 - Y7 - Y8) / 8.

Similar estimates can be found for the weights of the other items:

θ̂2 = (Y1 + Y2 - Y3 - Y4 + Y5 + Y6 - Y7 - Y8) / 8
θ̂3 = (Y1 + Y2 - Y3 - Y4 - Y5 - Y6 + Y7 + Y8) / 8
θ̂4 = (Y1 - Y2 + Y3 - Y4 + Y5 - Y6 + Y7 - Y8) / 8
θ̂5 = (Y1 - Y2 + Y3 - Y4 - Y5 + Y6 - Y7 + Y8) / 8
θ̂6 = (Y1 - Y2 - Y3 + Y4 + Y5 - Y6 - Y7 + Y8) / 8
θ̂7 = (Y1 - Y2 - Y3 + Y4 - Y5 + Y6 + Y7 - Y8) / 8
θ̂8 = (Y1 + Y2 + Y3 + Y4 + Y5 + Y6 + Y7 + Y8) / 8

The question of experimental design is: which experiment is better?

The variance of the estimate X1 of θ1 is σ² if we use the first experiment, but if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times better precision for the estimate of a single item, and it estimates all items simultaneously with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items were weighed separately. Note, however, that the estimates for the items obtained in the second experiment have errors that correlate with each other.
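The second experiment can be checked numerically. The sketch below (with made-up true weights) encodes the eight weighings as a matrix of ±1 signs and shows that, because the columns of the matrix are orthogonal, the estimator recovers every weight exactly when there is no measurement error:

```python
from fractions import Fraction

# Hypothetical true weights for the eight objects (made-up values).
true = [Fraction(w) for w in [5, 3, 8, 2, 7, 4, 6, 1]]

# Sign matrix for the eight weighings: +1 means the object is in the
# left pan, -1 means the right pan. Row 1 is all eight objects against
# an empty pan; rows 2-8 follow the schedule in the example above.
D = [
    [+1, +1, +1, +1, +1, +1, +1, +1],
    [+1, +1, +1, -1, -1, -1, -1, +1],
    [+1, -1, -1, +1, +1, -1, -1, +1],
    [+1, -1, -1, -1, -1, +1, +1, +1],
    [-1, +1, -1, +1, -1, +1, -1, +1],
    [-1, +1, -1, -1, +1, -1, +1, +1],
    [-1, -1, +1, +1, -1, -1, +1, +1],
    [-1, -1, +1, -1, +1, +1, -1, +1],
]

# Noise-free measured differences: Y_i = sum_j D[i][j] * theta_j.
Y = [sum(d * t for d, t in zip(row, true)) for row in D]

# Each weight is estimated with the +/- signs of its own column,
# exactly as in the formulas for the estimates above.
est = [sum(D[i][j] * Y[i] for i in range(8)) / 8 for j in range(8)]

assert est == true  # with zero error the design recovers every weight
```

Each estimate averages eight measurements with coefficients ±1/8, so with independent errors of variance σ² its variance is 8 · (σ/8)² = σ²/8, matching the claim above.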

Many problems of experimental design involve combinatorial designs, as in this example and others.

Avoiding false positives

False positives, often resulting from the pressure to publish or from the author's own confirmation bias, are an inherent hazard in many fields. A good way to prevent biases potentially leading to false positives in the data collection phase is to use a double-blind design. When a double-blind design is used, participants are randomly assigned to experimental groups, but the researcher is unaware of which group each participant belongs to; therefore, the researcher cannot influence the participants' response to the intervention. Experimental designs with undisclosed degrees of freedom are a problem. They can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves manipulating, perhaps unconsciously, the statistical analysis and the degrees of freedom until they return a figure below the p < .05 level of statistical significance. The design of the experiment should therefore include a clear statement of the analyses that are proposed to be undertaken. P-hacking can be prevented by preregistering studies, in which researchers must submit their data analysis plan to the journal in which they wish to publish their paper before they even start collecting data, so that no data manipulation is possible (https://osf.io). Another way to prevent this is to carry the double-blind design through to the data analysis phase, where the data are sent to a data analyst unrelated to the research, who scrambles the data so that there is no way of knowing which group participants belong to before they are potentially removed as invalid.
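The idea of blinding the analyst can be sketched as follows (the record fields, coded names, and seed are hypothetical): group labels are replaced with neutral codes, and the key is withheld until the analysis is complete:

```python
import random

# Hypothetical study records as they arrive from data collection.
records = [
    {"participant": 1, "group": "treatment", "score": 7.2},
    {"participant": 2, "group": "control",   "score": 5.9},
    {"participant": 3, "group": "treatment", "score": 6.8},
    {"participant": 4, "group": "control",   "score": 6.1},
]

def blind(records, seed=0):
    """Replace real group labels with randomly assigned neutral codes.
    The returned mapping is kept secret from the analyst until the
    analysis plan has been executed."""
    rng = random.Random(seed)
    labels = sorted({r["group"] for r in records})
    codes = ["group_A", "group_B"]
    rng.shuffle(codes)
    mapping = dict(zip(labels, codes))
    blinded = [{**r, "group": mapping[r["group"]]} for r in records]
    return blinded, mapping

blinded_records, key = blind(records)
# The analyst works only with blinded_records; `key` is revealed last.
```

Because the analyst cannot tell which coded label is the treatment group, exclusion and analysis decisions cannot be steered toward a desired result.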

Clear and complete documentation of the experimental methodology is also important in order to support the replication of results.

Topics to consider when setting up an experimental design

An experimental design or a randomized clinical trial requires careful consideration of several factors before the experiment is actually carried out. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the section on the principles of experimental design:

  1. How many factors does design have, and are the levels of these factors fixed or random?
  2. Are control conditions necessary, and what should they be?
  3. Manipulation checks; did manipulation really work?
  4. What are background variables?
  5. What is the sample size? How many units must be collected for the experiment to be generalizable and have enough power?
  6. What is the significance of the interaction between factors?
  7. What is the influence of the long-term effects of the main factors on the results?
  8. How do response changes affect self-report measures?
  9. How realistic is the introduction of the same measuring devices into the same units, in different cases, with post-test and subsequent tests?
  10. What about using a proxy pretest?
  11. Are there lurking variables?
  12. Should the client/patient, researcher, or even the data analyst be blind to the conditions?
  13. What is the possibility of subsequently applying different conditions to the same unit?
  14. How much of each control and noise factors should be taken into account?

The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, in which the intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group but without the intervention. Thus, when everything else except for one intervention is held constant, researchers can certify with some confidence that this one element is what caused the observed change. In some cases, having a control group is not ethical; this is sometimes solved by using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups that have different diseases, or testing the difference between men and women (obviously variables that would be difficult or unethical to assign to participants). In these cases, a quasi-experimental design may be used.

Causal attributions

In a pure experimental design, the independent (predictor) variable is manipulated by the researcher; that is, every participant in the study is chosen randomly from the population, and each participant is randomly assigned to conditions of the independent variable. Only when this is done is it possible to certify with high probability that differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose an experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must be careful not to claim causal attribution when their design does not allow it. For example, in observational designs, participants are not randomly assigned to conditions, so if differences are found in outcome variables between conditions, it is likely that something other than the differences between conditions causes the differences in outcomes, namely a third variable. The same goes for studies with a correlational design (Ader & Mellenbergh, 2008).

Statistical control

It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments. To control for nuisance variables, the researcher institutes control checks as additional measures. Researchers should make sure that uncontrolled influences (e.g., perceptions of source credibility) do not skew the findings of the study. A manipulation check is one example of a control check: manipulation checks allow researchers to isolate the chief variables and strengthen support that these variables are operating as planned.

Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940, but remained little known until the Plackett-Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concept of orthogonal arrays as experimental designs. This concept was central to the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industry, and were subsequently also embraced by American industry, albeit with some reservations.

In 1950 Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the main reference work for the design of experiments on statisticians for many years thereafter.

The development of the theory of linear models has encompassed and surpassed the cases that concerned the early writers. Today the theory rests on advanced topics in linear algebra, algebra, and combinatorics.

In UX design, research is a fundamental part of solving relevant problems and/or narrowing down to the "right" problems users face. A designer's job is to understand their users. That means going beyond initial assumptions and putting yourself in other people's shoes to create products that meet human needs.

Good research doesn't just end with good data, it ends with good design and functionality that users love, want, and need.

Design research is often overlooked because designers focus on how the design looks. This leads to a superficial understanding of the people it is intended for. Such a mindset is contrary to what UX is: it is user-centered.

UX design is centered around research to understand people's needs and how the products or services we create will help them.

Here are some research methods every designer should know when starting a project. Even designers who are not doing the research themselves can use them to communicate better with UX researchers.

Primary Research

Primary research essentially boils down to collecting new data in order to understand who you are designing for and what you are planning to design. It allows us to test our ideas with our users and develop more meaningful solutions for them. Designers typically collect this kind of data through interviews with individuals or small groups, or through surveys and questionnaires.

It is important to understand what you want to research before you start seeking out people, as well as the kind and quality of data you want to collect. In an article from the University of Surrey, the author draws attention to two important points to consider when conducting primary research: validity and practicality.

Validity refers to the truth of the data: what it actually tells us about the subject or phenomenon being studied. It is possible for data to be reliable without being valid.

The practical aspects of the study should be carefully considered when designing it, for example:

- cost and budget
- time and scale
- sample size

Bryman, in his book Social Research Methods (2001), identifies four types of validity that can affect the results obtained:

  1. Measurement (construct) validity: whether the measure actually measures what it claims to measure.

That is, do church attendance statistics really measure the strength of religious beliefs?

  2. Internal validity: refers to causality and whether the conclusions of the study reflect the true causes.

That is, is unemployment really the cause of crime, or are there other explanations?

  3. External validity: considers whether the results of a particular study can be generalized to other groups.

That is, if one kind of community development approach is used in this region, will it have the same impact elsewhere?

  4. Ecological validity: considers whether "…social scientific findings are appropriate to people's everyday, natural settings" (Bryman, 2001).

That is, if a situation is observed in an artificial setting, how might that affect people's behavior?

Secondary Research

Secondary research uses existing data such as the Internet, books, or articles to support your design choices and the context behind your design. Secondary studies are also used as a means to further validate information from primary studies and create a stronger case for the overall design. As a rule, secondary studies have already summarized the analytical picture of existing studies.

It's okay to use only secondary research to evaluate your design, but if you have the time, I'd definitely recommend doing primary research along with secondary research to really understand who you are designing for and to collect insights that are more relevant and compelling than existing data. When you collect user data specific to your design, it will generate better ideas and a better product.

Evaluation studies

Evaluation research describes a specific problem to ensure usability and ground it in the needs and desires of real people. One way to conduct evaluation research is to have a user try your product and give them questions or tasks to reason about out loud as they attempt to complete them. There are two types of evaluation research: summative and formative.

Summative evaluation study. Summary evaluation is aimed at understanding the results or effects of something. It emphasizes the result more than the process.

A summary study may evaluate things such as:

  • Finance: impact in terms of costs, savings, profits, etc.
  • Impact: broad effect, both positive and negative, including depth, spread, and time factor.
  • Outcomes: whether desired or undesired effects are achieved.
  • Secondary Analysis: analysis of existing data for more information.
  • Meta-analysis: integration of the results of several studies.

Formative evaluation research. Formative assessment is used to help strengthen or improve the person or thing being tested.

Formative research may evaluate things such as:

  • Implementation: monitoring the success of a process or project.
  • Needs: a look at the type and level of need.
  • Potential: the ability to use information to form a goal.

Exploratory research


Combining pieces of data and making sense of them is part of the exploratory research process

Exploratory research is conducted around a topic about which little or nothing is known. Its goal is to gain deep understanding of and familiarity with the topic by immersing yourself in it as much as possible, in order to set a direction for the potential use of this data in the future.

With exploratory research, you have the opportunity to get new ideas and create worthy solutions to the most significant problems.

Exploratory research allows us to test our assumptions about topics that are often overlooked (e.g., prisoners, the homeless), providing an opportunity to generate new ideas and approaches for existing problems or opportunities.

Based on an article from Lynn University, exploratory research tells us that:

  1. Exploratory research is a convenient way to obtain background information on a particular topic.
  2. Exploratory research is flexible and can address all types of research questions (what, why, how).
  3. Provides the ability to define new terms and clarify existing concepts.
  4. Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  5. Exploratory research helps to prioritize research.

Experimental psychology rests on the practical application of so-called true experimental designs, in which control groups are used and the sample is studied under laboratory conditions. Experimental schemes of this kind are designated as plans 4, 5 and 6.

Plan with pre-test, post-test and control group (plan 4). Scheme 4 is the classic "design" of a laboratory psychological study, though it is also applicable in the field. Its distinctive feature is not merely the presence of a control group (one is already present in pre-experimental scheme 3) but the equivalence (homogeneity) of the experimental and control samples. Two further circumstances are important for the reliability of an experiment built according to scheme 4: the homogeneity of the research conditions in which the samples are placed, and full control of the factors affecting the internal validity of the experiment.

The choice of an experiment plan with preliminary and final testing and a control group is made in accordance with the experimental task and the conditions of the study. When it is possible to form at least two homogeneous groups, the following experimental scheme is applied:

Example. To show in practice how experimental plan 4 can be implemented, here is a real study, a laboratory formative experiment designed to test the hypothesis that positive motivation affects the concentration of a person's attention.

Hypothesis: the motivation of the subjects is a significant factor in increasing the concentration and stability of the attention of people who are in the conditions of training cognitive activity.

Experiment procedure:

  • 1. Formation of the experimental and control samples. The participants are divided into pairs carefully matched on the pre-test indicators, or on variables that correlate significantly with them. The members of each pair are then assigned by lot (randomized) to the experimental or control group.
  • 2. Both groups complete the "Correction test with rings" (O1 and O3).
  • 3. The activity of the experimental sample is stimulated. Suppose the subjects are given an experimental motivating instruction (X): "Students who score 95 or more points (correct answers) on the test of concentration and stability of attention will receive an automatic credit this semester."
  • 4. Both groups complete the "Correction test with syllables" (O2 and O4).

Algorithm for analyzing the results of the experiment

  • 5. The empirical data are tested for "normality" of distribution. This operation establishes at least two things. First, that the test used to measure the stability and concentration of the subjects' attention discriminates (differentiates) them on the measured attribute: a normal distribution indicates that the indicators behave as intended by the test's design, i.e. the technique adequately measures the intended domain and is suitable for use under these conditions. Second, normality of the empirical data justifies the correct application of parametric statistics. The shape of the distribution can be assessed through its skewness (As) and kurtosis (Ex).
  • 6. The arithmetic means (Mx) and standard deviations (Sx) of the pre-test and post-test results are calculated.
  • 7. The mean test indicators of the experimental and control groups are compared (O1 and O3; O2 and O4).
  • 8. The means are compared using Student's t-test, i.e. the statistical significance of the differences between the means is determined.
  • 9. The relations O1 = O3 and O2 ≠ O4 are demonstrated as indicators of the effectiveness of the experiment.
  • 10. The validity of the experiment is examined by determining the degree to which the factors of invalidity were controlled.
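As an illustration, analysis steps 5-9 can be sketched in Python with SciPy. The scores below are randomly generated stand-ins, not the study's data, and the group sizes are assumptions.

```python
# Illustrative sketch of analysis steps 5-9; scores are randomly generated
# stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exp_post = rng.normal(90, 5, 30)   # O2: experimental group after exposure X
ctl_post = rng.normal(85, 5, 30)   # O4: control group, no exposure

# Step 5: check the "normality" of each distribution (Shapiro-Wilk test)
_, p_exp = stats.shapiro(exp_post)
_, p_ctl = stats.shapiro(ctl_post)

# Step 6: arithmetic means (Mx) and standard deviations (Sx)
m_exp, s_exp = exp_post.mean(), exp_post.std(ddof=1)
m_ctl, s_ctl = ctl_post.mean(), ctl_post.std(ddof=1)

# Steps 7-8: compare the group means with Student's t-test
t, p = stats.ttest_ind(exp_post, ctl_post)  # equal variances assumed

print(f"normality p-values: {p_exp:.3f}, {p_ctl:.3f}")
print(f"means {m_exp:.1f} vs {m_ctl:.1f}; t = {t:.2f}, p = {p:.4f}")
```

A p-value above 0.05 in the Shapiro-Wilk step is consistent with normality, which is what licenses the parametric t-test that follows.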

To illustrate a psychological experiment on the influence of motivational variables on the concentration of the subjects' attention, let us turn to the data in Table 5.1.

Table 5.1

Experimental results, points. Columns: subjects; measurement before exposure to X (experimental group O1, control group O3); measurement after exposure to X (experimental group O2, control group O4). The numerical entries are not reproduced in this copy.

The primary-measurement data of the experimental and control samples (O1 and O3) are compared in order to establish the equivalence of the samples. Near-identical indicators signal the homogeneity (equivalence) of the groups; this is determined by calculating the statistical significance of the difference between the means with Student's t-test at confidence level p.

In our case, the value of Student's t-criterion between the empirical data of the primary examination in the experimental and control groups was 0.56, which shows that the samples do not differ significantly at the accepted confidence level p.

The primary and repeated measurements of the experimental sample (O1 and O2) are compared in order to determine how much the dependent variable changed after the independent variable acted on the experimental sample. This comparison uses Student's t-test, provided the variables are measured on the same test scale or have been standardized.

In this case, the preliminary (primary) and final examinations were carried out with different tests measuring concentration of attention, so the means cannot be compared without standardization. Instead, we calculate the correlation coefficient between the primary and final indicators in the experimental group. Its low value (rxy = 0.16) can serve as indirect evidence that the data have changed.

The experimental effect is determined by comparing the repeated-measurement data of the experimental and control samples (O2 and O4). This comparison establishes how significantly the dependent variable changed after the independent variable (X) acted on the experimental sample; its psychological meaning is an assessment of the impact of X on the subjects. The comparison is made at the stage of the final measurement using Student's t-criterion. Its value is 2.85, which exceeds the tabular value of the t-criterion. This shows a statistically significant difference between the mean test values of the experimental and control groups.
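The decision rule used here, comparing the obtained t value with the tabular one, can be sketched as follows. The value 2.85 is the one reported above; the group sizes are hypothetical, since the text does not state them.

```python
# Sketch of the decision rule: compare the obtained t value with the tabular
# (critical) value. t_obtained is the 2.85 reported in the text; the group
# sizes are assumed for illustration.
from scipy import stats

t_obtained = 2.85
n_exp, n_ctl = 15, 15              # assumed group sizes
df = n_exp + n_ctl - 2             # degrees of freedom for independent samples

t_critical = stats.t.ppf(0.975, df)  # two-tailed test at p = 0.05
print(f"t critical (df={df}) = {t_critical:.3f}")
print("significant" if abs(t_obtained) > t_critical else "not significant")
```

Under these assumptions the critical value is about 2.05, so 2.85 indeed exceeds it and the difference between the group means is declared significant.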

Thus, the experiment according to plan 4 revealed that in the first group of subjects, which did not differ from the second group in the measured psychological characteristic (concentration of attention) in anything except exposure to the independent variable X, the attention-concentration indicator differs statistically significantly from the same indicator in the second group, which was in the same conditions but outside the influence of X.

Consider the study of the validity of the experiment.

Background: controlled due to the fact that events occurring in parallel with the experimental exposure are observed in both the experimental and control groups.

Natural development: controlled owing to the short periods between tests and exposure, and because it occurs in both the experimental and control groups.

Test effect and instrumental error: controlled, since they appear in the same way in the experimental and control groups. In our case, there is a sampling bias.

Statistical regression: controlled. First, if randomization produced extreme results in the experimental group, they will also appear in the control group, so the regression effect will be the same in both. Second, if randomization did not produce extreme results in the samples, the question does not arise at all.

Selection of test subjects: controlled because explanation of differences is ruled out to the extent that randomization provides equivalence of samples. This degree is determined by the sample statistics we have adopted.

Screening (attrition): fully controlled, since the period between tests in both samples is relatively short and the subjects were required to be present at the session. In experiments with a long period between tests, sampling bias and a biased experimental result are possible. One way out is to include, when processing the preliminary and final test data, all participants of both samples, even those subjects of the experimental group who did not receive the experimental exposure: the effect of X will apparently be weakened, but there will be no sampling bias. The second way out involves changing the design of the experiment, since equivalence of the groups must then be achieved by randomization before the final testing:

The interaction of the selection factor with natural development: controlled by forming a control equivalent group.

Reactive effect: pre-testing does prime the subjects to perceive the experimental impact, so the effect of the exposure is "shifted". In this situation one can hardly assert that the results of the experiment extend to the entire population. The reactive effect is controlled to the extent that repeated examinations are characteristic of the population as a whole.

Interaction of the selection factor and the experimental influence: when participation in the experiment is based on voluntary consent, there is a threat to validity ("bias"), because such consent is given by people of a certain personality type. Forming equivalent samples at random reduces this invalidity.

The reaction of the subjects to the experiment: the experimental situation biases the results, since the subjects find themselves in "special" conditions and try to work out the meaning of the work. Hence demonstrative behavior, playing along, wariness, guessing at the expected answers, and so on are frequent. Any element of the experimental procedure can trigger a reaction to the experiment: the content of the tests, the randomization process, the division of participants into separate groups, placing the subjects in different rooms, the presence of strangers, the use of an unusual exposure X, and so on.

The way out of this difficulty is to "mask" the study, i.e. to draw up and strictly follow a cover story for the experimental procedures, or to embed them in the usual course of events. To this end, it seems most rational to conduct the testing and the experimental exposure under the guise of routine assessment activities. Even when only individual members of a group are studied, it is desirable for the team as a whole to take part in the experiment. It also seems expedient for the testing and experimental exposure to be carried out by regular staff: leaders, teachers, activists, observers, and so on.

In conclusion, it should be noted that, as D. Campbell pointed out, “common sense” and “considerations of a non-mathematical nature” can still be the optimal method for determining the effect of an experiment.

R. Solomon's four-group plan (plan 5). When the research conditions allow the formation of four equivalent samples, the experiment is built according to scheme 5, named after its author: "Solomon's four-group plan":

Solomon's plan is an attempt to compensate for factors that threaten the external validity of the experiment by adding to the experiment two additional (to plan 4) groups that are not pre-measured.

Comparison of the data of the additional groups neutralizes the effect of testing and the influence of the experimental setting itself, and also allows the results to be generalized more broadly. The effect of the experimental exposure is identified by statistical proof of the following inequalities: O2 > O1; O2 > O4; O5 > O6. If all three relations hold, the validity of the experimental conclusion increases greatly.

Plan 5 makes it likely that the interaction of testing and experimental exposure is neutralized, which simplifies the interpretation of results obtained under plan 4. Comparing O6 with O1 and O3 reveals the combined effect of natural development and background. Comparing the means O2 and O5, and O4 and O6, estimates the main effect of preliminary testing. Comparing the means O2 and O4, and O5 and O6, estimates the main effect of the experimental exposure.

If the pre-test effect and the interaction effect are small and negligible, it is desirable to perform an analysis of covariance on O2 and O4, using the pre-test results as the covariate.
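The Solomon-plan contrasts described above can be sketched numerically. The group means below are made up for illustration; O1-O6 follow the labeling in the text.

```python
# Sketch of the Solomon four-group contrasts with made-up group means.
# O1, O2: pretested experimental group (before/after X); O3, O4: pretested
# control group; O5: unpretested experimental post-test; O6: unpretested
# control post-test.
means = {"O1": 80.0, "O2": 92.0, "O3": 81.0, "O4": 83.0, "O5": 90.0, "O6": 82.0}

# Main effect of the experimental exposure: O2 vs O4 and O5 vs O6
exposure_effect = ((means["O2"] - means["O4"]) + (means["O5"] - means["O6"])) / 2

# Main effect of preliminary testing: O2 vs O5 and O4 vs O6
pretest_effect = ((means["O2"] - means["O5"]) + (means["O4"] - means["O6"])) / 2

# Combined effect of natural development and background: O6 vs O1 and O3
development_and_background = means["O6"] - (means["O1"] + means["O3"]) / 2

print(exposure_effect, pretest_effect, development_and_background)
# With these made-up means: 8.5 1.5 1.5
```

Here a large exposure effect alongside small pretest and development effects is the pattern that would support attributing the change to X.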

Plan with a control group and testing only after exposure (plan 6). Researchers often face the need to study psychological variables when no preliminary measurement of the subjects' psychological parameters is possible, because the study is conducted only after the independent variables have acted, i.e. when the event has already occurred and its consequences must be identified. In this situation, the optimal design is a plan with a control group and testing only after exposure. Homogeneous experimental and control groups of subjects are formed using randomization or other procedures that ensure optimal selective equivalence, and the variables are tested only after the experimental exposure:

Example. In 1993, commissioned by the Research Institute of Radiology, a study was carried out on the effect of radiation exposure on a person's psychological parameters. The experiment was built according to plan 6. A psychological examination was conducted of 51 liquidators of the consequences of the accident at the Chernobyl nuclear power plant, using a battery of psychological tests (personality questionnaires, SAN (Well-being. Activity. Mood), the Luscher test, etc.), EAF according to R. Voll, and the automated situational diagnostic game (ASID) "Test". The control sample consisted of 47 specialists who had not taken part in radiological work at the Chernobyl nuclear power plant. The average age of the subjects in both groups was 33. The subjects of the two samples were well matched in experience, type of activity and structure of socialization, so the groups were recognized as equivalent.

Let us make a theoretical analysis of the plan according to which the experiment was built, and its validity.

Background: controlled because the study used an equivalent control sample.

Natural development: controlled as a factor of experimental influence, since the experimenters did not interfere in the subjects' process of socialization.

Test effect: controlled, since there was no pre-testing of the subjects.

Instrumental error: controlled, since the reliability of the methodological tools was checked in advance, their normative indicators were refined after the experiment, and the same battery of tests was used for the control and experimental groups.

Statistical regression: controlled by working through the experimental material on the entire randomly formed sample. However, there was a threat to validity because there were no preliminary data on the composition of the experimental groups, i.e. on the probability of extreme (polar) values of the variables.

Selection of test subjects: not fully controlled, relying on natural randomization. No special selection of subjects was carried out; the groups were formed at random from participants in the liquidation of the accident at the Chernobyl nuclear power plant and from chemical specialists.

Screening of test subjects: none occurred during the experiment.

Interaction of the selection factor with natural development: no special selection was made; this variable was controlled.

Interaction of group composition and experimental influence: no special selection of subjects was carried out, and they were not told which study group (experimental or control) they were in.

The reaction of the subjects to the experiment: an uncontrolled factor in this experiment.

Mutual interference (superposition) of experimental influences: not controlled, because it was not known whether the subjects had taken part in similar experiments before and how this affected the results of the psychological testing. According to the experimenters' observations, the overall attitude toward the experiment was negative, and this circumstance is unlikely to have helped the external validity of the experiment.

Experiment results

  • 1. The distribution of the empirical data was examined: it was bell-shaped, close to the theoretical normal curve.
  • 2. Using Student's t-test, the means O1 and O2 were compared. According to ASID "Test" and EAF, the experimental and control groups differed significantly in the dynamics of emotional states (higher in the liquidators), the effectiveness of cognitive activity (reduced in the liquidators), and the functioning of the motor apparatus, liver, kidneys, etc., owing to chronic endogenous intoxication.
  • 3. Using Fisher's F-criterion, the influence of the dispersion of the independent variable X on the variance of the dependent variable O2 was calculated.
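Step 3's variance comparison via Fisher's F-criterion can be sketched as follows; the samples are illustrative, not the study's data.

```python
# Sketch of Fisher's F-criterion comparing two sample variances; the data
# are illustrative, not the study's.
import numpy as np
from scipy import stats

exp = np.array([88.0, 95, 91, 84, 97, 90, 86, 93])   # more variable sample
ctl = np.array([85.0, 86, 84, 87, 85, 86, 84, 85])   # less variable sample

f = exp.var(ddof=1) / ctl.var(ddof=1)   # larger variance in the numerator
df1 = df2 = len(exp) - 1
p = 2 * min(stats.f.sf(f, df1, df2), stats.f.cdf(f, df1, df2))  # two-tailed

print(f"F = {f:.2f}, p = {p:.4f}")
```

An F value far above 1 with a small p indicates that the dispersion of the two samples differs significantly, i.e. X influenced the spread of the dependent variable, not only its mean.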

As a conclusion of this study, appropriate recommendations were made to the participants in the experiment and their leaders, the diagnostic battery of psychological tests was validated, and psychophysiological factors that affect people in extreme radiological conditions were identified.

Thus, the experimental "design" 6 represents the optimal scheme for psychological research when it is not possible to make a preliminary measurement of psychological variables.

It follows from the above that the experimental method in psychology rests on so-called true designs, in which almost all the main factors affecting internal validity are controlled. The reliability of results obtained in experiments built according to schemes 4-6 is not doubted by the vast majority of researchers. The main problem, as in all other psychological research, is forming the experimental and control samples of subjects, organizing the study, and finding and using adequate measuring instruments.

  • The symbol R in the scheme indicates that the homogeneity of the groups was obtained by randomization. The symbol is conditional, since the homogeneity of the control and experimental samples can also be ensured in other ways (for example, pairwise selection, preliminary testing, etc.). The value of the correlation coefficient (0.16) indicates a weak statistical relationship between the measurements, i.e. it can be assumed that the data have changed: the post-exposure indicators do not match the pre-exposure ones. EAF, Voll's method (German: Elektroakupunktur nach Voll, EAV), is a method of electrochemical express diagnostics in alternative medicine based on measuring the electrical resistance of the skin; it was developed in Germany by Dr. Reinhold Voll in 1958 and is essentially a combination of acupuncture with the use of a galvanometer.
  • Assessment of the psychological status of military personnel who were liquidators of the Chernobyl accident using the dynamic situational game "Test" / I. V. Zakharov, O. S. Govoruha, I. P. Poss [et al.] // Military Medical Journal. 1994. No. 7. Pp. 42-44.
  • Research by B. P. Ignatkin.

A medical death certificate is an important medical document certifying the fact of a person's death for state registration with the civil registry offices, and it is the basis for statistics on the causes of death.

In accordance with Federal Law No. 143-FZ "On Civil Status Acts" of 15 November 1997, the registration of a person's death is carried out on the basis of a document of the established form, the "Medical Death Certificate". Registration of a child born dead, or born alive but who died in the first week of life, is carried out on the basis of the "Medical Certificate of Perinatal Death". These documents are issued by a medical organization or a private practitioner.

The previous revision of the Medical Death Certificate (registration form 106/u-98) and the Medical Certificate of Perinatal Death (registration form 106/2u-98) took place in 1998. These forms of the established sample were approved by Order No. 241 of the Ministry of Health of Russia dated 07.08.1998.

In international practice, death certificates are reviewed after about 10 years. The purpose of the revision is to adapt the forms to the changed conditions, taking into account the achievements of domestic and foreign healthcare.

Over the past 10 years, considerable experience has been gained in working with the current death and perinatal death certificates.

The aim of our study was to develop proposals for improving the reliability and international comparability of mortality statistics.

Research objectives:

  1. Examine the current system for collecting and processing mortality statistics and the application of the standard medical death certificates.
  2. Analyze the completion and processing of medical certificates of death and perinatal death and, on the basis of an expert assessment, determine the reliability of the statistical data on mortality obtained in the study areas.
  3. Develop proposals for improving mortality records.
  4. Develop a system for training specialists in the methodological foundations of achieving reliable cause-of-death data.
  5. Develop and implement a methodological package of programs for the use of ICD-10 (RUTENDON).
  6. Develop the "Mortality Monitoring" set of programs with automated selection and coding of the underlying cause of death.

Materials, methods and research bases.

We analyzed 120,715 medical death certificates and 1,093 medical certificates of perinatal death for different years from 2000 to 2006 in the Tula, Vladimir, Kurgan, Tyumen regions, Stavropol and Krasnoyarsk territories and the Republic of Buryatia.

The present study used continuous and sample survey methods, expert review, and the "Birth and Mortality Monitoring" software package.

The materials, databases and methods used allowed us to solve the tasks.

As a result of the study, it was found that the existing system in Russia for recording, processing and presenting information on mortality basically complies with WHO recommendations. At the same time, there are deviations from international definitions in the state mortality statistics.

The quality of filling in certificates was analyzed and shortcomings in the certificate itself, errors in filling out, coding and choosing the original cause of death were identified.

Duplicate items were found in the medical death certificate (items 14, 15 and 18), which leads to errors and inconsistencies when the certificate is filled out in cases of injury and poisoning. In item 8 of the counterfoil and item 18 "Cause of death" of the certificate, the interlinear notes are unhelpful: they are correct only when there is enough information for all three lines. When there is information for only one or two lines, the filling rules are violated, e.g. the third line is completed while the first and second are left empty.

Violations of death registration, albeit isolated ones, were also established: in the Tula region they amounted to 0.4%, and in the Stavropol Territory to 5.5%.

An expert evaluation of completed medical death certificates showed that most items of the certificate are filled in. At the same time, item 12 "Education" is completed in only 36.1% of cases and item 13 "Where and by whom the deceased worked" in 50% of cases; in the remaining cases the information is marked unknown. Yet this information is very important, because it reflects the social status of the deceased and is widely used in the analysis of mortality.

Errors were found in filling out item 18 "Cause of death" (from 23.6% to 47.4% in different territories), in choosing the underlying cause of death (from 5.9% to 15.4%), and in coding (from 27.9% to 52.9%).

Thus, the problem is that the reliability of mortality information across the subjects of the Russian Federation as a whole is about 50%.

These errors distort the actual structure of the causes of overall mortality in the regions under study and give an incorrect picture of the medical and demographic processes.

Serious shortcomings and distortions in indicators of causes of death are explained by the lack of unified methods for training specialists in filling out medical certificates of death and perinatal death.

The absence of Instructions on the procedure for filling out and issuing a medical death certificate significantly reduces its quality.

To address the tasks of the study, the following work was performed from 1999 to 2006:

  • 106 seminars were held on the use of ICD-10 and the rules for filling out and coding the medical death certificate, attended by 6,977 participants;
  • the methodological package of programs for the use of ICD-10, "RUTENDON", was developed and implemented at 87 sites (30 in the subjects of the Russian Federation, 51 in the NIS, 6 in the countries of Eastern Europe);
  • the "Monitoring of Fertility and Mortality" set of programs with automated selection and coding of the underlying cause of death was developed and implemented in healthcare in 19 subjects of the Russian Federation (the Tula, Vladimir, Bryansk, Kirov, Sverdlovsk, Kurgan, Tyumen, Belgorod, Saratov and Yaroslavl regions; the Krasnoyarsk and Stavropol territories; the Republics of Buryatia, Dagestan, Tuva, Udmurtia and Chuvashia; and the Yamalo-Nenets and Khanty-Mansiysk autonomous okrugs). The introduction of this set of programs into healthcare practice makes it possible to increase the reliability of cause-of-death data by 18%.

We consider relevant the decision of the Ministry of Health and Social Development of the Russian Federation to revise the medical certificate of death and perinatal death with the simultaneous development of instructions for new certificates.

At the same time, we believe that the drafts of the new certificates and instructions submitted for discussion require revision, because they complicate, rather than correct, the shortcomings of the existing documents.

We propose to make the following changes to the draft of the new medical death certificate:

  • remove "having lived __ years, __ months, __ days", so as not to complicate the statistical record, since the dates of birth and death are already recorded;
  • remove the entries on the terms of prematurity, full term and post-maturity, transferring these explanations to the instructions;
  • change the recorded term of a full-term pregnancy from 37-42 weeks to 37-41 weeks, because under ICD-10 a pregnancy is considered full-term from 37 completed weeks to less than 42 completed weeks (259-293 days);
  • replace the mother's date of birth with the mother's age (in full years);
  • delete items 13 and 15, because they duplicate item 19, which is more informative;
  • remove the interlinear entries from item 19 "Causes of death" and transfer them to the instructions, where the rules for filling out this item are described in detail;
  • replace the Latin letters a), b), c), d) labeling the lines of item 19 "Causes of death" with the corresponding Russian letters;
  • include in item 19 "Causes of death" the additional column "Approximate interval between the onset of the pathological process and death" recommended by ICD-10;
  • reduce item 20 to its previous size, limiting it to information about the current pregnancy and any pregnancy during the year preceding death, since maternal mortality monitoring is carried out in Russia, from which all the information of item 20 can be obtained.

The Instruction "On the procedure for filling out and issuing a medical death certificate", approved by Order No. 1300 of the USSR Ministry of Health of 19 November 1984, was never reissued, even though the medical death certificate was revised in 1998, when registration form No. 106/u-98 was approved by Order No. 241 of the Ministry of Health of Russia dated 07.08.1998.

Developing and widely implementing an Instruction on the procedure for filling out and issuing a medical death certificate is therefore urgent: it would reduce errors by about 10% and help increase the reliability of data on the causes of death of the population of the Russian Federation.

The draft Instruction was prepared on the basis of the Instruction on the procedure for filling out and issuing a medical death certificate, developed at the Central Research Institute of Healthcare of the Ministry of Health of the Russian Federation in 1998, and generally complies with the requirements of ICD-10.

At the same time, in order to increase the reliability of causes of death and in accordance with WHO recommendations, we propose to make fundamental changes to the draft Instruction:

  • allow paramedical staff (a paramedic or midwife) to issue a medical death certificate in the absence of a doctor, and replace the wording "not having in the staff of the medical organization positions of medical personnel" with "does not have medical personnel in the medical organization", since it is possible for a doctor's position to exist on staff while the doctor is physically absent (on vacation, in hospital, etc.);
  • transfer the definitions of "premature", "full-term" and "post-term" from the certificate to the instructions, correcting the term of full-term delivery to 37-41 weeks of pregnancy;
  • correct the entry "including as a result of an accident, poisoning or injury" in the item related to pregnancy to "except as a result of an accident, poisoning or injury, HIV infection and obstetric tetanus", because deaths from these causes are not included in maternal mortality;
  • correct the codes in examples 5, 7, 10 and 13.

During the study, medical certificates of perinatal death were examined in detail to assess how correctly the document was completed and coded, and how correctly the child's underlying disease (condition) leading to death and the mother's underlying disease adversely affecting the child were selected. Expert assessment was carried out separately for live-born and stillborn children. Significantly more errors were found in recording the causes of death and processing the data for stillborn children.

Errors in paragraph 18, "Causes of death", for stillborns fell into five groups (figures for the Tula Region and the Stavropol Territory, respectively):

  • in filling out the medical certificate of perinatal death (18.9% and 40.5%),
  • in selecting the child's underlying disease (10.8% and 20.3%),
  • in selecting the mother's underlying disease that adversely affected the child (62.2% and 60.8%),
  • in selecting the code for the child's underlying disease (73% and 77%),
  • in selecting the code for the mother's underlying disease (67.5% and 91.9%).

Thus, the study found that the existing system for collecting and processing medical certificates of perinatal death in the Tula Region and the Stavropol Territory does not ensure the reliability of statistical indicators of the causes of death.

One of the important factors in increasing the reliability of perinatal mortality statistics is restructuring the recording system and bringing it into line with international rules.

The draft medical certificate of perinatal death proposed by the Ministry of Health and Development of the Russian Federation marks the transition to the new perinatal period and adds separate items on signs of life, as recommended by WHO and consistent with ICD-10.

The transition to the new perinatal period will somewhat worsen our perinatal mortality indicators, since fetuses with a body weight of 500 g to 999 g account for up to 2% of births in our country, but it will make our indicators comparable with those of developed countries.

At the same time, we propose to make some changes to the draft medical certificate of perinatal death:

  • delete the entry "filled in for stillborns and for live births who died during the first 0-6 days (168 hours) after birth" and transfer it to the Instruction;
  • remove the word "fetus" from all items of the certificate except those relating to previous pregnancies, since medical certificates of perinatal death are not issued for fetuses;
  • delete from clause 30 the words "by the doctor who certified death", "on the basis of examination of the body" and "autopsy data can be obtained later", and explain in the Instruction that all cases of perinatal death must undergo autopsy; if additional studies are needed, a preliminary certificate is issued.

We consider it expedient to develop, at the same time, both a new medical certificate of perinatal death and an Instruction "On the procedure for filling out and submitting to the civil registry authorities a medical certificate of perinatal death and registering deaths of children in the perinatal period".

The novelty of the proposed draft is the transition to a new perinatal period, from 22 completed weeks of fetal life to 7 completed days after birth, which complies with WHO recommendations.

The transition to the new perinatal period entails a new delineation of the concepts of "child" and "fetus". This means that products of conception weighing 500 to 999 g, previously regarded as fetuses, will be considered children and will be subject to universal registration as children, including those who die within the first 7 days after birth and stillborns.

To increase the reliability of perinatal mortality statistics, we propose including the following definitions of "child" and "fetus" in the Instruction, taking the perinatal period as extending from 22 completed weeks of fetal life to 7 completed days after birth.

A child is a product of human conception that, after complete expulsion or extraction from the mother's body, has a birth weight of 500 g or more, a gestational age of 22 weeks or more, and a body length of 25 cm or more from the crown of the head to the heels, regardless of whether the birth is singleton or multiple.

A fetus is a product of human conception that, after complete expulsion or extraction from the mother's body, has a birth weight of 499 g or less, a gestational age of less than 22 weeks, and a body length of less than 25 cm from the crown of the head to the heels, regardless of whether the birth is singleton or multiple.

The leading criterion for distinguishing "child" from "fetus" is body weight; if birth weight is unknown, the appropriate gestational-age criterion should be used or, failing that, body length from the crown of the head to the heels.
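Purely as an illustration (this is not part of the draft Instruction, and the function and field names are hypothetical), the ordered decision rule above can be sketched as a small function: weight is checked first, then gestational age, then body length.

```python
def classify_product_of_conception(birth_weight_g=None,
                                   gestation_weeks=None,
                                   body_length_cm=None):
    """Classify a product of conception as 'child' or 'fetus'.

    Criteria are applied in the order given in the text: body weight is
    the leading criterion; gestational age is used only when weight is
    unknown; body length from crown to heels is the final fallback.
    Thresholds: >= 500 g, >= 22 weeks, >= 25 cm -> 'child'.
    Returns None when no criterion is available.
    """
    if birth_weight_g is not None:
        return "child" if birth_weight_g >= 500 else "fetus"
    if gestation_weeks is not None:
        return "child" if gestation_weeks >= 22 else "fetus"
    if body_length_cm is not None:
        return "child" if body_length_cm >= 25 else "fetus"
    return None  # no criterion available
```

Note that a known body weight overrides the other measurements, matching the text's statement that weight is the leading criterion.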

We propose to amend the draft instructions:

  • add the words "a medical certificate of perinatal death (form No. 106/2u-98) is filled in for children: stillborns and live births who died within the first 0-6 days (168 hours) after birth";
  • remove the word "fetus" from the Instruction except in passages relating to previous pregnancies;
  • add the following established document-processing procedure:
    • neither a medical birth certificate nor a medical certificate of perinatal death is issued for a "fetus", since "fetuses" are not subject to registration;
    • for a stillborn "child", a medical certificate of perinatal death is issued but no medical birth certificate;
    • for a "child" born alive who died in the first week of life, a medical birth certificate and a medical certificate of perinatal death are issued simultaneously;
  • correct the congenital anomaly codes to Q00-Q99;
  • clarify the text and correct the codes in examples 4 and 5.

Thus, improving the recording of mortality statistics is one of the factors that could increase reliability by 10%, and the transition to the WHO-recommended perinatal period will allow international comparison of demographic indicators. After the transition to the new perinatal period, it is essential to define the concepts of "child" and "fetus".

