
Simple and complex sound vibrations. Harmonic analysis of sound

The application of the harmonic analysis method to the study of acoustic phenomena made it possible to resolve many theoretical and practical problems. One of the difficult questions of acoustics is the question of the peculiarities of the perception of human speech.

The physical characteristics of sound vibrations are the frequency, amplitude, and initial phase of the oscillations. For the perception of sound by the human ear, only two of these physical characteristics matter: the frequency and the amplitude of the oscillations.

But if this is really the case, then how do we recognize the same vowels a, o, u, and so on in the speech of different people? After all, one person speaks in a bass, another in a tenor, a third in a soprano; the pitch of the sound, that is, the frequency of the sound vibrations, when pronouncing the same vowel therefore turns out to be different for different people. We can sing a whole octave on the same vowel a, changing the frequency of the sound vibrations by a factor of two, and still we recognize it as a, not o or u.

Our perception of vowels also does not change when the loudness of the sound changes, that is, when the amplitude of the vibrations changes. We confidently distinguish a loudly or quietly spoken a from i, u, o, e.

An explanation for this remarkable feature of human speech is provided by the results of an analysis of the spectrum of sound vibrations that arise when pronouncing vowels.

Analysis of the spectrum of sound vibrations can be carried out in various ways. The simplest of these is to use a set of acoustic resonators called Helmholtz resonators.

An acoustic resonator is a cavity, usually spherical in shape, communicating with the external environment through a small opening. As Helmholtz showed, the natural frequency of oscillation of the air enclosed in such a cavity is, to a first approximation, independent of the shape of the cavity, and for a round opening it is determined by the formula

ν = (c / 2π) · √(d / V),

where ν is the natural frequency of the resonator, c is the speed of sound in air, d is the diameter of the opening, and V is the volume of the resonator.
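As a rough numerical illustration of Helmholtz's approximate formula for a cavity with a round opening, ν = (c/2π)·√(d/V), here is a small sketch; the cavity and opening sizes are arbitrary choices:

```python
import math

# Evaluating nu = (c / 2*pi) * sqrt(d / V) for a 1-litre spherical cavity
# with a 1 cm round opening (illustrative values).
def helmholtz_frequency(c, d, V):
    """c: speed of sound (m/s); d: opening diameter (m); V: cavity volume (m^3)."""
    return c / (2 * math.pi) * math.sqrt(d / V)

nu = helmholtz_frequency(340.0, 0.01, 1e-3)
print(round(nu, 1))   # 171.1 (Hz)
```

A set of such resonators with different volumes spans the audible range, which is exactly how the sets described above were built.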

If you have a set of Helmholtz resonators with different natural frequencies, then to determine the spectral composition of the sound from some source you bring the resonators to your ear one at a time and detect, by the increase in loudness, the onset of resonance. From such experiments one can conclude that the complex acoustic vibration contains harmonic components whose frequencies equal the natural frequencies of the resonators in which resonance was observed.

This method of determining the spectral composition of sound is too labor-intensive and not very reliable. One could try to improve it: use the entire set of resonators at once, providing each of them with a microphone to convert the sound vibrations into electrical ones and a device to measure the current at the microphone output. To obtain information about the spectrum of harmonic components of a complex sound vibration with such a device, it would be enough to take readings from all the measuring instruments at their outputs.

However, this method is not used in practice, since more convenient and reliable methods of spectral analysis of sound have been developed. The essence of the most common of them is as follows. Using a microphone, the sound-frequency air-pressure fluctuations under study are converted into fluctuations of the electrical voltage at the microphone output. If the quality of the microphone is high enough, the dependence of the output voltage on time is expressed by the same function as the change of sound pressure over time. The analysis of the spectrum of the sound vibrations can then be replaced by analysis of the spectrum of the electrical vibrations. Analysis of the spectrum of electrical vibrations of sound frequency is technically simpler, and the measurement results are much more accurate. The operating principle of the corresponding analyzer is also based on the phenomenon of resonance, but not in a mechanical system; rather, in electrical circuits.
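In software, the same analysis of the electrical signal is usually done with a discrete Fourier transform rather than with resonant circuits. A minimal sketch; the sampling rate and the two component frequencies are arbitrary, illustrative choices:

```python
import numpy as np

# A "microphone voltage" containing two harmonics (440 Hz and 880 Hz,
# amplitudes 1.0 and 0.5; all values illustrative) is decomposed by the
# discrete Fourier transform, which plays the role of the analyzer circuits.
fs = 8000                          # sampling rate, Hz
t = np.arange(fs) / fs             # one second of signal
u = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(u)) * 2 / fs      # amplitude of each component
freqs = np.fft.rfftfreq(fs, 1 / fs)

# The two strongest spectral lines recover the component frequencies:
peaks = sorted(freqs[np.argsort(spectrum)[-2:]].tolist())
print(peaks)                       # [440.0, 880.0]
```

Reading off the positions and heights of the spectral lines replaces the readings from the bank of resonators.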

The application of the spectrum analysis method to the study of human speech made it possible to discover that when a person pronounces, for example, the vowel a at the pitch of the note C of the first octave, sound vibrations with a complex frequency spectrum arise. In addition to the oscillation at 261.6 Hz corresponding to the note C of the first octave, a number of harmonics of higher frequency are found in them. When the tone at which the vowel is pronounced changes, the spectrum changes as well: the amplitude of the 261.6 Hz harmonic drops to zero and a harmonic appears corresponding to the tone at which the vowel is now pronounced, but a number of other harmonics keep their amplitudes. A stable group of harmonics characteristic of a given sound is called its formant.

If a record of a song intended to be played at 33 rpm is played at 78 rpm, the melody remains recognizable, but the sounds and words not only sound higher in pitch, they become unrecognizable. The reason is that the frequencies of all the harmonic components of each sound change.

We come to the conclusion that the human brain, from the signals arriving through the nerve fibers from the organ of hearing, is capable of determining not only the frequency and amplitude of a sound vibration but also the spectral composition of complex sound vibrations, as if performing the work of a spectrum analyzer that resolves non-harmonic vibrations into harmonic components.

A person can recognize the voices of familiar people and distinguish sounds of the same pitch produced by different musical instruments. This ability, too, rests on the difference in the spectral composition of sounds with the same fundamental tone from different sources. The presence in their spectra of stable groups of harmonic components (formants) gives the sound of each musical instrument its characteristic "coloring", called the timbre of the sound.

1. Give examples of non-harmonic vibrations.

2. What is the essence of the harmonic analysis method?

3. What are the practical applications of the harmonic analysis method?

4. How do different vowel sounds differ from each other?

5. How is harmonic analysis of sound carried out in practice?

6. What is the timbre of sound?

When discussing the nature of sound waves, we had in mind sound vibrations that obey a sinusoidal law. These are simple sound vibrations, called pure sounds, or tones. In natural conditions, however, such sounds are practically never encountered: the rustling of leaves, the murmur of a stream, the rumble of thunder, the voices of birds and animals are all complex sounds. Nevertheless, any complex sound can be represented as a set of tones of varying frequency and amplitude. This is achieved by performing a spectral analysis of the sound.

A graphic representation of the result of decomposing a complex sound into its constituent components is called the amplitude-frequency spectrum. On the spectrum, amplitude is expressed in one of two kinds of units: logarithmic (decibels) or linear (percent). If a percentage scale is used, the amplitudes are most often reckoned relative to the amplitude of the most pronounced component of the spectrum. On the decibel scale that component is taken as zero decibels, and the decrease in amplitude of the remaining spectral components is measured in negative units. Sometimes, in particular when averaging several spectra, it is more convenient to take the amplitude of the entire analyzed sound as the reference.

The quality of a sound, or its timbre, depends significantly on the number of its constituent sinusoidal components and on the prominence of each of them, that is, on the amplitudes of the tones composing it. You can easily verify this by listening to the same note played on different musical instruments. In all cases the fundamental frequency of the note (for string instruments, the frequency of vibration of the string) is the same; each instrument, however, is characterized by its own shape of the amplitude-frequency spectrum.
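The decibel convention described above (strongest component at 0 dB, the rest negative) can be sketched in a few lines; the four linear amplitudes are arbitrary example values:

```python
import numpy as np

# Expressing a spectrum in decibels relative to its strongest component:
# the largest amplitude becomes 0 dB, all others come out negative.
amplitudes = np.array([0.2, 1.0, 0.5, 0.1])   # linear amplitudes of four tones
db = 20 * np.log10(amplitudes / amplitudes.max())
print(db)   # strongest tone: 0 dB; half amplitude is about -6 dB; 0.1 is -20 dB
```

The same array divided by `amplitudes.sum()` instead of `amplitudes.max()` would implement the alternative reference mentioned above, where the whole analyzed sound serves as the basis.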

1. Amplitude-frequency spectra of the note "C" of the first octave played on different musical instruments. The amplitude of the first harmonic, called the fundamental frequency (marked with an arrow), is taken as 100 percent. The peculiarity of the clarinet's sound compared with the piano's shows up in a different ratio of the amplitudes of the spectral components, that is, the harmonics; in addition, the clarinet's spectrum lacks the second and fourth harmonics.

Everything said above about the sounds of musical instruments is also true of vocal sounds. The pitch of a vocal sound, in this context usually called the fundamental frequency, corresponds to the frequency of vibration of the vocal cords. The sound emanating from the vocal apparatus includes, in addition to the fundamental tone, numerous accompanying tones; together they make up a complex sound. If the frequencies of the accompanying tones exceed the frequency of the fundamental tone by integer factors, such a sound is called harmonic, and the accompanying tones themselves, and the corresponding spectral components in the amplitude-frequency spectrum of the sound, are called harmonics. The distances on the frequency scale between adjacent harmonics correspond to the fundamental frequency, that is, to the frequency of vibration of the vocal cords.


2. Amplitude-frequency spectra of the sound produced by a person's vocal cords when pronouncing any vowel (picture on the left), and of the vowel sound "i" created by the vocal tract (picture on the right). The vertical segments depict harmonics; the distance between them on the frequency scale corresponds to the fundamental frequency of the voice. The change (decrease) in harmonic amplitude is expressed in decibels relative to the amplitude of the largest harmonic. On the envelope of the spectrum of the sound "i", the so-called formant frequencies (F1, F2, F3) appear: the harmonic components with the largest amplitudes.

As an example, consider the formation of speech sounds. During the pronunciation of any vowel, the vibrating vocal cords create a complex sound whose spectrum consists of a series of harmonics with gradually decreasing amplitude. For all vowels, the spectrum of the sound produced by the vocal cords is the same. The difference in the sound of vowels is achieved by changes in the configuration and size of the air cavities of the vocal tract. For example, when we pronounce the sound "i", the soft palate blocks the passage of air into the nasal cavity and the front part of the back of the tongue rises toward the palate, as a result of which the oral cavity acquires certain resonant properties that modify the original spectrum of the sound created by the vocal cords. In this spectrum a number of amplitude peaks specific to the given vowel, called spectral maxima, appear. In this case one speaks of a change in the envelope of the sound spectrum. The most energetically pronounced spectral maxima, due to the operation of the vocal tract as a resonator and filter, are called formants. Formants are designated by serial numbers, the first formant being the one immediately above the fundamental frequency.
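The source-filter idea described above can be sketched numerically. This is an illustrative toy model, not a physiological one; the formant frequencies and bandwidth below are assumed round numbers, only loosely resembling a vowel "i":

```python
import numpy as np

# Glottal source: harmonics of f0 with gradually falling amplitudes.
# Vocal-tract filter: an envelope with resonance peaks (formants).
f0 = 120.0                              # fundamental (vocal-cord) frequency, Hz
n = np.arange(1, 41)
harmonics = f0 * n                      # frequencies of 40 source harmonics
source = 1.0 / n                        # amplitudes fall off roughly as 1/n

def envelope(freq, formants=(300.0, 2300.0, 3000.0), bw=150.0):
    """Sum of resonance peaks; the formant values are assumed for illustration."""
    return sum(1.0 / (1.0 + ((freq - F) / bw) ** 2) for F in formants)

vowel = source * envelope(harmonics)

# After filtering, the strongest harmonic sits near the first formant, not at f0:
peak_freq = float(harmonics[np.argmax(vowel)])
print(peak_freq)                        # 240.0
```

Changing the `formants` tuple reshapes the envelope, which is exactly the mechanism by which different vowels arise from the same glottal source spectrum.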

In the form of a sum of harmonic vibrations one can represent not only vocal sounds but also the various noises made by animals: sniffing, snorting, knocking, and smacking. Since the spectra of noise-like sounds consist of many tones closely adjacent to one another, individual harmonics cannot be identified in them. Noise-like sounds are typically characterized by a fairly wide range of frequencies.

In bioacoustics, as in technical sciences, all sounds are usually called acoustic or sound signals. If the spectrum of an audio signal covers a wide frequency band, the signal itself and its spectrum are called broadband, and if it is narrow, then it is called narrowband.

Harmonic analysis of sound means

A. establishing the number of tones that make up a complex sound.

B. establishing the frequencies and amplitudes of the tones that make up a complex sound.

Correct answer:

1) only A

2) only B

3) both A and B

4) neither A nor B


Sound Analysis

Using sets of acoustic resonators, you can determine which tones are part of a given sound and what their amplitudes are. This determination of the spectrum of a complex sound is called its harmonic analysis.

Previously, sound analysis was performed using resonators: hollow balls of different sizes with an open extension that is inserted into the ear and a hole on the opposite side. For sound analysis it is essential that whenever the analyzed sound contains a tone whose frequency equals the natural frequency of the resonator, the resonator begins to sound loudly at that tone.

Such methods of analysis, however, are very imprecise and laborious. Currently, they are being replaced by much more advanced, accurate and fast electroacoustic methods. Their essence boils down to the fact that an acoustic vibration is first converted into an electrical vibration, maintaining the same shape, and therefore having the same spectrum, and then this vibration is analyzed by electrical methods.

One of the significant results of harmonic analysis concerns the sounds of our speech. We can recognize a person's voice by its timbre. But how do the sound vibrations differ when the same person sings different vowels on the same note? In other words, how do the periodic air vibrations produced by the vocal apparatus differ in these cases, with different positions of the lips and tongue and different shapes of the mouth and pharynx? Obviously, the vowel spectra must contain some features characteristic of each vowel sound, in addition to the features that create the timbre of a given person's voice. Harmonic analysis of vowels confirms this assumption: vowel sounds are characterized by the presence in their spectra of overtone regions of large amplitude, and for each vowel these regions always lie at the same frequencies, regardless of the pitch at which the vowel is sung.

What physical phenomenon underlies the electroacoustic method of sound analysis?

1) conversion of electrical vibrations into sound

2) decomposition of sound vibrations into a spectrum

3) resonance

4) conversion of sound vibrations into electrical ones

Solution.

The idea of the electroacoustic method of sound analysis is that the sound vibrations under study act on the microphone membrane and cause its periodic motion. The membrane is connected to a load whose resistance changes in accordance with the law of motion of the membrane. Since the resistance changes, the voltage changes as well: one says that the electrical signal is modulated, and electrical oscillations arise. Thus, the electroacoustic method of sound analysis is based on the conversion of sound vibrations into electrical ones.

The correct answer is listed at number 4.

Artifacts of spectral analysis and the Heisenberg uncertainty principle

In the previous lecture we examined the problem of decomposing an arbitrary sound signal into elementary harmonic signals (components), which in what follows we will call atomic information elements of sound. Let us repeat the main conclusions and introduce some new notation.

We will denote the sound signal under study, as in the last lecture, by s(t).

The complex spectrum of this signal is found using the Fourier transform as follows:

S(ω) = ∫ s(t) e^(−iωt) dt, the integral taken over all t. (12.1)

This spectrum allows us to determine into which elementary harmonic signals of different frequencies our studied sound signal is decomposed. In other words, the spectrum describes the complete set of harmonics into which the signal under study is decomposed.

For convenience of description, instead of formula (12.1) the following more expressive notation is often used:

S(ω) = F[s(t)], (12.2)

thereby emphasizing that a function of time is supplied to the input of the Fourier transform, while the output is a function that depends not on time but on frequency.

To emphasize that the resulting spectrum is complex-valued, it is usually presented in the following form:

S(ω) = A(ω) e^(iφ(ω)), (12.3)

where A(ω) = |S(ω)| is the amplitude spectrum of the harmonics, (12.4)

and φ(ω) is the phase spectrum of the harmonics. (12.5)

Taking the logarithm of equation (12.3), we obtain the following expression:

ln S(ω) = ln A(ω) + i φ(ω). (12.6)

It turns out that the real part of the logarithm of the complex spectrum equals the amplitude spectrum on a logarithmic scale (which agrees with the Weber-Fechner law), while the imaginary part equals the phase spectrum of the harmonics, whose values (the phases) our ear does not perceive. Such an interesting coincidence may be disconcerting at first, but we will not dwell on it. Let us instead emphasize a fact of fundamental importance for what follows: the Fourier transform transfers any signal from the physical time domain into an informational frequency space, in which the frequencies of the harmonics into which the audio signal is decomposed are invariant.
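The decomposition of the logarithm of a complex spectral value into log-amplitude and phase can be verified numerically; this is a toy check on a single complex number with arbitrary values:

```python
import numpy as np

# One complex spectral sample with amplitude 3 and phase 0.7 rad.
S = 3.0 * np.exp(1j * 0.7)

log_S = np.log(S)
print(abs(log_S.real - np.log(3.0)) < 1e-12)   # True: Re(ln S) = ln|S|
print(abs(log_S.imag - 0.7) < 1e-12)           # True: Im(ln S) = phase
```

The same identity applied pointwise to a full spectrum S(ω) separates it into the log-amplitude and phase spectra discussed above.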


Let us denote the atomic information element of sound (a harmonic) as follows:

s(t) = A cos(2πνt + φ). (12.7)

Let us take advantage of a graphical representation showing the range of audibility of harmonics of different frequencies and amplitudes, taken from the excellent book by E. Zwicker and H. Fastl, "Psychoacoustics: Facts and Models" (Second Edition, Springer, 1999), page 17 (see Fig. 12.1).

If a certain sound signal consists of two harmonics,

s(t) = A1 cos(2πν1 t + φ1) + A2 cos(2πν2 t + φ2),

then their position in the auditory information space may have, for example, the form shown in Fig. 12.2.

Looking at these figures makes it easier to understand why we called individual harmonic signals atomic information elements of sound. The entire auditory information space (Fig. 12.1) is bounded from below by the hearing-threshold curve and from above by the pain-threshold curve for harmonics of different frequencies and amplitudes. This space has somewhat irregular outlines, but in shape it is somewhat reminiscent of another information space that exists in our eye: the retina. In the retina the atomic information objects are the rods and cones; their analogue in digital information technology is the pixel. The analogy is not entirely exact, since in an image all pixels (in a two-dimensional space) play their role, whereas in our sound information space two points cannot lie on the same vertical line. Therefore any sound is reflected in this space, at best, only as some curved line (the amplitude spectrum), starting on the left at low frequencies (about 20 Hz) and ending on the right at high frequencies (about 20 kHz).

Such reasoning looks quite elegant and convincing, until one takes into account the real laws of nature. Even if the original sound signal consists of only one single harmonic (of a certain frequency and amplitude), in reality our auditory system will not "see" it as a point in the auditory information space: the point will blur somewhat. Why? Because all these arguments are valid for the spectra of infinitely long harmonic signals, whereas our real auditory system analyzes sounds over relatively short time intervals, from 30 to 50 ms long. Our auditory system, like the entire neural machinery of the brain, thus works discretely, with a frame rate of roughly 20-33 frames per second. Spectral analysis must therefore be carried out frame by frame, and this leads to some unpleasant effects.

In the first stages of research and analysis of audio signals using digital information technologies, the developers simply cut the signal into separate frames, as, for example, shown in Fig. 12.3.

If one such piece of a harmonic signal, cut into a frame, is sent to the Fourier transform, we will not obtain a single spectral line like the one shown in Fig. 12.1. Instead we obtain the graph of the (logarithmic) amplitude spectrum shown in Fig. 12.4.

Fig. 12.4 shows in red the true frequency and amplitude of the harmonic signal (12.7). The thin spectral (red) line has blurred significantly, and, worst of all, numerous artifacts have appeared that all but nullify the usefulness of the spectral analysis. Indeed, if each harmonic component of the sound signal introduces similar artifacts of its own, it becomes impossible to distinguish the true traces of the sound from the artifacts.



For this reason, in the 1960s many scientists made intensive attempts to improve the quality of the spectra obtained from individual frames of an audio signal. It turned out that if the frame is not cut out crudely ("with straight scissors") but the sound signal is instead multiplied by some smooth function, the artifacts can be significantly suppressed.

For example, Fig. 12.5 shows a piece (frame) of a signal cut out using one period of the cosine function (this window is sometimes called the Hanning window). The logarithmic spectrum of a single harmonic signal cut out in this way is shown in Fig. 12.6. The figure clearly shows that the artifacts of the spectral analysis have largely, though not completely, disappeared.
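The improvement from such a smooth window can be checked numerically. In the sketch below, the sampling rate, frame length, and test frequency are arbitrary choices; the frequency is deliberately placed between analysis bins, the worst case for leakage:

```python
import numpy as np

# A single harmonic that does not fall exactly on an analysis bin:
# f * dur = 40.5 periods in the frame.
fs, dur, f = 8000, 0.04, 1012.5
n = int(fs * dur)                       # 320 samples per frame
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f * t)

rect = np.abs(np.fft.rfft(x))                    # frame cut "with straight scissors"
hann = np.abs(np.fft.rfft(x * np.hanning(n)))    # frame weighted by a Hanning window

def sidelobe_db(spec):
    """Highest artifact level far from the true line, in dB below the peak."""
    k = int(f * dur)                             # bin nearest the harmonic
    far = np.delete(spec, range(k - 8, k + 9))   # exclude the main-lobe region
    return 20 * np.log10(far.max() / spec.max())

# The smooth window suppresses the artifacts by tens of decibels:
print(sidelobe_db(rect), sidelobe_db(hann))
```

The rectangular frame leaves artifacts only a couple of dozen decibels below the true line, while the Hanning window pushes them far lower, mirroring the difference between Fig. 12.4 and Fig. 12.6.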

In those same years the famous researcher Hamming proposed combining two kinds of window, rectangular and cosine, and calculated their proportion so as to minimize the size of the artifacts. But even this best combination of the simplest windows turned out not to be the best in principle: in all respects the best window proved to be the Gaussian one.

To compare the artifacts introduced by the various time windows, Fig. 12.7 shows the results of applying them to obtain the amplitude spectrum of a single harmonic signal (12.7), and Fig. 12.8 shows the spectrum of the vowel sound "o".

It is clearly seen from the figures that the Gaussian time window creates no artifacts. Especially noteworthy is one remarkable property of the resulting amplitude spectrum (taken on a linear rather than logarithmic scale) of the same single harmonic signal: the graph of the spectrum is itself shaped like a Gaussian function (see Fig. 12.9). Moreover, the half-width Δt of the Gaussian time window and the half-width Δν of the resulting spectrum are related by a simple relation of the form

Δν · Δt ≈ 1/(2π)

(with the exact constant depending on how the half-widths are defined).

This relation reflects the Heisenberg uncertainty principle. (Assignment: tell about Heisenberg himself, and give examples of the manifestation of the uncertainty principle in nuclear physics, in spectral analysis, in mathematical statistics (Student's t-test), in psychology, and in social phenomena.)



The Heisenberg uncertainty principle answers many questions about why the traces of some harmonic components of a signal cannot be distinguished in the spectrum. The general answer can be formulated as follows: if we build a spectral film from frames of duration Δt, we will not be able to distinguish harmonics that differ in frequency by less than about 1/Δt; their traces in the spectrum will merge.

Let's consider this statement using the following example.


In Fig. Figure 12.10 shows a signal about which we only know that it consists of several harmonics of different frequencies.


Cutting out one frame of this complex signal with a Gaussian time window of small width Δt, we obtain the amplitude spectrum shown in Fig. 12.11. Because Δt is very small, the half-width of the amplitude spectrum contributed by each harmonic is so large that the spectral lobes from the frequencies of all the harmonics merge and overlap one another (see Fig. 12.11).

By slightly increasing the width of the Gaussian time window, we obtain another spectrum, shown in Fig. 12.12. Based on this spectrum, it can already be assumed that the signal under study contains at least two harmonic components.

Continuing to increase the width of the time window, we obtain the spectrum shown in Fig. 12.13, and then the spectra in Fig. 12.14 and 12.15. Looking at the last figure, we can say with a high degree of confidence that the signal in Fig. 12.10 consists of three separate components. After these large-scale illustrations, let us return to the question of searching for harmonic components in real speech signals.
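The progression in Figs. 12.11-12.15 can be reproduced numerically. Below is a sketch in which the sampling rate, the two tone frequencies, the window shape, and the peak-counting rule are all illustrative choices: two harmonics 20 Hz apart merge into one spectral lobe under a narrow Gaussian window and separate into two peaks under a wide one.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                       # 1 s of signal
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 520 * t)

def n_peaks(width_s):
    """Count significant local maxima in the spectrum of one windowed frame."""
    n = int(fs * width_s)
    gauss = np.exp(-0.5 * ((np.arange(n) - n / 2) / (n / 8)) ** 2)
    spec = np.abs(np.fft.rfft(x[:n] * gauss, 8 * fs))    # zero-padded for smoothness
    freqs = np.fft.rfftfreq(8 * fs, 1 / fs)
    s = spec[(freqs > 400) & (freqs < 620)]              # band around the two tones
    m = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) & (s[1:-1] > 0.1 * s.max())
    return int(m.sum())

print(n_peaks(0.02), n_peaks(0.5))   # short frame: 1 merged peak; long frame: 2
```

With a 20 ms frame the spectral half-width of each line far exceeds the 20 Hz separation, so the lobes merge; a 500 ms frame narrows each line enough for both components to stand out, exactly as the uncertainty relation predicts.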

It should be emphasized here that a real speech signal contains no pure harmonic components. In other words, we do not produce harmonic components of type (12.7). Nevertheless, quasi-harmonic components are present in speech.

The only quasi-harmonic components in the speech signal are the damped harmonics that arise in the resonator (the vocal tract) after each clap of the vocal cords. The relative arrangement of the frequencies of these damped harmonics determines the formant structure of the speech signal. A synthesized example of a damped harmonic signal is shown in Fig. 12.16. If a small fragment is cut from this signal with a Gaussian time window and sent to the Fourier transform, the amplitude spectrum (on a logarithmic scale) shown in Fig. 12.17 is obtained.


If we cut out from a real speech signal one period between two claps of the vocal cords (see Fig. 12.18) and place the spectral-estimation time window somewhere in the middle of this fragment, we obtain the amplitude spectrum shown in Fig. 12.19. In this figure the red lines mark the frequencies of the complex resonant oscillations of the vocal tract that show up in the spectrum. The figure clearly shows that with the chosen small width of the spectral-estimation window, not all the resonant frequencies of the vocal tract are clearly visible in the spectrum.

But this is inevitable. In this regard the following recommendation can be formulated for visualizing the traces of the resonant frequencies of the vocal tract: the frame rate of the spectral film should be about an order of magnitude (roughly 10 times) greater than the vibration frequency of the vocal cords. The frame rate cannot, however, be increased indefinitely, since owing to the Heisenberg uncertainty principle the traces of the formants on the sonogram will begin to merge.



What would the spectrum on the previous slide look like if a rectangular window cut out exactly N periods of the harmonic signal? Remember the Fourier series.

Artifact [from Lat. arte "artificially" + factus "made"]: in biology, formations or processes that sometimes arise during the study of a biological object owing to the influence of the research conditions themselves.

This function goes by various names: weighting function, window function, or weighting window.

If you press the pedal of a piano and shout loudly at it, you can hear an echo from it, audible for some time, with a tone (frequency) very similar to the original sound.

Sound analysis and synthesis.

Using sets of acoustic resonators, you can establish which tones are part of a given sound and with what amplitudes they are present in this sound. This establishment of the harmonic spectrum of a complex sound is called its harmonic analysis. Previously, such an analysis was actually carried out using sets of resonators, in particular Helmholtz resonators, which are hollow spheres of different sizes, equipped with an extension that is inserted into the ear, and having an opening on the opposite side.

For sound analysis, it is essential that whenever the sound being analyzed contains a tone with the frequency of the resonator, the resonator begins to sound loudly at this tone.

Such methods of analysis are very inaccurate and laborious. Currently, they are being replaced by much more advanced, accurate and fast electro-acoustic methods. Their essence boils down to the fact that an acoustic vibration is first converted into an electrical vibration, maintaining the same shape, and therefore having the same spectrum; then the electrical vibration is analyzed using electrical methods.

One significant result of harmonic analysis can be pointed out regarding the sounds of our speech. We can recognize a person's voice by timbre. But how do sound vibrations differ when the same person sings different vowels on the same note: a, i, o, u, e? In other words, how do the periodic vibrations of air caused by the vocal apparatus differ in these cases with different positions of the lips and tongue and changes in the shape of the oral cavities and throat? Obviously, in the vowel spectra there must be some features characteristic of each vowel sound, in addition to those features that create the timbre of a given person's voice. Harmonic analysis of vowels confirms this assumption, namely, vowel sounds are characterized by the presence in their spectra of overtone areas with large amplitude, and these areas always lie at the same frequencies for each vowel, regardless of the height of the sung vowel sound. These regions of strong overtones are called formants. Each vowel has two formants characteristic of it.

Obviously, if we artificially reproduce the spectrum of a particular sound, in particular the spectrum of a vowel, our ear will receive the impression of that sound even though its natural source is absent. It is especially easy to carry out such synthesis of sounds (including vowels) with electroacoustic devices. Electric musical instruments make it very easy to change the sound spectrum, that is, to change the timbre. A simple switch makes the sound resemble a flute, a violin, or a human voice, or makes it completely distinctive, unlike the sound of any ordinary instrument.

Doppler effect in acoustics.

The frequency of sound heard by a stationary observer when a sound source approaches or recedes from him differs from the frequency perceived by an observer moving together with the source, or when both observer and source are at rest. The change in sound frequency (pitch) associated with the relative motion of source and observer is called the acoustic Doppler effect. When the source and receiver of sound approach each other, the pitch of the sound rises; when they recede from each other, the pitch falls. The reason is that when a sound source moves relative to the medium in which the sound waves propagate, the velocity of this motion adds vectorially to the velocity of sound propagation.

For example, if a car with a siren on is approaching, and then, having passed by, moves away, then a high-pitched sound is heard first, and then a low-pitched one.
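The size of the shift can be computed from the standard formula for a source moving along the line to a stationary observer, f_obs = f·c/(c − v) on approach and f_obs = f·c/(c + v) on recession. A sketch; the siren frequency and car speed are arbitrary values:

```python
# Acoustic Doppler shift for a moving source and a stationary observer.
def doppler_observed(f_source, v_source, c=340.0, approaching=True):
    """f_source: emitted frequency (Hz); v_source: source speed (m/s);
    c: speed of sound in air (m/s)."""
    return f_source * c / (c - v_source if approaching else c + v_source)

# A 440 Hz siren on a car moving at 20 m/s:
print(round(doppler_observed(440.0, 20.0, approaching=True), 1))   # 467.5 Hz
print(round(doppler_observed(440.0, 20.0, approaching=False), 1))  # 415.6 Hz
```

The roughly 50 Hz drop as the car passes is exactly the high-then-low pitch change described above.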

Sonic booms

Shock waves occur during a shot, an explosion, an electric discharge, and so on. The main feature of a shock wave is a sharp jump in pressure at the wave front. At the moment the shock wave passes, the pressure at a given point rises to its maximum almost instantly, in a time of the order of 10⁻¹⁰ s; the density and temperature of the medium change abruptly at the same time. The pressure then slowly falls. The strength of the shock wave depends on the force of the explosion. The speed of propagation of a shock wave can exceed the speed of sound in the medium. If, for example, a shock wave raises the pressure by a factor of one and a half, the temperature rises by 35 °C and the speed of propagation of the front of such a wave is approximately 400 m/s. Walls of medium thickness encountered in the path of such a shock wave will be destroyed.

Powerful explosions are accompanied by shock waves that, at the maximum phase of the wave front, create a pressure 10 times atmospheric. In this case the density of the medium increases 4-fold, the temperature rises by 500 °C, and the speed of propagation of the wave approaches 1 km/s. The thickness of the shock front is of the order of the mean free path of the molecules (10⁻⁷-10⁻⁸ m); in theoretical treatments the shock front can therefore be regarded as a discontinuity surface, on passing through which the gas parameters change abruptly.

Shock waves also arise when a solid body moves at a speed exceeding the speed of sound. A shock wave forms in front of an aircraft flying at supersonic speed and is the main factor determining the drag on the aircraft. To reduce this drag, supersonic aircraft are given swept, arrow-like shapes.

Rapid compression of the air in front of an object moving at high speed raises its temperature, and the faster the object, the greater the rise. When an aircraft reaches the speed of sound, the air temperature rises to 60 °C. At twice the speed of sound the temperature rises by 240 °C, and at nearly three times the speed of sound it reaches 800 °C. Speeds close to 10 km/s lead to the melting and vaporization of the moving body. Meteoroids falling at speeds of several tens of kilometers per second begin to heat up noticeably and glow already at altitudes of 150-200 km, even in the rarefied atmosphere; most of them disintegrate completely at altitudes of 100-60 km.

Noises.

The superposition of a large number of oscillations, randomly mixed with respect to one another and randomly changing in intensity over time, leads to vibrations of complex form. Such complex vibrations, consisting of a large number of simple sounds of different pitch, are called noise. Examples are the rustling of leaves in a forest, the roar of a waterfall, and the noise of a city street. Noises also include the sounds represented by consonants. Noises can differ in the distribution of sound intensity over frequency and in their duration in time. The noises of wind, falling water, and sea surf can be heard for a long time; the rumble of thunder and the roar of waves are relatively short-lived, low-frequency noises. Mechanical noise can be caused by the vibration of solid bodies. The sounds arising when bubbles and cavities collapse in a liquid, accompanying cavitation processes, produce cavitation noise.