The FFT (Fast Fourier Transform) is a classic algorithm in digital signal processing, familiar to anyone who has studied DSP or chip design. But have you ever wondered why FFT operations appear so often in digital signal processing? Some will answer: to analyze the signal spectrum. The question, then, is what spectrum analysis has to do with everyday needs such as mobile phone calls or radar measurement of speed and direction. What is the connection to real applications? Why is the FFT so important? This article offers some concise examples of how the FFT is actually used.
First, let's recall what the FFT is. Before the 1970s, signal processing was done with analog circuits. For example, a diode and a capacitor were commonly used to perform envelope detection of an AM-modulated signal. With the spread of digital systems, we can use a processor or digital logic to process signals more accurately. AM detection, for instance, can be done by mixing the signal with a carrier (multiplying by a cosine function) and then low-pass filtering; a digital multiplier and an FIR filter implement this directly. An FIR filter can have a far higher order than the low-pass filter formed by a diode and a capacitor, so its performance is naturally closer to ideal, and digital circuits are easy to build into integrated circuits. An analog-to-digital converter turns analog signals (such as microphone audio) into digital values for processing.

Such a system raises two issues: the signal must be sampled, and the signal must be quantized to a finite number of levels. Sampling means that what we obtain is not the original continuous signal but discrete samples. Sampling a time-domain signal inevitably makes the spectrum periodic; if the original spectrum is confined to a finite bandwidth, there is no aliasing distortion as long as the sampling rate exceeds twice that bandwidth. Digital circuits, however, only let us compute operations such as multiplication and addition on discrete points, so we must also "sample" the spectrum, and sampling in the frequency domain makes the time domain periodic.
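The digital AM detection chain just described (mix with a local carrier copy, then low-pass filter with an FIR) can be sketched in a few lines of numpy. All parameters here (sampling rate, carrier and message frequencies, filter order, cutoff) are illustrative assumptions, not values from the article:

```python
import numpy as np

fs = 48_000                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)    # 100 ms of samples
fc, fm = 10_000, 440              # carrier and message frequencies (assumed)

message = 0.5 * np.cos(2 * np.pi * fm * t)        # baseband "audio"
am = (1 + message) * np.cos(2 * np.pi * fc * t)   # AM-modulated signal

# Coherent detection: mix with a copy of the carrier...
mixed = am * np.cos(2 * np.pi * fc * t)

# ...then low-pass filter with a simple windowed-sinc FIR.
ntaps, cutoff = 101, 2_000        # filter order and cutoff in Hz (assumed)
n = np.arange(ntaps) - (ntaps - 1) / 2
h = np.sinc(2 * cutoff / fs * n) * np.hamming(ntaps)
h /= h.sum()                      # normalize for unity DC gain

demod = np.convolve(mixed, h, mode="same")
# demod ≈ 0.5 * (1 + message): a DC offset plus the recovered audio
```

The mixing step produces the baseband term plus an image at twice the carrier frequency; the FIR's high order (101 taps versus a single RC stage) is what gives the sharp rejection the article credits digital filters with.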
Fortunately, if we take only a finite length, we can assume that the uncollected part continues periodically (a stable system treats the signal as decomposable into a combination of sinusoids, and sinusoids extend periodically, so this assumption is workable). We thus obtain a time domain and a frequency domain that are both discrete, periodically extended point sets. Since the extension is periodic, everything outside the principal period (the period nearest 0) merely repeats, so we keep only the principal period. The transform from such a time-domain point set to a frequency-domain point set is called the Discrete Fourier Transform (DFT).

However, the DFT is computationally expensive, so Cooley and Tukey set out to simplify it and discovered inherent structure in the computation, reducing the workload from the original O(N²) to O(N log N); their algorithm is the decimation-in-time Fast Fourier Transform. A similar low-complexity algorithm based on decimation in frequency was obtained by Sande and Tukey. These algorithms are collectively called the Fast Fourier Transform (FFT). The FFT produces exactly the same result as the DFT, just with far less computation. Because time-frequency transformation preserves energy (Parseval's theorem), absolute values in the frequency domain matter less than relative ones, so quantization in the digital system and the scaling adjustments applied after overflow affect only the accuracy of the FFT result, not its correctness. The FFT therefore fits what a digital system can process, at modest computational cost, and so it came into wide use. Could an analog system perform something like an FFT? Yes: build an array of band-pass filters, one per frequency point, each retaining a sinc-like frequency response centered on its frequency; feed the signal into the filter bank and you obtain the FFT's result. Of course, that price is unaffordable for ordinary systems, so in the era before digital circuits the FFT was essentially a mathematical algorithm that could not be implemented.
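The claim that the FFT computes exactly what the DFT computes, only faster, is easy to check numerically. Here is a minimal sketch comparing a naive O(N²) DFT (written as a matrix-vector product) against numpy's Cooley-Tukey implementation:

```python
import numpy as np

def dft(x):
    """Naive DFT: O(N^2) complex multiplications via the full DFT matrix."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix W[k, n]
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

X_dft = dft(x)
X_fft = np.fft.fft(x)   # Cooley-Tukey, O(N log N)

print(np.allclose(X_dft, X_fft))  # → True: identical results, far less work
```

At N = 256 the direct matrix product already costs 65,536 complex multiplies against roughly 1,024 butterflies for the FFT; the gap widens rapidly with N.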
Now we know what the FFT is: a time-frequency discrete transform algorithm that can be computed digitally. The object of the computation is the principal period (a finite number of points) of a periodically extended time-domain point set; the result is the principal period of the periodically extended spectrum. The preconditions for equivalence to the Fourier transform are, first, that the sampling rate is at least twice the maximum frequency of the signal (so the spectral replicas do not overlap), and second, that the signal outside the finite time-domain window continues periodically to infinity. To satisfy the first condition, we usually add a low-pass filter before signal processing (even before analog-to-digital conversion) to suppress high-frequency components; for audio, or for communication systems confined to a given band, the high-frequency components are meaningless anyway, so this condition can be met. To satisfy the second condition, the signal outside the sampled interval would have to match the hypothetical periodic extension, which obviously cannot be guaranteed. The result is spectral leakage: spectral energy is dispersed out of band (a cosine, for example, is no longer a line but a sinc). The dispersion can be viewed as applying a rectangular window in the time domain (multiplying by a gate function), which makes the spectrum the convolution of the original spectrum with a sinc function. The shorter the time-domain window (the fewer the samples), the wider the sinc main lobe and the more severe the leakage: the energy of each original frequency point spreads over a wider neighborhood and its own peak drops.
If adjacent frequency points each carry a peak, they become hard to distinguish once spread out; the achievable frequency resolution is therefore inversely proportional to the length of the time-domain window, and collecting more points yields a finer spectrum. Is there a way to mitigate the leakage? Yes: give the samples near the boundary less weight and those in the middle more. In practice, a series of such weights forms a window function in the time domain. After windowing, spectral leakage is reduced and the energy is more concentrated, but the main lobe becomes wider, which is a trade-off. With this, the two preconditions are nearly met. Not completely, but close enough.
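A small numpy experiment (with illustrative parameters) shows the trade-off: an off-bin tone analyzed with a rectangular window leaks energy across the whole band, while a Hann window concentrates it near the peak at the cost of a wider main lobe:

```python
import numpy as np

fs, N = 1000, 256
t = np.arange(N) / fs
# A tone that does NOT land on an FFT bin, so leakage is clearly visible.
x = np.cos(2 * np.pi * 123.4 * t)

rect = np.abs(np.fft.rfft(x))                  # rectangular window (no weighting)
hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann-weighted

rect /= rect.max()                             # normalize peaks for a fair comparison
hann /= hann.max()

# Compare the energy more than 10 bins away from the peak.
peak = int(rect.argmax())
far = np.r_[0:peak - 10, peak + 10:len(rect)]
print(rect[far].sum() > hann[far].sum())       # the rectangular window leaks far more
```

The Hann window's sidelobes fall off much faster than the rectangular window's, at the price of a main lobe roughly twice as wide, exactly the resolution-versus-leakage trade-off described above.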
That is the basic background. Now for the interesting part. If the FFT were only used to analyze deterministic, stationary signals, such as an infinitely long sine wave or a combination of a few sine waves, just to look at the spectral lines, it would not hold the position it holds today. So what can it actually do?
1. Fast correlation
Correlation can fairly be called ubiquitous in digital signal processing. Simply put, if some parameter of a signal is unknown (a frequency, a phase, a code, or a waveform), you construct a set of candidate signals covering all possible values of that parameter and correlate each candidate with the received signal; the candidate yielding the largest result corresponds to the most likely parameter value. This is maximum likelihood detection, and correlation is usually its practical execution. In many cases the parameter to be measured is a time delay. For example, after a mobile phone powers on, it must synchronize with the base station, that is, it must learn the start time of each data frame. How? By protocol, the base station places a fixed sequence at a known position in the frame; after modulation this yields a fixed waveform. The phone generates several delayed copies of that waveform and correlates them with the received signal; the delay corresponding to the correlation peak gives the start time of the frame. So what does correlation have to do with the FFT? Correlation and convolution are expensive operations: computing the correlation value at a single delay requires multiplying and summing all the non-zero parts of the two waveforms, and the correlation values over all delays form a curve called the correlation function. Once the signals are transformed to the frequency domain, however, obtaining the correlation function reduces to multiplying one signal's spectrum by the complex conjugate of the other's (the imaginary part negated).
Even counting the overhead of the forward and inverse FFTs, the computation is far smaller than the direct approach (the difference between N² and N log N), so the complexity of correlation-based algorithms drops dramatically. What if the input signal is very long? Correlation can also be performed in segments, with the segment results combined to give the full correlation, so the phone's routine synchronization can be completed with fast correlation. Another example: how does a radar measure a target's range? The simplest idea is to emit an impulse and see when it comes back; the delay times the speed of light divided by 2 is the distance. But an impulse-like waveform is very hard for a power amplifier to produce, so a real radar transmits a wide-bandwidth signal of some duration, such as a chirp or some other shaped pulse. On reception we need to know how much the signal was delayed, so it is correlated with a locally generated copy of the transmitted waveform. The correlation yields a waveform with several peaks, whose positions are the echo delays of the targets, which convert to ranges. This pulse-compression process, squeezing the waveform's energy into a point, is generally implemented with the FFT.
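The synchronization idea above, locating a known embedded sequence by fast correlation, can be sketched as follows. The sequence, frame length, delay, and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.standard_normal(128)          # known sync sequence (stand-in)
delay = 300                             # unknown to the receiver in practice
rx = np.zeros(1024)
rx[delay:delay + len(ref)] = ref        # received frame: ref at an unknown offset
rx += 0.1 * rng.standard_normal(1024)   # additive channel noise

# Fast correlation: multiply one spectrum by the conjugate of the other.
n = 2048                                # >= len(rx) + len(ref), avoids circular wrap
R = np.fft.fft(rx, n) * np.conj(np.fft.fft(ref, n))
corr = np.fft.ifft(R).real

est = int(corr[:1024].argmax())         # peak position = estimated delay
print(est)  # → 300, the delay of the embedded sequence
```

The two FFTs and one IFFT replace 1024 separate multiply-accumulate passes of length 128, the N² versus N log N saving the text describes.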
2. Fast convolution
Similar to correlation, another major class of signal-processing operations is convolution. A system can be characterized by its system function: its output is the input convolved with the system's impulse response, and for stationary random signals the output power spectrum is the input power spectrum multiplied by the squared magnitude of the system function. Convolution is routinely used to filter signals with an FIR filter: because an FIR filter has no feedback, its output can be obtained directly by convolution. If the data arrives in blocks (rather than as a sample-per-clock stream), fast convolution can reduce the computational workload. The process is similar to fast correlation, except that the frequency-domain multiplication uses no conjugate. Consider an FM radio receiving several stations spaced very closely: a fairly high-order FIR band-pass filter can select the desired one. The Tecsun (Desheng) DSP-chip radios, for example, achieve a tuning resolution of 0.01 MHz; this is one reason digital FM radios outperform the older analog ones.
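A minimal sketch of block-based fast convolution, with illustrative signal and filter lengths: zero-pad both sequences to the full output length, multiply the spectra (no conjugate this time), and inverse-transform. The result matches direct convolution:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)   # one block of input data (length assumed)
h = rng.standard_normal(64)     # FIR filter taps (illustrative, not designed)

# Direct convolution: O(N*M) multiply-adds.
y_direct = np.convolve(x, h)

# Fast convolution: pad to the full linear-convolution length, multiply spectra.
n = len(x) + len(h) - 1
y_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(np.allclose(y_direct, y_fft))  # → True
```

The zero-padding to N + M − 1 points is what turns the FFT's inherently circular convolution into the linear convolution a filter actually performs; streaming variants (overlap-add, overlap-save) apply the same idea block by block.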
3. Classical Spectral Estimation
In real life, what we observe is never a superposition of fixed, infinitely long sinusoids; at the very least, noise, typically Gaussian white noise, is superimposed during transmission. The preconditions of Fourier-transform spectrum analysis therefore cannot be met, and a direct time-frequency transform has no practical meaning. Taking a step back: what we can analyze is often random, but stable in the statistical sense, its second-order characteristics (autocorrelation and the like) are constant, at least over some interval. Such signals are random, yet their autocorrelation and cross-correlation functions carry the signals' second-order statistical information. In a real system we can only estimate these quantities. A basic method for estimating a correlation function is to compute the sum of point-wise products between one signal and a delayed copy of the other, giving a function of the delay, and as noted above, this can be computed with the FFT.
In the analysis of such random signals there is an important theorem, the Wiener-Khinchin theorem, which shows that a signal's autocorrelation and its power spectrum form a Fourier transform pair. This bridge lets us do several things. One is to estimate the autocorrelation (computed with the fast correlation algorithm) and then FFT it to obtain the power spectrum. Careful readers will notice that the final FFT in this chain immediately follows an IFFT; after they cancel, only one FFT is actually needed instead of three. This power spectrum estimate is called the periodogram, and it is the most frequently used classical spectral estimator. Note that the true power spectrum corresponds to the ideal autocorrelation, not to the estimate computed from the received data by fast correlation: if the lags near the boundary are included, the correlation values are biased, because everything outside the window is assumed to be 0. If we keep only a few lags near the center, the estimate is comparatively accurate. Selecting the reliable lags from the autocorrelation estimate, optionally windowing them, and then Fourier-transforming is called the autocorrelation method of power spectrum estimation. Most interesting is that the ideas can be combined: divide the data into overlapping segments, window each segment (with a weighting function like the one described above), compute the periodogram of each, and average the results. This is the Welch method, one of the better classical spectral estimators. These methods owe their adoption largely to the FFT bridge.
Without it, they could not be applied in real systems. These classical spectral estimation algorithms (alongside frequency-sweeping approaches) are used in equipment such as spectrum analyzers. In addition, because the bin index of the power-spectrum peak indicates the signal frequency, if the signal is the echo reflected from an object, the object's velocity can be computed from the Doppler formula; the category of radar that adopts this speed-measurement mechanism is called pulse-Doppler radar.
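The Welch method is simple enough to sketch directly in numpy (scipy.signal.welch provides a production version). The segment length, overlap, and test tone below are illustrative assumptions:

```python
import numpy as np

def welch_psd(x, nperseg=256, overlap=0.5, fs=1.0):
    """Welch's method: average windowed periodograms of overlapping segments."""
    step = int(nperseg * (1 - overlap))
    win = np.hanning(nperseg)
    scale = fs * (win ** 2).sum()           # density normalization
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, step)]
    psds = [np.abs(np.fft.rfft(win * s)) ** 2 / scale for s in segs]
    return np.fft.rfftfreq(nperseg, 1 / fs), np.mean(psds, axis=0)

rng = np.random.default_rng(3)
fs = 1000
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * 200 * t) + rng.standard_normal(len(t))  # tone buried in noise

f, pxx = welch_psd(x, fs=fs)
print(f[pxx.argmax()])  # peak lands close to 200 Hz
```

Averaging the segment periodograms trades frequency resolution (shorter segments) for a much lower variance of the estimate, which is why the tone stands out cleanly above the noise floor.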
4. Modern Spectral Estimation
To achieve more accurate spectral estimation, some scholars proposed constructing a model with parameters to be determined: if the model is well chosen and the parameters are estimated effectively, the signal's spectral estimate can be read off from the model's spectrum, yielding more accurate results. Such model-based estimation is called modern spectral estimation. What counts as a good fit? The usual choice is to minimize the Euclidean distance, that is, the least-mean-square criterion. Under this criterion, with the added precondition of a linear system, the squared-error expectation conveniently reduces to terms involving the system's autocorrelation and cross-correlation (the Yule-Walker equations), and estimating those terms, as discussed, is only practical with fast correlation. This is why the FFT remains useful even after theories such as modern spectral estimation and the AR and MA models were adopted; at least computationally, the autocorrelation method is the simplest way to estimate AR parameters. Once the model is estimated, we can do other things with it: extrapolating the signal's future behavior from the model is called signal prediction, and the model can also be used to smooth collected signals. Another example: how do we measure an object's bearing? A modern radar usually has an antenna array. If the object's echo arrives at an angle to the array, different antennas receive it at different times (different phases, essentially unchanged amplitude), so the echo signals from several antennas can be used to estimate the azimuth angles of several targets; this is direction-of-arrival (DOA) estimation.
Two classic algorithms exist for this: MUSIC and ESPRIT. Both are second-order-statistics algorithms that take the autocorrelation matrix as the starting point of the computation, so the FFT can serve as a fast algorithm for the autocorrelation estimation.
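As a small illustration of the chain described above, here is a sketch that estimates AR(2) parameters from the Yule-Walker equations, with the autocorrelation computed by the FFT-based fast-correlation trick. The process coefficients and data lengths are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(4)

# Generate an AR(2) process: x[n] = a1*x[n-1] + a2*x[n-2] + w[n]
a_true = np.array([1.5, -0.8])     # illustrative, stable coefficients
w = rng.standard_normal(50_000)
x = np.zeros_like(w)
for n in range(2, len(x)):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + w[n]

# Autocorrelation lags 0..2 via the FFT (zero-padded, so circular == linear).
nfft = 2 ** 17                     # >= 2*len(x) - 1
r = np.fft.ifft(np.abs(np.fft.fft(x, nfft)) ** 2).real[:3] / len(x)

# Yule-Walker equations: R @ a = [r1, r2], with R the autocorrelation matrix.
R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
a_est = np.linalg.solve(R, r[1:3])
print(a_est)  # ≈ [1.5, -0.8]
```

The same autocorrelation matrix, built for an antenna array's snapshots instead of a time series, is the input that MUSIC and ESPRIT decompose.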
5. Construct an Orthogonal System
What is the relationship between the discrete spectrum produced by an FFT and the underlying continuous spectrum at those discrete points? The continuous spectrum can be viewed as the superposition of a family of orthogonal sinc functions; equivalently, as the convolution of a sinc function, whose main-lobe width equals the bin spacing, with an impulse train (because the time domain is the product of a rectangular window and the original signal). "Orthogonal" here means that the value at one discrete point has no dependence on, and no influence over, the value at another. Such a system can be used to build a communication transceiver, because sinc-function orthogonality gives the best spectral efficiency (band-limiting each sub-band more aggressively would suppress its out-of-band skirts, but would spread the time-domain signal and cause inter-symbol interference). Orthogonal Frequency Division Multiplexing (OFDM) builds a high-performance, low-complexity transceiver on this orthogonality between discrete frequency points. The principle: the data to be transmitted are first placed on the frequency points, typically with QAM or PSK modulation, and an IFFT then produces the time-domain signal to be sent; the transmitted waveform formed from these frequency points is in fact a set of orthogonal sinusoids at multiples of a base frequency. On reception, the receiver performs an FFT, converting back to the frequency (orthogonal) domain to recover the symbol carried on each frequency point.
This system cleverly overcomes the multipath fading of wideband wireless channels. The channel is not constant across the whole band: some frequency points are enhanced, some attenuated, and there is phase distortion as well. But after dividing the band into many orthogonal sub-channels, each sub-channel can be regarded as flat. Then, through an equalizer, zero-forcing equalization for example, we obtain each sub-channel's near-impulse response and can recover the symbol carried on it. Because the sub-channels are orthogonal and do not interfere with one another, multi-carrier communication transmits all the data in parallel, greatly increasing throughput; this is one of the core technologies that push 4G mobile communication to hundreds of megabits per second and beyond.
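A toy end-to-end OFDM round trip can be sketched as below, with invented parameters and an idealized noise-free channel modeled as an independent complex gain per subcarrier (that is, assuming a cyclic prefix has already absorbed the multipath):

```python
import numpy as np

rng = np.random.default_rng(5)
n_sub = 64                                    # number of subcarriers (assumed)

# Map random bits to QPSK symbols, one symbol per subcarrier.
bits = rng.integers(0, 2, size=(n_sub, 2))
symbols = (2 * bits[:, 0] - 1 + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Transmitter: IFFT turns frequency-domain symbols into a time-domain waveform.
tx = np.fft.ifft(symbols)

# Channel: each subcarrier gets its own complex gain (frequency-selective fading,
# but flat within each sub-channel, as the text describes).
h = rng.standard_normal(n_sub) + 1j * rng.standard_normal(n_sub)
rx = np.fft.ifft(np.fft.fft(tx) * h)

# Receiver: FFT back to the frequency domain, then a one-tap zero-forcing equalizer.
eq = np.fft.fft(rx) / h
print(np.allclose(eq, symbols))  # → True: every symbol recovered per subcarrier
```

The entire equalizer is a single complex division per subcarrier, which is exactly the low-complexity payoff of making the sub-channels orthogonal and flat.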
Today, FFT performance has become one of the standard benchmarks for processors (http://www.fftw.org/benchfft/ffts.html). For example, a high-performance DSP of the TI C6000 series can output a data point every half clock cycle, giving a processing rate of several hundred megasamples per second. Hardware optimization has made the FFT an internal highway of signal-processing algorithms.
The significance of the FFT goes far beyond this, and it has many variants, such as the two-dimensional FFT and the DCT, which play major roles in image blurring (and sharpening), edge detection, compression, and more. This article is only an introduction, intended to help colleagues devoted to researching and developing digital systems better understand the principles and algorithms behind them. Everything above comes from basic signal-processing textbooks, much of it covered in master's-level courses; organizing it this way is a little new, and I hope it proves useful.