1. Overview

Fourier analysis has captured the attention of most researchers over the last two centuries. One could argue that the Fourier Transform shows up in more applications than Joseph Fourier himself could have imagined!

In this tutorial, we explain the internals of the Fourier Transform algorithm and its rapid computation using Fast Fourier Transform (FFT):

[Figure: the Fourier Transform maps between the time domain and the frequency domain]

We discuss the intuition behind both and present two real-world use cases that show their importance.

2. Introduction

Expanding a function into a sum of simpler constituent functions has led scientists to a deeper understanding of numerous fields, e.g., overtones, wireless frequencies, harmonics, beats, and band filters. When Fourier first introduced his analysis and the possibility of expressing a periodic function (e.g., tan, cos, sin) as a sum of periodic functions, he ran into multiple criticisms.

For example, Laplace (who was “the” physicist of his time) could not believe that a sine series could express cos(x). In 1811, Fourier resubmitted his work with an important addition, the Fourier Transform, which showed that not only periodic functions but also non-periodic ones can be expanded.

2.1. Computation of Fourier Transform

The study of linear systems and signal processing recognized the importance of the Fourier Transform early on. However, this importance was not reflected in numerical computation because of the large number of arithmetic operations needed to calculate the discrete version of the Fourier Transform. At least, that was the case until James Cooley and John Tukey introduced the Fast Fourier Transform algorithm in 1965.

3. Discrete Fourier Transform (DFT)

The Discrete Fourier Transform of n points is given by:

    \[A_{i} = \sum_{j=0}^{n-1} \omega_{n}^{ij}a_j, \qquad i = 0, 1, \dots, n-1,\]

Where

    \[\begin{split} \omega_n & = e^{2\pi i/n} \qquad\text{the } n^{th} \text{ root of 1, referred to as ``De Moivre's number'' or the ``root of unity''} \\ & = \cos\left(\frac{2\pi}{n}\right) + i\sin\left(\frac{2\pi}{n}\right) \end{split}\]

3.1. De Moivre’s Formula

\omega_n and its powers are the building blocks of the Fourier Transform. Although \omega_n has two equivalent forms, the exponential one (e^{2\pi i/n}) serves us better in understanding its behavior.

Let n = 8; \omega_8 will be the first root.

The figure below shows the eight different roots, where \omega_8 is the first blue line, making a 45-degree angle (e^{\frac{i\pi}{4}}); each of the following blue lines (where the annotations are) is the i^{th} power \omega_8^{i} for i = 2, \dots, 7:

[Figure: the eight De Moivre roots \omega_8^i on the unit circle]
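
As a quick sanity check, a few lines of Python (using only the standard cmath module) generate the same eight roots and confirm that every power of \omega_8 stays on the unit circle at a multiple of 45 degrees; this is only an illustration of the figure above:

```python
import cmath

n = 8
omega = cmath.exp(2j * cmath.pi / n)  # the first root, e^{2*pi*i/8}

for i in range(n):
    root = omega ** i
    angle = cmath.phase(root) * 180 / cmath.pi
    # every power has magnitude 1; its angle is a multiple of 45 degrees
    print(f"w_8^{i} = {root.real:+.3f}{root.imag:+.3f}j, "
          f"|w_8^{i}| = {abs(root):.3f}, angle = {angle:.1f} deg")
```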

We can think of the multiplication between the different powers of \omega_n and the input sequence as asking each \omega_n^j to capture the frequency it senses in the current input sequence. When the sum is infinite (i.e., with ever more \omega's sensing the frequencies), it converges to the exact input sequence.


Each computation of A_{i} requires n (complex) multiplications and n-1 (complex) additions. Hence, the total computation takes n^2 (complex) multiplications and n^2-n (complex) additions, for a complexity of O(n^2).
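
To make the operation count concrete, here is a direct implementation of the sum above in plain Python. It is a minimal sketch that follows this article's convention \omega_n = e^{2\pi i/n} (note that many libraries, e.g., NumPy, use the opposite sign in the exponent):

```python
import cmath

def naive_dft(a):
    """Direct O(n^2) evaluation of A_i = sum_j w_n^(i*j) * a_j, with w_n = e^{2*pi*i/n}."""
    n = len(a)
    omega = cmath.exp(2j * cmath.pi / n)
    # each output coefficient costs n multiplications and n - 1 additions
    return [sum(omega ** (i * j) * a[j] for j in range(n)) for i in range(n)]

print(naive_dft([1, 2, 3, 4]))
```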

4. Fast Fourier Transform

The observation made by Cooley and Tukey was that if n = n_1 \cdot n_2 is a composite number, a reduction in complexity is possible. In particular, when we square \omega_n, we get \omega_{n/2}.

For example, (\omega_8)^2 is \omega_4. The figure below shows \omega_4^{i} for i = 0, \dots, 3:

[Figure: the four roots \omega_4^i on the unit circle]

4.1. DFT Vector Notation

To start our optimization attempt, it is easier to work with and understand the vector notation of the DFT:

    \[\underset{n\times 1}{\mathrm{A}} = \underset{n \times n}{F(n)} \times \underset{n\times 1}{a}\]

Where

    \[F(n) = \begin{bmatrix} \omega^{00} & \omega^{01} & \dots & \omega^{0(n-1)} \\ \omega^{10} & \omega^{11} & \dots & \omega^{1(n-1)}\\ \vdots & \vdots & \ddots &  \vdots \\ \omega^{(n-1)0} & \omega^{(n-1)1} & \dots & \omega^{(n-1)^2} \end{bmatrix} = \begin{bmatrix} 1 & 1 & \dots & 1 \\ 1 & \omega^{11} & \dots & \omega^{1(n-1)}\\ \vdots & \vdots & \ddots &  \vdots \\ 1 & \omega^{(n-1)1} & \dots & \omega^{(n-1)^2} \end{bmatrix}\]

Now, F(n) is a mesmerizing matrix: we construct it, multiply it by a sequence, and get the Fourier Transform of that sequence. But we want to multiply F by a as quickly as possible.
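
Before optimizing, it may help to see the unoptimized matrix at work. The NumPy sketch below builds F(n) from the powers of \omega and applies it to a sequence; since NumPy's forward np.fft.fft uses the negative exponent e^{-2\pi i/n}, the product should instead match n \cdot np.fft.ifft(a) under this article's convention:

```python
import numpy as np

def fourier_matrix(n):
    """Build F(n) with entries w^(i*j), where w = e^{2*pi*i/n}."""
    omega = np.exp(2j * np.pi / n)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return omega ** (i * j)

n = 8
a = np.random.rand(n)
A = fourier_matrix(n) @ a   # n^2 multiplications, no zeros to exploit

# n * ifft uses the same positive-exponent convention as F(n) above
assert np.allclose(A, n * np.fft.ifft(a))
```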

Normally, such a multiplication would take n^2 operations. An optimization would be possible if the matrix contained zeros, but the Fourier matrix has no zeros!

However, it is possible to make use of the special pattern of the entries \omega^{ij} we previously learned so that F can be factored in a way that produces many zeros (this factorization is the FFT). The key idea is to connect F(n) to F(n/2).

For instance, let n=4; we would like to find the connection between

    \[F(4) = \begin{bmatrix} 1 & 1 & 1 & 1  \\ 1 & \omega & \omega^2 & \omega^3 \\ 1 & \omega^2 & \omega^4 & \omega^6 \\ 1 & \omega^3 & \omega^6 & \omega^9 \\ \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1  \\ 1 & i & i^2 & i^3 \\ 1 & i^2 & i^4 & i^6 \\ 1 & i^3 & i^6 & i^9 \\ \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} F(2) & \textbf{0} \\ \textbf{0} & F(2) \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & i^2 & 0 & 0\\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & i^2 \end{bmatrix}\]

4.2. Complexity Reduction

So, we are somehow trying to reduce the number of operations by half, but these two matrices are not equal yet. How about adding a permutation matrix that splits the input sequence into even and odd coordinates?

    \[\begin{bmatrix} F(2) & \textbf{0} \\ \textbf{0} & F(2) \end{bmatrix} \begin{bmatrix} \text{even-odd} \\ \text{permutation} \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & i^2 & 0 & 0\\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & i^2 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 &1& 0\\ 0 & 1 &0 & 0\\ 0 & 0& 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 &1& 0 \\ 1 & 0 & i^2& 0 \\ 0 & 1 & 0 & 1\\ 0& 1 & 0& i^2 \\ \end{bmatrix}\]

At this point, we have succeeded in producing a lot of zeros, but the factorized version still seems far from the original F(4). However, we are pretty close, if only we can find a “fix-up matrix” to correct the previous output.

4.3. Introducing a “Fix-up Matrix”

Let’s consider the following matrix:

    \[\begin{bmatrix} \text{fix up} \\ \text{matrix} \end{bmatrix} \begin{bmatrix} F(2) & \textbf{0} \\ \textbf{0} & F(2) \end{bmatrix} \begin{bmatrix} \text{even-odd} \\ \text{permutation} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & i\\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -i \end{bmatrix} \begin{bmatrix} 1 & 1 & 0 & 0 \\ 1 & i^2 & 0 & 0\\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & i^2 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & 0\\ 0&0 &1& 0\\ 0 & 1 & 0& 0 \\ 0&0&0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1  \\ 1 & i & i^2 & i^3 \\ 1 & i^2 & i^4 & i^6 \\ 1 & i^3 & i^6 & i^9 \\ \end{bmatrix}\]

The new “fix-up matrix” reassembles the outputs of the half-size F(n/2) matrices. The general form of that matrix is:

    \[\begin{bmatrix} I_{n/2} & D_{n/2} \\ I_{n/2} & -D_{n/2} \end{bmatrix} \qquad \text{and in our case (}n = 4\text{) is} \qquad \begin{bmatrix} I_2 & D_2 \\ I_2 & -D_2 \end{bmatrix}\]

Where D_{n/2} is a diagonal matrix with entries (1, \omega, \dots, \omega^{n/2-1}), with \omega = \omega_n, and I_{n/2} is the identity matrix:

    \[D_{n/2} = \begin{bmatrix} 1 & 0 & 0 & \dots &  0  \\ 0 & \omega & 0 & \dots & 0 \\ 0 & 0 & \omega^2 & \dots & \vdots \\ \vdots & \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \dots & 0 & \omega^{n/2-1} \end{bmatrix} \qquad \text{and} \qquad I_{n/2} = \begin{bmatrix} 1 & 0 & 0 & \dots &  0  \\ 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & \vdots \\ \vdots & \vdots & \vdots & \ddots & 0 \\ 0 & 0 & \dots & 0 & 1 \end{bmatrix}\]

The full form would be:

    \[F(n) = \begin{bmatrix} I_{n/2} & D_{n/2} \\ I_{n/2} & -D_{n/2} \end{bmatrix} \begin{bmatrix} F(n/2) & \textbf{0} \\ \textbf{0} & F(n/2) \end{bmatrix} \begin{bmatrix} \text{even-odd} \\ \text{permutation} \end{bmatrix}\]
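
We can verify this factorization numerically for n = 4. The sketch below rebuilds F(4) from the three factors (the fix-up matrix, the two copies of F(2), and the even-odd permutation), again assuming the positive-exponent convention used throughout:

```python
import numpy as np

def F(n):
    """The Fourier matrix with entries w^(i*j), w = e^{2*pi*i/n}."""
    w = np.exp(2j * np.pi / n)
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return w ** (i * j)

n = 4
w = np.exp(2j * np.pi / n)

I = np.eye(n // 2)
D = np.diag(w ** np.arange(n // 2))        # diag(1, w, ..., w^{n/2 - 1})
fix_up = np.block([[I, D], [I, -D]])

zeros = np.zeros((n // 2, n // 2))
half = np.block([[F(n // 2), zeros], [zeros, F(n // 2)]])

# even coordinates first, then odd ones
P = np.zeros((n, n))
P[np.arange(n), np.r_[np.arange(0, n, 2), np.arange(1, n, 2)]] = 1

assert np.allclose(F(n), fix_up @ half @ P)
```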

What comes next? We reduced F(n) to F(n/2). We keep going: F(n/4), \dots, F(1). That's recursion! There are l levels, going from n=2^l down to n=1. Each level needs n/2 multiplications by the diagonal D's to reassemble the half-size outputs from the level below. This yields a final count of \frac{1}{2}\cdot n \cdot l, which is \frac{1}{2}n\log_2(n).

Note: In case n is not a power of 2, we can pad the input sequence with \textbf{0}s.

For F(1024), the direct computation takes 1024^2 = 1024\cdot1024 multiplications, whereas the FFT performs only \frac{1}{2}\cdot 1024 \cdot \log_2(2^{10}) = 1024\cdot5 multiplications.
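
Translated into code, the recursion reads almost directly off the factorization: split the input into even and odd coordinates, transform each half, and reassemble with the diagonal entries of D_{n/2}. The sketch below is minimal and unoptimized, assumes n is a power of 2, and keeps the positive-exponent convention used in this article:

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT following F(n) = [I D; I -D] diag(F(n/2), F(n/2)) P."""
    n = len(a)
    if n == 1:                       # F(1) is just the identity
        return list(a)
    even = fft(a[0::2])              # F(n/2) applied to the even coordinates
    odd = fft(a[1::2])               # F(n/2) applied to the odd coordinates
    A = [0] * n
    for k in range(n // 2):
        t = cmath.exp(2j * cmath.pi * k / n) * odd[k]   # D_{n/2} entry w_n^k times the odd part
        A[k] = even[k] + t           # top half:    I * even + D * odd
        A[k + n // 2] = even[k] - t  # bottom half: I * even - D * odd
    return A

print(fft([1, 2, 3, 4]))             # same result as the direct O(n^2) sum
```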

5. Applications

The Fourier Transform has too many applications to list. We'll shed some light on acoustics and the Convolution Theorem.

5.1. Acoustics

How can Shazam recognize a song in a matter of seconds? More surprisingly, it does so regardless of whether it is listening to an intro, a verse, or a chorus:

[Figure: Shazam recognizing a song]

There are (roughly) four steps it performs (a toy sketch of steps 2 and 3 follows the list):

  • Captures the song using a mobile phone (i.e., analog-to-digital wave conversion)
  • Transforms the captured function (or wave) into the frequency domain using FFT
  • Fingerprints the expanded function
  • Compares the fingerprint against a database of song fingerprints
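
Shazam's actual fingerprinting is far more elaborate (roughly, it hashes constellations of spectrogram peaks), but a toy version of steps 2 and 3 can be sketched in a few lines: slice the signal into windows, take the FFT of each window, and keep the dominant frequencies as a crude fingerprint. The window size, peak count, and sample rate below are arbitrary choices for illustration only:

```python
import numpy as np

def toy_fingerprint(signal, window=1024, peaks_per_window=3, sample_rate=44100):
    """Toy audio fingerprint: the dominant FFT frequencies of each time window."""
    freqs = np.fft.rfftfreq(window, d=1 / sample_rate)
    fingerprint = []
    for start in range(0, len(signal) - window + 1, window):
        spectrum = np.abs(np.fft.rfft(signal[start:start + window]))  # step 2: FFT
        top_bins = np.argsort(spectrum)[-peaks_per_window:]           # strongest peaks
        fingerprint.append(tuple(np.sort(freqs[top_bins]).round(1)))  # step 3: compact summary
    return fingerprint

# a pure 440 Hz tone shows up as a dominant frequency in every window
t = np.arange(0, 1, 1 / 44100)
print(toy_fingerprint(np.sin(2 * np.pi * 440 * t))[:3])
```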

5.2. Convolution Theorem

The theorem states that circular convolutions (i.e., periodic convolutions between two periodic functions having the same period) in the spatial domain are equivalent to point-wise products in the Fourier domain.

Letting \mathcal{F} denote the Fourier Transform and \mathcal{F}^{-1} its inverse, we can compute the convolution of functions f and g as follows:

    \[f * g  = \mathcal{F}^{-1}\{\mathcal{F}\{f\} \cdot \mathcal{F}\{g\}\}\]

In the original (spatial) domain, the convolution of f and g is calculated as:

    \[f * g = \int_{-\infty}^{\infty} f(\tau) g(t - \tau) d\tau\]
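
For a concrete, discrete illustration of the theorem, the sketch below computes a circular convolution of two sequences both directly from the definition and via NumPy's FFT routines, and checks that the results agree:

```python
import numpy as np

def circular_convolution_direct(f, g):
    """Direct O(n^2) circular convolution: (f*g)[t] = sum_tau f[tau] * g[(t - tau) mod n]."""
    n = len(f)
    return np.array([sum(f[tau] * g[(t - tau) % n] for tau in range(n)) for t in range(n)])

def circular_convolution_fft(f, g):
    """Convolution theorem: transform, multiply point-wise, transform back."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

f, g = np.random.rand(64), np.random.rand(64)
assert np.allclose(circular_convolution_direct(f, g), circular_convolution_fft(f, g))
```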

Convolving an n \times n input with a k \times k kernel requires (n - k + 1)^2k^2 operations using the direct method, so the complexity is O(n^2k^2). On the other hand, an FFT-based method needs 6Cn^2 \log n + 4n^2 operations, with complexity O(n^2\log{}n):

[Figure: complexity comparison of direct vs. FFT-based convolution]

The convolution operation is used in a wide variety of applications, such as image filtering, audio processing, and convolutional neural networks.

6. Conclusion

In this tutorial, we provided a short mathematical analysis of the Fast Fourier Transform algorithm and how it can, surprisingly, impact many fields and applications. The algorithm's breakthrough came only a few decades ago, even though Joseph Fourier introduced his analysis almost two centuries back.
