## Chapter 0: Preamble on the Fourier Transform

---

*If you already understand the Fourier Transform, and how it applies to biomolecular X-ray crystallography, skip ahead to the later chapters on **Diffuse Scattering**. For an in-depth review of the mathematics behind the Fourier Transform, head to the next chapter.*

---

I want you to imagine that you and I are trying to build an algorithm — a piece of code that will take in a recording of an instrument playing a song, and output a list of the notes or chords being played.

We’d want to start with something simple to test with, so we’d sit down at the guitar, play a C-major chord, record it, and view the sound wave: the amplitude of the pressure wave hitting the microphone over time (Fig. 1).

The first thing we’d notice is that the signal is noisy, so we’d try to smooth it out. There are plenty of ways to do this, but let’s say we fit a simple moving average (plotted in red, Fig. 2): a curve that is continuous and approximates our signal well. What do we do with it? How do we get from this fitted signal to the notes/chords?
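To make the smoothing step concrete, here is a minimal sketch of a moving average applied to a made-up noisy signal (a single sine wave plus random noise, standing in for the recording; the window width is a tuning knob, not a value from the text):

```python
import numpy as np

# Stand-in for the noisy recording: one pure tone plus random noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)

# Simple moving average: slide a flat window over the signal and
# replace each point with the mean of its neighborhood.
window = 21  # samples; wider means smoother but blurrier
kernel = np.ones(window) / window
smoothed = np.convolve(signal, kernel, mode="same")
```

Averaging over a window suppresses the fast random wiggles while leaving the slower periodic structure largely intact, which is exactly the trade-off the red curve in Fig. 2 is making.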

Well, the first thing we’d take advantage of is that the signal appears to be periodic: it repeats itself. So we can focus our attention on just one of the repeating parts (let’s call this the “unit cell”, for reasons that will make more sense later), and by solving the problem for one unit cell, we’ll solve the problem for the whole thing.

Also thanks to the signal’s periodicity, we can take advantage of an incredible piece of machinery: the Fourier Transform.

### The Fourier Transform

The Fourier Transform is a mathematical operator that takes in any (sufficiently well-behaved) periodic function and outputs another function: the amplitude and phase of the sine and cosine waves that sum to the input function, indexed by frequency.
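For the mathematically inclined, this decomposition can be written down compactly. Using one common sign and normalization convention, for a signal $f(t)$ with period $T$:

```latex
f(t) = \sum_{n=-\infty}^{\infty} \hat{A}_n \, e^{2\pi i n t / T},
\qquad
\hat{A}_n = \frac{1}{T} \int_{0}^{T} f(t)\, e^{-2\pi i n t / T}\, dt
```

Each coefficient $\hat{A}_n$ belongs to the frequency $n/T$; the complex exponentials bundle the sines and cosines together, which is why a single (complex) number per frequency carries both amplitude and phase.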

In our case, this Transform provides a map from the “time domain” to the “frequency domain”. Frequency, as it relates to our algorithm, corresponds to the “notes” that make up our chord.

*(Aside: here, the “hat” over the amplitude function after the Transform reminds us that Fourier coefficients are, in general, complex numbers that account for both the amplitude and phase of the wave fed into the Transform. For our purposes, amplitude corresponds to how loudly each note is played; phase is the amount of time we need to offset each note by so that they sum up to the right wave.)*

We played a C-major chord, so the notes that make it up would be a C, an E and a G (Figure 3). The amplitude of each note corresponds to how much each individual note contributes to the chord as a whole (if we strummed the C string more forcefully than the E and G strings, its amplitude would be larger than the other two). We didn’t strum any notes more forcefully than the others, so their amplitudes would be the same. Notice that the highest peaks in the signal correspond to points in time where the peaks of each note roughly line up; the lowest troughs of the signal correspond to points in time where the troughs of each note line up.
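As a sketch of this idealized picture (not a physical guitar model), we can build the chord ourselves from three pure sine waves at the standard equal-temperament pitches of C4, E4, and G4, all with equal amplitude since no string was strummed harder than the others:

```python
import numpy as np

# Idealized C-major chord: three equal-amplitude sine waves.
rate = 44100                     # audio samples per second
t = np.arange(0, 0.5, 1 / rate)  # half a second of "recording"
freqs = {"C4": 261.63, "E4": 329.63, "G4": 392.00}  # Hz
chord = sum(np.sin(2 * np.pi * f * t) for f in freqs.values())
```

The highest peaks of `chord` occur where all three sines happen to crest together, and the deepest troughs where they all dip together, matching the description above.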

The Fourier Transform automatically tells us, with only the signal as input, the amplitude and phase with which each note must be played so that the notes sum to our signal. The plot of the output from the Fourier Transform would look something like Figure 4:

If we were to run our Fourier Transform on only the fitted signal, we would get the three solid black bars as output: the amplitudes of the three notes that make up our wave. (We could plot the phases as well, but they are less meaningful to visualize.)
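This “notes out of a signal” step can be sketched in a few lines using the fast Fourier transform (the idealized chord is rebuilt here so the snippet stands alone; the one-second duration is chosen so the frequency bins land 1 Hz apart):

```python
import numpy as np

# Rebuild the idealized C-major chord: three equal-amplitude sines.
rate = 44100
t = np.arange(0, 1.0, 1 / rate)        # one second => 1 Hz bin spacing
freqs_in = [261.63, 329.63, 392.00]    # C4, E4, G4 in Hz
chord = sum(np.sin(2 * np.pi * f * t) for f in freqs_in)

# Amplitude spectrum (the solid black bars); phases are discarded here.
spectrum = np.abs(np.fft.rfft(chord))
freq_axis = np.fft.rfftfreq(t.size, d=1 / rate)

# The three tallest bars sit at the pitches of the chord.
top3 = sorted(np.round(freq_axis[np.argsort(spectrum)[-3:]]).astype(int).tolist())
print(top3)  # prints [262, 330, 392]: C4, E4, G4 to the nearest hertz
```

No prior knowledge of the chord went into the last three lines; the spectrum alone recovers the notes.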

However, if we were to run our *un*fitted signal through the Fourier Transform, more frequencies would appear: harmonics from the strings would show up higher in the frequency spectrum with lower amplitude; imperfections in the strings and the plucking of the fingers would deform the shape of the sound wave away from the purely sinusoidal; and the body of the guitar would reflect the waves, causing them to interfere. These imperfections would show up in the frequency-space profile as a “diffuse” pattern (gray in the figure).
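A toy sketch of this effect (again, not a physical model of a guitar): take one idealized note, add a weak harmonic and some broadband noise, and compare the two spectra. The sharp peak survives, but a second peak and a nonzero “floor” of intensity between the peaks appear:

```python
import numpy as np

rng = np.random.default_rng(1)
rate = 44100
t = np.arange(0, 1.0, 1 / rate)  # one second => 1 Hz bin spacing

clean = np.sin(2 * np.pi * 392.0 * t)              # idealized G4
real = (clean
        + 0.2 * np.sin(2 * np.pi * 784.0 * t)      # weak first harmonic
        + 0.05 * rng.standard_normal(t.size))      # broadband "diffuse" noise

S_clean = np.abs(np.fft.rfft(clean))  # a single sharp peak at 392 Hz
S_real = np.abs(np.fft.rfft(real))    # peak + harmonic + diffuse floor
```

The dominant peak in `S_real` is still at 392 Hz, the harmonic shows up at 784 Hz with lower amplitude, and the noise lifts the whole spectrum off zero: the gray “diffuse” pattern in the figure.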

### The “peaks” in the Fourier Transform tell us about the notes being played: the general structure of the sound wave.

### The “diffuse” information from the Fourier Transform *could*, for example, be used to determine the instrument playing the notes. (We might imagine every instrument playing the same pitch/chord to have the same “peaks”, but a different diffuse pattern in frequency space.)

We already knew what the notes would be, because we played a specific chord to test. But the magic of the Fourier Transform is that it can do this mapping for *any periodic signal* we give it.

So it appears the only algorithm we need was invented over 200 years ago. In fact, many programs already exist to do this. An autotune program works almost exactly like our example: “smoothing” (fitting) the signal and mapping it to frequency space so that it can output pitch-corrected vocals (with a sliding scale of how much of your unique “diffuse” information you want to keep; T-Pain in the early 2000s didn’t like very much diffuse information, Bon Iver likes a little more). Music-“listening” apps like SoundHound, Shazam, and Siri do their comparisons in frequency space, not in the raw time-domain signal.

**We’ll get into how any of this relates to my research in a later chapter.**

**Looking ahead, Chapter 1 introduces the mathematics behind this machinery more rigorously, and Chapter 2 illustrates its use in X-ray Crystallography.**