# 5.7. Summing sinusoids

In the previous sections, we’ve made use of the fact that the samples of a wave at an analysis frequency must sum to zero. However, we never actually proved this from first principles.

In fact, we can say something a little stronger, and exactly characterize the sum of samples for a wave at *any* frequency.

Computing this sum requires adding up wave samples, which can be done directly, though it can be tedious. As we’ll see, this is a case where the complex exponential form is more convenient to work with.
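Before developing the closed form, the direct (sample-by-sample) computation can be sketched as follows. This is only an illustration: the sampling rate `fs`, sample count `N`, and frequency `f0` are assumed values, chosen so that the wave completes a whole number of cycles.

```python
import numpy as np

# Direct term-by-term summation of wave samples.
# fs, N, and f0 are assumed values for illustration.
fs = 8000            # sampling rate [Hz]
N = 100              # number of samples
f0 = 2 * fs / N      # an analysis frequency: 2 full cycles in N samples

n = np.arange(N)
total = np.cos(2 * np.pi * f0 * n / fs).sum()

print(np.isclose(total, 0))  # True
```

This works, but it gives no insight into *why* the sum vanishes; the complex exponential form below does.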

## 5.7.1. Aside: geometric series

Recall that Euler's formula (4.1) converts between rectangular and polar coordinates:

\[
e^{\mathrm{j}\cdot\theta} = \cos(\theta) + \mathrm{j}\cdot\sin(\theta).
\]

Since this holds for any angle \(\theta\), it must also hold for \(n\cdot \theta\) as we vary \(n\):

\[
e^{\mathrm{j}\cdot n\cdot\theta} = \cos(n\cdot\theta) + \mathrm{j}\cdot\sin(n\cdot\theta),
\]

and by the product rule for exponents, we can re-write the left-hand side as follows:

\[
e^{\mathrm{j}\cdot n\cdot\theta} = \left(e^{\mathrm{j}\cdot\theta}\right)^n.
\]

This will allow us to turn a summation of wave samples into a summation of the form \(z^n\), which is also known as a *geometric series*.
Geometric series have many nice properties, but for now, the one we’ll need is a formula for calculating the sum of the first \(N\) terms.

**Lemma 5.1** (Summing finite geometric series)

Let \(z\) be a complex number with \(z \neq 0\) and \(z \neq 1\), and let \(N > 0\) be an integer. Then

\[
\sum_{n=0}^{N-1} z^n = \frac{1 - z^N}{1 - z}.
\]

In plain language, this lemma says that summing up increasing powers of a number \(z\) (even a complex number \(z\)) can be equivalently expressed as a ratio, rather than adding up individual terms.

Proof. To prove Lemma 5.1, observe that if we multiply the left-hand side (the summation) by \((1-z)\), then successive terms in the summation will partially cancel each other out:

\[
(1 - z)\cdot\sum_{n=0}^{N-1} z^n = \sum_{n=0}^{N-1} z^n - \sum_{n=0}^{N-1} z^{n+1} = 1 - z^N.
\]

Since \(z\neq 1\) (by hypothesis), we know that \(1-z \neq 0\), so we can safely divide both sides of this equation by \(1-z\) to get the identity:

\[
\sum_{n=0}^{N-1} z^n = \frac{1 - z^N}{1 - z}.
\]
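As a quick sanity check, Lemma 5.1 can be compared against a term-by-term sum. This is a minimal sketch; the values of \(z\) and \(N\) are arbitrary choices for illustration.

```python
import numpy as np

# Sanity check of the finite geometric series formula (Lemma 5.1).
# z and N below are arbitrary assumed values.
z = 0.5 + 0.25j
N = 16

direct = sum(z**n for n in range(N))   # add up the first N powers of z
closed = (1 - z**N) / (1 - z)          # the closed-form ratio

print(np.isclose(direct, closed))  # True
```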

## 5.7.2. Summing complex exponentials

Now that we have Lemma 5.1, we can state the following theorem.

**Theorem 5.1** (Complex exponential sums)

Let \(\theta \neq 0\) be an angle, and let \(N>0\) be an integer. Then

\[
\sum_{n=0}^{N-1} e^{\mathrm{j}\cdot\theta\cdot n} = 0
\quad\text{if and only if}\quad
\theta = \frac{2\pi\cdot k}{N}
\]

for an integer \(k \not\equiv 0 \pmod{N}\).

In plain language, Theorem 5.1 says that a complex sinusoid with a frequency that completes a whole number of cycles in \(N\) samples must sum to 0. An example of this is illustrated below by Fig. 5.18. We can actually say a bit more than that: the theorem characterizes what happens at *any* frequency, and shows that if the wave does not complete a whole number of cycles, then its sum cannot be 0.

Note that this does not handle the special case \(\theta\equiv 0\) (i.e., \(f_0 = 0\)), for which the summation simplifies to \(1+1+\dots+1 = N + 0\mathrm{j}\).
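Both directions of Theorem 5.1, as well as the \(\theta \equiv 0\) special case, can be spot-checked numerically. This is a sketch with assumed values of \(N\), \(k\), and the non-analysis angle.

```python
import numpy as np

N = 32
n = np.arange(N)

# An analysis frequency: theta = 2*pi*k/N with k not a multiple of N.
k = 3
theta = 2 * np.pi * k / N
s_analysis = np.exp(1j * theta * n).sum()

# A non-analysis frequency: theta = 0.1 is not a multiple of 2*pi/N.
s_other = np.exp(1j * 0.1 * n).sum()

# The special case theta = 0: every sample is 1, so the sum is N.
s_zero = np.exp(1j * 0.0 * n).sum()

print(np.isclose(s_analysis, 0))   # True
print(np.isclose(s_other, 0))      # False
print(np.isclose(s_zero, N))       # True
```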

### 5.7.2.1. Proof

Theorem 5.1 has an “if and only if” form, so we’ll need to prove both directions:

- \((\Rightarrow)\) \(\theta\) of the given form implies the summation must be zero, and
- \((\Leftarrow)\) the summation being zero implies \(\theta\) takes the given form.

Note that \(e^{\mathrm{j}\cdot n\cdot \theta} = \left(e^{\mathrm{j}\cdot\theta} \right)^n\) by the product rule for exponents. If \(\theta \not\equiv 0\) then \(e^{\mathrm{j}\cdot\theta} \neq 1\), so we are allowed to use the geometric series identity with \(z = e^{\mathrm{j}\cdot\theta}\):

\[
\sum_{n=0}^{N-1} e^{\mathrm{j}\cdot\theta\cdot n} = \sum_{n=0}^{N-1} \left(e^{\mathrm{j}\cdot\theta}\right)^n = \frac{1 - e^{\mathrm{j}\cdot N\cdot\theta}}{1 - e^{\mathrm{j}\cdot\theta}}.
\]

Proof. \((\Rightarrow)\)

If \(\theta = 2\pi \cdot k/N\) for integer \(k\), then the numerator can be equivalently expressed as

\[
1 - e^{\mathrm{j}\cdot N\cdot\theta} = 1 - e^{\mathrm{j}\cdot 2\pi\cdot k} = 1 - 1 = 0.
\]

Since the numerator is 0, so too is the entire summation.

Proof. \((\Leftarrow)\)

In the other direction, if we assume the summation is 0, then we must have

\[
\frac{1 - e^{\mathrm{j}\cdot N\cdot\theta}}{1 - e^{\mathrm{j}\cdot\theta}} = 0
\quad\Longleftrightarrow\quad
e^{\mathrm{j}\cdot N\cdot\theta} = 1,
\]

which implies \(N \cdot \theta = 2\pi \cdot k\) for some integer \(k\), because \(N\cdot\theta\) must be equivalent to a whole number of rotations (in either direction). Dividing through by \(N\), we get \(\theta = 2\pi \cdot k / N\). Finally, since \(\theta \not\equiv 0\) by hypothesis, \(k\) cannot be a multiple of \(N\).

## 5.7.3. What about phase?

We can generalize the statement above to handle phase offsets as well. Because the phase offset does not change with the sample index \(n\), we can factor it out of the summation:

\[
\sum_{n=0}^{N-1} e^{\mathrm{j}\cdot(\theta\cdot n + \phi)} = e^{\mathrm{j}\cdot\phi}\cdot \sum_{n=0}^{N-1} e^{\mathrm{j}\cdot\theta\cdot n}.
\]

This says that when a wave is shifted by \(\phi\), the summation is multiplied by \(e^{\mathrm{j}\cdot \phi}\).
Note that this is a pure rotation, so if the summation was 0 (i.e., we had an analysis frequency), then it will still be zero under *any* phase shift.
Likewise, if the summation was non-zero, it will remain non-zero under any shift.
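The rotation behavior can be checked numerically. In this sketch, the phase offset \(\phi\) and the choice of analysis frequency are arbitrary assumed values.

```python
import numpy as np

N = 32
n = np.arange(N)
phi = 0.7                      # an arbitrary phase offset (assumed value)

# An analysis frequency: theta = 2*pi*k/N with k = 5.
theta = 2 * np.pi * 5 / N
s0 = np.exp(1j * theta * n).sum()               # sum without phase offset
s_phi = np.exp(1j * (theta * n + phi)).sum()    # sum with phase offset

# The phase factors out as a pure rotation: s_phi == e^{j*phi} * s0 ...
print(np.isclose(s_phi, np.exp(1j * phi) * s0))  # True
# ... so a zero sum stays zero under any phase shift.
print(np.isclose(s_phi, 0))                      # True
```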

## 5.7.4. Back to the original question

The theorem above is for complex exponentials, which involve both a real and imaginary component. However, our original question was about summations of general waves in standard form:

\[
\sum_{n=0}^{N-1} \cos\left(2\pi\cdot f \cdot n / f_s + \phi\right).
\]

For this, we can use the fact that \(\cos(\theta)\) is equivalent to the real part of \(e^{\mathrm{j}\cdot \theta}\), which implies

\[
\cos(\theta) = \frac{1}{2}\cdot\left(e^{\mathrm{j}\cdot\theta} + e^{-\mathrm{j}\cdot\theta}\right),
\]

or when a sample index \(n\) and phase offset \(\phi\) are introduced, and letting \(\theta = 2\pi \cdot f / f_s\):

\[
\cos\left(\theta\cdot n + \phi\right) = \frac{1}{2}\cdot\left(e^{\mathrm{j}\cdot(\theta\cdot n + \phi)} + e^{-\mathrm{j}\cdot(\theta\cdot n + \phi)}\right).
\]

By using (5.14), we can transform each part of this summation independently. The end result is the rather unwieldy formula:

\[
\sum_{n=0}^{N-1} \cos\left(\theta\cdot n + \phi\right)
= \frac{1}{2}\cdot\left(
e^{\mathrm{j}\cdot\phi}\cdot\frac{1 - e^{\mathrm{j}\cdot N\cdot\theta}}{1 - e^{\mathrm{j}\cdot\theta}}
+ e^{-\mathrm{j}\cdot\phi}\cdot\frac{1 - e^{-\mathrm{j}\cdot N\cdot\theta}}{1 - e^{-\mathrm{j}\cdot\theta}}
\right).
\]
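Whatever its final form, the cosine summation can be spot-checked against the complex-exponential route: summing \(\cos(\theta\cdot n + \phi)\) directly should match the real part of \(e^{\mathrm{j}\cdot\phi}\) times the geometric series sum. The parameter values below are arbitrary assumptions for illustration.

```python
import numpy as np

# Compare a direct cosine sum to the complex-exponential closed form.
# fs, f, N, and phi are assumed values; f is deliberately NOT an
# analysis frequency (k * fs / N), so the sum should be non-zero.
fs = 100.0
f = 3.7
N = 50
phi = 0.4

n = np.arange(N)
theta = 2 * np.pi * f / fs

direct = np.cos(theta * n + phi).sum()

# Geometric series sum of e^{j*theta*n} (valid since theta != 0),
# rotated by the phase factor; the cosine sum is its real part.
geo = (1 - np.exp(1j * N * theta)) / (1 - np.exp(1j * theta))
closed = np.real(np.exp(1j * phi) * geo)

print(np.isclose(direct, closed))  # True
```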

## 5.7.5. Why does this matter?

Many of the things we’d like to say about Fourier series depend on having waves “average out to zero”. For continuous (time) signals, we can show this kind of thing via symmetry arguments (like in chapter 1), but when using discretely sampled signals, a bit more care must be taken.

The main theorem in this section tells us that waves at analysis frequencies always sum to 0, but in proving that theorem, we got as a byproduct a general equation (5.15) for sums of waves at arbitrary (non-analysis) frequencies and phase offsets. While this equation could be used in principle to bypass computing sums sample-by-sample, it is more useful as an analytical tool: it allows us to reason about the properties of the sum (e.g., whether it is 0 or non-zero, real or complex, etc) just by knowing the wave’s parameters.