3.2. Defining convolution#
Now that we’ve worked through a couple of examples of ways to combine delay, gain, and mixing to produce effects, we can formally define the convolution operation. In full generality, convolution is an operation that takes two sequences \(\blue{x}\) and \(\red{h}\) and produces a new sequence \(\purple{y}\). Here, \(\blue{x}\) is the input signal and \(\purple{y}\) is the output signal. The sequence \(\red{h[k]}\) contains the coefficients by which each delayed copy of the input is scaled before being mixed to produce the output:
\(\red{h[0]}\) scales the raw input signal \(\blue{x[n]}\),
\(\red{h[1]}\) scales the input delayed by one sample \(\blue{x[n-1]}\),
\(\red{h[2]}\) scales \(\blue{x[n-2]}\), and so on.
Note that \(\blue{x}\) and \(\red{h}\) can have different lengths.
3.2.1. The convolution equation#
Definition 3.1 (Convolution)
The convolution of two sequences \(\blue{x[n]}\) (of length \(N\)) and \(\red{h[k]}\) (of length \(K\)) is defined by the following equation:

\[
\purple{y[n]} = \sum_{k=0}^{K-1} \red{h[k]} \cdot \blue{x[n-k]},
\]

where samples \(\blue{x[n-k]}\) outside the range \(0 \leq n-k < N\) are treated as zero.
This equation is just a slight generalization of the examples we saw in the previous section. In words, it says that the output signal \(\purple{y}\) is computed by summing together several delayed copies of the input (including delay-0), each multiplied by a coefficient \(\red{h[k]}\):

\[
\purple{y[n]} = \red{h[0]}\cdot \blue{x[n]} + \red{h[1]}\cdot \blue{x[n-1]} + \dots + \red{h[K-1]}\cdot \blue{x[n-(K-1)]}.
\]
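The delay–gain–mix interpretation above translates almost directly into code. The following is a minimal sketch in plain Python (the helper name `convolve` is our own, not from the text); out-of-range input samples are treated as zero, so the output has length \(N + K - 1\):

```python
def convolve(x, h):
    """Compute the convolution of sequences x (length N) and h (length K)."""
    N, K = len(x), len(h)
    y = [0.0] * (N + K - 1)
    for n in range(N + K - 1):
        for k in range(K):
            # Skip terms where x[n-k] would be out of range (treated as zero)
            if 0 <= n - k < N:
                y[n] += h[k] * x[n - k]
    return y


print(convolve([1, 2, 3], [1, 1]))  # → [1.0, 3.0, 5.0, 3.0]
```

In practice you would use an optimized routine such as `numpy.convolve`, which computes the same sum, but the double loop makes the definition explicit: the outer loop ranges over output samples, and the inner loop accumulates each scaled, delayed copy of the input.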
Fig. 3.3 illustrates the computation of a convolution between a square wave and a sequence \(h = [1/2, 1, 0, -1, -1/2]\). The intermediate steps show how each scaled and shifted copy of \(x\) contributes to the total convolution \(y\) (bottom subplot).
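A setup like Fig. 3.3 can be reproduced with `numpy.convolve`, which implements exactly the sum in Definition 3.1. The coefficient sequence \(h\) below is taken from the figure; the particular square wave is our own assumption, since the figure's exact input is not specified here:

```python
import numpy as np

# A simple square wave (an assumed stand-in for the one in Fig. 3.3):
# 8 samples at +1, 8 at -1, repeated once, for N = 32 samples total.
x = np.concatenate([np.ones(8), -np.ones(8), np.ones(8), -np.ones(8)])

# The coefficient sequence from Fig. 3.3
h = np.array([1 / 2, 1, 0, -1, -1 / 2])

# mode='full' keeps every output sample: length N + K - 1 = 32 + 5 - 1 = 36
y = np.convolve(x, h, mode="full")

print(len(y))  # → 36
print(y[0])    # → 0.5  (= h[0] * x[0], the only in-range term at n = 0)
```

Plotting each term `h[k] * x[n - k]` separately, as the figure does, shows how the scaled and shifted copies of `x` stack up to form `y`.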