Sequences

A sequence is exactly what it sounds like: an ordered list of things. Sequences differ from sets in that they are ordered and repetition is allowed.
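To see the distinction concretely, here is a small Python sketch (the particular numbers are made up purely for illustration) using the built-in list and set types:

seq_a = [1, 2, 2, 3]   # a sequence: order matters and repeats are kept
seq_b = [2, 1, 2, 3]   # a different sequence (same values, different order)

print(seq_a == seq_b)              # False: as sequences, they differ

print(set(seq_a) == set(seq_b))    # True: as sets, both are {1, 2, 3}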

The most common usage of sequences in signal processing is the sequence of sample values representing a digital signal. This is commonly denoted by \(x[n]\), where \(n \in \mathbb{N}\) is the sample index, and \(x[n]\) is the sampled value of the signal at the \(n\)th position.
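In code, we can think of such a sequence as an array (or list) of sample values indexed by \(n\), starting from 0. Here is a minimal sketch, with sample values invented just for the example:

x = [0.0, 0.5, 0.9, 1.0, 0.7]   # hypothetical sample values of a digital signal

n = 3
print(x[n])   # the sample at index n = 3, i.e., 1.0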

Summations

Throughout signal processing, we often need to add up long sequences of numbers. While we could easily express this using the standard \(+\) notation, such as

\[ S = 1 + 2 + 3 + \dots + 100, \]

this can get cumbersome and difficult to read, especially when the terms of the summation involve other calculations.

Instead, mathematicians express summations using the \(\sum\) symbol, derived from the capital Greek letter sigma (the equivalent of S in the Latin alphabet, standing for summation). The summation above would be equivalently expressed as

\[\begin{align*} S &= \sum_{n=1}^{100} n\\ &= 1 + 2 + 3 + \dots + 100. \end{align*}\]

The notation specifies the variable that changes throughout the summation (in our case, \(n\)), with the start and end points of the sum (1 to 100, inclusive) written below and above the \(\sum\) symbol. Everything after the summation symbol describes what is being added up: in this case, just the values of \(n\).

It’s often helpful to think programmatically when reading summation expressions. The summation above would be translated to Python code as follows:

S = 0

# Python ranges do not include the end point, so we need to go one past where we want to stop
for n in range(1, 101):
    S = S + n
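Running this loop produces S == 5050. As a side note (not something the notation requires), this particular sum can also be computed in one line with Python's built-in sum and range:

S = sum(range(1, 101))   # same result, 5050

The important correspondence to remember is between the summation bounds and the loop: the index variable \(n\) takes each value from the start point to the end point, and the expression after the \(\sum\) is what gets accumulated.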