It's finally time to start looking at the relationship between the discrete Fourier transform (DFT) and the discrete-time Fourier transform (DTFT). Let's look at a simple rectangular pulse, x[n] = 1 for 0 <= n <= M-1 (and 0 otherwise). The DTFT of x[n] is:

X(ω) = Σ_{n=0}^{M-1} e^{-jωn} = (sin(ωM/2) / sin(ω/2)) e^{-jω(M-1)/2}
Let's plot |X(ω)| for M = 8 over a couple of periods:
M = 8;
w = linspace(-2*pi, 2*pi, 800);
X_dtft = (sin(w*M/2) ./ sin(w/2)) .* exp(-1j * w * (M-1) / 2);
plot(w, abs(X_dtft))
title('|X(\omega)|')
It turns out that, under certain conditions, the DFT is just equally spaced samples of the DTFT. Suppose X[k] is the P-point DFT of x[n]. If x[n] is nonzero only over the finite domain 0 <= n <= P-1, then X[k] equals X(ω) at equally spaced intervals of ω:

X[k] = X(ω) evaluated at ω = 2πk/P, for k = 0, 1, ..., P-1
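Here's a quick numerical check of that sampling relationship, using the M = 8 pulse defined above (a sketch; the k = 0 sample is handled separately because sin(ω/2) is zero there):

```matlab
% Verify that the P-point DFT equals the DTFT sampled at w_k = 2*pi*k/P.
M = 8;
x = ones(1, M);                  % rectangular pulse, x[n] = 1 for 0 <= n <= M-1
P = 8;
w_k = (0:P-1) * (2*pi/P);        % DFT sample frequencies
X_dtft_k = zeros(1, P);
for i = 1:P
    if w_k(i) == 0
        X_dtft_k(i) = M;         % limit of sin(wM/2)/sin(w/2) as w -> 0
    else
        X_dtft_k(i) = (sin(w_k(i)*M/2) / sin(w_k(i)/2)) * ...
                      exp(-1j * w_k(i) * (M-1) / 2);
    end
end
max(abs(fft(x) - X_dtft_k))      % within roundoff of 0
```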
The MATLAB function fft computes the DFT. Here's the 8-point DFT of our 8-point rectangular pulse:
x = ones(1, M); X = fft(x)
X =

     8     0     0     0     0     0     0     0
One 8 and a bunch of zeros?? That doesn't seem anything like the DTFT plot above. But when you superimpose the output of fft in the right places on the DTFT plot, it all becomes clear.
P = 8;
w_k = (0:P-1) * (2*pi/P);
X = fft(x);
plot(w, abs(X_dtft))
hold on
plot(w_k, abs(X), 'o')
hold off
Now you can see that the seven zeros in the output of fft correspond to the seven places (in each period) where the DTFT equals zero.
You can get more samples of the DTFT simply by increasing P. One way to do that is to zero-pad.
x16 = [x, zeros(1, 8)]
x16 =

  Columns 1 through 15

     1     1     1     1     1     1     1     1     0     0     0     0     0     0     0

  Column 16

     0
P = 16;
X16 = fft(x16);
w_k = (0:P-1) * (2*pi/P);
plot(w, abs(X_dtft))
hold on
plot(w_k, abs(X16), 'o')
hold off
Another way to increase P is to use the fft(x,P) syntax of the fft function. This syntax computes the P-point DFT of x by using zero-padding. Let's try a 50-point DFT.
P = 50;
Xp = fft(x, P);
w_k = (0:P-1) * (2*pi/P);
plot(w, abs(X_dtft))
hold on
plot(w_k, abs(Xp), 'o')
hold off
If you've ever wondered what that whole zero-padding business was all about with Fourier transforms, now you know. When you tack on a bunch of zeros to a sequence and then compute the DFT, you're just getting more and more samples of the DTFT of the original sequence.
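You can confirm that the two zero-padding approaches are interchangeable (a small sketch using the pulse from above):

```matlab
% Padding by hand and using the fft(x, P) syntax compute the same DFT.
x = ones(1, 8);
P = 50;
X1 = fft([x, zeros(1, P - length(x))]);  % explicit zero-padding
X2 = fft(x, P);                          % zero-padding done by fft
isequal(X1, X2)                          % the two results are identical
```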
I think the next logical place to go in our Fourier exploration is to start considering some of the reasons why many people find the output of fft so surprising or puzzling. Here's a sample:
- Why isn't the zero frequency (or "DC" frequency) in the center of the output from fft?
- Why isn't the output of fft real when the input is symmetric?
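As a hedged preview of the first puzzle: the fftshift function rearranges the output of fft so that the zero-frequency bin lands in the center. A minimal sketch:

```matlab
% fftshift moves the zero-frequency bin to the center of the spectrum.
x = ones(1, 8);
P = 16;
X = fft(x, P);
w = (-P/2 : P/2 - 1) * (2*pi/P);   % centered frequency axis
plot(w, abs(fftshift(X)), 'o')
```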
Do you have puzzles to add? Let me know by adding your comments.
Published with MATLAB® 7.9
8 Comments
I am enjoying this series of blogs & encourage you to continue. This latest entry was very helpful. I suspect that most users who use the fft function are really after a DFT with good calculation performance & really don't care how much of Cooley & Tukey's genius is incorporated in the routine.
I think most users are happy to trust that world-class experts in signal processing have ensured that these functions in MATLAB do the right thing & perform efficiently.
However, for folks like me who don’t have a graduate exposure to signal processing, I’m interested in an fft type function because I want some frequency domain insight into some real, time domain data. As such, the single detail that always trips me up is how to relate the output of the fft function to a vector of real frequencies that are meaningful to me. I’m not talking about the difference between Hertz & radians per second. I’m talking about how the frequencies are non-dimensionalized in the world of signal processing.
Both the FRP for fft & this blog entry are good examples of this issue. Nowhere in this blog do you mention units at all. Your frequency vector w spans −2π to +2π, so most of us would assume it is radians per second. But this isn't specified, and you don't label the x-axis of your plots. So, we are left guessing.
Lay users like me are further tripped up by this concept of negative frequencies. The FRP for fft is just as bad in that it only addresses the frequency issue by inference, in the single example wherein they use the common shorthand of taking advantage of the complex-conjugate nature of the fft output, throwing away the last half of the fft output, and generating a vector of only positive frequencies.
Certainly an fft function is more general & powerful than just for understanding frequency domain characteristics of time domain data, so it is appropriate for MATLAB to provide this generality. However, since many, if not most, uses are for understanding time domain data, the FRP should address this frequency detail specifically.
Further, the FRP should include several more examples including ones that specifically demonstrate the effect of real vs complex input data.
I am implementing code from a paper ( http://www.cns.nyu.edu/~zwang/files/papers/icip00.pdf ), and I am having trouble with the fft, which is giving me very high values.
[Long code fragment edited out. -SE]
The high value output from the fft is carried forward till my end result.
OysterEngineer—Excellent questions and comments. I will follow up in additional blog posts.
Amit—I’m afraid I can’t help you with debugging your implementation of something you read in a research paper, but I can suggest that you go back and check all your assumptions. The MATLAB fft function computes the discrete Fourier transform (DFT). Sometimes different people use different formulas for the DFT. Most often the difference is in the scale factors used.
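To illustrate the scale-factor point: texts differ on where the 1/N factor goes in the DFT/inverse-DFT pair. Here's a hedged sketch of three common conventions, all built from MATLAB's fft (which applies no scale factor in the forward direction):

```matlab
% Three common DFT scaling conventions.
x = [1 2 3 4];
N = length(x);
X_unscaled = fft(x);            % MATLAB's convention: no forward scaling
X_mean     = fft(x) / N;        % some texts divide by N in the forward DFT
X_unitary  = fft(x) / sqrt(N);  % "unitary" convention: sqrt(N) in each direction
% MATLAB's matching inverse puts the 1/N factor in ifft:
max(abs(ifft(X_unscaled) - x))  % within roundoff of 0
```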
Thanks Steve, I'll try reading more on fft and using different scale factors.
Thanks for this excellent series of blogs on the FT. I feel it's one of those topics that cannot be revisited enough. I use the DFT in MATLAB a lot for image processing tasks, but I can still manage to get confused about some of the concepts involved every so often.
I have a question so far- you mention here that zero-padding is effectively for increasing the sampling of the FT but isn’t it also for preventing ‘wraparound’ error when filtering too?
There's also Gibbs phenomenon/ringing and windowing to consider – I hope you can cover these too in the future.
This is a great resource you're making. I'm looking forward to the rest.
Fen—I appreciate the encouragement, thanks. You are right that there is a different way to think about zero-padding when you are doing frequency-domain based filtering. I imagine I’ll get to that, but I’m not sure when. I touched briefly on windowing back in December, and I’ll come back to that topic again. I haven’t really thought much yet about how to talk about Gibbs phenomenon. Thanks for taking the time to make good suggestions.
Fen—I think I misunderstood your comment about avoiding wrap-around error. Yes, when using the DFT to compute convolution, the inputs must be zero-padded by the proper amount. The reason is related to the condition I gave in this post on P. P must be at least as big as the nonzero domain of the output of the convolution, which is N+K-1 for convolution of an N-point sequence with a K-point sequence. Therefore the inputs must be zero-padded to at least N+K-1.
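That N+K-1 padding requirement is easy to check directly (a sketch; the sequences here are arbitrary examples):

```matlab
% Linear convolution via the DFT: pad both inputs to at least N+K-1.
x = [1 2 3 4 5];                 % N = 5
h = [1 -1 2];                    % K = 3
P = length(x) + length(h) - 1;   % N + K - 1 = 7
y_dft  = ifft(fft(x, P) .* fft(h, P));   % DFT-based convolution
y_conv = conv(x, h);                     % direct linear convolution
max(abs(y_dft - y_conv))         % within roundoff of 0
```

With any P smaller than N+K-1, the DFT-based result would instead be a time-aliased ("wrapped-around") version of the linear convolution.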