Lecture 30: Discrete Fourier Series



Instructor: Prof. Gilbert Strang


PROFESSOR STRANG: OK, so I thought I'd quickly list the four central topics for the Fourier part of the course, and I would guess that there'll be a question on each of those four in the final exam. We covered 4.1, Fourier series. I don't plan to discuss the other series in 4.2, like Bessel and Legendre and so on. If I can, I'll come back to those in the very last lectures. But I want to pick up now on the second key example, which is the discrete Fourier series that has only N terms, instead of the infinitely many terms in the series you've been looking at. So I'm going to cut back to N terms. This k, well, I could go one to N, but the best way is zero to N-1. So that's the discrete series. We're going to be dealing with a vector, c_0 up to c_(N-1), and a vector of function values. Vector to vector. So a matrix is going to be the key to everything here. An N by N matrix. So that's what's coming. Then after that, probably by Friday, will be the integral transform. And early next week, a little after Thanksgiving, will come this key idea of convolution, which is really important in so many applications and needs to be separated out. Convolution will apply to all of these. And you'll see the point there.

OK, ready for the idea of the discrete one. So the key idea is we only have N terms, so I can't expect to reproduce a whole function. I can get N function values, so I'll get this right at N different points. And I'm thinking of my F(x) on, let me take zero to 2pi now, and I'm thinking again of an F(x) that's periodic. So it goes, does whatever. If that was my F(x), just remembering about Fourier series, what could you tell me about the Fourier series for that F(x)? Well, when I continue periodically, am I going to see a jump? Yes. There'll be a jump at every 2pi, so the underlying function is not continuous, it has a jump, so our special examples with jumps and slow decay, the 1/k decay rate for the Fourier coefficients, would apply here. OK, that's remembering Fourier series. Now, for the discrete case, let me just take N=4. So what four points am I going to take? Ah, just for the heck of it, let's make it look periodic. I never should have drawn it otherwise. OK, so there's one point. I'm just going to take four equally spaced points, equally spaced being key: zero, one, two, three. So N-1 is three. And these four values I can get right. So those are my four x's. This is x=0, of course. And what's the x at the next one? x is 2pi, the whole interval, divided into N pieces, so 2pi/N. So we are going to see a lot of fractions involving pi/N, or 2pi/N. They're just going to come up everywhere. Because that's the delta x, the h, the step in the discrete form. OK.
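
Those sample points are easy to generate; here is a minimal NumPy sketch, for N=4 (the values are just the four x's on the board):

    import numpy as np

    N = 4
    x = 2 * np.pi * np.arange(N) / N   # the equally spaced points 0, 2pi/4, 4pi/4, 6pi/4
    print(x)                           # approximately [0, 1.5708, 3.1416, 4.7124]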

So those are the x's. So I'm only going to get this at x=0, 2pi/N, 4pi/N, up to whatever it comes out to, (N-1)*2pi/N. That doesn't look elegant, so we probably will try to avoid writing that again. But it was three times 2pi/4 that got us to there. So the points are 2pi/4, 4pi/4, 6pi/4. OK. So we're only going to get equality at those x's. So it's those x's that I should plug in here. OK, so let me do that. What does this series look like at these four points? Let me see. Writing out all four equations, F(0) is going to match the right side at zero, because zero is one of my points. So I get c_0 + c_1 + c_2 + c_3. At x=0, right, all the e to the whatevers were one. Now at the next x, 2pi/N, now I'm matching the series with the function at that value, 2pi/N. So now I have k=0, and what does k=0 give me? When k is zero, my term is just c_0 times one. Because if k is zero, e to the whatever is again one. So this is just c_0. Now, here's the key. What is that business there at x=2pi/N? Here comes the important number in this whole business. The important number in this whole business, I'll call it w, is e^(2pi*i/N). That's a complex number, and we'll place it in the complex plane. So you must get to know that number, w. Because when I plug in x -- k is now one here, so I have c_1 times e^(i*1*2pi/N), and what is that? w. So it's w*c_1. And what do I get from the next term in this series? I have c_2*e^(i*2x). x is 2pi/N, so do you see what's happening? It's w squared: e^(4pi*i/N).
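
To see that number w concretely, a small sketch for N=4, where w comes out to be i:

    import numpy as np

    N = 4
    w = np.exp(2j * np.pi / N)   # w = e^(2*pi*i/N)
    print(np.round(w, 12))       # i: for N=4, w is exactly i, up to roundoff
    print(np.round(w**2, 12))    # -1: w squared, which is e^(4*pi*i/N)
    print(np.round(w**3, 12))    # -i: w cubed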

We're using the beauty of exponentials; that's making everything go. It's w squared. And this guy would be w cubed times c_3. That's the series, at our points. So we'll just see this once -- what are we doing? We're matching at N points, and now we have the good notation w. So I'll rewrite it all in terms of w. OK, and at the next point, 4pi/N. Well, again, the k=0 term is c_0. The k=1 term, can you tell me what I get when k is one? I have something times c_1, it's e^(i*1*4pi/N), and what's that? w squared. e^(i*4pi/N) is w squared. Look, this matrix is coming out symmetric. And this will be w^4, and that'll turn out to be w^6, and then the last row of the matrix is going to come from matching at 6pi/N, and it will turn out to be c_0, and then w^3, w^6, and w^9. So I did the last row or two a bit fast, just because the pattern's there, and you'll spot it. Now I want to pull out of that the matrix. The Fourier matrix, the four by four Fourier matrix that multiplies the c's and gives me the f's. Let me call these numbers y_0, y_1, y_2, and y_3, so it's a vector of values of the function. And this is a vector of coefficients. The Fourier transform goes between c's and y's. It connects a vector of N function values in physical space with a vector of N coefficients in frequency space, and one way is the discrete Fourier transform and the other way is the inverse discrete Fourier transform. And it's a little bit confusing which is which, actually. There was no question with Fourier series; it was easy to tell the function from the coefficients, because the function was f(x) and the coefficients were a bunch of numbers. Here we only have N numbers. N y's and N c's, and we're transforming back and forth.
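
Written as one matrix equation, those four matching conditions are y = Fc. A sketch, where the coefficients c are arbitrary stand-ins:

    import numpy as np

    N = 4
    w = np.exp(2j * np.pi / N)
    F = np.array([[w**(j*k) for k in range(N)] for j in range(N)])   # entry j,k is w^(jk)
    c = np.array([1.0, 2.0, 0.0, -1.0])   # made-up coefficients c_0..c_3
    y = F @ c                             # the four function values y_0..y_3
    print(np.round(F, 12))                # first row and column all ones, then powers of w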

So I'm finding right now the matrix; this Fourier matrix is going to take the c's and give me the y's. So I'm talking here about the matrix F. And let's see what F is. That was y_0 to y_3, equals this matrix F, the four by four matrix, times c_0 to c_3. This is really the inverse DFT. It reconstructs the y's from the c's, or an easier way to say that is: it adds up the series. The usual DFT is starting with the y's and giving us the c's. So that would be the discrete Fourier transform: take a function, produce its coefficients. The reverse step is: take the coefficients, add up the series, get back the function. And that's what F does. So I'm going to focus on F, which is the one that involves this w. OK, can you peel off from here what the matrix F is, the Fourier matrix? Its first row is all ones. Its first column is all ones. And then it's 1, w, w^2, w^3; then w^2, w^4, w^6; then w^3, w^6, and w^9. So it's made up of these powers of w. Zero, first, second, and third power and then higher powers. But higher powers are all-- Let's draw a picture. Where is w? So this is the complex plane. Here's the real, and here's the imaginary direction. And let me take N to be -- here N was four -- let me take N to be eight in my picture, and then I'll be able to spot the fours and the eights. If N is eight, where would w_8 be? So w_8 is meant to be e^(2pi*i/8) now. Where's that number in the picture? Is it on the circle? Absolutely. This discrete Fourier transform never gets off the circle. And in fact, it never gets off eight points on the circle, which are the eight equally spaced points. This says go 1/8 of the whole way around, which will be here. So there's w_8. When I square it, what's the square of w_8? What happens to the angle when I multiply numbers? I add the exponents. Here, if I'm multiplying w_8 by itself, it'll double the exponent. Double the angle. So that point is w_8 squared. And it's the same as w_4, because it's 1/4 of the way around.
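
The picture of the eight points, in numbers rather than a drawing -- a quick sketch:

    import numpy as np

    w8 = np.exp(2j * np.pi / 8)
    powers = w8 ** np.arange(8)    # w8^0 up to w8^7: eight equally spaced points
    print(np.round(powers, 3))     # all of them sit on the unit circle
    print(np.round(w8**2, 12))     # i: w8 squared is a quarter of the way around, same as w_4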

And what is that number in ordinary language? i. It's i, right? It's on the unit circle at 90 degrees, and on the imaginary axis it's just i. So all this here was 1, i, i^2, i^3; i^2, i^4, i^6; i^3, i^6, i^9. The four by four Fourier matrix is just powers of i. And the eight by eight Fourier matrix is powers of w_8. And let's find all the rest of the powers. So there's w_8, w_8 squared, where's w_8 cubed? Here. I multiply those guys, I add the angles, I'm out to here. Here's the fourth power, here's the fifth power, sixth power, seventh power, and finally we get to w_8 to the eighth power, which is one. Because if I take the eighth power, I've got e^(2pi*i), and that's one. So that's the picture to remember. And while looking at that picture, could you tell me the sum of those eight numbers? What do those eight numbers add up to? So I just want to ask what one plus w up to w^7 adds up to, when I'm doing the eighth guy. Well, I wouldn't ask the question if it didn't have a nice answer. Zero. That's of course the usual answer in math. We do all this work and we get zero. But now, of course, the real math is why. I mean, math is, you get equations, but then always there's that kicker: why. And why do they add up to zero? Can you see why those eight numbers would add to zero? Well, yeah. What do these two guys add up to? One and minus one add to zero. That and that add to zero; I can pair them off. This and this add to zero. That and that add to zero. Yeah, so that's one way of seeing it. But actually, even if N was three or five or some odd number-- So should I do the picture for N=3? It doesn't need a very big picture for N=3.

So where would the roots be for N=3? One of them is always one. These are the three cube roots of one; that's what I'm going to draw. Those were the eight eighth roots of one, because if I take any one of those numbers, its eighth power brings me back to one. So where are these guys, these three numbers equally spaced around the circle? Where do you think they are? Well, they've got to be at 120 degrees, 240 degrees, and 360. And again, those will add to zero. You can't pair them off, because we've got an odd number, but the sum is safely zero, and we could see why. OK, so this section, this topic, is all about that matrix. It's all about that four by four, or N by N, matrix, which has powers of w. Notice that the matrix hasn't got any zeroes. It hasn't even got anything close to zero; all those numbers are on the unit circle. So multiplying by that matrix, which is the same as adding up the Fourier series at the four points -- doing this calculation, finding these y's from the c's, reconstructing the y's given the coefficients -- has 16 terms. Right? The matrix has got 16 entries, and all the entries in the matrix are showing up here in powers of w. We've got 16 multiplications. Well, maybe a couple of them were easy, but basically 16 multiplications and additions. Pretty much N squared work. And that's not bad. Except if you want to do it a million times, right? A typical N might be 1024. So the matrix has 1024 squared entries, which is about a million.
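
Going back to the picture for a moment, here's a numerical check that the roots really do add to zero, for N=8 and for the odd case N=3:

    import numpy as np

    for N in (8, 3):
        w = np.exp(2j * np.pi / N)
        total = sum(w**k for k in range(N))
        print(N, np.round(total, 12))   # both sums come out zero, up to roundoff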

So if we have a matrix with a million entries, a million operations to do, and then we do it a million times, then it's getting up to money that Congress would give to the banks. Whatever. Anyway. Serious money. Serious computing time. And the wonderful thing is that there is a faster way than N squared. There's a faster way to do these 16 terms than the obvious way. You might say, well, they're pretty nice. And they are. But they're nicer than you can see immediately. This matrix will break up into simple steps with lots of zeroes in each step, and the final result is that instead of N squared, which is the cost done the old-fashioned way, let's say, there's a fast Fourier transform. So the FFT -- I should maybe find a better space for the most important algorithm in the last hundred years, and here it is, right there. My one goal for Wednesday will be to give you some insight into the FFT. But here I'm just looking at the result. The number of computations, instead of N squared, is N times log N. It's log to the base two, or maybe 1/2 N log N. So I'll erase some of the other things close by so you can see it. Where N squared was, if N was a thousand, N squared was a million. If N is a thousand, this factor is a thousand, and what's the logarithm of a thousand to the base two? That's ten, right? 2^10 gave us that 1,024. So this is 10^4. So if N is 10^3, or 1,024 more exactly, the logarithm is ten, so we're at 10^4 instead of 10^6. So that's a saving of a factor of 100. I mean, it just comes from doing the additions and multiplications in the right order. You get this incredible saving. And it sort of makes whole calculations that were previously impossible become possible, because instead of where it might have been 100 minutes, it's now one minute.
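
The arithmetic of that comparison, spelled out (the exact constant in front of N log N depends on the FFT variant, so take the 10^4 as an order of magnitude):

    from math import log2

    N = 1024
    print(N**2)                          # 1048576: about 10^6, the full matrix multiply
    print(N * int(log2(N)))              # 10240: about 10^4, the FFT's N log N count
    print(N**2 // (N * int(log2(N))))    # roughly a factor of 100 saved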

So everything focuses on this Fourier matrix and its inverse, which we still have to find, but it'll come out beautifully. In fact, I could tell you what F inverse is, just so you see it coming. So what happened to w, that all-important number? Well, let me repeat what w is. I'll put it on this board. F has powers of w. w is e^(2pi*i/N). And what power has it got in the j,k position? So now you can see the formula for these entries. Only, you have to let me start the count at j and k equal to zero. This is zero-based, where MATLAB is one-based. And to do this stuff you always have to shift things by one. Some other, more recent software like Python is zero-based, because this is so common. So here's the formula: it's w to the power j times k. That's neat. That's what goes into the matrix F. This here is row three. It looks like row four, but I'm starting with zero, so that's row three. That's column three. This entry is w to the three times three, w^9. Which, by the way, is the same as what? If w is i, in this four by four case, what's w^9, what's i to the ninth power? Can you do i^9 in your head? Sure. Because you're starting here and taking powers: one, two, three, four, you're back to one. i^4 is one. One, two, three, four more, i^8 is one. So i^9 will actually be just i. So they're all powers of i. But w^(jk) is the good way to see it. It's three times three because it's in row three, column three. And this, of course, is three times two, and two times two, and three times one. You see the entries of F? Yeah.
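
So the whole matrix comes from one formula, entry j,k equals w^(jk), with zero-based j and k. A sketch of that construction:

    import numpy as np

    def fourier_matrix(N):
        """Fourier matrix with entries w^(jk), indices j, k = 0..N-1 (zero-based)."""
        jk = np.outer(np.arange(N), np.arange(N))   # the exponents j*k
        return np.exp(2j * np.pi / N) ** jk

    F = fourier_matrix(4)
    print(np.round(F[3, 3], 12))   # w^9, which for N=4 is just i again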

So now I'd better write this formula in a better form. Nobody is going to write it all out for matrices of order 1000, right? So we've got to write the decent formula. So the decent formula is that the j-th y, because it's going to come in equation number j, will be the sum on k=0 to N-1 of c_k times w^(jk). Those are the entries of the matrix, those are the c's they multiply, and these are the y's. As vectors, this is just y=Fc. Now we've finally got a good notation. The written-out version was horrible notation. I mean, that's exhausting. It shows you the algebra, but you have to be sort of prepared-- Well, the information is there, because it's telling you what the entries of the matrix F are. But that was with indices, and this is with whole vectors. So that would be y; in MATLAB that would be the inverse fast Fourier transform of c. So you see that that's our matrix. Now, before I show why, I was going to say: what about the inverse? Because if the FFT gave us a fast way to multiply by F, we want it also to give us a fast way to do the inverse. We have to go both ways here. We get our data, we put it into frequency space, we look at it in frequency space, we understand what's going on by separating out the frequencies. We maybe smooth it, we maybe convolve it, whatever we do. Compress it. And then we've got to go back to physical space. So we have to go both ways. And the question is, what about F inverse? And let me just say what goes into F inverse. The point is that the number that goes into F inverse is just like w, but it's the complex conjugate, and I'll call it w bar. And that's e^(-2pi*i/N). So the thing that's going to go into the inverse matrix is the powers of w bar, the complex conjugate, to the same power jk.
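
You can check y = Fc against a library FFT. One caveat, which is a NumPy convention rather than anything in the lecture: np.fft.ifft includes the 1/N factor, so the sum above equals N times np.fft.ifft(c):

    import numpy as np

    N = 4
    F = np.exp(2j * np.pi / N) ** np.outer(np.arange(N), np.arange(N))
    c = np.array([2.0, 1.0, -1.0, 0.5])          # any coefficient vector
    y = F @ c                                    # y_j = sum over k of c_k w^(jk)
    print(np.allclose(y, N * np.fft.ifft(c)))    # True: same transform, different scaling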

Let me just say right away: just as there were 2pi's in the Fourier series world that got in our way -- and physicists can't stand those 2pi's; they try to get rid of them, but of course they can't, they're there, so they put them up into the exponent. Well, I guess I'm blaming physicists, but I've done it too. I've put the 2pi up there for the Fourier series. Anyway, 2pi's are all in your hair in the Fourier series. And for the discrete one, the corresponding thing is an N. There's a factor N that I have to deal with. Because look, here's what I'm saying. Let me multiply F by F inverse. Say 1, 1, 1, 1; 1, i, i^2, i^3; 1, i^2, i^4, i^6; 1, i^3, i^6, i^9. This is my F, right? Now, my claim is that F inverse is going to involve the complex conjugate, w bar. i bar. What is the complex conjugate of i? You see that we really have to use complex numbers here, but they're on the unit circle; you couldn't ask for a better place to be. What's the complex conjugate that's going to go into F inverse? If I take i and I take its conjugate, what does that mean? The conjugate is just: flip it across the real axis to the other side. You see, isn't it great? The complex conjugate of i is minus i. Here's w_8, and here's w bar 8, just across the axis. So it still has size one. And, of course, its angle is just minus this angle. So that's why we get this minus: when I flip across, i changes to minus i, the angle changes to minus the angle. And finally I was going to do this multiplication, just to see. So now I'm claiming that this conjugate matrix is 1, 1, 1, 1; 1, -i, (-i)^2, (-i)^3; 1, (-i)^2, (-i)^4, (-i)^6; 1, (-i)^3, (-i)^6, (-i)^9.

And now, if you do that multiplication, you will get the identity matrix. That's the great thing. You get the identity, 1, 1, 1, 1 down the diagonal -- except for a factor N. And do you see that coming? When I multiply this row by this column, what do I get? Four. If they were N by N, I'd have N ones against N ones, so I'd have to expect an N there. So for F times F inverse -- in other words, if I want to get the identity -- I'd better divide this by N. So now I have F F inverse equal to I. So F inverse is just like F, except complex conjugate and divided by N. So if I want this formula to be really correct, I have to divide by N. OK? So that's what w and w bar are. And another letter that often gets used in this topic is omega. And I figure that's a good way to separate out the complex conjugates: call this guy w and call this guy omega. So this is w and here's omega. I don't know if that helps. But you do have to keep straight whether you're talking about this number or that number. And this number could be called w, because omega wouldn't be easy to type. So you have to pay attention to which one you're using, and somewhere you have to divide by N. But otherwise it's fantastic. It's just beautiful. Can we do that multiplication, just to see that it really works? Well, the ones worked fine. What about the ones times that column? So I'm looking at the (1,2) entry in this multiplication -- or rather the (0,1) entry, I should say. This is the zeroth row, and this is column number one, and what do I get out of that? Zero, right. That was my point. This is adding up the same four numbers -- well, going the other way around, because it's the complex conjugate -- but adding up the same four numbers and giving me that zero.
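
That multiplication, checked numerically -- F times its conjugate really is N times the identity:

    import numpy as np

    N = 4
    F = np.exp(2j * np.pi / N) ** np.outer(np.arange(N), np.arange(N))
    product = F @ np.conj(F)
    print(np.round(product.real))   # 4 down the diagonal, zeroes elsewhere: N times I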

And now how about the ones on the diagonal? When I do this row times itself -- the row times its conjugate -- what am I seeing? What are these terms? That's certainly one. What is i times minus i? One again. A number's getting multiplied by its conjugate, and that gives you something real and positive. In fact, we get one every time, so we get four ones, we divide by four, and we get that diagonal one. So F inverse, I'm concluding, is F conjugate divided by this nuisance number N. I could put square root of N with F, and then square root of N would show up with F inverse, just the way I could have put square root of 2pi with one transform and square root of 2pi with the other. But let's leave it like so. So that's not proved, in a way -- just checked for a couple of entries with N=4 -- but it always works. That's the beauty. In other words, the two directions, the y to c and the c to y, are both going to be speeded up by the FFT. Whatever the FFT could do with w and powers of w, it can do the same with w bar and powers of w bar. If the FFT is a fast way to multiply by that matrix, it'll also give me a fast way to do the other transform. So the FFT is what makes everything go, though this would all be important even if the FFT didn't exist. And actually, Fourier didn't notice it. Gauss had the idea, but he had so many ideas that it wasn't very clear in his notes, the FFT idea. But then Cooley and Tukey rediscovered it in 1965, in a small paper. I don't think they had any idea how important it would be. But they published their paper, and people saw what was happening, and realized that that gave the right way to do the transforms.
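
So the conclusion, as a sketch: conjugate F and divide by N, and you have the inverse. The round trip from c to y and back:

    import numpy as np

    N = 4
    F = np.exp(2j * np.pi / N) ** np.outer(np.arange(N), np.arange(N))
    F_inv = np.conj(F) / N                       # F inverse = F conjugate over N
    print(np.allclose(F @ F_inv, np.eye(N)))     # True: the identity
    c = np.array([1.0, 0.0, 2.0, -1.0])          # made-up coefficients
    print(np.allclose(F_inv @ (F @ c), c))       # True: transform and come back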

OK, I was going to say here: do you notice that when I take a vector times its complex conjugate, what am I getting? I'm getting the length squared. I'm getting the right inner product. This is the right inner product when things are complex: something times its conjugate. I'm just reminding you that the inner product of two vectors, u dot v or (u, v), used to be the sum of u_i*v_i. But now we're in the complex case, so we should take the conjugate of one of them. And here I guess that's the v, and this is the u. Let me just say it this way: these columns are orthogonal. Those four columns, the columns of the Fourier matrix, are orthogonal. In other words, we have an orthogonal basis, and it's those four columns. I almost missed saying that. But you remember from last week, the whole point about sines and cosines and exponentials in the function case was orthogonality. That's what made everything work, and it makes everything work here. Here we have vectors, not functions. Orthogonal just means dot products are zero. But they're complex, so in the dot product, one factor or the other has to get conjugated. Anyway, that's an orthogonal matrix, apart from this capital N thing. So if I normalize it, if I take out one over square root of N, now the length of every column is one. So now I have orthonormal columns. One squared plus one squared plus one squared plus one squared is four, divided by four is one. And every column is a vector in four-dimensional complex space of length one. And orthogonal. So that means orthonormal. It's a fantastic matrix. That's the best basis we'll ever have for complex four-dimensional space. That basis.
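
Orthonormal columns means the normalized matrix is unitary. With the right complex inner product -- conjugate one factor -- that's a one-line check:

    import numpy as np

    N = 4
    F = np.exp(2j * np.pi / N) ** np.outer(np.arange(N), np.arange(N))
    Q = F / np.sqrt(N)                                # columns now have length one
    print(np.allclose(Q.conj().T @ Q, np.eye(N)))     # True: orthonormal columns
    print(np.round(np.vdot(F[:, 1], F[:, 2]), 12))    # 0: np.vdot conjugates its first factor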

Well, actually, when I say the best basis we'll ever have, I realize I wrote a book about wavelets. And the idea of wavelets is, there's another basis: a wavelet basis. And I hope to tell you about that at the very end of the semester. So wavelets would be another basis. Would you like to know the wavelet basis? Just in case the end of the semester doesn't come, can I write down here the wavelet basis? OK. And you can decide, you may like it better. So the four by four wavelet basis, all right. So this will be the wavelet matrix. Well, this column of ones was too good to lose, right? That's a terrific vector. All ones, the constant vector. It's the thing that gives us the average; this c_0 is going to again be the average of the y's. How do I get that average? The constant term: again I'm going to take one of everything, divide by four, and I'll have the average. So I like that vector. That's the DC vector -- the direct current vector -- if I'm talking about networks. In an image, you could call it the blackboard vector: the whole image of an empty blackboard would be constant. OK, so that's too good to lose. Now, the obvious thing orthogonal to that would be [1, 1, -1, -1]. Right? Now I've got two orthogonal columns. And probably that's pretty close to something I have over here.

Now, here's the point about wavelets. You'll see it with the next guy. You saw that those two columns are orthogonal, and real, and simple, nice numbers. Now, the next one will be [1, -1, 0, 0]. Do you see that that's orthogonal? Obviously, because its dot product with each of those is zero. And do you want to tell me the last column of this simple wavelet matrix? This would be the Haar basis, named after Haar, the Haar wavelets. The whole point about wavelets was a new basis. Haar actually lived a long time ago, and the wavelets got improved and improved, but here's the simplest one. What's the good fourth column, the fourth orthogonal vector here? What shall I start with? Zeroes, and then I'll put one, minus one: [0, 0, 1, -1]. So the beauty of wavelets is, I've got this averaging vector. And then I've got this difference, sort of on a big scale, and then I've got differences on a little scale, with zeroes. So as I keep going, I could do eight by eight, 16 by 16: I get the averaging guy, and then I get differencing -- large scale differences, then finer differences, finer, finer differences. (There's a quick numerical check of this Haar matrix at the very end of this section.) And that's a really good way to-- If your signal has jumps, we saw that Fourier wasn't perfect, right? And a lot of signals have jumps. Shocks, whatever you're trying to represent. This is a better basis to represent a jump, a discontinuity, than Fourier. Because Fourier, at a discontinuity, does its best, but it's not a local basis. It can't especially focus on that jump point. And the result is that the coefficients decay very slowly and it's hard to compute with. Whereas the wavelet is great for signals that have jumps, if something happens suddenly at a certain time. Anyway, that would be another basis. And of course I haven't made these unit vectors; I'd have to divide the first two columns by the square root of four and these last guys by the square root of two just to normalize them. But normalizing isn't important. Orthogonal is what is important. OK, so that's competition, you could say. That's the competition.

OK, back to the main guy. So this is the famous formula that gives the y's from the c's, and what's the other formula, the one that gives the c's from the y's? So now this is the transform in the other direction. It's going to be a sum from j=0 to N-1. And what's going to go in there? Have you got the idea of the two transforms? In one direction this is e to the i-- you see why we don't want to write it: that number is e^(2pi*i*j*k/N). That's what that number is, in the Fourier matrix. So you see, using w makes it much cleaner. OK, what goes in the other direction? w bar, the complex conjugate, e to the minus all that stuff: w bar to the same power jk. And then I have to remember to divide by N somewhere. I can put it here, or I can put it there. But the all-ones vector has length squared equal to N, and I've got to divide by an N somewhere. OK, those are the two formulas that many people would just remember: y_j is the sum of c_k times w^(jk), and c_k is 1/N times the sum of y_j times w bar^(jk). For them, that's the discrete Fourier transform and the inverse, and maybe they would put the N over on the other side; probably that would be more common. I got started with the Fourier matrix, with this side, and then the DFT matrix comes here, and I needed the N. So those are the formulas. I don't know how you'd prefer to remember these. Somehow you have to commit something to memory here, certainly what w is. And the fact that it's w and w bar. And the fact that the sums start at zero.
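
Both formulas together, as a sketch, with the 1/N placed on the c's the way it came out above:

    import numpy as np

    N = 4
    w = np.exp(2j * np.pi / N)
    jk = np.outer(np.arange(N), np.arange(N))
    y = np.array([3.0, 1.0, 4.0, 1.0])      # made-up sample values y_0..y_3
    c = (np.conj(w) ** jk) @ y / N          # c_k = (1/N) sum_j y_j wbar^(jk): the DFT
    y_back = (w ** jk) @ c                  # y_j = sum_k c_k w^(jk): the inverse DFT
    print(np.allclose(y_back, y))           # True: the two transforms invert each other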
You also have to remember that the entries of the matrix are just these simple expressions, w^(jk). And this then would be c equals F inverse y; that's what I've written down there in detail. OK, so that's the discrete Fourier transform, and I think we're out of time. I've got more to say in a little bit about the FFT, how that works.
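
And here is the check promised above for the four by four Haar wavelet matrix -- the average column, the coarse difference, and the two fine differences -- a short sketch:

    import numpy as np

    # columns: all-ones average, [1,1,-1,-1], [1,-1,0,0], [0,0,1,-1]
    W = np.array([[1,  1,  1,  0],
                  [1,  1, -1,  0],
                  [1, -1,  0,  1],
                  [1, -1,  0, -1]], dtype=float)
    print(W.T @ W)                               # diagonal: the four columns are orthogonal
    Q = W / np.sqrt((W**2).sum(axis=0))          # divide by sqrt(4), sqrt(4), sqrt(2), sqrt(2)
    print(np.allclose(Q.T @ Q, np.eye(4)))       # True: orthonormal after normalizing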