Lecture 29: Fourier Series (part 2)

Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR STRANG: OK, so I've got Quiz 2 to give back, with good scores. So it's Christmas early. Good work on that exam. And we all know Fourier series is a central topic in the final third of the course, so I'll just keep going on Fourier series. I'll give you homework on 4.1 and 4.2. 4.1 is what we complete today, the Fourier series; 4.2 is the discrete Fourier series. Those are two major, major topics for this part of the course. This is the periodic one, and 4.2 will be the finite one, with the Fourier matrix showing up. OK, so can I pick out-- I made a list of topics last time that were important for 4.1, for Fourier series, and I think these are the remaining entries on the list. I did the Fourier series for the odd square wave, the minus one stepping up to plus one. You remember that; let me just put that example down here. That was the step function; easier if I draw it than if I try to write equations. So it was minus one up to plus one, with of course period 2pi. And I guess we called that S(x): S for step function, S for square wave, and S also a signal that the function is odd. That means it goes with sine functions. And the series that we found from the formula was (4/pi) times sin(x)/1 + sin(3x)/3 + sin(5x)/5 and so on; it's just a great one to remember.
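
A quick numerical sketch of that series (an added illustration, not from the lecture): summing the first N odd harmonics of (4/pi)[sin(x)/1 + sin(3x)/3 + ...] reproduces the square wave away from its jumps.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Sum of the first n_terms odd harmonics of (4/pi) * sum sin(kx)/k."""
    total = np.zeros_like(x, dtype=float)
    for j in range(n_terms):
        k = 2 * j + 1                     # odd frequencies 1, 3, 5, ...
        total += np.sin(k * x) / k
    return (4.0 / np.pi) * total

x = np.linspace(-np.pi, np.pi, 2001)
approx = square_wave_partial_sum(x, 50)
exact = np.sign(x)                        # S(x): -1 then +1, period 2*pi
# Measure the error away from the jumps at 0 and at +-pi
interior = (np.abs(x) > 0.3) & (np.abs(x) < np.pi - 0.3)
print("max error away from jumps:", np.abs(approx - exact)[interior].max())
```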

And that's the example that has the important-- I don't know if I wrote down Gibbs' name. He was the great physicist at Yale, a hundred years ago or more, and he found this key phenomenon that appears all the time: any time you have a step function, the Fourier series does its best -- if I take a thousand terms, it'll do its best -- but it will overshoot by an amount that Gibbs found, and then it will get really close, and then it will overshoot again, symmetrically. Or anti-symmetrically, I should say. So that's the Gibbs phenomenon, of great importance. I write this one out because, first, it's an important one to remember. Second, it'll give us a good example for this important equality, that the energy in the function is the energy in the coefficients. That'll be good. Actually, maybe I should do that one first, because the delta function has infinite energy and we learn nothing from the equality there. So let me jump to the energy in the function and the energy in the coefficients.
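
Here is a small check of that overshoot (an added sketch): however many terms we take, the partial sums peak near 1.1789, which is (2/pi) times Si(pi), about nine percent of the jump above the true value one.

```python
import numpy as np

# Peak of the N-term partial sum just to the right of the jump at x = 0.
x = np.linspace(1e-6, 0.5, 100001)
for n_terms in (10, 100, 1000):
    partial = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):    # odd frequencies up to 2N-1
        partial += np.sin(k * x) / k
    partial *= 4.0 / np.pi
    print(n_terms, "terms -> peak", partial.max())   # stays near 1.1789, not 1.0
```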

So what do I mean by energy? Well, it's quadratic, right? It's the length squared of the function. So let me compute; maybe I'll do it on this board underneath and leave space for the delta function. The energy in x space is just the integral of S(x) squared dx. It's the length squared, just what you would expect. We have a function, not a vector, so we can't sum coefficients squared; instead we integrate all the values squared. And of course this is a number that we can quickly compute for that function. So what does it turn out to be? Well, what is S(x) squared for that function? One, obviously. I'm looking at the original S(x), not the series. The function is one there and minus one there; when I square those, S(x) squared is one everywhere. So I'm integrating one everywhere from minus pi to pi, and I get the answer 2pi. So that's a case where the energy in the physical space, the x space, was totally easy to compute.
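
A quick numeric confirmation (added sketch): S(x) squared is one at essentially every point of (-pi, pi), so a Riemann sum gives 2pi.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 1_000_001)
S = np.sign(x)                  # the square wave
dx = x[1] - x[0]
print(np.sum(S**2) * dx)        # ~ 6.28319
print(2 * np.pi)                # ~ 6.28319
```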

Now, what about the energy equality? This really neat, easy-to-remember equality? I'm just going to find it by taking the series squared. What's the integral of the right-hand side? The two sides are equal, so suppose I just fire away and integrate the square of that infinite series. You're going to say, well, that's going to take a while. But what's going to be good? The key point, the first point in last time's lecture, the first point in every discussion of Fourier series, is orthogonality. Sines times other sines integrated are zero, so a whole lot of terms will go. So I take that thing and I square it. Let me do that one here: the integral from minus pi to pi. May I take out the (4/pi) squared? Just so it's not confusing. Now, this is sin(x)/1, sin(3x)/3, sin(5x)/5, and so on, all squared, dx. So what do I get? The (4/pi) squared, and then a whole lot of terms. But the thing is, I can do this. Because when I square this, I'll have a lot of cross terms like sin(x)*sin(3x), and when I integrate those I get zero. So the only ones that don't give zero are when sin(x) integrates against itself, and sin(3x) against itself, and so on. When sin(x) integrates against itself, that's sine squared. You remember the integral of sine squared: its average value is 1/2, and over this interval of length 2pi I'm going to get pi for sine squared. We could do that calculation separately; it's just a standard integral. It uses the fact that sine squared x is the same as (1 - cos(2x))/2. Which is it, plus or minus? For history's sake we should get it right: it's a minus, so it looks OK now.
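
Both facts used here -- sines against other sines give zero, and sine squared integrates to pi -- are easy to confirm numerically (an added sketch using scipy's quad):

```python
import numpy as np
from scipy.integrate import quad

def sine_inner(k, l):
    """Integral of sin(kx) * sin(lx) over (-pi, pi)."""
    value, _ = quad(lambda x: np.sin(k * x) * np.sin(l * x), -np.pi, np.pi)
    return value

print(sine_inner(1, 3))   # ~ 0: orthogonal
print(sine_inner(3, 5))   # ~ 0: orthogonal
print(sine_inner(3, 3))   # ~ pi: sin^2 has average value 1/2 over length 2*pi
```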

OK, but I'm going to integrate. The integral of the cosine part is zero, and the integral of the 1/2 is the part I'm talking about. That 1/2 is there all the way from minus pi to pi, so I get a pi from each of these sines squared. And now, what are all the terms? Well, one over one squared -- that first one just had coefficient one -- but the next guy? Remember I'm squaring, so I have a 1/3 squared. And 1/5 squared. And so on. And here's the great point: these two are equal. I've got the same function, expressed in x space, and here it's expressed in sine space, you could say. In harmonic space. OK, so that's equal, and that's going to be the fact in general: the integral of S(x) squared equals this weighted sum of squares of coefficients. I'll write the general fact down below. But let's just see what we got for numbers here. On the left I had 2pi; on the right, (4/pi) squared times pi times the sum. So if I move that over there, what do I have? Two times pi squared over 16, and the two makes it an eight: pi squared over eight. And here I have the sum of 1/1 squared plus 1/3 squared plus 1/5 squared and so on. So that's an infinite sum that I would not have known how to do, except it appears here: the sum of one over all those odd squares. If I picked another example-- Oh, this was all the odd numbers squared. If I picked a different function, one that also had sin(2x)/2 and sin(4x)/4, I could have got the sum over all the squares. Do you happen to know what that comes out to be?
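
Summing the odd reciprocal squares numerically (added sketch) confirms pi squared over eight:

```python
import numpy as np

odd = np.arange(1, 200001, 2)          # 1, 3, 5, ...
print(np.sum(1.0 / odd**2))            # 1.23370...
print(np.pi**2 / 8)                    # 1.23370...
```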

I mean, here's a way to compute pi. We have a formula for pi, and we'd have another formula that involved all the squares. Maybe I have room for it up here. This would be the sum of 1/n squared, right? Here I had only the odd ones, and I got pi squared over eight. Do you happen to know what I get for all of them? So I'm also including 1/2 squared -- there's a quarter also in here -- and also a 16th, and also a 36th. And the answer happens to be pi squared over six. You can actually see why: the even terms 1/4 + 1/16 + 1/36 + ... are exactly a quarter of the whole sum, so the whole sum is pi squared over eight plus a quarter of itself, which makes it pi squared over six. The important point about this energy equality is not just being able to get a few very remarkable formulas for pi. There's another remarkable formula in the homework. This one is a little famous; do you know what it is? This is the famous Riemann zeta function. The sum of 1/n^x is the zeta function at x, and here is the zeta function at two. So if I could draw a zeta-- there's a Greek guy in this class who could do it properly, but anyway, I'll chicken out. Zeta at the value two, zeta(2). So we know zeta(2), we know zeta(4). I don't think we know zeta(3); I think it's not a special number like pi squared over six. Actually, the zeta function is the subject of a problem that Riemann did not solve, and nobody has succeeded in solving it since. There's a million-dollar prize for its solution. My neighbors think I should be working on this, but I know better. It's going to be solved one day, but it's pretty difficult. And that is to know where this zeta function is zero. Of course it isn't zero at two, because there it's pi squared over six. And the conjecture is that all the zeroes are at complex numbers with real part 1/2, so they're on this famous line, the vertical line with real part 1/2. And that's the most important problem in pure mathematics. So here we go: we got a formula for pi out of this energy identity, and I'll write it again once I have the complex form. But you see where it comes from. It just comes from orthogonality. The fact that we could integrate that square is what made it all work.
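
The step from pi squared over eight to pi squared over six, in two lines (an added sketch of the reasoning just described):

```python
import math

# The even terms 1/2^2 + 1/4^2 + ... equal (1/4) of the full sum,
# so: full = pi^2/8 + full/4, i.e. full = (pi^2/8) / (3/4) = pi^2/6.
odd_part = math.pi**2 / 8
full = odd_part / (1 - 0.25)
print(full, math.pi**2 / 6)    # both 1.64493...
```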

OK, let's do the delta function. So that's an even function, the delta, right? Now I'm looking at the delta function on minus pi to pi. It has the spike at zero, and it's certainly even, so we expect cosines. And what are the coefficients? It's just an important one to know -- a very important example. So what's a_0? In general, the coefficient a_0 in the Fourier series is the average: a for average. If I have a function -- delta(x), S(x), whatever my function -- the a_0 coefficient is the average value. So this is 1/(2pi) times the integral from minus pi to pi of my function. Where did that come from? I just multiplied both sides by one -- or by 1/(2pi) -- and integrated, and all the cosine terms disappeared, leaving a_0. And what's the answer? Everybody knows that integral. The integral of the delta function is one, so a_0 is just 1/(2pi). OK, now ready for a_1. How much of cos(x) do I have? Can I just change this formula to give me a_1, and you can tell me what it gives? Let me do it here. What's the formula for a_1? It's just like b_1, like the sine formulas; you have to remember you're dividing by pi only, because that average value of cosine squared is 1/2, as we saw. And then you have the integral of whatever your function is -- delta(x) in this case -- times cos(1x) dx. If we're looking for a_1, we've multiplied both sides by cos(x) and integrated. And what answer do we get for a_1? What's the integral of delta(x) times cos(x) dx? One, also one. The delta function, this spike, picks out the value of the other factor at the spike. It doesn't matter what that function is away from the spike, because away from the spike the other factor, delta, is zero. All the action is at the spike, and cos(x) happens to be one at the spike. So I get a 1/pi. And actually, it's the same for all these guys. The formula for a_2 would come by multiplying both sides by cos(2x) and integrating; the integral of cosine squared would give me the pi I'm dividing by, and I just need the integral of delta(x) times cos(2x), which is also one, so a_2 is also 1/pi. Let me just put in the formula: delta(x) = 1/(2pi) + (1/pi)(cos(x) + cos(2x) + cos(3x) + ...). All the cosines are there, all in the same amount, and the constant term is slightly different. OK, that's the formal Fourier series for the delta function -- formal meaning you can use it to compute, though of course some things will fail. Like, what's the energy? If I try to compute the energy of the delta function in x space, or in k space, what answer would I get? Right: the integral of delta squared, its energy, is infinite. If I have delta times delta, then I'm really in trouble, because -- if I can speak informally -- one delta says take the value of the other factor at zero, and that value is infinite. So the energy in x space is infinite. What about the energy in k space? Well, let's think. I'm going to sum the squares of the coefficients, fixed up by factors of pi -- you remember, that's what I had down here, and I'll have it again in a moment. And here all the coefficients are constants. So that sum is infinite.
Again, it's the sum of squares of constants, and that series doesn't converge; it blows up. So the energy is infinite. That's OK. The key is that formula -- that's the one to remember. It goes on forever: every frequency is in there, in the same amount. OK, good. That's the delta function example. Ready to go to complex?
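
Before going complex, a sketch (added, not lecture code) of what the partial sums of that delta series look like: a spike of height about (2N+1)/(2pi) at zero, always with total integral one.

```python
import numpy as np

def delta_partial_sum(x, N):
    """First N cosine terms of delta(x) ~ 1/(2*pi) + (1/pi) * sum cos(kx)."""
    total = np.full_like(x, 1.0 / (2 * np.pi))
    for k in range(1, N + 1):
        total += np.cos(k * x) / np.pi
    return total

x = np.linspace(-np.pi, np.pi, 4001)   # x[2000] is exactly 0
dx = x[1] - x[0]
for N in (5, 50, 500):
    s = delta_partial_sum(x, N)
    # height at x = 0 grows like (2N+1)/(2*pi); the area stays ~ 1
    print(N, "terms: height", s[2000], " integral ~", np.sum(s) * dx)
```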

Complex is no big deal because, actually, you can tell me the complex series for the delta function; we know it already. I'll write it right underneath. The complex series for the delta function: just turn these cosines into e^(ix)'s and e^(-ix)'s, e^(i2x)'s and e^(-i2x)'s, term by term, just to see it clearly for this great example. So what does cos(x) look like in terms of complex exponentials? Everybody knows cos(x) is the same as e^(ix) plus e^(-ix), divided by two -- because this is cos(x)+i*sin(x), this is cos(x)-i*sin(x), and when I add them I get two cos(x)'s, so I must divide by two. Let me divide by two all the way; so I'm dividing this 1/pi by two. You'll see it's so much nicer. So 1/(2pi) times the one, that's our first guy. And now I've got the next guy, because I need to divide by the two to get the cosine, and there's my two. OK, what do I have next? From this guy, 1/pi, still there. What's cos(2x), if I want to write it in terms of complex exponentials? This guy, I'm ready for him: e^(i2x) plus e^(-i2x), divided by two, and there's my two. Do you see what's happening? There's an example to show you why the complex case is so nice. Here we had to remember a different number for a_0. Here the next one will just be e^(i3x) and e^(-i3x), so that in the complex Fourier series for the delta function, all the terms have coefficient one divided by 2pi. That's a great example. And of course we see again that the sum of squares of the coefficients is infinite.
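
A one-point check (added sketch) that the complex form agrees with the cosine form term by term:

```python
import numpy as np

x, N = 0.7, 8    # an arbitrary test point and truncation
complex_form = sum(np.exp(1j * k * x) for k in range(-N, N + 1)) / (2 * np.pi)
cosine_form = 1 / (2 * np.pi) + sum(np.cos(k * x) for k in range(1, N + 1)) / np.pi
print(complex_form)   # imaginary part ~ 0
print(cosine_form)    # same real value
```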

So let's do the complex formula, by which I mean I'm taking any function F -- not necessarily even, not necessarily odd, not necessarily real. Any function can now be a complex function, because we're going to use complex exponentials here. So I'll have all the complex exponentials e^(ikx), for integer k, and k can go from minus infinity to infinity. That's the complex form: F(x) is a series again. The beauty is that every term looks the same. The thing you have to remember is that negative k is allowed, as well as positive k. You see that we needed the negative k to get cosines -- k was minus one there, k was minus two there -- and for sines we would also need them. So cosines go into it, sines go into it, F(x) could be complex. That's the complex series. Maybe we could have started with that series, but we didn't; we came to it here. But what's the formula for its coefficient? OK, for the next half-hour now, we have to think complex, and that will bring a few changes. So watch for the changes. We're almost in the same ballpark, but there are a couple of things to notice. So let me write down that series again: the sum from minus infinity to infinity of coefficients c_k times e^(ikx).

Now, what is c_k? What is the formula for c_k? As soon as we answer that question, you'll see the new aspect for complex. How do I find coefficients? I multiply by something, I integrate, and I use orthogonality. Same idea, repeat after repeat. The question is, what do I multiply by? Suppose I want a formula for c_3, the coefficient of e^(i3x). What am I going to multiply by that gives me the orthogonality I need, so that when I integrate, all the other integrals disappear? That's the key. Let's just do it. Here, suppose I have a typical other term, e^(i5x), and I'm looking for c_3. So watch -- this is the small point we have to make. I'm going to multiply by something and integrate. And what would be the good thing to multiply by? Well, if you were just a real person, you would say multiply by e^(i3x), integrate, and hope for getting zero from the other terms. You won't get zero. What am I doing here? I'm trying to check orthogonality. Let me use e^(ikx) instead of three; that's a typical complex term, any one of these guys. And I want to see what orthogonality is here, because everything hinges on orthogonality -- we've got to have it. Let me show you what it is. The thing you multiply by to get c_3 is not e^(i3x), it is? e^(-i3x). You take the conjugate: you change i to minus i in the complex case. So if I take e^(ikx) times e^(-ilx) -- notice that minus -- I claim that I get zero, except when k is l. And when k is l, I get 2pi. Zero if k is not l. That's the beautiful orthogonality. I'm not too worried about the 2pi; I'll figure that out. So what I'm saying is: when you're taking inner products, dot products, and you've got complex stuff, one of the factors takes a complex conjugate. Change i to minus i. Do you see that we've got a completely easy integral now?

What is the integral of this guy? How do I see that I really get zero? This is the first time I'm actually doing an integration and seeing orthogonality. Anybody likes integrating these things; they're so simple to integrate. Before I plug in the limits, what's the integral? How do I rewrite the integrand to integrate it easily? I put the two exponents together: this is the same as e^(i(k-l)x), right? If I multiply exponentials, I combine the exponents. And now I'm ready to integrate. So what is the integral of e^(i(k-l)x)? When I integrate e to the something x, I get that same thing again divided by the something -- here, divided by i(k-l). So now I've integrated, and I just want to plug in the limits. Oh -- those limits weren't meant to be zero and 2pi. You're totally right, I should have written minus pi to pi; any interval of length 2pi would work, but thank you. So why do we get zero? That's the whole point. Here we actually did the integral, and we can just plug in x=pi and x=-pi -- or zero and 2pi, or any pair of limits 2pi apart, any period of 2pi. Why do we get zero? Do you have to do the plugging-in part to see it? You can, certainly. But the point is, this function is periodic: e^(i(k-l)x) has period 2pi, so it has to be the same at the lower and the upper limit. That's what it's coming to. It's equal at those two limits, and therefore when I do the subtraction -- the value at the top limit minus the value at the bottom limit -- I get zero. So there is the actual check of orthogonality. The key point was: in the orthogonality, the inner product, for complex functions, one of them has to take the complex conjugate.
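
The same integral done numerically (added sketch), splitting e^(i(k-l)x) into its real and imaginary parts for quad:

```python
import numpy as np
from scipy.integrate import quad

def complex_inner(k, l):
    """Integral of e^(ikx) * e^(-ilx) = e^(i(k-l)x) over (-pi, pi)."""
    re, _ = quad(lambda x: np.cos((k - l) * x), -np.pi, np.pi)
    im, _ = quad(lambda x: np.sin((k - l) * x), -np.pi, np.pi)
    return re + 1j * im

print(complex_inner(5, 3))   # ~ 0: orthogonal
print(complex_inner(3, 3))   # ~ 2*pi
```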

Let me just do it for vectors, too. Complex vectors. I may have mentioned it; let me take the vector [1, i] as an extreme example, and suppose I want its inner product with itself, which will be the length squared. What's the inner product of that complex vector [1, i] with itself? Let me just raise it up so we see it; focus on this. Usually the length squared would be one squared plus i squared. No good. Why no good? Because it's zero: one squared plus i squared is zero. We want absolute values squared -- we need positive numbers here. So for the length squared, let me call that vector v. I want to take not v transpose v -- that's the thing that would give me zero if I did [1, i] -- but the complex conjugate of one of the two factors, v bar transpose v. It's just a matter of convention which one you conjugate. That is the thing that gives me the length of v squared. And it's the same thing for functions: the length of F squared is the integral of F(x) times its conjugate. And of course I can rewrite that as the integral of |F| squared; we have this handy absolute-value notation for F times its conjugate. It's like re^(i*theta) times re^(-i*theta): this is a complex number, this is its conjugate -- i went to minus i when I dropped below the real axis. And the product of re^(i*theta) times re^(-i*theta) is what? The exponentials cancel and I get r squared: the size of the number, squared. If you haven't thought complex, just make this change and you're ready to go: v bar transpose v, F bar transpose F if you like, the integral of |F| squared. And when we make the correct change, we still have the orthogonality. OK?
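
The vector version in code (added sketch): numpy's dot does not conjugate, while vdot conjugates its first argument.

```python
import numpy as np

v = np.array([1.0, 1.0j])
print(np.dot(v, v))     # 0: one squared plus i squared -- no good as a length
print(np.vdot(v, v))    # 2: conjugated inner product, |1|^2 + |i|^2
```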

So that's what orthogonality is; now what's the coefficient? After all that speech, tell me: what do I multiply both sides by? Let me start again. My F(x) is the sum of c_k*e^(ikx), and I'm looking for a formula for c_3. How do I find c_3? Let me slow down a second. Multiply both sides by what, and integrate from minus pi to pi? If I want c_3, should I multiply both sides by e^(i3x)? Nope: e^(-i3x). Multiply both sides by e^(-i3x) and integrate, e^(-i3x) dx. And then on the right side I get what? From the c_3 term, that'll be e^(-i3x) times e^(+i3x), so I'll be integrating one, and I get c_3 times 2pi. And from all the other terms, all zeroes. Orthogonality doing its job again: all those zeroes are by orthogonality, by exactly the integration that we did. If k is three and l is seven, or k is seven and l is three, the integral gives zero. So what's the formula then? Everything else is gone; divide by 2pi and you've got it: c_3 is 1/(2pi) times the integral of F(x) e^(-i3x) dx. There you are. That's the coefficient. OK.
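
That formula in code (added sketch; the test function is an arbitrary choice of mine): for F(x) = e^(i3x) + 2 e^(i5x), it should recover c_3 = 1, c_5 = 2, and zero elsewhere.

```python
import numpy as np
from scipy.integrate import quad

def coefficient(F, k):
    """c_k = 1/(2*pi) * integral of F(x) * e^(-ikx) over (-pi, pi)."""
    re, _ = quad(lambda x: (F(x) * np.exp(-1j * k * x)).real, -np.pi, np.pi)
    im, _ = quad(lambda x: (F(x) * np.exp(-1j * k * x)).imag, -np.pi, np.pi)
    return (re + 1j * im) / (2 * np.pi)

F = lambda x: np.exp(3j * x) + 2 * np.exp(5j * x)
print(coefficient(F, 3))   # ~ 1
print(coefficient(F, 5))   # ~ 2
print(coefficient(F, 4))   # ~ 0
```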

We had to move to complex numbers, and the only change to make is to conjugate one of the two factors when things can be complex. So that's the complex series. What's the energy equality now? Let me do the energy equality in x space and in k space. So that series is an equality of two functions; now I want to do energy. This is fantastic, and extremely practical and useful -- the energy is going to come out beautifully too. So what am I going to do for energy? I'm going to integrate |F(x)| squared dx. That's the energy in x space, in the function. What do I do on the other side? I integrate the series, the sum of c_k*e^(ikx), squared. Oh no, whoa -- if I put a plain square there, I'm fired, right? What do I have to do? I will put a square there, but first I have to straighten something out. Those curvy lines, which just meant take the thing and square it, should be changed to? Straight lines. Absolute value bars. Just a matter of getting straight. OK. Now, that means the thing times its conjugate. So what do I get? All the cross terms disappear, and for the terms that survive -- each term integrated against its own conjugate -- I'm integrating one, so I get a 2pi. This whole subject's full of these pi's and 2pi's; it's just part of the deal. Now, what's left? I'm getting c_k times its own conjugate, which is |c_k| squared, with no cross terms. So the integral of |F(x)| squared dx equals 2pi times the sum of |c_k| squared, and of course that sum goes from minus infinity to infinity; I've got them all. That's the energy in the coefficients, the energy in k space, and here's the energy in x space. You can expect that this orthogonality is going to give you something nice for the energy, for the integral of the square.
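
And the energy equality checked on the same test function (added sketch): the integral of |F|^2 equals 2pi times the sum of |c_k|^2.

```python
import numpy as np
from scipy.integrate import quad

F = lambda x: np.exp(3j * x) + 2 * np.exp(5j * x)    # c_3 = 1, c_5 = 2
energy_x, _ = quad(lambda x: abs(F(x))**2, -np.pi, np.pi)
energy_k = 2 * np.pi * (abs(1)**2 + abs(2)**2)
print(energy_x, energy_k)    # both ~ 31.4159
```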

Alright, so you saw that. We actually got a good number for the square wave, and we got infinity for the delta function. I've put on here as a last topic just to give you the word, the name of the right space of functions. Part of what this course is doing is to teach the language of applied math, so that when you see something like this you recognize: hey, I've seen that word before. So I want to know: what's the right space of functions? This integral is my measure for the length squared of a function. My square wave is great: its length squared was 2pi, a finite number. The delta function gave infinity, so it's not going to be allowed into the function space. It's a vector of infinite length, like the vector 1, 1, 1, 1, 1 forever. Too long. Pythagoras fails: we sum the squares and we get infinity. So what functions, what vectors, should we allow in our space? We're going to have a space that's infinite-dimensional, because we've got infinitely many coefficients; our functions have infinitely many values. So we've moved up from n-dimensional space to infinite-dimensional space. And everybody names it after the guy: Hilbert space. I don't know if you've seen that name before, Hilbert space. It's the space of functions with finite energy, finite length. So this square wave is in it; the delta function is not in it. And the point is that for this space of functions, these complex exponentials are a great basis, and the sines and cosines are another basis. We just have a whole lot of functions, and all the facts of n-dimensional space.

So what are important facts about n-dimensional space? One that comes to mind involves length -- length and angle, I could say. Many people would say this is the most important inequality in mathematics: it's about the dot product of two vectors, and it's called the Schwarz inequality. Several people found it independently; Schwarz is the single name most often used. What do you know about the dot product of two vectors? Somehow it tells you the angle between them, right? If I divide the dot product by the lengths of the two vectors, do you know what that quantity is, in geometry? It's the cosine of the angle between them. And cosines are never larger than one. So this quantity here is never larger than one; in other words, the dot product is never larger than the length of one vector times the length of the other vector. I could do an example. Let v be [1, 3], and let w be [1, 6]. I don't know how this is going to work. What's the dot product of those two vectors? Oh, it's 19. Sorry about that; let's change this, I'd like a nice number here. What do you suggest? Make it five somewhere? Five wouldn't be bad -- it wouldn't be too good either, but OK, w is [1, 5]. What's the dot product of those? 16. And now what am I claiming? That 16 is less than the length of this vector, which is what? What's the length of [1, 3]? The square root of the sum of the squares: square root of ten. It's good to do these small ones, just to remember. And the length of this guy is the square root of 26. And so I hope, and Schwarz hopes, that 16 is less than the product of those square roots. Can we check it? Let's square both sides; that makes it easier. So the right-hand side, when I square both sides, will be? 260. And what's the square of 16? 256. That was close, but it worked. I'll admit to you -- oops, not equal. Less or equal, Schwarz would say. And it's actually strictly less, because these vectors are not in the same direction. If they were exactly in the same direction, or opposite directions, the cosine would be one and we would have equality. But the angle between those two vectors is a pretty small angle; the cosine is quite near one, but not exactly one. So I'm glad somebody knew 16 squared. Does anybody know 99 squared? The reason I ask -- or 999, I'll make it sound harder -- is that when I was about 11 or something, I was always hoping somebody would ask me 999 squared, because I was all ready with the answer. Nobody ever asked. But you've asked, I think. So: 998,001. And now I've finally got a chance to show that I know it.
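
The worked example as a check (added sketch): v = [1, 3], w = [1, 5], dot product 16, and 16 squared is 256, just under 260.

```python
import numpy as np

v = np.array([1.0, 3.0])
w = np.array([1.0, 5.0])
dot = np.dot(v, w)                             # 16
bound = np.linalg.norm(v) * np.linalg.norm(w)  # sqrt(10) * sqrt(26) = sqrt(260)
print(dot, bound)       # 16.0 < 16.124...
print(dot / bound)      # cosine of the angle, just under 1
```

OK, have a great weekend and see you.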