Lecture 28: Fourier Series (part 1)


Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK. I hoped I might have Exam 2 for you today, but it's not quite back from the grader. It's already gone to the second grader, so it will not be long. And I hope you've had a look at the MATLAB homework. There were some errors in the original statement, in the location of the coordinates, but I think they're fixed now. So ready to go on that MATLAB. Don't forget that it's four on the right-hand side and not one, so if you get an answer near 1/4 at the center of the circle, that's the reason. Just that factor of four to remember. I'll talk more about the MATLAB this afternoon in the review session right here. Just to say, I'm highly interested in that problem. Not just increasing N, the number of mesh points in the octagon, but also increasing the number of sides. So there are two numbers there: we had N points on a ray, out from the center, but we have M sides of the polygon. And I'm interested in both of those getting big. Growing. I don't know how. And maybe a reasonable balance is to take N proportional to M; I think that's a pretty good balance. I'm very happy with whatever you do, but I'm really interested to know what happens as both of these increase. How closely, how quickly do you approach the eigenvalues of a circle? And you might keep the two proportional as you increase them.

So let me say more about that this afternoon, because it's a big day today: we start Fourier. Fourier series, the new chapter, the new topic. In fact, the final major topic of the course. So I'm in Section 4.1, talking about Fourier series. Fourier series is for functions that have period 2pi. It involves things like sin(x), like cos(x), like e^(ikx); for all of those, if I increase x by 2pi, I'm back where I started. Those are the sorts of functions that have Fourier series. Then we'll go on to the other two big forms, crucial forms of the Fourier world. But 4.1 starts with the classical Fourier series. Now, I realize many of you will have seen Fourier series before. I hope you'll see some new aspects here. So, let me just get organized. It's nice to have some examples that involve only sines. Since the sine is an odd function -- anti-symmetric across zero -- the odd functions are the ones that will have only sines, that will have a sine expansion. Cosines are the opposite. Cosines are symmetric across zero. Like a constant, or like cos(x). Zero comes right at the symmetric point. So those will have only cosines. And a lot of examples fit in one or the other of those, and it's easy to see them. The general function, of course, is a combination of odd and even. It has cosines and it has sines; it's just the sum of the two pieces.
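
To make that last remark concrete, here is the standard split (not special to this course) of any function into its even part, which carries the cosines, and its odd part, which carries the sines:

```latex
F(x) = \underbrace{\tfrac{1}{2}\bigl(F(x)+F(-x)\bigr)}_{\text{even part: cosines}}
     \;+\; \underbrace{\tfrac{1}{2}\bigl(F(x)-F(-x)\bigr)}_{\text{odd part: sines}}
```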

So, this is the standard Fourier series, which I couldn't get onto one line, but it has all the cosines, including the slightly different constant term cos(0), and all the sines. But because this one has three different pieces -- the constant term, the other cosines, all the sines -- three slightly different formulas, it's actually nicest of all to use this final form. Because there's just one formula. There's just one kind. And I'll call its coefficient c_k, and now they multiply e^(ikx), so we have to get used to e^(ikx). We may be more familiar with sin(kx) and cos(kx), but everybody knows e^(ikx) is a combination of them. And if we let k go from minus infinity to infinity, so we've got all the terms -- including e^(-i3x) and e^(+i3x), which would combine to give cosines and sines of 3x -- we get one nice formula. There's just one formula for the c's. So that's one good reason to look at the complex form, even if our function is actually real. That form is kind of neat. And the second good reason, the really important reason, is that when we go to the discrete Fourier transform, the DFT, everybody writes that with complex numbers. So it's good to see complex numbers first, and then we can just translate the formulas. And these are also almost always written with complex numbers. So this is the way to see it.
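
For reference, here is the series just described, in both its real and complex forms, with the standard conversion between the two sets of coefficients (it follows from e^(ikx) = cos kx + i sin kx):

```latex
F(x) \;=\; a_0 \;+\; \sum_{k=1}^{\infty} a_k \cos kx \;+\; \sum_{k=1}^{\infty} b_k \sin kx
      \;=\; \sum_{k=-\infty}^{\infty} c_k\, e^{ikx},
\qquad
c_0 = a_0,\quad c_k = \tfrac{1}{2}(a_k - i b_k),\quad c_{-k} = \tfrac{1}{2}(a_k + i b_k)\ \ (k \ge 1).
```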

OK, so what do we do about Fourier series? What do we have to know how to do, and what should we understand? Well, if you've met Fourier series you may have met the formula for these coefficients. That's sort of like step one. If I'm given the function -- whatever the function might be; it might be a delta function, an interesting case, always. Always interesting. Always crazy, right? But it's always interesting, the delta function. The coefficients can be computed. The coefficients, you'll see -- I'll repeat those formulas. They involve integrals. What I want to say right now is that this isn't a course in integration. So I'm not interested in doing more and more complicated integrals and finding Fourier coefficients of weird functions. No way. I want to understand the simple, straightforward, important examples. And here's a point that's highly interesting. In practice -- in computing practice, and we're close to computing practice here, in everything we do; this is really constantly used -- one important question is: is the Fourier series quickly convergent? Because if we're going to compute, we don't want to compute a thousand terms. Hopefully ten terms, 20 terms would give us good accuracy. So that question comes down to: how quickly do those a's and b's and c's go to zero? That's highly important. And we'll connect this decay rate with the smoothness of the function. Oh, I can tell you even at the start.

OK, so I just want to emphasize this point. We'll see it over and over: for a delta function, which is not smooth at all, we'll see no decay at all in the coefficients. They're constant. They don't decrease as we go to higher and higher frequencies. I'll use the word frequency for k here. So high frequency means high k, far up the Fourier series, and the question is: are the coefficients staying up there big, so we have to worry about them? Or do they get very small? So a delta function is a key example, and then a step function. So what will be the deal with those? If I have a function that's a step function, the decay rate is 1/k. So they do go to zero. The thousandth coefficient will be roughly of size 1/1000. That's not fast. That's not really fast enough to compute with. Well, we meet step functions -- I mean, functions with jumps. And we'll see that their Fourier coefficients do go to zero, but not very fast. And we get something highly interesting. So when we do these examples -- I've sort of moved on to examples now -- these are two basic examples. What would be the next example? Maybe a hat function. Do you see what I'm doing at each step? I'm integrating. A hat function might be the next one -- yeah, a ramp, exactly. A hat function, which is a ramp with a corner.

Now, so that's one integral better. You want to guess the decay rate on that one? k squared. Now we're getting better. That's a faster falloff: one over k squared. And then if we integrate again, we'd get one over k cubed. Then one more integral -- one over k to the fourth -- would be a cubic spline. You remember the cubic spline is continuous. Its derivative is continuous -- that gives us the one over k cubed. Its second derivative is continuous -- that gives us the one over k to the fourth. And then you really can compute with that, if you have such a function. So, point: pay attention to decay rate. That, and the connection to smoothness. So examples: we'll start right off with these guys. And then we'll see the rules for the derivative. Oh yeah, rules for the derivative. The beauty of Fourier series is -- well, actually you can see this. You can see the rule. Let me just show you the rule. The whole point about Fourier is that it connects perfectly with calculus, with taking derivatives. So suppose I have F(x) equals -- I'll use this form -- the sum of c_k e^(ikx). And now I take its derivative, dF/dx. What do you think is the derivative? What's the Fourier series for the derivative? Suppose I have the Fourier series for some function, and then I take the Fourier series of the derivative. So I'm kind of going the backwards way: less smooth. The derivative of the step function involves delta functions, so I'm going less smooth as I take derivatives.

It's so easy, it jumps out at you. What's the rule? Just take the derivative of every term. So I'll have the sum of -- now, what happens when I take the derivative? Everybody see what happens when I take the derivative of that typical term in the Fourier series? The derivative brings down a factor ik. So it's ik times what we have. So these are the Fourier coefficients of the derivative. And that again makes exactly the same point about the decay rate -- or the opposite, the non-decay rate. As I take the derivative, I get a rougher function, right? The derivative of a step function is a delta; the derivative of a hat would have some steps. We're going less smooth as we take more derivatives. And every time we do it -- you understand the decay rate now? The derivative just brings down a factor ik, so its high frequencies are more present; they have larger coefficients. And of course, the second derivative would bring down (ik) squared. So for our equations -- for example, let me just do an application here, without pushing it. We started this course with equations like -u''(x) = delta(x-a). Right? If we wanted to apply Fourier to a differential equation, how would I do it? I would take the Fourier series of both sides. I would jump into what people would call the frequency domain. So this is a differential equation written as usual in the physical domain, with physical variable x, position. Or it could be time.
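
Collecting the rule just stated, together with the second derivative we're about to need -- each derivative multiplies the k-th coefficient by ik, and each integration divides by ik:

```latex
F(x) = \sum_{k=-\infty}^{\infty} c_k e^{ikx}
\;\Longrightarrow\;
\frac{dF}{dx} = \sum_{k} (ik)\, c_k e^{ikx},
\qquad
\frac{d^2F}{dx^2} = \sum_{k} (ik)^2 c_k e^{ikx} = -\sum_{k} k^2 c_k e^{ikx}.
```

That is exactly the decay-rate ladder: delta function, coefficients O(1); step function, O(1/k); hat function, O(1/k^2); cubic spline, O(1/k^4).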

And now let me take Fourier transforms. So what would happen here? If I take the Fourier transform of this -- well, we'll soon see, right? We get the Fourier coefficients of the delta function. That's a key example, and you see why. Over here, what will we get? Now I'm taking two derivatives, so I bring down ik twice. Here it would be the sum of whatever the delta's coefficients are. Shall we call those d? The alphabet's coming out right: d for delta. So the right side has coefficients d_k. And what about the left side? If the solution u has coefficients c_k -- so let's call this u now -- then what happens to the second derivative? ik, ik again: that's i squared k squared, which is minus k squared, and then the minus sign out front. So we would have the sum of k squared c_k e^(ikx). That is, if u itself has coefficients c_k, then -u'' has these coefficients. So what's up? How would we use that? It's going to be easy. We'll just match terms. Right? What's my formula, what should c_k be if I know the d_k? I'm given the right-hand side. We're just doing what's constantly happening, this three-step process. You're given the right side. Step one: expand it in a Fourier series. Step two: match the two sides. So what's the formula for c_k? In this application -- which, by the way, I had no intention of doing, but it jumped into my head and I thought, why not just do it.

What would be the formula for c_k? It'll be d_k divided by? k squared. You're just matching terms. Just the way, when we expanded things in eigenvectors, we'd match the coefficients of the eigenvectors; that involved just a simple step, and here it's d_k over k squared. Good. And then what's the final step? The final step is: now you know the right coefficients, add them back up. Add the thing back up -- like here, only I'm temporarily calling it u -- to find the solution. Right? Three steps. Go into the frequency domain: write the right-hand side as a Fourier series. The second, quick step is to look at the equation for each separate Fourier coefficient. Match the coefficients of these eigenvectors -- eigenfunctions. That's this quick middle step. And then you've got the answer, but you're still in Fourier space, you're still in frequency space. So you have to take these coefficients and put them back, to get the answer in physical space. Right? That's the pattern. Over and over. So that's sort of the general plan of applying Fourier. And when does it work? Because, I mean, it's fantastic when it works. So what is it about this problem that made it work? When is Fourier happy? You know, when does he raise his hand and say, yes, I can solve that problem? OK, what do I need here for this plan to work? I certainly don't need always just -u''; Fourier can do better than that. But what's the requirement for Fourier to work perfectly?

Well, linear equations, right? If we didn't have linear equations we couldn't do all this adding and matching. So, linear equations. Now, what other linear equations? Could I have a c(x) in here -- my familiar c(x), a variable material property inside this equation? No. Well, not easily, anyway. That would really mess things up: if there's a variable coefficient in here, then it's going to have its own Fourier series, and we're going to be multiplying Fourier series. That comes later, and it's not so clean. So it works perfectly when we have constant coefficients. Constant coefficients in the differential equation. And then one more thing. A very important other thing: the boundary conditions. Everybody remembers now -- it's part of the message of this course -- that boundary conditions are often a source of trouble. They're part of the problem; you have to deal with them. Now, what boundary conditions do we think about here? Well, fixed-fixed was where we started. So if we had fixed-fixed boundary conditions, what would I expect? Then things would give me a sine series, possibly. Because those are the eigenfunctions we're used to. Fixed-fixed: it's sines that go from zero back to zero. Fixed-free will have some sines or cosines. Periodic would be the best of all. So we need nice boundary conditions. The boundary conditions, let me just say: periodic would be great, or sometimes fixed-free. Our familiar ones, at least in simple cases, can be dealt with.
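
Here is a minimal numerical sketch of that three-step plan, for -u'' = delta(x-a) with periodic boundary conditions -- in Python rather than the course's MATLAB, and with names of my own choosing. It uses the standard coefficients d_k = e^(-ika)/(2 pi) for delta(x-a). One wrinkle worth flagging: the k = 0 term must be skipped, since a periodic solution exists only after the mean is removed from the right-hand side.

```python
import numpy as np

a = 1.0                    # location of the delta in -u''(x) = delta(x - a)
K = 200                    # truncate the series at |k| <= K
x = np.linspace(-np.pi, np.pi, 1001)

u = np.zeros_like(x, dtype=complex)
for k in range(-K, K + 1):
    if k == 0:
        continue                              # solvable only with the mean removed
    d_k = np.exp(-1j * k * a) / (2 * np.pi)   # step 1: coefficients of delta(x - a)
    c_k = d_k / k**2                          # step 2: match terms, c_k = d_k / k^2
    u += c_k * np.exp(1j * k * x)             # step 3: add the series back up

u = u.real   # imaginary parts cancel in the sum, since c_{-k} = conj(c_k)
```

The three comments mark the three steps: into the frequency domain, divide by k squared, back to the physical domain.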

OK. So now -- boy, that board is already full of formulas. But let's go back to the start and ask: how do we find the coefficients? Because that was the first step. Take the right-hand side, find its coefficients. Just as in applying eigenvalues the first step is always to find the eigenvalues, here, in applying Fourier, the first step is always to find the coefficients. So, how do we do that? At the beginning it doesn't look too easy, right? Because, let me take the first guy, sin(x). Let me take an example, a particular S(x). The most important, interesting function. I want S(x) to be an odd function, so that it will have only sines. And it should have period 2pi. So let me just graph it. It's going to have coefficients -- I use b for sines -- so it's going to have b_1*sin(x), and b_2*sin(2x), and so on. It's got a whole infinity of coefficients, right? We're in function space. We're not dealing with vectors now. So how is it possible to find those coefficients? Let me choose a particular S(x). Since it's 2pi-periodic, if I tell you what it is over one 2pi interval, just repeat, repeat, repeat. I'll pick the 2pi interval to be minus pi to pi here, just because it's a nice choice; that's a length of 2pi.

There's zero; I want the function to be odd across zero. And I want it to be simple, because it's going to be an important example that I can actually compute. So I'm going to make it a one on this side, and a minus one there. So, a step function. A square wave. And if I repeat it, of course, it would go down, up, down, up, and so on. But we only have to look over this part. OK. Now, you might say, wait a minute, how are we going to expand this function in sines? Well, sines are odd functions. Everybody knows what odd means? Odd means that S(-x) is -S(x). That's the anti-symmetry that we see in that graph. We also see a few problems in this graph. At x=0, what is our sine series going to give us? If I plug in x=0 on the right-hand side I get zero, certainly. So the sine series is going to do that. And actually, Fourier series tend to do this: in the middle of a jump, the series picks the middle point of the jump. It's the best possible choice. And what about at x=pi, at the end of the interval? What does my series add up to at x=pi? Zero again, because sin(pi), sin(2pi) are all zero. And that'll be in the middle of that jump. So it's pretty good.
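
Written out, the square wave being graphed is

```latex
S(x) = \begin{cases} \;\;\,1, & 0 < x < \pi,\\ -1, & -\pi < x < 0, \end{cases}
\qquad S(x+2\pi) = S(x),
```

with the series taking the value 0, the middle of the jump, at x = 0 and at x = plus or minus pi.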

But now what I'm hoping is that my sine series is going to somehow get up to one real fast, and level out at one. We're asking a lot. In fact, when Fourier proposed this idea, Fourier series, there were a lot of doubters. Was it really possible to represent other functions, maybe even including a step function, in terms of sines or maybe cosines? And Fourier said yes, go with it. So let's do it. And he turned out to be incredibly right. How do I find b_2? I don't want just the formula; I want to know why. What's the step to find the coefficient b_2? Well, the step is -- the key point, which makes everything possible. Why don't I identify the key point, without which we would be in real trouble. The key point is that all these sine functions -- sin(x), sin(2x), sin(3x), sin(4x) -- are orthogonal. Now, what do I mean by two functions being orthogonal? My picture in function space is that here is the sin(x) coordinate. And somewhere there's a sin(2x) coordinate, at 90 degrees, and then there's a sin(3x) coordinate, and then -- I don't know where to point now, but there is a sin(4x); we're in infinite dimensions. And the sines are an orthogonal basis. They're orthogonal to each other. What does that mean? For vectors, we take the dot product. For functions -- we don't use the words dot product so much as inner product. So let me take the inner product. The whole point is orthogonality. Let me write that word down. Orthogonal.

The sines are orthogonal. And what does that mean? It means that the integral, over our 2pi interval -- or any 2pi interval -- of one sine, sin(kx) say, multiplied by another sine, sin(lx), dx, is -- you can guess the answer, and everything depends on this answer. And it is? Zero. It's just terrific. If k is different from l, of course. If k is equal to l, then I have to figure that one out; I'll need it. If k=l, so I'm integrating sine squared of kx, then it's certainly not zero. I'm getting, like, the length squared of the sin(kx) function. If k=l, what is it? It has some nice formula. Very nice. Let's see. Do I need to think about sine squared of kx? What does it do? Well, just graph sine squared x. What would the graph of sine squared x look like, from minus pi to pi? It goes up, and then it goes back down, and then it keeps that up. So, what's the integral of that? The answer is: its average value is 1/2. The integral of sine squared is half the length of the interval. The whole interval is of length 2pi, and we're taking the area under sine squared. I may have to come back to it, but the answer is half of 2pi, which is pi. So you could say the length of the sine function is the square root of pi.
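
In symbols, here are the two facts everything hinges on. Both follow from the product-to-sum identity sin A sin B = (1/2)[cos(A-B) - cos(A+B)], since every nonzero-frequency cosine integrates to zero over a full period:

```latex
\int_{-\pi}^{\pi} \sin kx \,\sin lx \,dx =
\begin{cases} 0, & k \neq l,\\ \pi, & k = l \ge 1, \end{cases}
\qquad\text{using}\quad
\sin^2 kx = \tfrac{1}{2}\bigl(1 - \cos 2kx\bigr).
```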

So these are integrals. You told me the answer was zero, and I agreed with you, but we haven't computed it. Nor have we really got the sine squared integral. So a little bit to fix, still. But the crucial fact -- I mean, those are highly important integrals that just come out beautifully. And beautifully really means zero; that's the beautiful number, right, for an integral. OK, so now how do I use that? Again, I'm looking for b_2. How do I pick off b_2, using the fact that sin(2x) times any other sine integrates to zero? Ready for the moment? To find the coefficient b_2? Let me start this sentence and you finish it. I'll multiply both sides of this equation by sin(2x). And then I will integrate. I multiply both sides by sin(2x), so I have S(x) sin(2x). And on the right-hand side I have b_1 sin(x) sin(2x). And then comes b_2 -- now, here's the one that's going to live through the integration. It's going to survive, because it's sin(2x) times sin(2x): sin(2x) squared. And then comes the b_3 guy, b_3 sin(3x) sin(2x). Everybody sees what I'm doing? As we did with the weak form in differential equations, I'm multiplying through by these guys. And then I'm integrating over the interval. And what do I get? Integrate every term, dx. And what's the result? What is that first integral, sin(x) times sin(2x)? Zero. It's gone. What is this integral, the integral of sin(3x) times sin(2x)? Zero.

All those sines integrate to zero -- and I have to come back and see that it's a simple trig identity that shows why it's zero. Do you see that everything is disappearing, except b_2? So we finally have the formula that we want. Let me just put these formulas down. So b_k -- b_2, or b_k in general. Let me go back here. What did b_2 come out to be? I have b_2, that's a number, and it multiplies this integral of sin(2x) squared, which I've integrated; it's some number, and that number is pi. On the other side is the integral of S(x) sin(2x), the integral that I mentioned; you'd have to compute that integral. So I have b_2 times pi, and I just divide by the pi. So I divide by pi, and I get the integral, from minus pi to pi, of my function times my sine. That's the model for all the coefficients of orthogonal series. That's the model. The cosines, the complex exponentials with their complex coefficients, the Legendre series, the Bessel series -- everybody's series will follow this same model, because all those series are series of orthogonal functions. Everything is hinging on this orthogonality, the fact that one term times another gives zero.

What that means, really -- I want to say it with a picture, too. So let me draw two orthogonal directions. I intentionally didn't make them just the x and y axes. This might be the direction of sin(x), and this might be the direction of sin(2x). And then I have a function, and I'm trying to find out: how much of sin(2x) has it got in it? How much of sin(x) has it got in it? And then of course there's also a sin(3x) and all the other sin(kx)'s. The point of this 90-degree angle is that I can split this S(x), whatever it might be -- I can find its sin(x) piece directly, by just projecting it. It's the projection of my function on that coordinate. If you don't like sin(x), sin(2x), S(x), write v_1, v_2, whatever, to think of it as vectors. So that projection is b_1*sin(x). That's the right amount of sin(x). And the whole point is that that calculation didn't involve b_2 and b_3 and all the other b's. When I'm projecting onto orthogonal directions, I can do them one at a time: one one-dimensional projection at a time. This b_k*sin(kx) is -- I'm just saying this in words -- the projection of my function onto sin(kx). And the point is, I could do this and get this answer because of that 90-degree angle. If I didn't have 90 degrees, do you see that this wouldn't work? Suppose my two basis functions are at some 40-degree angle.

Then I take my function. Can I project it onto this guy, and project it onto this guy, so the projections are there and there? Do they add back to the function that I started with, the given function? No way. I mean, these are much too big, right? If I add that one to this one I'm way out here somewhere. But over here, with 90 degrees, these are the two projections: project there, project there, add those two pieces, and I get back exactly the function. I just want to emphasize the importance of orthogonality. It breaks the problem down into one-dimensional projections. So here we go with b_k*sin(kx). OK, let me do the key example now. This example: let me find the coefficients of that particular function S(x), the step function, the square wave. I'll just use this formula. OK, maybe I'll erase so that I can write the integration right underneath. Oh, one little point here. Well, not so little; it's a saving, and it's worth noticing. The reward for picking an odd function is that this integral is the same from minus pi to zero as from zero to pi. In other words, for an odd function I get the same answer if I just do the integral from zero to pi -- which I have to do anyway -- and double it. I don't know if you regard that as a saving, but in some way the work is only half as much.
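
Putting the last two points into symbols -- the projection formula for the coefficient, and the halving trick (S(x) and sin kx are both odd, so their product is even, and the integral over the symmetric interval is twice the integral over [0, pi]):

```latex
b_k \;=\; \frac{(S,\sin kx)}{(\sin kx,\sin kx)}
    \;=\; \frac{1}{\pi}\int_{-\pi}^{\pi} S(x)\sin kx\,dx
    \;=\; \frac{2}{\pi}\int_{0}^{\pi} S(x)\sin kx\,dx.
```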

It'll make this particular example easy, so let me do it. What are the Fourier coefficients of the square wave? OK, so I'll do this integral. From zero to pi, what is my function? From the graph: just one. This is going to be a picnic, right? The function is one there. So S(x) is one, and I want 2/pi times the integral from zero to pi of just sin(kx) dx, right? Which is -- so I've got 2/pi; now I integrate sin(kx), and I get minus cos(kx), right? Evaluated between zero and pi. And what else? What have I forgotten? The most important point: the integral of sin(kx) is not minus cos(kx). I have to divide by k. It's the division by k that's going to give me the correct decay rate. 2/(pi*k). Alright, now I've got a little calculation to do. I have to figure out what cos(kx) is at zero -- no problem, it's one -- and at the other point, at x=pi. So what am I getting, then? I'm getting 2/(pi*k) times -- with that minus sign, evaluating at x=0 gives the one -- one minus whatever I get at the top, cos(k*pi). That's b_k. So there's a typical -- well, not typical, but very nice -- answer.
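
The calculation just done, on one line:

```latex
b_k \;=\; \frac{2}{\pi}\int_{0}^{\pi} \sin kx\,dx
    \;=\; \frac{2}{\pi}\left[\frac{-\cos kx}{k}\right]_{0}^{\pi}
    \;=\; \frac{2}{\pi k}\bigl(1-\cos k\pi\bigr).
```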

Now let's see what these numbers are. So let me take the 2/pi out, and then just list these numbers for k equal to one, two, three, four, five. Let me put the k in here because that's part of it. So there's a constant 2/pi. At k=1, what do I get? This is the little bit that needs patience. At k=1, the cosine of pi is? Negative one. So I have one minus minus one: I get a two. A two over a one; k is one. Alright, that is the coefficient for k=1. Now, what's b_2, the coefficient for k=2? I have 1-cos(2pi); what's cos(2pi)? One. So they cancel, and I get a zero. There is no b_2. What about b_3? For b_3 I have 1-cos(3pi). What's the cosine of 3pi? It's negative one again, same as the cosine of pi. So that gives me a two, and now I'm dividing by three: 2/3. Alright, let's do two more. k=4, what do I get? Zero, because the cosine of 4pi has come back to one. So I get a zero. And what do I get from k=5? 1-cos(5pi), and cos(5pi) is back to negative one, so one minus negative one is a two. You see the pattern.
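
The pattern in one line: cos(k pi) is (-1)^k, so the even coefficients vanish and the odd ones are 4/(pi k):

```latex
b_k = \frac{2}{\pi k}\bigl(1-(-1)^k\bigr)
    = \begin{cases} \dfrac{4}{\pi k}, & k \text{ odd},\\[6pt] 0, & k \text{ even}. \end{cases}
```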

And so let me just copy the famous series for this S(x). Let's see. The twos, together with the 2/pi, I'll make 4/pi, right? I'll take out all those twos. So I have 4/pi times sin(x). I have no sin(2x); forget that. Now I do have some sin(3x)'s -- how much do I have? 4/pi sin(3x), but divided by three, right? And then there are no sin(4x)'s. But then there will be a 4/pi times the sine of -- what's the next term now? Are you with me? Sine of 5x, divided by five. OK. That's a great example, it's worth remembering. Factor the 4/pi out if you want to: 4/pi times sin(x), plus sin(3x)/3, plus sin(5x)/5, and so on. It's a beautiful example of an odd function. OK, and let's see. So, MATLAB can draw this graph far better than we can, but let me draw enough so you see what's really interesting here. Interesting and famous. The leading term is 4/pi sin(x); that would be something like that. That's as close as sin(x) can get: 4/pi is the optimal number, the optimal coefficient, the projection. This 4/pi*sin(x) is the best, the closest I can get to one on that interval with just sin(x). But now when I put in sin(3x), I think it'll do something more like this. Do you see what's happening there? That's what I've got with sin(3x), and of course it's odd on the other side.

What do you think it looks like with sin(5x)? It's just so great, you have to let the computer draw it a couple of times. You see, it goes up here, and then it's sort of -- you know, it's getting closer. It's going to stay closer to that. I don't know if you can see from my picture -- I'm actually proud of that picture, it's not as bad as usual -- but it makes the crucial points, two crucial points. One is, I am going to get closer and closer to one. These oscillations, these ripples, will be smaller. But here is the great fact, and it's a big headache in calculation: at the jump, the first ripple doesn't get smaller. The first ripple just gets thinner. You see the ripples moving over there, but their height doesn't change. Do you know whose name is associated with that phenomenon? Gibbs. Gibbs noticed that the ripple height doesn't shrink: as you add more and more terms, you're closer and closer to the function over more and more of the interval, so the ripples get squeezed to the left, toward the jump. The area under the ripples goes to zero, certainly. But the height of the ripples doesn't. It doesn't stay exactly constant, but nearly constant; it approaches a famous number. And of course we'll have the same odd picture down here. And it'll bump up again -- the same thing is happening at every jump. In other words, if you're computing shocks -- if you're computing air flow around shocks with Fourier-type methods, Gibbs is going to get you. You'll have to deal with Gibbs, because the shock has that extra ripple. OK, that's a lot of Section 4.1. Energy we didn't get to, so that'll be the first point on Friday. And I'll see you this afternoon to talk about the MATLAB or anything else. OK.
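
A short sketch of that picture, again in Python with names of my own choosing. It plots the partial sums and prints their maxima; the printed peak approaches roughly 1.1790 -- the famous number is (2/pi) Si(pi), about a 9% overshoot of the full jump of 2 -- and refuses to drop to 1, no matter how many terms you take:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 20001)

def partial_sum(x, n):
    """First n nonzero terms of the square-wave series: (4/pi) sum over odd k of sin(kx)/k."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n, 2):          # k = 1, 3, 5, ..., 2n-1
        s += np.sin(k * x) / k
    return (4 / np.pi) * s

for n in (1, 2, 3, 20):
    y = partial_sum(x, n)
    plt.plot(x, y, label=f"{n} term(s)")
    print(n, y.max())                      # ripple height: tends to ~1.1790, not to 1

plt.axhline(1, color="k", linewidth=0.5)   # the level the series is trying to reach
plt.legend()
plt.show()
```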