Recitation 6


Instructor: Prof. Gilbert Strang

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR STRANG: OK, so this is a review session open to questions on homework. Open to questions on topics in the exam that's coming tomorrow. This morning I wrote down what the four questions would be about, and I'm glad I did. I never-- should have done this many times before. So you would know exactly and get down to seeing what those problems are. And of course the matrices called K, and A transpose C A are going to appear probably more than once. So, open for any questions. About any topic whatsoever. Please. Yes, thank you.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: The fourth question on the exam?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: I'm glad you used that word, fun. Yes. That's exactly what I mean. Section 2.4, and they are fun, yeah. So I drew by hand a little graph with nodes and edges. And you want to be able to take that first basic step. So the first step, which is as far as we got by last Wednesday, the first lecture on Section 2.4, was just creating the matrix A, understanding A transpose A, and of course there's more to understand about A transpose A. Actually, why don't we take one second. Suppose I have a graph with six nodes, let's say. Can you imagine a graph with six nodes? And every node connected to every other node. So however many edges that would be. Actually, my grandson just got that question on his exam. He was told there were l islands with a flight from every island to every other island, and he was asked how many flights that makes. So I sent him the answer. But I was very happy with his reply. He said "that's exactly what I got." So, what do you know. It seems to work. So anyway. Suppose we had, how many nodes did I say? Six? OK.

So we have like a six-node-- So n is six, and it's a complete graph, this is really just to start us off talking about some of these problems. So the matrix A, so I think the number of edges would be 15, where did I come up with that number 15? And is it right, actually? Yes. This is one way to count it: the first node has five edges going out and then the second node would have four additional edges, and three and two and one. And five, four, three, two, one add to 15. What would be the shape of A in that case? So it has a row for every edge. So 15 by 6, I think. OK, and I could create A transpose A just to have a look at it. So it would be, what shape would A transpose A be? Six by six. Symmetric, of course. Will it be singular or non-singular? Singular. Singular, because we haven't grounded any nodes. We've got all these nodes, all these edges, nothing. We haven't taken out that column; when I reduce it to five by five, then it'll be invertible. But six by six, so what will be the diagonal of this?

This'll be now six by six, the size will be six by six. And what will go on the diagonal is the degrees of every node. That means how many edges are coming in, and what number is that? Five. So I'll have five down the diagonal, and what else, what will be off the diagonal? Minus, a whole lot of minus ones, a minus one above and below for every edge. And since we have a complete graph, how many minus ones have we got? All of them. All minus ones. So all minus ones and all minus ones. That's fine to cross over if you need to, sure. I'm not sure what else to say about that matrix. Well, it's not invertible. Now let's take the next step which, I'm now going probably beyond the exam part. Just really to get us started, I ground the sixth node. Suppose I ground node number six, that wipes out a row and a column, is that right? So I'm now left with a five by five matrix. It still has all minus ones there and there, but now it's five by five, now it is what kind of a matrix, what are its properties? Square, obviously, symmetric obviously, and now invertible. Positive definite, OK. So it's fives there and now I would have four minus ones. Let's just write them in here. Typical row, now, in this five by five matrix would have four minus ones and of course more here, and more here, and one there. And symmetric. All I want to say is that that matrix, we don't often write down the inverses of matrices, but that one I think we could. I think we could actually, and it's a little bit interesting to know, for that special matrix, everything about it. We could find its eigenvalues, its determinant, its pivots, the whole works for that matrix. And that's one page of the book, maybe at the end of Section 2.4, I think, comes more detail about that matrix.
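That complete-graph matrix is easy to build and poke at numerically. A minimal sketch, assuming NumPy; the edge ordering and sign convention are my own choices:

```python
import numpy as np

def complete_graph_incidence(n):
    """Incidence matrix of the complete graph on n nodes:
    one row per edge, -1 at the lower-numbered node, +1 at the other."""
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            row = np.zeros(n)
            row[i], row[j] = -1.0, 1.0
            rows.append(row)
    return np.array(rows)

A = complete_graph_incidence(6)    # 5+4+3+2+1 = 15 edges, so A is 15 by 6
AtA = A.T @ A                      # 6 by 6: degree 5 on the diagonal, -1 everywhere else
G = AtA[:5, :5]                    # "ground node six": delete its row and column
```

Here AtA equals 6I minus the all-ones matrix, which is singular (the all-ones vector is in its null space), while the grounded 5-by-5 block is invertible; its determinant comes out to 1296 = 6^4, the number of spanning trees of the complete graph on six nodes, which is the kind of "everything about it" that can be worked out for this special matrix.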

So in a way that special guy is like our special K matrix, -1, 2, -1, for second differences. Somehow this is taking, all nodes are connected. Instead of in a line, springs in a line, points in a line, we now have everybody connected to everybody. So this is sort of the special matrix when everybody is connected to everybody and we could learn all about that particular one. But then, of course, if some edges are not in then some zeroes will appear off the diagonal in the adjacency matrix part. The degrees will drop a little if we're missing some edges and the inverse will be not some simple expression. Anyway, that's to get us started. So that's really where the last lecture, Friday, brought us to this point. And I'll take this chance to add in the block matrix just because I think of it as quite important. So for this case, C is the identity. So I would have the identity up in that block, A in that block, A transpose in this block. That would be my mixed method matrix, you could say. My saddle point matrix. It starts out very positive definite. But it ends up negative definite. And that's typical of mixed methods, when both unknowns, the currents as well as the potentials, are included in the system. So A transpose w, that was f, I think, and this is b. I just mentioned that again, it was in Friday's lecture and it's in the book but I would just want to say I often refer to this as the fundamental problem of numerical analysis, is how do you solve that system. And of course elimination is one way to do it. When I eliminate w, that will lead me to the equation A transpose Au equals, I think it'll be, there'll be an A transpose b minus f, I think. C being the identity there. So that's the mixed method, this is the displacement method, and this is the popular one. Because it's all at once. But people think about this one, too. So that's like saying again what was in Friday's lecture and will be used going forward. OK, that was just to get us started. 
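That elimination step can be sanity-checked in a few lines. A sketch assuming NumPy, with a small random full-column-rank A standing in for an incidence matrix and C = I as in the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n))    # stand-in matrix with independent columns
b = rng.standard_normal(m)
f = rng.standard_normal(n)

# Saddle-point (mixed method) system: [[I, A], [A^T, 0]] [w; u] = [b; f]
M = np.block([[np.eye(m), A],
              [A.T, np.zeros((n, n))]])
wu = np.linalg.solve(M, np.concatenate([b, f]))
w, u = wu[:m], wu[m:]

# Displacement method: eliminate w = b - A u, leaving A^T A u = A^T b - f
u2 = np.linalg.solve(A.T @ A, A.T @ b - f)
```

Both routes give the same u: the all-at-once saddle matrix starts with a positive definite block and ends with the zero block, and eliminating w reduces it to the positive definite system A transpose A.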
Now, please let's have some questions. We need another question. Who else? Yes, thank you.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: The first on the homework. What number was that? Section 2.2, number six? About the trapezoidal rule? Yes. OK, now I did speak about that a little in the last review session, but can I just say a couple words more about it here?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: What's it asking? Yes. People often ask me that about my problems. I don't know. You can't read my mind? You should. OK, so the point is that for special differential equations-- So let me just summarize what we did there. So this we did before, but I didn't do everything. So what we did before was point out that the system du/dt = Au conserves energy. u squared, u of time-- for all time, u of time squared stays constant if A transpose equals minus A. OK, essentially you take the derivative, it's got two terms because we've got a product there, a product rule. The derivative will be, one term will involve A, the other term will involve A transpose, and if our matrix has this antisymmetric property, those terms will cancel; the derivative will be zero, and that'll mean that this is a constant. OK, so that's the differential equation.

Now, the question was about the difference equation. So we're taking the trapezoidal rule and we want to show that u_n squared stays constant for the trapezoidal rule. And so what that means, in other words, is step by step, u_(n+1) squared-- and the way we have to work with that is as u_(n+1) transpose u_(n+1)-- is the same as u_n transpose u_n. Now, that was just an identity, that's just the meaning. Now, I want to show that that's the key. That's what we would want to prove. That the trapezoidal rule copies the property of constant energy of the differential equations. And of course, you know that in oscillating springs when there's no source, no forces coming from outside, the total energy will stay constant. And you could think of many other situations. You have a spacecraft, where you've turned off the engines. It's just going there, it's possibly rotating. So there you've got angular velocity included in the total energy. Important fact, if energy stays constant you want to know it. And you're very happy if the finite difference method copies it. OK, so then it was just a question of-- Here we took derivatives to do that one. Here we're going to be playing with differences, and my suggestion was that the good way to get it was to take that vector times the trapezoidal equation and show that this turned out to-- The trapezoidal equation is something equals zero, and you hope, and it takes a few lines of jiggling around, that when you do that you'll get the difference, you get exactly this. You get u_(n+1) transpose u_(n+1) minus u_n transpose u_n. That's the goal.

We know that the trapezoidal equation-- maybe I move everything onto one side so I have something equals zero. Then my trick is take that vector equation, multiply by that, play around with those terms and you'll get this. So, since that is zero, this is zero. And that's exactly what our goal was to prove. So it's just in the jiggling around and maybe we don't want to take the full time because I'll post that. Actually, I may post some of these solutions even before the quiz. And therefore before the homework is due, just because this particular homework, as I say, is not-- The graders are just going to be so busy with all the quizzes. This is for learning. Now, here's the one thing you want to learn out of this messy computation. You'll also find cross terms when you just do this mechanically: a u_(n+1) transpose u_n and a u_n transpose u_(n+1), and they'll come in with opposite signs. That'll be when you've plugged in the fact that A transpose equals minus A, and all I wanted to do is ask you, what does that term amount to? Because that term will show up. One way or another. And what does it equal? Zero. Everybody should know that. That's the one thing, that you have to add to just mechanically computing, is the fact that the dot product of that vector with that, v transpose w is the same as w transpose v. So, it's good to just call attention to the easy things that are like that.
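To see the conservation claim in action without doing the algebra, here is a sketch (NumPy; the 2-by-2 rotation generator is my choice of antisymmetric A):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])       # A^T = -A: the antisymmetric property
dt = 0.1
I = np.eye(2)
# Trapezoidal rule: (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n
step = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)

u = np.array([1.0, 0.0])
energies = []
for _ in range(200):
    u = step @ u
    energies.append(u @ u)        # u_n^T u_n stays at 1, up to roundoff
```

The energy list is flat at 1 to machine precision, which is exactly the identity u_(n+1) transpose u_(n+1) = u_n transpose u_n that the problem asks you to prove.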

Why is that? That's because both sides, this is equal to what? v_1*w_1, v_2*w_2, v_3*w_3, component by component. And this is w_1*v_1. But we're just talking numbers at that point. So the number v_1 times w_1 is certainly the same as w_1 times v_1. Component by component, they're exactly the same and of course then the dot products are the same. So that's the fact which for this v and that w, makes the term go away, that's still sitting there. Other terms go away because of this property. Having written that and recognizing that we have Fourier stuff coming up in the last third of the course, where we have complex numbers. I have to say that when I have complex vectors, do you know about those? The dot product, or the length squared, if this was a vector with possibly complex numbers, I wouldn't take the length squared just by adding up these squares. Suppose my-- Yes, I'm really anticipating weeks ahead, but suppose my vector was [1, i]. What's the length of that particular vector v? Well, if I do v transpose v, what do I get? For v equals [1, i], what does v transpose v turn out to be? Zero. One squared plus i squared is zero. No good. So obviously some rule has to change a little bit to get the correct number. The correct length squared, I would rather expect two. The size of that squared plus the size of that squared. So I don't want to square i, I want to square its absolute value.

And the way to do that is conjugate one of the two things. Now I'm taking [1, i], and on the other side I have [1, -i] and that gives me the two that I want. So what I'm doing, when I've got complex vectors then I would really do that, and now that is not the same as that. Right. Yeah. If in one case I'm doing the conjugate of v and in the other case it's the conjugate of w, then I've got a complex conjugate. OK, that's a throwaway comment that just is relevant because it keeps us focused for a moment on the real case, where we do have equals. Now, I don't know if that was a sufficient answer? It wasn't a complete answer because I didn't do the manipulations, but the solutions posted should show you those. And, of course, you can organize them a little differently. OK, good for that one. Yes, please.
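In NumPy the two inner products are literally different functions, which makes the point concrete (np.vdot conjugates its first argument; the vectors are the example from above plus one of my own):

```python
import numpy as np

v = np.array([1.0, 1j])
print(v @ v)              # 1 + i^2 = 0: squaring without conjugating gives "length" zero
print(np.vdot(v, v))      # conjugates the first factor: |1|^2 + |i|^2 = 2

# With a conjugate in place, swapping the two vectors conjugates the answer:
w = np.array([2.0, 3.0 + 1j])
print(np.isclose(np.vdot(v, w), np.conj(np.vdot(w, v))))   # True
```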

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: The other two terms here?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Yes. You couldn't cancel them. Well, I recommend just, that's how the best mathematics is done, right? You want zero, you just X it out. Anyway. Let me leave the posted solutions to be a hint and come back to it. Yeah.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: The next problem was?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Oh, yes. OK, that one I spoke a little bit about, but now let me read from the problem set that I got. I noticed that was quite brief. Oh, to find that actual angle? Somehow that's a little interesting, isn't it?

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: To tell the truth, I meant numerically. I meant, what's the point of that question. The point is we're trying to solve, this isn't a big deal. But it was just if we're using this trapezoidal method, the beauty of that, exactly what our thing proves, is-- Here the constant energy surface is the circle. The point of the trapezoidal method for this simple equation u''+u=0, which amounted to a first-order system in u and v whose matrix A was antisymmetric. So it fit perfectly in that problem, and if we started on the circle we stay on the circle. And if I take 32 steps I come back pretty closely to here, and I just thought it might be fun to figure out numerically, with MATLAB or a calculator or something, we take an angle, theta, is that what the problem asks, and then come around here to 32 theta, and 32 theta will not be exactly 2pi. But darned close. Because you could see in the figure in the book it wasn't too far off.

So the question was, what is that theta? So I think the formula turned out to be that each step multiplies by this factor, one plus i delta t over two, divided by one minus i delta t over two-- delta t or h, whichever you call the step. And when you plug in delta t to be 2pi over 32, so that's the, what did I say, that's the tangent of theta or something? Sorry, I've forgotten the way the problem was put. Oh, it's e to the i theta, yeah. What's the main point about that complex number? When you look at that complex number what's the most essential thing you see? That it, yeah, tell me again.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Magnitude one, great. It's a number divided by its complex conjugate, so it's a number of magnitude one. And now tell me, if you see a complex number of magnitude one, what jumps to mind? What form do you naturally put it in? e^(i*theta). Every complex number of absolute value one is just beautifully written in the form e^(i*theta). That complex number is some point on the unit circle, so there it is. Right there, there it is, e^(i*theta). With that-- theta is negative there, because we're going the wrong way. No big deal. Maybe here theta's positive. I've forgotten, so I won't try to go either the clockwise or the counterclockwise way around. So, if I wanted to figure out what theta was and plugged in these things, let's see. So that's pi over 32, delta t over two would be pi over 32, and this guy would be its conjugate. pi over 32, and then in the solutions that'll be posted for the homework, this will be, I think maybe, maybe the theta comes out to be, it's kind of cool actually, twice the arc tangent of pi over 32 or something. I didn't know that. But that'll be in the solutions for you to check. So now, why do I like e^(i*theta) so much? Because now I could tell you what this point is, after you've done it 32 times. What angle have you reached? This is the punch line of using complex numbers, e^(i*theta), is that they're absolutely great for taking powers.

If I take the 32nd power of x plus iy, I'm lost, right. x plus iy to the 32nd power starts out x^32, ends up i^32 y^32, with horrible stuff in between. But what is the 32nd power of e^(i*theta)? e^(i*32theta). Just that angle 32 times exactly as we've drawn it. So that's the point e^(i*32theta). OK, and therefore if we now know what theta is, so yeah. So it must be pretty near 2pi, but not exactly. I guess that's about right. In fact, having got this far, the tangent of a very small angle is approximately what? It's approximately the angle, right? The sine of a very small angle is approximately the angle. The cosine is approximately one. The tangent is approximately the angle. So this, 32 theta, is 32 times two times approximately pi over 32. And what answer do you get? 2pi. Which makes sense. So you could say what the trapezoidal method has done is to replace the exact angle by the inverse tangent approximately. That's sort of nice. In this example you can get as far as that and you could actually find out how near that is. And, by the way, how near would I expect it to be? I would expect it, so what do we know about the trapezoidal method without having proved it? It's second order accurate, right?

If it was first order accurate, I would expect it to miss by something of the size of theta. Maybe a fraction of theta. But being second order accurate, I'm expecting it to miss by something of size theta squared. So it would be pretty near zero, right. And actually, another way I know it's around-- So the error would be something like, it would have a 32 squared in the denominator. And I've just thought of another way to see that. We just said that the first term in the arc tangent of a small angle alpha, whatever that is, pi over 32-- the first term in the arc tangent is? The angle. That's what we just said. Then, do you know what would come next? Now we're looking at the error. So the arc tangent of a very small angle will start with the angle, and I want to ask you about how many theta squareds and theta cubes. You're seeing what you can do with paper-and-pencil type stuff.

Here's my main question, how many theta squareds in there? You want to make a guess? A mathematician's favorite number, zero. Right, there will be no theta squared terms in it. That's an odd function, so I'm expecting only odd powers and therefore I won't be surprised to see theta cubed come up first. And then when I multiply by the 32, I get the theta squared that I guessed we would have. OK, once again I'll stop there because that's a very narrow path to be following but it shows you how. You know, there's a lot of room still for what you can do with paper and pencil to understand a model problem. And then the computer would tell us about a serious problem of following the solar system for a million years.
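Everything in this discussion-- the magnitude-one growth factor, theta = 2 arctan(pi/32), and the second-order miss-- fits in a few NumPy lines (a sketch, with h = 2 pi / 32 as in the problem):

```python
import numpy as np

h = 2 * np.pi / 32                          # step size delta t
g = (1 + 1j * h / 2) / (1 - 1j * h / 2)     # trapezoidal growth factor
print(abs(g))                               # magnitude 1: we stay on the circle

theta = np.angle(g)                         # equals 2 * arctan(h/2)
err = 32 * theta - 2 * np.pi                # after 32 steps: near 2 pi, not equal
# Leading error from arctan(x) = x - x^3/3 + ...: about -2*pi**3/3 / 32**2,
# so the miss has the 32 squared in the denominator, as expected
print(err)
```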

OK, another totally different question, if I can. Yes, thank you.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Yeah, sure.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: 2.4.1, right. A mistake in the book.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Or in the, yeah. It's quite possible. OK, there's a printed error in the graph. Yeah. So in numbering the edges, well let's blame it on the printer, right? Not the author. OK, so the diagonal edge, that five probably was intended to be a three, yeah. Thank you. So we'll catch that in the next printing. And you recognize that always, numbering the edges and nodes is a pretty arbitrary thing, it's just if we number differently that just reorders the rows of the matrix. If we number the edge differently, it'll reorder the rows and it'll reorder rows and columns of A transpose A. So it won't make a serious difference in the matrix. Yeah.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Do you want to go back to this guy? OK.

AUDIENCE: So if you have an anti-symmetric matrix, does it follow that the eigenvectors used are perpendicular?

PROFESSOR STRANG: This is a good question. This guy, way up here, with this property,

AUDIENCE: The eigenvectors are perpendicular?

PROFESSOR STRANG: The eigenvectors are perpendicular. Yes, yeah. So we have, there's this, like, the nobility among matrices are the ones with perpendicular eigenvectors. So that includes symmetric matrices, this is a good and straightforward point. So these are the good matrices. Symmetric matrices. A transpose equals A. Their eigenvalues lie on the real line. And these are all perpendicular eigenvectors. What about antisymmetric? OK, that means A transpose is minus A. They also fall in this noble family of matrices, and where are their eigenvalues? Pure imaginary, right up here. Now do you want to know, who else is in this family? What's the other, this is the complex plane; there's one more piece of the complex plane that you know I'm going to put. Which is? What else to make that complex plane look familiar, it's going to have the unit circle. Every complex plane has got to have the unit circle. OK, so these guys went with the real line and the imaginary axis, and now what do you think goes with the unit circle? This will be the matrices-- Can I call them Q instead, because I called them Q this morning. Q transpose is Q inverse. Q transpose Q is the identity, and they're the orthogonal matrices. So those matrices again, beautiful matrices in the best class. And their eigenvalues are on the unit circle. And that would be--

Why don't I just show you why? Because orthogonal matrices, there are not so many that are really worth knowing. So, let me take Qx=lambda*x, and what is it that I want to prove? I want to prove that the eigenvalues have absolute value one. That's the unit circle. So how do I show that the eigenvalues have absolute value of one? Let me take the dot product with Qx transpose. So both sides, I'll do Qx transpose Qx and I'll do lambda*x transpose lambda*x, right? Only these are complex. I've got to take complex stuff. OK. I just took the length squared of both sides, and kept in mind the possibility that this x and lambda could be, and probably will be, complex numbers. Now, what do I have on the left? Do you see it happening? I get an x bar transpose, what do I get? Q transpose Qx on the left side. That's the combination I'm looking for. For an orthogonal matrix. Let's imagine the matrix itself is real, otherwise I would just conjugate it. What's nice about that left side? What fact am I going to use about Q? Q transpose Q is the identity. So this thing is nothing but x bar transpose x. That's the length of x squared. What have I got on the right side? I've got the length of x squared times a number, lambda bar times lambda, which is mod lambda squared. It's there, now. On the left side I have the length of x squared. On the right side I have the length of x squared times that number, mod lambda squared. Therefore, that number has to be one and the eigenvalues are on the unit circle.

So, I've given you the three big important classes of matrices with perpendicular eigenvectors. I think anybody would wonder, OK, what about other eigenvalues. What's the condition for perpendicular eigenvectors that includes this. And includes this. And includes this, and also allows eigenvalues all over the place. Would you like to know that condition? What the heck. That condition, that includes all these is this, that A transpose times A equals A times A transpose. That's the test for perpendicular eigenvectors. A transpose commutes with A. So this passes, of course. This passes, of course. This passes because both sides are the identity, and then some more matrices pass also. OK. Is that alright? You asked for some linear algebra and you got it. Now I'm ready, yes, thanks.
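A quick numerical tour of the three classes, with the normality test A transpose A = A A transpose applied to each (the particular 2-by-2 matrices are my own small examples):

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 3.0]])     # symmetric: eigenvalues on the real line
Z = np.array([[0.0, 2.0], [-2.0, 0.0]])    # antisymmetric: eigenvalues pure imaginary
c, s = np.cos(0.7), np.sin(0.7)
Q = np.array([[c, -s], [s, c]])            # orthogonal (a rotation): |lambda| = 1

for M in (S, Z, Q):
    print(np.linalg.eigvals(M))
    print(np.allclose(M.T @ M, M @ M.T))   # all three pass the normality test
```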

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: 2.4.19. Oh, let me look. 2.4.19. Ah. OK, yes, sorry and I should have done better with that. So one kind of graph that is important is a grid like this. And we'll see them-- Two, three, four, one, two, three, four. That would be where-- These are the nodes. So this is a grid. I meant to draw them all in, but I won't. n squared. So n is six, and I'd have 36 nodes. And you can see the edges in there. So that's the graph I have in mind. And the reason that problem is there is that last year, I think it was last year or the year before, we spent some time with figuring out resistances and currents and so on for these problems. And we needed some fast way to generate A, because this matrix A is now, what size is the matrix A? It's got, I don't know, how many edges does it have? One, two, three, four, five, maybe 30 edges going across and 30 coming down. It'll be 60 by how many columns in this matrix? You know the answer now and it's worth knowing, for the quiz of course. 36. OK. Anyway, the class rebelled at creating these matrices and working with the matrices, with 2,160 non-zeroes. People were dropping the course. So we needed a command that would create A pretty quickly. And so that's what the book, and so this was like the 18.085 command. After we stumbled around for a while, we discovered that a MATLAB command called kron was a quick way to create the matrix. We'll see that when we get to that point.

This is an important graph. And it's closely connected to Laplace's-- You remember Laplace's--? I'll just tell you. Laplace's equation is this, you have a second x derivative, we know how to deal with those. But it also has a second y derivative. So I'm really looking ahead at the most important equation of Chapter 3, Laplace's equation. And suppose I use finite differences. I want a matrix K_(2D) that deals with this 2D problem. And let me just say what it would be. At a typical point this x derivative is giving me a minus one, a two and a minus one. And the y derivative is giving me a minus one above, it moves this guy up to four, and a minus one below. So instead of -1, 2, -1 along a typical row, we'll now have a four on the diagonal and four minus ones in a certain pattern. Anyway. You'll see that, it's interesting when we get to it. That would show up in A transpose A. So what I've said here is what happens with A transpose A. I guess I'm hoping that you begin to know these matrices, first seeing them occasionally in homeworks and then in the lecture. Good. But that's looking ahead.
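The kron construction just mentioned looks like this in NumPy (np.kron plays the role of MATLAB's kron; K is the fixed-fixed -1, 2, -1 matrix, and the row-major grid numbering is my convention):

```python
import numpy as np

n = 6
# 1D second-difference matrix K: 2 on the diagonal, -1 on the off-diagonals
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)

# 2D version on the n-by-n grid: x-differences plus y-differences
K2D = np.kron(I, K) + np.kron(K, I)    # 36 by 36 when n = 6
```

Every diagonal entry of K2D is 4, and a typical interior row has exactly four -1 entries: the five-point stencil for Laplace's equation described above.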

I needed some questions that are like, close to, really on what we're doing or what the quiz would do. Any - thank you.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Oh yes. A little bit. OK, yeah. So I wrote down this equation and what I'm writing right there is the new part. Sort of new, and I guess-- Equals some right hand side f(x). And the discrete version will be A transpose C A u equals a vector f, maybe with a delta x squared in there. OK. I guess maybe, I don't want to go far but I want you to see that if we have a coefficient c in here it should show up there. You could actually, it might be reasonable to look at 3.1 just to look slightly ahead to see the parallels, but you would get them right without a lecture on it. Your coefficient shows up in the differential equation, and it shows up on the diagonal of C in the difference equation. I won't give a whole lecture on that, just to say that correspondence is exactly the one we know. A is a difference matrix. Like the derivative. C will be a diagonal matrix, A transpose will be whatever that comes out to be. And so you've seen A transpose A, but think again about that difference-- And ask yourselves this. I suggest, take c to be one, get c out of there. And just think, again, what is the difference matrix A with a boundary either fixed-fixed or fixed-free, those will be two different A's. What are the A's for fixed-fixed and for fixed-free?

This is what we were doing at the very beginning of the course. So A is a first difference matrix, and A transpose A will be the second difference. So the A transpose A, of course, I was doing A transpose A, then the answer here would be the matrix K and the answer here would be the matrix T. Or, depending which end is free, but we'd have one change. That's A transpose A, but now think about the A that it came from. So A will be, A is the matrix that takes differences of the u's, and then A transpose A takes second differences. Of all the questions asked, this is the one most relevant for the exam and for what we've done so far. I've gone off onto topics that we look ahead to, but this is where we are. So that matrix A is a first difference matrix, and then you put in the boundary conditions. OK.

There was another question. Yeah.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Of number four? Which number four in which? Oh, in the quiz. Oh yes, right. Yes. Did I tell you what problem four was? No. I hope not. OK problem four in the quiz. It's about a delta function? Yeah. What do I know, what do you want me to tell you? So the delta, of course, comes in as, we've seen it, as the right hand side of a differential equation. So it might be the right-hand side even of this equation. So if this equation was delta of x, or x-1/2 or something. I mean, the essential thing is that when delta's on the right side, that gives you a drop in the slope. Suppose I just have a first order equation like d -- I'll call it z -- dz/dx = delta(x-a). Yeah. And suppose that I know that z(0) starts at zero. You've got to be able to solve that equation, so this is a useful prep for that. That would be a good equation to know the solution to. And what kind of function is this? What kind of a function is z(x) there? I just use the letter z to have a new letter. z(x) will be a step. Right. z(x) will be a step function, yes. That's right. OK, so the solution is that at the point a, which I'm presuming is beyond zero, I come along at zero and I step up. Yep. OK. That would be a picture of z(x), yeah. So it's basic delta function material that I'm speaking about here.

So z jumps by one and if z is a slope, then the slope jumps by one or drops by one, depending on a plus or a minus sign here. The things that we've used to deal with delta functions, so that's what Question 4b is about. The drop in slope, or the jumps, or whatever happens when a delta function shows up on the right side. Good question. Yep.
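A discrete version of that picture: replace delta(x - a) by a grid spike of height 1/dx and integrate (a standard approximation; the grid and the choice a = 1/2 are mine):

```python
import numpy as np

N = 100
dx = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

rhs = np.zeros(N + 1)
rhs[np.argmin(np.abs(x - 0.5))] = 1.0 / dx   # discrete delta at a = 1/2

z = np.cumsum(rhs) * dx    # solve dz/dx = delta(x - a) with z(0) = 0
# z is a step function: 0 before the spike, 1 after it
```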

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: 1.1.27. Well.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Oh, and then left a typo.

AUDIENCE: [INAUDIBLE]

PROFESSOR STRANG: Oh. I'm sorry, OK. 1.1.27. My copy isn't showing it. Yeah. I may have to punt on that question. Or do you want me to look at it? OK, can you maybe just pass the book up, alright. I'll try to read what that question was. OK. Yeah, maybe this is a question to answer. This is probably the one new question that got added. OK, yeah. Fair enough. So this is continuing the discussion that you asked me to start here about A, the first difference matrix, OK. So I'll go a little more over that. So in the book here, this writes down a matrix A_0, which is-- I'll discuss this matrix. So there's a difference matrix. You see what I mean by a difference matrix, if I were to multiply it by u, [u_0, u_1, u_2, u_3] or something, I would get differences, right? I'd get u_1-u_0, u_2-u_1, and u_3-u_2. Good. So that's A_0 times u. Alright. So that's a difference matrix. What graph would that come from? That's also the incidence matrix of a very simple graph. This is connecting Chapter 1 with Chapter 2. It would be a line of springs, it would be a graph. It's got edges and nodes. It's got three edges, so I've got three rows. It's got four nodes so I've got four columns. Are the columns independent here? No, they never are. The vector of all ones would have differences of all zeroes. So what would that, that would be the difference matrix for fixed? Free? Fixed, free, circular-- what would that difference matrix correspond to? Everybody's saying it. Say it a little louder. Free-free. That's a free-free problem, because they're all in there. We haven't knocked any out. There are no boundary conditions yet. That's a free-free, so that A_0 would be free-free. OK.
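That free-free matrix and its dependent columns can be checked directly, along with the second difference A_0 transpose A_0 it produces (a sketch in NumPy):

```python
import numpy as np

n = 4
# Free-free first difference A_0: 3 edges (rows) by 4 nodes (columns)
A0 = np.zeros((n - 1, n))
for i in range(n - 1):
    A0[i, i], A0[i, i + 1] = -1.0, 1.0

ones = np.ones(n)
print(A0 @ ones)    # all zeros: the all-ones vector shows the columns are dependent

B = A0.T @ A0       # free-free second difference: rows 1,-1 / -1,2,-1 / ... / -1,1
```

B is still singular, because no node has been fixed; knocking out columns of A_0 is what puts boundary conditions in.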

I'll take one more case and then I think we're at time. Suppose it was fixed-fixed? What would be the difference matrix that would correspond to, how would I change that matrix if my problem became fixed-fixed? So now I'm fixing that u, I'm fixing that u, in the mass spring case I'm adding supports there. How would that change the matrix? First and fourth, good. Say it again? First and fourth columns would go. So fixed-fixed would then be three by two. Free-free was three by four. Yeah, I'm glad this question came up because this is the right thing for you to be thinking about in connection with the recent question you asked. OK. Maybe that's the right place to stop, because now you've asked questions that are really on target for what we've done, and I hope useful to you. OK, see you guys tomorrow evening at 7:30 in 54-100, OK.