Lecture 4: Continuous-Time (CT) Systems

Instructor: Dennis Freeman

Description: Drawing analogies with previous concepts in discrete-time systems, this lecture discusses the block diagrams, polynomial expressions, poles, convergence regions, and fundamental modes of continuous-time systems.

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

DENNIS FREEMAN: OK. So last time, the idea was to think through a different way, or in fact, several different ways, to think about discrete-time systems. Today is a crash course to do the same thing in CT systems. So last time, for the last week, we've been looking at different kinds of representations for DT systems.

Difference equations, because they're concise and precise. Block diagrams, because they let us visualize the signal flow paths. And operator expressions, because they let us treat systems like polynomials.

What we'll see today is that that same kind of strategy works precisely. The strategy, not the answers. The same strategy works precisely for CT. We'll have a concise, precise representation in terms of differential equations, which I'm sure you already know all about.

But we will also develop the notion of a block diagram so you can visualize signal flow paths. And we'll have an analogous operator, the A operator, that will let us treat the systems as polynomials. So that's the overview.

Because it's so similar, we'll be able to do all of this in one lecture. So that's what I mean by a crash course. So this is Introduction to CT in 50 minutes.

So you already know a lot about how you'd represent systems as differential equations. You've seen this in other classes. You should have seen this sort of approach in physics.

You should have seen this sort of approach in math, in 18.03. So we're going to assume that you already know how to think about differential equations. And what we're going to do is skip ahead and think about the alternative representations instead.

So just like in DT, where we thought about block diagrams, which gave us a way of thinking about signal flow paths, we'll have the same sort of idea in CT. The big difference, you can anticipate. I mean, if the CT [INAUDIBLE]. In the difference equation, the fundamental operation was a delay operation.

If you contrast that to a CT system, the fundamental operation is not delay. You know from 18.03, you know from physics, you know from lots of other exposures, that the fundamental operation is differentiation. And so the blocks will not be built out of delays, but will instead be built out of integrators.

Apart from that, the block diagram structure looks extremely similar. And we'll be able to apply the same idea that we did in DT, to simplify the representation of the block diagram by thinking in terms of operators. The operator has to change similarly.

So the new operator, the operator that we use to represent a CT system, we'll call the A operator. A is intended to mean accumulator. The thing that's in my mind when I say the A operator is this tank. This tank accumulates water. And you can think about the relationship between the height of the water and the input and output rates by some differential equation.

But the point is that the tank itself, the system, accumulates. And that's what we want to have in our heads when we think about the A operator. So the operator is going to be like the R operator.

You applied the R operator to a discrete-time signal to generate a new discrete-time signal that is shifted to the right. That's what R meant. R was right: the right-shift operator.

Here, you apply the A operator to the X signal. X is now CT, continuous time. And when you apply the A operator to a signal X, it generates a whole new signal Y that is everywhere equal to the integral of the input signal up to that time.

So the A operator is: start with the input signal X, and integrate it with respect to time, starting at minus infinity and going up to t, the time of the output signal. Simple. So let's see how simple.
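Written out as an equation, that definition is

$$y(t) = (A\,x)(t) = \int_{-\infty}^{t} x(\tau)\,d\tau.$$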

So here's a bunch of block diagrams written in terms of A. Here's a bunch of differential equations. Figure out the correspondence, if any. OK, traditionally, it seems that people are quiet.

So before you start, say something that has nothing to do with 003 to your next-door neighbor. It has to have nothing to do with 003. And then figure out which of these bottom diagrams, number 1, 2, 3, 4, or none of them, best represents the correspondence.

[SIDE CONVERSATION]

OK, you've got 30 seconds to answer the question, now.

[SIDE CONVERSATION]

Well, that killed everything.

OK, everybody raise your hand and tell me which of the five available answers, 1 through 5, best illustrates the correspondence. Raise your hands. Let me see how many of you figured it out.

Remember, if you're wrong, you can blame it on your partner. OK, it's more than 95% correct. How do you think about it? What do I do first?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: [INAUDIBLE] So what do you want me to look at first? This diagram, that equation? This diagram, that equation?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: Write an equation for each of the diagrams. How would I figure out an equation for that diagram? [INAUDIBLE]

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: So what's coming in and out of the plus sign. That's very good. So let's focus on what's coming out first. What's the name of that signal?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: Well, I can think of two names. That's why it's a good idea to look at what's coming in and out. One way that I could name this would be with reference to the A operator. If I know that what comes out of the A operator is capital Y, what goes into the A operator?

AUDIENCE: [INAUDIBLE].

DENNIS FREEMAN: The derivative. So if I know that what comes out is Y, then what goes in must have been Y dot. But that [? also ?] has to be the output of the adder. So what were the two inputs to the adder?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: X and-- py. So y dot, the thing that goes into the accumulator, has to be the same as X plus py. So y dot is X plus py, so there's a correspondence between these two.

Everybody sort of get the idea? We can do the same sort of thing here. What's the name of that signal?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: [? Shout. ?]

AUDIENCE: Y dot over p.

DENNIS FREEMAN: y dot over p. Exactly. So that one is going to be y dot over p. And that's going to have to be the same as x plus y. So if you clear the fraction here, you would find that y dot is px plus py. So there is a correspondence that way.

And you can do the same sort of thing down here. It looks a little bit different. What's this? This says that the output of the adder here is y. So y must have been x plus what?

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: So p times the integral of y. And I'd like to write that in a differential form, so I can differentiate term by term to say that that's the same thing as y dot equals x dot plus py. So y dot is x dot plus py, so there's a correspondence that way. So the answer is 1. OK? Everybody happy with that?
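Gathering the three relations just read off the diagrams, in equation form:

$$\dot y = x + p\,y, \qquad \dot y = p\,x + p\,y, \qquad \dot y = \dot x + p\,y.$$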

So the point is that the operator expressions look very much the same. You can use the same kind of reasoning that you did for difference equations with R. Of course, the operators are different. And just like R, you can think about these things as polynomials.

So if you think about this feedforward system that's now CT, not DT, the feedforward system says that I can construct W by taking X plus AX, where that's X plus the integral of X. Then I can think about the Y signal as W plus the integral of W. So that's this one: Y is W plus AW. And now I can substitute instances of this into here.

So this W expression can go in there. And that gives me this. If I integrated this W expression once, then I would get the integral of that plus the double integral of that, so that gives me these. And if I think about those things in terms of A's, I get precisely the same expression.
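In operator form, that feedforward composition is

$$W = (1 + A)\,X, \qquad Y = (1 + A)\,W = (1 + A)^2\,X = \left(1 + 2A + A^2\right)X,$$

which is exactly the integral-plus-double-integral bookkeeping just described.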

So the idea is: just as you can manipulate operator expressions in R as though they were polynomials, you can manipulate operator expressions in A as though they were polynomials. And the reason I can say that is that, if I think through all the properties of polynomials, I can draw an isomorphism between the way the system behaves and the way the polynomial behaves.

And in particular, here are three statements that have to be true if the operator expressions are supposed to behave like polynomials: expressions in A should commute, they should have the distributive property of multiplication over addition, and they should associate.

And if you think about each of those in detail, the operations that correspond to what you've put in the block diagram have precisely those same relations. So that's the outline of how you go about proving something like this. But the takeaway message is that operator expressions in A obey all the rules that you would expect from a polynomial.
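As one spot check, here is a minimal sympy sketch of the distributive property. The sample signal and the use of 0 as the lower limit of integration (standing in for minus infinity, for a system at rest) are assumptions made just for this check:

```python
import sympy as sp

t, tau, p = sp.symbols('t tau p')

def A(f):
    # The accumulator: integrate an expression in t up to time t.
    # 0 stands in for minus infinity here (system at rest).
    return sp.integrate(f.subs(t, tau), (tau, 0, t))

x = sp.exp(-t)  # an arbitrary sample signal, chosen for illustration

# Distributivity: A(x + p*A(x)) should equal A(x) + p*A(A(x)).
lhs = A(x + p * A(x))
rhs = A(x) + p * A(A(x))
print(sp.simplify(lhs - rhs))  # prints 0
```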

So, second question. Think about these two systems, and think about the notion of equivalent. Equivalent here is going to be just the same as it was in DT. In DT, equivalent meant that all of the right-shift operators started at rest, that is to say, their outputs were 0. We will have the same kind of notion here. At rest is going to mean all of the integrators-- integrators have a starting value, an initial value.

If those initial values are all 0, then we say that the system started at rest. We have that same kind of proviso. Given that proviso, all of the operator expressions behave like polynomials. So determine k1 so that these two systems are equivalent, given that definition of equivalence.

[SIDE CONVERSATION]

So anyone have an answer? So how should I choose k1 if I want those two systems to be equivalent? Yes? Raise your hands.

OK, it was a trick question. So now, given the additional information that it was a trick question, revise your answers. Ah excellent, excellent. Instant revisions. Wonderful.

OK, what was the trick? [INAUDIBLE] got it. What was the trick? Yeah.

AUDIENCE: It would be minus 1.6.

DENNIS FREEMAN: It should have been minus 1.6. Yes. So what do you do? What do I do first in order to answer a question like this?

[INAUDIBLE]. [? Right, ?] so translate the block diagrams into a differential equation. That's a good approach. Because what we'd like to do is figure out an equivalence. The equivalence is not trivial.

So in fact, the equivalence is easier to think about if you do polynomials. You could try to manipulate the blocks to make them look the same. That's actually hard. It's pretty easy to do if you think about the block diagrams being represented as operators.

So you can look at the first one and you can say, OK. W is the signal that comes out of A operating on X minus 0.7W. So that's what that adder says.

And you can similarly do this box. It looks the same. Treat them as polynomials and you can reduce it to some equivalent form. Does everybody have the gist of what's going on?

Same thing down here. The equations look a little bit different. But it's the same idea. You figure out what are the constraints imposed by the blocks.

The output of every A has to be the integral of the thing that went into it. The output of every adder has to be the sum of the two things that went into it. Figure out all those constraints, reduce them to polynomials, and then manipulate them as though they were polynomials.

And the polynomial manipulation shows that minus k1 should be 1.6, OK? So k1 should be minus 1.6. Point is, it's easy.

If you know polynomials, you know the answer. That's pretty good. So we're doing something completely different from polynomials. But we're able to use the intuitions that we get from polynomials in order to do the problem.

There is one remaining problem. Something that is harder than it is in DT. Probably the hardest part of CT. And that involves thinking about what should be our basis signals.

In DT, there is a really good candidate for that, which we call the unit sample signal. The unit sample signal is the simplest possible non-trivial DT signal. By which I mean it is 0 everywhere, except one place.

If it were 0 everywhere, we would call it trivial. So in order for it not to be trivial, it has to be non-zero at least some place. That place is n equals 0. And the value that it takes, given that it can't be 0, is 1.

So the unit sample signal is the simplest non-trivial DT signal. So what we need is the equivalent for CT. So here it is. Define a signal in CT that is 0 everywhere except 0. And at 0, make it [? 1. ?]

So is that a good choice? And I hope I've biased my presentation enough so that the answer is obviously no. So the real question is, why not? Can somebody think of a good reason why this is a poor choice for a building block signal? Yes.

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: So you would need infinite gain to go from 0 to 1. That's sort of true for the R thing, too. You have to go from 0 to 1 in 0 time. That seems hard. Any other ideas? Yeah.

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: Not continuous. That's also true. Yes.

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: The integral is always 0. That's a problem. So it's just not going to be a very useful signal.

OK, what am I doing? I'm trying to figure out a basis signal that I would use in block diagrams. My basic block diagram is this integrator thing: minus infinity to t, over tau. So I'd like to put something in here.

But if I put that signal-- what did I call it? W. If I put W into an integrator, what's the output of the integrator after you integrate W? 0. Well, that's a problem.

It's not a very good basis function if the very first operator I go through turns it into 0. Everyone see that? That's a problem.

This is the problem in CT. If you guys figure out the next slide, you're done. CT's easy.

So what we do is think about defining a signal that behaves as though it had those desirable properties. What would have to happen? We'd like a signal that has some amount of area. Because that's what this thing is. This is an area operator.

We would like a signal that doesn't have 0 area. The W signal has 0 area. Bad. In fact, if we want it to be the simplest possible signal, then it should have an area of 1.

But we'd like it to be zero almost everywhere. How do you do that? OK, well, the way you can do it, and the way we will do it, is think about building a signal [? with ?] a limit.

What if I had a pulse signal that was 2 epsilon wide and 1 over 2 epsilon high? The area would be 1, regardless of the value of epsilon. And if I shrunk epsilon to be a very small number, say 0, if I think about the limit, as epsilon gets smaller and smaller and smaller and smaller, then I approach the condition I'd like to have. The signal is 0 everywhere, except 0.

Now, it has a horrendous value at 0. But, oh well, I'll sweep that under the rug. The important thing is that, when I integrate it, I'd like it to give me one. That's the important thing.
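One way to write that limiting construction (the pulse is drawn centered at the origin here, which is one common convention; $d_\epsilon$ is a name introduced just for this equation):

$$d_\epsilon(t) = \begin{cases} \dfrac{1}{2\epsilon}, & |t| < \epsilon, \\ 0, & \text{otherwise,} \end{cases} \qquad \int_{-\infty}^{\infty} d_\epsilon(t)\,dt = 1 \text{ for every } \epsilon, \qquad \delta(t) = \lim_{\epsilon \to 0} d_\epsilon(t).$$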

Everyone with me? That's the hardest part in this part of the course. If you get this, CT systems are a breeze. So this is a different kind of a signal.

The idea is to incorporate in it two seemingly contradictory things. It should be 0 almost everywhere and the integral should be 1. So there is no such thing. This is an idealized signal. This is something that we think about in the limit.

It turns out that we can get a lot of insight by thinking about this signal. But it is important to realize that it is only defined in a limit. So we will write it. Since it's so useful, we will write it this way.

We'll draw a little arrow. The arrow is supposed to connote in your mind that this thing goes really high. It goes so high that telling you how high it is is meaningless. So instead, to the side, we tell you its area-- what would come out if you integrated it. Yes.

AUDIENCE: [INAUDIBLE] DT [INAUDIBLE] decompose an [INAUDIBLE] signal into a series of [INAUDIBLE] Can you do the same thing with [INAUDIBLE]?

DENNIS FREEMAN: Yes. And that will be something that we spend a lot of time on, because that decomposition is not trivial. But it is possible. So the question was, can we decompose arbitrary signals into sums of these things? And the answer is absolutely yes. That's why we like it.

OK, so the first thing to think about is what happens if you put in, now, instead of W, delta of t, the unit impulse function. What happens if you put the unit impulse function into an integrator? Well, if you integrate the delta of t-- so if my input x of t is delta of t, and if I want to think about y of t, which is going to be A operating on x-- think about what the answer to this integral would be for times less than 0.

Well, the system started at rest and the integrator turned on at 0. What's the value of the output of the integrator here? 0. It started at 0 back at minus infinity some time.

Started at rest. There was no input. So the answer is 0. So the answer is going to be 0 up until something happens.

During that brief epsilon of time, a unit of area went in. So what's the output of the integrator become? 1. So at that time it became 1.

And then what's the value as you go forward in time? 1. It got stuck at 1. So the most primitive signal that we'll think about is the unit impulse function. We'll denote that by delta of t. And it has the property that, when it goes through our most primitive block, delta goes to a unit step.

The unit step is so useful that we'll give it a special name, u of t. U means unit step. And with that, all of these systems now make sense, in the same sense that DT systems did.
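In equation form:

$$(A\,\delta)(t) = \int_{-\infty}^{t} \delta(\tau)\,d\tau = u(t) = \begin{cases} 1, & t > 0, \\ 0, & t < 0. \end{cases}$$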

So for example, if I have a feedforward system, I can think about that as having several signal flow paths going forward. And the output of the system, when stimulated with a delta function, is the sum of all those different signal paths that I can think of. So I can express the input-output relationship by this functional relationship. Everybody see that? Or I can just think about all the different signal flow paths through here.

What would happen if I had a delta function here? Well, here's a signal flow path. Here's another one. Here's another one. Here's one.

Four signal flow paths. I have to think about the ways the input could turn into the output. If the input went through the top path, the output would be delta. If the input went through this path, the output would be u.

If the input went through this path, u. If the input went through this one, it turns into u, and then it goes through another accumulator. What happens when you integrate u? tu. If this is u, and if you put that through an integrator-- that integrator, like all the other integrators we'll talk about, unless we tell you otherwise, is initially at rest.

Therefore, the output started at 0. Because u is 0 for t less than 0, it persists at 0 until something happens at t equals 0. At which point, the input becomes 1, and the integral of 1 is t. So we will usually write that this way, meaning it's t. So we will write, as a shorthand, t times u.

That's a strange thing to do. It's just very convenient. It's convenient because, normally, when you have a function-- so that last function, this one, is the signal t, which does this, a ramp through all time. So we will call that tu of t because, obviously, t alone is that ramp signal.

We don't mean that. We want to lop off the part that came before 0. Multiplying by u of t is a quick way of writing lop off the stuff that came before 0. That's all that means.
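So, in this shorthand, the first few integrals of the impulse are

$$(A\,\delta)(t) = u(t), \qquad (A^2\,\delta)(t) = t\,u(t), \qquad (A^3\,\delta)(t) = \tfrac{1}{2}\,t^2\,u(t), \quad \ldots$$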

So we'll write that that way. And you can see that, by thinking about signal flow paths, and by thinking about how this basic signal, this basis function, goes through the system, it's easy to figure out the response of such a system. Just like in DT, feedback is different.

So now, we have to think about what would happen if there's a feedback loop. So it was crucial, when I was doing this kind of an illustration, that it was feedforward only. I could decompose all of the signal flow paths into a finite number of forward-going paths. This is harder.

This happened in DT, too. We had to figure out how the DT system was going to work when we had a feedback path. And it was more complicated, and we had to fall back on thinking about sample-by-sample propagation of the signal through the system. And we'll do the same thing here.

In order to figure out how to think about signal flow through such a diagram, through a diagram that has feedback, we'll think about falling back to 18.03. Something we can trust. Something we all loved.

Nod your heads yes. Make me-- reassure me. Fond memories of 18.03.

So it's pretty simple to think about how you would solve that system by using an 18.03 type of approach. Convert the system into a differential equation. We've already done that one.

Then, from the form, we will usually, in this class, use the general method of solving differential equations, which is-- what's the general method for solving differential equations? They may not have told you in 18.03. What's the general method of solving differential equations?

Guess! Yes. So the general method is guess. If you can guess, plug it in and it works, you're done. The kinds of systems that we will look at will be linear systems. They will have a single unique solution.

If you can find it by guessing, you're done. And we will prove later in the course that all systems of this class, where I haven't really defined what this class is, but if you made a system out of adders, gains, and integrators, and only those parts, then that system will always be linear. And the solutions can always be written as complex exponentials.

We'll prove that later. I'm not going to bother with it now. For now, you would say, OK: first-order linear differential equation with constant coefficients, the answer is obviously an exponential function.

So you plug that into the differential equation. You figure out the constraints that would have to be satisfied for this to be true. And the answer is that y should be e to the pt times u of t. It should start at 0. And there should be an exponential after that. And the p shows up as the exponent in the exponential.

What we'd like to do is develop an alternative way of thinking about that, using the operator approach. So a completely different way to solve that problem would be to write an operator expression for this. OK, so Y is A times the signal that results when you add X to pY. Solve for the ratio Y over X, and you get an operator expression, A over 1 minus pA.
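The algebra, written out:

$$Y = A\,(X + p\,Y) \;\Rightarrow\; (1 - pA)\,Y = A\,X \;\Rightarrow\; \frac{Y}{X} = \frac{A}{1 - pA}.$$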

Just like in DT, we have to figure out how we would think about that. It's an implicit operation. It's telling me that, if I knew the answer, the answer is the signal that, when operated on by 1 minus pA, gives the integral of the input. But I don't know the answer. So that reasoning doesn't quite work.

The reasoning that I used to solve the feedforward question doesn't quite work. So I have to think of a different way of doing it, just like we did in DT. And the same solution works.

Not surprisingly. A behaves like a polynomial. R behaves like a polynomial. After you turn it into a polynomial, you can't tell if you started with A or R. It's a polynomial.

So for that reason, exactly the same stuff that you did in DT works. That's why we can solve everything in 50 minutes. It's all the same.

So in DT, we thought about this as being a series. You could figure out an ascending series, a power series, that's equivalent, by thinking about something like synthetic division, Taylor series, whatever you're comfortable with.

This, A over 1 minus pA: take the A out front. Then you're left with 1 over 1 minus pA. Think about that as a Taylor series. That's 1 plus pA, plus p squared A squared, plus p cubed A cubed, etc. Done.
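In symbols:

$$\frac{A}{1 - pA} = A\left(1 + pA + p^2A^2 + p^3A^3 + \cdots\right) = A + pA^2 + p^2A^3 + \cdots$$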

Now I know what the system functional looks like. Now I've got a feedforward system. So I can use the technique from the other side.

What happens if you put delta into a system whose functional representation is A? So you get a u. Let me skip that one. That's on the next slide.

What happens if you put in-- no, that's right. I did that right. OK, 1: u. Sorry, I'm confusing myself.

This A turned into that u. What happens if you put it into pA squared? So the first A turned delta into u. The second A turns u into tu.

And so I get this term. So the pA squared turns into ptu. The p squared A cubed turns into a half p squared t squared u, etc.

So what I've done is I've thought of a way of constructing this output signal from the input signal without ever using calculus. I started with a solution based on 18.03 that's strictly calculus. I just redid the whole problem and didn't use any calculus. And the method works for just the same reason that the R method worked.

In the R method, when we were thinking about the simple feedback with an R system, every loop around the feedback loop generated one new sample. Here, every loop around the CT loop generates one more contribution to the output. You put a delta function in, it gets integrated once, and you get a u function out.

[INAUDIBLE] [? first ?] in these symbolic representations. The first thing that comes out is A times the input. The second thing that comes out is A squared p times the input. The third thing that comes out, p squared A cubed, etc. Every time you go through the loop, you pick up one more term.

Now, think about it in the time domain. You put in a delta function. The first thing that comes out is a step. The second thing that comes out, and gets added to the first thing that comes out, is: integrate, multiply by p, and integrate again. So that's this term, p times t.

So we get the unit step from the first term, which looks like this. We get the linear increase from the second term. We get a squared term, we get a cubed term.

And voila, we get the series expansion for e to the pt. No calculus. It's purely thinking about how you would construct this system's response by a series representation, by spinning around the feedback loop.
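Here is a minimal numerical sketch of that loop-by-loop construction. The value of p and the time grid are illustrative choices, not from the lecture:

```python
import numpy as np

# Spin around the feedback loop: each trip contributes one more term,
# u(t), p*t*u(t), (1/2)*p^2*t^2*u(t), ...  Their sum should be e^{pt} u(t).
p = 0.5
t = np.linspace(0.0, 3.0, 301)   # t >= 0, where u(t) = 1

y = np.zeros_like(t)
term = np.ones_like(t)           # first trip around the loop: the unit step
for k in range(1, 50):
    y += term
    term = term * p * t / k      # one more trip: gain of p, then integrate
print(np.max(np.abs(y - np.exp(p * t))))  # ~1e-15: the sum is e^{pt}
```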

So that's an insight that comes from thinking about the system as a functional representation, as a polynomial. Polynomials can be expanded in Taylor series. Therefore, systems can be expanded in Taylor series.

You'll notice that that previous example exploded. We got an increasing exponential. If you change the sign of p, it's not too surprising that what will happen is that it will converge. So here, symbolically, the first signal is a unit step, just like before. But now, because of the minus sign, there is a negative on this pt, so it slopes downward.

The squared term is still positive. So that bends it back up. But the cubic is downward again. And you add an infinite number of those and you get another exponential, this time a convergent one.

So the point is that it works very much like the way the DT stuff did, except now the basis functions are different. The basis functions are derived from this thing, which is the unit impulse function. And the things that come out of the integrator are steps and ramps and parabolas and stuff like that.

So in R, the basic input signal was the unit sample signal. And what came out on each revolution around-- for each cycle in a feedback system, what came out was a successive delay. Here, we get successive integration.

Delay was the fundamental operator. Integration is the fundamental operator. Successive delays, successive integrations. And we get the idea that we have convergence and divergence, just like we did in DT.

Except now, the shapes of the regions are different. So if the p, which we will later call the pole, by complete analogy to what we did in DT-- if the pole is in the right half-plane, then the system's response is divergent. If the pole is in the left half-plane, the system's response is convergent.
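Stated as a rule for the fundamental modes:

$$\text{CT: } e^{pt}\,u(t) \text{ converges} \iff \operatorname{Re}(p) < 0; \qquad \text{DT: } p^n \text{ converges} \iff |p| < 1.$$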

So it looks just like DT. We think about this as signal flow paths. We think about how feedback gives rise to an infinite number of those.

In DT, each time through, it gave one unit of delay; in CT, it gave one unit of integration. The method is the same, the answer is different.

So what we got is tremendous similarities. Differential equation compared to difference equation, block diagram compared to block diagram. Integrators instead of delays.

Functional, functional. A instead of R. Pole, pole. Mode, mode.

Fundamental mode. There, the fundamental mode was a geometric sequence. Here, it's an exponential function of time.

And we've got different regions of convergence. Here, signals converge for poles in the left half-plane. There, signals converge for poles inside the unit circle. Same idea, but some differences.

OK, so [INAUDIBLE] have time to think about this for 30 seconds. We now have two representations, R polynomials and A polynomials. I'm giving you four examples to think through, and your job is to figure out which of those functionals correspond to convergent responses when excited by a unit sample or a unit impulse signal.

So, how do I think about this? Is this convergent or divergent? What do I think about? Step 1. Convergent. How did you get that? Yeah. It's this one.

AUDIENCE: [INAUDIBLE]

DENNIS FREEMAN: So factor. It's a polynomial. You get a pole at minus 1/2 and a pole at plus 1/2. They're both inside [? the unit ?] circle. They both converge.

How about this one? Same equation, same answer. Kind of.

Same poles, minus 1/2 and plus 1/2. Convergent or divergent? Divergent, because the regions are different. One of the poles is in the right half plane. Divergent.

How about this one. Where were the poles? Minus 1/2, minus 3/2.

Outside the unit circle? One of the poles is outside the unit circle. It diverges, because of that pole. Minus 1/2 and minus 3/2, same poles.

Both in the left half-plane. Convergent. That's the point. So they work very similarly. But there are differences.
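A minimal sketch of those two pole tests (the helper names are made up for illustration; the pole pairs are the ones just read off):

```python
import numpy as np

def dt_converges(poles):
    # DT: the unit-sample response converges iff every pole
    # lies inside the unit circle.
    return all(abs(p) < 1 for p in poles)

def ct_converges(poles):
    # CT: the unit-impulse response converges iff every pole
    # lies in the left half-plane.
    return all(np.real(p) < 0 for p in poles)

print(dt_converges([-0.5, 0.5]))    # True:  both inside the unit circle
print(ct_converges([-0.5, 0.5]))    # False: +1/2 is in the right half-plane
print(dt_converges([-0.5, -1.5]))   # False: -3/2 is outside the unit circle
print(ct_converges([-0.5, -1.5]))   # True:  both in the left half-plane
```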

And now, in the last three minutes, the last thing I want to tell you about is that even complex numbers work just the same as in DT. So think about a system that's slightly harder. Here, I can represent this by this kind of relationship. So F equals Kx, but F is Ma. So 8.01.

And so I generate a block diagram or a differential equation or however I want to think about it. And I can solve for the answer. What I'd like to do is think about it in terms of the functional representation.

So I just think about: take this system that you're familiar with. We did this in the first lecture. Turn it into a differential equation. Turn that into a functional representation.

Now, I want to think about how you would solve the functional representation. The idea is just like before, factor. But now the trick is that the factors become complex.

So one way you can think about this is to force the second-order system into a canonical form. The canonical form for DT was 1 over 1 minus p naught R. For CT, it's A over 1 minus p naught A. So I want to make the factors look like that, because I know that here, the response looks like p naught to the n. Here, the response looks like e to the p naught t.

If I can coerce it into that form, I already know the answer. So that's what's illustrated here. I coerced this into that form. And then I'll know what the answer looks like. Just like we could in DT, substitute R goes to 1 over z.

In CT, we can substitute A goes to 1 over s, and solve for s. We get the same answer. Same as we did in DT, except now we call it s.

So the poles of this system are plus or minus j times a constant. For convenience, I'll call the constant omega naught. So I have a pole at j omega naught and a second pole at minus j omega naught.

By that argument, the fundamental modes are e to the j omega naught t and e to the minus j omega naught t. Just like in DT, the complex poles gave complex modes. Here, the complex pole, j omega naught, gave a complex mode, cosine omega naught t plus j sine omega naught t.

And just like DT, the system conspires. So that started out being a mass and spring system. Obviously, it's not going to have an imaginary output.

The system conspires so that the imaginary parts of the different fundamental modes kill each other off. And the answer is a real number. So even though the mode, even though the pole is complex, the multipliers are complex, this sum is real.

And if you just think about what that sum is, by thinking about how complex numbers work, you get an expression that looks like omega naught sine omega naught t. It's a little more fun to think about how that evolves as a series. If we do a Taylor series for this, we can represent it that way.

And the series, then: the first term is t. The second term is a t cubed, which goes down. Then there's a fifth and a seventh and a ninth. And if you keep adding up the terms, amazingly enough, the series representation for the operator expression is the Taylor series expansion of the sine wave. Just like we would have expected.
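In symbols, assuming a denominator of the form inferred from the stated poles (the exact system functional isn't reproduced here):

$$s^2 + \omega_0^2 = 0 \;\Rightarrow\; s = \pm j\omega_0, \qquad e^{\pm j\omega_0 t} = \cos\omega_0 t \pm j\sin\omega_0 t,$$

and the real combination of the two modes carries the sine's Taylor series:

$$\sin\omega_0 t = \omega_0 t - \frac{(\omega_0 t)^3}{3!} + \frac{(\omega_0 t)^5}{5!} - \cdots$$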

Again, I solved the differential equation, a second-order differential equation, this time without calculus. All I did was polynomial math. And so with that, I'll finish just by saying: today, what we did was introduce, and basically finish, CT. Because there's such a strong analogy between what we did in DT and how we'll approach thinking about CT.
