Lecture 18: Series Expansions Part 4


Description: In this lecture, Prof. Kardar continues his discussion on Series Expansions, including Exact Free Energy of the Square Lattice Ising Model.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK, let's start. So back to our Ising model. And for this lecture, focusing mostly on the square lattice.

At each site, we have a binary variable, and a weight that tries to keep neighboring sites parallel. So every pair of neighboring sites is subject to a joint weight such as that. We're going to ignore the magnetic field.

And to calculate the partition function, we have to sum over the 2 to the N configurations of these spins. And this will be a partition function that will depend on this one parameter K, which is some energy divided by kT. So then what we did was we rewrote each one of these factors as a hyperbolic cosine times 1 plus tangent, and took all of the factors of the hyperbolic cosine outside.

And for the case of the square lattice, each site has two bonds going out of it. So there's a hyperbolic cosine to the power of 2N outside, and I'll have to sum over the 2 to the N configurations a product over all bonds of these factors of 1 plus t sigma i sigma j, where this t is, of course, my shorthand for the hyperbolic tangent.

Now we saw that basically what we need to do is either take the 1-- that's the term right there-- or terms that have these factors of t sigma sigma. But in order for them to survive the summation over the two possibilities of sigma, each one of these factors has to be matched with another one. And so the most trivial diagram would be something like this.

And then the sum over the two possibilities at each site will give us a factor of 2 per site. So you have 2 to the N times hyperbolic cosine to the power of 2N. And then we have a sum over graphs which have zero, two, or four bonds emanating per site, so that the summation over the sigmas gives us a factor of 2 rather than a factor of 0. And then we just have to count t to the power of the number of bonds in the graph.
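As a compact restatement of the expansion just described on the board (with t standing for the hyperbolic tangent of K), the partition function takes the form:

```latex
Z = \sum_{\{\sigma_i = \pm 1\}} \prod_{\langle ij\rangle} e^{K\sigma_i\sigma_j}
  = \sum_{\{\sigma_i\}} \prod_{\langle ij\rangle} \cosh K \,\bigl(1 + t\,\sigma_i\sigma_j\bigr)
  = 2^{N} \bigl(\cosh K\bigr)^{2N} S(t),
\qquad
S(t) = \sum_{\substack{\text{graphs with an even number}\\ \text{of bonds per site}}} t^{\,\#\text{bonds}} .
```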

Now we said that all of the exciting things have to do with this sum, which depends on this parameter t, of course. And this sum, as written here, has graphs such as the one that I indicated, but also potentially graphs that are more complicated, which have several disjoint pieces. And we attempted to replace this sum S with another sum, S prime, which is the sum over gases of phantom loops-- let's call it multiple phantom loops.

Essentially, if I allow these loops to go through each other, then I can exponentiate this and write it as the exponential of a sum over all single phantom loops, thereby removing the factors of N squared, N cubed, et cetera, that would arise by moving the disjoint pieces all over the lattice. This sum, since I can essentially pick a particular loop and then slide it all over the lattice, is certainly extensive, proportional to N.

So this would be nice. We calculated this sum S prime last time, and we saw that it actually reproduced for us the Gaussian model. So this was equivalent to the Gaussian model.

And in particular, because we were allowing this phantom condition, when we went to sufficiently low temperature, or when t became sufficiently large, essentially the model was unstable, because I could just continue to put more and more of these loops. There was no condition that would say you should have a finite density, and so the density of the loops would go to infinity. Just like the Gaussian model, it doesn't make sense beyond some point.

Now mathematically, it is clear to us that S is not equal to S prime for two important reasons. One of them is obvious: multiple occupation of a bond. In the original sum that I have up here, you can see that the contribution of each bond connecting neighbors is either 1 or one factor of t. I cannot have more than that.

But when I am calculating phantom loops and I allow self-crossings, I have some things that are very trivial. Like I can start and go back on myself. That completes a walk.

I can have things that are more complicated. I could have a diagram such as this that still involves crossing something twice. Or something like this is another example.

Now all of these are examples where I essentially continuously moved my chalk and drew a closed loop. But from the exponentiation, I can also generate things that have multiple loops, such as this loop that does not intersect itself but may happen to share a bond with another loop. So it is partly the presence of all of these things that allow multiple occupation that ultimately leads to the instability.

But I also hinted last time, in response to a question, that there is another mistake that is involved, not when there is intersection at a bond, but intersection at a site. This leads to over-counting. It's more subtle. Among the diagrams that I have in S, there are certainly perfectly good diagrams such as this one, where from this site I have four bonds going out, which is allowed. So this is certainly OK.

But when I want to represent that in terms of loops on the lattice, I notice that I can do the following. Let's say in all cases I start from here: I can go and do something like this, and come back to my starting point. Or I can do something like this, and come back to the starting point.

Or I could have a diagram that would appear at the second order in the expansion of the exponential, one that involves two loops that correspond to the same geometry. So this thing that should have been counted once is really being counted three times. And so that's a mistake to be corrected.

And actually, the reason for this factor of three goes back to this Gaussian model. What this amounts to is that we are replacing the Ising variables over here with these Gaussian variables, s. And we arranged things such that the average of s squared was 1, so that the pairwise averages were reproducing the things that I had before.

But then for something like this, I need to know the average of s to the fourth. And if you use Wick's theorem, you can quickly see that the average of s to the fourth is three times the square of the average of s squared, so you get a factor of 3 here. That's the origin of the factor of 3.

And indeed, if you had done this not with one component s but with something that was n-component, you could convince yourself that this 3 becomes n plus 2. And that's consistent with a closed loop carrying a factor of n, something that we had seen when we were doing this diagrammatic expansion.
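For reference, the Wick's theorem counting quoted here can be written out for the normalization used above, where the average of s squared is 1; the n-component version is one way to see where the n plus 2 comes from:

```latex
\langle s^4\rangle = 3\,\langle s^2\rangle^2 = 3,
\qquad
\bigl\langle (\vec{s}\cdot\vec{s}\,)^2 \bigr\rangle = n(n+2) = (n+2)\,\langle \vec{s}\cdot\vec{s}\,\rangle
\quad\text{for an $n$-component Gaussian vector with } \langle s_\alpha s_\beta\rangle = \delta_{\alpha\beta}.
```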

OK, so that's a problem. So s is not equal to s prime. But I want to make the following very nice assertion that s is indeed the sum over multiple phantom loops, just like I had written for s prime with a couple of important constraints.

The constraints are-- maybe I should write them in red-- first, with no U-turns. That is, you are not going to allow anything such as this or this, where you go forward and then immediately step backward. That's what I will call a U-turn. No U-turns are allowed.

And more importantly, with a factor of minus 1 to the number of crossings. So what do I mean by a crossing? If you follow what I did in drawing this diagram, you can see that there was a path that I drew that never crossed itself, whereas here I had to jump over where I had been. I indicated that.

So according to this rule, this diagram will get a factor of minus. These two diagrams don't have any self-crossings, so they give factors of plus. Minus one plus one plus one gives a net count of one, so you can see that at least for this particular diagram the over-counting is resolved.

And you can see that this continues to more complicated things. Let's say that I have a perfectly good diagram, something like this, that involves two crossings. I can break it roughly into two pieces, and each piece I can decompose as I had done before.

Let's see, the left part: I can either do this without crossing, or I could cross, or I could have this part as a separate loop from this half. And then I can join it on the other side with any one of these: this one, which comes with a plus; this one, which comes with a minus; or this one, which comes with a plus.

So you can see that this particular diagram of S would arise in S prime in nine possible ways. So I would have had an over-counting by a factor of three per crossing, or nine in total, except that now, when I assign these factors, some of these diagrams come with minus and some come with plus. And ultimately, when all of them are added up with their signs, the net contribution is just 1, which is the correct count.

Now let's see. So this resolves problem B. Let's now see about problem A, which had to do with multiple occupation of a particular bond.

So these are diagrams that appear in S prime that have no counterpart in S. That is, let's say there is this bond that is occupied twice, so that would contribute a factor of t squared. And then this part can go and join whatever it wants; this part can go and do whatever it wants.

The point is that for every diagram such as this, I can construct a diagram where I leave everything out here exactly as it was, and everything out there exactly as it was, except that the two terminals that I have to the left and the two terminals that I have to the right of the bond, rather than joining them this way, I join like this. So this complicated diagram, as far as this bond is concerned, I can draw in two different ways using graphs of S prime.

And one of them with respect to the other has an additional crossing, and hence an additional factor of minus 1, and so together they give me 0. And you can convince yourself that the same construction will hold if I have three terminals, four terminals-- it doesn't matter. I can do it for one pair and it will be OK.

The only time that I wouldn't have been able to do this is if I have two terminals on one side and one terminal on the other side, which is these guys. And that's why I said no U-turns. So now that's taken care of as well. So problem A is now also resolved.

So what do we have? We have now established what S is. Let me define this loop star to be loops with no U-turns and a factor of minus 1 to the number of crossings. So this star is going to symbolize that these two constraints-- no U-turns and minus 1 to the power of the number of crossings-- are imposed in the construction and calculation of the contribution of these objects.

So then what do I have? I have that S is-- well, there's the possibility of no loop. There are the one-loop star graphs. There are the two-loop star graphs. And so on with multi-loop star graphs.

And just as in the case of S prime, I can exponentiate this as the exponential of the sum over all one-loop stars. And you may wonder what happens when I go to higher order terms. In higher order terms, I will potentially generate two of these things that cross when I go to a second order term. But any intersection between two loops involves an even number of crossings-- two, four, and so on. Minus 1 to an even number is 1, so there is really no additional interaction to worry about if multiple things are crossing each other.

So we can exponentiate that S of t that we have over there. We are interested in taking the log of the expression for the partition function. I will have the factor of N log 2 hyperbolic cosine squared of K, because there are essentially two bonds per site. And then I have the sum over these loop stars.

Essentially, I took the log of the expression that I had before. Now this sum-- well, let's take care of this factor of N. So I divide by N. Log Z divided by N is log of 2 hyperbolic cosine squared of K plus-- well, the way to get rid of the factor of N in this sum is to fix one point of the loop so that it doesn't go all over the place.

So let's say that I count the number of loops that start and end at the origin. I would have to sum over the lengths of those loops, and each loop of length l will give me a factor of t to the l. So I have defined this N star of l to be the number of loop stars from 0 to 0 in l steps-- and actually, maybe I should emphasize, with minus 1 to the number of crossings and no U-turns.

Now again, what I have is my entire lattice. Let's say I have a loop that is of length 4. And now I have forced it to start and end at the origin, so that I can take out this factor of N.

But then I realize that I could have over-counted this, because this loop could have been started from here, here, here, or here and then translated to the origin. So just as we saw for the case of the random walk loops before, there is this factor of l to correct for. And when I'm talking about walks, I can either go clockwise or counterclockwise, so I have to divide by a factor of 2 to get rid of that.
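Collecting the steps of this paragraph, the free energy at this stage reads as follows (a restatement of the board formula; the overall sign in front of the sum is revisited later in the lecture, when the count is re-expressed in terms of walks that only track the phase factors):

```latex
\frac{\ln Z}{N} = \ln\!\bigl(2\cosh^{2}K\bigr) + \sum_{\ell}\frac{t^{\ell}}{2\ell}\,N^{*}_{\ell},
\qquad
N^{*}_{\ell} = \#\{\text{loop stars from } 0 \text{ to } 0 \text{ in } \ell \text{ steps}\},
```

with the star indicating no U-turns and a weight of minus 1 to the number of crossings, and the 2l correcting for the choice of starting point and sense of circulation.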

AUDIENCE: Question?

PROFESSOR: Yes?

AUDIENCE: So what happens if, when you're doing your exponential of loops, you have one loop nested inside another loop that you're multiplying together, so that they share one of their edges? It seems like then they don't have to cross twice, and so you would still need something to cancel them out.

PROFESSOR: OK, I should have maybe explained that graph a little bit more. But let's do it over here. So when I exponentiate-- what am I calling it?-- S, the sum over the loops, among the terms that I will generate will certainly be something like this.

And then you say this bond is shared between the two of them. But at the level of the one-loop graphs, there was a one-loop object that went like this. So the statement that I made here does not necessarily map the number of loops to each other, but it is correct, and the cancellation occurs at the level of the one-loop graphs.

And actually, I should have also indicated what's happening with the other graph that I have up there, because I have this graph. So that's a one-loop graph that cancels against this graph. So whatever you do, you can just follow the rule that I gave you and ensure that the cancellation works, of course. So thank you-- I wanted to say this and I had forgotten.

OK, any other questions? OK, so essentially we are back, to some extent, to the formula that I had for ordinary random walks and phantom loops. These are partly phantom loops, but I have to take care of something like this. Why did I go, originally, last lecture, from my ordinary graphs to these phantom loops?

The reason was that for phantom loops, I said that I had this Markovian condition. I could relate l-step walks to l minus 1-step walks, because there was no memory. I didn't have to know where I had crossed before.

But it seems that in order to give the correct weight to these new loops, I have to know how many times I cross myself. And the number of crossings by itself is a non-Markovian thing; it requires memory. Except that I don't need the number of crossings itself. I need only the parity of the number of crossings.

And here is where there is a beautiful mathematical theorem that tells us how to trade this memory-dependent quantity for something that is Markovian. The statement is a theorem of Whitney's, which relates the parity of the number of crossings of a planar loop-- the kind of thing that I've drawn here, whether it's even or odd-- to the total angle through which the tangent vector turns. Namely, n c mod 2, which is the parity of the crossings, is 1 plus this total angle, which I will call theta, divided by 2 pi, mod 2.
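Stated as a formula, Whitney's theorem as quoted here reads (with Theta the total angle through which the tangent turns around the closed planar loop):

```latex
n_c \;(\mathrm{mod}\ 2) \;=\; 1 + \frac{\Theta}{2\pi}\;(\mathrm{mod}\ 2).
```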

So I'm not going to give a proof of this, but I will show you examples to make sure you understand what's happening by, let's say, comparing the following two graphs, one of which has no crossing and another one which is essentially the same thing but has a crossing. So basically I put an arrow so that I can follow the orientation of the bond, which is the direction of this tangent, as I step. Let's say from the origin, this is the first step, second step, third step, fourth step, and so on. So let's do this for the upper graph.

And what I will do is I will plot the angle. At the first step here, I start at 0 degrees, pointing this way. So this is my step number one. At the next step, I have gone to 90 degrees, so this is where I go to.

At three, I'm back to pointing along the horizontal direction. At four, I have gone back up here. At five, I go to 180 degrees.

At six, I go all the way down. At seven, I go back to this horizontal. At eight, I go back to pointing down. And then I'm back to one.

So if you follow what that tangent is doing, it goes around, and at the end of the day it has turned through 2 pi. So in this case, the total turn is 2 pi. 1 plus the total turn divided by 2 pi is, in this case, 1 plus 2 pi over 2 pi, which is 2-- mod 2, which is the same thing as 0.

And of course, how many times has this thing crossed itself? Zero times. So let's see how it works if we apply the same set of rules to this other one.

So again, one, two, three, four, five, six, seven, eight are my steps. One is pointing in the horizontal direction. Two goes vertical just as before.

Three stays the same place, so two and three are at the same point. Four, I go back to horizontal. Five, I go down to minus 90 degrees.

Six, I go all the way to 180 degrees and stay there at seven. Eight, I go back to minus 90 degrees, and then rejoin the origin. So this goes up, down, back, never completes a full turn.

So in this case, theta is 0. 1 plus theta over 2 pi is 1, and the number of crossings of this graph is 1. So you can go and repeat this for any graph that you like and convince yourself that this rule works.

Well, how does that help us? The diagrams that I have drawn here already tell you how it helps us, because in order to find the total angle, all I need to do is to keep track of local changes. So essentially, as I go along, I carry a running tally which adds up the change in angle at every step.

I don't need to know where I was 100 steps before. I just add another change in angle. By the time I get to the last step, I figure out what my total angle is, and then I'm done.

AUDIENCE: Yes? So is it really the total angle that matters, or is it more just the number of circles that you complete or not?

PROFESSOR: They are the same thing. So if you prefer to say it in terms of the entanglement of your loop and the point at the origin, that's another way of saying it. This entity divided by 2 pi is a topological number, which counts essentially the number of times you have gone around the origin.

So what I'm saying is that minus 1 to the power of the number of crossings-- this factor that I was after-- I can write as e to the i pi times the number of crossings. And it is evident that the only thing that is important here is the parity, so I can replace, in e to the i pi n c, the number of crossings with 1 plus this total angle divided by 2 pi. That gives me e to the i pi, which is a factor of minus 1.

And then I have e to the i theta over 2. And my statement is that this is the same thing as e to the i over 2 times the sum over the little changes of angle that I pick up as I go along. So what I have to do, as I am walking around this square lattice, is keep track of which direction I'm pointing, so that I know from one step to the next whether I change by 0 degrees, 90 degrees, minus 90 degrees, et cetera.
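Combining the last two steps, the sign attached to every loop becomes a product of purely local phase factors, which is what makes the bookkeeping Markovian (Delta theta j is the change of direction at step j; on the square lattice with U-turns excluded it is 0 or plus or minus 90 degrees):

```latex
(-1)^{\,n_c} = e^{\,i\pi n_c} = e^{\,i\pi\left(1 + \Theta/2\pi\right)} = -\,e^{\,i\Theta/2}
= -\prod_{j} e^{\,i\,\Delta\theta_j/2},
\qquad \Delta\theta_j \in \Bigl\{0,\ \pm\frac{\pi}{2}\Bigr\}.
```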

So what do I do? I define a convention. We are going to introduce an orientation mu for each step as follows. So let's say I'm at some point i on the lattice. Then I can proceed along one of four directions.

And I'm going to label them by mu being equal to 1, 2, 3, or 4. You can choose any notation you want. This will be the notation I will use.

Secondly, I'm going to introduce the analog of the quantity that we had for the phantom random walks, which was a set of matrices that were counting how many ways I can go from one site to another site in l steps. So I will introduce the following notation: something with a star that involves l steps.

And I say that I start at some point x, y and I end at some other point x prime, y prime. So again, I'm just counting how many ways I can go from one point to another point, except that I also want to keep track of these orientations. So this quantity is defined as follows.

It is the sum over random walks that start at x, y along mu. That is, if this is my point x, y and I'm looking at the second element of this, the first step I have to go up. If I have specified that mu equals 1, it means that the first step I have to go to the right.

OK, as I go further on the lattice, I ensure that I never have any U-turns, and I keep track of the factors of e to the i delta theta over 2 that I pick up as I take one step to the next, so that if I took my first step here and then continued straight, there would be no factor. But if I went up, I would have a factor of e to the i pi over 4. So I keep track of those factors.

And then I want to end at x prime, y prime and head along mu prime. Since I specified that my first step goes along mu, when I reach the last step, I already know where I came from. But depending on which direction I specify that I will turn to and head for my next step, I will get the change in angle.

So I have to include the change in angle somewhere so that there are l changes of angle. The way that I have defined it, it will essentially be keeping track of where the next step is headed. So again, if I were to draw a diagram, I'll start with this x, y point, and I want to arrive at some point here-- let's say x prime, y prime. And I want to do it in l steps.

And on the first step, I go along the direction that is specified by mu. So this is step 1, and then 2, 3, 4, 5, 6, 7, 8, 9, 10. And let's see, the last one-- so this would be step l minus 1.

This is the last step, which arrives at the point x prime, y prime. But then I have to specify what the direction mu prime is, so that I keep track of the appropriate change of angle that I have to include over here as well. So this is the procedure.

Now this walk, this quantity that I have defined for you here, has the Markovian property, in that if I arrive at this point after l steps, then after l minus 1 steps I was at some point-- x double prime, y double prime-- from which I took one step along some direction-- mu double prime-- and arrived here. So I can write that I have to sum over all possible locations and orientations of the step before. I start from the point x, y, proceed along direction mu for a total of l minus 1 steps, landing on x double prime, y double prime, and heading in direction mu double prime.

So then I have a walk that started at x double prime, y double prime along the direction mu double prime, which in one step has to get me to my destination. And once I am at the destination, I head along the direction mu prime. So this is clearly a matrix product.

And what I have established is that this matrix, W star of l-- which, by the way, is a 4N by 4N matrix, because there are N points and 4 orientations-- is the product of the matrix that I have for l minus 1 steps and the matrix that I have for one step. And this one-step object, which I will call T star, essentially encodes the combined connectivity and orientation information of the square lattice. And since I can repeat this many times, you can see that I have the result that I want: that W star of l is simply T star raised to the power of l. Questions? Yes?
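In symbols, the Markov property just described is the statement that (using the bracket notation for matrix elements introduced on the board):

```latex
\bigl\langle xy,\mu\bigm| W^{*}(\ell)\bigm| x'y',\mu'\bigr\rangle
= \sum_{x''y'',\,\mu''}
\bigl\langle xy,\mu\bigm| W^{*}(\ell-1)\bigm| x''y'',\mu''\bigr\rangle
\bigl\langle x''y'',\mu''\bigm| T^{*}\bigm| x'y',\mu'\bigr\rangle
\quad\Longrightarrow\quad
W^{*}(\ell) = \bigl(T^{*}\bigr)^{\ell}.
```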

AUDIENCE: Have we accounted for loops that [INAUDIBLE] we have the square loop of four steps and then the same square loop of eight steps that are exactly on top of the four-step one?

PROFESSOR: OK, so you want me to take this and do it again?

AUDIENCE: Yeah.

PROFESSOR: OK, so I can certainly do something like this.

AUDIENCE: I see.

PROFESSOR: And of course, I can do the same thing over any of the bonds, but they are always [INAUDIBLE]. Yes?

AUDIENCE: Doesn't [INAUDIBLE] become kind of like transfer matrix [INAUDIBLE]?

PROFESSOR: No. It will reproduce the result that Onsager had, but the transfer matrix that Onsager had was essentially going from column to column. Its size was 2 to the n times 2 to the n. This is 4N by 4N. It is a vastly smaller matrix that I have to deal with.

All right, so maybe we should just write down what this matrix T star is. So T star, I said, is this 4N by 4N matrix that tells me how, going from some site x, y, I arrive at some other site x prime, y prime. But it also has orientation information.

So really, I should have four of these for this, and four of these for this. So it's actually a 4 by 4 matrix that I have once I keep track of orientations. So let me write down the 4 by 4 matrix explicitly.

So here we have mu, and this mu could be one, two, three, or four-- which, again, I have specifically indicated as this, this, this, and this. And along the other direction, I have mu prime. Once I have arrived at the new site, I can either continue forward, or go up, left, or down.

So essentially what this first element says is: start at a site x, y and head in the horizontal direction. Since this is a one-step walk, after one step I will arrive at some other point. Once you arrive at that next point, continue to head to the right.

The next element says head to the right and then go up. The next element says go to the right and then turn back. The next element says go to the right and then go down.

Now we can construct the rest of them: up-right, up-up, up-left, up-down; left-right, left-up, left-left, left-down; and lastly, down-right, down-up, down-left, down-down. So you do your aerobic exercises, and then the next stage is to actually write down the numbers that these correspond to.

So first of all, you can see that-- I'm not sure whether I will have enough space, but let's hope that I do-- in this first row of the matrix, your first step was always to the right. So you always start from x and end up at x plus 1, while y does not change. So I will indicate that by x plus 1-- actually, how should I write it?

Yeah: x prime, y prime. x prime has to be x plus 1; y prime has to be y. So I'll have to introduce the notation that x prime, y prime with x, y means delta of x, x prime times delta of y, y prime. So essentially, you just read off for x prime, y prime what the new point has to be.

And I proceed forward, so there's no change in phase. For the next one, I arrive at the same point, so x prime, y prime is x plus 1, y. But now my tangent vector, my heading, has shifted by 90 degrees, so I have to put in a factor of e to the i pi over 4.

For the next one, I try to go back, but I've said U-turns are not allowed, so this matrix element is 0. For the next matrix element, I have x prime, y prime with x plus 1, y. Now I have turned down, so minus 90 degrees-- the change in angle-- gives e to the minus i pi over 4.

In the second row, you can see that essentially the y-coordinate has to increase by 1. So I have x prime, y prime with x, y plus 1. And this first element has a change of angle that corresponds to minus 90 degrees, so this is e to the minus i pi over 4.

The diagonal element continues to head straight, so there is no phase factor associated with that. The third element turns the other way, so the phase factor for that is e to the i pi over 4. The fourth element is a U-turn, so it will be 0.

The third row starts with a U-turn, which is 0. For the next one, you can see I have to step to the left, so x prime has to become x minus 1 while y does not change. The phase factor is e to the minus i pi over 4. Along the diagonal, there is no phase factor. And since I already had one e to the minus i pi over 4, I should have e to the i pi over 4 for the last one.

And the last row corresponds to y decreasing by 1, so I start with x prime, y prime with x, y minus 1. For the first phase, I can kind of read off how things go across the diagonal: this is e to the i pi over 4.

The next one has to be 0, because the 0's, you can see, proceed along a diagonal. The next one would be x prime, y prime with x, y minus 1 times e to the minus i pi over 4, and then x prime, y prime with x, y minus 1 for the last element. So this keeps track of the changes in phase.
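Collecting the sixteen elements just read off into a single expression, with rows labeled by the direction mu of the incoming step and columns by the outgoing direction mu prime (ordered right, up, left, down), and with the bracket standing for the product of Kronecker deltas defined above:

```latex
\bigl\langle xy \bigm| T^{*} \bigm| x'y' \bigr\rangle_{\mu\mu'} =
\begin{pmatrix}
\langle x'y'|x{+}1,y\rangle & \langle x'y'|x{+}1,y\rangle\,e^{i\pi/4} & 0 & \langle x'y'|x{+}1,y\rangle\,e^{-i\pi/4}\\
\langle x'y'|x,y{+}1\rangle\,e^{-i\pi/4} & \langle x'y'|x,y{+}1\rangle & \langle x'y'|x,y{+}1\rangle\,e^{i\pi/4} & 0\\
0 & \langle x'y'|x{-}1,y\rangle\,e^{-i\pi/4} & \langle x'y'|x{-}1,y\rangle & \langle x'y'|x{-}1,y\rangle\,e^{i\pi/4}\\
\langle x'y'|x,y{-}1\rangle\,e^{i\pi/4} & 0 & \langle x'y'|x,y{-}1\rangle\,e^{-i\pi/4} & \langle x'y'|x,y{-}1\rangle
\end{pmatrix}.
```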

So now, the next thing that we did when we had the ordinary random walks was to take advantage of the translational invariance of the lattice to go to Fourier space and diagonalize. And indeed, we can do that over here too, but only partially, in that this object has two sets of indices: there are the lattice coordinates, and there is the orientation.

But what I can certainly do is diagonalize in the subspace that corresponds to positions. What do I mean by that? What I will do is introduce, let's say, these Fourier elements-- qx, qy with x, y-- which are e to the i qx x plus qy y divided by the square root of N, just as before, without any orientation component.

Then you can see what happens if I multiply this object, qx, qy with x, y, by this matrix that I have over here, x, y, T star, x prime, y prime, and sum over x and y, but leave the orientations untouched. That is, basically I do this individually for each one of these 16 elements of the matrix, each of which clearly depends on x, y, x prime, y prime, but also has some additional factor. In each case, because I'm shifting x or y by one step, I will get this Fourier factor back, up to e to the i qx, e to the minus i qx, e to the i qy, or e to the minus i qy, exactly as I was doing before, except that I will have to do this for every single one of them.

So you can see that essentially what this produces is a matrix that is four by four and depends on q, and I will get back qx, qy with x prime, y prime, because I summed over x and y. So what is this matrix, T star of q? It is very easily constructed from what I have over there.

Because you can see that for the first element, what happens is that when I sum over x, it gets replaced by x prime minus 1, so from here I will get a factor of e to the minus i qx. y and y prime are the same, so I don't get anything from there.

For the next ones, I will get e to the minus i qx plus i pi over 4, then 0, then e to the minus i qx minus i pi over 4. In the next row, y has been shifted by 1, so I will get e to the minus i qy minus i pi over 4, e to the minus i qy, e to the minus i qy plus i pi over 4, and 0.

For the third row, x prime is set to x minus 1, so I essentially get e to the i qx. The third row starts with 0, and then I have e to the i qx minus i pi over 4, e to the i qx, and then e to the i qx plus i pi over 4. The fourth row is e to the i qy plus i pi over 4, 0, e to the i qy minus i pi over 4, and e to the i qy.
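Assembled as a matrix, the four by four block just dictated is (same ordering of directions as before):

```latex
T^{*}(\mathbf{q}) =
\begin{pmatrix}
e^{-iq_x} & e^{-iq_x + i\pi/4} & 0 & e^{-iq_x - i\pi/4}\\
e^{-iq_y - i\pi/4} & e^{-iq_y} & e^{-iq_y + i\pi/4} & 0\\
0 & e^{iq_x - i\pi/4} & e^{iq_x} & e^{iq_x + i\pi/4}\\
e^{iq_y + i\pi/4} & 0 & e^{iq_y - i\pi/4} & e^{iq_y}
\end{pmatrix}.
```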

So in position space, I had this 4N by 4N matrix where the different sites were connected to their neighbors with these phase factors. I have gone from the coordinate to the Fourier basis; I did this transformation.

Now I have a matrix that is block diagonal. So for each value of q, I have a four by four block. So in the q picture, imagine that you have this 4N by 4N matrix, and you have blocks of four for different q along the diagonal. Each one of them is this.

So now let's go and calculate our partition function. So what do we have? We have that log z over n is log 2 hyperbolic cosine squared of k plus 1/2 sum over l.

Sum over l of these loops that come back to themselves. So this is t to the l over l, times the number of loop stars of length l.

I want to relate this number of loop stars of length l to this W star of length l, where I start and end at the origin. So I start from the origin and end at the origin. But I have to be careful. Let's say I make a loop such as this and I end up at the origin: I have to get the right phase factor.

So if I started along direction mu, when I get back to the starting point, I cannot turn this way or that way and still get the right phase factor. I have to go and head in the same direction as mu. So I have to head in the same direction.

Now I can certainly do this as a summation over the starting point as well. Instead of saying that I have to start and end at the origin, I could have started and ended at any point, and then done a sum over x, y, and mu-- but then I had better divide by N, right? And the reason I do that is that now you can see that the structure of this is like a trace, and I like that.

Actually, I made a mistake when I did this, because, you see, the factor that I really had to include is minus 1 to the number of crossings, but my W star of l just keeps track of the e to the i delta theta factors-- actually, I should have put in here a delta theta over 2. So I had forgotten that.

But we can also see that that factor differs from minus 1 to the n c by a minus sign. So there was that minus sign that I had forgotten. And actually, I had better make this a minus sign. So what was plus before becomes a minus because of this factor that I have over here.

So let's write this again. This is log 2 hyperbolic cosine squared of K, minus 1 over 2N, and then this sum over x, y, and mu, which is like a trace. And what is it that I'm tracing? I'm tracing the sum over l of t T star to the l, divided by l. Yes?

AUDIENCE: So will we also sum over mu from 1 to 4?

PROFESSOR: No, because I don't know which direction my first step is in. So what I'm doing is summing over all ways of starting: step from the origin, head in direction mu. Then I have to make sure that I come back along mu.

Now, it is true that I sum over mu, but I already took care of that when I divided by 2l. Because, let's say you look at the diagrams of length 4 that I generate through this procedure: starting from here, depending on which direction I go, I will generate this, or this, or this, or this. These are precisely the four diagrams that I generate depending on which starting direction I pick. So there truly is an over-counting, but that's an over-counting that we've already taken care of.

Now again, we did this last time. This is the series for minus the log of 1 minus t T star-- the matrix t T star. So this is the same thing as log of 2 hyperbolic cosine squared of K, plus now 1 over 2N times the trace of the log of 1 minus t T star.
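Written out, the resummation just performed is the standard series identity applied to the matrix t T star, together with the minus sign recovered above:

```latex
-\frac{1}{2N}\,\mathrm{tr}\sum_{\ell\ge 1}\frac{\bigl(t\,T^{*}\bigr)^{\ell}}{\ell}
= +\frac{1}{2N}\,\mathrm{tr}\,\ln\!\bigl(1 - t\,T^{*}\bigr),
\qquad\text{so}\qquad
\frac{\ln Z}{N} = \ln\!\bigl(2\cosh^{2}K\bigr) + \frac{1}{2N}\,\mathrm{tr}\,\ln\!\bigl(1 - t\,T^{*}\bigr).
```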

Now I said that my matrix was block diagonal when I went to the q basis. And I can take the trace in any basis, whether it's the coordinate basis or the momentum basis. Trace is trace.

So now focus on what the trace will look like if I go to the Fourier basis. I have these four by four blocks, and when I calculate the trace, I calculate the trace of each four by four block and then go over all q's. So basically this can be written as a sum over q's.

The log of 1 minus t times this four by four matrix T star of q, and the trace of that. And finally-- oops, I forgot a factor of 1 over 2N here-- the sum over q I'm going to replace with N times an integral over q divided by 2 pi squared.

So the final answer here is going to look like log 2 hyperbolic cosine squared of K plus 1/2. The 1 over N I get rid of when I write the sum over q as N times an integral d2q. So I have an integral of d2q divided by 2 pi squared.

One more step. The trace of the log of any matrix M I can write as follows-- let's say we find a basis in which M is diagonal. Then it becomes a sum over alpha of the log of lambda alpha, where the lambda alphas are the eigenvalues of the matrix. But a sum of logs is the same thing as the log of the product over alpha of lambda alpha.

The product of the eigenvalues of the matrix you also recognize to be the determinant, so this is the log of the determinant of the matrix. This is a very useful, famous identity-- that trace log is the same thing as log determinant-- that I will use. So rather than calculating the trace log, I will write it as the log of the determinant, and I will explicitly write down for you which four by four matrix the determinant is of.

It is simply 1 minus t times the matrix of those elements. So the first row is 1 minus t e to the minus i qx, then minus t e to the minus i qx plus i pi over 4, then 0, then minus t e to the minus i qx minus i pi over 4. The second row: minus t e to the minus i qy minus i pi over 4, 1 minus t e to the minus i qy, minus t e to the minus i qy plus i pi over 4, and 0 for the last element here-- it's a U-turn.

The third row is 0, minus t e to the i qx minus i pi over 4, 1 minus t e to the i qx for the diagonal element, and minus t e to the i qx plus i pi over 4. The final row: minus t e to the i qy plus i pi over 4, 0 for the U-turn, minus t e to the i qy minus i pi over 4, and the diagonal term 1 minus t e to the i qy. And that's it. And that's the answer.
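In matrix form, the determinant that has to be evaluated is therefore:

```latex
\det\bigl(1 - t\,T^{*}(\mathbf{q})\bigr) = \det
\begin{pmatrix}
1 - t\,e^{-iq_x} & -t\,e^{-iq_x + i\pi/4} & 0 & -t\,e^{-iq_x - i\pi/4}\\
-t\,e^{-iq_y - i\pi/4} & 1 - t\,e^{-iq_y} & -t\,e^{-iq_y + i\pi/4} & 0\\
0 & -t\,e^{iq_x - i\pi/4} & 1 - t\,e^{iq_x} & -t\,e^{iq_x + i\pi/4}\\
-t\,e^{iq_y + i\pi/4} & 0 & -t\,e^{iq_y - i\pi/4} & 1 - t\,e^{iq_y}
\end{pmatrix}.
```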

So calculating the partition function of the two-dimensional Ising model has been reduced to calculating this four by four determinant, which we can do by hand. I won't do it here; I will just write the answer.

So the log of Z over N is log 2 hyperbolic cosine squared of K, plus 1/2 the integral of d2q over 2 pi squared of a log. You take the determinant, and what you find is 1 plus t squared, squared, minus 2t times 1 minus t squared times cosine of qx plus cosine of qy.
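A quick way to convince yourself of the determinant quoted here is to evaluate both sides numerically. The following sketch (NumPy; the sample values of t, qx, qy and all variable names are arbitrary choices of mine, not part of the lecture) builds the four by four block from the board and compares its determinant with the closed-form expression:

```python
import numpy as np

def T_star(qx, qy):
    """One-step 4x4 block T*(q); rows and columns ordered right, up, left, down."""
    a = np.exp(1j * np.pi / 4)   # phase picked up by a +90 degree turn
    return np.array([
        [np.exp(-1j*qx),      np.exp(-1j*qx)*a,  0.0,               np.exp(-1j*qx)/a],
        [np.exp(-1j*qy)/a,    np.exp(-1j*qy),    np.exp(-1j*qy)*a,  0.0             ],
        [0.0,                 np.exp(1j*qx)/a,   np.exp(1j*qx),     np.exp(1j*qx)*a ],
        [np.exp(1j*qy)*a,     0.0,               np.exp(1j*qy)/a,   np.exp(1j*qy)   ],
    ])

def closed_form(t, qx, qy):
    """(1 + t^2)^2 - 2 t (1 - t^2)(cos qx + cos qy), as stated in the lecture."""
    return (1 + t**2)**2 - 2*t*(1 - t**2)*(np.cos(qx) + np.cos(qy))

rng = np.random.default_rng(0)
for _ in range(5):
    t = rng.uniform(0.0, 0.5)                 # t = tanh K
    qx, qy = rng.uniform(0.0, 2*np.pi, size=2)
    det = np.linalg.det(np.eye(4) - t * T_star(qx, qy))
    print(det.real, closed_form(t, qx, qy))   # the two numbers should agree
```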

If you want, you can write it in a slightly different way by taking the hyperbolic cosine squared inside this logarithm and doing a little bit of algebra. We get log 2 plus 1/2 of an integral. Explicitly, these are integrals that go from 0 to 2 pi, because that's the range of q vectors that are allowed by this transformation.

I'll have a q in the x direction and a q in the y direction, divided by 2 pi squared, and each one of them goes over the range 0 to 2 pi.

I have a log. And once I take this hyperbolic cosine squared inside, it becomes hyperbolic cosine to the fourth. You write this t squared as hyperbolic sine squared divided by hyperbolic cosine squared; you can see that the hyperbolic cosine to the fourth will cancel this, and you will get hyperbolic cosine squared plus hyperbolic sine squared, which is the same thing as the hyperbolic cosine of twice the angle, squared.

And the other term conspires to give you the hyperbolic sine of 2K times cosine of qx plus cosine of qy. And so this is the partition function of the two-dimensional Ising model. You can do a little bit more manipulation and write this integral in terms of special functions, but you won't gain much.
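As a numerical sanity check of the algebra in the last two paragraphs, one can evaluate the free energy per site from both forms of the integrand and confirm that they agree. The sketch below does this on a simple quadrature grid (NumPy; the grid size and the chosen value of K are arbitrary assumptions of mine):

```python
import numpy as np

def logZ_per_site_tanh_form(K, n=400):
    """log Z / N = log(2 cosh^2 K) + (1/2) <log[(1+t^2)^2 - 2t(1-t^2)(cos qx + cos qy)]>_q"""
    t = np.tanh(K)
    q = 2*np.pi*(np.arange(n) + 0.5)/n        # midpoint grid over [0, 2*pi)
    qx, qy = np.meshgrid(q, q)
    integrand = np.log((1 + t**2)**2 - 2*t*(1 - t**2)*(np.cos(qx) + np.cos(qy)))
    return np.log(2*np.cosh(K)**2) + 0.5*integrand.mean()

def logZ_per_site_cosh2K_form(K, n=400):
    """log Z / N = log 2 + (1/2) <log[cosh^2(2K) - sinh(2K)(cos qx + cos qy)]>_q"""
    q = 2*np.pi*(np.arange(n) + 0.5)/n
    qx, qy = np.meshgrid(q, q)
    integrand = np.log(np.cosh(2*K)**2 - np.sinh(2*K)*(np.cos(qx) + np.cos(qy)))
    return np.log(2) + 0.5*integrand.mean()

K = 0.3
print(logZ_per_site_tanh_form(K))      # the two results should match
print(logZ_per_site_cosh2K_form(K))
```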

So this is the answer. I want you to absorb and appreciate this derivation. And next time we will look at this and see what it means for the singularities in the phase behavior of the two-dimensional Ising model. Yes?

AUDIENCE: [INAUDIBLE] in the log, it's cosh squared minus [INAUDIBLE]?

PROFESSOR: Yes. But both of them went over to twice the argument. Here, everything is in terms of K, and once I took the hyperbolic cosine squared inside and did the manipulations, they became functions of 2K.

AUDIENCE: What's the subscript up there [INAUDIBLE]?

PROFESSOR: 2. There should be a subscript, yes.

AUDIENCE: Is there an extension to higher dimensions where you just do a summation over cosine?

PROFESSOR: You would wish, right? I mean, that's actually--

[LAUGHTER]

PROFESSOR: And quite a number of people have come up with that conjecture, including myself when I was a graduate student and didn't know better. If you make Kx and Ky different, then this first factor takes the form of hyperbolic cosine of 2Kx times hyperbolic cosine of 2Ky, and the rest becomes, in some sense, a very nice symmetric expression in 2Kx and 2Ky.

There is a natural way to guess a generalization-- and the thing that is nice is that if you set any one of the 2K's to 0, then you recover the formula for the one-dimensional Ising model, as you should. And then the natural thing would be to write a similar expression in three dimensions, such that when you set one of the K's equal to 0, you get the correct two-dimensional Ising model. So it passes a number of tests, yet it is unfortunately not correct.

AUDIENCE: So the problem with doing this in three dimensions is counting the loops?

PROFESSOR: What we relied on heavily was this factor of minus 1 to the power of the number of crossings. And if you think about it as a topological entity, these crossings only make sense in two dimensions. So you don't have the basic tool to go to three dimensions.

And actually, what those minus signs mean I will explain next time. It has something to do with the fermionic character of this [INAUDIBLE]. So it's a beautiful result. You should appreciate it. Yes?

AUDIENCE: [INAUDIBLE] this Thursday?

PROFESSOR: I'll go through the history, too. The person who first derived this free energy was Onsager, with the transfer matrix method that I described. This way of doing it in terms of graphs came much later, and a number of people were involved in that, including Feynman. In fact, in Feynman's book there's a very nice derivation along these lines that you can see.

The connections to fermions were worked out by a number of people-- Schultz, Mattis, Lieb, et cetera. One thing that-- well, OK, I guess I have a few minutes. I can say a few things.

So this result dates back to around the 1950s. And I know a generation of physicists, who are now about to retire or have retired in the past 10 years or so, who were very young when these things were introduced. And as far as I can see, all through their lives they did versions of this.

So you can do versions of the two-dimensional Ising model: you can make the interactions different, you can play with different types of interactions, you can make the boundaries different, periodic, et cetera.

There have been variants that you can find, and there are some people who seem to have done that throughout their careers. But then after that, we had the renormalization group, and a totally different perspective.