Lecture 19: Series Expansions Part 5


Description: In this lecture, Prof. Kardar continues his discussion on Series Expansions, including Critical Behavior of the Two Dimensional Ising Model.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

OK, let's start. So let's go back to our starting point for the past couple of weeks. The square lattice, let's say, where to each site we assign a binary variable sigma, which is plus or minus 1. And a weight that tends to make nearest neighbors parallel to each other.

So e to the plus k if they are parallel, penalized by e to the minus k if they are anti-parallel. And of course, this goes over all pairs of nearest neighbors. And the partition function is obtained by summing over the 2 to the n configurations. This sum is supposed to give us a function of the strength of this coupling, which is some energy divided by temperature.

And we expect this-- at least in two and higher dimensions-- to capture a phase transition. And the way that we have been proceeding is to rewrite this factor as the hyperbolic cosine of k times 1 plus our variable t, which is the hyperbolic tangent of k, times sigma i sigma j.

And then this becomes a cosh k to the number of bonds, which is 2n on the square lattice. And then expanding these factors, we saw that from each bond we get either 1 or a factor that is something like t sigma sigma. And then summing over the two values of sigma would give us 0 unless we added another factor of sigma through another bond. And going forth, we had to draw these kinds of diagrams where at each site an even number of bonds is selected.

Then summing over the sigmas would give me a factor of 2 to the n. And so then I have a sum over a whole bunch of configurations. There is certainly the term 1. There are configurations that consist of a single closed loop drawn on the lattice such as this one, or configurations that correspond to drawing two of these loops, and so forth.
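
In equations, the expansion being described is (a reconstruction of what is on the board, with K the nearest-neighbor coupling, N the number of sites, and t = tanh K):

\[ e^{K\sigma_i\sigma_j} = \cosh K\,\bigl(1 + t\,\sigma_i\sigma_j\bigr), \qquad
Z = \sum_{\{\sigma_i\}} \prod_{\langle ij\rangle} e^{K\sigma_i\sigma_j}
= 2^N (\cosh K)^{2N} \sum_{\text{closed graphs}} t^{\,\#\,\text{bonds in graph}} . \]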

So that's the expression for the partition function. And what we are really interested in is the log of the partition function, which gives us the free energy in terms of thermodynamic quantities that potentially will tell us about the phase transition. So here we will get the log of the 2 to the n times cosh to the 2n. Well actually, we want to divide everything by n so that we get the intensive part. So here we get log of 2 hyperbolic cosine squared of k.

And then I have to take the log of this expression that includes things that are one loop, two disjoint loops, et cetera. And we've seen that a single loop I can slide all over the lattice-- so it carries a factor of n-- whereas things that are multiple loops carry factors of n squared, et cetera, which are incompatible with an extensive log of Z. So it was very tempting for us to do the usual thing and say that the log of the sum that includes these multiple occurrences of loops is the same thing as the sum over configurations that involve a single loop.

And then we have to sum over all shapes of these loops. And each loop will get a factor of t raised to the number of bonds occurring in it. Of course, what we said was that this equality does not hold, because if I exponentiate this term, I will generate things where the different loops coincide with each other, and therefore create types of terms that are not created in the original sum that we had over there. So this sum over phantom loops neglected the condition that these loops, in some sense, have some material to them and don't want to intersect with each other.

Nonetheless, it was useful, and we followed up this calculation. So let's repeat what the result of this incorrect calculation is. So we have log of 2 hyperbolic cosine squared k, and then a 1 over n.

Then we said that a particular way to organize the sum over the loops is to sum over the length of the loop. So I sum over the length of the loop and count the number of loops that have length l. All of them will give me a contribution that is t to the l.

So then I said, well, let's, for example, pick a particular point on the lattice. Let's call it r. And I count the number of ways that I can start at r, do a walk of l steps, and end at r again.

We saw that for these phantom loops, this W had a very nice structure. It was simply the matrix that describes one step, raised to the power l. This was the Markovian property.

There was, of course, an important thing here which said that I could have set the origin of this loop at any point along the loop. So there is an over-counting by a factor of l, because the same loop would have been constructed with different points indicated as the origin. And actually, I can go around the loop clockwise or anti-clockwise, so there was a factor of 2 because of this degeneracy of going clockwise or anti-clockwise when I perform a walk.

And then over here, there's also an implicit sum over this starting point and endpoint. If I always start and end at the origin, then I will get rid of the factor of n. But it is useful to explicitly include this sum over r because then you can explicitly see that the sum over r of this object is the trace of that matrix. And I can actually interchange the order of the trace and the summation over l.

And when that happens, I get log 2 hyperbolic cosine squared k exactly as before. And then I have 1 over n. I have the sum over r replaced by the trace operation.

And then the sum over l of t T raised to the l divided by l is the expansion for minus log of 1 minus t T. And there's the factor of 2 over there that I have to put over here. So note that this plus became minus, because the expansion of log of 1 minus x is minus x minus x squared over 2 minus x cubed over 3, et cetera.
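
Putting these steps together, the intermediate result being described reads (a reconstruction):

\[ \frac{\ln Z}{N} \;\approx\; \ln\bigl(2\cosh^2 K\bigr)
+ \frac{1}{2N}\sum_{l}\frac{1}{l}\,\mathrm{tr}\,(t\,T)^{l}
\;=\; \ln\bigl(2\cosh^2 K\bigr) - \frac{1}{2N}\,\mathrm{tr}\,\ln\bigl(1 - t\,T\bigr). \]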

And the final step that we did was to note that the trace I can calculate in any basis. And in particular, this matrix T is diagonalized by going to the Fourier representation. In the Fourier representation, the trace operation becomes a sum over all q values. The sum over all q values I go to the continuum and write as n times an integral over q, so the n's cancel. I will get an integration over the two components of q.

These are, each one of them-- qx and qy-- in an interval. Let's say 0 to 2 pi or minus pi to pi, doesn't matter. An interval of size 2 pi. So that's the trace operation.

Then I have the log of 1 minus t times the matrix that represents walking along the lattice, represented in Fourier. And basically, from a particular site we can step to the right, to the left, up, or down. So that's e to the i qx, e to the minus i qx, e to the i qy, e to the minus i qy. Adding all of those up, you get 2 times cosine of qx plus cosine of qy.
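
So the phantom-loop (Gaussian) result being described is (a reconstruction of the board expression):

\[ \frac{\ln Z_{\text{Gaussian}}}{N} \;=\; \ln\bigl(2\cosh^2 K\bigr)
- \frac{1}{2}\int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi}\frac{dq_x\,dq_y}{(2\pi)^2}\,
\ln\bigl[\,1 - 2t\,(\cos q_x + \cos q_y)\,\bigr]. \]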

So that was our expression. And then we realized, interestingly, that whereas this final expression certainly was not the Ising partition function that we were after, it was, in fact, the partition function of a Gaussian model where at each site I had a variable whose variance was 1, and I had this kind of coupling-- rather than with the sigma variables, with these Gaussian variables that go from minus infinity to infinity.

But then we said, OK, we can do better than that. And we said that log z over n actually does equal a very similar sum. It is log 2 hyperbolic cosine squared k, and then I have 1 over n. Sum over all kinds of loops where I have a similar diagram that I draw, but I put a star.

And this star implied two things-- that just like before, I draw all kinds of individual loops, but I make sure that my loops never have a step that goes forward and backward. So there was no U-turn. And importantly, there was a factor of minus 1 to the number of times that the walk crossed itself.

And we showed that when we incorporate both of these conditions, we can indeed exponentiate this expression and get exactly the same diagrams as we had on the first line, all coming with the correct weights, and none of the diagrams that have multiple occurrences of a bond occur. So then the question was, how do you calculate this, given that we have this dependence on the number of crossings, which offhand may look as if it is something that requires memory?

And then we saw that, indeed, just like the previous case, we could write the result as a sum over walks that have a particular length l. Right here, we have the factor of t to the l.

Those walks could start and end at the particular point r. But we also specified the direction mu along which you started. So previously I only specified the origin. Now I have to specify the starting point as well as the direction. I have to end at the same point and along the same direction to complete the loop.

And these were accomplished by having these factors of walks that are length l. So to do that, we can certainly incorporate this condition of no U-turn in the description of the steps that I take. So for each step, I know where I came from. I just make sure I don't step back, so that's easy.

And we found that this minus 1 to the power of nc can be incorporated through a factor of e to the i over 2 times the sum of the changes in the orientation of the walker-- as I step through the lattice-- provided that I include also an additional factor of minus 1. That factor of minus 1 I could actually put out front. It's an important factor. And then there's the over-counting, but as before, the walk can go in either one of two directions and can have l starting points.
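
The property being invoked here is the standard statement used in the Kac-Ward construction: for a closed walk with turning angles theta_j at its steps,

\[ (-1)^{\,n_c} \;=\; -\,\exp\!\Bigl[\frac{i}{2}\sum_{j}\theta_j\Bigr], \]

so, for example, a simple loop has no crossings and a total turning of plus or minus 2 pi, and both sides give plus 1.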

OK, so now we can proceed just as before. Log 2 hyperbolic cosine squared k, and then we have a sum over l here, which we can again represent as the log of 1 minus t times this 4n by 4n matrix T star. Going through the log operation, the sign here changes from minus to plus. I have the 1 over 2n as before.

And I have to sum over r and mu, which are the elements that characterize this 4n by 4n matrix. So this amounts to doing the trace log operation. And then taking advantage of the fact that, just as before, Fourier transforms can at least partially block diagonalize this 4n by 4n matrix.

I go to that basis, and the trace becomes an integral over q. And then I would have to do the trace of a log of a 4 by 4 matrix. And for that, I use the identity that a trace of log of any matrix is the log of the determinant of that same matrix.

And so the thing that I have to integrate is the log of the determinant of a 4 by 4 matrix that captures the steps that I take on the square lattice. And we saw that, for example, continuing in the horizontal direction to the right would give me a factor of t e to the minus i qx; continuing in the vertical direction-- up-- t e to the minus i qy; and similarly for continuing to the left and down.

These are the diagonal elements. And then there are off-diagonal elements. So the next entry here was to go forward and then bend upward. That gave me, in addition to the e to the minus i qx, which is the same forward step as here, a factor of-- let's call it omega so I don't have to write it all over the place. Omega is e to the i pi over 4.

The next one would be a U-turn, which is not allowed, so that entry is 0. And the next one was t e to the minus i qx omega star. And we could fill out similarly all of the other places in this 4 by 4 matrix.

And then the whole problem comes down to having to evaluate a 4 by 4 determinant, which you can do by hand with a couple of sheets of paper for the algebra. And I wrote for you the final answer: it is 1/2 times two integrals, from minus pi to pi, over qx and qy, divided by 2 pi squared, of the log of 1 plus t squared, squared, minus 2t times 1 minus t squared, times cosine of qx plus cosine of qy. This was the expression for the partition function.
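
Written out, the exact result being quoted is (a reconstruction, with the same notation as above):

\[ \frac{\ln Z}{N} \;=\; \ln\bigl(2\cosh^2 K\bigr)
+ \frac{1}{2}\int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi}\frac{dq_x\,dq_y}{(2\pi)^2}\,
\ln\Bigl[\,(1+t^2)^2 - 2t\,(1-t^2)\,(\cos q_x + \cos q_y)\,\Bigr]. \]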

OK, so this is where we ended up last time. Now the question is, we have here on the board two expressions, the correct one and the incorrect one-- the Gaussian model and the two-dimensional Ising one. They look surprisingly similar, and that should start to worry us, because when we have some functional form, that functional form carries within it certain singularities.

And you say, well, these two functions, both of them are a double integral of log of something minus something cosine plus cosine. So after all of this work, did we end up with an expression that has the same singular behavior as the Gaussian model? OK, so let's go and look at things more carefully.

So in both cases, what I need to do is to integrate a function A that appears inside the log. There is an A for the Gaussian, and then there is this object-- let's call it A star-- for the correct solution. The thing that I integrate over is, of course, q. So this is a function of the vector q, as well as of the parameter as a function of which I expect to have a phase transition, which is t.

Now where could I potentially get some kind of a singularity? The only place that I can get a singularity is if the argument of the log goes to 0, because log of 0 is singular, its derivatives are singular, et cetera. So you may say, OK, that's where I should be looking.

So where is this most likely to happen when I'm integrating over q? Basically, I'm integrating qx and qy over a Brillouin zone that goes from minus pi to pi in both directions. And potentially somewhere in this zone I encounter a singularity.

Let's come from the side of high temperatures, where t is close to 0. Then I have log of 1, no problem. As I go to lower and lower temperatures, t becomes larger.

Then from the 1, I start to subtract more and more with these cosines. And clearly the place where I'm subtracting the most is right at the center, at q equals 0. So let's expand this in the vicinity of q equals 0.

And what I see there is the following. The cosines are approximately 1 minus q squared over 2. So I have 1 minus 4t, and then I have plus t times qx squared plus qy squared, which is essentially t times q squared. And then I have terms of higher order in qx and qy.

Fine. So this part is positive, no problem. I see that this part goes through 0 when I hit tc, which is 1/4. And this we had already seen, that basically this is the place where the exponentially increasing number of walks-- as 4 to the number of steps-- overcomes the exponentially decreasing fidelity of information carried through each walk, which was t to the l.

So 4t being 1, tc is 1/4. We are interested in the singularities in the vicinity of this phase transition, so we additionally go and look at what happens when t approaches tc, but from below, because clearly if I go to a t that is larger than 1/4, it doesn't make any sense. So t has to be less than 1/4.

And so then what I have here is that I can write this first part as 4 times tc minus t. And in the coefficient of q squared, to lowest order I can replace t by tc. The 4 I can write as 1 over tc and pull a tc out front. So the whole thing I can write as q squared plus 4 delta t divided by tc, where delta t I have defined to be tc minus t-- how close I am to the location where this singularity takes place.
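
In symbols, the expansion just described is (a reconstruction):

\[ 1 - 2t(\cos q_x + \cos q_y) \;\approx\; 1 - 4t + t\,q^2
\;=\; 4(t_c - t) + t\,q^2 \;\approx\; t_c\Bigl[q^2 + \frac{4\,\delta t}{t_c}\Bigr],
\qquad t_c = \tfrac14,\quad \delta t = t_c - t. \]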

So what I'm interested in is not the whole form of this function, but only the singularities that it expresses. So I focus on the singular part of this Gaussian expression. I don't have to worry about that term.

So I have minus 1/2. I have the double integral. And the argument of the log I have expanded in the vicinity of the point where I expect the singularity to take place.

And I'll write the answer as q squared plus 4 delta t over tc. If I am sufficiently close in my integration to the origin so that the expansion in q is acceptable, there is an additional factor of tc. But if I take a log of tc, it's just a constant. I can integrate that out.

It's not going to contribute to the singular part, just an additional regular component. Now if I am in the vicinity of q equals 0, where all of the action is, at this order the thing that I'm integrating has circular symmetry. So the 2-dimensional integration d qx d qy I can write as 2 pi q dq divided by 2 pi squared, which is the density of states.

And this approximation of the thing being isotropic and circular only holds when I'm sufficiently close to the origin, let's say up to some value that I will call lambda. So I will impose some cut-off here, lambda, which is certainly less than or of the order of pi-- let's say pi over 10. It doesn't matter what it is. As we will see, for the singular part it ultimately does not matter what I put for lambda.

But the rest of the integration that I haven't explicitly written down-- again, in analogy to what we had seen before for the Landau-Ginzburg calculation-- will give me something that is perfectly analytic. So I have extracted from this expression the singular part. OK, now let's do this integral carefully.

So what do I have? I have 1 over 2 pi with the minus 1/2, so I have minus 1 over 4 pi. If I call this whole object here x-- so x is q squared plus this shift-- then we can see that dx is 2 q dq.

So what I have to do is the integral of dx log x, which is x log x minus x. So essentially, let's absorb the 2 from the dx and make this 8 pi. And then what I have is x log x minus x itself, which I can write as x times log of x over e. And then this whole thing has to be evaluated between the two limits of integration, 0 and lambda.

Now you explicitly see that if I substitute for q the upper limit lambda, it will give me something like lambda squared plus delta t-- an expandable and analytical function. Log of a constant plus delta t, which I can start analytically expanding. So anything that I get from the upper cut-off of the integration is perfectly analytic. I don't have to worry about it.

If I'm interested in the singular part, I basically need to evaluate this at its lower cut-off. So I evaluate it at q equals 0. Well, first of all, I will get a sign change because I'm at the lower cut-off.

I will get from here 4 delta t over tc. And from here I will get the log of a bunch of constants-- it doesn't really matter-- the log of 4 over e times delta t over tc.

What is the leading singularity? It is delta t log delta t. So again, the leading singularity is delta t times the log of delta t. There's an overall factor of 1 over pi, but the precise coefficient doesn't matter.

You take two derivatives. You find that the heat capacity, let's say, is proportional to two derivatives of log z with respect to delta t. You take one derivative, it goes like the log. You take another derivative of the log, you find that the singularity is 1 over delta t. That corresponds to a heat capacity divergence with an exponent of unity, which is quite consistent with the general Gaussian formula that we had in d dimensions, which was alpha equals 2 minus d over 2.
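
Schematically, the argument is:

\[ f_{\text{sing}} \;\propto\; \delta t\,\ln\delta t
\;\;\Longrightarrow\;\;
C \;\sim\; \frac{\partial^2 f_{\text{sing}}}{\partial(\delta t)^2} \;\sim\; \frac{1}{\delta t},
\qquad \alpha_{\text{Gaussian}} = 2 - \frac{d}{2} = 1 \ \text{at } d=2. \]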

So that's the Gaussian. And of course, this whole theory breaks down for t that is greater than tc. Once I go beyond tc, my expressions just don't make sense. I can't take the log of a negative number.

And we understand why that is. That's because we are including all of these loops that can go over each other multiple times. The whole theory does not make sense.

So we did this one to death. Will the exact result be any different? So let's carry out the corresponding procedure-- for A star that is a function of q and t.

And again, singularities should come from the place where this is most likely to go to 0. You can see it's one positive piece minus another piece. And clearly, when the q's are close to 0 is when you subtract the most, and that's where you are most likely to approach 0. So let's expand it around q equals 0.

This, as q goes to 0, is 1 plus t squared, squared. And then I have the minus term. Each one of the cosines starts at unity, so I will have minus 4t times 1 minus t squared. And then from the qx squared over 2 and qy squared over 2, I will get a plus t times 1 minus t squared times q squared-- qx squared plus qy squared-- plus order of q to the fourth.

So the way that we identified the location of the singular part before was to focus on exactly q equals 0. And what we find is that A star at q equals 0 is essentially this part, which I'm going to rewrite slightly, starting with the first term.

1 plus t squared, squared, is the same thing as 1 minus t squared, squared, plus 4t squared. The difference between the expansions of these two squares is that one has plus 2t squared and the other has minus 2t squared, which is why I have added the 4t squared here. And then there is the minus 4t times 1 minus t squared.

And the reason that I did that is that you can now see that this cross term is twice this times this, so the whole thing is a complete square: it is 1 minus t squared minus 2t, the whole thing squared. So the first thing that gives us reassurance happens now: whereas previously for the Gaussian model 1 minus 4t could be both positive and negative, this you can see is always positive.
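
That is (a reconstruction of the algebra):

\[ A^*(\mathbf{q}=0,\,t) \;=\; (1+t^2)^2 - 4t(1-t^2)
\;=\; (1-t^2)^2 + 4t^2 - 4t(1-t^2)
\;=\; \bigl[(1-t^2) - 2t\bigr]^2 \;=\; \bigl(1 - 2t - t^2\bigr)^2 \;\ge\; 0. \]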

So there is no problem with me not being able to go from one side of the phase transition to another side of the phase transition. This expression will encounter no difficulties. But there is a special point when this thing is 0.

So there is a point when 1 minus 2tc minus tc squared is equal to 0. The whole thing goes to 0. And you can figure out where that is.

It's a quadratic form, so it has two solutions. Let me recast it slightly-- knowing the answer sometimes makes you go too fast.

tc squared plus 2tc minus 1 equals 0. tc is minus 1 plus or minus square root of 2. The minus solution is not acceptable. The plus solution gives me root 2 minus 1.
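
As a quick numerical check (a small Python sketch, not part of the lecture), the positive root indeed matches the duality condition recapped next:

    import math

    # positive root of tc**2 + 2*tc - 1 = 0
    tc = math.sqrt(2) - 1
    print(tc**2 + 2*tc - 1)    # ~0, confirms the root

    # t = tanh(K), so the critical coupling Kc follows from tc
    Kc = math.atanh(tc)
    print(math.exp(2*Kc))      # ~2.414..., i.e. sqrt(2) + 1, the dual value quoted below
    print(math.sinh(2*Kc))     # ~1.0, the self-dual condition sinh(2 Kc) = 1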

Just to remind you, we calculated a value for the critical point based on duality. So let's just recap that duality argument. We saw that the series that we had calculated, which was a high temperature expansion in powers of tanh k, reproduced the expansion that we had around zero temperature in terms of islands of minus in a sea of plus, where the contribution of each flipped bond was going from e to the k to e to the minus k. So there was a correspondence between a dual coupling.

And the actual coupling, that was like this. At the critical point, we said the two of them have to be the same. And what we had calculated based on that, since the hyperbolic tangent I can write in terms of the exponentials, was that the value of e to the plus 2kc was, in fact, square root of 2 plus 1.

And the inverse of this will be square root of 2 minus 1, and that's the same as the hyperbolic tangent. That's the same thing as what we have already written. So the calculation that we had done before-- obtaining the critical temperature based on this Kramers-Wannier duality-- gave us a critical point here, which is precisely the place that we can identify as the origin of the singularity in this expression. So what did we do next?

The next thing that we did, having identified up there where the critical point was-- which was at 1/4-- was to expand our A, the integrand inside the log, in the vicinity of that point. So what I want to do now is to similarly expand A star of q for t in the vicinity of tc. And what do I have to do? Let's write t as tc plus delta t and make an expansion for delta t small.

So I have to make an expansion for delta t small, first of all, of this quantity. If I make a small change in t, I'll have to take a derivative inside here. So I have minus 2t minus 2, evaluated at tc, times the change delta t. So that's the change of this expression if I go slightly away from the point where it is 0. Yes?

AUDIENCE: I have a question. t here is the hyperbolic tangent of k, whereas we usually measure the deviation from the critical point in temperature.

PROFESSOR: Now all of these things are analytical functions of each other. So the k is, in fact, some energy j divided by kT. So really, the temperature is in here. And my t is the hyperbolic tangent of the above object, so it's tanh of j divided by kT.

So the point is that whenever I look at a delta in temperature, I can translate that delta in temperature to a delta in t times the value of the derivative at this location, which is some finite number. So basically, up to some constant, taking derivatives with respect to temperature, with respect to k, with respect to tanh k, with respect to beta-- evaluated at the finite temperature that is the location of the critical point-- they're all the same up to proportionality constants, and that's why I wrote "proportional" there.

One thing that you have to make sure of-- and I actually spent half an hour this morning checking-- is that the signs work out fine. I didn't want to write an expression for the heat capacity that was negative. So proportionalities aside, the one thing that I had better have is that the sign of the heat capacity is positive.

So that's the expansion of the term that corresponds to q equals 0. For the term that was proportional to q squared, look at what we did in the above expression for the Gaussian model: since it is already proportional to q squared, we evaluate its coefficient at exactly t equals tc. So I will put here tc times 1 minus tc squared times q squared, and then I will have higher orders.

Now fortunately, we have a value for tc that we can substitute in a couple of places here. You can see that this is, in fact, twice tc plus 1, and tc plus 1 is root 2. So this is minus 2 root 2. I square that, and this whole thing becomes 8 delta t squared.

This object here, 1 minus tc squared-- you can see, if I put the 2tc on the other side of the equation for tc-- 1 minus tc squared is the same thing as 2tc. So I can write the whole thing here as 2 tc squared q squared. And the reason I do that is that, like before, there's an overall factor that I can take out of the parentheses. And the answer will be q squared plus now 4 times delta t over tc, squared.

It's very similar to what we had before, except that where we had q squared plus 4 delta t over tc, we now have q squared plus 4 times delta t over tc, squared. This square is very important for allowing us to go to both positive and negative delta t, but let's see its consequence for the singularity. So now log z of the correct form divided by n-- the singular part-- we calculate just as before.
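
Side by side, the two expansions near the critical point read (a reconstruction, with delta t the deviation from tc):

\[ A \;\approx\; t_c\Bigl[q^2 + \frac{4\,\delta t}{t_c}\Bigr] \quad\text{(Gaussian)},
\qquad
A^* \;\approx\; 2t_c^2\Bigl[q^2 + 4\Bigl(\frac{\delta t}{t_c}\Bigr)^{\!2}\Bigr] \quad\text{(exact)}. \]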

First of all, rather than minus 1/2 I have a plus 1/2. I have the same integral, which in the vicinity of the origin is symmetric, so I will write it as 2 pi q dq divided by 4 pi squared. And then I have the log of q squared plus 4 times delta t over tc, squared; the overall constant factor in front of the bracket I will forget about for the consideration of singularities.

And now again, it has exactly the same x dx structure that I had before. So it's the same integral. And what you will find is that it evaluates to 1 over 8 pi times q squared plus 4 times delta t over tc, squared, times the log of that same quantity over e, evaluated between 0 and lambda. And the only singularity comes from the evaluation at the origin.

And from the lower limit I will get a factor of minus. So I will have 1 over 8 pi, and then I substitute this factor of 4 times delta t over tc, squared. The 4 and the 8 will give me a 2.

And then I evaluate the log of delta t squared, so that's another factor of 2. So at the end, only one factor of pi survives. I will have delta t over tc, squared, times the log of, let's say, the absolute value of delta t over tc.

So the only thing that changed is that whereas before I had a linear term sitting in front of the log, I now have a quadratic term. But now when I take two derivatives-- and we are now sure that it does not really matter whether I take derivatives with respect to temperature or delta t or any other variable-- you can see that the leading behavior will come from taking the two derivatives out here, and it will be proportional to the log.

So I will get minus 1 over pi times the log of delta t over tc. So if I were to plot the heat capacity of the system as a function of, let's say, this parameter t-- which is also something that stands for temperature-- t goes between, say, 0 and 1. It's a hyperbolic tangent.

There's a location which is this tc, which is root 2 minus 1. And the singular part of the heat capacity-- there will be some other part of the heat capacity that is regular-- but the singular part, we see, has a logarithmic divergence. And furthermore, you can see the amplitudes: approaching from the two sides of the transition, this goes as A plus or A minus times the log of the absolute value of delta t. And the ratio of the amplitudes, which we have also said is universal, is equal to 1. And you could have anticipated that based on duality.
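
A small symbolic check of this statement (a Python sketch, not part of the lecture; overall constants are dropped and tc is set to 1):

    import sympy as sp

    x = sp.symbols('x', positive=True)   # x stands for |delta_t| / t_c
    f_sing = -x**2 * sp.log(x)           # singular part of log Z per site, up to a constant
    C_sing = sp.diff(f_sing, x, 2)       # heat capacity ~ two derivatives of the free energy
    print(sp.expand(C_sing))             # -> -2*log(x) - 3 : a logarithmic divergence

    # Because the singular part depends on delta_t only through delta_t**2 and |delta_t|,
    # the coefficient of the log is the same on both sides of the transition,
    # which is why the amplitude ratio A+/A- equals 1.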

All right, so indeed, there is a different behavior between the two models. The exact solution allows us to go both above and below the transition and has this logarithmic singularity. And this expression was first written down-- well, first published by Onsager in 1944. Even a couple of years before that, he had written the expression on boards at various conferences, saying that this is the answer, but he hadn't published the paper.

The way that he did it is based on the transfer matrix method, as we said. Basically, we can imagine that we have a lattice that is, let's say, L parallel in one direction and L perpendicular in the other direction. And then, as in the problems that you had to do, the transfer matrix for the one dimensional model is easy, and we can also do it for a ladder, where it's a 4 by 4 matrix.

For this, it becomes a 2 to the L by 2 to the L matrix. And of course, you are interested in the limit where L goes to infinity so that you become 2-dimensional. And so he was able to look at the structure of this matrix, recognize that its elements could be represented in terms of other matrices that had some interesting algebra, and then he eventually could figure out what the diagonalization looked like in general for arbitrary L, and then calculate log z in terms of the log of the largest eigenvalue.

I guess we have to multiply by L parallel. And he showed that, indeed, it corresponds to this and has this phase transition. And before this solution, people were not even sure whether, when you sum an expression such as that for the partition function, you ever get a singularity, because, again, on the face of it, it's basically a sum of exponential functions. Each one of them is perfectly analytic, and a sum of analytic functions is supposed to be analytic.

The whole key lies in taking the limit of, say, L to infinity and n to infinity, and then you are able to see these kinds of singularities. And then again, some people thought that the only type of singularities you would be able to get are the kinds of things that we saw at the saddle point level. So to see a different type of singularity, with a heat capacity that was actually divergent and could explicitly be shown through mathematics, was quite an interesting revelation.

So as an indication of the perceived importance of this, after the war-- you can see this is all around the time of World War II-- Casimir wrote to Pauli saying, I have been away from thinking about physics the past few years with the war and all of that. Anything interesting happening in theoretical physics? And Pauli responded, well, not much, except that Onsager solved the two-dimensional Ising model.

Of course, the solution that he had is quite obscure, and I don't think many people understood it. The form that people now refer to was presented a bit later, which is actually kind of interesting, because Onsager's paper has a title about crystal statistics, and then there's a following paper by a different author-- Bruria Kaufman, 1949-- that has the same title except that it is number two or something.

So they were clearly talking to each other, but what she was able to show, Bruria Kaufman, was that the structure of these matrices can be simplified much further and can be made to look like spinors that are familiar from other branches of physics. And so this roughly 50-page paper was reduced to something like a 20-page paper.

And that's the solution that is reproduced in Huang's book. Chapter 15 of Huang has essentially a reproduction of this. I was looking at this because there aren't really that many women mathematical physicists, so I was kind of looking at her history, and she's quite an unusual person.

So it turns out that for a while, she was mathematical assistant to Albert Einstein. She was first married to one of the most well known linguists of the 20th century. And for a while, they were both in Israel in a kibbutz where this important linguist was acting as a chauffeur and driving people around.

And then later in life, she was briefly married to Willis Lamb, of the Lamb shift. She had done some calculation that, if Lamb had paid attention to it, could also have potentially won him a Nobel Prize for the Mossbauer effect, but at that time he didn't pay attention to it, so somebody else got there first. So a very interesting person. To my mind, a good project for somebody is to write a biography of this person. It doesn't seem to exist.

OK, so both of these are based on this transfer matrix method. The method that I have given you, which is the graphical solution, was first presented by Kac and Ward in 1952, and it is reproduced in Feynman's book. So Feynman apparently also arrived at one of the crucial steps, the conjecture that the factor of minus 1 to the power of the number of crossings gives you the correct factor to do the counting.

Now it turns out that I also did not prove that statement, so there is a missing mathematical link to make my proof of this expression complete. And that was provided by a mathematician called Sherman in 1960, who essentially showed very rigorously that these factors of minus 1 to the number of crossings work out and magically make everything happen. Now the question to ask is the following.

We expect things to be exactly solvable when they are trivial in some sense. The Gaussian model is exactly solvable because there is no interaction among the modes. So why is it that the two-dimensional Ising model is solvable?

And one of the keys to that is the realization of another way of looking at the problem, which appeared in the work of Lieb, Mattis, and Schultz in 1964. And so basically what they said is, let's take a look at these pictures that I have been drawing for the graphs. So I have graphs that, on a kind of coarse level, look something like this maybe.

And what they said was that if we look at the transfer matrix-- the one that Onsager and Bruria Kaufman were looking at-- the reason it was solvable is that it looked very much like you had a system of fermions. And then the insight is that if we look at these pictures, you can regard this as a 1-dimensional system of fermions that is evolving in time. And what you are looking at are the world-line histories of particles that are propagating.

Here they annihilate each other. Here they annihilate each other. Another pair gets created, et cetera.

But in one dimension, fermions you can regard in two ways-- either they cannot occupy the same site, or you can say, well, let them occupy the same site, but then I introduce these factors of minus 1 for the exchange of fermions. So when two fermions cross each other in one dimension, their positions have been exchanged, so you have to put a minus 1 for the crossing.

And then when you sum over all histories, for every crossing there will be a history where the particles touch and go away, and the sum total of the two of them is 0. So the point is that at the end of the day, this theory is a theory of free fermions. So we have, in fact, not solved an interacting, complicated problem-- it is one-- but in the right perspective, it looks like a bunch of completely non-interacting fermions that pass through each other, as long as we're willing to put in the minus 1 phase that you have for crossings.

One last aspect to think about that you learn from this fermionic perspective. Look at this expression. Why did I say that this is a Gaussian model?

We saw that we could get this z Gaussian by doing an integral essentially over continuous variables phi i. And the weight I could write here as k i j phi i phi j. In principle, if I only want to count an interaction once but the sum counts it twice, I can put a factor of 1/2 if I allow i and j to be summed over independently.

In particular, I said the weight that I have to put here is something like phi i squared over 2. So essentially you can see that z of the Gaussian ultimately always becomes something like 1 over the square root of the determinant of whatever the quadratic form is up there. And when you take the log of the partition function, the square root of the determinant becomes minus 1/2 of the log of the determinant, which is what we've been calculating. So that's obvious.

You say, this object over here that you write as the answer is also the log of some determinant. So can I think of this as some kind of a walk that is prescribed according to these rules on a lattice, give weights to the jumps according to what I have, then do a kind of Gaussian integration and get the same answer? Well, the difficulty is precisely this plus 1/2 versus minus 1/2, because when you do the Gaussian integration, you get the determinant in the denominator.

So is there a trick to get the determinant in the numerator? And the answer is that people who do path integral formulations for fermionic systems rely on coherent states that involve anti-commuting variables, called Grassmann variables. And the very interesting thing about Grassmann variables is that if I do the analog of the Gaussian integration, the determinant-- rather than being in the denominator-- goes into the numerator. And so one can, in fact, rewrite this partition function, sort of working backward, in terms of an integration over Gaussian distributed Grassmann variables on the lattice, which is also equivalent to another way of thinking about fermions.
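
The contrast being described is the standard pair of Gaussian integrals (written here for reference, with A a suitable quadratic form and the eta anti-commuting Grassmann variables):

\[ \int \prod_i d\phi_i \; e^{-\frac12 \sum_{ij}\phi_i A_{ij}\phi_j} \;\propto\; (\det A)^{-1/2},
\qquad
\int \prod_i d\bar\eta_i\,d\eta_i \; e^{-\sum_{ij}\bar\eta_i A_{ij}\eta_j} \;=\; \det A. \]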

Let's see. What else is known about this model? So I said that the specific heat singularity is known, so we have this alpha, which is 0 with a log. Given that the structure that we have inside the log is something like this, you won't be surprised that if I think in terms of a correlation length-- typically the inverse correlation length is the value of q at which the q squared term becomes comparable to the delta t term-- I will arrive at a correlation length that diverges as delta t to the minus 1.

Again, I can write it more precisely as B plus or B minus times delta t to the minus nu plus or minus nu minus. The ratio of the B's is 1, and the nus are the same and equal to 1. So the correlation length diverges with an exponent nu equal to 1, and one can show this exactly.

One can then also calculate actual correlations at criticality and show that at criticality the correlations decay with the separation between the points that you are looking at as 1 over r to the 1/4. So the exponent that we call eta is 1/4 in 2 dimensions. Once you have correlations, you certainly know that you can calculate the susceptibility as an integral of the correlation function.

And so it's going to be an integral of d2r over r to the 1/4 that is cut off at the correlation length. So that's going to give me xi to the power of 2 minus 1/4, which is 7/4. So the susceptibility is going to diverge as delta t to the minus gamma, where gamma is, again, 7/4.
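
In symbols, the estimate just described is:

\[ \chi \;\sim\; \int^{\xi}\! d^2r\;\frac{1}{r^{1/4}} \;\sim\; \xi^{\,2-1/4} \;=\; \xi^{7/4}
\;\sim\; |\delta t|^{-7/4},
\qquad \gamma \;=\; \nu\,(2-\eta) \;=\; \tfrac{7}{4}. \]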

So these things-- correlations-- can be calculated like you saw already for the case of the Gaussian model, with an appropriate modification. That is, I have to look at walks that don't come back and close on themselves, but walks that go from one point to another point. So the same types of techniques that are described here will allow you to get all of these other results.

AUDIENCE: Question?

PROFESSOR: Yes?

AUDIENCE: Did people try to do experiments on systems that should be equivalent to the Ising model?

PROFESSOR: There are by now many experimental realizations of the 2-dimensional Ising model in which these exponents-- actually, also the next one that I will tell you about-- have been confirmed very nicely. So there are a number of 2-dimensional adsorbed systems, a number of systems of mixtures in 2 dimensions that phase separate. So there's a huge number of experimental realizations. At that time, no, because we're talking about 70 years ago.

So the last thing that I want to mention is, of course, that when you go below tc-- let's call it temperature-- when I go below the critical temperature, there will be a magnetization. That has always been our signature of symmetry breaking. And so the question is, what is the magnetization?

And then this is another interesting story: around 1950, in a couple of conferences, at the end of somebody's talk, Onsager went to the board and said that he and Bruria Kaufman had found this expression for the magnetization of the system at low temperature, as a function of temperature or the coupling constant. But they never wrote the solution down, until in 1952 C. N. Yang actually published a paper that derived this result. And since the quantity inside goes to 1 at the critical point-- by duality, sinh of 2k times sinh of 2k dual is 1-- this vanishes with an exponent beta, which is 1/8, which is the other exponent in this series.
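
The formula being referred to is the Onsager-Yang spontaneous magnetization (a reconstruction, not read off the board):

\[ m \;=\; \bigl[\,1 - \sinh^{-4}(2K)\,\bigr]^{1/8} \quad (T<T_c),
\qquad m \;\sim\; (T_c - T)^{1/8} \;\Rightarrow\; \beta = \tfrac18 . \]

Collecting the exact two-dimensional Ising exponents quoted in this lecture-- alpha = 0 (log), beta = 1/8, gamma = 7/4, nu = 1, eta = 1/4-- one can check that alpha + 2 beta + gamma = 0 + 1/4 + 7/4 = 2, as required by the Rushbrooke scaling relation.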

So as I said, there are many people who since the '50s and '60s have devoted their lives to looking at various generalizations and extensions of the Ising model. There are many people who have tried to solve it in 2 dimensions with a finite magnetic field. You can't do that. This magnetization is obtained only in the limit of the field going to 0. And clearly, people have thought a lot about doing things in 3 dimensions or higher dimensions, without much success.

So this is basically the end of the portion that I had to give with discrete models and lattices. And as of next lecture, we will change our perspective one more time. We'll go back to the continuum and we will look at n component models in the low temperature expansion and see what happens over there. Are there any questions?

OK, I will give you a preview of what I will be doing in the next few minutes. So we have been looking at these lattice models in the high temperature limit, where the expansion was graphical, such as the one that I have over there. And the partition function turned out to be, up to constants, 1 plus something that involves loops and things that involve multiple loops, et cetera. This was for the Ising model.

It turns out that I can go from the Ising model to n-component spins. So at each site of the lattice, I put something that has n components, subject to the condition that it's a unit vector. And I try to calculate the partition function by integrating over all of these unit vectors in n dimensions, with a weight that is just a generalization of the Ising one, except that I would have the dot product of these things.

And if I start making appropriate high temperature expansions for these models, I will generate a very similar series, except that whenever I see a loop, I have to put a factor of n, where n is the number of components. And this we already saw when we were doing Landau-Ginzburg expansions. We saw that the expansions that we had over here could be graphically interpreted as representations of the various terms that we had in the Landau-Ginzburg expansion.

And essentially, this factor of n is the one difficulty. You can use the same methods for numerical series expansions for all of these n-component models, but you can't do anything exactly with them.

Now the low temperature expansion for Ising-like models, we saw, involved starting with some ground state-- for example, all up-- and then including excitations that were islands of minus in a sea of plus, and so on to higher orders in this series. Now for other discrete models, such as the Potts model, you can use the same procedure. And again, you have seen that in some of the problems that you've had to solve.

But this will not work when we come to these continuous spin models, because for continuous spin models, the ground state would be when everybody is pointing in one direction, but the excitations on this ground state are not islands that are flipped over. They are these long wavelength Goldstone modes, which we described earlier in class.

So if we want to make an expansion to look at the low temperature behavior for systems with n greater than 1, we have to make an expansion involving Goldstone modes and, as we will see, interactions among Goldstone modes. So at the appropriate level of sophistication, you can no longer regard the Goldstone modes as maintaining independence from each other. And very roughly, the main difference between discrete and these continuous symmetry models is captured as follows.

Suppose I have a huge system that is L by L, and I impose boundary conditions on one side where all of the spins point in one direction, and ask what happens if I impose a different condition on the other side. Well, what would happen is you would have a domain boundary. And the cost of that domain boundary will be proportional to the area of the boundary in d dimensions.

So the cost is proportional to some energy per bond times L to the d minus 1. Whereas if I try to do the same thing for a continuous spin system, if I align one side like this, the other side like this, but in-between I can gradually change from one direction to another direction.

And the cost would be the square of the gradient, which is 1 over L squared, integrated over the entire system. So the energy cost of this excitation will be some parameter j times 1 over L-- which is the gradient, or strain-- squared, integrated over the entire volume, so it goes as L to the d minus 2.
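
Schematically, the comparison is:

\[ \Delta E_{\text{discrete}} \;\sim\; J\,L^{\,d-1},
\qquad
\Delta E_{\text{continuous}} \;\sim\; J\Bigl(\frac{\pi}{L}\Bigr)^{2} L^{\,d} \;\sim\; J\,L^{\,d-2}, \]

and comparing each cost to kT for large L gives lower critical dimensions of 1 and 2, respectively, as stated next.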

So we can see that these systems are much softer than discrete systems. For discrete systems, thermal fluctuations are sufficient to destroy order at low temperature as soon as this cost is of the order of kT, which for large L can only happen in d equal to 1 and lower. Whereas for these systems, it happens in d equal to 2 and lower. So that sets the lower critical dimension for these models.

With continuous symmetry, we already saw it's 2. For the discrete models, it is 1. Now we are going to be interested in this class of models with continuous symmetry. I have told you that in 2 dimensions, they should not order. So presumably the critical temperature, if I regard it as a continuous function of dimension, will go to 0 as d minus 2. So the insight of Polyakov was that maybe we can look at the interactions of these Goldstone modes and do a systematic low temperature expansion that reaches the phase transition at a critical point systematically in powers of d minus 2. And that's what we will attempt in the future.