Lecture 17: Series Expansions Part 3

Description: In this lecture, Prof. Kardar continues his discussion on Series Expansions, including Summing over Phantom Loops.

Instructor: Prof. Mehran Kardar

PROFESSOR: OK. Let's start. So we are back to calculating partition functions for Ising models. So again, on some general hypercubic lattice, you have variables sigma_i, being minus or plus 1 at each site, with a tendency to be parallel. And our task is to calculate the partition function, which for N sites amounts to summing over 2 to the N binary configurations of a weight that favors near neighbors to be in the same state, with a strength K.

OK. So the procedure that we are going to follow is this high temperature expansion, that is, writing each exponential factor as hyperbolic cosine of K times (1 plus t sigma_i sigma_j), with t standing for tanh K. And then, as we discussed, I actually have to write this as a product over all bonds. And then essentially to each bond on this lattice I have to assign either 1 or t sigma_i sigma_j. The binary choice now moves to the bonds.

And then we saw that, in order to make sure that these factors survive the summation over the sigmas, we have to construct graphs where an even number of bonds emanates from each site. And we rewrote the whole thing as 2 to the N times cosh K raised to the number of bonds, which for a hypercubic lattice is dN. And then we had a sum over graphs with an even number of bonds per site. And the contribution of a graph was basically t raised to the number of bonds.
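To fix notation, here is the board result just described restated in one line (with t defined as tanh K):

$$ Z \;=\; \sum_{\{\sigma_i = \pm 1\}} \prod_{\langle ij \rangle} e^{K \sigma_i \sigma_j} \;=\; 2^N (\cosh K)^{dN} \!\!\sum_{\substack{\text{graphs with an even}\\ \text{number of bonds per site}}}\!\! t^{\,\text{number of bonds}} \;\equiv\; 2^N (\cosh K)^{dN}\, S . $$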

So I'm going to call this sum over here S. After all, the interesting part of the problem is captured in this factor S; the 2 to the N and cosh K to the power dN are perfectly well-behaved analytic functions. We are looking for something that is singular. So in this S I can pick the 1 from all of the factors-- that is, to the lowest order in t I just get 1. And then we saw that the next order correction would be something like a square, which is this t to the 4th. And then objects that are more complicated versions of this.

And I can write all of those as a sum over the kinds of graphs that I can draw on the lattice by going around and making a loop. But then I will have graphs that are composed of two loops, for example, that are disconnected. Since a single loop can be translated all over the place, it comes with a factor of N for a lattice that is large, forgetting edge effects. The two-loop term has a contribution, once I slide the two loops with respect to each other, that is of the order of N squared. And then I would have things that are three loops, and so forth.

Now based on what we have seen before, it is very tempting to somehow exponentiate this sum and write it as the exponential of the sum over objects that have a single loop. And I will call this new sum S prime, for reasons to become apparent, because if I start to expand this exponential, what do I get?

I will certainly start to get 1, then I will have the sum over single-loop graphs, plus one half of whatever is in the exponent squared, plus 1 over 3 factorial of whatever is in the exponent-- which is this sum-- cubed, and so forth.

And if I were to expand this thing that is something squared, I will certainly get terms in it that correspond to products of two of the terms in the sum. Here I would get things that correspond to products of three of the terms in the sum. And the combinatorial factors would certainly work out to get rid of the 2 and the 6: let's say I have loops a, b, and c. I could pick a from the first bracket, b from the second, c from the third, or any permutation thereof-- 3 factorial arrangements in all-- so that I am left with just a factor of abc.

But S is definitely not equal to S prime, because we immediately see that once I square this term, (a plus b) squared, in addition to the cross term ab, I will also get a squared and b squared, which correspond to objects such as this: one half of the same graph repeated twice.

Right? It is the a squared term that is appearing in this sum. And this clearly has no analog in my sum S. I emphasize that in the sum S, each bond can occur either zero times or one time.

Whereas if I exponentiate, you can see that S prime certainly contains things where a bond is repeated twice-- and as I go further, multiple times. Also, when I square this term, I could get one factor, which is a loop, from the first bracket, and another factor, which is also a loop, from the second bracket in this product of two brackets, and once I multiply them they may happen to overlap and share something. OK? All right.
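Writing this out, with a, b, c, ... standing for the single-loop contributions:

$$ S' = e^{a+b+c+\cdots} = 1 + (a+b+c+\cdots) + \frac{(a+b+c+\cdots)^2}{2!} + \cdots = 1 + (a+b+c+\cdots) + (ab+ac+bc+\cdots) + \frac{a^2+b^2+c^2+\cdots}{2} + \cdots $$

The cross terms such as ab correctly reproduce the two-loop graphs of S, but the terms such as a squared over 2, where the same loop appears twice, have no counterpart in S.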

So very roughly and hand-wavingly, the thing is that S has loops that avoid each other-- they don't intersect-- whereas S prime has loops that can intersect. So very naively, S is a sum over, if you like, a gas of non-intersecting loops, whereas S prime is a sum over a gas of what I could call phantom loops. They're kind of like ghosts. They can go through each other.

OK? All right.

So that's one of a number of problems. Now, in this lecture we are going to ignore this difference between S and S prime and calculate S prime, which means we will not be doing the right job. And this is one example of where I'm not doing the right job. And as I go through the calculation, you may want to figure out the other places where I don't correctly reproduce the sum S, in order to ultimately be able to make a calculation and see what it comes to.

Now next lecture we will try to correct all of those errors, so you'd better keep track of all of those errors, to make sure that in the end everything is done right. Seems good? All right.

So let's go and take a look at this s prime.

So S prime-- actually, log of S prime-- is a sum over graphs that I can draw on the lattice.

And since I could exponentiate, there is no problem of extensivity involved in the exponentiation here. Essentially, for calculating the log, I just have to calculate these singly connected loops. And you can see that that will work out fine as far as extensivity is concerned, because I can translate each one of these loop figures over the lattice and get the factor of N. So this is clearly something that is OK.

Now since I'm already making this mistake of forgetting about the intersections between loops, I'm going to make another assumption, which is that in this sum over single loops I also include things where the single loop is allowed to intersect itself. For example, I'm going to allow as a single-loop entity something that is like this, where this particular bond will give a factor of t squared. Clearly I should not include this in the correct sum.

But since I'm ignoring intersections among the different loops and making them phantom, let's make each loop also phantom with respect to itself, and allow it to intersect itself [INAUDIBLE].

It's in the same spirit as the mistake that we are going to make.

So now what do we have? We can write that log of S prime is this sum over loops of length l, each multiplied by t to the l. So basically I say that I can draw loops, let's say of size four, loops of size six. Each one of them I have to multiply by t to the l. And of course, I have to multiply by the number of loops of length l. OK?

And this I'm going to write slightly differently. So I'm going to say that log of S prime is a sum over this length of the loop. All the loops of length l are going to contribute a factor of t to the l. And I'm going to count the loops of length l that start and end at the origin. And I'll give that the symbol ⟨0|W_l|0⟩.

Actually, very soon I will introduce a generalization of this. So let me write the definition. I define a matrix that is indexed by two sites of the lattice and counts the number of walks that I can have from one to the other, from i to j, in l steps. So ⟨i|W_l|j⟩ is the number of walks of length l from i to j. OK?

So what am I doing here? Since I am looking at single-loop objects, I want to sum over, let's say, all terms that contribute, in this case, t to the 4. It's obvious, because it's really just one shape-- it's a square-- but this square could have started anywhere on the lattice. And this factor of N, which captures the extensivity, I'll take outside, because I expect this log of S prime to be extensive. It should be proportional to N. So one part of it is essentially where I start to draw this loop. So I say that I always start the loops at the point zero.

Then I want to come back to myself. So I indicate that the end point should also be zero. And if I want to get a term here-- this is a term that is t to the fourth-- I need to know how many such walks I have. Yes?

AUDIENCE: Are you allowing the loop to intersect itself in this case or not?

PROFESSOR: In this case, yes. I will also, when I'm calculating anything to do with s prime I allow intersection. So if you are asking whether I'm allowing something like this, the answer is yes.

AUDIENCE: OK.

PROFESSOR: Yeah.

AUDIENCE: And are we assuming an infinitely large system so that-- PROFESSOR: Yes. That's right. So that the edge effects, you don't have to worry about.

Or alternatively you can imagine that you have periodic boundary conditions. And with periodic boundary conditions we can still slide it all over the place. OK? But clearly then the maximal size of these loops, et cetera, will be determined potentially by the size of the lattice.

Now this is not entirely correct, because there is an over-counting. This one square that I have drawn over here-- I could have started from this point, or this point, or this point.

And essentially for something that has length l, I would have had l possible starting points.

So in order to avoid the over-counting I have to divide by l. And in fact I could have started walking along this direction, or alternatively I could have gone in the clockwise direction.

So there's two orientations to the walk that will take me from the origin back to the origin in l steps.

And not to do the over-counting, I have to divide by 2l. Yes?

AUDIENCE: If we allow walking over ourselves is it always a degeneracy of 2l?

PROFESSOR: Yes. You can go and do the calculation to convince yourself that even for something as convoluted as that, the degeneracy is still 2l.

PROFESSOR: OK. So this is what we want to calculate. Well, it turns out that this entity actually shows up somewhere else also. So let me tell you why I wanted to write a more general thing. Another quantity that I can try to calculate is the spin-spin correlation.

I can pick spin zero here and say spin r here, some other location. And I want to calculate what is the correlation between these two spins. OK?

So how do I do that for the Ising model? I have to essentially sum over all configurations, with an additional factor of sigma_0 sigma_r, of this weight, e to the K sum over ⟨ij⟩ of sigma_i sigma_j-- appropriately normalized, of course, by the partition function.

And I can make the same transformation that I have on the first line to these exponential factors, to write this as a sum over the sigma_i: I have sigma_0 sigma_r, and then a product over all bonds of these factors of (1 plus t sigma_i sigma_j).

The factors of 2 to the N cosh K will cancel out between the numerator and the denominator.

And basically I will get the same thing. Now of course, the denominator is the partition function-- it is the sum S that we are after-- but we can also, and we've seen already how to do this, express the sum in the numerator graphically. And the difference between the numerator and the denominator is that I have an additional sigma sitting here and an additional sigma sitting there, which, if left by themselves, will average out to 0.

So I need to connect them by paths that are composed of factors of t sigma sigma, originating on one and ending on the other. Right? So in the same sense that what appears in the denominator is a sum that involves these loops, the first term that appears in the numerator is a path that connects zero to r through some combination of these factors of t. And then I have to sum over all possible ways of doing that.

But then I could certainly have graphs that involve the same path and a loop-- there is nothing against that-- or the same path and two loops, and so forth. And you can see that, as long as, and only as long as, I treat these as phantom objects that can pass through each other, I can factor out this path, and the rest-- 1 plus one loop plus two loops-- is exactly what I have in the denominator.

So we see that under the assumption of phantomness-- if phantom-- then this becomes really just the sum over all paths that go from 0 to r. And of course, the contribution of each path is however many factors of t I have. Right? So I have to have a sum over the length of this path, l. Paths of length l will contribute a factor of t to the l. But there are potentially multiple ways to go from 0 to r in l steps.

How many ways? That's precisely what I called ⟨0|W_l|r⟩. Yes?

AUDIENCE: Why does a graph that goes from 0 to r in three different ways have [INAUDIBLE]?

PROFESSOR: OK. So you want to go from 0 to r, you want to have a single path, and then you want that path to do something like this?

AUDIENCE: Yeah. That doesn't [INAUDIBLE].

PROFESSOR: That's fine. Under the phantomness condition, this is the same as this path multiplied by this loop, which is a term that appears in the denominator and cancels out.

AUDIENCE: But you're assuming that you have the phantom condition. So this is completely normal. It doesn't matter.

PROFESSOR: I'm not sure I understand your question. You say that even without the phantom condition this graph exists.

AUDIENCE: With the phantom condition-- PROFESSOR: Yes.

AUDIENCE: --this graph is perfectly normal.

PROFESSOR: Even without the phantom condition this is an acceptable graph. Yeah. OK. Yeah?

AUDIENCE: So what does phantomness mean? Why then can we simplify only a [INAUDIBLE]?

PROFESSOR: OK. Because let's say I were to take this as the answer and multiply it by the denominator. The question is, would I generate the series that is in the numerator? OK? So if I take this object that I have said is the answer, I have to multiply it by this object, and make sure that it correctly reproduces the numerator. The question is, when does it? I mean, certainly when I multiply this by this, I will get the possibility of having a graph such as this. And from here, I can have a loop such as this. And the two of them could share a bond such as that. Now in the real Ising model, that is not allowed. So it's the phantomness condition that allows me to factor these things. OK? All right.

So we see that if I have this quantity that I have written in red, then I can calculate both the correlation functions and the free energy-- the log of the partition function-- within this phantomness assumption. So the question is, can I calculate that? And the answer is that calculating the number of random walks is one of the most basic things that one does in statistical physics, and it is easily accomplished as follows.

Basically, I say that, OK, let's say that I start from 0, actually let's do it 0 and r, and let's say that I have looked at all possible paths that have l steps and end up over here.

So this is step one, step two, step number three. And the last one, step l, I have purposely drawn as a dotted line. Maybe I will pull this point further down to emphasize that this is the last one, and this, l minus 1, is the previous one. So I can certainly write a relation for the number of walks from 0 to r in l steps.

Well, any walk that got to r in l steps, at the (l minus 1)-th step had to be somewhere.

OK? So what I do is count the number of walks from 0 to r prime-- I'll call this point r prime-- in l minus 1 steps, and then multiply by the number of walks from r prime to r in one step. So before I reach my destination, at the previous step I had to have been somewhere; I sum over all the possible places where that somewhere could be, and then I have to make sure that I can reach my destination from that somewhere in one step. That's all that sum is. OK?

Now I can convert that to mathematics. This quantity is: start from 0, take l steps, arrive at r. By definition, that's the number. And what it says is that this should be a sum over r prime of: start from 0, take l minus 1 steps, arrive at r prime; start from r prime, take one step, arrive at your destination r-- summed over r prime. OK?

Now these are N by N matrices that are labeled by l. Right? So these being N by N matrices, this summation over r prime is clearly a matrix multiplication. So what that says is that W of l is W of 1 times W of l minus 1. And that is true for any pair of starting and final points. So basically, quite generically, we see that the matrix that corresponds to the count for l steps is obtained from the matrix that corresponds to the count for one step multiplying that of l minus 1 steps. And clearly I can keep going: W of l minus 1 I can write as W of 1 times W of l minus 2, and so forth.

And ultimately the answer is none other than the entity that corresponds to one step, raised to the l-th power. And just to make things easier on my writing, I will indicate this as T to the l, where capital T stands for this matrix for one step. OK?
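As a quick numerical sanity check of this W_l = T^l representation, here is a minimal sketch-- my own illustration, not part of the lecture; the lattice size and indexing scheme are arbitrary choices:

```python
import numpy as np

def connectivity_matrix(L):
    """One-step connectivity matrix T for an L x L square lattice with
    periodic boundary conditions; site (x, y) is indexed as x * L + y."""
    N = L * L
    T = np.zeros((N, N), dtype=np.int64)
    for x in range(L):
        for y in range(L):
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                T[x * L + y, ((x + dx) % L) * L + (y + dy) % L] = 1
    return T

T = connectivity_matrix(8)           # L = 8: a 4-step walk cannot wrap around
W4 = np.linalg.matrix_power(T, 4)    # W_l = T^l counts walks of length l
print(W4[0, 0])                      # 36 closed 4-step walks from any given site
```

Of those 36 closed walks, only 8 trace out genuine unit-square loops (4 squares times 2 orientations); the other 28 retrace bonds, which is exactly the kind of phantom walk this approximation admits.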

This condition over here, that I said in words, that allows me to write this in this nice matrix form, is called the Markovian condition. The kind of walks that I have been telling you about are Markovian, in the sense that each step depends only on where you were at the previous step. The walks don't have memory of where they had been before.

And that's what enables us to do this. And that's why I had to impose the phantom condition: because if I really wanted to say that something like this has to be excluded, then the walk must keep memory of every place that it had been before.

Right? And then it would be non-Markovian.

Then I wouldn't have been able to do this nice calculation. That's why I had to make this phantomness assumption, so that the walk forgets where it was previously. OK?

Now... Yeah? Question? No?

So this matrix, of where you can go in one step, is really the matrix of who is connected to whom. Right? So this tells you the connectivity. So for example, if I'm dealing with a 2D square lattice, the sites on my lattice are labeled by x and y. And I can ask, where can I go in one step if I start from (x, y)? And the answer is that either x stays the same and y shifts by one, or y stays the same and x shifts by one. These are the four non-zero elements of the matrix that allow you to go on the square lattice either up, down, right, or left. And there are corresponding entries for the cubic or whatever other lattice. OK?

Now you look at this, and you can see that what I have imposed is clearly such that, for a lattice where every site looks like every other site, this is really a function only of the separation between the two points. It's an N by N matrix-- it has N squared elements-- but the elements are essentially one column that gets shifted as you go further and further down, in a very specific way.

And whenever you have a matrix such as this, translational symmetry implies that you can diagonalize it by Fourier transformation. And what do I mean by that? I can define a vector |q⟩ whose various components are things like e to the i q dot r, in whatever dimension. Let's normalize it by square root of N. And I claim that this is an eigenvector.

So basically my statement is that if I take the matrix T and act on |q⟩, then I should get some eigenvalue times the vector back. And let's check that for the case of the 2D system.

So for the 2D system, what is ⟨x y|T|q_x q_y⟩? Well, that is ⟨x y|T|x' y'⟩-- the entity that I have calculated here-- times ⟨x' y'|q_x q_y⟩. And of course, I have a sum over x prime and y prime. That's the matrix product.

And so again, remember, this entity is simply e to the i (q_x x' plus q_y y'), divided by square root of N. And because the matrix element is a set of delta functions, what does it do? It basically sets x prime to either x plus 1 or x minus 1 with y prime equal to y, or y prime to either y plus 1 or y minus 1 with x prime equal to x. You can see that you always get back your e to the i (q_x x plus q_y y) with its factor of root N.

So essentially, the delta functions just change the x primes to x, at the cost of the different shifts that you have to do over there, which means that you will get a factor of e to the i q_x plus e to the minus i q_x with q_y not changing, or e to the i q_y plus e to the minus i q_y with the x component not changing. So this is the standard thing that you have seen: it is none other than 2 times (cosine q_x plus cosine q_y). And so we can see that, quite generally, on the d-dimensional hypercubic lattice, my eigenvalue T(q) is going to be twice the sum over all d components of these factors of cosine q_alpha. And that's about it. OK?
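Collected in one line, the diagonalization just performed:

$$ \langle \mathbf{r}|q\rangle = \frac{e^{i\,\mathbf{q}\cdot\mathbf{r}}}{\sqrt{N}}, \qquad T\,|q\rangle = T(\mathbf{q})\,|q\rangle, \qquad T(\mathbf{q}) = \sum_{\alpha=1}^{d}\left(e^{iq_\alpha} + e^{-iq_\alpha}\right) = 2\sum_{\alpha=1}^{d}\cos q_\alpha . $$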

So why did I bother with this diagonalization? The answer is that it now allows me to calculate everything that I want. So, for example, I know that this quantity that I'm interested in, ⟨sigma_0 sigma_r⟩, is going to be a sum over l of t to the l times ⟨0|W_l|r⟩, and W_l is T to the l. Right?

Now this small t I can take inside, and write it like this. And if I want, I can write this as the (0, r) matrix element of a sum over l of (tT) raised to the power of l. So it's a new matrix, which is essentially the sum over l of small t times this connectivity matrix, the whole thing raised to the l-th power. This is a geometric series. We can immediately do the geometric series. The answer is ⟨0| 1 over (1 minus tT) |r⟩. OK? And the reason I did this diagonalization is so that I can calculate this matrix element, because I don't really want to invert a whole matrix, but I can certainly invert the matrix when it is in the diagonal basis, because all I have to do is invert pure numbers.

So what is done is to rotate to the Fourier basis and calculate this there.

It is diagonal in this basis. I insert complete sets of |q⟩ states. And so what is that? ⟨0|q⟩ is one of these exponentials evaluated at the origin, so it's just 1 over root N. ⟨q|r⟩ is e to the i q dot r over root N. And in between sits just the eigenvalue that I calculated over here. So this entity is none other than a sum over q of e to the i q dot r, divided by N-- the two factors of square root of N-- times 1 over (1 minus 2t sum over alpha cosine q_alpha).

And then of course, I'm interested in big systems. I replace the sum over q with an integral over q, with the 2 pi to the d density of states. In going from there to there, there's a factor of the volume; the way that I have set the unit of length in my system, the volume is actually the number of sites that I have. So that factor of N disappears. And all I need to do is evaluate the Fourier transform of this factor of 1 over (1 minus 2t sum over alpha cosine q_alpha), integrated over q. OK? Yes?
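The two routes to the correlation function-- inverting (1 minus tT) directly, or summing over the Fourier modes-- can be checked against each other numerically. A minimal sketch (the lattice size, the value of t, and the separation r are my own illustrative choices, not from the lecture):

```python
import numpy as np

L, t = 12, 0.1                        # t well below t_c = 1/(2d) = 0.25 in d = 2
N = L * L
T = np.zeros((N, N))                  # one-step connectivity, periodic square lattice
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[x * L + y, ((x + dx) % L) * L + (y + dy) % L] = 1.0

r = (3, 0)                            # separation along the x axis

# Route 1: geometric series summed as a matrix inverse, <0|(1 - tT)^{-1}|r>
direct = np.linalg.inv(np.eye(N) - t * T)[0, r[0] * L + r[1]]

# Route 2: sum over Fourier modes q = 2*pi*n/L of e^{iq.r} / (1 - 2t sum_a cos q_a)
qs = 2 * np.pi * np.arange(L) / L
fourier = sum(np.exp(1j * (qx * r[0] + qy * r[1]))
              / (1 - 2 * t * (np.cos(qx) + np.cos(qy)))
              for qx in qs for qy in qs).real / N

print(direct, fourier)                # identical; both close to t**3 for small t
```

The agreement of the leading term with t cubed anticipates the shortest-path argument that follows.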

AUDIENCE: So I notice that you have a sum over q, but then you also have a sum alpha q alpha.

PROFESSOR: Right.

AUDIENCE: Is there a relationship between the q and the q alpha or not?

PROFESSOR: OK. So that goes back here. So when I had two dimensions, I had qx and qy.

Right? And so I labeled them, rather than x and y, as q_1 and q_2. So the index alpha just runs over the spatial dimensions. If you like, this is also dq_1, dq_2, up to dq_d.

AUDIENCE: OK. [INAUDIBLE].

PROFESSOR: OK. All right. So we are down here. Let's proceed. So what is going to happen?

So suppose I'm picking two sites, 0 and r, let's say both along the x direction, some particular distance apart-- let's say seven or eight apart. So in order to evaluate this, if this is the x direction, I would have an integral of something like e to the i q_x r. Now when I integrate over q_x, the integral of e to the i q_x r by itself would go to 0.

The only way that it won't go to 0 is from the expansion of what is in the denominator.

I should bring in enough factors of e to the minus i q_x, which certainly exist in these cosine factors, to get rid of that. So essentially, the mathematical procedure over here is to bring in sufficient factors of e to the minus i q dot r to eliminate that. And the number of ways that you can do that is precisely another way of capturing this entity. Which means that clearly, if I'm looking at something like this in the limit where t is very, very small, so that the lowest order in t contributes, the lowest order in t would be the shortest path that joins these two points.

So it is like connecting these two points with a string that is pulled very tight. So what I am saying is that the limit as t goes to 0 of something like ⟨sigma_0 sigma_r⟩ is going to be identical to t raised to the minimum distance between 0 and r. Actually, I should say proportional-- that is, in fact, more correct-- because there could be multiple shortest paths that go between two points. OK?

Now, does this make sense? It's a kind of exponential decay. Essentially I start within the high temperature limit, where the two spins don't know anything about each other, so ⟨sigma_0 sigma_r⟩ is going to be 0. So anything that is beyond 0 has to come from somewhere in which the information about the state of this site was conveyed all the way over there. And it is done by passing over one bond at a time. And in some sense, the fidelity of each one of those transmissions is proportional to t.

Now as t becomes larger, you are going to be willing to pay the cost of paths that go from 0 to r in a slightly more disordered way. So your string that was tight becomes kind of loose and floppy. And why does that become the case? Because although these longer paths carry more factors of this t that is small, there are just so many of them that the entropy-- the number of these paths-- starts to dominate. OK?

So very roughly, you can state the competition between them as follows: the contribution of a path of length l decays like t to the l, but the number of paths roughly grows like 2d to the power of l. And since we have this phantomness character, if I am sitting at a particular site, I can go up, down, right, or left. So at each step I have, in two dimensions, a choice of four; in three dimensions, a choice of six; in d dimensions, a choice of 2d.

So you can see that this factor is exponentially small and this one is exponentially large. So they kind of balance each other. And the balance is something like e to the minus l over some typical l-bar that will contribute. And clearly this typical l is going to be finite as long as 2dt is less than 1. So you can see that something strange has to happen at the value t_c such that 2d t_c is equal to 1.

At that point, the cost of making your paths longer is more than made up for by the increase in the number of paths that you can have; the entropy starts to win. And you can see that precisely that condition tells me whether or not this integral exists. Right? Because the point of this integral where the integrand is largest is q going to 0. And you can see that as q goes to 0, the value of the denominator is 1 minus 2td. So there is precisely a pole when this condition takes place.

And if I'm interested in seeing what is happening when I'm in the vicinity of that transition, right before these paths become very large, what I can do is start exploring what is happening in the vicinity of that pole. So 1 minus 2t sum over alpha cosine q_alpha-- how does it behave?

Each cosine I can start expanding around q going to 0 as 1 minus q squared over 2. So you can see that this is going to be 1 minus 2td, and then I would have plus t q squared, because I will have q_1 squared plus q_2 squared up to q_d squared. So this q squared is the sum of the squares of all the components.

And then I do have higher order terms, of order q to the fourth and so forth. OK? So if I'm looking in the vicinity of t going to t_c, the coefficient of q squared is roughly t_c. The constant term is something that goes to 0, and once I factor out the t_c, I can define an inverse length squared in this fashion. OK?

PROFESSOR: You can see that this inverse length squared is going to be (1 minus 2dt) over t_c, which is 2d (t_c minus t) over t_c. So you can see that this is none other than something proportional to t_c minus t. And if I'm looking at the vicinity of that point, I find that the correlation between sigma_0 and sigma_r is approximately the integral of d^d q over (2 pi) to the d-- the Fourier transform-- of 1 over t_c times (q squared plus xi to the minus 2).
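Putting the last few steps together in one line:

$$ 1 - 2t\sum_{\alpha}\cos q_\alpha \;\approx\; (1 - 2dt) + t\,q^2 \;\approx\; t_c\!\left(q^2 + \xi^{-2}\right), \qquad \xi^{-2} = \frac{1-2dt}{t_c} = \frac{2d\,(t_c - t)}{t_c}\;\propto\;(t_c - t), $$

so that the correlation length diverges as (t_c minus t) to the minus 1/2.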

We've seen this before. You evaluated this Fourier transform when you were doing Landau-Ginzburg.

So this is something that behaves, when you are looking at distances much less than this correlation length, as the Coulomb power law. When you are looking at distances much larger than the correlation length, you get the exponential decay, with this r to the (d minus 1) over 2 factor.

So what we find is that the correlation of these phantom loops is precisely the correlation that we had seen for the Gaussian model, in fact. It has a correlation length that diverges in precisely the same way that we had seen for the Gaussian model, with the square root singularity. So this is our usual nu equals 1/2 type of behavior that we've seen.

And somehow, by all of these routes, we have reproduced some property of the Gaussian model.

In fact, it's a little bit more than that, because we can go back and look at what we had here for the free energy. So let's erase the things that pertain to the correlation length and correlations, and focus on the calculation that we kind of left in the middle over here.

So what do we have? We have that log of S prime, the intensive part, is a sum over the lengths of these loops that start and end at the origin. And the contribution of a loop of length l is small t to the l.

And since W of l is the connectivity matrix to the l-th power, it's really like looking at the (0, 0) matrix element of this entity. And of course, there is this degeneracy factor of 2l. And I can write this as 1/2 of a sum over l.

Well, let's do it this way-- the (0, 0) element of the sum over l of (tT) to the l over l.

And what is this? This is, in fact, the series expansion of minus log of (1 minus tT).

So I can again go to the Fourier basis, and write this as minus 1/2 sum over q of ⟨0|q⟩ log of (1 minus t T(q)) ⟨q|0⟩. Each one of these brackets is just a factor of 1 over square root of N. The sum over q goes over to N times the integral over q. So this simply becomes minus 1/2 integral of d^d q over (2 pi) to the d of log of (1 minus 2t sum over alpha cosine q_alpha), with the factor of 2 that we had over here.
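So the whole free-energy calculation compresses to:

$$ \frac{\ln S'}{N} \;=\; \sum_{l}\frac{t^{\,l}}{2l}\,\langle 0|T^{\,l}|0\rangle \;=\; -\frac{1}{2}\,\langle 0|\ln(1 - tT)|0\rangle \;=\; -\frac{1}{2}\int\!\frac{d^d q}{(2\pi)^d}\;\ln\!\Big(1 - 2t\sum_{\alpha}\cos q_\alpha\Big). $$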

And again, if I go to this limit where I am close to t_c, the critical value of this t, and focus on the behavior as q goes to 0, this is going to be something that has this (q squared plus xi to the minus 2) type of singularity. And again, this is the kind of integral that we saw in connection with the Gaussian model. And we know the kind of singularities it gives.

But why did we end up with the Gaussian model? Let's work backward. That is, typically, when we are doing some kind of a partition function of a Gaussian model-- let's say we have some integral over some variables phi i.

Let's say we put them on the sites of a lattice. And we have e to the minus sum over i and j of phi_i M_ij phi_j over 2. What was the answer? The answer was typically proportional to 1 over the determinant of this matrix to the 1/2 power, which, if I exponentiate, is the exponential of minus 1/2 the logarithm of the determinant of this matrix.

So that's the general result. And we see that the result for our log of S prime is indeed of the form of minus 1/2 the log of something. And indeed, this sum over q corresponds to summing over the different eigenvalues. And if I were to express det M in terms of the product of its eigenvalues, it would be precisely that.

So you can see that what we have calculated, by comparison of these two things, corresponds to a matrix M_ij which is delta_ij minus t times this single-step connectivity matrix that I had before. So indeed, the partition function that I calculated, that I called Z prime or S prime, corresponds to doing the following: an integral over the phi_i where, from the delta_ij, for each phi_i I have a factor of e to the minus phi_i squared over 2. So essentially, I have to do this. And then from the second piece, once it's exponentiated, I get a factor of e to the t sum over ⟨ij⟩ of phi_i phi_j.
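Written out, the correspondence is:

$$ S' \;\propto\; \int\!\prod_i d\phi_i\; \exp\!\Big[-\tfrac{1}{2}\sum_{ij}\phi_i\,(\delta_{ij} - t\,T_{ij})\,\phi_j\Big] \;\propto\; \det(\mathbb{1} - tT)^{-1/2} \;=\; e^{-\frac{1}{2}\operatorname{tr}\ln(\mathbb{1} - tT)}, $$

whose exponent, evaluated in the Fourier basis, is exactly the minus 1/2 integral of log (1 minus 2t sum over alpha cosine q_alpha) found above.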

So you can see that I started by calculating Ising variables on this lattice. The result that I calculated for these phantom walks is actually identical to what I would have obtained by replacing the Ising variables with continuous quantities that I integrate over everywhere, provided that I weigh them with this Gaussian factor. So really, the difference between the Ising model and what I have done here can be captured by the weight of the independent integration per site.

So if I really want to do Ising, the weight that I want for phi has to have a delta function at minus 1 and a delta function at plus 1. Rather than doing that, I have calculated a weight that corresponds to the Gaussian, where the weight for each phi is basically a Gaussian.

And if I really wanted to do the Landau-Ginzburg, all I would need to do is to add here a phi to the 4th.

The problem with this Gaussian-- the phantom system that I have-- is the same problem that we had with the Gaussian model. It only gives me one side of the phase transition. Because you see that I did all of these calculations. All of these calculations were consistent, as long as I was dealing with t that was less than tc.

Once I go to t that is greater than t_c, then this denominator that I had becomes negative.

It just doesn't make sense-- negative correlations don't make sense. The argument of the log that I have to calculate here, if t is larger than 1 over 2d, doesn't make sense.

And of course, the reason the whole theory doesn't make sense is kind of related to the instability that we have in the Gaussian model. Essentially, in the Gaussian model also, when t becomes large enough, this phi squared is not enough to remove the instability that you have for the largest eigenvalue. Physically, what that means is that we started with this taut string. And as we approached the transition, the string became more flexible.

And in principle, what this instability is telling me is that if you go past the transition, to t greater than t_c, the string becomes something that can go over and over itself as many times as it likes, and gain more and more entropy. So it will keep growing forever. There is nothing to stop it.

So the phantomness condition-- the cost that you pay for it is that once you go beyond the transition, you essentially overwhelm yourself. There's just so much that is going on, there is nothing that you can do. So that's the story.

Now, let's try to finally understand some of the things that we had before, like this upper critical dimension of 4. Where did it come from, et cetera? You are now in a position to do things and understand things.

First thing to note is, let's try to understand what this exponent nu equal to 1/2 means.

So we said that if I think about having information about my site at the origin, that has to propagate so that further and further neighbors start to know what the information was at site 0-- that information comes through these paths that fluctuate, go different distances, and eventually, let's say, reach a boundary that is at distance r. As we said, the contribution of each path decays exponentially, but the number of paths grows exponentially.

And so for a particular t that is smaller than the critical value, I can roughly say that this falls off like this, so that there is a characteristic length, l-bar. This characteristic l-bar is going to be minus 1 over log of 2dt. And 2dt I can write as 2d times (t_c plus t minus t_c). 2d t_c is, by construction, 1. So this is minus 1 over log of (1 plus 2d (t minus t_c)), and since 2d is 1 over t_c, that correction is (t minus t_c) over t_c.

Now, the log of 1 plus a small number-- as my t approaches t_c-- will behave like that small number itself. So you can see that this diverges as (t minus t_c) to the minus 1 power. I want, I guess, to be correct: (t_c minus t), because t is less than t_c. But the point is that the divergence is linear. As I approach t_c, the length of these paths grows inversely with how close I am.

Now what are these paths? I start from the origin, and I randomly take steps. And I've said that the paths that typically contribute will roughly have length l-bar.

How far have these paths carried the information? These are random walks, so the distance over which they have managed to carry the information, xi, is going to be like the square root of the length of these walks. And since the length of the walks grows like (t_c minus t) to the minus 1, this goes like (t_c minus t) to the minus 1/2 power. So the exponent nu of 1/2 that we have been thinking about is none other than the 1/2 that you have for random walks, once you realize that what is going on is that the length of the paths that carry information diverges linearly on approaching the transition. So that's one understanding.
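In compact form, the argument just given:

$$ \bar{l} \;=\; \frac{-1}{\ln(2dt)} \;\approx\; \frac{t_c}{t_c - t}, \qquad \xi \;\sim\; \bar{l}^{\,1/2} \;\propto\; (t_c - t)^{-1/2}, \qquad \text{i.e.}\quad \nu = \tfrac{1}{2}. $$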

Now, you would say that this is the Gaussian picture. But I know that when we calculated things to order of epsilon, we found that nu was 1/2 plus something. It became larger.

So what does that mean? Well, if you have these paths, and the paths cannot cross each other-- a path comes here, and it has to go further away, because the paths are really non-phantom-- then they will swell. So the exponent nu that you expect to get will be larger than 1/2. So that's what's captured in here.

Well, how can I really try to capture that more mathematically? Well, I say that in the calculations that I did-- let's say when I was calculating the correlation functions sigma 0 sigma r, in the approximation of phantomness, I included all paths that went from 0 to r.

Among those there were paths that were crossing themselves. So I really have to subtract from that a path that comes and crosses itself.

So I have to subtract that. I also had this condition that the numerator and denominator cancel each other, which really means that I have to subtract the possibility of my path intersecting with another loop that is over here. And we can try to incorporate these as corrections.

But we've already done that. Because if I Fourier transform this object, I saw that it is this 1 over (q squared plus xi to the minus 2). And then we were calculating these perturbative corrections in u, and we had diagrams that kind of looked like this. Oops-- I guess I want to first draw the other diagram.

And then we had a diagram that was like this. You remember when we were doing these phi to the 4th calculations, the corrections that we had for the propagator, which was related to the two-point correlation function, were precisely these diagrams, where we were essentially subtracting factors that were set by u. Of course, the value of u could be anything, and you can see that there is really a one-to-one correspondence.

Any of the diagrams that you had before really captures the picture of one of these paths trying to cross itself, which you have to subtract. And you can set up a one-to-one mathematical correspondence between what is going on here. Yeah.

AUDIENCE: So why can't we have the path in the first correction you drew? Because aren't we allowed to have four bonds that attach to one site when we're doing the original expansion?

PROFESSOR: OK, so I told you at the beginning that you should keep track of all of my mistakes.

And that's a very subtle thing. So what you are asking is: in the original Ising model, I can perfectly well draw a graph such as this, that has an intersection such as this. As we will show next time-- so bear with me-- in calculating things within the phantom condition, this is counted three times as much as it should be. So I have to subtract that, because a walk that comes here can either go forward, up, or down. There is some degeneracy there; essentially, an over-counting has been done that is important, and that I have to correct for when I do things more carefully next time around. Yes?

AUDIENCE: When you did the Gaussian model, we never had to put any sort of requirement on the lattice being a square lattice.

PROFESSOR: No.

AUDIENCE: Didn't we have to do that here when you did those random walks?

PROFESSOR: No, I only used the square or hypercubic condition in order to be able to write this eigenvalue in general dimension. I could very well have done the triangular, FCC, or any other lattice. The expression here would have been more complicated.

So finally, we can also ask-- we have a feeling from renormalization group, et cetera, that the Gaussian exponents, like nu equals 1/2, are in fact good provided that you are in sufficiently high dimension-- if you are above four dimensions. Where do you see that occurring in this picture? The answer is as follows.

So basically, I have ignored the possibility of intersections. So let's see when that condition is roughly good. The kind of entities that I have as I get closer and closer to tc in the phantom case are these random walks.

And we said that the characteristic of a random walk is that if I have something that has l steps, the typical size in space that it grows to scales like l to the 1/2. So we can recast this as a dimension. Basically, we are used to saying that objects have a mass that grows with their size-- what do I want to do?

Let's say that I have a hypercube of size L. Let's actually call it size R. Then the mass of this, or the number of elements that constitute this object, grow like R to the d.

So if I take my random walk, and think of it as something in which every step has unit mass, you would say that l is proportional to the mass, so that the radius grows like the number of elements to the 1/2 power, or the mass to the 1/2 power. So you would say that for the random walk, if I want to force it into a relationship between mass and radius, the mass goes like the radius squared. So in that sense, you can say that the random walk has a fractal, or Hausdorff, dimension of 2.

So if you kind of are very, very blind, you would say that this is like a random walk.

It's a two-dimensional thing. It's a page.

So now the question is, if I have two geometrical entities, will they intersect? So if I have a plane and a line in three dimensions, they will barely intersect-- at a point. In four dimensions, they won't intersect.

If I have two surfaces that are two dimensional, in three dimensions, they intersect in a line.

In four dimensions, they would intersect in a point. And in five dimensions, they won't generically intersect, like two lines generically don't intersect in three dimensions.

So if you ask how bad it is that I ignored the intersection of objects that are inherently random walks, in sufficiently high dimensions, I would say the answer, geometrically, is that intersection is generic if d is less than 2 d_f, which is 4. So we made a very drastic assumption. But as long as we are above four dimensions, it's OK. There's so much space around that, statistically, these intersections-- this non-phantomness-- is so entropically unlikely that it essentially never happens.
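The dimension counting used here, stated generally (a standard geometric fact, quoted rather than derived in the lecture):

$$ \dim(A \cap B) \;=\; d_A + d_B - d \quad\text{(generically)}, \qquad d_f = 2 \text{ for both walks} \;\Rightarrow\; \text{intersections are generic for } 2 + 2 - d > 0,\ \text{i.e. } d < 4 . $$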

You can ignore it, and the results are OK. But if you go to four dimensions and below, you can't ignore it, because generically, these things will intersect with each other. That's why these diagrams are going to blow up on you, and give you some important contribution that makes the walks swell, and gives you a value of nu that is larger than the 1/2 that we have for random walks.

So that's the essence of where the Gaussian model sits: why we get nu equal to 1/2, why we get nu's that are larger than 1/2, what the meaning of these diagrams is, why four dimensions is special. All of it really just comes down to the central limit theorem, and knowing that the sum of a large number of variables has a square root of N type of variance and fluctuations. It's all captured by that.

But we wanted to really solve the model exactly. It turns out that we can make the conditions that were very hard to implement in general dimensions work out correctly in two dimensions.

And so the next lecture will show you what these mistakes are, how to avoid them, and how to get the exact value of this sum in two dimensions.