Lecture 12: Perturbative Renormalization Group Part 4

Description: In this lecture, Prof. Kardar continues his discussion on the Perturbative Renormalization Group, including the Irrelevance of Other Interactions and comments on the ε-expansion.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: So today, I'd like to wrap together and summarize everything that we have been doing over the last 10 or 12 lectures. So the idea started by saying that you take something like a magnet and you change its temperature. You go from one phase that is a paramagnet, at some critical temperature Tc, to another phase that is a ferromagnet.

Naturally, the direction of magnetization depends on whether you applied a magnetic field and took it to 0. So there are a lot of these transitions that involve ferromagnets. There were a set of other transitions that involved, for example, superfluidity or superconductivity. And the most common example of a phase transition is the liquid-gas transition, which has a coexistence line that also terminates at a critical point. So you have to turn it around a little bit to get a coexistence line like this. Fine. So there are phase transitions.

The interesting thing was that when people did successively better and better experiments, they found that the singularities in the vicinity of these critical points are universal. That is, it doesn't matter whether you have iron, nickel, or some other material that is undergoing ferromagnetism. You can characterize the divergence of the heat capacity through an exponent alpha, which for ferromagnets is around minus 0.12. For superfluids, we said that people even take things on satellites to measure this exponent to much higher accuracy than I have indicated. For the case of liquid gas, there is a true divergence and the exponent is around 0.11.

There are a whole set of other exponents that I also mentioned. There is the exponent beta for how the magnetization vanishes, and the values here were 0.37, 0.35, 0.33. There is the divergence of the susceptibility, characterized through exponents gamma that are 1.39, 1.32, 1.24. And there is a divergence of the correlation length, characterized by exponents nu of 0.71, 0.67, 0.63. So there is this table of pure numbers that don't depend on the properties of the material that you are looking at.

And the fact that you don't have this material dependence suggests that these pure numbers are some characteristics of the collective behavior that gives rise to what's happening at this critical point. And we should be able to devise some kind of a theory to understand that, maybe extract these nice, pure numbers, which are certainly embedded in the physics of the problem that we are looking at.

So the first idea that we explored conceptually, due to Landau probably, and probably others, is that we should construct a statistical field. That is, what is happening is irrespective of whether we are dealing with nickel or iron, et cetera. So the properties of the microscopic elements should disappear, and we should be focusing on the quantity that is undergoing the phase transition. And that quantity, we said, is some kind of a magnetization.

And what distinguishes the different systems is that for a ferromagnet, it's certainly a three-component vector. But in general, for a superfluid we would have a phase, and that's a two-component object. So we introduced this parameter n that characterizes the symmetry of the order parameter.

And in the same way, we said let's look at things that are embedded in space. That is, in general, d dimensional. So our specification of the statistical field was on the basis of these things.

And the idea of Landau was to construct a probability for configurations of this field across space. Once we had that, we could calculate a partition function, let's say, by integrating over all configurations of this field with some kind of a weight.

And we constructed the weight. We wrote beta H as an integral d^d x, and then we put a whole bunch of things in here. We said we could have something like t over 2 m squared. We had gradient of m squared. Potentially, we could have higher derivatives, staying at the order of m squared. And so this list of things that I could put that are all order of m squared is quite extensive.

Then I could have things that are fourth order, like u m to the fourth. And we saw, when we performed this renormalization group last time around, that a term we typically had not paid attention to was generated-- something that was, again, order of m to the fourth, but had a structure maybe like m squared gradient of m squared, or some other form of a two-derivative operator.

This is OK. There could be other types of things that have four m's in them. And you could have something that is m to the sixth, m to the eighth, et cetera. So the idea of Landau was to include all kinds of terms that you can put-- or rather, all terms consistent with some constraints that you impose. What are the constraints that we put?

We put locality, in that we wrote this as an integral over x. We considered symmetry. So if I am in the zero-field limit, I only have terms that are rotationally symmetric, built out of m squared.

And there is something else that is implicit, which is analyticity. What do I mean?

I mean that there is, in some sense here, a space of parameters composed of all of these coefficients-- t, k, u, v. And these are supposed to represent what is happening to my system that I have in mind as I change the temperature. And so in principle, if I do some averaging procedure and arrive at this description, all of these parameters presumably will be functions of temperature.

And the statement is that the process of coarse graining the degrees of freedom and averaging to arrive at this description and the corresponding parameters involves a finite number of degrees of freedom. And adding and integrating finite numbers of degrees of freedom can only lead to analytic functions. So the statement here is that this set of parameters are analytic functions.
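To keep the symbols straight for what follows, here is the weight written out explicitly, consistent with the terms listed above (a sketch; the precise coefficient conventions follow the earlier lectures):

```latex
\beta\mathcal{H} = \int d^d\mathbf{x}\left[\frac{t}{2}m^2 + \frac{K}{2}(\nabla m)^2
 + \frac{L}{2}(\nabla^2 m)^2 + \cdots + u\,m^4 + v\,m^2(\nabla m)^2 + \cdots
 + u_6\,m^6 + \cdots\right]
```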

So given this construction by Landau, we should be able to figure out, if this is correct, what is happening and where these numbers come from. So what did we attempt?

The first thing that we attempted was a saddle point. And we saw that just looking at the most probable state fails, because fluctuations are important. We tried to break the Hamiltonian into a part that was quadratic and Gaussian-- and we could calculate everything about it-- while treating everything else as a perturbation. And when we attempted to do that, we found that perturbation theory failed below four dimensions. So at this stage, it was kind of an impasse, in that as far as the physics is concerned, we feel that this weight captures all of the properties that you need in order to somehow be able to explain those phenomena. Yet, we don't have the mathematical power to carry out the integrations that are implicit in it. So then the idea was, can we go around it somehow?

And so the next set of things that we introduced were basically versions of scaling. And quite a few statistical physicists were involved with that-- names such as [INAUDIBLE], Fisher, and a number of others.

And the idea is that if we also consider introducing a magnetic field direction here, and look at the singularities in the plane that involves those two-- which is necessary in order to also characterize some of these other exponents, such as gamma-- then you have a singular part of the free energy that is a function of how far you go away from Tc (so this t now stands for T minus Tc) and of how far you go along the direction that breaks symmetry. And the statement was that all of the results were consistent with a form that depended on really two exponents, which could be bundled together into the behavior of the singular part of the free energy or the singular part of the correlation function. And essentially, this approach immediately leads to exponent identities.
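In symbols, the homogeneous form being invoked here is the one set up in the earlier scaling lectures (written as a sketch, with g_f an unspecified scaling function and Delta the gap exponent):

```latex
f_{\text{sing}}(t,h) = |t|^{2-\alpha}\, g_f\!\left(\frac{h}{|t|^{\Delta}}\right)
```

Everything is pinned down by the two exponents alpha and Delta, and identities such as alpha + 2 beta + gamma = 2 follow by taking derivatives.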

And we can go back and check these exponent identities against the table of numbers that we have up there, and we see that they are correct and valid. But then, what are the two primary exponents? How can we obtain them?

Well, going and looking at this scaling behavior a little bit further, one could trace it back to some kind of self-similarity that should exist right at the critical point. That is, the correlation functions, et cetera, at the critical point should have this kind of scale invariance. And then the question is, can we somehow manage to use that property-- that looking at things at different scales at the critical point should give you the same thing-- to divine what these exponents are? So the next stage in this progression was the work of Kadanoff in introducing the idea of RG.

And the idea of RG was to basically average things further. Here, implicit in the calculation that we had was some kind of a short-distance cutoff a. And if we average between a and ba, then presumably these parameters mu would change to something else-- mu prime-- that corresponds to rescaling by a factor of b. And these mu primes would be functions of the original set of parameters mu.

And then Kadanoff's idea was that the scale-invariant points would correspond to the points where you have no change. And that if you then deviated from such a point, you would have some characteristic scale in the problem. And you could capture what was happening by looking at essentially how the changes, delta mu prime, were related to the changes delta mu-- so essentially, linearizing these relationships. So there would be some kind of a linearized transformation, and then the eigenvalues of this transformation would determine how many relevant quantities you should have.
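Schematically, if mu* denotes a fixed point with mu star prime equal to mu star, the linearization reads (a sketch in matrix notation):

```latex
\delta\mu'_i = \sum_j \left(\frac{\partial \mu'_i}{\partial \mu_j}\right)_{\mu^*} \delta\mu_j
\equiv \sum_j \big(R_b\big)_{ij}\,\delta\mu_j
```

The eigenvalues of R_b can be written as b^{y_i}; directions with y_i > 0 are relevant (deviations grow under rescaling), and those with y_i < 0 are irrelevant.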

Now, the physics, the entire physics of the process, then comes into play here. That the experiments tell us that you can, let's say, take superfluid and it has this phase transition. You change the pressure of it, it still has that phase transition-- slightly different temperature, but it's the same phase transition. You can add some impurities to it as you did in one of the problems. You still have the phase transition.

So basically, the existence of a phase transition as a function of one parameter that is temperature-like is pretty robust. And if we think about that in the language of fixed points, it means that within the symmetric subspace there should be only one relevant eigenvalue. So this construction that Kadanoff proposed is nice and fine, but one has to demonstrate that, indeed, this infinite number of parameters can be boiled down to a fixed point, and that the fixed point has only one relevant direction.

So the next step in this progression was Wilson, who did a perturbative version of this procedure. So the idea was that we can certainly solve beta H0. And beta H0 is really a bunch of Gaussian independent modes, as long as we look at things in Fourier space.

So in Fourier space, we have a bunch of modes that exist over some Brillouin zone. And as long as we are looking at some set of wavelengths, and no fluctuation shorter than that wavelength has been allowed, there is a maximum wavenumber lambda here. And the procedure of averaging and increasing this minimum wavelength then corresponds to integrating out the modes that are sitting outside lambda over b, and keeping the modes, which we call m tilde, that live within 0 to lambda over b.
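Explicitly, the split being described is (a sketch of the standard decomposition):

```latex
m(\mathbf{q}) =
\begin{cases}
\tilde{m}(\mathbf{q}), & 0 \le |\mathbf{q}| < \Lambda/b \quad \text{(kept)}\\[2pt]
\sigma(\mathbf{q}), & \Lambda/b \le |\mathbf{q}| < \Lambda \quad \text{(integrated out)}
\end{cases}
```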

And if we integrate out these modes-- so if I rewrite this integration as an integration over Fourier modes, do this decomposition, et cetera-- what do I find after I integrate? So step 1, I do a coarse graining: I find a Hamiltonian that governs the coarse-grained modes.

Well, if I integrate out the sigma modes and treat them as Gaussians, then there will be a contribution to the free energy trivially from those modes, presumably proportional to volume. Since the two sets of modes don't couple at the Gaussian level, at the Gaussian level we also have beta H0 acting on the modes that I have kept.

And the hard part is, of course, the interaction between these two types of modes that is governed by all of these non-linearities that we have over here. And we can formally write that as minus the log of the average of e to the minus u-- where u depends on m tilde and sigma-- when I integrate out, with the Gaussian weight, the modes that are the sigma parameters. So that's formally correct.

This is some complicated function of m tilde after I get rid of the sigma variables. But presumably, if I were to expand and write this in powers of m tilde and powers of gradient, it will reproduce back the original series. Because I said that the original series includes everything that could possibly be generated. This is presumably, after I do all of these things, still consistent with symmetries and so will generate those kinds of terms.

But of course, to evaluate it, we have to do perturbation theory. And so we can start expanding this in powers of u, assuming that u is a small quantity. The first term would be the average of u; then minus 1/2 of the average of u squared minus the square of the average of u; and so forth.
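This is the cumulant expansion, which in symbols reads:

```latex
-\log\left\langle e^{-U[\tilde m,\sigma]}\right\rangle_\sigma
= \langle U\rangle_\sigma
- \frac{1}{2}\left(\langle U^2\rangle_\sigma - \langle U\rangle_\sigma^2\right)
+ \cdots
```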

And of course, this RG has two other steps. After I have performed this step, I have to do rescaling, which in Fourier space means I blow up q. So q I will replace with b inverse q prime. And renormalize, which meant that in Fourier space I replace m with z m prime. So after I do these procedures, what do I find?

I find that to whatever order I go, I start with some original Hamiltonian that includes all terms consistent with symmetries, and I generate a new log of probability. It's not really a Hamiltonian. It's a kind of effective free energy-- the log of the probability of these configurations.

So now I should be able to read this transformation of how I go from mu to mu prime. So let's go through this list and do it.

So t prime is something that in Fourier space went with one integration over q, so I got b to the minus d. There are two factors of m, so it's a z squared type of contribution. And at the zeroth order from here, I have my t. And then when I did the average, from the average of u I got a contribution that was proportional to u, with a degeneracy factor of 4. There were two kinds of diagrams that were contributing to it, and then I had the integral from lambda over b to lambda of d^d k over (2 pi) to the d of 1 over (t plus K k squared).
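Assembled into a formula, the statement so far is:

```latex
t' = b^{-d} z^2 \left[\, t + 4u\,(n+2)\int_{\Lambda/b}^{\Lambda} \frac{d^d\mathbf{k}}{(2\pi)^d}\,\frac{1}{t + K k^2} + \mathcal{O}(u^2)\right]
```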

And just to remind you the kind of diagrams that were contributing to this: one of them was something like this, and the other one was something like this. The one that had a closed loop gave me the factor of n, and the other gave me what is eventually the 8-- combining into the 4(n plus 2) that I have observed here. So this is what we found at order of u. We went on and calculated the u squared.

And the u squared will give me another contribution. There is a coefficient out here that also, similarly, involves integrals. The integrals will depend on t. They will depend on k. They will depend on this lambda that I'm integrating over, they will depend on b, et cetera. So there is some function here.

And we argued last time that we don't need to evaluate it, but let's write it down and make sure that it doesn't contribute. But this is only looking at the effect of this u. And I know that I have all of these other terms. So presumably, I will get something that will be of the order of, let's say, v squared or uv. I will certainly get something that is of the order of u6.

If I think of u6 as something that has six legs associated with it, I can certainly join two of these legs and two of these legs and have something with two legs left over, just as I did in getting from the u m to the fourth term down to m squared. I certainly can do that. So there are all kinds of higher-order terms here that, in principle, we have to keep track of.

Now, the next term in the series is the K. Compared to the t-term, it had two additional factors of gradients. When we do everything, it turns out that the power will be b to the minus d minus 2, because of the two gradients that became q squared. It's still second order in m, so I get the z squared. And then I start with k.

Now, the interesting and important thing is that when we do the calculation at order of u, we don't get any correction to k. The only diagrams that could have contributed to k were diagrams of this variety. But for diagrams of this variety, we saw that when I performed those integrals, the result just doesn't depend on q. It only corrected the constant-- the q-independent-- term in the coefficient of m tilde squared. But that structure will not persist.

If I go to order of u squared, there will be some kind of a correction that is order of u squared. And we had, in the 6 by 6 table of second-order terms, a diagram that gives a contribution such as this-- that, in fact, looks something like this. So basically, this is a four-point vertex, and this is a four-point vertex; I join them, and I make these kinds of calculations.

Now, once you do that, you'll find that the difference between this diagram and, let's say, that diagram is that the momentum q that goes in here will have to just go through here, and it has no influence on the momentum that I'm integrating.

Whereas, if you look at this diagram, you will find that it is possible for the momentum that goes in here to get a contribution over here. And so the calculation, if I were to do it at higher order, will have in the denominator a product of two of these factors, but one of them will explicitly depend on q.

And if I expand in powers of q, I will get a correction that will appear here. It will not change our life, as we will see, but it's good to know that it is there. And there will be higher-order corrections here, too.

AUDIENCE: Question.

PROFESSOR: Yes.

AUDIENCE: Are the functions a1 and a2 just labeled as such to make our lives easier or because they don't have any sort of universality with them?

PROFESSOR: Both. They don't carry, at this order in the expansion, any information that we will need-- but I would have to show you that explicitly. So right now, I keep them as placeholders.

If, at the end of the day, we find that our answers depend on these quantities, then we have to go back and calculate them. But ultimately, the reason that we won't need them is what I described last time: we will calculate exponents only to lowest order in 4 minus d, which is epsilon. And all of these u's at the fixed point will be of the order of epsilon. So both of these terms are of the order of epsilon squared and will be ignorable at the level of epsilon.

But so far I haven't said anything about epsilon, so I may as well keep it. And similarly, l prime would be something that goes with q to the fourth if I were to Fourier transform it. So this would be b to the minus d minus 4. It is still z squared. It will be proportional to l. It will have exactly these kinds of corrections also. And I can keep going with the list of all second-order terms.

Now, then we get to u prime. u prime was fourth order-- the fourth power of m-- so it carried z to the fourth. It involved three integrations in q space, so it gave me b to the minus 3d. And then, to the lowest order, when I did the expansion from u, one of the terms that I got was the original potential evaluated for m tilde rather than the original m. So I always will get this term.

And then I noticed that when I go and calculate things at second order-- and we explicitly did that-- we got a term that was minus 4 u squared (n plus 8) times the integral from lambda over b to lambda of d^d k over (2 pi) to the d of 1 over (t plus K k squared), the whole thing squared. It was a squared propagator that was appearing over here. And presumably, this series also continues.
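That is, at this order:

```latex
u' = b^{-3d} z^4 \left[\, u - 4(n+8)\,u^2 \int_{\Lambda/b}^{\Lambda} \frac{d^d\mathbf{k}}{(2\pi)^d}\, \frac{1}{\left(t + K k^2\right)^2} + \cdots \right]
```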

And again, reminding you that these came from diagrams-- some of them were like this. There was a loop that gave us the factor of n. And then there were things like this one or this one, and those gave us the 8 that goes along with the n over here.

Now, if I had included this term that is proportional to v, presumably I would have gotten corrections that are order of uv. If I go to higher orders, I will certainly get things that are of the order of u6. I will certainly get things that are of the order of u cubed and so forth. So there is a whole bunch of corrections that in principle, if I am supposed to include everything and keep track of everything, I should include.

Now, the v itself: v prime has two additional gradients compared to u, so it will be b to the minus 3d minus 2. It's a z to the fourth type of term. It is v, and then it will certainly get corrections at, say, order of uv and so forth. And what else did I write down in the series?

I can write as many as we like. u6 prime: this is something that goes with 6 powers of z and will have 5 derivatives-- sorry, 5 integrations in q-- so it will give me b to the minus 5d. And again, presumably I will have u6, plus corrections of the order of something like u squared v, and all kinds of things. All right. Yes.

AUDIENCE: So in the calculation of terms like k prime and l prime, you have factors like b minus d minus 2. Is it minus 2 or plus 2?

PROFESSOR: OK. So these came from Fourier transforming this entity. When I Fourier transform, I get an integral over d^d q of t plus k q squared plus l q to the fourth, et cetera. And my task is that whenever I see q, I replace it with b inverse q prime. So this would go with b to the minus d; this with an additional b to the minus 2; and this with b to the minus 4. So that's how it comes.
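In symbols, for the quadratic part of the weight (a sketch, suppressing numerical factors of 2):

```latex
\int \frac{d^d\mathbf{q}}{(2\pi)^d}\,\left(t + Kq^2 + Lq^4 + \cdots\right)|m(\mathbf{q})|^2
\;\longrightarrow\;
b^{-d} z^2 \int \frac{d^d\mathbf{q}'}{(2\pi)^d}\,\left(t + K b^{-2} q'^2 + L b^{-4} q'^4 + \cdots\right)|m'(\mathbf{q}')|^2
```

after the substitutions q = b^{-1} q' and m = z m'.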

AUDIENCE: OK. Thank you.

PROFESSOR: Anything else? OK. So then we had to choose what this factor of z is. And we said, let's choose it such that k prime is the same as k. But k prime over k we can see is z squared b to the minus d minus 2.

If I divide through by this k, then I will get 1, and then something here which is order of u squared. Now, we will justify later why u-- in order to be small, so that I can make a construction that is perturbative in u-- will be of the order of epsilon. But in any case, if I want to keep only the lowest order in u, at this order I am justified in getting rid of this term.

And when I do that, to this order I will find that z is b to the 1 plus d over 2, with, in principle, corrections that will be of the order of epsilon squared. So I make that choice.

Secondly, I'll make my b infinitesimal. And so that means that mu prime at scale b-- the set of parameters-- each parameter would be basically mu plus a small shift, d mu by dl times delta l. And then I can recast these jumps by factors of b that I have up there as flow equations. And so what do I get?

For the first one, we got dt by dl. And I had chosen z squared to be b to the d plus 2, so compared to the b to the minus d, it's just two more factors of b. Two more factors of b will give me 2t. And then I have to deal with that integration evaluated when b is very close to 1, which means that I just evaluate it on the shell. So I will have 4u (n plus 2) Kd lambda to the d, divided by t plus K lambda squared, plus higher-order terms in this propagator.

And then I will have, presumably, some A1-looking quantity, evaluated on the shell, that multiplies u squared. And then I will have higher-order terms. Yes.

AUDIENCE: I'm kind of curious on why we choose k equals k prime instead of the constant in front of any of the other gradient terms. Why is k equals k prime better than l equals l prime or--

PROFESSOR: OK. We discussed this in the context of the Gaussian model. So what we saw for the Gaussian model is that if I choose l prime to be l, then I will have k prime being b squared k, and t prime will be b to the fourth t. So I would have two relevant directions. So I want to have, in some sense, the minimal number of relevant directions, guided by the experimental fact that you do whatever you like and you still see the phase transition, except that you have to tune one parameter.

There could be something else. There could very well-- somebody comes to me later and describes some kind of a phase transition that requires two relevant directions. And the physics of it may guide me to make the other choice. But for the problem that I'm telling you right now, the physics guides me to make this choice.

There is no equation for k, because we have already fixed k prime to equal k, so dk by dl is 0. Let's write the equation for u.

So for u: z to the fourth becomes b to the 4 plus 2d, and combined with the b to the minus 3d, that gives (4 minus d) u. And then the next-order term becomes minus 4 u squared (n plus 8) Kd lambda to the d, over (t plus K lambda squared and so forth) squared, when I evaluate that integral on the shell. And then I will have higher-order terms.
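Collecting what is on the board, the two flow equations are (with A1 the unevaluated on-shell function mentioned above):

```latex
\frac{dt}{d\ell} = 2t + \frac{4u(n+2)\,K_d\Lambda^d}{t + K\Lambda^2} + A_1(t,K,\Lambda)\,u^2 + \cdots,
\qquad
\frac{du}{d\ell} = (4-d)\,u - \frac{4(n+8)\,u^2\,K_d\Lambda^d}{\left(t + K\Lambda^2\right)^2} + \cdots
```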

So this is where this idea of making an expansion in dimensions come into play. Because we want to have these sets of equations somehow under control, we need to have a small parameter in which we are making an expansion. And ultimately, we will be looking at the fixed point. And the fixed point occurs at u star. That is, of the order of 4 minus d. Otherwise, there is no small control parameter.

So the suggestion, actually, that goes to Fisher was to organize the expansion as a power series in this quantity epsilon. And eventually then, ask what the properties of these series are as a function of epsilon.

So then I have all those others. I forgot, actually, to write dl by dl. Well, compared to k, it has two additional factors of gradients, which means that it will start with minus 2 l. And then we said that it will get corrections that are of the order of u squared, and uv, and such things.

dv by dl: compared to u, it has two more gradients in the construction, so its dimension will be minus 2 plus epsilon. And then we'll get corrections of the order of, presumably, u squared, uv, and so forth. And then, what else did I write?

I wrote something about d u6 by dl. For d u6 by dl, I have to substitute z to the sixth, with z equal to b to the 1 plus d over 2, and subtract 5d. Rewrite d as 4 minus epsilon. Once you do that, you will find that it becomes minus 2 plus 2 epsilon. Let me just make sure that I am not saying something wrong.

Yeah: u6, plus order of uv and so forth. So there is this whole set of parameters that are being changed as we rescale by a factor of b that is 1 plus an infinitesimal. So this is the flow of parameters in this space.

So then to confirm the ideas of Kadanoff, we have to find the fixed point. And there is clearly a fixed point when all of these parameters are 0. If they are 0, nothing changes. And I'm back to the Gaussian model, which is described by just gradient of m squared type of theory. So this is the fixed point that corresponds to t star, u star, l star, v star, all of the things that I can think of, are equal to 0. It's a perfectly good fixed point of the transformation.

It doesn't suit us because it actually has still two relevant directions. It's obvious that if I make a small change in u, then in dimensions less than 4, u is a relevant direction and t is a relevant direction. Two directions does not describe the physics that I want.

But there is fortunately another fixed point, the one that we call the O n fixed point because it explicitly depends on the parameter n. And what I need to do is to set this equal to 0. And if I set that equal to 0, what do I get?

I get u star just by manipulating this. One power of u drops out, so it is proportional to epsilon. The coefficient has a factor of 1 over 4(n plus 8)-- basically, the inverse of this-- and then also the inverse of all of that. So I have (t star plus K lambda squared and so forth) squared, divided by Kd lambda to the d.

Then, what I need to do is to set the second equation to 0. You can see that this is a term that is order of epsilon squared now. Whereas, this is a term that is order of epsilon. So for calculating the position of the fixed point, I don't need this parameter. And what do I get?

I will get that t star is minus 2(n plus 2) Kd lambda to the d, divided by (t star plus K lambda squared and so forth), times u star. And u star is epsilon over 4(n plus 8), times (t star plus K lambda squared and so forth) squared, divided by Kd lambda to the d.

Again, I'm calculating everything correctly to order of epsilon. So since t star is order of epsilon, I can drop it over here. So my u star is, in fact, epsilon divided by 4(n plus 8), times this combination (K lambda squared and so forth) squared, divided by Kd lambda to the d.

And doing the same thing up here, my t star is epsilon (n plus 2) divided by 2(n plus 8), with an overall minus sign. The Kd parts cancel, and one of these factors cancels, so I will get K lambda squared, squared-- sorry, no square here.
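So, written out, the O(n) fixed point found to this order is:

```latex
u^* = \frac{\epsilon}{4(n+8)}\,\frac{\left(K\Lambda^2\right)^2}{K_d\,\Lambda^d} + \mathcal{O}(\epsilon^2),
\qquad
t^* = -\,\frac{(n+2)}{2(n+8)}\,\epsilon\,K\Lambda^2 + \mathcal{O}(\epsilon^2)
```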

And both of these will get corrections that are order of epsilon squared that I haven't calculated. Now, let's make sure that it was justified for me to focus on these two parameters and look at everything else as being not important before.

Well, look at these equations. This equation says that if I had a term that was order of u squared, evaluated at the fixed point it would be epsilon squared. So l star would be of the order of epsilon squared. You can check that v star would be of the order of epsilon squared. A lot of those things will be of the order of epsilon squared.

And actually, if you look at it carefully, you'll find that things like u6 will be even worse. They would start at order of epsilon cubed and so forth. So quite systematically in this small parameter that Fisher introduced, we see that what has happened is that we have a huge set of parameters, these mu's. But we can focus on the projection in the parameter space t and u.

And in that parameter space, we certainly always have the Gaussian fixed point. But as long as I am in dimensions less than 4, the Gaussian fixed point is not only relevant in the t-direction, with a y of 2, but it also is relevant in another direction. There is an eigen-direction that is slightly shifted with respect to t equals 0-- it's not just the u-axis. Along that direction, it moves away.

Here you have an eigenvalue of 2. Here you have an eigenvalue of epsilon. So that's the Gaussian fixed point. But now we have found another fixed point, which occurs for some positive u star and some negative t star. This is the O(n) fixed point.

Just by continuity, you would expect that if flows are going in over here, it probably makes sense that they should be going like this here, and that this should be a negative eigenvalue. But one can explicitly check that.

So basically, the procedure to check that is to do what I told you. I have to construct a linearized matrix that relates delta mu going away from this fixed point to what happens under rescaling. So basically, under rescaling I will find that if I set my mu to be mu star plus a small change delta mu, then it will be moving away. And I can look at how, let's say, delta t changes, how delta u changes, how delta l changes, the whole list of parameters that I have over here.

The linearized matrix will relate them to the vector that corresponds to delta t, delta u, blah, blah. So I have to go back to these recursion relations, make small changes in all of the parameters, linearize the result, construct that matrix, and then evaluate the eigenvalues of that matrix. Again, consistency to the order that I have done things.

And for example, one of the things that we saw last time is that there will be an element here that corresponds to the change in u if I make a change in delta t. There is such a contribution.

If I make a change delta t with respect to the fixed point, I will get a derivative from here. But that derivative multiplies u squared. Evaluated at the fixed point means that I will get a term down here that is order of epsilon squared. And then the second element here, what happens if I make a change in u?

Well, I will get an epsilon here. And then I get a subtraction from here, which we evaluated last time, and it turned out to be epsilon minus 2 epsilon-- that is, minus epsilon. So the relevance that we had over here became the irrelevance that I wanted. So there is some matrix element in this corner. But since this one is 0, as we discussed, if I look at this 2 by 2 block, it doesn't affect this eigenvalue.

Since I did not evaluate this eigenvalue last time, I'll do it now. So in order to calculate the yt, what I need to do is to see what happens if I change t to t plus delta t. So I have to take a derivative with respect to t.

From the first one, I will get 2. From the second term, I will get minus 4 u (n plus 2) Kd lambda to the d, over what is in the denominator, squared-- so (t plus K lambda squared and so forth) squared. But I have to evaluate this at the fixed point.

So I put a u star here, and a t star here. Since u star is already order of epsilon, the order-of-epsilon t star in the denominator I can ignore. And so what I have here is 2 minus 4(n plus 2) times-- OK.

Now, let's put in u star. u star I have up here: it is epsilon divided by 4(n plus 8), times (K lambda squared and so forth) squared, over Kd lambda to the d. So I substitute the u star. I had the (n plus 2). And now I have the Kd lambda to the d, and I have this whole thing squared.

And you see that all of these things cancel out. And the answer is simply 2 minus (n plus 2) epsilon divided by (n plus 8). And somehow, I feel that I made a factor of-- no, I think that's fine. Double check. Yep. All right. So let's see what happened.
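That is,

```latex
y_t = 2 - \frac{n+2}{n+8}\,\epsilon + \mathcal{O}(\epsilon^2)
```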

We have identified two fixed points, Gaussian and in dimensions less than 4, the O n. Associated with this are a number of operators that tell me-- or eigen-directions that tell me if I go away from the fixed point, whether I would go back or I would go away.

And so for the Gaussian, these just have the names of the parameters that we have to set non-zero. So their names are things like t, u, l, v, u6, and so forth. And the values of these exponents we can actually get without doing anything, because what is happening here is just dimensional analysis. So if I simply replace m by z m prime, and gradients or integrations by the appropriate powers of the rescaling b, I can very easily figure out what the dimensions of these quantities are.

They are 2, epsilon, minus 2, minus 2 plus epsilon, minus 2 plus 2 epsilon, and so forth. So this is simple dimensional analysis. And in some sense, these are the dimensions, within this theory, of the variables that you have.

The problem with it as a description of what I see in experiments is the presence of two relevant directions. Now, we have found a new fixed point that is under control to order of epsilon. And what we find is that this exponent, for what was analogous to t, shifted to become 2 minus (n plus 2) over (n plus 8) epsilon, while the one that was epsilon shifted by minus 2 epsilon and became minus epsilon.

What I see is a pattern that essentially all that can happen, since I'm doing a perturbation in epsilon, is that these quantities can at most change by order of epsilon. So this that was minus 2 becomes minus 2 plus order of epsilon. This becomes minus 2 plus order of epsilon. It will not necessarily be minus 2 plus epsilon-- it could be minus 2 minus 7 epsilon, or plus 11 epsilon, maybe even epsilon squared, I don't know. But the point is that clearly, even if I put in the infinity of parameters, as long as I am in 3.999 dimensions, at this fixed point I only have one relevant direction. So it does describe the physics that I want, at least in this perturbative sense of the epsilon expansion.

And so I have my yt. Actually, in order to get all of the exponents, I really need two: I need yt and yh. But yh is very simple. If I were to add to this a magnetic field term, then in the Fourier representation it just goes and sits over here at q equal to 0.

And when I do all of my rescalings, et cetera, the only thing that happens to it is that it just picks up the factor of z. And we've shown that z is b to the 1 plus d over 2 plus order of epsilon squared. And so essentially, we also have our yh. So I can even add it to this table: there is a magnetic field h, and the corresponding yh is 1 plus d over 2 for the Gaussian; it is 1 plus d over 2 plus order of epsilon squared for the O(n) model. So now we have everything that we need.

We can compute things in principle. First of all, if I look at the divergence of the correlation length: essentially, we saw that yt tells us, at zero magnetic field, how I get thrown away from the fixed point over here. There is a relevant direction out here that we've discovered, whose eigenvalue is no longer 2-- it is the 2 minus this formula that we've calculated.

And presumably again, if I go and look at my set of parameters, what I have is that in this infinite dimensional space, as I reduce temperature, I will be going from, say, one point here, then lower temperature would be here, lower temperature would be here. So there would be a trajectory as a function of shifting temperature, which at some point that trajectory hits the basin of attraction of this O n fixed point that we found.

And then, being away from here will have a projection along this axis. And we can relate that to the divergence of, say, the correlation length or the free energy. Our nu was 1 over yt-- it is the inverse of this object. So if I divide, I have 1/2 times (1 minus (n plus 2) over 2(n plus 8) epsilon), raised to the minus 1 power.

And to be consistent, I should really only expand this to the order of epsilon. So I have 1/2 plus 1/4 times (n plus 2) over (n plus 8) epsilon. So what does it tell me?

Well, it tells me that at the Gaussian fixed point, the correlation length exponent was 1/2. We already saw that. And we see that when we go to this O(n) model, the correlation length exponent becomes larger than 1/2. I guess that agrees with our table.
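That is,

```latex
\nu = \frac{1}{y_t}
= \frac{1}{2}\left[1 - \frac{n+2}{2(n+8)}\,\epsilon\right]^{-1}
= \frac{1}{2} + \frac{n+2}{4(n+8)}\,\epsilon + \mathcal{O}(\epsilon^2)
```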

And I guess we can try to estimate what values we would get if we were to put n equals to 1, n equals to 2, et cetera. So this is n equals to 1.

If I put epsilon equal to 1, what do I get for nu? I will get 1/2 plus 1/4 of 3/9-- so that's 1/12. So that would give me something like 0.58. All right. Not bad for a low-order expansion coming from 4 down to three dimensions. What happens if I go to n equal to 2?

OK, so correction is 4/10 divided by 4. So it's 0.1. So I would get 0.6. What happens if I put n equals to 3?

I will get 5 divided by 44. And I believe that gives me something like 0.61. So it gets worse when I go to larger values of n, but it does capture a trend.

Experimentally, we see that nu becomes larger as you go from a 1- to a 2- or 3-component order parameter. That trend is already captured by this low-order expansion.

Once you have nu, you can, for example, calculate alpha. Alpha is 2 minus d nu. So you do 2 minus-- your d is 4 minus epsilon; your nu is this 1/2 times (1 plus (n plus 2) over 2(n plus 8) epsilon). And you do the algebra, and I'll write the answer: it is (4 minus n) epsilon divided by 2(n plus 8). OK, and let me check.

So if I now substitute epsilon equal to 1 for these different values of n, what I get for alpha are 0.17, 0.11, 0.06. I don't know, maybe I have a factor of 2 missing, or whatever. But these numbers, I think, are correct.
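For the record, a quick numerical check of these first-order formulas (a minimal sketch in plain Python, using only the two expressions quoted above; the exact first-order values come out as 0.583, 0.600, 0.614 for nu and 0.167, 0.100, 0.045 for alpha, close to the numbers quoted in the lecture):

```python
from fractions import Fraction

# First-order epsilon-expansion results from the lecture:
#   nu    = 1/2 + (n+2)/(4(n+8)) * eps
#   alpha = (4-n) * eps / (2(n+8))
eps = Fraction(1)  # epsilon = 4 - d = 1, i.e., three dimensions

for n in (1, 2, 3):
    nu = Fraction(1, 2) + Fraction(n + 2, 4 * (n + 8)) * eps
    alpha = Fraction(4 - n, 2 * (n + 8)) * eps
    print(f"n={n}:  nu = {float(nu):.3f}   alpha = {float(alpha):.3f}")
```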

So you can see that in reality alpha is positive for the liquid gas system n equals to 1. It is more or less 0. This is the logarithmic lambda point for superfluids. And then it becomes negative, clearly, for magnets.

The formula that we have predicts all of these numbers to be positive, but it gets the right trend that as you go to larger values of n, the value of the exponent alpha calculated at this order in epsilon expansion becomes lower. So that trend is captured.

So at this stage, I guess I would say that the problem that I posed is solved in the same sense that I would say we have solved for the energy levels of the helium atom. Certainly, you can sort of ignore the interaction between electrons and calculate hydrogenic energies. And then you can do perturbation in the strength of the interaction and get corrections to that. So essentially, you know the trends and you know everything.

And we have been able to sort of find the physical structure that would give us a root to calculating what these exponents are. We see that the exponents are really a function of dimensionality and the symmetry of the order parameter. All of the trends are captured, but the numerical values, not surprisingly, at this low order have not been captured very well.

So presumably, you would need to do the same thing that you would do for the helium atom. You could do higher and higher order calculations. You could do simulations. You could do all kinds of other things. But the conceptual foundation is basically what we have laid out here.

OK, there is one thing that-- well, many things that remain to be answered. One of them is-- well, how do you know that there isn't a fixed point somewhere else?

You calculated things perturbatively. The answer is that once you do higher-order calculations, et cetera, you find that your results converge, more or less, better and better to the results of simulations or experiments, et cetera. So there is no evidence from whatever we know that there is need for something that I would call a strong coupling, non-perturbative fixed point.

It's not a proof. We can't prove that there isn't such a thing. But there is apparently no need for such a thing to discuss what is observed experimentally. Yes.

AUDIENCE: You told us last time that in order for the mu fixed point to make sense, we must have epsilon very small.

PROFESSOR: Yes.

AUDIENCE: But now we're putting epsilon back to 1.

PROFESSOR: OK. So when people inevitably ask me this question, I give them the following two functions of epsilon. One of them is e to the epsilon over 100, and the other is e to the 100 epsilon. Do I know, a priori, whether or not putting epsilon equal to 1 is a good thing for the expansion?

I don't. And so I don't know whether it is bad and I don't know whether it is good, unless I calculate many more terms in the series and discuss what the convergence of the series is.

AUDIENCE: Epsilon is 4 minus d, which is supposed to be an integer. We just--

PROFESSOR: Oh, you're worried about its integerness as opposed to treating it as a continuum? OK.

AUDIENCE: It's OK when you first assume it [INAUDIBLE]. At the end of the day, it must be an integer.

PROFESSOR: OK. So the example-- somebody was asking me this also last time-- that I have in mind is this. You've learned n factorial to be the product of 1, 2, 3, 4, and so on up to n. And you know it for whatever integer you like; you do the multiplication. But we've also established that n factorial is an integral from 0 to infinity of dx x to the n e to the minus x. And so the question is now, can I talk about 4.111 factorial or not? Can I expand 4.11 factorial close to 4, assuming the form of this so-called gamma function?

So the gamma function is a function of n that at integer values falls on whatever we know, but it has a perfectly good analytic continuation, and I can in principle evaluate 4 factorial by expanding around 3, using the derivatives of the gamma function evaluated at 3.

AUDIENCE: But these kinds of integers don't have anything to do with dimensionality.

PROFESSOR: OK, where did our dimensionality come from?

Our dimensionality appears in our expressions because we have to do integrals of this form. And what do we do?

We replace this with a surface area, which actually involves the factorial by the way. And then we have k to the d minus 1 dk. So these integrals are functions of dimension that have exactly the same properties as the gamma function and the n factorial. They're perfectly well expandable. And they do have singularities. Actually, it turns out the gamma functions also have singularities at minus 1, things like that. Our functions have singularities at two dimensions and so forth.

But the issue of convergence is very important. So let's say that there are some powerful field-theory methods, and in order to do calculations at higher orders, you need to go and do field theory. And you calculate the exponent gamma-- I will write the gamma exponent for the case of n equal to 1. And the series for that is 1 plus-- if we go and do all of our calculations to lowest order, gamma is 2 nu, so it will simply be twice what we have over here.

And the first correction is, indeed, 0.167 times epsilon. The next one is plus 0.077 epsilon squared. The next one is minus-- problematic-- 0.049 epsilon cubed. The next one, plus 0.180 epsilon to the fourth. The next one-- and I think this is as far as people have calculated things-- is of order epsilon to the fifth.

Then, let's put epsilon equal to 1 and see what we get at the various orders. So clearly, I start with 1. At the next order, I will get 1.167. At the next order, I will get 1.244. It's getting there, huh?

And the next order I will get 1.195. Then I will get 1.375. And then I will get 0.96.

[LAUGHTER]

So this is the signature of what is called an asymptotic series: something that, as you evaluate more terms, gets closer to the expected result, but then starts to move away and oscillate. Yet, there are tricks. And if you know your tricks, you can put epsilon equal to 1 in that series and resum it cleverly enough to get 1.2385 plus or minus 0.0025. The trick is called Borel summation.

So one can show that if you go to high orders in this series, asymptotically the p-th term in the series will scale as p factorial times something like a to the power of p, with some coefficient in front. So if I write the general term in this series as f sub p epsilon to the p, my statement is that the magnitude of f sub p, asymptotically for p going to infinity, has this form.

So clearly, because of this p factorial, this is growing too rapidly. But what you can do is rewrite this series, which is the sum over p of f sub p epsilon to the p, using this integral that I had over here for p factorial.

So I multiply and divide by p factorial. So it becomes the sum over p of f sub p epsilon to the p, times the integral from 0 to infinity of dx x to the p e to the minus x, divided by p factorial. And the f sub p over p factorial tames this growth. So then you can recast this as the integral from 0 to infinity of dx e to the minus x times the sum of the series f sub p divided by p factorial, times (epsilon x) raised to the power of p.

And that sum is what's called the Borel function corresponding to this series. And as long as the terms in your series diverge only this badly, people can make sense of this Borel function. And then you perform the integration, and you come up with this number. So that's one thing to note.
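As an illustration of these mechanics-- not the actual five-loop resummation-- here is a minimal sketch of Borel summation applied to a toy series whose coefficients grow exactly like p factorial, namely the Euler series sum over p of (-1)^p p! x^p. Its Borel function sums in closed form to 1/(1 + x t), so the Borel sum becomes a convergent integral (this sketch assumes SciPy is available for the quadrature):

```python
import math
from scipy.integrate import quad  # assumes SciPy is installed

x = 0.2  # plays the role of epsilon

# Partial sums of the divergent series sum_p (-1)^p p! x^p:
# they settle down at first, then run away -- asymptotic-series behavior.
total = 0.0
for p in range(12):
    total += (-1) ** p * math.factorial(p) * x ** p
    print(f"order {p:2d}: partial sum = {total:.4f}")

# Borel resummation: sum_p (-1)^p (x t)^p = 1/(1 + x t) inside the integral,
# so S(x) = integral_0^inf e^{-t} / (1 + x t) dt, which converges.
borel, _ = quad(lambda t: math.exp(-t) / (1.0 + x * t), 0.0, math.inf)
print(f"Borel sum    = {borel:.6f}")
```

The multiply-and-divide-by-p-factorial step in this toy example is exactly the manipulation described above for the epsilon series.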

The other thing to note is I said that what I want to do for my perturbation theory to make sense is for this u star to be small. And I said that the knob that we have is for epsilon to be small. But there is, if you look at that expression, another knob.

I can make n go to infinity. So if n becomes very large-- that also can make the thing small. So there is an alternative expansion: rather than letting epsilon go to 0, you go to what is called the spherical model-- that is, an infinite number of components-- and then do an expansion in 1 over n. And so then, basically, what you are interested in is what happens as a function of d and n.

And you have-- above 4, you know that you are in the Gaussian world. As n goes to infinity, you have these O(n)-type models. And you find that these models actually only make sense in dimensions that are larger than 2. So you can then perturbatively come either from here, or you can come from here, and you try to get to the exponents you are interested in over here or over here. So basically, that's the story.

And for this work, I think, as I said, Wilson did this perturbative RG. Michael Fisher was the person who focused it into an epsilon expansion. And in 1982, Wilson got the Nobel Prize for the work. Potentially, it could have been also awarded to Fisher and Kadanoff for their contributions to this whole story. So that's the end of this part of the course.

And now that we have established this background, we will try to get the exponents and the statistical behavior by a number of other perspectives. So basically, this was a perspective and a route that gave an answer. And hopefully, we'll be able to complement it with other ways of looking at the story.