Lecture 9: Perturbative Renormalization Group Part 1



Description: In this lecture, Prof. Kardar introduces the Perturbative Renormalization Group, including the Expectation Values in the Gaussian Model, Expectation Values in Perturbation Theory, Diagrammatic Representation of Perturbation Theory, and Susceptibility.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK. Let's start with our standard starting point of the last few lectures. That is, we are looking at some system. We tried to describe it by some kind of a field, statistical field after some averaging. That is, a function of position. And we are interested in calculating something like a partition function by integrating over all configurations of the statistical field.

And these configurations have some kind of a weight. This weight we choose to write as exponential of some kind of a-- something like an effective Hamiltonian that depends on the configuration that you're looking at.

And of course, the main thing, faced with a problem that you haven't seen before, is to decide on what the field is that you are looking at after averaging, and what kind of symmetries and constraints you want to build into this form of the weight.

In particular, we are sort of focusing on this Landau-Ginzburg model that describes phase transitions. And let's say in the absence of magnetic field, we are interested in a system that is rotationally symmetric. But the procedure is reasonably standard.

Maybe in some cases you can solve a particular part of this. Let's call that part beta H0. In the context that we are working with, it's the Gaussian part that we have looked at before. So that's the integral over space. We used this idea of locality. We had an expansion in all things that are consistent with this. But for the purposes of the exactly solvable part, we focus on the Gaussian. So there is a term that is proportional to m squared, gradient of m squared and higher-order terms.

So clearly for the time being, I'm ignoring the magnetic field. So let's say in this formulation the problem that we are interested in is how our partition function depends on this coefficient, which, where it goes to 0, makes the Gaussian weight kind of unsustainable.

Now, of course, we said that the full problem has, in addition to this beta H0, a part that involves the interaction. So what I have done is I have written the weight as beta H0 and a part that is an interaction. By interaction, I really mean something that is not solvable within the framework of Gaussian.

In our case, what was non-solvable is essentially anything-- and there is infinity of terms-- that don't have second-order powers of m. So we wrote terms like m to the fourth, m to the sixth, and so forth.

Now, the key to being able to solve this problem was to make a transformation to Fourier modes. So essentially, what we did was to write our m of x as a sum over Fourier modes. You could write it, let's say, in the discrete form as e to the i q dot x. And whether I write e to the i q dot x or minus i q dot x is not as important as long as I'm consistent within one session at least.

And the normalization that I used was 1 over V. And the reason I used this normalization was that if I went to the continuum, I could write it nicely as an integral over q divided by the density of states. The V would disappear. e to the i q x m tilde of q.

Now in particular if I do that transformation, the Gaussian part simply becomes 1 over V sum over q. Then the Fourier transform of this kernel t plus k q squared and so forth divided by 2 m of q discrete squared. Which if I go to the continuum limit simply becomes an integral over q t plus k q squared and so forth over 2 m of q squared.
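Written out, the Gaussian part just described is (using t, K, L for the coefficients of the expansion):

```latex
\beta H_0 = \int d^d x \left[ \frac{t}{2}\, m^2 + \frac{K}{2} (\nabla m)^2 + \frac{L}{2} (\nabla^2 m)^2 + \cdots \right]
= \int \frac{d^d q}{(2\pi)^d}\, \frac{t + K q^2 + L q^4 + \cdots}{2}\, |\tilde m(q)|^2
```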

Now, once I have the Gaussian weight, from the Gaussian weight I can calculate various averages. And the averages are best described by noting that essentially after this transformation I can also write my weight as a product over the contributions of the different modes of something that is of this form, e to the minus beta H0. Now written in terms of these q modes, clearly it's a product of independent contributions.

And then of course, there will be the u to be added later on. But when I have a product of independent contributions for each q, I can immediately see that if I look at, say, m evaluated for some q, m evaluated for some different q with the Gaussian weight. And when I calculate things with the Gaussian weight, I put this index 0. So that's my zeroth-order or exactly solvable theory.

And of course, we are dealing here with a vector. So these things have indices alpha and beta associated with them. And if I look at the discrete version, I have a product over Gaussians for each one of them. Clearly, I will get 0 unless I am looking at the same components. And I'm looking at the same q. And in particular, the constraint really is that q plus q prime should add up to 0.

And if those constraints are satisfied, then I am looking at the particular term in this Gaussian. And the expectation value of m squared is simply the variance that we can see is V divided by t plus k q squared, q to the fourth, and so forth.

And the thing is that most of the time we will actually be looking at things directly in the limit of the continuum where we replace sums over q with integrals over q. And then we have to replace this discrete delta function with a continuum delta function.

And the procedure to do that is that this becomes delta alpha beta. This combination gets replaced by 2 pi to the d delta function q plus q prime, where this is now a Dirac delta function, t plus k q squared plus l q to the fourth and so forth. And the justification for doing that is simply that the Kronecker delta is defined such that if I sum over, let's say, all q, the delta that is Kronecker, the answer would be 1.

Now, if I go to the continuum limit, the sum over q I have to replace with integral over q with a density of states, which is V divided by 2 pi to the d. So if I want to replace this with a continuum delta function of q, I have to get rid of this 2 pi to the d over V. And that's what I have done.

So basically, you replace-- hopefully I didn't make a mistake.

Yes. So the discrete delta I replace with 1 over V. The V disappears and the 2 pi to the d appears. OK?

Now, the thing that makes some difficulty is that whereas the rest of these things that we have not included as part of the Gaussian, because of the locality I could write reasonably simply in the space x. When I go to the space q, it becomes complicated. Because each m of x here I have to replace with a sum or an integral. And I have four of those m's.

So here, let's say for the first term that involves u, I have in principle to go over an integral associated with conversion of the first m, conversion of the second m, third m. Each one of them will carry a factor of 2 pi to the d. So there will be three of them.

And the reason I didn't write the fourth one is because after I do all of that transformation, I will have an integral over x of e to the i q1 dot x plus q2 dot x plus q3 dot x plus q4 dot x. So I have an integral of e to the i q dot x where q is the sum of the four of them over x. And that gives me a delta function that ensures the sum of the four q's have to be 0.

So basically, one of the m's will carry now index q1. The other will carry index q2. The third will carry index q3. The fourth will carry index that is minus q1 minus q2 minus q3. Yes?

AUDIENCE: Does it matter which indices for squaring it, or whatever? Sorry. Do you need a subscript for alpha and beta?

PROFESSOR: OK. That's what I was [INAUDIBLE]. So this m to the fourth is really m squared m squared, where m squared is a vector that is squared. So I have to put the indices-- let's say alpha alpha-- that are summed over all possibilities to get the dot product here. And I have to put the indices beta beta to have the dot products here. OK?
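So, written out with the indices and the momentum-conserving delta function already used, the quartic term is (with summation over the repeated indices):

```latex
U = u \int d^d x\, \left( m^2 \right)^2
= u \int \frac{d^d q_1\, d^d q_2\, d^d q_3}{(2\pi)^{3d}}\;
\tilde m_\alpha(q_1)\, \tilde m_\alpha(q_2)\, \tilde m_\beta(q_3)\, \tilde m_\beta(-q_1-q_2-q_3)
```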

Now when I go to the next term, clearly I will have a whole bunch more integrals and things like that. So the e u terms do not look as nice and clean. They were local in real space. But when I go to this Fourier space, they become non-local. q's that are all over this [INAUDIBLE] are going to be coupled to each other through this expression.

And that's also why it is called interaction. Because in some sense, previously each q was a mode by itself and these terms give interactions between modes that have different q's. Yes?

AUDIENCE: Is there a way to understand that physically of why you get coupling in Fourier space? [INAUDIBLE] higher than [INAUDIBLE].

PROFESSOR: OK. So essentially, we have a system that has translational symmetry. So when you have translational symmetry, this Fourier vector q is a good conserved quantity. It's like a momentum.

So one thing that we have is, in some sense, a particle or an excitation that is going by itself with some particular momentum. But what these terms represent is the possibility that you have, let's say, two of these momenta coming and interacting with each other and getting two that are going out. Why is it [INAUDIBLE]?

It's partly because of the symmetries that we built into the problem. If I had written something that was m cubed, I had the possibility of 2 going to 1, or 1 going to 2, et cetera. All right.

I forgot to say one more thing, which is that for the Gaussian theory, I can calculate essentially all expectation values most [INAUDIBLE] in this context of the Fourier representation. So this was an example of something that had two factors of m. But very soon, we will see that we would need terms that, let's say, involve l factors of m that are multiplied by each other. So I have a product of m alpha i of qi-- something like that.

And again, 0 for the Gaussian expectation value. And if I have written things the way that I have-- that is, I have no magnetic field, so I have m to minus m symmetry-- clearly the answer is going to be 0 if l is odd. If l is even, we have this nice property of Gaussians that we described in 8.333, which is that this will be the sum over all pairs of averages. So something like m1, m2, m3, m4.

You can have m1 m2 average multiplied by m3 m4 average. m1 m3 average multiplied by m2 m4 average. m1 m4 average multiplied by m2 m3 average. And this is what's called Wick's theorem, which is an important property of the Gaussian that we will use.
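As a sanity check, Wick's theorem can be verified directly by enumerating pair contractions. This is only a sketch; the function names (`pairings`, `gaussian_moment`) are illustrative choices, not anything from the lecture.

```python
def pairings(items):
    """Recursively enumerate all ways to partition `items` into pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k in range(len(rest)):
        remaining = rest[:k] + rest[k + 1:]
        for tail in pairings(remaining):
            yield [(first, rest[k])] + tail

def gaussian_moment(cov, indices):
    """<x_{i1} x_{i2} ... x_{il}> for a zero-mean Gaussian with covariance
    matrix `cov`, via Wick's theorem: sum over all pairwise contractions."""
    if len(indices) % 2 == 1:
        return 0.0          # odd moments vanish by the m -> -m symmetry
    total = 0.0
    for p in pairings(list(indices)):
        term = 1.0
        for a, b in p:
            term *= cov[a][b]
        total += term
    return total

# Single variable with variance s: <x^4> = 3 s^2 (3 pairings),
# <x^6> = 15 s^3 (15 pairings).
s = 2.0
print(gaussian_moment([[s]], [0] * 4))   # 3 * s^2 = 12.0
print(gaussian_moment([[s]], [0] * 6))   # 15 * s^3 = 120.0
```

For two correlated components the same routine reproduces the three-term sum written above: `gaussian_moment([[1.0, 0.5], [0.5, 1.0]], [0, 0, 1, 1])` returns 1·1 + 0.5·0.5 + 0.5·0.5 = 1.5.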

So we know how to calculate averages of things of interest, which are essentially product of these factors of m, in the Gaussian theory. Now let's calculate these averages in perturbation theory.

So quite generally, if I want to calculate the average of some O in a theory that involves averaging over some set of variables-- so this could be some trace, some completely unspecified things-- with a weight that is like, say, e to the minus beta H0 minus U. A part that I can do and a part that I want to treat as a small change to what I can do.

The procedure of calculating the average is to multiply the probability by the quantity that I want to average. And of course, the whole thing has to be properly normalized. And this is the normalization, which is the partition function that we had previously.

Now, the whole idea of perturbation is to assume that this quantity u is small. So we start to expand e to the minus u. I can certainly do that very easily, let's say, in the denominator. I have e to the minus beta H0, and then I have 1 minus u plus u squared over 2 minus u cubed over 6. Basically, the usual expansion of the exponential.

In the numerator, I have the same thing, except that there is an O that is multiplying this expansion, for the operator, or the object that I want to calculate the average of.

Now, the first term that I have in the denominator here, 1 multiplied by all integrals of e to the minus H0 is clearly what I would call the partition function, or the normalization that I would have for the Gaussian weight. So that's the first term.

If I factor that out, then the next term is u integrated against the Gaussian weight and then properly normalized. So the next term will be the average of u with the Gaussian, or whatever other zeroth-order weight it is with which I can calculate things, and I have indicated that by 0. And then I would have 1/2 average of u squared 0 and so forth.

And the series in the numerator once I factor out the Z0 is pretty much the same except that every term will have an O. The Z0's naturally I can cancel out. So what I have is from the numerator O minus Ou plus 1/2 Ou squared.

What I will do with the denominator is to bring it in the numerator regarding all of these as a small quantity.

So if I were to essentially write this expression raised to the minus 1 power, I can make a series expansion of all of these terms. So the first thing, if I just had 1 over 1 minus u in the denominator, it would become 1 plus u plus u squared and u cubed, et cetera. But I've only kept things to order of u squared. So when I then correct it because of this thing in the denominator, the 1/2 becomes minus 1/2 u squared 0. And then, there will be order of u cubed terms. So the answer is the product of two brackets.

And I can reorganize that product, again, in powers of u. The lowest-order term is the zeroth-order, or unperturbed average. And the first correction comes from minus the Ou average, and then I have plus the average of O times the average of u. You can see that something like a variance or connected correlation or cumulant appears because I have to subtract out the averages. And then the next order term, what will I have?

I will write it as 1/2. I start with Ou squared 0. Then I can multiply this with this, so I will get minus 2 Ou 0 u 0. And then I can multiply O 0 with those two terms. So I will have minus O 0 u squared 0 plus 2 O 0-- this is over here-- u 0 squared, and higher-order terms.

So basically, we can see that the coefficients, as I have written, are going to be essentially the coefficients that I would have if I were to expand the exponential. So things like minus 1 to the n over n factorial.

And the leading term in all cases is O u to the n-th power, averaged at zeroth order, out of which are subtracted various things. And the effect of those subtractions-- let's say we define a quantity, which is like the cumulants that we were using in 8.333, that describes the subtractions that you would have to make to define such an average. So that's the general structure.

What this really means-- and I sometimes call it cumulant or connected-- will become apparent very shortly. This is the general result. We have a particular case above, which is this Landau-Ginzburg theory perturbed around the Gaussian. So let's calculate the simplest one of our averages, this m alpha of q m beta of q prime, not at the Gaussian level, but as a perturbation.
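The structure of this expansion can be checked in a zero-dimensional toy model: a single variable m with weight exp(-t m^2/2 - u m^4), where the first-order cumulant formula for the average of m squared can be compared against direct numerical integration. Everything below (function name, grid choices, the values t = 1 and u = 0.005) is an illustrative sketch, not part of the lecture.

```python
import math

def avg_m2(t, u, L=10.0, n=200000):
    """<m^2> under the weight exp(-t m^2/2 - u m^4), computed by direct
    numerical integration (midpoint rule on [-L, L])."""
    dm = 2 * L / n
    num = den = 0.0
    for i in range(n):
        m = -L + (i + 0.5) * dm
        w = math.exp(-t * m * m / 2 - u * m ** 4)
        num += m * m * w
        den += w
    return num / den

t, u = 1.0, 0.005
# Gaussian moments with variance 1/t: <m^2>0 = 1/t, <m^4>0 = 3/t^2,
# <m^6>0 = 15/t^3.  First-order cumulant formula:
# <m^2> = <m^2>0 - (<m^2 U>0 - <m^2>0 <U>0) = 1/t - 12 u / t^3
exact = avg_m2(t, u)
first_order = 1 / t - u * (15 / t ** 3 - (1 / t) * (3 / t ** 2))
print(exact, first_order)
```

The subtraction of the disconnected piece is what turns the coefficient 15 into 15 - 3 = 12; for small u the two numbers agree up to corrections of order u squared.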

And actually, for practical reasons, I will just calculate the effect of the first term, which is u m to the fourth. So I will expand in powers of u. But once you see that, you would know how to do it for m to the sixth and all the higher powers.

So according to what we have here, the first term is m alpha of q m beta of q prime evaluated with the Gaussian theory.

The next term, this one, involves the average of this entity and the u. So our u I have written up there. So I have minus from the first term. The terms that are proportional to u, I will group together coming from here.

u itself involved this integration over q1 q2 q3. And u involves this m i of q1 m i of q2 mj of q3 mj of minus q1 minus q2 minus q3. And I have to multiply it. So this is u. I have to multiply it by o. So I have m alpha of q m beta of q prime. So this is my o. This is my u. And I have to take the average of this term. But really, the average operates on the m's, so it will go over here. So that's the average of ou.

I have to subtract from that the average of o average of u. So let me, again, write the next term. u will be the same bunch of integrations. I have to do average of o and then average of u. This completes my first correction coming from u, and then there will be corrections at first order coming from v, the coefficient of m to the sixth. There will be second-order corrections coming from u squared, all kinds of things that will come into play.

But the important thing, again, to realize is the structure. u is this thing that involves four factors of m. The averages are over the m, so I can take them within the integral. And so I have one case which is an expectation value of six m's. Another case, a product of two and a product of four. So that's why I said I would need to know how to calculate in the Gaussian theory a product of various factors of m, because my interaction term involves various powers of m that will be added to whatever expectation value I'm calculating in perturbation theory. So how do I calculate an expectation that involves six factors-- certainly, it's even-- of m?

I have to group-- make all possible pairings. So this, for example, can be paired to this. This can be paired to this. This can be paired to this. That's a perfectly well-defined average.

But you can see that if I do this, if I pair this one, this one, this one, I will get something that will cancel out against this one. So basically, you can see that any way that I do averaging that involves only things that are coming from o and separately the things that come from u will cancel out with the corresponding averages that I do over here. That c stands for connected.

So the only things that survive are pairings or contractions that pick something that is from o and connect it to something that is from u. And the purpose of all of these other terms at all higher orders is precisely to remove pieces where you don't have full connections among all of the o's and the u's that you are dealing with. So let's see what this is.

So I will show you that using connections that involve both o and u, I will have two types of contractions joining o and u.

The first type is something like this. I will, again, draw all-- or write down all of the fours. So I have m alpha of q m beta of q prime m i of q1 m i of q2 mj of q3 mj of minus q1 minus q2 minus q3. I have to take that average. And I do that the average according to Wick's theorem as a product of contractions. So let's pick this m alpha.

It has to ultimately be paired with somebody. I can't pair it with m beta because that's a self-contraction and will get subtracted out. So I can pick any one of these fours. As far as I'm concerned, all four are the same, so I have a choice of four as to which one of these four operators from u I connect to. So that 4 is one of the numerical factors that ultimately we have to take care of.

Then, the two types come about because for the next m that I pick from o I have two possibilities. I can connect it either with the partner of the first one that also carries index i, or I can connect to one of the things that carries the opposite index j. So let's call type 1 where I make the choice that I connect to the partner. And once I do that, then I am forced to connect these two together.

Now, each one of these pairings connects one of these averages. So I can write down what that is. So the first one connected an alpha to an i as far as indices were concerned. It connected q to q1, so I have 2 pi to the d a delta function q plus q1. And the variance associated with that, which is t plus k q squared, et cetera.

The second pairing connects a beta to an i. So that's a delta beta i. And it connects q prime to q2. And so the variance associated with that is t plus k q prime squared and so forth.

And finally, the third pairing connects j to itself j. So I will get a delta jj. And then I have 2 pi to the d. q3 to minus q1 minus q2 minus q3. So I will get minus q1 minus q2, and then I have t plus, say, k q3 squared and so forth.

Now, what I am supposed to do is at the next stage, I have to sum over indices i and j and integrate over q1 q2 q3. So when I do that, what do I get?

There is an overall factor of minus u. Let's do the indices.

When I sum over i, delta alpha i delta beta i becomes-- actually, let me put the factor of 4 before I forget it. There is a factor of 4 numerically. Delta alpha i delta beta i will give me a delta alpha beta.

When I integrate over q1, q1 is set to minus q. So this after the integration becomes q. When I integrate over q2, the delta function q2 forces minus q2 to be q prime. And through the process, two of these factors of 2 pi to the d disappear. So what I'm left with is 2 pi to the d. This delta function now involves q plus q prime.

And then in the denominator, I have this factor of t plus k q squared. I have t plus k q prime squared. Although, q prime squared and q squared are the same. I could have collapsed these things together.

I have one integration left over q3. And these two factors went outside the integral [INAUDIBLE] independent q3. The only thing that depends on q3 is t plus k q3 squared and so forth. So that was easy.

AUDIENCE: I have a question.

PROFESSOR: Yes.

AUDIENCE: If you're summing over j, won't you get an n?

PROFESSOR: Thank you very much. I forgot the delta jj. Summing over j, I will get a factor of n. So what I had written here as 4 should be 4n. Yes.

AUDIENCE: This may be a question too far back. But when you write a correlation between two different m's, why do you write delta function of q plus q prime instead of q minus q prime?

PROFESSOR: OK. Again, go back all the way to here when we were doing the Gaussian integral. I will have for the first one, q1. For the second m, I will write q2. So when I Fourier transform this term, I will have e to the i q1 plus q2 dot x. And then when I integrate over x, I will get a delta function q1 plus q2. So that's why I write all of these as absolute value squared because I could have written this as m of q m of minus q, but I realized that m of minus q is the complex conjugate of m of q. So all of these are absolute values squared.

Now, the second class of contraction is-- again, write the same thing, m alpha of q m beta of q prime m i of q1 m i of q2, mj of q3 mj of minus q1 minus q2 minus q3.

The first step is the same. I pick m alpha of q and I have no choice but to pick one of the four possibilities that I have for the operators that appear in u. But for the second one, previously I chose to connect it to something that was carrying the same index. Now I choose to connect it to something that carries the other index, j in this case.

And there are two things that carry index j, so I have two choices there. And then I have the remaining two have to be connected. Yes?

AUDIENCE: Just going back a little bit. Are you assuming that your integral over q3 converges because you're only integrating over the [INAUDIBLE] zone?

PROFESSOR: Yes.

AUDIENCE: OK.

PROFESSOR: That's right. Any time I see a divergent integral, I have a reason to go back to my physics and see why physics will avoid infinities. And in this case, because all of my theories have an underlying length scale associated with them and there is an associated maximum value that I can go in Fourier space.

The only possible singularities that I want to get are coming from q goes to 0. And again, if I really want to physically cut that off, I would put the size of the system. But I'm interested in systems that become infinite in size.

So the first term for this way of contracting things is as follows. There are eight such types of contractions-- I should have really put the 4 here. Then I have a delta alpha i 2 pi to the d delta function q plus q1 divided by t plus k q squared and so forth.

The first contraction is exactly the same as before. For the next contraction I connect beta to j and q prime to q3. So I have a delta beta j 2 pi to the d delta function q prime plus q3 divided by t plus k q prime squared and so forth.

And the last contraction connects an i to a j. Delta ij. I have 2 pi to the d. Connecting q2 to minus q1 minus q2 minus q3 will give me a delta function which is minus q1 minus q3. And then I have t plus k-- I guess in this case-- q2 squared and so forth.

So once more sum ij. Integrate q1 q2 q3 and let's see what happens. So again, it's a term that is proportional to minus u. The numerical coefficient that it carries is 8. And there is no n here because when I sum over i, you can see that j is set to be the same as alpha. Then when I sum over j, I set alpha to be the same as beta. So there is just a delta alpha beta.

When I integrate over q1, q1 is set to minus q. q3 is set to minus q prime. So this factor becomes the same as q plus q prime. And the two variances, which are in fact the same, I can continue to write as separate entities but they're really the same thing.

And then the one integral that is left-- I did q1 and q3-- it's the integral over q2, 2 pi to the d, 1 over t plus k q2 squared and so forth. It is, in fact, exactly the same integral as before, except that the name of the dummy integration variable has changed from q2 to q3, or q3 to q2.

So we have calculated m alpha of q m beta of q prime to the lowest order in perturbation theory. To the first order, what I had was a delta alpha beta 2 pi to the d delta function q plus q prime divided by t plus k q squared.

Now, note that all of these factors are present in the two terms that I had calculated as corrections. So I can factor this out and write the correction as 1 plus or minus something. It is proportional to u. The coefficient is 4n plus 8, which I will write as 4 times n plus 2.

I took out one factor of t plus k q squared. There is one factor that will be remaining-- hence the t plus k q squared. And then I have one integration over some variable. Let's call it k. It doesn't matter what I call the integration variable. 1 over t plus k k squared, and so forth. And presumably, there will be higher-order terms.
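The combinatorial factor 4(n + 2) can be checked by brute force: enumerate all pairings of the six m's, keep only the connected ones, and sum the Kronecker deltas over the internal indices. The code below is a sketch (the names and slot conventions are illustrative); it ignores the momentum delta functions, since both types of contraction were seen to produce the same q integral.

```python
def pairings(items):
    """Recursively enumerate all ways to partition `items` into pairs."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k in range(len(rest)):
        for tail in pairings(rest[:k] + rest[k + 1:]):
            yield [(first, rest[k])] + tail

def connected_coefficient(n, alpha=0, beta=0):
    """Index sum over connected contractions of
    <m_alpha(q) m_beta(q') (m.m)(m.m)>: slots 0, 1 are the external
    fields; slots 2, 3 share internal index i and slots 4, 5 share
    internal index j, each summed from 0 to n-1."""
    total = 0
    for i in range(n):
        for j in range(n):
            comp = {0: alpha, 1: beta, 2: i, 3: i, 4: j, 5: j}
            for p in pairings([0, 1, 2, 3, 4, 5]):
                if (0, 1) in p:      # self-contraction of the external pair:
                    continue         # disconnected, cancelled by -<O><U>
                term = 1
                for a, b in p:
                    term *= 1 if comp[a] == comp[b] else 0
                total += term
    return total

for n in (1, 2, 3):
    print(n, connected_coefficient(n))        # expect 4*n + 8
print(connected_coefficient(3, alpha=0, beta=1))  # off-diagonal: 0
```

The 4 type-1 contractions each close an internal loop and contribute n; the 8 type-2 contractions contribute 1 each, giving 4n + 8 on the diagonal and 0 for alpha not equal to beta, which is the delta alpha beta structure.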

Now again, I did the calculation specifically for the Landau-Ginzburg, but the procedure you would have been able to do for any field theory. You could have started with a part that you can solve exactly and then look at perturbations and corrections.

Now, there is, in fact, a reason why this correction that we calculated had exactly the same structure of delta functions as the original one. And why I anticipate that higher-order terms, if I were to calculate, will preserve that structure. And the reason has to do with symmetries.

Because quite generally, I can write this for anything-- m alpha of q m beta of q prime-- without doing perturbation theory. Again, let's remember m alphas of q are going to be related to m of x by inverse Fourier transformation. So m alpha of q I can write as an integral d dx e to the-- I guess by that convention, it has to be minus i q dot x-- m alpha of x.

And again, m beta of q prime I can write as minus i q prime dot x prime. And I integrate also over an x prime of m alpha of x m beta of x prime. Now, these are evaluated in real space as opposed to Fourier space. And the average goes over here.

At this stage, I don't say anything about perturbation theory, Gaussian, et cetera. What I expect is that this is a function that, in a system that has translational symmetry, only depends on x minus x prime.

Furthermore, in a system that has rotational symmetry that is not spontaneously broken, that is approaching from the high temperature side, then just rotational symmetry forces you that-- the only tensor that you have has to be proportional to delta alpha beta. So I can pick some particular component-- let's say m1-- and I can write it in this fashion. So the rotational symmetry explains the delta alpha beta.

Now, knowing that the function that I'm integrating over two variables actually only depends on the relative position means that I can re-express this in terms of the relative and center of mass coordinates. So I can express that expression as e to the minus i q minus q prime x minus x prime over 2. And then I will write it as minus i q plus q prime x plus x prime over 2.

If you expand those, you will see that all of the cross terms will vanish and I will get q dot x and q prime dot x prime. Yes.

AUDIENCE: [INAUDIBLE]?

PROFESSOR: Yes. Thank you. So now I can change integration variables to the relative coordinate and the center of mass coordinate rather than integrating over x and x prime. The integration over the center of mass, the x plus x prime variable, couples to q plus q prime. So it will immediately tell me that the answer has to be proportional to a delta function of q plus q prime.

I had already established that there is a delta alpha beta. So the only thing that is left is the integration over the relative coordinate of e to the minus i some q-- either one of them. q dot r. Since q prime is minus q, I can replace it with e to the i q dot the relative coordinate. m1 of r m1 of 0. So that's why for a system that has translational symmetry and rotational symmetry, this structure of the delta functions is really imposed for this type of expectation value. Naturally, perturbation theory has to obey that.
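So the structure forced by translational and rotational symmetry, for any weight with those symmetries, is:

```latex
\langle \tilde m_\alpha(q)\, \tilde m_\beta(q') \rangle
= \delta_{\alpha\beta}\, (2\pi)^d \delta^d(q+q')\, S(q),
\qquad
S(q) = \int d^d r\; e^{i q \cdot r}\, \langle m_1(r)\, m_1(0) \rangle
```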

But then, this is a quantity that we had encountered before. If you recall when we were scattering light out of the system, the amplitude of something that was scattered was proportional to the Fourier transform of the correlation function. And furthermore, in the limit where S is evaluated for q equal to 0, what we are doing is we're essentially integrating the correlation function.

We've seen that the integrals of correlation function correspond to the susceptibilities. So you may have thought that what I was calculating was a two-point correlation function in perturbation theory. But what I was actually leading up to is to know what the result is for scattering from this theory. And in some limit of it, I've also calculated what the susceptibility is, how the susceptibility is corrected.

And again, if you recall the typical structure that people see for S of q is that S of q is something like 1 over something like this. This is the Lorentzian line shapes that we had in scattering. And clearly, the Lorentzian line shape is obtained by Fourier transformation and expectation values of these expansions that we make. So it kind of makes sense that rather than looking at this quantity, I should look at its inverse.

So I have calculated S of q, which is the formula that I have up there. So this whole thing here is S of q. If I calculate its inverse, what do I get?

First of all, I have to invert this. I have t plus k q squared, which is what would have given me the Lorentzian if I were to invert it. And now we have found the correction to the Lorentzian if you like, which is this object raised to the power of minus 1.

But recall that I've only calculated things to lowest order in u. So whenever I see something and I'm inverting it just like I did over here, I better be consistent to order of u. So to order of u, I can take this thing from the numerator, bring it to the num-- from denominator to numerator at the expense of just changing the sign. Order of u squared that we haven't really bothered to calculate.

So now it's nice because you can see that when I expand this, this factor will cancel that factor. So the inverse has the structure that we would like. It is t plus something that is a constant, doesn't depend on q, 4 n plus 2 u.

Well, actually, no. Yeah. Because this denominator gets canceled. I will get 4 n plus 2 u integral over k 2 pi to the d 1 over t plus k k squared and so forth. And then I have my k q squared. And presumably, I will have higher-order terms both in u and higher powers of q, et cetera.

And in particular, the inverse of the susceptibility is simply the first part. Forget about the k q squared. So the inverse of susceptibility is t plus 4 n plus 2 u integral d dk 2 pi to the d 1 over t plus k k squared and so forth. Plus order of things that we haven't computed. So why is it interesting to look at susceptibility?

Because susceptibility is one of the quantities-- it always has to be positive-- that we were associating previously with singular behavior. And in the absence of the perturbative correction from the Gaussian, the susceptibility we calculated many times. It was simply 1 over t.

If I had added a field, the field h would have changed the free energy by an amount that would be h squared over 2t as we saw. Take two derivatives, I will get 1 over t for the susceptibility. So the 0 order susceptibility that I will indicate by chi sub 0 was something that was diverging at t equals to 0. And we were identifying the critical exponent of the divergence as gamma equals to 1. So here, I would have added gamma 0 equals to 1.
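That is, at zeroth order,

```latex
\[
\chi_{0}(t) = \frac{1}{t} \sim t^{-\gamma_{0}}, \qquad \gamma_{0} = 1.
\]
```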

That is because of the linear divergence-- and the linear divergence can be traced back to the inverse susceptibility vanishing linearly with temperature.

Now, let's see whether we have calculated a correction to gamma. Well, the first thing that you notice is that if I evaluate the new chi inverse at 0, all I need to do is to put 0 in this formula. I will get 4 n plus 2 u times this integral d dk 2 pi to the d 1 over k k squared. I set t equals to 0 here.

Now this is, indeed, an integral that if I integrate all the way to infinity would diverge on me. I have to put an upper cutoff. It's a simple integral. I can write it as integral 0 to lambda dk k to the d minus 1 times the surface area of a d-dimensional sphere. I have this 2 pi to the d out front and I have a k squared here. I can put the k out here. So you can see that this is an integral that's just a power. I can simply do that.

The answer ultimately will be 4 n plus 2 u Sd over 2 pi to the d. There is a factor of 1 over k that comes into play. The integral of this will give me the upper cutoff to the d minus 2 divided by d minus 2. So what we find is that the corrected susceptibility to lowest order does not diverge at t equals to 0. Its inverse is a finite value.
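As a sanity check on this power counting, one can compare a numerical evaluation of the radial integral with the closed form lambda to the d minus 2 over K times d minus 2. This is not part of the lecture; the values of d, K, and lambda below are arbitrary illustrative choices.

```python
def radial_integral(d, K, lam, steps=200_000):
    """Midpoint Riemann sum of k^(d-1) / (K k^2) from 0 to lam."""
    h = lam / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += k ** (d - 1) / (K * k ** 2)
    return total * h

# Compare with the closed form  lam^(d-2) / (K (d-2)),  valid for d > 2.
d, K, lam = 3, 2.0, 1.5
numeric = radial_integral(d, K, lam)
analytic = lam ** (d - 2) / (K * (d - 2))
print(numeric, analytic)  # the two agree closely
```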

So actually, you can see that I've added something positive to the denominator so the value of susceptibility is always reduced. So what does that mean? Does that mean that the susceptibility does not have a singularity anymore?

The answer is no. It's just that the location of the singularity has changed. The presence of u m to the fourth gives some additional stiffness that you have to overcome. t equals to 0 is not sufficient for you. You have to go to some other point tc. So I expect that this thing will actually diverge at a new point tc that is negative.

And if it diverges, then its inverse will be 0. So I have to solve the equation tc plus 4 n plus 2 u integral d dk 2 pi to the d of 1 over tc plus k k squared and so forth. So this seems like an implicit equation in tc, because I have to evaluate the integral that depends on tc, and then I have to set that function to 0.

But again, we have calculated things only correctly to order of u. And you can see already that tc is proportional to u. So this answer here is something that is order of u presumably. And a u compared to the u out front will give me a correction that is order of u squared. So I can ignore this thing that is over here.

So to order of u, I know that tc is actually minus what I had calculated before. So I get that tc is minus 4 n plus 2 u Sd lambda to the d minus 2, divided by 2 pi to the d, k, and d minus 2. It doesn't matter what it is, it's some non-universal value.
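Transcribed into symbols (with the lecture's notation for the cutoff and the surface area of the unit sphere), the shift of the critical point is

```latex
\[
t_{c} = -\,4u(n+2)\,\frac{S_{d}}{(2\pi)^{d}}\,
\frac{\Lambda^{d-2}}{K\,(d-2)} + \mathcal{O}(u^{2}),
\]
```

which depends on the cutoff and is therefore non-universal.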

Point is that, again, the location of the phase transition certainly will depend on the parameters that you put in your theory. We readjusted our theory by putting m to the fourth. And certainly, that will change the location of the transition. So this is what we found.

The location of the transition is not universal. However, we kind of hope and expect that the singularity, the divergence of susceptibility, has a form that is universal. There is an exponent that is characteristic of that. So asking the question of how this corrected chi diverges at this tc is the same as asking the question of how its inverse vanishes at tc. So what I am interested in is to find out what the behavior of chi inverse is in the vicinity of the point where it goes to 0. So basically, what this singularity is.

This is, of course, 0. By definition, chi inverse of tc is 0. So I am asking how chi inverse vanishes when I approach tc. So I have the formula for chi inverse once I substitute t, once I substitute tc, and I subtract them. To lowest order I have t minus tc.

To next order, I have this 4 u n plus 2 integral over k 2 pi to the d. I have for chi inverse of t 1 over t plus k k squared minus 1 over tc plus k k squared. And terms that I have not calculated are certainly order of u squared.

Now, you can see that if I combine these two into the same denominator that is the product, in the numerator I will get a factor of t minus tc. The k k squareds cancel. So I can factor out this t minus tc between the two terms. Both terms vanish at t equals to tc.

And then I can look at what the correction is to one, just like I did before. The correction-- actually, this will give me tc minus t, so I will have a minus 4 u n plus 2 integral d dk 2 pi to the d, the product of two of these factors, tc plus k k squared and t plus k k squared, and then order of u squared. OK? Happy with that?
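Combining the two fractions over the common denominator as just described, the result so far (a transcription into symbols) is

```latex
\[
\chi^{-1}(t) = (t - t_{c})\left[\,1 - 4u(n+2)\int \frac{d^{d}k}{(2\pi)^{d}}\,
\frac{1}{(t_{c} + K k^{2})(t + K k^{2})}\,\right] + \mathcal{O}(u^{2}).
\]
```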

Now again, this tc we've calculated is order of u. And consistently, to calculate things to order of u, I can drop that. And again, consistently to doing things to order of u, I can add a tc here. And that's also a correction that is order of u. And this answer would not change.

The justification of why I choose to do that will become apparent shortly, but it is consistent to this order. So what I find at this stage is that I need to evaluate an integral of this form.

And again, with all of these integrals, we'd better take a look at what the most significant contribution to the integral is. And clearly, if I look at k goes to 0, as long as t minus tc is positive I will have no worries, because this 1 over k squared will be killed off by the factor of k to the d minus 1 in dimensions above 2.

But if I go to large k-values, I find that at large k-values the integral is governed by k to the power of d minus 4. So as long as I'm dealing with things that have some upper cutoff, I don't have to worry about it even in dimensions greater than 4.

In dimensions greater than 4, what happens is that the integral is going to be dominated by the largest values of k. But those largest values of k will be cut off by lambda. The answer ultimately will be proportional to 1 over k squared, and then k to the power of d minus 4 replaced by lambda to the d minus 4-- with some overall coefficient involving d minus 4 or whatever. It doesn't matter.

On the other hand, if you go to dimensions less than 4-- again, larger than 2, but I won't write that for the time being-- then the behavior at large k is perfectly convergent. So you are integrating a function that goes up, comes down. You can extend the integration all the way to infinity, end up with a definite integral. We can rescale all of our factors of k to find out what that definite integral is dependent on.

And essentially, what it does is it replaces this lambda with the characteristic value of k that corresponds roughly to the maximum. And that's going to occur at something like t minus tc over k to the power of 1/2. So I will get t minus tc over k to the power of d minus 4 over 2.
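This scaling can be checked numerically; the following is an illustrative check, not from the lecture, with d equal to 3 and K equal to 1 as arbitrary choices. After dropping tc in one factor as described, the correction integral is the integral of k to the d minus 1 over K k squared times delta t plus K k squared, with delta t equal to t minus tc, and it should scale as delta t to the power d minus 4 over 2, i.e. as 1 over the square root of delta t in three dimensions.

```python
import math

def correction_integral(dt, d=3, K=1.0, kmax=200.0, steps=400_000):
    """Midpoint Riemann sum of k^(d-1) / [K k^2 (dt + K k^2)] from 0 to kmax.
    For d < 4 the integrand decays fast enough at large k that a large
    finite kmax approximates the infinite upper limit."""
    h = kmax / steps
    total = 0.0
    for i in range(steps):
        k = (i + 0.5) * h
        total += k ** (d - 1) / (K * k ** 2 * (dt + K * k ** 2))
    return total * h

# Increasing dt by a factor of 4 should shrink the integral by sqrt(4) = 2.
I1 = correction_integral(0.01)
I2 = correction_integral(0.04)
print(I1 / I2)  # close to 2
```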

There is some overall definite integral that I have to do, which will give me some numerical coefficient. But at this time, let's forget about the numerical coefficient. Let's see what the structure is. So the structure then is that chi inverse of t, the singularity that it has is t minus tc to the 0 order. Same thing as you would have predicted for the Gaussian.

And then we have a correction, which is this minus something that goes after all of these things with some coefficient. I don't care what that coefficient is. u n plus 2 divided by k squared. And then multiplied by lambda to the power of d minus 4, or t minus tc over k to the power of d minus 4 over 2. And then presumably, higher-order terms. And whether you have the top or the bottom will depend on d greater than 4 or d less than 4.
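Putting the pieces together in symbols (C stands for a numerical coefficient we are not keeping track of), the structure is

```latex
\[
\chi^{-1}(t) \;\propto\; (t - t_{c})\left[\,1 - C\,\frac{u(n+2)}{K^{2}}
\times
\begin{cases}
\Lambda^{d-4} & d > 4,\\[4pt]
\left(\dfrac{t - t_{c}}{K}\right)^{(d-4)/2} & d < 4,
\end{cases}
\;+\cdots\right].
\]
```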

So you see the problem. If I'm above four dimensions, this term is governed by the upper cutoff. But the upper cutoff is just some constant. So all that happens is that the dependence remains as being proportional to t minus tc. The overall amplitude is corrected by something that depends on u. You are not worried. You say that the leading singularity is the same thing as I had before. Gamma will stay to be 1.

I try to do that in less than four dimensions and I find that as I approach tc, the correction that I had actually itself becomes divergent. So now I have to throw out my entire perturbation theory, because I thought I was making an expansion in a quantity that I can make sufficiently small. So in usual perturbation theory, you say choose epsilon less than 10 to the minus 100, or whatever, and then things will be small corrections to what you had at 0 order.

Here, I can choose my u to be as small as I like. Once I approach tc, the correction will blow up. So this is called a divergent perturbation theory. Yes.

AUDIENCE: So could we have known a priori that we couldn't get a correction to gamma from the perturbation theory because the only way for gamma to change is for the correction to have a divergence?

PROFESSOR: You are presuming that that's what happens. So indeed, if you knew that there is a divergence with an exponent that is larger than gamma, you probably could have guessed that you wouldn't get it this way.

Let's say that we are choosing to proceed mathematically without prior knowledge of what the experimentalists have told us, then we can discover it this way.

AUDIENCE: I was thinking if you're looking for how gamma changes due to the higher-order things, if we found that our perturbation diverged with a lower exponent than gamma, then the leading one would still be there, and the original gamma would survive.

PROFESSOR: Yes.

AUDIENCE: And then if it's higher, then we have the same problem.

PROFESSOR: That's right. So the problem that we have is actually to somehow make sense of this type of perturbation theory. And as you say, it's correct. We could have actually guessed. And I'll give you another reason why the perturbation theory would not have worked.

But the only thing that we can really do is perturbation theory, so we have to be clever and figure out a way of making sense of this perturbation theory, which we will do by combining it with the renormalization group. But a better way or another way to have seen maybe why this does not work is good old-fashioned dimensional analysis.

I have within the exponent of the weight that I wrote down terms that are of this form: t m squared, k gradient of m squared, u m to the fourth, and so forth. Since whatever is in the exponent should be dimensionless-- I usually write beta H, for example-- we know that this t has some dimension: the dimension of t, times the square of the dimension of m, multiplied by length to the d, should be dimensionless.

Similarly, k m squared again, but because of the gradient with l to the d minus 2: that combination should be dimensionless. And my u m to the fourth l to the d should be dimensionless.

So we can get rid of the dimensions of m by dividing, let's say, u m to the fourth by the square of k m squared. So we can immediately see that with u divided by k squared I get rid of the dimensions of m, and l to the power of d over l to the 2d minus 4 gives me l to the 4 minus d. So u over k squared times l to the 4 minus d is dimensionless.
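Tracking the powers of length explicitly (a transcription of this bookkeeping into symbols):

```latex
\[
t\,m^{2}L^{d},\qquad K\,m^{2}L^{d-2},\qquad u\,m^{4}L^{d}
\quad\text{all dimensionless}
\;\Longrightarrow\;
\frac{u\,m^{4}L^{d}}{\left(K\,m^{2}L^{d-2}\right)^{2}}
= \frac{u}{K^{2}}\,L^{4-d}\ \text{is dimensionless}.
\]
```

Substituting the zeroth-order correlation length, which diverges as the square root of K over t minus tc, for the length L reproduces exactly the dimensionless combination u over K squared times t minus tc over K to the power d minus 4 over 2 found in the explicit calculation above.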

So any perturbation theory that I write down ultimately where I have some quantity x, which is at 0 order 1, and then I want to make a correction where u appears, I should have something, u over k squared, and then some power of length to make the dimensions work out. So what lengths do I have available to me?

One length that I have is my microscopic length a. So I could have put here a to the power of 4 minus d. But there is also an emergent length in the problem, which is the correlation length. And there is no reason why the dimensionless form that involves the correlation length should not appear.

And indeed, what we have over here to 0 order, our correlation length had the exponent 1/2 divergence. So this is really the 0 order correlation length that is raised to the power of 4 minus d. So even before doing the calculation, we could have guessed on dimensional ground that it is quite possible that we are expanding in u, we think.

But at the end of the day, we are expanding in u xi to the power of 4 minus d. And there is no way that that's a small quantity on approaching the phase transition. And that hit us in the face, and is also the reason why I replaced this t over here with t minus tc, because the only place where I expect singularities to emerge in any of these expansions is at tc. I arranged things so they would appear at the right place.

So should we throw out perturbation theory completely since the only thing that we can do is really perturbation theory?

Well, we have to be clever about it. And that's what we will do in the next lectures.