Lecture 10: Perturbative Renormalization Group Part 2

Description: In this lecture, Prof. Kardar continues his discussion on the Perturbative Renormalization Group, including Perturbative RG (First Order).

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK, let's start. So last lecture we started with the strategy of using perturbation theory to study our statistical field theories. For example, we need to evaluate a partition function by integrating over all configurations of a field-- let's say n components in d dimensions-- with some kind of a weight that we can write as e to the minus beta H. And the strategy of perturbation was to find a part of this Hamiltonian that we can calculate exactly, and the rest of it, hopefully, treat as a small quantity and do perturbative calculations.

Now, in the context of the Landau-Ginzburg theory that we wrote down, this beta H0 was best described in terms of Fourier modes. So basically, we could make a change of variables to integrate over all configurations of the Fourier modes, and do the same breakdown of the weight in the language of Fourier modes.

Since the underlying theories that we were writing had translational symmetry-- every point in space was the same as any other-- the decomposition into modes was immediately accomplished by going to the Fourier representation. And each component of each q-value would correspond to essentially an independent weight that we could expand in some power series in this parameter q, which is an inverse wavelength. And the lowest-order terms are what determine the longer and longer wavelengths. So there is some weight proportional to m of q squared characterizing this part of the Hamiltonian.
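
For reference, the Gaussian part being described here can be written roughly as

\[
\beta H_0 = \int_0^{\Lambda} \frac{d^d q}{(2\pi)^d}\,\frac{t + K q^2 + \cdots}{2}\,\bigl|\tilde m(q)\bigr|^2 ,
\]

with Lambda the short-wavelength cutoff introduced below, and the dots standing for the unwritten higher powers of q.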

And since in real space we had emphasized some form of locality, the interaction part in real space could be written simply in terms of a power series, let's say, in m. Which means that if we were to then go to Fourier space, things that are local in real space become non-local in Fourier space. And the first of those terms that we treated as a perturbation would involve an integral over four factors of m tilde. Again, translational invariance forces the four q's that appear in the multiplication to add up to 0.

So I would have m of q1 dotted with m of q2, and m of q3 dotted with m of q4, which is minus q1 minus q2 minus q3. And I can go on and do higher orders.
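
Written out, the quartic interaction being described is, schematically,

\[
U = u \int \frac{d^d q_1\, d^d q_2\, d^d q_3}{(2\pi)^{3d}}\;
\tilde m(q_1)\cdot\tilde m(q_2)\;\;\tilde m(q_3)\cdot\tilde m(-q_1-q_2-q_3),
\]

up to the conventions for factors of 2 pi.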

So once we did this, we could calculate various-- let's say, two-point correlation functions, et cetera, in perturbation theory. And in particular, the two-point correlation function was related to the susceptibility. And setting q to 0, we found an expression for the inverse susceptibility where the 0th order just comes from the t that we have over here. And the correction of this perturbation, calculated to order of u, was 4u (n plus 2) times an integral over modes of just the variance of the modes, if you like.
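
For reference, the first-order result being recalled is, roughly,

\[
\chi^{-1}(t) = t + 4u\,(n+2)\int \frac{d^d k}{(2\pi)^d}\,\frac{1}{t + K k^2 + \cdots} + \mathcal{O}(u^2).
\]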

Now, the first thing that we noted was that the location of the point at which the susceptibility diverges, or its inverse vanishes, is no longer at t equals 0. We can see, just setting this expression to 0, that we have a tc which is minus 4u (n plus 2) times the integral of dd k over 2 pi to the d, 1 over-- let's put the K here-- K k squared, plus potentially higher-order terms.

Now, this is an integral that in dimensions above 2-- let's for the time being focus on dimensions above 2-- has no singularity as k goes to 0. k goes to 0, which is long wavelength, is well-behaved. The integral could potentially be singular if I were allowed to go all the way to infinity, but I don't go all the way to infinity. All of my theories have an underlying shortest wavelength, and hence there is a maximum in the Fourier modes, which renders this a completely well-behaved integral.

In fact, if I forget the higher-order terms-- I could put them in, but if I forget them-- I can evaluate what this correction to tc is. It is minus 4u (n plus 2) over K, times S d-- I've been writing S d for the surface area of a d-dimensional unit sphere-- divided by 2 pi to the d. And then I have the integral of k to the d minus 3, which integrates to lambda to the d minus 2 divided by d minus 2.

I wrote that explicitly because we are going to encounter this combination a lot of times. And so we will give it a name k sub d. So it's just the solid angle in d dimensions divided by 2 pi to the d. OK, so essentially, in dimensions greater than 2, nothing much happens.
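
Putting these together, the shift just computed is, approximately,

\[
t_c \simeq -\,\frac{4u\,(n+2)}{K}\,K_d\,\frac{\Lambda^{d-2}}{d-2}\,, \qquad K_d \equiv \frac{S_d}{(2\pi)^d},
\]

with S d the surface area of the unit sphere in d dimensions.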

There is a shift in the location of the singularity compared to the Gaussian. Because you are no longer at the Gaussian, you are at a theory that has additional stabilizing terms such as m to the fourth, et cetera. So there is no problem now with t going to negative values.

The thing that was more interesting was what happens in the vicinity of this new tc. To lowest order we got this form of a divergence, and then at the next order I had a correction, again coming from this form, 4u (n plus 2) times an integral. And actually, this was obtained by taking the difference of two of these factors evaluated at t and at tc. That's what gave me the t minus tc outside.

And then I had an integral that involved two of these factors. Presumably, to be consistent to lowest order, I have to evaluate them at the smallest t that I can. And so I would have two factors of K k squared, or K k squared plus something-- presumably, plus higher-order terms.

The thing about these integrals, as opposed to the previous one, is that, again, I can try to look at the behavior at large k and small k. At large k, no matter how many terms I add to the series, ultimately there is no trouble, because I cut it off at lambda.

Whereas, if I set t equal to 0 in both of these denominator factors, I now have a singularity as k goes to 0 in dimensions less than 4. The integral would blow up in dimensions less than 4 if I am allowed to go all the way to 0, which is arbitrarily long wavelengths.

Now, in principle, if I am not exactly at tc and I'm looking at the singularity as being away from tc, I expect on physical grounds that fluctuations will persist up to some correlation length. So the shortest value of k that I should really physically be able to go to, irrespective of how careful or careless I am with the factors of t and t minus tc that I put here, is of the order of the inverse of the physical correlation length. And as we saw, this means that there is a correction that is of the form of u over K squared times xi to the power of 4 minus d.

And I emphasized that the dimensionless combination of the parameter u that potentially can be added as a correction to a number of order 1 is u divided by K squared, multiplied by some length scale to the power of 4 minus d.
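
For reference, the corrected susceptibility being described has roughly the form

\[
\chi^{-1}(t) \simeq (t - t_c)\left[\,1 - 4u\,(n+2)\int \frac{d^d k}{(2\pi)^d}\,\frac{1}{\bigl(K k^2 + \cdots\bigr)^2} + \cdots\right],
\]

and with the k integration cut off below at about 1 over xi, the bracketed correction is of the order of u xi to the (4 minus d) over K squared, the dimensionless combination just mentioned.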

Above four dimensions, the integral is convergent at small values of k. The integral will be dominated by large k, and the length scale that would appear here would be some kind of short-distance cutoff, like the averaging length. Whereas in four dimensions and below, the divergence of the correlation length is the thing that will make this perturbation theory difficult and [INAUDIBLE].

So this is an example of a divergent perturbation theory. So what we are going to do in order to be able to make sense out of it, and see how this divergence here can be translated to a change in exponent, which is what we are physically expecting to occur, we reorganize this perturbation theory in a conceptual way that is helped by this perturbative renormalization group approach. So we keep the perturbation theory, but change the way that we look at perturbation theory by appealing to renormalization.

So you can see that throughout doing this perturbation theory, I end up having to do integrals over modes that are defined in the space of the Fourier parameter q. And a nice way to implement the coarse graining that led to this field theory is to imagine that this integration is over some sphere where the maximum inverse wavelength, or wave number, that is allowed is some lambda.

And so the task that I have on the first line is to integrate over all modes that live in this sphere. And just on physical grounds, we don't expect to get any singularities from the modes that are at the edge. We expect to get singularities by considering what's going on at long wavelengths, or q going to 0.

So the idea of renormalization group was to follow three steps. The first step was to do coarse graining, which was to take whatever your shortest wavelength was, make it b times larger and average. That's in real space.

In Fourier space, what that amounts to is getting rid of all of the variations at wave numbers beyond lambda over b. So what I can do is to say that I had a whole bunch of modes m of q. I am going to subdivide them into two classes.

I will have the modes sigma of q-- maybe I will write it in a different color, sigma of q-- that are the ones that are sitting here. So these are the sigmas. And these correspond to wave numbers that are between lambda over b and lambda.

And I will have a bunch of other variables that I will call m tilde. And these live close to the origin, where the singularity is, but the cutoff is now reduced to lambda over b. So essentially, getting rid of the picture where I had fluctuations at short length scales amounts to integrating over the Fourier modes that represent your field and that lie in this shell.
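
In symbols, the split of the field being described is

\[
m(q) =
\begin{cases}
\tilde m(q), & 0 < |q| < \Lambda/b \quad \text{(modes to keep)},\\[1mm]
\sigma(q), & \Lambda/b < |q| < \Lambda \quad \text{(modes to integrate out)}.
\end{cases}
\]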

So I want to do that as an operation that is performed, let's say, at the level of the partition function. So I can say that my original integration can be broken up into an integration over this m tilde and an integration over sigma. So that's just rewriting that rightmost integral up there. And then I have the exponential weight.

OK, let's write it out explicitly. So the weight is composed of beta H0 and the U. Now, we note that the beta H0 part, just as we saw already for the case of the Gaussian, does not mix up these two classes of modes. So I can write that part as an integral from 0 to lambda over b of dd q over 2 pi to the d. And these are things that are really inside, so I could also label them by q lesser. So I have m tilde of q lesser squared.

And this multiplies t plus K q lesser squared and so forth, over 2. I have a similar term, which is for the modes that are between lambda over b and lambda. So I have simply broken the overall integration in beta H0 into two parts.

Now I have the higher q numbers. And these are sigma of q larger squared.

Again, same weight, t plus k q larger squared and so forth over 2. Make sure this minus is in line with this. And then I have, of course, the u. So then I have a minus u.

Now, I won't write this explicitly. I will write it explicitly on the next board. But clearly, implicitly it involves both m tilde and sigma mixed up into each other.

So I have just rewritten my partition function after subdividing it into these two classes of modes and just hiding all of the complexity in this function that mixes the two. So let's rewrite this.

I have an integral over the modes that I would like to keep-- the m tilde I would like to keep-- and there is a weight associated with them that will, therefore, not be integrated. This is the exponential of minus the integral from 0 to lambda over b of dd q lesser over 2 pi to the d, t plus K q lesser squared, et cetera, over 2, times m tilde of q lesser squared.

Now, if I didn't have the U there, I could immediately perform the Gaussian integrals over the sigmas. Indeed, we already did this. And the answer would be e to the minus-- there are n components to this vector, so the answer is going to be multiplied by n; the 1/2 is because of the square root that I get from each mode-- n over 2 times a factor of volume, times the integral of dd q greater over 2 pi to the d, integrated from lambda over b to lambda, of log of t plus K q greater squared and so forth.

So if I didn't have the u, this would be the answer for doing the Gaussian integration. But I have the u, so what should I do?

The answer is very simple. I write it as e to the minus U of m tilde and sigma, averaged. So what I have done is to say that, with this weight that is a Gaussian weight for sigma, I average the function e to the minus U.

If you like, this is a Gaussian average over sigma. So to write it explicitly, what I have stated is that an average in which I integrate out the high-frequency, short-wavelength modes is, by definition, an integral over all configurations of sigma, with the Gaussian weight, of whatever object you have, normalized by the Gaussian.

Of course, in our case, the object we are averaging depends both on sigma and m tilde, so the result of this averaging will be a function of m tilde. And indeed, I can write this as an integral over m tilde of q with a new weight, e to the minus beta H tilde, which only depends on m tilde because I got rid of the sigmas by integrating over them.

And by definition, my beta H tilde that depends only on m tilde has a part that is the integral from 0 to lambda over b of dd q lesser over 2 pi to the d of the Gaussian weight, over the range of modes that are allowed.

There is a part that is just this constant term-- if I write it in this fashion, there is an overall constant. Clearly, what this constant is, is the free energy of the modes that I have integrated out, assuming that they are Gaussian, in this interval.

The answer is proportional to volume. But as usual, when we are thinking about weights and probabilities, overall constants don't matter. But I can certainly continue to write that over here. So that part went to here, this part went to here. And the only part that is left is minus the log of the average of e to the minus U of m tilde and sigma, after I get rid of the sigmas.
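
Collecting the three pieces just described, the new weight is, schematically,

\[
\beta \tilde H[\tilde m] = V\,\delta f_b^{0} \;+\; \int_0^{\Lambda/b}\frac{d^d q}{(2\pi)^d}\,\frac{t + K q^2 + \cdots}{2}\,\bigl|\tilde m(q)\bigr|^2 \;-\;\ln\Bigl\langle e^{-U[\tilde m,\sigma]}\Bigr\rangle_{\sigma},
\]

where V delta f b 0 is shorthand for the constant free-energy piece from the Gaussian integration over the sigmas, and the angular brackets denote the Gaussian average over sigma defined above.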

So far, I have done things that are extremely general. But now I note that I am interested in doing perturbation theory. So the only place that I haven't really evaluated things is where this U is appearing inside the exponential, log, average, et cetera. So what I can do is perturbatively expand this exponential over here.

So I will get the log of 1 minus the average of U plus higher orders, which is approximately minus the average of U. So the first term here would be the Gaussian average of U. The next term will be minus 1/2 times the average of U squared minus the square of the average of U, and so forth. So you can see that the variance appears at this stage.

And generally, the l-th term in the series would be minus 1 to the power of l divided by l factorial. And we saw this already-- the log of the average of e to the something is the generator of cumulants. So the l-th cumulant of U would appear here.

And again, the cumulants serve the function of keeping only the connected pieces, as we shall see shortly. So that's what we are going to do. We are going to insert the U-- all the things that go beyond the Gaussian, but initially just the m to the fourth part-- inside this series, and term by term calculate the corrections to this weight that we get after we integrate out the short-wavelength modes, or the large q modes. OK?
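
For reference, the expansion being set up is

\[
-\ln\Bigl\langle e^{-U}\Bigr\rangle_{\sigma}
= \bigl\langle U\bigr\rangle_{\sigma} \;-\; \tfrac{1}{2}\Bigl(\bigl\langle U^2\bigr\rangle_{\sigma}-\bigl\langle U\bigr\rangle_{\sigma}^2\Bigr) \;+\;\cdots
= -\sum_{\ell\ge 1} \frac{(-1)^{\ell}}{\ell\,!}\,\bigl\langle U^{\ell}\bigr\rangle^{c}_{\sigma},
\]

with the superscript c denoting the cumulant.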

So let's focus on this first term. So what is this u that depends on both m tilde and sigma? And I have the expression for u up there.

So I can write it as u integral dd q1 dd q2 dd q3 to be symmetric in all of the four q's. I write an integration over the fourth q, but then enforce it by a delta function that the sum of the q's should be 0.

And then I have four factors of m, but an m depending on which part of the q space I am encountering is either a sigma or an m tilde. So without doing anything wrong, I can replace each m with an m tilde plus sigma. So depending on where my q1 is in the integrations from 0 to lambda, I will be encountering either this or this.

And then I have the dot product of that with m tilde q2 plus sigma of q2. And then I have the dot product that would correspond to m tilde of q3 plus sigma of q3 with m tilde of q4 plus sigma of q4. So that's the structure of my [INAUDIBLE].
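
Written out, the structure just described is, roughly,

\[
U = u\int \frac{d^d q_1\cdots d^d q_4}{(2\pi)^{4d}}\,(2\pi)^d\,\delta^d(q_1+q_2+q_3+q_4)\;
\bigl[\tilde m + \sigma\bigr](q_1)\cdot\bigl[\tilde m + \sigma\bigr](q_2)\;\;\bigl[\tilde m + \sigma\bigr](q_3)\cdot\bigl[\tilde m + \sigma\bigr](q_4),
\]

where the bracket is shorthand: it stands for m tilde of q when the wave number is below lambda over b, and for sigma of q when it lies in the shell between lambda over b and lambda.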

And again, what I have to do in principle is to integrate out the sigmas keeping the m tildes when I perform this averaging over here. So let's write down, if I were to expand this thing before the integration, what are the types of terms that I would get?

And I'll give them names. One type of term that is very easy is when I have m tilde of q1 dotted with m tilde of q2, and m tilde of q3 dotted with m tilde of q4. If I expand this, there are 2 terms per bracket and there are 4 brackets, so there are 16 terms; only 1 of these terms out of the 16 is of this variety.

What I will do is also now introduce a diagrammatic representation. Whenever I see an m tilde, I will include a straight line. Whenever I see a sigma, I will include a wavy line. So this entity that I have over here is composed of four of these straight lines.

And I will indicate that by this diagram, q1, q2, q3, q4. And the reason is, of course-- first of all, there are four of these. So this is a so-called vertex in a diagrammatic representation that has four lines.

And secondly, the lines are not all totally equivalent because of the way that the dot products are arranged. Say, q1 and q2, which are dotted together, are distinct, let's say, from q1 and q3, which are not dotted together. And to indicate that, I make sure that there is this dotted line in the vertex that separates and indicates which two are dotted into each other.

Now, the second class of diagram comes when I replace one of the m tildes with a sigma. So I have sigma of q1 dotted with m tilde of q2, m tilde of q3 dotted with m tilde of q4.

Now clearly, in this case, I had a choice of four factors of m tilde to replace with this. So of the 16 terms in this expansion, 4 of them belong to this class. Which if I were to represent diagrammatically, I would have one of the legs replaced with a wavy line and all the other legs staying as solid lines.

The third class of terms correspond to replacing two of the m tildes with sigmas. Now here again, I have a choice whether the second one is a partner of the first one that became sigma, such as this one, sigma of q1 dotted with sigma of q2, m tilde of q3 dotted with m tilde of q4.

And then clearly, I could have chosen one pair or the other pair to change into sigmas. So there are two terms that are like this. And diagrammatically, the wavy lines belong to the same branch of this object.

OK, next. Keep going. Actually, I have another thing when I replace two of the m tildes with sigma, but now belonging to two different elements of this dot product. So I could have sigma of q1 dotted with m tilde of q2. And then I have sigma of q3 dotted with m tilde of q4. In which case, in each of the pairs I had a choice of two for replacing m tilde with sigma. So that's 2 times 2. There are four terms that have this character.

And if I were to represent them diagrammatically, I would need to put two wavy lines on two different branches. And then I have the possibility of three things replaced. So I have sigma of q1 sigma of q2 sigma of q3 m tilde of q4. And again, now it's the other way around. One term is left out of 4 to be m tilde. So this is, again, a degeneracy of 4.

And diagrammatically, I have three lines that are wavy and one line that is solid. And at the end of the story, number 6, I will have one diagram which is all sigmas, which can be represented essentially by all wavy lines.

And to check that I didn't make any mistake in my calculation, the sum of these numbers better be 16. So that's 5, 7, 11, 15, 16. All right?

Now, the next step of the story is to do these averages. So I have to do the average.

Now, the first term doesn't involve any sigmas. All of my averages here are obtained by integrating over sigmas. If there are no sigmas to integrate, after I do the averaging here I essentially get the same thing back. So I will get this same expression.

And clearly, that would be a term that would contribute to my beta H tilde, which is identical to what I had originally. It is, again, m to the fourth. So that we understand.

Now, the second term here, what is the average that I have to do here?

I have one factor of sigma with which I can average. But the weight that I have is even in sigma. So the average of sigma, which is Gaussian-distributed, is 0. So this will give me 0.

And clearly, here also there is a term that involves three factors of sigma. Again, by symmetry this will average out to 0.

Now, there is a way of indicating what happens here. See, what happens here is that I will have to do an average of this thing. The m tildes are not part of the averaging. They just go out. The average moves all the way over here.

And the average of sigma of q1, sigma of q2, I know what it is. It is going to be-- I could have just written it over here. It's 2 pi to the d, a delta function of q1 plus q2, divided by t plus K q squared. Maybe I'll explicitly write it over here.

So what we have here is that the average of sigma of q1 with some index and sigma of q2 with some other index is-- first of all, the two indices have to be the same. I have a delta function of q1 plus q2. And then I have 1 over t plus K q1 squared and so forth. So it's my usual Gaussian.
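
In symbols, the propagator just quoted is

\[
\bigl\langle \sigma_\alpha(q_1)\,\sigma_\beta(q_2)\bigr\rangle_{\sigma}
= \frac{\delta_{\alpha\beta}\,(2\pi)^d\,\delta^d(q_1+q_2)}{t + K q_1^2 + \cdots}\,.
\]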

So essentially, you can see that one immediate consequence of this averaging is that previously these things had two different momenta and potentially two different indices. They get to be the same thing.

And the fact that the labels that were assigned to this, the q and the index alpha, are forced to be the same, we can diagrammatically indicate by making this a closed line. So we are going to represent the result of that averaging with essentially taking-- these two lines are unchanged. They can be whatever they were. These two lines really are joined together through this process. So we indicate them that way.

And similarly, when I do the same thing over here, I do the averaging of this and the answer I can indicate by leaving these two lines by themselves and joining these two wavy lines together in this fashion.

Now, when you do-- this one we said is 0. So there's essentially one that is left, which is number 6. For number 6, we do our averaging. And for that we have to use, for the average of a product of four sigmas that are Gaussian-distributed, Wick's theorem.

So one possibility is that sigma 1 and sigma 2 are joined, and then sigma 4 and sigma 3 have to be joined. So basically, I took sigma 1 and sigma 2 and joined them, sigma 3 and sigma 4 that I joined them.

But another possibility is I can take sigma 1 with sigma 3 or sigma 4. So there are really two choices. And then I will have a diagram that is like this.

Now, each one of these operations and diagrams really stands for some integration and result. And let's, for example, pick number 3.

For number 3, what we are supposed to do is to do the integration. Sorry. First of all, number 3 has a numerical factor of 2. This is something that is proportional to u when we take the average.

I have in principle to do integration over q1 q2 q3 q4. OK.

The m tilde of q3 and m tilde of q4 in this diagram were not averaged over, so that term remains. I did the averaging over q1 and q2. When I did that averaging, I, first of all, got a delta alpha alpha, because those were two things that were dotted into each other, so they were carrying the same index to start with.

I have a 2 pi to the d, a delta function of q1 plus q2, and 1 over t plus K q1 squared.

Now, delta alpha alpha-- summing over alpha gives a factor of n. And when you look at these diagrams, quite generally, whenever you see a loop, you would associate with it a factor of n because of the index that runs around it and gets summed over. So this answer is going to be proportional to 2 u n.

Now, the sum of q1 and q2 is set to 0 by the delta function. So q3 and q4 also have to add up to 0. So for the part that involves q3 and q4, the m tildes, I will essentially get an integral dd-- let's say over q3; it doesn't matter because it's a variable of integration-- of m tilde of q3 squared.

And again, q3 goes with one of these m tildes, so this is an integration that I have to do between 0 and lambda over b. And there is essentially one integration left from the sigmas, because q1 and q2 are tied to each other. So this is an integral-- let's call the integration variable that was q1, say, k; it doesn't matter-- dd k over 2 pi to the d of the same integrand that we've seen before. Except that since this originated from the sigmas, the integration here is from lambda over b to lambda.

So this is basically a number that I can take, say, out here and regard as a coefficient that multiplies a term that is m tilde squared. And similarly, term number 4.
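
Putting the pieces together, contribution number 3 being described is, roughly,

\[
[3] \;=\; 2\,u\,n \int_0^{\Lambda/b}\frac{d^d q}{(2\pi)^d}\,\bigl|\tilde m(q)\bigr|^2 \int_{\Lambda/b}^{\Lambda}\frac{d^d k}{(2\pi)^d}\,\frac{1}{t + K k^2 + \cdots}\,.
\]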

Term 4. We said we have four diagrams of this variety, so this would be a contribution that is 4u. I can write out the whole thing.

Certainly, I have all of this, I have all of this. In that case, I have-- well, let's see-- m tilde of q2 and m tilde of q4. And they carry different indices because they came from two different dot products. And then I have to do an average over sigma of q1 and sigma of q3, which carry different indices, alpha and beta. That gives me a delta alpha beta, 2 pi to the d, a delta function of q1 plus q3, divided by t plus K q1 squared.

Again, since q1 plus q3 is 0 and the sum of the four q's is 0, these two have to add up to 0. So the answer, again, will be written as 4u times the integral from 0 to lambda over b of dd q over 2 pi to the d of m tilde of q squared, and then the same integration from lambda over b to lambda of dd k over 2 pi to the d, 1 over t plus K k squared.
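
Similarly, contribution number 4 is, roughly,

\[
[4] \;=\; 4\,u \int_0^{\Lambda/b}\frac{d^d q}{(2\pi)^d}\,\bigl|\tilde m(q)\bigr|^2 \int_{\Lambda/b}^{\Lambda}\frac{d^d k}{(2\pi)^d}\,\frac{1}{t + K k^2 + \cdots}\,,
\]

so that contributions 3 and 4 together add 2u (n plus 2) times the shell integral to the coefficient of m tilde of q squared.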

So out of the six terms, two are 0, two are explicitly calculated over here, and one is trivially just m to the fourth. The last one is basically obtained by summing up all of these contractions, but those explicitly do not depend on m tilde. So I'll just call the result of doing all of this V times delta f b at order 1. In the same way that integrating over the modes sigma that I'm not interested in, and averaging over them, gave a constant of integration, that constant of integration gets corrected at order u over here. I don't need to keep track of it explicitly.

So given all of this information, let's write down what our last line from the previous board is. So our intent was to calculate a weight that governs these coarse-grained modes. And the answer is that, first of all, we will get a bunch of constants, delta f b 0 plus delta f b 1, that we don't really care about. They're just an overall change that doesn't matter for the probabilities; it's just a contribution to the free energy. And then we start to get things.

And to the lowest order, what we have is just the Gaussian weight, but only over this permitted set of wavelengths. So I have the integral of dd q lesser over 2 pi to the d, t plus K q lesser squared and so forth, divided by 2, times m tilde of q squared.

Then, term number 1 in the series gave me what? It gave me something that was equivalent to my U-- if I were to Fourier transform back to real space, m to the fourth-- except that my cutoff has been reduced to lambda over b. So I don't want to bother to write down that full form in terms of Fourier modes.

Essentially, if I want to write this explicitly, it is just like that line that I have, except that for the integrations I'll have to explicitly indicate 0 to lambda over b. So the only terms that we haven't included are the ones that are over here.

Now, you look at those terms and you find that the structure of these terms is precisely what we have over here. Except that there is a modification. There is a constant term that is added from this one and there's a constant term that is added from that one. So the effect of those things I can capture by changing the parameters t to something else t tilde.

So you can see that, to order of u-- order of u squared I haven't calculated-- the only effect of this coarse graining is to modify this one parameter, so that t goes to t tilde. It certainly depends on how much I coarse grain things. And this is the original t plus the sum of these things. So I will have 2 (n plus 2) u times the integral of dd k over 2 pi to the d, 1 over t plus K k squared-- and presumably, higher-order terms are allowed-- going from 0 to lambda over here.

AUDIENCE: Question.

PROFESSOR: Yes. Question?

AUDIENCE: Yeah. When you have an integration over x, if you have previously defined lambda to be a cutoff in k space, might it be-- is it 1 over b lambda then? Or, is it b over lambda? Because--

PROFESSOR: OK. You're right. So previously, maybe the best way to write this would have been that there is a shortest length scale a. So I should really indicate what's happening here as the shortest length scale having gone to b times a. And there is always a relationship between the a and lambda, which is an inverse relation, but there are factors of 2 pi and things like that which I don't really want to bother with. It doesn't matter. Yes.

AUDIENCE: Shouldn't it be t tilde is equal to t plus 4 multiplied by--?

PROFESSOR: Exactly. Good. Because the coefficients here are divided by 2. So that 2 I forgot, and I should restore it. And if I had gone a little bit further, I would have then started comparing this formula with this formula, and I would have realized that I should have had the 4.

Clearly, the two formulas are telling me the same thing. You can see that they are almost exactly the same, with the exception of the range of integration. Yes?

AUDIENCE: One other thing. The bounds of those integrals for your t tilde-- shouldn't they be lambda over b to lambda?

PROFESSOR: Yes. Lambda over b to lambda. And it is because of that that I don't really have to worry. Because we saw that, when we were doing straightforward perturbation theory, the reason that perturbation theory was blowing up in my face was integrating all the way to the origin. And the trick of the renormalization group is that by averaging over there, I don't yet reach the singularity that I would have at k equals 0. This integral by itself is not problematic at k equals 0, but future integrals would be. Any other mistakes?

No. All right.

But the other part of this story is that the effect of this coarse graining at lowest order in u was to modify this parameter t, but, importantly, to do nothing else. That is, we can see that in this Hamiltonian that we had written, the K after coarse graining is the same as the old K, and the u is the same as the old u. That is, to the lowest order, the only effect of coarse graining was to modify the parameter that is the coefficient of m squared.
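
With the corrections noted in the questions above, the coarse-grained parameters at this order are

\[
\tilde t = t + 4\,u\,(n+2)\int_{\Lambda/b}^{\Lambda}\frac{d^d k}{(2\pi)^d}\,\frac{1}{t + K k^2 + \cdots} + \mathcal{O}(u^2),
\qquad \tilde K = K, \qquad \tilde u = u .
\]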

But we have not completed our task of constructing a renormalization group. Renormalization group had three steps. The most difficult step, which was the coarse graining, we have completed. But now we have generated a grainy picture in which the shortest wavelengths are a factor of b larger than what we had before. In order to make our pictures look the same, we have to do steps 2 and 3 of RG.

Step 2 was to shrink all of the lengths in real space, which amounts to q prime being b times q. And that will restore the upper limit of the q integration to be lambda. And there was a rescaling that we had to perform for the magnitude of the fluctuations, which amounted to replacing the m tilde of q with z m prime. So this would be, if you like, q lesser, and this would be m prime.

Now, the reason these steps are trivial is that whenever I see a q, I replace it with b inverse q prime; whenever I see an m tilde, I replace it with z m prime. So then I will find the Hamiltonian that characterizes the m prime variables, the variables after RG has been implemented, which is-- OK, there is a bunch of constants out front. There is the delta f b 0 and delta f b 1. The sign I had to put as plus, but really, it doesn't matter.

Then, I go and write down what I have. The integration over q prime, after the rescaling is performed, goes back to the same cutoff, or the same shortest wavelength, as before. Except that when I do this replacement of q lesser with q prime, I will get a factor of b to the minus d down here, because there are d integrations.

And then I have t tilde. The next term is K tilde, but K tilde is the same thing as K. It goes with q squared, so this becomes q prime squared, and then I will get a b to the minus 2. Higher orders will get more factors of b to the minus 2. All of this is over 2. And then I have m tilde, which is replaced by z m prime; there are two of them, so I will get z squared times m prime of q prime squared.

If I were to explicitly now write the factors that go in construction of u, since u had three integrations over q-- left board over there-- I will get three integrations over q prime giving me a factor of b to the minus 3d. And then I have four factors of m tilde that become m prime.

Along the way, I will pick up four factors of z. And then, of course, order of u squared. So under these three steps of RG, what happened was that I generated t prime, which is z squared b to the minus d t tilde. I generated u prime, which is z to the fourth b to the minus 3d times the original u. And I generated K prime, which is z squared b to the minus d minus 2 times K. And I could do the same for various other parameters.
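
In summary, the three steps give

\[
t' = z^2\,b^{-d}\,\tilde t, \qquad u' = z^4\,b^{-3d}\,u, \qquad K' = z^2\,b^{-d-2}\,K .
\]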

Now again, we come up against this issue of what to choose for zeta-- sorry, for z. And what I had said previously was that we went through all of this exercise of doing the Gaussian model via RG in order to have an anchoring point. And there we saw that the thing that we were interested in was to look at the point where K prime was the same as K. So let's stick with that choice and see what the consequences are.

So choose z such that K prime is the same as K, which means that I choose my z to be b to the 1 plus d over 2, exactly as I had done previously for the Gaussian model. So now I can substitute these values and, therefore, see what the value of my t prime is, following a rescaling by a factor of b.

z squared b to the minus d-- I will get b to the power of 2 plus d minus d, so that would give me b squared. And I have t plus 4u (n plus 2) times this integral from lambda over b to lambda of dd k over 2 pi to the d, 1 over t plus K k squared, and so forth. Plus, presumably, order of u squared.

And then my factor of u, rescaled by b: I have to put in four factors of z, so I will get b to the 4 plus 2d, and then 3d gets subtracted, so I will get b to the 4 minus d times u again-- plus, presumably, order of u squared. So the first factors are precisely the things that we had obtained for the Gaussian model. So the only thing that we gained by this exercise so far is this correction to t.
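
With z equal to b to the 1 plus d over 2, the recursion relations just obtained read

\[
t'(b) = b^{2}\left[\,t + 4u\,(n+2)\int_{\Lambda/b}^{\Lambda}\frac{d^d k}{(2\pi)^d}\frac{1}{t + K k^2 + \cdots}\right] + \mathcal{O}(u^2),
\qquad
u'(b) = b^{\,4-d}\,u + \mathcal{O}(u^2).
\]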

We will see that that correction is not that important. And in order to really gain an understanding, we have to go to the next order. But let's use this opportunity to also make some changes of terminology that is useful.

So clearly, the way that we had constructed this-- and in particular, if you are thinking about averaging over spins, et cetera, in real space-- the natural thing to think about is that maybe your b is a factor of 2, or a factor of 3. You sort of group things by a factor of twice as much and then do the averaging.

But when you look at things from the perspective of this momentum shell, this b can be anything. And it turns out to be useful just as a language tool, not as a conceptual tool, to make this b to be very close to 1 so that effectively you are just removing a very, very tiny, thin shell around the boundary.

So essentially, what I am saying is: choose a b that is almost 1, maybe a little bit shifted from 1. Then clearly, as b goes to 1, t prime has to go to t, and u prime has to go to u. So what I can do is define t prime, at b equals 1 plus delta l, to be t plus a small amount, delta l times dt by dl, plus order of delta l squared.

And similarly, I can write u prime to be u plus delta l du by dl, and higher orders. And the reason is that if we now look at the parameter space-- t, u, whatever-- the effect of this procedure, rather than being some jump from one point to another point because we rescaled by a factor of b, is to go to a nearby point. So these things, dt by dl and du by dl, essentially point to the direction in which the parameters change. And so they allow you to replace the jumps that you had by things that are called flows in this parameter space. So basically, you have constructed vectors that describe the flows in this parameter space.

Now, if I do that over here, we can also see some other things emerging. So b squared I can write as 1 plus 2 delta l. And then here, I have t.

Now clearly, if b were 1, the integral would be 0. I am integrating over a tiny shell. So the answer here, when b goes to 1, is really just the area of that sphere multiplied by the thickness. And so what do I get?

I get 4u (n plus 2), which is just the overall coefficient. Then, what is the surface area?

I have the solid angle divided by 2 pi to the d. OK, we defined the solid angle divided by 2 pi to the d to be K d. What's the value of the integrand on shell?

On shell, I have to replace k with lambda. So I will get 1 over t plus K lambda squared. Actually, the surface area is S d lambda to the d minus 1, and then the thickness is lambda delta l.

The second one is actually very easy. It is 1 plus (4 minus d) delta l, times u. So you can see that if I match things to order of delta l, I get the following RG flows.

So the left-hand side is t plus delta l dt by dl. On the right-hand side, if I expand, there is a factor of t, and that gets rid of the t on the left. And I'm left with a term that is proportional to delta l, whose coefficient on the left-hand side is dt by dl. On the right-hand side, I get a factor of 2t from multiplying the 2 delta l with t, and, from multiplying the 1 with the result of this integration, I get 4u (n plus 2) K d lambda to the d divided by t plus K lambda squared. And I don't need to evaluate any integrals. And the second flow equation, for the parameter u, is du by dl equals (4 minus d) u.
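
Collected together, the differential flow equations at this order are

\[
\frac{dt}{d\ell} = 2\,t + \frac{4\,u\,(n+2)\,K_d\,\Lambda^{d}}{t + K\Lambda^{2}} + \mathcal{O}(u^2),
\qquad
\frac{du}{d\ell} = (4-d)\,u + \mathcal{O}(u^2).
\]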

Now, clearly in the language of flows, a fixed point is where there is no flow. So dt by dl and du by dl need to be 0. And du by dl being 0 immediately tells me that u star has to be 0. And if I set u star equal to 0, I will see that t star has to be 0. So these equations have one and only one fixed point at this order.

And then looking for relevance and irrelevance of going away from the fixed point can be captured by linearizing. That is, I write my t to be t star plus delta t. Of course, my t star and u star are both 0. But in general, if they were not at 0, I would linearize my, in general, nonlinear RG recursion relations by going slightly away from the fixed point. And then the linearized form of the equations says that a small deviation delta t, delta u-- and there could be more and more of these operators-- flows according to a matrix multiplying delta t and delta u.

And clearly, the row of the matrix for u is very simple. It is simply (4 minus d) times delta u. The row for t?

Well, there are two terms. First of all, there is this 2. And then, if I am taking a derivative, there would be a derivative of this expression, because there is a t-dependence here. But ultimately, since I'm evaluating it at u star equals 0, I don't need to include that term here.

But if I now make a variation in u, I will get an off-diagonal term here, which is the coefficient of u over there, 4 (n plus 2) K d lambda to the d divided by K lambda squared. So these two can combine with each other.

Now, looking for relevance or irrelevance is then equivalent-- previously, we were talking about it in terms of the full equations with a b that could have been anything, but now we have gone to this limit of infinitesimal b. What I have to do is to find the eigenvalues of the matrix that I have for these flows.

Now, for a matrix that has this structure, where there is a 0 on one side of the diagonal, I immediately know that the eigenvalues are 2 and 4 minus d. So basically, depending on whether I'm in dimensions greater than 4 or less than 4, I will have either one or two relevant directions.
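
In matrix form, the linearization just described around t star = u star = 0 is, roughly,

\[
\frac{d}{d\ell}\begin{pmatrix}\delta t\\ \delta u\end{pmatrix}
=
\begin{pmatrix}
2 & \dfrac{4\,(n+2)\,K_d\,\Lambda^{d}}{K\Lambda^{2}}\\[3mm]
0 & 4-d
\end{pmatrix}
\begin{pmatrix}\delta t\\ \delta u\end{pmatrix},
\]

with eigenvalues 2 and 4 minus d read off from the diagonal of the triangular matrix.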

In particular, let's look at what is happening for d greater than 4. For d greater than 4, I will just have one relevant direction. And let's look at the behavior and the flows that I am allowed to have in the two parameters t and u.

Now, if my Hamiltonian only has t and u, I'm only allowed to look at the case where u is positive, in order not to have weights that are unbounded. My fixed point occurs at (0, 0). And then one simple thing is that if u is 0, it will stay 0. And then dt by dl is 2t. So I know that I always have an eigen-direction that is along the t-axis and is flowing away with a velocity of 2, if you like.

The other eigen-direction is not the axis t equals 0. Because if t is 0 originally but u is nonzero, this term will generate some positive amount of t for me.

So if I start somewhere on this axis, t will go in the direction of becoming positive. Above four dimensions, u will become smaller. So above four dimensions, you can see that the general trend of the flows is going to be something like this. Indeed, there has to be a second eigen-direction, because I'm dealing with a 2 by 2 matrix.

And if you look at it carefully, you'll find that the second eigen-direction is down here and corresponds to a negative eigenvalue. So I basically would be having flows that go towards that. And in general, the character of the flows, if I have parameters somewhere here, they would be flowing there. If I have parameters somewhere here, they would be flowing there.

So physically, I have something like iron in five dimensions, for example. And it corresponds to being somewhere here. When I change the temperature of iron in five dimensions, I execute some trajectory such as this. All of the points that are on this side of the trajectory will, under the flow, go to a place where u becomes small and t becomes positive. So I will essentially go to this Gaussian-like fixed point that describes independent spins.

All the things that are down here, which previously in the Gaussian model we could not describe-- because the Gaussian model did not allow me to go to negative t-- now, because I have a positive u, I have no problem with that. And I find that essentially I go to large negative t at some positive u. I can figure out what the amount of magnetization is.

And so then in between them, there is a trajectory that separates paramagnetic and ferromagnetic behavior. And clearly, that trajectory at long scales corresponds to the fixed point that is simply the gradient squared. Because all the other terms went to 0, so we know all of the correlation functions, et cetera, that we should see in this system.

Unfortunately, if I look at d less than 4, what happens is that I still have the same fixed point. And actually, the eigen-directions don't change all that much. u equals 0 is still an eigen-direction that is relevant, with eigenvalue 2. But the other direction changes from being irrelevant to being relevant. So I have something like this. And the natural types of flows that I get kind of look like this.

And now if I take my iron in three dimensions and change its temperature, I go from behavior that is kind of paramagnetic-like to ferromagnetic-like. And there is a transition point. But that transition point I don't know what fixed point it goes to. I have no idea.

So the only difference between doing this analysis and what we had done just on the basis of scaling and Gaussian theory, et cetera, is that we have located the shift in t due to u, which is what we had done before anyway. So was it worth it?

The answer is up to here, no. But let's see if you had gone one step further what would have happened.

The series is an alternating series. So the next-order term that I get here, I expect, will come with some negative u squared. So let's say there will be some amount of work that I have to do, and I calculate something that is minus A u squared.

I do some amount of work here, and I calculate something that is minus B u squared. But then you can see that if I search for the fixed point, I will find another one at u star, which is (4 minus d) over B. So I will find another fixed point. And indeed, we'll find that things that left here will go to that point. And so that point has one direction-- the one that was relevant here-- that becomes irrelevant. And the other direction is the analog of this direction, which still remains relevant, but with a modified exponent. We will figure out what that is. So the next step is to find this B, and then everything will be resolved.
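
Schematically, once that next-order coefficient (written B here) is computed, the expectation is

\[
\frac{du}{d\ell} = (4-d)\,u - B\,u^{2} + \cdots
\quad\Longrightarrow\quad
u^{*} = \frac{4-d}{B},
\]

with B a positive coefficient to be worked out at second order.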

The only thing to then realize is: what are we perturbing in? Because the whole idea of perturbation theory is that you should have a small parameter. And if we are perturbing in u and then basing our results on what is happening here, the location of this fixed point had better be small-- close to the original one around which I am perturbing. So what do I have to make small?

4 minus d. So we thought we were making a perturbation in u, but in order to have a small quantity, the only thing that we can do is to stay very close to four dimensions and make a perturbative expansion around four dimensions.