Lecture 8: The Scaling Hypothesis Part 3


Description: In this lecture, Prof. Kardar continues his discussion of The Scaling Hypothesis, including the Gaussian Model (Direct Solution) and the Gaussian Model (Renormalization Group).

Instructor: Prof. Mehran Kardar


PROFESSOR: Hey. Let's start. So a few weeks ago we started with writing a partition function for a statistical field that was going to capture the behavior of a variety of systems undergoing critical phase transitions. And this was obtained by integrating, over configurations of this statistical field, a weight that we wrote down on the basis of locality.

And terms that were consistent with that were of the form m squared, m to the fourth, let's say m to the sixth, and various gradient-type terms. And in principle, allowing for a symmetry-breaking field that was of the form h dot m. And again, we always emphasized that in writing these statistical field theories, we have to do averaging.

We have to get rid of a lot of short wavelength fluctuations. And essentially, the field m of x, although I write it as a continuum, has an implicit short scale below which it does not fluctuate. OK, so we tried to evaluate this by saddle point, and we didn't succeed. So we proceeded phenomenologically and tried to describe things on the basis of scaling theory. Ultimately, there is this renormalization group procedure that we would like to apply to something like this.
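For reference, a minimal sketch of the weight being described, in the notation used later in the lecture (t, u, K, L for the coefficients; the m to the sixth coefficient is left unnamed here):

$$Z = \int \mathcal{D}\vec{m}(\mathbf{x})\, \exp\left\{-\int d^d\mathbf{x}\left[\frac{t}{2}\,m^2 + u\, m^4 + \cdots + \frac{K}{2}(\nabla \vec{m})^2 + \frac{L}{2}(\nabla^2 \vec{m})^2 + \cdots - \vec{h}\cdot\vec{m}\right]\right\}$$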

Now, there is a part of this that is actually pretty easy to solve. And that's when we ignore anything that is higher than second order in m. Because once we ignore them, we have essentially a generalized Gaussian integral. We can do Gaussian integrals.

So what we are going to do in this lecture is focus on understanding a lot about the behavior of the Gaussian version of the theory. Which is certainly a diminished version, because it doesn't have lots of essential things. And then gradually we will put back all of those things that we have not considered at the Gaussian level. In particular, we'll try to deal with them with a version of perturbation theory. We'll see that standard perturbation theory has some limitations that we will eventually resolve by using this renormalization procedure.

OK. So what happens if I do that? Why do I say that the theory is now solvable? And the key to that is, of course, to go into Fourier representation. Because the theory that I wrote down has this inherent translational symmetry, Fourier representation decouples the various m's that are currently connected to their neighborhood by these gradients and higher-order terms.

So let's introduce an m of q, which is the Fourier transform of m of x. And these are all vectors. And I should really use a different symbol, such as m with a tilde, to indicate the Fourier components of this field m of x. But since in the context of renormalization group we had defined a coarse grained field that was m tilde, I don't want to do that.

I hope that the argument of the function is a sufficient indicator of whether we are in real space or in momentum space. Initially, I'll try to put a tail on the m to indicate that I'm doing Fourier space, but I suspect that very soon I'll forget about the tail. So keep that in mind. OK. m of q.

So if I go back and write what this m of x is, it is an integral of d^d q over (2 pi) to the d, e to the minus iq dot x, times m of q. Now, I also want at some stage, since it would be cleaner to have this weight in terms of a product over q's, to remind you that this could have been obtained, if I hadn't gone to the continuum version-- if I had a finite system-- from a sum over q.

And the sum over q would be basically over q values that are separated by multiples of 2 pi over the size of the system, of e to the minus iq dot x times this m with the q's that are now discretized. But let's remember that the density of states has a factor of 1 over V. So if I use this definition, I really should put the 1 over V here when I go to the discrete version.

And I emphasize this because previously, we had done Fourier decomposition where I had used the square root of v as a normalization. It really doesn't matter which normalization you use at the end as long as you are consistent. We'll see the advantages of this normalization shortly.
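Spelled out, the convention being used here is, roughly:

$$\vec{m}(\mathbf{x}) = \int \frac{d^d\mathbf{q}}{(2\pi)^d}\, e^{-i\mathbf{q}\cdot\mathbf{x}}\, \vec{m}(\mathbf{q}) = \frac{1}{V}\sum_{\mathbf{q}} e^{-i\mathbf{q}\cdot\mathbf{x}}\, \vec{m}_{\mathbf{q}}, \qquad \vec{m}(\mathbf{q}) = \int d^d\mathbf{x}\, e^{i\mathbf{q}\cdot\mathbf{x}}\, \vec{m}(\mathbf{x})$$

with the overall sign of the exponent being a matter of convention, as discussed next.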

AUDIENCE: Is there any particular reason for using the different sign in the exponential?

PROFESSOR: Actually, no. I'm not sure even whether I used iqx here or minus iqx there. It's just a matter of which one you want to stick with consistently. At the end of the day, the phase will not be that important. So even if we mix up one form and the other, it doesn't make any difference.

So if I do that, then again, to be more precise, I have to think about what to do with gradients. Gradients, I can imagine, are the limit of something like m at x plus a, minus m at x, divided by a, if this is a gradient in the x direction. And I have to take the limit as a goes to 0.

So when I'm thinking about this kind of functional integral, keeping in mind that I have a shortest length scale, maybe one way to do it is to imagine that I discretize my system into a lattice with spacing of size a. And then I have a variable on each site, and then I integrate over every one of them, subject to this replacement for the gradient.

Again, what you do precisely does not matter here. If you remember, in the first lecture, when we were thinking about deformations of a lattice system and using these kinds of couplings between springs connecting nearest neighbors, what happened was that when I Fourier transformed, I had things like cosines. And then when I expanded the cosine close to q equals 0, I generated a series that had q squared, q to the fourth, et cetera.
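As a reminder of how that works, here is a sketch: the discrete difference above, in Fourier space, contributes a factor that expands into even powers of q:

$$\left|\frac{e^{-i q_\alpha a} - 1}{a}\right|^2 = \frac{2}{a^2}\left(1 - \cos q_\alpha a\right) = q_\alpha^2 - \frac{a^2 q_\alpha^4}{12} + \cdots$$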

So essentially, any discretized version corresponds to an expansion like this, with higher and higher powers of q. So at the end of the day, when you go through this process, you find that you can write the partition function, after the change of variables from m of x to m of q, as a whole bunch of integrals over different q's.

So, essentially you would have-- actually, maybe I will explicitly put the product over q outside, to emphasize that essentially, for each q, I have to do independent integrals. Of course, for each q mode, since I've gone to this representation of a vector that is n-dimensional, I have to do n integrals over m tilde of q.

And if I had chosen the square root of V type of normalization, the Jacobian of the transformation from here to here would have been 1. Because it's kind of a symmetric way of writing things. Because I chose this way of doing things, I will have a factor of V to the n over 2 in the denominator here. But again, it's just being pedantic, because at the end of the day, we don't care about these factors.

We are interested in things like the singular part of the partition function as it depends on these coordinates. This really just gives you an overall constant. Of course, how many of these constants you have would depend on basically how you have discretized the problem. But it is a constant independent of t and h, not something that we have to worry about.

Now what happens to these Gaussian factors? Essentially, I have put the product over q outside. So when I transform, this integral over x of m squared goes over to an integral over q of m of q squared, which then I can write as a product over those contributions. And what you will get is t, plus, from here, a K q squared, plus an L q to the fourth, and all the higher-order terms that I have included.

Multiplying this n-component vector m of q squared. Again, reminding you, this means m of q dotted with m of minus q, which is the same thing as m star of q, if you go through these procedures over here. There is a factor of 2 in the denominator. And this factor of 1 over V actually will come up over here.

So previously, when I had used the normalization square root of V, I didn't have this factor of 1 over V. Now that I have put it there, I will have that factor. Yes?
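Putting the pieces together, the Gaussian weight being described is, schematically:

$$\beta\mathcal{H} = \frac{1}{V}\sum_{\mathbf{q}} \frac{t + K q^2 + L q^4 + \cdots}{2}\, |\vec{m}(\mathbf{q})|^2 - \vec{h}\cdot\vec{m}(\mathbf{q}=0), \qquad |\vec{m}(\mathbf{q})|^2 = \vec{m}(\mathbf{q})\cdot\vec{m}(-\mathbf{q})$$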

AUDIENCE: m of minus q is m star of q only if it is a real field, right? If m is real.

PROFESSOR: Yes. And we are dealing with a field m of x that is real.

AUDIENCE: And in the case of superfluidity?

PROFESSOR: In the case of superfluidity? So let's see. So we would have a psi of q, which is the integral d^d x, e to the iq dot x, psi of x. If I complex conjugate this, I will get psi star of q as the integral d^d x, e to the minus iq dot x, psi star of x. So what you are saying is that in the case where psi of x is a complex number-- I have psi1 plus i psi2-- here I would have psi1 minus i psi2.

So here I would have to make the statement that the real part and the imaginary part, when you Fourier transform, come with an additional minus. But let's remember that the thing we are interested in is psi1 squared plus psi2 squared. So ultimately that minus sign does not make any difference.

But it's good to sort of think of all of these issues. And in particular, we are used to thinking of Gaussians, where I would have a scalar and then I would have x squared. When I have this complex number and I have psi of q, psi of minus q, then I have a real part squared plus an imaginary part squared.

And you have to think about whether or not you have changed the number of degrees of freedom. If you basically integrate over all q's, you may have problems. You may at some point have to recognize that psi of q and psi star of minus q are the same thing. Maybe you have to integrate over just the positive values.

But then at each q you will have two different variables, which are the real part and the imaginary part. So you have to think about all of those doublings and halvings that are involved in this statement. And in the notes, I have a writeup about that, so that you can go and precisely check where the factors of one half and two go.

But ultimately, it looks as if you're dealing with a simple scalar quantity. So I did not give you that detail explicitly, but you can go and check this important issue in the notes. The other term that we have: one advantage of this normalization is that h multiplies the integral of m of x, which is clearly this m with a tail at q equals 0.

So that's simply h dotted with this m tilde at q equals 0. Yes?

AUDIENCE: This is assuming a uniform field?

PROFESSOR: Yes, that's right. So we are thinking about the physics problem where we add a uniform field. If for some physical reason you are interested in a position-dependent field, then this would be a sum of h of q dotted with m of minus q. Actually, one reason ultimately to choose this normalization is that clearly what appears here is a sum over q. If I go over to my integral over q, then the factor of 1 over V disappears.

So that's one reason-- since mostly after this, going through the details, we'll be dealing with the continuum version-- that I prefer this normalization. And we can now do the Gaussian integrals. Basically, there's an overall factor of 1 over V to the n over 2 for each q mode. Then each one of these Gaussian integrals will give me a factor of the square root of 2 pi times the variance.

So I will get 2 pi. The variance is V divided by t plus K q squared plus L q to the fourth, and so forth. Square root, but there are n components, so the whole thing is raised to the power n over 2. And then the term that corresponds to q equals 0 does not have any of the q-dependent part.

So it will give a contribution even for q equals 0 that is like this. But you also have a term that shifts the center of integration away from m equals 0, because of the presence of the field. So you will get a term that is the exponential of, essentially-- completing the square-- V divided by 2t, times h squared.

Now, clearly the thing that I'm interested in is log of Z as a function of t and h. I'm interested in the t and h dependence. So there is a bunch of things that are constants that I don't really care about. And then there is, from here, minus n over 2, sum over q, log of t plus K q squared, and so forth. And then plus, here, I have V h squared over 2t.

So I can define something that is like the free energy: minus log of Z divided by the volume. And you can see that once I replace this sum over q with an integral, I will get a factor of volume that cancels. So there's some overall constant. And then I have plus n over 2, the integral of d^d q divided by (2 pi) to the d, log of t plus K q squared, and so forth. Minus h squared divided by 2t.
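Collecting this, a sketch of the free energy density that has just been computed:

$$f(t,h) = -\frac{\ln Z}{V} = \text{const} + \frac{n}{2}\int \frac{d^d\mathbf{q}}{(2\pi)^d}\, \ln\!\left(t + K q^2 + L q^4 + \cdots\right) - \frac{h^2}{2t}$$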

Now, again, the question is what's the range of q's that I have to integrate over, given that I'm making things that are coarse grained. Now, if I were to really discretize my system and, say, put it on a lattice, then the allowed values of q would live in the Brillouin zone. The Brillouin zone, say, in the different directions of q, would be centered at 0 and extend out to plus or minus pi over a. Yes?

AUDIENCE: The V would disappear, right?

PROFESSOR: The V would disappear because I divided by it. So in principle, if I had done the discretization on a cubic lattice, I would be integrating over q confined to a cube like this. But maybe I chose some other lattice, like a diamond lattice, et cetera. Then the shape of this thing would change.

But what's the meaning of doing the whole thing on a lattice anyway? The thing that I want to do is to make sure that I have done some averaging in order to remove short wavelength fluctuations. So a much more natural way to do that averaging and removing of short wavelength fluctuations is to say that my field has only Fourier components that go from 0 to some maximum value lambda, which is the inverse of some shortest wavelength.

And if you are worried about the difference in integration between doing things on this nice sphere that has nice symmetry and doing it on a cube, then the difference is essentially the bit of integration that you would have to do over here. But the function that you are integrating has no singularities for large values of q. You are interested in the singularities of the function when t goes to 0.

And then the log has singularities when its argument goes to 0. So I should be interested, as far as singularities are concerned, only in the vicinity of this point anyway. What I do out there, whether I replace the sphere with the cube, et cetera, will add some other non-singular term over here, which I don't really care about.

Actually, if I do that, these non-singular terms here could actually be functions of t. But they would be perfectly regular functions of t. Like a constant, plus alpha t, plus beta t squared, et cetera, that have no singularities. So if I'm interested in singularities, I am going to be focused on that.

Now actually, we encountered this integral before, when we were looking at corrections to the saddle-point approximation. And if you remember, what we did then was to take, let's say, the heat capacity C at h equals 0, by taking two derivatives of this free energy with respect to t. And then we ended up with an integral. There's a minus sign here. n over 2, integral d^d q over (2 pi) to the d.

Taking two derivatives of the log: the first derivative will give me 1 over the argument. The second derivative will give me minus 1 over the argument squared, which takes care of the minus sign. Now, this is the kind of integral, after I have focused on the singular part, that I can evaluate by integrating over a sphere.

Now, when I integrate over a sphere, I may be concerned about what's going on at small values of q. At small values of q, as long as t is around, I have no problem. When t goes to 0, I will have to worry about the singularity that comes from 1 over K q squared, et cetera.

So that's really the singularity that I'm interested in. Exactly what happens at large q, I'm not really all that interested in. And in particular, what I can do is rescale things. I can call K q squared over t, x squared. So I can essentially make that change of variables over there. So whenever I see a factor of q, I replace it with t over K to the 1/2, times x.

What happens here? I have, first of all, n over 2. I have 1 over (2 pi) to the d. Writing this in terms of spherical symmetry, I will have the solid angle in d dimensions. And then I have q to the d minus 1, dq. Every time I have a factor of q, I replace it with this. So I would have a t over K to the power of d over 2. And then I have my integral that becomes dx, x to the d minus 1, over 1 plus x squared plus potentially higher-order things like this.

Now, the upper cutoff for x is in fact the square root of K over t, times lambda. And we are interested in the limit where t goes to 0. So that upper limit is essentially going to infinity. Now, whether or not this integral exists, if I am allowed to ignore the higher-order terms and focus on the first term, really depends on whether d minus 4 is positive or negative.

And in particular, on whether I am allowed to get rid of all those higher-order terms. And basically, the argument for that is that the things that go with x to the fourth, et cetera, carry additional factors of t-- and hopefully they go away as t goes to 0-- and will give me an integral like this. This will exist only if I am in dimensions d less than 4. Yes?

AUDIENCE: Are you missing the factors of t that come with the denominator?

PROFESSOR: Yes. There is a factor of 1 over t here. So I have to pull out the factor of t. Write this as t times 1 plus K q squared over t, et cetera. So there is a factor of 1 over t.

AUDIENCE: t squared.

PROFESSOR: And that's a factor of t squared, because there are two powers. So if I'm in dimensions d less than 4, what I can write is that this C singular, as t goes to 0-- the leading behavior is that this integral goes to a constant. So as we discussed, after all of the mistakes that I made, there will be some overall coefficient A. The power of t will be d over 2, minus 2.

d over 2 came from the integrations. 1 over t squared came from the denominator. And then if I were to expand all of these other terms that we've ignored-- higher powers of t-- I would get various series that will correct this. But the leading t dependence in dimensions less than 4 is this thing that we had seen previously. Now I can take this, and you see that in dimensions d less than 4, this is a singular term that is divergent.
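A compact sketch of the calculation just described, with the substitution q = (t/K)^{1/2} x:

$$C_{\text{sing}} \propto \frac{n}{2}\int_0^{\Lambda} \frac{d^d\mathbf{q}}{(2\pi)^d}\, \frac{1}{(t + Kq^2 + \cdots)^2} = \frac{n}{2}\,\frac{S_d}{(2\pi)^d}\left(\frac{t}{K}\right)^{d/2} \frac{1}{t^2} \int_0^{\sqrt{K/t}\,\Lambda} \frac{dx\, x^{d-1}}{(1+x^2+\cdots)^2} \sim A\, t^{d/2-2}$$

for d less than 4, where the x integral converges to a constant as t goes to 0.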

If I were to ask what kind of free energy gave rise to this, I would say that the free energy must have had some singular term that was proportional to t to the d over 2, so that when I took two derivatives, I got something like this. Of course, the free energy could also have had a term that was linear in t, and I wouldn't have seen it. So there is a singular part.

Essentially, if I were to do that integral in dimensions less than four, I would get a leading singularity that is like this. I would get additional terms that are analytic-- constant, t, t squared, et cetera-- and singular terms that are subleading to this one. And then, of course, I have the term that is minus h squared over 2t, if I were to include this here.

So why don't I write the answer as t to the d over 2, times some constant A, plus B times h divided by t to the 1/2 plus d over 4, the whole thing squared. What I did was essentially divide and multiply by appropriate powers of t, and put the whole thing in the form of h divided by t to something, squared. Why did I do that? It's because we had previously postulated a singular form for the free energy in the scaling picture, which had t to the 2 minus alpha in front, times a function of h over t to the delta.
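That is, a sketch of the rearrangement being described:

$$f_{\text{sing}}(t,h) = t^{d/2}\left[A + B\left(\frac{h}{t^{1/2+d/4}}\right)^2 + \cdots\right] = t^{2-\alpha}\, g_f\!\left(\frac{h}{t^{\Delta}}\right)$$

since h squared over t is the same thing as t to the d over 2, times (h over t to the 1/2 plus d over 4) squared.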

And all I wanted to emphasize is that in this picture, 2 minus alpha is d over 2. And the thing that we called the gap exponent is 1/2 plus d over 4. Of course, I can't use this theory as a description of the full phase transition. And the reason for that is that the Gaussian theory exists and is well-defined only as long as t is positive.

Because once t becomes negative, the weight essentially becomes ill-defined. Because if I look at the various weights that I have here, certainly the weight for q equals 0 is proportional to e to the minus t over 2V, times m of 0 squared. If t changes sign, rather than having a Gaussian, I have essentially a weight that is maximized as m of 0 goes to infinity.

So clearly, again, by issues of stability, the theory for t negative does not describe a stable theory. And that's why m to the fourth and all of those terms will be necessary to describe that side of the phase transition. So if you like, this is a kind of description of a singularity that exists only in this half of the space. Kind of reminiscent of coming from the disordered side, but I don't want to give it more reality than that.

It's a mathematical construct. If we want to venture to make the connection to the actual phase transition, we have to include the m to the fourth. Now, the real reason to go and work through this Gaussian theory is that, since it is solvable, we can use it as a toy model to apply the various steps of renormalization group that we had outlined last lecture. And once we understand the steps of renormalization group for this theory, it gives us an anchoring point when we deal with the full theory that has m to the fourth, et cetera-- how to start with the renormalization approach to the theory that we understand, and go on to the more complicated one.

So essentially, as I said, it's not really a phase transition that can be described by this theory. It's a singularity. But its value is that it is a fully solvable anchoring point for the full theory that we are describing. So what we want to do is an RG for the Gaussian model.

So what is the procedure? We have a theory best described in the space of variables q, the Fourier variables. Where I have modes that exist between 0-- very long wavelength-- and lambda, which is the inverse of the shortest wavelength that I'm allowing. And so basically, I have a bunch of modes m of q that are defined in this range of q's.

The first step of RG was to coarse grain. The idea of coarse graining was to change the scale over which you are doing the averaging from some a to ba. So average out fluctuations between a and ba. Once I do that, at the end of the day I have fluctuations whose minimum wavelength has gone from a to ba.

So that means that q max, after I do this procedure, is the previous q max that I had, divided by a factor of b. So basically, at the end of the day I want to have, after coarse graining, variables that only exist up to lambda over b. Whereas previously, they existed up to lambda.

So this is very easy at this level. All I need to do is to split this m tilde of q into two sets. I will call it sigma if q is greater than lambda over b. That is, for everybody that is out here, their q I will call q greater. For everybody that is in here, their q I will call q lesser.

And all the modes that were here, I will give different names. The ones out here I will call sigma. The ones in here, with q less than lambda over b, will get called m tilde. So I just renamed my variables. So essentially, right here I had integration over all of the modes. I just renamed them: the ones that are inside, q lesser, are m tilde; the ones that are outside, q greater, are sigma.

So what do I have to do for my Gaussian theory? Let's write it, rather than in the form that was discrete, in terms of the continuum. I have to integrate over all configurations of these Fourier modes. So I have these m tilde of q's. And the weight that I have to assign to them, when I look at the continuum, is the exponential of minus the integral, d^d q over (2 pi) to the d, of t plus K q squared and so forth, over 2, times m tilde of q squared.

And then I had the one term that was h m of 0. What I have done is to simply rewrite this as two sets of integrations-- over sigma of q greater, and over m tilde of q lesser. And actually, you can see that the modes here and the modes here don't talk to each other.

And that's really the advantage of doing the Gaussian theory. It is the thing that allowed me to solve the problem here, and also to do the coarse graining there. Once we include things like m to the fourth, then I will have couplings between modes that go across the two sets. And then the problem becomes difficult.

But now that I don't have that, I can actually separately write the integral as two parts, and this is for q lesser. For each one of them, I essentially have the same weight. The integral over q greater goes between lambda over b and lambda. The integral over m tilde of q lesser is essentially the same thing.

Exponential of minus the integral from 0 to lambda over b, d^d q lesser over (2 pi) to the d, of t plus K q lesser squared and so forth, over 2, times m tilde of q lesser squared. And then I have the additional term, which sits at q equals 0, which is part of the modes that are assigned to q lesser.
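Schematically, the split that has just been made is:

$$Z = \int \mathcal{D}\vec\sigma(\mathbf{q}_>)\, e^{-\int_{\Lambda/b}^{\Lambda} \frac{d^d\mathbf{q}_>}{(2\pi)^d}\, \frac{t+Kq_>^2+\cdots}{2}\,|\vec\sigma(\mathbf{q}_>)|^2} \int \mathcal{D}\vec{\tilde m}(\mathbf{q}_<)\, e^{-\int_{0}^{\Lambda/b} \frac{d^d\mathbf{q}_<}{(2\pi)^d}\, \frac{t+Kq_<^2+\cdots}{2}\,|\vec{\tilde m}(\mathbf{q}_<)|^2 + \vec h\cdot\vec{\tilde m}(0)}$$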

OK? Fine. Nothing particularly profound here. In fact, it's very simple. It's just renaming two sets of modes. And the averaging that I have to do, getting rid of the fluctuations at short wavelengths, is not at all tricky here. Because this is just a bunch of integrations of the type that I had to do over here, but only over things that are sitting close to the edge of this sphere.

So essentially, the integration over these modes is doing this integral over here, from lambda over b to lambda. And none of the singularities has anything to do with the range of integration from lambda over b to lambda. So the result of doing all of that is simply just a constant-- well, not quite a constant.

It's a function of t that is completely non-singular, and has a nice expansion in powers of t. The kind of thing I have been calling a non-singular function. The point is that even if you take sufficiently high derivatives of this, the t dependence stays regular. So all of the interesting things are really in this m tilde of q lesser.

And really, the eventual process of renormalization in this picture is something like this: all of the singularities are sitting at the center of this kind of orange-shaped entity. And rather than biting the whole thing, you cut it away slowly from the edge, approaching where all of the exciting things are, at the center.

For this problem of the Gaussian, it turns out to be trivial to do so. But for the more general problem, it can be involved, although the procedure is the same. We are interested in what's happening here at the center, but we gradually peel off the things that we know don't cause any difficulty for the problem. So then I have to multiply by this, and I have found, in some sense, a probability for configurations of the coarse grained system, which is simply given by this.

But then renormalization group has two other steps. The second step was to say, well, in real space, as we said, the picture that is represented by these coarse grained variables is grainy. If my pixels were previously one by one by one, now my pixels are b by b by b. So I can make my picture look like it has the same resolution as my initial picture if I rescale all of the lengths by a factor of b.

In momentum representation, it corresponds to rescaling all of the q's by a factor of b. And clearly, what that serves to achieve is that if I replace q lesser with q prime over b, then the range of q prime goes back to 0 to lambda. So by doing this transformation, I can ensure that the upper cutoff is, in fact, lambda again.

Now, there was another thing: in real space we said that we defined m prime to be m tilde rescaled by some factor zeta. I had to do a change of the contrast. I have to do the same change of contrast here, except that the variable I was dealing with there was in the x coordinates, and what I want to do here is in the q coordinates. So I will define m with a tail, prime, of q prime to be m tilde of q, divided by a factor of z. The difference between the z and the zeta-- Fourier space versus real space-- is just the fact that in going from one to the other, you have to do integrations over space. So dimensionally, there is a factor of b to the d difference between the rescaling of this quantity and that quantity. If you want to relate one to the other, zeta equals b to the minus d, times z.

But since we will be doing everything in Fourier space, we will just use this factor z consistently. So if I do that, what do I find? I find that Z of t and h is the exponential of some non-singular dependence, and then I have to integrate over these new variables, m prime of q prime. Yes?

AUDIENCE: In your real space renormalization, your m tilde is a function of x. But in your Fourier space representation, your m tilde is a function of q prime?

PROFESSOR: I guess I could have written here x prime, also. It doesn't really matter. So what do we have here? We have the exponential of minus the integral. The integration for q prime now goes back from 0 to lambda. I have d^d q prime, divided by (2 pi) to the d. Now, you see that every time I have a q-- q lesser, in fact-- I have to go to q prime by introducing a factor of b inverse.

So there will be a total factor of b to the minus d that comes from this integration. That will multiply t. It will also multiply K by b to the minus d, but then here I have two q's because of the q squared there. So doing the same thing, I will get b to the minus d minus 2-- d plus 2, if you like.

And then the next one would be L, b to the minus d minus 4. And you can see that as I have higher and higher derivatives, that is, higher powers of q, I get higher and higher negative powers of b. But then I have m tilde that I want to replace with m prime. And that process will give me a factor of z squared. And then I have m prime of q prime squared.

There is no integration for this term; it's just one mode. But each mode I have rescaled by a factor of z. So I will have a term that is z, h dotted with m prime of 0. So what we see is that we have managed to make the Gaussian integration over here precisely the same form as the Gaussian integration that I started with.

So I can conclude that this function of t and h that I am interested in has a part that is non-singular. But its singular part is the same as the same Z calculated for a bunch of new parameters. And in particular, the new t is b to the minus d, z squared, times the old t. The new K is b to the minus d minus 2, z squared, times K.

The new L would be b to the minus d minus 4, z squared, times L, and so forth. And the new h is z times h. Yes?

AUDIENCE: There should be q prime squared and q prime to the fourth?

PROFESSOR: Yes. Yes. This is my day to do a lot of algebraic errors. OK. So what is the change in parameters? So I wrote it over there. So this kind of captures the very simplest type of renormalization.
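Collected together, the Gaussian recursion relations just derived read:

$$t' = b^{-d}\, z^2\, t, \qquad K' = b^{-d-2}\, z^2\, K, \qquad L' = b^{-d-4}\, z^2\, L, \qquad h' = z\, h$$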

Actually, all I did was a scaling analysis. If I were to change positions by a factor of b, and change the magnitude of my field m by a factor z, or zeta, this is the kind of result that I would get. Now, how can we make this capture the kind of picture that we have over here in the language of renormalization? We want to be able to change two parameters and reach a fixed point.

So we know from that kind of picture that t and h have to go to 0. They are the variables that determine essentially whether you are at this self-similar point. So if I set t and h to 0, the next most important term that comes into play is K prime, which is some function of K. And if I want to be at the fixed point, I may want to choose the factor z such that K prime is the same as K.

So choose z such that K prime is K. And that tells me immediately that z should be b to the power of 1 plus d over 2. If I choose that particular form of z, then what do I get? I get that t prime is z squared, b to the minus d, times t. So when I do that, I will get b squared t. I get that h prime is just z times h. So it is b to the 1 plus d over 2, times h.

These are both directions where, as b becomes larger than 1, t prime becomes larger than t and h prime becomes larger than h. These are relevant directions. I would associate with them eigenvalues y t equals 2, and y h equals 1 plus d over 2.

So if I go according to the scaling construction that we had before, f singular of t and h is t to the power d over y t, times some scaling function of h over t to the power y h over y t. This is what we had established before. With these values, I will get t to the d over 2, times some scaling function of h over t to the power of 1/2 plus d over 4.
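As a check, the chain of substitutions is:

$$y_t = 2, \quad y_h = 1 + \frac{d}{2} \;\;\Longrightarrow\;\; f_{\text{sing}}(t,h) = t^{\,d/y_t}\, g_f\!\left(\frac{h}{t^{\,y_h/y_t}}\right) = t^{\,d/2}\, g_f\!\left(\frac{h}{t^{\,1/2+d/4}}\right)$$

which reproduces the exponents 2 minus alpha equals d over 2, and the gap exponent 1/2 plus d over 4, found by the direct solution.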

We can immediately compare this expression and this expression that we have over here. Yes?

AUDIENCE: Wait. What's the reason to choose K as the parameter that maps onto itself, and not L?

PROFESSOR: OK. I'll come to that. So having gone this far, let's see what L is doing. So if I put this choice in here, you can see that clearly L has a b to the minus 2 compared to K. So with the choice that we established, L prime is b to the minus 2, times L. If I had a higher derivative, it would be b to a minus larger number, et cetera.

So L, and all of these other things, are irrelevant variables. Essentially, under rescaling-- under looking at the system at larger and larger scales-- they will go to 0. And I do get a system that has the same topological structure as what I had established here. Because I have to tune two parameters in order to reach the critical point.

Let's say I had chosen something else. Suppose I had chosen z such that L prime equals L. I could do that. Then all of the terms with higher factors of q in this expansion would be irrelevant. But then I would have K, t, and h all as relevant variables.

So yeah, it could be that there is some physics. I mean, certainly mathematically I can ask the system what happens if k goes to 0. I kind of ignore the k dependencies that I have in all of these expressions, but there are going to be singular dependencies on k.

So if there is indeed some experimental system in which you have to tune, in addition temperature, something that has to do with the way that the spins or degrees of freedom are coupled to each other, and that coupling changes sign from being positive to being negative, you go from one type of behavior to another type of behavior, maybe this would be a good thing for it. But you can see the kind of structure you would get if k has to go to 0, you go from a structure where things want to be in the same direction to things that want to be anti-parallel.

And then clearly you need higher-order terms to stabilize things, so that your singularity does not go all the way to 0 wavelength, et cetera. So one can actually come up with physical systems that kind of resemble that, where there is some length scale that is also selected. But for the very simplest thing that we are doing, this is what is going on.

But you could have also asked the other question. So clearly we understand what happens if you choose z so that some term is fixed: everything above it is relevant, everything below it is irrelevant. But why not choose z such that t is fixed? So z would be b to the d over 2, and then t prime equals t.

If I choose that, then clearly the coupling K will be irrelevant. So this is actually a reasonable fixed point. It's a fixed point that corresponds to a system where K has gone to 0, which means that the different points don't talk to each other. Remember, when we were discussing the behavior of correlation lengths at fixed points, there were two possibilities-- either the correlation length was infinite, or it was 0.

So if I choose this, then K prime will eventually go to 0. I go towards a system in which the degrees of freedom are completely decoupled from each other. Perfectly well-behaved. A fixed point that corresponds to 0 correlation length. And you can see that, if I go through this formula that I told you over here, zeta in real space would be b to the minus d over 2.
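A quick sketch of why this is just the central limit theorem:

$$z = b^{d/2} \;\Longrightarrow\; K' = b^{-d-2}\, b^{d}\, K = b^{-2} K \to 0, \qquad \zeta = z\, b^{-d} = b^{-d/2}$$

so averaging the b^d independent variables in a block of size b produces fluctuations down by a factor of (b^d)^{-1/2}, exactly the square root of the volume mentioned next.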

And what that means is that if you average independent variables over a region of size b, the scale of fluctuations, because of the central limit theorem, goes down as the square root of the volume. So that's how it scales. So essentially, what's at the end of the story? It's a behavior in which there is only one coefficient left-- forget about h. The eventual weight is just t over 2, m squared, at each point.

That's the central limit theorem. So through a different route, we have rediscovered, if you like, the central limit theorem. Because if you average lots of uncorrelated variables, you will generate Gaussian weights. So what we are really after in this language is how to generalize the central limit theorem-- how to find the analog of a Gaussian for degrees of freedom that are not uncorrelated, but talk to their neighborhood.

So the kind of field theory that we are after is this generalization of the central limit theorem to the types of field theories that have some form of locality.

AUDIENCE: Question.

PROFESSOR: Yes.

AUDIENCE: So depending on how you define the renormalization, you're finding different z's?

PROFESSOR: Yes.

AUDIENCE: We can tune how many parameters we want to be able to--

PROFESSOR: Exactly. Yes. And that's where the physics comes into play. Mathematically, there's a whole set of different fixed points that you can construct for choosing different z's. You have to decide which one of them corresponds to the physical problem that you are working on.

AUDIENCE: Yes. So then the fixed point stops being just defined by the nature of the system, but also depends on how we define renormalization? On mathematical descriptions and--

PROFESSOR: If by how we define renormalization you mean how we choose z, yes, I agree with you. But again, you have this possibility of looking at the system at different scales. And we have been very agnostic about what that system is. And so you have many ways of doing things. Ultimately, you need some reality to come and choose among these different ways. Yes?

AUDIENCE: So you do want to keep K a relevant variable in our problem, right?

PROFESSOR: No. I make k to be a fixed variable.

AUDIENCE: Oh, exactly. But why the equality exactly? Why don't you add a small amount-- choose z so that K prime is b to some small epsilon, times K? Plus or minus, doesn't matter. Does a small exponent not change anything? Do all the other variables like L stay irrelevant?

PROFESSOR: OK. So the point is that it is b raised to some power. So here I had K prime equals K. And you say, why not K times b to the epsilon?

AUDIENCE: Yeah, exactly.

PROFESSOR: Now, the thing that I'm interested in is what happens at larger and larger scales. So in principle, I should be able to make b as large as I want. So I don't have the freedom that you mention. And you are right in the sense of asking, OK, what does it mean for this ratio to be larger or smaller than one?

But the point is that once you have selected some parameter in your system-- L or whatever you have, some value-- you can, by playing around with this, choose a value of b for any epsilon such that you reach that limit. So by doing this, you have, in a sense, defined a length scale. The length scale would depend on epsilon, and you would have different behaviors depending on whether you are shorter than that length scale or larger than that length scale.

So this has to be done precisely because of this freedom of making b larger, and so on. Now, if you are dealing with a finite system and you can't make your b much larger than something or whatever, then you're perfectly right. Yes?

AUDIENCE: Physically, z or zeta should be whatever quantity is needed to actually make it look exactly the same-- whatever that comes out to be.

PROFESSOR: Exactly, yes. That's right.

AUDIENCE: And then we know, because we already know that we have two relevant variables, that z has to look this way for a system that has two relevant variables.

PROFESSOR: For the Gaussian one, right.

AUDIENCE: Yeah. But then if we had a different kind of system, then actually, just going from the physical perspective, we would need a different z to make things look the same. And that would give us a different number of variables here.

PROFESSOR: Yes. That's right. Now, the point is that practically, in all cases, we are either dealing with a phase that has 0 correlation length-- and then this Gaussian behavior and central limit theorem is what we are dealing with, and the averaging goes with the square root of the volume-- or we have something that is pretty close to this fixed point that we have now discovered, which is just the gradient squared.

And that has its own scaling according to these powers that I have found here, and I will explain that more fully later. It turns out, at the end of the day, that when we look at real phase transitions, all of these exponents will change, but not by too much. So this Gaussian fixed point is actually, in some sense, rather close to where we want to end up. So that's why it's also an important anchoring point, as I just mentioned.

Again, I said that essentially what we did was take the weight that we had originally, and we did a rescaling. So basically, we replace x by b x prime. If I had started in real space, I would have replaced m-- m after getting rid of some degrees of freedom-- with zeta m prime.

So let me just do that to the weight that I had written before. There was a beta H, which was the integral d^d x of t over 2, m squared, u m to the fourth and higher-order terms, K over 2, gradient of m squared, L over 2, Laplacian of m squared, and so forth. Just do this replacement of things. What do I get?

I get that t prime is b to the d-- whenever I see x, I replace it with b x prime-- times zeta squared, because whenever I see m, I replace it with zeta m prime. u prime would be b to the d, zeta to the fourth. K prime would be b to the d minus 2, zeta squared. L prime would be b to the d minus 4, zeta squared, and so forth.

Essentially, all I did was replace x with b times x prime, and m with zeta m prime. If I do that throughout, you can see how the various factors change. So I didn't do all of these integrations, et cetera, that I did over here. I just did the dimensional analysis, if you like. And within that dimensional analysis, now in real space, if I set K prime to be K, you can see that zeta is b to the power of 2 minus d, over 2.
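In summary, a sketch of the real-space dimensional analysis:

$$\mathbf{x} = b\,\mathbf{x}', \;\; \vec m = \zeta\, \vec m{\,}' \;\Longrightarrow\; t' = b^d \zeta^2\, t, \quad u' = b^d \zeta^4\, u, \quad K' = b^{d-2}\zeta^2\, K, \quad L' = b^{d-4}\zeta^2\, L; \qquad K' = K \;\Rightarrow\; \zeta = b^{(2-d)/2}$$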

And again, you can see that once I have fixed K, all of the things that have the same power of m but two higher derivatives pick up a factor of b to the minus 2, just as we had over here. Again, with this choice, you can check that if I put it back here, I will get b squared for t. But now let's imagine that I have a general power of m.

If I have a term that multiplies m to some power p-- with a coefficient u sub p-- then under this kind of rescaling, u p prime is b to the d, zeta to the power p, times u p. And with this choice of zeta, what do I get? I will get b to the d, and then times b to the p times 1 minus d over 2, times u p. Which I can define to be b to some power y p, times u p.

So my y p-- the dimension of something that multiplies m to some power p-- is simply p, plus d times 1 minus p over 2. And let's check some things. I have y 1. y 1 would correspond to a magnetic field, something that is proportional to m itself. And if I put p equal to 1, I will get 1 plus d over 2. And that is, indeed, the y h that we had over here.

1 plus d over 2. So this is y h. If I ask what is multiplying m squared, I put p equals 2 here. I will get 2, and then here I would get 1 minus 2 over 2, which is 0. So y 2 is 2, which is the thing that we were calling y t before. We didn't include any m cubed term in the theory; it didn't make sense to us.

But we certainly included the u that multiplied m to the fourth.

AUDIENCE: So is the p [INAUDIBLE] in the yp?

PROFESSOR: There is p, plus d times 1 minus p over 2. I just rewrote it. If I look at y 4, here p would be 4. And then I would put 1 minus 4 over 2, which is 1 minus 2, which is minus 1. So I would get 4 minus d. If I look at y 6, I would get 6 minus 2d. And so forth.
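So the general result, and the checks just made, are:

$$y_p = d + p\left(1 - \frac{d}{2}\right) = p + d\left(1 - \frac{p}{2}\right); \qquad y_1 = 1 + \frac{d}{2} = y_h, \quad y_2 = 2 = y_t, \quad y_4 = 4 - d, \quad y_6 = 6 - 2d$$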

So if I just do dimensional analysis, and I say that I start with a fixed point that corresponds to gradient of m squared with everybody else 0, and I ask: in the vicinity of this fixed point, where K is fixed and everybody else is 0, if I put in a little bit of any of these other terms, what happens? I find that certainly the h term, the term that is linear, will be relevant. The term that is m squared is relevant.

Whether or not all the other terms in the series-- like m to the fourth, m to the sixth, et cetera-- will be relevant depends on dimension. So once more we've hit this dimension of four. The term m to the fourth, which we said is crucial to giving this theory some meaning-- and there's no reason for it to be absent-- is, in fact, relevant below four dimensions. In fact, close to three dimensions, you would say that it's really the only other term that is relevant.

And you'd say, well, it's almost good enough. But almost good enough is not sufficient. If we want to describe a physical theory that has only two relevant directions, we cannot use this fixed point, because this fixed point has three relevant directions in three dimensions. We have to deal with this somehow. So what will we do?

Next is to explicitly include this m to the fourth. In fact, we will include all the other terms, also. But we will see that all the other terms, all the higher powers, are irrelevant in the same sense that all of these higher derivative terms are irrelevant. But that m to the fourth term is something that we really have to take care of. And we will do that.