Lecture 5: The Landau-Ginzburg Approach Part 4


Description: In this lecture, Prof. Kardar continues his discussion of the Landau-Ginzburg approach, including Gaussian integrals, fluctuation corrections to the saddle point, and the Ginzburg criterion.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK, let's start. So recapping what we have been doing, we said that many systems that undergo a phase transition-- so there's some material that undergoes a phase transition-- we could look at and characterize through a statistical field. By analogy with the magnetization of a magnet, we will denote it by m, which varies as a function of the position on the sample. And it's a vector. And this vector has n components.

And we said that basically we could distinguish different types of systems by the number of components n. For the case of things like liquid gas, we had a scalar density difference, which is one component. For the case of superfluid, we had the phase of a quantum mechanical wave function which, when we included the magnitude, therefore had two components. And for the case of, say, a Heisenberg ferromagnet, we had n equals 3.

We said that basically all of these systems in the vicinity of the transition point, where the field m of x is presumably fluctuating around a small quantity and the correlation lengths are large, we could describe in terms of a weight that was constructed on the basis of symmetry and a form of locality, which allowed us to express the weight in powers of m squared integrated in the vicinity of some point x.

Then the coupling between nearby points was captured through terms that involve gradients of m. And higher order derivatives are also possible. And if we want a tendency to deviate from the symmetric state, we could add a term that is h.m.

So that was the statistical weight that we assigned to configurations of this field. Now we said that when you do measurements on these kinds of systems, you will see, for example, singularities in the heat capacity. And those, in the vicinity of the phase transition, were characterized by an exponent alpha.

Now for the value of alpha, you can go and look at various systems. You find that liquid gas systems-- many different versions, carbon dioxide, et cetera-- and other systems that correspond to n equals 1, like binary mixtures such as the one that is in the first problem set, correspond to a value of alpha for the divergence that is roughly around 0.11. For the case of superfluid, we saw curves that described this lambda point. There is, again, a divergence, but the divergence is weaker than the [INAUDIBLE]. It is approximately a logarithmic divergence.

Whereas for ferromagnets, there is a cusp singularity. There's no divergence. And the singularity can be expressed in terms of a negative alpha. So there are these classes, depending on the value of this parameter n. And in our case, they are all described by this same field theory, with a different number of components n of this field m.

So we asked whether or not we could get that result. So what we did was we said, OK, let's calculate the partition function that corresponds to this system by integrating over all configurations of this field. And this is actually just the singular part, because in the process of going from whatever microscopic variables we have to these variables that describe the statistical field, we have to integrate over many microscopic configurations.

So there could be a non-singular part that emerges. But the singularities are due to the appearance of magnetization spontaneously. So they should be reflected in calculating the partition function of this component.

Now what we did was, then, to say, OK, this is a difficult thing. What I am going to do is a saddle point approximation, which really amounted to finding the most probable state. And that most probable state corresponded to m being uniform across the system, with value m bar, that potentially would be directed along the magnetic field, or, in the limit that the magnetic field goes to 0, would spontaneously select some kind of a direction.

But of course, this m bar would be 0 if you are above the transition, which in this most probable state occurs for t's that are positive at h equals 0. While for t negative, minimizing t over 2 m squared plus u m to the fourth gave us a value of square root of minus t over 4u.

Then our Z singular in the saddle point approximation, evaluated as a function of t for h equals 0, is simply related to the value of this most probable state. And we found that the answer was an exponential; because of the integration over space, with everything uniform, the exponent is proportional to volume, multiplied by a function that is either 0, if you are looking at t positive, or, for t negative, substituting that value of m bar, minus t squared over 16u.
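
In symbols, the saddle point result just described (my transcription of the spoken equations, in the notation of the lecture) is:

```latex
\bar m =
\begin{cases}
0, & t > 0,\\[2pt]
\sqrt{-t/4u}, & t < 0,
\end{cases}
\qquad
\frac{\ln Z_{\mathrm{sp}}}{V}
= -\Bigl(\frac{t}{2}\,\bar m^{2} + u\,\bar m^{4}\Bigr)
=
\begin{cases}
0, & t > 0,\\[2pt]
t^{2}/16u, & t < 0.
\end{cases}
```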

So essentially, it's a function like this. There is really no magnetization above the critical point, and you get 0. Below the critical point, what you have is this quadratic behavior in t. So if I were to take two derivatives of it, which would give me something that is proportional to the heat capacity-- so from here I would get a heat capacity, evaluated in the saddle point method as a function of t for h equals 0, which would be either 0 or 1 over 8u, taking these derivatives. So the prediction is that you have an alpha which is 0, because there is no power law dependence. And what it really reflects is that there is a discontinuity in the heat capacity.

So none of the examples that I showed you above-- the liquid gas, the superfluid, the ferromagnet-- has a discontinuous heat capacity. So this does not seem to work. On the other hand, this discontinuity is observed for superconductor transitions.

So that's the state of affairs. What we have to understand now is, first of all, why doesn't it work in general? And secondly, why does it work for superconductors? So that's the task for today. All right?

So the one thing that is certainly a glaring approximation is to replace this integration over all configurations by just the one most probable state. But we did precisely that when we were talking about the saddle point method of integration in the previous class, 8.333. So let's examine why it was legitimate to do so at that point.

So there we were evaluating essentially an integral that involved one variable. Let's call it m. And we had a large number N that was appearing in the exponent, multiplying some function psi that depends on the variable of integration.

Now the most probable value of this occurs for some particular m bar. And what we can do, without essentially doing any approximation at this point, is to make a Taylor expansion of the function around its maximum. So the function I can write as psi evaluated at this extremum. But since I am looking at an extremum, if I make a Taylor expansion, the term that is proportional to the first derivative is absent. I'm expanding around an extremum. The term that is proportional to the second derivative evaluated at m bar will go with m minus m bar squared. And in principle, there are higher and higher order terms I can put in this expansion.

Now the value at the most probable position, which is the saddle point value, is a constant. I can pull it outside. So essentially, terminating here is exactly like what I was doing over there, more or less. But then I have fluctuations around this most probable state.

So I can do the integration, let's say, in the variable delta m. I have the integral over delta m of e to the minus N over 2, psi double prime at m bar, delta m squared. And then I have higher order terms. And in principle, those higher order terms I can start expanding over here.

And I forgot the very important factor, which is that this whole exponent is proportional to N. And indeed, all of these terms over here will also be proportional to N. OK? But the first term in the series is just a Gaussian integration. And so I know that the leading correction to the saddle point comes from this factor of the square root of 2 pi over N psi double prime of m bar.

And then, in principle, there will be higher order terms. And if you keep track of how many factors of delta m are allowed-- delta m cubed is certainly not allowed because of the evenness of what I'm integrating against-- the next order term will be delta m to the fourth. Evaluated against this Gaussian, it will give you something that is of the order of 1 over N squared. Multiplied by N, you will get corrections of the order of 1 over N.

So very systematically, we could see that if I call the result of this integration I, then log of I has a term that is dominated by the most probable value of the integrand, N psi of m bar. And then there are corrections, such as this factor of one half log of N psi double prime of m bar over 2 pi, and lower order corrections of the order of 1 over N. Basically, all of these correction terms, in the limit of N being much larger than 1, you can ignore. And essentially, the first term will dominate everything.
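
As a minimal numerical sketch of this claim-- with an illustrative psi of my own choosing, not from the lecture-- the saddle point estimate N psi(m bar) + (1/2) ln[2 pi / (N |psi''(m bar)|)] approaches the exact log I as N grows:

```python
# Saddle point (Laplace) approximation check for I = \int dm exp[N psi(m)].
# psi below is an illustrative choice: maximum at m_bar = 1, psi''(m_bar) = -1.
import numpy as np
from scipy.integrate import quad

def psi(m):
    return -(m - 1.0)**2 / 2 - (m - 1.0)**4 / 4

def log_I_exact(N):
    # Integrate over a window that comfortably contains the peak at m = 1.
    val, _ = quad(lambda m: np.exp(N * psi(m)), -1.0, 3.0)
    return np.log(val)

def log_I_saddle(N):
    # N psi(m_bar) + (1/2) ln[2 pi / (N |psi''(m_bar)|)]
    return N * psi(1.0) + 0.5 * np.log(2 * np.pi / (N * 1.0))

for N in (10, 100, 1000):
    print(N, log_I_exact(N), log_I_saddle(N))
# The difference between the two shrinks as 1/N, as stated in the lecture.
```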

So what we did over there kind of looks the same. So let's repeat that for our functional integral. So I have Z. Actually, it is the singular part of the partition function, which is obtained by integrating over all functions m of x. And for the time being, let's just focus on the h equals 0 part. So I have exponential of minus integral over x of t over 2 m squared, u m to the fourth, k over 2 gradient of m squared, and so forth.

And repeat what we did over there. So what we did over here was to basically pick the most probable state and then expand around the most probable state. So going beyond just picking the contribution of the most probable state involves including these fluctuations.

So let me write my m of x to be essentially m bar, but allowing a little bit of fluctuation. And we saw that we could divide the fluctuations into a longitudinal part, phi l, along the direction e1 hat of m bar, and a transverse part, phi t, which is an n minus 1 component vector in the n minus 1 transverse directions.

And then I substitute this over here. So what do I get? Just like here, I can pull out the term that corresponds to the saddle point. In fact, I had calculated it up there. So I have exponential of minus V times the value of this weight at the saddle point.

And then I have essentially replaced the variable m with the integration over fluctuations. So now I have to integrate over the longitudinal fluctuations and the transverse fluctuations. And what I need to do is to expand this quantity up to second order. But that's exactly what we did last time, when we were looking at how the system was scattering. So we can rely on the result from last time for what the quadratic part is.

So we saw that the answer could be written as minus k over 2, integral d d x-- well, actually, let's keep it this way-- of xi l to the minus 2 phi l squared plus gradient of phi l squared.

So this is what I did. What we had to do was to replace this function. The only part that has a contribution from variation in space, and hence contributes to gradient, comes from phi. So from here, we will get a k over 2 gradient of phi l squared.

Then there is a contribution from t, and one that comes from expanding m to the fourth to quadratic order, that are proportional to phi l squared. And the coefficients of both of them we combine and write as xi l to the minus 2. And if I go back to what we had last time, our result was that k over xi l squared was either t, for t positive, or minus 2t, for t negative.

Whereas, when I expanded the transverse component, what I got above tc, for t positive, is that there is no difference between longitudinal and transverse, so I had the same result. Below, there was no cost for these Goldstone modes, and the answer was 0. But essentially, I have a similar expression to write for the transverse component.
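
In symbols, again as I transcribe the board, the two inverse correlation lengths entering the quadratic weight are:

```latex
k\,\xi_{\ell}^{-2} =
\begin{cases}
t, & t>0,\\
-2t, & t<0,
\end{cases}
\qquad
k\,\xi_{t}^{-2} =
\begin{cases}
t, & t>0,\\
0, & t<0 \quad (\text{Goldstone modes}).
\end{cases}
```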

So this part amounts to essentially what I have over here. And in principle, I can put in a whole bunch of other things that would correspond to higher order fluctuations, effects beyond the quadratic. But again, our anticipation is that, just like what is happening here, the leading correction to the saddle point will already come from the quadratic part. So let's evaluate that.

So let's continue. So this is the exponential of the saddle point free energy. And then I have to do all of these integrations over phi l and phi t. Now what I can do, and I already did this also last time around, is to introduce a Fourier expansion of phi. We said each phi of x I can write as a sum over Fourier components-- e to the i q dot x, phi tilde of q, with a root V for normalization convenience.

So I can certainly replace both phi l and phi t, just as I did last time, in terms of Fourier components. And then the integration over all configurations of phi is equivalent to integrating over all configurations of the phi tilde of q's, all the Fourier amplitudes.

But the advantage is that when we look at the Fourier amplitudes, the different q's are completely independent of each other. So this integration over here, which was not a one-dimensional integration, becomes a product of one-dimensional integrations when we go to the Fourier representation.

So now I have to integrate, for each q, either phi l of q, or the n minus 1 component phi t of q. So these are a whole bunch of one-dimensional Gaussian integrations. Because when I look at what these weights are doing, I get e to the minus k over 2, q squared plus xi l to the minus 2, phi l of q squared for the longitudinal mode, and a very similar factor, k over 2, q squared plus xi t to the minus 2, phi t of q squared, for the transverse components. I have a whole bunch of these different things.

Now we can, again, follow what we had before. The leading behavior is minus V times t m bar squared over 2 plus u m bar to the fourth. And then I have a product of Gaussian integrations. For each one of these longitudinal modes, just like here, I will get a factor of the square root of 2 pi divided by k, q squared plus xi l to the minus 2. And for each one of the transverse components, I will get 2 pi over k, q squared plus xi t to the minus 2. And there are n minus 1 of these, so I will get that factor. And then, presumably, again, there will be corrections due to higher orders that will be multiplying the whole thing.

So the quantity that we are interested in is, in fact, something like a free energy. So we take the log of Z. Let's look at the singular part. Let's divide it by volume, because we expect this to be an extensive quantity, just like the other result was proportional to N. And let's put a minus sign-- typically you have to change sign in any case-- so that the leading term then becomes this t m bar squared over 2 plus u m bar to the fourth.

And then when I take the log, this product over q will go to a sum over q. And the sum over q, in the continuum limit, I will replace by V times the integral over q divided by 2 pi to the d. So in the next step of the process, I will have a sum over q which I replace with V times the integration. But the volume will go away, and what I'm left with is the integration.

So I have the integral d d q over 2 pi to the d. And I have the log of whatever is appearing over here. So what I have there is one half the log of k q squared plus k xi l to the minus 2.

Why the 1/2? Because it's a square root; when I take it to the exponential, it becomes one half of the log. In fact, it is in the denominator, so there's a minus sign. And that minus sign cancels the minus sign out here.

And then, for the next term, from the transverse components, I will get n minus 1 over 2, integral d d q over 2 pi to the d, log of k q squared plus k xi t to the minus 2. And presumably, if I go ahead with higher and higher order corrections, there will be other things. Yes, Carter.
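
Collecting the pieces just described, the free energy density at this Gaussian order should read, as I reconstruct the board (up to non-singular constants from the factors of 2 pi):

```latex
-\frac{\ln Z_{\text{sing}}}{V}
= \frac{t}{2}\,\bar m^{2} + u\,\bar m^{4}
+ \frac{1}{2}\int\!\frac{d^{d}q}{(2\pi)^{d}}\,
  \ln\!\bigl[k\bigl(q^{2}+\xi_{\ell}^{-2}\bigr)\bigr]
+ \frac{n-1}{2}\int\!\frac{d^{d}q}{(2\pi)^{d}}\,
  \ln\!\bigl[k\bigl(q^{2}+\xi_{t}^{-2}\bigr)\bigr].
```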

AUDIENCE: So [INAUDIBLE].

PROFESSOR: No. It's just that, like the saddle point, I'm trying to calculate a systematic expansion around the saddle point. So I've calculated so far the lowest order term, although I haven't explicitly told you what its behavior is. Once I'm satisfied with what kind of correction this is, I need to go beyond and include higher and higher order terms, and maybe show you that they are explicitly unimportant, like they are in the ordinary saddle point, or that they are important. At this stage, we are agnostic. We don't say anything.

One thing to note-- of course, there are all of these factors of 2 pi. Now if you go to a mathematician and show them a functional integral, they say it's an undefined quantity. And part of the reason for undefined quantity is, well, how many factors of 2 pi do you have? And what are the limits of this integration?

So from the perspective of mathematics, a functional integral is something that is very sick and ill-behaved. In our case, there is no problem, because we know that our field, although I wrote it as a continuous function, it is really a continuous function that has a limited set of Fourier components.

This product over q will not extend to arbitrarily short wavelengths. There's a characteristic wavelength, which is the scale over which I did the coarse graining, and I don't have anything beyond that. So there is a finite number of Fourier modes that I'm integrating here. There is a finite number of factors of 2 pi, et cetera, that one has. All right.

So fine, so this is the behavior. Again, I have looked only as a function of t, setting h equal to 0. I didn't include the effect of h. And let's explicitly look at what these things are for t positive, which I will write above, and t negative, which I will write below.

We saw that above, this is 0. Below, it is minus t squared over 16u. This quantity k xi l to the minus 2 is t above and minus 2t below. This quantity k xi t to the minus 2 is t above and 0 below.

Why do I bother to write that? Because I want to go and address this question of the heat capacity. And we said that the heat capacity is ultimately related to taking two derivatives of this log Z singular with respect to temperature, and beta, and all of that. But let's write it as a proportionality. It goes like this. Yes?

AUDIENCE: So the third line on the top board, you have under this continuous product over all elements of q.

PROFESSOR: So this product over q goes all the way to the end of the line, yes.

AUDIENCE: Yeah. So can [INAUDIBLE] be in the exponents, or are they still outside?

PROFESSOR: What infinitesimals?

AUDIENCE: d phi l and d phi t.

PROFESSOR: OK, so what I have left out, and you're quite right, is the integral sign. So for each q, I have to do n integrations: over this variable and this n minus 1 component one. So I forgot the integral sign; that's correct.

All right. So what do we have? So for t positive, if I take two derivatives of this with respect to t-- and actually there is a minus sign involved here, sorry-- above the transition, I will get 0. Below the transition, I will get this 1 over 8u. So this is the discontinuity that I had calculated before.

Now above the transition, I have to take derivatives of log of k q squared plus t with respect to t. Taking the first derivative of the log will give me 1 over its argument. Taking the second derivative will give me 1 over the argument squared.

Because of the minus sign out front, I can forget about the overall minus sign. So two derivatives of this object with respect to t will bring down a factor of 1 over k q squared plus t, squared. And I have to integrate that over q. And there is one of these from here, and there are n minus 1 from here. So there is a total of n over 2 of that.

Below the transition, I have to take derivatives, except that t is replaced with minus 2t. So every time I take a derivative, I will get an additional factor of 2. So rather than 1/2, I will end up with 2 times the integral over q, 2 pi to the d, of 1 over k q squared minus 2t, squared, from the longitudinal part. And the transverse part has no t dependence, so it doesn't contribute.

So the first part, you can see, is what I had calculated at the saddle point. And to this order in the expansion around the saddle point, which corresponds, essentially, only to the quadratic part, I have found a correction. And generically, we see that these corrections are proportional to an integral over q, 2 pi to the d. I can actually pull one factor of k squared outside so that the integral looks nicer, with some characteristic length scale, coming either from t or from minus 2t, that I can write as xi to the minus 2.
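
Written out, as I transcribe the two cases and the common form of the correction:

```latex
C \propto
\begin{cases}
\dfrac{n}{2}\displaystyle\int\!\frac{d^{d}q}{(2\pi)^{d}}\,\frac{1}{(kq^{2}+t)^{2}}, & t>0,\\[12pt]
\dfrac{1}{8u} + 2\displaystyle\int\!\frac{d^{d}q}{(2\pi)^{d}}\,\frac{1}{(kq^{2}-2t)^{2}}, & t<0,
\end{cases}
\qquad
C_{\text{fl}} \;\sim\; \frac{1}{k^{2}}\int\!\frac{d^{d}q}{(2\pi)^{d}}\,\frac{1}{\bigl(q^{2}+\xi^{-2}\bigr)^{2}}.
```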

So we want to understand how important these corrections are-- and over there, the corrections were under control. So really, I'm asking the question, are these corrections small compared to what I started with, in the same sense that log N is small compared to N? Well, what I need to do is to understand how this integral behaves. There is no factor of log N versus N here, because you can see both of those terms carry the factor of volume. So the issue is not that you have something like the log of the volume that will give you a small quantity. You have to hope that, for some reason or other, this whole integral here is not important.

So if I look at the integrand-- well, I can do one more thing. I can note that it behaves as 1 over k squared. Then there is a combination that you will see appearing many, many times in this course. Because this is spherically symmetric, I can write the measure as a solid angle times q to the d minus 1, dq. And so the whole thing is proportional to the ratio of the solid angle divided by 2 pi to the d, which will occur so many times in this course that we will give it a name, k sub d.

And then the eventual integral is simply an integral over one variable q, of q to the d minus 1 divided by q squared plus xi to the minus 2, squared. Yes?

AUDIENCE: Should that 1 over kd be 1 over k squared?

PROFESSOR: There is a q over k-- Yeah, that's right. I already had it. Yes, 1 over k squared. And then there's 1 over k. Thank you.

So how does this integrand look, the thing that I have to integrate? As a function of q, I have to integrate a function that, at least at small q, has no problem of singularity, divergence, et cetera. It simply goes like q to the d minus 1, with a coefficient that is like xi to the fourth.

At large q, however, in, let's say, three dimensions, it would fall off as q to the power of d minus 1 minus 4. At large q, I can ignore the xi to the minus 2 and just look at the powers of q. But then, if I'm at sufficiently large dimension, the function will keep growing. So basically, it depends on which dimension you are in, and the borderline dimension is clearly four. It's an integration that you can either perform without any difficulty going all the way to infinity in q, or you have to worry about the upper cutoff. OK?

So if you are in dimensions greater than 4, what you find is that this C fluctuation, in dimensions that are larger than 4-- as you go to larger and larger q, you are integrating something that is getting bigger and bigger. And you have to worry about that being infinite.

Except, as I told you, we don't have any worries about infinity, because our q has to be cut off by the inverse of the characteristic wavelength, which is the length scale over which I am doing the coarse graining. So let's call that cutoff lambda, presumably the inverse of some kind of lattice-like spacing. It's not the lattice spacing; it's the coarse graining scale.

So if I'm doing this, then in this integral, I can really forget about what's happening here. Most of the contribution to the integral will come from near the upper cutoff, and so the answer will be proportional to 1 over k squared times this cutoff lambda raised to the power of d minus 4. It's a proportionality; I don't care about constants of proportionality, et cetera.

However, if I am in a dimension, say 3, less than 4-- any dimension less than 4-- I can just as well let the upper cutoff go all the way to infinity, because the additional contribution that I get by extending lambda to infinity is going to be very small. So then it becomes like a definite integral. And it becomes precisely a definite integral if I scale q by xi inverse.

And then what you have is 1 over k squared, times xi inverse to the power of d minus 4-- or xi to the power of 4 minus d-- and then some definite integral, which is 0 to infinity, dx, x to the d minus 1 divided by x squared plus 1, squared. I don't really care what the number is. It's just some number that goes into this proportionality. So this is what happens for d less than 4.
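
A minimal numerical check (my own sketch, not from the lecture) that this dimensionless integral is indeed a finite number of order 1 for d less than 4, so the fluctuation correction scales as xi to the 4 minus d with an unimportant prefactor:

```python
# Evaluate the definite integral from 0 to infinity of x^(d-1) / (x^2 + 1)^2 dx
# for a few dimensions d < 4, where it converges without a cutoff.
import numpy as np
from scipy.integrate import quad

for d in (1, 2, 3):
    val, _ = quad(lambda x: x**(d - 1) / (x**2 + 1)**2, 0, np.inf)
    print(f"d = {d}: integral = {val:.6f}")
# For d = 3 the exact value is pi/4 ~ 0.785398.  For d >= 4 the integrand
# grows at large x and the integral must be cut off at the coarse-graining
# scale lambda, as discussed above.
```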

So let's see what all of this means. So we are trying to understand the behavior of the heat capacity of the system as a function of this parameter t-- and actually, only the part that corresponds to integrating the magnetization field. As I said, there's the phonon contribution, all kinds of other contributions that give you some kind of a background. Let's subtract that background and see what we have.

So what we have is that from the saddle point part, we get this discontinuity. So let's draw the saddle point part. So the saddle point part is-- oops, wrong direction. Above 0, it's 0. Below 0, it jumps to 1 over 8u. So it's a behavior such as this. So this part is the C of the saddle point.

But to that, I have to add a correction. So let's look at the correction. First of all, if I'm looking at the correction above four dimensions, whether I'm above or below the transition, I have to add one of these quantities. These quantities don't have any explicit dependence on t itself. So what happens is that, if I add that, presumably there is a correction that I will get from below and a correction that I will get from above. So this is C fluctuation for d larger than 4.

So what it certainly does is, when I add this part to what I had before, I will change the magnitude of the discontinuity. But so what? The magnitude of the discontinuity was not something that was important, because u was not a universal number. So there was some discontinuity before; there is some discontinuity after.

We see that the corrections for dimensions greater than 4 do not change the qualitative statement that the heat capacity should have a discontinuity. But if I go to dimensions less than 4, and I recall that my xi goes like t to the minus 1/2-- there are the formulas for xi over there-- we find that this quantity is proportional to t to the minus 4 minus d over 2.

So below four dimensions, what we get is that the correction that we calculated is actually divergent. So this is C fluctuation for d less than 4. There is a divergence as t goes to 0 that, if you're sitting in three dimensions, would go as t to the minus 1/2.
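
Putting in xi going like t to the minus 1/2, the correction below four dimensions behaves as, in my notation:

```latex
C_{\text{fl}} \;\sim\; \frac{1}{k^{2}}\,\xi^{\,4-d}
\;\sim\; t^{-(4-d)/2}
\;\xrightarrow{\;d=3\;}\; t^{-1/2},
```

which diverges as t goes to 0 and so overwhelms the finite saddle point discontinuity.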

So you started with a saddle point prediction that the heat capacity should be discontinuous. You add the analog of these corrections to the saddle point calculation, and you find that the correction is much, much more important than the original discontinuity. It completely changes your conclusions. So we have to go beyond this approximation that we did over here, the saddle point. And the difference between our problem and the one that we did in 8.333 is that we don't have one variable that we are integrating; we are integrating over fluctuations over the entirety of the system.

And we see that these fluctuations over the entirety of the system are so severe, at least close to the transition point, that they completely invalidate the results that you had from the saddle point. Yes?

AUDIENCE: So obviously you have some high order [INAUDIBLE]. And here you're basically completing [INAUDIBLE].

PROFESSOR: Exactly.

AUDIENCE: Is there an easy way to argue that for d greater than 4 there is no divergence lurking in the higher order terms?

PROFESSOR: Actually, the answer is no. If I look at this integral that I have over here, it depends on t. If I take sufficiently high derivatives of it, I will encounter a singularity. So indeed, what I have focused on here is at the level of the heat capacity. But if I were to look at the fifth derivative of the free energy, I will see singularities.

AUDIENCE: No, I'm talking about the second derivative for higher order terms.

PROFESSOR: These higher order terms, the phis? OK, all right. So that was my next point. So you may be tempted to say, OK, I found the divergence. Let's say that the heat capacity diverges with an exponent of 1/2. And no. The only thing that this says is that your starting point was wrong.

Any conclusion that you want to make based on what we are doing here is wrong. There is no point in my going beyond and calculating the higher order term, because I already see that the lowest order correction is invalidating my result.

AUDIENCE: So you [INAUDIBLE] conclude that mean field theory is good for d bigger than 4.

PROFESSOR: From what I have told you, I've shown you that the discontinuity in the heat capacity is maintained. It is true that if I look at sufficiently high derivatives, I may encounter some difficulty in justifying why d greater than 4 or less than 4 makes a difference. But certainly, as we build on what we know later on in the course, I will be able to convince you that mean field theory is certainly valid in dimensions greater than 4.

But right now, I guess the only thing that we can say for sure is that the saddle point method cannot be applied when you are dealing with a field that is varying all over space. So we have this situation.

On the other hand, you say, well, if it is so bad, why does it work for the case of the superconductor? So let's see if we can try to understand that. Again, sticking with the language of the heat capacity, we see that if I am, let's say, sitting in some dimension below 4, to lowest order I will predict that there is a discontinuity in the singular part, and that the fluctuations lead to a correction that is divergent.

Now this is mathematically correct. But let's see how you would go and see that in the experiments. So presumably in the experiment, the analog of your t going to 0 is that you have a temperature T that passes through Tc. And what you are doing in the experiment is making measurements, let's say, at this point, at this point, at this point, and then you are going all the way.

Now we can see that there could potentially be a difference, depending on the amplitude of this term. If it is like that, and I can resolve things at this scale that I have indicated here, there's no problem. I should see the divergence.

But suppose the amplitude is much, much smaller, so it is something that is looking like this, and you are taking measurements that correspond, essentially, to intervals such as this. Then you really integrate across this. You don't see the peak. You don't have sufficient resolution. It's kind of like searching for a delta function, more or less.

And so whether you are in one situation or the other determines the result of the experimental observation. So how do I find out something about that? Well, I want the amplitude of this correction to be at least as large as the discontinuity for me to be able to see it. That is, I compare it to the C that I have from the saddle point, which is a discontinuity in the heat capacity that is of the order of 1 over 8u.

This discontinuity should be of the order of this quantity, 1 over k squared, xi to the power of 4 minus d. But now it becomes kind of non-universal, because I really want to compare amplitudes. I know that my xi is predicted from the saddle point to go like t to the minus 1/2, where t is a kind of rescaled version of temperature. So t is, let's say, Tc minus T over Tc. It is something that is dimensionless.

And so all of the dimensions should be carried by some kind of a prefactor here, which is some kind of a length scale. So the correlation length is xi 0 times t to the minus 1/2: there is some prefactor xi 0 that is a length scale, and then this reduced temperature that controls the functional divergence.

Actually, I can read off what this xi 0 should depend on. You can see that xi 0 should scale like the square root of k, so this object k scales like xi 0 squared. So the 1 over k squared scales like 1 over xi 0 to the fourth power. And this scales like xi 0 to the power of 4 minus d. And then I have this reduced temperature to the power of d minus 4 over 2.

So you can see that, for these things to be comparable, I should reduce my t to a value such that this divergence compensates: the combination xi 0 to the d times delta C of the saddle point should be of the order of some minimal value of t-- let's call it t sub G-- raised to the power of d minus 4 over 2. Or t G is of the order of delta C sp times xi 0 to the d, the whole thing to the power of 2 divided by d minus 4.

Let me write that slightly better. So t G is of the order of delta C sp to the power of minus 2 over 4 minus d-- since we are going to be looking at dimensions such as 3-- times xi 0 to the power of minus 2d over 4 minus d.
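
So the Ginzburg criterion, as I reconstruct the board, reads:

```latex
t_{G} \;\sim\; \bigl(\Delta C_{\mathrm{sp}}\,\xi_{0}^{\,d}\bigr)^{-2/(4-d)}
= \Delta C_{\mathrm{sp}}^{\,-2/(4-d)}\;\xi_{0}^{\,-2d/(4-d)}
\;\xrightarrow{\;d=3\;}\;
t_{G} \;\sim\; \frac{1}{\Delta C_{\mathrm{sp}}^{\,2}\,\xi_{0}^{\,6}}.
```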

So we can see that the resolution that you need-- how close you have to go to the critical point-- very much depends on this quantity xi 0. It does also depend on delta C sp. But we can argue that that is a less important contribution. Let's focus, for the time being, on the dependence on this xi 0.

So xi 0 presumably has something to do with the physics of the system that you are looking at. So now we are leaving the realm of things that are universal. We have to think about the system under consideration, and we have to identify a length scale associated with it.

Now if I think about something like liquid gas, well, one kind of length scale that immediately comes to mind is the length scale over which the particles are interacting. Also, I can look at the kind of phase diagrams that we were looking at-- there was some critical volume where the transition from one type of isotherm to another type of isotherm occurs-- and I can ask how many angstroms cubed that critical volume is.

But again, everything here we have to try to keep as dimensionless as possible. So let's say this critical volume corresponds to some number of particles. And let's take the cube root of that and convert it to a length scale over which this number of particles is confined in three dimensions. And what we find, for liquid gas systems, is that this number xi 0, in units of the atomic spacing, is of the order of 1 to 10 atomic spacings. Yes.

AUDIENCE: Scale on which atoms interact with each other?

PROFESSOR: Well, it could be. But for the case of, say, the particles in this room, the range of interaction is not that different from the size of the particles coming together. It's maybe a few times that. So that's basically a few times [INAUDIBLE]. And I'm not going to argue about whether it is twice that or 10 times that. It really makes no difference.

The thing is that, when I'm looking at the problem of superconductivity-- and this is the only place where we introduce a little bit of physics-- when one is looking at something like aluminum that goes into being a superconductor, it is an ordering in the same sense that we have for liquid helium. But the difference is that what is ordering in superconductivity is not bosons; it is fermions, electrons.

And electrons have Coulomb repulsion. So what has to happen is that there is some mechanism, phonons or whatever, that gives an effective attraction between electrons and pairs them together into a Cooper pair. The characteristic size of a Cooper pair, because of the repulsion that you have between electrons, rather than being 1 to 10 angstroms, is suddenly of the order of 1,000 angstroms. So that's xi 0.

Now note that if you are in three dimensions, this is something that is raised to the sixth power. So if I take this, after dividing by an atomic size or whatever, to be a number that is of the order of, let's say, 100 or even 1,000, and I raise it to the sixth power, you can see that the kind of resolution that you need corresponds to t that is of the order of 10 to the minus 12, 10 to the minus 15, et cetera. And that's just not the resolution that you have in experiment.
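
A back-of-the-envelope sketch of that arithmetic (my own illustrative numbers, with the non-universal delta C factor set to 1 and xi 0 in units of the atomic spacing):

```python
# Ginzburg reduced temperature t_G ~ xi0^(-2d/(4-d)) in d = 3 dimensions,
# i.e. t_G ~ xi0^(-6); the xi0 values below are illustrative, not data.
d = 3
for system, xi0 in [("liquid gas / superfluid", 3.0),
                    ("superconductor (Cooper pairs)", 300.0)]:
    t_G = xi0 ** (-2 * d / (4 - d))  # reduces to xi0**(-6) for d = 3
    print(f"{system}: xi0 ~ {xi0:g} spacings -> t_G ~ {t_G:.0e}")
# xi0 of a few hundred already gives t_G ~ 1e-15, far below experimental
# resolution, while xi0 of a few gives t_G of order 1e-3, easily resolved.
```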

So basically, experiment will go over this without really seeing it. Essentially, the units are so big that you don't have that many of them to fluctuate across the system. The effect of fluctuations is much diminished compared to superfluid helium, or compared to liquid gas, where over the size of the system you have many, many fluctuations that can take place. This condition for whether or not you're going to be able to see the effects of fluctuations in something that is mean field-like-- I'll call it t sub G, because it's called the Ginzburg criterion.

So this basically answers the questions that we had over here. For all of our phase transitions, we constructed the Landau-Ginzburg theory, and we evaluated its consequences for the phase transition, such as the behavior of the heat capacity, using the saddle point method. We saw that the results work extremely well for superconductors, but not for anything else.

And the answer to that is that for superconductors, fluctuations are not so important, and the most probable state gives you a good idea of what is happening. Whereas for superfluid helium, for liquid gas, et cetera, fluctuations are very important, and the starting point that is the saddle point, the most probable state, is simply not good enough. Yes.

AUDIENCE: So when you were giving us the survey of different phase transitions [INAUDIBLE], you only talked about the critical exponents, because, for instance, there is a discontinuity of [INAUDIBLE] heat capacity for all phase transitions. But it's often masked with this singularity, right?

PROFESSOR: Once you have a divergence, I don't know how you would be talking about a discontinuity.

AUDIENCE: If you roughly measure the heat capacity further away from the singularity, wouldn't it kind of converge, left and right, to two different values?

PROFESSOR: OK. So if I draw a random function that has divergence, the chances are very, very good that, if I go a little bit further, the two of them will not be exactly at the same height. There will be an asymmetry. So are you talking about the asymmetry in amplitudes? Because I know the amplitudes are not symmetric.

If I go very, very far away, then all kinds of other things come into play. There's the phonon heat capacity, et cetera. So the statement that you make, I have never heard before, in fact. But I'm trying to see whether or not it's even mathematically conceivable.

AUDIENCE: Another question: with this series you just wrote out with [INAUDIBLE] singularity, doesn't it give you the exponent for the singularity?

PROFESSOR: No.

AUDIENCE: It's a [INAUDIBLE] number.

PROFESSOR: It is 1/2, yes. So there is a theory. There is a mathematical theory that has this 1/2 exponent divergence. What is that theory? It's a theory that is cut off at the Gaussian level.

So if we had some system for which we were sure that, when we write our statistical field theory, we can terminate at the level of the Gaussian terms-- m squared, gradient of m squared, et cetera-- if such a theory existed, it would have exactly this divergence. But I don't see any reason for eliminating all those--

AUDIENCE: So we still have not found the reason why the actual experimental exponents are--

PROFESSOR: No, we have not found. Yes.

AUDIENCE: So how do we interpret the larger xi 0 of superconductivity? Does that mean the correlation length actually is longer?

PROFESSOR: Yes, yes.

AUDIENCE: But then why are we saying that the fluctuations there are not so important? We have longer correlations; then usually that means we have bigger fluctuations [INAUDIBLE].

PROFESSOR: OK, so let's see if we can unpack that. So our correlation length is some xi 0 t to the minus 1/2. And indeed, what that says is that at the same value of how far I am away from the critical point, the correlations are longer ranged. If I go and look at the amplitudes of the fluctuations that I have, then I am, as a function of q, closer to a situation such as this.

So xi 0 is large; xi 0 inverse would be smaller. So that's correct. And then in real space, what it would mean is that, if I look at my system, there are patches that are of the order of xi 0 t to the minus 1/2 that are doing the same thing.

Let me understand your question. So it is true that for the superconductor you have longer-ranged correlations, and what that means is that the number of independent modes that can contribute and fluctuate is less. And what we will see ultimately is that the reason for all of these exponents being different from what we have in superconductivity is that there is essentially a much broader range of fluctuations that is contributing to the whole thing.

So I'm not sure if I'm answering your question. Let's go back and think about your question. So basically, for the superconductor, certainly everything that we said-- including being able to express it in terms of this statistical field theory, having large correlation lengths close to the critical point-- all of that is correct. The only thing is that the diversity of fluctuations here is less. And this lack of diversity of fluctuations, compared to something like liquid gas, gives you more saddle point-like exponents.

AUDIENCE: So you mean the range of my integrand with respect to q is smaller? [INAUDIBLE]. So the q space I'm integrating over is smaller.

PROFESSOR: Yes.

AUDIENCE: But if I calculate fluctuation function, something?

PROFESSOR: Yes. So this is what I was trying to calculate here, yes.

AUDIENCE: Then it should be larger than--

PROFESSOR: Yes. But it is larger over a smaller range of q's. So I guess what you are saying is that if I look at the superconductor, I will see something like this. If I look at the liquid gas, I will see something like this.

AUDIENCE: And [INAUDIBLE] just intuitively interpret what's the behavior of the heat capacity from this--

PROFESSOR: [INAUDIBLE], because if you look at something like this, and in particular its t dependence-- after all, everything that we are interested in is how things change as a function of T minus Tc. So presumably, when we do that for the superconductor, if you do some kind of a scattering experiment, you will see some peak like this emerging, but the peak never expanding as much as it would do for these other things.

You should be able, based on that, to deduce that the range of wavelengths that are fluctuating in the superconductor is less compared to the liquid gas system. And so there is not much range in the diversity of length scales that are contributing to the fluctuations in a superconductor.

AUDIENCE: So that explains why we have only a very narrow peak in the Cp?

PROFESSOR: Yes. You have to go very close in order to expand the range of wavelengths. But then you go a little bit to one side or the other, and you are past the range where you can see very large wavelengths. Yes.

AUDIENCE: So in our saddle point approximation, we found our maximum when we looked at the second derivative. If we had considered more derivatives, would we have captured those exponents?

PROFESSOR: So if we think about things in terms of mathematical consistency, here we have a parameter N. And we can explicitly calculate higher and higher order terms, and see how they become smaller and smaller as the parameter becomes larger and larger.

Now what we have here is the following situation. If I stick at some value that is away from the critical point, let's say t of 10 to the minus 1, at that point I calculate the saddle point. And then I calculate fluctuations around the saddle point, and if I add more and more terms, eventually, I think, I will converge to some value for the heat capacity. The problem is that I don't want to stick to one value of t. I want to see what the singularity is as I approach 0.

Now we can see that the problem here is that this correction gives a functional form that is divergent. And then I would say that, if I go from t of 10 to the minus 1 to 10 to the minus 3, then I'm less sure about the first correction, and maybe I have to do many, many more corrections, and then I would get something else. And presumably, the closer I get to t equals 0, the further and further down the series I have to go. And so that becomes essentially useless.

Now we will actually do, later on, another version of this problem, where we say the following. What I did for you here was calculate essentially Gaussian integrals. And I know how to do Gaussian integrals. And for the Gaussian theory, this result is exact. I will get alpha equal to 1/2.

Maybe what I can do, instead of doing the saddle point approximation, is approach the problem in a completely different fashion. I will start with the Gaussian part, and then I do a perturbation in all of these nonlinearities. That's another approach.

You can say, OK, I know the problem for u equals 0-- so let's say I got this result for u equals 0-- and I want to calculate what the corrections will be in proportion to u, u squared, et cetera. But what we find is that we start expanding in u and calculate the first correction. And the first correction, you'll find, is proportional to u t to the power of d minus 4 over 2.

So exactly the same problem that we had over here reappears when we try to do perturbation theory. You think you are expanding in a small quantity, but as you go to t equals 0, you find that the coefficient of the first term in the perturbation theory actually blows up. So we will try a number of these methods to try to extract the right answer out of this expression.

This expression is, in fact, correct. The difficulty is mathematical. We don't know how to deal with this kind of integration. And I was just listening to the story of Oppenheimer and Pauli. And Oppenheimer, when he was young, goes to-- actually, not Pauli but [INAUDIBLE].

And he says, I am working on some problem, and I'm not making any progress. He is asked: is the difficulty mathematical or physical? And Oppenheimer is flustered, because he didn't know the answer.

So here, we know the problem is mathematical, because the physics is entirely captured here. We haven't done anything. Now the question, however, is whether the mathematical problem will be resolved by mathematical insights or physics insights. And the interesting thing is that, in a number of cases where the problem originates from physics, eventually the mathematical solution is provided also by physics.

So ultimately, people developed this idea of the renormalization group, which I will be developing for you in future lectures, and which is how to solve this mathematical problem. We have addressed it from this perspective; we will try to approach it from the perturbative perspective. And it just doesn't work, until we introduce a more physical way of looking at it.