Lecture 2: Lec 1 (continued); The Landau-Ginzburg Approach Part 1

Description: In this lecture, Prof. Kardar continues his discussion of the principles of collective behavior from particles to fields, and introduces the Landau-Ginzburg Approach.

Instructor: Prof. Mehran Kardar

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: OK, let's start. So last lecture, we were talking about elasticity as an example of a field theory-- a statistical field theory. And I ended the class by not giving a very good answer to a question that was asked. So let's go back and revisit it.

So the idea that we had was that maybe we have some kind of a lattice. And we make a distortion through a set of vectors in however many dimensional space we are. And in the continuum format, we basically regarded this u as being an average over some neighborhood.

But before going to do that, we said that the energy cost of the distortion at the quadratic level-- of course, we can expand to go to higher order-- was a sum over all pairs of positions and all pairs of components, alpha and beta running from 1 up to whatever the dimensionality of space is, of u alpha at r and u beta at r prime-- the components alpha and beta of this vector u at two different locations, say r and r prime. And then there was some kind of an object that correlated the changes here and here, maybe obtained as a second derivative of a potential energy as a function of the entirety of the coordinates.
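Written out, the quadratic cost being described is the following, where Phi stands for the potential energy as a function of all the coordinates (the factor of 1/2 is a convention):

\[ V = \frac{1}{2} \sum_{\mathbf{r},\mathbf{r}'} \sum_{\alpha,\beta} u_\alpha(\mathbf{r})\, K_{\alpha\beta}(\mathbf{r},\mathbf{r}')\, u_\beta(\mathbf{r}'), \qquad K_{\alpha\beta}(\mathbf{r},\mathbf{r}') = \frac{\partial^2 \Phi}{\partial u_\alpha(\mathbf{r})\, \partial u_\beta(\mathbf{r}')}. \]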

And this is in principle a general function that depends on all pairs of variables. But then we said that because we are dealing with a lattice-- and one pair of coordinates in the lattice is the same as another pair of coordinates with the same spatial orientation-- this function is merely a function of the separation r minus r prime, which of course is a vector in this lattice. And the fact that it has this form then allows us to simplify this quadratic form from a sum over pairs-- N squared terms, if you like-- into a sum over one coordinate, which is the wave vector obtained by Fourier transform. So once we Fourier transform, that V can be written as a sum over just one set of k vectors, of course appropriately discretized depending on the overall size of the system and confined to a Brillouin zone that is determined by the type of wavelengths that the lattice structure allows.

Then this becomes u tilde alpha, the Fourier transform evaluated at k, times the other factor evaluated at minus k-- which, since u is real, is just the complex conjugate of the transform at k. And this entity, once we Fourier transform it, becomes a function K tilde alpha beta of the wave vector k. Now, there is one statement that is certainly correct, which is that if I take the whole lattice and move it, then essentially there should be no cost.
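In symbols: once K depends only on the separation, the Fourier modes decouple, and the quadratic form becomes (with a sum over the repeated indices alpha and beta implied)

\[ V = \frac{1}{2} \sum_{\mathbf{k}} \tilde u_\alpha(\mathbf{k})\, \tilde K_{\alpha\beta}(\mathbf{k})\, \tilde u_\beta(-\mathbf{k}). \]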

And moving the entire lattice only changes the Fourier component that corresponds to K equals to 0-- translation of everything. So you know for sure that this K alpha beta at K equals to 0 has to be 0. Now, we did a little bit more than that, then.

We said, let's look not only at K equals to 0, but at small k. And then for small k, we are allowed to make an expansion of this. And this is an expansion in k squared, k to the fourth, et cetera, that we hope to truncate at low values of k. Now, that itself is an assumption.

That's part of the locality assumption that I stated-- that I can make an expansion as a function of k. After all, something like k to the 1/2 is a function that goes to 0 at k equals to 0. But it is non-analytic.

It turns out that in order to generate that kind of non-analyticity here, this function in real space, as a function of separation, should decay very slowly. Something like a Coulomb interaction that is very long range will give you that kind of singularity. But things that are really just interacting within some neighborhood-- or even things that are long range, if the interaction falls off sufficiently rapidly-- will not. And that's one of the things that we will discuss later on: precisely what kind of potentials allow you to make an expansion that is analytic in powers of k and hence consistent with this idea of locality.

Let's also ignore cases where you don't have inversion symmetry. So we don't need to start and worry about anything that is linear in k. So the first thing that can appear is quadratic in k.

Now, when we have things that are quadratic in k, I have to make an object that has two indices-- alpha and beta. And, well, how can I do that? Well, one of the things that I mentioned last time: we could have a term that is k squared. And then we have a delta alpha beta.

If I insert that over here, what do I get? I get k squared delta alpha beta, which will give me u tilde squared. So that's the term that is, in an isotropic solid, identified with the shear modulus. Again, if the system is isotropic, I can't tell any direction apart from any other direction. But I can still make another tensor with the indices alpha and beta by multiplying k alpha and k beta.

If I then multiply this with this, I will get k dot u-- the dot product-- squared. So that's the other term, which in an isotropic solid was identified with mu plus lambda over 2. Now, as long as any direction for k in this lattice is the same, those are really the only terms that you can write down.
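Collecting the two invariant tensors into one formula-- with the factors of 2 being a matter of convention--

\[ \tilde K_{\alpha\beta}(\mathbf{k}) \simeq \mu\, k^2\, \delta_{\alpha\beta} + (\mu+\lambda)\, k_\alpha k_\beta + \mathcal{O}(k^4), \]

so that

\[ V = \frac{1}{2} \sum_{\mathbf{k}} \left[ \mu\, k^2\, |\tilde{\mathbf{u}}(\mathbf{k})|^2 + (\mu+\lambda)\, |\mathbf{k}\cdot\tilde{\mathbf{u}}(\mathbf{k})|^2 \right]. \]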

But suppose that your system was anisotropic-- such as, for example, a rectangular lattice. Or let's imagine we put a whole bunch of coins here to emphasize that rather than something like a square lattice, we have something that is very rectangular. In this case, we can see that the x and y directions are not the same.

And there is no reason why this k squared here could not have been separately a kx squared plus some other number times ky squared. And hence, the type of elasticity that you would have would no longer depend on just two numbers but on three-- in fact, there would be more. And precisely how many independent elastic constants you are given depends on the point group symmetry-- the way that your lattice is structured.

But that's another story that I don't want to get into. Last time, I was rather careless and said that just rotational symmetry will give you this k squared delta alpha beta and k alpha k beta. I had to be more precise about that. So this is what's happening.

Now, given that you have an isotropic material, the conclusion is that the potential energy will have two types of terms. There is the type of term that goes with mu over 2, k squared u tilde squared. And then there's the type of term that goes with mu plus lambda over 2, times k dot u tilde, squared.

Now you can see that immediately there is a distinction between two types of distortion. Because if I select my wave vector k here, any distortion that is orthogonal to it will not get a contribution from this term. And so the cost of those so-called transverse distortions comes only from mu.

Whereas if I make a distortion that is along k, it will get contributions from both of those two terms. And hence, the cost of those longitudinal distortions will be different. So when we go back to our story of how the frequency depends on k, you can see that when we go to the small k limit in an isotropic material, we've established that there will be a longitudinal mode.

And it will have some kind of longitudinal sound velocity. And in three dimensions, there will be two branches of the transverse mode that fall on top of each other. As I said, once you go further out to larger k, you can include all kinds of other things. In fact, the types of things that you can include at fourth order, sixth order, et cetera are again constrained somewhat by the symmetry. But their number proliferates.
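The corresponding long-wavelength dispersions, assuming some mass density rho for the medium (rho is not named in the lecture, so treat the prefactors as illustrative):

\[ \omega_L(k) \simeq v_L\, k, \quad v_L = \sqrt{\frac{2\mu+\lambda}{\rho}}; \qquad \omega_T(k) \simeq v_T\, k, \quad v_T = \sqrt{\frac{\mu}{\rho}}, \]

with the transverse branch doubly degenerate in three dimensions.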

And eventually you get all kinds of things that could potentially describe how these modes behave. And those things depend on details of your potential that I don't know too much about. And why was this relevant to what we were discussing? We said that basically, once you go to some temperature other than zero, you start putting energy into the modes.

And how far in frequency you can go, given your temperature-- typically, the maximum frequency at a particular temperature is of the order of k T over h bar. So that if you are at very high temperature, all of the modes are excited and all of the vibrations are relevant. But if you go to low temperatures, eventually you hit the regime where only these long wavelength, low frequency modes are excited.

And your heat capacity was really proportional to how many modes are excited. And from here, you can see that since omega goes like v times k, either the k for the longitudinal branch or the k for the transverse branch has to be less than a maximum value proportional to T. Larger values of k would correspond to frequencies that are not excited.

So all of the modes that are down here are going to be excited. How many of them are there? From 0 up to this k max, which is of the order of k T over h bar v, for some sound velocity v.

So you would say that the heat capacity, which is proportional to the number of oscillators that you can have-- with k B per oscillator-- the number of oscillators is roughly volume times k T over h bar v, cubed, if you are in three dimensions. You saw that it was to the first power in one dimension. So in general, it will be something like T to the d.

Of course, here, I was kind of not very precise because I really have to separate out the contribution of longitudinal modes and transverse modes, et cetera. So ultimately, the amplitude that I have will depend on a whole bunch of things. But the functional form is this universal t to the d.
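Collecting the argument into one formula-- a sketch, with the longitudinal and transverse velocities lumped into a single effective v--

\[ C(T) \;\propto\; k_B\, V \left( \frac{k_B T}{\hbar v} \right)^{d}. \]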

So the lesson that we would like to get from this simple, well known example is that sometimes you get results that are independent in form for very different materials. I will use the term universality. And the origin of that universality is because we are dealing with phenomena that involve collective behavior.

As k becomes small, you're dealing with large wavelengths encompassing the collective motion of lots of things that are vibrating together. And so this kind of statistical averaging over lots of things that are collectively working together allows a lot of the details to, in some sense, be washed out, and gives you a universal form-- just like adding random variables will give you a Gaussian if you add sufficiently many of them together. It's that kind of phenomenon.

Typically, we will be dealing with these kinds of relationships between quantities that have dimensions. Heat capacity has dimensions. Temperature has dimensions.

So the amplitude will have to have things that have dimensions that come from the material that you are dealing with. So the thing that is universal is typically the exponents that appear in these functional forms. So those are the kinds of things that we would like to capture in different contexts.

Now, as I said, this was supposed to be just an illustration of the typical approach. The phenomenon that we will be grappling with for a large part of this course has to do with phase transitions. And again, this is a manifest result of having interactions among degrees of freedom that cause them to collectively behave differently from how a single individual would behave. And the example that we discussed in the previous course, and that is familiar, is of course that of ice, water, steam.

So here, we can look at things from two different perspectives. One perspective-- let's say we start by-- actually, let's start with the perspective in which I show you pressure and temperature. And as a function of pressure and temperature, low temperatures and high pressures would correspond to having some kind of a-- actually, let me not incur the wrath of people who know the slopes of these curves precisely.

So at low temperatures, you have ice. And then the more interesting part is that you have, at high temperatures and low pressures, gas. And at intermediate values, you have, of course, the liquid. And the other perspective is to look at isotherms of the system-- pressure versus volume.

So basically, what I'm doing is I'm sitting at some temperature and calculating the behavior along that isotherm-- how the pressure and volume of the system are related. When you are far out to the right, you'll have behavior that is reminiscent of an ideal gas. You have PV proportional to the number of particles times temperature.

As you lower the temperature and hit this critical isotherm at TC, the shape of the curve gets modified to something like this. And if you're looking at isotherms at an even lower temperature, you encounter a discontinuity. And that discontinuity is manifested in a coexistence interval between the liquid and gas.

So isotherms for T less than TC have this characteristic form. Here, I have not bothered to go all the way down to the case of the solid. Because our focus is going to be mostly on what happens at this critical point-- TC and PC, where the distinction between liquid and gas disappears. So this is the first case of a transition between different types of material that is manifested here, the solid line indicating a discontinuity in various thermodynamic properties.

The discontinuity here being, say, in the density as a function of pressure. Rather than having a nice continuous curve, you have a jump. In these curves, the isotherms are suddenly discontinuous.

And the question that we posed last semester was that essentially, all the thermodynamic properties of the system I should be able to obtain through the partition function-- the log of the partition function-- which involves an integral, let's say, over all of the coordinates and momenta, of e to the minus beta times some kind of energy. And the part of this energy involving the momenta is not particularly important. Let's just get rid of that.

And the part about the coordinates involves some kind of potential interaction between pairs of particles. That is not that difficult. Maybe particles are slightly attracted to each other when they're close enough.

And they have a hard core. But somehow, after I do this calculation-- a bunch of integrals, all of them perfectly well behaved, with no divergence anywhere in the range of integration-- I get this discontinuity.

And the question of how that appears is something that clearly is a consequence of interactions. If we didn't have interactions, we would have ideal gas behavior.

And maybe this place is really a better place to go and try to figure out what's going on than any other place in the phase diagram. The reason for that is in the vicinity of this point, we can see that the difference between liquid and gas is gradually disappearing. So in some sense, we have a small parameter.

There's some small difference that appears here. And so maybe the idea that we can start with some phase that we understand fully and then perturb it and see how the singularity appears is a good idea. I will justify for you why that's the case and maybe why we can even construct a statistical field theory here. But the reason it is also interesting is that there is a lot of experimental evidence as to why you should be doing this.

We discussed the phase diagrams. And we noticed that an interesting feature of these phase diagrams is that below TC, where the liquid and gas first manifest themselves as different phases, there is a coexistence interval. And this coexistence interval is bounded on the two sides by the gas and liquid volumes, or alternatively densities. Now, the interesting experimental fact is that when you go and observe the shape of this curve for a whole bunch of different gases-- here, you have neon, argon, krypton, xenon, oxygen, carbon dioxide, methane, things that are very different-- and scale them appropriately so that all of the vertical axes come to 1-- so you divide P by PC.

You appropriately normalize the horizontal axis so the maximum is at 1. You see that all of the curves you get from very, very different systems, after you just do this simple scaling, fall right on top of each other. Now, clearly something like carbon dioxide and neon have very different interatomic potentials that I would have to put into this calculation.

Yet despite that, there has emerged some kind of a universal law. And so I should be able to describe that. Why is that happening?
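As a concrete picture of the rescaling just described, here is a minimal sketch-- not from the lecture-- of how one would produce such a collapse plot. The dictionary `datasets` and all of its fields are hypothetical stand-ins for measured coexistence data and critical parameters; only the rescaling itself reflects the lecture:

```python
import matplotlib.pyplot as plt

def plot_collapse(datasets):
    """Overlay coexistence curves scaled by their critical parameters.

    `datasets` maps a substance name to a dict with measured arrays
    T, rho_gas, rho_liq and the critical values Tc, rho_c (all
    hypothetical names).
    """
    for name, d in datasets.items():
        T_scaled = [T / d["Tc"] for T in d["T"]]
        # Scale each density branch by the critical density, so curves
        # from very different substances can fall on one another.
        plt.plot([r / d["rho_c"] for r in d["rho_gas"]], T_scaled, "o", label=name)
        plt.plot([r / d["rho_c"] for r in d["rho_liq"]], T_scaled, "o")
    plt.xlabel("rho / rho_c")
    plt.ylabel("T / Tc")
    plt.legend()
    plt.show()
```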

Now I'll try to convince you that what I need to do is similar to what I did there. I need to construct a statistical field theory. But we said that statistical field theories rely on averaging over many things. So why is that justified in this context?

So there is a phenomenon called critical opalescence. Let's look at it here. This also serves to show something that people sometimes don't believe. Since our experience comes from normal pressures-- where we cool the gas and it goes to a liquid, or heat the liquid and it goes to a gas, and we know that these are different things-- people find it hard to believe that if I repeat the same thing at high pressure, there is no difference between liquid and gas. They are really the same thing.

And so this is an experiment that is done more or less around here, where the critical point is. And you start initially on the low temperature side, where you can see that at the bottom of your vial, you have a liquid. And a meniscus separates it nicely from a gas. As you heat it up, you approach the critical temperature. Well, the liquid expands and keeps going up.

And once you hit TC and beyond-- well, which one is the liquid and which one's the gas? You can't tell the difference anymore, right? There is a variation in density.

Because of gravity, there's more density down here than up there. OK, now what happens when we start to cool this? Ah, what happened?

That's called critical opalescence. You can't see through that. And why?

Because there are all of these fluctuations that you can now see. Right at TC, when the thing became black, there were so many fluctuations covering so many different length scales that light could not get through. But now gradually, the fluctuations-- which exist both in the liquid and in the gas-- are becoming smaller, but are still quite visible.

So clearly, despite the fact that whatever atoms and molecules you have over here have very short range interactions, somehow in the vicinity of this point, they decided to move together and have these collective fluctuations. So that's why we should be able to do a statistical theory. And that's why there is a hope that once we have done that, because the important fluctuations are covering so many large numbers of atoms or molecules, we should be able to explain what's going on over here. So that's the task that we set ourselves. Anything about this before I close the video? OK. Yes?

AUDIENCE: Are there gases that, when rescaled, their plots still don't collapse onto that?

PROFESSOR: No. No. In the vicinity-- so maybe the range of the things that collapse on top of each other is small. Maybe it is so small that you have to do things at low pressure differences in order to see that. But as far as we know, if you have sufficiently high resolution, everything will collapse on top of that [INAUDIBLE].

And what is more interesting than that-- that is not confined to gases. If I had done an experiment that involved mixing some protein that has a dense phase and a dilute phase and I went to the point where there is a separation between dense and dilute and I plotted that, that would also fall exactly on top of this [INAUDIBLE]. So it's not even gases.

It's everything. And so-- so yes?

AUDIENCE: I heard you built a system where some [INAUDIBLE] damages?

PROFESSOR: Yes. And we will discuss those too.

AUDIENCE: And the exponents will change?

PROFESSOR: Yes.

AUDIENCE: So it's [INAUDIBLE].

PROFESSOR: You may anticipate from that T to the d over there that the dimension is an important parameter, yes. OK? Anything else?

All right. Rather than construct this theory, this statistical field theory, for the case of the liquid gas system, I'm going to do it for the case of the ferromagnet-- first, to emphasize the fact that this set of results spans much more than simple liquid gas phenomena-- a lot of different things-- and secondly, because writing it for the case of the ferromagnet is much simpler, because of the inherent symmetries that I will describe to you shortly. So what is the phenomenon that I would like to describe?

What's the phase transition in the case of the ferromagnet like a piece of iron? Where one axis that is always of interest to us is temperature. And if I have a piece of iron or nickel or some other material, out here at high temperatures, it is a paramagnet.

And there is a critical temperature TC below which it becomes a ferromagnet, which means that it has some permanent magnetization. And actually, the reason I drew the second axis is that a very nice way to examine the discontinuities is to put on an external magnetic field and see what happens. Because if you put on an external magnetic field-- like you put your piece of magnet inside some current-carrying loop, so that you have a field in one direction or the other direction-- then as you change the sign of the field on the high temperature side, what you'll find is that if I plot where the magnetization is pointing-- well, if you put a magnetic field on a paramagnet, it also magnetizes.

It does pull in the direction of the field. The amount of magnetization that you have in the system when you are on the high temperatures side looks something like this. At very high fields, all of your spins are aligned with the field.

As you lower the field, because of entropy and thermal fluctuations, they start to move around. They have too much fluctuation when you're in the phase that is a paramagnet. And then when the field goes through 0, the magnetization reverses itself. And you will get a structure such as this.

So this is the behavior of the magnetization as a function of the field if you go along a path such as this, which corresponds to the paramagnet. Now, what happens if I do the same thing but along a lower route, such as here? So again, when you are out here at high magnetic field, not much happens.

But when you hit 0, then you have a piece of magnet. It has some magnetization. So basically, it goes to some particular value. There is hysteresis.

So let's imagine that you start with a system down here and then reduce the magnetic field. And you would be getting a curve such as this. So in some sense, right at H equals to 0, when you are at T less than TC, there is a discontinuity. You don't know whether you are on one side or the other side.

And in fact, these curves are really the same as the isotherms that we had for the liquid gas system if I were to turn them around by 90 degrees. The paramagnetic curve looks very much, topologically, like the isotherm at high temperatures, whereas the discontinuity that we have for the magnetization of the ferromagnet looks like the isotherm that we would have at low temperatures.

And indeed, separating these two, there will be some kind of a critical isotherm. So if I were to go exactly down here, you see what I would get is a curve that is kind of like this-- it comes and hugs the vertical axis and then does that-- which is, in the sense that we were discussing before, identical to the curve that you have for the critical isotherm of the liquid gas system. What do I mean?

I could do the same type of data collapse that I showed you before for the coexistence curve. I can do that for these inverted coexistence curves. I can do the same collapse for this critical isotherm of the ferromagnet. And it would fall on top of the critical isotherm that I would have for neon, argon, krypton, anything else. So that is that kind of overall universality. Now-- yes?

AUDIENCE: In this [INAUDIBLE], why didn't you draw the full loop of hysteresis?

PROFESSOR: Because I'm interested at this point in equilibrium phenomena. Hysteresis is a non-equilibrium thing. So depending on how rapidly you cool things or not, you will get larger curves-- larger hysteresis loops.

AUDIENCE: So what would happen if we had high fields that we slowly reduced to 0? And then go a little bit beyond, and then just stay in that state for a long time? Will the--

PROFESSOR: Yes. If you wait a sufficiently long time, which is the definition of equilibrium, you would follow the curve that I have drawn. The size of the hysteresis loop will go down with the amount of time that you wait. Except that in order to see this, you may have to wait longer than the age of the universe, I don't know. But in principle, that's what's going to happen.

AUDIENCE: I have a question here.

PROFESSOR: Yes

AUDIENCE: When you create the flat line for temperature below the critical one--

PROFESSOR: Yes

AUDIENCE: --are you essentially doing sort of a Maxwell construction again? Or is that too much even for this?

PROFESSOR: No. I mean, I haven't done any theoretical work. At this point, I'm just giving you observations. Once we start to do theory, there is a type of theory for the magnet that is the analog of Maxwell's construction that you would do for the case of the liquid gas.

Indeed, you will have that as the first problem set. So you can figure it out for yourself. Anything else?

OK. So I kind of emphasized that functional forms are the things that are universal. And so in the context of the magnet, it is more clear how to characterize these functional forms. So one of the things that we have over here is that if I plot the true equilibrium magnetization as a function of temperature for fields that are 0-- so H equals to 0.

If I start along the 0 field line, then all the way up to TC, magnetization at high temperatures is of course 0. You're dealing with a paramagnet. By definition, it has no magnetization. When you go below TC, you'll have a system that is spontaneously magnetized.

Again, exactly what that means is it needs a little bit of clarification. Because it does depend on the direction of your field. The magnitude of it is well defined.

If I go from H to minus H, the sign of it could potentially change. But the magnitude has a form such as this. It is again very similar, if you like, to half of that coexistence curve that we had for the liquid gas. And we look at the behavior that you have in this vicinity. And we find that it is well described by a power law.

So we say that M, as T goes to TC for H equals to 0, is characterized by-- is proportional to-- TC minus T to an exponent that is given the symbol beta. Now, sometimes you would write this as TC minus T over TC, to make it dimensionless. It doesn't matter.

It's the same exponent. Sometimes, in order not to have to write all of this again and again, you call this small t. It's just the reduced temperature measured from the critical point.

And so basically, what we have is that t to the beta characterizes the singularity of the coexistence line, the magnetization [INAUDIBLE]. Now, rather than sitting at H equals to 0, I could have sat exactly at T equals to TC and varied H. So essentially, this is the curve.

I say that T equals to TC. This is the blue curve. We see that it has this characteristic form-- it also comes to 0, but not linearly. So there is an exponent that characterizes that, and again it is typically written as 1 over delta.

So you can see that if I go back to the critical isotherm of the liquid gas system, I would conclude that, say, delta P goes like delta v to the power of delta-- or one goes with the other to the power of 1 over delta. So essentially, the shapes of these two things are very much related, characterized by these two exponents.
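To collect the definitions introduced so far, with t the reduced temperature:

\[ m(t, h=0) \sim |t|^{\beta} \;\; (t<0), \qquad m(t=0, h) \sim h^{1/\delta}, \qquad t \equiv \frac{T-T_c}{T_c}. \]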

What else? Another thing that we can certainly measure for a magnet is the susceptibility. So chi-- let's measure it as a function of temperature, but for field equal to 0. So basically, I sit at some temperature, eventually close to TC. I put on a small magnetic field and see how the magnetization changes.

You can see that as long as I am above TC in the paramagnetic phase, there is a linear relationship here. M is proportional to H. And the proportionality constant is the susceptibility. As I get closer to TC, that susceptibility diverges.

OK, so chi as a function of T diverges, with an exponent that is indicated by gamma. Actually, let me be a little bit more precise. So what I have said is that if I plot the susceptibility as a function of temperature, at TC it diverges.

I could also calculate something similar to that below TC. Below TC, it is true that I already have some magnetization. But if I put on a magnetic field, the magnetization will go up.

And I can define the slope of that as being the susceptibility below TC. And again, as I approach TC from below, that susceptibility also diverges. So there is a susceptibility that comes like this.

So you would say, OK. There's a susceptibility above. And there is a susceptibility below. And maybe they diverge with different exponents. At this point, we don't know.

Yes or no-- we will show shortly that indeed the two gammas are really the same, and using one exponent is sufficient for that story. The analog of the susceptibility in the case of the liquid gas would be the compressibility. And the compressibility is related somehow to the inverse of the slopes of these PV curves.

We know that the sign of the slope has to be negative for stability. But right on the critical isotherm, you see that this slope goes to 0, or its inverse diverges. So again, the same exponent gamma that we have for the magnet also describes that divergence. Susceptibility is an example of a response function. You perturb the system and see how it responds.
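In symbols, with plus or minus denoting approach from above or below TC, and with the compressibility as the liquid-gas analog:

\[ \chi_{\pm}(T) \sim |t|^{-\gamma_{\pm}}, \qquad \kappa_T = -\frac{1}{V}\frac{\partial V}{\partial P}\Big|_{T} \sim |t|^{-\gamma}. \]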

Another response function that we've seen is the heat capacity, where you put heat into the system-- or you try to change the temperature and see how the heat content is modified. And again, we observe experimentally-- we already saw something like this when we were discussing the superfluid transition, the lambda transition-- that the heat capacity as a function of temperature happens to diverge at the transition temperature.

And in principle, one can again look at the two sides of the transition and characterize them by divergences that are typically indicated by the exponent alpha. Again, you will see that there are reasons why there is only one exponent [INAUDIBLE]. So this is essentially the only part where there is some zoology involved.

You have to remember and learn where the different exponents come from. The rest of it, I hope, is very logical-- but which one is alpha, which one's beta, which one's gamma. There's four things, five things you should [INAUDIBLE].
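For reference, here is that zoology assembled in one place-- heat capacity, spontaneous magnetization, susceptibility, critical isotherm, and the correlation length, whose exponent nu will be introduced shortly:

\[ C_{\pm} \sim |t|^{-\alpha}, \qquad m \sim |t|^{\beta} \; (t<0), \qquad \chi_{\pm} \sim |t|^{-\gamma}, \qquad m \sim h^{1/\delta}, \qquad \xi \sim |t|^{-\nu}. \]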

OK? Now, the next thing that I want to do is to show you that of the two things that we looked at about the liquid gas transition, one in fact implies the other. So we said that essentially by continuity, when I look at the shape of this critical isotherm, it comes down with this zero slope. Again, to continuously join the type of curve that I have for the magnetization at T greater than TC and at T less than TC below, I should have a curve that comes and hugs the axis-- that is, has infinite susceptibility. I will show you that the infinite susceptibility does in fact imply that you must have collective behavior.

That this critical opalescence that we saw is inseparable from the fact that you have diverging susceptibility. I'll show that in the case of the magnet. But it also applies, of course, to the critical opalescence.

So let's do a little bit of thermodynamics. So imagine I have some kind of Hamiltonian that describes the interactions among the spins in my iron or any other magnet that I have. And if I were to calculate a partition function, what I would need to do is to trace over all degrees of freedom of this system.

Now, for the case of the magnet, it is actually more convenient to look at the ensemble corresponding to this: fix the magnetic field and allow the magnetization to decide where it wants to be. So I really want to evaluate things in this ensemble-- which, to be precise, is not the canonical ensemble but the Gibbs ensemble. So this should be a Gibbs free energy. This should be a Gibbs partition function.

But traditionally, most texts, including my notes, ignore that difference. And the log of this, rather than calling it G, we will call F. But that doesn't make much difference.

This clearly is a function of temperature and H. Now-- let's imagine that we always put the magnetic field in one direction, so that I don't have to worry for the time being about the vectorial aspect. We will go back to the vectorial aspect later on. Clearly-- actually, this M is the net magnetization of the system. If I have a piece of iron, it would be the magnetization of the entire piece of iron.

And the average of that magnetization, given temperature and field, et cetera, I can obtain by taking d log Z by d beta H. Because when I do that, I go back inside the trace and take a derivative with respect to beta H. And I have a trace of M times e to the minus beta H plus beta H M, which is how the different probabilities are weighted.

And because of the log Z, in the derivative I have a 1 over Z that makes these properly normalized probabilities. So that's the standard story. Now, what if I were to take another derivative-- a derivative of M with respect to H, which is what the susceptibility is, after all? So the sensitivity of the system-- that is the derivative of the magnetization with respect to H.

It is the same thing as beta derivative of M with respect to beta H, of course. So I have to go to this expression that I have on top and take another derivative with respect to beta H. The derivative can act on beta H that is in the numerator.

And what that does is it essentially brings down another factor of M. And the Z is not touched. Or I leave the numerator as is and take a derivative of the denominator. And the derivative of z I've already taken.

So essentially, because it's in the denominator, I will get a minus 1. The derivative of 1 over Z will become 1 over Z squared. And it will also give me that factor again-- the same factor, multiplied by itself. So I have the trace of M e to the minus beta H plus beta H M.

And this whole thing gets squared. OK? So this is a very famous formula-- always true. Response functions such as susceptibilities are related to variances-- in this case, the variance of the net magnetization of the system. The same is of course true for other response functions.
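As a numerical illustration-- not something done in the lecture-- here is a minimal sketch that checks chi equals beta times the variance of M, by exact enumeration of a tiny one-dimensional Ising chain. The model, size, and parameters are arbitrary illustrative choices:

```python
import itertools
import numpy as np

def ensemble(N, beta, h):
    """Enumerate all states of a periodic Ising chain (J = 1, k_B = 1).

    Returns the net magnetization of each state and its normalized
    Boltzmann weight under exp(-beta*E + beta*h*M).
    """
    Ms, ws = [], []
    for s in itertools.product([-1, 1], repeat=N):
        E = -sum(s[i] * s[(i + 1) % N] for i in range(N))
        M = sum(s)
        Ms.append(M)
        ws.append(np.exp(-beta * E + beta * h * M))
    Ms, ws = np.array(Ms, dtype=float), np.array(ws)
    return Ms, ws / ws.sum()

N, beta, dh = 8, 0.7, 1e-5
M, w = ensemble(N, beta, 0.0)
variance = (w * M**2).sum() - (w * M).sum() ** 2

# Susceptibility from a numerical derivative of <M> with respect to h.
Mp, wp = ensemble(N, beta, +dh)
Mm, wm = ensemble(N, beta, -dh)
chi = ((wp * Mp).sum() - (wm * Mm).sum()) / (2 * dh)

print(chi, beta * variance)  # the two numbers agree
```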

OK, this doesn't seem to tell us very much. But then I note the following. OK, so I have my magnet. What I have been asking is: what is the net magnetization of a piece of magnet, and what is its response to adding a magnetic field?

If I want to think more microscopically-- if I want to go back to what we saw for the liquid gas system and the critical opalescence, where there were fluctuations in density all over the place-- I expect that in reality, at any particular instant, when I look at this, there will be fluctuations in magnetization from one location of the sample to another. I'm kind of building gradually in the direction of the statistical field description. I expect to have these long wavelength fluctuations.

And that's where I want to go. But at this stage, I don't even need to worry about that. I can say that I define, in whatever way, some kind of a local magnetization, so that the net magnetization-- let's say in d dimensions-- is the integrated version of the local magnetization. So I integrate the magnetization density to get the net magnetization.

OK, I put that over here. I have two factors of big M. So I will get two factors of integration over R and R prime.

Let's stick to three dimensions. It doesn't matter. We can generalize it to d dimensions.

And here I have M of r, M of r prime. And the average would give me the average of this M of r times M of r prime. Whereas in the second part, there is the product of two averages-- the average of M at r, the average of M at r prime.

So it is the covariance. So I basically have to look at the M at some point in my sample, the M at some other point R prime in the sample, calculate the covariance. Now, on average, this piece of iron doesn't know its left corner from its right corner.

So just as in the case of the lattice, I expect that once I do the averaging, on average, M of r, M of r prime should be only a function of r minus r prime. All right, so I expect this covariance-- that's what is indicated by the subscript c-- is really only a function of the separation between the two points, OK? Which means that one of these integrations I can indeed perform.

There's the integral with respect to the relative coordinate and the center of mass coordinate. Forgetting boundary dependence, the latter will give you a factor of V. There was also a factor of beta that I forgot-- there is an overall beta. So I have chi equals beta V times the integral, over the relative coordinate, of the correlations between two spins in the system as a function of their separation.
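That is,

\[ \chi = \beta\, V \int d^3 r \; \langle m(\mathbf{r})\, m(\mathbf{0}) \rangle_c, \qquad \langle m(\mathbf{r})\, m(\mathbf{0}) \rangle_c \equiv \langle m(\mathbf{r})\, m(\mathbf{0}) \rangle - \langle m \rangle^2. \]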

Now, of course, like any other quantity, susceptibility is proportional to how much material you have. It's an extensive quantity. So when I say that the susceptibility diverges, I really mean that the susceptibility per unit volume-- the intensive part is the thing that will diverge.

But the susceptibility per unit volume is the result of doing an integration of this covariance as a function of position. Let's see what we expect this covariance to do. So as a function of separation, if I look at the covariance between two spins-- OK, note that I have already subtracted out the average. So whether you are in the ferromagnetic phase or in the paramagnetic phase, the statement is: how much are the fluctuations around the average at this point and at this other point related?

Well, when the two points come together, what I'm looking at is some kind of a variance of the randomness. Now, when I go further and further away, I expect that eventually, what this point does, as far as fluctuations around the average are concerned, does not influence what is going on very far away. So I expect that as a function of going further and further away, this is something that will eventually die off and go to zero.

Let's imagine that there is some kind of characteristic length scale beyond which the correlations have died off to 0. Then I can bound this integral by essentially looking at the volume over which there are correlations. The correlations within this volume would typically be less than sigma squared.

So let's bound it by sigma squared times the volume that we're dealing with, which is of order 4 pi over 3, xi cubed. The coefficients here are not important. OK, so suppose I know that my response function per unit volume-- like my compressibility or susceptibility, the left hand side-- is diverging and going to infinity as I go to TC. The variance is bounded; I can't do anything with it.

Beta is bounded; I can't do anything with this either. The only thing that I can conclude-- the only knob I have-- is that xi must go to infinity. So chi over V going to infinity implies, and is implied by, the correlation length going to infinity.
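Putting the bound in symbols:

\[ \frac{\chi}{V\,\beta} = \int d^3 r \; \langle m(\mathbf{r})\, m(\mathbf{0}) \rangle_c \;\lesssim\; \sigma^2 \, \frac{4\pi}{3}\, \xi^3, \]

so a divergent chi over V, with beta and sigma squared both bounded at TC, forces xi to diverge.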

So I was actually not quite truthful, because you need to learn one other exponent. How the correlation length diverges as a function of temperature is also important, and it is indicated by an exponent that is called nu. So xi diverges as t to an exponent minus nu. So this is what you were seeing when I was showing you the critical opalescence.

The size of these fluctuations became so large that you couldn't see through the sample. Fluctuations of all kinds of wavelengths were taking place in the system. And if I had presented things in the right order, I could have given you that as a prediction. I should have shown you, say, that the critical isotherm looks like this.

Therefore, if you look at it at TC, you shouldn't be able to see through it. This is [INAUDIBLE]. All right, so those are the phenomena that we would like to explain-- these critical exponents alpha, beta, gamma, and nu, et cetera, being universal and the same across very many different systems.

AUDIENCE: Question.

PROFESSOR: Yes.

AUDIENCE: Is [INAUDIBLE] in elementary fundamental field theory, like-- I mean, like, in the standard model or in quantum field theory where something like this happens where there's-- I mean, it's not a thermodynamical system. It's like elementary theory. And yet--

PROFESSOR: Yep. In some sense, the masses of the different particles are like inverse correlation lengths. Because it is the mass of the particle, like in the nuclear potential, that determines the range of the interactions.

So there are phenomena, such as the case of [INAUDIBLE], where the range is infinite. So in some sense, those phenomena are sitting at the critical point. OK, so the name of the statistical field theory that we will construct is Landau-Ginzburg; it was originally constructed by Landau in connection with superfluidity.

But it can describe a lot of different phase transitions. Let's roughly introduce it in the context of these magnetic systems. So basically, I have my magnet. And again, in principle, I have all kinds of complicated degrees of freedom-- the spins that have quantum mechanical interactions, the exchange interactions, whatever. The result of the behavior of the electrons and their Coulomb interactions with the ions is that eventually, something like nickel becomes a ferromagnet at low temperatures.

Now, hopefully-- and actually, precisely in the context that we are dealing with-- I don't need to know any of those details. I will just focus on the phenomenon that there is a system that undergoes a transition between ferromagnetic and paramagnetic behavior, and focus on calculating an appropriate partition function for the degrees of freedom that change their behavior on going through TC. So what I expect is that, again, just like we saw for the case of the liquid gas system, on one side-- for the magnet-- there will be an average zero magnetization in the paramagnet.

But there will be fluctuations of magnetization, presumably with long wavelengths. On the other side, there will be these fluctuations on top of some average magnetization that has formed in the system. And if I stick sufficiently close to TC, I expect that that magnetization is small.

That's where the exponent beta comes from. So if I go sufficiently close to TC, these m's that I have at each location hopefully will not be very big. So maybe, in the same sense that when I was doing elasticity by going to low temperature I could look at small deformations, by sticking to the vicinity of TC I can look at small magnetization fluctuations.

So what I want to do is to imagine that within my system that has lots and lots of electrons and other microscopic degrees of freedom, I can focus on regions that I average. And with each region that I average, I associate magnetization as a function of position, which is a statistical field in the same sense that displacement was. See, over here, I will write this quantity-- the analog of the U-- in fact by using two different vectorial symbols for reasons that will become obvious shortly, hopefully.

Because I would like to have the possibility of having x describe systems that live in d dimensions-- where d would be 1 if I'm dealing with a wire, d would be 2 if I'm dealing with a flat plane, and d equals 3 in three-dimensional space.

Maybe there's some relationship to relativistic field theories, where d would be 4 for space-time. But M I will allow to be something else, with components M1, M2, up to Mn. And we've already seen two cases where n is either 1 or 3.

Clearly, if I'm thinking about the case of a ferromagnet, then there are three components of the magnetization. n has to be 3. But if I'm looking at the analogous thing for the liquid gas system, the thing that distinguishes different locations is the density-- density fluctuations. And that's a scalar quantity. So that corresponds to n equals to 1.

There's another case: when we are dealing with superfluidity, what we are averaging is a quantum mechanical object that has a phase and an amplitude. It has a real and an imaginary component, and that corresponds to n equals to 2. Again, n equals to 1 would describe something like liquid gas.

n equals to 3 would correspond to something like a magnet. And there's actually no relationship between n and d. n could be larger than 3. So imagine that you take a wire.

So x is clearly one-dimensional. But along the wire, you put three-component spins. So n could still be 3. So we can discuss a whole bunch of different systems at the same time by generalizing our picture of the magnet to have n components and exist in d dimensions. OK?

Now, the thing that I would like to construct is that I look at my system. And I characterize it by different configurations of this field M of R. And if I have many examples of the same system at the same temperature, I will have different realizations of that statistical field. I can assign the different realizations some kind of weight of probability, if you like.

And what I would like to do is to have an idea of what the weight or probability of different configurations is. Just because I have a background in statistical mechanics-- in statistical mechanics, we are used to Boltzmann weights. So we take the log of these weights or probabilities and call it some kind of effective Hamiltonian.

And this effective Hamiltonian is distinct from a true microscopic Hamiltonian that describes the system. It's just a way of describing what the logarithm of the probability for the different configurations is [INAUDIBLE]. Say, well, OK.

How do you go and construct this? Well, I say, OK. Presumably, there is really some true microscopic Hamiltonian. And I can take whatever that microscopic Hamiltonian is that has all of my degrees of freedom.

And then for a particular configuration, I know what the probability of a true microscopic configuration is. Presumably, what I did was to obtain my M of r by somehow averaging over these true microscopic degrees of freedom. So the construction to go from the true microscopic probabilities to this effective weight is just a change of variables.

I have to specify some configuration M of R in my system. That configuration of M of R will be consistent with a huge number of microscopic configurations. I know what the weight of each one of those microscopic configurations is.

I sum over all of them. And I have this. Now, of course, if I could do that, I would immediately solve the full problem and I wouldn't need to have to deal with this. Clearly, I can't do that.

But I can guess what the eventual form of this is, in the same way that I guessed the form of the statistical field theory for elasticity-- just by looking at symmetries and things like that, OK? So in principle, W comes from a change of variables starting from e to the minus beta H; in practice, it comes from symmetries and a variety of other statements and constraints that I will tell you about. Actually, let's keep this.

And let's eliminate this. OK. So there is beta H, which is a function of M as a function of this vector x. The first thing that I will do is what I did for the case of elasticity, which is to write the answer as an integral in d dimensions of some kind of a density at location x.

So this is the same locality type of constraint that we were discussing before, and it has some caveats associated with it. This is going to be a function of the variable at that location x. So that's the field. But I will also allow various derivatives to appear, so that I go beyond a strictly local theory-- which would just give me independent things happening at each location-- by allowing some kind of connection within a neighborhood.

And if I go and recruit higher and higher order derivatives, naturally I would have more terms. Somebody was asking me-- you were asking me last time-- in principle, if the system is something that varies from one position to another position, this very function itself would depend on x. But if we assume that the system is uniform, then we can drop that dependence.

So to be precise, let's do this for the case of zero field. Because when you are at zero field in a magnet, the different directions of space are all the same to you. There's no reason to be pointing in one direction as opposed to another direction.

So because of this symmetry in rotations, in this function, you cannot have M-- actually, you couldn't have M for other reasons. Well, OK.

You couldn't have M because it would break the directionality. But you could have something that is M squared. Again, to be precise, M squared is a sum-- let's say alpha running from 1 to n-- of m alpha of x, m alpha of x.

Now, if the different directions in space are the same, then I can't have a gradient appearing by itself, because it would pick a particular direction. In the same sense that for M squared two M's have to appear together, if the different directions in space-- forward and backward-- are to be treated the same, I have to have gradients appearing together in powers of two. Before doing that, let's say also that sometimes I will write something that is M to the fourth.

M to the fourth is really this quantity M dot M, squared. If I write M to the sixth, it is M dot M, cubed, and so forth. So there are all kinds of terms such as these that can appear in this series. Gradients-- there's a term that I will write symbolically as gradient of M, squared. And by that, I mean that we take the derivative of m alpha with respect to the i-th component of x and then sum over both alpha and i. So that would be the gradient term. So basically, the x i appears twice.

m alpha appears twice. And you can go and construct higher and higher order terms and derivatives, again ensuring that each index-- both on the side of the position x and on the side of the field M-- is a repeated index, to respect the symmetries that are involved. There is actually maybe one other thing to think about, which is that, again like before, I assumed that I can make an analytic expansion in M.

And who says you are allowed to make an analytic expansion in M? Again, the key to that is the averaging that we have to do in the process. And I want you to think back to the central limit theorem. Let's imagine that there is a variable x where the probability of selecting that variable x is actually kind of singular. Maybe it is something like e to the minus absolute value of x, which has a discontinuity in slope.

Maybe it even has an integrable divergence at some point. Maybe it has an additional delta function. All kinds of-- it's a very complicated, singular type of function. Now, I tell you: add thousands of these variables together and tell me what the distribution of the sum is.

And the distribution of the sum, because of the central limit theorem, you know has to look Gaussian. So if you then take its log, it has a nice analytic expansion. So the point is that, again, as part of the coarse graining that we did to get from whatever microscopic degrees of freedom we have to the level of this effective field theory, we added many variables together.

And because of the central limit theorem, I have a lot of confidence that, quite generically, I can make an analytic expansion such as this-- though I have not proven it. So having done all of this, you would say that in order to describe this magnet, what we need is to evaluate the partition function, let's say as a function of temperature, which, if I had enormous power, I would do by tracing e to the minus beta H microscopic.

But what I have done is I have subdivided different configurations of microscopic degrees of freedom into different configurations of this effective magnetization. And so essentially, that sum is the same as integrating over all configurations of this constrained magnetization field. And the probabilities of these configurations of the magnetization field are the exponential of this minus beta H that I'm constructing on the basis of the principles that I told you-- first of all, locality.

I have an integral over x. And then I have to write terms that are consistent with the symmetry. The first term that we saw that is consistent with the symmetry is this M squared.

And let's give a name to its coefficient. Just like the coefficient mu over 2 in elasticity, let's put here a t over 2. There will be higher order terms.

There will be M to the fourth. There will be M to the sixth. There will be a whole bunch of things in principle.

There will be gradient types of terms. So there will be K over 2, gradient of M squared. There will be L over 2 Laplacian of M squared. There would be higher order terms that involve multiplying M's and various gradient of M, et cetera.

And actually, in principle, there could be an overall constant. So there could be up here some overall constant of integration-- let's call it beta F0-- which depends on temperature and so forth but has nothing to do with the ordering of these magnetization degrees of freedom. Actually, if I imagine that I go slightly away from the H equals to 0 axis, just as I did before, I will add here a term minus beta h dot M, and evaluate this whole thing.
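Putting the pieces together, the weight being constructed is-- with u and v as stand-in names for the unnamed coefficients of the higher powers--

\[ \beta \mathcal{H} = \beta F_0 + \int d^d x \left[ \frac{t}{2}\, m^2 + u\, m^4 + v\, m^6 + \cdots + \frac{K}{2} (\nabla m)^2 + \frac{L}{2} (\nabla^2 m)^2 + \cdots - \vec{h} \cdot \vec{m} \right]. \]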

So this is the Landau-Ginzburg theory. In principle, you have to put a lot of terms in the series. Our task will be to show you, quite rigorously, that just a few terms in the series are enough-- just as in the theory of vibrations, for small enough vibrations. In this case, the analog of small enough vibrations is to be close enough to the critical point, which is exactly where we want to calculate these universal exponents. Calculating them will turn out to be a hard problem that we will struggle with over the next many lectures.