
# The second law of thermodynamics, part 3: entropy



The direction of time and thermodynamic processes, the efficiency of engines, and the expansion of our universe all have one thing in common: entropy.

Understanding the second law has proved to be a long journey. It often is one: just ask any physics student. We have so far discussed various statements of the second law of thermodynamics in part one, then, in part two, looked at heat engines in general and the Carnot reversible engine in particular, and discussed its efficiency along with the equivalent terms for refrigerators. Two dissatisfactions, perhaps, remain: our discussions have largely been verbal, and we have been talking about a mysterious thing called entropy that we are yet to properly define and understand. In that sense, this is the article we have been waiting for.

# Clausius’s equation

Think back to the Carnot cycle. In it we saw that a system undergoes a series of processes to return to its initial state. We said, with no particular initial point, that these processes are isothermal expansion, isentropic expansion, isothermal compression, and isentropic compression, specifically in that cyclic order.

Although we represented this with a sort of rhombic, but curved-sided figure, we can always think of a cyclic process as just that: an elliptic cycle. Fig. 1 shows what one such process might look like.

Fig. 1: A generalised cyclic process

It is hard to imagine how this is directly related to a Carnot cycle, so we can perhaps make a small, harmless modification. Suppose the phenomenon is not itself elliptic as shown, but that fig. 1 is a rather averaged-out picture of our cyclic process.

We could then imagine the actual process to be like saw teeth mounted atop this ellipse. We would then not be unrealistic in looking at this as more of a Carnot engine, especially if we considered each pair of saw teeth opposite each other as forming our Carnot cycle, as depicted in fig. 2, which zooms in on a random part of the cyclic process that fig. 1 depicts.

Fig. 2: Carnot cycle in a reversible process

In fact, that in every reversible process there exists a zigzag path between two states consisting of alternating isentropic and isothermal processes is empirically testable. Proceeding further, we invoke Kelvin’s statement of the second law that we discussed in part one. We can state it simply as follows:

### No net work can be extracted from a cyclic process that exchanges heat at a single temperature.

Mathematically, the heat absorbed or emitted, $Q$, the change in internal energy, $\Delta U$, and the work done, $W$, must all balance each other as $Q = \Delta U + W$, varying by sign depending on whether work is done on or by the object, heat is absorbed or emitted, etc. This equation itself, of course, is something we already discussed as the first law of thermodynamics.

Accordingly, for the isothermal process from A to B, where $\Delta U = 0$, we have $Q_1 = W_{AB}$, and for the process from C to D, $Q_2 = W_{CD}$. From Kelvin’s statement then, the heats exchanged along the two isotherms must stand in the ratio of their temperatures, which leaves us with $\frac{Q_1}{Q_2} = \frac{T_1}{T_2}$ and $\frac{Q_1}{T_1} = \frac{Q_2}{T_2}$.

You will recall that we had used a form of this equation in part one when we discussed re-writing the equation for efficiency in terms of temperatures instead of the heat exchanged. Of course, writing $\frac{Q_1}{T_1} = \frac{Q_2}{T_2}$ at this point serves no purpose and we may just as well write it as $\frac{Q_1}{T_1} - \frac{Q_2}{T_2} = 0$. This gives us an important result:

$$\frac{Q_1}{T_1} - \frac{Q_2}{T_2} = 0$$

As a result, if this equation is satisfied for the processes AB and CD, it must also be satisfied for any other such pair of isothermal processes as well. And, thanks to the zero on the right-hand side, this equation can just as well be summed over all the processes constituting our elliptical (cyclic) process,

$$\sum_{\text{cycle}} \frac{Q}{T} = 0$$

or, integrated over every heat exchange, $\mathrm{d}Q$, over the entire cyclic process, we end up with

$$\oint \frac{\mathrm{d}Q}{T} = 0 \tag{1}$$

This is called Clausius’s equation for a cyclic process. At this point it seems disconnected from Clausius’s own statement, which simply declared that heat does not spontaneously flow from a cold body to a hot body. To see how this result means exactly what his statement said, we follow Clausius’s work further.
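Since every quantity involved is that of an ideal gas, equation $1$ is easy to check numerically. The following sketch (in Python, with illustrative numbers of my own choosing) computes the heat exchanged along the two isotherms of a Carnot cycle and confirms that the signed sum of $Q/T$ around the cycle vanishes:

```python
import math

# A numerical check of Clausius's equation for an ideal-gas Carnot
# cycle: the heat exchanged on each isotherm, divided by the isotherm's
# temperature, sums to zero around the cycle. The adiabats exchange no
# heat and so contribute nothing. All numbers are illustrative.
n, R = 1.0, 8.314            # amount of gas (mol) and gas constant (J/mol K)
gamma = 5.0 / 3.0            # monatomic ideal gas
T1, T2 = 500.0, 300.0        # source and sink temperatures (K)

Va, Vb = 1.0e-3, 2.0e-3      # volumes bounding the hot isotherm (m^3)
# The adiabats fix the cold-isotherm volumes via T V^(gamma - 1) = const.
Vc = Vb * (T1 / T2) ** (1.0 / (gamma - 1.0))
Vd = Va * (T1 / T2) ** (1.0 / (gamma - 1.0))

Q1 = n * R * T1 * math.log(Vb / Va)  # heat absorbed at T1
Q2 = n * R * T2 * math.log(Vc / Vd)  # heat rejected at T2

print(Q1 / T1 - Q2 / T2)   # ~0, as equation (1) demands
```

Because the adiabats scale both isotherm volumes by the same factor, $V_c/V_d = V_b/V_a$, and the two $Q/T$ terms cancel exactly.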

# What is entropy?

We have been talking about entropy all this while, never once defining it properly, but it is finally time. Almost.

Let us extend our elliptic, cyclic process example from fig. 1 to a more general case. What if the process is cyclic but does not follow a symmetric path? Recall that Carnot’s own heat engine was somewhat symmetric. The genius of Clausius lies in his extending this to a completely arbitrary process that has only one condition: it must be cyclic. So we might have something that looks like the process depicted in fig. 3 below.

Fig. 3: An arbitrary cyclic process

Equation $1$ can then be re-written, once for the forward process, shown in red, from the initial point $I$ to the final point $F$, and then again for the reverse process, shown in blue. Both of these equate to zero as suggested by Clausius’s equation and hence can themselves be added up as

$$\int_I^F \left(\frac{\mathrm{d}Q}{T}\right)_{\text{red}} + \int_F^I \left(\frac{\mathrm{d}Q}{T}\right)_{\text{blue}} = 0$$

where the subscripts denote that the integrals refer to the initial (red) path and the final (blue) path. Why so? We will soon show there is no need to think of it this way (and of course we have to show it explicitly, we cannot simply assume it). Now the above equation can be simplified, as below, into a result that shows that the two integrals are path independent, and hence that our insistence on the red path being the initial one and so on is undeniably baseless.

$$\int_I^F \left(\frac{\mathrm{d}Q}{T}\right)_{\text{red}} = \int_I^F \left(\frac{\mathrm{d}Q}{T}\right)_{\text{blue}} \tag{2}$$

One may wonder why we had to go through the trouble of proving that the Clausius integral is path independent. On the one hand, simply assuming it would have nothing to back it. Indeed, given that our process is arbitrary, there is every reason to doubt that the integrals are path independent. Nonetheless, we have now proven clearly that they are.

More importantly, however, we must realise that this simple fact gives our integral a universality. We now know that any process, and I mean any process at all, has an integral like $\int \frac{\mathrm{d}Q}{T}$ associated with it. We know, then, that this integral can be written as some quantity that changes between the final point and the initial point as

$$\int_I^F \frac{\mathrm{d}Q}{T} = \square_F - \square_I$$

The box is awkward, so let us call this quantity by some symbol, $S$. So now we know that every process has some

$$\int_I^F \frac{\mathrm{d}Q}{T} = S_F - S_I$$

The quantity, $S_F - S_I$, or $\Delta S$, is known as the change in entropy. And the answer to our question, ‘What is entropy?’, then, is that there is no such thing as entropy. But every process in the known universe where our known laws of physics apply has a quantity, $\Delta S$, known as the change in entropy, associated with it. We therefore have

$$\Delta S = \int_I^F \frac{\mathrm{d}Q}{T} \tag{3}$$

This is the mathematical definition of entropy in terms, quite naturally, of its change. This is a quantity whose physical meaning is complex enough to grasp that we are better off treating it as a mathematical quantity that we are, after our discussion so far, convinced can be associated with any given process. It would not be an exaggeration to say that this is the crux of almost all of thermodynamics.

Although we dropped the modulus long back for convenience, if we did retain it, we would realise that, by equation $3$, the quantity $\Delta S$ is equal to a modulus and hence always a positive quantity. (As for the temperature having no modulus, remember that we are dealing in the thermodynamic temperature, i.e. temperature measured in kelvin, which is never negative.) So, given that $\Delta S$ is positive, we have $S_F > S_I$, which means that for any random, spontaneous process, the entropy always increases. The term ‘spontaneous’ is important here: entropy can decrease, but this requires that we do some work on the system to bring it about. Left to itself, as the above inequality tells us, a system always sees an increase in entropy.
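As a concrete instance of equation $3$, consider one mole of an ideal gas reversibly doubling its volume at a constant temperature. On a reversible isotherm the gas absorbs $Q = nRT\ln(V_2/V_1)$, so dividing by $T$ gives $\Delta S = nR\ln(V_2/V_1)$. A minimal sketch, with numbers chosen purely for illustration:

```python
import math

# The change in entropy, from equation (3), for one mole of an ideal
# gas reversibly doubling its volume at constant temperature. On a
# reversible isotherm the gas absorbs Q = n R T ln(V2/V1), so dividing
# by T gives dS = n R ln(V2/V1): positive whenever the gas expands.
n, R = 1.0, 8.314            # amount of gas (mol), gas constant (J/mol K)
V2_over_V1 = 2.0             # the volume doubles

dS = n * R * math.log(V2_over_V1)
print(dS)   # ~5.76 J/K: positive, as the second law demands
```

Notice that the temperature cancels out entirely: any isothermal doubling of volume carries the same entropy change.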

Clausius himself first defined entropy as $\int \mathrm{d}Q/T$, which he called the equivalence value, and asserted that the entropy of the universe tends to a maximum. Although he only talked about entropy tending to a maximum as a means of saying it would never reduce, we find that Clausius was ultimately right. The entropy of the universe, as we will see towards the end of this article, increases and never comes to a halt.

The time-variation of entropy, $\dot{S} = \mathrm{d}S/\mathrm{d}t$, is called the rate of entropy production. The rate of entropy production times the temperature at a given point in a machine, $T\dot{S}$, gives the dissipated energy. This is a useful term in realising how the efficiency of a machine can never be a hundred percent (we will see this in greater detail later, although not in terms of $\dot{S}$).

There is one last point of note here: the right- and left-hand sides in equation $2$ are, in fact, the expressions for the entropy change of an ideal forward and reverse process, and they are shown to be equal to each other. This clearly means that the change in entropy in a cyclic process is zero.

# The statistical statement of the second law

Our mathematical definition in equation $3$ does a good job of explaining its relationship with heat and temperature, making it understandably a part of the second law, at least in terms of heat engines and efficiencies. We will see this in greater detail soon, but first we still need to understand that one statement of the second law we have not talked about after briefly mentioning it in part one: the statistical statement, which says —

### Any system, over a period of time, spontaneously moves towards the thermodynamic macrostate corresponding to the largest number of microstates.

To better understand this we will have to re-define entropy from scratch. We already discussed how, somewhat trivially, this statement tells us that a random arrangement of T-H-T-T-H-H-H-T-T-H-… is more likely to occur than is an ordered H-H-H-H-H-H-H-H-H-… in case of a series of coin tosses. In other words, any process is simply more likely to spontaneously tend towards disorder unless you do specific external work on it to keep it ordered. Not unlike your bedroom.

We can extend this to a more laboratory-worthy example and, of course, bring some mathematics into our discussion. In the end we will see that we end up with a solution that is fundamentally the same as our simple claim made in terms of coin tosses.

Consider a container with two equal-sized compartments. Say we have some gas in one of them, the left one, and a vacuum in the other. If we now perforate the wall separating the two compartments, experience tells us that the gas is going to spread from the left half to the right, eventually going all over the container. We can regulate how quickly it spreads by varying the perforations but that is of no consequence for our intentions now.

Say we start off with $N$ gas molecules initially. After a while, the probability, $P$, of finding $m$ out of those $N$ particles on one side of our container is given by

$$P(m) = \frac{1}{2^N}\binom{N}{m}$$

Case 1: What is the probability that we will find an ordered arrangement of all gas molecules on the left side? This describes a system with complete order. Let us give numbers to better understand this situation. Say we start off with just 30 molecules. On perforation, what is the likelihood that all 30 molecules remain disciplined enough to stay on the same (in this case, left) side? In other words, what is the likelihood of having $m = 30$?

$$P(30) = \frac{1}{2^{30}}\binom{30}{30} = \frac{1}{2^{30}} \approx 9.3 \times 10^{-10}$$

In other words, not a lot. There is only a one-in-a-billion chance of finding order in our system with just 30 molecules. Most realistic systems of this sort have billions of billions of molecules in a much smaller space. (For instance, a single cubic centimetre of any gas at everyday conditions contains some $10^{19}$ molecules.)

Case 2: What is the probability that we will find a perfectly disordered arrangement of all gas molecules spread all over the container? This is the exact opposite of case 1, and we now have complete disorder. In other words, 15 of our 30 initial molecules are to be found on any one side, making $m = 15$ and

$$P(15) = \frac{1}{2^{30}}\binom{30}{15} \approx 0.144$$

which is over a hundred million times more likely than the complete order case.

We could prove it stepwise, taking $m$ at several values between 0 and 30, but it is not unreasonable at this point to simply state that $P$ maxes out at $m = N/2$, which is complete disorder, or, in the language of the statistical statement, the case of maximum microstates. In other words, any given system is statistically most likely to be found in a state of utter disorder. And, in everyday circumstances, any system is more likely to tend towards a state of greater disorder.
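The stepwise check is, in fact, a few lines of code. This sketch (the function name is mine) evaluates $P(m)$ for our 30-molecule example and confirms that it peaks at $m = N/2$:

```python
from math import comb

# Probabilities for the two-compartment gas example: P(m) is the chance
# of finding m of the N molecules on the left side, assuming each
# molecule independently ends up on either side with equal likelihood.
N = 30

def P(m):
    return comb(N, m) / 2 ** N

print(P(30))          # ~9.3e-10: complete order, about one in a billion
print(P(15))          # ~0.144: maximum disorder, the likeliest macrostate
print(P(15) / P(30))  # ~1.5e8: disorder wins by a huge factor
print(max(range(N + 1), key=P))   # 15, i.e. P is maximised at m = N/2
```

The same computation with a realistic $N$ makes the ordered state so unlikely that, for all practical purposes, it never occurs.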

The change in entropy is quite simply the change in the order of a system. Entropy can loosely be stated as a measure of the disorder in a system. So a given process or a given system is statistically more likely to tend to a state of greater disorder, or entropy. This is in complete agreement with our assertion that, for a spontaneous process (which is exactly what our gas-in-container experiment represents), we have $\Delta S > 0$, exactly as we expected.

# Reversibility and thermodynamic equilibrium

The concept of having an increasing quantity (called entropy) that determines whether a spontaneous process can occur gives rise to a simple pattern with which we can examine the reversibility of a process. Recall the condition $\Delta S \geq 0$ that we discussed before. We will now use both these cases ($\Delta S > 0$ and $\Delta S = 0$) to define the reversibility of a process.

If a system goes from a state A to a state B spontaneously with an increase in entropy, then we know that such a process cannot spontaneously come back from state B to state A since this would require a decrease in entropy. Such processes are called irreversible processes.

Suppose the same system goes from A to B keeping its entropy constant, then there is a possibility that it can return from B to A keeping the entropy constant once again. Both directions, A to B and B to A, can then occur spontaneously. Such processes are called reversible processes.

In the statistical example, for our experimental setup with a container filled with some gas, we saw how, by starting with all the gas in one compartment and then opening the ‘gate’ between the compartments, we see a gradual increase in disorder as we go from $m = N$ to $m = N/2$, at which point the system is said to be in a state of maximum disorder.

Such a system that has achieved maximum possible disorder and, as a result, sees an even spread of matter $and therefore of energy$, sees no further spontaneous increase in entropy, because the evenly spread energy means there is nowhere left for energy to spontaneously flow to. All heat exchange will therefore stop, as will all movement, and such a system is said to be in thermodynamic equilibrium.

This, of course, is a statistical definition; equivalently and more physically, a system in thermodynamic equilibrium is one in mechanical, chemical and thermal equilibrium at once, seeing no changes in pressure, chemical composition and temperature respectively.
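The approach to equilibrium described above can be sketched with a toy simulation, the Ehrenfest urn model, in which one randomly chosen molecule hops across the partition at each step. The parameters and variable names here are mine, purely for illustration:

```python
import random

# A toy simulation of the perforated container (the Ehrenfest model):
# at each step one molecule, chosen uniformly at random, hops to the
# other side. Starting from complete order, with all N molecules on the
# left, the left-side count drifts to N/2 and then merely fluctuates
# about it: the state of maximum disorder, i.e. equilibrium.
random.seed(1)
N, steps = 1000, 20000
left = N                  # all molecules start on the left
for _ in range(steps):
    if random.random() < left / N:   # the chosen molecule was on the left
        left -= 1
    else:                            # it was on the right
        left += 1

print(left)   # settles near N/2 = 500
```

Run it with any seed: the count always relaxes towards $N/2$, and the residual fluctuations shrink, relative to $N$, as $N$ grows.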

# The direction of spontaneous processes and time

Here is a thought we take for granted too often: if you were locked up in a room with no light, no windows and no objects whatsoever except a wine glass with a lot of warm coffee powder in it, and the wine glass were to shatter, you would not conclusively know the direction of time until it shattered. Until, in other words, it went from a state of order to disorder.

In other words, the direction of increase in entropy of a spontaneous process, a wine glass going from its regular, usable state to a shattered one, also gives the direction of time. This is something we think about far too rarely. The only reason we know the direction of time to be from when a coffee mug falls off to when it breaks into pieces is because of this increase in entropy.

This could also be stated as causality, in which case we can just as well define it as the order of spontaneous events in phase with the increase in entropy of these processes. So, as entropy increases, we go from cause to effect, maintaining causality.

# Entropy and the efficiency of heat engines

In part two, while describing Carnot’s heat engine, we stated that no other heat engine can have an efficiency so great. The efficiency was not great by itself, yet it was greater than what any other heat engine could ever achieve, or so we claimed. Now we prove it.

In part one, we saw how an engine satisfies the equation $W = Q_1 - Q_2$ when a working substance draws heat $Q_1$ from a high-temperature source and releases some heat $Q_2$ to a low-temperature sink while doing work $W$ in between. We also said this was true in case of a Carnot reversible heat engine.

What then happens in case of a real heat engine (as opposed to Carnot’s ideal one)? First of all, as heat $Q_1$ flows from the source to the working fluid, in spite of all our efforts to keep the process isothermal (which is an extremely difficult task) it is safe to assume some temperature difference does persist, however small. In other words, for some temperature, $T$, of the working substance, there occurs some change in entropy:

$$\Delta S_1 = \frac{Q_1}{T} \geq \frac{Q_1}{T_1}$$

where the inequality arises because of $T \leq T_1$.

We can apply the same logic to the outward path from the working substance too. In this case, for heat transfer, the temperature, $T_2$, of the sink must be less than the temperature, $T'$, of the working substance. Assuming this is an isothermal process as well, at least in so far as practicality allows it to be, we find a change in entropy given by

$$\Delta S_2 = -\frac{Q_2}{T'} \geq -\frac{Q_2}{T_2}$$

where, once again, the inequality arises due to $T' \geq T_2$.

We know that the change in entropy in a cyclic process is zero, i.e. $\Delta S_1 + \Delta S_2 = 0$, which we can write in terms of our inequalities above as

$$\frac{Q_1}{T_1} \leq \frac{Q_2}{T_2} \quad\text{or}\quad 1 - \frac{Q_2}{Q_1} \leq 1 - \frac{T_2}{T_1} \tag{4}$$

In other words, the left-hand side now gives the ratio of the practical work involved, $W = Q_1 - Q_2$, to a given heat transfer of $Q_1$ from the source, which is nothing but the efficiency of a practical or real heat engine. And the right-hand side is an expression we have already encountered, the efficiency of an ideal heat engine. Therefore,

$$\eta_{\text{real}} \leq \eta_{\text{ideal}} \tag{5}$$

is mathematical evidence telling us that Carnot’s seemingly casual statement, about his ideal heat engine being the most efficient possible one, was actually deeply rooted in scientific thought. Further, notice how the efficiency of the ideal, Carnot-reversible heat engine, $1 - T_2/T_1$, is independent of the properties of the working substance itself and depends only on temperature. This was another claim of Carnot’s statement which, as it turns out, also has solid mathematical evidence to back it.
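Equation $5$ is simple enough to put numbers to. In the sketch below the heat figures for the ‘real’ engine are invented purely for illustration; only the two temperatures fix the Carnot limit:

```python
# Comparing a real engine's efficiency with the Carnot limit of
# equation (5). The heat figures for the 'real' engine are invented for
# illustration; the Carnot efficiency depends only on the temperatures.
T1, T2 = 500.0, 300.0        # source and sink temperatures (K)
eta_ideal = 1.0 - T2 / T1    # Carnot efficiency: 0.4

Q1, Q2 = 1000.0, 700.0       # heat drawn from source, rejected to sink (J)
eta_real = (Q1 - Q2) / Q1    # work out over heat in: 0.3

print(eta_real, eta_ideal, eta_real <= eta_ideal)
```

A real engine between these reservoirs may reject more heat than the figures above suggest, but never less than equation $4$ allows, so its efficiency can approach 0.4 only in the reversible limit.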

# Perpetual motion machines

One of the more maddening pursuits of scientists in the 15th and 16th centuries was the idea of building a perpetual motion machine, a machine that would self-sufficiently run forever. From all the statements we have seen so far it is clear that the only potentially successful perpetual motion machine would be a Carnot-reversible heat engine. But we also saw that such an engine is ideal, which means no perpetual motion machine can possibly exist. However, this did not stop people from pursuing the possibility of such perfectly efficient machines.

We will discuss perpetual motion briefly, solely out of interest for its various implications and how it shows the universality of the laws of thermodynamics. However, Physics Capsule does not endorse that you spend your time building such impossible contraptions; we just believe they are fun to think about.

One of the basic problems a perpetual motion machine must overcome is the excess heat lost. Most other problems, such as the nature of the working substance used and how it absorbs heat, can be overcome easily. The speed with which a process occurs is the cause of most energy loss and, in turn, of low efficiency. In thermodynamics in general this is overcome by using what are called quasistatic processes, or processes that proceed so slowly they might as well be static but are not, of course.

Perpetual motion machines come in three types, all free from reality:

1. The first kind is a machine that works without the input of energy, thereby violating the first law of thermodynamics.
2. The second kind is a machine that violates the second law of thermodynamics by spontaneously converting all heat input into useful work, which conserves energy but, as we discussed throughout this three-part series, violates the idea of entropy.
3. The third kind is a machine that eliminates energy loss by eliminating friction, unwanted heat exchange etc.

Unsurprisingly, none of these three types of machines exist, but there are examples that come close if, instead of calling them ‘perpetually’ moving machines, we simply consider them to be machines that work for an exceptionally long time without external influence.

A planet or natural satellite is an excellent example of such a thermodynamic system. The earth has been going around the sun for a few billion years, which, by human measure already feels perpetual, but any such celestial body, including the earth, is slowly losing energy as it falls towards the centre of its orbit.

Closer to home, the Fludd water screw, built in the early 1600s by the Englishman Robert Fludd, used a contraption powered by running water that, as an aside, ground grain, and whose output water would be fed back to the main water supply to keep the contraption running. There are still others, such as the overbalanced wheel or the capillary bowl, which work for a long time, but not forever, or are simply plans on paper without working models, or, interestingly, are contraptions that do no useful work.

# Maxwell’s demon

One of the early potential violations of the second law of thermodynamics, though, came from the physicist, J.C. Maxwell. While it was by no means a perpetual motion machine, it is worth discussing in the same vein since it is concerned with the violation of the second law. In an 1867 letter to the Scottish physicist, Peter Guthrie Tait, Maxwell writes of what he believes to be a probable method of circumventing the second law. His argument, more than being a statement disproving the second law, was an attempt to show that the second law was purely a statistical idea.

Returning to our example of the container with a partition through which gas molecules can move around, if we now manage to have a gatekeeper (Maxwell calls him a ‘finite being’ but Lord Kelvin later called him a ‘demon’) who, beginning with all the gas trapped in the left half of the container, selectively opens the partition so as to let only high-velocity molecules through to the right side, we will eventually end up with a setup that consists of half the container filled with high-velocity molecules and the other half filled with slow-moving molecules. In other words, one half of our container (the right half) will heat up while the other half cools down.

While this seems good enough on paper, recent calculations have shown that Maxwell’s demon will itself cause an increase in entropy as it selects molecules and opens and shuts the gate (incredibly quickly) to let them pass. The fundamental argument against Maxwell’s demon, as against so many perpetual motion machines, is that the system sounds plausible in words, but careful calculations show that the entropy of the full system still increases, in agreement with the second law.

# The heat death of our universe

With our discussion so far we have successfully tied up all loose ends from parts 1 and 2 in this series: the various statements, the ideas of heat engines and refrigerators, the statistical significance of the second law, and their mathematical grounding in a strange quantity known as entropy.

We also talked about perpetual motion and Maxwell’s demon, but there is one last scientifically important debate that arose around the time of the second law. Yet again we return to the container in our previous examples: we saw, while discussing the statistical likelihood of the spontaneous increase of entropy, that eventually the system will reach a state of maximum disorder. Beyond this, then, the system sees no increase in entropy.

As we defined earlier, this means the system is in a state of thermodynamic equilibrium, experiencing, particularly, no heat exchange of any sort. One way to think of this, given that the container is a completely isolated system, is to reason that once the molecules of the gas have dissipated energy and spread all over the container reaching maximum disorder, they no longer need to move and will thus not cause any further heat dissipation.

Lord Kelvin applied this idea to the entire universe, treating it as an isolated system, and reasoned along the same lines that, at some point, the universe will reach a state of maximum disorder, with energy spread evenly throughout so that no spontaneous energy transfer can possibly occur, thereby dying a ‘heat death’. This idea, called the heat death of the universe, was long seen as valid and many debates centred around it; the idea seemed almost frightening, although the universe showed no signs of any such slowdown just yet.

The solution would come over a century later when we first realised that the universe was, in fact, not like the container of gas. Let us modify our experiment: what if the container was somehow becoming larger and larger faster than the molecules were spreading about? This would mean that maximum disorder would never be achieved since there would always be some volume unoccupied by the molecules of gas. In other words, as long as the container would keep expanding, the system will never reach thermodynamic equilibrium.

We now have ample evidence that the universe is expanding and, much like our modified experiment, there is no fear of the universe dying a ‘heat death’ as a result of this. The mass in the universe will never reach a state of complete disorder as the universe is constantly expanding, thereby making a heat death an extremely remote possibility.

The second law, therefore, is immense in the amount of information it contains by direct implications alone. It also brings in the idea of using statistics to examine physical phenomena, a characteristic that often is coupled with thermodynamics as a whole and a concept that will go on to form an important part of our discussions in this field over the coming days.


V.H. Belvadi is an Assistant Professor of Physics. He teaches postgraduate courses in advanced classical mechanics, astrophysics and general relativity. When he is free he makes photographs and short films, writes on his personal website, makes music, reads voraciously, or plays his violin. He currently serves as the Editor-in-Chief of Physics Capsule.
