The post The postulate of equal a priori probabilities appeared first on Physics Capsule.

Say our macroscopic system is something like a gas inside a sealed box. We wish to describe this gas quantitatively. The most straightforward way to do this is to count the number of particles forming the gas and specify the position and momentum of each particle at a specific instant of time. This set of values of the positions and momenta of the gas particles is what we call the **state of the system**. If we observe the system at a later instant, the positions and momenta of the particles will, in general, have changed, and hence we say that the state of the system has changed.

With the system set up, let us now make our aim more specific: we want to find the probability of finding our system in a specific state, out of the literally infinite possibilities. Before you scoff us off saying that such a probability would essentially be equal to zero (it would be like finding the probability of catching one specific mosquito out of the billions that exist out there), note that the problem becomes much more solvable if we were to find the probability of catching a mosquito belonging to a specific species, rather than hunt a particular individual. Therefore, we impose certain constraints on our system: it has a volume $V$, contains $N$ particles, and its energy lies anywhere in the margin between $E$ and $E + \delta E$. With these impositions, the number of possible states the system can exist in reduces.

Now, if you look at the problem from a slightly different perspective, you will appreciate that calculating the probability of our system having a specific state (under the given constraints) is equivalent to calculating the probability of picking a single system out of a large number of systems, in which each system is in a unique state. That is, instead of imagining a large number of possibilities for a single system, imagine a large number of systems, each realising one possibility. In terms of the mosquito analogy: instead of counting the number of diseases a single mosquito can spread, you consider a large number of mosquitoes and assume that each mosquito spreads only one kind of disease. The probability of a particular disease being spread by a single bite of your mosquito is then equivalent to the probability that a particular mosquito, out of the whole group, is the one that bites (assuming that a single bite will certainly inflict its disease!).

Now, this large number of systems isn’t something real. They are mere mental copies of the system that we’re trying to study. All these systems are identical in composition but each has a different state (under the constraints). We call such an imaginary cluster of systems, an **ensemble**. Our system, that we set out to study, is one of these.

We’ve set up this extraordinary setting of systems, only to calculate the probability of finding our system in one of the states that obey the constraints. As explained above, this will be equal to calculating the probability of picking a system from the ensemble, that has any of the required states (under constraints). So, we simply calculate the total number of systems in the ensemble that have these required states and divide by the total number of systems present in the ensemble.

One important detail is that as we calculate this probability, our original system is in thermal equilibrium; that is, its macrostate does not change with the passage of time (macrostates involve macroscopic properties such as temperature and pressure).

Finally, the big question: will the probability of picking one system from the ensemble be the same as that of picking any other? In other words, is it equally likely that our system be in a specific state as in any other state obeying the assumed constraints? The answer is a simple yes. How? Ask yourself why, when a coin is tossed randomly, it is equally likely that you get a head or a tail. You would say it is common knowledge that a head and a tail are equally likely to turn up. We say the same here: it is obvious a priori, before any observation, that the probability of finding the system in one state equals the probability of finding it in any other state of the ensemble. There is no bias, nothing that would lead us to suppose that one of the states is more likely to be found than another. All states have an equal chance of turning up. Hence, equal a priori probabilities.
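
This "no bias" claim is easy to check numerically. The Python sketch below (purely illustrative; the function name and numbers are our own) simulates a large number of fair coin tosses and shows the fraction of heads settling near one half, exactly as the a priori argument predicts.

```python
import random

def toss_frequencies(n_tosses, seed=0):
    """Simulate fair coin tosses and return the fraction of heads."""
    rng = random.Random(seed)  # seeded for reproducibility
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# With no bias built into the simulation, heads and tails turn up
# equally often in the long run.
freq = toss_frequencies(100_000)
print(f"fraction of heads: {freq:.3f}")
```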


The post The origins of life appeared first on Physics Capsule.

Rewind 4.5 billion years: the solar system has just formed from a collapsed cloud of dust. Most of the collapsed mass sits at the centre of the system and has formed the Sun, around which revolve hundreds of rocky masses. These are constantly colliding, sometimes clinging to one another and forming larger rocks, at other times shattering into tinier ones. Such repeated processes eventually led to the formation of larger masses called planets. Our Earth was one of them, but it was nothing like it is today.

The surface of the earth was but a sea of molten rock. It would take some time for the surface to cool and solidify. Even after the cooling, a lot of heat remained trapped beneath the surface, often erupting in the form of volcanoes. The gases from these eruptions formed the first atmosphere of the earth. But still, there was no water, no oxygen, and no chance for life to thrive.

It was the icy meteors that entered the earth's atmosphere from outer space that introduced water to our planet. With clouds formed, there was rain, and the earth harbored water for the first time. But the presence of water only facilitates life; it does not necessarily imply the origin of life. How exactly, then, did life begin?

As we said before, we do not know how it all began. Different people have proposed different explanations. Some say it was the lightning bolts from the skies; some say it was the deep-sea vents underneath. Some even say that the comets that brought the water also brought the first microbes, and life evolved from there. And some say it all happened in the most natural manner: inch by inch, with repeated reactions, molecules grew more and more complex, going from single atoms to single cells to complex multicellular organisms capable of reproduction.

The bacteria that so formed used sunlight to turn the carbon dioxide available to them into food, releasing oxygen in the process (what we now call photosynthesis). They produced oxygen in such large amounts that the newer life forms evolving thereafter based their existence on its consumption. With the stage now set, life evolved, becoming increasingly advanced, and from what began as the complexity of our universe came about a system just as complex within every single one of us.


The post Laws of reflection of light appeared first on Physics Capsule.

When you look at yourself in the mirror every morning, you naturally begin examining your appearance without giving a second thought to how exactly you are able to see yourself in the mirror in the first place. A more careful inspection of what happens when you look at your face in the mirror can be quite fascinating. Firstly, to see yourself in the mirror, the room you are in must be fairly lit. This external light first falls on your face and is reflected in different directions. Consider the light that is reflected towards the mirror: this light undergoes reflection again at the mirror, and the reflected light is what enters your eyes and gives you a perception of your appearance. The reflection of light at the mirror happens according to certain rules. There are basically two laws.

Speaking only in terms of single rays: the ray that is incident on the mirror, the ray that is reflected back, and the “normal” to the mirror surface, all lie in the same plane.

Recall that a plane can be uniquely defined by two intersecting lines: give us two such lines and we will tell you the plane in which they lie. With the plane defined this way, we can very easily decide whether or not a given third line lies in the same plane. In stating the first law, we have three "lines": the incident ray, the reflected ray and the normal. While the incident and reflected rays are very much real, the normal is a geometric construction, an imaginary line drawn for our reference. It is drawn perpendicular to the surface of the mirror, passing through the point at which the incident ray strikes the mirror (see the image below).

Now, what the rule says is, when a light ray is incident on the plane mirror, it forms a unique plane with the normal. With this plane defined, now the reflected ray has to lie in the same plane (the direction in which the reflected ray can travel is confined to this plane).

The angle of incidence equals the angle of reflection ($\theta_i = \theta_r$)

What this law says is that the reflected ray is not just confined to the plane of the incident ray and the normal, but is also confined to travel in a particular direction after reflection. This direction is ascertained by measuring the angles at which the incidence and the reflection happen. The angle of incidence, $\theta_i$, is the angle formed between the incident ray and the normal. Similarly, the angle of reflection, $\theta_r$, is the angle formed between the reflected ray and the normal. The second law, then, says that the ray is reflected at an angle with the normal equal to the angle the incident ray makes with the normal.
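
Both laws fall out of a single vector formula commonly used in optics and computer graphics: $r = d - 2(d\cdot n)\,n$, where $d$ is the incident direction and $n$ the unit normal. The Python sketch below (our own illustrative code, with hypothetical numbers) checks the second law for a ray striking a plane mirror at 45 degrees.

```python
import math

def reflect(d, n):
    """Reflect incident direction d off a plane with unit normal n,
    using r = d - 2 (d . n) n.  The reflected ray automatically lies in
    the plane of d and n, and makes the same angle with the normal."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# A ray hitting a horizontal mirror (normal along +y) at 45 degrees:
s = 1 / math.sqrt(2)
incident = (s, -s, 0.0)
reflected = reflect(incident, (0.0, 1.0, 0.0))

# Angles measured from the normal come out equal.
theta_i = math.acos(-incident[1])   # angle between -d and the normal
theta_r = math.acos(reflected[1])   # angle between r and the normal
print(round(math.degrees(theta_i)), round(math.degrees(theta_r)))  # 45 45
```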

In conclusion, how a ray gets reflected from a plane mirror completely depends on how the incident ray is incident. Both, the plane in which the reflection must happen and the angle at which the reflected ray must emerge, are decided by the direction from which the incident ray is incident on the mirror.

An important special case is when the angle of incidence equals zero degrees. This is the case of "normal incidence" of light. As the second law says, the angle of reflection will also be zero. Therefore, a light ray falling normally on a plane mirror reflects back along the same path it came from.

And as you go on increasing the angle of incidence from zero, the angle of reflection increases too, remaining equal to the angle of incidence at every stage.

Up next, we’ll see how reflection of light happens at mirrors that may not be plane, and may be curved.


The post Conservation laws: mass, energy and momentum appeared first on Physics Capsule.

All principles of conservation say more or less the same thing: the observable being conserved neither goes anywhere nor appears out of the blue. The law of conservation of mass says the same:

Mass can neither be created nor destroyed.

Quite simple, yet it has a depth of meaning within it. All mass in this universe is conserved. No new mass can be created (meaning, no magic) and, conversely, the mass already present cannot be destroyed. The meaning of 'destruction' here is to be understood properly. Sure, you can break a piece of chalk, but (classically) the number of chalk molecules before and after the breakage, counting those on your hands, on the two pieces, and those that fell on the floor, remains the same.

The latter part of the law allows for the phase transitions of matter and for complex phenomena like radioactive decay. Also, from jewellery to furniture, anything that involves making items by melting and moulding is an example, and a proof, of this law. To make an iron bar, one has to mine the ferrous ore, extract iron from it, melt it into a liquid, pour it into a mould, and let it cool. No iron ore, no iron bar. That is the iron rule, if you will excuse the pun.

A subtler, more restrictive condition also exists. Mass is localised, that is, it occupies a limited amount of space; it has a head and a tail, so to speak. Mass retains its existence at all points of time. If a ball has to go from point A to point B, it has to go through some path between A and B. It has to *travel* the distance between them. The ball will not suddenly disappear from the South Pole and reappear, with or without a time lag, at the North Pole.

The same law that applies to mass applies to energy as well:

Energy can neither be created nor destroyed. It can only be transformed from one form to another.

Simple, again. The total amount of energy in this universe is conserved: what the universe had at the start of time, it has now, and it will have in the future. The second sentence is what clears up the many doubts that come to mind because of the first. Most natural displays of motion are instances of conversion of potential energy into kinetic energy: a spring jumping when the force pressing it down is released, anything falling from a higher place to a lower place, an arrow flying from a bow, and so on.

*Potential energy* is an apt name. A body that possesses this energy has the *potential* to do something – jump or fall down or whatever the case may be. Let us go about it one by one:

- When a spring is pressed, the energy applied to it gets stored as potential energy, simply $E_p = \frac{1}{2}kx^2$, where $k$ is the stiffness (spring constant) of the spring and $x$ is the length by which it is compressed. When the pressure is released, there is nothing holding back the spring, all the stored energy is converted into kinetic energy, and the spring jumps.
- When something is lifted up, work is done against the force of gravity. All this work done is stored in the body as potential energy. When the object then has a chance of falling down, the potential energy slowly transforms itself into kinetic energy. This transformation goes on till either the object's fall is broken or it reaches terminal velocity, by which time its potential energy has been converted into kinetic energy, lost to friction, and so on. This increase in kinetic energy is an increase in $\frac{1}{2}mv^2$, and in usual cases, since the mass $m$ is a constant, it is the velocity, or rather $v^2$, that increases. This is another way of looking at why something accelerates while falling, as opposed to invoking $g$.
- As the arrow is pulled back, the bowstring is stretched taut, storing potential energy into it. When released, all of this potential energy is converted into kinetic energy as the string tries to get back to its original state. This violent jerk is passed on to the arrow, shooting it forward and off the bow.
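
As a concrete check of the spring example, one can equate the stored potential energy $\frac{1}{2}kx^2$ to the kinetic energy $\frac{1}{2}mv^2$ and solve for the launch speed. The Python sketch below uses hypothetical numbers of our own choosing:

```python
import math

def launch_speed(k, x, m):
    """Speed of a mass m launched by a spring of stiffness k compressed by x,
    assuming all the stored energy (1/2) k x^2 becomes kinetic energy (1/2) m v^2."""
    potential = 0.5 * k * x**2
    return math.sqrt(2 * potential / m)

# Hypothetical numbers: a 200 N/m spring compressed by 0.1 m launching 50 g.
v = launch_speed(k=200.0, x=0.1, m=0.05)
print(f"launch speed: {v:.2f} m/s")
```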

In all these cases, one point is of importance. The kinetic energy that the body acquires is derived from the potential energy that it had, and nothing else. This implies that, in any case, the kinetic energy is equal to or less than the potential energy, never more. In mathematical terms, the total energy of the body, $E$, is equal to the sum of its kinetic and potential energies, $E = E_k + E_p$.

This implies the law of conservation – without any external source of energy, the LHS of the equation remains a constant, and, in turn, any change in the kinetic energy forces an equal negative change in the potential energy.

Apart from these, there are other occurrences that are not simple changes from stored energy to moving energy. Riding a bicycle, for instance. The rider uses his weight and muscular energy to rotate the pedals, which, through a system of gears, give rotational energy to the wheel. The wheel rotates and, by friction between the tyres and the road, the bicycle moves forward. In other words, a bicycle is a machine that allows you to convert muscular energy into rotational energy and that into kinetic energy. It would not work if the rider did not have enough strength in him.

One of the most famous equations in physics, though many do not grasp its true meaning, is Einstein's mass-energy equivalence, $E = mc^2$, where $c$ is the speed of light. This is not to be taken at face value; indeed, a coin or a button has nothing to do with the speed of light. What the equation means to say is that *if* a coin of mass 2 grams were completely converted into energy, the result would be $E = (2\times 10^{-3}\,\mathrm{kg})\,(3\times 10^{8}\,\mathrm{m/s})^2 \approx 1.8\times 10^{14}$ joules. Try to write down that number with all its zeroes and you realise how big it is.
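
The arithmetic is simple enough to verify directly; this short Python check uses the rounded value $c \approx 3\times10^8$ m/s:

```python
c = 3.0e8    # speed of light in m/s (rounded)
m = 2.0e-3   # a 2 gram coin, in kilograms

E = m * c**2  # mass-energy equivalence
print(f"E = {E:.1e} J")  # E = 1.8e+14 J
```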

Why is the equation so important? It is what explains the energy released during a nuclear reaction, for example. During a nuclear reaction, it was observed that there is a slight difference between the sum of the masses of the reactants and that of the products. This slight defect, $\Delta m$, is converted into energy, numerically given by $E = \Delta m\, c^2$, which is why the equation is so useful in reactors.

Consider another case where matter becomes energy: when matter and antimatter meet. Every particle has an antiparticle, like the positron for the electron and the antiproton for the proton, that is the same in all respects except that it carries the opposite charge. Matter made of these antiparticles is aptly named antimatter. When a particle and its antiparticle meet, they annihilate each other and what is left is a lot of energy, numerically given by $E = 2Mc^2$, where $M$ is the mass of each particle.

But all of the above seemingly goes against the law of conservation of mass. Solid mass seems to be destroyed! That is where the equivalence of mass and energy plays its role. Since both independently have their own conservation laws, it is quite called for to have a law of conservation that takes both into account. Mass can become energy and, in some cases, vice versa. So mass is not exactly destroyed, and we have a combined law: the total mass-energy of the universe is conserved.

Now we have a general idea of what a conservation law says: ‘No magic allowed’. Everything that happens can be accounted for. Up next, we talk of momentum.

Momentum is defined from Newton's laws of motion, so it seems apt to refer back to them when we try to understand where the conservation law comes from. For example, Newton's first law says:

Every body continues in its state of rest, or of uniform motion in a straight line, unless acted upon by an external unbalanced force.

And the second law gives a hint of what the external unbalanced force does:

The rate of change of momentum of a body is directly proportional to the applied force, and takes place in the direction of the force.

Or, mathematically,

$$F = \frac{\mathrm{d}p}{\mathrm{d}t},$$

where $p$ is the momentum of the body. The first law by itself is the law of conservation of momentum; the second is invoked just for a mathematical proof. In the case that the external force is non-existent, we have $\frac{\mathrm{d}p}{\mathrm{d}t} = 0$. That is, in the absence of an external force, the momentum of the body does not change with time: $p$ is a constant with respect to time.

There are subsections of momentum conservation. Translational motion has rotational analogues: angle for displacement, angular velocity for velocity, and so on. The analogue of force is torque, $\tau$, and that of linear momentum, $p$, is angular momentum, $L$. Both linear and angular momenta are separately conserved in the absence of a net force and a net torque respectively. Momentum is conserved; no magic allowed.

Now to clear a doubt that might occur to you. When studying collisions, we are taught that in inelastic collisions kinetic energy is not conserved, since some of it is lost as the colliding bodies deform and stick together. This does not violate any conservation law: the missing kinetic energy has merely been converted into heat, sound and deformation. Momentum, on the other hand, is conserved in every collision, elastic or inelastic, so long as no external force acts on the colliding bodies; the forces the bodies exert on each other during the collision are internal to the system. So the next time you find yourself in a situation and seem to be missing mass, energy or momentum, know that they are always conserved. They are still out there and you will find them if you look closely.
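
A quick numerical sketch makes the point. In the perfectly inelastic collision below (Python, with hypothetical numbers of our own choosing), the bodies stick together: momentum before and after matches exactly, while kinetic energy visibly drops.

```python
def perfectly_inelastic(m1, v1, m2, v2):
    """Two bodies collide and stick together; with no external force,
    total momentum is conserved, fixing the common final velocity."""
    return (m1 * v1 + m2 * v2) / (m1 + m2)

m1, v1 = 2.0, 3.0   # kg, m/s
m2, v2 = 1.0, 0.0   # a body initially at rest
v = perfectly_inelastic(m1, v1, m2, v2)

p_before = m1 * v1 + m2 * v2
p_after = (m1 + m2) * v
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke_after = 0.5 * (m1 + m2) * v**2

# Momentum matches before and after; kinetic energy does not --
# the difference went into heat and deformation.
print(p_before, p_after, ke_before, ke_after)
```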


The post Selecting a load cell for weighing systems appeared first on Physics Capsule.

A load cell, also commonly known as a load sensor, is a transducer used in force measurement applications. It produces a measurable electrical output whose magnitude is proportional to the mechanical force applied. Most industrial weighing systems are based on strain gauge load cells. Other common types include pneumatic, hydraulic, piezoelectric, and capacitive load cells. Load cells can also be classified according to direction of loading, shape, precision, air tightness, and so on.

Strain gauge load cells produce a measurable electrical output when a force is applied. The gauges are usually bonded onto a structural member. As the load is applied, the resulting strain causes the resistance of the gauges to change. This change in electrical resistance is proportional to the force applied. In most weighing applications, four strain gauges are used; two in compression and two in tension.
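
As an illustration of the scale of these signals, an idealized full Wheatstone bridge with four active gauges produces an output of roughly $V_{out} = GF \times \varepsilon \times V_{ex}$. The Python sketch below uses hypothetical values; real cells are specified by their calibrated mV/V rating rather than this idealized formula.

```python
def full_bridge_output(v_excitation, gauge_factor, strain):
    """Idealized output of a full Wheatstone bridge with four active gauges
    (two in tension, two in compression): V_out = GF * strain * V_ex."""
    return gauge_factor * strain * v_excitation

# Hypothetical cell: 10 V excitation, gauge factor 2, 1000 microstrain.
v_out = full_bridge_output(10.0, 2.0, 1000e-6)
print(f"bridge output: {v_out * 1000:.1f} mV")  # bridge output: 20.0 mV
```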

Capacitive load cells are based on the principle of operation of capacitors. The distance between the plates of a capacitor changes when a force is applied. The resulting change in capacitance is proportional to the applied force. As compared to strain gauge load cells, capacitive load cells are more rugged.

Hydraulic load cells rely on the fact that the pressure of a fluid contained in an enclosed space increases when a force is applied. The proportional increase in pressure is detected by a calibrated pressure-measuring device. The accuracy of a hydraulic load cell is enhanced by ensuring that it is properly mounted and calibrated. Hydraulic load cells are commonly used in hazardous environments because they have no electrical components.

Just like hydraulic load cells, pneumatic load cells use the force-balance principle to measure the magnitude of the applied force. These explosion proof load cells do not contain fluids that can cause contamination. Pneumatic load cells offer higher accuracy and they are suitable for measuring small weights.

The weighing industry is dominated by force measurement devices that utilize strain gauge load cells. In weighing systems, load cells are used in combinations or individually depending on the requirements of the weighing application. A suitable configuration is determined by factoring the characteristics of the load, especially its geometry and size. The accuracy of a weighing system depends on many factors including the accuracy of the load cell, signal interference, load factors, and environmental forces. The combined accuracy of a load cell is determined by the following specifications:

The outputs obtained by decreasing the load applied to a load cell from the maximum (maximum rated capacity) to the minimum (no-load) and by increasing the load from the minimum to the maximum load are different. This difference in output readings is referred to as hysteresis. This specification is usually provided by the manufacturer as a percentage of the load cell’s full range.

Repeatability of a load cell refers to its ability to produce readings with minimum deviation when the same load is applied repeatedly under the same loading conditions. For high accuracy, the maximum difference between the readings, usually referred to as non-repeatability, should be as small as possible.

The output of a loaded transducer changes over time when the load is applied for a long time. This variation in output reading over time is referred to as creep. In some applications, such as filling, creep has little or insignificant effect because the load is usually applied for only a minute or two. However, in weighing applications where the load may be applied for a longer period, it is critical to consider the creep effect.

When an increasing load is applied to a transducer, its output curve deviates from a straight line. This deviation is referred to as non-linearity. Some of the methods used to correct non-linearity in weighing systems include employing a look up table and using multiple calibration points.
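
A minimal sketch of the lookup-table approach, assuming a small set of hypothetical calibration points and simple piecewise-linear interpolation between them:

```python
import bisect

def linearize(reading, cal_points):
    """Correct non-linearity by piecewise-linear interpolation over
    calibration points: a sorted list of (raw_reading, true_load) pairs."""
    raws = [r for r, _ in cal_points]
    i = bisect.bisect_right(raws, reading)
    i = max(1, min(i, len(cal_points) - 1))   # clamp to the table's range
    (r0, t0), (r1, t1) = cal_points[i - 1], cal_points[i]
    return t0 + (t1 - t0) * (reading - r0) / (r1 - r0)

# Hypothetical calibration of a slightly non-linear cell (raw mV -> kg):
cal = [(0.0, 0.0), (5.1, 25.0), (10.0, 50.0), (14.7, 75.0), (19.2, 100.0)]
print(round(linearize(7.55, cal), 2))  # 37.5
```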

The output of a load cell changes with temperature. This variation is usually caused by the combined effect of temperature changes on the gauge factor and the spring sensor material. In most weighing systems, temperature compensation is used to minimize the errors caused by changes in temperature.

Changes in temperature can cause a positive or a negative change in no-load output readings of a load cell. Temperature compensation techniques are used to reduce errors caused by this effect.

Changes in barometric pressure can also cause a variation in the output of a load cell when no load is applied. This barometric pressure effect is common in load cells that have bellows. To enhance the accuracy of a weighing system, the effect is corrected by equalizing pressure through venting.
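
One common (though not universal) way to estimate the combined effect of such independent error specifications is to take their root sum of squares. The Python sketch below uses hypothetical spec values of our own choosing; always prefer the manufacturer's own combined-error figure where one is given.

```python
import math

def combined_error_pct(*spec_percentages):
    """A commonly assumed way to combine independent load-cell error
    specifications: root sum of squares of the individual percentages
    of full scale."""
    return math.sqrt(sum(p**2 for p in spec_percentages))

# Hypothetical specs, each expressed in % of full scale:
hysteresis, non_repeatability, non_linearity, temp_effect = 0.02, 0.01, 0.02, 0.015
total = combined_error_pct(hysteresis, non_repeatability, non_linearity, temp_effect)
print(f"combined error: {total:.3f} % of full scale")
```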

Load cells come in different types and designs to meet the diverse requirements of today’s weighing industry. The accuracy of a weighing system is significantly determined by the characteristics of the load cell used. It is therefore critical to consider various factors when selecting a load cell for your weighing system. Key considerations include accuracy, loading conditions, electrical considerations, mechanical requirements, and environmental considerations.

*All images in this article were provided by the author. Physics Capsule disclaims all responsibility and rights regarding their ownership. Physics Capsule is not responsible for images sourced and provided by guest authors.*


The post The second law of thermodynamics, part 3 (Entropy) appeared first on Physics Capsule.

Think back to the Carnot cycle. In it we saw that a system undergoes a series of processes to return to its initial state. We said, with no particular initial point, that these processes are isothermal expansion, isentropic expansion, isothermal compression, and isentropic compression, in that cyclic order.

Although we represented this with a sort of rhombic, but curved-sided figure, we can always think of a cyclic process as just that: an elliptic cycle. Fig. 1 shows what one such process might look like.

It is hard to imagine how this is directly related to a Carnot cycle, so we can perhaps make a small, harmless modification. Suppose the phenomenon is not cyclic as shown but that what fig. 1 represents is a rather averaged out showcase of our cyclic process.

We could then imagine the actual process to be like saw teeth mounted atop this ellipse. We would then not be unrealistic in looking at this as more of a Carnot engine, especially if we considered each pair of saw teeth opposite each other as forming our Carnot cycle, as depicted in fig. 2, which zooms in on a random part of the cyclic process that fig. 1 depicts.

In fact, that in every reversible process there exists a zigzag path between two states consisting of alternate isentropic and isothermal processes is empirically testable. Proceeding further, we invoke Kelvin's statement of the second law that we discussed in part one. We can simply state it as follows:

No process is possible whose sole result is the complete conversion of heat into work.

Mathematically, the heat absorbed or emitted, $Q$, the change in internal energy, $\Delta U$, and the work done, $W$, must all balance each other, $Q = \Delta U + W$, with each term's sign depending on whether work is done on or by the object, heat is absorbed or emitted, and so on. This equation itself, of course, is something we already discussed as the first law of thermodynamics.

Accordingly, for the isothermal process from A to B, we have $Q_{AB} = W_{AB}$, and for the process from C to D, $Q_{CD} = W_{CD}$ (the internal energy does not change along an isotherm). From Kelvin's statement then, the ratio of the heats exchanged can depend only on the two temperatures, which leaves us with $\frac{Q_{AB}}{Q_{CD}} = \frac{T_{AB}}{T_{CD}}$.

You will recall that we had used a form of this equation in part one when we discussed re-writing the equation for efficiency in terms of temperatures instead of the heats exchanged. Of course, writing $\frac{Q_{AB}}{Q_{CD}} = \frac{T_{AB}}{T_{CD}}$ at this point serves no purpose and we may just as well write it as $\frac{Q_{AB}}{T_{AB}} = \frac{Q_{CD}}{T_{CD}}$. This gives us an important result:

$$\frac{Q_{AB}}{T_{AB}} - \frac{Q_{CD}}{T_{CD}} = 0.$$

As a result, if this equation is satisfied for the pair of processes AB and CD, it must also be satisfied for any other such pair of processes as well. And, thanks to the zero on the right-hand side, this equation can just as well be summed over all the processes constituting our elliptical (cyclic) process.

Or, integrated over every heat exchange, $\delta Q$, over the entire cyclic process, we end up with

$$\oint \frac{\delta Q}{T} = 0 \qquad (1)$$

This is called **Clausius's equation** for a cyclic process. At this point it seems disconnected from Clausius's own statement, which simply declared that heat does not spontaneously flow from a cold body to a hot body. To see how this result means exactly what his statement said, we follow Clausius's work further.
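
Clausius's equation is easy to verify numerically for an ideal-gas Carnot cycle, where the isothermal heats are $Q = nRT\ln(V_f/V_i)$ and the adiabats satisfy $TV^{\gamma-1} = \text{const}$. The Python sketch below (our own check, for one mole of a monatomic ideal gas) sums $Q/T$ around the cycle and gets zero; the adiabatic legs contribute nothing since $\delta Q = 0$ along them.

```python
import math

R, gamma = 8.314, 5.0 / 3.0   # one mole of a monatomic ideal gas
T_hot, T_cold = 500.0, 300.0  # reservoir temperatures in kelvin
V1, V2 = 1.0, 2.0             # isothermal expansion at T_hot

# The adiabatic (isentropic) legs fix the other two volumes
# via T V^(gamma - 1) = const.
V3 = V2 * (T_hot / T_cold) ** (1 / (gamma - 1))
V4 = V1 * (T_hot / T_cold) ** (1 / (gamma - 1))

Q_hot = R * T_hot * math.log(V2 / V1)      # heat absorbed on the hot isotherm
Q_cold = -R * T_cold * math.log(V3 / V4)   # heat rejected on the cold isotherm

# Clausius's equation: the Q/T contributions cancel around the cycle.
print(Q_hot / T_hot + Q_cold / T_cold)
```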

We have been talking about entropy all this while, never once defining it properly, but it is finally time. Almost.

Let us extend our elliptic, cyclic process example from fig. 1 to a more general case. What if the process is cyclic but does not follow a symmetric path? Recall that Carnot's own heat engine was somewhat symmetric. The genius of Clausius lies in his extending this to a completely arbitrary process with only one condition: it must be cyclic. So we might have something that looks like the process depicted in fig. 3 below.

Equation (1) can then be re-written, once for the *forward* process, shown in red, from the initial point (I) to the final point (F), and then again for the *reverse* process, shown in blue. Both of these equate to zero, as suggested by Clausius's equation, and hence can themselves be added up as

$$\int_{I}^{F} \frac{\delta Q}{T}\bigg|_{\text{red}} + \int_{F}^{I} \frac{\delta Q}{T}\bigg|_{\text{blue}} = 0,$$

where the subscripts denote that the integrals refer to the forward (red) path and the reverse (blue) path. Why distinguish them so? We will soon show there is no need to think of it this way (and of course we have to show it explicitly; we cannot simply assume it). The above equation can now be simplified as below, in a result that shows that the two integrals are path *independent*, and hence that our insistence on the red path being the forward one and so on is undeniably baseless.

$$\int_{I}^{F} \frac{\delta Q}{T}\bigg|_{\text{red}} = \int_{I}^{F} \frac{\delta Q}{T}\bigg|_{\text{blue}} \qquad (2)$$

One may wonder why we had to go through the trouble of proving that the Clausius integral is path independent. On the one hand, simply assuming it has nothing valid to back it; indeed, given that our process is arbitrary, there is every reason to doubt that the integrals are path independent. Nonetheless, we have now proven clearly that they are.

More importantly, however, we must realise that this simple fact gives our integral a universality. We now know that any process, and I mean *any* process at all, has an integral like $\int \frac{\delta Q}{T}$ associated with it. We know, then, that this integral can be written as some quantity evaluated at the final point minus the same quantity evaluated at the initial point, as

$$\int_{I}^{F} \frac{\delta Q}{T} = \square_F - \square_I.$$

The box is awkward, so let us call this quantity by some symbol, $S$. So now we know that every process has some

$$\int_{I}^{F} \frac{\delta Q}{T} = S_F - S_I.$$

The quantity $S_F - S_I$, or $\Delta S$, is known as the *change in entropy*. And the answer to our question, 'What is entropy?', then, is that there is no such thing as entropy by itself. But every process in the known universe where our known laws of physics apply has a quantity, $\Delta S$, known as the *change in entropy*, associated with it. We therefore have

$$\Delta S = S_F - S_I = \int_{I}^{F} \frac{\delta Q}{T} \qquad (3)$$

This is the mathematical definition of entropy in terms, quite naturally, of its change. It is a quantity whose physical meaning is complex enough to grasp that we are better off treating it as a mathematical quantity which, after our discussion so far, we are convinced can be associated with any given process. It would not be an exaggeration to say that this is the crux of almost all of thermodynamics.

Although we dropped the modulus long back for convenience, had we retained it we would realise that, by equation (3), the quantity $\Delta S$ is equal to a modulus and hence always a positive quantity. (As for the temperature needing no modulus, remember that we are dealing in terms of the thermodynamic temperature, i.e. temperature measured in kelvin, which is never negative.) So, given that $\Delta S$ is positive, we have $\Delta S \geq 0$, which means that for any random, *spontaneous* process, the entropy always *increases*. The term 'spontaneous' is important here: entropy *can* decrease, but this requires that we do some work *on* the system to bring it about. Left to itself, as the above inequality tells us, a system always sees an increase in entropy.

Clausius himself first defined entropy as $\int \frac{\delta Q}{T}$, which he called the *equivalence value*, and asserted that the entropy of the universe tends to a maximum. Although he only talked about entropy tending to a maximum as a means of saying it would never reduce, we find that Clausius was ultimately right. The entropy of the universe, as we will see towards the end of this article, increases and never ceases to do so.

The time-variation of entropy, $\mathrm{d}S/\mathrm{d}t$, is called the *rate of entropy production*. The rate of entropy production times the temperature at a given point in a machine, $T\,\mathrm{d}S/\mathrm{d}t$, gives the rate at which energy is dissipated. This is a useful term in realising how the efficiency of a machine can never be a hundred percent (we will see this in greater detail later, although not in terms of $\mathrm{d}S/\mathrm{d}t$).

There is one last point of note here: the right- and left-hand sides in equation (2) are, in fact, the expressions for the entropy change of an ideal forward and reverse process, and they are shown to be equal to each other. This clearly means that *the change in entropy in a cyclic process is zero*.

Our mathematical definition in equation (3) does a good job of explaining its relationship with heat and temperature, making it understandably a part of the second law, at least in terms of heat engines and efficiencies. We will see this in greater detail soon, but first we still need to understand that one statement of the second law we have not talked about after briefly mentioning it in part one: the statistical statement, which says —

To better understand this we will have to re-define entropy from scratch. We already discussed how, somewhat trivially, this statement tells us that a random arrangement of T-H-T-T-H-H-H-T-T-H-… is more likely to occur than is an ordered H-H-H-H-H-H-H-H-H-… in case of a series of coin tosses. In other words, any process is simply more likely to spontaneously tend towards disorder unless you do specific external work on it to keep it ordered. Not unlike your bedroom.

We can extend this to a more laboratory-worthy example and, of course, bring some mathematics into our discussion. In the end we will see that we end up with a solution that is fundamentally the same as our simple claim made in terms of coin tosses.

Consider a container with two equal-sized compartments. Say we have some gas in one of them, the left one, and a vacuum in the other. If we now perforate the wall separating the two compartments, experience tells us that the gas is going to spread from the left half to the right, eventually going all over the container. We can regulate how quickly it spreads by varying the perforations but that is of no consequence for our intentions now.

Say we start off with $N$ gas molecules initially. After a while, the probability, $P$, of finding $n$ out of those $N$ particles on one side of our container is given by

$$P(n) = \binom{N}{n} \left(\frac{1}{2}\right)^{N}$$
**Case 1:** *What is the probability that we will find an ordered arrangement of* all *gas molecules on the left side?* This describes a system with complete order. Let us give numbers to better understand this situation. Say we start off with just 30 molecules. On perforation, what is the likelihood that *all* 30 molecules remain disciplined enough to stay on the same (in this case, left) side? In other words, what is the likelihood of having $n = 30$?

$$P(30) = \binom{30}{30}\left(\frac{1}{2}\right)^{30} = \left(\frac{1}{2}\right)^{30} \approx 9.3 \times 10^{-10}$$

In other words, not a lot. There is only a one-in-a-billion chance of finding order in our system with just 30 molecules. Most realistic systems of this sort have billions of billions of molecules in a much smaller space. (For instance, a gas like CO$_2$ at ordinary conditions has on the order of $10^{22}$ molecules in a single litre.)

**Case 2:** *What is the probability that we will find a perfectly disordered arrangement of* all *gas molecules spread all over the container?* This is the exact opposite of case 1, and we now have complete disorder. In other words, 15 of our 30 initial molecules are to be found on any one side, making $n = 15$ and

$$P(15) = \binom{30}{15}\left(\frac{1}{2}\right)^{30} \approx 0.14,$$

which is over *a hundred million* times more likely than the complete order case.

We can prove it stepwise, computing $P(n)$ at several values of $n$ between 0 and 30, but it is not unreasonable at this point to simply state that $P$ maxes out at $n = N/2$, which is complete disorder, or, in the language of the statistical statement, the case of maximum microstates. In other words, any given system is statistically most likely to be found in a state of utter disorder. And, in everyday circumstances, any system is more likely to *tend towards* a state of greater disorder.
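These numbers are easy to verify. Here is a short sketch of our own, using Python's `math.comb`, that computes $P(n)$ for $N = 30$ and confirms that the distribution peaks at $n = N/2$:

```python
from math import comb

N = 30  # total number of gas molecules

def p(n):
    """Probability of finding exactly n of the N molecules in the left half."""
    return comb(N, n) * 0.5 ** N

p_order = p(30)     # complete order: all molecules on one side
p_disorder = p(15)  # complete disorder: an even split

print(p_order)                   # about 9.3e-10, roughly one in a billion
print(p_disorder)                # about 0.14
print(max(range(N + 1), key=p))  # 15, i.e. the most likely state is n = N/2
```

Sweeping `n` from 0 to 30 is exactly the stepwise proof mentioned above; the maximum always lands on the even split.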

The change in entropy is quite simply the change in the order of a system. It can loosely be stated as a *measure of the disorder* in a system. So a given process or a given system is statistically more likely to tend to a state of *greater disorder or entropy*. This is in complete agreement with our earlier assertion that, for a spontaneous process (which is exactly what our gas-in-container experiment represents), we have $\Delta S > 0$, exactly as we expected.

The concept of having an increasing quantity (called entropy) that determines whether a spontaneous process can occur gives rise to a simple pattern with which we can examine the reversibility of a process. Recall the condition $\Delta S \geq 0$ that we discussed before. We will now use both these cases ($\Delta S > 0$ and $\Delta S = 0$) to define the reversibility of a process.

If a system goes from a state A to a state B spontaneously with an increase in entropy, then we know that such a process cannot spontaneously come back from state B to state A, since this would require a decrease in entropy. Such processes are called *irreversible processes*.

Suppose the same system goes from A to B keeping its entropy constant; then there is a possibility that it can return from B to A keeping the entropy constant once again. Both directions, A to B and B to A, can then occur spontaneously. Such processes are called *reversible processes*.

In the statistical example, for our experimental setup with a container filled with some gas, we saw how, by starting with all the gas in one compartment and then opening the ‘gate’ between the compartments, we can see a gradual increase in disorder as we go from $n = N$ to $n = N/2$, at which point the system is said to be in a state of maximum disorder.

Such a system that has achieved maximum possible disorder and, as a result, sees an even spread of matter (and therefore of energy), sees no further spontaneous increase in entropy, because the evenly spread energy means there is nowhere left for energy to spontaneously flow to. All heat exchange will therefore stop, as will all movement, and such a system is said to be in *thermodynamic equilibrium*.

This, of course, is a statistical definition; equivalently and more physically, such a system is defined as one in mechanical, chemical and thermal equilibria, seeing no changes in pressure, chemical composition and temperature respectively.

Here is a thought we take for granted too often: suppose you were locked up in a room with no light, no windows and no objects whatsoever except a wine glass holding a lot of warm coffee powder, and the wine glass shatters. You did not conclusively know the *direction of time* until the wine glass shattered; until, in other words, it went from a state of order to disorder.

In other words, the direction of increase in entropy of a spontaneous process, a wine glass going from its regular, usable state to a shattered one, also gives the *direction of time*. This is something we think about far too rarely. The only reason we know the direction of time to be from when a coffee mug falls off to when it breaks into pieces is because of this increase in entropy.

This could also be stated as *causality*, in which case we can just as well define it as the order of spontaneous events in phase with the increase in entropy of these processes. So, as entropy increases, we go from cause to effect, maintaining causality.

In part two, while describing Carnot’s heat engine, we stated that no other heat engine can have an efficiency so great. The efficiency was not great by itself, yet it was greater than what any other heat engine could ever achieve, or so we claimed. Now we prove it.

In part one, we saw how an engine satisfies the equation $W = Q_1 - Q_2$ when a working substance draws heat $Q_1$ from a high temperature source and releases some heat $Q_2$ to a low temperature sink while doing work $W$ in between. We also said this was true in case of a Carnot reversible heat engine.

What then happens in case of a real heat engine (as opposed to Carnot’s ideal one)? First of all, when heat $Q_1$ flows from the source to the working fluid, in spite of all our efforts to keep the process isothermal (which is an extremely difficult task), it is safe to assume some temperature difference does exist, however small. In other words, for some temperature, $T_1'$, of the working substance, slightly below the source temperature $T_1$, there occurs some change in entropy:

$$\Delta S_1 = \frac{Q_1}{T_1'} > \frac{Q_1}{T_1},$$

where the inequality arises because $T_1' < T_1$.

We can apply the same logic to the *outwards* path from the working substance too. In this case, for heat transfer, the temperature, $T_2$, of the sink must be less than the temperature, $T_2'$, of the working substance. Assuming this is an isothermal process as well, at least in so far as practicality allows it to be, we find a change in entropy given by

$$\Delta S_2 = \frac{Q_2}{T_2} > \frac{Q_2}{T_2'},$$

where, once again, the inequality arises due to $T_2 < T_2'$.

We know that the change in entropy of the working substance in a cyclic process is zero, i.e. $Q_1/T_1' = Q_2/T_2'$, which we can write in terms of our inequalities above as $Q_2/Q_1 = T_2'/T_1' > T_2/T_1$, or

$$1 - \frac{Q_2}{Q_1} < 1 - \frac{T_2}{T_1} \tag{4}$$

In other words, the left-hand side now gives the ratio of the practical work involved, $W = Q_1 - Q_2$, to a given heat transfer of $Q_1$ from the source, which is nothing but the efficiency of a practical or *real* heat engine. And the right-hand side is an expression we have already encountered, the efficiency of an *ideal* heat engine. Therefore,

$$\eta_{\text{real}} = 1 - \frac{Q_2}{Q_1} < 1 - \frac{T_2}{T_1} = \eta_{\text{Carnot}} \tag{5}$$

is mathematical evidence telling us that Carnot’s seemingly casual statement, about his ideal heat engine being the most efficient possible one, was actually deeply rooted in scientific thought. Further, notice how the efficiency of the ideal, Carnot-reversible heat engine, $1 - T_2/T_1$, is independent of the properties of the working substance itself and depends only on temperature. This was another claim of Carnot’s statement which, as it turns out, also has solid mathematical evidence to back it.
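A tiny numeric sketch (our own, with made-up heat flows and temperatures) makes inequality (5) concrete:

```python
def carnot_efficiency(t_source, t_sink):
    """Efficiency of an ideal, reversible engine: 1 - T2/T1 (kelvin)."""
    return 1.0 - t_sink / t_source

def real_efficiency(q_in, q_out):
    """Efficiency of a practical engine from measured heats: 1 - Q2/Q1."""
    return 1.0 - q_out / q_in

eta_ideal = carnot_efficiency(500.0, 300.0)  # 0.4
# A real engine between the same reservoirs must reject more heat,
# say 650 J for every 1000 J drawn, rather than the ideal 600 J:
eta_real = real_efficiency(1000.0, 650.0)    # 0.35

print(eta_real < eta_ideal)  # True, in line with equation (5)
```

However the rejected heat is varied, as long as it exceeds the ideal $Q_2 = Q_1 T_2/T_1$, the real efficiency stays below the Carnot bound.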

One of the more maddening pursuits of scientists in the 15th and 16th centuries was the idea of building a perpetual motion machine, a machine that would self-sufficiently run forever. From all the statements we have seen so far it is clear that the only potentially successful perpetual motion machine would be a Carnot-reversible heat engine. But we also saw that it is ideal, which means no perpetual motion machine can possibly exist. However, this did not stop people from pursuing the possibility of such perfectly efficient machines.

We will discuss perpetual motion briefly, solely out of interest for its various implications and how it shows the universality of the laws of thermodynamics. However, *Physics Capsule* does not endorse that you spend your time building such impossible contraptions; we just believe they are fun to think about.

One of the basic problems a perpetual motion machine must overcome is the excess heat lost. Most other problems, such as the nature of the working substance used and how it absorbs heat, can be overcome easily. The speed with which a process occurs is the cause of most energy loss and, in turn, of low efficiency. In thermodynamics in general this is overcome by using what are called *quasistatic processes*, or processes that proceed so slowly they might as well be static (but are not, of course).

Perpetual motion machines come in three types, all free from reality:

- **The first kind** is a machine that works without the input of energy, thereby violating the first law of thermodynamics.
- **The second kind** is a machine that violates the second law of thermodynamics by spontaneously converting all heat input into useful work, which conserves energy but, as we discussed throughout this three-part series, violates the idea of entropy.
- **The third kind** is a machine that eliminates energy loss by eliminating friction, unwanted heat exchange etc.

Unsurprisingly, none of these three types of machines exist, but there are examples that come close if, instead of calling them ‘perpetually’ moving machines, we simply consider them to be machines that work for an exceptionally long time without external influence.

A planet or natural satellite is an excellent example of such a thermodynamic system. The earth has been going around the sun for a few billion years, which, by human measure already *feels* perpetual, but any such celestial body, including the earth, is slowly losing energy as it falls towards the centre of its orbit.

Closer to home, the Fludd water screw, built in the early 1600s by the Englishman Robert Fludd, used a water wheel powered by running water that, as an aside, ground grain, with the output water fed back to the main supply to keep the wheel running. There are still others, such as the overbalanced wheel or the capillary bowl, which work for a long time, but not forever, or are simply plans on paper without working models, or, interestingly, are contraptions that do no useful work.

One of the early potential violations of the second law of thermodynamics, though, came from the physicist, J.C. Maxwell. While it was by no means a perpetual motion machine, it is worth discussing in the same vein since it is concerned with the violation of the second law. In an 1867 letter to the Scottish physicist, Peter Guthrie Tait, Maxwell writes of what he believes to be a probable method of circumventing the second law. His argument, more than being a statement disproving the second law, was an attempt to show that the second law was purely a statistical idea.

Returning to our example of the container with a partition through which gas molecules can move around, suppose we now manage to have a gatekeeper (Maxwell calls him a ‘finite being’ but Lord Kelvin later called him a ‘demon’) who, beginning with all the gas trapped in the left half of the container, selectively opens the partition so as to let only high-velocity molecules through to the right side. We will eventually end up with a setup that consists of half the container filled with high-velocity molecules and the other half filled with slow-moving molecules. In other words, one half of our container (the right half) will heat up while the other half cools down.

While this seems good enough on paper, recent calculations have shown that Maxwell’s demon will, itself, cause an increase in entropy while it selects molecules and opens and shuts the gate (incredibly quickly) to allow molecules to pass. The fundamental argument against Maxwell’s demon, as against so many perpetual motion machines, is that the scheme looks good to talk about, but careful calculations show that the entropy of the full system, demon included, increases in agreement with the second law.

With our discussion so far we have successfully tied up all loose ends from parts 1 and 2 in this series: the various statements, the ideas of heat engines and refrigerators, the statistical significance of the second law, and all their mathematical proofs rooted in a strange quantity known as *entropy*.

We also talked about perpetual motion and Maxwell’s demon, but there is one last scientifically important debate that arose around the time of the second law. Yet again we return to the container in our previous examples: we saw, while discussing the statistical likelihood of the spontaneous increase of entropy, that eventually the system will reach a state of maximum disorder. Beyond this, then, the system sees no increase in entropy.

As we defined earlier, this means the system is in a state of thermodynamic equilibrium, experiencing, particularly, no heat exchange of any sort. One way to think of this, given that the container is a completely isolated system, is to reason that once the molecules of the gas have dissipated energy and spread all over the container reaching maximum disorder, they no longer need to move and will thus not cause any further heat dissipation.

Lord Kelvin applied this idea to the entire universe, treating it as an isolated system, and reasoned along the same lines that, at some point, the universe will reach a state of maximum disorder, with energy spread evenly throughout so that no spontaneous energy transfer can possibly occur, thereby dying a ‘heat death’. This idea, called the *heat death of the universe*, was long seen as valid; many debates centred around it, and the idea seemed almost frightening, although the universe showed no signs of any such slowdown just yet.

The solution would come over a century later when we first realised that the universe was, in fact, *not* like the container of gas. Let us modify our experiment: what if the container was somehow becoming larger and larger faster than the molecules were spreading about? This would mean that maximum disorder would never be achieved since there would *always* be some volume unoccupied by the molecules of gas. In other words, as long as the container would keep expanding, the system will never reach thermodynamic equilibrium.

We now have ample evidence that the universe is expanding and, much like our modified experiment, there is no fear of the universe dying a ‘heat death’ as a result of this. The mass in the universe will never reach a state of complete disorder as the universe is constantly expanding, thereby making a heat death an extremely remote possibility.

The second law, therefore, is immense in the amount of information it contains by direct implications alone. It also brings in the idea of using statistics to examine physical phenomena, a characteristic that often is coupled with thermodynamics as a whole and a concept that will go on to form an important part of our discussions in this field over the coming days.

The post The second law of thermodynamics, part 3 (Entropy) appeared first on Physics Capsule.

The post The electric current and Ohm’s law appeared first on Physics Capsule.

All matter is made of charged particles – there are positive nuclei and negative electrons. So, if you’re planning to move charges, just move any material object (really?). Even as you wave your hand through air, the positive and negative charges that make up your hand are indeed moving along. Does that mean you’ve generated an electric current with a mere wave of your hand? Of course not. The fact is that your hand is normally electrically neutral – it is made of equal numbers of positive and negative charges, so the total electric charge on your hand is zero. Hence simply waving your hand through air in no way generates an electric current.

Therefore, we need a net charge in the first place, to even start thinking of an electric current. Consider the simplistic example of a single positively charged particle. As we know, this single charge cannot constitute an electric current. It simply isn’t possible to measure a “rate of flow of charge through a point per second”, for the charge passes through any given point only once, and the next moment it’s gone out of the picture. But we consider the single charge now, to understand how we can get it moving.

If the charge is initially stationary, we could get it moving by applying a mechanical force. But a more natural way to do this is to pass an electric field. As we’ve seen before, much like massive objects *fall* in a gravitational field, charged particles accelerate in an electric field. So, even if we have a bunch of charges, the simplest way to get them moving and hence generate an electric current is to pass an electric field through the space they are in.

Mathematically, we say that the electric current density $\vec{J}$ is directly proportional to the electric field $\vec{E}$ applied,

$$\vec{J} \propto \vec{E},$$

and introduce the proportionality constant $\sigma$, the “electrical conductivity”:

$$\vec{J} = \sigma \vec{E} \tag{1}$$

This equation is true in most cases (we’ll discuss the exceptions at a later time). So, if you want an electric current flowing through a conductor, all you have to do is pass an electric field through it. This field will apply a force ($\vec{F} = -e\vec{E}$) on each free electron (of charge $-e$) present in it, producing an electric current.

Now, you might be imagining a smooth flow of electrons as soon as the electric field is applied – something like water flowing steadily through a hose pipe. But in reality, the electrons move in a very higgledy-piggledy manner. They keep bumping into the fixed atoms of the material as they make their way along the applied electric field. You can imagine it something like this: each electron bumps into an atom, comes to a standstill, and then again accelerates along the field, until it bumps into another atom in its way and again stops momentarily. As if this weird motion wasn’t enough, to add to the complexity, the electrons experience other forces from the atoms too. To make some sense of the chaos, we disregard these additional forces on the electrons by saying that the electrons behave just like free particles, albeit with a different, *effective* mass $m^*$ instead of their actual mass $m$. In other words, by mentally freeing the electrons from the clutches of the atoms, we are paying with a change in their observed mass.

Applying Newton’s second law of motion, we get the acceleration of each electron to be

$$a = \frac{eE}{m^*}.$$
Since, in our model, we have assumed that the electron starts afresh after each bump, from rest, the velocity gained in a time $t$, called the **drift velocity**, will be

$$v = \frac{eE}{m^*}\,t.$$
If $\tau$ is the **mean free time** – the average time between collisions – the average velocity gained by the electrons during their motion is

$$v_d = \frac{eE\tau}{m^*}.$$
We call the quantity $\mu = e\tau/m^*$ the **mobility**, which is a measure of how quickly the electron can move through the material. In other words, it is the drift velocity of an electron when a unit electric field is applied. It depends on the details of the structure of the material.

Therefore,

$$v_d = \mu E \tag{2}$$

All these electrons that are drifting along the conductor, under the applied electric field, constitute an electric current. To find the electric current, we just have to find the quantity of charge (or the number of electrons) that will cross an imaginary surface (placed perpendicular to the flow direction) in one second. Let $A$ be the area of a tiny chunk of the surface. Your job now is to wait at the surface and carefully count the number of electrons that cross it in one second. Now, either you can go ahead and literally count, or you can use the simple logic that the only electrons that will cross the surface in one second are the ones that are not farther than *one second’s drive* from the surface. By that it is meant that only the electrons within a distance $v_d \times 1\,\mathrm{s}$ from the surface, at the beginning of that particular second, will make it through the surface before the second ends. In other words, the electrons in a volume $v_d A$ will cross the surface of area $A$ during the second. If $n$ is the number of mobile electrons in every unit volume of the conductor, the total number of electrons in that volume will be $n v_d A$.

With the number of electrons crossing known, the total charge that crosses the elemental surface in a second is simply found to be $n e v_d A$ ($e$, as usual, is the magnitude of the charge of each electron). Hence, the current passing through is $I = n e v_d A$. Now, recall that we’d defined the electric current density as the amount of current flowing per unit area, $J = I/A$.
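To get a feel for the sizes involved, here is a rough sketch with our own illustrative numbers (a copper-like carrier density and a 1 mm² wire, both assumptions) of the drift velocity implied by $I = n e v_d A$:

```python
e = 1.602e-19  # magnitude of the electron charge (C)
n = 8.5e28     # mobile electrons per cubic metre, roughly copper's value
A = 1.0e-6     # cross-sectional area: 1 mm^2 expressed in m^2
I = 1.0        # a modest current of one ampere

# I = n * e * v_d * A, so:
v_d = I / (n * e * A)
print(v_d)  # of the order of 1e-4 m/s: the drift itself is remarkably slow
```

The drift is far slower than everyday speeds; it is the field, not the individual electrons, that propagates quickly along the wire.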

With that, we may write the electric current density through the conductor to be

$$J = n e v_d.$$
But we’ve already seen in equation (2) that $v_d = \mu E$.

So,

$$J = n e \mu E.$$
Again, we’d defined the mobility as $\mu = e\tau/m^*$. Hence we get

$$J = \frac{n e^2 \tau}{m^*}\,E.$$
Calling $\sigma = n e^2 \tau / m^*$ the electrical conductivity, we have arrived at equation (1),

$$J = \sigma E.$$
If the mean free time $\tau$ and the effective mass $m^*$ do not depend on the applied electric field (which is the case when the current is not too high, causing the temperature to rise), we can safely say that the electric current density is directly proportional to the applied electric field. This statement, connecting $J$ and $E$, is a form of the famous Ohm’s law, which we will explore further in the next article.
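As a closing sketch (again with illustrative values of our own, not figures from the article), the formula $\sigma = n e^2 \tau / m^*$ reproduces the right order of magnitude of conductivity for a good metal:

```python
e = 1.602e-19       # electron charge (C)
m_star = 9.109e-31  # effective mass; taken equal to the free-electron mass here
n = 8.5e28          # carrier density (m^-3), a copper-like value
tau = 2.5e-14       # mean free time (s), an assumed illustrative value

sigma = n * e**2 * tau / m_star  # electrical conductivity, sigma = n e^2 tau / m*
J = sigma * 1.0                  # current density (A/m^2) for a field of 1 V/m

print(sigma)  # of the order of 1e7 S/m, as measured for good metals
```

Note how sensitive the result is to `tau`: halving the mean free time halves the conductivity, which is why heating a metal (more collisions, shorter `tau`) raises its resistance.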


The post The electric motor appeared first on Physics Capsule.

Michael Faraday and Joseph Henry, in the early 1830s, worked extensively on electricity and magnetism (then thought to be disparate) and chanced upon an interesting phenomenon. Faraday suspended a wire in a vessel half filled with mercury, and connected the other end of the wire and the bottom of the vessel to a battery. Mercury being a good conductor of electricity, current flows through the closed path. If, however, a bar magnet is fixed in the vessel, half immersed in the mercury, the suspended wire does something strange – it starts rotating around the magnet.

Faraday’s reasoning, as would be later tested several times and eventually confirmed, was that the current carrying wire produces a magnetic field of its own, which interacts with the magnetic field of the bar magnet, hence applying force on the wire and causing it to rotate. This was the first electric motor ever, capable of converting the energy of interaction between electricity and magnetism, into mechanical work.

Our aim now is to understand the basic working principle of Faraday’s mercury-bath experiment, and hence learn how a modern electric motor works. Firstly, let us get rid of the practical limitations that may get in the way of our understanding the concept in this section. In describing it, we will need a wire through which a current flows, placed in a magnetic field. As you know, current flows only when there’s a closed loop (connected to a battery, say). So, should we consider the entire loop of the wire? No. We assume that the wire does form a closed loop so that current may flow through it, but we observe only a part of this wire, which we indicate in the images below. Also, there’s no need to question the source of the (uniform) magnetic field we use. For imagination’s sake, you could assume the field is generated by magnets (which won’t be shown in the images).

Now, switch on the magnetic field and place the wire in it. Let the wire be placed so that the direction of current flow through it is parallel to the direction of the magnetic field. You will observe nothing. No forces act on the wire. Next, place the wire so that the direction of the current is perpendicular to the direction of the magnetic field. Now you’ll see that the wire segment bends outwards (coming out of the screen), as can be seen in the image below. Therefore, we may conclude that when the current and the magnetic field are perpendicular to each other, the current wire experiences a force which is perpendicular to both the current and the field (for further reasoning along this line, read this).

To ascertain the correct relative directions of the force and fields, John Ambrose Fleming came up with a pair of interesting rules. The rules only work if the two known quantities are perpendicular to each other. The third will then be perpendicular to them both.

Start by stretching out the thumb, forefinger and middle finger of your left hand, so that they are all perpendicular to each other, somewhat like the x, y, z axes of a coördinate system. Position your hand so that your forefinger points in the direction of the magnetic field, and your middle finger in the direction of the current flow through the wire. The direction in which your thumb points then gives the direction in which the wire experiences a force.
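Fleming's left-hand rule is a mnemonic for the vector relation $\vec{F} = I\,\vec{L} \times \vec{B}$, where $\vec{L}$ points along the current. A small sketch of ours checks one configuration with a hand-rolled cross product:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

I = 2.0              # current (A)
L = (1.0, 0.0, 0.0)  # middle finger: current along +x (a 1 m segment)
B = (0.0, 1.0, 0.0)  # forefinger: field along +y

F = tuple(I * c for c in cross(L, B))
print(F)  # (0.0, 0.0, 2.0): the thumb, and the force, point along +z
```

Swapping the current and field directions flips the sign of the force, which is exactly what the hand mnemonic encodes.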

There’s a right-hand rule of Fleming’s as well, which concerns finding the direction of the current produced due to the motion of a conductor in a magnetic field – a process reverse to the one we’re interested in here. We’ll describe it when we explain electromagnetic induction.

Now that we know the rules of the game, let’s lay out the arena. How can we use the force that current wires experience in magnetic fields, and turn it into something very useful? We have seen that in a given magnetic field, current flowing one way, leads to a force on the wire in one direction, while current flowing the other way, causes a force on the wire in the opposite direction. So, the question is, can these opposite forces be used to produce a rotation? We already know that two forces acting on different parts of an object, in opposite directions, will rotate the object. It turns out, there’s a very simple set up that will allow us to produce continuous rotations just making use of the force the current wires experience in magnetic fields.

As we pointed out in the beginning, current flow requires a closed loop. Therefore, we consider a loop of conducting wire in a shape as shown in the image below. We have an *almost-rectangular* loop ABCD, whose ends are connected to a battery. The battery is the source of the electric current through the loop. On either side of the rectangular loop, we place magnets of opposite polarities. These magnets will be responsible for the required magnetic field.

Now, with the current loop immersed in the magnetic field, we analyze if there are any forces experienced by the loop due to the field. Let’s pick the part AB of the loop first. We observe that the current flowing through this segment of the wire is perpendicular to the direction of the magnetic field due to the magnets. Hence, we may apply Fleming’s left-hand rule to find the direction of the force on the wire segment. (We hope you have your left-hand fingers stretched and ready, right now.) With the magnetic field going rightwards, and the current flowing from A to B, the rule tells us that the direction of the force on the wire AB is “downwards”. Next, pick the segment CD of the loop. Here, the direction of the current is opposite to that in AB, while the magnetic field is unaltered in any way. So, naturally, the rule tells us that the force on the segment CD is “upwards”. And what about the segments BC and DA? They are always parallel to the magnetic field direction, and hence experience no force at all from the magnetic field.

In conclusion, there’s a downward force on the segment AB and an upward force on the segment CD, of the rectangular loop. These two forces will cause the entire loop to start rotating (counter-clockwise, as seen from the perspective of the image above). Now, consider the situation when the loop has rotated by 90 degrees. The force on the segment CD will continue to point upwards, while the force on AB will remain downwards. In this configuration, the two opposite forces won’t cause a rotation, but will conspire to change the shape of the loop (which we won’t allow to happen, by taking care that the material of the wire loop is strong enough to withstand the distorting force). Despite there being no forces to cause further rotation, the loop continues rotating due to inertia. Once the loop has rotated through 180 degrees, apply Fleming’s left hand rule. You will find that the force on CD is still upwards, while the force on AB is still downwards (AB and CD have exchanged places, with the direction of current flow through each, unaltered). Which means, that the forces now will cause a rotation in the opposite direction, bringing the loop back to where we started. And the process repeats over and over again – the loop oscillates.

A back and forth oscillation isn’t of much good use. What we need is a continuous rotational motion. To make this happen, we first realize that the core reason for the oscillatory motion of the loop is that the direction of current through each segment of the loop remains the same all through the motion. If we had a mechanism to somehow flip the direction of the current through each segment of the loop, every “half rotation”, we could have a continuous rotatory motion. We accomplish this task by using what is called a commutator.

The commutator is basically a ring split into two halves, which rotates along with the loop, and against which press two stationary conducting brushes, as shown in the image. The brushes are connected to the battery terminals, while the half-rings are connected to the ends of the rotating loop ABCD. With this set-up, as the loop rotates through every 180 degrees, the direction of the current through the loop flips. And hence we have a continuous rotational motion, caused solely by the force current-carrying wires experience when placed in a magnetic field.

What started with accidental discoveries of the connection between electricity & magnetism, has quite cleverly been converted into something of immense use for us. For there isn’t (and can never be) any dispute regarding the wonders an electric motor is capable of.

The post The electric motor appeared first on Physics Capsule.

The post The second law of thermodynamics, part 2 appeared first on Physics Capsule.

At the heart of every engine is a cyclic process. Fundamentally, this is simply a series of processes that take a system through any number of stages that finally return to the initial stage, leaving the system as it was. To gain a good understanding of the typical, ideal cyclic process, one needs to grasp the relationship between heat and work, which we discussed in part one, and the idea of temperature.

While there are several such cyclic processes in thermodynamics, including the steam engine, the Otto cycle, the diesel engine etc., we will restrict our discussion to what is arguably the most important of the lot, the *Carnot heat engine* or the *Carnot cycle*. This is also motivated by the fact that the most important statements of the second law of thermodynamics are built on the Carnot cycle. As a result, this cycle is sometimes also known simply as the *thermodynamic cycle*.

With reference to the figure above, consider our system to be at $A$ initially, with its pressure and volume given by $(p_A, V_A)$. Say it is at a temperature $T_1$. The cyclic process it undergoes will be as follows:

1. **Isothermal expansion** of the system takes place with an input of heat $Q_1$, which is manifested as a change in the volume and pressure of the system, or, simply, some work done *by* the system on its surroundings, implying that the temperature, $T_1$, remains constant. Say the system achieves the state $B$ at the end of this, arriving at $(p_B, V_B)$.
2. **Isentropic expansion** of the system now takes it to $C$ at $(p_C, V_C)$, with a further increase in volume (and a fall in pressure) as the system continues to do some work on its surroundings and, in turn, sees a fall in temperature to some $T_2$ since no external heat is being supplied. Such a process is also known as an *adiabatic process* and completes half of our cycle. Having expanded in volume, our system is now ready to begin the remaining half of its process with contractions.
3. **Isothermal compression** of the system drops its volume and pressurises it to some $(p_D, V_D)$ at $D$, with a loss of heat $Q_2$ and maintenance of the same temperature, $T_2$, as work is done *on* the system this time.
4. **Isentropic compression** finally returns the system to $A$ as work continues to be done *on* the system, resulting in a rise in temperature to $T_1$ since no heat is lost. This is also known as *adiabatic compression* and sees our system complete its cyclic process, returning to its initial state from which the cycle can repeat, starting with another bout of isothermal expansion.
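To make the four stages concrete, here is a minimal numerical sketch of the four corner states for one mole of an ideal monatomic gas. Every number here is an assumption chosen for illustration, not a value from this article:

```python
# Corner states of a Carnot cycle for one mole of an ideal monatomic gas.
R = 8.314          # molar gas constant, J/(mol K)
gamma = 5.0 / 3.0  # heat capacity ratio for a monatomic ideal gas

T1, T2 = 500.0, 300.0   # hot and cold reservoir temperatures (K), assumed
V_A = 1e-3              # volume at state A (m^3), assumed
V_B = 2e-3              # volume after the isothermal expansion A -> B, assumed

def p(T, V):
    return R * T / V    # ideal-gas law for n = 1 mol

# B -> C and D -> A are adiabatic, where T * V^(gamma-1) stays constant,
# so the cold-side volumes follow from the hot-side ones:
V_C = V_B * (T1 / T2) ** (1.0 / (gamma - 1.0))
V_D = V_A * (T1 / T2) ** (1.0 / (gamma - 1.0))

states = {"A": (p(T1, V_A), V_A, T1), "B": (p(T1, V_B), V_B, T1),
          "C": (p(T2, V_C), V_C, T2), "D": (p(T2, V_D), V_D, T2)}
for name, (P, V, T) in states.items():
    print(f"{name}: p = {P:10.1f} Pa, V = {V:.5f} m^3, T = {T:.0f} K")
```

A nice consistency check falls out of the adiabatic relation: the expansion ratios of the two isothermal legs are equal, V_C/V_D = V_B/V_A, which is what makes the Carnot cycle close on itself.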

In this way, the system can, theoretically, keep going perpetually. However, as stated earlier, this is an ideal process, particularly because of the isentropic stages. (Isentropic processes are idealised adiabatic processes, i.e. processes that see no heat or mass exchange, and that further involve frictionless work.) It is also worth noting that, just as the cycle as a whole is reversible, so is each stage: the moves from A to B, B to C, C to D and D to A are all individually reversible processes.

The simplest *engine* one can think of is a gas in a piston-cylinder system. Work is done *by* the gas when it pushes against the piston and *on* the gas when we push the piston in. The entire thermodynamic cycle can be imagined this way. The product of pressure and change in volume is known as *pressure-volume work*, which means the area enclosed by the cycle on the graph (roughly approximated as a rectangle) gives the net work done.
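As a small illustration of pressure-volume work, the net work over one Carnot cycle of an ideal gas can be computed from the two isothermal legs alone, since the work done on the two adiabatic legs cancels. All numbers below are assumptions for illustration:

```python
import math

# Net work done over one Carnot cycle equals the area enclosed on the p-V
# diagram. For one mole of an ideal gas, an isothermal leg contributes
# W = R*T*ln(V_final/V_initial), and the two adiabatic legs cancel.
R = 8.314                 # J/(mol K)
T1, T2 = 500.0, 300.0     # hot and cold temperatures (K), assumed
V_A, V_B = 1e-3, 2e-3     # volumes bounding the isothermal expansion (m^3), assumed

W_hot = R * T1 * math.log(V_B / V_A)    # work done BY the gas, A -> B
W_cold = -R * T2 * math.log(V_B / V_A)  # work done ON the gas, C -> D
W_net = W_hot + W_cold                  # enclosed area = R*(T1 - T2)*ln(V_B/V_A)

print(f"net work per cycle: {W_net:.1f} J")  # about 1.15 kJ for these numbers
```

Note that the compression leg uses the same volume ratio as the expansion leg (a Carnot-cycle property), which is why the net work reduces to R(T1 − T2)·ln(V_B/V_A).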

Let us take a look back at the first statement of the second law of thermodynamics that we discussed in part one:

The idea of an engine, or even something as fundamental as a gas-filled piston-cylinder system, can be combined with the thermodynamic cycle to arrive at a physical picture for Carnot’s statement. The operating temperatures, according to our graph, are $T_1$ and $T_2$, which correspond to the source and sink temperatures.

What Carnot says, then, is that if one were to build an engine that operates according to the graph above, it would achieve the maximum efficiency possible between those two temperatures. In part one we defined efficiency as

$$\eta = 1 - \frac{T_2}{T_1}$$

which tends to one as the difference between the source and sink temperatures increases. For instance, if you had an engine operating between $T_2 \approx 300\ \mathrm{K}$, or, approximately, the temperature of a pleasant room, and $T_1 \approx 373\ \mathrm{K}$, somewhere around the boiling point of water, it would have an efficiency of about 20%, which is not all that impressive.

What if we operate it between the cold of a vacuum ($T_2 \approx 3\ \mathrm{K}$) and the heat of the sun’s surface ($T_1 \approx 5778\ \mathrm{K}$)? We have an incredible engine running at 99.95% efficiency.
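Both figures follow directly from the efficiency formula; here is a quick check, with the temperatures taken as rounded assumptions rather than exact figures:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum efficiency of any engine operating between two temperatures (K)."""
    return 1.0 - T_cold / T_hot

# Pleasant room vs boiling water (rounded, assumed temperatures):
print(f"{carnot_efficiency(373.0, 300.0):.0%}")   # about 20%
# Sun's surface vs the cold of space:
print(f"{carnot_efficiency(5778.0, 3.0):.2%}")    # about 99.95%
```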

Notice that even the Carnot cycle does not achieve 100% efficiency, since $T_2$ can never go to absolute zero. But, closer to reality, we had to operate between the Sun’s surface and outer space to even reach 99% efficiency. The Carnot cycle is but a conceptual entity. It is the ideal engine that no engine can be like, but the purpose of the Carnot cycle is not to be exemplary but instead to help establish the maximum possible efficiency that an engine can achieve given its operating temperatures.

Let us attribute this *work* to a substance which we shall call, rather uninspiringly, the “working substance” or “working fluid”. The job of this substance (and indeed its definition) is that, through changes in pressure, volume and temperature, a working substance, which is often a fluid, helps run a thermodynamic process. In our example system above, the gas in the piston-cylinder setup is the working substance.

A Carnot engine, or an ideal heat engine, works as depicted in the figure above. It follows the Carnot reversible cycle, or the thermodynamic cycle discussed above, and is therefore also known as a Carnot reversible heat engine. (It is worth noting that this is different from a steam engine, a diesel engine etc. and is more efficient than all of them, and also more idealistic.)

It draws heat from a *high temperature reservoir*, simply called the *source*, and does some work while expelling heat to a *low temperature reservoir* or, simply, *sink*. It is the working substance that, in going from A to B to C to D and back to A in its thermodynamic cycle, returns to its original state after experiencing a temperature change between $T_1$ and $T_2$ and a corresponding heat gain and loss. All of this is directly related to the entropy of the working substance, which we will discuss in detail in part three of this series. We will, then, also see just *why* no other heat engine operating between the same $T_1$ and $T_2$ can achieve a greater efficiency.

The fact that the thermodynamic cycle is reversible means our ideal heat engine working on that principle is also reversible. If we then simply reverse the directions of all the arrows in the figure above, we have an engine that *takes heat from the sink* while some work is being done on its working substance, and then *supplies heat to the source*. In other words, so long as we keep doing work *on* the engine, it cools the sink. That is, we have with us a refrigerator.

The fact that work needs to be done *on* the working fluid (or, more generally, on the engine) is in agreement with the Clausius statement that heat passing from any body to a hotter one is not a spontaneous process. It further clarifies an ambiguity with the first law. Whereas the first law requires that heat be accounted for, it does not prevent spontaneous energy transfer from, say, the floor to a chair, or from ice to steam. It is the second law that puts a restriction on this, stating that spontaneous heat flow can only occur from a hotter body to a colder one and not vice versa.

This might seem like little more than common sense, but further exploration of the idea and the introduction of a mathematical basis (both of which we will do in part three) will quickly give us a more solid foundation on which to make such a claim.

What about the refrigerator we now have before us? To understand how good or bad it is, we can define, somewhat analogously to the efficiency, $\eta$, a new ratio called the *co-efficient of performance*, which has no attractive greek letter associated with it and is simply defined as the ratio of the heat $Q_2$ extracted from the cold reservoir to the work $W$ done on the engine,

$$\mathrm{COP} = \frac{Q_2}{W}$$

or, in terms of temperature,

$$\mathrm{COP} = \frac{T_2}{T_1 - T_2}$$

which is essentially the reciprocal of $\eta$ (it works out to $1/\eta - 1$). Consequently, it can also be much greater than one. Using our realistic room temperature system from the heat engine example, a refrigerator operating between about $300\ \mathrm{K}$ and $373\ \mathrm{K}$ has a co-efficient of performance of about 4. (The sun-vacuum system, by contrast, has a co-efficient of performance of the order of $10^{-4}$.)
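These co-efficients of performance can be checked with a few lines, assuming round temperatures (300 K and 373 K for the room-to-boiling example, 3 K and 5778 K for the sun-vacuum one):

```python
def refrigerator_cop(T_hot, T_cold):
    """Carnot co-efficient of performance of a refrigerator: T2 / (T1 - T2)."""
    return T_cold / (T_hot - T_cold)

# Room-to-boiling example (rounded, assumed temperatures):
print(round(refrigerator_cop(373.0, 300.0), 1))   # about 4.1
# Sun-vacuum system:
print(f"{refrigerator_cop(5778.0, 3.0):.1e}")     # of the order of 1e-4
```

The asymmetry is worth noticing: the temperature pairs that make a superb engine make a dreadful refrigerator, and vice versa, since a small temperature gap is easy to pump heat across but extracts little work.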

In the case of an actual refrigerator (the one in your house), the low temperature reservoir is the food, the high temperature reservoir is at the back of the refrigerator (which gets pretty hot on occasion) and the coolant acts as the working substance, flowing in cycles between the reservoirs and working just like our ideal heat engine in reverse, only with a much lower co-efficient of performance.

To understand the reason why spontaneous heat flow has a direction, to understand why the Carnot reversible cycle is the most efficient and to understand how scientists in the 19th century expected the universe to end, we have to explore the wonderful idea of *entropy*. This will be the subject of our discussion in part three of this series.


The post Physics in 2016: the year in review and more appeared first on Physics Capsule.

In our first editorial of the year, we take a look back at the most notable events in the world of physics in 2016. From astrophysics to high-energy physics to condensed matter physics, there was something interesting in every field this past year.

We published an explainer about LIGO almost as soon as the news was announced, because this was a big discovery in physics, perhaps the biggest since the discovery of the Higgs boson. In 1915, Einstein’s theory of general relativity led to several predictions, among which gravitational waves were some of the more exotic ones. (Wormholes, we hope, are on the way.) Thanks to the success of LIGO, further upgrades to the detectors are now in the works.

While gravitational waves did not *prove* Einstein’s theory, they were an important *confirmation* of his predictions. With a black hole merger giving off energy as gravitational waves, the two Laser Interferometer Gravitational-wave Observatory detectors in America, each consisting of 4 km arms and placed 3,000 km apart, simultaneously detected spatial expansions/contractions due to these waves in an historic confirmation of Einstein’s theory that came 101 years after his first paper on general relativity.

Quarks are among the most fundamental particles we know today. And we knew that pairs and triplets of quarks existed. But this past year CERN results showed the presence of quadruplets of quarks that exist together, albeit for only a tiny fraction of a second, before decaying into other (deemed less exotic) particles.

The LHCb collaboration noticed quite a few such *tetraquarks* made up of charm, anti-charm, strange and anti-strange quarks when studying the decay of B-mesons. Although first proposed in 2003, the first solid evidence came in 2014, with some doubts being cast on further candidates in February 2016 before three promising candidates were finally named in June of last year.

Another year, another Nobel for condensed matter physics. David Thouless, Duncan Haldane and Michael Kosterlitz won the physics Nobel in 2016 for their work on topological phase transitions.

We published an interesting explainer on their work, complete with an introduction to topology and details of its role in physics. It covered the KT transition, their explanation of the quantum Hall effect, and how their use of topology paved the way for similar analyses of various other phenomena across physics.

A new type of crystal symmetry was defined in 2016, wherein the symmetry is not based on atomic or molecular positions in space but rather on the relationship between their periodic motions. Interestingly enough, this concept arose from the symmetry one may quickly recognise among satellites (which, needless to say, are in periodic motion).

Physicists working on a proposed gravitational wave observatory thought of having four satellites in non-coplanar orbit around the Sun, which led to the proposal of this new type of crystal structure. The amount of such relative positional symmetry among periodically moving bodies is given by their *choreography*.

The rather funnily named two-mode Schrödinger’s cat state speaks of the famous dead/alive cat-in-the-box thought experiment but this time in terms of two cats dead or alive and also possibly in two boxes at once. Nobody asked for this. In all seriousness, though, this is the idea of having photons in superposition and in two entangled states, represented by the cats, dead/alive states and the boxes respectively.

The reason this idea is on our list is that it marks a milestone in quantum computing, as physicists find ways of defining increasingly complex quantum states that can one day make everyday quantum computing a reality. The experiment involved using two harmonic oscillators, generating microwave fields that would be confined in two cavities, and then comparing the states of the photons in these cavities. *Science* provided a concise description when the study came out in May last year.

There were many more discussions this year that proved to be extremely interesting and sparked several debates. New potentially habitable planets were found, some questioned dark matter while others tried to explain it, the idea of a sterile neutrino was cast aside, negative refraction of electrons was discovered in graphene, and so on. And there was some news we never wanted to hear: Vera Rubin, one of the first physicists to help confirm the existence of dark matter, died in December. And yet, through the year, in the spirit of physics, some indulged in fun bubble blowing.

Here at Physics Capsule, we went through some big changes ourselves: we moved hosting to Los Angeles, ramped up security, speed and reliability and refined some of our backstage work. We also refined our design with bolder images and a generally more wholesome and polished user experience. For a second year, we retained *Adobe Text* as our main typeface, accompanied by *Filson Pro* and, for headings, we moved to a classic choice that embodies our website, aims and beliefs: *Cheltenham*.

We hope to have a lot more for you this year, in writing and otherwise, and we are thrilled and thankful to have you with us on this wonderful journey. Here’s looking forward to a great year ahead.

