Physics


Semiconductors play an important role in solving the planet’s energy issues. There are two distinct, but related, phenomena: the conversion of (sun)light into electricity and the conversion of electrical power into visible light. The first conversion is known as photovoltaic (PV) technology and the second is the one used by Light Emitting Diodes (LEDs) in Solid State Lighting applications. Both conversions enjoy considerable interest from scientists, governments, energy companies and citizens alike. It is clear that both energy conversions can contribute substantially to solving the availability and distribution of energy around the planet.

A key factor for the successful acceptance (at least in terms of economic feasibility) of both PV and LEDs is the efficiency of these two types of energy conversion, since this directly impacts cost per Watt or cost per Lumen. Indeed, the question arises whether there are fundamental limitations to these energy conversions. For PV cells it has been reported that the upper efficiency of silicon-based cells runs at about 30%. For LEDs no fundamental barrier has been reported so far that would keep the LED away from 100% efficiency (although the fact that the device heats up during operation already hints at a less than 100% efficient light conversion).

I will come back to the efficiency of PV cells in a future contribution; for now I would like to focus on the efficiency of an LED. An LED is typically constructed from a classical p-n junction, but in the LED case the p and n material are separated by what is called an active zone that can be either doped or intrinsic. The semiconductor material must be a direct band gap semiconductor in order to have sufficient conversion efficiency[i]. By putting the LED in forward bias, the electrons and holes that arrive in the active zone can recombine in two different ways:

– Radiative recombination. It is this recombination that produces the light emission from the LED.

– Non-radiative recombination. Several such processes occur as well; they reduce the number of holes and electrons available for light emission.

There are other (non-radiative) loss mechanisms operating as well, such as absorption of the emitted photons by the semiconductor, that further reduce the light generation efficiency.

Recently an article appeared in the Journal of Applied Physics[ii] that gives good insight into the different factors that influence the power-to-light conversion efficiency. An important figure of merit is the so-called wall plug efficiency, defined as follows:

Wall Plug Efficiency = emission power/electrical power

a pretty straightforward definition. In the article all the different recombination and loss mechanisms are described mathematically and then put together in one model for the LED. This model can then calculate the behavior (and thus the wall plug efficiency) of the LED device in terms of operating conditions (temperature, current, voltage), material properties (semiconductor material such as GaN or GaAs, and doping) and LED structure (thickness of the different layers, metal contacts and layout of the active layer). This is of great help when optimizing the LED device for conversion efficiency.
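To make the definition concrete, here is a minimal sketch of the wall plug efficiency calculation; the drive current, voltage and measured optical output power are made-up example numbers, not values from the article.

```python
def wall_plug_efficiency(optical_power_w, current_a, voltage_v):
    """Wall plug efficiency = emitted optical power / electrical input power."""
    electrical_power_w = current_a * voltage_v
    return optical_power_w / electrical_power_w

# Hypothetical operating point of a blue LED (illustrative numbers only):
# 350 mA drive current at 3.2 V forward voltage, 0.45 W of light emitted.
wpe = wall_plug_efficiency(optical_power_w=0.45, current_a=0.350, voltage_v=3.2)
print(f"Wall plug efficiency: {wpe:.1%}")   # ~40% for these example numbers
```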

Let me summarize a few important conclusions from the article:

–         There is no fundamental reason why the power-to-light conversion cannot be 100%. Even stronger: the conversion can be more than 100% (see the next point for an explanation)! However, the highest efficiencies may not always lie in a practical operating window (for instance at the current densities the LED needs to run at to deliver a required light output per unit of semiconductor surface area).

–         The energy of the emitted photon may come not only from the band gap energy difference; phonons (thermal energy from the lattice) may contribute as well. In that case the LED can act as a heat pump: the device actually cools and can in that way extract heat from the environment and achieve an efficiency better than 100% (using the wall plug efficiency definition above); a small numerical sketch of this energy balance follows the list below.

–         Further improvements to increase the light output of LEDs will be possible.
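As a rough illustration of the previous points (my own sketch, not taken from the article), the per-photon energy balance below compares the photon energy hν with the electrical energy qV supplied per injected electron. If every electron yields a photon and hν > qV, the wall plug efficiency exceeds 100%, with the difference supplied as heat drawn from the lattice. The bias voltage and wavelength are hypothetical example values.

```python
# Physical constants
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def ideal_wpe(wavelength_m, bias_v):
    """Wall plug efficiency assuming one emitted photon per injected electron.

    WPE = (photon energy) / (electrical energy per electron) = h*nu / (q*V).
    Values above 1 mean the balance is made up by heat taken from the lattice
    (the heat-pump regime described above)."""
    photon_energy_j = H * C / wavelength_m
    return photon_energy_j / (Q * bias_v)

# Hypothetical example: a 450 nm (blue) emitter driven at a bias of 2.70 V.
wpe = ideal_wpe(450e-9, 2.70)
print(f"Ideal wall plug efficiency: {wpe:.2f}")  # ~1.02, i.e. slightly above 100%
```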

Thus, we can expect more developments in the coming years to improve Solid State Lighting technology, and this will be a very valuable contribution to our energy strategy.


[i] See for an explanation: http://en.wikipedia.org/wiki/Direct_and_indirect_band_gaps

[ii] O. Heikkila, J. Oksanen, J. Tulkki, Ultimate limit and temperature dependency of light-emitting diode efficiency, Journal of Applied Physics 105, 093119 (2009)

©  Copyright John Schmitz 2010

Classical thermodynamics, the thermodynamics of Carnot, Clausius, Boltzmann, Gibbs, Joule, Kelvin and Helmholtz, is often also called equilibrium thermodynamics. Indeed, we are frequently alerted that classical thermodynamics holds only for systems that are in equilibrium (or at least close to equilibrium). But why is that? The First Law of thermodynamics, the conservation of energy, holds for any system, you could argue. Certainly this is true. But how about the Second Law, the law of ever increasing entropy? Here it already becomes a bit trickier, since the change in entropy for a given change in the system parameters is defined as:

ΔS = Qrev / T

where Qrev stands for the reversibly exchanged heat and T is the absolute temperature. Thus the change in entropy between the start and end state of a system can only be calculated by designing a reversible path from the beginning to the end. The result of all this is that the calculations done by engineers to determine efficiencies result in upper limits; actual performance is lower.
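As a small illustration of this "reversible path" bookkeeping (my own example, not from the text, using the textbook heat capacity of water): to get the entropy change of water heated from 293 K to 353 K we integrate dS = C dT/T along a reversible heating path, even if the actual heating was done irreversibly on a stove.

```python
import math

def entropy_change_heating(mass_kg, t_start_k, t_end_k, cp_j_per_kg_k=4186.0):
    """Entropy change for heating an (incompressible) mass along a reversible path:
    dS = C dT / T  ->  delta_S = m * cp * ln(T_end / T_start)."""
    return mass_kg * cp_j_per_kg_k * math.log(t_end_k / t_start_k)

# 1 kg of water heated from 293 K (20 C) to 353 K (80 C):
print(f"delta S = {entropy_change_heating(1.0, 293.0, 353.0):.0f} J/K")  # ~780 J/K
```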

However, there are ways to overcome the entropy problem described above. It is important to realize here that the properties we use to describe a system in thermodynamic terms have some peculiarities. Whereas properties such as volume, mass and energy can easily be calculated for any system, regardless of whether it is in equilibrium or not, parameters such as pressure, temperature and, as we saw above, entropy are not so straightforward to define for systems that are not at equilibrium. For example, if we take two heat reservoirs at different temperatures and connect them with a heat conducting rod, heat will flow from the hot reservoir to the cooler one. But the temperature of the rod is not so easily defined, and for the system as a whole certainly not. The same is true for liquids in which a thermal gradient has been established, or in which there is a pressure gradient because mass transport is going on.

These kinds of problems were already noticed in the early days of thermodynamics. In the beginning of the second half of the last century a new chapter was added to thermodynamic theory, namely that of non-equilibrium thermodynamics. Important names connected to this are Prigogine and Onsager, who both earned the Nobel Prize for their work. The key assumption is that the system is broken down into subsystems such that in each subsystem a condition called “local equilibrium” can be assumed. In such a subsystem the internal states relax to equilibrium much faster than parameters such as pressure and temperature change. In general this approach worked well as long as the systems were not too far from equilibrium.

But science moves on, and new situations were found where the approach of Onsager and Prigogine was no longer valid. In a recent article in Scientific American, and in other articles by J.M. Rubi, it is argued that in many relevant systems, such as those in molecular biology and nanotechnology, the conditions can be far from equilibrium, and the question arises whether the Second Law will still hold up. It appears that if the system under study is described in a multi-parameter space spanned by all the relevant parameters in addition to the spatial coordinates, then non-equilibrium thermodynamics can be applied again. And indeed there is no reason to get worried: the Second Law still holds up.

As I mentioned in previous blogs, the discussion on the Second Law is still very lively today, even 150 years after its discovery and description.

 Further reading:

  1. J.M. Rubi, The Long Arm of the Second Law, Scientific American, 41, Nov 2008
  2. J.M.G. Vilar and J.M. Rubi, Thermodynamics “beyond” Local Equilibrium, Sept 2001 (http://arxiv.org/abs/cond-mat/0110614)
  3. J.W. Moore, Physical Chemistry, pp 356, 1978, Longman Group Limited, London, ISBN 0 582 44234 6

There are many instances where we can see that, in our attempts to transform energy into as much usable work as possible, we are always left with a “rest” amount of heat that we cannot use anymore to generate even more work¹. Clear examples of these imperfect transformations are the coolant radiators in our cars and the cooling towers of many factories and power plants. Power plants that use fossil fuels can have an efficiency as poor as 50%, or often even lower, meaning that only 50% of the energy enclosed in the fuel is converted into electrical power: the fuel is burned, the heat generated produces steam, and the steam then drives turbines and generators. 50% or less; is that not a shame? Of course the question arises why that is the case.

Why can we not convert the energy enclosed in the fuel into useful work for the full 100%? Well, it is here that the Second Law of thermodynamics kicks in, also known as the entropy law. But before we go deeper into this entropy law, first a bit more about the First Law of thermodynamics. The First Law is nothing more than the law of conservation of energy. Energy can be present in many forms (chemical, heat, work, electrical, nuclear, etc.) and the total amount of all this energy in the universe is constant. The First Law would not object to converting a given amount of energy fully into work. Unfortunately we never observe this attractive situation. The answer why that is so can be found from an analysis of the entropy law.

What is entropy? Entropy is a concept discovered while people were answering “simple” questions such as why heat only streams from warm to cold places. Another question that came up around 1800 was prompted by the growing popularity of steam engines. Steam engines can also be called heat engines because they convert heat into work. Another example of a heat engine is a car engine. Steam engines were used in England to pump water out of the coal mines, a job that was done by many workers day and night before steam engines became available. To keep the steam engine running, fuel (such as wood or coal) was burned to generate the steam. While the steam engine was gaining ground, many improvements were made (for instance, James Watt was able to improve efficiency by about 25%) that increased the efficiency of the steam engines considerably. Therefore much more work could be obtained from a given amount of fuel.

While this went on, a young French military engineer, Sadi Carnot, asked himself the question whether there was perhaps an upper limit to this efficiency. To answer that question he carried out a careful analysis around 1825 using a simplified model of a steam engine². The result of his analysis was that the upper limit of the efficiency was determined by only two factors: the temperature of the heat source (the steam) and the temperature of the heat sink (the location where the steam is condensed, for all practical matters the outside air). More precisely, he found that the amount of heat, Qh, taken from the heat source at temperature Th, is related to the amount of heat given up at the heat sink, Qc, at temperature Tc, as: Qh/Th = Qc/Tc. Although he did not coin the term entropy for the quantity Q/T (that was done by Rudolf Clausius around 1850), he clearly laid the foundation for scientists such as Clausius, who came to the conclusion that “something was missing” and was needed in addition to the First Law. That something later became the Second Law of thermodynamics.

The best possible efficiency of the steam engine was then shown by Carnot to be equal to (Th-Tc)/Th (an atmospheric steam engine working between steam at 373 K and surroundings at 273 K is therefore limited to about (373-273)/373 ≈ 27% efficiency).
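A minimal sketch of Carnot’s bookkeeping (my own illustration): given Th, Tc and the heat Qh drawn from the source, the relation Qh/Th = Qc/Tc fixes the heat Qc that must be rejected, and the remainder is the maximum work.

```python
def carnot_budget(q_hot_j, t_hot_k, t_cold_k):
    """Ideal (reversible) heat budget: Qh/Th = Qc/Tc, W = Qh - Qc."""
    q_cold_j = q_hot_j * t_cold_k / t_hot_k   # heat that must go to the sink
    work_j = q_hot_j - q_cold_j               # best-case useful work
    efficiency = work_j / q_hot_j             # equals (Th - Tc)/Th
    return q_cold_j, work_j, efficiency

# Atmospheric steam engine example: 1000 J taken in at 373 K, sink at 273 K.
q_cold, work, eta = carnot_budget(1000.0, 373.0, 273.0)
print(f"Rejected heat: {q_cold:.0f} J, work: {work:.0f} J, efficiency: {eta:.0%}")
```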

The work of Carnot showed very clearly that in order for a heat engine to work you MUST have a heat source at high temperature and a heat sink at a colder temperature, and that the heat disposed of at the heat sink can NEVER generate any work anymore, unless you have another heat sink available at an even lower temperature. Also, from the fact that Qh/Th = Qc/Tc, it becomes clear that in a heat engine you MUST give up an amount of heat, Qc, to the cold sink; there is no escape. That is the fundamental reason why the efficiency of heat engines is less than 100%! We can also see now that the efficiency of heat engines will increase if we make the temperature difference between the heat source and heat sink as large as possible.

See for more background on this topic:

https://secondlawoflife.wordpress.com/2007/08/28/carnot-efficiencies/

https://secondlawoflife.wordpress.com/2007/05/25/can-we-recycle-energy-or-the-role-of-law-of-entropy/

 

© 2008 John Schmitz

____________________________

1. With work we mean here the ability to lift weights, or to turn wheels which in turn can rotate shafts.

2. This model is well known as the Carnot cycle.

Near the end of the medieval era we saw the invention of the mechanical clock. Before then people relied on sun clocks, which of course gave rise to differences all over the world in the duration of an hour and the synchronization of time, and which obviously did not work after sunset. The synchronization of time over large distances has quite some history[1]. There were at least two reasons why civilization wanted reliable time synchronization. One was the discovery of America and the other the development of a reliable train schedule, which required the synchronization of clocks in different cities. There were many occasions of colliding trains simply because of differences in time at different locations along the rail network. Especially in France, several attempts were made to synchronize clocks; one used pneumatic systems driven by steam pressure. However, even at short distances, such as within the boundaries of Paris, this did not work at all. The introduction of the telegraph allowed synchronization over large distances, and Britain and the USA were leading those efforts. In 1884 an international conference agreed on a standard time (Greenwich Time) and the division of the world into 24 time zones. Humans, it seemed, had captured time and put it in a convenient box.

 

However, one can say that since the birth of the theory of relativity our thinking about what time[2] exactly is has changed dramatically. Einstein had to accept that time could not possibly be the same for each observer traveling at a different speed in order to explain why the speed of light was the same for each observer. Since then insights have grown, but the discussion about time continues. This can be illustrated by a recent article by Sean M. Carroll[3]. The article poses some fundamental questions about what is generally called the “time’s arrow”. What is the time’s arrow? This is simple: it is the arrow that points from the past to the future. However, according to Stephen Hawking there are at least three arrows of time[4]:

 

– The thermodynamic time’s arrow: this is basically the Second Law of thermodynamics, which states that in an isolated system[5] the entropy always increases.

– The psychological time’s arrow: we remember the past but not the future.

– The cosmological time’s arrow: the direction in which the universe is expanding rather than contracting.

 

Hawking identifies the psychological arrow with the thermodynamic arrow. He then argues that a universe that contracts rather than expands will not be able to support life as we know it, because there are essentially no gradients (for example temperature gradients or particle concentration gradients) left[6]; this is basically the Heat Death, the final end state the universe is supposed to arrive at.

 

The big question that Carroll, and with him many others, addresses is why time is asymmetric. This question is relevant to pose because the laws of physics are definitely time symmetric. One way to get more insight into this asymmetry is to focus on the thermodynamic arrow of time. Why does the entropy of the universe increase all the time? Implicit in this statement is the assumption that the entropy at the birth of the universe was very low.

 

Aha! You will say, that assumption seems to be quite an unnatural thing, since low entropy situations are not so likely, as we learned from Boltzmann’s considerations of micro- and macrostates. Then the thought emerges that maybe the big bang was not the real beginning of the universe but merely an intermediate stage. Before the big bang there was a prior universe that possessed high entropy. In that prior universe time would run backward, and in this way the asymmetry of time could be taken away (and people would remember the future rather than the past). Interesting idea, isn’t it? See for more thought-provoking ideas the article by Carroll, the references cited there, and the ones provided below.

 

Thus, do we understand time? I guess not yet but we are making progress!

 

© 2008 John Schmitz


[1] Galison, Peter, Einstein’s Clocks, Poincaré’s Maps: Empires of Time, W.W. Norton & Company, Inc., New York (2004)

[2] Saint Augustine wrote already around 400, “I know what time is, if no one asks me, but if I try to explain it to one who asks me, I no longer know.” (from Confessiones)

[3] Carroll, Sean M., The Cosmic Origins of Time’s Arrow, Scientific American, June 2008

[4] Hawking, Stephen, A Brief History of Time, Bantam Books (1988)

[5] A system that does not exchange energy or material with its surroundings

[6] The argument runs as follows. It will take a very long time before the universe starts to contract. At that point in time all stars will have burned up their fuel and the particles and energy will be evenly distributed in space.

Recently I received a few questions from Dr. Wang, who read my book. I believe his questions are excellent and that the answers will help other readers of this blog to understand entropy as well. I had some e-mail exchanges with Dr. Wang and I am happy that he agreed to let me post parts of our conversation.

Question: Is heat the ONLY form of energy that disperses? Besides heat, is there another form of energy with the ability to disperse?

Answer: Heat, being fundamentally atomic or molecular in nature through vibrations, translations and rotations (remember the simple ideal gas result that ½m⟨v²⟩ = (3/2)kT), is indeed a form of energy that is very abundant. Thus in many energy transformations it is difficult to prevent some part of the energy from being transformed into heat! And once heat is generated, it is difficult to prevent part of it from leaking away into the environment.

The dispersion of energy refers to the tendency of energy to spread out in space. For heat this will indeed happen, because atoms and molecules pass their movements on to their neighbors: a bar of iron will conduct heat from the hot end to the cold end until the temperature is even across the bar. Dispersion is not limited to heat alone; electromagnetic radiation or magnetic fields, for example, will spread out as well.
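As a small numerical sketch of the iron-bar example (my own illustration, not from the correspondence), an explicit finite-difference march of the 1D heat equation shows the temperature profile evening out over time; the bar length, grid and time step are arbitrary example values.

```python
import numpy as np

# 1D heat conduction in a bar: dT/dt = alpha * d2T/dx2 (explicit finite differences).
alpha = 2.3e-5          # thermal diffusivity of iron, m^2/s (approximate)
n, length = 50, 0.5     # 50 grid points over a 0.5 m bar
dx = length / (n - 1)
dt = 0.4 * dx**2 / alpha            # time step chosen for numerical stability

T = np.full(n, 293.0)   # bar initially at 293 K ...
T[0] = 373.0            # ... with the left end held at 373 K

for _ in range(20000):                       # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                            # insulated cold end

print(f"Temperature spread after {20000 * dt / 3600:.1f} h: "
      f"{T.min():.0f} K to {T.max():.0f} K")  # the profile flattens toward 373 K
```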

Question: Is it possible to have an entropy increase during the transformation of energy without the involvement of heat, for example, between two forms of energy that are not heat?

Answer: Yes, the best example I can come up with is the fuel cell. In the cell you convert chemical energy directly into electrical energy, while the entropy of the entire system still increases. But, and that is important, because no heat is directly involved, the efficiency of a fuel cell in generating electricity can be much higher than that of conventional power plants.

Another example can be found in my book on page 173. There you can see how the expansion or mixing of a gas in an isolated system will indeed lead to a higher entropy. Thus the entropy of a system can increase without any change in the energy of that system.
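As a minimal sketch of that point (my own numbers, not the book’s worked example): for the free expansion of an ideal gas into vacuum the internal energy stays constant, yet the entropy rises by ΔS = nR ln(V2/V1).

```python
import math

R = 8.314  # gas constant, J/(mol K)

def free_expansion_entropy(n_mol, v_initial, v_final):
    """Entropy increase when an ideal gas expands freely (no heat, no work):
    delta_S = n * R * ln(V_final / V_initial); the internal energy is unchanged."""
    return n_mol * R * math.log(v_final / v_initial)

# One mole of gas doubling its volume inside an isolated container:
print(f"delta S = {free_expansion_entropy(1.0, 1.0, 2.0):.2f} J/K")  # ~5.76 J/K
```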

A related phenomenon in this respect is Maxwell’s Demon. I have spent a few words on that intriguing thought experiment in the book as well.

Question: Is entropy more fundamental than energy?

Answer: This is a really interesting question; I never thought about that. I would say that energy represents a quantity that never changes and must therefore be quite fundamental. This is basically the First Law of thermodynamics. Entropy says something about the quality of that quantity of energy. As long as entropy increases (or can increase) there are gradients present (of energy, of temperature, or of species concentrations). As long as gradients are present, life is possible. Thus from that point of view entropy is perhaps the more fundamental one (at least from our planet’s viewpoint), because the presence of energy alone is not enough to enable life. Life needs energy gradients.

Question: Will the increase in entropy SURELY lead to a transformation efficiency of less than 100%?

Answer: Here we need to be careful how we phrase this. Since energy is conserved, the transformation of a given quantity of energy into other forms, taken together, is always 100%. However, if our objective is to transform that quantity of energy fully into another single form (for instance heat into work), then the increase in entropy will certainly limit the transformation efficiency, as an amount TΔS is no longer “available” to us because it has become more “diffuse”.

Question: How about the transformation among energies without the involvement of heat?

Answer: See my remark above for the fuel cells.

See also: https://secondlawoflife.wordpress.com/2007/05/06/what-is-entropy-3/ 

Copyright © 2007  John Schmitz

Of all forms of energy, electricity is perhaps the most useful. One reason is the ease of transportation through a relatively simple and vast infrastructure (the power grid). Burning fossil fuels generates a large portion of the world’s electrical power. Heat applied in the boilers of power plants is used to generate steam, and the steam subsequently drives turbines and the actual generator. This whole approach is particularly geared towards mass-volume electricity production. A typical size of a power plant these days is 2000 MW.

However, there are situations where it is beneficial to convert heat into electricity at a much smaller scale, for instance the cooling of powerful processor chips in a computer. A microprocessor can dissipate up to, and sometimes even more than, 100 Watts of heat. The idea is to turn that waste heat into electricity for re-use. There are several ways this can be done, for example through the use of Peltier[1] elements. But here I would like to focus on a method that is being researched at the University of Utah. Professor Orest Symko studies methods to convert (waste) heat into electricity through an intermediate step, namely the generation of … sound.

For this to work, two types of devices are needed. The first one is called a thermo-acoustic device. This device transforms heat into sound. A simple version is a tube containing a kind of sieve or metal screen that can be heated by a flame or another heat source such as the microprocessor chip. Expansion of air in contact with the hot parts and contraction when the air comes in contact with cold parts generates a sound in the pipe, in a similar fashion as in a flute or an organ pipe. Quite intense sound levels of 120 dB[2] or more can be generated in relatively small devices (dimensions of a few cm). Once sound (basically pressure variations) is generated, it can be converted into electricity by a piezo-electric device[3]. The situation has some similarity with that of the heat engines in power plants described above, where heat is converted into steam under pressure and then used to generate electricity.
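To give a feel for the numbers (my own back-of-the-envelope sketch, not from Symko’s work), the standard sound-pressure-level relation converts the quoted 120 dB into the pressure amplitude a piezo element would actually see.

```python
P_REF = 20e-6  # reference sound pressure in air, Pa (threshold of hearing)

def spl_to_pressure(spl_db):
    """Convert a sound pressure level in dB to an RMS pressure in Pa:
    SPL = 20 * log10(p / p_ref)  ->  p = p_ref * 10**(SPL / 20)."""
    return P_REF * 10 ** (spl_db / 20)

for level in (94, 120, 140):
    print(f"{level} dB -> {spl_to_pressure(level):.1f} Pa RMS")

# 120 dB corresponds to only ~20 Pa, about 0.02% of atmospheric pressure,
# which is why the electrical power harvested per element is modest.
```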


[1] A Peltier element is a device that upon current passage will cool at one location and heat at the other location.

[2] Sound intensity of a rock and roll concert or a plane at 100 m distance.

[3] A material that has piezo-electrical properties converts mechanical pressure variations into electrical energy.

Since the 11th century, many people have tried to beat the First Law with ingenious machines. There are good reasons for trying: if you could build a machine that could work forever without needing energy, that would solve the world’s energy problem in one stroke. You would gain immeasurable wealth, fame, and surely the Nobel Prize as well. Remember the tremendous excitement that emerged when Pons et al. wrote about “cold fusion” in 1989? And that did not even involve perpetual energy production, but only a claim that nuclear fusion could proceed at room temperature instead of 5000°C, the temperature at the surface of the sun!

There are basically two different aims for perpetual devices: to achieve perpetual motion, and to generate work. A perpetual motion machine is typically not very useful other than its allure as a kind of magic show that can attract big crowds and so generate income from admission fees. In contrast, perpetual motion engines claim to generate work. (We call them perpetual motion engines because even perpetual motion requires that a certain amount of work be generated to overcome the friction forces, however small, that are present in all engines.) The hundreds of proposals made over the years for perpetual motion engines use many different forces to keep the movement going – including purely mechanical forces (gravity, expanding fluids, or springs) as well as magnetic, electrical, or buoyancy forces.

As said above, the First and Second Laws are postulates. This means no proof is possible; they are merely based on many observations. Thus, in principle it could happen that tomorrow somebody builds an ingenious machine that can produce “free” energy. This would obviously be an enormous blow to thermodynamic theory, but a blessing for humankind. Many such claims have been made, and many machines have been built, either by people who intentionally produced frauds, or by people who were very serious about the matter and saw a mission to provide humanity with a useful tool. Also, many designs were made but never translated into real machines, and they are sometimes very difficult to prove wrong before they are actually built. Detailed mechanical analysis is often required before the flaw in the design can be found. To the author’s knowledge, no verified perpetual motion engine or machine has ever been built. Nevertheless, it’s fun to look at some of these concepts.

Before we do, though, it’s good to know that there are two kinds of perpetual motion engines, based on which of the two laws is being violated. Engines of the first kind typically claim that they can generate more energy (in the form of work) than the amount of energy that was put in, which clearly violates the principle of conservation of energy. Engines of the second kind are a bit trickier to describe. They try to convert heat into work without implementing any other change (achieving 100% efficiency), or purport to let heat flow from cold to warm, or attempt to convert heat into work without using two heat reservoirs at different temperatures. Simply put, second-kind perpetual motion engines draw energy from a heat reservoir and convert this heat into work without doing anything else.

Many perpetual motion engines of the first kind use the classic design of “overbalanced wheels.” An early example comes from the Indian mathematician and astronomer Bhaskara, whose design incorporates tubes filled with mercury. In the figure below, we see the operating principle of Bhaskara’s idea. He claimed that the wheel would continue to rotate with great power, because the mercury in the tubes is not at the same distance from the axis at opposite sides of the wheel. Bhaskara probably never built a real device, but similar ideas later were incorporated into the designs of other inventors’ engines, none of which ever worked. Fraudulent designs for perpetual-motion machines even made it to actual patents[1], which were later challenged in courtrooms. A famous example of a fraud was that made by Charles Redheffer in 1812 in Philadelphia. He claimed to have invented a work-generating perpetual motion engine, which seemed convincing until it was discovered that a man in an adjacent room was powering it.

Several famous names are connected to the idea of perpetual motion engines. Leonardo Da Vinci designed and built many devices and machines, including two devices to study the workings of perpetual motion. In his time, the principle of the conservation of energy was not known, but Leonardo had good insight into the working of machines and did not believe one could construct a perpetual engine. Simon Stevin, a Flemish scientist who lived from 1548 to 1620, actually showed that a purported perpetual engine based on a chain looped over a pair of asymmetric ramps would indeed not move without the addition of external energy.

An example of a perpetual motion engine of the second kind was provided by John Gamgee with his invention of the “Zeromotor” in 1880. His idea was to draw heat from the environment to let liquid ammonia boil; the ammonia vapor would expand and drive a piston. Afterward, the vapor was expected to cool down and condense, allowing the process to start again. Gamgee proposed this idea to the American Navy as an alternative to its coal-fueled steamships[2]. The problem, however, was that ammonia at atmospheric pressure condenses only at temperatures lower than -33°C, and that temperature was not present in Gamgee’s system. Thus, we see here a violation of the Second Law: if you want to draw work from heat, you must have two different heat reservoirs, one at a high temperature, and the other at a low temperature. 

Perpetual engine after a design of Bhaskara.

© 2007 William Andrew Publishing. Reproduced with permission from the publisher William Andrew Publishing


[1] Many patents can be found that claim to have invented a perpetual engine (for instance a patent for perpetual movement by Alexander Hirschberg in 1889, patent number GB 7421/1889). These patents were granted because in Great Britain patents filed before 1905 were not checked for whether the claims were realistic. This is unlike patents in the US, where a working prototype was required within a year [van Dulken, 2000].

[2] The American Navy was wrestling with the fact that their steamships were too limited in their routing because they could not get coal everywhere. Thus the Zeromotor was seen as a solution to this problem. The invention was even shown to President Garfield, who was very positive about this approach.

In previous blogs, I explained the principle of the Carnot cycle, focusing mainly on how to obtain work out of heat. The Carnot cycle was designed by Sadi Carnot to understand the workings of steam engines. A basic steam engine contains a heat source at high temperature (Th) and a heat sink at a lower temperature (Tl); the engine gives us useful work (W) by extracting heat (Qh) from the heat source and shifting an amount of heat (Ql) to the heat sink. Carnot showed that the best possible[i] efficiency (defined by W/Qh) is determined by the temperatures Th and Tl. We can use Carnot’s cycle in three different applications and calculate the best possible efficiency of each[ii]. These applications are:

1) Extraction of heat from the heat source for the purpose of converting the heat into work, as in a steam engine. The efficiency, η = W/Qh, is given by the formula:

η ≤ (Th – Tl)/Th

2) Extraction of heat from the low temperature location in order to lower the temperature of that area, or to maintain a low temperature in warmer surroundings; for example, a refrigerator. The efficiency in this case (also called the Coefficient of Performance, COP) is defined as Ql/W and shows how much heat (Ql) can be extracted from the low temperature location per unit of work (W) put in. It is given by this equation:

COP ≤ Tl/(Th – Tl)

3) The third application is where we want to extract heat, Ql, from a low-temperature location and bring this heat to a high-temperature location; an example is a heat pump that warms a building with heat extracted from ground water. The efficiency (COPhp) in this case is defined by the ratio Qh/W and is expressed as:

COPhp ≤ Th/(Th – Tl)

Note: The ≤ symbol in the above equations indicates the best possible efficiencies; in the real world, efficiencies are often considerably lower.
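The three bounds are easy to tabulate; below is a minimal sketch (my own code, using the same temperatures as the first and last rows of the table) that reproduces the numbers.

```python
def heat_engine_eff(t_hot, t_cold):
    """Maximum heat-engine efficiency W/Qh = (Th - Tl)/Th."""
    return (t_hot - t_cold) / t_hot

def refrigerator_cop(t_hot, t_cold):
    """Maximum refrigerator COP Ql/W = Tl/(Th - Tl)."""
    return t_cold / (t_hot - t_cold)

def heat_pump_cop(t_hot, t_cold):
    """Maximum heat-pump COP Qh/W = Th/(Th - Tl)."""
    return t_hot / (t_hot - t_cold)

for t_cold, t_hot in [(293, 800), (283, 288)]:   # two rows from the table below
    print(f"Tl={t_cold} K, Th={t_hot} K: "
          f"engine {heat_engine_eff(t_hot, t_cold):.2f}, "
          f"fridge {refrigerator_cop(t_hot, t_cold):.2f}, "
          f"heat pump {heat_pump_cop(t_hot, t_cold):.2f}")
```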

In the table below, I have given some examples of how these efficiencies work out at different temperatures. Thus, for instance, a heat engine running between 293 K and 800 K cannot achieve a better efficiency than 63%, meaning it can convert into work only 63% of the heat extracted from the heat source.

An important point: a refrigerator and a heat pump can achieve high efficiencies when the temperature difference between Th and Tl is small. For instance, a refrigerator with an efficiency of 10.11, and internal/external temperatures of 273 K and 300 K, can expel an amount of heat more than 10 times greater than the amount of work put in. However, this is a bit misleading, since pumping heat between two locations of similar temperatures is not all that useful. In fact, the refrigerator is the top electricity consumer in the average household, despite the impressive COPs calculated below. This is mainly because of the constant uphill battle of removing the heat that leaks into the refrigerator from the warmer kitchen environment. But other factors play a role as well.

Carnot Efficiencies

Tl (K)   Th (K)   Heat engine    Refrigerator   Heat pump
                  (Th-Tl)/Th     Tl/(Th-Tl)     Th/(Th-Tl)
293      800      0.63           0.58           1.58
293      700      0.58           0.72           1.72
293      600      0.51           0.95           1.95
293      500      0.41           1.42           2.42
293      400      0.27           2.74           3.74
293      320      0.08           10.85          11.85
293      310      0.05           17.24          18.24
293      300      0.02           41.86          42.86
273      800      0.66           0.52           1.52
273      700      0.61           0.64           1.64
273      600      0.55           0.83           1.83
273      500      0.45           1.20           2.20
273      400      0.32           2.15           3.15
273      320      0.15           5.81           6.81
273      310      0.12           7.38           8.38
273      300      0.09           10.11          11.11
283      293      0.03           28.30          29.30
283      303      0.07           14.15          15.15
283      288      0.02           56.60          57.60

See also: https://secondlawoflife.wordpress.com/2008/10/26/wasting-energy-why/


[i] The best possible efficiency is obtained for a reversible process. Unfortunately, all practical processes are irreversible, meaning that their efficiencies will be less than those calculated from the given formulas. Therefore, in the equations we use the less-than-or-equal-to symbol: ≤.

[ii] The efficiency is defined by η = (desired output)/(required input). For more details, see J.R. Howell and R.O. Buckius, Fundamentals of Engineering Thermodynamics, 2nd ed., p. 337 (1992), McGraw-Hill Inc.

Copyright © 2007 John E.J. Schmitz

The human body can deliver lots of work. Consider, for instance, the athlete running a marathon, or the cyclist racing in the Tour de France. We also know that human body temperature is normally 37°C and that the environment is usually cooler, say 20°C. This suggests some resemblance to a heat engine, with the body as the heat source and the cooler environment acting as the heat sink. So let’s make a few simple calculations to see how closely the body resembles a heat engine. From earlier blogs (see for instance May 6), we know that the efficiency of a heat engine is determined by the temperatures of the heat source (the body temperature, Tbody = 310 K) and the heat sink (the environmental temperature, Tsink = 293 K):

                     Efficiency = (Tbody – Tsink)/Tbody = (310-293)/310 = 5.5%

Thus, based on this temperature difference, the body would be able to achieve only 5.5% efficiency. Fortunately, scientific studies have already estimated the human body’s efficiency [Whitt et al.] in other ways. One study reasons that for an average man to produce 75 Watts of power, he will need to breathe about one liter of oxygen per minute. That liter of O2 is combusted in body cells to form carbon dioxide (CO2). It has also been determined that one liter of oxygen generates in this way about 300 Watts of power. Thus, we can conclude that the efficiency of the human “engine” is 75/300 = 25%. What causes the difference between the 5.5% efficiency calculated above and the 25% from the combustion determination? The explanation is that the human body cannot be considered a heat engine. The work is not generated in the same way as in a steam engine, which directly transforms heat into work and lower-temperature waste heat. Instead, the human body is more like a fuel cell, where chemical energy is transformed into work [Whitt et al.]. For this kind of transformation, one obviously cannot use the efficiency formula of a heat engine.
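A tiny sketch, in code, of the two calculations side by side, using only the numbers already quoted above:

```python
# Carnot-style upper bound if the body were a heat engine between 310 K and 293 K:
t_body, t_sink = 310.0, 293.0
carnot_bound = (t_body - t_sink) / t_body            # ~5.5%

# Metabolic estimate: ~75 W of mechanical power from ~300 W of chemical power
# released by burning one liter of oxygen per minute (figures quoted above).
metabolic_efficiency = 75.0 / 300.0                  # 25%

print(f"Heat-engine bound: {carnot_bound:.1%}, metabolic estimate: {metabolic_efficiency:.0%}")
# The mismatch is the point of this post: the body works like a fuel cell,
# converting chemical energy to work directly, not like a heat engine.
```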

Copyright © 2007 William Andrew Publishing, NY

_____________________ 

– Reprinted from The Second Law of Life with permission of the copyright holder William Andrew Publishing, NY 

– Whitt, F.R. and Wilson, D.G., Bicycling Science, MIT Press, Cambridge (1976)

I have explained in previous blogs a little bit about what entropy is and its relationship with the amount of heat exchanged. We have also seen that Clausius discovered that in isolated systems the entropy always increases, and that there is no way we can reduce the entropy of a given isolated system. We also saw that an increase in entropy points towards a degradation of the quality of energy, such that low quality (= high entropy) energy can no longer be converted into work.

So far this has been a more or less phenomenological description of what entropy is. You can compare this with the observation that every time you toss a stone up it will fall back to earth. If you make a law of gravity that says that bodies will always fall back to earth, then that law is probably very accurate, but it does not add to your understanding of what gravity really is! We are in a similar situation with Clausius’ definition of entropy: we still do not understand what entropy is. This changed when Ludwig Boltzmann[1], an Austrian scientist, started to think about what entropy actually was.

Around 1900 there was still a fierce debate going on between scientists about whether atoms really existed or not. Boltzmann was convinced that they existed and realized that models relying on atoms and molecules, their energy distribution, and their speed and momentum could be of great help in understanding physical phenomena. Because atoms were supposed to be very small, even a relatively small system contains a tremendous number of them. For example: one milliliter of water contains about 3×10²² molecules! Clearly it is impossible to keep track of quantities like energy and velocity for each individual atom. Boltzmann therefore introduced a mathematical treatment using statistical mechanical methods to describe the properties of a given physical system (for example the relationship between temperature, pressure and volume of one liter of air). Boltzmann’s idea behind statistical mechanics was to describe the properties of matter from the mechanical properties of atoms or molecules. In doing so, he was finally able to derive the Second Law of Thermodynamics and show the relationship between the atomic properties and the value of the entropy for a given system. It was Max Planck who, based on Boltzmann’s results, formulated what was later called the Boltzmann expression:

S = k lnW

Here S is the entropy, k is the Boltzmann constant, ln is the natural logarithm and W is the number of ways the system can be realized. This last part typically causes some trouble when we try to understand it. The value of W is basically a measure of how many microscopic configurations lead to a system with the same overall characteristics. Let me give an example. Imagine you have a deck of four cards. The deck as a whole can be described with parameters such as the number of cards, the thickness of the deck, its weight and so on. With four cards we have 4×3×2×1 = 24 possible orderings that all lead to the same deck (in terms of the parameters above). Therefore in this case W = 24. The Boltzmann constant, k, equals 1.4×10⁻²³ J/K and the entropy S is then k ln 24 = 4.4×10⁻²³ J/K. The more possibilities a given system has to establish itself (and with the many atoms in one gram of material there are very many possibilities!), the more likely it is that we will indeed observe that system, and the higher its entropy will be.
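A minimal sketch of the card-deck calculation above (the same numbers, just spelled out in code):

```python
import math

K_B = 1.38e-23  # Boltzmann constant, J/K

def boltzmann_entropy(n_microstates):
    """Boltzmann expression S = k ln W."""
    return K_B * math.log(n_microstates)

# Four distinguishable cards give 4! = 24 orderings of one and the same deck:
w = math.factorial(4)
print(f"W = {w}, S = {boltzmann_entropy(w):.1e} J/K")  # ~4.4e-23 J/K
```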

Now it is easier to understand Clausius’ observation that the entropy increases all the time: a given (isolated) system will tend toward more disordered states simply because they are more likely to occur. Unfortunately, the more disorder a given system has, the less useful that system is from a human perspective. Energy is much more useful when it is captured in a liter of fuel than when that same amount of energy, after we have burned the fuel, is distributed all over the environment! Clearly the entropy went up because the disorder increased after burning.

Copyright © 2007 John E.J. Schmitz


[1] Ludwig Boltzmann was born in 1844 in Vienna. He was a theoretical physicist who worked in various locations: Graz, Heidelberg, Berlin, Vienna. In 1902 he was teaching mathematical physics and philosophy in Vienna, for which he became very famous. His statistical mechanical theory received a lot of criticism from peers such as Wilhelm Ostwald. Because of these continuous attacks and his depressions he committed suicide in 1906 in Trieste (Italy). On his tomb one can find the famous formula S = k log W.
