In the economics literature, one can find two opposing points of view: mainstream economists, who believe that technological innovation will solve the degradation in quality of both energy and materials and that growth can therefore go on forever; and biophysical economists, who use the thermodynamic laws to argue that mainstream economists do not incorporate long-term sustainability in their models. For instance, the costs to repair the ozone hole or to mitigate increasing pollution are not accounted for in mainstream economic assessments. Industrial and agricultural processes accelerate the entropy production in our world. Entropy production can only go on until we reach the point where all available energy is transformed into non-available energy. The faster we go toward this end, the less freedom we leave for future generations. If entropy production were included in all economic models, the efficiency of standard industrial processes would show quite different results…

Even if there were no humans on this planet, there would be continuous entropy production. So from that point of view the ecological system is not perfect, either; even the sun has a limited lifespan. The real problem for us is that, in our relentless effort to speed things up, we increase the entropy production process tremendously. In fact, you can see some similarity between economic systems and organisms: both take in low entropy resources and produce high entropy waste. This leaves fewer resources for future generations.

Although recycling will help a lot to slow down the depletion of the earth’s stocks of materials, it will only partly diminish the entropy production process. So whenever we design or develop economic or industrial processes, we should also have a look at the associated rate of entropy production compared to the natural “background” entropy production. We have seen that for reversible processes, the increase in entropy is always less than for irreversible processes. The practical translation of this is that high-speed processes always accelerate the rate of entropy production in the world. Going shopping on your bike is clearly a much better entropy choice than using your car.

Conclusion: the entropy clock is ticking, and can only go forward!

From:  The Second Law of Life

A few blogs ago, I wrote about the life cycle analysis (LCA) of Compact Fluorescent Lamps (CFLs)[1]. Over their lifetime, CFLs do indeed consume about 5 times less electricity than incandescent light bulbs (and CFLs last about 4 times longer). However, the manufacturing of CFLs is much more complicated and therefore environmentally more demanding than that of classical bulbs, so the question was rightly raised whether, when you sum it all up, the environmental advantage still holds. After a careful and detailed LCA, a team of Australian researchers came up with the answer: a big yes!

However, researchers at the University of Ghent[2], Belgium, pointed out that one needs to look not just at the environmental impact of a product (factors such as global warming, ozone depletion, toxic emissions, acidification, etc.) but also at the resources it consumes: organic and inorganic, fuel and feedstock, renewable and non-renewable, energy and materials. It is here that thermodynamics kicks in, using the concept of entropy[3] (as already suggested by Nicholas Georgescu-Roegen[4] quite a while ago). Entropy can be used to describe the degradation of resources during the manufacturing and actual use of products. One can say, very roughly, that the faster and the further from equilibrium a production process runs, the more energy is degraded and made unavailable for further work. This is described by an increase in entropy and is irreversible; in other words, high-quality energy (such as the energy contained in fossil fuels) is turned into low-quality energy (heat).

This sort of analysis has been used to study the environmental impact of bio-foods versus foods produced by large-scale conventional agriculture. And sure enough, you can find situations where bio-foods (because of their poor yields or their transport over large distances) have a more negative impact on the environment than traditionally produced foods do. It was found[5] that if bio-beans are locally produced, they are environmentally better than conventionally produced beans. But when the beans have to be transported from other regions to reach our stores, the balance can easily change and even reverse! Bio-potatoes are always worse than conventional potatoes because their yield per unit of land area is so much lower[6].

Therefore, before drawing conclusions about the impact of a given process or product on the environment or on resources, a careful evaluation (LCA) needs to be done. Such an evaluation is not a trivial matter at all and can only be done by qualified people.

 

© Copyright 2009, John Schmitz

 


[1] https://secondlawoflife.wordpress.com/2008/10/05/compact-fluorescence-lamps/

[2] http://pubs.acs.org/doi/abs/10.1021/es071719a

[3] As a matter of fact the concept of "exergy" is used, but it has a very close relationship to entropy.

[4] https://secondlawoflife.wordpress.com/2007/04/28/nicholas-georgescu-roegen/

[5] http://www.standaard.be/Artikel/Detail.aspx?artikelId=4I2B40SO

[6] See also: https://secondlawoflife.wordpress.com/2007/07/28/entropy-and-the-food-chain-part-i/ and https://secondlawoflife.wordpress.com/2007/08/22/entropy-and-the-foodchain-part-ii/

There are many instances where we can see that, in our attempts to transform energy into as much usable work as possible, we are always left with a "rest" amount of heat that we cannot use to generate any more work¹. Clear examples of these imperfect transformations are the coolant radiators in our cars and the cooling towers of many factories and power plants. Power plants that burn fossil fuels can have an efficiency as poor as 50%, or often even lower, meaning that only 50% of the energy enclosed in the fuel is converted into electrical power: the fuel is burned, the heat generates steam, and the steam then drives turbines and generators. Fifty percent or less: is that not a shame? Of course the question arises why this is the case.

Why can we not convert the energy enclosed in the fuel into useful work at the full 100%? It is here that the Second Law of thermodynamics, also known as the entropy law, kicks in. But before we go deeper into this entropy law, first a bit more about the First Law of thermodynamics. The First Law is nothing more than the law of conservation of energy. Energy can be present in many forms (chemical, heat, work, electrical, nuclear, etc.), and the total amount of all this energy in the universe is constant. The First Law would not object to converting a given amount of energy fully into work. Unfortunately, we never observe this attractive situation. The answer to why that is so can be found in an analysis of the entropy law.

What is entropy? Entropy is a concept discovered while people were answering "simple" questions such as why heat only flows from warm to cold places. Another question came up around 1800 with the growing popularity of steam engines. Steam engines can also be called heat engines because they convert heat into work; another example of a heat engine is a car engine. Steam engines were used in England to pump water out of the coal mines, a job that was done by many workers day and night before steam engines became available. To keep the steam engine running, fuel (such as wood or coal) was burned to generate the steam. While the steam engine was gaining ground, many improvements were made (James Watt, for instance, was able to improve the efficiency by about 25%) that increased the efficiency of steam engines considerably. Therefore much more work could be obtained from a given amount of fuel.

While this went on, a young French military engineer, Sadi Carnot, asked himself whether there was perhaps an upper limit to this efficiency. To answer that question he carried out a careful analysis around 1825 using a simplified model of a steam engine². The result of his analysis was that the upper limit of the efficiency is determined by only two factors: the temperature of the heat source (the steam) and the temperature of the heat sink (the location where the steam is condensed, for all practical purposes the outside air). More precisely, he found that the amount of heat, Qh, taken from the heat source at temperature Th is related to the amount of heat given up at the heat sink, Qc, at temperature Tc, as Qh/Th = Qc/Tc. Although he did not coin the term entropy for the factor Q/T (that was done by Rudolf Clausius around 1850), he clearly laid the foundation for scientists such as Clausius, who came to the conclusion that "something was missing" and was needed in addition to the First Law. That something later became the Second Law of thermodynamics.

The best possible efficiency of the steam engine was then shown by Carnot to be equal to (Th-Tc)/Th (an atmospheric steam engine, working between 373 K and 273 K, is therefore limited to about (373-273)/373 ≈ 27% efficiency).

The work of Carnot showed very clearly that for a heat engine to work you MUST have a heat source at a high temperature and a heat sink at a colder temperature, and that the heat disposed of at the heat sink can NEVER generate any work again unless you have another heat sink available at an even lower temperature. Also, from the fact that Qh/Th = Qc/Tc, it becomes clear that in a heat engine you MUST give up an amount of heat, Qc, to the cold sink; there is no escape. That is the fundamental reason why the efficiency of heat engines is less than 100%! We can also see now that the efficiency of heat engines increases if we make the temperature difference between the heat source and the heat sink as large as possible.
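To make Carnot's result concrete, here is a minimal Python sketch of the limit (Th-Tc)/Th. The 373 K/273 K pair corresponds to the atmospheric steam engine mentioned above; the "modern plant" temperatures are assumed values, purely for illustration.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Upper limit on heat-engine efficiency, (Th - Tc)/Th, with temperatures in kelvin."""
    if t_hot_k <= t_cold_k:
        raise ValueError("the heat source must be hotter than the heat sink")
    return (t_hot_k - t_cold_k) / t_hot_k

# Atmospheric steam engine: steam at 373 K, sink at 273 K -> about 27%
print(f"Atmospheric steam engine: {carnot_efficiency(373, 273):.0%}")

# Illustrative modern steam cycle (assumed 823 K steam, 300 K sink) -> about 64%
print(f"Modern steam cycle (assumed temperatures): {carnot_efficiency(823, 300):.0%}")
```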

For more background on this topic, see:

https://secondlawoflife.wordpress.com/2007/08/28/carnot-efficiencies/

https://secondlawoflife.wordpress.com/2007/05/25/can-we-recycle-energy-or-the-role-of-law-of-entropy/

 

© 2008 John Schmitz

____________________________

1. With work we mean here the ability to lift weights or to turn wheels, which in turn can rotate shafts.

2. This model is well known as the Carnot cycle.

Recently I got a few questions from Dr. Wang, who read my book. I believe that his questions are excellent and that the answers will help other readers of this blog to understand entropy as well. I had some e-mail exchanges with Dr. Wang and I am happy that he agreed to let me post parts of our conversation.

Question: Is heat the ONLY form of energy that disperses? In addition to heat, do we know of another form of energy with the ability to disperse?

Answer: Heat, being fundamentally atomic or molecular in nature through vibrations, translations and rotations (remember the simple ideal gas result that ½m⟨v²⟩ = (3/2)kT), is indeed a form of energy that is very abundant. Thus in many energy transformations it is difficult to prevent some part of the energy from being transformed into heat! And once heat is generated, it is difficult to prevent part of it from leaking away into the environment.

The dispersion of energy refers to the tendency of energy to spread out in space. For heat this indeed happens because atoms and molecules pass their movements on to their neighbors: a bar of iron will conduct heat from the hot end to the cold end until the temperature is even across the bar. Dispersion is not limited to heat, however; electromagnetic radiation or magnetic fields, for example, will spread out as well.
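As a small side illustration of the kinetic picture mentioned above (½m⟨v²⟩ = (3/2)kT), the sketch below estimates the random thermal speed of a nitrogen molecule at room temperature; the choice of N2 and 300 K is just an example, not something from the exchange with Dr. Wang.

```python
import math

# Kinetic picture of heat: (1/2) m <v^2> = (3/2) k T  ->  v_rms = sqrt(3 k T / m)
k_boltzmann = 1.381e-23        # Boltzmann constant, J/K
m_n2 = 28 * 1.661e-27          # mass of one N2 molecule, kg

def v_rms(temp_k: float, mass_kg: float = m_n2) -> float:
    return math.sqrt(3 * k_boltzmann * temp_k / mass_kg)

print(f"N2 at 300 K moves at roughly {v_rms(300):.0f} m/s")   # about 500 m/s of random thermal motion
```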

Question: Is it possible to have entropy increase during the transformation of energies without the involvement of heat, for example, between two non-heat energies?

Answer: Yes, the best example I can come up with is the fuel cell. In the cell you convert chemical energy directly into electrical energy, while the entropy of the entire system still increases. But, and that is important, because no heat is directly involved, the efficiency of a fuel cell in generating electricity can be much higher than that of conventional power plants.
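To put a rough number on that fuel-cell advantage, here is a hedged sketch using textbook standard-state values for H2 + ½O2 → H2O(l) (ΔH ≈ -286 kJ/mol, ΔG ≈ -237 kJ/mol); the 800 K/300 K heat-engine temperatures are assumed purely for comparison.

```python
# Ideal efficiency of a hydrogen fuel cell versus the Carnot limit of a heat engine
DELTA_H = -285.8e3   # J/mol, enthalpy of H2 + 1/2 O2 -> H2O(liquid)
DELTA_G = -237.1e3   # J/mol, Gibbs free energy of the same reaction

fuel_cell_limit = DELTA_G / DELTA_H          # fraction of the heating value available as electricity
carnot_limit = (800 - 300) / 800             # heat engine between assumed 800 K and 300 K

print(f"Hydrogen fuel cell, ideal: {fuel_cell_limit:.0%}")   # ~83%
print(f"Heat engine, Carnot limit: {carnot_limit:.0%}")      # ~62%
```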

Another example can be found in my book on page 173. There you can see how the expansion or mixing of a gas in an isolated system will indeed lead to a higher entropy. Thus the entropy of a system can increase without any change in the energy of that system.

A related phenomenon in this respect is Maxwell's Demon. I have spent a few words on that extremely intriguing thought experiment in the book as well.

Question: Is entropy more fundamental than energy?

Answer: This is a really interesting question; I never thought about that. I would say that energy represents a quantity that never changes and must therefore be quite fundamental. This is basically the First Law of thermodynamics. Entropy says something about the quality of that quantity of energy. As long as entropy increases (or can increase), there are gradients present (of energy, of temperature, or of species concentrations). As long as gradients are present, life is possible. Thus from that point of view entropy is perhaps the more fundamental one (at least from our planet's viewpoint), because the presence of energy alone is not enough to enable life. Life needs energy gradients.

Question: Will an increase in entropy SURELY lead to a transformation efficiency of less than 100%?

Answer: Here we need to be careful how we phrase this. Since energy is conserved, transformations from one form into others are always 100% in total. However, if our objective is to transform a given quantity of energy fully into a single other form (for instance heat into work), then the increase in entropy will certainly limit the transformation efficiency, as an amount TΔS is no longer "available" to us: that amount has become more "diffuse".

Question: How about the transformation among energies without the involvement of heat?

Answer: See my remark above for the fuel cells.

See also: https://secondlawoflife.wordpress.com/2007/05/06/what-is-entropy-3/ 

Copyright © 2007  John Schmitz

In Part 1 (see my previous blog), I described the "food chain" and some energetic aspects of it. Now, I'd like to continue with more details and include entropic aspects as well.

Chlorophyll is the pigment in leaves that absorbs light. Typically, chlorophyll absorbs only visible light, mostly in the red and blue wavelengths, and tends to reflect the green wavelengths; this gives plants their familiar green color. Photosynthesis can be divided into two major steps: the oxygen-producing step (photophosphorylation) and the carbon fixation step that eventually produces glucose (also called the Calvin-Benson cycle, depicted below).

The first step, photophosphorylation, is a light-enabled reaction in which water is consumed and oxygen and molecules of adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH) are produced. To this day, we don’t completely understand all the steps and chemicals involved in making this happen. That said, we’ll limit ourselves to a brief description of how ATP and NADPH carry energy. Each molecule of ATP can store a large amount of solar energy within its chemical bonds. Likewise, an NADPH molecule can carry excited electrons, which is another way to store energy. Together, ATP and NADPH serve as the primary energy carriers in living plant cells.

[Figure: calvin_benson_blog.jpg]

A very schematic and simplified representation of the photosynthetic process. ATP, ADP, NADPH, and NADP+ are energy carriers. While photophosphorylation needs light, the Calvin‑Benson cycle can run in darkness.

The second stage (or Calvin-Benson cycle) can work without light. Using the energy of the ATP and NADPH molecules and carbon dioxide from the atmosphere, this cycle creates chemical reactions in the cell that eventually form glucose, a simple sugar.

The overall chemical reaction of photosynthesis can be expressed as:

6H2O + 6CO2 + hν → C6H12O6 + 6O2

where hν stands for the incoming sunlight needed to drive the reaction.

On the left side are six water molecules and six carbon dioxide molecules. Altogether, they are less complex than the single sugar molecule and six oxygen molecules on the right side. Therefore, we expect the entropy to decrease during photosynthesis, which is driven by the energy in sunlight. Indeed, quantitative calculations [1] show that the entropy change for the overall reaction is negative and can be calculated as ΔS = -262 J/K (for one mole [2] of glucose, which is 180 grams). Because the entropy decreases, this reaction cannot proceed spontaneously and therefore must be driven by an external energy source – which is, of course, sunlight.
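A minimal sketch of that entropy bookkeeping, using the standard entropies listed in footnote [1] (J/(K mol) at 298.15 K); it simply reproduces the -262 J/(K mol) figure, nothing more.

```python
# Standard entropies in J/(K mol), taken from footnote [1]
S = {"H2O(l)": 70, "CO2": 214, "O2": 205, "glucose": 212}

# 6 H2O + 6 CO2 -> C6H12O6 + 6 O2 (sunlight drives the reaction but carries no standard entropy term here)
delta_S = (S["glucose"] + 6 * S["O2"]) - (6 * S["CO2"] + 6 * S["H2O(l)"])
print(delta_S)   # -262: the entropy decreases, so the reaction cannot run spontaneously
```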

The reverse of the photosynthetic reaction occurs when the plant needs energy. This process is called “respiration of glucose,” and is in fact the combustion of glucose under well-controlled conditions in the plant cell. The products of that reaction are water and carbon dioxide, as expressed below:

C6H12O6 + 6O2 → 6H2O + 6CO2

How much energy will this reaction deliver? A lot! For example, burning 180 grams of glucose (about 40 sugar cubes) will generate almost 3000 kJ of energy – enough to allow a human weighing 75 kg (165 pounds) to climb a mountain about 4000 meters (13,200 feet) high. Impressive, isn’t it? At the molecular level, the aerobic (oxygen-based) respiration of glucose produces energy that is stored in molecules of ATP. This happens by adding a phosphate group (PO4) to ADP. Per molecule of glucose, 32 molecules of ATP are created and together they store  about 1100 kJ of energy that can be used to drive other reactions in the cell. In the case of anaerobic respiration (where little or no  oxygen is available), the energy efficiency is much less, since only two ATP molecules are formed for each combusted glucose molecule.
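A quick back-of-the-envelope check of those numbers (a sketch only, using g ≈ 9.81 m/s² and ignoring the body's own conversion losses):

```python
# Does ~3000 kJ from burning 180 g of glucose really lift a 75 kg person up a 4000 m mountain?
mass_kg, height_m, g = 75, 4000, 9.81
climb_energy_kj = mass_kg * g * height_m / 1000
print(f"Potential energy gained: {climb_energy_kj:.0f} kJ")    # ~2943 kJ, close to the ~3000 kJ released

# Fraction of the combustion energy captured in the 32 ATP molecules (~1100 kJ per mole of glucose)
print(f"Stored in ATP: {1100 / 3000:.0%} of the released energy")   # roughly 37%
```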

A sort of artificial “photosynthesis” technology is possible with photovoltaic (solar) cells. These devices use a rather elegant process to convert sunlight into electrical energy. The electrical current generated by the solar cell can then be used to split water into hydrogen and oxygen, and the resulting gases can be used in fuel cells to produce electrical power and water. On paper, this looks like a very attractive energy conversion technology, without the hazards or pollution of nuclear and fossil fuel-based power plants.

(Adapted from The Second Law of Life with permission from William Andrew Publishers)


[1] The calculation is rather simple, thanks to scientific tables that provide the standard entropy for many chemical compounds. (The term "standard" means 1 atmosphere of pressure and a temperature of 298.15 K.) The standard entropy (in J/K per mole) is 70 for (liquid) water, 214 for carbon dioxide, 205 for oxygen, and 212 for glucose. Thus, the total entropy change for the reaction, without the incoming sunlight, works out as: ΔS = (products) – (reactants) = (212 + 6×205) – (6×214 + 6×70) = -262 J/(K mol). The decrease in entropy has to be balanced by a similar increase somewhere else in the universe. This is indeed the case, because the incoming sunlight is accompanied by an entropy increase (for instance, see W. Yourgrau and A. van der Merwe, Proc. Nat. Ac. Sci., Vol. 59, p. 734 (1968)). If the reaction involved water vapor rather than liquid water, the entropy decrease would be much larger (972 J/(K mol)), as the standard entropy of water vapor is 189 J/(K mol).

[2] One mole represents 6.02 × 10²³ atoms or molecules (for glucose that is 180 grams).

All life on earth is made possible by sunlight. We know that life needs a low-entropy resource to survive and reproduce, and photosynthesis provides it. Utilized by plants, algae, and some bacteria, photosynthesis involves a photochemical reaction that leads eventually to a process called carbon dioxide (CO2) fixation. But before we discuss this elegant process, let's first look at the food chain here on earth, as depicted below.

[Figure: foodchain_blog.jpg]

Simplified food chain. For each movement upward, the efficiency of energy utilization is only 10%. This means, for example, that humans use 10% of the available energy from the food we eat.

The principle is well known: each step in the food chain serves as nutrition for the creatures on the next step, and each animal produces carbon dioxide while consuming oxygen. What is less well known is that there is a large amount of inefficiency in this food chain. (Efficiency is defined here as the proportion of energy actually used by an organism, compared to the total energy present in its food.) This can be illustrated with several examples [Glencoe, 2004]: 100 kilograms of grain are needed to produce 10 kilograms of beef, which create only 1 kilogram of human tissue. Similarly, 3000 blades of grass are needed to produce 250 grasshoppers, which will feed 25 birds that will be eaten by just one fox. In general, each higher level in the food chain transforms only 10% of the energy from the next level beneath it. From that point of view, it is rather inefficient to feed cattle with grain, and then eat the cattle; ecologically, we would be much better off if we just ate the grain ourselves[1]. Also, energy efficiency isn’t very high in the photosynthetic process either (as we will see shortly), but the difference here is that solar energy is so abundant, photosynthetic efficiency is not a concern!
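The 10%-per-level rule is easy to turn into numbers. The little sketch below (the units and the starting amount are arbitrary, chosen only for illustration) shows how quickly usable energy shrinks along a chain like grass → grasshoppers → birds → fox:

```python
# Roughly 10% of the energy survives each step up the food chain
def energy_at_level(energy_at_base: float, levels_up: int, efficiency: float = 0.10) -> float:
    return energy_at_base * efficiency ** levels_up

base = 100_000.0   # arbitrary energy units fixed by the plants at the bottom of the chain
for level, name in enumerate(["plants", "herbivores", "small predators", "top predator"]):
    print(f"{name:15s} {energy_at_level(base, level):>10.1f}")
```

After three steps only 0.1% of the original plant energy is left, which is exactly why eating the grain directly is so much more efficient than feeding it to cattle first.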

Overall, photosynthesis can be written as a chemical reaction:

6H2O + 6CO2 + hν  →  C6H12O6 + 6O2

In ordinary language, this says that six molecules of water and six molecules of carbon dioxide are transformed into one molecule of sugar and six molecules of oxygen. (The term hν stands for the light quanta that are needed to drive the reaction.) Extensive research has shown that the fundamental chemical reactions involved in producing sugar and oxygen are the same in all photosynthetic organisms. The structure of a common sugar, β-D-glucose, is as follows:

[Figure: glucose_blog.jpg]

Although the overall photosynthesis reaction suggests a rather simple mechanism, the reality is that photosynthesis is extremely complex, and even today it is not completely understood. The first step in unraveling the process was Joseph Priestley's discovery in 1770 that leafy plants produce a gas (oxygen) that supports combustion. In 1845, Julius Robert von Mayer conjectured that plants convert light energy into chemical energy.

Just a few numbers will give you a feeling for this essential life-enabling process [Whitmarsh, 1995]:

  • Producing 1 oxygen molecule requires about 8 (red [2]) light quanta. To make 180 grams of glucose, you need about 3000 kJ of energy (see the quick check after this list).
  • Each year, 10¹⁴ kilograms of carbon are removed from the atmosphere by photosynthesis. The energy needed to do this represents only 0.1% of all solar energy received by the earth.
  • Every year, more than 10% of the total atmospheric carbon dioxide is converted into carbohydrates (or glucose, a 6-carbon sugar).
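As a rough consistency check on the first bullet (a sketch only; 680 nm is an assumed wavelength for "red" light, and 48 = 8 quanta × 6 O2 per glucose), the Planck relation E = hc/λ gives:

```python
# Light energy absorbed per mole of glucose versus the ~3000 kJ stored chemically
h, c, N_A = 6.626e-34, 2.998e8, 6.022e23
wavelength_m = 680e-9                                     # assumed "red" photon wavelength

kj_per_mole_photons = h * c / wavelength_m * N_A / 1000   # ~176 kJ per mole of red photons
light_in_kj = 48 * kj_per_mole_photons                    # 8 quanta per O2 x 6 O2 per glucose
print(f"Light energy absorbed: {light_in_kj:.0f} kJ per mole of glucose")   # ~8400 kJ
print(f"Stored chemically: ~3000 kJ, i.e. ~{3000 / light_in_kj:.0%} of the absorbed light")
```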

Pretty neat stuff, isn't it? In Part 2, I will describe the actual photosynthetic process in a bit more detail, along with its entropy aspects. Stay tuned!

(Taken from The Second Law of Life with permission of William Andrew Publishers)


General further reading:

– Glencoe, Biology the Dynamics of Life, McGraw-Hill (2004)

– Whitmarsh J. and Govindjee; Encyclopedia of Applied Physics, Vol 13 (1995)

– Manning, Richard, “The oil we eat: following the food chain back to Iraq”, Harper’s Magazine, February (2004)

 

[1] A few examples to illustrate the inefficiency of the “artificial food chain” [from Manning, 2004]: The agriculture industry needs about 35 J of fossil fuel to produce one J of beef and about 68 J to produce one J of pork. Processed food requires about 10 J to produce one J of food energy.

[2] The color of the light quanta is significant, since quanta energy depends on their frequency, each of which has a characteristic color.

The two laws of thermodynamics (energy and entropy) have been related to fundamental questions about the existence of life. Several angles can be taken in looking for answers to these questions. Of course, there are the religious points of view. Creationists typically consider the First Law of thermodynamics (conservation of energy) as a confirmation of the eternal existence of God, since energy has been and will be present forever. The Second Law (increase of entropy), however, is often interpreted with a more negative flavour. The entropy law is connected to things such as decay, destruction, and chaos or disorder. There has been a lively discussion in the religious-thermodynamic realm, but I prefer to come back to that in a future blog. Let's restrict ourselves for now to a more scientific treatment of the subject. For that purpose it is good to first define the system we want to discuss. In thermodynamics we often work with what is called an isolated system: a system that cannot exchange energy, materials, or anything else with its environment.

[Figure: entropy-and-habitat_paint_2.jpg]

We know from the inequality of Clausius (see earlier blogs) that for an isolated system the entropy can only increase over time[1]. This is a really important statement and should be kept in mind for the remaining part of the discussion. Have a look at the figure above. For our isolated system (the big grey box) we have, after Clausius, ΔS > 0. But for the living organism, represented by the box "Life", we have the peculiar situation that the organism is able to keep its entropy low, as is visible from the tremendous degree of order present in a living organism.

How is that done? Well, the organism feeds itself on low-entropy food (or energy, if you wish), see also below. However, consuming this low-entropy food and using it to build or maintain the organism's structure comes with waste production (such as CO2 and faeces) and also with the dissipation of energy into work (by the muscles) or heat (our body is able to keep us at 37°C). This causes an entropy increase in the habitat of the organism (represented by ΔShabitat), such that the total entropy (= ΔSlife + ΔShabitat) of the isolated system increases as a whole! Erwin Schrödinger described this feeding of a living organism on low-entropy energy in his famous little book "What is Life?"[2]; I can recommend reading this work. We can take this even one step further. As long as the organism is alive it is able to keep its entropy low, but when it dies this is no longer possible, and the decay and the associated entropy increase start[3]. Thus, perhaps we have here an alternative definition of a living organism:

a structure that is able to keep its entropy artificially low by an intake of low entropy energy from its habitat.
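A toy piece of bookkeeping for the figure's entropy balance; the numbers below are invented purely to illustrate the sign argument, they are not measured values.

```python
# Isolated box = "Life" + habitat. The organism may lower its own entropy,
# but only at the price of a larger increase in the entropy of its habitat.
dS_life = -5.0       # J/K, hypothetical: order built up inside the organism
dS_habitat = +80.0   # J/K, hypothetical: waste heat, CO2 and faeces dumped into the habitat

dS_total = dS_life + dS_habitat
assert dS_total > 0, "Clausius: the entropy of an isolated system cannot decrease"
print(f"Total entropy change of the isolated system: +{dS_total} J/K")
```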

If we can relate the thermodynamic laws to the fundamentals of organic life, is there then also a role for them to play in the process of natural selection? This intriguing question was posed quite some years ago by Alfred Lotka (1880-1949), a scientist who studied topics in the fields of population dynamics and energetics. In 1922 he published two early articles on the relation between energy and natural selection[4],[5]. I would like to take a few interesting thoughts from his articles. Lotka regards the driving force behind natural selection as the maximization of the energy flux through the organism, provided that there is still an unused residue of energy left in the system (habitat). Two fundamentally different categories of living species can be seen: plants, which are energy accumulators (they can convert sunlight into chemical energy), and animals, which are basically energy engines, meaning that they convert low-entropy energy (stored in their food, such as plants or other animals) into high-entropy (low-quality) energy. According to this energy-flux definition of natural selection, one could consider man the most successful species, as humans have (unconsciously?) really mastered the "art" of maximizing or accelerating the circulation of energy and matter. However, this is only possible because of the existence of the energy accumulators, the plants!

Copyright © 2007 John E.J. Schmitz


[1] See The Second Law of Life for a more detailed discussion of this principle.

[2] Erwin Schrödinger, What is life?, Cambridge University Press, London, (1951)

[3] A slightly alternative formulation of this was offered in 1921 by J. Johnstone in The Mechanism of Life: in living mechanisms the increase in entropy is retarded; see also the articles by Lotka below.

[4] A.J. Lotka, Contribution to the energetics of evolution, Proc. Natl. Acad. Sci., 8, pp 147-151 (1922)

[5] A.J. Lotka, Natural selection as a physical principle, Proc. Natl. Acad. Sci., 8, pp 151-154 (1922)