Crude oil (petroleum) is a mixture of many components. A well-known component, of course, is the gasoline or diesel used in our cars. A less well-known component is naphtha. Naphtha is a mixture of hydrocarbon molecules that can be saturated (only single bonds between the carbon atoms) or unsaturated (double or even triple bonds between the carbon atoms). Naphtha is used as a precursor for plastics. For example, polyethylene is a plastic that is formed when the naphtha mixture is subjected to a process called cracking (breaking the larger molecules up into smaller ones). This gives, in the first instance, the molecule ethene (ethylene), which can subsequently be polymerized to form polyethylene, a plastic used in an almost endless range of products such as toys, plastic garbage bags and the electrical insulation of wires.
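Schematically, and much simplified (real naphtha cracking yields a whole mixture of fragments, ethene among them):

naphtha —cracking—> C2H4 (ethene) + other fragments

n C2H4 —polymerization—> (C2H4)n (polyethylene)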

 

Because plastic is so widely used, it also leads to a lot of plastic waste (plastic packaging materials, plastic bottles, toys etc.). A Swiss company, Innovation Solar/Diesoil, is now doing exactly the opposite of the process described above: it has developed a process that converts plastic waste into diesel fuel. 1000 kilograms of plastic will yield about 850 liters of diesel, and all this at a cost price of only 26 eurocents per liter. Recently a Dutch company (Petrogas) announced a big order to build 15 units that turn plastic into diesel oil based on this chemical process.
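As a quick sanity check on those numbers, here is a back-of-the-envelope sketch; note that the diesel density of about 0.84 kg/L is my assumption, not a figure from the company:

```python
# Back-of-the-envelope check of the claimed plastic-to-diesel yield.
# Assumption (mine, not from the company): diesel density ~0.84 kg/L.
plastic_in_kg = 1000.0
diesel_out_l = 850.0
diesel_density_kg_per_l = 0.84  # assumed typical diesel density

diesel_out_kg = diesel_out_l * diesel_density_kg_per_l
mass_yield = diesel_out_kg / plastic_in_kg
print(f"Diesel out: {diesel_out_kg:.0f} kg -> mass yield ~{mass_yield:.0%}")
# ~714 kg of diesel from 1000 kg of plastic, i.e. a mass yield of roughly 71%
```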

 

Is this not something? Sounds almost like a perpetual process…

My question to the reader is: what will the thermodynamic balance (both energy and entropy) look like for this chain of conversions:

Petroleum —> Plastic —>  Diesel oil

Recently the European Commission (EC) released a green paper on how to accelerate innovative lighting technologies (http://ec.europa.eu/information_society/digital-agenda/actions/ssl-consultation/index_en.htm). The focus of the entire document is on solid state lighting (SSL) only. About 20% of all electrical energy generated worldwide is used to generate light. SSL is expected to play a substantial role in an energy efficiency improvement of 20% (the EC ambition versus 1990). It is anticipated that SSL (which can be either LED or OLED based technology), in combination with smart lighting management systems, can save up to 70% of the electrical energy required today. LEDs are expected to convert electrical energy into light at an efficiency of about 60%; compare that to only 2% for incandescent bulbs and about 25% for CFLs.
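Putting the quoted percentages together gives a feel for the stakes (a minimal sketch of the arithmetic, using only the figures above):

```python
# Rough arithmetic implied by the EC figures quoted above.
lighting_share = 0.20  # fraction of worldwide electricity used for lighting
ssl_saving = 0.70      # fraction of today's lighting energy SSL + smart controls could save

total_saving = lighting_share * ssl_saving
print(f"Potential saving: ~{total_saving:.0%} of total electricity generation")
# -> ~14% of all electricity generated worldwide
```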

Looks OK at first sight, doesn’t it? But it totally overlooks that these new light sources will create new applications, with the risk that the net result is that we save much less or, even worse, spend even more of our electricity bill on lighting than today. This is comparable to the anticipated reduction in paper use with the arrival of the PC and high-quality monitor screens. Well, we know how that ended… look, for instance, at the amount of junk mail that you find almost daily in your mailbox. Thus we will need to be careful how we apply SSL.

The EC is worried about Europe’s competitive position (quote from the report):

“The USA in 2009 put in place a long-term SSL strategy (from research to commercialisation). China is implementing a municipal showcase programme for LED street lighting involving more than 21 cities; it is granting significant subsidies to LED manufacturing plants and aims to create 1 million related jobs in the next 3 years. South Korea has defined a national LED strategy with the goal to become a top-3 world player in the LED business by 2012”

Two linked objectives are mentioned by the EC: 1) develop the demand side (European users) and 2) develop the supply side (the role of European industry).

One of the problems to overcome is the high price of SSL: a 60W incandescent bulb costs about 1 Euro, a CFL about 5 Euro and an LED about 30 Euro. It is expected that, through continuous price erosion, the market shares of CFL and SSL will be balanced by 2015. Not so far away!
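The upfront price is only part of the story, though. Here is a sketch of a lifetime-cost comparison; the purchase prices are from the text, but the wattages, lifetimes and electricity price are my own illustrative assumptions:

```python
# Illustrative lifetime-cost comparison for roughly equal light output (~ a 60 W ICL).
# Purchase prices are from the text; wattages, lifetimes and the electricity
# price are assumed illustrative values, not measured data.
ELECTRICITY_EUR_PER_KWH = 0.20  # assumed

lamps = {
    #       (price EUR, power W, lifetime h) -- assumed typical values
    "ICL": (1.0, 60, 1_000),
    "CFL": (5.0, 14, 8_000),
    "LED": (30.0, 9, 25_000),
}

HOURS = 25_000  # compare over one LED lifetime
for name, (price, watt, life) in lamps.items():
    purchase = price * HOURS / life  # cost of replacements over the period
    energy = watt / 1000 * HOURS * ELECTRICITY_EUR_PER_KWH
    print(f"{name}: {purchase + energy:7.2f} EUR over {HOURS} h")
# Under these assumptions the LED wins over its lifetime despite the upfront cost.
```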

In the economics literature, one can find two opposing points of view: mainstream economists, who believe that technological innovation will solve the degradation in quality of both energy and materials and that therefore growth can go on forever; and biophysical economists, who use the thermodynamic laws to argue that mainstream economists do not incorporate long-term sustainability in their models. For instance, the costs to repair the ozone hole or to mitigate increasing pollution are not accounted for in mainstream economic assessments. Industrial and agricultural processes accelerate the entropy production in our world. Entropy production can only go on until we reach the point where all available energy is transformed into non-available energy. The faster we go toward this end, the less freedom we leave for future generations. If entropy production were included in all economic models, the efficiency of standard industrial processes would show quite different results…

Even if there were no humans on this planet, there would be continuous entropy production. So from that point of view the ecological system is not perfect, either; even the sun has a limited lifespan. The real problem for us is that, in our relentless effort to speed things up, we increase the entropy production process tremendously. In fact, you can see some similarity between economic systems and organisms: both take in low entropy resources and produce high entropy waste. This leaves fewer resources for future generations.

Although recycling will help a lot to slow down the depletion of the earth’s stocks of materials, it will only partly diminish the entropy production process. So whenever we design or develop economic or industrial processes, we should also have a look at the associated rate of entropy production compared to the natural “background” entropy production. We have seen that for reversible processes, the increase in entropy is always less than for irreversible processes. The practical translation of this is that high-speed processes always accelerate the rate of entropy production in the world. Going shopping on your bike is clearly a much better entropy choice than using your car.

Conclusion: the entropy clock is ticking, and can only go forward!

From:  The Second Law of Life

An interesting article in Electronic Design News on SSL and CFL. Follow this link:

http://www.edn.com/blog/PowerSource/39403-Can_adding_a_reliability_standard_to_Energy_Star_actually_hurt_LED_lighting_.php

The article deals with a newly proposed energy efficiency standard but has some really interesting quotes about CFLs (compact fluorescent lamps). Read the comments as well!

See also my earlier blog on CFLs: https://secondlawoflife.wordpress.com/2010/04/06/reliability-of-compact-fluorescence-lamps/

Some time ago I wrote about the advantages of compact fluorescent lamps (CFLs) and a life cycle analysis (LCA) of these devices described in the literature[1]. The basic outcome was that CFLs do indeed give overall resource savings[2]. In an LCA you of course have to assume an average lifetime for the CFL, typically taken as 5 times[3] that of a regular incandescent lamp (ICL). Because CFLs are so much more complex to make than ICLs, the resource savings benefit would fall apart if the actual lifetime of the CFL deviated substantially from the assumed one.

The positive LCA outcome convinced me to replace many of the ICLs in my house with CFLs and accept the high upfront cost (a CFL is easily 5 times as expensive as an ICL). I bought about 15 lamps. Much to my surprise and frustration, within a year I had 3 failures. Note that I bought the CFLs from a top brand, but that the manufacturer gives no guarantee whatsoever in case of an early failure.

Therefore, I did a quick-and-dirty web search to see what one can find about the reliability of CFLs. Well, not too much, though I did find two interesting leads.

The first is a study from the Energy Federation Inc., published in 2002[4]. Over the period 1994-2001, four big-brand and four small-brand manufacturers were tracked for sales and returns. The big brands had a return rate of 1.4%[5]. There is much more detail in this report, such as the relation between return rate and lamp wattage, so I recommend you go to their website and read it[6].

Based on this, you can expect on average one early failure out of every 70 CFLs that you buy[7]. Clearly, my failure rate (3 out of 15) is much higher. And what is most frustrating is that there is no warranty on these lamps. If they fail after 6 months or so, what can you prove? Nothing.
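Just how unlikely is 3 failures out of 15 if that 2002 return rate still applies? A quick binomial estimate (a sketch, assuming independent failures at p = 1.4%):

```python
# Probability of seeing 3 or more early failures out of 15 lamps,
# assuming independent failures at the reported big-brand rate of 1.4%.
from math import comb

n, p = 15, 0.014
p_at_least_3 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3, n + 1))
print(f"P(>=3 failures out of {n}) = {p_at_least_3:.2e}")
# ~1e-3: my experience would be very unlikely if the 1.4% rate applied
```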

But I am not the only one suffering from this problem. See the kiloxray.com blog (http://www.kiloxray.com/blog/?page_id=8). The author is actually logging the number of failures he is experiencing (there are many!) and has a good tip: note on the lamp the date you put the CFL into operation and… hold on to the original receipt. You may have a chance of getting your money back from the manufacturer, although don’t have too high expectations of this. If you have similar experiences or recommendations to share, please leave a comment.

© Copyright 2010 John Schmitz


[1] https://secondlawoflife.wordpress.com/2008/10/05/compact-fluorescence-lamps/

[2] Parsons, David. “The Environmental Impact of Compact Fluorescent Lamps and Incandescent Lamps for Australian Conditions”, The Environmental Engineer 7(2): 8-14 (2006).

[3] Actually, numbers vary; you can find figures as high as 10!

[4] Bradley Steele, The Performance and Acceptance of Compact Fluorescent Lighting Products in the Residential Market; Energy Federation, Inc.

[5] Small brands ran slightly higher, at 1.5%.

[6] http://www.lrc.rpi.edu/programs/lightingTransformation/pdf/bradSteele.pdf

[7] This should be a worst-case return rate, as you may expect that CFL manufacturers have improved the reliability of their products since 2002.

Semiconductors play an important role in solving the planet’s energy issues. There are two distinct, but related, phenomena: the conversion of (sun)light into electricity and the conversion of electrical power into visible light. The first conversion is known as photovoltaic (PV) technology, and the second is the one used by the Light Emitting Diodes (LEDs) found in solid state lighting applications. Both conversions enjoy considerable interest from scientists, governments, energy companies and citizens alike. It is clear that both energy conversions can contribute substantially to solving the availability and distribution of energy around the planet.

A key factor for the successful acceptance (at least in terms of economic feasibility) of both PV and LEDs is the efficiency of these two types of energy conversion, since this directly impacts cost per Watt or cost per Lumen. Indeed, the question arises: are there fundamental limits to these energy conversions? For silicon-based PV cells, an upper efficiency of about 30% has been reported. For LEDs, no fundamental barrier has been reported so far that would keep them away from 100% efficiency (although the fact that the device heats up during operation already hints at a less-than-100% efficient light conversion).

I will come back to the efficiency of PV cells in a future contribution; for now I would like to focus on the efficiency of an LED. An LED is typically constructed from a classical p-n junction, but in the LED case the p and n materials are separated by what is called an active zone, which can be either doped or intrinsic. The semiconductor must be a direct band gap material in order to have sufficient conversion efficiency[i]. By putting the LED in forward bias, the electrons and holes that arrive in the active zone can recombine in two different ways:

–         Radiative recombination. It is this recombination that fuels the light emission from the LED.

–         Non-radiative recombination. Several such processes occur as well; these reduce the number of holes and electrons available for light emission.

There are other loss mechanisms operating as well (such as absorption of the emitted photons by the semiconductor) that further reduce the light generation efficiency.

Recently an article appeared in the Journal of Applied Physics[ii] that gives good insight into the different factors that influence the power-to-light conversion efficiency. An important quantity is the so-called wall plug efficiency, defined as follows:

Wall plug efficiency = emitted optical power / electrical input power

a pretty straightforward definition. In the article, all the different recombination and loss mechanisms are described mathematically and then put together in one model of the LED. This model can then calculate the behavior (and thus the wall plug efficiency) of the LED device in terms of operating conditions (temperature, current, voltage), material properties (semiconductor material such as GaN or GaAs, and doping) and LED structure (thickness of the different layers, metal contacts and layout of the active layer). This is of great help when optimizing the LED device for conversion efficiency.
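For concreteness, the definition itself is easy to evaluate (a minimal sketch; the operating point below is an invented example, not a measured device):

```python
# Wall plug efficiency = emitted optical power / electrical input power.
# The numbers below are an invented operating point, purely for illustration.
def wall_plug_efficiency(optical_power_w: float, current_a: float, voltage_v: float) -> float:
    """Return emitted optical power divided by electrical input power (I*V)."""
    return optical_power_w / (current_a * voltage_v)

eta = wall_plug_efficiency(optical_power_w=0.35, current_a=0.35, voltage_v=3.2)
print(f"Wall plug efficiency: {eta:.0%}")  # ~31% for this made-up operating point
```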

Let me summarize a few important conclusions from the article:

–         There is no fundamental reason why the power-to-light conversion cannot be 100%. Even stronger, the conversion can be more than 100% (see the next point for an explanation)! However, the high efficiencies may not always lie in a practical operating window (for instance, at the current densities the LED needs to run at to deliver a required light output per unit of semiconductor surface area).

–         The energy of the photon may come not only from the band gap energy difference; phonons (thermal energy from the lattice) may contribute as well. In that case the LED can act as a heat pump: the device actually cools and can in that way extract heat from the environment, achieving an efficiency better than 100% (using the wall plug efficiency definition above); see the energy balance sketched after this list.

–         Further improvements to increase the light output of LEDs will be possible.
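To see how more than 100% is possible, consider the per-photon energy balance (a sketch, not the article’s full model): each electron crossing the junction picks up electrical energy qV, while the emitted photon carries energy hν. Per photon, then:

wall plug efficiency = hν/qV, which exceeds 1 whenever qV < hν

and the shortfall hν − qV must be supplied by the lattice as heat, which is exactly why the device cools.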

Thus, in the coming years we can expect more developments that improve solid state lighting technology, and this will be a very valuable contribution to our energy strategy.


[i] See for an explanation: http://en.wikipedia.org/wiki/Direct_and_indirect_band_gaps

[ii] O. Heikkilä, J. Oksanen, J. Tulkki, “Ultimate limit and temperature dependency of light-emitting diode efficiency”, Journal of Applied Physics 105, 093119 (2009)

©  Copyright John Schmitz 2010

In an earlier blog[1] I wrote about the connection between the Second Law, the economy and the problem of a sustainable society. Of course, the most important inputs on this topic were provided by Nicholas Georgescu-Roegen in his 1971 book The Entropy Law and the Economic Process[2]. Georgescu-Roegen stated that the entropy law applies to everything we do, and that with every action that degrades energy (it is never really “used up”) entropy is produced, leaving a smaller entropy budget for future generations. In other words, he made us aware of the entropic constraint on all economic activity. The entropy law simply prevents us from creating a kind of perpetual cycle that would miraculously restore natural resources. Georgescu-Roegen’s main complaint about economists is that they ignore this fact, assume that everything in the economic process is cyclic in nature, and trust that in any case technology will provide us with solutions. However, it can be shown that each new technology often tends to accelerate entropy production even more.

Interesting in this respect is a very recent publication by the Economics Web Institute: Innovative Economic Policies for Climate Change Mitigation[3]. About 30 economists, managers, consultants and technologists have gathered to describe 20 approaches to mitigating climate change. Three key transitions (as they coin them) are needed:

1) Transitions in market structures and firm behaviour

2) Transitions in consumer lifestyles and purchasing rules

3) Transitions in government policy making

They argue that economic aspects must play a much stronger role in climate change mitigation and that the neoclassical economic model (which reduces all entities to prices and quantities but neglects, for instance, the extinction of the human race) needs a major revision. Instead, they believe that climate mitigation need not be considered just a cost factor, but rather an opportunity for innovation, business growth, profit and employment.

The entire book runs to more than 350 pages; in upcoming blogs I will zoom in on a few of the articles. In the meantime, have a look at the website of the Economics Web Institute (www.economicswebinstitute.org), as it contains tons of interesting articles, data and tables.

© 2009 Copyright John Schmitz


[1] https://secondlawoflife.wordpress.com/category/entropy-and-economy/

[2] Georgescu-Roegen, Nicholas, The Entropy Law and the Economic Process, Harvard University Press, Cambridge, Massachusetts (1971)

[3] Innovative Economic Policies for Climate Change Mitigation

Piana V. (ed.), Aliyev S., Andersen M. M., Banaszak I., Beim M., Kannan B., Kalita B., Bullywon L., Caniëls M., Doon H., Gaurav J., Karbasi A.,  Komalirani Y., Kua H. W., Hussey K., Lee J., Masinde J., Matczak P., Mathew P. , Moghadam Z.  G., Mozafary M. M., Rafieirad S., Romijn H., Oltra V., Schram A., Malik V. S., Stewart G.,  Wagner Z., Weiler R. (2009), www.economicswebinstitute.org/innopolicymitigation.htm,  Economics Web Institute, Lulu.com.

While the quantum mechanical framework was being developed after Planck’s discovery in 1901, physicists were wrestling with the dual character of light (wave or particle?). Thomas Young’s double slit experiment in 1803, where interference patterns were observed, seemed to show without doubt that light was a wave phenomenon. However, Planck’s interpretation of black body radiation as light quanta, followed by Einstein’s explanation of the photoelectric effect, both contradicted the light-as-wave theory. Additionally, a shocking discovery was made by Compton in 1923. Compton found that when he let X-rays (a form of light with extremely short wavelengths) collide head-on with a bundle of electrons, the X-rays were scattered as if they were particles. This phenomenon became known as the “Compton scattering experiment.”

At about that time, the French physicist Louis de Broglie combined two simple formulas: Planck’s light quantum expression (E = hν, with ν as the frequency) and Einstein’s famous energy-mass equation (E = mc²). This led to another simple equation for light: λ = h/mc, with λ as the wavelength. De Broglie’s bold step was to generalize this to matter, replacing c with the particle’s velocity v, so that λ = h/mv. This equation tells us that all matter has wave properties. However, since the mass, m, of most everyday visible objects is so large, their wavelengths are too small for us to notice any wave effect. But when we consider the small masses of atomic particles such as electrons and protons, their wavelengths become relevant and start to play a role in the phenomena we observe.
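To make “too small to notice” concrete, here is a quick sketch; the baseball mass and the two speeds are my illustrative choices, not values from the text:

```python
# de Broglie wavelength lambda = h / (m * v) for two very different objects.
# The masses and speeds below are illustrative choices.
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    return H / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.109e-31, 2.0e6)  # electron at ~2000 km/s
baseball = de_broglie_wavelength(0.145, 40.0)       # 145 g baseball at 40 m/s

print(f"electron: {electron:.2e} m")  # ~3.6e-10 m, about the size of an atom
print(f"baseball: {baseball:.2e} m")  # ~1.1e-34 m, unimaginably small
```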

All this brought Erwin Schrödinger to the conclusion that electrons should be considered waves, and he developed a famous wave equation that very successfully described the behavior of electrons in a hydrogen atom. Schrödinger’s equation used a wave function to describe the probability of finding a rapidly moving electron at a certain time and place. In fact, the equation confirmed many ideas that Bohr used to build his empirical atom model. For instance, the equation correctly predicted that the lowest energy level of an atom could allow only two electrons, while the next level was limited to eight electrons, and so on. In 1933 Schrödinger was awarded the Nobel Prize for his wave equation.

Schrödinger had, as did Planck and Einstein, an extensive background in thermodynamics. From 1906 to 1910, he studied at the University of Vienna under Boltzmann’s successor, Fritz Hasenöhrl. Hasenöhrl was a great admirer of Boltzmann, and in 1909 he republished 139 of the latter’s scientific articles in three volumes [Hasenöhrl, 1909]. It was through Hasenöhrl that Schrödinger became very interested in Boltzmann’s statistical mechanics. He was even led to write of Boltzmann: “His line of thoughts may be called my first love in science. No other has ever thus enraptured me or will ever do so again” [Schrödinger, 1929]. Later he published books (Statistical Thermodynamics and What Is Life?) and several papers on the specific heats of solids and other thermodynamic issues.[1]

 © 2009 Copyright John Schmitz


[1] Taken from “The Second Law of Life”: http://www.elsevierdirect.com/product.jsp?isbn=9780815515371

The human body can deliver lots of work. Consider, for instance, the athlete running a marathon, or the cyclist racing in the Tour de France. We also know that human body temperature is normally 37°C and that the environment is usually cooler, say 20°C. From this we might suggest some resemblance to a heat engine, in which the body is the heat source and the cooler environment acts as the heat sink. So let’s make a few simple calculations to see how closely the body resembles a heat engine. We know that the efficiency of a heat engine is determined by the temperatures of the heat source (the body temperature, Tbody = 310 K) and the heat sink (the environmental temperature, Tsink = 293 K):

Efficiency = (Tbody − Tsink)/Tbody = (310 − 293)/310 = 5.5%

Thus, based on this temperature difference, the body would be able to achieve only 5.5% efficiency. Fortunately, scientific studies have already estimated the human body’s efficiency [1] in other ways. One study reasons that for an average man to produce 75 Watts of power, he will need to breathe about one liter of oxygen per minute. That liter of O2 is combusted in body cells to form carbon dioxide (CO2). It has also been determined that one liter of oxygen generates in this way about 300 Watts of power. Thus, we can conclude that the efficiency of the human “engine” is 75/300 = 25%. What causes the difference between the 5.5% efficiency calculated above and the 25% from the combustion determination? The explanation is that the human body cannot be considered a heat engine. The work is not generated in the same way as in a steam engine, which directly transforms heat into work and lower-temperature waste heat. Instead, the human body is more like a fuel cell, where chemical energy is transformed into work (see also Whitt et al.). For this kind of transformation, one obviously cannot use the efficiency formula of a heat engine.
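Putting the two estimates side by side (a minimal sketch of the numbers already worked out above):

```python
# The two efficiency estimates from the text, side by side.
T_body, T_sink = 310.0, 293.0        # kelvin
carnot = (T_body - T_sink) / T_body  # if the body were a heat engine
metabolic = 75.0 / 300.0             # measured: 75 W output per ~300 W consumed

print(f"Heat-engine (Carnot) limit: {carnot:.1%}")    # ~5.5%
print(f"Metabolic estimate:         {metabolic:.0%}")  # 25%
# The body beats the heat-engine limit because it is not a heat engine:
# like a fuel cell, it converts chemical energy directly into work.
```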

 


[1] Whitt, F.R. and Wilson, D.G., Bicycling Science, MIT Press, Cambridge (1976)

Einstein, like Planck, was very fluent in thermodynamic theory. Before 1905, Einstein published several papers on thermodynamic topics. One of these dealt with the fundamentals of thermodynamic theory [Einstein, 1903]. In this work, he studied whether the thermodynamic laws could be derived from a minimum amount of elementary assumptions. In 1905[1] he published a study in which he explained the photoelectric phenomenon. In that explanation, he not only used the results of Planck’s discrete energy packets for the black body radiation description, but fully acknowledged Boltzmann’s work, calling the expression S = k lnW  “the principle of Boltzmann.”

How highly Einstein regarded thermodynamics can be appreciated in the following quote:

“A law is more impressive the greater the simplicity of its premises, the more different are the kinds of things it relates, and the more extended its range of applicability. (..) It is the only physical theory of universal content of which I am convinced that, within the framework of applicability of its basic concepts, it will never be overthrown.”

Einstein is best known for his theory of relativity, in which time is no longer invariant[2]. Less remembered is that he searched his whole life for a theory that could unify the electromagnetic theory of Faraday and Maxwell on the one hand, and the mechanical theory of the material particles of Newton on the other. For instance, Newton unified the observations of falling objects on earth with the fact that the earth and planets orbit the sun. He did this by using a single concept – namely, gravity – to explain both phenomena. Maxwell showed that seemingly quite different magnetic and electric observations could be described by a single theory of electromagnetic waves.

Around 1900 several outstanding physicists were working to explain Planck’s black body radiation. Planck had to introduce quantum theory to explain the experimentally observed relationship between energy and wavelength. Einstein did not like this explanation, since it introduced yet another theory rather than unifying existing theories. Einstein was convinced that the answers could be found in thermodynamics, since this theory was based on structure-independent assumptions. Indeed, the special theory of relativity can be considered a theory of principles analogous to the theory of thermodynamics [Klein, 1967].

What brought Einstein to his Special Theory of Relativity was his idea (conceived when he was 16 years old!) that the velocity of light must be the same for all observers, regardless of their respective speeds. He derived this conclusion from Maxwell’s electromagnetic equations, and it kept his mind puzzled for a long time. His familiarity with thermodynamic theory also gave him a lot of inspiration. We can appreciate the challenge from two questions (taken from Martin Klein’s publication “Thermodynamics in Einstein’s Thought”; Science, Vol 157, 509 (1967)). In essence, the accomplishment of classical thermodynamics was to find mathematical expressions for the dilemma:

“What must the laws of nature be like so that it is impossible to construct a perpetual motion machine of either the first or the second kind?”

This question refers to the empirical fact that no perpetual motion machine has ever been observed that violates the first principle (energy cannot be created or destroyed in an isolated system) or contradicts the second principle (entropy always increases for spontaneous processes in, again, an isolated system). Similarly, while developing the Special Theory of Relativity, Einstein wondered:

“What must the laws of nature be like so that there are no privileged observers?”

This question refers to the fact that the speed of light is the same for all observers, regardless of how fast their platform (a planet, a rocket, or an angel’s wings) is going. Therefore, one must derive expressions that will obey the principle of the constancy of light speed. In the same way that classical thermodynamics does not worry about why energy is conserved or why entropy increases, so Einstein didn’t try to puzzle out why the speed of light was constant, but merely accepted it as fact. Once accepted, the equations that describe this assumption are pretty straightforward!

Thus, the Special Theory of Relativity can be viewed as a theory of principles analogous to thermodynamics, and not as a constructive theory – as, for instance, gravity or the kinetic gas theory[3]. This means that no model is needed (like a model of an atom in the case of quantum mechanics) in either the Special Theory of Relativity or in thermodynamics, in order to arrive at the end results of both theories. The nice thing is that both theories can live on indefinitely with little risk of needing adjustment because of new insights. That is, in fact, what we’ve seen: both thermodynamics and the Special Theory of Relativity have not changed since their conception. [4]

Taken from:

“The Second Law of Life, Energy, Technology and the Future of Earth As We Know It”

http://www.elsevier.com/wps/find/bookdescription.cws_home/715243/description#description

© Copyright 2009 John Schmitz

 


[1] 1905 was also the year Einstein published his Special Theory of Relativity, along with his articles on the photoelectric effect, the explanation of Brownian motion, and an article in which he stated his famous equation, E=mc². Because of the overwhelming amount of important material Einstein produced in one year, 1905 is sometimes called the Annus Mirabilis (the Miracle Year) [Bushev, Michael, “A Note on Einstein’s Annus Mirabilis”, Annales de la Fondation Louis de Broglie, Vol 25, no 3 (2000)].

[2] Einstein worked for several years at the Swiss patent office in Bern. During that period, because of the ongoing electrification and synchronization of clocks within cities and across countries, many patent applications came in that proposed all sorts of ingenious ways to implement the synchronization. Einstein therefore saw many proposals dealing with these kinds of problems, and that may very well have triggered his interest in time; see also footnote 72 [Galison, Peter, Einstein’s Clocks, Poincaré’s Maps: Empires of Time; W.W. Norton & Company, Inc., New York (2004)].

[3] The kinetic gas theory starts with the existence of gas molecules, their continuous motion, and their finite dimensions. Then, by applying Newton’s mechanical kinetic theory it is possible to derive a relation among the macroscopic gas parameters: pressure, temperature, and volume. In this way a model can be built that has predictive and verifiable power.

[4] I feel that a few more words are needed here. Einstein himself pointed out in a 1919 article in the Times of London that a theory of principle is based on empirical observations without the need for a particular model, whereas a constructive theory will first make assumptions about a fundamental structure and then build a mathematical description of that structure that will hopefully yield relationships between the empirically observed parameters. In his own words: “Thus the science of thermodynamics seeks by analytical means to deduce necessary conditions, which separate events have to satisfy, from the universally experienced fact that perpetual motion is impossible”. Thus, classical thermodynamics can be regarded as a theory of principles, whereas statistical thermodynamics (i.e., the Boltzmann approach) should be categorized as a constructive theory. In 1904 it was Poincaré who made a similar classification of scientific theories in his book The Value of Science.