While the quantum mechanical framework was being developed after Planck’s discovery in 1901, physicists were wrestling with the dual character of light (wave or particle?). Thomas Young’s double-slit experiment in 1803, where interference patterns were observed, seemed to show beyond doubt that light was a wave phenomenon. However, Planck’s interpretation of black body radiation as light quanta, followed by Einstein’s explanation of the photoelectric effect, both contradicted the light-as-wave theory. Additionally, a shocking discovery was made by Compton in 1923. Compton found that when he let X-rays (a form of light with extremely short wavelengths) collide head-on with a bundle of electrons, the X-rays were scattered as if they were particles. This phenomenon became known as “Compton scattering.”

At about that time, French physicist Louis de Broglie combined two simple formulas: Planck’s light quanta expression (E = hν, with ν the frequency) and Einstein’s famous energy‑mass equation (E = mc²). This led to another simple equation for light: λ = h/mc, with λ the wavelength. De Broglie proposed that this relation holds for matter as well: a particle of mass m moving at speed v has a wavelength λ = h/mv. This equation really tells us that all matter has wave properties. However, since the mass, m, of most everyday visible objects is so large, their wavelengths are too small for us to notice any wave effect. But when we consider the small masses of atomic particles such as electrons and protons, their wavelengths become relevant and start to play a role in the phenomena we observe.
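The contrast between atomic particles and everyday objects can be made concrete with a small calculation. The sketch below evaluates λ = h/mv for an electron and for a thrown ball; the chosen speeds are illustrative assumptions, not values from the text.

```python
# Illustrative sketch of the de Broglie relation: wavelength = h / (m * v).
# The particle speeds below are assumed example values.
h = 6.626e-34  # Planck's constant, in J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Return the de Broglie wavelength h / (m v) in metres."""
    return h / (mass_kg * speed_m_s)

# An electron (mass ~9.109e-31 kg) moving at 1% of the speed of light:
electron = de_broglie_wavelength(9.109e-31, 3.0e6)
# A 0.15 kg ball thrown at 40 m/s:
ball = de_broglie_wavelength(0.15, 40.0)

print(f"electron: {electron:.2e} m")  # on the order of atomic dimensions
print(f"ball:     {ball:.2e} m")      # far too small ever to observe
```

The electron’s wavelength comes out around 10⁻¹⁰ m, comparable to the size of an atom, while the ball’s is around 10⁻³⁴ m, which is why wave effects never show up for everyday objects.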

All this brought Erwin Schrödinger to the conclusion that electrons should be considered waves, and he developed a famous wave equation that very successfully described the behavior of electrons in a hydrogen atom. Schrödinger’s equation used a wave function to describe the probability of finding a rapidly moving electron at a certain time and place. In fact, the equation confirmed many ideas that Bohr had used to build his empirical atom model. For instance, the equation correctly predicted that the lowest energy level of an atom could hold only two electrons, while the next level was limited to eight electrons, and so on. In 1933 Schrödinger was awarded the Nobel Prize for his wave equation.

Schrödinger had, as did Planck and Einstein, an extensive background in thermodynamics. From 1906 to 1910, he studied at the University of Vienna under Boltzmann’s successor, Fritz Hasenöhrl. Hasenöhrl was a great admirer of Boltzmann, and in 1909 he republished 139 of the latter’s scientific articles in three volumes [Hasenöhrl, 1909]. It was through Hasenöhrl that Schrödinger became very interested in Boltzmann’s statistical mechanics. He was even led to write of Boltzmann, “His line of thoughts may be called my first love in science. No other has ever thus enraptured me or will ever do so again” [Schrödinger, 1929]. Later he published the books Statistical Thermodynamics and What is Life?, and several papers on the specific heats of solids and other thermodynamic issues. [1]

 © 2009 Copyright John Schmitz


[1] Taken from “The Second Law of Life”, http://www.elsevierdirect.com/product.jsp?isbn=9780815515371

The two laws of thermodynamics (energy and entropy) have been related to the fundamental questions of the existence of life. To find answers to these questions, several angles can be taken. Of course we have the religious points of view. Creationists typically consider the First Law of thermodynamics (conservation of energy) a confirmation of the eternal existence of God, since energy has been and will be present forever. The Second Law (increase of entropy), however, is often interpreted with a more negative flavour. The entropy law is connected to things such as decay, destruction, and chaos or disorder. There has been a lively discussion in the religious-thermodynamic realm, but I prefer to come back to that discussion in a future blog. Let’s restrict ourselves for now to a more scientific treatment of the subject. For that purpose it is good to first define the system we want to discuss. In thermodynamics we often work with what is called an isolated system. Isolated means here a system that cannot exchange energy, materials or anything else with its environment.

[Figure: an isolated system (grey box) containing a living organism (“Life”) and its habitat, with entropy changes ΔSlife and ΔShabitat]

We know from the inequality of Clausius (see earlier blogs) that for an isolated system the entropy can only increase over time[1]. This is a very important statement and should be kept in mind for the remainder of the discussion. Have a look at the figure above. For our isolated system (the big grey box) we have, after Clausius, ΔS > 0. But for the living organism, represented by the box “Life”, we have the peculiar situation that this organism is able to keep its entropy low, as is visible from the tremendous degree of order present in a living organism.

How is that done? Well, the organism feeds on low-entropy food (or energy, if you wish); see also below. However, this consumption of low-entropy food to build and maintain the organism’s structure comes with waste production (like CO2 and faeces) and also with dissipation of energy as work (by the muscles) or heat (our body keeps us at 37°C). This causes an entropy increase in the habitat of the organism (represented by ΔShabitat), such that the total entropy (= ΔSlife + ΔShabitat) of the isolated system increases as a whole! Erwin Schrödinger described the feeding of a living organism on low-entropy energy in his famous little book What is Life?[2]; I can recommend reading this work. We can take this even one step further. As long as the organism is alive it is able to keep its entropy low, but when it dies this is no longer possible and the decay and associated entropy increase start[3]. Thus, perhaps we have here an alternative definition of a living organism:

a structure that is able to keep its entropy artificially low by an intake of low entropy energy from its habitat.

If we can relate the thermodynamic laws to the fundamentals of organic life, is there then also a role for them to play in the process of natural selection? This intriguing question was posed many years ago by Alfred Lotka (1880-1949), a scientist who studied topics in the fields of population dynamics and energetics. In 1922 he published two early articles on the relation between energy and natural selection[4],[5]. I would like to take a few interesting thoughts from his articles. Lotka regards the driving force behind natural selection as the maximization of the energy flux through the organism, provided an unused residue of energy is still left in the system (habitat). Two fundamentally different categories of living species can be seen: plants, which are energy accumulators (they can convert sunlight into chemical energy), and animals, which are basically energy engines, meaning that they convert low-entropy energy (stored in their food, such as plants or other animals) into high-entropy (low-quality) energy. According to the energy flux definition of natural selection, one could consider man the most successful species, as humans have (unconsciously?) mastered the “art” of maximizing or accelerating the circulation of energy and matter. However, this is only possible because of the existence of the energy accumulators, the plants!

Copyright © 2007 John E.J. Schmitz


[1] See for a more detailed discussion of this principle The Second Law of Life

[2] Erwin Schrödinger, What is life?, Cambridge University Press, London, (1951)

[3] A slightly alternative formulation of this was offered in 1921 by J. Johnstone in The Mechanism of Life: in living mechanisms the increase in entropy is retarded; see also the articles by Lotka below

[4] A.J. Lotka, Contribution to the energetics of evolution, Proc. Natl. Acad. Sci., 8, pp 147-151 (1922)

[5] A.J. Lotka, Natural selection as a physical principle, Proc. Natl. Acad. Sci., 8, pp 151-154 (1922)