Physical History
By Mark Ciotola
First published on May 17, 2019. Last updated on February 13, 2020.
Table of Contents
- 1 Introduction
- 2 Big Bang and the Formation of Our World
- 2.1 Energy Balance of the Earth
- 2.2 Energy of Life
- 2.3 Energy Flows in Ecology
- 2.4 Formation and Endurance of Life
- 2.5 Statistical and Evolutionary Intelligence
- 2.6 Smarter Intelligence
- 2.7 Development of Agriculture and Civilization
- 3 Fast Entropy and the eth law
- 4 Flows and Bubbles
- 4.1 Resource Bubbles
- 4.2 Economic Bubbles
- 4.3 Business Bubbles
- 4.4 Exponential functions
- 5 Psychological Reactions
- 5.1 Psychology Versus Fast Entropy Paradox
- 5.2 Psychological Reactions to Change
- 6 Modeling History
- 6.1 Creating And Using Models
- 6.2 Fitting, Uncertainty, Significance and Error
- 6.3 To What Extent Can History Be Quantitatively Modeled?
- 6.4 Long-Term Trends and the Emergence of Societies
- 6.5 Emergence of Dynasties
- 6.6 Modeling A Dynasty Using EDEG
- 6.7 Comparing Historical Data to Dynastic Models
- 6.8 Secondary Dynastic Events
- 6.9 Modeling History as A Series of Dynasties
- 6.10 Interrelations Between Concurrent Dynasties
- 6.11 The Colossus Model of World History
- 6.12 A GIS Approach to World History
- 6.13 Modern Times and the Near Future
- 7 The Future: Beyond Our World And Time
- 7.1 Longterm Trends – Future
- 7.2 Modeling The Future
- 7.3 Other Worlds and Societies
- 8 Conclusions
This book strives to demonstrate how humanity has developed and progressed as a result of cosmological processes, as well as to explore the development of a science of historical processes. This book goes on to discuss the implications and uses for analyzing societies.
-
1 Introduction
First published on May 16, 2019. Last updated on January 20, 2021.
Mission
This section introduces the subject of Physical History and Economics (PHE), a small treatise on the development of a unified science of history. It is chiefly a physical model, in that it deals with physical principles, quantities, tendencies and constraints. It attempts to do so quantitatively where possible. It also delves into other areas, such as psychology and traditional history; however, the author has great respect for researchers and their work in those areas, and does not assert expertise in them.
Work In Progress
This entire publication is a work-in-progress. Substantive research is being conducted behind the scenes and presented in academic settings. Trying to weave everything together is a challenge, and one never knows how much time may be left to improve the content. The author engages in this work in the hope that it may be of great benefit to both the positive advancement and sustainability of humanity, and the seeking of knowledge.
So changes will be made in a hard-to-predict manner to the various sections. Therefore, if you do cite this work, please include the date last viewed.
Imagine…
Imagine the hot sun shining brightly upon the Earth situated in cold space. Much light is reflected back from the Earth into space. The remainder is absorbed by the Earth and heats its surface. Nature abhors temperature differences and tries to rectify the situation as quickly as possible by having the Earth emit heat back into space. Yet the Earth’s atmosphere is a good insulator. To bypass that insulation, great blobs of hot air at the surface rise wholesale into the cooler regions of the upper atmosphere, so the escape of heat is greatly increased, and Nature is pleased.
Yet the light that gets reflected from the Earth is not heat, nor does it do much to warm the coldness of space. Nature does not gladly tolerate such rogue light. So living organisms develop upon the Earth that can capture and photosynthesize some of the rogue light. Those organisms release heat or are consumed by other organisms that produce heat. Nature is still not satisfied and demands greater haste. Intelligent organisms form that can release heat faster, and civilizations form that can release heat faster still, further pleasing Nature.
Nature is greedy and demands all that it can seize. Just as great blobs of air form and rise through the atmosphere, dynasties and empires form in succession one after another, releasing heat that is otherwise inaccessible. History is literally a pot of water boiling on a hot stove in a cold kitchen, with dynasties and empires forming and bubbling up to the surface. Is there more that Nature can yet demand? New technologies and untapped sources of energy? New forms of civilization? Or the yet totally unknown?
This book is intended to serve as an introduction and handbook. Rich descriptions as well as much technical detail have been omitted to improve readability and avoid confusion. Additional sources of information are cited for the reader who wishes to know more. In this book, you will envision how humans are linked to the entire universe and how we share its drive and destiny. Unfortunately, PHE does not provide quick, easy answers to society’s challenges. Nevertheless, you will discover analytical tools as powerful as the astronomer’s telescope and the biologist’s microscope to investigate human affairs. This is a tall order to fill. It is best to remember that this book is more of a framework of perspectives and tools to help you get started, rather than an encyclopedia of answers. This is still a pioneering field. There are considerable opportunities for further contributions of the greatest significance.
PHE derives social science primarily from physics, but also from other areas such as cosmology, ecology and psychology. PHE is more fundamental than social science derived merely from the observation of humans, because it views the existence of humans as the result of cosmological trends and physical processes. Likewise, PHE strives to be generic, so that it can be used to describe and analyze any society anywhere and anytime, be it the Carolingian dynasty in medieval France or an extraterrestrial society across the galaxy. Observation strongly suggests that the laws of physics remain invariant across time and space, allowing for the possibility of a truly generic, non-geocentric social science derived from physical principles.
Although PHE is based upon the physical sciences, no claim is made for its ability to “produce” a perfectly deterministic science. In fact the approaches of PHE are only practical because people act as individuals and have a wide freedom of action. This seems paradoxical, but that is the way things work out.
Inner Versus Outer Philosophy
In ancient times, natural (outer) and social (inner) philosophy were closely linked. Then, a philosopher’s view of the composition of matter might be closely linked to their view of the best type of government for society. This unity of inner and outer philosophy continued in Europe until the Renaissance.[1] However, the heliocentric universe proposed by Copernicus and the findings of imperfect heavens by Galileo were deemed inconsistent with the inner, social philosophy of that time. The resulting severance of inner and outer philosophy began in earnest and has continued to this day.
PHE approaches social science from the perspective of outer philosophy. Both approaches are necessary for the development of a complete and meaningful social science. We are humans who attempt to develop social science. We try to be impartial, but must admit that our ability to do so is inherently limited. Motivation and incentives are always a factor in what gets studied. Why should we develop social science if it does not benefit those of us who endeavor to do so? Even physical scientists are human and have the same sort of needs that other people have. The subject of psychology and how it colors people’s reaction to PHE is discussed in a later section.
A Unified Model
The social sciences already utilize some quantitative methods. Economists use them extensively, and some historians practice cliometrics. Nevertheless, the social sciences have lacked the type of unified model that Newton provided for the physical sciences. Ever since Newton formulated his three laws to describe the mechanical universe, numerous philosophers and social scientists have tried to create a mechanical model of society without success. Meanwhile, in the early 1900s, Newton’s laws of mechanics were shown to be idealizations of a much less deterministic, statistical universe. Ironically, it is the fall of Newtonian mechanics that allows for the achievement of a true “science of society.” PHE is not the purely deterministic dream of early “Newtonian” sociologists. Rather, PHE uses concepts from modern statistical mechanics to provide a firm foundation for a fundamental understanding of history and economics.
This book provides the skeleton of such a unified model. The Principle of Fast Entropy, an extension of the Second Law of Thermodynamics[1], is suggested as a unifying, driving principle. Just as gravity is the key force in Newton’s unified model of the physical universe, Fast Entropy is the key tendency for a unified model of the social universe. Fast Entropy is literally the “gravity” of social science. Fast Entropy applies to both the social and physical sciences. Fast Entropy can be used to analyze, understand and validate other economic and historical methodologies. It is a constraint that can be used to identify other constraints. In science, a known constraint is a valuable piece of knowledge.
The author hopes you will find this text useful. The philosophical implications are glossed over in favor of presenting pragmatic approaches and tools. It is hoped that this work will stimulate you to develop your own ideas and approaches, for one of the fundamental characteristics of science is that it is always unfinished.
Notes & References
[1] H. Scott had previously proposed deriving economic policy from thermodynamics, in particular the works of W. Gibbs, in the 1920s. Source: www.technocracy.org.
-
2 Big Bang and the Formation of Our World
Last updated on January 19, 2021.
Here, we describe the cosmological context of Physical History and Economics. Without the contrasts provided by this context, the rest of this book would be moot.
The Big Bang and the Expansion of the Universe
13.772 billion years ago, the known universe began from a single point in time and space in a tremendous explosion known as the Big Bang. For a brief moment, the universe was filled with pure light containing all of the energy in the universe, a literal swarm of light. The universe was so hot that no matter could exist.
Cosmology from Big Bang to present (credit: NASA)
Growing Darkness and Clumpiness
In the Universe, total energy has remained essentially constant (but see below). As the universe began to expand, that constant amount of energy spread over a larger volume, first rapidly during its inflationary era, then more slowly until relatively recently. Therefore, as the energy density of the universe decreased, the universe cooled down and became darker. At the same time, the universe began to exhibit clumpiness in terms of temperature and density variation.
Cosmic microwave background radiation (photo credit: NASA)
The Contrasted Universe
As the universe progressed in time, contrasting trends have occurred. Overall, the universe expands, cools and dims. Yet, in local areas, the universe heats up and grows brighter. In certain very important ways, the universe has become less homogeneous in some regions over particular periods of time.
The Formation of Matter, Stars and Planets
As the universe continued to expand, it eventually became sufficiently cool for matter to form.[1] The first matter comprised sub-atomic particles, since the universe was still too hot for atoms and molecules to form.
Then, eventually, as the universe continued to expand and cool, atoms[2] and then molecules formed. Matter gravitationally attracts itself[3], so it pulled itself together into gigantic clouds and structures.
Horsehead nebula (photo credit: NASA)
Within those clouds, some matter condensed into spheres of gas. When gravitational contraction caused many of those spheres to heat up sufficiently so that nuclear fusion[4] occurred in their centers, those spheres became stars[5]. Fusion caused those stars to become much hotter and begin to emit large amounts of light. Disks of dust and gas formed around many of those stars.
Dust disk around protostar (photo credit: NASA)
Some of those dust particles stuck together due to gravity and heat, forming larger and larger clumps. Gravitational attraction between these clumps and gasses resulted in the consolidation of increasingly larger rocky and gaseous spheres. Some of those spheres further violently collided together to form planets.
Collision of planetesimals forms Earth’s Moon (photo credit: NASA)
Some of those planets were dominated by rocky components. Eventually, some of them cooled down sufficiently to allow liquid water on their surfaces, yet remained close enough to their stars to prevent all of that water from freezing. Planets such as the Earth formed.
Earth as seen from Apollo 17 (credit: NASA)
Summary
As the Universe progressed in time, tremendous differentiation of temperature, density and structure have developed. Overall, the Universe has expanded, resulting in a decreasing energy and matter density as manifested by a decreasing mean temperature. Yet, in local regions, density and temperature have increased to the point that complex structures, such as stars, have formed and have triggered energy release mechanisms such as nuclear fusion. In between the coldness of the voids of space and the hot stars are planets, which receive sunlight then expel that energy back into space.
Notes & References
[1] As Einstein’s relationship between mass and energy shows, it takes a great deal of energy to form even a small amount of matter, since the speed of light is a large quantity. However, the universe contains a great deal of energy. Energy = mass × (speed of light)², or more familiarly, E = mc².
[2] Most of the initial atoms that formed were of the element hydrogen, with a lesser amount of the element helium.
[3] Hydrogen and helium are the least massive of all the elements. Yet they still have mass, and so are gravitationally attracted towards other matter.
[4] When hydrogen gas becomes hot enough, individual hydrogen atoms combine to form helium atoms. This nuclear reaction releases a tremendous amount of energy.
[5] Stars themselves are part of larger structures called star clusters, such as the Pleiades, which in turn are part of galaxies. Galaxies themselves are part of clusters and super-clusters of galaxies that weave the fabric of the universe.
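The mass-energy relation quoted in note [1] is easy to check numerically. The sketch below is illustrative only; the speed of light is the standard physical value, not a figure from this text.

```python
# E = m * c^2: even a tiny mass corresponds to an enormous energy,
# which is why forming matter required such an energetic early universe.
c = 2.998e8   # speed of light in m/s (standard value)
m = 0.001     # one gram of matter, expressed in kilograms

E = m * c ** 2       # energy equivalent in joules
print(f"{E:.3e} J")  # about 9.0e13 J for a single gram
```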
-
2.1 Energy Balance of the Earth
First published on May 17, 2019. Last updated on June 15, 2024.
Sources of Energy
Sunlight is the chief source of energy for the Earth. Gravitational contraction provides a tiny amount. Tidal interactions with the Moon provide a small but significant amount at the surface. Radioactive decay provides an important source of energy below the Earth’s surface. The burning of fossil fuels can release considerable heat locally (enough to upset ecosystems), but the total amount of heat released is small compared to that from solar and radioactive heating.
Sun photographed in various wavelengths. (Credit: NASA)
Energy in the Atmosphere
The Earth is bathed in sunlight. Some of that sunlight is reflected back into space by the Earth’s surface and atmosphere. The reflectivity of the Earth is called its albedo. Some of the remaining sunlight directly heats up the atmosphere. A small amount is absorbed by processes such as photosynthesis.
Much of the remaining sunlight heats up the Earth’s surface. As the surface temperature rises, the Earth emits increasing amounts of infrared energy. This radiation in turn further heats the atmosphere. Much atmospheric radiation is re-emitted to the Earth’s surface. Some of it eventually makes it to the upper atmosphere and is radiated back into space.
The amount of energy entering and leaving the Earth’s atmosphere is called its energy balance. If more energy enters the Earth’s atmosphere than is emitted, the temperature of the Earth’s atmosphere increases. This is the current situation and is called global warming. Climate change results from global warming.
Energy flows to and from the Earth’s surface and atmosphere (credit: U.S. Govt.)
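The energy balance described above can be sketched with a standard zero-dimensional climate estimate. This is an illustrative calculation only; the solar constant, albedo, and Stefan-Boltzmann constant below are textbook values rather than figures given in this text, and the result ignores the atmospheric greenhouse effect.

```python
# Zero-dimensional energy balance: absorbed sunlight equals emitted infrared.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)
S = 1361.0        # solar constant at Earth's orbit, W / m^2
ALBEDO = 0.3      # approximate fraction of sunlight Earth reflects

# Sunlight is intercepted by a disk (pi r^2) but emitted from the whole
# sphere (4 pi r^2), hence the factor of 4; reflected light never becomes heat.
absorbed = S * (1 - ALBEDO) / 4

# At balance, sigma * T^4 = absorbed flux, giving the effective temperature.
T_eq = (absorbed / SIGMA) ** 0.25
print(f"absorbed: {absorbed:.0f} W/m^2, effective temperature: {T_eq:.0f} K")
```

The result, roughly 255 K, is well below Earth’s actual mean surface temperature (about 288 K); the difference is due to the atmospheric insulation and greenhouse warming discussed above.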
Resources
- NASA Earth’s Energy Budget
- NASA (Scientific Visualization Studio) Earth’s Energy Budget
Further Reading
- NOAA The Earth-Atmosphere Energy Balance
- NASA Climate and Earth’s Energy Budget (more detailed information)
-
2.2 Energy of Life
Last updated on January 19, 2021.
Development of Energy Processes In Life
Energy is essential to the functioning of life. A chief characteristic of life is that it moves, does things and changes. Such activities require energy. Early forms of life on Earth metabolized hydrocarbons in their environment, which were initially abundant. These initial hydrocarbons were limited in quantity and nonrenewable; as they were consumed, they became scarce. Life required a more sustainable energy source to endure.
Sunlight arrived at the Earth in bountiful supply. Plankton and plants formed that could photosynthesize sunlight into sugars, an energy-rich fuel. Animals formed that ate plants or each other for energy. Living organisms can be viewed as a form of engine. An engine requires a potential to operate across. The relative coolness of the environment (ocean, atmosphere) in contrast to the higher energy of sunlight provides such a potential.
Jungle plants (credit: NASA)
Chemical Processes
Energy from photons in sunlight gets photosynthesized into carbohydrates by plants and phytoplankton. Such molecules are composed of carbon, hydrogen and oxygen. Mitochondria are specialized organelles in both plant and animal cells that can metabolize carbohydrates to produce ATP in a process called aerobic respiration. The cell can then use ATP to power its own processes. Waste energy is given off as heat.
Ball and stick model of organic molecule (credit: US NIH)
Further Reading:
- Nature Education, Mitochondria.
Reference:
- Aydin Tözeren, Stephen W. Byers, New Biology for Engineers and Computer Scientists. Pearson Prentice Hall, 2004.
-
2.3 Energy Flows in Ecology
Last updated on June 15, 2024.
Energy flows through ecological networks such as food webs. Generally, sunlight flows into plants that create sugars. Animals eat sugars. Both plants and animals expel heat into their environment.
Marine food web in Alaska (Source: US Govt.)
Food webs are generally energy webs. Energy in the form of high-energy photons flows to plants and phytoplankton, which produce sugars and starches. Other organisms and animals eat plants and phytoplankton to gain energy. Predators eat those animals to gain energy. All plants, animals and other organisms give off some heat into the atmosphere.
The joule is the standard unit of energy, but for food, the calorie and Calorie are often used. A calorie is sufficient energy to raise one gram of water one degree Celsius (1 K). A Calorie (with a capital C) is equal to 1000 calories, and is also known as a kilocalorie.
If the energy leaving a food web is the same as that entering it, then the temperature will generally stay the same (after adjusting for season and weather). However, a food web will typically store some of the energy as biomass, which contains varying amounts of energy. Organism bodies contain some energy, for example in the cellulose that makes up much plant structure, such as cell walls. Proteins also contain energy. Fruits contain considerable energy in the form of sugar, typically 4 Calories per gram. Seeds contain tremendous amounts of energy in the form of oils (typically 9 Calories per gram) and starches.
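A minimal sketch of the unit relationships above. The joule-per-calorie conversion factor, 4.184, is a standard value not given in the text:

```python
J_PER_CAL = 4.184  # joules in one (small-c) calorie, standard value

def calories_to_joules(cal):
    """Convert small-c calories to joules."""
    return cal * J_PER_CAL

def food_calories_to_joules(kcal):
    """Convert Calories (kilocalories, capital C) to joules."""
    return kcal * 1000.0 * J_PER_CAL

# Energy densities cited above: sugar ~4 Cal/g, oils ~9 Cal/g.
sugar_joules_per_gram = food_calories_to_joules(4)  # ~16,700 J per gram
oil_joules_per_gram = food_calories_to_joules(9)    # ~37,700 J per gram
```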
Food webs can also release more energy than is input for relatively short periods of time, such as during forest fires.
Energy typically enters an ecological system in the form of higher energy photons (visible light) and leaves in the form of lower energy photons (though possibly more of them). Yet there are alternatives. Some of the energy may leave in the form of “waste” biomass, such as dead plant structures, dead animal bodies and dead bacteria that get stored in the soil or ocean sediments, and eventually become nutrients, minerals or fossil fuels. Alternatively, some energy may enter an ecosystem in the form of high energy molecules, such as near ocean thermal vents.
Energy typically travels through an ecosystem in the form of chemical energy, such as sugars, carbohydrates, fats and proteins.
Note: only part of physical energy can be utilized by living organisms and in industrial processes. The useful part is called exergy. A related quantity, emergy, is the amount of energy consumed in these processes.
Further Reading
- U.S. Geological Survey (USGS), Food Web and its Function
Reference:
- Aydin Tözeren, Stephen W. Byers, New Biology for Engineers and Computer Scientists. Pearson Prentice Hall, 2004.
-
2.4 Formation and Endurance of Life
Last updated on January 19, 2021.
Recall the Contrasted Universe
Recall that due to expansion, the universe has become much cooler and darker over time. In fact, the typical temperature of the space between stars is nearly absolute zero. We have learned that heat energy tends to flow from warmer places to cooler places, as systems attempt to move towards thermal equilibrium (that is, until their temperatures are the same). Space is much cooler than stars, so energy tends to flow from within stars out into space. This is why we see stars shine.[1]
Planets are typically much cooler than stars. In fact, the temperature of a geologically dead, barren rocky planet would be about the same as space, that is, nearly absolute zero. Shaded areas of moons and planets that lack atmospheres quickly drop to near absolute zero. The side of the planet Mercury that faces away from the sun is such an example.
Solar System showing sun and planets (credit: NASA)
Yet planets orbiting a star receive a significant continuing dose of energy in the form of light emitted from that star, which then warms the planet. The planet then becomes warmer than space, so it must start shedding energy into space. For example, the Earth receives significant amounts of sunlight that warm it. The Earth must then shed some of that energy into space to attempt to move towards thermal equilibrium with space.[2]
Sun-Earth-Space Potential
Heat Engine Analogy to Life
Recall the heat engine example. A heat engine bridges a temperature difference. Heat flows across that difference through the heat engine. Some of that heat energy is converted to work while the rest is exhausted as waste heat. Entropy is produced while the engine continues to function.
Part of the work done by a heat engine can be used to maintain that heat engine. More significantly, part of the work can go to build additional heat engines. These additional heat engines can produce yet more work to produce even more heat engines. The growth of heat engines is then exponential, at least until limiting factors come into play. This is a key point. Because heat engines can beget heat engines, an exponential increase in entropy production can take place.
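The paragraph above can be sketched as a toy simulation. The growth rate and carrying capacity below are arbitrary illustrative values; the point is only that self-replicating engines grow exponentially until limiting factors flatten the curve.

```python
def engine_population(n0, growth_rate, capacity, steps):
    """Simulate heat engines whose work output builds more heat engines.

    Growth is exponential at first; the (1 - n / capacity) term stands in
    for limiting factors such as a finite resource base.
    """
    n = float(n0)
    history = [n]
    for _ in range(steps):
        n += growth_rate * n * (1.0 - n / capacity)
        history.append(n)
    return history

pop = engine_population(n0=1, growth_rate=0.5, capacity=1000, steps=40)
# Early steps multiply the population by roughly 1.5x; late steps level off
# near the capacity. If entropy production is proportional to the number of
# engines, it grows exponentially as well, which is the key point above.
```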
Here, entropy production is proportional to the quantity of heat engines. Fast entropy favors exponential growth in entropy production, so fast entropy favors the “spontaneous” appearance and endurance of heat engines. Under the Second Law alone, the spontaneous appearance of a heat engine is improbable but possible.[3] Fast entropy then utilizes those improbable appearances to create probable, self-sustaining, exponentially growing systems. Some of those systems have developed into what we call life.
Formation of Life
Steps
The motion of atoms and small molecules in a liquid or gas is nearly random. The statistics of these particles is known as statistical mechanics, or more traditionally, thermodynamics. The formation of life from this random motion involves several steps.
- Microscopic structures frequently appear by random chance. For example, atoms can combine to form molecules, and some molecules combine to form larger molecules.
- Even more complex microscopic structures occasionally appear by random chance.
- Some very complex microscopic structures form. Some of those forms will be durable.
- Some of those durable structures will be self-replicating (or they will be replicated by their environment, such as by catalysts). Such structures can be defined as the simplest form of life.
- Durable, self-replicating structures that degrade energy more quickly than their environment will be more probable (they will be favored under the principle of fast entropy). Free energy will tend to be degraded through these structures.
- Where frequent chemical reactions can take place, where structures can be durable, and where there is an available source of free energy (such as from a thermodynamic potential), the existence of the most basic life forms (as defined above) will approach certainty, given the passage of sufficient time.
Life Appears!
Jungle plants (credit: NASA)
Once these steps have occurred, life has developed. One can view life as the residue of random action subjected to the principle of fast entropy.
Summary
The Earth’s surface reflects some sunlight into space. Reflected light results in little entropy production.
However, plants absorb much of the sunlight that would otherwise be reflected into space while maintaining its high energy level.
Plants store some of that energy in the form of biomass. Animals have evolved to consume biomass.
Life As A Faster Path
Life itself can be viewed as the process of heat engines begetting heat engines. Bacteria are an easy example.
Hence, life represents a mechanism to maximize the rate of entropy production. Therefore, life is not due to pure luck; rather, the formation and evolution of life is favored under the eth law.
Intelligence allows life to produce entropy even faster; thus the formation of increasingly powerful brains and intelligence is favored.
Notes and References
[1] That humans should have developed eyes that are particularly sensitive to the peak wavelengths emitted from the star our planet orbits should not be surprising.
[2] As long as the sun shines upon the Earth, the Earth will not reach thermal equilibrium with space. This continuing energy flow between the sun and the Earth maintains a continuing potential.
[3] I. Prigogine has proposed that dissipative structures can appear that increase entropy production. In his terminology, living organisms can be viewed as dissipative structures. Astrobiologist J. Lunine has paraphrased Prigogine’s finding as follows: “complicated systems that are held away from equilibrium and have access to sufficiently large amounts of free energy exhibit self-organizing, self-complexifying properties.” (J. Lunine, Astrobiology: A Multidisciplinary Approach. Pearson Addison Wesley, 2005.)
-
2.5 Statistical and Evolutionary Intelligence
Last updated on February 15, 2020.
Introduction
Reproducing molecules are a far cry from the complex genetic machinery of the living cell. This section will explain how Fast Entropy results in the development of a form of random intelligence known as evolution.
Random Action Recalled
Random action involves a statistically significant number of actors that are free to behave independently of each other in at least one way.
One example of random action is the roll of a die. The results of a large number of rolls should be random. Another example of random action is the movement of molecules in a gas. Even though the gas may have an overall motion, such as in a gust of wind, the individual molecules may be moving in absolutely any direction. Molecules moving about in a liquid may be a reasonable representation of random movement.
Steps in the Development of Random Intelligence
- Random action can “figure out and solve” some problems. Recall the parallel conductor example, in which heat flow was channeled through each conductor in the correct proportion to maximize free energy degradation. The combination of the random actions of many tiny particles[1] within the conductors effectively figures out how to solve this problem and maximize entropy production.
The term “random action intelligence” may seem an oxymoron. A more appropriate-sounding term might be “dumb luck,” or a reference to the proverbial monkey at a typewriter who eventually pounds out Shakespeare. Yet the term “dumb luck” is not accurate here. In reality, random action is not quite random. There are slight asymmetries in the distribution of behavior. It is the combination of these asymmetries with large numbers of nearly random acting actors (such as particles) that produces the intelligent result.
- Some of the durable complex structures (see Chapter 5) developed into RNA[2] and (most likely later) DNA[3], which represent the genetic code and operating instructions for all known living organisms.
- RNA and DNA mutations may themselves involve a significant component of random chance. (Naturally occurring radiation, itself a random phenomenon, may have played a role in this.)
- Most RNA and DNA mutations are of no known consequence, and most of the rest are detrimental or even fatal.
Neutral changes will be passed on but not favored.
Detrimental changes will be disfavored and less likely to be passed on.
Positive changes will be favored and be more likely to be passed on.
- Therefore, the mutations of RNA and DNA can be viewed as a form of random intelligence. Essentially, nature throws the dice again and again until it solves problems (such as maximizing entropy production), given enough time. This process is commonly known as evolution. Typically, considerable time is required.
Fast entropy represents an asymmetry that tilts the random mutations of RNA and DNA in favor of maximizing entropy production. Therefore, the desire to maximize entropy production is essentially the driving desire of each one of our cells. Yet remember, what matters is the maximization of entropy by an entire system. Cells within multi-cellular organisms have specialized. So each such specialized individual cell will act in a manner to maximize entropy production by the organism (or some larger system), and not necessarily in a manner to maximize entropy production as an individual cell.
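The mutate-and-select process described in this section can be caricatured in a few lines of code. This is a deliberately minimal sketch, not the author’s model: the genome is a list of bits, and the sum of the bits stands in for the rate of entropy production that nature is “trying” to maximize.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def entropy_rate(genome):
    """Stand-in fitness: more 1-bits means faster entropy production."""
    return sum(genome)

genome = [0] * 20  # start with a maximally "unfit" genome
for _ in range(2000):
    # Nature "throws the dice": a random point mutation.
    mutant = list(genome)
    i = random.randrange(len(mutant))
    mutant[i] = 1 - mutant[i]
    # Favorable mutations are retained (passed on); detrimental ones are not.
    if entropy_rate(mutant) > entropy_rate(genome):
        genome = mutant

print(entropy_rate(genome))  # reaches the maximum of 20 given enough trials
```

Note how wasteful the search is: most of the 2000 trial mutations are rejected, yet the maximum is still reached, mirroring the slow, mutation-squandering intelligence of evolution described above.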
[1] Typically electrons, if the conductors are metals.
[2] More fully known as ribonucleic acid. RNA is involved in the synthesis of proteins, which in turn form much of the structure and processes of cells.
[3] More fully known as deoxyribonucleic acid. DNA encodes genetic information that is vital for cell and organism reproduction.
-
2.6 Smarter Intelligence
Last updated on February 15, 2020.
Introduction
Recall how the random action of microscopic particles and energy reactions acts as a “brain” to “figure out and solve” problems such as degrading free energy more quickly. Evolution (the random mutation of RNA and DNA) acts to figure out problems related to the endurance of more complicated life, but typically requires considerable time. If faster energy degradation is favored, then it is conceivable that faster means of problem-solving and intelligence will have developed. This section discusses how Fast Entropy encourages the formation of more powerful, efficient forms of intelligence.
Random Action Considerations
Random action intelligence uses considerable amounts of time and is relatively inefficient. Evolution is a form of random action intelligence. Nature literally keeps throwing the dice, producing random genetic mutations. Most are unsuitable and can even be fatal. However, all it takes is one successful mutation to solve a problem, as long as the bearer of that gene reproduces.
Evolution can take many millions of years to solve problems and can result in incomprehensibly large numbers of wasted mutations.
Chemical Signaling
Even simple living organisms have developed chemical signaling that can respond to internal and environmental changes, such as the need for a cell to absorb more oxygen within seconds as compared to millions of years for evolution. Chemical signaling may be sufficiently quick to help an organism decide to move out of the sunlight into shade to keep from overheating. However, chemical signaling may itself be dependent upon evolution to adapt the way it functions, so its short-term abilities apply to only a range of situations, and cannot easily keep pace with unprecedented environmental changes.
Nervous Systems
Nervous systems are electric networks in more complex, multi-cellular organisms. They can perceive and relay information nearly instantaneously across many cells, and so they can make decisions quickly. Yet, their reactions are in the form of reflexes, so that their problem-solving is quite limited and inflexible.
The Development of Brains and Bigger Brains
Nervous systems can further develop so that they can be partially controlled and operated by a computing organ known as a brain. Brains have formed that can make decisions quickly. Such brains can change the way in which decisions are made and can make more complicated decisions. Further, brains can learn, and so are more quickly adaptable. The formation of such brains is favored to the extent that they improve the endurance of their entropy-producing species. Species with bigger brains displace groups of less brainy organisms who degrade free energy less quickly, so there is a thermodynamic push for brain size and capability to grow.
Characteristics of Brains
The simplest brains, such as those of a worm or insect, follow regular patterns of decision making that vary relatively little among members of a species (although there is some variation). However, even for the simplest of organisms that possess a brain, changes in environment and physical characteristics will provide a large range of actions. Imagine a fly deciding which direction to fly. Wind and predators can come from any direction, and so the fly may decide to fly in any direction. Yet, an individual brain does not appear to make decisions randomly, but rather it tends to act in particular ways, with patterns of reaction. These traits are often called habit and stubbornness.
The more complex a brain, the greater flexibility it has to vary its decisions from those of other members of its species. Memory becomes more consciously accessible. Processing becomes more sophisticated. For example, a simple nervous system may respond to one-dimensional changes of light intensity. A sudden change in light might cause a jerking reaction, which may be sufficient to escape from a predator. However, a brain may be able to organize sensations of light and recognize images. Predator versus prey can be distinguished visually. Plans for hunting or escape can be devised and improvised. Certain types of problems can be solved more quickly or with greater sophistication.
Further, organisms with brains have more complicated social interactions, particularly with members of their own species. Brains allow organisms to differentiate between other members of their species, so that organisms become individuals, rather than just another member of their species. Preferences, grudges and hierarchies can be formed, organized and remembered.
So the development of nervous systems allows living organisms (and by inference nature) to solve numerous problems of entropy maximization much more quickly than they could have been solved by mere random intelligence. Therefore the formation of such “smarter” intelligence is favored under the principle of fast entropy. For example, the human brain learned how to make and master fire, which produces entropy from materials such as wood more quickly than mere rotting does. The human brain’s next type of solution, civilization, would really put entropy maximization into the fast lane.
-
2.7 Development of Agriculture and Civilization
Last updated on January 19, 2021.
Introduction
Among the most important events in the progression of humanity are the development of agriculture and civilization. Societies involve collective action among individual organisms, such as individual humans, that tends to increase entropy production by increasing efficiency or by accessing otherwise inaccessible useful energy. Civilization tends to further involve centralization and coordination that increase efficiency even further.
Green field (Credit: US govt.)
From Brains to Civilization
- Although brains do not make random decisions, a collection of brains can exhibit nearly random behavior. Recall that even the simplest brains provide a large range of actions in response to environmental factors. Further, the more complex a brain, the greater flexibility it has to vary its decisions from those of other members of its species.
Admittedly, diverse action is not necessarily purely random. Certainly, brains of a particular species will tend to exhibit similar responses to certain types of events, to the extent that brains are an artifact of evolution. This can be thought of as evolutionary “inertia”. Further, an individual brain does not appear to make decisions randomly, but rather it tends to act in particular ways with patterns of reaction. These traits are often called habit and stubbornness.
Nevertheless, despite the particular habits and stubbornness of individual brains, a collection of brains, especially the highly developed brains of humans, acts in many different ways. For some purposes, a large collection of brains produces a roughly random set of reactions. (Although for other purposes, brains make very similar decisions, such as where “swarm logic” applies.) Random types of decisions can be modeled statistically, in some ways even thermodynamically. In fact, the random aspects of brain decision-making can be used to give predictability to social models.
- Fast entropy still favors the more rapid degradation of free energy. Although an individual brain can make decisions that are highly unencumbered by the considerations of fast entropy, there will still be the subtle pressure of fast entropy on each deciding brain. Therefore a collection of brains will, everything else being equal, tend to make decisions that are consistent with more rapidly degrading free energy. Otherwise, the collection of brains may lack endurance, especially when there are other competing collections of brains.
- Civilization tends to act to more rapidly degrade free energy. It forms organization and develops technology. In fact, civilization often replicates biological structures that themselves increase entropy production. Roads and railroad lines are analogous to blood vessels. Telephone and internet lines are analogous to nerves.
- Civilized, more organized groups of people (who degrade free energy more quickly) tend to displace less civilized, less organized groups of people who degrade free energy less quickly. Thus there is thermodynamic pressure to become more and more civilized. The term civilization here refers to developing a complex, organized, technologically capable society rather than to polite “civilized” behavior. For example, having the complex social structure required to build a nuclear weapon would be considered civilized here, while merely wiping one’s mouth with a napkin after dinner would not, although those traits often do go hand in hand.
Summary
- Civilizations form that consume energy even more quickly.
- Irrigation allows more areas to be covered by plants. Mining allows depletion of energy trapped in fossil fuel biomass. Cities are a complex structure that allows greater concentrations of energy use.
Civilization As An Even Faster Path
- Civilization allows for coordinated behavior that enables a society to produce entropy even more quickly.
- The eth Law can be used as the foundation for a unified social science that can be used to describe any society, whether on Earth or elsewhere.
Agriculture patterns (credit: NASA)
-
3 Fast Entropy and the eth law
Last updated on January 19, 2021.
“The question is not whether nature abhors a vacuum, but how much nature abhors it.”
Introduction
Here we introduce the eth Law of Thermodynamics, known more descriptively as Fast Entropy. (Here, “e” represents the transcendental number e, which is about 2.718. The number e fits nicely between Laws 2 and 3 of Thermodynamics and expresses the importance of the eth Law in numerous cases of exponential growth.)
Physics is a relatively “pure” subject. Physics is not as pure as mathematics. However, the motions and behaviors of subatomic particles exhibit a beauty and perfection reminiscent of the celestial spheres of the ancient Greeks. Newton’s Three Laws likewise brook no ambiguity, and describe a precise ballet of mechanical motion in the vacuum of the planetary heavens.
With this purity in mind, physicists tend to consider thermodynamic systems in terms of before and after a change; thermodynamic processes themselves tend to be “messy”. The state of a system in terms of entropy, temperature and other quantities is compared before and after a change, such as heat flow or the performance of work. By doing so, it is possible to neglect the amount of time required for thermodynamic changes to take place. This works well in physics, and the First and Second Laws of Thermodynamics typically suffice.
However, much of the world is a mess (involving tremendous complexity and uncertainty) and frequently must be studied in less than ideal conditions. Further, in the fields of Physical History and Economics, time is of the essence. Utopian idealism aside, how long changes take can make all of the difference in societies. For example, people can’t wait forever to be fed and late armies will often lose wars.
The element of time must be introduced in order to apply thermodynamics to social science, which is the thrust of this entire book. This chapter will do so.
Fast Entropy As A Unifying Principle
Fast entropy can be used as a unifying principle among both the physical and social sciences. Fast entropy has applications to applied and professional fields as well. A better name for fast entropy could be the eth Law of Thermodynamics.
The eth Law of Thermodynamics states that an isolated system will tend to configure itself to maximize the rate of entropy production.[1]
Heat flow through a thermal conductor example
Most introductory physics textbooks have an example concerning thermodynamics that involves time.[2] Picture a simple thermal conductor through which energy flows from a hot reservoir to a cold one. For this example, the term reservoir refers to a body whose temperature remains constant regardless of how much heat energy flows into or out of it.[3]
Heat flows through a thermal conductor from the hot reservoir to the cold one. The magnitude of that flow is proportional to both the area of the conductor and its thermal conductivity. More heat will flow through a broad conductor than a narrow one. Also, more heat will flow through a material with a high thermal conductivity, such as aluminum, than through one with a low thermal conductivity, such as wood. Heat flow is inversely proportional to the conductor’s length. Thus, more heat will flow through a short conductor than a long one.
Heat flow is also proportional to the difference in the two temperatures that the thermal conductor bridges. This difference in temperatures has nothing to do with the conductor itself. A greater temperature difference will provide a greater heat flow across a given conductor, regardless of the characteristics of that conductor.
Equation for thermal energy flow through a conductor:
\(\frac{\Delta Q}{\Delta t} = k A \frac{\Delta T}{L} \).
where Q is the flow of thermal energy, t is time, k is a constant dependent upon the conductor material, L is the conductor length, A is the conductor area, and \(\Delta T\) is the temperature difference. This equation states how much heat will flow through a conductor, assuming the temperature difference remains constant. So once again, we face an example that is constant with respect to time, but it provides a reasonable starting point.
Electrical engineers will find this equation similar to a rearrangement of Ohm’s Law, where electric current is proportional to voltage divided by resistance:
\(I = \frac{V}{R}. \)
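As a quick numerical check of the conduction law, one can compare conductors of the same geometry but different conductivity. This is only a sketch; the dimensions, temperature difference, and conductivity values below are illustrative, not taken from the text:

```python
def heat_flow_rate(k, area, delta_t, length):
    """Fourier conduction law: dQ/dt = k * A * dT / L.

    k: thermal conductivity (W/m·K), area: cross-section (m^2),
    delta_t: temperature difference (K), length: conductor length (m).
    """
    return k * area * delta_t / length

# Same geometry and a 100 K difference: aluminum conducts far more
# heat than wood, as the text describes.
q_aluminum = heat_flow_rate(205.0, 0.01, 100.0, 0.5)  # about 410 W
q_wood = heat_flow_rate(0.1, 0.01, 100.0, 0.5)        # about 0.2 W
```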
Recalling The Second Law of Thermodynamics
The Second Law of Thermodynamics states that the universe is moving towards greater entropy. Stated another way, the entropy of an isolated system shall tend to increase.[4] A corollary is that a system will approach a state of maximum entropy if given enough time. A system in a state of maximum entropy is analogous to a system in equilibrium.
However, neither the law nor its corollary describes the rate at which entropy shall be produced, nor how long it would take a system to reach maximum entropy.
The eth Law: Fast Entropy
The author has proposed[5] that the Second Law can be extended by stating that not only will entropy tend to increase, but also that it will tend to do so as quickly as possible.[6] (Others, such as A. Annila and R. Swenson, have made similar observations.) In other words, entropy increase will not happen in a lazy, casual way. Rather, entropy will increase in a relentless, vigorous manner. The author calls this extension the eth Law of Thermodynamics[7], or more descriptively, Fast Entropy. A more precise statement of the eth Law is that “entropy increase shall tend to be subject to the principle of least time.” The eth Law gives teeth to the Second Law. It will need those teeth in order to be useful for the social sciences.
Really, though, the eth Law is already widely practiced by astrophysicists and atmospheric scientists. Whether a stellar or planetary atmosphere tends to convect or radiate depends on which results in the greater heat flow. The maximization of heat flow results in the maximization of entropy increase, so this scenario represents the eth Law in action.
Fast Entropy can be used as a unifying principle among both the physical and social sciences. Fast entropy has applications to applied and professional fields as well.
More Precise Statement of the eth Law
The eth Law needs to be stated more precisely to be of much use. A more precise statement is that “entropy increase shall tend to be subject to the Principle of Least Time.” The Principle of Least Time is a general principle in physics that applies to diverse areas such as mechanics and optics. Snell’s Law of Refraction is an example.
Physical Examples
Neither the eth Law nor Fast Entropy will be found in a typical physics textbook, although the subject could be said to fall under the non-equilibrium thermodynamics or transport theory discussed in some texts. Fast Entropy involves an element of change over time that can involve challenging mathematics and measurements. Nevertheless, a few simple examples can be offered to support the validity of Fast Entropy.
One example is heat flow through two parallel conductors, each bridging the same two thermal reservoirs (see figure). No matter what areas, materials or other characteristics comprise the conductors, the proportion of heat that flows through each conductor is always that which maximizes total heat flow. In this case, when total heat flow is maximized, so too is entropy production.
Thermal conductors in parallel
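As a numerical sketch of the parallel-conductor case: each conductor carries its own Fourier flow, the flows add, and the entropy production rate follows from the two reservoir temperatures. The conductor dimensions, materials and temperatures below are illustrative assumptions:

```python
def heat_flow_rate(k, area, delta_t, length):
    """Fourier conduction law: dQ/dt = k * A * dT / L, in watts."""
    return k * area * delta_t / length

def entropy_production_rate(q_dot, t_hot, t_cold):
    """Rate of entropy increase (W/K) for heat flowing hot to cold."""
    return q_dot * (1.0 / t_cold - 1.0 / t_hot)

t_hot, t_cold = 400.0, 300.0  # reservoir temperatures (K), illustrative

# Two parallel conductors bridging the same reservoirs: the total flow
# is simply the sum of the individual Fourier flows.
q_aluminum = heat_flow_rate(205.0, 0.01, t_hot - t_cold, 0.5)
q_steel = heat_flow_rate(50.0, 0.01, t_hot - t_cold, 0.5)
total_flow = q_aluminum + q_steel                          # about 510 W
rate = entropy_production_rate(total_flow, t_hot, t_cold)  # about 0.425 W/K
```

Greater total heat flow directly yields a greater entropy production rate, which is why maximizing one maximizes the other in this configuration.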
Another example is heat flow through conductors in series between a warmer and a cooler heat reservoir (see figures). This example replicates the classic demonstration of the applicability of the Principle of Least Time in optics (Snell’s Law), but uses thermal conductors in place of refractive materials, replacing the entrance point of light with a contact point with a warmer reservoir and the exit point of light with a contact point with a cooler reservoir.[9]
Thermal conductors in series
While heat flow tends to be a nebulous affair, the path of maximum heat flow can nevertheless be ascertained. This can be accomplished by noting paths perpendicular to the isotherms indicated by placing temperature-sensitive color indicator film upon the conductors (below). The greatest color-change gradient represents the path of maximum heat flow. Observations show that the path of maximum heat flow is consistent mathematically with Snell’s Law (which is based upon the principle of least time but usually reserved for light rays). This example is reasonably easy to replicate.
Idealized path of maximum heat flow through conductors in series
A third example is well known to atmospheric scientists. In an atmosphere where heat is flowing from a warm planetary or stellar surface, whether thermal radiation or convection occurs tends to depend upon whichever produces the greater heat flow. Whichever produces the greater heat flow tends to produce entropy most quickly.
A Heat Engine Begetting Heat Engines
The work done by heat engines can be used for human activities. Part of it can be used to maintain the heat engine. More significantly, part of the work can go to build additional heat engines. These additional heat engines can produce more work to produce even more heat engines. This idea is pictured here (see figure). The growth of heat engines is then exponential, at least until limiting factors come into play. This is a key point. Because heat engines can beget heat engines, an exponential increase in entropy can take place.
Heat engines begetting heat engines
Here, entropy production is proportional to the quantity of heat engines. Fast entropy favors exponential growth in entropy production, so fast entropy favors the “spontaneous” appearance and endurance of heat engines. Under the Second Law alone, the spontaneous appearance of a heat engine is possible, but improbable. Fast entropy then utilizes those improbable appearances to create self-sustaining, exponentially growing systems.
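The begetting dynamic above can be sketched numerically. This is a minimal sketch, assuming a fixed begetting rate and no limiting factors yet; the function name and all numbers are illustrative:

```python
# Each engine's surplus work builds new engines, so the population
# (and hence entropy production) grows exponentially until limiting
# factors appear.
def engine_population(n0, beget_rate, steps):
    """Return the engine count after each period, where every engine
    begets `beget_rate` new engines per period (no limits yet)."""
    counts = [n0]
    for _ in range(steps):
        counts.append(counts[-1] + beget_rate * counts[-1])
    return counts

pop = engine_population(1.0, 0.5, 4)
# pop == [1.0, 1.5, 2.25, 3.375, 5.0625]: each period multiplies by 1.5
```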
Emergence of Complex Dissipation Structure
When systems are far out of equilibrium, there is a tendency for complex structures to form to dissipate potential (Prigogine, ___). Such a process is an example of fast entropy. Atmospheric convection structures are examples of complex dissipative structures. Convective structures tend to form where convection results in greater thermal energy transport from the surface of the Earth to its upper atmosphere than does simple radiation. Storm systems, tornadoes and hurricanes are further examples. The spiral arms of galaxies are similar in appearance to those of hurricanes. This is no coincidence, since the spiral structure of galaxies also results in greater production of entropy (see paper from Naval Observatory astrophysicist ____).
Rising cloud column (credit: NOAA)
Applicability of Fast Entropy to Life and Social Sciences
If Fast Entropy is a fundamental tendency in physics that applies especially to living organisms, life would have evolved to produce entropy in a manner consistent with the Principle of Least Time. Evolution is quite similar to statistical mechanics. It finds the answer it is seeking by rolling the dice an unimaginable number of times. Statistical mechanics, including thermodynamics, operates most reliably upon systems of many components. Evolution likewise requires a sufficiently large population to operate upon. Endangered species are especially at risk, because their populations often become too small to support the evolution of the species, making them especially vulnerable to change. Evolution is whatever survives the “dice throwing” in response to environmental change. Successful mutations outsurvive non-mutants and other mutations to multiply and dominate their environment.
In thermodynamics, the Second Law statistically allows small regions of lower entropy. Most of these regions will quickly disappear due to the random motion of molecules. However, a rare few of these regions, by pure statistical chance, will be able to act as heat engines and will increase overall entropy (despite their own lower entropy). If these rare, entropy-creating regions can reproduce, then they will be favored by fast entropy, and will come to dominate their region. Certain chemical reactions are examples, and from chemistry comes life.
So then, life can be viewed as a literal express lane from lower to higher entropy. Although living organisms comprise regions of reduced entropy, they can only maintain themselves by producing entropy. Life has produced a diversity of organisms in order to maximize entropy production with respect to time. For example, if one drops a sandwich in a San Francisco park, a dog will rush by to bite off a big piece of the sandwich, then large seagulls will tear away medium-sized pieces to eat. Smaller birds will eat smaller pieces, and insects and bacteria will consume smaller pieces yet. If only one or two of those organisms existed, pieces of certain sizes could not easily be consumed. If they couldn’t be consumed, they could not be used to increase entropy.
Humans are living organisms and do their part to contribute to maximizing entropy production with respect to time. In fact, the more complex, structured and technologically advanced human civilization becomes, the faster it creates entropy. It is true that cities and technology themselves represent regions of lower entropy, but only at the cost of increased overall entropy.
Further Applications
There are both physical and social applications for Fast Entropy.[8] Physically, Fast Entropy might be used to improve heat distribution and removal. Socially, Fast Entropy drives Hubbert curves. Further, Fast Entropy might be used to determine key parameters of Hubbert curves and constraints upon them.
Fast Entropy analysis requires that some indication of entropy production with respect to time be determined. An exact determination might prove to be difficult, but comparisons of entropy production are easier. For example, if people consume a known mean number of calories, then the more people a regime has, the more entropy it produces. Most historic regimes have a sufficiently low level of technology that this type of analysis is quite practicable.
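The calorie-based comparison described above can be sketched as a rough per-regime entropy proxy, assuming each person dissipates a mean dietary energy at roughly body temperature. The populations, the 2000 kcal figure, and the function name are illustrative assumptions, not data from the text:

```python
KCAL_TO_JOULES = 4184.0   # 1 kilocalorie in joules
BODY_TEMP_K = 310.0       # approximate human body temperature (K)

def daily_entropy_proxy(population, kcal_per_person=2000.0):
    """Rough entropy production per day (J/K): dietary energy
    dissipated by the population, divided by body temperature."""
    energy_joules = population * kcal_per_person * KCAL_TO_JOULES
    return energy_joules / BODY_TEMP_K

# Comparing two hypothetical regimes: only the ratio matters, so the
# crude assumptions largely cancel out.
ratio = daily_entropy_proxy(5_000_000) / daily_entropy_proxy(1_000_000)
# ratio is about 5: five times the population, five times the proxy
```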
Conclusions and Future Research
Fast Entropy can be used in history as a criterion of success for a regime. Was a regime overtaken by another regime that was able to produce more entropy more quickly? In economics, Fast Entropy can be used to study the progress of a regime along its Hubbert curve, and to infer factors such as efficiency, economic centralization and wealth distribution. Fast Entropy can be a powerful tool for the analysis of proposed social policy. However, an important issue to be investigated is whether and how the value of entropy production needs to be weighted with regard to its distance in time.
Notes & References
[1]However, the behavior of systems at the atomic level can vary from that discussed in this chapter.
[2]One can infer the passage of time by multiplying the calculated heat flow by time. However, this example is not really time dependent. The heat flow remains constant regardless of how much time passes in this idealized example. It is nevertheless a good approximation for many real situations.
[3]Heat flow is also proportional to the difference in the two temperatures that the thermal conductor bridges. This difference has nothing to do with the conductors themselves. Heat flows through a thermal conductor in proportion to the area of the conductor as well as its thermal conductivity. More heat will flow through a broad conductor than a narrow one. Also, more heat will flow through a material with a high thermal conductivity such as aluminum than through a material with low thermal conductivity such as wood. Heat flow is inversely proportional to the conductor’s length. More heat will flow through a shorter conductor than a long one. This is known as Fourier’s heat conduction law.
[4]A more precise definition is that “any large system in equilibrium will be found in the macrostate with the greatest multiplicity (aside from fluctuations that are normally too small to measure).” D. Schroeder, An Introduction to Thermal Physics. San Francisco: Addison-Wesley, 2002.
[5]This proposed extension was anticipated in a talk given by the author to a COSETI conference (San Jose, CA, Jan. 2001, SPIE Vol. 4273), was presented at a talk entitled Hurting Towards Heat Death (Sept. 2002) and appeared in the Fall 2003 issue of the North American Technocrat. Subsequent to this proposal, the author has observed that a form of this extension is already in use by astrophysicists and meteorologists. When modeling atmospheres, their models will tend to choose the form of energy transfer that maximizes heat flow, such as convection versus conduction or radiation. See B. Carroll and D. Ostlie, An Introduction to Modern Astrophysics, 2nd Ed., Pearson Addison-Wesley, 2007, p. 315.
[6]The Second and A Half Law is not well known and therefore is neither generally accepted nor rejected by most physicists. Although the Second and A Half Law is fairly consistent with standard physics, it is primarily intended for use in the applied physical sciences and the social sciences. There is some possibility that this proposed law is flawed. However, it has some merit and is somewhat better than what we have without it.
[7]As stated above, the e in the eth law refers to the transcendental number e, which is about 2.718.
[8]Psychologist and musician Rod Swenson had proposed some elements of this, perhaps as early as 1989. He suggested that a law of maximum entropy production could apply to economic phenomena.
[9] Mark Ciotola, Olivia Mah, A Colorful Demonstration of Thermal Refraction, arXiv, submitted on 21 May 2014.
-
4 Flows and Bubbles
Last updated on February 6, 2021.
Many phenomena in both nature and society can be examined in terms of bubbles and flows. Many can be modeled as a combination of potential, flows, barriers and bubbles.
Flows
In the most general sense, a flow is the continuous transport of something from one place to another. In a more abstract sense, it is the continuous change of a quantity. For a short amount of time, a flow can be caused by inertia. For longer periods, something must drive the flow.
The consumption of potential can drive a flow. The flow can then be said to contribute to the achievement of the potential. The flow can continue indefinitely as long as both the potential and that which flows are steadily replenished. For many purposes, a flow can be viewed as the result of a continuous supply of potential.
The shining of the Sun on the Earth in cold space is a continuous flow of energy that has lasted billions of years. The current of water down the Nile River is another flow that has lasted thousands of years.
Physical Flows
The current of water down the Nile River is a flow generated by gravitational force. Let us examine this. Water flows from higher elevations to lower ones, such as via the Nile. Water in highlands represents a higher gravitational potential than water at sea level. Water flowing downhill consumes (achieves) this potential.
Yet the Nile has been flowing for many thousands of years. How does the water at the high elevations get replenished? Atmospheric storm systems represent complex structures to dissipate potential. Sunlight places powerful amounts of energy at the surface of the oceans and wet land. Storms form to pump this energy more quickly away from the surface into the cold upper atmosphere. The transport of water into the atmosphere and its rain on the Earth’s surface increases the rate of energy transport. (Condensing water vapor in the upper atmosphere releases prodigious amounts of energy into outer space).
Resource and Economic Flows
There are also many physical flows in our economy. The transport of food from farm to city and of mineral from mine to factory represent flows.
Generalizing the Emergence of Structures
We discussed how regimes can emerge from civilizations as dissipative structures to increase entropy production. Here, we generalize the concept of a regime.
Formation of Bubbles
Bubbles emerge when a flow gets blocked. As potential builds up, the force against the blockage increases. Eventually the accumulation and force become so large that the blockage can no longer impede the flow. At this point, the blockage might be partially overcome, or it might be catastrophically destroyed. This is analogous to the formation and popping of a bubble. Another term for a blockage is a “logjam”.
Emergence of Exponential Structures
Limits
In the case of a flow, heat engines will grow exponentially until they reach a limiting efficiency. Heat engine population and entropy production will reach a limit called a carrying capacity.
Thermodynamic Interpretation
Heat engines begetting heat engines results in exponential growth in both the quantity of heat engines and entropy production. Where the magnitude of the potential is fixed, the potential decreases as entropy is produced. As the potential decreases, the efficiency of the heat engines decreases. This decrease in efficiency comprises a limiting factor.
This decreased efficiency decreases the ability of heat engines to do work. Eventually, the total amount of both work and entropy production will decrease. Less work will be available to beget heat engines. If the heat engines require work to be maintained, the number of functioning heat engines will decline. Irreplaceable potential entropy continues to decrease as it gets consumed. Eventually, the potential entropy will be completely consumed, and both work and entropy production will cease.
As this scenario begins, proceeds and ends, a dissipative structure (a literal thermodynamic “bubble”) forms, grows, possibly shrinks and eventually disappears. Entropy production versus time can often be graphed as a roughly bell-shaped curve, giving the graphical appearance of a rising bubble.
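The rise and fall described above can be sketched in a toy simulation: a fixed potential is consumed by a growing population of heat engines whose efficiency falls as the potential is depleted. All parameters and names are illustrative assumptions; this is only a sketch of the bubble dynamic, not a fitted model:

```python
def bubble(potential=1000.0, engines=1.0, growth=0.05, steps=200):
    """Entropy production per step for a bubble: efficiency falls as
    the fixed potential is consumed, while surplus work begets new
    engines."""
    production = []
    remaining = potential
    for _ in range(steps):
        efficiency = remaining / potential        # 1.0 at start, 0.0 when spent
        output = min(engines * efficiency, remaining)
        production.append(output)
        remaining -= output                       # the potential is irreplaceable
        engines += growth * engines * efficiency  # work begets more engines
    return production

curve = bubble()
peak_index = curve.index(max(curve))
# The curve rises, peaks in the interior, then falls away: a roughly
# bell-shaped "bubble" of entropy production.
```

Early on, engine growth dominates and production rises; later, the falling efficiency dominates and production collapses, which is exactly the competition between the two terms in the loop.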
Bubbles Involving Life
Populations of living organisms can experience thermodynamic bubbles. A bacteria colony placed in a dish full of nutrient medium faces a potential of fixed magnitude. Each bacterium fills the role of a heat engine, producing both work and entropy. The bacteria reproduce exponentially, increasing the consumption of potential entropy exponentially. Eventually, it becomes increasingly difficult for the bacteria to locate nutrients[1], decreasing their efficiency. As efficiency decreases, the bacteria will reproduce at a slower rate and eventually stop functioning.
Ultimate Bubbles
Ultimately, all potentials are fixed in magnitude. Possibly, the entire Big Bang and its progression could be viewed as a bubble. In practice, many potentials are renewable to a limited extent. For example, as long as the Sun shines upon the Earth in cold space, a potential will exist there.
Series of Bubbles
As long as a system maintains the ability to produce new heat engines, then instead of a single bubble, there will be a series of bubbles over time. There are several reasons that systems form bubbles instead of maintaining a single flow. Chaos (in the mathematical sense) provides one reason. Another reason is that a series of bubbles may provide an overall higher entropy production rate than a steadier, more consistent rate of production. Heat engines may be able to attain much higher efficiency during a bubble than during a steady state, so that the average production in a series of bubbles may be much higher than during a steady flow, despite the below-average production between bubbles.
Overshoot and the Predator-Prey Cycle
Yet even in the case of a flow, the rate of replenishment will be limited, while the rate of engine reproduction may continue beyond carrying capacity. This can be called overshoot, a systemic “momentum” in a sense. In this case, even the flow can be treated as a substantially fixed (or “conserved,” in the physics sense) quantity. A thermodynamic bubble will form.
A related case where overshoot occurs is the predator-prey cycle, in which the population of a predator overshoots the available prey, reducing the populations of both predator and prey, so that the predator population is always “reacting” to that of the prey. Predator-prey cycles can also be expressed in terms of flows, bubbles and efficiencies.
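The predator-prey dynamic described above is classically captured by the Lotka-Volterra equations. The sketch below integrates them with a simple Euler step; the initial populations and rate constants are illustrative assumptions.

```python
# Minimal Lotka-Volterra predator-prey sketch (Euler integration).
# Parameter values are illustrative assumptions, not fitted to data.

def lotka_volterra(prey=10.0, pred=5.0, a=1.1, b=0.4, c=0.4, d=0.1,
                   dt=0.001, steps=20000):
    prey_hist, pred_hist = [], []
    for _ in range(steps):
        dprey = a * prey - b * prey * pred    # prey reproduce, get eaten
        dpred = d * prey * pred - c * pred    # predators feed, die off
        prey += dprey * dt
        pred += dpred * dt
        prey_hist.append(prey)
        pred_hist.append(pred)
    return prey_hist, pred_hist

prey_hist, pred_hist = lotka_volterra()
# both populations oscillate, with predator peaks lagging prey peaks
```

Each swing of the cycle can be read as a small bubble: the predators overshoot the prey "potential," efficiency (prey per predator) drops, and the predator population crashes before the cycle repeats.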
Notes and References
[1] Or escape toxins produced by the colony.
-
4.1 Resource Bubbles
First published on . Last updated on February 6, 2021.
Introduction
Rise-fall nonrenewable resource consumption functions (“curves”) are examples of resource “bubbles”. M. King Hubbert’s modeling of Peak Oil is the most famous example of a resource bubble. However, that case was inspired by the earlier work of Donnel Foster Hewett regarding regional metal mining.
The essence of a bubble is a build-up of potential that then gets relieved. The key is that there is a critical resource that is not renewable. Any amount that gets consumed cannot be replaced. Once that resource is consumed, it is gone forever. So production must eventually end.
Deposits of gold are an example of built-up potential. Usually consumption of the potential begins slowly, but then grows exponentially. Hence production will grow quickly, but intrinsic efficiency begins to drop, impacting production. Eventually either efficiency drops below the level at which production remains viable, or the entire potential is consumed, and the bubble ends.
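Hubbert modeled production of a fixed resource as the derivative of a logistic curve whose plateau equals the ultimately recoverable quantity. A minimal sketch, with illustrative numbers rather than parameters fitted to any real deposit:

```python
import math

# Hubbert-style production curve: the derivative of a logistic whose
# plateau equals the ultimately recoverable resource Q. All numbers
# here are illustrative assumptions, not fitted to real data.
def hubbert_production(t, Q=1000.0, peak_year=50.0, steepness=0.1):
    e = math.exp(-steepness * (t - peak_year))
    return Q * steepness * e / (1.0 + e) ** 2

years = range(101)
production = [hubbert_production(t) for t in years]
# production is symmetric about peak_year, and its cumulative total
# over all time approaches Q
```

The curve grows exponentially at first, peaks when roughly half the resource has been consumed, then declines symmetrically, which is the rise-fall "bubble" shape described above.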
Regional Example—San Juan Mining Region
To apply bubble analysis, the region considered must be large enough to initially support many mines. The San Juan region of Colorado is such a region, and the mining society that developed there is a suitable example of a single historical regime. Since precious metals are a nonrenewable resource, they can be said to be conserved for a given region (only a fixed amount of the resource will ever exist). In other words:
Potential consumption + cumulative production = constant
at any point of time. Potential consumption equals that constant before exploitation begins.
The San Juan mining region of Colorado produced gold and silver from dozens of mines, around which towns and communities eventually developed. Mining began as early as 1765. Its heyday was between about 1889 and 1900. There is again mining of miscellaneous minerals, but not much in gold, which was the primary economic driver of the “great days”. The region is now used primarily for recreation and some agriculture. (Smith, 1982)
Spanish gold mining of placer deposits (native pieces of nearly pure gold found on the surface) took place between about 1765 and 1776. Some mining took place in 1860, but it was interrupted by the U.S. Civil War. At this point, “only the smaller deposits of high-grade ore could be mined profitably.” Mining slowly started again in 1869, and there were 200 miners by 1870. An Indian treaty was negotiated in 1873, which removed a major obstacle to an increase in mining. (Smith, 1982)
By 1880 there was nationally a “surplus of silver; pressures to lower wages; labor troubles.” In 1881 railroad service was established, resulting in a “decline in ore shipping rates.” By 1889, $1 million[1] in gold and silver was being produced each year (for one particular sub-region), and English investors had come to control the major mines. The 1890 production total for the San Juans was $1,120,000 in gold and $5,176,000 in silver; in 1899 the region produced $4,325,000 in gold and $5,377,000 in silver. (Smith, 1982)
By 1900, the region began to take on more of the characteristics of a settled community; there was a movement for more “God” and fewer “red lights.” By 1909, “the gilt had eroded”: dilapidation set in and population decreased. In 1914, production fell greatly due to decreased demand from Europe (because of World War I), and the region lost workers. Farming became more important to the local economy than mining, and recreation and tourism revenues became the only bright spot for many mining towns. Silver and gold mining had all but ceased by about 1921. (Smith, 1982)
Here, the end of mining has a fairly clear cut-off date. However, the beginning of mining seems to have stretched over a longer period of time, during which mining levels were quite small. [The curve was previously modeled using a Maxwell-Boltzmann distribution, which was a more empirical approach. Originally, only a few data points were readily available (Smith, 1982), but digitization of sources, even if just scans, has made much more data available.]
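One reason a Maxwell-Boltzmann shape was tried is that it starts at exactly zero and rises slowly before its peak, matching a region where early production was small and stretched out. The sketch below uses a made-up start year, width and scale; none of these values is fitted to the actual San Juans data.

```python
import math

# Maxwell-Boltzmann-shaped production curve: exactly zero before the
# start year t0, then a slow rise, a peak, and a long-tailed decay.
# t0, a, and scale are illustrative assumptions, not fitted values.
def mb_curve(t, t0=1860.0, a=15.0, scale=1.0):
    if t <= t0:
        return 0.0
    x = t - t0
    return scale * x ** 2 * math.exp(-x ** 2 / (2 * a ** 2))

years = list(range(1850, 1941))
output = [mb_curve(y) for y in years]
# the peak falls at about t0 + a * sqrt(2), here roughly 1881
```

Unlike a Gaussian, which is symmetric and never exactly zero, this shape has a definite beginning point, which addresses the beginning-point problem noted above.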
Deviations in the curve occurred due to random events; social, economic and logistic “turbulence”; business cycles; and major external events such as the U.S. Civil War.
Colorado San Juans gold production versus model
An EDEG model was created (in 2010) for U.S. domestic petroleum extraction (see below). Actual data exceed the model both before and after the peak. Parameters were set to match the peak, but could have been adjusted for less error elsewhere at the expense of greater peak error. This model used an older version of EDEG than the most recent San Juans model, so the overall plot is not as well matched.
EDEG model for US petroleum production up to 2008.
Another example, on a multi-continental basis, is gold and silver production in the areas of the Americas controlled by Spain, primarily during the Habsburg dynasty. An EDEG model was produced for that period. This model used a cruder version of EDEG than the most recent San Juans model, so the peak is not as well matched.
Silver and gold exports from New World versus model (data from Gibson, 1966)
References
Ciotola, M. 1997. San Juan Mining Region Case Study: Application of Maxwell-Boltzmann Distribution Function. Journal of Physical History and Economics 1.
Ciotola, M. 2001. Factors Affecting Calculation of L, edited by S. Kingsley and R. Bhathal. Conference Proceedings, International Society for Optical Engineering (SPIE) Vol. 4273.
Ciotola, M. 2003. Physical History and Economics. San Francisco: Pavilion Press.
Ciotola, M. 2009. Physical History and Economics, 2nd Edition. San Francisco: Pavilion of Research & Commerce.
Ciotola, M. 2010. Modeling US Petroleum Production Using Standard and Discounted Exponential Growth Approaches.
Gibson, C. 1966. Spain in America. New York: Harper and Row.
Hewett, D. F. 1929. Cycles in Metal Production, Technical Publication 183. New York: The American Institute of Mining and Metallurgical Engineers.
Hubbert, M. K. 1956. Nuclear Energy and the Fossil Fuels. Houston, TX: Shell Development Company, Publication 95.
Hubbert, M. K. 1980. “Techniques of Prediction as Applied to the Production of Oil and Gas.” Presented to a symposium of the U.S. Department of Commerce, Washington, D.C., June 18-20.
Mazour, A. G., and J. M. Peoples. 1975. Men and Nations, A World History, 3rd Ed. New York: Harcourt, Brace, Jovanovich.
Smith, D. A. 1982. Song of the Drill and Hammer: The Colorado San Juans, 1860–1914. Colorado School of Mines Press.
U.S. Energy Information Administration
-
4.2 Economic Bubbles
First published on . Last updated on January 19, 2021.
Here we discuss economic regimes, more commonly known as “bubbles”.
Economic Flows
Food and mineral flows also represent economic flows. So do transfers from one group to another, such as from parents to children, workers to retirees, or exporters to importers. Flows often work in at least two directions. For example, goods and services flow from an exporter while money flows from an importer. Many trade partners engage in both importing and exporting with each other.
Financial Flows
Economic flows can be abstracted into financial flows, such as an annual market demand or income. An example of a steady income flow is called an annuity.
Direct Logistic or Gaussian Approach
It is traditional to model growth with one of two types of curves: the pure exponential growth curve or the logistic growth curve. Since most new businesses plan for three to five years, this is a reasonable approach.
All things end sooner or later, so it might make more sense to model growth with a Gaussian or Maxwell-Boltzmann distribution. However, most businesses don’t like to plan for a downturn. Yet for particular products, or for businesses in industries where the typical lifetime may be only a few years or a single season, either of these curves may be superior to the pure exponential or logistic approaches.
Beginning Point
None of these approaches has a clear beginning point, mathematically speaking. The pure exponential, logistic and Maxwell-Boltzmann curves can be assigned a somewhat arbitrary beginning point without much difficulty.
Assigning a beginning point to the Gaussian curve can prove more challenging. A fair approach is to initially establish a pure exponential growth curve, then later fit a Gaussian to that curve.
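One way to carry out such a fit is to match the Gaussian's rising flank to the exponential in both value and slope at a chosen switchover time. The sketch below does this analytically; the growth rate, switchover time and bubble width are all illustrative assumptions.

```python
import math

# Sketch: grow exponentially at first, then fit a Gaussian whose rising
# flank matches the exponential in value and slope at a chosen time.
# A, k, t_match and sigma are illustrative assumptions.
A, k = 1.0, 0.2          # exponential: A * exp(k * t)
t_match = 10.0           # time at which the Gaussian takes over
sigma = 8.0              # assumed width (duration) of the bubble

# slope-match condition: the Gaussian's log-derivative (t_peak - t)/sigma^2
# must equal k at t_match, which fixes the peak time
t_peak = t_match + k * sigma ** 2
# value-match condition then fixes the peak height
peak = A * math.exp(k * t_match) * math.exp(k ** 2 * sigma ** 2 / 2)

def exponential(t):
    return A * math.exp(k * t)

def gaussian(t):
    return peak * math.exp(-(t - t_peak) ** 2 / (2 * sigma ** 2))

# at t_match the two curves agree in both value and slope
```

This gives a bubble curve that looks exactly like the early exponential where data exist, while still supplying a peak and a decline for forecasting.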
Pure Exponential Growth Phase
Sometimes it is hard to determine the parameters soon enough to make useful forecasts, though there are imperfect ways to handle this. If a business has a great product and there is strong demand for it, the question is how quickly the business can expand to meet that demand. If the expansion cost and resultant speed can be calculated, then a model exponential growth curve can be generated, assuming that the business will expand as quickly as possible and that the growth of the business at a particular time will be proportional to its size at that time. If the business can only expand linearly, then a linear model must be generated instead.
Leveling or Decline Phases
Eventually, limiting factors will level off growth and may even cause a decline in business. A logistic curve is appropriate for a product or business that will have relatively long-term, stable sales, such as a popular soft drink. For products that will have a known or likely decrease, a Gaussian or Maxwell-Boltzmann curve can model both the growth and the decline.
Efficiency Approach
A more fundamental approach is to use efficiency data for modeling. This approach works better if similar cases are available for comparison, so that reasonable parameters for reproduction costs and efficiency can be proposed at an early phase, making reasonable forecasts possible. This approach is similar to modeling a single historical regime.
Two Places to Begin—Relation to Supply and Demand
There are two places to begin using the efficiency approach. One way is to determine the total lifetime sales of the product or business (take the raw value, not the Net Present Value-discounted value). If you can then determine what the peak sales amount will be, along with the beginning and end dates of the business, you can treat efficiency as a linearly decreasing quantity. (This is not entirely accurate, particularly for the beginning and end of the lifecycle, but it can be a reasonable approximation.)
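As a rough sketch of this first way, assume capacity expands exponentially while efficiency declines linearly from one at launch to zero at the end date, with sales equal to capacity times efficiency. The growth rate and lifetime below are illustrative assumptions, not derived from any real business.

```python
# Sketch of the efficiency approach: capacity grows exponentially while
# efficiency declines linearly from 1 at launch to 0 at end-of-life.
# Sales ~ capacity * efficiency, giving a rise-and-fall curve.
# The growth rate and lifetime are illustrative assumptions.
def sales_curve(lifetime=40, growth=0.15):
    sales = []
    capacity = 1.0
    for t in range(lifetime + 1):
        efficiency = 1.0 - t / lifetime      # linear decline to zero
        sales.append(capacity * efficiency)
        capacity *= 1.0 + growth             # exponential expansion
    return sales

curve = sales_curve()
# sales rise while expansion outpaces the efficiency decline,
# peak, then fall to exactly zero at the end date
```

Summing `curve` gives the total lifetime sales, so the free parameters can be tuned until the total, the peak and the end date match whatever estimates are available.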
A perhaps better, but more complicated, way is to first model demand for the product as a series of classical economic demand curves over time, then determine a series of classical supply curves over time. This will tell you the sales revenue and volume over time. The trick is to use fast entropy and thermodynamic efficiency to model how the supply and demand curves will change over time. Fast entropy will cause the supply curve to fall: as the business develops, it will likely increase production capacity, so that it can afford to sell more at a lower per-unit price.
However, as time goes on, the demand curve will also fall as growth levels off or declines when the market becomes saturated. It is also possible that there will be limits to how much production can grow if a required resource becomes scarcer (and thus more expensive), so that the supply curve can only fall so far. These events represent decreasing thermodynamic efficiency. Thermodynamic efficiency should be differentiated from empirical efficiency, which may be due to such factors as economies of scale. In fact, it is often falling thermodynamic efficiency that requires increased economies of scale to meet demand at sufficiently low prices. This is one reason why there is often consolidation in maturing industries.
Modeling Macroeconomic Business Cycles
It is possible to use this approach to model entire macroeconomic business cycles. (Despite their name, these cycles are really thermodynamic bubbles.)
US Adjusted GNP Example
The figure below shows US adjusted GNP for 1993-2013 (the scale is nominal), a little “sizzle” plot of the U.S. economy during that period. Long-term trends have been stripped out of the data. The figure shows the dot-com bubble peaking around 2001 and the housing bubble peaking around 2006. These bubbles are not just random occurrences, for they share a similar structure. There is an underlying thermodynamic potential. An engine of GNP growth forms to bridge that potential, such as firms that can create or take advantage of new computing technology or of a relaxation of banking standards. At the beginning of the bubble, the potential is high, so that exploitation can take place at a high thermodynamic efficiency. However, as potential is consumed, the amount of potential decreases, so efficiency necessarily drops. At the same time, old firms are expanding and new firms are being formed, resulting in an increasing number of “heat engines” to consume potential.
US Economy Sizzle Index 1993-2013
Once formed, these firm heat engines remain “hungry.” They need to consume potential to survive, and they very badly want to survive and grow. So they keep growing, even though potential is decreasing. Eventually, the potential (and therefore thermodynamic efficiency) drops so low, and there are so many heat engines, that most of the heat engines can no longer support themselves. Chances are that industry overshoot has occurred (i.e., formation of too many hungry heat engines), and a crash occurs. This cycle usually repeats itself for each macroeconomic business cycle bubble, although the chief industries involved may vary among bubbles. The bubble itself apparently increases the overall entropy production of a society, which is consistent with the principle of fast entropy.
-
4.3 Business Bubbles
First published on . Last updated on May 17, 2019.
Bubbles Involving Business
Businesses are interesting cases to study. There are many businesses, both large and small. Some are long-lived, many are short-lived. They utilize many different types of opportunities. They all involve people. Many involve money, which is quantifiable and often the figures are recorded.
Many businesses can also be represented as bubbles, using the approach of heat engines (or collections of heat engines). A business faces a new market opportunity of fixed magnitude. Businesses exploit the market opportunity, producing both work and entropy. The business or its industry reproduces exponentially, increasing the consumption of potential entropy exponentially. Eventually, it becomes increasingly difficult for the business or industry to locate new customers or orders, resulting in increased competition and decreased margins, hence lower efficiency. As efficiency decreases, the business will expand at a slower rate and eventually stop functioning.
Lifecycle of a Business
Some businesses can endure for longer than many dynasties. Yet most businesses go through a common sort of life cycle. They usually start small, founded by an innovative entrepreneur. They get bigger and become efficient, but start getting institutionalized. Eventually the overhead of their bureaucracy becomes more of a drag than a help to overall efficiency. At the same time, the company has a harder time adapting to change. Eventually the opportunity the company originally exploited is gone, management can’t adapt, and the company ends.
Companies don’t exist in a vacuum. They are dependent upon their government for law and security, and upon the population for revenues. So a business can find new opportunities, be bought by another business, be regulated out of business, and so on. A business does not progress through the same precise lifecycle as an animal, but there are frequent patterns.
Phases
The Opportunity and Conception
The business opportunity (potential) is identified. The opportunity could be one-time in nature, such as the discovery of a deposit of gold. It could be ongoing, such as a new technology that will be adopted for a long period, such as the commercialization of electricity in the 1800s or the internet.
A means (engine) to exploit it is identified. The means could be a new mechanical invention, the building of a factory or the construction of a mine. The development of the engine will require some initial “seed” resources, and literally has a “start-up” cost, called an initial investment (fixed cost).
The business begins. Revenues are received and marginal costs are incurred.
Growth
The company embarks upon exponential growth, which often starts slowly and then becomes rapid. Either the engine gets bigger or more engines are built. Often profits are reinvested in growing capacity.
Peak
Growth slows as the business approaches its peak, then levels off. Also, both the engines and the developing bureaucracy incur maintenance (overhead) costs.
Decline, Acquisition or Transformation
Eventually the business may decline as the original business opportunity declines or ends. The business might be able to take advantage of new opportunities, or may be bought by another company. It might buy other companies that are better at developing new opportunities.
Modeling Business Growth
Businesses as consumers of limited resources
Businesses can be modeled as consumers of limited resources and therefore as Hubbert curves. A business based upon an oil well or a gold mine is an obvious example, but the limited resource can be intangible. Nearly all businesses are ultimately dependent upon a particular business opportunity that is often in turn dependent upon a limited resource. That limited resource might be satiable customer demand for a highly durable product. It might be a technology niche that has a limited lifetime or marketing window in a rapidly transforming marketplace. Other examples of such resources include intangibles such as goodwill and patents.
Business Development Stages
Businesses tend to develop through fairly well-defined stages: start-up, growth, stalling, acquisition of or by other businesses (or decline and then termination).
Business Modes of Operation
Businesses tend to operate in one of two modes, depending upon their current development stage. Growing start-ups are in an exponential growth (EG) mode, while established businesses move to an exponential decay (ED) mode. Operation in the EG mode is characterized by an emphasis on revenue growth; sources of revenue growth include new products, increased sales and acquisition of other businesses. Operation in the ED mode is characterized by an emphasis on cost cutting; forms of cost-cutting include consolidating product lines, reduced R&D spending, and layoffs. (The case of a stable major airline striving to raise profits by removing one olive per salad served is such an example.) A firm that is experiencing the plateau of a logistic growth curve will tend to oscillate between the EG and ED modes, depending on short-term events.
The transition from the EG mode to the ED mode can be a dangerous time for a business. Sometimes businesses grow too quickly and cannot make a successful transition. Cash shortages and the inability to fulfill customer orders are symptoms. Frequently the founder and the original management will be replaced at this point.
Modeling Business Managers
Some business managers desire growth, so much that they don’t especially mind the ensuing disorder. Other people prefer order and harmony, even at the expense of growth.
Managers who emphasize getting sales, launching truly new products, and even mergers and acquisitions tend to be operating in an exponential growth mode. Managers who attempt to increase profits by focusing on reducing product costs and decreasing workforce size tend to be operating in a plateau or exponential decline mode.
Precautionary Considerations
General statements about a society or a category of people within a society should certainly not be taken to apply to individuals. Individuals tend to have a wide range of freedom to act and don’t fit into most generalizations.
-
4.4 Exponential functions
First published on . Last updated on February 6, 2021.