# Buzz Blog

## Chaos, Fractals, and Complexity: Big Ideas in the “Science of ‘Roughness’”

Monday, September 14, 2020

By: Hannah Pell

“Assume the cow is a perfect sphere.”

Anyone who has taken a physics class or two has probably met the Spherical Cow at some point (or a version of it). How can I calculate the volume of an irregular, misshapen 3-D object like a cow? Well, I’ll smooth out the edges, approximate it as a sphere, and my answer will surely be close enough.

To find solutions for introductory physics problems such as these, we make particular assumptions about the mechanical system in question. Physicists really like being able to identify symmetries or make approximations (remember all those Taylor series?). Physics is useful for answering questions like: given this initial set of conditions defining some physical system, what could happen over time? Not surprisingly, the more restrictions we can put on the system and the more assumptions we can make, the more dramatically the number of possibilities shrinks. (Thankfully, it’s also very convenient that nearly everything can be modeled as a harmonic oscillator.)
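A tiny illustration of the kind of approximation physicists lean on (a sketch of my own, in Python): keeping only the first term of the Taylor series for sin θ is what turns a pendulum’s hard nonlinear equation of motion into the solvable simple harmonic oscillator.

```python
import math

# Small-angle (first-order Taylor) approximation behind the harmonic
# oscillator: sin(theta) ~ theta when theta is small.
for theta in (0.05, 0.2, 0.5, 1.0):  # angles in radians
    relative_error = abs(math.sin(theta) - theta) / math.sin(theta)
    print(f"theta = {theta:4.2f} rad, relative error = {relative_error:.4f}")
```

The error grows roughly as θ²/6, so the approximation is excellent for small swings and steadily worse for large ones.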

Yet, over the last few decades, mathematicians, meteorologists, biologists, and yes, physicists, too, started embracing Nature’s intrinsic messiness and leaned into interdisciplinary problems that could not be solved using conventional approaches. In doing so, they learned to think about the world in entirely new ways. They developed new concepts, such as chaos and fractals. They found that they could successfully describe a wide-ranging collection of phenomena, from population growth to coastal distances to Jupiter’s Great Red Spot, if they renounced their intuition to simplify, assume, and reduce. What would come of these efforts is the science of complex systems.

Here are a few short stories behind some of the biggest ideas in chaos and complexity theory.

**Lorenz & The Butterfly Effect**

In the early 1960s, Edward Lorenz was a research meteorologist and faculty member at MIT. He was deeply concerned with the mathematics of forecasting and wanted to know whether long-term weather prediction was possible. One day he left his office to get a cup of coffee while waiting for his computer to finish simulating two months of weather. It was a simulation he had run several days before, although this time he rounded one of the variables from .506127 to .506 to save a bit of computational space (a precious resource back then). When he came back, fresh coffee in hand, he found that the results were astonishingly different from what he’d seen before. Groundbreaking, in fact.

What Lorenz saw was that the small change he made to just one of twelve variables caused a massive change in the output of the simulation. Such a sensitive dependence on initial conditions — which is often described as the “Butterfly Effect” — is one of the defining features of chaotic systems. He published his results in a 1963 paper titled “Deterministic Nonperiodic Flow” in the Journal of the Atmospheric Sciences. As of this year, Lorenz’s paper has been cited more than 22,000 times. Lorenz’s discovery was an important moment in the development of modern chaos theory.
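Lorenz’s observation is easy to reproduce with the three-variable system from his 1963 paper (a minimal sketch, not his original twelve-variable weather model; the step size and initial values here are my own choices):

```python
import numpy as np

def lorenz_trajectory(x0, y0, z0, steps, dt=0.005,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps and
    return the x-coordinate at each step."""
    xs = np.empty(steps)
    x, y, z = x0, y0, z0
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xs[i] = x
    return xs

# Two runs that differ only by a Lorenz-style rounding of one variable.
a = lorenz_trajectory(1.0, 1.0, 1.50617, steps=4000)
b = lorenz_trajectory(1.0, 1.0, 1.506, steps=4000)

print(abs(a[100] - b[100]))    # early on, the runs are nearly identical
print(np.max(np.abs(a[-1000:] - b[-1000:])))  # later, they have fully diverged
```

The tiny rounding error grows exponentially until the two “forecasts” bear no resemblance to each other, which is exactly the sensitivity Lorenz stumbled on.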

A small change causes extremely different results. Image credit: “Chaos at fifty” in Physics Today.

“When our results concerning the instability of nonperiodic flow are applied to the atmosphere, which is ostensibly nonperiodic, they indicate that prediction of the sufficiently distant future is impossible by any method, unless the present conditions are known exactly. In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent,” he concluded.

**Mandelbrot’s Fractals**

While mathematician Benoit Mandelbrot was working for IBM, he (somewhat accidentally) became interested in the long-term behavior of cotton prices. Cotton prices, like other market phenomena, behaved in an unusual way: they didn’t follow a normal distribution. In fact, they seemed almost random.

But Mandelbrot started looking at patterns across all scales, taking a macroscopic view of the history of cotton prices stretching back to 1900. And he saw patterns that were characteristic of the whole. It wasn’t a matter of comparing one year of cotton prices to the next; rather, the degree of variation in prices over the entire sixty-year period was constant. The consistency was hidden in how the prices changed, not in the changes themselves.

Patterns that occur across scales are known as fractals. Mandelbrot also found fractals hidden in a very different problem: measuring the length of a coastline. He wondered, if you attempted to measure a coastline with an infinitely small measuring stick, would you get a finite result? In fact, you wouldn’t. (So next time someone tells you they enjoy long walks on the beach, ask them if they really know what they’re in for). His results were published in a famous 1967 paper in Science: “How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension”.
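The arithmetic behind the paradox can be seen with the Koch curve, a textbook fractal coastline (my choice of toy example; Mandelbrot’s paper deals with real coastline data): every time the ruler shrinks by a factor of 3, the measured length grows by a factor of 4/3, without bound.

```python
import math

# Measuring a Koch-curve "coastline" with ever-smaller rulers.
# At refinement level n the ruler has length (1/3)^n and the measured
# length is (4/3)^n -- it grows without bound as the ruler shrinks.
for n in range(6):
    ruler = (1 / 3) ** n
    measured = (4 / 3) ** n
    print(f"ruler = {ruler:.5f}, measured length = {measured:.3f}")

# The fractal dimension of the Koch curve is log 4 / log 3,
# strictly between a line (dimension 1) and a plane (dimension 2).
print(f"dimension = {math.log(4) / math.log(3):.4f}")
```

That fractional dimension, a number between 1 and 2, is exactly the “fractional dimension” in the title of Mandelbrot’s 1967 paper.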

“Seacoast shapes are examples of highly involved curves such that each of their portion can—in a statistical sense—be considered a reduced-scale image of the whole,” the paper begins.

Fractals describe many other things: the structure of snowflakes or crystal growth, turbulence, galaxy formation, and more. To Mandelbrot, they encompass a “theory of roughness.” In his book, The Fractal Geometry of Nature, he wrote: “Clouds are not spheres, mountains are not cones, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.” When faced with problems with no obvious symmetries or opportunities to make reductions, Mandelbrot developed a new way to comprehend the seemingly incomprehensible.

A fractal coastline. Image credit: math.yale.edu.

**Complexity Theory & The El Farol Bar problem**

Consider a situation that most of us have probably experienced. There’s a recurring event that you want to go to — let’s say that The Clash is playing every week at a local bar. The problem is that there is only a finite number of seats. It might end up being too crowded and you have to stand. It also might not end up being crowded and you’ll have a guaranteed seat. These outcomes depend on some finite number of other people asking themselves the same question: should I stay or should I go?

This scenario is outlined in the El Farol Bar problem, a famous conundrum in complexity theory. Unlike chaotic systems, in which disorder is localized, complex systems change unpredictably over time. The problem was articulated in 1994 by economist W. Brian Arthur and is based on a real bar in Santa Fe, New Mexico (although the problem was originally proposed several years earlier under another name). The problem is stated as follows: if fewer than 60% of people go to the bar, they’ll have more fun than if they had stayed home, and vice versa for more than 60%. The catch is that everyone must decide at the same time whether or not to go, with no knowledge of anyone else’s choices.
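A minimal sketch of the dynamics (the agent memory lengths, seed history, and averaging rule below are my simplifications of Arthur’s “bag of predictors”, not his actual model):

```python
import random

random.seed(1)

N, THRESHOLD, WEEKS = 100, 60, 52

# Each agent forecasts next week's attendance as the average of the last
# m weeks, where m differs from agent to agent -- a crude stand-in for
# Arthur's idea that different agents carry different predictors.
memories = [random.randint(1, 5) for _ in range(N)]
history = [random.randint(0, N) for _ in range(5)]  # seed attendance data

for week in range(WEEKS):
    # An agent goes if it predicts attendance below the comfort threshold.
    attendance = sum(
        1 for m in memories if sum(history[-m:]) / m < THRESHOLD
    )
    history.append(attendance)

weekly = history[5:]
print(f"mean attendance: {sum(weekly) / len(weekly):.1f} (threshold {THRESHOLD})")
```

Even this toy version never settles into a steady attendance: if everyone expected the same thing, they would all go (or all stay) and immediately prove themselves wrong, which is the self-defeating logic at the heart of the problem. In Arthur’s richer model, mean attendance hovers near the threshold.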

The El Farol bar problem is just one example of a collection of people who don’t have full information about a situation within which they take part but nevertheless have to make some kind of decision about doing so. Everybody who considered going to the concert is part of the system, so the possibility of securing a seat depends entirely on what everyone else decides to do. There are many other everyday experiences just like this one — making the decision to drive on the road at a certain time or to buy and sell stock. Usually the results are random, but occasionally the systems will self-organize and emergent phenomena will arise. If too many people decide to drive at the same time — rush hour, perhaps — then traffic jams will result. If too many stocks are traded in a short period of time, a crash could follow. Strangely, disorder becomes ordered.

The El Farol bar in Santa Fe, New Mexico. Image credit: Wikipedia.

Traffic jams can be understood as emergent phenomena. Image credit: smartmotorist.com

**Symmetries versus Roughness**

Chaos and complexity have found their way into physics, but so recently that the field is still very much in a “pre-paradigmatic stage.” “What is our Hamiltonian? What is our Schroedinger equation?” physicist and complex systems theorist Neil Johnson recently asked; his research intertwines complexity with physics to understand the spread of online extremism.

The simple, isolated systems we encounter throughout introductory physics — blocks on ramps, balls rolling across a pool table, apples falling from trees — show us that behaviors we can observe are predictable and patterned. Physics successfully and accurately describes so much of the world around us, from the microscopic to the cosmos. Yet so much of daily life happens "far from equilibrium." These problems still remain a mystery, but by using ideas from chaos and complexity theory, maybe they won’t stay that way for much longer.

**Further reading:** *Chaos: Making a New Science* by James Gleick (1987)