Nowadays, some segments of the prediction business, such as weather
and economic forecasting, are cunningly marketed as "scientific forecasting"
because they make extensive use of mathematical models. But since I am quite
certain that neither these models nor the predictions made with them could ever
be considered reliable enough to serve as evidence, in a court of law, of
knowledge of what the future holds, I have always been puzzled that mainstream
economists, who should know better, use them to this very end. As a consequence,
public policy remains guided by economic forecasters and their models, despite
systematically humiliating results and terrible consequences for human lives and
society.
While astrologers and meteorologists have little influence on public
policy, economists who predict the future occupy strategic positions throughout
the public decision-making process, whether in government or at central banks,
nurturing the enactment of ever more wrongheaded large-scale policies. That
the work of economic forecasters continues to escape close scrutiny is
disturbing at the very least, and it prompted the writing of this paper.
It’s the complexity, stupid!

“Complex: involving a lot of different but connected parts in a way that is difficult to understand.”

–Cambridge Learner’s Dictionary
As I pointed out, economic predictions are based on mathematical
economic models. In science, a model is generally understood as an
abstract representation of a given subject, such as a process or a
system. Mathematical models use mathematical language (data, equations)
to describe their underlying systems, and are found in a variety of
fields, such as physics, economics, biology, meteorology and climatology.
Mathematical economic models are thus abstract representations of an
economy based on available data and equations. Among the outputs of these
models are quantitative predictions of the future state of the economy,
depending on what actions are taken at a given moment, allegedly similar
to the way a model of the solar system can predict the future positions
of the planets given correct present data.
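To make the idea concrete, here is a deliberately trivial sketch (in Python, with an invented equation and invented numbers that bear no relation to any real forecasting model) of what such a model does in principle: present data and an assumed policy action go in, a quantitative prediction comes out.

```python
# A purely illustrative toy "model": one equation, two inputs. Real
# forecasting models use thousands of equations and data series; this
# only shows the shape of the exercise.

def project_gdp_growth(current_growth: float, rate_cut: float) -> float:
    """Hypothetical rule: next year's growth equals this year's growth
    plus a fixed multiple of the assumed interest-rate cut."""
    SENSITIVITY = 0.5  # invented figure, not estimated from any data
    return current_growth + SENSITIVITY * rate_cut

# Given 2% growth today and an assumed 0.25-point rate cut, the toy
# model returns a quantitative "prediction" for next year:
print(project_gdp_growth(2.0, 0.25))  # -> 2.125
```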
Since mainstream economics relies on these tools to formulate
theories and prescriptions, mathematical economic models have, for some
decades now, been the foundation of most advice guiding public policy,
such as the setting of interest rates. Indeed, once one assumes that we
can predict the state of the economy, it follows that we can foresee the
results of our actions and therefore properly manage society. Yet, as
Albert Einstein observed as far back as the early 1940s:
When the number of factors coming into play in a
phenomenological complex is too large, scientific method in most cases fails
us. One need only think of the weather, in which case prediction even for a
few days ahead is impossible. Nevertheless no one doubts that we are
confronted with a causal connection whose causal components are in the main
known to us. Occurrences in this domain are beyond the reach of exact
prediction because of the variety of factors in operation, not because of
any lack of order in nature.(1)
Since that time, complexity theory has gained momentum in a variety
of scientific fields as a way of understanding complex systems, but economic
forecasting seems to have remained impervious to it. We will discuss below what
could become the next paradigm shift in economics.
Einstein was a dazzling thinker on a number of subjects, among them
the limits of science (or the philosophy of science). He was among the first to
identify the peculiar obstacles posed to science by complexity, and on this he
was followed by mathematician Warren Weaver, considered to be a pioneer on the
subject. Then, inspired by Weaver’s work, the first to specifically discuss the
scientific implications of complexity in economics was Friedrich Hayek. In
The Theory of Complex Phenomena (1964), he pointed out that what
distinguishes complex phenomena (such as the economy) from simpler phenomena is
the multiplicity of elements and of their relationships within the system,
coupled with the subjectivity of the data in social sciences that eludes
mathematical formulae.
Nevertheless, complexity theory seemed to gain real traction only in
the 90s, as described by Cornell mathematician Steven Strogatz:
Every decade or so, a grandiose theory comes along,
bearing an ominous-sounding C-name. In the 1960s it was cybernetics. In the
'70s it was catastrophe theory. Then came chaos theory in the '80s, and
complexity theory in the '90s. In each case, the skeptics at the time
grumbled that these theories were being oversold and that the results were
either wrong or obvious. Then everyone had a good laugh and went back to the
lab bench for some more grinding, reductionist science, walled off from
their colleagues in adjoining disciplines, who were themselves grinding away
on their own tiny corners of the universe [...]
What's different now is a feeling in the air. Even the most hard-boiled
mainstream scientists are beginning to acknowledge that reductionism may not
be powerful enough to solve all the great mysteries we're facing: cancer,
consciousness, the origin of life, the resilience of the ecosystem, AIDS,
global warming, the functioning of a cell, the ebb and flow of the economy
[...] What makes all these unsolved problems so
vexing is their decentralized, dynamic character, in which enormous numbers
of components keep changing their state from moment to moment, looping back
on one another in ways that can't be studied by examining any one part in
isolation. In such cases the whole is surely not equal to the sum of the
parts. These phenomena, like others in the universe, are fundamentally
nonlinear.(2)
One of the first significant scientific developments to come out of
the study of complexity occurred in meteorology in the early 2000s. Another
mathematician, David Orrell, sparked a debate in this field when he put
forward the idea that errors in weather forecasts were attributable not to chaos
but to errors in the models themselves. Further, he claimed that these model
errors are insurmountable because of the complexity of the underlying system;
unlike chaos, complexity is incomputable.
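Orrell’s distinction can be illustrated with a toy experiment of my own (it is not his analysis, and the logistic map below merely stands in for a nonlinear system we wish to forecast): compare a run with a tiny error in the initial condition against a run with a tiny error in the equation itself.

```python
# Toy illustration of two distinct sources of forecast error, using the
# logistic map as a stand-in for a nonlinear system being predicted.

def simulate(r: float, x0: float, steps: int) -> list[float]:
    """Iterate x_{t+1} = r * x_t * (1 - x_t)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

STEPS = 20
truth = simulate(r=3.9, x0=0.5, steps=STEPS)        # the "real" system
chaos_run = simulate(r=3.9, x0=0.5001, steps=STEPS)  # exact model, tiny initial-condition error
model_run = simulate(r=3.899, x0=0.5, steps=STEPS)   # exact initial condition, slightly wrong equation

for t in (5, 10, 15, 20):
    print(f"t={t:2d}  initial-condition error={abs(truth[t] - chaos_run[t]):.3f}  "
          f"model error={abs(truth[t] - model_run[t]):.3f}")
# Both errors grow with the forecast horizon; the point is that even a
# perfectly measured present cannot rescue a slightly wrong model.
```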
While his argument has made inroads in meteorology, mainstream
economics (which uses similar models for similar systems) has so far mostly
sailed past the issue. Obviously, there are immense political implications for
mainstream economics that are not shared by meteorology and that might present
an incentive for maintaining the status quo. This leads to a strange
divergence in which, of two kinds of model sharing similar mathematics,
methods and limitations, one is known not to be reliable enough to decide on
buying waterproof clothes for a trek planned a week ahead, whereas the other
still settles public policy decisions designed to affect millions of lives.
Orrell addresses the limitations of mathematical models with regard
to predicting complex systems such as the economy, health or the climate in his
book The Future of Everything.(3)
He first distinguishes between chaos and complexity. One of their main
differences is that there is no order in chaos, whereas order emerges
spontaneously in complex systems. And indeed, without any central will planning
it, order exists in nature, in living organisms and in economies. It emerges out
of the multiple relationships between the various elements of the system and
their feedback effects, as these feedbacks create an ever-adjusting balance.
Another emergent property of some complex systems is adaptation.
Another emerging property of some complex systems is adaptation.
Such properties of complex systems are difficult for models to
capture because they are alien to reductionism. Order and adaptation are
displayed by a system as a whole and cannot be understood under the same rules
that govern the relationships between the individual elements of the system.
Think of the human brain: drawing a map of all its neurons and knowing how one
interacts with another still does not capture intelligence or memory. Think also
of Adam Smith’s metaphor of the "invisible hand"—the ability of the free market
to allocate resources and serve society despite the fact that each agent merely
seeks his own interest.
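A minimal sketch can make the emergence point concrete. The demand and supply rules below are invented caricatures, and no real market is this simple; the point is only that the market-clearing order nobody designed arises from the feedback of excess demand alone, without any planner computing the equilibrium.

```python
# Order emerging from feedback: the price is nudged by excess demand
# alone, yet it settles at the level where demand equals supply.

def demand(price: float) -> float:
    return max(0.0, 100.0 - 2.0 * price)   # assumed downward-sloping demand

def supply(price: float) -> float:
    return 3.0 * price                      # assumed upward-sloping supply

price = 5.0                                  # arbitrary starting price
for _ in range(50):
    excess = demand(price) - supply(price)   # decentralized feedback signal
    price += 0.05 * excess                   # adjust toward balance
print(round(price, 2))   # converges near 20, where demand equals supply
```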
Reductionism as a way of understanding and modelling complex systems
is therefore a scientific error: a method which worked in many physical
instances but which was erroneously transposed to fields where it was not
appropriate. Further, even if reductionism were mistaken merely because it
obscures the holistic nature of complex systems, matters are made worse still by
the fact that the individual, nonlinear relationships between the parts are
themselves difficult to capture mathematically. For instance, there are no
equations for clouds or for their exact relationships with the oceans, and thus
climate models must use approximations.
In economics, models must assume that economic agents act rationally
or tend towards maximum efficiency. Yet, people do make irrational economic
decisions. This can affect the validity of the models enormously. Orrell
explains that models of complex systems are very sensitive to small errors in
the approximate equations, notably because these systems are host to an
extremely delicate balance between opposing forces and feedback mechanisms,
where a slight imbalance in their representation has big effects on the accuracy
of the models’ projections. Approximations are thus one of the main sources of
forecast error—an error that grows with time as the system adopts a path that
departs from the one projected.
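A back-of-the-envelope sketch, with invented numbers rather than anything drawn from a real model, shows why such a near-balance of opposing forces makes projections fragile: when growth and damping almost cancel, a small misjudgement of the damping term compounds into a qualitatively different path over a few decades.

```python
# Toy sketch of a delicate balance: growth and damping forces almost
# cancel, so a small error in the damping term compounds over time.

def project(level: float, growth: float, damping: float, years: int) -> float:
    for _ in range(years):
        level *= 1.0 + growth - damping
    return level

true_path  = project(100.0, growth=0.050, damping=0.048, years=30)
model_path = project(100.0, growth=0.050, damping=0.052, years=30)  # damping misjudged by 0.004

print(round(true_path, 1))   # ~106: the real system drifts slowly upward
print(round(model_path, 1))  # ~94: the model projects a slow decline instead
```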
Put another way, computer programs must necessarily follow pre-determined,
well-defined rules, whereas the behaviours of humans or of the natural world are
often based on perpetually evolving rules, or on no rules at all. This means
that the world within a model can only evolve towards the outcome set by the
rules contained in the program, a mere description of what would happen if those
rules were followed in the real world, holding all else constant.
Finally, a scientific theory’s validity lies in its ability to
survive testing, and models can only be tested against past experience. While
such tests can work for simpler problems, they cannot work for complex systems,
where non-linearity and emergent properties such as adaptation mean that the
past is no indication of the future. Even if the models are set to "predict"
past data, they remain blind to what is to come. The test is therefore
hopelessly flawed: models can always succeed in predicting the past, but that
success is irrelevant if history does not repeat itself. And without a worthy
test, any theory can seem to work.
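The flaw can be caricatured in a few lines. The toy "model" below is fitted to an invented stretch of steady growth; it reproduces that past perfectly, and the flattering backtest says nothing about what happens once the regime shifts.

```python
# A model fitted to one regime "predicts" that regime perfectly,
# then misses badly once the underlying behaviour changes.

past = [100 + 2 * t for t in range(10)]          # ten years of steady +2 growth
slope = (past[-1] - past[0]) / (len(past) - 1)   # the fitted "law": +2 per year

# In-sample test: the model reproduces history exactly.
backtest = [past[0] + slope * t for t in range(10)]
print(backtest == past)                          # True: a flattering but empty test

# Out of sample the regime shifts (say, a crisis flips growth to decline).
future = [past[-1] - 5 * t for t in range(1, 6)]
forecast = [past[-1] + slope * t for t in range(1, 6)]
errors = [round(f - a, 1) for f, a in zip(forecast, future)]
print(errors)                                    # [7.0, 14.0, 21.0, 28.0, 35.0]: error grows each year
```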
All in all, complexity theory gives rigorous form to what was
previously only an intuitive sense of the difficulty of accurately forecasting
the economy, because it addresses the mathematical core of the issue. Knowing
this, what should our course of action be? To keep making predictions that we
know cannot be saved from error, and nevertheless act on them, or to stop using
them and adopt a more stochastic approach, using the available information while
remaining aware of the limits to our knowledge? If
"the study of the characteristics of complex dynamic systems is showing us
exactly why limited knowledge is unavoidable [and] confronts us with the limits
of human understanding,"(4) learning
our limits is then an actual scientific discovery. To disregard this advancement
would be foolish and unscientific. And yet, it seems that this is what economic
forecasters are paid to do by our most powerful public institutions.
In the land of the blind, the one-eyed man is king

“Predictions of the future are never anything but projections of present automatic processes and procedures, that is, of occurrences that are likely to come to pass if men do not act and if nothing unexpected happens; every action, for better or worse, and every accident necessarily destroys the whole pattern in whose frame the prediction moves and where it finds its evidence.”

–Hannah Arendt
Quantitative financier, former Wall Street trader and now
bestselling author Nassim Taleb also took mainstream economics to task
in his book, The Black Swan,(6)
which was translated into dozens of languages and was named one of the
12 most influential books of the post-WW2 period by the Sunday Times.(7)
Taleb made a fortune during the 2008 financial crisis betting
against the models, as he understood that they discounted the
"improbable" risk of systemic failure. He now uses his fame to good
cause, warning us about the folly of guiding entire economies with
wrongheaded economic models and theories. He is most emphatic about the
dangers presented by economic forecasters, suggesting that "[a]nyone who
causes harm by forecasting should be treated as either a fool or a liar.
Some forecasters cause more damage to society than criminals."
Taleb points to Paul Samuelson as the father of mainstream economics
as it is currently taught in academia. Samuelson’s textbook,
Economics: An Introductory Analysis, was first published in 1948 as
one of the first American textbooks to explain the principles of
Keynesian economics, and led to today’s intensified use of quantitative
methods in economic analysis. His book still reigns in colleges today
and is now in its 19th edition. Taleb’s reflection on
Samuelson’s legacy is unequivocal:
In orthodox economics, rationality became a straitjacket.
Platonified economists ignored the fact that people might prefer to do
something other than maximize their economic interests. This led to
mathematical techniques such as "maximization," or "optimization," on which
Paul Samuelson built much of his work. [...] I
would not be the first to say that this optimization set back social science
by reducing it from the intellectual and reflective discipline that it was
becoming to an attempt at an "exact science." By "exact science," I mean a
second-rate engineering problem for those who want to pretend that they are
in the physics department—so-called physics envy. In other words, an
intellectual fraud.
Taleb argues against the Nobel Prize in economics for the damage it
has done through its beatification of mistaken ideas about prediction and risk
management; incorrect economic theories can be devastating and should never
become gospel in such an uncertain environment. Forecasting methods create a
false sense of security, or worse, send people in the wrong direction. Colleges
then exacerbate the problem by teaching these Nobel-approved ideas as orthodoxy.(8)