A useful book to read on Big Data is entitled “Big Data,” by
Viktor Mayer-Schönberger and Kenneth Cukier.
There is no rigorous definition of Big Data. According
to the book, Big Data essentially means “things one can do at a large scale
that cannot be done at a smaller one, to extract new insights or create new
forms of value, in ways that change markets, organizations, the relationship
between citizens and governments, and more.” The question is how this new
phenomenon might affect economic policy, such as monetary policy.
The book discusses a few important issues surrounding Big
Data, which are worth mentioning and pondering here. First, the
sheer number of observations, N = all, implies that the principle of
random sampling, which we rely on in statistics, is no longer applicable. Second, Big
Data may be messy and hard to structure and tabulate, but the authors argue that “more
trumps better.” Indeed, MIT economists Alberto Cavallo and Roberto Rigobon collected
half a million prices of products sold in the US every day and used them to
measure inflation. While theirs is a messy data set, they claim to have
detected the deflationary movement in prices immediately after the Lehman
Brothers collapse in September 2008, whereas the official CPI data did not show it until
November 2008. Third, Big Data can tell us more about the correlation between X
and Y, for example, but nothing about causality. Fourth, correlations are used for
prediction. Fifth, Big Data handles nonlinearity better than small samples do.
I do not have much quarrel with these assertions.
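To make the second point concrete, here is a minimal sketch of how scraped online prices might be turned into a daily price index, in the spirit of Cavallo and Rigobon’s work. The data, column names, and the choice of a chained Jevons (geometric-mean) index are my own illustrative assumptions, not their actual methodology.

import numpy as np
import pandas as pd

def daily_jevons_index(prices: pd.DataFrame) -> pd.Series:
    """Chain a Jevons (geometric-mean) index over daily price relatives.

    `prices` has one row per product per day, with columns
    ["date", "product_id", "price"] (a hypothetical schema).
    """
    wide = prices.pivot_table(index="date", columns="product_id",
                              values="price").sort_index()
    # Daily log price relatives for products observed on consecutive days.
    log_rel = np.log(wide).diff()
    # Geometric mean across the matched products each day.
    daily_factor = np.exp(log_rel.mean(axis=1))
    daily_factor.iloc[0] = 1.0             # base day
    return 100.0 * daily_factor.cumprod()  # index, base = 100

# Tiny made-up example: two products observed on two days.
df = pd.DataFrame({
    "date": pd.to_datetime(["2008-09-01", "2008-09-01",
                            "2008-09-02", "2008-09-02"]),
    "product_id": ["A", "B", "A", "B"],
    "price": [10.0, 20.0, 9.9, 19.8],
})
print(daily_jevons_index(df))  # second day ~ 99.0: prices fell about 1%

Run on millions of prices per day, an index like this updates in real time instead of once a month, which is the source of the head start they claim over the official CPI.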
Let us talk about policy. Take, for example, monetary policy
in the US, the EU, New Zealand, and a number of other advanced countries,
where the primary objective is price stability. Regardless of the operational
details of the policy, which vary from one central bank to another, the policy is,
essentially, demand management. Demand management is based on economic theory: as
data arrive over time, central banks try to discern the nature and the permanence
of shocks. When shocks are thought to alter the future paths of
output and prices, central banks intervene by moving the current short-term
interest rate, or by changing the current money supply or its growth rate, or whatever the policy instrument is. This forward-looking stance
is the essence of monetary policy because we know that monetary policy
affects the real economy with a lag, and that these lags are long and
variable. For example, to change the future path of output, the central bank
must move the interest rate a year or a year and a half in advance. A
popular economic theory among central banks predicts that when output is
projected to be above its potential, i.e., a positive output gap, aggregate
demand increases and inflation rises above its expected level.
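In textbook form this is a Phillips-curve-type relationship; a minimal statement, with illustrative notation of my own rather than anything from the book:

$$\pi_t = E_{t-1}[\pi_t] + \kappa\,(y_t - y_t^{*}) + \varepsilon_t, \qquad \kappa > 0,$$

where $y_t$ is output, $y_t^{*}$ is potential output, and a positive output gap $y_t - y_t^{*}$ pushes inflation $\pi_t$ above its expected level $E_{t-1}[\pi_t]$.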
The problem is that the most important variables within the demand
management framework are unobservable to the policymaker; the central bank has
to estimate, posit, or calibrate them. Neither potential output nor
expected inflation is observable. Similarly, the Wicksellian natural rate of
interest, which also matters to some policymakers, is unobservable.
Given that most of the policy-relevant variables are unobservable, the question is how Big Data could benefit policymakers. It is not obvious that “more data” can actually help policymakers measure the values of the important unobservable variables needed for policymaking. Big Data may, however, provide an estimate of future aggregate demand for goods and services that the policymaker can use to infer future inflation. Alternatively, Big Data could tell us directly whether we will have higher future inflation, without inferring it from excess demand, as in Cavallo and Rigobon.

That would raise a number of questions. For example, would it mean the end of economic theory in the conduct of monetary policy? If supercomputers could tell policymakers with such accuracy that the prices of goods and services are going up, policy could simply be tightened, i.e., the short-term interest rate increased. If so, there would be no more fuss about models, estimations, predictions, etc. Could this be the future of monetary policy?

The models used by central banks are based on a set of assumptions that reflect a certain economic paradigm or belief, so it is hard to change how central banks think and work. Milton Friedman advocated a different way to do monetary policy, namely the x% money growth rule, but central banks strongly resisted it because implementing such a policy regime would have left very little for them to do.
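For concreteness, Friedman’s rule can be written in one line (the notation is mine, and x is whatever constant the rule fixes):

$$\Delta \ln M_t = x \quad \text{for all } t,$$

that is, the money stock $M_t$ grows at a constant rate of x percent per period regardless of current conditions, leaving the central bank almost nothing to decide.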
Even if Big Data could tell us something about inflation a
couple of months ahead, as in Cavallo and Rigobon’s study, it would
not be sufficient. They say that they knew in September 2008 that prices were falling,
while the CPI data showed that decline only in November 2008. That might be true for
the public, but it cannot be the case for economists working in central
banks, who would actually know the prices of more than 70 percent of
the goods and services in the CPI basket before November. Indeed, central bank
forecasts of the next quarter’s CPI are accurate most of the time. It is the
one-year-ahead inflation rate that is most relevant for policy, and it is very
difficult to forecast. Could Big Data tell us what inflation will be a year
ahead?
One could imagine scenarios where Big Data could shed more
light on aggregate demand. For example, one could find out whether millions of
people are shopping online for new cars, new homes, or durable
goods in general. That might be a useful signal about future aggregate demand. Would
policymakers alter policy because of such information? Similar information
could be obtained online about imports and exports, which also affect aggregate
demand. The idea is not very different from measuring vacancies by counting
job ads on the Internet, which has been used to fit the Beveridge curve (the
empirical relationship between vacancies and unemployment), as in the sketch below.
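As an illustration, here is a minimal sketch of fitting a Beveridge curve by ordinary least squares on logs. The monthly vacancy and unemployment rates below are made-up numbers, and the log-linear functional form is one common, simple choice among several.

import numpy as np

# Hypothetical monthly data: vacancy rates (e.g., from counting online
# job ads) and unemployment rates, both in percent.
u = np.array([4.5, 5.0, 6.2, 7.8, 9.0, 9.6])
v = np.array([3.2, 3.0, 2.5, 2.1, 1.8, 1.7])

# Beveridge curve approximated as log-linear:
#   log(v) = a + b * log(u) + e,  with b expected to be negative.
X = np.column_stack([np.ones_like(u), np.log(u)])
(a, b), *_ = np.linalg.lstsq(X, np.log(v), rcond=None)
print(f"intercept = {a:.3f}, slope = {b:.3f}")  # slope < 0: downward-sloping curve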
Government security agencies seem to be benefiting from
the Internet Big Data phenomenon, but other departments have not yet invested
in it. See also an article by Kalev Leetaru published in Foreign Policy on May 29, 2014, in which he uses the Global Database of Events, Language, and Tone (GDELT) and its record of 2.4 million protests to analyze the Arab Spring. I think the time will come. In the past, central banks used
large-scale models, which failed to improve forecasting accuracy or to deliver
better policy. Central banks then used factor VARs, in which a large number of
variables are compressed into a few common factors (a minimal sketch of the idea follows below). Forecasting accuracy did not improve either. I do not
think Big Data will improve forecasting accuracy in economics, but that does
not mean that central banks will not explore this avenue. I have a feeling that
various government departments, and central banks,
are likely to be investing in Big Data this decade.
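For what it is worth, the factor-VAR idea fits in a few lines. The sketch below uses simulated placeholder data and the simplest possible setup: principal-component factors plus a one-lag VAR estimated by least squares.

import numpy as np

rng = np.random.default_rng(0)
T, N, K = 200, 100, 2                 # periods, panel width, number of factors

panel = rng.standard_normal((T, N))   # stand-in for a large data panel
inflation = rng.standard_normal(T)    # stand-in for the target series

# Step 1: compress the panel into K principal-component factors.
Z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
factors = Z @ Vt[:K].T                # T x K factor estimates

# Step 2: one-lag VAR on y_t = [inflation_t, factors_t]:
#   y_t = c + A y_{t-1} + e_t,  estimated equation by equation via OLS.
Y = np.column_stack([inflation, factors])
X = np.column_stack([np.ones(T - 1), Y[:-1]])
B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)

# One-step-ahead forecast conditional on the last observation.
x_new = np.concatenate([[1.0], Y[-1]])
print("forecast of [inflation, factors]:", x_new @ B)

Whether factors extracted from a truly big data panel would forecast inflation any better is, of course, exactly the open question.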
Razzakw@gmail.com