Insightful lecture by Jan Frait, Deputy Governor of the Czech National Bank. He questions the dominance of New Keynesian macroeconomics in central banking policy:
I joined the Faculty at a time when New Keynesian macroeconomics (NKM) and its approaches, immodestly referred to as “the science of monetary policy”, had become the official mantra in the central banks of developed countries and had subsequently been translated into a monetary strategy referred to as inflation targeting (or inflation forecast targeting). It was an era characterised by optimistic expectations about long-term economic developments. Macroeconomists spoke of a “Great Moderation” and predicted that we would see decades of low and stable inflation amid brisk economic growth with no significant cyclical fluctuations. With the benefit of hindsight, these expectations were not fulfilled. The low inflation that took hold globally in the 1990s was largely a product of globalisation, not super-smart monetary policy.
….
Just after the GFC, a heated debate erupted about the merits of mainstream macroeconomics. By that we mean the NKM presented in DSGE models. I will thus refer to it as NK-DSGE throughout my lecture, although today this term covers a wider range of approaches than it did 10 or 15 years ago. State-of-the-art NK-DSGE approaches introduced rigour and strong microeconomic foundations into macroeconomic thinking. They became an important tool for simulating the effects of monetary policy and other macroeconomic shocks. But none of this came for free. The positive features of this framework came at the cost of a high degree of model stylisation. Put simply, these models, especially in their early versions, were very narrow in terms of the specification of the economy and its behaviour, and omitted a number of important linkages and processes. Despite much progress in the academic literature over the last 10 years, the variants used by central banks remain suitable mainly for analysing small deviations of macroeconomic variables from their long-term trends. They represent a highly imperfect description of the economy and are not able to cope with shocks or structural changes that deflect it from these trends for lengthy periods.
The macroeconomic instability we have seen over the last 20 years is essentially inconceivable in these models. Despite these limitations, NK-DSGE models have for quite some time been strongly dominant in academic macroeconomics and been the main tool used for analysis and forecasting in some central banks. In a scientific discipline in which scholars have always held different views and argued heatedly among themselves, such a situation was by definition strange. It is good that after the GFC, the fundamental debate on different approaches to macroeconomic modelling was revived and the seemingly irreconcilable camps were able to reach some common conclusions. The current inflationary wave will undoubtedly give a new impetus to this debate.
How did one rather narrow and quite technical approach come to dominate macroeconomics? It’s hard to say. Personally, I think it mainly reflects psychological factors. From my experience of more than 30 years in both the academic and financial sectors, I see a rather strong tension between exact scientific thinking modelled on the ideal of the natural sciences and the pervasive uncertainty associated with the unpredictable behaviour of social systems.
I have met many very smart people, including winners of what is inaccurately referred to as the Nobel Prize in Economics, and I have observed that even extremely intelligent people tend to simplify situations dramatically in the face of high levels of uncertainty. They simply cling to a highly stylised economic model, abstract from many theoretical and practical inconveniences, and then offer a clear answer that can be claimed to be “scientifically” derived. Without this, they would probably have to say, “I don’t know, it depends on this, that and the other”.
He questions the single-minded focus on inflation:
A persistent challenge for inflation-targeting central banks is the phenomenon of inflation itself. We recognise that the CPI or some similar inflation measure, which lies at the heart of this monetary policy framework, may not be an ideal representation of what is meant by inflation in traditional monetary theory. Personally, I still adhere to the monetarist definition that inflation means a steady and continuous rise in the price level, that is, a situation where different price categories and nominal variables generally show a similar trend. CPI inflation does not quite fit this definition, and it is not always desirable to regard its short-term fluctuations as genuine inflation or deflation. This is partly because CPI inflation tends to change quite frequently as a result of shocks of a non-macroeconomic and non-monetary nature. If the central bank were to try to slavishly keep CPI inflation at a certain target level at all times, it would have to change interest rates or other instruments frequently and significantly, and would probably not succeed. After all, NKM itself recommends focusing on the narrower price indices of so-called core or super-core inflation, which consist mainly of items whose prices are rigid. But there is no doubt that targeting a specific and narrow price index would not be very comprehensible to the markets and the public. For this reason, central banks tend to choose pragmatically to target a broad index close to the CPI and carefully explain in their communications why they sometimes react less or more strongly than observed or forecast CPI inflation would imply.
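The distinction Frait draws between headline CPI and exclusion-based "core" inflation can be illustrated with a small sketch. The basket components, weights and year-on-year price changes below are invented for illustration only, not actual CNB or statistical-office data:

```python
# Hypothetical CPI basket: (component, weight, year-on-year price change %).
# All names, weights and numbers are illustrative assumptions.
basket = [
    ("food",       0.20,  8.0),   # volatile, often supply-driven
    ("energy",     0.10, 15.0),   # volatile, often imported shocks
    ("services",   0.40,  3.0),   # prices relatively rigid
    ("core goods", 0.30,  2.5),   # prices relatively rigid
]

def weighted_inflation(items):
    """Weighted average inflation, re-normalising weights to sum to 1."""
    total_w = sum(w for _, w, _ in items)
    return sum(w * dp for _, w, dp in items) / total_w

# Headline inflation uses the whole basket; the core measure simply
# excludes the volatile food and energy components and re-weights.
headline = weighted_inflation(basket)
core = weighted_inflation([x for x in basket if x[0] not in ("food", "energy")])

print(f"headline CPI inflation: {headline:.2f}%")
print(f"core inflation:         {core:.2f}%")
```

With these made-up numbers the headline figure is pushed well above the core measure by the volatile items, which is exactly the situation where the lecture suggests a mechanical response to CPI would be misleading.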
Under inflation targeting as practised in reality, central banks then set their rates to meet the CPI inflation forecast in the relatively short term. The problem is that we cannot reliably forecast CPI inflation and its twists and turns. Virtually every major shock to inflation, in one direction or the other, has more or less surprised central banks this century. On the other hand, we are undoubtedly quite capable of understanding that inflationary potential is being generated in the economy.
But again, we cannot reliably predict how this potential will manifest itself over time, through what channels it will enter the economy and whether or not it will show up as overt inflation at some point in time. Particularly in small open economies, where shocks to the exchange rate and other shocks from abroad have a significant impact on inflation, it is quite likely that the generation of inflationary potential may initially translate into asset price growth or, in some cases, current account deficits rather than into overt CPI inflation. The latter may then stay quite low for a long time even when the economy is overheating. As a result, rather than inflation-forecast targeting, central banks may slide towards a response-to-recent-inflation-numbers regime.
Solution? Look at a broader set of models and indicators:
What is the solution to this situation? I offer no simple one. The above dilemmas need to be reflected in our thought processes. In monetary policy decision-making, in addition to model-based forecasts of highly variable CPI inflation, we need to give equal weight to the longer-term fundamental inflationary pressures generated by macroeconomic, monetary and financial developments. I understand the arguments that we have no reliable indicator of these pressures and that few people outside the central bank itself would pay heed to an index of fundamental inflationary pressures. But you can’t escape reality. It doesn’t pay to put all your eggs in one basket. It’s better to have a broader set of models and, on top of that, a set of indicators to allow for cross-checking.
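The "set of indicators to allow for cross-checking" could take many forms; one minimal sketch is a composite pressure index that standardises several series and averages them each period. The indicator names and values below are entirely hypothetical:

```python
from statistics import mean, stdev

# Hypothetical readings for a few "fundamental pressure" indicators
# (year-on-year % changes over 8 periods). Invented for illustration.
indicators = {
    "credit_growth": [3.0, 3.5, 4.0, 5.0, 6.5, 8.0, 9.0, 10.0],
    "house_prices":  [2.0, 2.5, 3.0, 4.0, 6.0, 8.5, 10.0, 12.0],
    "wage_growth":   [2.5, 2.8, 3.0, 3.2, 3.8, 4.5, 5.2, 6.0],
}

def zscores(series):
    """Standardise a series to mean 0 and standard deviation 1."""
    m, s = mean(series), stdev(series)
    return [(x - m) / s for x in series]

# Composite index: average of the standardised indicators in each period.
z = {name: zscores(vals) for name, vals in indicators.items()}
periods = len(next(iter(indicators.values())))
composite = [mean(z[name][t] for name in z) for t in range(periods)]
```

Standardising first keeps any one fast-growing series from dominating the average; a rising composite would flag building inflationary potential even while headline CPI stays quiet, which is the cross-check the lecture argues for.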
How and why do certain ideas become dominant?