Hitting the Right Target with Monetary Policy

Most central banks around the world have a price stability mandate, and since the international monetary system regained its footing on a fiat basis after the inflations of the 1970s, that has mostly been understood to mean a low inflation target. Over the past couple of decades, though, a growing number of economists have suggested an NGDP target instead. In George Selgin’s classic formulation, just as we expect the price of a particular good to fall when its industry becomes more productive, the price level as a whole should fall as aggregate productivity rises, because this minimizes the total number of prices that have to change.

What I want to argue here is that, not only is the price level the wrong target for monetary policy, it’s also a needlessly fuzzy target compared to feasible alternatives like NGDP.

 

Price, Quantity, and Quality

Since Covid, we’ve all become familiar with the two ways firms can deal with rising nominal costs: the usual way – raising the price on the same container of yogurt – or “shrinkflation” – reducing the quantity of yogurt at a given price.

The same is true in reverse, when nominal costs fall. Firms can cut the price of a given item. Or they can improve the item.

If it’s not clear that product improvements are just shrinkflation in reverse, think of the “quantity” of something, not as the number of units that get bought, but as the satisfaction you get from the services it provides. I paid more for my current OnePlus phone than I did for my first smartphone, a Nexus 5, back in 2013. But while they both might be thought of as “one” phone, the OnePlus provides much higher quantities of things I actually care about: communication services, entertainment services, emergency services, and so on. Even though the price of a phone has risen since 2013, the quantity of services packaged inside it has plausibly risen even more, meaning that the price of these things I care about has fallen, not risen, even in nominal terms.

A price index, which measures inflation, should track the price over time of meeting these needs, not the price of discrete goods, which may or may not be comparable over time in terms of the needs they meet. So any inflation index that looks at the latter instead of the former will severely overstate inflation.
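To make the phone example concrete, here is a minimal sketch with made-up numbers (a hypothetical price rise and a hypothetical doubling of the services packaged inside the item): even though the item’s nominal price rises, the price of the underlying services falls.

```python
# Hypothetical figures illustrating the phone example above.
price_2013, price_now = 350.0, 500.0     # price per "one phone"
services_2013, services_now = 1.0, 2.0   # index of services packaged inside it

# Tracking the discrete good overstates inflation...
naive_inflation = price_now / price_2013 - 1

# ...while tracking the price per unit of services shows deflation.
service_price_change = (price_now / services_now) / (price_2013 / services_2013) - 1

print(f"Naive item-price inflation: {naive_inflation:+.1%}")       # → +42.9%
print(f"Change in price of services: {service_price_change:+.1%}")  # → -28.6%
```

The numbers are invented, but the direction of the gap is the whole point: the same transaction data supports opposite conclusions depending on whether the index tracks items or the services they provide.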

But the argument here is broader than just a productivity norm. Think about what you’re looking for when buying the latest summer fashions. It’s not just fabric to cover your body. It’s stylishness services. And last year’s fashions provide a lower quantity of those services this year than they did when new, despite being the exact same item physically. Productivity is just a special case of the point that similar goods might package different bundles of services over time.

And this makes it tricky to decompose price changes of particular goods into quality changes (that is, changes in the quantity of the services it provides), versus actual changes in the price of those services.

Given that we can only observe things like “the price of phones” and “the price of shirts”, and not things like “the price of communication services” and “the price of stylishness services”, the Consumer Price Index Handbook suggests a few different ways of dealing with this problem:

  • Ignore it and just track item prices, which is equivalent to attributing all price increases to inflation, and none to quality changes. This approach is specifically not recommended, and leads to dramatically higher estimates of inflation.
  • Pick a benchmark industry and attribute higher prices up to the benchmark to inflation, and the rest to quality improvements. This of course depends on appropriate choice of benchmark, which should be some industry with no quality or productivity changes.
  • Find overlapping goods. If a new and an old model sell at the same time, the difference in price can be assumed to reflect quality differences. But this does not account for considerations like future-proofing or the value of newness as such.
  • Explicitly adjust for quality based on expert opinion. This comes with all the obvious downsides of attributing expert preferences to actual consumers.

 

Actual price indices turn out to be very sensitive to the particular methods used here. Index and aggregation theory are, on the whole, well-developed and rigorous in thinking through how to interpret actual data in terms of subjective meaning. I’m on record defending imputed rental equivalence, one of the more misunderstood and controversial aspects of price index calculations. But the fact is, there is simply no satisfying way to account for quality differences over time in a price index.

 

Inflation Targeting in a Progressive Economy

While most statistical agencies are savvy enough not to ignore quality change entirely, in practice, each of these methods is employed conservatively enough that we can be confident reported inflation is systematically overstated. One might even think of a 2% inflation target as, implicitly, just compensating for the systematic mismeasurement of inflation.

But can we do better?

George Selgin, Scott Sumner, David Beckworth, and many others have made convincing cases that NGDP targeting is better for financial and macroeconomic stability than inflation targeting. But in addition to that, it also has the advantage of sidestepping the issue of decomposing the price changes of goods into changes in the quantity of the services they provide, or changes in the price of those services themselves.

No doubt there is still a place for price indices, imperfect as they are, in important policy questions like keeping the purchasing power of benefit payments roughly stable, or in macroeconomic questions like growth accounting (how much of NGDP growth represents real growth versus inflation?). But for monetary policy, relying on indicators that correspond more directly to something economically meaningful – indicators like nominal GDP, that simply tally up nominal spending, rather than price levels, that demand complicated hedonic adjustments to be economically meaningful – has a number of benefits in addition to financial stability:

  • An NGDP target provides clearer information to the Fed itself when monetary policy is on the right or wrong track to achieve its long-term goals.
  • An NGDP target improves accountability. Fuzzy as the price level is, the success of monetary policy cannot be evaluated solely on the behavior of the price level, even ex post. A single, clearer benchmark reduces the scope for excuses for monetary policy failure (and, by the same token, makes it clearer when monetary policy is not at fault).
  • A clearer target reduces the incentive to issue cart-before-the-horse regulations with a view to improving the clarity of the target. Pricing regulations in the EU mandating per-kilogram rather than per-item prices, for example, are justified in part by their effect on statistical collection, and the collection itself can entail significant overhead in many industries. 
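The growth-accounting decomposition mentioned above is itself a one-line calculation, which makes the dependence on the price index easy to see: with hypothetical figures, every point of quality-adjustment error in measured inflation shifts a point of NGDP growth between the "real" and "inflation" columns.

```python
# Growth accounting in its simplest form, with hypothetical figures:
# splitting NGDP growth into real growth and inflation via a price index.
ngdp_growth = 0.05  # 5% nominal GDP growth (directly observable)
inflation = 0.03    # 3% measured inflation (sensitive to quality adjustment)

real_growth = (1 + ngdp_growth) / (1 + inflation) - 1
print(f"Implied real growth: {real_growth:.2%}")  # → 1.94%
```

NGDP growth on the left-hand side needs no quality adjustment at all; it is the split on the right that inherits all the fuzziness of the price index.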

This argument doesn’t uniquely suggest NGDP as a target. Indeed, the money supply might work just as well, provided we measure it correctly. But it does suggest that, in an economy where tastes and technology change from year to year, inflation in particular is much too fuzzy to be an appropriate target for monetary policy.

 


Cameron Harwick is a monetary economist and Associate Professor of Economics at SUNY Brockport. Follow him on Twitter at @C_Harwick.
