Athanasios Orphanides on Real-Time Monetary Rules and their Impact on the Fed’s Framework Review
Natural Growth Targeting and the Future of Monetary Policy
Athanasios Orphanides is a professor at MIT, formerly served at the Federal Reserve for 17 years, and was the Governor of the Central Bank of Cyprus. In Athanasios’s first appearance on the show, he discussed real-time monetary policy rules, their implications for the Fed’s framework review, his natural growth target, and much more.
Subscribe to David's new Substack: Macroeconomic Policy Nexus
Check out our new AI chatbot: the Macro Musebot!
Read the full episode transcript:
This episode was recorded on February 18th, 2025
Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].
David Beckworth: Hey Macro Musings listeners, this is your host, David Beckworth, and I have some exciting news for you! I am launching a Substack newsletter this week that is titled Macroeconomic Policy Nexus. Yes, after a five-year hiatus I am returning to the world of blogging, at least in its modern form as a Substack newsletter. The newsletter will publish twice a week. On Monday, the newsletter will highlight the podcast and bonus content related to the show. On Thursday, the newsletter will report on news, papers, and topics that strike my fancy. So, if you cannot get enough of the podcast, its guests, and myself, this is your chance for more. Be sure to sign up. Details are in the show notes. Now on to the show!
Welcome to Macro Musings, where each week we pull back the curtain and take a closer look at the most important macroeconomic issues of the past, present, and future. I am your host, David Beckworth, a senior research fellow with the Mercatus Center at George Mason University, and I’m glad you decided to join us.
Our guest today is Athanasios Orphanides. Athanasios is a professor at MIT and a well-known and seasoned veteran of the central banking world. He served for 17 years at the Federal Reserve, and then from 2007 to 2012, he was the governor of the Central Bank of Cyprus, and as a result, a voting member of the ECB Governing Council. Athanasios joins us today to talk about his research on real-time monetary policy rules and its implications for the Fed’s framework review. Athanasios, welcome to the show.
Athanasios Orphanides: Glad to be with you, David. It’s really a pleasure. It’s been a long time since we wanted to do this. Finally, we get a chance.
Beckworth: Yes. This is long overdue. I have chatted with you at many conferences. Of course, I’ve read your research from many years back, and of course, you have a target that I like very much, which we’ll get to later: natural growth targeting, a version of a nominal income rule, so of course, I love that. You have done a lot of great work on monetary policy rules. We want to get to that. Before we do that, though, tell us about your amazing career journey. You’ve been to the Fed. You’ve been to the ECB, the Central Bank of Cyprus. Tell us a bit about that story.
Athanasios’ Career Journey
Orphanides: Just a couple of words. I’m an MIT product. When I teach, I introduce myself to my students like this. I showed up in this country. I came to MIT as an undergraduate because I wanted to be an economist, and I had heard that it had the best economics department in the world. I was very lucky already as an undergraduate. I managed to have courses with Bob Solow, Paul Samuelson, Rudi Dornbusch, Olivier Blanchard, Stan Fischer. I got the policy background from these guys. When I went on the market, I got advice from professors here that the Federal Reserve Board should be regarded as equivalent to a pretty much top 10 school.
With the policy background I had, I ended up at the Federal Reserve and really enjoyed my career there. Having that background, when the ECB was founded, it was very attractive to see if I could actually go back and help in Europe a little bit. When they called me up and said, “Hey, would you like to come for one term as governor and be on the ECB Governing Council?” I said, “Yes, sure, that is great.” One thing I should mention that’s quite amazing, given the multicultural environment at the ECB, somehow the Europeans decided to use English as the working language of the ECB, which is extremely convenient for US-educated economists.
Beckworth: Very nice. Now, your time at the Federal Reserve is really interesting because you got to work in the Division of Monetary Affairs and your job, as I understand, was on real-time analysis of monetary policy. This is how you dipped your toes into real-time monetary policy rules. Out of that, I believe you have probably your best-cited paper in the AER, 2001, “Monetary Policy Rules Based on Real-Time Data.” Tell us how that all unfolded. You were working on it and you got to publish out of it as well.
Orphanides: First, I’m going to correct you on how I started my career at the Fed. I started by estimating and monitoring money demand functions.
Beckworth: Oh, really?
Orphanides: This was part of the monetary policy analysis when I joined the Fed. Indeed, I was in the Monetary Studies section in the Monetary Affairs Division. When the Taylor Rule came out in 1992, there was this immense interest in actually examining why the Taylor Rule appeared to fit the Greenspan era as well as had been suggested at the time. We were called to do some research on it and to monitor it. When you mention real-time data, I’m going to mention that as a matter of practical experience, starting to track the classic version of the Taylor Rule as was specified in 1992 for a couple of meetings, I immediately confirmed that it had a problem.
This is a problem that Ben McCallum had identified at the Carnegie-Rochester conference in 1992. The rule is not operational. The inputs to the rule keep changing all the time. You need to have within-quarter forecasts of the output gap. Back then, the GDP deflator was being used as the inflation measure. These things get revised over time. We could immediately see that there is a problem in evaluating and tracking the rule that has to do with the revisions in the data. There was sufficient interest in this that I actually selected the topic for one of my board briefings. That ended up being an AER paper a few years later.
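To make the operational problem concrete, here is a minimal sketch of the classic rule evaluated on two data vintages. The 0.5 coefficients and the 2% values for r-star and the inflation target follow Taylor’s 1993 specification; the two vintages are hypothetical numbers chosen only to show how revisions move the prescription.

```python
# Classic Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap).
# The 0.5 coefficients and the 2% values for r-star and the inflation target
# follow Taylor's 1993 specification; the data vintages below are hypothetical.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Prescribed nominal policy rate, in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Same quarter, two vintages: the within-quarter estimate of the output gap is
# later revised down by 2 percentage points and deflator inflation up by 0.2.
as_seen_at_meeting = taylor_rule(inflation=3.0, output_gap=1.0)
as_revised_later = taylor_rule(inflation=3.2, output_gap=-1.0)

print(f"real-time prescription: {as_seen_at_meeting:.2f}%")  # 6.00%
print(f"revised prescription:   {as_revised_later:.2f}%")    # 5.30%
```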
Real-Time Policy Rules
Beckworth: I was looking through your Google Scholar, and you have many, many papers. I believe this one is like your number one paper. That’s fantastic. You get to work on it and then publish out of it. This paper speaks to your research agenda. A lot of your research agenda has been on this evaluation of design and monetary policy rules using real-time data. There’s lots of uncertainty with y-star, r-star, u-star. I go back and think about Chair Jay Powell’s speech in 2018. He talked about navigating by these stars. Then I believe last year, he gave a similar speech where he said navigating by the stars under cloudy skies.
That defines everything you’ve been working on, right? That’s been your research agenda. We’ve already touched on this Taylor Rule paper you did in the AER, but you’ve had several papers in this vein. Another one, you had an REStat paper with a colleague, Simon van Norden, “The Unreliability of Output-Gap Estimates in Real Time.”
I want to go and park for a few minutes on the paper where I first met you, when I read your work. This was a JME paper, 2003, “Historical Monetary Policy Analysis and the Taylor Rule.” Athanasios, I was brought up and taught that everything was just really bad until Volcker came along. Then he changed the world, and then everything was wonderful. If we go back and estimate Taylor Rules over this period, it’s just obvious. The Taylor Rules show what a change there was toward systematic and disciplined monetary policy. This paper, you said, “Not so fast.” Why not so fast?
Orphanides: It’s a couple of things. The first thing I’m going to say is that, indeed, Paul Volcker was a fantastic Fed chair. We are so lucky in this country that he came along and managed to bring stability back to the monetary system. Without Paul Volcker’s revolution at the time, who knows what the US economy would have looked like over the last 30 years.
The issue, and this goes back to the real-time analysis, the issue is that if you are giving policy advice in real time—and this is part of the job of the group I was in at the Federal Reserve—you always have to take into account what the informational constraints are, what information is known, how trustworthy the information is. It’s only natural that you always ask this question: how do you give advice on good policy, taking into account the uncertainty that is ever present around us? This is the challenge. This has to be reflected in the design of systematic monetary policy and the design of monetary policy rules.
Now, this is not a new issue. If you go back 100 years ago, or if you go back to the ’50s and ’60s when you have the monetarist debates, the whole point of monetarism can be summarized in very much this same context. Can you identify a simple rule that will be robust, that will make sure you avoid major mistakes? Milton Friedman was always talking about avoiding major mistakes. This was the message of his presidential address, for example.
You’ve got to take this thing into consideration all along. The work that you mentioned arose from the fact that we always see—and this has come back, so we really need to bring this to the present—there is always a tendency, I think this is a human tendency, to try to overreach and do a little bit better in the policy design process. What this translates to in terms of macroeconomic policy is what used to be called fine-tuning. Why don’t we always try to do a little bit better? Why don’t we try to optimize around the information by using a nice model we have?
The problem is that we have the tendency to downplay how little we know about the models we build, and we end up with academics advising policymakers to fine-tune policy, and that becomes a problem. What I was trying to do in that paper is to ask the question, suppose that the Fed had followed the Taylor Rule back in the 1960s and ’70s. Would results have been really much better based on the information they had in real time, not based on the information we had 30 years later?
What I realized is that the answer was no. One of the difficulties with the classic Taylor Rule was that it assumed a pretty precise knowledge of the output gap. The output gap is one of these concepts that we cannot measure very well. It’s really one of the star variables. The level of potential output is similar to the natural rate of unemployment, similar to the natural rate of interest. It’s one of the star variables that we tend to mismeasure because we don’t know how to measure them very well. If you try to apply policies that are based on these nebulous measures, chances are you would be making a mistake.
In terms of the specifics, revisiting the 1960s and ’70s, what I realized was that part of the problem was that we had a productivity slowdown in the United States. Ex post, we recognized that. What had not been recognized as much 20 years ago is that the productivity slowdown meant that the concept of the output gap was severely mismeasured for a couple of decades. That created serially correlated errors. If you tried to implement the Taylor Rule with real-time data, you ended up having systematically too-easy policy. That, of course, ended up producing inflation. It’s really a matter of taking that into account in your evaluation.
I want to correct you in one more thing. You said when Volcker came, he came along and he fixed everything and policy has been good since then. I actually want to remind you that there has been a discussion going back, and a very nice paper by David and Christina Romer on this one, for example, on the rehabilitation of the 1950s policy and Chair Martin. When I look at policy at the Fed in the last 110 years now, there are good periods and bad periods and terrible periods.
Before the 1960s inflation episode, we had a period where policy was pretty good. Policy, under Chair Martin, was trying to keep the growth of the economy steady, not fine-tune the economy, not try to reduce the unemployment rate to as low as it can be, just have the economy grow at a steady pace. In retrospect, compared to the late ’60s and ’70s, that period was much better. When I look at this in context, I think that what Paul Volcker did with his monetary reform was restore, to a very large extent, what policy was in the 1950s. Then, with Chair Greenspan, inflation being lower, inflation expectations being better anchored, finally the economy was operating much, much better.
Beckworth: How should we think about the late 1960s, 1970s, if they were, in fact, following a Taylor Rule given the real-time data they had? Were they being simply uninformed, or was there some other policy mistake that led to this high inflation?
Orphanides: Just to be clear, the Fed never followed any specific rule. This is part of the problem; we need to convince them to do better. We want to convince them to be a little bit more systematic going forward. The statement is that with the data available in real time, especially the output gap data—this is where most of the revisions in the data come along—had the Fed followed the Taylor Rule precisely in the late ’60s and ’70s, the inflation outcomes that we would have gotten would have been indistinguishable from the discretionary policy that was followed.
Now, how do we explain this? The way I find it easiest to reconcile these facts is by noticing what the objective of policy was. Under Arthur Burns, the policy was very similar to what you will recognize as more recent policy, which is try to keep the unemployment rate as low as possible, and at all costs avoid having it rise above what we think the natural rate of unemployment is.
What this meant was that from time to time, actually quite often, more often than not, the Fed was pushing for overly easy policy, and only after they realized that inflation had gotten very high did they try to undo that. This is what used to be called the stop-go policies of the 1970s. What was the common thread? Trying to target a real level variable, the unemployment rate, the level of the unemployment rate in that instance. That’s very similar to having a policy rule that places a large weight on the level of the output gap. With Okun’s law, we can go from the output gap to the unemployment gap, back and forth all the time. This is really the common thread.
This is very different from policy in the 1950s under Chair Martin, and policy under Volcker and Chair Greenspan later on. If you check how Chair Martin or Chair Volcker would describe their policies, they were always focused on, we want to have stable growth in the economy. They never talked about having a low unemployment rate. They never talked about an output gap. They always stressed stable growth in the economy, and first and foremost, restoring and maintaining price stability to help the economy grow.
Beckworth: Speaking of these stars that we’ve been talking about that we can’t measure in real time, Chair Powell has often talked about r-star, and he says you can’t observe it, but we will know it by its works. I bring that up because you had another fascinating paper, this time with John Williams, now the New York Fed president, 2002.
Orphanides: C. Williams. Very important to remember the middle initial.
Beckworth: The current John Williams at the New York Fed. It was a Brookings Papers on Economic Activity paper, and it was, again, in the same research agenda theme: “Robust Monetary Policy Rules with Unknown Natural Rates.” In this paper, you make the case and demonstrate clearly that r-star and u-star are really hard to get right in real time. You have a quote at the beginning of this paper. I just want to read it because I think listeners will recognize this. These are the words that you hear from Jay Powell often, and I always thought, “Man, that’s a clever way, Jay Powell, to coin this, to think about it.”
Apparently, he’s not the first person. In fact, there’s another John Williams, John H. Williams, 1931, the Quarterly Journal of Economics. I think you said you found this, but it says—this is his quote from 1931, “The natural rate is an abstraction; like faith, it is seen by its works. One can only say if the bank policy succeeds in stabilizing prices, the bank rate must have been brought in line with the natural rate, but if it does not, it must not have been.” That was really interesting to see someone in 1931 say that, and then hear our current Fed chair say that.
Orphanides: It was a lot of fun finding that quote. I remember I was discussing this with John C. Williams. John said, “We need to start the paper with this.” Absolutely, it’s John H. Williams who used to be a professor at Harvard and also a policy adviser at the New York Fed. It’s quite remarkable how far this goes back. This shows these concepts of the star variables go back.
Let me remind you: in Milton Friedman’s presidential address, which focused on the natural rate of unemployment, he deliberately structured the concept to be similar to the natural rate of interest that goes back to Knut Wicksell in the late 19th century. One of the interesting things with Wicksell’s work is that if you go to the very end of his book, which was written, I think, in 1898 in the original, he has a small section on policy.
After discussing this concept of the natural rate of interest that is just wonderful because it makes the economy be in equilibrium, when he goes to the section of policy, he recognizes that, of course, nobody can observe this measure, so nobody can actually use it in practical terms. When it comes to giving policy advice, he simply says, “Look, what the central bank needs to do is observe where prices are headed. If prices are rising, then clearly you need to raise the policy rate. If prices are falling, clearly you need to reduce the policy rate.”
This was really the premise of the so-called difference rules of simply moving in the right direction if you do not know where the stars are. Just use your policy instrument to move in the right direction, and you will be guiding the economy safely in steady state without needing to know where that is. This is the relationship with that work.
Beckworth: The takeaway from that paper is you need simple rules that don’t rely on output gaps or unemployment gaps, but rely more on difference rules. Is that right?
Orphanides: Or the real interest rate gap. The point is that you can actually very easily calibrate this error. Let’s say we’re mismeasuring the natural rate of interest by 100 basis points, and we follow a level-style Taylor Rule. What does that mismeasurement mean? Let’s say that r-star is 100 basis points higher than the estimates that are available inside the Fed. That means that policy is going to be calibrated to be 100 basis points too easy. If policy is too easy for a few years, of course you will end up with high inflation. This is the whole premise. The question is, how can you avoid that?
You cannot avoid the measurement error. You cannot avoid not knowing. What you can do is find a different style of policy rules that are robust to this uncertainty. One of the lessons that came out of that Brookings paper was that you can actually do pretty good policy with a difference rule. You simply change the nominal interest rate in the direction that you think you should be going. You do not need to know what r-star is. You should not be doing this with an unemployment gap or an output gap, because again, you don’t know what the level of potential output is.
If the economy is growing too fast, then tighten a little bit. If the economy is growing too slowly, then ease a little bit. That’s all you need. You’re effectively correcting toward what the natural rate of the economy is without having to take a stance of what exactly those estimates are. There are pros and cons to that. If you follow an approach like this, one of the things that you’re going to see in terms of any model is that you will be, from time to time, overcorrecting, and that’s okay.
You’re not going to be perfect. Sometimes you’re going to be a little bit above the inflation target, a little bit below it. You’re overcorrecting in a systematic fashion. The bottom line is that by overcorrecting in a systematic fashion, you end up always keeping inflation under control and never losing track of that. The Brookings paper was an interesting project. If I remember correctly, it was the last project that John worked on before moving to San Francisco. I was very lucky that I started working with John Williams at the Fed on this. John was in the model section. He could bring all of the modeling technology from that.
What we ended up doing was using three different models and finding robust, simple rules with alternative models, nesting the classic Taylor Rule as an alternative and difference rules that do not have any stars in any other alternative. We found that with just normal uncertainty and mismeasurement of the star variables, you would not want to be using level policy rules. You would not want to be using something like the classic Taylor Rule. You would not want to be targeting the unemployment rate or using r-star as your benchmark for policy. That was there. That was there over 20 years ago. The question is, has the Fed been using that? That’s the question.
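A minimal sketch of a first-difference rule in this spirit may help fix ideas. This is not the paper’s estimated rule; the 0.5 response coefficients are illustrative assumptions. Note that the only star it needs is the growth rate of potential output, and no r-star or output-gap level appears anywhere.

```python
# First-difference rule in the spirit of Orphanides and Williams (2002):
# adjust the policy rate rather than set its level, so no estimate of r-star
# or of the level of potential output is required. The 0.5 response
# coefficients are illustrative assumptions, not the paper's estimates.

def difference_rule(rate_prev, inflation, growth, trend_growth, pi_target=2.0):
    """Return the new policy rate, in percent."""
    delta = 0.5 * (inflation - pi_target)   # lean against inflation deviations
    delta += 0.5 * (growth - trend_growth)  # tighten if growing too fast, ease if too slow
    return rate_prev + delta

# With the rate at 4%, inflation at 3%, and 3.5% growth against a 2% trend,
# the rule prescribes 4 + 0.5*(1.0) + 0.5*(1.5) = 5.25%.
print(difference_rule(4.0, inflation=3.0, growth=3.5, trend_growth=2.0))
```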
Potential Drawbacks of Monetary Rules
Beckworth: Let’s talk about that. Before we do, you mentioned something. You mentioned there’s potential danger in using a level target. I, as someone who advocates nominal GDP level targeting, need to acknowledge this point and have some humility that, you know what, even if I love this rule, level targeting, despite all of its advantages, makeup policy, a lot of good things, there’s also great potential to go way off course and be very destabilizing.
You have a version of a nominal income rule we’re going to get to in just a minute. One other observation about this paper—because, again, what you show is output gap or the real interest rate gap—these things, potential real GDP or r-star, are just really hard to measure in real time. You might get a critic outside of New Keynesian monetary policy, let’s call it the mainstream central banking approach, who says, “Aha, that’s why we need to go back to money supply targeting.” Maybe you get an MMT-er who wants to do something else. Maybe you get John Cochrane arguing for the fiscal theory of the price level.
I think this critique applies to all of those approaches. If I’m a monetarist, I have to know real money demand. You mentioned you worked on this. We don’t know real money demand in real time. That’s a real variable that’s unobservable, that’s latent. Same questions. If I’m doing fiscal theory of the price level, I don’t know the discounted present value of primary real surpluses. They’re unobservable variables. If I’m an MMT-er, they say we need to know the potential capacity. Every macro theory has these unobservables. I just put that there as an aside. Any comments on that before we go back to difference rules?
Orphanides: Sure. I’m going to say that any model we write down relies on some basic assumptions. Of course, any simple policy rule you write down is going to have some unobservables in it, even the simplest ones. Some are simpler than others, and the errors are not the same across these concepts. Let me compare a couple. Let’s start with money growth versus the interest rate as the instrument. You will recall Milton Friedman, Allan Meltzer, Ben McCallum, Karl Brunner, and others; they were suggesting, try to make sure that money is growing at a reasonable pace. Don’t let that out of sight.
Now, you can design a simple rule with money growth, and it’s going to be good or bad depending on the underlying assumption. The Fed was using money growth as an indicator during the 1980s, and then it gave up. Why did it give up? Because the concept of the equilibrium velocity actually started being very, very difficult to measure. You need to have stable velocity if you’re going to be using money growth. If you do not have stable velocity, you’re not going to be using that.
This is why some people, and I know we’re going to discuss this later on, but I’m going to bring it in already a little bit right now. This is why some people say, “Why don’t you focus on nominal income growth?” Because nominal income growth is velocity-adjusted money growth. It’s very similar to money growth, but without having to worry about the velocity element. When I’m thinking about nominal income growth-style policy rules, they actually do have the flavor of the monetary growth rules that the old-fashioned monetarists would have been advocating.
The key difference is the focus on the growth rate as opposed to the level. This is where I want to make sure we’re on the same page. With a level target, you would always have to have in mind what level of nominal GDP would be good to have in the economy for the next 20 years. If I were working at the CBO, the Congressional Budget Office, I would need to have an assumption about that in order to figure out the fiscal dynamics. We need to have assumptions about this. The problem is that these level concepts are very sensitive to the underlying assumptions about potential output growth. Yes, if you put too much emphasis on the levels, you can be off.
At the Federal Reserve, I recall when I started my career, one of the fashionable concepts to be tracked at the time was the so-called p-star model of inflation. That relied effectively on the level of money and the level of potential output. Because it was a levels concept, it suffered from the same issues. In reality, figuring out the growth rate of potential output, and what I call the natural growth of income, is simply an order of magnitude less complicated than figuring out the level of potential output. This is the measurement comparison that you want to have in mind when you do these calculations.
Natural Growth Targeting Rule
Beckworth: Let’s talk about your rule, your natural growth targeting rule. Let’s maybe spell it out in all its details. You’re looking at the year-ahead, or four-quarters-ahead, forecasted growth of nominal GDP, or nominal income, compared to potential growth plus 2% inflation. Is that the idea?
Orphanides: Yes. This goes back to the paper you mentioned before that was published in the JME in 2003. This paper was one of the papers in a symposium on the 10th anniversary of the Taylor Rule. My task at that conference was to see how I could use the Taylor Rule framework, as an interest rate policy rule, to try to explain a broader set of policy frameworks: money growth targeting and nominal income growth targeting, in addition to the classic Taylor Rule that focused on the output gap.
One of the things I realized is that if you wanted to do money growth targeting with an interest rate instrument, what you would need to do was formulate a policy rule that would be tracking the changes of the interest rate as a function of the difference between nominal income growth and the natural growth rate of income. This is what money growth rules are doing with a money instrument. You can do exactly the same thing with an interest rate instrument.
There are advantages and disadvantages to that. One of the advantages is that you don’t need to worry about the mismeasurement of equilibrium velocity if you do that. Central banks actually find it easier to explain policy with interest rates. Once you get the basics down, then the next question is, if you’re trying to have a simple policy rule that is useful for policy purposes, should it be backward looking or forward looking? You can actually compare the two.
What I was arguing, and this is why my preferred version of the natural growth targeting rule is forward looking, is that one of the things that I observed as soon as I started my career at the Federal Reserve is that policymakers are always trying to make sure they understand where the economy is headed in the next few quarters. If you want to summarize the state of the economy in the simplest possible terms, you don’t want to use last year’s data. You actually want to use the information that the forecasting team is going to bring, the now-casting team is going to bring, say, what is current quarter GDP, what is GDP predicted to be in the next couple of quarters?
You want to use these short-term projections as the summary state of the economy. This is why a forecast-based rule along these lines would be more useful in practice than a rule that relies on historical data, and this is why I ended up preferring this growth targeting rule that is forecast-based. The preferred specification, to be exact, is three quarters ahead, year over year.
If you think about it, like this quarter, we are in Q1. By the end of the quarter, we will have data for Q4 of last year. We take that as a starting point, and we go forward four quarters from there, which is three quarters from now. Three quarters ahead, year over year, would be the specification of that rule. Again, if you look at it, it’s what I call the natural growth rule with an interest rate instrument: the change of the instrument depends on the difference between the short-term projection of nominal income growth and the natural growth rate of income. It’s an interest rate-based formulation of Ben McCallum’s base rule or Milton Friedman’s k-percent rule in a more modern setting.
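Putting that specification into code, a minimal sketch of the natural growth targeting rule might look as follows. The structure is as described above: change the rate in proportion to the gap between the three-quarter-ahead, year-over-year forecast of nominal income growth and the natural growth rate of income, defined as potential real growth plus the 2% inflation objective. The response coefficient of 0.5 and the numbers in the example are illustrative assumptions, not values from the episode.

```python
# Natural growth targeting rule with an interest rate instrument, as described
# above: move the policy rate in proportion to the gap between the forecast of
# nominal income growth (three quarters ahead, year over year) and the natural
# growth rate of income (potential real growth plus the 2% inflation objective).
# The response coefficient theta=0.5 is an illustrative assumption.

def natural_growth_rule(rate_prev, nominal_growth_forecast, potential_growth,
                        pi_target=2.0, theta=0.5):
    """Return the new policy rate, in percent."""
    natural_growth = potential_growth + pi_target
    return rate_prev + theta * (nominal_growth_forecast - natural_growth)

# Hypothetical 2021-style reading: an 8% nominal income growth forecast against
# 2% potential growth calls for tightening from a 0.125% policy rate.
print(natural_growth_rule(0.125, nominal_growth_forecast=8.0, potential_growth=2.0))
# 0.125 + 0.5 * (8.0 - 4.0) = 2.125
```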
Beckworth: Just to, again, spell this out, the inputs to your rule: one would be a forecast of nominal GDP, or nominal income, which you can get from a consensus measure, and then you need a forecast of potential GDP growth over this period, and that’s really your only unobservable. That’s the only thing you really need. Compare that to a Taylor Rule. With the Taylor Rule, you would need potential real GDP as well, but you would also need real GDP, which has measurement issues, and then you’d need some natural rate measure for that intercept term. You’ve got far more potential for measurement error in the Taylor Rule than in this one. This one minimizes the potential for errors. In fact, you found that the error in measuring the growth of potential GDP is very small compared to these other star measures.
Orphanides: Yes, and this is trivial to actually see in the data. If you look at the US estimates of potential output growth in the last 50 years, they have ranged from about 2% to 4-point-something percent.
Beckworth: That’s huge.
Orphanides: This is the variation we’ve had. Huge. Frankly, we would have been much better off if we had 4% potential output growth right now instead of 2%, but that’s about the variation we’ve had. In terms of errors in that, the real-time mismeasurement is almost always within one percentage point. Today, for example, 2% is the estimate that I see in the Summary of Economic Projections that the FOMC is giving us. I’d say, “I’m pretty confident that potential output growth is going to be between 1% and 3% if they give me a 2% estimate.” That’s the sort of error.
If we look at the level of potential output, the level can shift up and down. Historically, we have had errors that are several percentage points. If you have a productivity slowdown, for example—we’ve had that a couple of times—if you have a one percentage point slowdown and you are missing it for five years, you’re 5 percentage points off. If you are missing it for 10 years, you’re 10 percentage points off. We are not really 10 percentage points off, but 5 percentage points off is actually quite easy to make an error.
Same thing with the natural rate of unemployment. Again, you can do the same back-of-the-envelope calculations by thinking about how far off the natural rate of unemployment is and what kind of error that would create for the unemployment gap, which is similar to the output gap. You just need to apply Okun’s law to do that. Right now, the Fed is using an Okun’s law coefficient of two. What does that mean?
Let’s say that you think the unemployment rate is 4%, but instead it’s 6%. That’s a 2-percentage point error in the unemployment rate. By the way, this is very similar to the error in the measurement of the natural rate of unemployment that was there in real time in the early ’70s, like 2 percentage points. Not a big deal. Two percentage points. If you apply a coefficient of two, you get a 4-percentage point mismeasurement of the output gap. If you go back to 1970, they were applying a coefficient of three. The Okun’s law estimates at the time were three. That gives you a 6-percentage point error on the output gap. You can actually do these back-of-the-envelope calculations and see how far off you can be compared with the growth rates.
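That back-of-the-envelope arithmetic is easy to reproduce. Here it is in a few lines, using the roughly 2-percentage-point natural-rate error mentioned above.

```python
# Okun's law translation of a natural-rate-of-unemployment error into an
# output-gap error, reproducing the back-of-the-envelope numbers above.
u_star_error = 2.0  # percentage points, roughly the early-1970s real-time error

for label, okun_coefficient in [("coefficient of 2 (today)", 2.0),
                                ("coefficient of 3 (circa 1970)", 3.0)]:
    gap_error = okun_coefficient * u_star_error
    print(f"Okun's law {label}: {gap_error:.0f}-percentage-point output-gap error")
```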
When it comes to the unemployment rate, if you want to formulate a difference rule with the unemployment rate, these are actually the sort of rules that we used in the JME paper and also in the Brookings paper with John Williams. We used this in a number of papers. There the prescription is very simple. We know that the natural rate of unemployment moves slowly; it doesn’t jump around very much. So the prescription is, well, just be informed by whether the unemployment rate is rising or falling, and don’t be worried if the unemployment rate is moving sideways. If the unemployment rate is moving sideways, then don’t respond to the economy at all; just focus on stabilizing inflation. It’s really as simple as that. In the unemployment rate formulation, you don’t really even have the growth rate of potential GDP as a star variable at all.
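A minimal sketch of that unemployment-rate formulation, again with illustrative 0.5 coefficients: a sideways-moving unemployment rate contributes nothing, so only the inflation-stabilization term remains.

```python
# Unemployment-based difference rule as just described: respond to whether the
# unemployment rate is rising or falling, never to its level. The 0.5
# coefficients are illustrative assumptions.

def unemployment_difference_rule(rate_prev, inflation, u_now, u_prev,
                                 pi_target=2.0):
    """Return the new policy rate, in percent."""
    delta = 0.5 * (inflation - pi_target)  # always focus on stabilizing inflation
    delta -= 0.5 * (u_now - u_prev)        # ease when unemployment is rising
    return rate_prev + delta               # sideways unemployment adds nothing
```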
Federal Reserve’s Framework Review
Beckworth: Let’s segue into the Fed’s framework review, and I want to use another article of yours to make this transition. You had a 2018 Cato paper that was titled, “Improving Monetary Policy by Adopting a Simple Rule,” and you bring together a lot of the arguments you’ve been making over the years in one place. And we’ll provide links to all of these articles in the transcript so listeners can check them out.
You say this near the beginning of your piece, you say, “The combination of, one, meeting-by-meeting discretion, and two, multiple conflicting goals makes the Fed vulnerable to all the pitfalls that monetary theory and history teach us.” Two things you bring out there, there’s discretion and not a rule, number one, and number two, having multiple goals, conflicting goals. Then you go on to explain why a simple rule can solve this, as well as some of the critiques you respond to. Tell us about this paper.
Orphanides: Let me say a word about the multiple goals problem. I want to do this first to give credit to the Fed for moving in the direction of an inflation-targeting central bank by adopting a clear definition of price stability, 2%. That has now been settled, which is great; since 2012 we have had this, okay? But then also recognize that the Fed is not really an inflation-targeting central bank because it’s constrained by its mandate.
The law says that the Fed needs to deliver maximum employment, stable prices, and moderate long-term interest rates. Strictly speaking, this is infeasible to do. Most inflation-targeting central banks do not face this constraint. The legislation, in most cases, has been adjusted over the past 30 years in most countries that are doing inflation targeting so that price stability is recognized as the primary mandate of the central bank. This resolves this tension.
In the case of the Fed, this has not been resolved. We can actually look at periods when the Fed did good policy. Those were the periods when the Fed voluntarily said, in order to achieve our mandate, we need to first and foremost focus on maintaining price stability. Whenever the Fed elevated the status of the maximum employment mandate, for example, in the 1970s or, for example, recently, we ended up with an inflation episode. Yes, the mandate is part of the problem.
Now once we realize that the mandate is part of the problem, we realize that it’s far more important for a central bank like the Fed, compared to an inflation-targeting central bank with a single primary mandate, to be systematic in its policy and make sure that it seriously takes rules into account in designing policy. The rules that can help keep policy more systematic are effectively how policy can be constrained.
This will bring us to one of the most important elements of inflation targeting in the way that, I recall back in the ’90s, two famous academics in the US had characterized it. You will recognize both of them, Ben Bernanke and Rick Mishkin, before they went to the Fed. They said, one way to understand the inflation-targeting framework is as constrained discretion.
We want to find a way to help the central bank reduce its discretion and focus on maintaining price stability. This is what we need to do with the Fed. The problem with the Fed is that, because the law doesn’t help do that, it’s very important for the Fed to adopt a simple policy rule that will help constrain its discretion while it’s trying to achieve its mandate.
Beckworth: Now, in that paper, you also make a suggestion for the Fed doing more work on these simple rules. Let me quote from your paper again. You say, “The Fed could publish an evaluation of its rule on an annual basis and adapt its rule if needed. Updates to the Fed’s rule could be presented with the annual revision of the ‘Statement on Longer-Run Goals and Monetary Policy Strategy’ the Fed first published in 2012. Replacing these meeting-by-meeting discretions with a transparent process of selecting and periodically updating a simple and robust rule would ensure that monetary policy is systematic and contributes to social welfare over time.” Tell us about that idea.
Orphanides: One of the criticisms of simple rules, let’s start with that, is that, well, the environment around us changes. If you have a simple rule, for example, Milton Friedman’s 4% M2 growth as a policy rule, you say, “Well, but your estimate of potential output may change, your estimate of velocity may change, you want to adjust this.”
The whole point is to design a simple policy rule that is robust based on the knowledge you have at the time, and then every so often, reevaluate, reexamine the process through which you select what would be a robust rule in order to take into account changes in the economy, new information you have about modeling and measuring techniques.
The fact that a simple rule needs to be adapted from time to time is something that needs to be recognized. The recommendation there is that rather than focusing at every single meeting on using discretion to figure out what to do with the instrument right now—this is what the Fed is doing—it would be better for the FOMC, once a year, to use discretion to select a benchmark policy rule, and then use that benchmark policy rule to give a prescription that can be the baseline for the discussion at every meeting.
Now let’s understand how the process works, and let’s actually link it to a current process. When there is an FOMC meeting, there needs to be some benchmark for what the decision is going to be. Those of us who are calling on the Fed to adopt a benchmark rule are effectively telling the Fed, adopt a benchmark rule, then show us what the prescription from that policy rule is, that’s your benchmark, and then only use discretion on top of that. If your benchmark rule is calling for a 25-basis-point tightening, then use your discretion to say, “Okay, yes, we can do that, we don’t need to change that,” or, “maybe we want to wait another quarter before we do a 25-basis-point tightening.” That’s the level of discretion you want to use on top of the rule.
Right now, one of the things that we see in many central banks, and from time to time, we see this at the Fed, the benchmark is always do nothing. Do nothing is actually not a robust rule in any model I am aware of, and this is one of the reasons why we end up having too much discretion in the process of designing policy. This is why people like me complain about this meeting-by-meeting discretion, that, from time to time, causes policy errors, like the one we’ve had postpandemic.
Beckworth: Okay, let’s talk about the Fed’s framework review explicitly here. They had a big change in 2020; FAIT, flexible average inflation targeting, was introduced. They introduced asymmetries. They introduced makeup policy from below 2% but not from above; that’s one asymmetry. The other one we’ve talked about, probably the more consequential one, is that they now looked at shortfalls from maximum employment as opposed to deviations, the symmetric approach. Two asymmetries. What are your thoughts on FAIT or the 2020 change, and what would you like to see happen in this current one that’s going on right now as we speak?
Orphanides: A couple of things. First, I will give credit to the Fed for one change they had in 2020 that was an improvement in my view, and this was doubling down on the importance of keeping inflation expectations well anchored in line with the 2% price stability objective. In my view, if they had just stuck with that change and didn’t change anything else, they would have ended up with a better framework than the one they have right now.
The introduction of asymmetries, and especially the shortfalls from maximum employment that pushed the Fed to be backward-looking later on in its policy, was really asking for trouble. This is one example of what I called earlier a human tendency to try to micromanage and fine-tune the process. We know how this came about. This came about because the estimates that the Fed was using for the natural rate of unemployment were declining before the pandemic, and the Fed got into this mentality, “Hey, maybe the cost of letting the economy overheat is not that big.” That’s, of course, a terrible conclusion to draw.
They hard-coded that in the policy strategy, effectively inviting a policy mistake of having high inflation. We got that high inflation sooner than the Fed thought, but the policy mistake was already there. The framework was effectively making the Fed open to a mistake sooner or later. We saw it sooner, but it should not have been put there in the first place. The framework was not resilient the way it was set up.
A better framework would have been a framework that was always asking the question, how do we make sure the economy is growing at a steady pace going forward? This is 1950s language. This is Paul Volcker’s and Chair Greenspan’s language. You will notice one thing with that language, and this is where I’m going to go back to the mandate. This language never used the concept of maximum employment in describing what the Fed was doing.
One of the questions that I have right now for the Fed is, can the Fed drastically improve its framework if it insists on starting every statement by saying, “Our job is to deliver maximum employment in the country,” because every time they keep insisting on this, they’re effectively inviting policy errors going forward. That’s part of the question.
Beckworth: Your rule, your natural growth targeting, is that something you would like them to consider, a step in that direction?
Orphanides: Let’s talk a little bit about this. As we know, because this work is public, the Fed is very good in publishing the briefing material from its policy meetings with a five-year lag. We know that the Fed has been tracking simple policy rules internally for 20-plus years. This goes back almost 30 years. For the past 20 years, we actually have it because it was already there in the Bluebook and then Tealbook starting in 2004. Why do I bring that up?
If we do check those historical Bluebook and Tealbook documents, we see that the Fed has been tracking in real time the classic version of the Taylor Rule and the forecast version of the natural growth targeting rule that I mentioned, not using nominal income specifically, but instead a variant using real GDP growth and core PCE inflation. These things are very closely related. The Fed has already been using this sort of analysis internally.
The problem is that they have not selected anything as a benchmark. When the committee wants to apply its discretion and the rules are inconvenient, they simply ignore them. We cannot tell when they are doing this because the briefing documents are not disclosed until six years later. One of the things that the committee can do, again, in the spirit of constraining discretion in order to improve policy, would be to take that part of the analysis that is being shown internally, simple policy rules, the natural growth targeting rule, the simple Taylor Rule, and make them public in real time.
In order to do that, they will actually need to come to terms with a couple of tricky technical assumptions. In my view, this is what should be part of the review. What are those assumptions? If I mention the Taylor Rule, it requires nowcasts, within-quarter forecasts. The natural growth targeting rule requires three-quarter-ahead forecasts.
Once you realize that you actually do need information along these lines, you need to ask yourselves, what projections do we use? Whose projections do we use? Do we use the staff projections that the Fed does not make public? Or do we use a variation that relies on the Summary of Economic Projections that the committee is publishing every quarter?
If I were on the staff at the Fed, I could design alternatives and take them to the committee, and then have the committee decide how they want to communicate simple policy rules that would transparently inform the public of what useful benchmarks are, and that would constrain the discretion of the committee.
Let me say one more thing about this. I keep emphasizing you want to constrain discretion. What do I mean by that? If I announce a benchmark policy rule, the natural growth targeting rule, for example, and the natural growth targeting rule is telling me that I should raise interest rates by 25 basis points in the first quarter of this year, for example, then the discretion would be that I publish that and I say, “But I want to take into account some considerations that we have that are not reflected in the rule, and I will deviate from that benchmark, and I will not raise interest rates at all. I will keep interest rates steady.”
This is how you would be using discretion at the margin. It would be constrained discretion because you already published that the benchmark rule was suggesting that you should tighten by 25 basis points, so everybody can see that you’re deviating by a small amount. This is quite important because major mistakes occur when discretion deviates far from these policy rules.
For example, in 2021, as we know, the Fed deviated massively from policy rules, more than 200 basis points from the natural growth targeting rule, more than 300, 400 basis points in a few quarters from the classic Taylor Rule, for example. Those are the sort of deviations that the Fed would not have wanted to have if it had been transparent. It would have been constrained to explain why it was deviating so much. Of course, realizing that it could not plausibly explain why it had to deviate so much, it would have deviated far less, and the mistake would have been contained.
Beckworth: Yes, I think there’s a misperception among many that when you say to have a benchmark rule, you have to follow it religiously no matter what. No, you have a benchmark rule, and you use that to explain any deviations. You’re allowed to deviate, but you need to explain it; this will be what guides you.
This brings back the FORM Act, if you recall, from 2015, 2016. They introduced, I think it was a Taylor Rule as a benchmark, or actually, it was any rule that you want. Set a benchmark up, and if you have to deviate, fine, but please justify it. It’s a very modest request, but man, the Fed had an allergic reaction to it.
Orphanides: Yes, and if you go back to John Taylor’s beautifully written paper, presented at the 1992 Carnegie-Rochester conference and published in 1993, it actually focuses exactly on this point. Have a benchmark rule that you have found to be robust using your macroeconomic evaluation, and then use that as a benchmark and deviate from it when circumstances require that you deviate, but transparently explain that. That’s all it takes. It’s not about religion at all. It’s about being technocratic and systematic.
Beckworth: That’s right. I will put a plug in for your natural growth targeting, because you have presented several papers on it, at the Hoover Monetary Policy Conference last year and also, I believe, at a number of other places, and you had a version of it in the Southern Economic Journal for a conference there as well.
In those presentations, you see that with your rule, the natural growth targeting rule, the only time there’s a major deviation from what the Fed did is during the inflation surge during COVID. It caught the big policy error. At other times, what the Fed did really wasn’t that different. It wouldn’t have tied the Fed’s hands all the time. It just would have alerted them to a major policy mistake that was unfolding. I put a plug out there for it. Of course, I’m an advocate of nominal income targeting.
Orphanides: Yes, that’s right. The point is that, for any simple rule that you have, if you tell me, “Oh, I deviate a couple of quarters by 25, 50 basis points,” that’s not a big deal. We know that 25, 50 basis points for a couple of quarters are not really materially influencing macroeconomic outcomes in the models that the Fed is using. That’s why it’s not a big deal. Indeed, if you check the Bluebooks and Tealbooks, for the version of the natural growth targeting rule that the Fed has been publishing since 2004, Fed policy was pretty close to that policy rule all the time.
One of the things that’s going to be interesting to see, in a little bit more than a year, is by how much Fed policy deviated in 2021. Now, I reconstructed that using the Survey of Professional Forecasters projections, and that’s why I expect to see a deviation of between 200 and 300 basis points lasting for a year and a half. That’s a major deviation; there was no such major deviation in the previous 30 years.
Beckworth: We will eagerly anticipate that. With that, our time is up. Our guest today has been Athanasios Orphanides. Thank you so much for coming on the program.
Orphanides: Thank you.
Beckworth: Macro Musings is produced by the Mercatus Center at George Mason University. Dive deeper into our research at mercatus.org/monetarypolicy. You can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. If you like this podcast, please consider giving us a rating and leaving a review. This helps other thoughtful people like you find the show. Find me on Twitter @DavidBeckworth and follow the show @Macro_Musings.