In a speech last week the President of the Minneapolis Fed, Narayana Kocherlakota, made the following rather startling claim:
Long-run monetary neutrality is an uncontroversial, simple, but nonetheless profound proposition. In particular, it implies that if the FOMC maintains the fed funds rate at its current level of 0-25 basis points for too long, both anticipated and actual inflation have to become negative. Why? It’s simple arithmetic. Let’s say that the real rate of return on safe investments is 1 percent and we need to add an amount of anticipated inflation that will result in a fed funds rate of 0.25 percent. The only way to get that is to add a negative number—in this case, –0.75 percent.
To sum up, over the long run, a low fed funds rate must lead to consistent—but low—levels of deflation.
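Kocherlakota's "simple arithmetic" is just the Fisher relation, nominal rate = real rate + expected inflation, solved for inflation. A minimal sketch of the calculation (the numbers are his illustrative ones):

```python
# Fisher relation: nominal rate = real rate + expected inflation.
# A permanently pegged nominal rate therefore pins down long-run inflation.
real_rate = 0.01          # Kocherlakota's illustrative safe real return (1%)
nominal_rate = 0.0025     # fed funds rate pegged at 25 basis points
implied_inflation = nominal_rate - real_rate
print(f"implied long-run inflation: {implied_inflation:.2%}")  # -0.75%
```

The whole controversy is over whether this steady-state relationship can be read causally, as the quote does.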
The proposition that a commitment by the Fed to maintain a low nominal interest rate indefinitely must lead to deflation (rather than accelerating inflation) defies common sense, economic intuition, and the monetarist models of an earlier generation. This was pointed out forcefully and in short order by Andy Harless, Nick Rowe, Robert Waldmann, Scott Sumner, Mark Thoma, Ryan Avent, Brad DeLong, Karl Smith, Paul Krugman and many other notables.
But Kocherlakota was not without his defenders. Stephen Williamson and Jesus Fernandez-Villaverde both argued that his claim was innocuous and completely consistent with modern monetary economics. And indeed it is, in the following sense: the modern theory is based on equilibrium analysis, and the only equilibrium consistent with a persistently low nominal interest rate is one in which there is a stable and low level of deflation. If one accepts the equilibrium methodology as being descriptively valid in this context, one is led quite naturally to Kocherlakota's corner.
But while Williamson and Fernandez-Villaverde interpret the consistency of Kocherlakota's claim with the modern theory as a vindication of the claim, others might be tempted to view it as an indictment of the theory. Specifically, one could argue that equilibrium analysis unsupported by a serious exploration of disequilibrium dynamics could lead to some very peculiar and misleading conclusions. I have made this point in a couple of earlier posts, but the argument is by no means original. In fact, as David Andolfatto helpfully pointed out in a comment on Williamson's blog, the same point was made very elegantly and persuasively in a 1992 paper by Peter Howitt.
Howitt's paper is concerned with the inflationary consequences of a pegged nominal interest rate, which is precisely the subject of Kocherlakota's thought experiment. He begins with an old-fashioned monetarist model in which output depends positively on expected inflation (via the expected real rate of interest), realized inflation depends on deviations of output from some "natural" level, and expectations adjust adaptively. In this setting it is immediately clear that there is a "rational expectations equilibrium with a constant, finite rate of inflation that depends positively on the nominal rate of interest" chosen by the central bank. This is the equilibrium relationship that Kocherlakota has in mind: lower interest rates correspond to lower inflation rates, and a sufficiently low value for the former is associated with steady deflation.
The problem arises when one examines the stability of this equilibrium. Any attempt by the bank to shift to a lower nominal interest rate leads not to a new equilibrium with lower inflation, but to accelerating inflation instead. The remainder of Howitt's paper is dedicated to showing that this instability, which is easily seen in the simple old-fashioned model with adaptive expectations, is in fact a robust insight and holds even if one moves to a "microfounded" model with intertemporal optimization and flexible prices, and even if one allows for a broad range of learning dynamics. The only circumstance in which a lower nominal rate results in lower inflation is if individuals are assumed to be "capable of forming rational expectations ab ovo".
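A toy version of the old-fashioned model makes the instability easy to see. This is a sketch under illustrative assumptions, not Howitt's actual specification: output rises with expected inflation via the expected real rate, inflation responds to the output gap, and expectations adjust adaptively, with the nominal rate pegged below the natural rate.

```python
# Toy old-fashioned model under a nominal rate peg (illustrative parameters):
#   y_t    = -a * (i - pi_e_t - r_star)          output gap vs. expected real rate
#   pi_t   = pi_e_t + b * y_t                    inflation responds to the gap
#   pi_e'  = pi_e_t + lam * (pi_t - pi_e_t)      adaptive expectations
a, b, lam = 1.0, 0.5, 0.5
r_star = 0.01              # natural real rate
i_peg = 0.0025             # pegged nominal rate, below r_star

pi_e = 0.0                 # start with no expected inflation
path = []
for t in range(40):
    y = -a * (i_peg - pi_e - r_star)
    pi = pi_e + b * y
    path.append(pi)
    pi_e = pi_e + lam * (pi - pi_e)

# The RE steady state has inflation = i_peg - r_star = -0.75%, but the
# adaptive dynamics run away from it: inflation accelerates upward.
print(path[0], path[-1])
```

Starting exactly at the rational expectations steady state (expected inflation of -0.75%) would keep the economy there forever; any deviation, however, is amplified, which is the sense in which the equilibrium is unstable.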
Howitt places this finding in historical context as follows (emphasis added):
In his 1968 presidential address to the American Economic Association, Milton Friedman argued, among other things, that controlling interest rates tightly was not a feasible monetary policy. His argument was a variation on Knut Wicksell's cumulative process. Start in full employment with no actual or expected inflation. Let the monetary authority peg the nominal interest rate below the natural rate. This will require monetary expansion, which will eventually cause inflation. When expected inflation rises in response to actual inflation, the Fisher effect will put upward pressure on the interest rate. More monetary expansion will be required to maintain the peg. This will make inflation accelerate until the policy is abandoned. Likewise, if the interest rate is pegged above the natural rate, deflation will accelerate until the policy is abandoned. Since no one knows the natural rate, the policy is doomed one way or another.
This argument, which was once quite uncontroversial, at least among monetarists, has lost its currency. One reason is that the argument invokes adaptive expectations, and there appears to be no way of reformulating it under rational expectations... in conventional rational expectations models, monetary policy can peg the nominal rate... without producing runaway inflation or deflation... Furthermore... pegging the nominal rate at a lower value will produce a lower average rate of inflation, not the ever-higher inflation predicted by Friedman...
Thus the rational expectations revolution has almost driven the cumulative process from the literature. Modern textbooks treat it as a relic of pre-rational expectations thought... contrary to these rational expectations arguments, the cumulative process is not only possible but inevitable, not just in a conventional Keynesian macro model but also in a flexible-price, micro-based, finance constraint model, whenever the interest rate is pegged... the essence of the cumulative process lies not in an economy's rational expectations equilibria but in the disequilibrium adjustment process by which people try to acquire rational expectations... under a wide set of assumptions, the process cannot converge if the monetary authority keeps interest rates pegged... the cumulative process is a manifestation of this nonconvergence.
Thus the cumulative process should be regarded not as a relic but as an implication of real-time belief formation of the sort studied in the literature on convergence (or nonconvergence) to rational expectations equilibrium... Perhaps the most important lesson of the analysis is that the assumption of rational expectations can be misleading, even when used to analyze the consequences of a fixed monetary regime. If the regime is not conducive to expectational stability, then the consequences can be quite different from those predicted under rational expectations... in general, any rational expectations analysis of monetary policy should be supplemented with a stability analysis... to determine whether or not the rational expectations equilibrium could ever be observed.
To this I would add only that a stability analysis is a necessary supplement to equilibrium reasoning not just in the case of monetary policy debates, but in all areas of economics. For as Richard Goodwin said a long time ago, an "equilibrium state that is unstable is of purely theoretical interest, since it is the one place the system will never remain."
Update (8/29). From a comment by Robert Waldmann:
I think that it is important that in monetary models there are typically two equilibria -- a monetary equilibrium and a non-monetary equilibrium. In fact, the only stable steady state under a nominal interest rate peg in the Howitt model is the non-monetary one.
The assumption that the economy will end up in a rational expectations equilibrium does not imply that a low nominal interest rate leads to an equilibrium with deflation. It might lead to an equilibrium in which dollars are worthless.
I'd say the experiment has been performed. From 1918 through (most of) 1923 the Reichsbank kept the discount rate low (3.5% IIRC) and met demand for money at that rate.
The result was not deflation. By October 1923 the Reichsmark was no longer used as a medium of exchange.
Rajiv:
Now that you quote a good chunk of it, I remember that I did read Peter's paper.
But when I read it, no knock on Peter, it was like reading something I assumed everybody already basically knew, and he was just checking it was really right, under a wide range of cases, and proving it more formally. Which is always worth doing. Yet it shouldn't have really needed saying, but it did.
Some things we know, even though we don't know how to say them formally. It's never a bad thing to be able to model them formally as well, but it shouldn't be a necessary condition for knowing things. What if Peter had never written that paper? You could still have explained the point, in words, but would it have been understood, or given any credibility? It shouldn't matter *that* much.
We don't really teach economics in grad school. We teach students who already know economics how to do economics with more technique. But some of them don't already know economics. And some of them never learn any (some teach themselves).
They find the technique easy, while the economics students often find it hard. So we conclude that a previous understanding of economics isn't really needed to do a PhD.
Anyone with a reasonable understanding of macro can run through the causal chain in this case.
I've been arguing that it *might* be possible, if the Fed changes the way it frames monetary policy, and communicates it in the right way, or adopts the right instrument, to loosen monetary policy while having nominal interest rates *rise*, even in the short run, and so escape the zero bound. Others (like Adam P) think I'm wrong. There's a real issue here that the best and brightest monetary theorists should be looking at. Instead, they are not even at step one. So I have to back-track, and argue that one of the bus drivers has the brake and gas pedals mixed up.
Thanks Nick (both for the comment and your excellent posts on this). I agree about Peter's paper - it confirmed rather than altered my intuitions, and the argument can indeed be made in a compelling way less formally. But the fact that he felt the need to write it suggests that many find the argument to be far from self-evident. The adaptive expectations hypothesis is so obviously flawed, and has been so savagely disparaged, that the idea that it might provide more robust insights in some cases relative to rational expectations (which is Peter's point) is hard for some to stomach.
I first started thinking about the possible instability of RE paths while reading Minsky on leverage cycles - it struck me that his financial instability hypothesis was inconsistent with RE but might well be consistent with some very sophisticated models of learning.
I believe that we should model agents as amateur econometricians as far as belief formation is concerned, rather than simply assume consistency of beliefs, which is what RE does. When Marcet and Sargent introduced least squares learning into the economics literature, I thought that this is where we were headed. The earliest results established convergence of learning to RE paths, but then Peter came along and upset the apple cart. And as far as I can tell, the learning literature just dried up and we were back to RE.
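The "amateur econometrician" idea can be sketched with a decreasing-gain update, which is the sample-mean (least-squares style) recursion. This is a rough illustration using a toy pegged-rate model of the sort Howitt studies, with made-up parameters, not any specific model from the literature; the point is that even a gain shrinking to zero does not rescue convergence under the peg.

```python
# Decreasing-gain (1/t) learning of steady-state inflation in a toy
# pegged-rate model (illustrative parameters, not from any paper):
a, b = 1.0, 0.5
r_star, i_peg = 0.01, 0.0025
ss = i_peg - r_star            # RE steady state: -0.75% inflation

pi_e = 0.0                     # initial forecast
for t in range(1, 100001):
    y = -a * (i_peg - pi_e - r_star)     # output gap
    pi = pi_e + b * y                    # realized inflation
    pi_e = pi_e + (1.0 / t) * (pi - pi_e)  # sample-mean style update

# Even with the gain shrinking to zero, the forecast drifts away from
# the RE steady state instead of converging to it.
print(pi_e, ss)
```

Deviations from the steady state grow by a factor (1 + ab/t) each period, and the product of these factors diverges, so the non-convergence is not an artifact of the fixed-gain adaptive rule.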
In any case, I think this has been a really interesting and lively few days on the econ blogs, thanks largely to you and Andy Harless. I also think that David Andolfatto hit the nail on the head when he mentioned the Howitt paper.
Rajiv, Nick,
ReplyDeleteI’m out of my depth here with regard to the theory, but the idea that the central bank will always have to keep rebalancing Nick’s pole (to take Nick’s pole analogy as an illustration) seems intuitive to me, notwithstanding any time commitment to keep the pole vertical – i.e. keeping the pole vertical forever is physically impossible and can only result in it toppling at the end. The fact that central banks have always changed their policy rates, given time, in order to go against the flow, seems to support this.
So my question is – what is so “rational” about expecting a process that defies this – i.e. that expects the central bank to be able to keep the pole vertical forever? Because the ability to do that seems to be the required assumption in suggesting that keeping nominal rates too low for too long is fraught with the danger of equilibrium deflation.
JKH: Suppose the pole had no inertia. Suppose it were a beam of light, a Star Wars light sabre thingy. That's what NK has in mind. No inertia in prices, or expectations. They go immediately to where they need to go.
Rajiv. Yep. I agree.
By the way, my interpretation of Lucas' interpretation of RE, is the same -- people are amateur econometricians. Given a long enough data set, they will get it right.
Why oh why would you base policy that profoundly affects the lives of millions of people critically, and with little tolerance for relaxation, on an assumption like rational expectations having to hold perfectly or near perfectly?
Harvard econ PhD student and blogger Jodi Beggs recently wrote of Ricardian equivalence, which relies on the rational expectations assumption:
"Given that many people seem barely able to even say what country the U.S. declared its independence from, I am not so concerned with such sophisticated and forward-looking behavior occurring on a large scale."
at: http://www.economistsdoitwithmodels.com/2010/07/07/dear-jon-stewart-economists-are-happy-to-tell-you-about-unemployment-so-locking-us-in-closets-is-not-needed/
Paul Krugman wrote of it:
Does this argument sound convincing? It did (and still does) to many economists. Akerlof pointed out, however, that it depends critically on the assumption that people do something that they are unlikely to do in real life: take account of the implications of current government spending for their future tax liabilities. That is, the claim that deficits don't matter implicitly assumes that ordinary families sit around the dinner table and say, "I read in the paper that President Clinton plans to spend $150 billion on infrastructure over the next five years; he's going to have to raise taxes to pay for that, even though he says he won't, so we're going to have to reduce our monthly budget by $12.36."
...the truth is that even families of brilliant economists don't have conversations like this. No, the point is that the effort isn't worth it. If a family has arrived at a sensible rule of thumb for deciding how much to spend, trying to improve on that rule by making sophisticated predictions about the future implications of government spending will improve the family's decisions so little that it isn't worth the investment of time and attention.
– the book, "Peddling Prosperity", 1994, page 208.
Ricardian equivalence is an interesting idea to think about, and perhaps a relatively small number of people do think somewhat like this, so it may be some minor force in the economy. But to think it happens on a large scale and to a large extent is to make some wild assumptions about the reality I've experienced and read about over a lifetime. Showing that such behavior is uncommon requires only mild assumptions, much milder than those relied on in typical econometric tests.
With the Kocherlakota controversy, if you have a model that does not assume and require absolute rational expectations ability, and another one that critically depends on it, with little or no tolerance for relaxing this assumption, then certainly when the two require very different policy you go with the one that doesn't critically need the fairy land assumption.
And yes I know, even with the fairy land assumption the equilibrium is not stable, but it is if you assume individuals to be "capable of forming rational expectations ab ovo"; then it works, so I can see some economists wanting to base extremely important policy critically on that, even if it's incredibly sensitive in a negative way to relaxation of that ludicrously unrealistic assumption, especially (or only) if it can be used to justify more libertarian economic policy.
JKH, you're right that the expectation of an interest rate pegged indefinitely at a low rate is not "rational" given central bank objectives and past behavior. But it's worth thinking through the implications of such a peg, as Howitt does, to try to understand how robust predictions based on the RE hypothesis are in this context. Like many useful thought experiments, this one doesn't deal with a realistic scenario, but I think that it does have the potential to shed some light on the functioning of the economy.
Richard, I agree, these may seem like esoteric debates but they have enormous practical consequences, which is all the more reason to get the analysis right. Thanks for the comments and references.
I think that it is important that in monetary models there are typically two equilibria -- a monetary equilibrium and a non-monetary equilibrium.
The assumption that the economy will end up in a rational expectations equilibrium does not imply that a low nominal interest rate leads to an equilibrium with deflation. It might lead to an equilibrium in which dollars are worthless.
I'd say the experiment has been performed. From 1918 through (most of) 1923 the Reichsbank kept the discount rate low (3.5% IIRC) and met demand for money at that rate.
The result was not deflation. By October 1923 the Reichsmark was no longer used as a medium of exchange.
Robert, thanks, and yes, I agree completely. In fact, the only stable equilibrium in the Howitt model is the non-monetary one. And although he doesn't mention this, the dynamics in his model fit the case of the German hyperinflation quite well.
What is your opinion about hysteresis (long-run effects of monetary policy on output)?
Even if we are willing to suspend disbelief and maintain the rational expectations assumption absolutely (and also exclude the non-monetary equilibrium), I don’t think the RE analysis makes any sense.
As I understand it, the setup is something like this: The Fed follows a Taylor rule, for which it must choose parameters. Consider two alternative sets of parameters, one of which results in a maintained zero nominal rate and the other of which doesn't, and both of which result in a monetary equilibrium. The first rule, the one in which the zero nominal rate is maintained, will result in deflation. Since the Fed chooses the parameters of the Taylor rule, we can say that the Fed has, in the first case, chosen to keep the nominal rate at zero.
But here's the problem: Rational expectations assumes that agents know beforehand (at least to an unbiased estimate) which rule the Fed is following. But it is implicit in Kocherlakota's argument that agents don't know beforehand which rule the Fed is following. He is talking about a choice that the Fed will have to make, say in 2011 or 2012, about whether to raise the nominal interest rate. He is considering what would happen if the Fed makes the wrong choice at that point in time. But if agents already know – before the Fed makes the choice – which rule the Fed is following, then it's meaningless to talk about the Fed's choice at that point in time. The Fed has already made the choice by committing to whatever Taylor rule agents believe it is following. If the Fed makes a choice different than what agents previously believed, that is a violation of rational expectations.
So no matter how confidently one believes in RE, one can't use it to analyze an interest rate decision that takes place at a specific point in time. One can only use it to analyze policy rules. Kocherlakota worries about what will happen "if the FOMC hews too closely to conventional thinking." But surely conventional thinking does not require the FOMC to follow a rule that would result in a zero interest rate even when the equilibrium real rate rises above zero. If Kocherlakota thinks it does, I'd like to hear him specify the parameters of that rule. I very much doubt he can come up with a set of parameters that is plausibly consistent with "conventional thinking."
Andy, here's how the RE thought experiment works, as I understand it. Suppose the Fed decides to change the Taylor Rule in such a manner as to result in a lower target nominal rate. Suppose that it is common knowledge that this change has been made and that it is permanent. Then nominal rates fall right away, inflation falls by an even greater amount, real rates rise temporarily and then fall back as convergence to a new steady state with lower nominal interest rates is attained.
There is no need in this scenario for a period of high nominal rates to lower inflation - in sharp contrast with the old monetarism.
I don't believe that this is how the world works. I also have no way of knowing whether Narayana believes this model or whether he just misspoke. But I thought it important to communicate that there is a coherent (if empirically dubious) argument to support a literal reading of his words.
Regarding your broader point, you're right that implicit in Narayana's argument is the view that there is genuine policy discretion at any point in time, and this kind of policy uncertainty is difficult (if not impossible) to absorb into the RE framework.
Rajiv: but in that thought experiment, what is the (exogenous) policy instrument? And what does "policy instrument" mean?
I interpret "policy instrument" to mean that variable which is assumed fixed, and is assumed to be expected by the agents in the model to be held fixed, even at points off the equilibrium path, that are not observed in equilibrium. An outside observer, who observed only the equilibrium path, could not tell which variable is the exogenous policy instrument.
Right now, given the way people think of monetary policy as a path for the nominal interest rate, and how the Fed communicates its policy as a time-path for the policy instrument, we can't expect raising the interest rate to be associated with raising actual and expected inflation. Because people would interpret it as a tightening of monetary policy.
Now, change the way the Fed communicates policy, and how people interpret policy, and I think it is perfectly possible for a loosening of monetary policy to be consistent with rising nominal interest rates, even immediately.
(And, I think that this is what's really going on in Jesus' models. He has (implicitly) switched the instrument and communications strategy.)
It's all to do with the Game-theoretic idea that expectations of players' actions off the equilibrium path matter, even though they are not observed.
I think this stuff is very important. If we understood it, we could easily escape the zero lower bound. It's what I've been trying to say for some time.
I've been chasing you across Mark Thoma's and Brad DeLong's blogs, making essentially this point.
Hi Nick,
I'm sorry to have scattered my comments all over the place, I've actually been following you and Andy around.
I think that you're right to highlight the importance of considering out-of-equilibrium beliefs. An upward nominal interest rate surprise, if interpreted in an optimistic manner by enough people, could indeed be expansionary.
In the simple RE model that I've been describing, the policy instrument (by your definition) is the Taylor Rule itself. This is assumed to be chosen freely by the bank and common knowledge. There is no explicit consideration of out-of-equilibrium beliefs but the implicit assumption is that beliefs about the rule are insensitive to observation. That is, the beliefs would be maintained even if the data were found to be inconsistent with these beliefs. But, of course, on the equilibrium path this never occurs.
Rajiv, do you have a reference to a paper that shows RE to justify what Kocherlakota said? The argument as you've stated it doesn't seem right in the standard NK type models.
When you say "Suppose the Fed decides to change the Taylor Rule in such a manner as to result in a lower target nominal rate" I take it you mean the "target nominal rate" to be the one that will prevail in the steady state? So then basically you just refer to lowering the inflation target?
If that's the case then the response of inflation depends on whether or not we start in the steady state. If we are in the steady state when the policy rule is changed then yes inflation falls immediately. If target inflation is being reduced from 2% to 1% and current inflation is 1% then inflation doesn't change but yes the nominal rate is increased by the Fed.
If target inflation is reduced from 2% to 1% and current inflation is 0% then neither inflation nor the nominal rate changes today; the Fed simply stands ready to raise the short rate earlier than under the old policy rule.
This last case seems to best describe what Kocherlakota has in mind, but he is still wrong. He is advocating a policy shift that stands ready to raise the nominal rate earlier, yet seems to think it constitutes an increase in the target inflation rate, i.e. that it helps avoid a deflationary steady state.
RE doesn't save him here: in no RE model does the Fed raising nominal rates from 0% to 10% cause today's expected inflation to go to 8% or higher (if the natural real rate is negative).
Rajiv, just to follow up. If the Fed changes the Taylor Rule to lower the current fed funds rate in a situation where current inflation is below target (like now), then expected inflation will in no case fall.
Either the fed has *raised* the inflation target and thus lowered the nominal rate because we're no further from target, in which case expected inflation rises.
Or, the fed has made the coefficient on (inflation - target) higher in which case expected inflation shouldn't change. (If it does change it increases as we get back to target sooner).
Adam, thanks for the comments, it's important to get all this straightened out. I don't have a link to the model that Jesus sent me but will quote from his email below. Before I do that, let me say a couple of things. First, as you suspected, the analysis starts with a steady state and asks what should be done to move to a steady state with lower inflation. In particular, should nominal rates be raised or lowered in the short run? I agree with you that this is not relevant to current policy, but it has a bearing on the theoretical debates that Kocherlakota's speech triggered. Second, Jesus made clear that he is not endorsing an increase in rates sooner rather than later; he just wanted to clarify some theoretical issues.
Now for the model. I quote directly from Jesus' email.
Begin Quote:
1) Take a basic NK model with Calvo pricing and capital, something like Mike Woodford would write.
2) Specify a Taylor rule of the form:
FFR/R_target_t = (inflation/inflation_target_t)^gamma*exp(m)
where FFR is the federal funds rate, R_target_t is the target FFR (with an index t because we let it change) and m is a monetary shock. Gamma>1 ensures stability of the model.
Remember that, in a Taylor rule world in general equilibrium, the central bank can pick either R_target or inflation_target, but not both. Once one of them is picked, the other needs to satisfy:
beta*R_target_t = inflation_target_t
(subject of course to the Zero Lower Bound, which prevents us from setting an R_target below 1).
3) Then, you can rewrite the Taylor rule as:
FFR = R_target_t * (inflation/inflation_target_t)^gamma*exp(m)
4) Substitute beta*R_target_t = inflation_target_t
Then, we get:
FFR = R_target_t * (inflation/(beta*R_target_t))^gamma*exp(m)
and rearrange terms:
FFR = ((1/beta)^gamma) *(R_target_t^(1-gamma)) * (inflation^gamma)*exp(m)
What happens if there is a shock to R_target_t? For example, if R_target_t falls?
Well, given a level of inflation, (R_target_t^(1-gamma)) is bigger than before (R_target_t is smaller and gamma>1), so the FFR rises. That is, the monetary authority responds by raising the FFR in this period. As inflation goes down because the FFR is higher, we get closer to the new R_target_t. This increase in the FFR ensures that the equilibrium is stable.
End Quote
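The algebra in steps 3-4 of the quoted email is easy to verify numerically. A quick sketch (beta, gamma, and the inflation level are made-up values; m is set to zero): the two forms of the rule agree, and lowering R_target raises the FFR for a given level of inflation, as the email says.

```python
# Taylor rule from the quoted email, with the monetary shock m = 0:
#   FFR = R_target * (inflation / inflation_target)^gamma,
# under the steady-state restriction inflation_target = beta * R_target.
beta, gamma = 0.99, 1.5     # illustrative discount factor and Taylor coefficient
inflation = 1.02            # gross inflation rate, held fixed for the comparison

def ffr(R_target):
    inflation_target = beta * R_target
    return R_target * (inflation / inflation_target) ** gamma

# Rearranged form from step 4 of the email:
def ffr_rearranged(R_target):
    return (1 / beta) ** gamma * R_target ** (1 - gamma) * inflation ** gamma

# The two expressions coincide, and with gamma > 1 a *lower* target
# raises the current FFR for given inflation:
assert abs(ffr(1.03) - ffr_rearranged(1.03)) < 1e-12
print(ffr(1.02), ffr(1.03))
```

The key term is R_target^(1-gamma): with gamma > 1 the exponent is negative, so a fall in the target mechanically raises the right-hand side, which is the stabilizing response described in the email.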
Continued in next message...
Adam, continuing on from my last message, Jesus sent me simulation results for a particular specification of the model. I don't want to post his figures without permission, but here is what they show:
Begin Quote
To illustrate this point, I have a dynare code, changing_target.mod, where I do the following:
1) As I was mentioning before, I take a basic NK model with Calvo pricing and capital. It is a relatively simple model because I do not have habit persistence and investment adjustment costs. On the other hand, I introduce quite a bit of nominal rigidities (the calvo parameter is 0.75, implying an average duration of prices of around 1 year, quite more than the micro evidence).
2) I make R_target_t, instead of being a fixed value, a changing variable (and with it, the inflation target). The idea is that, by doing so, I can explore the effects of a change in the target FFR. Ideally, it would be great to specify a once-and-for-all change in the target FFR. However, that would take me quite a bit of time because it implies non-stationarity of the problem and I wanted to get a quick answer.
Instead, I assume that it follows an AR(1) in logs:
log(Rtarget) = (1-rrhotarget)*0.01+rrhotarget*log(Rtarget(-1))-0.01*etarget;
where rrhotarget= 0.9999999
that is, as close to 1 as I can get before the code breaks down. The idea is that we have today a shock that lowers the target FFR and that shock stays with us (nearly) forever. While I recognize that, sometimes, there are subtle convergence problems when we go from 0.9999999 to 1, I do not think this is the case here.
Note also that the shock to Rtarget is multiplied by -0.01 (the minus is just to generate a shock that lowers the target FFR, and 0.01 scales it so it is not too big; we want to avoid being thrown into Rtargets less than 1, which are not feasible).
The fact that, in my specification, agents believe that other shocks can come in the future to the target FFR is not terribly important because we are linearizing and hence, the variances of future shocks do not enter into the current decision rules of agents at time t. Also, I checked that in a non-linear second order approximation these effects are nearly zero...
In summary, by tracing out the Impulse-Response functions (IRFs) of the economy to a shock etarget of standard deviation 1 (scaled by -0.01), which lowers the target FFR (nearly) forever, we can answer the question of causality in a well-defined (and rather standard) way.
What happens after this change in the target FFR by the central bank?
The IRFs are included in the file irf_target.pdf. Look at the bottom panel on the right: it shows the drop in the FFR target. Then, you can see, right on top, ppi (inflation) going down. The interesting thing is that in panel (3,2), the FFR goes down less than R_target_t at impact. This is the equilibrium stability showing itself.
Again, let me emphasize that this is a well defined experiment: we wake up, the FED decides to lower the FFR target, this lowers inflation in a causal way and we do not lose stability.
How can this happen? I think that part of the confusion in the discussion is that all these New Keynesian models assume Rational Expectations. When the FED changes its target FFR, everyone knows it (and everyone knows that everyone knows it, and so on) and we just jump directly onto the new equilibrium path (in my model it is a bit more subtle because agents know there is a distribution of probabilities over this happening, but let's forget about that), because everyone immediately readjusts their behavior to the new policy regime (except firms that cannot adjust their prices; they wait until it is their turn to update them).
End Quote
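For readers who want to see the mechanics without running Dynare, the logic of the experiment can be sketched in a few lines. This is a deliberately stripped-down flexible-price version of the rational-expectations argument (my own simplification, not Jesus's model, with illustrative coefficients): combining the Fisher equation with an active Taylor rule yields a difference equation whose unique bounded solution has inflation jump to the lower target implied by the lower FFR target.

```python
# Toy flexible-price sketch of the RE logic (a simplification, NOT Jesus's
# Dynare model; phi is an invented illustration value).
# Fisher equation: i_t = r + E[pi_{t+1}]
# Taylor rule:     i_t = r + pi_star + phi*(pi_t - pi_star), phi > 1 ("active")
# Combining them gives the equilibrium difference equation:
#   pi_{t+1} = pi_star + phi * (pi_t - pi_star)

r = 0.01      # real rate of 1%, as in Kocherlakota's arithmetic
phi = 1.5     # illustrative Taylor-rule coefficient

def path(pi0, pi_star, T=12):
    """Iterate the equilibrium difference equation from initial inflation pi0."""
    pis = [pi0]
    for _ in range(T):
        pis.append(pi_star + phi * (pis[-1] - pi_star))
    return pis

# The Fed pegs the target FFR at 0.25%; via i* = r + pi*, the implied
# inflation target is 0.25% - 1% = -0.75%.
pi_star_new = 0.0025 - r

on_target = path(pi_star_new, pi_star_new)   # expectations jump to new target
off_target = path(0.02, pi_star_new)         # expectations stuck at old 2%

print(on_target[-1])   # stays at -0.0075: the unique bounded equilibrium
print(off_target[-1])  # explodes: any other path violates boundedness
```

With phi > 1, any path that does not jump immediately to the new target diverges, which is exactly why the rational-expectations solution selects the deflationary steady state. Whether learning dynamics would ever deliver agents onto that path is the separate question raised later in the thread.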
Rajiv, thanks for the response. I've only just now scanned it but still, I think Jesus's counter-example somewhat misses the point.
As you say, R_target pins down pi_target (the inflation target), so lowering R_target is equivalent to lowering pi_target.
This brings up two things:
1) (I know you've already said you agree with this, but it is relevant to whether Kocherlakota is saying something even a bit sensible.) Is a lowering of R_target really what's happening right now? Clearly not. The FFR is low because pi is persistently below target and the output gap is huge. Yet Kocherlakota seems to be interpreting it that way.
2) What Kocherlakota advocates, as a change to the policy rule, is for the FFR to be raised before pi has risen close to pi_target. That is, he wants the Fed to raise rates earlier than the current policy rule would stipulate. This is equivalent to lowering pi_target, just as Jesus interprets it, but it constitutes an increase in the prevailing real interest rate *today*, and as such the policy-rule change that Kocherlakota is advocating is a tightening. That's just crazy and, most importantly, it would make the deflation worse, not mitigate it.
Again, I understand that neither you nor Jesus is defending Kocherlakota here, but nonetheless what he said does not have a sound theoretical basis (or at least Jesus is not providing one). Kocherlakota directly implied that this sort of change to the policy rule would help *avoid* a prolonged deflation. This is manifestly not what NK models would say.
Adam, there are two separate (but related) issues here: (i) does the model have any bearing on the validity of Narayana's claim, and (ii) are the predictions of the model robust?
Regarding the first point, I'm inclined to agree with you since we are obviously not in a steady state. It's possible that Narayana had something like this in mind when he made the claim but as you say, the model is actually silent on this. However, I should note that Jesus thinks it is relevant because he has in mind a "virtual" nominal rate, currently negative, so "raising" this virtual rate to make it less negative involves no change in the actual rate.
But it's the second point that I find more interesting because it shows just how different RE predictions are relative to the old monetarism, and raises important questions about robustness. Would you agree that under the assumptions of the model, starting from a steady state, a lower target inflation rate requires a lowering of the nominal interest rate? If so, this would come as a big surprise to the likes of Friedman or Volcker. So who is right in this case and why? I think this is a more important question right now than whether or not Narayana's claim is coherent. (But you're right, I should not have said that the model establishes the coherence of his claim, it does not really do this.)
"Would you agree that under the assumptions of the model, starting from a steady state, a lower target inflation rate requires a lowering of the nominal interest rate? "
I'm tempted to say no, that a lower inflation target implies that the nominal rate must increase until inflation falls. However, I don't think that's what this model says.
I think you're correct that in the model a lower inflation target implies that inflation falls instantly to the new value (because otherwise the Fed threatens to tighten and the threat is believed), and if the Fed fails to lower the nominal rate immediately then the real rate ends up too high and inflation falls further. So a lower inflation target should require a lowering of the nominal rate. (The Woodford/Curdia papers had some impulse responses under differing policy rules that, in a different context, I think made this point more starkly than the standard NK off-equilibrium threats used to pin down equilibrium paths.)
Is this a disconnect with reality? It depends on whether it's prices or inflation that is sticky and people are usually pretty loose on this. The model, to my understanding, says prices are sticky and inflation is not. The data tends to say inflation is very persistent (on average) but then it sometimes has been observed changing abruptly to policy changes (thinking Sargent's 4 big inflations here).
So I guess the conclusion is that I'm not ready to give an opinion on the second point. But you're right, it is the more interesting question.
Rajiv,
I don't mean to beat a dead horse, but I think there's another, deeper question here.
Does "equilibrium" really mean "steady-state equilibrium"? You use it that way when you talk about the transition to steady state as "disequilibrium dynamics". Clearly so does Kocherlakota, Williamson and friends.
Seriously though, someone on one of the blogs made the analogy with the Solow growth model: if you're on the saddle path but not at the steady state, aren't you in equilibrium?
This is mystifying me a bit; my understanding of equilibrium is just that markets clear, everyone optimizes, and nobody violates a constraint. Thus "disequilibrium" is something you'd never really see. If a price is too high to sell all the stock of some good, then we'd say the seller still holds it; the market has "cleared" in the sense that everything is held by someone and no constraint is violated.
As to why the price didn't fall to sell all the stock, you'd say that either a constraint prevented it or the vendor decided that holding it in inventory was preferable to selling at a lower price.
The point here is that the Howitt thought experiment of the Fed shifting to a lower steady-state nominal rate doesn't seem to lead to any instability if you allow for equilibrium paths that aren't in steady state.
The manner in which the Fed tries to shift to a lower steady-state nominal rate would not be simply to lower today's nominal rate and hope expectations coordinate on lower inflation; that is not how the Taylor-rule models work (and if you're thinking of John Cochrane here, I think he has it wrong).
If the Fed shifts to a lower steady-state nominal rate, it does so by setting a lower inflation target; then, if inflation doesn't move immediately to the new steady-state rate, the Fed *increases* the current nominal rate by enough to raise the real rate, output and inflation fall, and we move towards the steady state.
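To make this description concrete, here is a toy simulation under an assumed backward-looking (sticky-inflation) Phillips curve; the coefficients are made up for illustration and this is not any particular published model. The point is just that the rule initially raises the nominal rate above its old steady-state value, and inflation then glides down to the new target.

```python
# Toy sticky-inflation sketch of the disinflation described above (an assumed
# reduced form with invented coefficients, not a specific model). A Taylor
# rule with phi > 1 plus a backward-looking Phillips curve: a positive
# real-rate gap pulls inflation down.

r, phi, kappa = 0.01, 1.5, 0.5   # real rate, rule coefficient, adjustment speed
pi_star = 0.0025 - r             # new inflation target implied by a 0.25% FFR peg
pi = 0.02                        # inflation starts at the old 2% target

rates = []
for t in range(40):
    i = r + pi_star + phi * (pi - pi_star)   # rule RAISES i while pi > pi_star
    rates.append(i)
    pi -= kappa * ((i - pi) - r)             # real rate above r lowers inflation

print(rates[0])   # initial rate exceeds the old steady-state nominal rate of 3%
print(pi)         # inflation has converged close to pi_star
```

So within this rule-based logic, "shifting to a lower steady-state nominal rate" begins with a rate hike, which is the sense in which the advocated change amounts to a tightening.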
Adam, a fundamental feature of equilibrium (aside from those you mention) is consistent beliefs. Equilibrium paths need not be steady states, they can be transition paths to steady states, limit cycles, or even chaotic attractors (as in Grandmont, Econometrica 1985). But they satisfy consistent beliefs (or rational expectations) by definition.
In this sense the Howitt model (and the Evans video posted by Mark) explored disequilibrium dynamics, while the model that Jesus sent me looks at equilibrium dynamics in response to a changed target inflation rate.
Now here's the point: the convergence of an equilibrium path to a steady state tells us nothing about the stability of the path with respect to disequilibrium dynamics (what Evans calls adaptive learning). The path might converge to a steady state but will agents converge to the path if they start with inconsistent beliefs? Understanding this question, I believe, is key to the future development of macro. I don't know why there is so much confusion about it since Marcet/Sargent introduced least-squares learning to deal precisely with this issue and Evans and collaborators have been working on it for years.
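The instability in question can be illustrated with a deliberately crude sketch. Assume the nominal rate is pegged and that, around the peg's steady state, actual inflation overshoots expected inflation (a reduced-form stand-in for Howitt's demand channel, with a made-up coefficient lam > 1), while agents update beliefs adaptively as a crude proxy for least-squares learning. Beliefs then run away from the rational-expectations steady state rather than converging to it.

```python
# Stylized sketch of Howitt-style instability under an interest-rate peg
# (assumed reduced form with invented coefficients, NOT Howitt's 1992 model).
# Peg the nominal rate at i; steady-state inflation is pi_ss = i - r.
# Assume actual inflation overshoots beliefs:
#   pi_t = pi_ss + lam * (pi_e - pi_ss),  lam > 1
# (a lower perceived real rate raises demand and hence inflation).
# Adaptive learning: pi_e <- pi_e + gamma * (pi_t - pi_e).

i, r = 0.0025, 0.01      # FFR pegged at 0.25%, real rate 1%
pi_ss = i - r            # -0.75%: Kocherlakota's deflationary steady state
lam, gamma = 1.5, 0.2    # invented illustration values

pi_e = pi_ss + 0.001     # beliefs start a hair ABOVE the steady state
for t in range(60):
    pi = pi_ss + lam * (pi_e - pi_ss)   # actual inflation overshoots beliefs
    pi_e += gamma * (pi - pi_e)         # beliefs chase actual inflation

print(pi_e)   # beliefs diverge upward; a start below pi_ss diverges downward
```

Under these assumptions the steady state satisfies rational expectations but is repelling under learning: a small optimistic error spirals into inflation, and a small pessimistic one into deepening deflation.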
Please read my posts (linked above, before I mention Howitt) on disequilibrium dynamics and rational expectations. I would never confuse an equilibrium path with a steady state.
Rational expectations is part of the definition of equilibrium? Aren't we now ruling out any notion of equilibrium in many classes of models? Say those with bounded rationality or other behavioural models with less than rational agents?
It's been a few years, but I seem to recall that in Sargent's conquest-of-inflation book agents were boundedly rational; there were self-confirming equilibria where private agents' beliefs were consistent with the monetary authority's beliefs, but neither had beliefs consistent with the true data-generating process. Hence the study of escape dynamics, where eventually a large enough shock got you out of the self-confirming equilibrium. Was that not really an equilibrium? Would this model have any equilibria by your definition?
And what about what gets taught to undergrads, they certainly use the term "equilibrium" to mean the intersection of the IS and LM curves or the intersection of AS and AD curves. In this context, with a static model, the word equilibrium means nothing other than clearing of all markets.
Perhaps I've never properly understood but I always thought of rational expectations as a property of agents, not as part of the very definition of equilibrium.
Adam I'm on the road so this will be brief but the REH is nothing more or less than Nash equilibrium.
Yes, and aren't there other notions of equilibrium besides Nash's?
I'm asking, if Sargent's self-confirming equilibria aren't equilibria, then what are they?
As I recall, the whole point of these examples was that both the private sector and the monetary authority had mis-specified forecasting functions in such a way that they found an equilibrium where each other's behaviour confirmed the other's forecasting function, and so neither ever learned the true model. (Or perhaps the misspecification was such that they couldn't learn the truth, like using linear forecasting functions when the truth is non-linear.)
These agents would never find a Nash equilibrium, but they did find states that Sargent called an equilibrium.
Adam, of course you can have weaker and stronger notions of equilibrium than Nash, but they all involve restrictions on beliefs and need to be checked for stability under disequilibrium dynamics, starting from states in which beliefs are not self-confirming. Sargent, Evans, and Howitt all do this; most people don't.
ok, thanks for your patience :)
The fact is that neither assumption discussed in this post matches observations. The Fed's rate has always lagged behind inflation and thus has been a passenger, not the driver. To argue about the capabilities of a driver who is not controlling the wheel is a profound topic, but one without a real outcome.
The rate of price inflation has been driven by a different force, and the approaching deflationary period had been predicted long before the FOMC brought the rate down to between 0% and 0.25%. Hence, this action by the Fed was also well foreseen five years ago (http://mechonomic.blogspot.com/2010/09/of-deflation-once-again.html).
Please note that I did not use the word "stable" in the quoted passage. Therefore your comment which notes that the monetary equilibrium is not stable is not a correction.
Robert, I agree, and my comment was intended as an addendum and not a correction. I'm sorry if this was not sufficiently clear.
People who don't agree can shout all they like... the data universally supports this conclusion, and neo-classical economists have to jump through contorted intellectual loops and spins to weave a tangled explanation of why they are not wrong... Occam's Razor would suggest they are wrong, both by the complexity of their arguments and by the fervor of their objections.