
Saturday, March 03, 2012

Modern Macroeconomic Methodology



Modern Macroeconomic Methodology (MMM) rests on two pillars: Milton Friedman's methodology of positive economics and the Lucas critique. The problem is that an alternative possible title for "Econometric Policy Evaluation: A Critique" would be "The Methodology of Positive Economics: A Critique." One of the pillars consists essentially of the claim that the other is unsound.

How can two opposite claims both support the same theoretical edifice? I think it is very simple: Friedman's methodology for me, the Lucas critique for thee.


Below I argue for these claims.




A nickel version of Friedman's methodology of positive economics starts with the claim that models can be useful even if they are not true (even if they are false by definition). This is universally agreed. It implies that we shouldn't treat models as hypotheses to be tested, so we are not necessarily interested in every testable implication of a model. Instead we should care about the implications of the model for the variables which matter to us. The reason is that if the model fits the data on those variables, we can reasonably use it to forecast those variables.

The Lucas critique argues that a model can fit but not be useful for conditional forecasts. Following Marschak and many others (as cited in the paper, which includes the phrase "no claim of originality"), Lucas argued that the forecasts which most interest us are forecasts of how things will be different if policy is changed, but a model might fit the data for one policy regime and not for another. To get this far, it is enough to note that correlation isn't causation. A model in which fluctuations in A cause B can fit data generated by a world in which B causes A, right up until a policy maker begins to manipulate A (for example hoping to influence B).
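
A minimal numerical sketch of that point, with made-up numbers and variable names of my own choosing (nothing here is from Lucas's paper): regress B on A in data where B in fact causes A, then let a policy maker set A and watch the fitted relation fail.

```python
# Sketch: a model in which A "causes" B fits data generated by B causing A,
# until a policy maker starts manipulating A.  All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Observational regime: B causes A.
b = rng.normal(size=10_000)
a = b + 0.1 * rng.normal(size=10_000)

# Fit the backwards model B = beta * A.  It fits beautifully (beta near 1).
beta = a @ b / (a @ a)
print(f"fitted beta: {beta:.3f}")

# Policy regime: A is set exogenously in the hope of raising B.
a_policy = 2.0
b_policy = rng.normal(size=10_000)          # B is unmoved
print(f"model's prediction for mean B: {beta * a_policy:.3f}")   # about 2
print(f"actual mean B under the policy: {b_policy.mean():.3f}")  # about 0
```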

In particular, Lucas (and his followers and leaders) focus on how expected future policy affects current behavior. The true objective mathematical expected value of future policy variables conditional on current data depends on the policy rule. Unless there is no connection between what is likely to happen and what people think is likely to happen, the link between data and expected policy must change when the policy changes. One way to force oneself to consider this is to assume that agents know the true objective probability distribution of everything -- that they have rational expectations. Lucas ordered economists to so assume and the vast majority have obeyed.
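
In symbols (my notation, not taken from any particular paper): if the policy variable follows a rule, the objective conditional expectation is the rule itself, so changing the rule changes the mapping from data to expectations.

$$
X_{t+1} = g(Z_t) + \varepsilon_{t+1}, \qquad E\left[ X_{t+1} \mid Z_t \right] = g(Z_t),
$$

so any estimated relation between current data $Z_t$ and expected future policy is only as stable as the rule $g$.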

But note that Lucas's argument is exactly that a model can fit past data and yet yield misleading forecasts and, in particular, mislead us about the effects of policy reforms, which are exactly the most important forecasts. It seems to me that Friedman's methodology and the Lucas critique are not just logically inconsistent but basically opposite -- that the substance of Lucas's paper is a critique of Friedman's.

I have to put the key example which mattered somewhere so I will put it here. Lucas and Friedman agreed on something very important. They agreed that it was unwise to trust the Phillips curve. Phillips innocently noted a very simple pattern relating wage inflation and unemployment in the UK over a century (he is not to blame for the use made of his scatter plot). Some people decided that this showed a menu of options open to policy makers who could choose high inflation and low unemployment or low inflation and high unemployment. Friedman (and separately Phelps) argued that this made no sense. That if the inflation rate increased 1% and remained at that level forever, firms and workers would eventually incorporate 1% more expected price inflation when negotiating wages. So the effect of a permanent increase of inflation should be temporary. This is the key example which illustrated the Lucas critique.

It is, in fact, the key reason macroeconomists decided in the 70s that they had been barking up the wrong tree. The Phillips curve was augmented so that the unemployment rate was related to actual inflation minus expected inflation (not just inflation alone). It is very hard to believe (OK, impossible) that expected inflation is exogenous. Inflation depends on policy and so should expected inflation (note I have not assumed rational expectations, just not totally dumb expectations). The view became that the problem with the original curve was that expected inflation, a key variable which changes and which depends on policy, just happened not to change much in the UK during the century studied by Phillips. If you just look at the data and are an accidental theorist, you make assumptions which you would never make if you thought about them.
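
In symbols (a schematic of my own, not Phillips's or Friedman's equations): the original curve relates inflation to unemployment alone, while the augmented curve relates unemployment to the gap between actual and expected inflation.

$$
\text{original:}\quad \pi_t = f(u_t)
\qquad\longrightarrow\qquad
\text{augmented:}\quad \pi_t - \pi_t^{e} = f(u_t),
$$

so the apparent menu between $\pi_t$ and $u_t$ exists only so long as $\pi_t^{e}$ happens not to move, which is exactly what cannot be counted on once policy starts exploiting it.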

I promise the reader fortunate enough to not know economists that the claim that the two papers are the foundation of MMM (and a lot of modern microeconomic methodology too) is not controversial. How can this be?

Friedman's argument is used whenever the realism of assumptions is questioned. The conclusion of the discussion is (in my experience) that the models might be useful and that we should check whether they are rather than dismiss them. But it is also argued that it is essential for an economic model to have agents with well defined objectives maximizing, at least, the subjective expected values of those objectives. In practice, rational expectations are almost always assumed. At the same time, it is almost universally agreed that we don't have rational expectations (I can recall one economist once claiming he thought we did, but I think he was joking). Yet it is still often argued that models with agents with rational expectations might be useful and that we must assume rational expectations or our conclusions will be proven invalid in advance by Lucas.

The claim that rational expectations must be assumed is not as common as it was in the 80s. Many argue that one can assume rational expectations or some kind of boundedly rational learning such that agents can't be tricked by policy makers into always making expectational errors of the same sign. I think it is easy to prove that no such model can be tractable in the sense that expectations are a tractable function of available information.
Proof: if we can handle a model of boundedly rational learning and figure out agents' expected value of a policy variable X as a function of lagged information, then the policy maker can figure it out too and set X to that expected value plus one, so that agents make a forecast error of the same sign in every period. So, in practice, fear of the Lucas critique forces people to assume rational expectations (not immediately, since it might take some people a while to come up with the not so hard proof above, but soon and for the rest of their lives).
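
A minimal sketch of the proof, assuming (purely for illustration) that agents forecast the policy variable X by adaptive learning with a fixed gain; the learning rule and its parameters are hypothetical, but any expectations that are a computable function of lagged data would do.

```python
# Sketch: if expectations are a tractable function of past data, the policy
# maker can compute them too and set X = forecast + 1, so agents' forecast
# error has the same sign (+1) in every period.  Parameters are illustrative.

GAIN = 0.5        # adaptive-learning gain (hypothetical)
T = 10            # periods to simulate

forecast = 0.0    # agents' initial forecast of X
for t in range(T):
    x = forecast + 1.0                 # policy maker exploits the known rule
    error = x - forecast               # always exactly +1
    print(f"t={t:2d}  X={x:6.3f}  forecast error={error:+.3f}")
    forecast += GAIN * (x - forecast)  # agents update, but never catch up
```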

I think the last paragraph is the only thing I have written so far which might be controversial.

The problem is that the reasoning is a logical fallacy. The logic is that if we don't assume rational expectations and instead assume some other tractable expectations, then our work is definitely vulnerable to the Lucas critique. The conclusion drawn is that if we assume rational expectations it might not be. That is, from "P implies Q" it concludes "not P implies not Q", which is the fallacy of denying the antecedent.

Making assumptions about tastes and technology and then having agents with those tastes use the technology rationally is absolutely not sufficient to avoid fitting the past while totally failing to forecast the effects of a change in policy. It would be fine to use the model if the model were the truth (then the forecasts would be the best possible). It would also be fine if we knew it were approximately true (which just means that the forecasts would be close to the best possible). It is not at all enough to know that it fits available data.

An example. One model of interest to macroeconomists is the model of a representative consumer. The idea is that aggregate consumption is (about) what it would be if (for example) we were all identical and rationally maximised a utility function which depends on the stream of consumption. In the simplest version, it is assumed that the utility function is the sum of a term which depends on consumption and a term which depends on everything else (so consumption-saving decisions are separate from all other decisions). Also, in the oldest version, the utility function is assumed to be time separable -- the sum of terms each of which depends on consumption in one period. Typically pleasure now is an unchanging function of consumption now and the consumer maximizes the stream of pleasure discounted exponentially due to impatience.
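
In symbols (my notation), the time-separable, exponentially discounted version of the problem is

$$
\max \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t),
$$

subject to a budget constraint (suppressed here), where $c_t$ is consumption in period $t$, $u$ is an unchanging period utility function, and $\beta < 1$ is the discount factor reflecting impatience.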

It is very common (OK, universal in fact, and for relatively good reason) to assume utility functions in the parametric class called constant elasticity of substitution. So we can try to fit the data estimating as few as two parameters: the impatience factor and the intertemporal elasticity of substitution of consumption. Embarrassingly, estimates of the intertemporal elasticity of substitution are tiny (about 0.1). Here is a model which fits the data not horribly (though badly enough to be rejected against alternatives) but which gives implications which no one takes seriously: that the rate of growth of consumption would increase by only 0.5% if after-tax real interest rates permanently increased by 5%.
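
To see where the 0.5% comes from: with constant-elasticity utility $u(c) = c^{1-1/\sigma}/(1-1/\sigma)$, the standard log-linearized Euler equation makes consumption growth respond to the real interest rate with slope $\sigma$, the intertemporal elasticity of substitution,

$$
\Delta \ln c_{t+1} \approx \sigma\,(r_t - \rho),
$$

where $\rho$ is the rate of impatience, so with $\sigma \approx 0.1$ a permanent 5 percentage point rise in the after-tax real rate raises consumption growth by roughly $0.1 \times 5\% = 0.5\%$ per year.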

One way in which some economists deal with this problem is to assume that the utility function is not time separable, due to habit formation (as discussed by Keynes among others). The idea is that low consumption is particularly painful to people who have had high consumption (the manner to which he/she has become accustomed). One way to think of it is that consumption is addictive. This is absolutely consistent with full rationality. It can cause a low estimate of the intertemporal elasticity. However, the long run effect of a permanent increase in the real interest rate is not tiny.
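
One common way to write this (a parameterization chosen for illustration; the literature uses several variants) makes period utility depend on consumption relative to a fraction of last period's consumption:

$$
u(c_t, c_{t-1}) = \frac{(c_t - h\,c_{t-1})^{1-1/\sigma}}{1-1/\sigma}, \qquad 0 < h < 1.
$$

Roughly speaking, with $h$ large, consumption adjusts sluggishly period by period, which pushes estimated short-run elasticities toward zero, while the long-run response of the consumption path to a permanent change in the real rate is still governed by $\sigma$; that is the sense in which a low estimated elasticity can coexist with a long run effect that is not tiny.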

But see what has happened here. Writing down an optimizing model and fitting it to the data gave parameter estimates which imply terrible long term predictions. A variable (the degree of addiction to consumption) was not considered or, at least, was treated as exogenous when it is not. This is just like the case of expected inflation in the old, expectations-unaugmented Phillips curve.

Notice I have assumed throughout that there is a representative consumer with rational expectations.

I conclude that if you look at the data and then try to analyse them with a tractable model, you risk making assumptions which you don't believe (really it's a certainty, not a risk) and which are critical to your predictions. I think no such problem is or could be eliminated by putting some utility maximization between the assumptions and the implications which are confronted with data from the past and then used to predict the future.

I think all of our models and hypotheses will be vulnerable to the Lucas critique. This is true just because we have to make assumptions which matter in order to think about the world. When I wrote "our" I referred to us human beings, not just us economists.

6 comments:

JLD said...

Why is stating the obvious called an insight?


You write:

I have to put the key example which mattered somewhere so I will put it here. Lucas and Friedman agreed on something very important. They agreed that it was unwise to trust the Phillips curve. Phillips innocently noted a very simple pattern relating wage inflation and unemployment in the UK over a century (he is not to blame for the use made of his scatter plot). Some people decided that this showed a menu of options open to policy makers who could choose high inflation and low unemployment or low inflation and high unemployment. Friedman (and separately Phelps) argued that this made no sense. That if the inflation rate increased 1% and remained at that level forever, firms and workers would eventually incorporate 1% more expected price inflation when negotiating wages. So the effect of a permanent increase of inflation should be temporary. This is the key example which illustrated the Lucas critique. * * * If you just look at the data and are an accidental theorist, you make assumptions which you would never make if you thought about them.

This does not seem to be an insight by either Friedman or Lucas worthy of mention, especially as neither suggests a means or method by which to judge: (a) how rational people are; (b) what the expectations were; and (c) whether people will agree to pay higher prices which they know they may not have the money to pay (among others).

Wouldn't we all be better off to just be honest and say: (1) there is some correlation, as shown by the plot points; (2) decision making about such is driven by the psychology of human misjudgment; (3) therefore, trying to control inflation is going to be very hard, much much harder than just looking at the plot points suggests?

Anonymous said...

The two only seem contradictory if you have an axe to grind. Both are about useful predictions. Friedman says you need a good predictive model, whether or not the model is correct. Lucas says you need to be able to predict out-of-sample/alternative equilibrium behavior, and this will likely come if you have microfounded agents. Both require correct predictions, perhaps in different conditional spaces.

A maximizing representative agent that accurately predicted labor market or savings responses to expected inflation would be a useful model by both standards. They would both reject a hydraulic model that failed to predict stagflation.

reason said...

Anonymous,
I'm not really sure I understand where you are coming from. The main point of the post is that BOTH ideas are being used to justify the SAME modeling technique. Your answer doesn't address that issue.

Anonymous said...

I see the crackpots are out in force today. Both reason and John D (welcome back, you lunatic)! You're only missing one of the Four Horsemen of the Crackpocalypse; maybe you can see where Greg Ransom or Richard Serlin are.

marcel proust said...

... and for relatively good reason

This is an absurdly low bar in this context (the context being a discussion of economics as she is done).

Nathanael said...

Anonymous: " Lucas says you need to be able to predict out-of-sample/alternative equilibrium behavior, and this will likely come if you have microfounded agents."

Which is, to put it very politely, total bullshit. If you want to predict out-of-sample behavior, the LAST thing you want is microfounded agents, particularly with bad microfoundations. Broad-brush macro models with no individual agents -- "hydraulic" models -- have been much better at predicting out-of-sample behavior.

So much for Lucas.