I’ve worked off-and-on in the general area of political economy—understood as models in which policies are endogenously determined given a framework (like majority voting) for aggregating individual policy preferences—since some early papers with Greg Huffman; for example, our IER paper on the political economy of immigration and redistribution.
A current project—joint with Eric Young and Dan Carroll—applies some solution concepts from political theory, namely the uncovered set and the essential set, to examine implications for taxes on consumption, labor, and capital income (and associated transfers) in an Aiyagari-type model featuring rich heterogeneity in individuals’ income and wealth. We calibrate the model to US experience, and, strikingly, we find a unique majority-rule equilibrium (a Condorcet winner). In contrast to the data, the model outcome features minimal taxes on labor income and essentially no redistribution. We conduct some experiments to try to uncover the factors driving this outcome.
In a precursor to this paper, I considered similar issues in a model with much less agent heterogeneity (and simpler transitional dynamics). While the results in that paper were ultimately disappointing, one thing that did come out of it was a very stark example of the potential pitfalls for macroeconomists in applying probabilistic voting to multi-dimensional issue spaces. I think political-science theorists understand this, but among macro folk it’s less well understood (if understood at all): outcomes under probabilistic voting are quite fragile with respect to assumptions on the structure of candidate uncertainty about voter preferences. In fact, in the three-dimensional issue space of taxes on consumption, labor income, and capital income, a simple shift from additive to multiplicative errors can produce equilibria that are very nearly orthogonal. That’s the message of this paper, whose intended audience is those macro-political-economy types who are apt to pull out an “off-the-shelf” probabilistic voting framework as an answer to multi-dimensionality.
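To see where the fragility comes from, here is a stylized two-candidate sketch in textbook notation (my illustrative setup, not the paper’s specific one). Voter $i$ evaluates a tax vector $\tau=(\tau_c,\tau_n,\tau_k)$ by $U_i(\tau)$, assumed positive, and candidates $A$ and $B$ face i.i.d. preference shocks $\varepsilon_i$ with CDF $F$:

```latex
% Additive shocks: voter $i$ supports candidate $A$ iff
\[
  U_i(\tau^A) + \varepsilon_i \;\ge\; U_i(\tau^B),
\]
% so $A$'s expected vote share is $\tfrac{1}{N}\sum_i F\!\bigl(U_i(\tau^A)-U_i(\tau^B)\bigr)$,
% and with uniform $F$ both platforms converge on the utilitarian optimum
\[
  \tau^{*} \in \arg\max_{\tau} \; \sum_i U_i(\tau).
\]
% Multiplicative shocks: voter $i$ supports $A$ iff
\[
  U_i(\tau^A)\,(1+\varepsilon_i) \;\ge\; U_i(\tau^B)
  \quad\Longleftrightarrow\quad
  \varepsilon_i \;\ge\; \frac{U_i(\tau^B)}{U_i(\tau^A)} - 1,
\]
% and the symmetric-equilibrium first-order condition,
% $\sum_i U_i(\tau)^{-1}\,\partial U_i/\partial\tau = 0$, instead picks out
\[
  \tau^{**} \in \arg\max_{\tau} \; \sum_i \log U_i(\tau).
\]
```

With heterogeneous voters, the utilitarian and “log-utilitarian” platforms weight individuals very differently, so $\tau^{*}$ and $\tau^{**}$ can sit far apart in a three-dimensional tax space.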
A current project combines my interests in macro-type models of asset-pricing and alternative models of individual choice in the face of risk. In it, I pair disappointment aversion, as employed by Routledge and Zin (Journal of Finance, 2010) and Campanale, Castro and Clementi (RED, 2010), with rare disasters in the spirit of Rietz (JME, 1988), Barro (QJE, 2006), Gourio (Finance Research Letters, 2008), Gabaix (AER, 2008) and others. When the model’s representative agent is endowed with an empirically plausible degree of disappointment aversion, a rare disaster model can produce moments of asset returns that match the data reasonably well, using disaster probabilities and disaster sizes much smaller than have been employed previously in the literature.
This is good news. Quantifying the disaster risk faced by any one country is inherently difficult with limited time series data. And it is open to debate whether the disaster risk relevant to, say, US investors is well-approximated by the sizable risks found by Barro and co-authors in cross-country data. On the other hand, we have evidence—see Starmer (JEL, 2000), Camerer and Ho (JRU, 1994) or Choi et al. (AER, 2007)—that individuals tend to over-weight bad or disappointing outcomes, relative to the outcomes’ weights under expected utility. Recognizing aversion to disappointment means that disaster risks need not be nearly as large as suggested by the cross-country evidence for a rare disaster model to produce average equity premia and risk-free rates that match the data.
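The overweighting mechanism can be written down compactly. In Gul’s (1991) disappointment-aversion formulation (the building block behind the Routledge–Zin and Campanale–Castro–Clementi specifications; the notation here is mine), the certainty equivalent $\mu$ of a random consumption payoff $c$ solves

```latex
\[
  u(\mu) \;=\; \mathbb{E}\bigl[u(c)\bigr]
  \;-\; \theta\,\mathbb{E}\Bigl[\bigl(u(\mu)-u(c)\bigr)\,
        \mathbf{1}\{\,c < \mu\,\}\Bigr],
  \qquad \theta \ge 0.
\]
```

With $\theta = 0$ this collapses to expected utility; with $\theta > 0$, outcomes below the certainty equivalent—“disappointments,” which include disasters—receive extra weight, which acts much like inflating their probabilities. (Routledge and Zin generalize this by moving the disappointment threshold to $\delta\mu$ with $\delta \le 1$.) That is why a modest degree of disappointment aversion can substitute for the large disaster probabilities and sizes used elsewhere in the literature.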
I am currently extending the model to allow for time-varying disaster probabilities.
My recent papers in asset-pricing were jump-started by the experience of teaching a half-semester, second-year Ph.D. macro field course—focused on asset-pricing—at SMU back in 2012; if you’re interested in the (book-length) notes for that class, they’re here.