André Reslow defends his dissertation Electoral Incentives and Information Content in Macroeconomic Forecasts


André Reslow will defend his dissertation, Electoral Incentives and Information Content in Macroeconomic Forecasts, on Friday 19 March at 10:15 in Hörsal 2 at Ekonomikum. The defense will take place digitally.


The dissertation examines the behavior and forecasting performance of macroeconomic forecasters. It introduces political incentives into macroeconomic forecasting and shows that forecasters who prefer a particular outcome in an election or referendum will try to influence voters by publishing biased forecasts. The dissertation also studies forecasters' behavior and performance with a focus on information content. Specifically, it examines how forecasters use information from competitors, and how important it is to account for the publication date, and the information available at that time, when assessing forecasting performance.

The opponent is Professor Gisle Natvik, Handelshøyskolen BI, Oslo. The members of the examining committee are Professor Sven Oskarsson, Department of Government, Uppsala University; Professor Oskar Nordström Skans, Department of Economics, Uppsala University; and Docent Anna Seim, Department of Economics, Stockholm University.

The supervisors are Professor Mikael Carlsson, Department of Economics, Uppsala University, and Docent Jesper Lindé, Sveriges Riksbank.

Abstract (in English)

Essay I (with Davide Cipullo): This essay introduces macroeconomic forecasters as new political agents and suggests that they use their forecasts to influence voting outcomes. The essay develops a probabilistic voting model in which voters do not have complete information about the future economy and rely on professional forecasters when forming beliefs. The model predicts that forecasters with economic interests (stakes) in the outcome and influence over voters optimally publish biased forecasts before a referendum. The theory is tested using data surrounding the Brexit referendum. The results show that forecasters with stakes and influence released more pessimistic and incorrect estimates of GDP growth conditional on the Leave outcome than other forecasters did.

Essay II (with Davide Cipullo): This essay documents the existence of Political Forecast Cycles. A theoretical model of political selection shows that governments release overly optimistic GDP growth forecasts ahead of elections to increase the reelection probability. The theory is tested using forecast data from the United States, the United Kingdom, and Sweden. The results confirm key model predictions and show that governments overestimate short-term GDP growth by 10 to 13 percent during campaign periods. Moreover, the bias is larger when the incumbent is not term-limited or constrained by a parliament led by the opposition. Furthermore, election timing determines the size of the bias at different forecast horizons.

Essay III: This essay assesses to what extent forecasters use competitors’ forecasts efficiently. Empirical results using a large panel of forecasters suggest that forecasters underuse information from their competitors when forecasting GDP growth and inflation. The results also show that forecasters pay more attention to competitors when releasing short-term forecasts than medium-term forecasts. A belief updating model with noisy and private information supports the underuse interpretation and predicts that it is optimal to pay sizable attention to competitors’ work. Furthermore, the essay shows that a revision cost model can only match the observed behavior if asymmetric horizon discounting between revision costs and forecast-error losses is assumed.
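The logic of "underuse" in a noisy-information setting can be illustrated with a minimal sketch (not the essay's actual model; all noise levels and the attention weight below are hypothetical). If a forecaster observes a noisy private signal and a noisy competitor consensus about the same underlying outcome, Bayesian updating implies a precision-weighted combination; shrinking the weight on the consensus below that optimum raises the mean squared forecast error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

theta = rng.normal(0.0, 1.0, n)           # true outcome, prior N(0, 1)
sigma_x, sigma_c = 0.8, 0.5               # hypothetical noise levels
x = theta + rng.normal(0, sigma_x, n)     # forecaster's private signal
c = theta + rng.normal(0, sigma_c, n)     # competitors' consensus forecast

# Bayes-optimal forecast: precision-weighted average of prior mean (0),
# own signal, and the competitor consensus.
tau0, taux, tauc = 1.0, sigma_x**-2, sigma_c**-2
optimal = (taux * x + tauc * c) / (tau0 + taux + tauc)

# "Underuse": put only a fraction w of the optimal weight on competitors.
w = 0.3                                    # hypothetical attention level
underuse = (taux * x + w * tauc * c) / (tau0 + taux + w * tauc)

mse_opt = np.mean((optimal - theta) ** 2)
mse_under = np.mean((underuse - theta) ** 2)
print(f"MSE optimal: {mse_opt:.3f}  MSE with underuse: {mse_under:.3f}")
```

In this stylized setup the efficiency loss from ignoring competitors grows with the precision of the consensus, which is one way to read the essay's prediction that sizable attention to competitors' work is optimal.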

Essay IV (with Michael K. Andersson and Ted Aranki): This essay proposes a method to account for differences in release dates when assessing an unbalanced panel of forecasters. Cross-institutional forecast evaluations may be severely distorted because forecasts are made at different points in time and thus with different amounts of information. The proposed method computes the timing effect and the forecaster’s ability (performance) simultaneously. Simulations demonstrate that evaluations that do not adjust for the differences in information may be misleading. The method is also applied to a real-world data set of 10 Swedish forecasters, and the results show that the forecasters’ ability ranking is affected by the proposed adjustment.
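One natural way to formalize the problem Essay IV addresses is a two-way fixed-effects regression of forecast losses on forecaster and release-timing dummies, estimated jointly. The sketch below is illustrative only, not the thesis's actual estimator, and all parameter values are hypothetical: skilled forecasters are made to publish early (with less information), so a naive loss comparison penalizes them, while the joint estimation recovers ability net of timing:

```python
import numpy as np

rng = np.random.default_rng(1)
nf, nslots, nobs = 8, 4, 60

ability = np.sort(rng.uniform(0.2, 1.0, nf))     # true ability: lower = better
slot_effect = np.array([0.9, 0.6, 0.3, 0.0])     # earlier release -> larger expected loss

# Unbalanced panel: skilled forecasters (low index) tend to publish early,
# so timing and ability are confounded in raw average losses.
f_idx, s_idx, loss = [], [], []
for i in range(nf):
    p = [0.4, 0.3, 0.2, 0.1] if i < nf // 2 else [0.1, 0.2, 0.3, 0.4]
    for _ in range(nobs):
        h = rng.choice(nslots, p=p)
        f_idx.append(i)
        s_idx.append(h)
        loss.append(ability[i] + slot_effect[h] + rng.normal(0, 0.05))

f_idx, s_idx, loss = map(np.asarray, (f_idx, s_idx, loss))

# Naive evaluation: average loss per forecaster, ignoring release timing.
naive = np.array([loss[f_idx == i].mean() for i in range(nf)])

# Joint estimation: regress loss on forecaster dummies and slot dummies
# (slot 0 as baseline), recovering timing effects and ability simultaneously.
X = np.zeros((len(loss), nf + nslots - 1))
X[np.arange(len(loss)), f_idx] = 1.0
for h in range(1, nslots):
    X[s_idx == h, nf + h - 1] = 1.0
coef, *_ = np.linalg.lstsq(X, loss, rcond=None)
adjusted = coef[:nf]  # forecaster effects, comparable net of timing

print("naive ranking:   ", np.argsort(naive))
print("adjusted ranking:", np.argsort(adjusted))
```

In this simulation the naive ranking mixes timing into the ability comparison, while the timing-adjusted forecaster effects track true ability, mirroring the essay's point that evaluations which ignore information differences can be misleading.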

Download the dissertation here

Read more about André on his CV page