We use a large number of fiscal forecasting models to generate our bottom-up forecasts of the public finances. This box outlines why models are essential forecasting tools, the various types of model used and how their performance is assessed.

We use a large number of fiscal forecasting models to generate our bottom-up forecasts of the public finances. These models are typically owned and maintained by other parts of government. In particular, most tax forecasting models are maintained by HMRC while most benefit forecasting models are maintained by DWP. We also work with forecasters in the Treasury (e.g. on debt interest and EU finances), DECC (on environmental levies), BIS (on student loans) and CLG (on local authorities). And we have a number of models that we maintain ourselves (e.g. on VAT refunds). Separate models are often developed by departments and the Treasury to estimate the cost or yield of the Government’s policy measures.

Models should be seen as a tool of the forecasting process, rather than the ultimate source of forecasts. The shape of any forecast a model produces ultimately reflects the assumptions and judgements that are fed into it by the forecaster. For our forecasts, while the models are typically operated outside the OBR, the assumptions and judgements that are fed into them are determined by the Budget Responsibility Committee.

Models are essential forecasting tools for a number of reasons. Models can:

  • illustrate how receipts or spending have evolved in the past relative to other developments in the economy or policy regime, thereby providing a guide to future behaviour;
  • provide a framework for bringing together – in a consistent and systematic way – a large amount of information; and
  • provide a representation of the structure of the tax or benefit system, allowing the forecaster to understand how developments in the economy might interact with the policy regime to influence receipts or spending.

Forecasting models take various forms, including:

  • econometric equations – e.g. using historical trends to estimate the demand for fuel that underpins our fuel duty forecasts;
  • microsimulation models – taking a sample of individual tax or benefit records and projecting the distribution forward using assumptions about relevant factors;
  • simple spreadsheet models – e.g. setting up a calculation that mimics those made within the tax or benefit system; or
  • a combination of these approaches – e.g. a microsimulation model might project forward the distribution using relationships estimated via econometric equations.
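The first type of model in the list above can be illustrated with a minimal sketch: a constant-elasticity demand equation fitted to historical data and then used to project forward under an assumed price path. All figures, and the functional form, are purely illustrative assumptions for exposition, not OBR data or an actual HMRC model.

```python
import math

def fit_log_log(prices, quantities):
    """OLS fit of ln(q) = a + b*ln(p); the slope b is the price elasticity."""
    x = [math.log(p) for p in prices]
    y = [math.log(q) for q in quantities]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical 'historical' data constructed with a price elasticity of -0.3
hist_prices = [1.00, 1.10, 1.20, 1.30, 1.40]
hist_demand = [100.0 * p ** -0.3 for p in hist_prices]

a, b = fit_log_log(hist_prices, hist_demand)

# Project demand for an assumed future price, as a duty forecast might
future_price = 1.50
projected = math.exp(a + b * math.log(future_price))
print(round(b, 3), round(projected, 1))
```

A real econometric model would of course include more determinants (incomes, vehicle efficiency, and so on) and be estimated with noise, but the structure – estimate a historical relationship, then feed in forecast determinants – is the same.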

Forecasting models will inevitably be imperfect representations of the world, so we apply judgements to incorporate factors that are not captured. For example, we typically include the effect of policy announcements via off-model adjustments and utilise in-year information on tax receipts and benefits to adjust model outputs. Adjustments can also be made if we are aware of problems with particular models, but have not been able to find better modelling solutions.
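The off-model adjustments described above amount to simple arithmetic layered on top of the model output. A stylised sketch, using entirely hypothetical numbers, shows the shape of the calculation:

```python
# Illustrative off-model adjustments (all figures hypothetical, in £bn):
# the raw model output is adjusted for announced policy measures the model
# does not capture and for judgement based on in-year receipts information.
model_output = 52.0            # raw model forecast
policy_costings = [-0.4, 0.2]  # costings of announced measures
in_year_adjustment = 0.3       # judgement from receipts data to date

adjusted_forecast = model_output + sum(policy_costings) + in_year_adjustment
print(adjusted_forecast)
```

Keeping these adjustments separate from the model itself also makes the forecast easier to scrutinise, since the contribution of each judgement can be identified and revisited as new information arrives.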

In our future reporting on the performance of fiscal forecasting models, we will be assessing them against a number of criteria. These will include:

  • accuracy – how well does the model match outturns? Once outturn data are available for a model’s inputs and outputs, we can assess the extent to which errors reflect factors exogenous to the model – e.g. the economic determinants fed into it, or any subsequent policy decisions or classification changes – or factors intrinsic to the model itself;
  • plausibility – how well do the model outputs align with theory and experience? It is important that a model’s results are intuitively plausible: do they respond broadly as expected to changes in economic determinants or other forecast inputs? If not, does the forecaster understand why – e.g. has the structure of the tax system changed in a way that means receipts will rise faster or slower for a given change in the tax base? This is particularly relevant for taxes that are highly geared to changes in the tax base, as is the case with stamp duty land tax or capital gains tax;
  • transparency – how easily can the model outputs be understood and scrutinised? It is vital that we can readily explain changes in our forecasts and relate them to the drivers of the forecasts. It is therefore essential that a model’s outputs can be scrutinised to identify developments that differ from its results, allowing us to assess whether they stem from the assumptions being fed into the model or from the structure of the model itself. We do that via diagnostic breakdowns of changes from forecast to forecast or from year to year. If a model cannot be scrutinised in this way, it becomes more difficult for a forecaster to respond appropriately to new information; and
  • effectiveness – how well does the model capture the tax or benefit system? The complexity of a forecasting model is typically driven by the complexity of the tax or benefit system and the behavioural responses that it generates. Taxes such as self-assessment or corporation tax involve multiple income streams and/or reliefs and deductions. But the more complex a model, the greater the potential sources of forecast error. There is clearly some trade-off between capturing all the elements of a particular tax or benefit system and its usability as a forecasting tool.
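The error decomposition under the accuracy criterion can be sketched in miniature: re-running a model with outturn determinants splits the total forecast error into a part due to the economic inputs and a residual due to the model itself. The model and all numbers below are stylised assumptions for illustration only.

```python
# Stylised receipts model (illustrative only): receipts = rate * base.
def receipts_model(tax_base, effective_rate):
    return effective_rate * tax_base

# Forecast vintage (hypothetical figures, £bn)
forecast = receipts_model(tax_base=1000.0, effective_rate=0.25)
# Outturn receipts
outturn = 262.0
# Re-run the model with the outturn tax base, holding the model fixed
rerun = receipts_model(tax_base=1030.0, effective_rate=0.25)

total_error = outturn - forecast        # total forecast error
determinant_error = rerun - forecast    # exogenous: due to economic inputs
model_error = outturn - rerun           # intrinsic: due to the model itself
print(total_error, determinant_error, model_error)
```

In practice the decomposition also has to strip out subsequent policy decisions and classification changes, but the principle – swap in outturn inputs and attribute the remainder to the model – carries over.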

Without pre-empting the result of a more comprehensive review of fiscal forecasting models, we can give some indication of likely conclusions from our experience of scrutinising forecasts at each Economic and fiscal outlook and in Forecast evaluation reports. Some forecasting models are likely to perform well against these criteria – e.g. alcohol and fuel duties, which are relatively transparent models of relatively simple taxes that have generally been subject to small fiscal forecasting errors. In contrast, in this report we have identified relatively large fiscal forecasting errors for self-assessment receipts in 2014-15. This is a complex tax where liabilities are paid long after the activity to which they relate. The forecast relies on inputs that may not be closely aligned with the true tax base – a likelier source of assumption or judgement errors than of issues with the model itself. Self-assessment has also been subject to significant behavioural responses to policy changes in recent years (e.g. the shifting of income between years when the additional rate of income tax has been changed).

This box was originally published in Forecast evaluation report – October 2015