In practice, this has meant that the federal government and US states have variously chosen, contracted, and created models in an ad hoc and opaque manner. Rivers, who has called for the creation of a “national infectious disease forecasting center” in the mold of the National Weather Service, observes that “Right now, there’s nobody responsible for really recording and archiving who said what—what did modelers say, and what happened. What prediction did they make today, and how does it change tomorrow, how does it change the next day, and how does that relate to what actually happened?”
Weather modeling and epidemiological modeling have some important differences. One is that weather forecasts are made many times every day, for every location in the country. This allows for the rigorous evaluation of forecast skill. Pandemic forecasts are made rarely (thank goodness), meaning that we can know comparatively little about their accuracy, even when we must rely on them. Another important difference is that weather forecasts don’t change the weather, while forecasts of disease outbreaks may influence how people respond and behave, and thus alter the very conditions being forecasted.
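The point about forecast skill can be made concrete with a toy sketch. None of this comes from the article itself: the forecasts and outcomes below are invented, and the Brier score is just one standard way to score probabilistic forecasts (the mean squared difference between the forecast probability and the 0/1 outcome; lower is better). The key is that the score is only informative when averaged over many forecasts.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probabilistic forecasts against 0/1 outcomes.

    Lower is better; a perfect forecaster scores 0.0.
    """
    assert len(forecasts) == len(outcomes) and len(forecasts) > 0
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)


# Hypothetical daily rain forecasts (probability of rain) and what happened.
daily_forecasts = [0.9, 0.1, 0.8, 0.2, 0.7]
daily_outcomes = [1, 0, 1, 0, 0]  # 1 = it rained

print(brier_score(daily_forecasts, daily_outcomes))

# A weather service accumulates thousands of such scored forecasts per year,
# so its average skill is measured precisely. A once-in-a-generation pandemic
# yields only a handful of forecast/outcome pairs, so the same average is
# dominated by noise and tells us little about the model's true skill.
```

With daily forecasts, the law of large numbers makes the average score a reliable measure of skill; with a few pandemic forecasts, it cannot be.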
For these reasons, pandemic forecasting presents a much greater challenge than weather forecasting does. It is also why we should hope never to have to rely on pandemic forecasts at all. It will always be far better to stop a pandemic before it starts, which requires effective surveillance and strategies for quick intervention. That fact makes the federal government’s absence from producing and evaluating coronavirus models all the more alarming.
In its place, we have a free-for-all. The plethora of pandemic models looks like a big bowl of cherries to political partisans, who can pick and choose whichever results seem most supportive of their favored policies or damaging to their opponents’ positions. We have seen similar dynamics in the climate debate, where scientific arguments can be just Trojan horses for views grounded in politics, economics, or culture.
Most notably, on April 1, Trump and the White House Coronavirus Task Force showed a figure indicating that successful social distancing would limit the number of US deaths to between 100,000 and 240,000. The administration has, in turn, laid out those numbers as the metric of policy success in the pandemic: any five-digit body count, no matter how high, shall thus be counted as salvation. Never mind that experts, including Trump’s own advisers, have widely criticized these estimates as unrealistically high in the first place.
In this instance, the White House appears to be using coronavirus forecasts to evade accountability for poor decisions or to justify decisions already made, in addition to whatever role the forecasts played in informing policy. Unsurprisingly, the White House has not released details on its projections; in presenting them, task force member Deborah Birx referred only to “five or six international and domestic modelers from Harvard, from Columbia, from Northeastern, from Imperial [College London].” The model from the University of Washington has also been frequently cited by the task force, but another model with less aggressive projections, under which the White House response would look far worse, was dismissed as an outlier.
The almost complete lack of transparency from the White House is gasoline poured on the fire of politicized science. As a consequence, both supporters and critics of the Trump administration’s policies cite bits and pieces of research to support their arguments, but without a broader scientific context for interpreting the pandemic forecasts, everyone lacks a rigorous, authoritative basis for their views. This is convenient for political battles, but fatal to effective policy development and evaluation.
It is true that the widely cited University of Washington model has been shown to produce deeply flawed projections. Of course, we should expect that any newly developed and untested model deployed in a completely novel context will produce poor forecasts. To expect otherwise is to misunderstand the difficulty of such modeling. That is precisely why it is important to compare side-by-side forecasts from all available models. Looking at a diversity of models can help us to characterize areas of agreement and uncertainty.
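The side-by-side comparison the paragraph calls for can be sketched in a few lines. The model names and projection numbers below are entirely hypothetical; the point is only the method: pool projections from multiple models and report a central estimate alongside the full spread, flagging rather than discarding apparent outliers.

```python
from statistics import median

# Hypothetical peak-period death projections from four models
# (names and values are invented for illustration).
projections = {
    "model_a": 1500,
    "model_b": 2200,
    "model_c": 1800,
    "model_d": 6000,  # an apparent outlier: flag it, don't silently drop it
}

values = sorted(projections.values())

# Central tendency across models, and the spread as honest uncertainty.
print("median:", median(values))
print("range:", values[0], "to", values[-1])
```

Reporting the median alongside the full range communicates both where the models agree and how uncertain the ensemble really is, rather than letting any one party quote a single convenient number.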