Influenza is an important public health concern, leading to the death or hospitalization of thousands of people around the globe every year. However, the flu season varies from year to year in when it starts, when it peaks, and how severe the outbreak is. Knowing the trajectory of an epidemic outbreak is important for choosing appropriate mitigation strategies. Starting with the 2013–2014 flu season, the Influenza Division of the Centers for Disease Control and Prevention (CDC) has held a “Predict the Influenza Season Challenge” to encourage the scientific community to advance the field of influenza forecasting. A key observation from these challenges is that a simple average of the submitted forecasts outperformed nearly all of the individual models, and ongoing efforts seek ways to assign weights to individual models to create high-performing ensembles. Given the sheer number of models, as well as the variation in methodology among the teams contributing influenza-risk forecasts, multiple forecasting models can be combined, by capturing human judgment, to outperform a simple average of those same models. This project exploits such a “wisdom of crowds” approach, using public votes collected through an R/Shiny web-application platform to assign weights to individual forecasting models on a week-over-week basis, with the goal of improving overall influenza-like illness (ILI) risk prediction accuracy. We describe a strategy for improving the accuracy of influenza risk forecasting based on a crowd-sourced set of team-specific forecast votes, along with results from the 2017–2018 season. Our approach of assigning weights based on crowd-sourced votes outperformed an unweighted average of the individual models; the crowd was statistically significantly more accurate than the average model and all but one of the individual models.
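The core combination step described above, turning weekly crowd votes into model weights and then into a weighted ensemble forecast, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation (which was built around an R/Shiny platform): the function name, data layout, and the vote-proportional weighting scheme with a fallback to a simple average are all assumptions for the example.

```python
def vote_weighted_ensemble(forecasts, votes):
    """Combine per-model ILI forecasts using vote-proportional weights.

    forecasts: dict mapping model name -> list of predicted ILI values
               (one value per forecast horizon for the current week)
    votes:     dict mapping model name -> crowd vote count for this week

    Weighting scheme is an illustrative assumption: each model's weight
    is its share of the week's total votes; with no votes, fall back to
    the simple (unweighted) average baseline the abstract compares against.
    """
    models = list(forecasts)
    total_votes = sum(votes.get(m, 0) for m in models)
    if total_votes == 0:
        # No crowd input this week: degrade gracefully to a simple average.
        weights = {m: 1.0 / len(models) for m in models}
    else:
        weights = {m: votes.get(m, 0) / total_votes for m in models}
    horizon = len(next(iter(forecasts.values())))
    # Weighted sum of the models' predictions at each forecast horizon.
    return [sum(weights[m] * forecasts[m][t] for m in models)
            for t in range(horizon)]
```

For example, with two hypothetical models receiving 3 and 1 votes, model "A" contributes 75% of the combined forecast that week; the weights are recomputed as new votes arrive each week.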
