Abstract

Building on the verification and validation work developed under the Second Wind Forecast Improvement Project, this work demonstrates the value of a consistent procedure for evaluating wind power forecasts. We developed WE-Validate, an open-source Python code base tailored for wind speed and wind power forecast validation. The code base evaluates model forecasts against observations in a coherent manner. To showcase the systematic validation framework of WE-Validate, we designed and hosted a forecast evaluation benchmark exercise. We invited forecast providers in industry and academia to participate and submit forecasts for two case studies. We then evaluated the submissions with WE-Validate. Our findings suggest that ensemble means have reasonable skill in time series forecasting, whereas they are often inferior to single ensemble members in wind ramp forecasting. Adopting a voting scheme in ramp forecasting, in which ensemble members detect ramps independently, yields satisfactory skill scores. Throughout this document, we also emphasize the importance of using statistically robust and resistant metrics as well as equitable skill scores in forecast evaluation.
Published: October 13, 2022
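The voting scheme mentioned in the abstract can be sketched roughly as follows. This is an illustrative toy example, not WE-Validate's actual implementation: the threshold-crossing definition of a ramp, the window length, and the function names `detect_ramps` and `vote_ramps` are all assumptions made here for demonstration.

```python
import numpy as np

def detect_ramps(series, window=4, threshold=0.3):
    """Flag a ramp wherever power (as a fraction of capacity) changes
    by at least `threshold` over `window` consecutive time steps.
    This threshold-crossing definition is an illustrative assumption."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for t in range(len(series) - window):
        if abs(series[t + window] - series[t]) >= threshold:
            flags[t] = True
    return flags

def vote_ramps(ensemble, min_votes=3, **kwargs):
    """Let each ensemble member detect ramps independently, then
    forecast a ramp at time t when at least `min_votes` members agree."""
    votes = sum(detect_ramps(member, **kwargs).astype(int)
                for member in ensemble)
    return votes >= min_votes

# Toy ensemble: three members forecast an upward ramp, two stay flat.
ramping = [0.1, 0.1, 0.1, 0.1, 0.6, 0.6, 0.6, 0.6]
flat = [0.1] * 8
forecast = vote_ramps([ramping, ramping, ramping, flat, flat], min_votes=3)
```

With three of five members detecting the ramp, the vote flags a ramp near the start of the series; because each member votes independently, a sharp ramp in a minority of members is not averaged away, which is the failure mode of using the ensemble mean alone.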