b. Verification methods
The entire range of available hindcasts (0- to 6-month lead) for all four standard seasons is verified. The suite of measures includes deterministic, categorical, and probabilistic skill scores. The anomaly correlation coefficient (ACC), the ensemble spread, and the signal-to-noise ratio (S/N) are estimated. Additionally, a persistence forecast and the root mean square error (RMSE) are calculated. For the target regions (Fig. 1), the distribution of ensemble members, the ensemble mean, and the observed anomaly are plotted for every year. These plots help in understanding the year-to-year variations in the spread and their association with the predicted value, and in assessing whether the model's predictive skill is asymmetric between positive and negative anomalies.
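The diagnostics above can be illustrated with a minimal NumPy sketch. This is not the authors' code; it assumes anomalies are already computed and arranged as a (years, members) array, and it uses one common convention for each quantity (ensemble-mean RMSE against observations, mean intra-ensemble standard deviation for spread, and the ratio of the ensemble-mean variability to the average member scatter for S/N):

```python
import numpy as np

def ensemble_diagnostics(ens, obs):
    """Illustrative diagnostics for a hindcast set.

    ens : (n_years, n_members) array of forecast anomalies
    obs : (n_years,) array of observed anomalies
    Returns (rmse, spread, snr) under the conventions noted above.
    """
    ens = np.asarray(ens, dtype=float)
    obs = np.asarray(obs, dtype=float)

    ens_mean = ens.mean(axis=1)
    # RMSE of the ensemble mean against observations
    rmse = np.sqrt(np.mean((ens_mean - obs) ** 2))
    # Spread: mean intra-ensemble standard deviation across years
    spread = np.mean(ens.std(axis=1, ddof=1))
    # S/N: year-to-year variability of the ensemble mean ("signal")
    # over the average within-ensemble scatter ("noise")
    signal = ens_mean.std(ddof=1)
    noise = np.sqrt(np.mean(ens.var(axis=1, ddof=1)))
    return rmse, spread, signal / noise
```

Other spread and S/N definitions exist in the literature; the sketch only fixes one self-consistent choice.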
To verify multi-category forecasts, the Heidke skill score (HSS) is employed. The probabilistic forecasts are evaluated using the rank probability skill score (RPSS). While ACC provides a single measure over the entire period, HSS and RPSS provide complementary information: HSS on the different categorical events, and RPSS on the forecast errors associated with the highly probable events. A brief description of each is provided here; more details of the methods are given in Sooraj et al. (2010).
- ACC: With the ensemble mean taken as the deterministic forecast from the dynamical seasonal prediction system, the anomaly correlation between the observed and forecast time series, a standard measure of deterministic prediction skill, is computed. Here, the ACC is estimated both between the ensemble mean and observations and between individual members and observations.
- HSS: For categorical forecasts, HSS measures the forecast success rate (hits versus misses) relative to a random guess (Wilks 1995). The measure is based on a tercile classification (above normal, normal, and below normal), and the score (expressed as a percentage) indicates the accuracy of the forecast in predicting the correct category relative to that of a random guess, here climatology (an equal chance for each tercile). A score of 0 means that the forecast did no better than climatology, a score of 100 indicates a perfect forecast, and a score of -50 the worst possible forecast.
- RPSS: This metric is intended to assess probabilistic forecasts from a dynamical system (Goddard et al. 2003). Specifically, the RPSS is based on the squared differences between the cumulative forecast and observed probabilities for each category, and it penalizes forecasts of the wrong category. The RPSS is usually expressed as a percentage, and a negative value implies that the forecast is less skilful than climatology. For typical climate forecasts with modest skill, for which forecast probabilities typically fall within 20% of their climatological value (33.3%), RPSS scores are often in the range of 5-20.
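The three scores above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code: it assumes tercile categories are encoded as integers 0 (below normal), 1 (normal), 2 (above normal), the climatological reference assigns probability 1/3 to each tercile, and the skill scores are returned as percentages, consistent with the descriptions in the text:

```python
import numpy as np

def anomaly_correlation(fcst_anom, obs_anom):
    """ACC: Pearson correlation between forecast and observed anomaly series."""
    f = np.asarray(fcst_anom, dtype=float)
    o = np.asarray(obs_anom, dtype=float)
    f, o = f - f.mean(), o - o.mean()
    return float(np.sum(f * o) / np.sqrt(np.sum(f ** 2) * np.sum(o ** 2)))

def heidke_skill_score(fcst_cat, obs_cat):
    """HSS (percent): hit rate relative to an equal-chance (1/3) random guess.
    Returns 100 for a perfect forecast, 0 for chance, -50 for no hits."""
    f, o = np.asarray(fcst_cat), np.asarray(obs_cat)
    n = len(o)
    hits = np.sum(f == o)
    expected = n / 3.0  # hits expected from the climatological random guess
    return float(100.0 * (hits - expected) / (n - expected))

def rpss(fcst_probs, obs_cats):
    """RPSS (percent) relative to climatological tercile probabilities.
    fcst_probs: (n_years, 3) tercile probabilities; obs_cats: observed categories."""
    def rps(p, k):
        # Squared difference of cumulative forecast and observed probabilities
        obs = np.zeros(3)
        obs[k] = 1.0
        return np.sum((np.cumsum(p) - np.cumsum(obs)) ** 2)
    clim = np.full(3, 1.0 / 3.0)
    rps_f = np.mean([rps(p, k) for p, k in zip(fcst_probs, obs_cats)])
    rps_c = np.mean([rps(clim, k) for k in obs_cats])
    return float(100.0 * (1.0 - rps_f / rps_c))
```

The cumulative formulation in `rps` is what makes the RPSS sensitive to how far the forecast category is from the observed one, not just to whether it is wrong.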