Tuesday 15 November 2016

Sea ice melting at Arctic Sea 3: Evaluation -- can we trust the future projections?


Whether we can trust the projections depends on how well the models reproduce observations and represent what is likely to happen in the future. The models therefore need to be evaluated by comparing their simulations with observations. Model evaluation is a key component of my academic background (Environmental Modelling), so some of the points below go beyond sea ice specifically. To begin with, I adapt two figures to demonstrate the quality of the model simulations.


Figure 1. Comparison between observed seasonal cycle and modelled seasonal cycle. (source: Stroeve et al., 2012). 

The first one (Figure 1) shows the seasonal cycle of Arctic sea ice extent from model simulations and observations over 1979-2011 (Stroeve et al., 2012). The multi-model means, particularly CMIP5's (diamonds), match the observations (red line) quite well. All the diamonds lie between the maximum and minimum observed values for each month. IPCC (2013) stated that the error in the multi-model mean is less than 10% of the observed value. The quality of model simulations has improved.
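This kind of check is easy to reproduce in principle: take an ensemble of monthly extents, average across models, and compare the mean against observations month by month. A minimal sketch in Python, where all the numbers are invented for illustration (they are not the Stroeve et al. or IPCC values):

```python
import numpy as np

# Hypothetical monthly Arctic sea-ice extent (million km^2): one observed
# seasonal cycle and a fake 20-member model ensemble. Values are made up
# purely to illustrate the evaluation step, not taken from any dataset.
rng = np.random.default_rng(0)
observed = np.array([14.5, 15.2, 15.5, 14.6, 13.0, 11.5,
                     8.5, 6.0, 4.5, 7.5, 10.5, 13.0])
models = observed + rng.normal(0.0, 1.0, size=(20, 12))

# Multi-model mean: average the ensemble across models for each month
multi_model_mean = models.mean(axis=0)

# Relative error of the multi-model mean against observations, per month.
# The IPCC (2013) statement quoted above corresponds to rel_error < 0.10.
rel_error = np.abs(multi_model_mean - observed) / observed
print(rel_error.max())
```

Averaging across models cancels much of each individual model's bias, which is why the multi-model mean in Figure 1 tracks the observations more closely than any single simulation would.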


Figure 2. Modelled and observed sea ice extent 1900-2012. Each coloured line represents a single simulation from an individual model. Black lines represent observations. The red line shows the multi-model mean from CMIP5, and the blue line shows that from CMIP3. Red shading shows the range of simulations from CMIP5, and blue shading shows that from CMIP3. (source: IPCC, 2013).

From Figure 2, it is obvious that the simulations from CMIP5 are better than those from CMIP3, because the CMIP5 multi-model mean of September ice extent better matches the observations and captures the sharp decline of the last few decades. The improvements were achieved through better parameterisations of sea ice and through improvements in other environmental components affecting sea ice. For instance, according to Stroeve et al. (2012), one contributor to the improvement from CMIP3 to CMIP5 is the improved parameterisation of sea ice albedo. Improvements in simulating the atmosphere have also contributed (Notz et al., 2013), as sea ice forms through interactions between the atmosphere and the ocean.

With this comparison between model simulations and observations in hand, can we now trust the projections? We still need to consider uncertainties. There are three major types: internal variability, model uncertainty and scenario uncertainty, and their relative proportions vary with spatial and temporal scale.

Internal variability arises from natural fluctuations that occur even in the absence of any change in radiative forcing. It is a key reason why Notz et al. (2013) argue against comparing model simulations with observations directly: their study found that internal variability alone can produce a range of trends across model realisations. Model uncertainty is the difference among different models' simulations in response to the same radiative forcing, which is why the CMIP5 models produce a wide range of simulations of sea ice extent and of its relationship with annual global surface warming (for details, see the last post). The last one, scenario uncertainty, is the difference among projected future greenhouse gas emissions, and it explains why the Arctic becomes nearly ice-free at different times under different RCPs. I found a nice figure showing the relative importance of these uncertainties, although it is not about sea ice (Figure 3).
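The three types can be separated by partitioning the variance of an ensemble, in the spirit of Hawkins and Sutton (2009): internal variability is the scatter of each run about its own smooth trend, model uncertainty is the spread across models for a given scenario, and scenario uncertainty is the spread across scenario means. A simplified sketch with entirely synthetic data (the trends, offsets and noise levels are invented, and a linear fit stands in for the smooth trend):

```python
import numpy as np

# Synthetic projections with axes (scenario, model, year). All numbers are
# invented; only the variance partition itself is the point.
rng = np.random.default_rng(1)
years = np.arange(2020, 2101)
n_scen, n_model = 3, 10

scen_trend = np.array([-0.02, -0.05, -0.09])             # differs by scenario
model_offset = rng.normal(0.0, 0.4, size=n_model)        # differs by model
noise = rng.normal(0.0, 0.3, size=(n_scen, n_model, years.size))

ice = (10.0
       + scen_trend[:, None, None] * (years - 2020)
       + model_offset[None, :, None]
       + noise)

# Internal variability: variance of residuals about each run's fitted trend
resid = np.empty_like(ice)
for s in range(n_scen):
    for m in range(n_model):
        coeffs = np.polyfit(years, ice[s, m], deg=1)
        resid[s, m] = ice[s, m] - np.polyval(coeffs, years)
internal_var = resid.var()

# Model uncertainty: spread across models of each scenario's trajectory
model_var = ice.mean(axis=0).var(axis=0).mean()

# Scenario uncertainty: spread across scenarios of the multi-model mean
scenario_var = ice.mean(axis=1).var(axis=0).mean()

total = internal_var + model_var + scenario_var
for name, v in [("internal", internal_var), ("model", model_var),
                ("scenario", scenario_var)]:
    print(f"{name}: {v / total:.0%} of total")
```

Because the scenario trends diverge over time while the noise does not grow, scenario uncertainty dominates late in the century in this toy setup, which is exactly the pattern Figure 3 illustrates.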
Figure 3. Relative relationship among the three types of uncertainty. (source: Hawkins and Sutton, 2009).







