HadCRUT4 Mean Global Temperature Reconstruction Uncertainty
Posted on September 17, 2018
FIGURE 1: THE 95% CONFIDENCE INTERVAL OF UNCERTAINTY
FIGURE 2: TEN RANDOM OLS LINEAR TRENDS FOR EACH CALENDAR MONTH: 1850-1905
FIGURE 3: TEN RANDOM OLS LINEAR TRENDS FOR EACH CALENDAR MONTH: 1906-1961
FIGURE 4: TEN RANDOM OLS LINEAR TRENDS FOR EACH CALENDAR MONTH: 1962-2017
FIGURE 5: TEN RANDOM OLS LINEAR TRENDS FOR EACH CALENDAR MONTH: 1850-2017
FIGURE 6: RANGE OF TREND VALUES
FIGURE 7: TEMPERATURE ANOMALIES IN RANDOM 30-YEAR WINDOW
FIGURE 8: 30-YEAR WARMING TRENDS ACROSS THE FULL SPAN IN RANDOM MONTHS
- In 2012 the UK Met Office Hadley Centre, which produces and maintains the global mean temperature reconstruction from 1850 jointly with the Climatic Research Unit (CRU) of the University of East Anglia, completed its work on estimating the uncertainty in the reconstruction and published the results online [LINK], with an online data dictionary posted here [LINK]. A detailed description of this work by Colin Morice and colleagues is published in the open literature [Morice, Colin P., et al. "Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set." Journal of Geophysical Research: Atmospheres 117.D8 (2012)]. The full text of the paper is available online [LINK]. In brief, a 95% confidence interval is provided for each of the 168 annual values of each calendar month (1850-2017) for different sources of error. Here we use columns 11 and 12, which contain the lower and upper bounds of the 95% confidence interval for the combined effect of all the uncertainties described in the HadCRUT4 error model. As an example, the best mean global temperature anomaly estimate for January 1850 is -0.7C, with an uncertainty range of -1.1C to -0.299C.
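- The layout described above can be read with a few lines of code. The sketch below is a minimal reading of that description, not a verified specification: the file name HadCRUT4_monthly.txt is hypothetical, and the whitespace-delimited layout with dates formatted as YYYY/MM is an assumption.

```python
# Minimal sketch: read the HadCRUT4 monthly series with its combined
# 95% confidence interval. File name and layout are assumptions based
# on the description above: column 1 = date, column 2 = best-estimate
# anomaly, columns 11 and 12 = lower and upper bounds (0-based indices
# 10 and 11 below).
import pandas as pd

cols = {0: "date", 1: "anomaly", 10: "ci_lower", 11: "ci_upper"}
df = (pd.read_csv("HadCRUT4_monthly.txt", sep=r"\s+", header=None)
        .rename(columns=cols))
df["year"] = df["date"].str[:4].astype(int)    # assumes YYYY/MM dates
df["month"] = df["date"].str[5:7].astype(int)

# January 1850 should show roughly -0.7 with bounds -1.1 and -0.299
print(df.loc[(df["year"] == 1850) & (df["month"] == 1),
             ["anomaly", "ci_lower", "ci_upper"]])
```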
- This work explores the implications of these uncertainties for global warming research in terms of their impact on warming trend estimation. Specifically, we use Monte Carlo simulation to repeatedly draw random values within the uncertainty range and thereby construct a sample of possible temperature series and warming trends that serves as an indication of the impact of these uncertainty values on the study of warming trends.
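- The Monte Carlo step can be sketched as follows. The post does not state the sampling distribution, so a uniform draw between the lower and upper bounds is assumed here; a normal distribution fitted to the 95% interval would be an equally plausible reading.

```python
# Sketch of the Monte Carlo trend procedure: draw a random anomaly
# within the published bounds for every year, fit an OLS line, repeat.
# Uniform sampling within the interval is an assumption.
import numpy as np

def monte_carlo_trends(years, lower, upper, n_trials=10, seed=1):
    """Return n_trials OLS trend values in DegC/century for one
    calendar month, given per-year lower/upper uncertainty bounds."""
    rng = np.random.default_rng(seed)
    trends = np.empty(n_trials)
    for i in range(n_trials):
        sample = rng.uniform(lower, upper)       # one random series
        slope = np.polyfit(years, sample, 1)[0]  # DegC/year
        trends[i] = slope * 100.0                # DegC/century
    return trends
```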
- Figure 1 is a graphical presentation of the width of the 95% confidence interval for each calendar month across the full span of the data from 1850 to 2017. The data are presented in four charts, each containing three calendar months. A clear pattern of changes in uncertainty over time and among calendar months is seen in these charts. A high uncertainty band is found in the earliest 40-year period; it is particularly high for January, February, and March and less than half that size in the other calendar months. In all calendar months a gradual reduction in uncertainty is seen after that high uncertainty period, with exceptionally low values after 1960.
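- The quantity plotted in Figure 1 is simply the width of the interval, computed directly from the two bound columns. Using the df sketched earlier:

```python
# CI width per observation; grouping by calendar month reproduces the
# per-month uncertainty pattern that Figure 1 displays.
df["ci_width"] = df["ci_upper"] - df["ci_lower"]
early = df[df["year"] <= 1889].groupby("month")["ci_width"].mean()
late = df[df["year"] >= 1960].groupby("month")["ci_width"].mean()
print(early, late, sep="\n")  # early widths dwarf the post-1960 widths
```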
- These patterns are used to divide the full span of the data into three 56-year segments according to the level of uncertainty: a period of high uncertainty in 1850-1905, a period of low uncertainty in 1906-1961, and a period of very low to negligible uncertainty in 1962-2017. OLS linear trend analysis is carried out for these three segments separately, using Monte Carlo simulation to generate ten OLS trend values per segment and calendar month as indicators of the impact of uncertainty on OLS warming trends. The monthly mean temperature anomalies are not combined; rather, the twelve calendar months are studied separately so that the impact of the reported uncertainty can be evaluated directly. The results are presented in Figures 2, 3, and 4 respectively for the three sub-spans. All trend values are in units of DegC/century.
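- Putting the pieces together, the per-segment, per-month analysis might look like the sketch below, reusing df and monte_carlo_trends from above; the span labels are taken from Figure 6.

```python
# Ten Monte Carlo OLS trends for each calendar month within each of
# the spans described above (plus the full span, for Figure 5).
SPANS = {"Full": (1850, 2017), "First": (1850, 1905),
         "Middle": (1906, 1961), "Last": (1962, 2017)}

trends = {}  # (span_name, month) -> array of ten DegC/century values
for name, (y0, y1) in SPANS.items():
    for m in range(1, 13):
        sub = df[(df["month"] == m) & df["year"].between(y0, y1)]
        trends[(name, m)] = monte_carlo_trends(
            sub["year"].to_numpy(),
            sub["ci_lower"].to_numpy(),
            sub["ci_upper"].to_numpy(),
        )
```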
- Figure 2 presents the OLS trends for the twelve calendar months, with uncertainty included in ten Monte Carlo trials, for the earliest and highest-uncertainty period 1850-1905. Here the trend values vary from 0.5C/century of cooling to 0.25C/century of warming, and most of them show a cooling trend. The standard error of estimate is very high, ranging from 0.03C to 0.05C. Only two statistically significant mean trends are found, both of them cooling trends in the summer months of June and July. The sub-span 1850-1905 is thus found to be a high-uncertainty period with little or no trend information available in the data once the new uncertainty data are taken into account. A significant impact of the uncertainty values is seen in this sub-period.
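- The post does not spell out its significance test. One plausible reading, sketched below, is a one-sample t-test of the ten trial trends against a zero slope; both the test and the alpha level are assumptions.

```python
# Hedged sketch: judge the mean trend of the ten Monte Carlo trials
# against a zero slope with a one-sample t-test.
from scipy import stats

def mean_trend_significant(trial_trends, alpha=0.05):
    t_stat, p_value = stats.ttest_1samp(trial_trends, 0.0)
    return trial_trends.mean(), p_value, bool(p_value < alpha)

# e.g. the June 1850-1905 trends from the loop above
mean_trend, p, significant = mean_trend_significant(trends[("First", 6)])
```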
- Figure 3 presents the OLS trends for the twelve calendar months, with uncertainty included in ten Monte Carlo trials, for the mid-uncertainty period 1906-1961. Strong warming trends are seen for all calendar months, ranging from 0.7C/century to greater than 0.9C/century. Very little spread across the trials is seen in these results: the standard error of the mean trend for each calendar month is very low, and all trends are statistically significant warming trends. There is no evidence of an impact of the new uncertainty values on trends in the sub-span 1906-1961.
- Figure 4 presents the OLS trends for the twelve calendar months, with uncertainty included in ten Monte Carlo trials, for the low-uncertainty period 1962-2017. Very strong warming trends are seen for all calendar months, ranging from 1.4C/century to greater than 1.6C/century, approximately twice the warming trends seen in Figure 3 for 1906-1961. There is no sign of uncertainty in these results: the standard error of the mean trend for each calendar month is very low, and all trends are statistically significant warming trends. There is no impact of the new uncertainty values on trends in the sub-span 1962-2017.
- Monte Carlo simulation trends for the full span of the data, 1850-2017, are presented in Figure 5. All calendar months show statistically significant warming trends ranging from 0.43C/century to 0.57C/century, close to the usually cited figure of 0.5C/century for the HadCRUT4 global temperature anomaly reconstruction. An impact of the uncertainties on the full-span trends is not apparent.
- A graphical visualization of the impact of the uncertainty assessment on temperature trends is provided in Figure 6. It is a plot of the range of values seen in the ten Monte Carlo simulations, computed as [range = maximum - minimum] of the ten randomly selected trend values. Four lines are shown in different colors, one for each of the four spans for which trends were computed (Full=1850-2017, First=1850-1905, Middle=1906-1961, and Last=1962-2017). The first twelve values in each line are for the twelve calendar months January to December; the thirteenth value is the average of the twelve monthly values. These curves show that the largest range is seen in the first sub-period 1850-1905, with differences among the ten Monte Carlo trend values ranging from 0.25C/century to 0.65C/century. The largest values are seen in August and September and the smallest in February, March, and December. The lowest ranges are seen in the full span 1850-2017, with values close to 0.05C/century. Even the low-uncertainty Middle and Last sub-spans show ranges as high as 0.4C/century (in September). The average range across the twelve calendar months is 0.45C/century for 1850-1905, 0.3C/century for 1906-1961, and 0.2C/century for 1962-2017.
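- The range statistic in Figure 6 follows directly from the stated formula; a sketch using the trends dictionary built earlier:

```python
# range = maximum - minimum of the ten Monte Carlo trend values for
# each calendar month, plus a thirteenth value: the twelve-month mean.
def trend_ranges(span_name, trends):
    r = [trends[(span_name, m)].max() - trends[(span_name, m)].min()
         for m in range(1, 13)]
    r.append(sum(r) / 12.0)  # thirteenth value plotted in Figure 6
    return r
```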
- We conclude from the analysis that the effect of the uncertainty values on temperature trends is evident only in sub-spans of the data that begin prior to 1906. No effect of the uncertainty values is found in the full span of the data, 1850-2017, and the effect on trends is less severe and of little consequence in sub-spans that begin after 1905. The published uncertainties therefore appear to be window dressing: they create the appearance that due consideration has been given to uncertainty, but no evidence is found that these uncertainties have a real implication for trend analysis. Based on this distribution of uncertainty ranges, it is recommended that trend analysis of the HadCRUT4 data be limited to the period after 1905, and that such analysis be carried out for each calendar month separately, because the uncertainties and trend behaviors of the calendar months differ and a significant loss of information occurs when the months are combined into an annual mean.
- Further evidence of the unreliability of this dataset is reported by Joanne Nova [LINK], who writes that Australian researcher John McLean has found serious flaws in the HadCRUT4 temperature anomaly reconstruction. In summary, the flaws reported are: "Large gaps where there is no data and where instead averages were calculated from next to no information. For two years, the temperatures over land in the Southern Hemisphere were estimated from just one site in Indonesia. Almost no quality control, with misspelled country names ('Venezuala', 'Hawaai', 'Republic of K' (aka South Korea)) and sloppy, obviously inaccurate entries. Adjustments - 'I wouldn't be surprised to find that more than 50 percent of adjustments were incorrect,' says McLean - which artificially cool earlier temperatures and warm later ones, giving an exaggerated impression of the rate of global warming. Methodology so inconsistent that measurements didn't have a reliable policy on variables like Daylight Saving Time. Sea measurements, supposedly from ships, but mistakenly logged up to 50 miles inland. A Caribbean island - St Kitts - where the temperature was recorded at 0 degrees C for a whole month, on two occasions (somewhat implausibly for the tropics). A town in Romania which, in September 1953, allegedly experienced a month where the average temperature dropped to minus 46 degrees C (when the typical average for that month is 10 degrees C)."
BIBLIOGRAPHY
- 2012: Morice, Colin P., et al. “Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set.” Journal of Geophysical Research: Atmospheres 117.D8 (2012). Recent developments in observational near‐surface air temperature and sea‐surface temperature analyses are combined to produce HadCRUT4, a new data set of global and regional temperature evolution from 1850 to the present. This includes the addition of newly digitized measurement data, both over land and sea, new sea‐surface temperature bias adjustments and a more comprehensive error model for describing uncertainties in sea‐surface temperature measurements. An ensemble approach has been adopted to better describe complex temporal and spatial interdependencies of measurement and bias uncertainties and to allow these correlated uncertainties to be taken into account in studies that are based upon HadCRUT4. Climate diagnostics computed from the gridded data set broadly agree with those of other global near‐surface temperature analyses. Fitted linear trends in temperature anomalies are approximately 0.07°C/decade from 1901 to 2010 and 0.17°C/decade from 1979 to 2010 globally. Northern/southern hemispheric trends are 0.08/0.07°C/decade over 1901 to 2010 and 0.24/0.10°C/decade over 1979 to 2010. Linear trends in other prominent near‐surface temperature analyses agree well with the range of trends computed from the HadCRUT4 ensemble members.
- 2014: Robeson, Scott M., Cort J. Willmott, and Phil D. Jones. “Trends in hemispheric warm and cold anomalies.” Geophysical Research Letters 41.24 (2014): 9065-9071. Using a spatial percentile approach, we explore the magnitude of temperature anomalies across the Northern and Southern Hemispheres. Linear trends in spatial percentile series are estimated for 1881–2013, the most recent 30 year period (1984–2013), and 1998–2013. All spatial percentiles in both hemispheres show increases from 1881 to 2013, but warming occurred unevenly via modification of cold anomalies, producing a reduction in spatial dispersion. In the most recent 30 year period, trends also were consistently positive, with warm anomalies having much larger warming rates than those of cold anomalies in both hemispheres. This recent trend has largely reversed the decrease in spatial dispersion that occurred during the twentieth century. While the period associated with the recent slowdown of global warming, 1998–2013, is too brief to estimate trends reliably, cooling was evident in NH warm and cold anomalies during January and February while other months in the NH continued to warm.
- 2014: Curry, Judith. “Climate science: uncertain temperature trend.” Nature Geoscience 7.2 (2014): 83. Global mean surface temperatures have not risen much over the past 15 years, despite continuing greenhouse gas emissions. An attempt to explain the warming slow-down with Arctic data gaps is only a small step towards reconciling observed and expected warming.
- 2014: Cowtan, Kevin, and Robert G. Way. “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends.” Quarterly Journal of the Royal Meteorological Society 140.683 (2014): 1935-1944. Incomplete global coverage is a potential source of bias in global temperature reconstructions if the unsampled regions are not uniformly distributed over the planet’s surface. The widely used Hadley Centre–Climatic Research Unit Version 4 (HadCRUT4) dataset covers on average about 84% of the globe over recent decades, with the unsampled regions being concentrated at the poles and over Africa. Three existing reconstructions with near‐global coverage are examined, each suggesting that HadCRUT4 is subject to bias due to its treatment of unobserved regions. Two alternative approaches for reconstructing global temperatures are explored, one based on an optimal interpolation algorithm and the other a hybrid method incorporating additional information from the satellite temperature record. The methods are validated on the basis of their skill at reconstructing omitted sets of observations. Both methods provide results superior to excluding the unsampled regions, with the hybrid method showing particular skill around the regions where no observations are available. Temperature trends are compared for the hybrid global temperature reconstruction and the raw HadCRUT4 data. The widely quoted trend since 1997 in the hybrid global reconstruction is two and a half times greater than the corresponding trend in the coverage‐biased HadCRUT4 data. Coverage bias causes a cool bias in recent temperatures relative to the late 1990s, which increases from around 1998 to the present. Trends starting in 1997 or 1998 are particularly biased with respect to the global trend. The issue is exacerbated by the strong El Niño event of 1997–1998, which also tends to suppress trends starting during those years.
- 2016: Jones, Phil, and Jean Palutikof. “Global temperature record.” Climatic Research Unit, University of East Anglia. http://www.cru.uea.ac.uk/cru/info/warming (2016). The time series shows the combined global land and marine surface temperature record from 1850 to 2014. This year was the equal warmest on record. This record uses the latest analysis, referred to as HadCRUT4 (Morice et al., 2012). The period 2001-2010 (0.488°C above the 1961-90 average) was 0.214°C warmer than the 1991-2000 decade (0.274°C above the 1961-90 average). The equal warmest years of the series are 2010 and 2014. The value for 2014, given uncertainties discussed in Morice et al. (2012), is not distinguishable from the years 2010 (0.555°C), 2005 (0.543°C) and 1998 (0.535°C). The coldest year of the 21st century (2008 with a value of 0.394°C) was warmer than all years in the 20th century with the exception of 1998. The average of the first four years of the present decade (2011-2014) is 0.002°C cooler than the average for 2001-2010, but warmer than all years before 2001 except for 1998. This time series is compiled jointly by the Climatic Research Unit and the UK Met Office Hadley Centre. Increased concentrations of greenhouse gases in the atmosphere due to human activities are most likely the underlying cause of warming in the 20th century. The warmth or coldness of individual years is strongly influenced by whether there was an El Niño or a La Niña event occurring in the equatorial Pacific Ocean.
- 2017: Haustein, K., et al. “A real-time global warming index.” Scientific Reports 7.1 (2017): 15417. We propose a simple real-time index of global human-induced warming and assess its robustness to uncertainties in climate forcing and short-term climate fluctuations. This index provides improved scientific context for temperature stabilisation targets and has the potential to decrease the volatility of climate policy. We quantify uncertainties arising from temperature observations, climate radiative forcings, internal variability and the model response. Our index and the associated rate of human-induced warming is compatible with a range of other more sophisticated methods to estimate the human contribution to observed global temperature change.
- 2017: Cowtan, Kevin. “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. COBE-SST2 based land-ocean dataset.” (2017). This update document describes a new dataset created using the Cowtan and Way (2014) methodology version 2 (Cowtan 2015, supplement), but using the COBE-SST2 sea surface temperature data in place of HadSST3. The HadSST3 dataset (Kennedy et al., 2011) provides gridded temperature fields based on the ICOADS observational archive, using probably the most extensive analysis of the observation metadata to address inhomogeneities in the data. However, a recent unexplained drift in ship observations compared to free floating buoys and other sources (Hausfather et al., 2017) has led to an apparent underestimation of 21st century temperature trends. As a result, HadSST3 shows slower warming over the past two decades than ERSSTv4/v5, which upweight the buoy data relative to the ship observations. However, the ERSST records show hard-to-explain features in the earlier record, including a large warm spike during the second world war and unusual warmth in the 19th century inconsistent with the recorded use of wooden buckets (Folland and Parker, 1995). The new COBE-SST2 dataset (Hirahara et al., 2014) uses a similar if less complete metadata analysis to HadSST3 with very similar results (Kent et al., 2016), except that it does not show the effect of the drift in ship observations since 2005 (Hausfather et al., 2017). A reconstruction based on COBE-SST2 may therefore be useful for the evaluation of temperature trends over the purported “hiatus” period.
- 2018: Blesic, Suzana, Davide Zanchettin, and Angelo Rubino. “Effects of the data loss and data homogenization on the long-term properties of the observed temperature data.” EGU General Assembly Conference Abstracts. Vol. 20. 2018. We use scaling analysis in the form of a combination of the detrended fluctuation analysis of the second order (DFA2) and the wavelet transform spectral estimation (WTS) to assess how the calculations of long-term properties of time series of historical temperature records are affected by data loss, or by considerable adjustments due to data inhomogeneities. We have analysed instrumental records and publicly available derived regional temperature data of the HadCRUT4 dataset. We have calculated DFA2-WTS scaling exponents for both the adjusted and unadjusted records retrieved from the NCDC Global Historical Climatology Network land stations monthly dataset. In this contribution, we will illustrate results that demonstrate that in both cases of substantial amount of missing data and of considerable homogenization the DFA2 exponents for the adjusted temperature data used in the gridded HadCRUT4 dataset can differ even substantially from that of the raw unadjusted data. We will discuss how the corresponding WTS can help reveal the possible sources of such discrepancies. In order to further illustrate this artificial alteration of the scaling properties we will present and discuss temporal changes in the global pattern of the long-term persistence (LTP) of the HadCRUT4 during the period from 1850 to 2000. Our findings indicate that for a largely predominant part of the HadCRUT4 grid where there is a large percentage of missing values the true LTP is likely higher than the one estimated from the available data.
- 2018: Richardson, Mark, Kevin Cowtan, and Richard J. Millar. “Global temperature definition affects achievement of long-term climate goals.” Environmental Research Letters 13.5 (2018): 054004. The Paris Agreement on climate change aims to limit ‘global average temperature’ rise to ‘well below 2 °C’ but reported temperature depends on choices about how to blend air and water temperature data, handle changes in sea ice and account for regions with missing data. Here we use CMIP5 climate model simulations to estimate how these choices affect reported warming and carbon budgets consistent with the Paris Agreement. By the 2090s, under a low-emissions scenario, modelled global near-surface air temperature rise is 15% higher (5%–95% range 6%–21%) than that estimated by an approach similar to the HadCRUT4 observational record. The difference reduces to 8% with global data coverage, or 4% with additional removal of a bias associated with changing sea-ice cover. Comparison of observational datasets with different data sources or infilling techniques supports our model results regarding incomplete coverage. From high-emission simulations, we find that a HadCRUT4 like definition means higher carbon budgets and later exceedance of temperature thresholds, relative to global near-surface air temperature. 2 °C warming is delayed by seven years on average, to 2048 (2035–2060), and CO2 emissions budget for a >50% chance of <2 °C warming increases by 67 GtC (246 GtCO2).
- 2018: Widmann, Martin, et al. “The DAPS data assimilation intercomparison experiment.” EGU General Assembly Conference Abstracts. Vol. 20. 2018. Various approaches for data assimilation for paleoclimatic state estimation have been implemented over the past years. They differ with respect to the assimilation setup and method, the dynamical models, and the type of assimilated information. The setups comprise online approaches, where the background states depend on the outcome of the previous assimilation step, transient offline approaches, where the background states are independent of the previous assimilation step but vary in time due to the influence of climatic forcings, as well as stationary offline approaches, which use the same background states in each assimilation timestep. The main data assimilation methods used in paleoclimate modelling are Particle Filters, where the analysis is given by a weighted version of the background ensemble, and Kalman Filters, where the background ensemble states are changed through the Kalman Gain; variational methods have also been explored. Dynamical models that are used include General Circulation Models, Earth System Models of Intermediate Complexity, and linear models. The empirical information is incorporated either in the form of local or regionally averaged climate variables derived by inverse models from proxy data, or directly as proxy information (currently only used for oxygen isotopes) using forward models. The potential advantages and disadvantages of the different approaches are not well understood. For instance online approaches allow for information propagation in time, but it is not clear whether there is actually any substantial information propagation for the annual or longer time steps used in paleo data assimilation. If information propagation is not relevant, offline approaches would be sufficient and easier to implement, and in particular the stationary offline approach allows for using very large background ensembles. The relative performance of Particle and Kalman Filters depends on the ensemble size, and there may also be differences with respect to physical consistency. Assimilating local climate reconstructions allows to constrain small-scale structures in the climatic states, whereas using regional reconstructions can be expected to be influenced less by non-climatic noise in the proxies. Directly assimilating proxies avoids problems related to proxy-based, statistical climate reconstructions, but requires good forward models and small climate model biases. In the DAPS (PAGES working group on paleoclimate reanalyses, Data Assimilation and Proxy System modeling) data assimilation intercomparison experiment we will apply the different approaches in a pseudo-proxy set-up for the Northern Hemisphere for the period 1900 – 2017 CE to systematically validate the analyses against a reasonably well-known estimate for the true climatic state. We will assimilate local temperature pseudo-proxies over land with annual resolution, constructed by adding white noise to the HadCRUT4 gridded temperature observations. They will be given at the locations of the PAGES2k proxy network at 1500 CE. The analyses will be comprehensively validated against the HadCRUT4 gridded temperature observations and the HadSLP gridded sea level pressure data sets. The poster will present the details of the assimilation and validation set-up, and some preliminary results. 
The intercomparison has just started and we are inviting contributions from any groups working on paleoclimate data assimilation.