Thongchai Thailand

Archive for November 2018

  1. SUMMARY: In a general linear model for global mean annual Accumulated Cyclone Energy (ACE) in six basins and seven decades from 1945 to 2014, we find some evidence of a rising trend in tropical cyclone activity in the early part of the study period, prior to the decade D03 [1965-1974]. No global trend is found after this decade. The same pattern is found in three of the six cyclone basins studied, namely EP [Eastern Pacific], SI [South Indian], and SP [South Pacific], with each basin showing a rising trend relative to the decades prior to D03 and none since. No trends could be detected in the other three cyclone basins in the study, namely NA [North Atlantic], NI [North Indian], and WP [Western Pacific]. The global model found significant differences in overall mean ACE among the six basins. The Western Pacific Basin was the most active and the North Indian Basin the least active. Not much separates the other four basins except that the South Indian Basin was more active than the South Pacific Basin.
  2. BACKGROUND: Sea surface temperature (SST) is the link that connects climate change research with tropical cyclone research. Rising SST is observed (Hadley Centre, 2017) and thought to be an effect of anthropogenic global warming, or AGW (Hansen, 2005). At the same time, the theory of tropical cyclones holds that cyclone formation, and particularly cyclone intensification, are related to SST (Vecchi, 2007) (Knutson, 2010). Testable implications of the theory for empirical research are derived from climate model simulations (Knutson, 2010) and from sedimentary evidence of land-falling hurricanes over a 1500-year period (Mann, 2009). These studies suggest guidelines and testable implications for empirical tests of the theory that AGW affects tropical cyclone activity (Knutson, 2010).
  3. These guidelines are as follows: 1. The globally averaged intensity of tropical cyclones, measured as Accumulated Cyclone Energy (ACE), will rise as AGW increases SST; models predict a globally averaged intensity increase of 2% to 11% by 2100. 2. Models predict a falling globally averaged frequency of tropical cyclones, with frequency decreasing 6%-34% by 2100. 3. The globally averaged frequency of the “most intense tropical cyclones” should increase as a result of AGW. 4. Models predict an increase in precipitation within a 100 km radius of the storm center; a precipitation rise of 20% is projected for the year 2100.
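The ACE index referenced in these guidelines can be computed directly from best-track wind records. Below is a minimal sketch, assuming the standard convention of summing squared 6-hourly maximum sustained winds (in knots) at or above tropical-storm strength and scaling by 10^-4; the wind record used is hypothetical, not data from the study:

```python
import numpy as np

def accumulated_cyclone_energy(vmax_knots):
    """ACE from 6-hourly maximum sustained winds (knots) for one storm
    or one season: 1e-4 times the sum of squared winds, counting only
    records at tropical-storm strength (>= 35 knots)."""
    v = np.asarray(vmax_knots, dtype=float)
    v = v[v >= 35.0]  # ignore records below tropical-storm strength
    return 1e-4 * np.sum(v ** 2)

# hypothetical 6-hourly wind record (knots) for a single storm
winds = [30, 35, 45, 60, 80, 95, 90, 70, 50, 30]
print(round(accumulated_cyclone_energy(winds), 4))  # 3.7775 for this record
```

A basin's annual ACE is then the sum of this quantity over all storms in the basin that year.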
  4. Complications of empirical tests in this line of research are (Knutson, 2010): 1. Extremely high variance in tropical cyclone data at an annual time scale suggests a longer, perhaps decadal, time scale, which in turn greatly reduces statistical power. 2. Limited data availability and poor data quality present barriers to research. 3. Limited theoretical understanding of natural variability makes it difficult to ascertain whether the variability observed in the data is in excess of natural variability. 4. Model projections for individual cyclone basins show large differences and conflicting results. Thus, no testable implication can be derived for studies of individual basins; it is necessary that empirical studies have a global geographical span. 5. Advances in data collection activity, methods, and technology create trends in the data that must be separated from climate change effects (Landsea, 2007) (Landsea, 2010). A high level of interest in tropical cyclones derives from an unusually active hurricane season in 2004 when more than 14 tropical cyclones formed in the North Atlantic basin. Four of these storms intensified to Category 4 or greater and made landfall in the USA causing considerable damage. The even more dramatic 2005 season followed on its heels with more than thirty depressions. Four of them intensified to Category 5 and three made landfall. The most intense was Hurricane Wilma, but the most spectacular was Hurricane Katrina, which made landfall in Florida and again in Louisiana. Its devastation was facilitated by a breach in the levee system that was unrelated to AGW, but its dramatic consequences made it an icon of the possible extreme weather impacts of AGW.
  5. DATA: The “best track” cyclone data were used as received from the NCDC without corrections, adjustments, additions, or deletions, with the exception that the years 1848-1944 were not used because they did not contain data for all six basins. It is generally assumed that these data contain a measurement bias over time and across basins because of differences in data collection methods and procedures (Figure 11) (Kossin, 2013). Although aircraft reconnaissance of tropical cyclones in selected basins began as early as the 1940s, these data did not reach an adequate level of coverage and sophistication until the C-130 was deployed in the 1960s. Satellite data gathering for tropical cyclones began in the 1970s. The undercount bias in the oldest data explains why a rising trend in cyclone activity is found only against the early part of the study period. The findings presented here are entirely empirical and their utility depends on the validity of the ACE index as a measure of tropical cyclone activity. All data and computational details are available in the online data archive for this paper [LINK]. The full text of the source paper for this post may be downloaded from [SSRN.COM] or [ACADEMIA.EDU].
  6. THEORY: The effect of rising atmospheric carbon dioxide and sea surface temperature (SST) in the climate change era on the formation and intensification of tropical cyclones is not well understood (Walsh, 2014). The conventional theory is that rising SST under the right atmospheric conditions will increase both the formation and the intensification of tropical cyclones (Gray, 1967) (McBride, 1995) (Emanuel, 1987) (Gray, 1979). However, historical tropical cyclone data in a warming world, as well as future tropical cyclone conditions generated by general circulation climate models, imply that the relationship between the warming trend in the climate change era and tropical cyclone formation and intensification may be more complicated (Hodges, 2007) (Kozar, 2013) (Lin, 2015) (Walsh, 2014). Perhaps it has to do with the amount and extent of rainfall associated with tropical cyclones, with higher SST producing more rain (Scoccimarro, 2014) and localized SST relatively higher than that of surrounding waters producing a greater extent of the rainfall area (Lin, 2015). It is also possible that a complex relationship exists between SST and the frequency and intensity of tropical cyclones, with rising temperatures implying fewer but more intense storms (Hodges, 2007). On the other hand, a simulation on a millennial time scale by Kozar, Mann, Emanuel, and others suggests that warming will increase the decadal frequency of North Atlantic hurricanes and, proportionately, the decadal frequency of hurricanes that make landfall (Kozar, 2013). An extensive study by the US CLIVAR hurricane working group (HWG) with multiple general circulation climate models found that warming may cause the frequency of tropical cyclones to decline in the long term and that rising CO2 may have its own independent effect on hurricane activity (Walsh, 2014) (Held, 2011) (Royer, 1998).
The authors of the Walsh study included the disclaimer that the effect of climate change on tropical cyclones is “uncertain” and the sobering implication that we don’t really know the relationship between climate change and tropical cyclones. At the root of the tropical cyclone conundrum is the extreme inter-annual variation in the number and maximum intensity of tropical cyclones and the seemingly independent and unrelated behavior of the six major tropical cyclone basins (Hodges, 2007) (Frank, 2007) (Mann, 2007) (Zhao, 2009) (Zhao, 2011) (Eric, 2012) (Chan, 2005). Although apparent patterns may be visualized in decadal and multi-decadal means, their differences can be interpreted only within the low statistical power imposed by the high variance at the annual level, and their utility is constrained by the limited historical reach of the data along with a measurement bias imposed on the time series by changing measurement technology (Kozar, 2013) (Mann, 2007) (Landsea, 2007).
  7. DATA ANALYSIS: There are six tropical and sub-tropical oceanic regions where tropical cyclones form from an isolated patch of relatively higher sea surface temperature. They are, alphabetically, the Eastern Pacific, North Atlantic, North Indian, South Indian, South Pacific, and Western Pacific. North Atlantic tropical cyclones are called hurricanes and those in the Western Pacific are called typhoons; in the other basins they are called cyclones. Figure 1 shows the relative locations of the six tropical cyclone basins as well as the general linear model used to combine them at a decadal time scale in this study of long term trends in global tropical cyclone activity in six basins and seven decades.
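The general linear model described here can be sketched as an ordinary least squares fit with basin and decade as categorical factors. The sketch below uses randomly generated placeholder ACE values and one conventional dummy-coding layout; it is not the study's data, nor necessarily its exact model specification:

```python
import numpy as np

rng = np.random.default_rng(0)
basins = ["EP", "NA", "NI", "SI", "SP", "WP"]
decades = [f"D{i:02d}" for i in range(1, 8)]  # D01=1945-1954 ... D07=2005-2014

# hypothetical decadal mean ACE observations, one per basin-decade cell
cells = [(b, d) for b in basins for d in decades]
y = rng.gamma(shape=4.0, scale=25.0, size=len(cells))

# design matrix with dummy coding: intercept + 5 basin dummies + 6 decade
# dummies; the reference cell is basin EP in decade D01
X = np.zeros((len(cells), 12))
X[:, 0] = 1.0
for i, (b, d) in enumerate(cells):
    if b != "EP":
        X[i, basins.index(b)] = 1.0       # columns 1-5: basin effects
    if d != "D01":
        X[i, 5 + decades.index(d)] = 1.0  # columns 6-11: decade effects

# fit the general linear model by ordinary least squares
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # effects relative to the EP/D01 reference cell
```

In this layout, each basin coefficient estimates that basin's mean ACE relative to the reference basin net of decade effects, and each decade coefficient estimates that decade's mean ACE relative to the reference decade net of basin effects.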
  8. RESULTS: The results of the general linear model analysis of global mean ACE for all six tropical cyclone basins at a decadal time scale [as suggested by (Knutson, 2010)] are displayed in Figure 2. The left panel is a tabulation of the regression coefficients and their statistical significance. The right panel is a plot across time of the derived global decadal mean ACE for each of the seven decades in the study period 1945-2014. The 21 possible differences in global mean ACE among the seven decades are tested for statistical significance in Figure 3. Only 2 of the 21 hypothesis tests show statistically significant differences: the decades 1985-1994 and 1995-2004 had higher mean global ACE than the decade 1955-1964. No other statistically significant difference is found.
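Testing all 21 pairwise decade differences raises a multiple-comparison problem; Holm's sequentially rejective procedure (Holm, 1979), which appears in the reference list, is one standard correction. A sketch with hypothetical p-values (not the study's actual test results):

```python
def holm_reject(pvalues, alpha=0.05):
    """Holm's sequentially rejective procedure (Holm, 1979).

    Returns a list of booleans, True where the hypothesis is rejected,
    in the original order of the input p-values."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# hypothetical p-values for the 21 pairwise decade comparisons
pvals = [0.001, 0.002, 0.30, 0.04, 0.50] + [0.60] * 16
print(sum(holm_reject(pvals)))  # prints 2: two comparisons survive correction
```

Note how the raw p-value of 0.04, nominally "significant," does not survive the correction across 21 simultaneous tests.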
  9. The general linear model depicted in Figures 1 and 2 is also used to compare tropical cyclone activity among the six cyclone basins net of the variation among the seven decades. The mean annual ACE index in each basin for the entire study period 1945-2014 is shown in Figure 3 where the six basins are compared graphically. Hypothesis tests for all pairwise comparisons of the six basins are also listed in Figure 3. They show that the Western Pacific (WP) is the most active basin and that the North Indian (NI) is the least active. No difference among the other four basins is found except that the South Indian basin (SI) is more active than the South Pacific (SP). Interestingly, the North Atlantic (NA) basin, which gets a great deal of attention from researchers due to its proximity and relevance to the USA, is not a particularly active basin in the global context. It is more active than only one basin, the least active North Indian (NI) basin. Tropical cyclone research is therefore biased by a lopsided attention to the North Atlantic basin, such that many of the conclusions drawn may not be relevant in a global context, the only context for tests of the effect of global warming on tropical cyclone activity (Knutson, 2010).
  10. The trends for each basin are studied in Figure 4 to Figure 9, alphabetically from EP to WP. Some trends are found in the Eastern Pacific (EP), South Indian (SI), and South Pacific (SP) basins relative to the earliest decades. No trends are found in the other three basins; in particular, no trend is found in the most active basin (WP) or in the most studied basin (NA). Cyclonic activity in the EP basin in the twenty-year period 1975-1994 was greater than in the decade 1945-1954, and greater in the decade 1985-1994 than in the decade 1955-1964; no overall trend is found, and in particular there is no evidence that tropical cyclone activity has increased in the decades since D03 [1965-1974]. Cyclonic activity in the SI basin is found to be higher in 1965-2004 than in the decade 1945-1954 and higher in the decade 1995-2004 than in the decade 1955-1964; however, no sustained trend is found in the sample period 1945-2014 and in particular we find no evidence of an increase in cyclonic activity since the decade D03 [1965-1974]. In the SP basin, cyclonic activity shows a difference between the period 1975-2004 and the decade 1945-1954; however, no sustained trend is found, and in particular there is no evidence that cyclonic activity has increased since the decade D02 [1955-1964].
  11. CONCLUSION: In this work, the ACE index is used to compare decadal mean tropical cyclone activity worldwide in all six basins among seven decades from 1945 to 2014. Some increase in tropical cyclone activity is found relative to the earliest decades. No trend is found after the decade 1965-1974. A comparison of the six cyclone basins in the study shows that the Western Pacific Basin is the most active basin and the North Indian Basin the least. These findings are best understood in terms of the known undercount bias in the data in the earliest decades, and not in terms of the theory of anthropogenic global warming and climate change.

  1. American Meteorological Society. (2014). State of the climate in 2013. Bulletin of the American Meteorological Society, V. 95, No. 7, July 2014.
  2. Balaguru, K. (2014). Increase in the intensity of post monsoon Bay of Bengal tropical cyclones. Geophysical Research Letters, 3594-3601.
  3. Bister, M. (1998). Dissipative heating and hurricane intensity. Meteorology and Atmospheric Physics, 52: 233-240.
  4. Chan, J. (2005). Interannual and interdecadal variations of tropical cyclone activity over the western North Pacific. Meteorology and Atmospheric Physics, 89: 143-152.
  5. Chan, J. (2006). Comments on “changes in tropical cyclone number, duration, and intensity in a warming environment”. Science, 311: 1731b.
  6. Dodla, V. (2007). GIS based analysis of climate change impacts on tropical cyclones over Bay of Bengal. Jackson, MS, 39217, USA: Trent Lott Geospatial and Visualization Research Center, Jackson State University.
  7. Draper, N. & Smith, H. (1981). Applied regression analysis. NY: Wiley.
  8. Elsner, J. (2008). The increasing intensity of the strongest tropical cyclones. Nature, 455, 92-95.
  9. Emanuel, K. (1987). The dependence of hurricane intensity on climate. Nature, 326: 483-485.
  10. Emanuel, K. (1988). The maximum intensity of hurricanes. Journal of Atmospheric Sciences, 45: 1143-1155.
  11. Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436: 686-688.
  12. Eric, K. (2012). Interannual variations of tropical cyclone activity over the North Indian Ocean. International Journal of Climatology, Volume 32, Issue 6, pages 819–830.
  13. Frank, W. (2007). The interannual variability of tropical cyclones. Monthly Weather Review, 135: 3587-3598.
  14. Girishkumar, M. (2012). The influences of ENSO on tropical cyclone activity in the Bay of Bengal during October-December. Journal of Geophysical Research, V.117, C02033, doi:10.1029/2011JC007417.
  15. Gray, W. (1967). Global view of the origins of tropical disturbances and storms. Fort Collins, CO: Technical Paper #114, Dept of Atmospheric Sciences, Colorado State University.
  16. Gray, W. (1979). Hurricanes: their formation, structure, and likely role in the tropical circulation. In D. Shaw, Meteorology over tropical oceans. Bracknell: Royal Meteorological Society.
  17. Held, I. (2011). The response of tropical cyclone statistics to an increase in CO2 … Journal of Climate, 24: 5353-5364.
  18. Hodges, K. (2007). How may tropical cyclones change in a warmer climate. Tellus A: Dynamic Meteorology and Oceanography, 59(4): pp. 539-561.
  19. Holland, G. (1997). The maximum potential intensity of tropical cyclones. Journal of Atmospheric Sciences, 54: 2519-2541.
  20. Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2): 65-70.
  21. Hurricane Science. (2010). 1970 The great Bhola cyclone. Retrieved 2015, from
  22. IPCC. (2007). Climate change 2007. Retrieved 2015, from
  23. Islam, T. (2008). Climatology of landfalling tropical cyclones in Bangladesh 1877–2003. Natural Hazards, 48(1), 115–135.
  24. JAXA. (2015). Typhoon Search. Retrieved 2015, from The Japan Aerospace Exploration Agency :
  25. JMA. (2005). Tropical cyclone basins. Retrieved 2015, from Japan Meteorological Agency:
  26. Johnson, V. (2013). Revised standards for statistical evidence. Proceedings of the National Academy of Sciences,
  27. Kikuchi, K. (2010). Formation of tropical cyclones in the northern Indian Ocean. Journal of the Meteorological Society of Japan, Vol. 88, No. 3, pp. 475–496.
  28. Knutson, T. (2010). Tropical cyclones and climate change. Nature Geoscience, 3.3 (2010): 157-163.
  30. Klotzbach, P. (2006). Trends in global tropical cyclone activity over the past twenty years 1986-2005. Geophysical research letters, 33: L10805.
  31. Knapp, K. (2010). The International Best Track Archive for Climate Stewardship (IBTrACS) . Bulletin of the American Meteorological Society, 91, 363–376.
  32. Kossin, J. (2013). Trend analysis with a new global record of cyclone intensity. Journal of Climate, 26: 9960-9976.
  33. Kozar, M. (2013). Long term variations of North American tropical cyclone activity … Journal of Geophysical Research, 118: 13383-13392.
  34. Kumar, R. (2013). A brief history of Indian cyclones. Retrieved 2015, from
  35. Landsea, C. (2007). Counting Atlantic tropical cyclones back to 1900. EOS Transactions of the American Geophysical Union, 88(18): 197-208.
  36. Li, T. (2003). Satellite data analysis and numerical simulation of tropical cyclones. Geophysical Research Letters, V. 30 #21 2122.
  37. Li, T. (2010). Global warming shifts Pacific tropical cyclone location. Geophysical Research Letters, 37: 1-5.
  38. Lin, H. (2015). Recent decrease in typhoon destructive potential and global warming implications. Nature Communications, DOI: 10.1038/ncomms8182.
  39. Lin, Y. (2015). Tropical cyclone rainfall area controlled by relative sea surface temperature. Nature Communications, DOI: 10.1038/ncomms7591.
  40. Mann, M. (2007). Atlantic tropical cyclones revisited. EOS Transactions American Geophysical Union, 88:36:349-350.
  41. Mann, M. (2007). Evidence of a modest undercount bias in early historical Atlantic tropical cyclone counts. Geophysical Research Letters, 34: L22707.
    McBride, J. (1995). Tropical cyclone formation. In W. Frank, A global view of tropical cyclones (pp. 63-100). Geneva: World Meteorological Organization.
  42. Munshi, J. (2015). Global cyclone paper data archive. Retrieved 2015, from Dropbox: [LINK]
  43. Murakami, H. (2014). Contributing factors to the recent high level of Accumulated Cyclone Energy (ACE) and Power Dissipation Index (PDI) in the North Atlantic. Journal of Climate, v.27, n.8.
  44. Neetu, S. (2012). Influence of upper ocean stratification on tropical cyclone induced surface cooling in the Bay of Bengal. Journal of Geophysical Research, V117 C12020.
  45. NHC. (2015). National Hurricane Center. Retrieved 2015, from NOAA:
  46. NOAA. (2015). La Nina. Retrieved 2015, from NOAA:
  47. NOAA. (2015). NOAA. Retrieved 2015, from NOAA:
  48. NOAA/NCDC. (2015). IBTRACS. Retrieved 2015, from NCDC:
  49. Royer, J. (1998). A GCM study of the impact of greenhouse gas increase on the frequency of tropical cyclones. Climate Change, 38: 307-343.
  50. Scoccimarro, E. (2014). Intense precipitation events associated with landfalling tropical cyclones in response to a warmer climate and increased CO2. Journal of Climate, 27: 4642-4654.
  51. Sengupta, D. (2008). Cyclone-induced mixing does not cool SST in the post-monsoon north Bay of Bengal. Atmospheric Science Letters, 9(1), 1–6.
  52. Sharkov, E. (2012). Global tropical cyclogenesis. Berlin: Springer-Verlag.
    Singh, O. P. (2001). Has the frequency of intense tropical cyclones increased in the north Indian Ocean? Current Science, 80(4), 575–580.
  53. Sriver, R. (2006). Low frequency variability in globally integrated tropical cyclone power dissipation. Geophysical Research Letters, 33: L11705.
  54. Stormfax. (2015). El-Nino. Retrieved 2015, from
  55. Walsh, K. (2014). Hurricanes and climate. Bulletin of the American Meteorological Society, DOI: 10.1175/BAMS-D-13-00242.1.
  56. Webster, P. (2005). Changes in tropical cyclone number, duration, and intensity in a warming environment. Science, 309: 1844-1846.
  57. Zhao, H. (2011). Interannual changes of tropical cyclone intensity in the western North Pacific. Journal of the Meteorological Society of Japan, Vol. 89, No. 3, pp. 243-253.
  58. Zhao, M. (2009). Simulations of global hurricane climatology, interannual variability, and response to global warming. Journal of Climate, 22: 6653-6678.
  59. Zhao, M. (2012). GCM simulations of hurricane frequency response to sea surface temperature anomalies. Journal of Climate, 25: 2995-3009.

Figure 1: Annual homicides in England and Wales: Full span 1898-2003

Figure 2: Annual homicides in England and Wales: 1st half 1898-1950

Figure 3: Annual homicides in England and Wales: 2nd half 1951-2003

Figure 4: HADCRUT4 global mean temperature anomaly: Full span 1898-2003

Figure 5: HADCRUT4 global mean temperature anomaly: 1st half 1898-1950

Figure 6: HADCRUT4 global mean temperature anomaly: 2nd half 1951-2003

Figure 7: Summary tables for Figure 1 to Figure 6

  1. The theory that fossil fuel emissions since the Industrial Revolution have caused global warming is based on the proposition that such emissions increase atmospheric carbon dioxide concentration, which in turn increases surface temperature according to a heat trapping effect first proposed by Arrhenius in a failed attempt to explain ice ages. A testable implication of the theory is the Charney climate sensitivity, equal to the increase in surface temperature for a doubling of atmospheric CO2 and based on the proportionality of surface temperature with the logarithm of atmospheric CO2. This proportionality is described in terms of a linear regression coefficient based on an assumed statistically significant correlation between the two variables. [RELATED POST ON ECS]
  2. However, the large body of empirical research in climate sensitivity has not produced an orderly accumulation of knowledge but has instead created confusion and mistrust of the climate sensitivity parameter by virtue of an unacceptably large range of empirical sensitivity values. The frustration of climate science with this so-called “uncertainty issue in climate sensitivity” has motivated proposals to abandon the climate sensitivity approach in favor of the Transient Climate Response to Cumulative Emissions, or TCRE (Knutti, 2017) (Matthews, 2009). [RELATED POST ON TCRE]
  3. This state of affairs in climate sensitivity research is likely the result of insufficient statistical rigor in the research methodologies applied. This work demonstrates spurious proportionalities in time series data that can yield specious climate sensitivities that have no interpretation. A parody of the Charney sensitivity with data for homicides in England and Wales 1898-2003 is used for the demonstration. The homicide parody is compared with a parallel analysis of global mean temperature reconstructions for the same period.
  4. The analysis demonstrates that such spurious results are more likely to be taken seriously when they occur under conditions where they are more likely to be accepted at face value. The results imply that the large number of climate sensitivities reported in the literature are likely to be mostly spurious and without an interpretation in terms of the Charney climate sensitivity. Sufficient statistical discipline is likely to settle the Charney climate sensitivity issue one way or the other, either to determine its hitherto elusive value or to demonstrate that the assumed relationships do not exist in the data.
  5. Homicides in England and Wales 1898-2003 are studied against the atmospheric carbon dioxide data for the same period. The Charney equilibrium sensitivity of homicides is found to be λ=1.7 thousand additional annual homicides for each doubling of atmospheric CO2. The sensitivity estimate is supported by a strong correlation of ρ=0.95 and a detrended correlation of ρ=0.86. The analysis demonstrates that spurious proportionalities in time series data, derived from inadequate statistical rigor in the interpretation of the data, have led to a theory of human caused global warming since the Little Ice Age that is unlikely to survive a review with sufficient statistical rigor. [RELATED POST ON THE LIA]
  6. The full text of this work is available for download from [ACADEMIA.EDU] or from [SSRN.COM] . This blog post is a brief presentation of the work and its findings. The discussion consists of a presentation of the seven charts and tables shown above in sequence from Figure 1 to Figure 7.
  7. Figure 1, Figure 2, and Figure 3 present the data for the annual number of homicides in England and Wales for the 106-year period from 1898 to 2003. Each figure contains two panels (upper and lower) and three frames (left, middle, and right). The upper panel is a presentation of the proportionality between homicides and log(CO2) seen in the source data as received. The lower panel tests that proportionality for responsiveness at an annual time scale with detrended correlation analysis [DESCRIBED IN A RELATED POST]. Each panel consists of three frames. The left frame presents the log(CO2) data, the middle frame presents the object data, either homicides or temperature, and the right frame displays their proportionality.
  8. Figure 1: Full span of the homicide data (1898-2003): The top panel displays the source data, with the right frame showing a strong observed correlation in the sample of ρ=0.945 between log(CO2) and the number of homicides per year (in thousands) in the 106-year sample period 1898-2003. This correlation appears to validate the proportionality and, in particular, the OLS linear regression coefficient of β=2.45 that represents the sensitivity of homicides to atmospheric CO2. To restate the sensitivity of homicides to carbon dioxide in the Charney/Manabe format in terms of a doubling of atmospheric CO2 concentration, we multiply by ln(2)=0.693 to find that λ=1.70 thousand additional homicides for each doubling of atmospheric CO2. The 95% confidence interval for the Charney sensitivity is 95%CI=[1.58<λ<1.81]. Thus we find strong empirical support for the proportionality of homicides to atmospheric carbon dioxide concentration that would support a theory that atmospheric CO2 causes homicides.
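The conversion used here, from an OLS coefficient against the logarithm of CO2 to a sensitivity per doubling, is a one-line calculation:

```python
import math

def charney_sensitivity(beta):
    """Convert an OLS coefficient of the response against ln(CO2) into
    a Charney-format sensitivity per doubling of CO2: lambda = beta * ln(2)."""
    return beta * math.log(2)

# the regression coefficient from the homicide parody in the text
print(round(charney_sensitivity(2.45), 2))  # 1.7 thousand homicides per doubling
```

The same conversion applies to the temperature regressions later in the post, where beta is the coefficient of the temperature anomaly against ln(CO2).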
  9. However, it is known that correlations in time series data are often spurious in this context because these correlations can be driven by shared long term trends with little or no responsiveness information for a finite time scale of interest that is shorter than the full span of the data being studied. Therefore it is necessary to study correlation in time series data net of the trend as a way of extracting the responsiveness information [DESCRIBED IN A RELATED POST]. The lower panel of Figure 1 presents this analysis. The left and middle frames show the detrended series for log(CO2) and thousands of homicides per year; in the detrended series, the OLS linear regression line has been subtracted from the data. The right frame shows the proportionality between the detrended series. We expect that some of the correlation seen in the source data is attributable to long term trends but that some may remain; if the portion of the correlation that survives into the detrended series is statistically significant, then responsiveness at the time scale of the detrending procedure is implied. In this case, of the source data correlation of ρ=0.945, a statistically significant ρ=0.859 survives into the detrended series at an annual time scale. The result implies that homicides are responsive to atmospheric CO2 at an annual time scale. Therefore, the source data correlation is not an artifact of shared trends but a result of responsiveness at an annual time scale, and can be interpreted in terms of a sensitivity of homicides to atmospheric CO2.
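The detrended correlation procedure described above can be sketched as follows. The two series below are synthetic, constructed to share a long term trend while fluctuating independently, so the source correlation is high but the detrended correlation is near zero:

```python
import numpy as np

def detrended_correlation(x, y):
    """Correlation between two series after removing each one's OLS
    linear trend against time: responsiveness net of shared trends."""
    t = np.arange(len(x), dtype=float)

    def detrend(s):
        b, a = np.polyfit(t, s, 1)  # slope, intercept of the trend line
        return s - (a + b * t)      # residuals from the trend line

    xd = detrend(np.asarray(x, dtype=float))
    yd = detrend(np.asarray(y, dtype=float))
    return np.corrcoef(xd, yd)[0, 1]

# two synthetic series that share a trend but fluctuate independently
rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
x = 0.5 * t + rng.normal(size=100)
y = 0.3 * t + rng.normal(size=100)
print(round(np.corrcoef(x, y)[0, 1], 2))     # high: driven by the shared trend
print(round(detrended_correlation(x, y), 2))  # near zero: no responsiveness
```

For the homicide series, by contrast, the claim in the text is that most of the source correlation survives this detrending step.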
  10. Yet another aspect of time series data that must be taken into consideration is the assumption, implicit in full span analysis, that the behavior of the data is more or less homogeneous across the full span. This condition is imposed by OLS linear regression assumptions. A common method of carrying out the test is the “split-half” test, in which the first half and the second half of the full span are compared. If they are found to be very different, then full span homogeneity cannot be assumed. Figure 2 presents the analysis for the first 53 years of the homicide data, 1898-1950. The corresponding analysis for the second half, 1951-2003, is presented in Figure 3. A comparison of these results shows somewhat different sensitivity values, particularly in the first half, with the Charney sensitivity λ=[1.70, 0.60, 2.1] thousand additional homicides for each doubling of atmospheric CO2 in the full span, 1st half, and 2nd half of the time series respectively. The detrended correlations supporting the interpretation of these sensitivities at an annual time scale are ρ=[0.86, 0.28, 0.30]. The strong detrended correlation supporting the regression coefficient seen in the full span is not found in either half of the span, and that explains the instability of the regression coefficient in this analysis.
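The split-half test can be sketched as a comparison of OLS slopes across sub-samples; the data below are synthetic, constructed so that the two halves have different slopes:

```python
import numpy as np

def split_half_sensitivities(lnco2, response):
    """OLS slope of response on ln(CO2) for the full span and for the
    first and second halves of the sample (the split-half test)."""
    def slope(x, y):
        return np.polyfit(x, y, 1)[0]

    n = len(lnco2) // 2
    return (slope(lnco2, response),          # full span
            slope(lnco2[:n], response[:n]),  # 1st half
            slope(lnco2[n:], response[n:]))  # 2nd half

# synthetic illustration: roughly ln(CO2) over a 106-year sample, with a
# response whose slope differs between the two halves of the span
x = np.linspace(5.7, 6.0, 106)
y = np.concatenate([2.0 * x[:53], 8.0 * x[53:] - 36.0])
full, first, second = split_half_sensitivities(x, y)
print(round(first, 2), round(second, 2))  # the halves disagree sharply
```

When the two half-span slopes disagree like this, the full-span coefficient averages over heterogeneous behavior and cannot be interpreted as a single stable sensitivity.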
  11. The corresponding analysis of annual HADCRUT4 global mean temperature anomaly data from the Hadley Centre for the same sample period 1898-2003 is presented in Figure 4, Figure 5, and Figure 6. These temperature data are available for a longer period, but the sample period studied is that which corresponds with the homicide data so that the same set of CO2 data are used in each case for a common comparison basis. Figure 4 is a graphical display of the analysis for the full span of the data, and it shows a regression coefficient of β=3.1 which implies a climate sensitivity of λ=2.15ºC of warming for each doubling of atmospheric CO2. However, the OLS linear regression coefficient is not supported by correlation at an annual time scale. The full span correlation in the source data is strong at ρ=0.85; but unlike the homicide data, where almost all of the source data correlation survived into the detrended series, almost all of this strong correlation is attributable to the common trend and only ρ=0.27 survives into the detrended series. Thus, although a sensitivity of λ=2.15 can be computed from the data, the existence of sensitivity at an annual time scale is not supported by the data.
  12. The split half analysis of the temperature anomaly data shows a further weakness in the computed climate sensitivity, with a dramatic difference between the two halves. The 1st half, 1898-1950, shows a very high regression coefficient of β=8.14 that implies an impossibly high climate sensitivity of λ=5.64ºC of warming for each doubling of atmospheric CO2, but no support for the regression is found in the correlation. The strong source data correlation of ρ=0.80 derives entirely from shared trends and vanishes when detrended, leaving a detrended correlation of ρ=0.04 with no statistical significance. The large and anomalous values of the regression coefficient and climate sensitivity are likely to be artifacts of violations of OLS assumptions, without any interpretation in terms of a relationship between atmospheric CO2 concentration and surface temperature.
  13. Very different results are seen in the 2nd half of the temperature anomaly data 1951-2003, where strong support for climate sensitivity is found. The regression coefficient β=2.77 implies a climate sensitivity of λ=1.92ºC of warming for each doubling of atmospheric CO2, very close to the full span sensitivity of λ=2.15ºC. The sensitivity is supported by a strong and significant source data correlation of ρ=0.81, almost all of which survives into the detrended series with ρ=0.66. Thus, neither the weak detrended correlation in the full span of the temperature data nor the instability revealed by the split half analysis supports the existence of a stable climate sensitivity parameter in the temperature anomaly data 1898-2003.
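The split half procedure of paragraphs 12 and 13 can be sketched as follows. The function names and the synthetic CO2 path are illustrative assumptions; only the logic follows the text: for each half, compute the OLS slope of temperature against ln(CO2), the implied λ = β·ln(2), the source correlation, and the detrended correlation.

```python
import numpy as np

def detrended_corr(x, y):
    """Correlation of residuals after removing each series' linear trend."""
    t = np.arange(len(x))
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(rx, ry)[0, 1]

def split_half_sensitivity(lnco2, temp):
    """For each half of the sample: OLS slope beta of temp vs ln(CO2),
    implied lambda = beta*ln(2), source correlation, detrended correlation."""
    n = len(temp)
    results = []
    for half in (slice(0, n // 2), slice(n // 2, n)):
        beta = np.polyfit(lnco2[half], temp[half], 1)[0]
        results.append({
            "lambda": beta * np.log(2),
            "rho_source": np.corrcoef(lnco2[half], temp[half])[0, 1],
            "rho_detrended": detrended_corr(lnco2[half], temp[half]),
        })
    return results

# Noiseless illustration: temp = 2.0 * ln(CO2) by construction, so each
# half should recover lambda = 2*ln(2) with full detrended correlation.
t = np.arange(106)
lnco2 = np.log(300.0 + 0.5 * t + 0.003 * t ** 2)  # hypothetical CO2 path
temp = 2.0 * lnco2
for half in split_half_sensitivity(lnco2, temp):
    print(half)
```

With the real temperature data the two halves disagree sharply, which is the instability the split half test is designed to expose.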
  14. Conclusion#1: It is found that the data show stronger support for the parody research question of the sensitivity of homicides to atmospheric CO2 than for the real research question of the sensitivity of surface temperature anomalies to atmospheric CO2. Yet, although climate sensitivity is generally accepted as true and proven by data, no one would of course subscribe to the idea of homicide sensitivity. This kind of interpretation of data is a well known property of human cognition called confirmation bias, described more fully in a related post [LINK].
  15. This anomalous result reveals real and possibly serious statistical weaknesses in empirical sensitivity research in climate science. The weaknesses likely have to do with overlooked OLS linear regression assumptions as well as flawed interpretation of source data correlation in time series data without consideration for the effect of shared trends on correlation. Such consideration is necessary before source data correlations in time series field data are interpreted in terms of causation at a finite time scale. The uncertainty problem in empirical climate sensitivity research likely arises from inadequate attention to whether regression coefficients are supported by correlation at the time scale of interest. Without such support, regression coefficients may still be computed from the data, but they have no interpretation in terms of causal relationships. This issue is discussed in detail in related posts [LINK] [LINK]
  16. Conclusion#2: The relationship between correlation in field data and a theory of causation is that correlation at the correct time scale is a necessary but not sufficient condition for causation. This means that without correlation at the time scale of interest, no causation theory is possible; but it does not mean that correlation at the time scale of interest implies causation. A dramatic demonstration of this principle is provided by the data presented in this work, where the homicide parody shows stronger correlation than the climate sensitivity data.
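The necessary-but-not-sufficient principle can be demonstrated with two synthetic series that share a rising trend but are otherwise causally unrelated: the source correlation is high, yet almost none of it survives detrending. This is a hypothetical illustration, not the homicide or temperature data.

```python
import numpy as np

def detrended_corr(x, y):
    """Correlation of residuals after removing each series' linear trend."""
    t = np.arange(len(x))
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(rx, ry)[0, 1]

# Two independent noisy series that happen to share a rising trend.
rng = np.random.default_rng(42)
t = np.arange(100)
a = 0.02 * t + rng.normal(0.0, 0.3, t.size)
b = 0.05 * t + rng.normal(0.0, 0.8, t.size)

print(np.corrcoef(a, b)[0, 1])  # high: driven entirely by the shared trend
print(detrended_corr(a, b))     # near zero: no relation at the annual scale
```

The raw correlation would pass a naive significance test, but detrending exposes it as an artifact of the shared trend: correlation without causation.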
  17. Conclusion#3: The general state of uncertainty and confusion in empirical climate sensitivity research outside of climate models, in the world of observational data, may imply that the hypothesized warming effect of atmospheric CO2 concentration, though programmed into climate models, is not supported by observational data and that therefore there is no empirical support for this theory. This conclusion is supported by related posts at this site: [LINK#1], [LINK#2], [LINK#3], [LINK#4]. The source paper for this post may be downloaded from [ACADEMIA.EDU] or from [SSRN.COM]





ECS Bibliography

  1. 1963: Möller, Fritz. “On the influence of changes in the CO2 concentration in air on the radiation balance of the earth’s surface and on the climate.” Journal of Geophysical Research 68.13 (1963): 3877-3886. The numerical value of a temperature change under the influence of a CO2 change as calculated by Plass is valid only for a dry atmosphere. Overlapping of the absorption bands of CO2 and H2O in the range around 15 μ essentially diminishes the temperature changes. New calculations give ΔT = + 1.5° when the CO2 content increases from 300 to 600 ppm. Cloudiness diminishes the radiation effects but not the temperature changes because under cloudy skies larger temperature changes are needed in order to compensate for an equal change in the downward long‐wave radiation. The increase in the water vapor content of the atmosphere with rising temperature causes a self‐amplification effect which results in almost arbitrary temperature changes, e.g. for constant relative humidity ΔT = +10° in the above mentioned case. It is shown, however, that the changed radiation conditions are not necessarily compensated for by a temperature change. The effect of an increase in CO2 from 300 to 330 ppm can be compensated for completely by a change in the water vapor content of 3 per cent or by a change in the cloudiness of 1 per cent of its value without the occurrence of temperature changes at all. Thus the theory that climatic variations are effected by variations in the CO2 content becomes very questionable.
  2. 1964: Manabe, Syukuro, and Robert F. Strickler. “Thermal equilibrium of the atmosphere with a convective adjustment.” Journal of the Atmospheric Sciences 21.4 (1964): 361-385. The states of thermal equilibrium (incorporating an adjustment of super-adiabatic stratification) as well as that of pure radiative equilibrium of the atmosphere are computed as the asymptotic steady state approached in an initial value problem. Recent measurements of absorptivities obtained for a wide range of pressure are used, and the scheme of computation is sufficiently general to include the effect of several layers of clouds. The atmosphere in thermal equilibrium has an isothermal lower stratosphere and an inversion in the upper stratosphere which are features observed in middle latitudes. The role of various gaseous absorbers (i.e., water vapor, carbon dioxide, and ozone), as well as the role of the clouds, is investigated by computing thermal equilibrium with and without one or two of these elements. The existence of ozone has very little effect on the equilibrium temperature of the earth’s surface but a very important effect on the temperature throughout the stratosphere; the absorption of solar radiation by ozone in the upper and middle stratosphere, in addition to maintaining the warm temperature in that region, appears also to be necessary for the maintenance of the isothermal layer or slight inversion just above the tropopause. The thermal equilibrium state in the absence of solar insolation is computed by setting the temperature of the earth’s surface at the observed polar value. In this case, the stratospheric temperature decreases monotonically with increasing altitude, whereas the corresponding state of pure radiative equilibrium has an inversion just above the level of the tropopause. A series of thermal equilibriums is computed for the distributions of absorbers typical of different latitudes. According to these results, the latitudinal variation of the distributions of ozone and water vapor may be partly responsible for the latitudinal variation of the thickness of the isothermal part of the stratosphere. Finally, the state of local radiative equilibrium of the stratosphere overlying a troposphere with the observed distribution of temperature is computed for each season and latitude. In the upper stratosphere of the winter hemisphere, a large latitudinal temperature gradient appears at the latitude of the polar-night jet stream, while in the upper stratosphere of the summer hemisphere, the equilibrium temperature varies little with latitude. These features are consistent with the observed atmosphere. However, the computations predict an extremely cold polar night temperature in the upper stratosphere and a latitudinal decrease (toward the cold pole) of equilibrium temperature in the middle or lower stratosphere for winter and fall. This disagrees with observation, and suggests that explicit introduction of the dynamics of large scale motion is necessary.
  3. 1967: Manabe, Syukuro, and Richard T. Wetherald. “Thermal equilibrium of the atmosphere with a given distribution of relative humidity.” Journal of the Atmospheric Sciences 24.3 (1967): 241-259. [ECS=2]
  4. 1969: Budyko, Mikhail I. “The effect of solar radiation variations on the climate of the earth.” Tellus 21.5 (1969): 611-619. It follows from the analysis of observation data that the secular variation of the mean temperature of the Earth can be explained by the variation of short-wave radiation, arriving at the surface of the Earth. In connection with this, the influence of long-term changes of radiation, caused by variations of atmospheric transparency on the thermal regime is being studied. Taking into account the influence of changes of planetary albedo of the Earth under the development of glaciations on the thermal regime, it is found that comparatively small variations of atmospheric transparency could be sufficient for the development of quaternary glaciations.
  5. 1969: Sellers, William D. “A global climatic model based on the energy balance of the earth-atmosphere system.” Journal of Applied Meteorology 8.3 (1969): 392-400. A relatively simple numerical model of the energy balance of the earth-atmosphere is set up and applied. The dependent variable is the average annual sea level temperature in 10° latitude belts. This is expressed basically as a function of the solar constant, the planetary albedo, the transparency of the atmosphere to infrared radiation, and the turbulent exchange coefficients for the atmosphere and the oceans. The major conclusions of the analysis are that removing the arctic ice cap would increase annual average polar temperatures by no more than 7C, that a decrease of the solar constant by 2–5% might be sufficient to initiate another ice age, and that man’s increasing industrial activities may eventually lead to a global climate much warmer than today.
  6. 1971: Rasool, S. Ichtiaque, and Stephen H. Schneider. “Atmospheric carbon dioxide and aerosols: Effects of large increases on global climate.” Science 173.3992 (1971): 138-141. Effects on the global temperature of large increases in carbon dioxide and aerosol densities in the atmosphere of Earth have been computed. It is found that, although the addition of carbon dioxide in the atmosphere does increase the surface temperature, the rate of temperature increase diminishes with increasing carbon dioxide in the atmosphere. For aerosols, however, the net effect of increase in density is to reduce the surface temperature of Earth. Because of the exponential dependence of the backscattering, the rate of temperature decrease is augmented with increasing aerosol content. An increase by only a factor of 4 in global aerosol background concentration may be sufficient to reduce the surface temperature by as much as 3.5 ° K. If sustained over a period of several years, such a temperature decrease over the whole globe is believed to be sufficient to trigger an ice age.
  7. 1975: Manabe, Syukuro, and Richard T. Wetherald. “The effects of doubling the CO2 concentration on the climate of a general circulation model.” Journal of the Atmospheric Sciences 32.1 (1975): 3-15. An attempt is made to estimate the temperature changes resulting from doubling the present CO2 concentration by the use of a simplified three-dimensional general circulation model. This model contains the following simplifications: a limited computational domain, an idealized topography, no heat transport by ocean currents, and fixed cloudiness. Despite these limitations, the results from this computation yield some indication of how the increase of CO2 concentration may affect the distribution of temperature in the atmosphere. It is shown that the CO2 increase raises the temperature of the model troposphere, whereas it lowers that of the model stratosphere. The tropospheric warming is somewhat larger than that expected from a radiative-convective equilibrium model. In particular, the increase of surface temperature in higher latitudes is magnified due to the recession of the snow boundary and the thermal stability of the lower troposphere which limits convective heating to the lowest layer. It is also shown that the doubling of carbon dioxide significantly increases the intensity of the hydrologic cycle of the model.
  8. 1976: Cess, Robert D. “Climate change: An appraisal of atmospheric feedback mechanisms employing zonal climatology.” Journal of the Atmospheric Sciences 33.10 (1976): 1831-1843. The sensitivity of the earth’s surface temperature to factors which can induce long-term climate change, such as a variation in solar constant, is estimated by employing two readily observable climate changes. One is the latitudinal change in annual mean climate, for which an interpretation of climatological data suggests that cloud amount is not a significant climate feedback mechanism, irrespective of how cloud amount might depend upon surface temperature, since there are compensating changes in both the solar and infrared optical properties of the atmosphere. It is further indicated that all other atmospheric feedback mechanisms, resulting, for example, from temperature-induced changes in water vapor amount, cloud altitude and lapse rate, collectively double the sensitivity of global surface temperature to a change in solar constant. The same conclusion is reached by considering a second type of climate change, that associated with seasonal variations for a given latitude zone. The seasonal interpretation further suggests that cloud amount feedback is unimportant zonally as well as globally. Application of the seasonal data required a correction for what appears to be an important seasonal feedback mechanism. This is attributed to a variability in cloud albedo due to seasonal changes in solar zenith angle. No attempt was made to individually interpret the collective feedback mechanisms which contribute to the doubling in surface temperature sensitivity. It is suggested, however, that the conventional assumption of fixed relative humidity for describing feedback due to water vapor amount might not be as applicable as is generally believed. Climate models which additionally include ice-albedo feedback are discussed within the framework of the present results.
  9. 1978: Ramanathan, V., and J. A. Coakley. “Climate modeling through radiative‐convective models.” Reviews of Geophysics 16.4 (1978): 465-489. We present a review of the radiative‐convective models that have been used in studies pertaining to the earth’s climate. After familiarizing the reader with the theoretical background, modeling methodology, and techniques for solving the radiative transfer equation the review focuses on the published model studies concerning global climate and global climate change. Radiative‐convective models compute the globally and seasonally averaged surface and atmospheric temperatures. The computed temperatures are in good agreement with the observed temperatures. The models include the important climatic feedback mechanism between surface temperature and H2O amount in the atmosphere. The principal weakness of the current models is their inability to simulate the feedback mechanism between surface temperature and cloud cover. It is shown that the value of the critical lapse rate adopted in radiative‐convective models for convective adjustment is significantly larger than the observed globally averaged tropospheric lapse rate. The review also summarizes radiative‐convective model results for the sensitivity of surface temperature to perturbations in (1) the concentrations of the major and minor optically active trace constituents, (2) aerosols, and (3) cloud amount. A simple analytical model is presented to demonstrate how the surface temperature in a radiative‐convective model responds to perturbations.
  10. 1985: Wigley, Thomas ML, and Michael E. Schlesinger. “Analytical solution for the effect of increasing CO2 on global mean temperature.” Nature 315.6021 (1985): 649. Increasing atmospheric carbon dioxide concentration is expected to cause substantial changes in climate. Recent model studies suggest that the equilibrium warming for a CO2 doubling (Δ T2×) is about 3–4°C. Observational data show that the globe has warmed by about 0.5°C over the past 100 years. Are these two results compatible? To answer this question due account must be taken of oceanic thermal inertia effects, which can significantly slow the response of the climate system to external forcing. The main controlling parameters are the effective diffusivity of the ocean below the upper mixed layer (κ) and the climate sensitivity (defined by Δ T2×). Previous analyses of this problem have considered only limited ranges of these parameters. Here we present a more general analysis of two cases, forcing by a step function change in CO2 concentration and by a steady CO2 increase. The former case may be characterized by a response time which we show is strongly dependent on both κ and Δ T2×. In the latter case the damped response means that, at any given time, the climate system may be quite far removed from its equilibrium with the prevailing CO2 level. In earlier work this equilibrium has been expressed as a lag time, but we show this to be misleading because of the sensitivity of the lag to the history of past CO2 variations. Since both the lag and the degree of disequilibrium are strongly dependent on κ and Δ T2×, and because of uncertainties in the pre-industrial CO2 level, the observed global warming over the past 100 years can be shown to be compatible with a wide range of CO2-doubling temperature changes.
  11. 1991: Lawlor, D. W., and R. A. C. Mitchell. “The effects of increasing CO2 on crop photosynthesis and productivity: a review of field studies.” Plant, Cell & Environment 14.8 (1991): 807-818. Only a small proportion of elevated CO2 studies on crops have taken place in the field. They generally confirm results obtained in controlled environments: CO2increases photosynthesis, dry matter production and yield, substantially in C3 species, but less in C4, it decreases stomatal conductance and transpiration in C3 and C4 species and greatly improves water‐use efficiency in all plants. The increased productivity of crops with CO2 enrichment is also related to the greater leaf area produced. Stimulation of yield is due more to an increase in the number of yield‐forming structures than in their size. There is little evidence of a consistent effect of CO2 on partitioning of dry matter between organs or on their chemical composition, except for tubers. Work has concentrated on a few crops (largely soybean) and more is needed on crops for which there are few data (e.g. rice). Field studies on the effects of elevated CO2 in combination with temperature, water and nutrition are essential; they should be related to the development and improvement of mechanistic crop models, and designed to test their predictions.
  12. 2009: Danabasoglu, Gokhan, and Peter R. Gent. “Equilibrium climate sensitivity: Is it accurate to use a slab ocean model?.” Journal of Climate 22.9 (2009): 2494-2499. The equilibrium climate sensitivity of a climate model is usually defined as the globally averaged equilibrium surface temperature response to a doubling of carbon dioxide. This is virtually always estimated in a version with a slab model for the upper ocean. The question is whether this estimate is accurate for the full climate model version, which includes a full-depth ocean component. This question has been answered for the low-resolution version of the Community Climate System Model, version 3 (CCSM3). The answer is that the equilibrium climate sensitivity using the full-depth ocean model is 0.14°C higher than that using the slab ocean model, which is a small increase. In addition, these sensitivity estimates have a standard deviation of nearly 0.1°C because of interannual variability. These results indicate that the standard practice of using a slab ocean model does give a good estimate of the equilibrium climate sensitivity of the full CCSM3. Another question addressed is whether the effective climate sensitivity is an accurate estimate of the equilibrium climate sensitivity. Again the answer is yes, provided that at least 150 yr of data from the doubled carbon dioxide run are used.
  13. 2010: Connell, Sean D., and Bayden D. Russell. “The direct effects of increasing CO2 and temperature on non-calcifying organisms: increasing the potential for phase shifts in kelp forests.” Proceedings of the Royal Society of London B: Biological Sciences (2010): rspb20092069. Predictions about the ecological consequences of oceanic uptake of CO2 have been preoccupied with the effects of ocean acidification on calcifying organisms, particularly those critical to the formation of habitats (e.g. coral reefs) or their maintenance (e.g. grazing echinoderms). This focus overlooks the direct effects of CO2 on non-calcareous taxa, particularly those that play critical roles in ecosystem shifts. We used two experiments to investigate whether increased CO2 could exacerbate kelp loss by facilitating non-calcareous algae that, we hypothesized, (i) inhibit the recovery of kelp forests on an urbanized coast, and (ii) form more extensive covers and greater biomass under moderate future CO2 and associated temperature increases. Our experimental removal of turfs from a phase-shifted system (i.e. kelp- to turf-dominated) revealed that the number of kelp recruits increased, thereby indicating that turfs can inhibit kelp recruitment. Future CO2 and temperature interacted synergistically to have a positive effect on the abundance of algal turfs, whereby they had twice the biomass and occupied over four times more available space than under current conditions. We suggest that the current preoccupation with the negative effects of ocean acidification on marine calcifiers overlooks potentially profound effects of increasing CO2 and temperature on non-calcifying organisms.
  14. 2011: Schmittner, Andreas, et al. “Climate sensitivity estimated from temperature reconstructions of the Last Glacial Maximum.” Science 334.6061 (2011): 1385-1388. Assessing the impact of future anthropogenic carbon emissions is currently impeded by uncertainties in our knowledge of equilibrium climate sensitivity to atmospheric carbon dioxide doubling. Previous studies suggest 3 kelvin (K) as the best estimate, 2 to 4.5 K as the 66% probability range, and nonzero probabilities for much higher values, the latter implying a small chance of high-impact climate changes that would be difficult to avoid. Here, combining extensive sea and land surface temperature reconstructions from the Last Glacial Maximum with climate model simulations, we estimate a lower median (2.3 K) and reduced uncertainty (1.7 to 2.6 K as the 66% probability range, which can be widened using alternate assumptions or data subsets). Assuming that paleoclimatic constraints apply to the future, as predicted by our model, these results imply a lower probability of imminent extreme climatic change than previously thought.
  15. 2012: Fasullo, John T., and Kevin E. Trenberth. “A less cloudy future: The role of subtropical subsidence in climate sensitivity.” Science 338.6108 (2012): 792-794. An observable constraint on climate sensitivity, based on variations in mid-tropospheric relative humidity (RH) and their impact on clouds, is proposed. We show that the tropics and subtropics are linked by teleconnections that induce seasonal RH variations that relate strongly to albedo (via clouds), and that this covariability is mimicked in a warming climate. A present-day analog for future trends is thus identified whereby the intensity of subtropical dry zones in models associated with the boreal monsoon is strongly linked to projected cloud trends, reflected solar radiation, and model sensitivity. Many models, particularly those with low climate sensitivity, fail to adequately resolve these teleconnections and hence are identifiably biased. Improving model fidelity in matching observed variations provides a viable path forward for better predicting future climate.
  16. 2012: Andrews, Timothy, et al. “Forcing, feedbacks and climate sensitivity in CMIP5 coupled atmosphere‐ocean climate models.” Geophysical Research Letters 39.9 (2012). We quantify forcing and feedbacks across available CMIP5 coupled atmosphere‐ocean general circulation models (AOGCMs) by analysing simulations forced by an abrupt quadrupling of atmospheric carbon dioxide concentration. This is the first application of the linear forcing‐feedback regression analysis of Gregory et al. (2004) to an ensemble of AOGCMs. The range of equilibrium climate sensitivity is 2.1–4.7 K. Differences in cloud feedbacks continue to be important contributors to this range. Some models show small deviations from a linear dependence of top‐of‐atmosphere radiative fluxes on global surface temperature change. We show that this phenomenon largely arises from shortwave cloud radiative effects over the ocean and is consistent with independent estimates of forcing using fixed sea‐surface temperature methods. We suggest that future research should focus more on understanding transient climate change, including any time‐scale dependence of the forcing and/or feedback, rather than on the equilibrium response to large instantaneous forcing.
  17. 2012: Bitz, Cecilia M., et al. “Climate sensitivity of the community climate system model, version 4.” Journal of Climate 25.9 (2012): 3053-3070. Equilibrium climate sensitivity of the Community Climate System Model, version 4 (CCSM4) is 3.20°C for 1° horizontal resolution in each component. This is about a half degree Celsius higher than in the previous version (CCSM3). The transient climate sensitivity of CCSM4 at 1° resolution is 1.72°C, which is about 0.2°C higher than in CCSM3. These higher climate sensitivities in CCSM4 cannot be explained by the change to a preindustrial baseline climate. This study uses the radiative kernel technique to show that, from CCSM3 to CCSM4, the global mean lapse-rate feedback declines in magnitude and the shortwave cloud feedback increases. These two warming effects are partially canceled by cooling because of slight decreases in the global mean water vapor feedback and longwave cloud feedback from CCSM3 to CCSM4. A new formulation of the mixed layer, slab-ocean model in CCSM4 attempts to reproduce the SST and sea ice climatology from an integration with a full-depth ocean, and it is integrated with a dynamic sea ice model. These new features allow an isolation of the influence of ocean dynamical changes on the climate response when comparing integrations with the slab ocean and full-depth ocean. The transient climate response of the full-depth ocean version is 0.54 of the equilibrium climate sensitivity when estimated with the new slab-ocean model version for both CCSM3 and CCSM4. The authors argue the ratio is the same in both versions because they have about the same zonal mean pattern of change in ocean surface heat flux, which broadly resembles the zonal mean pattern of net feedback strength.
  18. 2012: Rogelj, Joeri, Malte Meinshausen, and Reto Knutti. “Global warming under old and new scenarios using IPCC climate sensitivity range estimates.” Nature Climate Change 2.4 (2012): 248. Climate projections for the fourth assessment report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) were based on scenarios from the Special Report on Emissions Scenarios (SRES) and simulations of the third phase of the Coupled Model Intercomparison Project (CMIP3). Since then, a new set of four scenarios (the representative concentration pathways or RCPs) was designed. Climate projections in the IPCC fifth assessment report (AR5) will be based on the fifth phase of the Coupled Model Intercomparison Project (CMIP5), which incorporates the latest versions of climate models and focuses on RCPs. This implies that by AR5 both models and scenarios will have changed, making a comparison with earlier literature challenging. To facilitate this comparison, we provide probabilistic climate projections of both SRES scenarios and RCPs in a single consistent framework. These estimates are based on a model set-up that probabilistically takes into account the overall consensus understanding of climate sensitivity uncertainty, synthesizes the understanding of climate system and carbon-cycle behaviour, and is at the same time constrained by the observed historical warming.
  19. 2014: Sherwood, Steven C., Sandrine Bony, and Jean-Louis Dufresne. “Spread in model climate sensitivity traced to atmospheric convective mixing.” Nature 505.7481 (2014): 37. Equilibrium climate sensitivity refers to the ultimate change in global mean temperature in response to a change in external forcing. Despite decades of research attempting to narrow uncertainties, equilibrium climate sensitivity estimates from climate models still span roughly 1.5 to 5 degrees Celsius for a doubling of atmospheric carbon dioxide concentration, precluding accurate projections of future climate. The spread arises largely from differences in the feedback from low clouds, for reasons not yet understood. Here we show that differences in the simulated strength of convective mixing between the lower and middle tropical troposphere explain about half of the variance in climate sensitivity estimated by 43 climate models. The apparent mechanism is that such mixing dehydrates the low-cloud layer at a rate that increases as the climate warms, and this rate of increase depends on the initial mixing strength, linking the mixing to cloud feedback. The mixing inferred from observations appears to be sufficiently strong to imply a climate sensitivity of more than 3 degrees for a doubling of carbon dioxide. This is significantly higher than the currently accepted lower bound of 1.5 degrees, thereby constraining model projections towards relatively severe future warming.
  20. 2015: Mauritsen, Thorsten, and Bjorn Stevens. “Missing iris effect as a possible cause of muted hydrological change and high climate sensitivity in models.” Nature Geoscience 8.5 (2015): 346. Equilibrium climate sensitivity to a doubling of CO2 falls between 2.0 and 4.6 K in current climate models, and they suggest a weak increase in global mean precipitation. Inferences from the observational record, however, place climate sensitivity near the lower end of this range and indicate that models underestimate some of the changes in the hydrological cycle. These discrepancies raise the possibility that important feedbacks are missing from the models. A controversial hypothesis suggests that the dry and clear regions of the tropical atmosphere expand in a warming climate and thereby allow more infrared radiation to escape to space. This so-called iris effect could constitute a negative feedback that is not included in climate models. We find that inclusion of such an effect in a climate model moves the simulated responses of both temperature and the hydrological cycle to rising atmospheric greenhouse gas concentrations closer to observations. Alternative suggestions for shortcomings of models — such as aerosol cooling, volcanic eruptions or insufficient ocean heat uptake — may explain a slow observed transient warming relative to models, but not the observed enhancement of the hydrological cycle. We propose that, if precipitating convective clouds are more likely to cluster into larger clouds as temperatures rise, this process could constitute a plausible physical mechanism for an iris effect.
  21. 2015: Schimel, David, Britton B. Stephens, and Joshua B. Fisher. “Effect of increasing CO2 on the terrestrial carbon cycle.” Proceedings of the National Academy of Sciences 112.2 (2015): 436-441. Feedbacks from terrestrial ecosystems to atmospheric CO2 concentrations contribute the second-largest uncertainty to projections of future climate. These feedbacks, acting over huge regions and long periods of time, are extraordinarily difficult to observe and quantify directly. We evaluated in situ, atmospheric, and simulation estimates of the effect of CO2 on carbon storage, subject to mass balance constraints. Multiple lines of evidence suggest significant tropical uptake for CO2, approximately balancing net deforestation and confirming a substantial negative global feedback to atmospheric CO2 and climate. This reconciles two approaches that have previously produced contradictory results. We provide a consistent explanation of the impacts of CO2 on terrestrial carbon across the 12 orders of magnitude between plant stomata and the global carbon cycle.
  22. 2016: Tan, Ivy, Trude Storelvmo, and Mark D. Zelinka. “Observational constraints on mixed-phase clouds imply higher climate sensitivity.” Science 352.6282 (2016): 224-227. How much global average temperature eventually will rise depends on the Equilibrium Climate Sensitivity (ECS), which relates atmospheric CO2 concentration to atmospheric temperature. For decades, ECS has been estimated to be between 2.0° and 4.6°C, with much of that uncertainty owing to the difficulty of establishing the effects of clouds on Earth’s energy budget. Tan et al. used satellite observations to constrain the radiative impact of mixed phase clouds. They conclude that ECS could be between 5.0° and 5.3°C—higher than suggested by most global climate models.
  23. 2018: Watanabe, Masahiro, et al. “Low clouds link equilibrium climate sensitivity to hydrological sensitivity.” Nature Climate Change (2018): 1. Equilibrium climate sensitivity (ECS) and hydrological sensitivity describe the global mean surface temperature and precipitation responses to a doubling of atmospheric CO2. Despite their connection via the Earth’s energy budget, the physical linkage between these two metrics remains controversial. Here, using a global climate model with a perturbed mean hydrological cycle, we show that ECS and hydrological sensitivity per unit warming are anti-correlated owing to the low-cloud response to surface warming. When the amount of low clouds decreases, ECS is enhanced through reductions in the reflection of shortwave radiation. In contrast, hydrological sensitivity is suppressed through weakening of atmospheric longwave cooling, necessitating weakened condensational heating by precipitation. These compensating cloud effects are also robustly found in a multi-model ensemble, and further constrained using satellite observations. Our estimates, combined with an existing constraint to clear-sky shortwave absorption, suggest that hydrological sensitivity could be lower by 30% than raw estimates from global climate models.


  1. The context for this study is the prediction by climate models that anthropogenic global warming due to fossil fuel emissions will cause widespread drought conditions in the Southwest of the United States, including a gradual decline in the flow of the Colorado River, an important source of water in the region. This study presents the data for two drought indexes, the Palmer Drought Severity Index (PDSI) and the Palmer Hydrological Drought Index (PHDI), for eleven states in the West and Southwest of the United States where climate models have predicted drought and wildfire effects of fossil fuel emissions.
  2. These data are available for the long time period from 1895 to 2018. They are provided by the National Climatic Data Center (NCDC) of the National Oceanic and Atmospheric Administration (NOAA) and made available at the [/pub/data/cirs/climdiv] data archive as mean values for each state of the United States. Statewide mean values are derived from area-weighted averages of 5km by 5km grid-point estimates interpolated from station data. The drought indexes are combinations of temperature, precipitation, and humidity data that are thought to discriminate drought conditions from non-drought conditions. The data are provided as monthly mean values and are studied separately for each calendar month because there are significant differences among the calendar months in the behavior of the drought indexes. The Standardized Precipitation Index (SPI) is a newer index thought to be better than the Palmer indexes (see bibliography below) but is not included in this study.
  3. The Palmer Drought Severity Index (PDSI) indicates the severity of wet and dry spells. It is based on the principle of a balance between moisture supply and moisture demand. Index values usually range from -6 to +6, with negative values denoting dry spells and positive values indicating wet spells. The index is interpreted as follows: values in the range PDSI=[0 to -0.5] indicate normal conditions, with PDSI < -0.5 indicating various degrees of drought described as Incipient Drought [-1.0 < PDSI < -0.5], Mild Drought [-2 < PDSI < -1], Moderate Drought [-3 < PDSI < -2], and Severe Drought [PDSI < -3]. The additional category of “extreme drought” for PDSI < -4 and the interpretation of positive values of the index in terms of wetness are not considered in this study. The Palmer Hydrological Drought Index (PHDI) is similar but with an emphasis on long-term moisture supply. Its values have the same interpretation as the PDSI. Both PDSI and PHDI values are reported in this study, and their behavior is found to be similar for the states studied.
  4. Eleven states of the United States are selected for study. They are, alphabetically, Arizona (AZ), California (CA), Colorado (CO), Idaho (ID), Montana (MT), Nevada (NV), New Mexico (NM), Oregon (OR), Texas (TX), Utah (UT), and Wyoming (WY). Some of these states are known to be drought prone (AZ, CA, NV, NM, TX, WY) while others are not (ID, OR), and the comparison of their drought indexes serves as a test of the ability of the index to discriminate between them.
  5. Figures 1 to 11 present the PDSI and PHDI data for the eleven states in two side-by-side frames, with PDSI in the left frame and PHDI in the right frame. Each figure is a GIF animation that cycles through the twelve calendar months, presenting the data for one calendar month at a time from January to December. The range of values displayed is limited to PDSI/PHDI < +2 since the focus of the study is on detecting drought conditions, which lie in the region of negative values. Three drought categories are demarcated by color-coded horizontal lines. The YELLOW line drawn at [PDSI/PHDI = -1] demarcates drought conditions below it from non-drought conditions above it. The ORANGE line drawn at [PDSI/PHDI = -2] demarcates moderate drought conditions below it from mild drought; and finally the RED line drawn at [PDSI/PHDI = -3] demarcates severe drought below it from moderate drought. The “extreme drought” condition found in the PDSI literature is not included in this work. Drought periods are seen mostly in the summer months in the 1930s, again in the 1950s, and in the most recent period since 2003 for many of the states. The data shown are not the source data but their moving 10-year medians. Decadal smoothing was made necessary by the extreme volatility of the source data, and because of some extremely high positive values, the median rather than the simple average was used.
  6. Figures 12 and 13 summarize the drought patterns observed in the GIF animations of Figures 1 to 11. Figure 12 is a tabulation of the number of moderate or more severe drought events (PDSI/PHDI < -2) for each state and each calendar month, using the PDSI as the criterion in the top panel and the PHDI in the bottom panel. In the tabulation each row represents one of the eleven states and each column represents one of the twelve calendar months. The numbers in the tabulation are counts of moderate or more extreme drought events for each state in each calendar month. At the end of each row and at the bottom of each column, these counts are summed to yield the total number of such drought events for each state and for each calendar month. The sum of these totals is the grand total of all such drought events for all eleven states in all twelve calendar months. The total for each calendar month and for each state is converted into a percentage by division by the grand total. These percentages are then used to discriminate among the calendar months and among the states in terms of relative drought frequency. This procedure is carried out for both the PDSI and the PHDI.
  7. The procedure is repeated in Figure 13, where the number of severe or greater drought events (PDSI/PHDI < -3) for each state and each calendar month is tabulated. A comparison of the Figure 12 and Figure 13 percentages shows that the severe drought category has greater discriminating power for both the PDSI and the PHDI measures of drought severity: the summer months (when droughts are more common) are clearly demarcated from non-summer months, as are drought-prone states from non-drought-prone states.
  8. A curious result in Figure 13 is that neither the PDSI nor the PHDI tabulation places California in the same category as the four clearly drought-prone states indicated by the percent scores and marked in red, even though California is thought to be drought prone and is currently (as of this writing in November 2018) considered a casualty of climate change in terms of drought-driven wildfires.
  9. Figure 14 displays the timeline of these drought events as the total number of severe or greater drought events in distinct 5-year windows that move 5 years at a time across the time span of the data from 1908 to 2018. The counts in the left panel use the PDSI as the criterion while the right panel shows the counts for the PHDI criterion. Here we find little evidence of drought until the 1930s, when the count rises to a peak of 25 PDSI drought events and 40 PHDI drought events per 5-year period by 1943 and then drops back to the zero line by the end of the 1940s. It then rises precipitously to more than 52 PDSI events and 55 PHDI events in the year 1958 before falling back to zero in 1968. Thereafter, no drought event is found in either index for 30 years, from 1968 to 1998.
  10. At the end of this drought hiatus, a steep rise is seen from 2003 to 2008, with 83 PDSI events and 95 PHDI events. It is this sharp increase in drought events in the West and Southwest USA that led to their attribution to global warming in terms of climate model simulations that confirmed a devastating drought effect of global warming in terms of drying soils and heat waves (see bibliography below). The idea of the so-called “desertification of the Southwest” due to human-caused global warming entered the climate change lexicon after these events, and that helped to cultivate a sequence of climate model simulation papers that confirmed the attribution (see bibliography below).
  11. Although a linear regression line through the curves in Figure 14 will show a rising trend, the actual pattern seen does not support that view. Rather, the attribution of these events to human cause, and the climate model predictions of drought in the Southwest derived therefrom, appear to fit a pattern of circular reasoning and the so-called “Texas sharpshooter fallacy”, possibly by virtue of a strong confirmation bias in climate science that has shaped prior research in so-called “Event Attribution Science”. A related post at this site on confirmation bias is relevant in this regard [LINK].
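The PDSI/PHDI drought categories described in item 3 above can be sketched as a simple classifier. This is an illustrative sketch only: the function name is invented here, and the handling of exact boundary values is an assumption, since the source gives open intervals without specifying the endpoints.

```python
# Illustrative sketch: map a PDSI/PHDI value to the drought categories
# of item 3. Boundary handling (>= vs >) is an assumption.
def drought_category(index_value):
    if index_value >= -0.5:
        return "normal or wet"
    if index_value >= -1.0:
        return "incipient drought"
    if index_value >= -2.0:
        return "mild drought"
    if index_value >= -3.0:
        return "moderate drought"
    return "severe drought"

print(drought_category(-2.4))  # moderate drought
print(drought_category(-3.5))  # severe drought
```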
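The decadal smoothing described in item 5 (a moving 10-year median of one calendar month's index series) can be sketched as follows. A trailing window is an assumption on my part; the source does not say whether the window is trailing or centered, and the function name is illustrative.

```python
from statistics import median

# Sketch of the decadal smoothing in item 5: a trailing moving median
# of one calendar month's index series (window=10 for annual values).
# Trailing vs. centered is an assumption not stated in the source.
def moving_median(values, window=10):
    return [
        median(values[i + 1 - window : i + 1]) if i + 1 >= window else None
        for i in range(len(values))
    ]

# Example with a short window on made-up index values:
print(moving_median([1.0, 5.0, -2.0, -4.0, 0.0], window=3))
```

The median was chosen over the mean for the reason the text gives: a few extremely high positive values would otherwise dominate a decadal average.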
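The tabulation procedure of items 6 and 7 (count events below a threshold per state and calendar month, then convert state and month totals to percentages of the grand total) can be sketched as below. The event records and function name are hypothetical, made up for illustration.

```python
from collections import defaultdict

# Sketch of the Figure 12/13 tabulation: count drought events below a
# threshold for each (state, month) cell, then express state and month
# totals as percentages of the grand total. Data are invented.
def tabulate(events, threshold=-2.0):
    cell = defaultdict(int)  # (state, month) -> event count
    for state, month, value in events:
        if value < threshold:
            cell[(state, month)] += 1
    grand = sum(cell.values())
    state_pct = defaultdict(float)
    month_pct = defaultdict(float)
    for (state, month), n in cell.items():
        state_pct[state] += 100.0 * n / grand
        month_pct[month] += 100.0 * n / grand
    return cell, dict(state_pct), dict(month_pct)

events = [("AZ", "Jul", -2.5), ("AZ", "Aug", -3.1),
          ("TX", "Jul", -2.2), ("CA", "Jan", -1.0)]
cell, state_pct, month_pct = tabulate(events)
print(state_pct)  # AZ accounts for 2 of the 3 qualifying events
```

Passing threshold=-3.0 instead reproduces the Figure 13 variant of the same tabulation.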
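The construction of Figure 14 in item 9 (counts of severe drought events in distinct, non-overlapping 5-year windows) can be sketched as follows; the event-year list and function name are invented for illustration.

```python
# Sketch of the Figure 14 windowing: count severe drought events in
# distinct (non-overlapping) 5-year windows across the record.
def windowed_counts(event_years, start=1908, end=2018, width=5):
    counts = {}
    w = start
    while w < end:
        counts[w] = sum(1 for y in event_years if w <= y < w + width)
        w += width
    return counts

# Example with invented event years:
print(windowed_counts([1935, 1936, 1957, 2005], start=1933, end=1948))
```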





Featured Authors: Thomas Swetnam, Ed Fredrickson, Philip Dennison, Virginia Dale, Anthony Westerling, & Richard Seager

  1. 1990: Swetnam, Thomas W., and Julio L. Betancourt. “Fire-southern oscillation relations in the southwestern United States.” Science 249.4972 (1990): 1017-1020. Fire scar and tree growth chronologies (1700 to 1905) and fire statistics (since 1905) from Arizona and New Mexico show that small areas burn after wet springs associated with the low phase of the Southern Oscillation (SO), whereas large areas burn after dry springs associated with the high phase of the SO. Through its synergistic influence on spring weather and fuel conditions, climatic variability in the tropical Pacific significantly influences vegetation dynamics in the southwestern United States. Synchrony of fire-free and severe fire years across diverse southwestern forests implies that climate forces fire regimes on a subcontinental scale; it also underscores the importance of exogenous factors in ecosystem dynamics.
  2. 1996: Hayes, Michael J., et al. “Monitoring the 1996 drought using the standardized precipitation index.” Bulletin of the American meteorological society 80.3 (1999): 429-438. Droughts are difficult to detect and monitor. Drought indices, most commonly the Palmer Drought Severity Index (PDSI), have been used with limited success as operational drought monitoring tools and triggers for policy responses. Recently, a new index, the Standardized Precipitation Index (SPI), was developed to improve drought detection and monitoring capabilities. The SPI has several characteristics that are an improvement over previous indices, including its simplicity and temporal flexibility, that allow its application for water resources on all timescales. In this article, the 1996 drought in the southern plains and southwestern United States is examined using the SPI. A series of maps are used to illustrate how the SPI would have assisted in being able to detect the onset of the drought and monitor its progression. A case study investigating the drought in greater detail for Texas is also given. The SPI demonstrated that it is a tool that should be used operationally as part of a state, regional, or national drought watch system in the United States. During the 1996 drought, the SPI detected the onset of the drought at least 1 month in advance of the PDSI. This timeliness will be invaluable for improving mitigation and response actions of state and federal government to drought-affected regions in the future.
  3. 1997: Edwards, Daniel C. Characteristics of 20th century drought in the United States at multiple time scales. No. AFIT-97-051. AIR FORCE INST OF TECH WRIGHT-PATTERSON AFB OH, 1997. The purpose of this study is to define the occurrence and variability of drought in the United States in order to furnish climatologists and drought mitigation planners with information on how to put current drought into historical perspective. The opposite of drought is a period of anomalously wet conditions. Analyses of both drought and wet periods on national and regional scales are provided. Analysis of drought and wet periods in terms of areal coverage, intensity, duration, and variability at these different space and time scales provides valuable insight not only into the historical perspective of anomalously dry and wet conditions, but also into the long-term variation of climate in the United States.
  4. 1998: Guttman, Nathaniel B. “Comparing the palmer drought index and the standardized precipitation index 1.” JAWRA Journal of the American Water Resources Association 34.1 (1998): 113-121. The Palmer Drought Index (PDI) is used as an indicator of drought severity, and a particular index value is often the signal to begin or discontinue elements of a drought contingency plan. The Standardized Precipitation Index (SPI) was recently developed to quantify a precipitation deficit for different time scales. It was designed to be an indicator of drought that recognizes the importance of time scales in the analysis of water availability and water use. This study compares historical time series of the PDI with time series of the corresponding SPI through spectral analysis. Results show that the spectral characteristics of the PDI vary from site to site throughout the U.S., while those of the SPI do not vary from site to site. They also show that the PDI has a complex structure with an exceptionally long memory, while the SPI is an easily interpreted, simple moving average process.
  5. 1998: Fredrickson, Ed, et al. “Perspectives on desertification: south-western United States.” Journal of Arid Environments 39.2 (1998): 191-207. Several climatic changes occurred in the northern Chihuahuan Desert and other parts of the south-west United States during the last 12,000 years leading to a markedly warmer and drier climate. Vegetation changed in response to this climatic shift. Generally, this transition was from coniferous woodland to grasslands and eventually to the present day desert scrub. PreColumbian inhabitants of this region adapted by changing from hunter-gatherer to primarily agrarian economics. European immigration into the south-west U.S. beginning in the mid 1500s greatly affected this region. The greatest impact occurred after the U.S. Civil War in the 1860s. Before that time land use tended to be localized near small agricultural areas, mines, and military installations. The post-war range livestock industry expanded dramatically, especially during the 1880s — a period of general abuse of arid lands in the region. Recognition of this abuse and the deteriorating productivity of the land led to greater government involvement, including establishment of experimental stations and eventually management of the public domain by governmental agencies. Fire suppression, mismanaged grazing, changing climatic conditions, loss of soil and increasing atmospheric CO2 concentrations, mainly due to the burning of fossil fuels, are among the probable causes of continued desertification trends. Urban and rural populations, presently technologically isolated from their environment, need to better understand the dynamic nature of their environment. A greater degree of co-operation among diverse entities will be crucial. [FULL TEXT]
  6. 1999: Guttman, Nathaniel B. “Accepting the standardized precipitation index: A calculation algorithm1.” JAWRA Journal of the American Water Resources Association 35.2 (1999): 311-322. The Palmer Drought Severity Index (PDSI) has been calculated for about 30 years as a means of providing a single measure of meteorological drought severity. It was intended to retrospectively look at wet and dry conditions using water balance techniques. The Standardized Precipitation Index (SPI) is a probability index that was developed to give a better representation of abnormal wetness and dryness than the Palmer indices. Before the user community will accept the SPI as an alternative to the Palmer indices, a standard method must be developed for computing the index. Standardization is necessary so that all users of the index will have a common basis for both spatial and temporal comparison of index values. If different probability distributions and models are used to describe an observed series of precipitation, then different SPI values may be obtained. This article describes the effect on the SPI values computed from different probability models as well as the effects on dry event characteristics. It is concluded that the Pearson Type III distribution is the “best” universal model, and that the reliability of the SPI is sample size dependent. It is also concluded that because of data limitations, SPIs with time scales longer than 24 months may be unreliable. An internet link is provided that will allow users to access Fortran 77 source code for calculating the SPI.
  7. 2001: Dale, Virginia H., et al. “Climate change and forest disturbances: climate change can affect forests by altering the frequency, intensity, duration, and timing of fire, drought, introduced species, insect and pathogen outbreaks, hurricanes, windstorms, ice storms, or landslides.” AIBS Bulletin 51.9 (2001): 723-734. Over geologic time, changes in disturbance regimes are a natural part of all ecosystems. Even so, as a consequence of climate change, forests may soon face rapid alterations in the timing, intensity, frequency, and extent of disturbances. The number and complexity of climate variables related to forest disturbance make integrated research an awesome challenge. Even if changes cannot always be predicted, it is important to consider ways in which impacts to forest systems can be mitigated under likely changes in disturbance regimes. The task for the next decade is to understand better how climate affects disturbances and how forests respond to them. Improved monitoring programs and analytic tools are needed to develop this understanding. Ultimately, this knowledge should lead to better ways to predict and cope with disturbance-induced changes in forests.  [FULL TEXT]
  8. 2002: Cole, Julia E., Jonathan T. Overpeck, and Edward R. Cook. “Multiyear La Niña events and persistent drought in the contiguous United States.” Geophysical Research Letters 29.13 (2002): 25-1. La Niña events typically bring dry conditions to the southwestern United States. Recent La Niñas rarely exceed 2 years duration, but a new record of ENSO from a central Pacific coral reveals much longer La Niña anomalies in the 1800s. A La Niña event between 1855–63 coincides with prolonged drought across the western U.S. The spatial pattern of this drought correlates with that expected from La Niña during most of the La Niña event; land‐surface feedbacks are implied by drought persistence and expansion. Earlier periods also show persistent La Niña‐like drought patterns, further implicating Pacific anomalies and surface feedbacks in driving prolonged drought. An extended index of the Pacific Decadal Oscillation suggests that extratropical influences would have reinforced drought in the 1860s and 1890s but weakened it during the La Niña of the 1880s. [FULL TEXT]
  9. 2006: Westerling, Anthony L., et al. “Warming and earlier spring increase western US forest wildfire activity.” science 313.5789 (2006): 940-943. Western United States forest wildfire activity is widely thought to have increased in recent decades, yet neither the extent of recent changes nor the degree to which climate may be driving regional changes in wildfire has been systematically documented. Much of the public and scientific discussion of changes in western United States wildfire has focused instead on the effects of 19th- and 20th-century land-use history. We compiled a comprehensive database of large wildfires in western United States forests since 1970 and compared it with hydroclimatic and land-surface data. Here, we show that large wildfire activity increased suddenly and markedly in the mid-1980s, with higher large-wildfire frequency, longer wildfire durations, and longer wildfire seasons. The greatest increases occurred in mid-elevation, Northern Rockies forests, where land-use histories have relatively little effect on fire risks and are strongly associated with increased spring and summer temperatures and an earlier spring snowmelt. Wildfires have consumed increasing areas of western U.S. forests in recent years, and fire-fighting expenditures by federal land-management agencies now regularly exceed US$1 billion/year. Hundreds of homes are burned annually by wildfires, and damages to natural resources are sometimes extreme and irreversible. Media reports of recent, very large wildfires (>100,000 ha) burning in western forests have garnered widespread public attention, and a recurrent perception of crisis has galvanized legislative and administrative action (1–3). Extensive discussions within the fire-management and scientific communities and the media seek to explain these phenomena, focusing on either land-use history or climate as primary causes. 
If increased wildfire risks are driven primarily by land-use history, then ecological restoration and fuels management are potential solutions. However, if increased risks are largely due to changes in climate during recent decades, then restoration and fuels treatments may be relatively ineffective in reversing current wildfire trends (4, 5). We investigated 34 years of western U.S. (hereafter, “western”) wildfire history together with hydroclimatic data to determine where the largest increases in wildfire have occurred and to evaluate how recent climatic trends may have been important causal factors. Competing explanations: Climate versus management. Land-use explanations for increased western wildfire note that extensive livestock grazing and increasingly effective fire suppression began in the late 19th and early 20th centuries, reducing the frequency of large surface fires (6–8). Forest regrowth after extensive logging beginning in the late 19th century, combined with an absence of extensive fires, promoted forest structure changes and biomass accumulation, which now reduce the effectiveness of fire suppression and increase the size of wildfires and total area burned (3, 5, 9). The effects of land-use history on forest structure and biomass accumulation are, however, highly dependent upon the “natural fire regime” for any particular forest type. For example, the effects of fire exclusion are thought to be profound in forests that previously sustained frequent, low-intensity surface fires [such as Southwestern ponderosa pine and Sierra Nevada mixed conifer (2, 3, 10, 11)], but of little or no consequence in forests that previously sustained only very infrequent, high-severity crown fires (such as Northern Rockies lodgepole pine or spruce-fir (1, 5, 12)]. 
In contrast, climatic explanations posit that increasing variability in moisture conditions (wet/dry oscillations promoting biomass growth, then burning), and/or a trend of increasing drought frequency, and/or warming temperatures have led to increased wildfire activity (13, 14). Documentary records and proxy reconstructions (primarily from tree rings) of fire history and climate provide evidence that western forest wildfire risks are strongly positively associated with drought concurrent with the summer fire season and (particularly in ponderosa pine–dominant forests) positively associated to a lesser extent with moist conditions in antecedent years (13–18). Variability in western climate related to the Pacific Decadal Oscillation and intense El Niño/La Niña events in recent decades along with severe droughts in 2000 and 2002 may have promoted greater forest wildfire risks in areas such as the Southwest, where precipitation anomalies are significantly influenced by patterns in Pacific sea surface temperature (19–22). Although corresponding decadal-scale variations and trends in climate and wildfire have been identified in paleo studies, there is a paucity of evidence for such associations in the 20th century. We describe land-use history versus climate as competing explanations, but they may be complementary in some ways. In some forest types, past land uses have probably increased the sensitivity of current forest wildfire regimes to climatic variability through effects on the quantity, arrangement, and continuity of fuels. Hence, an increased incidence of large, high-severity fires may be due to a combination of extreme droughts and overabundant fuels in some forests. Climate, however, may still be the primary driver of forest wildfire risks on interannual to decadal scales. 
On decadal scales, climatic means and variability shape the character of the vegetation [e.g., species populations and their drought tolerance (23) and biomass (fuel) continuity (24), thus also affecting fire regime responses to shorter term climate variability]. On interannual and shorter time scales, climate variability affects the flammability of live and dead forest vegetation (13–19, 25).High-quality time series are essential for evaluating wildfire risks, but for various reasons (26), previous works have not rigorously documented changes in large-wildfire frequency for western forests. Likewise, detailed fire-climate analyses for the region have not been conducted to evaluate what hydroclimatic variations may be associated with recent increased wildfire activity, and the spatial variations in these patterns. We compiled a comprehensive time series of 1166 large (>400 ha) forest wildfires for 1970 to 2003 from federal land-management units containing 61% of western forested areas (and 80% above 1370 m) (26) (fig. S1). We compared these data with corresponding hydroclimatic and land surface variables (26–34) to address where and why the frequency of large forest wildfire has changed. Increased forest wildfire activity. We found that the incidence of large wildfires in western forests increased in the mid-1980s (Fig. 1) [hereafter, “wildfires” refers to large-fire events (>400 ha) within forested areas only (26)]. Subsequently, wildfire frequency was nearly four times the average of 1970 to 1986, and the total area burned by these fires was more than six and a half times its previous level. Interannual variability in wildfire frequency is strongly associated with regional spring and summer temperature (Spearman’s correlation of 0.76, P < 0.001, n = 34). A second-order polynomial fit to the regional temperature signal alone explains 66% of the variance in the annual incidence of these fires, with many more wildfires burning in hotter than in cooler years. 
[FULL TEXT]
  10. 2010: Woodhouse, Connie A., et al. “A 1,200-year perspective of 21st century drought in southwestern North America.” Proceedings of the National Academy of Sciences 107.50 (2010): 21283-21288. A key feature of anticipated 21st century droughts in Southwest North America is the concurrence of elevated temperatures and increased aridity. Instrumental records and paleoclimatic evidence for past prolonged drought in the Southwest that coincide with elevated temperatures can be assessed to provide insights on temperature-drought relations and to develop worst-case scenarios for the future. In particular, during the medieval period, ∼AD 900–1300, the Northern Hemisphere experienced temperatures warmer than all but the most recent decades. Paleoclimatic and model data indicate increased temperatures in western North America of approximately 1 °C over the long-term mean. This was a period of extensive and persistent aridity over western North America. Paleoclimatic evidence suggests drought in the mid-12th century far exceeded the severity, duration, and extent of subsequent droughts. The driest decade of this drought was anomalously warm, though not as warm as the late 20th and early 21st centuries. The convergence of prolonged warming and arid conditions suggests the mid-12th century may serve as a conservative analogue for severe droughts that might occur in the future. The severity, extent, and persistence of the 12th century drought that occurred under natural climate variability, have important implications for water resource management. The causes of past and future drought will not be identical but warm droughts, inferred from paleoclimatic records, demonstrate the plausibility of extensive, severe droughts, provide a long-term perspective on the ongoing drought conditions in the Southwest, and suggest the need for regional sustainability planning for the future.
  11. 2010: Seager, Richard, and Gabriel A. Vecchi. “Greenhouse warming and the 21st century hydroclimate of southwestern North America.” Proceedings of the National Academy of Sciences 107.50 (2010): 21277-21282.  Climate models robustly predict that the climate of southwestern North America, defined as the area from the western Great Plains to the Pacific Ocean and from the Oregon border to southern Mexico, will dry throughout the current century as a consequence of rising greenhouse gases. This regional drying is part of a general drying of the subtropics and poleward expansion of the subtropical dry zones. Through an analysis of 15 coupled climate models it is shown here that the drying is driven by a reduction of winter season precipitation associated with increased moisture divergence by the mean flow and reduced moisture convergence by transient eddies. Due to the presence of large amplitude decadal variations of presumed natural origin, observations to date cannot confirm that this transition to a drier climate is already underway, but it is anticipated that the anthropogenic drying will reach the amplitude of natural decadal variability by midcentury. In addition to this drop in total precipitation, warming is already causing a decline in mountain snow mass and an advance in the timing of spring snow melt disrupting the natural water storage systems that are part of the region’s water supply system. Uncertainties in how radiative forcing will impact the tropical Pacific climate system create uncertainties in the amplitude of drying in southwest North America with a La Niña-like response creating a worst case scenario of greater drying.
  12. 2010: Allen, Craig D., et al. “A global overview of drought and heat-induced tree mortality reveals emerging climate change risks for forests.” Forest ecology and management 259.4 (2010): 660-684. Greenhouse gas emissions have significantly altered global climate, and will continue to do so in the future. Increases in the frequency, duration, and/or severity of drought and heat stress associated with climate change could fundamentally alter the composition, structure, and biogeography of forests in many regions. Of particular concern are potential increases in tree mortality associated with climate-induced physiological stress and interactions with other climate-mediated processes such as insect outbreaks and wildfire. Despite this risk, existing projections of tree mortality are based on models that lack functionally realistic mortality mechanisms, and there has been no attempt to track observations of climate-driven tree mortality globally. Here we present the first global assessment of recent tree mortality attributed to drought and heat stress. Although episodic mortality occurs in the absence of climate change, studies compiled here suggest that at least some of the world’s forested ecosystems already may be responding to climate change and raise concern that forests may become increasingly vulnerable to higher background tree mortality rates and die-off in response to future warming and drought, even in environments that are not normally considered water-limited. This further suggests risks to ecosystem services, including the loss of sequestered forest carbon and associated atmospheric feedbacks. Our review also identifies key information gaps and scientific uncertainties that currently hinder our ability to predict tree mortality in response to climate change and emphasizes the need for a globally coordinated observation system. Overall, our review reveals the potential for amplified tree mortality due to drought and heat in forests worldwide.
  13. 2010: Cayan, Daniel R., et al. “Future dryness in the southwest US and the hydrology of the early 21st century drought.” Proceedings of the National Academy of Sciences 107.50 (2010): 21271-21276. Recently the Southwest has experienced a spate of dryness, which presents a challenge to the sustainability of current water use by human and natural systems in the region. In the Colorado River Basin, the early 21st century drought has been the most extreme in over a century of Colorado River flows, and might occur in any given century with probability of only 60%. However, hydrological model runs from downscaled Intergovernmental Panel on Climate Change Fourth Assessment climate change simulations suggest that the region is likely to become drier and experience more severe droughts than this. In the latter half of the 21st century the models produced considerably greater drought activity, particularly in the Colorado River Basin, as judged from soil moisture anomalies and other hydrological measures. As in the historical record, most of the simulated extreme droughts build up and persist over many years. Durations of depleted soil moisture over the historical record ranged from 4 to 10 years, but in the 21st century simulations, some of the dry events persisted for 12 years or more. Summers during the observed early 21st century drought were remarkably warm, a feature also evident in many simulated droughts of the 21st century. These severe future droughts are aggravated by enhanced, globally warmed temperatures that reduce spring snowpack and late spring and summer soil moisture. As the climate continues to warm and soil moisture deficits accumulate beyond historical levels, the model simulations suggest that sustaining water supplies in parts of the Southwest will be a challenge.
  14. 2013: Seager, Richard, et al. “Projections of declining surface-water availability for the southwestern United States.” Nature Climate Change 3.5 (2013): 482. Global warming driven by rising greenhouse-gas concentrations is expected to cause wet regions of the tropics and mid to high latitudes to get wetter and subtropical dry regions to get drier and expand polewards. Over southwest North America, models project a steady drop in precipitation minus evapotranspiration, P − E, the net flux of water at the land surface, leading to, for example, a decline in Colorado River flow. This would cause widespread and important social and ecological consequences. Here, using new simulations from the Coupled Model Intercomparison Project Five, to be assessed in Intergovernmental Panel on Climate Change Assessment Report Five, we extend previous work by examining changes in P − E, runoff and soil moisture by season and for three different water resource regions. Focusing on the near future, 2021–2040, the new simulations project declines in surface-water availability across the southwest that translate into reduced soil moisture and runoff in California and Nevada, the Colorado River headwaters and Texas.
  15. 2014: AghaKouchak, Amir, et al. “Global warming and changes in risk of concurrent climate extremes: Insights from the 2014 California drought.” Geophysical Research Letters 41.24 (2014): 8847-8852.  Global warming and the associated rise in extreme temperatures substantially increase the chance of concurrent droughts and heat waves. The 2014 California drought is an archetype of an event characterized by not only low precipitation but also extreme high temperatures. From the raging wildfires, to record low storage levels and snowpack conditions, the impacts of this event can be felt throughout California. Wintertime water shortages worry decision‐makers the most because it is the season to build up water supplies for the rest of the year. Here we show that the traditional univariate risk assessment methods based on precipitation condition may substantially underestimate the risk of extreme events such as the 2014 California drought because of ignoring the effects of temperature. We argue that a multivariate viewpoint is necessary for assessing risk of extreme events, especially in a warming climate. This study discusses a methodology for assessing the risk of concurrent extremes such as droughts and extreme temperatures.
  16. 2014: Dennison, Philip E., et al. “Large wildfire trends in the western United States, 1984–2011.” Geophysical Research Letters 41.8 (2014): 2928-2933. We used a database capturing large wildfires (> 405 ha) in the western U.S. to document regional trends in fire occurrence, total fire area, fire size, and day of year of ignition for 1984–2011. Over the western U.S. and in a majority of ecoregions, we found significant, increasing trends in the number of large fires and/or total large fire area per year. Trends were most significant for southern and mountain ecoregions, coinciding with trends toward increased drought severity. For all ecoregions combined, the number of large fires increased at a rate of seven fires per year, while total fire area increased at a rate of 355 km2 per year. Continuing changes in climate, invasive species, and consequences of past fire management, added to the impacts of larger, more frequent fires, will drive further disruptions to fire regimes of the western U.S. and other fire‐prone regions of the world.
  17. 2015: Diffenbaugh, Noah S., Daniel L. Swain, and Danielle Touma. “Anthropogenic warming has increased drought risk in California.” Proceedings of the National Academy of Sciences (2015): 201422385. California ranks first in the United States in population, economic activity, and agricultural value. The state is currently experiencing a record-setting drought, which has led to acute water shortages, groundwater overdraft, critically low streamflow, and enhanced wildfire risk. Our analyses show that California has historically been more likely to experience drought if precipitation deficits co-occur with warm conditions and that such confluences have increased in recent decades, leading to increases in the fraction of low-precipitation years that yield drought. In addition, we find that human emissions have increased the probability that low-precipitation years are also warm, suggesting that anthropogenic warming is increasing the probability of the co-occurring warm–dry conditions that have created the current California drought.
  18. 2015: Cook, Benjamin I., Toby R. Ault, and Jason E. Smerdon. “Unprecedented 21st century drought risk in the American Southwest and Central Plains.” Science Advances 1.1 (2015): e1400082. In the Southwest and Central Plains of Western North America, climate change is expected to increase drought severity in the coming decades. These regions nevertheless experienced extended Medieval-era droughts that were more persistent than any historical event, providing crucial targets in the paleoclimate record for benchmarking the severity of future drought risks. We use an empirical drought reconstruction and three soil moisture metrics from 17 state-of-the-art general circulation models to show that these models project significantly drier conditions in the later half of the 21st century compared to the 20th century and earlier paleoclimatic intervals. This desiccation is consistent across most of the models and moisture balance variables, indicating a coherent and robust drying response to warming despite the diversity of models and metrics analyzed. Notably, future drought risk will likely exceed even the driest centuries of the Medieval Climate Anomaly (1100–1300 CE) in both moderate (RCP 4.5) and high (RCP 8.5) future emissions scenarios, leading to unprecedented drought conditions during the last millennium.
  19. 2015: Williams, A. Park, et al. “Contribution of anthropogenic warming to California drought during 2012–2014.” Geophysical Research Letters 42.16 (2015): 6819-6828. A suite of climate data sets and multiple representations of atmospheric moisture demand are used to calculate many estimates of the self‐calibrated Palmer Drought Severity Index, a proxy for near‐surface soil moisture, across California from 1901 to 2014 at high spatial resolution. Based on the ensemble of calculations, California drought conditions were record breaking in 2014, but probably not record breaking in 2012–2014, contrary to prior findings. Regionally, the 2012–2014 drought was record breaking in the agriculturally important southern Central Valley and highly populated coastal areas. Contributions of individual climate variables to recent drought are also examined, including the temperature component associated with anthropogenic warming. Precipitation is the primary driver of drought variability but anthropogenic warming is estimated to have accounted for 8–27% of the observed drought anomaly in 2012–2014 and 5–18% in 2014. Although natural variability dominates, anthropogenic warming has substantially increased the overall likelihood of extreme California droughts.

  1. It has been proposed in various climate change blogs that a negative correlation between cloud cover and temperature (HadCRUT4 global mean temperature anomalies) explains both the observed warming prior to the year 2000 and the hiatus in warming since then [LINK]. The blogs show that the HadCRUT4 monthly mean temperature anomalies are negatively correlated with monthly mean cloud cover, and this relationship is developed into a climate model that explains changes in surface temperature in terms of incident solar radiation, the heat trapping effect of carbon dioxide, and the effect of cloud cover. The model is then validated by its ability to faithfully reproduce the HadCRUT4 global mean temperature anomaly series.
  2. Monthly time scale: The analysis is carried out at a monthly time scale, a choice considered valid because the temperature data are anomalies with the seasonal cycle removed; but no attention is paid to the possibility that the cloud data may contain a seasonal cycle. In Figure 1 above, a plot of the mean cloud cover shows a strong seasonal cycle with low cloud cover in the Northern summer and higher cloud cover in the Northern fall, spring, and winter. This seasonal cycle implies that at a monthly time scale the source cloud data may not be used directly but must be deseasonalized to conform with the deseasonalized temperature anomalies.
  3. Monthly time scale: Deseasonalized cloud data are computed by subtracting the mean seasonal cycle from the data. These cloud anomalies and the HadCRUT4 temperature anomalies are shown in Figure 2 along with their corresponding detrended series. The correlation between the cloud cover anomalies and the temperature anomalies is found to be r = -0.562, a strong negative correlation that appears to support the hypothesis that warming is driven by low cloud cover.
  4. Monthly time scale: It is noted, however, that spurious correlations imposed on time series data by shared long term trends do not imply a responsiveness of Y to changes in X at the time scale of interest. The Tyler Vigen website contains many examples that demonstrate this spurious correlation property of time series data [LINK]. In this case, to detect the responsiveness of temperature to cloud cover at a monthly time scale, net of the effect of long term trends, it is necessary to remove the long term trends from the data with a detrending procedure, as explained in this brief lecture by Alex Tolley [LINK].
  5. Monthly time scale: The detrended data are shown in Figure 2 (in red) and their correlation is displayed graphically in the second frame of Figure 3. This graphic shows that no correlation remains at a monthly time scale once the effect of long term trends is removed. The strong negative correlation of r=-0.562 seen in the source data turns out to be an artifact of long term trends. At a monthly time scale only a statistically insignificant correlation of r=0.09554 remains, and the negative sign, essential for the theory that low cloud cover causes warming, is gone. Thus, though the required negative correlation is seen in the source data, detrended analysis shows that it implies no responsiveness of temperature to cloud cover at a monthly time scale.
  6. Annual time scale: The corresponding analysis is presented at an annual time scale for each of the twelve calendar months separately. This analysis option does not require the computation of deseasonalized anomalies because the calendar months are not combined. Instead we test the hypothesis that the monthly mean temperature is responsive to monthly mean cloud cover from one year to the next. The results are presented in Figure 4 to Figure 6.
  7. Annual time scale: Strong statistically significant negative correlations, from r=-0.5 to r=-0.7, are seen in the source data for the eleven calendar months from February to December, as shown in the left frame of Figure 6. However, these correlations are artifacts of long term trends and do not represent responsiveness of temperature to cloud cover at an annual time scale, as can be seen in the right frame of Figure 6 where all the statistically significant negative correlations have vanished.
  8. This analysis shows that the spurious correlation in the source data, created by long term trends and implying no responsiveness at an annual or monthly time scale, has been misinterpreted by the authors of the blog posts as evidence of an inverse causal relationship between cloud cover and surface temperature. Of course, correlation even at the time scale of interest does not imply causation; that condition is more clearly stated as “correlation is a necessary but not sufficient condition for causation”. Here we have addressed the necessity condition only and make no claim about sufficiency.
  9. All data and computational details used in this work are available for download from an online data archive: [LINK]
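
The deseasonalization step described in items 2 and 3 above can be sketched in Python. This is a minimal illustration on synthetic data: the cloud series, its seasonal shape, and the noise level are invented for demonstration and are not the source cloud data.

```python
import numpy as np

def deseasonalize(series, months):
    """Remove the mean seasonal cycle: subtract each calendar
    month's long-term mean from that month's observations."""
    series = np.asarray(series, dtype=float)
    months = np.asarray(months)
    anomalies = series.copy()
    for m in range(1, 13):
        mask = months == m
        anomalies[mask] -= series[mask].mean()
    return anomalies

# Synthetic illustration: 30 years of monthly cloud cover (%) with a
# built-in seasonal cycle, lowest in the Northern summer, plus noise.
rng = np.random.default_rng(0)
months = np.tile(np.arange(1, 13), 30)
seasonal = 66 + 2.0 * np.cos(2 * np.pi * (months - 1) / 12)
cloud = seasonal + rng.normal(0, 0.5, months.size)

cloud_anom = deseasonalize(cloud, months)
# By construction, each calendar month's mean anomaly is now ~0,
# so the seasonal cycle no longer masquerades as a signal.
```

The same function could be applied to the source cloud cover series before it is compared with the (already deseasonalized) HadCRUT4 anomalies.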
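
The detrending test of items 4 and 5 can be sketched as follows. The two synthetic series below share opposing linear trends with independent noise, chosen only to show how shared trends manufacture a strong negative correlation that vanishes on detrending; none of the numbers are the HadCRUT4 or cloud cover data.

```python
import numpy as np

def detrend(y):
    """Remove an OLS linear trend, returning the residuals."""
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(1)
t = np.arange(828)  # e.g. 69 years of monthly observations
temp = 0.002 * t + rng.normal(0, 0.3, t.size)    # rising trend + noise
cloud = -0.004 * t + rng.normal(0, 0.6, t.size)  # falling trend + noise

r_source = pearson_r(cloud, temp)                       # strongly negative
r_detrended = pearson_r(detrend(cloud), detrend(temp))  # near zero
```

Although the noise in the two series is independent by construction, `r_source` comes out strongly negative purely because of the opposing trends; `r_detrended` correctly hovers near zero. np.polyfit is used for the linear fit; any OLS routine would do.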
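
The per-calendar-month analysis of items 6 and 7 can be sketched in the same way, again on synthetic trending series rather than the actual data: one source correlation and one detrended correlation is computed for each calendar month across years.

```python
import numpy as np

def detrend(y):
    """Remove an OLS linear trend, returning the residuals."""
    t = np.arange(y.size)
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

rng = np.random.default_rng(2)
n_years = 69
months = np.tile(np.arange(1, 13), n_years)
t = np.arange(months.size)
temp = 0.003 * t + rng.normal(0, 0.3, t.size)    # rising trend + noise
cloud = -0.006 * t + rng.normal(0, 0.6, t.size)  # falling trend + noise

# One correlation per calendar month (Jan..Dec), source vs detrended.
r_src_by_month, r_det_by_month = [], []
for m in range(1, 13):
    sel = months == m
    r_src_by_month.append(np.corrcoef(cloud[sel], temp[sel])[0, 1])
    r_det_by_month.append(
        np.corrcoef(detrend(cloud[sel]), detrend(temp[sel]))[0, 1])
# Source correlations are strongly negative for every month; the
# detrended correlations scatter around zero, mirroring Figure 6.
```

This reproduces the qualitative pattern reported above: trend-driven negative correlations in every calendar month that disappear once each month's series is detrended.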