Thongchai Thailand

Archive for November 2019

FIGURE 1: EL NINO AND LA NINA CYCLES (ENSO): DATA PROVIDED BY JAN NULL, METEOROLOGIST, GOLDEN GATE WEATHER SERVICES, A RECOGNIZED SOURCE OF ENSO INFORMATION [LINK]


 

FIGURE 2: TREND ANALYSIS OF OCEANIC NINO INDEX 1950-2019

 

FIGURE 3: CORRELATION BETWEEN ONI AND TEMPERATURE

 

FIGURE 4: CORRELATION ANALYSIS: JJA-MAJ

 

FIGURE 5: CORRELATION ANALYSIS: NDJ-MAJ

 

FIGURE 6: THE GEOLOGICAL SOURCE OF ENSO EVENTS

[LINK TO THE HOME PAGE OF THIS SITE]

 

THIS POST IS A CRITICAL REVIEW OF THE CLAIM BY CLIMATE SCIENCE THAT AGW CLIMATE CHANGE HAS MADE THE EL NINO – LA NINA ENSO CYCLE MORE INTENSE.

  1. THE SOURCE STUDY BY GROTHE & COBB of the Georgia Institute of Technology’s School of Earth and Atmospheric Sciences: Grothe, Pamela R., et al. “Enhanced El Niño‐Southern Oscillation variability in recent decades.” Geophysical Research Letters, 2019. Abstract: The El Niño‐Southern Oscillation (ENSO) represents the largest source of year‐to‐year global climate variability. While earth system models suggest a range of possible shifts in ENSO properties under continued greenhouse gas forcing, many centuries of preindustrial climate data are required to detect a potential shift in the properties of recent ENSO extremes. Here, we reconstruct the strength of ENSO variations over the last 7,000 years with a new ensemble of fossil coral oxygen isotope records from the Line Islands, located in the central equatorial Pacific. The corals document a significant decrease in ENSO variance of ~20% from 3,000 to 5,000 years ago, coinciding with changes in spring/fall precessional insolation. We find that ENSO variability over the last five decades is ~25% stronger than during the preindustrial. Our results provide empirical support for recent climate model projections showing an intensification of ENSO extremes under greenhouse forcing.
  2. The authors have kindly provided a translation of their scientific language into plain English as follows: Our climate models tell us that El Niño will intensify due to greenhouse warming. Here, new coral reconstructions of the El Niño‐Southern Oscillation (ENSO) record show sustained, significant changes in ENSO variability over the last 7,000 years, and at the same time we have found ENSO extremes over the last 50 years that are stronger than ENSO cycles of pre-industrial times. These results tell us that El Niño events are intensifying due to anthropogenic climate change. Key Points: (1) Data from Line Island corals show ENSO strength significantly weaker between 3,000 and 5,000 years ago compared to the last 2,000 years. (2) ENSO extremes of the last 50 years are significantly stronger than those of the pre‐industrial era in the central tropical Pacific. (3) Therefore, AGW climate change is causing ENSO cycles to become more extreme.
  3. A further translation into plain language has been kindly provided by Sci-Tech-Daily online magazine [LINK] as follows: Compelling Hard Evidence: El Nino Swings More Violently in the Industrial Age: El Ninos have become more intense in the industrial age, which stands to worsen storms, drought, and coral bleaching in El Nino years. A new study has found compelling evidence in the Pacific Ocean that the stronger El Ninos are part of a climate pattern that is new and strange. It is the first known time that enough physical evidence spanning millennia has come together to allow researchers to say definitively that El Ninos, La Ninas, and the climate phenomenon that drives them have become more extreme in the times of human-induced climate change. The industrial age ENSO swings are 25% stronger than in the pre-industrial records. The evidence had slumbered in and around shallow Pacific waters, where ENSO and El Ninos originate. The corals’ recordings of sea surface temperatures proved to be astonishingly accurate when benchmarked: coral records from 1981 to 2015 exactly matched sea surface temperatures measured via satellite in the same period. In 2018, enough coral data had amassed to distinguish ENSO’s recent activity from its natural pre-industrial patterns. To stress-test the data, Grothe left out chunks to see if the industrial age ENSO signal still stuck out. She removed the record-setting 1997/1998 El Nino-La Nina and examined industrial age windows of time between 30 and 100 years long. The signal held in all windows, but the data needed the 97/98 event to be statistically significant. This could mean that changes in ENSO activity have just now reached a threshold that makes the impact of the industrial economy detectable. (The windowed stress test is sketched in code after this list.)
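The windowed variance comparison described in item 3 can be illustrated with a short sketch. This is a minimal illustration in Python with synthetic stand-in series, not the Grothe et al. coral reconstructions; the variable names and numbers are hypothetical.

```python
# Minimal sketch of the windowed variance stress test: compare ENSO
# variance in industrial-era windows of 30-100 years against a
# preindustrial baseline. All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
preindustrial = rng.normal(0.0, 1.00, 5000)  # stand-in preindustrial record
industrial = rng.normal(0.0, 1.25, 140)      # stand-in industrial-era record

def variance_ratio(window_years):
    """Variance of the last `window_years` of the industrial record
    relative to the preindustrial baseline variance."""
    return np.var(industrial[-window_years:]) / np.var(preindustrial)

for window in (30, 50, 100):
    print(f"{window}-yr window: variance ratio = {variance_ratio(window):.2f}")
```

A ratio persistently above 1 across window lengths is the pattern the paper reports as a ~25% strengthening; removing the 1997/98 event from the stand-in series would test whether the ratio depends on a single extreme.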

 

 

RESPONSE AND CRITICAL COMMENTS ON THESE REPORTS

  1. The ENSO data in the form of the Oceanic Nino Index (ONI) published by Jan Null [LINK] is displayed in Figure 1. As of this writing, the data cover a 70-year time period from 1950 to 2019 and present the ONI data for a moving 3-month window. The period since 1950 {“mid-century”} is claimed by NASA, and by climate scientists in general, to be the period in which the theory of AGW in terms of CO2 forcing of surface temperature is most apparent [LINK]. Therefore, if there is an impact of AGW CO2 forcing of surface temperature on ENSO strength, it should be apparent in an analysis of the relevant trends and correlations with respect to the ONI and temperature.
  2. Trends in the 3-month running average of the Oceanic Nino Index are displayed graphically in Figure 2. The top frame of Figure 2 is a GIF animation that cycles through the ten 3-month moving average values of the ONI from June-July-August to March-April-May of the following year. The study period in these charts is 1950-2019. The red line through the data in these charts is a 3rd order polynomial regression line that should reveal trends as well as any midstream changes in trend. These lines appear to be essentially flat, with no trend apparent in the graphical representation.
  3. The bottom frame of Figure 2 presents the results of linear regression trend analysis of the ONI against time from 1950 to 2019. No evidence of a trend in the ONI is found in the data. This finding is inconsistent with the proposition that AGW climate change has caused an increase in El Nino strength, the assumed mechanism being rising sea surface temperatures (SST) attributed to CO2 forcing. A more direct analysis of this relationship is provided by the detrended correlation analysis presented in Figure 3, Figure 4, and Figure 5, and sketched in code after this list.
  4. The top frame of Figure 3 is a GIF animation that cycles through the six regions studied (global, northern hemisphere, southern hemisphere, northern extent, southern extent, and tropics). The graphic shows that the temperature above oceans in the tropics shows the strongest correlations. The somewhat weaker correlations found for the two hemispheres vanish when the tropical portion of the hemisphere is removed to define the northern and southern extents. The ordinate contains the ten three-month periods for which the mean ONI and temperature data are used in the correlation analysis, from 1=June-July-August to 10=March-April-May. The highest correlations are seen for the winter months along with late fall and early spring.
  5. Thus, we find that detrended correlation analysis shows a strong correlation between UAH lower troposphere temperature above ocean areas and the ONI. This correlation supports a relationship between temperature and the ONI and is strongest for temperatures above the tropics (TR). Statistically significant correlations are also seen for the Northern and Southern Hemispheres (NH & SH), but these correlations disappear when the tropical section of each hemisphere is removed to form the northern and southern extents (NX & SX).
  6. These results suggest that there is a temperature phenomenon in the tropics that causes rising and falling ONI levels even in the absence of a trend. A rational explanation for this relationship is found in the geology of the tropical region near the Solomon Islands and Papua New Guinea as shown in Figure 6. A detailed description of this localized geological source of the energy that drives the ENSO variability is described in a related post [LINK] .
  7. We conclude from the results of the trend analysis and detrended correlation analysis presented above that the data do not support the hypothesis that AGW temperature trends since 1950 have caused ENSO intensity to increase. If that were so, there would be a trend in the ONI, and that trend would show a strong correlation with temperature that is not restricted to the tropics. The tropical ocean temperature trends for the ten 3-month means, presented in Figure 7 below, do not show an AGW forcing behavior. More importantly, the ONI does not show a rising trend. The data are consistent with the evaluation in a related post that ENSO variability is driven by geological forces rather than by atmospheric composition.
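The trend and detrended-correlation computations described in items 2-5 can be summarized in a short sketch. This is a minimal illustration in Python using synthetic stand-in series; the real inputs would be the Jan Null ONI table and the UAH lower-troposphere anomalies, which are not reproduced here.

```python
# Minimal sketch of the analyses described above: (1) linear OLS trend
# of the ONI against time, (2) a cubic polynomial fit to look for
# midstream changes in trend, and (3) detrended correlation between
# ONI and temperature. Series here are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1950, 2020)                        # study period 1950-2019
oni = rng.normal(0.0, 0.8, years.size)               # stand-in ONI values
temp = 0.5 * oni + rng.normal(0.0, 0.3, years.size)  # stand-in UAH anomalies

# (1) Linear trend of ONI against time (Figure 2, bottom frame).
trend = stats.linregress(years, oni)
print(f"ONI trend = {trend.slope:.4f}/yr, p = {trend.pvalue:.3f}")

# (2) Cubic fit (the red line in the Figure 2 charts).
cubic = np.polynomial.Polynomial.fit(years, oni, deg=3)
print("cubic coefficients:", cubic.convert().coef)

# (3) Detrended correlation (Figures 3-5): remove each series' own
# linear trend, then correlate the residuals so that shared trends
# cannot masquerade as a relationship.
def detrend(y, x):
    fit = stats.linregress(x, y)
    return y - (fit.intercept + fit.slope * x)

r, p = stats.pearsonr(detrend(oni, years), detrend(temp, years))
print(f"detrended ONI-temperature correlation: r = {r:.2f}, p = {p:.3f}")
```

In the post's analysis this procedure is repeated for each of the ten 3-month windows and each of the six regions; the sketch shows a single window and region.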

 

FIGURE 7: TROPICAL OCEAN TEMPERATURE AND ONI TRENDS

 

 

ENSO VARIABILITY BIBLIOGRAPHY

  1. Yeh, Sang-Wook, et al. “El Niño in a changing climate.” Nature 461.7263 (2009): 511. El Niño events, characterized by anomalous warming in the eastern equatorial Pacific Ocean, have global climatic teleconnections and are the most dominant feature of cyclic climate variability on subdecadal timescales. Understanding changes in the frequency or characteristics of El Niño events in a changing climate is therefore of broad scientific and socioeconomic interest. Recent studies [1-5] show that the canonical El Niño has become less frequent and that a different kind of El Niño has become more common during the late twentieth century, in which warm sea surface temperatures (SSTs) in the central Pacific are flanked on the east and west by cooler SSTs. This type of El Niño, termed the central Pacific El Niño (CP-El Niño; also termed the dateline El Niño [2], El Niño Modoki [3] or warm pool El Niño [5]), differs from the canonical eastern Pacific El Niño (EP-El Niño) in both the location of maximum SST anomalies and tropical–midlatitude teleconnections. Here we show changes in the ratio of CP-El Niño to EP-El Niño under projected global warming scenarios from the Coupled Model Intercomparison Project phase 3 multi-model data set [6]. Using calculations based on historical El Niño indices, we find that projections of anthropogenic climate change are associated with an increased frequency of the CP-El Niño compared to the EP-El Niño. When restricted to the six climate models with the best representation of the twentieth-century ratio of CP-El Niño to EP-El Niño, the occurrence ratio of CP-El Niño/EP-El Niño is projected to increase as much as five times under global warming. The change is related to a flattening of the thermocline in the equatorial Pacific.
  2. Timmermann, Axel, et al. “El Niño–southern oscillation complexity.” Nature 559.7715 (2018): 535-545.  El Niño events are characterized by surface warming of the tropical Pacific Ocean and weakening of equatorial trade winds that occur every few years. Such conditions are accompanied by changes in atmospheric and oceanic circulation, affecting global climate, marine and terrestrial ecosystems, fisheries and human activities. The alternation of warm El Niño and cold La Niña conditions, referred to as the El Niño–Southern Oscillation (ENSO), represents the strongest year-to-year fluctuation of the global climate system. Here we provide a synopsis of our current understanding of the spatio-temporal complexity of this important climate mode and its influence on the Earth system.
  3. Grothe, Pamela R., et al. “Enhanced El Niño‐Southern Oscillation variability in recent decades.” Geophysical Research Letters (2019).  The El Niño‐Southern Oscillation (ENSO) represents the largest source of year‐to‐year global climate variability. While earth system models suggest a range of possible shifts in ENSO properties under continued greenhouse gas forcing, many centuries of preindustrial climate data are required to detect a potential shift in the properties of recent ENSO extremes. Here, we reconstruct the strength of ENSO variations over the last 7,000 years with a new ensemble of fossil coral oxygen isotope records from the Line Islands, located in the central equatorial Pacific. The corals document a significant decrease in ENSO variance of ~20% from 3,000 to 5,000 years ago, coinciding with changes in spring/fall precessional insolation. We find that ENSO variability over the last five decades is ~25% stronger than during the preindustrial. Our results provide empirical support for recent climate model projections showing an intensification of ENSO extremes under greenhouse forcing.

 

 

FIGURE 1: THE ARCTIC AS SEEN FROM ABOVE THE NORTH POLE

svalbard-1

Figure 1 above is a map of the Arctic Ocean as seen from a point in space directly above the North Pole. From the North Pole, all directions are South. The Chukchi Sea appears at the top of the map directly North of Beringia where Siberia meets Alaska. The Chukchi Sea is wedged in between Alaska and the Chukchi Peninsula of Siberia and extends as far North as Wrangel island. A close up of the Chukchi Sea area in a normal map where North is up and South is down is shown in the two maps in Figure 2 below.

 

FIGURE 2: MAP OF THE CHUKCHI SEA

THIS POST IS A REVIEW OF A GIZMODO ARTICLE BY BRIAN KAHN [LINK TO TEXT] [LINK TO THE YOUTUBE VIDEO] ON THE ODDITIES OF SEA ICE IN THE CHUKCHI SEA AND THE INTERPRETATION OF THESE ODDITIES IN TERMS OF AGW CLIMATE CHANGE AND ITS FEEDBACK AMPLIFICATION BY WAY OF LOST ALBEDO IN THE DECLINE OF SEA ICE EXTENT. THE FULL TEXT OF THE BRIAN KAHN ARTICLE IS INCLUDED BELOW AT THE END OF THIS PRESENTATION, WITH BIBLIOGRAPHIC REFERENCES TO RESEARCH PAPERS ON SEA ICE EXTENT IN THE CONTEXT OF AGW.

 

[LINK TO HOME PAGE OF THIS SITE]

 

FIGURE 3: THE ZACK LABE SEA ICE CHART


 

THE BRIAN KAHN ARTICLE ABOUT THE ZACK LABE SEA ICE CHART: 

CLIMATE CHANGE: Some Arctic Sea Ice Is Acting Like It’s Mid-Summer

  1. Winter has extended its grip on the Arctic, dropping a curtain of darkness on the top of the world. But at least one part of the Arctic is resisting its grasp. In what’s becoming an unfortunately common story, seasonal sea ice growth is stalling out in one of the gateway seas leading to the heart of the Arctic Ocean. The Chukchi Sea currently has a sea ice extent more reminiscent of summer than early winter, a sign that something is not right in the waters at the highest latitudes of the globe.

  2. The Chukchi Sea sits between northern Alaska and Russia. That makes it a crucial bridge to the Bering Sea, a place for sea ice to latch on and spread its icy tendrils to the south. But this winter so far has seen ice suffer. After bottoming out in September, the ice in the Chukchi Sea has failed to rebound. Usually, the dip in temperatures coupled with the lack of sunlight causes ice to build back up quickly. This year, though, growth has been much slower. Sea ice data crunched by University of California, Irvine PhD candidate and Arctic watcher Zack Labe shows that sea ice extent in the Chukchi Sea is the lowest on record for this time of year by a long shot.
  3. Arctic sea ice as a whole sits at its third lowest extent on record for this time of year and is well below the long-term average. Part of the reason for the sluggish growth ties to this spring and summer of sweltering discontent. Temperatures were abnormally high much too often. It reached nearly 95 degrees Fahrenheit in the Swedish Arctic. 
  4. Lightning, which generally requires warm, humid conditions, struck near the North Pole.  The northernmost settlement on Earth hit 70 degrees Fahrenheit for the first time ever. That’s just a smattering of all the ways the Arctic was fucked this summer. 
  5. Don’t even get me started on the fires, but they all point to the culprit likely driving weak sea ice growth: heat, and lots of it. The intense heat this summer helped melt ice. This year’s Arctic sea ice minimum was the second lowest on record. That in turn meant more dark, open water was available to absorb the sun’s rays and heat up. So even now that the sun has gone down for much of the Arctic, the last rays of summer are still very much present in the form of toasty (by Arctic standards) waters, making it hard for sea ice to form.
  6. This feedback loop is one of the hallmarks of climate change. Carbon pollution has warmed the Arctic twice as fast as the rest of the world, and the system has rapidly destabilized in recent years. The more plentiful fires and melting permafrost are releasing more carbon that will further speed up the changes. Meanwhile, disappearing sea ice and thus more open water will ensure the region continues to heat faster than the rest of the world. The vicious cycle has put the Arctic on the brink of a tipping point into a more volatile state unrecognizable from the Arctic we know today. If you want to know what the transition could look like, the Chukchi Sea is offering quite the lesson right now.

 

 

COMMENT ON THE BRIAN KAHN ARTICLE

It is noted and acknowledged by the authors (Zack Labe and Brian Kahn) that the sea ice phenomenon in question cannot be generalized to the Arctic Sea nor across the time span and time scale of AGW climate change that relates to long term trends in atmospheric heat balance. The event is localized to the Chukchi Sea, a small corner of the Arctic wedged in between Alaska and Siberia. The phenomenon is also constrained in time to a singular event. A more rational explanation for this event than atmospheric heat energy trends since pre-industrial times is proposed in terms of the known geological features of the Chukchi Sea presented in the charts in Figure 4 below, which include the Graben and Laptev rift systems. Anomalous events constrained by time and geography may not have a ready explanation in terms of long term atmospheric trends. It is shown in related posts that year to year changes in September minimum sea ice extent are unrelated to atmospheric temperature trends attributed to AGW climate change [LINK] [LINK] [LINK]. We therefore propose that Arctic sea ice dynamics should be understood not exclusively in terms of atmospheric phenomena but that their study should include effects of the known geological dynamics of the Arctic, particularly when the sea ice event in question is localized in time and space [LINK] [LINK].

 

FIGURE 4: GEOLOGICAL FEATURES OF THE CHUKCHI SEA


TECTONIC FEATURES OF THE AMERASIA BASIN: (KONONOV 2013): In the bottom frame of Figure 4 above, the black circles are the points of the heat flux measurements, and the digits in the circles are the average heat flux values. The straight lines are the magnetic anomalies. The double arrows denote the Aptian–Albian sublatitudinal extension of the Eurasian margin. The dashed line delineates the central block of the Arctida continent, which was fragmented as a result of rifting and diffuse spreading into the provinces of basins and ridges. Bathymetry simulation indicates that the Mesozoic Arctic Plume is in the lithosphere of the Alpha-Mendeleev and Lomonosov ridges (map above). The study also presents a model of the thermal subsidence to the asthenosphere. The calculated coefficients are compared with those obtained for the Greenland-Iceland and Iceland-Faeroe ridges, which were formed in response to hotspot activity. It was shown that the coefficients of the thermal subsidence in the central part of the Alpha-Mendeleev and Lomonosov Ridges are similar to those calculated for the Greenland-Iceland and Iceland-Faeroe ridges. This indicates the thermal regime of the subsidence of the Alpha-Mendeleev and Lomonosov ridges since the Early Miocene and the increased influence of the Arctic plume on the ridge genesis.

 

 

CHUKCHI SEA SEA-ICE BIBLIOGRAPHY 

 

  1. Neal, Victor T., Stephen Neshyba, and Warren Denner. “Thermal stratification in the Arctic Ocean.” Science 166.3903 (1969): 373-374. Fine scale measurements of the vertical temperature profile in an Arctic water column show the presence of several cascaded isothermal layers. Layers between the depths of 300 and 350 meters range from 2 to 10 meters in thickness, while the temperature change between adjacent layers is approximately 0.026°C. The individual layers are isothermal to within ± 0.001°C.
  2. Van, Hulsen A. “Geothermal channel and harbor ice control system.” U.S. Patent No. 3,807,491. 30 Apr. 1974. A thermo-arctic sea passage is formed and maintained by providing a series of geothermal wells spaced along the intended route. Heat energy transferred from a deep geothermal strata to the surface melts the ice to form a water channel. Reformation of ice is inhibited by efficient and active water movement and wave action induced by wind action.
  3. von Quillfeldt, Cecilie H., William G. Ambrose, and Lisa M. Clough. “High number of diatom species in first-year ice from the Chukchi Sea.” Polar Biology 26.12 (2003): 806-818.  Our study describes the species composition of microalgae, primarily diatoms, in two ice cores collected from the Chukchi Sea in early June 1998. At least 251 species were present in 2 cores collected 10 m apart in first-year ice. This is a greater number of algal species in ice from one locality than has been recorded from any other area of the Arctic. Microalgae were distributed throughout the 173-cm-long core, but abundance and species composition varied among different sections of the core, with maximum species richness (108 and 103 species in the 94- to 103- and 103- to 113-cm sections, respectively) occurring in the middle sections. More than 237 species were recorded from this core. Only the bottom 20 cm of the shorter (110 cm) core was analysed and it contained 135 algal species, still an extraordinarily high number of species. Marine species dominated both cores, but typical brackish and freshwater species were also present. None of these species, however, had more than 1% relative abundance. It should be noted, though, that there were several distinct, but unidentified, species of unknown origin. Characteristic ice algal species (e.g. Nitzschia frigida, Navicula pelagica, solitary Navicula spp., in addition to Cylindrotheca closterium) were the numerical dominants in most sections of the long core, but phytoplankton and benthic species were quite abundant in some sections. One section was dominated by a blue-green bacterium, presumably of the genus Anabaena. The species composition is consistent with several different mechanisms for algal incorporation into ice (i.e. seawater filtration ice, seeding from the sea floor, freshwater input). Over time, ice dynamics and sources of ice in the Chukchi Sea appear to result in high numbers of algal species in the ice. It is also likely that season of collection contributed to the high number of species observed. Determining the geographical area of origin for the different species is however difficult, due to the large-scale pattern of ice circulation.
  4. Martin, Seelye, et al. “Estimation of the thin ice thickness and heat flux for the Chukchi Sea Alaskan coast polynya from Special Sensor Microwave/Imager data, 1990–2001.” Journal of Geophysical Research: Oceans 109.C10 (2004).  One of the largest Arctic polynyas occurs along the Alaskan coast of the Chukchi Sea between Cape Lisburne and Point Barrow. For this polynya (iceless sea surface surrounded by sea ice), a new thin ice thickness algorithm is described that uses the ratio of the vertically and horizontally polarized Special Sensor Microwave/Imager (SSM/I) 37‐GHz channels to retrieve the distribution of thicknesses and heat fluxes at a 25‐km resolution. Comparison with clear‐sky advanced very high resolution radiometer data shows that the SSM/I thicknesses and heat fluxes are valid for ice thicknesses less than 10–20 cm, and comparison with several synthetic aperture radar (SAR) images shows that the 10‐cm ice SSM/I ice thickness contour approximately follows the SAR polynya edge. For the twelve winters of 1990–2001, the ice thicknesses and heat fluxes within the polynya are estimated from daily SSM/I data, then compared with field data and with estimates from other investigations. The results show the following: First, our calculated heat losses are consistent with 2 years of over‐winter salinity and temperature field data. Second, comparison with other numerical and satellite estimates of the ice production shows that although our ice production per unit area is smaller, our polynya areas are larger, so that our ice production estimates are of the same order. Because our salinity forcing occurs over a larger area than in the other models, the oceanic response associated with our forcing will be modified.
  5. De Vernal, Anne, Claude Hillaire‐Marcel, and Dennis A. Darby. “Variability of sea ice cover in the Chukchi Sea (western Arctic Ocean) during the Holocene.” Paleoceanography 20.4 (2005).  Dinocysts from cores collected in the Chukchi Sea from the shelf edge to the lower slope were used to reconstruct changes in sea surface conditions and sea ice cover using modern analogue techniques. Holocene sequences have been recovered in a down‐slope core (B15: 2135 m, 75°44′N, sedimentation rate of ∼1 cm kyr−1) and in a shelf core (P1: 201 m, 73°41′N, sedimentation rate of ∼22 cm kyr−1). The shelf record spanning about 8000 years suggests high‐frequency centennial oscillations of sea surface conditions and a significant reduction of the sea ice at circa 6000 and 2500 calendar (cal) years B.P. The condensed offshore record (B15) reveals an early postglacial optimum with minimum sea ice cover prior to 12,000 cal years B.P., which corresponds to a terrestrial climate optimum in Bering Sea area. Dinocyst data indicate extensive sea ice cover (>10 months yr−1) from 12,000 to 6000 cal years B.P. followed by a general trend of decreasing sea ice and increasing sea surface salinity conditions, superimposed on large‐amplitude millennial‐scale oscillations. In contrast, δ18O data in mesopelagic foraminifers (Neogloboquadrina pachyderma) and benthic foraminifers (Cibicides wuellerstorfi) reveal maximum subsurface temperature and thus maximum inflow of the North Atlantic water around 8000 cal years B.P., followed by a trend toward cooling of the subsurface to bottom water masses. Sea‐surface to subsurface conditions estimated from dinocysts and δ18O data in foraminifers thus suggest a decoupling between the surface water layer and the intermediate North Atlantic water mass with the existence of a sharp halocline and a reverse thermocline, especially before 6000 years B.P. The overall data and sea ice reconstructions from core B15 are consistent with strong sea ice convergence in the western Arctic during the early Holocene as suggested on the basis of climate model experiments including sea ice dynamics, matching a higher inflow rate of North Atlantic Water.
  6. Björk, Göran, and Peter Winsor. “The deep waters of the Eurasian Basin, Arctic Ocean: Geothermal heat flow, mixing and renewal.” Deep Sea Research Part I: Oceanographic Research Papers 53.7 (2006): 1253-1271.  Hydrographic observations from four separate expeditions to the Eurasian Basin of the Arctic Ocean between 1991 and 2001 show a 300–700 m thick homogenous bottom layer. The layer is characterized by slightly warmer temperature compared to ambient, overlying water masses, with a mean layer thickness of 500±100 m and a temperature surplus of 7.0±2×10−3 °C. The layer is present in the deep central parts of the Nansen and Amundsen Basins away from continental slopes and ocean ridges and is spatially coherent across the interior parts of the deep basins. Here we show that the layer is most likely formed by convection induced by geothermal heat supplied from Earth’s interior. Data from 1991 to 1996 indicate that the layer was in a quasi steady state where the geothermal heat supply was balanced by heat exchange with a colder boundary. After 1996 there is evidence of a reformation of the layer in the Amundsen Basin after a water exchange. Simple numerical calculations show that it is possible to generate a layer similar to the one observed in 2001 in 4–5 years, starting from initial profiles with no warm homogeneous bottom layer. Limited hydrographic observations from 2001 indicate that the entire deep-water column in the Amundsen Basin is warmer compared to earlier years. We argue that this is due to a major deep-water renewal that occurred between 1996 and 2001.
  7. Francis, Jennifer A., and Elias Hunter. “New insight into the disappearing Arctic sea ice.” Eos, Transactions American Geophysical Union 87.46 (2006): 509-511. The dramatic loss of Arctic sea ice is ringing alarm bells in the minds of climate scientists, policy makers, and the public. The extent of perennial sea ice—ice that has survived a summer melt season—has declined 20% since the mid‐1970s [Stroeve et al., 2005]. Its retreat varies regionally, driven by changes in winds and heating from the atmosphere and ocean. Limited data have hampered attempts to identify which culprits are to blame, but new satellite‐derived information provides insight into the drivers of change. A clear message emerges. The location of the summer ice edge is strongly correlated to variability in longwave (infrared) energy emitted by the atmosphere (downward longwave flux; DLF), particularly during the most recent decade when losses have been most rapid. Increasing DLF, in turn, appears to be driven by more clouds and water vapor in spring over the Arctic.
  8. McKay, J. L., et al. “Holocene fluctuations in Arctic sea-ice cover: dinocyst-based reconstructions for the eastern Chukchi Sea.” Canadian Journal of Earth Sciences 45.11 (2008): 1377-1397.  Cores from site HLY0501-05 on the Alaskan margin in the eastern Chukchi Sea were analyzed for their geochemical (organic carbon, δ13Corg, Corg/N, and CaCO3) and palynological (dinocyst, pollen, and spores) content to document oceanographic changes during the Holocene. The chronology of the cores was established from 210Pb dating of near-surface sediments and 14C dating of bivalve shells. The sediments span the last 9000 years, possibly more, but with a gap between the base of the trigger core and top of the piston core. Sedimentation rates are very high (∼156 cm/ka), allowing analyses with a decadal to centennial resolution. The data suggest a shift from a dominantly terrigenous to marine input from the early to late Holocene. Dinocyst assemblages are characterized by relatively high concentrations (600–7200 cysts/cm3) and high species diversity, allowing the use of the modern analogue technique for the reconstruction of sea-ice cover, summer temperature, and salinity. Results indicate a decrease in sea-ice cover and a corresponding, albeit much smaller, increase in summer sea-surface temperature over the past 9000 years. Superimposed on these long-term trends are millennial-scale fluctuations characterized by periods of low sea-ice and high sea-surface temperature and salinity that appear quasi-cyclic with a frequency of about one every 2500–3000 years. The results of this study clearly show that sea-ice cover in the western Arctic Ocean has varied throughout the Holocene. More importantly, there have been times when sea-ice cover was less extensive than at the end of the 20th century.
  9. Grebmeier, Jacqueline M., et al. “Biological response to recent Pacific Arctic sea ice retreats.” Eos, Transactions American Geophysical Union 91.18 (2010): 161-162.  Although recent major changes in the physical domain of the Arctic region, such as extreme retreats of summer sea ice since 2007, are well documented, large uncertainties remain regarding responses in the biological domain. In the Pacific Arctic north of Bering Strait, reduction in sea ice extent has been seasonally asymmetric, with minimal changes until the end of June and delayed sea ice formation in late autumn. The effect of extreme ice retreats and seasonal asymmetry in sea ice loss on primary production is uncertain, with no clear shift over time (2003–2008) in satellite‐derived chlorophyll concentrations. However, clear changes have occurred during summer in species ranges for zooplankton, bottom‐dwelling organisms (benthos), and fish, as well as through the loss of sea ice as habitat and platform for marine mammals.
  10. Nicolsky, D., and N. Shakhova. “Modeling sub-sea permafrost in the East Siberian Arctic Shelf: the Dmitry Laptev Strait.” Environmental Research Letters 5.1 (2010): 015006. The present state of sub-sea permafrost modeling does not agree with certain observational data on the permafrost state within the East Siberian Arctic Shelf. This suggests a need to consider other mechanisms of permafrost destabilization after the recent ocean transgression. We propose development of open taliks wherever thaw lakes and river paleo-valleys were submerged shelf-wide as a possible mechanism for the degradation of sub-sea permafrost. To test the hypothesis we performed numerical modeling of permafrost dynamics in the Dmitry Laptev Strait area. We achieved sufficient agreement with the observed distribution of thawed and frozen layers to suggest that the proposed mechanism of permafrost destabilization is plausible. Two basic mechanisms are proposed to explain permafrost dynamics after the inundation: the so-called upward degradation under geothermal heat flux in the areas underlain by fault zones (Romanovskii and Hubberten 2001), and the so-called downward degradation under the warming effect of large river bodies (Delisle 2000).
  11. Douglas, David C. Arctic sea ice decline: projected changes in timing and extent of sea ice in the Bering and Chukchi Seas. No. 2010-1176. US Geological Survey, 2010. The Arctic region is warming faster than most regions of the world due in part to increasing greenhouse gases and positive feedbacks associated with the loss of snow and ice cover. One consequence has been a rapid decline in Arctic sea ice over the past 3 decades, a decline that is projected to continue by state-of-the-art models. Many stakeholders are therefore interested in how global warming may change the timing and extent of sea ice Arctic-wide, and for specific regions. To inform the public and decision makers of anticipated environmental changes, scientists are striving to better understand how sea ice influences ecosystem structure, local weather, and global climate. Here, projected changes in the Bering and Chukchi Seas are examined because sea ice influences the presence of, or accessibility to, a variety of local resources of commercial and cultural value. In this study, 21st century sea ice conditions in the Bering and Chukchi Seas are based on projections by 18 general circulation models (GCMs) prepared for the fourth reporting period by the Intergovernmental Panel on Climate Change (IPCC) in 2007. Sea ice projections are analyzed for each of two IPCC greenhouse gas forcing scenarios: the A1B ‘business as usual’ scenario and the A2 scenario that is somewhat more aggressive in its CO2 emissions during the second half of the century. A large spread of uncertainty among projections by all 18 models was constrained by creating model subsets that excluded GCMs that poorly simulated the 1979-2008 satellite record of ice extent and seasonality. At the end of the 21st century (2090-2099), median sea ice projections among all combinations of model ensemble and forcing scenario were qualitatively similar. June is projected to experience the least amount of sea ice loss among all months. For the Chukchi Sea, projections show extensive ice melt during July and ice-free conditions during August, September, and October by the end of the century, with high agreement among models. High agreement also accompanies projections that the Chukchi Sea will be completely ice covered during February, March, and April at the end of the century. Large uncertainties, however, are associated with the timing and amount of partial ice cover during the intervening periods of melt and freeze. For the Bering Sea, median March ice extent is projected to be about 25 percent less than the 1979-1988 average by mid-century and 60 percent less by the end of the century. The ice-free season in the Bering Sea is projected to increase from its contemporary average of 5.5 months to a median of about 8.5 months by the end of the century. A 3-month longer ice-free season in the Bering Sea is attained by a 1-month advance in melt and a 2-month delay in freeze, meaning the ice edge typically will pass through the Bering Strait in May and January at the end of the century rather than June and November as presently observed.
  12. Carmack, Eddy C., et al. “The Arctic Ocean warms from below.” Geophysical Research Letters 39.7 (2012).  The old (∼450‐year isolation age) and near‐homogenous deep waters of the Canada Basin (CBDW), that are found below ∼2700 m, warmed at a rate of ∼0.0004°C yr−1 between 1993 and 2010. This rate is slightly less than expected from the reported geothermal heat flux (Fg ∼ 50 mW m−2). A deep temperature minimum Tmin layer overlies CBDW within the basin and is also warming at approximately the same rate, suggesting that some geothermal heat escapes vertically through a multi‐stepped, ∼300‐m‐thick deep transitional layer. Double diffusive convection and thermobaric instabilities are identified as possible mechanisms governing this vertical heat transfer. The CBDW found above the lower continental slope of the deep basin maintains higher temperatures than those in the basin interior, consistent with geothermal heat being distributed through a shallower water column, and suggests that heat from the basin interior does not diffuse laterally and escape at the edges.
  13. Jay, Chadwick V., Anthony S. Fischbach, and Anatoly A. Kochnev. “Walrus areas of use in the Chukchi Sea during sparse sea ice cover.” Marine Ecology Progress Series 468 (2012): 1-13. The Pacific walrus Odobenus rosmarus divergens feeds on benthic invertebrates on the continental shelf of the Chukchi and Bering Seas and rests on sea ice between foraging trips. With climate warming, ice-free periods in the Chukchi Sea have increased and are projected to increase further in frequency and duration. We radio-tracked walruses to estimate areas of walrus foraging and occupancy in the Chukchi Sea from June to November of 2008 to 2011, years when sea ice was sparse over the continental shelf in comparison to historical records. The earlier and more extensive sea ice retreat in June to September, and delayed freeze-up of sea ice in October to November, created conditions for walruses to arrive earlier and stay later in the Chukchi Sea than in the past. The lack of sea ice over the continental shelf from September to October caused walruses to forage in nearshore areas instead of offshore areas as in the past. Walruses did not frequent the deep waters of the Arctic Basin when sea ice retreated off the shelf. Walruses foraged in most areas they occupied, and areas of concentrated foraging generally corresponded to regions of high benthic biomass, such as in the northeastern (Hanna Shoal) and southwestern Chukchi Sea. A notable exception was the occurrence of concentrated foraging in a nearshore area of northwestern Alaska that is apparently depauperate in walrus prey. With increasing sea ice loss, it is likely that walruses will increase their use of coastal haul-outs and nearshore foraging areas, with consequences to the population that are yet to be understood.
  14. Arrigo, Kevin R., et al. “Massive phytoplankton blooms under Arctic sea ice.” Science 336.6087 (2012): 1408. Phytoplankton blooms over Arctic Ocean continental shelves are thought to be restricted to waters free of sea ice. Here, we document a massive phytoplankton bloom beneath fully consolidated pack ice far from the ice edge in the Chukchi Sea, where light transmission has increased in recent decades because of thinning ice cover and proliferation of melt ponds. The bloom was characterized by high diatom biomass and rates of growth and primary production. Evidence suggests that under-ice phytoplankton blooms may be more widespread over nutrient-rich Arctic continental shelves and that satellite-based estimates of annual primary production in these waters may be underestimated by up to 10-fold.
  15. Arrigo, Kevin R., et al. “Phytoplankton blooms beneath the sea ice in the Chukchi Sea.” Deep Sea Research Part II: Topical Studies in Oceanography 105 (2014): 1-16.  In the Arctic Ocean, phytoplankton blooms on continental shelves are often limited by light availability, and are therefore thought to be restricted to waters free of sea ice. During July 2011 in the Chukchi Sea, a large phytoplankton bloom was observed beneath fully consolidated pack ice and extended from the ice edge to >100 km into the pack. The bloom was composed primarily of diatoms, with biomass reaching 1291 mg chlorophyll a m−2 and rates of carbon fixation as high as 3.7 g C m−2 d−1. Although the sea ice where the bloom was observed was near 100% concentration and 0.8–1.2 m thick, 30–40% of its surface was covered by melt ponds that transmitted 4-fold more light than adjacent areas of bare ice, providing sufficient light for phytoplankton to bloom. Phytoplankton growth rates associated with the under-ice bloom averaged 0.9 d−1 and were as high as 1.6 d−1. We argue that a thinning sea ice cover with more numerous melt ponds over the past decade has enhanced light penetration through the sea ice into the upper water column, favoring the development of these blooms. These observations, coupled with additional biogeochemical evidence, suggest that phytoplankton blooms are currently widespread on nutrient-rich Arctic continental shelves and that satellite-based estimates of annual primary production in waters where under-ice blooms develop are ~10-fold too low. These massive phytoplankton blooms represent a marked shift in our understanding of Arctic marine ecosystems.
  16. Carmack, Eddy, et al. “Toward quantifying the increasing role of oceanic heat in sea ice loss in the new Arctic.” Bulletin of the American Meteorological Society 96.12 (2015): 2079-2105. The loss of Arctic sea ice has emerged as a leading signal of global warming. This, together with acknowledged impacts on other components of the Earth system, has led to the term “the new Arctic.” Global coupled climate models predict that ice loss will continue through the twenty-first century, with implications for governance, economics, security, and global weather. A wide range in model projections reflects the complex, highly coupled interactions between the polar atmosphere, ocean, and cryosphere, including teleconnections to lower latitudes. This paper summarizes our present understanding of how heat reaches the ice base from the original sources—inflows of Atlantic and Pacific Water, river discharge, and summer sensible heat and shortwave radiative fluxes at the ocean/ice surface—and speculates on how such processes may change in the new Arctic. The complexity of the coupled Arctic system, and the logistic and technological challenges of working in the Arctic Ocean, require a coordinated interdisciplinary and international program that will not only improve understanding of this critical component of global climate but will also provide opportunities to develop human resources with the skills required to tackle related problems in complex climate systems. We propose a research strategy with components that include 1) improved mapping of the upper- and middepth Arctic Ocean, 2) enhanced quantification of important processes, 3) expanded long-term monitoring at key heat-flux locations, and 4) development of numerical capabilities that focus on parameterization of heat-flux mechanisms and their interactions.

 

 


Arctic Ocean could be ice-free for part of the year as soon as 2044

[LINK TO HOME PAGE OF THIS SITE]


ICE FREE ARCTIC OBSESSION 2020 UPDATE:

The August 2020 report: OK, so there were a few ice free Arctic forecasts we didn’t get right, but we fixed our climate model with data from the Eemian interglacial and we got it right this time. By the summer of 2035, the frozen white caps on the top and bottom of the earth we see from space will be gone because Arctic summer sea-ice will disappear for the first time since primitive humans left Africa. This is the conclusion of a research paper published in Nature Climate Change. We must do something about it fast, otherwise it will be too late. That the Eemian interglacial was hotter than the Holocene is not an issue according to these scientists. The amount of sea-ice floating atop the Arctic Ocean at summer’s end has fallen about 13% per decade since 1979, and this summer is a sure bet to be No. 14. The 2035 estimate is based on what had happened in the Eemian interglacial, 130,000 years ago. Related post on the Eemian, which was much hotter than the Holocene: [LINK]. Climate scientists concede that these estimates come with lots and lots of uncertainty, which makes it possible that they don’t really know, but they then continue with the forecast scenarios and the climate action needed without consideration of uncertainty, because another research paper published jointly by North Carolina State University and NOAA arrived at a similar result, and so these findings are taken as validated and uncertainty is not an issue. As for the uncertainty issue:

  1. Scientists from North Carolina State University and the NOAA this year used a different model to arrive at a similar 2035 target for the ice-free Arctic summer. By “ice-free,” scientists usually mean an extent of less than 1 million square kilometers. The lowest it has reached is 3.4 million km² in 2012. (Both of these quantities are sketched in code after this list.)
  2. Ge Peng, research scholar on the North Carolina State University team, also noted that unexpected events could alter the timeline. The eruption of a large volcano, which spews chemicals into the atmosphere that block sunlight and lower temperatures, could push the estimates out a few years. Whichever summer is the first to lose its sea ice, Peng and her colleagues warn that businesses, governments, and people living in the Arctic need to prepare now for changes in regional geopolitics, transportation, and food availability. Peng hopes to travel to the still-frozen Arctic and find it the way we imagine it with our eyes closed. “I want to do that soon,” she says, “because I don’t want the sea ice to be gone by the time I take the cruise.”
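The two numbers quoted above, a September decline of about 13% per decade and an “ice-free” threshold of 1 million square kilometers, can be computed from a September extent series as in the minimal sketch below. The extent values here are synthetic stand-ins, not the NSIDC satellite record.

```python
# Minimal sketch: percent-per-decade decline of September sea ice
# extent (relative to a 1981-2010 baseline) and the year at which a
# linear extrapolation crosses the 1 million km^2 "ice-free" threshold.
# Extent values (millions of km^2) are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
years = np.arange(1979, 2020)
extent = 7.5 - 0.09 * (years - 1979) + rng.normal(0.0, 0.4, years.size)

fit = stats.linregress(years, extent)
baseline = extent[(years >= 1981) & (years <= 2010)].mean()
pct_per_decade = 100 * 10 * fit.slope / baseline
print(f"September trend: {pct_per_decade:.1f}% per decade")

# Year at which the fitted line reaches 1 million km^2.
ice_free_year = (1.0 - fit.intercept) / fit.slope
print(f"Linear extrapolation reaches 1 Mkm^2 around {ice_free_year:.0f}")
```

A straight-line extrapolation is of course not how the models cited in this post arrive at 2035 or 2044-2067; the sketch only makes the quoted definitions concrete.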

ICE FREE ARCTIC OBSESSION UPDATE: NOVEMBER 2019

M.PHYS.ORG 2019 ARCTIC OCEAN ICE FREE [LINK] 

  1. The fate of Arctic sea ice is a key topic for climate scientists because of its role in temperatures around the rest of the world. It’s hard to imagine the Arctic without sea ice. But according to a new study by UCLA climate scientists, human-caused climate change is on track to make the Arctic Ocean functionally ice-free for part of each year starting sometime between 2044 and 2067. 
  2. As long as humans have been on Earth, the planet has had a large cap of sea ice at the Arctic Circle that expands each winter and contracts each summer. The knowledge that sea ice is on the decline is not new: Satellite observations show that since 1979, the amount of sea ice in the Arctic in September—the month when there is the least sea ice, before water starts freezing again—has declined by 13 percent per decade.
  3. Scientists have been attempting to predict the future of Arctic sea ice for several decades, relying on an array of global climate models that simulate how the climate system will react to the carbon dioxide entering the atmosphere. But the models’ predictions have disagreed widely. Some show ice-free Septembers as early as 2026; others suggest the phenomenon will begin as late as 2132.
  4. The UCLA study, which was published in Nature Climate Change, narrows the predictions to a 25-year period. Model projections of sea ice loss diverge because they differ in their estimation of the sea ice albedo feedback, in which ice loss exposes darker water and is expected to cause greater local warming, which in turn leads to further ice melt. This feedback exacerbates warming and is one reason why the Arctic is heating up twice as fast as the rest of the globe.
  5. Thackeray and co-author Alex Hall, UCLA professor of atmospheric and oceanic sciences, determined which models estimate the sea ice albedo feedback most realistically and would therefore produce the most realistic projections for sea ice. They used the seasonal sea ice cycle to estimate the albedo feedback effect, since satellite observations track the seasonal melt cycle that includes the albedo feedback. Of 23 different climate models, the authors identified the six models that were closest to the observational data in the period 1980-2015. (The selection procedure is sketched in code after this list.)
  6. The approach of using an observable process in the current climate to evaluate global climate model projections of future climate was pioneered by Hall and his group in 2006, in a study focused on snow albedo feedback. It has since become widely used in climate science as researchers try to improve the precision of their projections. The fate of Arctic sea ice is a key topic for climate scientists because of its global impact on temperature. “Arctic sea ice is a key component of the earth system because of its highly reflective nature, which keeps the global climate relatively cool,” Thackeray said.
  7. There are other environmental and economic implications to ice loss as well. Sea ice is critical to the Arctic ecosystem, and to the fishing industry and indigenous peoples who depend on that ecosystem. And as Arctic ice is lost, more waters are used for commercial shipping and oil and gas exploration, which presents economic opportunity for some nations, but which also contributes to further greenhouse gas emissions and climate change. “The changes to come will have broad environmental, ecological and economic implications,” Thackeray said. “By reducing the uncertainty in our ice free prediction, we can be better prepared.”
  8. CITATION AND ABSTRACT: Thackeray, Chad W., and Alex Hall. “An emergent constraint on future Arctic sea-ice albedo feedback.” Nature Climate Change (2019): 1-7. Arctic sea ice has decreased substantially over recent decades, a trend projected to continue. Shrinking ice reduces surface albedo, leading to greater surface solar absorption, thus amplifying warming and driving further melt. This sea-ice albedo feedback (SIAF) is a key driver of Arctic climate change and an important uncertainty source in climate model projections. Using an ensemble of models, we demonstrate an emergent relationship between future SIAF and an observable version of SIAF in the current climate’s seasonal cycle. This relationship is robust in constraining SIAF over the coming decades (Pearson’s r = 0.76), and then it degrades. The degradation occurs because some models begin producing ice-free conditions, signalling a transition to a new ice regime. The relationship is strengthened when models with unrealistically thin historical ice are excluded. Because of this tight relationship, reducing model errors in the current climate’s seasonal SIAF and ice thickness can narrow SIAF spread under climate change.
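The emergent-constraint procedure described in items 5 and 6 can be sketched briefly. The sketch below is a hypothetical Python illustration with synthetic numbers, not CMIP output: each model's observable seasonal sea-ice albedo feedback (SIAF) is correlated with its projected future SIAF, and the models closest to a (hypothetical) satellite-observed seasonal value are retained.

```python
# Minimal sketch of an emergent constraint: across an ensemble, an
# observable quantity (seasonal SIAF) correlates with the projected
# future quantity (future SIAF), so an observation of the former
# constrains the latter. All numbers are synthetic stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_models = 23
seasonal_siaf = rng.normal(1.0, 0.3, n_models)  # observable in current climate
future_siaf = 0.9 * seasonal_siaf + rng.normal(0.0, 0.1, n_models)

r, _ = stats.pearsonr(seasonal_siaf, future_siaf)
print(f"emergent relationship across models: r = {r:.2f}")

# Constrain the projection: keep the six models whose seasonal SIAF is
# closest to a hypothetical satellite-observed value, as in the study.
observed = 1.05
closest = np.argsort(np.abs(seasonal_siaf - observed))[:6]
print(f"constrained future SIAF: {future_siaf[closest].mean():.2f}"
      f" +/- {future_siaf[closest].std():.2f}")
```

The Thackeray and Hall paper reports Pearson's r = 0.76 for this relationship in the real ensemble; the sketch only shows the mechanics of the selection step.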

 

ICE FREE ARCTIC OBSESSION UPDATE: 1999-2009

  1. 1999, STUDY SHOWS ARCTIC ICE SHRINKING BECAUSE OF GLOBAL WARMING. Sea ice in the Arctic Basin is shrinking by 14,000 square miles per year because of global warming caused by human activity, according to a new international study that used 46 years of data and sophisticated computer simulation models to tackle the specific question of whether the loss of Arctic ice is a natural variation or caused by global warming. The computer model says that the probability that these changes were caused by natural variation is 1%, but when global warming was added to the model the ice melt was a perfect fit. Therefore the ice melt is caused by human activities that emit greenhouse gases.
  2. 2003, SOOT WORSE FOR GLOBAL WARMING THAN PREVIOUSLY THOUGHT
    Soot that lands on snow has caused ¼ of the warming since 1880 because dirty snow traps more solar heat than pristine snow and induces a strong warming effect, according to a new computer model by James Hansen of NASA. It explains why sea ice and glaciers are melting faster than they should. Reducing soot emissions is an effective tool to curb global warming. It is easier to cut soot emissions than it is to cut CO2 emissions but we still need to reduce CO2 emissions in order to stabilize the atmosphere.
  3. 2004, ARCTIC CLIMATE IMPACT ASSESSMENT
    An unprecedented 4-year study of the Arctic shows that polar bears, walruses, and some seals are becoming extinct, and that Arctic summer sea ice may disappear entirely. Combined with a rapidly melting Greenland ice sheet, it will raise the sea level 3 feet by 2100, inundating lowlands from Florida to Bangladesh. Average winter temperatures in Alaska and the rest of the Arctic are projected to rise an additional 7 to 13 degrees over the next 100 years because of increasing emissions of greenhouse gases from human activities. The area is warming twice as fast as anywhere else because of global air circulation patterns and natural feedback loops, such as less ice reflecting sunlight, leading to increased warming at ground level and more ice melt. Native peoples’ ways of life are threatened. Animal migration patterns have changed, and the thin sea ice and thawing tundra make it too dangerous for humans to hunt and travel.
  4. 2004, RAPID ARCTIC WARMING BRINGS SEA LEVEL RISE
    The Arctic Climate Impact Assessment (ACIA) report says: increasing greenhouse gases from human activities is causing the Arctic to warm twice as fast as the rest of the planet; in Alaska, western Canada, and eastern Russia winter temperatures have risen by 2C to 4C in the last 50 years; the Arctic will warm by 4C to 7C by 2100. A portion of Greenland’s ice sheet will melt; global sea levels will rise; global warming will intensify. Greenland contains enough melting ice to raise sea levels by 7 meters; Bangkok, Manila, Dhaka, Florida, Louisiana, and New Jersey are at risk of inundation; thawing permafrost and rising seas threaten Arctic coastal regions; climate change will accelerate and bring about profound ecological and social changes; the Arctic is experiencing the most rapid and severe climate change on earth and it’s going to get a lot worse; Arctic summer sea ice will decline by 50% to 100%; polar bears will be driven towards extinction; this report is an urgent SOS for the Arctic; forest fires and insect infestations will increase in frequency and intensity; changing vegetation and rising sea levels will shrink the tundra to its lowest level in 21000 years; vanishing breeding areas for birds and grazing areas for animals will cause extinctions of many species; “if we limit emission of heat trapping carbon dioxide we can still help protect the Arctic and slow global warming”.
  5. 2007: THE ARCTIC IS SCREAMING
    Climate science declares that the low sea ice extent in the Arctic is the leading indicator of climate change. We are told that the Arctic “is screaming”, that Arctic sea ice extent is the “canary in the coal mine”, and that Polar Bears and other creatures in the Arctic are dying off and facing imminent extinction. Scientists say that the melting sea ice has set up a positive feedback system that would cause the summer melts in subsequent years to be greater and greater until the Arctic becomes ice free in the summer of 2012. We must take action immediately to cut carbon dioxide emissions from fossil fuels. [DETAILS] 
  6. 2007: THE ICE FREE ARCTIC CLAIMS GAIN MOMENTUM
    The unusual summer melt of Arctic sea ice in 2007 has encouraged climate science to warn the world that global warming will cause a steep decline in the amount of ice left in subsequent summer melts until the Arctic becomes ice free in summer and that could happen as soon as 2080 or maybe 2060 or it could even be 2030. This time table got shorter and shorter until, without a “scientific” explanation, the ice free year was brought up to 2013. In the meantime, the data showed that in 2008 and 2009 the summer melt did not progressively increase as predicted but did just the opposite by making a comeback in 2008 that got even stronger in 2009. [DETAILS]
  7. 2008: POSITIVE FEEDBACK: ARCTIC SEA ICE IN A DOWNWARD SPIRAL
Our use of fossil fuels is devastating the Arctic, where the volume of sea ice “fell to its lowest recorded level to date” this year, and the reduced ice coverage is causing a non-linear acceleration in the loss of polar ice because there is less ice to reflect sunlight. [DETAILS]
  8. 2008: THE ARCTIC WILL BE ICE FREE IN SUMMER IN 2008, 2013, 2030, OR 2100
The unusually low summer sea ice extent in the Arctic in 2007 has caught the attention of the IPCC, which has revised its projection of an ice free Arctic first from 2008 to 2013 and then again from 2013 to 2030. The way things are going it may be revised again to the year 2100. [DETAILS]
  9. 2008: GLOBAL WARMING IS THE CAUSE OF ALL ICE MELT EVENTS
When there was a greater focus on Antarctica, climate scientists said that global warming was melting the West Antarctic Ice Shelf; but the melting was found to be localized, with an active volcano underneath it, and the attention of “melt forecast” climate science shifted to Arctic sea ice after an extensive summer melt was observed in September 2007. [DETAILS]
  10. 2008: THE POLAR BEAR IS THREATENED BY OUR USE OF FOSSIL FUELS
The survival of the polar bear is threatened because man made global warming is melting ice in the Arctic. It is true that Arctic sea ice extent was unusually low in September 2007. This event emboldened global warming scaremongers to declare it a climate change disaster caused by greenhouse gas emissions from fossil fuels and to issue a series of scenarios about environmental holocaust yet to come. [DETAILS]
  11. 2009: SUMMER ARCTIC SEA ICE EXTENT IN 2009 THE 3RD LOWEST ON RECORD
    The second lowest was 2008 and the first lowest was 2007. This is not a trend that shows that things are getting worse. It shows that things are getting better and yet it is being sold and being bought as evidence that things are getting worse due to rising fossil fuel emissions. [DETAILS]
  12. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY 2029
    An alarm is raised that the extreme summer melt of Arctic sea ice in 2007 was caused by humans using fossil fuels and it portends that in 20 years human caused global warming will leave the Arctic Ocean ice-free in the summer raising sea levels and harming wildlife. [DETAILS]
  13. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY THE YEAR 2012
Climate scientists continue to extrapolate the extreme summer melt of Arctic sea ice in 2007 to claim that the summer melt of 2007 was a climate change event and that it implies that the Arctic will be ice free in the summer from 2012 onwards. This, we are told, is a devastating effect on the planet, and our use of fossil fuels is to blame. [DETAILS]
  14. 2009: THE SUMMER SEA ICE EXTENT IN THE ARCTIC WILL BE GONE
Summer melt of Arctic ice was the third most extensive on record in 2009, the second most extensive in 2008, and the most extensive in 2007. These data show that warming due to our carbon dioxide emissions is causing summer Arctic ice to gradually diminish until it is gone altogether. [DETAILS]

COMMENTS ON THE THESIS OF THE ARTICLE AND REFERENCE PAPER

1. THE PREMISE OF THE 2019 OBSESSION IS: “Yes, there were a few ice-free-Arctic forecasts that didn’t happen but this time around we got the ice free forecast right because we tuned the climate model with the seasonal cycle“. This premise is supported by the claim that the climate models used to predict the ice free condition were pre-tested against the seasonal cycle, and that those climate models that could correctly predict the September minimum sea ice extent in the seasonal cycle were taken to be accurate predictors of long term trends in sea ice extent; these models were then used to make the long term prediction of an ice free September. It is proposed that the long term forecast is validated by the ability of the same climate model to forecast the seasonal September minimum sea ice extent. Therefore the forecast of that validated climate model, that the long awaited ice-free September will occur sometime between 2044 and 2067 {at some point 28 to 48 years from now}, is finally the correct forecast of an ice free Arctic.
2. This argument is flawed in a number of ways. First, the seasonal cycle in Arctic air temperature above the ocean imposes a range of surface air temperatures equivalent to more than 70 years (range = 40 to 130 years) of warming at the current long term September warming rate of 0.026C per year (Figure 1). Also, the regression for the trend contains large uncertainties (Figure 2), so that temperature forecasts contain large uncertainties regardless of the quality of the climate model (Figure 4). Note in Figure 3 that the regression residuals, when expressed as multiples of the regression coefficient, lie in a range of -40 to +25 times the long term trend contained in the regression coefficient (a sketch of this computation appears after this list). Therefore, the seasonal cycle of the year and the year to year long term trend are very different phenomena, such that forecasts for future years cannot be based on seasonal cycle dynamics. Uncertainties in year to year September temperature imply that year to year surface air temperatures are not as predictable as one might presume from the more predictable nature of the seasonal cycle.
3. An added consideration in the frustration of climate science with ice free Arctic predictions is the atmosphere bias of climate science, such that all ice melt events in the Arctic region are assumed to be governed by air temperature and by changes in air temperature without empirical evidence of this relationship. This overarching assumption of climate science derives from a reliance on climate models, as we see in the Thackeray paper presented here. Yet it is not possible to use climate models in a test of theory because climate models are themselves an expression of theory. A lopsided reliance of climate science on climate models diminishes the role of observational data and emphasizes theoretical considerations. This aberration in climate science is particularly severe in the case of their evaluation of sea ice dynamics.
  4. In related posts on this site it is shown that this assumption derived from climate models is not supported by the empirical data. Detrended correlation analysis does not provide the needed evidence in the form of a statistically significant negative correlation between temperature and sea ice extent that is a necessary condition for a causal relationship between surface air temperature and sea ice extent [LINK] [LINK] . Thus, no empirical evidence exists that Arctic sea ice extent is responsive to surface air temperature at an annual time scale and yet this responsiveness is assumed in the attribution of changes in Arctic sea ice extent to changes in surface air temperature.
5. Yet another relevant consideration is that the Arctic is known to be a geologically active region with large flows of geothermal heat from the mantle into the ocean. The continued attribution of sea ice dynamics, whether in extent, area, or volume, to AGW climate change without consideration of known geothermal heat flows likely derives from the atmosphere bias of climate science, such that there is a tendency to explain all observed changes in the Arctic, such as sea ice melt, in terms of AGW climate change and to overlook the extensive geothermal heat sources in the Arctic. Some of the geological features of the Arctic, including the Mid Arctic Rift system and the Jan Mayen Trend, are described in related posts [LINK] [LINK] and in the graphic in Figure 5. A detailed study of the geology of the Arctic is presented in a related post [LINK].
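As an illustration of the Figure 3 computation described in Paragraph#2, the following minimal sketch fits a linear trend and expresses the regression residuals as multiples of the regression coefficient. The temperature series here is synthetic, standing in for the Arctic Ocean September air temperature data, so the numbers it prints are illustrative only.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2019)                       # 1979-2018, as in Figure 1
# Synthetic series: the stated 0.026 C/year trend plus hypothetical noise
temps = 0.026 * (years - years[0]) + rng.normal(0.0, 0.6, years.size)

slope, intercept = np.polyfit(years, temps, 1)      # OLS trend
residuals = temps - (slope * years + intercept)
multiples = residuals / slope                       # residuals in years-of-trend units

print(f"estimated trend = {slope:.4f} C/year")
print(f"residuals span {multiples.min():.0f} to {multiples.max():.0f} "
      f"multiples of the annual trend")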

FIGURE 1: SEASONAL RANGE AS MULTIPLES OF TREND: 1979-2018

FIGURE 2: LINEAR REGRESSION: ARCTIC OCEAN AIR TEMPERATURE

FIGURE 3: REGRESSION RESIDUALS AS MULTIPLES OF TREND

FIGURE 4: TEMPERATURE FORECASTS 2018-2048

FIGURE 5: SOURCES OF GEOTHERMAL HEAT IN THE ARCTIC

THIS POST IS A SUMMARY OF SOME METHODOLOGICAL AND STATISTICAL ISSUES IN ANTHROPOGENIC GLOBAL WARMING AND CLIMATE CHANGE (AGW) PRESENTED IN A NUMBER OF RELATED POSTS ON THIS SITE. 

[LINK TO THE HOME PAGE OF THIS SITE]

ISSUE#1 > THE RESPONSIVENESS OF ATMOSPHERIC COMPOSITION TO FOSSIL FUEL EMISSIONS: The human cause of warming begins with the argument that our use of fossil fuels has caused atmospheric CO2 concentration to rise; conversely, that causal relationship implies that climate action in the form of reducing or eliminating fossil fuel emissions will reduce or halt the rise in atmospheric CO2 and thus attenuate the rate of warming. Therefore, a fundamental and necessary condition for AGW and its implied climate action imperative is that a causal relationship must exist between atmospheric composition and fossil fuel emissions such that (a) the observed changes in atmospheric composition since pre-industrial times can be explained in terms of the fossil fuel emissions of the industrial economy, and (b) the effectiveness of climate action in attenuating and halting the rate of warming by reducing and eliminating fossil fuel emissions can be established. An annual time scale is assumed in climate science, and the relationship is therefore tested at an annual time scale. Some longer time scales are also investigated.

TEST#1 [LINK] > Is atmospheric CO2 concentration responsive to fossil fuel emissions at an annual time scale? Detrended correlation analysis is used to test this relationship. The detrending procedure removes the spurious effect of shared trends on correlation so that only the responsiveness at the specified time scale is measured. Although strong correlations, from ρ=0.742 at a 1-year time scale to ρ=0.931 at a 5-year time scale, are seen in the source data time series, these correlations do not survive into the detrended series, where no statistically significant correlation is found. We conclude that the data do not provide evidence that atmospheric CO2 concentration is responsive to fossil fuel emissions.
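The following is a minimal sketch of the detrended correlation procedure, using synthetic series as stand-ins for the emissions and atmospheric CO2 data; it removes each series' linear trend and correlates the residuals. Because these synthetic series share a trend but have independent fluctuations, the source correlation comes out high while the detrended correlation is near zero, which is the pattern reported above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60
emissions = np.linspace(2.5, 10.0, n) + rng.normal(0, 0.2, n)    # GtC/year, synthetic
co2_growth = np.linspace(0.8, 2.4, n) + rng.normal(0, 0.15, n)   # ppm/year, synthetic

def detrended_corr(x, y):
    """Correlation of the residuals after removing each series' linear trend."""
    t = np.arange(len(x))
    rx = x - np.polyval(np.polyfit(t, x, 1), t)
    ry = y - np.polyval(np.polyfit(t, y, 1), t)
    return stats.pearsonr(rx, ry)

r_source, _ = stats.pearsonr(emissions, co2_growth)
r_detr, p_detr = detrended_corr(emissions, co2_growth)
print(f"source correlation:    r = {r_source:.3f}")
print(f"detrended correlation: r = {r_detr:.3f}, p = {p_detr:.3f}")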

TEST#2 > [LINK] > Can nature’s carbon cycle flows be measured with sufficient precision to detect the presence of fossil fuel emissions? As seen in the IPCC carbon cycle flows presented in the linked document, the estimated mean values of the flows of the carbon cycle, with flow uncertainties not considered, provide an exact mass balance in the presence of fossil fuel emissions, with the so called “Airborne Fraction” computed as 50%, meaning that the mass balance shows that 50% of the CO2 in fossil fuel emissions remains in the atmosphere where it accumulates, changes atmospheric CO2 concentration, and causes anthropogenic global warming (AGW) by way of the GHG effect of rising atmospheric CO2. The issue here is that these flow estimates contain very large uncertainties because they cannot be directly measured but must be inferred. In the related post [LINK], a Monte Carlo simulation is devised to estimate the highest value of the unknown standard deviations in carbon cycle flows at which we can detect the presence of the much smaller CO2 flows in fossil fuel emissions. In the test, an uncertain flow account is considered to be in balance as long as the null hypothesis that the sum of the flows is zero cannot be rejected. The alpha error rate for the test is set to a high value of alpha=0.10 to ensure that any reasonable ability to discriminate between the flow account WITH anthropogenic emissions and the flow account WITHOUT anthropogenic emissions is taken as evidence that the relatively small fossil fuel emissions can be detected in the presence of much larger and uncertain natural flows. In the simulation we assign different levels of uncertainty to the flows for which no uncertainty data are available and test the null hypothesis that the flows balance with anthropogenic emissions (AE) included and again with AE excluded. If the flows balance when AE are included and they don’t balance when AE are excluded, then we conclude that the presence of the AE can be detected at that level of uncertainty. However, if the flows balance both with and without AE, then we conclude that the stochastic flow account is not sensitive to AE at that level of uncertainty, and if the presence of AE cannot be detected, no role for their effect on climate can be deduced from the data at that level of uncertainty in carbon cycle flows. The uncertainty levels tested vary from 1% of mean to 6.5% of mean as shown in the tabulation below. The results show that when nature’s carbon cycle flows contain an uncertainty of 2% of the mean or less, the carbon cycle flow account can detect the presence of fossil fuel emissions; the presence of fossil fuel emissions cannot be detected at higher carbon cycle flow uncertainties. Climate science and the IPCC estimate that uncertainties in carbon cycle flows vary from 6.5% to more than 10%, with the lowest value of 6.5% for photosynthesis. We conclude from this Monte Carlo simulation that, given the uncertainty in our estimates of natural carbon cycle flows, it is not possible to detect the impact of fossil fuel emissions on atmospheric composition.
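The following Monte Carlo sketch illustrates the detection logic described above. The flow values are illustrative placeholders, not the IPCC estimates: four large natural flows that balance, fossil fuel emissions of 10 GtC per year, and sinks that absorb them. The account balances when the emissions are included and is out of balance by 10 GtC per year when they are excluded; the test asks at what uncertainty level that imbalance can still be detected.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# A balanced account (GtC/year, hypothetical): four large natural flows
# that cancel, fossil fuel emissions of +10, and the sinks that absorb
# them (atmospheric accumulation -5, ocean/land uptake -5).
flows = np.array([120.0, -120.0, 90.0, -90.0, 10.0, -5.0, -5.0])
FOSSIL = 4  # index of the fossil fuel emissions flow

def rejection_rate(means, u, trials=20_000, alpha=0.10):
    """Fraction of Monte Carlo trials in which the null hypothesis that
    the flows sum to zero is rejected, with per-flow sd = u * |mean|."""
    sd = u * np.abs(means)
    sums = rng.normal(means, sd, size=(trials, means.size)).sum(axis=1)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return np.mean(np.abs(sums) > z_crit * np.sqrt(np.sum(sd ** 2)))

for u in (0.01, 0.02, 0.065):
    balanced = rejection_rate(flows, u)                        # with emissions
    unbalanced = rejection_rate(np.delete(flows, FOSSIL), u)   # without emissions
    detectable = (unbalanced - balanced) > 0.5                 # clear separation
    print(f"uncertainty {u:.1%}: reject {balanced:.2f} with AE, "
          f"{unbalanced:.2f} without AE -> detectable: {detectable}")

With these placeholder flows the imbalance is detectable at 1% and 2% uncertainty but not at 6.5%, mirroring the threshold behavior described above.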

TEST#3: Monte Carlo simulation of fossil fuel emissions being inserted into carbon cycle flows. Carbon cycle flows for which no uncertainty data are found in the IPCC report are assigned the low photosynthesis value of 6.5% of the flow estimate provided. The results are displayed in the charts below. They demonstrate that it is not possible to detect an impact of fossil fuel emissions on atmospheric CO2 concentration when uncertainties in carbon cycle flows are taken into account. These results show that, within the stated uncertainties of carbon cycle flows, no evidence is found in the data that fossil fuel emissions cause changes in atmospheric CO2 concentration; the uncertainty in carbon cycle flows is too large to detect the assumed effect, as in the so called “airborne fraction”. These results also imply that there is no evidence in the observational data that climate action will have an effect on the observed dynamics of atmospheric composition. [LINK TO MONTE CARLO SIMULATION]

MONTE-1

MONTE-3

MONTE-2

MONTE-4

ISSUE#2: GEOLOGICAL CARBON FLOWS: In estimating the impact of fossil fuel emissions and climate action on changes to atmospheric composition, climate science looks at two sources of these flows – fossil fuel emissions and carbon cycle flows. Natural carbon flows from hydrocarbon seeps, submarine volcanism, mud volcanoes, mantle plumes, and hydrothermal vents to the atmosphere are not taken into account. A survey of geological flows of carbon to the atmosphere from these sources is presented in a related post [LINK TO GEOLOGICAL FLOWS]. We find in that analysis as follows: {We conclude from the information presented in the bibliography and the analysis above that there are significant natural flows of carbon from geological sources such as hydrocarbon seeps, methane hydrates, submarine volcanism, and submarine mud volcanoes, and that these flows make it difficult to interpret changes in atmospheric CO2 exclusively in terms of the use of fossil fuels in the industrial economy. In that context, it should also be noted that the bibliography below shows that oil and gas production lowers the pressure that forces out the natural hydrocarbon seeps. Without oil and gas production, the seepage rate will increase and undermine the apparent advantage to the climate of not producing oil and gas. It is noted that humans began using fossil fuels from seeps and natural outflows; it was only after their utility became obvious that seeps were no longer sufficient, and it was then that humans began to look for the sources of those seeps.} An extensive bibliography on this topic is provided.

SANTA-BARBARA-OIL-SEEPS

ISSUE #3 > THE RESPONSIVENESS OF SURFACE TEMPERATURE TO FOSSIL FUEL EMISSIONS: Anthropogenic global warming (AGW) theory says that fossil fuel emissions cause warming and that their reduction and eventual elimination can be used to attenuate the rate of warming; this option is offered and demanded as the “climate action” plan needed to save the world from the destructive effects forecast for uncontrolled AGW. These relationships imply that a correlation must exist between the rate of emissions and the rate of warming, and climate science has presented just such a correlation. It is called the Transient Climate Response to Cumulative Emissions (TCRE), described more fully in a related post [LINK]. The TCRE provides the structure and mathematics of the proposed climate action in terms of the so called carbon budgets proposed to constrain warming to a given target level. The TCRE derives from the observation by Damon Matthews and others in 2009 that a near perfect proportionality exists between cumulative emissions and cumulative warming.

TEST#1 > Does the TCRE imply that the rate of warming is related to the rate of emissions such that climate action plans of reducing emissions can be used to attenuate the rate of warming? A test for this correlation is presented in a related post [LINK] where it is shown that a time series of the cumulative values of another time series has neither time scale nor degrees of freedom.

The details of the proof of this condition are provided in the post on carbon budgets, where it is also shown that the observed correlation derives not from a responsiveness of warming to emissions but from a fortuitous sign pattern in which emissions are always positive and, in a time of rising temperatures, annual warming is mostly positive [LINK]. There it is shown, for example, that the TCRE correlation appears in random numbers if the same sign pattern is imposed on them and disappears when the sign pattern is also random.

The relevant GIF charts are reproduced below: random numbers in the left frame and their cumulative values in the right frame. The difference between the two pairs of charts is that in the first pair the random numbers are truly random with no sign pattern imposed, meaning that positive and negative values are equally likely, whereas in the second pair the random number generator for both x and y favors positive values 55% to 45%. The analysis of the TCRE in these related posts implies that the observed correlation is illusory and spurious, with no implication for the real world phenomena the data apparently represent. Therefore, the TCRE has no interpretation in terms of a causal relationship between emissions and warming.

UNCON-SOURCE-GIF UNCON-CUM-GIF

CON-SOURCE-GIF CON-CUM-GIF

The only information content of the TCRE is the sign pattern. If the two time series have a common sign bias, either both positive or both negative, the correlation will be positive. If the two time series have opposite sign biases, one positive and the other negative, the correlation will be negative. If the two time series have no sign bias, no correlation will be found. Therefore, no rational interpretation of such a proportionality exists in terms of a causal relationship that can be used in the construction of carbon budgets. The TCRE carbon budgets of climate science are a meaningless exercise with an illusory statistic. A minimal sketch of this sign-pattern experiment follows.
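The sketch below reproduces the logic of the random-number demonstration: cumulative sums of two independent random series are correlated across many repetitions, first with no sign bias and then with a shared 55/45 positive sign bias. The series are pure random numbers with no physical content; the biased case yields a clearly positive mean correlation while the unbiased case averages to zero.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N, REPS = 500, 300

def biased_series(p_positive):
    """Random magnitudes in (0,1) given a positive sign with probability p."""
    signs = rng.choice([1.0, -1.0], size=N, p=[p_positive, 1.0 - p_positive])
    return signs * rng.random(N)

def mean_cumulative_corr(p_positive):
    """Mean correlation between cumulative sums of two INDEPENDENT series
    over many repetitions (single unbiased trials scatter widely around 0)."""
    rs = [stats.pearsonr(np.cumsum(biased_series(p_positive)),
                         np.cumsum(biased_series(p_positive)))[0]
          for _ in range(REPS)]
    return float(np.mean(rs))

print(f"no sign bias (50/50): mean r = {mean_cumulative_corr(0.50):+.2f}")
print(f"shared bias (55/45):  mean r = {mean_cumulative_corr(0.55):+.2f}")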

TEST#2 > The problem with the TCRE correlation is that it has no time scale and no degrees of freedom. Both issues can be resolved by inserting a fixed time scale X into the TCRE computation such that the correlation is rendered statistically valid as TCREX. The second research question is therefore: does the TCREX show that the rate of warming is responsive to the rate of emissions, such that climate action plans of reducing emissions can be used to attenuate the rate of warming? This test is carried out in a related post [LINK] with time scales of ten to thirty years, using both climate model estimations of global mean temperature (RCP) and reconstructions of global mean temperature from the instrumental record (HadCRUT4). The results of detrended correlation analysis are summarized below. They show strong, statistically significant detrended correlations in the theoretical temperatures from climate models, but no statistically significant result in the observational data. The agreement with the theoretical temperature series validates the procedure, and the absence of evidence relating observational data to emissions therefore provides convincing evidence that when the TCRE measure is corrected to restore time scale and degrees of freedom, the “near perfect proportionality” of the TCRE disappears. We conclude from these results that no evidence is found in the observational data that the rate of warming is responsive to the rate of emissions. A sketch of the procedure follows. [DETAILS]

SUMMARY-TABLE
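A minimal sketch of the TCREX idea follows, with synthetic emissions and temperature series standing in for the CDIAC and HadCRUT4 data. For each time scale X it computes X-year emissions totals and X-year warming over a moving window, detrends both, and correlates the residuals; with the independent fluctuations in these synthetic series the detrended correlations come out near zero, mirroring the observational result reported above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_years = 119                                                              # e.g. 1900-2018
emissions = np.linspace(1.0, 10.0, n_years) + rng.normal(0, 0.3, n_years)  # GtC/yr, synthetic
temperature = 0.008 * np.arange(n_years) + rng.normal(0, 0.1, n_years)     # degC, synthetic

def tcrex(x_scale):
    """Detrended correlation between X-year emissions totals and X-year
    warming, computed over a moving window of length x_scale."""
    n = n_years - x_scale
    em = np.array([emissions[i:i + x_scale].sum() for i in range(n)])
    dt = temperature[x_scale:] - temperature[:-x_scale]   # X-year warming
    t = np.arange(n)
    em_d = em - np.polyval(np.polyfit(t, em, 1), t)
    dt_d = dt - np.polyval(np.polyfit(t, dt, 1), t)
    return stats.pearsonr(em_d, dt_d)

for x in (10, 20, 30):
    r, p = tcrex(x)
    print(f"time scale {x:>2} years: detrended r = {r:+.3f}, p = {p:.3f}")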

TEST#3: CARBON BUDGETS: The claimed catastrophic impacts of AGW climate change serve as the needed motivation for climate action. Climate action involves the reduction and eventual elimination of the use of fossil fuels, and thereby of fossil fuel emissions. The theoretical linkage between climate change and climate action is the carbon budget. A climate action plan specifies the maximum warming target over a specified time span. The corresponding carbon budget is the maximum amount of carbon in fossil fuel emissions that can be emitted over that time period to comply with the climate action plan. The statistical issue in this case arises because the carbon budget is based on the flawed TCRE – Transient Climate Response to Cumulative Emissions – discussed above in Issue#3.

TEST#1 > The use of the illusory TCRE correlation in the construction of carbon budgets renders the carbon budget equally spurious and illusory, as described in this related post [LINK]. It shows that the carbon budget computed from the TCRE has no interpretation in the real world because it is based on an illusory correlation created by sign patterns and not on the responsiveness of temperature to emissions. The further consideration is that the absence of time scale and degrees of freedom in the carbon budget renders it a spurious statistic that has no interpretation in terms of the emissions and warming dynamics it apparently represents.

TEST#2 > A further test of the validity of the carbon budget is seen in the Remaining Carbon Budget Puzzle (RCBP) that has created a state of confusion in climate science, as described in a related post [LINK]. In brief, the RCBP issue is that the carbon budget for the full span of the budget period does not equal the sum of the carbon budgets computed for its sub-spans. There is a simple statistical explanation of this apparent oddity in terms of the spuriousness of the correlation and the TCRE regression coefficient. The positive TCRE correlation is a creation of a fortuitous sign pattern such that emissions are always positive and, during a time of warming, annual warming values are mostly positive, as shown in a related post [LINK]. Thus, the TCRE regression coefficient is determined by the fraction of annual warming values that are positive; this fraction is not likely to be the same in different sub-spans, nor the same in any given sub-span as in the full span (see the sketch below). It is this statistical oddity, and not the absence of Earth System climate model variables, that explains the RCBP. And yet, as seen in the literature, climate science reaches out to Earth System Models to explain and then to apparently solve the RCBP, as shown in a related post [LINK]. In summary, the Remaining Carbon Budget issue is a simple statistical issue that has been interpreted in climate science in terms of the AGW climate forcing portfolio needed to implement carbon budgets.
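The sketch below illustrates the sub-span arithmetic with synthetic data: constant positive emissions and annual warming values that are positive in roughly 70% of years. The TCRE slope fitted over the full span differs from the slopes fitted over its two halves, so the budget implied by the full span does not equal the sum of the budgets implied by the halves.

import numpy as np

rng = np.random.default_rng(11)
n = 100
emissions = np.full(n, 10.0)                          # GtC/year, always positive
warming = rng.choice([0.02, -0.01], n, p=[0.7, 0.3])  # mostly positive, degC/year

def tcre_slope(e, w):
    """Regression slope of cumulative warming on cumulative emissions."""
    return np.polyfit(np.cumsum(e), np.cumsum(w), 1)[0]

full = tcre_slope(emissions, warming)
first = tcre_slope(emissions[:50], warming[:50])
second = tcre_slope(emissions[50:], warming[50:])

print(f"TCRE slope, full span:   {full:.6f} degC/GtC")
print(f"TCRE slope, first half:  {first:.6f}")
print(f"TCRE slope, second half: {second:.6f}")
print(f"budget for +0.5 C, full span: {0.5 / full:.0f} GtC; "
      f"sum of half-span budgets: {0.25 / first + 0.25 / second:.0f} GtC")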

ISSUE#4 > WILL CLIMATE ACTION MODERATE THE RATE OF SEA LEVEL RISE? A principal argument for climate action has been the fear of sea level rise that threatens hundreds of millions of people in vulnerable small island states, in low lying deltas such as Bangladesh, and in coastal communities such as Florida [LINK] [LINK] [LINK] [LINK]. It is therefore proposed that climate action in the form of reducing or eliminating fossil fuel emissions must be taken to attenuate the rate of sea level rise [LINK]. The only empirical evidence presented to relate sea level rise to emissions is the paper by Professor Peter Clark of Oregon State University [LINK] [Reducing carbon emissions will limit sea level rise]. The Clark paper shows a strong and statistically significant correlation between cumulative emissions and cumulative sea level rise in the TCRE format and thus suffers from the same limitations as the TCRE: there is no time scale, there are no degrees of freedom, and the correlation derives from a sign pattern in which emissions are always positive and annual sea level rise values are mostly positive.

TEST#1 > The correlation presented by the Peter Clark paper is evaluated in a related post and found to be spurious and illusory because it has neither time scale nor degrees of freedom [LINK] . This spurious correlation contains no information in terms of a causal relationship between emissions and sea level rise. It is necessary to test the correlation with time scale and degrees of freedom restored.

TEST#2 > The Peter Clark correlation is tested in a related post [LINK] with finite time scales inserted. Time scales of 30, 35, 40, 45, and 50 years are used in the test. The correlation and detrended correlation between emissions and sea level rise are shown in the summary of results table below. A statistically significant positive correlation is required to support the causation hypothesis being tested. Although such correlations are seen in the source data, none of them survives into the detrended series, where the correlations are negative. Note that source data correlations between time series are influenced by shared trends and therefore have no interpretation in terms of responsiveness at a finite time scale. We conclude from these results that, although a significant correlation is seen between the cumulative values, no evidence of correlation between emissions and sea level rise is found in the data at finite time scales and with degrees of freedom restored. Thus there is no evidence that climate action in the form of reducing fossil fuel emissions will attenuate sea level rise.

SUMMARY

ISSUE#5 > THE IMPACT OF AGW ON TROPICAL CYCLONES: This issue is presented in a related post [LINK]

ISSUE#6 > THE IMPACT OF AGW ON SEA ICE: This issue is presented in three related posts [LINK] [LINK] [LINK]


ISSUE#7: WILL EMISSION REDUCTION CHANGE THE RATE OF WARMING? [LINK].

Climate science has presented a causal relationship between emissions and warming in terms of the TCRE, where we find a near perfect proportionality between cumulative emissions and surface temperature. The TCRE is used in climate science to construct “CARBON BUDGETS” to relate the climate action needed to any given warming target.

However, as shown in related posts [LINK]  [LINK] , the TCRE has no interpretation in terms of the data because of a fatal statistical error. This work addresses these shortcomings of the TCRE by defining finite time scales shorter than the full span. Here we use data for fossil fuel emissions from the CDIAC and the theoretical temperatures from these emissions in the CMIP5 forcings found in the RCP8.5 as well as the HadCRUT4 temperature reconstructions. We then compute the corresponding correlations between emissions and warming as in the TCRE but with the changes needed to retain degrees of freedom.

The results are summarized in Figure 6 and Figure 7 below where we find that the source data correlation rises as the time scale is increased. The theoretical model predictions (RCP) show stronger detrended correlations (0.3 to 0.56) than the HAD observational data (0.1 to 0.2) because a larger portion of the model prediction RCP source correlation survives into the detrended series, indicating a stronger relationship between emissions and warming in climate models than in observational data. We find that the four time scales greater than ten years (15, 20, 25, and 30 years) show statistically significant detrended correlations for the climate model series RCP8.5. No statistically significant detrended correlation is found in the observational data HadCRUT4. 

We conclude from these results that though the causal relationship between emissions and warming is found in the RCP8.5 series generated by climate models, it is not found in the observational data, and that therefore no empirical evidence is found to support the rationale for costly climate action that assumes a causal relationship between the rate of emissions and the rate of warming.

THE ESSENTIAL FINDING HERE IS THAT CLIMATE ACTION WORKS IN CLIMATE MODELS BUT NOT IN THE REAL WORLD. THE ANOMALIES IN CARBON BUDGETS THAT CLIMATE SCIENTISTS ARE STRUGGLING WITH MAY BE UNDERSTOOD IN THIS CONTEXT. [LINK]


CONCLUSION: The analyses presented above imply that the assumption in climate science that relates changes in atmospheric composition to fossil fuel emissions, so as to explain the rate of warming and the effectiveness of the proposed climate action, has no empirical support. The further proposition by climate science that relates warming directly to cumulative emissions is statistically flawed and also inconsistent with the original theory that warming is explained by the accumulation in the atmosphere of CO2 from fossil fuel emissions and its GHG warming effect. This means that the theory of global warming as proposed by climate science contains fatal statistical flaws. When these statistical errors are corrected, no evidence of human caused global warming by way of fossil fuel emissions remains, and no evidence is found that the proposed climate action of reducing fossil fuel emissions will change the rate of warming.

 

 


 

 


THIS POST EXAMINES THE CLIMATE SCIENCE POSITION THAT THE USE OF FOSSIL FUELS HAS INCREASED THE DESTRUCTION CAUSED BY TROPICAL CYCLONES. 

[LINK TO THE HOME PAGE OF THIS SITE]

1. The claimed causal connection between AGW climate change and the destructiveness of tropical cyclones did not emerge from the science, nor from an extensive study of historical data, but from an unlikely event in 2005 when Hurricane Katrina damaged a levee system that had not been properly maintained. The damage to the levee caused a catastrophic flooding of New Orleans that became the signature issue in the destructiveness of Hurricane Katrina, as seen in the 2005-2009 reports in paragraph#2 below. The role of levee management in the destruction was downplayed and forgotten, and the entire destruction was thus attributed to fossil fueled climate change, with the implied climate action lesson of Katrina being that such destruction can and must be attenuated by reducing and eventually eliminating the use of fossil fuels.
2. Hurricane Katrina Historical Notes: (i) HURRICANE KATRINA IS THE HARBINGER OF WORSE YET TO COME: The IPCC claimed that it had the scientific evidence to prove that our use of fossil fuels caused Hurricane Katrina and forecast with great certainty that there was more to come in the 2006 hurricane season, but the 2006 hurricane season turned out to be milder than normal. The IPCC blamed the dissipation of El Nino for the mild hurricane season in 2006 and issued a new warning that 2007 would be the hottest year on record and would bring a killer hurricane season worse than 2005, but the 2007 forecast also failed. The IPCC’s dream hurricane season finally arrived in 2008, unannounced and unexpected, with strong hurricanes Gustav and Hanna expected to be followed by Ike and a dozen others before the season is through. More info: [LINK]. (ii) The IPCC’s claim that Hurricane Katrina was caused by man-made global warming has been thoroughly discredited and their forecasts for more severe hurricane seasons in 2006 and 2007 have been proven wrong. They are merchants of fear and their method is the dissemination of convenient lies. More info: [LINK]. (iii) Climate science shows that AGW climate change is increasing the frequency and severity of extreme weather, as for example Hurricane Katrina. Further research shows a causal link between AGW and increasing wave intensity that provides direct evidence of the extreme weather impacts of AGW. More info: [LINK]. (iv) 2009: In 2005 climate science declared that Hurricane Katrina was the harbinger of killer super storms yet to come, created by fossil fueled global warming, but after no further cyclone activity in the North Atlantic Basin in the next three years, new evidence for the destructiveness of AGW extreme weather was found in Cyclone Nargis in the North Indian Basin. Though not unusually strong, Nargis did create a freak storm surge in rising tides that swept up the Irrawaddy River in Burma and claimed a horrific death toll. Nargis thus became an AGW issue, and climate scientists changed their extreme weather focus from the North Atlantic Basin to the North Indian Basin, presenting Cyclone Nargis as a creation of climate change caused by fossil fuel emissions and as the harbinger of “destruction on a global scale” by human caused global warming. More [LINK].
3. The Climate Science of Hurricane Katrina: It was thus that the climate science of the destructiveness of hurricanes did not predict Katrina but was in fact constructed from Katrina, based only on the destructiveness of the flood caused by the break in the levee system and by discounting the greater climatology data showing that it was Wilma, not Katrina, that was by far the stronger hurricane of the 2005 season; but Katrina, not Wilma, was clearly the better tool to sell AGW’s fear based climate action agenda. This relationship between climate science and real science is seen more clearly in the foundational and keynote paper by noted MIT climate scientist Professor Kerry Emanuel, reviewed in detail in a related post [LINK]. There it is shown that the need of climate science to establish the fear of climate change in terms of hurricanes made it possible for Professor Emanuel to abandon all pretension to scientific principles and statistical rigor and to publish in a peer reviewed journal a circular reasoning paper that begins with the assumption that AGW increases the destructiveness of hurricanes and then proves that AGW increases the destructiveness of hurricanes [LINK].
4. Yet another issue is that the single minded focus on the North Atlantic Basin (NA) for the detection of the impact of climate change on tropical cyclone destructiveness during periods when NA is unusually active is a form of circular reasoning, particularly so because NA by itself is not a globally important source of cyclone energy. An additional consideration is the finding by Knutson (2010) and others that the variance of total cyclone energy for a single basin is too large to support meaningful conclusions about trends, and the recommendation that only the aggregate of all six basins can contain useful trend information.
5. Knutson, Thomas R., et al. “Tropical cyclones and climate change.” Nature Geoscience 3.3 (2010): 157-163. In the paper, Tom Knutson spells out exactly what climate science claims in terms of the impact of AGW climate change on tropical cyclones, with climate model predictions of the effect of rising SST. His main points are as follows: (1) Globally averaged intensity of tropical cyclones will rise as AGW increases SST; models predict a globally averaged intensity increase of 2% to 11% by 2100. (2) Models predict falling globally averaged frequency of tropical cyclones, with frequency decreasing 6% to 34% by 2100. (3) The globally averaged frequency of the “most intense tropical cyclones” should increase as a result of AGW; the intensity of tropical cyclones is measured as Accumulated Cyclone Energy (ACE). (4) Models predict increased precipitation within a 100 km radius of the storm center, with a rise of 20% projected for the year 2100. (5) Extremely high variance in tropical cyclone data at an annual time scale suggests a longer, perhaps decadal, time scale, which in turn greatly reduces statistical power. (6) Model projections for individual cyclone basins show large differences and conflicting results; thus, no testable implication can be derived for studies of individual basins.
6. There are six tropical cyclone basins in the world where tropical cyclones form. These are the West Pacific (WP), South Indian Ocean (SI), East Pacific (EP), North Atlantic (NA), North Indian Ocean (NI), and the South Pacific (SP). The most intense and active basin is the West Pacific Basin, where tropical cyclones are called Typhoons. The North Atlantic Basin, where tropical cyclones are called Hurricanes, is a lesser basin and not a significant source of total global cyclone energy. Of the other four basins, the South Indian Ocean basin is the most active. Together, WP and SI generate more than 60% of the total global cyclone energy, with the East Pacific and the North Atlantic together coming in second with about 25% of the world’s cyclone energy. The North Atlantic generates about 14% of the world’s cyclone energy. The details of this comparison are tabulated below.
  7. BASIN-SUMMARY
  8. Since AGW climate change is proposed as a global phenomenon, its effect on tropical cyclones must be studied and understood only in terms of global measures of tropical cyclone activity and not in terms of convenient localized phenomena that fit the narrative or that might derive from a USA bias of American researchers and the American news media. Here we provide an integration and summary of three related posts where the global impact of AGW on tropical cyclone activity is measured as a global aggregate of all six cyclone basins.
9. Study#1 > Trends [LINK]. The trend study presents data for total cyclone energy for all six basins for the 70-year study period 1945 to 2014. The object variable is the Accumulated Cyclone Energy (ACE) used as a measure of total cyclone energy. Knutson (2010) and others have suggested that year to year variance in cyclone energy is too large and random to support meaningful interpretation and have recommended a decadal time scale for the study of tropical cyclone trends. Accordingly, the total global ACE for all six cyclone basins is computed for each of the seven decades in the 70-year study period. Trend analysis is carried out by comparing each decade against the other six (a sketch of this decadal comparison appears after this list). The results are summarized in the table presented below. They show that only two statistically significant differences are found. Specifically, we find that Decade#5 (1985-1994) and Decade#6 (1995-2004) show higher total global cyclone energy than Decade#1 (1945-1954). No other statistically significant difference is found among the seven decades studied.
10. It is tempting here to conclude that the higher global cyclone energy in the two recent decades from 1985 to 2004 than in the decade 1945-1954 can and should be attributed to AGW climate change, but there are other well understood considerations that explain this difference. It is well established and generally accepted in the tropical cyclone research community that the early decade in this study, 1945-1954, suffered from a measurement bias such that not all tropical cyclones were detected, and of those that were, not all were adequately measured. In other words, the early data are incomplete, and the incompleteness of the data provides a stronger and more rational explanation of the observed statistically significant differences in total cyclone energy. We conclude from these results that the data do not show an impact of AGW climate change in the form of increasing the destructiveness of tropical cyclones.
11. Study #2 > SST: [LINK]. Sea surface temperature (SST) is the link that connects climate change with tropical cyclone activity, with the proposition that higher SST provides more energy for the tropical cyclones that form on the basis of high SST. Cyclone theory tells us that cyclone formation and intensification are related to SST (Vecchi, 2007) (Knutson, 2010). Testable implications of the theory for empirical research are derived from climate model simulations (Knutson, 2010). Knutson’s work also suggests that the high variance in tropical cyclone activity at an annual time scale or for any single cyclone basin means that data analysis must be carried out on a global basis for all six tropical cyclone basins and at time scales longer than annual. Detrended correlation analysis for total cyclone energy and SST is carried out at a decadal time scale for 1945-2014. The results are tabulated in Paragraph#12. They show that the high correlation seen between total global cyclone energy (ACE) and global sea surface temperature (SST) derives from a rising trend in both time series and not from a responsiveness of ACE to SST at a decadal time scale.
  12. We conclude from the results presented in Paragraph#8 to Paragraph#12 that no evidence is found for the usual assumption in climate science that AGW climate change is intensifying tropical cyclone activity by way of SST.
13. Study#3 > Pre-Industrial: [LINK]. The fundamental theoretical basis for the theory of AGW climate change is a stark difference in climate, as assumed in climate science, between “pre-industrial times” and the “era of the industrial economy”. A testable implication of the claimed impact of AGW climate change on tropical cyclones in terms of this dichotomy is that a comparison of the two eras should show a stark difference in tropical cyclone activity, in the form of an absence of intense and destructive tropical cyclones in the pre-industrial era.
14. The Treasure Coast Hurricanes of 1715 & 1733, The Dreadful Hurricane of 1667, The Calcutta Cyclone of 1737, The Great Hurricane of 1780, The Great September Gale of 1815, The Coringa Cyclone of 1839, The Last Island Hurricane of 1856, and The San Diego Hurricane of 1858 are described and presented as tropical cyclones with intensity and destructiveness comparable to the high profile hurricanes cited by climate science as evidence of the impact of AGW climate change. We conclude from the comparison that it does not provide convincing evidence that the destructive hurricanes cited by climate science as a creation of AGW are unique to the industrial economy and could not have occurred in pre-industrial times. It is also noted that the strongest and most destructive tropical cyclone of the post industrial era was the monster Bhola Cyclone [LINK] [LINK] that killed half a million people in Bangladesh. It occurred way back in 1970, right in the middle of the 1970s cooling period [LINK] that had sparked fears of a return to Little Ice Age conditions [LINK].
15. CONCLUSION: The data and their interpretation presented in these posts reveal serious weaknesses in the claim by climate science that the industrial economy has caused greater intensity and destructiveness of tropical cyclones by way of global warming and rising sea surface temperature. The Knutson 2010 paper listed in the bibliography below, and summarized in Paragraph#5 above, provides the basis for the study of tropical cyclones in the global warming context.
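The following sketch illustrates the decadal comparison of Study#1 in Paragraph#9. The annual six-basin ACE totals here are synthetic, with the first decade set lower to mimic the early-period undercount bias discussed in Paragraph#10; Welch's t-test is applied to each pair of decades.

import numpy as np
from scipy import stats
from itertools import combinations

rng = np.random.default_rng(5)

# Synthetic annual six-basin ACE totals, 1945-2014, arbitrary units;
# the first decade is shifted down to mimic the early undercount bias.
ace = rng.normal(700.0, 120.0, 70)
ace[:10] -= 150.0

decades = ace.reshape(7, 10)                # seven decades x ten annual values
for i, j in combinations(range(7), 2):
    t, p = stats.ttest_ind(decades[i], decades[j], equal_var=False)
    if p < 0.05:
        print(f"decade {i + 1} vs decade {j + 1}: t = {t:+.2f}, p = {p:.4f}")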

1715cyclone

[LINK TO THE HOME PAGE OF THIS SITE]

THE RELEVANT LITERATURE

1. American Meteorological Society. (2014). State of the climate in 2013. Bulletin of the American Meteorological Society, V. 95, No. 7, July 2014.
  2. Balaguru, K. (2014). Increase in the intensity of post monsoon Bay of Bengal tropical cyclones. Geophysical Research Letters, 3594-3601.
  3. Bister, M. (1998). Dissipative heating and hurricane intensity. Meteorology and Atmospheric Physics, 52: 233-240.
  4. Chan, J. (2005). Interannual and interdecadal variations of tropical cyclone activity over the western North Pacific. Meteorology and Atmospheric Physics, 89: 143-152.
5. Chan, J. (2006). Comments on “changes in tropical cyclone number, duration, and intensity in a warming environment”. Science, 311: 1731b.
  6. Dodla, V. (2007). GIS based analysis of climate change impacts on tropical cyclones over Bay of Bengal. Jackson, MS, 39217, USA: Trent Lott Geospatial and Visualization Research Center, Jackson State University.
7. Draper, N. & Smith, H. (1981). Applied regression analysis. NY: Wiley.
  8. Elsner, J. (2008). The increasing intensity of the strongest tropical cyclones. Nature, 455, 92-95.
  9. Emanuel, K. (1987). The dependence of hurricane intensity on climate. Nature, 326: 483-485.
10. Emanuel, K. (1988). The maximum intensity of hurricanes. Journal of Atmospheric Sciences, 45: 1143-1155.
  11. Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436: 686-688.
  12. Eric, K. (2012). Interannual variations of tropical cyclone activity over the North Indian Ocean. International Journal of Climatology, Volume 32, Issue 6, pages 819–830.
  13. Frank, W. (2007). The interannual variability of tropical cyclones. Monthly Weather Review, 135: 3587-3598.
  14. Girishkumar, M. (2012). The influences of ENSO on tropical cyclone activity in the Bay of Bengal during October-December. Journal of Geophysical Research, V.117, C02033, doi:10.1029/2011JC007417.
  15. Gray, W. (1967). Global view of the origins of tropical disturbances and storms. Fort Collins, CO: Technical Paper #114, Dept of Atmospheric Sciences, Colorado State University.
  16. Gray, W. (1979). Hurricanes: their formation, structure, and likely role in the tropical circulation. In D. Shaw, Meteorology over tropical oceans. Bracknell: Royal Meteorological Society.
  17. Held, I. (2011). The response of tropical cyclone statistics to an increase in CO2 … Journal of Climate, 24: 5353-5364.
  18. Hodges, K. (2007). How may tropical cyclones change in a warmer climate. Tellus A: Dynamic Meteorology and Oceanography, 59(4): pp. 539-561.
  19. Holland, G. (1997). The maximum potential intensity of tropical cyclones. Journal of Atmospheric Sciences, 54: 2519-2541.
20. Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6(2): 65–70.
  21. Hurricane Science. (2010). 1970 The great Bhola cyclone. Retrieved 2015, from hurricanescience.org: http://www.hurricanescience.org/history/storms/1970s/greatbhola/
  22. IPCC. (2007). Climate change 2007. Retrieved 2015, from ipcc.ch: https://www.ipcc.ch/pdf/assessment-report/ar4/wg2/ar4_wg2_full_report.pdf
  23. Islam, T. (2008). Climatology of landfalling tropical cyclones in Bangladesh 1877–2003. Natural Hazards, 48(1), 115–135.
  24. JAXA. (2015). Typhoon Search. Retrieved 2015, from The Japan Aerospace Exploration Agency : http://sharaku.eorc.jaxa.jp/cgi-bin/typ_db/typ_track.cgi?lang=e&area=IO
  25. JMA. (2005). Tropical cyclone basins. Retrieved 2015, from Japan Meteorological Agency: http://ds.data.jma.go.jp/gmd/jra/atlas/eng/indexe_time3.htm
  26. Johnson, V. (2013). Revised standards for statistical evidence. Proceedings of the National Academy of Sciences, http://www.pnas.org/content/110/48/19313.full.
  27. Kikuchi, K. (2010). Formation of tropical cyclones in the northern Indian Ocean. Journal of the Meteorological Society of Japan, Vol. 88, No. 3, pp. 475–496.
  28. Knutson, T. (2010). Tropical cyclones and climate change. Nature Geoscience, 3.3 (2010): 157-163.
  30. Klotzbach, P. (2006). Trends in global tropical cyclone activity over the past twenty years 1986-2005. Geophysical research letters, 33: L10805.
  31. Knapp, K. (2010). The International Best Track Archive for Climate Stewardship (IBTrACS) . Bulletin of the American Meteorological Society, 91, 363–376.
  32. Kossin, J. (2013). Trend analysis with a new global record of cyclone intensity. Journal of Climate, 26: 9960-8876.
  33. Kozar, M. (2013). Long term variations of North American tropical cyclone activity … Journal of Geophysical Research, 118: 13383-13392.
  34. Kumar, R. (2013). A brief history of Indian cyclones. Retrieved 2015, from dnaindia.com: http://www.dnaindia.com/india/report-a-brief-history-of-indian-cyclones-1902774
  35. Landsea, C. (2007). Counting Atlantic tropical cyclones back to 1900. EOS Transactions of the American Geophysical Union, 88:18.197-208.
  36. Li, T. (2003). Satellite data analysis and numerical simulation of tropical cyclones. Geophysical Research Letters, V. 30 #21 2122.
  37. Li, T. (2010). Global warming shifts Pacific tropical cyclone location. Geophysical Research Letters, 37: 1-5.
  38. Lin, H. (2015). Recent decrease in typhoon destructive potential and global warming implications. Nature Communications, DOI: 10.1038/ncomms8182.
39. Lin, Y. (2015). Tropical cyclone rainfall area controlled by relative sea surface temperature. Nature Communications, DOI: 10.1038/ncomms7591.
  40. Mann, M. (2007). Atlantic tropical cyclones revisited. EOS Transactions American Geophysical Union, 88:36:349-350.
  41. Mann, M. (2007). Evidence of a modest undercount bias in early historical Atlantic tropical cyclone counts. Geophysical Research Letters, 34: L22707.
    McBride, J. (1995). Tropical cyclone formation. In W. Frank, A global view of tropical cyclones (pp. 63-100). Geneva: World Meteorological Organization.
  42. Munshi, J. (2015). Global cyclone paper data archive. Retrieved 2015, from Dropbox: [LINK]
43. Murakami, H. (2014). Contributing factors to the recent high level of Accumulated Cyclone Energy (ACE) and Power Dissipation Index (PDI) in the North Atlantic. Journal of Climate, v.27, n.8.
  44. Neetu, S. (2012). Influence of upper ocean stratification on tropical cyclone induced surface cooling in the Bay of Bengal. Journal of Geophysical Research, V117 C12020.
  45. NHC. (2015). National Hurricane Center. Retrieved 2015, from NOAA: http://www.nhc.noaa.gov/
  46. NOAA. (2015). La Nina. Retrieved 2015, from NOAA: http://www.publicaffairs.noaa.gov/lanina.html
  47. NOAA. (2015). NOAA. Retrieved 2015, from NOAA: http://www.noaa.gov/
  48. NOAA/NCDC. (2015). IBTRACS. Retrieved 2015, from NCDC: https://www.ncdc.noaa.gov/ibtracs/index.php?name=ibtracs-data
  49. Royer, J. (1998). A GCM study of the impact of greenhouse gas increase on the frequency of tropical cyclones. Climate Change, 38: 307-343.
  50. Scoccimarro, E. (2014). Intense precipitation events associated with landfalling tropical cyclones in response to a warmer climate and increased CO2. Journal of Climate, 27: 4642-4654.
  51. Sengupta, D. (2008). Cyclone-induced mixing does not cool SST in the post-monsoon north Bay of Bengal. Atmospheric Science Letters, 9(1), 1–6.
  52. Sharkov, E. (2012). Global tropical cyclogenesis. Berlin: Springer-Verlag.
    Singh, O. P. (2001). Has the frequency of intense tropical cyclones increased in the north Indian Ocean? Current Science, 80(4), 575–580.
  53. Sriver, R. (2006). Low frequency variability in globally integrated tropical cyclone power dissipation. Geophysical Research Letters, 33: L11705.
  54. Stormfax. (2015). El-Nino. Retrieved 2015, from el-nin.com: http://el-nino.com/
  55. Walsh, K. (2014). Hurricanes and climate. Bulletin of the American Meteorological Society, DOI: 10.1175/BAMS-D-13-00242.1.
  56. Webster, P. (2005). Changes in tropical cyclone number, duration, and intensity in a warming environment. Science, 309: 1844-1846.
  57. Zhao, H. (2011). Interannual Changes of Tropical Cyclone Intensity in the Western North Pacific . Journal of the Meteorological Society of Japan, Vol. 89, No. 3, pp. 243–253, 2011 .
  58. Zhao, M. (2009). Simulations of global hurricane climatology, interannual variability, and reponse to global warming. Journal of Climate, 22: 6653-6678.
  59. Zhao, M. (2012). GCM simulations of hurricane frequency response to sea surface temperature anomalies. Journal of Climate, 25: 2995-3009.

 

THIS POST IS A CRITICAL REVIEW OF PELLEGRINI ET AL 2018 WHERE THE AUTHORS REPORT AN IMPACT OF FOREST FIRES THAT HAS CLIMATE CHANGE IMPLICATIONS IN TERMS OF THE ABILITY OF SOILS TO SEQUESTER CARBON.

1. The lead author of the paper (photo above) is Adam Pellegrini PhD, Earth System Science Department, Stanford University, where he is a NOAA Climate and Global Change Postdoc Fellow. The paper (item#14 in the “Forest Fire Effects on Topsoil Chemistry” bibliography below) reports findings on the effect of forest fires on soil chemistry. It is an extensive study across 48 globally distributed sites and three different kinds of vegetation: grasslands, broadleaf forests, and needleleaf forests. The authors studied the effects of forest fires on the chemistry of topsoil, the uppermost loose layer of the soil, about six inches in depth, normally rich in life and organic matter. Unlike the other studies of forest fires listed below, where effects of prescribed fires and wildfires are reported, the Pellegrini 2018 study is experimental: forest fires in selected regions and forest types were intentionally started by the researchers at different frequencies in order to study the long term effects over a period of 65 years.
2. They found that frequent fires in grasslands and broadleaf forests caused significant declines in carbon and nitrogen in topsoil. No significant effect was found for frequent fires in needleleaf forests or for infrequent fires in any of the three vegetation types. The findings are consistent with field data and with a dynamic computer model of global vegetation. The findings are thus taken as a validation of the dynamic global vegetation model (DGVM).
3. Further study with the DGVM predicts that the long-term losses of soil nitrogen from more frequent burning may lower the ability of the topsoil to perform its assumed carbon sequestration function. This function is relevant to the carbon cycle and climate change because healthy topsoil sequesters 20% of the carbon released by wildfire activity in forests and grasslands.
4. The relevance of this finding to the climate change issue is that if wildfire frequency is increased by climate change, the expected impact on the topsoil’s ability to sequester carbon may act as a feedback loop: increased atmospheric CO2 concentration accelerates climate change severity, which in turn raises wildfire frequency further.
5. The assumed causal connection, that AGW climate change increases wildfire frequency, is derived from the various works of Leroy Westerling, Professor of Climatology at the University of California at Merced, published from 2006 to 2011, and from some later works by other authors. The references are listed in the “climate change wildfire” bibliography below. These references show the following:
  6. In certain specific regions (e.g., California), but not in others, wildfires have increased since the mid-1980s while, at the same time, AGW climate change was causing increased warmth, desiccation, and wind speed that could enhance wildfire conditions. These relationships are taken as evidence that AGW climate change did in fact cause an increase in wildfires. The weaknesses in this argument are many, as listed below.
  7. (i) Evidence of the effect of global warming on wildfire frequency or severity is not established globally; rather, specific regions where rising devastation by wildfires is known to have occurred are selected for the evaluation. (ii) That variable y is rising while at the same time variable x is also rising establishes neither correlation nor causation, even when x and y could have a causal relationship; yet this is the sole argument presented for the attribution of wildfire severity and/or frequency to AGW, other than the rationalization that AGW is expected to cause increased warmth, desiccation, and wind speed. (iii) Other factors that are also concomitant are not considered, such as the changes in California logging regulations that were made around the time the Spotted Owl was declared an endangered species threatened by logging. Logging in California’s wilderness was banned and, at the same time, prescribed forest management fires were banned or severely curtailed. These management changes also occurred in the late 1980s and early 1990s, but they have been removed from consideration to make way for a single-minded focus on a predetermined causal relationship between AGW climate change and wildfires. Even more egregious: if the wildfire devastation in California is indeed related to the failure of forest management by way of inadequate prescribed fires, the Pellegrini 2018 implication that prescribed fires are bad because of lost carbon sequestration is the exact opposite of the forest management change needed in California. (iv) Computer modeling of the impact of AGW climate change on wildfires will of course show that relationship, because the relationship has been programmed into the model. Such results serve only to show that the model works the way it is supposed to work; they cannot be presented as evidence that the observed increase in California wildfire devastation since the 1990s must have been caused by AGW. A computer model of an expected theoretical relationship is an expression of the theory itself, and it cannot also serve as the data against which the theory is tested.
  8. The works listed in the “AGW and Wildfire Frequency” bibliography below, particularly those by Professor Westerling, are biased in this way. Results of modeling studies and climate theories of the impact of AGW climate change on wildfires have created a sense that the truth of the causal relationship is a given, and that the observational evidence plays the minor role of providing the kind of data that are expected, a required formality for a causation the researchers have fully accepted a priori. In other words, that AGW climate change increases wildfire devastation is treated as the null hypothesis. However, an empirical test of theory must be carried out in exactly the opposite way: the null hypothesis must be the absence of the causal relationship, and sufficient and convincing unbiased evidence must be presented before that null hypothesis can be rejected.
  9. The finding in {Pellegrini, Adam FA, et al. “Fire frequency drives decadal changes in soil carbon and nitrogen and ecosystem productivity.” Nature 553.7687 (2018)} that climate change causes a loss of soil carbon sequestration which in turn exacerbates climate change contains two significant flaws. The first, as stated above, is that insufficient evidence is provided for the claim that AGW climate change increases wildfire devastation. The second flaw in the Pellegrini 2018 paper is the interpretation of carbon sequestration loss in soils in terms of AGW climate change. AGW is a theory that the carbon in fossil fuels is extraneous and unknown to the current carbon cycle and climate system, such that this system of nature suffered an unnatural perturbation when humans dug up ancient carbon in fossil fuels and injected that extraneous carbon (carbon that does not belong in the current account of the carbon cycle) into the carbon cycle and climate system.
  10. A related issue is the importance of the Industrial Revolution, which divides climate history into the pre-industrial era (the reference climate relative to which AGW is measured as the impact of the industrial economy) and the post-industrial era, in which fossil fuel emissions of the industrial economy are said to have created a new, unprecedented, and artificial climate regime. AGW climate change must be understood only in this context. Thus, the impact of wildfires in general is not at issue, because there were wildfires in pre-industrial times. Only those wildfire effects that would not have occurred without the industrial economy can be attributed to fossil fuels and thus be related to AGW and used in climate change arguments.
  11. The argument made or implied by the authors is that the loss of carbon sequestration capacity of topsoil due to forest fires, whether prescribed control burns or wildfires, will serve to increase atmospheric CO2 concentration and thereby act as an AGW feedback loop: the more warming, the more forest fires; the more forest fires, the more soil sequestration capacity lost; and therefore even more warming, and so on. The statistical significance of this relationship must first be established in the context of the very high uncertainties in carbon cycle flows described in a related post [LINK]. The interpretation of this change must be made in the context of the AGW theoretical framework, in which AGW warming is the result of the perturbation of the carbon cycle with external carbon dug up from under the ground where it had been sequestered for millions of years.
  12. The related post linked above [LINK] shows extremely high uncertainties, as expected, since carbon cycle flows are not directly measurable but must be inferred. For example, the carbon cycle flow for land use change (surface to atmosphere) is listed as mean = 1.1, SD = 0.8, implying a 90% confidence interval of -0.216 to 2.416 GT (IPCC AR5 2018); this computation is sketched in the code example following this list. The relevance of the proposed forest fire effect must be shown in the context of these large uncertainties in unmeasurable carbon cycle flows.
  13. An analysis presented in related posts on this site shows that, given the large uncertainties in carbon cycle flows, fossil fuel emissions are statistically not detectable [LINK] [LINK]. Carbon cycle flows are not directly measurable but must be inferred. Given these uncertainties, a global estimate of the carbon sequestration lost due to forest fires, together with its uncertainty, must be estimated and shown to be detectable and relevant in the context of the carbon cycle and its extreme uncertainties. Until this additional information is made available, the findings of {Pellegrini, Adam FA, et al. “Fire frequency drives decadal changes in soil carbon and nitrogen and ecosystem productivity.” Nature 553.7687 (2018)}, though interesting, cannot be assumed to have practical implications for AGW climate change and the corresponding propositions for climate action.
  14. In summary, what we see in the Spotted Owl case is that an excess of environmentalist zeal can harm the environment. That pattern now emerges in climate change environmentalism.
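To make the detectability argument in items 12 and 13 concrete, here is a minimal Python sketch that reproduces the 90% confidence interval quoted in item 12 from the stated mean and standard deviation. It assumes the flux uncertainty is normally distributed (which is what the quoted interval implies); the variable names and the use of scipy are illustrative choices, not part of any source paper.

```python
# Minimal sketch: reproduce the 90% confidence interval quoted in item 12
# for the land-use-change carbon flux (mean = 1.1, SD = 0.8, in GT per year),
# assuming a normal distribution for the uncertainty of the inferred flow.
from scipy.stats import norm

mean, sd = 1.1, 0.8                 # land use change flux, surface to atmosphere
z90 = norm.ppf(0.95)                # two-sided 90% interval leaves 5% in each tail

lower, upper = mean - z90 * sd, mean + z90 * sd
print(f"90% CI: {lower:.3f} to {upper:.3f} GT")   # -0.216 to 2.416, as quoted

# The interval straddles zero, so the inferred flow is not statistically
# distinguishable from zero at the 90% level. By the argument of item 13,
# any fire-driven loss of soil carbon sequestration must be shown to be
# detectable against uncertainties of this magnitude.
```

The same test applies to any proposed carbon cycle flow: a flow whose confidence interval straddles zero cannot be declared detectable at that confidence level.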

 

 

This post was motivated by a related post on the WUWT site [LINK] 

 

 

AGW AND WILDFIRE FREQUENCY:  BIBLIOGRAPHY

Featured Author: Anthony Leroy Westerling, Professor of Climatology, UC Merced


  1. Fried, Jeremy S., Margaret S. Torn, and Evan Mills. “The impact of climate change on wildfire severity: a regional forecast for northern California.” Climatic change 64.1-2 (2004): 169-191.  We estimated the impact of climatic change on wildland fire and suppression effectiveness in northern California by linking general circulation model (GCM) output to local weather and fire records and projecting fire outcomes with an initial-attack suppression model. The warmer and windier conditions corresponding to a 2 × CO2 climate scenario produced fires that burned more intensely and spread faster in most locations. Despite enhancement of fire suppression efforts, the number of escaped fires (those exceeding initial containment limits) increased 51% in the south San Francisco Bay area, 125% in the Sierra Nevada, and did not change on the north coast. Changes in area burned by contained fires were 41%, 41% and –8%, respectively. When interpolated to most of northern California’s wildlands, these results translate to an average annual increase of 114 escapes (a doubling of the current frequency) and an additional 5,000 hectares (a 50% increase) burned by contained fires. On average, the fire return intervals in grass and brush vegetation types were cut in half. The estimates reported represent a minimum expected change, or best-case forecast. In addition to the increased suppression costs and economic damages, changes in fire severity of this magnitude would have widespread impacts on vegetation distribution, forest condition, and carbon storage, and greatly increase the risk to property, natural resources and human life. [FULL TEXT PDF]
  2. Westerling, Anthony L., et al. “Warming and earlier spring increase western US forest wildfire activity.” Science 313.5789 (2006): 940-943.  Western United States forest wildfire activity is widely thought to have increased in recent decades, yet neither the extent of recent changes nor the degree to which climate may be driving regional changes in wildfire has been systematically documented. Much of the public and scientific discussion of changes in western United States wildfire has focused instead on the effects of 19th- and 20th-century land-use history. We compiled a comprehensive database of large wildfires in western United States forests since 1970 and compared it with hydroclimatic and land-surface data. Here, we show that large wildfire activity increased suddenly and markedly in the mid-1980s, with higher large-wildfire frequency, longer wildfire duration, and longer wildfire seasons. The greatest increases occurred in mid-elevation, Northern Rockies forests, where land-use histories have relatively little effect on fire risks and are strongly associated with increased spring and summer temperatures and an earlier spring snowmelt. In the Conclusions section of the paper the authors write: “Robust statistical associations between wildfire and hydroclimate in western forests indicate that increased wildfire activity over recent decades reflects sub-regional responses to changes in climate. Historical wildfire observations exhibit an abrupt transition in the mid-1980s from a regime of infrequent large wildfires of short (average of 1 week) duration to one with much more frequent and longer burning (5 weeks) fires. This transition was marked by a shift toward unusually warm springs, longer summer dry seasons, drier vegetation (which provoked more and longer burning large wildfires), and longer fire seasons. Reduced winter precipitation and an early spring snowmelt played a role in this shift. Increases in wildfire were particularly strong in mid-elevation forests.” [LINK TO FULL TEXT DOWNLOAD]
  3. Scholze, Marko, et al. “A climate-change risk analysis for world ecosystems.” Proceedings of the National Academy of Sciences 103.35 (2006): 13116-13120.  We quantify the risks of climate-induced changes in key ecosystem processes during the 21st century by forcing a dynamic global vegetation model (DGVM) with multiple scenarios from 16 climate models and mapping the proportions of model runs showing forest/nonforest shifts or exceedance of natural variability in wildfire frequency and freshwater supply. Our analysis does not assign probabilities to scenarios or weights to models. Instead, we consider distribution of outcomes within three sets of model runs grouped by the amount of global warming they simulate: <2°C (including simulations in which atmospheric composition is held constant, i.e., in which the only climate change is due to greenhouse gases already emitted), 2–3°C, and >3°C. High risk of forest loss is shown for Eurasia, eastern China, Canada, Central America, and Amazonia, with forest extensions into the Arctic and semiarid savannas; more frequent wildfire in Amazonia, the far north, and many semiarid regions; more runoff north of 50°N and in tropical Africa and northwestern South America; and less runoff in West Africa, Central America, southern Europe, and the eastern U.S. Substantially larger areas are affected for global warming >3°C than for <2°C; some features appear only at higher warming levels. A land carbon sink of ≈1 Pg of C per yr is simulated for the late 20th century, but for >3°C this sink converts to a carbon source during the 21st century (implying a positive climate feedback) in 44% of cases. The risks continue increasing over the following 200 years, even with atmospheric composition held constant. [FULL TEXT PDF DOWNLOAD] .
  4. Westerling, A. L., and B. P. Bryant. “Climate change and wildfire in California.” Climatic Change 87.1 (2008): 231-249.  Wildfire risks for California under four climatic change scenarios were statistically modeled as functions of climate, hydrology, and topography. Wildfire risks for the GFDL and PCM global climate models (note: GFDL and PCM are two different GCM climate models) and the A2 and B1 emissions scenarios were compared for 2005–2034, 2035–2064, and 2070–2099 against a modeled 1961–1990 reference period in California and neighboring states. Outcomes for the GFDL model runs, which exhibit higher temperatures than the PCM model runs, diverged sharply for different kinds of fire regimes, with increased temperatures promoting greater large fire frequency in wetter, forested areas, via the effects of warmer temperatures on fuel flammability. At the same time, reduced moisture availability due to lower precipitation and higher temperatures led to reduced fire risks in some locations where fuel flammability may be less important than the availability of fine fuels. Property damages due to wildfires were also modeled using the 2000 U.S. Census to describe the location and density of residential structures. In this analysis the largest changes in property damages under the climate change scenarios occurred in wildland/urban interfaces proximate to major metropolitan areas in coastal southern California, the Bay Area, and in the Sierra foothills northeast of Sacramento. [FULL TEXT PDF]
  5. Cannon, Susan H., and Jerry DeGraff. “The increasing wildfire and post-fire debris-flow threat in western USA, and implications for consequences of climate change.” Landslides–disaster risk reduction. Springer, Berlin, Heidelberg, 2009. 177-190.  In southern California and the intermountain west of the USA, debris flows generated from recently-burned basins pose significant hazards. Increases in the frequency and size of wildfires throughout the western USA can be attributed to increases in the number of fire ignitions, fire suppression practices, and climatic influences. Increased urbanization throughout the western USA, combined with the increased wildfire magnitude and frequency, carries with it the increased threat of subsequent debris-flow occurrence. Differences between rainfall thresholds and empirical debris-flow susceptibility models for southern California and the intermountain west indicate a strong influence of climatic and geologic settings on post-fire debris-flow potential. The linkages between wildfires, debris-flow occurrence, and global warming suggests that the experiences in the western United States are highly likely to be duplicated in many other parts of the world, and necessitate hazard assessment tools that are specific to local climates and physiographies. [FULL TEXT PDF]
  6. Abatzoglou, John T., and Crystal A. Kolden. “Climate change in western US deserts: potential for increased wildfire and invasive annual grasses.” Rangeland Ecology & Management 64.5 (2011): 471-478.  The influence of climate change on future invasions depends on both climate suitability that defines a potential species range and the mechanisms that facilitate invasions and contractions. A suite of downscaled climate projections for the mid–21st century was used to examine changes in physically based mechanisms, including critical physiological temperature thresholds, the timing and availability of moisture, and the potential for large wildfires. Results suggest widespread changes in 1) the length of the freeze-free season that may favor cold-intolerant annual grasses, 2) changes in the frequency of wet winters that may alter the potential for establishment of invasive annual grasses, and 3) an earlier onset of fire season and a lengthening of the window during which conditions are conducive to fire ignition and growth furthering the fire-invasive feedback loop. We propose that a coupled approach combining bioclimatic envelope modeling with mechanistic modeling targeted to a given species can help land managers identify locations and species that pose the highest level of overall risk of conversion associated with the multiple stressors of climate change. [FULL TEXT PDF]
  7. Girardin, Martin P., et al. “Vegetation limits the impact of a warm climate on boreal wildfires.” New Phytologist 199.4 (2013): 1001-1011.  Strategic introduction of less flammable broadleaf vegetation into landscapes was suggested as a management strategy for decreasing the risk of boreal wildfires projected under climatic change. However, the realization and strength of this offsetting effect in an actual environment remain to be demonstrated. Here we combined paleoecological data, global climate models and wildfire modelling to assess regional fire frequency (RegFF, i.e. the number of fires through time) in boreal forests as it relates to tree species composition and climate over millennial time‐scales. Lacustrine charcoals from northern landscapes of eastern boreal Canada indicate that RegFF during the mid‐Holocene (6000–3000 yr ago) was significantly higher than pre‐industrial RegFF (AD c. 1750). In southern landscapes, RegFF was not significantly higher than the pre‐industrial RegFF in spite of the declining drought severity. The modelling experiment indicates that the high fire risk brought about by a warmer and drier climate in the south during the mid‐Holocene was offset by a higher broadleaf component. Our data highlight an important function for broadleaf vegetation in determining boreal RegFF in a warmer climate. We estimate that its feedback may be large enough to offset the projected climate change impacts on drought conditions. [FULL TEXT]
  8. Westerling, Anthony LeRoy. “Increasing western US forest wildfire activity: sensitivity to changes in the timing of spring.” Philosophical Transactions of the Royal Society B: Biological Sciences 371.1696 (2016): 20150178.  Prior work shows western US forest wildfire activity increased abruptly in the mid-1980s. Large forest wildfires and areas burned in them have continued to increase over recent decades, with most of the increase in lightning-ignited fires. Northern US Rockies forests dominated early increases in wildfire activity, and still contributed 50% of the increase in large fires over the last decade. However, the percentage growth in wildfire activity in Pacific northwestern and southwestern US forests has rapidly increased over the last two decades. Wildfire numbers and burned area are also increasing in non-forest vegetation types. Wildfire activity appears strongly associated with warming and earlier spring snowmelt. Analysis of the drivers of forest wildfire sensitivity to changes in the timing of spring demonstrates that forests at elevations where the historical mean snow-free season ranged between two and four months, with relatively high cumulative warm-season actual evapotranspiration, have been most affected. Increases in large wildfires associated with earlier spring snowmelt scale exponentially with changes in moisture deficit, and moisture deficit changes can explain most of the spatial variability in forest wildfire regime response to the timing of spring. [FULL TEXT]

 

 

 

FOREST FIRE EFFECT ON TOPSOIL CHEMISTRY:  BIBLIOGRAPHY

  1. Shakesby, Richard A., et al. “Impacts of prescribed fire on soil loss and soil quality: an assessment based on an experimentally-burned catchment in central Portugal.” Catena 128 (2015): 278-293.  Prescribed (controlled) fire has recently been adopted as an important wildfire-fighting strategy in the Mediterranean. Relatively little research, however, has assessed its impacts on soil erosion and soil quality. This paper investigates hillslope-scale losses of soil, organic matter and selected nutrients before and after a ‘worst-case scenario’ prescribed fire in a steep, shrub-vegetated catchment with thin stony soil in central Portugal. Comparison is made with soil erosion measured: (1) on a nearby hillslope burned by wildfire and monitored at the hillslope scale; and (2) on long-unburned terrain at small-plot, hillslope- and catchment-scales. Hillslope-scale pre- and post-fire soil erosion was recorded over periods of 6 weeks to 5 months for (1) 9.5 months pre-fire and 27 months post-fire in the prescribed fire catchment, and (2) c. 3 years post-fire at the wildfire site. Organic matter content, pH, total N, K2O, P2O5, Ca2+ and Mg2+ were measured in the eroded sediment and in pre- and post-prescribed fire surface soil. Results indicate that: (1) both the prescribed fire and the wildfire caused expected marked increases in erosion compared with unburned terrain; and (2) the hillslope-scale post-prescribed fire soil losses (up to 2.41 t ha−1 yr−1) exceeded many reported plot-scale post-prescribed fire and post-wildfire erosion rates in the Mediterranean. As a comparison, post-fire erosion for both fire types was less than that caused by some other forms of common soil disturbance (e.g. types of tillage) and even that on undisturbed shrubland in low rainfall areas of the region. Total estimated post-prescribed fire particulate losses of organic matter and nutrients represent only 0.2–2.9% of the content in the upper 2 cm of soil, suggesting only a modest fire effect on soil quality, although this may reflect in part a lack of extreme rainfall events following the fire. The longer-term implications for soil conservation of repeated prescribed fire in the Mediterranean are explored and future research priorities identified.
  2. Pellegrini, Adam FA, et al. “Fire alters ecosystem carbon and nutrients but not plant nutrient stoichiometry or composition in tropical savanna.” Ecology 96.5 (2015): 1275-1285.  Fire and nutrients interact to influence the global distribution and dynamics of the savanna biome, (Biome=large naturally occurring community of flora and fauna such as a forest) but the results of these interactions are both complex and poorly known. A critical but unresolved question is whether short‐term losses of carbon and nutrients caused by fire can trigger long‐term and potentially compensatory responses in the nutrient stoichiometry of plants, or in the abundance of dinitrogen‐fixing trees. There is disagreement in the literature about the potential role of fire on savanna nutrients, and, in turn, on plant stoichiometry and composition. A major limitation has been the lack of fire manipulations over time scales sufficiently long for these interactions to emerge. We use a 58‐year, replicated, large‐scale, fire manipulation experiment in Kruger National Park (South Africa) in savanna to quantify the effect of fire on (1) distributions of carbon, nitrogen, and phosphorus at the ecosystem scale; (2) carbon : nitrogen : phosphorus stoichiometry of above‐ and below-ground tissues of plant species; and (3) abundance of plant functional groups including nitrogen fixers. Our results show dramatic effects of fire on the relative distribution of nutrients in soils, but that individual plant stoichiometry and plant community composition remained unexpectedly resilient. Moreover, measures of nutrients and carbon stable isotopes allowed us to discount the role of tree cover change in favor of the turnover of herbaceous biomass as the primary mechanism that mediates a transition from low to high soil carbon and nutrients in the absence of fire. We conclude that, in contrast to extra‐tropical grasslands or closed‐canopy forests, vegetation in the savanna biome may be uniquely adapted to nutrient losses caused by recurring fire.
  3. Fultz, Lisa M., et al. “Forest wildfire and grassland prescribed fire effects on soil biogeochemical processes and microbial communities: Two case studies in the semi-arid Southwest.” Applied soil ecology 99 (2016): 118-128.  Fire is a natural disturbance that shapes many ecosystems. In semi-arid regions, where high temperatures and low soil moisture limit nutrient cycling and plant growth, fire is critical to supply nutrients and drive vegetation composition. We examined soil chemical and biological properties to assess the short-term impacts of wildfire and prescribed fires on soil functioning in semi-arid regions of Texas. Better understanding of soil organic matter transformation and nutrient cycling processes will aid land managers in predicting ecosystem recovery response post-fire. Soil samples were collected following both prescribed grassland fires in June of 2009 in Lubbock, TX and the April 2012 Livermore Ranch Complex Fire located in the Davis Mountains, TX. Prescribed fire samples (0–2.5 cm) were collected within 6 hours prior to burning and again at 0.5, 24, 48, and 168 hours post-fire to experimentally examine short-term influences of fire and fire frequency (1× vs. 2×) on soil carbon dynamics, inorganic nitrogen, and microbial community composition. Wildfire samples (0–5 cm) were collected two and six months following the wildfire. We evaluated the effects of three burn severity levels and sampled under three tree species (Juniperus deppeana, Pinus cembroides, and Quercus grisea). Within 0.5 h of the prescribed fire, CO2 flux, NH4+-N concentration and total microbial biomass (as estimated by total fatty acid methyl esters) increased. A shift in the microbial community from a predominance of fungi to Gram positive bacteria occurred immediately following the fire. Chemical shifts were short lived (decreased within 24 h), but the biotic shift to a dominance of Gram negative bacteria and actinomycetes was measured in samples collected after 168 h. Soil pH and NH4+-N concentration increased at two and six months following the wildfire. In contrast, soil organic matter content decreased at two months post wildfire which, in combination with abiotic conditions such as low moisture content (<3.3%), resulted in reduced soil microbial biomass and enzyme activity. Increased soil moisture six months post fire created more favorable conditions for nitrification resulting in increased NO3-N concentration (0.8 to 36.1 mg NO3-N kg−1 soil), particularly following high severity fire. Prescribed fire did not have lasting impacts on soil nutrients, but both prescribed and wildfire resulted in increased NH4+-N, shifts in microbial community structure, and decreases in microbial biomass. While the increase in nitrogen may be beneficial to plant growth and revegetation, the loss of microbial biomass may have far reaching implications for the overall sustainability of the soils in these systems.
  4. Brown, Julian, Alan York, and Fiona Christie. “Fire effects on pollination in a sexually deceptive orchid.” International Journal of Wildland Fire 25.8 (2016): 888-895. Research into the effectiveness of prescribed fire in managing pollination has only recently begun. The effects of fire on pollination have not been explored in sexually deceptive systems. Further, the potential for multiple effects operating at different spatial scales has not been explored in any pollination system despite multi-scale effects on pollination observed in agricultural landscapes. We observed the frequency of pollinator visitation to flowers of sexually deceptive Caladenia tentaculata and related it to the post-fire age class of the vegetation at local and landscape scales. We also related the number of the pollinator’s putative larval hosts (scarab beetles) captured at these sites to age class. At the local scale (i.e. the sample location), visitation was highest in recently burnt sites. At the landscape scale, positive associations were observed between (1) putative pollinator hosts and vegetation burnt 36–50 years ago, and (2) pollinator visitation and vegetation burnt ≥50 years ago. Local- and landscape-scale effects on visitation were synergistic, such that visitation was greatest when fire age was heterogeneous within pollinator foraging range.
  5. Alcañiz, M., et al. “Long-term dynamics of soil chemical properties after a prescribed fire in a Mediterranean forest (Montgrí Massif, Catalonia, Spain).” Science of the total environment 572 (2016): 1329-1335.  This study examines the effects of a prescribed fire on soil chemical properties in the Montgrí Massif (Girona, Spain). The prescribed forest fire was conducted in 2006 to reduce understory vegetation and so prevent potential severe wildfires. Soil was sampled at a depth of 0–5 cm at 42 sampling points on four separate occasions: prior to the event, immediately after, one year after and nine years after. The parameters studied were pH, electrical conductivity (EC), total carbon (C), total nitrogen (N), available phosphorus (P), potassium (K+), calcium (Ca2+) and magnesium (Mg2+). All parameters (except pH) increased significantly immediately after the fire. One year after burning, some chemical parameters – namely, EC, available P and K+ – had returned to their initial, or even lower, values; while others – pH and total C – continued to rise. Total N, Ca2+ and Mg2+ levels had fallen one year after the fire, but levels were still higher than those prior to the event. Nine years after the fire, pH, total C, total N and available P are significantly lower than pre-fire values and nutrient concentrations are now higher than at the outset but without statistical significance. The soil system, therefore, is still far from being recovered nine years later.
  6. Armas-Herrera, Cecilia M., et al. “Immediate effects of prescribed burning in the Central Pyrenees on the amount and stability of topsoil organic matter.” Catena 147 (2016): 238-244.  Prescribed burning is the deliberate application of fire under selected conditions to accomplish predetermined management objectives. It is generally accepted that controlled use of fire has neutral or even positive effects on soils due to its lower temperature, intensity and severity compared to wildfires. However, very few studies have examined the effects of prescribed burning of shrub vegetation in humid mountain areas on soil properties. The objective of this work was to determine the immediate effects of prescribed burning on the quality and biochemical stability of soil organic matter (SOM) in areas encroached by shrubs in the Central Pyrenees (NE Spain). Soil samples were sampled in triplicate immediately before and after burning from the Ah horizon at 0–1, 1–2 and 2–3 cm depths. We quantified the variations as a direct result of burning in (1) the SOM content, (2) the content and mineralization rates of labile and recalcitrant C pools as inferred from incubation assays (141 days), and (3) the soil biological activity related to C cycling (microbial biomass C and β-D-glucosidase activity). Nearly all the soil properties studied were significantly affected by fire, varying in terms of extent of the effect and the soil depth affected. The total soil organic C (SOC), C/N ratio, β-D-glucosidase activity, C-CO2 efflux and estimated content of labile SOC decreased significantly up to 3 cm depth. The total N and microbial biomass C were significantly affected only in the upper cm of the soil (0–1 cm). These results describe a short-term stronger impact of the prescribed fire on topsoil properties than usually reported. However, comparing these findings to other studies should be performed with caution because of the different environments considered in each case, as well as the differing soil thicknesses found in the literature, typically between 5 and 15 cm, which can lead to a dilution effect associated with the actual impacts of fire on soil properties. In this sense, the choice of a suitable soil thickness or sampling just after burning can be relevant factors in the detection of the immediate effects of fire. Short- and medium-term monitoring of the soils is needed to assess the suitability of this practice for pasture maintenance and for adapting the frequency of prescribed fires in order to minimize its impact on soil.
  7. Sun, Hui, et al. “Bacterial community structure and function shift across a northern boreal forest fire chronosequence.” Scientific reports 6 (2016): 32411.  Soil microbial responses to fire are likely to change over the course of forest recovery. Investigations on long-term changes in bacterial dynamics following fire are rare. We characterized the soil bacterial communities across three different times post fire in a 2 to 152-year fire chronosequence by Illumina MiSeq sequencing, coupled with a functional gene array (GeoChip). The results showed that the bacterial diversity did not differ between the recently and older burned areas, suggesting a concomitant recovery in the bacterial diversity after fire. The differences in bacterial communities over time were mainly driven by the rare operational taxonomic units (OTUs < 0.1%). Proteobacteria (39%), Acidobacteria (34%) and Actinobacteria (17%) were the most abundant phyla across all sites. Genes involved in C and N cycling pathways were present in all sites showing high redundancy in the gene profiles. However, hierarchical cluster analysis using gene signal intensity revealed that the sites with different fire histories formed separate clusters, suggesting potential differences in maintaining essential biogeochemical soil processes. Soil temperature, pH and water contents were the most important factors in shaping the bacterial community structures and function. This study provides functional insight on the impact of fire disturbance on soil bacterial community.
  8. Badía, David, et al. “Burn effects on soil properties associated to heat transfer under contrasting moisture content.” Science of the Total Environment 601 (2017): 1119-1128. The aim of this work is to investigate the topsoil thickness affected by burning under contrasting soil moisture content (field capacity versus air-dried conditions). A mollic horizon of an Aleppo pine forest was sampled and burned in the laboratory, recording the temperature continuously at the topsoil surface and at soil depths of 1, 2, and 3 cm. Changes in soil properties were measured at 0–1, 1–2, 2–3, and 3–4 cm. Both the maximum temperature and the charring intensities were significantly lower in wet soils than in air-dried soils up to 3 cm in depth. Moreover, soil heating was slower and cooling faster in wet soils as compared to dry soils. Therefore, the heat capacity increase of the soil moistened at field capacity plays a more important role than the thermal conductivity increase on heat transfer on burned soils. Burning did not significantly modify the pH, the carbonate content and the chroma, for either wet or dry soil. Fire caused an immediate and significant decrease in water repellency in the air-dried soil, even at 3 cm depth, whereas the wet soil remained hydrophilic throughout its thickness, without being affected by burning. Burning depleted 50% of the soil organic C (OC) content in the air-dried soil and 25% in the wet soil at the upper centimeter, which was blackened. Burning significantly decreased the total N (TN) content only in the dry soil (to one-third of the original value) through the first centimeter of soil depth. Soluble ions, measured by electrical conductivity (EC), increased after burning, although only significantly in the first centimeter of air-dried soils. Below 2 cm, burning had no significant effects on the brightness, OC, TN, or EC, for either wet or dry soil.
  9. Dove, Nicholas C., and Stephen C. Hart. “Fire reduces fungal species richness and in situ mycorrhizal colonization: a meta-analysis.” Fire Ecology 13.2 (2017): 37-65.  Soil fungal communities perform many functions that help plants meet their nutritional demands. However, overall trends for fungal response to fire, which can be especially critical in a post-fire context, have been difficult to elucidate. We used meta-analytical techniques to investigate fungal response to fire across studies, ecosystems, and fire types. Change in fungal species richness and mycorrhizal colonization were used as the effect size metrics in random effects models. When different types of methods for assessing fungal species richness and mycorrhizal colonization were considered together, there was an average reduction of 28% in fungal species richness post fire, but no significant response in mycorrhizal colonization. In contrast, there was a 41% reduction in fungal species richness post fire when assessed by sporocarp surveys, but fungal species richness was not significantly affected when assessed by molecular methods. Measured in situ, fire reduced mycorrhizal colonization by 21%, yet no significant response occurred when assessed by ex situ bioassays. These findings suggest that the putative magnitude of fire effects on soil fungal communities may be dependent on the approach and assessment method used. Furthermore, biome, but not fire type (i.e., wildfire versus prescribed fire) was a significant moderator of our categorical models, suggesting that biome might be a more useful predictor of fungal species richness response to fire than fire type. Reductions in fungal species richness and in situ mycorrhizal colonization post fire declined logarithmically and approached zero (i.e., no effect) at 22 and 11 years, respectively. We concluded that fire reduces fungal species richness and in situ mycorrhizal colonization, but if conditions allow communities to recover (e.g., without subsequent disturbance, favorable growing conditions), soil fungi are resilient on decadal time scales; the resiliency of soil fungi likely contributes to the overall rapid ecosystem recovery following fire.
  10. Girona-García, Antonio, et al. “Effects of prescribed burning on soil organic C, aggregate stability and water repellency in a subalpine shrubland: Variations among sieve fractions and depths.” Catena 166 (2018): 68-77.  Soil organic matter, aggregation and water repellency are relevant interrelated soil properties that can be affected by fire. The aim of this work was to analyse the effects of shrub prescribed burning for pasture reclamation on the soil aggregate stability, organic carbon and water repellency of different soil depths and aggregate sizes in a subalpine environment. Soil samples were collected from an area treated by an autumnal low-intensity prescribed fire in the Central Pyrenees (NE-Spain) at 0–1, 1–2, 2–3 and 3–5 cm depths just before and ~1 h, 6 months and 12 months after burning. Samples were separated as whole soil (<10 mm) and 6 sieve fractions, <0.25, 0.25–0.5, 0.5–1, 1–2, 2–4 and 4–10 mm. We analysed soil organic carbon (SOC), aggregate stability (AS) and soil water repellency (SWR). In the unburned samples, SOC and SWR were higher in the <0.25 to 2 mm sieve fractions than the 2 to 10 mm sieve fractions. Fire severely and significantly decreased the SOC content in the whole soil and the <0.25 mm fraction at 0–1 cm depth and in the 0.25–0.5 mm fraction at 0–2 cm depth. SWR was reduced by burning mainly at 0–1 cm depth for the whole soil and the <0.25 to 2 mm sieve fractions. Nevertheless, the AS of the 0.25–0.5 mm aggregates increased after fire, while the rest of the sieve fractions remained virtually unaffected. One year after the prescribed burning, SOC slightly increased and SWR recovered in the fire-affected fractions, while the AS for all aggregate sizes and depths showed a considerable decrease. The results suggest that the direct effects of burning are still present one year after burning, and the post-fire situation may pose an increased risk of soil loss. Furthermore, our results indicate that fine soil fractions are more likely to be affected by fire than coarser soil fractions and highly influence the whole soil behaviour.
  11. Butler, Orpheus M., et al. “The phosphorus‐rich signature of fire in the soil–plant system: a global meta‐analysis.” Ecology letters 21.3 (2018): 335-344.  The biogeochemical and stoichiometric signature of vegetation fire may influence post‐fire ecosystem characteristics and the evolution of plant ‘fire traits’. Phosphorus (P), a potentially limiting nutrient in many fire‐prone environments, might be particularly important in this context; however, the effects of fire on phosphorus cycling often vary widely. We conducted a global‐scale meta‐analysis using data from 174 soil studies and 39 litter studies, and found that fire led to significantly higher concentrations of soil mineral phosphorus as well as significantly lower soil and litter carbon:phosphorus and nitrogen:phosphorus ratios. These results demonstrate that fire has a phosphorus‐rich signature in the soil–plant system that varies with vegetation type. Further, they suggest that burning can ease phosphorus limitation and decouple the biogeochemical cycling of phosphorus, carbon and nitrogen. These effects resemble a transient reversion to an earlier stage of ecosystem development, and likely underpin at least some of fire’s impacts on ecosystems and organisms.
  12. Alcañiz, M., et al. “Effects of prescribed fires on soil properties: a review.” Science of The Total Environment 613 (2018): 944-957.  Soils constitute one of the most valuable resources on earth, especially because soil is renewable on human time scales. During the 20th century, a period marked by a widespread rural exodus and land abandonment, fire suppression policies were adopted facilitating the accumulation of fuel in forested areas, exacerbating the effects of wildfires, leading to severe degradation of soils. Prescribed fires had emerged as an option for protecting forests and their soils from wildfires through the reduction of fuels levels. However such fires can serve other objectives, including stimulating the regeneration of a particular plant species, maintaining biological diversity or as a tool for recovering grasslands in encroached lands. This paper reviews studies examining the short- and long- term impacts of prescribed fires on the physical, chemical and biological soil properties; in so doing, it provides a summary of the benefits and drawbacks of this technique, to help determine if prescribed fires can be useful for managing the landscape. From the study conducted, we can affirm that prescribed fires affect soil properties but differ greatly depending on soil initial characteristics, vegetation or type of fire. Also, it is possible to see that soil’s physical and biological properties are more strongly affected by prescribed fires than are its chemical properties. Finally, we conclude that prescribed fires clearly constitute a disturbance on the environment (positive, neutral or negative depending on the soil property studied), but most of the studies reviewed report a good recovery and their effects could be less pronounced than those of wildfires because of the limited soil heating and lower fire intensity and severity.
  13. Koltz, Amanda M., et al. “Global change and the importance of fire for the ecology and evolution of insects.” Current opinion in insect science 29 (2018): 110-116.  Climate change is drastically altering global fire regimes, which may affect the structure and function of insect communities. Insect responses to fire are strongly tied to fire history, plant responses, and changes in species interactions. Many insects already possess adaptive traits to survive fire or benefit from post-fire resources, which may result in community composition shifting toward habitat and dietary generalists as well as species with high dispersal abilities. However, predicting community-level resilience of insects is inherently challenging due to the high degree of spatio-temporal and historical heterogeneity of fires, diversity of insect life histories, and potential interactions with other global change drivers. Future work should incorporate experimental approaches that specifically consider spatiotemporal variability and regional fire history in order to integrate eco-evolutionary processes in understanding insect responses to fire.
  14. Pellegrini, Adam FA, et al. “Fire frequency drives decadal changes in soil carbon and nitrogen and ecosystem productivity.” Nature 553.7687 (2018): 194.  Fire frequency is changing globally and is projected to affect the global carbon cycle and climate. However, uncertainty about how ecosystems respond to decadal changes in fire frequency makes it difficult to predict the effects of altered fire regimes on the carbon cycle; for instance, we do not fully understand the long-term effects of fire on soil carbon and nutrient storage, or whether fire-driven nutrient losses limit plant productivity. Here we analyse data from 48 sites in savanna grasslands, broadleaf forests and needleleaf forests spanning up to 65 years, during which time the frequency of fires was altered at each site. We find that frequently burned plots experienced a decline in surface soil carbon and nitrogen that was non-saturating through time, having 36 per cent (±13 per cent) less carbon and 38 per cent (±16 per cent) less nitrogen after 64 years than plots that were protected from fire. Fire-driven carbon and nitrogen losses were substantial in savanna grasslands and broadleaf forests, but not in temperate and boreal needleleaf forests. We also observe comparable soil carbon and nitrogen losses in an independent field dataset and in dynamic model simulations of global vegetation. The model study predicts that the long-term losses of soil nitrogen that result from more frequent burning may in turn decrease the carbon that is sequestered by net primary productivity by about 20 per cent of the total carbon that is emitted from burning biomass over the same period. Furthermore, we estimate that the effects of changes in fire frequency on ecosystem carbon storage may be 30 per cent too low if they do not include multidecadal changes in soil carbon, especially in drier savanna grasslands. Future changes in fire frequency may shift ecosystem carbon storage by changing soil carbon pools and nitrogen limitations on plant growth, altering the carbon sink capacity of frequently burning savanna grasslands and broadleaf forests. CONCLUSION: In conclusion, our results reveal the sensitivity of surface soils to fire and the substantial effects that changes in soil pools have on long-term ecosystem C exchange. The large empirical and conservative model-based estimates of soil C changes suggest that present estimates of fire-driven C losses, which primarily consider losses from plant biomass pools, may substantially underestimate the effects of long-term trends in fire frequencies in savanna grasslands and broadleaf forests in particular. Our findings suggest that future alterations in fire regimes in savanna grasslands and broadleaf forests may shift ecosystem C storage by changing soil C levels and changing the N limitation of plant growth, altering the carbon-sink capacity of these fire-prone ecosystems.
  15. Pressler, Yamina, John C. Moore, and M. Francesca Cotrufo. “Belowground community responses to fire: meta‐analysis reveals contrasting responses of soil microorganisms and mesofauna.” Oikos 128.3 (2019): 309-327.  Global fire regimes are shifting due to climate and land use changes. Understanding the responses of below-ground communities to fire is key to predicting changes in the ecosystem processes they regulate. We conducted a comprehensive meta‐analysis of 1634 observations from 131 empirical studies to investigate the effect of fire on soil microorganisms and mesofauna. Fire had a strong negative effect on soil biota biomass, abundance, richness, evenness, and diversity. Fire reduced microorganism biomass and abundance by up to 96%. Bacteria were more resistant to fire than fungi. Fire reduced nematode abundance by 88% but had no significant effect on soil arthropods. Fire reduced richness, evenness and diversity of soil microorganisms and mesofauna by up to 99%. We found little evidence of temporal trends towards recovery within 10 years post‐disturbance, suggesting little resilience of the soil community to fire. Interactions between biome, fire type, and depth explained few of these negative trends. Future research at the intersection of fire ecology and soil biology should aim to integrate soil community structure with the ecosystem processes they mediate under changing global fire regimes.

 

 

This post was motivated by a related post on the WUWT site [LINK] . 

Millar, Richard J., and Pierre Friedlingstein. “The utility of the historical record for assessing the transient climate response to cumulative emissions.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376.2119 (2018): 20160449. ABSTRACT: The historical observational record offers a way to constrain the relationship between cumulative carbon dioxide emissions and global mean warming. We use a standard detection and attribution technique, along with observational uncertainties to estimate the all-forcing or ‘effective’ transient climate response to cumulative emissions (TCRE) from the observational record. Accounting for observational uncertainty and uncertainty in historical non-CO2 radiative forcing gives a best-estimate from the historical record of 1.84°C/TtC (1.43–2.37°C/TtC 5–95% uncertainty) for the effective TCRE and 1.31°C/TtC (0.88–2.60°C/TtC 5–95% uncertainty) for the CO2-only TCRE. While the best-estimate TCRE lies in the lower half of the IPCC likely range, the high upper bound is associated with the not-ruled-out possibility of a strongly negative aerosol forcing. Earth System Models have a higher effective TCRE range when compared like-for-like with the observations over the historical period, associated in part with a slight underestimate of diagnosed cumulative emissions relative to the observational best-estimate, a larger ensemble mean-simulated CO2-induced warming, and rapid post-2000 non-CO2 warming in some ensemble members. This article is part of the theme issue ‘The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels’.
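As a hedged illustration of the TCRE arithmetic at issue (a sketch under stated assumptions, not code from the paper), the following converts the abstract's best-estimate and 5-95% effective TCRE values into total cumulative-emissions budgets for a 1.5°C warming target. It ignores emissions already expended and all non-CO2 forcing.

```python
# Illustrative sketch: convert the effective TCRE values quoted in the
# Millar & Friedlingstein abstract into cumulative-emissions budgets for a
# 1.5C warming target. Ignores emissions already expended and non-CO2 forcing.
tcre_best, tcre_lo, tcre_hi = 1.84, 1.43, 2.37     # degC per TtC, best and 5-95%
target = 1.5                                       # degC above pre-industrial

budget_best = target / tcre_best                   # in TtC (1 TtC = 1000 PgC)
budget_min = target / tcre_hi                      # a high TCRE implies a small budget
budget_max = target / tcre_lo

print(f"best-estimate budget: {budget_best * 1000:.0f} PgC")                          # ~815 PgC
print(f"5-95% budget range: {budget_min * 1000:.0f} to {budget_max * 1000:.0f} PgC")  # ~633 to ~1049
```

Note how the 5-95% uncertainty in the TCRE maps into a budget range spanning roughly 400 PgC; this sensitivity of the budget to the TCRE estimate is the practical context for the critical review that follows.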

 

THIS POST IS A CRITICAL REVIEW OF A RESEARCH PAPER ON THE CLIMATE SCIENCE IMPLICATIONS OF THE TCRE (Transient Climate Response to Cumulative Emissions) PUBLISHED IN THE PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY IN THEIR MATHEMATICAL, PHYSICAL, AND ENGINEERING SERIES IN 2018. THE CITATION AND ABSTRACT OF THE PAPER APPEARS ABOVE. 

 

 

[HOME PAGE]

[THE CARBON BUDGETS OF CLIMATE SCIENCE]

[THE REMAINING CARBON BUDGET ANOMALY EXPLAINED]

 

 

 

 

  1. It has long been recognized that the climate sensitivity of surface temperature to the logarithm of atmospheric CO2 (ECS), which lies at the heart of the anthropogenic global warming and climate change (AGW) proposition, is a difficult issue for climate science because of the large range of empirical values reported in the literature and the so-called “uncertainty problem” it implies {Caldeira, et al “Climate sensitivity uncertainty and the need for energy without CO2 emission.” Science 299.5615 (2003): 2052-2054}. The ECS uncertainty issue was interpreted in two very different ways. Climate science took the position that ECS uncertainty implies that climate action has to be greater than that implied by the mean value of ECS, in order to accommodate the higher values of ECS that remain possible, while skeptics argued that the large range means that we don’t really know. At the same time, skeptics also presented convincing arguments against the assumption that observed changes in atmospheric CO2 concentration can be attributed to fossil fuel emissions [LINK] [LINK].
  2. A breakthrough came in 2009 when Damon Matthews, Myles Allen, and a few others almost simultaneously published nearly identical papers reporting the discovery of a “near perfect” correlation (ρ≈1) between surface temperature and cumulative emissions {2009: Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions” Nature 459.7248 (2009): 829}. They had found that, irrespective of the timing of emissions or of atmospheric CO2 concentration, emitting a trillion tonnes of carbon will cause 1.0–2.1°C of global warming. The linear regression coefficient corresponding to this near-perfect correlation between cumulative warming and cumulative emissions (note: temperature = cumulative warming), initially described as the Carbon-Climate Response (CCR), was later termed the Transient Climate Response to Cumulative Emissions (TCRE). Initially a curiosity, it gained in importance when it was found to predict future temperatures consistent with model predictions. The consistency with climate models was taken as a validation of the new tool, and the TCRE became integrated into the theory of climate change. However, as noted in a related post [LINK] [LINK], the consistency likely derives from the assumption that emissions accumulate in the atmosphere.
  3. Thereafter the TCRE became incorporated into the foundation of climate change theory, particularly in terms of its utility in the construction of carbon budgets for climate action plans for any given target temperature rise, an application for which the TCRE appeared to be tailor-made. Most importantly, it solved, or perhaps bypassed, the messy and inconclusive uncertainty issue in ECS climate sensitivity. The importance of this aspect of the TCRE is found in the 2017 paper “Beyond Climate Sensitivity” by prominent climate scientist Reto Knutti, in which he declared that the TCRE metric should replace the ECS as the primary tool for relating warming to human caused emissions {2017: Knutti, Reto, Maria AA Rugenstein, and Gabriele C. Hegerl. “Beyond equilibrium climate sensitivity.” Nature Geoscience 10.10 (2017): 727}. The anti-ECS Knutti paper was not only published but received with great fanfare by the journal and by the climate science community in general.
  4. The TCRE has continued to gain in importance and prominence as a tool for the practical application of climate change theory in terms of its utility in the construction and tracking of carbon budgets for limiting warming to a target such as the Paris Climate Accord target of +1.5°C above pre-industrial {Matthews, H. Damon. “Quantifying historical carbon and climate debts among nations.” Nature climate change 6.1 (2016): 60}. A bibliography on the subject of TCRE carbon budgets is included at the end of this post.
  5. However, a mysterious and vexing issue has arisen in the practical matter of applying and tracking TCRE-based carbon budgets: the remaining carbon budget puzzle {Rogelj, Joeri, et al. “Estimating and tracking the remaining carbon budget for stringent climate targets.” Nature 571.7765 (2019): 335-342}. It turns out that, midway through the implementation of a carbon budget, the remaining carbon budget computed by subtraction does not match the TCRE carbon budget for the remaining period computed directly using the Damon Matthews proportionality of temperature with cumulative emissions for that period. The difference between the two estimates of the remaining carbon budget has, however, a rational explanation in terms of the statistics of a time series of the cumulative values of another time series, described in a related post [LINK].
  6. It is shown in the related post that a time series of the cumulative values of another time series has neither time scale nor degrees of freedom, and that therefore statistical properties of this series can have no practical interpretation. It is demonstrated with random numbers that the only practical implication of the “near perfect proportionality” correlation reported by Damon Matthews is that the two time series being compared (annual warming and annual emissions) tend to have positive values. In the case of emissions we have all positive values, and during a time of global warming the annual warming series contains mostly positive values. The correlation between temperature (cumulative warming) and cumulative emissions derives from this sign bias, as demonstrated with random numbers with and without sign bias in a related post [LINK] and in the code sketch following this list.
  7. The sign bias explains both the correlation between cumulative values of time series data and the remaining carbon budget puzzle. It is shown in the related post [LINK] that the TCRE regression coefficient between these time series of cumulative values derives from the positive value bias in the annual warming data. Thus, during a period of accelerated warming, the second half of the carbon budget period may contain a higher percentage of positive values for annual warming, and it will therefore show a carbon budget that exceeds the proportional budget for the second half computed from the full-span regression coefficient, which is based on a lower bias for positive values.
  8. In short, the bias for positive annual warming is highest for the second half, lowest for the first half, and midway between these two values for the full span – and therein lies the simple statistical explanation of the remaining carbon budget issue that climate science is trying to solve in terms of climate theory and its extension to Earth System Models. The Millar and Friedlingstein 2018 paper is yet another in a long line of studies that ignore the statistical issues in the TCRE correlation and instead try to explain its anomalous behavior in terms of climate theory, whereas the explanation in fact lies in statistical issues that have been overlooked by these young scientists [LINK].
  9. The fundamental problem with the construction of TCRE carbon budgets and their interpretation in terms of climate action is that the TCRE is a spurious correlation that has no interpretation in terms of a relationship between emissions and warming. Complexities in these carbon budgets such as the remaining carbon budget are best understood in these terms and not in terms of new and esoteric variables such as those in earth system models. 
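To make the sign-bias argument concrete, here is a minimal sketch (not from the source post or the related posts) that generates two independent random series, one all-positive standing in for annual emissions and one mostly-positive standing in for annual warming, and compares the correlation of the annual series with the correlation of their cumulative sums. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 70  # roughly the length of the modern emissions/temperature record, in years

# Two INDEPENDENT random series with a positive-value bias:
# a stand-in for annual emissions (all positive) and for
# annual warming increments (mostly positive during a warming period).
emissions = rng.uniform(5.0, 10.0, n)   # GtC/year, all positive
warming = rng.normal(0.02, 0.015, n)    # deg C/year, mostly positive

# Correlation of the annual series: near zero, because the two
# series are independent by construction.
r_annual = np.corrcoef(emissions, warming)[0, 1]

# Correlation of the cumulative series: near perfect, created
# entirely by the shared upward drift that the sign bias imposes.
r_cumulative = np.corrcoef(np.cumsum(emissions), np.cumsum(warming))[0, 1]

print(f"annual correlation:     {r_annual:+.3f}")
print(f"cumulative correlation: {r_cumulative:+.3f}")
```

Because both cumulative series rise almost monotonically, their correlation approaches 1 regardless of any relationship at the annual time scale – which is the spurious correlation argument made above.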

 

[HOME PAGE]

[THE CARBON BUDGETS OF CLIMATE SCIENCE]

[THE REMAINING CARBON BUDGET ANOMALY EXPLAINED]

 

TCRE CARBON BUDGET BIBLIOGRAPHY

  1. MacDougall, Andrew H., et al. “Sensitivity of carbon budgets to permafrost carbon feedbacks and non-CO2 forcings.” Environmental Research Letters 10.12 (2015): 125003. The near proportionality between cumulative CO2 emissions and change in near surface temperature can be used to define a carbon budget: a finite quantity of carbon that can be burned associated with a chosen ‘safe’ temperature change threshold. Here we evaluate the sensitivity of this carbon budget to permafrost carbon dynamics and changes in non-CO2 forcings. The carbon budget for 2.0 °C of warming is reduced from 1320 Pg C when considering only forcing from CO2 to 810 Pg C when considering permafrost carbon feedbacks as well as other anthropogenic contributions to climate change. We also examined net carbon budgets following an overshoot of and return to a warming target. That is, the net cumulative CO2 emissions at the point in time a warming target is restored following artificial removal of CO2 from the atmosphere to cool the climate back to a chosen temperature target. These overshoot net carbon budgets are consistently smaller than the conventional carbon budgets. Overall carbon budgets persist as a robust and simple conceptual framework to relate the principle cause of climate change to the impacts of climate change. [FULL TEXT PDF]
  2. Millar, Richard, et al. “The cumulative carbon budget and its implications.” Oxford Review of Economic Policy 32.2 (2016): 323-342.  The cumulative impact of carbon dioxide (CO 2 ) emissions on climate has potentially profound economic and policy implications. It implies that the long-term climate change mitigation challenge should be reframed as a stock problem, while the overwhelming majority of climate policies continue to focus on the flow of CO 2 into the atmosphere in 2030 or 2050. An obstacle, however, to the use of a cumulative carbon budget in policy is uncertainty in the size of this budget consistent with any specific temperature-based goal such as limiting warming to 2°C. This arises from uncertainty in the climate response to CO 2 emissions, which is relatively tractable, and uncertainty in future warming due to non-CO 2 drivers, which is less so. We argue these uncertainties are best addressed through policies that recognize the need to reduce net global CO 2 emissions to zero to stabilize global temperatures but adapt automatically to evolving climate change. Adaptive policies would fit well within the Paris Agreement under the UN Framework Convention on Climate Change.
  3. Rogelj, Joeri, et al. “Differences between carbon budget estimates unravelled.” Nature Climate Change 6.3 (2016): 245-252. Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust number — the carbon budget for CO2-induced warming only — is also the least relevant for real-world policy. Including all greenhouse gases and using methods based on scenarios that avoid instead of exceed a given temperature limit results in lower carbon budgets. For a >66% chance of limiting warming below the internationally agreed temperature limit of 2 °C relative to pre-industrial levels, the most appropriate carbon budget estimate is 590–1,240 GtCO2 from 2015 onwards. Variations within this range depend on the probability of staying below 2 °C and on end-of-century non-CO2 warming. Current CO2 emissions are about 40 GtCO2 yr−1, and global CO2 emissions thus have to be reduced urgently to keep within a 2 °C-compatible budget.
  4. MacDougall, Andrew H., et al. “Corrigendum: Sensitivity of carbon budgets to permafrost carbon feedbacks and non-CO2 forcings (2015 Environ. Res. Lett. 10).” (2016). The near proportionality between cumulative CO2 emissions and change in near surface temperature can be used to define a carbon budget: a finite quantity of carbon that can be burned associated with a chosen ‘safe’ temperature change threshold. Here we evaluate the sensitivity of this carbon budget to permafrost carbon dynamics and changes in non-CO2 forcings. The carbon budget for 2.0 °C of warming is reduced from 1320 Pg C when considering only forcing from CO2 to 810 Pg C when considering permafrost carbon feedbacks as well as other anthropogenic contributions to climate change. We also examined net carbon budgets following an overshoot of and return to a warming target. That is, the net cumulative CO2 emissions at the point in time a warming target is restored following artificial removal of CO2 from the atmosphere to cool the climate back to a chosen temperature target. These overshoot net carbon budgets are consistently smaller than the conventional carbon budgets. Overall carbon budgets persist as a robust and simple conceptual framework to relate the principle cause of climate change to the impacts of climate change.
  5. Friedlingstein, P. “Differences between carbon budget estimates unravelled.” (2016). Several methods exist to estimate the cumulative carbon emissions that would keep global warming to below a given temperature limit. Here we review estimates reported by the IPCC and the recent literature, and discuss the reasons underlying their differences. The most scientifically robust number — the carbon budget for CO2-induced warming only — is also the least relevant for real-world policy. Including all greenhouse gases and using methods based on scenarios that avoid instead of exceed a given temperature limit results in lower carbon budgets. For a >66% chance of limiting warming below the internationally agreed temperature limit of 2 °C relative to pre-industrial levels, the most appropriate carbon budget estimate is 590–1,240 GtCO2 from 2015 onwards. Variations within this range depend on the probability of staying below 2 °C and on end-of-century non-CO2 warming. Current CO2 emissions are about 40 GtCO2 yr–1, and global CO2 emissions thus have to be reduced urgently to keep within a 2 °C-compatible budget [FULL TEXT].
  6. Matthews, H. Damon, et al. “Estimating carbon budgets for ambitious climate targets.” Current Climate Change Reports 3.1 (2017): 69-77.  Carbon budgets, which define the total allowable CO2 emissions associated with a given global climate target, are a useful way of framing the climate mitigation challenge. In this paper, we review the geophysical basis for the idea of a carbon budget, showing how this concept emerges from a linear climate response to cumulative CO2 emissions. We then discuss the difference between a “CO2-only carbon budget” associated with a given level of CO2-induced warming and an “effective carbon budget” associated with a given level of warming caused by all human emissions. We present estimates for the CO2-only and effective carbon budgets for 1.5 and 2 °C, based on both model simulations and updated observational data. Finally, we discuss the key contributors to uncertainty in carbon budget estimates and suggest some implications of this uncertainty for decision-making. Based on the analysis presented here, we argue that while the CO2-only carbon budget is a robust upper bound on allowable emissions for a given climate target, the size of the effective carbon budget is dependent on the how quickly we are able to mitigate non-CO2 greenhouse gas and aerosol emissions. This suggests that climate mitigation efforts could benefit from being responsive to a changing effective carbon budget over time, as well as to potential new information that could narrow uncertainty associated with the climate response to CO2 emissions.
  7. MacDougall, Andrew H. “The oceanic origin of path-independent carbon budgets.” Scientific reports 7.1 (2017): 10373.  Virtually all Earth system models (ESM) show a near proportional relationship between cumulative emissions of CO2 and change in global mean temperature, a relationship which is independent of the emissions pathway taken to reach a cumulative emissions total. The relationship, which has been named the Transient Climate Response to Cumulative CO2 Emissions (TCRE), gives rise to the concept of a ‘carbon budget’. That is, a finite amount of carbon that can be burnt whilst remaining below some chosen global temperature change threshold, such as the 2.0 °C target set by the Paris Agreement. Here we show that the path-independence of TCRE arises from the partitioning ratio of anthropogenic carbon between the ocean and the atmosphere being almost the same as the partitioning ratio of enhanced radiative forcing between the ocean and space. That these ratios are so close in value is a coincidence unique to CO2. The simple model used here is underlain by many assumptions and simplifications but does reproduce key aspects of the climate system relevant to the path-independence of carbon budgets. Our results place TCRE and carbon budgets on firm physical foundations and therefore help validate the use of these metrics for climate policy.
  8. van der Ploeg, Frederick. “The safe carbon budget.” Climatic change 147.1-2 (2018): 47-59.  Cumulative emissions drive peak global warming and determine the carbon budget needed to keep temperature below 2 or 1.5 °C. This safe carbon budget is low if uncertainty about the transient climate response is high and risk tolerance (willingness to accept risk of overshooting the temperature target) is low. Together with energy costs, this budget determines the optimal carbon price and how quickly fossil fuel is abated and replaced by renewable energy. This price is the sum of the present discounted value of all future losses in aggregate production due to emitting one ton of carbon today plus the cost of peak warming that rises over time to reflect the increasing scarcity of carbon as temperature approaches its upper limit. If policy makers ignore production losses, the carbon price rises more rapidly. If they ignore the peak temperature constraint, the carbon price rises less rapidly. The alternative of adjusting damages upwards to factor in the peak warming constraint leads initially to a higher carbon price which rises less rapidly.
  9. Matthews, H. Damon, et al. “Focus on cumulative emissions, global carbon budgets and the implications for climate mitigation targets.” Environmental Research Letters 13.1 (2018): 010201. The Environmental Research Letters focus issue on ‘Cumulative Emissions, Global Carbon Budgets and the Implications for Climate Mitigation Targets’ was launched in 2015 to highlight the emerging science of the climate response to cumulative emissions, and how this can inform efforts to decrease emissions fast enough to avoid dangerous climate impacts. The 22 research articles published represent a fantastic snapshot of the state-of-the-art in this field, covering both the science and policy aspects of cumulative emissions and carbon budget research. In this Review and Synthesis, we summarize the findings published in this focus issue, outline some suggestions for ongoing research needs, and present our assessment of the implications of this research for ongoing efforts to meet the goals of the Paris climate agreement.
  10. Millar, Richard J., and Pierre Friedlingstein. “The utility of the historical record for assessing the transient climate response to cumulative emissions.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376.2119 (2018): 20160449. The historical observational record offers a way to constrain the relationship between cumulative carbon dioxide emissions and global mean warming. We use a standard detection and attribution technique, along with observational uncertainties to estimate the all-forcing or ‘effective’ transient climate response to cumulative emissions (TCRE) from the observational record. Accounting for observational uncertainty and uncertainty in historical non-CO2 radiative forcing gives a best-estimate from the historical record of 1.84°C/TtC (1.43–2.37°C/TtC 5–95% uncertainty) for the effective TCRE and 1.31°C/TtC (0.88–2.60°C/TtC 5–95% uncertainty) for the CO2-only TCRE. While the best-estimate TCRE lies in the lower half of the IPCC likely range, the high upper bound is associated with the not-ruled-out possibility of a strongly negative aerosol forcing. Earth System Models have a higher effective TCRE range when compared like-for-like with the observations over the historical period, associated in part with a slight underestimate of diagnosed cumulative emissions relative to the observational best-estimate, a larger ensemble mean-simulated CO2-induced warming, and rapid post-2000 non-CO2 warming in some ensemble members. This article is part of the theme issue ‘The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels’.
  11. Rogelj, Joeri, et al. “Estimating and tracking the remaining carbon budget for stringent climate targets.” Nature 571.7765 (2019): 335-342.  Research reported during the past decade has shown that global warming is roughly proportional to the total amount of carbon dioxide released into the atmosphere. This makes it possible to estimate the remaining carbon budget: the total amount of anthropogenic carbon dioxide that can still be emitted into the atmosphere while holding the global average temperature increase to the limit set by the Paris Agreement. However, a wide range of estimates for the remaining carbon budget has been reported, reducing the effectiveness of the remaining carbon budget as a means of setting emission reduction targets that are consistent with the Paris Agreement. Here we present a framework that enables us to track estimates of the remaining carbon budget and to understand how these estimates can improve over time as scientific knowledge advances. We propose that application of this framework may help to reconcile differences between estimates of the remaining carbon budget and may provide a basis for reducing uncertainty in the range of future estimates.

 

THIS POST IS AN ANALYSIS OF PIOMAS SEA ICE VOLUME 1979-2019 FOR THE CALENDAR MONTHS JANUARY TO SEPTEMBER.

 

REVISED 11/7/2019 WITH THANKS TO ANDY LEE ROBINSON AND TO THE FINE PEOPLE AT ARCTIC-SEA-ICE-GRAPHS [LINK] 

 

 

[LINK TO HOME PAGE OF THIS SITE]

 

FIGURE 1: SEA ICE VOLUME 1979-2019:  THOUSANDS OF CUBIC KM SEAICE-GIF

 

FIGURE 2: AVERAGE RATE OF DECLINE: THOUSANDS OF CUBIC KM/YEARDECLINERATE

 

FIGURE 3: CORRELATION & DETRENDED CORRELATION WITH TEMPERATURECORR-DETCORR

 

  1. Sea Ice Volume is calculated using the Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS, Zhang and Rothrock, 2003) and published as anomalies by the Polar Science Center of the University of Washington [LINK]. Anomalies for each day are calculated relative to the average over the 1979-2016 period for that day of the year to remove the annual cycle. The data as received for the calendar months January to September in the period 1979-2019 are displayed graphically in Figure 1. The red line through the data is a 3rd order polynomial regression curve. Sustained sea ice volume decline is seen for all nine months studied. As of this writing, data for all years in the study period are available for the calendar months January to September, a span that contains the seasonal maximum and minimum sea ice extents occurring in March and September respectively.
  2. The average rate of decline in sea ice volume in thousands of cubic kilometers per year for each of the nine calendar months studied is summarized graphically in Figure 2. It shows high rates of decline in winter and spring and lower rates of decline in summer and fall.
  3. The alarming decline in sea ice is thought to be driven by anthropogenic global warming (AGW) climate change and to be ecologically harmful to the region and to the world as a whole, as well as posing the possibility of initiating runaway global warming through a feedback caused by lost albedo. It is proposed that the dangerous rate of sea ice decline can and must be attenuated by taking climate action in the form of reducing or eliminating fossil fuel emissions.
  4. To determine whether the observed loss in sea ice volume can be attributed to AGW climate change, such that it can be attenuated with climate action in the form of reducing or eliminating fossil fuel emissions, we compute the correlation between AGW driven temperature rise and sea ice volume. The UAH satellite data for lower troposphere temperature over the North Polar Ocean region is used as the relevant temperature record. Since rising temperature is expected to cause declining sea ice volume, the attribution of sea ice loss to temperature requires a statistically significant negative correlation. The correlation between temperature and sea ice volume for each of the nine calendar months is shown with the blue line in Figure 3, and it does show statistically significant negative correlations between temperature and Arctic sea ice volume.
  5. However, source data correlations between x and y in time series derive both from the responsiveness of y to x at the time scale of interest and from shared long-term trends. These two effects can be separated by detrending both time series and then computing the correlation between the detrended series. When the trend effect is removed, only the responsiveness of y to x remains. This is why detrended correlation is a better measure of responsiveness than source data correlation, as explained very well by Alex Tolley in his Youtube lecture [LINK], and as illustrated in the sketch at the end of this list. That spurious correlations can be found in time series data when detrended analysis is not used is demonstrated with examples at the Tyler Vigen Spurious Correlation website [LINK].
  6. Accordingly, the correlations between the detrended series are computed and reported in the red line of Figure 3. There we find that without the shared opposing trends in the two time series, the correlation is essentially zero. None of the correlation in the source data survives into the detrended series indicating that the correlation is an artifact of shared trends and not an indication of responsiveness at an annual time scale. Thus we find no evidence in the data that the observed decline in sea ice volume can be attributed to AGW climate change.
  7. The continued attribution of sea ice dynamics, whether in extent, area, or volume, to AGW climate change (see history of attribution below) likely derives from the atmosphere bias of climate science, such that there is a tendency to explain all observed changes in the Arctic, such as sea ice melt, in terms of AGW climate change and to overlook the extensive geothermal heat sources in the Arctic, an area known to be geologically active. Some of the geological features of the Arctic, including the Mid Arctic Rift system and the Jan Mayen Trend, are described in related posts [LINK] [LINK] and in the graphic below.
  8. SUMMARY: The data do show declining Arctic sea ice in its various measures such as extent, area, thickness, age, and PIOMAS volume but without evidence for the assumed attribution of these changes to AGW climate change and therefore without support for the claim that these changes can be attenuated with climate action in the form of reducing or eliminating fossil fuel emissions. A historical list of such attributions of convenience that have eroded the credibility of climate science is provided below.
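To illustrate the detrending argument, here is a minimal sketch (not from the source post) using synthetic data: two series that share opposing linear trends but have no year-to-year relationship at all. The trend magnitudes and noise levels are illustrative assumptions, loosely scaled to the 1979-2019 study period.

```python
import numpy as np

def detrended_correlation(x, y):
    """Correlation between the residuals of OLS linear detrending of two series."""
    t = np.arange(len(x))
    x_res = x - np.polyval(np.polyfit(t, x, 1), t)
    y_res = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(x_res, y_res)[0, 1]

rng = np.random.default_rng(0)
t = np.arange(41)  # 41 years, like 1979-2019

# Independent noise around opposing trends: rising temperature and
# declining ice volume, with NO responsiveness built in.
temp = 0.05 * t + rng.normal(0.0, 0.3, len(t))   # deg C
ice = 30.0 - 0.3 * t + rng.normal(0.0, 1.0, len(t))  # thousand km^3

# Source correlation is strongly negative, purely from the shared trends.
print(f"source correlation:    {np.corrcoef(temp, ice)[0, 1]:+.3f}")
# Detrended correlation is near zero, exposing the absence of responsiveness.
print(f"detrended correlation: {detrended_correlation(temp, ice):+.3f}")
```

The strongly negative source correlation here is an artifact of the two trends; only the near-zero detrended correlation speaks to responsiveness at the annual time scale, which is the test applied to the PIOMAS data in Figure 3.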

 

bandicam 2019-07-01 16-29-44-526

 

 

 

A HISTORY OF ATTRIBUTION OF SEA ICE CHANGES TO AGW CLIMATE CHANGE

  1. The atmosphere bias of climate science in terms of its study of sea ice is evident in the following historical notes (items 2 to 12 below) on the continuing concern that AGW climate change is melting away the sea ice.
  2. 2004, ARCTIC CLIMATE IMPACT ASSESSMENT
    An unprecedented 4-year study of the Arctic shows that polar bears, walruses, and some seals are becoming extinct. Arctic summer sea ice may disappear entirely. Combined with a rapidly melting Greenland ice sheet, it will raise the sea level 3 feet by 2100 inundating lowlands from Florida to Bangladesh. Average winter temperatures in Alaska and the rest of the Arctic are projected to rise an additional 7 to 13 degrees over the next 100 years because of increasing emissions of greenhouse gases from human activities. The area is warming twice as fast as anywhere else because of global air circulation patterns and natural feedback loops, such as less ice reflecting sunlight, leading to increased warming at ground level and more ice melt. Native peoples’ ways of life are threatened. Animal migration patterns have changed, and the thin sea ice and thawing tundra make it too dangerous for humans to hunt and travel.
  3. 2004, RAPID ARCTIC WARMING BRINGS SEA LEVEL RISE
    The Arctic Climate Impact Assessment (ACIA) report says: increasing greenhouse gases from human activities is causing the Arctic to warm twice as fast as the rest of the planet; in Alaska, western Canada, and eastern Russia winter temperatures have risen by 2C to 4C in the last 50 years; the Arctic will warm by 4C to 7C by 2100. A portion of Greenland’s ice sheet will melt; global sea levels will rise; global warming will intensify. Greenland contains enough melting ice to raise sea levels by 7 meters; Bangkok, Manila, Dhaka, Florida, Louisiana, and New Jersey are at risk of inundation; thawing permafrost and rising seas threaten Arctic coastal regions; climate change will accelerate and bring about profound ecological and social changes; the Arctic is experiencing the most rapid and severe climate change on earth and it’s going to get a lot worse; Arctic summer sea ice will decline by 50% to 100%; polar bears will be driven towards extinction; this report is an urgent SOS for the Arctic; forest fires and insect infestations will increase in frequency and intensity; changing vegetation and rising sea levels will shrink the tundra to its lowest level in 21000 years; vanishing breeding areas for birds and grazing areas for animals will cause extinctions of many species; “if we limit emission of heat trapping carbon dioxide we can still help protect the Arctic and slow global warming”.
  4. 2007: THE ARCTIC IS SCREAMING
    Climate science declares that the low sea ice extent in the Arctic is the leading indicator of climate change. We are told that the Arctic “is screaming”, that Arctic sea ice extent is the “canary in the coal mine”, and that Polar Bears and other creatures in the Arctic are dying off and facing imminent extinction. Scientists say that the melting sea ice has set up a positive feedback system that would cause the summer melts in subsequent years to be greater and greater until the Arctic becomes ice free in the summer of 2012. We must take action immediately to cut carbon dioxide emissions from fossil fuels. http://chaamjamal.blogspot.com/2010/04/reference-less-ice-less-pups-bangkok.html
  5. 2007: THE ICE FREE ARCTIC CLAIMS GAIN MOMENTUM
    The unusual summer melt of Arctic sea ice in 2007 has encouraged climate science to warn the world that global warming will cause a steep decline in the amount of ice left in subsequent summer melts until the Arctic becomes ice free in summer and that could happen as soon as 2080 or maybe 2060 or it could even be 2030. This time table got shorter and shorter until, without a “scientific” explanation, the ice free year was brought up to 2013. In the meantime, the data showed that in 2008 and 2009 the summer melt did not progressively increase as predicted but did just the opposite by making a comeback in 2008 that got even stronger in 2009. More info: http://chaamjamal.blogspot.com/2009/12/reference-polar-ice-may-go-in-five.html
  6. 2008: POSITIVE FEEDBACK: ARCTIC SEA ICE IN A DOWNWARD SPIRAL
    Our use of fossil fuels is devastating the Arctic where the volume of sea ice “fell to its lowest recorded level to date” this year and that reduced ice coverage is causing a non-linear acceleration in the loss of polar ice because there is less ice to reflect sunlight. More info: http://chaamjamal.blogspot.com/2008/12/reference-poznan-only-marking-time.html
  7. 2008: THE ARCTIC WILL BE ICE FREE IN SUMMER IN 2008, 2013, or 2030
    Following the unusually low summer sea ice extent in the Arctic in 2007, the IPCC has taken note and has revised its projection of an ice free Arctic first from 2008 to 2013 and then again from 2013 to 2030. The way things are going it may be revised again to the year 2100. More info: http://chaamjamal.blogspot.com/2008/10/reference-arctic-to-be-ice-free-due-to.html
  8. 2008: GLOBAL WARMING IS THE CAUSE OF ALL ICE MELT EVENTS
    When there was a greater focus on Antarctica, climate scientists said that global warming was melting the West Antarctic Ice Shelf; but the melting was found to be localized, with an active volcano underneath it, and the attention of “melt forecast” climate science shifted to Arctic sea ice after an extensive summer melt was observed in September 2007. More info: http://chaamjamal.blogspot.com/2008/08/reference-arctic-sea-ice-drops.html
  9. 2009: SUMMER ARCTIC SEA ICE EXTENT IN 2009 THE 3RD LOWEST ON RECORD
    The second lowest was 2008 and the first lowest was 2007. This is not a trend that shows that things are getting worse. It shows that things are getting better and yet it is being sold and being bought as evidence that things are getting worse due to rising fossil fuel emissions. More info: http://chaamjamal.blogspot.com/2009/09/reference-arctic-sea-ice-pack-at-record.html
  10. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY 2029
    An alarm is raised that the extreme summer melt of Arctic sea ice in 2007 was caused by humans using fossil fuels and it portends that in 20 years human caused global warming will leave the Arctic Ocean ice-free in the summer raising sea levels and harming wildlife. More info: http://chaamjamal.blogspot.com/2009/10/reference-arctic-will-be-ice-free-in-20.html
  11. 2009: THE ARCTIC WILL BE ICE FREE IN SUMMER BY THE YEAR 2012
    Climate scientists continue to extrapolate the extreme summer melt of Arctic sea ice in 2007 to claim that the summer melt of 2007 was a climate change event and that it implies that the Arctic will be ice free in the summer from 2012 onwards. This is a devastating effect on the planet and our use of fossil fuels is to blame. More info: http://chaamjamal.blogspot.com/2009/10/reference-effects-of-arctic-warming.html
  12. 2009: THE SUMMER SEA ICE EXTENT IN THE ARCTIC WILL BE GONE
    Summer melt of Arctic ice was the third most extensive on record in 2009, second 2008, and the most extensive in 2007. These data show that warming due to our carbon dioxide emissions are causing summer Arctic ice to gradually diminish until it will be gone altogether. More info: http://chaamjamal.blogspot.com/2009/12/reference-warming-trend-is-clear.html

 

 

 

 

 

THIS POST IS A TRANSCRIPTION WITH CRITICAL COMMENTARY OF A YOUTUBE VIDEO (ABOVE) WHICH SAYS THAT GREENLAND IS MELTING AWAY AT THE “MIND BLOWING” RATE OF “8500 TONS PER SECOND, EVERY SECOND”. 

 

[HOME PAGE OF THIS SITE]

 

  1. TRANSCRIPTION: The humpbacks have returned to the shores of western Greenland. They bounced back from near extinction thanks to an international effort to stop their slaughter, and here they are feeding at the edge of Greenland’s largest glacier. The Jakobshavn glacier stretches inland for around 40 miles. But for how long? The melt here in Greenland hit record levels in the summer (of 2019). Greenland’s ice cap, which holds about 8% of the world’s fresh water, just lost 12.5 billion tonnes of ice in a single day. It raised sea levels globally. That melt, on August 2nd 2019, was the largest single day loss in recorded history.
  2. The entire ice sheet that covers Greenland contains enough ice to raise sea levels across the globe by 20 feet if it melted. Climate models say that won’t happen for a while but consider this: the summer’s level of ice melt wasn’t supposed to happen for another 50 years. Greenland is now losing so much ice that it is shaping the world in such a way that you cannot ignore it any more.
  3. Question: But doesn’t the world naturally warm up and melt ice in interglacials? Answer: Laughter! Greenland is losing approximately 8,500 tonnes of ice per second day in day out around the clock, 8,500 tonnes of ice per second every second! Mind blowing! Hard to…. that’s why it’s a concern!
  4. In fact over half the Arctic’s permanent ice has melted revealing a landscape that has been hidden for 40,000 years. Up the glacier an area of ice 10 times the size of the UK pouring billions of tonnes of water into the Atlantic Ocean every day of the year every year and the melt rate is gaining momentum meaning that unless we act soon, this incredible destruction of Greenland’s glaciers will be unstoppable.
  5. Disappearing sea ice means shipping routes like the Northwest Passage are opening up. That could cut freight miles but others say these waters should be protected as a world heritage site. We’re expecting more traffic from the metal industry, oil industry, tourism, and more traffic in the Northwest Passage.
  6. Question: But ships are safe now. The fact that you can open something up to shipping doesn’t mean necessarily that the area will be destroyed, does it? Answer: Right here in this pristine area you can use heavy fuel oil. Once you bring heavy fuel oils into this area there could be a catastrophe of enormous dimensions.
  7. Moments later as if on cue we suddenly have company – diesel powered company (a boatload of tourists arrive). The problem is these Arctic waters are some of the most fragile marine environments on the planet; and there is no control about the numbers of luxury cruise ships driving through it.
  8. (Natives singing and dancing performing for tourists) In 17 degrees summer heat it is t-shirt weather. It’s hot work performing in seal skin jackets and polar bear trousers. Traditions of the past played out for tourists. But the old way of life is disappearing partly due to climate change. Rainfall, unusual in the High Arctic, is triggering landslides. This village is entirely built on permafrost but its foundations are melting in the summer. Now more than half the town will need to be demolished and rebuilt. 75-year-old Inuit Joseph Manumina is a village elder. He’s seen the glaciers melt at a rapid rate and he says “The sun is warmer now than it has been and that is the reason for the melt”. It was not like this before.
  9. A young Inuit speaks: We can see this time of year. That is a benefit for us because we can travel further and faster by boat but when it is getting dark in the early winter like if we say when the sun comes down like in October and November that is where it starts, we cannot go up by boat because we cannot see anything and sea ice will be too thin to go out by dogsled. That’s where the problem is.
  10. So the dog population reduces year by year. Without sea ice you can’t use sled dogs so they shoot the dogs.
  11. Tourism is developing a new economy here. Whale watching! Some Inuits are still allowed to hunt whales but many find that there is more money to be made in the whale watching tourism business rather than harpooning. But the tourists are also coming for the Arctic surroundings and as Greenland melts four times faster than previously predicted the future of the humpbacks and us humans around the world are in jeopardy.

 

  1. RESPONSES: CLAIM: The humpbacks have returned to the shores of western Greenland. They bounced back from near extinction thanks to an international effort to stop their slaughter and here they are feeding at the edge of Greenland’s largest glacier. RESPONSE: The only apparent relevance of this introduction is to set an environmental and ecological high ground for the climate change discussion to follow.
  2. CLAIM: “The melt here in Greenland hit record levels in the summer (of 2019) and lost 12.5 billion tonnes of ice in a single day”. RESPONSE: The melt reported for a single day is acknowledged as an extreme event and cannot be treated as an average; but even if it were an average, at 12.5 gigatonnes per day every day Greenland would lose about 4,563 gigatonnes per year, which would raise sea level by 0.417 inches per year, and at that rate the whole of the ice sheet would be gone in 576 years (see the arithmetic sketch at the end of this list). Even more important is that this melt rate occurred on a single extreme day in August, when the most rapid melt occurs, whereas in the winter months Greenland gains ice. The extreme one-day summer melt event, though presented in alarming tone and language, does not appear to provide reason for the alarm.
  3. CLAIM: The entire ice sheet that covers Greenland contains enough ice to raise sea levels across the globe by 20 feet if it melted. RESPONSE: Yes it would, but as noted in the prior response, even at the extreme melt rate of 12.5 gigatonnes per day, every day, summer and winter, it would take 576 years to melt the whole of the ice sheet, raising sea levels by 0.417 inches per year until the scary level of 20 feet is reached 576 years from now. The fear mongering is presented with great emotion and strong language but with little substance in the data offered as the rationale for the fear.
  4. CLAIM: Greenland is losing 8,500 tonnes of ice per second every second of the day every day of the year. RESPONSE: Year-round ice loss is normally not possible, as Greenland loses ice in summer but gains ice in winter; but even if it were possible, 8,500 tonnes per second amounts to about 268 gigatonnes per year, a rate at which the Greenland ice sheet would be gone in roughly 9,800 years, raising sea level by 0.024 inches per year in the meantime. The data presented do not support the fear that is being sold in terms of the data.
  5. CLAIM: In fact over half the Arctic’s permanent ice has melted revealing a landscape that has been hidden for 40,000 years. RESPONSE: The current interglacial is only about 10,000 years old, so the 40,000-year time span reaches roughly 30,000 years into the last glacial period and its glacial maximum. The comparison between glacial maximum and interglacial conditions may contain useful information about glaciation cycles, but it tells us nothing about Holocene ice dynamics. This claim, though meant to be a reason to fear fossil fuel driven AGW, does not provide any information that would serve as a reason to fear AGW.
  6. CLAIM: The opening of the Northwest Passage will allow commercial shipping into the Arctic and they use “heavy fuel oil” that poses a catastrophic ecological threat to the region. RESPONSE: The North Sea is a major producer of offshore oil that has been actively producing and shipping oil since 1980 and since that time oil production has gone up from 0.4 million barrels/day in 1980 to more than 1.8 million barrels per day today (2019). Significant shipping of oil, equipment, and personnel is seen in the region but there has been no significant ecological damage to the region caused by ships “using heavy fuel oil”. It appears that no climate fear could be attributed to the opening of the Northwest Passage and so ecological fear was attempted but the argument does not make sense in light of what we know about shipping. 
  7. CLAIM: A young Inuit says that dog sleds can no longer be used because the sea ice is too thin and so they have to shoot the dogs. RESPONSE: That may be so but it can’t be blamed on AGW as described in three related posts [LINK] [LINK] [LINK] that describe the overlooked role of geological activity in the Arctic. 
  8. CLAIM: 75-year-old Inuit Joseph Manumina is a village elder. He’s seen the glaciers melt at a rapid rate and he says “The sun is warmer now than it has been and that is the reason for the melt”. It was not like this before. RESPONSE: Figure 1 below shows the temperature at Nuuk, Greenland from 1866 to 2013 for each calendar month. A significant rate of warming is not seen, particularly in the summer months. The absence of a strong warming trend in summer is made clearer in Figure 2, where we see that the overall warming trend derives mostly from warming in the early portion of the time span, 1866 to 1939, with very little warming and even some cooling in the second half, 1940 to 2013. These data do not support the village elder’s claim that in his 75-year lifetime he has seen rapid warming that is causing the ice to melt.
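As a quick check on the arithmetic used in responses 2 and 4 above, here is a back-of-envelope sketch (not from the source post); the total ice mass of roughly 2.63 million gigatonnes and the 20-foot full-melt sea level equivalent are the round figures assumed in those responses.

```python
# Back-of-envelope check of the melt-rate arithmetic above.
GT_TOTAL = 2.63e6    # Greenland ice sheet mass in gigatonnes (round figure assumed)
SLR_TOTAL = 20 * 12  # full-melt sea level equivalent in inches (20 feet)

# Response 2: the record one-day melt of 12.5 Gt, treated
# (unrealistically) as a year-round average.
gt_per_year = 12.5 * 365                        # ~4,563 Gt/year
print(GT_TOTAL / gt_per_year)                   # ~576 years to melt entirely
print(SLR_TOTAL / (GT_TOTAL / gt_per_year))     # ~0.417 inches/year of sea level rise

# Response 4: 8,500 tonnes per second, every second of the year.
gt_per_year = 8500 * 86400 * 365 / 1e9          # ~268 Gt/year
print(GT_TOTAL / gt_per_year)                   # ~9,800 years to melt entirely
print(SLR_TOTAL / (GT_TOTAL / gt_per_year))     # ~0.024 inches/year of sea level rise
```

Both results reproduce the figures quoted in the responses, confirming that even the alarming rates cited in the video imply melt horizons of centuries to millennia.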

 

FIGURE 1: NUUK TEMPERATURES 1866-2013 FOR EACH CALENDAR MONTHNUUKGIF

FIGURE 2: NUUK SEASONAL CYCLE AND WARMING RATES

 

 

 

SUMMARY: At current sea level rise forecasts, it was projected that 110 million people would be affected by coastal high tide flooding events by the year 2100; but new and improved DEM data for coastal land elevation show that coastal lands are not as high as we had thought, so the number of people affected by high tide flood events at the same rate of sea level rise will be higher: maybe 190 million, or somewhere between 150 and 250 million. A problem with that assessment is that the large uncertainty in coastal land elevation data may mean that we don’t really know what the coastal land elevation actually is.

 

  1. How the media sees it: November 1, 2019 at 6:37 a.m. GMT+7: HERE’S ANOTHER piece of evidence that climate change might be worse than scientists previously predicted. The seas are rising, and will continue to rise, because hotter temperatures melt land-based ice and expand the volume existing ocean water takes up. But while much study has examined the shift in amount and warmth of seawater humans will face, there is another variable scientists must get right to assess the risk to humanity: just how many people live in low-lying areas. A new paper suggests previous estimates of land elevation — and, therefore, the number of at-risk people — were wrong. The study, published Tuesday in the journal Nature Communications, corrects satellite elevation data, and it “reveals a developed global coastline three times more exposed to extreme coastal water levels than previously thought,” the authors warn. Even under an optimistic scenario in which heat-warming greenhouse emissions are restrained and Antarctic ice sheets remain stable, “the global impacts of sea-level rise and coastal flooding this century will likely be far greater than indicated by the most pessimistic past analyses.” [LINK]
  2. TRANSLATION: Sadly, it looks like AGW climate change driven sea level rise won’t be as high and as scary as we were hoping for but there is still hope for us. What if coastal lands are not as high as we think they are? That would cause the same degree of devastation at the lower sea level rise that we now have to live with. All those people in Bangladesh and elsewhere living close to sea level will die and it will all be your fault for using fossil fuels. [RELATED POST] .
  3. What their source paper says: Article Open Access Published: 29 October 2019: New elevation data triple estimates of global vulnerability to sea-level rise and coastal flooding, Scott A. Kulp & Benjamin H. Strauss, Nature Communications volume 10, Article number: 4844 (2019) Cite this article. ABSTRACT: Most estimates of global mean sea-level rise this century fall below 2 m. This quantity is comparable to the positive vertical bias of the principle digital elevation model (DEM) used to assess global and national population exposures to extreme coastal water levels, NASA’s SRTM. CoastalDEM is a new DEM utilizing neural networks to reduce SRTM error. Here we show – employing CoastalDEM—that 190 M people (150–250 M, 90% CI) currently occupy global land below projected high tide lines for 2100 under low carbon emissions, up from 110 M today, for a median increase of 80 M. These figures triple SRTM-based values. Under high emissions, CoastalDEM indicates up to 630 M people live on land below projected annual flood levels for 2100, and up to 340 M for mid-century, versus roughly 250 M at present. We estimate one billion people now occupy land less than 10 m above current high tide lines, including 250 M below 1 m.
  4. TRANSLATION: Using a new digital elevation model (CoastalDEM) that corrects errors in NASA’s SRTM elevation data, we find that land is not as high as we thought it was. We used to think that land was high enough that only 110 million people were at risk of suffering from coastal flooding at current SLR projections for the year 2100. But the new land elevation data show that land is lower than we thought, such that at the same sea level rise, 190 million people are at risk of coastal flooding at current SLR projections for the year 2100. The uncertainty in this projection spans a range of {190-40=150} to {190+60=250} million people affected at a 90% confidence interval. Subtracting 40 million for the low end of the confidence interval and adding 60 million for the high end makes it a little scarier, we thought.
  5. UNCERTAINTY: The Zhang et al. 2019 paper shows that the error in the DEM estimations may be even higher, with values ranging from 1.74 to 14.29 meters, while the Dai et al. 2019 paper finds a much lower uncertainty, within 2 meters, with much less scary predictions. Although uncertainty measures the extent of our ignorance, the advantage of uncertainty in climate alarmism is that the less we know, the wider the 90% confidence interval gets, and the scarier climate change becomes, as explained in paragraph#5 of this related post [LINK] and illustrated in the sketch at the end of this list: {Activism needs of researchers also corrupt how the statistical property of variance is viewed and interpreted. In statistics and also in information theory, high variance implies low information content. In other words, the higher the variance the less we know. In this context high variance is undesirable because it degrades the information we can derive from the data. However, high variance also yields large confidence intervals and where either end of this confidence interval is extreme enough to support the activism needs of climate researchers, high variance is interpreted not as absence of information but as information about a danger of how extreme it could be. This interpretation in conjunction with the precautionary principle leads to a perverse interpretation of uncertainty such that uncertainty becomes transformed into certainty of extreme values}. In other words, THE LESS WE KNOW THE SCARIER IT GETS.
  6. An alternative methodology that bypasses the uncertainty problem in SRTM v4.1 and MERIT DEM data was suggested in a WUWT comment by Hans Erren as follows: “November 3, 2019 at 4:46 am. A simple solution springs to my mind: use sea level gauges in coastal areas and do not use satellites at all” [LINK]. The Hans Erren insight is that an unnecessary complexity is imposed on the study of high tide floods at specific locations by the reliance of climate science on global mean eustatic sea level (GMES). It is true that sea level rise and its overall impacts should be studied in terms of GMES, but the study of localized events in terms of global data creates an unnecessary complication that introduces layers of uncertainty that do not exist in local data. To understand localized high tide floods as a function of GMES requires land elevation data on a standardized global scale. By contrast, to understand localized high tide flood events as a function of local sea level is a very simple exercise that does not require uncertain satellite measures of land elevation. Since the coastal areas at risk have already been identified, and their tidal gauge data and high tide flood events are recorded, much greater precision in forecasting future high tide floods can be realized if these at-risk areas are studied separately instead of first translating local data into global data and then back again. The number of people at risk at each coastal area can then be assessed, and the global number estimated by summation. The use of global GMES and DEM data to understand localized phenomena is the source of the large uncertainties that ultimately erode the utility of such findings.
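To put a number on the “less we know, scarier it gets” point, here is a toy sketch (not from the source post) showing how, under a normal-distribution assumption and a fixed central estimate of 190 million people, a larger standard deviation alone pushes the upper bound of a 90% confidence interval to scarier values. All numbers are illustrative assumptions.

```python
# Toy illustration: a two-sided 90% confidence interval under a
# normal assumption is center +/- 1.645 * sigma, so the upper bound
# grows with sigma even though the central estimate never changes.
center = 190.0  # million people exposed (central estimate)
z90 = 1.645     # two-sided 90% normal quantile

for sigma in (10.0, 30.0, 60.0):
    lo = center - z90 * sigma
    hi = center + z90 * sigma
    print(f"sigma = {sigma:4.0f}M -> 90% CI: {lo:5.0f}M to {hi:5.0f}M")
```

The wider interval carries less information, not more, yet its upper bound reads as a bigger headline number – which is the perverse interpretation of uncertainty described in the quoted passage above.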

 

BIBLIOGRAPHY

  1. Kulp, Scott A., and Benjamin H. Strauss. “CoastalDEM: A global coastal digital elevation model improved from SRTM using a neural network.” Remote sensing of environment 206 (2018): 231-239.  Positive vertical bias in elevation data derived from NASA’s Shuttle Radar Topography Mission (SRTM) is known to cause substantial underestimation of coastal flood risks and exposure. Previous attempts to correct SRTM elevations have used regression to predict vertical error from a small number of auxiliary data products, but these efforts have been focused on reducing error introduced solely by vegetative land cover. Here, we employ a multilayer perceptron artificial neural network to perform a 23-dimensional vertical error regression analyses, where in addition to vegetation cover indices, we use variables including neighborhood elevation values, population density, land slope, and local SRTM deviations from ICESat altitude observations. Using lidar data as ground truth, we train the neural network on samples of US data from 1–20 m of elevation according to SRTM, and assess outputs with extensive testing sets in the US and Australia. Our adjustment system reduces mean vertical bias in the coastal US from 3.67 m to less than 0.01 m, and in Australia from 2.49 m to 0.11 m. RMSE is cut by roughly one-half at both locations, from 5.36 m to 2.39 m in the US, and from 4.15 m to 2.46 in Australia. Using ICESat data as a reference, we estimate that global bias falls from 1.88 m to −0.29 m, and RMSE from 4.28 m and 3.08 m. The methods presented here are flexible and effective, and can be effectively applied to land cover of all types, including dense urban development. The resulting enhanced global coastal DEM (CoastalDEM) promises to greatly improve the accuracy of sea level rise and coastal flood analyses worldwide.
  2. Kulp, Scott Andrew, and B. Strauss. “Improved elevation data more than doubles estimates of global coastal vulnerability to sea level rise.” AGU Fall Meeting Abstracts. 2018.  As sea levels rise and damaging storm surge becomes more intense and frequent, accurate flooding vulnerability assessments are essential to prepare coastal communities for the growing impacts and damage these threats may bring. A digital elevation model (DEM) is the foundation of such analyses, but large numbers of assessments performed outside of the United States use NASA’s SRTM, which has a multimeter mean vertical bias in the coastal zone globally – more than most sea level projections for this century. Here, we apply an improved global coastal elevation model we have developed using artificial neural newtorks, CoastalDEM, that reduces mean vertical bias to on the order of 10cm. A global vulnerability assessment with our new model suggests that SRTM has gravely underestimated coastal threats from sea level rise. Across multiple carbon emission pathways and sea level rise projection models, CoastalDEM predicts more than twice as many people living on land at risk of permanent inundation this century than SRTM does.
  3. Hirt, Christian. “Artefact detection in global digital elevation models (DEMs): The Maximum Slope Approach and its application for complete screening of the SRTM v4. 1 and MERIT DEMs.” Remote Sensing of Environment 207 (2018): 27-41.  Despite post-processing efforts by space agencies and research institutions, contemporary global digital elevation models (DEMs) may contain artefacts, i.e., erroneous features that do not exist in the actual terrain, such as spikes, holes and line errors. The goal of the present paper is to illuminate the artefact issue of current global DEM data sets that might be an obstacle for any geoscience study using terrain information. We introduce the Maximum Slope Approach (MSA) as a technique that uses terrain slopes as indicator to detect and localize spurious artefacts. The MSA relies on the strong sensitivity of terrain slopes for sudden steps in the DEM that is a direct feature of larger artefacts. In a numerical case study, the MSA is applied for globally complete screening of two SRTM-based 3 arc-second DEMs, the SRTM v4.1 and the MERIT-DEM. Based on 0.1° × 0.1° sub-divisions and a 5 m/m slope threshold, 1341 artefacts were detected in SRTM v4.1 vs. 108 in MERIT. Most artefacts spatially correlate with SRTM voids (and thus with the void-filling) and not with the SRTM-measured elevations. The strong contrast in artefact frequency (factor ~12) is attributed to the SRTM v4.1 hole filling. Our study shows that over parts of the Himalaya Mountains the SRTM v4.1 data set is contaminated by step artefacts where the use of this DEM cannot be recommended. Some caution should be exercised, e.g., over parts of the Andes and Rocky Mountains. The same holds true for derived global products that depend on SRTM v4.1, such as gravity maps. Primarily over the major mountain ranges, the MERIT model contains artefacts, too, but in smaller numbers. As a conclusion, globally complete artefact screening is recommended prior to the public release of any DEM data set. However, such a quality check should also be considered by users before using DEM data. MSA-based artefact screening is not only limited to DEMs, but can be applied as quality assurance measure to other gridded data sets such as digital bathymetric models or gridded physical quantities such as gravity or magnetics.
  4. Jain, Akshay O., et al. “Vertical accuracy evaluation of SRTM-GL1, GDEM-V2, AW3D30 and CartoDEM-V3. 1 of 30-m resolution with dual frequency GNSS for lower Tapi Basin India.” Geocarto international 33.11 (2018): 1237-1256Shuttle Radar Topography Mission (SRTM-GL1), Advanced Space Borne Thermal Emission and Reflection Radiometer- Global DEM (GDEM-V2), recently released Advanced Land Observing Satellite (‘DAICHI’) DEM (AW3D30) and Indian National Cartosat-1 DEM v3 (CartoDEM-V3.1) provide free topographic data at a 30-m resolution for Indian peninsula. In this research study, the vertical accuracy of DEM is evaluated for above data-sets and compared with high accuracy dual frequency GNSS of a millimetre accuracy. The extensive field investigation is carried out using a stratified random fast static DGPS survey for collecting 117 high accuracy ground control points in a predominantly agriculture catchment. Further, the effect of land cover, slope and low-lying coastal zone on DEM vertical accuracy was also analysed and presented in this study. The results for RMSE of terrain elevation are 2.88m, 5.46m, 2.45m and 3.52m for SRTM-GL1, GDEM-V2, AW3D30 and CartoDEM-V3.1 respectively. 
  5. Zhang, Keqi, et al. “Accuracy assessment of ASTER, SRTM, ALOS, and TDX DEMs for Hispaniola and implications for mapping vulnerability to coastal flooding.” Remote sensing of environment 225 (2019): 290-306Digital elevation models (DEMs) derived from remote sensing data provide a valuable and consistent data source for mapping coastal flooding at local and global scales. Mapping of flood risk requires quantification of the error in DEM elevations and its effect on delineation of flood zones. The ASTERSRTMALOS, and TanDEM-X (TDX) DEMs for the island of Hispaniola were examined by comparing them with GPS and LiDAR measurements. The comparisons were based on a series of error measures including root mean square error (RMSE) and absolute error at 90% quantile (LE90). When compared with >2000 GPS measurements with elevations below 7 m, RMSE and LE90 values for ASTER, SRTM, ALOS, TDX DEMs were 8.44 and 14.29, 3.82 and 5.85, 2.08 and 3.64, and 1.74 and 3.20 m, respectively. In contrast, RMSE and LE90 values for the same DEMs were 4.24 and 6.70, 4.81 and 7.16, 4.91 and 6.82, and 2.27 and 3.66 m when compared to DEMs from 150 km2 LiDAR data, which included elevations as high as 20 m. The expanded area with LiDAR coverage included additional types of land surface, resulting in differences in error measures. Comparison of RMSEs indicated that the filtering of TDX DEMs using four methods improved the accuracy of the estimates of ground elevation by 20–43%. DTMs generated by interpolating the ground pixels from a progressive morphological filter, using an empirical Bayesian kriging method, produced an RMSE of 1.06 m and LE90 of 1.73 m when compared to GPS measurements, and an RMSE of 1.30 m and LE90 of 2.02 m when compared to LiDAR data. Differences in inundation areas based on TDX and LiDAR DTMs were between −13% and −4% for scenarios of 3, 5, 10, and 15 m water level rise, a much narrower range than inundation differences between ASTER, SRTM, ALOS and LiDAR. The TDX DEMs deliver high resolution global DEMs with unprecedented elevation accuracy, hence, it is recommended for mapping coastal flood risk zones on a global scale, as well as at a local scale in developing countries where data with higher accuracy are unavailable.
  6. Dai, Chunli, et al. “Coastline extraction from repeat high resolution satellite imagery.” Remote Sensing of Environment 229 (2019): 260-270.  This paper presents a new coastline extraction method that improves water classification accuracy by benefitting from an ever-increasing volume of repeated measurements from commercial satellite missions. The widely-used Normalized Difference Water Index (NDWI) method is tested on a sample of around 12,600 satellite images for statistical analysis. The core of the new water classification method is the use of a water probability algorithm based on the stacking of repeat measurements, which can mitigate the effects of translational offsets of images and the classification errors caused by clouds and cloud shadows. By integrating QuickBirdWorldView-2 and WorldView-3 multispectral images, the final data product provides a 2 m resolution coastline, as well as a 2 m water probability map and a repeat-count measurement map. Improvements on the existing coastline (GSHHS-the Global Self-consistent, Hierarchical, High-resolution Shoreline Database, 50 m–5000 m) in terms of resolution (2 m) is substantial, thanks to the combination of multiple data sources.