Thongchai Thailand

Archive for February 2020


[Image gallery placeholder: bandicam screenshot 2020-02-26 14-32-17-900 — Images #1 through #6 referenced in the responses below]
CLAIM#1: Antarctic temperatures recently hit record highs twice in the space of one week. RESPONSE#1: These isolated temperature events were constrained in both space and time, occurring at a remote tip of the Antarctic Peninsula on specific days. They have no interpretation in terms of AGW climate change – a theory about the effect of fossil fuel emissions on long term warming trends in global mean temperature. In a related post it is shown that a more rational interpretation of such isolated temperature events in a geologically active region is provided by the known geothermal heat fluxes at these locations [LINK].

CLAIM#2: A heat wave melted around 20% of the snow on one of its islands, as illustrated by new NASA Earth Observatory images (Image#2 above). During the heat wave, around 4 inches of snow covering Eagle Island melted, and the highest temperature ever recorded on the continent was reached: 64.9 degrees Fahrenheit (18.3 degrees Celsius), the same as the temperature in Los Angeles on the same day. RESPONSE#2: AGW climate change is a theory about a long term warming trend in global mean temperature. Isolated temperature events of this nature in a remote corner of Antarctica have no interpretation in this context, and it has not been shown that the temperature events in the extreme north of the Antarctic Peninsula are a response to atmospheric forcing. The mean surface temperature of Antarctica does not show evidence of global warming [LINK]. A more likely source of the energy in such isolated events is geothermal heat, particularly when the location of the heat event is a geologically active area, as seen in the geothermal heat flux maps in Image#4 and Image#5. Yet another possibility is the warming and snowmelt effects of Foehn and Chinook winds (Image#6) that are known to occur in this region [LINK] [LINK].
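As a quick arithmetic check of the figures quoted in the claim, the standard Celsius-to-Fahrenheit conversion reproduces the reported record value (a minimal sketch; the function name is ours, for illustration only):

```python
# Verify the reported Antarctic record: 18.3 degrees C should equal 64.9 degrees F.
def celsius_to_fahrenheit(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(round(celsius_to_fahrenheit(18.3), 1))  # 64.9
```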

CLAIM#3: The images of Eagle Island, captured by NASA’s Landsat-8 satellite on February 4th and February 13th, reveal the startling difference nine days of record-breaking temperatures can make on the planet’s coldest continent. While the unusually high temperatures are themselves a cause for alarm, the picture of a snow-covered island transformed into one with melt pools and exposed rocky terrain has researchers concerned about the effects climate change is having on the region. You see these kinds of melt events in Alaska and Greenland, but not usually in Antarctica. RESPONSE#3: Geothermal heat induced ice and snow melt is common in the geologically active areas of Antarctica such as the tip of the Antarctic Peninsula. Consider also that the assumed causation by global warming is not supported by the temperature data, as shown in a related post [LINK]. The geothermal heat flux maps in Images 4&5 and the Foehn and Chinook wind events (Image#6) above provide a better explanation for such isolated and evanescent heat events than AGW global warming. These phenomena cannot be generalized across Antarctica because they are location specific, and the specific locations of these melt events coincide with known locations of geological activity.

CLAIM#4: Such dramatic snowmelt is the result of increased temperatures over a sustained period of time, which is thought to be the result of overall global temperatures rising. However, other conditions also influenced the sudden heat wave in the Antarctic climate, including unusually weak winds, which allowed a warm surge moving southwards from Chile to penetrate the continent. RESPONSE#4: As shown in a related post, there is no evidence of an AGW climate change warming trend in Antarctica [LINK]. As shown in another related post [LINK], West Antarctica, and particularly the Antarctic Peninsula, is a geologically active area where geothermal heat from the West Antarctic Rift and the Marie Byrd Mantle Plume, along with Foehn and Chinook wind events (Image#6), are more likely explanations of isolated and brief heat and melt events. Near the events described is Deception Island, where a volcanic eruption created a large hot spring lake now popular with tourists, as seen in the image below.

[Image placeholder: bandicam screenshot 2020-02-09 12-19-23-705 — hot spring lake on Deception Island]





  1. Scambos, Ted A., et al. “The link between climate warming and break-up of ice shelves in the Antarctic Peninsula.” Journal of Glaciology 46.154 (2000): 516-530.  A review of in situ and remote-sensing data covering the ice shelves of the Antarctic Peninsula provides a series of characteristics closely associated with rapid shelf retreat: deeply embayed ice fronts; calving of myriad small elongate bergs in punctuated events; increasing flow speed; and the presence of melt ponds on the ice-shelf surface in the vicinity of the break-ups. As climate has warmed in the Antarctic Peninsula region, melt-season duration and the extent of ponding have increased. Most break-up events have occurred during longer melt seasons, suggesting that meltwater itself, not just warming, is responsible. Regions that show melting without pond formation are relatively unchanged. Melt ponds thus appear to be a robust harbinger of ice-shelf retreat. We use these observations to guide a model of ice-shelf flow and the effects of meltwater. Crevasses present in a region of surface ponding will likely fill to the brim with water. We hypothesize (building on Weertman (1973), Hughes (1983) and Van der Veen (1998)) that crevasse propagation by meltwater is the main mechanism by which ice shelves weaken and retreat. A thermodynamic finite-element model is used to evaluate ice flow and the strain field, and simple extensions of this model are used to investigate crack propagation by meltwater. The model results support the hypothesis.
  2. Convey, P., et al. “The flora of the South Sandwich Islands, with particular reference to the influence of geothermal heating.” Journal of Biogeography 27.6 (2000): 1279-1295.  Data obtained in 1997 are combined with updated records from the only previous survey (in 1964) to provide a baseline description of the flora of the archipelago, which currently includes 1 phanerogam, 38 mosses, 11 liverworts, 5 basidiomycete fungi, 41 lichenised fungi and 16 diatoms with, additionally, several taxa identified only to genus. Major elements of the moss and liverwort floras are composed of South American taxa (32% and 73%, respectively), with a further 45% of mosses having bipolar or cosmopolitan distributions. These two groups show low levels of Antarctic endemicity (11% and 18%, respectively). In contrast, 52% of lichens and 80% of basidiomycete fungi are endemic to the Antarctic. A further 36% of lichens are bipolar/cosmopolitan, with only 5% of South American origin. The flora of the South Sandwich Islands is clearly derived from those of other Antarctic zones. The flora of unheated ground is closely related to that of the maritime Antarctic, although with a very limited number of species represented. That of heated ground contains both maritime and sub‐Antarctic elements, confirming the importance of geothermal heating for successful colonisation of the latter group. The occurrence of several maritime Antarctic species only on heated ground confirms the extreme severity of the archipelago’s climate in comparison with well‐studied sites much further south in this biogeographical zone.
  3. Smith, RI Lewis. “The bryophyte flora of geothermal habitats on Deception Island, Antarctica.” The Journal of the Hattori Botanical Laboratory 97 (2005): 233-248.  Deception Island is one of the most volcanically active sites south of 60°S. Between 1967 and 1970 three major eruptions devastated large expanses of the landscape and its predominantly cryptogamic vegetation. Since 1970 extensive recolonisation has occurred on the more stable surfaces. Unheated ground supports several bryophyte and lichen communities typical of much of the maritime Antarctic, but geothermal habitats possess remarkable associations of bryophytes, many of the species being unknown or very rare elsewhere in the Antarctic. Nine geothermal sites were located and their vegetation investigated in detail. Communities associated with more transient sites have disappeared when the geothermal activity ceased. Mosses and liverworts occur to within a few centimetres of fumarole vents where temperatures reach 90-95℃, while temperatures within adjacent moss turf can reach 35-50℃ or more and remain consistently between 25 and 45℃. Most of the bryoflora has a Patagonian-Fuegian provenance and it is presumed that, unlike most species, the thermophiles are not pre-adapted to the Antarctic environment, being able to colonise only where the warm and humid conditions prevail.
  4. Vieira, Gonçalo, et al. “Geomorphological observations of permafrost and ground-ice degradation on Deception and Livingston Islands, Maritime Antarctica.” (2008): 1939-1844. The Antarctic Peninsula is experiencing one of the fastest increases in mean annual air temperatures (ca. 2.5oC in the last 50 years) on Earth. If the observed warming trend continues as indicated by climate models, the region could suffer widespread permafrost degradation. This paper presents field observations of geomorphological features linked to permafrost and ground-ice degradation at two study areas: northwest Hurd Peninsula (Livingston Island) and Deception Island along the Antarctic Peninsula. These observations include thermokarst features, debris flows, active-layer detachment slides, and rockfalls. The processes observed may be linked not only to an increase in temperature, but also to increased rainfall, which can trigger debris flows and other processes. On Deception Island some thermokarst (holes in the ground produced by the selective melting of permafrost)  features may be related to anomalous geothermal heat flux from volcanic activity.
  5. Mulvaney, Robert, et al. “Recent Antarctic Peninsula warming relative to Holocene climate and ice-shelf history.” Nature 489.7414 (2012): 141-144.  Rapid warming over the past 50 years on the Antarctic Peninsula is associated with the collapse of a number of ice shelves and accelerating glacier mass loss1,2,3,4,5,6,7. In contrast, warming has been comparatively modest over West Antarctica and significant changes have not been observed over most of East Antarctica8,9, suggesting that the ice-core palaeoclimate records available from these areas may not be representative of the climate history of the Antarctic Peninsula. Here we show that the Antarctic Peninsula experienced an early-Holocene warm period followed by stable temperatures, from about 9,200 to 2,500 years ago, that were similar to modern-day levels. Our temperature estimates are based on an ice-core record of deuterium variations from James Ross Island, off the northeastern tip of the Antarctic Peninsula. We find that the late-Holocene development of ice shelves near James Ross Island was coincident with pronounced cooling from 2,500 to 600 years ago. This cooling was part of a millennial-scale climate excursion with opposing anomalies on the eastern and western sides of the Antarctic Peninsula. Although warming of the northeastern Antarctic Peninsula began around 600 years ago, the high rate of warming over the past century is unusual (but not unprecedented) in the context of natural climate variability over the past two millennia. The connection shown here between past temperature and ice-shelf stability suggests that warming for several centuries rendered ice shelves on the northeastern Antarctic Peninsula vulnerable to collapse. Continued warming to temperatures that now exceed the stable conditions of most of the Holocene epoch is likely to cause ice-shelf instability to encroach farther southward along the Antarctic Peninsula.
  6. Fraser, Ceridwen I., et al. “Geothermal activity helps life survive glacial cycles.” Proceedings of the National Academy of Sciences 111.15 (2014): 5634-5639.  The evolution and maintenance of diversity through cycles of past climate change have hinged largely on the availability of refugia (places where life can survive through a period of unfavorable conditions such as glaciation). Geothermal refugia may have been particularly important for survival through past glaciations. Our spatial modeling of Antarctic biodiversity indicates that some terrestrial groups likely survived throughout intense glacial cycles on ice-free land or in sub-ice caves associated with areas of geothermal activity, from which recolonization of the rest of the continent took place. These results provide unexpected insights into the responses of various species to past climate change and the importance of geothermal regions in promoting biodiversity. Furthermore, they indicate the likely locations of biodiversity “hotspots” in Antarctica, suggesting a critical focus for future conservation efforts.
  7. An, Meijian, et al. “Temperature, lithosphere‐asthenosphere boundary, and heat flux beneath the Antarctic Plate inferred from seismic velocities.” Journal of Geophysical Research: Solid Earth 120.12 (2015): 8720-8742.  We estimate the upper mantle temperature of the Antarctic Plate based on the thermoelastic properties of mantle minerals and S velocities using a new 3‐D shear velocity model, AN1‐S. Crustal temperatures and surface heat fluxes are then calculated from the upper mantle temperature assuming steady state thermal conduction. The temperature at the top of the asthenosphere beneath the oceanic region and West Antarctica is higher than the dry mantle solidus, indicating the presence of melt. From the temperature values, we generate depth maps of the lithosphere‐asthenosphere boundary and the Curie temperature isotherm. The maps show that East Antarctica has a thick lithosphere similar to that of other stable cratons, with the thickest lithosphere (~250 km) between Domes A and C. The thin crust and lithosphere beneath West Antarctica are similar to those of modern subduction‐related rift systems in East Asia. A cold region beneath the Antarctic Peninsula is similar in spatial extent to that of a flat‐subducted slab beneath the southern Andes, indicating a possible remnant of the Phoenix Plate, which was subducted prior to 10 Ma. The oceanic lithosphere generally thickens with increasing age, and the age‐thickness correlation depends on the spreading rate of the ridge that formed the lithosphere. Significant flattening of the age‐thickness curves is not observed for the mature oceanic lithosphere of the Antarctic Plate.
  8. Dziadek, Ricarda, et al. “Geothermal heat flux in the Amundsen Sea sector of West Antarctica: New insights from temperature measurements, depth to the bottom of the magnetic source estimation, and thermal modeling.” Geochemistry, Geophysics, Geosystems 18.7 (2017): 2657-2672. [FULL TEXT]  Focused research on the Pine Island and Thwaites glaciers, which drain the West Antarctic Ice Shelf (WAIS) into the Amundsen Sea Embayment (ASE), revealed strong signs of instability in recent decades that result from a variety of reasons, such as inflow of warmer ocean currents and reverse bedrock topography, and has been established as the Marine Ice Sheet Instability hypothesis. Geothermal heat flux (GHF) is a poorly constrained parameter in Antarctica and suspected to affect basal conditions of ice sheets, i.e., basal melting and subglacial hydrology. Thermomechanical models demonstrate the influential boundary condition of geothermal heat flux for (paleo) ice sheet stability. Due to a complex tectonic and magmatic history of West Antarctica, the region is suspected to exhibit strong heterogeneous geothermal heat flux variations. We present an approach to investigate ranges of realistic heat fluxes in the ASE by different methods, discuss direct observations, and 3‐D numerical models that incorporate boundary conditions derived from various geophysical studies, including our new Depth to the Bottom of the Magnetic Source (DBMS) estimates. Our in situ temperature measurements at 26 sites in the ASE more than triple the number of direct GHF observations in West Antarctica. We demonstrate by our numerical 3‐D models that GHF spatially varies from 68 up to 110 mW m−2.
  9. Martos, Yasmina M., et al. “Heat flux distribution of Antarctica unveiled.” Geophysical Research Letters 44.22 (2017): 11-417. [FULL TEXT]  Antarctica is the largest reservoir of ice on Earth. Understanding its ice sheet dynamics is crucial to unraveling past global climate change and making robust climatic and sea level predictions. Of the basic parameters that shape and control ice flow, the most poorly known is geothermal heat flux. Direct observations of heat flux are difficult to obtain in Antarctica, and until now continent‐wide heat flux maps have only been derived from low‐resolution satellite magnetic and seismological data. We present a high‐resolution heat flux map and associated uncertainty derived from spectral analysis of the most advanced continental compilation of airborne magnetic data. Small‐scale spatial variability and features consistent with known geology are better reproduced than in previous models, between 36% and 50%. Our high‐resolution heat flux map and its uncertainty distribution provide an important new boundary condition to be used in studies on future subglacial hydrology, ice sheet dynamics, and sea level change.
  10. Burton‐Johnson, Alex, et al. “A new heat flux model for the Antarctic Peninsula incorporating spatially variable upper crustal radiogenic heat production.” Geophysical Research Letters 44.11 (2017): 5436-5446.  A new method for modeling heat flux shows that the upper crust contributes up to 70% of the Antarctic Peninsula’s subglacial heat flux and that heat flux values are more variable at smaller spatial resolutions than geophysical methods can resolve. Results indicate a higher heat flux on the east and south of the Peninsula (mean 81 mW m−2) where silicic rocks predominate, than on the west and north (mean 67 mW m−2) where volcanic arc and quartzose sediments are dominant. While the data supports the contribution of heat‐producing element‐enriched granitic rocks to high heat flux values, sedimentary rocks can be of comparative importance dependent on their provenance and petrography. Models of subglacial heat flux must utilize a heterogeneous upper crust with variable radioactive heat production if they are to accurately predict basal conditions of the ice sheet. Our new methodology and data set facilitate improved numerical model simulations of ice sheet dynamics.



  1. Nylen, Thomas H., Andrew G. Fountain, and Peter T. Doran. “Climatology of katabatic winds in the McMurdo dry valleys, southern Victoria Land, Antarctica.” Journal of Geophysical Research: Atmospheres 109.D3 (2004).  Katabatic winds dramatically affect the climate of the McMurdo dry valleys, Antarctica. Winter wind events can increase local air temperatures by 30°C. The frequency of katabatic winds largely controls winter (June to August) temperatures, increasing 1°C per 1% increase in katabatic frequency, and it overwhelms the effect of topographic elevation (lapse rate). Summer katabatic winds are important, but their influence on summer temperature is less. The spatial distribution of katabatic winds varies significantly. Winter events increase by 14% for every 10 km up valley toward the ice sheet, and summer events increase by 3%. The spatial distribution of katabatic frequency seems to be partly controlled by inversions. The relatively slow propagation speed of a katabatic front compared to its wind speed suggests a highly turbulent flow. The apparent wind skip (down‐valley stations can be affected before up‐valley ones) may be caused by flow deflection in the complex topography and by flow over inversions, which eventually break down. A strong return flow occurs at down‐valley stations prior to onset of the katabatic winds and after they dissipate. Although the onset and termination of the katabatic winds are typically abrupt, elevated air temperatures remain for days afterward. We estimate that current frequencies of katabatic winds increase annual average temperatures by 0.7° to 2.2°C, depending on location. Seasonally, they increase (decrease) winter average temperatures (relative humidity) by 0.8° to 4.2° (−1.8 to −8.5%) and summer temperatures by 0.1° to 0.4°C (−0.9% to −4.1%). Long‐term changes of dry valley air temperatures cannot be understood without knowledge of changes in katabatic winds.
  2. Walker, Virginia K., Gerald R. Palmer, and Gerrit Voordouw. “Freeze-thaw tolerance and clues to the winter survival of a soil community.” Appl. Environ. Microbiol. 72.3 (2006): 1784-1792.  Although efforts have been made to sample microorganisms from polar regions and to investigate a few of the properties that facilitate survival at freezing or subzero temperatures, soil communities that overwinter in areas exposed to alternate freezing and thawing caused by Foehn or Chinook winds have been largely overlooked. We designed and constructed a cryocycler to automatically subject soil cultures to alternating freeze-thaw cycles. After 48 freeze-thaw cycles, control Escherichia coli and Pseudomonas chlororaphis isolates were no longer viable. Mixed cultures derived from soil samples collected from a Chinook zone showed that the population complexity and viability were reduced after 48 cycles. However, when bacteria that were still viable after the freeze-thaw treatments were used to obtain selected cultures, these cultures proved to be >1,000-fold more freeze-thaw tolerant than the original consortium. Single-colony isolates obtained from survivors after an additional 48 freeze-thaw cycles were putatively identified by 16S RNA gene fragment sequencing. Five different genera were recognized, and one of the cultures, Chryseobacterium sp. strain C14, inhibited ice recrystallization, a property characteristic of antifreeze proteins that prevents the growth of large, potentially damaging ice crystals at temperatures close to the melting temperature. This strain was also notable since cell-free medium derived from cultures of it appeared to enhance the multiple freeze-thaw survival of another isolate, Enterococcus sp. strain C8. The results of this study and the development of a cryocycler should allow further investigations into the biochemical and soil community adaptations to the rigors of a Chinook environment.
  3. Speirs, Johanna C., et al. “Foehn winds in the McMurdo Dry Valleys, Antarctica: The origin of extreme warming events.” Journal of Climate 23.13 (2010): 3577-3598.  Foehn winds resulting from topographic modification of airflow in the lee of mountain barriers are frequently experienced in the McMurdo Dry Valleys (MDVs) of Antarctica. Strong foehn winds in the MDVs cause dramatic warming at onset and have significant effects on landscape forming processes; however, no detailed scientific investigation of foehn in the MDVs has been conducted. As a result, they are often misinterpreted as adiabatically warmed katabatic winds draining from the polar plateau. Herein observations from surface weather stations and numerical model output from the Antarctic Mesoscale Prediction System (AMPS) during foehn events in the MDVs are presented. Results show that foehn winds in the MDVs are caused by topographic modification of south-southwesterly airflow, which is channeled into the valleys from higher levels. Modeling of a winter foehn event identifies mountain wave activity similar to that associated with midlatitude foehn winds. These events are found to be caused by strong pressure gradients over the mountain ranges of the MDVs related to synoptic-scale cyclones positioned off the coast of Marie Byrd Land. Analysis of meteorological records for 2006 and 2007 finds an increase of 10% in the frequency of foehn events in 2007 compared to 2006, which corresponds to stronger pressure gradients in the Ross Sea region. It is postulated that the intra- and interannual frequency and intensity of foehn events in the MDVs may therefore vary in response to the position and frequency of cyclones in the Ross Sea region.
  4. Steinhoff, Daniel Frederick. Dynamics and Variability of Foehn Winds in the McMurdo Dry Valleys, Antarctica. Diss. The Ohio State University, 2011.  The McMurdo Dry Valleys (“MDVs”) are the largest ice-free region in Antarctica, featuring perennially ice-covered lakes that are fed by ephemeral melt streams in the summer. The MDVs have been an NSF-funded Long-Term Ecological Research (LTER) site since 1993, and LTER research has shown that the hydrology and biology of the MDVs are extremely sensitive to small climatic fluctuations, especially during summer when temperatures episodically rise above freezing. However, the atmospheric processes that control MDVs summer climate, namely the westerly foehn and easterly sea-breeze regimes, are not well understood. The goals of this study are to (i) produce a coherent physical mechanism for the development and spatial extent of foehn winds in the MDVs, and (ii) determine aspects of large-scale climate variability responsible for intraseasonal and interannual differences in MDVs temperature. Polar WRF simulations are run for a prominent foehn case study at 500 m horizontal grid spacing to study the mesoscale components of foehn events, and 15 summers at 2 km horizontal grid spacing to analyze event and temporal variability. The Polar WRF simulations have been tailored for use in the MDVs through modifications to the input soil conditions, snow cover, land use, and sea ice. An objective foehn identification method is used to identify and categorize events, as well as validate the model against LTER AWS observations. The MDVs foehn mechanism consists of a gap wind through a topographic constriction south of the MDVs, forced by pressure differences on each side of the gap and typically set up by cyclonic flow over the Ross and Amundsen Seas. Significant mountain wave activity over the gap modulates the flow response over the MDVs themselves, and pressure-driven channeling drives foehn flow down-valley. During strongly forced events, mass accumulation east of the MDVs from flow around Ross Island is responsible for easterly intrusions, and not a thermally forced sea breeze as previously thought. A variety of ambient flow directions and associated synoptic-scale patterns can result in MDVs foehn, but adequate forcing is necessary to activate the foehn mechanism. The warmest foehn events are associated with amplified circulation patterns that are not associated with particular interannual modes of variability, but instead related to intraseasonal variability forced by the extratropical response to a stagnant MJO. Implications of the findings upon current MDVs paleoclimate theories on the existence of huge melt lakes at the LGM are also presented.
  5. Elvidge, Andrew. Polar föhn winds and warming over the Larsen C Ice Shelf, Antarctica. Diss. University of East Anglia, 2013.  Recent hypotheses that the foehn effect is partly responsible for warming to the east of the Antarctic Peninsula (AP) and enhanced melt rates on the Larsen C Ice Shelf are supported in a study combining the analysis of observational and high resolution model data. Leeside warming and drying during foehn events is observed in new aircraft, radiosonde and automatic weather station data and simulated by the UK Met Office Unified Model at ~1.5 km grid spacing (MetUM 1.5 km). Three contrasting cases are investigated. In Case A relatively weak southwesterly flow induces a nonlinear foehn event. Strongly accelerated flow above and a hydraulic jump immediately downwind of the lee slopes lead to high amplitude warming in the immediate lee of the AP, downwind of which the warming effect diminishes rapidly due to the upward ‘rebound’ of the foehn flow. Case C defines a relatively linear case associated with strong northwesterly winds. The lack of a hydraulic jump enables foehn flow to flood across the entire ice shelf at low levels. Melt rates are high due to a combination of large radiative heat flux, due to dry, clear leeside conditions, and sensible heat flux downward from the warm, well-mixed foehn flow. Climatological work suggests that such strong northwesterly cases are often responsible for high Larsen C melt rates. Case B describes a weak, relatively non-linear foehn event associated with insignificant daytime melt rates. Previously unknown jets – named polar foehn jets – emanating from the mouths of leeside inlets are identified as a type of gap flow. They are cool and moist relative to adjacent calmer regions, due to lower-altitude upwind source regions, and are characterised by larger turbulent heat fluxes both within the air column and at the surface. The relative importance of the three mechanisms deemed to induce leeside foehn warming (isentropic drawdown, latent heating and sensible heating) are quantified using a novel method analysing back trajectories and MetUM 1.5 km model output. It is shown that, depending on the linearity of the flow regime and the humidity of the air mass, each mechanism can dominate. This implies that there is no dominant foehn warming mechanism, contrary to the conclusions of previous work.

  6. Steinhoff, Daniel F., David H. Bromwich, and Andrew Monaghan. “Dynamics of the foehn mechanism in the McMurdo Dry Valleys of Antarctica from Polar WRF.” Quarterly Journal of the Royal Meteorological Society 139.675 (2013): 1615-1631.  Foehn events over the McMurdo Dry Valleys (MDVs), the largest ice‐free region of Antarctica, promote glacial melt that supports biological activity in the lakes, streams, rocks and soils. Although MDVs foehn events are known to depend upon the synoptic‐scale circulation, the physical processes responsible for foehn events are unknown. A polar‐optimized version of the Weather Research and Forecasting model (Polar WRF) is used for a case study of a representative summer foehn event from 29 December 2006 to 1 January 2007 in order to identify and explain the MDVs foehn mechanism. Pressure differences across an elevated mountain gap upstream of the MDVs provide forcing for southerly flow into the western, upvalley entrance of the MDVs. Complex terrain over the elevated gap and the MDVs leads to mountain wave effects such as leeside acceleration, hydraulic jumps, wave breaking and critical layers. These mountain wave effects depend on the ambient (geostrophic) wind direction. Pressure‐driven channelling then brings the warm, dry foehn air downvalley to eastern MDV sites. Brief easterly intrusions of maritime air into the eastern MDVs during foehn events previously have been attributed to either a sea‐breeze effect in summer or local cold‐pooling effects in winter. In this particular case, the easterly intrusions result from blocking effects of nearby Ross Island and the adjacent Antarctic coast. Temperature variability during the summer foehn event, which is important for meltwater production and biological activity when it exceeds 0°C, primarily depends on the source airmass rather than differences in foehn dynamics.
  7. Cape, M. R., et al. “Foehn winds link climate‐driven warming to ice shelf evolution in Antarctica.” Journal of Geophysical Research: Atmospheres 120.21 (2015): 11-037.  Rapid warming of the Antarctic Peninsula over the past several decades has led to extensive surface melting on its eastern side, and the disintegration of the Prince Gustav, Larsen A, and Larsen B ice shelves. The warming trend has been attributed to strengthening of circumpolar westerlies resulting from a positive trend in the Southern Annular Mode (SAM), which is thought to promote more frequent warm, dry, downsloping foehn winds along the lee, or eastern side, of the peninsula. We examined variability in foehn frequency and its relationship to temperature and patterns of synoptic‐scale circulation using a multidecadal meteorological record from the Argentine station Matienzo, located between the Larsen A and B embayments. This record was further augmented with a network of six weather stations installed under the U.S. NSF LARsen Ice Shelf System, Antarctica, project. Significant warming was observed in all seasons at Matienzo, with the largest seasonal increase occurring in austral winter (+3.71°C between 1962–1972 and 1999–2010). Frequency and duration of foehn events were found to strongly influence regional temperature variability over hourly to seasonal time scales. Surface temperature and foehn winds were also sensitive to climate variability, with both variables exhibiting strong, positive correlations with the SAM index. Concomitant positive trends in foehn frequency, temperature, and SAM are present during austral summer, with sustained foehn events consistently associated with surface melting across the ice sheet and ice shelves. These observations support the notion that increased foehn frequency played a critical role in precipitating the collapse of the Larsen B ice shelf.
  8. Elvidge, Andrew D., and Ian A. Renfrew. “The causes of foehn warming in the lee of mountains.” Bulletin of the American Meteorological Society 97.3 (2016): 455-466.  The foehn effect is well known as the warming, drying, and cloud clearance experienced on the lee side of mountain ranges during “flow over” conditions. Foehn flows were first described more than a century ago when two mechanisms for this warming effect were postulated: an isentropic drawdown mechanism, where potentially warmer air from aloft is brought down adiabatically, and a latent heating and precipitation mechanism, where air cools less on ascent—owing to condensation and latent heat release—than on its dry descent on the lee side. Here, for the first time, the direct quantitative contribution of these and other foehn warming mechanisms is shown. The results suggest a new paradigm is required after it is demonstrated that a third mechanism, mechanical mixing of the foehn flow by turbulence, is significant. In fact, depending on the flow dynamics, any of the three warming mechanisms can dominate. A novel Lagrangian heat budget model, back trajectories, high-resolution numerical model output, and aircraft observations are all employed. The study focuses on a unique natural laboratory—one that allows unambiguous quantification of the leeside warming—namely, the Antarctic Peninsula and Larsen C Ice Shelf. The demonstration that three foehn warming mechanisms are important has ramifications for weather forecasting in mountainous areas and associated hazards such as ice shelf melt and wildfires.
















CLAIM: Net zero means that EMISSIONS are balanced by ABSORPTION of an equivalent amount from the atmosphere. BioEnergy with Carbon Capture and Storage (BECCS) technology exists but it is not used because of cost and technical considerations. Therefore, “net” means net of natural photosynthesis, specifically additional natural photosynthesis that can be claimed in terms of specific action taken by humans (so that it can be described as human activity), such as reforestation and afforestation, where the CO2 removal time span is on the order of 50 to 100 years. In countries where incremental afforestation and reforestation opportunities are limited, this activity can be carried out overseas, particularly in poor third world countries with poor forest management. A more lucrative option for claiming ownership of CO2 removal by way of nature’s photosynthesis mechanism is the so called “blue carbon” found in shallow coastal waters in the equatorial region, where underwater plants such as seagrass and mangrove remove CO2 from the atmosphere by photosynthesis and sequester the carbon for thousands of years [LINK] . The relevant issue in this regard is the degradation of coastal ecosystems and their blue carbon sequestration due to human activity [RELATED POST ON BLUE CARBON] . For this reason, additional carbon sequestration that can be attributed to action taken by humans to reduce coastal ecosystem degradation can be claimed as CO2 removal in net emission accounting.

RESPONSE: AGW climate change is not a theory about how changes in the carbon cycle change the rate of warming. Such changes are natural and predate the Industrial Revolution, and they cannot be described as effects of the industrial economy. AGW climate change theory is about a warming trend attributed to the industrial economy, described as a perturbation of the carbon cycle by external carbon dug up by humans from under the ground, where it had been sequestered from the carbon cycle for millions of years, such that this external carbon is not part of the current account of the carbon cycle.

It is the injection of this external carbon by humans into the carbon cycle and the climate system that AGW climate change theory identifies as the cause of the current warming. The carbon emission accounting for net emissions, where carbon cycle flows and external artificial carbon flows are combined in simple addition and subtraction, is conceptually and mathematically flawed. The conceptual flaw is described above. The mathematical flaw is described in some detail in a related post [LINK] .

Briefly, the issue is that carbon cycle flows are an order of magnitude larger than the fossil fuel emissions of humans but they cannot be directly measured. They must therefore be inferred. This is why the estimation of carbon cycle flows contains large uncertainties. It is shown in the related post [LINK]  that when these uncertainties are included in the flow accounting, the relatively smaller fossil fuel emissions cannot be detected because the flow account balances with and without fossil fuel emissions within the statistical range implied by the uncertainty in carbon cycle flows.

The implication of this result for “net zero climate action” strategies is that the accounting reduction in carbon cycle emissions must be shown to be statistically significant when the uncertainty in the relevant carbon cycle flows is taken into account.

In view of the result that fossil fuel emissions as a whole are not detectable net of uncertainties in carbon cycle flows, it is highly unlikely that the smaller changes to the carbon cycle that are assumed to reduce net emissions will be found to be statistically significant when uncertainties are included in the accounting. In view of the arguments presented above, net zero climate action strategies, that is, fossil fuel emissions net of projected reductions in carbon cycle flows, cannot be assumed to be less than fossil fuel emissions until it can be shown that the difference is statistically detectable net of uncertainties in carbon cycle flows. This important detail is missing from the flow accounting used in the net zero computation.
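The detection argument above can be illustrated with a toy calculation in Python. All flow values and uncertainties below are assumptions chosen only for illustration, not measured values; the point is simply that when the natural flows carry large uncertainties, the budget balances within uncertainty both with and without the much smaller fossil fuel term.

```python
import math

# Assumed flows in GtC/year as (value, one-sigma uncertainty).
# These numbers are hypothetical and chosen only for illustration.
natural_sources = (210.0, 15.0)   # respiration, ocean outgassing, etc.
natural_sinks   = (212.0, 15.0)   # photosynthesis, ocean uptake, etc.
fossil_fuel     = (10.0, 0.5)     # human emissions, relatively well known

def balances(include_ff):
    """True if the net flow is indistinguishable from zero at ~2 sigma."""
    net = natural_sources[0] - natural_sinks[0]
    var = natural_sources[1] ** 2 + natural_sinks[1] ** 2
    if include_ff:
        net += fossil_fuel[0]
        var += fossil_fuel[1] ** 2
    return abs(net) < 2.0 * math.sqrt(var)

# The budget "balances" either way, so the fossil fuel term is not
# statistically detectable in this accounting.
print(balances(include_ff=False), balances(include_ff=True))
```

With these assumed numbers the combined two-sigma uncertainty of the natural flows is about 42 GtC/year, which swamps the 10 GtC/year fossil fuel term; both calls return True.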


CLAIM: The science of ‘carbon budgets’: Climate science is clear that to a close approximation, the eventual extent of global warming is proportional to the total amount of carbon dioxide that human activities add to the atmosphere.


RESPONSE: This claim is a reference to the proportionality between mean global surface temperature and cumulative emissions described by Damon Matthews and others since 2009. The strong correlation between temperature and cumulative emissions appears to support the validity of the regression coefficient of temperature against cumulative emissions, which implies that cumulative emissions drive warming at a rate of somewhere between 1C and 2.5C per teratonne of cumulative emissions. This coefficient is called the TCRE, or Transient Climate Response to Cumulative Emissions. Climate action plans in terms of carbon budgets are computed from the TCRE metric, and net zero climate action plans are designed in terms of the TCRE.
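The carbon budget arithmetic implied by the TCRE can be written down in a few lines. The specific numbers below are assumptions for illustration only: a TCRE of 1.6C per teratonne of carbon (within the 1C to 2.5C range quoted above), an assumed 1.1C of warming to date, and a 1.5C target.

```python
# Carbon budget implied by the TCRE: warming ~ TCRE * cumulative emissions,
# so the remaining budget is (target - warming so far) / TCRE.
# All inputs are illustrative assumptions, not official values.
tcre = 1.6             # degrees C per teratonne of carbon (TtC), assumed
warming_so_far = 1.1   # degrees C above pre-industrial, assumed
target = 1.5           # degrees C warming target

remaining_budget_ttc = (target - warming_so_far) / tcre
print(f"remaining budget = {remaining_budget_ttc:.2f} TtC "
      f"({remaining_budget_ttc * 1000:.0f} GtC)")  # 0.25 TtC, i.e. 250 GtC
```

This simple division is all a TCRE-based carbon budget amounts to, which is why the statistical validity of the TCRE coefficient itself carries the entire weight of the budget computation.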

Yet, as shown in a related post [LINK] , the strong proportionality between temperature and cumulative emissions found by climate scientists is a spurious correlation that has no interpretation in terms of phenomena in the real world. This implies that carbon budgets derived from the TCRE also have no interpretation in the real world [LINK] . It is this statistical flaw in the TCRE, and not the complexities of Earth System Models, that explains the Remaining Carbon Budget problem in climate science [LINK] {see also [LINK] }. The implication for Net Zero climate action plans is that net zero strategies that rely on the TCRE are illusory, being built on a spurious correlation that has no interpretation in the real world.








Other parts of the ECIU document are included below for reference:

Net zero: why is it necessary? A number of countries including the UK are making commitments to move to a net zero emissions economy. This is in response to climate science showing that in order to halt climate change, carbon emissions have to stop – reducing them is not sufficient. ‘Net zero’ means that any emissions are balanced by absorbing an equivalent amount from the atmosphere. In order to meet the global warming target in the Paris Agreement, global carbon emissions should reach net zero around mid-century. For developed nations such as the UK, the date may need to be earlier. Some have already set such dates.

The science of ‘carbon budgets’: Climate science is clear that to a close approximation, the eventual extent of global warming is proportional to the total amount of carbon dioxide that human activities add to the atmosphere. So, in order to stabilise climate change, CO2 emissions need to fall to zero. The longer it takes to do so, the more the climate will change. Emissions of other greenhouse gases also need to be constrained. In the Paris Agreement, governments agreed to keep global warming ‘well below’ 2 degrees Celsius, and to ‘make efforts’ to keep it below 1.5ºC. The Intergovernmental Panel on Climate Change (IPCC) released a report in October 2018 on the 1.5ºC target; it concluded that global emissions need to reach net zero around mid-century to give a reasonable chance of limiting warming to 1.5ºC.

Why ‘net zero’?

In many sectors of the economy, technologies exist that can bring emissions to zero. In electricity, it can be done using renewable and nuclear generation. A transport system that runs on electricity or hydrogen, well-insulated homes and industrial processes based on electricity rather than gas can all help to bring sectoral emissions to absolute zero.

However, in industries such as aviation the technological options are limited; in agriculture too it is highly unlikely that emissions will be brought to zero. Therefore some emissions from these sectors will likely remain; and in order to offset these, an equivalent amount of CO2 will need to be taken out of the atmosphere – negative emissions. Thus the target becomes ‘net zero’ for the economy as a whole. The term ‘carbon neutrality’ is also used.

Sometimes a net zero target is expressed in terms of greenhouse gas emissions overall, sometimes of CO2 only. The UK Climate Change Act now expresses its net zero emissions target by 2050 in terms of greenhouse gases overall.

Negative emissions

The only greenhouse gas that can easily be absorbed from the atmosphere is carbon dioxide. There are two basic approaches to extracting it: by stimulating nature to absorb more, and by building technology that does the job.

Increasing forest cover can help absorb carbon dioxide emissions. Image: Jon Sullivan, creative commons licence
Plants absorb CO2 as they grow, through photosynthesis. Therefore, all other things being equal, having more plants growing, or having plants growing faster, will remove more from the atmosphere. Two of the easiest and most effective approaches for negative emissions, then, are afforestation – planting more forest – and reforestation – replacing forest that has been lost or thinned. Technical options include BioEnergy with Carbon Capture and Storage (BECCS) (see our Negative Emissions briefing.)

Who is moving to net zero?

A number of countries have already set targets, or committed to do so, for reaching net zero emissions on timescales compatible with the Paris Agreement temperature goals. They include the UK, France, Spain, Denmark, Portugal, New Zealand, Chile, Costa Rica (2050), Sweden (2045), Iceland (2040), Finland (2035) and Norway (2030). The tiny Himalayan Kingdom of Bhutan and the most forested country on earth, Suriname, are already carbon-negative – they absorb more CO2 than they emit.

In addition, the European Union recently agreed measures that are likely to result in the bloc adopting a net zero target by 2050 at the latest.

The principle that rich nations should lead on climate change is enshrined in the UN climate convention that dates back to 1992, and was reconfirmed in the Paris Agreement. Therefore, if the science says ‘global net zero by mid-century’, there is a strong moral case for developed countries adopting an earlier date.

So far, the UK, France, Sweden and Norway have enshrined their net zero targets in national law. Other nations including Spain, Denmark, Chile and New Zealand are looking to do so.

Chris Skidmore, Interim Minister of State for Energy and Clean Growth, signs legislation to commit the UK to a legally binding target of net zero emissions by 2050.
In the UK

Immediately after the IPCC published its Special Report on 1.5°C in October 2018, the governments of the UK, Scotland and Wales asked its official advisers, the Committee on Climate Change (CCC), to provide advice on the UK and Devolved Administrations’ long-term targets for greenhouse gas emissions.

The CCC had previously indicated that the UK should be aiming for net zero emissions by 2045-2050 in order to be compatible with the 1.5ºC Paris Agreement goal.

The CCC delivered its advice in May 2019. Its high-level recommendations were:

For the UK, a new target: net-zero greenhouse gases by 2050 (up from the existing emissions reductions target of 80% from 1990 levels by 2050);
For Scotland, a net-zero date of 2045, ‘reflecting Scotland’s greater relative capacity to remove emissions than the UK as a whole’;
For Wales, a 95% reduction in greenhouse gases by 2050, reflecting it having ‘less opportunity for CO2 storage and relatively high agricultural emissions that are hard to reduce’.
The governments of Wales and Scotland swiftly accepted the CCC’s advice, and on 12 June 2019, the UK government laid a statutory instrument to amend the 80% target in the Climate Change Act 2008. Just over two weeks later, the new net zero target (100% from 1990 levels by 2050) was formally signed into law.

Only a matter of days before France could complete the feat, the UK had pipped them to it and become the first G7 country to legislate for net zero greenhouse gas emissions by 2050.











We human beings, homo sapiens, have been on this planet for a tiny fraction of the full history of life on earth. We are unlike other species in that we are able to have a global impact on our environment through our activities. We are now within this era known as the Anthropocene where human beings have sort of taken over, taken the reins from geological processes and natural processes in being the primary driver of changes in the earth system. And certainly the burning of fossil fuels and climate change is one primary example of that.

My name is Michael Mann. I am a professor at Penn State University and a climate researcher. In this MOOC I am going to lay down the fundamental scientific principles behind climate change and global warming. We need to understand the science in order to solve the societal, environmental, and economic problems that climate change is bringing.

We begin with the principles of atmospheric science. We will talk about how climate data are collected, the trends that these data show, and how we look for signals of climate change in the data. We’ll learn how to do basic computations and view theoretical models of the climate system to address questions about future climate change.


Finally we will discuss the impacts that climate change may have on the social, cultural, economic, urban, and other human systems. The science of climate change impacts tells us that once we warm the planet beyond about 2C relative to pre-industrial times, we are likely to see the most damaging and potentially irreversible climate change. 2C is probably a line that we don’t want to cross.


I am hoping that this course will help arm others with information and knowledge and resources that they can use down in the trenches as we all fight this collective battle to preserve our planet for future generations.





CLAIM: We are now in a geological epoch called the Anthropocene wherein humans have taken over from geological forces and humans are now the primary force that is reshaping the planet. RESPONSE: The crust of the planet, consisting of land, ocean, and atmosphere, where we have things like climate, climate change, the carbon cycle, and carbon life forms like plants, trees, animals, humans, fish, and whales and stuff like that, is 0.3% of the planet containing 0.2% of the planet’s carbon. The other 99.7% of the planet and 99.8% of the carbon is down in the mantle and core. This is the source of the energy and immense power of the planet’s geological forces that do things like plate tectonics, volcanism, mantle plumes, and rifts that transfer energy and carbon from the 99.7% of the planet to the 0.3% of the planet where we have climate. Life on the 0.3% is made from the little bits of carbon that ooze out of the 99.7%. It is true that since the Industrial Revolution we humans have been digging up some of the minute portion of the planet’s carbon found in the crust and burning it for energy, but that does not make us the new geological force, nor does it place the fate of the planet in our hands. The total cumulative carbon emissions of the industrial economy of humans since pre-industrial times are estimated to be 102 billion tonnes, less than 0.000001% of the carbon in the mantle. AGW climate change does not have a planetary interpretation. The yearning of climate science for a planetary relevance of AGW climate change has no basis. It is an extreme form of the atmosphere bias of climate science, and an irrational form of climate activism, to describe the planet’s geological forces in terms of the carbon emissions of the industrial economy of humans. Related post on the Anthropocene: [LINK] .


CLAIM: The science of climate change impacts tells us that once we warm the planet beyond about 2C relative to pre-industrial times, we are likely to see the most damaging and potentially irreversible climate change. 2C is probably a line that we don’t want to cross. RESPONSE: The determination that a warming of 2C above pre-industrial will create catastrophic climate change impacts in the form of irreversible climate change, such that 2C is a line that we must not cross, is dated. It derives from an IPCC report published in 2015. Since then the IPCC special report of 2018 was published, and it says that the line that we must not cross to avoid catastrophic and irreversible climate change is 1.5C. Prior to the 2C determination of 2015, there was the IPCC report published in 2013, where the science of climate science had determined that the line that we must not cross to avoid catastrophic and irreversible climate change is 3C. Prior to 2013, the IPCC 2007 report had issued a similar warning which said that warming must not be allowed to exceed 4C above pre-industrial to avoid catastrophic impacts and irreversible climate change. And prior to that, in 2001 the science of climate science told us that warming since pre-industrial must not be allowed to exceed 5C in order to avoid catastrophic consequences and irreversible climate change.

An additional consideration is that the reference temperature described as pre-industrial is also an unknown, because there is some confusion about when AGW climate change began. In 2001, the IPCC report said it was the year 1750, but the 2015 report says the reference pre-industrial year was 1850. The NASA website says that human caused anthropogenic global warming began in 1950 [LINK] , and climate scientist Peter Cox used computer models to determine that human caused warming began in the 1970s [LINK]. The world’s first AGW climate change paper was Callendar 1938 [LINK] , where we read that human caused global warming began in 1900. This extreme state of confusion about when human caused global warming began, and therefore about what the “pre-industrial” reference temperature is and how much warming above that reference we can tolerate before catastrophic irreversible climate change sets in, does not indicate that climate scientists understand the current warming trend well enough to lecture us about the details of a theory and its warming targets of which they themselves appear to be grossly unsure. Yet this is the basis of the demand that the world must overhaul its energy infrastructure “to save the planet” in the Anthropocene.










CLAIM#1: the world’s five largest publicly-owned oil and gas companies spend about US$200 million a year on lobbying to control, delay or block binding climate policy

RESPONSE TO CLAIM#1: The corresponding amount for AGW climate change research from government research funds alone is estimated by the University of Sussex and the Norwegian Institute of International Affairs at $1.64 billion per year, more than eight times the claimed oil industry funding for climate denial. Additional funds for AGW climate change from private sources are estimated by Jacob Nordangard [LINK] to exceed the amount claimed to flow from oil companies to climate denialism. These data do not suggest that AGW climate change science is at a disadvantage against climate denialism because of the funding of denialism by oil companies.

CLAIM#2: Recent polls suggested over 75% of Americans think humans are causing climate change.  There also seems to be a renewed optimism that we can deal with the crisis. School climate strikes, Extinction Rebellion protests, national governments declaring a climate emergency, improved media coverage of climate change and an increasing number of extreme weather events have all contributed to this shift. These positive developments have driven climate deniers to desperate measures of “Climate Sadism”. Climate Sadism is used to mock young people going on climate protests and to ridicule Greta Thunberg, a 16-year-old young woman with Asperger’s, who is simply telling the scientific truth. 

RESPONSE TO CLAIM#2: If the Climate Sadism mocking of young people deployed in climate activism is a bad thing, such that we must feel sorry for the plight of these young people, then we should surely demand that deniers stop the mocking; but more importantly, we must demand that climate activists cease and desist from the kind of child exploitation that places children at risk of Climate Sadism in the first place. Children should be allowed to have a childhood and a normal school education, and not be burdened with climate change issues or scared with climate change holocaust scenarios. It is also noted that the use of extreme weather as a reason to oppose climate denialism requires empirical evidence for the attribution of those events to AGW climate change.


The Bizarre-Culture article says that there are five types of climate denial described as (1) Science Denial, (2) Economic Denial, (3) Humanitarian Denial , (4) Political Denial, and (5) Crisis Denial. We now discuss each of these in turn as a series of five distinct claims. 

CLAIM#3: Science Denial: In Science Denial, deniers say that the science of climate change is not settled, that climate change is just part of the natural cycle, and that climate models are unreliable and too sensitive to carbon dioxide. Some suggest that CO₂ is too small a part of the atmosphere to have a large warming effect, or that climate scientists are fixing the data to show the climate is changing (a global conspiracy that would take thousands of scientists in more than 100 countries to pull off). All these arguments are false because there is a clear consensus among scientists about the causes of climate change. The climate models that predict global temperature rises have remained very similar over the last 30 years despite the huge increase in complexity, showing it is a robust outcome of the science.

RESPONSE TO CLAIM#3: The history of science has progressed through a process of propositions and interpretations of data, their critical evaluation, active and even acrimonious debate, and the resolution of differences by the exchange of ideas and data, as seen for example in the resolution of the theory of relativistic gravity described in Nugayev (Origin and Resolution of the Modern Theory of Gravity, Methodology and Science (1987): 177-197), where we find deniers of the initial theory of relativistic gravitation challenging the theory and the interpretation of data, followed by resolution of the differences through an active exchange of ideas, without any party claiming God-given truth by virtue of their status as scientists. Contentious issues in climate science, as for example the spurious correlation problem [LINK] , should be debated and ideas exchanged until a resolution is found, without the need for either side of the debate to claim a unique and singular access to truth by virtue of their description as scientist.

CLAIM#4: In Economic Denial, deniers propose that climate action is not cost effective although economists say we could fix climate change now by spending 1% of world GDP. But if we don’t act now, by 2050 it could cost over 20% of world GDP. We should also remember that in 2018 the world generated GDP of $86 trillion and every year World GDP grows by 3.5%. So setting aside just 1% to deal with climate change would make little overall difference and would save the world a huge amount of money. What the climate change deniers also forget to tell you is that they are protecting a fossil fuel industry that receives US$5.2 trillion in annual subsidies which includes subsidised supply costs, tax breaks and environmental costs. This amounts to 6% of world GDP. The International Monetary Fund estimates that efficient fossil fuel pricing would lower global carbon emissions by 28%, fossil fuel air pollution deaths by 46%, and increase government revenue by 3.8% of the country’s GDP. 

RESPONSE TO CLAIM#4: A sense of ad hominem runs through this series of claims. Here, individuals whose profession is described as “economist” are treated as infallible sources of economic and financial information, such that their estimate of the cost of climate action must not be questioned. The reality is very different. In a related post it is described how the 2008 financial crisis in the USA was repeatedly and comically misdiagnosed by economists, and how their “economic action” programs to resolve the problem, such as “Quantitative Easing” and “TARP”, actually made it worse [LINK] . As in the case of climate science described above, what is relevant in these disputes is the evaluation of the arguments presented by the two sides and not the professional titles of those who present them.


CLAIM#5: In Humanitarian Denial, climate change deniers argue that climate change is good for us. They suggest longer, warmer summers in the temperate zone will make farming more productive. These gains, however, are often offset by the drier summers and increased frequency of heatwaves in those same areas. For example, the 2010 “Moscow” heatwave killed 11,000 people, devastated the Russian wheat harvest and increased global food prices. More than 40% of the world’s population lives in the Tropics, where from both a human health perspective and an increase in desertification no one wants summer temperatures to rise. Deniers also point out that plants need atmospheric carbon dioxide to grow, so having more of it acts like a fertilizer. This is indeed true, and the land biosphere has been absorbing about a quarter of our carbon dioxide pollution every year. Another quarter of our emissions is absorbed by the oceans. But losing massive areas of natural vegetation through deforestation and changes in land use completely nullifies this minor fertilization effect. Climate change deniers will tell you that more people die of the cold than heat, so warmer winters will be a good thing. This is deeply misleading. Vulnerable people die of the cold because of poor housing and not being able to afford to heat their homes. Society, not climate, kills them. This argument is also factually incorrect. In the US, for example, heat-related deaths are four times higher than cold-related ones. This may even be an underestimate as many heat-related deaths are recorded by cause of death such as heart failure, stroke, or respiratory failure, all of which are exacerbated by excessive heat.

RESPONSE TO CLAIM#5: Here the authors make some good points about certain questionable and pointless denialist claims that are common, as for example that higher atmospheric CO2 causes greening and increases agricultural yield, and that therefore AGW must be a good thing. A similar argument is that hotter is better than colder because more people die of cold than of heat.

CLAIM#6 Political Denial: Climate change deniers argue we cannot take action because other countries are not taking action. But not all countries are equally guilty of causing current climate change. For example, 25% of the human-produced CO₂ in the atmosphere is generated by the US, another 22% is produced by the EU. Africa produces just under 5%. Given the historic legacy of greenhouse gas pollution, developed countries have an ethical responsibility to lead the way in cutting emissions. But ultimately, all countries need to act because if we want to minimise the effects of climate change then the world must go carbon zero by 2050. Deniers will also tell you that there are problems to fix closer to home without bothering with global issues. But many of the solutions to climate change are win-win and will improve the lives of normal people. Switching to renewable energy and electric vehicles, for example, reduces air pollution, which improves people’s overall health. Developing a green economy provides economic benefits and creates jobs. Improving the environment and reforestation provides protection from extreme weather events and can in turn improve food and water security.


RESPONSE TO CLAIM#6: Climate change deniers argue we cannot take action because other countries are not taking action. This is of course a weak denialist argument, because where all must act to achieve a certain goal there is no room for a discussion of who must act. However, it must be said that this ideal has already been undone by the United Nations, which in both the Kyoto Protocol and the UNFCCC segregated the world’s nations into Annex I, Annex II, and non-Annex countries with different emission reduction obligations, such that non-Annex countries have no emission reduction obligation, and when they do cut emissions they can sell that emission reduction in the dysfunctional carbon credits market described in a related post [LINK] .


CLAIM #7 CRISIS DENIAL: The final piece of climate change denial is the argument that we should not rush into changing things, especially given the uncertainty raised by the other four areas of denial above. Deniers argue that climate change is not as bad as scientists make out. We will be much richer in the future and better able to fix climate change. They also play on our emotions as many of us don’t like change and can feel we are living in the best of times – especially if we are richer or in power. But similarly hollow arguments were used in the past to delay ending slavery, granting the vote to women, ending colonial rule, ending segregation, decriminalising homosexuality, bolstering worker’s rights and environmental regulations, allowing same-sex marriages and banning smoking. The fundamental question is why are we allowing the people with the most privilege and power to convince us to delay saving our planet from climate change?


RESPONSE TO CLAIM#7: The denialist arguments claimed in this section do not sound well thought out, and I am not familiar with them as I have not seen them before. As a postscript, I should add that if the author of these claims against denialism is Mark Maslin, whose name appears at the bottom of the linked document above, it should be emphasized that Mark’s work and opinions on AGW climate change can’t be assumed to be unbiased scientific inquiry. In a related post, his emotional activism against human activity is described by Mark himself in terms of the Anthropocene [LINK] , a concept derived from an extreme and irrational form of environmental activism [LINK] .

bandicam 2020-02-21 16-21-40-417

The 2008 financial crisis caused a meltdown of the American economy that showed no positive response to intervention by the Federal Reserve or to the government's regulatory innovations. It turned out that the financial crisis and economic collapse of 2007/2008 was a replay of similar events in 1929/1930. Both of these crises were the result of the mark-to-market accounting rule. In both cases the government's effort to solve the problem with regulatory intervention failed and possibly worsened the crisis; and in both cases simply rescinding the mark-to-market rule (under Franklin Delano Roosevelt in 1938, and again on April 2, 2009 after hearings led by Congressman Barney Frank) brought about a recovery from the crisis, with healthy economic growth following immediately. The lesson is that free market systems are operated not by the government but by innovators, by risk-taking investors in new ideas, and by the financial system that funds these ventures. The government's job is not to operate the economy but to provide the right kind of regulatory infrastructure, within which innovators and investors can thrive.



Brian S. Wesbury is an American economist focusing on macroeconomics and economic forecasting. He is the economics editor of and a monthly contributor to The American Spectator, and appears frequently on CNBC, Fox Business, Fox News, and Bloomberg TV. Born: September 8, 1958, in the United States. Education: Kellogg School of Management, Northwestern University; Rock Bridge High School; University of Montana. (Source: Wikipedia)




  1. How most people view the financial system: “The free market system of capitalism is a domain of the greedy rich, particularly the bankers. Their greed drives them through periods of excess speculation that cause a collapse of the financial system. The financial crisis thus created then causes an economic crisis that affects the entire population, including workers, investors, and small business, who are seen as victims of greedy rich speculators in financial markets.” The Great Depression, for example, is described in this way in textbooks and in the popular press. This conceptual model of the financial system is also the basis on which the 2008 financial crisis has been presented and the way most people understand it.
  2. How the financial system actually works: A key element of the financial system is the Federal Reserve Bank (the Fed), because it controls short term interest rates through the Fed funds rate. As it did in the 2008 financial crisis, the Fed drops interest rates, to zero if necessary, to stimulate the economy in crisis situations, trying to get the economy moving again.
  3. Here is a financial history from 2001 that is relevant to the 2008 crisis in light of the elements of the financial system described above. In 2001, when the Fed funds rate was 6.5%, the Fed began dropping the short term interest rate lower and lower until it reached 1% in 2004. How does the Fed funds rate affect us? When you are deciding whether to take out a loan or buy a house, the most important variable in that decision is the interest rate.
bandicam 2020-02-21 17-40-54-575
  4. When Alan Greenspan pushed interest rates down to 1% in 2003 and 2004, interest rates were below the rate of inflation for almost three years. So if you are shopping for a house with a mortgage rate proportional to the Fed funds rate, the lower the interest rate the more money you will spend, particularly when the interest rate is below the inflation rate. Here is an analogy: when you drive up to a green light you don't stop or look both ways to make sure it is safe. In that way, low interest rates are a green light to spend and buy, with little or no motivation to save and invest. This effect of low interest rates distorts not just the purchasing habits of consumers but the financial system as a whole, by changing the financial decisions of bankers, investors, and business corporations of all descriptions. With interest rates at 1%, all the lights are green, and that changes decision making across the entire spectrum of the economy.
  5. Housing prices went up 8% in 2001, but in 2004 and 2005 they went up 14% and 15% respectively. Low interest rates drive up housing prices and asset prices in general. At low mortgage rates and high rates of home price appreciation, buying houses becomes an attractive option, whether as a home or as an investment. The result was that consumers and businesses over-invested in real estate and other real assets, funded by low interest loans and counting on continued price appreciation.
  6. These conditions of course encouraged banks to give out more loans at higher and higher loan-to-asset ratios and to raise their risk level. This is what created the housing bubble that preceded the 2008 financial crisis. If interest rates had been higher, even 2 or 4 percentage points higher, the housing bubble would likely not have formed the way it did.
  7. This is not the first time that interest rates held too low created an unstable economy. Back in the 1970s, the Fed also held interest rates too low for too long. Farmers bought too much land, and too many oil wells were sunk with cheap debt on the bet that oil prices would keep rising. Then in the 1980s, when oil prices and farmland prices collapsed, banks collapsed too; in fact the entire savings and loan industry collapsed. Although this crisis is remembered as the savings and loan collapse, it was an economic crisis across the board that affected the savings and loan industry most severely. They had made too many loans at low interest rates, and when interest rates went up the market value of those loans shrank and they collapsed.
  8. At the same time, the government encouraged the big banks to make large loans to Latin American countries. So in the 1970s the banks expanded by making loans to farmers, home buyers, oil companies, and Latin American countries, and all of those sectors of the economy collapsed in the late 1970s and early 1980s. The banking system was in big trouble: by 1983 the eight biggest banks in America had no capital because the very large loans made to Latin America were in default.
  9. The 1980s crisis contains insights and lessons that are relevant to understanding the 2008 financial crisis, the most salient of which is that the banking problems of the 1980s did not take down the entire economy, whereas the 2008 financial crisis did. The question is why these two otherwise similar financial crises differ in this way.
  10. Below is a chart showing the collapse of the S&P500 index from January 2008 to March 2009. The S&P500 index tracks the stock prices of the 500 largest listed companies in the USA. The transcripts of all the Federal Reserve meetings in 2008, which recently became available to the public, contain useful insights into the causes and evolution of the 2008 financial crisis. The red dots in the chart below mark the dates of the 14 Federal Reserve meetings during 2008 and 2009. In these meetings, 18 or 20 people sit around a table and each of them talks for 3 or 4 minutes. The transcript is a record of what was said. Proposed actions are voted on, and these propositions and the votes are also recorded in the transcript. The transcripts of the 14 meetings run to 1,865 pages.
bandicam 2020-02-21 20-52-12-833
  11. The chart above shows that the steepest decline in the S&P500 index in the 2008 financial crisis occurred during September and October of 2008. In the so-called “bloody weekend” of the 2008 financial crisis, September 13&14, Lehman Brothers failed; AIG, Fannie Mae, and Freddie Mac were in crisis; and the Federal Reserve started a program called “Quantitative Easing” (QE), meaning that the Fed would buy bonds as a way of injecting cash into the system (increasing the money supply) in an attempt to save the economy from complete collapse. A few weeks later, in early October 2008, Treasury Secretary Hank Paulson, in concert with the Bush White House and Congress, passed TARP, the Troubled Asset Relief Program, which involved $700 billion of government spending to save the banking system.
  12. Note in the chart above that QE and TARP were enacted during the near vertical decline of the S&P500, and they did nothing to stop it. In fact, the 2008 financial crisis escalated after TARP. The stock market lost 40% of its value, with financial companies losing 80%. The chart appears to indicate that the more the Fed met and the more the government acted to ease the crisis, the worse it got. The government did not save us. It is wrong to think of the government as the architect and manager of the financial system.
  13. The free market of capitalism does not have a press agent, but the government does, and the Federal Reserve does. Market forces are invisible, but their agents aren't. There are two thousand books about the financial crisis. The three main ones are: (1) Geithner, Timothy F., Stress Test: Reflections on Financial Crises, Broadway Books, 2014 (Geithner was head of the New York Federal Reserve Bank in 2008); (2) Bernanke, Ben S., The Federal Reserve and the Financial Crisis, Princeton University Press, 2013; (3) Paulson, Henry M., On the Brink: Inside the Race to Stop the Collapse of the Global Financial System, Business Plus, 2013. In all such books, speeches, and media commentaries, these government agents credit government interventions such as TARP and QE, along with what Geithner calls “stress tests” (for example, stress testing banks so that people will trust them again because they passed the test).
bandicam 2020-02-22 09-05-37-770
  14. The underlying question is how a banking crisis turned into an economic crisis that caused the collapse of the world's largest free market economy. The data show that in the late 1970s and early 1980s banks suffered greater financial losses than they did later in 2008, and yet the economy did not collapse back then; in fact it actually started to grow without government interventions like TARP and without QE. Indeed, in the early 1980s Paul Volcker was raising interest rates as the economy recovered, whereas in the TARP and QE era the government cut the interest rate essentially to zero, and the economy has grown relatively slowly, slower than in the early 1980s. {Footnote: In the Fed transcripts we find that Ben Bernanke asked his staff of 200 PhD economists to “go out and find out how big the problem is, how many sub-prime loans were made, and how many losses we could face”. This research estimated that the loss could be as high as $228 billion, only 1.52% of a $15 trillion economy.}
obama.volcker
  15. Therefore, the question here is how this relatively small problem brought a $15 trillion economy down into an economic crisis. The answer is the accounting rule called MARK TO MARKET ACCOUNTING. It was re-adopted and enforced in November 2007, after the concept had sat in the accounting books without enforcement since 1938.
mark2market
  16. This accounting rule is best understood in its historical context. In the 1800s accountants had not yet been elevated to a professional category; they were called bookkeepers, and the bookkeepers of that time did in fact mark all valuations to market instead of computing inflation adjusted historical cost. So as assets went up in value, the bookkeepers marked them up in the books. In good times things look better, but when you start marking things down to market they look worse. This is surely one of the reasons why the economy of that era was so volatile, with panics and depressions alternating with good times. In other words, mark-to-market accounting amplifies economic and financial volatility. In the crash of the 1930s, mark to market accounting caused many bank failures. It was then that mark to market accounting came to be considered a bad rule by the Securities and Exchange Commission (SEC), and the SEC advised President Franklin D. Roosevelt to abolish it. Roosevelt complied, and mark-to-market was abolished in 1938. It did not come back until nearly 70 years later, in 2007.
  17. What does mark-to-market do? How does it affect financial and economic volatility? Suppose that you live on the coast in Galveston TX with a $500,000 house on the beach and a $300,000 mortgage, and there is a hurricane on the way and you are told to evacuate. So you pack up your most important belongings, and as you are leaving, your mortgage banker shows up, worried about the $300,000 loan because the asset securing it may be worthless after the hurricane hits. Suppose the banker chooses at that point to mark the house to market. The problem is that in the hurricane crisis there is no one around to bid on the house, so its market value cannot be determined. Suppose a random stranger is summoned, asked for a bid, and bids $20,000. That is then the best estimate of the market value of the house at that precise moment. In that situation, mark to market accounting would imply that the homeowner must pay the balance in cash ($280,000) or lose the house. Essentially, the homeowner is bankrupt. This is what mark to market accounting can do, although the example is somewhat theatrical.
mark2market
  18. It is in this context that we can understand what happened in 2008 after mark-to-market accounting was reinstated in 2007. In bad times under mark-to-market accounting, banks can't sell assets and won't buy assets, and as a result their losses spiral out of control. And this is how a $300 billion banking problem blew up into a $4 trillion economic collapse.
  19. The amazing thing is what happened right at the bottom, when the S&P500 index bottomed out on March 9, 2009. Something changed the world on that day. It involved Congressman Barney Frank, now retired. His Financial Services Committee had pursued the issue for a year; he brought the accountants in and argued against mark-to-market, and this is how mark-to-market was removed once again. The hearing was announced on March 9 and held on March 12, and on April 2 the accounting rule was changed: mark-to-market was removed from the accounting rules once again.
frank
  20. It was then that both the stock market and the economy reversed their slide and began to grow. From that point on the economy has grown, and the stock market is up 200% (as of 2014). Thank you, Barney Frank.
  21. We conclude from the data and analysis presented that the 2008 financial crisis was not a creation of over-speculation; the speculation itself may in fact have been triggered by the Federal Reserve's low interest rate policy in the first place, and it was the change in the accounting rule that brought about the recovery of the economy from the depths of the 2008 financial crisis.
  22. It is a generally held belief that the government brought about the recovery with the Quantitative Easing policy of the Fed and the TARP initiative. The role and effectiveness of the Fed can best be understood in terms of its activity, which involves either buying or selling government bonds. When it buys bonds it injects cash into the banking system, which in turn should increase lending. But in the last five years (2009-2014) that did not happen. Instead, the banks just sat on the excess reserves. The economic growth seen in the economy today (2014) is driven by entrepreneurship.
  23. Ben Bernanke and Janet Yellen never stayed up all night drinking Red Bull, eating pizza, and writing apps. They never fracked a well; they never built a 3D printer. So when you look at the economy and try to understand its behavior, you must see it in terms of free market capitalism: if the free market actually works, we will see economic growth and prosperity. The appropriate role of the government is to provide the legal and regulatory infrastructure, for example for interest rates, within which the free market can function at its best. The people are the actors and drivers of the free market system, with the knowledge and motivation needed for their primary role in capitalism. They are not puppets that move when the government pulls the string. Although the free market system needs to be regulated, regulation can be flawed or overdone when governments misread their role in a system where wealth creators can create wealth by “staying up all night drinking Red Bull, eating pizza, and writing apps”, provided the government supplies the optimal regulatory regime.
  24. The conventional wisdom that the 2008 financial crisis was brought about by banking failure, because the bankers lost control, is wrong. It was the government that lost control, or misread its ability to control the economy with more and more regulation. This was regulation gone wrong: a flaw in the accounting rules, compounded by the government's efforts to overcome that flaw with more and more regulatory interference in the free market system. In the end it was not more government regulation but simply the correction of a flawed accounting rule that fixed the economy.
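
The Galveston house example in item 17 above can be sketched as a few lines of arithmetic. This is a minimal illustration of the mark-to-market shortfall using only the numbers from that example; it is not an accounting model.

```python
# Mark-to-market vs normal-times valuation, using the numbers from the
# Galveston house example (illustrative only, not an accounting model).

def equity(asset_value, loan_balance):
    """Owner's equity under a given valuation of the asset."""
    return asset_value - loan_balance

normal_value = 500_000    # value of the house in normal times
loan_balance = 300_000    # outstanding mortgage
distressed_bid = 20_000   # the only bid available during the crisis

# Valued normally, the position is comfortably solvent:
print(equity(normal_value, loan_balance))    # 200000

# Marked to the distressed market, the lender sees a shortfall that the
# borrower must cover in cash or lose the house:
shortfall = loan_balance - distressed_bid
print(shortfall)                             # 280000
```

The same arithmetic is what the post says banks faced in 2008: a solvent balance sheet under normal valuation becomes a forced loss when marked to a market with no bidders.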

bandicam 2020-02-20 09-47-57-981

CLAIM#1: NASA has unique capabilities because we have the point of view from space. With NASA’s carbon monitoring system you can see the amount of CO2 in the atmosphere decreasing in the spring and the summer. Plants and the oceans and the land surface are greening up and pulling the carbon dioxide out of the atmosphere. And then in the fall and in the wintertime you’ll see the CO2 in the atmosphere increasing because plants and animals are releasing the carbon dioxide that was captured during the growing season.

RESPONSE TO CLAIM#1: The seasonal cycle in atmospheric carbon dioxide concentration is well known and has been for some time, simply from Mauna Loa data, as explained in a related post [LINK] and as shown in the video display below. The red and yellow video displays created by a space exploration agency with a $20 billion annual budget are surely more colorful and more entertaining, but the simple video below is a clearer expression of the essential data in this seasonal cycle, particularly in terms of the magnitudes and numerical values of the cyclical changes.
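
The seasonal cycle described above can be illustrated with a short sketch. The monthly values below are made-up round numbers for illustration, not actual Mauna Loa measurements; the point is only the shape of the cycle (drawdown during the Northern Hemisphere growing season, release in winter).

```python
# Sketch: extracting the seasonal CO2 cycle from monthly means.
# The numbers below are illustrative, NOT real Mauna Loa data.
co2_monthly = [410.8, 411.7, 412.6, 413.9, 414.7, 414.1,
               412.3, 410.1, 408.5, 408.7, 410.2, 411.6]  # Jan..Dec, ppm

annual_mean = sum(co2_monthly) / len(co2_monthly)
seasonal_anomaly = [round(x - annual_mean, 2) for x in co2_monthly]

# Peak in spring, trough in early autumn: the Northern Hemisphere
# growing season draws CO2 down, respiration releases it in winter.
amplitude = max(co2_monthly) - min(co2_monthly)
print(seasonal_anomaly)
print(round(amplitude, 2))   # 6.2
```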


CLAIM#2 There is a graph called the Keeling Curve where you can see the summer and winter cycles. This process is very natural. Contrast that with old slow carbon. So this is a chunk of coal (speaker holds up a large chunk of coal). It was also made by plants. It also contains carbon dioxide that was in the atmosphere, but the carbon in this chunk of coal was taken out of the atmosphere 350 million years ago. And since the Industrial Revolution, we’ve been taking it out of the ground and using it for fuel. The burning of fossil fuel, whether it is coal, oil, or natural gas, has released this very very old carbon back into the atmosphere a lot faster than the plants and the oceans can take it out of the atmosphere. Bit by bit it is moving the Keeling Curve up. 1989 was the last time we saw atmospheric CO2 below 350 ppm. And it appears that 2016 will be the last time we see CO2 below 400 ppm.

RESPONSE-A TO CLAIM#2: The NASA animated graphic showing the Keeling curve from 1979 to 2014 is very impressive and certainly useful in evaluating this system in terms of fossil fuel emissions. However, it is not clear that we need a $20 billion a year space exploration agency to provide us with this kind of information when the same information is readily available from the University of California, San Diego (or from one of many such research institutions) in formats that are just as useful if not more so. The Keeling curve made freely available by the Scripps Institution of Oceanography, where the late great Charles David Keeling worked, is shown below. There is nothing lacking in this chart, in terms of information or its presentation, that suggests the need for technical assistance from a space exploration agency.

RESPONSE-B TO CLAIM#2: This is a response to the statement that “the carbon in this chunk of coal was taken out of the atmosphere 350 million years ago. And since the Industrial Revolution, we've been taking it out of the ground and using it for fuel. The burning of fossil fuel, whether it is coal, oil, or natural gas, has released this very very old carbon back into the atmosphere a lot faster than the plants and the oceans can take it out of the atmosphere. Bit by bit it is moving the Keeling Curve up”.

This argument is the essence and the foundation of the theory of Anthropogenic Global Warming and Climate Change. It claims that since the carbon in fossil fuels is not part of the current account of the carbon cycle, and is therefore external to it, its introduction into the atmosphere is a perturbation of the carbon cycle, such that the extra, external carbon from fossil fuels causes atmospheric CO2 concentration to rise, as seen in the Keeling Curve presented by NASA for 1979-2014 and by Scripps for 1960-2015. The evidence presented for this causation hypothesis is that atmospheric CO2 concentration has been going up during a time when the industrial economy was burning fossil fuels. In terms of the principles of statistics, this argument does not provide evidence of causation. Tyler Vigen's collection of spurious correlations sheds some light on this issue [LINK].

Correlation between time series data arises from two different sources: (1) shared trends, and (2) the responsiveness of the object time series to the causation time series at the time scale at which the causation is supposed to occur. Only the second source of correlation has a causation interpretation. The first source, shared trends, is what creates the spurious correlations demonstrated by Tyler Vigen. Therefore, to show that atmospheric CO2 concentration is responsive to fossil fuel emissions, we must first remove the trends from both series. If the causation occurs at an annual time scale, that is, if year to year changes in atmospheric CO2 concentration are explained by annual fossil fuel emissions, then the two detrended series will show a statistically significant correlation at an annual time scale. Only this detrended correlation, and not the observation that atmospheric CO2 concentration has been rising during a time of fossil fuel emissions, serves as evidence of causation, i.e., that fossil fuel emissions cause atmospheric CO2 concentration to rise at an annual time scale. Detrended correlation analyses of this nature are presented in related posts on this site. No evidence is found there that the observed changes in atmospheric CO2 concentration during a time of fossil fuel emissions are caused by fossil fuel emissions [LINK].
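
The detrended-correlation test described above can be sketched on synthetic data. The two series here are artificial (a shared linear trend plus independent noise), chosen only to show how a shared trend produces a high correlation that disappears after detrending; they are not real emissions or CO2 data.

```python
# Shared trends create spurious correlation; detrending removes it.
# Both series are synthetic: a common upward trend plus independent
# noise (illustrative only, NOT real emissions or CO2 data).
import random

random.seed(1)
n = 50
trend = [0.5 * t for t in range(n)]
x = [trend[t] + random.gauss(0, 1) for t in range(n)]  # stand-in "emissions"
y = [trend[t] + random.gauss(0, 1) for t in range(n)]  # stand-in "CO2"

def pearson(a, b):
    """Pearson correlation coefficient."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

def detrend(series):
    """Subtract the OLS linear trend, returning residuals."""
    m = len(series)
    t_mean = (m - 1) / 2
    s_mean = sum(series) / m
    slope = (sum((t - t_mean) * (s - s_mean) for t, s in enumerate(series))
             / sum((t - t_mean) ** 2 for t in range(m)))
    return [s - (s_mean + slope * (t - t_mean)) for t, s in enumerate(series)]

print(round(pearson(x, y), 2))                    # high: shared trend dominates
print(round(pearson(detrend(x), detrend(y)), 2))  # near zero: no annual-scale link
```

By construction the year-to-year changes in the two series are unrelated, and the detrended correlation reflects that, even though the raw correlation is very high.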

A further investigation of the effect of fossil fuel emissions on atmospheric composition is presented in terms of the carbon cycle. Carbon cycle flows are an order of magnitude larger than fossil fuel emissions. These flows are not directly measured but inferred, and they therefore contain very large uncertainties. Although these uncertainties are declared, they are ignored when carrying out the mass balance that yields the so-called “airborne fraction” of fossil fuel emissions, the portion of fossil fuel emissions thought to remain in the atmosphere and thereby explain the observed rise in atmospheric CO2. This computation is flawed because it does not include the uncertainties that climate science itself declares in the estimation of carbon cycle flows. In a related post it is shown that when the uncertainties in carbon cycle flows declared by the IPCC are taken into account, it is not possible to detect the much smaller fossil fuel emissions because the carbon cycle balances both with and without them [LINK]. The carbon cycle mass balance and the detrended correlation analyses taken together show that no evidence exists to attribute observed changes in atmospheric CO2 concentration to fossil fuel emissions.
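
The mass-balance point can be sketched with illustrative round numbers. These are order-of-magnitude stand-ins, not the IPCC's actual flow estimates or uncertainty figures.

```python
# Illustrative mass balance: natural carbon-cycle flows with an
# uncertainty band much larger than fossil fuel emissions. Round
# order-of-magnitude numbers, NOT the IPCC's actual estimates.

natural_sources = 200.0   # GtC/yr into the atmosphere (illustrative)
natural_sinks = 200.0     # GtC/yr out of the atmosphere (illustrative)
flow_uncertainty = 20.0   # +/- GtC/yr on the net natural flow (illustrative)
fossil_emissions = 10.0   # GtC/yr (illustrative)

net_without_ff = natural_sources - natural_sinks
net_with_ff = net_without_ff + fossil_emissions

# Both budgets fall inside the uncertainty band, so these flows alone
# cannot resolve the much smaller fossil fuel term -- the point the
# post makes about uncertain carbon-cycle flows.
print(abs(net_without_ff) <= flow_uncertainty)   # True
print(abs(net_with_ff) <= flow_uncertainty)      # True
```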

It is noted that in this presentation NASA embraces the theory that AGW climate change began after the Industrial Revolution when the Industrial Economy began to burn coal but their official position is that AGW climate change began in 1950. This contradiction requires an explanation.

CLAIM #3: And what the heck is 400 parts per million? What does that even mean? Well, we know from the analysis of ice samples from Antarctica that before the Industrial Revolution the amount of carbon dioxide in the atmosphere was about 275 parts per million (ppm). It had been there for thousands of years. Something has increased the number from 275 to 400. We are quite certain that it is due to the human activity of burning fossil fuels.

bandicam 2020-02-20 16-30-43-363


RESPONSE TO CLAIM #3: It is claimed that the observed rise of atmospheric CO2 concentration from 275ppm to 400ppm was caused by fossil fuels. No evidence is provided to support that claim. Instead, the claim is supported by the statement that “we are quite certain that it is due to the human activity of burning fossil fuels”. Perhaps this is a reference to the scientific credentials of an AERONAUTICS AND SPACE ADMINISTRATION, such that if scientists in such high places are “quite certain” it must be so. That is an argument from authority, not evidence: the implication is that if the very knowledgeable scientists at NASA are “quite certain” it must be true. The conclusion is therefore rejected for the absence of evidence.

bandicam 2020-02-20 16-47-18-071

CLAIM #4: We take these satellite measurements, and the variation over time of how the world is changing as facts. We’ve seen warming over the last century and a half …. very very meticulous measurements … and it shows a really sharp acceleration in the warming over the last four decades.

bandicam 2020-02-20 17-16-23-040


RESPONSE TO CLAIM #4: Presumably the first two sentences are not related, because taken together they imply the impossibility that satellite measurements have seen warming over the last century and a half. But perhaps the real message of this claim is the acceleration in warming that NASA reports in the satellite measurements it takes as facts. Below are decadal warming rates for the twelve calendar months in the global mean lower troposphere temperature measured by satellites over the four decades 1979-2018. The charts for the twelve calendar months are presented as a GIF animation that cycles through them. Acceleration in the rate of warming would be evident in these charts as a rising trend in decadal warming rates. Such a rising trend is seen for the months of January, February, October, and perhaps November. No acceleration is seen in the other eight months of the year. The annual mean decadal warming rates are shown in the chart below the GIF animation. No evidence of acceleration is found in the annual mean decadal warming rate. These data are inconsistent with the claim of “a really sharp acceleration in the warming over the last four decades”.
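
The decadal-trend comparison described above can be sketched as follows. The temperature series here is synthetic (a constant warming rate plus noise), not the actual satellite record; it shows only how decadal OLS slopes are computed and what "no acceleration" looks like.

```python
# Decadal warming rates by OLS, on a synthetic monthly series with a
# constant underlying rate of 0.125C/decade plus noise (NOT the UAH
# satellite record; for illustrating the method only).
import random

def ols_slope(y):
    """OLS slope of y against its index (per time step)."""
    m = len(y)
    t_mean = (m - 1) / 2
    y_mean = sum(y) / m
    num = sum((t - t_mean) * (v - y_mean) for t, v in enumerate(y))
    den = sum((t - t_mean) ** 2 for t in range(m))
    return num / den

random.seed(0)
rate_per_month = 0.125 / 120          # 0.125C per decade, constant
series = [rate_per_month * t + random.gauss(0, 0.1) for t in range(480)]

# Fit a separate trend to each of the four decades (120 months each).
decadal_rates = [ols_slope(series[d * 120:(d + 1) * 120]) * 120
                 for d in range(4)]

# With a constant underlying rate, the slopes scatter around
# 0.125C/decade with no systematic rise; "acceleration" would show
# up as slopes that increase decade over decade.
print([round(r, 3) for r in decadal_rates])
```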




CLAIM#5:  People have a hard time understanding what's the big deal for a planet that it is warmer by 1C or warmer by 2C. The impacts that we are worried about are being detected not in a 2-degree warmer world; they are being detected in a one degree warmer world.

RESPONSE TO CLAIM#5: It is true that 2C is 100% higher than 1C while 21C is only 5% higher than 20C, but that is not the issue. The issue is that climate science initially marked the point of “irreversible climate change” at 5C and proposed plans to limit warming to 5C. Later that danger point of warming that must be avoided was dropped to 4C, then to 3C, then to 2C, and finally in 2018 the IPCC released a special report lowering the “do not cross” line to 1.5C, only 0.5C warmer than today. If NASA and the other climate scientists really understand this warming phenomenon well enough to demand an overhaul of the world's energy infrastructure, this slide from 5C to 1.5C requires a rational explanation.
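
The percentage arithmetic in the response above can be checked directly. The kelvin comparison at the end is an added illustration (not in the original text) of how baseline-dependent such percentages are.

```python
# The same 1C increment reads very differently depending on the baseline.

def pct_increase(old, new):
    """Percentage increase from old to new."""
    return 100 * (new - old) / old

print(pct_increase(1, 2))     # 100.0 : anomaly baseline, 1C -> 2C
print(pct_increase(20, 21))   # 5.0   : absolute Celsius, 20C -> 21C

# On the kelvin scale (an added comparison) the same step is smaller still:
print(round(pct_increase(293.15, 294.15), 2))   # 0.34
```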


CLAIM#6:  Over the last decade (2007-2016) we’ve seen the ice melting. We’ve seen the melting in the ?Wo? Pole, we’ve seen the ice melt really fast on Greenland. They’ve fallen off Greenland into the ocean. We’ve had Pacific Islands that have already had to be abandoned because of sea level rise. We can combine our data with global climate models and say how is sea level rise going to change in the next 5, 10, 15 years because if we continue on the path we’re doing there’s going to be a lot of coastal communities all around the world that are going to be flooded. As scientists, we’re taking the most precise data that we can. It’s open data. It’s factual. For instance the enormous droughts and fires that we have around the world that are directly related to a warmer climate. That has a huge impact on people, was unprecedented. If you have a warmer atmosphere that can hold more moisture. That’s what warmer atmospheres do, they can suck up more moisture. That means more convection, more big thunderstorms, more hurricanes, more extreme weather. That’s one of the likely outcomes of a warming world. We built our civilization around the current planet, our coastal cities, our food resources, our water resources … they’re all pegged to the climate … and there’s not much slack in the system. We’re already seeing the impacts and the impacts are going to increase. In a 2-degree warming world there will be more. And in a 3-degree warming world there’ll be even more … and when you’re looking at those kinds of scenarios, 3, 4, 5 degrees warmer – that are totally plausible.  If we go down that path, we’ll be looking at a different planet.

RESPONSE TO CLAIM#6: Claim#6 reads like the usual alarming climate scare stories we read in the newspapers every day and does not appear to be a scientific argument from rocket scientists. For example, given the large fluctuations in Greenland ice melt from gain to loss, what is the significance of a loss in a specific decade? If there are Pacific Islands that have already had to be abandoned because of sea level rise, why have those islands not been identified and the data provided? The claim that sea level is going to change in the next 5, 10, or 15 years also requires data and interpretation: will the sea level change in 5 years, 10 years, or 15 years; by how much; and how was that change interpreted as a calamity? And statements like “We're already seeing the impacts and the impacts are going to increase. In a 2-degree warming world there will be more. And in a 3-degree warming world there'll be even more … and when you're looking at those kinds of scenarios, 3, 4, 5 degrees warmer - that are totally plausible” contain no useful information and suggest that the speaker has none to offer.


CONCLUSION: It does not appear from this presentation that NASA has the climate science expertise it claims to have and to which it apparently aspires. In terms of its aeronautics and space expertise, the role in AGW climate change that would best serve climate science and taxpayers is its priceless technology for collecting the relevant data from space and making that data available to both taxpayers and climate scientists. Rocket scientists should no more be involved in climate action strategies than climate scientists should be involved in space exploration strategies.



  1. SOURCE: JSTOR DECEMBER 2019 [LINK] : The road to understanding climate change stretches back to the tweed-clad middle years of the 19th century when Victorian-era scientists conducted the first experiments proving that runaway CO2 could, one day, cook the planet. In other words, “global warming was officially discovered more than 100 years ago.” Joseph Fourier asked why the Earth was as warm as it was. In two papers published in 1824 and 1837 he proposed that the atmosphere creates barriers that trap the earth’s long wave radiation and that this mechanism could change the earth’s temperature when altered by natural forces and human activity. These papers contain the first predictions of climate change.
  2. In 1856, Eunice Newton Foote, an amateur scientist, placed jars of different gas combinations in the sun and found that the jar containing CO2 and water vapor got the hottest. These results, published in 1856 in the American Journal of Science, established empirical evidence of the heat trapping effect of CO2.
  3. Irish scientist John Tyndall set out to explain ice age cycles because it wasn’t clear why the earth’s surface temperature fluctuated so wildly. He reasoned that the cause could be the atmospheric heat trapping effect described by Fourier, with the temperature cycle driven by a CO2 cycle through the CO2 effect demonstrated by Eunice Foote. In 1860 Tyndall carried out experiments similar to those of Foote and found that water vapor and CO2 were powerful heat trapping gases.
  4. Swedish scientist Svante Arrhenius put it all together into the climate science we know today more than 100 years ago in 1896: Arrhenius, like Tyndall, was interested in explaining ice age cycles. At the time, there were two competing explanations. One was the perturbations in Earth’s orbit and the other was changes in atmospheric composition, specifically, CO2.
  5. Arrhenius investigated the CO2 theory and, with the help of CO2 expert Arvid Högbom and atmospheric heat balance scientist Samuel Pierpont Langley, calculated how much heat would be trapped if levels of CO2 and water vapor changed. He determined that if you doubled the amount of CO2 in the atmosphere, it would raise the world’s temperature by 5 to 6 degrees Celsius – i.e., an equilibrium climate sensitivity of 5C to 6C.
  6. It is thus that the era of modern climate science was born. The industrial revolution was well underway but Arrhenius was not concerned with that because his science was an attempt to explain nature’s glaciation and interglacial cycles that had recently been discovered by geologists. In those cycles the horror was glaciation, and CO2 and water vapor driven warming was the relief from the ice. The other significant event of nature that worried him was volcanic activity, as he had lived through the 1883 eruption of Krakatoa. Therefore, for Arrhenius CO2 driven warming was not a horror but a relief from nature’s cold spells.
  7. JSTOR conclusion: It was a nice idea at the time—but nature, as is now dangerously clear, had different ideas. We’re now faced with the challenge of mitigating as much climate change as possible, while adapting to what’s already set in place. The onset of a warmer planet can seem sudden, if you judge by today’s panicked headlines. But the science predicting that it would occur? It is, alas, generations’ old.
  8. This story line in various forms is found in many other sources that include (1) The Guardian’s “Father of Climate Change” [LINK] , (2) The Open Mind website’s “The Man Who Foresaw Climate Change” [LINK] , (3) NASA’s “Svante Arrhenius” page [LINK] , and (4) a comprehensive presentation by HISTORY.AIP.ORG, “The Discovery of Climate Change”, that includes the important work of Callendar (1938) [LINK] . This work is presented below.
  9. SOURCE: HISTORY.AIP.ORG: THE DISCOVERY OF CLIMATE CHANGE: In the 19th century, scientists realized that gases in the atmosphere cause a “greenhouse effect” which affects the planet’s temperature. These scientists were interested chiefly in the possibility that a lower level of carbon dioxide gas might explain the ice ages of the distant past. At the turn of the century, {Svante Arrhenius calculated that emissions from human industry might someday bring a global warming. False}. Other scientists dismissed his idea as faulty. In 1938, G.S. Callendar argued that the level of carbon dioxide was climbing and raising global temperature: [RELATED POST ON CALLENDAR 1938] . In the early 1960s, C.D. Keeling measured the level of carbon dioxide in the atmosphere: it was rising fast. Researchers began to take an interest, struggling to understand how the level of carbon dioxide had changed in the past, and how the level was influenced by chemical and biological forces. They found that the gas plays a crucial role in climate change, so that the rising level could gravely affect our future.
  10. John Tyndall was fascinated by the recent and alarming discovery of the time that the earth goes through glaciation and interglacial cycles. He considered the possibility that these “ice age cycles” were driven by atmospheric composition, based on the finding of Joseph Fourier and others that energy in the form of visible light from the Sun easily penetrates the atmosphere to reach the surface and heat it up, but heat cannot so easily escape back into space because of atmospheric absorption: the air absorbs invisible heat rays (“infrared radiation”) rising from the surface, and the warmed air radiates some of the energy back down to the surface, helping it stay warm. This was the effect that would later be called, by an inaccurate analogy, the “greenhouse effect.” The equations and data available to 19th-century scientists were far too poor to allow an accurate calculation. Yet the physics was straightforward enough to show that a bare, airless rock at the Earth’s distance from the Sun should be far colder than the Earth actually is.
  11. Tyndall set out to find whether there was in fact any gas in the atmosphere that could trap heat rays. In 1859, his careful laboratory work identified several gases that did just that. The most important was simple water vapor (H2O). Also effective was carbon dioxide (CO2), although in the atmosphere that gas is only a few parts in ten thousand, as was the even rarer methane (CH4). Just as a sheet of paper will block more light than an entire pool of clear water, so a trace of CO2 or CH4 could strongly affect the transmission of heat radiation through the atmosphere.
  12. The next major scientist to consider the Earth’s temperature was another man with broad interests, Svante Arrhenius in Stockholm. He too was attracted by the great riddle of the prehistoric ice ages, and he saw CO2 as the key. Why focus on that rare gas rather than water vapor, which was far more abundant? Because the level of water vapor in the atmosphere fluctuated daily, whereas the level of CO2 was set over a geological timescale by emissions from volcanoes. If the emissions changed, the alteration in the CO2 greenhouse effect would only slightly change the global temperature—but that would almost instantly change the average amount of water vapor in the air, which would bring further change through its own greenhouse effect. Thus the level of CO2 acted as a regulator of water vapor, and ultimately determined the planet’s long-term equilibrium temperature.
  13. In 1896 Arrhenius completed a laborious numerical computation which suggested that cutting the amount of CO2 in the atmosphere by half could lower the temperature in Europe some 4-5°C (roughly 7-9°F) — that is, to an ice age level. But this idea could only answer the riddle of the ice ages if such large changes in atmospheric composition really were possible. For that question Arrhenius turned to a colleague, Arvid Högbom. It happened that Högbom had compiled estimates for how carbon dioxide cycles through natural geochemical processes, including emission from volcanoes, uptake by the oceans, and so forth.
  14. It had occurred to Högbom to calculate the amounts of CO2 emitted by factories and other industrial sources. Surprisingly, he found that human activities were adding CO2 to the atmosphere at a rate roughly comparable to the natural geochemical processes that emitted or absorbed the gas.
  15. Arrhenius did not see that as a problem. He figured that if industry continued to burn fuel at the current (1896) rate, it would take perhaps three thousand years for the CO2 level to rise so high. Högbom doubted it would ever rise that much. One thing holding back the rise was the oceans: according to a simple calculation, sea water would absorb 5/6ths of any additional gas. Arrhenius brought up the possibility of future warming in a book, but by the time the book was published in 1908, the rate of coal burning was already significantly higher than in 1896, and Arrhenius suggested warming might appear within a few centuries rather than millennia. Yet here, as in his first article, the possibility of warming in some distant future was far from his main point. He mentioned it only in passing.
  16. What really interested scientists of his time was the cause of the ice ages. Arrhenius had not quite discovered global warming, but only a curious theoretical concept. An American geologist, T. C. Chamberlin, and a few others took an interest in CO2. How, they wondered, is the gas stored and released as it cycles through the Earth’s reservoirs of sea water and minerals, and also through living matter like forests? Chamberlin was emphatic that the level of CO2 in the atmosphere did not necessarily stay the same over the long term. But these scientists too were pursuing the ice ages and other, yet more ancient climate changes — gradual shifts over millions of years.
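The doubling estimate attributed to Arrhenius above can be sketched numerically. The snippet below is an illustrative sketch, not Arrhenius's actual 1896 computation: it assumes the logarithmic CO2-temperature relation commonly associated with his work, and uses 5.5°C per doubling as an assumed midpoint of the 5C to 6C sensitivity range quoted in the history above.

```python
import math

def arrhenius_warming(c_ratio, sensitivity=5.5):
    """Temperature change (deg C) for a CO2 concentration ratio c_ratio = C/C0,
    using the logarithmic relation often attributed to Arrhenius.
    sensitivity is the warming per doubling of CO2; 5.5 C is an assumed
    midpoint of the 5-6 C range quoted above, not a historical figure."""
    return sensitivity * math.log(c_ratio, 2)

# Doubling CO2 reproduces the stated sensitivity:
print(arrhenius_warming(2.0))   # 5.5
# Halving CO2 gives cooling of the same magnitude, in the spirit of the
# 1896 ice-age estimate of roughly 4-5 C of cooling for Europe:
print(arrhenius_warming(0.5))   # -5.5
```

The logarithmic form is why each successive doubling, not each successive tonne, contributes the same increment of warming in this framework.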

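The history above describes CO2 acting as a regulator of water vapor: a small CO2-driven temperature change triggers a further change through the water vapor greenhouse effect. A minimal sketch of that feedback idea, with an invented feedback fraction purely for illustration, is the geometric series dT0 * (1 + f + f^2 + ...), which converges to dT0 / (1 - f) when f < 1.

```python
def amplified_warming(dT0, f, rounds=50):
    """Sum the feedback series dT0 * (1 + f + f^2 + ...) iteratively.
    dT0 is the direct (e.g. CO2-driven) temperature change; f is the
    fraction of each increment returned by the water vapor feedback.
    Both values here are illustrative assumptions, not measured figures."""
    total = 0.0
    increment = dT0
    for _ in range(rounds):
        total += increment
        increment *= f  # each round of extra water vapor adds a further fraction f
    return total

# With an assumed f = 0.5, the closed form dT0 / (1 - f) doubles the
# direct warming: 1.0 C of direct warming becomes ~2.0 C in total.
print(round(amplified_warming(1.0, 0.5), 6))
```

The convergence condition f < 1 is what distinguishes an amplifying feedback of this kind from a runaway one.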




  1. What we find in this history is that 19th century climate scientists were studying what was then a recent discovery: that the earth goes through glaciation and interglacial cycles over a time scale of hundreds of thousands of years. The research agenda of these scientists, particularly Arrhenius, was to discover what drives glaciation cycles at time scales of 100,000 to 200,000 years. Arrhenius did find an explanation of these climate cycles in terms of the greenhouse effect of CO2 and water vapor, and that work was published and recognized as a significant advance in science.
  2. However, to draw a parallel between that and AGW climate change at multi-decadal and at most centennial time scales is a failure to account for the importance of time scale in time series analysis (see for example [LINK] ). The authors in this work note that “When monitoring complex physical systems over time, one often finds multiple phenomena in the data that work on different time scales. If one is interested in analyzing and modeling these individual phenomena, it is crucial to recognize these different scales and separate the data into its underlying components”.
  3. Therefore, the “climate science” of AGW climate change at multi-decadal or centennial time scales is not the same science as the “climate science” of glaciation cycles at time scales that are orders of magnitude longer, and there is no correspondence between AGW science and Arrhenius even though both rely on the heat trapping effect of atmospheric CO2 and water vapor. Besides, these early works had nothing whatsoever to do with an impact of the industrial economy on climate. These are two very different events in the history of climate research with very little if any correspondence between them.
  4. Yet another matter to consider in the claim that Arrhenius is the father of AGW climate change, and that the science has been established for over a hundred years, is that the Arrhenius theory of glaciation cycles has been discredited in favor of the Milankovitch cycles proposed by Milutin Milanković about a hundred years ago, only 25 years after the work of Arrhenius.
  5. The only historical work that used the CO2 concentration of the atmosphere at the time scale of AGW climate change, and did so in the context of the burning of fossil fuels in the industrial economy, is Callendar 1938, described in a related post [LINK]. The history from 1938 to the present is summarized here [LINK].
  6. SUMMARY: To summarize, the parallel drawn between the work of Arrhenius on glaciation cycles and the current theory of catastrophic climate impacts of the industrial economy, which operate at grossly different time scales, appears to be a desperate search for validation – and the need for such validation, along with the appeal to validation by consensus, suggests weaknesses in AGW science that require this kind of support.
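The time-scale separation argument in the comments above can be illustrated with a minimal sketch: a synthetic series combining a slow trend with a fast cycle is split into its components with a centered moving average. The series, the window length, and the periods are all invented for illustration; this is not an analysis of any actual climate data.

```python
import math

# Synthetic series: a slow linear trend plus a fast oscillation of period 10.
n = 400
slow = [0.01 * t for t in range(n)]                        # slow component
fast = [math.sin(2 * math.pi * t / 10) for t in range(n)]  # fast component
series = [s + f for s, f in zip(slow, fast)]

def moving_average(x, w):
    """Centered moving average of window w; output is shorter by w - 1 points."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

# The window must span many fast cycles yet stay short relative to the trend;
# 50 points (five full cycles of the fast component) is an assumed choice.
window = 50
trend_estimate = moving_average(series, window)

# Subtracting the trend estimate from the (aligned) series recovers the
# fast component on the valid range.
half = window // 2
residual = [series[i + half] - trend_estimate[i] for i in range(len(trend_estimate))]
```

Because the window covers whole periods of the fast cycle, the oscillation averages out of the trend estimate, and the residual isolates the fast phenomenon: each component can then be studied at its own time scale, which is the point the cited authors make.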