Thongchai Thailand

Archive for May 2018

Reference: After Nobel Gore should go for the next big prize, Bangkok Post, Oct 16, 2007

The goofiness of the Nobel Committee is grossly underrated. Robert Merton and Myron Scholes must have thought they had been certified as Lords of Finance when they were awarded the Nobel Prize in Economics, until their hedge fund, Long-Term Capital Management, took a nosedive and almost brought down the American financial system. More recently, the Nobel Prize was awarded to scientists who had warned us that human activity was causing ozone depletion and making the ozone hole bigger. They were allowed to keep their prizes even after it became apparent that the observed changes in the ozone layer were part of a natural cycle having to do with shifting winds in the upper atmosphere and NOT due to human activity.

In its latest goof, the Nobel Committee has awarded a prize for service to humanity to people who see humanity as the enemy of the earth and whose stated goal could be achieved simply by eliminating humanity from the face of the earth. Besides, the movie about global warming, An Inconvenient Truth, has been widely discredited as being biased and containing not only exaggerations but outright lies and scientific fraud.

The award of the Nobel Prize to these snake oil salesmen does not prove that the Anthropogenic Global Warming hypothesis is correct. That proof can only be provided by empirical evidence. The fact that Gore was awarded a prize by a committee of five Norwegians appointed by the Norwegian legislature does not vindicate Gore. If anything, it discredits the Nobel Committee. If the Nobel Prize is to be made into a global prize then it should be removed from the confines of Norwegian goofiness and opened up to a more global evaluation.


  1. Paleo climate data tell us that prior to the industrial era the mean annual CO2 concentration of the atmosphere stayed in the range 180-290 ppm (IPCC AR5, 2013), a difference of 234 gigatons of carbon equivalent (GTC). The range is equivalent to total global fossil fuel emissions in the 33-year period 1985-2017, but since the Paleo changes occurred prior to the industrial age, they are ascribed to volcanic eruptions, which inject both aerosols and CO2 into the atmosphere. Changes in solar activity are also considered, as they can change the equilibrium partial pressure of CO2 over the oceans in accordance with the Henry's Law relationship for the temperature dependence of the solubility of carbon dioxide in water (IPCC AR5, 2013).
  2. In the postindustrial era, however, these changes are much more rapid and are therefore explained in terms of anthropogenic fossil fuel emissions, with the mathematics of the attribution computed in the context of the carbon cycle that describes the natural flows of carbon dioxide to and from the atmosphere. The IPCC describes the carbon cycle in terms of carbon dioxide flows among multiple sources and sinks. The atmosphere plays a role in nine of these flows. These mean flows, averaged over the decade 2000-2009 (Figure 7), and their standard deviations (SD), as reported by the IPCC, are listed below in units of GTC/y (IPCC AR5, 2013). Non-availability of data is indicated by N/A.
  3. Natural: ocean surface to atmosphere: Mean=78.4, SD=N/A
  4. Natural: atmosphere to ocean surface: Mean=80.0, SD=N/A
  5. Human: fossil fuel emissions, surface to atmosphere: Mean=7.8, SD=0.6
  6. Human: land use change, surface to atmosphere: Mean=1.1, SD=0.8
  7. Natural: photosynthesis, atmosphere to surface: Mean=123.0, SD=8.0
  8. Natural: respiration/fire, surface to atmosphere: Mean=118.7, SD=N/A
  9. Natural: freshwater to atmosphere: Mean=1.0, SD=N/A
  10. Natural: volcanic emissions, surface to atmosphere: Mean=0.1, SD=N/A
  11. Natural: rock weathering, surface to atmosphere: Mean=0.3, SD=N/A
  12. A simple flow accounting of the mean values, without consideration of uncertainty, shows a net CO2 flow from surface to atmosphere of 4.4 GTC/y. The details of this computation are as follows. In the emissions and atmospheric composition data we find that during the decade 2000-2009 total fossil fuel emissions were 78.1 GTC and that over the same period atmospheric CO2 rose from 369.2 to 387.9 ppm, an increase of 18.7 ppm, equivalent to 39.6 GTC of atmospheric CO2 or 4.4 GTC/y. The ratio of the observed increase in atmospheric carbon to emitted carbon is thus 39.6/78.1 = 0.51. This computation is the source of the claim that the so-called "Airborne Fraction" is about 50%; that is to say, about half of the emitted carbon accumulates in the atmosphere on average and the other half is absorbed by the oceans, by photosynthesis, and by terrestrial soil absorption. The Airborne Fraction of AF=50% later had to be made flexible in light of a range of observed values (Figure 1).
  13. The left frame of Figure 1 above shows a large range of values of the decadal mean Airborne Fraction, 0 < DMAF < 4.5, for decades ending in 1860 to 2017. This sample period includes ice core CO2 data from the Law Dome for years prior to 1958. However, when the sample period is restricted to the more precise Mauna Loa data from 1958, a much smaller range of values is seen in the right frame of Figure 1, with 0.45 < DMAF < 0.65. These data appear to support the usual assumption in climate science that fossil fuel emissions have contributed about half of the decadal mean increase in atmospheric CO2 concentration since 1958; but as demonstrated in a related post [LINK], without a correlation between emissions and changes in atmospheric CO2 concentration, airborne fractions can be computed but they have no interpretation in terms of cause and effect in the phenomenon being studied.
  14. When uncertainties are not considered, the flow accounting appears to show an exact match of the predicted and computed carbon balance. It is noted, however, that this exact accounting balance is achieved, not with flow measurements, but with estimates of unmeasurable flows constrained by the circular reasoning that assigns flows according to an assumed flow balance.
  15. However, a very different picture emerges when uncertainties are included in the balance. Published uncertainties are available in the IPCC reports for only three of the nine flows. Uncertainties for the other six flows are not known. However, we know that they are large because no known method exists for the direct measurement of these flows. They can only be grossly inferred, based on assumptions that exclude or minimize geological flows.
  16. We therefore set up a Monte Carlo simulation to estimate the highest value of the unknown standard deviations at which the presence of human emissions in the carbon cycle can still be detected. For the purpose of this test we propose that an uncertain flow account is in balance as long as the null hypothesis that the sum of the flows is zero cannot be rejected. The alpha error rate for the test is set to a high value of alpha=0.10 to ensure that any reasonable ability to discriminate between the flow account WITH anthropogenic emissions and the flow account WITHOUT anthropogenic emissions is taken as evidence that the relatively small fossil fuel emissions can be detected in the presence of much larger and uncertain natural flows. The spreadsheet used in this determination is available for download from an online data archive [Data Archive Link].
  17. In the simulation we assign different levels of uncertainty to the flows for which no uncertainty data are available and test the null hypothesis that the flows balance with anthropogenic emissions (AE) included and again with AE excluded. If the flows balance when AE are included and they don’t balance when AE are excluded then we conclude that the presence of the AE can be detected at that level of uncertainty. However, if the flows balance with and without AE then we conclude that the stochastic flow account is not sensitive to AE at that level of uncertainty because it is unable to detect their presence. If the presence of AE cannot be detected no role for their effect on climate can be deduced from the data at that level of uncertainty in natural flows.
  18. The balance is computed from the atmospheric perspective as Balance=Input-Output where Input is flow to the atmosphere and Output is flow from the atmosphere. The p-values for hypothesis tests for uncertainties in the natural flows from 1% of mean to 6.5% of mean are presented below both as a tabulation and as a line chart.
  1. In the tabulation, the PCT column shows the assumed percent standard deviation in the natural flows for which no uncertainty information is available. In the "base case", the blanket statement by the IPCC that the uncertainty is 20% is interpreted to mean that the width of the 95% confidence interval is 20% of the mean; the corresponding standard deviation, computed as (20/2)/1.96 ≈ 5.1% of the mean, is almost identical to that in the 5% (5PC) row. The data in each row show the p-values of two hypothesis tests, labeled WITH and WITHOUT. The WITH column shows p-values when the AE are included in the balance computation. The WITHOUT column shows the p-values when the AE are left out of the balance computation.
  2. We use a critical p-value of alpha=0.1 for the test of the null hypothesis that Balance=0, meaning that the stochastic flow account is in balance. If the p-value is less than alpha we reject the null hypothesis and conclude that the stochastic flow account is not in balance. If we fail to reject the null, we conclude that the stochastic flow account is in balance.
  3. The p-values for WITH and WITHOUT in each row, taken together, tell us whether the stochastic flow system is sensitive to AE, that is, whether the relatively small AE flow can be detected in the context of uncertainty in the much larger natural flows. If we fail to reject the null hypothesis that Balance=0 in both the WITH and WITHOUT columns, the stochastic flow account balances with and without the AE flows. In these cases the stochastic flow account is not sensitive to AE; it is unable to detect the presence of the AE flows. This is true for the five rows in which the uncertainty in natural flows is 3% of the mean or higher.
  4. For the two lower uncertainty levels of 2% and 1%, we find that the null hypothesis Balance=0 is not rejected when AE are included (the stochastic flow account is in balance) but rejected when AE are not included (the stochastic flow account is not in balance). Under these uncertainty conditions the stochastic flow account is sensitive to the presence of AE; that is, the flow account can detect the presence of the relatively small AE flows. The chart shows that the crossover uncertainty lies somewhere between 2% and 3%, and in fact it is found by trial and error that the crossover occurs at 2.3%.
  5. These results imply that the IPCC carbon cycle stochastic flow balance is not sensitive to the presence of the relatively low flows from human activity involving fossil fuel emissions and land use change. The large natural flows of the carbon cycle cannot be directly measured and they can only be indirectly inferred. These inferred values contain uncertainties much larger than 2.3% of the mean. It is not possible to carry out a balance of the carbon cycle under these conditions.
  6. The exact balance presented by the IPCC, achieved by assigning values to unmeasurable flows so as to force a balance, is justified only by circular reasoning. Therefore, the IPCC carbon cycle balance does not contain useful information that may be used to ascertain the impact of fossil fuel emissions on the carbon cycle or on the climate system.
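The logic of the detection test above can be sketched in a short simulation. This is a minimal illustration only, not the spreadsheet from the data archive: the normal distributions, the parameterization of the unknown standard deviations as a percent of each mean, and the reporting of the balance's mean and spread are assumptions made here, so the exact crossover percentage found in the post (2.3%) need not be reproduced by this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # Monte Carlo draws

# IPCC AR5 mean flows in GTC/y for 2000-2009; sign: + into the atmosphere.
# SD=None marks the six flows for which the IPCC reports no uncertainty.
FLOWS = {
    "ocean surface to atmosphere": (+78.4, None),
    "atmosphere to ocean surface": (-80.0, None),
    "fossil fuel emissions":       (+7.8,  0.6),   # anthropogenic
    "land use change":             (+1.1,  0.8),   # anthropogenic
    "photosynthesis":              (-123.0, 8.0),
    "respiration and fire":        (+118.7, None),
    "freshwater to atmosphere":    (+1.0,  None),
    "volcanic emissions":          (+0.1,  None),
    "rock weathering":             (+0.3,  None),
}
ANTHRO = {"fossil fuel emissions", "land use change"}
GROWTH = 4.4  # observed atmospheric accumulation, GTC/y

# Deterministic check: the means alone balance exactly (net flow 4.4 GTC/y).
net = sum(mean for mean, _ in FLOWS.values())
print(f"net flow of means = {net:.1f} GTC/y")

def balance_draws(pct_unknown, include_ae):
    """N realizations of Balance = Input - Output - Growth, with the unknown
    standard deviations set to pct_unknown percent of each flow's mean."""
    total = np.full(N, -GROWTH)
    for name, (mean, sd) in FLOWS.items():
        if not include_ae and name in ANTHRO:
            continue
        sd = sd if sd is not None else abs(mean) * pct_unknown / 100.0
        total += rng.normal(mean, sd, N)
    return total

for pct in (1.0, 2.3, 5.0):
    b_with = balance_draws(pct, include_ae=True)
    b_without = balance_draws(pct, include_ae=False)
    print(f"unknown SD {pct}%: WITH AE mean={b_with.mean():+.1f} sd={b_with.std():.1f}; "
          f"WITHOUT AE mean={b_without.mean():+.1f} sd={b_without.std():.1f}")
```

Dropping the anthropogenic flows leaves the balance short by about 8.9 GTC/y; whether that shortfall is detectable depends on whether it stands out against the combined standard deviation of the natural flows, which grows as the assumed uncertainty percentage grows.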


The following posts on this site are relevant to this discussion:

  1. Fossil Fuel Emissions and Atmospheric Composition
  2. Will Emission Reduction Change the Rate of Warming?
  3. Ocean Acidification by Fossil Fuel Emissions
  4. Spurious Correlations in Climate Science



The following downloadable papers may also be relevant

  1. Responsiveness of Atmospheric CO2 to Fossil Fuel Emissions
  2. An Empirical Study of Fossil Fuel Emissions and Ocean Acidification
  3. Circular Reasoning in Climate Change Research
  4. Uncertain Flow Accounting and the IPCC Carbon Budget
  5. Some Methodological Issues in Climate Science
  6. Generational Fossil Fuel Emissions and Generational Warming
  7. Dilution of Atmospheric Radiocarbon CO2 by Fossil Fuel Emissions
  8. A Test of the Anthropogenic Sea Level Rise Hypothesis
  9. Changes in the 13C/12C Ratio of Atmospheric CO2 1977-2014
  10. Correlation of Regional Warming with Global Emissions
  11. The Correlation between Emissions and Warming in the CET  




  1. The Global Carbon Project, with a goal to "fully understand the carbon cycle", has instead evolved into the world's foremost and most trusted accountant of fossil fuel emissions. The Project keeps records of fossil fuel emissions the world over, on a country-by-country and year-by-year basis, and these data are made publicly available and analyzed for trends as well as for their implications for future climate change scenarios. According to Wikipedia, "The Global Carbon Project (GCP) was established in 2001. The organisation seeks to quantify global carbon emissions and their sources."
  2. The Project's pretension to the study of the carbon cycle is not as useful as its emissions data because it is presented entirely in the context of circular reasoning. Observed changes of CO2 concentration in natural systems are assumed to derive wholly from fossil fuel emissions and land use change, and the carbon cycle accounting is carried out on that basis. Uncertainties are noted but ignored in the accounting calculations. The procedure is described below for the decade 2008-2017.
  3. The average annual carbon cycle for the decade 2008-2017 is presented here as an example of the reliance of the Global Carbon Project on circular reasoning. The flows used in the flow accounting are provided by the Global Carbon Project in gigatons of carbon dioxide per year. They have been converted to gigatons of carbon per year (GTC/y) for ease of comparison with the IPCC figures above.
  4. The use of net flows (net flow to land sink and net flow to ocean sink) inserts assumptions and circular reasoning into the carbon cycle flow account. These net flows are differences between very large flows, on the order of 100 GTC/y, with large uncertainties in their measurement. The differences therefore contain even larger uncertainties.
  5. In cases where a net flow is a direct measurement, its interpretation subsumes the very thing that the flow account is carried out to determine. For example, if the net flow to ocean sink is measured as a change in the average inorganic carbon concentration of the oceans, then the flow account assumes that the change was caused by surface phenomena such as fossil fuel emissions, ignoring emissions from plate tectonics, submarine volcanoes, hydrothermal vents, and hydrocarbon seeps.
  6. Carbon cycle flow accounts of this nature are not pure data that can be used to test theory but rather the theory itself expressed in terms of flow account values.
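The point about differences of large flows can be made concrete with standard error propagation. The flow magnitudes and the 10% uncertainty below are hypothetical round numbers chosen for illustration, not values from the Global Carbon Project.

```python
import math

# Hypothetical round numbers: two opposing gross flows of ~100 GTC/y,
# each known only to about 10% (SD = 10 GTC/y).
gross_in, sd_in = 100.0, 10.0
gross_out, sd_out = 97.0, 10.0

net = gross_in - gross_out                # 3.0 GTC/y
sd_net = math.sqrt(sd_in**2 + sd_out**2)  # independent uncertainties add in quadrature
print(f"net = {net:.1f} +/- {sd_net:.1f} GTC/y "
      f"({100 * sd_net / net:.0f}% relative uncertainty)")
```

A 10% uncertainty in each gross flow thus becomes an uncertainty several times larger than the net flow itself, which is the sense in which the differences "contain even larger uncertainties".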

Fossil fuel emissions: mean=9.3545, stdev=0.5454 [IPCC 2000-2009: 7.8]

Land use change emissions: mean=1.485, stdev=0.818 [IPCC 2000-2009: 1.1]

Net flow to land sink: mean=3.054, stdev=0.818 [IPCC 2000-2009: 4.3]

Net flow to ocean sink: mean=2.37, stdev=0.545 [IPCC 2000-2009: 1.6]

Growth in atmospheric CO2: mean=4.718, stdev=0.0545 [IPCC 2000-2009: 4.4]
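Summing the decadal means listed above gives a quick check of the account. The arithmetic below uses only the figures listed above; calling the residual a "budget imbalance" follows the Global Carbon Project's own terminology.

```python
# Global Carbon Project decadal means for 2008-2017, GTC/y, as listed above
fossil_fuel = 9.3545
land_use    = 1.485
land_sink   = 3.054
ocean_sink  = 2.37
atm_growth  = 4.718

sources = fossil_fuel + land_use                 # total flow into the atmosphere
accounted = land_sink + ocean_sink + atm_growth  # sinks plus atmospheric growth
residual = sources - accounted                   # ~0.70 GTC/y left unexplained
print(f"sources = {sources:.2f}, accounted = {accounted:.2f}, "
      f"residual = {residual:.2f} GTC/y")
```

The listed flows do not balance exactly; about 0.7 GTC/y is left over, a residual of the same order as some of the individual flow uncertainties listed above.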

  1. You know who Charles Darwin is, of course, but you may not have heard of his mad cousin Francis Galton, who did the math for Darwin's theory of evolution. Two of the many procedures Galton came up with to help him make sense of the data are still used today and are possibly the two most widely used tools in all of statistics: ordinary least squares (OLS) linear regression and OLS correlation.
  2. Both of these statistics are measures of a linear relationship between two variables X and Y. The linear regression coefficient B of Y against X is a measure of how much Y changes on average for a unit change in X and the linear correlation R is a measure of how close the observed changes are to the average. This close relationship between regression and correlation is described in Figure 1 above that consists of four charts. The data for these charts are generated by a Monte Carlo simulation used to control the degree of correlation.
  3. In the HIGH (R=0.94) and VERY HIGH (R=0.98) correlation charts, linear regression tells us that on average a unit change in X causes Y to change by B=5, and the data are very consistent with this estimate. The consistency in this case derives from the low variance of the regression coefficient implied by high correlation. The strong correlation also implies that the observed changes in Y for a unit increase in X are close to the average value of B=5 over the full span of the data and for any selected sub-span of the time series. A sample of five observed values shows changes of 5.0 to 5.4 for very high correlation, and 4.5 to 5.4 for high correlation.
  4. In the LOW (R=0.36) and MID (R=0.7) correlation charts, the regression coefficients are correspondingly less precise, varying from B=1.8 to B=7.1 for LOW-R and from B=3.5 to B=5.6 for MID-R in the five random estimates presented. The point here is that without a sufficient degree of correlation between the time series at the time scale of interest, regression coefficients can be computed but the computed coefficients may have no interpretation. The weak correlations in these cases also imply that the observed changes in Y for a unit increase in X would differ among sub-spans of the time series. The so-called "split-half" test, which compares the first half of the time series with the second half, may be used to examine the instability of the regression coefficient imposed by low correlation or by violations of other OLS assumptions, such as the independence and identical Gaussian distribution of the source of each data value in the time series.
  5. An issue specific to the analysis of time series data is that the observed correlation in the source data must be separated into the portion that derives from shared long-term trends, which has no interpretation at the time scale of interest, and the portion that reflects the responsiveness of Y to changes in X at that time scale. If this separation is not made, the correlation used in the evaluation may be, and often is, spurious. An example of such a spurious correlation is shown in Figure 2 above, taken from the TYLERVIGEN collection of spurious correlations [LINK]. As is evident, the spurious correlation between drownings and marriages derives from a shared trend. The fluctuations around the trend at an appropriate time scale are clearly not correlated.
  6. The separation of these effects may be carried out using detrended correlation analysis. Briefly, the trend component is removed from both time series and the residuals are tested for the responsiveness of Y to changes in X at the appropriate time scale. The procedure and its motivation are described quite well in ALEX TOLLEY'S LECTURE (Figure 4) [LINK]. The motivation and procedure for detecting and removing such spurious correlations in time series data are also described in a short paper available for download, summarized below.
  7. SPURIOUS CORRELATIONS IN TIME SERIES DATA: Unrelated time series data can show spurious correlations by virtue of a shared drift in the long term trend. The spuriousness of such correlations is demonstrated with examples. The SP500 stock market index, GDP at current prices for the USA, and the number of homicides in England and Wales in the sample period 1968 to 2002 are used for this demonstration. Detrended analysis shows the expected result that at an annual time scale the GDP and SP500 series are related and that neither of these time series is related to the homicide series. Correlations between the source data and those between cumulative values show spurious correlations of the two financial time series with the homicide series. These results have implications for empirical evidence that attributes changes in temperature and carbon dioxide levels in the surface-atmosphere system to fossil fuel emissions. FULL TEXT. Yet another example of spurious correlations in time series data is the apparent homicide sensitivity of atmospheric carbon dioxide concentration described in a related post [LINK] .
  8. It is for these reasons that the argument that “the theory that X causes Y is supported by the data because X shows a rising trend and at the same time we see that Y has also been going up” is specious because for the data to be declared consistent with causation it must be shown that Y is responsive to X at the appropriate time scale when the spurious effect of the shared trend is removed. Examples from climate science are presented in the seven downloadable papers described in PARAGRAPHS #9 TO #15.
  9. Are fossil fuel emissions causing atmospheric CO2 levels to rise? RESPONSIVENESS OF ATMOSPHERIC CO2 CONCENTRATION TO EMISSIONS The IPCC carbon cycle accounting assumes that changes in atmospheric CO2 are driven by fossil fuel emissions on a year-by-year basis. A testable implication of the validity of this assumption is that changes in atmospheric CO2 should be correlated with fossil fuel emissions at an annual time scale net of long term trends. A test of this relationship with in situ CO2 data from Mauna Loa 1958-2016 and flask CO2 data from twenty-three stations around the world 1967-2015 is presented. The test fails to show that annual changes in atmospheric CO2 levels can be attributed to annual emissions. The finding is consistent with prior studies that found no evidence to relate the rate of warming to emissions, and it implies that the IPCC carbon budget is flawed, possibly because of insufficient attention to uncertainty, excessive reliance on net flows, and the use of circular reasoning that subsumes a role for fossil fuel emissions in the observed increase in atmospheric CO2. FULL TEXT
  10. Can sea level rise be attenuated by reducing or eliminating fossil fuel emissions? A TEST OF THE ANTHROPOGENIC SEA LEVEL RISE HYPOTHESIS Detrended correlation analysis of a global sea level reconstruction 1807-2010 does not show that changes in the rate of sea level rise are related to the rate of fossil fuel emissions at any of the nine time scales tried. The result is checked against the measured data from sixteen locations in the Pacific and Atlantic regions of the Northern Hemisphere. No evidence could be found that observed changes in the rate of sea level rise are unnatural phenomena that can be attributed to fossil fuel emissions. These results are inconsistent with the proposition that the rate of sea level rise can be moderated by reducing emissions. It is noted that correlation is a necessary but not sufficient condition for a causal relationship between emissions and acceleration of sea level rise. FULL TEXT
  11. Can ocean acidification be attenuated by reducing or eliminating fossil fuel emissions? A TEST OF THE ANTHROPOGENIC OCEAN ACIDIFICATION HYPOTHESIS Detrended correlation analysis of annual fossil fuel emissions and mean annual changes in ocean CO2 concentration in the sample period 1958-2014 shows no evidence that the two series are causally related. The finding is inconsistent with the claim that fossil fuel emissions have a measurable impact on the CO2 concentration of the oceans at a lag and time scale of one year.  FULL TEXT
  12. Is surface temperature responsive to atmospheric CO2 levels? EMPIRICAL TEST FOR THE CHARNEY CLIMATE SENSITIVITY FUNCTION Monthly means of Mauna Loa atmospheric CO2 concentrations are used in conjunction with surface temperature data from two different sources for the sample period 1979-2017 to test the validity and reliability of the empirical Charney climate sensitivity function. Detrended correlation analysis of temperature in five global regions from two different sources did not show that surface temperature is responsive to changes in the logarithm of atmospheric CO2 at an annual time scale. Correlations observed in source data are thus shown to be spurious. We conclude that the empirical Charney Climate Sensitivity function is specious because it is based on a spurious correlation. FULL TEXT
  13. Is surface temperature responsive to atmospheric CO2 levels? UNCERTAINTY IN EMPIRICAL CLIMATE SENSITIVITY  Atmospheric CO2 concentrations and surface temperature reconstructions in the study period 1850-2017 are used to estimate observed equilibrium climate sensitivity. Comparison of climate sensitivities in the first and second halves of the study period and a study of climate sensitivities in a moving 60-year window show that the estimated values of climate sensitivity are unstable and unreliable and that therefore they may not contain useful information. These results are not consistent with the existence of a climate sensitivity parameter that determines surface temperature according to atmospheric CO2 concentration. FULL TEXT
  14. Is surface temperature responsive to atmospheric CO2 levels? FROM CLIMATE SENSITIVITY TO TRANSIENT CLIMATE RESPONSE A testable implication of the theory of anthropogenic global warming (AGW) is the Equilibrium Climate Sensitivity (ECS), the coefficient of proportionality between the logarithm of atmospheric CO2 and surface temperature. This line of research has been retarded by large uncertainties in empirical estimates of the ECS. An alternative to the ECS that offers a more stable metric for AGW is the Carbon Climate Response or Transient Climate Response to Cumulative Emissions (CCR/TCRE). It is computed as the coefficient of proportionality between cumulative fossil fuel emissions and temperature. The CCR/TCRE metric provides a direct connection from emissions to temperature without the intervening step of atmospheric accumulation. We show here that though the CCR/TCRE is stable, it has no interpretation in terms of AGW because the proportionality it describes is spurious and specious. FULL TEXT
  15. Is surface temperature responsive to atmospheric CO2 levels? THE CHARNEY HOMICIDE SENSITIVITY TO CO2 Homicides in England and Wales 1898-2003 are studied against the atmospheric carbon dioxide data for the same period. The Charney equilibrium sensitivity of homicides is found to be λ=1.7 thousands of additional annual homicides for each doubling of atmospheric CO2. The sensitivity estimate is supported by a strong correlation of ρ=0.95 and a detrended correlation of ρ=0.86. The analysis illustrates that spurious proportionalities in time series data, in conjunction with inadequate statistical rigor in the interpretation of empirical Charney climate sensitivity estimates, impede the orderly accumulation of knowledge in this line of research. FULL TEXT
  16. A further caution needed in regression and correlation analysis of time series data arises when the source data are preprocessed prior to analysis. In most cases the effective sample size of the preprocessed data is less than that of the source data because preprocessing involves using data values more than once. For example, taking moving averages involves multiplicity in the use of the data that reduces the effective sample size (EFFN), and the effect of that on the degrees of freedom (DF) must be taken into account when carrying out hypothesis tests.
  17. The procedures and their rationale are described in this freely downloadable paper ILLUSORY STATISTICAL POWER IN TIME SERIES DATA : Preprocessing of time series data with moving average and autoregressive processes serves a useful purpose in time series analysis; but the further use of the preprocessed series for computing probability in hypothesis tests or for constructing confidence intervals requires a correction to the degrees of freedom imposed on the filtered series by multiplicity. Multiplicity derives from repeated use of the same data item in the source data series for the computation of multiple items in the filtered series. A procedure for estimating multiplicity and the effective degrees of freedom implied by multiplicity is proposed and its utility is demonstrated with examples. It is found that without a multiplicity correction the filtered series can show an illusory increase in statistical power. FULL TEXT
  18. Failure to correct for this effect on DF may result in a false sense of statistical power and faux rejection of the null in hypothesis tests, as shown in this analysis of Kerry Emanuel's famous paper on what he called the "increasing destructiveness" of North Atlantic hurricanes: CIRCULAR REASONING IN CLIMATE SCIENCE : A literature review shows that the circular reasoning fallacy is common in climate change research. It is facilitated by confirmation bias and by activism such that the prior conviction of researchers is subsumed into the methodology. Example research papers on the impact of fossil fuel emissions on tropical cyclones, on sea level rise, and on the carbon cycle demonstrate that the conclusions drawn by researchers about their anthropogenic cause derive from circular reasoning. The validity of the anthropogenic nature of global warming and climate change, and that of the effectiveness of proposed measures for climate action, may therefore be questioned solely on this basis. FULL TEXT
  19. When the statistics are done correctly, we find no evidence for the claim that “human caused climate change is supercharging tropical cyclones” as in the downloadable paper A General Linear Model for Trends in Tropical Cyclone Activity ABSTRACT: The ACE index is used to compare tropical cyclone activity worldwide among seven decades from 1945 to 2014. Some increase in tropical cyclone activity is found relative to the earliest decades. No trend is found after the decade 1965-1974. A comparison of the six cyclone basins in the study shows that the Western Pacific Basin is the most active basin and the North Indian Basin the least. The advantages of using a general linear model for trend analysis are described. FULL TEXT
  20. CUMULATIVE VALUES OF A TIME SERIES: An extreme case of the effect of preprocessing on degrees of freedom occurs when a time series of cumulative values is derived from the source data as in the famous Matthews paper on the proportionality of warming to cumulative emissions [Matthews, H. Damon, et al. “The proportionality of global warming to cumulative carbon emissions.” Nature 459.7248 (2009): 829].
  21. It has been shown in these downloadable papers that the time series of cumulative values has an effective sample size of EFFN=2 and therefore there are no degrees of freedom and there is no statistical power. The correlation between cumulative values is therefore spurious and does not contain useful information. The spuriousness is demonstrated in Figure 3 and explored in detail in the papers described in the next five paragraphs. Abstracts and download links for the five papers are provided.
  22.  EFFECTIVE-N OF THE CUMULATIVE VALUES OF A TIME SERIES : In the computation of cumulative values of a time series of length N, all data items except for the last item are used more than once. The multiplicity in the use of the data reduces the effective value of N. We show that for time series of cumulative values the effective value of N is too small to yield sufficient degrees of freedom to make inferences about the population. It is not possible to evaluate the statistical significance of a correlation between cumulative values for this reason even when the magnitude of the correlation coefficient observed in the sample is large. The results provide a rationale for the findings of a previous work in which the spuriousness of correlations between cumulative values was demonstrated with Monte Carlo simulation. FULL TEXT
  23. LIMITATIONS OF THE TCRE : Observed correlations between cumulative emissions and cumulative changes in climate variables form the basis of the Transient Climate Response to Cumulative Emissions (TCRE) function. The TCRE is used to make forecasts of future climate scenarios based on different emission pathways and thereby to derive their policy implications for climate action. Inaccuracies in these forecasts likely derive from a statistical weakness in the methodology used. The limitations of the TCRE are related to its reliance on correlations between cumulative values of time series data. Time series of cumulative values contain neither time scale nor degrees of freedom. Their correlations are spurious. No conclusions may be drawn from them. FULL TEXT
  24. From Equilibrium Climate Sensitivity to Carbon Climate Response: A testable implication of the theory of anthropogenic global warming (AGW) is the Equilibrium Climate Sensitivity (ECS), the coefficient of proportionality between the logarithm of atmospheric CO2 and surface temperature. This line of research has been retarded by large uncertainties in empirical estimates of the ECS. An alternative to the ECS that offers a more stable metric for AGW is the Carbon Climate Response or Transient Climate Response to Cumulative Emissions (CCR/TCRE). It is computed as the coefficient of proportionality between cumulative fossil fuel emissions and temperature. The CCR/TCRE metric provides a direct connection from emissions to temperature without the intervening step of atmospheric accumulation. We show here that though the CCR/TCRE is stable, it has no interpretation in terms of AGW because the proportionality it describes is spurious and specious. FULL TEXT
  25. SPURIOUS CORRELATIONS BETWEEN CUMULATIVE VALUES: Monte Carlo simulation shows that cumulative values of unrelated variables have a tendency to show spurious correlations. The results have important implications for the theory of anthropogenic global warming because empirical support for the theory that links warming to fossil fuel emissions rests entirely on a correlation between cumulative values. FULL TEXT
  26. EXTRATERRESTRIAL FORCING OF SURFACE TEMPERATURE: It is proposed that visitation by extraterrestrial spacecraft (UFO) alters the electromagnetic properties of the earth, its atmosphere, and its oceans and that these changes can cause global warming leading to climate change and thence to the catastrophic consequences of floods, droughts, severe storms, and sea level rise. An empirical test of this theory is presented with data for UFO sightings and surface temperature reconstructions for the study period 1910-2015. The results show strong evidence of proportionality between surface temperature and cumulative UFO sightings. We conclude that the observed warming since the Industrial Revolution is due to an electromagnetic perturbation of the climate system by extraterrestrial spacecraft. FULL TEXT
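The spurious-correlation effect described in the papers above is easy to reproduce. The sketch below is my own illustration, not code from any of the papers, and all function names are mine: it runs a small Monte Carlo experiment in which pairs of unrelated IID Gaussian series show near-zero correlation, while the correlations between their cumulative sums are large on average.

```python
# Monte Carlo sketch of spurious correlation between cumulative values.
# Unrelated IID series correlate near zero; their cumulative sums do not.
import random
import statistics

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cumsum(x):
    """Running total of a series (the 'cumulative values')."""
    total, out = 0.0, []
    for v in x:
        total += v
        out.append(total)
    return out

random.seed(1)
trials, n = 1000, 100
r_source, r_cumulative = [], []
for _ in range(trials):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]   # unrelated by
    b = [random.gauss(0.0, 1.0) for _ in range(n)]   # construction
    r_source.append(abs(pearson_r(a, b)))
    r_cumulative.append(abs(pearson_r(cumsum(a), cumsum(b))))

print(f"mean |r| of source data:     {statistics.fmean(r_source):.3f}")
print(f"mean |r| of cumulative sums: {statistics.fmean(r_cumulative):.3f}")
```

Because the cumulative sums of unrelated series correlate this strongly by construction, a large correlation between cumulative values carries no information about the underlying variables, which is the point made in the papers listed above.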








Figure 1: Screen grab from the YouTube video “Forklift causes whole warehouse to collapse”. An example of dependence, nonlinear dynamics, and chaos. Link to video:




Figure 2: Demonstration of chaotic behavior of Hurst persistence in time series data



  1. The video in Figure 2 plots the same time series twice, in red and in blue. In the red line, the events in the time series evolve independently, with no dependence or persistence. Such independence is an important assumption of OLS (ordinary least squares) regression. In the blue line, the events evolve as a chaotic system with dependence and persistence; a small dependence has been inserted as a 1% tendency for persistence. Without persistence, rising and falling tendencies are equally likely, with 50% probability at each event. That is why the red line stays relatively flat: its ups and downs are small and cancel out.
  2. The 1% persistence in the blue line means that if the prior change was positive, the probabilities change from 50%/50% to 51%/49% favoring a positive change for the next event. If it is positive again in the next event, the probabilities are changed to 52%/48% but if it is negative they change back to 50%/50%. The probabilities keep changing to favor the direction of change in the prior event. This behavior is called persistence and it is very common in nature, particularly in surface temperature.
  3. When you play the video you will see the blue line take on various shapes and trends, both rising and falling, mostly staying very close to the Gaussian red line. But it is capable of sudden departures from the Gaussian that form what appear to be patterns, such as rising and falling trends. These shapes do not imply cause-and-effect phenomena. They are the random behavior of a chaotic system.
  4. All of these shapes are representations of the same underlying phenomenon. The differences among these curves have no interpretation because they represent randomness. The human instinct to look for causes for unusual patterns derives from Darwinian survival but it leads us astray when we study chaotic systems.
  5. The shapes and trends the blue line forms may be found to be statistically significant if the system is assumed to be deterministic and the violations of OLS assumptions are ignored; but that statistical significance is meaningless. Yet this kind of analysis is common, and the conclusions it implies about the phenomena of nature have no interpretation because they are the product of violated assumptions.
  6. Surface temperature and other climate variables are known to be chaotic, and therefore not all observed patterns in climate data contain information about cause and effect phenomena. That nature is chaotic is well understood. These three books present that argument with data, examples, and case studies: EA Jackson, Exploring Nature’s Dynamics; C Letellier, Chaos in Nature; Cushing et al, Chaos in Ecology.
  7. References to chaos in nature are also found in the climate change literature. These two excellent papers by Timothy Palmer are a good introduction to the subject of chaos in climate data: “A nonlinear dynamical perspective on climate prediction” Journal of Climate 12.2 (1999): 575-591, and “Predicting uncertainty in forecasts of weather and climate” Reports on Progress in Physics 63.2 (2000): 71.
  8. A common method of detecting fractal/chaotic behavior in long time series of field data is to compute the so-called Hurst exponent “H” of the time series as a way of detecting persistence in the data. Persistence implies that the data in the time series do not evolve independently but depend on prior values, such that prior changes tend to persist into the next time slice. The method was first described by Harold Edwin Hurst in 1951 in a landmark paper that is still cited hundreds of times a year (Hurst, H.E. (1951). “Long-term storage capacity of reservoirs”. Transactions of the American Society of Civil Engineers).
  9. The Hurst exponent method of detecting non-linear dynamics in time series data is used in climate change research. This trend in climate science has been led by the very charismatic and controversial hydrologist Demetris Koutsoyiannis, Professor of Hydrology, National Technical University of Athens. Demetris finds that many hydrology time series behaviors that climate science ascribes to emissions can be explained in terms of non-linear dynamics.
  10. Here are some examples from the literature of Hurst persistence analysis of climate data: Cohn, Timothy, et al. “Nature’s style: Naturally trendy.” Geophysical Research Letters 32.23 (2005) (by “naturally trendy” Tim means that things that look like trends are actually randomness); Weber, Rudolf, et al. “Spectra and correlations of climate data from days to decades.” Journal of Geophysical Research: Atmospheres 106.D17 (2001): 20131-20144; Koutsoyiannis, Demetris. “Climate change, the Hurst phenomenon, and hydrological statistics.” Hydrological Sciences Journal 48.1 (2003): 3-24; Markonis, Yannis, “Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics.” Surveys in Geophysics 34.2 (2013): 181-207; Pelletier, Jon, et al. “Long-range persistence in climatological and hydrological time series: analysis, modeling and application to drought hazard assessment.” Journal of Hydrology 203.1-4 (1997): 198-208; Rybski, Diego, et al. “Long‐term persistence in climate and the detection problem.” Geophysical Research Letters 33.6 (2006).
  11. Much of the empirical work in climate science is presented in terms of time series of field data (field data are data generated by nature, over which the researcher has no control, as opposed to data collected in controlled experiments). Information about climate contained in these data is usually extracted by researchers with OLS (ordinary least squares) regression analysis. Surprisingly, even at the highest levels of climate research little or no attention is paid to the assumptions of OLS analysis, which include, for example, the assumption that the data in the time series evolve as IID (independent and identically distributed) random variables. The stationarity assumption further requires that the distribution must not change as the time series evolves.
  12. More information on regression analysis may be found in a companion post on SPURIOUS CORRELATIONS. A good reference for regression analysis of time series data is Time Series Analysis: Forecasting and Control (Wiley, 2015) by George Box & Gwilym Jenkins. Another is Time Series Analysis and Its Applications (Springer, 2017) by Robert Shumway. These authors have shown that when time series analysis goes awry, it usually involves violations of OLS assumptions.
  13. In my work with temperature, sea level rise, precipitation, solar activity, and ozone depletion, I tested the time series for OLS violations by computing the Hurst exponent H. The theoretical neutral value with no serial dependence is H=0.5, but it has been shown that the neutral value used for comparison in empirical research must be adjusted for the specific sub-sampling strategy used in the estimation of H. In all my work, this estimation is carried out twice: once with the data and again with a Monte Carlo simulation of the data that generates a corresponding IID/stationary series.
  14. The two values of H are then compared. If the difference between them is not statistically significant, we conclude that there is no evidence of Hurst behavior in the data, and OLS regression results may therefore be interpreted in terms of the phenomena under study. However, if the value of H in the data is significantly greater than the value of H in the IID simulation, we conclude that the data contain fractal/chaotic behavior by virtue of Hurst persistence, and that therefore OLS results may not be interpreted strictly in terms of the phenomena under study. Nonlinear dynamics must be considered. A list of these studies appears below with links to the freely downloadable full text.
  1. Ozone Depletion Latitudinally Weighted Mean Global Ozone 1979-2015
  2. Warming Seasonality and Dependence in Daily Mean USCRN Temperature 
  3. Precipitation The Hurst Exponent of Precipitation
  4. Warming A Robust Test for OLS Trends in Daily Temperature Data
  5. Solar Activity The Hurst Exponent of Sunspot Counts
  6. Warming The OLS Warming Trend at Nuuk, Greenland
  7. Warming The Hurst Exponent of Surface Temperature
  8. Warming OLS Trend Analysis of CET Daily Mean Temperatures 1772-2016
  9. Precipitation The Hurst Exponent of Precipitation: England and Wales 1766-2016
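The test procedure described in paragraphs 13 and 14 above can be sketched in a few lines. The code below is my own simplified illustration, not the author's code: it estimates the Hurst exponent H by classical rescaled-range (R/S) analysis and compares an IID series against a persistent one. For simplicity, the persistent series here is an AR(1) process rather than the probability-shifting rule of paragraph 2.

```python
# Sketch: estimate H by rescaled-range (R/S) analysis and compare a
# persistent series against an IID surrogate (paragraphs 13-14).
import math
import random
import statistics

def rescaled_range(x):
    """R/S statistic of one window: range of mean-adjusted cumulative
    sums divided by the standard deviation of the window."""
    m = statistics.fmean(x)
    cum, dev = 0.0, []
    for v in x:
        cum += v - m
        dev.append(cum)
    return (max(dev) - min(dev)) / statistics.pstdev(x)

def hurst(x, windows=(8, 16, 32, 64, 128)):
    """Estimate H as the OLS slope of log(mean R/S) vs log(window size)."""
    ln_n, ln_rs = [], []
    for w in windows:
        rs = [rescaled_range(x[i:i + w])          # non-overlapping windows
              for i in range(0, len(x) - w + 1, w)]
        ln_n.append(math.log(w))
        ln_rs.append(math.log(statistics.fmean(rs)))
    mn, mr = statistics.fmean(ln_n), statistics.fmean(ln_rs)
    num = sum((a - mn) * (b - mr) for a, b in zip(ln_n, ln_rs))
    return num / sum((a - mn) ** 2 for a in ln_n)

random.seed(7)
n = 4096
iid = [random.gauss(0.0, 1.0) for _ in range(n)]  # no memory

# Persistent series: AR(1), a simple stand-in for the persistence rule.
persistent, prev = [], 0.0
for _ in range(n):
    prev = 0.8 * prev + random.gauss(0.0, 1.0)
    persistent.append(prev)

print(f"H estimate, IID series:        {hurst(iid):.2f}")
print(f"H estimate, persistent series: {hurst(persistent):.2f}")
```

As paragraph 13 notes, the estimate for the IID series comes out somewhat above the theoretical H=0.5 because of small-sample bias, which is exactly why the comparison is made against an IID simulation rather than against 0.5 itself.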








  1. Before it was expropriated by the global warming/climate change movement, the term “Greenhouse Effect” referred to the effect of elevated carbon dioxide in greenhouses on crop chemistry. We know from greenhouse studies going back to the late 19th century that crop chemistry reflects the balance between soil chemistry, air chemistry, and light intensity. The important features of air chemistry are the availability of carbon dioxide for photosynthesis and of oxygen for plant respiration. The important features of soil chemistry are the availability of water, nitrates, phosphates, and minerals.
  2. Climate science is apparently concerned about the effect of elevated atmospheric CO2 on agriculture. Initially it was assessed that the effects of climate change would devastate agriculture, but later the concern shifted to the effect of elevated atmospheric CO2 on the nutritional quality of crops. These concerns appear to be disconnected from the extensive literature on elevated-CO2 agriculture in greenhouses that has been with us for more than a century.
  3. Greenhouse operations include irrigation, air circulation to maintain air quality, heating for temperature control, the introduction of carbon dioxide to maintain elevated carbon dioxide levels of 1000 to 2000 parts per million for photosynthesis enrichment, and the availability of sufficient light for photosynthesis to occur. Photosynthesis enrichment improves crop yield. Corresponding changes to soil chemistry are required to preserve the nutritional quality of the crops.
  4. It has been found in numerous greenhouse studies since the 19th century that if elevated carbon dioxide is not matched by corresponding changes to soil chemistry, crop chemistry may shift in the direction of higher starch content and lower nutritional quality. These effects are crop specific and vary greatly among crop types.
  5. Proper greenhouse management is responsive to these dynamics and involves the management of light and soil chemistry appropriate for any given level of carbon dioxide so that crop nutritional quality is maintained. These relationships are described in some detail in the Stitt & Krapp 1999 paper listed below and highlighted in bold.
  6. The various works of Bruce Kimball of the US Water Conservation Laboratory (with full text available free from the USDA) are unique in this line of research: they are not greenhouse studies but surveys of a large number of such studies, carried out to estimate the impact of climate change on crop yield.
  7. His work followed on the heels of the landmark “Climate Sensitivity” presentation made by Jule Charney in 1979, in which he presented the finding from climate model studies that a doubling of atmospheric carbon dioxide will cause mean global temperature to rise by 1.5C to 4.5C. The Charney Climate Sensitivity still serves as the fundamental relationship in climate science for the “greenhouse warming effect” thought to be caused by atmospheric carbon dioxide.
  8. Kimball followed the Charney format and presented his finding that a doubling of atmospheric carbon dioxide will increase crop yields worldwide by about 30%, with some differences among crops and for different conditions and latitudes. The relevant citations appear below.
  9. Kimball, Bruce A. “Carbon Dioxide and Agricultural Yield: An Assemblage and Analysis of 430 Prior Observations 1.” Agronomy journal 75.5 (1983): 779-788.
  10. Kimball, B. A., and S. B. Idso. “Increasing atmospheric CO2: effects on crop yield, water use and climate.” Agricultural water management 7.1-3 (1983): 55-72.
  11. Kimball, B. A., et al. “Effects of increasing atmospheric CO2 on vegetation.” CO2 and Biosphere. Springer, Dordrecht, 1993. 65-76.
  12. Mauney and Kimball. “Growth and yield of cotton in response to a free-air carbon dioxide enrichment (FACE) environment.” Agricultural and Forest Meteorology 70.1-4 (1994): 49-67.
  13. Kimball, Bruce A., et al. “Productivity and water use of wheat under free‐air CO2 enrichment.” Global Change Biology 1.6 (1995): 429-442.
  14. Kimball, B. A., K. Kobayashi, and M. Bindi. “Responses of agricultural crops to free-air CO2 enrichment.” Advances in agronomy. Vol. 77. Academic Press, 2002. 293-368.
  15. Idso and Kimball. “Effects of atmospheric CO2 enrichment on plant growth: the role of air temperature.” Agriculture, ecosystems & environment 20.1 (1987): 1-10.
  16. The findings of a selection of GREENHOUSE STUDIES from Besford 1990 to Galtier 1995 presenting measurements of nutritional loss due to an imbalance in CO2 and soil nutrients are listed below. The greenhouse management implications of these findings are described best in the Stitt and Krapp 1999 paper.
  17. RT Besford, et al 1990, Journal of Experimental Botany 41.8: 925-931: Compared with tomato plants grown in normal ambient CO2, the 1000 ppm CO2 grown leaves, when almost fully expanded, contained only half as much RuBPco protein. Note: corresponding soil enrichment was not used.
  18. Peter Curtis et al, 1998, Oecologia 113.3: 299-313: Total biomass and net CO2 assimilation increased significantly at about twice ambient CO2, regardless of growth conditions. Low soil nutrient availability reduced the CO2 stimulation of total biomass by half, from +31% under optimal conditions to +16%, while low light increased the difference to +52%.
  19. Kramer, Paul J. 1981, BioScience 31.1: 29-33: The long-term response to high CO2 varies widely among species. Furthermore, the rate of photosynthesis is limited by various internal and environmental factors in addition to the CO2 concentration.
  20. Curtis, P. S. 1996, Plant, Cell & Environment 19.2: 127-137: Growth at elevated [CO2] resulted in moderate reductions in gs (stomatal conductance) in unstressed plants, but there was no significant effect of CO2 on gs in stressed plants. Leaf dark respiration (mass or area basis) was reduced strongly by growth at high [CO2], while leaf N was reduced only when expressed on a mass basis.
  21. Shahidul Islam et al, 1996, Scientia Horticulturae 137-149: CO2-enriched tomatoes had lower amounts of citric, malic and oxalic acids, and higher amounts of ascorbic acid, fructose, glucose and sucrose synthase activity than the control. Elevated CO2 enhanced fruit growth and colouring during development.
  22. Stitt & Krapp 1999, Plant, Cell & Environment 22.6: 583-621: Increased rates of growth in elevated [CO2] will require higher rates of inorganic nitrogen uptake and assimilation. An increased supply of sugars can increase the rates of nitrate and ammonium uptake and assimilation, the synthesis of organic acid acceptors, and the synthesis of amino acids. Interpretation of experiments in elevated [CO2] requires that the nitrogen status of the plants is monitored.
  23. Galtier, Nathalie, et al. 1995, Journal of Experimental Botany 1335-1344: At elevated CO2, the rate of sucrose synthesis was increased relative to that of starch, and sucrose/starch ratios were higher throughout the photoperiod in the leaves of all plants expressing high SPS activity. At high CO2 the stimulation of photosynthesis was more pronounced. We conclude that SPS activity is a major point of control of photosynthesis, particularly under saturating light and CO2.





The eco scare that human activity is killing off the fish in the oceans predates climate change. In the BC days (before-climate), a combination of over-fishing, seafaring, and discharges of plastics and pollution into the oceans by humans was cited (“Sea’s riches running out”, 1977). In AC times (after-climate) there is of course only one cause for all things, and that is human caused global warming by way of fossil fuel emissions (Oceans running out of fish, Bangkok Post, 1994), (Ocean’s fish could disappear, Bangkok Post, May 19, 2010), (A New Warning Says We Could Run Out of Fish by 2048, HuffPost, Dec 14, 2017), (All seafood will run out in 2050, say scientists, The Telegraph, 22 May 2018), (Oceans are running out of fish much faster than previously thought, ZME Science, 20 January 2016).


In the AC after-climate era, the causes of the fish apocalypse are described in terms of rising ocean temperature and ocean acidification by fossil fuel emissions. As well, the language of the fish apocalypse has changed from gradual reduction in numbers to “depletion at alarming rates” and claims that marine life on earth is “at a breaking point”. There is also a timeline given for when the oceans will become devoid of fish. That will happen in the year 2050. Unless of course we get serious about the Paris Accord, stop using dirty polluting fossil fuels, and save the planet. And the fish.



  1. Global warming scientists cited the shrinking of the Chorabari Glacier in the eastern Himalayan Mountains as evidence that carbon dioxide emissions from fossil fuels are causing global warming and that global warming in turn is causing Himalayan glaciers to melt. Although the data are insufficient and conflicting, they project that in a hundred years the glacial loss will affect the water supply of a vast region whose rivers get their water from these glaciers. With respect to the absence of sufficient data to support this projection, they propose the odd logic that “the dearth of scientific knowledge only adds to the alarm”.
  2. There are a thousand glaciers in the Himalayan Mountains. Some of them are retreating. Some of them are expanding. Some are doing neither. We don’t have sufficient data to know what most of them are doing, except that there has been a gradual net retreat of the glaciers since the year 1850, which marks the glacial maximum of the Little Ice Age.
  3. The Himalayas are fold mountains and the folding is still in progress. It is a geologically active area. There is a lot of geothermal activity in these mountains, particularly in Uttaranchal, where the Chorabari Glacier is located. Steamy hot springs are a major tourist attraction in Uttaranchal.
  4. Neither geothermal nor volcanic activity is included in the assessment of glacial melt as an effect of fossil fuel emissions. The assessment is that the end is near for Himalayan glaciers due to fossil fuel emissions. The end may very well be near, but the prediction of its coming would be more credible if the computer model included volcanic and geothermal activity both on land and at the bottom of the ocean.
  5. A computer model based on the assumption that all surface anomalies of the planet are due to human activity is not the appropriate tool for the determination of the role of human activity in surface anomalies.



  1. Although there has been some thinning of coastal ice in Greenland, the total ice mass there is actually increasing because of a rapid increase in ice thickness at higher elevations. If we could cause all of Greenland’s ice to melt into the sea, it would raise the sea level by 7 meters, as the scaremongers say, but that scenario does not appear likely given the data.
  2. One should also take note that during the last decade, Greenland has not become warmer. It has become colder. It is therefore not possible to ascribe changes in its ice mass to global warming or to fossil fuel emissions. As a footnote, Greenland’s coast was in fact green with vegetation in the tenth century when it was discovered by Nordic sailors. It was warmer then than it is now.
  3. Since then it has been through the Little Ice Age, from which it is currently recovering. Studies of ice mass balance in Greenland by climate scientists ignore geothermal activity and begin with the assumption that all observed ice loss is due to fossil fuel emissions and that it can be attenuated by taking climate action in the form of cutting emissions.



  1. Africa is a drought prone continent and has suffered numerous tragic droughts over the last 500 years. These droughts are natural occurrences. They are not caused by carbon dioxide emissions from fossil fuels. There is no trend in the severity of these droughts and the current one is not the most severe. African scholars have written to refute efforts to associate the current drought with the global warming agenda. One of these scholarly articles was recently published in the Bangkok Post.
  2. The dropping of water levels in Lake Victoria and other lakes there is a known effect of a cascade of dams on the Nile and cannot in any way be related to the use of fossil fuels.
  3. The New York Times columnist who makes these alarming charges is the same individual who once fell for the oldest trick in the book in Cambodian brothels and paid a large sum of money to “purchase freedom” for a young prostitute and then wrote a column about his heroic deed. The young lady had by then returned to the brothel. This is the level of gullibility we are dealing with in this column as well. One should use a big dose of critical thinking when consuming this kind of information.



The Anomalies in Temperature Anomalies

The Greenhouse Effect of Atmospheric CO2

ECS: Equilibrium Climate Sensitivity

Climate Sensitivity Research: 2014-2018

TCR: Transient Climate Response

Peer Review of Climate Research: A Case Study

Spurious Correlations in Climate Science

Antarctic Sea Ice: 1979-2018

Arctic Sea Ice 1979-2018

Global Warming and Arctic Sea Ice: A Bibliography

Carbon Cycle Measurement Problems Solved with Circular Reasoning

NASA Evidence of Human Caused Climate Change

Event Attribution Science: A Case Study

Event Attribution Case Study Citations

Global Warming Trends in Daily Station Data

History of the Global Warming Scare