- The Fang Zhi is a kind of government gazette that has been issued by Chinese governments for thousands of years. Data on extreme weather events and famines are included in this gazette. The data show that floods and droughts are common in China and that they are periodically particularly severe.
- A cyclical pattern of famines caused by severe drought followed by devastating floods may be traced back through all of recorded history in China. The period of this cycle has been estimated to be about fifty years. A peculiarity of this weather cycle is that floods and droughts can occur at the same time in China because weather in Southern China is about 180 degrees out of phase with that in Northern China. History has recorded many cases when the south is flooding from torrential rainfall while the north is in drought or conversely when the north is flooding and the south is dry.
- Much of the sociology, philosophy, literature, and politics of China have been shaped by the flood and drought cycle. Some scholars go so far as to claim that all of Chinese history is a story of the people’s fight against famine caused by this calamitous cycle of weather. One of the largest infrastructure projects in history is the failed attempt to link southern Chinese rainfall with northern Chinese rainfall using a very ambitious canal network. The construction and maintenance of granaries on an immense scale has consumed a succession of Chinese dynasties while famines have been the downfall of others. The 2005 drought in Hainan and Guangdong along with torrential rains and floods in Northern China fits the known pattern of extreme weather in China.
- If you truncate history at 1961, however, these weather events will appear to be unusual and unnatural. An equally unnatural cause for this kind of weather may then be assigned. In particular, those with a predisposition to the global warming/climate change hypothesis contained in the Kyoto Protocol and the UNFCCC will find in these events the kind of evidence they need to support their predisposed position (see for example: Waiting in vain for rain that's two years late has Hainan's farmers fretting about their future, The Nation, Bangkok, June 3, 2005).
- Fossil fuel consumption has risen dramatically since 1961. The data also show what appears to be an irregular increase in the CO2 content of the atmosphere in parallel with rising fuel consumption. At the same time we find the average temperature of the earth has been rising since 1979. It is tempting to draw a causal link from fossil fuels to CO2 and from CO2 to temperature and from there to extreme weather events. These relationships appear so convincing that no further scientific evidence is sought to support the subsumed causalities.
- Yet statistical analysis of the observational data does not show the correlations that would exist if this chain of causation were true. The correlation argument is presented in more detail in a related post: SPURIOUS CORRELATIONS IN CLIMATE SCIENCE.
- In the Chinese weather data, the global warming enthusiasts have been undone by the Fang Zhi. Their claim that fossil fuel consumption is to be blamed for this year’s drought in southern China and floods in northern China appears grossly childish and specious in light of history.
Reference: After Nobel Gore should go for the next big prize, Bangkok Post, Oct 16, 2007
The goofiness of the Nobel Committee is grossly underrated. Robert Merton and Myron Scholes must have thought they had been certified as Lords of Finance when they were awarded the Nobel Prize in Economics, until their hedge fund, Long Term Capital Management, did a nose dive and almost brought down the American financial system. More recently the Nobel Prize was awarded to scientists who had warned us that human activity was causing ozone depletion and making the ozone hole bigger. They were allowed to keep their prizes even after it became apparent that the observed changes in the ozone layer were part of a natural cycle having to do with shifting winds in the upper atmosphere and NOT due to human activity.
In its latest goof, the Nobel Committee has awarded a prize for service to humanity to people who see humanity as the enemy of the earth and whose stated goal could be achieved simply by eliminating humanity from the face of the earth. Besides, the movie about global warming An Inconvenient Truth has been widely discredited as being biased and containing not only exaggerations but outright lies and scientific fraud.
The award of the Nobel Prize to these snake oil salesmen does not prove that the Anthropogenic Global Warming hypothesis is correct. That proof can only be provided by empirical evidence. The fact that Gore was awarded a prize by a committee of five Norwegians appointed by the Norwegian legislature does not vindicate Gore. If anything it discredits the Nobel committee. If the Nobel Prize is to be made into a global prize then it should be removed from the confines of Norwegian goofiness and opened up to a more global evaluation.
RELATED POST: CIRCULAR REASONING IN CLIMATE SCIENCE: [LINK]
RELATED POST: AN EXCLUSIVE RELIANCE ON FOSSIL FUEL EMISSIONS OVERLOOKS NATURAL CARBON FLOWS. [LINK]
[RELATED POST ON THE CARBON CYCLE]
ASSUME A SPHERICAL COW
FIGURE 1: CO2 AIRBORNE FRACTION
- Paleo climate data tell us that prior to the industrial era the mean annual CO2 concentration of the atmosphere stayed in the range 180-290 ppm (IPCCAR5, 2013), a difference of 234 gigatons of carbon equivalent (GTC). The range is equivalent to total global fossil fuel emissions in the 33-year period 1985-2017, but since the Paleo changes occurred prior to the industrial age, they are ascribed to volcanic eruptions, which inject both aerosols and CO2 into the atmosphere. Changes in solar activity are also considered, as they can change the equilibrium partial pressure of CO2 over the oceans in accordance with the Henry's Law relationship for the temperature dependence of the solubility of carbon dioxide in water (IPCCAR5, 2013).
- However, in the postindustrial era, these changes are shown to be much more rapid and are therefore explained in terms of anthropogenic fossil fuel emissions, with the mathematics of the attribution computed in the context of the carbon cycle that describes the natural flows of carbon dioxide to and from the atmosphere. The IPCC describes the carbon cycle in terms of carbon dioxide flows among multiple sources and sinks. The atmosphere plays a role in nine of these flows. These mean flows, averaged over the decade 2000-2009 (Figure 7), and their standard deviations (SD) as reported by the IPCC, are listed below in units of GTC/y (IPCCAR5, 2013). Non-availability of data is indicated by N/A.
- Natural: Ocean surface to atmosphere: Mean=78.4, SD=N/A
- Natural: Atmosphere to ocean surface: Mean=80.0, SD=N/A
- Human: Fossil fuel emissions, surface to atmosphere: Mean=7.8, SD=0.6
- Human: Land use change, surface to atmosphere: Mean=1.1, SD=0.8
- Natural: Photosynthesis, atmosphere to surface: Mean=123.0, SD=8.0
- Natural: Respiration/fire, surface to atmosphere: Mean=118.7, SD=N/A
- Natural: Freshwater to atmosphere: Mean=1.0, SD=N/A
- Natural: Volcanic emissions, surface to atmosphere: Mean=0.1, SD=N/A
- Natural: Rock weathering, surface to atmosphere: Mean=0.3, SD=N/A
- A simple flow accounting of the mean values without consideration of uncertainty shows a net CO2 flow from surface to atmosphere of 4.4 GTC/y. The details of this computation are as follows. In the emissions and atmospheric composition data we find that during the decade 2000-2009 total fossil fuel emissions were 78.1 GTC and that over the same period atmospheric CO2 rose from 369.2 to 387.9 ppm, an increase of 18.7 ppm, equivalent to an increase of 39.6 GTC in atmospheric CO2, or 4.4 GTC/y. The ratio of the observed increase in atmospheric carbon to emitted carbon is thus 39.6/78.1=0.51. This computation is the source of the claim that the so called "Airborne Fraction" is about 50%; that is to say that about half of the emitted carbon accumulates in the atmosphere on average and the other half is absorbed by the oceans, by photosynthesis, and by terrestrial soil absorption. The Airborne Fraction of AF=50% later had to be made flexible in light of a range of observed values (Figure 1).
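This arithmetic is easy to reproduce. A minimal sketch in Python, assuming the standard conversion factor of roughly 2.12 GTC per ppm of atmospheric CO2 (the factor is an assumption here, chosen for consistency with the numbers above):

```python
# Airborne fraction computation for the decade 2000-2009 (a sketch).
PPM_TO_GTC = 2.12                  # assumed conversion: 1 ppm of CO2 ~ 2.12 GTC
ppm_start, ppm_end = 369.2, 387.9  # atmospheric CO2 at start and end of decade
emitted = 78.1                     # total fossil fuel emissions, GTC

retained = (ppm_end - ppm_start) * PPM_TO_GTC  # ~39.6 GTC added to the atmosphere
airborne_fraction = retained / emitted         # ~0.51
print(f"retained: {retained:.1f} GTC, airborne fraction: {airborne_fraction:.2f}")
```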
- The left frame of Figure 1 above shows a large range of values of the decadal mean Airborne Fraction, 0<DMAF<4.5, for decades ending in 1860 to 2017. This sample period includes ice core CO2 data from the Law Dome for years prior to 1958. However, when the sample period is restricted to the more precise Mauna Loa data from 1958, a much smaller range of values is seen in the right frame of Figure 1, with 0.45<DMAF<0.65. These data appear to support the usual assumption in climate science that fossil fuel emissions have contributed about half of the decadal mean increase in atmospheric CO2 concentration since 1958; but as demonstrated in a related post [LINK], without a correlation between emissions and changes in atmospheric CO2 concentration, airborne fractions can be computed but they have no interpretation in terms of cause and effect in the phenomenon being studied [LINK].
- When uncertainties are not considered, the flow accounting appears to show an exact match of the predicted and computed carbon balance. It is noted, however, that this exact accounting balance is achieved, not with flow measurements, but with estimates of unmeasurable flows constrained by the circular reasoning that assigns flows according to an assumed flow balance.
- However, a very different picture emerges when uncertainties are included in the balance. Published uncertainties for three of the nine flows are available in the IPCC reports. Uncertainties for the other six flows are not known. However, we know that they are large because no known method exists for the direct measurement of these flows. They can only be grossly inferred based on assumptions that exclude or minimize geological flows.
- Here, we set up a Monte Carlo simulation to estimate the highest value of the unknown standard deviations at which we can detect the presence of human emissions in the carbon cycle. For the purpose of this test we propose that an uncertain flow account is in balance as long as the Null Hypothesis that the sum of the flows is zero cannot be rejected. The alpha error rate for the test is set to a high value of alpha=0.10 to ensure that any reasonable ability to discriminate between the flow account WITH Anthropogenic Emissions and the flow account WITHOUT Anthropogenic Emissions is taken as evidence that the relatively small fossil fuel emissions can be detected in the presence of much larger and uncertain natural flows. The spreadsheet used in this determination is available for download from an online data archive: Data Archive Link.
- In the simulation we assign different levels of uncertainty to the flows for which no uncertainty data are available and test the null hypothesis that the flows balance with anthropogenic emissions (AE) included and again with AE excluded. If the flows balance when AE are included and they don’t balance when AE are excluded then we conclude that the presence of the AE can be detected at that level of uncertainty. However, if the flows balance with and without AE then we conclude that the stochastic flow account is not sensitive to AE at that level of uncertainty because it is unable to detect their presence. If the presence of AE cannot be detected no role for their effect on climate can be deduced from the data at that level of uncertainty in natural flows.
- The balance is computed from the atmospheric perspective as Balance=Input-Output where Input is flow to the atmosphere and Output is flow from the atmosphere. The p-values for hypothesis tests for uncertainties in the natural flows from 1% of mean to 6.5% of mean are presented below both as a tabulation and as a line chart.
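A minimal sketch of this simulation in Python, assuming Gaussian flows and applying the assumed percent uncertainty to every flow without a published SD. The exact p-values, and hence the crossover point, depend on design choices such as the number of draws and the test statistic, so this sketch illustrates the structure of the test rather than reproducing the spreadsheet:

```python
import numpy as np
from scipy import stats

# IPCC mean flows in GTC/y for 2000-2009; sign convention: + into the atmosphere.
# Flows with published SDs keep them; the rest get SD = pct% of |mean|.
FLOWS = [
    ("ocean surface to atmosphere",  78.4,  None),
    ("atmosphere to ocean surface", -80.0,  None),
    ("fossil fuel emissions",          7.8,  0.6),   # anthropogenic
    ("land use change",                1.1,  0.8),   # anthropogenic
    ("photosynthesis",              -123.0,  8.0),
    ("respiration and fire",         118.7,  None),
    ("freshwater outgassing",          1.0,  None),
    ("volcanic emissions",             0.1,  None),
    ("rock weathering",                0.3,  None),
]
ANTHRO = {"fossil fuel emissions", "land use change"}
ATM_GROWTH = 4.4  # observed accumulation in the atmosphere, GTC/y

def balance_p_value(pct, include_ae, n_draws=100, seed=42):
    """p-value of the null hypothesis Balance = 0, where Balance is the net
    flow into the atmosphere minus the observed accumulation (Input - Output)."""
    rng = np.random.default_rng(seed)
    balance = np.full(n_draws, -ATM_GROWTH)
    for name, mean, sd in FLOWS:
        if not include_ae and name in ANTHRO:
            continue
        sd = sd if sd is not None else pct / 100.0 * abs(mean)
        balance += rng.normal(mean, sd, n_draws)
    return stats.ttest_1samp(balance, 0.0).pvalue

for pct in (1, 2, 3, 4, 5, 6.5):
    print(f"{pct}% -> WITH: {balance_p_value(pct, True):.3f}, "
          f"WITHOUT: {balance_p_value(pct, False):.3f}")
```

The t-test here treats each draw as one realization of the decadal balance; the spreadsheet's exact test statistic and sample size may differ.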
- In the tabulation the PCT column shows the assumed percent standard deviation in the natural flows for which no uncertainty information is available. In the "base case", the blanket statement by the IPCC that the uncertainty is 20% is interpreted to mean that the width of the 95% confidence interval is 20% of the mean; the corresponding standard deviation, computed as (20/2)/1.96, is almost identical to that in the 5% (5PC) row. The data in each row show the p-values of two hypothesis tests labeled as WITH and WITHOUT. The WITH column shows p-values when the AE are included in the balance computation. The WITHOUT column shows the p-values when the AE are left out of the balance computation.
- We use a critical p-value of alpha=0.1 for the test of the null hypothesis that Balance=0. Balance=0 means that the stochastic flow account is in balance. If the p-value is less than alpha we reject the null hypothesis and conclude that the stochastic flow account is not in balance. If we fail to reject the null then we conclude that the stochastic flow account is in balance.
- The p-values for WITH and WITHOUT in each row taken together tell us whether the stochastic flow system is sensitive to AE, that is, whether the relatively small AE flow can be detected in the context of uncertainty in much larger natural flows. If we fail to reject the null hypothesis that Balance=0 in both the WITH and WITHOUT columns, the stochastic flow account balances with and without the AE flows. In these cases the stochastic flow account is not sensitive to AE, that is, it is unable to detect the presence of the AE flows. This is true for the five rows in which the uncertainty in natural flows is 3% of mean or higher.
- For the two lower uncertainty levels of 2% and 1% we find that the null hypothesis Balance=0 is not rejected when AE are included (the stochastic flow account is in balance) but rejected when AE are not included (the stochastic flow account is not in balance). Under these uncertainty conditions, the stochastic flow account is sensitive to the presence of AE, that is the flow account can detect the presence of the relatively small AE flows. The chart shows that the crossover uncertainty lies somewhere between 2% and 3% and in fact it is found by trial and error that the crossover occurs at 2.3%.
- These results imply that the IPCC carbon cycle stochastic flow balance is not sensitive to the presence of the relatively low flows from human activity involving fossil fuel emissions and land use change. The large natural flows of the carbon cycle cannot be directly measured and they can only be indirectly inferred. These inferred values contain uncertainties much larger than 2.3% of the mean. It is not possible to carry out a balance of the carbon cycle under these conditions.
- In the case of the conclusion by climate scientists that the observed increase in atmospheric CO2 concentration is caused by fossil fuel emissions, natural flows in the carbon cycle that are an order of magnitude larger than fossil fuel emissions and that cannot be directly measured are inferred with the implicit assumption that the increase in atmospheric CO2 comes from fossil fuel emissions. The flow balance can then be carried out and it does of course show that the increase in atmospheric CO2 derives from fossil fuel emissions. The balance presented by the IPCC with inferred flows thus forces an exact balance by way of circular reasoning. Therefore, the IPCC carbon cycle balance does not contain useful information that may be used to ascertain the impact of fossil fuel emissions on the carbon cycle or on the climate system.
- A rationale for the inability to relate changes in atmospheric CO2 to fossil fuel emissions is described by geologist James Edward Kamis in terms of natural geological emissions due to plate tectonics [LINK]. The essential argument is that, in the context of significant geological flows of carbon dioxide and other carbon based compounds, it is a form of circular reasoning to describe changes in atmospheric CO2 only in terms of human activity. It is shown in a related post that, in the context of large uncertainties in natural flows, changes in atmospheric CO2 are not responsive to the rate of emissions [LINK].
- Circular reasoning in this case can be described in terms of the “Assume a spherical cow” fallacy [LINK] which refers to the use of simplifying assumptions needed to solve a problem that change the context of the problem so that the solution no longer answers the original research question.
[RELATED POST ON THE CARBON CYCLE]
The following posts on this site are relevant to this discussion:
- Fossil Fuel Emissions and Atmospheric Composition
- Will Emission Reduction Change the Rate of Warming?
- Ocean Acidification by Fossil Fuel Emissions
- Spurious Correlations in Climate Science
THE GLOBAL CARBON PROJECT
- The Global Carbon Project, with a goal to “fully understand the carbon cycle”, has instead evolved into the world’s foremost and most trusted accountant of fossil fuel emissions. The Project keeps records of fossil fuel emissions the world over on a country by country and year by year basis, and these data are made publicly available and also analyzed for trends as well as implications for future climate change scenarios. According to Wikipedia, “The Global Carbon Project (GCP) was established in 2001. The organisation seeks to quantify global carbon emissions and their sources.”
- Their pretension to the study of the carbon cycle is less useful than their emissions data because it is presented entirely in the context of circular reasoning. Observed changes of CO2 concentration in natural systems are assumed to derive wholly from fossil fuel emissions and land use change. The carbon cycle accounting is carried out on that basis. Uncertainties are noted but ignored in the accounting calculations. The procedure is described below for the decade 2008-2017.
- The average annual carbon cycle for the decade 2008-2017 is presented here as an example of the reliance of the Global Carbon Project on circular reasoning. The flows used in the flow accounting are provided by the Global Carbon Project in gigatons of carbon dioxide per year. They have been converted to gigatons of carbon per year (GTCY) for ease of comparison with the IPCC figures above.
- The use of net flows (net flow to land sink and net flow to ocean sink) inserts assumptions and circular reasoning into the carbon cycle flow account. These net flows are differences between very large flows, on the order of 100 GTCY, that carry large measurement uncertainties. The differences therefore contain even larger relative uncertainties.
- In cases where a net flow is a direct measurement, its interpretation subsumes that which the flow account is carried out to determine. For example, if the net flow to ocean sink is measured as a change in the average inorganic carbon concentration of the oceans, then the flow account assumes that the change was caused by surface phenomena such as fossil fuel emissions, ignoring emissions from plate tectonics, submarine volcanoes, hydrothermal vents, and hydrocarbon seeps.
- Carbon cycle flow accounts of this nature are not pure data that can be used to test theory but rather the theory itself expressed in terms of flow account values.
Fossil fuel emissions: mean=9.3545, stdev=0.5454 [IPCC 2000-2009: 7.8]
Land use change emissions: mean=1.485, stdev=0.818 [IPCC 2000-2009: 1.1]
Net flow to land sink: mean=3.054, stdev=0.818 [IPCC 2000-2009: 4.3]
Net flow to ocean sink: mean=2.37, stdev=0.545 [IPCC 2000-2009: 1.6]
Growth in atmospheric CO2: mean=4.718, stdev=0.0545 [IPCC 2000-2009: 4.4]
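A quick check of the means listed above (a sketch in Python; units GTCY) shows that the account does not close exactly even on its own terms:

```python
emissions = 9.3545 + 1.485          # fossil fuel + land use change emissions
sinks = 3.054 + 2.37                # net land sink + net ocean sink
implied_growth = emissions - sinks  # growth the account implies for the atmosphere
reported_growth = 4.718

print(f"implied atmospheric growth: {implied_growth:.3f} GTCY")          # ~5.42
print(f"residual: {implied_growth - reported_growth:.3f} GTCY")          # ~0.70 unexplained
```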
FIGURE 1: CLIMATE SCIENTISTS
FIGURE 2: FEAR OF MELTING ICE AND SEA LEVEL RISE
FIGURE 3: SPURIOUS CORRELATIONS
- DETRENDED CORRELATION ANALYSIS OF TIME SERIES DATA: Correlation between x and y in time series data derive from responsiveness of y to x at the time scale of interest and also from shared long term trends. These two effects can be separated by detrending both time series as explained by Alex Tolley in the video frame of Figure 3. When the trend effect is removed only the responsiveness of y to x remains. This is why detrended correlation is a better measure of responsiveness than source data correlation as explained very well by Alex Tolley in the video. The full video may be viewed on Youtube [LINK] . That spurious correlations can be found in time series data when detrended analysis is not used is demonstrated with examples at the Tyler Vigen Spurious Correlation website [LINK] . Spurious correlations are common in climate science where many critical relationships that support the fundamentals of anthropogenic global warming (AGW) are found to be based on spurious correlations.
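A minimal sketch of detrended correlation analysis in Python; the linear detrending and the synthetic series are illustrative assumptions:

```python
import numpy as np

def detrended_correlation(x, y):
    """Correlation between x and y after removing each series' OLS linear trend."""
    t = np.arange(len(x))
    x_resid = x - np.polyval(np.polyfit(t, x, 1), t)
    y_resid = y - np.polyval(np.polyfit(t, y, 1), t)
    return np.corrcoef(x_resid, y_resid)[0, 1]

# Two series that share an upward trend but are otherwise unrelated.
rng = np.random.default_rng(0)
t = np.arange(60)
x = 0.5 * t + rng.normal(0, 3, 60)
y = 0.8 * t + rng.normal(0, 3, 60)
print(np.corrcoef(x, y)[0, 1])      # high: driven by the shared trend
print(detrended_correlation(x, y))  # near zero: no responsiveness at annual time scale
```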
- EXAMPLE 1: For example, climate science assumes that changes in atmospheric CO2 concentration since pre-industrial times are due to fossil fuel emissions of the industrial economy. This attribution is supported by a strong correlation between the rate of emissions and the rate of increase in atmospheric CO2 concentration in the time series of the source data. However, when the two time series are detrended, the correlation is not found. This result of detrended correlation analysis implies that the correlation seen in the source data derives from shared trends and not from responsiveness at an annual time scale. Details of this test are presented in a related post [LINK] .
- EXAMPLE 2: A similar relationship is found in the ocean acidification hypothesis, which claims that changes in the inorganic carbon concentration of oceans are driven by fossil fuel emissions. There, too, the source data show a strong correlation but that correlation vanishes when the two time series are detrended. As before, this pattern implies that the correlation in the source data derives from shared trends and not from responsiveness at an annual time scale [LINK].
- EXAMPLE 3: It is claimed that the observed rise in atmospheric methane concentration is due to human caused methane emissions in activities such as cattle ranching and dairy farming as well as rice cultivation and oil and gas production. Here too, a strong correlation is found in the time series of the source data but this correlation does not survive into the detrended series. This result implies that the correlation between human caused methane emissions and the rise in atmospheric methane derives from shared trends and not from responsiveness at an annual time scale. Such responsiveness is a necessary, though not sufficient, condition for causation. Details of this work may be found in a related post at this site [LINK] .
- EXAMPLE 4: A cornerstone of climate science is the effectiveness of proposed climate action in the form of reducing fossil fuel emissions. That the rate of warming can be attenuated by reducing fossil fuel emissions requires that the rate of warming be responsive to the rate of emissions at the appropriate time scale for this causation to occur (thought to be a decade or perhaps longer; Ricke & Caldeira 2014). And in fact, we find a strong correlation between the rate of warming and the rate of emissions in the time series of the source data at five different time scales (10, 15, 20, 25, & 30 years). Both of these source time series show an upward trend such that the shared trend can create spurious correlations as in the Alex Tolley lecture. When the two time series are detrended, the correlation disappears. The absence of detrended correlation implies that the observed correlation was a faux relationship driven by shared trends and not by responsiveness at the time scales tested in the analysis, as demonstrated in a related post [LINK]. Thus no evidence is found in the data that reducing emissions will slow down the rate of warming.
- EXAMPLE 5: It is also claimed in climate science that reducing emissions will slow down the rate of sea level rise. This relationship requires a responsiveness of the rate of sea level rise to the rate of emissions at the appropriate time scale for this causation. And in fact, we find a strong correlation between the rate of sea level rise and the rate of emissions in the time series of the source data at five different time scales ranging from 30 to 50 years. Both of these source time series show an upward trend such that the shared trend can create a faux correlation. When the two time series are detrended, the correlation disappears. The absence of detrended correlation implies that the observed correlation was a spurious relationship driven by shared trends and not by responsiveness at the time scales tested in the analysis. This work may be found in a related post [LINK] .
- EXAMPLE 6: Climate science supports the greenhouse gas heat trapping theory of atmospheric CO2 and the relevance of its climate models with a strong correlation between model projections of surface temperature and actual observations (see for example Santer 2019). However, this correlation is also between two time series with rising trends. In a related post it is shown that there is indeed a strong correlation between the source data but this correlation is not found in the detrended series [LINK].
- EXAMPLE 7: Arctic sea ice extent has played an important role in climate change fear based activism because of periods of diminishing summer minimum sea ice extent in September and the forecasts of an “ice free Arctic” that these trends have engendered. The underlying fear is that human caused climate change is melting Arctic sea ice. The evidence for this causal connection is a correlation between the rate of warming and the rate of summer sea ice decline; but detrended correlation analysis shows that this correlation is spurious, as no year to year responsiveness of September Arctic sea ice extent to the rate of warming is found in the detrended series [LINK].
- EXAMPLE 8: With the assumption that the observed rise in atmospheric CO2 concentration is driven by fossil fuel emissions (discussed in example 1) the effect of higher atmospheric CO2 concentration on climate is then established in terms of climate sensitivity, that is the responsiveness of surface temperature to the logarithm of atmospheric CO2 concentration. The validity of the climate sensitivity function can be shown with strong and statistically significant correlations between the climate model temperature series and observations. However, as shown in a related post [LINK] , this correlation does not survive into the detrended series and is therefore a spurious correlation, similar to the Tyler Vigen examples, that derives from shared trends and not from responsiveness at an annual or other fixed and finite time scale.
- EXAMPLE 9: The theory of the greenhouse gas effect of atmospheric CO2 predicts that as the CO2 concentration rises, it will cause tropospheric temperatures to rise and at the same time will cause lower stratospheric temperatures to fall. Thus we expect that the lower stratospheric temperature will be responsive to mid-tropospheric temperature at an annual time scale, and climate scientists claim that this is exactly what we find in the observational data. The evidence presented is a strong correlation between tropospheric temperature and lower stratospheric temperature. However, detrended correlation shows that this correlation derives from shared trends and not from a responsiveness of lower stratospheric temperature to mid tropospheric temperature at an annual time scale. The details of this analysis are described in a related post [LINK].
- EXAMPLE 10: An additional argument for the attribution of increases in atmospheric CO2 to fossil fuel emissions is presented by climate science in terms of the observed dilution of the 14C isotope fraction of carbon in atmospheric CO2. It is claimed that this dilution proves that fossil fuel emissions accumulate in the atmosphere because fossil fuel carbon is known to contain low or no 14C having been dead and underground for millions of years. A test of this hypothesis shows that the correlation presented by climate science as empirical evidence in support of this theory is spurious. Details in a related post [LINK] .
- MOVING AVERAGES AND OTHER PRE-PROCESSED TIME SERIES DATA. When moving averages or moving sums of a time series are used to construct a derived time series, care must be taken to correct for the effective sample size (EFFN) in hypothesis tests because multiplicity (the use of the same data point more than once) reduces the effective sample size. When the reduction in degrees of freedom is not taken into account, faux statistical significance can lead to spurious findings. This issue is discussed in some detail in a related post [LINK] and an example of this statistical error in climate science is presented in another related post [LINK].
- A TIME SERIES OF THE CUMULATIVE VALUES OF ANOTHER TIME SERIES: An extreme case of such multiplicity is the construction of a time series of the cumulative values of another time series. In these cases it can be shown that the effective sample size is always EFFN=2 so that the degrees of freedom in hypothesis tests are DF=0. This relationship is described in an online paper [LINK] with the relevant text reproduced in paragraph#8 below. It should also be noted that the time series of the cumulative values of another time series does not contain a time scale. Thus, without either time scale or degrees of freedom, it is not possible to test for the statistical significance of any statistic for a time series of the cumulative values of another time series. The spuriousness of such correlations is demonstrated with Monte Carlo simulation in paragraph#9 below.
- EFFECTIVE SAMPLE SIZE OF THE CUMULATIVE VALUES OF A TIME SERIES. If the summation starts at K=2, the series of cumulative values of a time series X of length N is computed as Σ(X1 to X2), Σ(X1 to X3), Σ(X1 to X4), Σ(X1 to X5) … Σ(X1 to XN-3), Σ(X1 to XN-2), Σ(X1 to XN-1), Σ(X1 to XN). In these N-K+1 cumulative values, XN is used once, XN-1 is used twice, XN-2 is used three times, XN-3 is used four times, X4 is used N-3 times, X3 is used N-2 times, and X1 and X2 are each used N-1 times. In general, each of the first K data items will be used N-K+1 times. Thus, the sum of the multiplicities of the first K data items may be expressed as K*(N-K+1). The multiplicities of the remaining N-K data items form a sequence of integers from one to N-K and their sum is (N-K)*(N-K+1)/2. The average multiplicity of the N data items in the computation of cumulative values may therefore be expressed as AVERAGE-MULTIPLE = [K*(N-K+1) + (N-K)*(N-K+1)/2]/N. Since multiplicity of use reduces the effective value of the sample size we can express the effective sample size as EffectiveN = N/AVERAGE-MULTIPLE = N²/[K*(N-K+1) + (N-K)*(N-K+1)/2]. To be able to determine the statistical significance of the correlation coefficient it is necessary that the degrees of freedom (DF), computed as EffectiveN-2, be a positive integer. This condition is not possible for a sequence of cumulative values that begins with Σ(X1 to X2). EffectiveN can be increased to values higher than two only by beginning the cumulative series at a later point K>2 in the time series so that the first summation is Σ(X1 to XK). In that case, the total multiplicity is reduced and this reduction increases the value of EffectiveN somewhat, but not enough to reach values much greater than two.
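The formula above is easy to evaluate; a short sketch in Python showing that EffectiveN stays near two for a cumulative series that starts at K=2, regardless of N:

```python
def effective_n(n, k=2):
    """Effective sample size of a series of cumulative values, per the
    multiplicity argument above: N divided by the average multiplicity."""
    avg_multiple = (k * (n - k + 1) + (n - k) * (n - k + 1) / 2) / n
    return n / avg_multiple

for n in (50, 100, 500, 5000):
    print(n, round(effective_n(n), 3))  # all close to 2
```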
- MONTE CARLO SIMULATION OF SPURIOUS CORRELATION BETWEEN CUMULATIVE VALUES OF TIME SERIES DATA
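A minimal version of such a simulation (assumed parameters: 1,000 trials, two independent Gaussian series of length 100) shows that cumulative values of unrelated series correlate strongly while the source series do not:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, length = 1000, 100
src, cum = [], []
for _ in range(trials):
    x = rng.normal(size=length)
    y = rng.normal(size=length)  # independent of x by construction
    src.append(abs(np.corrcoef(x, y)[0, 1]))
    cum.append(abs(np.corrcoef(np.cumsum(x), np.cumsum(y))[0, 1]))

print(f"mean |r|, source series:     {np.mean(src):.2f}")  # small
print(f"mean |r|, cumulative values: {np.mean(cum):.2f}")  # large despite independence
```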
- EXAMPLE 1: An example of the use of cumulative values in climate science is the so called TCRE or Transient Climate Response to Cumulative Emissions. It is the correlation between cumulative emissions and cumulative warming (note that temperature = cumulative warming). This relationship shows a nearly perfect proportionality that is thought to provide convincing evidence of a causal relationship between emissions and temperature and provides a convenient metric for the computation of the so called remaining “carbon budget”, that is the amount of additional emissions possible for a given constraint on the amount of warming. The spuriousness of the TCRE proportionality is described in a related post on this site [LINK] and its spuriousness is further supported with a parody of the procedure that shows that UFO visitations are the real cause of global warming [LINK] . A related post shows that when a finite time scale is inserted into the TCRE, the correlation disappears [LINK] .
- EXAMPLE 2: A paper by Peter Clark of Oregon State University extended the TCRE methodology to sea level rise to provide empirical evidence that fossil fuel emissions cause sea level rise and that climate action in the form of reducing fossil fuel emissions should moderate the rate of sea level rise. (Clark, Peter U., et al. “Sea-level commitment as a gauge for climate policy” Nature Climate Change 8.8 2018: 653). In a related post on this site it is shown that this correlation is spurious [LINK] . In another, we show that when finite time scales are inserted so that both time scale and degrees of freedom are available for carrying out hypothesis tests, the correlation seen in the cumulative series is not found [LINK] .
- EXAMPLE 3: It is claimed that a correlation between cumulative values provides evidence that the decay in atmospheric 13C/12C isotope ratio is related to fossil fuel emissions and proves that the observed increase in atmospheric CO2 is driven by fossil fuel emissions. This claim and spurious correlation are addressed in a related post [LINK] .
- EXAMPLE 4: Climate science claims that dilution of the 13C isotope of carbon in atmospheric CO2 provides evidence that the observed increase in atmospheric CO2 concentration is caused by fossil fuel emissions. A strong correlation is presented as evidence but the correlation is between cumulative values and therefore spurious. When that error is corrected, no correlation is found [LINK] .
- THE INTERPRETATION OF VARIANCE IN CLIMATE SCIENCE STATISTICS. A related issue in the statistical methods of climate science is the way variance is interpreted. In statistics, and also in information theory, high variance implies low information content; the higher the variance, the less we know. In this context high variance is undesirable because it degrades the information we can derive from the data. However, high variance also yields large confidence intervals, making it possible for high variance to be interpreted not as an absence of information but as information about how extreme the danger could be. This interpretation of variance is common in climate science. In conjunction with the precautionary principle, it leads to a perverse interpretation of uncertainty in which uncertainty about the mean becomes transformed into certainty of extreme values. For example, if the mean value of empirical climate sensitivity is found to have no statistical significance because of a large variance over a range of λ=2 to λ=6, the conclusion drawn by climate science is not that we don't really know the value of λ, or even whether this concept can be verified with empirical evidence, but an obsession with the highest value of λ=6, the most alarming possibility in a range that actually implies that we don't know. This interpretation of variance is aided by the use of the precautionary principle, which holds that if a possible value of something harmful is high, it is better to take precaution against that possibility than to interpret the data in a strictly rational way. In other words, the less you know the more extreme it COULD be, and this use of the word "could" is common in climate science, where ignorance in the form of high variance is used to create fear.
- THE USE OF CIRCULAR REASONING IN CLIMATE SCIENCE STATISTICS: In carrying out the flow accounting of the carbon cycle as a way of determining the effect of carbon in fossil fuel emissions on the carbon cycle, climate science is faced with the impossibility of measuring the much larger flows of carbon to and from the atmosphere in the carbon cycle. This difficulty is overcome by using the time series of atmospheric CO2 concentration from the Mauna Loa observatory that shows atmospheric CO2 concentration rising over time. By attributing the changes in atmospheric CO2 to fossil fuel emissions, a flow account of the unmeasurable carbon cycle can be inferred. The inferred flow account is then used to determine that the observed rise in atmospheric CO2 concentration is explained in terms of fossil fuel emissions. This issue is presented in detail in two related posts [LINK] [LINK].
DEPENDENCE: A SIMPLE WAY TO UNDERSTAND CHAOS IN TIME SERIES DATA.
Figure 1: Screen grab from the Youtube video "Forklift causes whole warehouse to collapse". An example of dependence, nonlinear dynamics, and chaos. The chaos we see here is the creation of a series of dependencies.
Link to video: https://www.youtube.com/watch?v=pkrMgMoR0RU
Figure 2: Demonstration of chaotic behavior of Hurst persistence in time series data. A simple demonstration of dependency.
- The video in Figure 2 plots the same time series twice: in red and in blue. In the red line, the events in the time series evolve independently with no dependence or persistence. Such independence is an important assumption in OLS (ordinary least squares) regression. In the blue line the events in the time series evolve as a chaotic system with dependence and persistence. A small dependence has been inserted as a 1% tendency for persistence. Without persistence, rising and falling tendencies are always equal, with 50% probability at each event. The result is seen in the red line, where the ups and downs are small and therefore not visible.
- The 1% persistence in the blue line means that if the prior change was positive, the probabilities change from 50%/50% to 51%/49% favoring a positive change for the next event. If it is positive again in the next event, the probabilities are changed to 52%/48% but if it is negative they change back to 50%/50%. The probabilities keep changing to favor the direction of change in the prior event. This behavior is called persistence and it is very common in nature, particularly in surface temperature.
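A minimal sketch of this persistence rule in Python; the exact reset behavior on a reversal is an interpretive assumption, since the description above can be read more than one way:

```python
import numpy as np

def simulate(n=5000, persistence=0.01, seed=0):
    """Cumulative series of +1/-1 events. A move in the same direction as the
    prior event compounds the bias by `persistence`; a reversal restarts the
    bias at 50/50 shifted toward the new direction (one reading of the rule)."""
    rng = np.random.default_rng(seed)
    p_up, last, level, path = 0.5, 0, 0.0, []
    for _ in range(n):
        step = 1 if rng.random() < p_up else -1
        if step == last:
            p_up += persistence * step         # same direction: bias compounds
        else:
            p_up = 0.5 + persistence * step    # reversal: bias restarts
        p_up = min(max(p_up, 0.01), 0.99)      # keep the probability valid
        last = step
        level += step
        path.append(level)
    return np.array(path)

red = simulate(persistence=0.0)    # independent events: the "red line"
blue = simulate(persistence=0.01)  # 1% persistence: the "blue line"
```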
- When you play the video you will see the blue line take various shapes and trends both rising and falling mostly staying very close to the Gaussian red line. BUT, it is capable of sudden departures from the Gaussian to form what appears to be patterns such as rising and falling trends. These shapes do not imply cause and effect phenomena. They are the random behavior of a chaotic system.
- All of these shapes are representations of the same underlying phenomenon. The differences among these curves have no interpretation because they represent randomness. The human instinct to look for causes for unusual patterns derives from Darwinian survival but it leads us astray when we study chaotic systems.
- The shapes and trends the blue line forms may be found to be statistically significant if the system is assumed to be deterministic and the violations of OLS assumptions are ignored; but that statistical significance is meaningless. Yet this kind of analysis is common. The conclusions they imply in terms of the phenomena of nature have no interpretation because they are the product of violations of assumptions.
- Climate variables at decadal time scales are known to be chaotic, a behavior understood as Internal Climate Variability [LINK], and therefore not all climate events contain information about cause and effect phenomena, particularly at brief time scales. That nature is chaotic is well understood. These three books present that argument with data, examples, and case studies: EA Jackson, Exploring Nature's Dynamics; C Letellier, Chaos in Nature; Cushing et al, Chaos in Ecology.
- References to chaos in nature are also found in the climate change literature. These two excellent papers by Timothy Palmer are a good introduction to the subject of chaos in climate data: “A nonlinear dynamical perspective on climate prediction” Journal of Climate 12.2 (1999): 575-591, and “Predicting uncertainty in forecasts of weather and climate” Reports on Progress in Physics 63.2 (2000): 71.
- A common method of detecting fractal/chaotic behavior in long time series of field data is to compute the so called Hurst exponent "H" of the time series as a way of detecting persistence in the data. Persistence implies that the data in the time series do not evolve independently but contain a dependence on prior values such that prior changes tend to persist into the next time slice. This method was first described by Harold Edwin Hurst in 1951 in the only paper he ever published, which is still cited hundreds of times a year (Hurst, H.E. (1951). "Long-term storage capacity of reservoirs". Transactions of the American Society of Civil Engineers).
- The Hurst exponent method of detecting non-linear dynamics in time series data is used in climate change research. This trend in climate science has been led by the very charismatic and controversial hydrologist Demetris Koutsoyiannis, Professor of Hydrology, National Technical University of Athens. Demetris finds that many hydrology time series behaviors that climate science ascribes to emissions can be explained in terms of non-linear dynamics.
- Here are some examples from the literature of Hurst persistence analysis of climate data: Cohn, Timothy, et al. "Nature's style: Naturally trendy." Geophysical Research Letters 32.23 (2005) (by "naturally trendy" Tim means that things that look like trends are actually randomness); Weber, Rudolf, et al. "Spectra and correlations of climate data from days to decades." Journal of Geophysical Research: Atmospheres 106.D17 (2001): 20131-20144; Koutsoyiannis, Demetris. "Climate change, the Hurst phenomenon, and hydrological statistics." Hydrological Sciences Journal 48.1 (2003): 3-24; Markonis, Yannis. "Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics." Surveys in Geophysics 34.2 (2013): 181-207; Pelletier, Jon, et al. "Long-range persistence in climatological and hydrological time series: analysis, modeling and application to drought hazard assessment." Journal of Hydrology 203.1-4 (1997): 198-208; Rybski, Diego, et al. "Long-term persistence in climate and the detection problem." Geophysical Research Letters 33.6 (2006).
- Much of the empirical work in climate science is presented in terms of time series of field data (field data means data made by nature over which the researcher has no control, as opposed to data collected in controlled experiments). Information about climate contained in these data is usually extracted by researchers with OLS (ordinary least squares) regression analysis. Surprisingly, even at the highest levels of climate research, little or no attention is paid to the assumptions of OLS analysis, which include for example the assumption that the data in the time series evolve as IID (independent and identically distributed). The stationarity assumption further enforces the requirement that the distribution must not change as the time series evolves.
- More information on regression analysis may be found in a companion post on SPURIOUS CORRELATIONS. A good reference for regression analysis of time series data is "Time Series Analysis: Forecasting and Control" (Wiley, 2015) by George Box & Gwilym Jenkins. Another is "Time Series Analysis and Its Applications: With R Examples" (Springer, 2017) by Robert Shumway and David Stoffer. These authors have shown that when time series analysis goes awry you can bet it has to do with violations of OLS assumptions.
- In the analysis of temperature, sea level rise, precipitation, solar activity, and ozone depletion listed below, the time series is tested for OLS violations by computing the Hurst exponent H. The theoretical neutral value with no serial dependence is H=0.5 but it has been shown that the neutral value for comparison in empirical research needs to be adjusted for the specific sub-sampling strategy used in the estimation of H. Therefore, this estimation is carried out twice – once with the data and again with a Monte Carlo simulation of the data that generates a corresponding IID/stationary series.
- The two values of H are then compared. If the difference between the two values of H is not statistically significant, we conclude that there is no evidence of Hurst behavior in the data and OLS regression results may therefore be interpreted in terms of the phenomena under study. However if the value of H in the data is greater than the value of H in the IID simulation, then we can conclude that the data contain Fractal/Chaotic behavior by virtue of Hurst persistence and that therefore OLS results may not be interpreted strictly in terms of the phenomena under study. Nonlinear dynamics must be considered. A list of these studies appears below with links to the freely downloadable full text.
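For illustration, a minimal rescaled-range estimator in Python. This is a sketch, not the exact procedure used in the studies listed below; the dyadic window sizes and simple averaging are assumptions. Note that the IID estimate comes out slightly above the theoretical H=0.5 in finite samples, which is exactly why the comparison is made against an IID simulation rather than against 0.5 itself:

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst exponent H as the log-log slope of the rescaled
    range R/S against window size, over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_window
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            w = x[start:start + size]
            dev = np.cumsum(w - w.mean())   # cumulative deviations from the window mean
            s = w.std()
            if s > 0:
                rs.append((dev.max() - dev.min()) / s)  # range rescaled by the SD
        sizes.append(size)
        rs_means.append(np.mean(rs))
        size *= 2
    return np.polyfit(np.log(sizes), np.log(rs_means), 1)[0]

rng = np.random.default_rng(0)
iid = rng.normal(size=4096)
print(hurst_rs(iid))             # near 0.5, though biased slightly high in finite samples
print(hurst_rs(np.cumsum(iid)))  # near 1.0 for a strongly persistent series
```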
- Ozone Depletion: Mean global total ozone is estimated as the latitudinally weighted average of total ozone measured by the TOMS and OMI satellite mounted ozone measurement devices for the periods 1979-1992 and 2005-2015 respectively. The TOMS dataset shows an OLS depletion rate of 0.65 DU per year on average in mean monthly ozone from January 1979 to December 1992. The OMI dataset shows an OLS accretion rate of 0.5 DU per year on average in mean monthly ozone from January 2005 to December 2015. The conflicting and inconsequential OLS trends may be explained in terms of the random variability of nature and violations of OLS assumptions that can create the so called Hurst phenomenon. These findings are inconsistent with the Rowland-Molina theory of ozone destruction by anthropogenic chemical agents because the theory implies continued and dangerous depletion of total ozone on a global scale until the year 2040. [FULL TEXT]
- Global Warming: A study of daily mean temperature data from five USCRN stations in the sample period 1/1/2005-3/31/2016 shows that the seasonal cycle can be captured with significantly greater precision by dividing the year into smaller parts than calendar months. The enhanced precision greatly reduces vestigial patterns in the deseasonalized and detrended residuals. Rescaled Range analysis of the residuals indicates a violation of the independence assumption of OLS regression. The existence of dependence, memory, and persistence in the data is indicated by high values of the Hurst exponent. The results imply that decadal and even multi-decadal OLS trends in USCRN daily mean temperature may be spurious. [FULL TEXT]
- Precipitation: Rescaled range analysis of precipitation in the sample period 1893-2014 for ten USHCN stations in five states of the USA does not provide evidence of dependence, long term memory, or persistence in the time series. All of the observed Hurst exponents of precipitation are indicative of Gaussian randomness. Therefore, multi-decadal and non-periodic drought and flood events observed at some of these stations are more likely to be irregular cyclical phenomena of nature than the random effects of persistence and long term memory in the data. [FULL TEXT]
- Global Warming: Trends in time series data estimated with OLS linear regression may be tested with a robust procedure that is less sensitive to influential observations and violations of regression assumptions. The test consists of comparing the average age of data tritiles. If the higher tritiles are newer a rising trend is indicated for the sample period. If the higher tritiles are older a declining trend is indicated. If neither of these conditions is met, no sustained trend in the sample period may be inferred from the data. Daily temperature data from selected USHCN and USCRN stations are used to demonstrate the utility of the proposed methodology. [FULL TEXT]
- Solar Activity: It is shown that the time series of sunspot counts may be represented as the sum of a regular cyclical process and a random Hurst process. In the 2375-month study period 1/1818-11/2015, the optimal cyclical components of mean monthly sunspot counts consist of a short wave function with a period of 131 months and a long wave function in which the amplitude of the short wave undergoes a 100-year cycle. The residuals of this model, though random, exhibit properties of the Hurst phenomenon in which dependence, memory, and persistence generate apparent patterns out of randomness. The findings imply that not all patterns in the empirical record of sunspot counts contain useful information because some patterns represent random behavior. [FULL TEXT]
- Global Warming: The deseasonalized monthly mean surface temperature time series for Nuuk, Greenland in the 148-year sample period 1866-2013 shows a statistically significant OLS warming trend of 0.1C per decade. Rescaled range analysis of the deseasonalized and detrended residuals reveals a high value of the Hurst exponent indicative of memory, dependence, and persistence in the time series. A robust test for trends described in a previous paper (Munshi, 2015) indicates that the observed OLS trend in the Nuuk data is spurious and therefore possibly an artifact of dependence and persistence. [FULL TEXT]
- Global Warming: High values of the Hurst exponent of H=0.66±0.05 for deseasonalized monthly mean surface temperatures in the sample period 1850-2015 suggest persistence and long term memory in the temperature time series. Such Hurst phenomena have been observed in the stochastic processes of nature in the area of hydrology (Hurst) and also in the proxy record of annual mean surface temperature at a millennial time scale (Barnett). Our study suggests that these patterns may also exist in deseasonalized monthly means of the measured temperature record in the post industrial era, a period that is normally associated with global warming and climate change. [FULL TEXT]
- Global Warming: A month by month trend analysis at an annual time scale of the daily mean Central England Temperature (CET) series 1772-2016 shows a general warming trend for most autumn and winter months. These trends are usually described in terms of anthropogenic global warming (AGW). OLS diagnostics reveal anomalies in the data having to do with asymmetry, non-linearity, and serial Hurst dependence in the series of generational trends in a 30-year moving window. Therefore, the phenomena of nature that generated this temperature series are best understood in terms of nonlinear patterns within the sample period rather than a single linear OLS trend-line across the whole of the sample period. [FULL TEXT]
- Precipitation: Month by month analysis of precipitation in England and Wales 1766-2016 is carried out at an annual time scale for all twelve calendar months. No evidence of dependence, long term memory, or persistence is found. All twelve of the observed Hurst exponents of precipitation are found to be H≈0.5 indicative of Gaussian randomness. Therefore, non-periodic clusters of flood years observed in this area are more likely to be irregular cyclical phenomena of nature than effects of the Hurst phenomenon. [FULL TEXT]
CHAOS THEORY IN CLIMATE SCIENCE: A BIBLIOGRAPHY
- Zeng, Xubin, Roger A. Pielke, and R. Eykholt. “Chaos theory and its applications to the atmosphere.” Bulletin of the American Meteorological Society 74.4 (1993): 631-644. A brief overview of chaos theory is presented, including bifurcations, routes to turbulence, and methods for characterizing chaos. The paper divides chaos applications in atmospheric sciences into three categories: new ideas and insights inspired by chaos, analysis of observational data, and analysis of output from numerical models. Based on the review of chaos theory and the classification of chaos applications, suggestions for future work are given.
- Marotzke, Jochem. “Abrupt climate change and thermohaline circulation: Mechanisms and predictability.” Proceedings of the National Academy of Sciences 97.4 (2000): 1347-1350. The ocean’s thermohaline circulation has long been recognized as potentially unstable and has consequently been invoked as a potential cause of abrupt climate change on all timescales of decades and longer. However, fundamental aspects of thermohaline circulation changes remain poorly understood. [LINK TO FULL TEXT PDF]
- Rial, Jose A., and C. A. Anaclerio. "Understanding nonlinear responses of the climate system to orbital forcing." Quaternary Science Reviews 19.17-18 (2000): 1709-1722. Frequency modulation (FM) of the orbital eccentricity forcing may be one important source of the nonlinearities observed in δ18O time series from deep-sea sediment cores (J.H. Rial (1999a) Pacemaking the Ice Ages by frequency modulation of Earth's orbital eccentricity. Science 285, 564–568). Here we present further evidence of frequency modulation found in data from the Vostok ice core. Analyses of the 430,000-year long, orbitally untuned, time series of CO2, deuterium, aerosol and methane, suggest frequency modulation of the 41 kyr (0.0244 kyr−1) obliquity forcing by the 413 kyr-eccentricity signal and its harmonics. Conventional and higher-order spectral analyses show that two distinct spectral peaks at ∼29 kyr (0.034 kyr−1) and ∼69 kyr (0.014 kyr−1) and other, smaller peaks surrounding the 41 kyr obliquity peak are harmonically (nonlinearly) related and likely to be FM-generated sidebands of the obliquity signal. All peaks can be closely matched by the spectrum of an appropriately built theoretical FM signal. A preliminary model, based on the classic logistic growth delay differential equation, reproduces the longer period FM effect and the familiar multiply peaked spectra of the eccentricity band. Since the FM effect appears to be a common feature in climate response, finding out its cause may help understand climate dynamics and global climate change.
- Ashkenazy, Yosef, et al. “Nonlinearity and multifractality of climate change in the past 420,000 years.” Geophysical research letters 30.22 (2003). Evidence of past climate variations are stored in polar ice caps and indicate glacial‐interglacial cycles of ∼100 kyr. Using advanced scaling techniques we study the long‐range correlation properties of temperature proxy records of four ice cores from Antarctica and Greenland. These series are long‐range correlated in the time scales of 1–100 kyr. We show that these time series are nonlinear for time scales of 1–100 kyr as expressed by temporal long‐range correlations of magnitudes of temperature increments and by a broad multifractal spectrum. Our results suggest that temperature increments appear in clusters of big and small increments—a big (positive or negative) climate change is most likely followed by a big (positive or negative) climate change and a small climate change is most likely followed by a small climate change.
- Rial, Jose A. “Abrupt climate change: chaos and order at orbital and millennial scales.” Global and Planetary Change 41.2 (2004): 95-109. Successful prediction of future global climate is critically dependent on understanding its complex history, some of which is displayed in paleoclimate time series extracted from deep-sea sediment and ice cores. These recordings exhibit frequent episodes of abrupt climate change believed to be the result of nonlinear response of the climate system to internal or external forcing, yet, neither the physical mechanisms nor the nature of the nonlinearities involved are well understood. At the orbital (104–105 years) and millennial scales, abrupt climate change appears as sudden, rapid warming events, each followed by periods of slow cooling. The sequence often forms a distinctive saw-tooth shaped time series, epitomized by the deep-sea records of the last million years and the Dansgaard–Oeschger (D/O) oscillations of the last glacial. Here I introduce a simplified mathematical model consisting of a novel arrangement of coupled nonlinear differential equations that appears to capture some important physics of climate change at Milankovitch and millennial scales, closely reproducing the saw-tooth shape of the deep-sea sediment and ice core time series, the relatively abrupt mid-Pleistocene climate switch, and the intriguing D/O oscillations. Named LODE for its use of the logistic-delayed differential equation, the model combines simplicity in the formulation (two equations, small number of adjustable parameters) and sufficient complexity in the dynamics (infinite-dimensional nonlinear delay differential equation) to accurately simulate details of climate change other simplified models cannot. Close agreement with available data suggests that the D/O oscillations are frequency modulated by the third harmonic of the precession forcing, and by the precession itself, but the entrained response is intermittent, mixed with intervals of noise, which corresponds well with the idea that the climate operates at the edge between chaos and order. LODE also predicts a persistent ∼1.5 ky oscillation that results from the frequency modulated regional climate oscillation.
- Huybers, Peter, and Carl Wunsch. “Obliquity pacing of the late Pleistocene glacial terminations.” Nature 434.7032 (2005): 491. The 100,000-year timescale in the glacial/interglacial cycles of the late Pleistocene epoch (the past ∼700,000 years) is commonly attributed to control by variations in the Earth’s orbit1. This hypothesis has inspired models that depend on the Earth’s obliquity (∼ 40,000 yr; ∼40 kyr), orbital eccentricity (∼ 100 kyr) and precessional (∼ 20 kyr) fluctuations2,3,4,5, with the emphasis usually on eccentricity and precessional forcing. According to a contrasting hypothesis, the glacial cycles arise primarily because of random internal climate variability6,7,8. Taking these two perspectives together, there are currently more than thirty different models of the seven late-Pleistocene glacial cycles9. Here we present a statistical test of the orbital forcing hypothesis, focusing on the rapid deglaciation events known as terminations10,11. According to our analysis, the null hypothesis that glacial terminations are independent of obliquity can be rejected at the 5% significance level, whereas the corresponding null hypotheses for eccentricity and precession cannot be rejected. The simplest inference consistent with the test results is that the ice sheets terminated every second or third obliquity cycle at times of high obliquity, similar to the original proposal by Milankovitch12. We also present simple stochastic and deterministic models that describe the timing of the late-Pleistocene glacial terminations purely in terms of obliquity forcing.
- Tziperman, Eli, and Carl Wunsch. “Consequences of pacing the Pleistocene 100 kyr ice ages by nonlinear phase locking to Milankovitch forcing.” Paleoceanography 21.4 (2006): The consequences of the hypothesis that Milankovitch forcing affects the phase (e.g., termination times) of the 100 kyr glacial cycles via a mechanism known as “nonlinear phase locking” are examined. Phase locking provides a mechanism by which Milankovitch forcing can act as the “pacemaker” of the glacial cycles. Nonlinear phase locking can determine the timing of the major deglaciations, nearly independently of the specific mechanism or model that is responsible for these cycles as long as this mechanism is suitably nonlinear. A consequence of this is that the fit of a certain model output to the observed ice volume record cannot be used as an indication that the glacial mechanism in this model is necessarily correct. Phase locking to obliquity and possibly precession variations is distinct from mechanisms relying on a linear or nonlinear amplification of the eccentricity forcing. Nonlinear phase locking may determine the phase of the glacial cycles even in the presence of noise in the climate system and can be effective at setting glacial termination times even when the precession and obliquity bands account only for a small portion of the total power of an ice volume record. Nonlinear phase locking can also result in the observed “quantization” of the glacial period into multiples of the obliquity or precession periods.
- Eisenman, Ian, Norbert Untersteiner, and J. S. Wettlaufer. “On the reliability of simulated Arctic sea ice in global climate models.” Geophysical Research Letters 34.10 (2007). While most of the global climate models (GCMs) currently being evaluated for the IPCC Fourth Assessment Report simulate present‐day Arctic sea ice in reasonably good agreement with observations, the intermodel differences in simulated Arctic cloud cover are large and produce significant differences in downwelling longwave radiation. Using the standard thermodynamic models of sea ice, we find that the GCM‐generated spread in longwave radiation produces equilibrium ice thicknesses that range from 1 to more than 10 meters. However, equilibrium ice thickness is an extremely sensitive function of the ice albedo, allowing errors in simulated cloud cover to be compensated by tuning of the ice albedo. This analysis suggests that the results of current GCMs cannot be relied upon at face value for credible predictions of future Arctic sea ice.
- Frank, Patrick. “A climate of belief.” Skeptic 14.1 (2008): 22-30. The claim that anthropogenic CO2 is responsible for the current warming of Earth climate is scientifically insupportable because climate models are unreliable. “He who refuses to do arithmetic is doomed to talk nonsense.” — John McCarthy. “The latest scientific data confirm that the earth’s climate is rapidly changing. … The cause? A thickening layer of carbon dioxide pollution, mostly from power plants and automobiles, that traps heat in the atmosphere. … [A]verage U.S. temperatures could rise another 3 to 9 degrees by the end of the century … Sea levels will rise, [and h]eat waves will be more frequent and more intense. Droughts and wildfires will occur more often. Disease-carrying mosquitoes will expand their range. And species will be pushed to extinction.” So says the Natural Resources Defense Council,2 with agreement by the Sierra Club,3 Greenpeace,4 National Geographic,5 the US National Academy of Sciences,6 and the US Congressional House leadership.7 Concurrent views are widespread,8 as a visit to the internet or any good bookstore will verify. Since at least the 1995 Second Assessment Report, the UN Intergovernmental Panel on Climate Change (IPCC) has been making increasingly assured statements that human-produced carbon dioxide (CO2) is influencing the climate, and is the chief cause of the global warming trend in evidence since about 1900. The current level of atmospheric CO2 is about 390 parts per million by volume (ppmv), or 0.039% by volume of the atmosphere, and in 1900 was about 295 ppmv. If the 20th century trend continues unabated, by about 2050 atmospheric CO2 will have doubled to about 600 ppmv. This is the basis for the usual “doubled CO2” scenario. Doubled CO2 is a benchmark for climate scientists in evaluating greenhouse warming. Earth receives about 342 watts per square meter (W/m2) of incoming solar energy, and all of this energy eventually finds its way back out into space. However, CO2 and other greenhouse gasses, most notably water vapor, absorb some of the outgoing energy and warm the atmosphere. This is the greenhouse effect. Without it Earth’s average surface temperature would be a frigid -19°C (-2.2°F). With it, the surface warms to about +14°C (57°F) overall, making Earth habitable.9 With more CO2, more outgoing radiant energy is absorbed, changing the thermal dynamics of the atmosphere. All the extra greenhouse gasses that have entered the atmosphere since 1900, including CO2, equate to an extra 2.7 W/m2 of energy absorption by the atmosphere.10 This is the worrisome greenhouse effect. On February 2, 2007, the IPCC released the Working Group I (WGI) “Summary for Policymakers” (SPM) report on Earth climate,11 which is an executive summary of the science supporting the predictions quoted above. The full “Fourth Assessment Report” (4AR) came out in sections during 2007. [LINK TO FULL TEXT PDF]
- Huybers, Peter John. “Pleistocene glacial variability as a chaotic response to obliquity forcing.” (2009). The mid-Pleistocene Transition from 40 ky to ~100 ky glacial cycles is generally characterized as a singular transition attributable to scouring of continental regolith or a long-term decrease in atmospheric CO2 concentrations. Here an alternative hypothesis is suggested, that Pleistocene glacial variability is chaotic and that transitions from 40 ky to ~100 ky modes of variability occur spontaneously. This alternate view is consistent with the presence of ~80 ky glacial cycles during the early Pleistocene and the lack of evidence for a change in climate forcing during the mid-Pleistocene. A simple model illustrates this chaotic scenario. When forced at a 40 ky period the model chaotically transitions between small 40 ky glacial cycles and larger 80 and 120 ky cycles which, on average, give the ~100 ky variability.
- Dima, Mihai, and Gerrit Lohmann. “Conceptual model for millennial climate variability: a possible combined solar-thermohaline circulation origin for the ~1,500-year cycle.” Climate Dynamics 32.2-3 (2009): 301-311. Dansgaard-Oeschger and Heinrich events are the most pronounced climatic changes over the last 120,000 years. Although many of their properties were derived from climate reconstructions, the associated physical mechanisms are not yet fully understood. These events are paced by a ~1,500-year periodicity whose origin remains unclear. In a conceptual model approach, we show that this millennial variability can originate from rectification of an external (solar) forcing, and suggest that the thermohaline circulation, through a threshold response, could be the rectifier. We argue that internal threshold response of the thermohaline circulation (THC) to solar forcing is more likely to produce the observed DO cycles than amplification of weak direct ~1,500-year forcing of unknown origin, by THC. One consequence of our concept is that the millennial variability is viewed as a derived mode without physical processes on its characteristic time scale. Rather, the mode results from the linear representation in the Fourier space of nonlinearly transformed fundamental modes.
- Dijkstra, Henk A. Nonlinear climate dynamics. Cambridge University Press, 2013.
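The frequency-modulation mechanism that runs through several of the entries above is easy to demonstrate numerically: a carrier whose frequency is modulated at a rate fm develops spectral sidebands at fc ± n·fm, which is the pattern Rial and Anaclerio report around the 41 kyr obliquity peak. The following is a minimal sketch, not a reconstruction of their analysis: the 41 kyr carrier and 413 kyr modulator are taken from the abstract, while the modulation index and record length are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy FM model of the obliquity signal (illustrative only; beta is assumed).
fc = 1.0 / 41.0       # carrier: 41 kyr obliquity cycle (cycles/kyr)
fm = 1.0 / 413.0      # modulator: 413 kyr eccentricity cycle (cycles/kyr)
beta = 2.0            # modulation index -- arbitrary choice for illustration

t = np.arange(0.0, 40000.0, 1.0)   # 40,000 kyr synthetic record, 1 kyr sampling
x = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))

# FM theory predicts spectral lines at fc + n*fm (Bessel-weighted amplitudes).
spec = np.abs(np.fft.rfft(x * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1.0)
peaks, _ = find_peaks(spec, height=0.05 * spec.max())

for i in peaks:
    n = (freqs[i] - fc) / fm       # sideband order relative to the carrier
    print(f"line at {freqs[i]:.5f} /kyr (~{1.0/freqs[i]:.0f} kyr), n = {n:+.1f}")
```

With these settings the printout shows the line at ~41 kyr flanked by sidebands at n = ±1, ±2, …, the same qualitative structure (though not the exact 29 kyr and 69 kyr peaks) described in the paper.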
Elevated CO2 and Crop Chemistry
Posted May 24, 2018
- Before it was expropriated by the global warming/climate change movement, the term “Greenhouse Effect” referred to the effect of elevated carbon dioxide in greenhouses on crop chemistry. We know from greenhouse studies going back to the late 19th century that crop chemistry reflects the balance among soil chemistry, air chemistry, and light intensity. The important features of air chemistry are the availability of carbon dioxide for photosynthesis and of oxygen for plant respiration. The important features of soil chemistry are the availability of water, nitrates, phosphates, and minerals.
- Climate science is apparently concerned about the effect of elevated atmospheric CO2 on agriculture. Initially the assessment was that climate change would devastate agriculture; later the concern shifted to the effect of elevated atmospheric CO2 on the nutritional quality of crops. These concerns appear to be disconnected from the extensive literature on elevated-CO2 agriculture in greenhouses, a literature that has been with us for more than a century.
- Greenhouse operations include irrigation, air circulation to maintain air quality, heating for temperature control, the injection of carbon dioxide to maintain elevated levels of 1000 to 2000 parts per million for photosynthesis enrichment, and the provision of sufficient light for photosynthesis to occur. Photosynthesis enrichment improves crop yield. Corresponding changes to soil chemistry are required to preserve the nutritional quality of the crops.
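For a sense of scale, the amount of CO2 needed to hold a greenhouse at these enriched levels is a simple mass balance. The sketch below is illustrative only: the greenhouse volume, target concentration, and air-exchange rate are assumed numbers, and the CO2 density of roughly 1.83 kg/m³ (near 20°C and 1 atm) is a standard figure.

```python
# Back-of-envelope CO2 mass balance for greenhouse enrichment.
# Volume, target ppm, and air-exchange rate are illustrative assumptions.
RHO_CO2 = 1.83          # kg/m^3, density of CO2 at ~20 C and 1 atm

def co2_mass_kg(volume_m3, ambient_ppm, target_ppm):
    """Mass of CO2 needed to raise the concentration from ambient to target
    in a sealed volume (ppm here means parts per million by volume)."""
    return volume_m3 * (target_ppm - ambient_ppm) * 1e-6 * RHO_CO2

# Example: a 1,000 m^3 greenhouse raised from 400 ppm to 1,200 ppm.
print(f"{co2_mass_kg(1000, 400, 1200):.2f} kg CO2 per full charge")

# With leakage, the charge must be replaced continuously: at an assumed
# 1 air change per hour, roughly the same mass is needed every hour.
air_changes_per_hour = 1.0
print(f"{co2_mass_kg(1000, 400, 1200) * air_changes_per_hour:.2f} kg/h to hold 1,200 ppm")
```

The enrichment step itself is thus routine and inexpensive; as the studies cited further below show, the agronomic work lies in matching soil chemistry and light to the enriched air.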
- It has been found in numerous greenhouse studies since the 19th century that if elevated carbon dioxide is not matched by corresponding changes to soil chemistry, crop chemistry may shift in the direction of higher starch content and lower nutritional quality. These effects are crop specific and vary greatly among crop types.
- Proper greenhouse management is responsive to these dynamics and involves the management of light and soil chemistry appropriate for any given level of carbon dioxide so that crop nutritional quality is maintained. These relationships are described in some detail in the Stitt & Krapp 1999 paper listed below and highlighted in bold.
- The various works of Bruce Kimball of the US Water Conservation Laboratory (with full text available free from the USDA) are unique in this line of research: they are not greenhouse studies but surveys of a large number of such studies, carried out to estimate the impact of climate change on crop yield.
- His work followed on the heels of Jule Charney's landmark 1979 “Climate Sensitivity” presentation, in which Charney reported the finding from climate model studies that a doubling of atmospheric carbon dioxide would cause mean global temperature to rise by 1.5°C to 4.5°C. The Charney Climate Sensitivity still serves as the fundamental relationship in climate science for the “greenhouse warming effect” thought to be caused by atmospheric carbon dioxide.
- Kimball followed the Charney format and presented his finding that a doubling of atmospheric carbon dioxide would increase crop yields worldwide by about 30%, with some differences among crops and for different conditions and latitudes. The relevant citations appear below; a worked example of the doubling arithmetic follows the citation list.
- Kimball, Bruce A. “Carbon Dioxide and Agricultural Yield: An Assemblage and Analysis of 430 Prior Observations.” Agronomy journal 75.5 (1983): 779-788.
- Kimball, B. A., and S. B. Idso. “Increasing atmospheric CO2: effects on crop yield, water use and climate.” Agricultural water management 7.1-3 (1983): 55-72.
- Kimball, B. A., et al. “Effects of increasing atmospheric CO2 on vegetation.” CO2 and Biosphere. Springer, Dordrecht, 1993. 65-76.
- Mauney, J. R., B. A. Kimball, et al. “Growth and yield of cotton in response to a free-air carbon dioxide enrichment (FACE) environment.” Agricultural and Forest Meteorology 70.1-4 (1994): 49-67.
- Kimball, Bruce A., et al. “Productivity and water use of wheat under free‐air CO2 enrichment.” Global Change Biology 1.6 (1995): 429-442.
- Kimball, B. A., K. Kobayashi, and M. Bindi. “Responses of agricultural crops to free-air CO2 enrichment.” Advances in agronomy. Vol. 77. Academic Press, 2002. 293-368.
- Idso, S. B., and B. A. Kimball. “Effects of atmospheric CO2 enrichment on plant growth: the role of air temperature.” Agriculture, ecosystems & environment 20.1 (1987): 1-10.
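Both doubling benchmarks can be made concrete with a few lines of arithmetic. The sketch below uses the widely cited simplified forcing expression ΔF = 5.35 ln(C/C0) W/m² (Myhre et al. 1998) and applies the Charney range of 1.5°C to 4.5°C per doubling; the 280 ppmv preindustrial baseline and the 5.35 coefficient are standard literature values, not numbers taken from the Kimball papers.

```python
import math

# Simplified CO2 radiative forcing (Myhre et al. 1998): dF = 5.35 * ln(C/C0) W/m^2.
def forcing_wm2(c_ppm, c0_ppm=280.0):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Warming scaled so that one doubling gives the Charney range of 1.5-4.5 C.
def charney_warming_c(c_ppm, c0_ppm=280.0, ecs_per_doubling=3.0):
    return ecs_per_doubling * math.log2(c_ppm / c0_ppm)

for ecs in (1.5, 3.0, 4.5):
    print(f"ECS {ecs} C/doubling: doubled CO2 -> {charney_warming_c(560, 280, ecs):.1f} C, "
          f"forcing {forcing_wm2(560):.2f} W/m^2")
```

One doubling corresponds to a forcing of about 3.7 W/m², and the same log2 mapping is how a sensitivity quoted “per doubling”, whether Charney's temperature response or Kimball's yield response, is applied to any other concentration ratio.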
- The findings of a selection of GREENHOUSE STUDIES, from Besford 1990 to Galtier 1995, presenting measurements of nutritional loss due to an imbalance between CO2 and soil nutrients are listed below. The greenhouse management implications of these findings are best described in the Stitt and Krapp 1999 paper.
- Besford, R. T., et al. 1990, Journal of Experimental Botany 41.8: 925-931: Compared with tomato plants grown in normal ambient CO2, the 1000 ppm CO2 grown leaves, when almost fully expanded, contained only half as much RuBPco protein. Note: corresponding soil enrichment was not used.
- Curtis, Peter, et al. 1998, Oecologia 113.3: 299-313: Total biomass and net CO2 assimilation increased significantly at about twice ambient CO2, regardless of growth conditions. Low soil nutrient availability reduced the CO2 stimulation of total biomass by half, from +31% under optimal conditions to +16%, while low light increased the difference to +52%.
- Kramer, Paul J. 1981, BioScience 31.1: 29-33: The long-term response to high CO2 varies widely among species. Furthermore, the rate of photosynthesis is limited by various internal and environmental factors in addition to the CO2 concentration.
- Curtis, P. S. 1996, Plant, Cell & Environment 19.2: 127-137: Growth at elevated [CO2] resulted in moderate reductions in gs (stomatal conductance) in unstressed plants, but there was no significant effect of CO2 on gs in stressed plants. Leaf dark respiration (mass or area basis) was reduced strongly by growth at high [CO2], while leaf N was reduced only when expressed on a mass basis.
- Islam, Shahidul, et al. 1996, Scientia Horticulturae 137-149: CO2 enriched tomatoes had lower amounts of citric, malic and oxalic acids, and higher amounts of ascorbic acid, fructose, glucose and sucrose synthase activity than the control. Elevated CO2 enhanced fruit growth and colouring during development.
- Stitt & Krapp 1999, Plant, Cell & Environment 22.6: 583-621: Increased rates of growth in elevated [CO2] will require higher rates of inorganic nitrogen uptake and assimilation. An increased supply of sugars can increase the rates of nitrate and ammonium uptake and assimilation, the synthesis of organic acid acceptors, and the synthesis of amino acids. Interpretation of experiments in elevated [CO2] requires that the nitrogen status of the plants is monitored.
- Galtier, Nathalie, et al. 1995, Journal of Experimental Botany 1335-1344: At elevated CO2, the rate of sucrose synthesis was increased relative to that of starch, and sucrose/starch ratios were higher throughout the photoperiod in the leaves of all plants expressing high SPS activity. At high CO2 the stimulation of photosynthesis was more pronounced. We conclude that SPS activity is a major point of control of photosynthesis, particularly under saturating light and CO2.
- At very high CO2 levels, not only sufficient water and nutrients but also sufficient light must be provided to the plants. This greenhouse management issue is discussed in an excellent article at the CROP KING site: LINK: [CROP KING]
RELATED POST: [CLIMATE CHANGE IMPACTS RESEARCH]
Fishing for climate calamity?
Posted May 23, 2018
The eco scare that human activity is killing off the fish in the oceans predates climate change. In the BC days (before-climate), a combination of over-fishing, seafaring, and discharges of plastics and pollution into the oceans by humans was cited (“Sea’s riches running out”, 1977). In AC times (after-climate) there is of course only one cause for all things, and that is human-caused global warming by way of fossil fuel emissions (Oceans running out of fish, Bangkok Post, 1994), (Ocean’s fish could disappear, Bangkok Post, May 19, 2010), (A New Warning Says We Could Run Out of Fish by 2048, HuffPost, Dec 14, 2017), (All seafood will run out in 2050, say scientists, The Telegraph, 22 May 2018), (Oceans are running out of fish much faster than previously thought, ZME Science, 20 January 2016).
In the AC after-climate era, the causes of the fish apocalypse are described in terms of rising ocean temperature and ocean acidification by fossil fuel emissions. As well, the language of the fish apocalypse has changed from gradual reduction in numbers to “depletion at alarming rates” and claims that marine life on earth is “at a breaking point”. There is also a timeline given for when the oceans will become devoid of fish: the year 2050. Unless of course we get serious about the Paris Accord, stop using dirty polluting fossil fuels, and save the planet. And the fish.
Climate Science 2007: “The dearth of scientific knowledge only adds to the alarm”
Posted May 22, 2018
THE DEARTH OF SCIENTIFIC KNOWLEDGE ONLY ADDS TO THE ALARM
- Global warming scientists cited the shrinking of the Chorabari Glacier in the Himalayan Mountains as evidence that carbon dioxide emissions from fossil fuels are causing global warming and that global warming in turn is causing Himalayan glaciers to melt. Although the data are insufficient and conflicting, they project that in a hundred years the glacial loss will affect the water supply to a vast region whose rivers get their water from these glaciers. With respect to the absence of sufficient data to support this projection, they offer the odd logic that “the dearth of scientific knowledge only adds to the alarm”.
- There are a thousand glaciers in the Himalayan Mountains. Some of them are retreating. Some of them are expanding. Some are doing neither. We don’t have sufficient data to know what most of them are doing except that there has been a gradual net retreat of the glaciers since the year 1850 which marks the glacial maximum of the Little Ice Age.
- The Himalayas are fold mountains, and the folding is still in progress. It is a geologically active area. There is a lot of geothermal activity in these mountains, particularly in Uttaranchal where the Chorabari Glacier is located. Steamy hot springs are a major tourist attraction in Uttaranchal.
- Neither geothermal nor volcanic activity is included in the assessment of glacial melt as an effect of fossil fuel emissions. The assessment is that the end is near for Himalayan glaciers due to fossil fuel emissions. The end may very well be near, but the prediction of its coming would be more credible if the computer models included volcanic and geothermal activity both on land and on the ocean floor.
- A computer model based on the assumption that all surface anomalies of the planet are due to human activity is not the appropriate tool for the determination of the role of human activity in surface anomalies.
FOSSIL FUEL EMISSIONS MELTING ICE IN GREENLAND
- Although there has been some thinning of coastal ice in Greenland, the total ice mass there is actually increasing because of a rapid increase in ice thickness at higher elevations. If we could cause all of Greenland’s ice to melt into the sea, it would raise the sea level by 7 meters, as the scaremongers say, but that scenario does not appear likely given the data.
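That 7 meter figure is easy to check with a back-of-envelope calculation: convert the ice sheet's volume to melt water and spread it over the ocean surface. A minimal sketch using commonly quoted round numbers (the approximate ice volume, ocean area, and densities are standard values, not figures from the post itself):

```python
# Rough check of the "7 meters" Greenland figure using round published numbers.
ICE_VOLUME_KM3 = 2.9e6     # approximate Greenland ice sheet volume
OCEAN_AREA_KM2 = 3.61e8    # approximate global ocean surface area
RHO_ICE, RHO_WATER = 917.0, 1000.0   # densities in kg/m^3

water_volume_km3 = ICE_VOLUME_KM3 * RHO_ICE / RHO_WATER   # melt water volume
rise_m = water_volume_km3 / OCEAN_AREA_KM2 * 1000.0       # km -> m
print(f"sea level equivalent: {rise_m:.1f} m")            # ~7.4 m
```

The result, roughly 7.4 m, ignores second-order effects such as changes in ocean area, so it should be read only as confirmation that the quoted 7 meters is the right order of magnitude.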
- One should also take note that during the last decade, Greenland has not become warmer. It has become colder. It is therefore not possible to ascribe changes in its ice mass to global warming or to fossil fuel emissions. As a footnote, Greenland’s coast was in fact green with vegetation in the tenth century when it was discovered by Nordic sailors. It was warmer then than it is now.
- Since then it has been through the Little Ice Age from which it is currently recovering. Studies of ice mass balance in Greenland by climate scientists ignore geothermal activity and begin with the assumption that all observed ice loss is due to fossil fuel emissions and that it can be attenuated by taking climate action in the form of cutting emissions.
FOSSIL FUEL EMISSIONS CAUSE DROUGHT IN AFRICA
- Africa is a drought-prone continent and has suffered numerous tragic droughts over the last 500 years. These droughts are natural occurrences. They are not caused by carbon dioxide emissions from fossil fuels. There is no trend in the severity of these droughts, and the current one is not the most severe. African scholars have written to refute efforts to associate the current drought with the global warming agenda. One such scholarly article was recently published in the Bangkok Post.
- The dropping of water levels in Lake Victoria and other lakes there is a known effect of a cascade of dams on the Nile and cannot in any way be related to the use of fossil fuels.
- The New York Times columnist who makes these alarming charges is the same individual who once fell for the oldest trick in the book in Cambodian brothels: he paid a large sum of money to “purchase freedom” for a young prostitute and then wrote a column about his heroic deed. The young lady had by then returned to the brothel. This is the level of gullibility we are dealing with in this column as well. One should apply a big dose of critical thinking when consuming this kind of information.
RELATED POSTS
Total Hurricane Energy & Fossil Fuel Emissions
Correlation Between Cumulative Emissions and Cumulative Sea Level Rise
TCRE: Transient Climate Response to Cumulative Emissions
A CO2 Radiative Forcing Seasonal Cycle?
Climate Change: Theory vs Data
Correlation of CMIP5 Forcings with Temperature
The Anomalies in Temperature Anomalies
The Greenhouse Effect of Atmospheric CO2
ECS: Equilibrium Climate Sensitivity
Climate Sensitivity Research: 2014-2018
TCR: Transient Climate Response
Peer Review of Climate Research: A Case Study
Spurious Correlations in Climate Science
Global Warming and Arctic Sea Ice: A Bibliography
Carbon Cycle Measurement Problems Solved with Circular Reasoning
NASA Evidence of Human Caused Climate Change
Event Attribution Science: A Case Study
Event Attribution Case Study Citations
Global Warming Trends in Daily Station Data
The Trend Profile of Temperature Data
History of the Global Warming Scare
The dearth of scientific knowledge only adds to the alarm
Nonlinear Dynamics: Is Climate Chaotic?
Eco-Fearology in the Anthropocene
Carl Wunsch Assessment of Climate Science: 2010
Gerald Marsh, A Theory of Ice Ages
History of the Ozone Depletion Scare
Empirical Test of Ozone Depletion
Brewer-Dobson Circulation Bibliography
Elevated CO2 and Crop Chemistry
Little Ice Age Climatology: A Bibliography
Sorcery Killings, Witch Hunts, & Climate Action
Climate Impact of the Kuwait Oil Fires: A Bibliography
Noctilucent Clouds: A Bibliography
Climate Change Denial Research: 2001-2018
The Population Bomb Update: 2010
Posted May 21, 2018
- On the one hand, Western pundits warn us about the dangers of an impending “population bomb” brought about by overpopulation. We are told that the planet is being overwhelmed by the sheer number of people on it and will soon be unable to supply us with sufficient food, water, shelter, and energy, and so we must do everything we can to control the population growth rate.
- On the other hand, we find that the Western nations themselves are scrambling for population growth. They provide tax deductions and other financial benefits per child, and the United States is now counting on a vigorous fertility rate to boost its population to 400 million by the year 2050 as a way of gaining economic advantage with a more stable population (America will be just fine, Bangkok Post, April 7, 2010).
- We thus find that the same nations that fund anti-fertility programs to limit population growth in Asia and Africa are, at the same time, providing tax benefits for having children and bragging about their ability to increase the fertility and growth rate of their own populations.
- These contradictions raise serious questions. Is population growth good or bad? Is the population bomb a global problem or a localized one? To protect the planet from the population bomb, should population growth in some areas be restricted while growth in other areas is encouraged?
- CONCLUSION: THE POPULATION BOMB DOES NOT MEAN THAT THERE ARE TOO MANY OF US. IT MEANS THAT THERE ARE TOO MANY OF THEM!
The Eyjafjallajoekull Eruption: 2010
Reference: Ice cap thaw may awaken Icelandic volcanoes, Bangkok Post, April 17, 2010
