Thongchai Thailand

Archive for July 2020

  1. ABSTRACT: We assess evidence relevant to Earth’s equilibrium climate sensitivity per doubling of atmospheric CO2, characterized by an effective sensitivity S. This evidence includes feedback process understanding, the historical climate record, and the paleoclimate record. An S value lower than 2 K is difficult to reconcile with any of the three lines of evidence. The amount of cooling during the Last Glacial Maximum provides strong evidence against values of S greater than 4.5 K. Other lines of evidence in combination also show that this is relatively unlikely. We use a Bayesian approach to produce a probability density function for S given all the evidence, including tests of robustness to difficult-to-quantify uncertainties and different priors. The 66% range is 2.6-3.9 K for our Baseline calculation, and remains within 2.3-4.5 K under the robustness tests; corresponding 5-95% ranges are 2.3-4.7 K. This indicates a stronger constraint on S than reported in past assessments, achieved by lifting the low end of the range. This narrowing occurs because the three lines of evidence agree and are judged to be largely independent, and because of greater confidence in understanding feedback processes and in combining evidence. We identify promising avenues for further narrowing the range in S, in particular using comprehensive models and process understanding to address limitations in the traditional forcing-feedback paradigm for interpreting past changes.
  2. PLAIN LANGUAGE SUMMARY: Earth’s global climate sensitivity is a fundamental quantitative measure of the susceptibility of Earth’s climate to human influence. A landmark report in 1979 by Jules Charney concluded that it probably lies between 1.5℃ and 4.5℃ per doubling of atmospheric carbon dioxide, assuming that other influences on climate remain unchanged. In the 40 years since, it has appeared difficult to reduce this uncertainty range. In this report we thoroughly assess all lines of evidence, including some new developments. We find that a large volume of consistent evidence now points to a more confident view of a climate sensitivity near the middle or upper part of this range. In particular, it now appears extremely unlikely that the climate sensitivity could be low enough to avoid substantial climate change well in excess of 2℃ warming under a high-emissions future scenario. We remain unable to rule out that the sensitivity could be above 4.5℃ per doubling of carbon dioxide levels, although this is not likely. Continued research is needed to further reduce the uncertainty, and we identify some of the more promising possibilities in this regard.
  3. INTRODUCTION: The ECS, defined as the steady-state global temperature increase for a doubling of CO2, has long been taken as the starting point for understanding global climate changes. It was quantified specifically by Charney in 1979 as the equilibrium warming seen in a model with ice sheets and vegetation fixed at present-day values, with a proposed range of 1.5-4.5 K based on the information at the time, but Charney did not attempt to quantify the probability that the sensitivity was inside or outside this range. The IPCC 2013 report asserted the same now-familiar range, but more precisely dubbed it a >66% likely credible interval, implying an up to one in three chance of being outside that range. It has been estimated that, in an ideal world where the information would lead to optimal policy responses, halving the uncertainty in a measure of climate sensitivity would lead to an average savings of US$10 trillion in today’s dollars. Apart from this, the sensitivity of the world’s climate to external influence is a key piece of knowledge that humanity should have at its fingertips. So how can we narrow this range? Quantifying ECS is challenging because the available evidence consists of diverse strands, none of which is conclusive by itself. This requires that the strands be combined in some way. Yet, because the underlying science spans many disciplines within the Earth Sciences, individual scientists generally only fully understand one or a few of the strands. Moreover, the interpretation of each strand requires structural assumptions that cannot be proven, and sometimes ECS measures have been estimated from each strand that are not fully equivalent. This complexity and uncertainty thwarts rigorous, definitive calculations and gives expert judgment and assumptions a potentially large role.
Our assessment was undertaken under the auspices of the World Climate Research Programme’s Grand Science Challenge on Clouds, Circulation and Climate Sensitivity {2015 workshop at Ringberg Castle in Germany}. It tackles the above issues, addressing three questions: (1) Given all the information we now have, acknowledging and respecting the uncertainties, how likely are very high or very low climate sensitivities outside the presently accepted likely range of 1.5-4.5 K? (2) What is the strongest evidence against very high or very low values? (3) Where is there potential to reduce the uncertainty? In addressing these questions, we follow Stevens et al. (2016, hereafter SSBW16), who laid out a strategy for combining lines of evidence and transparently considering uncertainties. The lines of evidence we consider, as in SSBW16, are modern observations and models of system variability and feedback processes; the rate and trajectory of historical warming; and the paleoclimate record. The core of the combination strategy is to lay out all the circumstances that would have to hold for the climate sensitivity to be very low or high given all the evidence (which SSBW16 call “storylines”). A formal assessment enables quantitative probability statements given all evidence and a prior distribution, but the “storyline” approach allows readers to draw their own conclusions about how likely the storylines are, and points naturally to areas with the greatest potential for further progress. Recognizing that expert judgment is unavoidable, we attempt to incorporate it in a transparent and consistent way. Combining multiple lines of evidence will increase our confidence and tighten the range of likely ECS if the lines of evidence are broadly consistent. If uncertainty is underestimated in any individual line of evidence, inappropriately ruling out or discounting part of the ECS range, this will make an important difference to the final outcome (see example in Knutti et al., 2017). {Blogger’s note: There are two citations for Knutti et al 2017 in the list of citations}. Therefore it is vital to seek a comprehensive estimate of the uncertainty of each line of evidence that accounts for the risk of unexpected errors or influences on the evidence. This must ultimately be done subjectively. We will therefore explore the uncertainty via sensitivity tests and by considering ‘what if’ cases in the sense of Bjorn Stevens, including what happens if an entire line of evidence is dismissed. The most recent reviews (Collins et al., 2013; Knutti et al., 2017 (which one?)) have considered the same three main lines of evidence considered here, and have noted they are broadly consistent with one another, but did not attempt a formal quantification of the probability distribution function of ECS. Formal Bayesian quantifications have been done based on the historical warming record (see Bodman and Jones 2016 for a recent review), the paleoclimate record (PALAEOSENS, 2012), a combination of historical and last millennium records (Hegerl et al., 2006), and multiple lines of evidence from instrumental and paleo records (Annan and Hargreaves, 2006). An assessment based only on a subset of the evidence will yield too wide a range if the excluded evidence is consistent (e.g. 
Annan and Hargreaves, 2006), but if both subsets rely on similar information or assumptions, this co-dependence must be considered when combining them (Knutti and Hegerl 2008). Therefore, an important aspect of our assessment is to explicitly assess how uncertainties could affect more than one line of evidence and to assess the sensitivity of calculated PDFs to reasonable allowance for interdependencies of the evidence {blogger’s note: i.e. violation of the independence assumption}. Another key aspect of our assessment is that we explicitly consider process understanding via modern observations and process models as a newly robust line of evidence. Such knowledge has occasionally been incorporated implicitly (via the prior on ECS) based on the sample distribution of ECS in available climate models (Annan and Hargreaves, 2006) or expert judgments (Forest et al., 2002), but climate models and expert judgments do not fully represent existing knowledge or uncertainty relevant to climate feedbacks, nor are they fully independent of other evidence (in particular that from the historical temperature record, see Kiehl, 2007). Process understanding has recently blossomed, however, to the point where substantial statements can be made without simply relying on climate model representations of feedback processes, creating a new opportunity exploited here. Climate models (GCMs) nonetheless play an increasing role in calculating what our observational data would look like under various hypothetical ECS values, in effect translating from evidence to ECS. Their use in this role is now challenging long-held assumptions, for example showing that 20th-century warming could have been relatively weak even if ECS were high, that paleoclimate changes are strongly affected by factors other than CO2, and that climate may become more sensitive to greenhouse gases in warmer states. GCMs are also crucial for confirming how modern observations of feedback processes are related to ECS. 
Accordingly, another novel feature of this assessment will be to use GCMs to refine our expectations of what observations should accompany any given value of ECS and thereby avoid biases now evident in some estimates of ECS based on the historical record using simple energy budget or energy balance model arguments. GCMs are also used to link global feedback strengths to observable phenomena. However, for reasons noted above, we avoid relying on GCMs to tell us what values to expect for key feedbacks except where the feedback mechanisms can be calibrated against other evidence. Since we use GCMs in some way to help interpret all lines of evidence, we must be mindful that any errors in doing this could reinforce across lines. We emphasize that this assessment begins with the evidence on which previous studies were based, including new evidence not used previously, and aims to comprehensively synthesize the implications for climate sensitivity both by drawing on key literature and by doing new calculations. In doing this, we will identify structural uncertainties that have caused previous studies to report different ranges of ECS from (essentially) the same evidence, and account for this when assessing what that underlying evidence can tell us. An issue with past studies is that different or vague definitions of ECS may have led to perceived, un-physical discrepancies in estimates of ECS that hampered abilities to constrain its range and progress understanding. Bringing all the evidence to bear in a consistent way requires using a specific measure of ECS, so that all lines of evidence are linked to the same underlying quantity. We denote this quantity as S. The implications for S of the three strands of evidence are examined separately in sections 3-5, and anticipated dependencies between them are discussed in section 6. 
To obtain a quantitative probability distribution function of S, we follow Bjorn Stevens and many other studies by adopting a Bayesian formalism, which is outlined in sections 2.2-2.6. The results of applying this to the evidence are presented in section 7, along with the implications of our results for other measures of climate sensitivity and for future warming. The overall conclusions of our assessment are presented in section 8.
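As a sketch of how such a Bayesian combination works in principle, the toy calculation below multiplies a prior over S by one normal likelihood per line of evidence and renormalizes. The means and spreads used are purely illustrative placeholders, not the paper’s numbers; they serve only to show how agreement among independent lines narrows the posterior.

```python
from math import exp

def normal_pdf(x, mean, sd):
    # Unnormalized normal density is sufficient here; we renormalize later.
    return exp(-0.5 * ((x - mean) / sd) ** 2)

# Grid of candidate sensitivities, 0.5 K to 10.0 K in 0.01 K steps.
grid = [0.5 + 0.01 * i for i in range(951)]

# Hypothetical likelihoods for three lines of evidence, each treated as
# normal in S; the (mean, sd) pairs below are illustrative, not the paper's.
lines = [(3.0, 1.0), (3.2, 1.3), (2.8, 1.1)]

post = [1.0 for _ in grid]                    # uniform prior over the grid
for mean, sd in lines:
    post = [p * normal_pdf(s, mean, sd) for p, s in zip(post, grid)]

total = sum(post)
post = [p / total for p in post]              # normalize to a discrete PDF

# Posterior mean and a central 66% interval by accumulating probability.
mean_s = sum(s * p for s, p in zip(grid, post))
cum, lo, hi = 0.0, None, None
for s, p in zip(grid, post):
    cum += p
    if lo is None and cum >= 0.17:
        lo = s
    if hi is None and cum >= 0.83:
        hi = s
```

Run on these hypothetical inputs, the posterior mean lands near 3 K with a 66% interval narrower than any single line supports on its own, which is the qualitative effect the paper’s combination strategy relies on.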
  4. SECTION 8: PREAMBLE TO CONCLUSIONS: There are subjective elements in this study but there are also objective ones, in particular, enforcing mathematical rules of probability to ensure that our beliefs about climate sensitivity are internally consistent and consistent with our beliefs about the individual pieces of evidence. All observational evidence must be interpreted using some type of model that relates underlying quantities to the data, hence there is no such thing as a purely observational estimate of climate sensitivity. Uncertainty associated with any evidence therefore comes from three sources: observational uncertainty, potential model error, and unknown influences on the evidence such as unpredictable variability. By comparing past studies that used different models for interpreting similar evidence we find that the additional uncertainty associated with the model itself is considerable compared with the stated uncertainties typically obtained in such studies assuming one particular model. When numerical Global Climate Models (GCMs) {blogger’s note: The acronym GCM stands for General Circulation Model} are used to interpret evidence, they reveal deficiencies in the much simpler models used traditionally, in particular the failure of these models to adequately account for the effects of non-homogeneous warming. This insight is particularly important for the historical temperature record, which is revealed by GCMs to be compatible with higher climate sensitivities than previously inferred using simple models. In general, many published studies appear to have overestimated the ability of a particular line of evidence to constrain sensitivity, leading to contradictory conclusions. When additional uncertainties are accounted for, single lines of evidence can sometimes offer only relatively weak constraints on the sensitivity. 
The effective sensitivity S analyzed here is defined based on the behavior during the first 150 years after a step change in forcing, which is chosen for several practical reasons. While our study also addresses other measures of sensitivity (the Transient Climate Response, TCR) and long-term equilibrium sensitivity, the calculations of these were not optimal, and future studies could apply a methodology similar to that used here to quantify them, or other quantities perhaps more relevant to medium-term warming, more rigorously. After extensively examining the evidence qualitatively and quantitatively, we followed a number of past studies and used Bayesian methods to attempt to quantify the implications and probability distribution function for S. It must be remembered that every step of this process involves judgments or models, and results will depend on assumptions and assessments of structural uncertainties that are hard to quantify. Thus we emphasize that a solid qualitative understanding of how the evidence stacks up is at least as important as any probabilities we assign. Nonetheless, sensitivity tests suggest that our results are not very sensitive to reasonable assumptions in the statistical approach.
  5. SECTION 8: THE CONCLUSIONS: (1) Each line of evidence considered here (process knowledge, the historical warming record, and the paleoclimate record) accords poorly with values outside the traditional “Charney” range of 1.5-4.5 K for climate sensitivity. (2) But when these lines of evidence are taken together, because of their mutual reinforcement, we find the “outside” possibilities for S to be substantially reduced. Whatever the true value of S is, it must be reconcilable with all pieces of evidence; if any one piece of evidence effectively rules out a particular value of S, that value does not become likely again just because it is consistent with some other, weaker, piece of evidence, as long as there are other S values consistent with all the evidence. If on the other hand every value of S appeared inconsistent with at least one piece of evidence, the evidence would need reviewing to look for mistakes. But we do not find this situation. Instead we find that the lines are broadly consistent in the sense that there is plenty of overlap between the ranges of S each supports. This strongly affects our judgment of S: if the true S were 1 K, it would be highly unlikely for each of several lines of evidence to independently point toward values around 3 K. And this statement holds even when each of the individual lines of evidence is thought to be prone to errors. We asked the following question (following Bjorn Stevens): what would it take, in terms of errors or unaccounted-for factors, to reconcile an outside value of S with the totality of the evidence? A very low sensitivity (S ~ 1.5 K or less) would require all of the following: a negative low-cloud feedback, which is not indicated by evidence from satellite or process model studies and would require emergent constraints on GCMs to be wrong. 
Or, a strong and unanticipated negative feedback from another cloud type such as cirrus, which is possible due to poor understanding of these clouds but is neither credibly suggested by any model, nor by physical principles, nor by observations. Cooling of climate by anthropogenic aerosols over the instrumental period at the extreme weak end of the plausible range (near zero or slight warming), based both on direct estimates and attribution results using warming patterns. Or, that forced ocean surface warming will be much more heterogeneous than expected and cooling by anthropogenic aerosols is from weak to middle of the assessed range. Warming during the mid-Pliocene Warm Period well below the low end of the range inferred from observations, and cooling during the Last Glacial Maximum also below the range inferred from observations. Or, that S is much more state-dependent than expected in warmer climates and forcing during these periods was higher than estimated. In other words, each of the three lines of evidence strongly discounts the possibility of S around 1.5 K or below: the required negative feedbacks do not appear achievable, the industrial-era global warming of nearly 1 K could not be fully accounted for, and large global temperature changes through Earth history would also be inexplicable. A very high sensitivity (S > 4.5 K) would require all of the following to be true: total cloud feedback stronger than suggested by process-model and satellite studies; cooling by anthropogenic aerosols near the upper end of the plausible range, or that future feedbacks will be much more positive than they appear from the historical record because the mitigating effect of recent SST patterns on planetary albedo has been at the high end of expectations; much weaker-than-expected negative forcing from dust and ice sheets during the Last Glacial Maximum, or a strong asymmetry in feedback state-dependence (significantly less positive feedback in cold climates than in the present, but relatively little difference in warmer paleoclimates). Thus, each of the three lines of evidence also argues against very high S, although not as strongly as they do against low S. This is mainly because of uncertainty in how strongly “pattern effects” may have postponed the warming from historical forcing, which makes it difficult to rule out the possibility of warming accelerating in the future based on what has happened so far. Indeed, we find that the paleoclimate record (in particular, the Last Glacial Maximum) now provides the strongest evidence against very high S, while all lines provide more similar constraints against low S (paleo slightly less than the others). An important question governing the probability of low or high S is whether the lines of evidence are independent, such that multiple chance coincidences would be necessary for each of them to be wrong in the same direction. For the most part, the various elements in low- and high-S scenarios do appear superficially independent. 
For example, while possible model errors are identified that (if they occurred) could affect historical or paleo evidence, they mostly appear unrelated to each other or to global cloud feedback or model-predicted S. Some key unknowns act in a compensating fashion, i.e., where an unexpected factor would oppositely affect two lines of evidence, effectively cancelling out most of its contributed uncertainty. Even in the one identified possibility where an unknown could affect more than one line of evidence in the same direction, modelling indicates a relatively modest impact on the probability distribution function. The IPCC AR5 concluded that climate sensitivity is likely (≥ 66% probability) in the range 1.5-4.5 K. The probability of S being in this range is 93% in our Baseline calculation, and is no less than 82% in all other “plausible” calculations considered as indicators of reasonable structural uncertainty. Although consistent with IPCC’s “likely” statement, this indicates considerably more confidence than the minimum implied by the statement. We also find asymmetric probabilities outside this range, with negligible probability below 1.5 K but up to an 18% chance of being above 4.5 K. This is consistent with all three lines of evidence arguing against low sensitivity fairly confidently, a constraint that strengthens when the lines are combined. Given this consensus, we do not see how any reasonable interpretation of the evidence could assign a significant chance to S < 1.5 K. Moreover our plausible sensitivity experiments indicate a less-than 5% chance that S is below 2 K: our Baseline 5-95% range is 2.3-4.7 K and remains within 2.0 and 5.7 K under reasonable structural changes. Since the extreme tails of the probability distribution function of S are more uncertain and possibly sensitive to “unknown unknowns” and mathematical choices, it may be safer to focus on 66% ranges (the minimum for what the IPCC terms “likely”). 
This range in our Baseline case is 2.6-3.9 K, a span less than half that of AR5’s likely range, and is bounded by 2.3 and 4.5 K in all plausible alternative calculations considered. Although we are more confident in the central part of the distribution, the upper tail is important for quantifying the overall risk associated with climate change and so does need to be considered. We also note that allowing for “surprises” in individual lines of evidence via “fat-tailed” likelihoods had little effect on results, as long as such surprises affect the evidence lines independently. Our S is not the true equilibrium sensitivity ECS, which is expected to be somewhat higher than S due to slowly emerging positive feedbacks. Values are similar, however, because we define S for a quadrupling of CO2 while ECS is defined for a doubling, which cancels out most of the expected effect of these feedbacks. We find that the 66% ECS range, at 2.6-4.1 K, bounded by 2.4 and 4.6 K, is not very different from that of S, though slightly higher. Thus, our constraint on the upper bound of the ‘likely’ range for ECS is close to that of the IPCC AR5 and previous assessments, which formally adopt an equilibrium definition. The constraint on the lower bound of the “likely” range is substantially stronger than that of AR5 regardless of the measure used. The uncertainties in ECS and S assessed here are similar because each is somewhat better constrained than the other by some subset of the evidence. Among the plausible alternate calculations, the one producing the weakest high-end constraint on S uses a uniform-S-inducing prior, which shifts the ranges upward to 2.8-4.5 K (66%) and 2.4-5.7 K (90%). Our Baseline calculation assumes feedbacks are independent (or that dependence is unknown), which predicts a non-uniform prior probability distribution function for S; to predict a uniform one requires instead assuming a known, prior dependence structure among the feedbacks. 
Although lack of consensus on priors remains a leading-order source of spread in possible results, we still find that sensitivity to this is sufficiently modest that strong constraints are possible, especially at the low end of the S range. The main reason for the stronger constraints seen here in contrast to past assessments is that new analysis and understanding has led us to combine lines of evidence in a way the community was not ready to do previously. We also find that the three main lines of evidence are more consistent than would be expected were the true uncertainty to be as large as in previous assessments. While some individual past studies have assigned even narrower ranges, as discussed above, past studies have often been overconfident in assigning uncertainty, so not too much weight should be given to any single study. We note that although we did not use GCM “emergent constraint” studies using present-day climate system variables in our base results, our results are nonetheless similar to what those studies suggest in the aggregate. New models run for CMIP6 are showing a broader range of S than previous iterations of CMIP. Our findings are not sensitive to GCM S distributions since we do not directly rely on them. The highest and lowest CMIP6 S values are much less consistent with evidence analyzed here than those near the middle of the range. Some of the effects quantified in this paper with the help of GCMs were examined only with pre-CMIP6 models, and interpretations of evidence might therefore shift in the future upon further analysis of newer models, but we would not expect such shifts to be noteworthy.




  1. The extremely verbose and confused collection of statements about climate sensitivity, intermingled with a multiplicity of interpretations and disclaimers, along with a duality in the definition of climate sensitivity as ECS and as S, does not provide useful information on the subject.
  2. Also, a research question stated as asking what the width of the climate sensitivity confidence interval should be, and whether it can be reduced from the width suggested by Charney, is inappropriate in an unbiased and objective scientific inquiry. The issue is not the width of the confidence interval or what the probability inside the confidence interval should be, but only what the mean and variance of the estimate are. Confidence intervals are simply a way of expressing mean and variance. An extreme form of bias is contained in the research question stated as “We identify promising avenues for further narrowing the range in S”.
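The point can be made concrete: for a normal distribution, any central confidence interval is computed directly from the mean and standard deviation, so the interval adds no information beyond those two numbers. A minimal sketch, using hypothetical values of 3.0 K for the mean and 0.9 K for the standard deviation:

```python
from statistics import NormalDist

def normal_interval(mean, sd, prob):
    """Central interval of a normal distribution with the given
    mean and standard deviation: mean +/- z * sd."""
    z = NormalDist().inv_cdf(0.5 + prob / 2.0)   # two-sided quantile
    return (mean - z * sd, mean + z * sd)

# Hypothetical sensitivity estimate: mean 3.0 K, standard deviation 0.9 K.
lo66, hi66 = normal_interval(3.0, 0.9, 0.66)
lo90, hi90 = normal_interval(3.0, 0.9, 0.90)
```

With these illustrative numbers the 90% interval works out to roughly 1.5-4.5 K; that is, the familiar Charney range is just one way of writing a mean of about 3 K and a standard deviation of about 0.9 K.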
  3. The Equilibrium Climate Sensitivity described by Charney is derived from climate model simulations of CO2 forcing only, and refers specifically to the correlation between temperature and the natural logarithm of atmospheric CO2 concentration in the absence of other forcings. However, temperature forecasts are made with a portfolio of forcings that includes but is not restricted to CO2 forcing. Some of the complexity of the presentation appears to derive from a lack of clarity in this distinction.
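The logarithmic CO2-temperature relation referred to here can be written as ΔT = S × log2(C/C0), where C0 is a reference concentration. A minimal sketch with illustrative numbers (the value S = 3 K and the concentrations 280 ppm and 410 ppm are assumptions for the example, not values from the paper):

```python
from math import log

def co2_warming(sensitivity, c, c0=280.0):
    """Warming attributable to CO2 alone under the logarithmic relation
    delta_T = S * log2(c / c0); ignores all other forcings."""
    return sensitivity * log(c / c0) / log(2.0)

# Illustrative inputs: S = 3 K per doubling, CO2 at 410 ppm vs 280 ppm.
dT = co2_warming(3.0, 410.0)
```

For these inputs the CO2-only warming comes to about 1.65 K; a forecast made with a full portfolio of forcings would then add or subtract the other terms.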
  4. As suggested at the end of the paper, but not used in the analysis that precedes it, the understanding of warming and the forecast of future warming should be based on the complete portfolio of forcings, a portfolio that includes the CO2 (ECS) forcing but is not restricted to it. The forcings portfolio can then be tested against observed temperatures and evaluated according to the fit, as demonstrated in related posts at this site: [LINK] [LINK] .
  5. The analysis also displays the oddity in climate science of understanding variance not as degradation of the information content of the mean but as a measure of how extreme the values are that define the confidence interval, such that the low information content of large variances is understood not as uncertainty but as certainty about how extreme the values COULD be. Such odd interpretations of variance likely derive from confirmation bias in climate science that looks at confidence intervals not as measures of uncertainty (not knowing) but as measures of knowing how extreme it COULD be [LINK] .
  6. The extensive research efforts and their interpretation presented in the manuscript appear to be products of inappropriate research questions. These questions derive from a flawed interpretation of the confidence interval and from the confirmation bias of the researchers, expressed as a research objective not of discovering an unbiased estimate of the mean and variance of climate sensitivity to atmospheric CO2 but of finding ways to reduce the width of the confidence interval from the large interval proposed by Jules Charney.
  7. The authors mention the relevance of the TCRE (transient climate response to cumulative emissions) but do not address its many anomalous interpretations. For example, the TCRE shows that cumulative emissions of one teratonne will cause 1.5C of warming within a small uncertainty band. The corresponding increase in atmospheric CO2 implies a climate sensitivity and the corresponding uncertainty in the TCRE implies a climate sensitivity uncertainty and its 95% confidence interval. A study of climate sensitivity and its uncertainty should be able to explain the TCRE.
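The arithmetic implied in this paragraph can be sketched as follows. Converting a TCRE value into an implied sensitivity requires an assumed airborne fraction and a ppm-to-GtC conversion; the 0.45 airborne fraction, 2.13 GtC per ppm, and 280 ppm baseline below are illustrative assumptions, not numbers from the paper or from the TCRE literature.

```python
from math import log

GTC_PER_PPM = 2.13          # approximate conversion, GtC per ppm of CO2
C0 = 280.0                  # assumed preindustrial CO2, ppm

def implied_sensitivity(tcre, emissions_gtc, airborne_fraction):
    """Transient sensitivity per CO2 doubling implied by a TCRE value
    (in K per 1000 GtC), given cumulative emissions and an assumed
    airborne fraction."""
    delta_ppm = emissions_gtc * airborne_fraction / GTC_PER_PPM
    warming = tcre * emissions_gtc / 1000.0
    doublings = log((C0 + delta_ppm) / C0) / log(2.0)
    return warming / doublings

# Illustrative: TCRE = 1.5 K per 1000 GtC, one teratonne of carbon emitted,
# and an assumed airborne fraction of 0.45.
s = implied_sensitivity(1.5, 1000.0, 0.45)
```

Under these assumptions a TCRE of 1.5 K per teratonne of carbon corresponds to a transient sensitivity of roughly 1.8-1.9 K per doubling, which is the kind of cross-check being asked for here.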
  8. The authors do not do that, writing only that “The Transient Climate Response (TCR, or warming at the time of CO2 doubling in an idealized 1% per year increase scenario) has been proposed as a better measure of warming over the near- to medium-term; it may be more generally related to peak warming, and better constrained by historical warming, than S. It may also be better at predicting high-latitude warming. But 21st-century global-mean trends under high emissions are better predicted by S than by TCR, perhaps because of non-linearities in forcing or response or because TCR estimates are affected by noise. TCR is less directly related to the other lines of evidence than is S.”
  9. With this brief and mysterious assessment, the authors dismiss the topic altogether. In fact the TCRE is none of these things, and the TCRE is not affected by noise: the TCRE coefficient is derived from a near-perfect correlation between temperature and cumulative emissions. The authors cite the Knutti 2017 paper in which Reto Knutti and co-authors extol the virtues of the TCRE and propose that the TCRE should replace climate sensitivity as our way of understanding the warming effect of fossil fuel emissions. It is clear from the authors’ language that they either did not study the TCRE sufficiently or chose to dismiss it without a sufficient explanation of why it was dismissed.
  10. In summary, we find that this study derives from biased research questions and a poor understanding of variance as a measure of uncertainty. It does not present a useful analysis of the climate sensitivity issue in climate science, specifically with respect to understanding climate sensitivity in the context of all forcings and relating the sensitivity issue to the TCRE.










CITATION: Decadal trends in the ocean carbon sink, Tim DeVries et al, Proceedings of the National Academy of Sciences 2019, 116 (24) 11646-11651; DOI: 10.1073/pnas.1900371116  [LINK]

ABSTRACT:  Measurements show large decadal variability in the rate of CO2 accumulation in the atmosphere that is not driven by CO2 emissions. The decade of the 1990s experienced enhanced carbon accumulation in the atmosphere relative to emissions, while in the 2000s, the atmospheric growth rate slowed, even though emissions grew rapidly. These variations are driven by natural sources and sinks of CO2 due to the ocean and the terrestrial biosphere. In this study, we compare three independent methods for estimating oceanic CO2 uptake and find that the ocean carbon sink could be responsible for up to 40% of the observed decadal variability in atmospheric CO2 accumulation. Data-based estimates of the ocean carbon sink from pCO2 mapping methods and decadal ocean inverse models generally agree on the magnitude and sign of decadal variability in the ocean CO2 sink at both global and regional scales. Simulations with ocean biogeochemical models confirm that climate variability drove the observed decadal trends in ocean CO2 uptake, but also demonstrate that the sensitivity of ocean CO2 uptake to climate variability may be too weak in models. Furthermore, all estimates point toward coherent decadal variability in the oceanic and terrestrial CO2 sinks, and this variability is not well-matched by current global vegetation models. Reconciling these differences will help to constrain the sensitivity of oceanic and terrestrial CO2 uptake to climate variability and lead to improved climate projections and decadal climate predictions.

TRANSLATION: Measurements show that atmospheric composition is not responsive to fossil fuel emissions, confirming a finding to that effect in this denier blog [LINK]. The decade of the 1990s experienced enhanced carbon accumulation in the atmosphere relative to emissions, while in the 2000s, the atmospheric growth rate slowed, even though emissions grew rapidly. Therefore these variations must be driven by nature’s carbon cycle, as claimed in the denier blogs: [LINK]

THE INTERPRETATION OF THESE DATA BY THE AUTHORS: The authors use these data to propose a decadal variability in ocean-atmosphere CO2 flux and then use the same data to test this hypothesis. This kind of hypothesis test suffers from circular reasoning: it is not possible to test a hypothesis with the same data used to construct it. In a related post we show that it is not possible to detect the impact of fossil fuel emissions on the carbon cycle because carbon cycle flows are an order of magnitude larger and carry large uncertainties, since these flows cannot be directly measured and must be inferred. [LINK] [LINK] [LINK]


WHAT DOES THE MASS BALANCE SHOW?  The data show that changes in oceanic CO2 attributed to fossil fuel emissions are not possible because of a mass balance deficit. Fossil fuel emissions are not large enough to explain observed annual changes in oceanic CO2. These changes are likely best understood in terms of geological carbon flows in the ocean, as for example the CO2 bubbles just off the coast from the University of California Santa Barbara. The relative insignificance of the atmosphere in this respect is discussed in a related post [LINK].

CONCLUSION: The evidence does not support the claim that fossil fuel emissions can explain changes in atmospheric and oceanic CO2 concentration. There is no evidence that either atmospheric composition or ocean acidification is responsive to fossil fuel emissions at an annual time scale. The climate science dependence on the atmosphere to explain all observed changes ignores much larger geological flows of carbon. [LINK] [LINK] [LINK] [LINK] [LINK]


POSTSCRIPT: The total mass of the ocean and atmosphere taken together is 1.36E18 metric tonnes, of which the atmosphere is 0.38% and the ocean 99.62%. The insistence of climate science that the atmosphere tail wags the ocean dog in terms of heat and carbon dioxide content is not credible in many different ways.
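The mass fractions quoted in the postscript can be checked with a short calculation. The two masses used below (about 5.15E15 tonnes for the atmosphere and 1.35E18 tonnes for the ocean) are standard reference values assumed for this illustration, not figures taken from the text:

```python
# Sketch: check the ocean/atmosphere mass fractions quoted above.
# The two masses are standard reference values (assumptions here).
ATMOSPHERE_T = 5.15e15   # mass of the atmosphere, metric tonnes
OCEAN_T      = 1.35e18   # mass of the ocean, metric tonnes

total = ATMOSPHERE_T + OCEAN_T
atm_pct = 100 * ATMOSPHERE_T / total
ocean_pct = 100 * OCEAN_T / total

print(f"combined mass: {total:.2e} tonnes")   # ~1.36e18, as stated
print(f"atmosphere: {atm_pct:.2f}%")          # ~0.38%
print(f"ocean: {ocean_pct:.2f}%")             # ~99.62%
```

The 0.38% / 99.62% split in the postscript follows directly from these reference masses.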










  1. Radioactive Bookkeeping of Carbon Emissions: A new sampling method uses carbon-14 to single out which carbon dioxide molecules in the atmosphere derive from fossil fuels. The method could help track emissions goals for climate mitigation.
  2. Estimating how much carbon dioxide (CO2) is released by burning fossil fuels has traditionally resembled a large-scale mathematics test. Accurately summing up the number and types of vehicles on the road, evaluating current emissions standards, adding in emissions from power plants, and accounting for efficiencies require detailed calculations and data.
  3. Since the 1990s, scientists at the Environmental Protection Agency (EPA) have been estimating emissions using these complex accounting procedures. Now, a group of researchers has taken a different approach: measuring atmospheric CO2 directly. But first, they had to separate the fossil fuel–derived carbon from natural sources such as volcanic emissions. Using carbon-14 as a marker, the team parsed out CO2 sources, providing monthly to annual measurements of fossil fuel emissions.
  4. They found fossil fuel CO2 emissions were about 5% higher than EPA estimates. They note their approach can give more frequent and focused measurements, which is useful in policy and climate modeling research.
  5. Greenhouse gas emissions are a direct driver of climate change. International treaties like the 2015 Paris Agreement and the 1992 United Nations Framework Convention on Climate Change (UNFCCC) treaty have specific targets for reducing greenhouse gas emissions.
  6. Since the UNFCCC treaty, the EPA has used a bottom-up approach to estimate emissions. They look at activities that produce CO2: use of cars, trucks, planes, power plants, and heating. Then they look at the emission factor, or how much CO2 is produced by each activity. In the end, researchers multiply activities and emission factors and sum up everything to get an estimate of CO2 emissions for the country. “These are very detailed statistics,” said Basu, adding that EPA researchers do a good job of trying to capture all ways CO2 is emitted. “This is a very hard job, but at the end of the day, it’s an accounting process,” he said. “It’s entirely possible to miss something.” For example, using the wrong emission factor—such as an outdated fuel efficiency for a car—can skew estimates.
  7. Basu and his colleagues decided to do a top-down approach and take direct measurements of CO2 in the atmosphere. “If you just look at the atmospheric data, because everything that’s emitted has to show up in the atmosphere, you can construct an independent estimate of [emissions],” explained Basu.
  8. The team has been setting up a sampling network across the country for almost 2 decades. The geographic coverage and number of stations make a “robust” sampling of total U.S. emissions, said Basu.
  9. Parsing Out Carbon Dioxide: Atmospheric measurements of CO2 take all sources into account: those from fossil fuel combustion and carbon emissions from other sources. The trick is to figure out what part of the total emissions comes from fossil fuels.
  10. Fossil fuels are made of carbon-based material (ancient plants and animals) that is millions of years old. The carbon-14 in fossil fuels has long since decayed away: by the time carbon reaches a fossil fuel state, it is completely devoid of carbon-14. Carbon-14 is the only naturally occurring radioactive isotope of carbon. Separating it out means the remaining, carbon-14-free carbon in an air sample is fossil fuel derived.
  11. “Now we have great carbon isotope data that we can use to figure out how much fossil fuel emissions are coming from the U.S. from the atmospheric data,” said Eri Saikawa, an atmospheric chemist at Emory University who was not involved with the study. “That is extremely fascinating.”
  12. The team measured 1,000 air samples from 2010. Using a dual-tracer inverse modeling framework and measurements, the team found their top-down estimate was 5% larger than the EPA’s estimate: 1,653 teragrams of carbon per year from fossil fuels compared to about 1,581 teragrams of carbon per year.
  13. Although there was a 5% increase using the top-down method, Saikawa said that statistically it was not much different from the bottom-up approach. “Actually, the bottom-up and the top-down [methods] are overlapping,” she noted. “I thought that was a pretty good result.” Saikawa noted both methods have uncertainties. “Nothing is perfect,” she said.
  14. “But I think having both sets to work with to see how we can improve is very important. I think that’s the most exciting point,” said Saikawa. “We might actually be able to reduce the uncertainty that we currently have in terms of the CO2 emissions estimates.”
  15. The team wants to expand their work to include multiple years to note any trends in estimates or emissions over time. Accurately estimating the amount of CO2 emissions is important for long-term climate policy and planning, especially related to the Paris Agreement.
  16. “The thing is, even though we as a country might be thinking of withdrawing from the Paris accord, there are several entities within this country who have decided, ‘No, we are going to have [our own trajectories] anyway,’” said Basu. “But in order to independently verify those trajectories using our atmospheric measurement–based method, we need more measurements.”
  17. “I hope that we can use this type of great work as a way to increase the network [of sampling] and then make sure that we have a good set of measurements available,” said Saikawa. “I would love for them to be able to get data not at the national level, but then at the state level,” she said. “That could potentially show a much greater difference in a specific region.”
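The “about 5%” difference in item 12 follows directly from the two estimates quoted there (1,653 versus 1,581 teragrams of carbon per year); a quick check, using only the figures from the text:

```python
# Sketch: the "about 5%" gap between the two estimates quoted above,
# in teragrams of carbon per year (TgC/yr).
top_down  = 1653.0   # atmospheric, carbon-14 based estimate
bottom_up = 1581.0   # EPA bottom-up inventory estimate

gap_pct = 100 * (top_down - bottom_up) / bottom_up
print(f"top-down exceeds bottom-up by {gap_pct:.1f}%")   # ~4.6%
```

The exact figure is closer to 4.6%, which the article rounds to 5%.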





  1. The principal finding of the paper: A novel fossil fuel emissions estimation procedure is proposed. It consists of measuring the CO2 concentration of an air sample (x1) and also the fraction of that CO2 that contains no C14 (x2). The fraction that contains no C14 is assumed to be from fossil fuels. Thus x1 × x2 × (the total mass of the atmosphere) = the total fossil fuel emissions currently in the atmosphere, based on the assumption that the only source of C14-free CO2 is fossil fuels.
  2. The difference between successive measurements year to year is then proposed as the measurement metric that will yield the figures for annual fossil fuel emissions. This measurement is thought to be of greater precision and with less uncertainty than the current survey methods used by the EPA.
  3. About Anthropogenic Global Warming (AGW): The benefits of fossil fuels come with the fear that carbon dioxide emissions from fossil fuels taken from deep below the surface of the earth are an extraneous perturbation of the delicately balanced surface-atmosphere carbon cycle. The long term consequences of this perturbation could be devastating in terms of climate change by way of altered atmospheric composition and its greenhouse effect. Our use of fossil fuels has risen exponentially since the Industrial Revolution and, concurrently, atmospheric CO2 has risen steadily from 280 ppm to over 400 ppm. At the same time there has been a long term warming trend in global surface temperature. The science of anthropogenic global warming and climate change (AGW) is based on the assumption that these concurrent changes are causally related such that all of these changes are ultimately driven by fossil fuel emissions.
  4. A cornerstone of AGW theory is that the observed increase in atmospheric CO2 since the Industrial Revolution derives completely from CO2 generated by the combustion of fossil fuels. One line of empirical evidence for this relationship is proposed in terms of changes in the Radiocarbon fraction of atmospheric CO2 (Suess, 1953) (Levin, 2000) (Stuiver, 1981) (IPCC, 2014).
  5. About Radiocarbon: Carbon-14 forms naturally in the atmosphere by the action of cosmic rays on nitrogen but, being radioactive, once formed, 14C decays exponentially with a half life of 5,700 years. Radioactive decay is balanced by new cosmogenic synthesis and at equilibrium roughly one part per trillion of atmospheric carbon dioxide is made with radiocarbon. The equilibrium radiocarbon ratio is not constant but varies over long periods of time. This variation is the reason that radiocarbon dating must be calibrated against the prevailing carbon-14 ratio at the time that the organic matter being tested had died (Reference: my paper on Uncertainty in Radiocarbon Dating [LINK]).
  6. About Radiocarbon Dating: All carbon life-forms contain the prevailing equilibrium ratio of atmospheric carbon-14 as long as they are alive and their bodily carbon is being replenished. When they die, however, the radiocarbon fraction in their body begins an exponential decay and this fraction may be used to determine how long dead matter has been dead. This dating procedure, properly calibrated for the prevailing equilibrium carbon-14 ratio at the time of death, serves a useful purpose in establishing the age of pottery-based strata in archaeological excavations.
  7. The relevance of these relationships in climate science:  derives from the idea that fossil fuels are dead matter that has been dead for millions of years and therefore contain no radiocarbon. It is thus postulated that the release of fossil fuel emissions into the atmosphere reduces the radiocarbon portion of atmospheric carbon dioxide and that therefore the degree of such radiocarbon dilution serves as a measure of the contribution of fossil fuel emissions to the observed increase in atmospheric carbon dioxide (Stuiver, 1981) (Suess, 1953) (Revelle, 1957) (Tans, 1979).
  8. The Stuiver and Quay paper (Stuiver, 1981) presents carbon-14 measurements in tree rings of two Douglas Firs in the Pacific Northwest that grew from 1815 to 1975. The now famous graphic representation of these data taken from their paper is reproduced below. The figure shows a fairly steady 14C ratio from 1820 to 1900 and then a steep decline of about 20% from 1900 to 1950. The authors attributed the decline to dilution of natural atmospheric carbon dioxide with CO2 from fossil fuels that contain no 14C. These data are the foundation of the generally accepted idea that the observed increase in atmospheric CO2 since the Industrial Revolution is derived from fossil fuel emissions (IPCC, 2007) (IPCC, 2014).
  9. The flaw in Stuiver and Quay: However, as shown in a related post, the Stuiver and Quay study contains a fatal mass balance flaw. The Stuiver and Quay tree ring data imply that from 1900 to 1950, fossil fuel emissions, being free of radiocarbon, caused the radiocarbon portion of atmospheric CO2 to decline by 20%. In terms of fossil fuel emissions during this period, a total of 50 gigatons of carbon (GTC) or about 180 gigatons of CO2 were released into the atmosphere. At the same time atmospheric CO2 concentration rose 15.6 ppm, or 5.38%, from 296 ppm to 311 ppm. The increase of 15.6 ppm is equivalent to 33 GTC or 120 gigatons of CO2. Even if all of the emissions had gone into increasing atmospheric CO2 concentration, the dilution of 14C could not have been more than 8%. The dilution of 20% claimed by Stuiver and Quay is an impossibility in this context. These results imply that the Stuiver and Quay data do not have a simple interpretation in terms of dilution of atmospheric CO2 with fossil fuel emissions and therefore do not provide evidence that rising atmospheric CO2 concentration can be explained in terms of fossil fuel emissions.
  10. The value of these pre-bomb data is that almost immediately following the end of their sample period in 1950, atmospheric tests of nuclear weapons sharply increased the 14C ratio in atmospheric carbon dioxide, and following the cessation of such tests after the nuclear test ban treaty the 14C ratio began a natural exponential decay. The so-called bomb spike data shown below are taken from the measuring station in Wellington, NZ for the period 1955 to 1993.
  11. It would be difficult in the context of the bomb spike shown above to detect the effect of dilution by fossil fuel emissions. Nevertheless, several measurement stations were set up around the world to measure the 14C ratio in atmospheric carbon dioxide. These direct measurements provided by NOAA/ESRL are shown below. It is claimed that these data support the Stuiver and Quay findings and prove that the observed increase in atmospheric CO2 is attributable to fossil fuel emissions (NOAA/ESRL, 2010).
  12. After the nuclear test ban treaty in 1963, atmospheric 14C began its expected exponential decay. The decay curve, provided by the Wellington 14C measuring station, is shown in the chart below. Because of the natural decay of the bomb spike, the decline in atmospheric 14C measured by NOAA cannot be assumed to be caused by dilution with fossil fuels in the post bomb spike era. Evidence is needed to attribute observed declines in atmospheric 14C to fossil fuel emissions.
  13. The observed decay rate is faster than theoretical radioactive decay and this difference is thought to derive from the exchange of atmospheric and oceanic CO2 (Sherrington, 2016) (Mearns, 2014). However, the relevant point here is that the later data presented by NOAA/ESRL are consistent with the observed decay following cessation of atmospheric testing of nuclear weapons, with no evidence in the data of a response to the exponential increase in fossil fuel emissions. In that context, what we find is that the 14C decay claimed by NOAA to be caused by fossil fuel emissions is better understood as a continuation of the post-bomb natural decay of 14C. This pattern is seen in the charts below, where each chart refers to a single NOAA measuring station identified with acronyms as follows: PTB=Point Barrow, Alaska; MLO=Mauna Loa, Hawaii; KUM=Cape Kumukahi; SPO=South Pole; NIWOT=Niwot Ridge, Colorado. A tabular summary of the data is provided.
  14. In each of the charts, the NOAA station data are appended to the Wellington bomb spike decay data, and what we see is that the NOAA station data exactly match the decay curve, a match strongly supported by high values of R-squared. This pattern suggests that the observed decline in atmospheric 14C is best understood simply as a continuation of the bomb spike decay and not as being driven by fossil fuel emissions. This conclusion is further supported by the summary table above, which shows less 14C decay in the second half of the study period than in the first half, whereas fossil fuel emissions were higher in the second half.
  15. The analysis presented above shows that the data are inconsistent with their interpretation by NOAA/ESRL/IPCC as evidence of atmospheric 14C dilution by fossil fuel emissions.
  16. Yet another consideration is that carbon isotopic ratios cannot identify fossil fuel emissions as the source of the rise in atmospheric CO2 because isotopic ratios are unable to distinguish between fossil carbon and geological carbon. The proposed methodology assumes that all natural sources of carbon contain C14 and that fossil fuel emissions are unique in that they are the only source of carbon available to the atmosphere that contains no C14. This assumption is not valid because geological sources of carbon derived from the mantle are also free of C14. Most of the planet’s carbon, 99.8%, is in the mantle and core, with only 0.2% in the crust where we live and where we have things like the ocean, the atmosphere, carbon life forms, and climate. All of this 0.2% came from the mantle through geological processes that include rifts, volcanism, mantle plumes, hydrothermal vents, and seepage. Although we tend to think of volcanism in terms of volcanoes on land, more than 80% of the world’s volcanic activity is on the ocean floor. All of these geological activities transfer C14-free carbon from the mantle to the crust. All of our carbon on the crust, including the carbon in carbon life forms such as humans, came from the mantle. There is no C14 in the mantle. The climate science assumption that changes in atmospheric CO2 concentration derive from fossil fuel emissions is tested in three related posts at this site.
  17. Related Post #1: A test of the hypothesis that atmospheric composition is responsive to fossil fuel emissions at an annual time scale [LINK]. FINDINGS: We conclude that atmospheric composition, specifically in relation to the CO2 concentration, is not responsive to the rate of fossil fuel emissions. This finding is inconsistent with the theory of anthropogenic global warming by way of rising atmospheric CO2 attributed to the use of fossil fuels in the industrial economy, and with the “Climate Action” proposition of the UN that reducing fossil fuel emissions will moderate the rate of warming by slowing the rise of atmospheric CO2. The finding also establishes that the climate action project of creating Climate Neutral Economies, that is, economies that have no impact on atmospheric CO2, is unnecessary because the global economy is already Climate Neutral. A rationale for the inability to relate changes in atmospheric CO2 to fossil fuel emissions is described by geologist James Edward Kamis in terms of natural geological emissions due to plate tectonics [LINK].
  18. Related Post #2: Uncertainty in Carbon Cycle Flows [LINK]: Here we find that in the context of large uncertainties in carbon cycle flows, it is not possible to detect the effect on atmospheric composition of the much smaller flows of fossil fuel emissions.
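The mass balance arithmetic in item 9 above can be sketched in a few lines. The conversion factor of roughly 2.13 GTC of atmospheric carbon per ppm of CO2 is a standard value assumed here; the other figures are taken from the text:

```python
# Sketch of the 1900-1950 mass balance argument in item 9 above.
# GTC = gigatons of carbon.
GTC_PER_PPM = 2.13       # standard conversion factor (assumption, not from the text)

emissions_gtc = 50.0     # cumulative fossil fuel emissions, 1900-1950
rise_ppm      = 15.6     # rise in atmospheric CO2, roughly 296 -> 311 ppm
co2_1950_ppm  = 311.0    # atmospheric CO2 concentration in 1950

rise_gtc = rise_ppm * GTC_PER_PPM            # ~33 GTC, as stated in the text
atm_1950_gtc = co2_1950_ppm * GTC_PER_PPM    # total atmospheric carbon in 1950

# Upper bound: even if all 50 GTC of emissions had remained airborne,
# the 14C-free (fossil) fraction of atmospheric carbon could be at most:
max_dilution_pct = 100 * emissions_gtc / atm_1950_gtc

print(f"rise in atmospheric carbon: {rise_gtc:.0f} GTC")
print(f"maximum possible 14C dilution: {max_dilution_pct:.1f}%")   # under 8%
```

The upper bound comes out a little under 8%, which is the basis for the claim that a 20% dilution is not possible on these numbers.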









Calanus glacialis

Limacina retroversa

Calcareous alga (Emiliania huxleyi)

Bleached reef

A healthy reef off the coast of Thailand






  1. Facts on Ocean Acidification: Knowledge at a Glance: Never before have so many scientists conducted research on the impacts of the declining pH value of seawater on animals and plants in the ocean. Here we present a summary of their major research results from the past years.
  2. Oceans as a carbon store: The oceans have absorbed more than a fourth of the anthropogenically generated atmospheric carbon dioxide over the past 200 years. Without this natural store the greenhouse gas concentration in the atmosphere would be much higher and the temperature on the Earth quite a bit warmer. However, this storage function has a high price: the oceans have become nearly 30 percent more acidic since the beginning of the Industrial Revolution.
  3. More acidic doesn’t mean acid: With an average pH value of 8.2, seawater is typically slightly alkaline. This figure has dropped to 8.1 over the past 200 years. Since the pH scale is logarithmic, this small drop corresponds to an increase in acidity of nearly 30 percent. By 2100 the pH value of the oceans will presumably drop by another 0.3 to 0.4 units, and seawater will thus become 100 to 150 percent more acidic. That does not mean the oceans will actually be acidic, because even at values around 7.7 they remain alkaline, but they are, in relative terms, more acidic than before.
  4. Naturally more acidic: The pH value of seawater is subject to natural fluctuations. Depending on season and region, the pH value may change. At so-called “champagne sites”, for instance, large amounts of carbon dioxide escape from natural volcanic sources. These marine regions therefore serve as windows into the future because they show which ocean dwellers are able to adapt to a low pH value and which are not.
  5. The colder, the more acidic: Carbon dioxide dissolves especially well in cold water. That is why ocean acidification is progressing primarily in the polar regions. Acidification of the Arctic Ocean could result in less availability of aragonite, an important building block for calcareous shells, as early as in the middle of this century.
  6. Bad Company: It never rains but it pours. In addition to ocean acidification, increasing water temperatures and declining oxygen concentrations are also forcing ocean dwellers to adapt to new living conditions. A deadly trio: when the three factors act jointly, organisms in the ocean react extremely sensitively. Moreover, oceans as habitats are frequently polluted and over-fished.
  7. Everything reacts in its own way: Not all marine dwellers react equally sensitively to the declining pH value of seawater. While calcifying creatures, for example, already reach their limits at low carbon dioxide concentrations, the more acidic water hardly has an effect on other living organisms.
  8. In some cases animals and plants differ within a single species, which is why scientists presume that some parent generations have already succeeded in preparing their offspring for the challenges of ocean acidification – a so-called epigenetic effect.
  9. Danger at early life stages: Ocean acidification represents a threat particularly for the young life stages of marine animals, such as eggs or larvae. Some larvae, for instance, no longer grow and develop so well in more acidic water. In contrast to more mature specimens, they have not yet developed all internal mechanisms necessary to protect themselves successfully against external influences.
  10. Sensitive calcareous shells: When water becomes more acidic, it means bad news especially for all ocean dwellers that build calcareous shells, such as molluscs and sea angels. This is because they then have to expend more energy to build and maintain their calcareous shells. A potential consequence: their shells get thinner and possibly disintegrate, thus offering less protection against predators.
  11. Too light for transport to the depths: If the shell walls of calcifying phytoplankton species become thinner and smaller in more acidic water, this may have an impact on the entire marine carbon store. The reason is that thinner shells are at the same time lighter so their weight declines. However, this additional ballast previously meant that even the shells of tiny creatures sank to the depths – and with them the carbon in their shells. The carbon could thus be stored on the seafloor for millennia. Ocean acidification might therefore result in significantly less carbon being transported to the depths.
  12. Corals as a high-risk group:  Today the most species-rich ecosystems of the oceans, the coral reefs, are already suffering from too warm and too acidic living conditions in some regions. By the end of this century it is even possible that only 30 percent of all corals will have enough building material for their skeletons. This also has consequences for us humans: 400 million people currently owe their food and protection against storm surges to intact coral reefs.
  13. Energy deficiency: Marine dwellers have close contact to the water in which they live. If the pH value of seawater drops, the pH value in the body fluids of most living creatures also declines, possibly leading to an acid imbalance. More highly developed organisms like fish can regulate their acid balance within hours or days. However, that requires energy – which may then be lacking somewhere else, such as for growth and reproduction.
  14. If acidification is a strain on the nerves: Fish are generally relatively insensitive to ocean acidification. Nevertheless, more acidic water does not leave them entirely unaffected: the declining pH value may influence their senses and thus affect their behaviour. In laboratory experiments, tropical clownfish, for example, swam towards their predators instead of away from them. Scientists additionally presume that ocean acidification impairs the sight of fish. Their otoliths, by contrast, grow well in more acidic water, which could strengthen their hearing and orientation, or completely confuse them, since fish may overestimate the distance of certain signals.
  15. Boost to photosynthesis: Not all ocean dwellers react sensitively to the declining pH value. Some even profit from an increase in the carbon dioxide concentration. They include seagrass, macroalgae and phytoplankton species that do not form a calcareous shell. On the one hand, these plants predominantly live in coastal regions that are naturally subject to pH value fluctuations. On the other hand, the additional carbon dioxide is important for their photosynthesis. Seagrasses, for instance, can even positively influence the chemistry in the surrounding waters through their primary production.
  16. Learning from the past: The ocean repeatedly underwent acidification in the past, too, often with severe consequences, particularly for calcifying organisms. During the last ocean acidification event, 56 million years ago, many coral species vanished from the oceans forever. Scientists can learn a lot from these past geological eras about how life in the sea has reacted to more acidic water. Today, however, the pH value is declining ten times faster than in the past.
  17. Expensive consequences:  The consequences of ocean acidification for corals and molluscs alone will cost 1,000 billion US dollars. Scientists have calculated this amount with the help of forecasts.
  18. Only one way out:  There is only one effective way of combating ocean acidification. We humans have to reduce our carbon dioxide emissions. However, even if we could stop all emissions from one day to the next, the ocean would need thousands of years to recover completely.
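Because the pH scale is logarithmic, the acidity percentages quoted in item 3 above follow directly from the pH figures; a short sketch, using only the numbers from the summary:

```python
# Sketch: hydrogen-ion concentration changes implied by the pH figures
# above; pH is the negative base-10 logarithm of the H+ concentration.
def acidity_increase_pct(ph_before, ph_after):
    """Percent increase in [H+] when pH drops from ph_before to ph_after."""
    return 100 * (10 ** (ph_before - ph_after) - 1)

# Drop from 8.2 to 8.1 since the Industrial Revolution:
print(acidity_increase_pct(8.2, 8.1))   # ~26%, i.e. "nearly 30 percent"

# Projected further drop of 0.3 to 0.4 units by 2100:
print(acidity_increase_pct(8.1, 7.8))   # ~100% more acidic
print(acidity_increase_pct(8.1, 7.7))   # ~151% more acidic
```

A 0.1-unit pH drop multiplies the hydrogen-ion concentration by 10^0.1 ≈ 1.26, which is where the "nearly 30 percent" figure comes from.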




  1. The essential argument made to relate ocean chemistry to a human cause is that humans have been burning fossil fuels for 200 years, that over the same period we have observed a gradual drop in oceanic pH, and that therefore fossil fuel emissions must be the cause of the observed change in oceanic pH. This argument for causation is inadequate and unacceptable.
  2. In related posts [LINK] [LINK] we test this causation hypothesis with detrended correlation analysis and also with a mass balance. It is true that correlation does not prove causation, but causation implies correlation, and therefore without the correlation the causation hypothesis has no empirical support.
  3. The causation is tested with detrended correlation analysis at an annual time scale in a related post [LINK] with ocean acidification data comprising 124,813 measurements of ocean CO2 concentration expressed in millimoles per liter (mmol/L) from 1958 to 2014, provided by the Scripps Institution of Oceanography. If fossil fuel emissions are responsible for the observed ocean acidification, we expect to find a correlation between the rate of emissions and changes in oceanic inorganic CO2 at an annual time scale. In the correlation test we find no evidence that changes in oceanic CO2 are related to fossil fuel emissions at an annual time scale.
  4. A further test of the causation hypothesis that ocean acidification is caused by fossil fuel emissions is carried out in terms of a mass balance [LINK]. Here too, we find no evidence of causation because fossil fuel emissions do not contain the amount of CO2 needed to explain changes in oceanic pH. We conclude from this analysis that there is no empirical evidence to support the usual assumption in climate science papers on ocean acidification, such as the Alfred Wegener Institut paper presented here, that ocean acidification can be understood in terms of fossil fuel emissions or that ocean acidification can be attenuated by taking climate action in the form of reducing or eliminating the use of fossil fuels.
  5. In another related post [LINK] we present evidence that the ocean is a far greater source of carbon, by many orders of magnitude, than the atmosphere and human emissions could ever be, such that the ocean acidifies itself to a much greater extent than fossil fuels ever could, given their minuscule supply relative to the almost unlimited supply of geological carbon in the ocean and mantle. More than 80% of all volcanic activity on earth is submarine. Other sources of carbon in the ocean are hydrothermal vents, mantle plumes, rifts, and related geological activity. In this context the exclusive focus on the fossil fuel emissions of humans as the only source of carbon available to the ocean is an extreme form of the atmosphere bias of climate science.
  6. The total mass of the ocean and atmosphere taken together is 1.36E18 metric tonnes of which the atmosphere is 0.38% and the ocean 99.62%. Of the total carbon inventory of the earth, only 0.2% is found on the crust including carbon life forms and the other 99.8% is the core and mantle including the outer mantle from where carbon is known to seep into the ocean by rifting and by other means. The insistence of climate science that the atmosphere tail wags the ocean dog in terms of heat and carbon dioxide content is not credible.
  7. An example of the ocean’s ability to acidify itself is seen in the PETM event 55 million years ago, described in related posts on this site [LINK] [LINK].
  8. In the context of the data and arguments presented above for the relative insignificance of fossil fuel emissions and the atmosphere in the understanding of changes in oceanic pH, the arguments presented by the Alfred Wegener Institut for human caused ocean acidification can only be understood as an extreme form of the atmosphere bias and the human cause bias in climate science and not as objective scientific inquiry into the phenomenon of oceanic pH dynamics.
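The mass fractions quoted in item 6 above can be verified with simple arithmetic. A minimal sketch, assuming the commonly cited round-number masses of the atmosphere (about 5.1E15 tonnes) and ocean (about 1.35E18 tonnes); these inputs are illustrative assumptions, not figures taken from this post:

```python
# Rough arithmetic check of the ocean/atmosphere mass fractions quoted
# in item 6 above. The input masses are commonly cited approximations
# (assumptions for illustration), not figures taken from this post.
MASS_ATMOSPHERE = 5.1e15   # metric tonnes, approximate
MASS_OCEAN = 1.35e18       # metric tonnes, approximate

total = MASS_ATMOSPHERE + MASS_OCEAN
frac_atm = MASS_ATMOSPHERE / total * 100
frac_ocean = MASS_OCEAN / total * 100

print(f"combined mass: {total:.2e} tonnes")   # about 1.36e18
print(f"atmosphere:    {frac_atm:.2f}%")      # about 0.38%
print(f"ocean:         {frac_ocean:.2f}%")    # about 99.62%
```

With these inputs the combined mass comes to roughly 1.36E18 tonnes, with the atmosphere near 0.38% and the ocean near 99.62%, consistent with the fractions quoted above.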



Do volcanic eruptions happen underwater? : Ocean Exploration Facts ...


Understanding the bird that flew from Finland to Siaya: The Standard


Maria Helena Hällfors | Publons






  1. For breeding birds, timing is everything. Most species have just a narrow window to get the food they need to feed their brood—after spring’s bounty has sprung, but before other bird species swoop in to compete. Now, a new study suggests that as the climate warms, birds are not only breeding earlier, but their breeding windows are also shrinking by as long as 4 to 5 days. This could lead to increased competition for food that might threaten many bird populations.
  2. Birds typically time their breeding to cues signaling the start of spring, so that their chicks hatch when food like plants and insects is most abundant. But global warming has pushed many species to breed earlier in the year; that effect is especially prominent at higher latitudes, where temperatures are rising faster than near the equator. Few studies, however, have examined how climate change affects the duration of breeding windows, which closely track the number of chicks born each year as well as overall population trends.
  3. To find out how the length of breeding periods has changed over time, a team led by Maria Hällfors, an ecologist at the University of Helsinki, analyzed an extensive data set from amateur ornithologists coordinated by the Finnish Museum of Natural History. The data set spans from 1975 to 2017 and includes the nesting records of 73 species and more than 820,000 birds from a 1000-square-kilometer area in Finland’s boreal forests. Each year, trained volunteers placed uniquely numbered rings around the legs of newly hatched chicks to track their movements and survival. Because chicks had to be a certain size to get a ring, the researchers were able to use the timing of the tagging to work out when each chick had hatched—and therefore when breeding had occurred.
  4. On average, the beginnings and ends of the breeding periods are occurring earlier in the year. However, the ends are shifting earlier faster than the beginnings, resulting in an average breeding window that was 1.7 days shorter in 2017 than it was in 1975. During that same period, Finland’s average temperature rose by 0.8C, suggesting many bird species are actively responding to changing temperatures, Hällfors says.
  5. “It’s good for the species if it’s able to follow the optimum conditions as the climate changes,” she says. However, the shorter breeding windows mean more birds are breeding earlier in the season—a risky time for chicks’ survival, especially if the weather turns suddenly cold. In addition, because many late-season species are shifting their breeding windows up, that could mean more competition for food and nesting sites early on, leaving some chicks to go hungry. Although the researchers were unable to tease out overall population trends from their data set, Hällfors expects these shifts will have a large impact on bird numbers, with some species outcompeting others.
  6. Lucyna Halupka, an ecologist at the University of Wrocław, calls the study “a very important paper” because it’s one of the few to measure the breeding period duration. For 2 decades, she says, many scientists studying birds and climate change have looked only at the earliest, median, or mean laying dates for specific groups of birds. However, she cautions that because the study is limited to Finland, the findings may not apply universally; future studies should examine how breeding seasons move in other regions where the effect of climate change is different. They should also try to determine how shifting breeding windows affect population sizes, she says.
  7. For Hällfors, the new findings illustrate the power of long-term data sets. “Imagine the bird-ringing ornithologists in the 1970s,” she says. “They probably couldn’t have imagined that their data would be used in 2020 to look at climate change.” It’s also a valuable addition to other ongoing climate change research, says conservation biologist Stuart Butchart of BirdLife International. “Many people still think of climate change as a problem that’s going to arise in the future,” he says. “This is another study showing that entire communities of species have already shown substantial responses to climate change over recent decades.”



  1. CLAIM: “average breeding window that is 1.7 days shorter in 2017 than it was in 1975. During that same period, Finland’s average temperature rose by 0.8C, suggesting many bird species are actively responding to changing temperatures”  RESPONSE: That the observed breeding window shortened over a 42-year period during which temperatures rose by 0.8C does not serve as evidence of causation. The timescale (whether annual or longer) for the response of breeding time to temperature must be specified from theoretical considerations. It must then be shown with detrended correlation analysis that breeding time is responsive to rising temperature at that timescale. As presented, these data imply a timescale of 42 years and a sample size of one. No statistically significant evidence of causation can be found in a sample of N=1.
  2. Consider, for example, that the following important changes occurred in Finland in the same period over which a temperature rise of 0.8C was observed. In 1991 Finland’s economy went through a boom and bust cycle. In 1995 Finland joined the EU. In 2009 a tragic mall shooting occurred. In 2011 Cyclone Dagmar struck Finland. In 2012 the Hyvinkää shooting occurred. In 2013 the Jyväskylä library stabbing occurred and Nordic storms struck Finland. Yet no one would suggest, simply from their co-occurrence, that these events were causally related either to the temperature or to the breeding habits of birds. By the same logic, that the breeding windows of birds shrank during a period of warming does not imply either that shorter breeding windows caused warming or that warming caused shorter breeding windows.
  3. CLAIM: The impact of global warming on the breeding habits of birds can be assessed by observations of breeding habits in Finland and the observations of temperatures in Finland.  RESPONSE: The impact of global warming on the breeding habits of birds cannot be established with data from a relatively small geographical region of the earth. Finland occupies about 0.227% of the world’s land area. As explained in a related post [LINK] , the impact of global warming is best understood in terms of global mean temperature or significant latitudinal sections thereof over sufficiently long time spans of more than 30 years. In this case, though the time span criterion is met, the extreme localization of the data to 0.227% of the world’s land area does not contain information that can be generalized in a global warming context.
  4. CONCLUSION: We conclude from the analysis above that the data presented as evidence for the causation of changes in the observed breeding habits of birds by global warming contain significant weaknesses, such that they do not serve as evidence that global warming has changed the breeding habits of birds. First, the observation that events A and B occurred over the same time span does not serve as evidence of causation. Second, the extreme geographical localization of the data for both bird breeding habits and temperature makes it impossible to interpret these events in the context of global warming.
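The detrended correlation test called for in the response above can be illustrated with a short sketch. The two series below are synthetic, built from a shared linear trend plus independent year-to-year noise; they are illustrative assumptions, not the Finnish breeding or temperature data:

```python
# Sketch of a detrended correlation test: two series that share a
# linear trend can correlate strongly in the raw data even when their
# year-to-year fluctuations are unrelated. Synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1975, 2018)
n = years.size

# Shared upward trend plus independent noise in each series
temperature = 0.02 * (years - 1975) + rng.normal(0, 0.2, n)
breeding_shift = 0.04 * (years - 1975) + rng.normal(0, 0.5, n)

def detrend(x):
    """Remove the OLS linear trend, leaving the residuals."""
    t = np.arange(x.size)
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

r_raw = np.corrcoef(temperature, breeding_shift)[0, 1]
r_detrended = np.corrcoef(detrend(temperature), detrend(breeding_shift))[0, 1]

print(f"raw correlation:       {r_raw:.2f}")        # carries the shared-trend signal
print(f"detrended correlation: {r_detrended:.2f}")  # year-to-year relationship only
```

Two series that share a trend will correlate in the raw data; if the detrended residuals do not also correlate, the raw correlation carries no evidence that one series responds to the other at the annual timescale.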











New insight into Earth's crust, mantle and outer core interactions

Earth's magnetic field is weakening – but it's not about to reverse

South Atlantic Anomaly





  1. The Mysterious Anomaly Weakening Earth’s Magnetic Field: Satellite data from ESA show that the mysterious anomaly weakening Earth’s magnetic field continues to evolve, with the most recent observations suggesting we could soon be dealing with more than one of these strange phenomena.
  2. The South Atlantic Anomaly is a vast expanse of reduced magnetic intensity in Earth’s magnetic field, extending all the way from South America to southwest Africa. Since our planet’s magnetic field acts as a kind of shield, protecting Earth from solar winds and cosmic radiation, any reduction in its strength is an important event we need to monitor closely.
  3. These changes could ultimately have significant implications for our planet. The ESA notes that the most significant effects right now are largely limited to technical malfunctions on board satellites and spacecraft, which can be exposed to a greater amount of charged particles in low-Earth orbit as they pass through the South Atlantic Anomaly in the skies above South America and the South Atlantic Ocean.
  4. In the last two centuries, Earth’s magnetic field has lost about 9 percent of its strength on average, accompanied by a drop in minimum field strength in the South Atlantic Anomaly from approximately 24,000 nanoteslas to 22,000 nanoteslas over the past 50 years. Exactly why this is happening remains a mystery. Earth’s magnetic field is generated by electrical currents produced by a swirling mass of liquid iron within the outer core of our planet, and while this phenomenon appears stable at any given moment, over vast timescales it is never really still.
  5. Research has shown that Earth’s magnetic field is constantly in a state of flux, and every few hundred thousand years, Earth’s magnetic field flips, with the north and south magnetic poles swapping places. That process could actually occur more frequently than people think, but while scientists continually debate when we might next witness such an event, even the regular, wandering movements of Earth’s magnetic poles keep geophysicists guessing.
  6. It is not fully clear how those reversals might be tied to what’s currently going on with the South Atlantic Anomaly – which some have suggested could be caused by a vast reservoir of dense rock underneath Africa called the African Large Low Shear Velocity Province. What is certain, though, is that the South Atlantic Anomaly is not sitting still. Since 1970, the anomaly has been growing in size, as well as moving westward at a pace of approximately 20 kilometres (12 miles) per year. But that’s not all.
  7. The latest satellite observations suggest the anomaly could be in the process of splitting into two separate cells, with the original centred above the middle of South America and a new, emerging cell appearing to the east, hovering off the coast of southwest Africa. The new, eastern minimum of the South Atlantic Anomaly has appeared over the last decade and in recent years has been developing vigorously. The challenge now is to understand the processes in Earth’s core driving these changes.
  8. Just how the anomaly will develop from here is unknown, but previous research has suggested disruptions in the magnetic field like this one might be recurrent events that take place every few hundred years. Whether that’s what we’re witnessing now isn’t fully clear – or how a split anomaly might end up playing out – but scientists are watching closely.




  1. CITATION: Elevated paleomagnetic dispersion at Saint Helena suggests long-lived anomalous behavior in the South Atlantic.  Yael A. Engbers, Andrew J. Biggin, and Richard K. Bono, PNAS, July 20, 2020  [LINK]
  2. ABSTRACT:  Earth’s magnetic field is presently characterized by a large and growing anomaly in the South Atlantic Ocean. The question of whether this region of Earth’s surface is preferentially subject to enhanced geomagnetic variability on geological timescales has major implications for core dynamics, core−mantle interaction, and the possibility of an imminent magnetic polarity reversal. Here we present paleomagnetic data from Saint Helena, a volcanic island ideally suited for testing the hypothesis that geomagnetic field behavior is anomalous in the South Atlantic on timescales of millions of years. Our results, supported by positive baked contact and reversal tests, produce a mean direction approximating that expected from a geocentric axial dipole for the interval 8 to 11 million years ago, but with very large associated directional dispersion. These findings indicate that, on geological timescales, geomagnetic secular variation is persistently enhanced in the vicinity of Saint Helena. This, in turn, supports the South Atlantic as a locus of unusual geomagnetic behavior arising from core−mantle interaction, while also appearing to reduce the likelihood that the present-day regional anomaly is a precursor to a global polarity reversal.
  3. ALSO NOTED BY THE AUTHORS:  Earth’s magnetic field is generated in the outer core by convecting liquid iron and protects the atmosphere from solar wind erosion. The most substantial anomaly in the magnetic field is in the South Atlantic (SA). An important conjecture is that this region could be a site of recurring anomalies because of unusual core−mantle conditions, but this has not previously been tested on geological timescales. With paleodirectional data from rocks from Saint Helena, an island in the SA, we show that the directional behavior of the magnetic field in the SA did indeed vary anomalously between ∼8 million and 11 million years ago. This supports the hypothesis of core−mantle interaction being manifest in the long-term geomagnetic field behavior of this region.





  1. Hartmann, Gelvam A., and Igor G. Pacca. “Time evolution of the South Atlantic magnetic anomaly.” Anais da Academia Brasileira de Ciências 81.2 (2009): 243-255.  ABSTRACT: The South Atlantic Magnetic Anomaly (SAMA) is one of the most outstanding anomalies of the geomagnetic field. The SAMA secular variation was obtained and compared to the evolution of other anomalies using spherical harmonic field models for the 1590-2005 period. An analysis of data from four South American observatories shows how this large scale anomaly affected their measurements. Since SAMA is a low total field anomaly, the field was separated into its nondipolar, quadrupolar and octupolar parts. The time evolution of the non-dipole/total, quadrupolar/total and octupolar/total field ratios yielded increasingly high values for the South Atlantic since 1750. The SAMA evolution is compared to the evolution of other large scale surface geomagnetic features like the North and the South Pole and the Siberia High, and this comparison shows the intensity equilibrium between these anomalies in both hemispheres. The analysis of non-dipole fields in historical period suggests that SAMA is governed by (i) quadrupolar field for drift, and (ii) quadrupolar and octupolar fields for intensity and area of influence. Furthermore, our study reinforces the possibility that SAMA may be related to reverse fluxes in the outer core under the South Atlantic region.
  2. Abdu, M. A., et al. “South Atlantic magnetic anomaly ionization: A review and a new focus on electrodynamic effects in the equatorial ionosphere.” Journal of Atmospheric and Solar-Terrestrial Physics 67.17-18 (2005): 1643-1657.  ABSTRACT: Satellite observations of enhanced energetic particle fluxes in the South Atlantic Magnetic Anomaly (SAMA) region have been supported by ground-based observations of enhanced ionization induced by particle precipitation in the ionosphere over this region. Past observations using a variety of instruments such as vertical sounding ionosondes, riometers and VLF receivers have provided evidence of the enhanced ionization due to energetic particle precipitation in the ionosphere over Brazil. The extra ionization at E-layer heights could produce enhanced ionospheric conductivity within and around the SAMA region. The energetic particle ionization source that is operative even under “quiet” conditions can undergo significant enhancements during magnetospheric storm disturbances, when the geographic region of enhanced ionospheric conductivity can extend to magnetic latitudes closer to the equator where the magnetic field line coupling of the E and F regions plays a key role in the electrodynamics of the equatorial ionosphere. Of particular interest are the sunset electrodynamic processes responsible for equatorial spread F/plasma bubble irregularity generation and related dynamics (zonal and vertical drifts, etc.). The SAMA represents a source of significant longitudinal variability in the global description of the equatorial spread F irregularity phenomenon. Recent results from digital ionosondes operated at Fortaleza and Cachoeira Paulista have provided evidence that enhanced ionization due to particle precipitation associated with magnetic disturbances, in the SAMA region, can indeed significantly influence the equatorial electrodynamic processes leading to plasma irregularity generation and dynamics. Disturbance magnetospheric electric fields that penetrate the equatorial latitudes during storm events seem to be intensified in the SAMA region based on ground-based and satellite-borne measurements. This paper will review our current understanding of the influence of SAMA on the equatorial electrodynamic processes from the perspective outlined above.
  3. Pinto Jr, O., et al. “The South Atlantic magnetic anomaly: three decades of research.” Journal of Atmospheric and Terrestrial Physics 54.9 (1992): 1129-1134.  ABSTRACT:  This brief review of advances in our understanding of some physical processes related to the South Atlantic Magnetic Anomaly (SAMA) is intended to highlight specific issues on which further research is needed. The discussion focuses on the origin of the SAMA, the geomagnetic storm-related effects and the impact of the SAMA on orbiting spacecraft.
  4. Freden, Stanley C., and George A. Paulikas. “Trapped protons at low altitudes in the South Atlantic magnetic anomaly.” Journal of Geophysical Research 69.7 (1964): 1259-1269.  ABSTRACT:  The fluxes of protons from 5 to 20 Mev and 60 to 120 Mev were measured in September and October 1962 at low altitudes over the South Atlantic magnetic anomaly. The ratio of the fluxes in these two energy intervals is essentially independent of B and is independent of L below L ≃ 1.4 but changes rapidly for L > 1.4. The flux in the 5‐ to 20‐Mev interval indicates that the spectrum turns back up below the local minimum (at L ≳ 1.5) near 20 Mev. This result is consistent with an increased absorption in the atmosphere for the albedo neutrons near 20 Mev. The integral flux above 31 Mev appears to have increased by a factor of about 3 at the lower L values and higher B values since the Explorer 4 measurements. This would be expected if the source had stayed essentially constant and the atmospheric density had decreased in these regions of space as we moved away from the period of solar maximum.
  5. Pinto Jr, O., and W. D. Gonzalez. “Energetic electron precipitation at the South Atlantic Magnetic Anomaly: a review.” Journal of Atmospheric and Terrestrial Physics 51.5 (1989): 351-365.  ABSTRACT: This paper reviews the status of knowledge concerning energetic electron precipitation at the South Atlantic Magnetic Anomaly (SAMA). The main purpose is to place recent results in the context of the long-standing problems about energetic electron precipitation at the SAMA region. A synopsis of results achieved in the last two decades, in relation to the various physical mechanisms responsible for precipitating energetic electrons, are also presented. The major uncertainties in the understanding of the energetic electron precipitation at the SAMA include: (1) temporal and spatial precipitation changes from magnetically quiet to disturbed periods; (2) the role of wave-induced precipitation processes.
  6. Zmuda, A. J. “Ionization enhancement from Van Allen electrons in the South Atlantic magnetic anomaly.” Journal of Geophysical Research 71.7 (1966): 1911-1917.  ABSTRACT: Satellite particle observations have shown that a longitudinal dependence exists for trapped electrons at B‐L points extending into the lower atmosphere over the South Atlantic anomaly. The longitudinal variation results from the precipitation and loss of trapped particles drifting through the anomaly and is a regular characteristic of the Van Allen radiation zones. The trapped electrons that collide with atmospheric constituents produce ionization and represent a significant source of local enhancements in the D and lower E regions of the ionosphere. Regions where electron‐ion augmentations may be expected are also discussed.




The Engbers et al paper of July 2020, presented above, explains the South Atlantic magnetic anomaly as a natural and well understood product of known, recurring anomalies in the outer core and mantle. In the bibliography presented above we find that the South Atlantic Anomaly, referred to as “SAMA” in the literature, has been a well understood magnetic phenomenon throughout the satellite era, going back to the 1960s. In this context, it does not appear that the SAMA event of May 2020 was unusual. The only mysterious part of the May 2020 event is that it was elevated to alarmism, not unlike the climate change alarmism that appears to be the new normal for scientific research. We conclude from these findings that in the climate change era, alarmism has been incorporated into the scientific method for all research questions having to do with a planet now claimed by climate science to be threatened with extinction by human activity in the form of burning fossil fuels. The implication is that the planet is now under the care of humans, who must take care of it all the way down to the mantle and the core, because it is no longer able to care for itself as it had done for billions of years before the humans came along.  RELATED POST [LINK]

Opinion | Science Alone Won't Save the Earth. People Have to Do ...

A letter from Earth to humans on Earth Day 2019



Climate change: Polar bears could be lost by 2100 - BBC News

Polar bears across the Arctic face shorter sea ice season ...

Polar bear video: Is it really the 'face of climate change'? - BBC ...





  1. CITATION:  Molnár, P.K., Bitz, C.M., Holland, M.M. et al. Fasting season length sets temporal limits for global polar bear persistence. Nat. Clim. Chang. (2020).
  2. ABSTRACT: Polar bears require sea ice for capturing seals and are expected to decline range-wide as global warming and sea-ice loss continue. Estimating when different subpopulations will likely begin to decline has not been possible to date because data linking ice availability to demographic performance are unavailable for most subpopulations and unobtainable a priori for the projected but yet-to-be-observed low ice extremes.
  3. Here, we establish the likely nature, timing and order of future demographic impacts by estimating the threshold numbers of days that polar bears can fast before cub recruitment and/or adult survival are impacted and decline rapidly.
  4. Intersecting these fasting impact thresholds with projected numbers of ice-free days, estimated from a large ensemble of an Earth system model, reveals when demographic impacts will likely occur in different subpopulations across the Arctic. Our model captures demographic trends observed during 1979–2016, showing that recruitment and survival impact thresholds may already have been exceeded in some subpopulations.
  5. It also suggests that, with high greenhouse gas emissions, steeply declining reproduction and survival will jeopardize the persistence of all but a few high-Arctic subpopulations by 2100. Moderate emissions mitigation prolongs persistence but is unlikely to prevent some subpopulation extirpations within this century.


  1. Polar bears will be wiped out by the end of the century unless more is done to tackle climate change. Scientists say some populations have already reached their survival limits as the Arctic sea ice shrinks. The carnivores rely on the sea ice of the Arctic Ocean to hunt for seals. As sea ice extent declines, the animals are forced to roam for long distances and struggle to find food and feed their cubs.
  2. Polar bears are already sitting at the top of the world; if the ice goes, they have no place to go. Polar bears are listed as vulnerable to extinction by the International Union for Conservation of Nature (IUCN), with climate change a key factor in their decline.  Female polar bears need to store sufficient fat to feed their cubs. Studies show that declining sea ice is likely to decrease polar bear numbers, perhaps substantially.
  3. The new study, published in Nature Climate Change, puts a timeline on when that might happen: by the year 2100, polar bears will be running out of food as sea ice declines. By modelling the energy use of polar bears, the researchers were able to calculate their endurance limits.
  4. Dr Steven Amstrup, chief scientist of Polar Bears International, who was also involved in the study, told BBC News: “What we’ve shown is that, first, we’ll lose the survival of cubs, so cubs will be born but the females won’t have enough body fat to produce milk to bring them along through the ice-free season. We can only go without food for so long, that’s a biological reality for all species.
  5. Polar bears rely on sea ice to catch their prey.  The researchers were also able to predict when these thresholds will be reached in different parts of the Arctic. This may have already happened in some areas where polar bears live. Showing how imminent the threat is for different polar bear populations is another reminder that we must act now to head off the worst of future problems faced by us all.
  6. The trajectory we’re on now is not a good one, but if society gets its act together, we have time to save polar bears. And if we do, we will benefit the rest of life on Earth, including ourselves. Under a high greenhouse gas emissions scenario, it’s likely that all but a few polar bear populations will collapse by 2100, the study found. Under moderate emissions reduction, several populations will disappear. The findings match previous projections that polar bears are likely to persist to 2100 only in a few populations very far north if climate change continues unabated.
  7. Sea ice is frozen seawater that floats on the ocean surface, forming and melting with the polar seasons. Some persists year after year in the Arctic, providing vital habitat for wildlife such as polar bears, seals, and walruses. Sea ice that stays in the Arctic for longer than a year has been declining at a rate of about 13% per decade since satellite records began in the late 1970s.


  1. As shown in related posts, the data do show that both Arctic sea ice extent and Arctic sea ice PIOMAS volume have steadily declined in the 40-year study period 1979-2019. Links to these related posts are as follows: 
  2. PIOMAS Arctic Sea Ice Volume: [LINK]  
  3. Arctic Sea Ice Extent: 2018 study: [LINK]
  4. Arctic Sea Ice Extent: 2019 study: [LINK]
  5. Arctic Sea Ice Extent: Bibliography: [LINK]
  6. The findings in these studies show clearly that there is a year-to-year decline in Arctic sea ice over the period 1979-2019. This finding is corroborated in the bibliography. However, what we also find in the bibliography is that there are significant differences in the rate of decline, both regionally and in time.
  7. At the same time, temperature data for the Arctic region shows a steady warming rate consistent with the global warming context of the study period.
  8. However, a correlation analysis of Arctic sea ice decline against the relevant temperature data for the region does not indicate that the observed decline is responsive to temperature at an annual time scale.
  9. The implication of the correlation analysis results is that although the evidence shows that temperatures are rising and that sea ice extent is falling, it does not follow that the declining minimum sea ice extent in September can be attributed to rising temperatures.
  10. CONCLUSION: We conclude from these studies that there is no empirical evidence that global warming drives the observed decline in September minimum sea ice extent in the Arctic. Therefore, though polar bears may be threatened by the observed decline in sea ice extent, there is no evidence that the decline is caused by global warming and no evidence that the decline can be attenuated and the polar bears saved by taking climate action in the form of reducing or eliminating the use of fossil fuels.
  11. The observed changes in bear populations are likely a natural phenomenon and not one created by humans; and therefore not one that can be attenuated by humans. In other words, as tragic as their situation is, the observed changes are the work of nature. We are unable to save these bears.
  12. In fact any attempt to save them could be construed as humans interfering with nature. A contradiction in environmentalism is that it forbids humans from interfering with nature and at the same time calls on humans to interfere with nature when nature is deemed unkind by humans.
  13. A relevant bibliography is presented below. Although the findings do show decline in polar bear population and health during periods of sea ice decline, the literature also shows significant uncertainty in these findings and contradictions in the data that are not reflected in the media reports of these findings. The media tend to imply an extreme form of certainty in the causal connection between sea ice decline and polar bear survival. This expression of certainty is not found in the literature.
  14. We also find in these studies that extremely short study periods of just a few years are the norm in comparing polar bear survival under low and high sea ice extent conditions. The generalization of findings from such short study periods may not be possible, as admitted by the authors themselves in terms of uncertainty and the use of the word “could” when making projections.





  1. Stirling, Ian, and Andrew E. Derocher. “Possible impacts of climatic warming on polar bears.” Arctic (1993): 240-245.
  2. Bromaghin, Jeffrey F., et al. “Polar bear population dynamics in the southern Beaufort Sea during a period of sea ice decline.” Ecological Applications 25.3 (2015): 634-651. In the southern Beaufort Sea of the United States and Canada, prior investigations have linked declines in summer sea ice to reduced physical condition, growth, and survival of polar bears (Ursus maritimus ). Combined with projections of population decline due to continued climate warming and the ensuing loss of sea ice habitat, those findings contributed to the 2008 decision to list the species as threatened under the U.S. Endangered Species Act. Here, we used mark–recapture models to investigate the population dynamics of polar bears in the southern Beaufort Sea from 2001 to 2010, years during which the spatial and temporal extent of summer sea ice generally declined. Low survival from 2004 through 2006 led to a 25–50% decline in abundance. We hypothesize that low survival during this period resulted from (1) unfavorable ice conditions that limited access to prey during multiple seasons; and possibly, (2) low prey abundance. For reasons that are not clear, survival of adults and cubs began to improve in 2007 and abundance was comparatively stable from 2008 to 2010, with ~900 bears in 2010 (90% CI 606–1212). However, survival of subadult bears declined throughout the entire period. Reduced spatial and temporal availability of sea ice is expected to increasingly force population dynamics of polar bears as the climate continues to warm. However, in the short term, our findings suggest that factors other than sea ice can influence survival. A refined understanding of the ecological mechanisms underlying polar bear population dynamics is necessary to improve projections of their future status and facilitate development of management strategies. (FACTORS OTHER THAN SEA ICE).
  3. Stirling, Ian, et al. “Polar bear population status in the northern Beaufort Sea, Canada, 1971–2006.” Ecological Applications 21.3 (2011): 859-876.  Polar bears (Ursus maritimus) of the northern Beaufort Sea (NB) population occur on the perimeter of the polar basin adjacent to the northwestern islands of the Canadian Arctic Archipelago. Sea ice converges on the islands through most of the year. We used open‐population capture–recapture models to estimate population size and vital rates of polar bears between 1971 and 2006 to: (1) assess relationships between survival, sex and age, and time period; (2) evaluate the long‐term importance of sea ice quality and availability in relation to climate warming; and (3) note future management and conservation concerns. The highest‐ranking models suggested that survival of polar bears varied by age class and with changes in the sea ice habitat. Model‐averaged estimates of survival (which include harvest mortality) for senescent adults ranged from 0.37 to 0.62, from 0.22 to 0.68 for cubs of the year (COY) and yearlings, and from 0.77 to 0.92 for 2–4 year‐olds and adults. Horvitz–Thompson (HT) estimates of population size were not significantly different among the decades of our study. The population size estimated for the 2000s was 980 ± 155 (mean and 95% CI). These estimates apply primarily to that segment of the NB population residing west and south of Banks Island. The NB polar bear population appears to have been stable or possibly increasing slightly during the period of our study. This suggests that ice conditions have remained suitable and similar for feeding in summer and fall during most years and that the traditional and legal Inuvialuit harvest has not exceeded sustainable levels.
However, the amount of ice remaining in the study area at the end of summer, and the proportion that continues to lie over the biologically productive continental shelf (<300 m water depth) has declined over the 35‐year period of this study. If the climate continues to warm as predicted, we predict that the polar bear population in the northern Beaufort Sea will eventually decline. Management and conservation practices for polar bears in relation to both aboriginal harvesting and offshore industrial activity will need to adapt.
  4. Rode, Karyn D., et al. “A tale of two polar bear populations: ice habitat, harvest, and body condition.” Population Ecology 54.1 (2012): 3-18.  One of the primary mechanisms by which sea ice loss is expected to affect polar bears is via reduced body condition and growth resulting from reduced access to prey. To date, negative effects of sea ice loss have been documented for two of 19 recognized populations. Effects of sea ice loss on other polar bear populations that differ in harvest rate, population density, and/or feeding ecology have been assumed, but empirical support, especially quantitative data on population size, demography, and/or body condition spanning two or more decades, has been lacking. We examined trends in body condition metrics of captured bears and relationships with summertime ice concentration between 1977 and 2010 for the Baffin Bay (BB) and Davis Strait (DS) polar bear populations. Polar bears in these regions occupy areas with annual sea ice that has decreased markedly starting in the 1990s. Despite differences in harvest rate, population density, sea ice concentration, and prey base, polar bears in both populations exhibited positive relationships between body condition and summertime sea ice cover during the recent period of sea ice decline. Furthermore, females and cubs exhibited relationships with sea ice that were not apparent during the earlier period (1977–1990s) when sea ice loss did not occur. We suggest that declining body condition in BB may be a result of recent declines in sea ice habitat. In DS, high population density and/or sea ice loss may be responsible for the declines in body condition.
  5. Regehr, Eric V., et al. “Survival and breeding of polar bears in the southern Beaufort Sea in relation to sea ice.” Journal of Animal Ecology 79.1 (2010): 117-127. 1. Observed and predicted declines in Arctic sea ice have raised concerns about marine mammals. In May 2008, the US Fish and Wildlife Service listed polar bears (Ursus maritimus) – one of the most ice‐dependent marine mammals – as threatened under the US Endangered Species Act. 2. We evaluated the effects of sea ice conditions on vital rates (survival and breeding probabilities) for polar bears in the southern Beaufort Sea. Although sea ice declines in this and other regions of the polar basin have been among the greatest in the Arctic, to date population‐level effects of sea ice loss on polar bears have only been identified in western Hudson Bay, near the southern limit of the species’ range. 3. We estimated vital rates using multistate capture–recapture models that classified individuals by sex, age and reproductive category. We used multimodel inference to evaluate a range of statistical models, all of which were structurally based on the polar bear life cycle. We estimated parameters by model averaging, and developed a parametric bootstrap procedure to quantify parameter uncertainty. 4. In the most supported models, polar bear survival declined with an increasing number of days per year that waters over the continental shelf were ice free. In 2001–2003, the ice‐free period was relatively short (mean 101 days) and adult female survival was high (0.96–0.99, depending on reproductive state). In 2004 and 2005, the ice‐free period was longer (mean 135 days) and adult female survival was low (0.73–0.79, depending on reproductive state). Breeding rates and cub litter survival also declined with increasing duration of the ice‐free period. Confidence intervals on vital rate estimates were wide. 5.
The effects of sea ice loss on polar bears in the southern Beaufort Sea may apply to polar bear populations in other portions of the polar basin that have similar sea ice dynamics and have experienced similar, or more severe, sea ice declines. Our findings therefore are relevant to the extinction risk facing approximately one‐third of the world’s polar bears.
  6. Schliebe, S., et al. “Effects of sea ice extent and food availability on spatial and temporal distribution of polar bears during the fall open-water period in the Southern Beaufort Sea.” Polar Biology 31.8 (2008): 999-1010. We investigated the relationship between sea ice conditions, food availability, and the fall distribution of polar bears (Ursus maritimus) in terrestrial habitats of the Southern Beaufort Sea via weekly aerial surveys in 2000–2005. Aerial surveys were conducted weekly during September and October along the Southern Beaufort Sea coastline and barrier islands between Barrow and the Canadian border to determine polar bear density on land. The number of bears on land, both within and among years, increased when sea ice had retreated furthest from the shore. However, spatial distribution also appeared to be related to the availability of subsistence-harvested bowhead whale (Balaena mysticetus) carcasses and the density of ringed seals (Phoca hispida) in offshore waters. Our results suggest that long-term reductions in sea ice could result in an increasing proportion of the Southern Beaufort Sea polar bear population coming on land during the fall open-water period and an increase in the amount of time individual bears spend on land.











CITATION: Laidre, Kristin L., et al. “Transient benefits of climate change for a high‐Arctic polar bear subpopulation.” Global Change Biology (2020), published 23 September 2020.

ABSTRACT: KANE BASIN is one of the world’s most northerly polar bear subpopulations, where bears have historically inhabited a mix of thick multiyear and annual sea ice year‐round. Currently, KANE BASIN is transitioning to a seasonally ice‐free region because of climate change. This ecological shift has been hypothesized to benefit polar bears in the near‐term due to thinner ice with increased biological production, although this has not been demonstrated empirically. We assess sea‐ice changes in KANE BASIN together with changes in polar bear 1. movements, 2. seasonal ranges, 3. body condition, and 4. reproductive metrics obtained from capture–recapture and satellite telemetry studies during two study periods (1993–1997 and 2012–2016). The annual cycle of sea‐ice habitat in KANE BASIN shifted from a year‐round ice platform (~50% coverage in summer) in the 1990s to nearly complete melt‐out in summer (<5% coverage) in the 2010s. The mean duration between sea‐ice retreat and advance increased from 109 to 160 days (p = .004). Between the 1990s and 2010s, adult female (AF) seasonal ranges more than doubled in spring and summer and were significantly larger in all months. Body condition scores improved for all ages and both sexes. Mean litter sizes of cubs‐of‐the‐year (C0s) and yearlings (C1s), and the number of C1s per AF, did not change between decades. The date of spring sea‐ice retreat in the previous year was positively correlated with C1 litter size, suggesting smaller litters following years with earlier sea‐ice breakup. Our study provides evidence for range expansion, improved body condition, and stable reproductive performance in the polar bear subpopulation. These changes, together with a likely increasing subpopulation abundance, may reflect the shift from thick, multiyear ice to thinner, seasonal ice with higher biological productivity. 
The duration of these benefits is unknown because, under unmitigated climate change, continued sea‐ice loss is expected to eventually have negative demographic and ecological effects on all polar bears.

CRITICAL COMMENTARY: The Kane Basin represents 0.03% of maximum Arctic sea ice area in April and 0.3% of minimum Arctic sea ice area in September. Sea ice and polar bear dynamics in a small sub-region over brief decadal time scales do not contain useful information about the impact of climate change on sea ice extent and polar bears. As noted above, no evidence is found in the data at longer time scales and larger geographical extents that sea ice extent is responsive to AGW or that it can be attenuated by reducing fossil fuel emissions.

In a related post we highlight geological features of the Arctic that play a significant role in Arctic sea ice dynamics and underscore the need for larger geographical extents and longer time scales for the assessment of the impact of climate change on September minimum Arctic sea ice extent and of its effects on the biota:  [LINK] 









  1. LONGEVITY: Generally 20 to 30 years, but as low as 15 and as high as 32. A polar bear’s age can be determined by looking at a thin slice of tooth and counting the layers.
  2. PREDATION: Adult polar bears have no predators except other polar bears but cubs less than one year old sometimes are prey to wolves and other carnivores and newborns may be eaten by the polar bears themselves especially if the mother is starved.
  3. INTRA-SPECIES PREDATION: This does not happen often, but males fight over females and will sometimes kill rivals to win the female they want. In extreme hunger conditions, male polar bears may attack, kill, and eat female polar bears. This is not a normal behavior pattern, but it does happen.
  4. HUMAN PREDATION: Humans have hunted, killed, and eaten polar bears for thousands of years. Arctic people have traditionally hunted polar bears for food, clothing, bedding, and religious purposes. Commercial hunting for polar bear hides began more than 500 years ago. There was a sharp rise in the kill rate in the 1950s when modern equipment such as snowmobiles, speedboats, and aircraft were employed in the polar bear hide trade. The hunt eventually came to be viewed as a threat to the survival of the species, and an International Agreement was signed in 1973 to ban the use of aircraft and speedboats in polar bear hunts, although hunting continued at levels that kept it the leading cause of polar bear mortality.
  5. THE CURRENT STATE OF HUMAN PREDATION: Today, polar bears are hunted by native Arctic populations for food, clothing, handicrafts, and sale of skins. Polar bears are also killed in defense of people or property. Hunting is strictly regulated in Canada and Greenland, while Norway and Russia ban polar bear hunting altogether.
  6. CLIMATE CHANGE IMPACT: Increasing temperatures are associated with a decrease in sea ice both in terms of how much sea ice there is and how many months a year they are there. Polar bears use sea ice as a platform to prey mainly on ringed and bearded seals. Therefore, a decline in sea ice extent reduces the polar bear’s ability to hunt for seals and can cause bears to starve or at least to be malnourished.
  7. YOUNG POLAR BEARS: Subadults are inexperienced hunters and often are chased from kills by larger adults. OLDER, WEAKER bears are also susceptible to starvation for the same reason: they can’t compete with younger and stronger bears. So in hunt-constrained situations, as when sea ice availability is limited, kids and seniors starve first.
  8. Climate change scientists have found (see bibliography above) that polar bear populations show increasing evidence of food deprivation, including more underweight or starving bears, smaller bears, fewer cubs, and cubs that don’t survive into adulthood, partly because in food-constrained situations cubs are more likely to be eaten by adult polar bears. This takes place in areas that are experiencing shorter hunting seasons with limited access to sea ice. These conditions limit the bears’ ability to hunt for seals.
  9. The implication for climate impact studies is that a comparison of polar bear counts across time at brief time scales, in and of itself, may not support a climate-change/sea-ice interpretation because of the number of variables involved in these dynamics.









As Donald Trump denigrates the advice of his own administration’s scientists, more than 1,200 members of the US National Academy of Sciences have now signed an open letter urging the president to “restore science-based policy in government” as a response to Trump’s refusal to act on their warnings over the climate crisis.



  1. That 1,200 AAAS members signed the petition may seem impressive, but if the media had had a bit of science education they would have wanted to know how many AAAS members didn’t sign. For the record, 118,800 members did not sign.
  2. Yet another way to write that headline would have been “As Donald Trump denigrates the advice of his own administration’s scientists, about 1% of the members of the AAAS have now signed an open online letter urging the president to ‘restore science-based policy in government’ (99% didn’t).”
  3. Some open questions to the media and to the 1% of the AAAS members who signed the online petition: (1) What does {science-based policy in government} mean?  (2) What is the role of a body such as the AAAS in government such that it can impose policy on an elected government in a democracy that requires the government to take climate action? (3)  What advisory role if any did the AAAS play in providing the Trump administration with the climate science data and the answers to the critical issues about climate action raised by the Trump administration?
  4. If no history of an advisory role exists for the AAAS, the petition serves only to underscore the extreme lengths to which climate action activism has been stretched. That such protracted, continually escalating fear-based activism against fossil fuels is needed does not improve confidence in the science of climate science. It does just the opposite.
  5. Such theatrical climate activism exposes a weakness in the science of climate science: the greater the need for activism, the greater the evidence of this weakness. That weakness has recently been underscored by the finding that “Internal variability in the climate system confounds assessment of human-induced climate change and imposes irreducible limits on the accuracy of climate change projections, especially at regional and decadal scales”, as described in a related post [LINK].






The Antarctica that we know is being transformed by human-caused global warming and climate change. The rate of ice loss from Antarctica has tripled since 2012 compared to ice losses from the previous two decades. Chunks of ice large enough that maps must be redrawn are calving off the Antarctic ice shelves. Just this past summer, over the period from November to February, scientists documented unprecedented heatwaves and melting at Casey Station, East Antarctica. These sorts of impacts are expected to continue as warming continues and will be felt far beyond the continent. The breakup of the Antarctic ice sheet fuels sea level rise, affecting communities around the world, while recent research revealed the connection between Antarctic ice, ocean circulation patterns, and the risk of more frequent extreme events.


1. Antarctica’s air and ocean are heating up: The Antarctic Peninsula is witnessing some of the most rapid warming on Earth. In the last 50 years, the peninsula warmed at a rate of 0.06°C per year, significantly higher than the global average. In February, Argentina’s Esperanza research station recorded a high temperature of 18.3°C, the highest temperature ever recorded in continental Antarctica. Higher temperatures are resulting in ice loss and warmer waters surrounding the continent. The oceans around Antarctica are getting hotter as well, with some areas of the Southern Ocean warming by 3°C. This is particularly problematic because ice loss has been greatest where there is an influx of warm waters. Researchers now warn that there could be more ice shelves exposed to warm waters than previously thought, especially in East Antarctica, which could contribute to multi-meter sea level rise if climate change continues unabated.

2. Ice is retreating and melting rapidly. The rate of ice loss from Antarctica has tripled since 2012 compared to ice losses from the previous two decades. Chunks of ice large enough that maps need to be redrawn are calving off the Antarctic ice shelves. The world’s largest iceberg “A68” – weighing 1.1 trillion tons and the size of Delaware – broke off from the Larsen C ice shelf in July 2017. Ice loss from the Antarctic Ice Sheet has rapidly accelerated over the last four decades, and these types of calving events are becoming increasingly frequent. Just last month, an iceberg twice the size of Washington, D.C. broke off the rapidly retreating Pine Island Glacier. Fracturing of the ice shelf is a sign of its weakness; thinning and breakup of the ice can further destabilize the ice shelf. At the nearby Thwaites Glacier, researchers recently found a large underwater cavity, two-thirds the size of Manhattan. The cavity used to contain 14 billion tons of ice, but much of it has melted in just three years. A cavity like this allows warm water to reach further under the glacier, melting it faster. These discoveries come against a backdrop of longer-term trends. Using the satellite record, scientists estimated that ice mass loss from the Antarctic Ice Sheet has accelerated over the last four decades, increasing six-fold from roughly 40 billion tons per year in 1979–1990 to about 252 billion tons a year in 2009–2017. Those figures mask changes in particularly vulnerable areas. For example, scientists recently found that a part of the Ross Ice Shelf that is particularly important for its overall stability is melting 10 times faster than the shelf average.
Scientists also found that surface meltwater is now widespread across the Antarctic Ice Sheet, leading to rapid and large accelerations of outlet glaciers (glaciers that flow out of an ice sheet and can drain it quickly). Such shifts have not been included in models to date. In addition to meltwater, lakes are sitting on the surface of East Antarctica, reducing the surface reflectivity of the ice and increasing absorption of solar radiation, which speeds up melting. Scientists also recently reported large lakes under eastern Antarctica. This is a troubling finding, as glaciers can move more quickly when they sit on water as opposed to bedrock.

3. Penguin populations are shrinking. Some penguin species are adapting to warmer temperatures, while other populations are declining. On certain islands off the coast of the Antarctic Peninsula, thousands of penguins, black and white, stretched up the hillside. On others, however, isolated groups of penguins sat next to empty sites that were once full of life. Naturalists told us how several colonies they frequented had dropped dramatically in recent years. As krill populations decline, some penguins are relying on new food sources, including fish and squid. Climate change is creating penguin winners and losers. Their fate hinges on how dependent they are on krill, which has declined by 70-80% in some regions of the Weddell Sea and waters off the Antarctic Peninsula as a result of commercial fishing, sea ice loss, and the recovery of whales. Gentoo penguins, on the one hand, have diversified their diet to include fish and squid. They have seen greater success than the Chinstrap and Adélie penguins, which rely almost exclusively on krill. For example, in the South Shetland Islands off the western Antarctic Peninsula, Adélie breeding pairs dropped from 105,000 to 30,000 between 1982 and 2017. Over the same period, breeding pairs of Gentoos climbed from 25,000 to 173,000. A preliminary census released in February 2020 found that some Chinstrap colonies have seen as much as a 77% decline since the 1970s.

4. SNOW IS TURNING RED. When I pictured Antarctica, I envisioned cool shades of blue and white. But there I was with my rubber boots digging into snow that looked like it had been spray-painted red. Rising temperatures are allowing algae to grow in large masses, giving snow a red pigment. This phenomenon is not new; reports of red snow, dubbed “watermelon snow,” date back millennia. It is a result of algae living on the snow, which produce a reddish pigment to protect themselves from solar radiation during warmer seasons. The problem is that in a changing climate this can lead to a feedback loop that exacerbates warming. As temperatures warm, the algae are able to grow in greater masses, turning the snow a darker shade. This reduces surface reflectivity and, in turn, causes more solar radiation absorption, furthering melt. A recent study of 40 red snow sites across the Arctic confirmed higher melt rates when the algae were present.

5. LAND IS TURNING GREEN: In addition to seeing pink snowfields, I never imagined Antarctica to be quite as green as it was, with its mosses, lichens, and flowering plants. I never thought I’d see a penguin waddle amongst green grass, almost appearing photoshopped against a background thousands of miles away. Once-snowy hills are now covered in more vegetation as increased temperatures release nitrogen from the underlying soil. Antarctica has two vascular plants – Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis). A recent study documented the proliferation of these two native plants, especially the hair grass, thanks to newly available nitrogen, which is typically locked away in soil that is now decomposing more quickly as temperatures increase.

WHAT HAPPENS IN ANTARCTICA DOESN’T STAY IN ANTARCTICA: No area on our planet is spared from warming, nor does warming in one area stay confined to that region. Though Antarctica seems so far away, human activities occurring on continents separated by vast seas are affecting its lands, waters, and the life that exists there. By the same token, climate impacts to Antarctica won’t be a localized phenomenon; they will affect the entire planet, with rising seas flooding low-lying communities around the world, altered ocean circulation patterns, and potentially more frequent extreme weather events. Additionally, Antarctica may be a predictor of patterns of change that the rest of the world may see in the future. The impacts of climate change are already painted across the landscape of Antarctica. Effective and immediate action against climate change is crucial to preventing future impacts that will transform not only Antarctica but the world at large. THE END.
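The ice-loss figures in section 2 above can be translated into sea-level terms. A minimal sketch, using the quoted loss rates together with the commonly used conversion that roughly 362 gigatonnes of ice equal 1 mm of global mean sea level (the conversion factor is an assumption, not a number from this post):

```python
# Sea-level-rise equivalent of the Antarctic ice-loss rates quoted
# in section 2. The conversion factor (~362 Gt of ice = 1 mm of
# global mean sea level) is an assumption, not from the text.
GT_PER_MM = 362.0

for period, gt_per_year in [("1979-1990", 40), ("2009-2017", 252)]:
    mm_per_year = gt_per_year / GT_PER_MM
    print(f"{period}: {gt_per_year} Gt/yr ≈ {mm_per_year:.2f} mm/yr of sea level rise")
```

On these assumptions, the six-fold increase in mass loss corresponds to going from roughly a tenth of a millimeter to roughly two-thirds of a millimeter of sea level per year.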







Clem, K.R., Fogt, R.L., Turner, J. et al. Record warming at the South Pole during the past three decades. Nat. Clim. Chang. (2020).


Over the last three decades, the South Pole has experienced a record-high statistically significant warming of 0.61 ± 0.34 °C per decade, more than three times the global average. Here, we use an ensemble of climate model experiments to show this recent warming lies within the upper bounds of the simulated range of natural variability. The warming resulted from a strong cyclonic anomaly in the Weddell Sea caused by increasing sea surface temperatures in the western tropical Pacific. This circulation, coupled with a positive polarity of the Southern Annular Mode, advected warm and moist air from the South Atlantic into the Antarctic interior. These results underscore the intimate linkage of interior Antarctic climate to tropical variability. Further, this study shows that atmospheric internal variability can induce extreme regional climate change over the Antarctic interior, which has masked any anthropogenic warming signal there during the twenty-first century.
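As a quick arithmetic check on the abstract above: a trend of 0.61 ± 0.34 °C per decade sustained over three decades implies the following total change. Scaling both the trend and its stated uncertainty linearly with time is a simplification used here purely for illustration:

```python
# Total South Pole temperature change implied by the quoted trend.
trend = 0.61       # °C per decade (from the abstract)
sigma = 0.34       # stated uncertainty, °C per decade
decades = 3        # "the last three decades"

total = trend * decades        # central estimate of total warming, °C
total_sigma = sigma * decades  # uncertainty, scaled linearly for illustration

print(f"total warming: {total:.2f} ± {total_sigma:.2f} °C over 30 years")
```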


  1. Methane Emissions Have Jumped a Staggering Nine Percent in a Decade
  2. Emissions of methane – a planet-warming gas several times more potent than carbon dioxide – have risen by nine percent in a decade driven by humanity’s insatiable hunger for energy and food.
  3. Methane has a warming potential 28 times greater than CO2 over a 100-year period and its concentration in the atmosphere has more than doubled since the Industrial Revolution.
  4. Over a 20-year period, it is more than 80 times as potent. While there are a number of natural methane sources such as wetlands and lakes, the study concluded that 60 percent of CH4 emissions are now manmade.
  5. These sources fall principally into three categories: extracting and burning fossil fuels for power, agriculture including livestock, and waste management.
  6. The 2015 Paris climate agreement saw nations commit to limit temperature rises to “well below” two degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels. The levels of atmospheric methane are increasing by around 12 parts per billion each year.
  7. This trajectory is in line with a scenario modelled by the IPCC that sees Earth warming as much as 3 to 4 degrees Celsius by 2100.
  8. Regular updates of the global methane budget are necessary because reducing methane emissions would have a rapid positive effect on climate.
  9. To meet the objectives of the Paris Agreement, not only do CO2 emissions need to be reduced but also methane emissions.
  10. The Global Carbon Project, a consortium of more than 50 research institutions around the world, has gathered data from more than 100 observation stations.
  11. The world now produces around 50 million more tonnes of methane every year than it did between 2000 and 2006.
  12. Around 60 percent of human-made CH4 emissions were estimated to come from agriculture and waste, including as much as 30 percent from the digestive processes of cattle and sheep. Twenty-two percent comes from the extraction and burning of oil and gas, while 11 percent leaks from the world’s coal mines, the study found.
  13. But recent studies based on new techniques for spotting methane leaks using satellite data suggest that emissions from the oil and gas sector may be significantly higher than those shown in the study, which only included data through 2017.
  14. Short-term threat: While the overall trend is upwards, emissions levels fluctuate between regions. For instance, Africa, China, and Asia each produce 10-15 million tonnes annually. The US churns out around 4-5 million tonnes. Europe is the only region where methane emissions are falling, down by 2-4 million tonnes since 2006, depending on the estimation method.
  15. The United Nations says that to hit the more ambitious Paris target of a 1.5 degrees Celsius warming cap, all greenhouse gas emissions must fall by 7.6 percent annually this decade.
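The figures in points 3, 4, and 11 above can be combined into a rough CO2-equivalent estimate of the extra methane now emitted each year. A minimal sketch using only the numbers quoted in this list:

```python
# CO2-equivalent of the additional annual methane emissions, using
# only figures quoted above; the GWP values are the ones stated in
# the text (28x over 100 years, ~80x over 20 years).
extra_ch4_mt_per_year = 50   # extra Mt CH4/yr vs 2000-2006 (point 11)
gwp_100 = 28                 # 100-year global warming potential (point 3)
gwp_20 = 80                  # 20-year global warming potential (point 4)

co2e_100 = extra_ch4_mt_per_year * gwp_100  # Mt CO2e/yr, 100-yr horizon
co2e_20 = extra_ch4_mt_per_year * gwp_20    # Mt CO2e/yr, 20-yr horizon

print(f"100-yr horizon: {co2e_100} Mt CO2e/yr")  # 1400
print(f"20-yr horizon:  {co2e_20} Mt CO2e/yr")   # 4000
```

That is, on a 20-year horizon the extra methane is roughly equivalent to adding 4 billion tonnes of CO2 per year.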




  1. Anthropogenic global warming and climate change (AGW) is a theory about the impact of the industrial economy on climate, specifically in terms of the combustion of fossil fuels and the artificial carbon dioxide emissions thus caused. This is why the amount of warming attributed to AGW is assessed as “since pre-industrial”. AGW is not a theory about warming caused by wetlands, cattle, and other natural sources of carbon. Specifically, AGW climate change relates to the anthropogenic emissions of carbon dioxide from the combustion of fossil fuels in the industrial economy and not to carbon cycle flows.
  2. This issue is explained by NASA climate scientist Dr. Peter Griffith in this video {Link to the source on YouTube: [LINK]}.
  3. Details of this issue are provided in related posts [LINK][LINK][LINK].
  4. It should also be noted that methane is unstable in the atmosphere, where it spontaneously oxidizes to carbon dioxide with a half-life of about 5 years. That methane is a much stronger greenhouse gas than carbon dioxide is therefore not relevant at the longer time scales of 60 years or more at which the warming due to the greenhouse effect of AGW can be found in the data. At those longer time scales, the net effect of methane emissions in the atmosphere is that of its oxidized form, CO2.
  5. More to the point, the decadal time scales used in methane studies are known to be too short to assess their effect on climate. Climate science includes an important time-scale limitation that restricts the interpretation of anthropogenic global warming to time scales greater than 30 years and to a global geographical span. At shorter time scales, and particularly at the decadal time scale used in methane studies, the impact of AGW on climate is confounded by nature in what is termed “internal climate variability”. This issue is presented in some detail in a related post [LINK].
  6. It should also be mentioned that the sources of methane used in the Saunois studies to explain changes in atmospheric methane concentration are simply assumptions of convenience with no evidence presented for that assumed causation. More importantly, these sources of methane emissions are not creations of the Industrial Economy.  In any case, a related post shows that correlation analysis does not support the causation assumptions typically seen in methane studies in the context of anthropogenic global warming [LINK]
  7. In conclusion, we emphasize that it is important in climate science, a scientific community that prides itself on consensus, for researchers to pay careful attention to the details of the underlying theory that forms the foundation of their work. In the Saunois papers, this attention to detail is missing, and the research therefore tests a theory of convenience that does not correspond with the consensus theory of anthropogenic global warming (AGW). In that theory, it is the perturbation of nature’s carbon cycle by the fossil fuel emissions of the industrial economy, and not the carbon cycle itself, that is identified as the driver of anthropogenic global warming. The sources of methane emissions identified in the Saunois papers are not creations of the industrial economy. They are part of the natural “internal variability of climate” [LINK] as it was in pre-industrial times.
  8. The important distinction between “pre-industrial times” and “the industrial economy” in climate science is missing in the Saunois methane papers.
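The correlation argument in item 6 above can be illustrated with a minimal sketch using synthetic data (this is an illustration of the method, not the blog’s actual computation): two independent series that share a common trend correlate strongly in the raw data, but that correlation largely disappears once each series is detrended, which is why detrended correlation is the relevant test of causation at a given time scale.

```python
# Illustrative sketch with synthetic data: spurious correlation from a
# shared trend vanishes after detrending each series.
import numpy as np

rng = np.random.default_rng(42)
n = 100
t = np.arange(n)

# Two independent noise processes riding on the same linear trend
x = 0.5 * t + rng.normal(0, 5, n)
y = 0.5 * t + rng.normal(0, 5, n)

def detrend(series, time):
    """Remove the least-squares linear trend from a series."""
    slope, intercept = np.polyfit(time, series, 1)
    return series - (slope * time + intercept)

r_raw = np.corrcoef(x, y)[0, 1]
r_detrended = np.corrcoef(detrend(x, t), detrend(y, t))[0, 1]

print(f"raw correlation:       {r_raw:.2f}")
print(f"detrended correlation: {r_detrended:.2f}")
```

The raw correlation is high only because both series inherit the same trend; the detrended correlation, which measures co-movement at the sub-trend time scale, is near zero.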




  1. Saunois, Marielle, et al. “The global methane budget 2000–2017.” Earth System Science Data 12.3 (2020): 1561-1623.  Understanding and quantifying the global methane (CH4) budget is important for assessing realistic pathways to mitigate climate change. Atmospheric emissions and concentrations of CH4 continue to increase, making CH4 the second most important human-influenced greenhouse gas in terms of climate forcing, after carbon dioxide (CO2). The relative importance of CH4 compared to CO2 depends on its shorter atmospheric lifetime, stronger warming potential, and variations in atmospheric growth rate over the past decade, the causes of which are still debated. Two major challenges in reducing uncertainties in the atmospheric growth rate arise from the variety of geographically overlapping CH4 sources and from the destruction of CH4 by short-lived hydroxyl radicals (OH). To address these challenges, we have established a consortium of multidisciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate new research aimed at improving and regularly updating the global methane budget. Following Saunois et al. (2016), we present here the second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). For the 2008–2017 decade, global methane emissions are estimated by atmospheric inversions (a top-down approach) to be 576 Tg CH4 yr−1 (range 550–594, corresponding to the minimum and maximum estimates of the model ensemble). Of this total, 359 Tg CH4 yr−1 or ∼ 60 % is attributed to anthropogenic sources, that is emissions caused by direct human activity (i.e. anthropogenic emissions; range 336–376 Tg CH4 yr−1 or 50 %–65 %). The mean annual total emission for the new decade (2008–2017) is 29 Tg CH4 yr−1 larger than our estimate for the previous decade (2000–2009), and 24 Tg CH4 yr−1 larger than the one reported in the previous budget for 2003–2012 (Saunois et al., 2016). Since 2012, global CH4 emissions have been tracking the warmest scenarios assessed by the Intergovernmental Panel on Climate Change. Bottom-up methods suggest almost 30 % larger global emissions (737 Tg CH4 yr−1, range 594–881) than top-down inversion methods. Indeed, bottom-up estimates for natural sources such as natural wetlands, other inland water systems, and geological sources are higher than top-down estimates. The atmospheric constraints on the top-down budget suggest that at least some of these bottom-up emissions are overestimated. The latitudinal distribution of atmospheric observation-based emissions indicates a predominance of tropical emissions (∼ 65 % of the global budget, < 30° N) compared to mid-latitudes (∼ 30 %, 30–60° N) and high northern latitudes (∼ 4 %, 60–90° N). The most important source of uncertainty in the methane budget is attributable to natural emissions, especially those from wetlands and other inland waters. Some of our global source estimates are smaller than those in previously published budgets (Saunois et al., 2016; Kirschke et al., 2013). In particular, wetland emissions are about 35 Tg CH4 yr−1 lower due to improved partitioning of wetlands and other inland waters. Emissions from geological sources and wild animals are also found to be smaller by 7 and 8 Tg CH4 yr−1, respectively. However, the overall discrepancy between bottom-up and top-down estimates has been reduced by only 5 % compared to Saunois et al. (2016), due to a higher estimate of emissions from inland waters, highlighting the need for more detailed research on emission factors.
Priorities for improving the methane budget include (i) a global, high-resolution map of water-saturated soils and inundated areas emitting methane based on a robust classification of different types of emitting habitats; (ii) further development of process-based models for inland-water emissions; (iii) intensification of methane observations at local scales (e.g., FLUXNET-CH4 measurements) and urban-scale monitoring to constrain bottom-up land surface models, and at regional scales (surface networks and satellites) to constrain atmospheric inversions; (iv) improvements of transport models and the representation of photochemical sinks in top-down inversions; and (v) development of a 3D variational inversion system using isotopic and/or co-emitted species such as ethane to improve source partitioning.
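The headline figures quoted in the abstract above can be checked with simple arithmetic. A minimal sketch, using only values stated in the abstract (Tg CH4 per year, 2008–2017 decade):

```python
# Arithmetic check of the headline figures in the Saunois et al. (2020)
# abstract (all values in Tg CH4 per year, 2008-2017 decadal means).
top_down_total  = 576   # global emissions from atmospheric inversions
anthropogenic   = 359   # portion attributed to direct human activity
bottom_up_total = 737   # sum of bottom-up (inventory/model) estimates
decade_increase = 29    # increase over the 2000-2009 decadal mean

anthro_share = anthropogenic / top_down_total
bu_excess = (bottom_up_total - top_down_total) / top_down_total
rel_growth = decade_increase / (top_down_total - decade_increase)

print(f"anthropogenic share of top-down total: {anthro_share:.0%}")  # 62%
print(f"bottom-up excess over top-down:        {bu_excess:.0%}")     # 28%
print(f"decade-over-decade growth:             {rel_growth:.1%}")    # 5.3%
```

These reproduce the abstract’s ∼ 60 % anthropogenic share and its “almost 30 %” bottom-up excess over the top-down estimate, and show that the 29 Tg yr−1 decade-to-decade increase corresponds to roughly 5 % growth in mean annual emissions.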
  2. Saunois, Marielle, et al. “The global methane budget 2000–2012.” Earth System Science Data 8.2 (2016): 697-751.  The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to a shorter atmospheric lifetime and a stronger warming potential than carbon dioxide, is challenged by the still unexplained changes of atmospheric CH4 over the past decade. Emissions and concentrations of CH4 are continuing to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle and to produce regular (∼ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). For the 2003–2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr−1 (range 540–568). About 60 % of global emissions are anthropogenic (range 50–65 %).
Since 2010, the bottom-up global emission inventories have been closer to methane emissions in the most carbon-intensive Representative Concentration Pathway (RCP8.5) and higher than all other RCP scenarios. Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr−1, range 596–884), mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to global emission totals that are too large. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (∼ 64 % of the global budget, < 30° N) as compared to mid-latitudes (∼ 32 %, 30–60° N) and high northern latitudes (∼ 4 %, 60–90° N). Top-down inversions consistently infer lower emissions in China (∼ 58 Tg CH4 yr−1, range 51–72, −14 %) and higher emissions in Africa (86 Tg CH4 yr−1, range 73–108, +19 %) than the bottom-up values used as prior estimates. Overall, uncertainties for anthropogenic emissions appear smaller than those from natural sources, and the uncertainties on source categories appear larger for top-down inversions than for bottom-up inventories and models. The most important source of uncertainty in the methane budget is attributable to emissions from wetlands and other inland waters. We show that the wetland extent could contribute 30–40 % to the estimated range for wetland emissions.
Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions, (ii) the intensification of methane observations at local scale (flux measurements) to constrain bottom-up land surface models, and at regional scale (surface networks and satellites) to constrain top-down inversions, (iii) improvements in the estimation of atmospheric loss by OH, and (iv) improvements of the transport models integrated in top-down inversions. The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center (METHANE_BUDGET_2016_V1.1) and the Global Carbon Project.
  3. Saunois, Marielle, et al. “The growing role of methane in anthropogenic climate change.” Environmental Research Letters 11.12 (2016): 120207.  Unlike CO2, atmospheric methane concentrations are rising faster than at any time in the past two decades and, since 2014, are now approaching the most greenhouse-gas-intensive scenarios. The reasons for this renewed growth are still unclear, primarily because of uncertainties in the global methane budget. New analysis suggests that the recent rapid rise in global methane concentrations is predominantly biogenic, most likely from agriculture, with smaller contributions from fossil fuel use and possibly wetlands. Additional attention is urgently needed to quantify and reduce methane emissions. Methane mitigation offers rapid climate benefits and economic, health and agricultural co-benefits that are highly complementary to CO2 mitigation.
