Thongchai Thailand

Archive for April 2020


BANGKOK-PM2.5

PM2.5 MORTALITY CHARTS

 

[LINK TO THE HOME PAGE OF THIS SITE]

 

PRIMARY SOURCE DOCUMENTS

EPA-PM2.5 2010 [LINK] , EMAC MODEL EXAMPLES [LINK]  

THIS POST IS A PRESENTATION OF THE EPA STANDARDS AND METHODS FOR RISK FACTORS AND MORTALITY ESTIMATES OF PM2.5 POLLUTION LEVELS

 

HOW TO COMPUTE THE PM2.5 KILL RATE FOR YOUR CITY

  1. STEP-1: IN THE EPA (OR SIMILAR AGENCY IN YOUR COUNTRY) PUBLICATIONS FIND THE AVERAGE PM2.5 LEVEL FOR THE PREVIOUS YEAR FOR YOUR CITY. 
  2. STEP-2: IN YOUR CITY’S HEALTH DEPARTMENT PUBLICATIONS, FIND THE NUMBER OF PEOPLE OLDER THAN AGE 70 WHO DIED LAST YEAR WITH THE CAUSE OF DEATH LISTED ON THE DEATH CERTIFICATE AS ONE OF THESE FIVE DISEASES: STROKE, HEART DISEASE, LUNG CANCER, RESPIRATORY INFECTION, OR CHRONIC OBSTRUCTIVE PULMONARY DISEASE (COPD).
  3. STEP-3:  NOW GO TO THE EPA MORTALITY TABLES OR CHARTS AND FOR EACH OF THESE FIVE DISEASES, LOOK UP THE PERCENT OF DEATHS ATTRIBUTABLE TO PM2.5 AT THE PM2.5 LEVELS YOU FOUND IN STEP-1.
  4. STEP-4: MULTIPLY THE “NUMBER OF PEOPLE” FROM STEP-2 FOR EACH OF THE FIVE DISEASES BY THE CORRESPONDING “PERCENT OF DEATHS” FROM STEP-3. NOW YOU HAVE FIVE NUMBERS AS THE RESULTS OF THESE MULTIPLICATIONS.
  5. STEP-5: ADD UP THE FIVE NUMBERS FROM STEP-4. VOILA! THIS IS THE NUMBER OF PEOPLE WHO WERE KILLED BY PM2.5 LAST YEAR. THE RESULTS SHOULD STACK UP SOMETHING LIKE THIS.

DEATHS
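The five steps above can be sketched in a few lines of code. All figures below are hypothetical placeholders for illustration only, not actual EPA or health-department numbers.

```python
# Illustrative sketch of the five-step PM2.5 mortality computation.
# Every number here is a hypothetical placeholder.

# STEP-1: average PM2.5 level for the city last year (micrograms per cubic meter)
pm25_level = 50.0  # hypothetical

# STEP-2: deaths over age 70 by cause of death (hypothetical city health data)
deaths = {
    "stroke": 1200,
    "heart disease": 2500,
    "lung cancer": 400,
    "respiratory infection": 600,
    "COPD": 800,
}

# STEP-3: fraction of deaths attributable to PM2.5 at the STEP-1 level,
# as read off the agency mortality charts (hypothetical values)
attributable_fraction = {
    "stroke": 0.14,
    "heart disease": 0.10,
    "lung cancer": 0.18,
    "respiratory infection": 0.12,
    "COPD": 0.16,
}

# STEP-4: multiply deaths by the attributable fraction, disease by disease
pm25_deaths = {d: deaths[d] * attributable_fraction[d] for d in deaths}

# STEP-5: add up the five numbers
total = sum(pm25_deaths.values())
print(pm25_deaths)
print(round(total))
```

The point of the sketch is that the "kill rate" is pure arithmetic on tabulated percentages; no individual death enters the computation.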

 

THE POINT OF THIS POST IS THAT NOBODY REALLY DIES FROM PM2.5 AND THERE IS NO DEATH CERTIFICATE THAT LISTS PM2.5 AS THE CAUSE OF DEATH. BUT IT IS BELIEVED THAT PM2.5 IS A CONTRIBUTING FACTOR IF THE DECEASED WAS OLD AND HAD ONE OF THE FIVE LISTED CONDITIONS. THEREFORE, THE PM2.5 MORTALITY FIGURE YOU SEE IN THE NEWS IS JUST A NUMBER THAT WE COMPUTE. IT DOESN’T REALLY MEAN WHAT IT PRETENDS TO MEAN.

WE CAN COMPUTE HOW MANY WERE KILLED BY PM2.5 BUT WE CAN’T TELL YOU WHO THEY WERE.


[LINK TO THE HOME PAGE OF THIS SITE]

RELATED POSTS ON TCRE CARBON BUDGETS [LINK] [LINK] [LINK]

THIS POST IS A CRITICAL REVIEW OF A RESEARCH PAPER ON TCRE CARBON BUDGETS, CITED BELOW WITH FULL TEXT PDF AVAILABLE (JONES AND FRIEDLINGSTEIN 2020).

 

ABSTRACT: Climate science has misinterpreted anomalies created by statistical errors as a climate science issue that needs to be resolved with climate models of greater complexity. In this context we find that their struggle with the remaining carbon budget puzzle demonstrates a failure of climate science to address statistical issues of the TCRE in terms of statistics. This failure has led them down a complex and confusing path of trying to find a climate science explanation of the remaining carbon budget anomaly that was created by statistical errors. The research paper presented below serves as an example of this kind of climate research. The real solution to the remaining carbon budget puzzle is to understand the statistical flaws in the TCRE correlation and to stop using it. [LINK] [LINK] . In the second link we show that the TCRE procedure that shows that fossil fuel emissions cause warming also shows that UFOs cause warming [LINK] .  

ufo2

 

Environmental Research Letters: Quantifying process-level uncertainty contributions to TCRE and Carbon Budgets for meeting Paris Agreement climate targets.
Chris D Jones and Pierre Friedlingstein: 1 April 2020: Abstract: To achieve the goals of the Paris Agreement requires deep and rapid reductions in anthropogenic CO2 emissions, but uncertainty surrounds the magnitude and depth of reductions. Earth system models provide a means to quantify the link from emissions to global climate change. Using the concept of TCRE – the transient climate response to cumulative carbon emissions – we can estimate the remaining carbon budget to achieve 1.5 or 2°C. But the uncertainty is large, and this hinders the usefulness of the concept. Uncertainty in carbon budgets associated with a given global temperature rise is determined by the physical Earth system, and therefore Earth system modelling has a clear and high priority remit to address and reduce this uncertainty. Here we explore multi-model carbon cycle simulations across three generations of Earth system models to quantitatively assess the sources of uncertainty which propagate through to TCRE. Our analysis brings new insights which will allow us to determine how we can better direct our research priorities in order to reduce this uncertainty. We emphasize that uses of carbon budget estimates must bear in mind the uncertainty stemming from the biogeophysical earth system, and we recommend specific areas where the carbon cycle research community needs to re-focus activity in order to try to reduce this uncertainty. We conclude that we should revise focus from the climate feedback on the carbon cycle to place more emphasis on CO2 as the main driver of carbon sinks and their long-term behaviour. Our proposed framework will enable multiple constraints on components of the carbon cycle to propagate to constraints on remaining carbon budgets.

 

 

PART-1: WHAT THE PAPER SAYS

(1) ABOUT THE TCRE: A body of literature from 2009 found consistently that warming was much more closely related to the cumulative CO2 emissions than the time profile or particular pathway. This relationship between warming and cumulative emissions is found in the IPCC’s Fifth Assessment Report (AR5) as TCRE: the Transient Climate Response to cumulative carbon Emissions. The physical basis of TCRE is described by Caldeira & Kasting (1993) who noted that saturation of the radiative effect of CO2 in the atmosphere could be balanced by saturation of uptake by ocean carbon leading to insensitivity of the warming to the pathway of CO2 emissions. Literature since then has put this on a firm footing with numerous authors showing that trajectories of ocean heat and carbon uptake have similar effects on global temperature due to the diminishing radiative forcing from CO2 in the atmosphere and the diminishing efficiency of ocean heat uptake. Terrestrial carbon uptake is equally important for the magnitude of TCRE – in fact we will show here that land and ocean contribute equally to the magnitude of TCRE and that land dominates over the ocean in terms of model spread.

(2) ABOUT TCRE CARBON BUDGETS: The IPCC AR5 assessed a total carbon budget of 790 PgC to stay below 2°C above pre-industrial, of which about 630 PgC has been emitted over the 1870-2018 period. However, the uncertainty in the remaining carbon budget to achieve 1.5°C or 2°C is very large – in fact possibly larger than the remaining budget itself. This large uncertainty hinders the potential usefulness of this simplifying concept to policy makers. All studies and reports which present estimates of the remaining carbon budget (e.g. the IPCC’s Fifth Assessment Report, its Special Report on Global Warming of 1.5°C, or the UNEP Gap Report) have to make an assumption on how to deal with and present this uncertainty. Some explicitly describe the chosen assumptions (such as 50% or 66% probability of meeting targets) or tabulate multiple options, but all are hindered by the uncertainty. The AR5 Synthesis Report quoted a value of 400 GtCO2 (110 GtC) remaining budget from 2011 for a 66% chance to keep warming below 1.5°C. It is now clear that this was an underestimate as this would mean a remaining budget of about 20 GtC from 2020. Since AR5 there has been extensive literature on the application of the TCRE concept and its limitations, including the choice of temperature metric and baseline period and issues of biases in Earth system models (ESMs). Some studies accounted for climate model biases by relating warming from present day onwards to the remaining carbon budget. Other studies have used the historical record to constrain TCRE and the remaining budget using simple models or attribution techniques. Both these approaches find a substantial increase in the remaining carbon budget for 1.5°C compared to the IPCC AR5 SPM approach. Studies that have tried to additionally account for non-CO2 warming show that CO2-only TCRE budgets are a robust upper limit but taking account of non-CO2 forcing results in lower allowable emissions.
Some have proposed techniques for combining emissions rates of short-lived climate pollutants with long-term CO2 cumulative emission budgets. In light of these advances, the IPCC Special Report on Global Warming of 1.5°C (SR15) quotes a value of 420 GtCO2 remaining carbon budget for a 66% chance to keep warming below 1.5°C – a value very similar to the AR5 value from 5 years earlier.
There is also a lot of focus on how to achieve such carbon budgets and the increasing realization of the need for carbon dioxide removal and research into the feasibility and implications of negative emissions technology.

(3) ABOUT CARBON CAPTURE AND SEQUESTRATION: The discussion around carbon dioxide removal (CDR) requires more detailed assessment of the magnitude and timing of any requirement for negative emissions technology and hence more precise estimates of remaining carbon budgets. Glen Peters argues that large uncertainty in budget estimates may be used to justify further political inaction, and Sutton (2018) argues for consideration of plausible high impact outcomes in the tails of the likelihood distribution. The same argument applies to TCRE and carbon budgets: we need information on best estimates but also possible extremes, however unlikely. The feasibility of achieving 1.5°C without net negative emissions depends on the remaining budget being at the high end of current estimates. Knowing the likelihood of the range as well as the central estimate is required to inform the debate on requirements for negative emissions. We should break down the individual contributions to uncertainty in carbon budgets in terms of historical human induced warming to date, the likely range of TCRE, potential additional warming after emissions reach zero, warming from non-CO2 forcing, and carbon emissions from Earth system feedbacks not yet in Earth System Models, such as thawing permafrost. Our ability to model the climate-carbon cycle system is imperfect, with uncertainties, but it plays a dominant role in the remaining carbon budget issue.
The SR15 assumption of no further warming after CO2 emissions cease is consistent with the multi-model mean. Similarly, CMIP6 and sophisticated ESMs begin to include additional Earth system feedbacks – but the elephant in the room is that successive generations of models have not seen a decreased spread in the TCRE remaining carbon budget, and adding complexity doesn’t help. In terms of climate sensitivity, GCMs continue with the large range of 3°C (from about 1.5 to about 4.5°C) since the Charney report of 1979. We need to figure out where the large uncertainty in the TCRE remaining carbon budget comes from so that we can control it with observational constraints.

(4) WHAT’S NEW IN THIS PAPER:  Here we perform a new analysis of three generations of Earth System Model results, spanning over a decade, to examine whether or not existing simulations and analyses are well placed to answer the increasing requirements of policy makers on the carbon cycle research community. We present a new analytical framework which allows us to quantify sources of uncertainty in carbon budgets to land or ocean response to CO2 or climate. It is the carbon cycle response to CO2, rather than its response to climate, which dominates the uncertainty in TCRE and hence carbon budgets.

 

EARTH SYSTEM MODELS  [SOURCE]

Conventional climate models separate the carbon cycle from the model by including only the net carbon feed into the system from the carbon cycle. Earth System Models (ESM) include the carbon cycle in the climate model. The ESM is thought to be a more accurate representation of climate dynamics and is therefore relied upon by climate scientists to unlock the mystery of the remaining carbon budget puzzle.

esm_diagram

 

 

CRITICAL COMMENTARY

(1)  A TIME SERIES OF VALUES COMPUTED FROM ANOTHER TIME SERIES CAN LOSE DEGREES OF FREEDOM AND THEREBY LOSE STATISTICAL POWER.  Here we demonstrate this principle with a time series of the number of foreign golfers at a golf club in Huahin, Thailand per day for a period of 30 days in the month of October. October is the month when European tourists begin to arrive in large numbers in Thailand. The golf course manager would like to know if the rise in the number of golfers  in October is due to tourist arrivals in Huahin in terms of a correlation between the two time series.

Figure 1: Daily Data: Figure 1 below depicts the data for the number of golfers at the course and the number of net arrivals (net of departures) in Huahin, the city nearby where there are a large number of hotels and ladies of the night. What we see in Figure 1 is that both time series show a rising trend but the correlation analysis in the third frame does not show that the two time series are related such that the rise in golfer numbers can be explained by tourist arrivals in Huahin. The correlation is found to be r=0.265 with a sample size of n=30, and degrees of freedom of df=30-1=29. The standard deviation of the correlation coefficient can be computed as (1-r*r)/sqrt(n) = 0.1697. These figures imply a t-statistic of t=0.265/0.1697 = 1.5616. At degrees of freedom df=29, that yields a p-value of pv=0.0644 > 0.05 and so we fail to reject the null hypothesis. No evidence of correlation is found in the data.

Figure 1: Daily golfer counts and net arrivals

huahin-3
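The arithmetic of the Figure 1 test can be reproduced in a few lines, using the post's own formula for the standard deviation of the correlation coefficient. (A more common textbook form is sqrt((1-r²)/(n-2)), which gives a similar result for these numbers.)

```python
import math

# Correlation test for the daily series, following the formulas in the text
r, n = 0.265, 30
sd = (1 - r * r) / math.sqrt(n)  # the post's standard deviation of r
t = r / sd                       # t-statistic; compare against a t-table
print(round(sd, 4), round(t, 4))
```

Looking up this t-value in a one-tailed t-table at the post's df = 29 gives a p-value of about 0.064, above the 0.05 threshold, consistent with the failure to reject the null hypothesis.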

Figure 2: 5-day averages: In Figure 2 below, a time series of 5-day averages is tried because the daily figures may contain too much variance to detect the suspected correlation. Here the correlation is much higher at r=0.55, however the sample size has shrunk to n=6 with degrees of freedom df=5. The t-test shows a p-value of p=0.055 and so, at alpha=0.05, we once again fail to reject H0, the null hypothesis that the two time series are not related. The next option we can try is a moving 5-day average that moves through the time series one day at a time. {Note: The charts for 5-day averages are incorrectly labeled as “5yr averages”}

huahin-4

 

FIGURE 3: 5-DAY MOVING AVERAGES: Next we try a 5-day moving average that moves through the time series one day at a time. The results appear in the chart below. Here we find a higher correlation of r=0.615 and with apparently a longer time series of n=26 that will yield higher degrees of freedom and greater statistical power. At n=26, we find that the standard deviation of the correlation coefficient is sd=0.2725 and that yields a t-statistic of t=2.258. If we use the sample size of n=26, we get degrees of freedom df=25 and that yields a p-value of p=0.016, less than alpha=0.05, meaning that the observed correlation is statistically significant.

HUAHIN-5

 

FIGURE 4: MULTIPLICITY: However, there is a problem with the results shown in Figure 3 above and it has to do with multiplicity. Multiplicity means that some of the data values were used more than once and that created the illusion of a longer time series and more degrees of freedom than we actually have. As shown in the multiplicity chart below, the first five numbers are used once, twice, thrice, four times, and five times respectively; and the last five numbers are used 5 times, 4 times, 3 times, 2 times, and once respectively. All the other numbers are used five times. The average multiplicity of use is 4.333. Therefore, the effective sample size is n/multiplicity = 30/4.333 = 6.923 or approximately 7, with degrees of freedom df=6. Using the lower degrees of freedom in the p-value computation of Figure 3 above yields a higher p-value of p=0.03 that is still less than our critical value of alpha=0.05, and so in this case the moving average series provided more statistical power and was able to detect a correlation between net arrivals of tourists and the number of golfers on the golf course. However, this is not always the case and it is necessary to check the effective sample size and effective degrees of freedom for statistical tests of significance in time series analysis.

HUAHIN6
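The multiplicity accounting above generalizes to any k-point moving average over an n-point series; a quick sketch:

```python
# Effective sample size under a k-point moving average,
# following the multiplicity argument in the text
n, k = 30, 5
windows = n - k + 1                 # number of moving-average values
total_uses = windows * k            # each window uses k of the original points
avg_multiplicity = total_uses / n   # average times each data point is used
effective_n = n / avg_multiplicity  # effective sample size
print(windows, round(avg_multiplicity, 3), round(effective_n, 3))
```

For n=30 and k=5 this reproduces the figures in the text: 26 moving-average values, average multiplicity 4.333, effective sample size about 7.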

 

FIGURE 5: CUMULATIVE VALUES: AN EXTREME CASE OF MULTIPLICITY. In the construction of a time series of the cumulative values of another time series, say of length N, the multiplicity of use is significantly greater. Here, the Nth number is used once, the (N-1)th number is used twice, the (N-2)th number is used three times, and so on until we get to the first number, which is used N times. The total number of numbers used in this sequence is the sum of the consecutive integers from 1 to N. For example, in a time series of 30 numbers, the total number of numbers used in the construction of the cumulative value series is the sum of the integers from 1 to 30, computed as M=(30/2)*(1+30) = 15*31 = 465. On average the numbers in the time series are used 465/30 = 15.5 times each and thus the effective sample size is 30/15.5 = 1.935. As seen in the chart below, in the case of the cumulative values of the golf club dataset, we compute a near perfect correlation between tourist arrivals and golfer count of Corr=0.976. If multiplicity adjustment is not used to correct for effective sample size, we compute the standard deviation of the correlation coefficient as sigma=0.008658, which yields a t-statistic of t=112 and a p-value close to zero, indicating that a statistically significant correlation exists between arrivals and golfer counts. However, when the sample size is corrected for multiplicity in the use of the data, the effective sample size is reduced to n=1.935, leaving degrees of freedom for the determination of correlation at less than unity.

huahin7
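The same accounting for a cumulative-value series of length N:

```python
# Effective sample size for a cumulative-value series (extreme multiplicity)
N = 30
total_uses = N * (N + 1) // 2   # 1 + 2 + ... + N
avg_multiplicity = total_uses / N
effective_n = N / avg_multiplicity
print(total_uses, avg_multiplicity, round(effective_n, 3))
```

For N=30 this gives 465 total uses, an average multiplicity of 15.5, and an effective sample size of about 1.935, matching the figures in the text.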

 

FIGURE 6: STATISTICAL ISSUES CANNOT BE RESOLVED WITH EARTH SYSTEM MODELS: For the regression of temperature against cumulative emissions that yields the TCRE regression coefficient, the degrees of freedom is 1.935-2, a negative number. The TCRE “near perfect proportionality” between cumulative warming and cumulative emissions is therefore illusory and has no interpretation in terms of the phenomena it apparently represents because a statistical test of significance for the TCRE is not possible. Yet another statistical issue in the TCRE is that a time series consisting of cumulative values of another time series does not have a time scale. Statistical flaws in the TCRE create confusing situations in climate science procedures that rely on the TCRE. Climate scientists interpret these anomalies created by statistical flaws as climate science issues, further compounding the confusion created by the statistical flaw in the TCRE mathematics. Attempts to resolve these statistical defects with more sophisticated climate models and Earth System Models do not lead to resolution of the statistical issues but to further and deeper confusion about the TCRE.

We conclude from the analysis presented above that the near perfect proportionality between temperature and cumulative emissions cited by climate science is a correlation between cumulative values and that therefore this correlation has no interpretation in the real world because it has neither time scale nor degrees of freedom.

 

FIGURE 7: THE REMAINING CARBON BUDGET PUZZLE: The Jones-Friedlingstein paper presented here is an example of the intense research agenda in climate science having to do with the “remaining carbon budget puzzle” that has so engaged climate science research ever since the TCRE “near perfect proportionality” between cumulative emissions and cumulative warming became the primary theoretical link between emissions and warming in that discipline. The remaining carbon budget puzzle is that midway into a TCRE carbon budget time span, the remaining carbon budget computed by subtraction does not equal the remaining carbon budget computed by the TCRE procedure that was used to construct the carbon budget for the full span. This apparent anomaly is a statistics issue and not a climate science issue.

 

The TCRE correlation does not derive from the responsiveness of warming to emissions but from a fortuitous sign pattern in which annual emissions are always positive and, during a warming trend, annual warming is mostly positive. Since emissions are always positive, the TCRE regression coefficient in this proportionality is determined by the fraction of annual warming values that are positive. Larger fractions of positive warming values yield higher values of the TCRE regression coefficient and lower fractions of positive warming values yield lower values of the TCRE. It is the regression coefficient that determines the value of the carbon budget. Because of the random nature of the annual warming values, it is highly unlikely that the fraction of annual warming values that are positive in the full span of the carbon budget period will be the same as the fraction that is positive in each half of that span. This is the source of the remaining carbon budget enigma. In general, the TCRE regression coefficient for the full span of the carbon budget period, that for the first half, and that for the second half will all be different, and that is why the remaining carbon budget computed by subtraction cannot be expected to equal the remaining carbon budget computed with a new TCRE procedure for that period.

Since emissions are always positive, the critical factor is the fraction of annual warming values that are positive. In the chart above, the upper left frame shows random annual warming values with no bias for positive values. The right frame visually displays the corresponding correlation of cumulative warming with cumulative emissions. It is evident in this graphic that without a bias for positive values in annual warming no TCRE correlation can be found. In the lower frame, a slight bias is inserted for positive values of annual warming and the corresponding right frame shows the strong TCRE correlation that was created by the bias for positive values of annual warming. The essence of the remaining budget puzzle can be understood in terms of this demonstration because there is no guarantee that the fraction of annual warming values that are positive in the full span of the TCRE carbon budget time period, the fraction that is positive in the first half of the time period, and the fraction that is positive in the second half of the time period will be the same.
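The sign-pattern argument can be checked with a small simulation using hypothetical numbers and only the standard library: cumulative sums of always-positive "emissions" correlate strongly with cumulative sums of "annual warming" whenever the warming values carry even a slight positive bias, regardless of any physical connection between the two series.

```python
import random

def corr(x, y):
    # Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def cumsum(x):
    # running cumulative sum of a series
    out, s = [], 0.0
    for v in x:
        s += v
        out.append(s)
    return out

random.seed(1)
n = 150
emissions = [random.uniform(5, 10) for _ in range(n)]          # always positive
warming_unbiased = [random.gauss(0.0, 0.1) for _ in range(n)]  # no positive bias
warming_biased = [random.gauss(0.05, 0.1) for _ in range(n)]   # slight positive bias

r_unbiased = corr(cumsum(emissions), cumsum(warming_unbiased))
r_biased = corr(cumsum(emissions), cumsum(warming_biased))
print(round(r_unbiased, 3), round(r_biased, 3))
```

With the biased series, a strong "TCRE-like" correlation appears even though the warming values were generated independently of the emissions; the correlation is a property of the sign pattern and the cumulative-sum construction, not of any causal link.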

 

CONCLUSION: Climate science has misinterpreted these statistical anomalies as a climate science issue that needs to be resolved with climate models of greater complexity. Their struggle with the remaining carbon budget puzzle, seen in the Jones-Friedlingstein paper presented above, demonstrates a failure of climate science to address statistical issues of the TCRE in terms of statistics. This failure has led them down a complex and confusing path of trying to find a climate science explanation of the remaining carbon budget anomaly. The research paper presented above serves as an example of this kind of climate research. The real solution to the remaining carbon budget puzzle is to understand the statistical flaws in the TCRE correlation and to stop using it [LINK] [LINK]. In the second link we show that the TCRE procedure that shows that fossil fuel emissions cause warming also shows that UFOs cause warming [LINK].

ufo2

 

IMAGE#1: THE PACIFIC BASIN AND THE RING OF FIRE

 

IMAGE#2: THE HAWAIIAN RIDGE

 

IMAGE#3: THE HAWAIIAN HOTSPOT

 

[LINK TO THE HOME PAGE OF THIS SITE]

Some typos fixed 3:20pm 4/9/2020 Thai time with Thanks to Danny

THIS POST IS A SURVEY OF THE GEOLOGICAL FEATURES OF THE HAWAIIAN ISLANDS IN THE CONTEXT OF CLIMATE CHANGE AND THE EMISSION REDUCTION PRIORITIES OF THE HAWAIIAN ISLANDERS. 

PRIMARY REFERENCE: “Eruptions of Hawaiian Volcanoes: Past, Present, and Future”, Robert Tilling, Christina Heliker, Donald Swanson, U.S. Geological Survey General Information Product 117 [LINK]. ADDITIONAL REFERENCES CITED BELOW IN THE TEXT.

 

(1) ORIGIN OF THE HAWAIIAN ISLANDS: We are atmospheric creatures, so our view of the world is an atmospheric view, and our view of the Hawaiian Islands is limited to the small bits of them that are evident in that view, as seen in the image below. Here we see seven tropical islands, one very large, one very small, and five others that offer a carefree tropical lifestyle for the few and a wonderful vacation destination for the many.

hawaii-map

(2) A DEEPER VIEW OF THE HAWAIIAN ISLANDS: The view from the bottom of the ocean reveals that the seven islands we know and love are only the part visible from the atmosphere of the 80 or more volcanoes in that region of the Pacific Ocean, in the middle of the Ring of Fire known for its intense geological activity involving the transfer of heat and materials (including carbon) from the mantle to the crust. As shown in the left frame of the image below, the Big Island of the Hawaiian Islands is a mantle plume hotspot.

(3) ABOUT MANTLE PLUME HOTSPOTS: As described in a related post [LINK], a mantle plume hotspot is a large body of magma that rises from the mantle of the inner earth through layers of rock until it is obstructed, whereupon it spreads out into a mushroom shape over a widespread area. If it is under sufficient pressure, the magma can break through to the atmosphere as a volcanic eruption. As shown in the image above, the Big Island of Hawaii is adjacent to a hotspot where magma flows up from the mantle with so little resistance that there is no visible “eruption”. Free flowing magma is often seen at the south end of the Big Island and in the seas around it.

 

(4) THE HAWAIIAN RIDGE: The Hawaiian Islands are the tiny visible part of an immense submarine ridge called the Hawaiian Ridge. The visible islands are the tops of gigantic volcanic mountains formed by countless eruptions of fluid lava over several million years, some rising more than 30,000 feet above the seafloor. The Hawaiian Ridge-Emperor Seamount Chain is composed of more than 80 large volcanoes. This range stretches across the Pacific Ocean floor from the Hawaiian Islands to the Aleutian Trench. The length of the Hawaiian Ridge segment alone, between the Island of Hawai‘i and Midway Island to the northwest, is about 1,600 miles. The amount of lava erupted to form this huge ridge, about 186,000 cubic miles, is more than enough to cover the State of California with a layer 1 mile thick. (USGS [LINK]).

(5) THE CARBON DIOXIDE ISSUE: In the context of AGW climate change as human caused, climate science points out correctly that the total amount of carbon dioxide from volcanic eruptions is a small fraction of human emissions. Fossil fuel emissions are estimated to be about 30 gigatons per year of CO2 [LINK]. The comparison of these human emissions with volcanic eruption emissions of CO2 is demonstrated with graphics such as the one shown below. These graphics indicate a lopsided comparison where human emissions are 60 times those from volcanic eruptions. This comparison is based on eruptions of volcanoes above land.

However, more than 80% of the world’s volcanoes are submarine, found on the ocean floor, and they account for more than 90% of the world’s volcanic activity. More to the point in terms of carbon emissions, the release of magmatic material from the mantle into the crust of the planet does not normally involve violent eruptions of any kind; carbon emissions from the bottom of the ocean can create bubbles of carbon dioxide that rise to the ocean surface, and these emissions are not counted in the volcano accounting described by climate science. Seepage, hydrothermal vents, and mantle plumes, though not included in the land volcano account, are the major sources of carbon flow from the mantle to the crust. In this regard, it should also be mentioned that more than 98% of the planet’s carbon is in the mantle and core and less than 2% of it is found in the crust, inclusive of carbon life forms such as us and all our fossil fuel reserves such as coal, petroleum, and natural gas. [LINK TO RELATED POST].

[IMAGE: comparison of human and volcanic CO2 emissions]

 

bubbles2

bubbles-1

 

(6) CHANGING ATMOSPHERIC COMPOSITION: THE CASE FOR CLIMATE ACTION: A crucial argument for the human cause of changes in atmospheric composition is that atmospheric composition has been changing for more than a hundred years, with atmospheric CO2 concentration rising from less than 300 ppm to more than 400 ppm over a period that corresponds with the rise and rapid expansion of the fossil fuel driven industrial economy of humans. This correspondence provides the critical evidence of causation in climate science: that the observed changes in atmospheric composition are caused by fossil fuel emissions, and that therefore these changes can and must be moderated and halted by taking climate action in the form of reducing the use of fossil fuels until fossil fuel emissions fall from 30 gigatons per year of CO2 to zero.

 

(7) SPURIOUS CORRELATIONS IN TIME SERIES DATA: When working with time series of field data, care must be taken to avoid some well known statistical pitfalls in correlation and causation analysis. This point is made graphically with examples in the Tyler Vigen Spurious Correlation website [LINK] as seen in the example below.

[IMAGE: Tyler Vigen spurious correlations example]

Briefly, causation in time series data cannot be assumed based only on concurrence. For example, that atmospheric CO2 concentration went up during a time when humans were burning fossil fuels does not establish causation, or the direction of causation, such that burning fossil fuels caused atmospheric CO2 concentration to rise. For that, the two time series must first be detrended so that the effect of shared trends on the correlation coefficient is removed. Then a statistically significant correlation between the detrended series can be used to support the causation hypothesis at a given time scale of interest. In related posts we show that detrended correlation analysis does not support the assumed causation in climate science that, because of the concurrence of fossil fuel emissions with rising atmospheric CO2, fossil fuel emissions must be the cause of the rising CO2 [LINK] [LINK] [LINK] [LINK].
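The detrending step described above can be sketched as follows, using two hypothetical series that share only a linear trend: the raw correlation is nearly perfect, while the detrended correlation collapses toward zero.

```python
def linear_fit(y):
    # ordinary least squares fit of y against time t = 0, 1, ..., n-1
    n = len(y)
    t = list(range(n))
    mt, my = sum(t) / n, sum(y) / n
    slope = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
            sum((ti - mt) ** 2 for ti in t)
    intercept = my - slope * mt
    return [intercept + slope * ti for ti in t]

def detrend(y):
    # residuals after removing the fitted linear trend
    return [yi - fi for yi, fi in zip(y, linear_fit(y))]

def corr(x, y):
    # Pearson correlation coefficient
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical series: same upward trend, unrelated fluctuations
x = [i + 0.5 * (-1) ** i for i in range(20)]
y = [2 * i + 0.5 * (-1) ** (i // 2) for i in range(20)]

raw_r = corr(x, y)
detrended_r = corr(detrend(x), detrend(y))
print(round(raw_r, 3), round(detrended_r, 3))
```

The shared trend alone produces a near perfect raw correlation; once it is removed, the residual correlation is what remains to speak to causation at the annual time scale.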

(8) NATURAL SOURCES OF CO2 MUST BE CONSIDERED: The failure of detrended correlation analysis to support the hypothesis that changes in atmospheric composition in the industrial era are caused by the industrial economy implies that natural flows of CO2 must also be considered in the dynamics of atmospheric composition. A significant source of carbon in that context is geological emissions found in seepage, above-ground volcanic eruptions, and submarine volcanism, as well as hydrothermal vents and other non-explosive transfers of mantle carbon to the surface such as mantle plumes. The climate science argument that CO2 emissions from eruptions of land volcanoes are immaterial in this regard derives from an arbitrary restriction to land volcanic eruptions as the only geological source.

(9) THE CASE OF HAWAIIAN CLIMATE ACTIVISM: Specifically, the contribution of geological carbon relative to fossil fuel carbon is likely to be significantly enhanced above the global average in areas with a higher intensity of geological activity. We show above that this is the case with the Hawaiian Islands, which sit on the very active geological feature of the Hawaiian Ridge system. Therefore in Hawaii, more so than in global averages, geological carbon dioxide is relatively more important when evaluating the effect of fossil fuel emissions on atmospheric composition. These considerations imply that climate activism in the Hawaiian Islands against rising atmospheric CO2 concentration must be directed at all sources of carbon dioxide relevant in the local context instead of just fossil fuels. The details of Hawaiian climate activism are described in the CONTEXT section below.

(10) THE CONTEXT FOR THE CASE OF HAWAIIAN ACTIVISM: THE CITY OF HONOLULU WILL FILE A CLIMATE LIABILITY LAWSUIT AGAINST FOSSIL FUEL COMPANIES SEEKING $$COMPENSATION$$ FOR CLIMATE IMPACTS. Mayor Kirk Caldwell said on Tuesday he intends to file suit against BP, Chevron, Shell, ExxonMobil, ConocoPhillips, the BHP Group, Marathon and Aloha Petroleum to hold them accountable for climate impacts to the city. The announcement comes just one week after Maui announced it intends to do the same. Both suits are pending approval of other local officials. The municipalities are seeking to force the fossil fuel companies to help pay for climate adaptation and damage costs. Both suits are expected to allege that the companies knew for decades about the devastating climate consequences of their products and engaged in a tobacco-style campaign to undermine the science and obstruct regulatory policies. “The ocean is speaking about the climate crisis. For 50 years, Big Oil knew about these impacts. They knew and then they covered it up.” Hawaii is particularly vulnerable to climate impacts such as sea level rise, flooding and severe storms. Over 70 percent of Hawaii’s beaches are in a chronic state of erosion and the island of Oahu has lost nearly 25 percent of its beaches. A category 3 hurricane could inflict $26 billion in damages on Oahu alone, and relocation of roads in the state is estimated to cost $15 billion. The state’s tourism industry is also expected to suffer as the sea eats away at Hawaii’s famous beaches. “Rising seas, rain bombs, stronger hurricanes, and other consequences of climate change are already threatening Oahu and will impact our fiscal health. Taxpayers should not have to pay for all the steps we will need to take to protect our roads, beaches, homes, and businesses. That should be on the fossil fuel companies who knowingly caused the damage, and as budget chair I believe we should go to court to make them pay their share.”
Sixty-eight percent of Hawaiians strongly support making fossil fuel companies pay for climate damages, according to a recent poll. That rises to 71 percent among residents of Honolulu County. “Oahu taxpayers shouldn’t have to bear the burden of this cost. Honolulu must prepare now. California is on fire, the Bahamas were nearly wiped off the map, and Houston has been hit by three 500-year floods in the past three years. It is devastating to find out that Big Oil knew these impacts would occur as far back as the 1960s, and yet they chose to undermine the science and sow confusion instead of becoming responsible corporate citizens. This lawsuit won’t stop climate change from happening, but it will help pay for the protection and preparation of our citizens as climate disasters continue to come our way.”

HONOLULU

(11) THE ASSESSMENT OF NATURAL GEOLOGICAL SOURCES OF CARBON DIOXIDE PRESENTED ABOVE IMPLIES THAT THE LITIGANTS IN THIS CASE MUST DETERMINE THE FRACTION OF THE BLAME AND LIABILITY FOR THEIR CLIMATE WOES TO BE BORNE BY NATURE AND THE FRACTION TO BE BORNE BY FOSSIL FUEL COMPANIES. THE BURDEN OF PROOF FOR THESE DETERMINATIONS LIES WITH THE LITIGANTS. MORE TO THE POINT, PEOPLE WHO DON’T LIKE CARBON DIOXIDE SHOULDN’T LIVE ON A MANTLE PLUME HOTSPOT.

POSTSCRIPT

  1. Dalrymple, G. Brent, Eli A. Silver, and Everett D. Jackson. “Origin of the Hawaiian Islands: Recent studies indicate that the Hawaiian volcanic chain is a result of relative motion between the Pacific plate and a melting spot in the Earth’s mantle.” American Scientist 61.3 (1973): 294-308.  Link to Full Text PDF file [LINK]
  2. USGS, Eruptions of Hawaiian Volcanoes, (2006) Full Text PDF file [LINK] . The principal Hawaiian islands, stretching about 400 miles from Ni’ihau in the northwest to the Island of Hawai’i in the southeast, are the exposed tops of volcanoes that rise tens of thousands of feet above the ocean floor. Some islands are made up of two or more volcanoes. Areas in red on the Island of Hawai‘i indicate lava flows erupted during the past two centuries. Lö‘ihi Seamount, Hawai‘i’s newest and still submarine volcano, lies about 3,100 feet beneath the sea.
bandicam 2020-04-09 07-27-53-596

 

IMAGE#1: ARCTIC OCEAN

GREENLAND-SEA2

 

IMAGE#2: THE ULTRASLOW SPREADING MOHNS RIDGE

gakkel-3

 

[LINK TO THE HOME PAGE OF THIS SITE]

RELATED POSTS ON GEOLOGY OF THE ARCTIC SEA [LINK] [LINK] [LINK] 

 

 

THIS POST IS A CRITICAL REVIEW OF FINDINGS BY CLIMATE SCIENCE THAT GREENLAND ICE IS MELTING FROM BELOW BY WARM WATERS, ATTRIBUTED TO CLIMATE CHANGE BY WAY OF WARM WATER CURRENTS FROM THE GULF OF MEXICO. IT IS PRESENTED IN THREE PARTS. PART-1 IS A STATEMENT OF THE FINDINGS AS REPORTED IN NATURE GEOSCIENCE. PART-2 IS A PRESENTATION OF THE RELEVANT HYDROTHERMAL FEATURES OF THE ARCTIC SEA IN THAT REGION, AND PART-3 IS A CRITICAL EVALUATION OF THE ATTRIBUTION OF UNDERWATER ICE MELT IN GREENLAND TO CLIMATE CHANGE IN TERMS OF THE DATA IN PART-2.

 

LINKS TO THE SOURCES OF THESE CLAIMS:

  1. INTERVIEW OF CLIMATE SCIENTISTS: POPULAR SCIENCE MAGAZINE [LINK]
  2. YOUTUBE VIDEO: NASA RESEARCH POSTED BY CNN [LINK] 

 

bandicam 2020-04-08 07-50-07-632

PART-1: THE FINDINGS OF ICE MELT BY WARM OCEAN WATERS ATTRIBUTED TO CLIMATE CHANGE AS DESCRIBED BY THE POPULAR SCIENCE ARTICLE

The Greenland ice sheet, a 656,000-square-mile mass of ice covering most of Greenland, is melting at a rapid pace, losing ice seven times faster than in the 1990s. Rising air temperatures due to climate change are largely to blame, except that glacial ice shelves in the ocean are melting from the bottom up. A new study in Nature Geoscience found that a previously unknown seafloor landscape is bringing warm water to the 79º North glacier, located in northeastern Greenland, and eroding a 50-mile-long lobe of ice hanging off the glacier. And that could have serious repercussions for the entire ice sheet. What was known is that these ice tongues melt from underneath; what was not known is how the warm water makes it to the glacier. These relatively thin tongues of ice extending from the main ice sheet are the most vulnerable to melting from below. Blasts of warm water from the deep can chip away at these slabs, which are basically attached icebergs. At the surface, the water is brisk, hovering below zero Celsius, so that is not what’s melting the ice. Below the ice tongue, the researchers found a deep, mile-wide channel that was funneling warm water toward the glacier. The water down there reached around 34ºF, much warmer than the Arctic waters above. The heat delivered by this warm water is equivalent to the output of 60 to 70 nuclear power plants. The water was melting about 34 feet of ice below the glacier every year (though a smaller amount of ice is added to the top every year, too). The warm water is part of a current moving from the Gulf of Mexico, along the Gulf Stream, and then flowing along the west coast of Norway. As it reaches the Arctic, the water ducks below the polar waters because it is more dense. The seafloor topography then moves the water along deep channels. This layer of relatively warm water is normal to find here, but it has gotten a little bit warmer.
However, melting at the surface still drives most of Greenland ice sheet loss; underwater melting is also important. It is necessary to understand these local details in order to understand what is happening to the ice sheet as a whole. Although the 79º North glacier is just one fragment of a massive icy expanse, it is located at the mouth of a river of ice that drains a fifth of the ice sheet. When the ice tongue is gone, this ice river will flow faster, ejecting its contents into the ocean. All that extra ice loss would contribute to sea level rise. This information on the glacier can now be incorporated into models to simulate how climate change will further erode the ice sheet. The findings may also help illuminate melting processes at the Antarctic ice sheet, which also has many large ice tongues extending from it. Being able to project how fast these two ice sheets are melting is crucial for coastal cities, where a difference of a couple of feet is the difference between submerging and staying dry. “Ice loss from Greenland and Antarctica is one of the largest contributors to sea level rise. Being able to understand what drives the pace of ice loss and the quantity of ice loss over multiple years is very valuable for us getting a better handle on what those sea level rise numbers are going to be in the future.”
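The reported basal melt rate of about 34 feet of ice per year can be converted into an implied heat flux as a back-of-envelope check. This is not the Nature Geoscience calculation; the melt rate is the figure quoted above, and the other constants are standard textbook values for ice.

```python
# Back-of-envelope: heat flux implied by the reported ~34 ft/yr of basal
# melt under the 79N glacier tongue. Constants are standard textbook
# values; this is a rough sanity check, not the study's own calculation.

melt_rate_m_per_yr = 34 * 0.3048      # 34 ft/yr -> ~10.4 m/yr
rho_ice = 917.0                       # kg/m^3, density of glacial ice
latent_heat_fusion = 334e3            # J/kg, latent heat of melting
seconds_per_year = 3.156e7

energy_per_m2_per_yr = melt_rate_m_per_yr * rho_ice * latent_heat_fusion
flux_w_per_m2 = energy_per_m2_per_yr / seconds_per_year
print(f"implied basal heat flux ~ {flux_w_per_m2:.0f} W/m^2")
```

The result is on the order of 100 W/m², roughly 2,000 times the average background geothermal flux of ~50 mW/m² cited in the bibliography below, which is why the debate in PART-3 centers on where the warm water gets its heat rather than on whether heat is being delivered.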

 

PART-2: HYDROTHERMAL ACTIVITY IN THE REGION 

A 2019 AGU research paper on the ULTRASLOW SPREADING MOHNS RIDGE [LINK]  describes the hydrothermal features of the sea adjacent to the part of Greenland where the underwater ice melt is reported. CITATION: Hydrothermal Activity at the Ultraslow‐Spreading Mohns Ridge: New Insights From Near‐Seafloor Magnetics, Anna Lim, Marco Brönner, Ståle Emil Johansen, Marie‐Andrée Dumais, 19 November 2019, AGU https://doi.org/10.1029/2019GC008439 ABSTRACT: Hydrothermal circulation is a process fundamental to all types of mid‐ocean ridges that largely impacts the chemical and physical balance of the World Ocean. However, the diversity of geological settings hosting hydrothermal fields complicates the exploration and requires thorough investigation of each individual case study before effective criteria can be established. Analysis of high‐resolution bathymetric and magnetic data, coupled with video and rock sample material, furthers our knowledge about mid‐ocean‐ridge‐hosted venting sites and aids in the interpretation of the interplay between magmatic and tectonic processes along the axial volcanic ridges. The rock‐magnetic data provide constraints on the interpretation of the observed contrasts in crustal magnetization. We map the areal extent of the previously discovered active basalt‐hosted Loki’s Castle and inactive sediment‐hosted Mohn’s Treasure massive sulfide deposits and infer their subsurface extent. Remarkably, extinct hydrothermal sites have enhanced magnetizations and display clear magnetic signatures allowing their confident identification and delineation. The identified magnetic signatures expose two new fossil hydrothermal deposits, MT‐2 and MT‐3. The Loki’s Castle site coincides with a negative magnetic anomaly observed in the 2‐D magnetic profile data crossing the deposit. First geophysical investigations in this area reveal the complexity of the geological setting and the variation of the physical properties in the subsurface.

GEOLOGICAL SETTING: The intense geological properties of the Mohns Ridge (Image#2) derive from a major plate boundary reorganization involving a 30° shift in plate motion, followed by the initiation of oblique spreading of the Mohns Ridge and the inception of the Knipovich Ridge. The Mohns Ridge is an ultraslow and obliquely spreading ridge with a full rate estimated at ~15.6 mm/year for the last 10 Ma. The topography is rough, with a complex spreading history of the Greenland basin. Both flanks of the rift valley and the valley floor are covered by sediments. The MOR is characterized by linked magmatic (volcanic) and amagmatic (tectonic) segments. Topographic highs are volcanic in origin. Abundant volcanic features such as prominent cones, flat-topped volcanoes, and volcanic ridges are observed, consistent with the hypothesis that the two domed elongated edifices are newly formed axial volcanic ridges. The life cycle of axial volcanic ridges alternates between magmatic and tectonic stages. The area is seismically active: earthquake epicenters located within the ridge valley correlate closely with the major faults and volcanoes at the graben floor. The interplay between these processes is of major importance for hydrothermal circulation along the ridges. Loki’s Castle is an active high-temperature hydrothermal venting field. It occurs at the northernmost AVR of the Mohns Ridge, which rises approximately 1,300 m above the rift valley floor at 2,000-m depth. En echelon faults can be traced along the entire ridge, which is locally covered by fresh lava flows. Volcanic cones, smaller ridges, and flat-topped volcanoes are common features. The hydrothermal fluids collected from the black smokers indicate significant magmatic influence.

FINDINGS: Near‐seafloor magnetic data from the ultraslow‐spreading Mohns Ridge indicates hydrothermal deposits associated with both active and inactive hydrothermal venting sites and imply magmatic and tectonic processes along the axial volcanic ridges. Loki’s Castle is an active hydrothermal venting field. Two strong positive magnetic anomalies near the Mohn’s Treasure reveal newly extinct hydrothermal venting sites with the same magnetic signature as the Mohn’s Treasure. The increasing prevalence of faulting and its complexity has positive implications for hydrothermal discharge and potentially controls the occurrence of active hydrothermal venting field in the northern AVR1, currently undergoing a destructive tectonic stage.

 

PART-3: CRITICAL COMMENTARY ON THE ATTRIBUTION OF WARM WATER ICE MELT OF GREENLAND ICE SHEET GLACIAL TONGUES TO CLIMATE CHANGE

It is reported in the Nature Geoscience paper described above in PART-1 that glacial ice tongues immersed in the Greenland Sea are melting from the bottom by warm water. Climate scientists who studied this phenomenon argue for a human cause by attributing the warmth of the water to ocean currents that bring warm water from the Gulf of Mexico to the Arctic in a journey of about 5,000 km while retaining the heat. It is thus that AGW atmospheric heat can be trapped by the Gulf of Mexico and delivered to the Arctic to melt glacial ice dipped into the Arctic Sea. This line of reasoning and causation sequence maintains the human cause of ice melt in Greenland and the COP26 climate action rationale that climate action can and must stop the ice melt in Greenland. We argue here that geological activity is the more likely source of the energy that is melting ice in Greenland; it is more likely that the warm water was made warm by geothermal heat. As seen in the AGU research paper in PART-2 and in a related post on the geology of the Arctic [LINK] , the Arctic sea floor is very geologically active.

Image#3 below shows the 6,000 km long Mid Arctic Rift, depicted as a long and curvy red hashed area in close proximity to Greenland, with red triangles marking known active submarine volcanoes on the ocean floor. Also shown on this slide is the Greenland/Iceland mantle plume hotspot. A mantle plume hotspot is a large body of magma that rises from the mantle of the inner earth through layers of rock until it is obstructed, at which point it spreads out into a mushroom shape over a widespread area. Under sufficient pressure the magma can break through to the surface as a volcanic eruption.

On the left of Greenland is the Baffin Bay Labrador rift system, marked as BBLR. In the upper left corner of the image, active submarine volcanoes are marked with red triangles. This is the Aleutian Island convergent plate boundary, where two giant plates collide, one diving under the other, creating a tremendous amount of geological energy that becomes evident as geothermal heat.

IMAGE#3

bandicam 2019-07-01 16-29-44-526

 

CONCLUSION

In the context of these intense geological features of the Arctic region in and around Greenland, it requires a strong atmosphere bias to claim an atmospheric anthropogenic global warming source for the warmth in the water that is melting the “tongues” of Greenland’s glaciers immersed in the sea. We propose, on the basis of the geological features of the Arctic cited here, that nature’s geothermal heat from the bottom of the Arctic Ocean is the likely source of the energy that warms the water and melts the tongues of Greenland’s glaciers. To quote Carmack (2012), “The Arctic Ocean warms from below.”

RELEVANT BIBLIOGRAPHY

  1. Taylor, A., A. Judge, and V. Allen. “Terrestrial heat flow from project CESAR, Alpha Ridge, Arctic Ocean.” Journal of geodynamics 6.1-4 (1986): 137-176. During two months in spring, 1983, a multidisciplinary study, project CESAR, was undertaken from the sea ice across the eastern Alpha Ridge, Arctic Ocean. In the geothermal program, 10 gradiometer profiles were obtained; 63 determinations of in situ sediment thermal conductivity were obtained with the same probe, and 714 measurements of conductivity using the needle probe method were obtained on nearby core. Weighted means of the thermal conductivity of the sediment are 1.26 W/mK (in situ) and 1.34 W/mK (core), consistent with the compacted sediment encountered across the ridge and with the lithology. Calculated terrestrial heat flow values, corrected for the regional topography, range from 37 to 72 mWm−2; the average is 56+/−8 mWm−2. Some temperature and heat flow versus depth profiles exhibit non-linearities that can be explained by variations in bottom water temperatures preceding the measurements; models are hypothesized that reduce the curvatures. Two heat flow values considerably higher than others in the area may be explained by higher bottom water temperature over several years, while the low value is consistent with a recent deposition from a slump. This hypothetical modelling reduces the scatter of heat flows and reduces the average to 53+/−6 mWm−2. The CESAR heat flow is somewhat greater than expected for a purely continental fragment but is consistent with crust of oceanic origin. The heat flow is similar to values obtained in Cretaceous back-arc basins. Based on the oceanic heat flow-age relationship, the heat flow constrains the age of the ridge to 60–120 million years.
  2. Carmack, Eddy C., et al. “The Arctic Ocean warms from below.” Geophysical research letters 39.7 (2012).  The old (∼450‐year isolation age) and near‐homogenous deep waters of the Canada Basin (CBDW), that are found below ∼2700 m, warmed at a rate of ∼0.0004°C yr−1 between 1993 and 2010. This rate is slightly less than expected from the reported geothermal heat flux (Fg ∼ 50 mW m−2). A deep temperature minimum Tmin layer overlies CBDW within the basin and is also warming at approximately the same rate, suggesting that some geothermal heat escapes vertically through a multi‐stepped, ∼300‐m‐thick deep transitional layer. Double diffusive convection and thermobaric instabilities are identified as possible mechanisms governing this vertical heat transfer. The CBDW found above the lower continental slope of the deep basin maintains higher temperatures than those in the basin interior, consistent with geothermal heat being distributed through a shallower water column, and suggests that heat from the basin interior does not diffuse laterally and escape at the edges.
  3. Björk, Göran, and Peter Winsor. “The deep waters of the Eurasian Basin, Arctic Ocean: Geothermal heat flow, mixing and renewal.” Deep Sea Research Part I: Oceanographic Research Papers 53.7 (2006): 1253-1271.  Hydrographic observations from four separate expeditions to the Eurasian Basin of the Arctic Ocean between 1991 and 2001 show a 300–700 m thick homogenous bottom layer. The layer is characterized by slightly warmer temperature compared to ambient, overlying water masses, with a mean layer thickness of 500±100 m and a temperature surplus of 7.0±2×10−3 °C. The layer is present in the deep central parts of the Nansen and Amundsen Basins away from continental slopes and ocean ridges and is spatially coherent across the interior parts of the deep basins. Here we show that the layer is most likely formed by convection induced by geothermal heat supplied from Earth’s interior. Data from 1991 to 1996 indicate that the layer was in a quasi steady state where the geothermal heat supply was balanced by heat exchange with a colder boundary. After 1996 there is evidence of a reformation of the layer in the Amundsen Basin after a water exchange. Simple numerical calculations show that it is possible to generate a layer similar to the one observed in 2001 in 4–5 years, starting from initial profiles with no warm homogeneous bottom layer. Limited hydrographic observations from 2001 indicate that the entire deep-water column in the Amundsen Basin is warmer compared to earlier years. We argue that this is due to a major deep-water renewal that occurred between 1996 and 2001.
  4. Edmonds, H. N., et al. “Discovery of abundant hydrothermal venting on the ultraslow-spreading Gakkel ridge in the Arctic Ocean.” Nature 421.6920 (2003): 252-256.  Submarine hydrothermal venting along mid-ocean ridges is an important contributor to ridge thermal structure1, and the global distribution of such vents has implications for heat and mass fluxes2 from the Earth’s crust and mantle and for the biogeography of vent-endemic organisms.3 Previous studies have predicted that the incidence of hydrothermal venting would be extremely low on ultraslow-spreading ridges (ridges with full spreading rates <2 cm yr-1—which make up 25 per cent of the global ridge length), and that such vent systems would be hosted in ultramafic in addition to volcanic rocks4,5. Here we present evidence for active hydrothermal venting on the Gakkel ridge, which is the slowest spreading (0.6–1.3 cm yr-1) and least explored mid-ocean ridge. On the basis of water column profiles of light scattering, temperature and manganese concentration along 1,100 km of the rift valley, we identify hydrothermal plumes dispersing from at least nine to twelve discrete vent sites. Our discovery of such abundant venting, and its apparent localization near volcanic centres, requires a reassessment of the geologic conditions that control hydrothermal circulation on ultraslow-spreading ridges.
  5. Sohn, Robert A., et al. “Explosive volcanism on the ultraslow-spreading Gakkel ridge, Arctic Ocean.” Nature 453.7199 (2008):  Roughly 60% of the Earth’s outer surface is composed of oceanic crust formed by volcanic processes at mid-ocean ridges. Although only a small fraction of this vast volcanic terrain has been visually surveyed or sampled, the available evidence suggests that explosive eruptions are rare on mid-ocean ridges, particularly at depths below the critical point for seawater (3,000 m)1. A pyroclastic deposit has never been observed on the sea floor below 3,000 m, presumably because the volatile content of mid-ocean-ridge basalts is generally too low to produce the gas fractions required for fragmenting a magma at such high hydrostatic pressure. We employed new deep submergence technologies during an International Polar Year expedition to the Gakkel ridge in the Arctic Basin at 85° E, to acquire photographic and video images of ‘zero-age’ volcanic terrain on this remote, ice-covered ridge. Here we present images revealing that the axial valley at 4,000 m water depth is blanketed with unconsolidated pyroclastic deposits, including bubble wall fragments (limu o Pele)2, covering a large (>10 km2) area. At least 13.5 wt% CO2 is necessary to fragment magma at these depths3, which is about tenfold the highest values previously measured in a mid-ocean-ridge basalt4. These observations raise important questions about the accumulation and discharge of magmatic volatiles at ultraslow spreading rates on the Gakkel ridge5 and demonstrate that large-scale pyroclastic activity is possible along even the deepest portions of the global mid-ocean ridge volcanic system.
  6. Piskarev, A., Elkina, D. Giant caldera in the Arctic Ocean: Evidence of the catastrophic eruptive event. Sci Rep 7, 46248 (2017). https://doi.org/10.1038/srep46248:  A giant caldera located in the eastern segment of the Gakkel Ridge could be firstly seen on the bathymetric map of the Arctic Ocean published in 1999. In 2014, seismic and multibeam echosounding data were acquired at the location. The caldera is 80 km long, 40 km wide and 1.2 km deep. The total volume of ejected volcanic material is estimated as no less than 3000 km3 placing it into the same category with the largest Quaternary calderas (Yellowstone and Toba). Time of the eruption is estimated as ~1.1 Ma. Thin layers of the volcanic material related to the eruption had been identified in sedimentary cores located about 1000 km away from the Gakkel Ridge. The Gakkel Ridge Caldera is the single example of a supervolcano in the rift zone of the Mid-Oceanic Ridge System.
  7. NATURE 2006, LIVESCIENCE 2008: Volcanoes Erupt Beneath Arctic Ice:  New evidence deep beneath the Arctic ice suggests a series of underwater volcanoes have erupted in violent explosions in the past decade. Hidden 2.5 miles beneath the Arctic surface, the volcanoes are up to a mile in diameter and a few hundred yards tall. They formed along the Gakkel Ridge, a rift system where the lithosphere is being pulled apart.  The extreme pressure from the overlying water makes it difficult for gas and magma to blast outward. The finding of jagged, glassy fragments of rock scattered around the volcanoes, suggest explosive eruptions occurred between 1999 and 2001. When the gas pressure gets high in the rift system it pops like a champagne bottle. The volcanoes have a major impact on the overlying water column. The eruptions discharge large amounts of carbon dioxide, helium, trace metals and geothermal heat into the water over long distances.
  8. Greve, Ralf. “Relation of measured basal temperatures and the spatial distribution of the geothermal heat flux for the Greenland ice sheet.” Annals of Glaciology 42 (2005): 424-432.  The thermo-mechanical, three-dimensional ice-sheet model SICOPOLIS is applied to the Greenland ice sheet. Simulations over two glacial–interglacial cycles are carried out, driven by a climatic forcing interpolated between present conditions and Last Glacial Maximum anomalies. Based on the global heat-flow representation by Pollack and others (1993), we attempt to constrain the spatial pattern of the geothermal heat flux by comparing simulation results to direct measurements of basal temperatures at the GRIP, NorthGRIP, Camp Century and Dye 3 ice-core locations. The obtained heat-flux map shows an increasing trend from west to east, a high-heat-flux anomaly around NorthGRIP with values up to 135 mWm–2 and a low-heat-flux anomaly around Dye 3 with values down to 20 mW m–2. Validation is provided by the generally good fit between observed and measured ice thicknesses. Residual discrepancies are most likely due to deficiencies of the input precipitation rate and further variability of the geothermal heat flux not captured here.
  9. Greve, Ralf, and Kolumban Hutter. “Polythermal three-dimensional modelling of the Greenland ice sheet with varied geothermal heat flux.” Annals of Glaciology 21 (1995): 8-12.  Computations over 50 000 years into steady state with Greve’s polythermal ice-sheet model and its numerical code are performed for the Greenland ice sheet with today’s climatological input (surface temperature and accumulation function) and three values of the geothermal heat flux: (42, 54.6, 29.4) mW m−2. It is shown that through the thermomechanical coupling the geometry as well as the thermal regime, in particular that close to the bed, respond surprisingly strongly to the basal thermal heat input. The most sensitive variable is the basal temperature field, but the maximum height of the summit also varies by more than ±100m. Furthermore, some intercomparison of the model outputs with the real ice sheet is carried out, showing that the model provides reasonable results for the ice-sheet geometry as well as for the englacial temperatures.
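The Carmack et al. (2012) numbers in item 2 above can be roughly cross-checked: a geothermal flux of ~50 mW/m² warming an isolated deep layer should produce a warming rate near the observed ~0.0004°C/yr. The ~1,000 m layer thickness is an assumed round number for the Canada Basin deep water below ~2,700 m; density and heat capacity are standard seawater values.

```python
# Rough consistency check of Carmack et al. (2012): does a geothermal
# flux of ~50 mW/m^2 account for the observed ~0.0004 C/yr warming of
# the deep Canada Basin? Layer thickness is an assumed round number.

flux = 0.050            # W/m^2, reported geothermal heat flux
rho_seawater = 1025.0   # kg/m^3, typical deep seawater density
cp_seawater = 3985.0    # J/(kg K), specific heat of seawater
layer_depth = 1000.0    # m, assumed thickness of the isolated deep layer
seconds_per_year = 3.156e7

warming_per_yr = flux / (rho_seawater * cp_seawater * layer_depth) * seconds_per_year
print(f"expected warming ~ {warming_per_yr:.1e} C/yr")
```

The result is a few times 10⁻⁴ °C/yr, consistent with the paper's statement that the observed warming rate is close to (slightly less than) what the reported geothermal heat flux would produce.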

 

[LINK TO THE HOME PAGE OF THIS SITE]

 

THIS POST IS A CRITICAL REVIEW OF A TED TALK ON CLIMATE ACTION [LINK] 

 

PART-1: ANTHROPOGENIC GLOBAL WARMING AND CLIMATE CHANGE

For human welfare and well-being and the continued advancement of human civilization, we need energy and we need to keep the price of energy as low as possible. Currently our energy derives mostly from the use of fossil fuels, a process that involves 26 gigatons of CO2 emissions per year (equivalent to 7.1 gigatons of carbon). Climate scientists have determined that these emissions are warming the planet and that continued warming will have negative effects in terms of extreme weather, sea level rise, ocean acidification, and ecosystem collapse. To prevent these negative effects, we need to reduce emissions to zero, because as long as there are human caused emissions there will be human caused warming.

Climate scientists found that surface temperature is proportional to cumulative fossil fuel emissions, a relationship stated in terms of the so-called TCRE PROPORTIONALITY, which shows a warming effect of 1C to 2C for each trillion tonnes of carbon (equivalent to 3.67 trillion tonnes of CO2). The TCRE (Transient Climate Response to Cumulative Emissions) states that the temperature rise from time-1 to time-2 will be proportional to the cumulative emissions from time-1 to time-2, at a rate of about 1C to 2C per teratonne of carbon in fossil fuel emissions. This equation is shown in the right frame of the image above. It establishes that fossil fuel emissions cause warming, and it also establishes the need for zero emissions, because the TCRE equation implies that as long as there are emissions there will be warming. This analysis establishes the need for climate action to achieve zero emissions.
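The TCRE arithmetic described above can be illustrated in a few lines. The emission rate (26 GtCO2/yr) and the TCRE range (1C to 2C per trillion tonnes of carbon) are the figures from the text; the factor 3.67 is the CO2-to-carbon molar mass ratio 44/12; the 100-year horizon is an illustrative choice, not a projection from the talk.

```python
# Illustration of the TCRE proportionality described above:
# warming = TCRE * cumulative carbon emissions.
# 26 GtCO2/yr and the 1-2 C per teratonne-carbon range are from the text;
# 3.67 is the molar mass ratio 44/12 converting CO2 mass to carbon mass.

GT_CO2_PER_YEAR = 26.0
CO2_TO_C = 44.0 / 12.0            # ~3.67 tonnes CO2 per tonne of carbon

carbon_gt_per_year = GT_CO2_PER_YEAR / CO2_TO_C   # ~7.1 GtC/yr, as in the text
years = 100                                        # illustrative horizon
cumulative_c = carbon_gt_per_year * years          # GtC after a century
for tcre in (1.0, 2.0):                            # C per 1000 GtC
    warming = tcre * cumulative_c / 1000.0
    print(f"TCRE={tcre} C/TtC: ~{warming:.1f} C of warming in {years} years")
```

At current emission rates this yields roughly 0.7C to 1.4C of additional warming per century, which is the sense in which the TCRE implies that warming continues for as long as emissions do.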

 

PART-2:  CLIMATE ACTION OPTIONS AND THEIR FEASIBILITY

bandicam 2020-04-05 17-10-33-622

There are five climate action options currently under consideration: carbon capture and sequestration (CCS), nuclear, wind, solar photovoltaic, and solar thermal. The tidal, geothermal, fusion, and biofuel options are not considered in this analysis.

THE CCS OPTION:  What you need to do here seems simple but it isn’t. You have to take all the CO2 after the fuel is burned, as it goes out the flue, pressurize it into a liquid, put it somewhere, and hope it stays there. Some pilot plants achieve this at the 60% to 80% level, but getting the technology up to 100% will be very tricky. Another issue is agreeing on where the CO2 should be sequestered. But the real issue is the determination and verification that all the CO2 has indeed been captured, that all of it has been sequestered, and that none of it is leaking back out. The volume of storage involved will surely be huge, much larger than in any waste disposal effort we have ever undertaken. So that’s a tough one and probably not feasible.

THE NUCLEAR OPTION: Like CCS, nuclear also has three big problems. They are (1) COST AND SAFETY: the cost, particularly in highly regulated countries, will be high, and we need to feel good about the safety of the plant, confident that nothing can go wrong even with human operators who can screw up. (2) NUCLEAR WEAPONS: we have to ensure that the fuel doesn’t get used for nuclear weapons. (3) WASTE DISPOSAL: the amount of waste is not large but there are a lot of safety concerns. These are three very tough problems that might be solvable, so we should keep working to find a solution, but as things stand today they keep the nuclear option from further consideration.

RENEWABLE ENERGY OPTIONS: That leaves us with the three renewable energy options: wind, solar photovoltaic, and solar thermal. Their great advantage is that they do not require fuel, but there are disadvantages. One is that the density of energy gathering in these technologies is dramatically less than that of a power plant. This is energy farming: you’re talking about many square miles, thousands of times more area than conventional power plants. Another disadvantage is that these are intermittent sources. The sun doesn’t shine every day and likewise the wind doesn’t blow all the time. Therefore, to depend on renewable sources you have to have backup power, some way of getting energy during the times when the sun doesn’t shine or the wind doesn’t blow. Currently, the technology available to solve the intermittency problem is batteries, but battery technology is far behind the curve.
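The intermittency problem described above can be made concrete with a storage-sizing sketch. All figures below are illustrative round numbers chosen for this sketch, not values from the talk: a hypothetical average load, an assumed lull length, and an order-of-magnitude EV battery pack size.

```python
# Hypothetical storage-sizing sketch for the intermittency problem
# described above. All figures are illustrative round numbers, not
# values from the talk.

avg_load_gw = 1.0          # average demand of a mid-size city (assumed)
windless_days = 3          # length of a wind/solar lull to bridge (assumed)
hours = windless_days * 24

storage_needed_gwh = avg_load_gw * hours           # GWh of backup energy
ev_pack_kwh = 80.0         # order-of-magnitude EV battery pack (assumed)
packs = storage_needed_gwh * 1e6 / ev_pack_kwh     # GWh -> kWh conversion
print(f"~{storage_needed_gwh:.0f} GWh of storage, ~{packs:,.0f} EV-size packs")
```

Even under these generous round numbers, bridging a three-day lull for one city-scale load requires on the order of a million EV-size battery packs, which illustrates why the text says battery technology is far behind the curve.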


 

PART-3: INNOVATION AND NEW TECHNOLOGIES NEEDED TO SOLVE THIS PROBLEM


A new technological breakthrough offered by a new company called TERRAPOWER may be the answer to the climate action puzzle described above. TerraPower has a traveling-wave reactor (TWR) which would run on depleted uranium. It could be dramatically safer and substantially cheaper than current nuclear reactors. Nuclear is ideal for dealing with climate change because it is the only carbon-free, scalable energy source that is available 24 hours a day. The problems with today’s reactors, such as the risk of accidents, have, it is claimed, been solved with innovation.

TerraPower has developed a new nuclear power technology that it says solves the problems with nuclear described above. At the TerraPower website at https://www.terrapower.com/a-world-in-transition-a-time-to-lead-toward-a-decarbonized-economy/ [LINK] , we learn that: “TerraPower has made technological advances in nuclear energy innovation to offer ‘Advanced Nuclear Technology for an Emissions-Free Economy’ that will allow us to get to zero emissions. The IPCC says that to avoid the worst effects of climate change, we must keep global temperature increases below 1.5 degrees Celsius. The continued use of nuclear energy is the only viable way to achieve this goal. And if we’re to bring electricity to the 840 million people who lack access, we actually need to increase the use of nuclear energy. Getting to a carbon-free future will also require us to develop strategies to produce chemicals, cement, metals and other products without burning fossil fuels.

“TerraPower’s advanced nuclear technologies can provide reliable, very high temperature heat for these and other industrial processes without emitting any carbon dioxide or methane. Most Americans either live in states with emissions-reduction targets or are served by utilities that have put forth ambitious emissions-reduction goals. This includes TerraPower’s home state of Washington, where our power provider, Energy Northwest, has offered a plan to meet our state’s mandate to eliminate carbon emissions from the grid with a combination of wind, solar, hydro, existing nuclear and next-generation nuclear technologies. With this combination in mind, our technology is specifically designed to integrate into a grid with high levels of renewables. In fact, we are currently working with Southern Company and Oak Ridge National Laboratory to use the high-temperature heat from our reactors to power a molten salt system that can store tremendous amounts of energy. That energy can be used to power the grid at peak demand when the wind isn’t blowing, or the sun isn’t shining. We view this technology as a key enabler of wind and solar technologies, and part of the fastest way to get to a 100% clean energy future. America and the world are transitioning to a future that requires creativity, resilience and persistence. To rediscover the normalcy we long for, all forms of emissions-free energy will be needed. Clean electricity has the potential to lift hundreds of millions out of poverty and to drive economic growth. Economic opportunity is getting more attention than ever, with society recognizing its various factors must be addressed once we are past the pandemic. The U.S. can and will rise to today’s challenges and lead the advanced nuclear transition.”

 

PART-4:  CRITICAL COMMENTARY

IN SUMMARY, WHAT WE FIND IN THIS BILL GATES TED TALK IS THAT (1) CLIMATE CHANGE CAN CAUSE GREAT HARM TO OUR CIVILIZATION AND TO THE WORLD’S ECOSYSTEMS AND THAT THEREFORE WE MUST STOP THE WARMING. (2) THE WARMING CAN BE STOPPED IF WE REDUCE EMISSIONS TO ZERO. (3) THERE ARE FIVE CONVENTIONAL OPTIONS FOR EMISSION REDUCTION BUT THEY ARE ALL FLAWED. (4) THEREFORE, WE NEED A TECHNOLOGICAL BREAKTHROUGH TO MOVE AWAY FROM FOSSIL FUELS AND TACKLE CLIMATE CHANGE. (5) JUST SUCH A TECHNOLOGICAL BREAKTHROUGH IS FOUND IN THE TERRAPOWER TRAVELING WAVE NUCLEAR REACTOR INNOVATION. IT OFFERS A PRACTICAL SOLUTION FOR CLIMATE ACTION TO ACHIEVE ZERO EMISSIONS. IT IS FURTHER NOTED THAT THE SPEAKER OF THIS TED TALK IS BILL GATES WHO IS ALSO THE CHAIRMAN OF THE BOARD OF TERRAPOWER. 

A FURTHER CRITICAL COMMENT IS MADE WITH RESPECT TO THE USE OF THE TCRE (TRANSIENT CLIMATE RESPONSE TO CUMULATIVE EMISSIONS) TO RELATE WARMING TO CO2 EMISSIONS. AS SHOWN IN RELATED POSTS, THE TCRE IS A SPURIOUS CORRELATION THAT HAS NO INTERPRETATION IN TERMS OF THE REAL WORLD VARIABLES IT OSTENSIBLY REPRESENTS [LINK#1] [LINK#2] [LINK#4] [LINK#5]
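The spurious-correlation argument against the TCRE can be illustrated numerically: cumulative sums of two random series that are unrelated by construction still correlate strongly, because both simply trend upward. The Monte Carlo sketch below is a simplified illustration of that statistical point, not a reproduction of the analysis in the linked posts:

```python
# Correlation between cumulative sums of two UNRELATED random series.
# The positive random "flows" stand in for quantities like annual
# emissions; their cumulative values correlate even though the
# underlying series are independent by construction.
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def cumsum(xs):
    """Running total of a sequence."""
    total, out = 0.0, []
    for v in xs:
        total += v
        out.append(total)
    return out

random.seed(1)  # deterministic for reproducibility
trials, high = 200, 0
for _ in range(trials):
    a = [random.random() for _ in range(100)]
    b = [random.random() for _ in range(100)]
    if abs(pearson_r(cumsum(a), cumsum(b))) > 0.8:
        high += 1
print(f"{high}/{trials} independent pairs show |r| > 0.8 between cumulative sums")
```

Almost every pair of independent series clears the |r| > 0.8 bar once both are accumulated, which is the sense in which a correlation between cumulative emissions and temperature can be an artifact of accumulation rather than evidence of a causal proportionality.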


[LINK TO THE HOME PAGE OF THIS SITE]

POSTS ON THE UNITED NATIONS [LINK] [LINK] [LINK] [LINK] [LINK] 

POSTS ON THE MONTREAL PROTOCOL [LINK] [LINK] [LINK] [LINK] 


THE PARIS CLIMATE AGREEMENT

ONE SMALL STEP FOR THE UN, ONE GIANT LEAP TO NOWHERE

THIS POST IS A CRITICAL ANALYSIS OF THE EVOLUTION OF THE PARIS AGREEMENT IN TERMS OF ITS STRUCTURAL AND HISTORICAL ODDITIES AND ITS LATER DISCONNECTED INTERPRETATIONS IN TERMS OF CLIMATE ACTION. THE REFERENCE ARTICLE IS: “Japan’s woeful climate plan amounts to science denial” PUBLISHED ONLINE BY CLIMATECHANGENEWS.COM [LINK]


ABSTRACT: THE CASE AGAINST JAPAN PRESENTED BELOW IS THAT (1) JAPAN’S “INTENDED NATIONALLY DETERMINED CONTRIBUTION” (INDC) THAT IT HAD SUBMITTED IN 2015 WAS IN FACT NATIONALLY DETERMINED AND NOT GLOBALLY DETERMINED AND NOT GLOBALLY IMPOSED TO COMPLY WITH A GLOBAL CARBON BUDGET. AND (2) THAT JAPAN’S “INTENDED NATIONALLY DETERMINED CONTRIBUTION” (INDC) THAT IT HAD SUBMITTED IN 2015 WAS IN FACT JUST AN INTENTION AND NOT A TARGET TO WHICH JAPAN HAD COMMITTED. THEREFORE, THE PARIS AGREEMENT IS AN AGREEMENT FOR THE PARTIES TO SUBMIT NATIONAL INTENTIONS NATION BY NATION, AND A COLLECTION OF THESE INDCs IS NOT AN AGREEMENT. THEREFORE THERE IS NO SUCH THING AS A PARIS AGREEMENT AND NO SUCH THING AS A PARIS GLOBAL CARBON BUDGET. THERE IS NO GLOBAL CARBON BUDGET THAT THE PARTIES HAVE SIGNED AND TO WHICH THE PARTIES HAVE COMMITTED. THE “PARIS AGREEMENT” IS A DESPERATE BUREAUCRATIC GAME BY THE UNITED NATIONS TO BE SEEN AS DELIVERING THE EXPECTATION OF THEIR MONTREAL PROTOCOL SUCCESS IN THE CLIMATE CHANGE ISSUE. THE PARIS AGREEMENT IS BUREAUCRATESE. THESE BUREAUCRATS WERE ABLE TO CLOSE THE DEAL BECAUSE THERE WAS NO DEAL, NO AGREEMENT, NO COMMITMENT, AND NO GLOBAL CARBON BUDGET. THIS IS WHY UN BUREAUCRATS NEED WORDS LIKE AMBITION AND MOMENTUM.  [LINK] [LINK] [LINK]


[Stock photo: an adult clown standing at a podium giving a speech]

(1)  A HISTORICAL CONTEXT FOR THE PARIS AGREEMENT:   The UN bureaucrats pictured above, and a few others, were stung by a bitter and embarrassing failure at the much anticipated and lavishly advertised COP15 in Copenhagen. It was a failure that had the markings of a near-death experience for the climate movement, but with extreme linguistic spin to describe failure as success, as in “This is the first step we are taking towards a green and low carbon future for the world, steps we are taking together. But like all first steps, the steps are difficult and they are hard.” (Gordon Brown)  [LINK] . Another notable quote that tries to explain failure as success: “I know what we really need is a legally binding treaty as quickly as possible”. The gathering of politicians and bureaucrats knew they had failed, but there was no shortage of colorful language to describe failure as success. It seemed that the failure in Copenhagen had taken the spirit out of the climate movement. This is the backdrop to the 2015 COP21 meeting in Paris; what occurred at COP21 can only be interpreted in this context. The Conferences of the Parties that followed the disaster in Copenhagen, from COP16 in Cancun, Mexico to COP20 in Lima, Peru, were non-events, providing only a venue for vacuous speeches about goals and the kind of action needed, with no action put to the floor and no action taken; except for a strategy, constructed in Lima, to describe a failure to agree as an agreement, which would emerge in Paris as an “Agreement”.

(2) THE COP 21 MEETING IN PARIS: The failure in Copenhagen and the relatively inactive COPs thereafter had created the so-called “ambition” in the UN that there had to be a worldwide emission reduction agreement signed in Paris. It is here that the word “ambition” entered the climate change language of the UN bureaucracy, perhaps based on the idea that COP15 had failed because of inadequate ambition. Thus it was thought that an agreement could be reached in Paris if the Parties (nation states that had signed the UNFCCC) had ambition. The meeting began with a warming target of 2C or less by the year 2100 and the global carbon budget that it implies. The climate action obligations of the Parties in meeting that carbon budget were discussed and draft emission reduction plans were presented, but no agreement could be reached. The differences among nations made it impossible to find a single emission reduction contract that all parties would sign.

(3)  IF YOU WON’T SIGN THE CONTRACT WE WROTE, SIGN THE ONE YOU WILL: The Lima plan was thus invoked. Instead of a single global emission reduction contract, each Party would submit its own “intended nationally determined contribution”, and the collection of these national intentions would be presented as the “Paris Agreement”.

APPENDIX-1: WHAT THE REFERENCE DOCUMENT SAYS

  1. Japan recently submitted a ‘new’ climate action plan largely reiterating its old targets for 2030. This is not just woefully inadequate for meeting larger climate goals, it also negates science and sets a bad precedent, especially as Covid-19 engulfs the planet. Already, the conduct of leading emitters such as the US has made the goals of the Paris Agreement more turbid.
  2. And Japan’s plan issued on 30 March, known as a Nationally Determined Contribution (NDC), further muddies the issue. The fight to address climate change has never been more serious, even though the 26th annual UN climate summit, Cop26, has been postponed in the wake of the global pandemic.
  3. The growth of greenhouse gas emissions over the past years has wreaked havoc across the world in the form of extreme weather events such as floods, forest fires, heatwaves and droughts. Still, the world continues to exorbitantly emit greenhouse gases into the atmosphere with China, the US, India, Russia and Japan accounting for 62% of all emissions in 2018. The per-capita carbon emissions show the skewed divisions between advanced industrial societies and the developing world, mandating an aggressive role for industrial societies in line with their historical responsibility in creating climate change.
  4. Japan’s NDC lacks ambition. To meet the Paris Agreement’s goals of keeping the temperature rise to well below 2 degrees Celsius, civil society and governments largely from island states and lesser developed world have urged greater ambition from the developed world in solving the climate crises.
  5. Being the fifth largest emitter globally and with per capita emissions close to those of the US, Japan has nevertheless shown punier responsibility towards the climate crisis. Its revised NDC is a reiteration of previous climate targets. Back in 2015, when Japan submitted its first NDC pledging to cut emissions by 26% by 2030 as compared to 2013 levels, environmental groups rated Japan’s efforts as among the weakest by developed countries.
  6. The revised NDC lacks an upward revision of targets, which is one of the major requirements of the Paris Agreement for submitting a revised NDC. It vaguely talks of steps to reduce long-term emissions without giving details. Additionally, Japan has overlooked another of the Paris Agreement’s key requirements, which is to have transparency in its domestic climate apparatus. Apparently, the revised NDC was formed without a proper public consultation process.
  7. What’s Japan up to? Ecologically, Japan is not well endowed with key natural resources and has relied on huge imports of fossil fuels to fulfill its development needs. Well before the ambitious solar rooftop programs in countries including Germany, China and India, Japanese programs like Sun Shine, Moon Light, and the Global Technology Program drew huge success. Until 2000, Japan installed and supplied half of the solar panels on the planet.
  8. Governments are due to submit tougher climate plans in 2020, despite the Cop26 delay. Before the 2011 Fukushima nuclear disaster, nuclear energy stood at 14% of total power generation. During the Kyoto Protocol regime, Japan had a target of a 6% reduction from 1990 levels in the first commitment period of the Kyoto Protocol, 2008 to 2012. However, rather than aiming for real emission cuts, Japan relied on offsets and buying credits from other countries.
  9. Measures such as energy efficiency, a Cool Plan for halving emissions by 2050 (without a base year) and a coal tax have been touted, but they are weak and highly insufficient. Often in climate negotiations, Japan has worked closely with the biggest historical polluter – the United States – in common stances such as resisting ambition, pushing for “clean coal” and related technologies and refusing to fulfill finance and technology transfer to developing countries.
  10. The Japan-United States Strategic Energy Partnership (JUSEP) in 2017 for promoting coal and controversial nuclear technologies in the Southeast Asian region is one such example. Applauded initially, it did not translate into real climate actions. Fukushima changed Japanese energy dimensions drastically. In the decade to 2010, the Japanese solar photo-voltaic (PV) industry became uncompetitive with foreign rivals. After the Senkaku Island dispute (2010) with China, the import restrictions of rare earth elements (like neodymium, indium, praseodymium, dysprosium, and terbium) further dented the solar program.
  11. This was worsened by poor oversight to see the effects of major solar programs outside Japan’s innovations system. Its wind program did not take off largely due to stringent environmental and technical norms regarding seismic zones. Rather than focusing on restructuring its solar industry, Japan has opted for an easier option and switched to coal power in a big way.
  12. Japan has more than 90 coal plants and plans to operate 22 additional new plants. It relies on coal for more than a third of its power generation needs, resulting in an upward increase in carbon emissions. The Ministry of Economy, Trade and Industry (METI), backed by the fossil fuel lobby, prepares Basic Energy Plan which largely influences Japan’s climate targets.
  13. Japan sticks to 2030 climate goals, draws fire for a ‘disappointing’ lack of ambition. With a weak NDC submission, Japan has also lost an opportunity to emerge as a climate leader in Asia, where giants such as India and China are making more efforts towards reducing dependence on fossil fuels and switching to renewables amid structural shifts in their economies.
  14. For a start, Japan needs to urgently revise its climate targets as mandated by science, shift massively to renewables with a clear plan for de-carbonisation, and create transparent climate structures domestically involving all stakeholders. The coronavirus and economic slowdowns should be no excuse for climate inaction. The world cannot afford to passively watch rich emitters’ shenanigans. The non-actions of climate rogues need to be called out unanimously, and raising climate ambition is no longer a matter of choice.

APPENDIX 2: A RELEVANT BIBLIOGRAPHY

  1. Clémençon, Raymond. “The two sides of the Paris climate agreement: Dismal failure or historic breakthrough?.” (2016): 3-24.  The December 2015 Paris Climate Agreement is better than no agreement. This is perhaps the best that can be said about it. The scientific evidence on global warming is alarming, and the likelihood depressingly small that the world can stay below a 2°C—even less a 1.5°C warming—over pre-industrial times. The Paris Agreement does not provide a blueprint for achieving these stabilization objectives. But it is ultimately the hope, however small, that a fundamental and rapid energy transition is achievable that must inform social and political behavior and activism in the coming years. In this sense, the Paris outcome is an aspirational global accord that will trigger and legitimize more climate action around the world. The question is whether this will happen quickly enough and at a sufficient scale to avoid disastrous warming of the planet. What is certain is that it will not occur without determined and far-reaching government intervention in energy markets in the next few years, particularly in the largest polluting countries.
  2. Young, Oran R. “The Paris Agreement: destined to succeed or doomed to fail?.” Politics and Governance 4.3 (2016): 124-132.  Is the 2015 Paris Agreement on climate change destined to succeed or doomed to fail? If all the pledges embedded in the intended nationally determined contributions (INDCs) are implemented fully, temperatures at the Earth’s surface are predicted to rise by 3–4 °C, far above the agreement’s goal of limiting increases to 1.5 °C. This means that the fate of the agreement will be determined by the success of efforts to strengthen or ratchet up the commitments contained in the national pledges over time. The first substantive section of this essay provides a general account of mechanisms for ratcheting up commitments and conditions determining the use of these mechanisms in international environmental agreements. The second section applies this analysis to the specific case of the Paris Agreement. The conclusion is mixed. There are plenty of reasons to doubt whether the Paris Agreement will succeed in moving from strength to strength in a fashion resembling experience with the Montreal Protocol on ozone depleting substances. Nevertheless, there is more room for hope in this regard than those who see the climate problem as unusually malign, wicked, or even diabolical are willing to acknowledge.
  3. Christoff, Peter. “The promissory note: COP 21 and the Paris Climate Agreement.” Environmental Politics 25.5 (2016): 765-787.  The 2015 UN climate negotiations in Paris resulted in an inclusive, binding treaty that succeeds the Kyoto Protocol. In contrast to the failure at Copenhagen in 2009, the Paris negotiations are therefore seen as a major diplomatic success that has regenerated faith in the United Nations Framework Convention on Climate Change as a forum for dynamic multilateralism. The Paris Agreement provides a robust framework for ratcheting up efforts to combat global warming. However, the Agreement’s value will remain unclear for some time. The historical path to the Paris accord is outlined, and a preliminary assessment is offered of its key elements and outcomes.
  4. Höhne, Niklas, et al. “The Paris Agreement: resolving the inconsistency between global goals and national contributions.” Climate Policy 17.1 (2017): The adoption of the Paris Agreement in December 2015 moved the world a step closer to avoiding dangerous climate change. The aggregated individual intended nationally determined contributions (INDCs) are not yet sufficient to be consistent with the long-term goals of the agreement of ‘holding the increase in global average temperature to well below 2°C’ and ‘pursuing efforts’ towards 1.5°C. However, the Paris Agreement gives hope that this inconsistency can be resolved. We find that many of the contributions are conservative and in some cases may be over-achieved. We also find that the preparation of the INDCs has advanced national climate policy-making, notably in developing countries. Moreover, provisions in the Paris Agreement require countries to regularly review, update and strengthen these actions. In addition, the significant number of non-state actions launched in recent years is not yet adequately captured in the INDCs. Finally, we discuss decarbonization, which has happened faster in some sectors than expected, giving hope that such a transition can also be accomplished in other sectors. Taken together, there is reason to be optimistic that eventually national action to reduce emissions will be more consistent with the agreed global temperature limits. The next step for the global response to climate change is not only implementation, but also strengthening, of the Paris Agreement. To this end, national governments must formulate and implement policies to meet their INDC pledges, and at the same time consider how to raise their level of ambition. For many developing countries, implementation and tougher targets will require financial, technological and other forms of support. 
The findings of this article are highly relevant for both national governments and support organizations in helping them to set their implementation priorities. Its findings also put existing INDCs in the context of the Paris Agreement’s global goals, indicating the extent to which current national commitments need to be strengthened, and possible ways in which this could be done.
  5. Gupta, Joyeeta. “The Paris climate change agreement: China and India.” Climate Law 6.1-2 (2016): 171-181.  This paper assesses how the Paris Agreement on climate change affects China and India. Taking a third-world approaches to international law, it argues that patterns of exploitation are repeated in different fields. The UNFCCC required developed countries to reduce their emissions before developing countries would be required to do so. While some developed countries are keeping to their side of the bargain, others are failing to do so. Nevertheless, China and India have accepted an agreement with targets for all countries which requires considerable sacrifices in the energy field but possible gains in the water field. While both countries have agreed to reduce the rate of growth of their emissions, they have high expectations of climate finance, which are unlikely to be fulfilled. Their commitments require major changes to national policy, scarcely the sort of tinkering that the no-regrets policy in India has achieved.
  6. Ollila, Antero. “Challenging the scientific basis of the Paris climate agreement.” International Journal of Climate Change Strategies and Management (2019).  The future emission and temperature trends are calculated according to a baseline scenario by the IPCC, which is the worst-case scenario RCP8.5. The selection of RCP8.5 can be criticized because the present CO2 growth rate of 2.2 ppm y−1 would have to be 2.8 times greater, meaning a CO2 increase from 402 to 936 ppm. The emission target scenario of COP21 is 40 GtCO2 equivalent, and the results of this study confirm that the temperature increase stays below 2°C by 2100 per the IPCC calculations. The IPCC-calculated temperature for 2016 is 1.27°C, 49 per cent higher than the observed average of 0.85°C in 2000. Two explanations have been identified for this significant difference in the IPCC’s calculations: the model is too sensitive to CO2 increase, and the positive water feedback does not exist. The climate sensitivity of 0.6°C found in some critical research studies means that the temperature increase would stay below the 2°C target, even if emissions follow the baseline scenario. This is highly unlikely because the estimated conventional oil and gas reserves would be exhausted around the 2060s if the present consumption rate continues.
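The growth-rate claim in the Ollila (2019) entry can be checked with simple arithmetic; the 2016 start year is an assumption inferred from the abstract, and the result comes out near the stated factor of 2.8:

```python
# Checking the arithmetic in the Ollila (2019) entry: going from 402 ppm
# to 936 ppm by 2100 requires a mean growth rate well above 2.2 ppm/year.
present_rate = 2.2             # ppm per year, as stated
co2_start, co2_end = 402, 936  # ppm, as stated
years = 2100 - 2016            # assumed span for the increase

required_rate = (co2_end - co2_start) / years
print(f"required mean rate: {required_rate:.2f} ppm/yr")
print(f"ratio to present rate: {required_rate / present_rate:.2f}x")
```

The required mean rate works out to roughly 6.4 ppm per year, about 2.9 times the present rate, consistent with the abstract's "2.8 times greater" given the rounded inputs.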

SUMMARY AND CONCLUSION

THE CASE AGAINST JAPAN PRESENTED ABOVE IS THAT (1) JAPAN’S “INTENDED NATIONALLY DETERMINED CONTRIBUTION” (INDC) THAT IT HAD SUBMITTED IN 2015 WAS IN FACT NATIONALLY DETERMINED AND NOT GLOBALLY DETERMINED AND NOT GLOBALLY IMPOSED TO COMPLY WITH A GLOBAL CARBON BUDGET. AND (2) THAT JAPAN’S “INTENDED NATIONALLY DETERMINED CONTRIBUTION” (INDC) THAT IT HAD SUBMITTED IN 2015 WAS IN FACT JUST AN INTENTION AND NOT A TARGET TO WHICH JAPAN HAD COMMITTED. THEREFORE, THE PARIS AGREEMENT IS AN AGREEMENT FOR THE PARTIES TO SUBMIT NATIONAL INTENTIONS NATION BY NATION, AND A COLLECTION OF THESE INDCs IS NOT AN AGREEMENT. THEREFORE THERE IS NO SUCH THING AS A PARIS AGREEMENT AND NO SUCH THING AS A PARIS GLOBAL CARBON BUDGET. THERE IS NO GLOBAL CARBON BUDGET THAT THE PARTIES HAVE SIGNED AND TO WHICH THE PARTIES HAVE COMMITTED. THE “PARIS AGREEMENT” IS BEST UNDERSTOOD IN TERMS OF THE DESPERATION OF THE UNITED NATIONS TO BE SEEN AS MEETING THE EXPECTATION THAT THEY WILL DELIVER THEIR MONTREAL PROTOCOL SUCCESS IN THE CLIMATE CHANGE ISSUE. THE PARIS AGREEMENT IS BUREAUCRATESE. THESE BUREAUCRATS WERE ABLE TO CLOSE THE DEAL BECAUSE THERE WAS NO DEAL, NO AGREEMENT, NO COMMITMENT, AND NO GLOBAL CARBON BUDGET. THIS IS WHY UN BUREAUCRATS NEED WORDS LIKE AMBITION AND MOMENTUM.  [LINK] [LINK] [LINK]


 

[RELATED POST ON METHANE]

 

FEAR EMISSIONS NOT METHANE TIME BOMBS

THE CONVERSATION APRIL 2 2020: 

 

 

WHAT THE ARTICLE SAYS

  1. The Arctic is predicted to warm faster than anywhere else in the world this century, perhaps by as much as 7°C. These rising temperatures threaten one of the largest long-term stores of carbon on land: permafrost. Permafrost is permanently frozen soil. The generally cold temperatures in the Arctic keep soils there frozen year-on-year. Plants grow in the uppermost soil layers during the short summers and then decay into soil, which freezes when the winter snow arrives. Over thousands of years, carbon has built up in these frozen soils, and they’re now estimated to contain twice the carbon currently in the atmosphere. Some of this carbon is more than 50,000 years old, which means the plants that decomposed to produce that soil grew over 50,000 years ago. These soil deposits are known as “Yedoma”, which are mainly found in the East Siberian Arctic, but also in parts of Alaska and Canada.
  2. As the region warms, the permafrost is thawing, and this frozen carbon is being released to the atmosphere as carbon dioxide and methane. Methane release is particularly worrying, as it’s a highly potent greenhouse gas. Arctic landscapes are changing rapidly as the region warms. But a recent study suggested that the release of methane from ancient carbon sources, sometimes referred to as the Arctic methane “bomb”, didn’t contribute much to the warming that occurred during the last deglaciation 18,000 to 8,000 years ago, a period that climate scientists study intently, as it’s the last time global temperatures rose by 4°C, which is roughly what is predicted for the world by 2100. This study suggested to many that ancient methane emissions are not something we should be worried about this century. But in new research, we found that this optimism may be misplaced.
  3. We went to the East Siberian Arctic to compare the age of different forms of carbon found in the ponds, rivers and lakes. These waters thaw during the summer and leak greenhouse gases from the surrounding permafrost. We measured the age of the carbon dioxide, methane and organic matter found in these waters using radiocarbon dating and found that the carbon released to the atmosphere was overwhelmingly “young”. Where there was intense permafrost thaw, we found that the oldest methane was 4,800 years old, and the oldest carbon dioxide was 6,000 years old. But over this vast Arctic landscape, the carbon released was mainly from young plant organic matter.
  4. This means that the carbon produced by plants growing during each summer growing season is rapidly released over the next few summers. This rapid turnover releases much more carbon than the thaw of older permafrost, even where severe thaw is occurring. This means that carbon emissions from a warming Arctic may not be driven by the thawing of an ancient frozen carbon bomb, as it’s often described. Instead, most emissions may be relatively new carbon that is produced by plants that grew fairly recently.
  5. Arctic lakes are growing sources of methane emissions to the atmosphere. What this shows is that the age of the carbon released from the warming Arctic is less important than the amount and the form it takes. Methane is 34 times more potent than carbon dioxide as a greenhouse gas over a 100-year timeframe. The East Siberian Arctic is a generally flat and wet landscape, and these are conditions that produce lots of methane, as there is less oxygen in the soils that might otherwise create carbon dioxide during thaws instead. As a result, potent methane could well dominate the greenhouse gas emissions from the region.
  6. Since most of the emissions from the Arctic this century will likely be from “young” carbon, we may not need to worry about ancient permafrost adding substantially to modern climate change. But the Arctic will still be a huge source of carbon emissions, as carbon that was soil or plant matter only a few hundred years ago leaches to the atmosphere. That will increase as warmer temperatures lengthen growing seasons in the Arctic summer.
  7. The fading spectre of an ancient methane time bomb is cold comfort. The new research should urge the world to act boldly on climate change, to limit how much natural processes in the Arctic can contribute to the problem. 
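The potency comparison in the article reduces to a CO2-equivalence multiplication. A minimal sketch; the GWP factor of 34 is the one quoted in the article, while the emission quantity is an illustrative assumption:

```python
# CO2-equivalent of a methane release using the 100-year GWP factor of 34
# quoted in the article above. The release mass is an illustrative assumption.
GWP100_CH4 = 34  # warming potency of CH4 relative to CO2 over 100 years

def co2_equivalent(methane_tonnes, gwp=GWP100_CH4):
    """Warming-equivalent tonnes of CO2 for a given methane emission."""
    return methane_tonnes * gwp

print(co2_equivalent(1_000_000))  # 1 Mt of CH4 is equivalent to 34 Mt of CO2
```

This multiplication is why the article argues that the form the released carbon takes (methane versus carbon dioxide) matters more than its age.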

 

CRITICAL COMMENTARY

THIS CLIMATE ACTIVISM VERBIAGE TRANSLATES INTO PLAIN ENGLISH AS FOLLOWS: WE KNOW THAT WE SPENT DECADES GETTING YOU TO FEAR THE METHANE BOMB COOKING IN THE ARCTIC, AS SEEN FOR EXAMPLE IN THIS SCARY LECTURE BY CLIMATE SCIENTIST AND ARCTIC EXPERT PETER WADHAMS  [LINK] , WHERE PROFESSOR WADHAMS SAYS: “methane plumes being emitted and this is thought to be due to the fact that offshore permafrost in that area is now thawing because of the warmer water temperatures in summer. This is releasing methane hydrates as methane gas with methane plumes rising and coming up to the surface and being emitted because when methane is released from only 50 or 70 meters, it doesn’t have time to dissolve and it comes out into the atmosphere, and this is a very big climatic issue for the planet”. BUT THEN WE REALIZED THAT THE METHANE BOMB DOES NOT SERVE OUR ACTIVISM AGAINST FOSSIL FUELS OR THE COP26 AMBITION OF MR GUTERRES, SO WE HAD TO FIND A WAY TO STEER THE LANGUAGE BACK TO FOSSIL FUELS AND TO THE CLIMATE ACTION AMBITION THAT MR GUTERRES WANTS.

 

 

 

 

bandicam 2020-04-03 19-21-58-964

[LINK TO THE HOMEPAGE OF THIS SITE]

STATEMENT ON COVID-19 EPIDEMIOLOGY

BY DR SUCHARIT BHAKDI  

Emeritus of the Johannes-Gutenberg-University in Mainz and longtime director of the Institute for Medical Microbiology

MARCH 2020

APRIL 2021 UPDATE: NEW RESEARCH PAPER SENT IN BY DR CRIS LINGLE: LINK

CITATION: Academic Editor: Paul B. Tchounwou. Int. J. Environ. Res. Public Health 2021, 18(8), 4344; https://doi.org/10.3390/ijerph18084344. Received: 20 March 2021 / Revised: 15 April 2021 / Accepted: 16 April 2021 / Published: 20 April 2021. (This article belongs to the Section Environmental Health)

ABSTRACT: Many countries introduced the requirement to wear masks in public spaces for containing SARS-CoV-2, making it commonplace in 2020. Up until now, there has been no comprehensive investigation as to the adverse health effects masks can cause. The aim was to find, test, evaluate and compile scientifically proven related side effects of wearing masks. For a quantitative evaluation, 44 mostly experimental studies were referenced, and for a substantive evaluation, 65 publications were found. The literature revealed relevant adverse effects of masks in numerous disciplines. In this paper, we refer to the psychological and physical deterioration as well as multiple symptoms described because of their consistent, recurrent and uniform presentation from different disciplines as a Mask-Induced Exhaustion Syndrome (MIES). We objectified evaluation evidenced changes in respiratory physiology of mask wearers with significant correlation of O2 drop and fatigue (p < 0.05), a clustered co-occurrence of respiratory impairment and O2 drop (67%), N95 mask and CO2 rise (82%), N95 mask and O2 drop (72%), N95 mask and headache (60%), respiratory impairment and temperature rise (88%), but also temperature rise and moisture (100%) under the masks. Extended mask-wearing by the general population could lead to relevant effects and consequences in many medical fields.

FINDINGS: A total of 65 scientific papers on masks qualified for a purely content-based evaluation. These included 14 reviews and two meta-analyses. Of the mathematically evaluable, groundbreaking 44 papers with significant negative mask effects (p < 0.05 or n ≥ 50%), 22 were published in 2020 (50%), and 22 were published before the COVID-19 pandemic. Of these 44 publications, 31 (70%) were of experimental nature, and the remainder were observational studies (30%). Most of the publications in question were English (98%). Thirty papers referred to surgical masks (68%), 30 publications related to N95 masks (68%), and only 10 studies pertained to fabric masks (23%). Despite the differences between the primary studies, we were able to demonstrate a statistically significant correlation in the quantitative analysis between the negative side effects of blood-oxygen depletion and fatigue in mask wearers with p = 0.0454. In addition, we found a mathematically grouped common appearance of statistically significant confirmed effects of masks in the primary studies (p < 0.05 and n ≥ 50%) as shown in Figure 2. In nine of the 11 scientific papers (82%), we found a combined onset of N95 respiratory protection and carbon dioxide rise when wearing a mask. We found a similar result for the decrease in oxygen saturation and respiratory impairment with synchronous evidence in six of the nine relevant studies (67%). N95 masks were associated with headaches in six of the 10 studies (60%). For oxygen deprivation under N95 respiratory protectors, we found a common occurrence in eight of 11 primary studies (72%). Skin temperature rise under masks was associated with fatigue in 50% (three out of six primary studies). The dual occurrence of the physical parameter temperature rise and respiratory impairment was found in seven of the eight studies (88%). 
A combined occurrence of the physical parameters temperature rise and humidity/moisture under the mask was found in six of six studies (100%) with significant readings of these parameters. The literature review confirms that relevant, undesired medical, organ- and organ-system-related phenomena accompanying mask wearing occur in the fields of internal medicine (at least 11 publications), neurology (seven publications), psychology (more than 10 publications), psychiatry (three publications), gynecology (three publications), dermatology (at least 10 publications), ENT medicine (four publications), dentistry (one publication), sports medicine (four publications), sociology (more than five publications), occupational medicine (more than 14 publications), microbiology (at least four publications), epidemiology (more than 16 publications), pediatrics (four publications), and environmental medicine (four publications). We present the general physiological effects as a basis for all disciplines, followed by the results from the different medical fields of expertise, closing with pediatrics in the final paragraph.

Physiological Effects of wearing Covid masks: As early as 2005, an experimental dissertation (randomized crossover study) demonstrated that wearing surgical masks by healthy medical personnel (15 subjects, 18–40 years old) leads to measurable physical effects, with elevated transcutaneous carbon dioxide values after 30 min [13]. The role of dead space volume and CO2 retention as a cause of the significant change (p < 0.05) in blood gases on the way to hypercapnia, which was still within the limits, was discussed in this article. Masks expand the natural dead space (nose, throat, trachea, bronchi) outwards beyond the mouth and nose. An experimental increase in the dead space volume during breathing increases carbon dioxide retention at rest and under exertion, and correspondingly the carbon dioxide partial pressure in the blood. As well as addressing the increased rebreathing of carbon dioxide due to the dead space, scientists also debate the influence of the increased breathing resistance when using masks. According to the scientific data, mask wearers as a whole show a striking frequency of typical, measurable physiological changes associated with masks. In a recent intervention study conducted on eight subjects, measurements of the gas content for oxygen (measured in O2 Vol%) and carbon dioxide (measured in CO2 ppm) in the air under a mask showed a lower oxygen availability even at rest than without a mask. A Multi-Rae gas analyzer was used for the measurements; at the time of the study, the device was the most advanced portable multivariant real-time gas analyzer, also used in rescue medicine and operational emergencies. The absolute concentration of oxygen in the air under the masks was significantly lower. Simultaneously, a health-critical carbon dioxide concentration roughly 30 times that of normal room air (464 ppm without a mask) was measured under the mask.

THE BOTTOM LINE: These phenomena are responsible for a statistically significant increase in the carbon dioxide blood content of mask wearers. Another consequence of masks is a statistically significant drop in blood oxygen saturation. A drop in blood oxygen partial pressure, with an accompanying increase in heart rate as well as an increase in respiratory rate, has been proven. A statistically significant measurable increase in pulse rate and decrease in oxygen saturation after the first and second hour under a disposable mask were reported by researchers in a mask intervention study they conducted on 53 employed neurosurgeons. In another experimental study, surgical masks caused, after only 90 min of physical activity, a significant increase in heart rate (p < 0.001) and respiratory rate, accompanied by a significant measurable increase in transcutaneous carbon dioxide, along with a corresponding feeling of exhaustion and a sensation of heat and itching due to moisture penetration of the masks. The subjects also complained of breathing difficulties during the exercise. The increased rebreathing of carbon dioxide from the enlarged dead space volume in mask wearers can reflexively trigger increased respiratory activity with increased muscular work, as well as the resulting additional oxygen demand and oxygen consumption. This is a reaction to pathological changes in the sense of an adaptation effect. 

NOW BACK TO SUCHARIT BHAKDI

SOURCE DOCUMENT: [LINK]  THE VIDEO IN GERMAN:  [LINK]  


AN EDITED AND ABBREVIATED VERSION OF THE BHAKDI ANALYSIS

My concern is the unforeseeable socioeconomic consequences of the drastic containment measures currently being applied. We need to look at both the advantages and disadvantages of restricting public life in terms of its long-term effects. To this end, I am confronted with FIVE UNANSWERED QUESTIONS.

ITEM #1: STATISTICS: In infectology, a distinction must be made between infection and disease; therefore, only patients with symptoms such as fever or cough should be included in the statistics as new cases. It is not sufficient to test positive for COVID-19 to be counted in the disease statistics.

ITEM #2: DANGER: A number of different coronaviruses have been with us for some time largely unnoticed by the media. If it should turn out that the COVID-19 virus should not be ascribed a significantly higher risk potential than the coronaviruses already circulating, all countermeasures now employed would obviously become unnecessary. The very credible International Journal of Antimicrobial Agents will soon publish a paper that addresses this issue. Preliminary results of the study lead to the conclusion that the new virus is NOT different from the corona viruses of the past in terms of danger. The title of the paper is “SARS-CoV-2: Fear versus Data“.

ITEM #3: DISSEMINATION: According to a report in the Süddeutsche Zeitung, not even the Robert Koch Institute knows exactly how many have tested positive for COVID-19. But there is no doubt that there has been a rapid increase in the number of cases in Germany as the volume of tests increases. It is possible therefore that the virus has already spread unnoticed into the whole of the population. If so, it means that the official death rate of 206 deaths from 37,300 infections by 26 March 2020, at a rate of 0.55%, is too high. This would also mean that it isn’t really possible to prevent the spread of this virus.

ITEM #4: MORTALITY: The fear of a rise in the death rate in Germany (currently 0.55 percent) is currently the subject of intense media interest. Many people worry that it could go up to 7% or 10%, as it has in Spain and Italy. This fear likely derives from the practice of attributing deaths to the virus solely on the basis that the patient had tested positive for Covid at the time of death. This practice is flawed. To attribute a death to an agent, it must first be determined that the agent played a significant role in the death. The Association of the Scientific Medical Societies of Germany includes this principle in its guidelines, saying that to declare a cause of death, the causal chain is more important than the underlying disease. A more critical analysis of medical records should be undertaken to determine how many deaths can be attributed to this virus.

ITEM #5. COMPARABILITY

The use of Italy as a reference scenario for evaluating the risk posed to our population by this virus is flawed because the role of the virus in the Italian fatality statistics is unclear. There are external factors at play unique to Italy that made Italy particularly vulnerable, and it has not been determined that these factors also apply to Germany. One such factor is the high level of air pollution in Northern Italy, which would account for more than 8,000 fatalities even without the virus. Air pollution increases the risk of viral lung diseases in the very young and the very old. A household feature of Italy is the cohabitation of the very young and the very old (27.4% of the population), such that the very young can pass the virus to the very old, who are at high risk of death from it. This social feature is also found in Spain, at the higher percentage of 33.5%, but it is not found in Germany. Therefore these countries do not serve as a model for understanding the spread and fatality rate of the virus in Germany. Yet another factor that makes it difficult to compare Germany with Italy and Spain is the relatively better equipment of Germany's health care facilities.

************************************************************************

MORE ARGUMENTS AGAINST LOCKDOWNS IN MEDICAL RESEARCH  

López, Leonardo, and Xavier Rodó. “The end of social confinement and COVID-19 re-emergence risk.” Nature Human Behaviour 4.7 (2020): 746-755.  The lack of effective pharmaceutical interventions for SARS-CoV-2 raises the possibility of COVID-19 recurrence. We explore different post-confinement scenarios by using a stochastic modified SEIR (susceptible–exposed–infectious–recovered) model that accounts for the spread of infection during the latent period and also incorporates time-decaying effects due to potential loss of acquired immunity, people’s increasing awareness of social distancing and the use of non-pharmaceutical interventions. Our results suggest that lockdowns should remain in place for at least 60 days to prevent epidemic growth, as well as a potentially larger second wave of SARS-CoV-2 cases occurring within months. The best-case scenario should also gradually incorporate workers in a daily proportion at most 50% higher than during the confinement period. We show that decaying immunity and particularly awareness and behaviour have 99% significant effects on both the current wave of infection and on preventing COVID-19 re-emergence. Social distancing and individual non-pharmaceutical interventions could potentially remove the need for lockdowns.
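The SEIR framework referenced in these papers can be illustrated with a minimal deterministic sketch. Note that the published models are stochastic and include immunity-decay and awareness terms; the parameter values below (R0 ≈ 3, a 5.2-day latent period, a 10-day infectious period) are illustrative assumptions, not the authors' fitted values.

```python
# Minimal deterministic SEIR sketch (illustrative only).
# Compartments are fractions of the population: S + E + I + R = 1.

def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """Advance the SEIR compartments one Euler time step."""
    new_exposed = beta * s * i * dt      # S -> E: contact with infectious
    new_infectious = sigma * e * dt      # E -> I: end of latent period
    new_recovered = gamma * i * dt       # I -> R: recovery
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def simulate(days, beta=0.3, sigma=1 / 5.2, gamma=1 / 10, i0=1e-4):
    """Run the epidemic and report peak prevalence and final attack rate."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    peak_day, peak_i = 0, i0
    for day in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i, r

peak_day, peak_i, attack_rate = simulate(365)
```

Under these assumptions the epidemic peaks well within a year and a large majority of the population ends up in the recovered compartment, which is why the papers above focus on when and how fast susceptible people are released back into circulation.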

Peto, Julian, et al. “Universal weekly testing as the UK COVID-19 lockdown exit strategy.” The Lancet 395.10234 (2020): 1420-1421. 

The British public have been offered alternating periods of lockdown and relaxation of restrictions as part of the coronavirus disease 2019 (COVID-19) lockdown exit strategy.  Extended periods of lockdown will increase economic and social damage, and each relaxation will almost certainly trigger a further epidemic wave of deaths. These cycles will kill tens of thousands, perhaps hundreds of thousands, of people before a vaccine becomes available, with the most disadvantaged groups experiencing the greatest suffering.  There is an alternative strategy: universal repeated testing.  We recommend evaluation of weekly severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antigen testing of the whole population in an entire city as a demonstration site (preferably several towns and cities, if possible), with strict household quarantine after a positive test. Quarantine would end when all residents of the household test negative at the same time; everyone else in the city can resume normal life, if they choose to. This testing programme should be assessed for feasibility in one or more cities with 200 000–300 000 people. Such a feasibility study should begin as soon as possible and continue after the current lockdown ends, when the infection rate will be fairly low but rising. The rate at which the number of infections then rises or falls, compared with the rest of the UK, will be apparent within a few weeks. A decision to proceed with national roll-out can then be made, beginning in high-risk areas and limited only by reagent supplies. If the epidemic is controlled, hundreds of thousands of lives could be saved, intensive care units will no longer be overloaded, and the adverse effects of lockdown on mental ill health and unemployment will end.
Block, Per, et al. “Social network-based distancing strategies to flatten the COVID-19 curve in a post-lockdown world.” Nature Human Behaviour (2020): 1-9. 
Social distancing and isolation have been widely introduced to counter the COVID-19 pandemic. Adverse social, psychological and economic consequences of a complete or near-complete lockdown demand the development of more moderate contact-reduction policies. Adopting a social network approach, we evaluate the effectiveness of three distancing strategies designed to keep the curve flat and aid compliance in a post-lockdown world. These are: limiting interaction to a few repeated contacts akin to forming social bubbles; seeking similarity across contacts; and strengthening communities via triadic strategies. We simulate stochastic infection curves incorporating core elements from infection models, ideal-type social network models and statistical relational event models. We demonstrate that a strategic social network-based reduction of contact strongly enhances the effectiveness of social distancing measures while keeping risks lower. We provide scientific evidence for effective social distancing that can be applied in public health messaging and that can mitigate negative consequences of social isolation.
Rawson, Thomas, et al. “How and when to end the COVID-19 lockdown: an optimization approach.” Frontiers in Public Health 8 (2020): 262.
Countries around the world are in a state of lockdown to help limit the spread of SARS-CoV-2. However, as the number of new daily confirmed cases begins to decrease, governments must decide how to release their populations from quarantine as efficiently as possible without overwhelming their health services. We applied an optimal control framework to an adapted Susceptible-Exposure-Infection-Recovery (SEIR) model framework to investigate the efficacy of two potential lockdown release strategies, focusing on the UK population as a test case. To limit recurrent spread, we find that ending quarantine for the entire population simultaneously is a high-risk strategy, and that a gradual re-integration approach would be more reliable. Furthermore, to increase the number of people that can be first released, lockdown should not be ended until the number of new daily confirmed cases reaches a sufficiently low threshold. We model a gradual release strategy by allowing different fractions of those in lockdown to re-enter the working non-quarantined population. Mathematical optimization methods, combined with our adapted SEIR model, determine how to maximize those working while preventing the health service from being overwhelmed. The optimal strategy is broadly found to be to release approximately half the population 2–4 weeks from the end of an initial infection peak, then wait another 3–4 months to allow for a second peak before releasing everyone else. We also modeled an “on-off” strategy, of releasing everyone, but re-establishing lockdown if infections become too high. We conclude that the worst-case scenario of a gradual release is more manageable than the worst-case scenario of an on-off strategy, and caution against lockdown-release strategies based on a threshold-dependent on-off mechanism. 
The two quantities most critical in determining the optimal solution are the transmission rate and the recovery rate, where the latter is defined as the fraction of infected people on any given day who then become classed as recovered. We suggest that the accurate identification of these values is of particular importance to the ongoing …
Ruktanonchai, Nick Warren, et al. “Assessing the impact of coordinated COVID-19 exit strategies across Europe.” Science (2020).
As rates of new COVID-19 cases decline across Europe due to non-pharmaceutical interventions such as social distancing policies and lockdown measures, countries require guidance on how to ease restrictions while minimizing the risk of resurgent outbreaks. Here, we use mobility and case data to quantify how coordinated exit strategies could delay continental resurgence and limit community transmission of COVID-19. We find that a resurgent continental epidemic could occur as many as 5 weeks earlier when well-connected countries with stringent existing interventions end their interventions prematurely. Further, we found that appropriate coordination can greatly improve the likelihood of eliminating community transmission throughout Europe. In particular, synchronizing intermittent lockdowns across Europe meant half as many lockdown periods were required to end community transmission continent-wide.

************************************************************************

5/12/2020: ECONOMIST DR CHRISTOPHER LINGLE ON LOCKDOWNS AND CAUSE OF DEATH DETERMINATIONS. 

The ongoing experience with the SARS-CoV-2 virus and its companion disease, COVID-19, is a matter of balancing health and medical aspects of the associated pandemic against economic costs and consequences of policy responses. Alarmed by estimates from models predicting high morbidity rates (e.g., Imperial College London model), politicians and health bureaucrats imposed lockdowns, mandated social distancing or required citizens to shelter-in-place. As it turned out, worst-case scenarios of healthcare systems and hospitals being overwhelmed were exaggerated with hotspots like Wuhan, Lombardy and NYC bearing the brunt of the wave of illnesses and treatments. The policy responses to this pandemic provide several important lessons.

First, policy makers must understand that estimates derived from models can never be treated as reflecting objective reality. Besides the subjective nature of the assumptions within these models, it is as misguided as it is misleading to treat data sets as if they are devoid of subjectivity. What this means is that claims that public policy is guided by “science and evidence” ring hollow. Science is about testing and questioning what we see around us, not about establishing a consensus.

Second, policy makers should understand and undertake cost-benefit analysis of their decisions before jumping in over their heads. In large measure, the material harm and human costs of the economic collapse are the outcome of what now seems to have been rash judgements based upon faulty modelling.

Third, the goals of policies must be tempered by reality rather than rhetoric or political impulses. For example, Illinois Governor J.B. Pritzker announced that his economic reopening plan would wait “until we’re able to eradicate” the novel coronavirus.

But to imagine a world that is 100% free of corona or any other virus is an unrealistic goal. As it turns out, moving the number of cases closer to zero will cause the economic and human costs of the collateral damage to rise asymptotically. The way that epidemics or pandemics end is when a large enough proportion of the population acquires immunity either from the antibodies in reaction to infections or the development of a vaccine or treatment.

Vaccines must not be thought of as a magic bullet. On the one hand, vaccines for influenza that have been developed and used for decades are effective in the range of 35% to 45%. On the other hand, the RNA nature of coronaviruses makes them harder to pin down due to more rapid mutations. As it is, there are no vaccines for HIV, Zika, or Ebola despite so much having been invested over such a long period. Indeed, there are no “cures” for these deadly pathogens, only treatments.

Insisting that restrictions be lifted only after a vaccine is discovered is also misguided. Discovery of a vaccine is only one part of the puzzle. Even with a miraculous discovery of a vaccine within a few months, it will take considerable time to develop the capacity to produce it at scales that guarantee medically acceptable purity and safety. Then there is the matter of distributing it and inoculating a large enough population. All of this extends the time horizon into the distant future.

In all events, a question arises concerning which ratio it is that we should be tracking all the way to zero. Is it the case fatality rate, the infection fatality rate, or the crude mortality rate? None of these ratios is determined purely and objectively from the data, because their interpretation in immunology and virology has a subjective component; yet once published, these numbers are treated as objective data.

Yet even the published number for the total number of deaths from COVID-19 is subject to interpretation, because the death of a person known to have been infected by the virus creates a strong bias to record the cause of death as COVID-19. In the US, hospitals were guided by an incentive whereby they receive higher financial assistance for such patients, which is likely to contribute to an upward bias in reported cases. Similarly, regardless of co-morbidity, instructions were given that all deaths of individuals infected with COVID-19 were to be counted as COVID-19 fatalities, even if they died with it rather than from it.

Details on the human transmissibility and age-specific impacts of the Coronavirus were reported during an earlier SARS pandemic in 2002-03. Details can be found in this 2009 paper [LINK] by Chris Ka-fai Li and Xiaoning Xu where we learn that “The reemergence of the Coronavirus in humans remains high due to the large animal reservoirs of coronavirus and the genome instability of coronaviruses RNA.”

It is extremely difficult to make accurate estimates of the true risks arising from human encounters with SARS-CoV-2 or any other disease during the initial stages of an epidemic, making it difficult to formulate appropriate policy responses. In all events, pandemics and epidemics first require medical rather than political responses.

In assessing the risk of death from the virus, there are several indicators, i.e., the “case fatality rate”, the “crude mortality rate”, and the “infection fatality rate”.

The Case Fatality Rate (CFR) is the ratio between confirmed deaths and confirmed cases, but it is a poor measure of the overall mortality risk of the disease. As it is, CFR relies on a clear assessment of the number of confirmed cases, something that is very fluid. This is worsened by the fact that the total number of deaths from COVID-19 is subject to interpretation. Anyone testing positive for SARS-CoV-2 will be known to clinical staff, so if they should die, it will be recorded on the death certificate as Covid-19 even if it was not the cause of death.

As such, it is very likely that the way deaths from COVID-19 are being recorded will tend to give the appearance that Covid-19 is causing an excess number of deaths. Data from the CDC & WHO indicate a CFR for the USA of about 5.9%; France 15%, UK 14.4%, Italy 14%, Netherlands 12.75%, Sweden 12.2%, Spain 10%, Mexico 10%, Canada 7.1%, Brazil 6.8%, Ireland 6.3%, & Switzerland 6.1%.
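The CFR arithmetic behind these percentages is simple division, which is exactly why the choice of what counts as a "confirmed case" and a "confirmed death" dominates the result. A minimal sketch, using the German figures quoted earlier in the post (206 deaths out of 37,300 infections by 26 March 2020):

```python
# Case fatality rate (CFR): confirmed deaths / confirmed cases, in percent.
# The inputs are whatever the recording conventions produce, so the output
# inherits all of their subjectivity.

def case_fatality_rate(confirmed_deaths, confirmed_cases):
    """Return the CFR as a percentage of confirmed cases."""
    if confirmed_cases == 0:
        raise ValueError("no confirmed cases")
    return 100.0 * confirmed_deaths / confirmed_cases

# German data quoted above: 206 deaths, 37,300 infections by 26 March 2020.
germany_cfr = case_fatality_rate(206, 37_300)   # ~0.55%
```

If undetected infections mean the true case count is, say, ten times larger, the same function returns a CFR ten times smaller, which is the dissemination argument made in ITEM #3 above.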

While the negative impact of lockdowns and similar restrictions on movement on economic activity is very clear, it is difficult to know their health impacts. Even so, it is likely that they make matters worse for several reasons. First, they simply delay the inevitable spread of a highly contagious disease through the wider community. Second, they keep people indoors, away from sunshine and fresh(er) air, confined in close quarters with other people. Third, some are confined with people suffering from other communicable diseases like TB that will not be diagnosed, so those diseases cannot be treated and the infected cannot be isolated (NB: the “poor” are likely to be hit the hardest, especially in Third World countries). Fourth, extending the timeline of moving towards “herd immunity” might actually lead to a higher overall long-term death rate.

Numbers 1 and 2 will almost surely contribute to a “2nd wave” while number 3 leads to avoidable deaths from otherwise treatable diseases. So, it is not the lifting of a lockdown so much as instituting it in the first place that leads to another round of infections.

In light of the slow progress towards & low probability of discovering, producing and administering an effective vaccine on a large scale, the immediate path towards “herd immunity” is the natural development of antibodies against the SARS-CoV-2 virus. As it is, the evidence from an earlier iteration of the SARS virus suggests that the antibodies formed in that case were effective for up to 12 months & in some cases perhaps as much as 48 months.

For its part, this novel virus does not care about the choice of timing: either accept an initial short, sharp shock or suffer through a series of lockdowns in response to successive waves. Public Choice theory, a subset of economics, predicts that politicians and bureaucrats tend to avoid the political costs of immediate events, inducing them to follow what is in their own best interest, which they believe is to engage in more interventions rather than fewer. So it is better for them to claim they are being guided by “science and evidence” while issuing policy declarations that really reflect their own personal incentives as political agents. In all events, they have no “skin in the game” of the lockdowns, since neither they nor most public sector employees face the job losses that have occurred in the private sector.

“Successful” lockdowns will almost certainly be followed by a nasty second wave. In turn, this will almost surely lead to more economic damage while undermining the institutional and constitutional restraints on the democratic process, imposing greater limits on human liberty. Another concern about lockdowns, social distancing mandates, and shelter-in-place orders is that if they involve heavy-handed enforcement, the impacts are likely to fall heavily on racial minorities, the poor, and those less able to defend themselves.

ANTARCTICA-RAINFOREST

(1)  THE ANTARCTICA RAIN FOREST STORY IN THE GUARDIAN

Think of Antarctica and it is probably sweeping expanses of ice, and the odd penguin, that come to mind. But at the time of the dinosaurs the continent was covered in swampy rainforest. Now experts say they have found the most southerly evidence yet of this environment in plant material extracted from beneath the seafloor in West Antarctica. The Cretaceous, 145-66 MYA (million years ago), was a warm period during which Earth had a greenhouse climate and vegetation grew in Antarctica. This new discovery reveals that swampy rainforests were thriving near the south pole about 90m years ago but that temperatures were higher than expected. Such conditions could only have been produced if carbon dioxide levels were far higher than previously thought and there were no glaciers in the region. We didn’t know that this Cretaceous greenhouse climate was that extreme. It shows what carbon dioxide can do. In 2017, the scientists drilled a narrow hole down into the seafloor near the Pine Island glacier in west Antarctica. This location is about 2,000km (1,200 miles) from today’s south pole, but about 90m years ago it was about 900km from the pole. The hole was drilled and material extracted using a remotely operated rig. It is like a spaceship sitting on the seafloor. The first few meters of material were glacial sediment, dating to about 25,000 years ago, while the next 25m were sandstone, dating to about 45m years ago.  In the next three metres the scientists found exciting new material in mudstone, topped by a coal-like material, and packed with soil from the ancient forest, complete with roots, spores and pollen from conifer trees and ferns. They found evidence of more than 65 different kinds of plants within the material, revealing that the landscape near the south pole would have been covered in a swampy conifer rainforest similar to that found today in the north-western part of the South Island of New Zealand. The material was dated to between 92 and 83 MYA. 
It would have had average annual temperatures of 12-13C, which is warmer than in Germany today. The analysis of chemicals left by photosynthetic cyanobacteria revealed that surface waters were at a pleasant 20C. Computer modelling shows that such an environment so close to the south pole would only have been possible if greenhouse gas concentrations were far higher than previously thought and the land surface were covered in vegetation; there were no ice sheets present. Studying the Antarctic ecosystem is hugely important for understanding past and future climate change, because unabated fossil fuel use could push concentrations of carbon dioxide to levels similar to those of 90m years ago by the start of the next century. If we have an atmosphere of more than 1,000 parts per million of carbon dioxide, we are committing ourselves to a future planet that has little to no ice.

 

(2)  THE CITED RESEARCH PAPER

Article published 01 April 2020: “Temperate rainforests near the South Pole during peak Cretaceous warmth.” Johann P. Klages, Ulrich Salzmann, […] the Science Team of Expedition PS104. Nature, volume 580, pages 81–86 (2020). Abstract: The mid-Cretaceous period was one of the warmest intervals of the past 140 million years, driven by atmospheric carbon dioxide levels of around 1,000 parts per million by volume. In the near absence of proximal geological records from south of the Antarctic Circle, it is disputed whether polar ice could exist under such environmental conditions. Here we use a sedimentary sequence recovered from the West Antarctic shelf—the southernmost Cretaceous record reported so far—and show that a temperate lowland rainforest environment existed at a palaeolatitude of about 82° S during the Turonian–Santonian age (92 to 83 million years ago). This record contains an intact 3-metre-long network of in situ fossil roots embedded in a mudstone matrix containing diverse pollen and spores. A climate model simulation shows that the reconstructed temperate climate at this high latitude requires a combination of both atmospheric carbon dioxide concentrations of 1,120–1,680 parts per million by volume and a vegetated land surface without major Antarctic glaciation, highlighting the important cooling effect exerted by ice albedo under high levels of atmospheric carbon dioxide.

 

 

(3)  CRITICAL COMMENTARY

  1. In the cited research paper, the relevant evidence is a concurrence of two events in West Antarctica in the Cretaceous. These are the high level of atmospheric CO2 and the extreme year round warmth of West Antarctica that is necessary to explain the existence of a lush green rainforest in the region as implied by the fossil roots in the mudstone matrix.
  2. The researchers concluded from this concurrence that the two events were causally related, proposing that the Cretaceous rainforest in Antarctica must have been the result of warmth caused by the greenhouse effect of the high level of atmospheric CO2. This causation interpretation is flawed.
  3. The concurrence of events A and B by itself does not imply either causation or the direction of the causation. If causation is to be inferred, one should consider that the concurrence of two events A and B could mean that A causes B or that B causes A or that a third unobserved variable causes both A and B. Of course, it could also mean that the concurrence was incidental and that it does not have a causation implication.
  4. Specifically, in this case, the conclusion drawn from the observed concurrence of high atmospheric CO2 and evidence of a rainforest is that high atmospheric CO2 caused warming by way of the greenhouse effect of carbon dioxide and that the warmth thus caused had created the conditions in West Antarctica that explain the rainforest. This specific interpretation of the concurrence is arbitrary and likely driven by the atmosphere bias in climate science.
  5. With no humans to burn fossil fuels in the Cretaceous, the source of the carbon that raised atmospheric CO2 concentration to 1000 ppm must have been the mantle. The leakage from the mantle to the atmosphere that supplied the CO2 must also have supplied geothermal heat. Therefore, the concurrence may not have the implication that A causes B but that a third unobserved variable causes both A and B and that third unobserved variable in this research is geological activity.
  6. Further support for the geological interpretation of this event is that West Antarctica is a geologically active region, as explained in a related post [LINK] in terms of the West Antarctic Rift System and the Marie Byrd Mantle Plume. Therefore, it should be considered that geological events had caused both the high CO2 and the warmth.
  7. Yet another argument for the geological interpretation of the data, and against an atmospheric source of the warmth by way of the greenhouse effect of atmospheric CO2, is that the greenhouse effect requires sunshine, but Antarctica does not have sunshine all year. It gets sunshine only six months of the year. In the other six months, Antarctica is dark, with no sunshine for the earth to re-radiate and for CO2 to trap.
  8. Also, it is not possible for the greenhouse effect of atmospheric CO2 to turn an icy surface into rocks and dirt because ice does not absorb and re-radiate incident solar radiation at infrared frequencies. It reflects sunlight in an albedo effect at high frequencies that cannot be trapped by CO2, whatever its atmospheric concentration.
  9. CONCLUSION: In view of the above considerations, we find that the interpretation of the data in the cited paper is biased. The source of the bias is likely a combination of the atmosphere bias of climate science [LINK] [LINK] and the activist need of climate science to motivate climate action in its war against fossil fuels by creating a sufficient fear of atmospheric CO2 warming [LINK] [LINK].
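The confounding argument in points 3 through 5 above can be illustrated with a small simulation. This is only a hedged sketch, not a model of the Cretaceous: a hidden variable C (standing in for geological activity) drives both A (standing in for atmospheric CO2) and B (standing in for surface warmth), and A never acts on B at all, yet A and B end up strongly correlated.

```python
import random

random.seed(42)

# Hypothetical illustration of a confounder: C drives both A and B,
# but A has no direct effect on B and vice versa.
n = 10000
c = [random.gauss(0, 1) for _ in range(n)]          # hidden common cause
a = [ci + random.gauss(0, 0.3) for ci in c]         # A = C + noise
b = [ci + random.gauss(0, 0.3) for ci in c]         # B = C + noise (no A term)

def corr(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    vx = sum((xi - mx) ** 2 for xi in x)
    vy = sum((yi - my) ** 2 for yi in y)
    return cov / (vx * vy) ** 0.5

# A and B correlate strongly even though neither causes the other.
print(round(corr(a, b), 2))
```

With these assumed noise levels the printed correlation is close to 0.9, so observing that A and B move together cannot, by itself, distinguish "A causes B" from "a third variable causes both", which is exactly the ambiguity point 3 describes.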

 

LEFT: DECCAN TRAPS VOLCANISM   RIGHT: CIRCULAR REASONING

 

Courtesy: Rafe Champion [LINK] 


 

THE MID CRETACEOUS SUPERPLUME WAS A VERY LARGE MANTLE PLUME OF MAGMA THAT ROSE UP THROUGH THE OCEAN AND VENTED INTO THE ATMOSPHERE. SUPERPLUME MAGMA COMES FROM CLOSE TO THE CORE, WHERE THE TEMPERATURE IS ABOUT 5000C.

THE CLIMATE SCIENCE POSITION IS THAT THE CO2 THUS BROUGHT TO THE ATMOSPHERE CAUSED THE WARMING KNOWN AS THE MID CRETACEOUS WARM PERIOD. THIS THEORY ASSUMES, IMPOSSIBLY, THAT SUPERPLUME MAGMA CAN FLOW UP FROM THE MANTLE THROUGH THE OCEAN TO THE ATMOSPHERE WITHOUT WARMING THE OCEAN. THE CLIMATE SCIENCE THEORY OF THE MID CRETACEOUS WARM PERIOD IS AN EXTREME FORM OF THE ATMOSPHERE BIAS THAT PLAGUES THIS DISCIPLINE.

AN EXCELLENT CRITIQUE OF THIS CLIMATE SCIENCE POSITION IS PROVIDED BY BEN WOUTERS IN THE COMMENT SECTION BELOW, WHERE HE EXPLAINS THAT THE MID CRETACEOUS WARMTH WAS THE CREATION OF AN OCEAN WARMED BY THE SUPERPLUME.