How Much Will Reduction in Shipping Emissions Stoke Global Warming?

A controversial new research paper claims that a major reduction in emissions of SO2 (sulfur dioxide) since 2020, due to a ban on the use of high-sulfur fuels by ships, could result in an additional 0.16 degrees Celsius (0.29 degrees Fahrenheit) of global warming over seven years – over and above the warming from other sources. The paper was published by a team of NASA scientists.

This example of the law of unintended consequences, if correct, would add to the warming from human CO2 emissions, as well as to that caused by water vapor injected into the stratosphere by the massive underwater eruption of the Hunga Tonga–Hunga Haʻapai volcano in 2022. The eruption, as I described in a previous post, is likely to raise global temperatures by 0.035 degrees Celsius (0.063 degrees Fahrenheit) during the next few years.

It’s been known for some time that SO2, including that emanating from ship engines, reacts with water vapor in the air to produce aerosols. Sulfate aerosol particles linger in the atmosphere, reflecting incoming sunlight and also acting as condensation nuclei for the formation of reflective clouds. Both effects cause global cooling.

In fact, it was the incorporation of sulfate aerosols into climate models that enabled the models to successfully reproduce the cooling observed between 1945 and about 1975, a feature that had previously eluded modelers.   

On January 1, 2020, new IMO (International Maritime Organization) regulations lowered the maximum allowable sulfur content in international shipping fuels to 0.5%, a significant reduction from the previous 3.5%. This air pollution control measure has reduced cloud formation and the associated reflection of shortwave solar radiation, both reductions having inadvertently increased global warming.

As would be expected, the strongest effects show up in the world’s most traveled shipping lanes: the North Atlantic, the Caribbean and the South China Sea. The figure on the left below depicts the researchers’ calculated contribution from reduced cloud fraction to additional radiative forcing resulting from the SO2 reduction. The figure on the right shows by how much the concentration of condensation nuclei in low maritime clouds has fallen since the regulations took effect.

The cloud fraction contribution is 0.11 watts per square meter. The other contributions, from a reduction in cloud water content and the drop in reflection of solar radiation, add up to a total of 0.2 watts per square meter extra radiative forcing averaged over the global ocean, arising from the new shipping regulations, the NASA scientists say.

The effect is concentrated in the Northern Hemisphere since there is relatively little shipping traffic in the Southern Hemisphere. The researchers calculate the boost to radiative forcing to be 0.32 watts per square meter in the Northern Hemisphere, but only 0.1 watts per square meter in the Southern Hemisphere. The hemispheric difference in their calculations of absorbed shortwave solar radiation (near the earth’s surface) can be seen in the following figure, to the right of the dotted line.

According to the paper, the additional radiative forcing of 0.2 watts per square meter since 2020 corresponds to added global warming of 0.16 degrees Celsius (0.29 degrees Fahrenheit) over seven years. Such an increase implies a warming rate of 0.24 degrees Celsius (0.43 degrees Fahrenheit) per decade from reduced SO2 emissions alone, which is more than double the average warming rate since 1880 and roughly 25% higher than the mean warming rate since 1980 of approximately 0.19 degrees Celsius (0.34 degrees Fahrenheit) per decade.

The researchers remark that the forcing increase of 0.2 watts per square meter is a staggering 80% of the measured gain in forcing from other sources since 2020, the net planetary heat uptake since then being 0.25 watts per square meter.

However, these controversial claims have been heavily criticized, and not just by climate change skeptics. Climate scientist and modeler Zeke Hausfather points out that total warming will be less than the estimated 0.16 degrees Celsius (0.29 degrees Fahrenheit), because the new shipping regulations have only a minimal effect on land, which covers 29% of the earth’s surface.

And, states Hausfather, the researchers’ energy balance model “does not reflect real-world heat uptake by the ocean, and no actual climate model has equilibration times anywhere near that fast.” Hausfather’s own 2023 estimate of additional warming due to the use of low-sulfur shipping fuels was a modest 0.045 degrees Celsius (0.081 degrees Fahrenheit) after 30 years, as shown in the figure below.

Further criticism of the paper’s methodology comes from Laura Wilcox, associate professor at the National Centre for Atmospheric Science at the University of Reading. Wilcox told media that the paper makes some “very bold statements about temperature changes … which seem difficult to justify on the basis of the evidence.” She also has concerns about the mathematics of the researchers' calculations, including the possibility that the effect of sulfur emissions is double-counted.

Next: Philippines Court Ruling Deals Deathblow to Success of GMO Golden Rice

The Scientific Reality of the Quest for Net Zero

Often lost in the lemming-like drive toward Net Zero is the actual effect that reaching the goal of zero net CO2 emissions by 2050 will have. A new paper published by the CO2 Coalition demonstrates how surprisingly little warming would actually be averted by adoption of Net-Zero policies. The fundamental reason is that CO2 warming is already close to saturation, with each additional tonne of atmospheric CO2 producing less warming than the previous tonne.

The paper, by atmospheric climatologist Richard Lindzen together with atmospheric physicists William Happer and William van Wijngaarden, shows that for worldwide Net-Zero CO2 emissions by 2050, the averted warming would be 0.28 degrees Celsius (0.50 degrees Fahrenheit). If the U.S. were to achieve Net Zero on its own by 2050, the averted warming would be a tiny 0.034 degrees Celsius (0.061 degrees Fahrenheit).

These estimates assume that water vapor feedback, which is thought to amplify the modest temperature rise from CO2 acting alone, boosts warming without feedback by a factor of four – the assertion made by the majority of the climate science community. With no feedback, the averted warming would be 0.070 degrees Celsius (0.13 degrees Fahrenheit) for worldwide Net-Zero CO2 emissions, and a mere 0.0084 degrees Celsius (0.015 degrees Fahrenheit) for the U.S. alone.

The paper’s calculations are straightforward. As the authors point out, the radiative forcing of CO2 is proportional to the logarithm of its concentration in the atmosphere. So the temperature increase from now to 2050 caused by a concentration increment ΔC would be

ΔT = S log2 (C/C0),

in which S is the temperature increase for a doubling of the atmospheric CO2 concentration from its present value C0; C = C0 + ΔC, or what the CO2 concentration in 2050 would be if no action is taken to reduce CO2 emissions by then; and log2 is the binary (base 2) logarithm.

The saturation effect for CO2 comes from this logarithmic dependence of ΔT on the concentration ratio C/ C0, so that each CO2 concentration increment results in less warming than the previous equal increment. In the words of the paper’s authors, “Greenhouse warming from CO2 is subject to the law of diminishing returns.”
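
As a quick numerical illustration, with S the doubling sensitivity defined above: raising the CO2 concentration from 400 to 500 ppm would produce a warming of S log2 (500/400) ≈ 0.32 S, while the next equal increment from 500 to 600 ppm would produce only S log2 (600/500) ≈ 0.26 S.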

If emissions were to decrease by 2050, the CO2 concentration would be less than C in the equation above, or C – δC where δC represents the concentration decrement. The slightly smaller temperature increase ΔT′ would then be

ΔT′ = S log2 ((C – δC)/C0),

and the averted temperature increase δT from Net-Zero policies is δT = ΔT - ΔT′, which is

δT = S {log2 (C/C0) - log2 ((C – δC)/C0)} = S log2 (C/(C – δC)) = - S log2 (1 – δC/C).

This can be rewritten as

δT = - S ln (1 – δC/C)/ ln (2), in which ln is the natural (base e) logarithm.

Now using the power series expansion – ln (1 – x) = x + x²/2 + x³/3 + x⁴/4 + … and recognizing that δC is much smaller than C, so that all terms in the expansion of – ln (1 – δC/C) beyond the first can be ignored,

δT = S (δC/C) / ln (2).

Finally, writing the concentration increment without emissions reduction ΔC as RΔt, where R is the constant rate of concentration growth from emissions over the time interval Δt, we have

C = C0 + ΔC = C0 + RΔt, and the concentration decrement for reduced emissions δC is

δC = ∫₀^Δt R (1 – t/Δt) dt = RΔt/2, which gives

δT = S RΔt/ (2 ln (2) (C0 + RΔt)).

It’s this latter equation which yields the numbers for averted warming quoted above. In the case of the U.S. going it alone, δT needs to be multiplied by 0.12, which is the U.S. fraction of total world CO2 emissions in 2024.
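
For readers who want to check the numbers, here is a minimal Python sketch of this final equation. The inputs C0, R, Δt and the two sensitivity values S are illustrative assumptions chosen to be consistent with the quoted results, not the paper’s exact inputs.

```python
import math

# Sketch of the paper's final formula: dT = S*R*dt / (2 ln2 (C0 + R*dt)).
# All inputs are illustrative assumptions, not the paper's exact values.
C0 = 420.0       # assumed present CO2 concentration, ppm
R = 2.5          # assumed constant CO2 concentration growth, ppm per year
dt = 26          # years remaining to 2050
US_SHARE = 0.12  # U.S. fraction of world CO2 emissions in 2024

def averted_warming(S, share=1.0):
    """Averted warming in deg C for sensitivity S (deg C per CO2 doubling)."""
    return share * S * R * dt / (2 * math.log(2) * (C0 + R * dt))

for S, label in ((2.9, "with water vapor feedback"), (0.72, "no feedback")):
    print(f"{label}: world {averted_warming(S):.3f} C, "
          f"U.S. alone {averted_warming(S, US_SHARE):.4f} C")
# Prints roughly 0.280 / 0.0336 C and 0.070 / 0.0084 C, matching the quoted figures
```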

Such small amounts of averted warming show the folly of the quest for Net Zero. While avoiding 0.28 degrees Celsius (0.50 degrees Fahrenheit) of warming globally is arguably a desirable goal, it’s extremely unlikely that the whole world will comply with Net Zero. China, India and Indonesia are currently indulging in a spate of building new coal-fired power plants which belch CO2, and only a limited number of those will be retired by 2050.

Developing countries, especially in Africa, are in no mood to hold back on any form of fossil fuel burning either. Many of these countries, quite reasonably, want to reach the same standard of living as the West – a lifestyle that has been attained through the availability of cheap, fossil fuel energy. Coal-fired electricity is the most affordable remedy for much of Africa and Asia.

In any case, few policy makers in the West have given much thought to the cost of achieving Net Zero. Michael Kelly, emeritus Prince Philip Professor of Technology at the University of Cambridge and an expert in energy systems, has calculated that the cost of a Net-Zero economy by 2050 in the U.S. alone will be at least $35 trillion, and this does not include the cost of educating the necessary skilled workforce.

Professor Kelly says the target is simply unattainable, a view shared by an ever-increasing number of other analysts. In his opinion, “the hard facts should put a stop to urgent mitigation and lead to a focus on adaptation (to warming).”

Next: How Much Will Reduction in Shipping Emissions Stoke Global Warming?

No Convincing Evidence That Extreme Wildfires Are Increasing

According to a new research study by scientists at the University of Tasmania, the frequency and magnitude of extreme wildfires around the globe more than doubled between 2003 and 2023, despite a decline in the total worldwide area burned annually. The study authors link this trend to climate change.

Such a claim doesn’t stand up to scrutiny, however. First, the authors seem unaware of the usual definition of climate change, which is a long-term shift in weather patterns over a period of at least 30 years. Their finding of a 21-year trend in extreme wildfires is certainly valid, but the study interval is too short to draw any conclusions about climate.

Paradoxically, the researchers mention an earlier 2017 study of theirs, stating that the 12-year period of that study of extreme wildfires was indeed too short to identify any temporal climate trend. Why they think 21 years is any better is puzzling!

Second, the study makes no attempt to compare wildfire frequency and magnitude over the last 21 years with those from decades ago, when there were arguably as many hot-burning fires as now. Such a comparison would allow the claim of more frequent extreme wildfires today to be properly evaluated.

Although today’s satellite observations of wildfire intensity far outnumber the observations made before the satellite era, there’s still plenty of old data that could be analyzed. Satellites measure what is called the FRP (fire radiative power), which is the total fire radiative energy less the energy dissipated through convection and conduction. The older FI (fire intensity) also measures the energy released by a fire, expressed as the energy released per unit time per unit length of fire front; FRP, usually measured in MW (megawatts), is obviously related to FI.

The study authors define extreme wildfires as those with daily FRPs exceeding the 99.99th percentile. Satellite FRP data for all fires in the study period was collected in pixels 1 km on a side, each retained pixel containing just one wildfire “hotspot” after duplicate hotspots were excluded.

The total raw dataset included 88.4 million hotspot observations, and this number was reduced to 30.7 million “events” by summing individual pixels in cells approximately 22 km on a side. Of this 30.7 million, just 2,913 events satisfied the extreme wildfire 99.99th percentile requirement. The average of the study’s summed FRP values for the top 20 events was in the range of 50,000-150,000 MW, corresponding to individual FRPs of about 100-300 MW in a 1 x 1 km pixel.
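
The percentile screening itself is easy to reproduce. Below is a minimal sketch in which synthetic placeholder numbers stand in for the study’s dataset; it shows how a 99.99th-percentile cutoff isolates a few thousand events out of tens of millions.

```python
import numpy as np

# Placeholder event list: summed daily FRP (MW) for ~30.7 million "events".
# A lognormal shape is assumed purely for illustration.
rng = np.random.default_rng(42)
frp_events = rng.lognormal(mean=2.0, sigma=1.2, size=30_700_000)

threshold = np.percentile(frp_events, 99.99)  # 99.99th-percentile cutoff
extremes = frp_events[frp_events > threshold]
print(f"threshold = {threshold:.0f} MW, extreme events = {extremes.size:,}")
# 0.01% of 30.7 million is ~3,070 events, the same order as the study's 2,913
```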

A glance at the massive dataset shows individual FRP values ranging from the single digits to several hundred MW. If the 20 hottest wildfires during 2003-23 had FRPs above 100 MW, most of the other 2,893 fires above the 99.99th percentile would have had lower FRPs, in the tens and teens.

While intensity data for historical wildfires is sparse, there are occasionally numbers mentioned in the literature. One example can be found in a 2021 paper that reviews past large-area high-intensity wildfires that have occurred in arid Australian grasslands. The paper’s authors state that:

Contemporary fire cycles in these grasslands (spinifex) are characterized by periodic wildfires that are large in scale, high in intensity (e.g., up to c. 14,000 kW) … and driven by fuel accumulations that occur following exceptionally high rainfall years.

An FRP of 14,000 kW, or 14 MW, is comparable to many of the 2,893 FRPs for modern extreme wildfires (excluding the top 20) in the Tasmanian study. The figure below shows the potential fire intensity of bushfires across Australia, the various colors indicating the FI range. As you can see, the most intense bushfires occur in the southeast and southwest of the country; FI values in those regions can exceed 100 MW per meter, corresponding to FRPs of about 30 MW.

And, although it doesn’t cite FI numbers, a 1976 paper on Australian bushfires from 1945 to 1975 makes the statement that:

The fire control authorities recognise that no fire suppression system has been developed in the world which can halt the forward spread of a high-intensity fire burning in continuous heavy fuels under the influence of extreme fire weather.

High- and extremely high-intensity wildfires in Australia at least are nothing new, and the same is no doubt true for other countries included in the Tasmanian study. The study authors remark correctly that higher temperatures due to global warming and the associated drying out of vegetation and forests both increase wildfire intensity. But there have been equally hot and dry periods in the past, such as the 1930s, when larger areas burned.

So there’s nothing remarkable about the present study. Even though it’s difficult to find good wildfire data in the pre-satellite era, the study authors could easily extend their work back to the onset of satellite measurements in the 1970s.

Next: The Scientific Reality of the Quest for Net Zero

Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

Of all the mass extinctions in the earth’s distant past, by far the greatest and most drastic was the Permian Extinction, which occurred at the close of the Permian period, spanning roughly 300 to 250 million years ago. Also known as the Great Dying, the Permian Extinction killed off an estimated 57% of all biological families including rainforest flora, 81% of marine species and 70% of terrestrial vertebrate species that existed before the Permian’s last million years. What was the cause of this devastation?

The answer to that question is controversial among paleontologists. For many years, it has been thought the extinction was a result of ancient global warming. During Earth’s 4.5-billion-year history, the global average temperature has fluctuated wildly, from “hothouse” temperatures as much as 14 degrees Celsius (25 degrees Fahrenheit) above today’s level of about 14.8 degrees Celsius (58.6 degrees Fahrenheit), to “icehouse” temperatures 6 degrees Celsius (11 degrees Fahrenheit) below.

Hottest of all was a sudden temperature spike from icehouse conditions at the onset of the Permian to extreme hothouse temperatures at its end, as can be seen in the figure below. The figure is a 2021 estimate of ancient temperatures derived from oxygen isotopic measurements combined with lithologic climate indicators, such as coals, sedimentary rocks, minerals and glacial deposits. The barely visible time scale is in millions of years before the present.

The geological event responsible for this enormous surge in temperature was a massive volcanic episode known as the Siberian Traps. The eruption lasted at least 1 million years and resulted in the outpouring of voluminous quantities of basaltic lava from rifts in West Siberia; the lava buried over 50% of Siberia in a blanket up to 6.5 km (4 miles) deep.

Volcanic CO2 released by the eruptions was supplemented by CO2 produced during combustion of thick, buried coal deposits that lay along the subterranean path of the erupting lava. This stupendous outburst boosted the atmospheric CO2 level from a very low 200 ppm (parts per million) to more than 2,000 ppm, as shown in the next figure.

The conventional wisdom in the past has been that this geologically sudden, gigantic increase in the CO2 level sent the global thermometer soaring – a conclusion sensationalized by mainstream media such as the New York Times. However, that argument ignores the saturation effect for atmospheric CO2, which limits CO2-induced warming to that produced by the first few hundred ppm of the greenhouse gas.

While the composition of the atmosphere 250 million years ago may have been different from today’s, the saturation effect would still have occurred. Nevertheless, there’s no question that end-Permian temperatures really were that high, whatever the cause. That’s because the temperatures are based on the highly reliable method of measuring the ratio of the oxygen isotopes 18O and 16O in ancient microfossils.

Such hothouse conditions would have undoubtedly caused the extinction of various species; the severity of the extinction event is revealed by subsequent gaps in the fossil record. Organic carbon accumulated in the deep ocean, depleting oxygen and thus wiping out many marine species such as phytoplankton, brachiopods and reef-building corals. On land, vertebrates such as amphibians and early reptiles, as well as diverse tropical and temperate rainforest flora, disappeared.

All from extreme global warming? Not so fast, says ecologist Jim Steele.

Steele attributes the Permian extinction not to an excess of CO2 at the end of this geological period, but rather to a lack of it during the preceding Carboniferous and the early Permian, as can be seen in the figure above. He explains that all life is dependent on a supply of CO2, and that when its concentration drops below 150 ppm, photosynthesis ceases, and plants and living creatures die.

Steele argues that because of CO2 starvation over this interval, many species had either already become extinct, or were on the verge of extinction, long before the planet heated up so abruptly.

In comparison to other periods, the Permian saw the appearance of very few new species, as illustrated in the following figure. For example, far more new species evolved (and became extinct) during the earlier Ordovician, when CO2 levels were much, much higher but an icehouse climate prevailed.

When CO2 concentrations reached their lowest levels ever in the early Permian, phytoplankton fossils were extremely rare – some 40 million years or so before the later hothouse spike, which is when the conventional narrative claims the species became extinct. And Steele says that 35-47% of marine invertebrate genera went extinct, as well as almost 80% of land vertebrates, from 7 to 17 million years before the mass extinction at the end of the Permian.

Furthermore, Steele adds, the formation of the supercontinent Pangaea (shown to the left), which took place during the Carboniferous, had a negative effect on biodiversity. Pangaea removed unique niches from its converging island-like microcontinents, again long before the end-Permian.

Next: Unexpected Sea Level Fluctuations Due to Gravity, New Evidence Shows

Shrinking Cloud Cover: Cause or Effect of Global Warming?

Clouds play a dominant role in regulating our climate. Observational data show that the earth’s cloud cover has been slowly decreasing since at least 1982, at the same time that its surface temperature has risen about 0.8 degrees Celsius (1.4 degrees Fahrenheit). Has the reduction in cloudiness caused that warming, as some heretical research suggests, or is it an effect of increased temperatures?

It’s certainly true that clouds exert a cooling effect, as you’d expect – at least low-level clouds, which make up the majority of the planet’s cloud cover. Low-level clouds such as cumulus and stratus clouds are thick enough to reflect back into space 30-60% of the sun’s radiation that strikes them, so they act like a parasol and cool the earth’s surface. Less cloud cover would therefore be expected to result in warming.

Satellite measurements of global cloud cover from 1982 to 2018 or 2019 are presented in the following two, slightly different figures, which also include atmospheric temperature data for the same period. The first figure shows cloud cover from one set of satellite data, and temperatures in degrees Celsius relative to the mean tropospheric temperature from 1991 to 2020.

The second figure shows cloud cover from a different set of satellite data, and absolute temperatures in degrees Fahrenheit. The temperature data were not measured directly but derived from measurements of outgoing longwave radiation, which is probably why the temperature range from 1982 to 2018 appears much larger than in the previous figure.

This second figure is the basis for the authors’ claim that 90% of global warming since 1982 is a result of fewer clouds. As can be seen, their estimated trendline temperature (red dotted line, which needs extending slightly) at the end of the observation period in 2018 was 59.6 degrees Fahrenheit. The reduction in clouds (blue dotted line) over the same interval was 2.7% – although the researchers erroneously conflate the cloud cover and temperature scales to come up with a 4.1% reduction.

Multiplying 59.6 degrees Fahrenheit by 2.7% yields a temperature change of 1.6 degrees Fahrenheit. The researchers then make use of the well-established fact that the Northern Hemisphere is up to 1.5 degrees Celsius (2.7 degrees Fahrenheit) warmer than the Southern Hemisphere. So, they say, clouds can account for (1.6/2.7) = 59% of the temperature difference between the hemispheres.

This suggests that clouds may be responsible for 59% of recent global warming, if the temperature difference between the two hemispheres is due entirely to the difference in cloud cover from hemisphere to hemisphere.

Nevertheless, this argument is on very weak ground. First, the authors wrongly used 4.1% instead of 2.7% as just mentioned, which incorrectly leads to a temperature change due to cloud reduction of 2.4 degrees Fahrenheit and an estimated contribution to global warming of a higher (2.4/2.7) = 89%, as they claim in their paper.

Regardless of this mistake, however, a temperature increase of even 1.6 degrees Fahrenheit is more than twice as large as the observed rise measured by the more reliable satellite data in the first figure above. And attributing the 1.5 degrees Celsius (2.7 degrees Fahrenheit) temperature difference between the two hemispheres entirely to cloud cover difference is dubious.
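
To make the disputed arithmetic explicit, here is a minimal sketch that reproduces both the corrected 2.7% version and the authors’ conflated 4.1% version, using the numbers quoted above.

```python
# Reproducing the cloud-reduction arithmetic described above.
T_END_F = 59.6     # trendline temperature in 2018, degrees Fahrenheit
HEMI_DIFF_F = 2.7  # Northern-Southern Hemisphere difference, degrees Fahrenheit

for cloud_reduction in (0.027, 0.041):        # corrected vs conflated decline
    dT = round(T_END_F * cloud_reduction, 1)  # the authors' questionable product
    share = dT / HEMI_DIFF_F                  # claimed share of hemispheric difference
    print(f"{cloud_reduction:.1%} reduction: dT = {dT} F, share = {share:.0%}")
# -> 2.7%: dT = 1.6 F, share = 59%;  4.1%: dT = 2.4 F, share = 89%
```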

There is indeed a difference in cloud cover between the hemispheres. The Southern Hemisphere contains more clouds (69% average cloud cover) than the Northern Hemisphere (64%), partly because there is more ocean surface in the Southern Hemisphere, and thus more evaporation as the planet warms. This in itself would not explain why the Northern Hemisphere is warmer, however.

Southern Hemisphere clouds are also more reflective than their Northern Hemisphere counterparts. That is because they contain more liquid water droplets and less ice; it has been found that lack of ice nuclei causes low-level clouds to form less often. But apart from the ice content, the chemistry and dynamics of cloud formation are complex and depend on many factors. So associating the hemispheric temperature difference only with cloud cover is most likely invalid.

A few other research papers also claim that the falloff in cloud cover explains recent global warming, but their arguments are equally shaky. As is the proposal by joint winner of the 2022 Nobel Prize in Physics, John Clauser, of a cloud thermostat mechanism that controls the earth’s temperature: if cloud cover falls and the temperature climbs, the thermostat acts to create more clouds and cool the earth down again. Obviously, this has not happened.

Finally, it’s interesting to note that the current decline in cloud cover is not uniform across the globe. This can be seen in the figure below, which shows an expanding trend with time in coverage over the oceans, but a diminishing trend over land.

The expanding ocean cloud cover comes from increased evaporation of seawater with rising temperatures. The opposite trend over land is a consequence of the drying out of the land surface; evidently, the land trend dominates globally.

Next: Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

El Niño and La Niña May Have Their Origins on the Sea Floor

One of the least understood aspects of our climate is the ENSO (El Niño – Southern Oscillation) ocean cycle, whose familiar El Niño (warm) and La Niña (cool) events cause drastic fluctuations in global temperature, along with often catastrophic weather in tropical regions of the Pacific and delayed effects elsewhere. A recent research paper attributes the phenomenon to tectonic and seismic activity under the oceans.

Principal author Valentina Zharkova, formerly at the UK’s Northumbria University, is a prolific researcher into natural sources of global warming, such as the sun’s internal magnetic field and the effect of solar activity on the earth’s ozone layer. Most of her studies involve sophisticated mathematical analysis and her latest paper is no exception.

Zharkova and her coauthor Irina Vasilieva make use of a technique known as wavelet analysis, combined with correlation analysis, to identify key time periods in the ONI (Oceanic Niño Index). The index, which measures the strength of El Niño and La Niña events, is the 3-month running average of the departure of sea surface temperature from its long-term mean in the Niño 3.4 region of the tropical Pacific. Shown in the figure below are values of the index from 1950 to 2016.
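
For concreteness, here is a minimal sketch of how such an index is constructed, assuming a monthly sea surface temperature series for the region; the values are placeholders, not real data.

```python
import numpy as np
import pandas as pd

# Placeholder monthly SST series for the Nino 3.4 region, degrees C
base = [26.8, 27.1, 27.5, 27.9, 27.6, 27.2, 26.9, 26.7, 26.8, 27.0, 27.2, 27.4]
sst = pd.Series(base * 30) + np.random.default_rng(0).normal(0, 0.3, 360)

climatology = sst.groupby(sst.index % 12).transform("mean")  # long-term monthly means
anomaly = sst - climatology                                  # departure from average
oni = anomaly.rolling(window=3, center=True).mean()          # 3-month running mean
# NOAA conventionally flags El Nino at ONI >= +0.5 C and La Nina at ONI <= -0.5 C
```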

Wavelet analysis supplies information both on which frequencies are present in a time series signal, and on when those frequencies occur, unlike a Fourier transform which decomposes a signal only into its frequency components.

Using the wavelet approach, Zharkova and Vasilieva have identified two separate ENSO cycles: one with a shorter period of 4-5 years, and a longer one with a period of 12 years. This is illustrated in the next figure which shows the ONI at top left; the wavelet spectrum of the index at bottom left, with the wavelet “power” indicated by the colored bar at top right; and the global wavelet spectrum at bottom right. 
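
To give a feel for the method, the following minimal sketch applies the PyWavelets library to a synthetic series built from the two periods reported by the authors. It illustrates the technique only; it is not a reproduction of the paper’s analysis.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic ONI-like series containing the two reported periods (placeholder
# data, not the actual index): 66 years sampled monthly.
t = np.arange(66 * 12) / 12.0
signal = np.sin(2 * np.pi * t / 4.5) + 0.6 * np.sin(2 * np.pi * t / 12.0)

scales = np.arange(2, 400)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / 12.0)
power = np.abs(coeffs) ** 2
periods = 1.0 / freqs  # in years

# Global wavelet spectrum: power at each scale averaged over the whole record
gws = power.mean(axis=1)
for lo, hi in ((3, 6), (10, 14)):
    band = (periods > lo) & (periods < hi)
    peak = periods[band][np.argmax(gws[band])]
    print(f"spectral peak in {lo}-{hi} yr band: {peak:.1f} yr")
```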

The authors link the 4- to 5-year ENSO cycle to the motion of tectonic plates, a connection that has been made by other researchers. They attribute the 12-year ENSO cycle identified by their wavelet analysis to underwater volcanic activity; it does not correspond to any solar cycle or other known natural source of warming.

The following figure depicts an index (in red, right-hand scale), calculated by the authors, that measures the total annual strength and duration of all submarine volcanic eruptions from 1950 to 2023, superimposed on the ONI (in black) over the same period. A weak correlation can be seen between the ENSO ONI and undersea volcanic activity, the correlation being strongest at 12-year intervals.

Zharkova and Vasilieva estimate the 12-year ENSO correlation coefficient at 25%, a connection they label as “rather significant.” As I discussed in a recent post, retired physical geographer Arthur Viterito has proposed that submarine volcanic activity is the principal driver of global warming, via a strengthening of the thermohaline circulation that redistributes seawater and heat around the globe.

Zharkova and Vasilieva, however, link the volcanic eruptions causing the 12-year boost in the ENSO index to tidal gravitational forces on the earth from the giant planet Jupiter and from the sun. Jupiter of course orbits the sun and spins on an axis, just like Earth. But the sun is not motionless either: it too rotates on an axis and, because it’s tugged by the gravitational pull of the giant planets Jupiter and Saturn, traces a small but complex spiral around the barycenter of the solar system.

Jupiter was selected by the researchers because its orbital period is 12 years – the same as the longer ENSO cycle identified by their wavelet analysis.

That Jupiter’s gravitational pull on Earth influences volcanic activity is clear from the next figure, in which the frequency of all terrestrial volcanic eruptions (underwater and surface) is plotted against the distance of Earth from Jupiter; the distance is measured in AU (astronomical units), where 1 AU is the average earth-sun distance. The thick blue line is for all eruptions, while the thick yellow line shows the eruption frequency in just the ENSO region.

What stands out is the increased volcanic frequency when Jupiter is at one of two different distances from Earth: 4.5 AU and 6 AU. The distance of 4.5 AU is Jupiter’s closest approach to Earth, while 6 AU is Jupiter’s distance when the sun is closest to Earth and located between Earth and Jupiter. The correlation coefficient between the 12-year ENSO cycle and the Earth-Jupiter distance is 12%.  

For the gravitational pull of the sun, Zharkova and Vasilieva find there is a 15% correlation between the 12-year ENSO cycle and the Earth-sun distance in January, when Earth’s southern hemisphere (where ENSO occurs) is closest to the sun. Although these solar system correlations are weak, Zharkova and Vasilieva say they are high considering the vast distances involved.

Next: Shrinking Cloud Cover: Cause or Effect of Global Warming?

The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

In my previous post, I reviewed the science described in environmentalist Ted Nordhaus’ four-part essay, “Did Exxon Make It Rain Today?”, and how science is being misused to falsely link weather extremes to climate change. Nordhaus also describes how the perception of a looming climate catastrophe, exemplified by extreme weather events, is being fanned by misconceptions about the economic costs of natural disasters and by environmental politics – both the subject of this second post.

Between 1990 and 2017, the global cost of weather-related disasters increased by 74%, according to an analysis by Roger Pielke, Jr., a former professor at the University of Colorado. Economic loss studies of natural disasters have been quick to blame human-caused climate change for this increase.

But Nordhaus makes the point that, if the cost of natural disasters is increasing due to global warming, then you would expect the cost of weather-related disasters to be rising faster than that of disasters not related to weather. Yet the opposite is true. States Nordhaus: “The cost of disasters unrelated [my italics] to weather increased 182% between 1990 and 2017, more than twice as fast as for weather-related disasters.” This is evident in the figure below, which shows both costs from 1990 to 2018.

Nordhaus goes on to declare:

In truth, it is economic growth, not climate change, that is driving the boom in economic damage from both weather-related and non-weather-related natural disasters.

Once the losses are corrected for population gain and the ever-escalating value of property in harm’s way, there is very little evidence to support any connection between natural disasters and global warming. Nordhaus explains that accelerating urbanization since 1950 has led to an enormous shift of the global population, economic activity, and wealth into river and coastal floodplains.

On the influence of environmental politics in connecting weather extremes to global warming, Nordhaus has this to say:

… the perception among many audiences that these events centrally implicate anthropogenic warming has been driven by ... a sustained campaign by environmental advocates to move the proximity of climate catastrophe in the public imagination from the uncertain future into the present.

The campaign had its origins in a 2012 meeting of environmental advocates, litigators, climate scientists and others in La Jolla, California, convened by the Union of Concerned Scientists. The specific purpose of the gathering was “to develop a public narrative connecting extreme weather events that were already happening, and the damages they were causing, with climate change and the fossil fuel industry.”

This was clearly an attempt to mimic the 1960s campaign against smoking tobacco because of its link to lung cancer. However, the correlation between smoking and lung cancer is extraordinarily high, leaving no doubt about causation. The same cannot be said for any connection between extreme weather events and climate change.

Nevertheless, it was at the La Jolla meeting that the idea of reframing the attribution of extreme weather to climate change, as I discussed in my previous post, was born. Nordhaus discerns that a subsequent flurry of attribution reports, together with a fortuitous restructuring of the media at the same time:

… have given journalists license to ignore the enormous body of research and evidence on the long-term drivers of natural disasters and the impact that climate change has had on them.

It was but a short journey from there for the media to promote the notion, favored by “much of the environmental cognoscenti” as Nordhaus puts it, that “a climate catastrophe is now unfolding, and that it is demonstrable in every extreme weather event.”

The media have undergone a painful transformation in the last few decades, with the proliferation of cable news networks followed by the arrival of the Internet. The much broader marketplace has resulted in media outlets tailoring their content to the political values and ideological preferences of their audiences. This means, says Nordhaus, that sensationalism such as catastrophic climate news – especially news linking extreme weather to anthropogenic warming – plays a much larger role than before.

As I discussed in a 2023 post, the ever-increasing hype in nearly all mainstream media coverage of weather extremes is a direct result of advocacy by well-heeled benefactors like the Rockefeller, Walton and Ford foundations. The Rockefeller Foundation, for example, has begun funding the hiring of climate reporters to “fight the climate crisis.”

A new coalition, founded in 2019, of more than 500 media outlets is dedicated to producing “more informed and urgent climate stories.” The CCN (Covering Climate Now) coalition includes three of the world’s largest news agencies – Reuters, Bloomberg and Agence France Presse – and claims to reach an audience of two billion.

Concludes Nordhaus:

[These new dynamics] are self-reinforcing and have led to the widespread perception among elite audiences that the climate is spinning out of control. New digital technology bombards us with spectacular footage of extreme weather events. … Catastrophist climate coverage generates clicks from elite audiences.

Next: El Niño and La Niña May Have Their Origins on the Sea Floor

The Deceptive Catastrophizing of Weather Extremes: (1) The Science

In these pages, I’ve written extensively about the lack of scientific evidence for any increase in extreme weather due to global warming. But I’ve said relatively little about the media’s exploitation of the mistaken belief that weather extremes are worsening because of climate change.

A recent four-part essay addresses the latter issue, under the title “Did Exxon Make It Rain Today?”  The essay was penned by Ted Nordhaus, well-known environmentalist and director of the Breakthrough Institute in Berkeley, California, which he co-founded with Michael Shellenberger in 2007. Its authorship was a surprise to me, since the Breakthrough Institute generally supports the narrative of largely human-caused warming.

Nonetheless, Nordhaus’s thoughtful essay takes a mostly skeptical – and realistic – view of hype about weather extremes, stating that:

We know that anthropogenic warming can increase rainfall and storm surges from a hurricane, or make a heat wave hotter. But there is little evidence that warming could create a major storm, flood, drought, or heat wave where otherwise none would have occurred, …

Nordhaus goes on to make the insightful statement that “The main effect that climate change has on extreme weather and natural disasters … is at the margins.” By this, he means that a heat wave in which daily high temperatures for, say, a week reached 37 degrees Celsius (99 degrees Fahrenheit) or above in the absence of climate change would instead stay above perhaps 39 degrees Celsius (102 degrees Fahrenheit) with our present level of global warming.

His assertion is illustrated in the following, rather congested figure from the IPCC (Intergovernmental Panel on Climate Change)’s Sixth Assessment Report. The purple curve shows the average annual hottest daily maximum temperature on land, while the green and black curves indicate the land and global average annual mean temperature, respectively; temperatures are measured relative to their 1850–1900 means.

However, while global warming is making heat waves marginally hotter, Nordhaus says there is no evidence that extreme weather events are on the rise, as so frequently trumpeted by the mainstream media. Although climate change will make some weather events such as heavy rainfall more intense than they otherwise would be, the global area burned by wildfires has actually decreased and there has been no detectable global trend in river floods, nor meteorological drought, nor hurricanes.

Adds Nordhaus:

The main source of climate variability in the past, present, and future, in all places and with regard to virtually all climatic phenomena, is still overwhelmingly non-human: all the random oscillations in climatic extremes that occur in a highly complex climate system across all those highly diverse geographies and topographies.

The misconception that weather extremes are increasing when they are not has been amplified by attribution studies, which use a new statistical method and climate models to assign specific extremes to either natural variability or human causes. Such studies involve highly questionable methodology that has several shortcomings.

Even so, the media and some climate scientists have taken scientifically unjustifiable liberties with attribution analysis in order to link extreme events to climate change – such as attempting to quantify how much more likely global warming made the occurrence of a heat wave that resulted in high temperatures above 38 degrees Celsius (100 degrees Fahrenheit) for a period of five days in a specific location.

But, explains Nordhaus, that is not what an attribution study actually estimates. Rather, “it quantifies changes in the likelihood of the heat wave reaching the precise level of extremity that occurred.” In the hypothetical case above, the heat wave would have happened anyway in the absence of climate change, but it would have resulted in high temperatures above 37 degrees Celsius (99 degrees Fahrenheit) over five days instead of above 38 degrees.

The attribution method estimates the probability of a heat wave or other extreme event occurring that is incrementally hotter or more severe than the one that would have occurred without climate change, not the probability of the heat wave or other event occurring at all.

Nonetheless, as we’ll see in the next post, the company WWA (World Weather Attribution), founded by German climatologist Friederike Otto, has utilized this new methodology to rapidly produce science that does connect weather extremes to climate change – with the explicit goal of shaping news coverage. Coverage of climate-related disasters now routinely features WWA analysis, which is often employed to suggest that climate change is the cause of such events.

Next: The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

Sea Ice Update: Arctic Stable, Antarctic Recovering

The climate doomsday machine constantly insists that sea ice at the two poles is shrinking inexorably and that the Arctic will soon be ice-free in the summer. But the latest data puts the kibosh on those predictions. The maximum winter Arctic ice extent last month was no different from 2023, and the minimum summer 2024 extent in the Antarctic, although lower than the long-term average, was higher than last year.

Satellite images of Arctic sea ice extent in February 2024, one month before its winter peak (left image), and Antarctic extent at its summer minimum the same month (right image), are shown in the figure below. Sea ice shrinks during summer months and expands to its maximum extent during the winter. The red lines in the figure denote the median ice extent from 1981 to 2010.

Arctic summer ice extent decreased by approximately 39% over the interval from 1979 to 2023, but was essentially the same in 2023 as it was in 2007. Arctic winter ice extent on March 3, 2024 was 11% lower than in 1979, when satellite measurements began, but slightly higher than in 2023, as indicated by the inset in the figure below.

Arctic winter maximum extent fluctuates less than its summer minimum extent, as can be seen in the right panel of the figure which compares the annual trend by month for various intervals during the satellite era, as well as for the low-summer-ice years of 2007 and 2012. The left panel shows the annual trend by month for all years from 2013 through 2024.

What is noticeable about this year’s winter maximum is that it was not unduly low, despite the Arctic being warmer than usual. According to the U.S. NSIDC (National Snow & Ice Data Center), February air temperatures in the Arctic troposphere, about 760 meters (2,500 feet) above sea level, were up to 10 degrees Celsius (18 degrees Fahrenheit) above average.

The NSIDC attributes the unusual warmth to a strong pressure gradient that forced relatively warm air over western Eurasia to flow into the Arctic. However, other explanations have been put forward for enhanced winter warming, such as the formation during non-summer seasons of more low-level clouds due to the increased area of open water compared to sea ice. The next figure illustrates this effect between 2008 and 2022.

Despite the long-term loss of ice in the Arctic, the sea ice around Antarctica had been expanding steadily during the satellite era up until 2016, growing at an average rate between 1% and 2% per decade, with considerable fluctuations from year to year. But it took a tumble in 2017, as depicted in the figure below.

Note that this figure shows “anomalies,” or departures from the February mean ice extent for the period from 1981 to 2010, rather than the minimum extent of summer ice in square km. The anomaly trend is plotted as the percent difference between the February extent for that year and the February mean from 1981 to 2010.
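
In other words, the plotted quantity is simply the extent minus the baseline mean, divided by that mean. A minimal sketch, with placeholder numbers:

```python
# February sea ice extent anomaly as a percent difference from the
# 1981-2010 February mean. All numbers are placeholders, not real data.
feb_extent = {2017: 2.29, 2020: 2.69, 2022: 2.16, 2024: 2.14}  # million sq km
BASELINE = 2.9  # assumed 1981-2010 February mean, million sq km

for year, extent in feb_extent.items():
    anomaly_pct = 100.0 * (extent - BASELINE) / BASELINE
    print(f"{year}: {anomaly_pct:+.1f}%")
```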

As can be seen, the summer ice minimum recovered briefly in 2020 and 2021, only to fall once more and pick up again this year. The left panel in the next figure shows the annual Antarctic trend by month for all years from 2013 through 2024, along with the summer minimum (in square km) in the inset. As for the Arctic previously, the right panel compares the annual trend by month for various intervals during the satellite era, as well as for the high-summer-ice years of 2012 and 2014.

Antarctic sea ice at its summer minimum this year was especially low in the Ross, Amundsen, and Bellingshausen Seas, all of which are on the West Antarctica coast, while the ice cover in the Weddell Sea to the north and along the East Antarctic coast was at average levels. Such a pattern is thought to be associated with the current El Niño.

A slightly different representation of the Antarctic sea ice trend is presented in the following figure, in which the February anomaly is shown directly in square km rather than as a difference percentage. This representation illustrates more clearly how the decline in summer sea ice extent has now persisted for seven years.

The overall trend from 1979 to 2023 is an insignificant 0.1% per decade relative to the 1981 to 2010 mean. Yet a prolonged increase above the mean occurred from 2008 to 2017, followed by the seven-year decline since then. The current downward trend has sparked debate and several possible reasons have been advanced, not all of which are linked to global warming. One analysis attributes the big losses of sea ice in 2017 and 2023 to extra strong El Niños.

Next: The Deceptive Catastrophizing of Weather Extremes: (1) The Science

Exactly How Large Is the Urban Heat Island Effect in Global Warming?

It’s well known that global surface temperatures are biased upward by the urban heat island (UHI) effect. But there’s widespread disagreement among climate scientists about the magnitude of the effect, which arises from the warmth generated by urban surroundings, such as buildings, concrete and asphalt.

In its Sixth Assessment Report in 2021, the IPCC (Intergovernmental Panel on Climate Change) acknowledged the existence of the UHI effect and the consequent decrease in the number of cold nights since around 1950. Nevertheless, the IPCC is ambivalent about the actual size of the effect. On the one hand, the report dismisses its significance by declaring it “less than 10%” (Chapter 2, p. 324) or “negligible” (Chapter 10, p. 1368).

On the other hand, the IPCC presents a graph (Chapter 10, p. 1455), reproduced below, showing that the UHI effect ranges from 0% to 60% or more of measured warming in various cities. Since the population of the included cities is a few per cent of the global population, and many sizable cities are not included, it’s hard to see how the IPCC can state that the global UHI effect is negligible.

One climate scientist who has studied the magnitude of the UHI effect for some time is PhD meteorologist Roy Spencer. In a recent preview of a paper submitted for publication, Spencer finds that summer warming in U.S. cities from 1895 to 2023 has been exaggerated by 100% or more from UHI warming. The next figure shows the results of his calculations which, as you would expect, depend on population density.

The barely visible solid brown line is the measured average summertime temperature for the continental U.S. (CONUS) relative to its 1901-2000 average, in degrees Celsius, from 1895 to 2023; the solid black line represents the same data corrected for UHI warming, as estimated from population density data. The measurements are taken from the monthly GHCN (Global Historical Climatology Network) “homogenized” dataset, as compiled by NOAA (the U.S. National Oceanic and Atmospheric Administration).

You can see that the UHI effect accounts for a substantial portion of the recorded temperature in all years. Spencer says that the UHI influence is 24% of the trend averaged over all measurement stations, which are dominated by rural sites not subject to UHI warming. But for the typical “suburban” station (100-1,000 persons per square km), the UHI effect is 52% of the measured trend, which means that measured warming in U.S. cities is at least double the actual warming. 

Globally, a rough estimate of the UHI effect can be made from NOAA satellite temperature data compiled by Spencer and Alabama state climatologist John Christy. Satellite data are not influenced by UHI warming because they measure the temperature of the lower atmosphere rather than the surface itself. The most recent data for the global average lower tropospheric temperature are displayed below.

According to Spencer and Christy’s calculations, the linear rate of global warming since measurements began in January 1979 is 0.15 degrees Celsius (0.27 degrees Fahrenheit) per decade, while the warming rate measured over land only is 0.20 degrees Celsius (0.36 degrees Fahrenheit) per decade. The difference of 0.05 degrees Celsius (0.09 degrees Fahrenheit) per decade in the warming rates can reasonably be attributed, at least in part, to the UHI effect.

So the UHI influence is as high as 0.05/0.20 or 25% of the measured temperature trend – in close agreement with Spencer’s 24% estimated from his more detailed calculations.
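
For reference, warming rates like these are simple linear trends fitted to the anomaly series. A minimal sketch, using synthetic monthly data with a built-in trend:

```python
import numpy as np

# Linear decadal trend fitted to a monthly temperature-anomaly series.
# Synthetic data with a built-in 0.15 C/decade trend stand in for real anomalies.
months = np.arange(552)  # 46 years of monthly values
anom = (0.15 / 120) * months + np.random.default_rng(1).normal(0, 0.1, 552)

slope_per_month = np.polyfit(months, anom, 1)[0]
print(f"trend = {slope_per_month * 120:.2f} C per decade")  # 120 months per decade
```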

Other estimates peg the UHI effect as larger yet. As part of a study of natural contributions to global warming, which I discussed in a recent post, the CERES research group suggested that urban warming might account for up to 40% of warming since 1850.

But the 40% estimate comes from a comparison of the warming rate for rural temperature stations alone with that for rural and urban stations combined, from 1900 to 2018. Over the shorter time period from 1972 to 2018, which almost matches Spencer and Christy’s satellite record, the estimated UHI effect is a much smaller 6%. The study authors caution that more research is needed to estimate the UHI magnitude more accurately.

The effect of urbanization on global temperatures is an active research field. Among other recent studies is a 2021 paper by Chinese researchers, who used a novel approach involving machine learning to quantify the phenomenon. Their study encompassed measurement stations in four geographic areas – Australia, East Asia, Europe and North America – and found that the magnitude of UHI warming from 1951 to 2018 was 13% globally, and 15% in East Asia where rapid urbanization has occurred.

What all these studies mean for climate science is that global warming is probably about 20% lower than most people think. That is, about 0.8 degrees Celsius (1.4 degrees Fahrenheit) at the end of 2022, before the current El Niño spike, instead of the reported 0.99 degrees Celsius (1.8 degrees Fahrenheit). Which means in turn that we’re only halfway to the Paris Agreement’s lower limit of 1.5 degrees Celsius (2.7 degrees Fahrenheit).  

Next: Sea Ice Update: Arctic Stable, Antarctic Recovering

Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Driven by Oceanic Seismic Activity, Not CO2

Although undersea volcanic eruptions can’t cause global warming directly, as I discussed in a previous post, they can contribute indirectly by altering the deep-ocean thermohaline circulation. According to a recent lecture, submarine volcanic activity is currently intensifying the thermohaline circulation sufficiently to be the principal driver of global warming.

The lecture was delivered by Arthur Viterito, a renowned physical geographer and retired professor at the College of Southern Maryland. His provocative hypothesis links an upsurge in seismic activity at mid-ocean ridges to recent global warming, via a strengthening of the ocean conveyor belt that redistributes seawater and heat around the globe.

Viterito’s starting point is the observation that satellite measurements of global warming since 1979 show distinct step increases following major El Niño events in 1997-98 and 2014-16, as demonstrated in the following figure. The figure depicts the satellite-based global temperature of the lower atmosphere in degrees Celsius, as compiled by scientists at the University of Alabama in Huntsville; temperatures are annual averages and the zero baseline represents the mean tropospheric temperature from 1991 to 2020.

Viterito links these apparent jumps in warming to geothermal heat emitted by volcanoes and hydrothermal vents in the middle of the world’s ocean basins – heat that shows similar step increases over the same time period, as measured by seismic activity. The submarine volcanoes and hydrothermal vents lie along the earth’s mid-ocean ridges, which divide the major oceans roughly in half and are illustrated in the next figure. The different colors denote the geothermal heat output (in milliwatts per square meter), which is highest along the ridges.

The total mid-ocean seismic activity along the ridges is shown in the figure below, in which the global tropospheric temperature, graphed in the first figure above, is plotted in blue against the annual number of mid-ocean earthquakes (EQ) in orange. The best fit between the two sets of data occurs when the temperature readings are lagged by two years: that is, the 1979 temperature reading is paired with the 1977 seismic reading, and so on. As already mentioned, seismic activity since 1979 shows step increases similar to the temperature.

A regression analysis yields a correlation coefficient of 0.74 between seismic activity and the two-year lagged temperatures, which implies that mid-ocean geothermal heat accounts for 55% of current global warming (the square of 0.74, or the fraction of variance explained), says Viterito. However, a correlation coefficient of 0.74 is not as high as some estimates of the correlation between rising CO2 and temperature.
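
Here is a minimal sketch of such a lagged correlation, with placeholder arrays standing in for the real annual series of temperatures and mid-ocean earthquake counts:

```python
import numpy as np

# temp[i]  : tropospheric temperature for year 1979+i (placeholder values)
# quakes[i]: mid-ocean earthquake count for the same year (placeholder values)
rng = np.random.default_rng(0)
years = 45
quakes = rng.poisson(2000, years).astype(float)
temp = 0.0015 * np.roll(quakes, 2) + rng.normal(0.0, 0.05, years)

lag = 2  # pair each temperature with seismicity from two years earlier
r = np.corrcoef(temp[lag:], quakes[:-lag])[0, 1]
print(f"lag-{lag} correlation r = {r:.2f}, variance explained r^2 = {r*r:.0%}")
```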

In support of his hypothesis, Viterito states that multiple modeling studies have demonstrated how geothermal heating can significantly strengthen the thermohaline circulation, shown below. He then links the recently enhanced undersea seismic activity to global warming of the atmosphere by examining thermohaline heat transport to the North Atlantic-Arctic and western Pacific oceans.

In the Arctic, Viterito points to several phenomena that he believes are a direct result of a rapid intensification of North Atlantic currents which began around 1995 – the same year that mid-ocean seismic activity started to rise. The phenomena include the expansion of a phytoplankton bloom toward the North Pole due to incursion of North Atlantic currents into the Arctic; enhanced Arctic warming; a decline in Arctic sea ice; and rapid warming of the Subpolar Gyre, a circular current south of Greenland.

In the western Pacific, he cites the increase since 1993 in heat content of the Indo-Pacific Warm Pool near Indonesia; a deepening of the Indo-Pacific Warm Pool thermocline, which divides warmer surface water from cooler water below; strengthening of the Kuroshio Current near Japan; and recently enhanced El Niños.

But, while all these observations are accurate, they do not necessarily verify Viterito’s hypothesis that submarine earthquakes are driving current global warming. For instance, he cites as evidence the switch of the AMO (Atlantic Multidecadal Oscillation) to its positive or warm phase in 1995, when mid-ocean seismic activity began to increase. However, his assertion raises the question: Isn’t the present warm phase of the AMO just the same as the hundreds of warm cycles that preceded it?

In fact, perhaps the AMO warm phase has always been triggered by an upturn in mid-ocean earthquakes, and has nothing to do with global warming.

There are other weaknesses in Viterito’s argument too. One example is his association of the decline in Arctic sea ice, which also began around 1995, with the current warming surge. What he overlooks is that the sea ice extent stopped shrinking on average in 2007 or 2008, but warming has continued.

And while he dismisses CO2 as a global warming driver because the rising CO2 level doesn’t show the same step increases as the tropospheric temperature, a correlation coefficient between CO2 and temperature as high as 0.8 means that any CO2 contribution is not negligible.

It’s worth noting here that a strengthened thermohaline circulation is the exact opposite of the slowdown postulated by retired meteorologist William Kininmonth as the cause of global warming, a possibility I described in an earlier post in this Challenges series (#7). From an analysis of longwave radiation from greenhouse gases absorbed at the tropical surface, Kininmonth concluded that a slowdown in the thermohaline circulation is the only plausible explanation for warming of the tropical ocean.

Next: Foundations of Science Under Attack in U.S. K-12 Education

Rapid Climate Change Is Not Unique to the Present

Rapid climate change, such as the accelerated warming of the past 40 years, is not a new phenomenon. During the last ice age, which spanned the period from about 115,000 to 11,000 years ago, temperatures in Greenland rose abruptly and fell again at least 25 times. Corresponding temperature swings occurred in Antarctica too, although they were less pronounced than those in Greenland.

The striking but fleeting bursts of heat are known as Dansgaard–Oeschger (D-O) events, named after paleoclimatologists Willi Dansgaard and Hans Oeschger, who examined ice cores obtained by deep drilling the Greenland ice sheet. What they found was a series of rapid climate fluctuations, when the icebound earth suddenly warmed to near-interglacial conditions over just a few decades, only to gradually cool back down to frigid ice-age temperatures.

Ice-core data from Greenland and Antarctica are depicted in the figure below; two sets of measurements, recorded at different locations, are shown for each. The isotopic ratios of 18O to 16O, or δ18O, and 2H to 1H, or δ2H, in the cores are used as proxies for the past surface temperature in Greenland and Antarctica, respectively.

Multiple D-O events can be seen in the four sets of data, stronger in Greenland than Antarctica. The periodicity of successive events averages 1,470 years, which has led to the suggestion of a 1,500-year cycle of climate change associated with the sun.

Somewhat similar cyclicity has been observed during the present interglacial period or Holocene, with eight sudden temperature drops and recoveries, mirroring D-O temperature spurts, as illustrated by the thick black line in the next figure. Note that the horizontal timescale runs forward, compared to backward in the previous (and following) figure.

These so-called Bond events were identified by geologist Gerard Bond and his colleagues, who used drift ice measured in deep-sea sediment cores, and δ18O as a temperature proxy, to study ancient climate change. The deep-sea cores contain glacial debris rafted into the oceans by icebergs, and then dropped onto the sea floor as the icebergs melted. The volume of glacial debris was largest, and it was carried farthest out to sea, when temperatures were lowest.

Another set of distinctive, abrupt events during the latter part of the last ice age were Heinrich events, which are related to both D-O events and Bond cycles. Five of the six or more Heinrich events are shown in the following figure, where the red line represents Greenland ice-core δ18O data, and some of the many D-O events are marked; the figure also includes Antarctic δ18O data, together with ice-age CO2 and CH4 levels.

As you can see, Heinrich events represent the cooling portion of certain D-O events. Although the origins of both are debated, they are thought likely to be associated with an increase in icebergs discharged from the massive Laurentide ice sheet which covered most of Canada and the northern U.S. Just as with Bond events, Heinrich and D-O events left a signature on the ocean floor, in this case in the form of large rocks eroded by glaciers and dropped by melting icebergs.

The melting icebergs would have also disgorged enormous quantities of freshwater into the Labrador Sea. One hypothesis is that this vast influx of freshwater disrupted the deep-ocean thermohaline circulation (shown below) by lowering ocean salinity, which in turn suppressed deepwater formation and reduced the thermohaline circulation.

Since the thermohaline circulation plays an important role in transporting heat northward, a slowdown would have caused the North Atlantic to cool, leading to a Heinrich event. Later, as the supply of freshwater decreased, ocean salinity and deepwater formation would have increased again, resulting in the rapid warming of a D-O event.

However, this is but one of several possible explanations. The proposed freshwater increase and reduced deepwater formation during D-O events could have resulted from changes in wind and rainfall patterns in the Northern Hemisphere, or the expansion of Arctic sea ice, rather than melting icebergs.

In 2021, an international team of climate researchers concluded that when certain parts of the ice-age climate system changed abruptly, other parts of the system followed like a series of dominoes toppling in succession. But to their surprise, neither the rate of change nor the order of the processes was the same from one event to the next.

Using data from two Greenland ice cores, the researchers discovered that changes in ocean currents, sea ice and wind patterns were so closely intertwined that they likely triggered and reinforced each other in bringing about the abrupt climate changes of D-O and Heinrich events.

While there’s clearly no connection between ice-age D-O events and today’s accelerated warming, this research and the very existence of such events show that the underlying causes of rapid climate change can be elusive.

Next: Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Is Driven by Oceanic Seismic Activity, Not CO2

Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

In something of a twist to my series on challenges to the CO2 global warming hypothesis, this post describes a new paper that attributes modern global warming entirely to water vapor, not CO2.

Water vapor (H2O) is in fact the major greenhouse gas in the earth’s atmosphere and accounts for about 70% of the earth’s natural greenhouse effect. Water droplets in clouds account for another 20%, while CO2 contributes only a small percentage, between 4 and 8%, of the total. The natural greenhouse effect keeps the planet at a temperature comfortable enough for living organisms to survive; without it, the earth would be about 33 degrees Celsius (59 degrees Fahrenheit) cooler.

According to the CO2 hypothesis, it’s the additional greenhouse effect of CO2 and other gases from human activities that is responsible for the current warming (ignoring El Niño) of about 1.0 degrees Celsius (1.8 degrees Fahrenheit) since the preindustrial era. Because elevated CO2 on its own causes only a tiny increase in temperature, the hypothesis postulates that the increase from CO2 is amplified by water vapor in the atmosphere and by clouds – a positive feedback effect.

The paper’s authors, Canadian researchers H. Douglas Lightfoot and Gerald Ratzer, don’t dispute the existence of the natural greenhouse effect, unlike some other, more heretical challenges described previously in this series. But the authors ignore the postulated water vapor amplification of CO2 greenhouse warming, and claim that increased water vapor alone accounts for today’s warmer world. It’s well known that extra water vapor is produced by the sun’s evaporation of seawater.

The basis of Lightfoot and Ratzer’s conclusion is something called the psychrometric chart, which is a rather intimidating tool used by architects and engineers in designing heating and cooling systems for buildings. The chart, illustrated below, is a mathematical model of the atmosphere’s thermodynamic properties, including heat content (enthalpy), temperature and relative humidity.

As inputs to their psychrometric model, the researchers used temperature and relative humidity measurements recorded on the 21st of the month over a 12-month period at 20 different locations: four north of the Arctic Circle, six in north mid-latitudes, three on the equator, one in the Sahara Desert, five in south mid-latitudes and one in Antarctica.

As indicated in the figure above, one output of the model from these inputs is the mass of water vapor in grams per kilogram of dry air. The corresponding mass of CO2 per kilogram of dry air at each location was calculated from Mauna Loa CO2 data in ppm (parts per million).
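For readers who want a rough check on these numbers without a psychrometric chart, both quantities can be approximated in a few lines. This is a minimal sketch, not the authors’ model: it uses the standard Magnus approximation for saturation vapor pressure, assumes sea-level pressure, and the function name and default CO2 level are my own.

```python
import math

def h2o_per_co2(temp_c, rel_hum_pct, co2_ppm=420.0, pressure_hpa=1013.25):
    """Approximate water vapor content and the H2O:CO2 molecule ratio."""
    # Saturation vapor pressure over water (hPa), Magnus approximation
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = (rel_hum_pct / 100.0) * e_sat               # actual vapor pressure (hPa)
    # Humidity ratio: grams of water vapor per kilogram of dry air
    w_g_per_kg = 1000.0 * 0.622 * e / (pressure_hpa - e)
    # Molecule-for-molecule mixing ratios relative to dry air
    h2o_molar = e / (pressure_hpa - e)
    co2_molar = co2_ppm * 1e-6
    return w_g_per_kg, h2o_molar / co2_molar

# Warm, humid tropics: roughly 21 g/kg and ~80 H2O molecules per CO2 molecule
print(h2o_per_co2(30.0, 80.0))
# Polar winter: well under one H2O molecule per CO2 molecule
print(h2o_per_co2(-50.0, 70.0))
```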

Their results revealed that the ratio of water vapor molecules to CO2 molecules ranges from 0.3 in polar regions to 108 in the tropics. Then, in a somewhat obscure argument, Lightfoot and Ratzer compared these ratios to calculated spectra for outgoing radiation at the top of the atmosphere. Three spectra – for the Sahara Desert, the Mediterranean, and Antarctica – are shown in the next figure.

The significant dip in the Sahara Desert spectrum arises from absorption by CO2 of outgoing radiation whose emission would otherwise cool the earth. You can see that in Antarctica, the dip is absent and replaced by a bulge. This bulge has been explained by William Happer and William van Wijngaarden as being a result of the radiation to space by greenhouse gases over wintertime Antarctica exceeding radiation by the cold ice surface.

Yet Lightfoot and Ratzer assert that the dip must be unrelated to CO2 because their psychrometric model shows there are 0.3 to 40 molecules of water vapor per CO2 molecule in Antarctica, compared with a much higher 84 to 108 in the tropical Sahara where the dip is substantial. Therefore, they say, the warming effect of CO2 must be negligible.

As I see it, however, there are at least two fallacies in the researchers’ arguments. First, the psychrometric model is an inadequate representation of the earth’s climate. Although the model takes account of both convective heat and latent heat (from evaporation of H2O) in the atmosphere, it ignores multiple feedback processes, including the all-important water vapor feedback mentioned above. Other feedbacks include the temperature/altitude (lapse rate) feedback, high- and low-cloud feedback, and the carbon cycle feedback.

A more important objection is that the assertion about water vapor causing global warming represents a circular argument.

According to Lightfoot and Ratzer’s paper, any warming above that provided by the natural greenhouse effect comes solely from the sun. On average, they correctly state, about 26% of the sun’s incoming energy goes into evaporation of water (mostly seawater) to water vapor. The psychrometric model links the increase in water vapor to a gain in temperature.

But the Clausius-Clapeyron equation tells us that warmer air holds more moisture, about 7% more for each degree Celsius of temperature rise. So an increase in temperature raises the water vapor level in the atmosphere – not the other way around. Lightfoot and Ratzer’s claim is circular reasoning.
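For reference, the roughly 7% figure follows directly from the Clausius-Clapeyron relation. With the latent heat of vaporization Lv ≈ 2.5 × 10^6 J/kg, the water vapor gas constant Rv ≈ 461.5 J/(kg·K), and a typical surface temperature T ≈ 288 K:

\[
\frac{1}{e_s}\frac{de_s}{dT} \;=\; \frac{L_v}{R_v T^2} \;\approx\; \frac{2.5\times10^{6}}{461.5 \times (288)^2} \;\approx\; 0.065\ \text{per degree},
\]

that is, about 6.5–7% more saturation vapor pressure for each degree Celsius of warming.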

Next: Rapid Climate Change Is Not Unique to the Present

Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

In a recent series of blog posts, I showed how actual scientific data and reports in newspaper archives over the past century demonstrate clearly that the frequency and severity of extreme weather events have not increased during the last 100 years. But there’s also plenty of evidence of weather extremes comparable to today’s dating back centuries and even millennia.

The evidence consists largely of reconstructions based on proxies such as tree rings, sediment cores and leaf fossils, although some evidence is anecdotal. Reconstruction of historical hurricane patterns, for example, confirms what I noted in an earlier post, that past hurricanes were even more frequent and stronger than those today.

The figure below shows a proxy measurement for hurricane strength of landfalling tropical cyclones – the name for hurricanes down under – that struck the Chillagoe limestone region in northeastern Queensland, Australia between 1228 and 2003. The proxy was the ratio of 18O to 16O isotopic levels in carbonate cave stalagmites, a ratio which is highly depleted in tropical cyclone rain.

What is plotted here is the 18O/16O depletion curve, in parts per thousand (‰); the thick horizontal line at -2.50 ‰ denotes Category 3 or above events, which have a top wind speed of 178 km per hour (111 mph) or greater. It’s clear that far more (seven) major tropical cyclones impacted the Chillagoe region in the period from 1600 to 1800 than in any period since, at least until 2003. Indeed, the strongest cyclone in the whole record occurred during the 1600 to 1800 period, and only one major cyclone was recorded from 1800 to 2003.

Another reconstruction of past data is that of unprecedentedly long and devastating “megadroughts,” which have occurred in western North America and in Europe for thousands of years. The next figure depicts a reconstruction from tree ring proxies of the drought pattern in central Europe from 1000 to 2012, with observational data from 1901 to 2018 superimposed. Dryness is denoted by negative values, wetness by positive values.

The authors of the reconstruction point out that the droughts from 1400 to 1480 and from 1770 to 1840 were much longer and more severe than those of the 21st century. A reconstruction of megadroughts in California back to 800 was featured in a previous post.

An ancient example of a megadrought is the seven-year drought in Egypt approximately 4,700 years ago that resulted in widespread famine. The water level in the Nile River dropped so low that the river failed to flood adjacent farmlands as it normally does each year, drastically reducing crop yields. The event is recorded in the Famine Stela, a hieroglyphic inscription on a granite block on an island in the Nile.

At the other end of the wetness scale, a Christmas Eve flood in the Netherlands, Denmark and Germany in 1717 drowned over 13,000 people – many more than died in the much-hyped Pakistan floods of 2022.

Although most tornadoes occur in the U.S., they have been documented in the UK and other countries for centuries. In 1577, North Yorkshire in England experienced a tornado of intensity T6 on the TORRO scale, which corresponds approximately to EF4 on the Fujita scale, with wind speeds of 259-299 km per hour (161-186 mph). The tornado destroyed cottages, trees, barns, hayricks and most of a church. EF4 tornadoes are relatively rare in the U.S.: of 1,000 recorded tornadoes from 1950 to 1953, just 46 were EF4.

Violent thunderstorms that spawn tornadoes have also been reported throughout history. An associated hailstorm which struck the Dutch town of Dordrecht in 1552 was so violent that residents “thought the Day of Judgement was coming” when hailstones weighing up to a few pounds fell on the town. A medieval depiction of the event is shown in the following figure.

Such historical storms make a mockery of the 2023 claim by a climate reporter that “Recent violent storms in Italy appear to be unprecedented for intensity, geographical extensions and damages to the community.” The thunderstorms in question produced hailstones the size of tennis balls, merely comparable to those that fell on Dordrecht centuries earlier. And the storms hardly compare with a hailstorm in India in 1888, which actually killed 246 people.

Next: Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

As I’ve stated many times in these pages, the evidence that global warming comes largely from human emissions of CO2 and other greenhouse gases is not rock solid. Two recent statistical studies affirm this position, but both studies can be faulted.

The first study, by four European engineers, is provocatively titled “On Hens, Eggs, Temperatures and CO2: Causal Links in Earth’s Atmosphere.” As the title suggests, the paper addresses the question of whether modern global warming results from increased CO2 in the atmosphere, according to the CO2 narrative, or whether it’s the other way around. That is, whether rising temperatures from natural sources are causing the CO2 concentration to go up.

The study’s controversial conclusion is the latter possibility – that extra atmospheric CO2 can’t be the cause of higher temperatures, but that raised temperatures must be the origin of elevated CO2, at least over the last 60 years for which we have reliable CO2 data. The mathematics behind the conclusion is complicated but relies on something called the impulse response function.

The impulse response function describes the reaction over time of a dynamic system to some external change or impulse. Here, the impulse and response are the temperature change ΔT and the increase in the logarithm of the CO2 level, Δln(CO2), or the reverse. The study authors took ΔT to be the average one-year temperature difference from 1958 to 2022 in the Reanalysis 1 dataset compiled by the U.S. NCEP (National Centers for Environmental Prediction) and the NCAR (National Center for Atmospheric Research); CO2 data was taken from the Mauna Loa time series which dates from 1958.

Based on these two time series, the study’s calculated IRFs (impulse response functions) are depicted in the figure below, for the alternative possibilities of ΔT => Δln(CO2) (left, in green) and Δln(CO2) => ΔT (right, in red). Clearly, the IRF indicates that ΔT is the cause and Δln(CO2) the effect, since for the opposite case of Δln(CO2) causing ΔT, the time lag is negative and therefore unphysical.

This is reinforced by the correlations shown in the following figure (lower panels), which also illustrates the ΔT and Δln(CO2) time series (upper panel). A strong correlation (R = 0.75) is seen between ΔT and Δln(CO2) when the CO2 increase occurs six months later than ΔT, while there is no correlation (R = 0.01) when the CO2 increase occurs six months earlier than ΔT, so ΔT must cause Δln(CO2). Note that the six-month displacement of Δln(CO2) from ΔT in the two time series is artificial, for easier viewing.
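The lead–lag test at the heart of this argument is straightforward to reproduce in outline. The sketch below is illustrative only: it uses synthetic data rather than the NCEP/NCAR and Mauna Loa series, and the paper’s impulse-response machinery is considerably more elaborate.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); positive lag means y follows x."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

# Synthetic monthly series standing in for dT and dln(CO2), 1958-2022;
# the noise level is tuned so the lagged correlation comes out near 0.75.
rng = np.random.default_rng(0)
dT = rng.standard_normal(780)
dlnCO2 = 0.5 * np.roll(dT, 6) + 0.45 * rng.standard_normal(780)  # CO2 trails T by 6 months

# High correlation when CO2 follows temperature, essentially none for the
# reverse order - the pattern the study reports (R ~ 0.75 versus R ~ 0.01).
print(lagged_corr(dT, dlnCO2, +6), lagged_corr(dT, dlnCO2, -6))
```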

However, while the above correlation and the behavior of the impulse response function are impressive mathematically, I personally am dubious about the study’s conclusion.

The oceans hold the bulk of the world’s CO2 and release it as the temperature rises, since warmer water holds less CO2 according to Henry’s Law. For global warming of approximately 1 degree Celsius (1.8 degrees Fahrenheit) since 1880, the corresponding increase in atmospheric CO2 outgassed from the oceans is only about 16 ppm (parts per million) – far below the actual increase of 130 ppm over that time. The Hens and Eggs study can’t account for the extra 114 ppm of CO2.

The equally provocative second study, titled “To what extent are temperature levels changing due to greenhouse gas emissions?”, comes from Statistics Norway, Norway’s national statistical institute and the principal source of the country’s official statistics. From a statistical analysis, the study claims that the effect of human CO2 emissions during the last 200 years has not been strong enough to cause the observed rise in temperature, and that climate models are incompatible with actual temperature data.

The conclusions are based on an analysis of 75 temperature time series from weather stations in 32 countries, the records spanning periods from 133 to 267 years; both annual and monthly time series were examined. The analysis attempted to identify systematic trends in temperature, or the absence of trends, in the temperature series.

What the study purports to find is that only three of the 75 time series show any systematic trend in annual data (though up to 10 do in monthly data), so that 72 sets of long-term temperature data show no annual trend at all. From this finding, the study authors conclude it’s not possible to determine how much of the observed temperature increase since the 19th century is due to CO2 emissions and how much is natural.

One of the study’s weaknesses is that it excludes sea surface temperatures, even though the oceans cover 70% of the earth’s surface, so the study is not truly global. A more important weakness is that it confuses local temperature measurements with global mean temperature. Furthermore, the study authors fail to understand that a statistical model simply can’t approximate the complex physical processes of the earth’s climate system.

In any case, statistical analysis in climate science doesn’t have a strong track record. The infamous “hockey stick” – a reconstructed temperature graph for the past 2,000 years resembling the shaft and blade of a hockey stick on its side – is perhaps the best example.

The reconstruction was debunked in 2003 by Stephen McIntyre and Ross McKitrick, who found (here and here) that the graph was based on faulty statistical analysis, as well as preferential data selection. The hockey stick was further discredited by a team of scientists and statisticians from the National Research Council of the U.S. National Academy of Sciences.

Next: Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

Antarctica Sending Mixed Climate Messages

Antarctica, the earth’s coldest and least-populated continent, is an enigma when it comes to global warming.

While the huge Antarctic ice sheet is known to be shedding ice around its edges, it may be growing in East Antarctica. Antarctic sea ice, after expanding slightly for at least 37 years, took a tumble in 2017 and reached a record low in 2023. And recent Antarctic temperatures have swung from record highs to record lows. No one is sure what’s going on.

The influence of global warming on Antarctica’s temperatures is uncertain. A 2021 study concluded that both East Antarctica and West Antarctica have cooled since the beginning of the satellite era in 1979, at rates of 0.70 degrees Celsius (1.3 degrees Fahrenheit) per decade and 0.42 degrees Celsius (0.76 degrees Fahrenheit) per decade, respectively. But over the same period, the Antarctic Peninsula (on the left in the adjacent figure) has warmed at a rate of 0.18 degrees Celsius (0.32 degrees Fahrenheit) per decade.

During the southern summer, two locations in East Antarctica recorded record low temperatures early this year. At the Concordia weather station, located at the 4 o’clock position from the South Pole, the mercury dropped to -51.2 degrees Celsius (-60.2 degrees Fahrenheit) on January 31, 2023. This marked the lowest January temperature recorded anywhere in Antarctica since the first meteorological observations there in 1956.

Two days earlier on January 29, 2023, the nearby Vostok station, about 400 km (250 miles) closer to the South Pole, registered a low temperature of -48.7 degrees Celsius (-55.7 degrees Fahrenheit), that location’s lowest January temperature since 1957. Vostok has the distinction of reporting the lowest temperature ever recorded in Antarctica, and also the world record low, of -89.2 degrees Celsius (-128.6 degrees Fahrenheit) on July 21, 1983.

Barely a year before, however, East Antarctica had experienced a heat wave, when the temperature soared to -10.1 degrees Celsius (13.8 degrees Fahrenheit) at the Concordia station on March 18, 2022. This balmy reading was the highest recorded hourly temperature at that weather station since its establishment in 1996, and 20 degrees Celsius (36 degrees Fahrenheit) above the previous March record high there. Remarkably, the temperature remained above the previous March record for three consecutive days, including nighttime.

Antarctic sea ice largely disappears during the southern summer and reaches its maximum extent in September, at the end of winter. The two figures below illustrate the winter maximum extent in 2023 (left) and the monthly variation of Antarctic sea ice extent this year from its March minimum to the September maximum (right).

The black curve on the right depicts the median extent from 1981 to 2010, while the dashed red and blue curves represent 2022 and 2023, respectively. It's clear that Antarctic sea ice in 2023 has lagged the median and even 2022 by a wide margin throughout the year. The decline in summer sea ice extent has now persisted for six years, as seen in the following figure which shows the average monthly extent since satellite measurements began, as an anomaly from the median value.

The overall trend from 1979 to 2023 is an insignificant 0.1% per decade relative to the 1981 to 2010 median. Yet a prolonged increase above the median occurred from 2008 to 2017, followed by the six-year decline since then. The current downward trend has sparked much debate and several possible reasons have been put forward, not all of which are linked to global warming. One analysis attributes the big losses of sea ice in 2017 and 2023 to extra-strong El Niños.

Melting of the Antarctic ice sheet is currently causing sea levels to rise by 0.4 mm (16 thousandths of an inch) per year, contributing about 10% of the global total. But the ice loss is not uniform across the continent, as seen in the next figure showing changes in Antarctic ice sheet mass since 2002.

In the image on the right, light blue shades indicate ice gain while orange and red shades indicate ice loss. White denotes areas where there has been very little or no change in ice mass since 2002; gray areas are floating ice shelves whose mass change is not measured by this satellite method.

You can see that East Antarctica has experienced modest amounts of ice gain, which is due to warming-enhanced snowfall. Nevertheless, this gain has been offset by significant loss of ice in West Antarctica over the same period, largely from melting of glaciers – which is partly caused by active volcanoes underneath the continent. While the ice sheet mass declined at a fairly constant rate of 133 gigatonnes (147 gigatons) per year from 2002 to 2020, it appears that the total mass may have reached a minimum and is now on the rise again.

Despite the hullabaloo about its melting ice sheet and shrinking sea ice, what happens next in Antarctica continues to be a scientific mystery.

Next: Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

No Evidence That Today’s El Niños Are Any Stronger than in the Past

The current exceptionally strong El Niño has revived discussion of a question which comes up whenever the phenomenon recurs every two to seven years: are stronger El Niños caused by global warming? While recent El Niño events suggest that in fact they are, a look at the historical record shows that even stronger El Niños occurred in the distant past.

El Niño is the warm phase of ENSO (the El Niño – Southern Oscillation), a natural ocean cycle that causes drastic temperature fluctuations and other climatic effects in tropical regions of the Pacific. Its effect on atmospheric temperatures is illustrated in the figure below. Warm spikes such as those in 1997-98, 2009-10, 2014-16 and 2023 are due to El Niño; cool spikes like those in 1999-2001 and 2008-09 are due to the cooler La Niña phase.

A slightly different temperature record, of selected sea surface temperatures in the El Niño region of the Pacific, averaged yearly from 1901 to 2017, is shown in the next figure from a 2019 study.

Here the baseline is the mean sea surface temperature over the 1901-2017 interval, and the black dashed line at 0.6 degrees Celsius is defined by the study authors as the threshold for an El Niño event. The different colors represent various regional types of El Niño; the gray bars mark warm years in which no El Niño developed.

This year’s gigantic spike in the tropospheric temperature to 0.93 degrees Celsius (1.7 degrees Fahrenheit) – a level that set alarm bells ringing – is clearly the strongest El Niño by far in the satellite record. Comparison of the above two figures shows that it is also the strongest since 1901. So it does indeed appear that El Niños are becoming stronger as the globe warms, especially since 1960.

Nevertheless, such a conclusion is ill-considered as there is evidence from an earlier study that strong El Niños have been plentiful in the earth’s past.

As I described in a previous post, a team of German paleontologists established a complete record of El Niño events going back 20,000 years, by examining marine sediment cores drilled off the coast of Peru. The cores contain an El Niño signature in the form of tiny, fine-grained stone fragments, washed into the sea by multiple Peruvian rivers following floods in the country caused by heavy El Niño rainfall.

The research team classified the flood signal as very strong when the concentration of stone fragments, known as lithics, was more than two standard deviations above the centennial mean. The frequency of these very strong events over the last 12,000 years is illustrated in the next figure; the black and gray bars show the frequency as the number of 500- and 1,000-year floods, respectively. Radiocarbon dating of the sediment cores was used to establish the timeline.
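The classification rule itself is simple to express. A sketch, assuming straightforward per-century grouping of the samples (the team’s actual treatment of the cores is more involved, and the function and variable names are mine):

```python
import numpy as np

def very_strong_events(lithics, years):
    """Flag samples more than two standard deviations above their centennial mean."""
    lithics = np.asarray(lithics, dtype=float)
    years = np.asarray(years)
    flags = np.zeros(lithics.size, dtype=bool)
    for century in np.unique(years // 100):        # group samples by century
        idx = (years // 100) == century
        mu, sigma = lithics[idx].mean(), lithics[idx].std()
        flags[idx] = lithics[idx] > mu + 2.0 * sigma
    return flags
```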

A more detailed record is presented in the following figure, showing the variation over 20,000 years of the sea surface temperature off Peru (top), the lithic concentration (bottom) and a proxy for lithic concentration (center). Sea surface temperatures were derived from chemical analysis of the marine sediment cores.

You can see that the lithic concentration and therefore El Niño strength were high around 2,000 and 10,000 years ago – approximately the same periods when the most devastating floods occurred. The figure also reveals the absence of strong El Niño activity from 5,500 to 7,500 years ago, a dry interval without any major Peruvian floods as reflected in the previous figure.

If you examine the lithic plots carefully, you can also see that the many strong El Niños approximately 2,000 and 10,000 years ago were several times stronger (note the logarithmic concentration scale) than current El Niños on the far left of the figure. Those two periods were warmer than today as well, being the Roman Warm Period and the Holocene Thermal Maximum, respectively.

So there is nothing remarkable about recent strong El Niños.

Despite this, the climate science community is still uncertain about the global warming question. The 2019 study described above found that since the 1970s, formation of El Niños has shifted from the eastern to the western Pacific, where ocean temperatures are higher. From this observation, the study authors concluded that future El Niños may intensify. However, they qualified their conclusion by stating that:

… the root causes of the observed background changes in the later part of the 20th century remain elusive … Natural variability may have added significant contributions to the recent warming.

Recently, an international team of 17 scientists has conducted a theoretical study of El Niños since 1901 using 43 climate models, most of which showed the same increase in El Niño strength since 1960 as the actual observations. But again, the researchers were unable to link this increase to global warming, declaring that:

Whether such changes are linked to anthropogenic warming, however, is largely unknown.

The researchers say that resolution of the question requires improved climate models and a better understanding of El Niño itself. Some climate models show El Niño becoming weaker in the future.

Next: Antarctica Sending Mixed Climate Messages

Targeting Farmers for Livestock Greenhouse Gas Emissions Is Misguided

Farmers in many countries are increasingly coming under attack over their livestock herds. Ireland’s government is contemplating culling the country’s cattle herds by 200,000 cows to cut back on methane (CH4) emissions; the Dutch government plans to buy out livestock farmers to lower emissions of CH4 and nitrous oxide (N2O) from cow manure; and New Zealand is close to taxing CH4 from cow burps.

But all these measures, and those proposed in other countries, are misguided and shortsighted – for multiple reasons.

The thrust behind the intended clampdown on the farming community is the estimated 11-17% of current greenhouse gas emissions from agriculture worldwide, which contribute to global warming. Agricultural CH4, mainly from ruminant animals, accounted for approximately 4% of total greenhouse gas emissions in the U.S. in 2021, according to the EPA (Environmental Protection Agency), while N2O accounted for another 5%.

The actual warming produced by these two greenhouse gases depends on their so-called “global warming potential,” a quantity determined by three factors: how efficiently the gas absorbs heat, its lifetime in the atmosphere, and its atmospheric concentration. The following table illustrates these factors for CO2, CH4 and N2O, together with their comparative warming effects.

The conventional global warming potential (GWP) is a dimensionless metric, in which the GWP per molecule of a particular greenhouse gas is normalized to that of CO2; the GWP takes into account the atmospheric lifetime of the gas. The table shows both GWP-20 and GWP-100, the warming potentials calculated over a 20-year and 100-year time horizon, respectively.

The final column shows what I call weighted GWP values, as percentages of the CO2 value, calculated by multiplying the conventional GWP by the ratio of the rate of concentration increase for that gas to that of CO2. The weighted GWP indicates how much warming CH4 or N2O causes relative to CO2.

Over a 100-year time span, you can see that both CH4 and N2O exert essentially the same warming influence, at 10% of CO2 warming. But over a 20-year interval, CH4 has a stronger warming effect than N2O, at 27% of CO2 warming, because of its shorter atmospheric lifetime, which boosts the conventional GWP value from 30 (over 100 years) to 83.
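These percentages can be reproduced with round-number concentration growth rates – roughly 2.5 ppm per year for CO2, 8 ppb per year for CH4 and 1 ppb per year for N2O over recent decades. A minimal sketch; the growth rates are my approximations, not necessarily the table’s exact inputs:

```python
# Weighted GWP: the conventional GWP scaled by how fast each gas is
# accumulating in the atmosphere relative to CO2, molecule for molecule.
# Growth rates are assumed round numbers for recent decades.
growth = {"CO2": 2500.0, "CH4": 8.0, "N2O": 1.0}        # ppb per year
gwp = {"CH4": {"20yr": 83, "100yr": 30},
       "N2O": {"20yr": 273, "100yr": 273}}

for gas, values in gwp.items():
    for horizon, g in values.items():
        weighted = 100.0 * g * growth[gas] / growth["CO2"]
        print(f"{gas} over {horizon}: {weighted:.0f}% of CO2 warming")
# CH4 over 20yr: ~27%, over 100yr: ~10%; N2O: ~11% on both horizons
```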

However, the actual global temperature increase from CH4 and N2O – concern over which is the basis for legislation targeting the world’s farmers – is small. Over a 20-year period, the combined contribution of these two gases is approximately 0.075 degrees Celsius (0.14 degrees Fahrenheit), assuming that all current warming comes from CO2, CH4 and N2O combined, and using a value of 0.14 degrees Celsius (0.25 degrees Fahrenheit) per decade for the current warming rate.

But, as I’ve stated in many previous posts, at least some current warming is likely to be from natural sources, not greenhouse gases. So the estimated 20-year temperature rise of 0.075 degrees Celsius (0.14 degrees Fahrenheit) is probably an overestimate. The corresponding number over 100 years, also an overestimate, is 0.23 degrees Celsius (0.41 degrees Fahrenheit).
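The arithmetic behind these temperature figures appears to be a proportional split of the observed warming according to the weighted GWPs, under the stated assumption that CO2, CH4 and N2O together cause all current warming – a reconstruction of the reasoning, not code from any paper:

```python
# Split the observed warming in proportion to the weighted GWPs.
rate = 0.14                                    # degrees Celsius per decade (stated above)

share_20 = (0.27 + 0.11) / (1 + 0.27 + 0.11)   # CH4 + N2O share, 20-year GWPs
print(share_20 * rate * 2)                     # ~0.077 C over 20 years

share_100 = (0.10 + 0.10) / (1 + 0.10 + 0.10)  # CH4 + N2O share, 100-year GWPs
print(share_100 * rate * 10)                   # ~0.23 C over 100 years
```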

Do such small, or even smaller, gains in temperature justify the shutting down of agriculture? Farmers around the globe certainly don’t think so, and for good reason.

First, CH4 from ruminant animals such as cows, sheep and goats accounts for only 4% of U.S. greenhouse gas emissions as noted above, compared with 29% from transportation, for example. And giving up meat and dairy products would have little impact on global temperatures. Removing all livestock and poultry from the U.S. food system would only reduce global greenhouse gas emissions by 0.36%, a study has found.

Other studies have shown that the elimination of all livestock from U.S. farms would leave our diets deficient in vital nutrients, including high-quality protein, iron and vitamin B12 that meat provides, says the Iowa Farm Bureau.

Furthermore, as agricultural advocate Kacy Atkinson argues, the methane that cattle burp out during rumination breaks down in 10 to 15 years into CO2 and water. The grasses that cattle graze on absorb that CO2, and the carbon gets sequestered in the soil through the grasses’ roots.

Apart from cow manure management, the largest source of N2O emissions worldwide is the application of nitrogenous fertilizers to boost crop production. Greatly increased use of nitrogen fertilizers is the main reason for massive increases in crop yields since 1961, part of the so-called green revolution in agriculture.

The figure below shows U.S. crop yields relative to yields in 1866 for corn, wheat, barley, grass hay, oats and rye. The blue dashed curve is the annual agricultural usage of nitrogen fertilizer in megatonnes (Tg). The strong correlation with crop yields is obvious.

Restricting fertilizer use would severely impact the world’s food supply. Sri Lanka’s ill-conceived 2022 ban of nitrogenous fertilizer (and pesticide) imports caused a 30% drop in rice production, resulting in widespread hunger and economic turmoil – a cautionary tale for any efforts to extend N2O reduction measures from livestock to crops.

Next: No Evidence That Today’s El Niños Are Any Stronger than in the Past

Estimates of Economic Losses from El Niños Are Far-fetched

A recent study makes the provocative claim that some of the most intense past El Niño events cost the global economy from $4 trillion to $6 trillion over the following years. That’s two orders of magnitude higher than previous estimates, but almost certainly wrong.

One reason for the enormous difference is that earlier estimates only examined the immediate economic toll, whereas the new study estimated cumulative losses over the five-year period after a warming El Niño. The study authors say, correctly, that the economic downturn triggered by this naturally occurring climate cycle can last that long.

However, even when this drawn-out effect is taken into account, the new study’s cost estimates are still one order of magnitude greater than other estimates in the scientific literature, such as those of the University of Colorado’s Roger Pielke Jr., who studies natural disasters. His estimated time series of total weather disaster losses as a proportion of global GDP from 1990 to 2020 is shown in the figure below.

The accounting used in the new study includes the “spatiotemporal heterogeneity of El Niño teleconnections,” teleconnections being links between weather phenomena at widely separated locations. Country-level teleconnections are based on correlations between temperature or rainfall in that country, and indexes commonly used to define El Niño and its cooling counterpart, La Niña. Teleconnections are strongest in the tropics and weaker in midlatitudes.

The researchers’ accounting procedure estimates total losses from the 1997-98 El Niño at a staggering $5.7 trillion by 2003, compared with a previous estimate of only $36 billion in the immediate aftermath of the event. For the earlier 1982-83 El Niño, the study estimates the total costs at $4.1 trillion by 1988. The calculated global distribution of GDP losses following both events is illustrated in the next figure.

To see how implausible these trillion-dollar estimates are, it’s only necessary to refer to Pielke’s graph above, which relies on official data from the insurance industry (including leading reinsurance company Munich Re) and the World Bank. His graph indicates that the peak loss from all 1998 weather disasters was 0.38% of global GDP for that year.

As El Niño was not the only disaster in 1998 – others included floods and hurricanes – this number represents an upper limit for instant El Niño losses. Using a value for global GDP in 1998 of $31,533 billion in current U.S. dollars, 0.38% corresponds to a maximum instant loss of $120 billion. Over a subsequent five-year period, the maximum loss would have been five times as much, or $600 billion – assuming the same annual loss in each year, which is undoubtedly an overestimate.
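Laid out explicitly, using only the figures quoted above:

```python
# Upper-bound check on the study's $5.7 trillion estimate, using Pielke's
# peak 1998 loss ratio and global GDP for that year (both quoted above).
gdp_1998 = 31_533e9           # global GDP in 1998, current US dollars
peak_loss_ratio = 0.0038      # 0.38% of global GDP, all 1998 weather disasters

instant_max = gdp_1998 * peak_loss_ratio        # ~$120 billion
five_year_max = 5 * instant_max                 # ~$600 billion, deliberately generous
print(f"${instant_max / 1e9:.0f} billion instant, "
      f"${five_year_max / 1e9:.0f} billion over five years")
print(f"The $5.7 trillion estimate is ~{5.7e12 / five_year_max:.0f}x larger")
```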

This inflated estimate of $600 billion is still an order of magnitude smaller than the study’s $5.7 trillion by 2003. In reality, the discrepancy is larger yet because the actual 5-year loss was likely much less than $600 billion as just discussed.

Two other observations about Pielke’s graph cast further doubt on the methodology of the researchers’ accounting procedure. First, the strongest El Niños in that 21-year period were those in 1997-98, 2009-10 and 2014-16. The graph does indeed show peaks in 1998-99 and in 2017, one year after a substantial El Niño – but not in 2011 following the 2009-10 event. This alone suggests that financial losses from El Niño are not as large as the researchers think.

Furthermore, there’s a strong peak in 2005, the largest in the 21 years of the graph, which doesn’t correspond to any substantial El Niño. The implication is that losses from other types of weather disaster can dominate losses from El Niño.

It’s important to get an accurate handle on economic losses from El Niño and other weather disasters, in case global warming exacerbates such events in the future – although, as I’ve written extensively, there’s no evidence to date that this is happening. Effects of El Niño include catastrophic flooding in the western Americas, flooding or episodic droughts in Australia, and coral bleaching.

The study authors stand by their research, however, estimating that the 2023 El Niño could hold back the global economy by $3 trillion over the next five years, a figure not included in their paper. But others are more skeptical. Climate economist Gary Yohe commented that “the enormous estimates cannot be explained simply by forward-looking accounting.” And Mike McPhaden, a senior scientist at NOAA (the U.S. National Oceanic and Atmospheric Administration) who was not involved in the research, called the study “provocative.”

Next: Targeting Farmers for Livestock Greenhouse Gas Emissions Is Misguided

Challenges to the CO2 Global Warming Hypothesis: (9) Rotation of the Earth’s Core as the Source of Global Warming

Yet another challenge to the CO2 global warming hypothesis, but one radically different from all the other challenges I’ve discussed in this series, hypothesizes that global warming or cooling results entirely from the slight speeding up or slowing down of the earth’s rotating inner core.

Linking the earth’s rotation to its surface temperature is not a new idea and has been discussed by several geophysicists over the last 50 years. What is new is the recent (2023) discovery that changes in global temperature follow changes in the earth’s rotation rate that in turn follow changes in the rotation rate of the inner core, both with a time delay. This discovery underlies the postulate that the earth’s temperature is regulated by rotational variations of the inner core, not by CO2.

The history and recent developments of the rotational hypothesis have been summarized in a recent paper by Australian Richard Mackey. The apparently simplistic hypothesis, which is certain to raise scientific eyebrows, does, however, meet the requirements for its scientific validation or rejection: it makes a prediction that can be tested against observation.

As Mackey explains, the prediction is that our current bout of global warming will come to an end in 2025, when global cooling will begin.

The prediction is based on the geophysical findings that shifts in the earth’s temperature appear to occur about eight years after the planet’s rotation rate changes, and the earth’s rotation rate changes eight years after the inner core’s rotation rate does. Because the inner core’s rotation rate began to slow around 2009, cooling should set in around 16 years later in 2025, according to the rotational hypothesis.

As illustrated in the figure below, the partly solid inner core is surrounded by the liquid metal outer core; the outer core is enveloped by the thick solid mantle, which underlies the thin crust on which we live. Convection in the outer core generates an electromagnetic field. The resulting electromagnetic torque on the inner core, together with gravitational coupling between the inner core and mantle, drive rotational variations in the inner core.

Although all layers rotate with the whole earth, the outer and inner cores also oscillate back and forth. Variations in the inner core rotation rate appear to be correlated with changes in the earth’s electromagnetic field mentioned above, changes that are in phase with variations in the global mean temperature.

Only recently was it found that the inner core rotates at a different speed than the outer core and mantle, with decadal fluctuations superimposed on the irregular rotation. The rotational hypothesis links these decadal fluctuations of the inner core to global warming and cooling: as the core rotates faster, the earth warms, and as it puts the brakes on, the earth cools.

The first apparent evidence for the rotational hypothesis was reported in a 1976 research paper by geophysicists Kurt Lambeck and Amy Cazenave, who argued that global cooling in the 1960s and early 1970s arose from a slowing of the earth’s rotation during the 1950s.

At that time, the role of inner-core rotation was unknown. Nevertheless, the authors went on to predict that a period of global warming would commence in the 1980s, following a 1972 switch in rotation rate from deceleration to acceleration. Their prediction was based on a time lag of 10 to 15 years between changes in the earth’s rotational speed and surface temperature, rather than the 16 years established recently.

Other researchers had proposed a total time lag of only eight years. The next figure compares their estimates of rotation rate (green line) and surface temperature (red line) from 1880 to 2002, clearly showing the temperature lag, at least since 1900. (The black and blue lines should be ignored).

A minimum lag of eight years and a maximum of 16 years means that global warming should have begun sometime between 1980 and 1988, according to the rotational hypothesis. In fact, the current warming stretch started in the late 1970s, so the hypothesis is on weak ground.

Another weakness is whether the hypothesis can account for all of modern warming. Mackey argues that it can, based on known shortcomings in the various global temperature datasets with which predictions of the rotational hypothesis are compared. But those shortcomings mean merely that there are large uncertainties associated with any comparison, and that a role for CO2 can’t be definitely ruled out.

A moment of truth for the rotational hypothesis will come in 2025 when, it predicts, the planet will start to cool. However, if that indeed happens, rotational fluctuations of the earth’s inner core won’t be the only possible explanation. As I’ve discussed in a previous post, a potential drop in the sun’s output, known as a grand solar minimum, could also initiate a cold spell around that time.

Next: Estimates of Economic Losses from El Niños Are Far-fetched