Suspension of New Blog Posts

It is with great regret that I’m suspending publication of further blog posts, at least temporarily, because of ill health. Two years ago I was diagnosed with stage 4 (metastatic) prostate cancer. Unfortunately, both the disease itself and some of the side effects from treatment have recently taken a turn for the worse, and I can no longer devote the necessary time to keeping up my blog.

However, the blog will remain accessible and you will be able to continue making comments on previous posts. I will endeavor to answer questions when I can.

Philippines Court Ruling Deals Deathblow to Success of GMO Golden Rice

Genetically modified Golden Rice was once seen as the answer to vitamin A deficiency in Asia and Africa, where rice is the staple food.  But a recent court ruling in the Philippines, the very country where rice breeders first came up with the idea of Golden Rice, has brought more than 30 years of crop development to an abrupt halt.

As reported in Science magazine, a Philippine Court of Appeals in April 2024 revoked a 2021 permit that allowed the commercial planting of a Golden Rice variety tailored for local conditions. The ruling resulted from a lawsuit by Greenpeace and other groups, who for many years have opposed the introduction of all GMO (genetically modified organism) crops as unsafe for humans and the environment.

Millions of poor children in Asia and Africa go blind each year, or even die from weakened immune systems, because of a lack of vitamin A, which the human body produces from a naturally occurring precursor, beta-carotene.

So the discovery by Swiss plant geneticist Ingo Potrykus and German biologist Peter Beyer in the 1990s that splicing two genes into rice – one from daffodils, the other from a bacterium – could greatly increase its beta-carotene content caused considerable excitement among nutritionists. Subsequent research, in which the daffodil gene was replaced by one from maize, boosted the beta-carotene level even further.

The original discovery should have been heralded as a massive breakthrough. But widespread hostility erupted once the achievement was publicized. Potrykus was accused of creating a “Frankenfood,” evocative of the monster created by the fictional mad scientist Frankenstein, and subjected to bomb threats. Trial plots of Golden Rice were destroyed by rampaging activists.

Nevertheless, in 2018, four countries – Australia, New Zealand, Canada and the U.S. – finally approved Golden Rice. The U.S. FDA (Food and Drug Administration) granted the biofortified food its prestigious “GRAS (generally recognized as safe)” status. This success paved the way for the nonprofit IRRI (International Rice Research Institute) in the Philippines to initiate large-scale trials of Golden Rice varieties in that country and Bangladesh.

Greenpeace contends that currently planted Golden Rice in the Philippines will have to be destroyed, although a consulting attorney says there is nothing in the Court of Appeals decision to support that claim. And while Bangladesh is close to growing Golden Rice for consumption, the request to actually start planting has been under review since 2017.

The Philippines court justified its ruling by citing the supposed lack of scientific consensus on the safety of Golden Rice; the consulting attorney pointed out that “both camps presented opposing evidence.” In fact, the judges leaned heavily on the so-called precautionary principle – a concept developed by 20th-century environmental activists.

The origins of the precautionary principle can be traced to the application in the early 1970s of the German principle of “Vorsorge” or foresight, based on the belief that environmental damage can be avoided by careful forward planning. The “Vorsorgeprinzip” became the foundation for German environmental law and policies in areas such as acid rain, pollution and global warming. The principle reflects the old adage that “it’s better to be safe than sorry,” and can be regarded as a restatement of the ancient Hippocratic oath in medicine, “First, do no harm.”

Formally, the precautionary principle can be stated as:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.

But in spite of its noble intentions, the precautionary principle in practice is based far more on political considerations than on science. A notable example is the bans on GMO crops by more than half the countries in the European Union. The bans stem from the widespread, fear-based belief that eating genetically altered foods is unsafe, despite the lack of any scientific evidence that GMOs have ever caused harm to a human.

In the U.S., approved GMO crops include corn, which is the basic ingredient in many cereals, corn tortillas, corn starch and corn syrup, as well as feed for livestock and farmed fish; soybeans; canola; sugar beets; yellow squash and zucchini; bruise-free potatoes; nonbrowning apples; papaya; and alfalfa.

One of the biggest issues with the precautionary principle is that it essentially advocates risk avoidance. But risk avoidance carries its own risks.

We accept the risk, for example, of being killed or badly injured while traveling on the roads because the risk is outweighed by the convenience of getting to our destination quickly, or by our desire to have fresh food available at the supermarket. Applying the precautionary principle would mean, in addition to the safety measures already in place, reducing all speed limits to 16 km per hour (10 mph) or less – a clearly impractical solution that would take us back to horse-and-buggy days.

How Much Will Reduction in Shipping Emissions Stoke Global Warming?

A controversial new research paper claims that a major reduction in emissions of SO2 (sulfur dioxide) since 2020, due to a ban on the use of high-sulfur fuels by ships, could result in additional global warming of 0.16 degrees Celsius (0.29 degrees Fahrenheit) over the next seven years – over and above the warming from other sources. The paper was published by a team of NASA scientists.

If correct, this example of the law of unintended consequences would add to the warming from human CO2, as well as to that caused by water vapor injected into the stratosphere by the massive underwater eruption of the Hunga Tonga–Hunga Haʻapai volcano in 2022. The eruption, as I described in a previous post, is likely to raise global temperatures by 0.035 degrees Celsius (0.063 degrees Fahrenheit) during the next few years.

It’s been known for some time that SO2, including that emanating from ship engines, reacts with water vapor in the air to produce aerosols. Sulfate aerosol particles linger in the atmosphere, reflecting incoming sunlight and also acting as condensation nuclei for the formation of reflective clouds. Both effects cause global cooling.

In fact, it was the incorporation of sulfate aerosols into climate models that enabled the models to successfully reproduce the cooling observed between 1945 and about 1975, a feature that had previously eluded modelers.   

On January 1, 2020, new IMO (International Maritime Organization) regulations lowered the maximum allowable sulfur content in international shipping fuels to 0.5%, a significant reduction from the previous 3.5%. This air pollution control measure has reduced cloud formation and the associated reflection of shortwave solar radiation – reductions that have inadvertently increased global warming.

As would be expected, the strongest effects show up in the world’s most traveled shipping lanes: the North Atlantic, the Caribbean and the South China Sea. The figure on the left below depicts the researchers’ calculated contribution from reduced cloud fraction to additional radiative forcing resulting from the SO2 reduction. The figure on the right shows by how much the concentration of condensation nuclei in low maritime clouds has fallen since the regulations took effect.

The cloud fraction contribution is 0.11 watts per square meter. Together with the contributions from reduced cloud water content and the drop in reflection of solar radiation, the NASA scientists say, the new shipping regulations add a total of 0.2 watts per square meter of extra radiative forcing averaged over the global ocean.

The effect is concentrated in the Northern Hemisphere since there is relatively little shipping traffic in the Southern Hemisphere. The researchers calculate the boost to radiative forcing to be 0.32 watts per square meter in the Northern Hemisphere, but only 0.1 watts per square meter in the Southern Hemisphere. The hemispheric difference in their calculations of absorbed shortwave solar radiation (near the earth’s surface) can be seen in the following figure, to the right of the dotted line.

According to the paper, the additional radiative forcing of 0.2 watts per square meter since 2020 corresponds to added global warming of 0.16 degrees Celsius (0.29 degrees Fahrenheit) over seven years. Such an increase implies a warming rate of 0.24 degrees Celsius (0.43 degrees Fahrenheit) per decade from reduced SO2 emissions alone, which is more than double the average warming rate since 1880 and 20% higher than the mean warming rate since 1980 of approximately 0.19 degrees Celsius (0.34 degrees Fahrenheit) per decade.
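As a quick back-of-envelope check of that conversion (a minimal sketch, not the paper’s energy balance model), simple linear scaling of 0.16 degrees Celsius over seven years gives roughly 0.23 degrees Celsius per decade, close to the quoted 0.24, and about 20% above the post-1980 rate:

```python
# Back-of-envelope conversion of the claimed 7-year warming to a per-decade rate.
# Illustrative only; the paper's own 0.24 C/decade figure presumably comes from
# its more detailed calculation.
added_warming_c = 0.16   # degrees Celsius of extra warming over the 7-year period
period_years = 7
baseline_rate = 0.19     # mean warming rate since 1980, degrees Celsius per decade

rate_per_decade = added_warming_c / period_years * 10
print(f"Implied extra warming rate: {rate_per_decade:.2f} C per decade")   # ~0.23
print(f"Ratio to post-1980 rate: {rate_per_decade / baseline_rate:.2f}")   # ~1.2, i.e. ~20% higher
```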

The researchers remark that the forcing increase of 0.2 watts per square meter is a staggering 80% of the measured gain in forcing from other sources since 2020, the net planetary heat uptake since then being 0.25 watts per square meter.

However, these controversial claims have been heavily criticized, and not just by climate change skeptics. Climate scientist and modeler Zeke Hausfather points out that total warming will be less than the estimated 0.16 degrees Celsius (0.29 degrees Fahrenheit), because the new shipping regulations have only a minimal effect on land, which covers 29% of the earth’s surface.

And, states Hausfather, the researchers’ energy balance model “does not reflect real-world heat uptake by the ocean, and no actual climate model has equilibration times anywhere near that fast.” Hausfather’s own 2023 estimate of additional warming due to the use of low-sulfur shipping fuels was a modest 0.045 degrees Celsius (0.081 degrees Fahrenheit) after 30 years, as shown in the figure below.

Further criticism of the paper’s methodology comes from Laura Wilcox, associate professor at the National Centre for Atmospheric Science at the University of Reading. Wilcox told media that the paper makes some “very bold statements about temperature changes … which seem difficult to justify on the basis of the evidence.” She also has concerns about the mathematics of the researchers' calculations, including the possibility that the effect of sulfur emissions is double-counted.

Next: Philippines Court Ruling Deals Deathblow to Success of GMO Golden Rice

The Scientific Reality of the Quest for Net Zero

Often lost in the lemming-like drive toward Net Zero is the actual effect that reaching the goal of zero net CO2 emissions by 2050 will have. A new paper published by the CO2 Coalition demonstrates how surprisingly little warming would actually be averted by adoption of Net-Zero policies. The fundamental reason is that CO2 warming is already close to saturation, with each additional tonne of atmospheric CO2 producing less warming than the previous tonne.

The paper, by atmospheric climatologist Richard Lindzen together with atmospheric physicists William Happer and William van Wijngaarden, shows that for worldwide Net-Zero CO2 emissions by 2050, the averted warming would be 0.28 degrees Celsius (0.50 degrees Fahrenheit). If the U.S. were to achieve Net Zero on its own by 2050, the averted warming would be a tiny 0.034 degrees Celsius (0.061 degrees Fahrenheit).

These estimates assume that water vapor feedback, which is thought to amplify the modest temperature rise from CO2 acting alone, boosts warming without feedback by a factor of four – the assertion made by the majority of the climate science community. With no feedback, the averted warming would be 0.070 degrees Celsius (0.13 degrees Fahrenheit) for worldwide Net-Zero CO2 emissions, and a mere 0.0084 degrees Celsius (0.015 degrees Fahrenheit) for the U.S. alone.

The paper’s calculations are straightforward. As the authors point out, the radiative forcing of CO2 is proportional to the logarithm of its concentration in the atmosphere. So the temperature increase from now to 2050 caused by a concentration increment ΔC would be

ΔT = S log₂(C/C₀),

in which S is the temperature increase for a doubling of the atmospheric CO2 concentration from its present value C₀; C = C₀ + ΔC is what the CO2 concentration in 2050 would be if no action is taken to reduce CO2 emissions by then; and log₂ is the binary (base 2) logarithm.

The saturation effect for CO2 comes from this logarithmic dependence of ΔT on the concentration ratio C/C₀, so that each CO2 concentration increment results in less warming than the previous equal increment. In the words of the paper’s authors, “Greenhouse warming from CO2 is subject to the law of diminishing returns.”

If emissions were to decrease by 2050, the CO2 concentration would be less than C in the equation above, or C – δC where δC represents the concentration decrement. The slightly smaller temperature increase ΔT′ would then be

ΔT′ = S log₂((C – δC)/C₀),

and the averted temperature increase δT from Net-Zero policies is δT = ΔT – ΔT′, which is

δT = S {log₂(C/C₀) – log₂((C – δC)/C₀)} = S log₂(C/(C – δC)) = –S log₂(1 – δC/C).

This can be rewritten as

δT = –S ln(1 – δC/C)/ln(2), in which ln is the natural (base e) logarithm.

Now using the power series expansion –ln(1 – x) = x + x²/2 + x³/3 + x⁴/4 + … and recognizing that δC is much smaller than C, so that all terms in the expansion of –ln(1 – δC/C) beyond the first can be ignored,

δT = S (δC/C)/ln(2).

Finally, writing the concentration increment without emissions reduction ΔC as RΔt, where R is the constant emission rate over the time interval Δt, we have

C = C₀ + ΔC = C₀ + RΔt, and the concentration decrement for reduced emissions δC is

δC = ∫ R (1 – t/Δt) dt = RΔt/2, where the integral runs from t = 0 to t = Δt, which gives

δT = S RΔt/(2 ln(2) (C₀ + RΔt)).

It’s this latter equation which yields the numbers for averted warming quoted above. In the case of the U.S. going it alone, δT needs to be multiplied by 0.12, which is the U.S. fraction of total world CO2 emissions in 2024.
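As a rough numerical check of that final equation, here is a minimal Python sketch. The inputs – present CO2 concentration, its current growth rate, the time remaining to 2050 and the sensitivity S – are illustrative assumptions on my part, not necessarily the exact values used in the paper, but they reproduce the quoted averted-warming figures to within rounding.

```python
import math

def averted_warming(S, C0, R, dt, fraction=1.0):
    """Averted warming dT = S*R*dt / (2 ln2 (C0 + R*dt)), scaled by the fraction
    of world CO2 emissions covered by the Net-Zero policy."""
    return fraction * S * R * dt / (2 * math.log(2) * (C0 + R * dt))

# Illustrative inputs (assumptions, not necessarily the paper's exact values):
C0 = 420.0   # present CO2 concentration, ppm
R = 2.5      # current growth rate of the CO2 concentration, ppm per year
dt = 26.0    # years remaining to 2050

for S, label in [(2.8, "with 4x water vapor feedback"), (0.7, "no feedback")]:
    world = averted_warming(S, C0, R, dt)               # worldwide Net Zero
    usa = averted_warming(S, C0, R, dt, fraction=0.12)  # U.S. alone (12% of emissions)
    print(f"{label}: world {world:.2f} C averted, U.S. alone {usa:.3f} C averted")
# Roughly 0.27 C and 0.03 C with feedback, 0.07 C and 0.008 C without - close to
# the paper's quoted 0.28, 0.034, 0.070 and 0.0084 degrees Celsius.
```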

Such small amounts of averted warming show the folly of the quest for Net Zero. While avoiding 0.28 degrees Celsius (0.50 degrees Fahrenheit) of warming globally is arguably a desirable goal, it’s extremely unlikely that the whole world will comply with Net Zero. China, India and Indonesia are currently indulging in a spate of building new coal-fired power plants which belch CO2, and only a limited number of those will be retired by 2050.

Developing countries, especially in Africa, are in no mood to hold back on any form of fossil fuel burning either. Many of these countries, quite reasonably, want to reach the same standard of living as the West – a lifestyle that has been attained through the availability of cheap, fossil fuel energy. Coal-fired electricity is the most affordable remedy for much of Africa and Asia.

In any case, few policy makers in the West have given much thought to the cost of achieving Net Zero. Michael Kelly, emeritus Prince Philip Professor of Technology at the University of Cambridge and an expert in energy systems, has calculated that the cost of a Net-Zero economy by 2050 in the U.S. alone will be at least $35 trillion, and this does not include the cost of educating the necessary skilled workforce.

Professor Kelly says the target is simply unattainable, a view shared by an ever-increasing number of other analysts. In his opinion, “the hard facts should put a stop to urgent mitigation and lead to a focus on adaptation (to warming).”

Next: How Much Will Reduction in Shipping Emissions Stoke Global Warming?

No Convincing Evidence That Extreme Wildfires Are Increasing

According to a new research study by scientists at the University of Tasmania, the frequency and magnitude of extreme wildfires around the globe more than doubled between 2003 and 2023, despite a decline in the total worldwide area burned annually. The study authors link this trend to climate change.

Such a claim doesn’t stand up to scrutiny, however. First, the authors seem unaware of the usual definition of climate change, which is a long-term shift in weather patterns over a period of at least 30 years. Their finding of a 21-year trend in extreme wildfires is certainly valid, but the study interval is too short to draw any conclusions about climate.

Paradoxically, the researchers mention an earlier 2017 study of theirs, stating that the 12-year period of that study of extreme wildfires was indeed too short to identify any temporal climate trend. Why they think 21 years is any better is puzzling!

Second, the study makes no attempt to compare wildfire frequency and magnitude over the last 21 years with those from decades ago, when there were arguably as many hot-burning fires as now. Such a comparison would allow the claim of more frequent extreme wildfires today to be properly evaluated.

Although today’s satellite observations of wildfire intensity far outnumber the observations made before the satellite era, there’s still plenty of old data that could be analyzed. Satellites measure what is called the FRP (fire radiative power), which is the total energy release rate of a fire less the energy dissipated through convection and conduction. The older FI (fire intensity) also measures the energy released by a fire, but as the rate of energy release per unit length of fire front; FRP, usually measured in MW (megawatts), is therefore closely related to FI.

The study authors define extreme wildfires as those with daily FRPs exceeding the 99.99th percentile. Satellite FRP data for all fires in the study period was collected in pixels 1 km on a side, each retained pixel containing just one wildfire “hotspot” after duplicate hotspots were excluded.

The total raw dataset included 88.4 million hotspot observations, and this number was reduced to 30.7 million “events” by summing individual pixels in cells approximately 22 km on a side. Of these 30.7 million, just 2,913 events satisfied the extreme wildfire 99.99th percentile requirement. The study’s summed FRP values for the top 20 events were in the range of 50,000-150,000 MW, corresponding to individual FRPs of about 100-300 MW in a 1 x 1 km pixel.
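To make the selection step concrete, here is a minimal sketch (with synthetic, purely illustrative FRP values) of how a 99.99th-percentile cutoff isolates the extreme tail of an event dataset; by construction only about 0.01% of events survive, which is why roughly 3,000 of the 30.7 million events remain.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for summed-FRP "events" in MW (1/10 the size of the real dataset);
# a heavy-tailed lognormal mimics many weak fires and a few very intense ones.
frp_events = rng.lognormal(mean=3.0, sigma=1.2, size=3_070_000)

threshold = np.percentile(frp_events, 99.99)    # the 99.99th-percentile FRP cutoff
extreme = frp_events[frp_events > threshold]

print(f"Cutoff: {threshold:.0f} MW, extreme events: {extreme.size}")
# ~0.01% of 3.07 million events, i.e. ~307 of them; scaled up to 30.7 million events
# the same cutoff leaves ~3,000, in line with the study's 2,913.
```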

A glance at the massive dataset shows individual FRP values ranging from the single digits to several hundred MW. If the 20 hottest wildfires during 2003-23 had FRPs above 100 MW, most of the other 2,893 fires above the 99.99th percentile would have had lower FRPs, in the tens and teens.

While intensity data for historical wildfires is sparse, there are occasionally numbers mentioned in the literature. One example can be found in a 2021 paper that reviews past large-area high-intensity wildfires that have occurred in arid Australian grasslands. The paper’s authors state that:

Contemporary fire cycles in these grasslands (spinifex) are characterized by periodic wildfires that are large in scale, high in intensity (e.g., up to c. 14,000 kW) … and driven by fuel accumulations that occur following exceptionally high rainfall years.

An FRP of 14,000 kW, or 14 MW, is comparable to many of the 2,893 FRPs of modern extreme wildfires (excluding the top 20) in the Tasmanian study. The figure below shows the potential fire intensity of bushfires across Australia, the various colors indicating the FI range. As you can see, the most intense bushfires occur in the southeast and southwest of the country; FI values in those regions can exceed 100 MW per meter, which correspond to FRPs of about 30 MW.

And, although it doesn’t cite FI numbers, a 1976 paper on Australian bushfires from 1945 to 1975 makes the statement that:

The fire control authorities recognise that no fire suppression system has been developed in the world which can halt the forward spread of a high-intensity fire burning in continuous heavy fuels under the influence of extreme fire weather.

High- and extremely high-intensity wildfires in Australia at least are nothing new, and the same is no doubt true for other countries included in the Tasmanian study. The study authors remark correctly that higher temperatures due to global warming and the associated drying out of vegetation and forests both increase wildfire intensity. But there have been equally hot and dry periods in the past, such as the 1930s, when larger areas burned.

So there’s nothing remarkable about the present study. Even though it’s difficult to find good wildfire data in the pre-satellite era, the study authors could easily extend their work back to the onset of satellite measurements in the 1970s.

Next: The Scientific Reality of the Quest for Net Zero

Unexpected Sea Level Fluctuations Due to Gravity, New Evidence Shows

Although the average global sea level is rising as the world warms, the rate of rise is far from uniform across the planet and, in some places, is negative – that is, the sea level is falling. Recent research has revealed the role that localized gravity plays in this surprising phenomenon.   

The researchers used gravity-sensing satellites to track how changes in water retention on land can cause unexpected fluctuations in sea levels. While 75% of the extra water in the world’s oceans comes from melting ice sheets and mountain glaciers, they say, the other 25% is due to variations in water storage in ice-free land regions. These include changes in dam water levels, water used in agriculture, and extraction of groundwater which either evaporates or flows into the sea via rivers.

Water is heavy but, the researchers point out, moves easily. Thus local changes in sea level aren't just due to melting ice sheets or glaciers, but also reflect changes in the mass of water on nearby land. For example, the land gets heavier during large floods, which boosts its gravity and causes a temporary rise in local sea level. The opposite occurs during droughts or groundwater extraction, when the land becomes lighter, gravity falls and the local sea level drops.

A similar exchange of water explains why the sea level around Antarctica falls as the massive Antarctic ice sheet melts. The total mass of ice in the sheet is a whopping 24 million gigatonnes (26 million gigatons), enough to exert a significant gravitational pull on the surrounding ocean, making the sea level higher than it would be with no ice sheet. But as the ice sheet melts, this gravitational pull weakens and so the local sea level falls.

At the same time, however, distant sea levels rise in compensation. They also rise continuously over the long term because of the thermal expansion of seawater as it warms; added meltwater from both the Antarctic and Greenland ice sheets; and land subsidence caused by groundwater extraction, resulting from rapid urbanization and population growth. In an earlier post, I discussed how sea levels are affected by land subsidence.

The research also reveals how the pumping of groundwater in ice-free places, such as Mumbai in India and Taipei in Taiwan, can almost mask the sea level rise expected from distant ice sheet melting. Conversely, at Charleston on the U.S. Atlantic coast, where groundwater extraction is minimal, sea level rise appears to be accelerated.

All these and other factors contribute to substantial regional variation in sea levels across the globe. This is depicted in the following figure which shows the average rate of sea level rise, measured by satellite, between 1993 and 2014.

Clearly visible is the falling sea level in the Southern Ocean near Antarctica, as well as elevated rates of rise in the western Pacific and the east coast of North America. Note, however, that the figure is only for the period between 1993 and 2014. Over longer time scales, the global average rate of rise fluctuates considerably, most likely due to the gravitational effects of the giant planets Jupiter and Saturn.

Yet another gravitational influence on sea levels is La Niña, the cool phase of the ENSO (El Niño – Southern Oscillation) ocean cycle. The arrival of La Niña often brings torrential rain and catastrophic flooding to the Pacific northwest of the U.S., northern South America and eastern Australia. As mentioned before, the flooding temporarily enhances the gravitational pull of the land. This raises local sea levels, resulting in a lowering of more distant sea levels – the opposite of the effects from the melting Antarctic ice sheet or from groundwater extraction.

The influence of La Niña is illustrated in the figure below, showing the rate of sea level rise during the two most recent strong La Niñas, in 2010-12 and 2020-23. (Note that the colors in the sea level trend are reversed compared to the previous figure.) A significant local increase in sea level can be seen around both northern South America and eastern Australia, while the global level fell, especially in the 2010-12 La Niña event. Consecutive La Niñas in those years dumped so much rain on land that the average sea level worldwide fell about 5 mm (0.2 inches).

The current rate of sea level rise is estimated at 3.4 mm per year. Of this, the researchers calculate that over-extraction of groundwater alone contributes approximately 1 mm per year – meaning that the rise from ice sheet melting and thermal expansion, the predominant contributions, amounts to about 2.4 mm per year. Strong La Niñas temporarily lower this rate even further.

But paradoxically, as discussed above, groundwater extraction is causing local sea levels to fall. It’s local sea levels that matter to coastal communities and their engineers and planners.

Next: No Convincing Evidence That Extreme Wildfires Are Increasing

Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

Of all the mass extinctions in the earth’s distant past, by far the greatest and most drastic was the Permian Extinction, which occurred at the end of the Permian period (roughly 300 to 250 million years ago). Also known as the Great Dying, the Permian Extinction killed off an estimated 57% of all biological families including rainforest flora, 81% of marine species and 70% of terrestrial vertebrate species that existed before the Permian’s last million years. What was the cause of this devastation?

The answer to that question is controversial among paleontologists. For many years, it has been thought the extinction was a result of ancient global warming. During Earth’s 4.5-billion-year history, the global average temperature has fluctuated wildly, from “hothouse” temperatures as much as 14 degrees Celsius (25 degrees Fahrenheit) above today’s level of about 14.8 degrees Celsius (58.6 degrees Fahrenheit), to “icehouse” temperatures 6 degrees Celsius (11 degrees Fahrenheit) below.

Hottest of all was a sudden temperature spike from icehouse conditions at the onset of the Permian to extreme hothouse temperatures at its end, as can be seen in the figure below. The figure is a 2021 estimate of ancient temperatures derived from oxygen isotopic measurements combined with lithologic climate indicators, such as coals, sedimentary rocks, minerals and glacial deposits. The barely visible time scale is in millions of years before the present.

The geological event responsible for this enormous surge in temperature is a massive volcanic eruption known as the Siberian Traps. The eruption lasted at least 1 million years and resulted in the outpouring of voluminous quantities of basaltic lava from rifts in West Siberia; the lava buried over 50% of Siberia in a blanket up to 6.5 km (4 miles) deep.

Volcanic CO2 released by the eruptions was supplemented by CO2 produced during combustion of thick, buried coal deposits that lay along the subterranean path of the erupting lava. This stupendous outburst boosted the atmospheric CO2 level from a very low 200 ppm (parts per million) to more than 2,000 ppm, as shown in the next figure.

The conventional wisdom in the past has been that this geologically sudden, gigantic increase in the CO2 level sent the global thermometer soaring – a conclusion sensationalized by mainstream media such as the New York Times. However, that argument ignores the saturation effect for atmospheric CO2, which limits CO2-induced warming to that produced by the first few hundred ppm of the greenhouse gas.
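The diminishing-returns aspect of that argument can be illustrated with the standard simplified logarithmic expression for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C₀) watts per square meter, where C₀ is a reference concentration – a widely used approximation, not the detailed line-by-line calculations behind the saturation studies. A minimal sketch of the marginal forcing per extra ppm:

```python
import math

def marginal_forcing(C_ppm):
    """Approximate extra radiative forcing (W/m2) from one additional ppm of CO2
    at concentration C, using dF/dC = 5.35/C from F = 5.35*ln(C/C0)."""
    return 5.35 / C_ppm

for C in (200, 400, 800, 1600, 2000):
    print(f"At {C:>4} ppm: {marginal_forcing(C) * 1000:.1f} mW/m2 per extra ppm")
# An extra ppm at 2,000 ppm produces only a tenth of the forcing that an extra ppm
# produces at 200 ppm, illustrating the diminishing returns per added ppm.
```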

While the composition of the atmosphere 250 million years ago may have been different from today’s, the saturation effect would still have occurred. There’s no question, nevertheless, that end-Permian temperatures were as high as estimated, whatever the cause. That’s because the estimates are based on the highly reliable method of measuring oxygen ¹⁸O to ¹⁶O isotopic ratios in ancient microfossils.

Such hothouse conditions would have undoubtedly caused the extinction of various species; the severity of the extinction event is revealed by subsequent gaps in the fossil record. Organic carbon accumulated in the deep ocean, depleting oxygen and thus wiping out many marine species such as phytoplankton, brachiopods and reef-building corals. On land, vertebrates such as amphibians and early reptiles, as well as diverse tropical and temperate rainforest flora, disappeared.

All from extreme global warming? Not so fast, says ecologist Jim Steele.

Steele attributes the Permian extinction not to an excess of CO2 at the end of this geological period, but rather to a lack of it during the preceding Carboniferous and the early Permian, as can be seen in the figure above. He explains that all life is dependent on a supply of CO2, and that when its concentration drops below 150 ppm, photosynthesis ceases, and plants and living creatures die.

Steele argues that because of CO2 starvation over this interval, many species had either already become extinct, or were on the verge of extinction, long before the planet heated up so abruptly.

In comparison to other periods, the Permian saw the appearance of very few new species, as illustrated in the following figure. For example, far more new species evolved (and became extinct) during the earlier Ordovician, when CO2 levels were much, much higher but an icehouse climate prevailed.

When CO2 concentrations reached their lowest levels ever in the early Permian, phytoplankton fossils were extremely rare – some 40 million years or so before the later hothouse spike, which is when the conventional narrative claims the species became extinct. And Steele says that 35-47% of marine invertebrate genera went extinct, as well as almost 80% of land vertebrates, from 7 to 17 million years before the mass extinction at the end of the Permian.

Furthermore, Steele adds, the formation of the supercontinent Pangaea (shown to the left), which occurred during the Carboniferous, had a negative effect on biodiversity. Pangaea removed unique niches from its converging island-like microcontinents, again long before the end-Permian.

Next: Unexpected Sea Level Fluctuations Due to Gravity, New Evidence Shows

Shrinking Cloud Cover: Cause or Effect of Global Warming?

Clouds play a dominant role in regulating our climate. Observational data show that the earth’s cloud cover has been slowly decreasing since at least 1982, at the same time that its surface temperature has risen about 0.8 degrees Celsius (1.4 degrees Fahrenheit). Has the reduction in cloudiness caused that warming, as some heretical research suggests, or is it an effect of increased temperatures?

It's certainly true that clouds exert a cooling effect, as you’d expect – at least low-level clouds, which make up the majority of the planet’s cloud cover. Low-level clouds such as cumulus and stratus clouds are thick enough to reflect back into space 30-60% of the sun’s radiation that strikes them, so they act like a parasol and cool the earth’s surface. Less cloud cover would therefore be expected to result in warming.

Satellite measurements of global cloud cover from 1982 to 2018 or 2019 are presented in the following two, slightly different figures, which also include atmospheric temperature data for the same period. The first figure shows cloud cover from one set of satellite data, and temperatures in degrees Celsius relative to the mean tropospheric temperature from 1991 to 2020.

The second figure shows cloud cover from a different set of satellite data, and absolute temperatures in degrees Fahrenheit. The temperature data were not measured directly but derived from measurements of outgoing longwave radiation, which is probably why the temperature range from 1982 to 2018 appears much larger than in the previous figure.

This second figure is the basis for the authors’ claim that 90% of global warming since 1982 is a result of fewer clouds. As can be seen, their estimated trendline temperature (red dotted line, which needs extending slightly) at the end of the observation period in 2018 was 59.6 degrees Fahrenheit. The reduction in clouds (blue dotted line) over the same interval was 2.7% - although the researchers erroneously conflate the cloud cover and temperature scales to come up with a 4.1% reduction.

Multiplying 59.6 degrees Fahrenheit by 2.7% yields a temperature change of 1.6 degrees Fahrenheit. The researchers then make use of the well-established fact that the Northern Hemisphere is up to 1.5 degrees Celsius (2.7 degrees Fahrenheit) warmer than the Southern Hemisphere. So, they say, clouds can account for (1.6/2.7) = 59% of the temperature difference between the hemispheres.

This suggests that clouds may be responsible for 59% of recent global warming, if the temperature difference between the two hemispheres is due entirely to the difference in cloud cover from hemisphere to hemisphere.

Nevertheless, this argument is on very weak ground. First, the authors wrongly used 4.1% instead of 2.7%, as just mentioned, which leads to a temperature change due to cloud reduction of 2.4 degrees Fahrenheit and a correspondingly higher (and incorrect) estimated contribution to global warming of (2.4/2.7) = 89%, as they claim in their paper.
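To make the arithmetic explicit, here is a minimal sketch reproducing both versions of the calculation, using the 59.6 degrees Fahrenheit trendline temperature and the 2.7 degrees Fahrenheit hemispheric difference quoted above:

```python
trend_temp_f = 59.6   # trendline temperature at the end of 2018, degrees Fahrenheit
hemi_diff_f = 2.7     # Northern-Southern Hemisphere temperature difference, degrees Fahrenheit

for cloud_reduction in (0.027, 0.041):   # corrected 2.7% versus the authors' 4.1%
    delta_t = trend_temp_f * cloud_reduction   # temperature change attributed to fewer clouds
    share = delta_t / hemi_diff_f              # fraction of the hemispheric difference
    print(f"{cloud_reduction:.1%} cloud reduction: dT = {delta_t:.1f} F, share = {share:.0%}")
# 2.7% gives ~1.6 F and roughly 59-60%; 4.1% gives ~2.4 F and roughly 90%,
# the contribution claimed in the paper.
```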

Regardless of this mistake, however, a temperature increase of even 1.6 degrees Fahrenheit is more than twice as large as the observed rise measured by the more reliable satellite data in the first figure above. And attributing the 1.5 degrees Celsius (2.7 degrees Fahrenheit) temperature difference between the two hemispheres entirely to cloud cover difference is dubious.

There is indeed a difference in cloud cover between the hemispheres. The Southern Hemisphere contains more clouds (69% average cloud cover) than the Northern Hemisphere (64%), partly because there is more ocean surface in the Southern Hemisphere, and thus more evaporation as the planet warms. This in itself would not explain why the Northern Hemisphere is warmer, however.

Southern Hemisphere clouds are also more reflective than their Northern Hemisphere counterparts. That is because they contain more liquid water droplets and less ice; it has been found that lack of ice nuclei causes low-level clouds to form less often. But apart from the ice content, the chemistry and dynamics of cloud formation are complex and depend on many factors. So associating the hemispheric temperature difference only with cloud cover is most likely invalid.

A few other research papers also claim that the falloff in cloud cover explains recent global warming, but their arguments are equally shaky. So is the proposal by John Clauser, joint winner of the 2022 Nobel Prize in Physics, of a cloud thermostat mechanism that controls the earth’s temperature: if cloud cover falls and the temperature climbs, the thermostat acts to create more clouds and cool the earth down again. Obviously, this has not happened.

Finally, it’s interesting to note that the current decline in cloud cover is not uniform across the globe. This can be seen in the figure below, which shows an expanding trend with time in coverage over the oceans, but a diminishing trend over land.

The expanding ocean cloud cover comes from increased evaporation of seawater with rising temperatures. The opposite trend over land is a consequence of the drying out of the land surface; evidently, the land trend dominates globally.

Next: Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

El Niño and La Niña May Have Their Origins on the Sea Floor

One of the least understood aspects of our climate is the ENSO (El Niño – Southern Oscillation) ocean cycle, whose familiar El Niño (warm) and La Niña (cool) events cause drastic fluctuations in global temperature, along with often catastrophic weather in tropical regions of the Pacific and delayed effects elsewhere. A recent research paper attributes the phenomenon to tectonic and seismic activity under the oceans.

Principal author Valentina Zharkova, formerly at the UK’s Northumbria University, is a prolific researcher into natural sources of global warming, such as the sun’s internal magnetic field and the effect of solar activity on the earth’s ozone layer. Most of her studies involve sophisticated mathematical analysis and her latest paper is no exception.

Zharkova and her coauthor Irina Vasilieva make use of a technique known as wavelet analysis, combined with correlation analysis, to identify key time periods in the ONI (Oceanic Niño Index). The index, which measures the strength of El Niño and La Niña events, is the 3-month running average of the difference between sea surface temperatures in the ENSO region of the tropical Pacific and their long-term average. Shown in the figure below are values of the index from 1950 to 2016.

Wavelet analysis supplies information both on which frequencies are present in a time series signal, and on when those frequencies occur, unlike a Fourier transform which decomposes a signal only into its frequency components.

Using the wavelet approach, Zharkova and Vasilieva have identified two separate ENSO cycles: one with a shorter period of 4-5 years, and a longer one with a period of 12 years. This is illustrated in the next figure which shows the ONI at top left; the wavelet spectrum of the index at bottom left, with the wavelet “power” indicated by the colored bar at top right; and the global wavelet spectrum at bottom right. 
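For readers unfamiliar with the technique, the sketch below illustrates the general idea of a continuous wavelet transform applied to an ONI-like series. It uses the PyWavelets library and a synthetic signal containing 4.5-year and 12-year oscillations; it is only an illustration of the method, not a reproduction of Zharkova and Vasilieva’s analysis.

```python
import numpy as np
import pywt
from scipy.signal import find_peaks

# Synthetic stand-in for a 3-monthly ONI-like series, 1950-2023, containing
# ~4.5-year and 12-year oscillations plus noise (illustrative only).
dt = 0.25                                # years per sample (3-monthly values)
t = np.arange(1950, 2024, dt)
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * t / 4.5)
          + 0.6 * np.sin(2 * np.pi * t / 12.0)
          + 0.3 * rng.standard_normal(t.size))

# Continuous wavelet transform with a Morlet wavelet: unlike a Fourier transform,
# the coefficients are resolved in both time and period.
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, 'morl', sampling_period=dt)
periods = 1.0 / freqs                    # convert pseudo-frequencies to periods in years

# "Global" wavelet spectrum: wavelet power averaged over time at each period.
global_power = (np.abs(coeffs) ** 2).mean(axis=1)
peaks, _ = find_peaks(global_power, prominence=0.1 * global_power.max())
for p in peaks:
    print(f"Spectral peak near a period of {periods[p]:.1f} years")   # ~4.5 and ~12 years
```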

The authors link the 4- to 5-year ENSO cycle to the motion of tectonic plates, a connection that has been made by other researchers. The 12-year ENSO cycle identified by their wavelet analysis they attribute to underwater volcanic activity; it does not correspond to any solar cycle or other known natural source of warming.

The following figure depicts an index (in red, right-hand scale), calculated by the authors, that measures the total annual volcanic strength and duration of all submarine volcanic eruptions from 1950 to 2023, superimposed on the ONI (in black) over the same period. A weak correlation can be seen between the ENSO ONI and undersea volcanic activity, the correlation being strongest at 12-year intervals.

Zharkova and Vasilieva estimate the 12-year ENSO correlation coefficient at 25%, a connection they label as “rather significant.” As I discussed in a recent post, retired physical geographer Arthur Viterito has proposed that submarine volcanic activity is the principal driver of global warming, via a strengthening of the thermohaline circulation that redistributes seawater and heat around the globe.

Zharkova and Vasilieva, however, link the volcanic eruptions causing the 12-year boost in the ENSO index to tidal gravitational forces on the earth from the giant planet Jupiter and from the sun. Jupiter of course orbits the sun and spins on an axis, just like Earth. But the sun is not motionless either: it too rotates on an axis and, because it’s tugged by the gravitational pull of the giant planets Jupiter and Saturn, orbits in a small but complex spiral around the center of the solar system.

Jupiter was selected by the researchers because its orbital period is 12 years - the same as the longer ENSO cycle identified by their wavelet analysis.

That Jupiter’s gravitational pull on Earth influences volcanic activity is clear from the next figure, in which the frequency of all terrestrial volcanic eruptions (underwater and surface) is plotted against the distance of Earth from Jupiter; the distance is measured in AU (astronomical units), where 1 AU is the average earth-sun distance. The thick blue line is for all eruptions, while the thick yellow line shows the eruption frequency in just the ENSO region.

What stands out is the increased volcanic frequency when Jupiter is at one of two different distances from Earth: 4.5 AU and 6 AU. The distance of 4.5 AU is Jupiter’s closest approach to Earth, while 6 AU is Jupiter’s distance when the sun is closest to Earth and located between Earth and Jupiter. The correlation coefficient between the 12-year ENSO cycle and the Earth-Jupiter distance is 12%.  

For the gravitational pull of the sun, Zharkova and Vasilieva find there is a 15% correlation between the 12-year ENSO cycle and the Earth-sun distance in January, when Earth’s southern hemisphere (where ENSO occurs) is closest to the sun. Although these solar system correlations are weak, Zharkova and Vasilieva say they are high considering the vast distances involved.

Next: Shrinking Cloud Cover: Cause or Effect of Global Warming?

The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

In my previous post, I reviewed the science described in environmentalist Ted Nordhaus’ four-part essay, “Did Exxon Make It Rain Today?”, and how science is being misused to falsely link weather extremes to climate change. Nordhaus also describes how the perception of a looming climate catastrophe, exemplified by extreme weather events, is being fanned by misconceptions about the economic costs of natural disasters and by environmental politics – both the subject of this second post.

Between 1990 and 2017, the global cost of weather-related disasters increased by 74%, according to an analysis by Roger Pielke, Jr., a former professor at the University of Colorado. Economic loss studies of natural disasters have been quick to blame human-caused climate change for this increase.

But Nordhaus makes the point that, if the cost of natural disasters is increasing due to global warming, then you would expect the cost of weather-related disasters to be rising faster than that of disasters not related to weather. Yet the opposite is true. States Nordhaus: “The cost of disasters unrelated [my italics] to weather increased 182% between 1990 and 2017, more than twice as fast as for weather-related disasters.” This is evident in the figure below, which shows both costs from 1990 to 2018.

Nordhaus goes on to declare:

In truth, it is economic growth, not climate change, that is driving the boom in economic damage from both weather-related and non-weather-related natural disasters.

Once the losses are corrected for population gain and the ever-escalating value of property in harm’s way, there is very little evidence to support any connection between natural disasters and global warming. Nordhaus explains that accelerating urbanization since 1950 has led to an enormous shift of the global population, economic activity, and wealth into river and coastal floodplains.

On the influence of environmental politics in connecting weather extremes to global warming, Nordhaus has this to say:

… the perception among many audiences that these events centrally implicate anthropogenic warming has been driven by ... a sustained campaign by environmental advocates to move the proximity of climate catastrophe in the public imagination from the uncertain future into the present.

The campaign had its origins in a 2012 meeting of environmental advocates, litigators, climate scientists and others in La Jolla, California, convened by the Union of Concerned Scientists. The specific purpose of the gathering was “to develop a public narrative connecting extreme weather events that were already happening, and the damages they were causing, with climate change and the fossil fuel industry.”

This was clearly an attempt to mimic the 1960s campaign against smoking tobacco because of its link to lung cancer. However, the correlation between smoking and lung cancer is extraordinarily high, leaving no doubt about causation. The same cannot be said for any connection between extreme weather events and climate change.

Nevertheless, it was at the La Jolla meeting that the idea of reframing the attribution of extreme weather to climate change, as I discussed in my previous post, was born. Nordhaus discerns that a subsequent flurry of attribution reports, together with a fortuitous restructuring of the media at the same time:

… have given journalists license to ignore the enormous body of research and evidence on the long-term drivers of natural disasters and the impact that climate change has had on them.

It was but a short journey from there for the media to promote the notion, favored by “much of the environmental cognoscenti” as Nordhaus puts it, that “a climate catastrophe is now unfolding, and that it is demonstrable in every extreme weather event.”

The media have undergone a painful transformation in the last few decades, with the proliferation of cable news networks followed by the arrival of the Internet. The much broader marketplace has resulted in media outlets tailoring their content to the political values and ideological preferences of their audiences. This means, says Nordhaus, that sensationalism such as catastrophic climate news – especially news linking extreme weather to anthropogenic warming – plays a much larger role than before.

As I discussed in a 2023 post, the ever increasing hype in nearly all mainstream media coverage of weather extremes is a direct result of advocacy by well-heeled benefactors like the Rockefeller, Walton and Ford foundations. The Rockefeller Foundation, for example, has begun funding the hiring of climate reporters to “fight the climate crisis.”

A coalition of more than 500 media outlets, founded in 2019, is dedicated to producing “more informed and urgent climate stories.” The CCN (Covering Climate Now) coalition includes three of the world’s largest news agencies – Reuters, Bloomberg and Agence France Presse – and claims to reach an audience of two billion.

Concludes Nordhaus:

[These new dynamics] are self-reinforcing and have led to the widespread perception among elite audiences that the climate is spinning out of control. New digital technology bombards us with spectacular footage of extreme weather events. … Catastrophist climate coverage generates clicks from elite audiences.

Next: El Niño and La Niña May Have Their Origins on the Sea Floor

The Deceptive Catastrophizing of Weather Extremes: (1) The Science

In these pages, I’ve written extensively about the lack of scientific evidence for any increase in extreme weather due to global warming. But I’ve said relatively little about the media’s exploitation of the mistaken belief that weather extremes are worsening because of climate change.

A recent four-part essay addresses the latter issue, under the title “Did Exxon Make It Rain Today?”  The essay was penned by Ted Nordhaus, well-known environmentalist and director of the Breakthrough Institute in Berkeley, California, which he co-founded with Michael Shellenberger in 2007. Its authorship was a surprise to me, since the Breakthrough Institute generally supports the narrative of largely human-caused warming.

Nonetheless, Nordhaus’s thoughtful essay takes a mostly skeptical – and realistic – view of hype about weather extremes, stating that:

We know that anthropogenic warming can increase rainfall and storm surges from a hurricane, or make a heat wave hotter. But there is little evidence that warming could create a major storm, flood, drought, or heat wave where otherwise none would have occurred, …

Nordhaus goes on to make the insightful statement that “The main effect that climate change has on extreme weather and natural disasters … is at the margins.” By this, he means that a heat wave in which daily high temperatures for, say, a week reached 37 degrees Celsius (99 degrees Fahrenheit) or above in the absence of climate change would instead stay above perhaps 39 degrees Celsius (102 degrees Fahrenheit) with our present level of global warming.

His assertion is illustrated in the following, rather congested figure from the Sixth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change). The purple curve shows the average annual hottest daily maximum temperature on land, while the green and black curves indicate the land and global average annual mean temperature, respectively; temperatures are measured relative to their 1850–1900 means.

However, while global warming is making heat waves marginally hotter, Nordhaus says there is no evidence that extreme weather events are on the rise, as so frequently trumpeted by the mainstream media. Although climate change will make some weather events such as heavy rainfall more intense than they otherwise would be, the global area burned by wildfires has actually decreased and there has been no detectable global trend in river floods, nor meteorological drought, nor hurricanes.

Adds Nordhaus:

The main source of climate variability in the past, present, and future, in all places and with regard to virtually all climatic phenomena, is still overwhelmingly non-human: all the random oscillations in climatic extremes that occur in a highly complex climate system across all those highly diverse geographies and topographies.

The misconception that weather extremes are increasing when they are not has been amplified by attribution studies, which use a new statistical method and climate models to assign specific extremes to either natural variability or human causes. Such studies involve highly questionable methodology that has several shortcomings.

Even so, the media and some climate scientists have taken scientifically unjustifiable liberties with attribution analysis in order to link extreme events to climate change – such as attempting to quantify how much more likely global warming made the occurrence of a heat wave that resulted in high temperatures above 38 degrees Celsius (100 degrees Fahrenheit) for a period of five days in a specific location.

But, explains Nordhaus, that is not what an attribution study actually estimates. Rather, “it quantifies changes in the likelihood of the heat wave reaching the precise level of extremity that occurred.” In the hypothetical case above, the heat wave would have happened anyway in the absence of climate change, but it would have resulted in high temperatures above 37 degrees Celsius (99 degrees Fahrenheit) over five days instead of above 38 degrees.

The attribution method estimates the probability of a heat wave or other extreme event occurring that is incrementally hotter or more severe than the one that would have occurred without climate change, not the probability of the heat wave or other event occurring at all.
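A toy numerical illustration of that distinction, assuming (purely for illustration) that daily maximum temperatures at some location are normally distributed and that warming shifts the whole distribution upward by 1 degree Celsius: what changes is the likelihood of the event reaching the observed level of extremity, not whether a heat wave occurs at all.

```python
from scipy.stats import norm

# Hypothetical location: summer daily maxima ~ Normal(mean, sd), in degrees Celsius.
# All numbers are made up for illustration.
sd = 3.0
mean_no_warming = 31.0
mean_with_warming = 32.0     # the same distribution, shifted 1 C by warming
threshold = 38.0             # the "precise level of extremity that occurred"

p_without = norm.sf(threshold, loc=mean_no_warming, scale=sd)   # exceedance probability, no warming
p_with = norm.sf(threshold, loc=mean_with_warming, scale=sd)    # exceedance probability, with warming

print(f"P(daily max > 38 C) without warming: {p_without:.4f}")
print(f"P(daily max > 38 C) with warming:    {p_with:.4f}")
print(f"Probability ratio: {p_with / p_without:.1f}x")
# The heat wave would still have occurred either way; only the chance of it reaching
# 38 C rather than, say, 37 C changes (here by a factor of about 2.3).
```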

Nonetheless, as we’ll see in the next post, the organization WWA (World Weather Attribution), co-founded by German climatologist Friederike Otto, has utilized this new methodology to rapidly produce science that does connect weather extremes to climate change – with the explicit goal of shaping news coverage. Coverage of climate-related disasters now routinely features WWA analysis, which is often employed to suggest that climate change is the cause of such events.

Next: The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

Sea Ice Update: Arctic Stable, Antarctic Recovering

The climate doomsday machine constantly insists that sea ice at the two poles is shrinking inexorably and that the Arctic will soon be ice-free in the summer. But the latest data puts the kibosh on those predictions. The maximum winter Arctic ice extent last month was no different from 2023, and the minimum summer 2024 extent in the Antarctic, although lower than the long-term average, was higher than last year.

Satellite images of Arctic sea ice extent in February 2024, one month before its winter peak (left image), and Antarctic extent at its summer minimum the same month (right image), are shown in the figure below. Sea ice shrinks during summer months and expands to its maximum extent during the winter. The red lines in the figure denote the median ice extent from 1981 to 2010.

Arctic summer ice extent decreased by approximately 39% over the interval from 1979 to 2023, but was essentially the same in 2023 as it was in 2007. Arctic winter ice extent on March 3, 2024 was 11% lower than in 1979, when satellite measurements began, but slightly higher than in 2023, as indicated by the inset in the figure below.

Arctic winter maximum extent fluctuates less than its summer minimum extent, as can be seen in the right panel of the figure which compares the annual trend by month for various intervals during the satellite era, as well as for the low-summer-ice years of 2007 and 2012. The left panel shows the annual trend by month for all years from 2013 through 2024.

What is noticeable about this year’s winter maximum is that it was not unduly low, despite the Arctic being warmer than usual. According to the U.S. NSIDC (National Snow & Ice Data Center), February air temperatures in the Arctic troposphere, about 760 meters (2,500 feet) above sea level, were up to 10 degrees Celsius (18 degrees Fahrenheit) above average.

The NSIDC attributes the unusual warmth to a strong pressure gradient that forced relatively warm air over western Eurasia to flow into the Arctic. However, other explanations have been put forward for enhanced winter warming, such as the formation during non-summer seasons of more low-level clouds due to the increased area of open water compared to sea ice. The next figure illustrates this effect between 2008 and 2022.

Despite the long-term loss of ice in the Arctic, the sea ice around Antarctica had been expanding steadily during the satellite era up until 2016, growing at an average rate between 1% and 2% per decade, with considerable fluctuations from year to year. But it took a tumble in 2017, as depicted in the figure below.

Note that this figure shows “anomalies,” or departures from the February mean ice extent for the period from 1981 to 2010, rather than the minimum extent of summer ice in square km. The anomaly trend is plotted as the percent difference between the February extent for that year and the February mean from 1981 to 2010.
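In other words, the plotted quantity for each year is computed as in this minimal sketch (the extent values are made up, purely to show the definition):

```python
# Anomaly as a percent departure from the 1981-2010 February mean extent.
# Extents in millions of square km; both numbers are made-up examples.
feb_mean_1981_2010 = 3.0
feb_extent_this_year = 2.4

anomaly_pct = (feb_extent_this_year - feb_mean_1981_2010) / feb_mean_1981_2010 * 100
print(f"February anomaly: {anomaly_pct:+.1f}%")   # -20.0% in this made-up example
```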

As can be seen, the summer ice minimum recovered briefly in 2020 and 2021, only to fall once more and pick up again this year. The left panel in the next figure shows the annual Antarctic trend by month for all years from 2013 through 2024, along with the summer minimum (in square km) in the inset. As for the Arctic previously, the right panel compares the annual trend by month for various intervals during the satellite era, as well as for the high-summer-ice years of 2012 and 2014.

Antarctic sea ice at its summer minimum this year was especially low in the Ross, Amundsen, and Bellingshausen Seas, all of which are on the coast of West Antarctica, while the ice cover in the Weddell Sea to the north and along the East Antarctic coast was at average levels. Such a pattern is thought to be associated with the current El Niño.

A slightly different representation of the Antarctic sea ice trend is presented in the following figure, in which the February anomaly is shown directly in square km rather than as a difference percentage. This representation illustrates more clearly how the decline in summer sea ice extent has now persisted for seven years.

The overall trend from 1979 to 2023 is an insignificant 0.1% per decade relative to the 1981 to 2010 mean. Yet a prolonged increase above the mean occurred from 2008 to 2017, followed by the seven-year decline since then. The current downward trend has sparked debate and several possible reasons have been advanced, not all of which are linked to global warming. One analysis attributes the big losses of sea ice in 2017 and 2023 to extra strong El Niños.

Next: The Deceptive Catastrophizing of Weather Extremes: (1) The Science

Exactly How Large Is the Urban Heat Island Effect in Global Warming?

It’s well known that global surface temperatures are biased upward by the urban heat island (UHI) effect. But there’s widespread disagreement among climate scientists about the magnitude of the effect, which arises from the warmth generated by urban surroundings, such as buildings, concrete and asphalt.

In its Sixth Assessment Report in 2021, the IPCC (Intergovernmental Panel on Climate Change) acknowledged the existence of the UHI effect and the consequent decrease in the number of cold nights since around 1950. Nevertheless, the IPCC is ambivalent about the actual size of the effect. On the one hand, the report dismisses its significance by declaring it “less than 10%” (Chapter 2, p. 324) or “negligible” (Chapter 10, p. 1368).

On the other hand, the IPCC presents a graph (Chapter 10, p. 1455), reproduced below, showing that the UHI effect ranges from 0% to 60% or more of measured warming in various cities. Since the population of the included cities is only a few percent of the global population, and many sizable cities are not included, it’s hard to see how the IPCC can state that the global UHI effect is negligible.

One climate scientist who has studied the magnitude of the UHI effect for some time is PhD meteorologist Roy Spencer. In a recent preview of a paper submitted for publication, Spencer finds that summer warming in U.S. cities from 1895 to 2023 has been exaggerated by 100% or more as a result of UHI warming. The next figure shows the results of his calculations which, as you would expect, depend on population density.

The barely visible solid brown line is the measured average summertime temperature for the continental U.S. (CONUS) relative to its 1901-2000 average, in degrees Celsius, from 1895 to 2023; the solid black line represents the same data corrected for UHI warming, as estimated from population density data. The measurements are taken from the monthly GHCN (Global Historical Climatology Network) “homogenized” dataset, as compiled by NOAA (the U.S. National Oceanic and Atmospheric Administration).

You can see that the UHI effect accounts for a substantial portion of the recorded temperature in all years. Spencer says that the UHI influence is 24% of the trend averaged over all measurement stations, which are dominated by rural sites not subject to UHI warming. But for the typical “suburban” station (100-1,000 persons per square km), the UHI effect is 52% of the measured trend, which means that measured warming in U.S. cities is at least double the actual warming. 

Globally, a rough estimate of the UHI effect can be made from NOAA satellite temperature data compiled by Spencer and Alabama state climatologist John Christy. Satellite data are not influenced by UHI warming because they measure the temperature of the atmosphere above the surface rather than at the surface itself. The most recent data for the global average lower tropospheric temperature are displayed below.

According to Spencer and Christy’s calculations, the linear rate of global warming since measurements began in January 1979 is 0.15 degrees Celsius (0.27 degrees Fahrenheit) per decade, while the warming rate measured over land only is 0.20 degrees Celsius (0.36 degrees Fahrenheit) per decade. The difference of 0.05 degrees Celsius (0.09 degrees Fahrenheit) per decade in the warming rates can reasonably be attributed, at least in part, to the UHI effect.

So the UHI influence is as high as 0.05/0.20 or 25% of the measured temperature trend – in close agreement with Spencer’s 24% estimated from his more detailed calculations.
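As a back-of-the-envelope check on these percentages, here is a minimal Python sketch of the arithmetic, using only the trend numbers quoted above.

# Back-of-the-envelope arithmetic for the UHI fractions quoted above.

# Spencer's suburban-station estimate: UHI accounts for 52% of the measured
# trend, so the actual (UHI-free) trend is the remaining 48%.
uhi_fraction_suburban = 0.52
exaggeration = 1 / (1 - uhi_fraction_suburban)        # measured / actual
print(f"Measured suburban warming is {exaggeration:.1f}x the actual warming")

# Satellite comparison: global versus land-only warming rates, deg C per decade.
rate_global, rate_land = 0.15, 0.20
uhi_share = (rate_land - rate_global) / rate_land
print(f"UHI share of the land-only trend: {uhi_share:.0%}")   # roughly 25%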

Other estimates peg the UHI effect as larger yet. As part of a study of natural contributions to global warming, which I discussed in a recent post, the CERES research group suggested that urban warming might account for up to 40% of warming since 1850.

But the 40% estimate comes from a comparison of the warming rate for rural temperature stations alone with that for rural and urban stations combined, from 1900 to 2018. Over the shorter time period from 1972 to 2018, which almost matches Spencer and Christy’s satellite record, the estimated UHI effect is a much smaller 6%. The study authors caution that more research is needed to estimate the UHI magnitude more accurately.

The effect of urbanization on global temperatures is an active research field. Among other recent studies is a 2021 paper by Chinese researchers, who used a novel approach involving machine learning to quantify the phenomenon. Their study encompassed measurement stations in four geographic areas – Australia, East Asia, Europe and North America – and found that the magnitude of UHI warming from 1951 to 2018 was 13% globally, and 15% in East Asia where rapid urbanization has occurred.

What all these studies mean for climate science is that global warming is probably about 20% lower than most people think. That is, warming at the end of 2022, before the current El Niño spike, was about 0.8 degrees Celsius (1.4 degrees Fahrenheit) instead of the reported 0.99 degrees Celsius (1.8 degrees Fahrenheit). That in turn means we’re only about halfway to the Paris Agreement’s lower limit of 1.5 degrees Celsius (2.7 degrees Fahrenheit).

Next: Sea Ice Update: Arctic Stable, Antarctic Recovering

Retractions of Scientific Papers Are Skyrocketing

A trend that bodes ill for the future of scientific publishing, and another signal that science is under attack, is the soaring number of research papers being retracted. According to a recent report in Nature magazine, over 10,000 retractions were issued for scientific papers in 2023.

Although more than 8,000 of these were sham articles from a single publisher, Hindawi, all the evidence shows that retractions are rising more rapidly than the research paper growth rate. The two figures below depict the yearly number of retractions since 2013, and the retraction rate as a percentage of all scientific papers published from 2003 to 2022.

Clearly, there is cause for alarm as both the number of retractions and the retraction rate are accelerating. Nature’s analysis suggests that the retraction rate has more than trebled over the past decade to its present 0.2% or above. And the journal says the estimated total of about 50,000 retractions so far is only the tip of the iceberg of work that should be retracted.

An earlier report in 2012 by a trio of medical researchers reviewed 2,047 biomedical and life-science research articles retracted since 1977. They found that 43% of the retractions were attributable to fraud or suspected fraud, 14% to duplicate publication and 10% to plagiarism, with 21% withdrawn because of error. The researchers also discovered that retractions for fraud or suspected fraud, as a percentage of total articles published, have increased almost 10-fold since 1975.

A recent example of fraud outside the biomedical area is the 2022 finding by the University of Delaware that star marine ecologist Danielle Dixson was guilty of research misconduct, for fabricating and falsifying research results in her work on fish behavior and coral reefs. As reported in Science magazine, the university subsequently sought retraction of three of Dixson’s papers.

The misconduct involves studies by Dixson of the behavior of coral reef fish in slightly acidified seawater, in order to simulate the effect of ocean acidification caused by the oceans’ absorption of up to 30% of human CO2 emissions. Dixson and Philip Munday, a former marine ecologist at James Cook University in Townsville, Australia, claimed that the extra CO2 causes reef fish to be attracted by chemical cues from predators, instead of avoiding them; to become hyperactive and disoriented; and to suffer loss of vision and hearing.

But, as I described in a 2021 blog post, a team of biological and environmental researchers led by Timothy Clark of Deakin University in Geelong, Australia debunked all these conclusions. Most damningly of all, the researchers found that the reported effects of ocean acidification on the behavior of coral reef fish were not reproducible.

The investigative panel at the University of Delaware endorsed Clark’s findings, saying it was “repeatedly struck by a serial pattern of sloppiness, poor recordkeeping, copying and pasting within spreadsheets, errors within many papers under investigation, and deviation from established animal ethics protocols.” The panel also took issue with the reported observation times for two of the studies, stating that the massive amounts of data could not have been collected in so short a time. Dixson has since been fired from the university.

Closely related to fraud is the reproducibility crisis – the vast number of peer-reviewed scientific studies that can’t be replicated in subsequent investigations and whose findings turn out to be false, like Dixson’s. In the field of cancer biology, for example, scientists at Amgen in California discovered in the early 2000s that an astonishing 89% of published results couldn’t be reproduced.

One of the reasons for the soaring number of retractions is the rapid growth of fake research papers churned out by so-called “paper mills.” Paper mills are shady businesses that sell bogus manuscripts and authorships to researchers who need journal publications to advance their careers. Another Nature report suggests that over the past two decades, more than 400,000 published research articles show strong textual similarities to known studies produced by paper mills; the rising trend is illustrated in the next figure.

German neuropsychologist Bernhard Sabel estimates that in medicine and neuroscience, as many as 11% of papers in 2020 were likely paper-mill products. University of Oxford psychologist and research-integrity sleuth Dorothy Bishop found signs of paper-mill activity last year in at least 10 journals from Hindawi, the publisher mentioned earlier.

Textual similarities are only one fingerprint of paper-mill publications. Others include suspicious e-mail addresses that don’t correspond to any of a paper’s authors; e-mail addresses from hospitals in China (because the issue is known to be so common there); manipulated images from other papers; twisted phrases that indicate efforts to avoid plagiarism detection; and duplicate submissions across journals.

Journals, fortunately, are starting to pay more attention to paper mills, for example by revamping their review processes. They’re also being aided by an ever-growing army of paper-mill detectives such as Bishop.

Next: Exactly How Large Is the Urban Heat Island Effect in Global Warming?

Foundations of Science Under Attack in U.S. K-12 Education

Little known to most people is that science is under assault in the U.S. classroom. Some 49 U.S. states have adopted standards for teaching science in K-12 schools that abandon the time-honored edifice of the scientific method, which underpins all the major scientific advances of the past two millennia.

In place of the scientific method, most schoolchildren are now taught “scientific practices.” These emphasize the use of computer models and social consensus over the fundamental tenets of the scientific method, namely the gathering of empirical evidence and the use of reasoning to make sense of the evidence. 

The modern scientific method, illustrated schematically in the figure below, was conceived over two thousand years ago by the Hellenic-era Greeks, then almost forgotten and ultimately rejuvenated in the Scientific Revolution, before being refined into its present-day form in the 19th century. Even before that refinement, scientists such as Galileo Galilei and Isaac Newton followed the basic principles of the method, as have subsequent scientific luminaries like Marie Curie and Albert Einstein.

The present assault on science in U.S. schools began with publication in 2012 of a 400-page document, A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas, by the U.S. National Academy of Sciences. This was followed in 2014 with publication by a newly formed consortium of national and state education groups of a companion document, Next Generation Science Standards (NGSS), based on the 2012 Framework.

The Framework summarily dismisses the scientific method with the outrageous statement:

… the notion that there is a single scientific method of observation, hypothesis, deduction, and conclusion—a myth perpetuated to this day by many textbooks—is fundamentally wrong,

and its explanation of “practices” as: 

… not rote procedures or a ritualized “scientific method.”

The Framework’s abandonment of the scientific method appears to have its origins in a 1992 book by H.H. Bauer entitled Scientific Literacy and the Myth of the Scientific Method. Bauer’s arguments against the importance of the scientific method include the mistaken conflation of science with sociology, and a misguided attempt to elevate the irrational pseudoscience of astrology to the status of a true science.

The NGSS give the scientific method even shorter shrift than the Framework, mentioning neither the concept nor the closely related term critical thinking even once in their 103 pages. A scathing review of the NGSS in 2021 by the U.S. National Association of Scholars (NAS), Climbing Down: How the Next Generation Science Standards Diminish Scientific Literacy, concludes that:

The NGSS severely neglect content instruction, politicize much of the content that remains … and abandon instruction of the scientific method.

Noting that “The scientific method is the logical and rational process through which we observe, describe, explain, test, and predict phenomena … but is nowhere to be found in the actual standards of the NGSS,” the NAS report also states:

Indeed, the latest generation of science education reformers has replaced scientific content with performance-based “learning” activities, and the scientific method with social consensus.

It goes on to say that neither the Framework nor the NGSS ever mention explicitly the falsifiability criterion – a crucial but often overlooked feature of the modern scientific method, in addition to the basic steps outlined above. The criterion, introduced in the early 20th century by philosopher Sir Karl Popper, states that a true scientific theory or law must in principle be capable of being invalidated by observation or experiment. Any evidence that fits an unfalsifiable theory has no scientific validity.

The primary deficiencies of the Framework and the NGSS have recently been enumerated and discussed by physicist John Droz, who has identified a number of serious shortcomings, some of which inject politics into what should be purely scientific standards. These include the use of computer models to imply reality; treating consensus as equal in value to empirical data; and the use of correlation to imply causation.

The NGSS do state that “empirical evidence is required to differentiate between cause and correlation” (in Crosscutting Concepts, page 92 onward), and there is a related discussion in the Framework. However, there is no attempt in either document to connect the concept of cause and effect to the steps of observation, and formulation and testing of a hypothesis, in the scientific method.

The NAS report is pessimistic about the effect of the NGSS on K-12 science education in the U.S., stating that:

They [the NGSS] do not provide a science education adequate to take introductory science courses in college. They lack large areas of necessary subject matter and an extraordinary amount of mathematical rigor. … The NGSS do not prepare students for careers or college readiness.

There is, however, one bright light. In his home state of North Carolina (NC), Droz was successful in July 2023 in having the scientific method restored to the state’s K-12 Science Standards. Earlier that year, he had discovered that existing NC science standards had excluded teaching the scientific method for more than 10 years. So Droz formally filed a written objection with the NC Department of Public Instruction.

Droz was told that he was “the only one bringing up this issue” out of 14,000 inputs on the science standards. However, two members of the State Board of Education ultimately joined him in questioning the omission and, after much give-and-take, the scientific method was reinstated. That leaves 48 other states that need to follow North Carolina’s example.

Next: Retractions of Scientific Papers Are Skyrocketing

Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Driven by Oceanic Seismic Activity, Not CO2

Although undersea volcanic eruptions can’t cause global warming directly, as I discussed in a previous post, they can contribute indirectly by altering the deep-ocean thermohaline circulation. According to a recent lecture, submarine volcanic activity is currently intensifying the thermohaline circulation sufficiently to be the principal driver of global warming.

The lecture was delivered by Arthur Viterito, a renowned physical geographer and retired professor at the College of Southern Maryland. His provocative hypothesis links an upsurge in seismic activity at mid-ocean ridges to recent global warming, via a strengthening of the ocean conveyor belt that redistributes seawater and heat around the globe.

Viterito’s starting point is the observation that satellite measurements of global warming since 1979 show distinct step increases following major El Niño events in 1997-98 and 2014-16, as demonstrated in the following figure. The figure depicts the satellite-based global temperature of the lower atmosphere in degrees Celsius, as compiled by scientists at the University of Alabama in Huntsville; temperatures are annual averages and the zero baseline represents the mean tropospheric temperature from 1991 to 2020.

Viterito links these apparent jumps in warming to geothermal heat emitted by volcanoes and hydrothermal vents in the middle of the world’s ocean basins – heat that shows similar step increases over the same time period, as measured by seismic activity. The submarine volcanoes and hydrothermal vents lie along the earth’s mid-ocean ridges, which divide the major oceans roughly in half and are illustrated in the next figure. The different colors denote the geothermal heat output (in milliwatts per square meter), which is highest along the ridges.

The total mid-ocean seismic activity along the ridges is shown in the figure below, in which the global tropospheric temperature, graphed in the first figure above, is plotted in blue against the annual number of mid-ocean earthquakes (EQ) in orange. The best fit between the two sets of data occurs when the temperature readings are lagged by two years: that is, the 1979 temperature reading is paired with the 1977 seismic reading, and so on. As already mentioned, seismic activity since 1979 shows step increases similar to the temperature.

A regression analysis yields a correlation coefficient of 0.74 between seismic activity and the two-year-lagged temperatures; since 0.74 squared is roughly 0.55, Viterito argues that mid-ocean geothermal heat accounts for 55% of current global warming. However, a correlation coefficient of 0.74 is not as high as some estimates of the correlation between rising CO2 and temperature.
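For readers curious about the mechanics, a minimal Python sketch of this kind of lagged-correlation and variance-explained calculation appears below; the earthquake and temperature series are synthetic stand-ins, not Viterito’s actual data.

import numpy as np

# Lagged correlation between mid-ocean earthquake counts and tropospheric
# temperature. Both series below are synthetic stand-ins, not Viterito's data.
rng = np.random.default_rng(0)
years = np.arange(1979, 2024)
quakes = 100 + 2 * (years - 1979) + rng.normal(0, 5, years.size)     # hypothetical counts
temps = 0.01 * np.roll(quakes, 2) + rng.normal(0, 0.25, years.size)  # responds two years later

lag = 2   # pair temperature in year t with seismic activity in year t - lag
r = np.corrcoef(quakes[:-lag], temps[lag:])[0, 1]
print(f"correlation r = {r:.2f}, variance explained r^2 = {r**2:.0%}")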

In support of his hypothesis, Viterito states that multiple modeling studies have demonstrated how geothermal heating can significantly strengthen the thermohaline circulation, shown below. He then links the recently enhanced undersea seismic activity to global warming of the atmosphere by examining thermohaline heat transport to the North Atlantic-Arctic and western Pacific oceans.

In the Arctic, Viterito points to several phenomena that he believes are a direct result of a rapid intensification of North Atlantic currents which began around 1995 – the same year that mid-ocean seismic activity started to rise. The phenomena include the expansion of a phytoplankton bloom toward the North Pole due to incursion of North Atlantic currents into the Arctic; enhanced Arctic warming; a decline in Arctic sea ice; and rapid warming of the Subpolar Gyre, a circular current south of Greenland.

In the western Pacific, he cites the increase since 1993 in heat content of the Indo-Pacific Warm Pool near Indonesia; a deepening of the Indo-Pacific Warm Pool thermocline, which divides warmer surface water from cooler water below; strengthening of the Kuroshio Current near Japan; and recently enhanced El Niños.

But, while all these observations are accurate, they do not necessarily verify Viterito’s hypothesis that submarine earthquakes are driving current global warming. For instance, he cites as evidence the switch of the AMO (Atlantic Multidecadal Oscillation) to its positive or warm phase in 1995, when mid-ocean seismic activity began to increase. However, his assertion raises the question: Isn’t the present warm phase of the AMO just the same as the hundreds of warm cycles that preceded it?

In fact, perhaps the AMO warm phase has always been triggered by an upturn in mid-ocean earthquakes, and has nothing to do with global warming.

There are other weaknesses in Viterito’s argument too. One example is his association of the decline in Arctic sea ice, which also began around 1995, with the current warming surge. What he overlooks is that the sea ice extent stopped shrinking on average in 2007 or 2008, but warming has continued.

And while he dismisses CO2 as a global warming driver because the rising CO2 level doesn’t show the same step increases as the tropospheric temperature, a correlation coefficient between CO2 and temperature as high as 0.8 means that any CO2 contribution is not negligible.

It’s worth noting here that a strengthened thermohaline circulation is the exact opposite of the slowdown postulated by retired meteorologist William Kininmonth as the cause of global warming, a possibility I described in an earlier post in this Challenges series (#7). From an analysis of longwave radiation from greenhouse gases absorbed at the tropical surface, Kininmonth concluded that a slowdown in the thermohaline circulation is the only plausible explanation for warming of the tropical ocean.

Next: Foundations of Science Under Attack in U.S. K-12 Education

Rapid Climate Change Is Not Unique to the Present

Rapid climate change, such as the accelerated warming of the past 40 years, is not a new phenomenon. During the last ice age, which spanned the period from about 115,000 to 11,000 years ago, temperatures in Greenland rose abruptly and fell again at least 25 times. Corresponding temperature swings occurred in Antarctica too, although they were less pronounced than those in Greenland.

The striking but fleeting bursts of heat are known as Dansgaard–Oeschger (D-O) events, named after palaeoclimatologists Willi Dansgaard and Hans Oeschger who examined ice cores obtained by deep drilling the Greenland ice sheet. What they found was a series of rapid climate fluctuations, when the icebound earth suddenly warmed to near-interglacial conditions over just a few decades, only to gradually cool back down to frigid ice-age temperatures.

Ice-core data from Greenland and Antarctica are depicted in the figure below; two sets of measurements, recorded at different locations, are shown for each. The isotopic ratios of 18O to 16O, or δ18O, and 2H to 1H, or δ2H, in the cores are used as proxies for the past surface temperature in Greenland and Antarctica, respectively.
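For reference, the delta notation used for these proxies has a standard definition: the per-mil deviation of the sample’s isotopic ratio from that of a reference standard,

\delta^{18}\mathrm{O} = \left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}

with δ2H defined analogously for the hydrogen isotopes. More negative values in polar ice correspond to colder conditions at the time the snow fell.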

Multiple D-O events can be seen in the four sets of data, stronger in Greenland than Antarctica. The periodicity of successive events averages 1,470 years, which has led to the suggestion of a 1,500-year cycle of climate change associated with the sun.

Somewhat similar cyclicity has been observed during the present interglacial period or Holocene, with eight sudden temperature drops and recoveries, mirroring D-O temperature spurts, as illustrated by the thick black line in the next figure. Note that the horizontal timescale runs forward, compared to backward in the previous (and following) figure.

These so-called Bond events were identified by geologist Gerard Bond and his colleagues, who used drift ice measured in deep-sea sediment cores, and δ18O as a temperature proxy, to study ancient climate change. The deep-sea cores contain glacial debris rafted into the oceans by icebergs, and then dropped onto the sea floor as the icebergs melted. The volume of glacial debris was largest, and it was carried farthest out to sea, when temperatures were lowest.

Another set of distinctive, abrupt events during the latter part of the last ice age were Heinrich events, which are related to both D-O events and Bond cycles. Five of the six or more Heinrich events are shown in the following figure, where the red line represents Greenland ice-core δ18O data, and some of the many D-O events are marked; the figure also includes Antarctic δ18O data, together with ice-age CO2 and CH4 levels.

As you can see, Heinrich events represent the cooling portion of certain D-O events. Although the origins of both are debated, they are thought likely to be associated with an increase in icebergs discharged from the massive Laurentide ice sheet which covered most of Canada and the northern U.S. Just as with Bond events, Heinrich and D-O events left a signature on the ocean floor, in this case in the form of large rocks eroded by glaciers and dropped by melting icebergs.

The melting icebergs would have also disgorged enormous quantities of freshwater into the Labrador Sea. One hypothesis is that this vast influx of freshwater disrupted the deep-ocean thermohaline circulation (shown below) by lowering ocean salinity, which in turn suppressed deepwater formation and reduced the thermohaline circulation.

Since the thermohaline circulation plays an important role in transporting heat northward, a slowdown would have caused the North Atlantic to cool, leading to a Heinrich event. Later, as the supply of freshwater decreased, ocean salinity and deepwater formation would have increased again, resulting in the rapid warming of a D-O event.

However, this is but one of several possible explanations. The proposed freshwater increase and reduced deepwater formation during D-O events could have resulted from changes in wind and rainfall patterns in the Northern Hemisphere, or the expansion of Arctic sea ice, rather than melting icebergs.

In 2021, an international team of climate researchers concluded that when certain parts of the ice-age climate system changed abruptly, other parts of the system followed like a series of dominoes toppling in succession. But to their surprise, neither the rate of change nor the order of the processes was the same from one event to the next.

Using data from two Greenland ice cores, the researchers discovered that changes in ocean currents, sea ice and wind patterns were so closely intertwined that they likely triggered and reinforced each other in bringing about the abrupt climate changes of D-O and Heinrich events.

While there’s clearly no connection between ice-age D-O events and today’s accelerated warming, this research and the very existence of such events show that the underlying causes of rapid climate change can be elusive.

Next: Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Driven by Oceanic Seismic Activity, Not CO2

Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

In something of a twist to my series on challenges to the CO2 global warming hypothesis, this post describes a new paper that attributes modern global warming entirely to water vapor, not CO2.

Water vapor (H2O) is in fact the major greenhouse gas in the earth’s atmosphere and accounts for about 70% of the earth’s natural greenhouse effect. Water droplets in clouds account for another 20%, while CO2 contributes only a small percentage, between 4 and 8%, of the total. The natural greenhouse effect keeps the planet at a comfortable enough temperature for living organisms to survive, rather than 33 degrees Celsius (59 degrees Fahrenheit) cooler.

According to the CO2 hypothesis, it’s the additional greenhouse effect of CO2 and other gases from human activities that is responsible for the current warming (ignoring El Niño) of about 1.0 degrees Celsius (1.8 degrees Fahrenheit) since the preindustrial era. Because elevated CO2 on its own causes only a tiny increase in temperature, the hypothesis postulates that the increase from CO2 is amplified by water vapor in the atmosphere and by clouds – a positive feedback effect.

The paper’s authors, Canadian researchers H. Douglas Lightfoot and Gerald Ratzer, don’t dispute the existence of the natural greenhouse effect, as some other, more heretical challenges described previously in this series do. But the authors ignore the postulated water vapor amplification of CO2 greenhouse warming, and claim that increased water vapor alone accounts for today’s warmer world. It’s well known that extra water vapor is produced by the sun’s evaporation of seawater.

The basis of Lightfoot and Ratzer’s conclusion is something called the psychrometric chart, which is a rather intimidating tool used by architects and engineers in designing heating and cooling systems for buildings. The chart, illustrated below, is a mathematical model of the atmosphere’s thermodynamic properties, including heat content (enthalpy), temperature and relative humidity.

As inputs to their psychrometric model, the researchers used temperature and relative humidity measurements recorded on the 21st of the month over a 12-month period at 20 different locations: four north of the Arctic Circle, six in north mid-latitudes, three on the equator, one in the Sahara Desert, five in south mid-latitudes and one in Antarctica.

As indicated in the figure above, one output of the model from these inputs is the mass of water vapor in grams per kilogram of dry air. The corresponding mass of CO2 per kilogram of dry air at each location was calculated from Mauna Loa CO2 data in ppm (parts per million).
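To give a concrete sense of the arithmetic involved, here is a rough Python sketch of how the water vapor and CO2 content of a kilogram of dry air can be estimated from temperature, relative humidity and the CO2 concentration. It uses a standard Magnus-type formula for saturation vapor pressure and is only a simplified stand-in for the authors’ psychrometric model; the input temperatures and humidities are illustrative, not their measurements.

import math

def grams_h2o_per_kg_dry_air(temp_c, rel_humidity, pressure_hpa=1013.25):
    # Magnus formula for saturation vapor pressure over water, in hPa
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e = rel_humidity * e_sat                  # actual vapor pressure (RH as a fraction)
    w = 0.622 * e / (pressure_hpa - e)        # mixing ratio, kg H2O per kg dry air
    return w * 1000                           # grams per kg of dry air

def h2o_to_co2_molecule_ratio(temp_c, rel_humidity, co2_ppm=420.0):
    mol_h2o = grams_h2o_per_kg_dry_air(temp_c, rel_humidity) / 18.02   # moles H2O per kg dry air
    mol_dry_air = 1000.0 / 28.96                                       # moles of dry air per kg
    mol_co2 = mol_dry_air * co2_ppm * 1e-6                             # moles CO2 per kg dry air
    return mol_h2o / mol_co2

print(f"Humid tropics (32 C, 85% RH): {h2o_to_co2_molecule_ratio(32, 0.85):.0f} H2O molecules per CO2 molecule")
print(f"Polar winter (-40 C, 60% RH): {h2o_to_co2_molecule_ratio(-40, 0.60):.1f} H2O molecules per CO2 molecule")

With these illustrative inputs the sketch gives roughly 99 water vapor molecules per CO2 molecule in the humid tropics and about 0.3 in polar winter, consistent with the range the authors report.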

Their results revealed that the ratio of water vapor molecules to CO2 molecules ranges from 0.3 in polar regions to 108 in the tropics. Then, in a somewhat obscure argument, Lightfoot and Ratzer compared these ratios to calculated spectra for outgoing radiation at the top of the atmosphere. Three spectra – for the Sahara Desert, the Mediterranean, and Antarctica – are shown in the next figure.

The significant dip in the Sahara Desert spectrum arises from absorption by CO2 of outgoing radiation whose emission would otherwise cool the earth. You can see that in Antarctica, the dip is absent and replaced by a bulge. This bulge has been explained by William Happer and William van Wijngaarden as being a result of the radiation to space by greenhouse gases over wintertime Antarctica exceeding radiation by the cold ice surface.

Yet Lightfoot and Ratzer assert that the dip must be unrelated to CO2 because their psychrometric model shows there are 0.3 to 40 molecules of water vapor per CO2 molecule in Antarctica, compared with a much higher 84 to 108 in the tropical Sahara where the dip is substantial. Therefore, they say, the warming effect of CO2 must be negligible.

As I see it, however, there are at least two fallacies in the researchers’ arguments. First, the psychrometric model is an inadequate representation of the earth’s climate. Although the model takes account of both convective heat and latent heat (from evaporation of H2O) in the atmosphere, it ignores multiple feedback processes, including the all-important water vapor feedback mentioned above. Other feedbacks include the temperature/altitude (lapse rate) feedback, high- and low-cloud feedback, and the carbon cycle feedback.

A more important objection is that the assertion about water vapor causing global warming represents a circular argument.

According to Lightfoot and Ratzer’s paper, any warming above that provided by the natural greenhouse effect comes solely from the sun. On average, they correctly state, about 26% of the sun’s incoming energy goes into evaporation of water (mostly seawater) to water vapor. The psychrometric model links the increase in water vapor to a gain in temperature.

But the Clausius-Clapeyron equation tells us that warmer air holds more moisture, about 7% more for each degree Celsius of temperature rise. So an increase in temperature raises the water vapor level in the atmosphere – not the other way around. Lightfoot and Ratzer’s claim is circular reasoning.
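The roughly 7% figure follows from the Clausius-Clapeyron relation. As a quick check, the fractional increase in saturation vapor pressure e_s per degree of warming at a typical surface temperature near 288 K is

\frac{1}{e_s}\frac{de_s}{dT} = \frac{L_v}{R_v T^2} \approx \frac{2.5 \times 10^6\ \mathrm{J\,kg^{-1}}}{(461.5\ \mathrm{J\,kg^{-1}\,K^{-1}}) \times (288\ \mathrm{K})^2} \approx 0.065\ \mathrm{K^{-1}},

or about 6 to 7% per degree Celsius, where L_v is the latent heat of vaporization and R_v the gas constant for water vapor.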

Next: Rapid Climate Change Is Not Unique to the Present

Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

In a recent series of blog posts, I showed how actual scientific data and reports in newspaper archives over the past century demonstrate clearly that the frequency and severity of extreme weather events have not increased during the last 100 years. But there’s also plenty of evidence of weather extremes comparable to today’s dating back centuries and even millennia.

The evidence consists largely of reconstructions based on proxies such as tree rings, sediment cores and leaf fossils, although some evidence is anecdotal. Reconstruction of historical hurricane patterns, for example, confirms what I noted in an earlier post, that past hurricanes were even more frequent and stronger than those today.

The figure below shows a proxy measurement for hurricane strength of landfalling tropical cyclones – the name for hurricanes down under – that struck the Chillagoe limestone region in northeastern Queensland, Australia between 1228 and 2003. The proxy was the ratio of 18O to 16O isotopic levels in carbonate cave stalagmites, a ratio which is highly depleted in tropical cyclone rain.

What is plotted here is the 18O/16O depletion curve, in parts per thousand (‰); the thick horizontal line at -2.50 ‰ denotes Category 3 or above events, which have a top wind speed of 178 km per hour (111 mph) or greater. It’s clear that far more (seven) major tropical cyclones impacted the Chillagoe region in the period from 1600 to 1800 than in any period since, at least until 2003. Indeed, the strongest cyclone in the whole record occurred during the 1600 to 1800 period, and only one major cyclone was recorded from 1800 to 2003.

Another reconstruction of past data is that of unprecedentedly long and devastating “megadroughts,” which have occurred in western North America and in Europe for thousands of years. The next figure depicts a reconstruction from tree ring proxies of the drought pattern in central Europe from 1000 to 2012, with observational data from 1901 to 2018 superimposed. Dryness is denoted by negative values, wetness by positive values.

The authors of the reconstruction point out that the droughts from 1400 to 1480 and from 1770 to 1840 were much longer and more severe than those of the 21st century. A reconstruction of megadroughts in California back to 800 was featured in a previous post.

An ancient example of a megadrought is the seven-year drought in Egypt approximately 4,700 years ago that resulted in widespread famine. The water level in the Nile River dropped so low that the river failed to flood adjacent farmlands as it normally did each year, drastically reducing crop yields. The event is recorded in a hieroglyphic inscription, known as the Famine Stela, carved on a granite block on an island in the Nile.

At the other end of the wetness scale, a Christmas Eve flood in the Netherlands, Denmark and Germany in 1717 drowned over 13,000 people – many more than died in the much-hyped Pakistan floods of 2022.

Although most tornadoes occur in the U.S., they have been documented in the UK and other countries for centuries. In 1577, North Yorkshire in England experienced a tornado of intensity T6 on the TORRO scale, which corresponds approximately to EF4 on the Fujita scale, with wind speeds of 259-299 km per hour (161-186 mph). The tornado destroyed cottages, trees, barns, hayricks and most of a church. EF4 tornadoes are relatively rare in the U.S.: of 1,000 recorded tornadoes from 1950 to 1953, just 46 were EF4.

Violent thunderstorms that spawn tornadoes have also been reported throughout history. An associated hailstorm which struck the Dutch town of Dordrecht in 1552 was so violent that residents “thought the Day of Judgement was coming” when hailstones weighing up to a few pounds fell on the town. A medieval depiction of the event is shown in the following figure.

Such historical storms make a mockery of the 2023 claim by a climate reporter that “Recent violent storms in Italy appear to be unprecedented for intensity, geographical extensions and damages to the community.” The thunderstorms in question produced hailstones the size of tennis balls, merely comparable to those that fell on Dordrecht centuries earlier. And the storms hardly compare with a hailstorm in India in 1888, which actually killed 246 people.

Next: Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

As I’ve stated many times in these pages, the evidence that global warming comes largely from human emissions of CO2 and other greenhouse gases is not rock solid. Two recent statistical studies affirm this position, but both studies can be faulted.

The first study, by four European engineers, is provocatively titled “On Hens, Eggs, Temperatures and CO2: Causal Links in Earth’s Atmosphere.” As the title suggests, the paper addresses the question of whether modern global warming results from increased CO2 in the atmosphere, according to the CO2 narrative, or whether it’s the other way around. That is, whether rising temperatures from natural sources are causing the CO2 concentration to go up.

The study’s controversial conclusion is the latter possibility – that extra atmospheric CO2 can’t be the cause of higher temperatures, but that raised temperatures must be the origin of elevated CO2, at least over the last 60 years for which we have reliable CO2 data. The mathematics behind the conclusion is complicated but relies on something called the impulse response function.

The impulse response function describes the reaction over time of a dynamic system to some external change or impulse. Here, the impulse and response are the temperature change ΔT and the increase in the logarithm of the CO2 level, Δln(CO2), or the reverse. The study authors took ΔT to be the average one-year temperature difference from 1958 to 2022 in the Reanalysis 1 dataset compiled by the U.S. NCEP (National Centers for Environmental Prediction) and the NCAR (National Center for Atmospheric Research); CO2 data was taken from the Mauna Loa time series which dates from 1958.

Based on these two time series, the study’s calculated IRFs (impulse response functions) are depicted in the figure below, for the alternate possibilities of ΔT => Δln(CO2) (left, in green) and Δln(CO2) => ΔT (right, in red). Clearly, the IRF indicates that ΔT is the cause and Δln(CO2) the effect, since for the opposite case of Δln(CO2) causing ΔT, the time lag is negative and therefore unphysical.

This is reinforced by the correlations shown in the following figure (lower panels), which also illustrates the ΔT and Δln(CO2) time series (upper panel). A strong correlation (R = 0.75) is seen between ΔT and Δln(CO2) when the CO2 increase occurs six months later than ΔT, while there is no correlation (R = 0.01) when the CO2 increase occurs six months earlier than ΔT, so ΔT must cause Δln(CO2). Note that the six-month displacement of Δln(CO2) from ΔT in the two time series is artificial, for easier viewing.
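A minimal Python sketch of this six-month lead/lag test is shown below; the monthly series are synthetic stand-ins constructed so that the CO2 changes respond to temperature, not the NCEP/NCAR and Mauna Loa data used in the study.

import numpy as np

# Six-month lead/lag correlation test with synthetic monthly series.
rng = np.random.default_rng(1)
n_months = 12 * 64                        # roughly the 1958-2022 span
dT = rng.normal(0, 0.1, n_months)         # hypothetical one-year temperature differences
# By construction, dlnCO2 responds to dT six months earlier, plus noise.
dlnCO2 = 0.5 * np.concatenate([np.zeros(6), dT[:-6]]) + rng.normal(0, 0.02, n_months)

def corr_at_lag(x, y, lag):
    # correlation when x leads y by 'lag' months (negative lag: x lags y)
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    return np.corrcoef(x[-lag:], y[:lag])[0, 1]

print(f"dT leading dlnCO2 by 6 months: r = {corr_at_lag(dT, dlnCO2, 6):.2f}")
print(f"dT lagging dlnCO2 by 6 months: r = {corr_at_lag(dT, dlnCO2, -6):.2f}")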

However, while the above correlation and the behavior of the impulse response function are impressive mathematically, I personally am dubious about the study’s conclusion.

The oceans hold the bulk of the world’s CO2 and release it as the temperature rises, since warmer water holds less CO2 according to Henry’s Law. For global warming of approximately 1 degree Celsius (1.8 degrees Fahrenheit) since 1880, the corresponding increase in atmospheric CO2 outgassed from the oceans is only about 16 ppm (parts per million) – far below the actual increase of 130 ppm over that time. The Hens and Eggs study can’t account for the extra 114 ppm of CO2.

The equally provocative second study, titled “To what extent are temperature levels changing due to greenhouse gas emissions?”, comes from Statistics Norway, Norway’s national statistical institute and the principal source of the country’s official statistics. From a statistical analysis, the study claims that the effect of human CO2 emissions during the last 200 years has not been strong enough to cause the observed rise in temperature, and that climate models are incompatible with actual temperature data.

The conclusions are based on an analysis of 75 temperature time series from weather stations in 32 countries, the records spanning periods from 133 to 267 years; both annual and monthly time series were examined. The analysis attempted to identify systematic trends in temperature, or the absence of trends, in the temperature series.

What the study purports to find is that only three of the 75 time series show any systematic trend in annual data (though up to 10 do in monthly data), so that 72 sets of long-term temperature data show no annual trend at all. From this finding, the study authors conclude it’s not possible to determine how much of the observed temperature increase since the 19th century is due to CO2 emissions and how much is natural.

One of the study’s weaknesses is that it excludes sea surface temperatures, even though the oceans cover 70% of the earth’s surface, so the study is not truly global. A more important weakness is that it confuses local temperature measurements with global mean temperature. Furthermore, the study authors fail to understand that a statistical model simply can’t approximate the complex physical processes of the earth’s climate system.

In any case, statistical analysis in climate science doesn’t have a strong track record. The infamous “hockey stick” – a reconstructed temperature graph for the past 2,000 years resembling the shaft and blade of a hockey stick on its side – is perhaps the best example.

The reconstruction was debunked in 2003 by Stephen McIntyre and Ross McKitrick, who found (here and here) that the graph was based on faulty statistical analysis, as well as preferential data selection. The hockey stick was further discredited by a team of scientists and statisticians from the National Research Council of the U.S. National Academy of Sciences.

Next: Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s