Has the Sun’s Role in Climate Change Been Trivialized?

Central to the narrative that climate change comes largely from human emissions of greenhouse gases is the assertion that the sun plays almost no role at all. In its Fifth Assessment Report, the IPCC (Intergovernmental Panel on Climate Change) attributes no more than a few percent of total global warming to the sun’s influence.

But the exact amount of the solar contribution to global warming is critically dependent on how much the sun’s heat and light output, known technically as the TSI (total solar irradiance), has varied since the 19th century. According to an international team of scientists in a recently published paper, different estimates of the TSI lead to different conclusions about global warming – ranging from the sun making a trivial contribution, which backs up the IPCC claim that recent warming is mostly human-caused, to the opposite conclusion that global warming is mostly natural and due to changes in solar activity.

How can there be such a wide discrepancy between these two positions? Over the approximately 11-year solar cycle, the TSI varies by only a tenth of one percent. However, long-term fluctuations in the sun’s internal magnetic field cause the baseline TSI to vary over decades and centuries.

This can be seen in the somewhat congested figure below, which depicts several reconstructions of the TSI since 1850 and shows variations in both the TSI baseline and its peak-to-peak amplitude. The curve plotted in black forms the basis for the current CMIP6 generation of computer climate models; the curve in yellow was the basis for the previous CMIP5 models featured in the IPCC’s Fifth Assessment Report.

[Figure: reconstructions of the TSI since 1850]

A rather different reconstruction of the TSI since 1700 is shown in the next figure, based on an earlier solar irradiance model augmented with recent satellite data. You can see that in this reconstruction, the TSI since 1850 exhibits much larger fluctuations – from 1358 to 1362 watts per square meter – compared with the reconstruction above, in which the variation since 1850 is only from about 1360.5 to 1362 watts per square meter.
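To put the two reconstructions on a common footing, here is a quick back-of-the-envelope comparison of the peak-to-trough variation each one implies, using the approximate figures quoted above:

```python
# Peak-to-trough TSI variation since 1850 implied by the two
# reconstructions, using the approximate figures quoted in the text.
reconstructions = {
    "CMIP6-style (first figure)": (1360.5, 1362.0),
    "ACRIM-based (second figure)": (1358.0, 1362.0),
}

for name, (low, high) in reconstructions.items():
    span = high - low                       # watts per square meter
    percent = 100 * span / ((high + low) / 2)
    print(f"{name}: {span:.1f} W/m^2, or {percent:.2f}% of the mean")
```

The second reconstruction implies a variation nearly three times larger, which is the crux of the disagreement over the sun’s contribution.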

The dramatic difference between the two estimates of the TSI arises from rival sets of satellite data. Satellite measurements of TSI began in 1978, the two main sources of data being the ACRIM (Active Cavity Radiometer Irradiance Monitor) composite, compiled by the ACRIM instrument science team, and the PMOD (Physikalisch-Meteorologisches Observatorium Davos) composite from the World Radiation Center.

The ACRIM composite implies that the TSI rose during the 1980s and 1990s but has fallen slightly since then, as seen in the second figure above. The PMOD composite implies that the TSI has been dropping steadily since the late 1970s, a trend just visible in the first figure. The PMOD composite, showing a decline in solar activity during the period after 1975 in which global temperatures went up, therefore downplays the sun’s role in global warming. The ACRIM composite, on the other hand, indicates an increase in solar activity over the same period, and so supports the notion that global temperatures are strongly linked to the TSI.

The ACRIM satellite data set and the PMOD data differ in the procedures used to bridge a two-year gap in ACRIM data around 1990. The gap in data gathering occurred after the launch of a new ACRIM satellite was delayed by the Challenger disaster. It’s these disparate gap-bridging procedures that result in the ACRIM and PMOD composite data showing such different behavior of the TSI during the most recent solar cycles 21 to 23.

The authors of the recent paper also discuss other TSI reconstructions, some of which support the ACRIM data and some of which back the rival PMOD data. Rather than passing judgment on which dataset is the better representation of reality, the authors urge the climate science community to consider all relevant estimates of the TSI and not just the one illustrated in the first figure above. But they conclude that, contrary to the current narrative, the question of how much the sun has influenced recent global temperatures – at least in the Northern Hemisphere – has not yet been answered satisfactorily.

The researchers go on to comment: “The PMOD dataset is more politically advantageous to justify the ongoing considerable political and social efforts to reduce greenhouse gas emissions under the assumption that the observed global warming since the late 19th century is mostly due to greenhouse gases.” They add that political considerations have been acknowledged as one of the motivations for the development of the PMOD composite as a rival dataset to the ACRIM measurements.

Next: Latest UN Climate Report Is More Hype than Science

Could Pacific Northwest Heat Wave, European Floods Have Been Caused by the Sun?

The recent record-shattering heat wave in the Pacific Northwest and devastating floods in western Europe have both been ascribed to global warming by many climate scientists. But an alternative explanation, voiced by some climatologists yet ignored by the mainstream media, is that the disasters were caused by the phenomenon of jet-stream blocking – which may or may not be a result of global warming, and could instead arise from a weakening of the sun’s output.

Blocking refers to the locking in place for several days or weeks of the jet stream, a narrow, high-altitude air current that flows rapidly from west to east in each hemisphere and governs much of our weather. One of the more common blocking patterns is known as an “omega block,” a buckling of the jet stream named for its resemblance to the upper-case Greek letter omega (Ω), which produces alternating, stationary highs and lows in pressure as shown in the figure below. Under normal weather conditions, highs and lows move on quickly.

According to the blocking explanation, the torrential rains that hovered over parts of eastern Germany, Belgium and the Netherlands came from a low-pressure system trapped between two blocking highs to the west and east – the opposite situation to that shown in the figure.

Precipitation tends to increase in a warmer world because of enhanced evaporation from tropical oceans, resulting in more water vapor in the atmosphere. So with a blocking low stuck over the Rhine valley and the ground already saturated from previous rainfall, it’s not surprising that swollen rivers overflowed and engulfed whole villages.

A similar argument can be invoked to explain the intense “heat dome” that parked itself over British Columbia, Washington and Oregon for five blisteringly hot days last month. In this case, it was a region of high pressure that was pinned in place by lows on either side, with the sweltering heat intensified by the effects of La Niña on North America. Several Pacific Northwest cities experienced temperatures a full 5 degrees Celsius (9 degrees Fahrenheit) above previous records.

There’s little doubt that both of these calamitous events resulted from jet-stream omega blocks. Blocking can also induce cold extremes, such as the deep freeze endured by Texas earlier this year. But how can blocking be caused by the sun?

Over the 11-year solar cycle, the sun’s heat and visible light fluctuate, as does its production of invisible UV, which varies much more than the tenth of a percent change in total solar output. It’s thought that changes in solar UV irradiance cause wind shifts in the stratosphere (the layer of the atmosphere above the troposphere), which in turn induce blocking in the tropospheric jet stream via a feedback effect. Blocking can also stem from other mechanisms. In the North Atlantic at least, a 2008 research paper found that during periods of low solar activity, blocking events in more eastward locations are longer and more intense than during higher solar activity.

Right now we’re entering a stretch of diminished solar output, signified by a falloff in the average monthly number of sunspots as depicted in the next figure. The decline in the maximum number of sunspots over the last few cycles likely heralds the onset of a grand solar minimum, which could usher in a period of global cooling.

[Figure: average monthly sunspot numbers]

However, climate scientists are divided on the question of whether global warming exacerbates jet-stream blocking. Computer climate models in fact predict that blocking will diminish in a warming climate, though the frequency of past blocking events has been simulated poorly by the models. Many climate scientists admit that it’s not yet clear how large a role is played by natural variability – which includes solar fluctuations.

Adherents to the global warming explanation got a boost from a just-completed attribution study, claiming that the deadly US-Canada heat wave was “virtually impossible” without climate change. The study authors used 21 climate models to estimate how much global warming contributed to excessive heat in the areas around the cities of Vancouver, Seattle and Portland. Their conclusion was that the heat wave was a one-in-1,000-year event, and would have been at least 150 times rarer in the past.
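As a rough illustration of what those rarity figures mean – this is simple return-period arithmetic, not the attribution study’s actual method:

```python
# Translate the study's claims into annual probabilities.
# The figures come from the text above; the arithmetic is illustrative only.
return_period_today = 1000   # "one-in-1,000-year event"
rarity_factor = 150          # "at least 150 times rarer in the past"

p_today = 1 / return_period_today
p_past = p_today / rarity_factor

print(f"Annual probability today: {p_today} (1 in {return_period_today})")
print(f"Implied past probability: {p_past:.2e} (1 in {round(1 / p_past):,})")
```

In other words, the study’s numbers imply the event went from roughly a 1-in-150,000-year occurrence to a 1-in-1,000-year occurrence.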

But such a highly exaggerated claim is based on dubious computer models, not on actual scientific evidence. June’s scorching temperatures in the Pacific Northwest were no match for the persistently searing heat waves that afflicted North America in the 1930s.

And despite the hoopla over the European floods, it’s not the first time some of the flooded areas have suffered catastrophic flooding. For example, the Ahr valley in Germany, struck again this month, experienced major floods in the same locations on June 12, 1910, when at least 52 people were killed.

Next: Has the Sun’s Role in Climate Change Been Trivialized?

New Doubts on Climatic Effects of Ocean Currents, Clouds

Recent research has cast doubt on the influence of two watery entities – ocean currents and clouds – on future global warming. But, unlike many climate studies, the two research papers are grounded in empirical observations rather than theoretical models.

The first study examined so-called deep water formation in the Labrador Sea, located between Greenland and Canada in the North Atlantic Ocean, and its connection to the strength of the AMOC (Atlantic Meridional Overturning Circulation). The AMOC forms part of the ocean conveyor belt that redistributes seawater and heat around the globe. Despite recent evidence to the contrary, computer climate models have predicted that climate change may weaken the AMOC or even shut it down altogether.  

Deep water formation, which occurs in a few localized areas across the world, refers to the sinking of cold, salty surface water to depths of several kilometers because it’s denser than warm, fresher water; winter winds in the Labrador Sea both cool the surface and increase salinity through evaporation. Most climate models link any decline in North Atlantic deep water formation, due to global warming, to decreases in the strength of the AMOC.

But the researchers found that winter convection in the Labrador Sea and the adjacent Irminger Sea (east of Greenland) had very little impact on deep ocean currents associated with the AMOC, over the period from 2014 to 2018. Their observational data came from a new array of seagoing instruments deployed in the North Atlantic, including moorings anchored on the sea floor, underwater gliders and submersible floats. The devices measure ocean current, temperature and salinity.

Results for the strength of the AMOC are illustrated in the figure below, in which “OSNAP West” includes the Labrador and Irminger Seas while the more variable “OSNAP East” is in the vicinity of Iceland. As can be seen, the AMOC in the Labrador Sea didn’t change on average during the whole period of observation. The study authors caution, however, that measurements over a longer time period are needed to confirm their conclusion that strong winter cooling in the Labrador Sea doesn’t contribute significantly to variability of the AMOC.

[Figure: strength of the AMOC at the OSNAP West and OSNAP East arrays]

Understanding the behavior of the AMOC is important because its strength affects sea levels, as well as weather in Europe, North America and parts of Africa. Variability of the AMOC is thought to have caused multiple episodes of abrupt climate change, in a decade or less, during the last ice age.

The second study to question the effect of water on global warming involves clouds. As I described in an earlier post, the lack of detailed knowledge about clouds is one of the major limitations of computer climate models. One problem with the existing models is that they simulate too much rainfall from “warm” clouds and, therefore, underestimate their lifespan and cooling effect.

Warm clouds contain only liquid water, whereas “cool” clouds consist of ice particles mixed with water droplets. Since the water droplets are usually smaller than the ice particles, they have a larger surface-area-to-mass ratio, which makes them reflect the sun’s radiation more readily. So warm clouds block more sunlight and produce more cooling than cool, icy clouds. At the same time, warm clouds survive longer because they don’t rain as much.

The research team used satellite data to ascertain how much precipitation from clouds occurs in our present climate. The results for warm clouds are illustrated in the following map showing the warm-rain fraction; red designates 100% warm rain, while various shades of blue indicate low fractions. As would be expected, most warm rain falls near the equator where temperatures are highest. It also falls predominantly over the oceans.

[Figure: map of the warm-rain fraction derived from satellite data]

The researchers then employed the empirical satellite data for rainfall to modify the warm-rain processes in an existing CMIP6 climate model (the latest generation of models). The next figure, which shows the probability of rain from warm clouds at different latitudes, compares the satellite data (gray) to the model results before (maroon) and after (yellow) modification.

[Figure: probability of warm rain by latitude, comparing satellite data with the model before and after modification]

It’s seen that the modified climate model is in much better agreement with the satellite data, except for a latitude band just north of the equator, and is also a major improvement over the unmodified model. The scientists say that their correction to the model makes the negative “cloud-lifetime feedback” – the process by which higher temperatures inhibit warm clouds from raining as much and so increase their lifetime – almost three times larger than in the original model.

This larger cooling feedback is enough to account for the greater greenhouse warming predicted by CMIP6 models compared with earlier CMIP5 models. But, as the study tested only a single model, it needs to be extended to more models before that conclusion can be confirmed.

Next: Could Pacific Northwest Heat Wave, European Floods Have Been Caused by the Sun?

Fishy Business: Alleged Fraud over Ocean Acidification Research, Reversal on Coral Extinction

In the news recently have been two revelations about the sometimes controversial world of coral reef research. The first is fraud allegations against research claiming that ocean acidification from global warming impairs the behavior of coral reef fish. The second is an about-face on inflated estimates for the extinction risk of Pacific Ocean coral species due to climate change. 

The alleged fraud involves 22 research papers authored by Philip Munday, a marine ecologist at JCU (James Cook University) in Townsville, Australia, and Danielle Dixson, a U.S. biologist who completed her PhD under Munday’s supervision in 2012. The fraud charges were made in August 2020 by three members of an international group of mostly biological and environmental scientists, together with the group’s leader, fish physiologist Timothy Clark of Deakin University in Geelong, Australia. The Clark group says it will publicize the alleged data problems shortly.

The research in question studied the behavior of coral reef fish in slightly acidified seawater, in order to simulate the effect of ocean acidification caused by the oceans’ absorption of up to 30% of humanity’s CO2 emissions. The additional CO2 has so far lowered the average pH – a measure of acidity – of ocean surface water from about 8.2 to 8.1 since industrialization began in the 18th century.
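Because pH is a logarithmic scale – pH is the negative base-10 logarithm of the hydrogen-ion concentration – even a drop of 0.1 units corresponds to a sizeable relative increase in acidity. A quick check:

```python
# pH = -log10([H+]), so [H+] = 10**(-pH). A pH drop from 8.2 to 8.1
# therefore raises the hydrogen-ion concentration by a factor of 10**0.1.
ph_preindustrial = 8.2
ph_today = 8.1

h_before = 10 ** -ph_preindustrial
h_after = 10 ** -ph_today

increase_pct = (h_after / h_before - 1) * 100
print(f"Hydrogen-ion concentration increase: {increase_pct:.0f}%")
```

That works out to roughly a 26% rise in hydrogen-ion concentration, even though the seawater remains mildly alkaline (pH above 7).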

Munday and Dixson claim that the extra CO2 causes reef fish to be attracted by chemical cues from predators, instead of avoiding them; to become hyperactive and disoriented; and to suffer loss of vision and hearing. But Clark and his fellow scientists, in their own paper published in January 2020, debunk all of these conclusions. Most damningly of all, the researchers find that the reported effects of ocean acidification on the behavior of coral reef fish are not reproducible – the basis for their fraud allegations against the JCU work.

In a published rebuttal, Munday and Dixson say that the Clark group’s replication study differed from the original research “in at least 16 crucial ways” and didn’t acknowledge other papers that support the JCU position.

Nevertheless, while the university has dismissed the allegations after a preliminary investigation, Science magazine points out that a 2016 paper by another former PhD student of Munday’s was subsequently deemed fraudulent and retracted. And Clark and his colleagues say they have evidence of manipulation in publicly available raw data files for two papers published by Munday’s research laboratory, as well as documentation of large and “statistically impossible” effects from CO2 reported in many of the other 20 allegedly fraudulent papers.

[Photo: coral reef fish]

CREDIT: ALEX MUSTARD/MINDEN PICTURES

The about-turn on coral extinction involves another JCU group, the university’s Centre of Excellence for Coral Reef Studies. Four Centre researchers published a paper in March 2021 that completely contradicts previous apocalyptic predictions of the imminent demise of coral reefs, predictions that include an earlier warning by three of the same authors of ongoing coral degradation from global warming.

As an example of past hype, the IUCN (International Union for Conservation of Nature) states on its website that 33% of all reef-building corals are at risk of extinction. The IUCN is highly regarded for its assessments of the world’s biodiversity, including evaluation of the extinction risk of thousands of species. An even more pessimistic environmental organization suggests that more than 90% of the planet’s coral reefs may be extinct by 2050.

The recent JCU paper turns all such alarming prophecies on their head. But the most astounding revelation is perhaps the sheer number of corals estimated to exist on reefs across the Pacific Ocean, from Indonesia to French Polynesia – approximately half a trillion, similar to the number of trees in the Amazon, or birds in the world. To estimate abundances, the JCU scientists used a combination of coral reef habitat maps and counts of coral colonies.

This colossal population is for a mere 300 species, a small fraction of the 2,175 coral species estimated to exist worldwide by the IUCN. And of the 80 species considered by the IUCN to be at an elevated risk of extinction, those in its “critically endangered” and “endangered” categories, 12 species have estimated Pacific populations of over a billion colonies. One of the study’s authors remarks that the eight most common coral species in the region each have a population size larger than the 7.8 billion people on Earth.

The implication of this stunning research is that the global extinction risk of most coral species is lower than previously estimated, even though a local loss can be ecologically devastating to coral reefs in the vicinity. So any future extinctions due to global warming are unlikely to unfold rapidly, if at all.

Next: New Doubts on the Climatic Effects of Ocean Currents, Clouds

Challenges to the CO2 Global Warming Hypothesis: (4) A Minimal Ice-Age Greenhouse Effect

As an addendum to my 2020 series of posts on the CO2 global warming hypothesis (here, here and here), this post presents a further challenge to the hypothesis central to the belief that humans make a substantial contribution to climate change. The hypothesis is that observed global warming – currently about 1 degree Celsius (1.8 degrees Fahrenheit) since the preindustrial era – has been caused primarily by human emissions of CO2 and other greenhouse gases into the atmosphere.

The new challenge to the CO2 hypothesis is set out in a recent research paper by French geologist Pascal Richet. Richet claims, by reexamining previous analyses of an Antarctic ice core, that greenhouse gases such as CO2 and methane had only a minor effect on the earth’s climate over the past 423,000 years, and that the assumed forcing of climate by CO2 is incompatible with ice-core data. The paper is controversial, however, and the publisher has subjected it to a post-publication review, as a result of which the paper has since been removed.

The much-analyzed ice core in question was drilled at the Russian Vostok station in East Antarctica. Past atmospheric CO2 levels are calculated from the composition of air bubbles trapped in the ice, while past surface temperatures are inferred from the isotopic composition of the ice itself, such as its 18O to 16O ratio. The Vostok record, which covers the four most recent ice ages or glaciations as well as the current interglacial (Holocene), is depicted in the figure below. The CO2 level is represented by the upper set of graphs (below the insolation data), and shows the substantial drop in CO2 during an ice age; the associated drop in temperature ΔT is represented by the lower set of graphs.

[Figure: the Vostok ice-core record of CO2 and temperature over the past four glacial cycles]

It is seen that transitions from glacial to interglacial conditions are relatively sharp, while the ice ages themselves are punctuated by smaller warming and cooling episodes. And, though it’s hardly visible in the figure, the ice-age CO2 level closely mimics changes in temperature, but the CO2 concentration lags behind – with CO2 going up or down after the corresponding temperature shift occurs. The lag is more pronounced for temperature declines than increases.

The oceans, which are where the bulk of the CO2 on our planet is stored, can hold much more CO2 (and heat) than the atmosphere. Warm water holds less CO2 than cooler water, so the oceans release CO2 when the temperature rises, but take it in when the earth cools.
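That inverse relationship between temperature and CO2 solubility can be sketched with Henry’s law plus a van ’t Hoff temperature correction. The constants below – a solubility of about 0.034 mol/(L·atm) at 25 °C and a temperature coefficient of roughly 2400 K – are typical literature values for CO2 in water, used here purely for illustration:

```python
import math

def co2_solubility(temp_c, kh_25c=0.034, vant_hoff_k=2400.0):
    """Approximate Henry's-law solubility of CO2 in water, in mol/(L*atm),
    with a van 't Hoff correction relative to 25 C (298.15 K)."""
    temp_k = temp_c + 273.15
    return kh_25c * math.exp(vant_hoff_k * (1 / temp_k - 1 / 298.15))

# Colder water dissolves noticeably more CO2 than warmer water.
for t in (5, 15, 25):
    print(f"{t:2d} C: {co2_solubility(t):.4f} mol/(L*atm)")
```

The sketch reproduces the qualitative point in the text: as water warms, its capacity to hold dissolved CO2 falls, so a warming ocean tends to outgas.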

Richet noticed that the temperature peaks in the Vostok record are much narrower than the corresponding CO2 peaks. The full widths at half maximum, marked by thick horizontal bars in the figure above, range from about 7,000 to 16,000 years for the initial temperature peak in cycles II, III and IV, but from 14,000 to 23,000 years for the initial CO2 peak; cycle V can’t be analyzed because its start is missing from the data. All other peaks are also narrower for temperature than for CO2.
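The width comparison can be made concrete with a short sketch. The Gaussian peaks below are synthetic stand-ins chosen to resemble the ranges quoted above – they are not the actual Vostok data – but they show how a full width at half maximum is measured:

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a single-peaked curve y(t)."""
    half = (y.min() + y.max()) / 2
    above = t[y >= half]
    return above.max() - above.min()

# Synthetic peaks standing in for one glacial cycle (time in years).
t = np.linspace(0, 60_000, 6001)
temperature = np.exp(-((t - 30_000) / 5_000) ** 2)   # narrower peak
co2 = np.exp(-((t - 32_000) / 9_000) ** 2)           # wider, lagging peak

print(f"Temperature FWHM: {fwhm(t, temperature):,.0f} years")
print(f"CO2 FWHM:         {fwhm(t, co2):,.0f} years")
```

With these made-up widths, the CO2 peak comes out nearly twice as wide as the temperature peak, mirroring the asymmetry Richet highlights in the real record.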

The author argues that CO2 can’t drive temperature since an effect can’t last for a shorter period of time than its cause. The fact that the peaks are systematically wider for CO2 than for temperature implies that the CO2 level responds to temperature changes, not the other way round. And for most of cycles II, III and IV, CO2 increases correspond to temperature decreases and vice versa.

Richet’s conclusion, if correct, would deal a deathblow to the CO2 global warming hypothesis. The reason has to do with the behavior of the temperature and CO2 level at the commencement and termination of ice ages.

Ice ages are believed to have ended (and begun) because of changes in the Earth’s orbit around the sun. After tens of thousands of years of bitter cold, the temperature suddenly took an upward turn. But according to the CO2 hypothesis, the melting of ice sheets and glaciers caused by the slight initial warming could not have continued, unless this temperature rise was amplified by positive feedbacks. These include CO2 feedback, triggered by a surge in atmospheric CO2 as it escaped from the oceans.

The problem with this explanation is that it requires a similar chain of events, based on CO2 and other feedbacks, to have enhanced global cooling as the temperature fell at the beginning of an ice age. But, says Richet, “From the dual way in which feedback would work, temperature decreases and increases should be similar for the same concentrations of greenhouse gases, regardless of the residence times of these gases in the atmosphere.” The fact that temperature decreases don’t depend in any straightforward way on CO2 concentration in the figure above demonstrates that the synchronicity required by the feedback mechanism is absent.

Next: Fishy Business: Alleged Fraud over Ocean Acidification Research, Reversal on Coral Extinction

New EPA Climate Change Indicator Is Deceptive

New climate change indicators on the U.S. EPA (Environmental Protection Agency) website are intended to inform science-based decision-making by presenting climate science transparently. But many of the indicators are misleading or deceptive, being based on incomplete evidence or selective data.

A typical example is the indicator for heat waves. This is illustrated in the left panel of the figure below, depicting the EPA’s representation of heat wave frequency in the U.S. from 1961 to 2019. The figure purports to show a steady increase in the occurrence of heat waves, which supposedly tripled from an average of two per year during the 1960s to six per year during the 2010s.

[Figures: EPA heat wave frequency based on minimum temperatures (left) and maximum temperatures (right)]

Unfortunately, the chart on the left is highly deceptive in several ways. First, the data is derived from minimum, not maximum, temperatures averaged across 50 American cities. The corresponding chart for maximum temperatures, shown in the right panel above, paints a rather different picture – one in which the heat wave frequency less than doubled from 2.5 per year in the 1960s to 4.5 per year in the 2010s, and actually declined from the 1980s to the 2000s.

This maximum-temperature graph, which reveals a much smaller increase in heat waves than the minimum-temperature graph displayed so boldly on the EPA website, is dishonestly hidden away in the agency’s technical documentation.

A second deception is that the starting date of 1961 for both graphs is conveniently cherry-picked to fall within a 30-year period of global cooling from 1940 to 1970. That in itself exaggerates the warming effect since then. Starting instead in 1980, after the current bout of global warming had begun, it can be seen that the heat wave frequency based on maximum temperatures (right panel) barely increased at all from 1981 to 2019. Similar exaggeration and sleight of hand can be seen in the EPA indicators for heat wave duration, season length and intensity.

A third deception is that the 1961 start date ignores the record U.S. heat of the 1930s, a decade characterized by persistent, searing heat waves across North America, especially in 1934 and 1936. The next figure shows the frequency and magnitude of U.S. heat waves from 1900 to 2018.

[Figure: frequency and magnitude of U.S. heat waves, 1900–2018]

The frequency (top panel) is the annual number of calendar days on which the maximum temperature exceeded its 90th percentile for 1961–1990, counting only runs of at least six consecutive such days. The EPA’s data, by contrast, is calculated for runs of at least four days, while the heat wave index (lower panel) measures the combined annual magnitude of all heat waves lasting at least three days in that year.
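A minimal sketch of this kind of heat-wave count – days belonging to runs of at least N consecutive days above the 90th percentile of daily maximum temperature – might look like the following. The temperature series is synthetic and the function only illustrates the definition; it is not the EPA’s or the figure’s actual code:

```python
import numpy as np

def heat_wave_days(tmax, min_run=6, pct=90):
    """Count days belonging to runs of at least `min_run` consecutive
    days with maximum temperature above the `pct`th percentile."""
    threshold = np.percentile(tmax, pct)
    hot = tmax > threshold
    total = 0
    run = 0
    for day in hot:
        if day:
            run += 1
        else:
            if run >= min_run:
                total += run
            run = 0
    if run >= min_run:          # handle a run that ends the series
        total += run
    return total

# Synthetic daily maximum temperatures for one year (degrees C).
rng = np.random.default_rng(0)
tmax = 25 + 8 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 2, 365)

print(heat_wave_days(tmax, min_run=6))   # six-day definition (first figure)
print(heat_wave_days(tmax, min_run=4))   # four-day definition (EPA)
```

Loosening the run length from six days to four can only add days to the count, which is one reason differently defined heat-wave metrics are hard to compare directly.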

Despite the differences in definition, it’s abundantly clear that heat waves over the last few decades – the ones publicized by the EPA – pale in comparison to those of the 1930s, and even those of other decades such as the 1910s and 1950s. The peak heat wave index in 1936 is a full three times higher than it was in 2012 and up to nine times higher than in many other years.

The heat wave index shown above actually appears on the same EPA website page as the minimum-temperature chart. But it’s presented as a tiny Figure 3 that is only 20% as large as the much more prominent Figure 1 showing minimum temperatures. As pointed out recently by another writer, a full-size version of the index chart, from 1895 to 2015, was once featured on the website, before the site was updated this year with the new climate change indicators.

The EPA points out that the 1930s heat waves in North America, which were concentrated in the Great Plains states of the U.S. and southern Canada, were exacerbated by Dust Bowl drought that depleted soil moisture and reduced the moderating effects of evaporation. While this is undoubtedly true, it has been suggested by climate scientists that future droughts in a warming world could result in further record-breaking U.S. heat waves. The EPA has no justification for omitting 1930s heat waves from their data record, or for suppressing the heat wave index chart.

Although the Dust Bowl was unique to the U.S. and Canada, there are locations in other parts of North America and in other countries where substantial heat waves occurred before 1961 as well. In the summer of 1930 two record-setting, back-to-back scorchers, each lasting eight days, afflicted Washington, D.C.; while in 1936, the province of Ontario – also well removed from the Great Plains – experienced 44 degrees Celsius (111 degrees Fahrenheit) heat during the longest, deadliest Canadian heat wave on record. In Europe, France was baked during heat waves in both 1930 and 1947, and many eastern European countries suffered prolonged heat waves in 1946.   

What all this means is that the EPA’s heat-wave indicator grossly misrepresents the actual science and defeats its stated goal for the indicators of “informing our understanding of climate change.”

Next: Challenges to the CO2 Global Warming Hypothesis: (4) A Minimal Ice-Age Greenhouse Effect

Is Recent Record Cold Just La Niña, or the Onset of Global Cooling?

Little noticed by the mainstream media in their obsession with global warming is an exceptionally chilly 2020-21 winter in the Northern Hemisphere and an unusually early start to the Southern Hemisphere winter. Low temperature and snowfall records are tumbling all over the globe. The harsh cold has already crippled this year’s crops and vines in Europe, while the U.S. state of Texas was ravaged by the subfreezing polar vortex.

Is this the beginning of the predicted grand solar minimum, which was the subject of an earlier post – or simply a manifestation of the naturally occurring La Niña cycle? A grand solar minimum is signified by a steep decline in the maximum number of sunspots during the 11-year solar cycle, a decline that appears to be underway already.

The familiar El Niño and La Niña cycles arise from seesaw changes in surface temperatures in the tropical Pacific Ocean and last for periods of a year or more at a time. The persistent but irregular pattern is visible in the graph below, showing satellite measurements of the global temperature since 1979. Warm spikes such as those in 1998, 2010 and 2016 are due to El Niño; cool spikes like those in 2000 and 2008 are due to La Niña. The climatic effects of El Niño and La Niña include catastrophic flooding in the western Americas and flooding or severe drought in Australia; La Niña has also been tied to major landfalling hurricanes in both the U.S. and the western Pacific.

[Figure: UAH satellite measurements of global temperature since 1979]

The zero baseline in the figure represents the average temperature in the global lower atmosphere, or troposphere, from 1991 to 2020 (though the satellite record began in 1979). Observations in the troposphere are a more reliable indicator of global warming than surface data, which are distorted by the urban heat island effect on land and by insufficient measurement stations across the oceans.

Right now, in May 2021, it’s clear that we’re experiencing another La Niña, with the mean April temperature having fallen back to the long-term average. This isn’t permanent of course, and the mercury will continue to rise and fall with future El Niño and La Niña fluctuations. But those fluctuations are superimposed on an overall warming trend of 0.14 degrees Celsius (0.25 degrees Fahrenheit) per decade at present – the familiar global warming.
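A trend like 0.14 degrees Celsius per decade is just the slope of a least-squares line fitted through the monthly anomalies. The sketch below fits synthetic data with that trend built in – the actual UAH series isn’t reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly anomalies, 1979-2021, with an imposed trend of
# 0.14 degrees C per decade (0.014 per year) plus ENSO-like noise.
years = 1979 + np.arange(42 * 12) / 12
anomaly = 0.014 * (years - years[0]) + rng.normal(0, 0.15, years.size)

# Least-squares linear fit; the slope comes out in degrees C per year.
slope, intercept = np.polyfit(years, anomaly, 1)
print(f"Fitted trend: {slope * 10:.3f} degrees C per decade")
```

The El Niño and La Niña spikes act like the noise term here: they move individual months up and down without much affecting the fitted long-term slope.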

Whether the present frigid and snowy conditions in much of the world are merely a result of La Niña, or the start of a longer cooling trend, we won’t know for several years yet. Climate, after all, is an average of the weather over an extended period, up to decades.

Nonetheless, there’s ample evidence that the current cold snap is not about to let up. At the same time as the UK experienced its lowest average minimum temperature for April since 1922, and both Switzerland and Slovenia suffered record low temperatures for the month, bone-chilling cold struck Australia, New Zealand and even normally shivery Antarctica in the Southern Hemisphere. The next figure shows how the 2021 sea ice extent (in blue) around Antarctica is above or close to the 30-year average from 1981 to 2010 (in gray).

SH Antarctic sea ice.jpeg

Snow records have continued to be broken around the world too. Belgrade, the capital of Serbia, registered its all-time high snowfall for April, in record books dating back to 1888; during April, both Finland and Russia reported their heaviest snow in decades; and the UK, Spain and several countries in the Middle East saw rare spring snowfalls from March to May. On the other side of the globe, up to 22 cm (9 inches) of snow fell on mountain peaks in southeastern Australia a full two months before the start of the 2021 ski season; and southern Africa was also blanketed in early-season snow.

The figure below shows the Northern Hemisphere snow mass (excluding mountains) for the current season, based on data from the Finnish Meteorological Institute. As can be seen, the snow mass for much of the season has tracked more than one standard deviation above the average for 1982-2012, and in March 2021 exceeded the average by two standard deviations. The mass is measured in gigatonnes (Gt), where 1 Gt is a billion tonnes and 1 tonne = 1.102 U.S. tons.

Snow 2020-21.jpg
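The standard-deviation comparison in the snow-mass figure boils down to a z-score. Here is a minimal sketch; the gigatonne values below are made-up illustrations, not figures from the Finnish Meteorological Institute data.

```python
# How many standard deviations above the 1982-2012 baseline a snow-mass
# reading lies. All numbers here are hypothetical, for illustration only.
def z_score(value_gt, baseline_mean_gt, baseline_std_gt):
    """Return (value - mean) / std: the number of standard deviations."""
    return (value_gt - baseline_mean_gt) / baseline_std_gt

# An assumed March reading of 3700 Gt against an assumed 3100 Gt baseline
# mean with a 300 Gt standard deviation gives z = 2.0 -- "two standard
# deviations above the average" in the text's terms.
print(z_score(3700, 3100, 300))  # → 2.0
```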

As startling as all this unusual weather is, it should be noted that recent bursts of extreme cold have sometimes been interspersed with brief periods of unseasonal warmth. Such swings between extremes may result from jet stream blocking, a phenomenon that can arise from natural sources such as a downturn in UV from a quieter sun, which can in turn produce changes in upper atmosphere wind patterns.

Next: New EPA Climate Change Indicator Is Deceptive

Little Evidence for Link between Natural Disasters and Global Warming

A new report on extreme weather in 2020 shows how socio-economic studies of natural disasters have been used to buttress the popular but mistaken belief that global warming causes weather extremes. Two international agencies issued reports in 2020 claiming that climate-related disasters are currently escalating: UNDRR (the UN Office for Disaster Risk Reduction), in conjunction with CRED (the Centre for Research on the Epidemiology of Disasters), and IFRC (the International Red Cross).

However, as the two reports themselves reveal, such claims are manifestly wrong. This can be seen in the following figure, originally included in the UNDRR-CRED’s report but since withdrawn, showing the annual number of climate-related disasters from 2000 through 2020. The disasters are those in the yellow climatological (droughts, glacial lake outbursts and wildfires), green meteorological (storms, extreme temperatures and fog), and blue hydrological (floods, landslides and wave action) categories.

CRED disasters.jpg

The UNDRR-CRED report draws a strong link between global warming and extreme weather events, citing a “staggering rise in climate-related disasters over the last twenty years.” But, as shown in the figure above, the total number of climate-related disasters in fact exhibits a distinctly declining trend (in red) since 2000, falling by 11% over the last 21 years. This completely contradicts the claims in two different sections of the report that the annual number of disasters since 2000 has either risen significantly from before or been “relatively stable.”

Another blatant inconsistency in the UNDRR-CRED report, an inconsistency that bolsters its false claim of a rising disaster rate, is a comparison between the period from 2000 to 2019 and the preceding 20 years from 1980 to 1999. The report contends that the earlier 20 years saw only 4,212 disasters, compared with 7,348 during the later period.       

However, the University of Colorado’s Roger Pielke Jr., who studies natural disasters, says that the report’s numbers are flawed. As CRED has repeatedly acknowledged, data from 20th-century disasters are unreliable because disasters were reported differently before the Internet existed. Climate writer Paul Homewood has noted a sudden jump in the annual number of disasters listed in CRED’s EM-DAT (Emergency Events Database) after 1998, which the agency itself attributes to increased disaster reporting in the Internet era. So its claim that the number of disasters over 20 years jumped from 4,212 to 7,348 is meaningless.

The IFRC report reaches the same erroneous conclusions as the UNDRR-CRED report – not surprisingly, since they are both based on CRED’s EM-DAT. As seen in the next figure, which is the same as the Red Cross report’s Figure 1.1, climate- and weather-related disasters since 2000 have declined by approximately the same 11% noted above. The report’s misleading assertion that such disasters have risen almost 35% since the 1990s relies on the same failure to account for a major increase in disaster reporting since 1998 due to the arrival of the Internet.

CRED Red Cross.jpg

That natural disasters are in fact diminishing over time is reinforced by data on the associated loss of life. The figure below illustrates the annual global number of deaths from natural disasters, including weather extremes, corrected for population increase over time and averaged by decade from 1900 to 2015.

CRED Fig 3.jpg

Because the data is compiled from the same EM-DAT database, the annual number of deaths shows an uptick from the 1990s to the 2000s. Yet it’s abundantly clear that disaster-related deaths have been dwindling since the 1920s. However, this is due as much to improvements in planning and engineering to safeguard structures, and to early warning systems that allow evacuation of threatened communities, as it is to diminishing numbers of natural disasters.

Economic loss studies of natural disasters have been quick to blame human-caused climate change for the apparently increasing frequency and intensity of weather-related events. But once the losses are corrected for population gain and the ever-increasing value of property in harm’s way, there’s very little evidence to support any connection between natural disasters and global warming.

According to numerous analyses by Pielke, the frequency and intensity of the phenomena causing financial losses show no detectable trend to date. Climate-related losses themselves are actually declining as a percentage of global gross domestic product. Another research study, based on the NatCatSERVICE database of reinsurance giant Munich Re, has concluded that both human and economic vulnerability to climate-related disasters exhibit a decreasing trend, and that average disaster mortality has dropped by a sizable 6.5 times from 1980–1989 to 2007–2016.
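The 6.5-fold drop in disaster mortality cited above is a ratio of decade averages. The sketch below shows the arithmetic with hypothetical annual death counts; the numbers are invented purely to illustrate the calculation, not taken from the NatCatSERVICE study.

```python
# Ratio of decade-average mortality, as in the study's comparison of
# 1980-1989 with 2007-2016. The annual figures here are hypothetical.
def decade_mean(annual_deaths):
    """Average annual deaths over a decade of yearly totals."""
    return sum(annual_deaths) / len(annual_deaths)

deaths_1980s   = [65_000] * 10   # assumed annual totals, 1980-1989
deaths_2007_16 = [10_000] * 10   # assumed annual totals, 2007-2016

ratio = decade_mean(deaths_1980s) / decade_mean(deaths_2007_16)
print(round(ratio, 1))  # → 6.5
```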

The IPCC (Intergovernmental Panel on Climate Change), whose assessment reports serve as the voice of authority for climate science, endorsed these findings in a 2014 report on the impacts of climate change. But most of the political world and the mainstream media cling to the erroneous notion that extreme weather is triggered by global warming and becoming more frequent, despite a lack of scientific evidence for either assertion.

Next: Is Recent Record Cold Just La Niña, or the Onset of Global Cooling?

Natural Sources of Global Warming and Cooling: (1) Solar Variability and La Niña

The role played by the sun in climate change has long been trivialized by advocates of the orthodoxy that links global warming almost entirely to our emissions of greenhouse gases. But recent research suggests that solar fluctuations, while small, may affect climate by driving the multidecadal switch from El Niño to La Niña conditions in the Pacific Ocean. Other research finds that our inability to correctly simulate the cooling La Niña cycle is a major reason that computer climate models run hot.     

La Niña is the cool phase of ENSO (the El Niño – Southern Oscillation), a natural cycle that causes temperature fluctuations and other climatic effects in tropical regions of the Pacific. The familiar El Niño and La Niña events, which last for a year or more at a time, recur at irregular intervals from two to seven years. Serious effects of ENSO range from catastrophic flooding in the U.S. and Peru to severe droughts in Australia. 

The sun has several natural cycles, the best known of which is the 11-year sunspot cycle. During the sunspot cycle the sun’s heat and light output waxes and wanes by about 0.08%. Although this variation in itself is too small to have any appreciable direct effect on the earth’s climate, indirect solar effects can have an impact on the warming and cooling of our planet – indirect effects that are ignored in climate models.
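To see why a 0.08% swing is too small to matter directly, it helps to turn it into a forcing. The sketch below does this; the 1361 W/m² TSI baseline and the standard 1/4 geometric and 0.7 albedo factors are assumptions added here for illustration, not figures from the post.

```python
# Rough size of the direct solar-cycle forcing implied by a 0.08% swing
# in total solar irradiance (TSI). Baseline and correction factors are
# conventional values assumed for this illustration.
TSI = 1361.0             # W/m^2, approximate satellite-era mean
swing = 0.0008 * TSI     # ~1.1 W/m^2 change over the sunspot cycle

# Averaged over the spherical earth (factor 1/4) and corrected for the
# ~30% of sunlight reflected back to space (factor 0.7):
forcing = swing / 4 * 0.7
print(round(swing, 2), round(forcing, 2))  # → 1.09 0.19
```

A forcing of roughly 0.2 W/m² is an order of magnitude smaller than the ~3 W/m² usually attributed to a doubling of CO2, which is why any solar influence on climate is argued to be indirect.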

Just such an indirect solar effect may have been discovered in a new study revealing a correlation between the end of sunspot cycles and the switch from El Niño to La Niña states of the tropical Pacific. The research was conducted by a team of scientists from NASA and the U.S. National Center for Atmospheric Research.

The researchers found that the termination of all five solar cycles between 1960 and 2010-11 coincided with a flip from a warmer El Niño to a cooler La Niña. And the end of the most recent solar cycle, which has just occurred, also coincides with the beginning of a new La Niña event. Because the end of the 11-year solar cycle is fuzzy, the research team relied for its “clock” on the sun’s more sharply defined magnetic polarity cycle, known as the Hale cycle, which is approximately 22 years in length.

The correspondence between the 11-year solar cycle and the onset of La Niña events is illustrated in the figure below, showing the six-month smoothed monthly sunspot number since 1950 in black and the Oceanic El Niño Index in color. The red and blue boxes mark El Niño and La Niña periods, respectively, in the repeating pattern. What stands out is that the end of each sunspot cycle is closely correlated with the switch from El Niño to La Niña. That the correlation is mere coincidence is statistically highly unlikely, say the study authors, although further research is needed to establish the physical connection between the sun and earth responsible for the correlation.

Solar ENSO.jpg

Another study, headed by climate scientists at the U.S. Lawrence Livermore National Laboratory, finds that multidecadal La Niña variability is why computer climate models overestimate sea surface temperatures in the Pacific by two to three times. The La Niña cycle results in atmospheric cooling and a distinct pattern of cooler-than-normal sea surface temperatures in the central and eastern tropical Pacific, with warmer waters to the north and south.

Many climate models produce ENSO variations, but are unable to predict either the timing of El Niño and La Niña events or temperatures measured by satellite in the tropical lower atmosphere (troposphere). However, the study authors found that approximately 13% of 482 simulations by 55 computer models do show tropospheric warming in the tropics that matches the satellite record. And, unexpectedly, those simulations reproduce all the characteristics of La Niña.

The next figure shows how well one of these particular simulations reproduces a La Niña temperature pattern, in both geographic extent (upper panel) and ocean depth (lower panel). The panels labeled B are the computer simulation and the panels labeled C are the satellite observations. Temperatures are depicted as an average warming (positive) or cooling (negative) rate, in degrees Celsius per decade, over the period from 1979 to 2018. La Niña cooling in the Pacific is clearly visible in both B and C.

Solar 2.jpg

The other 87% of the computer simulations overestimate tropical Pacific temperatures, which is why, the authors say, the multimodel mean warming rate is two to three times higher than observed. But their results show that natural climate variability, here in the form of La Niña, is large enough to explain the difference between reality and climate model predictions.

Next: Little Evidence for Link between Natural Disasters and Global Warming

How Near-Saturation of CO2 Limits Future Global Warming

The climate change narrative is based in part on the concept that adding more and more CO2 to the atmosphere will cause the planet to become unbearably hot. But recent research refutes this notion by concluding that extra CO2 quickly becomes less effective in raising global temperatures – a saturation effect, long disputed by believers in the narrative.

First reported in 2020, the new and highly detailed research is described in a preprint by physicists William Happer and William van Wijngaarden. Happer is an emeritus professor at Princeton University and prominent in optical and radiation physics. In their paper, the two authors examine the radiative forcings – disturbances that alter the earth’s climate – of the five most abundant greenhouse gases, including CO2 and water vapor.

The researchers find that the current levels of atmospheric CO2 and water vapor are close to saturation. Saturation is a technical term meaning that the greenhouse effect has already had its maximum impact and further increases in concentration will cause little additional warming. For CO2, doubling its concentration from its 2015 level of 400 ppm (parts per million) to 800 ppm will increase its radiative forcing by just 1%. This increase in forcing will decrease the cooling radiation emitted to space by about 3 watts per square meter, out of a total of about 300 watts per square meter currently radiated to space.
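The saturation arithmetic in the paragraph above is worth making explicit: a 1% increase in CO2 forcing applied to roughly 300 W/m² of outgoing radiation.

```python
# Saturation arithmetic: doubling CO2 from 400 to 800 ppm raises its
# radiative forcing by just 1%, which trims the ~300 W/m^2 of cooling
# radiation currently escaping to space by about 3 W/m^2.
outgoing = 300.0          # W/m^2 radiated to space (approximate)
forcing_increase = 0.01   # 1% increase from doubling CO2

reduction = outgoing * forcing_increase
print(reduction)  # → 3.0
```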

The science behind greenhouse gas warming is illustrated in the figure below, depicting the wavelength spectrum of the intensity of thermal radiation transmitted through the atmosphere, where wavelength is measured in micrometers. Radiation is absorbed and radiated by the earth in two different wavelength regions: absorption of solar radiation takes place at short (ultraviolet and visible) wavelengths, shown in red in the top panel, while heat is radiated away at long (infrared) wavelengths, shown in blue.

Happer 1.jpg

Greenhouse gases in the atmosphere allow most of the downward shortwave radiation to pass through, but prevent a substantial portion of the upward longwave radiation from escaping – resulting in net warming, as suggested by the relative areas of red and blue in the figure above. The absorption by various greenhouse gases of upward (emitted) radiation at different wavelengths can be seen in the lower panels of the figure, water vapor and CO2 being the most dominant gases.

The research of Happer and van Wijngaarden takes into account both absorption and emission, as well as atmospheric temperature variation with altitude. The next figure shows the authors’ calculated spectrum of cooling outgoing radiation at the top of the atmosphere, plotted as a function of wavenumber, or spatial frequency, which is the inverse of wavelength. (The temporal frequency is the spatial frequency multiplied by the speed of light.)
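The conversions between wavelength, wavenumber and temporal frequency can be sketched as follows. The 15-micrometer example is the familiar CO2 absorption band; choosing it here is my illustration, not a number quoted in the post.

```python
# Wavelength <-> wavenumber <-> frequency conversions, as described above.
C = 2.998e10  # speed of light in cm/s

def wavenumber_cm(wavelength_um):
    """Wavenumber in cm^-1 is the inverse of the wavelength in cm."""
    return 1.0 / (wavelength_um * 1e-4)

def frequency_hz(wavenumber):
    """Temporal frequency = spatial frequency x speed of light."""
    return wavenumber * C

wn = wavenumber_cm(15.0)          # the CO2 absorption band near 15 um
print(round(wn, 1))               # → 666.7  (cm^-1)
print(f"{frequency_hz(wn):.2e}")  # → 2.00e+13  (Hz)
```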

Happer 2.jpg

The blue curve is the spectrum for an atmosphere without any greenhouse gases at all, while the green curve is the spectrum for all greenhouse gases except CO2. Including CO2 results in the black or red curve, for concentrations of 400 ppm or 800 ppm, respectively; the gap in the spectrum represents the absorption of radiation that would otherwise cool the earth. The small decrease in area underneath the curve, from black to red, corresponds to the forcing increase of 3 watts per square meter resulting from doubling the CO2 level.

What matters for global warming is how much the additional forcing bumps up the temperature. This depends in part on the assumption made about climate feedback, since it’s the positive feedback from much more abundant water vapor in the atmosphere that is thought to amplify the modest temperature rise from CO2 acting alone. The strength of the water vapor feedback is closely tied to relative humidity.

Assuming positive water vapor feedback and constant relative humidity with increasing altitude, the preprint authors find that the extra forcing from doubled CO2 causes a temperature increase of 2.2 to 2.3 degrees Celsius (4.0 to 4.1 degrees Fahrenheit). If the water vapor feedback is set to zero, then the temperature increase is only 1.4 degrees Celsius (2.5 degrees Fahrenheit). These results can be compared with the prediction of 2.6 to 4.1 degrees Celsius (4.7 to 7.4 degrees Fahrenheit) in a recent study based on computer climate models and other evidence.
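The relationship behind these temperature figures is simply warming = sensitivity parameter × forcing. In the sketch below, the ~3 W/m² forcing comes from the text; the sensitivity values are back-calculated by me to reproduce the quoted temperatures, and are illustrative only.

```python
# Warming from a given radiative forcing and climate sensitivity
# parameter (degrees C per W/m^2). Sensitivity values are assumptions
# chosen to reproduce the temperature rises quoted in the text.
forcing = 3.0  # W/m^2 from doubled CO2 (from the text)

def warming(sensitivity, forcing_wm2):
    """Temperature rise in degrees C: sensitivity x forcing."""
    return sensitivity * forcing_wm2

print(round(warming(0.75, forcing), 2))  # → 2.25  (with water vapor feedback)
print(round(warming(0.47, forcing), 2))  # → 1.41  (feedback set to zero)
```

The spread between the two results shows how heavily the bottom-line warming depends on the assumed feedback, which is the point of the paragraphs above.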

Although an assumption of zero water vapor feedback may seem unrealistic, Happer points out that something important is missing from their calculations, and that is feedback from clouds – an omission the authors are currently working on. Net cloud feedback, from both low and high clouds, is poorly understood currently but could be negative rather than positive.

If indeed overall cloud feedback is negative rather than positive, it’s possible that negative feedbacks in the climate system from the lapse rate (the rate of temperature decrease with altitude in the lower atmosphere) and clouds dominate the positive feedbacks from water vapor, and from snow and ice. In either case, this research demonstrates that future global warming won’t be nearly as troublesome as the climate change narrative insists.

Next: Natural Sources of Global Cooling: (1) Solar Variability and La Niña

Good Gene – Bad Gene: When GMOs Succeed and When They Don’t

As we saw in a previous post, genetic engineering has recently been successful in greatly accelerating the development of vaccines for COVID-19. Genetically engineered crops, which date back about 30 years, have also scored a number of successes, but there have also been some notable failures.

To some environmentalists, tinkering with a food plant’s genes conjures up pictures of “Frankenfoods,” evocative of the monster created by the fictional mad scientist Frankenstein. But such irrational fears over food safety and the planet’s ecology, aggravated in the past by the cavalier attitude of agribusiness companies, are a rejection of science.

Golden_Rice.jpg

The clash between environmental activists and the agricultural behemoths is epitomized by the success story of Golden Rice. Golden Rice is genetically modified to contain beta-carotene, a naturally occurring pigment that produces vitamin A in the human body and imbues the grain with a characteristic yellow color. The GMO (genetically modified organism) has been developed as the answer to vitamin A deficiency in many parts of Asia and Africa, where millions of poor children die or go blind each year from weakened immune systems caused by a lack of the vitamin.

But as soon as Swiss plant geneticist Ingo Potrykus and German biologist Peter Beyer triumphed in splicing the two necessary genes – one from daffodils, one from a bacterium – into rice, widespread hostility erupted, despite a wave of publicity about their accomplishment and a feature article in Time magazine in 2000.  

Golden RIce Patrick Moore starving children.jpg

Golden Rice was dismissed as “fool’s gold” by Greenpeace, who claimed that a person would have to eat about 9 kilograms (20 pounds) of cooked Golden Rice per day to meet the daily requirement for vitamin A. However, this far-fetched claim was repudiated by the subsequent development, reported in 2005, of an improved Golden Rice with 20 times as much vitamin A-generating beta-carotene. Other detractors saw the genetic engineering feat simply as a Trojan horse, as a vehicle for launching other more profitable GMO crops in the developing world.

Many further barriers lay in the two scientists’ path. These included bomb threats against Potrykus, necessitating construction of a bombproof greenhouse; obtaining free licenses to 70 patents belonging to 32 different companies and universities that the discovery had potentially infringed on; and crossbreeding required to insert the magic daffodil and bacterium genes into suitable varieties of rice, research conducted at the nonprofit IRRI (International Rice Research Institute) in the Philippines.

Nonetheless, in 2018, four countries – Australia, New Zealand, Canada and the U.S. – finally approved Golden Rice. The U.S. FDA (Food and Drug Administration) has granted the biofortified food its prestigious “GRAS (generally recognized as safe)” status. IRRI applied for approvals in rich countries initially, in order to avoid trade disruptions arising from small quantities of GMO rice finding their way into non-GMO rice sold to other countries.

An example of a GMO food that never made it to market is a soybean containing a gene from Brazil nuts. Seed supplier Pioneer Hi-Bred International wanted to bolster the nutritional content of its soy-based animal feeds, which must normally be supplemented with an amino acid called methionine to promote adequate growth of the feeding animals. Because the Brazil-nut protein 2S albumin is very rich in methionine, Pioneer planned to splice the 2S albumin gene into the soybean genome.  

But mindful that Brazil nuts can cause strong allergic reactions in humans – though the specific allergen was previously unknown – and that soybeans intended for animals can’t easily be separated from those destined for human consumption, the company commissioned testing of its transgenic soybeans for allergenicity.  

Sure enough, 2S albumin was found to be not only a human allergen but also present in the genetically altered soybeans, revealing that genetic engineering can indeed transfer food allergens from one plant to another. The positive test results, reported in 1996, would have required Pioneer to label its new product for sale in the U.S., under the FDA protocol for allergy testing in transgenic plants. Instead, the company dropped its marketing plans for the soybeans.

GMO potatoes.jpg

Another example of a potential GMO food that didn’t come to fruition is potatoes genetically engineered to produce their own pesticide. In this case, the idea was to make potatoes pest resistant via a gene borrowed from that harbinger of spring, the snowdrop flower. Despite its delicate appearance, the flower harbors a type of sugar-bearing protein known as a lectin that is toxic to certain insect pests.

A major furor erupted in the late 1990s over research on laboratory rats fed with lectin-modified transgenic potatoes, and claims by the researcher that the GMO potatoes stunted the rats’ growth and degraded their immune systems. But controversy over a scientific review of the research that found the experiments were invalid put a stop to any further development of potatoes engineered with the snowdrop gene. Today, the only GMO potato approved for human consumption is a nonbruising variety.

Next: How Near-Saturation of CO2 Limits Future Global Warming

Growing Antarctic Sea Ice Defies Climate Models

We saw in the previous post how computer climate models greatly exaggerate short-term warming. Something else they get wrong is the behavior of Antarctic sea ice. According to the models, sea ice at both the North and South Poles should shrink as global temperatures rise. It’s certainly contracting in the Arctic, faster in fact than most models predict, but contrary to expectations, sea ice in the Antarctic is actually expanding.

Scientific observations of sea ice in the Arctic and Antarctic have only been possible since satellite measurements began in 1979. The figure below shows satellite-derived images of Antarctic sea ice extent at its summer minimum in 2020 (left image), and its previous winter maximum in 2019 (right image). Sea ice expands to its maximum extent during the winter and contracts during summer months.

Blog 3-8-21 JPG(1).jpg

But in contrast to the increase in the maximum extent of sea ice around Antarctica shown by observations during the satellite era, the computer models all simulate a decrease. Two research groups have investigated this decrease in detail for the previous generation of CMIP5 models.

One of the groups is the BAS (British Antarctic Survey), which has a long history of scientific studies of Antarctica dating back to World War II and before. Their 2013 assessment of 18 CMIP5 climate models found marked differences in the modeled trend in month-to-month Antarctic sea ice extent from that observed over the previous 30 years, as illustrated in the next figure. The thick blue line at the top indicates the trend in average monthly ice extent measured over the period from 1979 to 2005, and the colored lines are the monthly trends simulated by the various models; the black line is the model mean.

Blog 3-8-21 JPG(2).jpg

It’s seen that almost all models exhibit an incorrect negative trend for every month of the year. The mean monthly trend across all models is −3.2% per decade between 1979 and 2005, with the largest mean monthly decline, −13.6% per decade, occurring in February. But the actual observed change in Antarctic sea ice extent is a gain of +1.1% per decade from 1979 to 2005 according to the BAS, or a somewhat higher 1.8% per decade from 1979 to 2019, as estimated by the U.S. NSIDC (National Snow and Ice Data Center) and depicted below.
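The "% per decade" trends quoted throughout this comparison are least-squares slopes expressed relative to the mean extent. A minimal sketch, using synthetic data rather than the BAS or NSIDC series:

```python
# Express a linear trend as percent per decade, the metric used for the
# sea ice comparisons above. The extent series below is synthetic.
def trend_pct_per_decade(years, extents):
    """Least-squares slope, converted to % of the mean extent per decade."""
    n = len(years)
    ym = sum(years) / n
    em = sum(extents) / n
    slope = (sum((y - ym) * (e - em) for y, e in zip(years, extents))
             / sum((y - ym) ** 2 for y in years))  # extent units per year
    return 10 * slope / em * 100                   # % per decade

# A series constructed to grow ~1.1% per decade recovers that trend:
years = list(range(1979, 2006))
extents = [12.0 * (1 + 0.0011 * (y - 1979)) for y in years]
print(round(trend_pct_per_decade(years, extents), 1))  # → 1.1
```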

Blog 3-8-21 JPG(3).jpg

For actual sea ice extent, the majority of models simulate too meager an extent at the February minimum, while several models estimate less than two thirds of the real-world extent at the September maximum. Similar results were obtained in a study by a Chinese research group, as well as other studies.

The discrepancy in sea ice extent between the empirical satellite observations and the climate models is particularly pronounced on a regional basis. At the February minimum, the satellite data indicate substantial residual ice in the Weddell Sea to the east of the Antarctic Peninsula (see the first figure above), whereas most models show very little. And the few models that simulate a realistic amount of February sea ice fail to reproduce the loss of ice in the Ross Sea adjoining West Antarctica.

All these differences indicate that computer models are not properly simulating the physical processes that govern Antarctic sea ice. Various possible processes not incorporated in the models have been suggested to explain the model deficiencies. These include freshening of seawater by melting ice shelves attached to the Antarctic ice sheet; meltwater from rain; and atmospheric processes involving clouds or wind.

BAS climate modeler Paul Holland thinks the seasons may hold the key to the conundrum, having noticed that trends in sea ice growth or shrinkage vary in strength in the different seasons. Holland surmised that it was more important to look at how fast the ice was growing or shrinking from season to season than focusing on changes in ice extent. His calculations of the rate of growth led him to conclude that seasonal wind trends play a role.

The researcher found that winds are spreading sea ice out in some regions of Antarctica, while compressing or keeping it intact in others, and that these effects begin in the spring. “I always thought, and as far as I can tell everyone else thought, that the biggest changes must be in autumn,” Holland said. “But the big result for me now is we need to look at spring. The trend is bigger in the autumn, but it seems to be created in spring.”

That’s where Holland’s research stands for now. More detailed work is required to check out his novel idea.

Next: Good Gene – Bad Gene: When GMOs Succeed and When They Don’t

Latest Computer Climate Models Run Almost as Hot as Before

The narrative that global warming is largely human-caused and that we need to take drastic action to control it hinges entirely on computer climate models. It’s the models that forecast an unbearably hot future unless we rein in our emissions of CO2.

But the models have a dismal track record. Apart from failing to predict a recent slowdown in global warming in the early 2000s, climate models are known even by modelers to consistently run hot. The previous generation of models, known in the jargon as CMIP5 (Coupled Model Intercomparison Project Phase 5), overestimated short-term warming by more than 0.5 degrees Celsius (0.9 degrees Fahrenheit) above observed temperatures. That’s 50% of all the global warming since preindustrial times.

The new CMIP6 models aren’t much better. The following two figures reveal just how much both CMIP5 and CMIP6 models exaggerate predicted temperatures, and how little the model upgrade has done to shrink the difference between theory and observation. The figures were compiled by climate scientist John Christy, who is Director of the Earth System Science Center at the University of Alabama in Huntsville and an expert reviewer of the upcoming sixth IPCC (Intergovernmental Panel on Climate Change) report.

Models CMIP5.jpg
Models CMIP6.jpg

Both figures plot the warming relative to 1979 in degrees Celsius, measured in a band in the tropical upper atmosphere between altitudes of approximately 9 km (30,000 feet) and 12 km (40,000 feet). That’s a convenient band for comparison of model predictions with measurements made by weather balloons and satellites. The thin colored lines indicate the predicted variation of temperature with time for the different models, while the thick red and green lines show the mean trend for models and observations, respectively.

The trend for CMIP6 models is depicted more clearly in Christy’s next figure, which compares the warming rates for 39 of the models. The average CMIP6 trend in warming rate is 0.40 degrees Celsius (0.72 degrees Fahrenheit) per decade, compared with the actual observed rate of 0.17 degrees Celsius (0.31 degrees Fahrenheit) per decade – meaning that the predicted warming rate is 2.35 times too high.

Models CMIP6 warming rate.jpg

These CMIP6 numbers are only a marginal improvement over those predicted by the older CMIP5 models, for which the warming trend was 0.44 degrees Celsius (0.79 degrees Fahrenheit) per decade, or 2.75 times higher than the observed rate of 0.16 degrees Celsius (0.29 degrees Fahrenheit) per decade (for a slightly different set of measurements).
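The "runs hot" factors quoted above are simple quotients of warming rates in degrees Celsius per decade:

```python
# Ratio of modeled to observed warming rate, using the per-decade
# figures quoted in the text.
def overshoot(model_rate, observed_rate):
    """How many times faster the models warm than the observations."""
    return model_rate / observed_rate

print(round(overshoot(0.40, 0.17), 2))  # → 2.35  (CMIP6)
print(round(overshoot(0.44, 0.16), 2))  # → 2.75  (CMIP5)
```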

It’s seen that the warming rates for any particular model fluctuate wildly in both cases, much more so than the observations themselves. Christy says the large variability is a sign that the models underestimate negative feedbacks in the climate system, especially from clouds, which I’ve discussed in another post. Negative feedback is stabilizing and acts to damp down processes that cause fluctuations. There is evidence, albeit controversial, that feedback from high clouds such as cirrus clouds – which normally warm the planet – may not be as strongly positive as the new models predict, and could even be negative overall.

You may be wondering why all these comparisons between models and observations are made high up in the atmosphere, rather than at the earth’s surface where we actually feel global warming. The reason is that the atmosphere at 9 to 12 km (roughly 6 to 7.5 miles) above the tropics provides a much more sensitive test of CO2 greenhouse warming than the air near the ground. Computer climate models predict that the warming rate at those altitudes should be about twice as large as at ground level, giving rise to the so-called CO2 “hot spot.”

The hot spot is illustrated in the figure below, showing the air temperature as a function of both altitude (measured as atmospheric pressure) and global latitude, as predicted by a Canadian model. Similar predictions come from the other CMIP6 models. The hot spot is the red patch at the center of the figure bounded by the 0.6 degrees Celsius (1.1 degrees Fahrenheit) contour, extending roughly 20° either side of the equator and at altitudes of 30,000-40,000 feet. The corresponding warming at ground level is seen to be less than 0.3 degrees Celsius (0.5 degrees Fahrenheit).

Hot spot.jpg

But the hot spot doesn’t show up in measurements made by weather balloons or satellites. This mismatch between models and experiment is important because the 30,000-40,000 feet band in the atmosphere is the very altitude from which infrared heat is radiated away from the earth. The models run hot, according to Christy, because they trap too much heat that in reality is lost to outer space – a consequence of insufficient negative feedback in the models.

Next: Growing Antarctic Sea Ice Defies Climate Models

Science on the Attack: The Vaccine Revolution Spurred by Messenger RNA

The lightning speed with which biotech companies Pfizer-BioNTech and Moderna developed a safe and effective COVID-19 vaccine is testament not only to scientific perseverance, but to the previously unrealized potential of messenger RNA (mRNA) to revolutionize medicine. Today’s blog post in my series showcasing science on the attack rather than under attack highlights the genetic breakthrough behind this transformational discovery.

COVID vaccination.jpg

Genetic vaccines are a relative newcomer to the immunization scene. Unlike traditional vaccines that use killed or weakened versions of the virus to stimulate the body’s immune system into action, genetic vaccines deliver a single virus gene or part of its genetic code into human cells. The genetic instructions induce the cells to make viral proteins that constitute only a small piece of the virus, but have the same effect on the immune system as the whole virus.

But, until 2020, the only approved genetic vaccines – based on DNA, not RNA – were for animal diseases. It was the urgent need to come up with a vaccine to protect against COVID-19 in humans that triggered the worldwide quest to bring an mRNA vaccine to market.

The job of mRNA in the body is to carry the DNA code for one or more genes, transcribed in the cell nucleus, out to the protein factories in the cell’s outer reaches. There, the message is decoded and the requisite protein manufactured. DNA contains the blueprint for making nearly all the proteins in the body, while mRNA acts as a delivery service.

The concept of harnessing mRNA to fight disease goes back to the early 1990s, but hopes raised by promising early experiments on mice were dashed when multiple roadblocks arose to working with synthetic mRNA injected into the human body. The primary obstacle was the immune system’s overreaction to mRNA engineered to manufacture virus proteins. The immune system often destroyed the foreign mRNA altogether, as well as causing excessive inflammation in some people. Other problems were that the mRNA degraded quickly in the body and didn’t produce enough of the crucial virus protein for a vaccine to be effective.

So scientific attention switched instead to development of DNA vaccines, which cause fewer problems though are clunky compared to their mRNA cousins. Then, in a series of papers starting in 2005, two scientists at the University of Pennsylvania, Katalin Karikó and Drew Weissman, reported groundbreaking research that brought mRNA back into the limelight.

Karikó and Weissman found that tweaking the structure of the mRNA molecule could overcome most of the earlier obstacles. By exchanging one of mRNA’s four building blocks called nucleosides, they were able to create a hybrid mRNA that drastically suppressed the immune system’s reaction to the intruder and boosted production of the viral protein. In their own words, their monumental achievement was “the biological equivalent of swapping out a tire.”      

Their discovery, however, was initially received with a big yawn by many of their peers, who were still preoccupied with DNA. Karikó found herself snubbed by the research funding community and demoted from her university position. Eventually, in 2013 she was hired by the German company BioNTech to help oversee its mRNA research.

In the meantime, work proceeded on the final impediment to exploiting synthetic mRNA for vaccines: preventing its degradation in the human body. To reach the so-called cytoplasm of a cell where proteins are manufactured, the artificial mRNA needs to penetrate the lipid membrane barrier protecting the cell. Karikó, Weissman and others solved this problem by encasing the mRNA in small bubbles of fat known as lipid nanoparticles.

Armed with these leaps forward, researchers have now developed mRNA vaccines for at least four infectious diseases: rabies, influenza, cytomegalovirus and Zika. But testing in humans has been disappointing so far. The immune response has been weaker than expected from animal studies – just as with DNA vaccines – and serious side effects have occurred.

Nevertheless, COVID-19 mRNA vaccines have been a stunning success story. The major advantage of mRNA vaccines over their traditional counterparts is the relative ease and speed with which they can be produced. But until now, no mRNA vaccine or drug had ever won approval.

Maybe COVID-19 is the exception, and synthetic coronavirus mRNA generates a stronger immune response with fewer adverse effects than the other viral mRNA vaccines investigated to date. Mass production of a beneficial and safely tolerated COVID-19 vaccine in less than 12 months is certainly an amazing accomplishment, considering that it has taken several years to develop a new vaccine in the past. But whether mRNA vaccines can fulfill their potential to ward off other diseases, or even cancer, remains to be seen.

Next: Latest Computer Climate Models Run Almost as Hot as Before

Both Greenland and Antarctic Ice Sheets Melting from Below

Amidst all the hype over melting from above of the Antarctic and Greenland ice sheets due to global warming, little attention has been paid to melting from below due to the earth’s volcanic activity. But the two major ice sheets are in fact melting on both top and bottom, meaning that the contribution of global warming isn’t as large as climate activists proclaim.

In central Greenland, Japanese researchers recently discovered an upwelling flow of hot rock, known as a mantle plume, rising up beneath the island. The previously unknown plume emanates from the boundary between the earth’s core and mantle (labeled CMB in the following figure) at a depth of 2,889 km (1,795 miles), and melts Greenland’s ice from below.

Greenland plume.jpg

As the figure shows, the Greenland plume has two branches. One of the branches feeds into the similar Iceland plume that arises underneath Iceland and supplies heat to an active volcano there. The Greenland plume provides heat to an active volcano on the island of Jan Mayen in the Arctic Ocean, as well as a geothermal area in the Svalbard archipelago in the same ocean.

To study the plume, the research team used seismic tomography – a technique, similar to a CT scan of the human body, that constructs a three-dimensional image of subterranean structures from differences in the speed of earthquake sound waves traveling through the earth. Sound waves pass more slowly through rocks that are hotter, less dense or hydrated, but more quickly through rocks that are colder, denser or drier. The researchers took advantage of seismographs forming part of the Greenland Ice Sheet Monitoring Network, set up in 2009, to analyze data from 16,257 earthquakes recorded around the world.

The existence of a mantle plume underneath Antarctica, originating at a depth of approximately 2,300 km (1,400 miles), was confirmed by a Caltech (California Institute of Technology) study in 2017. Located under West Antarctica (labeled WA in the next figure), the plume generates as much as 150 milliwatts of heat per square meter – heat that feeds several active volcanoes and also melts the overlying ice sheet from below. For comparison, the earth’s geothermal heat is 40-60 milliwatts per square meter on average, but reaches about 200 milliwatts per square meter beneath geothermally active Yellowstone National Park in the U.S.

Heat Antarctica.jpg
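To put these heat-flux numbers in perspective, here is a rough back-of-envelope sketch of how much basal ice a given geothermal flux could melt per year. It assumes textbook values for the latent heat of fusion and density of ice, and that all the heat goes into melting – an upper bound, since in reality much of the heat is conducted away through the ice:

```python
# Rough upper-bound estimate: ice thickness melted per year by a steady
# geothermal heat flux, if every joule went into melting the base.
SECONDS_PER_YEAR = 3.156e7
LATENT_HEAT_FUSION = 3.34e5  # J/kg, for ice
ICE_DENSITY = 917.0          # kg/m^3

def basal_melt_mm_per_year(flux_mw_per_m2):
    """Ice melted per year (mm) by a heat flux given in mW/m^2."""
    energy = flux_mw_per_m2 * 1e-3 * SECONDS_PER_YEAR  # J per m^2 per year
    mass = energy / LATENT_HEAT_FUSION                 # kg of ice melted
    return mass / ICE_DENSITY * 1000                   # mm of ice thickness

# Fluxes quoted in the text (mW per square meter)
for label, flux in [("global average", 50),
                    ("West Antarctic plume", 150),
                    ("Yellowstone", 200)]:
    print(f"{label}: {basal_melt_mm_per_year(flux):.0f} mm/yr")
```

The 150 mW per square meter plume flux works out to roughly 15 mm of basal ice per year at most – useful context for the melting claims that follow.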

A team of U.S. and UK researchers found in 2018 that one of the active volcanoes drawing heat from the mantle plume in West Antarctica is making a major contribution to the melting of the Pine Island Glacier. The Pine Island Glacier, situated adjacent to the Thwaites Glacier in the figure above, is the fastest melting glacier in Antarctica, responsible for about 25% of the continent’s ice loss.   

The researchers’ discovery was serendipitous. Originally part of an expedition to study ice melting patterns in seawater close to West Antarctica, the team was surprised to find high concentrations of the gaseous helium isotope 3He near the Pine Island Glacier. Because 3He is found almost exclusively in the earth’s mantle, where it’s given off by hot magma, the gas is a telltale sign of volcanism.

The study authors calculated that the volcano buried underneath the Pine Island Glacier released at least 2,500 megawatts of heat to the glacier in 2014, which is about 60% of the heat released annually by Iceland’s most active volcano and roughly 25 times greater than the annual heating caused by any one of over 100 dormant Antarctic volcanoes.

A more recent study by the British Antarctic Survey found evidence for a hidden source of heat beneath the ice sheet in East Antarctica (labeled EA in the figure above). From ice-penetrating radar data, the scientists concluded that the heat source is a combination of unusually radioactive rocks and hot water coming from deep underground. The heat melts the base of the ice sheet, producing meltwater which drains away under the ice to fill subglacial lakes. The estimated geothermal heat flux is 120 milliwatts per square meter, comparable to the 150 milliwatts per square meter from the mantle plume underneath West Antarctica that was discussed above.

Heat Antarctica2.jpg

All these hitherto unknown subterranean heat sources in Antarctica and Greenland, just like global warming, melt ice and contribute to sea level rise. However, as I’ve discussed in previous posts (see here and here), the giant Antarctic ice sheet may not be melting at all overall, and the Greenland ice sheet is only losing ice slowly.

Next: Science on the Attack: The Vaccine Revolution Spurred by Messenger RNA

What Triggered the Ice Ages? The Uncertain Role of CO2

About a million years ago, the earth’s ice ages became colder and longer – with a geologically sudden jump from thinner, smaller glaciers that came and went every 41,000 years to thicker, larger ice sheets that persisted for 100,000 years. Although several hypotheses have been put forward to explain this transition, including a long-term decline in the atmospheric CO2 level, the phenomenon remains a scientific conundrum.

Two research teams spearheaded by geologists from Princeton University have recently described their attempts to resolve the mystery. A 2019 study measured the CO2 content in two-million-year-old ice cores extracted from Antarctica, which are by far the oldest cores ever recovered and span the puzzling transition to a 100,000-year ice age cycle that occurred a million years ago. A just-reported 2020 study utilized seabed sediment cores from the Antarctic Ocean to investigate the storage of CO2 in the ocean depths over the last 150,000 years.

Both studies recognize that the prolonged deep freezes of the ice ages are set off partly by periodic changes in the earth’s orbit around the sun. That’s the basis of a hypothesis proposed by Serbian engineer and meteorologist Milutin Milankovitch. As shown in the figure below, the earth orbits the sun in an elliptical path and spins on an axis that is tilted. The elliptical orbit stretches and contracts over a 100,000-year cycle (top), while the angle of tilt or obliquity oscillates with a 41,000-year period (bottom), and the planet also wobbles on its axis in a 26,000-year cycle (center).

94_card_with_border.jpg

Milankovitch linked all three cycles to glaciation, but his hypothesis has been dogged by two persistent problems. First, it predicts a dominant 41,000-year cycle governed by obliquity, whereas the current pattern is ruled by the 100,000-year eccentricity cycle as mentioned above. Second, the orbital fluctuations thought to trigger the extended cooling cycles are too subtle to cause on their own the needed large changes in solar radiation reaching the planet – known as insolation. That’s where CO2 comes in, as one of various feedbacks that amplify the tiny changes that do occur.
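The interplay of the three cycles can be visualized with a toy superposition. The periods below come from the text; the amplitudes are arbitrary, chosen purely for illustration – real insolation forcing requires full orbital solutions, not simple sinusoids:

```python
import math

# Toy superposition of the three Milankovitch cycles described above.
# Periods are from the text; amplitudes are arbitrary, for illustration.
CYCLES = {  # name: (period in years, toy amplitude)
    "eccentricity": (100_000, 1.0),
    "obliquity": (41_000, 0.5),
    "precession": (26_000, 0.3),
}

def toy_forcing(year):
    """Combined toy orbital forcing at a given year (arbitrary units)."""
    return sum(amp * math.cos(2 * math.pi * year / period)
               for period, amp in CYCLES.values())

# Sample every 10,000 years over two eccentricity cycles
signal = [toy_forcing(t) for t in range(0, 200_001, 10_000)]
```

Because the three periods are incommensurate, the combined signal never exactly repeats – which is part of why disentangling their individual climate signatures is hard.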

Before the 2019 Princeton study, it had been suspected that the transition from 41,000-year to 100,000-year cycles was due to a long-term decline in the atmospheric CO2 level over both glacial and interglacial epochs. But that belief rested on ice-core data extending back only about 800,000 years. Armed with their new data from 2 million years in the past, the first Princeton team discovered, surprisingly, that the average CO2 level was unchanged over that time span, even though the minimum level dropped after the transition to longer ice age cycles.

This means that the 100,000-year transition can’t be attributed to CO2, although CO2 feedback has been invoked to explain the relatively sudden temperature rise at the end of ice ages. Rather, said the study authors, the switch in ice age length was probably caused by enhanced growth of ice sheets or changes in global ocean circulation.

It’s another feedback process involving CO2 that was investigated by the second Princeton team, who made measurements on tiny fossils embedded in Antarctic Ocean sediments. While it has long been known that the atmospheric CO2 level and global temperatures varied in tandem over glacial cycles, and that CO2 lagged temperature, the causes of the CO2 fluctuations are not well understood.

We know that the oceans can hold more CO2 than the atmosphere. Because CO2 is less soluble in warm water than cooler water, CO2 is absorbed from the atmosphere by cold ocean water at the poles and released by warmer water at the equator. The researchers found that, during ice ages, the Antarctic Ocean stored even more CO2 than expected. Absorption in the Antarctic is enabled by the sinking of floating algae that carry CO2 deep into the ocean before becoming fossilized, a process referred to as the "biological carbon pump."
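The temperature dependence of CO2 solubility can be sketched with Henry’s law plus a van ’t Hoff correction. The constants below are standard textbook values for CO2 in fresh water; real seawater chemistry, with carbonate buffering and salinity effects, is considerably more involved:

```python
import math

# Sketch of CO2 solubility vs. water temperature: Henry's law with a
# van 't Hoff temperature correction. Constants are textbook values for
# CO2 in fresh water (not full seawater carbonate chemistry).
KH_298 = 0.034    # mol / (L * atm) at 298.15 K
VANT_HOFF = 2400  # K, temperature-dependence coefficient for CO2

def co2_solubility(temp_c, partial_pressure_atm=4.1e-4):
    """Dissolved CO2 (mol/L) at a given water temperature (deg C)."""
    temp_k = temp_c + 273.15
    kh = KH_298 * math.exp(VANT_HOFF * (1 / temp_k - 1 / 298.15))
    return kh * partial_pressure_atm

# Polar water near 0 C holds roughly twice as much CO2 as tropical water:
ratio = co2_solubility(0) / co2_solubility(28)
```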

But some of the sequestered CO2 normally escapes, due to the strong eastward winds encircling Antarctica that drag CO2-rich deep water up to the surface and vent the CO2 back to the atmosphere. The new research provides evidence that this wind-driven Antarctic Ocean upwelling slowed down during the ice ages, allowing less CO2 to be vented and more to remain locked up in the ocean waters.

Apart from any effect this retention of CO2 may have had on ice-age temperatures, the researchers say their data suggests that the past lag of CO2 behind temperature may have been caused directly by the effect on Antarctic upwelling of changing obliquity in the earth’s orbit – Milankovitch’s 41,000-year cycle. The study authors believe this explains why the eccentricity and precession cycles now prevail over the obliquity cycle.

Next: Both Greenland and Antarctic Ice Sheets Melting from Below

New Evidence That the Ancient Climate Was Warmer than Today’s

Two recently published studies confirm that the climate thousands of years ago was as warm or warmer than today’s – a fact disputed by some believers in the narrative of largely human-caused global warming. That was an era when CO2 levels were much lower than now, long before industrialization and SUVs.

One study demonstrates that the period known as the Roman Warming was the warmest in the last 2,000 years. The other study provides evidence that it was just as warm up to 6,000 years ago. Both studies reinforce the occurrence of an even warmer period immediately following the end of the last ice age 11,000 years ago, known as the Holocene Thermal Maximum.

The first study, undertaken by a group of Italian and Spanish researchers, reconstructed sea surface temperatures in the Mediterranean Sea over the past 5,300 years. Because temperature measurement using scientific thermometers goes back only to the 18th century, temperatures for earlier periods must be reconstructed from proxy data using indirect sources such as tree rings, ice cores, leaf fossils or boreholes.

This particular study utilized fossilized amoeba skeletons found in seabed sediments. The ratio of magnesium to calcium in the skeletons is a measure of the seawater temperature at the time the sediment was deposited; a timeline can be established by radiocarbon dating. The researchers focused on the central part of the Mediterranean Sea, specifically the Sicily Channel as indicated by the red arrow in the figure below. The samples came from a depth of 475 meters (1,550 feet).

Mediterranean Roman era.jpg
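For readers curious about the mechanics, the sketch below illustrates the two techniques just described: an exponential Mg/Ca temperature calibration of the kind used for foraminifera shells, and conventional radiocarbon dating. The calibration constants are representative published values, not necessarily those used in this particular study:

```python
import math

# Illustrative versions of the two tools described above. The Mg/Ca
# calibration constants (0.38, 0.09) are typical published values for
# foraminifera, not necessarily those of this study. Radiocarbon ages
# use the conventional Libby mean life of 8033 years.

def mg_ca_temperature(mg_ca_ratio, a=0.38, b=0.09):
    """Seawater temperature (deg C) from a shell's Mg/Ca ratio (mmol/mol),
    inverting the calibration Mg/Ca = a * exp(b * T)."""
    return math.log(mg_ca_ratio / a) / b

def radiocarbon_age(fraction_modern):
    """Conventional 14C age (years) from the measured 14C fraction
    relative to the modern standard."""
    return -8033 * math.log(fraction_modern)

# A shell with Mg/Ca of 2.0 mmol/mol implies water of roughly 18 deg C
temp = mg_ca_temperature(2.0)
# A sample retaining 70% of modern 14C is roughly 2,900 years old
age = radiocarbon_age(0.70)
```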

Analysis of the data found that ancient sea surface temperatures in the Sicily Channel ranged from 16.4 degrees Celsius (61.5 degrees Fahrenheit) to 22.7 degrees Celsius (72.9 degrees Fahrenheit) over the period from 3300 BCE to July 2014. This is illustrated in the next figure, in which the dark blue dashed line represents the Sicily Channel raw temperature data and the thick dark blue solid line shows smoothed values. The other lines are Mediterranean temperatures reconstructed by other research groups.

Mediterranean Mg-Ca.jpg

With the exception of the Aegean data, the results all show distinct warming during the Roman period from 0 CE to 500 CE, when temperatures were about 2 degrees Celsius (3.6 degrees Fahrenheit) higher than the average for Sicily and western Mediterranean regions in later centuries, and much higher than present-day Sicilian temperatures. The high temperatures in the Aegean Sea result from its semi-enclosed nature. During the 500 years of the Roman Warming, the Roman Empire flourished and reached its zenith. Subsequent cooling, seen in the figure above, led to the Empire’s collapse prior to the Medieval Warm Period, say the researchers.

The second study was conducted by archaeologists in Norway, who discovered a treasure trove of arrows, arrowheads, clothing and other artifacts, unearthed by receding ice in a mountainous region of the country. Because the artifacts would have been deposited when no ice covered the ground, and are only being exposed now due to global warming, temperatures must have been at least as high as today during the many periods when the artifacts were cast aside.

Norway arrowhead.jpg

The oldest arrows and artifacts date from around 4100 BCE, the youngest from approximately 1300 CE, at the end of the Medieval Warm Period. That the artifacts come from several different periods separated by hundreds or thousands of years implies that the ice and snow in the region must have expanded and receded several times over the past 6,000 years.

During the Holocene Thermal Maximum, which occurred from approximately 10,000 to 6,000 years ago and preceded the period of the stunning Norwegian discoveries, global temperatures were higher yet. In upper latitudes, where the most reliable proxies are found, it was an estimated 2-3 degrees Celsius (3.6-5.4 degrees Fahrenheit) warmer than at present. The warmth contributed to the rise of agricultural societies around the globe and the development of human civilization.

Paradoxically though, the Greenland ice sheet – the present melting of which has sparked heated debate – is thought to have been even larger at the peak of the Holocene Thermal Maximum than it is today, when Greenland temperatures are lower. This can be seen in the following figure, showing that the ice sheet extent was about the same as now about 7,500 years (7.5 ka) ago, and even greater before that. The ice did, however, retreat to a minimum in the period since about 7.5 ka ago, one that includes both the Roman Warming and the era of the Norwegian discoveries discussed above.

Greenland ice Holocene.jpg

Puzzles like this mean that we still have much to learn about the earth’s climate, both past and present, especially in the area of natural variability.

Next: What Triggered the Ice Ages?  The Uncertain Role of CO2

No Evidence That 2020 Hurricane Season Was Record-Breaking

In a world that routinely hypes extreme weather events, it’s no surprise that the mainstream media and alarmist climate scientists have declared this year’s Atlantic hurricane season “unprecedented” and “record-shattering.” But the reality is that the season was merely so-so and no records fell.

While it’s true that the very active 2020 season saw a record-breaking 30 named storms, only 13 of these became hurricanes. That was fewer than the historical high of 15 recorded in 2005 and only one more than the 12 hurricanes recorded in 1969 and 2010, according to NOAA (the U.S. National Oceanic and Atmospheric Administration). The figure below shows the frequency of all Atlantic hurricanes from 1851 to 2020.

Atlantic hurricanes.jpg

Of 2020’s 13 hurricanes, only six were major hurricanes, fewer than the record eight in 1950 and seven in 1961 and 2005, as shown in the next figure. A major hurricane is defined as one in Category 3, 4 or 5 on the so-called Saffir-Simpson scale, corresponding to a top wind speed of 178 km per hour (111 mph) or greater. Although it appears that major Atlantic hurricanes were less frequent before about 1940, the lower numbers reflect the relative lack of observations in early years of the record. Aircraft reconnaissance flights to gather data on hurricanes only began in 1944, while satellite coverage dates only from the 1960s.

Atlantic major hurricanes.jpg
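The Saffir-Simpson classification mentioned above is easy to encode – a minimal sketch using the standard National Hurricane Center wind thresholds in mph:

```python
# Saffir-Simpson classification by maximum sustained wind speed, using
# the standard National Hurricane Center thresholds in mph. "Major"
# means Category 3 or higher, as in the text.
def saffir_simpson_category(wind_mph):
    """Hurricane category (0 = below hurricane strength)."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for cutoff, category in thresholds:
        if wind_mph >= cutoff:
            return category
    return 0

def is_major(wind_mph):
    return saffir_simpson_category(wind_mph) >= 3

assert saffir_simpson_category(111) == 3  # 178 km/h: a major hurricane
assert not is_major(110)                  # Category 2, not major
```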

Not only is there no significant trend in Atlantic hurricanes in a warming world, but the frequency of hurricanes globally is actually diminishing, as seen in the following figure. The apparent slight increase in major hurricanes since 1981 has been ascribed to improvements in observational capabilities, rather than warming oceans that provide the fuel for hurricanes and typhoons.

Hurricane frequency global (Ryan Maue).jpg

As further evidence that recent hurricane activity is nothing unusual, the figure below depicts what is known as the ACE (Accumulated Cyclone Energy) index for the Atlantic basin from 1855 to 2020. The ACE index is an integrated metric combining the number of storms each year, how long they survive and how intense they become. Mathematically, the index is calculated by squaring the maximum sustained wind speed in a named storm every six hours that it remains above tropical storm intensity and summing that up for all storms in the season.

Atlantic ACE.jpg
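The ACE calculation just described can be sketched in a few lines. The storm track below is hypothetical, invented only to illustrate the bookkeeping:

```python
# ACE computed as described above: square the maximum sustained wind
# (in knots) at each six-hourly report while the storm is at tropical
# storm strength or above, sum over the season, and scale by 1e-4.
TROPICAL_STORM_THRESHOLD = 34  # knots

def storm_ace(six_hourly_winds_kt):
    """ACE contribution of one storm from its 6-hourly max winds."""
    return 1e-4 * sum(v ** 2 for v in six_hourly_winds_kt
                      if v >= TROPICAL_STORM_THRESHOLD)

def season_ace(storms):
    """Season total: sum of ACE over all storms."""
    return sum(storm_ace(winds) for winds in storms)

# Hypothetical storm: eight six-hourly reports ramping up and down;
# only the six reports at or above 34 kt contribute.
example = [30, 40, 55, 70, 85, 70, 50, 30]
```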

For 2020, the Atlantic basin ACE index was 179.8, which ranks 13th behind 2017, 2005, the peak in 1933 and nine other years. For comparison, this year’s ACE index for the northwestern Pacific, where typhoons are common, was 148.5. The higher value for the Atlantic this year reflects the greater number of named storms.

NOAA attributes the enhanced number of atmospheric whirligigs in the Atlantic in recent years to the warm phase of the naturally occurring AMO (Atlantic Multi-Decadal Oscillation). The AMO, which has a cycle time of approximately 65 years and alternates between warm and cool phases, governs many extremes, such as cyclonic storms in the Atlantic basin and major floods in eastern North America and western Europe. The present warm phase began in 1995, marking the beginning of a period when both named Atlantic storms and hurricanes have become more common on average – as seen in the first two figures above.

Another contribution to storm activity in the Atlantic comes from La Niña cycles in the Pacific. Apart from a cooling effect, La Niñas result in quieter conditions in the eastern Pacific and heightened activity in the Atlantic. The current La Niña started several months ago and is expected to continue into 2021.

Despite NOAA’s recognition of what has caused so many Atlantic storms in 2020, activists continue to claim that climate change is making hurricanes stronger and more destructive and increasing the likelihood of more frequent major hurricanes. Pontificates Michael “hockey stick” Mann: “The impacts of climate change are no longer subtle. We’re seeing them play out right now in the form of unprecedented wildfires out West and an unprecedented hurricane season back East.”

Clearly, there’s no evidence for such nonsensical, unscientific statements.

Next: New Evidence That the Ancient Climate Was Warmer than Today’s

No Evidence for Dramatic Loss of Great Barrier Reef Corals

A 2020 study of the Great Barrier Reef that set alarm bells ringing in the mainstream media is based on faulty evidence, according to Australian scientist and leading coral reef authority, Professor Peter Ridd. The study claims that between 1995 and 2017 the reef lost half its corals, especially small baby colonies, because of global warming – but Ridd says the claims are false.

The breathtakingly beautiful Great Barrier Reef, labeled by CNN as one of the seven natural wonders of the world, is the planet’s largest living structure. Visible from outer space and 2,300 km (1,400 miles) long, the reef hugs the northeastern coast of Australia. A healthy portion of the reef is shown in the image below.

Ridd-GBF coral.jpg

CREDIT: DAVID CHILD, EVENING STANDARD.

But corals are susceptible to overheating and undergo bleaching when the water gets too hot, losing their vibrant colors. During the prolonged El Niño of 2016-17, higher temperatures caused mass bleaching that damaged portions of the northern and central regions of the Great Barrier Reef. Ridd’s fellow reef scientists contended at the time that as much as 30% to 95% of the reef’s corals died. However, Ridd disagreed, estimating that only 8% of the Great Barrier Reef suffered; much of the southern end of the reef wasn’t affected at all. 

Likewise, Ridd finds no evidence for the 50% loss of corals since 1995 claimed in the recent study. He says the most reliable data on coral extent comes from AIMS (the Australian Institute of Marine Science), which has been measuring over 100 reefs every year since 1986. As the following figure illustrates, AIMS data shows that coral cover fluctuates dramatically with time but there is approximately the same amount of Great Barrier Reef coral today as in 1995. Adds Ridd:

There was a huge reduction in coral cover in 2011 which was caused by two major cyclones that halved coral cover. Cyclones have always been the major cause of temporary coral loss on the Reef.

Ridd coral cover.jpg

It can be seen that the coral cover averages only about 20% in the years since 1986, when AIMS measurements began. But a 2019 research paper reported that the first reef expedition back in 1928-29 discovered very similar coverage: on a reef island known as Low Isles, the coral cover ranged from 8% to 42% in different parts of the island. So essentially no coral has disappeared over a period of 90 years that encompasses both warming and cooling periods.

The paper’s authors did find that the coral colonies on Low Isles were 30% smaller in 2019 than in 1928-29, and that coral “richness” had declined. Apart from its faulty conclusion about coral loss, the 2020 study also found smaller colony sizes throughout the reef, even though the relative abundance of large colonies was unchanged.

Nevertheless, the most recent AIMS report records small gains in the cover of hard corals in the central and southern Great Barrier Reef, following another mass bleaching event in late 2019. Hard corals are the primary reef-building corals; soft corals don’t form reefs.

Even more encouraging news for coral reef health comes from a just-reported survey of coral reefs on the opposite side of the country – the Rowley Shoals, a chain of three coral atolls 300 km (190 miles) off the coast of northwest Western Australia. Following an extensive marine heat wave in December 2019, an April 2020 survey found that up to 60% of the Rowley Shoals corals had become a pallid white (left image below). Yet a follow-up survey just six months later revealed that much of the bleached coral had already recovered (right image) and that perhaps only 10% of the reef had been killed.

Rowley Shoals bleaching.jpg
Rowley Shoals coral.jpg

CREDIT: WESTERN AUSTRALIA DBCA.

Tom Holmes, the marine monitoring coordinator at Western Australia’s DBCA (Department of Biodiversity, Conservation and Attractions), said "We were expecting to see widespread mortality, and we just didn't see it … which is a really amazing thing." Holmes explained that, while high ocean temperatures cause coral to bleach, what is less well known is that bleached corals don’t die immediately. Bleaching is initially just a sign of stress, but if the stress continues for a long time, it does lead to mortality.

However, Holmes – ever the cautious scientist – feels the reef may have been lucky and dodged a bullet this time. That’s because the marine heat wave that caused the bleaching was short-lived, dissipating at the end of the Australian summer a few months ago and giving the corals a chance to recover.

The resilience of the Rowley Shoals is no surprise to Ridd. Despite having been fired from his position at James Cook University in northern Queensland for his politically incorrect views on the Great Barrier Reef and climate change, Ridd continues to push the case for more accurate measurements and better quality assurance in coral reef science.

Next: No Evidence That 2020 Hurricane Season Was Record-Breaking

Evidence Mounting for Global Cooling Ahead: Record Snowfalls, Less Greenland Ice Loss

Despite the ongoing drumbeat of apocalyptic proclamations about our warming climate, a close look at recent evidence suggests we’re on the threshold of a cooling spell. Even before the northern 2020-21 winter arrives, early-season snowfall records are tumbling all over North America and Europe. And, while Greenland has been losing ice for decades, the rate of loss has slowed dramatically since 2016.

Could all this be a prelude to the upcoming grand solar minimum?

Northern Hemisphere snow extent in the fall has in fact been increasing yearly since at least the 1960s, as shown in the figure below, which depicts the monthly average fall snow extent measured by satellite, up to 2019. The trend is the same for Eurasia as it is for North America.

Snow NH Fall.jpg

But the 2020 fall extent is likely to surpass that of any previous year, based on data from the Finnish Meteorological Institute for Northern Hemisphere snow mass (as opposed to extent) to date. As seen in the next figure, the snow mass this season is already tracking above the average for 1982-2012; the mass is measured in billions of tonnes (gigatonnes, where 1 tonne = 1.102 U.S. tons).

fmi_swe_tracker.jpg

In the U.S., much of the white powder blanketing the northern states prematurely in 2020 has fallen in Minnesota. The state recently experienced its largest early-season snowstorm in recorded history, going back about 140 years. Dropping up to 23 cm (9 inches) of snow in some places, which is a lot for the fall, the storm produced the second biggest October snowfall ever in the state. The cities of Alexandria and St. Cloud, Minnesota saw their snowiest October on record.

And snow records were also smashed in towns and cities across Montana and South Dakota. All of this has been accompanied by misery in the form of bitterly cold temperatures across the U.S. and Canada.

In Europe, skiing and sledding enthusiasts are delighted with the early onset of a snowpack deeper than 80 cm (3 feet) on some Austrian glaciers. Both the Alps and Pyrenees had several heavy early-season snowfalls in October, and even parts of lower-altitude Scandinavia are already buried in snow.

As for ice, much recent attention has been focused on the Greenland ice sheet. A research paper published in the journal Nature in October 2020 set alarm bells ringing by insisting that Greenland’s ice is now melting faster than at any time in the past 12,000 years. This spurious claim seems to have been influenced by a big jump in the rate of ice loss, from an average 75 gigatonnes (83 gigatons) yearly in the 20th century to an average annual loss of 258 gigatonnes (284 gigatons) between 2002 and 2016; a greater-than-average loss of 329 gigatonnes (363 gigatons) occurred in 2019.
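These ice-loss figures can be translated into equivalent global sea-level rise using the standard conversion of roughly 361.8 gigatonnes of ice per millimeter of global mean sea level:

```python
# Converting Greenland ice-loss figures into equivalent global mean
# sea-level rise. Standard conversion: ~361.8 gigatonnes of ice per mm
# of sea level (ocean area about 3.618e8 square km).
GT_PER_MM_SEA_LEVEL = 361.8

def sea_level_mm(gigatonnes):
    """Global mean sea-level equivalent (mm) of an ice mass in Gt."""
    return gigatonnes / GT_PER_MM_SEA_LEVEL

# Loss figures quoted in the text
for label, gt in [("20th-century average, per year", 75),
                  ("2002-2016 average, per year", 258),
                  ("2019 loss", 329)]:
    print(f"{label}: {sea_level_mm(gt):.2f} mm")
```

Even the large 2019 loss corresponds to less than a millimeter of global sea level, which helps frame the magnitudes under discussion.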

However, the paper’s authors failed to note that the rate of Greenland ice loss has not increased since 2002, or that the 2019 loss was less than seen seven years before, in 2012. In fact, as illustrated in the figure below, the loss rate has drastically slowed since 2016. The figure shows satellite measurements of the ice loss at regular intervals from April 2002 to April 2019, in gigatonnes, but doesn’t include the massive summer melt in 2019. The 2020 loss was only 152 gigatonnes (168 gigatons).

Greenland mass loss 2002-19.jpg

Although a hefty amount of ice is normally lost during the short Greenland summer, some of this ice loss is compensated by ice gained over the long winter from the accumulation of compacted snow. The ice sheet, 2-3 km (6,600-9,800 feet) thick, consists of layers of compressed snow built up over at least hundreds of thousands of years. In addition to summer melting, the sheet loses ice by calving of icebergs at its edges.

The slowdown in ice loss since 2016 is a clear sign that the doomsday talk and panic engendered by the Nature paper are unwarranted. And, not only is Greenland losing less ice, and snowfall in the Northern Hemisphere setting new records, but this year’s winter in the Southern Hemisphere was also exceptionally snowy and brutal – all likely harbingers of the soon-to-arrive grand solar minimum.

Next: No Evidence for Dramatic Loss of Great Barrier Reef Corals