Sea Level Rise Is Partly Anthropogenic – but Due to Subsidence, Not Global Warming

Rising sea levels are all too often blamed on climate change by activists and the media. But a recent research study has revealed that, while much of sea level rise in coastal cities is indeed due to human activity, the culprit is land subsidence caused by groundwater extraction, rather than any human-induced global warming.

The study, conducted by oceanographers at the University of Rhode Island, measured subsidence rates in 99 coastal cities around the world between 2015 and 2020, using data from a pair of Europe’s Sentinel-1 satellites. The subsidence rates for each city were calculated from satellite images taken once every two months during the observation period – a procedure that enabled the researchers to measure the height of the ground with millimeter accuracy.

Several different processes can affect vertical motion of land, as I discussed in a previous post. Long-term glacial rebound after melting of the last ice age’s heavy ice sheets is causing land to rise in high northern latitudes. But in many regions, the ground is sinking because of sediment settling and aquifer compaction caused by human activities, especially groundwater depletion resulting from rapid urbanization and population growth. 

The study found that subsidence is common across the globe. The figure below shows the maximum subsidence rates measured by the authors in the 99 coastal cities studied, from 2015 to 2020.

In Tianjin, China and Jakarta, Indonesia, parts of each city are subsiding at alarming rates exceeding 30 mm (1.2 inches) per year. Maximum rates of this magnitude are as much as 15 times the average rate of global sea level rise. In another 31 cities, the maximum subsidence rate is more than 5 times faster than sea levels are rising globally.

The most rapid subsidence is occurring in southern, southeastern and eastern Asia. Even in cities that are relatively stable overall, some areas are sinking faster than sea levels are rising. The next figure shows four examples: Taipei, the largest city in Taiwan with a population of 2.7 million; Mumbai, with a population of about 20 million; Auckland, the largest city in New Zealand and home to 1.6 million people; and, for comparison with the U.S., Tampa, whose metropolitan area has a population of over 3 million. Both Taipei and Tampa show major subsidence.

This study of subsidence throws light on a long-standing puzzle: what is the true rate of global sea level rise? According to NOAA (the U.S. National Oceanic and Atmospheric Administration) tide gauge records, the average rate of rise during the 20th century was 1.7 mm (about 1/16th of an inch) per year. But NASA’s satellite measurements put the rate at more like 3.4 mm (1/8th of an inch) per year, double NOAA’s value.

The difference comes largely from vertical land motion. Satellite observations of absolute sea level measure the height of the sea – the distance of its surface from the center of the earth. Tide gauges instead measure the height of the sea relative to the land to which the gauge is attached, the so-called RSL (Relative Sea Level) metric. Sinking of the land independently of sea level, as in the case of the 99 cities studied, artificially amplifies the RSL rise recorded by the gauge; rising land, as in regions of glacial rebound, has the opposite effect and understates it.
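
To make the distinction concrete, here is a minimal back-of-the-envelope sketch in Python using round numbers quoted in this post – a satellite-measured rise of 3.4 mm per year and a maximum subsidence rate of 30 mm per year, as in parts of Jakarta. Treating the RSL rate as the simple sum of the two is my simplification; it ignores every other local effect on the gauge.

```python
# Illustrative only: the relative sea level (RSL) seen by a tide gauge on sinking land.
# The numbers are the round values quoted in this post, not new measurements.

absolute_rise_mm_per_yr = 3.4   # satellite-measured global-average rise
subsidence_mm_per_yr = 30.0     # maximum land sinking rate in parts of Jakarta

# A gauge attached to sinking land records the sum of the two motions.
rsl_rise_mm_per_yr = absolute_rise_mm_per_yr + subsidence_mm_per_yr

print(f"RSL rise at the gauge: {rsl_rise_mm_per_yr:.1f} mm/yr "
      f"({rsl_rise_mm_per_yr / absolute_rise_mm_per_yr:.0f}x the absolute rise)")
```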

But it's the tide gauge measurements that matter to the local community and its engineers and planners. Whether or not tidal cycles or storms cause flooding of critical coastal structures depends on the RSL measured at that location. Adaptation needs to be based on RSLs, not sea levels determined by satellite.

Shown in the two figures below are tide gauge time series compiled by NOAA for various sites around the globe that have long-term records dating back to 1900 or before. The graph in the left panel of the upper figure is the average of records at two sites: Harlingen in the Netherlands and Honolulu in Hawaii. The average rate of RSL rise at these two locations is 1.38 mm (0.05 inches) per year, with an acceleration of only 0.007 mm per year per year, which is essentially zero. At Sydney in Australia (right panel of upper figure), the RSL is rising at only 0.78 mm (0.03 inches) per year.
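
For readers curious how such a rate and acceleration are extracted from a tide-gauge record, the sketch below fits a quadratic to a synthetic monthly RSL series with NumPy. It is only an illustration of the idea – the data are invented and this is not NOAA’s actual processing.

```python
import numpy as np

# Synthetic monthly tide-gauge record (illustration only, not real data):
# a 1.4 mm/yr trend, a tiny acceleration, plus random noise.
rng = np.random.default_rng(0)
years = np.arange(1900, 2020, 1 / 12)
t = years - years[0]
rsl_mm = 1.4 * t + 0.5 * 0.007 * t**2 + rng.normal(0, 20, t.size)

# Fit RSL = a + b*t + (c/2)*t^2 : b is the rate, c the acceleration.
coeffs = np.polyfit(t, rsl_mm, 2)          # returns [c/2, b, a]
rate, accel = coeffs[1], 2 * coeffs[0]
print(f"fitted rate: {rate:.2f} mm/yr, acceleration: {accel:.4f} mm/yr^2")
```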

In the lower figure, the rate of RSL rise at Charleston – a “hotspot” for sea level rise on the U.S. Atlantic coast – is a high 3.4 mm (0.13 inches) per year. At Mumbai, where much of the city is subsiding more rapidly than 2 mm (0.08 inches) per year as seen earlier, the RSL is rising at 0.83 mm (0.03 inches) per year, comparable to Sydney. Without subsidence at Mumbai, the RSL would be falling.  

Were it not for anthropogenic subsidence, actual rates of sea level rise in many parts of the world would be considerably lower than they appear.

Next: Climate Science Establishment Finally Admits Some Models Run Too Hot

Science on the Attack: Nuclear Fusion – the Energy Hope of the Future

As one of my series of occasional posts showcasing science on the attack rather than under attack, this post reviews the present status of nuclear fusion as an energy source. Although not yet an engineering reality, fusion is one of two potential long-term technologies for meeting the world’s future energy needs – the other being already commercialized nuclear fission.

Whatever one’s views on fossil fuels, it’s likely this now abundant source of energy will become depleted in a century or so. And despite the promise of renewable energy sources such as wind and solar, the necessary development of large-scale battery storage capability, to store energy for those times when the wind stops blowing or the sun isn’t shining, is decades away.

Fission and fusion are both nuclear processes. Fission is the splitting of a heavy nucleus such as uranium into two lighter nuclei, triggered by neutron bombardment. Establishment of a self-sustaining chain reaction unleashes an explosive amount of energy; a controlled chain reaction is the basis for a nuclear reactor, while an uncontrolled reaction is the basis of the atomic bomb.

Fusion, on the other hand, smashes two light nuclei together at high speed to form a heavier nucleus. The light nuclei are typically deuterium and tritium, isotopes of hydrogen containing one and two neutrons, respectively (the hydrogen nucleus consists of just a single proton). This process, which powers our sun and other stars, also releases vast amounts of energy and can result in a self-sustaining chain reaction when enough fusion reactions occur. Uncontrolled fusion is the principle of the so-called hydrogen bomb, while fusion as an energy source involves a controlled reaction.
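
To get a feel for those “vast amounts of energy,” the short sketch below uses the textbook value of 17.6 MeV released per deuterium–tritium fusion – these constants are standard physics, not figures from this post – to estimate the energy content of a kilogram of D–T fuel and compare it with coal at roughly 29 MJ/kg.

```python
# Back-of-envelope energy density of D-T fusion fuel (standard textbook values).
MEV_TO_J = 1.602e-13           # joules per MeV
AMU_TO_KG = 1.661e-27          # kilograms per atomic mass unit

energy_per_reaction_j = 17.6 * MEV_TO_J       # D + T -> He-4 + n releases 17.6 MeV
fuel_mass_per_reaction_kg = 5.03 * AMU_TO_KG  # one deuteron (~2 u) + one triton (~3 u)

energy_per_kg = energy_per_reaction_j / fuel_mass_per_reaction_kg
coal_energy_per_kg = 29e6                     # ~29 MJ per kg of hard coal

print(f"D-T fuel: {energy_per_kg:.2e} J/kg, roughly "
      f"{energy_per_kg / coal_energy_per_kg / 1e6:.0f} million times coal")
```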

That fusion hasn’t become commercial after nearly 80 years of research and development is because it’s difficult to sustain the very high temperatures required – 50 to 100 million degrees Celsius – to make the process work. At lower temperatures, the positively charged light nuclei repel one another electrically and can’t be pushed close enough together to collide and fuse.

What this means in practice is that the deuterium and tritium fuel must be in the form of either a high-temperature plasma confined by strong magnetic fields, or a small pellet a few millimeters across that is heated and compressed by powerful lasers or particle beams. Typical experimental reactors for the first method, known as magnetic confinement, are shown in the two figures below.

The most common kind of magnetic confinement uses a doughnut-shaped or toroidal magnetic field combined with a perpendicular or poloidal field. The combination produces a spiral or helical field, as illustrated in the following schematic; the central solenoid induces a powerful electric current that both ionizes the deuterium and tritium reactor fuel and heats the resulting plasma. High-energy neutrons from the fusion reaction are absorbed in an outer blanket containing lithium, generating heat that can be converted to electricity.

In inertial confinement, precisely focused laser or ion beams heat the outer layers of the fuel pellet, exploding the fuel outwards and in turn producing an inward-moving shock wave that compresses the core of the pellet. The fusion reaction then spreads through the whole pellet as the chain reaction proceeds; extracting the resulting heat provides electricity.

The promise of fusion is immense. The deuterium in a few liters of seawater can provide as much energy through fusion as several tons of coal, oil or gas. But up until now there’s been a major problem: to release that much energy, an even greater amount of energy has had to be supplied to the lasers or the coils powering the magnets.
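
As a rough sanity check on the seawater comparison, the sketch below estimates the fusion energy available from the deuterium in one liter of seawater, assuming every deuteron is burned in a D–T reaction (with the tritium bred separately from lithium). The deuterium abundance, reaction energy and coal heating value are standard reference numbers, not figures from this post.

```python
# Rough estimate using standard physical constants (not figures from the post).
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

litre_of_water_g = 1000.0
moles_h2o = litre_of_water_g / 18.0          # ~55.5 mol of H2O per liter
hydrogen_atoms = 2 * moles_h2o * AVOGADRO
deuterium_atoms = hydrogen_atoms * 1.56e-4   # D/H ratio in ocean water (~156 ppm)

energy_j = deuterium_atoms * 17.6 * MEV_TO_J # 17.6 MeV per D-T fusion
coal_equiv_kg = energy_j / 29e6              # coal at ~29 MJ/kg

print(f"~{energy_j / 1e9:.0f} GJ per liter, roughly {coal_equiv_kg:.0f} kg of coal")
```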

Only recently have experimental fusion reactors achieved and ever so slightly surpassed this breakeven point. Researchers at the Joint European Torus, a magnetic confinement machine in Oxfordshire, UK reported in February this year that their reactor had been able to sustain a fusion reaction for five seconds, with a net output of heat energy. The National Ignition Facility at Lawrence Livermore National Laboratory in the U.S. announced last August that their inertial confinement reactor had generated over 10 quadrillion watts of fusion power for all of 100 trillionths of a second.
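
Converting the National Ignition Facility figures into an energy puts them in perspective: 10 quadrillion watts sustained for 100 trillionths of a second works out to only about a megajoule, as the one-liner below shows.

```python
# Energy = power x time, using the figures quoted above.
power_w = 10e15          # 10 quadrillion watts
duration_s = 100e-12     # 100 trillionths of a second
print(f"energy released: {power_w * duration_s:.1e} J (about one megajoule)")
```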

While these seem like baby steps, they represent a breakthrough for fusion technology. Now that the energy threshold has been exceeded at all, scientists are confident that extending the reaction time from seconds to hours or days is not far away. Those in the industry say they expect the 2020s to see a transition from experimental reactors to commercialization, with the first fusion facilities becoming connected to the grid in the 2030s.

Advantages of fusion energy over fission include a small footprint, no risk of a meltdown and no high-level nuclear waste. The radioactive waste that is generated is short-lived in comparison with fission waste.

Next:  Sea Level Rise Is Partly Anthropogenic – but Due to Subsidence, Not Global Warming

“Rescued” Victorian Rainfall Data Casts Doubt on Claims of a Wetter UK

Millions of handwritten rainfall records dating back nearly 200 years have revealed that the UK was just as wet in Victorian times as today. The records were “rescued” by more than 16,000 volunteers who digitally transcribed the observations from the archives of the UK Met Office, as a means of distracting themselves during the recent pandemic. The 5.3 million digitized records boost the number of pre-1961 observations by an order of magnitude.

The new data extends the official UK rainfall record back to 1836 and even earlier for some regions. The year 1836 was when Charles Darwin returned to the UK after his famous sea voyage gathering specimens that inspired his theory of evolution, and a year before Queen Victoria came to the throne. The oldest record in the collection dates back to 1677.

As a result of the project, the number of rain gauges contributing to the official record for the year 1862, for example, has increased from 19 to more than 700. The rain gauges were situated in almost every town and village across the UK, in locations as diverse as lighthouses, a chocolate factory, and next door to children’s author Beatrix Potter's Hilltop Farm in the Lake District.

Raw data in the form of “Ten Year rainfall sheets” included monthly rainfall amounts measured across the UK, Ireland and the Channel Islands between 1677 and 1960. After digitizing and organizing the raw data by county, the volunteer scientists combined data from different decades and applied quality control measures such as removing estimates and duplicate measurements, and identifying rain gauge moves.
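
The paper describes these quality-control steps in words rather than code, but a minimal sketch of that kind of cleaning – using pandas, with purely hypothetical station names and column labels – might look like the following.

```python
import pandas as pd

# Hypothetical transcribed records; station names and columns are illustrative only.
records = pd.DataFrame({
    "station":   ["Seathwaite", "Seathwaite", "Oxford", "Oxford"],
    "year":      [1862, 1862, 1862, 1863],
    "month":     [1, 1, 2, 2],
    "rain_mm":   [310.0, 310.0, 45.2, 51.0],
    "estimated": [False, False, True, False],
})

# Remove duplicate transcriptions of the same station-month, then drop estimated values.
cleaned = records.drop_duplicates(subset=["station", "year", "month"])
cleaned = cleaned[~cleaned["estimated"]]

print(cleaned)
```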

The outcome of their efforts, presented in a recently published paper, is depicted in the figure below showing the annual average UK rainfall by season from 1836 to 2019. The rescue data for 1836-1960 is shown in black and the previous Met Office data for 1862-2019 in blue. Both sets of data agree well for the overlapping period from 1862 to 1960.

 While the annual rainfall for all seasons combined is not included in the paper, the figure shows clearly that current UK rainfall is no higher on average than it was during the 19th century, with the possible exception of winter. This conclusion conflicts with statements on the Met Office website, such as: “… the UK has become wetter over the last few decades … From the start of the observational record in 1862, six of the ten wettest years across the UK have occurred since 1998 … these trends point to an increase in frequency and intensity of rainfall across the UK.”

In fact, the wettest UK month on record was in the early 20th century, October 1903. The rescue data for the 19th century reveals that November and December 1852 were also exceptionally wet months. December 1852 is found to have been the third wettest month on record in Cumbria County in northern England, and November 1852 the wettest month on record for large parts of southern England.

The next figure illustrates how much UK rainfall varies regionally in time and space, for the four wettest months between 1836 and 1960. It can be seen that the soggiest regions of the nation are consistently Scotland, Wales and northwestern England. Shown in the subsequent figure is the monthly rainfall pattern from 1850 to 1960 recorded by rain gauges located near Seathwaite in Cumbria’s Lake District – one of the wettest spots in the country, with annual rainfall sometimes exceeding 5,000 mm (200 inches). The different colors represent nine different gauges.

By contrast, the driest UK month on record was February 1932 – during a prolonged period of heat waves across the globe. But the new data finds that the driest year on record was actually 1855. And 1844 now boasts the driest May on record, during a period of notably dry winters in the 1840s and 1850s.

Gathering the original rain gauge readings transcribed by the volunteers was evidently no simple task. The published paper summarizing the rescue project includes amusing comments found on the Ten Year sheets, such as “No readings as gauge stolen”; “Gauge emptied by child”; and “Gauge hidden by inmates of a mental hospital.”

But the newly expanded dataset does bring recent Met Office statements into question. While precipitation tends to increase as the world warms because of enhanced evap­oration from tropical oceans, which results in more water vapor in the atmosphere, there’s very little evidence that the UK has become any rainier so far.

Next: Science on the Attack: Nuclear Fusion – the Energy Hope of the Future

Natural Sources of Global Warming and Cooling: (2) The PDO and AMO

As a follow-on to my earlier post on solar variability and La Niña as natural sources of global cooling, this second post in the series examines the effect on our climate of two major ocean cycles – the PDO (Pacific Decadal Oscillation) and the AMO (Atlantic Multidecadal Oscillation).

Both the PDO and AMO have cycle times of 60-65 years and alternate between warm and cool phases of approximately equal length, though the warm phases of the AMO may last longer. The two cycles are compared in the following figure, which shows indexes measuring fluctuations in average Pacific (top) and Atlantic (bottom) sea surface temperature since 1854 (1856 for the AMO); red denotes the warm phase, blue the cool phase of the cycle.

PDO temperature fluctuations are greater than those of the AMO, and can be as much as 2 degrees Celsius (3.6 degrees Fahrenheit) from the mean. This is mainly because the Pacific Ocean is so much larger than the Atlantic in the tropics, the region where most of the forcing that drives the PDO and AMO occurs. It can be seen that phases of the AMO are more distinct than those of the PDO, in which the warm phase often includes cold spells and vice versa. In 2022, the PDO is in a cool phase that began either around 2000 or in 2007, but the AMO is in its warm phase.  

Although the PDO can be traced back at least several centuries, its distinctive behavior wasn’t recognized until the 1990s, when it was named by a U.S. fisheries scientist trying to explain the connection between Alaskan salmon harvests and the Pacific climate. The geographic pattern has a characteristic horseshoe shape, as shown in the figure below illustrating its warm (left) and cool (right) phases; the color scale represents the percentage of selected warm or cool years since 1951 with above-normal temperatures from December to February.

During the PDO warm phase, more El Niños occur and the southeastern U.S. is cooler and wetter than usual. Its cool phase is marked by an excess of La Niñas, and dominated by warmer, drier conditions inland. The cycle has also been linked to cold weather extremes in the U.S. and Canada.

Just as the warm phase of the PDO results in warmer than normal sea surface temperatures along the west coast of North America, the warm phase of the AMO produces warm waters off the west coast of Europe and Africa, as seen in the next figure showing its warm (left) and cool (right) phases. The AMO warm phase causes intense hurricanes in the North Atlantic basin together with heavier-than-normal rainfall in Europe, leading to major flooding, but lighter rainfall in North America. This pattern is reversed during the cool phase.

So what effect, if any, do the PDO and AMO have on global warming?

While the two cycles are approximately the same length, they’ve been almost exactly out of phase since 1854, with the warm phase of one cycle almost coinciding with the cool phase of the other, as revealed in the first figure above. Were the PDO and AMO of equal strength, you’d expect the opposite phases to cancel each other.

But, because the PDO dominates as noted earlier, a rather different pattern emerges when the two indexes are combined as in the figure below. Note that the combined index is defined differently from the indexes in the first figure above; the blue line depicts annual values from 1900 to 2005, while the purple line is a 5-year running mean. It’s seen that the combined index was negative, signifying cooling, from 1900 to about 1925; positive, signifying warming, until about 1950; negative again up to 1980; and positive once more to 2005.  
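
The combined index in that figure has its own definition, which differs from the separate PDO and AMO indexes, but the smoothing step is easy to reproduce. The sketch below applies a 5-year running mean to a synthetic annual series; both the series and the simple averaging are my illustration, not the actual index calculation.

```python
import numpy as np

# Synthetic annual "combined index" values, 1900-2005 (illustration only).
rng = np.random.default_rng(1)
years = np.arange(1900, 2006)
combined = np.sin(2 * np.pi * (years - 1900) / 62) + rng.normal(0, 0.3, years.size)

# 5-year centred running mean, like the smoothed curve plotted in the figure.
window = np.ones(5) / 5
running_mean = np.convolve(combined, window, mode="valid")

print(years[2:-2][:5], np.round(running_mean[:5], 2))
```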

This is not too different from the behavior of the average global temperature since 1900, which went up from 1910 to 1940, down from 1940 to 1970, and upward since then – exhibiting perhaps a 10-year lag behind the combined AMO-PDO index.

Once the AMO switches back to its cool phase in about 2030, when the PDO will still be in the cool phase, strong cooling is likely. However, the actual effect of the PDO and AMO on climate is more complicated and depends not only on sea surface temperatures, but also on factors such as cloud cover – so that the correlation of these two natural cycles with global temperature may not be as real as it appears.

In addition, the PDO is no longer thought to be a single phenomenon, but rather a combination of different processes including random atmospheric forcing, large-scale teleconnections from the tropical Pacific, and changes in ocean currents. And the very existence of the AMO has been questioned, although most ocean scientists remain convinced of its reality. More research is needed to understand the influence of these two sources of natural variability on climate change.

Next:  “Rescued” Victorian Rainfall Data Casts Doubt on Claims of a Wetter UK

New Projections of Sea Level Rise Are Overblown

That sea levels are rising due to global warming is not in question. But there’s no strong scientific evidence that the rate of rise is accelerating, as claimed in a recent NOAA (the U.S. National Oceanic and Atmospheric Administration) report on sea level rise or the Sixth Assessment Report (AR6) of the UN’s IPCC (Intergovernmental Panel on Climate Change). Such claims create unnecessary alarm.

NOAA’s projections to 2050 are illustrated in the next figure, showing sea level relative to 2000 both globally and in the contiguous U.S. The green curves represent a smoothing of actual observations from 1970 to 2020, together with an extrapolation from 2020 to 2050 based on the earlier observations. The projections in other colors correspond to five different modeled scenarios ranging from low to high risk for coastal communities.

The U.S. projections are higher than the global average because the North American Atlantic coast is a “hotspot” for sea level rise, with anomalously high rates of rise. The extrapolated U.S. average is projected to increase from 11 cm (4.3 inches) above its 2000 level in 2020, to 19 cm (7.5 inches) in 2030, 28 cm (11 inches) in 2040 and 38 cm (15 inches) in 2050. Projected increases are somewhat higher than average for the Atlantic and Gulf coasts, and considerably lower for the west coast.

These projected NOAA increases clearly suggest an accelerating rate of sea level rise, from a rate of 5.5 cm (2.2 inches) per decade between 2000 and 2020, to an almost doubled 10 cm (3.9 inches) per decade between 2040 and 2050. That’s an acceleration rate of 0.15 mm per year per year and implies a rise in U.S. sea levels by 2050 as large as that seen over the past century. The implied global acceleration rate is 0.08 mm per year per year.
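
The quoted acceleration can be checked with simple arithmetic: a rate change of 4.5 mm per year spread over roughly the 30 years separating the two periods gives about 0.15 mm per year per year, as the sketch below confirms (taking 30 years as the elapsed time is an approximation).

```python
# Check of the implied acceleration, using the NOAA rates quoted above.
rate_2000_2020_mm_per_yr = 5.5    # 5.5 cm per decade
rate_2040_2050_mm_per_yr = 10.0   # 10 cm per decade

elapsed_years = 30.0              # approximate separation of the two periods
acceleration = (rate_2040_2050_mm_per_yr - rate_2000_2020_mm_per_yr) / elapsed_years
print(f"implied acceleration: {acceleration:.2f} mm per year per year")
```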

But even the IPCC’s AR6, which makes some exaggerated claims about extreme weather, estimates global sea level acceleration at only 0.1 mm per year per year from 1993 to 2018. It seems highly unlikely that the rate would increase by 50% in 32 years, so the NOAA projections appear excessively high.  

However, all these estimates are based not only on actual measurements, but also on computer models. The models include contributions to sea level rise from the expansion of seawater as it warms; melting of the Greenland and Antarctic ice sheets, as well as glaciers; sinking of the seafloor under the weight of extra meltwater; and local subsidence due to groundwater depletion, or rebound after melting of the last ice age’s heavy ice sheet.

The figure on the left below shows the GMSL (global-mean sea level, blue curve) rise rate estimated by one of the models for the 100 years from 1910 to 2010. Although it’s clear that the rate has been increasing since the late 1960s, it did the same in the 1920s and 1930s, and may currently be turning downward. Not surprisingly, studies using these models often come to very different conclusions about future rates of sea level rise.

The figure on the right below is an historical reconstruction of the rise rate for various locations along the Atlantic North American and Icelandic coasts, derived from salt-marsh sediment proxies and corrected for glacial rebound. It can be seen that rates of rise in the 18th century were at times only slightly lower than those in the 20th century, and that sea levels have fluctuated for at least 300 years, long before modern global warming began.

Because of this, the reconstruction study authors comment that the high “hotspot” rates of sea level rise in eastern North America may not be associated with any human contribution to global warming. They hypothesize that the fluctuations are related to changes in the mass of Arctic land ice, possibly associated with the naturally occurring North Atlantic Oscillation.

Along with the IPCC estimates, the reconstruction casts doubt on NOAA’s claim of continuing acceleration of today’s sea level rise rate. An accompanying news release adds to the hype, stating that “Sea levels are continuing to rise at an alarming rate, endangering communities around the world.”

Supporting the conclusion that NOAA’s projections are exaggerated is a 2021 assessment by climate scientist Judith Curry of projected sea level scenarios for the New Jersey coast. Taking issue with a 2019 report led by scientists from Rutgers University, her assessment found that the Rutgers sea level projections were – like NOAA’s estimates – substantially higher than those of the IPCC in its Fifth Assessment Report prior to AR6. Curry’s finding was that the bottom of the Rutgers “likely” scenarios was the most probable indicator of New Jersey sea level rise by 2050.

Interestingly, NOAA’s “low” scenario projected a U.S. average sea level of 31 cm (12 inches) in 2050, rather than 38 cm (15 inches), implying essentially no acceleration of the rise rate at all – and no cause for its media hype.

(This post has also been kindly reproduced in full on the Climate Depot blog.)

Next: Natural Sources of Global Warming and Cooling: (2) The PDO and AMO

Can Undersea Volcanoes Cause Global Warming?

It’s well known that active volcanoes on land can cause significant global cooling when they erupt, from shielding of sunlight by sulfate aerosol particles in the eruption plume which linger in the atmosphere. But what is the effect on climate of undersea volcanic eruptions such as the massive submarine blast that blanketed the nearby South Pacific kingdom of Tonga with ash in January?

Submarine volcanoes are relatively unexplored but are thought to number over a million, of which several thousand may be currently active. Many lie along tectonic plate boundaries, where plates are pulling apart or colliding with each other. The Tonga volcano sits above a geological pileup, where the western edge of the Pacific plate dives under the Indian–Australian plate.

The eruption of any volcano releases a huge amount of energy. In the case of a submarine volcano that may be thousands of meters deep, the plume may not even reach the surface and all the energy is absorbed by the ocean. The Tonga eruption was from a shallow depth, so much of the energy was dissipated at the ocean surface – launching a destructive tsunami – and in the atmosphere – generating a plume of ash that reached a record altitude of 55 kilometers (34 miles), a shockwave that traveled around the globe, and nearly 400,000 lightning strikes.

You might think all that energy could contribute to global warming, had the volcano erupted in deeper water that would have converted all the energy to heat. However, the oceans, which cover 71% of the earth’s surface, are vast and can hold 1,000 times more heat than the atmosphere. Any change in sea surface temperatures from even multiple underwater volcanic eruptions would be imperceptible.

This can be seen from a simple calculation. According to NASA scientists, the energy released by the undersea Tonga eruption was equivalent to the explosive power of 3.6 to 16 megatonnes (4 to 18 megatons) of TNT. For comparison, the 1980 eruption on land of Mount Saint Helens in Washington state released about 22 megatonnes of TNT equivalent, and the famous 1883 explosion of Indonesia's Krakatoa unleashed 180 megatonnes; the atomic bomb that the U.S. dropped on Hiroshima in Japan in 1945 released roughly 14 kilotonnes of TNT equivalent.

The upper Tonga limit of 16 megatonnes is equal to 7.5 × 10¹⁶ Joules of energy. Assuming the heat capacity of seawater to be 3,900 Joules per kilogram per degree Celsius and the total mass of the oceans to be 1.4 × 10²¹ kilograms, it would take 5.5 × 10²⁴ Joules (5.5 trillion trillion Joules) to warm the entire ocean by 1 degree Celsius (1.8 degrees Fahrenheit).

So if all 16 megatonnes had gone into the ocean, ocean temperatures would have risen by (7.5 × 10¹⁶)/(5.5 × 10²⁴), or a minuscule 1.4 × 10⁻⁸ (14 billionths) of a degree Celsius. The Krakatoa above-water eruption, on the other hand, decreased global air temperatures by as much as 1.2 degrees Celsius (2.2 degrees Fahrenheit) for several years and may have cooled the oceans as well.
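
For anyone wanting to reproduce the arithmetic, the short sketch below simply repeats the calculation of the two preceding paragraphs with the same inputs.

```python
# Repeats the back-of-envelope calculation above, with the same inputs.
energy_j = 7.5e16          # upper estimate of the Tonga eruption energy, in joules
heat_capacity = 3900.0     # seawater, J per kg per degree Celsius
ocean_mass_kg = 1.4e21     # total mass of the oceans

joules_per_degree = heat_capacity * ocean_mass_kg    # ~5.5e24 J per degree Celsius
warming_deg_c = energy_j / joules_per_degree

print(f"ocean warming if all the energy were absorbed: {warming_deg_c:.1e} deg C")
```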

But there’s another potential source of warming from submarine volcanoes, and that is the CO2 emitted along with the sulfur dioxide (SO2) that causes cooling through formation of sulfate aerosols. If the underwater plume reaches the ocean surface, both gases are released into the atmosphere. In the case of Tonga, while the amount of SO2 emitted was too small to have any cooling effect, the emitted CO2 could in theory contribute to global warming.

However, the yearly average of CO2 emissions from all volcanoes, both on land and submarine, is only 1 to 2% of current human emissions – emissions that have raised global temperatures by at most 1 degree Celsius (1.8 degrees Fahrenheit). So any CO2 warming effect from an underwater eruption is unlikely to be much larger than the above calculation for energy release. Interestingly though, Chinese researchers recently reported that the atmospheric concentration of CO2 near Tonga jumped by 2 parts per million after the eruption – as much as the global concentration normally increases in a whole year from human sources. But this is most probably a temporary local effect that won’t affect the global CO2 increase expected in 2022.

Despite the inability of undersea eruptions to affect our present climate, it was suggested in a 2015 research paper that CO2 from submarine volcanoes may have triggered the warming that pulled the earth out of the last ice age about 15,000 years ago.

The basic idea is that lower sea levels during glaciation relieved the hydrostatic pressure on submarine volcanoes that suppressed eruptions during warmer times. This caused them to erupt more. After a lengthy ice age, the buildup of CO2 from undersea eruptions initiated warming that then began to melt the ice sheets covering volcanoes on land, causing them in turn to belch CO2 that enhanced the warming, melting more ice in a feedback effect.

Next: New Projections of Sea Level Rise Are Overblown

Little Evidence That Global Warming Is Causing Extinction of Coral Reefs

Coral reefs, like polar bears, have become a poster child for global warming. According to the climate change narrative, both are in imminent peril of becoming extinct.

But just as polar bears are thriving despite the loss of sea ice in the Arctic, coral reefs are in good health overall despite rising temperatures. Recent research shows that not only are corals capable of much more rapid recovery from bleaching events than most reef scientists thought, but they are a lot more abundant around the globe than anyone knew.

During the massive, prolonged El Niño of 2014-17, higher temperatures caused mass bleaching of coral reefs all across the Pacific Ocean, including the famous Great Barrier Reef that hugs the northeastern coast of Australia. Corals lose their vibrant colors when the water gets too hot, because heat causes the microscopic food-producing algae that normally live inside them to poison the coral – so the coral kicks them out. However, corals have the ability to select from the surrounding water a different species of algae better suited to hot conditions, and thus to survive.

Until recently, it was believed that the recovery process, if it occurred at all, took years. But new studies (see here and here) have found that both the Great Barrier Reef and coral colonies on reefs around Christmas Island in the Pacific were able to recover from the 2014-17 El Niño much more rapidly, even while seawater temperatures were still higher than normal. The authors of the studies attribute the corals’ recovery capacity to lack of exposure to other stressors such as the crown-of-thorns starfish and water pollution from farming runoff.

That corals worldwide are not on the verge of extinction was first revealed in a 2021 study by four researchers at Australia’s James Cook University (JCU). The study completely contradicted previous apocalyptic predictions of the imminent demise of coral reefs, predictions that included an earlier warning from three of the same authors and others of ongoing coral degradation from global warming.

The JCU study included data on more than 900 coral reefs across the Pacific, from Indonesia to French Polynesia, as shown in the figure below. To estimate abundances, the researchers used a combination of coral reef habitat maps and counts of coral colonies. They estimated the total number of corals in the Pacific at approximately half a trillion, similar to the number of trees in the Amazon or birds in the world. This colossal population is for a mere 300 species, a small fraction of the 1,619 coral species estimated to exist worldwide by the International Union for Conservation of Nature (IUCN).

Reinforcing the JCU finding is a very recent discovery made by Scuba divers working with the UN Educational, Scientific and Cultural Organization (UNESCO). The divers mapped out a massive reef of giant rose-shaped corals in pristine condition off the coast of Tahiti, the largest island in French Polynesia. The stunning reef, described as “a work of art” by the diving expedition leader, is remarkable for its size and its survival of a mass bleaching event in 2019.

Approximately 3 kilometers (2 miles) long and 30 to 65 meters (100 to 210 feet) across, the reef lies between 30 and 55 meters (100 and 180 feet) below the surface, about 2 kilometers (1 mile) off shore. The giant corals measure more than 2 meters (6.5 feet) in diameter, according to UNESCO. Studying a reef at such great depths required special technology for the Scuba divers, such as breathing gas containing helium, which counteracts the hallucination-like narcotic effects of nitrogen and oxygen at depth and helps prevent decompression sickness.

CREDIT: Alexis Rosenfeld/Associated Press

The existence of this and likely many other deep coral reefs, together with the JCU study, means that the global extinction risk of most coral species is much lower than previously thought, even though a local loss can be ecologically devastating to coral reefs in the vicinity.

The newly discovered rapid recovery of corals probably helped save the Great Barrier Reef from being added to a list of World Heritage Sites that are “in danger.” This classification had been recommended in 2021 by a UNESCO committee, to counter the supposed deleterious effects of climate change.

But, after intensive lobbying by an angry Australian government keen to avoid a politically embarrassing classification for a popular tourist attraction, the committee members agreed to an amendment. The amended recommendation required Australia to produce an updated report on the state of the reef by this month, when a vote could follow on whether or not to classify the site as being in danger.

Next: Can Undersea Volcanoes Cause Global Warming?

No Evidence That Islands Are Sinking Due to Rising Seas

According to the climate-change narrative, island nations such as the Maldives in the Indian Ocean and Tuvalu in the Pacific face the specter of rising seas, fleeing residents and vanishing villages. But recent research belies the claim that such tropical paradises are about to disappear beneath the waves, revealing that most of the hundreds of atolls studied actually grew in size from 2000 to 2017.

Low-lying atoll islands consist of a ring-shaped coral reef partly or completely encircling a shallow green lagoon in the midst of a deep blue sea. Perched just a few meters above sea level, these coral-reef islands are susceptible to rising waters that can cause flooding, damage to infrastructure and the intrusion of saltwater into groundwater. Such concerns are behind the grim prognosis that islanders will become “climate refugees,” forced to leave their homes as the oceans rise.

However, two recent studies conclude that this threat is unfounded. A 2021 study analyzed changes in land area on 221 atolls in the Indian and Pacific Oceans, utilizing cloud-free imagery from Landsat satellites. The atolls studied are shown in red in the following figure. Apart from the Maldives and Tuvalu, the dataset included islands in the South China Sea, the Marshall Islands and French Polynesia.

The study found that the total land area of the atolls increased by 6.1% between 2000 and 2017, from 1,008 to 1,069 square kilometers (389 to 413 square miles). Most of the gain was from the Maldives and South China Sea atolls, which together accounted for 88% of the total increase, and came from artificial building of islands within those areas for development of infrastructure, extra land and resorts.

As shown in the next figure, the areas of two island groups – French Polynesia and Palau – did diminish over the study period. Although these two groups accounted for 68 of the 221 atolls studied, the combined decrease represents only 0.15% of the global total area. The Republic of the Marshall Islands is designated as RMI; the percentages at the bottom of the figure are the increases or decreases of the individual island groups.

An earlier study by the same researchers analyzed shoreline changes in the 101 reef islands of the Pacific nation of Tuvalu between 1971 and 2014; this excluded the 9 atolls forming part of the subsequent study. During these 43 years the local sea level rose at twice the global average, at a rate of 3.9 mm (about 1/8 of an inch) per year. But despite surging seas, the total land area of the 101 islands expanded by 2.9% over the slightly more than four decades. The changes are illustrated in the figure below, where the areas on the horizontal axis and the changes on the vertical axis are measured in hectares (ha).

Altogether, 73 reef islands grew in size – some by more than 100% – and the other 28 shrank, though by a smaller average amount. Light blue circles enclosing symbols in the figure denote populated islands. Tuvalu is home to 10,600 people, half of whom live on the urban island of Fogafale in Funafuti atoll. Fogafale expanded by 3%, or 4.6 hectares (11.4 acres), over the 43-year study period.

Concerns about rising seas in the Maldives, the world’s lowest country, gained worldwide attention in 2009 when the Maldivian president and cabinet held an underwater meeting at the bottom of a turquoise lagoon. But the theatrics of ministers clad in black diving suits and goggles, signing a document asking all countries to reduce their CO2 emissions, were unnecessary. A research paper published subsequently in 2018 by Northumbria University scientists found that the Maldives actually formed when sea levels were even higher than they are today.

The researchers studied the formation of five atoll rim islands in the southern Maldives, by drilling cores in sand- and gravel-based reefs. A timeline was established by radiocarbon dating. What they found was that the islands formed approximately three to four thousand years ago, through the pounding on the reefs of large waves caused by distant storms off the coast of South Africa.

These large waves, known as high-energy wave events, broke coral debris off the reefs and transported it onto reef platforms, initiating reef island growth. Sea levels at that time were at least 0.5 meters (1.6 feet) higher than they are today, so the waves had more energy than current ocean swells. Further vertical reef growth is possible in the future, the study authors say, as sea levels continue to rise and large wave events increase, accompanied by sedimentation.

Next: Little Evidence That Global Warming Is Causing Extinction of Coral Reefs

Science Under Renewed Attack: New Zealand Proposal to Equate Māori Mythology with Science

Already under assault from many directions, science has recently come under renewed attack in New Zealand, a country perhaps better known for its prowess in rugby and its filming of The Lord of the Rings trilogy. The attack involves a government working group's proposal that schools should give the same weight to Māori mythology as they do to science in the classroom.

The proposal prompted a letter to the New Zealand Listener, signed by seven professors from the University of Auckland, questioning a plan to give mātauranga Māori (Māori knowledge) equal standing with scientific fields such as physics, chemistry and biology. The letter was critical of one of the working group’s new course descriptions, which promotes:

discussion and analysis of the ways in which science has been used to support the dominance of Eurocentric views … and the notion that science is a Western European invention and itself evidence of European dominance over Māori and other indigenous peoples.

The letter went on to say that such a statement perpetuated “disturbing misunderstandings of science emerging at all levels of education and in science funding,” and concluded that indigenous knowledge “falls far short of what we can define as science itself.” 

The professors did acknowledge the role that Māori knowledge in New Zealand has played in “the preservation and perpetuation of culture and local practices.” One of the letter’s coauthors, biological scientist Garth Cooper, is of Māori descent himself.

But the letter sparked a firestorm of complaints from fellow scientists and academics. The New Zealand Association of Scientists said it was “dismayed” by the comments. An open response signed by the staggering number of over 2,000 academics condemned the authors for causing "untold harm and hurt," and said they “categorically disagree” with their colleagues’ opinions, claiming that:

Mātauranga is far more than just equivalent to or equal to ‘Western’ science. It offers ways of viewing the world that are unique and complementary to other knowledge systems.

and

The seven professors ignore the fact that colonisation, racism, misogyny, and eugenics have each been championed by scientists wielding a self-declared monopoly on universal knowledge.

New Zealand’s Royal Society also issued a statement rejecting the authors’ views, saying it “strongly upholds the value of mātauranga Māori and rejects the narrow and outmoded definition of science”. The society is now contemplating drastic action by investigating whether Garth Cooper and another coauthor, philosophy professor Robert Nola, should be expelled from its membership as a result of the letter.

Aside from the politics involved, the reaction to the letter constitutes an enormous onslaught on science. True science emphasizes empirical evidence and logic. But indigenous Māori knowledge – which includes the folklore that all living things originated with Papa, the earth mother and Rangi, the sky father – is pseudoscience because it depends not on scientific evidence, but on religious beliefs and therefore can’t be falsified, as required of a valid scientific theory.

In the U.S., the issue of whether to teach schoolchildren creationism, a purely religious belief that rejects the theory of evolution, was settled long ago by the infamous Scopes Monkey Trial of 1925. Later, the Supreme Court struck down the last of the old state laws banning the teaching of evolution in schools; in 1987 it went further, in upholding a lower-court ruling that a Louisiana state law, mandating that equal time be given to the teaching of creation science and evolution in public schools, was unconstitutional.

As for the claim of the Auckland letter’s critics that science has been colonising, Professor Nola responded:

I don't think science is a coloniser at all: all people are colonisers, and we've done plenty of colonising, and we may have used our science to help do that. But science itself, I can't see how that is colonising – Newton's laws of motion, colonising of the brain or the mind or whatever, it's nonsense.

Nola added that such a claim could deter young New Zealanders from studying science at all.

The Royal Society’s investigation continues but not without opposition. Several fellows of the Royal Society have threatened to resign if the letter coauthors are disciplined. These include two recipients of the Society’s prestigious Rutherford Medal: Peter Schwerdtfeger, the director of Massey University’s Theoretical Chemistry and Physics Centre and Brian Boyd, University of Auckland literature professor. Schwerdtfeger called the investigation “shameful” and accused the Society of succumbing to woke ideology.

Famous UK evolutionary biologist Richard Dawkins has recently weighed in on the controversy as well, writing that:

The Royal Society of New Zealand … is supposed to stand for science. Not ‘Western’ science, not ‘European’ science, not ‘White’ science, not ‘Colonialist’ science. Just science. Science is science is science, and it doesn’t matter who does it, or where … True science is evidence­-based, not tradition-based.

Next: No Evidence That Islands Are Sinking Due to Rising Seas

No Evidence That Thwaites Glacier in Antarctica Is about to Collapse

Contrary to recent widespread media reports and dire predictions by a team of earth scientists, Antarctica’s Thwaites Glacier – the second fastest melting glacier on the continent – is not on the brink of collapse. The notion that catastrophe is imminent stems from a basic misunderstanding of ice sheet dynamics in West Antarctica.

The hoopla began with publication of a research study in November 2021 and a subsequent invited presentation to the AGU (American Geophysical Union). Both postulated that giant cracks recently observed in the Thwaites Eastern Ice Shelf (pictured to the left) may cause the whole ice shelf to shatter within as little as five years. The cracks result from detachment of the ice shelf’s seaward edge from an underwater mountain about 40 kilometers (25 miles) offshore that pins the shelf in place like a cork in a bottle.

Because the ice shelf already floats on the ocean, collapse of the shelf itself and release of a flotilla of icebergs wouldn’t cause global sea levels to rise. But the researchers argue that loss of the ice shelf would speed up glacier flow, increasing the contribution to sea level rise of the Thwaites Glacier – often dubbed the “doomsday glacier” – from 4% to 25%. A sudden increase of this magnitude would have a devastating impact on coastal communities worldwide. The glacier’s location is indicated by the lower red dot in the figure below.  

But such a drastic scenario is highly unlikely, says geologist and UN IPCC expert reviewer Don Easterbrook. The misconception is about the submarine “grounding” of the glacier terminus, the boundary between the glacier and its ice shelf extending out over the surrounding ocean, as illustrated in the next figure.

The grounding line of the Thwaites Glacier, shown in red in the left figure below, has been retreating since 2000. According to the study authors, this spells future disaster: the retreat, they say, will lead to dynamic instability and greatly accelerated discharge of glacier ice into the ocean, by as much as three times.

As evidence, the researchers point to propagating rifts on the top of the ice shelf and basal crevasses beneath it, both of which are visible in the satellite image above, the rifts as diagonal lines and the crevasses as nearly vertical ones. The crevasses arise from basal melting produced by active volcanoes underneath West Antarctica combined with so-called circumpolar deep water warmed by climate change.

However, as Easterbrook explained in response to a 2014 scare about the adjacent Pine Island glacier, this reasoning is badly flawed since a glacier is not restrained by ice at its terminus. Rather, the terminus is established by a balance between ice gains from snow accumulation and losses from melting and iceberg calving. The removal of ice beyond the terminus will not cause unstoppable collapse of either the glacier or the ice sheet behind it.

Other factors are important too, one of which is the source area of Antarctic glaciers. Ice draining into the Thwaites Glacier is shown in the right figure above in dark green, while ice draining into the Pine Island glacier is shown in light green; light and dark blue represent ice draining into the Ross Sea to the south of the two glaciers. The two glaciers between them drain only a relatively small portion of the West Antarctic ice sheet, and the total width of the Thwaites and Pine Island glaciers constitutes only about 170 kilometers (100 miles) of the 4,000 kilometers (2,500 miles) of West Antarctic coastline.

Of more importance are possible grounding lines for the glacier terminus. The retreat of the present grounding line doesn’t mean an impending calamity because, as Easterbrook points out, multiple other grounding lines exist. Although the base of much of the West Antarctic ice sheet, including the Thwaites glacier, lies below sea level, there are at least six potential grounding lines above sea level, as depicted in the following figure showing the ice sheet profile. A receding glacier could stabilize at any of these lines, contrary to the claims of the recent research study.

As can be seen, the deepest parts of the subglacial basin lie beneath the central portion of the ice sheet where the ice is thickest. What is significant is the ice thickness relative to its depth below sea level. While the subglacial floor at its deepest is 2,000 meters (6,600 feet) below sea level, almost all the subglacial floor in the above profile is less than 1,000 meters (3,300 feet) below the sea. Since the ice is mostly more than 2,500 meters (8,200 ft) thick, it couldn’t float in 1,000 meters (3,300 feet) of water anyway.
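
The flotation argument can be made explicit with Archimedes’ principle: floating ice rides with roughly 90% of its thickness submerged, so grounded ice can only lift off where the water is deeper than about 0.9 times the ice thickness. The sketch below applies that standard criterion to the round numbers in the preceding paragraph; it is my illustration of the point, not a calculation from the cited study.

```python
# Simple hydrostatic (Archimedes) check on whether thick ice can float.
RHO_ICE = 917.0        # kg/m^3
RHO_SEAWATER = 1028.0  # kg/m^3

ice_thickness_m = 2500.0
water_depth_m = 1000.0

# Ice floats only if the water is deeper than the ice's submerged draft.
draft_m = ice_thickness_m * RHO_ICE / RHO_SEAWATER
print(f"water depth needed to float: {draft_m:.0f} m; available: {water_depth_m:.0f} m")
print("floats" if water_depth_m > draft_m else "stays grounded")
```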

Next: Science Under Renewed Attack: New Zealand Proposal to Equate Maori Mythology with Science

Sudden Changes in Ocean Currents Warmed Arctic, Cooled Antarctic in Past

Abrupt changes in ocean currents – and not greenhouse gases – were responsible for sudden warming of the Arctic and for sudden cooling in the Antarctic at different times in the past, according to two recent research studies. The Antarctic cooling marked the genesis of the now massive Antarctic ice sheet.

The first study, by a team of European scientists, discovered that the expansion of warm Atlantic Ocean water flowing into the Arctic caused sea surface temperatures in the Fram Strait east of Greenland to rise by about 2 degrees Celsius (3.6 degrees Fahrenheit) as early as 1900. The phenomenon, known as “Atlantification” of the Arctic, is important because it precedes instrumental measurements of the effect by several decades and is not simulated by computer climate models.

The conclusion is based on an 800-year reconstruction of Atlantification along the Fram Strait, which separates Atlantic waters from the Arctic Ocean. The researchers used marine sediment cores as what they call a “natural archive” of past climate variability, deriving the chronological record from radionuclide dating.

Shown in the figure below is the sea surface temperature and Arctic sea ice extent from 1200 to 2000. The blue curve represents the reconstructed mean summer temperature (in degrees Celsius) of Atlantic waters in the eastern Fram strait, while the red curve indicates the April retreat (in kilometers) of the sea ice edge toward the Arctic. You can see clearly that the seawater temperature increased abruptly around 1900, after centuries of remaining constant, and that sea ice began to retreat at the same time, after at least a century of extending about 200 kilometers farther into the strait.

Along with temperature, the salinity of Atlantic waters in the strait suddenly increased also. The researchers suggest that this Atlantification phenomenon could have been due to weakening of two ocean currents – the AMOC (Atlantic Meridional Overturning Circulation) and the SPG (Subpolar Gyre), a circular current south of Greenland – at the end of the Little Ice Age. The AMOC forms part of the ocean conveyor belt that redistributes seawater and heat around the globe.

This abrupt change in ocean currents is thought to have redistributed nutrients, heat and salt in the northeast Atlantic, say the study authors, but is unlikely to be associated with greenhouse gases. The change caused subtropical Atlantic waters to flow northward through the Fram Strait, as illustrated schematically in the figure below; the halocline is the subsurface layer in which salinity changes sharply from low (at the surface) to high. The WSC (West Spitsbergen Current) carries heat and salt to the Arctic and keeps the eastern Fram Strait ice-free.

Sudden cooling occurred in the Antarctic but millions of years earlier, a second study has found. Approximately 34 million years ago, a major reorganization of ocean currents in the Southern Ocean resulted in Antarctic seawater temperatures abruptly falling by as much as 5 degrees Celsius (9 degrees Fahrenheit). The temperature drop initiated growth of the Antarctic ice sheet, at the same time that the earth underwent a drastic transition from warm Greenhouse to cold Icehouse conditions.

This dramatic cooling was caused by tectonic events that opened up two underwater gateways around Antarctica, the international team of researchers says. The gateways are the Tasmanian Gateway, formerly a land bridge between Antarctica and Tasmania, and Drake Passage, once a land bridge from Antarctica to South America. The scientists studied the effect of tectonics using a high-resolution ocean model that includes details such as ocean eddies and small-scale seafloor roughness.

After tectonic forces caused the two land bridges to submerge, the present-day ACC (Antarctic Circumpolar Current) began to flow. This circumpolar current, although initially less strong than today, acted to weaken the flow of warm waters to the Antarctic coast. As the two gateways slowly deepened, the warm-water flow weakened even further, causing the relatively sudden cooling event.

Little cooling occurred before one or both gateways subsided to a depth of more than 300 meters (1,000 feet). After the second gateway had subsided from 300 meters (1,000 feet) to 600 meters (2,000 feet), surface waters along the entire Antarctic coast cooled by 2 to 3.5 degrees Celsius (3.6 to 6.3 degrees Fahrenheit). And once the second gateway had subsided below 600 meters (2,000 feet), the temperature of Antarctic coastal waters decreased another 0.5 to 2 degrees Celsius (0.9 to 3.6 degrees Fahrenheit). The next figure depicts the gradual opening of the two gateways.

Although declining CO2 levels in the atmosphere may have played a minor role, the study authors conclude that undersea tectonic changes were the key factor in altering Southern Ocean currents and in creating our modern-day Icehouse world.

Next: No Evidence That Thwaites Glacier in Antarctica Is about to Collapse

El Niño and La Niña May Influence the Climate More than Greenhouse Gases

The familiar El Niño and La Niña cycles are known to cause drastic fluctuations in global temperature, along with often catastrophic climatic effects in tropical regions of the Pacific Ocean. What is less well known is that the powerful ocean oscillations have been a feature of our climate for at least 20,000 years – that is, since before the most recent ice age ended.

A 2005 study established a complete record of El Niño events in the southeastern Pacific, by examining marine sediment cores drilled off the coast of Peru. The cores contain an El Niño signature in the form of tiny, fine-grained stone fragments, washed into the sea by multiple Peruvian rivers following floods on the continent caused by heavy El Niño rainfall. As indicated in the adjacent figure, the study site was approximately 80 kilometers (50 miles) from Lima at a depth of 184 meters (604 feet).

Northern Peru sees the heaviest El Niño rainfall, generating floods capable of dispersing large amounts of fine-grained river sediments. Smaller amounts of rainfall in central and southern Peru, which are not caused by El Niño, don’t result in flooding with the same dispersal capability.

The study authors classified the flood event signal as very strong when the concentration of stone fragments, known as lithics, was more than two standard deviations above the centennial mean. The frequency of these very strong events over the last 12,000 years is illustrated in the next figure; the black and gray bars show the frequency as the number of 500- and 1,000-year floods, respectively. Radiocarbon dating of the sediment cores was used to establish the timeline.
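
The classification rule is easy to express in code. The sketch below flags “very strong” events in a synthetic lithic-concentration series whenever a value exceeds the mean of its century by more than two standard deviations; it mirrors the criterion described above but is not the study’s own analysis.

```python
import numpy as np

# Synthetic annual lithic concentrations for one century (illustration only).
rng = np.random.default_rng(2)
lithics = rng.gamma(shape=2.0, scale=1.0, size=100)

# "Very strong" = more than two standard deviations above the centennial mean.
threshold = lithics.mean() + 2 * lithics.std()
very_strong_years = np.flatnonzero(lithics > threshold)

print(f"threshold: {threshold:.2f}; very strong events at year indexes: {very_strong_years}")
```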

It can be seen that the number of very strong Peruvian flood events peaked around 9,500 years ago and again about 2,500 years ago, since when the number has been decreasing. No extreme floods occurred at all from about 5,500 to 7,500 years in the past.

A more detailed record is presented in the following figure, showing the variation over 20,000 years of the sea surface temperature off Peru (top), the lithic concentration (bottom) and a proxy for lithic concentration (middle). Sea surface temperatures were derived from chemical analysis of the marine sediment cores.

As indicated in this figure, the lithic concentration and therefore El Niño strength were high around 2,000 and 10,000 years ago – approximately the same periods when the most devastating floods occurred. The figure also reveals the absence of strong El Niño activity from 5,500 to 7,500 years ago, a dry interval without any major Peruvian floods.

But it’s seen that El Niños were strong in other eras too. During this 20,000-year span, El Niños first became prominent between 17,000 and 16,000 years ago, at the same time that sea surface temperatures jumped several degrees. The initial rise in both El Niños and ocean temperature was followed by roughly centennial fluctuations, alternating between weaker and stronger El Niño activity. After the gap from 7,500 to 5,500 years ago, El Niños surged again, as did sea surface temperatures.

On a finer scale, El Niños during the last two millennia were distinctly stronger than their modern counterparts between 2,000 and 1,300 years ago, then relatively weak during the MWP (Medieval Warm Period) from about AD 800 to 1300. During the LIA (Little Ice Age), from about 1500 to 1850, El Niños strengthened once more before falling back to their present-day levels.

It may seem counterintuitive that El Niños, which release vast amounts of heat from the Pacific Ocean into the atmosphere and often raise the global temperature by several tenths of a degree for a year or so, are associated historically with prolonged periods of cooling such as the LIA. But ecologist Jim Steele has explained this phenomenon as arising from the absence of La Niña conditions during an El Niño. La Niña is the cool phase of the so-called ENSO (El Niño–Southern Oscillation), El Niño being the warm phase.

In a La Niña event, east-to-west trade winds cause warm water heated by the sun to pile up in the western tropical Pacific. Removal of this solar-heated water from the eastern Pacific allows cooler subsurface waters to upwell there, replacing the surface waters transported westward and causing a temporary decline in global temperatures. But at the same time, the ocean gains heat at greater depths.

With the absence of this recharging of ocean heat during an El Niño, global cooling sets in for an extended period. Such cooling is usually associated with lower heat output from the sun, characterized by a falloff in the average monthly number of sunspots. Conversely, La Niñas usually accompany periods of higher solar output and result in extended global warming, as occurred during the MWP.

El Niño and La Niña have been major influences on our climate for many millennia and will continue to be. Until they are better understood, we can’t be sure they play less of a role in global warming than greenhouse gases.

Next: Sudden Changes in Ocean Currents Warmed Arctic, Cooled Antarctic in Past

The Crucial Role of Water Feedbacks in Global Warming

One of the most important features of our climate system – and of the computer models developed to represent it – is feedback. Most people don’t know that without positive feedbacks, the climate would be so insensitive to CO2 and other greenhouse gases that global warming wouldn’t be a concern. Positive feedbacks amplify global warming, while negative feedbacks tamp it down.

A doubling of CO2, acting entirely on its own, would raise global temperatures only by a modest 1.1 degrees Celsius (2.0 degrees Fahrenheit). In climate models, it’s positive feedback from water vapor – by far the most abundant greenhouse gas – and, to a lesser extent, feedback from clouds, snow and ice, that boosts the warming effect of doubled CO2 alone to the predicted very likely range of 2 degrees Celsius (3.6 degrees Fahrenheit) to 5 degrees Celsius (9 degrees Fahrenheit).
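
The amplification itself can be illustrated with the textbook zero-dimensional feedback relation ΔT = ΔT0 / (1 − f), where ΔT0 is the no-feedback warming and f is the net feedback factor. The snippet below is only a sketch with arbitrary example values of f, not output from any climate model.

```python
# Illustrative only: the textbook feedback relation dT = dT0 / (1 - f),
# where dT0 is the no-feedback warming for doubled CO2 and f is the net
# feedback factor (f > 0 amplifies, f < 0 damps). The f values below are
# arbitrary examples, not results from any climate model.
dT0 = 1.1  # degrees Celsius, no-feedback warming for doubled CO2

for f in (-0.3, 0.0, 0.45, 0.7):
    dT = dT0 / (1.0 - f)
    print(f"net feedback factor f = {f:+.2f}  ->  warming = {dT:.1f} C")
```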

Contributions of the various greenhouse gases to global warming can be inferred from the figure below, which depicts the wavelength spectrum of thermal radiation transmitted through the atmosphere, with wavelength measured in micrometers. Greenhouse gases cause warming by absorbing a substantial portion of the cooling longwave radiation emitted upward by the earth. The lower panels of the figure show how water vapor absorbs strongly in several wavelength bands that don’t overlap those of CO2.
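
As a rough check on why this wavelength range matters, Wien’s displacement law puts the peak of the earth’s thermal emission near 10 micrometers – squarely among the absorption bands shown in the figure. The little calculation below is a back-of-envelope estimate of my own, not part of any cited study.

```python
# Back-of-envelope check: Wien's displacement law, lambda_max = b / T,
# with b ~ 2898 micrometer-kelvin. For a mean surface temperature of ~288 K,
# the earth's outgoing longwave radiation peaks near 10 micrometers, which is
# why absorption bands at these infrared wavelengths matter for warming.
b = 2898.0         # Wien constant, micrometer * K
T_surface = 288.0  # approximate global mean surface temperature, K

lambda_max = b / T_surface
print(f"Peak emission wavelength ~ {lambda_max:.1f} micrometers")
```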

The assumption that water vapor feedback is positive and not negative was originally made by the Swedish chemist Svante Arrhenius over a century ago. The feedback arises when slight CO2-induced warming of the earth causes more water to evaporate from oceans and lakes, and the extra moisture then adds to the heat-trapping water vapor already in the atmosphere. This amplifies the warming even more.

The magnitude of the feedback is critically dependent on how much of the extra water vapor ends up in the upper atmosphere as the planet warms, because that’s where heat escapes to outer space. An increase in moisture there means stronger, more positive water vapor feedback and thus more heat trapping.

The concentration of water vapor in the atmosphere declines steeply with altitude, more than 95% of it lying within 5 kilometers of the earth’s surface. Limited data do show that upper-atmosphere humidity increased slightly in the tropics during the 30-year period from 1979 to 2009, during which the globe warmed by about 0.5 degrees Celsius (0.9 degrees Fahrenheit). However, humidity diminished in the subtropics, and possibly at higher latitudes as well, over the same period.

But the predicted warming of 2 degrees Celsius (3.6 degrees Fahrenheit) to 5 degrees Celsius (9 degrees Fahrenheit) for doubled CO2 assumes that the water vapor concentration in the upper atmosphere increases at all latitudes as it heats up. In the absence of observational evidence for this assumption, we can’t be at all sure that the water vapor feedback is strong enough to produce temperatures in the predicted range.

The uncertainty over CO2 warming is exacerbated by lack of knowledge about another water feedback, from clouds. As I’ve discussed in another post, cloud feedback can be either positive or negative.

Positive cloud feedback is normally associated with an increase in high-level clouds such as cirrus clouds, which allow most of the sun’s incoming shortwave radiation to penetrate, but also act as a blanket inhibiting the escape of longwave heat radiation to space. More high-level clouds amplify warming that in turn evaporates more water and produces yet more clouds.

Negative cloud feedback can arise from a warming-induced increase in low-level clouds such as cumulus and stratus clouds. These clouds reflect 30-60% of the sun’s radiation back into space, acting like a parasol and thus cooling the earth’s surface. The cooling results in less evaporation that then reduces new cloud formation.

Conversely, a decrease in high-level clouds would imply negative cloud feedback, while a decrease in low-level clouds would imply positive feedback. Because of all these possibilities, together with the paucity of empirical data, it’s simply not known whether net cloud feedback in the earth’s climate system is one or the other – positive or negative.

If overall cloud feedback is negative, rather than positive as the computer models suggest, it’s possible that the climate system’s negative feedbacks – from clouds and from the lapse rate (the rate of temperature decrease with altitude in the lower atmosphere) – dominate the positive feedbacks from water vapor, snow and ice. The response of the climate to added CO2 in the atmosphere would then be to lessen, rather than magnify, the temperature increase from CO2 acting alone – the opposite of what climate models tell us. Most feedbacks in nature are negative, keeping the natural world stable.

So until water feedbacks are better understood, there’s little scientific justification for any political action on CO2 emissions.

Next: El Niño and La Niña May Influence the Climate More than Greenhouse Gases

Challenges to the CO2 Global Warming Hypothesis: (5) Peer Review Abused to Axe Skeptical Paper

A climate research paper featured in a previous post of mine has recently been removed by the publisher, following a post-publication review by seven new reviewers who all recommended rejection of the paper. This drastic action represents an abuse of the peer review process in my opinion, as the reviews are based on dubious science.

The paper in question was a challenge to the CO2 global warming hypothesis by French geologist Pascal Richet. From analysis of an Antarctic ice core, Richet postulates that greenhouse gases such as CO2 had only a minor effect on the earth’s climate over the past 423,000 years, and that any assumed forcing of climate by CO2 is incompatible with ice-core data.

Past atmospheric CO2 levels are calculated from ice cores by measuring the composition of air bubbles trapped in the ice, while past surface temperatures are derived from the ratio of the oxygen isotopes 18O and 16O in the ice itself. Data from the core, drilled at the Russian Vostok station in East Antarctica, are depicted in the figure below. The CO2 level is represented by the upper graphs (below the insolation data), which show the substantial drop in CO2 during an ice age; the associated drop in temperature ΔT is represented by the lower graphs.
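
For reference, the isotopic ratio is conventionally reported in delta notation relative to a standard – the definition below is the standard one, not a formula from Richet’s paper:

```latex
\delta^{18}\mathrm{O} \;=\; \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000 \ \text{per mil}
```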

It’s well known that the CO2 level during the ice ages closely mimicked changes in temperature, but the CO2 concentration lagged behind. What Richet observed is that the temperature peaks in the Vostok record are much narrower than the corresponding CO2 peaks. From this observation, he argued that CO2 can’t drive temperature since an effect can’t last for a shorter period than its cause.
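
Lead-lag relationships of this kind are often quantified with a lagged cross-correlation between the two series. The sketch below is a generic illustration of that technique – not Richet’s method – applied to evenly spaced, hypothetical temperature and CO2 arrays.

```python
import numpy as np

def best_lag(temperature, co2, max_lag=50):
    """Return the lag (in samples) at which the CO2 series correlates best
    with the temperature series; a positive lag means CO2 changes after
    temperature. Generic illustration for evenly spaced series."""
    t = np.asarray(temperature, dtype=float)
    c = np.asarray(co2, dtype=float)
    t = (t - t.mean()) / t.std()
    c = (c - c.mean()) / c.std()
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag > 0:
            r = np.corrcoef(t[:-lag], c[lag:])[0, 1]
        elif lag < 0:
            r = np.corrcoef(t[-lag:], c[:lag])[0, 1]
        else:
            r = np.corrcoef(t, c)[0, 1]
        corrs.append(r)
    return lags[int(np.argmax(corrs))]
```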

The seven negative reviews focused on two main criticisms. The first is that Richet supposedly fails to understand that CO2 can act both as a temperature driver, when CO2 leads, and as an amplifying feedback, when CO2 lags.

At the end of ice ages, it’s thought that a subtle change in the earth’s orbit around the sun initiated a sudden upward turn of the temperature. This slight warming was then amplified by feedbacks, including CO2 feedback triggered by a surge in atmospheric CO2 as it escaped from the oceans; CO2 is less soluble in warmer water. A similar but opposite chain of events is believed to have enhanced global cooling as the temperature fell at the beginning of an ice age. In both cases – deglaciation and glaciation – CO2 as a feedback lagged temperature.

However, several of Richet’s reviewers base their criticism on a 2012 paper, by paleoclimatologist Jeremy Shakun and coauthors, which proposes the somewhat preposterous notion that CO2 lagged temperature during the most recent glaciation, all through the subsequent ice age, and during 0.3 degrees Celsius (0.5 degrees Fahrenheit) of the initial warming as the ice age ended – but then switched roles from feedback to driver and led temperature during the remaining deglaciation.

Shakun’s proposal is illustrated in the left graph below, in which the blue curve shows the mean global temperature during deglaciation, the red curve represents the temperature in Antarctica and the yellow dots are the atmospheric CO2 concentration. The CO2 levels and Antarctic temperatures are derived from an ice core, as in Richet’s paper but using the so-called Dome C core, while global temperatures are calculated from proxy data obtained from ocean and lake sediments.

The apparent switch of CO2 from feedback to driver is clearly visible in the figure above about 17,500 years ago, when the temperature escalated sharply. Although the authors attempt to explain the sudden change as resulting from variability of the AMOC (Atlantic Meridional Overturning Circulation), their argument is hand-waving at best and does nothing to bolster their postulated dual role for CO2.

In any case, a detailed, independent analysis of the same proxy data has found there is so much data scatter that whether CO2 leads or lags the warming can’t even be established. This analysis is shown in the right graph above, where the green dots represent the temperature data and the black circles are the CO2 level.

All this invalidates the reviewers’ first main criticism of Richet’s paper. The second criticism is that Richet dismisses computer climate models as an unreliable tool for studying the effect of CO2 on climate, past or present. But, as frequently pointed out in these pages, climate models indeed have many weaknesses. These include the omission of many types of natural variability, exaggeration of predicted temperatures and the inability to reproduce the past climate accurately. Repudiation of climate models is therefore no reason to reject a paper.

Some of the reviewers’ lesser criticisms of Richet’s paper are justified, such as his analysis of only one Antarctic ice core when several are available, and his inappropriate philosophical and political comments in a scientific paper. But outright rejection of the paper smacks of bias against climate change skeptics and is an abuse of the time-honored tradition of peer review.

Next: The Crucial Role of Water Feedbacks in Global Warming

Ice Sheet Update (2): Greenland Ice Sheet Melting No Faster than Last Century

The climate doomsday machine makes much more noise about warming-induced melting of Greenland’s ice sheet than Antarctica’s, even though the Greenland sheet holds only about 10% as much ice. That’s because the smaller Greenland ice sheet is melting at a faster rate and contributes more to sea level rise. But the melt rate is no faster today than it was 90 years ago and appears to have slowed over the last few years.

The ice sheet, 2-3 km (6,600-9,800 feet) thick, consists of layers of compressed snow built up over at least hundreds of thousands of years. Melting takes place only during Greenland’s late spring and summer, the meltwater running over the ice sheet surface into the ocean, as well as funneling its way down through thick glaciers, helping speed up their flow toward the sea.

In addition to summer melting, the sheet loses ice at its edges from calving or breaking off of icebergs, and from submarine melting by warm seawater. Apart from these losses, a small amount of ice is gained over the long winter from the accumulation of compacted snow at high altitudes in the island’s interior. The net result of all these processes at the end of summer melt in August is illustrated in the adjacent figure, based on NASA satellite data.

The following figure depicts the daily variation, over the past year, of the estimated surface mass balance of the Greenland ice sheet – which includes gains from snowfall and losses from melt runoff, but not sheet edge losses – as well as the mean daily variation for the period from 1981 to 2010. The loss of ice during the summer months of June, July and August is clearly visible, though the summer loss was smaller in 2021 than in many years. An unusual, record-setting gain can also be seen in May 2021.

The next figure shows the average annual gain or loss of both the surface mass balance (in blue) and a measure of the total mass balance (in black), going all the way back to 1840. Most of the data comes from meteorological stations across Greenland. The total mass balance in this graph includes the surface mass balance, iceberg calving and submarine melting (combined in the gray dashed line), and melting from basal sources underneath the ice sheet, but not peripheral glaciers – glaciers that contribute 15 to 20% of Greenland’s total mass loss.
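
To keep the bookkeeping straight, here is a minimal sketch of how the components described above combine into a total mass balance; the sign convention, function name and example numbers are mine, purely for illustration.

```python
def total_mass_balance(surface_mass_balance, discharge, basal_melt):
    """Total mass balance in gigatonnes per year (illustrative sign
    convention: positive = ice gain).

    surface_mass_balance: snowfall accumulation minus melt runoff
    discharge: mass lost to iceberg calving plus submarine melting
    basal_melt: melting underneath the ice sheet
    Peripheral glaciers are excluded, as in the graph described above.
    """
    return surface_mass_balance - discharge - basal_melt

# Hypothetical example: an SMB gain of 350 Gt/yr offset by 480 Gt/yr of
# discharge and 25 Gt/yr of basal melting gives a net loss of 155 Gt/yr.
print(total_mass_balance(350.0, 480.0, 25.0))  # -> -155.0
```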

It can be seen that the rate of ice loss excluding glaciers has increased since about 2000. But it’s also clear from the graph that high short-term loss rates have occurred more than once in the past, notably in the 1930s and the 1950s – so the current shrinking of the Greenland ice sheet is nothing remarkable. There’s no obvious correlation with global temperatures, since the planet warmed from 1910 to 1940 but cooled from 1940 to 1970.

The total ice loss (not its rate) since 1972 is displayed in more detail in the final figure below. The estimates of total mass loss differ slightly because some include losses from peripheral glaciers and basal melting, while others don’t. Nevertheless, the inset showing mass loss from 2010 to 2020 suggests that the rate of loss may have slowed since about 2015.

The 2020 loss was 152 gigatonnes (168 gigatons), much lower than the average annual losses of 258 gigatonnes (284 gigatons) and 247 gigatonnes (272 gigatons) from 2002 to 2016 and 2012 through 2016, respectively. The 2020 loss also pales in comparison with the record high losses of 458 gigatonnes (505 gigatons) in 2012 and 329 gigatonnes (363 gigatons) in 2019. The 2021 loss is on track to be similar to 2020, according to estimates at the end of the summer melt season.

The Sixth Assessment Report of the UN’s IPCC (Intergovernmental Panel on Climate Change) maintains with high confidence that, between 2006 and 2018, melting of the Greenland ice sheet and peripheral glaciers was causing sea levels to rise by 0.63 mm (25 thousandths of an inch) per year. This can be compared with a rise of 0.37 mm (15 thousandths of an inch) per year from melting of Antarctic ice. However, the rate of rise from Greenland ice losses may be falling, as discussed above.

If the rate of Greenland ice loss were to remain at its 2012 to 2016 average of 247 gigatonnes (272 gigatons) per year, which is an annual loss of about 0.01% of the total mass of the ice sheet, it would take another 10,000 years for all Greenland’s ice to melt. If the rate stays at the 2020 value of 152 gigatonnes (168 gigatons) per year, the ice sheet would last another 17,000 years.
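
The arithmetic behind these lifetimes is straightforward. The sketch below assumes a total ice-sheet mass of roughly 2.6 million gigatonnes and the commonly used conversion of about 362 gigatonnes of ice per millimeter of global sea level – both round figures of mine, not numbers taken from the studies cited here.

```python
# Rough lifetime estimate, assuming a total Greenland ice-sheet mass of
# about 2.6 million gigatonnes and roughly 362 gigatonnes of ice per
# millimetre of global mean sea level rise (both round figures).
TOTAL_MASS_GT = 2.6e6
GT_PER_MM_SEA_LEVEL = 362.0

for loss_rate in (247.0, 152.0):  # Gt per year, from the post
    years_to_melt = TOTAL_MASS_GT / loss_rate
    slr_mm_per_yr = loss_rate / GT_PER_MM_SEA_LEVEL
    print(f"{loss_rate:.0f} Gt/yr -> ~{years_to_melt:,.0f} years to melt, "
          f"~{slr_mm_per_yr:.2f} mm/yr of sea level rise")
```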

Next: Challenges to the CO2 Global Warming Hypothesis: (5) Peer Review Abused to Axe Skeptical Paper

Ice Sheet Update (1): Evidence That Antarctica Is Cooling, Not Warming

Melting of the Antarctic and Greenland ice sheets due to climate change has led to widespread panic about the future impact of global warming. But, as we’ll see in this and a subsequent post, Antarctica may not be warming overall, while the rate of ice loss in Greenland has slowed recently.

The kilometers-thick Antarctic ice sheet contains about 90% of the world’s freshwater ice and would raise global sea levels by about 60 meters (200 feet) were it to melt completely. The Sixth Assessment Report of the UN’s IPCC (Intergovernmental Panel on Climate Change) maintains with high confidence that, between 2006 and 2018, melting of the Antarctic ice sheet was causing sea levels to rise by 0.37 mm (15 thousandths of an inch) per year, contributing about 10% of the global total.
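
For perspective, a quick back-of-envelope calculation of my own – not the IPCC’s – shows how long complete melting would take if sea levels kept rising at that quoted rate:

```python
# Back-of-envelope perspective: if Antarctic melting continued to raise sea
# level at the IPCC's quoted 0.37 mm per year, eliminating the ice sheet's
# full ~60 m sea level equivalent would take on the order of 160,000 years.
full_melt_mm = 60.0 * 1000.0  # ~60 metres expressed in millimetres
rate_mm_per_yr = 0.37

print(f"~{full_melt_mm / rate_mm_per_yr:,.0f} years")
```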

By far the largest region is East Antarctica, which covers two thirds of the continent as seen in the figure below and holds nine times as much ice by volume as West Antarctica. The hype about imminent collapse of the Antarctic ice sheet is based on rapid melting of the glaciers in West Antarctica; the glaciers contribute an estimated 63% (see here) to 73% (here) of the annual Antarctic ice loss. East Antarctica, on the other hand, may not have shed any mass at all – and may even have gained slightly – over the last three decades, due to the formation of new ice resulting from enhanced snowfall.  

The influence of global warming on Antarctica is uncertain. In an earlier post, I reported the results of a 2014 research study that concluded West Antarctica and the small Antarctic Peninsula, which points toward Argentina, had warmed appreciably from 1958 to 2012, but East Antarctica had barely heated up at all over the same period. The warming rates were 0.22 degrees Celsius (0.40 degrees Fahrenheit) and 0.33 degrees Celsius (0.59 degrees Fahrenheit) per decade, for West Antarctica and the Antarctic Peninsula respectively – both faster than the global average.

But a 2021 study reaches very different conclusions, namely that both West Antarctica and East Antarctica cooled between 1979 and 2018, while the Antarctic Peninsula warmed but at a much lower rate than found in the 2014 study. Both studies are based on reanalyses of limited Antarctic temperature data from mostly coastal meteorological stations, in an attempt to interpolate temperatures in the more inaccessible interior regions of the continent.

This later study appears to carry more weight as it incorporates data from 41 stations, whereas the 2014 study includes only 15 stations. The 2021 study concludes that East Antarctica and West Antarctica have cooled since 1979 at rates of 0.70 degrees Celsius (1.3 degrees Fahrenheit) per decade and 0.42 degrees Celsius (0.76 degrees Fahrenheit) per decade, respectively, with the Antarctic Peninsula having warmed at 0.18 degrees Celsius (0.32 degrees Fahrenheit) per decade.

It’s the possible cooling of West Antarctica that’s most significant, because of ice loss from thinning glaciers. Ice loss and gain rates from Antarctica since 2003, measured by NASA’s ICESat satellite, are illustrated in the next figure, in which dark reds and purples show ice loss and blues show gain.

The high loss rates along the coast of West Antarctica have been linked to thinning of the floating ice shelves that terminate glaciers, by so-called circumpolar deep water warmed by climate change. Although disintegration of an ice shelf already floating on the ocean doesn’t raise sea levels, a retreating ice shelf can accelerate the downhill flow of glaciers that feed the shelf. It’s thought this can destabilize the glaciers and the ice sheets behind them.

However, not all the melting of West Antarctic glaciers is due to global warming and the erosion of ice shelves by circumpolar deep water. As I’ve discussed in a previous post, active volcanoes underneath West Antarctica are melting the ice sheet from below. One of these volcanoes is making a major contribution to melting of the Pine Island Glacier, which is adjacent to the Thwaites Glacier in the first figure above and is responsible for about 25% of the continent’s ice loss.

If the Antarctic Peninsula were to cool along with East Antarctica and West Antarctica, the naturally occurring SAM (Southern Annular Mode) – the north-south movement of a belt of strong southern westerly winds surrounding Antarctica – could switch from its present positive phase to negative. A negative SAM would result in less upwelling of circumpolar deep water, thus reducing ice shelf thinning and the associated melting of glaciers.

As seen in the following figure, the 2021 study’s reanalysis of Antarctic temperatures shows an essentially flat trend for the Antarctic Peninsula since the late 1990s (red curve); warming occurred only before that time. The same behavior is even evident in the earlier 2014 study, which goes back to 1958. So future cooling of the Antarctic Peninsula is not out of the question. The South Pole in East Antarctica this year experienced its coldest winter on record.

Next: Ice Sheet Update (2): Evidence That Greenland Melting May Have Slowed Down

Sea Ice Update: No Evidence for Recent Ice Loss

Climate activists have long lamented the supposedly impending demise of Arctic sea ice due to global warming. But, despite the constant drumbeat of apocalyptic predictions, the recently reached minimum extent of Arctic ice in 2021 is no smaller than it was back in 2008.  And at the other end of the globe, the sea ice around Antarctica has been expanding for at least 42 years.

Scientific observations of sea ice in the Arctic and Antarctic have only been possible since satellite measurements began in 1979. The figure below shows satellite-derived images of Arctic sea ice extent in the summer of 1979 (left image), and the summer (September) and winter (March) of 2021 (right image, with September on the left). Sea ice shrinks during summer months and expands to its maximum extent during the winter.

Over the interval from 1979 to 2021, Arctic summer ice extent decreased by approximately 30%; while it still embraces northern Greenland, it no longer reaches the Russian coast. The left graph in the next figure compares the monthly variation of Arctic ice extent from its March maximum to the September minimum, for the years 2021 (blue curve) and 2008 (green curve). The 2021 summer minimum is seen to be almost identical to that in 2008, with the black curve depicting the median extent over the period from 1981 to 2010.

The right graph in the figure shows the estimated month-by-month variation of Arctic ice volume in recent years. The volume depends on both ice extent and its thickness, which varies with location as well as season – the thickest, and oldest, winter ice currently lying along the northern coasts of the Canadian Arctic Archipelago and Greenland.

Arctic ice thickness is notoriously difficult to measure, the best data coming from limited submarine observations. According to one account based on satellite data, more than 75% of the Arctic winter ice pack today consists of thin ice just a few months old, whereas in the past the proportion was only 50%. However, these estimates are unreliable, and a trio of Danish research institutions that monitor the Arctic estimates that the ice volume has changed very little over the last 17 years, as seen in the figure above.

Another indication that Arctic ice is not melting as fast as climate activists claim is the state of the Northwest Passage – the waterway between the Atlantic and Pacific Oceans through the Arctic Ocean, along the coast of North America. Although both the southern and northern routes of the Northwest Passage have been open intermittently since 2007, ice conditions this year are relatively severe compared to the past two decades: thicker multiyear ice is the main hazard. The northern deep-water route is already choked with ice and will not open until at least next year.

In the Antarctic, sea ice almost disappears completely during the southern summer and reaches its maximum extent in September, at the end of winter. This is illustrated in the satellite-derived images below, showing the summer minimum (left image) and winter maximum extent (right image) in 2021. The Antarctic winter sea ice extent is presently well above its long-term average.

In fact, despite the long-term loss of ice in the Arctic, the sea ice around Antarctica has expanded slightly during the satellite era, as shown in the following figure up to 2020. Although the maximum Antarctic ice extent (shown in red) fluctuates greatly from year to year, and took a tumble in 2017, it has grown at an average rate between 1% and 2% per decade (dashed red line) since 1979.

Note that the ice losses shown in this figure are “anomalies” – departures from the monthly mean ice extent for the period from 1981 to 2010 – rather than the minimum extent of summer ice. So the Arctic data here don’t reveal that the 2021 summer minimum was almost identical to that of 2008, as illustrated in the earlier figures.
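
For readers unfamiliar with the term, an anomaly is simply the departure from a climatological baseline for the same calendar month. The sketch below shows the usual calculation; the function and its inputs are illustrative, not the satellite agencies’ actual processing code.

```python
import numpy as np

def monthly_anomalies(extents, months, baseline):
    """Sea ice extent anomalies: observed extent minus the climatological
    mean (e.g. the 1981-2010 average) for the same calendar month.

    extents:  observed extents, million km^2
    months:   calendar month (1-12) of each observation
    baseline: 12 climatological monthly means, million km^2
    All inputs are illustrative; this is not an agency's actual code."""
    extents = np.asarray(extents, dtype=float)
    months = np.asarray(months, dtype=int)
    baseline = np.asarray(baseline, dtype=float)
    return extents - baseline[months - 1]
```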

Several possible reasons have been put forward for the greater fluctuations in Antarctic winter sea ice compared to that in the Arctic. One analysis links the Antarctic oscillations to ENSO (the El Niño – Southern Oscillation), a natural cycle that causes variations in mean temperature and other climatic effects in tropical regions of the Pacific Ocean. The Pacific impinges on a substantial portion of the Southern Ocean that surrounds Antarctica.

The analysis suggests that the very large winter ice extents of 2012, 2013 and 2014 were a consequence of the 2012 La Niña, which is the cool phase of ENSO. Reinforcing that idea is the fact that this year’s surge in ice extent follows another La Niña earlier in 2021; the big loss of sea ice in 2017 could be associated with 2016’s strong El Niño, the warm phase of ENSO. The natural Pacific Decadal Oscillation may also play a role.

Next: Ice Sheet Update (1): Evidence That Antarctica Is Cooling, Not Warming

What “The Science” Really Says about the Coronavirus Pandemic

The answer is not much – at least, not yet.

While advocates of lockdowns and masking mandates claim to be invoking “the science,” science by its very nature can’t provide short-term answers to the efficacy of such measures. The scientific method demands extensive data gathering and testing, which generally take longer than the duration of a pandemic. An abundance of scientific evidence does exist for the effectiveness of vaccination, but whether vaccines can completely eradicate the coronavirus is an open question. Social distancing as a preventive measure is also on firm scientific ground.

Lockdowns have been used for centuries as a way to slow the spread of disease, including the Black Death plague in the 14th century and the Spanish Flu in 1918-1919. But all they do is initially reduce transmission of the virus, and to claim otherwise is scientifically disingenuous.

The primary purpose of slowing down the spread of a contagious and deadly disease is to prevent the healthcare system from becoming overwhelmed. If more people get sick enough to require hospitalization than the number of hospital beds available, some won’t get adequate treatment and deaths will increase. However, lockdowns also have a devastating effect on a society’s economic and mental health. Studies have shown that negative socioeconomic impacts greatly limit the effectiveness of lockdowns over time.

In some countries such as Taiwan and Australia, death rates from COVID-19 are very low so far after repeated lockdowns, causing lockdown supporters to link the two. But other nations with small populations such as Israel have much higher mortality rates despite continued shutdowns. So there’s no correlation and, in fact, many other factors influence the death rate.  

The science behind masking, shown below in use during the Spanish Flu pandemic, is muddier yet and has been badly contaminated by politics. Unfortunately, the gold standard in medical testing – the RCT (randomized controlled trial) – isn’t the basis for evaluating the benefit of mask-wearing by institutions like the U.S. CDC (Centers for Disease Control and Prevention) or the WHO (World Health Organization).

In an RCT or clinical trial, participants are divided randomly into two comparable groups, with the intervention applied to only one group and the other used as a control. Neither the researchers nor the participants know which group each participant belongs to until the very end. Such double-blind trials are therefore able to establish causation.
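
To illustrate why random assignment supports causal claims, here is a toy simulation with entirely hypothetical infection probabilities: participants are randomized into intervention and control groups, and the two infection rates are then compared. It is a sketch of the principle, nothing more.

```python
import random

random.seed(1)

# Toy randomized trial: hypothetical infection probabilities, chosen only
# to illustrate random assignment and group comparison.
N = 10_000
P_CONTROL = 0.10       # assumed infection risk without the intervention
P_INTERVENTION = 0.08  # assumed infection risk with the intervention

infections = {"control": 0, "intervention": 0}
counts = {"control": 0, "intervention": 0}

for _ in range(N):
    group = random.choice(["control", "intervention"])  # random assignment
    counts[group] += 1
    p = P_INTERVENTION if group == "intervention" else P_CONTROL
    infections[group] += random.random() < p

for group in ("control", "intervention"):
    rate = infections[group] / counts[group]
    print(f"{group}: {rate:.3f} infection rate")
```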

For masks, just 14 RCTs have been carried out across the world to study how well masks guard against respiratory diseases, primarily influenza. Nearly all the trials tested so-called surgical, three-ply paper masks, rather than the N95 respirator style. Of the 14 trials, just two investigated the claim that wearing a mask benefits others who come in close contact with the mask wearer, while the other 12 tested the combination of benefit to others and protection for the wearer.

A recent analysis by a prominent statistician of all 14 RCTs, which include the only trial to test mask-wearing’s specific effectiveness against COVID-19, reveals that masks have no significant effect on either the wearer or those in close proximity, although some trials were ambiguous. There was no strong evidence that N95 masks performed any better than surgical or cloth masks. Exactly the same conclusions were reached in two independent analyses of 13 (see here) and 11 (here) of the same RCTs.

The CDC, however, relies on observational studies conducted since the start of the coronavirus pandemic, not RCTs, in issuing its masking guidance. An observational study is less scientific in being unable to assign a cause to an effect; it can only establish association.

Vaccination against infectious diseases, on the other hand, has a solid scientific basis. Pioneered by Edward Jenner at the end of the 18th century, vaccination has eradicated killer diseases such as smallpox and polio in many countries, and drastically curtailed others such as measles, mumps and pertussis (whooping cough).

Nevertheless, the science underlying vaccination against COVID-19 is incomplete. In the past it’s taken several years to develop a new vaccine, but the COVID-19 vaccines currently available were brought to market at lightning speed. Although such haste was seen as necessary to combat a rapidly proliferating virus, it meant shortening the RCTs designed to test vaccine efficacy, leaving questions such as long-term side effects and duration of effectiveness unresolved.

And barely understood as yet is the greater protection against infection acquired through natural immunity – the result of having recovered from a previous COVID-19 infection – compared with vaccination. This complicates calls for vaccine mandates, as those with natural immunity arguably don’t need to be vaccinated.

Moreover, the coronavirus is an RNA virus like influenza and so frequently mutates. This means that mandatory vaccination for the coronavirus is unlikely to be any more effective community-wide than a mandated flu vaccine would be. Regular COVID-19 booster shots will probably be needed, just like the flu.

That’s where science stands on the coronavirus. But rather than following the science, most decisions on lockdowns, masking and vaccination are ruled by politics.

Next: Sea Ice Update: No Evidence for Recent Ice Loss

Weather Extremes: Hurricanes and Tornadoes Likely to Diminish in 2021

Despite the brouhaha over the recent record-breaking heat wave in the Pacific Northwest and the disastrous floods in Europe and China, windy weather extremes – hurricanes and tornadoes – are attracting little media attention because both are on track for a relatively quiet season.

Scientists at the Climate Prediction Center of NOAA (the U.S. National Oceanic and Atmospheric Administration) don’t anticipate that 2021 will see the record-breaking 30 named storms of 2020, even though they think the total may still be above average. However, of last year’s 30 storms, only 13 became actual hurricanes, including 6 major hurricanes. The record annual highs are 15 hurricanes recorded in 2005 and 8 major hurricanes in 1950.

Hurricanes are classified by their sustained wind speeds on the Saffir-Simpson scale, ranging from Category 1, the weakest, to Category 5, the strongest. A major hurricane is defined as one in Category 3, 4 or 5, corresponding to a top wind speed of 178 km per hour (111 mph) or greater. NOAA predicts just 6 to 10 hurricanes this year, with 3 to 5 of those being in the major hurricane categories.
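
A small helper function makes those category boundaries concrete; the thresholds are the standard Saffir-Simpson ones in mph, while the function itself is just an illustration of mine.

```python
def saffir_simpson_category(wind_mph):
    """Return the Saffir-Simpson category (1-5) for a sustained wind speed
    in mph, or 0 if below hurricane strength. Standard thresholds."""
    if wind_mph < 74:
        return 0
    if wind_mph < 96:
        return 1
    if wind_mph < 111:
        return 2
    if wind_mph < 130:
        return 3
    if wind_mph < 157:
        return 4
    return 5

def is_major_hurricane(wind_mph):
    """Major hurricane = Category 3 or higher (111 mph and above)."""
    return saffir_simpson_category(wind_mph) >= 3

print(saffir_simpson_category(111), is_major_hurricane(111))  # -> 3 True
```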

Hurricanes in the Atlantic basin, which has the best quality data available in the world, do show heightened activity over the last 20 years, particularly in 2005 and 2020. This can be seen in the figure below, depicting the frequency of all Atlantic hurricanes from 1851 to 2020. But researchers have found that the apparent increase in recent times is not related to global warming.

Rather, say the scientists who work at NOAA and several universities, the increase reflects natural variability. Although enhanced evaporation from warming oceans provides more fuel for hurricanes, recent numbers have been artificially boosted by a big improvement in our ability to detect hurricanes, especially since the advent of satellite coverage in the late 1960s. And global warming can’t be the explanation, as the earth was cooling during the previous period of increased activity in the 1950s and 1960s.

Prior to that time, most data on hurricane frequency came from eyewitness accounts, which missed many hurricanes that never made landfall. What the researchers did was examine the eyewitness records, preserved by NOAA workers, in order to calculate the ratio of Atlantic hurricanes that didn’t come ashore to those that did, both in the modern era and in the past. Before the early 1970s, observations of non-landfalling hurricanes came primarily from ships at sea.

Then, using a model for the radius of hurricane or major hurricane winds, the researchers were able to estimate the number of hurricanes or major hurricanes going back to 1860 that were never recorded. Their analysis revealed that the recent hike in the hurricane count is nothing remarkable, being comparable to earlier surges in the early 1880s and late 1940s. In the U.S., the past decade was in fact the second quietest for landfalling hurricanes and landfalling major hurricanes since the 1850s. Hurricane Ida was the first major U.S. landfalling hurricane this year.
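
The adjustment the researchers describe can be caricatured very simply – the schematic below is my own simplification of the idea, not their model: scale the landfalling count by an estimated ratio of non-landfalling to landfalling hurricanes.

```python
def adjusted_basin_count(landfalling_count, nonlandfall_to_landfall_ratio):
    """Schematic undercount adjustment: estimate the basin-wide hurricane
    count from the number observed at landfall and an estimated ratio of
    non-landfalling to landfalling storms. Illustrative only."""
    return landfalling_count * (1.0 + nonlandfall_to_landfall_ratio)

# Hypothetical example: 6 landfalling hurricanes in a season and an
# estimated ratio of 1.5 non-landfalling storms per landfalling one
# implies roughly 15 hurricanes basin-wide.
print(adjusted_basin_count(6, 1.5))  # -> 15.0
```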

Tornadoes, which occur predominantly in the U.S., have been less violent and fewer in number than average so far in 2021. Like hurricanes, tornadoes are categorized according to wind speed, using the Enhanced Fujita scale that runs from EF0 to EF5; EF5 tornadoes have estimated wind speeds of more than 322 km per hour (200 mph).

Up to the end of August, 958 tornadoes had been reported by NOAA’s Storm Prediction Center in 2021 – of which 740 had been confirmed, according to Wikipedia. These numbers can be compared with the January to August average of 1035 confirmed tornadoes; the yearly average is 1253.

The annual incidence of all tornadoes in the U.S. shows no meaningful trend from 1950 to 2020, a period that included both warming and cooling spells, with net global warming of approximately 1.1 degrees Celsius (2.0 degrees Fahrenheit) during that time. But the number of strong tornadoes (EF3 or greater) has declined dramatically over the last half century, as seen in the next figure illustrating the number observed each year from 1954 to 2017.

Clearly, the trend is downward instead of upward. Indeed, the average number of strong tornadoes annually from 1986 to 2017 was 40% less than from 1954 to 1985. In May this year, there wasn’t a single strong tornado for the first time since record-keeping began in 1950. Although there’s debate over whether the current system for rating tornadoes is flawed, 2021 looks like being another quiet year.

Next: What “The Science” Really Says about the Coronavirus Pandemic

Latest UN Climate Report Is More Hype than Science

In its latest climate report, the UN’s IPCC (Intergovernmental Panel on Climate Change) falls prey to the hype usually characteristic of alarmists who ignore the lack of empirical evidence for the climate change narrative of “unequivocal” human-caused global warming.

Past IPCC assessment reports have served as the voice of authority for climate science and, even among those who believe in man-made climate change, as a restraining influence – being hesitant in linking weather extremes to a warmer world, for instance. But all that has changed in its Sixth Assessment Report, which the UN Secretary-General has hysterically described as “code red for humanity.”

Among other claims trumpeted in the report is the statement that “Evidence of observed changes in extremes such as heat waves, heavy precipitation, droughts, and tropical cyclones, and, in particular, their attribution to human influence, has strengthened since [the previous report].” This is simply untrue and actually contrary to the evidence, with the exception of heavy precipitation, which tends to increase with global warming because enhanced evaporation from tropical oceans puts more water vapor in the atmosphere.

In other blog posts and a recent report, I’ve shown how there’s no scientific evidence that global warming triggers extreme weather, or even that weather extremes are becoming more frequent. Anomalous weather events, such as heat waves, hurricanes, floods, droughts and tornadoes, show no long-term trend over more than a century of reliable data.

As one example, the figure below shows how the average global area and intensity of drought remained unchanged on average from 1950 to 2019, even though the earth warmed by about 1.1 degrees Celsius (2.0 degrees Fahrenheit) over that interval. The drought area is the percentage of total global land area, excluding ice sheets and deserts, while the intensity is characterized by the self-calibrating Palmer Drought Severity Index, which measures both dryness and wetness and classifies events as “moderate,” “severe” or “extreme.”
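
As a sketch of how such a drought-area statistic can be computed from a gridded self-calibrating PDSI field, the simplified function below flags grid cells at or below a chosen drought threshold. It is my own illustration, treating grid cells as equal-area, which a real analysis would not.

```python
import numpy as np

def drought_area_percent(pdsi_grid, land_mask, threshold=-2.0):
    """Percentage of (non-ice, non-desert) land area in drought, defined
    here as a self-calibrating PDSI at or below the given threshold
    (-2 moderate, -3 severe, -4 extreme). Simplified sketch: grid cells
    are treated as equal-area for brevity."""
    pdsi = np.asarray(pdsi_grid, dtype=float)
    mask = np.asarray(land_mask, dtype=bool)
    in_drought = (pdsi <= threshold) & mask
    return 100.0 * in_drought.sum() / mask.sum()
```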

Although the IPCC report claims, with high confidence, that “the frequency of concurrent heatwaves and droughts on the global scale” is increasing, the scientific evidence doesn’t support such a bold assertion. An accompanying statement that cold extremes have become less frequent and less severe is also blatantly incorrect.

Cold extremes are in fact on the rise, as I’ve discussed in previous blog posts (here and here). The IPCC’s sister UN agency, the WMO (World Meteorological Organization), does at least acknowledge the existence of cold weather extremes, but has no explanation for either their origin or their growing frequency. Cold extremes include prolonged cold spells, unusually heavy snowfalls and longer winter seasons. Why the IPCC should draw the wrong conclusion about them is puzzling.

In discussing the future climate, the IPCC makes use of five scenarios that project differing emissions of CO2 and other greenhouse gases. The scenarios start in 2015 and range from one that assumes very high emissions, with atmospheric CO2 doubling from its present level by 2050, to one assuming very low emissions, with CO2 declining to “net zero” by mid-century.

But, as pointed out by the University of Colorado’s Roger Pielke Jr., the estimates in the IPCC report are dominated by the highest emissions scenario. Pielke finds that this super-high emissions scenario accounts for 41.5% of all scenario mentions in the report, whereas the scenarios judged most likely under current trends account for a scant 18.4%. The hype inherent in the report becomes obvious when these percentages are compared with the corresponding ones in the Fifth Assessment Report: 31.4% and 44.5%, respectively.

Not widely known is that the supposed linkage between climate change and human emissions of greenhouse gases, as well as the purported connection between global warming and weather extremes, both depend entirely on computer climate models. Only the models link climate change or extreme weather to human activity. The empirical evidence does not – it merely shows that the planet is warming, not what’s causing the warming.

A recent article in the mainstream scientific journal Science surprisingly drew attention to the shortcomings of climate models, weaknesses that have been emphasized for years by climate change skeptics. Apart from falsely linking global warming to CO2 emissions – because the models don’t include many types of natural variability – the models greatly exaggerate predicted temperatures, and can’t even reproduce the past climate accurately. As leading climate scientist Gavin Schmidt says, “You end up with numbers for even the near-term that are insanely scary—and wrong.”

The new IPCC report, with its prognostications of gloom and doom, should have paid more attention to its modelers. In making wrong claims about the present climate, and relying too heavily on high-emissions scenarios for future projections, the IPCC has strayed from the path of science.

Next: Weather Extremes: Hurricanes and Tornadoes Likely to Diminish in 2021