Why Both Coronavirus and Climate Models Get It Wrong

Most coronavirus epidemiological models have been an utter failure in providing advance information on the spread and containment of the insidious virus. Computer climate models are no better, with a dismal track record in predicting the future.

This post compares the two types of model, looking at both their similarities and their differences. But similarities and differences aside, the models are still just that – models. Although I remarked in an earlier post that epidemiological models are much simpler than climate models, that doesn’t make them any more accurate.

Both epidemiological and climate models start out, as they should, with what’s known. In the case of the COVID-19 pandemic the knowns include data on the progression of past flu epidemics, and demographics such as population size, age distribution, social contact patterns and school attendance. Among the knowns for climate models are present-day weather conditions, the global distribution of land and ice, atmospheric and ocean currents, and concentrations of greenhouse gases in the atmosphere.

But the major weakness of both types of model is that numerous assumptions must be made to incorporate the many variables that are not known. Coronavirus and climate models have little in common with the models used to design computer chips, or to simulate nuclear explosions as an alternative to actual testing of atomic bombs. In both these instances, the underlying science is understood so thoroughly that speculative assumptions in the models are unnecessary.

Epidemiological and climate models cope with the unknowns by creating simplified pictures of reality involving approximations. Approximations in the models take the form of adjustable numerical parameters, often derisively termed “fudge factors” by scientists and engineers. The famous mathematician John von Neumann once said, “With four [adjustable] parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

One of the most important approximations in coronavirus models is the basic reproduction number R0 (“R naught”), which measures contagiousness. R0 is the average number of people that a single infected individual goes on to infect in a fully susceptible population, in the absence of any intervention. As shown in the figure below, R0 for COVID-19 is thought to be in the range from 2 to 3, much higher than the value of about 1.3 for typical seasonal flu, though less than the values for other infectious diseases such as measles.

[Figure: COVID-19 R0.jpg – estimated R0 of COVID-19 compared with other infectious diseases]

It’s COVID-19’s high R0 that makes the virus spread so easily, though the precise value is still uncertain. What governs how quickly the virus multiplies is the latent period – the interval just after infection during which an individual is not yet infectious – often approximated in practice by the incubation period, the time before symptoms appear. Together, R0 and the latent period determine the epidemic growth rate. Both are adjustable parameters in coronavirus models, along with factors such as the rate at which susceptible individuals become infected in the first place, travel patterns and any intervention measures taken.
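To make the role of these parameters concrete, here is a minimal sketch of a standard SEIR compartment model in Python. It is not any particular group’s code, and every numerical value in it is an illustrative assumption rather than a fitted estimate; the point is simply that R0, the latent period and the infectious period are knobs the modeler must choose.

```python
# Minimal SEIR model: Susceptible -> Exposed -> Infectious -> Removed.
# All parameter values below are illustrative assumptions, not fitted estimates.
R0 = 2.5                 # basic reproduction number (assumed)
latent_period = 5.0      # days before an exposed person becomes infectious (assumed)
infectious_period = 3.0  # days a person remains infectious (assumed)

sigma = 1.0 / latent_period      # E -> I rate
gamma = 1.0 / infectious_period  # I -> R rate
beta = R0 * gamma                # transmission rate implied by R0

N = 66_000_000                   # population size (roughly the UK)
S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
dt = 0.1                         # time step in days

history = []
for step in range(int(300 / dt)):           # simulate 300 days
    new_exposed    = beta * S * I / N * dt  # S -> E
    new_infectious = sigma * E * dt         # E -> I
    new_removed    = gamma * I * dt         # I -> R
    S -= new_exposed
    E += new_exposed - new_infectious
    I += new_infectious - new_removed
    R += new_removed
    history.append(I)

print(f"Peak number infectious: {max(history):,.0f}")
print(f"Ever infected by day 300: {R + I + E:,.0f}")
```

Change R0 from 2.5 to 2.0, or the latent period from five days to three, and the projected peak shifts dramatically, which is exactly why forecasts from different groups diverge so widely.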

In climate models, hundreds of adjustable parameters are needed to account for deficiencies in our knowledge of the earth’s climate. Some of the biggest inadequacies are in the representation of clouds and their response to global warming. This is partly because we just don’t know much about the inner workings of clouds, and partly because actual clouds are much smaller than the finest grid scale that even the largest computers can accommodate – so clouds are simulated in the models by average values of size, altitude, number and geographic location. Approximations like these are a major weakness of climate models, especially in the important area of feedbacks from water vapor and clouds.
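For a flavor of what such a cloud approximation looks like, here is a toy sketch of a Sundqvist-type diagnostic scheme, in which the cloud fraction assigned to a whole grid cell is computed from the cell’s average relative humidity and a tunable critical threshold. The threshold is exactly the kind of adjustable parameter described above; the numbers are illustrative and not drawn from any specific climate model.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-type diagnostic cloud fraction for one grid cell.

    rh      : grid-cell mean relative humidity (0-1)
    rh_crit : tunable critical humidity below which no cloud forms (assumed 0.8)
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(frac, 0.0, 1.0)

# Grid-cell mean humidities for a toy set of cells
rh_values = np.array([0.70, 0.80, 0.90, 0.95, 1.00])
for rh, cf in zip(rh_values, cloud_fraction(rh_values)):
    print(f"RH = {rh:.2f}  ->  cloud fraction = {cf:.2f}")
```

Tuning rh_crit up or down changes the simulated cloud cover, and hence the model’s radiation balance, which is why such parameters end up being adjusted to match observations.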

An even greater weakness of climate models is the unknowns that aren’t approximated at all, but simply omitted from simulations because modelers don’t know how to model them. These unknowns include natural variability such as ocean oscillations and indirect solar effects. While climate models do endeavor to simulate various ocean cycles, they are unable to predict the timing and climatic influence of cycles such as El Niño and La Niña – both of which cause drastic shifts in global climate – or the Pacific Decadal Oscillation. And the models make no attempt at all to include indirect effects of the sun, such as those involving solar UV radiation or cosmic rays from deep space.

As a result of all these shortcomings, the predictions of coronavirus and climate models are wrong again and again. Climate models are known even by modelers to run hot, by 0.35 degrees Celsius (0.6 degrees Fahrenheit) or more above observed temperatures. Coronavirus models, when fed data from this week, can probably make a reasonably accurate forecast about the course of the pandemic next week – but not a month, two months or a year from now. Dr. Anthony Fauci of the U.S. White House Coronavirus Task Force recently admitted as much.

Computer models have a role to play in science, but we need to remember that most of them depend on a certain amount of guesswork. It’s a mistake, therefore, to base scientific policy decisions on models alone. There’s no substitute for actual, empirical evidence.

Next: How Science Is Being Misused in the Coronavirus Pandemic

Coronavirus Epidemiological Models: (3) How Inadequate Testing Limits the Evidence

Hampering the debate over what action to take on the coronavirus, and over which epidemiological model is the most accurate, is a shortage of evidence. The evidence needed includes how infectious the virus is, how readily it’s transmitted, whether infection confers immunity and, if so, for how long. The answers to such questions can only be obtained from individual testing. But testing has been conspicuously inadequate in most countries, largely limited to those showing symptoms.

We know the number of deaths, those recorded at least, but a big unknown is the total number of people infected. This “evidence fiasco,” as eminent Stanford medical researcher and epidemiologist John Ioannidis describes it, creates great uncertainty about the lethality of COVID-19 and means that reported case fatality rates are meaningless. In Ioannidis’ words, “We don’t know if we are failing to capture infections by a factor of three or 300.”
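Ioannidis’ point is easily illustrated with arithmetic. In the sketch below, the numbers are hypothetical and chosen only for illustration: the same death count implies wildly different fatality rates depending on how many infections go undetected.

```python
# Hypothetical illustration: the same number of deaths implies very different
# fatality rates depending on how many infections were never captured by testing.
deaths = 10_000
confirmed_cases = 100_000

naive_cfr = deaths / confirmed_cases
print(f"Case fatality rate from confirmed cases only: {naive_cfr:.1%}")

for undercount_factor in (3, 30, 300):
    true_infections = confirmed_cases * undercount_factor
    ifr = deaths / true_infections
    print(f"If infections are undercounted {undercount_factor}x: "
          f"infection fatality rate = {ifr:.2%}")
```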

The following table lists the death rate, expressed as a percentage of known infections, for the countries with the largest number of reported cases as of April 16, and the most recent data for testing rates (per 1,000 people).

[Table: Table (2).jpg – reported cases, death rate (% of known infections) and tests per 1,000 people for the countries with the most reported cases as of April 16]

As Ioannidis emphasizes, the death rate calculated as a percentage of the number of cases is highly uncertain because of variations in testing rate. And the number of fatalities is likely an undercount, since most countries don’t include those who die at home or in nursing facilities, as opposed to hospitals.

Nevertheless, the data does reveal some stark differences from country to country. Two of the nations with the highest testing rates in the table, Italy and Germany, show markedly different death rates (13.1% and 2.9%, respectively) despite reporting broadly similar numbers of COVID-19 cases. The disparity has been attributed to differing demographics and levels of health in the two countries. And two of the countries with the lowest testing rates, France and Turkey, also differ widely in mortality, though Turkey has fewer cases to date.

Most countries, including the U.S., lack the ability to test a large number of people and no countries have reliable data on the prevalence of the virus in the population as a whole. Clearly, more testing is needed before we can get a good handle on COVID-19 and be able to make sound policy decisions about the disease.

Two different types of test are necessary. The first is a diagnostic test to discover how many people are currently infected, beyond those already diagnosed. A major problem in predicting the spread of the coronavirus has been the existence of asymptomatic individuals, possibly 25% or more of those infected, who unknowingly have the disease and transmit the virus to those they come in contact with.

A rapid diagnostic test for infection has recently been developed by U.S. medical device manufacturer Abbott Laboratories. The compact, portable Abbott device, which recently received emergency use authorization from the FDA (U.S. Food and Drug Administration), can deliver a positive (infected) result for COVID-19 in as little as five minutes and a negative (uninfected) result in 13 minutes. Together with a more sophisticated device for use in large laboratories, Abbott expects to provide about 5 million tests in April alone. Public health laboratories using other devices will augment this number by several hundred thousand.

That’s not the whole testing story, however. A negative result in the first test includes both those who have never been infected and those who have been infected but are now recovered. To distinguish between these two groups requires a second test – an antibody test that indicates which members of the community are immune to the virus as a result of previous infection.

A large number of 15-minute rapid antibody tests have been developed around the world. In the U.S., regulators say, more than 70 companies have sought approval to sell antibody tests in recent weeks, although only one so far has received FDA emergency use authorization. It’s not known how reliable the other tests are; some countries have purchased millions of antibody tests only to discover they were inaccurate. Among the other unknowns are the level of antibodies it takes to become immune and how long antibody protection against the coronavirus actually lasts.
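Reliability matters more than it might seem, especially while the true prevalence of past infection is still low. A quick Bayes’ rule sketch, with assumed sensitivity, specificity and prevalence values chosen purely for illustration, shows how even a seemingly accurate antibody test can return mostly false positives in a lightly infected population.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive antibody test reflects true past infection."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed values, purely for illustration
sensitivity = 0.95   # fraction of truly infected who test positive
specificity = 0.95   # fraction of never-infected who test negative

for prevalence in (0.01, 0.05, 0.20):
    ppv = positive_predictive_value(sensitivity, specificity, prevalence)
    print(f"Prevalence {prevalence:.0%}: a positive result is correct "
          f"{ppv:.0%} of the time")
```

At 1% prevalence, even this nominally accurate test would be wrong more often than right, which is why knowing a test’s real-world performance matters so much.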

But there’s no question that both types of test are essential if we’re to accumulate enough evidence to conquer this deadly disease. Empirical evidence is one of the hallmarks of genuine science, and that’s as true of epidemiology as of other disciplines.

Next: Does Planting Trees Slow Global Warming? The Evidence

Coronavirus Epidemiological Models: (2) How Completely Different the Models Can Be

Two of the most crucial predictions of any epidemiological model are how fast the disease in question will spread, and how many people will die from it. For the COVID-19 pandemic, the various models differ dramatically in their projections.

A prominent model, developed by a research team at Imperial College London and described in the previous post, assesses the effect of mitigation and suppression measures on the spread of the pandemic in the UK and U.S. Without any intervention at all, the model predicts that a whopping 500,000 people would die from COVID-19 in the UK and 2.2 million in the more populous U.S. These are the numbers that so alarmed the governments of the two countries.

Initially, the Imperial researchers claimed their numbers could be halved (to 250,000 and 1.1 million deaths, respectively) by implementing a nationwide lockdown of individuals and nonessential businesses. Lead scientist Neil Ferguson later revised the UK estimate drastically downward to 20,000 deaths. But it appears this estimate would require repeating the lockdown periodically for a year or longer, until a vaccine becomes available. Ferguson didn’t give a corresponding reduced estimate for the U.S., but it would be approximately 90,000 deaths if the same scaling applies.
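The “same scaling” arithmetic is simple enough to spell out, using the figures quoted above:

```python
# Apply the proportional reduction in the UK estimate to the U.S. figure.
uk_lockdown_estimate, uk_revised = 250_000, 20_000
us_lockdown_estimate = 1_100_000

us_revised = us_lockdown_estimate * uk_revised / uk_lockdown_estimate
print(f"Implied U.S. estimate: about {us_revised:,.0f} deaths")  # ~88,000, i.e. roughly 90,000
```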

This reduced Imperial estimate for the U.S. is somewhat above the latest projection of a U.S. model, developed by the Institute for Health Metrics and Evaluation (IHME) at the University of Washington in Seattle. The Washington model estimates the total number of American deaths at about 60,000, assuming national adherence to stringent stay-at-home and social distancing measures. The figure below shows the predicted number of daily deaths as the U.S. epidemic peaks over the coming months, as estimated this week. The peak of 2,212 deaths on April 12 could be as high as 5,115 or as low as 894, the Washington team says.

[Figure: COVID.jpg – projected daily U.S. COVID-19 deaths from the Washington (IHME) model]

The Washington model is based on data from local and national governments in areas of the globe where the pandemic is well advanced, whereas the Imperial model relies primarily on data from China and Italy. Peaks in the individual U.S. states are expected to range from the second week of April through the last week of May.

Meanwhile, a rival University of Oxford team has put forward an entirely different model, which suggests that up to 68% of the UK population may have already been infected. The virus may have been spreading its tentacles, they say, for a month or more before the first death was reported. If so, the UK crisis would be over in two to three months, and the total number of deaths would be below the 250,000 Imperial estimate, due to a high level of herd immunity among the populace. No second wave of infection would occur, unlike the predictions of the Imperial and Washington models.

Nevertheless, that’s not the only possible interpretation of the Oxford results. In a series of tweets, Harvard public health postdoc James Hay has explained that the proportion of the UK population already infected could be anywhere between 0.71% and 56%, according to his calculations using the Oxford model. The higher the percentage infected and therefore immune before the disease began to escalate, the lower the percentage of people still at risk of contracting severe disease, and vice versa.
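Hay’s reasoning can be captured with one relation: the effective reproduction number is R0 scaled by the fraction of the population still susceptible. A short sketch, with the R0 value assumed purely for illustration:

```python
R0 = 2.5  # assumed basic reproduction number, for illustration only

# Fractions already infected, spanning the range in Hay's calculations
for already_infected in (0.0071, 0.25, 0.56):
    susceptible_fraction = 1.0 - already_infected
    r_effective = R0 * susceptible_fraction
    print(f"{already_infected:.2%} already infected -> "
          f"effective R = {r_effective:.2f}, "
          f"{susceptible_fraction:.0%} still at risk")
```

The more people who are already immune, the smaller the effective R and the smaller the pool still at risk of severe disease, which is why the two ends of Hay’s range imply such different futures.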

The Oxford model shares some assumptions with the Imperial and Washington models, but differs in others. For example, it assumes a shorter period during which an infected individual is infectious, and a later date for the first infection. However, as mathematician and infectious disease specialist Jasmina Panovska-Griffiths explains, the two approaches actually ask different questions. The Imperial and Washington groups ask: what strategies will flatten the epidemic curve for COVID-19? The Oxford researchers ask: has COVID-19 already spread widely?

Without the use of any model, Stanford biophysicist and Nobel laureate Michael Levitt has come to essentially the same conclusion as the Oxford team, based simply on an analysis of the available data. Levitt’s analysis focuses on the rate of increase in the daily number of new cases: once this rate slows down, so does the death rate and the end of the outbreak is in sight.
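In spirit, Levitt’s indicator can be computed from nothing more than the cumulative case counts reported each day: take the daily new cases and watch the day-over-day ratio. The sketch below uses made-up numbers and is not Levitt’s actual code or data; it simply shows the kind of signal he looks for.

```python
# Cumulative confirmed cases on successive days (made-up numbers for illustration)
cumulative = [100, 150, 225, 330, 460, 600, 730, 840, 925, 980]

# Daily new cases, then the day-over-day ratio of new cases
new_cases = [b - a for a, b in zip(cumulative, cumulative[1:])]
growth_ratios = [b / a for a, b in zip(new_cases, new_cases[1:])]

for day, ratio in enumerate(growth_ratios, start=2):
    trend = "accelerating" if ratio > 1 else "slowing"
    print(f"Day {day}: new-case ratio = {ratio:.2f} ({trend})")
```

Once the ratio drops below 1 and stays there, the outbreak is past its inflection point and the end is in sight, which is the pattern Levitt reports looking for in the national data.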

By examining data from 78 of the countries reporting more than 50 new cases of COVID-19 each day, Levitt was able to correctly predict the trajectory of the epidemic in most countries. In China, once the number of newly confirmed infections began to fall, he predicted that the total number of COVID-19 cases would be around 80,000, with about 3,250 deaths – a remarkably accurate forecast, though doubts exist about the reliability of the Chinese numbers. In Italy, where the caseload was still rising, his analysis indicated that the outbreak wasn’t yet under control, as turned out to be tragically true.

Levitt, however, agrees with the need for strong measures to contain the pandemic, as well as earlier detection of the disease through more widespread testing.

Next: Coronavirus Epidemiological Models: (3) How Inadequate Testing Limits the Evidence



Coronavirus Epidemiological Models: (1) What the Models Predict

Amid all the brouhaha over COVID-19 – the biggest respiratory virus threat globally since the 1918 influenza pandemic – confusion reigns over exactly what epidemiological models of the disease are predicting. That’s important as the world begins restricting everyday activities and effectively shutting down national economies, based on model predictions.

In this and subsequent blog posts, I’ll examine some of the models being used to simulate the spread of COVID-19 within a population. As readers will know, I’ve commented at length in this blog on the shortcomings of computer climate models and their failure to accurately predict the magnitude of global warming. 

Epidemiological models, however, are far simpler than climate models and involve far fewer assumptions. The propagation of disease from person to person is much better understood than the vagaries of global climate. A well-designed disease model can help predict the likely course of an epidemic, and can be used to evaluate the most realistic strategies for containing it.

Following the initial coronavirus outbreak that began in Wuhan, China, various attempts have been made to model the epidemic. One of the most comprehensive studies is a report published last week by a research team at Imperial College London that models the effect of mitigation and suppression control measures on the pandemic’s spread in the UK and U.S.

Mitigation focuses on slowing the insidious spread of COVID-19, by taking steps such as requiring home quarantine of infected individuals and their families, and imposing social distancing of the elderly; suppression aims to stop the epidemic in its tracks, by adding more drastic measures such as social distancing of everyone and the closing of nonessential businesses and schools. Both tactics are currently being used not only in the UK and U.S., but also in many other countries – especially in Italy, hit hard by the epidemic.

The model results for the UK are illustrated in the figure below, which shows how the different strategies are expected to affect demand for critical care beds in UK hospitals over the next few months. You can see the much-cited “flattening of the curve,” referring to the bell-shaped curve that portrays the peaking of critical care cases, and related deaths, as the disease progresses. The Imperial College model assumes that 50% of those in critical care will die, based on expert clinical opinion. In the U.S., the epidemic is predicted to be more widespread than in the UK and to peak slightly later.

[Figure: COVID-19 Imperial College.jpg – projected demand for UK critical care beds under different intervention strategies]

What set alarm bells ringing was the model’s conclusion that, without any intervention at all, approximately 0.5 million people would die from COVID-19 in the UK and 2.2 million in the more populous U.S. But these numbers could be halved (to 250,000 and 1.1-1.2 million deaths, respectively) if all the proposed mitigation and suppression measures are put into effect, say the researchers.

Nevertheless, the question then arises of how long such interventions can or should be maintained. The blue shading in the figure above shows the 3-month period during which the interventions are assumed to be enforced. But because there is no cure for the disease at present, it’s possible that a second wave of infection will occur once interventions are lifted. This is depicted in the next figure, assuming a somewhat longer 5-month period of initial intervention.

[Figure: COVID-19 Imperial College 2nd wave.jpg – possible second wave of infection after a 5-month intervention period is lifted]
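The second-wave mechanism is easy to reproduce in a toy model: temporarily cut the transmission rate during the intervention window, then restore it. The sketch below uses a simple SEIR setup with illustrative parameter values; it is not the Imperial group’s code, but it displays the same qualitative behavior as the figure above.

```python
# Toy SEIR model with a temporary intervention that cuts transmission,
# illustrating how a second wave can appear once restrictions are lifted.
# All numbers are illustrative assumptions, not the Imperial team's inputs.
N = 66_000_000
R0, latent, infectious = 2.5, 5.0, 3.0
sigma, gamma = 1.0 / latent, 1.0 / infectious
beta_free = R0 * gamma
intervention = (60, 210)   # days 60-210: roughly a 5-month intervention window (assumed)
suppression = 0.35         # transmission cut to 35% of normal during the window (assumed)

S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
dt, days = 0.1, 500
daily_infectious = []
for step in range(int(days / dt)):
    t = step * dt
    beta = beta_free * (suppression if intervention[0] <= t < intervention[1] else 1.0)
    new_E = beta * S * I / N * dt
    new_I = sigma * E * dt
    new_R = gamma * I * dt
    S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
    if step % 10 == 0:                 # record once per simulated day (dt = 0.1)
        daily_infectious.append(I)

# Crude check for a second wave: count local peaks in the infectious curve
peaks = [d for d in range(1, len(daily_infectious) - 1)
         if daily_infectious[d] > daily_infectious[d - 1]
         and daily_infectious[d] > daily_infectious[d + 1]]
print("Days on which infections peak:", peaks)
```

Because most of the population is still susceptible when the restrictions are lifted, the epidemic simply resumes, which is the dilemma the Imperial report highlights.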

The advantage of such a delayed peaking of the disease’s impact would be a lessening of pressure on an overloaded healthcare system, allowing more time to build up necessary supplies of equipment and lowering critical care demand – which in turn reduces overall mortality. In addition, stretching out the timeline long enough could help build herd immunity. Herd immunity from an infectious disease results when enough people become immune to the disease through either recovery or vaccination, both of which reduce disease transmission. A vaccine, however, probably won’t be available until 2021, even with the currently accelerated pace of development.
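The classical rule of thumb links the herd immunity threshold directly to R0: transmission starts to decline once the immune fraction of the population exceeds 1 - 1/R0. A quick sketch over a range of assumed R0 values:

```python
# Classical herd immunity threshold: a fraction 1 - 1/R0 of the population must be
# immune before each infection causes, on average, less than one new infection.
for r0 in (1.3, 2.0, 2.5, 3.0):
    threshold = 1.0 - 1.0 / r0
    print(f"R0 = {r0}: roughly {threshold:.0%} of the population must be immune")
```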

Whether the assumptions behind the Imperial College model are accurate is an issue we’ll look at in a later post. The model is highly granular, reaching down to the level of the individual and based on high-resolution population data, including census data, data from school districts, and data on the distribution of workplace size and commuting distance. Contacts between people are examined within a household, at school, at work and in social settings.

The dilemma posed by the model’s predictions is obvious. It’s necessary to balance minimizing the death rate from COVID-19 with the social and economic disruption caused by the various interventions, and with the likely period over which the interventions can be maintained.

Next: Coronavirus Epidemiological Models: (2) How Completely Different the Models Can Be

How Hype Is Hurting Science

The recent riots in France over a proposed carbon tax, aimed at supposedly combating climate change, were a direct result of blatant exaggeration in climate science for political purposes. It’s no coincidence that the decision to move forward with the tax came soon after an October report from the UN’s IPCC (Intergovernmental Panel on Climate Change), claiming that drastic measures to curtail climate change are necessary by 2030 in order to avoid catastrophe. President Emmanuel Macron bought into the hype, only to see his people rise up against him.

Exaggeration has a long history in modern science. In 1977, the select U.S. Senate committee drafting new low-fat dietary recommendations wildly exaggerated its message by declaring that excessive fat or sugar in the diet was as much of a health threat as smoking, even though a reasoned examination of the evidence revealed that wasn’t true.

About a decade later, the same hype infiltrated the burgeoning field of climate science. At another Senate committee hearing, astrophysicist James Hansen, who was then head of GISS (NASA’s Goddard Institute for Space Studies), declared he was 99% certain that the 0.4 degrees Celsius (0.7 degrees Fahrenheit) of global warming from 1958 to 1987 was caused primarily by the buildup of greenhouse gases in the atmosphere, and wasn’t a natural variation. This assertion was based on a computer model of the earth’s climate system.

At a previous hearing, Hansen had presented climate model predictions of U.S. temperatures 30 years in the future that were three times higher than they turned out to be. This gross exaggeration makes a mockery of his subsequent claim that the warming from 1958 to 1987 was all man-made. His stretching of the truth stands in stark contrast to the caution and understatement of traditional science.

But Hansen’s hype only set the stage for others. Similar computer models have also exaggerated the magnitude of more recent global warming, failing to predict the pause in warming from the late 1990s to about 2014. During this interval, the warming rate dropped to below half the rate measured from the early 1970s to 1998. Again, the models overestimated the warming rate by two or three times.

An exaggeration mindlessly repeated by politicians and the mainstream media is the supposed 97% consensus among climate scientists that global warming is largely man-made. The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. But what is never revealed is that almost 8,000 of those abstracts expressed no opinion at all on anthropogenic (human-caused) warming. When that and a subsidiary survey are taken into account, the consensus among climate scientists falls to somewhere between 33% and 63%. So much for an overwhelming majority!
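For what it’s worth, the arithmetic behind the 33% lower bound is easy to reproduce from the approximate abstract counts given above; the 63% upper figure comes from the subsidiary survey and isn’t reproduced here.

```python
# Rough arithmetic behind the 33% figure, using the approximate counts quoted above.
total_abstracts = 12_000
no_position = 8_000
expressed_position = total_abstracts - no_position   # about 4,000

endorsing = 0.97 * expressed_position                # the widely quoted 97%
share_of_all = endorsing / total_abstracts
print(f"Endorsing abstracts as a share of all abstracts: {share_of_all:.0%}")  # roughly a third
```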

A further over-hyped assertion about climate change is that the polar bear population in the Arctic is shrinking because of diminishing sea ice, and that the bears are facing extinction. For global warming alarmists, this claim has become a cause célèbre. Yet, despite numerous articles in the media and photos of apparently starving bears, current evidence shows that the polar bear population has actually been steady for the whole period that the ice has been decreasing – and may even be growing, according to the native Inuit.

It’s not just climate data that’s exaggerated (and sometimes distorted) by political activists. Apart from the historical example in nutritional science cited above, the same trend can be found in areas as diverse as the vaccination debate and the science of GMO foods.

Exaggeration is a common, if frowned-upon, marketing tool in the commercial world: hype helps draw attention in the short term. But its use for the same purpose in science only tarnishes the discipline. And, just as exaggeration eventually turns off commercial customers interested in a product, so too does it make the general public wary, if not downright suspicious, of scientific proclamations. The French public has recognized this on climate change.

Belief in Catastrophic Climate Change as Misguided as Eugenics Was 100 Years Ago

Last week’s landmark report by the UN’s IPCC (Intergovernmental Panel on Climate Change), which claims that global temperatures will reach catastrophic levels unless we take drastic measures to curtail climate change by 2030, is as misguided as eugenics was 100 years ago. Eugenics was the shameful but little-known episode in the early 20th century characterized by the sterilization of hundreds of thousands of people considered genetically inferior, especially the mentally ill, the physically handicapped, minorities and the poor.

Although ill-conceived and even falsified as a scientific theory in 1917, eugenics became a mainstream belief with an enormous worldwide following that included not only scientists and academics, but also politicians of all parties, clergymen and luminaries such as U.S. President Teddy Roosevelt and famed playwright George Bernard Shaw. In the U.S., where the eugenics movement was generously funded by organizations such as the Rockefeller Foundation, a total of 27 states had passed compulsory sterilization laws by 1935 – as had many European countries.

Eugenics only fell into disrepute with the discovery after World War II of the horrors perpetrated by the Nazi regime in Germany, including the Holocaust and the forced sterilization of more than 400,000 people. The subsequent global recognition of human rights declared eugenics to be a crime against humanity.

The so-called science of catastrophic climate change is equally misguided. Whereas modern eugenics stemmed from misinterpretation of Mendel’s genetics and Darwin’s theory of evolution, the notion of impending climate disaster results from misrepresentation of the actual empirical evidence for a substantial human contribution to global warming, which is shaky at best.

Instead of the horrors of eugenics, the narrative of catastrophic anthropogenic (human-caused) global warming conjures up the imaginary horrors of a world too hot to live in. The new IPCC report paints a grim picture of searing yearly heat waves, food shortages and coastal flooding that will displace 50 million people, unless draconian action is initiated soon to curb emissions of greenhouse gases from the burning of fossil fuels. Above all, insists the IPCC, an unprecedented transformation of the world’s economy is urgently needed to avoid the most serious damage from climate change.

But such talk is utter nonsense. First, the belief that we know enough about climate to control the earth’s thermostat is preposterously unscientific. Climate science is still in its infancy and, despite all our spectacular advances in science and technology, we still have only a rudimentary scientific understanding of climate. The very idea that we can regulate the global temperature to within 0.9 degrees Fahrenheit (0.5 degrees Celsius) through our own actions is absurd.

Second, the whole political narrative about greenhouse gases and dangerous anthropogenic warming depends on faulty computer climate models that were unable to predict the recent slowdown in global warming, among other failings. The models are based on theoretical assumptions; science, however, takes its cue from observational evidence. To pretend that current computer models represent the real world is sheer arrogance on our part.

And third, the empirical climate data that is available has been exaggerated and manipulated by activist climate scientists. The land warming rates from 1975 to 2015 calculated by NOAA (the U.S. National Oceanic and Atmospheric Administration) are distinctly higher than those calculated by the other two principal guardians of the world’s temperature data. Critics have accused the agency of exaggerating global warming by excessively cooling the past and warming the present, suggesting politically motivated efforts to generate data in support of catastrophic human-caused warming.  

Exaggeration also shows up in the setting of new records for the “hottest year ever” – declarations deliberately designed to raise alarm. But when the global temperature is currently creeping upward at a rate of only a few hundredths of a degree every 10 years, the establishment of new records is unsurprising. If the previous record was set in the last 10 or 20 years, a temperature only several hundredths of a degree above the old record will set a new one.
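The point about records is a statistical one and can be checked with a toy simulation: a temperature series that drifts upward by a few hundredths of a degree per decade, with ordinary year-to-year noise, keeps setting new “hottest year” records by tiny margins. The trend and noise values below are assumptions for illustration only.

```python
import random

random.seed(1)

# Toy series: temperature creeping up by a few hundredths of a degree per decade
# (the rate described above), plus year-to-year noise. Both numbers are assumed.
trend_per_year = 0.003   # about 0.03 degrees C per decade
noise_sd = 0.08          # interannual variability, degrees C

record = None
for year in range(1980, 2020):
    t = trend_per_year * (year - 1980) + random.gauss(0.0, noise_sd)
    if record is None or t > record:
        margin = 0.0 if record is None else t - record
        print(f"{year}: new record, beating the old one by {margin:.3f} C")
        record = t
```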

Eugenics too was rooted in unjustified human hubris, false science, and exaggeration in its methodology. Just like eugenics, belief in apocalyptic climate change and in the dire prognostications of the IPCC will one day be abandoned also.

Next week: No Evidence That Aluminum in Vaccines Is Harmful

Solar Science Shortchanged in Climate Models

The sun gets short shrift in the computer climate models used to buttress the mainstream view of anthropogenic (human-caused) global warming. That’s because the climate change narrative, which links warming almost entirely to our emissions of greenhouse gases, trivializes the contributions to global warming from all other sources. According to its Fifth Assessment Report, the IPCC (Intergovernmental Panel on Climate Change) attributes no more than a few percent of total global warming to the sun’s influence.

That may be the narrative but it’s not one universally endorsed by solar scientists. Although some, such as solar physicist Mike Lockwood, adhere to the conventional wisdom on CO2, others, such as mathematical physicist Nicola Scafetta, think instead that the sun has an appreciable impact on the earth’s climate. In disputing the conventional wisdom, Scafetta points to our poor understanding of indirect solar effects as opposed to the direct effect of the sun’s radiation, and to analytical models of the sun that oversimplify its behavior. Furthermore, a lack of detailed historical data prior to the recent observational satellite era casts doubt on the accuracy and reliability of the IPCC estimates.

I’ve long felt sorry for solar scientists, whose field of research, highly respected before climate became an issue, has been marginalized by the majority of climate scientists. And solar scientists who are climate change skeptics have had to endure not only loss of prestige, but also difficulty in obtaining research funding because their work doesn’t support the consensus on global warming. But it appears that the tide may be turning at last.

Judging from recent scientific publications, the number of papers affirming a strong sun-climate link is on the rise. Some 93 papers examining such a link were published in 2014; almost as many appeared in the first half of 2017 alone. The 2017 number represents about 7% of all research papers in solar science over the same period (Figure 1 here) and about 16% of all papers on computer climate models during that time (Figure 4 here).

[Figure: Sunspots.jpg]

This rising tide of papers linking the sun to climate change may be why UK climate scientists in 2015 attempted to silence the researcher who led a team predicting a slowdown in solar activity after 2020. Northumbria University’s Valentina Zharkova had dared to propose that the average monthly number of sunspots will soon drop to nearly zero, based on a model in which a drastic falloff is expected in the sun’s magnetic field. Other solar researchers have made the same prediction using different approaches.

Sunspots are small dark blotches on the sun caused by intense magnetic turbulence on the sun’s surface. Together with the sun’s heat and light, the number of sunspots goes up and down during the approximately 11-year solar cycle. But the maximum number of sunspots seen in a cycle has recently been declining. The last time they disappeared altogether was during the so-called Maunder Minimum, a 70-year cool period in the 17th and 18th centuries forming part of the Little Ice Age.

While Zharkova’s research paper actually said nothing about climate, climate scientists quickly latched onto the implication that a period of global cooling might be ahead and demanded that the Royal Astronomical Society – at whose meeting she had originally presented her findings – withdraw her press release. Fortunately, the Society refused to accede to this attack on science at the time, although the press release has since been removed from the Web. Just last month, Zharkova’s group refuted criticisms of its methodology by another prominent solar scientist.

Apart from such direct effects, indirect solar effects due to the sun’s ultraviolet (UV) radiation or cosmic rays from deep space could also contribute to global warming. In both cases, some sort of feedback mechanism would be needed to amplify what would otherwise be tiny perturbations to global temperatures. However, what’s not generally well known is that the warming predicted by computer climate models comes from assumed water vapor amplification of the modest temperature increase caused by CO2 acting alone. Speculative candidates for amplification of solar warming involve changes in cloud cover as well as the earth’s ozone layer.
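The amplification idea can be written down compactly: if a forcing ΔF produces a no-feedback warming of λ0·ΔF, a net feedback fraction f amplifies it to λ0·ΔF / (1 - f). The same algebra applies whether the initial perturbation comes from CO2 or from a solar effect. A sketch with assumed values:

```python
def amplified_warming(delta_f, feedback_fraction, lambda_0=0.3):
    """Warming after feedback amplification of an initial radiative perturbation.

    delta_f           : radiative forcing in W/m^2
    feedback_fraction : net feedback fraction f (0 = no feedback), assumed
    lambda_0          : no-feedback climate sensitivity, roughly 0.3 C per W/m^2
    """
    return lambda_0 * delta_f / (1.0 - feedback_fraction)

# Illustrative only: a small 0.5 W/m^2 perturbation with and without feedback
for f in (0.0, 0.3, 0.6):
    print(f"feedback fraction {f}: warming = {amplified_warming(0.5, f):.2f} C")
```

The closer f gets to 1, the larger the amplification, which is why the size of the assumed feedback matters so much for both CO2-driven and solar-driven warming.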

Next week: Measles or Autism? False Choice, Says Science

Evidence Lacking for Major Human Role in Climate Change

Conventional scientific wisdom holds that global warming and consequent changes in the climate are primarily our own doing. But what few people realize is that the actual scientific evidence for a substantial human contribution to climate change is flimsy. It requires highly questionable computer climate models to make the connection between global warming and human emissions of carbon dioxide (CO2).

The multiple lines of evidence which do exist are simply evidence that the world is warming, not proof that the warming comes predominantly from human activity. The supposed proof relies entirely on computer models that attempt to simulate the earth’s highly complex climate, and include greenhouse gases as well as aerosols from both volcanic and man-made sources – but almost totally ignore natural variability.

So it shouldn’t be surprising that the models have a dismal track record in predicting the future. Most spectacularly, the models failed to predict the recent pause or hiatus in global warming from the late 1990s to about 2014. During this period, the warming rate dropped to only a third to a half of the rate measured from the early 1970s to 1998, while at the same time CO2 kept spewing into the atmosphere. Out of 32 climate models, only a lone Russian model came anywhere close to the actual observations.

[Figure: Blog1 image JPG.jpg]

Not only did the models overestimate the warming rate by two or three times, they also predicted a hot spot in the upper atmosphere that isn’t there, and were unable to accurately reproduce sea level rise.

Yet it’s these same failed models that underpin the whole case for catastrophic consequences of man-made climate change, a case embodied in the 2015 Paris Agreement. The international agreement on reducing greenhouse gas emissions – which 195 nations, together with many of the world’s scientific societies and national academies, have signed on to – is based not on empirical evidence, but on artificial computer models. Only the models link climate change to human activity. The empirical evidence does not.

Proponents of human-caused global warming, including a majority of climate scientists, insist that the boost to global temperatures of about 1.6 degrees Fahrenheit (0.9 degrees Celsius) since 1850 comes almost exclusively from the steady increase in the atmospheric CO2 level. They argue that elevated CO2 must be the cause of nearly all the warming because the sole major change in climate “forcing” over this period has been from CO2 produced by human activities – mainly the burning of fossil fuels as well as deforestation.
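For reference, the radiative forcing attributed to a rise in CO2 is conventionally computed from the simplified logarithmic expression of Myhre et al. (1998), ΔF ≈ 5.35 ln(C/C0) W/m². A quick calculation for the approximate CO2 increase since 1850:

```python
import math

# Standard simplified expression for CO2 radiative forcing (Myhre et al., 1998):
# delta_F = 5.35 * ln(C / C0), in W/m^2. Concentrations below are approximate.
c0 = 285.0   # ppm, around 1850
c = 410.0    # ppm, recent value

delta_f = 5.35 * math.log(c / c0)
print(f"CO2 forcing since 1850: about {delta_f:.1f} W/m^2")   # roughly 1.9 W/m^2
```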

But correlation is not causation, as is well known from statistics or the public health field of epidemiology. So believers in the narrative of catastrophic anthropogenic (human-caused) climate change fall back on computer models to shore up their argument. With the climate change narrative trumpeted by political entities such as the UN’s IPCC (Intergovernmental Panel on Climate Change), and amplified by compliant media worldwide, predictions of computer climate models have acquired the status of quasi-religious edicts.

Indeed, anyone disputing the conventional wisdom is labeled a “denier” by advocates of climate change orthodoxy, who claim that global warming skeptics are just as anti-science as those who believe vaccines cause autism. The much-ballyhooed war on science typically lumps climate change skeptics together with creationists, anti-vaccinationists and anti-GMO activists. But the climate warmists are the ones on the wrong side of science.

Like their counterparts in the debate over the safety of GMOs, warmists employ fear, hyperbole and heavy-handed political tactics in an attempt to shut down debate. Yet skepticism about the human influence on global warming persists, and may even be growing among the general public. In 2018, a Gallup poll in the U.S. found that 36% of Americans don’t believe that global warming is caused by human activity, while a UK survey showed that a staggering 64% of the British public feel the same way. And the percentage of climate scientists who endorse the mainstream view of a strong human influence is nowhere near the widely believed 97%, although it’s probably above 50%.

Like me, most skeptical scientists accept that global warming is real, but not that it’s entirely man-made or that it’s dangerous. The observations alone aren’t evidence for a major human role. Such lack of regard for the importance of empirical evidence, and misguided faith in the power of deficient computer climate models, are abuses of science.

(Another 189 comments on this post can be found at the What's Up With That blog and the NoTricksZone blog, which have kindly reproduced the whole post.)