Are UFO Sightings a Threat to Science?

Credit: CoolCatGameStudio from Pixabay

Do UFO sightings threaten science? The short answer is no – not in themselves, as long as one separates genuine observations from the questionable claims of alien abduction and other supposed extraterrestrial activity on Earth.

Unlike pseudosciences such as astrology or crystal healing, UFOs belong to the realm of science, even if we don’t know exactly what some of them are. Sightings of ethereal objects in the sky have been reported throughout recorded history, although there’s been a definite uptick since the advent of air travel in the 20th century. According to recently released records, UK wartime prime minister Winston Churchill colluded with General Dwight Eisenhower to suppress the alleged observation of a UFO by a British bomber crew toward the end of World War II, out of fear that reporting it would cause mass panic.

Since then, numerous incidents have been reported in countries across the globe, by scientists and nonscientists alike. The U.S. Air Force, which coined the term UFO, undertook a series of studies from 1947 to 1969 that included more than 12,000 claimed UFO sightings. The project concluded that the vast majority of sightings could be explained as misidentified conventional objects or natural phenomena, such as spy planes, helium balloons, clouds or meteors – or occasionally, hoaxes. Nonetheless, there was no explanation for 701 (about 6%) of the sightings investigated. 

Only in the last several months has the existence of a new U.S. program to study UFOs been disclosed, this time under the aegis of the Pentagon. Begun in 2007, the secret program apparently continues to this day, though its government funding ended in 2012. One of the few publicized incidents examined under the program involved two Navy F/A-18F fighter pilots who, off the coast of southern California in 2004, chased an oval object that appeared to be moving at speeds impossible for any human-made craft.

Perhaps the most famous American event was the so-called Roswell incident in 1947, when an Air Force balloon designed for nuclear test monitoring crashed at a ranch near Roswell, New Mexico. The official but deceptive statement by the military that it was a high-altitude weather balloon only served to generate ever-escalating conspiracy theories about the crash. The theories postulated that the military had covered up the crash landing of an alien spacecraft, and that bodies of its extraterrestrial crew had been recovered and preserved. Over the years, details of the story became embellished to the point where more than one candidate for U.S. President promised to unlock the secret government files on Roswell.

Belief in alien activity is where UFO lore departs from science. While it’s possible that some of the small percentage of unexplained UFO sightings have been spaceships piloted by extraterrestrial beings, there’s currently no credible evidence that aliens actually exist, nor that they’ve ever visited planet Earth.

In particular, it’s belief in alien abductions that constitutes a threat to science, the hallmarks of which are empirical evidence and logic. In the U.S., the phenomenon began with the mysterious case of Betty and Barney Hill in 1961. The Hills claimed to have encountered a UFO while driving home on an isolated rural road in New Hampshire, and to have been seized by humanoid figures with large eyes who took them onto their spaceship, where invasive experiments were performed on the terrified pair. Afterwards, both of the Hills’ watches stopped working, and the couple had no recollection of two hours of their bewildering drive.

The alien abduction narrative captured the American imagination over the next two decades, but the Air Force ultimately dismissed the Hills’ story, determining that the supposed alien craft was a “natural” object. Indeed, there’s no reliable empirical evidence that any of the millions of other reported abductions have been real.

Psychologists attribute the episodes to false memories and fantasies created by a human brain that we’re still struggling to understand. Possible physical causes of the abduction phenomenon include epilepsy, hallucinations and sleep paralysis, a condition in which a person is half-awake — conscious, though unable to move.

But while abduction stories may be entertaining, they qualify as irrational pseudoscience because they can’t be falsified. Pseudoscience is frequently based on faith in a belief, instead of scientific evidence, and makes vague and often grandiose claims that can’t be tested. One of the clear-cut ways to differentiate real science from pseudoscience is the falsifiability criterion formulated by 20th-century philosopher Sir Karl Popper: a genuine scientific theory or law must be capable in principle of being invalidated – of being disproved – by observation or experiment. That’s not possible with alien abductions, which can’t be either proved or disproved.

Next: No Evidence That Climate Change Causes Weather Extremes: (1) Drought

UN Species Extinction Report Spouts Unscientific Hype, Dubious Math

An unprecedented decline in nature’s animal and plant species is supposedly looming, according to a UN body charged with developing a knowledge base for preservation of the planet’s biodiversity. In a dramatic announcement this month, the IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services) claimed that more species are currently at risk of extinction than at any time in human history and that the extinction rate is accelerating. But these claims are nonsensical hype, based on wildly exaggerated numbers that can’t be corroborated.

Credit: Ben Curtis, Associated Press

The IPBES report summary, which is all that has been released so far, states that “around 1 million of an estimated 8 million animal and plant species (75% of which are insects), are threatened with extinction.” Apart from the as-yet-unpublished report, there’s little indication of the source for these estimates, which are as mystifying as the classic magician’s rabbit produced from an empty hat.

It appears from the report summary that the estimates are derived from a much more reliable set of numbers – the so-called Red List of threatened species, compiled by the IUCN (International Union for Conservation of Nature). The IUCN, not affiliated with the UN, is an international environmental network highly regarded for its assessments of the world’s biodiversity, including evaluation of the extinction risk of thousands of species. The network includes a large number of biologists and conservationists.

Of an estimated 1.7 million species in total, the IUCN’s Red List has currently assessed just 98,512 species, of which it lists 27,159 or approximately 28% as threatened with extinction. The IUCN’s “threatened” description includes the categories “critically endangered,” “endangered” and “vulnerable.”

A close look at the IUCN category definitions reveals that “vulnerable” represents a probability of extinction in the wild of merely “at least 10% within 100 years,” and “endangered” an extinction probability of “at least 20% within a maximum of 100 years.” Both of these categories are hardly a major cause for concern, yet together they embrace 78% of the IUCN’s compilation of threatened species. That leaves just 22% or about 5,900 critically endangered species, whose probability of extinction in the wild is assessed at more than 50% over the next 100 years – high enough for these species to be genuinely at risk of becoming extinct.
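As a quick check, the percentages quoted above follow directly from the Red List totals (a back-of-envelope sketch in Python; the 78%/22% split between categories is taken from this post, not from the IUCN tables):

```python
# Back-of-envelope check of the Red List figures quoted above.
assessed = 98_512        # species the IUCN has evaluated so far
threatened = 27_159      # critically endangered + endangered + vulnerable

print(f"Threatened share of assessed species: {threatened / assessed:.1%}")  # ~27.6%, i.e. "about 28%"

# The post puts vulnerable + endangered at 78% of the threatened total,
# leaving roughly 22% critically endangered.
critically_endangered = 0.22 * threatened
print(f"Critically endangered (approx.): {critically_endangered:,.0f}")      # ~5,975, i.e. "about 5,900"
```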

But while the IUCN presents these numbers matter-of-factly without fanfare, the much more political IPBES resorts to unashamed hype by extrapolating the statistics beyond the 98,512 species that the IUCN has actually investigated, and by assuming a total number of species far in excess of the IUCN’s estimated 1.7 million. Estimates of just how many species the Earth hosts vary considerably, from the IUCN number of 1.7 million all the way up to 1 trillion. The IPBES number of 8 million species appears to be plucked out of nowhere, as does the 1 million threatened with extinction, despite the IPBES report being the result of a “systematic review” of 15,000 scientific and government sources.

According to IPBES chair Sir Robert Watson, the 1 million number was derived from the 8 million by what appears to be an arbitrary calculation based on the IUCN’s much lower numbers. The IPBES assumes a global total of 5.5 million insects – compared with the IUCN’s Red List estimate of 1.0 million – which, when subtracted from the 8 million grand total, leaves 2.5 million non-insect species. This 2.5 million is then multiplied by the IUCN’s 28% threatened rate, and the 5.5 million insects multiplied by a mysterious unspecified lower rate, to arrive at the 1 million species in danger. That far exceeds the IUCN’s estimate of 27,159.
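Watson’s derivation, as relayed above, can be reconstructed in a few lines (a sketch using only the figures quoted in this post; the implied insect rate is my own inference, since IPBES doesn’t specify it):

```python
# Reconstruction of the IPBES back-of-envelope calculation described above.
# All inputs are figures quoted in the post; the implied insect rate is inferred, not stated by IPBES.
total_species = 8_000_000
insects = 5_500_000
non_insects = total_species - insects          # 2,500,000

non_insect_threatened = non_insects * 0.28     # the IUCN's 28% threatened rate
print(f"Non-insect species at risk: {non_insect_threatened:,.0f}")        # 700,000

# To reach the headline figure of 1 million, insects must supply the rest,
# implying an unspecified threatened rate of roughly:
implied_insect_rate = (1_000_000 - non_insect_threatened) / insects
print(f"Implied insect threatened rate: {implied_insect_rate:.1%}")       # ~5.5%
```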

Not only does the IPBES take unjustified liberties with the IUCN statistics, but its extinction rate projection bears no relationship whatsoever to actual extinction data. A known 680 vertebrate species have been driven to extinction since the 16th century, with 66 known insect extinctions recorded over the same period – or approximately 1.5 extinctions per year on average. The IPBES report summary states that the current rate of global species extinction is tens to hundreds of times higher than this and accelerating, but without explanation except for the known effect of habitat loss on animal species.
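For reference, the baseline rate mentioned above works out as follows (a minimal sketch, assuming “since the 16th century” spans roughly 500 years):

```python
# Historical extinction rate implied by the recorded totals quoted above.
vertebrate_extinctions = 680
insect_extinctions = 66
years = 500   # assumption: "since the 16th century" taken as roughly five centuries

rate_per_year = (vertebrate_extinctions + insect_extinctions) / years
print(f"Recorded extinctions per year: {rate_per_year:.1f}")   # ~1.5
```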

Maybe we should give the IPBES the benefit of the doubt and suspend judgment until the full report is made available. But with such a disparity between its estimates and the more sober assessment of the IUCN, it seems that the IPBES numbers are sheer make-believe. One million species on the brink of extinction is nothing but fiction, when the true number could be as low as 5,900.

Next: Are UFO Sightings a Threat to Science?

Science, Political Correctness and the Great Barrier Reef

A recent Australian court case highlights the intrusion of political correctness into science to bolster the climate change narrative. On April 16, a federal judge ruled that Australian coral scientist Dr. Peter Ridd had been unlawfully fired from his position at North Queensland’s James Cook University, for questioning his colleagues’ research on the impact of climate change on the Great Barrier Reef. In his ruling, the judge criticized the university for not respecting Ridd’s academic freedom.


The Great Barrier Reef is the world's largest coral reef system, 2,300 km (1,400 miles) long and visible from outer space. Labeled by CNN as one of the seven natural wonders of the world, the reef is a constant delight to tourists, who can view the colorful corals from a glass-bottomed boat or by snorkeling or scuba diving.

Rising temperatures, especially during the prolonged El Niño of 2014-17, have severely damaged portions of the Great Barrier Reef – so much so that the reef has become the poster child for global warming. Corals are susceptible to overheating and undergo bleaching when the water gets too hot, losing their vibrant colors. But exactly how much of the Great Barrier Reef has been affected, and how quickly it’s likely to recover, are controversial issues among reef researchers.

Ridd’s downfall came after he authored a chapter on the resilience of Great Barrier Reef corals in the book, Climate Change: The Facts 2017. In his chapter and subsequent TV interviews, Ridd bucked the politically correct view that the reef is doomed to an imminent death by climate change, and criticized the work of colleagues at the university’s Centre of Excellence for Coral Reef Studies. He maintained that his colleagues’ findings on the health of the reef in a warming climate were flawed, and that scientific organizations such as the Centre of Excellence could no longer be trusted.

Ridd had previously been censured by the university for going public with a dispute over a different aspect of reef health. This time, his employer accused Ridd of “uncollegial” academic misconduct and warned him to remain silent about the charge. When he didn’t, the university fired him after a successful career of more than 40 years.

At the crux of the issue of bleaching is whether or not it’s a new phenomenon. The politically correct view of many of Ridd’s fellow reef scientists is that bleaching didn’t start until the 1980s as global warming surged, so is an entirely man-made spectacle. But Ridd points to scientific records that reveal multiple coral bleaching events around the globe throughout the 20th century.

The fired scientist also disagrees with his colleagues over the extent of bleaching from the massive 2014-17 El Niño. Ridd estimates that just 8% of Great Barrier Reef coral actually died; much of the southern end of the reef didn’t suffer at all. But his politically correct peers maintain that the die-off was anywhere from 30% to 95%.

Such high estimates, however, are for very shallow water coral – less than 2 meters (7 feet) below the surface, which is only a small fraction of all the coral in the reef. A recent independent study found that deep water coral – down to depths of more than 40 meters (130 feet) – saw far less bleaching. And while Ridd’s critics claim that warming has reduced the growth rate of new coral by 15%, he finds that the growth rate has increased slightly over the past 100 years.

Ridd explains corals’ adaptability to heating as a survival mechanism, in which the multitude of polyps that constitute a coral exchange the microscopic algae that normally live inside the polyps and give coral its striking colors. Hotter-than-normal water causes the algae to poison the coral, which then expels them, turning the polyps white. But to survive, the coral needs resident algae, which supply it with energy by photosynthesis of sunlight. So from the surrounding water the coral selects a different species of algae better suited to hot conditions, a process that enables the coral to recover within a few years, says Ridd.

Ridd attributes what he believes are the erroneous conclusions of his reef scientist colleagues to a failure of the peer review process in scrutinizing their work. To support his argument, he cites the so-called reproducibility crisis in contemporary science – the vast number of peer-reviewed studies that can’t be replicated in subsequent investigations and whose findings turn out to be false. Although it’s not known how severe irreproducibility is in climate science, it’s a serious problem in the biomedical sciences, where as many as 89% of published results in certain fields can’t be reproduced.

In Ridd’s opinion, as well as mine, studies predicting that the Great Barrier Reef is in imminent peril are based more on political correctness than good science.

Next: UN Species Extinction Report Spouts Unscientific Hype, Dubious Math

Grassroots Climate Change Movement Ignores Actual Evidence

Earth Day 2019 is marked by the recent launch of several grassroots organizations whose ostensible aim is to combat climate change. The crusades include the UK’s Extinction Rebellion, the Swedish WeDontHaveTime, and the pied-piper-like campaign sparked by striking Swedish schoolgirl Greta Thunberg. What’s most disturbing about them all is not their intentions or methods, but their ignorance and their disregard of scientific evidence.

Common to the entire movement is the delusional belief that climate Armageddon is imminent – a mere 12 years away, according to U.S. congresswoman Alexandria Ocasio-Cortez. The WeDontHaveTime manifesto declares that “climate change is killing us” and that we’re already experiencing catastrophe. Trumpets Extinction Rebellion: “The science is clear … we are in a life or death situation … ,” a sentiment echoed by the Sunrise Movement in the U.S. And a proclamation of the youth climate strikers insists that “The climate crisis … is the biggest threat in human history.”

But despite the climate hysteria, these activists show almost no knowledge of the science that supposedly underlies their doomsday claims. Instead, they resort to logically fallacious appeals to authority. Apart from the UN’s IPCC (Intergovernmental Panel on Climate Change), which is as much a political body as a scientific one, the authorities include the former head of NASA’s Goddard Institute for Space Studies, James Hansen – known for his hype on global warming – and the UK Met Office, an agency with a dismal track record of predicting even the coming season’s weather.

Among numerous mistaken assertions by the would-be crusaders is the constant drumbeat of extreme weather events attributed to human emissions of greenhouse gases. The sadly uninformed protesters seem completely unaware that anomalous weather has been part of the earth’s climate from ancient times, long before industrialization bolstered the CO2 level in the atmosphere. They don’t bother to check the actual evidence that reveals no long-term trend whatsoever in hurricanes, heat waves, floods, droughts and wildfires in more than 100 years. Linking weather extremes to global warming or CO2 is empty-headed ignorance.

Another fallacy is that the huge Antarctic ice sheet, containing about 90% of the freshwater ice on the earth’s surface, is losing ice and causing sea-level rise to accelerate. But while it’s true that glaciers in West Antarctica and the Antarctic Peninsula are thinning, there’s evidence, albeit controversial, that the ice loss is outweighed by new ice formation in East Antarctica from warming-enhanced snowfall. The much smaller Greenland ice sheet is indeed losing ice by melting, but not at an alarming rate.

The cluelessness of the climate change movement is also exemplified by its embrace of false predictions of the future, such as the claim that climate change will cause shortfalls in food production. If anything, exactly the reverse is true. Higher temperatures and the fertilizing effect of CO2, which helps plants grow, boost crop yields and make plants more resistant to drought.

Participation in the movement runs in the hundreds of thousands around the world, especially among school climate strikers. The eco-anarchist Extinction Rebellion, formed last year, promotes acts of nonviolent civil disobedience to achieve its goals, harking back to “Ban the Bomb” and US civil rights protests of the 1950s and 1960s. To “save the planet”, the organization is calling for greenhouse gas emissions to be reduced to net zero as soon as 2025.

The newly created WeDontHaveTime subscribes to the widely held political, but unscientific belief that climate change is an existential crisis, and that catastrophe lurks around the corner. Its particular focus is on building a global social media network dedicated to climate change, with the initial phase being launched today, April 22.

The school strike for climate has similar aims, to be achieved by children around the globe playing hooky from school. An estimated total of more than a million pupils in 125 countries demonstrated in strikes on March 15.

The movement’s lack of scientific knowledge extends to the origin of CO2 emissions as well. Extinction Rebellion and WeDontHaveTime, at least, appear oblivious to the fact that the lion’s share of the world’s CO2 emissions comes from China and India alone – 34% in 2019, by preliminary estimates, and increasing yearly. If the climate change catastrophists were really serious about their objectives, they’d be directing their efforts against the governments of these two countries instead of wasting time on the West.

Next: Science, Political Correctness and the Great Barrier Reef

The Sugar Industry: Sugar Daddy to Manipulated Science?

Industry funding of scientific research often comes with strings attached. There’s plenty of evidence that industries such as tobacco and lead have been able to manipulate sponsored research to their advantage, in order to create doubt about the deleterious effects of their product. But has the sugar industry, currently in the spotlight because of concern over sugary drinks, done the same?


This charge was recently leveled at the industry by a team of scientists at UCSF (University of California, San Francisco), who accused the industry of funding research in the 1960s that downplayed the risks of consuming sugar and overstated the supposed dangers of eating saturated fat. Both saturated fat and sugar had been linked to coronary heart disease, which was surging at the time.

The UCSF researchers claim to have discovered evidence that an industry trade group secretly paid two prominent Harvard scientists to conduct a literature review refuting any connection between sugar and heart disease, and making dietary fat the villain instead. The published review made no mention of sugar industry funding.

A year after the review came out, the trade group funded an English researcher to conduct a study on laboratory rats. Initial results seemed to confirm other studies indicating that sugars, which are simple carbohydrates, were more detrimental to heart health than complex or starchy carbohydrates like grains, beans and potatoes. This was because sugar appeared to elevate the blood level of triglyceride fats, today a known risk factor for heart disease, through its metabolism by microbes in the gut.

Perhaps more alarmingly, preliminary data suggested that consumption of sugar – though not starch – produced high levels of an enzyme called beta-glucuronidase that other contemporary studies had associated with bladder cancer in humans. Before any of this could be confirmed, however, the industry trade organization shut the research project down; the results already obtained were never published.

The UCSF authors say in a second paper that the literature review’s dismissal of contrary studies, together with the suppression of evidence tying sugar to triglycerides and bladder cancer, show how the sugar industry has attempted for decades to bury scientific data on the health risks of eating sugar. If the findings of the laboratory study had been disclosed, they assert, sugar would probably have been scrutinized as a potential carcinogen, and its role in cardiovascular disease would have been further investigated. Added one of the UCSF team, “This is continuing to build the case that the sugar industry has a long history of manipulating science.”

Marion Nestle, an emeritus professor of food policy at New York University, has commented that the internal industry documents unearthed by the UCSF researchers were striking “because they provide rare evidence that the food industry suppressed research it did not like, a practice that has been documented among tobacco companies, drug companies and other industries.”

Nonetheless, the current sugar trade association disputes the UCSF claims, calling them speculative and based on questionable assumptions about events that took place almost 50 years ago. The association also considers the research itself tainted, because it was conducted and funded by known critics of the sugar industry. The industry has consistently denied that sugar plays any role in promoting obesity, diabetes or heart disease.

And despite a statement by the trade association’s predecessor that it was created “for the basic purpose of increasing the consumption of sugar,” other academics have defended the industry. They point out that, at the time of the industry review and the rat study in the 1960s, the link between sugar and heart disease was supported by only limited evidence, and the dietary fat hypothesis was deeply entrenched in scientific thinking, being endorsed by the AHA (American Heart Association) and the U.S. NHI (National Heart Institute).

But, says Nestle, it’s déjà vu today, with the sugar and beverage industries now funding research to let the industries off the hook for playing a role in causing the current obesity epidemic. As she notes in a commentary in the journal JAMA Internal Medicine:

"Is it really true that food companies deliberately set out to manipulate research in their favor? Yes, it is, and the practice continues.”

Next: Grassroots Climate Change Movement Ignores Actual Evidence

Measles Rampant Again, Thanks to Anti-Vaccinationists

Measles is on the march once more, even though vaccination against the disease has cut the number of worldwide deaths from an estimated 2.6 million per year in the mid-20th century to 110,000 in 2017. But thanks to the anti-scientific, anti-vaccination movement and the ever-expanding reach of social media, measles cases are now at a 20-year high in Europe, and as many U.S. cases were reported in the first two months of 2019 as in the first six months of 2018.


Highly contagious, measles is not a malady to be taken lightly. One in 1,000 people who catch it die of the disease; most of the victims are children under five. Even those who survive are at high risk of falling prey to encephalitis, an often debilitating infection of the brain that can lead to seizures and mental retardation. Other serious complications of measles include blindness and pneumonia.

It’s not the first time that measles has reared its ugly head since the first measles vaccine was introduced in 1963; the combined MMR (measles-mumps-rubella) vaccine followed in 1971. Although laws mandating vaccination for schoolchildren were in place in all 50 U.S. states by 1980, sporadic outbreaks of the disease have continued to occur. Before the surge in 2018-19, a record 667 cases of measles from 23 outbreaks were reported in the U.S. in 2014. And major epidemics are currently raging in countries such as Ukraine and the Philippines.

The primary reason for all these outbreaks is that more and more parents are choosing not to vaccinate their children. The WHO (World Health Organization), for the first time, has listed vaccine hesitancy as one of the top 10 global threats of 2019.

While some parents oppose immunization on religious or philosophical grounds, by far the greatest number of objections comes from those who insist that all vaccines cause disabling side effects or other diseases – even though the available scientific data doesn’t support such claims. As discussed in a previous post, there’s absolutely no scientific evidence for the once widely held belief that MMR vaccination results in autism, for example.

Anti-vaccinationists, when accused of exposing their children to unnecessary risk by refusing immunization because of unjustified fears about vaccine safety, rationalize their stance by appealing to herd immunity. Herd immunity is the mass protection from an infectious disease that results when enough members of the community become immune to the disease through vaccination, just as sheer numbers protect a herd of animals from predators. Once a sufficiently large number of people have been vaccinated, viruses and bacteria can no longer spread in that community.

For measles, herd immunity requires up to 94% of the populace to be immunized. That the threshold is lower than 100%, however, enables anti-vaccinationists to hide their children in the herd. By not vaccinating their offspring but choosing to live among the vaccinated, anti-vaxxers avoid the one in one million risk of their children experiencing serious side effects from the vaccine, while simultaneously not exposing them to infection – at least not in their own community.  
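The “up to 94%” figure is consistent with the standard epidemiological rule of thumb for the herd immunity threshold, 1 - 1/R0, where R0 is the number of people one infected person would infect in a fully susceptible population (a sketch, not taken from this post; R0 values of roughly 12 to 18 are commonly cited for measles):

```python
# Herd immunity threshold rule of thumb: 1 - 1/R0.
# The R0 range of 12-18 for measles is a commonly cited assumption, not a figure from this post.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for r0 in (12, 15, 18):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
# R0 = 12 gives ~92%; R0 = 18 gives ~94% - matching the "up to 94%" quoted above.
```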

But hiding in the herd takes advantage of others and is morally indefensible. Certain vulnerable groups can’t be vaccinated at all, including those with weakened immune systems such as children undergoing chemotherapy for cancer or the elderly on immunosuppressive therapy for rheumatic diseases. If too many people choose not to vaccinate, the percentage vaccinated will fall below the threshold, herd immunity will break down and those whose protection depends on those around them being vaccinated will suffer.

Another contentious issue is exemptions from mandatory vaccination for religious or philosophical reasons. While some American parents regard the denial of schooling to unvaccinated children as an infringement of their constitutional rights, supreme courts in several U.S. states have ruled that the right to practice religion freely doesn’t include liberty to expose the community or a child to communicable disease. And ever since it was found in 2006 that the highest incidence of diseases such as whooping cough occurred in the states most generous in granting exemptions, more and more states have abolished nonmedical exemptions altogether.

But other countries are not so vigilant. In Madagascar, for instance, less than an estimated 60% of the population has been immunized against measles – because of which an epidemic there has caused more than 900 deaths in six months, according to the WHO. Although the WHO says that the reasons for the global rise in measles cases are complex, there’s no doubt that resistance to vaccination is a major factor. It’s not helped by the extensive dissemination of anti-vaccination misinformation by Russian propagandists.

Next: The Sugar Industry: Sugar Daddy to Manipulated Science?

Does Climate Change Threaten National Security?


The U.S. White House’s proposed Presidential Committee on Climate Security (PCCS) is under attack – by the mainstream media, Democrats in Congress and military retirees, among others. The committee’s intended purpose is to conduct a genuine scientific assessment of climate change.

But the assailants’ claim that the PCCS is a politically motivated attempt to overthrow science has it backwards. The Presidential Committee will undertake a scientifically motivated review of climate change science, in the hope of eliminating the subversive politics that have taken over the scientific debate.

It’s those opposed to the committee who are playing politics and abusing science. The whole political narrative about greenhouse gases and dangerous anthropogenic (human-caused) warming, including the misguided Paris Agreement that the U.S. has withdrawn from, depends on faulty computer climate models that failed to predict the recent slowdown in global warming, among other shortcomings. The actual empirical evidence for a substantial human contribution to global warming is flimsy.

And the supposed 97% consensus among climate scientists that global warming is largely man-made is a gross exaggeration, mindlessly repeated by politicians and the media.

The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. What is rarely revealed is that nearly 8,000 of the abstracts expressed no opinion at all on human-caused warming. When that and a subsidiary survey are taken into account, the climate scientist consensus falls to somewhere between only 33% and 63%. So much for an overwhelming majority!
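To see how the lower bound arises, treat endorsements as a share of all abstracts rather than only those expressing a position (a rough sketch using the rounded numbers above; the 63% upper bound involves the subsidiary survey and isn’t reproduced here):

```python
# Rough reconstruction of the ~33% lower bound from the rounded figures above.
abstracts = 12_000
no_position = 8_000                       # abstracts expressing no opinion on the cause of warming
with_position = abstracts - no_position   # ~4,000

endorsing = 0.97 * with_position          # ~97% of position-taking abstracts endorse human-caused warming
print(f"Endorsements as a share of all abstracts: {endorsing / abstracts:.0%}")  # ~32%, i.e. roughly 33%
```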

Blatant exaggeration like this for political purposes is all too common in climate science. An example that permeates current news articles and official reports on climate change is the hysteria over extreme weather. Almost every hurricane, major flood, drought, wildfire or heat wave is ascribed to global warming.

But careful examination of the actual scientific data shows that if there’s a trend in any of these events, it’s downward rather than upward. Even the UN’s Intergovernmental Panel on Climate Change has found little to no evidence that global warming increases the occurrence of many types of extreme weather.


Another over-hyped assertion about climate change is that the polar bear population at the North Pole is shrinking because of diminishing sea ice in the Arctic, and that the bears are facing extinction. Yet, despite numerous articles in the media and photos of apparently starving bears, current evidence shows that the polar bear population has actually been steady for the whole period that the ice has been decreasing and may even be growing, according to the native Inuit.

All these exaggerations falsely bolster the case for taking immediate action to combat climate change, supposedly by pulling back on fossil fuel use. But the mandate of the PCCS is to cut through the hype and assess just what the science actually says.  

A specific PCCS goal is to examine whether climate change impacts U.S. national security, a connection that the defense and national security agencies have strongly endorsed.

A recent letter of protest to the President from a group of former military and civilian national security professionals expresses their deep concern about “second-guessing the scientific sources used to assess the threat … posed by climate change.” The PCCS will re-evaluate the criteria employed by the national agencies to link national security to climate change.

The protest letter also claims that less than 0.2% of peer-reviewed climate science papers dispute that climate change is driven by humans. This is nonsense. In solar science alone during the first half of 2017, the number of peer-reviewed papers affirming a strong link between the sun and our climate, independent of human activity, represented approximately 4% of all climate science papers during that time – and there are many other fields of study apart from the sun.

Let’s hope that formation of the new committee will not be thwarted and that it will uncover other truths about climate science.

(This post was published previously on March 7, on The Post & Email blog.)

Next: Measles Rampant Again, Thanks to Anti-Vaccinationists

Nature vs Nurture: Does Epigenetics Challenge Evolution?

A new wrinkle in the traditional nature vs nurture debate – whether our behavior and personalities are influenced more by genetics or by our upbringing and environment – is the science of epigenetics. Epigenetics describes the mechanisms for switching individual genes on or off in the genome, which is an organism’s complete set of genetic instructions.


A controversial question is whether epigenetic changes can be inherited. According to Darwin’s 19th-century theory, evolution is governed entirely by heritable variation of what we now know as genes, a variation that usually results from mutation; any biological changes to the whole organism during its lifetime caused by environmental factors can’t be inherited. But recent evidence from studies on rodents suggests that epigenetic alterations can indeed be passed on to subsequent generations. If true, this implies that our genes record a memory of our lifestyle or behavior today that will form part of the genetic makeup of our grandchildren and great-grandchildren.

So was Darwin wrong? Is epigenetics an attack on science? At first blush, epigenetics is reminiscent of Lamarckism – the pre-Darwinian notion that acquired characteristics are heritable, promulgated by French naturalist Jean-Baptiste Lamarck. Lamarck’s most famous example was the giraffe, whose long neck was thought at the time to have come from generations of its ancestors stretching to reach foliage in high trees, with longer and longer necks then being inherited.

Darwin himself, when his proposal of natural selection as the evolutionary driving force was initially rejected, embraced Lamarckism as a possible alternative to natural selection. But the Lamarckian view was later discredited, as more and more evidence for natural selection accumulated, especially from molecular biology.

Nonetheless, the wheel appears to have turned back to Lamarck’s idea over the last 20 years. Several epidemiological studies have established an apparent link between 20th-century starvation and the current prevalence of obesity in the children and grandchildren of malnourished mothers. The most widely studied event is the Dutch Hunger Winter, the name given to a 6-month winter blockade of part of the Netherlands by the Germans toward the end of World War II. Survivors, who included Hollywood actress Audrey Hepburn, resorted to eating grass and tulip bulbs to stay alive.

The studies found that mothers who suffered malnutrition during early pregnancy gave birth to children who were more prone to obesity and schizophrenia than children of well-fed mothers. More unexpectedly, the same effects showed up in the grandchildren of the women who were malnourished during the first three months of their pregnancy. Similarly, an increased incidence of Type II diabetes has been discovered in adults whose pregnant mothers experienced starvation during the Ukrainian Famine of 1932-33 and the Great Chinese Famine of 1958-61.

All this data points to the transmission from generation to generation of biological effects caused by an individual’s own experiences. Further evidence for such epigenetic, Lamarckian-like changes comes from laboratory studies of agouti mice, so called because they carry the agouti gene that not only makes the rodents fat and yellow, but also renders them susceptible to cancer and diabetes. By simply altering a pregnant mother’s diet, researchers found they could effectively silence the agouti gene and produce offspring that were slender and brown, and no longer prone to cancer or diabetes.

The modified mouse diet was rich in methyl donors, small molecules that attach themselves to the DNA string in the genome and switch off the troublesome gene, and are found in foods such as onions and beets. In addition to its DNA, any genome in fact contains an array of chemical markers and switches that constitute the instructions for the estimated 21,000 protein-coding genes in the genome. That is, the array is able to turn the expression of particular genes on or off.

However, the epigenome, as this array is called, can’t alter the genes themselves. A soldier who loses a limb in battle, for example, will not bear children with shortened arms or legs. And, while there’s limited evidence that epigenetic changes in humans can be transmitted between generations, such as the starvation studies described above, the possibility isn’t yet fully established and further research is needed.

One line of thought, for which an increasing amount of evidence exists in animals and plants, is that epigenetic change doesn’t come from experience or use – as in the case of Lamarck’s giraffe – but actually results from Darwinian natural selection. The idea is that in order to cope with an environmental threat or need, natural selection may choose the variation in the species that has an epigenome favoring the attachment to its DNA of a specific type of molecule such as a methyl donor, capable of expressing or silencing certain genes. In other words, epigenetic changes can exploit existing heritable genetic variation, and so are passed on.

Is this explanation correct or, as creationists would like to think, did Darwin’s theory of evolution get it wrong? Time will tell.

How the Scientific Consensus Can Be Wrong


Consensus is a necessary step on the road from scientific hypothesis to theory. What many people don’t realize, however, is that a consensus isn’t necessarily the last word. A consensus, whether newly proposed or well-established, can be wrong. In fact, the mistaken consensus has been a recurring feature of science for many hundreds of years.

A recent example of a widespread consensus that nevertheless erred was the belief that peptic ulcers were caused by stress or spicy foods – a dogma that persisted in the medical community for much of the 20th century. The scientific explanation at the time was that stress or poor eating habits resulted in excess secretion of gastric acid, which could erode the digestive lining and create an ulcer.

But two Australian doctors discovered evidence that peptic ulcer disease was caused by a bacterial infection of the stomach, not stress, and could be treated easily with antibiotics. Yet overturning such a longstanding consensus to the contrary would not be simple. As one of the doctors, Barry Marshall, put it:

“…beliefs on gastritis were more akin to a religion than having any basis in scientific fact.”

To convince the medical establishment the pair were right, Marshall resorted in 1984 to the drastic measure of infecting himself with a potion containing the bacterium in question (known as Helicobacter pylori). Despite this bold and risky act, the medical world didn’t finally accept the new doctrine until 1994. In 2005, Barry Marshall and Robin Warren were awarded the Nobel Prize in Medicine for their discovery.

Earlier in the 20th century, an individual fighting established authority had overthrown conventional scientific wisdom in the field of geology. Acceptance of Alfred Wegener’s revolutionary theory of continental drift, proposed in 1912, was delayed for many decades – even longer than resistance to the infection explanation for ulcers persisted – because the theory was seen as a threat to the geological establishment.

Geologists of the day refused to take seriously Wegener’s circumstantial evidence of matchups across the ocean in continental coastlines, animal and plant fossils, mountain chains and glacial deposits, clinging instead to the consensus of a contracting earth to explain these disparate phenomena. The old consensus of fixed continents endured among geologists even as new, direct evidence for continental drift surfaced, including mysterious magnetic stripes on the seafloor. But only after the emergence in the 1960s of plate tectonics, which describes the slow sliding of thick slabs of the earth’s crust, did continental drift theory become the new consensus.

A much older but well-known example of a mistaken consensus is the geocentric (earth-centered) model of the solar system that held sway for 1,500 years. This model was originally developed by the ancient Greek philosophers Plato and Aristotle, and later refined by the astronomer Ptolemy in the 2nd century. The Italian mathematician and astronomer Galileo Galilei fought to overturn the geocentric consensus, advocating instead the rival heliocentric (sun-centered) model of Copernicus – the model which we accept today, and for which Galileo gathered evidence in the form of unprecedented telescopic observations of the sun, planets and planetary moons.

Although Galileo was correct, his endorsement of the heliocentric model brought him into conflict with university academics and the Catholic Church, both of which adhered to Ptolemy’s geocentric model. A resolute Galileo insisted that:

 “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.”

But to no avail: Galileo was called before the Inquisition, forbidden to defend Copernican ideas, and finally sentenced to house arrest for publishing a book that did just that and also ridiculed the Pope.

These are far from the only cases in the history of science of a consensus that was wrong. Others include the widely held 19th-century religious belief in creationism that impeded acceptance of Darwin’s theory of evolution, and the 20th-century paradigm linking saturated fat to heart disease.

Consensus is built only slowly, so belief in the consensus tends to become entrenched over time and is not easily abandoned by its devotees. This is certainly the case for the current consensus that climate change is largely a result of human activity – a consensus, as I’ve argued in a previous post, that is most likely mistaken.

Next: Nature vs Nurture: Does Epigenetics Challenge Evolution?

How Elizabeth Holmes Abused Science to Deceive Investors

Even in Silicon Valley, which is no stranger to hubris and deceit, it stands out – the brazen audacity of a young Stanford dropout who bilked prominent investors out of hundreds of millions of dollars for a fictitious blood-testing technology based on finger-stick specimens.

Credit: Associated Press

Elizabeth Holmes, former CEO of now defunct Theranos, last year settled charges of massive financial fraud brought by the U.S. SEC (Securities and Exchange Commission), and now faces criminal charges in California for her multiple misdeeds. But beyond the harm done to duped investors, fired employees and patients misled about blood test results, Holmes’ duplicity and pathological lies only add to the abuse being heaped on science today.

One of the linchpins of the scientific method, a combination of observation and reason developed and refined for more than two thousand years, is the replication step. Observations that can’t be repeated, preferably by independent investigators, don’t qualify as scientific evidence. When the observations are blood tests on actual patients, repeatability and reliability are obviously paramount. Yet Theranos failed badly in both these areas.

Holmes created a compact testing device, originally known as the Edison and later dubbed the miniLab, supposedly capable of inexpensively diagnosing everything from diabetes to cancer. But within a year or two, questions began to emerge about just how good it was.

Several Theranos scientists protested in 2013 that the technology wasn’t ready for the market. Instead of repeatable results, the company’s new machine was generating inaccurate and even erroneous data for patients. Whistleblowers addressing a recent forum related how open falsification and cherry-picking of data were a regular part of everyday operations at Theranos. And technicians had to rerun tests if the results weren’t “acceptable” to management.

Much of this chicanery was exposed by Wall Street Journal investigative reporter John Carreyrou. In the wake of his sensational reporting, drugstore chain Walgreens announced in 2015 that it was suspending previous plans to establish blood testing facilities using Theranos technology in more than 40 stores across the U.S.

Among the horrors that Carreyrou documented in a later book was a Theranos test on a 16-year-old Arizona girl whose faulty result showed a high level of potassium, meaning she could have been at risk of a heart attack. Tests on another Arizona woman suggested an impending stroke, for which she was unnecessarily rushed to a hospital emergency room. Hospital tests contradicted both sets of Theranos data. In January 2016, the Centers for Medicare and Medicaid Services, the oversight agency for blood-testing laboratories, declared that one of Theranos’ labs posed “immediate jeopardy” to patients.

Closely allied to the repeatability required by the scientific method is transparency. Replication of a result isn’t possible unless the scientists who conducted the original experiment described their work openly and honestly – something that doesn’t always occur today. To be fair, there’s a need for a certain degree of secrecy in a commercial setting, in order to protect a company’s intellectual property. However, this need shouldn’t extend to internal operations of the company or to interactions between the very employees whose research is the basis of the company’s products.

But that’s exactly what happened at Theranos, where its scientists and technicians were kept in the dark about the purpose of their work and constantly shuffled from department to department. Physical barriers were erected in the research facility to prevent employees from actually seeing the lab-on-a-chip device, based on microfluidics and biochemistry, supposedly under development.

Only a handful of people knew that the much-vaunted technology was in fact a fake. In a 2014 article in Fortune magazine, Holmes claimed that Theranos already offered more than 200 blood tests and was ramping up to more than 1,000. The reality was that Theranos could only perform 12 of the 200-plus tests, all of one type, on its own equipment and had to use third-party analyzers to carry out all the other tests. Worse, Holmes allegedly knew that the miniLab had problems with accuracy and reliability, was slower than some competing devices and, in some ways, wasn’t competitive at all with more conventional blood-testing machines.

Investors were fooled too. Among the luminaries deceived by Holmes were former U.S. Secretaries of State Henry Kissinger and George Shultz, recently resigned Secretary of Defense and retired General James Mattis – all of whom became members of Theranos’ “all-star board” – and media tycoon Rupert Murdoch. Initial meetings with new investors were often followed by a rigged demonstration of the miniLab purporting to analyze their just-collected finger-stick samples.

Holmes not only fleeced her investors but also did a great disservice to science. The story will shortly be immortalized in a movie starring Jennifer Lawrence as Holmes.

Next: How the Scientific Consensus Can Be Wrong

Consensus in Science: Is It Necessary?

An important but often misunderstood concept in science is the role of consensus. Some scientists argue that consensus has no place at all in science, that the scientific method alone with its emphasis on evidence and logic dictates whether a particular hypothesis stands or falls.  But the eventual elevation of a hypothesis to a widely accepted theory, such as the theory of evolution or the theory of plate tectonics, does depend on a consensus being reached among the scientific community.


In politics, consensus democracy refers to a consensual decision-making process by the members of a legislature – in contrast to traditional majority rule, in which minority opinions can be ignored by the majority. In science, consensus has long been more like majority rule, but based on facts or empirical evidence rather than personal convictions. Although observational evidence is sometimes open to interpretation, it was the attempt to redefine scientific consensus in the mold of consensus democracy that triggered a reaction to using the term in science.

This reaction was eloquently summarized by medical doctor and Jurassic Park author Michael Crichton, in a 2003 Caltech lecture titled “Aliens Cause Global Warming”:

“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. …

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world.

In science consensus is irrelevant. What is relevant is reproducible results. … There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus.”

What Crichton was talking about, I think, was the consensus democracy sense of the word – consensus forming the basis for legislation, for political action. But that’s not the same as scientific consensus, which can never be reached by taking a poll of scientists. Rather, a scientific consensus is built by the slow accumulation of unambiguous pieces of empirical evidence, until the collective evidence is strong enough to become a theory.

Indeed, the U.S. AAAS (American Association for the Advancement of Science) and NAS (National Academy of Sciences, Engineering and Medicine) both define a scientific theory in such terms. According to the NAS, for example,

 “The formal scientific definition of theory …  refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.”

Contrary to popular opinion, theories rank highest in the scientific hierarchy – above laws, hypotheses and facts or observations. 

Crichton’s reactionary view of consensus as out of place in the scientific world has been voiced in the political sphere as well. Twentieth-century UK prime minister Margaret Thatcher once made the comment, echoing Crichton’s words, that political consensus was “the process of abandoning all beliefs, principles, values and policies in search of something in which no one believes, but to which no one objects; the process of avoiding the very issues that have to be solved, merely because you cannot get agreement on the way ahead.” Thatcher was a firm believer in majority rule.

A well-known scientist who shares Crichton’s opinion of scientific consensus is James Lovelock, ecologist and propounder of the Gaia hypothesis that the earth and its biosphere are a living organism. Lovelock has said of consensus:

“I know that such a word has no place in the lexicon of science; it is a good and useful word, but it belongs to the world of politics and the courtroom, where reaching a consensus is a way of solving human differences.”

But as discussed above, there is a role for consensus in science. The notion articulated by Crichton and Lovelock that consensus is irrelevant has arisen in response to the modern-day politicization of science. One element of their proclamations does apply, however. As pointed out by astrophysicist and author Ethan Siegel, the existence of a scientific consensus doesn’t mean that the “science is settled.” Consensus is merely the starting point on the way to a full-fledged theory.

Next week: How Elizabeth Holmes Abused Science to Deceive Investors

Corruption of Science: Scientific Fraud


One of the most troubling signs of the attack on science is the rising incidence of outright fraud, in the form of falsification and even fabrication of scientific data. A 2012 study published by the U.S. National Academy of Sciences noted an increase of almost 10 times since 1975 in the percentage of biomedical research articles retracted because of fraud. Although the current percentage retracted due to fraud was still very small at approximately 0.01%, the study authors remarked that this underestimated the actual percentage of fraudulent articles, since only a fraction of such articles are retracted.

One of the more egregious episodes of fraud was British gastroenterologist Andrew Wakefield’s claim in a 1998 study that 8 out of 12 children in the study had developed symptoms of autism after injection of the combination MMR (measles-mumps-rubella) vaccine. As a result of the well publicized study, hundreds of thousands of parents who had conscientiously followed immunization schedules in the past panicked and began declining MMR vaccine. And, unsurprisingly, outbreaks of measles subsequently occurred all over the world.

But Wakefield’s paper was slowly discredited over the next 12 years, until the prestigious medical journal The Lancet formally retracted it in 2010; shortly after, the disgraced gastroenterologist’s medical license was revoked. The BMJ went one step further in 2011 by declaring the paper fraudulent, citing unmistakable evidence that Wakefield had fabricated his data on autism and the MMR vaccine.

In 2015, Iowa State University researcher Dong Pyou Han received a prison sentence of four and a half years and was ordered to repay $7.2 million in grant funds, after being convicted of fabricating and falsifying data in trials of a potential HIV vaccine.  On multiple occasions, Han had mixed blood samples from vaccinated rabbits into human HIV antibodies to create the illusion that the vaccine boosted immunity against HIV. Although Han was contrite in court, one of the prosecuting attorneys doubted his remorse, pointing out that Han’s job depended on research funding that was only renewed as a result of his bogus presentations showing the experiments were succeeding.

In 2018, officials at Harvard Medical School and Brigham and Women’s Hospital in Boston called for the retraction of a staggering 31 papers from the laboratory of once prominent Italian heart researcher Piero Anversa, because the papers "included falsified and/or fabricated data." Dr. Anversa’s research was based on the notion that the heart contains stem cells, a type of cell capable of transforming into other cells, that could regenerate cardiac muscle. But other laboratories couldn’t verify Anversa’s idea and were unable to reproduce his experimental findings – a major red flag, since replication of scientific data is a crucial part of the scientific method.

Despite this warning sign, the work spawned new companies claiming that their stem-cell injections could heal hearts damaged by a heart attack, and led to a clinical trial funded by the U.S. National Heart, Lung and Blood Institute. The Boston hospital’s parent company, however, agreed in 2017 to a $10 million settlement with the U.S. government over allegations that the published research of Anversa and two colleagues had been used to fraudulently obtain federal funding. Apart from data that the lab fabricated, the government alleged that it utilized invalid and improperly characterized cardiac stem cells, and maintained deliberately misleading records. Anversa has since left the medical school and hospital.

Scientific fraud today extends even to the publishing world. A recent sting operation targeted so-called predatory journals – those that charge a fee but offer no publication services (such as peer review) beyond publication itself. The investigation found that an astonishing 33% of the journals contacted offered a fictitious scientist a position on their editorial boards, and four of them immediately appointed the fake scientist editor-in-chief.

It’s no wonder then that scientific fraud is escalating. In-depth discussion of recent cases can be found on several websites, such as For Better Science and Retraction Watch.

Next week: Consensus in Science: Is It Necessary?

Corruption of Science: The Reproducibility Crisis

One of the more obvious signs that modern science is ailing is the reproducibility crisis – the vast number of peer-reviewed scientific studies that can’t be replicated in subsequent investigations and whose findings turn out to be false. In the field of cancer biology, for example, researchers discovered that an alarming 89% of published results couldn’t be reproduced. Even in the so-called soft science of psychology, the rate of irreproducibility hovers around 60%. And to make matters worse, falsification and outright fabrication of scientific data are on the rise.


The reproducibility crisis is drawing a lot of attention from scientists and nonscientists alike. In 2018, the U.S. NAS (the National Association of Scholars in this case, not the Academy of Sciences), an academic watchdog organization that normally focuses on the liberal arts and education policy, published a particularly comprehensive examination of the problem. Although the emphasis in the NAS report is on the misuse of statistical methods in scientific research, the report discusses possible causes of irreproducibility and presents a laundry list of recommendations for addressing the crisis.

The crisis is especially acute in the biomedical sciences. Over 10 years ago, Greek medical researcher John Ioannidis argued that the majority of published research findings in medicine were wrong. This included epidemiological studies in areas such as dietary fat, vaccination and GMO foods as well as clinical trials and cutting-edge research in molecular biology. 

In 2011, a team at Bayer HealthCare in Germany reported that only about 25% of published preclinical studies on potential new drugs could be validated. Some of the unreproducible papers had catalyzed entirely new fields of research, generating hundreds of secondary publications. More worryingly, other papers had led to clinical trials that were unlikely to be of any benefit to the participants.

Author Richard Harris describes another disturbing example: research on breast cancer that was unwittingly conducted on misidentified skin cancer cells. The sloppiness resulted in thousands of papers on the wrong cancer being published in prominent medical journals. Harris blames the sorry condition of current research on scientists taking shortcuts around the once venerated scientific method.

Cutting corners to pursue short-term success is but one consequence of the pressures experienced by today’s scientists. These pressures include the constant need to win research grants as well as to publish in high-impact journals. The more spectacular a submitted paper is, the more likely it is to be accepted, but often at the cost of research quality. It has become more important to be the first to publish, or to present sensational findings, than to be correct.

Another consequence of the bind in which scientists find themselves is the ever increasing degree of misunderstanding and misuse of statistics, as detailed in the NAS report. Among other abuses, the report cites spurious correlations in data that researchers claim to be “statistically significant”; the improper use of statistics due to poor understanding of statistical methodology; and the conscious or unconscious biasing of data to fit preconceived ideas.

Ioannidis links irreproducibility to the habit of assigning too much importance to the statistical p-value. The p-value measures how likely it is that data at least as extreme as those observed would turn up by chance if there were no real effect; the smaller the p-value, the harder the data are to explain away as a fluke and the stronger the case for a new hypothesis. Although p-values below 0.05 are commonly regarded as statistically significant, using that threshold as a criterion for publication means that, when no real effect exists, roughly one study in twenty will still appear significant through chance alone. The NAS report recommends defining statistical significance as a p-value less than 0.01 rather than 0.05 – a much more demanding standard.
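To see what that threshold implies in practice, here is a minimal Python simulation (not from the NAS report, and assuming NumPy and SciPy are available) in which every “experiment” compares two groups drawn from the same distribution, so any “significant” result is a false positive:

```python
# Each simulated experiment compares a control and a treatment group drawn from
# the SAME distribution, so there is no real effect; any p-value that clears the
# significance bar is a false positive produced by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_pos_05 = false_pos_01 = 0

for _ in range(n_experiments):
    control = rng.normal(loc=0.0, scale=1.0, size=30)
    treatment = rng.normal(loc=0.0, scale=1.0, size=30)  # no true difference
    p = stats.ttest_ind(control, treatment).pvalue
    false_pos_05 += p < 0.05
    false_pos_01 += p < 0.01

print(f"'Significant' at p < 0.05: {false_pos_05 / n_experiments:.1%}")  # roughly 5%
print(f"'Significant' at p < 0.01: {false_pos_01 / n_experiments:.1%}")  # roughly 1%
```

Tightening the cutoff from 0.05 to 0.01 cuts the rate of such spurious “discoveries” from about one in twenty to about one in a hundred, which is the logic behind the report’s recommendation.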

The report further recommends integration of basic statistics into curricula at high-school and college levels, and rigorous educational programs in those disciplines that rely heavily on statistics. Beyond statistics, other suggested reforms include having researchers make their data available for public inspection, which doesn’t often occur at present, and encouraging government agencies to fund projects designed purely to replicate earlier research, which again is rare today. The NAS believes that measures like these will help to improve reproducibility in scientific studies as well as keep advocacy and the politicization of science at bay.

Next week: Corruption of Science: Scientific Fraud

Should We Fear Low-Dose Radiation? What Science Says

Modern science is constantly under attack from political forces, often fueled by fear. A big fear is radiation exposure – a fear made only too real by the devastation of the atomic bombs dropped on Japan to end World War II, and the aftereffects of several extensive nuclear accidents around the world in the last few decades. But, while high doses of radiation are known to be harmful to human health or even deadly, the effects of low doses are controversial.


For many years, the prevailing wisdom in the scientific community about radiation protection has been that there is no safe dose of ionizing radiation. This belief is enshrined in the so-called LNT (linear, no-threshold) model used to estimate cancer risks and establish cleanup levels in radioactively contaminated environments. The model dates back to studies of irradiated fruit flies in the 1930s, and subsequent formulation of the LNT dose-response model by American geneticist and Nobel laureate Hermann Muller.

The LNT model assumes that the body’s response to radiation is directly proportional to the radiation dose. So any detrimental health effects – such as cancer or an inheritable genetic mutation – go up and down with dose (and dose rate), but don’t disappear altogether until the dose falls to zero.

A very different concept that is gaining acceptance among radiation workers is the threshold model. Unlike the LNT model, this assumes that exposure to radiation is safe as long as the exposure is below a threshold dose. That is, there are no adverse health effects at all at low radiation doses, but above the threshold there are effects proportional to the dose, as in the no-threshold model.  

A new variation on the threshold model is hormesis, which hypothesizes that below the threshold dose, beneficial health effects actually occur. Hormesis has been championed by Edward Calabrese, an environmental toxicologist at the University of Massachusetts Amherst who has long been critical of the LNT approach to risk assessment, for both radiation and toxic chemicals. In 2015, a petition was submitted to the U.S. NRC (Nuclear Regulatory Commission) to adopt the hormesis model for regulatory purposes.
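To make the distinction between the three models concrete, here is a rough numerical sketch in Python; the slope and threshold values are invented purely for illustration and are not real risk coefficients or regulatory figures.

```python
# Notional excess cancer risk as a function of dose under the three dose-response
# models described above. All numbers are illustrative placeholders.
def excess_risk(dose_msv: float, model: str,
                slope: float = 1e-5, threshold_msv: float = 100.0) -> float:
    if model == "lnt":
        # Linear no-threshold: risk stays proportional to dose all the way to zero.
        return slope * dose_msv
    if model == "threshold":
        # No effect below the threshold; risk rises linearly above it.
        return slope * max(0.0, dose_msv - threshold_msv)
    if model == "hormesis":
        # Below the threshold the net effect is slightly beneficial (negative
        # excess risk); above it, risk rises as in the threshold model.
        if dose_msv < threshold_msv:
            return -0.1 * slope * dose_msv
        return slope * (dose_msv - threshold_msv)
    raise ValueError(f"unknown model: {model}")

for dose in (10, 100, 1000):  # millisieverts
    print(dose, {m: round(excess_risk(dose, m), 6)
                 for m in ("lnt", "threshold", "hormesis")})
```

At low doses the three curves diverge only slightly in absolute terms, which is precisely why sparse low-dose data struggle to discriminate between them.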

Which model is the correct picture of how the human body is affected by radiation? The scientific evidence isn’t all that clear.

Even when the LNT model was proposed, only very limited data was available at low doses, a situation that’s unchanged today. This means that the statistical accuracy of individual data points at low doses is poor, and much of the data could equally well fit the LNT, threshold or hormesis models. Two major pieces of evidence that a U.S. NAS (National Academy of Sciences) committee formerly relied on to buttress the LNT model – a study of Japanese atomic bomb survivors and a 15-country study of nuclear workers – are in fact compatible with either the threshold or the LNT model, as more recent analysis has shown.

The threshold model may seem more intuitive, since it’s well known for chemical toxins that any substance is toxic above a certain dose. “The dose makes the poison,” as the 16th-century Swiss physician Paracelsus observed. But the biological response to radiation isn’t necessarily the same as the response to a toxin.

Evidence in support of the hormesis model, however, includes numerous studies showing that low radiation doses can activate the immune system and thereby protect health. And no increase in the incidence of cancer has been observed among those Japanese bomb survivors exposed to only low doses of the same radiation that, in higher doses, sickened or killed others.

Scientific opinion is divided. The once strong consensus on the validity of the LNT model has evaporated, with 70% of scientists at U.S. national laboratories now believing that the threshold model more accurately reflects radiation effects. A similar percentage of scientists in several European countries hold the same view.

Whether or not low doses of radiation are protective, as the hormesis model suggests, no adverse health effects have ever been detected from exposure to low-dose, low-dose-rate radiation. But the public clings to the outmoded scientific consensus of the LNT model that no dose is safe. So society at large is unnecessarily fearful of any exposure to radiation whatsoever, when in reality low doses are most likely benign and could even be beneficial.

Next: Corruption of Science: The Reproducibility Crisis

How Hype Is Hurting Science

The recent riots in France over a proposed carbon tax, aimed at supposedly combating climate change, were a direct result of blatant exaggeration in climate science for political purposes. It’s no coincidence that the decision to move forward with the tax came soon after an October report from the UN’s IPCC (Intergovernmental Panel on Climate Change), claiming that drastic measures to curtail climate change are necessary by 2030 in order to avoid catastrophe. President Emmanuel Macron bought into the hype, only to see his people rise up against him.

Exaggeration has a long history in modern science. In 1977, the select U.S. Senate committee drafting new low-fat dietary recommendations wildly exaggerated its message by declaring that excessive fat or sugar in the diet was as much of a health threat as smoking, even though a reasoned examination of the evidence revealed that wasn’t true.

About a decade later, the same hype infiltrated the burgeoning field of climate science. At another Senate committee hearing, astrophysicist James Hansen, who was then head of GISS (NASA’s Goddard Institute for Space Studies), declared he was 99% certain that the 0.4 degrees Celsius (0.7 degrees Fahrenheit) of global warming from 1958 to 1987 was caused primarily by the buildup of greenhouse gases in the atmosphere, and wasn’t a natural variation. This assertion was based on a computer model of the earth’s climate system.

At a previous hearing, Hansen had presented climate model predictions of U.S. temperatures 30 years in the future that were three times higher than they turned out to be. This gross exaggeration makes a mockery of his subsequent claim that the warming from 1958 to 1987 was all man-made. His stretching of the truth stands in stark contrast to the caution and understatement of traditional science.

But Hansen’s hype only set the stage for others. Similar computer models have also exaggerated the magnitude of more recent global warming, failing to predict the pause in warming from the late 1990s to about 2014. During this interval, the warming rate dropped to below half the rate measured from the early 1970s to 1998. Again, the models overestimated the warming rate by two or three times.

An exaggeration mindlessly repeated by politicians and the mainstream media is the supposed 97% consensus among climate scientists that global warming is largely man-made. The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. But what is never revealed is that almost 8,000 of the abstracts expressed no opinion at all on anthropogenic (human-caused) warming. When that and a subsidiary survey are taken into account, the consensus among climate scientists falls to somewhere between 33% and 63%. So much for an overwhelming majority!
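The arithmetic behind the low end of that range is easy to check. The back-of-the-envelope Python below uses the round numbers quoted above (the exact counts in the underlying study differ slightly):

```python
# How the headline consensus figure changes with the choice of denominator,
# using the approximate counts quoted in the text above.
total_abstracts = 12_000
no_position = 8_000                         # abstracts expressing no opinion
took_position = total_abstracts - no_position
endorsing = round(took_position * 0.97)     # ~97% of those that took a position

print(f"Share of position-taking abstracts endorsing man-made warming: "
      f"{endorsing / took_position:.0%}")   # ~97%
print(f"Share of ALL abstracts endorsing man-made warming: "
      f"{endorsing / total_abstracts:.0%}") # ~32%, roughly the 33% low end quoted above
```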

A further over-hyped assertion about climate change is that the polar bear population at the North Pole is shrinking because of diminishing sea ice in the Arctic, and that the bears are facing extinction. For global warming alarmists, this claim has become a cause célèbre. Yet, despite numerous articles in the media and photos of apparently starving bears, current evidence shows that the polar bear population has actually been steady for the whole period that the ice has been decreasing – and may even be growing, according to the native Inuit.

It’s not just climate data that’s exaggerated (and sometimes distorted) by political activists. Apart from the historical example in nutritional science cited above, the same trend can be found in areas as diverse as the vaccination debate and the science of GMO foods.

Exaggeration is a common, if frowned-upon marketing tool in the commercial world: hype helps draw attention in the short term. But its use for the same purpose in science only tarnishes the discipline. And, just as exaggeration eventually turns off commercial customers interested in a product, so too does it make the general public wary if not downright suspicious of scientific proclamations. The French public has recognized this on climate change.

Subversion of Science: The Low-Fat Diet


Remember the low-fat diet? Highly popular in the 1980s and 1990s, it was finally pushed out of the limelight by competing eating regimens such as the Mediterranean diet. That the low-fat diet wasn’t particularly healthy hadn’t yet been discovered. But its official blessing for decades by the governments of both the U.S. and the UK represents a subversion of science by political forces that overlook evidence and abandon reason.

The low-fat diet was born in a 1977 report from a U.S. government committee chaired by Senator George McGovern, which had become aware of research purportedly linking excessive fat in the diet to killer diseases such as coronary heart disease and cancer. The committee hoped that its report would do as much for diet and chronic disease as the earlier Surgeon General’s report had done for smoking and lung cancer.

The hypothesis that eating too much saturated fat results in heart disease, caused by narrowing of the coronary arteries, was formulated by American physiologist Ancel Keys in the 1950s. Keys’ own epidemiological study, conducted in seven different countries, initially confirmed his hypothesis. But many other studies failed to corroborate the diet-heart hypothesis, and Keys’ own data no longer substantiated it 25 years later. Double-blind clinical trials, which unlike epidemiological studies are able to establish causation, also gave results in conflict with the hypothesis.

Although it was found that eating less saturated fat could lower cholesterol levels, a growing body of evidence showed that it didn’t help to ward off heart attacks or prolong life spans. Yet Senator McGovern’s committee forged ahead regardless. The results of all the epidemiological studies and major clinical trials that refuted the diet-heart hypothesis were simply ignored – a classic case of science being trampled on by politics.

The McGovern committee’s report turned the mistaken hypothesis into nutritional dogma by drawing up a detailed set of dietary guidelines for the American public. After heated political wrangling with other government agencies, the USDA (U.S. Department of Agriculture) formalized the guidelines in 1980, effectively sanctioning the first ever, official low-fat diet. The UK followed suit a few years later.

While the guidelines erroneously linked high consumption of saturated fat to heart disease, they did concede that what constitutes a healthy level of fat in the diet was controversial. The guidelines recommended lowering intake of high-fat foods such as eggs and butter; boosting consumption of fruits, vegetables, whole grains, poultry and fish; and eating fewer foods high in sugar and salt.

With government endorsement, the low-fat diet quickly became accepted around the world. It was difficult back then even to find cookbooks that didn’t extol the virtues of the diet. Unfortunately for the public, the diet promoted to conquer one disease contributed to another – obesity – because it replaced fat with refined carbohydrates. And it wasn’t suitable for everyone.

This first became evident in the largest-ever long-term clinical trial of the low-fat diet, known as the Women’s Health Initiative. But, just like the earlier studies, the trial again showed that the diet-heart hypothesis didn’t hold up, at least for women. After eight years, the low-fat diet was found to have had no effect on heart disease or deaths from the disease. Worse still, in a short-term study of the low-fat diet among U.S. Boeing employees, women who followed the diet appeared to have actually increased their risk of heart disease.

A UN review of available data in 2008 concluded that several clinical trials of the diet “have not found evidence for beneficial effects of low-fat diets,” and commented that there wasn’t any convincing evidence either for any significant connection between dietary fat and coronary heart disease or cancer.

Today the diet-heart hypothesis is no longer widely accepted and nutritional science is beginning to regain the ground taken over by politics. But it has taken over 60 years for this attack on science to be repulsed.

Next week: How Hype Is Hurting Science

Use and Misuse of the Law in Science

Aside from patent law, science and the law are hardly bosom pals. But there are many parallels between them: above all, they’re both crucially dependent on evidence and logic. However, while the legal system has been used to defend science and to settle several scientific issues, it has also been misused for advocacy by groups such as anti-evolutionists and anti-vaccinationists.


In the U.S., the law played a major role in keeping the teaching of creationism out of schools during the latter part of the 20th century. Creationism, discussed in previous posts on this blog, is a purely religious belief that rejects the theory of evolution. Because of the influence of the wider Protestant fundamentalist movement earlier in the century, which culminated in the infamous Scopes Monkey Trial of 1925, little evolution was taught in American public schools and universities for decades.

All that changed in 1963, when the U.S., as part of an effort to catch up to the rival Soviet Union in science, issued a new biology text, making high-school students aware for the first time of their apelike ancestors. And five years later, the U.S. Supreme Court struck down the last of the old state laws banning the teaching of evolution in schools.

In 1987 the Supreme Court went further, in upholding a ruling by a Louisiana judge that a state law, mandating that equal time be given to the teaching of creation science and evolution in public schools, was unconstitutional. Creationism suffered another blow in 2005 when a judge in Dover, Pennsylvania ruled that the school board’s sanctioning of the teaching of intelligent design in its high schools was also unconstitutional. The board had angered teachers and parents by requiring biology teachers to make use of an intelligent design reference book in their classes.

All these events show how the legal system was misused repeatedly by anti-evolutionists to argue that creationism should be taught in place of or alongside evolution in public schools, but how at the same time the law was used successfully to quash the creationist efforts and to bolster science.

Much the same pattern can be seen with anti-vaccine advocates, who have misused lawsuits and the courtroom to maintain that their objections to vaccination are scientific and that vaccines are harmful. But judges in many different courts have found the evidence presented for all such contentions to be unscientific.

The most notable example was a slew of cases – 5,600 in all – that came before the U.S. Vaccine Court in 2007. Alleged in these cases was that autism, the often devastating neurological disorder in children, is caused by vaccination with the measles-mumps-rubella (MMR) vaccine, or by a combination of the vaccine with a mercury-based preservative. To handle the enormous caseload, the court chose three special masters to hear just three test cases on each of the two charges.

In 2009 and 2010, the Vaccine Court unanimously rejected both contentions. The special masters called the evidence weak and unpersuasive, and chastised doctors and researchers who “peddled hope, not opinions grounded in science and medicine.”

Likewise, the judge in a UK court case alleging a link between autism and the combination diphtheria-tetanus-pertussis (DTP) vaccine found that the “plaintiff had failed to establish … that the vaccine could cause permanent brain damage in young children.” The judge excoriated a pediatric neurologist whose testimony at the trial completely contradicted assertions the doctor had made in a previous research paper that had triggered the litigation, along with other lawsuits, in the first place.

But, while it took a court of law to establish how unscientific the evidence for the claims about vaccines was, and it was the courts that kept the teaching of unscientific creationism out of school science classes, the court of public opinion has not been greatly swayed in either case. As many as 40% of the general public worldwide believe that all life forms, including ourselves, were created directly by God out of nothing, and that the earth is only between 6,000 and 10,000 years old. And more and more parents are choosing not to vaccinate their children, insisting that vaccines always cause disabling side effects or even other diseases.

Although the law has done its best to uphold the court of science, the attack on science continues.

Next week: Subversion of Science: The Low-Fat Diet

On Science Skeptics and Deniers

Do all climate change skeptics also question the theory of evolution? Do anti-vaccinationists also believe that GMO foods are unsafe? As we’ll see in this post, scientific skepticism and “science denial” are much more nuanced than most people think.


To begin with, scientific skeptics on hot-button issues such as climate change, vaccination and GMOs (genetically modified organisms) are often linked together as anti-science deniers. But the simplistic notion that skeptics and deniers are one and the same – the stance taken by the mainstream media – is mistaken. And the evidence shows that skeptics or deniers in one area of science aren’t necessarily so in other areas.

The split between outright deniers of the science and skeptics who merely question some of it varies markedly, surveys show: there are approximately twice as many deniers as skeptics on evolution, but only about half as many deniers as skeptics on climate change.

In evolution, approximately 32% of the American public are creationists who deny Darwin’s theory of evolution entirely, while another 14% are skeptical of the theory. In climate change, the numbers are reversed with about 19% denying any human role in global warming, and a much larger 35% (averaged from here and here) accepting a human contribution but being skeptical about its magnitude. In GMOs, on the other hand, the percentages of skeptics and deniers are about the same.

The surveys also reveal that anti-science skepticism or denial doesn’t carry over from one issue to another. For example, only about 65% of evolutionary skeptics or deniers are also climate change skeptics or deniers; the remaining 35% who doubt or reject evolution believe in the climate change narrative of largely human-caused warming. So the two groups of skeptics or deniers don’t consist of the same individuals, although there is some overlap.

In the case of GMO foods, approximately equal percentages of the public reject the consensus among scientists that GMOs are safe to eat, and are skeptical about climate change. Once more, however, the two groups don’t consist of the same people. And, even though most U.S. farmers accept the consensus on the safety of GMO crops but are climate change skeptics, there are environmentalists who are GMO deniers or skeptics but accept the prevailing belief on climate change. Prince Charles is a well-known example of the latter.

Social scientists who study such surveys have identified two main influences on scientific skepticism and denial: religion and politics. As we might expect, opinions about evolution are strongly tied to religious identity, practice and belief. And, while Evangelicals are much more likely to be skeptical about climate change than those with no religious affiliation, climate skepticism overall seems to be driven more by politics – specifically, political conservatism – than by religion.

In the political sphere, U.S. Democrats are more inclined than Republicans to believe that human actions are the cause of global warming, that the theory of evolution is valid, and that GMO foods are safe to eat. However, other factors influence the perception of GMO food safety, such as corporate control of food production and any government intervention. Variables like demographics and education come into the picture too, in determining skeptical attitudes on all issues.

Lastly, a striking aspect of skepticism and denial in contemporary science is the gap in opinion between scientists and the general public. Although skepticism is an important element of the scientific method, a far larger percentage of the population in general question the prevailing wisdom on scientific issues than do scientists, with the possible exception of climate change. The precise reasons for this gap are complex according to a recent study, and include religious and political influences as well as differences in cognitive functioning and in education. While scientists may possess more knowledge of science, the public may exhibit more common sense.

Next week: Use and Misuse of the Law in Science

Why Creation Science Isn’t Science

According to so-called creation science – the widely held religious belief that the world and all its living creatures were created by God in just six days – the earth is only 6,000 to 10,000 years old. The faith-based belief rejects Darwin’s scientific theory of evolution, which holds that life forms evolved over a long period of time through the process of natural selection. In resorting to fictitious claims to justify its creed, creation science only masquerades as science.    


Creation science has its roots in a literal interpretation of the Bible. To establish a biblical chronology, various scholars have estimated the lifespans of prominent figures and the intervals between significant historical events described in the Bible. The most detailed chronology was drawn up in the 1650s by an Irish archbishop, who calculated that exactly 4,004 years elapsed between the creation and the birth of Jesus. It’s this dubious calculation (4,004 years before the birth of Jesus, plus the roughly 2,000 years since) that underlies the 6,000-year lower limit for the age of the earth.

Scientific evidence, however, tells us that the earth’s actual age is 4.5 to 4.6 billion years. Even when Darwin proposed his theory, the available evidence at the time indicated an age of at least a few hundred thousand years. Darwin himself believed that the true number was more like several hundred million years, based on his forays into geology. 

By the early 1900s, the newly developed method of radiometric dating dramatically boosted estimates of Earth’s age into the billion year range – a far cry from the several thousand years that young-Earth creationists allow, derived from their literal reading of the Bible. Radiometric dating relies on the radioactive decay of certain chemical elements such as uranium, carbon or potassium, for which the decay rates are accurately known.
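As a rough illustration of the arithmetic involved (a simplified sketch that ignores the corrections real dating methods require), the age of a mineral follows directly from the measured ratio of daughter to parent isotope and the known half-life:

```python
# Simplified radiometric age: if all of the daughter isotope in a mineral came
# from decay of the parent, then t = (half-life / ln 2) * ln(1 + daughter/parent).
# Real methods (e.g. potassium-argon) apply corrections that are omitted here.
import math

def radiometric_age(daughter_to_parent: float, half_life_years: float) -> float:
    decay_constant = math.log(2) / half_life_years
    return math.log(1 + daughter_to_parent) / decay_constant

# A mineral with equal amounts of daughter and parent isotope is one half-life old;
# for a parent isotope with a ~1.25-billion-year half-life, that's ~1.25 billion years.
print(f"{radiometric_age(1.0, 1.25e9):.3e} years")
```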

To overcome the vast discrepancy between the scientifically determined age of the earth and the biblical estimate, young-Earth creationists – who, surprisingly, include hundreds of scientists with an advanced degree in science or medicine – twist science in a futile effort to discredit radiometric dating. Absurdly, they object that the method can’t be trusted because of a handful of instances when radiometric dating has been incorrect. But such an argument in no way proves a young earth, and in any case fails to invalidate a technique that has yielded correct results, as established independently by other methods, tens of thousands of times.

Another, equally ridiculous claim is that somehow the rate of radioactive decay underpinning the dating method was billions of times higher in the past, which would considerably shorten radiometrically measured ages. Some creationists even maintain that radioactive decay sped up more than once. What they don’t realize is that any significant change in decay rates would imply that fundamental physical constants (such as the speed of light) had also changed. If that were so, we’d be living in a completely different type of universe. 

Among other wild assertions that creationists use as evidence that the planet is no more than 10,000 years old are rapid exponential decay of the earth’s magnetic field, which is a spurious claim, and the low level of helium in the atmosphere, which merely reflects how easily the gas escapes from the earth and has nothing to do with its age.

Apart from such futile attempts to shorten the earth’s longevity, young-Earth creationists also rely on the concept of flood geology to prop up their religious beliefs. Flood geology, which I’ve discussed in detail elsewhere, maintains that the planet was reshaped by a massive worldwide flood as described in the biblical story of Noah’s ark. It’s as preposterously unscientific as creationist efforts to uphold the idea of a young earth.

The depth of the attack on modern science can be seen in polls showing that a sizable 38% of the U.S. adult public, and a similar percentage globally, believe that God created humans in their present form within the last 10,000 years. The percentage may be higher yet for those who identify with certain religions, and perhaps a further 10% believe in intelligent design, the form of creationism discussed in last week’s post. The breadth of disbelief in the theory of evolution is astounding, especially considering that it’s almost universally accepted by mainstream Churches and the overwhelming majority of the world’s scientists.

Next week: On Science Skeptics and Deniers

What Intelligent Design Fails to Understand about Evolution


One of the threats to modern science is the persistence of faith-based beliefs about the origin of life on Earth, such as the concept of intelligent design which holds that the natural world was created by an intelligent designer – who may or may not be God or another deity. Intelligent design, like other forms of creationism, is incompatible with the theory of evolution formulated by English naturalist Charles Darwin in the 19th century. But, in asserting that complex biological systems defy any scientific explanation, believers in intelligent design fail to understand the basic features of evolution.

The driving force in biological evolution, or descent from a common ancestor through cumulative change over time, is the process of natural selection. The essence of natural selection is that, as in human society, nature produces more offspring than can survive, and that variation in a species means some offspring have a slightly greater chance of survival than the others.  These offspring have a better chance of reproducing and passing the survival trait on to the next generation than those who lack the trait.

A common misconception about natural selection is that it is an entirely random process. But this is not so. Genetic variation within a species, which distinguishes individuals from one another and usually results from mutation, is indeed random. However, the selection aspect isn’t random but rather a snowballing process, in which each evolutionary step that selects the variation best suited to reproduction builds on the previous step.
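A toy program makes the distinction clear. The Python sketch below (a hypothetical illustration in the spirit of the well-known “weasel” thought experiment, not anything drawn from this post) mutates a string at random but keeps the fittest variant each generation, so the search snowballs rather than starting over:

```python
# Cumulative selection: mutation supplies random variation, but keeping the best
# candidate each generation is decidedly non-random, so progress accumulates.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"      # arbitrary stand-in for a "fit" trait
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    # Count how many positions already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str, rate: float = 0.05) -> str:
    # Each character has a small chance of being replaced by a random one.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while fitness(parent) < len(TARGET):
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)   # keep the fittest so far
    generations += 1

print(f"Matched the target after {generations} generations")
```

Guessing the whole 28-character string at random would take an astronomically large number of attempts, yet cumulative selection reaches it in a modest number of generations: that is the snowballing at work.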

Intelligent design proponents often argue that the “astonishing complexity” of living cells and biological complexes such as the bacterial flagellum – a whip-like appendage on a bacterial cell that rotates like a propeller – precludes their evolution via the step-by-step mechanism of natural selection. Such complex systems, they insist, can only be created as an integrated whole and must therefore have been designed by an intelligent entity.

There are several sound scientific reasons why this claim is fallacious: for example, natural selection can work on modular units already assembled for another purpose. But the most telling argument is simply that evolution is incremental and can take millions or even hundreds of millions of years – a length of time that is incomprehensible and all but meaningless to us as humans, to whom even a few thousand years seems an eternity. The laborious, trial-and-error, one-step-at-a-time assembly of complex biological entities may indeed not be possible in a few thousand years, but is easily accomplished in a time span that’s beyond our imagination.

However, evolution aside, intelligent design can’t lay any claim to being science. Most intelligent design advocates do accept the antiquity of life on earth, unlike adherents to the deceptively misnamed creation science, the topic for next week’s post. But neither intelligent design nor creation science offers any scientific alternative to Darwin’s mechanism of natural selection. And they both distort or ignore the vast body of empirical evidence for evolution, which includes the fossil record and biodiversity as well as a host of modern-day observations from fields such as molecular biology and embryology.

That intelligent design and creation science aren’t science at all is apparent from the almost total lack of peer-reviewed papers published in the scientific literature. Apart from a few articles (such as this one) in educational journals on the different forms of creationism, the only known paper on creationism itself – an article, based on intelligent design, about the burst of evolutionary innovation known as the Cambrian explosion – was published in an obscure biological journal in 2004. But one month later, the journal’s publishing society reprimanded the editor for not handling peer review properly and repudiated the article. In its formal explanation, the society emphasized that no scientific evidence exists to support intelligent design.

A valid scientific theory must, at least in principle, be capable of being invalidated or disproved by observation or experiment. Along with other brands of creationism, intelligent design is a pseudoscience that can’t be falsified because it depends not on scientific evidence, but on a religious belief based on faith in a supernatural creator. There’s nothing wrong with faith, but it’s the very antithesis of science. Science requires evidence and a skeptical evaluation of claims, while faith demands unquestioning belief, without evidence.

Next week: Why Creation Science Isn’t Science