Philippines Court Ruling Deals Deathblow to Success of GMO Golden Rice

Genetically modified Golden Rice was once seen as the answer to vitamin A deficiency in Asia and Africa, where rice is the staple food.  But a recent court ruling in the Philippines, the very country where rice breeders first came up with the idea of Golden Rice, has brought more than 30 years of crop development to an abrupt halt.

As reported in Science magazine, a Philippine Court of Appeals in April 2024 revoked a 2021 permit that allowed the commercial planting of a Golden Rice variety tailored for local conditions. The ruling resulted from a lawsuit by Greenpeace and other groups, who for many years have opposed the introduction of all GMO (genetically modified organism) crops as unsafe for humans and the environment.

Millions of poor children in Asia and Africa go blind each year, or even die from weakened immune systems, because of a lack of vitamin A – a vitamin the human body makes from the naturally occurring compound beta-carotene.

So the discovery by Swiss plant geneticist Ingo Potrykus and German biologist Peter Beyer in the 1990s that splicing two genes into rice – one from daffodils, the other from a bacterium – could greatly increase its beta-carotene content caused considerable excitement among nutritionists. Subsequent research, in which the daffodil gene was replaced by one from maize, boosted the beta-carotene level even further.

The original discovery should have been heralded as a massive breakthrough. But widespread hostility erupted once the achievement was publicized. Potrykus was accused of creating a “Frankenfood,” evocative of the monster created by the fictional mad scientist Frankenstein, and subjected to bomb threats. Trial plots of Golden Rice were destroyed by rampaging activists.

Nevertheless, in 2018, four countries – Australia, New Zealand, Canada and the U.S. – finally approved Golden Rice. The U.S. FDA (Food and Drug Administration) granted the biofortified food its prestigious “GRAS (generally recognized as safe)” status. This success paved the way for the nonprofit IRRI (International Rice Research Institute) in the Philippines to initiate large-scale trials of Golden Rice varieties in that country and Bangladesh.

Greenpeace contends that currently planted Golden Rice in the Philippines will have to be destroyed, although a consulting attorney says there is nothing in the Court of Appeals decision to support that claim. And while Bangladesh is close to growing Golden Rice for consumption, the request to actually start planting has been under review since 2017.

The Philippines court justified its ruling by citing the supposed lack of scientific consensus on the safety of Golden Rice; the consulting attorney pointed out that “both camps presented opposing evidence.” In fact, the judges leaned heavily on the so-called precautionary principle – a concept developed by 20th-century environmental activists.

The origins of the precautionary principle can be traced to the application in the early 1970s of the German principle of “Vorsorge” or foresight, based on the belief that environmental damage can be avoided by careful forward planning. The “Vorsorgeprinzip” became the foundation for German environmental law and policies in areas such as acid rain, pollution and global warming. The principle reflects the old adage that “it’s better to be safe than sorry,” and can be regarded as a restatement of the ancient Hippocratic oath in medicine, “First, do no harm.”

Formally, the precautionary principle can be stated as:

When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.

But in spite of its noble intentions, the precautionary principle in practice is based far more on political considerations than on science. A notable example is the bans on GMO crops by more than half the countries in the European Union. The bans stem from the widespread, fear-based belief that eating genetically altered foods is unsafe, despite the lack of any scientific evidence that GMOs have ever caused harm to a human.

In the U.S., approved GMO crops include corn, which is the basic ingredient in many cereals, corn tortillas, corn starch and corn syrup, as well as feed for livestock and farmed fish; soybeans; canola; sugar beets; yellow squash and zucchini; bruise-free potatoes; nonbrowning apples; papaya; and alfalfa.

One of the biggest issues with the precautionary principle is that it essentially advocates risk avoidance. But risk avoidance carries its own risks.

We accept the risk, for example, of being killed or badly injured while traveling on the roads because the risk is outweighed by the convenience of getting to our destination quickly, or by our desire to have fresh food available at the supermarket. Applying the precautionary principle would mean, in addition to the safety measures already in place, reducing all speed limits to 16 km per hour (10 mph) or less – a clearly impractical solution that would take us back to horse-and-buggy days.

Targeting Farmers for Livestock Greenhouse Gas Emissions Is Misguided

Farmers in many countries are increasingly coming under attack over their livestock herds. Ireland’s government is contemplating culling the country’s cattle herds by 200,000 cows to cut back on methane (CH4) emissions; the Dutch government plans to buy out livestock farmers to lower emissions of CH4 and nitrous oxide (N2O) from cow manure; and New Zealand is close to taxing CH4 from cow burps.

But all these measures, and those proposed in other countries, are misguided and shortsighted – for multiple reasons.

The impetus behind the intended clampdown on the farming community is agriculture’s estimated 11-17% share of current worldwide greenhouse gas emissions, which contribute to global warming. Agricultural CH4, mainly from ruminant animals, accounted for approximately 4% of total greenhouse gas emissions in the U.S. in 2021, according to the EPA (Environmental Protection Agency), while N2O accounted for another 5%.

The actual warming produced by these two greenhouse gases depends on their so-called “global warming potential,” a quantity determined by three factors: how efficiently the gas absorbs heat, its lifetime in the atmosphere, and its atmospheric concentration. The following table illustrates these factors for CO2, CH4 and N2O, together with their comparative warming effects.

The conventional global warming potential (GWP) is a dimensionless metric, in which the GWP per molecule of a particular greenhouse gas is normalized to that of CO2; the GWP takes into account the atmospheric lifetime of the gas. The table shows both GWP-20 and GWP-100, the warming potentials calculated over a 20-year and 100-year time horizon, respectively.

The final column shows what I call weighted GWP values, as percentages of the CO2 value, calculated by multiplying the conventional GWP by the ratio of the rate of concentration increase for that gas to that of CO2. The weighted GWP indicates how much warming CH4 or N2O causes relative to CO2.
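
As a rough illustration of that weighting step, here is a minimal Python sketch. The function simply implements the definition just given; the growth-rate ratio in the usage example is a placeholder chosen to reproduce the ~27% figure quoted below for CH4 over 20 years, not a measured value.

```python
def weighted_gwp(gwp, gas_growth_rate, co2_growth_rate):
    """Weighted GWP as defined in this post: the conventional GWP multiplied by
    the ratio of the gas's rate of concentration increase to that of CO2,
    expressed as a fraction of CO2's warming contribution."""
    return gwp * (gas_growth_rate / co2_growth_rate)

# Illustrative only: GWP-20 for CH4 is 83 (quoted below); the growth-rate ratio
# is a placeholder picked to match the ~27% entry in the table.
ch4_weighted_20yr = weighted_gwp(gwp=83, gas_growth_rate=0.0033, co2_growth_rate=1.0)
print(f"CH4 weighted GWP-20: {ch4_weighted_20yr:.0%} of CO2")
```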

Over a 100-year time span, you can see that both CH4 and N2O exert essentially the same warming influence, at 10% of CO2 warming. But over a 20-year interval, CH4 has a stronger warming effect than N2O, at 27% of CO2 warming, because of its shorter atmospheric lifetime, which boosts the conventional GWP value from 30 (over 100 years) to 83.

However, the actual global temperature increase from CH4 and N2O – concern over which is the basis for legislation targeting the world’s farmers – is small. Over a 20-year period, the combined contribution of these two gases is approximately 0.075 degrees Celsius (0.14 degrees Fahrenheit), assuming that all current warming comes from CO2, CH4 and N2O combined, and using a value of 0.14 degrees Celsius (0.25 degrees Fahrenheit) per decade for the current warming rate.

But, as I’ve stated in many previous posts, at least some current warming is likely to be from natural sources, not greenhouse gases. So the estimated 20-year temperature rise of 0.075 degrees Celsius (0.14 degrees Fahrenheit) is probably an overestimate. The corresponding number over 100 years, also an overestimate, is 0.23 degrees Celsius (0.41 degrees Fahrenheit).
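
For readers who want to check the arithmetic, here is a minimal sketch under the post's assumptions: all current warming comes from CO2, CH4 and N2O, the warming rate is 0.14 degrees Celsius per decade, and the two minor gases contribute the weighted-GWP fractions given above (27% and roughly 10% of CO2's warming over 20 years, 10% each over 100 years; the 20-year N2O figure is my inference from the 100-year one).

```python
def ch4_n2o_warming(years, ch4_frac, n2o_frac, rate_per_decade=0.14):
    """Temperature rise attributable to CH4 + N2O over a given period, assuming
    all warming comes from CO2, CH4 and N2O, with the two minor gases'
    contributions expressed as fractions of CO2's (the weighted GWPs)."""
    total_rise = rate_per_decade * years / 10
    minor_share = (ch4_frac + n2o_frac) / (1 + ch4_frac + n2o_frac)
    return total_rise * minor_share

print(ch4_n2o_warming(20, ch4_frac=0.27, n2o_frac=0.10))   # ~0.075 C over 20 years
print(ch4_n2o_warming(100, ch4_frac=0.10, n2o_frac=0.10))  # ~0.23 C over 100 years
```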

Do such small, or even smaller, temperature rises justify shutting down agriculture? Farmers around the globe certainly don’t think so, and for good reason.

First, CH4 from ruminant animals such as cows, sheep and goats accounts for only 4% of U.S. greenhouse gas emissions as noted above, compared with 29% from transportation, for example. And giving up meat and dairy products would have little impact on global temperatures: one study found that removing all livestock and poultry from the U.S. food system would reduce global greenhouse gas emissions by just 0.36%.

Other studies have shown that eliminating all livestock from U.S. farms would leave our diets deficient in vital nutrients that meat provides – including high-quality protein, iron and vitamin B12 – according to the Iowa Farm Bureau.

Furthermore, as agricultural advocate Kacy Atkinson argues, the methane that cattle burp out during rumination breaks down in 10 to 15 years into CO2 and water. The grasses that cattle graze on absorb that CO2, and the carbon gets sequestered in the soil through the grasses’ roots.

Apart from cow manure management, the largest source of N2O emissions worldwide is the application of nitrogenous fertilizers to boost crop production. Greatly increased use of nitrogen fertilizers is the main reason for massive increases in crop yields since 1961, part of the so-called green revolution in agriculture.

The figure below shows U.S. crop yields relative to yields in 1866 for corn, wheat, barley, grass hay, oats and rye. The blue dashed curve is the annual agricultural usage of nitrogen fertilizer in megatonnes (Tg). The strong correlation with crop yields is obvious.

Restricting fertilizer use would severely impact the world’s food supply. Sri Lanka’s ill-conceived 2022 ban of nitrogenous fertilizer (and pesticide) imports caused a 30% drop in rice production, resulting in widespread hunger and economic turmoil – a cautionary tale for any efforts to extend N2O reduction measures from livestock to crops.

Next: No Evidence That Today’s El Niños Are Any Stronger than in the Past

Global Warming from Food Production and Consumption Grossly Overestimated

A recent peer-reviewed study makes the outrageous claim that production and consumption of food could contribute as much as 0.9 degrees Celsius (1.6 degrees Fahrenheit) to global warming by 2100, from emissions of the greenhouse gases methane (CH4), nitrous oxide (N2O) and carbon dioxide (CO2).

Such a preposterous notion is blatantly wrong, even if it were true that global warming largely comes from human CO2 emissions. Since agriculture is considered responsible for an estimated 15-20% of current warming, a 0.9 degrees Celsius (1.6 degrees Fahrenheit) agricultural contribution in 2100 implies a total warming (since 1850-1900) at that time of 0.9 / (0.15–0.2), or 4.5 to 6.0 degrees Celsius (8.1 to 10.8 degrees Fahrenheit).

As I discussed in a previous post, only the highest, unrealistic CO2 emissions scenarios project such a hot planet by the end of the century. A group of prominent climate scientists has estimated the much lower range of likely 2100 warming, of 2.6-3.9 degrees Celsius (4.7-7.0 degrees Fahrenheit). And climate writer Roger Pielke Jr. has pegged the likely warming range at 2-3 degrees Celsius (3.6-5.4 degrees Fahrenheit), based on the most plausible emissions scenarios.

Using the same 15-20% estimate for the agricultural portion of global warming, a projected 2100 warming of say 3 degrees Celsius (5.4 degrees Fahrenheit) would mean a contribution from food production of only 0.45-0.6 degrees Celsius (0.8-1.1 degrees Fahrenheit) – about half of what the new study’s authors calculate.
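
To make the arithmetic explicit, here is a minimal sketch using the same 15-20% agricultural share:

```python
def implied_total_warming(agri_warming, agri_share):
    """Total 2100 warming implied by a given agricultural contribution."""
    return agri_warming / agri_share

def implied_agri_warming(total_warming, agri_share):
    """Agricultural contribution implied by a given total 2100 warming."""
    return total_warming * agri_share

# The study's 0.9 C food-related contribution implies 4.5-6.0 C of total warming...
print(implied_total_warming(0.9, 0.20), implied_total_warming(0.9, 0.15))
# ...whereas a more plausible 3 C of total warming implies only 0.45-0.6 C from food.
print(implied_agri_warming(3.0, 0.15), implied_agri_warming(3.0, 0.20))
```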

That even this estimate of future warming from agriculture is too high can be seen by examining the following figure from their study. The figure illustrates the purported temperature rise by 2100 attributable to each of the three greenhouse gases generated by the agricultural industry: CH4, N2O and CO2. CH4 is responsible for nearly 60% of the temperature increase, while N2O and CO2 each contribute about 20%.

This figure can be compared with the one below from a recent preprint by a team which includes atmospheric physicists William Happer and William van Wijngaarden, showing the authors’ evaluation of expected radiative forcings at the top of the troposphere over the next 50 years. The forcings are increments relative to today, measured in watts per square meter; the horizontal lines are the projected temperature increases (ΔT) corresponding to particular values of the forcing increase.

To properly compare the two figures, we need to know what percentages of total CH4, N2O and CO2 emissions in the Happer and van Wijngaarden figure come from the agricultural sector; these are approximately 50%, 67% and 3%, respectively, according to the authors of the food production study.

Using these percentages and extrapolating the Happer and van Wijngaarden graph to 78 years (from 2022), the total additional forcing from the three gases in 2100 can be shown to be about 0.52 watts per square meter. This forcing value corresponds to a temperature increase due to food production and consumption of only around 0.1 degrees Celsius (0.18 degrees Fahrenheit).
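
A minimal sketch of that bookkeeping follows. The agricultural shares are the ones quoted above; the per-gas forcing increments are placeholders chosen only so their weighted sum lands near the 0.52 watts per square meter quoted here, not values read from the Happer and van Wijngaarden preprint; and the forcing-to-temperature conversion simply uses the ratio implied by this post (roughly 0.1 degrees Celsius per 0.52 watts per square meter).

```python
# Agricultural share of each gas's total emissions (from the food-production study).
agri_share = {"CH4": 0.50, "N2O": 0.67, "CO2": 0.03}

# Placeholder 78-year forcing increments (W/m^2) for ALL emissions of each gas --
# illustrative only, chosen so the weighted sum comes out near 0.52 W/m^2, and
# NOT taken from the Happer/van Wijngaarden graph.
forcing_2100 = {"CH4": 0.50, "N2O": 0.30, "CO2": 2.30}

# Forcing attributable to agriculture: each gas's forcing weighted by its agricultural share.
agri_forcing = sum(forcing_2100[g] * agri_share[g] for g in agri_share)

# Convert forcing to temperature with the sensitivity implied by the post's own
# numbers (about 0.1 C per 0.52 W/m^2).
delta_t = agri_forcing * (0.1 / 0.52)
print(f"{agri_forcing:.2f} W/m^2  ->  {delta_t:.2f} C")
```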

The excessively high estimate of 0.9 degrees Celsius (1.6 degrees Fahrenheit) in the study may be due in part to the study’s dependence on a climate model: many climate models greatly exaggerate future warming.

While on the topic of CH4 and N2O emissions, let me draw your attention to a fallacy widely propagated in the climate science literature; the fallacy appears on the websites of both the U.S. EPA (Environmental Protection Agency) and NOAA (the U.S. National Oceanic and Atmospheric Administration), and even in the IPCC’s Sixth Assessment Report (Table 7.15).

The fallacy conflates the so-called “global warming potential” for greenhouse gas emissions, which measures the warming potential per molecule (or unit mass) of various gases, with their warming potential weighted by their rate of concentration increase relative to CO2. Because the abundances of CH4 and N2O in the atmosphere are much lower than that of CO2, and are increasing even more slowly, there is a big difference between their global warming potentials and their weighted warming potentials.

The difference is illustrated in the table below. The conventional global warming potential (GWP) is a dimensionless metric, in which the GWP of a particular greenhouse gas is normalized to that of CO2; the GWP takes into account the atmospheric lifetime of the gas. The table shows values of GWP-100, the warming potential calculated over a 100-year time horizon.

The final column shows the value of the weighted GWP-100, which is not dimensionless like the conventional GWP-100 but measured in units of watts per square meter, the same as radiative forcing. The weighted GWP-100 is calculated by multiplying the conventional GWP-100 by the ratio of the rate of concentration increase for that gas to that of CO2.

As you can see, the actual anticipated warming in 100 years from either CH4 or N2O agricultural emissions will be only 10% of that from CO2 – in contrast to the conventional GWP-100 values extensively cited in the literature. What a waste of time and effort in trying to rein in CH4 and N2O emissions!

Next: CRED’s 2022 Disasters in Numbers report is a Disaster in Itself

The Sugar Industry: Sugar Daddy to Manipulated Science?

Industry funding of scientific research often comes with strings attached. There’s plenty of evidence that industries such as tobacco and lead have been able to manipulate sponsored research to their advantage, in order to create doubt about the deleterious effects of their product. But has the sugar industry, currently in the spotlight because of concern over sugary drinks, done the same?

This charge was recently leveled at the industry by a team of scientists at UCSF (University of California, San Francisco), who accused the industry of funding research in the 1960s that downplayed the risks of consuming sugar and overstated the supposed dangers of eating saturated fat. Both saturated fat and sugar had been linked to coronary heart disease, which was surging at the time.

The UCSF researchers claim to have discovered evidence that an industry trade group secretly paid two prominent Harvard scientists to conduct a literature review refuting any connection between sugar and heart disease, and making dietary fat the villain instead. The published review made no mention of sugar industry funding.

A year after the review came out, the trade group funded an English researcher to conduct a study on laboratory rats. Initial results seemed to confirm other studies indicating that sugars, which are simple carbohydrates, were more detrimental to heart health than complex or starchy carbohydrates like grains, beans and potatoes. This was because sugar appeared to elevate the blood level of triglyceride fats, today a known risk factor for heart disease, through its metabolism by microbes in the gut.

Perhaps more alarmingly, preliminary data suggested that consumption of sugar – though not starch – produced high levels of an enzyme called beta-glucuronidase that other contemporary studies had associated with bladder cancer in humans. Before any of this could be confirmed, however, the industry trade organization shut the research project down; the results already obtained were never published.

The UCSF authors say in a second paper that the literature review’s dismissal of contrary studies, together with the suppression of evidence tying sugar to triglycerides and bladder cancer, show how the sugar industry has attempted for decades to bury scientific data on the health risks of eating sugar. If the findings of the laboratory study had been disclosed, they assert, sugar would probably have been scrutinized as a potential carcinogen, and its role in cardiovascular disease would have been further investigated. Added one of the UCSF team, “This is continuing to build the case that the sugar industry has a long history of manipulating science.”

Marion Nestle, an emeritus professor of food policy at New York University, has commented that the internal industry documents unearthed by the UCSF researchers were striking “because they provide rare evidence that the food industry suppressed research it did not like, a practice that has been documented among tobacco companies, drug companies and other industries.”

Nonetheless, the current sugar trade association disputes the UCSF claims, calling them speculative and based on questionable assumptions about events that took place almost 50 years ago. The association also considers the research itself tainted, because it was conducted and funded by known critics of the sugar industry. The industry has consistently denied that sugar plays any role in promoting obesity, diabetes or heart disease.

And despite a statement by the trade association’s predecessor that it was created “for the basic purpose of increasing the consumption of sugar,” other academics have defended the industry. They point out that, at the time of the industry review and the rat study in the 1960s, the link between sugar and heart disease was supported by only limited evidence, and the dietary fat hypothesis was deeply entrenched in scientific thinking, being endorsed by the AHA (American Heart Association) and the U.S. NHI (National Heart Institute).

But, says Nestle, it’s déjà vu today, with the sugar and beverage industries now funding research to let the industries off the hook for playing a role in causing the current obesity epidemic. As she notes in a commentary in the journal JAMA Internal Medicine:

"Is it really true that food companies deliberately set out to manipulate research in their favor? Yes, it is, and the practice continues.”

Next: Grassroots Climate Change Movement Ignores Actual Evidence

Nature vs Nurture: Does Epigenetics Challenge Evolution?

A new wrinkle in the traditional nature vs nurture debate – whether our behavior and personalities are influenced more by genetics or by our upbringing and environment – is the science of epigenetics. Epigenetics describes the mechanisms for switching individual genes on or off in the genome, which is an organism’s complete set of genetic instructions.

A controversial question is whether epigenetic changes can be inherited. According to Darwin’s 19th-century theory, evolution is governed entirely by heritable variation of what we now know as genes, a variation that usually results from mutation; any biological changes to the whole organism during its lifetime caused by environmental factors can’t be inherited. But recent evidence from studies on rodents suggests that epigenetic alterations can indeed be passed on to subsequent generations. If true, this implies that our genes record a memory of our lifestyle or behavior today that will form part of the genetic makeup of our grandchildren and great-grandchildren.

So was Darwin wrong? Is epigenetics an attack on science? At first blush, epigenetics is reminiscent of Lamarckism – the pre-Darwinian notion that acquired characteristics are heritable, promulgated by French naturalist Jean-Baptiste Lamarck. Lamarck’s most famous example was the giraffe, whose long neck was thought at the time to have come from generations of its ancestors stretching to reach foliage in high trees, with longer and longer necks then being inherited.

Darwin himself, when his proposal of natural selection as the evolutionary driving force was initially rejected, embraced Lamarckism as a possible alternative to natural selection. But the Lamarckian view was later discredited, as more and more evidence for natural selection accumulated, especially from molecular biology.

Nonetheless, the wheel appears to have turned back to Lamarck’s idea over the last 20 years. Several epidemiological studies have established an apparent link between 20th-century starvation and the current prevalence of obesity in the children and grandchildren of malnourished mothers. The most widely studied event is the Dutch Hunger Winter, the name given to a 6-month winter blockade of part of the Netherlands by the Germans toward the end of World War II. Survivors, who included Hollywood actress Audrey Hepburn, resorted to eating grass and tulip bulbs to stay alive.

The studies found that mothers who suffered malnutrition during early pregnancy gave birth to children who were more prone to obesity and schizophrenia than children of well-fed mothers. More unexpectedly, the same effects showed up in the grandchildren of the women who were malnourished during the first three months of their pregnancy. Similarly, an increased incidence of Type II diabetes has been discovered in adults whose pregnant mothers experienced starvation during the Ukrainian Famine of 1932-33 and the Great Chinese Famine of 1958-61.

All this data points to the transmission from generation to generation of biological effects caused by an individual’s own experiences. Further evidence for such epigenetic, Lamarckian-like changes comes from laboratory studies of agouti mice, so called because they carry the agouti gene that not only makes the rodents fat and yellow, but also renders them susceptible to cancer and diabetes. By simply altering a pregnant mother’s diet, researchers found they could effectively silence the agouti gene and produce offspring that were slender and brown, and no longer prone to cancer or diabetes.

The modified mouse diet was rich in methyl donors – small molecules, found in foods such as onions and beets, that attach themselves to the DNA string in the genome and switch off the troublesome gene. In addition to its DNA, any genome in fact contains an array of chemical markers and switches that constitute the instructions for the estimated 21,000 protein-coding genes in the genome. That is, the array is able to turn the expression of particular genes on or off.

However, the epigenome, as this array is called, can’t alter the genes themselves. A soldier who loses a limb in battle, for example, will not bear children with shortened arms or legs. And, while there’s limited evidence – such as the starvation studies described above – that epigenetic changes in humans can be transmitted between generations, the possibility isn’t yet fully established and further research is needed.

One line of thought, for which an increasing amount of evidence exists in animals and plants, is that epigenetic change doesn’t come from experience or use – as in the case of Lamarck’s giraffe – but actually results from Darwinian natural selection. The idea is that in order to cope with an environmental threat or need, natural selection may choose the variation in the species that has an epigenome favoring the attachment to its DNA of a specific type of molecule such as a methyl donor, capable of expressing or silencing certain genes. In other words, epigenetic changes can exploit existing heritable genetic variation, and so are passed on.

Is this explanation correct or, as creationists would like to think, did Darwin’s theory of evolution get it wrong? Time will tell.

How the Scientific Consensus Can Be Wrong

Consensus is a necessary step on the road from scientific hypothesis to theory. What many people don’t realize, however, is that a consensus isn’t necessarily the last word. A consensus, whether newly proposed or well-established, can be wrong. In fact, the mistaken consensus has been a recurring feature of science for many hundreds of years.

A recent example of a widespread consensus that nevertheless erred was the belief that peptic ulcers were caused by stress or spicy foods – a dogma that persisted in the medical community for much of the 20th century. The scientific explanation at the time was that stress or poor eating habits resulted in excess secretion of gastric acid, which could erode the digestive lining and create an ulcer.

But two Australian doctors discovered evidence that peptic ulcer disease was caused by a bacterial infection of the stomach, not stress, and could be treated easily with antibiotics. Yet overturning such a longstanding consensus to the contrary would not be simple. As one of the doctors, Barry Marshall, put it:

“…beliefs on gastritis were more akin to a religion than having any basis in scientific fact.”

To convince the medical establishment the pair were right, Marshall resorted in 1984 to the drastic measure of infecting himself with a potion containing the bacterium in question (known as Helicobacter pylori). Despite this bold and risky act, the medical world didn’t finally accept the new doctrine until 1994. In 2005, Barry Marshall and Robin Warren were awarded the Nobel Prize in Medicine for their discovery.

Earlier in the last century, an individual fighting established authority had overthrown conventional scientific wisdom in the field of geology. Acceptance of Alfred Wegener’s revolutionary theory of continental drift, proposed in 1912, was delayed for many decades – an even longer holdout than the resistance to the infection explanation for ulcers – because the theory was seen as a threat to the geological establishment.

Geologists of the day refused to take seriously Wegener’s circumstantial evidence of matchups across the ocean in continental coastlines, animal and plant fossils, mountain chains and glacial deposits, clinging instead to the consensus of a contracting earth to explain these disparate phenomena. The old consensus of fixed continents endured among geologists even as new, direct evidence for continental drift surfaced, including mysterious magnetic stripes on the seafloor. But only after the emergence in the 1960s of plate tectonics, which describes the slow sliding of thick slabs of the earth’s crust, did continental drift theory become the new consensus.

A much older but well-known example of a mistaken consensus is the geocentric (earth-centered) model of the solar system that held sway for 1,500 years. This model was originally developed by ancient Greek philosophers Plato and Aristotle, and later refined by the astronomer Ptolemy in the 2nd century. Italian mathematician and astronomer Galileo Galilei fought to overturn the geocentric consensus, advocating instead the rival heliocentric (sun-centered) model of Copernicus – the model which we accept today, and for which Galileo gathered evidence in the form of unprecedented telescopic observations of the sun, planets and planetary moons.

Although Galileo was correct, his endorsement of the heliocentric model brought him into conflict with university academics and the Catholic Church, both of which adhered to Ptolemy’s geocentric model. A resolute Galileo insisted that:

 “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.”

But to no avail: Galileo was called before the Inquisition, forbidden to defend Copernican ideas, and finally sentenced to house arrest for publishing a book that did just that and also ridiculed the Pope.

These are far from the only cases in the history of science of a consensus that was wrong. Others include the widely held 19th-century religious belief in creationism that impeded acceptance of Darwin’s theory of evolution, and the 20th-century paradigm linking saturated fat to heart disease.

Consensus is built only slowly, so belief in the consensus tends to become entrenched over time and is not easily abandoned by its devotees. This is certainly the case for the current consensus that climate change is largely a result of human activity – a consensus, as I’ve argued in a previous post, that is most likely mistaken.

Next: Nature vs Nurture: Does Epigenetics Challenge Evolution?

How Hype Is Hurting Science

The recent riots in France over a proposed carbon tax, aimed at supposedly combating climate change, were a direct result of blatant exaggeration in climate science for political purposes. It’s no coincidence that the decision to move forward with the tax came soon after an October report from the UN’s IPCC (Intergovernmental Panel on Climate Change), claiming that drastic measures to curtail climate change are necessary by 2030 in order to avoid catastrophe. President Emmanuel Macron bought into the hype, only to see his people rise up against him.

Exaggeration has a long history in modern science. In 1977, the select U.S. Senate committee drafting new low-fat dietary recommendations wildly exaggerated its message by declaring that excessive fat or sugar in the diet was as much of a health threat as smoking, even though a reasoned examination of the evidence revealed that wasn’t true.

About a decade later, the same hype infiltrated the burgeoning field of climate science. At another Senate committee hearing, astrophysicist James Hansen, who was then head of GISS (NASA’s Goddard Institute for Space Studies), declared he was 99% certain that the 0.4 degrees Celsius (0.7 degrees Fahrenheit) of global warming from 1958 to 1987 was caused primarily by the buildup of greenhouse gases in the atmosphere, and wasn’t a natural variation. This assertion was based on a computer model of the earth’s climate system.

At a previous hearing, Hansen had presented climate model predictions of U.S. temperatures 30 years in the future that were three times higher than they turned out to be. This gross exaggeration makes a mockery of his subsequent claim that the warming from 1958 to 1987 was all man-made. His stretching of the truth stands in stark contrast to the caution and understatement of traditional science.

But Hansen’s hype only set the stage for others. Similar computer models have also exaggerated the magnitude of more recent global warming, failing to predict the pause in warming from the late 1990s to about 2014. During this interval, the warming rate dropped to below half the rate measured from the early 1970s to 1998. Again, the models overestimated the warming rate by two or three times.

An exaggeration mindlessly repeated by politicians and the mainstream media is the supposed 97% consensus among climate scientists that global warming is largely man-made. The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. But what is never revealed is that almost 8,000 of the abstracts expressed no opinion at all on anthropogenic (human-caused) warming. When that and a subsidiary survey are taken into account, the climate scientist consensus falls to only between 33% and 63%. So much for an overwhelming majority!
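
A two-line check shows where the lower figure comes from; the 8,000 here is the post's “almost 8,000” rounded, and the 63% upper bound, which comes from the subsidiary survey, isn't reproduced.

```python
total_abstracts = 12_000   # abstracts examined in the consensus study
no_position = 8_000        # abstracts that expressed no opinion on human-caused warming

# Even if every abstract that took a position endorsed human-caused warming,
# that amounts to only about a third of all the abstracts surveyed.
print(f"{(total_abstracts - no_position) / total_abstracts:.0%}")   # ~33%
```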

A further over-hyped assertion about climate change is that the Arctic’s polar bear population is shrinking because of diminishing sea ice, and that the bears are facing extinction. For global warming alarmists, this claim has become a cause célèbre. Yet, despite numerous articles in the media and photos of apparently starving bears, current evidence shows that the polar bear population has actually been steady for the whole period that the ice has been decreasing – and may even be growing, according to the native Inuit.

It’s not just climate data that’s exaggerated (and sometimes distorted) by political activists. Apart from the historical example in nutritional science cited above, the same trend can be found in areas as diverse as the vaccination debate and the science of GMO foods.

Exaggeration is a common, if frowned-upon, marketing tool in the commercial world: hype helps draw attention in the short term. But its use for the same purpose in science only tarnishes the discipline. And, just as exaggeration eventually turns off commercial customers interested in a product, so too does it make the general public wary, if not downright suspicious, of scientific proclamations. The French public has recognized this on climate change.

Subversion of Science: The Low-Fat Diet

Remember the low-fat diet? Highly popular in the 1980s and 1990s, it was finally pushed out of the limelight by competing eating regimens such as the Mediterranean diet. That the low-fat diet wasn’t particularly healthy hadn’t yet been discovered. But its official blessing for decades by the governments of both the U.S. and UK represents a subversion of science by political forces that overlook evidence and abandon reason.

The low-fat diet was born in a 1977 report from a U.S. government committee chaired by Senator George McGovern, which had become aware of research purportedly linking excessive fat in the diet to killer diseases such as coronary heart disease and cancer. The committee hoped that its report would do as much for diet and chronic disease as the earlier Surgeon General’s report had done for smoking and lung cancer.

The hypothesis that eating too much saturated fat results in heart disease, caused by narrowing of the coronary arteries, was formulated by American physiologist Ancel Keys in the 1950s. Keys’ own epidemiological study, conducted in seven different countries, initially confirmed his hypothesis. But many other studies failed to corroborate the diet-heart hypothesis, and Keys’ data itself no longer substantiated it 25 years later. Double-blind clinical trials – which, unlike epidemiological studies, are able to establish causation – also gave results in conflict with the hypothesis.

Although it was found that eating less saturated fat could lower cholesterol levels, a growing body of evidence showed that it didn’t help to ward off heart attacks or prolong life spans. Yet Senator McGovern’s committee forged ahead regardless. The results of all the epidemiological studies and major clinical trials that refuted the diet-heart hypothesis were simply ignored – a classic case of science being trampled on by politics.

The McGovern committee’s report turned the mistaken hypothesis into nutritional dogma by drawing up a detailed set of dietary guidelines for the American public. After heated political wrangling with other government agencies, the USDA (U.S. Department of Agriculture) formalized the guidelines in 1980, effectively sanctioning the first ever, official low-fat diet. The UK followed suit a few years later.

While the guidelines erroneously linked high consumption of saturated fat to heart disease, they did concede that what constitutes a healthy level of fat in the diet was controversial. The guidelines recommended lowering intake of high-fat foods such as eggs and butter; boosting consumption of fruits, vegetables, whole grains, poultry and fish; and eating fewer foods high in sugar and salt.

With government endorsement, the low-fat diet quickly became accepted around the world. It was difficult back then even to find cookbooks that didn’t extol the virtues of the diet. Unfortunately for the public, the diet promoted to conquer one disease contributed to another – obesity – because it replaced fat with refined carbohydrates. And it wasn’t suitable for everyone.

This first became evident in the largest-ever long-term clinical trial of the low-fat diet, known as the Women’s Health Initiative. But, just like the earlier studies that preceded the creation of the diet, the trial again showed that the diet-heart hypothesis didn’t hold up, at least for women. After eight years, the low-fat diet was found to have had no effect on heart disease or deaths from the disease. Worse still, in a short-term study of U.S. Boeing employees, women who had followed the low-fat diet appeared to have actually increased their risk of heart disease.

A UN review of available data in 2008 concluded that several clinical trials of the diet “have not found evidence for beneficial effects of low-fat diets,” and commented that there wasn’t any convincing evidence either for any significant connection between dietary fat and coronary heart disease or cancer.

Today the diet-heart hypothesis is no longer widely accepted and nutritional science is beginning to regain the ground taken over by politics. But it has taken over 60 years for this attack on science to be repulsed.

Next week: How Hype Is Hurting Science

When No Evidence Is Evidence: GMO Food Safety

The twin hallmarks of genuine science are empirical evidence and logic. But in the case of foods containing GMOs (genetically modified organisms), it’s the absence of evidence to the contrary that provides the most convincing testament to the safety of GMO foods. Although almost 40% of the public in the U.S. and UK remain skeptical, there simply isn’t any evidence to date that GMOs are deadly or even unhealthy for humans.

Absence of evidence doesn’t prove that GMO foods are safe beyond all possible doubt, of course. Such proof is impossible in practice, as harmful effects from some as-yet unknown GMO plant can’t be categorically ruled out. But a committee of the U.S. NAS (National Academies of Sciences, Engineering, and Medicine) undertook a study in 2016 to examine any negative effects as well as potential benefits of both currently commercialized and future GMO crops.

The study authors found no substantial evidence that the risk to human health was any different for current GMO crops on the market than for their traditionally crossbred counterparts. Crossbreeding or artificial hybridization refers to the conventional form of plant breeding, first developed in the 18th century and continually refined since then, which revolutionized agriculture before genetic engineering came on the scene in the 1970s. The evidence evaluated in the study included presentations by 80 people with diverse expertise on GMO crops; hundreds of comments and documents from individuals and organizations; and an extensive survey by the committee of published scientific papers.

The committee reexamined the results of several types of testing conducted in the past to evaluate genetically engineered crops and the foods derived from them. Although they found that many animal-feeding studies weren’t optimal, the large number of such experimental studies provided “reasonable evidence” that eating GMO foods didn’t harm animals (typically rodents). This conclusion was reinforced by long-term data on livestock health before and after GMO feed crops were introduced.

Two other informative tests involved analyzing the composition of GMO plants and testing for allergens. The NAS study found that while there were differences in the nutrient and chemical compositions of GMO plants compared to similar non-GMO varieties, the differences fell within the range of natural variation for non-GMO crops. 

In the case of specific health problems such as allergies or cancer possibly caused by eating genetically modified foods, the committee relied on epidemiological studies, since long-term randomized controlled trials have never been carried out. The results showed no difference between studies conducted in the U.S. and Canada, where the population has consumed GMO foods since the late 1990s, and similar studies in the UK and Europe, where very few GMO foods are eaten. The committee acknowledged, however, that biases may exist in the epidemiological data available on certain health problems.

The NAS report also recommended a tiered approach to future safety testing of GMOs. The recommendation was to use newly available DNA analysis technologies to evaluate the risks to human health or to the environment of a plant –  grown by either conventional hybridization or genetic engineering – and then to do safety testing only on those plant varieties that show signs of potential hazards.

While there is documentation that the NAS committee listened to both sides of the GMO debate and made an honest attempt to evaluate the available evidence fairly, this hasn’t always been so in other NAS studies. Just as politics have interfered in the debate over Roundup and cancer, as discussed in last week’s post, the NAS has been accused of substituting politics for science. Further accusations include insufficient attention to conflicts of interest among committee and panel members, and even turning a blind eye to scientific misconduct (including falsification of data). Misconduct is an issue I’ll return to in future posts.

Next week: What Intelligent Design Fails to Understand About Evolution