How the Scientific Consensus can be Wrong


Consensus is a necessary step on the road from scientific hypothesis to theory. What many people don’t realize, however, is that a consensus isn’t necessarily the last word. A consensus, whether newly proposed or well-established, can be wrong. In fact, the mistaken consensus has been a recurring feature of science for many hundreds of years.

 A recent example of a widespread consensus that nevertheless erred was the belief that peptic ulcers were caused by stress or spicy foods – a dogma that persisted in the medical community for much of the 20th century. The scientific explanation at the time was that stress or poor eating habits resulted in excess secretion of gastric acid, which could erode the digestive lining and create an ulcer.

But two Australian doctors discovered evidence that peptic ulcer disease was caused by a bacterial infection of the stomach, not stress, and could be treated easily with antibiotics. Yet overturning such a longstanding consensus would not be simple. As one of the doctors, Barry Marshall, put it:

“…beliefs on gastritis were more akin to a religion than having any basis in scientific fact.”

To convince the medical establishment the pair were right, Marshall resorted in 1984 to the drastic measure of infecting himself with a potion containing the bacterium in question (known as Helicobacter pylori). Despite this bold and risky act, the medical world didn’t accept the new doctrine until 1994. In 2005, Barry Marshall and Robin Warren were awarded the Nobel Prize in Physiology or Medicine for their discovery.

Earlier in the 20th century, another individual fighting established authority had overthrown conventional scientific wisdom in the field of geology. Acceptance of Alfred Wegener’s revolutionary theory of continental drift, proposed in 1912, was delayed for many decades – even longer than resistance to the infection explanation for ulcers persisted – because the theory was seen as a threat to the geological establishment.

Geologists of the day refused to take seriously Wegener’s circumstantial evidence of matchups across the ocean in continental coastlines, animal and plant fossils, mountain chains and glacial deposits, clinging instead to the consensus of a contracting earth to explain these disparate phenomena. The old consensus endured among geologists even as new, direct evidence for continental drift surfaced, including mysterious magnetic stripes on the seafloor. But only after the emergence in the 1960s of plate tectonics, which describes the slow sliding of thick slabs of the earth’s crust, did continental drift theory become the new consensus.

A much older but well-known example of a mistaken consensus is the geocentric (earth-centered) model of the solar system that held sway for 1,500 years. This model was originally developed by the ancient Greek philosophers Plato and Aristotle, and later refined by the astronomer Ptolemy in the 2nd century. The Italian mathematician and astronomer Galileo Galilei fought to overturn the geocentric consensus, advocating instead the rival heliocentric (sun-centered) model of Copernicus – the model we accept today, and for which Galileo gathered evidence in the form of unprecedented telescopic observations of the sun, planets and planetary moons.

Although Galileo was correct, his endorsement of the heliocentric model brought him into conflict with university academics and the Catholic Church, both of which adhered to Ptolemy’s geocentric model. A resolute Galileo insisted that:

 “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.”

But to no avail: Galileo was called before the Inquisition, forbidden to defend Copernican ideas, and finally sentenced to house arrest for publishing a book that did just that and also ridiculed the Pope.

These are far from the only cases in the history of science of a consensus that was wrong. Others include the widely held 19th-century religious belief in creationism that impeded acceptance of Darwin’s theory of evolution, and the 20th-century paradigm linking saturated fat to heart disease.

Consensus is built only slowly, so belief in the consensus tends to become entrenched over time and is not easily abandoned by its devotees. This is certainly the case for the current consensus that climate change is largely a result of human activity – a consensus, as I’ve argued in a previous post, that is most likely mistaken.

Next: Nature vs Nurture: Does Epigenetics Challenge Evolution?

How Elizabeth Holmes Abused Science to Deceive Investors

Even in Silicon Valley, which is no stranger to hubris and deceit, it stands out – the bald-faced audacity of a young Stanford dropout, who bilked prominent investors out of hundreds of millions of dollars for a fictitious blood-testing technology based on finger-stick specimens.

Credit: Associated Press

Elizabeth Holmes, former CEO of now defunct Theranos, last year settled charges of massive financial fraud brought by the U.S. SEC (Securities and Exchange Commission), and now faces criminal charges in California for her multiple misdeeds. But beyond the harm done to duped investors, fired employees and patients misled about blood test results, Holmes’ duplicity and pathological lies only add to the abuse being heaped on science today.

One of the linchpins of the scientific method, a combination of observation and reason developed and refined for more than two thousand years, is the replication step. Observations that can’t be repeated, preferably by independent investigators, don’t qualify as scientific evidence. When the observations are blood tests on actual patients, repeatability and reliability are obviously paramount. Yet Theranos failed badly in both these areas.

Holmes created a compact testing device, originally known as the Edison and later dubbed the miniLab, supposedly capable of inexpensively diagnosing everything from diabetes to cancer. But within a year or two, questions began to emerge about just how good it was.

Several Theranos scientists protested in 2013 that the technology wasn’t ready for the market. Instead of producing repeatable results, the company’s new machine was generating inaccurate and inconsistent data for patients. Whistleblowers addressing a recent forum related how open falsification and cherry-picking of data were part of everyday operations at Theranos. And technicians had to rerun tests if the results weren’t “acceptable” to management.

Much of this chicanery was exposed by Wall Street Journal investigative reporter John Carreyrou. In the wake of his sensational reporting, drugstore chain Walgreens announced in 2015 that it was suspending previous plans to establish blood testing facilities using Theranos technology in more than 40 stores across the U.S.

Among the horrors that Carreyrou documented in a later book was a Theranos test on a 16-year-old Arizona girl, whose faulty result showed a high level of potassium, suggesting she could have been at risk of a heart attack. Tests on another Arizona woman suggested an impending stroke, for which she was unnecessarily rushed to a hospital emergency room. Hospital tests contradicted both sets of Theranos data. In January 2016, the Centers for Medicare and Medicaid Services, the oversight agency for blood-testing laboratories, declared that one of Theranos' labs posed "immediate jeopardy" to patients.

Closely allied to the repeatability required by the scientific method is transparency. Replication of a result isn’t possible unless the scientists who conducted the original experiment described their work openly and honestly – something that doesn’t always occur today. To be fair, there’s a need for a certain degree of secrecy in a commercial setting, in order to protect a company’s intellectual property. However, this need shouldn’t extend to internal operations of the company or to interactions between the very employees whose research is the basis of the company’s products.

But that’s exactly what happened at Theranos, where its scientists and technicians were kept in the dark about the purpose of their work and constantly shuffled from department to department. Physical barriers were erected in the research facility to prevent employees from actually seeing the lab-on-a-chip device, based on microfluidics and biochemistry, supposedly under development.

Only a handful of people knew that the much vaunted technology was in fact a fake. In a 2014 article in Fortune magazine, Holmes claimed that Theranos already offered more than 200 blood tests and was ramping up to more than 1,000. The reality was that Theranos could only perform 12 of the 200-plus tests, all of one type, on its own equipment and had to use third-party analyzers to carry out all the other tests. Worse, Holmes allegedly knew that the miniLab had problems with accuracy and reliability, was slower than some competing devices and, in some ways, wasn’t competitive at all with more conventional blood-testing machines.

Investors were fooled too. Among the luminaries deceived by Holmes were former U.S. Secretaries of State Henry Kissinger and George Shultz, recently resigned Secretary of Defense and retired General James Mattis – all of whom became members of Theranos’ “all-star board” – and media tycoon Rupert Murdoch. Initial meetings with new investors were often followed by a rigged demonstration of the miniLab purporting to analyze their just-collected finger-stick samples.

Holmes not only fleeced her investors but also did a great disservice to science. The story will shortly be immortalized in a movie starring Jennifer Lawrence as Holmes.

Next: How the Scientific Consensus can be Wrong

Consensus in Science: Is It Necessary?

An important but often misunderstood concept in science is the role of consensus. Some scientists argue that consensus has no place at all in science, that the scientific method alone with its emphasis on evidence and logic dictates whether a particular hypothesis stands or falls.  But the eventual elevation of a hypothesis to a widely accepted theory, such as the theory of evolution or the theory of plate tectonics, does depend on a consensus being reached among the scientific community.


In politics, consensus democracy refers to a consensual decision-making process by the members of a legislature – in contrast to traditional majority rule, in which minority opinions can be ignored by the majority. In science, consensus has long been more like majority rule, but based on facts or empirical evidence rather than personal convictions. Although observational evidence is sometimes open to interpretation, it was the attempt to redefine scientific consensus in the mold of consensus democracy that triggered a reaction to using the term in science.

This reaction was eloquently summarized by medical doctor and Jurassic Park author Michael Crichton, in a 2003 Caltech lecture titled “Aliens Cause Global Warming”:

“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. …

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world.

In science consensus is irrelevant. What is relevant is reproducible results. … There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus.”

What Crichton was talking about, I think, was the consensus democracy sense of the word – consensus forming the basis for legislation, for political action. But that’s not the same as scientific consensus, which can never be reached by taking a poll of scientists. Rather, a scientific consensus is built by the slow accumulation of unambiguous pieces of empirical evidence, until the collective evidence is strong enough to become a theory.

Indeed, the U.S. AAAS (American Association for the Advancement of Science) and NAS (National Academy of Sciences) both define a scientific theory in such terms. According to the NAS, for example,

 “The formal scientific definition of theory …  refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.”

Contrary to popular opinion, theories rank highest in the scientific hierarchy – above laws, hypotheses and facts or observations. 

Crichton’s reactionary view of consensus as out of place in the scientific world has been voiced in the political sphere as well. Twentieth-century UK prime minister Margaret Thatcher once made the comment, echoing Crichton’s words, that political consensus was “the process of abandoning all beliefs, principles, values and policies in search of something in which no one believes, but to which no one objects; the process of avoiding the very issues that have to be solved, merely because you cannot get agreement on the way ahead.” Thatcher was a firm believer in majority rule.

A well-known scientist who shares Crichton’s opinion of scientific consensus is James Lovelock, ecologist and propounder of the Gaia hypothesis that the earth and its biosphere are a living organism. Lovelock has said of consensus:

“I know that such a word has no place in the lexicon of science; it is a good and useful word, but it belongs to the world of politics and the courtroom, where reaching a consensus is a way of solving human differences.”

But as discussed above, there is a role for consensus in science. The notion articulated by Crichton and Lovelock that consensus is irrelevant has arisen in response to the modern-day politicization of science. One element of their proclamations does apply, however. As pointed out by astrophysicist and author Ethan Siegel, the existence of a scientific consensus doesn’t mean that the “science is settled.” Consensus is merely the starting point on the way to a full-fledged theory.

Next week: How Elizabeth Holmes Abused Science to Deceive Investors

Corruption of Science: Scientific Fraud


One of the most troubling signs of the attack on science is the rising incidence of outright fraud, in the form of falsification and even fabrication of scientific data. A 2012 study published by the U.S. National Academy of Sciences noted an increase of almost 10 times since 1975 in the percentage of biomedical research articles retracted because of fraud. Although the current percentage retracted due to fraud was still very small at approximately 0.01%, the study authors remarked that this underestimated the actual percentage of fraudulent articles, since only a fraction of such articles are retracted.

One of the more egregious episodes of fraud was British gastroenterologist Andrew Wakefield’s claim in a 1998 study that 8 out of 12 children in the study had developed symptoms of autism after injection of the combination MMR (measles-mumps-rubella) vaccine. As a result of the well-publicized study, hundreds of thousands of parents who had conscientiously followed immunization schedules in the past panicked and began declining the MMR vaccine. And, unsurprisingly, outbreaks of measles subsequently occurred all over the world.

But Wakefield’s paper was slowly discredited over the next 12 years, until the prestigious medical journal The Lancet formally retracted it in 2010; shortly afterward, the disgraced gastroenterologist’s medical license was revoked. The British Medical Journal went a step further in 2011, declaring the paper fraudulent and citing unmistakable evidence that Wakefield had fabricated his data on autism and the MMR vaccine.

In 2015, Iowa State University researcher Dong Pyou Han received a prison sentence of four and a half years and was ordered to repay $7.2 million in grant funds, after being convicted of fabricating and falsifying data in trials of a potential HIV vaccine. On multiple occasions, Han had spiked blood samples from vaccinated rabbits with human HIV antibodies to create the illusion that the vaccine boosted immunity against HIV. Although Han was contrite in court, one of the prosecuting attorneys doubted his remorse, pointing out that Han’s job depended on research funding that was only renewed as a result of his bogus presentations showing the experiments were succeeding.

In 2018, officials at Harvard Medical School and Brigham and Women’s Hospital in Boston called for the retraction of a staggering 31 papers from the laboratory of once prominent Italian heart researcher Piero Anversa, because the papers "included falsified and/or fabricated data." Dr. Anversa’s research was based on the notion that the heart contains stem cells, a type of cell capable of transforming into other cells, that could regenerate cardiac muscle. But other laboratories couldn’t verify Anversa’s idea and were unable to reproduce his experimental findings – a major red flag, since replication of scientific data is a crucial part of the scientific method.

Despite this warning sign, the work spawned new companies claiming that their stem-cell injections could heal hearts damaged by a heart attack, and led to a clinical trial funded by the U.S. National Heart, Lung and Blood Institute. The Boston hospital’s parent company, however, agreed in 2017 to a $10 million settlement with the U.S. government over allegations that the published research of Anversa and two colleagues had been used to fraudulently obtain federal funding. Apart from the data it fabricated, the government alleged, the lab had utilized invalid and improperly characterized cardiac stem cells and maintained deliberately misleading records. Anversa has since left the medical school and hospital.

Scientific fraud today extends even to the publishing world. A recent sting operation targeted so-called predatory journals – those that charge authors a fee but offer no real publication services, such as peer review, beyond publication itself. The investigation found that fully 33% of the journals contacted offered a fictitious scientist a position on their editorial boards, four of them immediately appointing the fake scientist as editor-in-chief.

It’s no wonder then that scientific fraud is escalating. In-depth discussion of recent cases can be found on several websites, such as For Better Science and Retraction Watch.

Next week: Consensus in Science: Is It Necessary?

Corruption of Science: The Reproducibility Crisis

One of the more obvious signs that modern science is ailing is the reproducibility crisis – the vast number of peer-reviewed scientific studies that can’t be replicated in subsequent investigations and whose findings turn out to be false. In the field of cancer biology, for example, researchers discovered that an alarming 89% of published results couldn’t be reproduced. Even in the so-called soft science of psychology, the rate of irreproducibility hovers around 60%. And to make matters worse, falsification and outright fabrication of scientific data is on the rise.


The reproducibility crisis is drawing a lot of attention from scientists and nonscientists alike. In 2018, the U.S. NAS (the National Association of Scholars in this case, not the Academy of Sciences), an academic watchdog organization that normally focuses on the liberal arts and education policy, published a particularly comprehensive examination of the problem. Although the emphasis in the NAS report is on the misuse of statistical methods in scientific research, the report discusses possible causes of irreproducibility and presents a laundry list of recommendations for addressing the crisis.    

The crisis is especially acute in the biomedical sciences. Over 10 years ago, Greek medical researcher John Ioannidis argued that the majority of published research findings in medicine were wrong. This included epidemiological studies in areas such as dietary fat, vaccination and GMO foods as well as clinical trials and cutting-edge research in molecular biology. 

In 2011, a team at Bayer HealthCare in Germany reported that only about 25% of published preclinical studies on potential new drugs could be validated. Some of the unreproducible papers had catalyzed entirely new fields of research, generating hundreds of secondary publications. More worryingly, other papers had led to clinical trials that were unlikely to be of any benefit to the participants.

Author Richard Harris describes another disturbing example, of research on breast cancer that was conducted on misidentified skin cancer cells. The sloppiness resulted in thousands of papers being published in prominent medical journals on the wrong cancer. Harris blames the sorry condition of current research on scientists taking shortcuts around the once venerated scientific method.

Cutting corners to pursue short-term success is but one consequence of the pressures experienced by today’s scientists. These pressures include the constant need to win research grants as well as to publish research results in high-impact journals. The more spectacular a submitted paper is, the more likely it is to be accepted – but often at the cost of research quality. It has become more important to be the first to publish, or to present sensational findings, than to be correct.

Another consequence of the bind in which scientists find themselves is the ever increasing degree of misunderstanding and misuse of statistics, as detailed in the NAS report. Among other abuses, the report cites spurious correlations in data that researchers claim to be “statistically significant”; the improper use of statistics due to poor understanding of statistical methodology; and the conscious or unconscious biasing of data to fit preconceived ideas.

Ioannidis links irreproducibility to the habit of assigning too much importance to the statistical p-value – the probability of obtaining data at least as extreme as that observed if chance alone (the null hypothesis) were at work. The smaller the p-value, the less plausible it is that the experimental results are a fluke of chance, and the stronger the case for a new hypothesis. Although p-values below 0.05 are commonly regarded as statistically significant, using this condition as a criterion for publication means that roughly one time in twenty, the experimental data could be the result of chance alone. The NAS report recommends defining statistical significance as a p-value less than 0.01 rather than 0.05 – a much more demanding standard.
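The one-in-twenty arithmetic can be checked with a quick simulation. The sketch below is a hypothetical illustration (not taken from the NAS report): it runs thousands of “null” experiments in which no real effect exists, and counts how often each significance cutoff nevertheless declares a false positive.

```python
# Illustrative sketch: false-positive rates under p < 0.05 vs p < 0.01
# when there is no true effect at all (the null hypothesis is true).
import math
import random

random.seed(42)  # make the simulation repeatable

def p_value_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

def null_experiment(n=30):
    """Sample n observations with no real effect, test whether mean != 0."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    se = 1 / math.sqrt(n)  # known sigma = 1, so a z-test applies
    return p_value_two_sided(mean / se)

trials = 10_000
ps = [null_experiment() for _ in range(trials)]
false_pos_05 = sum(p < 0.05 for p in ps) / trials
false_pos_01 = sum(p < 0.01 for p in ps) / trials

print(f"p < 0.05 false-positive rate: {false_pos_05:.3f}")  # close to 1 in 20
print(f"p < 0.01 false-positive rate: {false_pos_01:.3f}")  # close to 1 in 100
```

By construction, none of these “significant” results reflects a real effect, which is exactly why tightening the cutoff from 0.05 to 0.01 cuts the false-positive rate roughly fivefold.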

The report further recommends integration of basic statistics into curricula at high-school and college levels, and rigorous educational programs in those disciplines that rely heavily on statistics. Beyond statistics, other suggested reforms include having researchers make their data available for public inspection, which doesn’t often occur at present, and encouraging government agencies to fund projects designed purely to replicate earlier research, which again is rare today. The NAS believes that measures like these will help to improve reproducibility in scientific studies as well as keeping advocacy and the politicization of science at bay.

Next week: Corruption of Science: Scientific Fraud

Should We Fear Low-Dose Radiation? What Science Says

Modern science is constantly under attack from political forces, often fueled by fear. A big fear is radiation exposure – a fear made only too real by the devastation of the atomic bombs dropped on Japan to end World War II, and the aftereffects of several extensive nuclear accidents around the world in the last few decades. But, while high doses of radiation are known to be harmful to human health or even deadly, the effects of low doses are controversial.


For many years, the prevailing wisdom in the scientific community about radiation protection has been that there is no safe dose of ionizing radiation. This belief is enshrined in the so-called LNT (linear, no-threshold) model used to estimate cancer risks and establish cleanup levels in radioactively contaminated environments. The model dates back to studies of irradiated fruit flies in the 1930s, and subsequent formulation of the LNT dose-response model by American geneticist and Nobel laureate Hermann Muller.

The LNT model assumes that the body’s response to radiation is directly proportional to the radiation dose. So any detrimental health effects – such as cancer or an inheritable genetic mutation – go up and down with dose (and dose rate), but don’t disappear altogether until the dose falls to zero.

A very different concept that is gaining acceptance among radiation workers is the threshold model. Unlike the LNT model, this assumes that exposure to radiation is safe as long as the exposure is below a threshold dose. That is, there are no adverse health effects at all at low radiation doses, but above the threshold there are effects proportional to the dose, as in the no-threshold model.  

A new variation on the threshold model is hormesis, which hypothesizes that below the threshold dose, beneficial health effects actually occur. Hormesis has been championed by Edward Calabrese, an environmental toxicologist at the University of Massachusetts Amherst who has long been critical of the LNT approach to risk assessment, for both radiation and toxic chemicals. In 2015, a petition was submitted to the U.S. NRC (Nuclear Regulatory Commission) to adopt the hormesis model for regulatory purposes.
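The three competing pictures can be summarized as simple functions of dose. The sketch below is purely illustrative – the function names, slope, threshold and benefit values are arbitrary assumptions in arbitrary units, not regulatory formulas.

```python
# Illustrative sketch of the three dose-response models discussed above.
# All parameters (slope, threshold, benefit) are arbitrary assumptions.

def lnt_risk(dose, slope=1.0):
    """LNT: risk is directly proportional to dose, vanishing only at zero."""
    return slope * dose

def threshold_risk(dose, threshold=1.0, slope=1.0):
    """Threshold: no effect at all below the threshold, proportional above."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

def hormesis_risk(dose, threshold=1.0, slope=1.0, benefit=0.2):
    """Hormesis: a mild beneficial effect (negative risk) below the
    threshold, turning into proportional harm above it."""
    if dose <= threshold:
        return -benefit * dose * (threshold - dose)
    return slope * (dose - threshold)

for d in (0.0, 0.5, 1.0, 2.0):
    print(f"dose {d}: LNT {lnt_risk(d):.2f}, "
          f"threshold {threshold_risk(d):.2f}, hormesis {hormesis_risk(d):.2f}")
```

Note that all three models agree at high doses; they differ only in what they predict below the threshold, which is precisely the region where the data are sparse.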

Which model is the correct picture of how the human body is affected by radiation? The scientific evidence isn’t all that clear.

Even when the LNT model was proposed, only very limited data was available at low doses, a situation that’s unchanged today. This means that the statistical accuracy of individual data points at low doses is poor, and much of the data could equally well fit the LNT, threshold or hormesis models. Two major pieces of evidence that a U.S. NAS (National Academy of Sciences) committee formerly relied on to buttress the LNT model – a study of Japanese atomic bomb survivors and a 15-country study of nuclear workers – are in fact compatible with either the threshold or the LNT model, more recent analysis has shown.   

The threshold model may seem more intuitive, since it’s well known for chemical toxins that any substance is toxic above a certain dose. “The dose makes the poison,” as the Renaissance Swiss physician Paracelsus observed. But the biological response to radiation isn’t necessarily the same as the response to a toxin.

Evidence in support of the hormesis model, however, includes numerous studies showing that low radiation doses can activate the immune system and thereby protect health. And no increase in the incidence of cancer has been observed among those Japanese bomb survivors exposed to only low doses of the same radiation that, in higher doses, sickened or killed others.

Scientific opinion is divided. The once strong consensus on the validity of the LNT model has evaporated, with 70% of scientists at U.S. national laboratories now believing that the threshold model more accurately reflects radiation effects. A similar percentage of scientists in several European countries hold the same view.

Whether or not low doses of radiation are protective, as the hormesis model suggests, no adverse health effects have ever been detected from exposure to low-dose, low-dose-rate radiation. But the public clings to the outmoded scientific consensus of the LNT model that no dose is safe. So society at large is unnecessarily fearful of any exposure to radiation whatsoever, when in reality low doses are most likely benign and could even be beneficial.

Next: Corruption of Science: The Reproducibility Crisis

How Hype is Hurting Science

The recent riots in France over a proposed carbon tax, aimed at supposedly combating climate change, were a direct result of blatant exaggeration in climate science for political purposes. It’s no coincidence that the decision to move forward with the tax came soon after an October report from the UN’s IPCC (Intergovernmental Panel on Climate Change), claiming that drastic measures to curtail climate change are necessary by 2030 in order to avoid catastrophe. President Emmanuel Macron bought into the hype, only to see his people rise up against him.

Exaggeration has a long history in modern science. In 1977, the select U.S. Senate committee drafting new low-fat dietary recommendations wildly exaggerated its message by declaring that excessive fat or sugar in the diet was as much of a health threat as smoking, even though a reasoned examination of the evidence revealed that wasn’t true.

About a decade later, the same hype infiltrated the burgeoning field of climate science. At another Senate committee hearing, astrophysicist James Hansen, who was then head of GISS (NASA’s Goddard Institute for Space Studies), declared he was 99% certain that the 0.4 degrees Celsius (0.7 degrees Fahrenheit) of global warming from 1958 to 1987 was caused primarily by the buildup of greenhouse gases in the atmosphere, and wasn’t a natural variation. This assertion was based on a computer model of the earth’s climate system.

At a previous hearing, Hansen had presented climate model predictions of U.S. temperatures 30 years in the future that were three times higher than they turned out to be. This gross exaggeration makes a mockery of his subsequent claim that the warming from 1958 to 1987 was all man-made. His stretching of the truth stands in stark contrast to the caution and understatement of traditional science.

But Hansen’s hype only set the stage for others. Similar computer models have also exaggerated the magnitude of more recent global warming, failing to predict the pause in warming from the late 1990s to about 2014. During this interval, the warming rate dropped to below half the rate measured from the early 1970s to 1998. Again, the models overestimated the warming rate by two or three times.

An exaggeration mindlessly repeated by politicians and the mainstream media is the supposed 97% consensus among climate scientists that global warming is largely man-made. The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. But what is never revealed is that almost 8,000 of the abstracts expressed no opinion at all on anthropogenic (human-caused) warming. When that’s taken into account, the climate scientist consensus falls to somewhere between 33% and 63%. So much for an overwhelming majority!
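The arithmetic behind the lower bound can be reproduced from the rounded figures quoted above (the exact counts come from the study itself, so the result here is approximate):

```python
# Back-of-envelope check of the consensus arithmetic, using the rounded
# figures quoted in the text rather than the study's exact counts.
total_abstracts = 12_000
no_position = 8_000

# ~97% of the abstracts that DID take a position endorsed human-caused warming
endorsing = 0.97 * (total_abstracts - no_position)

# as a share of ALL abstracts, including those expressing no opinion
share_of_all = endorsing / total_abstracts
print(f"{share_of_all:.1%}")  # just under one third of all abstracts
```

Whether the no-position abstracts should count against the consensus is of course the contested step; the calculation only shows how sensitive the headline percentage is to that choice.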

A further over-hyped assertion about climate change is that the Arctic’s polar bear population is shrinking because of diminishing sea ice, and that the bears are facing extinction. For global warming alarmists, this claim has become a cause célèbre. Yet, despite numerous articles in the media and photos of apparently starving bears, current evidence shows that the polar bear population has actually been steady for the whole period that the ice has been decreasing – and may even be growing, according to the native Inuit.

It’s not just climate data that’s exaggerated (and sometimes distorted) by political activists. Apart from the historical example in nutritional science cited above, the same trend can be found in areas as diverse as the vaccination debate and the science of GMO foods.

Exaggeration is a common, if frowned-upon marketing tool in the commercial world: hype helps draw attention in the short term. But its use for the same purpose in science only tarnishes the discipline. And, just as exaggeration eventually turns off commercial customers interested in a product, so too does it make the general public wary if not downright suspicious of scientific proclamations. The French public has recognized this on climate change.

Subversion of Science: The Low-Fat Diet


Remember the low-fat diet? Highly popular in the 1980s and 1990s, it was finally pushed out of the limelight by competing regimens such as the Mediterranean diet. At the time, it hadn’t yet been discovered that the low-fat diet wasn’t particularly healthy. But its official blessing for decades by the governments of both the U.S. and UK represents a subversion of science by political forces that overlook evidence and abandon reason.

The low-fat diet was born in a 1977 report from a U.S. government committee chaired by Senator George McGovern, which had become aware of research purportedly linking excessive fat in the diet to killer diseases such as coronary heart disease and cancer. The committee hoped that its report would do as much for diet and chronic disease as the earlier Surgeon General’s report had done for smoking and lung cancer.

The hypothesis that eating too much saturated fat results in heart disease, caused by narrowing of the coronary arteries, was formulated by American physiologist Ancel Keys in the 1950s. Keys’ own epidemiological study, conducted in seven different countries, initially confirmed his hypothesis. But many other studies failed to corroborate the diet-heart hypothesis, and even Keys’ own data no longer substantiated it 25 years later. Double-blind clinical trials – which, unlike epidemiological studies, are able to establish causation – also gave results in conflict with the hypothesis.

Although it was found that eating less saturated fat could lower cholesterol levels, a growing body of evidence showed that it didn’t help to ward off heart attacks or prolong life spans. Yet Senator McGovern’s committee forged ahead regardless. The results of all the epidemiological studies and major clinical trials that refuted the diet-heart hypothesis were simply ignored – a classic case of science being trampled on by politics.

The McGovern committee’s report turned the mistaken hypothesis into nutritional dogma by drawing up a detailed set of dietary guidelines for the American public. After heated political wrangling with other government agencies, the USDA (U.S. Department of Agriculture) formalized the guidelines in 1980, effectively sanctioning the first-ever official low-fat diet. The UK followed suit a few years later.

While the guidelines erroneously linked high consumption of saturated fat to heart disease, they did concede that what constitutes a healthy level of fat in the diet was controversial. The guidelines recommended lowering intake of high-fat foods such as eggs and butter; boosting consumption of fruits, vegetables, whole grains, poultry and fish; and eating fewer foods high in sugar and salt.

With government endorsement, the low-fat diet quickly became accepted around the world. It was difficult back then even to find cookbooks that didn’t extol the virtues of the diet. Unfortunately for the public, the diet promoted to conquer one disease contributed to another – obesity – because it replaced fat with refined carbohydrates. And it wasn’t suitable for everyone.

This first became evident in the largest-ever long-term clinical trial of the low-fat diet, known as the Women’s Health Initiative. But the trial, like many earlier studies, showed that the diet-heart hypothesis didn’t hold up, at least for women. After eight years, the low-fat diet was found to have had no effect on heart disease or deaths from the disease. Worse still, in a short-term study of the low-fat diet in U.S. Boeing employees, women who had followed the diet appeared to have actually increased their risk for heart disease.

A UN review of available data in 2008 concluded that several clinical trials of the diet “have not found evidence for beneficial effects of low-fat diets,” and commented that there wasn’t any convincing evidence either for any significant connection between dietary fat and coronary heart disease or cancer.

Today the diet-heart hypothesis is no longer widely accepted and nutritional science is beginning to regain the ground taken over by politics. But it has taken over 60 years for this attack on science to be repulsed.

Next week: How Hype is Hurting Science

Use and Misuse of the Law in Science

Aside from patent law, science and the law are hardly bosom pals. But there are many parallels between them: above all, they’re both crucially dependent on evidence and logic. However, while the legal system has been used to defend science and to settle several scientific issues, it has also been misused for advocacy by groups such as anti-evolutionists and anti-vaccinationists.


In the U.S., the law played a major role in keeping the teaching of creationism out of schools during the latter part of the 20th century. Creationism, discussed in previous posts on this blog, is a purely religious belief that rejects the theory of evolution. Because of the influence of the wider Protestant fundamentalist movement earlier in the century, which culminated in the infamous Scopes Monkey Trial of 1925, little evolution was taught in American public schools and universities for decades.

All that changed in 1963, when the U.S., as part of an effort to catch up to the rival Soviet Union in science, issued a new biology text, making high-school students aware for the first time of their apelike ancestors. And five years later, the U.S. Supreme Court struck down the last of the old state laws banning the teaching of evolution in schools.

In 1987 the Supreme Court went further, in upholding a ruling by a Louisiana judge that a state law, mandating that equal time be given to the teaching of creation science and evolution in public schools, was unconstitutional. Creationism suffered another blow in 2005 when a judge in Dover, Pennsylvania ruled that the school board’s sanctioning of the teaching of intelligent design in its high schools was also unconstitutional. The board had angered teachers and parents by requiring biology teachers to make use of an intelligent design reference book in their classes.

All these events show how the legal system was misused repeatedly by anti-evolutionists to argue that creationism should be taught in place of or alongside evolution in public schools – but also how, at the same time, the law was used successfully to quash those creationist efforts and to bolster science.

Much the same pattern can be seen with anti-vaccine advocates, who have misused lawsuits and the courtroom to maintain that their objections to vaccination are scientific and that vaccines are harmful. But judges in many different courts have found the evidence presented for all such contentions to be unscientific.

The most notable example was a slew of cases – 5,600 in all – that came before the U.S. Vaccine Court in 2007. Alleged in these cases was that autism, the often devastating neurological disorder in children, is caused by vaccination with the measles-mumps-rubella (MMR) vaccine, or by a combination of the vaccine with a mercury-based preservative. To handle the enormous caseload, the court chose three special masters to hear just three test cases on each of the two charges.

In 2009 and 2010, the Vaccine Court unanimously rejected both contentions. The special masters called the evidence weak and unpersuasive, and chastised doctors and researchers who “peddled hope, not opinions grounded in science and medicine.”

Likewise, the judge in a UK court case alleging a link between autism and the combination diphtheria-tetanus-pertussis (DTP) vaccine found that the “plaintiff had failed to establish … that the vaccine could cause permanent brain damage in young children.” The judge excoriated a pediatric neurologist whose testimony at the trial completely contradicted assertions the doctor had made in a previous research paper that had triggered the litigation, along with other lawsuits, in the first place.

But, while it took a court of law to establish how unscientific the evidence for the claims about vaccines was, and it was the courts that kept the teaching of unscientific creationism out of school science classes, the court of public opinion has not been greatly swayed in either case. As many as 40% of the general public worldwide believe that all life forms, including ourselves, were created directly by God out of nothing, and that the earth is only between 6,000 and 10,000 years old. And more and more parents are choosing not to vaccinate their children, insisting that vaccines always cause disabling side effects or even other diseases.

Although the law has done its best to uphold the court of science, the attack on science continues.

Next week: Subversion of Science: The Low-Fat Diet

On Science Skeptics and Deniers

Do all climate change skeptics also question the theory of evolution? Do anti-vaccinationists also believe that GMO foods are unsafe? As we’ll see in this post, scientific skepticism and “science denial” are much more nuanced than most people think.


To begin with, scientific skeptics on hot-button issues such as climate change, vaccination and GMOs (genetically modified organisms) are often linked together as anti-science deniers. But the simplistic notion that skeptics and deniers are one and the same – the stance taken by the mainstream media – is mistaken. And the evidence shows that skeptics or deniers in one area of science aren’t necessarily so in other areas.

The split between outright deniers of the science and skeptics who merely question some of it varies markedly, surveys show: on evolution there are approximately twice as many deniers as skeptics, while on climate change there are only about half as many deniers as skeptics.

In evolution, approximately 32% of the American public are creationists who deny Darwin’s theory of evolution entirely, while another 14% are skeptical of the theory. In climate change, the numbers are reversed, with about 19% denying any human role in global warming and a much larger 35% (averaged across surveys) accepting a human contribution but remaining skeptical about its magnitude. In GMOs, on the other hand, the percentages of skeptics and deniers are about the same.

The surveys also reveal that anti-science skepticism or denial doesn’t carry over from one issue to another. For example, only about 65% of evolutionary skeptics or deniers are also climate change skeptics or deniers: the remaining 35% who doubt or reject evolution believe in the climate change narrative of largely human-caused warming. So the two groups of skeptics or deniers don’t consist of the same individuals, although there is some overlap.

In the case of GMO foods, approximately equal percentages of the public reject the consensus among scientists that GMOs are safe to eat, and are skeptical about climate change. Once more, however, the two groups don’t consist of the same people. And, even though most U.S. farmers accept the consensus on the safety of GMO crops but are climate change skeptics, there are environmentalists who are GMO deniers or skeptics but accept the prevailing belief on climate change. Prince Charles is a well-known example of the latter.

Social scientists who study such surveys have identified two main influences on scientific skepticism and denial: religion and politics. As we might expect, opinions about evolution are strongly tied to religious identity, practice and belief. And, while Evangelicals are much more likely to be skeptical about climate change than those with no religious affiliation, climate skepticism overall seems to be driven more by politics – specifically, political conservatism – than by religion.

In the political sphere, U.S. Democrats are more inclined than Republicans to believe that human actions are the cause of global warming, that the theory of evolution is valid, and that GMO foods are safe to eat. However, other factors influence the perception of GMO food safety, such as corporate control of food production and any government intervention. Variables like demographics and education come into the picture too, in determining skeptical attitudes on all issues.

Lastly, a striking aspect of skepticism and denial in contemporary science is the gap in opinion between scientists and the general public. Although skepticism is an important element of the scientific method, a far larger percentage of the general population questions the prevailing wisdom on scientific issues than do scientists, with the possible exception of climate change. The precise reasons for this gap are complex, according to a recent study, and include religious and political influences as well as differences in cognitive functioning and in education. While scientists may possess more knowledge of science, the public may exhibit more common sense.

Next week: Use and Misuse of the Law in Science

Why Creation Science Isn’t Science

According to so-called creation science – the widely held religious belief that the world and all its living creatures were created by God in just six days – the earth is only 6,000 to 10,000 years old. The faith-based belief rejects Darwin’s scientific theory of evolution, which holds that life forms evolved over a long period of time through the process of natural selection. In resorting to fictitious claims to justify its creed, creation science only masquerades as science.    

creation science.jpg

Creation science has its roots in a literal interpretation of the Bible. To establish a biblical chronology, various scholars have estimated the lifespans of prominent figures and the intervals between significant historical events described in the Bible. The most detailed chronology was drawn up in the 1650s by the Irish archbishop James Ussher, who calculated that exactly 4,004 years elapsed between the creation and the birth of Jesus. It’s this dubious calculation that underlies the 6,000-year lower limit for the age of the earth.

Scientific evidence, however, tells us that the earth’s actual age is 4.5 to 4.6 billion years. Even when Darwin proposed his theory, the available evidence at the time indicated an age of at least a few hundred thousand years. Darwin himself believed that the true number was more like several hundred million years, based on his forays into geology. 

By the early 1900s, the newly developed method of radiometric dating dramatically boosted estimates of Earth’s age into the billion-year range – a far cry from the several thousand years that young-Earth creationists allow, derived from their literal reading of the Bible. Radiometric dating relies on the radioactive decay of certain chemical elements such as uranium, carbon or potassium, for which the decay rates are accurately known.
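The method rests on nothing more exotic than the exponential decay law. As a minimal sketch (the function name is mine; the half-life used is the well-established value for uranium-238), a sample’s age follows directly from the fraction of the parent isotope that remains:

```python
import math

# Exponential decay: N(t) = N0 * exp(-lam * t), where lam = ln(2) / half_life.
# Solving for t gives the age of a sample from its surviving isotope fraction.

U238_HALF_LIFE = 4.468e9  # years, a well-measured value for uranium-238

def age_from_fraction(surviving_fraction: float, half_life: float) -> float:
    """Age of a sample, given the fraction of the parent isotope remaining."""
    decay_constant = math.log(2) / half_life
    return -math.log(surviving_fraction) / decay_constant

# A rock retaining exactly half of its original U-238 is one half-life old:
print(f"{age_from_fraction(0.5, U238_HALF_LIFE):.3e} years")  # ≈ 4.468e9
```

Because the decay constants of uranium, potassium and other isotopes are measured to high precision, ages in the billions of years fall out of this simple relation rather than from any assumption about the earth’s history.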

To overcome the vast discrepancy between the scientifically determined age of the earth and the biblical estimate, young-Earth creationists – who, surprisingly, include hundreds of scientists with an advanced degree in science or medicine – twist science in a futile effort to discredit radiometric dating. Absurdly, they object that the method can’t be trusted because of a handful of instances when radiometric dating has been incorrect. But such an argument in no way proves a young earth, and in any case fails to invalidate a technique that has yielded correct results, as established independently by other methods, tens of thousands of times.

Another, equally ridiculous claim is that somehow the rate of radioactive decay underpinning the dating method was billions of times higher in the past, which would considerably shorten radiometrically measured ages. Some creationists even maintain that radioactive decay sped up more than once. What they don’t realize is that any significant change in decay rates would imply that fundamental physical constants (such as the speed of light) had also changed. If that were so, we’d be living in a completely different type of universe. 

Among other wild assertions that creationists use as evidence that the planet is no more than 10,000 years old are rapid exponential decay of the earth’s magnetic field, which is a spurious claim, and the low level of helium in the atmosphere, which merely reflects how easily the gas escapes from the earth and has nothing to do with its age.

Apart from such futile attempts to shorten the earth’s longevity, young-Earth creationists also rely on the concept of flood geology to prop up their religious beliefs. Flood geology, which I’ve discussed in detail elsewhere, maintains that the planet was reshaped by a massive worldwide flood as described in the biblical story of Noah’s ark. It’s as preposterously unscientific as creationist efforts to uphold the idea of a young earth.

The depth of the attack on modern science can be seen in polls showing that a sizable 38% of the U.S. adult public, and a similar percentage globally, believe that God created humans in their present form within the last 10,000 years. The percentage may be higher yet for those who identify with certain religions, and perhaps a further 10% believe in intelligent design, the form of creationism discussed in last week’s post. The breadth of disbelief in the theory of evolution is astounding, especially considering that it’s almost universally accepted by mainstream Churches and the overwhelming majority of the world’s scientists.

Next week: On Science Skeptics and Deniers

What Intelligent Design Fails to Understand About Evolution


One of the threats to modern science is the persistence of faith-based beliefs about the origin of life on Earth, such as the concept of intelligent design which holds that the natural world was created by an intelligent designer – who may or may not be God or another deity. Intelligent design, like other forms of creationism, is incompatible with the theory of evolution formulated by English naturalist Charles Darwin in the 19th century. But, in asserting that complex biological systems defy any scientific explanation, believers in intelligent design fail to understand the basic features of evolution.

The driving force in biological evolution, or descent from a common ancestor through cumulative change over time, is the process of natural selection. The essence of natural selection is that, as in human society, nature produces more offspring than can survive, and that variation in a species means some offspring have a slightly greater chance of survival than the others.  These offspring have a better chance of reproducing and passing the survival trait on to the next generation than those who lack the trait.

A common misconception about natural selection is that it is an entirely random process. But this is not so. Genetic variation within a species, which distinguishes individuals from one another and usually results from mutation, is indeed random. However, the selection aspect isn’t random but rather a snowballing process, in which each evolutionary step that selects the variation best suited to reproduction builds on the previous step.

Intelligent design proponents often argue that the “astonishing complexity” of living cells and biological complexes such as the bacterial flagellum – a whip-like appendage on a bacterial cell that rotates like a propeller – precludes their evolution via the step-by-step mechanism of natural selection. Such complex systems, they insist, can only be created as an integrated whole and must therefore have been designed by an intelligent entity.

There are several sound scientific reasons why this claim is fallacious: for example, natural selection can work on modular units already assembled for another purpose. But the most telling argument is simply that evolution is incremental and can take millions or even hundreds of millions of years – a length of time that is incomprehensible to us as humans, to whom even a few thousand years seems an eternity. The laborious, trial-and-error, one-step-at-a-time assembly of complex biological entities may indeed not be possible in a few thousand years, but is easily accomplished in a time span that’s beyond our imagination.

However, evolution aside, intelligent design can’t lay any claim to being science. Most intelligent design advocates do accept the antiquity of life on earth, unlike adherents to the deceptively misnamed creation science, the topic for next week’s post. But neither intelligent design nor creation science offers any scientific alternative to Darwin’s mechanism of natural selection. And they both distort or ignore the vast body of empirical evidence for evolution, which includes the fossil record and biodiversity as well as a host of modern-day observations from fields such as molecular biology and embryology.

That intelligent design and creation science aren’t science at all is apparent from the almost total lack of peer-reviewed papers published in the scientific literature. Apart from a few articles in educational journals on the different forms of creationism, the only known paper on creationism itself – an article, based on intelligent design, about the epoch known as the Cambrian explosion – was published in an obscure biological journal in 2004. But one month later, the journal’s publishing society reprimanded the editor for not handling peer review properly and repudiated the article. In its formal explanation, the society emphasized that no scientific evidence exists to support intelligent design.

A valid scientific theory must, at least in principle, be capable of being invalidated or disproved by observation or experiment. Along with other brands of creationism, intelligent design is a pseudoscience that can’t be falsified because it depends not on scientific evidence, but on a religious belief based on faith in a supernatural creator. There’s nothing wrong with faith, but it’s the very antithesis of science. Science requires evidence and a skeptical evaluation of claims, while faith demands unquestioning belief, without evidence.

Next week: Why Creation Science Isn’t Science

When No Evidence is Evidence: GMO Food Safety

The twin hallmarks of genuine science are empirical evidence and logic. But in the case of foods containing GMOs (genetically modified organisms), it’s the absence of evidence to the contrary that provides the most convincing testament to the safety of GMO foods. Although almost 40% of the public in the U.S. and UK remain skeptical, there simply isn’t any evidence to date that GMOs are deadly or even unhealthy for humans.

Absence of evidence doesn’t prove that GMO foods are safe beyond all possible doubt, of course. Such proof is impossible in practice, as harmful effects from some as-yet unknown GMO plant can’t be categorically ruled out. But a committee of the U.S. National Academies of Sciences, Engineering, and Medicine (NAS) undertook a study in 2016 to examine any negative effects as well as potential benefits of both currently commercialized and future GMO crops.


The study authors found no substantial evidence that the risk to human health was any different for current GMO crops on the market than for their traditionally crossbred counterparts. Crossbreeding or artificial hybridization refers to the conventional form of plant breeding, first developed in the 18th century and continually refined since then, which revolutionized agriculture before genetic engineering came on the scene in the 1970s. The evidence evaluated in the study included presentations by 80 people with diverse expertise on GMO crops; hundreds of comments and documents from individuals and organizations; and an extensive survey by the committee of published scientific papers.

The committee reexamined the results of several types of testing conducted in the past to evaluate genetically engineered crops and the foods derived from them. Although they found that many animal-feeding studies weren’t optimal, the large number of such experimental studies provided “reasonable evidence” that eating GMO foods didn’t harm animals (typically rodents). This conclusion was reinforced by long-term data on livestock health before and after GMO feed crops were introduced.

Two other informative tests involved analyzing the composition of GMO plants and testing for allergens. The NAS study found that while there were differences in the nutrient and chemical compositions of GMO plants compared to similar non-GMO varieties, the differences fell within the range of natural variation for non-GMO crops. 

In the case of specific health problems such as allergies or cancer possibly caused by eating genetically modified foods, the committee relied on epidemiological studies, since long-term randomized controlled trials have never been carried out. The results showed no difference between studies conducted in the U.S. and Canada, where the population has consumed GMO foods since the late 1990s, and similar studies in the UK and Europe, where very few GMO foods are eaten. The committee acknowledged, however, that biases may exist in the epidemiological data available on certain health problems.

The NAS report also recommended a tiered approach to future safety testing of GMOs. The recommendation was to use newly available DNA analysis technologies to evaluate the risks to human health or to the environment of a plant –  grown by either conventional hybridization or genetic engineering – and then to do safety testing only on those plant varieties that show signs of potential hazards.

While there is documentation that the NAS committee listened to both sides of the GMO debate and made an honest attempt to evaluate the available evidence fairly, this hasn’t always been so in other NAS studies. Just as politics have interfered in the debate over Roundup and cancer, as discussed in last week’s post, the NAS has been accused of substituting politics for science. Further accusations include insufficient attention to conflicts of interest among committee and panel members, and even turning a blind eye to scientific misconduct (including falsification of data). Misconduct is an issue I’ll return to in future posts.

Next week: What Intelligent Design Fails to Understand About Evolution

Politics Clashes with Science over Glyphosate and Cancer

Nothing exemplifies the attack on science more than its subversion to politics. The intrusion of political forces into the scientific sphere has distorted the debates over dietary fat, climate change, GMO crops and other controversial topics. Particularly contentious at present is the charge that the weedkiller glyphosate causes cancer, an allegation that’s behind thousands of U.S. lawsuits filed by cancer victims against glyphosate manufacturer Monsanto. The victims, who suffer from non-Hodgkin’s lymphoma, claim the cancer was caused by spraying the company’s glyphosate-based Roundup herbicide.


Best-selling Roundup has been used since 1974 to kill weeds in more than 100 food crops as well as in greenhouses, aquatic areas, and residential parks and gardens. It became an especially profitable product for Monsanto in 1996 after the agricultural behemoth introduced Roundup Ready seeds, which are genetically engineered to make crop plants resistant to the herbicide; the revolutionary advance meant that farmers could now use Roundup to kill weeds while the crop was growing, instead of only before planting.

The carcinogenic potential of glyphosate has been evaluated several times in the U.S. by the EPA (Environmental Protection Agency), and in Europe by the EFSA (European Food Safety Authority) and the ECHA (European Chemicals Agency). While all these evaluations concluded that glyphosate was unlikely to be carcinogenic to humans, the IARC (International Agency for Research on Cancer) shocked the world in 2015 by classifying glyphosate as a potential carcinogen. It’s the IARC assessment that underpins the multimillion-dollar mass litigation against Monsanto.

So who’s right? The various government agencies in the U.S. and Europe that see no problem in continuing to use Roundup, or the WHO (World Health Organization)’s IARC? Both camps maintain that the scientific evidence is on their side.

The dispute is an all-too-common example of how politics is invading science. As reported by Reuters last year, the IARC made significant changes between the original draft of its 2015 monograph on glyphosate and the published version. Although it’s not unusual for a final agency report to differ from the draft, what stands out in this case is that the principal changes were the deletion of all statements and findings – and there were many – contrary to the IARC’s ultimate conclusion that glyphosate probably causes cancer.

The agency refuses to say who made the changes or why. If such secrecy alone were not enough to arouse suspicion, Reuters found 10 significant alterations to the draft chapter on animal studies – the very chapter that in the final report provided “sufficient evidence” that glyphosate causes cancer in animals. The draft chapter had reported the conclusions of multiple studies finding no link at all between glyphosate and cancer in laboratory animals. But the final report concluded exactly the opposite.

That the changes to the IARC report were politically rather than scientifically motivated is reinforced by the EPA’s finding that glyphosate poses no carcinogenic risk to humans. This conclusion was also reached by the EFSA, the ECHA and the UN’s FAO (Food and Agriculture Organization) – despite the EFSA and ECHA, like other European agencies, being more conservative and pro-environment than their U.S. counterparts. Not surprisingly, the environmentally activist organization Greenpeace has called the EFSA report faulting the IARC declaration “a whitewash.”

By far the greatest amount of scientific data on the possible human carcinogenicity of glyphosate comes from a 2005 study of 85,279 American farmers and their spouses. The so-called AHS (Agricultural Health Study) included epidemiological, animal carcinogenicity, and genotoxicity investigations to elucidate the carcinogenic potential of Roundup products.

Both the EPA and the IARC utilized the AHS data in their recent, conflicting evaluations of glyphosate. However, the IARC monograph also included a number of “low-quality” studies that the EPA elected to omit. Although this may seem arbitrary on the EPA’s part, the studies omitted by the EPA lacked, for example, information on glyphosate exposure of individual subjects. Such shortcomings are important in light of the current reproducibility crisis in the biomedical sciences: up to 90% of published findings in some areas of biomedicine can’t be replicated. The inclusion of scientifically inferior data by the IARC strongly suggests political interference in the scientific process.

Of six specific studies that investigated the association between glyphosate exposure and non-Hodgkin’s lymphoma – the possible linkage at issue in the mass lawsuits – the EPA stated that “a conclusion … cannot be determined based on the available data.” Nevertheless, for cancer overall, the EPA report found the strongest support for an assessment of glyphosate exposure as “not likely to be carcinogenic to humans,” which is the weakest of five EPA risk classifications on cancer.

Meanwhile, the jury in the first trial of the Roundup litigation ruled on August 10 that Monsanto’s weedkiller was a substantial contributing factor in causing non-Hodgkin’s lymphoma and ordered the company to pay $289 million in damages, a figure since reduced to $78 million on appeal. With further trials scheduled in 2019, only time will tell who is right about the science.

Next week: When No Evidence is Evidence: GMO Food Safety

No Evidence That Aluminum in Vaccines is Harmful

Part of the anti-vaccinationist stance against immunization is the belief that vaccines contain harmful chemicals such as aluminum, formaldehyde and thimerosal. Although mercury-based thimerosal is no longer used in any U.S. vaccines except certain flu shots, and the amount of formaldehyde is a tiny fraction of that found in many foods – including those fed to babies such as pureed bananas or pears – aluminum remains a villain among the anti-vax crowd. But, as with the discredited link between the measles vaccine and autism discussed in a previous post, no medical evidence exists to support the aluminum hypothesis.


Aluminum salts are employed as powerful adjuvants to enhance the immune system’s response to a vaccine, thus reducing the number of repeat injections needed. What anti-vaccinationists fail to understand, however, is that less than 1% of the aluminum in vaccines is actually absorbed by the body. The same is true of the aluminum found in our food supply, in drinking water and even in the air we breathe, as well as in the breast milk or infant formula ingested by babies. The daily quantity of aluminum absorbed by a vaccinated newborn is roughly one tenth of the FDA’s threshold for neurotoxicity.
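To make that tenfold margin concrete, here’s a back-of-envelope sketch in Python. All the inputs are illustrative assumptions for the sketch – a nominal 250 µg aluminum dose, a 1%-per-day release rate from the injection site, a 4.2 kg newborn, and a 5 µg/kg/day safety threshold – not figures taken from the FDA or from any specific vaccine:

```python
# Back-of-envelope comparison of aluminum absorbed daily from a vaccine
# versus a daily safety threshold. All values are illustrative assumptions.

dose_ug = 250.0            # assumed aluminum adjuvant per dose, micrograms
release_per_day = 0.01     # assumed fraction released into blood per day
body_mass_kg = 4.2         # assumed newborn body mass, kilograms
threshold_ug_per_kg = 5.0  # assumed safety threshold, micrograms/kg/day

daily_absorbed = dose_ug * release_per_day            # ug/day entering blood
daily_threshold = threshold_ug_per_kg * body_mass_kg  # ug/day threshold

ratio = daily_threshold / daily_absorbed
print(f"absorbed: {daily_absorbed:.1f} ug/day, "
      f"threshold: {daily_threshold:.1f} ug/day, "
      f"margin: {ratio:.1f}x")
```

Under these assumed inputs the absorbed dose comes out close to an order of magnitude below the threshold, which is the shape of the argument in the paragraph above; plugging in figures for a particular vaccine would of course change the exact margin.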

Because it has been suggested that aluminum could be linked to certain neurological disorders such as Alzheimer’s Disease, anti-vaccinationists maintain that injected aluminum rapidly enters the bloodstream and thereby accumulates in the brain, causing neurological damage – reminiscent of Wakefield’s fraudulent claim about autism. There are several scientific fallacies underlying this assertion, however.

The first fallacy is that injected aluminum enters the bloodstream more rapidly than ingested aluminum. In fact, most of the aluminum adjuvant in a vaccine remains near the injection site for a long period and is absorbed only slowly into the bloodstream, at approximately the same daily rate as ingested aluminum. And although the amount of aluminum we ingest is much larger, only a small fraction of it is absorbed in the intestines, from which it too is released slowly into the blood.

Another fallacy involves the neurotoxicity of aluminum in the brain. Although aluminum and other chemicals can enter the brain from the bloodstream, they first have to penetrate a protective semipermeable membrane that separates flowing blood from brain tissue, known as the blood-brain barrier. The blood-brain barrier normally keeps circulating pathogens and toxins out of the brain, while allowing the passage of water, nutrients and hormones.

The flawed claim is that injected aluminum sneaks its way into the brain by hiding in macrophages – a type of first-responder white blood cell that devours germs, cellular debris and foreign particles, and plays an important role in the body’s immune system. Unable to digest metals, say anti-vaccinationists, the aluminum-loaded macrophages travel to the brain via the blood or lymphatic system. If the brain is already inflamed, the macrophages can cross the blood-brain barrier and unload aluminum inside the brain. The aluminum supposedly causes further inflammation, leading to autism and other neurological disorders.

But none of this makes sense to many doctors and scientists who work in immunology or neuroscience. A neurovascular biologist who’s an expert on the blood-brain barrier faults the science in several of the papers behind the Trojan-horse macrophage hypothesis. And he calls out the claim in one paper that macrophages digest injected aluminum as “not only exaggerated … but also provocative and fraudulent,” though this criticism was later accepted as justified on a major anti-vaccinationist website.

The website, whose authors prefer to remain anonymous, claims to be science-based and guided only by scientific evidence – the same emphasis as the present blog. The site also attempts to defend itself against charges of cherry-picking research papers that support its position that aluminum adjuvants cause autism. But it merely lists the abstracts of a handful of the many papers backing the emerging consensus that autism is caused by maternal exposure to infections or toxins during pregnancy.

In any case, even if the aluminum hypothesis were correct, the fact that the amount of aluminum absorbed from vaccines is comparable to the amount absorbed from aluminum ingested by the body means that macrophages could sweep up swallowed aluminum just as easily as injected aluminum. There’s no good evidence that either occurs, although the mechanisms by which adjuvants act are still not fully understood. 

Hat tip: Mike @realiwasframed

Next week: Politics Clashes with Science over Glyphosate and Cancer

Belief in Catastrophic Climate Change as Misguided as Eugenics was 100 Years Ago

Last week’s landmark report by the UN’s IPCC (Intergovernmental Panel on Climate Change), which claims that global temperatures will reach catastrophic levels unless we take drastic measures to curtail climate change by 2030, is as misguided as eugenics was 100 years ago. Eugenics was a shameful but now little-known episode of the early 20th century, characterized by the sterilization of hundreds of thousands of people considered genetically inferior – especially the mentally ill, the physically handicapped, minorities and the poor.

Although ill-conceived and even falsified as a scientific theory in 1917, eugenics became a mainstream belief with an enormous worldwide following that included not only scientists and academics, but also politicians of all parties, clergymen and luminaries such as U.S. President Teddy Roosevelt and famed playwright George Bernard Shaw. In the U.S., where the eugenics movement was generously funded by organizations such as the Rockefeller Foundation, a total of 27 states had passed compulsory sterilization laws by 1935 – as had many European countries.

Eugenics only fell into disrepute with the discovery after World War II of the horrors perpetrated by the Nazi regime in Germany, including the holocaust as well as more than 400,000 people sterilized against their will. The subsequent global recognition of human rights declared eugenics to be a crime against humanity.

The so-called science of catastrophic climate change is equally misguided. Whereas modern eugenics stemmed from misinterpretation of Mendel’s genetics and Darwin’s theory of evolution, the notion of impending climate disaster results from misrepresentation of the actual empirical evidence for a substantial human contribution to global warming, which is shaky at best.

Instead of the horrors of eugenics, the narrative of catastrophic anthropogenic (human-caused) global warming conjures up the imaginary horrors of a world too hot to live in. The new IPCC report paints a grim picture of searing yearly heatwaves, food shortages and coastal flooding that will displace 50 million people, unless draconian action is initiated soon to curb emissions of greenhouse gases from the burning of fossil fuels. Above all, insists the IPCC, an unprecedented transformation of the world’s economy is urgently needed to avoid the most serious damage from climate change. 

But such talk is utter nonsense. First, the belief that we know enough about climate to control the earth’s thermostat is preposterously unscientific. Climate science is still in its infancy and, despite all our spectacular advances in science and technology, we still have only a rudimentary scientific understanding of climate. The very idea that we can regulate the global temperature to within 0.9 degrees Fahrenheit (0.5 degrees Celsius) through our own actions is absurd.

Second, the whole political narrative about greenhouse gases and dangerous anthropogenic warming depends on faulty computer climate models that were unable to predict the recent slowdown in global warming, among other failings. The models are based on theoretical assumptions; science, however, takes its cue from observational evidence. To pretend that current computer models represent the real world is sheer arrogance on our part.

And third, the empirical climate data that is available has been exaggerated and manipulated by activist climate scientists. The land warming rates from 1975 to 2015 calculated by NOAA (the U.S. National Oceanic and Atmospheric Administration) are distinctly higher than those calculated by the other two principal guardians of the world’s temperature data. Critics have accused the agency of exaggerating global warming by excessively cooling the past and warming the present, suggesting politically motivated efforts to generate data in support of catastrophic human-caused warming.

Exaggeration also shows up in the setting of new records for the “hottest year ever” – declarations deliberately designed to raise alarm. But when the global temperature is creeping upwards at a rate of only a few hundredths of a degree per decade, the establishment of new records is unsurprising. If the previous record was set within the last 10 or 20 years, a temperature only a few hundredths of a degree above the old record is enough to set a new one.
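The statistics behind this point can be illustrated with a short simulation. The parameters here – a 0.005 °C/year trend, 0.1 °C of year-to-year noise, a 100-year span – are assumptions chosen for the sketch, not measured data; the point is simply that once any upward trend is added to noisy data, every record that the noise alone would have set still stands, so records can only become more frequent:

```python
import random

# Illustrative simulation: record-setting years in a noisy temperature
# series, with and without a slow warming trend. All parameters are
# assumptions for the sketch, not observational values.
random.seed(42)

years = 100
trend = 0.005   # assumed trend in degrees C per year
noise = [random.gauss(0.0, 0.1) for _ in range(years)]  # assumed variability

def count_records(series):
    """Count years whose value exceeds every previous year's value."""
    records, best = 0, float("-inf")
    for value in series:
        if value > best:
            records, best = records + 1, value
    return records

flat = count_records(noise)
trended = count_records([x + trend * year for year, x in enumerate(noise)])

# Any record in the flat series remains a record after an increasing
# trend is added, so the trended series can only set more records.
print(f"records without trend: {flat}, with trend: {trended}")
```

Regardless of the random seed, the trended count is never smaller than the flat count, which is why a slowly warming series keeps producing “hottest year” headlines even when each new record exceeds the old one by only hundredths of a degree.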

Eugenics too was rooted in unjustified human hubris, false science, and exaggeration in its methodology. Just like eugenics, belief in apocalyptic climate change and in the dire prognostications of the IPCC will one day be abandoned also.

Next week: No Evidence That Aluminum in Vaccines is Harmful

Measles or Autism? False Choice, Says Science

Perhaps nowhere is the attack on science more visible than in the opposition to vaccination against infectious diseases such as hepatitis, polio and measles. To anti-vaxxers, immunizing a child with the measles vaccine is a choice between sentencing him or her to the lifelong misery of autism, or exposing the child to possible aftereffects of a disease that the youngster may never contract. This view, passionately held by a substantial minority of the population, is completely at odds with the logic and evidence of science.


Despite the insistence of anti-vaccinationists to the contrary, there’s absolutely no scientific evidence of any linkage between vaccines and autism. The myth connecting them was first suggested by U.S. activist Barbara Loe Fisher in the 1980s. It gained steam when British gastroenterologist Andrew Wakefield claimed in a 1998 study that 8 out of 12 children in the study had developed autism symptoms following injection of the combination measles-mumps-rubella (MMR) vaccine.

But Wakefield’s paper in the prestigious medical journal The Lancet was slowly discredited until the journal fully retracted it in 2010; the following year, an investigation published in the BMJ declared the paper fraudulent, concluding that Wakefield had falsified his data. Die-hard anti-vaccinationists refused to accept this conclusion, despite Wakefield’s medical license being revoked by the UK General Medical Council, which found that his fraud was compounded by ethical lapses and medical misconduct in the same study.

The autism episode generated worldwide publicity and led to thousands of court cases in a special U.S. Vaccine Court set up as part of the National Vaccine Injury Compensation Program. To cope with the enormous caseload, the court assigned three special masters to hear just three test cases on each of two theories: that autism was caused by the MMR vaccine together with a mercury-based preservative known as thimerosal, or that it was caused by thimerosal-containing vaccines alone.

In 2009 and 2010, the special masters unanimously rejected both contentions. But they emphasized that their decisions had been guided only by scientific evidence, not by the poignant stories of autistic children. One of the masters declared in her analysis:

“Sadly, the petitioners in this litigation have been the victims of bad science, conducted to support litigation rather than to advance medical and scientific understanding of autism spectrum disorder. The evidence in support of petitioners’ causal theory is weak, contradictory, and unpersuasive.”

Yet, despite the Vaccine Court’s findings in the U.S. and The Lancet’s accusation of fraud against Wakefield in the UK, anti-vaccinationists continue to connect the MMR vaccine to autism.  In 2016, Wakefield directed a documentary, “Vaxxed,” alleging that the U.S. Centers for Disease Control and Prevention (CDC) covered up contrary data in a 2004 study that drew the same conclusions as the Vaccine Court and numerous epidemiological studies.  His allegations were baseless, however, as the 2014 research paper behind his outrageous claim was subsequently retracted.

According to CDC statistics, autism spectrum disorder afflicted 1 in 59 U.S. children in 2014. Diagnosis of the condition can be devastating and highly stressful for the desperate parents of an autistic child, who naturally tend to grasp for explanations and are often quite willing to believe the hype about vaccination.  Currently, the causes of autism remain unknown, although several risk factors have been identified: certain genetic conditions have been implicated, and it’s thought that exposure during pregnancy to toxic chemicals such as pesticides, or to bacterial or viral infections, plays a role.

While there’s no medical evidence tying autism to vaccines, it’s also true that serious adverse reactions to a vaccine shot do occur occasionally – typically about once in every one million vaccinations. Negative and occasionally fatal reactions to various vaccines have been documented in approximately 400 research papers. But those documented reactions need to be weighed against the hundreds of millions of vaccine doses administered every year in the U.S. without any side effects serious enough to study.

And the odds of suffering an adverse reaction have to be compared with the risk of contracting the disease itself. One in 1,000 children who get the measles, for instance, will end up with encephalitis, which can have devastating aftereffects such as seizures and intellectual disability; some children still die from measles, often after developing pneumonia. It’s far less dangerous to give a child an MMR shot than to risk exposing the child to a disease as contagious and potentially deadly as measles.
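The comparison can be made concrete with a little arithmetic, using the figures quoted above – one serious vaccine reaction per million doses and one encephalitis case per 1,000 measles infections – plus one assumed input: the chance that an unvaccinated child catches measles at all, which is a placeholder rather than a measured value:

```python
# Risk comparison using the figures quoted in the text, plus one
# assumed input: the chance an unvaccinated child catches measles.

p_vaccine_reaction = 1 / 1_000_000   # serious reaction per vaccination
p_encephalitis = 1 / 1_000           # encephalitis per measles case
p_catch_measles = 0.01               # assumed infection probability (placeholder)

risk_vaccinated = p_vaccine_reaction
risk_unvaccinated = p_catch_measles * p_encephalitis

ratio = risk_unvaccinated / risk_vaccinated
print(f"risk ratio (unvaccinated / vaccinated): {ratio:.0f}x")
```

Even with only a 1% assumed chance of infection, the encephalitis risk alone comes out an order of magnitude larger than the vaccine-reaction risk, and a higher infection probability – plausible for a disease as contagious as measles – widens the gap proportionally.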

Solar Science Shortchanged in Climate Models

The sun gets short shrift in the computer climate models used to buttress the mainstream view of anthropogenic (human-caused) global warming. That’s because the climate change narrative, which links warming almost entirely to our emissions of greenhouse gases, trivializes the contributions to global warming from all other sources. According to its Fifth Assessment Report, the IPCC attributes no more than a few percent of total global warming to the sun’s influence.

That may be the narrative, but it’s not one universally endorsed by solar scientists. Although some, such as solar physicist Mike Lockwood, adhere to the conventional wisdom on CO2, others, such as mathematical physicist Nicola Scafetta, think instead that the sun has an appreciable impact on the earth’s climate. In disputing the conventional wisdom, Scafetta points to our poor understanding of indirect solar effects as opposed to the direct effect of the sun’s radiation, and to analytical models of the sun that oversimplify its behavior. Furthermore, a lack of detailed historical data prior to the recent era of satellite observations casts doubt on the accuracy and reliability of the IPCC estimates.

I’ve long felt sorry for solar scientists, whose once highly respectable field of research before climate became an issue has been marginalized by the majority of climate scientists. And solar scientists who are climate change skeptics have had to endure not only loss of prestige, but also difficulty in obtaining research funding because their work doesn’t support the consensus on global warming. But it appears that the tide may be turning at last.

Judging from recent scientific publications, the number of papers affirming a strong sun-climate link is on the rise. Compared with the 93 papers examining such a link published in all of 2014, almost as many appeared in the first half of 2017 alone. The 2017 number represents about 7% of all research papers in solar science over the same period (Figure 1 here) and about 16% of all papers on computer climate models during that time (Figure 4 here).


This rising tide of papers linking the sun to climate change may be why UK climate scientists in 2015 attempted to silence the researcher who led a team predicting a slowdown in solar activity after 2020. Northumbria University’s Valentina Zharkova had dared to propose that the average monthly number of sunspots will soon drop to nearly zero, based on a model in which a drastic falloff is expected in the sun’s magnetic field. Other solar researchers have made the same prediction using different approaches.

Sunspots are small dark blotches caused by intense magnetic turbulence on the sun’s surface. Together with the sun’s heat and light, the number of sunspots goes up and down during the approximately 11-year solar cycle. But the maximum number of sunspots seen in a cycle has recently been declining. The last time they disappeared altogether was during the so-called Maunder Minimum, a 70-year cool period in the 17th and 18th centuries forming part of the Little Ice Age.

While Zharkova’s research paper actually said nothing about climate, climate scientists quickly latched onto the implication that a period of global cooling might be ahead and demanded that the Royal Astronomical Society – at whose meeting she had originally presented her findings – withdraw her press release. Fortunately, the Society refused to accede to this attack on science at the time, although the press release has since been removed from the Web. Just last month, Zharkova’s group refuted criticisms of its methodology by another prominent solar scientist.

Apart from such direct effects, indirect solar effects due to the sun’s ultraviolet (UV) radiation or cosmic rays from deep space could also contribute to global warming. In both cases, some sort of feedback mechanism would be needed to amplify what would otherwise be tiny perturbations to global temperatures. However, what’s not generally well known is that the warming predicted by computer climate models comes from assumed water vapor amplification of the modest temperature increase caused by CO2 acting alone. Speculative candidates for amplification of solar warming involve changes in cloud cover as well as the earth’s ozone layer.
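The amplification idea can be expressed with the standard linear-feedback relation ΔT = ΔT₀ / (1 − f), where ΔT₀ is the warming with no feedbacks and f is the net feedback factor. The numbers below are illustrative assumptions – a commonly cited ΔT₀ of about 1.1 °C for a CO2 doubling, and a spread of hypothetical f values – not output from any climate model:

```python
# Linear feedback amplification: delta_T = delta_T0 / (1 - f).
# Both delta_T0 and the feedback factors are illustrative assumptions.

delta_T0 = 1.1  # assumed no-feedback warming for doubled CO2, degrees C

for f in (0.0, 0.3, 0.5, 0.65):
    delta_T = delta_T0 / (1.0 - f)
    print(f"feedback factor {f:.2f} -> warming {delta_T:.1f} C")
```

The sketch shows how sensitive the headline warming number is to the assumed feedback factor: moving f from 0 to 0.65 roughly triples the projected warming, which is why the choice of amplification assumptions dominates model projections.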

(Comments on this post can be found at the Ice Age Now blog, which has kindly reproduced excerpts from the post.)

Next week: Measles or Autism? False Choice, Says Science

Evidence Lacking for Major Human Role in Climate Change

Conventional scientific wisdom holds that global warming and consequent changes in the climate are primarily our own doing. But what few people realize is that the actual scientific evidence for a substantial human contribution to climate change is flimsy. It requires highly questionable computer climate models to make the connection between global warming and human emissions of carbon dioxide (CO2).

The multiple lines of evidence which do exist are simply evidence that the world is warming, not proof that the warming comes predominantly from human activity. The supposed proof relies entirely on computer models that attempt to simulate the earth’s highly complex climate, and include greenhouse gases as well as aerosols from both volcanic and man-made sources – but almost totally ignore natural variability.

So it shouldn’t be surprising that the models have a dismal track record in predicting the future. Most spectacularly, the models failed to predict the recent pause or hiatus in global warming from the late 1990s to about 2014. During this period, the warming rate dropped to only a third to a half of the rate measured from the early 1970s to 1998, while at the same time CO2 kept spewing into the atmosphere. Out of 32 climate models, only a lone Russian model came anywhere close to the actual observations.


Not only do the models overestimate the warming rate by a factor of two or three, they also predict a hot spot in the upper atmosphere that isn’t there, and fail to accurately reproduce sea level rise.

Yet it’s these same failed models that underpin the whole case for catastrophic consequences of man-made climate change, a case embodied in the 2015 Paris Agreement. The international agreement on reducing greenhouse gas emissions – which 195 nations, together with many of the world’s scientific societies and national academies, have signed on to – is based not on empirical evidence, but on artificial computer models. Only the models link climate change to human activity. The empirical evidence does not.

Proponents of human-caused global warming, including a majority of climate scientists, insist that the boost to global temperatures of about 1.6 degrees Fahrenheit (0.9 degrees Celsius) since 1850 comes almost exclusively from the steady increase in the atmospheric CO2 level. They argue that elevated CO2 must be the cause of nearly all the warming because the sole major change in climate “forcing” over this period has been from CO2 produced by human activities – mainly the burning of fossil fuels as well as deforestation.
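As a rough check on what CO2 acting alone implies, the standard logarithmic forcing formula ΔF ≈ 5.35 ln(C/C₀) W/m² can be combined with a no-feedback sensitivity of roughly 0.3 °C per W/m². The concentrations below (285 ppm for 1850, 410 ppm for the present) are approximate round numbers, and excluding feedbacks is itself an assumption of the sketch:

```python
import math

# No-feedback warming implied by the CO2 rise since 1850.
# Concentrations are approximate; feedbacks are deliberately excluded.

c0 = 285.0  # approximate CO2 concentration in 1850, ppm
c1 = 410.0  # approximate present-day CO2 concentration, ppm

delta_F = 5.35 * math.log(c1 / c0)  # radiative forcing, W/m^2
no_feedback_sensitivity = 0.3       # degrees C per W/m^2 (approximate)

delta_T = delta_F * no_feedback_sensitivity
print(f"forcing: {delta_F:.2f} W/m^2, no-feedback warming: {delta_T:.2f} C")
```

The roughly 0.6 °C this yields, set against the observed 0.9 °C, is the gap that the models fill with amplifying feedbacks – which is why the argument over attribution is really an argument over the models rather than over the forcing formula itself.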

But correlation is not causation, as is well known from statistics or the public health field of epidemiology. So believers in the narrative of catastrophic anthropogenic (human-caused) climate change fall back on computer models to shore up their argument. With the climate change narrative trumpeted by political entities such as the UN’s IPCC (Intergovernmental Panel on Climate Change), and amplified by compliant media worldwide, predictions of computer climate models have acquired the status of quasi-religious edicts.

Indeed, anyone disputing the conventional wisdom is labeled a “denier” by advocates of climate change orthodoxy, who claim that global warming skeptics are just as anti-science as those who believe vaccines cause autism. The much ballyhooed war on science typically lumps climate change skeptics together with creationists, anti-vaccinationists and anti-GMO activists. But the climate warmists are the ones on the wrong side of science.

Like their counterparts in the debate over the safety of GMOs, warmists employ fear, hyperbole and heavy-handed political tactics in an attempt to shut down debate. Yet skepticism about the human influence on global warming persists, and may even be growing among the general public. In 2018, a Gallup poll in the U.S. found that 36% of Americans don’t believe that global warming is caused by human activity, while a UK survey showed that a staggering 64% of the British public feel the same way. And the percentage of climate scientists who endorse the mainstream view of a strong human influence is nowhere near the widely believed 97%, although it’s probably above 50%.

Most skeptical scientists, myself included, accept that global warming is real, but not that it’s entirely man-made or that it’s dangerous. The observations alone aren’t evidence for a major human role. Such lack of regard for the importance of empirical evidence, and misguided faith in the power of deficient computer climate models, are abuses of science.

(Another 189 comments on this post can be found at the What's Up With That blog and the NoTricksZone blog, which have kindly reproduced the whole post.)