No Evidence That Heat Kills More People than Cold

The irony in the recent frenzy over heat waves is that many more humans die each year from cold than they do from heat. But you wouldn’t know that from sensational media headlines reporting “killer” heat waves and conditions “as hot as hell.” In reality, cold weather worldwide kills 17 times as many people as heat.

This conclusion was reached by a major international study in 2015, published in the prestigious medical journal The Lancet. The study analyzed more than 74 million deaths in 384 locations across 13 countries including Australia, China, Italy, Sweden, the UK and USA, over the period from 1985 to 2012. The results are illustrated in the figure below, showing the average daily rate of premature deaths from heat or cold as a percentage of all deaths, by country.

[Figure: Heat vs. cold deaths worldwide, by country]

Perhaps not surprisingly, moderate cold kills people far more often than extreme cold, for a wide range of different climates. Extreme cold was defined by the study authors as temperatures falling below the 2.5th percentile at each location, a limit which varied from as low as -11 degrees Celsius (12 degrees Fahrenheit) in Toronto, Canada to as high as 25 degrees Celsius (77 degrees Fahrenheit) in Bangkok, Thailand. Moderate cold includes all temperatures from this lower limit up to the so-called optimum, the temperature at which the daily death rate at that location is a minimum.

Likewise, extreme heat was defined as temperatures above the 97.5th percentile at each location, and moderate heat as temperatures from the optimum up to the 97.5th percentile. But unlike cold, extreme and moderate heat cause approximately equal numbers of excess deaths.
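These percentile-based definitions are easy to reproduce. The sketch below carves out the study’s four temperature bands for a single hypothetical location, using made-up daily temperatures; the “optimum” is simply assumed here, whereas the study estimated it per location from the temperature-mortality curve.

```python
import random

random.seed(42)
# Hypothetical daily mean temperatures (°C) for one location over ten years.
temps = [random.gauss(12, 8) for _ in range(3650)]

def percentile(data, p):
    """Nearest-rank percentile; good enough for a sketch."""
    s = sorted(data)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

cold_cutoff = percentile(temps, 2.5)    # below this: "extreme cold"
heat_cutoff = percentile(temps, 97.5)   # above this: "extreme heat"

# The "optimum" (minimum-mortality temperature) is assumed here; the study
# estimated it per location from mortality data.
optimum = 19.0

def band(t):
    if t < cold_cutoff:
        return "extreme cold"
    if t < optimum:
        return "moderate cold"
    if t <= heat_cutoff:
        return "moderate heat"
    return "extreme heat"

print(f"extreme-cold cutoff: {cold_cutoff:.1f} °C, extreme-heat cutoff: {heat_cutoff:.1f} °C")
```

Note how asymmetric the bands are: “moderate cold” spans everything from the 2.5th percentile up to the optimum, a far wider range than “moderate heat,” which is one reason moderate cold accounts for so many deaths.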

The study found that on average, 7.71% of all deaths could be attributed to hot or cold – to temperatures above or below the optimum – with 7.29% being due to cold but only 0.42% due to heat. That single result gives the lie to the popular belief that heat waves are deadlier than cold spells: hypothermia kills far more of us than heat stroke. And though both high and low temperatures can increase the risk of exacerbating cardiovascular, respiratory and other conditions, it’s cold that is the big killer.
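The headline “17 times” ratio follows directly from these two attributable fractions, as a quick check shows (figures as quoted from the Lancet study):

```python
# Attributable fractions from the 2015 Lancet study, as quoted above.
cold_fraction = 7.29   # % of all deaths attributable to cold
heat_fraction = 0.42   # % of all deaths attributable to heat

total = cold_fraction + heat_fraction   # 7.71% attributable to non-optimum temperatures
ratio = cold_fraction / heat_fraction   # ~17.4 - the "17 times" figure

print(f"{total:.2f}% of deaths attributable to temperature; cold/heat ratio = {ratio:.1f}")
```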

This finding is further borne out by seasonal mortality statistics. France, for instance, recorded 700 excess deaths attributed to heat in the summer of 2016, 475 in 2017 and 1,500 in 2018. Yet excess deaths from cold in the French winter from December to March average approximately 24,000. Even the devastating summer heat wave of 2003 claimed only 15,000 lives in France.

Similar statistics come from the UK, where an average of 32,000 more deaths occur during each December-to-March period than in any other four-month interval of the year. Flu epidemics boosted this total to 37,000 in the winter of 2016-17, and to 50,000 in 2017-18. Just as in France, these deaths from winter cold far exceed UK summer deaths due to heat, which reached only 860 in 2018 and just 2,200 in the heat-wave year of 2003.

Even more evidence that cold kills a lot more people than heat is seen in an earlier study, published in the BMJ (formerly the British Medical Journal) in 2000. This study, restricted to approximately 3 million deaths in western Europe from 1988 to 1992, found that annual cold-related deaths were much higher than heat-related deaths in all seven regions studied – the former averaging 2,000 per million people and the latter only 220 per million. Additionally, deaths from heat were no more common in the hotter regions studied than in the colder ones.

A sophisticated statistical approach was necessary in both studies. This is because of differences between regions and individuals, and the observation that, while death from heat is typically rapid and occurs within a few days, death from cold can be delayed up to three or four weeks. The larger Lancet study used more advanced statistical modeling than the BMJ study.
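The lag difference is one reason simple same-day correlations understate cold mortality. A toy version of the distributed-lag idea, with entirely invented weights (both studies used far more sophisticated statistical models than this), spreads each cold day’s effect over the following weeks:

```python
# Toy distributed-lag attribution (illustrative only - all numbers invented).
# Hypothetical lag weights: heat acts within days, cold over about four weeks.
heat_lag_weights = [0.6, 0.3, 0.1]    # effect concentrated in days 0-2
cold_lag_weights = [1 / 28] * 28      # effect spread evenly over 28 days

def lagged_effect(exposures, weights):
    """Effect on day t = sum over lags of (weight at lag) x (exposure lag days ago)."""
    effect = [0.0] * len(exposures)
    for t in range(len(exposures)):
        for lag, w in enumerate(weights):
            if t - lag >= 0:
                effect[t] += w * exposures[t - lag]
    return effect

# A single one-day cold snap (exposure = 1 on day 0) keeps contributing
# a little mortality every day for four weeks:
snap = [1.0] + [0.0] * 29
cold_effects = lagged_effect(snap, cold_lag_weights)
heat_effects = lagged_effect(snap, heat_lag_weights)
```

A study that only counted same-day deaths would capture most of the heat effect but almost none of the cold effect, which is why the lag structure has to be modeled explicitly.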

And despite the finding that more than 50% of published papers in biomedicine are not reproducible, the fact that two independent papers reached essentially the same result gives their conclusions some credibility.

Next: No Evidence That Climate Change Is Accelerating Sea Level Rise

No Evidence That Climate Change Causes Weather Extremes: (6) Heat Waves

This Northern Hemisphere summer has seen searing, supposedly record high temperatures in France and elsewhere in Europe. According to the mainstream media and climate alarmists, the heat waves are unprecedented and a harbinger of harsh, scorching hot times to come.

But this is absolute nonsense. In this sixth and final post in the present series, I’ll examine the delusional beliefs that the earth is burning up and may shortly be uninhabitable, and that this is all a result of human-caused climate change. Heat waves are no more linked to climate change than any of the other weather extremes we’ve looked at.

The brouhaha over two almost back-to-back heat waves in western Europe is a case in point. In the second, which occurred toward the end of July, the WMO (World Meteorological Organization) claimed that the mercury in Paris reached a new record high of 42.6 degrees Celsius (108.7 degrees Fahrenheit) on July 25, besting the previous record of 40.4 degrees Celsius (104.7 degrees Fahrenheit) set back in July, 1947. And a month earlier during the first heat wave, temperatures in southern France hit a purported record 46.0 degrees Celsius (114.8 degrees Fahrenheit) on June 28.

How convenient to ignore the past! Reported in Australian and New Zealand newspapers from August, 1930 is an account of an earlier French heat wave, in which the temperature soared to a staggering 50 degrees Celsius (122 degrees Fahrenheit) in the Loire valley, located in central France. That’s a full 4.0 degrees Celsius (7.2 degrees Fahrenheit) above the so-called record just set in southern France – where temperatures in 1930 may well have equaled or exceeded the Loire valley’s towering mark.

And the same newspaper articles reported a temperature in Paris that day of 38 degrees Celsius (100 degrees Fahrenheit), stating that back in 1870 the thermometer had reached an even higher, unspecified level there – quite possibly above the July 2019 “record” of 42.6 degrees Celsius (108.7 degrees Fahrenheit).

The same duplicity can be seen in proclamations about past U.S. temperatures. Although it’s frequently claimed that heat waves are increasing in both intensity and frequency, there’s simply no scientific evidence for such a bold assertion. The following figure charts official data from NOAA (the U.S. National Oceanic and Atmospheric Administration) showing the yearly number of days, averaged over all U.S. temperature stations, from 1895 to 2018 with extreme temperatures above 38 degrees Celsius (100 degrees Fahrenheit) and 41 degrees Celsius (105 degrees Fahrenheit).
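Tallies like NOAA’s are straightforward to reproduce from station records. A minimal sketch, with invented daily highs standing in for real station data:

```python
# Count days per year at or above a temperature threshold (°F). The readings
# below are invented; NOAA's chart averages over all real U.S. stations.
daily_highs = {
    1936: [101, 103, 99, 106, 98, 102],
    2018: [97, 99, 100, 96, 95, 101],
}

def days_above(highs, threshold_f):
    return sum(1 for t in highs if t >= threshold_f)

for year, highs in daily_highs.items():
    print(year, days_above(highs, 100), days_above(highs, 105))
```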


The next figure shows NOAA’s data for the year in which the record high temperature in each U.S. state occurred. Of the 50 state records, a total of 32 were set in the 1930s or earlier, but only seven since 1990.

[Figure: Year of record high temperature for each U.S. state]

It’s obvious from these two figures that there were more U.S. heat waves in the 1930s, and they were hotter, than in the present era of climate hysteria. Indeed, the annual number of days on which U.S. temperatures reached 100 degrees, 95 degrees or 90 degrees Fahrenheit has been steadily falling since the 1930s. The Heat Wave Index compiled by the EPA (Environmental Protection Agency) for the 48 contiguous states also shows clearly that the 1930s were the hottest decade.

Globally, it’s exactly the same story, as depicted in the figure below.

[Figure: World record high temperatures by continent]

Of the seven continents, six recorded their all-time record high temperatures before 1982, three of those records dating from the 1930s or earlier; only Asia has set its record more recently (and the WMO has never acknowledged the 122 degrees Fahrenheit 1930 reading in the Loire region). And yet the worldwide baking of the 1930s didn’t set the stage for more and worse heat waves in the years ahead, even as CO2 kept pouring into the atmosphere – the scenario we’re told, erroneously, that we face today. In fact, the sweltering 1930s were followed by global cooling from 1940 to 1970.

Contrary to the climate change narrative, the recent European heat waves came about not because of global warming, but rather a weather phenomenon known as jet stream blocking. Blocking results from an entirely different mechanism than the buildup of atmospheric CO2, namely a weakening of the sun’s output that may portend a period of global cooling ahead. A less active sun generates less UV radiation, which in turn perturbs winds in the upper atmosphere, locking the jet stream in a holding or blocking pattern. In this case, blocking kept a surge of hot Sahara air in place over Europe for extended periods.

It should be clear from all the evidence presented above that mass hysteria over heat waves and climate change is completely unwarranted. Current heat waves have as little to do with global warming as floods, droughts, hurricanes, tornadoes and wildfires.

Next: No Evidence That Heat Kills More People than Cold

No Evidence That Climate Change Causes Weather Extremes: (5) Wildfires

Probably the most fearsome of the weather extremes commonly blamed on human-caused climate change are tornadoes – the previous topic in this series – and wildfires. Both can arrive with little or no warning, making it difficult or impossible to flee; both are often deadly and typically destroy hundreds of homes and other structures. But just like tornadoes, there is no scientific evidence that the frequency or severity of wildfires is on the rise in a warming world.

You wouldn’t know that, however, from the mass hysteria generated by the mainstream media and climate activists almost every time a wildfire breaks out, especially in naturally dry climates such as those in California, Australia or Spain. While it’s true that the number of acres burned annually in the U.S. has gone up over the last 20 years or so, the present burned area is still only a small fraction of what it was back in the record 1930s, as seen in the figure below, showing data compiled by the U.S. National Interagency Fire Center.

[Figure: U.S. acres burned by wildfires, 1926-2017 (National Interagency Fire Center)]

Because modern global warming was barely underway in the 1930s, climate change clearly has nothing to do with the incineration of U.S. forests. Exactly the same trend is apparent in the next figure, which depicts the estimated area worldwide burned by wildfires, by decade from 1900 to 2010. Clearly, wildfires have diminished globally as the planet has warmed.

[Figure: Global area burned by wildfires, by decade, 1900-2010]

In the Mediterranean, although the annual number of wildfires has more than doubled since 1980, the burned area over three decades has mimicked the global trend and declined:

[Figure: Mediterranean wildfire occurrence and burnt area, 1980-2010]

The contrast between the Mediterranean and the U.S., where wildfires are becoming fewer but larger in area, has been attributed to different forest management policies on the two sides of the Atlantic – despite the protestations of U.S. politicians and firefighting officials in western states that climate change is responsible for the uptick in fire size. The next figure illustrates the timeline from 1600 onwards of fire occurrence at more than 800 different sites in western North America. 

[Figure: Wildfire occurrence at more than 800 sites in western North America, from 1600]

The sudden drop in wildfire occurrence around 1880 has been ascribed to the expansion of American livestock grazing in order to feed a rapidly growing population. Intensive sheep and cattle grazing after that time consumed most of the grasses that previously constituted the fuel for wildfires. This depletion of fuel, together with the firebreaks created by the constant movement of herds back and forth to water sources, and by the arrival of railroads, drastically reduced the incidence of wildfires. And once mechanical equipment for firefighting such as fire engines and aircraft became available in the 20th century, more and more emphasis was placed on wildfire prevention.

But wildfire suppression in the U.S. has led to considerable increases in forest density and the buildup of undergrowth, both of which greatly enhance the potential for bigger and sometimes hotter fires – the latter characterized by a growing number of terrifying, superhot “firenadoes” or fire whirls occasionally observed in today’s wildfires.

Intentional burning, long used by native tribes and early settlers and even advocated by some environmentalists who point out that fire is in fact a natural part of forest ecology as seen in the preceding figure, has become a thing of the past. Only now, after several devastating wildfires in California, is the idea of controlled burning being revived in the U.S. In Europe, on the other hand, prescribed burning has been supported by land managers for many years.

Global warming, combined with overgrowth, does play a role by drying out vegetation and forests more rapidly than before. But there’s no evidence at all for the notion peddled by the media that climate change has amplified the impact of fires on the ecosystem, known technically as fire severity. Indeed, at least 10 published studies of forest fires in the western U.S. have found no recent trend toward increasing fire severity.

You may think that the ever-rising level of CO2 in the atmosphere would exacerbate wildfire risk, since CO2 promotes plant growth. But at the same time, higher CO2 levels reduce plant transpiration, meaning that plants’ stomata or breathing pores open less, the leaves lose less water and more moisture is retained in the soil. Increased soil moisture has led to a worldwide greening of the planet.

In summary, the mistaken belief that the “new normal” of devastating wildfires around the globe is a result of climate change is not supported by the evidence. Humans, nevertheless, are the primary reason that wildfires have become larger and more destructive today. Population growth has caused more people to build in fire-prone areas, where fires are frequently sparked by an aging network of power lines and other electrical equipment. Coupled with poor forest management, this constitutes a recipe for disaster.

Next: No Evidence That Climate Change Causes Weather Extremes: (6) Heat Waves

No Evidence That Climate Change Causes Weather Extremes: (4) Tornadoes


Tornadoes are smaller and claim fewer lives than hurricanes. But the roaring twisters can be more terrifying because of their rapid formation and their ability to hurl objects such as cars, structural debris, animals and even people through the air. Nonetheless, the narrative that climate change is producing stronger and more deadly tornadoes is as fallacious as the nonexistent links between climate change and other weather extremes previously examined in this series.

Again, the UN’s IPCC (Intergovernmental Panel on Climate Change), whose assessment reports constitute the bible for the climate science community, has dismissed any connection between global warming and tornadoes. While the panel concedes that escalating temperatures and humidity may create atmospheric instability conducive to tornadoes, it also points out that other factors governing tornado formation, such as wind shear, diminish in a warming climate. In fact, declares the IPCC, the apparent increasing trend in tornadoes simply reflects increased reporting by the larger number of people now living in once-remote areas.

A tornado is a rapidly rotating column of air, usually visible as a funnel cloud, that extends like a dagger from a parent thunderstorm to the ground. Demolishing homes and buildings in its often narrow path, it can travel many kilometers before dissipating. The most violent EF5 tornadoes attain wind speeds up to 480 km per hour (300 mph).

The U.S. endures by far the most tornadoes of any country, mostly in so-called Tornado Alley extending northward from central Texas through the Plains states. The annual incidence of all U.S. tornadoes from 1954 to 2017 is shown in the figure below. It’s obvious that no trend exists over a period that included both cooling and warming spells, with net global warming of approximately 0.7 degrees Celsius (1.3 degrees Fahrenheit) during that time.

[Figure: Annual U.S. tornadoes (NOAA), 1954-2017]

But, as an illustration of how U.S. tornado activity can vary drastically from year to year, 13 successive days of tornado outbreaks in 2019 saw well over 400 tornadoes touch down in May, with June a close second – and this following seven quiet years ending in 2018, which was the quietest year in the entire record since 1954. The tornado surge, however, had nothing to do with climate change, but rather an unusually cold winter and spring in the West that, combined with heat from the Southeast and late rains, provided the ingredients for severe thunderstorms. 

The next figure depicts the number of strong (EF3 or greater) tornadoes observed in the U.S. each year during the same period from 1954 to 2017. Clearly, the trend is downward instead of upward; the average number of strong tornadoes annually from 1986 to 2017 was 40% less than from 1954 to 1985. Once more, global warming cannot have played a role. 

[Figure: Annual strong (EF3 or greater) U.S. tornadoes (NOAA), 1954-2017]

In the U.S., tornadoes cause about 80 deaths and more than 1,500 injuries per year. The deadliest single-day episode of all time was the “Tri-State” outbreak of 1925, which killed 747 people and caused the most damage of any tornado outbreak in U.S. history. The most ferocious tornado outbreak ever recorded, spawning a total of 30 EF4 or EF5 tornadoes, occurred in 1974.

Tornadoes also occur, though more rarely, in other parts of the world such as South America and Europe; the earliest known tornado in history was recorded in Ireland in 1054. The human toll from tornadoes in Bangladesh actually exceeds that in the U.S., at an estimated 179 deaths per year, partly due to the region’s high population density. It’s population growth and expansion outside urban areas that have caused the cost of property damage from tornadoes to mushroom in the last few decades, especially in the U.S.

Next: No Evidence That Climate Change Causes Weather Extremes: (5) Wildfires

No Evidence That Climate Change Causes Weather Extremes: (3) Hurricanes

This third post in our series on the spurious links between climate change and extreme weather examines the incidence of hurricanes – powerful tropical cyclones that all too dramatically demonstrate the fury nature is capable of unleashing.

Although the UN’s IPCC (Intergovernmental Panel on Climate Change) has noted an apparent increase in the strongest (Category 4 and 5) hurricanes in the Atlantic Ocean, there’s almost no evidence for any global trend in hurricane strength. And the IPCC has found “no significant observed trends” in the number of global hurricanes each year.

Hurricanes occur in the Atlantic and northeastern Pacific Oceans, especially in and around the Gulf of Mexico; their cousins, typhoons, occur in the northwestern Pacific. Hurricanes can be hundreds of miles in extent with wind speeds up to 240 km per hour (150 mph) or more, and often exact a heavy toll in human lives and personal property. The deadliest U.S. hurricane in recorded history struck Galveston, Texas in 1900, killing an estimated 8,000 to 12,000 people. In the Caribbean, the Great Hurricane of 1780 killed 27,500 and winds exceeded an estimated 320 km per hour (200 mph). The worst hurricanes and typhoons worldwide have each claimed hundreds of thousands of lives.

How often hurricanes have occurred globally since 1981 is depicted in the figure below.

[Figure: Global hurricane and major hurricane frequency since 1981 (Ryan Maue)]

You can see immediately that the annual number of hurricanes overall (upper graph) is dropping. But, while the number of major hurricanes of Category 2, 3, 4 or 5 strength (lower graph) seems to show a slight increase over this period, the trend has been ascribed to improvements in observational capabilities, rather than warming oceans that provide the fuel for tropical cyclones.

The lack of any trend in major global hurricanes is borne out by the number of Category 3, 4 or 5 global hurricanes that make landfall, illustrated in the next figure. 

[Figure: Global landfalling hurricanes, 1970-2018]

It’s clear that the frequency of landfalling hurricanes of any strength (Categories 1 through 5) hasn’t changed in the nearly 50 years since 1970 – during a time when the globe warmed by approximately 0.6 degrees Celsius (1.1 degrees Fahrenheit). So the strongest hurricanes today aren’t any more extreme or devastating than those in the past. If anything, major landfalling hurricanes in the U.S. are tied to La Niña cycles in the Pacific Ocean, not to global warming.

Data for the North Atlantic basin, which has the best quality data available in the world, do, however, show heightened hurricane activity over the last 20 years. The figure below illustrates the frequency of all North Atlantic hurricanes (top graph) and major hurricanes (bottom graph) for the much longer period from 1851 to 2018.

[Figure: North Atlantic hurricanes and major hurricanes, 1851-2018]

What the data reveals is that the annual number of major North Atlantic hurricanes during the 1950s and 1960s was at least comparable to that of the last two decades – when, as can be seen, the number jumped suddenly upward after the quieter 1970s, 1980s and 1990s. But because the earth was cooling in the 1950s and 1960s, the present enhanced hurricane activity in the North Atlantic is highly unlikely to be a result of global warming.

Even though it appears from the figure that major North Atlantic hurricanes were less frequent before about 1940, the lower numbers simply reflect the relative lack of observations in early years of the record. Aircraft reconnaissance flights to gather data on hurricanes didn’t begin until 1944, while satellite coverage dates from only 1966. While the data shown in the figure above has been adjusted to compensate for these deficiencies, it’s probable that the number of major North Atlantic hurricanes before 1944 is still undercounted.

The true picture is much more complicated, and any explanation of changing hurricane behavior needs to account as well for other factors, such as the now more rapid intensification of these violent storms and their slower tracking than before, both of which result in heavier rain following landfall.

The short duration of the observational record, and the even shorter record from the satellite era, makes it impossible to assess whether recent hurricane activity is unusual for the present interglacial period. Paleogeological studies of sediments in North Atlantic coastal waters suggest that the current boosted hurricane activity is not at all unusual, with several periods of frequent intense hurricane strikes having occurred thousands of years ago.

Next: No Evidence That Climate Change Causes Weather Extremes: (4) Tornadoes

No Evidence That Climate Change Causes Weather Extremes: (2) Floods

Widespread flooding and devastating tornadoes in the U.S. Midwest this May only served to amplify the strident voices of those who claim that climate change has intensified the occurrence of major floods, droughts, hurricanes, heat waves and wildfires. Like-minded voices in other countries have also fallen into the same trap of linking weather extremes to global warming.  

Apart from the dismissal of such hysterical beliefs by the IPCC (Intergovernmental Panel on Climate Change), an increasing number of research studies are helping to dispel the notion that a warmer world is necessarily accompanied by more severe weather.

A 2017 Australian study of global flood risk concluded that very little evidence exists that worldwide flooding is becoming more prevalent. Despite average rainfall getting heavier as the planet warms, the study authors point out that excessive precipitation is not the only cause of flooding. What is less obvious is that alterations to the catchment area – such as land-use changes, deforestation and the building of dams – also play a major role.

Yet the study found that the biggest influence on flood trends is not more intense precipitation, changes in forest cover or the presence of dams, but the size of the catchment area. Previous studies had emphasized small catchment areas, as these were thought less likely to have been extensively modified. However, the new study discovered that, while smaller catchments do show a trend in flood risk that’s increasing over time, larger catchments exhibit a decreasing trend.

Globally, larger catchments dominate, so the trend in flood risk is actually decreasing rather than increasing in most parts of the globe, if there’s any trend at all. This is illustrated in the figure below, the data coming from 1,907 different locations over the 40 years from 1966 to 2005. Additional data from other locations and for a longer (93-year) period show the same global trend.


But while the overall trend is decreasing, the local trend in regions where smaller catchments are more common, such as Europe, eastern North America and southern Africa, is toward more flooding. The study authors suggest that the lower flood trend in larger catchment areas is due to the expanding presence of agriculture and urbanization.

Another 2017 study, this time restricted to North America and Europe, found “no compelling evidence for consistent changes over time” in the occurrence of major floods from 1930 to 2010.  Like the first study described above, this research included both small and large catchment areas. But the only catchments studied were those with minimal alterations and less than 10% urbanization, so as to focus on any trends driven by climate change.

The second figure below shows the likelihood of a 100-year flood occurring in North America or Europe in any given year, during two slightly different periods toward the end of the 20th century. A 100-year flood is a massive flood that occurs on average only once a century, and has a 1 in 100 or 1% chance of occurring or being exceeded in any given year – although the actual interval between 100-year floods is often less than 100 years.
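The “1% in any given year” definition also means such floods are far from rare over long intervals: under the standard assumption of independent years, the chance of seeing at least one 100-year flood in n years is 1 − 0.99ⁿ.

```python
# Chance of at least one "100-year" flood in n years, assuming each year is
# independent with annual probability p = 0.01.
def prob_at_least_one(p_annual, n_years):
    return 1 - (1 - p_annual) ** n_years

for n in (1, 30, 100):
    print(f"{n:3d} years: {prob_at_least_one(0.01, n):.1%}")
```

That works out to roughly a 26% chance over a 30-year mortgage and about 63% over a full century – which helps explain why the actual interval between 100-year floods is often less than 100 years.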


You can see that for both periods studied, the probability of a 100-year flood in North America or Europe hovers around the 1% (0.01) level or below, implying that 100-year floods were no more or less likely to occur during those intervals than at any time. The straight lines drawn through the data points are meaningless. Similar results were obtained for 50-year floods. 

Although the international study authors concluded that major floods in the Northern Hemisphere between 1931 and 2010 weren’t caused by global warming and were no more likely than expected from chance alone, they did find that floods were influenced by the climate. The strongest influence is the naturally occurring Atlantic Multidecadal Oscillation, an ocean cycle that causes heavier than normal rainfall in Europe and lighter rainfall in North America during its positive phase – leading to an increase in major European floods and a decrease in North American ones.

The illusion that major floods are becoming more common is due in part to the world’s growing population and the appeal, in the more developed countries at least, of living near water. This has led to people building their dream homes in harm’s way on river or coastal floodplains, where rainfall-swollen rivers or storm surges result in intermittent flooding and subsequent devastation. It’s changing human wants rather than climate change that are responsible for disastrous floods.

Next: No Evidence That Climate Change Causes Weather Extremes: (3) Hurricanes

No Evidence That Climate Change Causes Weather Extremes: (1) Drought

Weather extremes are a commonly cited line of evidence for human-caused climate change. Despite the UN’s IPCC (Intergovernmental Panel on Climate Change) having found little to no evidence that global warming triggers extreme weather, the mainstream media and more than a few climate scientists don’t hesitate to trumpet their beliefs to the contrary at every opportunity.

In this and subsequent blog posts, I’ll show how the quasi-religious belief linking extreme weather events to climate change is badly mistaken and at odds with the actual scientific record. We’ll start with drought.

Droughts have been a continuing feature of the earth’s climate for millennia. Although generally caused by a severe fall-off in precipitation, droughts can be aggravated by other factors such as elevated temperatures, soil erosion and overuse of available groundwater. The consequences of drought, which can be catastrophic for human and animal life, include crop failure, starvation and mass migration. A major exodus of early humans out of Africa about 135,000 years ago is thought to have been driven by drought.  

Getting a good handle on drought has only been possible since the end of the 19th century, when the instrumentation needed to measure extreme weather accurately was first developed. The most widely used gauge of dry conditions is the Palmer Drought Severity Index that measures both dryness and wetness and classifies them as “moderate”, “severe” or “extreme.” The figure below depicts the Palmer Index for the U.S. during the past century or so, for all three drought or wetness classifications combined.

[Figure: U.S. Palmer drought index, 1900-2012]

What jumps out immediately is the lack of any long-term trend in either dryness or wetness in the U.S. With the exception of the 1930s Dust Bowl years, the pattern of drought (upper graph) looks boringly similar over the entire 112-year period, as does the pattern of excessive rain (lower graph).

Much the same is true for the rest of the world. The next figure illustrates two different drought indices during the period 1910-2010 for India, a country subject to parching summer heat followed by drenching monsoonal rains; negative values denote drought and positive values wetness. The two indices are a version of the Palmer Drought Severity Index (sc-PDSI, top graph), and the Standardized Precipitation Index (SPI, bottom graph). The SPI, which relies on rainfall data only, is easier to calculate than the PDSI, which depends on both rainfall and temperature. While both indices are useful, the SPI is better suited to making comparisons between different regions.
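The SPI’s relative simplicity can be sketched in a few lines. The official index fits a gamma distribution to the precipitation record and maps it onto a standard normal; the toy version below (with invented rainfall totals) just standardizes each year to a z-score, which captures the spirit of the calculation:

```python
import statistics

# Hypothetical annual rainfall totals (mm) for one station - invented numbers.
rainfall = [850, 920, 780, 990, 640, 870, 905, 760, 1010, 830]

mean = statistics.fmean(rainfall)
stdev = statistics.stdev(rainfall)

# Simplified SPI: the z-score of each year's rainfall. Negative values mean
# drier than normal, positive wetter. (The official SPI fits a gamma
# distribution to the record instead of assuming normality.)
spi = [(r - mean) / stdev for r in rainfall]

print(f"driest year SPI = {min(spi):.2f}, wettest year SPI = {max(spi):.2f}")
```

Because only rainfall enters the calculation, the same recipe works at any station with a precipitation record, which is what makes the SPI well suited to comparisons between regions.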

[Figure: India drought indices, 1910-2010 – sc-PDSI (top) and SPI (bottom)]

You’ll see that the SPI in India shows no particular tendency over the 100-year period toward either dryness or wetness, though there are 20-year intervals exhibiting one of the two conditions; the apparent trend of the PDSI toward drought since 1990 is an artifact of the index. Similar records for other countries around the globe all show the same thing – no drying of the planet as a whole over more than 100 years.

Recently, the mainstream media created false alarm over drought by mindlessly broadcasting the results of a new study, purporting to demonstrate that global warming will soon result in “unprecedented drying.” By combining computer models with long-term observations, the study authors claim to have definitively connected global warming to drought.

But this claim doesn’t hold up, even in the study’s results. Although the authors were able to match warming to drought conditions during the first half of the 20th century, their efforts were a dismal failure after that. From 1950 to 1980, the “fingerprint” of human-caused global warming completely disappeared, in spite of ever-increasing CO2 in the atmosphere. And from 1981 onward, the fingerprint was so faint that it couldn’t be distinguished from background noise. So the assertion by the authors that global warming causes drought is nothing but wishful thinking.

As further evidence that climate change isn’t exacerbating drought, the final figure below shows the Palmer Index for the U.S. since 1996. Just like the record for the period from 1900 up to 2012 illustrated in the first figure above, there is no discernible trend in either dryness or wetness. While the West and Southwest have both experienced lengthy spells of drought during this period, extreme dry conditions now appear to have abated in both Texas and California.

US drought index 1996-2018 JPG.jpg

In summary, the scientific evidence simply doesn’t support any link between drought and climate change. The IPCC was right to express low confidence in any global-scale observed trend.

Next: No Evidence That Climate Change Causes Weather Extremes: (2) Floods

Are UFO Sightings a Threat to Science?

Credit: CoolCatGameStudio from Pixabay


Do UFO sightings threaten science? The short answer is no – UFO observations don’t in themselves, as long as one separates genuine observations from the questionable claims of alien abduction and other supposed extraterrestrial activity on Earth.

Unlike pseudosciences such as astrology or crystal healing, UFOs belong to the realm of science, even if we don’t know exactly what some of them are. Sightings of ethereal objects in the sky have been reported throughout recorded history, although there’s been a definite uptick since the advent of air travel in the 20th century. According to recently released records, UK wartime prime minister Winston Churchill colluded with General Dwight Eisenhower to suppress the alleged observation of a UFO by a British bomber crew toward the end of World War II, out of fear that reporting it would cause mass panic.

Since then, numerous incidents have been reported in countries across the globe, by scientists and nonscientists alike. The U.S. Air Force, which coined the term UFO, undertook a series of studies from 1947 to 1969 that included more than 12,000 claimed UFO sightings. The project concluded that the vast majority of sightings could be explained as misidentified conventional objects or natural phenomena, such as spy planes, helium balloons, clouds or meteors – or occasionally, hoaxes. Nonetheless, there was no explanation for 701 (about 6%) of the sightings investigated. 

Only in the last several months has the existence of a new U.S. program to study UFOs been disclosed, this time under the aegis of the Pentagon. Begun in 2007, the secret program apparently continues to this day, though its government funding ended in 2012. One of the few publicized incidents it examined involved two Navy F/A-18F fighter pilots who, off the coast of southern California in 2004, chased an oval object that appeared to move at speeds impossible for any known human-made craft.

Perhaps the most famous American event was the so-called Roswell incident in 1947, when an Air Force balloon designed for nuclear test monitoring crashed at a ranch near Roswell, New Mexico. The official but deceptive statement by the military that it was a high-altitude weather balloon only served to generate ever-escalating conspiracy theories about the crash. The theories postulated that the military had covered up the crash landing of an alien spacecraft, and that bodies of its extraterrestrial crew had been recovered and preserved. Over the years, details of the story became embellished to the point where more than one candidate for U.S. President promised to unlock the secret government files on Roswell.

Belief in alien activity is where UFO lore departs from science. While it’s possible that some of the small percentage of unexplained UFO sightings have been spaceships piloted by extraterrestrial beings, there’s currently no credible evidence that aliens actually exist, nor that they’ve ever visited planet Earth.

In particular, it’s belief in alien abductions that constitutes a threat to science, whose hallmarks are empirical evidence and logic. In the U.S., the phenomenon began with the mysterious case of Betty and Barney Hill in 1961. The Hills claimed to have encountered a UFO while driving home on an isolated rural road in New Hampshire, and to have been seized by humanoid figures with large eyes who took them onto their spaceship, where invasive experiments were performed on the terrified pair. Afterwards, both of the Hills’ watches stopped working, and the couple had no recollection of two hours of their bewildering drive.

Although the alien abduction narrative captured the American imagination during the next two decades, the Air Force ultimately dismissed the story, determining that the supposed alien craft was in fact a “natural” object. Indeed, there’s no reliable empirical evidence that any of the millions of other reported abductions have been real.

Psychologists attribute the episodes to false memories and fantasies created by a human brain that we’re still struggling to understand. Possible physical causes of the abduction phenomenon include epilepsy, hallucinations and sleep paralysis, a condition in which a person is half-awake — conscious, though unable to move.

But while abduction stories may be entertaining, they qualify as irrational pseudoscience because they can’t be falsified. Pseudoscience is frequently based on faith in a belief, instead of scientific evidence, and makes vague and often grandiose claims that can’t be tested. One of the clear-cut ways to differentiate real science from pseudoscience is the falsifiability criterion formulated by 20th-century philosopher Sir Karl Popper: a genuine scientific theory or law must be capable in principle of being invalidated – of being disproved – by observation or experiment. That’s not possible with alien abductions, which can’t be either proved or disproved.

Next: No Evidence That Climate Change Causes Weather Extremes: (1) Drought

UN Species Extinction Report Spouts Unscientific Hype, Dubious Math

An unprecedented decline in nature’s animal and plant species is supposedly looming, according to a UN body charged with developing a knowledge base for preservation of the planet’s biodiversity. In a dramatic announcement this month, the IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services) claimed that more species are currently at risk of extinction than at any time in human history and that the extinction rate is accelerating. But these claims are nonsensical hype, based on wildly exaggerated numbers that can’t be corroborated.

Credit: Ben Curtis, Associated Press


The IPBES report summary, which is all that has been released so far, states that “around 1 million of an estimated 8 million animal and plant species (75% of which are insects), are threatened with extinction.” Apart from the as-yet-unpublished report, there’s little indication of the source for these estimates, which are as mystifying as the classic magician’s rabbit produced from an empty hat.

It appears from the report summary that the estimates are derived from a much more reliable set of numbers – the so-called Red List of threatened species, compiled by the IUCN (International Union for Conservation of Nature). The IUCN, not affiliated with the UN, is an international environmental network highly regarded for its assessments of the world’s biodiversity, including evaluation of the extinction risk of thousands of species. The network includes a large number of biologists and conservationists.

Of an estimated 1.7 million species in total, the IUCN’s Red List has currently assessed just 98,512 species, of which it lists 27,159 or approximately 28% as threatened with extinction. The IUCN’s “threatened” description includes the categories “critically endangered,” “endangered” and “vulnerable.”

A close look at the IUCN category definitions reveals that “vulnerable” represents a probability of extinction in the wild of merely “at least 10% within 100 years,” and “endangered” an extinction probability of “at least 20% within a maximum of 100 years.” Neither category is a major cause for concern, yet together they embrace 78% of the IUCN’s compilation of threatened species. That leaves just 22%, or about 5,900 critically endangered species, whose probability of extinction in the wild is assessed at more than 50% over the next 100 years – high enough for these species to be genuinely at risk of becoming extinct.
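The percentages quoted here follow directly from the Red List counts; a quick check of the arithmetic:

```python
# Verifying the IUCN Red List arithmetic quoted in the text.
assessed = 98_512
threatened = 27_159  # "critically endangered" + "endangered" + "vulnerable"

threatened_share = threatened / assessed           # ~0.28, i.e. ~28%
critically_endangered = round(threatened * 0.22)   # the ~22% residual category, ~5,900 species
```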

But while the IUCN presents these numbers matter-of-factly without fanfare, the much more political IPBES resorts to unashamed hype by extrapolating the statistics beyond the 98,512 species that the IUCN has actually investigated, and by assuming a total number of species far in excess of the IUCN’s estimated 1.7 million. Estimates of just how many species the Earth hosts vary considerably, from the IUCN number of 1.7 million all the way up to 1 trillion. The IPBES number of 8 million species appears to be plucked out of nowhere, as does the 1 million threatened with extinction, despite the IPBES report being the result of a “systematic review” of 15,000 scientific and government sources.

According to IPBES chair Sir Robert Watson, the 1 million number was derived from the 8 million by what appears to be an arbitrary calculation based on the IUCN’s much lower numbers. The IPBES assumes a global total of 5.5 million insects – compared with the IUCN’s Red List estimate of 1.0 million – which, when subtracted from the 8 million grand total, leaves 2.5 million non-insect species. This 2.5 million is then multiplied by the IUCN’s 28% threatened rate, and the 5.5 million insects by a mysterious, unspecified lower rate, to arrive at the 1 million species in danger. That far exceeds the IUCN’s estimate of 27,159.
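A short sketch of this apparent calculation, solving for the unspecified insect rate implied by the 1 million total:

```python
# Reconstructing the apparent IPBES arithmetic. The 28% non-insect
# threatened rate comes from the IUCN; the insect rate is unspecified
# in the report summary, so we solve for the value implied by the
# 1 million headline figure.
total_species = 8_000_000
insects = 5_500_000
non_insects = total_species - insects                  # 2,500,000

non_insect_threatened = round(non_insects * 0.28)      # 700,000
implied_insect_rate = (1_000_000 - non_insect_threatened) / insects
# implied_insect_rate ~ 0.055, i.e. roughly 5.5% of insects assumed threatened
```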

Not only does the IPBES take unjustified liberties with the IUCN statistics, but its extinction rate projection bears no relationship whatsoever to actual extinction data. A known 680 vertebrate species have been driven to extinction since the 16th century, with 66 known insect extinctions recorded over the same period – or approximately 1.5 extinctions per year on average. The IPBES report summary states that the current rate of global species extinction is tens to hundreds of times higher than this and accelerating, but without explanation except for the known effect of habitat loss on animal species.
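The roughly 1.5-per-year historical rate is easy to verify from the figures above; a quick sketch, taking “since the 16th century” as roughly 500 years (an assumption):

```python
# Back-of-envelope check on the historical extinction rate quoted above.
vertebrate_extinctions = 680
insect_extinctions = 66
years_elapsed = 500  # "since the 16th century", roughly

rate_per_year = (vertebrate_extinctions + insect_extinctions) / years_elapsed
# rate_per_year ~ 1.5 extinctions per year, matching the figure in the text
```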

Maybe we should give the IPBES the benefit of the doubt and suspend judgment until the full report is made available. But with such a disparity between its estimates and the more sober assessment of the IUCN, it seems that the IPBES numbers are sheer make-believe. One million species on the brink of extinction is nothing but fiction, when the true number could be as low as 5,900.

Next: Are UFO Sightings a Threat to Science?

Science, Political Correctness and the Great Barrier Reef

A recent Australian court case highlights the intrusion of political correctness into science to bolster the climate change narrative. On April 16, a federal judge ruled that Australian coral scientist Dr. Peter Ridd had been unlawfully fired from his position at North Queensland’s James Cook University, for questioning his colleagues’ research on the impact of climate change on the Great Barrier Reef. In his ruling, the judge criticized the university for not respecting Ridd’s academic freedom.

Great Barrier Reef.jpg

The Great Barrier Reef is the world's largest coral reef system, 2,300 km (1,400 miles) long and visible from outer space. Labeled by CNN as one of the seven natural wonders of the world, the reef is a constant delight to tourists, who can view the colorful corals from a glass-bottomed boat or by snorkeling or scuba diving.

Rising temperatures, especially during the prolonged El Niño of 2016-17, have severely damaged portions of the Great Barrier Reef – so much so that the reef has become the poster child for global warming. Corals are susceptible to overheating and undergo bleaching when the water gets too hot, losing their vibrant colors. But exactly how much of the Great Barrier Reef has been affected, and how quickly it’s likely to recover, are controversial issues among reef researchers.

Ridd’s downfall came after he authored a chapter on the resilience of Great Barrier Reef corals in the book, Climate Change: The Facts 2017. In his chapter and subsequent TV interviews, Ridd bucked the politically correct view that the reef is doomed to an imminent death by climate change, and criticized the work of colleagues at the university’s Centre of Excellence for Coral Reef Studies. He maintained that his colleagues’ findings on the health of the reef in a warming climate were flawed, and that scientific organizations such as the Centre of Excellence could no longer be trusted.  

Ridd had previously been censured by the university for going public with a dispute over a different aspect of reef health. This time, his employer accused Ridd of “uncollegial” academic misconduct and warned him to remain silent about the charge. When he didn’t, the university fired him after a successful career of more than 40 years.

At the crux of the issue of bleaching is whether or not it’s a new phenomenon. The politically correct view of many of Ridd’s fellow reef scientists is that bleaching didn’t start until the 1980s as global warming surged, so is an entirely man-made spectacle. But Ridd points to scientific records that reveal multiple coral bleaching events around the globe throughout the 20th century.

The fired scientist also disagrees with his colleagues over the extent of bleaching from the massive 2016-17 El Niño. Ridd estimates that just 8% of Great Barrier Reef coral actually died; much of the southern end of the reef didn’t suffer at all. But his politically correct peers maintain that the die-off was anywhere from 30% to 95%.

Such high estimates, however, are for very shallow water coral – less than 2 meters (7 feet) below the surface, which is only a small fraction of all the coral in the reef. A recent independent study found that deep water coral – down to depths of more than 40 meters (130 feet) – saw far less bleaching. And while Ridd’s critics claim that warming has reduced the growth rate of new coral by 15%, he finds that the growth rate has increased slightly over the past 100 years.

Ridd explains the adaptability of corals to heating as a survival mechanism, in which the multitude of polyps that constitute a coral exchange the microscopic algae that normally live inside the polyps and give coral its striking colors. Hotter-than-normal water causes the algae to poison the coral, which then expels them, turning the polyps white. But to survive, the coral needs resident algae, which supply it with energy through photosynthesis. So from the surrounding water the coral selects a different species of algae better suited to hot conditions, a process that enables the coral to recover within a few years, says Ridd.

Ridd attributes what he believes are the erroneous conclusions of his reef scientist colleagues to a failure of the peer review process in scrutinizing their work. To support his argument, he cites the so-called reproducibility crisis in contemporary science – the vast number of peer-reviewed studies that can’t be replicated in subsequent investigations and whose findings turn out to be false. Although it’s not known how severe irreproducibility is in climate science, it’s a serious problem in the biomedical sciences, where as many as 89% of published results in certain fields can’t be reproduced.

In Ridd’s opinion, as well as mine, studies predicting that the Great Barrier Reef is in imminent peril are based more on political correctness than good science.

Next: UN Species Extinction Report Based on Unscientific Hype, Dubious Math

Grassroots Climate Change Movement Ignores Actual Evidence

Earth Day 2019 is marked by the recent launch of several grassroots organizations whose ostensible aim is to combat climate change. The crusades include the UK’s Extinction Rebellion, the Swedish WeDontHaveTime, and the pied-piper-like campaign sparked by striking Swedish schoolgirl Greta Thunberg. What’s most disturbing about them all is not their intentions or methods, but their ignorance and their disregard of scientific evidence.

Common to the entire movement is the delusional belief that climate Armageddon is imminent – a mere 12 years away, according to U.S. congresswoman Alexandria Ocasio-Cortez. The WeDontHaveTime manifesto declares that “climate change is killing us” and that we’re already experiencing catastrophe. Trumpets Extinction Rebellion: “The science is clear … we are in a life or death situation … ,” a sentiment echoed by the Sunrise Movement in the U.S. And a proclamation of the youth climate strikers insists that “The climate crisis … is the biggest threat in human history.”

But despite the climate hysteria, these activists show almost no knowledge of the science that supposedly underlies their doomsday claims. Instead, they resort to logically fallacious appeals to authority. Apart from the UN’s IPCC (Intergovernmental Panel on Climate Change), which is as much a political body as a scientific one, the authorities include the former head of NASA’s Goddard Institute for Space Studies, James Hansen – known for his hype on global warming – and the UK Met Office, an agency with a dismal track record of predicting even the coming season’s weather.

Among numerous mistaken assertions by the would-be crusaders is the constant drumbeat of extreme weather events attributed to human emissions of greenhouse gases. The sadly uninformed protesters seem completely unaware that anomalous weather has been part of the earth’s climate from ancient times, long before industrialization bolstered the CO2 level in the atmosphere. They don’t bother to check the actual evidence that reveals no long-term trend whatsoever in hurricanes, heat waves, floods, droughts and wildfires in more than 100 years. Linking weather extremes to global warming or CO2 is empty-headed ignorance.

Another fallacy is that the huge Antarctic ice sheet, containing about 90% of the freshwater ice on the earth’s surface, is losing ice and causing sea-level rise to accelerate. But while it’s true that glaciers in West Antarctica and the Antarctic Peninsula are thinning, there’s evidence, albeit controversial, that the ice loss is outweighed by new ice formation in East Antarctica from warming-enhanced snowfall. The much smaller Greenland ice sheet is indeed losing ice by melting, but not at an alarming rate.

The cluelessness of the climate change movement is also exemplified by its embrace of false predictions of the future, such as the claim that climate change will cause shortfalls in food production. If anything, exactly the reverse is true. Higher temperatures and the fertilizing effect of CO2, which helps plants grow, boost crop yields and make plants more resistant to drought.

Participation in the movement runs in the hundreds of thousands around the world, especially among school climate strikers. The eco-anarchist Extinction Rebellion, formed last year, promotes acts of nonviolent civil disobedience to achieve its goals, harking back to “Ban the Bomb” and US civil rights protests of the 1950s and 1960s. To “save the planet”, the organization is calling for greenhouse gas emissions to be reduced to net zero as soon as 2025.

The newly created WeDontHaveTime subscribes to the widely held political, but unscientific belief that climate change is an existential crisis, and that catastrophe lurks around the corner. Its particular focus is on building a global social media network dedicated to climate change, with the initial phase being launched today, April 22.

The school strike for climate has similar aims, to be achieved by children around the globe playing hooky from school. An estimated total of more than a million pupils in 125 countries demonstrated in strikes on March 15.

The movement’s lack of scientific knowledge extends to the origin of CO2 emissions as well. Extinction Rebellion and WeDontHaveTime, at least, appear oblivious to the fact that the lion’s share of the world’s CO2 emissions comes from China and India alone – 34% in 2019, by preliminary estimates, and increasing yearly. If the climate change catastrophists were really serious about their objectives, they’d be directing their efforts against the governments of these two countries instead of wasting time on the West.

Next: Science, Political Correctness and the Great Barrier Reef

The Sugar Industry: Sugar Daddy to Manipulated Science?

Industry funding of scientific research often comes with strings attached. There’s plenty of evidence that industries such as tobacco and lead have been able to manipulate sponsored research to their advantage, in order to create doubt about the deleterious effects of their product. But has the sugar industry, currently in the spotlight because of concern over sugary drinks, done the same?

suger large.jpg

This charge was recently leveled at the industry by a team of scientists at UCSF (University of California, San Francisco), who accused the industry of funding research in the 1960s that downplayed the risks of consuming sugar and overstated the supposed dangers of eating saturated fat. Both saturated fat and sugar had been linked to coronary heart disease, which was surging at the time.

The UCSF researchers claim to have discovered evidence that an industry trade group secretly paid two prominent Harvard scientists to conduct a literature review refuting any connection between sugar and heart disease, and making dietary fat the villain instead. The published review made no mention of sugar industry funding.

A year after the review came out, the trade group funded an English researcher to conduct a study on laboratory rats. Initial results seemed to confirm other studies indicating that sugars, which are simple carbohydrates, were more detrimental to heart health than complex or starchy carbohydrates like grains, beans and potatoes. This was because sugar appeared to elevate the blood level of triglyceride fats, today a known risk factor for heart disease, through its metabolism by microbes in the gut.

Perhaps more alarmingly, preliminary data suggested that consumption of sugar – though not starch – produced high levels of an enzyme called beta-glucuronidase that other contemporary studies had associated with bladder cancer in humans. Before any of this could be confirmed, however, the industry trade organization shut the research project down; the results already obtained were never published.

The UCSF authors say in a second paper that the literature review’s dismissal of contrary studies, together with the suppression of evidence tying sugar to triglycerides and bladder cancer, show how the sugar industry has attempted for decades to bury scientific data on the health risks of eating sugar. If the findings of the laboratory study had been disclosed, they assert, sugar would probably have been scrutinized as a potential carcinogen, and its role in cardiovascular disease would have been further investigated. Added one of the UCSF team, “This is continuing to build the case that the sugar industry has a long history of manipulating science.”

Marion Nestle, an emeritus professor of food policy at New York University, has commented that the internal industry documents unearthed by the UCSF researchers were striking “because they provide rare evidence that the food industry suppressed research it did not like, a practice that has been documented among tobacco companies, drug companies and other industries.”

Nonetheless, the current sugar trade association disputes the UCSF claims, calling them speculative and based on questionable assumptions about events that took place almost 50 years ago. The association also considers the research itself tainted, because it was conducted and funded by known critics of the sugar industry. The industry has consistently denied that sugar plays any role in promoting obesity, diabetes or heart disease.

And despite a statement by the trade association’s predecessor that it was created “for the basic purpose of increasing the consumption of sugar,” other academics have defended the industry. They point out that, at the time of the industry review and the rat study in the 1960s, the link between sugar and heart disease was supported by only limited evidence, and the dietary fat hypothesis was deeply entrenched in scientific thinking, being endorsed by the AHA (American Heart Association) and the U.S. NHI (National Heart Institute).

But, says Nestle, it’s déjà vu today, with the sugar and beverage industries now funding research to let the industries off the hook for playing a role in causing the current obesity epidemic. As she notes in a commentary in the journal JAMA Internal Medicine:

“Is it really true that food companies deliberately set out to manipulate research in their favor? Yes, it is, and the practice continues.”

Next: Grassroots Climate Change Movement Ignores Actual Evidence

Measles Rampant Again, Thanks to Anti-Vaccinationists

Measles is on the march once more, even though vaccination against the disease has cut the number of worldwide deaths from an estimated 2.6 million per year in the mid-20th century to 110,000 in 2017. But thanks to the anti-scientific, anti-vaccination movement and the ever-expanding reach of social media, measles cases are now at a 20-year high in Europe, and as many U.S. cases were reported in the first two months of 2019 as in the first six months of 2018.

measles large.jpg

Highly contagious, measles is not a malady to be taken lightly. One in 1,000 people who catch it die of the disease; most of the victims are children under five. Even those who survive are at high risk of falling prey to encephalitis, an often debilitating infection of the brain that can lead to seizures and mental retardation. Other serious complications of measles include blindness and pneumonia.

It’s not the first time that measles has reared its ugly head since the widespread introduction of the MMR (measles-mumps-rubella) vaccine in 1963. Although laws mandating vaccination for schoolchildren were in place in all 50 U.S. states by 1980, sporadic outbreaks of the disease have continued to occur. Before the surge in 2018-19, a record number of 667 cases of measles from 23 outbreaks were reported in the U.S. in 2014. And major epidemics are currently raging in countries such as Ukraine and the Philippines.

The primary reason for all these outbreaks is that more and more parents are choosing not to vaccinate their children. The WHO (World Health Organization), for the first time, has listed vaccine hesitancy as one of the top 10 global threats of 2019.

While some parents oppose immunization on religious or philosophical grounds, by far the greatest number of objections comes from those who insist that all vaccines cause disabling side effects or other diseases – even though the available scientific data doesn’t support such claims. As discussed in a previous post, there’s absolutely no scientific evidence for the once widely held belief that MMR vaccination results in autism, for example.

Anti-vaccinationists, when accused of exposing their children to unnecessary risk by refusing immunization because of unjustified fears about vaccine safety, rationalize their stance by appealing to herd immunity. Herd immunity is the mass protection from an infectious disease that results when enough members of the community become immune to the disease through vaccination, just as sheer numbers protect a herd of animals from predators. Once a sufficiently large number of people have been vaccinated, viruses and bacteria can no longer spread in that community.

For measles, herd immunity requires up to 94% of the populace to be immunized. That the threshold is lower than 100%, however, enables anti-vaccinationists to hide their children in the herd. By not vaccinating their offspring but choosing to live among the vaccinated, anti-vaxxers avoid the one in one million risk of their children experiencing serious side effects from the vaccine, while simultaneously not exposing them to infection – at least not in their own community.  
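The roughly 94% figure follows from the standard epidemiological herd-immunity formula, in which a fraction 1 − 1/R0 of the population must be immune, where R0 is the basic reproduction number; a small sketch, assuming the commonly cited measles R0 range of 12 to 18:

```python
# Herd-immunity threshold from the basic reproduction number R0.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    return 1 - 1 / r0

low = herd_immunity_threshold(12)   # ~0.92 at the low end of measles R0
high = herd_immunity_threshold(18)  # ~0.94, the "up to 94%" in the text
```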

But hiding in the herd takes advantage of others and is morally indefensible. Certain vulnerable groups can’t be vaccinated at all, including those with weakened immune systems such as children undergoing chemotherapy for cancer or the elderly on immunosuppressive therapy for rheumatic diseases. If too many people choose not to vaccinate, the percentage vaccinated will fall below the threshold, herd immunity will break down and those whose protection depends on those around them being vaccinated will suffer.

Another contentious issue is exemptions from mandatory vaccination for religious or philosophical reasons. While some American parents regard the denial of schooling to unvaccinated children as an infringement of their constitutional rights, supreme courts in several U.S. states have ruled that the right to practice religion freely doesn’t include liberty to expose the community or a child to communicable disease. And ever since it was found in 2006 that the highest incidence of diseases such as whooping cough occurred in the states most generous in granting exemptions, more and more states have abolished nonmedical exemptions altogether.

But other countries are not so vigilant. In Madagascar, for instance, less than an estimated 60% of the population has been immunized against measles; as a result, an epidemic there has caused more than 900 deaths in six months, according to the WHO. Although the WHO says that the reasons for the global rise in measles cases are complex, there’s no doubt that resistance to vaccination is a major factor. It’s not helped by the extensive dissemination of anti-vaccination misinformation by Russian propagandists.

Next: The Sugar Industry: Sugar Daddy to Manipulated Science?

Does Climate Change Threaten National Security?

Earth new.jpg

The U.S. White House’s proposed Presidential Committee on Climate Security (PCCS) is under attack – by the mainstream media, Democrats in Congress and military retirees, among others. The committee’s intended purpose is to conduct a genuine scientific assessment of climate change.

But the assailants’ claim that the PCCS is a politically motivated attempt to overthrow science has it backwards. The Presidential Committee will undertake a scientifically motivated review of climate change science, in the hope of eliminating the subversive politics that have taken over the scientific debate.

It’s those opposed to the committee who are playing politics and abusing science. The whole political narrative about greenhouse gases and dangerous anthropogenic (human-caused) warming, including the misguided Paris Agreement that the U.S. has withdrawn from, depends on faulty computer climate models that failed to predict the recent slowdown in global warming, among other shortcomings. The actual empirical evidence for a substantial human contribution to global warming is flimsy.

And the supposed 97% consensus among climate scientists that global warming is largely man-made is a gross exaggeration, mindlessly repeated by politicians and the media.

The 97% number comes primarily from a study of approximately 12,000 abstracts of research papers on climate science over a 20-year period. What is rarely revealed is that nearly 8,000 of the abstracts expressed no opinion at all on human-caused warming. When that and a subsidiary survey are taken into account, the consensus among climate scientists falls to somewhere between 33% and 63%. So much for an overwhelming majority!
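The arithmetic behind this point can be sketched using the approximate round numbers in the text (the exact study counts differ slightly):

```python
# Rough reconstruction of the consensus arithmetic described above:
# ~12,000 abstracts reviewed, of which nearly 8,000 took no position;
# ~97% of the remainder endorsed human-caused warming.
total_abstracts = 12_000
no_position = 8_000
with_position = total_abstracts - no_position   # ~4,000

endorsing = with_position * 0.97                # ~3,880
share_of_all = endorsing / total_abstracts      # ~0.32, about a third of all abstracts
```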

Blatant exaggeration like this for political purposes is all too common in climate science. An example that permeates current news articles and official reports on climate change is the hysteria over extreme weather. Almost every hurricane, major flood, drought, wildfire or heat wave is ascribed to global warming.

But careful examination of the actual scientific data shows that if there’s a trend in any of these events, it’s downward rather than upward. Even the UN’s Intergovernmental Panel on Climate Change has found little to no evidence that global warming increases the occurrence of many types of extreme weather.


Another over-hyped assertion about climate change is that the polar bear population is shrinking because of diminishing sea ice in the Arctic, and that the bears are facing extinction. Yet, despite numerous media articles and photos of apparently starving bears, current evidence shows that the polar bear population has actually held steady for the whole period that the ice has been decreasing – and may even be growing, according to the native Inuit.

All these exaggerations falsely bolster the case for taking immediate action to combat climate change, supposedly by pulling back on fossil fuel use. But the mandate of the PCCS is to cut through the hype and assess just what the science actually says.  

A specific PCCS goal is to examine whether climate change impacts U.S. national security, a connection that the defense and national security agencies have strongly endorsed.

A recent letter of protest to the President from a group of former military and civilian national security professionals expresses their deep concern about “second-guessing the scientific sources used to assess the threat … posed by climate change.” The PCCS will re-evaluate the criteria employed by the national agencies to link national security to climate change.

The protest letter also claims that less than 0.2% of peer-reviewed climate science papers dispute that climate change is driven by humans. This is nonsense. In solar science alone during the first half of 2017, the number of peer-reviewed papers affirming a strong link between the sun and our climate, independent of human activity, represented approximately 4% of all climate science papers during that time – and there are many other fields of study apart from the sun.

Let’s hope that formation of the new committee will not be thwarted and that it will uncover other truths about climate science.

(This post was published previously on March 7, on The Post & Email blog.)

Next: Measles Rampant Again, Thanks to Anti-Vaccinationists

Nature vs Nurture: Does Epigenetics Challenge Evolution?

A new wrinkle in the traditional nature vs nurture debate – whether our behavior and personalities are influenced more by genetics or by our upbringing and environment – is the science of epigenetics. Epigenetics describes the mechanisms for switching individual genes on or off in the genome, which is an organism’s complete set of genetic instructions.


A controversial question is whether epigenetic changes can be inherited. According to Darwin’s 19th-century theory, evolution is governed entirely by heritable variation of what we now know as genes, a variation that usually results from mutation; any biological changes to the whole organism during its lifetime caused by environmental factors can’t be inherited. But recent evidence from studies on rodents suggests that epigenetic alterations can indeed be passed on to subsequent generations. If true, this implies that a memory of our lifestyle or behavior today is recorded in our epigenome and will form part of the biological inheritance of our grandchildren and great-grandchildren.

So was Darwin wrong? Does epigenetics challenge his theory? At first blush, epigenetics is reminiscent of Lamarckism – the pre-Darwinian notion that acquired characteristics are heritable, promulgated by French naturalist Jean-Baptiste Lamarck. Lamarck’s most famous example was the giraffe, whose long neck was thought at the time to have come from generations of its ancestors stretching to reach foliage in high trees, with longer and longer necks then being inherited.

Darwin himself, when his proposal of natural selection as the evolutionary driving force was initially rejected, embraced Lamarckism as a possible alternative to natural selection. But the Lamarckian view was later discredited, as more and more evidence for natural selection accumulated, especially from molecular biology.

Nonetheless, the wheel appears to have turned back to Lamarck’s idea over the last 20 years. Several epidemiological studies have established an apparent link between 20th-century starvation and the current prevalence of obesity in the children and grandchildren of malnourished mothers. The most widely studied event is the Dutch Hunger Winter, the name given to a 6-month winter blockade of part of the Netherlands by the Germans toward the end of World War II. Survivors, who included Hollywood actress Audrey Hepburn, resorted to eating grass and tulip bulbs to stay alive.

The studies found that mothers who suffered malnutrition during early pregnancy gave birth to children who were more prone to obesity and schizophrenia than children of well-fed mothers. More unexpectedly, the same effects showed up in the grandchildren of the women who were malnourished during the first three months of their pregnancy. Similarly, an increased incidence of Type II diabetes has been discovered in adults whose pregnant mothers experienced starvation during the Ukrainian Famine of 1932-33 and the Great Chinese Famine of 1958-61.

All this data points to the transmission from generation to generation of biological effects caused by an individual’s own experiences. Further evidence for such epigenetic, Lamarckian-like changes comes from laboratory studies of agouti mice, so called because they carry the agouti gene that not only makes the rodents fat and yellow, but also renders them susceptible to cancer and diabetes. By simply altering a pregnant mother’s diet, researchers found they could effectively silence the agouti gene and produce offspring that were slender and brown, and no longer prone to cancer or diabetes.  

The modified mouse diet was rich in methyl donors – small molecules, found in foods such as onions and beets, that attach themselves to the DNA strand and switch off the troublesome gene. In addition to its DNA, a genome in fact contains an array of chemical markers and switches that constitute the instructions for its protein-coding genes – an estimated 21,000 of them in humans. That is, the array is able to turn the expression of particular genes on or off.

However, the epigenome, as this array is called, can’t alter the genes themselves. A soldier who loses a limb in battle, for example, will not bear children with shortened arms or legs. And, while there’s limited evidence that epigenetic changes in humans can be transmitted between generations, as in the starvation studies described above, the possibility isn’t yet fully established and further research is needed.

One line of thought, for which an increasing amount of evidence exists in animals and plants, is that epigenetic change doesn’t come from experience or use – as in the case of Lamarck’s giraffe – but actually results from Darwinian natural selection. The idea is that in order to cope with an environmental threat or need, natural selection may choose the variation in the species that has an epigenome favoring the attachment to its DNA of a specific type of molecule such as a methyl donor, capable of expressing or silencing certain genes. In other words, epigenetic changes can exploit existing heritable genetic variation, and so are passed on.

Is this explanation correct or, as creationists would like to think, did Darwin’s theory of evolution get it wrong? Time will tell.

How the Scientific Consensus Can Be Wrong


Consensus is a necessary step on the road from scientific hypothesis to theory. What many people don’t realize, however, is that a consensus isn’t necessarily the last word. A consensus, whether newly proposed or well-established, can be wrong. In fact, the mistaken consensus has been a recurring feature of science for many hundreds of years.

A recent example of a widespread consensus that nevertheless erred was the belief that peptic ulcers were caused by stress or spicy foods – a dogma that persisted in the medical community for much of the 20th century. The scientific explanation at the time was that stress or poor eating habits resulted in excess secretion of gastric acid, which could erode the digestive lining and create an ulcer.

But two Australian doctors discovered evidence that peptic ulcer disease was caused by a bacterial infection of the stomach, not stress, and could be treated easily with antibiotics. Yet overturning such a longstanding consensus would not be simple. As one of the doctors, Barry Marshall, put it:

“…beliefs on gastritis were more akin to a religion than having any basis in scientific fact.”

To convince the medical establishment the pair were right, Marshall resorted in 1984 to the drastic measure of infecting himself with a potion containing the bacterium in question (known as Helicobacter pylori). Despite this bold and risky act, the medical world didn’t finally accept the new doctrine until 1994. In 2005, Barry Marshall and Robin Warren were awarded the Nobel Prize in Physiology or Medicine for their discovery.

Earlier last century, an individual fighting established authority had overthrown conventional scientific wisdom in the field of geology. Acceptance of Alfred Wegener’s revolutionary theory of continental drift, proposed in 1912, was delayed for many decades – even longer than resistance continued to the infection explanation for ulcers – because the theory was seen as a threat to the geological establishment.

Geologists of the day refused to take seriously Wegener’s circumstantial evidence of matchups across the ocean in continental coastlines, animal and plant fossils, mountain chains and glacial deposits, clinging instead to the consensus of a contracting earth to explain these disparate phenomena. The old consensus of fixed continents endured among geologists even as new, direct evidence for continental drift surfaced, including mysterious magnetic stripes on the seafloor. But only after the emergence in the 1960s of plate tectonics, which describes the slow sliding of thick slabs of the earth’s crust, did continental drift theory become the new consensus.

A much older but well-known example of a mistaken consensus is the geocentric (earth-centered) model of the solar system that held sway for 1,500 years. This model was originally developed by ancient Greek philosophers Plato and Aristotle, and later simplified by the astronomer Ptolemy in the 2nd century. In the early 17th century, Italian mathematician and astronomer Galileo Galilei fought to overturn the geocentric consensus, advocating instead the rival heliocentric (sun-centered) model of Copernicus – the model which we accept today, and for which Galileo gathered evidence in the form of unprecedented telescopic observations of the sun, planets and planetary moons.

Although Galileo was correct, his endorsement of the heliocentric model brought him into conflict with university academics and the Catholic Church, both of which adhered to Ptolemy’s geocentric model. A resolute Galileo insisted that:

 “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.”

But to no avail: Galileo was called before the Inquisition, forbidden to defend Copernican ideas, and finally sentenced to house arrest for publishing a book that did just that and also ridiculed the Pope.

These are far from the only cases in the history of science of a consensus that was wrong. Others include the widely held 19th-century religious belief in creationism that impeded acceptance of Darwin’s theory of evolution, and the 20th-century paradigm linking saturated fat to heart disease.

Consensus is built only slowly, so belief in the consensus tends to become entrenched over time and is not easily abandoned by its devotees. This is certainly the case for the current consensus that climate change is largely a result of human activity – a consensus, as I’ve argued in a previous post, that is most likely mistaken.

Next: Nature vs Nurture: Does Epigenetics Challenge Evolution?

How Elizabeth Holmes Abused Science to Deceive Investors

Even in Silicon Valley, which is no stranger to hubris and deceit, it stands out – the bald-faced audacity of a young Stanford dropout who bilked prominent investors out of hundreds of millions of dollars for a fictitious blood-testing technology based on finger-stick specimens.

Credit: Associated Press


Elizabeth Holmes, former CEO of now defunct Theranos, last year settled charges of massive financial fraud brought by the U.S. SEC (Securities and Exchange Commission), and now faces criminal charges in California for her multiple misdeeds. But beyond the harm done to duped investors, fired employees and patients misled about blood test results, Holmes’ duplicity and pathological lies only add to the abuse being heaped on science today.

One of the linchpins of the scientific method, a combination of observation and reason developed and refined for more than two thousand years, is the replication step. Observations that can’t be repeated, preferably by independent investigators, don’t qualify as scientific evidence. When the observations are blood tests on actual patients, repeatability and reliability are obviously paramount. Yet Theranos failed badly in both these areas.

Holmes created a compact testing device, originally known as the Edison and later dubbed the miniLab, supposedly capable of inexpensively diagnosing everything from diabetes to cancer. But within a year or two, questions began to emerge about just how good it was.

Several Theranos scientists protested in 2013 that the technology wasn’t ready for the market. Instead of repeatable results, the company’s new machine was generating inaccurate and unreliable data for patients. Whistleblowers addressing a recent forum related how open falsification and cherry-picking of data were a regular part of everyday operations at Theranos. And technicians had to rerun tests if the results weren’t “acceptable” to management.

Much of this chicanery was exposed by Wall Street Journal investigative reporter John Carreyrou. In the wake of his sensational reporting, drugstore chain Walgreens announced in 2015 that it was suspending previous plans to establish blood testing facilities using Theranos technology in more than 40 stores across the U.S.

Among the horrors that Carreyrou documented in a later book was a Theranos test on a 16-year-old Arizona girl, whose faulty result showed a high level of potassium, meaning she could have been at risk of a heart attack. Tests on another Arizona woman suggested an impending stroke, for which she was unnecessarily rushed to a hospital emergency room. Hospital tests contradicted both sets of Theranos data. In January 2016, the Centers for Medicare and Medicaid Services, the oversight agency for blood-testing laboratories, declared that one of Theranos' labs posed "immediate jeopardy" to patients.

Closely allied to the repeatability required by the scientific method is transparency. Replication of a result isn’t possible unless the scientists who conducted the original experiment described their work openly and honestly – something that doesn’t always occur today. To be fair, there’s a need for a certain degree of secrecy in a commercial setting, in order to protect a company’s intellectual property. However, this need shouldn’t extend to internal operations of the company or to interactions between the very employees whose research is the basis of the company’s products.

But that’s exactly what happened at Theranos, where its scientists and technicians were kept in the dark about the purpose of their work and constantly shuffled from department to department. Physical barriers were erected in the research facility to prevent employees from actually seeing the lab-on-a-chip device, based on microfluidics and biochemistry, supposedly under development.

Only a handful of people knew that the much vaunted technology was in fact a fake. In a 2014 article in Fortune magazine, Holmes claimed that Theranos already offered more than 200 blood tests and was ramping up to more than 1,000. The reality was that Theranos could only perform 12 of the 200-plus tests, all of one type, on its own equipment and had to use third-party analyzers to carry out all the other tests. Worse, Holmes allegedly knew that the miniLab had problems with accuracy and reliability, was slower than some competing devices and, in some ways, wasn’t competitive at all with more conventional blood-testing machines.

Investors were fooled too. Among the luminaries deceived by Holmes were former U.S. Secretaries of State Henry Kissinger and George Shultz, recently resigned Secretary of Defense and retired General James Mattis – all of whom became members of Theranos’ “all-star board” – and media tycoon Rupert Murdoch. Initial meetings with new investors were often followed by a rigged demonstration of the miniLab purporting to analyze their just-collected finger-stick samples.

Holmes not only fleeced her investors but also did a great disservice to science. The story will shortly be immortalized in a movie starring Jennifer Lawrence as Holmes.

Next: How the Scientific Consensus Can Be Wrong

Consensus in Science: Is It Necessary?

An important but often misunderstood concept in science is the role of consensus. Some scientists argue that consensus has no place at all in science, that the scientific method alone with its emphasis on evidence and logic dictates whether a particular hypothesis stands or falls.  But the eventual elevation of a hypothesis to a widely accepted theory, such as the theory of evolution or the theory of plate tectonics, does depend on a consensus being reached among the scientific community.


In politics, consensus democracy refers to a consensual decision-making process by the members of a legislature – in contrast to traditional majority rule, in which minority opinions can be ignored by the majority. In science, consensus has long been more like majority rule, but based on facts or empirical evidence rather than personal convictions. Although observational evidence is sometimes open to interpretation, it was the attempt to redefine scientific consensus in the mold of consensus democracy that triggered a reaction to using the term in science.

This reaction was eloquently summarized by medical doctor and Jurassic Park author Michael Crichton, in a 2003 Caltech lecture titled “Aliens Cause Global Warming”:

“I want to pause here and talk about this notion of consensus, and the rise of what has been called consensus science. I regard consensus science as an extremely pernicious development that ought to be stopped cold in its tracks. Historically, the claim of consensus has been the first refuge of scoundrels; it is a way to avoid debate by claiming that the matter is already settled. …

Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world.

In science consensus is irrelevant. What is relevant is reproducible results. … There is no such thing as consensus science. If it’s consensus, it isn’t science. If it’s science, it isn’t consensus.”

What Crichton was talking about, I think, was the consensus democracy sense of the word – consensus forming the basis for legislation, for political action. But that’s not the same as scientific consensus, which can never be reached by taking a poll of scientists. Rather, a scientific consensus is built by the slow accumulation of unambiguous pieces of empirical evidence, until the collective evidence is strong enough to become a theory.

Indeed, the U.S. AAAS (American Association for the Advancement of Science) and NAS (National Academy of Sciences) both define a scientific theory in such terms. According to the NAS, for example,

 “The formal scientific definition of theory …  refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence.”

Contrary to popular opinion, theories rank highest in the scientific hierarchy – above laws, hypotheses and facts or observations. 

Crichton’s reactionary view of consensus as out of place in the scientific world has been voiced in the political sphere as well. Twentieth-century UK prime minister Margaret Thatcher once made the comment, echoing Crichton’s words, that political consensus was “the process of abandoning all beliefs, principles, values and policies in search of something in which no one believes, but to which no one objects; the process of avoiding the very issues that have to be solved, merely because you cannot get agreement on the way ahead.” Thatcher was a firm believer in majority rule.

A well-known scientist who shares Crichton’s opinion of scientific consensus is James Lovelock, ecologist and propounder of the Gaia hypothesis that the earth and its biosphere are a living organism. Lovelock has said of consensus:

“I know that such a word has no place in the lexicon of science; it is a good and useful word, but it belongs to the world of politics and the courtroom, where reaching a consensus is a way of solving human differences.”

But as discussed above, there is a role for consensus in science. The notion articulated by Crichton and Lovelock that consensus is irrelevant has arisen in response to the modern-day politicization of science. One element of their proclamations does apply, however. As pointed out by astrophysicist and author Ethan Siegel, the existence of a scientific consensus doesn’t mean that the “science is settled.” Consensus is merely the starting point on the way to a full-fledged theory.

Next week: How Elizabeth Holmes Abused Science to Deceive Investors

Corruption of Science: Scientific Fraud


One of the most troubling signs of the attack on science is the rising incidence of outright fraud, in the form of falsification and even fabrication of scientific data. A 2012 study published by the U.S. National Academy of Sciences noted an increase of almost 10 times since 1975 in the percentage of biomedical research articles retracted because of fraud. Although the current percentage retracted due to fraud was still very small at approximately 0.01%, the study authors remarked that this underestimated the actual percentage of fraudulent articles, since only a fraction of such articles are retracted.

One of the more egregious episodes of fraud was British gastroenterologist Andrew Wakefield’s claim in a 1998 study that 8 out of 12 children in the study had developed symptoms of autism after injection of the combination MMR (measles-mumps-rubella) vaccine. As a result of the well publicized study, hundreds of thousands of parents who had conscientiously followed immunization schedules in the past panicked and began declining MMR vaccine. And, unsurprisingly, outbreaks of measles subsequently occurred all over the world.

But Wakefield’s paper was slowly discredited over the next 12 years, until the prestigious medical journal The Lancet formally retracted it. The journal’s editors then went one step further in 2011 by declaring the paper fraudulent, citing unmistakable evidence that Wakefield had fabricated his data on autism and the MMR vaccine. Shortly after, the disgraced gastroenterologist’s medical license was revoked.

In 2015, Iowa State University researcher Dong Pyou Han received a prison sentence of four and a half years and was ordered to repay $7.2 million in grant funds, after being convicted of fabricating and falsifying data in trials of a potential HIV vaccine. On multiple occasions, Han had spiked blood samples from vaccinated rabbits with human HIV antibodies to create the illusion that the vaccine boosted immunity against HIV. Although Han was contrite in court, one of the prosecuting attorneys doubted his remorse, pointing out that Han’s job depended on research funding that was only renewed as a result of his bogus presentations showing the experiments were succeeding.

In 2018, officials at Harvard Medical School and Brigham and Women’s Hospital in Boston called for the retraction of a staggering 31 papers from the laboratory of once prominent Italian heart researcher Piero Anversa, because the papers "included falsified and/or fabricated data." Dr. Anversa’s research was based on the notion that the heart contains stem cells, a type of cell capable of transforming into other cells, that could regenerate cardiac muscle. But other laboratories couldn’t verify Anversa’s idea and were unable to reproduce his experimental findings – a major red flag, since replication of scientific data is a crucial part of the scientific method.

Despite this warning sign, the work spawned new companies claiming that their stem-cell injections could heal hearts damaged by a heart attack, and led to a clinical trial funded by the U.S. National Heart, Lung and Blood Institute. The Boston hospital’s parent company, however, agreed in 2017 to a $10 million settlement with the U.S. government over allegations that the published research of Anversa and two colleagues had been used to fraudulently obtain federal funding. Apart from data that the lab fabricated, the government alleged that it utilized invalid and improperly characterized cardiac stem cells, and maintained deliberately misleading records. Anversa has since left the medical school and hospital.

Scientific fraud today extends even to the publishing world. A recent sting operation targeted so-called predatory journals – those that charge authors a fee but offer no publication services, such as peer review, beyond publication itself. The investigation found that an amazing 33% of the journals contacted offered a fictitious scientist a position on their editorial boards, four of them immediately appointing the fake scientist as editor-in-chief.

It’s no wonder then that scientific fraud is escalating. In-depth discussion of recent cases can be found on several websites, such as For Better Science and Retraction Watch.

Next week: Consensus in Science: Is It Necessary?

Corruption of Science: The Reproducibility Crisis

One of the more obvious signs that modern science is ailing is the reproducibility crisis – the vast number of peer-reviewed scientific studies that can’t be replicated in subsequent investigations and whose findings turn out to be false. In the field of cancer biology, for example, researchers discovered that an alarming 89% of published results couldn’t be reproduced. Even in the so-called soft science of psychology, the rate of irreproducibility hovers around 60%. And to make matters worse, falsification and outright fabrication of scientific data is on the rise.


The reproducibility crisis is drawing a lot of attention from scientists and nonscientists alike. In 2018, the U.S. NAS (the National Association of Scholars in this case, not the Academy of Sciences), an academic watchdog organization that normally focuses on the liberal arts and education policy, published a particularly comprehensive examination of the problem. Although the emphasis in the NAS report is on the misuse of statistical methods in scientific research, the report discusses possible causes of irreproducibility and presents a laundry list of recommendations for addressing the crisis.    

The crisis is especially acute in the biomedical sciences. Over 10 years ago, Greek medical researcher John Ioannidis argued that the majority of published research findings in medicine were wrong. This included epidemiological studies in areas such as dietary fat, vaccination and GMO foods as well as clinical trials and cutting-edge research in molecular biology. 

In 2011, a team at Bayer HealthCare in Germany reported that only about 25% of published preclinical studies on potential new drugs could be validated. Some of the unreproducible papers had catalyzed entirely new fields of research, generating hundreds of secondary publications. More worryingly, other papers had led to clinical trials that were unlikely to be of any benefit to the participants.

Author Richard Harris describes another disturbing example, of research on breast cancer that was conducted on misidentified skin cancer cells. The sloppiness resulted in thousands of papers being published in prominent medical journals on the wrong cancer. Harris blames the sorry condition of current research on scientists taking shortcuts around the once venerated scientific method.

Cutting corners to pursue short-term success is but one consequence of the pressures experienced by today’s scientists. These pressures include the constant need to win research grants as well as to publish research results in high-impact journals. The more spectacular a submitted paper is, the more likely it is to be accepted, but often at the cost of research quality. It has become more important to be first to publish, or to present sensational findings, than to be correct.

Another consequence of the bind in which scientists find themselves is the ever increasing degree of misunderstanding and misuse of statistics, as detailed in the NAS report. Among other abuses, the report cites spurious correlations in data that researchers claim to be “statistically significant”; the improper use of statistics due to poor understanding of statistical methodology; and the conscious or unconscious biasing of data to fit preconceived ideas.

Ioannidis links irreproducibility to the habit of assigning too much importance to the statistical p-value – the probability of obtaining a result at least as extreme as the one observed if chance alone were at work. The smaller the p-value, the less plausible it is that chance explains the data, and the stronger the case for a new hypothesis. Although p-values below 0.05 are commonly regarded as statistically significant, using this condition as a criterion for publication means that one time in twenty, the experimental data could be the result of chance alone. The NAS report recommends defining statistical significance as a p-value less than 0.01 rather than 0.05 – a much more demanding standard.
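The one-in-twenty figure is easy to verify with a simulation. Under the null hypothesis (chance alone, no real effect), p-values are uniformly distributed between 0 and 1, so the fraction of null experiments that clear a given significance threshold by luck is simply the threshold itself. A minimal sketch, with an illustrative function name and trial count:

```python
import random

random.seed(1)

def false_positive_rate(alpha, trials=200_000):
    """Fraction of null experiments declared 'significant' at level alpha.

    Under the null hypothesis, p-values are uniform on [0, 1], so each
    experiment's p-value is modeled as a uniform random draw.
    """
    return sum(random.random() < alpha for _ in range(trials)) / trials

print(round(false_positive_rate(0.05), 2))  # ~0.05: about 1 experiment in 20
print(round(false_positive_rate(0.01), 2))  # ~0.01: about 1 in 100
```

Tightening the threshold from 0.05 to 0.01, as the NAS report suggests, cuts the chance-alone publication rate fivefold.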

The report further recommends integration of basic statistics into curricula at high-school and college levels, and rigorous educational programs in those disciplines that rely heavily on statistics. Beyond statistics, other suggested reforms include having researchers make their data available for public inspection, which doesn’t often occur at present, and encouraging government agencies to fund projects designed purely to replicate earlier research, which again is rare today. The NAS believes that measures like these will help to improve reproducibility in scientific studies and to keep advocacy and the politicization of science at bay.

Next week: Corruption of Science: Scientific Fraud