Coronavirus Epidemiological Models: (2) How Completely Different the Models Can Be

Two of the most crucial predictions of any epidemiological model are how fast the disease in question will spread, and how many people will die from it. For the COVID-19 pandemic, the various models differ dramatically in their projections.

A prominent model, developed by an Imperial College London research team and described in the previous post, assesses the effect of mitigation and suppression measures on the spread of the pandemic in the UK and U.S. Without any intervention at all, the model predicts that a whopping 500,000 people would die from COVID-19 in the UK and 2.2 million in the more populous U.S. These are the numbers that so alarmed the governments of the two countries.

Initially, the Imperial researchers claimed their numbers could be halved (to 250,000 and 1.1 million deaths, respectively) by implementing a nationwide lockdown of individuals and nonessential businesses. Lead scientist Neil Ferguson later revised the UK estimate drastically downward to 20,000 deaths. But it appears this estimate would require repeating the lockdown periodically for a year or longer, until a vaccine becomes available. Ferguson didn’t give a corresponding reduced estimate for the U.S., but it would be approximately 90,000 deaths if the same scaling applies.
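
As a back-of-the-envelope check (my own arithmetic, not a calculation published by the Imperial team), applying the UK reduction factor to the U.S. figure gives:

```python
# Rough scaling check, assuming the U.S. estimate shrinks by the same
# factor as the revised UK estimate -- an assumption of this post, not
# a calculation published by the Imperial team.
uk_original, uk_revised = 250_000, 20_000
us_original = 1_100_000

scaling = uk_revised / uk_original      # 0.08
print(f"Implied U.S. estimate: {us_original * scaling:,.0f} deaths")  # 88,000
```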

This reduced Imperial estimate for the U.S. is somewhat above the latest projection of a U.S. model, developed by the Institute for Health Metrics and Evaluation at the University of Washington in Seattle. The Washington model estimates the total number of American deaths at about 60,000, assuming national adherence to stringent stay-at-home and social distancing measures. The figure below shows the predicted number of daily deaths as the U.S. epidemic peaks over the coming months, as estimated this week. The peak of 2,212 deaths on April 12 could be as high as 5,115 or as low as 894, the Washington team says.

COVID.jpg

The Washington model is based on data from local and national governments in areas of the globe where the pandemic is well advanced, whereas the Imperial model relies primarily on data from China and Italy alone. Peaks in each U.S. state are expected to range from the second week of April through the last week of May.

Meanwhile, a rival University of Oxford team has put forward an entirely different model, which suggests that up to 68% of the UK population may already have been infected. The virus may have been spreading its tentacles, they say, for a month or more before the first death was reported. If so, the UK crisis would be over in two to three months, and the total number of deaths would be below the 250,000 Imperial estimate, due to a high level of herd immunity among the populace. No second wave of infection would occur, contrary to the predictions of the Imperial and Washington models.

Nevertheless, that’s not the only possible interpretation of the Oxford results. In a series of tweets, Harvard public health postdoc James Hay has explained that the proportion of the UK population already infected could be anywhere between 0.71% and 56%, according to his calculations using the Oxford model. The higher the percentage infected and therefore immune before the disease began to escalate, the lower the percentage of people still at risk of contracting severe disease, and vice versa.

The Oxford model shares some assumptions with the Imperial and Washington models, but differs slightly in others. For example, it assumes a shorter period during which an infected individual is infectious, and a later date when the first infection occurred. However, as mathematician and infectious disease specialist Jasmina Panovska-Griffiths explains, the two models actually ask different questions. The question asked by the Imperial and Washington groups is: What strategies will flatten the epidemic curve for COVID-19? The Oxford researchers ask the question: Has COVID-19 already spread widely?  

Without the use of any model, Stanford biophysicist and Nobel laureate Michael Levitt has come to essentially the same conclusion as the Oxford team, based simply on an analysis of the available data. Levitt’s analysis focuses on the rate of increase in the daily number of new cases: once this rate slows down, so does the death rate, and the end of the outbreak is in sight.
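
The gist of such an analysis can be captured in a few lines of code. Here is a minimal sketch with made-up case numbers (Levitt’s actual methodology is more sophisticated): compute the day-over-day growth ratio of new cases and watch for a sustained decline.

```python
# Minimal sketch of a Levitt-style trajectory check, using made-up
# numbers: once the growth ratio of daily new cases falls steadily
# toward 1.0, the peak of the outbreak is in sight.
daily_new_cases = [100, 130, 165, 200, 230, 250, 260, 262, 255]

growth = [today / yesterday
          for yesterday, today in zip(daily_new_cases, daily_new_cases[1:])]
print("growth ratios:", [round(g, 2) for g in growth])

if all(later <= earlier for earlier, later in zip(growth, growth[1:])):
    print("growth is steadily slowing -- the peak is in sight")
```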

By examining data from 78 of the countries reporting more than 50 new cases of COVID-19 each day, Levitt was able to correctly predict the trajectory of the epidemic in most countries. In China, once the number of newly confirmed infections began to fall, he predicted that the total number of COVID-19 cases would be around 80,000, with about 3,250 deaths – a remarkably accurate forecast, though doubts exist about the reliability of the Chinese numbers. In Italy, where the caseload was still rising, his analysis indicated that the outbreak wasn’t yet under control, as turned out to be tragically true.

Levitt, however, agrees with the need for strong measures to contain the pandemic, as well as earlier detection of the disease through more widespread testing.

Next: Coronavirus Epidemiological Models: (3) How Inadequate Testing Limits the Evidence



Coronavirus Epidemiological Models: (1) What the Models Predict

Amid all the brouhaha over COVID-19 – the biggest respiratory virus threat globally since the 1918 influenza pandemic – confusion reigns over exactly what epidemiological models of the disease are predicting. That’s important as the world begins restricting everyday activities and effectively shutting down national economies, based on model predictions.

In this and subsequent blog posts, I’ll examine some of the models being used to simulate the spread of COVID-19 within a population. As readers will know, I’ve commented at length in this blog on the shortcomings of computer climate models and their failure to accurately predict the magnitude of global warming. 

Epidemiological models, however, are far simpler than climate models and involve far fewer assumptions. The propagation of disease from person to person is much better understood than the vagaries of global climate. A well-designed disease model can help predict the likely course of an epidemic, and can be used to evaluate the most realistic strategies for containing it.
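
To illustrate just how simple the core of an epidemiological model can be, here is the textbook SIR (susceptible-infected-recovered) model in a few lines of Python. This is only a bare-bones sketch with illustrative parameter values, far cruder than the models examined in these posts:

```python
# Textbook SIR epidemic model -- a bare-bones sketch for illustration,
# far simpler than the Imperial College model discussed below.
# beta = transmission rate per day, gamma = recovery rate per day;
# their ratio beta/gamma is the basic reproduction number R0.
def sir(population, infected0, beta=0.3, gamma=0.1, days=300):
    s, i, r = population - infected0, infected0, 0.0
    infected_curve = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_curve.append(i)
    return infected_curve

curve = sir(population=66_000_000, infected0=100)  # UK-sized population
peak = max(range(len(curve)), key=curve.__getitem__)
print(f"peak on day {peak}: {curve[peak]:,.0f} simultaneously infected")
# Reducing beta (mitigation and suppression) lowers and delays the peak --
# the "flattening of the curve" discussed below.
```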

Following the initial coronavirus outbreak that began in Wuhan, China, various attempts have been made to model the spread of the disease. One of the most comprehensive studies is a report published last week by a research team at Imperial College London, which models the effect of mitigation and suppression control measures on the pandemic’s spread in the UK and U.S.

Mitigation focuses on slowing the insidious spread of COVID-19, by taking steps such as requiring home quarantine of infected individuals and their families, and imposing social distancing of the elderly; suppression aims to stop the epidemic in its tracks, by adding more drastic measures such as social distancing of everyone and the closing of nonessential businesses and schools. Both tactics are currently being used not only in the UK and U.S., but also in many other countries – especially in Italy, hit hard by the epidemic.

The model results for the UK are illustrated in the figure below, which shows how the different strategies are expected to affect demand for critical care beds in UK hospitals over the next few months. You can see the much-cited “flattening of the curve,” referring to the bell-shaped curve that portrays the peaking of critical care cases, and related deaths, as the disease progresses. The Imperial College model assumes that 50% of those in critical care will die, based on expert clinical opinion. In the U.S., the epidemic is predicted to be more widespread than in the UK and to peak slightly later.

COVID-19 Imperial College.jpg

What set alarm bells ringing was the model’s conclusion that, without any intervention at all, approximately 0.5 million people would die from COVID-19 in the UK and 2.2 million in the more populous U.S. But these numbers could be halved (to 250,000 and 1.1-1.2 million deaths, respectively) if all the proposed mitigation and suppression measures are put into effect, say the researchers.

Nevertheless, the question then arises of how long such interventions can or should be maintained. The blue shading in the figure above shows the 3-month period during which the interventions are assumed to be enforced. But because there is no cure for the disease at present, it’s possible that a second wave of infection will occur once interventions are lifted. This is depicted in the next figure, assuming a somewhat longer 5-month period of initial intervention.

COVID-19 Imperial College 2nd wave.jpg

The advantage of such a delayed peaking of the disease’s impact would be a lessening of pressure on an overloaded healthcare system, allowing more time to build up necessary supplies of equipment and reducing critical care demand – in turn reducing overall mortality. In addition, stretching out the timeline for a sufficiently long time could help bolster herd immunity. Herd immunity from an infectious disease results when enough people become immune to the disease through either recovery or vaccination, both of which reduce disease transmission. A vaccine, however, probably won’t be available until 2021, even with the currently accelerated pace of development.
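
The herd immunity threshold can be written as a simple formula in the basic reproduction number R0, the average number of people each infected person goes on to infect. As a worked illustration (using R0 of about 2.5, a value widely quoted for COVID-19 at the time, not a figure taken from the Imperial report):

```latex
% Herd immunity threshold p_c as a function of the basic reproduction
% number R_0. R_0 ~ 2.5 is an illustrative value for COVID-19, widely
% quoted at the time; it is not taken from the Imperial College report.
p_c \;=\; 1 - \frac{1}{R_0} \;\approx\; 1 - \frac{1}{2.5} \;=\; 0.6
```

That is, roughly 60% of the population would need to become immune, through infection or vaccination, before the epidemic dies out on its own.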

Whether the assumptions behind the Imperial College model are accurate is an issue we’ll look at in a later post. The model is highly granular, reaching down to the level of the individual and based on high-resolution population data, including census data, data from school districts, and data on the distribution of workplace size and commuting distance. Contacts between people are examined within a household, at school, at work and in social settings.

The dilemma posed by the model’s predictions is obvious. It’s necessary to balance minimizing the death rate from COVID-19 with the social and economic disruption caused by the various interventions, and with the likely period over which the interventions can be maintained.

Next: Coronavirus Epidemiological Models: (2) How Completely Different the Models Can Be

Science on the Attack: Cancer Immunotherapy

As a diversion from my regular blog posts examining how science is under attack, occasional posts such as this one will showcase examples of science nevertheless on the attack – to illustrate the power of the scientific method in tackling knotty problems, even when the discipline itself is under siege. This will exclude technology, which has always thrived. The first example is from the field of medicine: cancer immunotherapy.

Cancer is a vexing disease, in fact a slew of different diseases, in which abnormal cells proliferate uncontrollably and can spread to healthy organs and tissues. It’s one of the leading causes of death worldwide, especially in high-income countries. Each type of cancer, such as breast, lung or prostate, has as many as 10 different sub-types, vastly complicating efforts to conquer the disease.

Although the role of the body’s immune system is to detect and destroy abnormal cells, as well as invaders like foreign bacteria and viruses, cancer can evade the immune system through several mechanisms that shut down the immune response.

Normally, the immune system deploys T-cells – a type of white blood cell – to recognize abnormal cells. It does this by looking for flags, or protein fragments called antigens, displayed on the cell surface that signal the cell’s identity. The T-cells, sometimes called the warriors of the immune system, identify and then kill the offending cells.

But cancer cells can avoid annihilation by exploiting a switch on the T-cell known as an immune checkpoint, whose normal purpose is to prevent T-cells from becoming overzealous and generating too powerful an immune response. By tripping this checkpoint, the cancer switches the T-cell off, taking it out of the action and allowing the cancer to grow. The breakthrough of cancer immunotherapy was the discovery of drugs that act as checkpoint inhibitors: by blocking the checkpoint, they keep the T-cell switched on and therefore enable the immune system to do its job of attacking the cancerous cells.

cancer immunotherapy.jpg

Such a discovery was no easy task, however. Attempts to harness the immune system to fight cancer go back over 100 years, but none of them worked consistently. The only options available to cancer patients were the standard regimen of surgery, chemotherapy, radiation and hormonal treatments.

In what the British Society for Immunology described as “one of the most extraordinary breakthroughs in modern medicine,” researchers James P. Allison and Tasuku Honjo were awarded the 2018 Nobel Prize in Physiology or Medicine for the discoveries underlying checkpoint inhibitor drugs – discoveries that represented the culmination of over a decade’s painstaking laboratory work. Allison explored one type of checkpoint (known as CTLA-4), Honjo another (known as PD-1).

Early clinical tests of both types of inhibitor showed spectacular results. In several patients with advanced melanoma, an aggressive type of skin cancer, the cancer completely disappeared when treated with a drug based on Allison’s research. In patients with other types of cancer such as lung cancer, renal cancer and lymphoma, treatment with a drug based on Honjo’s research resulted in long-term remission, and may have even cured metastatic cancer – previously not considered treatable.

Yet despite this initial promise, it’s been found that checkpoint inhibitor immunotherapy is effective for only a small portion of cancer patients: genetic differences are no doubt at play. States Dr. Roy Herbst, chief of medical oncology at Yale Medicine, “The sad truth about immunotherapy treatment in lung cancer is that it shrinks tumors in only about one or two out of 10 patients.” More research and possibly drug combinations will be needed, Dr. Herbst says, to extend the revolutionary new treatment to more patients.

Another downside is possible side effects from immune checkpoint drugs, caused by overstimulation of the immune system and consequent autoimmune reactions in which the immune system attacks normal, healthy tissue. But such reactions are usually manageable and not life-threatening.

Cancer immunotherapy is but one of many striking recent advances in the medical field, illustrating how the biomedical sciences can be on the attack even as they come under assault, especially from medical malfeasance in the form of irreproducibility and fraud.

Next: Coronavirus Epidemiological Models: (1) What the Models Predict

The Futility of Action to Combat Climate Change: (2) Political Reality

In the previous post, I showed how scientific and engineering realities make the goal of taking action to combat climate change inordinately expensive and unattainable in practice for decades to come, even if climate alarmists are right about the need for such action. This post deals with the equally formidable political realities involved.

By far the biggest barrier is the unlikelihood that the signatories to the 2015 Paris Agreement will have the political will to adhere to their voluntary pledges for reducing greenhouse gas emissions. Lacking any enforcement mechanism, the agreement is merely a “feel good” document that allows nations to signal virtuous intentions without actually having to make the hard decisions called for by the agreement. This reality is tacitly admitted by all the major CO2 emitters.

Evidence that the Paris Agreement will achieve little is contained in the figure below, which depicts the ability of 58 of the largest emitters, accounting for 80% of the world’s greenhouse emissions, to meet the present goals of the accord. The goals are to hold “the increase in the global average temperature to well below 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels,” preferably limiting the increase to only 1.5 degrees Celsius (2.7 degrees Fahrenheit).

Paris commitments.jpg

Only seven nations have declared emission reductions big enough to reach the Paris Agreement’s goals, including just one of the largest emitters, India. The seven largest emitters, apart from India, which currently emits 7% of the world’s CO2, are China (28%), the USA (14%), Russia (5%), Japan (3%), Germany (2%, biggest in the EU) and South Korea (2%). The EU designation here includes the UK and 27 European nations.

As the following figure shows, annual CO2 emissions from both China and India are rising, along with those from the other developing nations (“Rest of world”). Emissions from the USA and EU, on the other hand, have been steady or falling for several decades. Ironically, the USA’s emissions in 2019, which dropped by 2.9% from the year before, were no higher than in 1993 – despite the country’s withdrawal from the Paris Agreement.

emissions_by_country.jpg

As the developing nations, including China and India, currently account for 76% of global emissions, it’s difficult to imagine that the world as a whole will curtail its emissions anytime soon.

China, although a Paris Agreement signatory, has declared its intention of increasing its annual CO2 emissions until 2030 in order to fully industrialize – a task requiring vast amounts of additional energy, mostly from fossil fuels. The country already has over 1,000 GW of coal-fired power capacity and another 120 GW under construction. China is also financing or building 250 GW of coal-fired capacity as part of its Belt and Road Initiative across the globe. Electricity generation in China from burning coal and natural gas accounted for 70% of the generation total in 2018, compared with 26% from renewables, two thirds of which came from hydropower.

India, which has also ratified the Paris Agreement, believes it can meet the agreement’s aims even while continuing to pour CO2 into the atmosphere. Coal’s share of Indian primary energy consumption, which is predominantly for electricity generation and steelmaking, is expected to decrease slightly from 56% in 2017 to 48% in 2040. However, achieving even this reduction depends on doubling the share of renewables in electricity production, an objective that may not be possible because of land acquisition and funding barriers.

Nonetheless, it’s neither China nor India that stands in the way of making the Paris Agreement a reality, but rather the many third world countries that want to reach the same standard of living as the West – a lifestyle that has been attained through the availability of cheap, fossil fuel energy. In Africa today, for example, 600 million people don’t have access to electricity and 900 million are forced to cook with primitive stoves fueled by wood, charcoal or dung, all of which create health and environmental problems. Coal-fired electricity is the most affordable remedy for the continent.

In the words of another writer, no developing country will hold back from increasing their CO2 emissions “until they have achieved the same levels of per capita energy consumption that we have here in the U.S. and in Europe.” This drive for a better standard of living, together with the lack of any desire on the part of industrialized countries to lower their energy consumption, spells disaster for realizing the lofty goals of the Paris Agreement.

Next: Science on the Attack: Cancer Immunotherapy

The Futility of Action to Combat Climate Change: (1) Scientific and Engineering Reality

Amidst the clamor for urgent action to supposedly combat climate change, the scientific and engineering realities of such action are usually overlooked. Let’s imagine for a moment that we humans are indeed to blame for global warming and that catastrophe is imminent without drastic measures to curb fossil fuel emissions – views not shared by climate skeptics like myself.

In this and the subsequent blog post, I’ll show how proposed mitigation measures are either impractical or futile. We’ll start with the 2015 Paris Agreement – the international agreement on cutting greenhouse gas emissions, which 195 nations, together with many of the world’s scientific societies and national academies, have signed on to.

The agreement endorses the assertion that global warming comes largely from our emissions of greenhouse gases, and commits its signatories to “holding the increase in the global average temperature to well below 2 degrees Celsius (3.6 degrees Fahrenheit) above pre-industrial levels,” preferably limiting the increase to only 1.5 degrees Celsius (2.7 degrees Fahrenheit). According to NASA, current warming is close to 1 degree Celsius (1.8 degrees Fahrenheit).

How realistic are these goals? To achieve them, the Paris Agreement requires nations to declare a voluntary “nationally determined contribution” toward emissions reduction. However, it has been estimated by researchers at MIT (Massachusetts Institute of Technology) that, even if all countries were to follow through with their voluntary contributions, the actual mitigation of global warming by 2100 would be at most only about 0.2 degrees Celsius (0.4 degrees Fahrenheit).

Higher estimates, ranging up to 0.6 degrees Celsius (1.1 degrees Fahrenheit), assume that countries boost their initial voluntary emissions targets in the future. The agreement actually stipulates that countries should submit increasingly ambitious targets every five years, to help attain its long-term temperature goals. But the targets are still voluntary, with no enforcement mechanism.

Given that most countries are already falling behind their initial pledges, mitigation of more than 0.2 degrees Celsius (0.4 degrees Fahrenheit) by 2100 seems highly unlikely. Is it worth squandering the trillions of dollars necessary to achieve such a meager gain, even if the notion that we can control the earth’s thermostat is true?     

Another reality check is the limitations of renewable energy sources, which will be essential to our future if the world is to wean itself off fossil fuels that today supply almost 80% of our energy needs. The primary renewable technologies are wind and solar photovoltaics. But despite all the hype, wind and solar are not yet cost competitive with cheaper coal, oil and gas in most countries, when subsidies are ignored. Higher energy costs can strangle a country’s economy.

Source: BP

And it will be many years before renewables are practical alternatives to fossil fuels. It’s generally unappreciated by renewable energy advocates that full implementation of a new technology can take many decades. That’s been demonstrated again and again over the past century in areas as diverse as electronics and steelmaking.

The claim is often made, especially by proponents of the so-called Green New Deal, that scale-up of wind and solar power could be accomplished quickly by mounting an effort comparable to the U.S. moon landing program in the 1960s. But the claim ignores the already mature state of several technologies crucial to that program at the outset. Rocket technology, for example, had been developed by the Germans and used to terrify Londoners during World War II. The vacuum technology needed for the Apollo crew modules and spacesuits dates from the beginning of the 20th century.

Renewable energy.jpg

Such advantages don’t apply to renewable energy. The main engineering requirements for widespread utilization of wind and solar power are battery storage capability, to store energy for those times when the wind stops blowing or the sun isn’t shining, and redesign of the electric grid.

But even in the technologically advanced U.S., battery storage is an order of magnitude too expensive today for renewable electricity to be cost competitive with electricity generated from fossil fuels. That puts battery technology where rocket technology was more than 25 years before Project Apollo was able to exploit its use in space. Likewise, conversion of the U.S. power grid to renewable energy would cost trillions of dollars – and, while thought to be attainable, is currently seen as merely “aspirational.”

The bottom line for those who believe we must act urgently on the climate “emergency”: it’s going to take a lot of time and money to do anything at all, and whatever we do may make little difference to the global climate anyway.

Next: The Futility of Action to Combat Climate Change: (2) Political Reality

Australian Bushfires: Ample Evidence That Past Fires Were Worse

Listening to Hollywood celebrities and the mainstream media, you’d think the current epidemic of bushfires in Australia means the apocalypse is upon us. With vast tracts of land burned to the ground, dozens of people and millions of wild animals killed, and thousands of homes destroyed, climate alarmists would have you believe it’s all because of global warming.

Not only is there no scientific evidence that the frequency or severity of wildfires is on the rise in a warming world, but the evidence clearly shows that the present Australian outbreak is unexceptional.

Bushfire.jpg

Although almost 20 million hectares (50 million acres, or 77,000 square miles) nationwide have burned so far, this is less than 17% of the staggeringly large area incinerated in the 1974-75 bushfire season and less than the burned area in three other conflagrations. Politically correct believers in the narrative of human-caused climate change seem unaware of such basic facts about the past.

The catastrophic fires in the 1974-75 season consumed 117 million hectares (300 million acres), which is 15% of the land area of the whole continent. Because nearly two thirds of the burned area was in remote parts of the Northern Territory and Western Australia, relatively little human loss was incurred, though livestock and native animals such as lizards and red kangaroos suffered.

The Northern Territory was also the location of major bushfires in the 1968-69, 1969-70 and 2002-03 seasons that burned areas of 40 million, 45 million and 38 million hectares (99 million, 110 million and 94 million acres), respectively. That climate change wasn’t the cause should be obvious from the fact that the 1968-69 and 1969-70 fires occurred during a 30-year period of global cooling from 1940 to 1970.

Despite the ignorance and politically charged rhetoric of alarmists, the primary cause of all these terrible fires in Australia has been the lack of intentional or prescribed burning. The practice was used by the Aboriginal population for as long as 50,000 years, but early settlers abandoned it after trying unsuccessfully to copy indigenous fire techniques – lighting fires so hot that the flammable undergrowth, the main target of prescribed burns, actually germinated more after burning.

The Aboriginals, like native people everywhere, had a deep knowledge of the land. They knew what types of fires to burn for different types of terrain, how long to burn, and how frequently. This knowledge enabled them to keep potential wildfire fuel such as undergrowth and certain grasses in check, thereby avoiding the more intense and devastating bushfires of the modern era. As the Aboriginals found, small-scale fires can be a natural part of forest ecology.

Only recently has the idea of controlled burning been revived in Australia and the U.S., though the approach has been practiced in Europe for many years. Direct evidence of the efficacy of controlled burning is presented in the figure below, which shows how bushfires in Western Australia expanded significantly as prescribed burning was suppressed over the 50 years from 1963 to 2013.

WA bushfires.jpg

Bushfires have always been a feature of life in Australia. One of the earliest recorded outbreaks was the so-called Black Thursday bushfires of 1851, when then record-high temperatures up to 47.2 degrees Celsius (117 degrees Fahrenheit) and strong winds exacerbated fires that burned 5 million hectares (12 million acres), killed 12 people and terrified the young colony of Victoria. The deadliest fires of all time were the Black Saturday bushfires of 2009, also in Victoria, with 173 fatalities.    

Pictures of charred koala bears, homes engulfed in towering flames and residents seeking refuge on beaches are disturbing. But there’s simply no evidence for the recent statement in Time magazine by Malcolm Turnbull, former Australian Prime Minister, that “Australia’s fires this summer – unprecedented in the scale of their destruction – are the ferocious but inevitable reality of global warming.” Turnbull and climate alarmists should know better than to blame wildfires on this popular but erroneous belief.

Next: The Futility of Action to Combat Climate Change: (1) Scientific and Engineering Reality

When Science Is Literally under Attack: Ad Hominem Attacks

Ad hominem.jpg

Science by its nature is contentious. Before a scientific hypothesis can be elevated to a theory, a solid body of empirical evidence must be accumulated and differing interpretations debated, often vigorously. But while spirited debate and skepticism of new ideas are intrinsic to the scientific method, stooping to personal hostility and ad hominem (against the person) attacks is an abuse of the discipline.

If the animosity were restricted to words alone, it could be excused as inevitable human tribalism. Loyalty to the tribe and conformity are much more highly valued than dissent or original thinking; ad hominem attacks are merely a defensive measure against proposals that threaten tribal unity. 

However, when the acrimony in scientific debate goes beyond verbal to physical abuse, either threatened or actual, then science itself is truly under assault. Unfortunately, such vicious behavior is becoming all too common.  

A recent example was a physical attack on pediatrician and California state senator Richard Pan, who had authored a bill to tighten a previous law allowing medical exemptions from vaccination for the state’s schoolchildren. After enduring vitriolic ad hominem attacks and multiple death threats calling for him to be “eradicated” or hung by a noose, Pan had to get a court restraining order against an anti-vaccinationist who forcefully shoved the lawmaker on a Sacramento city street in August, 2019, during debate on the exemptions bill. Although the attacker was arrested on suspicion of battery, Pan told the court he was fearful for his safety.

Pan has long drawn the anger of anti-vaccine advocates in California for his support of mandatory vaccination laws for children. But science is unquestionably on his side. Again and again, it’s been demonstrated that those U.S. states with lower exemption rates for vaccination enjoy lower levels of infectious disease. It’s this scientific evidence of the efficacy of immunization that has prompted many states to take a tougher stand on exemptions, and even to abolish nonmedical exemptions – for religious or philosophical reasons – altogether.

Medical exemptions are necessary for those children who can’t be vaccinated at all owing to conditions such as chemotherapy for cancer, immunosuppressive therapy for a transplant, or steroid therapy for asthma. Successful protection of a community from an infectious disease requires more than a certain percentage of the populace to be vaccinated against the disease – 94% in the case of measles, for example. Once this herd immunity condition has been met, viruses and bacteria can no longer spread, just as sheer numbers protect a herd of animals from predators.
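
The 94% measles figure follows from the standard herd immunity formula, given the commonly cited measles basic reproduction number R0 of roughly 12 to 18 (the formula and R0 range are textbook epidemiology, added here for illustration):

```latex
% Herd immunity threshold for measles, taking R_0 ~ 16 from the commonly
% cited range of 12-18.
p_c \;=\; 1 - \frac{1}{R_0} \;\approx\; 1 - \frac{1}{16} \;\approx\; 0.94
```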

But the earlier California law was being abused by some doctors, who were issuing exemptions that were not medically necessary at the request of anti-vaccine parents. The practice had caused the immunization rate for kindergarten-aged children to fall below 95% state-wide, and below 90% in several counties. As a result, measles was on the rise again in California.

Shortly after Pan’s bill was passed in September, 2019, another disturbing incident occurred in the legislature itself. An anti-vaccine activist hurled her menstrual cup containing human blood from a balcony onto the desks of state senators, dousing several of the lawmakers. The woman, who yelled “That’s for the dead babies,” was subsequently arrested and faces multiple charges.

In the lead-up to such violence, the ad hominem attacks on Pan were no more virulent than those directed a century ago at Alfred Wegener, the German meteorologist who proposed the revolutionary theory of continental drift. Wegener was vehemently criticized by his peers because his theory threatened the geology establishment, which clung to the old consensus of rigidly fixed continents. One critic harshly dismissed his hypothesis as “footloose,” and geologists scorned what they called Wegener’s “delirious ravings” and other symptoms of “moving crust disease.” It wasn’t until the 1960s that continental drift theory was vindicated.

What’s worrying is the escalation of such defamatory rhetoric into violence. The intimidation of California legislators in the blood-throwing incident, together with the earlier street attack on Pan and the death threats made to other senators, is a prime example. The anti-vaccinationists responsible are attacking both democracy and science.

Next: Australian Bushfires: Ample Evidence That Past Fires Were Worse

No Evidence That Snow Is Disappearing

“Let It Snow! Let It Snow! Let It Snow!”

- 1945 Christmas song

You wouldn’t know it from mainstream media coverage but, far from disappearing, annual global snowfall is actually becoming heavier as the world warms. This is precisely the opposite of what climate change alarmists predicted as the global warming narrative took shape decades ago.

The prediction of less snowy winters was based on the simplistic notion that a higher atmospheric temperature would allow fewer snowflakes to form and keep less of the pearly white powder frozen on the ground. But, as any meteorologist will tell you, the crucial ingredient for snow formation, apart from near-freezing temperatures, is moisture in the air. Because warmer air can hold more water vapor, global warming in general produces more snow when the temperature drops.
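
The physics can be made quantitative with the Clausius-Clapeyron relation, which governs how fast the water-holding capacity (saturation vapor pressure) of air grows with temperature. As a standard back-of-the-envelope estimate (my own sketch, using typical values for the constants):

```latex
% Clausius-Clapeyron relation for the saturation vapor pressure e_s of water.
% L ~ 2.5 x 10^6 J/kg is the latent heat of vaporization,
% R_v ~ 461 J/(kg K) the gas constant for water vapor, T ~ 288 K.
\frac{1}{e_s}\,\frac{de_s}{dT} \;=\; \frac{L}{R_v T^2}
  \;\approx\; \frac{2.5\times 10^6}{461 \times 288^2} \;\approx\; 0.065\ \mathrm{K}^{-1}
```

In other words, each degree Celsius of warming lets the air hold roughly 6-7% more water vapor – the raw material for heavier snowfalls near the freezing point.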

This observation has been substantiated multiple times in recent years, in the Americas, Europe and Asia. As just one example, the eastern U.S. experienced 29 high-impact winter snowstorms in the 10 years from 2009 through 2018. There were never more than 10 in any prior 10-year period.

The overall winter snow extent in the U.S. and Canada combined is illustrated in the figure below, which shows the monthly extent, averaged over the winter, from 1967 to 2019. Clearly, total snow cover is increasing, not diminishing.

Snow NAmerica 1967-2019.jpg

In the winter of 2009-10, record snowfall blanketed the entire mid-Atlantic coast of the U.S. in an event called Snowmaggedon, contributing to the record total for 2010 in the figure above. The winter of 2013-14 was the coldest and snowiest since the 1800s in parts of the Great Lakes. Further north in Canada, following an exceptionally cold winter in 2014-15, the lower mainland of British Columbia endured the longest cold snap in 32 years during the winter of 2016-17. That same winter saw record heavy Canadian snowfalls, in both British Columbia in the west and the maritime provinces in the east. 

The trend toward snowier winters is reflected in the number of North American blizzards over the same period, depicted in the next figure. Once again, it’s obvious that snow and harsh conditions have been on the rise for decades, especially as the globe warmed from the 1970s until 1998.  

US blizzard frequency 1960-2014.jpeg

But truckloads of snow haven’t fallen only in North America. The average monthly winter snow extent for the whole Northern Hemisphere from 1967 to 2019, illustrated in the following figure, shows a trend identical to that for North America.

Snow NH 1967-2019.jpg

Specific examples include abnormally chilly temperatures in southern China in January 2016, accompanying the first snow in Guangzhou since 1967 and the first in nearby Nanning since 1983. Extremely heavy snow that fell in the Panjshir Valley of Afghanistan in February 2015 killed over 200 people. During the winters of 2011-12 and 2012-13, much of central and eastern Europe experienced very cold and snowy weather, as it did once more in the winter of 2017-18. Eastern Ireland had its heaviest snowfalls for more than 50 years, with totals exceeding 50 cm (20 inches).

Despite all this evidence, numerous claims have been made that snow is a thing of the past. “Children just aren’t going to know what snow is,” opined a research scientist at the CRU (Climatic Research Unit) of the UK’s University of East Anglia back in 2000. But, while winter snow is melting more rapidly in the spring and there’s less winter snow in some high mountain areas, the IPCC (Intergovernmental Panel on Climate Change) and WMO (World Meteorological Organization) have both been forced to concede that it’s now snowing more heavily at low altitudes than previously. Surprisingly, the WMO has even attributed the phenomenon to natural variability – as it should.

The IPCC and WMO, together with climate alarmists, are fond of using the term “unprecedented” to describe extreme weather events. As we’ve seen in previous blog posts, such usage is completely unjustified in every case – with the single exception of snowfall, though that’s a concession few alarmists make.

Next: When Science Is Literally under Attack: Ad Hominem Attacks

No Evidence That Marine Heat Waves Are Unusual

A new wrinkle in the narrative of human-induced global warming is the observation and study of marine heat waves. But, just as in other areas of climate science, the IPCC (Intergovernmental Panel on Climate Change) twists and hypes the evidence to boost the claim that heat waves at sea are becoming more common.

Marine heat waves are extended periods of abnormally high ocean temperatures, analogous to atmospheric heat waves on land. The most publicized recent example was the “Blob,” a massive pool of warm water that formed in the northeast Pacific Ocean from 2013 to 2015, extending all the way from Alaska to the Baja Peninsula in Mexico as shown in the figure below, and up to 400 meters (1,300 feet) deep. Sea surface temperatures across the Blob averaged 3 degrees Celsius (5 degrees Fahrenheit) above normal. A similar temperature spike lasting for eight months was seen in Australia’s Tasman Sea in 2015 and 2016.

Recent Marine Heat Waves

OceanHeatWaveGlobe.jpg

The phenomenon has major effects on marine organisms and ecosystems, causing bleaching of coral reefs or loss of kelp forests, for example. Shellfish and small marine mammals such as sea lions and sea otters are particularly vulnerable, because the higher temperatures deplete the supply of plankton at the base of the ocean food chain. And toxic algae blooms can harm fish and larger marine animals.

OceanHeatWave kelp.jpg

Large marine heat waves not only affect marine life, but may also influence weather conditions on nearby land. The Blob is thought to have worsened California’s drought at the time, while the Tasman Sea event may have led to flooding in northeast Tasmania. The IPCC expresses only low confidence in such connections, however.

Nevertheless, in its recent Special Report on the Ocean and Cryosphere in a Changing Climate, the IPCC puts its clout behind the assertion that marine heat waves doubled in frequency from 1982 to 2016, and that they have also become longer-lasting, more intense and more extensive. But these are false claims, for two reasons.

First, the observations supporting the claims were all made during the satellite era. Satellite measurements of ocean temperature are far more accurate and broader in coverage than the old-fashioned methods that preceded them. These cruder methods included placing a thermometer in seawater collected in wooden, canvas or insulated buckets tossed overboard from ships and hauled back on deck; measuring seawater retained in ship engine-room inlets at several different depths; and readings from moored or drifting buoys.

Because of the unreliability and sparseness of sea surface temperature data from the pre-satellite era, it’s obvious that earlier marine heat waves may well have been missed. Indeed, it would be surprising if no significant marine heat waves occurred during the period of record-high atmospheric temperatures of the 1930s, a topic discussed in a previous blog post.

The second reason the IPCC claims are spurious is that most of the reported studies (see for example, here and here) fail to take into account the overall ocean warming trend. Marine heat waves are generally measured relative to the average surface temperature over a 30-year baseline period. This means that any heat wave measured toward the end of that period will appear hotter than it really is, since the actual surface temperature at that time will be higher than the 30-year baseline. As pointed out by a NOAA (U.S. National Oceanic and Atmospheric Administration) scientist, not adjusting for the underlying warming falsely conflates natural regional variability with climate change.  
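
The distinction is easy to demonstrate. The sketch below, with made-up temperatures, compares a heat wave anomaly measured against a fixed 30-year baseline with the same anomaly measured after removing the underlying warming trend:

```python
import numpy as np

# Made-up sea surface temperatures: a steady warming trend of 0.02 C/yr
# plus one genuine 1 C regional spike in the final year.
years = np.arange(1982, 2017)
sst = 15.0 + 0.02 * (years - years[0]) + np.where(years == 2016, 1.0, 0.0)

baseline = sst[:30].mean()              # fixed 30-year baseline
anomaly_fixed = sst - baseline          # conflates trend with heat waves

trend = np.polyfit(years, sst, 1)       # remove the linear warming first
anomaly_detrended = sst - np.polyval(trend, years)

print(f"2016 anomaly vs fixed baseline: {anomaly_fixed[-1]:.2f} C")
print(f"2016 anomaly after detrending:  {anomaly_detrended[-1]:.2f} C")
```

Against the fixed baseline, part of the apparent “heat wave” is simply the background warming trend; after detrending, only the genuine regional spike remains.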

Even if the shortcomings of the data are ignored, it’s been found that from 1925 to 2016, the global average marine heat wave frequency and duration increased by only 34% and 17%, respectively – hardly dramatic increases. And in any case, the sample size for measurements made since satellite monitoring began in 1982 is statistically small.

There’s no evidence, therefore, that marine heat waves are anything out of the ordinary.

Next: No Evidence That Snow Is Disappearing

Ocean Acidification: No Evidence of Impending Harm to Sea Life

ocean fish.jpg

Apocalyptic warnings about the effect of global warming on the oceans now embrace ocean acidification as well as sea level rise and ocean heating, both of which I’ve examined in previous posts. Acidification is a potential issue because the oceans absorb up to 30% of our CO2 emissions, according to the UN’s IPCC (Intergovernmental Panel on Climate Change). The extra CO2 increases the acidity of seawater.

But there’s no sign that any of the multitude of ocean inhabitants is suffering from the slightly more acidic conditions, although some species are affected by the warming itself. The average pH – a measure of acidity – of ocean surface water is currently falling by only 0.02 to 0.03 pH units per decade, and has dropped by only 0.1 pH units over the whole period since industrialization and CO2 emissions began in the 18th century. These almost imperceptible changes pale in comparison with the natural worldwide variation in ocean pH, which ranges from a low of 7.8 in coastal waters to a high of 8.4 in upper latitudes.

The pH scale sets 7.0 as the neutral value, with lower values being acidic and higher values alkaline. It’s a logarithmic scale, so a change of 1 pH unit represents a 10-fold change in acidity. A decrease of 0.1 units, representing a 26% increase in acidity, still leaves the ocean pH well within the alkaline range.    
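
A quick calculation verifies the percentages quoted in this post, since a drop of d pH units multiplies the hydrogen ion concentration by 10 raised to the power d:

```python
# pH is -log10 of hydrogen ion concentration, so a drop of d pH units
# multiplies acidity by 10**d. Checking the percentages quoted here:
for drop in (0.1, 0.5, 1.0):
    increase = (10 ** drop - 1) * 100
    print(f"pH drop of {drop}: acidity up {increase:.0f}%")
# pH drop of 0.1: acidity up 26%
# pH drop of 0.5: acidity up 216%
# pH drop of 1.0: acidity up 900%
```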

The primary concern with ocean acidification is its effect on marine creatures – such as corals, some plankton, and shellfish – that form skeletons and shells made from calcium carbonate. The dissolution of CO2 in seawater produces carbonic acid (H2CO3), which in turn produces hydrogen ions (H+) that eat up any carbonate ions (CO3^2-) that were already present, depleting the supply of carbonate available to calcifying organisms, such as mussels and krill, for shell building.

Yet the wide range of pH values in which sea animals and plants thrive tells us that fears about acidification from climate change are unfounded. The figure below shows how much the ocean pH varies even at the same location over the period of one month, and often within a single day.

ocean pH over 1 month.jpg

In the Santa Barbara kelp forest (F in the figure), for example, the pH fluctuates by 0.5 units, a change in acidity of more than 200%, over 13 days; the mean variation in the Elkhorn Slough estuary (D) is a full pH unit, or a staggering 900% change in acidity, per day. Likewise, coral reefs (E) can withstand relatively large fluctuations in acidity: the pH of seawater in the open ocean can vary by 0.1 to 0.2 units daily, and by as much as 0.5 units seasonally, from summer to winter.

ocean coral.jpg

A 2011 study of coral formation in Papua New Guinea at underwater volcanic vents that exude CO2 found that coral reef formation ceased at pH values less than 7.7, which is 0.5 units below the pre-industrial ocean surface average of 8.2 units and 216% more acidic. However, at the present rate of pH decline, that point won’t be reached for at least another 130 to 200 years. In any case, there’s empirical evidence that existing corals are hardy enough to survive even lower pH values.

Australia’s Great Barrier Reef periodically endures surges of pronouncedly acid rainwater at the low pH of about 5.6 that pours onto the Reef from flooding of the Brisbane River, which has occurred 11 times since 1840. But the delicate corals have withstood the onslaught each time. And there have been several epochs in the distant past when the CO2 level in the atmosphere was much higher than now, yet marine species that calcify were able to persist for millions of years.

Nonetheless, advocates of the climate change narrative insist that marine animals and plants are headed for extinction if the CO2 level continues to rise, supposedly because of reduced fertility and growth rates. However, there’s a paucity of research conducted under realistic conditions that accurately simulates the actual environment of marine organisms. Acidification studies often fail to provide the organisms with a period of acclimation to lowered seawater pH, as they would experience in their natural surroundings, and ignore the chemical buffering effect of neighboring organisms on acidification.

Ocean acidification, often regarded as the evil twin of global warming, is far less of a threat to marine life than overfishing and pollution. In Shakespeare’s immortal words, the uproar over acidification is much ado about nothing.

Next: No Evidence That Marine Heat Waves Are Unusual

Ocean Heating: How the IPCC Distorts the Evidence

Part of the drumbeat accompanying the narrative of catastrophic human-caused warming involves hyping or distorting the supposed evidence, as I’ve demonstrated in recent posts on ice sheets, sea ice, sea levels and extreme weather. Another gauge of a warming climate is the amount of heat stashed away in the oceans. Here too, the IPCC (Intergovernmental Panel on Climate Change) and alarmist climate scientists bend the truth to bolster the narrative.

Perhaps the most egregious example comes from the IPCC itself. In its 2019 Special Report on the Ocean and Cryosphere in a Changing Climate, the IPCC declares that the world’s oceans have warmed unabated since 2005, and that the rate of ocean heating has accelerated – despite contrary evidence for both assertions presented in the very same report! It appears that catastrophists within the IPCC are putting a totally unjustified spin on the actual data.

Argo float being deployed.

Ocean heat, known technically as OHC (ocean heat content), is currently calculated from observations made by Argo profiling floats. These floats are battery-powered robotic buoys that patrol the oceans, sinking 1-2 km (0.6-1.2 miles) deep once every 10 days and then bobbing up to the surface, recording the temperature and salinity of the water as they ascend. When the floats eventually reach the surface, the data is transmitted to a satellite. Before the Argo system was deployed in the early 2000s, OHC data was obtained from older types of instrument.

The table below shows empirical data documented in the IPCC report, for the rate of ocean heating (heat uptake) over various intervals from 1969 to 2017, in two ocean layers: an upper layer down to a depth of 700 meters (2,300 feet), and a deeper layer from 700 meters down to 2,000 meters (6,600 feet). The data is presented in alternative forms: as the total heat energy absorbed by the global ocean yearly, measured in zettajoules (1021 joules), and as the rate of areal heating over the earth’s surface, measured in watts (1 watt = 1 joule per second) over one square meter.

OHC Table.jpg

Examination of the data in either form reveals clearly that in the upper, surface layer, the oceans heated less rapidly during the second half of the interval between 1993 and 2017, that is from 2005 to 2017, than during the first half from 1993 to 2005.

The same is true for the two layers combined, that is for all depths from the surface down to 2,000 meters (6,600 feet). When the two lines in the table above are added together, the combined heating rate was 9.33 zettajoules per year, or 0.58 watts per square meter, from 2005 to 2017, and 10.14 zettajoules per year, or 0.63 watts per square meter, over the full interval from 1993 to 2017. Since the full-interval average exceeds the average over its second half, the rate during the first half, from 1993 to 2005, must have been higher still. Although these numbers ignore the large uncertainties in the measurements, they demonstrate that the ocean heating rate fell between 1993 and 2017.
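
The conversion between the table’s two forms is simple: spread the yearly energy uptake over the earth’s entire surface. A quick sketch reproducing the numbers above:

```python
# Convert ocean heat uptake from zettajoules per year to watts per
# square meter of the earth's surface.
SECONDS_PER_YEAR = 3.156e7
EARTH_SURFACE_M2 = 5.10e14        # total surface area of the earth

def zj_per_year_to_w_per_m2(zj):
    return zj * 1e21 / SECONDS_PER_YEAR / EARTH_SURFACE_M2

for label, zj in [("2005-2017", 9.33), ("1993-2017", 10.14)]:
    print(f"{label}: {zj} ZJ/yr = {zj_per_year_to_w_per_m2(zj):.2f} W/m^2")
# 2005-2017: 9.33 ZJ/yr = 0.58 W/m^2
# 1993-2017: 10.14 ZJ/yr = 0.63 W/m^2
```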

Yet the IPCC has the audacity to state in the same report that “It is likely that the rate of ocean warming has increased since 1993,” even while correctly recognizing that the present heating rate is higher than it was back in 1969 or 1970. That the heating rate has not increased since 1993 can also be seen in the following figure, again from the same IPCC report.

Ocean Heat Content 1995-2017

OHC recent.jpg

The light and dark green bands in the figure show the change in OHC, measured in zettajoules, from the surface down to 2,000 meters (6,600 feet), relative to its average value between 2000 and 2010, over the period from 1995 to 2017. It’s obvious that the ocean heating rate – characterized by the slope of the graph – slowed down over this period, especially from 2003 to about 2008 when ocean heating appears to have stopped altogether. Both the IPCC’s table and figure in the report completely contradict its conclusions.

This contradiction is important not only because it reveals how the IPCC is a blatantly political more than a scientific organization, but also because OHC science has already been tarnished by the publication and subsequent retraction of a 2018 research paper claiming that ocean heating had reached the absurdly high rate of 0.83 watts per square meter.

If true, the claim would have meant that the climate is much more sensitive to CO2 emissions than previously thought – a finding the mainstream media immediately pounced on. But mathematician Nic Lewis quickly discovered that the researchers had miscalculated the ocean warming trend, as well as underestimating the uncertainty of their result in the retracted paper. Lewis has also uncovered errors in a 2019 paper on ocean heating.

In a recent letter to the IPCC, the Global Warming Policy Foundation has pointed out the errors and misinterpretations in both the 2018 and 2019 papers, as well as in the IPCC report discussed above. There’s been no response to date.

Next: Ocean Acidification: No Evidence of Impending Harm to Sea Life

No Convincing Evidence That Greenland Ice Sheet Is Melting Rapidly

Greenland1.jpg

When Greenland’s ice sheet lost an estimated 11.3 billion tonnes (12.5 billion tons) of ice on a single warm day this August, following back-to-back heat waves in June and July in western Europe, the climate doomsday machine went into overdrive. Predictions abounded of ever-accelerating melting of both Greenland and Antarctic ice sheets, and the imminent drowning of cities such as London and Miami by rising seas.

But all this hype is unnecessary fear-mongering. As discussed in the previous post, the massive Antarctic ice sheet may not be melting at all as the world warms. And the much smaller Greenland ice sheet isn’t melting any faster on average now than it was 15 years ago. While this year’s summer melt was above average, it was no more than that seen seven years earlier, in 2012.

Melting takes place only during the short Greenland summer, the meltwater running over the ice sheet surface into the ocean, as well as funneling its way down through thick glaciers, helping speed up their flow toward the sea. Ice lost in summer is normally offset by ice gained over the long winter from the accumulation of compacted snow, but last winter’s meager snowfall quickly disappeared this year in an early start to the melt season.

Greenland melt.jpg

The figure to the left shows that at the peak of the 2019 event, over 60% of the ice sheet surface melted; on July 30 and 31, NOAA (the U.S. National Oceanic and Atmospheric Administration) reported that even the very highest point of the ice sheet liquefied briefly – a rare occurrence. The ice sheet, 2-3 km (6,600-9,800 feet) thick, consists of layers of compressed snow built up over at least hundreds of thousands of years. In addition to summer melting, the sheet loses ice by calving of icebergs at its edges.

The figure below depicts the daily variation, over a full year, of the estimated mass of ice in the Greenland ice sheet, relative to its average value from 1981 to 2010. The loss of ice during the summer months of June, July and August is clearly visible, the biggest recent losses having occurred in 2012 and 2019.

Greenland ice loss 2019.jpg

This year’s total ice loss has been estimated at 329 billion tonnes (363 billion tons), somewhat lower than the record 458 billion tonnes (505 billion tons) that melted in 2012. Despite the high 2012 loss, however, the average loss from 2012 through 2016 of 247 billion tonnes (272 billion tons) per year was essentially the same as the 2002 through 2011 average loss of 263 billion tonnes (290 billion tons) per year, according to the IPCC (Intergovernmental Panel on Climate Change).

So, while the average annual loss of 258 billion tonnes (284 billion tons) between 2002 and 2016 is a big jump from the average 75 billion tonnes (83 billion tons) of ice lost yearly during the 20th century, it appears that the Greenland ice sheet has at least stabilized since 2002. The 21st-century increase in summer melt rate may arise partly from dominance of the negative phase of the natural North Atlantic Oscillation since about 2000.

A little-known fact about Greenland is that the ice sheet was smaller than it is today over most of the period since the end of the last ice age. During the long interglacial epoch, as human civilization developed and thrived, there were several periods when it was warmer on average in Greenland than at present, as illustrated in the next figure. This has been deduced by analyzing ice cores extracted from the Greenland ice sheet; the cores carry a record of past temperatures and atmospheric composition.

Greenland temp(2) 5,000 yrs.jpg

One of the warm spells was the Medieval Warm Period, an era when Scandinavian Vikings colonized Greenland – growing crops, raising animals and hunting seals for meat and walruses for ivory. The Vikings are thought to have abandoned the island after temperatures dropped with the onset of the Little Ice Age. But there’s little doubt that what made it possible for the Vikings to settle in Greenland at all were a relatively hospitable climate and less ice than exists today.

To put everything in perspective, the present ice loss of 247 billion tonnes (272 billion tons) every year represents only about 0.01% of the total mass of the ice sheet. At the current rate, therefore, it would take another 10,000 years for all Greenland’s ice to melt.
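
The arithmetic is straightforward, taking the ice sheet’s total mass to be roughly 2.6 million gigatonnes (an approximate literature value, consistent with the 0.01% figure above):

```python
# How long would Greenland's ice last at the current loss rate?
# A total ice sheet mass of ~2.6 million Gt is an approximate literature
# value, consistent with the 0.01% per year figure quoted above.
total_mass_gt = 2.6e6        # gigatonnes (billion tonnes)
annual_loss_gt = 247

fraction_per_year = annual_loss_gt / total_mass_gt
print(f"Yearly loss: {fraction_per_year:.2%} of the ice sheet")    # ~0.01%
print(f"Time to melt completely: {total_mass_gt / annual_loss_gt:,.0f} years")
# ~10,500 years, i.e. the post's round figure of 10,000 years.
```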

Next: Ocean Heating: How the IPCC Distorts the Evidence

No Convincing Evidence That Antarctic Ice Sheet Is Melting

Antarctica Dome A.jpg

Of all the observations behind mass hysteria over our climate, none induces as much panic as melting of the earth’s two biggest ice sheets, covering the polar landmasses of Antarctica and Greenland. As long ago as 2006, Al Gore’s environmental documentary “An Inconvenient Truth” proclaimed that global warming would melt enough ice to cause a 6-meter (20-foot) rise in sea level “in the near future.” Today, every calving of a large iceberg from an ice shelf or glacier whips the mainstream media into a frenzy.

The huge Antarctic ice sheet alone would raise global sea levels by about 60 meters (200 feet) were it to melt completely. But there’s little evidence that the kilometers-thick ice sheet, which contains about 90% of the world’s freshwater ice, is melting at all.

Antarctica ice shelf.jpg

Any calving of large icebergs – a natural process unrelated to warming – from an ice shelf, or even disintegration into small icebergs, barely affects sea level. This is because the ice that breaks off was already floating on the ocean. Although a retreating ice shelf can contribute to sea level rise by accelerating the downhill flow of glaciers that feed the shelf, current breakups of Antarctic ice shelves are adding no more than about 0.1 mm (about 4/1000ths of an inch) per year to global sea levels, according to NOAA (the U.S. National Oceanic and Atmospheric Administration).

Global warming has certainly affected Antarctica, though not by as much as the Arctic. East Antarctica – by far the largest region, covering two thirds of the continent – heated up by only 0.06 degrees Celsius (0.11 degrees Fahrenheit) per decade between 1958 and 2012. At the South Pole, which is located in East Antarctica, temperatures actually fell in recent decades.

For comparison, global temperatures over this period rose by 0.11 degrees Celsius (0.20 degrees Fahrenheit) per decade, and Arctic temperatures shot up at an even higher rate. Antarctic warming from 1958 to 2012 is illustrated in the figure below, based on NOAA data. East Antarctica is to the right, West Antarctica to the left of the figure.

Antarctic temps 1958-2012.jpg

You can see, however, that temperatures in West Antarctica and the small Antarctic Peninsula, which points toward Argentina, increased more rapidly than in East Antarctica, by 0.22 degrees Celsius (0.40 degrees Fahrenheit) and 0.33 degrees Celsius (0.59 degrees Fahrenheit) per decade, respectively – faster than the global average. Still, the Peninsula has cooled since 2000.

It’s not surprising, therefore, that all the hype about imminent collapse of the Antarctic ice sheet centers on events in West Antarctica, such as glaciers melting at rapid rates. The Fifth Assessment Report of the UN’s IPCC (Intergovernmental Panel on Climate Change) maintained with high confidence that, between 2005 and 2010, the ice sheet was shedding mass and causing sea levels to rise by 0.41 mm per year, contributing about 24% of the measured rate of 1.7 mm (1/16th of an inch) per year between 1900 and 2010.
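
A handy conversion for reading numbers like these: raising the global sea level by 1 mm takes roughly 360 billion tonnes of water. Here is a minimal sketch of that conversion, assuming the standard ocean surface area of about 3.61 × 10¹⁴ m² – a textbook value, not a figure from this post:

```python
# Convert a sea-level contribution in mm/yr to a water-mass flux in Gt/yr.
# ASSUMPTION: ocean surface area ~3.61e14 m^2 (standard textbook value).
OCEAN_AREA_M2 = 3.61e14
WATER_DENSITY_KG_M3 = 1000.0

def gt_per_mm_of_sea_level():
    """Mass of water (Gt) needed to raise global sea level by 1 mm."""
    volume_m3 = OCEAN_AREA_M2 * 1e-3                # a 1 mm layer over the oceans
    return volume_m3 * WATER_DENSITY_KG_M3 / 1e12   # kg -> Gt

print(f"1 mm of sea level ~ {gt_per_mm_of_sea_level():.0f} Gt")     # ~361 Gt
print(f"0.41 mm/yr ~ {0.41 * gt_per_mm_of_sea_level():.0f} Gt/yr")  # ~148 Gt/yr of ice loss
```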

On the other hand, a 2015 NASA study reported that the Antarctic ice sheet was actually gaining rather than losing ice in 2008, and that the thickening ice was making sea levels fall by 0.23 mm per year. The study authors found that ice loss from thinning glaciers in West Antarctica and the Antarctic Peninsula was outweighed at the time by new ice formation in East Antarctica resulting from warming-enhanced snowfall. Across the continent, Antarctica averages roughly 5 cm (2 inches) of precipitation per year. The same authors say the trend continued until at least 2018, despite a recent research paper by an international group of polar scientists endorsing the IPCC human-caused global warming narrative of diminishing Antarctic ice.

The two studies are both based on satellite altimetry – the same method used to measure sea levels, but in this case measuring the height of the ice sheet. Both studies also depend on models to correct the raw data for factors such as snowdrift, ice compaction and motion of the underlying bedrock. It’s differences in the models that give rise to the diametrically opposite results of the studies, one finding that Antarctic ice is melting away but the other concluding that it’s really growing.

Such uncertainty, even in the satellite era, shouldn’t be surprising. Despite the insistence of many climate scientists that theirs is a mature field of research, much of today’s climate science is dependent on models to interpret the empirical observations. The models, just like computer climate models, aren’t always good representations of reality.

Al Gore’s 6-meter (20-foot) rise hasn’t happened yet, and isn’t likely to happen even by the end of this century. Global panic over the impending meltdown of Antarctica is totally unwarranted.

(This post has also been kindly reproduced in full on the Climate Depot blog.)

Next: No Convincing Evidence That Greenland Ice Sheet Is Melting Rapidly

Shrinking Sea Ice: Evaluation of the Evidence

Most of us know about the loss of sea ice in the Arctic due to global warming. The dramatic reduction in summer ice cover, which has continued for almost 40 years, is frequently hyped by the mainstream media and climate activists as an example of what we’re supposedly doing to the planet.

But the loss is nowhere near as great as predicted, and in fact was no more in the summer of 2019 than in 2007. It’s also little known that Arctic sea ice melted before, during the record heat of the 1930s. And the sea ice around Antarctica, at the other end of the globe, has been expanding since at least 1979.

Actual scientific observations of sea ice in the Arctic and Antarctic have only been possible since satellite measurements began in 1979. The figure below shows satellite-derived images of Arctic sea ice extent in the summer of 1979 (left image), and the summer (September) and winter (March) of 2018 (right image). Sea ice expands to its maximum extent during the winter and shrinks during summer months.   

Arctic ice 1979.jpg
Arctic ice 2018.jpg

Arctic summer ice extent decreased by approximately 33% over the interval from 1979 to 2018; while it still encases northern Greenland, it no longer reaches the Russian coast.

However, there has been no net ice loss since 2007, with the year-to-year minimum extents fluctuating around a plateau. An exception was 2012, when a powerful August storm known as the Great Arctic Cyclone tore off a large chunk of ice from the main sea ice pack. Clearly, the evidence refutes numerous prognostications by advocates of catastrophic human-caused warming that Arctic ice would be completely gone by 2016. 

Before 1979, the only data available on Arctic sea ice are scattered observations from sources such as ship reports, aircraft reconnaissance and drifting buoys – observations recorded and synthesized by the Danish Meteorological Institute and the Russian Arctic and Antarctic Research Institute. Analyses of this spotty data have resulted in numerous reconstructions of Arctic sea ice extent in the pre-satellite era.

One such recent reconstruction is shown in the next figure, depicting reconstructed Arctic summer ice area, in millions of square kilometers, from 1900 to 2013. The reconstruction was based on the strong correlation of Arctic sea ice extent with Arctic air temperatures during the satellite era, especially in the summer, a correlation assumed to be the same in earlier years as well. This assumption then enabled the researchers to reconstruct the sea ice area before 1979 from observed temperatures in that era.  

Ice.jpg

What this graph reveals is that summer ice cover in the Arctic, apart from its present decline since about 1979, contracted previously in the 1920s and 1930s. According to the researchers, the biggest single-year decrease in area, which occurred in 1936, was about 26% – not much less than the 33% drop by 2018. Although this suggests that the relatively low sea ice extents in recent years are comparable to the 1930s, the reconstruction doesn’t incorporate any actual pre-satellite observations. Other reconstructions that do incorporate the earlier data show a smaller difference between the 1930s and today.
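
The mechanics of the calibrate-then-hindcast approach behind such reconstructions are easy to sketch. In the toy example below, every number is invented for illustration – the researchers’ data and statistical treatment are far more elaborate:

```python
import numpy as np

# Toy reconstruction: calibrate summer ice area against summer temperature
# anomalies in the satellite era, then hindcast pre-1979 area from the
# temperature record alone. All values are invented for illustration.
sat_temp = np.array([-1.2, -0.8, -0.5, 0.0, 0.4, 0.9])  # satellite-era anomalies, deg C
sat_area = np.array([7.9, 7.6, 7.4, 7.0, 6.7, 6.3])     # summer ice area, million km^2

slope, intercept = np.polyfit(sat_temp, sat_area, 1)    # linear calibration

# Apply the calibration to (hypothetical) pre-satellite temperature anomalies,
# e.g. warm years in the 1930s.
pre_sat_temp = np.array([-0.3, 0.2, 0.5])
reconstructed_area = slope * pre_sat_temp + intercept
print(reconstructed_area)   # hindcast ice area for those years
```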

It’s the opposite story for sea ice in the Antarctic, which is at its lowest extent during the southern summer in February, as shown in the satellite-derived image below for 2018-19.

Antarctic ice 2018-2019.jpg

Despite the contraction in the Arctic, the sea ice around Antarctica has been expanding during the satellite era. As can be seen from the following figure, Antarctic sea ice has gained in extent by an average of 1.8% per decade (the dashed line represents the trend), though the ice extent fluctuates greatly from year to year. Antarctic sea ice covers a larger area than Arctic ice but occupies a smaller overall volume, because it’s only about half as thick.

Antarctic ice.jpg
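
For what a quoted trend such as 1.8% per decade typically means in practice – a least-squares slope expressed as a fraction of the mean extent – here is a minimal sketch with synthetic data:

```python
import numpy as np

# Express a fitted linear trend as percent per decade, relative to the
# mean extent. Data are synthetic, for illustration only.
rng = np.random.default_rng(0)
years = np.arange(1979, 2019)
extent = 11.5 + 0.02 * (years - 1979) + rng.normal(0, 0.3, years.size)  # million km^2

slope_per_year, _ = np.polyfit(years, extent, 1)
pct_per_decade = 100.0 * slope_per_year * 10.0 / extent.mean()
print(f"Trend: {pct_per_decade:+.1f}% per decade")   # roughly +1.7% for these numbers
```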

Another fallacious claim about disappearing sea ice in the Arctic, one that has captured the public imagination like no other, is that the polar bear population is diminishing along with the ice. But, while this may yet happen in the future, current evidence shows that the bear population has been stable for the whole period that the ice has been decreasing and may even be growing, according to the native Inuit.

In summary, Arctic sea ice shrank from about 1979 to 2007 because of global warming, but has remained at the same extent on average in the 12 years since then, while Antarctic sea ice has expanded slightly over the whole period. So there’s certainly no cause for alarm.

Next: No Convincing Evidence That Antarctic Ice Sheet is Melting

No Evidence That Climate Change Is Accelerating Sea Level Rise

Malé, Maldives Capital City

By far the most publicized phenomenon cited as evidence for human-induced climate change is rising sea levels, with the media regularly trumpeting the latest prediction of the oceans flooding or submerging cities in the decades to come. Nothing instills as much fear in low-lying coastal communities as the prospect of losing one’s dwelling to a hurricane storm surge or even slowly encroaching seawater. Island nations such as the Maldives in the Indian Ocean and Tuvalu in the Pacific are convinced their tropical paradises are about to disappear beneath the waves.

There’s no doubt that the average global sea level has been increasing ever since the world started to warm after the Little Ice Age ended around 1850. But there’s no reliable scientific evidence that the rate of rise is accelerating, or that the rise is associated with any human contribution to global warming.   

A comprehensive 2018 report on sea level and climate change by Judith Curry, a respected climate scientist and global warming skeptic, emphasizes the complexity of both measuring and trying to understand recent sea level rise. Because of the switch in 1993 from tide gauges to satellite altimetry as the principal method of measurement, the precise magnitude of sea level rise as well as projections for the future are uncertain.

According to both Curry and the UN’s IPCC (Intergovernmental Panel on Climate Change), the average global rate of sea level rise from 1901 to 2010 was 1.7 mm (about 1/16th of an inch) per year. In the latter part of that period, from 1993 onward, the rate of rise was 3.2 mm per year, almost double the longer-term average – though some experts consider this estimate too high. But while the sudden jump may seem surprising and indicative of acceleration, the globally averaged sea level in fact fluctuates considerably over time. This is illustrated in the IPCC’s figure below, which shows estimates from tide gauge data of the rate of rise from 1900 to 1993.

cropped.jpg

It’s clear that the rate of rise was much higher than its 20th-century average during the 30 years from 1920 to 1950, and much lower than the average from 1910 to 1920 and again from 1955 to 1980. Strong regional differences exist too: actual rates of sea level rise range from negative in Stockholm – a falling sea level, as the region continues to rebound from the weight of the last ice age’s ice sheet – to three times the global average in the western Pacific Ocean.

The regional variation is evident in the next figure, showing the average rate of sea level rise across the globe, measured by satellite, between 1993 and 2014.

Sea level rise rate 1993-2014.jpg

You can see that during this period sea levels increased fastest in the western Pacific as just noted, and in the southern Indian and Atlantic Oceans. At the same time, the sea level fell near the west coast of North America and in the Southern Ocean near Antarctica.

The reasons for such a jumbled picture are several. Because water expands and occupies more volume as it gets warmer, higher ocean temperatures raise sea levels. Yet the seafloor is not static: it can sink under the weight of the extra water that melting glaciers and ice caps add to the ocean basin, and it can be altered by underwater volcanic eruptions. Land surfaces can also sink (as well as rebound), as a result of groundwater depletion in arid regions or landfilling in coastal wetlands. For example, about 50% of the much-hyped worsening of tidal flooding in Miami Beach, Florida is due to sinking of reclaimed swampland.

Sea levels have been both lower and higher in the past than they are at present. Since the end of the last ice age, the average level has risen about 120 meters (400 feet), as depicted in the following figure. After reaching a peak in at least some regions about 6,000 years ago, however, the sea level has changed relatively little, even as industrialization began boosting atmospheric CO2. Over the 20th century, the worldwide average rise was about 15-18 cm (6-7 inches).

Sea level rise 24,000 yr.jpg
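
As a quick consistency check between the century-average rate quoted earlier and that 20th-century total, using only figures from the text:

```python
# Does the century-average rate match the total 20th-century rise?
rate_mm_per_year = 1.7                     # average rate, 1901-2010 (from the text)
rise_cm = rate_mm_per_year * 100 / 10      # 100 years, 10 mm per cm
print(f"{rise_cm:.0f} cm over 100 years")  # 17 cm, within the quoted 15-18 cm range
```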

That the concerns of islanders are unwarranted despite rising seas is borne out by recent studies revealing that low-lying coral reef islands in the Pacific are actually growing in size by as much as 30% per century, not shrinking. The growth is due to a combination of coral debris buildup, land reclamation and sedimentation. Another study found that the Maldives – the world’s lowest-lying country – formed when sea levels were even higher than they are today. Studies such as these belie the popular claim that islanders will become “climate refugees,” forced to leave their homes as sea levels rise.

Next: Shrinking Sea Ice: Evaluation of the Evidence

No Evidence That Heat Kills More People than Cold

The irony in the recent frenzy over heat waves is that many more humans die each year from cold than they do from heat. But you wouldn’t know that from sensational media headlines reporting “killer” heat waves and conditions “as hot as hell.” In reality, cold weather worldwide kills 17 times as many people as heat.

This conclusion was reached by a major international study in 2015, published in the prestigious medical journal The Lancet. The study analyzed more than 74 million deaths in 384 locations across 13 countries including Australia, China, Italy, Sweden, the UK and USA, over the period from 1985 to 2012. The results are illustrated in the figure below, showing the average daily rate of premature deaths from heat or cold as a percentage of all deaths, by country.

World heat vs cold deaths.jpg

Perhaps not surprisingly, moderate cold kills far more people than extreme cold, across a wide range of climates. Extreme cold was defined by the study authors as temperatures below the 2.5th percentile at each location, a limit which varied from as low as -11 degrees Celsius (12 degrees Fahrenheit) in Toronto, Canada to as high as 25 degrees Celsius (77 degrees Fahrenheit) in Bangkok, Thailand. Moderate cold includes all temperatures from this lower limit up to the so-called optimum, the temperature at which the daily death rate at that location is at its minimum.

Likewise, extreme heat was defined as temperatures above the 97.5th percentile at each location, and moderate heat as temperatures from the optimum up to the 97.5th percentile. But unlike cold, extreme and moderate heat cause approximately equal numbers of excess deaths.
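
To make the bucketing concrete, here is a minimal sketch that classifies each day at a single location, given its daily temperatures and the location’s optimum. It is an assumed reconstruction of the definitions just described, not the study’s actual code:

```python
import numpy as np

# Classify each day per the study's definitions: extreme cold below the
# 2.5th percentile, moderate cold from there up to the optimum, moderate
# heat from the optimum to the 97.5th percentile, extreme heat above that.
# This is an assumed reconstruction, not the authors' code.
def classify_days(temps, optimum):
    lo, hi = np.percentile(temps, [2.5, 97.5])
    labels = np.full(temps.shape, "moderate heat", dtype=object)
    labels[temps < optimum] = "moderate cold"
    labels[temps < lo] = "extreme cold"
    labels[temps > hi] = "extreme heat"
    return labels

rng = np.random.default_rng(1)
daily_temps = rng.normal(12.0, 8.0, 365 * 10)   # ten years of synthetic daily temps
print(dict(zip(*np.unique(classify_days(daily_temps, optimum=19.0),
                          return_counts=True))))
```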

The study found that on average, 7.71% of all deaths could be attributed to hot or cold – to temperatures above or below the optimum – with 7.29% due to cold but only 0.42% due to heat. The ratio of those two figures is the 17-fold difference cited above. That single result gives the lie to the popular belief that heat waves are deadlier than cold spells: hypothermia kills a lot more of us than heat stroke. And though both high and low temperatures can increase the risk of exacerbating cardiovascular, respiratory and other conditions, it’s cold that is the big killer.

This finding is further borne out by seasonal mortality statistics. France, for instance, recorded 700 excess deaths attributed to heat in the summer of 2016, 475 in 2017 and 1,500 in 2018. Yet excess deaths from cold in the French winter from December to March average approximately 24,000. Even the devastating summer heat wave of 2003 claimed only 15,000 lives in France.

Similar statistics come from the UK, where an average of 32,000 more deaths occur during each December to March period than in any other four-month interval of the year. Flu epidemics boosted this total to 37,000 in the winter of 2016-17, and to 50,000 in 2017-18. Just as in France, these numbers for deaths from winter cold far exceed summer mortalities in the UK due to heat, which reached only 1,700 in 2018 and just 2,200 in the heat-wave year of 2003.

Even more evidence that cold kills far more people than heat comes from an earlier study, published in the BMJ (formerly the British Medical Journal) in 2000. This study, restricted to approximately 3 million deaths in western Europe from 1988 to 1992, found that annual cold-related deaths were much higher than heat-related deaths in all seven regions studied – the former averaging 2,000 per million people and the latter only 220 per million. Additionally, deaths from heat were no more common in hotter regions than in colder ones.

A sophisticated statistical approach was necessary in both studies. This is because of differences between regions and individuals, and the observation that, while death from heat is typically rapid and occurs within a few days, death from cold can be delayed up to three or four weeks. The larger Lancet study used more advanced statistical modeling than the BMJ study.
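
The lag problem is easy to illustrate. The toy sketch below builds a design matrix of temperature at lags of 0 to 28 days and recovers a per-lag effect from synthetic data; the Lancet study’s distributed-lag non-linear models are far more sophisticated than this:

```python
import numpy as np

# Toy distributed-lag illustration: mortality depends on temperature over
# the previous four weeks, so the model includes one regressor per lag.
# Synthetic data; the Lancet study's actual models are far more elaborate.
rng = np.random.default_rng(0)
n_days, max_lag = 3000, 28
temp = rng.normal(10.0, 8.0, n_days)

# Each of the past 28 days of cold adds a little mortality (true effect -0.02/lag).
lagged = np.column_stack([np.roll(temp, k) for k in range(max_lag + 1)])
deaths = 100.0 - 0.02 * lagged.sum(axis=1) + rng.normal(0.0, 2.0, n_days)

# Fit on rows where every lag is real (drop the wrapped-around start).
X = np.column_stack([np.ones(n_days - max_lag), lagged[max_lag:]])
coef, *_ = np.linalg.lstsq(X, deaths[max_lag:], rcond=None)
print(coef[1:5])   # each lag coefficient comes out near -0.02
```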

And despite the finding that more than 50% of published papers in biomedicine are not reproducible, the fact that two independent papers reached essentially the same result gives their conclusions some credibility.

Next: No Evidence That Climate Change Is Accelerating Sea Level Rise

No Evidence That Climate Change Causes Weather Extremes: (6) Heat Waves

This Northern Hemisphere summer has seen searing, supposedly record high temperatures in France and elsewhere in Europe. According to the mainstream media and climate alarmists, the heat waves are unprecedented and a harbinger of harsh, scorching hot times to come.

But this is absolute nonsense. In this sixth and final post in the present series, I’ll examine the delusional beliefs that the earth is burning up and may shortly be uninhabitable, and that this is all a result of human-caused climate change. Heat waves are no more linked to climate change than any of the other weather extremes we’ve looked at.

The brouhaha over two almost back-to-back heat waves in western Europe is a case in point. In the second, which occurred toward the end of July, the WMO (World Meteorological Organization) claimed that the mercury in Paris reached a new record high of 42.6 degrees Celsius (108.7 degrees Fahrenheit) on July 25, besting the previous record of 40.4 degrees Celsius (104.7 degrees Fahrenheit) set back in July, 1947. And a month earlier during the first heat wave, temperatures in southern France hit a purported record 46.0 degrees Celsius (114.8 degrees Fahrenheit) on June 28.

How convenient to ignore the past! Australian and New Zealand newspapers from August 1930 carry an account of an earlier French heat wave, in which the temperature soared to a staggering 50 degrees Celsius (122 degrees Fahrenheit) in the Loire valley in central France. That’s a full 4.0 degrees Celsius (7.2 degrees Fahrenheit) above the so-called record just mentioned for southern France – a region where the 1930 temperature may well have equaled or exceeded the Loire valley’s towering mark.

And the same newspaper articles reported a temperature in Paris that day of 38 degrees Celsius (100 degrees Fahrenheit), stating that back in 1870 the thermometer had reached an even higher, unspecified level there – quite possibly above the July 2019 “record” of 42.6 degrees Celsius (108.7 degrees Fahrenheit).    

The same duplicity can be seen in proclamations about past U.S. temperatures. Although it’s frequently claimed that heat waves are increasing in both intensity and frequency, there’s simply no scientific evidence for such a bold assertion. The following figure charts official data from NOAA (the U.S. National Oceanic and Atmospheric Administration) showing the yearly number of days, averaged over all U.S. temperature stations, from 1895 to 2018 with extreme temperatures above 38 degrees Celsius (100 degrees Fahrenheit) and 41 degrees Celsius (105 degrees Fahrenheit).

ac-rebuttal-heat-waves-081819.jpg
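
Counting station-averaged exceedance days like those charted above is straightforward. The sketch below uses a hypothetical table layout – column names and demo values are invented, and NOAA’s actual processing differs:

```python
import pandas as pd

# Hypothetical sketch of a station-averaged exceedance count.
# ASSUMPTIONS: a table with columns 'station', 'date', 'tmax_f' (daily max, deg F);
# NOAA's real processing pipeline differs.
def days_above(df: pd.DataFrame, threshold_f: float) -> pd.Series:
    """Average number of days per year, across stations, with tmax >= threshold."""
    flagged = df.assign(year=df["date"].dt.year,
                        hot=(df["tmax_f"] >= threshold_f).astype(int))
    per_station_year = flagged.groupby(["station", "year"])["hot"].sum()
    return per_station_year.groupby(level="year").mean()

demo = pd.DataFrame({
    "station": ["A"] * 4 + ["B"] * 4,
    "date": pd.to_datetime(["1936-07-01", "1936-07-02",
                            "2018-07-01", "2018-07-02"] * 2),
    "tmax_f": [104, 101, 99, 102, 106, 98, 97, 100],
})
print(days_above(demo, 100))   # 1936: (2+1)/2 = 1.5 days; 2018: (1+1)/2 = 1.0
```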

The next figure shows NOAA’s data for the year in which the record high temperature in each U.S. state occurred. Of the 50 state records, a total of 32 were set in the 1930s or earlier, but only seven since 1990.

US high temperature records.jpg

It’s obvious from these two figures that there were more U.S. heat waves in the 1930s, and they were hotter, than in the present era of climate hysteria. Indeed, the annual number of days on which U.S. temperatures reached 100, 95 or 90 degrees Fahrenheit has been falling steadily since the 1930s. The Heat Wave Index for the 48 contiguous states, compiled by the EPA (the U.S. Environmental Protection Agency), also shows clearly that the 1930s were the hottest decade.

Globally, it’s exactly the same story, as depicted in the figure below.

World record high temperatures 500.jpg

Of the seven continents, six recorded their all-time record high temperatures before 1982, with three of those records dating from the 1930s or earlier; only Asia has set a record more recently (the WMO hasn’t acknowledged the 122 degrees Fahrenheit 1930 reading in the Loire region). And yet the worldwide baking of the 1930s didn’t set the stage for more and worse heat waves in the years ahead, even as CO2 kept pouring into the atmosphere – the scenario we’re told, erroneously, that we face today. In fact, the sweltering 1930s were followed by global cooling from 1940 to 1970.

Contrary to the climate change narrative, the recent European heat waves came about not because of global warming, but rather a weather phenomenon known as jet stream blocking. Blocking results from an entirely different mechanism than the buildup of atmospheric CO2, namely a weakening of the sun’s output that may portend a period of global cooling ahead. A less active sun generates less UV radiation, which in turn perturbs winds in the upper atmosphere, locking the jet stream in a holding or blocking pattern. In this case, blocking kept a surge of hot Sahara air in place over Europe for extended periods.

It should be clear from all the evidence presented above that mass hysteria over heat waves and climate change is completely unwarranted. Current heat waves have as little to do with global warming as floods, droughts, hurricanes, tornadoes and wildfires.

Next: No Evidence That Heat Kills More People than Cold

No Evidence That Climate Change Causes Weather Extremes: (5) Wildfires

Probably the most fearsome of the weather extremes commonly blamed on human-caused climate change are tornadoes – the previous topic in this series – and wildfires. Both can arrive with little or no warning, making it difficult or impossible to flee, are often deadly, and typically destroy hundreds of homes and other structures. But just as with tornadoes, there is no scientific evidence that the frequency or severity of wildfires is on the rise in a warming world.

You wouldn’t know that, however, from the mass hysteria generated by the mainstream media and climate activists almost every time a wildfire breaks out, especially in naturally dry climates such as those in California, Australia or Spain. While it’s true that the number of acres burned annually in the U.S. has gone up over the last 20 years or so, the present burned area is still only a small fraction of what it was back in the record 1930s, as seen in the figure below, showing data compiled by the U.S. National Interagency Fire Center.

Wildfires US-acres-burned 1926-2017 copy.jpg

Because modern global warming was barely underway in the 1930s, climate change clearly has nothing to do with the incineration of U.S. forests. Exactly the same trend is apparent in the next figure, which depicts the estimated area worldwide burned by wildfires, by decade from 1900 to 2010. Clearly, wildfires have diminished globally as the planet has warmed.

Global Burned Area, 1900-2010

Wildfires global-acres-burned JPG.jpg

In the Mediterranean, although the annual number of wildfires has more than doubled since 1980, the burned area over three decades has mimicked the global trend and declined:

Mediterranean Wildfire Occurrence & Burnt Area, 1980-2010

Wildfires Mediterranean_number_and_area 1980-2010 copy.jpg

The contrast between the Mediterranean and the U.S., where wildfires are becoming fewer but larger in area, has been attributed to different forest management policies on the two sides of the Atlantic – despite the protestations of U.S. politicians and firefighting officials in western states that climate change is responsible for the uptick in fire size. The next figure illustrates the timeline from 1600 onwards of fire occurrence at more than 800 different sites in western North America. 

Western North America Wildfire Occurrence, 1600-2000

Western North American wildfires JPG.jpg

The sudden drop in wildfire occurrence around 1880 has been ascribed to the expansion of American livestock grazing in order to feed a rapidly growing population. Intensive sheep and cattle grazing after that time consumed most of the grasses that previously constituted the fuel for wildfires. This depletion of fuel, together with the firebreaks created by the constant movement of herds back and forth to water sources, and by the arrival of railroads, drastically reduced the incidence of wildfires. And once mechanical equipment for firefighting such as fire engines and aircraft became available in the 20th century, more and more emphasis was placed on wildfire prevention.

But wildfire suppression in the U.S. has led to considerable increases in forest density and the buildup of undergrowth, both of which greatly enhance the potential for bigger and sometimes hotter fires – the latter characterized by a growing number of terrifying, superhot “firenadoes” or fire whirls occasionally observed in today’s wildfires.

Intentional burning – long used by native tribes and early settlers, and even advocated by some environmentalists, who point out that fire is a natural part of forest ecology, as the preceding figure shows – has become a thing of the past. Only now, after several devastating wildfires in California, is the idea of controlled burning being revived in the U.S. In Europe, on the other hand, prescribed burning has been supported by land managers for many years.

Global warming, combined with overgrowth, does play a role by drying out vegetation and forests more rapidly than before. But there’s no evidence at all for the notion peddled by the media that climate change has amplified the impact of fires on the ecosystem, known technically as fire severity. Indeed, at least 10 published studies of forest fires in the western U.S. have found no recent trend toward increasing fire severity.

You may think that the ever-rising level of CO2 in the atmosphere would exacerbate wildfire risk, since CO2 promotes plant growth. But at the same time, higher CO2 levels reduce plant transpiration: plants’ stomata, or breathing pores, open less, so leaves lose less water and more moisture is retained in the soil. Increased soil moisture has led to a worldwide greening of the planet.

In summary, the mistaken belief that the “new normal” of devastating wildfires around the globe is a result of climate change is not supported by the evidence. Humans, nevertheless, are the primary reason that wildfires have become larger and more destructive today. Population growth has caused more people to build in fire-prone areas, where fires are frequently sparked by an aging network of power lines and other electrical equipment. Coupled with poor forest management, this constitutes a recipe for disaster.

Next: No Evidence That Climate Change Causes Weather Extremes: (6) Heat Waves

No Evidence That Climate Change Causes Weather Extremes: (4) Tornadoes

tornadoes.jpg

Tornadoes are smaller and claim fewer lives than hurricanes. But the roaring twisters can be more terrifying because of their rapid formation and their ability to hurl objects such as cars, structural debris, animals and even people through the air. Nonetheless, the narrative that climate change is producing stronger and more deadly tornadoes is as fallacious as the nonexistent links between climate change and other weather extremes previously examined in this series.

Again, the UN’s IPCC (Intergovernmental Panel on Climate Change), whose assessment reports constitute the bible for the climate science community, has dismissed any connection between global warming and tornadoes. While the agency concedes that escalating temperatures and humidity may create atmospheric instability conducive to tornadoes, it also points out that other factors governing tornado formation, such as wind shear, diminish in a warming climate. In fact, declares the IPCC, the apparent increasing trend in tornadoes simply reflects their reporting by a larger number of people now living in remote areas.

A tornado is a rapidly rotating column of air, usually visible as a funnel cloud, that extends like a dagger from a parent thunderstorm to the ground. Demolishing homes and buildings in its often narrow path, it can travel many kilometers before dissipating. The most violent EF5 tornadoes attain wind speeds up to 480 km per hour (300 mph).

The U.S. endures by far the most tornadoes of any country, mostly in so-called Tornado Alley extending northward from central Texas through the Plains states. The annual incidence of all U.S. tornadoes from 1954 to 2017 is shown in the figure below. It’s obvious that no trend exists over a period that included both cooling and warming spells, with net global warming of approximately 1.1 degrees Celsius (2.0 degrees Fahrenheit) during that time.

US Tornadoes (NOAA) 1954-2017.jpg

But, as an illustration of how U.S. tornado activity can vary drastically from year to year, 13 successive days of tornado outbreaks in 2019 saw well over 400 tornadoes touch down in May, with June a close second – and this following seven quiet years ending in 2018, the quietest year in the entire record since 1954. The tornado surge, however, had nothing to do with climate change: an unusually cold winter and spring in the West, combined with heat from the Southeast and late rains, provided the ingredients for severe thunderstorms.

The next figure depicts the number of strong (EF3 or greater) tornadoes observed in the U.S. each year during the same period from 1954 to 2017. Clearly, the trend is downward instead of upward; the average number of strong tornadoes annually from 1986 to 2017 was 40% less than from 1954 to 1985. Once more, global warming cannot have played a role. 

US strong tornadoes (NOAA) 1954-2017.jpg

In the U.S., tornadoes cause about 80 deaths and more than 1,500 injuries per year. The deadliest single-day episode of all time was the “Tri-State” outbreak of 1925, which killed 747 people and caused more damage than any other tornado outbreak in U.S. history. The most ferocious outbreak ever recorded, spawning a total of 30 EF4 or EF5 tornadoes, was in 1974.

Tornadoes also occur, though more rarely, in other parts of the world such as South America and Europe. The earliest known tornado in history occurred in Ireland in 1054. The human toll from tornadoes in Bangladesh actually exceeds that in the U.S., at an estimated 179 deaths per year, partly due to the region’s high population density. It’s population growth and expansion outside urban areas that have caused the cost of property damage from tornadoes to mushroom in the last few decades, especially in the U.S.

Next: No Evidence That Climate Change Causes Weather Extremes: (5) Wildfires