Exactly How Large Is the Urban Heat Island Effect in Global Warming?

It’s well known that global surface temperatures are biased upward by the urban heat island (UHI) effect. But there’s widespread disagreement among climate scientists about the magnitude of the effect, which arises from the warmth generated by urban surroundings, such as buildings, concrete and asphalt.

In its Sixth Assessment Report in 2021, the IPCC (Intergovernmental Panel on Climate Change) acknowledged the existence of the UHI effect and the consequent decrease in the number of cold nights since around 1950. Nevertheless, the IPCC is ambivalent about the actual size of the effect. On the one hand, the report dismisses its significance by declaring it “less than 10%” (Chapter 2, p. 324) or “negligible” (Chapter 10, p. 1368).

On the other hand, the IPCC presents a graph (Chapter 10, p. 1455), reproduced below, showing that the UHI effect ranges from 0% to 60% or more of measured warming in various cities. Since the population of the included cities is a few per cent of the global population, and many sizable cities are not included, it’s hard to see how the IPCC can state that the global UHI effect is negligible.

One climate scientist who has studied the magnitude of the UHI effect for some time is PhD meteorologist Roy Spencer. In a recent preview of a paper submitted for publication, Spencer finds that summer warming in U.S. cities from 1895 to 2023 has been exaggerated by 100% or more as a result of UHI warming. The next figure shows the results of his calculations which, as you would expect, depend on population density.

The barely visible solid brown line is the measured average summertime temperature for the continental U.S. (CONUS) relative to its 1901-2000 average, in degrees Celsius, from 1895 to 2023; the solid black line represents the same data corrected for UHI warming, as estimated from population density data. The measurements are taken from the monthly GHCN (Global Historical Climatology Network) “homogenized” dataset, as compiled by NOAA (the U.S. National Oceanic and Atmospheric Administration).

You can see that the UHI effect accounts for a substantial portion of the recorded warming in all years. Spencer says that the UHI influence is 24% of the trend averaged over all measurement stations, which are dominated by rural sites not subject to UHI warming. But for the typical “suburban” station (100-1,000 persons per square km), the UHI effect is 52% of the measured trend, which means that measured warming in U.S. cities is at least double the actual warming.

Globally, a rough estimate of the UHI effect can be made from NOAA satellite temperature data compiled by Spencer and Alabama state climatologist John Christy. Satellite data are not influenced by UHI warming because they measure the temperature of the lower atmosphere rather than the surface. The most recent data for the global average lower tropospheric temperature are displayed below.

According to Spencer and Christy’s calculations, the linear rate of global warming since measurements began in January 1979 is 0.15 degrees Celsius (0.27 degrees Fahrenheit) per decade, while the warming rate measured over land only is 0.20 degrees Celsius (0.36 degrees Fahrenheit) per decade. The difference of 0.05 degrees Celsius (0.09 degrees Fahrenheit) per decade in the warming rates can reasonably be attributed, at least in part, to the UHI effect.

So the UHI influence is as high as 0.05/0.20 or 25% of the measured temperature trend – in close agreement with Spencer’s 24% estimated from his more detailed calculations.
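For readers who want to check the arithmetic, here is a minimal sketch of both calculations, using only the figures quoted above (the doubling factor for suburban stations is back-computed from the 52% figure):

```python
# Back-of-the-envelope checks of the UHI arithmetic quoted above.

# Spencer's suburban stations: if UHI is 52% of the measured trend,
# the actual (UHI-free) warming is the remaining 48% of it.
uhi_share = 0.52
exaggeration = 1 / (1 - uhi_share)   # measured warming / actual warming
print(f"Measured suburban warming is {exaggeration:.1f}x the actual warming")  # ~2.1x

# Satellite estimate: land warms at 0.20 C/decade versus 0.15 C/decade globally.
land_rate, global_rate = 0.20, 0.15
uhi_fraction = (land_rate - global_rate) / land_rate
print(f"UHI share of the land trend: at most {uhi_fraction:.0%}")  # 25%
```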

Other estimates peg the UHI effect as larger yet. As part of a study of natural contributions to global warming, which I discussed in a recent post, the CERES research group suggested that urban warming might account for up to 40% of warming since 1850.

But the 40% estimate comes from a comparison of the warming rate for rural temperature stations alone with that for rural and urban stations combined, from 1900 to 2018. Over the shorter time period from 1972 to 2018, which almost matches Spencer and Christy’s satellite record, the estimated UHI effect is a much smaller 6%. The study authors caution that more research is needed to estimate the UHI magnitude more accurately.

The effect of urbanization on global temperatures is an active research field. Among other recent studies is a 2021 paper by Chinese researchers, who used a novel approach involving machine learning to quantify the phenomenon. Their study encompassed measurement stations in four geographic areas – Australia, East Asia, Europe and North America – and found that the magnitude of UHI warming from 1951 to 2018 was 13% globally, and 15% in East Asia where rapid urbanization has occurred.

What all these studies mean for climate science is that global warming is probably about 20% lower than most people think. That is, about 0.8 degrees Celsius (1.4 degrees Fahrenheit) at the end of 2022, before the current El Niño spike, instead of the reported 0.99 degrees Celsius (1.8 degrees Fahrenheit). This in turn means we’re only about halfway to the Paris Agreement’s lower limit of 1.5 degrees Celsius (2.7 degrees Fahrenheit).

Next: Sea Ice Update: Arctic Stable, Antarctic Recovering

Retractions of Scientific Papers Are Skyrocketing

A trend that bodes ill for the future of scientific publishing, and another signal that science is under attack, is the soaring number of research papers being retracted. According to a recent report in Nature magazine, over 10,000 retractions were issued for scientific papers in 2023.

Although more than 8,000 of these were sham articles from a single publisher, Hindawi, all the evidence shows that retractions are rising more rapidly than the research paper growth rate. The two figures below depict the yearly number of retractions since 2013, and the retraction rate as a percentage of all scientific papers published from 2003 to 2022.

Clearly, there is cause for alarm as both the number of retractions and the retraction rate are accelerating. Nature’s analysis suggests that the retraction rate has more than trebled over the past decade to its present 0.2% or above. And the journal says the estimated total of about 50,000 retractions so far is only the tip of the iceberg of work that should be retracted.

An earlier report in 2012 by a trio of medical researchers reviewed 2,047 biomedical and life-science research articles retracted since 1977. They found that 43% of the retractions were attributable to fraud or suspected fraud, 14% to duplicate publication and 10% to plagiarism, with 21% withdrawn because of error. The researchers also discovered that retractions for fraud or suspected fraud as a percentage of total articles published have increased almost 10 times since 1975.

A recent example of fraud outside the biomedical area is the 2022 finding of the University of Delaware that star marine ecologist Danielle Dixson was guilty of research misconduct, for fabricating and falsifying research results in her work on fish behavior and coral reefs. As reported in Science magazine, the university subsequently sought retraction of three of Dixson’s papers.

The misconduct involves studies by Dixson of the behavior of coral reef fish in slightly acidified seawater, intended to simulate the effect of ocean acidification caused by the oceans’ absorption of up to 30% of human CO2 emissions. Dixson and Philip Munday, a former marine ecologist at James Cook University in Townsville, Australia, claimed that the extra CO2 causes reef fish to be attracted by chemical cues from predators, instead of avoiding them; to become hyperactive and disoriented; and to suffer loss of vision and hearing.

But, as I described in a 2021 blog post, a team of biological and environmental researchers led by Timothy Clark of Deakin University in Geelong, Australia debunked all these conclusions. Most damningly of all, the researchers found that the reported effects of ocean acidification on the behavior of coral reef fish were not reproducible.

The investigative panel at the University of Delaware endorsed Clark’s findings, saying it was “repeatedly struck by a serial pattern of sloppiness, poor recordkeeping, copying and pasting within spreadsheets, errors within many papers under investigation, and deviation from established animal ethics protocols.” The panel also took issue with the reported observation times for two of the studies, stating that the massive amounts of data could not have been collected in so short a time. Dixson has since been fired from the university.

Closely related to fraud is the reproducibility crisis – the vast number of peer-reviewed scientific studies that can’t be replicated in subsequent investigations and whose findings turn out to be false, like Dixson’s. In the field of cancer biology, for example, scientists at Amgen in California discovered in the early 2000s that an astonishing 89% of published results couldn’t be reproduced.

One of the reasons for the soaring number of retractions is the rapid growth of fake research papers churned out by so-called “paper mills.” Paper mills are shady businesses that sell bogus manuscripts and authorships to researchers who need journal publications to advance their careers. Another Nature report suggests that over the past two decades, more than 400,000 published research articles show strong textual similarities to known studies produced by paper mills; the rising trend is illustrated in the next figure.

German neuropsychologist Bernhard Sabel estimates that in medicine and neuroscience, as many as 11% of papers in 2020 were likely paper-mill products. University of Oxford psychologist and research-integrity sleuth Dorothy Bishop found signs of paper-mill activity last year in at least 10 journals from Hindawi, the publisher mentioned earlier.

Textual similarities are only one fingerprint of paper-mill publications. Others include suspicious e-mail addresses that don’t correspond to any of a paper’s authors; e-mail addresses from hospitals in China (because the issue is known to be so common there); manipulated images from other papers; twisted phrases that indicate efforts to avoid plagiarism detection; and duplicate submissions across journals.

Journals, fortunately, are starting to pay more attention to paper mills, for example by revamping their review processes. They’re also being aided by an ever-growing army of paper-mill detectives such as Bishop.

Next: Exactly How Large Is the Urban Heat Island Effect in Global Warming?

Foundations of Science Under Attack in U.S. K-12 Education

Little known to most people is that science is under assault in the U.S. classroom. Some 49 U.S. states have adopted standards for teaching science in K-12 schools that abandon the time-honored edifice of the scientific method, which underpins all the major scientific advances of the past two millennia.

In place of the scientific method, most schoolchildren are now taught “scientific practices.” These emphasize the use of computer models and social consensus over the fundamental tenets of the scientific method, namely the gathering of empirical evidence and the use of reasoning to make sense of the evidence. 

The modern scientific method, illustrated schematically in the figure below, was conceived over two thousand years ago by the Hellenic-era Greeks, then almost forgotten and ultimately rejuvenated in the Scientific Revolution, before being refined into its present-day form in the 19th century. Even before that refinement, scientists such as Galileo Galilei and Isaac Newton followed the basic principles of the method, as have subsequent scientific luminaries like Marie Curie and Albert Einstein.

The present assault on science in U.S. schools began with publication in 2012 of a 400-page document, A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas, by the U.S. National Academy of Sciences. This was followed in 2014 by a companion document, Next Generation Science Standards (NGSS), based on the 2012 Framework and published by a newly formed consortium of national and state education groups.

The Framework summarily dismisses the scientific method with the outrageous statement:

… the notion that there is a single scientific method of observation, hypothesis, deduction, and conclusion—a myth perpetuated to this day by many textbooks—is fundamentally wrong,

and its explanation of “practices” as: 

… not rote procedures or a ritualized “scientific method.”

The Framework’s abandonment of the scientific method appears to have its origins in a 1992 book by H.H. Bauer entitled Scientific Literacy and the Myth of the Scientific Method. Bauer’s arguments against the importance of the scientific method include the mistaken conflation of science with sociology, and a misguided attempt to elevate the irrational pseudoscience of astrology to the status of a true science.

The NGSS give the scientific method even shorter shrift than the Framework, mentioning neither the concept nor the closely related notion of critical thinking even once in their 103 pages. A scathing review of the NGSS in 2021 by the U.S. National Association of Scholars (NAS), Climbing Down: How the Next Generation Science Standards Diminish Scientific Literacy, concludes that:

The NGSS severely neglect content instruction, politicize much of the content that remains … and abandon instruction of the scientific method.

Stating that “The scientific method is the logical and rational process through which we observe, describe, explain, test, and predict phenomena … but is nowhere to be found in the actual standards of the NGSS,” the NAS report also states:

Indeed, the latest generation of science education reformers has replaced scientific content with performance-based “learning” activities, and the scientific method with social consensus.

It goes on to say that neither the Framework nor the NGSS ever mention explicitly the falsifiability criterion – a crucial but often overlooked feature of the modern scientific method, in addition to the basic steps outlined above. The criterion, introduced in the early 20th century by philosopher Sir Karl Popper, states that a true scientific theory or law must in principle be capable of being invalidated by observation or experiment. Any evidence that fits an unfalsifiable theory has no scientific validity.

The primary deficiencies of the Framework and the NGSS have recently been enumerated and discussed by physicist John Droz, who has identified a number of serious shortcomings, some of which inject politics into what should be purely scientific standards. These include the use of computer models to imply reality; treating consensus as equal in value to empirical data; and the use of correlation to imply causation.

The NGSS do state that “empirical evidence is required to differentiate between cause and correlation” (in Crosscutting Concepts, page 92 onward), and there is a related discussion in the Framework. However, there is no attempt in either document to connect the concept of cause and effect to the steps of observation, and formulation and testing of a hypothesis, in the scientific method.

The NAS report is pessimistic about the effect of the NGSS on K-12 science education in the U.S., stating that:

They [the NGSS] do not provide a science education adequate to take introductory science courses in college. They lack large areas of necessary subject matter and an extraordinary amount of mathematical rigor. … The NGSS do not prepare students for careers or college readiness.

There is, however, one bright light. In his home state of North Carolina (NC), Droz was successful in July 2023 in having the scientific method restored to the state’s K-12 Science Standards. Earlier that year, he had discovered that existing NC science standards had excluded teaching the scientific method for more than 10 years. So Droz formally filed a written objection with the NC Department of Public Instruction.

Droz was told that he was “the only one bringing up this issue” out of 14,000 inputs on the science standards. However, two members of the State Board of Education ultimately joined him in questioning the omission and, after much give-and-take, the scientific method was reinstated. That leaves 48 other states that need to follow North Carolina’s example.

Next: Retractions of Scientific Papers Are Skyrocketing

Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Driven by Oceanic Seismic Activity, Not CO2

Although undersea volcanic eruptions can’t cause global warming directly, as I discussed in a previous post, they can contribute indirectly by altering the deep-ocean thermohaline circulation. According to a recent lecture, submarine volcanic activity is currently intensifying the thermohaline circulation sufficiently to be the principal driver of global warming.

The lecture was delivered by Arthur Viterito, a renowned physical geographer and retired professor at the College of Southern Maryland. His provocative hypothesis links an upsurge in seismic activity at mid-ocean ridges to recent global warming, via a strengthening of the ocean conveyor belt that redistributes seawater and heat around the globe.

Viterito’s starting point is the observation that satellite measurements of global warming since 1979 show distinct step increases following major El Niño events in 1997-98 and 2014-16, as demonstrated in the following figure. The figure depicts the satellite-based global temperature of the lower atmosphere in degrees Celsius, as compiled by scientists at the University of Alabama in Huntsville; temperatures are annual averages and the zero baseline represents the mean tropospheric temperature from 1991 to 2020.

Viterito links these apparent jumps in warming to geothermal heat emitted by volcanoes and hydrothermal vents in the middle of the world’s ocean basins – heat that shows similar step increases over the same time period, as measured by seismic activity. The submarine volcanoes and hydrothermal vents lie along the earth’s mid-ocean ridges, which divide the major oceans roughly in half and are illustrated in the next figure. The different colors denote the geothermal heat output (in milliwatts per square meter), which is highest along the ridges.

The total mid-ocean seismic activity along the ridges is shown in the figure below, in which the global tropospheric temperature, graphed in the first figure above, is plotted in blue against the annual number of mid-ocean earthquakes (EQ) in orange. The best fit between the two sets of data occurs when the temperature readings are lagged by two years: that is, the 1979 temperature reading is paired with the 1977 seismic reading, and so on. As already mentioned, seismic activity since 1979 shows step increases similar to the temperature.

A regression analysis yields a correlation coefficient of 0.74 between seismic activity and the two-year lagged temperatures. Since the square of that coefficient, 0.74² ≈ 0.55, is the fraction of the temperature variance explained by the seismic record, Viterito takes this to imply that mid-ocean geothermal heat accounts for 55% of current global warming. However, a correlation coefficient of 0.74 is not as high as some estimates of the correlation between rising CO2 and temperature.

In support of his hypothesis, Viterito states that multiple modeling studies have demonstrated how geothermal heating can significantly strengthen the thermohaline circulation, shown below. He then links the recently enhanced undersea seismic activity to global warming of the atmosphere by examining thermohaline heat transport to the North Atlantic-Arctic and western Pacific oceans.

In the Arctic, Viterito points to several phenomena that he believes are a direct result of a rapid intensification of North Atlantic currents which began around 1995 – the same year that mid-ocean seismic activity started to rise. The phenomena include the expansion of a phytoplankton bloom toward the North Pole due to incursion of North Atlantic currents into the Arctic; enhanced Arctic warming; a decline in Arctic sea ice; and rapid warming of the Subpolar Gyre, a circular current south of Greenland.

In the western Pacific, he cites the increase since 1993 in heat content of the Indo-Pacific Warm Pool near Indonesia; a deepening of the Indo-Pacific Warm Pool thermocline, which divides warmer surface water from cooler water below; strengthening of the Kuroshio Current near Japan; and recently enhanced El Niños.

But, while all these observations are accurate, they do not necessarily verify Viterito’s hypothesis that submarine earthquakes are driving current global warming. For instance, he cites as evidence the switch of the AMO (Atlantic Multidecadal Oscillation) to its positive or warm phase in 1995, when mid-ocean seismic activity began to increase. However, his assertion begs the question: Isn’t the present warm phase of the AMO just the same as the hundreds of warm cycles that preceded it?

In fact, perhaps the AMO warm phase has always been triggered by an upturn in mid-ocean earthquakes, and has nothing to do with global warming.

There are other weaknesses in Viterito’s argument too. One example is his association of the decline in Arctic sea ice, which also began around 1995, with the current warming surge. What he overlooks is that the sea ice extent stopped shrinking on average in 2007 or 2008, but warming has continued.

And while he dismisses CO2 as a global warming driver because the rising CO2 level doesn’t show the same step increases as the tropospheric temperature, a correlation coefficient between CO2 and temperature as high as 0.8 means that any CO2 contribution is not negligible.

It’s worth noting here that a strengthened thermohaline circulation is the exact opposite of the slowdown postulated by retired meteorologist William Kininmonth as the cause of global warming, a possibility I described in an earlier post in this Challenges series (#7). From an analysis of longwave radiation from greenhouse gases absorbed at the tropical surface, Kininmonth concluded that a slowdown in the thermohaline circulation is the only plausible explanation for warming of the tropical ocean.

Next: Foundations of Science Under Attack in U.S. K-12 Education

Rapid Climate Change Is Not Unique to the Present

Rapid climate change, such as the accelerated warming of the past 40 years, is not a new phenomenon. During the last ice age, which spanned the period from about 115,000 to 11,000 years ago, temperatures in Greenland rose abruptly and fell again at least 25 times. Corresponding temperature swings occurred in Antarctica too, although they were less pronounced than those in Greenland.

The striking but fleeting bursts of heat are known as Dansgaard–Oeschger (D-O) events, named after palaeoclimatologists Willi Dansgaard and Hans Oeschger who examined ice cores obtained by deep drilling the Greenland ice sheet. What they found was a series of rapid climate fluctuations, when the icebound earth suddenly warmed to near-interglacial conditions over just a few decades, only to gradually cool back down to frigid ice-age temperatures.

Ice-core data from Greenland and Antarctica are depicted in the figure below; two sets of measurements, recorded at different locations, are shown for each. The isotopic ratios of 18O to 16O, or δ18O, and 2H to 1H, or δ2H, in the cores are used as proxies for the past surface temperature in Greenland and Antarctica, respectively.
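For readers unfamiliar with the notation, δ values are simply the standard way of expressing small isotopic differences relative to a reference sample:

δ18O = (Rsample / Rstandard − 1) × 1,000 ‰, where R is the 18O/16O ratio,

with δ2H defined analogously for the 2H/1H ratio. In ice cores, less negative (higher) δ values correspond to warmer temperatures at the time the snow fell.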

Multiple D-O events can be seen in the four sets of data, stronger in Greenland than Antarctica. The periodicity of successive events averages 1,470 years, which has led to the suggestion of a 1,500-year cycle of climate change associated with the sun.

Somewhat similar cyclicity has been observed during the present interglacial period or Holocene, with eight sudden temperature drops and recoveries, mirroring D-O temperature spurts, as illustrated by the thick black line in the next figure. Note that the horizontal timescale runs forward, compared to backward in the previous (and following) figure.

These so-called Bond events were identified by geologist Gerard Bond and his colleagues, who used drift ice measured in deep-sea sediment cores, and δ18O as a temperature proxy, to study ancient climate change. The deep-sea cores contain glacial debris rafted into the oceans by icebergs, and then dropped onto the sea floor as the icebergs melted. The volume of glacial debris was largest, and it was carried farthest out to sea, when temperatures were lowest.

Another set of distinctive, abrupt events during the latter part of the last ice age were Heinrich events, which are related to both D-O events and Bond cycles. Five of the six or more Heinrich events are shown in the following figure, where the red line represents Greenland ice-core δ18O data, and some of the many D-O events are marked; the figure also includes Antarctic δ18O data, together with ice-age CO2 and CH4 levels.

As you can see, Heinrich events represent the cooling portion of certain D-O events. Although the origins of both are debated, they are thought likely to be associated with an increase in icebergs discharged from the massive Laurentide ice sheet which covered most of Canada and the northern U.S. Just as with Bond events, Heinrich and D-O events left a signature on the ocean floor, in this case in the form of large rocks eroded by glaciers and dropped by melting icebergs.

The melting icebergs would have also disgorged enormous quantities of freshwater into the Labrador Sea. One hypothesis is that this vast influx of freshwater disrupted the deep-ocean thermohaline circulation (shown below) by lowering ocean salinity, which in turn suppressed deepwater formation and reduced the thermohaline circulation.

Since the thermohaline circulation plays an important role in transporting heat northward, a slowdown would have caused the North Atlantic to cool, leading to a Heinrich event. Later, as the supply of freshwater decreased, ocean salinity and deepwater formation would have increased again, resulting in the rapid warming of a D-O event.

However, this is but one of several possible explanations. The proposed freshwater increase and reduced deepwater formation during D-O events could have resulted from changes in wind and rainfall patterns in the Northern Hemisphere, or the expansion of Arctic sea ice, rather than melting icebergs.

In 2021, an international team of climate researchers concluded that when certain parts of the ice-age climate system changed abruptly, other parts of the system followed like a series of dominoes toppling in succession. But to their surprise, neither the rate of change nor the order of the processes was the same from one event to the next.

Using data from two Greenland ice cores, the researchers discovered that changes in ocean currents, sea ice and wind patterns were so closely intertwined that they likely triggered and reinforced each other in bringing about the abrupt climate changes of D-O and Heinrich events.

While there’s clearly no connection between ice-age D-O events and today’s accelerated warming, this research and the very existence of such events show that the underlying causes of rapid climate change can be elusive.

Next: Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Is Driven by Oceanic Seismic Activity, Not CO2

Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

In something of a twist to my series on challenges to the CO2 global warming hypothesis, this post describes a new paper that attributes modern global warming entirely to water vapor, not CO2.

Water vapor (H2O) is in fact the major greenhouse gas in the earth’s atmosphere and accounts for about 70% of the earth’s natural greenhouse effect. Water droplets in clouds account for another 20%, while CO2 contributes only a small percentage, between 4 and 8%, of the total. The natural greenhouse effect keeps the planet about 33 degrees Celsius (59 degrees Fahrenheit) warmer than it would otherwise be – a temperature comfortable enough for living organisms to survive.

According to the CO2 hypothesis, it’s the additional greenhouse effect of CO2 and other gases from human activities that is responsible for the current warming (ignoring El Niño) of about 1.0 degrees Celsius (1.8 degrees Fahrenheit) since the preindustrial era. Because elevated CO2 on its own causes only a tiny increase in temperature, the hypothesis postulates that the increase from CO2 is amplified by water vapor in the atmosphere and by clouds – a positive feedback effect.

The paper’s authors, Canadian researchers H. Douglas Lightfoot and Gerald Ratzer, don’t dispute that the natural greenhouse effect exists, as do other, heretical challenges described previously in this series. But the authors ignore the postulated water vapor amplification of CO2 greenhouse warming, and claim that increased water vapor alone accounts for today’s warmer world. It’s well known that extra water vapor is produced by the sun’s evaporation of seawater.

The basis of Lightfoot and Ratzer’s conclusion is something called the psychrometric chart, which is a rather intimidating tool used by architects and engineers in designing heating and cooling systems for buildings. The chart, illustrated below, is a mathematical model of the atmosphere’s thermodynamic properties, including heat content (enthalpy), temperature and relative humidity.

As inputs to their psychrometric model, the researchers used temperature and relative humidity measurements recorded on the 21st of the month over a 12-month period at 20 different locations: four north of the Arctic Circle, six in north mid-latitudes, three on the equator, one in the Sahara Desert, five in south mid-latitudes and one in Antarctica.

As indicated in the figure above, one output of the model from these inputs is the mass of water vapor in grams per kilogram of dry air. The corresponding mass of CO2 per kilogram of dry air at each location was calculated from Mauna Loa CO2 data in ppm (parts per million).
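The paper’s actual psychrometric calculations aren’t reproduced here, but a rough sketch of the same kind of conversion – using the standard Magnus approximation for saturation vapor pressure and the usual humidity-ratio formula, which are my choices for illustration rather than necessarily the authors’ – looks like this:

```python
import math

def h2o_per_co2(temp_c, rel_humidity, co2_ppm=420.0, pressure_hpa=1013.25):
    """Rough ratio of water vapor molecules to CO2 molecules in a parcel of air.

    Saturation vapor pressure from the Magnus approximation; humidity ratio
    from the usual 0.622*e/(p - e) formula. A sketch, not the paper's model.
    """
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = rel_humidity * e_sat                      # actual vapor pressure, hPa
    w = 0.622 * e / (pressure_hpa - e)            # kg water vapor per kg dry air
    mol_h2o = 1000.0 * w / 18.02                  # moles of H2O per kg dry air
    mol_co2 = (1000.0 / 28.97) * co2_ppm * 1e-6   # moles of CO2 per kg dry air
    return mol_h2o / mol_co2

print(round(h2o_per_co2(32.0, 0.80), 1))   # humid tropical air: roughly 90
print(round(h2o_per_co2(-40.0, 0.70), 1))  # very cold polar air: roughly 0.3
```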

Their results revealed that the ratio of water vapor molecules to CO2 molecules ranges from 0.3 in polar regions to 108 in the tropics. Then, in a somewhat obscure argument, Lightfoot and Ratzer compared these ratios to calculated spectra for outgoing radiation at the top of the atmosphere. Three spectra – for the Sahara Desert, the Mediterranean, and Antarctica – are shown in the next figure.

The significant dip in the Sahara Desert spectrum arises from absorption by CO2 of outgoing radiation whose emission would otherwise cool the earth. You can see that in Antarctica, the dip is absent and replaced by a bulge. This bulge has been explained by William Happer and William van Wijngaarden as being a result of the radiation to space by greenhouse gases over wintertime Antarctica exceeding radiation by the cold ice surface.

Yet Lightfoot and Ratzer assert that the dip must be unrelated to CO2 because their psychrometric model shows there are 0.3 to 40 molecules of water vapor per CO2 molecule in Antarctica, compared with a much higher 84 to 108 in the tropical Sahara where the dip is substantial. Therefore, they say, the warming effect of CO2 must be negligible.

As I see it, however, there are at least two fallacies in the researchers’ arguments. First, the psychrometric model is an inadequate representation of the earth’s climate. Although the model takes account of both convective heat and latent heat (from evaporation of H2O) in the atmosphere, it ignores multiple feedback processes, including the all-important water vapor feedback mentioned above. Other feedbacks include the temperature/altitude (lapse rate) feedback, high- and low-cloud feedback, and the carbon cycle feedback.

A more important objection is that the assertion about water vapor causing global warming represents a circular argument.

According to Lightfoot and Ratzer’s paper, any warming above that provided by the natural greenhouse effect comes solely from the sun. On average, they correctly state, about 26% of the sun’s incoming energy goes into evaporation of water (mostly seawater) to water vapor. The psychrometric model links the increase in water vapor to a gain in temperature.

But the Clausius-Clapeyron equation tells us that warmer air holds more moisture, about 7% more for each degree Celsius of temperature rise. So an increase in temperature raises the water vapor level in the atmosphere – not the other way around. Lightfoot and Ratzer’s claim is circular reasoning.
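The roughly 7% figure is not plucked from thin air; it follows from the Clausius-Clapeyron relation itself, as this quick check shows (standard textbook values, not numbers taken from the paper):

```python
# Fractional increase in saturation vapor pressure per degree of warming,
# from the Clausius-Clapeyron relation: d(ln e_s)/dT = L / (R_v * T^2).
L_vap = 2.5e6   # latent heat of vaporization of water, J/kg
R_v = 461.5     # specific gas constant of water vapor, J/(kg K)
T = 288.0       # a typical surface temperature, K (15 degrees C)

fractional_increase = L_vap / (R_v * T**2)
print(f"{fractional_increase * 100:.1f}% more moisture per degree C")  # ~6.5%, close to the 7% rule of thumb
```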

Next: Rapid Climate Change Is Not Unique to the Present

Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

In a recent series of blog posts, I showed how actual scientific data and reports in newspaper archives over the past century demonstrate clearly that the frequency and severity of extreme weather events have not increased during the last 100 years. But there’s also plenty of evidence of weather extremes comparable to today’s dating back centuries and even millennia.

The evidence consists largely of reconstructions based on proxies such as tree rings, sediment cores and leaf fossils, although some evidence is anecdotal. Reconstruction of historical hurricane patterns, for example, confirms what I noted in an earlier post, that past hurricanes were even more frequent and stronger than those today.

The figure below shows a proxy measurement for hurricane strength of landfalling tropical cyclones – the name for hurricanes down under – that struck the Chillagoe limestone region in northeastern Queensland, Australia between 1228 and 2003. The proxy was the ratio of 18O to 16O isotopic levels in carbonate cave stalagmites, a ratio which is highly depleted in tropical cyclone rain.

What is plotted here is the 18O/16O depletion curve, in parts per thousand (‰); the thick horizontal line at -2.50 ‰ denotes Category 3 or above events, which have a top wind speed of 178 km per hour (111 mph) or greater. It’s clear that far more (seven) major tropical cyclones impacted the Chillagoe region in the period from 1600 to 1800 than in any period since, at least until 2003. Indeed, the strongest cyclone in the whole record occurred during the 1600 to 1800 period, and only one major cyclone was recorded from 1800 to 2003.

Another reconstruction of past data is that of unprecedentedly long and devastating “megadroughts,” which have occurred in western North America and in Europe for thousands of years. The next figure depicts a reconstruction from tree ring proxies of the drought pattern in central Europe from 1000 to 2012, with observational data from 1901 to 2018 superimposed. Dryness is denoted by negative values, wetness by positive values.

The authors of the reconstruction point out that the droughts from 1400 to 1480 and from 1770 to 1840 were much longer and more severe than those of the 21st century. A reconstruction of megadroughts in California back to 800 was featured in a previous post.

An ancient example of a megadrought is the 7-year drought in Egypt approximately 4,700 years ago that resulted in widespread famine. The water level in the Nile River dropped so low that the river failed to flood adjacent farmlands as it normally did each year, drastically reducing crop yields. The event is recorded in a hieroglyphic inscription, known as the Famine Stela, on a granite block located on an island in the Nile.

At the other end of the wetness scale, a Christmas Eve flood in the Netherlands, Denmark and Germany in 1717 drowned over 13,000 people – many more than died in the much hyped Pakistan floods of 2022.

Although most tornadoes occur in the U.S., they have been documented in the UK and other countries for centuries. In 1577, North Yorkshire in England experienced a tornado of intensity T6 on the TORRO scale, which corresponds approximately to EF4 on the Fujita scale, with wind speeds of 259-299 km per hour (161-186 mph). The tornado destroyed cottages, trees, barns, hayricks and most of a church. EF4 tornadoes are relatively rare in the U.S.: of 1,000 recorded tornadoes from 1950 to 1953, just 46 were EF4.

Violent thunderstorms that spawn tornadoes have also been reported throughout history. An associated hailstorm which struck the Dutch town of Dordrecht in 1552 was so violent that residents “thought the Day of Judgement was coming” when hailstones weighing up to a few pounds fell on the town. A medieval depiction of the event is shown in the following figure.

Such historical storms make a mockery of the 2023 claim by a climate reporter that “Recent violent storms in Italy appear to be unprecedented for intensity, geographical extensions and damages to the community.” The thunderstorms in question produced hailstones the size of tennis balls, merely comparable to those that fell on Dordrecht centuries earlier. And the storms hardly compare with a hailstorm in India in 1888, which actually killed 246 people.

Next: Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

As I’ve stated many times in these pages, the evidence that global warming comes largely from human emissions of CO2 and other greenhouse gases is not rock solid. Two recent statistical studies affirm this position, but both studies can be faulted.

The first study, by four European engineers, is provocatively titled “On Hens, Eggs, Temperatures and CO2: Causal Links in Earth’s Atmosphere.” As the title suggests, the paper addresses the question of whether modern global warming results from increased CO2 in the atmosphere, according to the CO2 narrative, or whether it’s the other way around. That is, whether rising temperatures from natural sources are causing the CO2 concentration to go up.

The study’s controversial conclusion is the latter possibility – that extra atmospheric CO2 can’t be the cause of higher temperatures, but that raised temperatures must be the origin of elevated CO2, at least over the last 60 years for which we have reliable CO2 data. The mathematics behind the conclusion is complicated but relies on something called the impulse response function.

The impulse response function describes the reaction over time of a dynamic system to some external change or impulse. Here, the impulse and response are the temperature change ΔT and the increase in the logarithm of the CO2 level, Δln(CO2), or the reverse. The study authors took ΔT to be the average one-year temperature difference from 1958 to 2022 in the Reanalysis 1 dataset compiled by the U.S. NCEP (National Centers for Environmental Prediction) and the NCAR (National Center for Atmospheric Research); CO2 data was taken from the Mauna Loa time series which dates from 1958.

Based on these two time series, the study’s calculated IRFs (impulse response functions) are depicted in the figure below, for the alternate possibilities of ΔT => Δln(CO2) (left, in green) and Δln(CO2) => ΔT (right, in red). Clearly, the IRF indicates that ΔT is the cause and Δln(CO2) the effect, since for the opposite case of Δln(CO2) causing ΔT, the time lag is negative and therefore unphysical.

This is reinforced by the correlations shown in the following figure (lower panels), which also illustrates the ΔT and Δln(CO2) time series (upper panel). A strong correlation (R = 0.75) is seen between ΔT and Δln(CO2) when the CO2 increase occurs six months later than ΔT, while there is no correlation (R = 0.01) when the CO2 increase occurs six months earlier than ΔT, so ΔT must cause Δln(CO2). Note that the six-month displacement of Δln(CO2) from ΔT in the two time series is artificial, for easier viewing.
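For anyone curious how such a lag test works mechanically, here is a minimal sketch – my own illustration with synthetic data, not the authors’ code – of correlating one monthly series against a shifted copy of the other:

```python
import numpy as np

def lagged_corr(dT, dlnCO2, lag_months):
    """Correlation between dT and dlnCO2, with dlnCO2 shifted by lag_months.

    A positive lag pairs each temperature change with the CO2 change that
    occurs lag_months later; a negative lag does the opposite.
    """
    if lag_months > 0:
        return np.corrcoef(dT[:-lag_months], dlnCO2[lag_months:])[0, 1]
    if lag_months < 0:
        return np.corrcoef(dT[-lag_months:], dlnCO2[:lag_months])[0, 1]
    return np.corrcoef(dT, dlnCO2)[0, 1]

# Synthetic example in which CO2 responds six months after temperature:
rng = np.random.default_rng(0)
dT = rng.standard_normal(780)                               # 65 years of monthly changes
dlnCO2 = np.roll(dT, 6) + 0.5 * rng.standard_normal(780)    # lagged copy plus noise

print(round(lagged_corr(dT, dlnCO2, 6), 2))    # strong correlation at +6 months
print(round(lagged_corr(dT, dlnCO2, -6), 2))   # essentially none at -6 months
```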

However, while the above correlation and the behavior of the impulse response function are impressive mathematically, I personally am dubious about the study’s conclusion.

The oceans hold the bulk of the world’s CO2 and release it as the temperature rises, since warmer water holds less CO2 according to Henry’s Law. For global warming of approximately 1 degree Celsius (1.8 degrees Fahrenheit) since 1880, the corresponding increase in atmospheric CO2 outgassed from the oceans is only about 16 ppm (parts per million) – far below the actual increase of 130 ppm over that time. The Hens and Eggs study can’t account for the extra 114 ppm of CO2.
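The bookkeeping behind that objection is simple, and worth spelling out with the figures just quoted:

```python
# The ocean-outgassing bookkeeping behind the objection above.
warming_since_1880 = 1.0       # degrees Celsius
outgassing_per_degree = 16.0   # ppm of CO2 per degree C of warming (figure quoted above)
observed_co2_rise = 130.0      # ppm increase in atmospheric CO2 since 1880

outgassed = warming_since_1880 * outgassing_per_degree
unexplained = observed_co2_rise - outgassed
print(f"Outgassing explains ~{outgassed:.0f} ppm, leaving {unexplained:.0f} ppm unaccounted for")
```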

The equally provocative second study, titled “To what extent are temperature levels changing due to greenhouse gas emissions?”, comes from Statistics Norway, Norway’s national statistical institute and the principal source of the country’s official statistics. From a statistical analysis, the study claims that the effect of human CO2 emissions during the last 200 years has not been strong enough to cause the observed rise in temperature, and that climate models are incompatible with actual temperature data.

The conclusions are based on an analysis of 75 temperature time series from weather stations in 32 countries, the records spanning periods from 133 to 267 years; both annual and monthly time series were examined. The analysis attempted to identify systematic trends in temperature, or the absence of trends, in the temperature series.

What the study purports to find is that only three of the 75 time series show any systematic trend in annual data (though up to 10 do in monthly data), so that 72 sets of long-term temperature data show no annual trend at all. From this finding, the study authors conclude it’s not possible to determine how much of the observed temperature increase since the 19th century is due to CO2 emissions and how much is natural.

One of the study’s weaknesses is that it excludes sea surface temperatures, even though the oceans cover 70% of the earth’s surface, so the study is not truly global. A more important weakness is that it confuses local temperature measurements with global mean temperature. Furthermore, the study authors fail to understand that a statistical model simply can’t approximate the complex physical processes of the earth’s climate system.

In any case, statistical analysis in climate science doesn’t have a strong track record. The infamous “hockey stick” – a reconstructed temperature graph for the past 2,000 years resembling the shaft and blade of a hockey stick on its side – is perhaps the best example.

The reconstruction was debunked in 2003 by Stephen McIntyre and Ross McKitrick, who found (here and here) that the graph was based on faulty statistical analysis, as well as preferential data selection. The hockey stick was further discredited by a team of scientists and statisticians from the National Research Council of the U.S. National Academy of Sciences.

Next: Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

Antarctica Sending Mixed Climate Messages

Antarctica, the earth’s coldest and least-populated continent, is an enigma when it comes to global warming.

While the huge Antarctic ice sheet is known to be shedding ice around its edges, it may be growing in East Antarctica. Antarctic sea ice, after expanding slightly for at least 37 years, took a tumble in 2017 and reached a record low in 2023. And recent Antarctic temperatures have swung from record highs to record lows. No one is sure what’s going on.

The influence of global warming on Antarctica’s temperatures is uncertain. A 2021 study concluded that both East Antarctica and West Antarctica have cooled since the beginning of the satellite era in 1979, at rates of 0.70 degrees Celsius (1.3 degrees Fahrenheit) per decade and 0.42 degrees Celsius (0.76 degrees Fahrenheit) per decade, respectively. But over the same period, the Antarctic Peninsula (on the left in the adjacent figure) has warmed at a rate of 0.18 degrees Celsius (0.32 degrees Fahrenheit) per decade.

During the southern summer, two locations in East Antarctica recorded record low temperatures early this year. At the Concordia weather station, located at the 4 o’clock position from the South Pole, the mercury dropped to -51.2 degrees Celsius (-60.2 degrees Fahrenheit) on January 31, 2023. This marked the lowest January temperature recorded anywhere in Antarctica since the first meteorological observations there in 1956.

Two days earlier, on January 29, 2023, the nearby Vostok station, about 400 km (250 miles) closer to the South Pole, registered a low temperature of -48.7 degrees Celsius (-55.7 degrees Fahrenheit), that location’s lowest January temperature since 1957. Vostok has the distinction of reporting the lowest temperature ever recorded in Antarctica, and also the world record low, of -89.2 degrees Celsius (-128.6 degrees Fahrenheit) on July 21, 1983.

Barely a year before, however, East Antarctica had experienced a heat wave, when the temperature soared to -10.1 degrees Celsius (13.8 degrees Fahrenheit) at the Concordia station on March 18, 2022. This balmy reading was the highest recorded hourly temperature at that weather station since its establishment in 1996, and 20 degrees Celsius (36 degrees Fahrenheit) above the previous March record high there. Remarkably, the temperature remained above the previous March record for three consecutive days, including nighttime.

Antarctic sea ice largely disappears during the southern summer and reaches its maximum extent in September, at the end of winter. The two figures below illustrate the winter maximum extent in 2023 (left) and the monthly variation of Antarctic sea ice extent this year from its March minimum to the September maximum (right).

The black curve on the right depicts the median extent from 1981 to 2010, while the dashed red and blue curves represent 2022 and 2023, respectively. It's clear that Antarctic sea ice in 2023 has lagged the median and even 2022 by a wide margin throughout the year. The decline in summer sea ice extent has now persisted for six years, as seen in the following figure which shows the average monthly extent since satellite measurements began, as an anomaly from the median value.

The overall trend from 1979 to 2023 is an insignificant 0.1% per decade relative to the 1981 to 2010 median. Yet a prolonged increase above the median occurred from 2008 to 2017, followed by the six-year decline since then. The current downward trend has sparked much debate and several possible reasons have been put forward, not all of which are linked to global warming. One analysis attributes the big losses of sea ice in 2017 and 2023 to extra strong El Niños.

Melting of the Antarctic ice sheet is currently causing sea levels to rise by 0.4 mm (16 thousandths of an inch) per year, contributing about 10% of the global total. But the ice loss is not uniform across the continent, as seen in the next figure showing changes in Antarctic ice sheet mass since 2002.

In the image on the right, light blue shades indicate ice gain while orange and red shades indicate ice loss. White denotes areas where there has been very little or no change in ice mass since 2002; gray areas are floating ice shelves whose mass change is not measured by this satellite method.

You can see that East Antarctica has experienced modest amounts of ice gain, which is due to warming-enhanced snowfall. Nevertheless, this gain has been offset by significant loss of ice in West Antarctica over the same period, largely from melting of glaciers – which is partly caused by active volcanoes underneath the continent. While the ice sheet mass declined at a fairly constant rate of 133 gigatonnes (147 gigatons) per year from 2002 to 2020, it appears that the total mass may have reached a minimum and is now on the rise again.
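Those two numbers are mutually consistent, as a quick conversion shows (the roughly 362 gigatonnes of ice per millimeter of global sea level is a standard conversion factor, not a figure from the post):

```python
# Converting the quoted Antarctic ice loss into sea level rise.
ice_loss_gt_per_year = 133.0   # gigatonnes per year, the 2002-2020 average quoted above
gt_per_mm_sea_level = 362.0    # roughly 362 Gt of ice raise global sea level by 1 mm

sea_level_rise_mm = ice_loss_gt_per_year / gt_per_mm_sea_level
print(f"{sea_level_rise_mm:.2f} mm per year")   # ~0.37 mm/yr, consistent with the 0.4 mm figure
```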

Despite the hullabaloo about its melting ice sheet and shrinking sea ice, what happens next in Antarctica continues to be a scientific mystery.

Next: Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

No Evidence That Today’s El Niños Are Any Stronger than in the Past

The current exceptionally strong El Niño has revived discussion of a question which comes up whenever the phenomenon recurs every two to seven years: are stronger El Niños caused by global warming? While recent El Niño events suggest that in fact they are, a look at the historical record shows that even stronger El Niños occurred in the distant past.

El Niño is the warm phase of ENSO (the El Niño – Southern Oscillation), a natural ocean cycle that causes drastic temperature fluctuations and other climatic effects in tropical regions of the Pacific. Its effect on atmospheric temperatures is illustrated in the figure below. Warm spikes such as those in 1997-98, 2009-10, 2014-16 and 2023 are due to El Niño; cool spikes like those in 1999-2001 and 2008-09 are due to the cooler La Niña phase.

A slightly different temperature record, of selected sea surface temperatures in the El Niño region of the Pacific, averaged yearly from 1901 to 2017, is shown in the next figure from a 2019 study.

Here the baseline is the mean sea surface temperature over the 1901-2017 interval, and the black dashed line at 0.6 degrees Celsius is defined by the study authors as the threshold for an El Niño event. The different colors represent various regional types of El Niño; the gray bars mark warm years in which no El Niño developed.

This year’s gigantic spike in the tropospheric temperature to 0.93 degrees Celsius (1.6 degrees Fahrenheit) – a level that set alarm bells ringing – is clearly the strongest El Niño by far in the satellite record. Comparison of the above two figures shows that it is also the strongest since 1901. So it does indeed appear that El Niños are becoming stronger as the globe warms, especially since 1960.

Nevertheless, such a conclusion is ill-considered as there is evidence from an earlier study that strong El Niños have been plentiful in the earth’s past.

As I described in a previous post, a team of German paleontologists established a complete record of El Niño events going back 20,000 years, by examining marine sediment cores drilled off the coast of Peru. The cores contain an El Niño signature in the form of tiny, fine-grained stone fragments, washed into the sea by multiple Peruvian rivers following floods in the country caused by heavy El Niño rainfall.

The research team classified the flood signal as very strong when the concentration of stone fragments, known as lithics, was more than two standard deviations above the centennial mean. The frequency of these very strong events over the last 12,000 years is illustrated in the next figure; the black and gray bars show the frequency as the number of 500- and 1,000-year floods, respectively. Radiocarbon dating of the sediment cores was used to establish the timeline.
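That classification rule is easy to express in code; the sketch below is my own illustration of the stated criterion (the exact windowing the researchers used may differ), assuming an annual series of lithic concentrations:

```python
import numpy as np

def very_strong_floods(lithics, window=100):
    """Flag years whose lithic concentration exceeds the centennial mean by more than 2 sigma.

    lithics: 1-D array of annual lithic concentrations. The mean and standard
    deviation are taken over a window of roughly 100 years around each point.
    """
    lithics = np.asarray(lithics, dtype=float)
    flags = np.zeros(len(lithics), dtype=bool)
    for i in range(len(lithics)):
        start = max(0, i - window // 2)
        end = min(len(lithics), i + window // 2)
        segment = lithics[start:end]
        flags[i] = lithics[i] > segment.mean() + 2.0 * segment.std()
    return flags
```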

A more detailed record is presented in the following figure, showing the variation over 20,000 years of the sea surface temperature off Peru (top), the lithic concentration (bottom) and a proxy for lithic concentration (center). Sea surface temperatures were derived from chemical analysis of the marine sediment cores.

You can see that the lithic concentration and therefore El Niño strength were high around 2,000 and 10,000 years ago – approximately the same periods when the most devastating floods occurred. The figure also reveals the absence of strong El Niño activity from 5,500 to 7,500 years ago, a dry interval without any major Peruvian floods as reflected in the previous figure.

If you examine the lithic plots carefully, you can also see that the many strong El Niños approximately 2,000 and 10,000 years ago were several times stronger (note the logarithmic concentration scale) than current El Niños on the far left of the figure. Those two periods were warmer than today as well, being the Roman Warm Period and the Holocene Thermal Maximum, respectively.

So there is nothing remarkable about recent strong El Niños.

Despite this, the climate science community is still uncertain about the global warming question. The 2019 study described above found that since the 1970s, formation of El Niños has shifted from the eastern to the western Pacific, where ocean temperatures are higher. From this observation, the study authors concluded that future El Niños may intensify. However, they qualified their conclusion by stating that:

… the root causes of the observed background changes in the later part of the 20th century remain elusive … Natural variability may have added significant contributions to the recent warming.

Recently, an international team of 17 scientists has conducted a theoretical study of El Niños since 1901 using 43 climate models, most of which showed the same increase in El Niño strength since 1960 as the actual observations. But again, the researchers were unable to link this increase to global warming, declaring that:

Whether such changes are linked to anthropogenic warming, however, is largely unknown.

The researchers say that resolution of the question requires improved climate models and a better understanding of El Niño itself. Some climate models show El Niño becoming weaker in the future.

Next: Antarctica Sending Mixed Climate Messages

Targeting Farmers for Livestock Greenhouse Gas Emissions Is Misguided

Farmers in many countries are increasingly coming under attack over their livestock herds. Ireland’s government is contemplating culling the country’s cattle herds by 200,000 cows to cut back on methane (CH4) emissions; the Dutch government plans to buy out livestock farmers to lower emissions of CH4 and nitrous oxide (N2O) from cow manure; and New Zealand is close to taxing CH4 from cow burps.

But all these measures, and those proposed in other countries, are misguided and shortsighted – for multiple reasons.

The thrust behind the intended clampdown on the farming community is the estimated 11-17% of current greenhouse gas emissions from agriculture worldwide, which contribute to global warming. Agricultural CH4, mainly from ruminant animals, accounted for approximately 4% of total greenhouse gas emissions in the U.S. in 2021, according to the EPA (Environmental Protection Agency), while N2O accounted for another 5%.

The actual warming produced by these two greenhouse gases depends on their so-called “global warming potential,” a quantity determined by three factors: how efficiently the gas absorbs heat, its lifetime in the atmosphere, and its atmospheric concentration. The following table illustrates these factors for CO2, CH4 and N2O, together with their comparative warming effects.

The conventional global warming potential (GWP) is a dimensionless metric, in which the GWP per molecule of a particular greenhouse gas is normalized to that of CO2; the GWP takes into account the atmospheric lifetime of the gas. The table shows both GWP-20 and GWP-100, the warming potentials calculated over a 20-year and 100-year time horizon, respectively.

The final column shows what I call weighted GWP values, as percentages of the CO2 value, calculated by multiplying the conventional GWP by the ratio of the rate of concentration increase for that gas to that of CO2. The weighted GWP indicates how much warming CH4 or N2O causes relative to CO2.

Over a 100-year time span, you can see that both CH4 and N2O exert essentially the same warming influence, at 10% of CO2 warming. But over a 20-year interval, CH4 has a stronger warming effect than N2O, at 27% of CO2 warming, because of its shorter atmospheric lifetime which boosts the conventional GWP value from 30 (over 100 years) to 83.

However, the actual global temperature increase from CH4 and N2O – concern over which is the basis for legislation targeting the world’s farmers – is small. Over a 20-year period, the combined contribution of these two gases is approximately 0.075 degrees Celsius (0.14 degrees Fahrenheit), assuming that all current warming comes from CO2, CH4 and N2O combined, and using a value of 0.14 degrees Celsius (0.25 degrees Fahrenheit) per decade for the current warming rate.

But, as I’ve stated in many previous posts, at least some current warming is likely to be from natural sources, not greenhouse gases. So the estimated 20-year temperature rise of 0.075 degrees Celsius (0.14 degrees Fahrenheit) is probably an overestimate. The corresponding number over 100 years, also an overestimate, is 0.23 degrees Celsius (0.41 degrees Fahrenheit).
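Here is how that arithmetic works out, in a sketch that uses the percentages discussed above and assumes – my assumption, back-solved from the text’s own numbers – that N2O’s 20-year weighted share stays near its 100-year value of 10%:

```python
# Rough reconstruction of the 20-year warming contribution of CH4 and N2O.
warming_rate = 0.14        # degrees C per decade, the rate used above
years = 20
ch4_share_of_co2 = 0.27    # weighted GWP-20 of CH4, as a fraction of CO2 warming
n2o_share_of_co2 = 0.10    # assumed to stay near its 100-year value

total_warming = warming_rate * years / 10    # 0.28 degrees C over 20 years
# As above, assume all of this warming comes from CO2 + CH4 + N2O.
fraction_ch4_n2o = (ch4_share_of_co2 + n2o_share_of_co2) / (1 + ch4_share_of_co2 + n2o_share_of_co2)
print(f"{total_warming * fraction_ch4_n2o:.3f} degrees C from CH4 and N2O")  # ~0.076, matching the ~0.075 above
```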

Do such small, or even smaller, gains in temperature justify the shutting down of agriculture? Farmers around the globe certainly don’t think so, and for good reason.

First, CH4 from ruminant animals such as cows, sheep and goats accounts for only 4% of U.S. greenhouse emissions as noted above, compared with 29% from transportation, for example. And our giving up eating meat and dairy products would have little impact on global temperatures. Removing all livestock and poultry from the U.S. food system would only reduce global greenhouse gas emissions by 0.36%, a study has found.

According to the Iowa Farm Bureau, other studies have shown that eliminating all livestock from U.S. farms would leave our diets deficient in vital nutrients that meat provides, including high-quality protein, iron and vitamin B12.

Furthermore, as agricultural advocate Kacy Atkinson argues, the methane that cattle burp out during rumination breaks down in 10 to 15 years into CO2 and water. The grasses that cattle graze on absorb that CO2, and the carbon gets sequestered in the soil through the grasses’ roots.

Apart from cow manure management, the largest source of N2O emissions worldwide is the application of nitrogenous fertilizers to boost crop production. Greatly increased use of nitrogen fertilizers is the main reason for massive increases in crop yields since 1961, part of the so-called green revolution in agriculture.

The figure below shows U.S. crop yields relative to yields in 1866 for corn, wheat, barley, grass hay, oats and rye. The blue dashed curve is the annual agricultural usage of nitrogen fertilizer in megatonnes (Tg). The strong correlation with crop yields is obvious.

Restricting fertilizer use would severely impact the world’s food supply. Sri Lanka’s ill-conceived 2021 ban on imports of nitrogenous fertilizer (and pesticides) caused a 30% drop in rice production, resulting in widespread hunger and economic turmoil – a cautionary tale for any efforts to extend N2O reduction measures from livestock to crops.

Next: No Evidence That Today’s El Niños Are Any Stronger than in the Past

Estimates of Economic Losses from El Niños Are Far-fetched

A recent study makes the provocative claim that some of the most intense past El Niño events cost the global economy from $4 trillion to $6 trillion over the following years. That’s two orders of magnitude higher than previous estimates, but almost certainly wrong.

One reason for the enormous difference is that earlier estimates only examined the immediate economic toll, whereas the new study estimated cumulative losses over the five-year period after a warming El Niño. The study authors say, correctly, that the economic downturn triggered by this naturally occurring climate cycle can last that long.

However, even when this drawn-out effect is taken into account, the new study’s cost estimates are still one order of magnitude greater than other estimates in the scientific literature, such as those of the University of Colorado’s Roger Pielke Jr., who studies natural disasters. His estimated time series of total weather disaster losses as a proportion of global GDP from 1990 to 2020 is shown in the figure below.

The accounting used in the new study includes the “spatiotemporal heterogeneity of El Niño teleconnections,” teleconnections being links between weather phenomena at widely separated locations. Country-level teleconnections are based on correlations between temperature or rainfall in that country, and indexes commonly used to define El Niño and its cooling counterpart, La Niña. Teleconnections are strongest in the tropics and weaker in midlatitudes.

The researchers’ accounting procedure estimates total losses from the 1997-98 El Niño at a staggering $5.7 trillion by 2003, compared with a previous estimate of only $36 billion in the immediate aftermath of the event. For the earlier 1982-83 El Niño, the study estimates the total costs at $4.1 trillion by 1988. The calculated global distribution of GDP losses following both events is illustrated in the next figure.

To see how implausible these trillion-dollar estimates are, it’s only necessary to refer to Pielke’s graph above, which relies on official data from the insurance industry (including leading reinsurance company Munich Re) and the World Bank. His graph indicates that the peak loss from all 1998 weather disasters was 0.38% of global GDP for that year.

As El Niño was not the only weather disaster in 1998 – others included floods and hurricanes – this number represents an upper limit for immediate El Niño losses. Using a value for global GDP in 1998 of $31,533 billion in current U.S. dollars, 0.38% corresponds to a maximum immediate loss of $120 billion. Over a subsequent 5-year period, the maximum loss would have been five times as much, or $600 billion, assuming the same annual loss in each year – which is undoubtedly an overestimate.

This inflated estimate of $600 billion is still an order of magnitude smaller than the study’s $5.7 trillion by 2003. In reality, the discrepancy is larger yet because the actual 5-year loss was likely much less than $600 billion as just discussed.
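To make the bounding argument explicit, here is the same calculation as a short Python sketch. The GDP figure and the 0.38% peak-loss share are those quoted above, and the factor of five simply assumes the peak annual loss repeated for five consecutive years.

```python
# Upper-bound estimate of 5-year losses from the 1997-98 El Nino,
# using the peak 1998 weather-disaster loss share of global GDP.
global_gdp_1998 = 31_533      # billions of current US dollars (from the post)
peak_loss_share = 0.0038      # 0.38% of global GDP (from Pielke's data)

max_immediate_loss = peak_loss_share * global_gdp_1998   # ~ $120 billion
max_5yr_loss = 5 * max_immediate_loss                    # ~ $600 billion

study_estimate = 5_700        # billions (the study's $5.7 trillion by 2003)
print(f"Maximum immediate loss: ${max_immediate_loss:,.0f} billion")
print(f"Maximum 5-year loss:    ${max_5yr_loss:,.0f} billion")
print(f"Study estimate is roughly {study_estimate / max_5yr_loss:.0f}x larger")
```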

Two other observations about Pielke’s graph cast further doubt on the methodology of the researchers’ accounting procedure. First, the strongest El Niños in that 21-year period were those in 1997-98, 2009-10 and 2014-16. The graph does indeed show peaks in 1998-99 and in 2017, one year after a substantial El Niño – but not in 2011 following the 2009-10 event. This alone suggests that financial losses from El Niño are not as large as the researchers think.

Furthermore, there’s a strong peak in 2005, the largest in the 21 years of the graph, which doesn’t correspond to any substantial El Niño. The implication is that losses from other types of weather disaster can dominate losses from El Niño.

It’s important to get an accurate handle on economic losses from El Niño and other weather disasters, in case global warming exacerbates such events in the future – although, as I’ve written extensively, there’s no evidence to date that this is happening. Effects of El Niño include catastrophic flooding in the western Americas, flooding or episodic droughts in Australia, and coral bleaching.

The study authors stand by their research, however, estimating that the 2023 El Niño could hold back the global economy by $3 trillion over the next five years, a figure not included in their paper. But others are more skeptical. Climate economist Gary Yohe commented that “the enormous estimates cannot be explained simply by forward-looking accounting.” And Mike McPhaden, a senior scientist at NOAA (the U.S. National Oceanic and Atmospheric Administration) who was not involved in the research, called the study “provocative.”

Next: Targeting Farmers for Livestock Greenhouse Gas Emissions Is Misguided

Challenges to the CO2 Global Warming Hypothesis: (9) Rotation of the Earth’s Core as the Source of Global Warming

Yet another challenge to the CO2 global warming hypothesis, but one radically different from all the other challenges I’ve discussed in this series, proposes that global warming or cooling results entirely from the slight speeding up or slowing down of the earth’s rotating inner core.

Linking the earth’s rotation to its surface temperature is not a new idea and has been discussed by several geophysicists over the last 50 years. What is new is the recent (2023) discovery that changes in global temperature follow changes in the earth’s rotation rate that in turn follow changes in the rotation rate of the inner core, both with a time delay. This discovery underlies the postulate that the earth’s temperature is regulated by rotational variations of the inner core, not by CO2.

The history and recent developments of the rotational hypothesis have been summarized in a recent paper by Australian Richard Mackey. The apparently simplistic hypothesis, which is certain to raise scientific eyebrows, does, however, meet the requirements for its scientific validation or rejection: it makes a prediction that can be tested against observation.

As Mackey explains, the prediction is that our current bout of global warming will come to an end in 2025, when global cooling will begin.

The prediction is based on the geophysical findings that shifts in the earth’s temperature appear to occur about eight years after the planet’s rotation rate changes, and the earth’s rotation rate changes eight years after the inner core’s rotation rate does. Because the inner core’s rotation rate began to slow around 2009, cooling should set in around 16 years later in 2025, according to the rotational hypothesis.
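The timeline behind that prediction is just the sum of the two assumed lags; the trivial sketch below spells it out, using only the years and lags quoted above.

```python
# The 2025 prediction: inner-core slowdown (2009) + ~8-year lag to the earth's
# rotation rate + ~8-year lag to the surface temperature response.
inner_core_slowdown = 2009
core_to_rotation_lag = 8         # years
rotation_to_temperature_lag = 8  # years

print(inner_core_slowdown + core_to_rotation_lag + rotation_to_temperature_lag)  # 2025
```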

As illustrated in the figure below, the partly solid inner core is surrounded by the liquid metal outer core; the outer core is enveloped by the thick solid mantle, which underlies the thin crust on which we live. Convection in the outer core generates an electromagnetic field. The resulting electromagnetic torque on the inner core, together with gravitational coupling between the inner core and mantle, drive rotational variations in the inner core.

Although all layers rotate with the whole earth, the outer and inner cores also oscillate back and forth. Variations in the inner core rotation rate appear to be correlated with changes in the earth’s electromagnetic field mentioned above, changes that are in phase with variations in the global mean temperature.

Only recently was it found that the inner core rotates at a different speed than the outer core and mantle, with decadal fluctuations superimposed on the irregular rotation. The rotational hypothesis links these decadal fluctuations of the inner core to global warming and cooling: as the core rotates faster, the earth warms and as it puts the brakes on, the earth cools.

The first apparent evidence for the rotational hypothesis was reported in a 1976 research paper by geophysicists Kurt Lambeck and Amy Cazenave, who argued that global cooling in the 1960s and early 1970s arose from a slowing of the earth’s rotation during the 1950s.

At that time, the role of inner-core rotation was unknown. Nevertheless, the authors went on to predict that a period of global warming would commence in the 1980s, following a 1972 switch in rotation rate from deceleration to acceleration. Their prediction was based on a time lag of 10 to 15 years between changes in the earth’s rotational speed and surface temperature, rather than the 16 years established recently.

Other researchers had proposed a total time lag of only eight years. The next figure compares their estimates of rotation rate (green line) and surface temperature (red line) from 1880 to 2002, clearly showing the temperature lag, at least since 1900. (The black and blue lines should be ignored).

A minimum lag of eight years and a maximum of 16 years means that global warming should have begun at any time between 1980 and 1988, according to the rotational hypothesis. In fact, the current warming stretch started in the late 1970s, so the hypothesis is on weak ground.

Another weakness is whether the hypothesis can account for all of modern warming. Mackey argues that it can, based on known shortcomings in the various global temperature datasets with which predictions of the rotational hypothesis are compared. But those shortcomings mean merely that there are large uncertainties associated with any comparison, and that a role for CO2 can’t be definitely ruled out.

A moment of truth for the rotational hypothesis will come in 2025 when, it predicts, the planet will start to cool. However, if that indeed happens, rotational fluctuations of the earth’s inner core won’t be the only possible explanation. As I’ve discussed in a previous post, a potential drop in the sun’s output, known as a grand solar minimum, could also initiate a cold spell around that time.

Next: Estimates of Economic Losses from El Niños Are Far-fetched

The Sun Can Explain 70% or More of Global Warming, Says New Study

Few people realize that the popular narrative of overwhelmingly human-caused global warming, with essentially no contribution from the sun, hinges on a satellite dataset showing that the sun’s output of heat and light has decreased since the 1950s.

But if a different but plausible dataset is substituted, say the authors of a new study, the tables are turned and a staggering 70% to 87% of global warming since 1850 can be explained by solar variability. The study’s 37 authors, headed by U.S. astrophysicist Willie Soon, constitute a large international team of scientists from many countries around the world.

The two rival datasets, each of which implies a different trend in solar output or TSI (total solar irradiance) since the late 1970s when satellite measurements began, are illustrated in the figure below, which includes pre-satellite proxy data back to 1850. The TSI and associated radiative forcing – the difference in the earth’s incoming and outgoing radiation, a difference which produces heating or cooling – are measured in units of watts per square meter, relative to the mean from 1901 to 2000.   

The upper graph (Solar #1) is the TSI dataset underlying the narrative that climate change comes largely from human emissions of greenhouse gases, and was used by the IPCC (Intergovernmental Panel on Climate Change) in its 2021 AR6 (Sixth Assessment Report). The lower graph (Solar #2) is a TSI dataset from a different satellite series, as explained in a previous post, and exhibits a more complicated trend since 1950 than Solar #1.

To identify the drivers of global warming since 1850, the study authors carried out a statistical analysis of observed Northern Hemisphere land surface temperatures from 1850 to 2018; the temperature record is shown as the black line in the next figure. Following the approach of the IPCC’s AR6, three possible drivers were considered: two natural forcings (solar and volcanic) and a composite of multiple human-caused or anthropogenic forcings (which include greenhouse gases and aerosols), as employed in AR6.   

Time series for the different forcings, or a combination of them, were fitted to the temperature record utilizing multiple linear regression. This differs slightly from the IPCC’s method, which used climate model hindcasts based on the forcing time series as an intermediate step, as well as fitting global land and ocean, rather than Northern Hemisphere land-only, temperatures.
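For readers curious about the mechanics, the sketch below illustrates the kind of multiple linear regression involved. It is emphatically not the study’s code or data – the forcing and temperature series here are synthetic placeholders I’ve invented purely to show the fitting step – but the principle is the same: regress the temperature record on the forcing time series and compare how well different combinations of drivers fit.

```python
# Illustrative multiple linear regression of a temperature record on forcing
# time series (solar, volcanic, anthropogenic). All series below are synthetic
# placeholders, not the study's actual data.
import numpy as np

years = np.arange(1850, 2019)
rng = np.random.default_rng(0)

# Hypothetical forcing series (W/m^2), for illustration only
solar = 0.1 * np.sin(2 * np.pi * (years - 1850) / 11) + 0.002 * (years - 1850)
volcanic = -np.abs(rng.normal(0.0, 0.3, years.size)) * (rng.random(years.size) < 0.05)
anthro = 0.02 * (years - 1850) ** 1.5 / 50

# Hypothetical "observed" temperature anomaly (deg C)
temperature = 0.5 * solar + 0.3 * volcanic + 0.4 * anthro + rng.normal(0.0, 0.1, years.size)

def r_squared(predictors, target):
    """Least-squares fit of target onto the predictors plus an intercept; return R^2."""
    X = np.column_stack(predictors + [np.ones_like(target)])
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    ss_res = np.sum((target - X @ coeffs) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print("Natural forcings only (solar + volcanic):", round(r_squared([solar, volcanic], temperature), 2))
print("Natural + anthropogenic forcings:        ", round(r_squared([solar, volcanic, anthro], temperature), 2))
```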

The figure below shows the new study’s best fits to the Northern Hemisphere land temperature record for four scenarios using a combination of solar, volcanic and anthropogenic forcings. Scenarios 1 and 2 correspond to the Solar #1 and Solar #2 TSI time series depicted in the first figure above, respectively, combined with volcanic and anthropogenic time series. Scenarios 3 and 4 are the same without the anthropogenic component – that is, with natural forcings only. Any volcanic contribution to natural forcing usually has a cooling effect and is short in duration.

The researchers’ analysis reveals that if the Solar #1 TSI time series is valid, as assumed by the IPCC in AR6, then natural (solar and volcanic) forcings can explain at most only 21% of the observed warming from 1850 to 2018 (Scenario 3). In this picture, adding anthropogenic forcing brings that number up to an 87% fit (Scenario 1).

However, when the Solar #1 series is replaced with the Solar #2 series, then the natural contribution to overall warming increases from 21% to a massive 70% (Scenario 4), while the combined natural and anthropogenic forcing number rises from an 87% to 92% fit (Scenario 2). The better fits with the Solar #2 TSI time series compared to the Solar #1 series are visible if you look closely at the plots in the figure above.

These findings are enhanced further if urban temperatures are excluded from the temperature dataset, on the grounds that urbanization biases temperature measurements upward. The authors have also found that the long-term warming rate for rural temperature stations is only 0.55 degrees Celsius (0.99 degrees Fahrenheit) per century, compared with a rate of 0.89 degrees Celsius (1.6 degrees Fahrenheit) per century for rural and urban stations combined, as illustrated in the figure below.

When the various forcing time series are fitted to a temperature record based on rural stations alone, the natural contribution to global warming rises from 70% to 87% with the Solar #2 series.

If the Solar #2 TSI time series represents reality better than the Solar #1 series used by the IPCC, then global warming since 1850 is mostly natural – 70% to 87% of it – and the human-caused contribution is less than 30%, the complete opposite of the IPCC’s claim of largely anthropogenic warming.

Unsurprisingly, such an upstart conclusion has raised some hackles in the climate science community. But the three lead authors of the study have effectively countered their critics in lengthy, detailed rebuttals (here and here).

The study authors do point out that “it is still unclear which (if any) of the many TSI time series in the literature are accurate estimates of past TSI,” and say that we cannot be certain yet whether the warming since 1850 is mostly human-caused, mostly natural, or some combination of both. In another paper they remark that, while three of 27 or more different TSI time series can explain up to 99% of the warming, another seven time series cannot account for more than 3%.

Next: Challenges to the CO2 Global Warming Hypothesis: (9) Rotation of the Earth’s Core as the Source of Global Warming

Has the Mainstream Media Suddenly Become Honest in Climate Reporting?

Not so long ago I excoriated the mainstream media for misleading the public about perfectly normal extreme weather events. So ABC News’ August 14 article headlined “Why climate change can't be blamed for the Maui wildfires” came as a shock, a seeming media epiphany on the lack of connection between extreme weather and climate change.

But my amazement was short-lived. The next day the news network succumbed to a social media pressure campaign by climate activists, who persuaded ABC News to water down their headline by adding the word “entirely” after “blamed.” Back to the false narrative that today’s weather extremes are more common and more intense because of climate change.

Nevertheless, a majority of the scientific community, including many meteorologists and climate scientists, think that climate change was only a minor factor in kindling the deadly, tragic conflagration on Maui.

As ecologist Jim Steele has explained, the primary cause of the Maui disaster was dead grasses – invasive, nonnative species such as Guinea grass that have flourished in former Maui farmland and forest areas since pineapple and sugar cane plantations were abandoned in the 1980s. Following a wet spring this year which caused prolific grass growth, the superabundance of these grasses quickly became highly flammable in the ensuing dry season. The resulting tinderbox merely awaited a spark.

That spark came from the failure of Maui’s electrical utility to shut off power in the face of hurricane-force winds. Numerous instances of blazes triggered by live wires falling on desiccated vegetation or by malfunctioning electrical equipment have been reported. Just hours before the city of Lahaina was devastated by the fires, a power line was actually seen shedding sparks and igniting dry grass.

Exactly the same conditions set off the calamitous Camp Fire in California in 2018, which was ignited by a faulty electric transmission line in high winds, and demolished Paradise and several other towns. While the Camp Fire’s fuel included parched trees as well as dry grasses, it was almost as deadly as the 2023 Maui fires, killing 86 people. The utility company PG&E (Pacific Gas and Electric Company) admitted responsibility, and was forced to file for bankruptcy in 2019 because of potential lawsuits.

Despite the editorial softening of ABC News’ headline on the Maui wildfires, however, the article itself still contains a number of statements more honest than most penned by run-of-the-mill climate journalists. Four paragraphs into the story, this very surprising sentence appears:

Not only do “fire hurricanes” not exist, but climate change can't be blamed for the number of people who died in the wildfires.

“Fire hurricanes” refers to a term used erroneously by Hawaii’s governor when commenting on the fires.

Three paragraphs later, the story quotes UCLA (University of California, Los Angeles) climate scientist Daniel Swain as saying:

We should not look to the Maui wildfires as a poster child of the link to climate change.

Swain’s statement was immediately followed by another from Abby Frazier, a climatologist at Clark University in Worcester, Massachusetts, who commented that:

The main factor driving the fires involved the invasive grasses that cover huge parts of Hawaii, which are extremely flammable.

And there was more – all of it unprecedented, to borrow a favorite word of climate alarmists, in climate reporting of the last few years, which has routinely promoted the mistaken belief that weather extremes are worsening because of climate change.

Is this the beginning of a new trend, or just an isolated exception?

Time will tell, but there are subtle signs that other mainstream newspapers and TV networks may be cutting back on their usual hysterical hype about extreme weather. One of the reasons could be the new Chair of the IPCC (Intergovernmental Panel on Climate Change) urging the panel to “stick to our fundamental values of following science and trying to avoid any siren voices that take us towards advocacy.” There are already a handful of media outlets that endeavor to be honest and truly fact-based in their climate reporting, including the Washington Examiner and The Australian.

Opposing any move in this direction is a new coalition, founded in 2019, of more than 500 media outlets dedicated to producing “more informed and urgent climate stories.” The CCN (Covering Climate Now) coalition includes three of the world’s largest news agencies – Reuters, Bloomberg and Agence France Presse – and claims to reach an audience of two billion.

In addition to efforts of the CCN, the Rockefeller Foundation has begun funding the hiring of climate reporters to “fight the climate crisis.” Major beneficiaries of this program include the AP (Associated Press) and NPR (National Public Radio).

Leaving no doubts about the advocacy of the CCN agenda, its website mentions the activist term “climate emergency” multiple times and includes a page setting out:

Tips and examples to help journalists make the connection between extreme weather and climate change.

Interestingly enough, ABC News became a CCN member in 2021 – but has apparently had a change of heart since, judging from its Maui article.

Next: The Sun Can Explain 70% or More of Global Warming, Says New Study

Record Heat May Be from Natural Sources: El Niño and Water Vapor from 2022 Tonga Eruption

The record heat worldwide over the last few months – simultaneous heat waves in both the Northern and Southern Hemispheres, and abnormally warm oceans – has led to the hysterical declaration of “global boiling” by the UN Secretary General, the media and even some climate scientists. But a rational look at the data reveals that the cause may be natural sources, not human CO2.

The primary source is undoubtedly the warming El Niño ocean cycle, a natural event that recurs at irregular intervals from two to seven years. The last strong El Niño, which temporarily raised global temperatures by about 0.14 degrees Celsius (0.25 degrees Fahrenheit), was in 2016. For comparison, it takes a full decade for current global warming to increase temperatures by that much. 

However, on top of the 2023 El Niño has been an unexpected natural source of warming – water vapor in the upper atmosphere, resulting from a massive underwater volcanic eruption in the South Pacific kingdom of Tonga in January 2022.

Normally, erupting volcanoes cause significant global cooling, from shielding of sunlight by sulfate aerosol particles in the eruption plume that linger in the atmosphere. Following the 1991 eruption of Mount Pinatubo in the Philippines, for example, the global average temperature fell by 0.6 degrees Celsius (1.1 degrees Fahrenheit) for more than a year.

But the eruption of the Hunga Tonga–Hunga Haʻapai volcano did more than just launch a destructive tsunami and shoot a plume of ash, gas, and pulverized rock 55 kilometers (34 miles) into the sky. It also injected 146 megatonnes (161 megatons) of water vapor into the stratosphere (the layer of the atmosphere above the troposphere) like a geyser. Because it occurred only about 150 meters (500 feet) underwater, the eruption immediately superheated the shallow seawater above and converted it explosively into steam.

Although the excess water vapor – enough to fill more than 58,000 Olympic-size swimming pools – was originally localized to the South Pacific, it quickly diffused over the whole globe. According to a recent study by a group of atmospheric physicists at the University of Oxford and elsewhere, the eruption boosted the water vapor content of the stratosphere worldwide by as much as 10% to 15%. 
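The swimming-pool comparison is easy to check. Assuming the nominal Olympic pool dimensions of 50 by 25 meters with a 2-meter depth (2,500 cubic meters) – my assumption, not a figure from the study – 146 megatonnes of water works out to roughly 58,000 pools:

```python
# Sanity check of the "58,000 Olympic-size swimming pools" comparison.
# Pool volume assumes the nominal 50 m x 25 m x 2 m = 2,500 cubic meters.
water_mass_kg = 146e9        # 146 megatonnes of water vapor
water_density = 1000.0       # kg per cubic meter (liquid-water equivalent)
pool_volume_m3 = 50 * 25 * 2

pools = water_mass_kg / water_density / pool_volume_m3
print(f"{pools:,.0f} Olympic-size pools")   # ~58,000
```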

Water vapor is a powerful greenhouse gas, the dominant greenhouse gas in the atmosphere in fact; it is responsible for about 70% of the earth’s natural greenhouse effect, which keeps the planet at a comfortable enough temperature for living organisms to survive, rather than 33 degrees Celsius (59 degrees Fahrenheit) cooler. So even 10–15% extra water vapor in the stratosphere makes the earth warmer.

The study authors estimated the additional warming from the Hunga Tonga eruption using a simple climate model combined with a widely available radiative transfer model. Their estimate was a maximum global warming of 0.035 degrees Celsius (0.063 degrees Fahrenheit) in the year following the eruption, diminishing over the next five years. The cooling effect of the small amount of sulfur dioxide (SO2) from the eruption was found to be minimal.

As I explained in an earlier post, any increase in ocean surface temperatures from the Hunga Tonga eruption would have been imperceptible, at a minuscule 14 billionths of a degree Celsius or less. That’s because the oceans, which cover 71% of the earth’s surface, are vast and can hold 1,000 times more heat than the atmosphere. Undersea volcanic eruptions can, however, cause localized marine heat waves, as I discussed in another post.

Although 0.035 degrees Celsius (0.063 degrees Fahrenheit) of warming from the Hunga Tonga eruption pales in comparison with 2016’s El Niño boost of 0.14 degrees Celsius (0.25 degrees Fahrenheit), it’s nevertheless more than double the average yearly increase of 0.014 degrees Celsius (0.025 degrees Fahrenheit) of global warming from other sources such as greenhouse gases.
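Put in numbers, using only the figures quoted in this post, the comparison looks like this:

```python
# Comparing the Hunga Tonga warming with the 2016 El Nino boost and the
# average yearly warming trend, using the figures quoted in the post.
tonga_warming = 0.035          # deg C, the Oxford group's estimate
el_nino_2016_boost = 0.14      # deg C
yearly_trend = 0.14 / 10       # deg C per year (0.14 deg C per decade)

print(f"Tonga vs 2016 El Nino boost: {tonga_warming / el_nino_2016_boost:.0%}")
print(f"Tonga vs one year of trend:  {tonga_warming / yearly_trend:.1f}x")
```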

El Niño is the warm phase of ENSO (the El Niño – Southern Oscillation), a natural cycle that causes drastic temperature fluctuations and other climatic effects in tropical regions of the Pacific, as well as raising temperatures globally. Its effect on sea surface temperatures in the central Pacific is illustrated in the figure below. It can be seen that the strongest El Niños, such as those in 1998 and 2016, can make Pacific surface waters more than 2 degrees Celsius (3.6 degrees Fahrenheit) hotter for a whole year or so. 

Exactly how strong the present El Niño will be is unknown, but the heat waves of July suggest that this El Niño – augmented by the Hunga Tonga water vapor warming – may be super-strong. Satellite measurements showed that, in July 2023 alone, the temperature of the lower troposphere rose from 0.38 degrees Celsius (0.68 degrees Fahrenheit) to 0.64 degrees Celsius (1.2 degrees Fahrenheit) above the 1991-2020 mean.

If this El Niño turns out to be no stronger than in the past, then the source of the current “boiling” heat will remain a mystery. Perhaps the Hunga Tonga water vapor warming is larger than the Oxford group estimates. The source certainly isn’t any warming from human CO2, which raises global temperatures gradually and not abruptly as we’ve seen in 2023.

Next: Has the Mainstream Media Suddenly Become Honest in Climate Reporting?

Hottest in 125,000 Years? Dishonest Claim Contradicts the Evidence

Amidst the hysterical hype in the mainstream media about recent heat waves all over the Northern Hemisphere, especially in the U.S., the Mediterranean and Asia, one claim stands out as utterly ridiculous – which is that temperatures were the highest the world has seen in 125,000 years, since the interglacial period between the last two ice ages.

But the claim, repeated mindlessly by newspapers, magazines and TV networks in lockstep, is blatantly wrong. Aside from the media confusing the temperature of the hotter ground with that of the air above, there is ample evidence that the earth’s climate has been as warm as or warmer than today’s – and comparable to that of 125,000 years ago – several times during the past 11,000 years after the last ice age ended.

Underlying the preposterous claim is an erroneous temperature graph featured in the 2021 Sixth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change). The report revives the infamous “hockey stick” – a reconstructed temperature graph for the past 2020 years resembling the shaft and blade of a hockey stick on its side, with no change or a slight decline in temperature for the first 1900 years, followed by a sudden, rapid upturn during the most recent 120 years.

Prominently displayed near the beginning of the report, the IPCC’s latest version of the hockey stick is shown in the figure above. The solid grey line from 1 to 2000 is a reconstruction of global surface temperature from paleoclimate archives, while the solid black line from 1850 to 2020 represents direct observations. Both are relative to the 1850–1900 mean and averaged by decade.

But what is missing from the spurious hockey stick are two previously well-documented features of our past climate: the MWP (Medieval Warm Period) around the year 1000, a time when warmer than normal conditions were reported in many parts of the world, and the cool period centered around 1650 known as the LIA (Little Ice Age).

The two features are clearly visible in a different reconstruction of past temperatures by Fredrik Ljungqvist, who is a professor of geography at Stockholm University in Sweden. Ljungqvist’s 2010 reconstruction, for extra-tropical latitudes (30–90°N) in the Northern Hemisphere only, is depicted in the next figure; temperatures are averaged by decade. Not only do the MWP and LIA stand out, but the end of the Roman Warm Period at the beginning of the previous millennium can also be seen on the left.

Both this reconstruction and the IPCC’s are based on paleoclimate proxies such as tree rings, marine sediments, ice cores, boreholes and leaf fossils. Although other reconstructions have supported the IPCC position that the MWP and LIA did not exist, a large number also provide strong evidence that they were real.

A 2016 summary paper by Ljungqvist and a co-author found that of the 16 large-scale reconstructions they studied, 7 had their warmest year during the MWP and 9 in the 20th century. The overall choice of research papers that the IPCC’s report drew from is strongly biased toward the lack of both the MWP and LIA, and many of the temperature reconstructions cited in the report are faulty because they rely on cherry-picked or incomplete proxy data.

A Southern Hemisphere example is shown in the figure below, depicting reconstructed temperatures for the continent of Antarctica back to the year 500. This also reveals a distinct LIA and what appears to be an extended MWP at the South Pole.

The hockey stick, the creation of climate scientist and IPCC author Michael Mann, first appeared in the IPCC’s Third Assessment Report in 2001, but was conspicuously absent from the fourth and fifth reports. It disappeared after its 2003 debunking by mining analyst Stephen McIntyre and economist Ross McKitrick, who found that the graph was based on faulty statistical analysis, as well as preferential data selection (see here and here). The hockey stick was also discredited by a team of scientists and statisticians assembled by the U.S. National Academy of Sciences.

Plenty of evidence, including that presented here, shows that global temperatures were not relatively constant for centuries as the hockey stick would have one believe. Maximum temperatures were actually higher than now during the MWP, when Scandinavian Vikings farmed in Greenland and wine was grown in the UK, and then much lower during the LIA, when frost fairs on the UK’s frozen Thames River became a common sight.

In a previous post, I presented evidence for a period even warmer than the MWP immediately following the last ice age, a period known as the Holocene Thermal Maximum.

Next: Record Heat May Be from Natural Sources: El Niño and Water Vapor from 2022 Tonga Eruption

No Evidence That Extreme Weather on the Rise: A Look at the Past - (6) Wildfires

This post on wildfires completes the present series on the history of weather extremes. The mistaken belief that weather extremes are intensifying because of climate change has only been magnified by the smoke recently wafting over the U.S. from Canadian wildfires, if you believe the apocalyptic proclamations of Prime Minister Trudeau, President Biden and the Mayor of New York.

But, just as with all the other examples of extreme weather presented in this series, there’s no scientific evidence that wildfires today are any more frequent or severe than anything experienced in the past. Although wildfires can be exacerbated by other weather extremes such as heat waves and drought, we’ve already seen that those extremes are not on the rise either.

Together with tornadoes, wildfires are probably the most fearsome of the weather extremes commonly blamed on global warming. Both can arrive with little or no warning, making it difficult or impossible to flee, are often deadly, and typically destroy hundreds of homes and other structures.

The worst wildfires occur in naturally dry climates such as those in Australia, California or Spain. One of the most devastating fire seasons in Australia was the summer of 1938-39, which saw bushfires (as they’re called down under) burning all summer, with ash from the fires falling as far away as New Zealand. The Black Friday bushfires of January 13, 1939 engulfed approximately 75% of the southeast state of Victoria, killing over 60 people and destroying 1,300 buildings, as described in the article from the Telegraph-Herald on the left below; as that article reported:

In the town of Woodspoint alone, 21 men and two women were burned to death and 500 made destitute.  

Just a few days later, equally ferocious bushfires swept through the neighboring state of South Australia. The inferno reached the outskirts of the state capital, Adelaide, as documented in the excerpt from the Adelaide Chronicle newspaper on the right above.

Nationally, Australia’s most extensive bushfire season was the catastrophic series of fires in 1974-75 that consumed 117 million hectares (290 million acres), which is 15% of the land area of the whole continent. Fortunately, because nearly two thirds of the burned area was in remote parts of the Northern Territory and Western Australia, relatively little human loss was incurred – only six people died – though livestock and native animals such as lizards and red kangaroos suffered. An estimated 57,000 farm animals were killed.

The 1974-75 fires were fueled by abnormally heavy growth of lush grasses, following unprecedented rainfall in 1974. The fires began in the Barkly Tablelands region of Queensland, a scene from which is shown below. One of the other bushfires in New South Wales had a perimeter of more than 1,000 km (620 miles).

In the U.S., while the number of acres burned annually has gone up over the last 20 years or so, the present area consumed by wildfires is still only a small fraction of what it was back in the 1930s – just like the frequency and duration of heat waves, discussed in the preceding post. The western states, especially California, have a long history of disastrous wildfires dating back many centuries.

Typical of California conflagrations in the 1930s are the late-season fires around Los Angeles in November 1938, described in the following article from the New York Times. In one burned area 4,100 hectares (10,000 acres) in extent, hundreds of mountain and beach cabins were wiped out. Another wildfire burned on a 320-km (200-mile) front in the mountains. As chronicled in the piece, the captain of the local mountain fire patrol lamented that:

This is a major disaster, the worst forest fire in the history of Los Angeles County. Damage to watersheds is incalculable.

Northern California was incinerated too. The newspaper excerpts below from the Middlesboro Daily News and the New York Times report on wildfires that broke out on a 640-km (400-mile) front in the north of the state in 1936, and near San Francisco in 1945, respectively. The 1945 article documents no less than 6,500 separate blazes in California that year.

Pacific coast states further north were not spared either. Recorded in the following two newspaper excerpts are calamitous wildfires in Oregon in 1936 and Canada’s British Columbia in 1938; the articles are both from the New York Times. The 1936 Oregon fires, which covered an area of 160,000 hectares (400,000 acres), obliterated the village of Bandon in southwestern Oregon, while the 1938 fire near Vancouver torched an estimated 40,000 hectares (100,000 acres). Said a policeman in the aftermath of the Bandon inferno, in which as many as 15 villagers died:

If the wind changes, God help Coquille and Myrtle Point. They’ll go like Bandon did.

In 1937, a wildfire wreaked similar havoc in the neighboring U.S. state of Wyoming. At least 12 people died when the fire raged in a national forest close to Yellowstone National Park. As reported in the Newburgh News article on the left below:

The 12th body … was burned until even the bones were black beneath the skin.

and    A few bodies were nearly consumed.

The article on the right from the Adelaide Advertiser reports on yet more wildfires on the west coast, including northern California, in 1938.

As further evidence that modern-day wildfires are no worse than those of the past, the two figures below show the annual area burned by wildfires in Australia since 1905 (as a percentage of total land area, top), and in the U.S. since 1926 (bottom). Clearly, the area burned annually is in fact declining, despite hysterical claims to the contrary by the mainstream media. The same is true of other countries around the world.

Next: Hottest in 125,000 Years? Dishonest Claim Contradicts the Evidence

No Evidence That Extreme Weather on the Rise: A Look at the Past - (5) Heat Waves

Recent blistering hot spells in Texas, the Pacific northwest and Europe have only served to amplify the belief that heat waves are now more frequent and longer than in the past, due to climate change. But a careful look at the evidence reveals that this belief is mistaken, and that current heat waves are no more linked to global warming than any of the other weather extremes we’ve examined.

It’s true that a warming world is likely to make heat waves more common. By definition, heat waves are periods of abnormally hot weather, lasting from days to weeks. However, heat waves have been a regular feature of Earth’s climate for at least as long as recorded history, and heat waves of the last few decades pale in comparison to those of the 1930s – a period whose importance is frequently downplayed by the media and climate activists.

Those who dismiss the 1930s justify their position by claiming that the searing heat was confined to just 10 of the Great Plains states in the U.S. and caused by Dust Bowl drought. But this simply isn’t so. The evidence shows that the record heat of the 1930s – when the globe was also warming – extended throughout much of North America, as well as other countries such as France, India and Australia.

In the summer of 1930 two record-setting, back-to-back scorchers, each lasting 8 days, afflicted Washington, D.C. in late July and early August. During that time, 11 days in the capital city saw maximum temperatures above 38 degrees Celsius (100 degrees Fahrenheit). Nearby Harrisonburg, Virginia roasted in July and August also, experiencing its longest heat wave on record, lasting 23 days, with 10 days of 38 degrees Celsius (100 degrees Fahrenheit) or more.

In April the same year, an historic 6-day heat wave enveloped the whole eastern and part of the central U.S., as depicted in the figure below, which shows sample maximum temperatures for selected cities over that period. The accompanying excerpt from a New York Times article chronicles heat events in New York that July.

The hottest years of the 1930s heat waves in the U.S. were 1934 and 1936. Typical newspaper articles from those two extraordinarily hot years are set out below.

The Western Argus article on the left reports how the Dust Bowl state of Oklahoma endured, in 1934, an incredible 36 successive days on which the mercury in the central part of the state exceeded 38 degrees Celsius (100 degrees Fahrenheit). On August 7, the temperature there climbed to a sizzling 47 degrees Celsius (117 degrees Fahrenheit). And in the Midwest, Chicago and Detroit – both cities where readings of 32 degrees Celsius (90 degrees Fahrenheit) are normally considered uncomfortably hot – registered over 40 degrees Celsius (104 degrees Fahrenheit) the same day.

It was worse in other cities. In the summer of 1934, Fort Smith, Arkansas recorded an unbelievable 53 consecutive days with maximum temperatures of 38 degrees Celsius (100 degrees Fahrenheit) or higher. Topeka, Kansas, had 47 days, Oklahoma City had 45 days and Columbia, Missouri had 34 days when the mercury reached or passed that level. Approximately 800 deaths were attributed to the widespread heat wave.

In a 13-day heat wave in July, 1936, the Canadian province of Ontario – well removed from the Great Plains where the Dust Bowl was concentrated – saw the thermometer soar above 44 degrees Celsius (111 degrees Fahrenheit) during the longest, deadliest Canadian heat wave on record. The Toronto Star article on the right above describes conditions during that heat wave in normally temperate Toronto, Ontario’s capital. As reported:

a great mass of the children of the poverty-stricken districts of Toronto are today experiencing some of the horrors of Dante’s Inferno.

and, in a headline,

            Egg[s] Fried on Pavement – Crops Scorched and Highways Bulged      

Portrayed in the next figure are two scenes from the 1936 U.S. heat wave; the one on the left shows children cooling off in New York City on July 9, while the one on the right shows ice being delivered to a crowd in Kansas City, Missouri in August.

Not only did farmers suffer and infrastructure wilt in the 1936 heat waves, but thousands died from heatstroke and other hot-weather ailments. By some estimates, over 5,000 excess deaths from the heat occurred that year in the U.S. and another 1,000 or more in Canada; a few details appear in the two newspaper articles on the right below, from the Argus-Press and Bend Bulletin, respectively.

The article on the left above from the Telegraph-Herald documents the effect of the July 1936 heat wave on the Midwest state of Iowa, which endured 12 successive days of sweltering heat. The article remarks that the 1936 heat wave topped the previous one in 1934, when the mercury reached or exceeded the 38 degrees Celsius (100 degrees Fahrenheit) mark for 8 consecutive days.

Heat waves lasting a week or longer in the 1930s were not confined to North America; the Southern Hemisphere baked too. Adelaide on Australia’s south coast experienced a heat wave at least 11 days long in 1930, and Perth on the west coast saw a 10-day spell in 1933, as described in the articles below from the Register News and Longreach Leader, respectively.

Not to be outdone, 1935 saw heat waves elsewhere in the world. The adjacent three excerpts from Australian newspapers recorded heat waves that year in India, France and Italy, although there is no information about their duration; the papers were the Canberra Times, the Sydney Morning Herald and the Daily News. But 1935 wasn’t the only year of the 1930s to bring a heat wave to France. In August 1930, Australian and New Zealand (and presumably French) newspapers recounted a French heat wave earlier that year, in which the temperature soared to a staggering 50 degrees Celsius (122 degrees Fahrenheit) in the Loire valley – besting a purported record of 46 degrees Celsius (115 degrees Fahrenheit) set in southern France in 2019.

Many more examples exist of the exceptionally hot 1930s all over the globe. Even with modern global warming, there’s nothing unusual about current heat waves, either in frequency or duration.

Next: No Evidence That Extreme Weather on the Rise: A Look at the Past - (6) Wildfires

No Evidence That Extreme Weather on the Rise: A Look at the Past - (4) Droughts

Severe droughts have been a continuing feature of the earth’s climate for millennia, but you wouldn’t know that from the brouhaha in the mainstream media over last summer’s drought in Europe. Not only was the European drought not unprecedented, but there have been numerous longer and drier droughts throughout history, including during the past century.

Because droughts typically last for years or even decades, their effects are far more catastrophic for human and animal life than those of floods, which usually recede in weeks or months. The consequences of drought include crop failure, starvation and mass migration. As with floods, droughts historically have been most common in Asian countries such as China and India.

One of the most devastating natural disasters in Chinese history was the drought and subsequent famine in northern China from 1928 to 1933. The drought left 3.7 million hectares (9.2 million acres) of arable land barren, leading to a lengthy famine exacerbated by civil war. An estimated 3 million people died of starvation, while Manchuria in the northeast took in 4 million refugees.

Typical scenes from the drought are shown in the photos below. The upper photo portrays three starving boys who had been abandoned by their families in 1928 and were fed by the military authorities. The lower photo shows famine victims in the city of Lanzhou.

The full duration of the drought was extensively covered by the New York Times. In 1929, a lengthy article reported that relief funds from an international commission could supply just one meal daily to:

 only 175,000 sufferers out of the 20 million now starving or undernourished.

and    missionaries report that cannibalism has commenced.

A 1933 article, an excerpt from which is included in the figure above, chronicled the continuing misery four years later:

Children were being killed to end their suffering and the women of families were being sold to obtain money to buy food for the other members, according to an official report.

Drought has frequently afflicted India too. One of the worst episodes was the twin droughts of 1965 and 1966-67, the latter in the eastern state of Bihar. Although only 2,350 Indians died in the 1966-67 drought, it was unprecedented foreign food aid that prevented mass starvation. Nonetheless, famine and disease ravaged the state, and as many as 40 million people were reported to have been affected.

Particularly hard hit were Bihar farmers, who struggled to keep their normally sturdy plow-pulling bullocks alive on a daily ration of 2.7 kilograms (6 pounds) of straw. As reported in the April 1967 New York Times article below, a U.S. cow at that time usually consumed over 11 kilograms (25 pounds) of straw a day. A total of 11 million farmers and 5 million laborers were effectively put out of work by the drought. Crops became an issue for starving farmers too, the same article stating that:

An official in Patna said confidently the other day that “the Indian farmer would rather die than eat his seed,” but in village after village farmers report that they ate their seed many weeks ago.

The harrowing photo on the lower right below, on permanent display at the Davis Museum in Wellesley College, Massachusetts, depicts a 45-year-old farmer and his cow dying of hunger in Bihar. Children suffered too, with many forced to subsist on a daily ration of four ounces of grain and an ounce of milk.

The U.S., like most countries, is not immune to drought either, especially in southern and southeastern states. Some of the worst droughts occurred in the Great Plains states and southern Canada during the Dust Bowl years of the 1930s.

But worse yet was a 7-year uninterrupted drought from 1950 to 1957, concentrated in Texas and Oklahoma but eventually including all the Four Corners states of Arizona, Utah, Colorado and New Mexico, as well as eastward states such as Missouri and Arkansas. For Texas, it was the most severe drought in recorded history. By the time the drought ended, 244 of Texas' 254 counties had been declared federal disaster areas.

Desperate ranchers resorted to burning cactus, removing the spines, and using it for cattle feed. Because of the lack of adequate rainfall, over 1,000 towns and cities in Texas had to ration the water supply. The city of Dallas opened centers where citizens could buy cartons of water from artesian wells for 50 cents a gallon, which was more than the cost of gasoline at the time.

Shown in the photo montage on the left below are various scenes from the Texas drought. The top photo is of a stranded boat on a dry lakebed, while the bottom photo illustrates once lakeside cabins on a shrinking Lake Waco; the middle photo shows a car being towed after becoming stuck in a parched riverbed. The newspaper articles on the right are from the West Australian in 1953 (“Four States In America Are Hit By Drought”) and the Montreal Gazette in 1957.

Reconstructions of ancient droughts using tree rings or pollen as a proxy reveal that historical droughts were even longer and more severe than those described here, many lasting for decades – so-called megadroughts. This can be seen in the figure below, which shows the pattern of dry and wet periods in drought-prone California over the past 1,200 years.

Next: No Evidence That Extreme Weather on the Rise: A Look at the Past - (5) Heat Waves