The Deceptive Catastrophizing of Weather Extremes: (1) The Science

In these pages, I’ve written extensively about the lack of scientific evidence for any increase in extreme weather due to global warming. But I’ve said relatively little about the media’s exploitation of the mistaken belief that weather extremes are worsening because of climate change.

A recent four-part essay addresses the latter issue, under the title “Did Exxon Make It Rain Today?” The essay was penned by Ted Nordhaus, a well-known environmentalist and director of the Breakthrough Institute in Berkeley, California, which he co-founded with Michael Shellenberger in 2007. Its authorship was a surprise to me, since the Breakthrough Institute generally supports the narrative of largely human-caused warming.

Nonetheless, Nordhaus’s thoughtful essay takes a mostly skeptical – and realistic – view of hype about weather extremes, stating that:

We know that anthropogenic warming can increase rainfall and storm surges from a hurricane, or make a heat wave hotter. But there is little evidence that warming could create a major storm, flood, drought, or heat wave where otherwise none would have occurred, …

Nordhaus goes on to make the insightful statement that “The main effect that climate change has on extreme weather and natural disasters … is at the margins.” By this, he means that a heat wave which, in the absence of climate change, would have kept daily high temperatures at or above, say, 37 degrees Celsius (99 degrees Fahrenheit) for a week would instead see highs above perhaps 39 degrees Celsius (102 degrees Fahrenheit) with our present level of global warming.

His assertion is illustrated in the following, rather congested figure from the Sixth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change). The purple curve shows the average annual hottest daily maximum temperature on land, while the green and black curves indicate the land and global average annual mean temperature, respectively; temperatures are measured relative to their 1850–1900 means.

However, while global warming is making heat waves marginally hotter, Nordhaus says there is no evidence that extreme weather events are on the rise, as so frequently trumpeted by the mainstream media. Although climate change will make some weather events, such as heavy rainfall, more intense than they otherwise would be, the global area burned by wildfires has actually decreased, and there has been no detectable global trend in river floods, meteorological drought or hurricanes.

Adds Nordhaus:

The main source of climate variability in the past, present, and future, in all places and with regard to virtually all climatic phenomena, is still overwhelmingly non-human: all the random oscillations in climatic extremes that occur in a highly complex climate system across all those highly diverse geographies and topographies.

The misconception that weather extremes are increasing when they are not has been amplified by attribution studies, which use a new statistical method and climate models to assign specific extremes to either natural variability or human causes. Such studies rely on highly questionable methodology with several shortcomings.

Even so, the media and some climate scientists have taken scientifically unjustifiable liberties with attribution analysis in order to link extreme events to climate change – such as attempting to quantify how much more likely global warming made the occurrence of a heat wave that resulted in high temperatures above 38 degrees Celsius (100 degrees Fahrenheit) for a period of five days in a specific location.

But, explains Nordhaus, that is not what an attribution study actually estimates. Rather, “it quantifies changes in the likelihood of the heat wave reaching the precise level of extremity that occurred.” In the hypothetical case above, the heat wave would have happened anyway in the absence of climate change, but it would have resulted in high temperatures above 37 degrees Celsius (99 degrees Fahrenheit) over five days instead of above 38 degrees.

The attribution method estimates the probability of a heat wave or other extreme event occurring that is incrementally hotter or more severe than the one that would have occurred without climate change, not the probability of the heat wave or other event occurring at all.
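
To make the distinction concrete, the sketch below computes an attribution-style probability ratio for a hypothetical heat wave, assuming daily highs follow a normal distribution that global warming shifts upward by a fixed amount. The mean, spread and shift are invented purely for illustration; real attribution studies fit observed and modeled distributions rather than assuming them.

```python
# Attribution-style probability ratio for a hypothetical heat wave.
# Daily-high temperatures are modeled as a normal distribution that
# global warming shifts upward by a fixed amount; all numbers are
# invented for illustration only.
from statistics import NormalDist

mean_no_warming = 36.0   # hypothetical mean heat-wave high without warming (deg C)
spread = 1.5             # hypothetical spread of heat-wave highs (deg C)
warming_shift = 1.0      # hypothetical shift due to global warming (deg C)
threshold = 38.0         # the precise level of extremity that was observed (deg C)

p_without = 1.0 - NormalDist(mean_no_warming, spread).cdf(threshold)
p_with = 1.0 - NormalDist(mean_no_warming + warming_shift, spread).cdf(threshold)

# This ratio is what gets reported as "N times more likely": the chance of
# reaching this particular severity, not of the heat wave occurring at all.
print(f"P(high > {threshold} C) without warming: {p_without:.3f}")
print(f"P(high > {threshold} C) with warming:    {p_with:.3f}")
print(f"probability ratio: {p_with / p_without:.1f}x")
```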

Nonetheless, as we’ll see in the next post, the organization WWA (World Weather Attribution), co-founded by German climatologist Friederike Otto, has utilized this new methodology to rapidly produce science that does connect weather extremes to climate change – with the explicit goal of shaping news coverage. Coverage of climate-related disasters now routinely features WWA analysis, which is often employed to suggest that climate change is the cause of such events.

Next: The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

Foundations of Science Under Attack in U.S. K-12 Education

Little known to most people is that science is under assault in the U.S. classroom. Some 49 U.S. states have adopted standards for teaching science in K-12 schools that abandon the time-honored edifice of the scientific method, which underpins all the major scientific advances of the past two millennia.

In place of the scientific method, most schoolchildren are now taught “scientific practices.” These emphasize the use of computer models and social consensus over the fundamental tenets of the scientific method, namely the gathering of empirical evidence and the use of reasoning to make sense of the evidence. 

The modern scientific method, illustrated schematically in the figure below, was conceived over two thousand years ago by the Hellenic-era Greeks, then almost forgotten, before being rejuvenated in the Scientific Revolution and refined into its present-day form in the 19th century. Even before that refinement, scientists such as Galileo Galilei and Isaac Newton followed the basic principles of the method, as have subsequent scientific luminaries like Marie Curie and Albert Einstein.

The present assault on science in U.S. schools began with the publication in 2012 of a 400-page document, A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas, by the U.S. National Academy of Sciences. This was followed in 2014 by a companion document, the Next Generation Science Standards (NGSS), published by a newly formed consortium of national and state education groups and based on the 2012 Framework.

The Framework summarily dismisses the scientific method with the outrageous statement:

… the notion that there is a single scientific method of observation, hypothesis, deduction, and conclusion—a myth perpetuated to this day by many textbooks—is fundamentally wrong,

and its explanation of “practices” as: 

… not rote procedures or a ritualized “scientific method.”

The Framework’s abandonment of the scientific method appears to have its origins in a 1992 book by H.H. Bauer entitled Scientific Literacy and the Myth of the Scientific Method. Bauer’s arguments against the importance of the scientific method include the mistaken conflation of science with sociology, and a misguided attempt to elevate the irrational pseudoscience of astrology to the status of a true science.

The NGSS give the scientific method even shorter shrift than the Framework, mentioning neither the concept nor the closely related term critical thinking even once in their 103 pages. A scathing 2021 review of the NGSS by the U.S. National Association of Scholars (NAS), Climbing Down: How the Next Generation Science Standards Diminish Scientific Literacy, concludes that:

The NGSS severely neglect content instruction, politicize much of the content that remains … and abandon instruction of the scientific method.

Stating that “The scientific method is the logical and rational process through which we observe, describe, explain, test, and predict phenomena … but is nowhere to be found in the actual standards of the NGSS,” the NAS report also states:

Indeed, the latest generation of science education reformers has replaced scientific content with performance-based “learning” activities, and the scientific method with social consensus.

It goes on to say that neither the Framework nor the NGSS ever explicitly mention the falsifiability criterion – a crucial but often overlooked feature of the modern scientific method, in addition to the basic steps outlined above. The criterion, introduced in the early 20th century by the philosopher Sir Karl Popper, states that a true scientific theory or law must in principle be capable of being invalidated by observation or experiment. Any evidence that fits an unfalsifiable theory has no scientific validity.

The primary deficiencies of the Framework and the NGSS have recently been enumerated and discussed by physicist John Droz, who has identified a number of serious shortcomings, some of which inject politics into what should be purely scientific standards. These include the use of computer models to imply reality; treating consensus as equal in value to empirical data; and the use of correlation to imply causation.

The NGSS do state that “empirical evidence is required to differentiate between cause and correlation” (in Crosscutting Concepts, page 92 onward), and there is a related discussion in the Framework. However, there is no attempt in either document to connect the concept of cause and effect to the steps of observation, and formulation and testing of a hypothesis, in the scientific method.

The NAS report is pessimistic about the effect of the NGSS on K-12 science education in the U.S., stating that:

They [the NGSS] do not provide a science education adequate to take introductory science courses in college. They lack large areas of necessary subject matter and an extraordinary amount of mathematical rigor. … The NGSS do not prepare students for careers or college readiness.

There is, however, one bright light. In his home state of North Carolina (NC), Droz was successful in July 2023 in having the scientific method restored to the state’s K-12 Science Standards. Earlier that year, he had discovered that existing NC science standards had excluded teaching the scientific method for more than 10 years. So Droz formally filed a written objection with the NC Department of Public Instruction.

Droz was told that he was “the only one bringing up this issue” out of 14,000 inputs on the science standards. However, two members of the State Board of Education ultimately joined him in questioning the omission and, after much give-and-take, the scientific method was reinstated. That leaves 48 other states that need to follow North Carolina’s example.

Next: Retractions of Scientific Papers Are Skyrocketing

No Evidence That Today’s El Niños Are Any Stronger than in the Past

The current exceptionally strong El Niño has revived discussion of a question that comes up whenever the phenomenon recurs, every two to seven years: are stronger El Niños caused by global warming? While recent El Niño events suggest that in fact they are, a look at the historical record shows that even stronger El Niños occurred in the distant past.

El Niño is the warm phase of ENSO (the El Niño – Southern Oscillation), a natural ocean cycle that causes drastic temperature fluctuations and other climatic effects in tropical regions of the Pacific. Its effect on atmospheric temperatures is illustrated in the figure below. Warm spikes such as those in 1997-98, 2009-10, 2014-16 and 2023 are due to El Niño; cool spikes like those in 1999-2001 and 2008-09 are due to the cooler La Niña phase.

A slightly different temperature record, of selected sea surface temperatures in the El Niño region of the Pacific, averaged yearly from 1901 to 2017, is shown in the next figure from a 2019 study.

Here the baseline is the mean sea surface temperature over the 1901-2017 interval, and the black dashed line at 0.6 degrees Celsius is defined by the study authors as the threshold for an El Niño event. The different colors represent various regional types of El Niño; the gray bars mark warm years in which no El Niño developed.

This year’s gigantic spike in the tropospheric temperature to 0.93 degrees Celsius (1.7 degrees Fahrenheit) – a level that set alarm bells ringing – is clearly the strongest El Niño by far in the satellite record. Comparison of the above two figures shows that it is also the strongest since 1901. So it does indeed appear that El Niños are becoming stronger as the globe warms, especially since 1960.

Nevertheless, such a conclusion is ill-considered as there is evidence from an earlier study that strong El Niños have been plentiful in the earth’s past.

As I described in a previous post, a team of German paleontologists established a complete record of El Niño events going back 20,000 years, by examining marine sediment cores drilled off the coast of Peru. The cores contain an El Niño signature in the form of tiny, fine-grained stone fragments, washed into the sea by multiple Peruvian rivers following floods in the country caused by heavy El Niño rainfall.

The research team classified the flood signal as very strong when the concentration of stone fragments, known as lithics, was more than two standard deviations above the centennial mean. The frequency of these very strong events over the last 12,000 years is illustrated in the next figure; the black and gray bars show the frequency as the number of 500- and 1,000-year floods, respectively. Radiocarbon dating of the sediment cores was used to establish the timeline.
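
The two-standard-deviation test described above amounts to a simple threshold classification. The sketch below illustrates it with hypothetical lithic-concentration values; the actual study of course works with dated sediment-core samples and a proper centennial baseline.

```python
# Sketch of the two-standard-deviation flood-signal test described above,
# applied to hypothetical lithic-concentration values for one century.
import statistics

lithics = [3.1, 2.8, 3.5, 2.9, 3.3, 9.7, 3.0, 2.7, 11.2, 3.4]  # hypothetical values

mean = statistics.mean(lithics)
stdev = statistics.stdev(lithics)
threshold = mean + 2 * stdev

very_strong = [x for x in lithics if x > threshold]
print(f"centennial mean = {mean:.2f}, standard deviation = {stdev:.2f}")
print(f"'very strong' flood-signal threshold = {threshold:.2f}")
print(f"samples classified as very strong El Nino floods: {very_strong}")
```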

A more detailed record is presented in the following figure, showing the variation over 20,000 years of the sea surface temperature off Peru (top), the lithic concentration (bottom) and a proxy for lithic concentration (center). Sea surface temperatures were derived from chemical analysis of the marine sediment cores.

You can see that the lithic concentration and therefore El Niño strength were high around 2,000 and 10,000 years ago – approximately the same periods when the most devastating floods occurred. The figure also reveals the absence of strong El Niño activity from 5,500 to 7,500 years ago, a dry interval without any major Peruvian floods as reflected in the previous figure.

If you examine the lithic plots carefully, you can also see that the many strong El Niños approximately 2,000 and 10,000 years ago were several times stronger (note the logarithmic concentration scale) than current El Niños on the far left of the figure. Those two periods were warmer than today as well, being the Roman Warm Period and the Holocene Thermal Maximum, respectively.

So there is nothing remarkable about recent strong El Niños.

Despite this, the climate science community is still uncertain about the global warming question. The 2019 study described above found that since the 1970s, formation of El Niños has shifted from the eastern to the western Pacific, where ocean temperatures are higher. From this observation, the study authors concluded that future El Niños may intensify. However, they qualified their conclusion by stating that:

… the root causes of the observed background changes in the later part of the 20th century remain elusive … Natural variability may have added significant contributions to the recent warming.

Recently, an international team of 17 scientists has conducted a theoretical study of El Niños since 1901 using 43 climate models, most of which showed the same increase in El Niño strength since 1960 as the actual observations. But again, the researchers were unable to link this increase to global warming, declaring that:

Whether such changes are linked to anthropogenic warming, however, is largely unknown.

The researchers say that resolution of the question requires improved climate models and a better understanding of El Niño itself. Some climate models show El Niño becoming weaker in the future.

Next: Antarctica Sending Mixed Climate Messages

Global Warming from Food Production and Consumption Grossly Overestimated

A recent peer-reviewed study makes the outrageous claim that production and consumption of food could contribute as much as 0.9 degrees Celsius (1.6 degrees Fahrenheit) to global warming by 2100, from emissions of the greenhouse gases methane (CH4), nitrous oxide (N2O) and carbon dioxide (CO2).

Such a preposterous notion is blatantly wrong, even if it were true that global warming largely comes from human CO2 emissions. Since agriculture is considered responsible for an estimated 15-20% of current warming, a 0.9 degrees Celsius (1.6 degrees Fahrenheit) agricultural contribution in 2100 implies a total warming (since 1850-1900) at that time of 0.9 / (0.15–0.2), or 4.5 to 6.0 degrees Celsius (8.1 to 10.8 degrees Fahrenheit).

As I discussed in a previous post, only the highest, unrealistic CO2 emissions scenarios project such a hot planet by the end of the century. A group of prominent climate scientists has estimated the much lower range of likely 2100 warming, of 2.6-3.9 degrees Celsius (4.7-7.0 degrees Fahrenheit). And climate writer Roger Pielke Jr. has pegged the likely warming range at 2-3 degrees Celsius (3.6-5.4 degrees Fahrenheit), based on the most plausible emissions scenarios.

Using the same 15-20% estimate for the agricultural portion of global warming, a projected 2100 warming of say 3 degrees Celsius (5.4 degrees Fahrenheit) would mean a contribution from food production of only 0.45-0.6 degrees Celsius (0.8-1.1 degrees Fahrenheit) – about half of what the new study’s authors calculate.
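
The arithmetic in the two preceding paragraphs is simple enough to lay out explicitly. The short sketch below just restates it, using the 15-20% agricultural share and the warming figures quoted above.

```python
# The back-of-the-envelope arithmetic from the preceding paragraphs.
ag_share = (0.15, 0.20)          # assumed agricultural share of current warming

# Total 2100 warming implied if agriculture alone contributes 0.9 C:
ag_contribution = 0.9
implied_total = [ag_contribution / s for s in ag_share]
print(f"implied total warming: {implied_total[1]:.1f} to {implied_total[0]:.1f} C")
# -> 4.5 to 6.0 C, far above the 2-3 C of the most plausible scenarios

# Agricultural contribution implied by a more plausible 3 C of total warming:
plausible_total = 3.0
ag_implied = [plausible_total * s for s in ag_share]
print(f"agricultural share of 3 C total: {ag_implied[0]:.2f} to {ag_implied[1]:.2f} C")
# -> roughly 0.45 to 0.6 C, about half the study's 0.9 C estimate
```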

That even this estimate of future warming from agriculture is too high can be seen by examining the following figure from their study. The figure illustrates the purported temperature rise by 2100 attributable to each of the three greenhouse gases generated by the agricultural industry: CH4, N2O and CO2. CH4 is responsible for nearly 60% of the temperature increase, while N2O and CO2 each contribute about 20%.

This figure can be compared with the one below from a recent preprint by a team which includes atmospheric physicists William Happer and William van Wijngaarden, showing the authors’ evaluation of expected radiative forcings at the top of the troposphere over the next 50 years. The forcings are increments relative to today, measured in watts per square meter; the horizontal lines are the projected temperature increases (ΔT) corresponding to particular values of the forcing increase.

To properly compare the two figures, we need to know what percentages of total CH4, N2O and CO2 emissions in the Happer and van Wijngaarden figure come from the agricultural sector; these are approximately 50%, 67% and 3%, respectively, according to the authors of the food production study.

Using these percentages and extrapolating the Happer and van Wijngaarden graph to 78 years (from 2022), the total additional forcing from the three gases in 2100 can be shown to be about 0.52 watts per square meter. This forcing value corresponds to a temperature increase due to food production and consumption of only around 0.1 degrees Celsius (0.18 degrees Fahrenheit).
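
The scaling behind that estimate can be sketched as follows. The per-gas 50-year forcing increments in the code are hypothetical placeholders, chosen only so that the arithmetic lands near the 0.52 watts per square meter quoted above rather than read from the Happer and van Wijngaarden figure, and the forcing-to-temperature conversion uses an assumed sensitivity purely for illustration.

```python
# Sketch of the extrapolation described above. The 50-year forcing
# increments are HYPOTHETICAL placeholders (chosen only so the total
# lands near the 0.52 W/m^2 quoted in the text), not values read from
# the Happer and van Wijngaarden figure.
forcing_50yr = {"CH4": 0.30, "N2O": 0.20, "CO2": 1.70}   # W/m^2, hypothetical
ag_fraction  = {"CH4": 0.50, "N2O": 0.67, "CO2": 0.03}   # agricultural share, from the food study

years = 78  # 2022 to 2100
ag_forcing_2100 = sum(
    forcing_50yr[gas] * ag_fraction[gas] * (years / 50.0) for gas in forcing_50yr
)
print(f"agricultural forcing increment by 2100: {ag_forcing_2100:.2f} W/m^2")

# Converting forcing to temperature needs a sensitivity parameter; the value
# below is an assumption for illustration, not a measured quantity.
sensitivity = 0.2  # deg C per W/m^2, assumed
print(f"implied warming from food production: {ag_forcing_2100 * sensitivity:.2f} C")
```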

The excessively high estimate of 0.9 degrees Celsius (1.6 degrees Fahrenheit) in the study may be due in part to the study’s dependence on a climate model: many climate models greatly exaggerate future warming.

While on the topic of CH4 and N2O emissions, let me draw your attention to a fallacy widely propagated in the climate science literature; the fallacy appears on the websites of both the U.S. EPA (Environmental Protection Agency) and NOAA (the U.S. National Oceanic and Atmospheric Administration), and even in the IPCC’s Sixth Assessment Report (Table 7.15).

The fallacy conflates the so-called “global warming potential” for greenhouse gas emissions, which measures the warming potential per molecule (or unit mass) of various gases, with their warming potential weighted by their rate of concentration increase relative to CO2. Because the abundances of CH4 and N2O in the atmosphere are much lower than that of CO2, and are increasing even more slowly, there is a big difference between their global warming potentials and their weighted warming potentials.

The difference is illustrated in the table below. The conventional global warming potential (GWP) is a dimensionless metric, in which the GWP of a particular greenhouse gas is normalized to that of CO2; the GWP takes into account the atmospheric lifetime of the gas. The table shows values of GWP-100, the warming potential calculated over a 100-year time horizon.

The final column shows the value of the weighted GWP-100, which is not dimensionless like the conventional GWP-100 but measured in units of watts per square meter, the same as radiative forcing. The weighted GWP-100 is calculated by multiplying the conventional GWP-100 by the ratio of the rate of concentration increase for that gas to that of CO2.

As you can see, the actual anticipated warming in 100 years from either CH4 or N2O agricultural emissions will be only 10% of that from CO2 – in contrast to the conventional GWP-100 values extensively cited in the literature. What a waste of time and effort in trying to rein in CH4 and N2O emissions!
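
The roughly 10% figure is easy to reproduce. The sketch below multiplies approximate, round GWP-100 values by each gas’s recent rate of concentration increase relative to CO2 (about 2.5 ppm per year for CO2, 10 ppb per year for CH4 and 1 ppb per year for N2O); it yields the relative, dimensionless weighting rather than the absolute watts-per-square-meter values in the table.

```python
# Conventional GWP-100 weighted by each gas's rate of concentration
# increase relative to CO2. The GWP values and growth rates below are
# approximate round numbers used for illustration.
gwp_100 = {"CO2": 1, "CH4": 30, "N2O": 270}              # dimensionless
growth_ppb_per_yr = {"CO2": 2500, "CH4": 10, "N2O": 1}   # ~2.5 ppm, 10 ppb, 1 ppb per year

for gas, gwp in gwp_100.items():
    relative_rate = growth_ppb_per_yr[gas] / growth_ppb_per_yr["CO2"]
    weighted = gwp * relative_rate
    print(f"{gas}: GWP-100 = {gwp:>3}, rate-weighted value = {weighted:.2f}")

# CH4 and N2O both come out near 0.1 -- roughly 10% of the CO2 value --
# despite their large conventional GWP-100 numbers.
```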

Next: CRED’s 2022 Disasters in Numbers report is a Disaster in Itself

New Observations Upend Notion That Global Warming Diminishes Cloud Cover

Climate scientists have long thought that low clouds, which act like a parasol and cool the earth’s surface, will diminish as the earth heats up – thus amplifying warming in a positive feedback process. This notion has been reinforced by climate models. But recent empirical observations refute the idea and show that the mechanism causing the strongest cloud reductions in models doesn’t actually occur in nature.

The observations were reported in a 2022 paper by an international team of French and German scientists. In a major field campaign, the team collected and analyzed observational data from cumulus clouds near the Atlantic island of Barbados, utilizing two research airplanes and a ship. Barbados is in the tropical trade-wind region where low-level cumulus clouds are common.

More than 800 probes were dropped from one plane that flew in circles about 200 km (120 miles) in diameter at an altitude of 9 km (6 miles); the probes gathered data on atmospheric temperature, moisture, pressure and winds as they fell. The other plane used radar and lidar sensors to measure cloudiness at the base of the cloud layer, at an altitude of 0.8 km (2,600 feet), while the ship conducted surface-based measurements.

The response to global warming of small cumulus clouds in the tropics is critically dependent on how much extra moisture from increased evaporation of seawater accumulates at the base of the clouds.

In climate models, dry air from the upper cloud layer is transported, or entrained, downward when the clouds grow higher and mixes with the moister air at the cloud base, drying out the lower cloud layer. This causes moisture there to evaporate more rapidly and boosts the probability that the clouds will dissipate. The phenomenon is known to climate scientists as the “mixing-desiccation hypothesis,” with the strength of the mixing mechanism increasing as the globe warms.

But the observations of the research team reveal that the mixing-desiccation mechanism is not actually present in nature. This is because – as the researchers found – mesoscale (up to 200 km) circulation of air vertically upward dominates the smaller-scale entrainment mixing downward. Although mesoscale circulations are ubiquitous in trade-wind regions, their effect on humidity is completely absent from climate models.

The two competing processes are illustrated in the figure below, in which M represents mixing, E is downward entrainment, W is mesoscale vertical air motion, and z is the altitude; the dashed line represents the trade-wind inversion layer.

The mixing-desiccation hypothesis predicts that warming strongly diminishes cloudiness compared with the base state shown in the left panel above. In the base state, vertical air motion is mostly downward and normal convective mixing occurs. According to the hypothesis, stronger mixing (M++ in panel a) caused by entrainment (E++) of dry air from higher to lower cloud layers, below the cloud base, results in excessive drying and fewer clouds.

The mesoscale circulation mechanism, on the other hand, prevents drying through mesoscale vertical air motion upward (W++ in panel b) that overcomes the entrainment mixing, thus preventing cloud loss. If anything, cloud cover actually increases with more vertical mixing. Climate models simulate only the mixing-desiccation mechanism, but the new research demonstrates that a second and more dominant mechanism operates in nature.

That cloudiness increases with mixing can be seen from the next figure, which shows the research team’s observed values of the vertical mixing rate M (in mm per second) and the cloud-base cloudiness (as a percentage). The trend is clear: as M gets larger, so does cloudiness.

The research has important implications for cloud feedback. In climate models, the refuted mixing-desiccation mechanism leads to strong positive cloud feedback – feedback that amplifies global warming. The models find that low clouds would thin out, and many would not form at all, in a hotter world.

Analysis of the new observations, however, shows that climate models with large positive feedbacks are implausible and that a weak trade cumulus feedback is much more likely than a strong one. Climate models with large trade cumulus feedbacks exaggerate the dependence of cloudiness on cloud-base moisture compared with mixing, as well as overestimating variability in cloudiness.

Weaker than expected low cloud feedback is also suggested by lack of the so-called CO2 “hot spot” in the atmosphere, as I discussed in a previous post. Climate models predict that the warming rate at altitudes of 9 to 12 km (6 to 7 miles) above the tropics should be about twice as large as at ground level. Yet the hot spot doesn’t show up in measurements made by weather balloons or satellites.

Next: Nitrous Oxide No More a Threat for Global Warming than Methane

Mainstream Media Jump on Extreme Weather Caused by Climate Change Bandwagon

The popular but mistaken belief that weather extremes are worsening because of climate change has been bolstered in recent years by ever increasing hype in nearly all mainstream media coverage of extreme events, despite a lack of scientific evidence for the assertion. This month’s story by NPR (National Public Radio) in the U.S. is just the latest in a steady drumbeat of media misinformation.

Careful examination of the actual data reveals that if there is any trend in most weather extremes, it is downward rather than upward. In fact, a 2016 survey of extreme weather events since 1900 found strong evidence that the first half of the 20th century saw more weather extremes than the second half, when global warming was more prominent. More information can be found in my recent reports on weather extremes (here, here and here).

To be fair, the NPR story merely parrots the conclusions of an ostensibly scientific report from the AMS (American Meteorological Society), Explaining Extreme Events in 2021 and 2022 from a Climate Perspective. Both the AMS and NPR claim to show how the most extreme weather events of the previous two years were driven by climate change.

Nevertheless, all the purported connections rely on the dubious field of extreme-event attribution science, which uses statistics and climate models to supposedly detect the impact of global warming on weather disasters. The shortcomings of this approach are twofold. First, the models have a dismal track record in predicting the future (or indeed in hindcasting the past); and second, attribution studies that assign specific extremes to either natural variability or human causes are based on highly questionable statistical methodology (see here and here).

So the NPR claim that “scientists are increasingly able to pinpoint exactly how the weather is changing as the earth heats up” and “how climate change drove unprecedented heat waves, floods and droughts in recent years” is utter nonsense. These weather extremes have occurred from time immemorial, long before modern global warming began.

Yet the AMS and NPR insist that extreme drought in California and Nevada in 2021 was “six times more likely because of climate change.” This is completely at odds with a 2007 U.S. study which reconstructed the drought pattern in North America over the last 1200 years, using tree rings as a proxy.

The reconstruction is illustrated in the figure below, showing the drought area in western North America from 800 to 2003, as a percentage of the total land area. The thick black line is a 60-year mean, while the blue and red horizontal lines represent the average drought area during the periods 1900–2003 and 900–1300, respectively. Clearly, several unprecedentedly long and severe megadroughts have occurred in this region since the year 800; 2021 (not shown in the graph) was unexceptional.

The same is true for floods. A 2017 study of global flood risk concluded there is very little evidence that flooding is becoming more prevalent worldwide, despite average rainfall getting heavier as the planet warms. And, although the AMS report cites an extremely wet May of 2021 in the UK as likely to have resulted from climate change, “rescued” Victorian rainfall data reveals that the UK was just as wet in Victorian times as today.

The illusion that major floods are becoming more frequent is due in part to the world’s growing population and the appeal, in the more developed countries at least, of living near water. This has led to more people building their dream homes in vulnerable locations, on river or coastal floodplains, as shown in the next figure.

Depicted is what has been termed the “Expanding Bull’s-Eye Effect” for a hypothetical river flood impacting a growing city. It can be seen that the same flood will cause much more destruction in 2040 than in 1950. A larger and wealthier population exposes more individuals and property to the devastation wrought by intermittent flooding from rainfall-swollen rivers or storm surges. Population expansion beyond urban areas, not climate change, has also worsened the death toll and property damage from hurricanes and tornadoes.

In a warming world, it is hardly surprising that heat waves are becoming more common. However, the claim by the AMS and NPR that heat waves are now “more extreme than ever” can be questioned, either because heat wave data prior to 1950 is completely ignored in many compilations, or because the data before 1950 is sparse. No recent heat waves come close to matching the frequency and duration of those experienced worldwide in the 1930s.

The media are misleading and stoking fear in the public about perfectly normal extreme weather, although there are some notable exceptions such as The Australian. The alarmist stories of the others are largely responsible for the current near-epidemic of “climate anxiety” in children, the most vulnerable members of our society.

Next: New Observations Upend Notion That Global Warming Diminishes Cloud Cover

New Research Finds Climate Models Unable to Reproduce Ocean Surface Temperatures

An earlier post of mine described how a group of prestigious U.S. climate scientists recently admitted that some climate models run too hot, greatly exaggerating future global warming. Now another group has published a research paper revealing that a whole ensemble of models is unable to reproduce observed sea surface temperature trends in the Pacific and Southern Oceans since 1979.

The observed trends include enhanced warming in the Indo-Pacific Warm Pool – a large body of water near Indonesia where sea surface temperatures exceed 28 degrees Celsius (82 degrees Fahrenheit) year-round – as well as slight cooling in the eastern equatorial Pacific, and cooling in the Southern Ocean.

Climate models predict exactly opposite effects in all three regions, as illustrated in the following figure. The top panel depicts the global trend in measured sea surface temperatures (SSTs) from 1979 to 2020, while the middle panel depicts the multimodel mean of hindcasted temperatures over the same period from a large 598-member ensemble, based on 16 different models and various possible CO2 emissions scenarios ranging from low (SSP2-4.5) to high (RCP8.5) emissions. The bottom panel shows the difference.

You can see that the difference between observed and modeled temperatures is indeed marked. Considerable warming in the Indo-Pacific Warm Pool and the western Pacific, together with cooling in the eastern Pacific and Southern Ocean, is absent from the model simulations. The researchers found that sea-level pressure trends showed the same discrepancy. The differences are especially pronounced for the Indo-Pacific Warm Pool.

The contributions of the individual model ensemble members to several key climate indices are illustrated in the figure below, where the letters A to P denote the 16 model types and the horizontal lines show the range of actual observed trends.

The top panel shows the so-called Pacific SST gradient, or difference between western and eastern Pacific; the center panel shows the ratio of Indo-Pacific Warm Pool warming to tropical mean warming; and the bottom panel portrays the Southern Ocean SST. All indices are calculated as a relative rate of warming per degree Celsius of tropical mean SST change. It is clear that the researchers’ findings hold across all members of the ensemble.

The results suggest that computer climate models have systematic biases in the transient response of ocean temperature patterns to any anthropogenic forcing, say the research authors. That’s because the contribution of natural variability to multidecadal trends is thought to be small in the Indo-Pacific region.

To determine whether the difference between observations and models comes from internal climate variability or from climate forcing not captured by the models, the researchers conducted a signal-to-noise maximizing pattern analysis. This entails maximizing the signal-to-noise ratio in global temperature patterns, where the signal is defined as the difference between observations and the multimodel mean on 5-year and longer timescales, and the noise consists of inter-model differences, inter-ensemble-member differences, and less-than-5-year variability. The chosen ensemble had 160 members.
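
In simplified form, the signal and noise terms can be illustrated at a single grid point, as in the sketch below. This is not the authors’ full pattern-maximization analysis, only an illustration of the decomposition they describe, applied to hypothetical, randomly generated ensemble and observation series.

```python
# Simplified, single-grid-point illustration of the signal and noise terms
# described above -- NOT the authors' full pattern-maximization analysis.
# The ensemble and observation series are hypothetical random data.
import numpy as np

rng = np.random.default_rng(0)
n_years, n_members = 42, 160                     # 1979-2020, 160-member ensemble
ensemble = 0.02 * np.arange(n_years) + rng.normal(0.0, 0.15, (n_members, n_years))
observations = -0.01 * np.arange(n_years) + rng.normal(0.0, 0.15, n_years)

def five_year_means(x):
    """Average over non-overlapping 5-year blocks (remainder years dropped)."""
    n = (x.shape[-1] // 5) * 5
    return x[..., :n].reshape(*x.shape[:-1], -1, 5).mean(axis=-1)

# Signal: observations minus the multimodel mean, on 5-year and longer timescales
signal = five_year_means(observations) - five_year_means(ensemble.mean(axis=0))

# Noise: spread across ensemble members plus sub-5-year variability
member_spread = five_year_means(ensemble).std(axis=0).mean()
sub_5yr = (observations[:40] - np.repeat(five_year_means(observations), 5)).std()
noise = member_spread + sub_5yr

print(f"signal-to-noise ratio at this grid point: {np.abs(signal).mean() / noise:.2f}")
```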

As seen in the next figure, the leading pattern from this analysis (Difference Pattern 1) shows significant discrepancies between observations and models, similar to the difference panel designated “e” in the first figure above. Lack of any difference would appear as a colorless pattern. Only one of the 598 ensemble members came anywhere close to matching the observed trend in this pattern, indicating that the models are the problem, not a misunderstanding of natural variability.

The second pattern (Difference Pattern 2), which focuses on the Northern Pacific and Atlantic Oceans, also shows an appreciable difference between models and observations. The research team found that only a handful of ensemble members could reproduce this pattern. They noted that the model that most closely matched the trend in Pattern 1 was furthest from reproducing the Pattern 2 trend.

Previously proposed explanations for the differences seen between observed and modeled trends in sea surface temperatures include systematic biases in the transient response to climate forcing, and model biases in the representation of multidecadal natural variability.

However, the paper’s authors conclude it is extremely unlikely that the trend discrepancies result entirely from internal variability, such as the anomalous return to warming during the recent cool phase of the PDO (Pacific Decadal Oscillation) as proposed by other researchers. The authors say that the large difference in the Warm Pool warming rate between models and observations (“b” in the second figure above) is particularly hard to explain by natural variability.

They suggest that multidecadal variability of both tropical and subtropical sea surface temperatures is much too weak in climate models, and that damping feedbacks in response to Warm Pool warming may be too strong in the models, which would reduce both the modeled warming rate and the modeled amplitude of multidecadal variability.

Next: Are Ocean Surface Temperatures, Not CO2, the Climate Control Knob?

Recent Marine Heat Waves Caused by Undersea Volcanic Eruptions, Not Human CO2

In a previous post, I showed how submarine volcanic eruptions don’t contribute to global warming, despite the release of enormous amounts of explosive energy. But they do contribute to regional climate change in the oceans, such as marine heat waves and shrinkage of polar sea ice, explained a retired geologist in a recent lecture.

Wyss Yim, who holds positions at several universities in Hong Kong, says that undersea volcanic eruptions – rather than CO2 – are an important driver of regional climate variability. The release of geothermal heat from these eruptions can explain oceanic heat waves, polar sea-ice changes and stronger-than-normal cycles of ENSO (the El Niño – Southern Oscillation), which causes temperature fluctuations and other climatic effects in the Pacific.

Submarine eruptions can eject basaltic lava at temperatures as high as 1,200 degrees Celsius (2,200 degrees Fahrenheit), often from multiple vents over a large area. Even though the hot lava is quickly quenched by the surrounding seawater, the heat absorbed by the ocean can have local, regional impacts that last for years.

The Pacific Ocean in particular is a major source of active terrestrial and submarine volcanoes, especially around the Ring of Fire bounding the Pacific tectonic plate, as illustrated in the figure below. Yim has identified eight underwater eruptions in the Pacific from 2011 to 2022 that had long-lasting effects on the climate, six of which emanated from the Ring of Fire.

One of these eruptions was from the Nishino-shima volcano south of Tokyo, which underwent a massive blow-out, initially undersea, that persisted from March 2013 to August 2015. Yim says the event was the principal cause of the so-called North Pacific Blob, a massive pool of warm seawater that formed in the northeast Pacific from 2013 to 2015, extending all the way from Alaska to the Baja Peninsula in Mexico and up to 400 meters (1,300 feet) deep. Climate scientists at the time, however, attributed the Blob to global warming.

The Nishino-shima eruption, together with other submarine eruptions in the Pacific during 2014 and 2015, was a major factor in prolonging and strengthening the massive 2014-2017 El Niño. A map depicting sea surface temperatures in January 2014, at the onset of El Niño and almost a year after the emergence of the Blob, is shown in the next figure. At that time, surface temperatures across the Blob were about 2.5 degrees Celsius (4.5 degrees Fahrenheit) above normal.

By mid-2014, the Blob covered an area approximately 1,600 km (1,000 miles) on a side. Its vast extent, states Yim, contributed to the gradual decline of Arctic sea ice between 2014 and 2016, especially in the vicinity of the Bering Strait. The Blob also led to two successive years without winter along the northeast Pacific coast.

Biodiversity in the region suffered too, with sustained toxic algal blooms. Yet none of this was caused by climate change.

The 2014-2017 El Niño was further exacerbated by the eruption from May to June 2015 of the Wolf volcano on the Galapagos Islands in the eastern Pacific. Although the Wolf volcano is on land, its lava flows entered the ocean. The figure below shows the location of the Wolf eruption, along with submarine eruptions of both the Axial Seamount close to the Blob and the Hunga volcano in Tonga in the South Pacific.

According to Yim, the most significant drivers of the global climate are changes in the earth’s orbit and the sun, followed by geothermal heat, and – only in third place – human-induced changes such as increased greenhouse gases. Geothermal heat from submarine volcanic eruptions causes not only marine heat waves and contraction of polar sea ice, but also local changes in ocean currents, sea levels and surface winds.

Detailed measurements of oceanic variables such as temperature, pressure, salinity and chemistry are made today by the worldwide network of 3,900 Argo profiling floats. The floats are battery-powered robotic buoys that patrol the oceans, sinking 1-2 km (0.6-1.2 miles) deep once every 10 days and recording the properties of the water as they ascend back to the surface, where the data are transmitted to a satellite.

Yim says his studies show that the role played by submarine volcanoes in governing the planet’s climate has been underrated. Eruptions of any of the several thousand active underwater volcanoes can have substantial regional effects on climate, as just discussed.

He suggests that the influence of volcanic eruptions on atmospheric and oceanic circulation should be included in climate models. The only volcanic effect in current models is the atmospheric cooling produced by eruption plumes.

Next: Climate Heresy: To Avoid Extinction We Need More, Not Less CO2

Arctic Sea Ice Refuses to Disappear, despite Ever Rising Arctic Temperatures

The loss of sea ice in the Arctic due to global warming has long been held up by the mainstream media and climate activists as cause for alarm. The ice would be completely gone in summer, they predicted, by 2013, then 2016, then 2030. But the evidence shows that Arctic ice is not cooperating, and in fact its summer extent in 2022 was the same as in 2008. And this stasis has occurred even as Arctic temperatures continue to soar.

The minimum summer Arctic ice extent this month was about 67% of its coverage in 1979, which is when satellite measurements of sea ice in the Arctic and Antarctic began. The figure to the left shows satellite-derived images of Arctic sea ice extent in the summer of 2022 (September 18) and the winter of 2021 (March 7), which was similar to 2022. Sea ice shrinks during summer months and expands to its maximum extent during the winter.

Over the interval from 1979 to 2022, Arctic summer ice detached from the Russian coast, although it still encases northern Greenland as can be seen. The figure below compares the monthly variation of Arctic ice extent from its March maximum to the September minimum, for the years 2022 (blue curve) and 2008 (red curve). The 2022 summer minimum is seen to be almost identical to that in 2008, as was the 2021 minimum, with the black curve depicting the median extent over the period from 1981 to 2010.

The next figure illustrates the estimated Arctic ice thickness and volume at the 2022 minimum. The volume depends on both ice extent and thickness, which varies with location as well as season. Arctic ice thickness is notoriously difficult to measure, the best data coming from limited submarine observations.

The thickest, and oldest, winter ice currently lies along the northern coasts of the Canadian Arctic Archipelago and Greenland. According to a trio of Danish research institutions, just 20% of the Arctic ice pack today consists of thick ice more than one to two years old, compared to 40% in 1983. Thick, multi-year ice doesn’t melt away in the summer, but much of the ice cover currently formed during winter consists of thin, first-year ice. 

What is surprising, however, is that the lack of any further loss in summer ice extent since 2008 has been accompanied by a considerable increase in Arctic temperature. The left panel of the next figure, from a dataset compiled by the European Union’s Copernicus Climate Change Service, shows the mean surface temperature in the Arctic since 1979.

You can see that the Arctic has been warming steadily since at least 1979, when the first satellite measurements were made. As shown in the figure, the mean temperature there shot up by 3 degrees Celsius (5.4 degrees Fahrenheit), compared to global warming over the same interval of only 0.68 degrees Celsius (1.2 degrees Fahrenheit). That’s an Arctic warming rate 4 times faster than the globe as a whole. From 2008 to 2022, during which the summer ice extent remained unchanged on average, the Arctic nevertheless warmed by about 1.3 degrees Celsius (2.3 degrees Fahrenheit).

This phenomenon of excessive warming at the North Pole is known as Arctic amplification, depicted numerically in the right panel of the figure above. The effect shows strong regional variability, with some areas – such as the Taymyr Peninsula region in Siberia and the sea near Novaya Zemlya Island – warming by as much as seven times the global average. The principal reason for the high amplification ratio in these areas is exceptionally low winter ice cover, which is most pronounced in the Barents Sea near Novaya Zemlya.

The amplification is a result of so-called albedo (reflectivity) feedback. Sea ice is covered by a layer of white snow that reflects around 85% of incoming sunlight back out to space. As the highly reflective ice melts from global warming, it exposes more of the darker seawater underneath. The less reflective seawater absorbs more incoming solar radiation than sea ice, pushing the temperature higher. This in turn melts more ice and exposes more seawater, amplifying the warming in a feedback loop.
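
The arithmetic behind the feedback is straightforward, as the sketch below shows. The 85% reflectivity of snow-covered ice comes from the text, while the open-water albedo and the summertime Arctic insolation are assumed round numbers used only for illustration.

```python
# Sketch of the albedo arithmetic behind the feedback described above.
# The 85% reflectivity of snow-covered ice is taken from the text; the
# open-water albedo and the summer insolation are assumed round values.
insolation = 200.0      # W/m^2, assumed mean summer solar flux at the surface
albedo_ice = 0.85       # snow-covered sea ice reflects ~85% of sunlight
albedo_ocean = 0.07     # open seawater reflects only a few percent (assumed)

absorbed_ice = insolation * (1 - albedo_ice)
absorbed_ocean = insolation * (1 - albedo_ocean)

print(f"absorbed over ice:        {absorbed_ice:.0f} W/m^2")
print(f"absorbed over open water: {absorbed_ocean:.0f} W/m^2")
print(f"extra absorption when ice melts: {absorbed_ocean - absorbed_ice:.0f} W/m^2")
# The extra absorbed energy warms the water, melting more ice and exposing
# still more dark water -- the amplifying loop described in the text.
```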

Interestingly, computer climate models, most of which exaggerate the impact of global warming, underestimate Arctic warming. The models typically estimate an average Arctic amplification ratio of about 2.5, much lower than the average ratio of 4 deduced from actual observations. A recent research study attributes this difference to possible errors in the modeled sensitivity to greenhouse gas forcing, and in the distribution of heating from the forcing between the atmosphere, cryosphere and ocean.

The study authors also suggest that climate models underestimate multi-decadal internal variability, especially of atmospheric circulation in mid-latitudes (30° to 60° from the equator), which influences temperature variability in the Arctic as well.

Next: Climate-Related Disasters Wrongly Linked to Global Warming by Two International Agencies

Climate Science Establishment Finally Admits Some Models Run Too Hot

The weaknesses of computer climate models – in particular their exaggeration of global warming’s impact – have long been denied or downplayed by modelers. But in a recent about-face published in the prestigious scientific journal Nature, a group of prominent climate scientists tell us it’s time to “recognize the ‘hot model’ problem.” 

The admission that some models predict a future that gets too hot too soon has far-reaching implications, not only for climate science, but also for worldwide political action being considered to curb greenhouse gas emissions. Widespread panic about an unbearably hot future is largely a result of overblown climate model predictions.

I’ve discussed the shortcomings of climate models in previous posts (see here and here). As well as their omission of many types of natural variability and their overestimation of predicted future temperatures, most models can’t even reproduce the past climate accurately – a process known to modelers as hindcasting. You can see this in the following figure, which shows global warming, relative to 1979 in degrees Celsius, hindcasted by 43 different models for the period from 1975 to 2019.  

The thin colored lines indicate the modeled variation of temperature with time for the different models, while the thick red and green lines show the mean trend for models and observations, respectively. It's evident that, even in the past, many models run too hot, with only a small number coming close to actual measurements.

Current projections of future warming represent an average of an ensemble of typically 55 different models. Researchers have found that some of the 55 models run hot because of their inaccurate representation of clouds, which distorts the ensemble average upwards. When these too-hot models are excluded from the ensemble, the projected global mean temperature in the year 2100 drops by as much as 0.7 degrees Celsius (1.3 degrees Fahrenheit).

Zeke Hausfather, lead author of the Nature article, remarks that it’s therefore a mistake for scientists to continue to assume that each climate model is independent and equally valid. “We must move away from the naïve idea of model democracy,” he says.

This approach has in fact been adopted in the Sixth Assessment Report (AR6) of the UN’s IPCC (Intergovernmental Panel on Climate Change). To estimate future global temperatures, AR6 authors assigned weights to the different climate models before averaging them, using various statistical weighting methods. More weight was given to the models that agreed most closely with historical temperature observations.

Their results are illustrated in the next figure, showing projected temperature increases to 2100, for four different possible CO2 emissions scenarios ranging from low (SSP1-2.6) to high (SSP5-8.5) emissions. The dashed lines for each scenario are projections that average all models, as done previously; the thin solid lines are projections that average all but the hottest models; and the thick solid lines are projections based on the IPCC statistical adjustment, which are seen to be slightly lower than the average excluding hot models.
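
The three ways of combining the ensemble – a plain mean, a mean excluding the hottest models, and a weighted mean – can be illustrated with the toy sketch below. The projection values, the “hot” cutoff and the inverse-error weighting are all hypothetical stand-ins; the IPCC’s actual statistical weighting methods are considerably more elaborate.

```python
# Three ways to combine an ensemble of model projections: a plain mean,
# a mean excluding "hot" models, and a mean weighted by hindcast skill.
# All numbers are hypothetical, and the inverse-error weighting is only a
# stand-in for the IPCC's more elaborate statistical methods.
projections_2100 = [2.4, 2.7, 3.0, 3.2, 3.5, 4.6, 5.1]          # deg C, hypothetical
hindcast_error   = [0.05, 0.08, 0.20, 0.30, 0.35, 0.60, 0.70]   # deg C, hypothetical

# 1. Plain "model democracy" mean
plain_mean = sum(projections_2100) / len(projections_2100)

# 2. Mean excluding models that warm beyond a chosen cutoff (4 C here, hypothetical)
kept = [t for t in projections_2100 if t <= 4.0]
screened_mean = sum(kept) / len(kept)

# 3. Weighted mean, with weights proportional to 1 / hindcast error
weights = [1.0 / e for e in hindcast_error]
weighted_mean = sum(w * t for w, t in zip(weights, projections_2100)) / sum(weights)

print(f"plain ensemble mean:   {plain_mean:.2f} C")
print(f"excluding hot models:  {screened_mean:.2f} C")
print(f"skill-weighted mean:   {weighted_mean:.2f} C")
```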

The figure sheds light on a second reason that climate models overestimate future temperatures, aside from their inadequate simulation of climatic processes, namely an emphasis on unrealistically high emissions scenarios. The mean projected temperature rise by 2100 is seen to range up to 4.8 degrees Celsius (8.6 degrees Fahrenheit) for the highest emissions scenario.

But the somewhat wide range of projected warming has been narrowed in a recent paper by the University of Colorado’s Roger Pielke Jr. and coauthors, who select the scenarios that are most consistent with temperature observations from 2005 to 2020, and that best match emissions projected by the IEA (International Energy Agency).

As shown in the figure below, their selected scenarios project warming by 2100 of between 2 and 3 degrees Celsius (3.6 and 5.4 degrees Fahrenheit), the exact value depending on the particular filter used in the analysis. Boxes denote the 25th to 75th percentile ranges for each filter, while white lines denote the medians.

Overall, the projected median is 2.2 degrees Celsius (4 degrees Fahrenheit), considerably lower than the implausible 4 to 5 degrees Celsius (7.2 to 9 degrees Fahrenheit) of future warming often touted by the media – although it’s slightly above the upper limit of 2 degrees Celsius (3.6 degrees Fahrenheit) targeted by the 2015 Paris Agreement. But, as Pielke has commented, the unrealistic high-emissions scenarios are the basis for dozens of research papers published every week, leading to ever-proliferating “sources of error in projections of future climate change.”

Likewise, Hausfather and his coauthors are concerned that “… much of the scientific literature is at risk of reporting projections that are … overly influenced by the hot models.” In addition to prioritizing models with realistic warming rates, he suggests adopting another IPCC approach to predicting the future climate: one that emphasizes the effects of specific levels of global warming (1.5, 2, 3 and 4 degrees Celsius for example), regardless of when those levels are reached.

Next: No Convincing Evidence That Cleaner Air Causes More Hurricanes

New Projections of Sea Level Rise Are Overblown

That sea levels are rising due to global warming is not in question. But there’s no strong scientific evidence that the rate of rise is accelerating, as claimed in a recent NOAA (the U.S. National Oceanic and Atmospheric Administration) report on sea level rise or the Sixth Assessment Report (AR6) of the UN’s IPCC (Intergovernmental Panel on Climate Change). Such claims create unnecessary alarm.

NOAA’s projections to 2050 are illustrated in the next figure, showing sea level relative to 2000 both globally and in the contiguous U.S. The green curves represent a smoothing of actual observations from 1970 to 2020, together with an extrapolation from 2020 to 2050 based on the earlier observations. The projections in other colors correspond to five different modeled scenarios ranging from low to high risk for coastal communities.

The U.S. projections are higher than the global average because the North American Atlantic coast is a “hotspot” for sea level rise, with anomalously high rates of rise. The extrapolated U.S. average is projected to increase from 11 cm (4.3 inches) above its 2000 level in 2020, to 19 cm (7.5 inches) in 2030, 28 cm (11 inches) in 2040 and 38 cm (15 inches) in 2050. Projected increases are somewhat higher than average for the Atlantic and Gulf coasts, and considerably lower for the west coast.

These projected NOAA increases clearly suggest an accelerating rate of sea level rise, from a rate of 5.5 cm (2.2 inches) per decade between 2000 and 2020, to an almost doubled 10 cm (3.9 inches) per decade between 2040 and 2050. That’s a rapid acceleration rate of 1.5 mm per year per year and implies a rise in U.S. sea levels by 2050 as large as that seen over the past century. The implied global acceleration rate is 0.83 mm per year per year.

But even the IPCC’s AR6, which makes exaggerated claims about extreme weather, estimates global sea level acceleration at only 0.1 mm per year per year from 1993 to 2018. It seems highly unlikely that the rate would increase by nearly an order of magnitude in 32 years, so the NOAA projections appear excessively high.  

However, all these estimates are based not only on actual measurements, but also on computer models. The models include contributions to sea level rise from the expansion of seawater as it warms; melting of the Greenland and Antarctic ice sheets, as well as glaciers; sinking of the seafloor under the weight of extra meltwater; and local subsidence due to groundwater depletion, or rebound after melting of the last ice age’s heavy ice sheet.

The figure on the left below shows the GMSL (global-mean sea level, blue curve) rise rate estimated by one of the models for the 100 years from 1910 to 2010. Although it’s clear that the rate has been increasing since the late 1960s, it did the same in the 1920s and 1930s, and may currently be turning downward. Not surprisingly, studies using these models often come to very different conclusions about future rates of sea level rise.

The figure on the right below is an historical reconstruction of the rise rate for various locations along the Atlantic North American and Icelandic coasts, derived from salt-marsh sediment proxies and corrected for glacial rebound. It can be seen that rates of rise in the 18th century were at times only slightly lower than those in the 20th century, and that sea levels have fluctuated for at least 300 years, long before modern global warming began.

Because of this, the reconstruction study authors comment that the high “hotspot” rates of sea level rise in eastern North America may not be associated with any human contribution to global warming. They hypothesize that the fluctuations are related to changes in the mass of Arctic land ice, possibly associated with the naturally occurring North Atlantic Oscillation.

Along with the IPCC estimates, the reconstruction casts doubt on NOAA’s claim of continuing acceleration of today’s sea level rise rate. An accompanying news release adds to the hype, stating that “Sea levels are continuing to rise at an alarming rate, endangering communities around the world.”

Supporting the conclusion that NOAA’s projections are exaggerated is a 2021 assessment by climate scientist Judith Curry of projected sea level scenarios for the New Jersey coast. Taking issue with a 2019 report led by scientists from Rutgers University, her assessment found that the Rutgers sea level projections were – like NOAA’s estimates – substantially higher than those of the IPCC in its Fifth Assessment Report prior to AR6. Curry’s finding was that the bottom of the Rutgers “likely” scenarios was the most probable indicator of New Jersey sea level rise by 2050.

Interestingly, NOAA’s “low” scenario projected a U.S. average sea level of 31 cm (12 inches) in 2050, rather than 38 cm (15 inches), implying essentially no acceleration of the rise rate at all – and no cause for its media hype.

(This post has also been kindly reproduced in full on the Climate Depot blog.)

Next: Natural Sources of Global Warming and Cooling: (2) The PDO and AMO

The Crucial Role of Water Feedbacks in Global Warming

One of the most important features of our climate system – and of the computer models developed to represent it – is feedback. Most people don’t realize that without positive feedbacks, the climate would be so insensitive to CO2 and other greenhouse gases that global warming wouldn’t be a concern. Positive feedbacks amplify global warming, while negative feedbacks tamp it down.

A doubling of CO2, acting entirely on its own, would raise global temperatures only by a modest 1.1 degrees Celsius (2.0 degrees Fahrenheit). In climate models, it’s positive feedback from water vapor – by far the most abundant greenhouse gas – and, to a lesser extent, feedback from clouds, snow and ice, that boosts the warming effect of doubled CO2 alone to the predicted very likely range of 2 degrees Celsius (3.6 degrees Fahrenheit) to 5 degrees Celsius (9 degrees Fahrenheit).
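This amplification can be expressed in the standard textbook feedback form – a minimal sketch for illustration, not a calculation taken from any IPCC report: if the no-feedback response to doubled CO2 is about 1.1 degrees Celsius and the net feedback fraction is f, the realized warming is 1.1/(1 − f). The feedback fractions below are assumed values chosen only to bracket the quoted 2 to 5 degree range.

# Illustrative only: zero-dimensional feedback amplification.
delta_T0 = 1.1  # deg C, warming from doubled CO2 acting alone (as quoted above)

def amplified_warming(f):
    """Warming when a net feedback fraction f (0 < f < 1) amplifies the no-feedback response."""
    return delta_T0 / (1.0 - f)

for f in (0.0, 0.45, 0.6, 0.78):
    print(f"feedback fraction {f:.2f} -> warming {amplified_warming(f):.1f} deg C")
# A feedback fraction of about 0.45 gives ~2 deg C and about 0.78 gives ~5 deg C,
# spanning the "very likely" range cited in the text.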

Contributions of the various greenhouse gases to global warming can be surmised from the figure below, which depicts the wavelength spectrum of thermal radiation transmitted through the atmosphere, with wavelength measured in micrometers. Greenhouse gases cause warming by absorbing a substantial portion of the cooling longwave radiation emitted upwards by the earth. The lower panels of the figure show how water vapor absorbs strongly in several wavelength bands that don’t overlap those of CO2.

The assumption that water vapor feedback is positive and not negative was originally made by the Swedish chemist Svante Arrhenius over a century ago. The feedback arises when slight CO2-induced warming of the earth causes more water to evaporate from oceans and lakes, and the extra moisture then adds to the heat-trapping water vapor already in the atmosphere. This amplifies the warming even more.

The magnitude of the feedback is critically dependent on how much of the extra water vapor ends up in the upper atmosphere as the planet warms, because that’s where heat escapes to outer space. An increase in moisture there means stronger, more positive water vapor feedback and thus more heat trapping.
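The underlying physics is the strong dependence of saturation vapor pressure on temperature. A small sketch using the widely used Magnus approximation – an illustration of the general relationship, not a number taken from any climate model – shows that the atmosphere’s capacity to hold water vapor grows by roughly 6-7% per degree Celsius of warming.

import math

def saturation_vapor_pressure(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

# Fractional increase in water-holding capacity for 1 deg C of warming near 15 deg C
e1 = saturation_vapor_pressure(15.0)
e2 = saturation_vapor_pressure(16.0)
print(f"increase per degree C: {100 * (e2 / e1 - 1):.1f}%")  # roughly 6-7%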

The concentration of water vapor in the atmosphere declines steeply with altitude, with more than 95% of it lying within 5 kilometers of the earth’s surface. Limited data do show that upper-atmosphere humidity increased slightly in the tropics during the 30-year period from 1979 to 2009, during which the globe warmed by about 0.5 degrees Celsius (0.9 degrees Fahrenheit). However, humidity decreased in the subtropics, and possibly at higher latitudes too, over the same period.

But the predicted warming of 2 degrees Celsius (3.6 degrees Fahrenheit) to 5 degrees Celsius (9 degrees Fahrenheit) for doubled CO2 assumes that the water vapor concentration in the upper atmosphere increases at all latitudes as the planet heats up. In the absence of observational evidence for this assumption, we can’t be at all sure that the water vapor feedback is strong enough to produce temperatures in the predicted range.

The uncertainty over CO2 warming is exacerbated by lack of knowledge about another water feedback, from clouds. As I’ve discussed in another post, cloud feedback can be either positive or negative.

Positive cloud feedback is normally associated with an increase in high-level clouds such as cirrus clouds, which allow most of the sun’s incoming shortwave radiation to penetrate, but also act as a blanket inhibiting the escape of longwave heat radiation to space. More high-level clouds amplify warming that in turn evaporates more water and produces yet more clouds.

Negative cloud feedback can arise from a warming-induced increase in low-level clouds such as cumulus and stratus clouds. These clouds reflect 30-60% of the sun’s radiation back into space, acting like a parasol and thus cooling the earth’s surface. The cooling results in less evaporation that then reduces new cloud formation.

Conversely, a decrease in high-level clouds would imply negative cloud feedback, while a decrease in low-level clouds would imply positive feedback. Because of all these possibilities, together with the paucity of empirical data, it’s simply not known whether net cloud feedback in the earth’s climate system is one or the other – positive or negative.

If overall cloud feedback is negative, rather than positive as the computer models suggest, it’s possible that negative feedbacks in the climate system from the lapse rate (the rate of temperature decrease with altitude in the lower atmosphere) and clouds dominate the positive feedbacks from water vapor, and from snow and ice. This would mean that the response of the climate to added CO2 in the atmosphere is to lessen, rather than magnify, the temperature increase from CO2 acting alone, the opposite of what climate models tell us. Most feedbacks in nature are negative, keeping the natural world stable.

So until water feedbacks are better understood, there’s little scientific justification for any political action on CO2 emissions.

Next: El Niño and La Niña May Influence the Climate More than Greenhouse Gases

Latest UN Climate Report Is More Hype than Science

In its latest climate report, the UN’s IPCC (Intergovernmental Panel on Climate Change) falls prey to the hype usually characteristic of alarmists who ignore the lack of empirical evidence for the climate change narrative of “unequivocal” human-caused global warming.

Past IPCC assessment reports have served as the voice of authority for climate science and, even among those who believe in man-made climate change, as a restraining influence – being hesitant in linking weather extremes to a warmer world, for instance. But all that has changed in its Sixth Assessment Report, which the UN Secretary-General has hysterically described as “code red for humanity.”

Among other claims trumpeted in the report is the statement that “Evidence of observed changes in extremes such as heat waves, heavy precipitation, droughts, and tropical cyclones, and, in particular, their attribution to human influence, has strengthened since [the previous report].” This is simply untrue and actually contrary to the evidence, with the exception of heavy precipitation, which tends to increase with global warming because enhanced evaporation from tropical oceans puts more water vapor in the atmosphere.

In other blog posts and a recent report, I’ve shown how there’s no scientific evidence that global warm­ing triggers extreme weather, or even that weather extremes are becoming more frequent. Anomalous weather events, such as heat waves, hurricanes, floods, droughts and tornadoes, show no long-term trend over more than a century of reliable data.

As one example, the figure below shows how the global area and intensity of drought remained essentially unchanged from 1950 to 2019, even though the earth warmed by about 1.1 degrees Celsius (2.0 degrees Fahrenheit) over that interval. The drought area is the percentage of total global land area, excluding ice sheets and deserts, while the intensity is characterized by the self-calibrating Palmer Drought Severity Index, which measures both dryness and wetness and classifies events as “moderate,” “severe” or “extreme.”

Drought.jpg
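As a rough guide to how such an index is read, here is a minimal sketch using the conventional Palmer classification thresholds – the standard published cutoffs, not values taken from the figure above.

def classify_drought(pdsi):
    """Conventional Palmer Drought Severity Index drought categories (negative = dry)."""
    if pdsi <= -4.0:
        return "extreme drought"
    elif pdsi <= -3.0:
        return "severe drought"
    elif pdsi <= -2.0:
        return "moderate drought"
    else:
        return "near normal or wet"

print(classify_drought(-3.5))  # severe drought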

Although the IPCC report claims, with high confidence, that “the frequency of concurrent heatwaves and droughts on the global scale” is increasing, the scientific evidence doesn’t support such a bold assertion. An accompanying statement that cold extremes have become less frequent and less severe is also blatantly incorrect.

Cold extremes are in fact on the rise, as I’ve discussed in previous blog posts (here and here). The IPCC’s sister UN agency, the WMO (World Meteorological Organization), does at least acknowledge the existence of cold weather extremes, but has no explanation for their origin or their growing frequency. Cold extremes include prolonged cold spells, unusually heavy snowfalls and longer winter seasons. Why the IPCC should draw the wrong conclusion about them is puzzling.

In discussing the future climate, the IPCC makes use of five scenarios that project differing emissions of CO2 and other greenhouse gases. The scenarios start in 2015 and range from one that assumes very high emissions, with atmospheric CO2 doubling from its present level by 2050, to one assuming very low emissions, with CO2 emissions declining to “net zero” by mid-century.

But, as pointed out by the University of Colorado’s Roger Pielke Jr., the estimates in the IPCC report are dominated by the highest emissions scenario. Pielke finds that this super-high emissions scenario accounts for 41.5% of all scenario mentions in the report, whereas the scenarios judged to be the most likely under current trends account for only a scant 18.4% of all mentions. The hype inherent in the report is obvious by comparing these percentages with the corresponding ones in the Fifth Assessment Report, which were 31.4% and 44.5%, respectively. 

Not widely known is that the supposed linkage between climate change and human emissions of greenhouse gases, as well as the purported connection between global warming and weather extremes, both depend entirely on computer climate models. Only the models link climate change or extreme weather to human activity. The empirical evidence does not – it merely shows that the planet is warming, not what’s causing the warming.

A recent article in the mainstream scientific journal Science surprisingly drew attention to the shortcomings of climate models, weaknesses that have been emphasized for years by climate change skeptics. Apart from falsely linking global warming to CO2 emissions – because the models don’t include many types of natural variability – the models greatly exaggerate predicted temperatures, and can’t even reproduce the past climate accurately. As leading climate scientist Gavin Schmidt says, “You end up with numbers for even the near-term that are insanely scary—and wrong.”

The new IPCC report, with its prognostications of gloom and doom, should have paid more attention to its modelers. In making wrong claims about the present climate, and relying too heavily on high-emissions scenarios for future projections, the IPCC has strayed from the path of science.

Next: Weather Extremes: Hurricanes and Tornadoes Likely to Diminish in 2021

New Doubts on Climatic Effects of Ocean Currents, Clouds

Recent research has cast doubt on the influence of two watery entities – ocean currents and clouds – on future global warming. But, unlike many climate studies, the two research papers are grounded in empirical observations rather than theoretical models.

The first study examined so-called deep water formation in the Labrador Sea, located between Greenland and Canada in the North Atlantic Ocean, and its connection to the strength of the AMOC (Atlantic Meridional Overturning Circulation). The AMOC forms part of the ocean conveyor belt that redistributes seawater and heat around the globe. Despite recent evidence to the contrary, computer climate models have predicted that climate change may weaken the AMOC or even shut it down altogether.  

Deep water formation, which occurs in a few localized areas across the world, refers to the sinking of cold, salty surface water to depths of several kilometers because it’s denser than warm, fresher water; winter winds in the Labrador Sea both cool the surface and increase salinity through evaporation. Most climate models link any decline in North Atlantic deep water formation, due to global warming, to decreases in the strength of the AMOC.

But the researchers found that winter convection in the Labrador Sea and the adjacent Irminger Sea (east of Greenland) had very little impact on deep ocean currents associated with the AMOC, over the period from 2014 to 2018. Their observational data came from a new array of seagoing instruments deployed in the North Atlantic, including moorings anchored on the sea floor, underwater gliders and submersible floats. The devices measure ocean current, temperature and salinity.

Results for the strength of the AMOC are illustrated in the figure below, in which “OSNAP West” includes the Labrador and Irminger Seas while the more variable “OSNAP East” is in the vicinity of Iceland. As can be seen, the AMOC in the Labrador Sea didn’t change on average during the whole period of observation. The study authors caution, however, that measurements over a longer time period are needed to confirm their conclusion that strong winter cooling in the Labrador Sea doesn’t contribute significantly to variability of the AMOC.

Fig 1.jpg
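To give a sense of the quantity being plotted, here is a toy calculation – hypothetical numbers, not the OSNAP array’s actual processing – of how a volume transport in Sverdrups is obtained from current measurements across a section. The AMOC strength itself is defined from the overturning streamfunction, but the basic ingredient is the same velocity-times-area sum.

import numpy as np

# Hypothetical northward velocities (m/s) measured at points across a section,
# each representative of a cell of given width (m) and thickness (m).
velocity = np.array([0.05, 0.12, 0.08, -0.03])      # m/s, positive = northward
width = np.array([50e3, 80e3, 80e3, 50e3])           # m
thickness = np.array([500.0, 800.0, 800.0, 500.0])   # m

transport_m3s = np.sum(velocity * width * thickness)  # m^3/s
print(f"transport: {transport_m3s / 1e6:.1f} Sv")      # 1 Sverdrup = 10^6 m^3/s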

Understanding the behavior of the AMOC is important because its strength affects sea levels, as well as weather in Europe, North America and parts of Africa. Variability of the AMOC is thought to have caused multiple episodes of abrupt climate change, in a decade or less, during the last ice age.

The second study to question the effect of water on global warming involves clouds. As I described in an earlier post, the lack of detailed knowledge about clouds is one of the major limitations of computer climate models. One problem with the existing models is that they simulate too much rainfall from “warm” clouds and, therefore, underestimate their lifespan and cooling effect.

Warm clouds contain only liquid water, whereas “cool” clouds consist of ice particles mixed with water droplets. Since the water droplets are usually smaller than the ice particles, they have a larger surface-area-to-mass ratio, which makes them reflect the sun’s radiation more readily. So warm clouds block more sunlight and produce more cooling than cool, icy clouds. At the same time, warm clouds survive longer because they don’t rain as much.

The research team used satellite data to ascertain how much precipitation from clouds occurs in our present climate. The results for warm clouds are illustrated in the following map showing the warm-rain fraction; red designates 100% warm rain, while various shades of blue indicate low fractions. As would be expected, most warm rain falls near the equator where temperatures are highest. It also falls predominantly over the oceans.

Fig 2.jpg
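Conceptually, the mapped quantity is simply the share of total rainfall at each location that comes from liquid-only clouds. A schematic sketch with hypothetical arrays – not the study’s actual satellite product – looks like this.

import numpy as np

# Hypothetical gridded precipitation rates (mm/day) classified by cloud type.
warm_rain = np.array([[2.0, 0.5], [4.0, 0.0]])   # rain from liquid-only ("warm") clouds
cold_rain = np.array([[1.0, 3.5], [0.0, 2.0]])   # rain involving ice ("cool") clouds

total = warm_rain + cold_rain
warm_fraction = np.divide(warm_rain, total, out=np.zeros_like(total), where=total > 0)
print(warm_fraction)   # 1.0 where all rain is warm rain, 0.0 where none is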

The researchers then employed the empirical satellite data for rainfall to modify the warm-rain processes in an existing CMIP6 climate model (the latest generation of models). The next figure, which shows the probability of rain from warm clouds at different latitudes, compares the satellite data (gray) to the model results before (maroon) and after (yellow) modification.

Fig 3.jpg

It’s seen that the modified climate model is in much better agreement with the satellite data, except for a latitude band just north of the equator, and is also a major improvement over the unmodified model. The scientists say that their correction to the model makes the negative “cloud-lifetime feedback” – the process by which higher temperatures inhibit rain from warm clouds and so increase their lifetime – almost three times larger than in the original model.

This larger cooling feedback is enough to offset the extra greenhouse warming predicted by CMIP6 models compared with earlier CMIP5 models. But, as the study tested only a single model, it needs to be extended to more models before that conclusion can be confirmed.

Next: Could Pacific Northwest Heat Wave, European Floods Have Been Caused by the Sun?

Natural Sources of Global Warming and Cooling: (1) Solar Variability and La Niña

The role played by the sun in climate change has long been trivialized by advocates of the orthodoxy that links global warming almost entirely to our emissions of greenhouse gases. But recent research suggests that solar fluctuations, while small, may affect climate by driving the multidecadal switch from El Niño to La Niña conditions in the Pacific Ocean. Other research finds that our inability to correctly simulate the cooling La Niña cycle is a major reason that computer climate models run hot.     

La Niña is the cool phase of ENSO (the El Niño – Southern Oscillation), a natural cycle that causes temperature fluctuations and other climatic effects in tropical regions of the Pacific. The familiar El Niño and La Niña events, which last for a year or more at a time, recur at irregular intervals from two to seven years. Serious effects of ENSO range from catastrophic flooding in the U.S. and Peru to severe droughts in Australia. 

The sun has several natural cycles, the best known of which is the 11-year sunspot cycle. Over the sunspot cycle, the sun’s heat and light output waxes and wanes by about 0.08%. Although this variation is too small in itself to have any appreciable direct effect on the earth’s climate, indirect solar effects can have an impact on the warming and cooling of our planet – indirect effects that are ignored in climate models.
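To put that 0.08% in perspective, here is a back-of-the-envelope calculation – my own illustrative arithmetic using a nominal solar constant of about 1361 watts per square meter, not a figure from the studies discussed below.

# Rough size of the direct forcing from the 11-year solar cycle.
solar_constant = 1361.0       # W/m^2, nominal total solar irradiance
cycle_variation = 0.0008      # ~0.08% peak-to-trough, as quoted in the text
albedo = 0.3                  # fraction of sunlight reflected back to space

delta_tsi = solar_constant * cycle_variation       # ~1.1 W/m^2 at the top of the atmosphere
delta_forcing = delta_tsi * (1 - albedo) / 4.0     # spread over the sphere, minus reflection
print(f"direct solar-cycle forcing ~ {delta_forcing:.2f} W/m^2")  # ~0.19 W/m^2
# For comparison, the forcing usually quoted for a doubling of CO2 is roughly 3-4 W/m^2.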

Just such an indirect solar effect may have been discovered in a new study revealing a correlation between the end of sunspot cycles and the switch from El Niño to La Niña states of the tropical Pacific. The research was conducted by a team of scientists from NASA and the U.S. National Center for Atmospheric Research.

The researchers found that the termination of all five solar cycles between 1960 and 2010-11 coincided with a flip from a warmer El Niño to a cooler La Niña. And the end of the most recent solar cycle, which has just occurred, also coincides with the beginning of a new La Niña event. Because the end of the 11-year solar cycle is fuzzy, the research team relied for its “clock” on the sun’s better-defined magnetic polarity cycle, known as the Hale cycle, which is approximately 22 years in length.

The correspondence between the 11-year solar cycle and the onset of La Niña events is illustrated in the figure below, showing the six-month smoothed monthly sunspot number since 1950 in black and the Oceanic El Niño Index in color. The red and blue boxes mark El Niño and La Niña periods, respectively, in the repeating pattern. What stands out is that the end of each sunspot cycle is closely correlated with the switch from El Niño to La Niña. That the correlation is mere coincidence is statistically highly unlikely, say the study authors, although further research is needed to establish the physical connection between the sun and earth responsible for the correlation.

Solar ENSO.jpg

Another study, headed by climate scientists at the U.S. Lawrence Livermore National Laboratory, finds that multidecadal La Niña variability is why computer climate models overestimate recent warming of Pacific sea surface temperatures by two to three times. The La Niña cycle results in atmospheric cooling and a distinct pattern of cooler-than-normal sea surface temperatures in the central and eastern tropical Pacific, with warmer waters to the north and south.

Many climate models produce ENSO variations, but are unable to predict either the timing of El Niño and La Niña events or temperatures measured by satellite in the tropical lower atmosphere (troposphere). However, the study authors found that approximately 13% of 482 simulations by 55 computer models do show tropospheric warming in the tropics that matches the satellite record. And, unexpectedly, those simulations reproduce all the characteristics of La Niña.

The next figure shows how well one of these particular simulations reproduces a La Niña temperature pattern, in both geographic extent (upper panel) and ocean depth (lower panel). The panels labeled B are the computer simulation and the panels labeled C are the satellite observations. Temperatures are depicted as an average warming (positive) or cooling (negative) rate, in degrees Celsius per decade, over the period from 1979 to 2018. La Niña cooling in the Pacific is clearly visible in both B and C.

Solar 2.jpg

The other 87% of the computer simulations overestimate tropical Pacific temperatures, which is why, the authors say, the multimodel mean warming rate is two to three times higher than observed. But their results show that natural climate variability, here in the form of La Niña, is large enough to explain the difference between reality and climate model predictions.

Next: Little Evidence for Link between Natural Disasters and Global Warming

How Near-Saturation of CO2 Limits Future Global Warming

The climate change narrative is based in part on the concept that adding more and more CO2 to the atmosphere will cause the planet to become unbearably hot. But recent research refutes this notion by concluding that extra CO2 quickly becomes less effective in raising global temperatures – a saturation effect, long disputed by believers in the narrative.

First reported in 2020, the new and highly detailed research is described in a preprint by physicists William Happer and William van Wijngaarden. Happer is an emeritus professor at Princeton University and prominent in optical and radiation physics. In their paper, the two authors examine the radiative forcings – disturbances that alter the earth’s climate – of the five most abundant greenhouse gases, including CO2 and water vapor.

The researchers find that the current levels of atmospheric CO2 and water vapor are close to saturation. Saturation is a technical term meaning that a gas’s greenhouse effect has already had most of its possible impact, so that further increases in concentration cause little additional warming. For CO2, doubling its concentration from its 2015 level of 400 ppm (parts per million) to 800 ppm will increase its radiative forcing by just 1%. This increase in forcing will decrease the cooling radiation emitted to space by about 3 watts per square meter, out of a total of about 300 watts per square meter currently radiated to space.
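For comparison, the simplified logarithmic expression widely used for CO2 forcing – a standard approximation, not the preprint’s detailed line-by-line result – gives a slightly larger number, about 3.7 watts per square meter for a doubling.

import math

def co2_forcing(c_ppm, c0_ppm):
    """Widely used simplified logarithmic expression for CO2 radiative forcing, in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"forcing for 400 -> 800 ppm: {co2_forcing(800, 400):.1f} W/m^2")  # ~3.7 W/m^2
# Note the logarithm: each successive doubling adds the same forcing,
# which is why extra CO2 becomes progressively less effective.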

The science behind greenhouse gas warming is illustrated in the figure below, depicting the wavelength spectrum of the intensity of thermal radiation transmitted through the atmosphere, where wavelength is measured in micrometers. Radiation is absorbed and radiated by the earth in two different wavelength regions: absorption of solar radiation takes place at short (ultraviolet and visible) wavelengths, shown in red in the top panel, while heat is radiated away at long (infrared) wavelengths, shown in blue.

Happer 1.jpg

Greenhouse gases in the atmosphere allow most of the downward shortwave radiation to pass through, but prevent a substantial portion of the upward longwave radiation from escaping – resulting in net warming, as suggested by the relative areas of red and blue in the figure above. The absorption by various greenhouse gases of upward (emitted) radiation at different wavelengths can be seen in the lower panels of the figure, water vapor and CO2 being the most dominant gases.

The research of Happer and van Wijngaarden takes into account both absorption and emission, as well as atmospheric temperature variation with altitude. The next figure shows the authors’ calculated spectrum for cooling outgoing radiation at the top of the atmosphere, as a function of wavenumber or spatial frequency rather than wavelength, which is the inverse of spatial frequency. (The temporal frequency is the spatial frequency multiplied by the speed of light.)

Happer 2.jpg

The blue curve is the spectrum for an atmosphere without any greenhouse gases at all, while the green curve is the spectrum for all greenhouse gases except CO2. Including CO2 results in the black or red curve, for concentrations of 400 ppm or 800 ppm, respectively; the gap in the spectrum represents the absorption of radiation that would otherwise cool the earth. The small decrease in area underneath the curve, from black to red, corresponds to the forcing increase of 3 watts per square meter resulting from doubling the CO2 level.
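To relate the wavenumber axis of these spectra to the wavelength units used earlier, a quick conversion – ordinary unit arithmetic, not anything specific to the preprint – is as follows.

# Wavenumber (in cm^-1) is the reciprocal of wavelength; with wavelength in micrometers,
# wavenumber = 10,000 / wavelength.
def wavelength_um_to_wavenumber_cm(wavelength_um):
    return 1.0e4 / wavelength_um

print(wavelength_um_to_wavenumber_cm(15.0))  # CO2's main absorption band near 15 um ~ 667 cm^-1

speed_of_light = 2.998e10  # cm/s
print(speed_of_light * 667)  # corresponding temporal frequency ~ 2e13 Hz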

What matters for global warming is how much the additional forcing bumps up the temperature. This depends in part on the assumption made about climate feedback, since it’s the positive feedback from much more abundant water vapor in the atmosphere that is thought to amplify the modest temperature rise from CO2 acting alone. The strength of the water vapor feedback is closely tied to relative humidity.

Assuming positive water vapor feedback, with relative humidity held fixed as the atmosphere warms, the preprint authors find that the extra forcing from doubled CO2 causes a temperature increase of 2.2 to 2.3 degrees Celsius (4.0 to 4.1 degrees Fahrenheit). If the water vapor feedback is set to zero, the temperature increase is only 1.4 degrees Celsius (2.5 degrees Fahrenheit). These results can be compared with the prediction of 2.6 to 4.1 degrees Celsius (4.7 to 7.4 degrees Fahrenheit) in a recent study based on computer climate models and other evidence.

Although an assumption of zero water vapor feedback may seem unrealistic, Happer points out that something important is missing from their calculations, and that is feedback from clouds – an omission the authors are currently working on. Net cloud feedback, from both low and high clouds, is poorly understood currently but could be negative rather than positive.

If indeed overall cloud feedback is negative rather than positive, it’s possible that negative feedbacks in the climate system from the lapse rate (the rate of temperature decrease with altitude in the lower atmosphere) and clouds dominate the positive feedbacks from water vapor, and from snow and ice. In either case – with or without strong water vapor feedback – this research demonstrates that future global warming won’t be nearly as troublesome as the climate change narrative insists.

Next: Natural Sources of Global Cooling: (1) Solar Variability and La Niña

Growing Antarctic Sea Ice Defies Climate Models

We saw in the previous post how computer climate models greatly exaggerate short-term warming. Something else they get wrong is the behavior of Antarctic sea ice. According to the models, sea ice at both the North and South Poles should shrink as global temperatures rise. It’s certainly contracting in the Arctic, faster in fact than most models predict, but contrary to expectations, sea ice in the Antarctic is actually expanding.

Scientific observations of sea ice in the Arctic and Antarctic have only been possible since satellite measurements began in 1979. The figure below shows satellite-derived images of Antarctic sea ice extent at its summer minimum in 2020 (left image), and its previous winter maximum in 2019 (right image). Sea ice expands to its maximum extent during the winter and contracts during summer months.

Blog 3-8-21 JPG(1).jpg

But in contrast to the increase in the maximum extent of sea ice around Antarctica observed during the satellite era, the computer models all simulate a decrease. Two research groups have investigated this discrepancy in detail for the previous generation of models, CMIP5.

One of the groups is the BAS (British Antarctic Survey), which has a long history of scientific studies of Antarctica dating back to World War II and before. Their 2013 assessment of 18 CMIP5 climate models found marked differences in the modeled trend in month-to-month Antarctic sea ice extent from that observed over the previous 30 years, as illustrated in the next figure. The thick blue line at the top indicates the trend in average monthly ice extent measured over the period from 1979 to 2005, and the colored lines are the monthly trends simulated by the various models; the black line is the model mean.

Blog 3-8-21 JPG(2).jpg

It’s seen that almost all models exhibit an incorrect negative trend for every month of the year. The mean monthly trend across all models is a decline of 3.2% per decade between 1979 and 2005, with the largest mean monthly decline being 13.6% per decade in February. But the actual observed gain in Antarctic sea ice extent is +1.1% per decade from 1979 to 2005 according to the BAS, or a somewhat higher 1.8% per decade from 1979 to 2019, as estimated by the U.S. NSIDC (National Snow and Ice Data Center) and depicted below.

Blog 3-8-21 JPG(3).jpg
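For readers who want to reproduce this kind of number, here is a minimal sketch – with made-up data, not the NSIDC record – of how a trend in percent per decade is estimated from a monthly extent series by linear regression.

import numpy as np

# Hypothetical monthly sea ice extent (million km^2), for illustration only.
# Real analyses work with anomalies (seasonal cycle removed) before fitting the trend.
months = np.arange(480)                      # 40 years of monthly data
extent = 12.0 + 0.0015 * months + np.random.normal(0, 0.5, months.size)

slope, intercept = np.polyfit(months, extent, 1)    # change in extent per month
trend_per_decade = slope * 120                       # 120 months per decade
percent_per_decade = 100 * trend_per_decade / extent.mean()
print(f"trend: {percent_per_decade:.1f}% per decade")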

For actual sea ice extent, the majority of models simulate too meager an extent at the February minimum, while several models estimate less than two thirds of the real-world extent at the September maximum. Similar results were obtained in a study by a Chinese research group, as well as other studies.

The discrepancy in sea ice extent between the empirical satellite observations and the climate models is particularly pronounced on a regional basis. At the February minimum, the satellite data indicate substantial residual ice in the Weddell Sea to the east of the Antarctic Peninsula (see the first figure above), whereas most models show very little. And the few models that simulate a realistic amount of February sea ice fail to reproduce the loss of ice in the Ross Sea adjoining West Antarctica.

All these differences indicate that computer models are not properly simulating the physical processes that govern Antarctic sea ice. Various possible processes not incorporated in the models have been suggested to explain the model deficiencies. These include freshening of seawater by melting ice shelves attached to the Antarctic ice sheet; meltwater from rain; and atmospheric processes involving clouds or wind.

BAS climate modeler Paul Holland thinks the seasons may hold the key to the conundrum, having noticed that trends in sea ice growth or shrinkage vary in strength in the different seasons. Holland surmised that it was more important to look at how fast the ice was growing or shrinking from season to season than to focus on changes in ice extent. His calculations of the rate of growth led him to conclude that seasonal wind trends play a role.

The researcher found that winds are spreading sea ice out in some regions of Antarctica, while compressing or keeping it intact in others, and that these effects begin in the spring. “I always thought, and as far as I can tell everyone else thought, that the biggest changes must be in autumn,” Holland said. “But the big result for me now is we need to look at spring. The trend is bigger in the autumn, but it seems to be created in spring.”

That’s where Holland’s research stands for now. More detailed work is required to check out his novel idea.

Next: Good Gene – Bad Gene: When GMOs Succeed and When They Don’t

Latest Computer Climate Models Run Almost as Hot as Before

The narrative that global warming is largely human-caused and that we need to take drastic action to control it hinges entirely on computer climate models. It’s the models that forecast an unbearably hot future unless we rein in our emissions of CO2.

But the models have a dismal track record. Apart from failing to predict the slowdown in global warming in the early 2000s, climate models are known even by modelers to consistently run hot. The previous generation of models, known in the jargon as CMIP5 (Coupled Model Intercomparison Project Phase 5), overestimated short-term warming by more than 0.5 degrees Celsius (0.9 degrees Fahrenheit) relative to observed temperatures. That’s 50% of all the global warming since preindustrial times.

The new CMIP6 models aren’t much better. The following two figures reveal just how much both CMIP5 and CMIP6 models exaggerate predicted temperatures, and how little the model upgrade has done to shrink the difference between theory and observation. The figures were compiled by climate scientist John Christy, who is Director of the Earth System Science Center at the University of Alabama in Huntsville and an expert reviewer of the upcoming sixth IPCC (Intergovernmental Panel on Climate Change) report.

Models CMIP5.jpg
Models CMIP6.jpg

Both figures plot the warming relative to 1979 in degrees Celsius, measured in a band in the tropical upper atmosphere between altitudes of approximately 9 km (30,000 feet) and 12 km (40,000 feet). That’s a convenient band for comparison of model predictions with measurements made by weather balloons and satellites. The thin colored lines indicate the predicted variation of temperature with time for the different models, while the thick red and green lines show the mean trend for models and observations, respectively.

The trend for CMIP6 models is depicted more clearly in Christy’s next figure, which compares the warming rates for 39 of the models. The average CMIP6 trend in warming rate is 0.40 degrees Celsius (0.72 degrees Fahrenheit) per decade, compared with the actual observed rate of 0.17 degrees Celsius (0.31 degrees Fahrenheit) per decade – meaning that the predicted warming rate is 2.35 times too high.

Models CMIP6 warming rate.jpg

These CMIP6 numbers are only a marginal improvement over those of the older CMIP5 models, for which the predicted warming trend was 0.44 degrees Celsius (0.79 degrees Fahrenheit) per decade, or 2.75 times higher than the observed rate of 0.16 degrees Celsius (0.29 degrees Fahrenheit) per decade (for a slightly different set of measurements).
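The ratios quoted above are simple arithmetic on the decadal trends – nothing beyond the numbers already given.

# Predicted versus observed warming trends, in degrees Celsius per decade.
cmip6_model, cmip6_obs = 0.40, 0.17
cmip5_model, cmip5_obs = 0.44, 0.16

print(f"CMIP6 overshoot: {cmip6_model / cmip6_obs:.2f}x")  # ~2.35
print(f"CMIP5 overshoot: {cmip5_model / cmip5_obs:.2f}x")  # ~2.75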

It’s seen that the warming rates for any particular model fluctuate wildly in both cases, much more so than the observations themselves. Christy says the large variability is a sign that the models underestimate negative feedbacks in the climate system, especially from clouds, which I’ve discussed in another post. Negative feedback is stabilizing and acts to damp down processes that cause fluctuations. There is evidence, albeit controversial, that feedback from high clouds such as cirrus clouds – which normally warm the planet – may not be as strongly positive as the new models predict, and could even be negative overall.

You may be wondering why all these comparisons between models and observations are made high up in the atmosphere, rather than at the earth’s surface, which is where we actually feel global warming. The reason is that the atmosphere at 9 to 12 km (6 to 7 miles) above the tropics provides a much more sensitive test of CO2 greenhouse warming than the air near the ground. Computer climate models predict that the warming rate at those altitudes should be about twice as large as at ground level, giving rise to the so-called CO2 “hot spot.”

The hot spot is illustrated in the figure below, showing the predicted warming as a function of both altitude (measured as atmospheric pressure) and latitude, as predicted by a Canadian model. Similar predictions come from the other CMIP6 models. The hot spot is the red patch at the center of the figure bounded by the 0.6 degrees Celsius (1.1 degrees Fahrenheit) contour, extending roughly 20 degrees either side of the equator at altitudes of 30,000-40,000 feet. The corresponding warming at ground level is seen to be less than 0.3 degrees Celsius (0.5 degrees Fahrenheit).

Hot spot.jpg

But the hot spot doesn’t show up in measurements made by weather balloons or satellites. This mismatch between models and experiment is important because the 30,000-40,000 feet band in the atmosphere is the very altitude from which infrared heat is radiated away from the earth. The models run hot, according to Christy, because they trap too much heat that in reality is lost to outer space – a consequence of insufficient negative feedback in the models.

Next: Growing Antarctic Sea Ice Defies Climate Models

How Clouds Hold the Key to Global Warming

One of the biggest weaknesses in computer climate models – the very models whose predictions underlie proposed political action on human CO2 emissions – is the representation of clouds and their response to global warming. The deficiencies in computer simulations of clouds are acknowledged even by climate modelers. Yet cloud behavior is key to whether future warming is a serious problem or not.

Uncertainty about clouds is why there’s such a wide range of future global temperatures predicted by computer models, once CO2 reaches twice its 1850 level: from a relatively mild 1.5 degrees Celsius (2.7 degrees Fahrenheit) to an alarming 4.5 degrees Celsius (8.1 degrees Fahrenheit). Current warming, according to NASA, is close to 1 degree Celsius (1.8 degrees Fahrenheit).

Clouds can both cool and warm the planet. Low-level clouds such as cumulus and stratus clouds are thick enough to reflect 30-60% of the sun’s radiation that strikes them back into space, so they act like a parasol and cool the earth’s surface. High-level clouds such as cirrus clouds, on the other hand, are thinner and allow most of the sun’s radiation to penetrate, but also act as a blanket preventing the escape of reradiated heat to space and thus warm the earth. Warming can result from either a reduction in low clouds, or an increase in high clouds, or both.

Clouds Marohasy (2).jpg

Our inability to model clouds satisfactorily is partly because we just don’t know much about their inner workings – during a cloud’s formation, when it rains, or when a cloud is absorbing or radiating heat. So a lot of adjustable parameters are needed to describe them. It’s partly also because actual clouds are much smaller than the finest grid scale in even the largest supercomputer models, by as much as several hundred or even a thousand times. For that reason, clouds are represented in computer models by average values of size, altitude, number and geographic location.
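As an illustration of what such an adjustable parameter looks like, here is a toy diagnostic cloud-fraction scheme of the Sundqvist type, in which the critical relative humidity is a tunable knob. This is a generic textbook form shown only for illustration, not the scheme used in any particular model.

def cloud_fraction(rel_humidity, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction from grid-average relative humidity.
    rh_crit is an adjustable parameter: below it, no cloud forms in the grid cell."""
    if rel_humidity <= rh_crit:
        return 0.0
    if rel_humidity >= 1.0:
        return 1.0
    return 1.0 - ((1.0 - rel_humidity) / (1.0 - rh_crit)) ** 0.5

for rh in (0.7, 0.85, 0.95):
    print(rh, round(cloud_fraction(rh), 2))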

Most climate models predict that low cloud cover will decrease as the planet heats up, but this is by no means certain and meaningful observational evidence for clouds is sparse. To remedy the shortcoming, a researcher at Columbia University’s Earth Institute has embarked on a project to study how low clouds respond to climate change, especially in the tropics which receive the most sunlight and where low clouds are extensive.

The three-year project will utilize NASA satellite data to investigate the response of puffy cumulus clouds and more layered stratocumulus clouds to both surface temperature and the stability of the lower atmosphere. These are the two main influences on low cloud formation. It’s only recent satellite technology that makes it possible to clearly distinguish the two types of cloud from each other and from higher clouds. The knowledge obtained will test how well computer climate models simulate present-day low cloud behavior, as well as help narrow the range of warming expected as CO2 continues to rise.

High clouds are controversial. Climate models predict that high clouds will get higher and become more numerous as the atmosphere warms, resulting in a greater blanket effect and even more warming. This is an example of expected positive climate feedback – feedback that amplifies global warming. Positive feedback is also the mechanism by which low cloud cover is expected to diminish with warming.

But there’s empirical satellite evidence, obtained by scientists from the University of Alabama and the University of Auckland in New Zealand, that cloud feedback for both low-level and high-level clouds is negative. The satellite data also support an earlier proposal by atmospheric climatologist Richard Lindzen that high-level clouds near the equator open up, like the iris of an eye, to release extra heat when the temperature rises – also a negative feedback effect.

If indeed cloud feedback is negative rather than positive, it’s possible that combined negative feedbacks in the climate system dominate the positive feedbacks from water vapor, which is the primary greenhouse gas, and from snow and ice. That would mean that the overall response of the climate to added CO2 in the atmosphere is to lessen, rather than magnify, the temperature increase from CO2 acting alone, the reverse of what climate models say.

The latest generation of computer models, known as CMIP6, predicts an even greater – and potentially deadly – range of future warming than earlier models. This is largely because the models find that low clouds would thin out, and many would not form at all, in a hotter world. The result would be even stronger positive cloud feedback and additional warming. However, as many of the models are unable to accurately simulate actual temperatures in recent decades, their predictions about clouds are suspect.

Next: Evidence Mounting for Global Cooling Ahead: Record Snowfalls, Less Greenland Ice Loss

Why Both Coronavirus and Climate Models Get It Wrong

Most coronavirus epidemiological models have been an utter failure in providing advance information on the spread and containment of the insidious virus. Computer climate models are no better, with a dismal track record in predicting the future.

This post compares the similarities and differences of the two types of model. But similarities and differences aside, the models are still just that – models. Although I remarked in an earlier post that epidemiological models are much simpler than climate models, this doesn’t mean they’re any more accurate.     

Both epidemiological and climate models start out, as they should, with what’s known. In the case of the COVID-19 pandemic the knowns include data on the progression of past flu epidemics, and demographics such as population size, age distribution, social contact patterns and school attendance. Among the knowns for climate models are present-day weather conditions, the global distribution of land and ice, atmospheric and ocean currents, and concentrations of greenhouse gases in the atmosphere.

But the major weakness of both types of model is that numerous assumptions must be made to incorporate the many variables that are not known. Coronavirus and climate models have little in common with the models used to design computer chips, or to simulate nuclear explosions as an alternative to actual testing of atomic bombs. In both these instances, the underlying science is understood so thoroughly that speculative assumptions in the models are unnecessary.

Epidemiological and climate models cope with the unknowns by creating simplified pictures of reality involving approximations. Approximations in the models take the form of adjustable numerical parameters, often derisively termed “fudge factors” by scientists and engineers. The famous mathematician John von Neumann once said, “With four [adjustable] parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

One of the most important approximations in coronavirus models is the basic reproduction number R0 (“R naught”), which measures contagiousness. The numerical value of R0 signifies the number of other people that an infected individual can spread the disease to, in the absence of any intervention. As shown in the figure below, R0 for COVID-19 is thought to be in the range from 2 to 3, much higher than for a typical flu at about 1.3, though less than values for other infectious diseases such as measles.

COVID-19 R0.jpg

It’s COVID-19’s high R0 that causes the virus to spread so easily, but its precise value is still uncertain. What determines how quickly the virus multiplies, however, is the incubation period, during which an infected individual can’t infect others. Both R0 and the incubation period define the epidemic growth rate. They’re adjustable parameters in coronavirus models, along with factors such as the rate at which susceptible individuals become infectious in the first place, travel patterns and any intervention measures taken.
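As a concrete illustration of how these two parameters set the growth rate, here is a minimal SEIR-type sketch with purely illustrative values – not any specific published coronavirus model.

# Minimal SEIR model: Susceptible -> Exposed (incubating) -> Infectious -> Recovered.
# R0 and the incubation period are the adjustable parameters discussed above;
# the other values are illustrative assumptions, not fitted to real data.
r0 = 2.5                    # basic reproduction number
incubation_days = 5.0       # time before a newly infected person becomes infectious
infectious_days = 5.0       # time a person remains infectious
population = 1_000_000

beta = r0 / infectious_days          # transmission rate per day
sigma = 1.0 / incubation_days        # rate of leaving the exposed state
gamma = 1.0 / infectious_days        # recovery rate

s, e, i, r = population - 1.0, 0.0, 1.0, 0.0
for day in range(60):                # simple one-day Euler steps
    new_exposed = beta * s * i / population
    new_infectious = sigma * e
    new_recovered = gamma * i
    s -= new_exposed
    e += new_exposed - new_infectious
    i += new_infectious - new_recovered
    r += new_recovered
print(f"infectious after 60 days: {i:.0f}")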

In climate models, hundreds of adjustable parameters are needed to account for deficiencies in our knowledge of the earth’s climate. Some of the biggest inadequacies are in the representation of clouds and their response to global warming. This is partly because we just don’t know much about the inner workings of clouds, and partly because actual clouds are much smaller than the finest grid scale that even the largest computers can accommodate – so clouds are simulated in the models by average values of size, altitude, number and geographic location. Approximations like these are a major weakness of climate models, especially in the important area of feedbacks from water vapor and clouds.

An even greater weakness in climate models is unknowns that aren’t approximated at all and are simply omitted from simulations because modelers don’t know how to model them. These unknowns include natural variability such as ocean oscillations and indirect solar effects. While climate models do endeavor to simulate various ocean cycles, the models are unable to predict the timing and climatic influence of cycles such as El Niño and La Niña, both of which cause drastic shifts in global climate, or the Pacific Decadal Oscillation. And the models make no attempt whatsoever to include indirect effects of the sun like those involving solar UV radiation or cosmic rays from deep space.

As a result of all these shortcomings, the predictions of coronavirus and climate models are wrong again and again. Climate models are known even by modelers to run hot, by 0.35 degrees Celsius (0.6 degrees Fahrenheit) or more above observed temperatures. Coronavirus models, when fed data from this week, can probably make a reasonably accurate forecast about the course of the pandemic next week – but not a month, two months or a year from now. Dr. Anthony Fauci of the U.S. White House Coronavirus Task Force recently admitted as much.

Computer models have a role to play in science, but we need to remember that most of them depend on a certain amount of guesswork. It’s a mistake, therefore, to base scientific policy decisions on models alone. There’s no substitute for actual, empirical evidence.

Next: How Science Is Being Misused in the Coronavirus Pandemic