Climate Model Track Record Improves Slightly: Paused Arctic Sea Ice Loss Predicted Correctly

As readers will know, I’ve been highly critical of computer climate models in these pages (see, for example, here and here). Some models greatly exaggerate future warming, or overestimate low cloud feedback, or are unable to reproduce observed sea surface temperature trends.

But a new study exonerates the models when it comes to predictions of sea ice loss in the Arctic, in particular the pause in loss of Arctic sea ice during the past two decades. Despite rising global temperatures, and contrary to alarmist projections of an ice-free summer by 2020, no statistically significant decline in Arctic summer minimum sea ice extent occurred between 2005 and 2024. Sea ice expands to its maximum extent during the winter and shrinks during summer months.

In correctly predicting this 20-year pause, the UK–U.S. team behind the study partially redeems the same CMIP5 and CMIP6 models that had previously been shown incapable of accurately hindcasting the observed Arctic sea ice decline during the warming period of the early 20th century.

The present study also finds that periods with no sea ice loss while greenhouse gas emissions continue to increase are not at all unusual in large-ensemble CMIP5 and CMIP6 simulations. The modeling results suggest that internal climate variability – due to natural ocean cycles such as the AMO (Atlantic Multidecadal Oscillation), the PDO (Pacific Decadal Oscillation) or the NAO (North Atlantic Oscillation) – has offset any possible loss of Arctic sea ice due to human emissions.

The figure below illustrates the current robust and prolonged pause in Arctic sea ice loss, based on U.S. NSIDC (National Snow and Ice Data Center) data from satellite measurements, one of two key sea ice indices. The top panel shows the minimum summer ice extent in September over the satellite era from 1979 to 2024.

The center panel depicts 20-year trends during that time, the mean trend being a loss of 0.30 million square km per decade, an amount that is not statistically significant despite the strong minimum in 2012; the red shaded envelope represents the 95% confidence level for statistical significance. This insignificant trend is roughly a quarter of the peak 20-year sea ice trend recorded from 1993 to 2012. The bottom panel demonstrates how the current pause in Arctic sea ice loss shows up in every single month of the year, not just September.
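To make the significance test concrete, here is a minimal sketch – not the study's code – of how a 20-year trend and its 95% confidence interval can be estimated from annual September extents; the extent values below are synthetic placeholders, not NSIDC data.

```python
# Sketch: least-squares 20-year trend in September sea ice extent,
# with a 95% confidence interval. Extents are synthetic, not NSIDC data.
import numpy as np
from scipy import stats

years = np.arange(2005, 2025)                          # a 20-year window
rng = np.random.default_rng(0)
extent = 4.6 + 0.5 * rng.standard_normal(years.size)   # million km^2 (synthetic)

res = stats.linregress(years, extent)
trend_per_decade = res.slope * 10                      # million km^2 per decade

# Two-sided 95% confidence interval on the trend (t-distribution, n-2 dof)
t95 = stats.t.ppf(0.975, df=years.size - 2)
ci = t95 * res.stderr * 10

print(f"trend = {trend_per_decade:+.2f} +/- {ci:.2f} million km^2/decade")
# The trend is statistically insignificant if the interval straddles zero.
```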

The same effect of a drastic slowdown in Arctic sea ice decline is also evident in sea ice volume. The volume depends on both ice extent and thickness, which varies with location as well as season. Arctic ice thickness is notoriously difficult to measure, the best data coming from limited submarine observations.

According to the study authors, the loss of Arctic sea ice volume has stalled for at least the past 15 years, as shown by the shaded band on the left of the next figure. For the period 2010 to 2024, the simulated annual mean Arctic sea ice volume decreased by only 0.4 thousand cubic km per decade, a value seven times smaller than the long-term simulated loss of 2.9 thousand cubic km per decade from 1979 to 2024, and again not statistically significant.

Geographically, the sea ice volume loss is mostly evident in the Barents Sea, while the slowdown in September sea ice extent occurs mainly in the Pacific and Eurasian sectors. A satellite-derived image of total minimum Arctic sea ice extent is shown on the right of the above figure; the blue contour represents the median extent for 1991-2020.

As mentioned above, the new study has discovered that CMIP5 and CMIP6 climate models frequently simulate pauses in Arctic sea ice loss like the present one, rather than the past 20 years being an unexpected, rare event. Analysis of ensemble members that simulate the observed pause indicates that the current slowdown could possibly persist for a further five to ten years, say the study authors.

The following figure shows the percentage of ensemble members, in various ensembles, whose 2005-2024 September sea ice loss trends are less than the observed value (note that the figure's vertical axis label incorrectly says "greater than"). As can be seen, the multimodel average indicates that the chance of the ongoing pause in Arctic sea ice loss occurring at all is approximately 20%.
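As a rough illustration of how such a percentage is obtained, the sketch below simply counts ensemble members whose simulated 2005-2024 trend shows a loss no stronger than observed; the member trends are random placeholders, not actual CMIP output.

```python
# Sketch: fraction of ensemble members simulating a pause like the observed
# one. Member trends are random placeholders, not real CMIP data.
import numpy as np

rng = np.random.default_rng(1)
member_trends = rng.normal(-0.8, 0.6, size=200)  # 2005-2024 trends, million km^2/decade
observed_trend = -0.30                           # observed trend (a weak loss)

# Members losing ice less rapidly than observed, i.e. trend >= observed value
pause_fraction = np.mean(member_trends >= observed_trend)
print(f"{100 * pause_fraction:.0f}% of members show a pause at least as pronounced as observed")
```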

Although 20% is not 100%, the fact that climate models can make such a prediction is remarkable. Furthermore, the authors infer from the multimodel average that the present pause in September Arctic sea ice loss has a 1 in 2 chance of persisting for a further five years, and a 1 in 3 chance of persisting for another ten years.

In conclusion, the authors comment that their study is “a reminder that we should be humble about multidecadal predictions of the climate system, especially in highly variable regions such as the Arctic.”

Next: Ice Sheet Update (1): Reversal of Fortune for Antarctic Ice Sheet

AI Tries Its Hand at Climate Science

Much-ballyhooed AI (artificial intelligence), which is rapidly making inroads in many areas, has found its way into scientific publishing with the recent appearance of a peer-reviewed study featuring AI Grok 3 beta as the lead author. The 2025 paper, titled “A Critical Reassessment of the Anthropogenic CO2-Global Warming Hypothesis,” disputes the narrative that global warming is largely driven by human emissions of CO2.

Grok 3 beta, which wrote its own press release, spearheaded the research but needed critical guidance from four human coauthors. While the AI on its own was able to identify some of the relevant papers in the scientific literature, it missed a number of others and had to rely on its human colleagues to fill the gaps – and even they overlooked the work of one important researcher.

The study essentially reviews the conclusions of the IPCC (Intergovernmental Panel on Climate Change) regarding the global carbon cycle, computer climate models, and solar variability. The paper’s subtitle is “Empirical Evidence Contradicts IPCC Models and Solar Forcing Assumptions.”

On the carbon cycle, shown in the figure below, the study challenges the claim made in the IPCC’s AR6 (Sixth Assessment Report) that the effective residence time of CO2 in the atmosphere is more than 100 years. The claim has been questioned by several authors, including Greek civil engineer Demetris Koutsoyiannis who, in a 2024 paper, estimated a residence time as short as 3.5 to 4 years.

The Koutsoyiannis paper simply divides the atmospheric CO2 level, which was 416.4 ppm (parts per million) or 887 GtC (gigatonnes of carbon) in 2020, by the total flow of CO2 into the atmosphere of approximately 230 GtC per year. However, this analysis only accounts for the so-called fast carbon cycle – the rapid movement of CO2 between living organisms, the atmosphere and the oceans – but ignores the slow carbon cycle, in which CO2 is incorporated into long-lived vegetation such as tree bark, or the deep ocean, or limestone and other rock.
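The arithmetic behind that estimate is a single division, reproduced here as a quick check using the numbers quoted above.

```python
# Quick check of the Koutsoyiannis-style residence time: atmospheric
# carbon stock divided by the total (fast-cycle) inflow.
stock_gtc = 887.0             # atmospheric CO2 in 2020, GtC (416.4 ppm)
inflow_gtc_per_year = 230.0   # total CO2 flow into the atmosphere, GtC/yr

residence_time = stock_gtc / inflow_gtc_per_year
print(f"residence time ~ {residence_time:.1f} years")  # ~3.9 years
```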

Notable among other authors who share the beliefs of Koutsoyiannis are atmospheric scientists Hermann Harde and the late Murray Salby. Their work (see, for example, here) is acknowledged by Grok 3 beta, but only because the AI was reprimanded by its coauthors for omitting any mention of Harde or Salby papers from its first draft.

Another major omission the first time around was the extensive work of U.S. astrophysicist Willie Soon, one of the coauthors, on solar contributions to global warming. I’ll touch on Soon’s research below.

But more egregious than any of these omissions is Grok 3 beta's failure to include any papers by physicist David Andrews, who has been highly critical of Harde and Salby.

Andrews (see here, here and here) distinguishes between the fast and slow carbon cycles, and points out that large, natural, two-way exchanges of carbon occur between the atmosphere, the oceans and land biomass. The fluxes each way exceed the one-way flux into the atmosphere of anthropogenic CO2 from fossil fuels. Andrews also discusses errors in the interpretation of radiocarbon data in the Harde and Salby papers.

As for Koutsoyiannis, I discussed in a 2023 post how an earlier study of his, which attributes the rise in atmospheric CO2 to outgassing from the oceans, cannot account for 88% of the increase in atmospheric CO2 during global warming of approximately 1 degree Celsius (1.8 degrees Fahrenheit) since 1880.

Turning to climate models, Grok 3 beta gets it essentially right. As I’ve explained in numerous previous posts, many climate models run too hot, greatly exaggerating future global warming; only a small number come close to actual measurements. And a whole ensemble of models is unable to reproduce observed sea surface temperature trends in the Pacific and Southern Oceans since 1979.

Grok 3 beta correctly states that “model runs consistently fail to replicate observed temperature trajectories and sea ice extent trends, exhibiting correlations (R²) near zero when compared to unadjusted records.” On the subject of adjustments to raw temperature data, the paper also duly notes that such adjustments are contentious, especially the use of homogenization techniques which can introduce systematic errors into temperature data.

Finally, the AI rightfully draws attention to the IPCC’s questionable reliance on a single reconstruction of past solar output or TSI (total solar irradiance), to support its narrative of overwhelmingly human-caused global warming with essentially no contribution from the sun. As I described in another 2023 post, Soon and a team of coauthors have shown that a number of alternative TSI reconstructions, in which the sun plays a much larger role, can explain observed warming just as well as the IPCC’s chosen reconstruction.

So how well has Grok 3 beta mastered climate science? In this case, its paper is a mixed bag worthy of barely a passing grade.

Next: Climate Model Track Record Improves Slightly: Paused Arctic Sea Ice Loss Predicted Correctly

Math Teacher, Sole Climate Scientist Unlock Mystery of Recent Global Warming Spike

For the last two years it has baffled climate scientists, who have been beside themselves trying to explain the apparently unexpected spike in recent global warming. Typical headlines in the media have included (see here, here, here and here):

What’s Causing the Recent Spike in Global Temperatures?

Charting the Exceptional, Unexpected Heat of 2023 and 2024

2023, 2024 climate change records defy scientific explanation

Scientists Stumped By 2024’s Heat Spike  

At a meeting of the American Geophysical Union in Washington, DC on December 10 last year, very few hands were raised when NASA climate scientist Gavin Schmidt asked how many attendees agreed that we understand why 2023 and 2024 were so hot. When asked a slightly different question, a majority of the audience concurred that an adequate explanation didn't yet exist. Schmidt himself has used the phrase “uncharted territory” to describe the spike.

The spike can be viewed graphically in several ways. One way is by examining NOAA (the U.S. National Oceanic and Atmospheric Administration) satellite temperature data compiled by PhD meteorologist Roy Spencer and Alabama state climatologist John Christy:

The 2023-24 spike on the far right represents an extra strong El Niño, comparable to the one observed in 1997-98. The graph shows that the event raised the globally averaged temperature of the lower troposphere to a record 0.94 degrees Celsius (1.7 degrees Fahrenheit) above the 1991-2020 mean.

Another, perhaps more dramatic, way of viewing the spike is by plotting monthly temperatures for all years since 1880, compared to the preindustrial average, as depicted in the next figure. The warming surge during 2023 and 2024, which began with elevated global sea surface temperatures before the El Niño kicked in, stands out clearly.

But an Australian high school math (maths to Aussies) teacher who often comments on my blog can’t understand why everyone is so perplexed, saying that a very simple explanation exists – namely that the temperature spike was merely the result of an exceptionally strong El Niño superimposed on a steadily rising background due to global warming. The same effect will occur for any form of periodic variability.

The teacher, who goes by the screen name Braintic, points out that the phenomenon can be visualized mathematically by comparing the graph of the periodic function y = sin x with that of the function y = x + sin x, as shown in the figure below. Here x represents an assumed linear increase in global temperature with time, which may or may not be due to greenhouse gas emissions, and sin x represents periodic natural variability.

As Braintic emphasizes, the graph of y = x + sin x is a series of step increases superimposed on a rising trend. Exactly the same behavior can be seen in the first, satellite figure above.
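For readers who want to see the effect for themselves, the short sketch below – assuming nothing beyond numpy and matplotlib – plots the two functions Braintic compares.

```python
# Sketch of the staircase effect: y = x + sin x rises in steps,
# while y = sin x alone just oscillates.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 8 * np.pi, 1000)  # x stands in for time / background warming
plt.plot(x, np.sin(x), label="y = sin x (natural variability)")
plt.plot(x, x + np.sin(x), label="y = x + sin x (trend + variability)")
plt.xlabel("time")
plt.ylabel("temperature (arbitrary units)")
plt.legend()
plt.show()
```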

In fact, such a staircase effect for global temperatures has been commented on before, by eminent New Zealand climate scientist Kevin Trenberth, who made the following statement in an article published in July 2023, when the present spike was barely underway:

The combination of decadal variability and the warming trend from rising greenhouse gas emissions makes the temperature record look more like a rising staircase, rather than a steady climb.

Apparently, Trenberth’s prophetic comment – illustrated in the figure below – has gone unnoticed by his climate science colleagues, who have continued to make a mountain out of the proverbial molehill about the recent temperature surge. Trenberth remarks that the resulting temperature steps usually occur at the end of an El Niño event.

The staircase effect has been noticed by other scientists too, although none as perceptive as Braintic or Trenberth. In a 2024 blog post, I discussed a provocative hypothesis that links an upsurge in underwater seismic activity to recent warming.

The hypothesis was proposed by retired professor Arthur Viterito, whose starting point was the distinct step increases observed in satellite measurements of global warming, displayed in the first figure above. Viterito links these apparent jumps to geothermal heat emitted by volcanoes and hydrothermal vents in the middle of the world’s ocean basins. However, the explanation put forward here seems much more plausible.

Another revealing observation made by Braintic is that the magnitudes of the 1997-98 and 2023-24 El Niños are virtually the same, also contrary to the prevailing wisdom among climate scientists.

The Spencer-Christy satellite data shows that, for the 1997-98 El Niño, the background temperature anomaly (departure from the 1991-2020 mean) was -0.20 degrees Celsius averaged over the previous 10 years, and +0.35 degrees Celsius during the peak year of 1998. That’s a jump of 0.55 degrees Celsius.

For the 2023-24 El Niño, the background temperature anomaly was +0.23 degrees Celsius averaged over the previous 10 years, and +0.77 degrees Celsius during the peak year of 2024. That’s an almost identical jump of 0.54 degrees Celsius.
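Braintic's comparison boils down to two subtractions, reproduced below with the anomaly values quoted above.

```python
# The two El Nino 'jumps': peak-year anomaly minus the prior 10-year
# background, using the satellite anomaly values quoted in the text.
background_1998, peak_1998 = -0.20, 0.35  # deg C, relative to 1991-2020 mean
background_2024, peak_2024 = 0.23, 0.77

print(f"1997-98 jump: {peak_1998 - background_1998:+.2f} C")  # +0.55 C
print(f"2023-24 jump: {peak_2024 - background_2024:+.2f} C")  # +0.54 C
```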

So two down-to-earth types from down under have solved a puzzle that has mystified hundreds in the climate science community!

Next: AI Tries Its Hand at Climate Science

Update: No Grand Solar Minimum Likely Anytime Soon

When empirical evidence doesn’t confirm the predictions of a hypothesis, the hypothesis is unequivocally wrong, as emphasized in the header to this blog. But I’m guilty of ignoring this fundamental principle of the scientific method, in the aftermath of a blog post I wrote nearly five years ago titled “Upcoming Grand Solar Minimum Could Wipe Out Global Warming for Decades.” Within two years of the post, it was clear that the evidence didn’t support this assertion, but I failed to recognize that until now.

In the post, I overemphasized the prediction of Northumbria University’s Valentina Zharkova that the sun was about to enter a period of diminished output known as a grand solar minimum – a prolonged cold stretch she estimated would last almost 35 years, in which global temperatures would drop by as much as 1.0 degrees Celsius (1.8 degrees Fahrenheit).

Her prediction was based on past observations of small dark blotches on the sun’s surface called sunspots, which are caused by magnetic turbulence in the sun’s interior and signal subtle changes in solar output or activity. Together with the sun’s heat and light, the monthly or yearly number of sunspots goes up and down during the approximately 11-year solar cycle, as can be seen in the figure below.

Zharkova observed that the maximum number of sunspots observed in a cycle had declined over several decades, from cycle 21 which peaked around 1980 to cycle 23 which peaked around 2000 (the 3rd and 5th peaks from the left in the figure above, respectively). A further drop in cycle 24 which peaked around 2015, and was the weakest cycle in 100 years, only served to reinforce her prediction of weaker yet future cycles: an upcoming grand solar minimum.

Zharkova’s prediction depended on a somewhat obscure statistical analysis, in which she linked grand solar minima that recur every 350 to 400 years to a drastic falloff in the sun’s internal magnetic field. The fluctuations in the solar magnetic field arise from regular variations in the behavior of the very hot plasma powering our sun.

The last grand solar minimum occurred several centuries ago, the so-called Maunder Minimum from approximately 1645 to 1710, forming part of the Little Ice Age. Solar scientists have calculated that the sun’s heat and light output, a quantity known as the total solar irradiance, decreased by 0.22% during that minimum, which is about four times its normal rise or fall over an 11-year cycle.

The next figure depicts Zharkova’s calculated magnitude of the magnetic field from 1975 to 2040 that diminishes as her projected minimum approaches. On the basis of this calculation, she predicted that the peak sunspot number in cycle 25 would be 93, or 20% lower than the peak 116 in cycle 24 (the 1st and 2nd peaks from the right in the figure above, respectively), and 60% lower in cycle 26.

However, as pointed out by a persistent commenter on my original post, subsequent observations tell a very different story. Cycle 25 utterly refutes Zharkova’s prediction of a peak 20% lower than cycle 24’s, as is evident in the first figure above. Cycle 25 was already 16% stronger than cycle 24 two years ago, and its sunspot number currently stands at a whopping 158 – 36% stronger than cycle 24’s peak of 116, and 70% above Zharkova’s predicted 93. So her prediction was badly off the mark and there’s no evidence of an impending grand solar minimum.

Other solar researchers have also predicted an imminent grand solar minimum, but for different reasons. One of the earliest predictions was by German astronomer and scholar, Theodor Landscheidt, in 2003. Landscheidt predicted a protracted cold period centered on the year 2030, based on his observations of an 87-year solar cycle known as a Gleissberg cycle.

A more recent prediction, based on a longer 210-year solar cycle, is that of Russian astrophysicist Habibullo Abdussamatov. He has projected a more extended period of global cooling than either Zharkova or Landscheidt, lasting as long as 65 years, with the coldest interval around 2043.

Clearly, these and other researchers who made the same claim were as mistaken as Zharkova. The myth of an approaching grand solar minimum was in fact debunked two years ago in a little-known post by prominent climate change skeptic Javier Vinós, who detailed the history of the myth and accused Zharkova and other researchers of misrepresentation to further their careers.

In conclusion, it’s worth noting that even the Solar Cycle 25 Prediction Panel, co-chaired by NOAA (the U.S. National Oceanic and Atmospheric Administration) and NASA, predicted in 2020 that the sunspot number in cycle 25 would be no higher than in cycle 24. Nonetheless, panel co-chair and solar physicist Lisa Upton said at the time: “There is no indication we are approaching a Maunder-type minimum in solar activity.”

Hat tip: Braintic

Next: Math Teacher, Sole Climate Scientist Unlock Mystery of Recent Global Warming Spike

Contribution of Low Clouds to Global Warming Still Controversial

A slew of research papers over the last year have attributed the so far unexplained surge in recent global warming to a decline in low cloud cover. But such a connection is still controversial. Several of the papers question their own conclusions or employ dubious methodology.

Prominent among these publications is a 2025 study by a trio of German environmental scientists. From satellite data, the authors claim to have pinpointed record-low planetary albedo, caused by a reduction in low clouds, as the primary source of the recent global temperature surge. The lowered albedo is most prominent in northern mid-latitudes and the tropics.

Albedo is a measure of the earth’s ability to reflect incoming solar radiation. Melting of light-colored snow and sea ice due to warming exposes darker surfaces such as soil, rock and seawater, which have lower albedo. The less reflective surfaces absorb more of the sun’s radiation and thus push temperatures higher.

The same effect occurs with low-level clouds, which are the majority of the planet’s cloud cover. Low-level clouds such as cumulus and stratus clouds are thick enough to reflect 30-60% of the sun’s radiation that strikes them back into space, so they act like a parasol and normally cool the earth’s surface. But less cloud cover lowers albedo and therefore results in warming.

The figure below shows global total and low cloud cover since 2000, calculated by the study authors from two separate sets of satellite data, indicated by the red and black curves, respectively. The decline in low cloud cover is as much as 1.5% – small, but large enough to raise global temperatures by 0.22 degrees Celsius (0.40 degrees Fahrenheit), say the scientists.

Furthermore, the disappearance of low clouds due to global warming sets up a positive feedback loop since fewer clouds cause more warming, which in turn lowers cloud cover even more. Nevertheless, the study authors cast doubt on their conclusions by pointing out that it is difficult to disentangle low cloud feedback from internal variability of the climate system and indirect aerosol effects. Long-term natural variability is associated with ocean cycles such as the AMO (Atlantic Multidecadal Oscillation).

As I discussed in a 2024 post, a major reduction in emissions of SO2 (sulfur dioxide) since 2020, arising from a ban on the use of high-sulfur fuels by ships, can boost global warming. SO2 reacts with water vapor in the air to produce sulfate aerosol particles that linger in the atmosphere and reflect incoming sunlight; they also act as condensation nuclei for the formation of reflective clouds. So the reduction in shipping emissions also contributes to the decline in low cloud cover.

More data linking the decline in low cloud cover to global warming is depicted in the next figure, showing cloud cover calculated from a different set of satellite data, and temperatures in degrees Celsius relative to the mean tropospheric temperature from 1991 to 2020.

A second 2025 paper, by two Chinese environmental engineers and a U.S. scientist, maintains that tropical low cloud feedback is positive rather than negative and 71% stronger than previously thought. Their conclusion depends on use of an obscure mathematical technique known as Pareto optimization, to reconcile large positive cloud feedback for the Atlantic Ocean in some global climate models with negative feedback for the Pacific in other models.

However, these authors concede that if warming is more uniform between cold stratocumulus regions and warm cloud ascent areas than they have assumed, then tropical low cloud feedback is likely to be much lower or even negative.

In a 2023 post, I reported on empirical observations, made by a team of French and German scientists, that refute the notion of positive low cloud feedback and show that the mechanism causing the strongest cloud reductions due to warming in climate models doesn’t actually occur in nature. Their conclusions were based on collection and analysis of observational data from cumulus clouds near the Atlantic island of Barbados.

In climate models, the refuted mechanism leads to strong positive cloud feedback that amplifies global warming. The models find that low clouds would thin out, and many would not form at all, in a hotter world. But the research team’s analysis reveals that climate models with large positive feedbacks are implausible.

Weaker than expected low cloud feedback is also suggested by lack of the so-called CO2 “hot spot” in the atmosphere, as I discussed in a 2021 post. Climate models predict that the warming rate at altitudes of 9 to 12 km (6 to 7 miles) above the tropics should be about twice as large as at ground level. Yet the hot spot doesn’t show up in measurements made by weather balloons or satellites.

Next: Update: No Grand Solar Minimum Likely Anytime Soon

How Much Will Reduction in Shipping Emissions Stoke Global Warming?

A controversial new research paper claims that a major reduction in emissions of SO2 (sulfur dioxide) since 2020, due to a ban on the use of high-sulfur fuels by ships, could result in additional global warming of 0.16 degrees Celsius (0.29 degrees Fahrenheit) for seven years – over and above that from other sources. The paper was published by a team of NASA scientists.

This example of the law of unintended consequences, if correct, would boost warming from human CO2, as well as that caused by water vapor in the stratosphere resulting from the massive underwater eruption of the Hunga Tonga–Hunga Haʻapai volcano in 2022. The eruption, as I described in a previous post, is likely to raise global temperatures by 0.035 degrees Celsius (0.063 degrees Fahrenheit) during the next few years.

It’s been known for some time that SO2, including that emanating from ship engines, reacts with water vapor in the air to produce aerosols. Sulfate aerosol particles linger in the atmosphere, reflecting incoming sunlight and also acting as condensation nuclei for the formation of reflective clouds. Both effects cause global cooling.

In fact, it was the incorporation of sulfate aerosols into climate models that enabled the models to successfully reproduce the cooling observed between 1945 and about 1975, a feature that had previously eluded modelers.   

On January 1, 2020, new IMO (International Maritime Organization) regulations lowered the maximum allowable sulfur content in international shipping fuels to 0.5%, a significant reduction from the previous 3.5%. This air pollution control measure has reduced cloud formation and the associated reflection of shortwave solar radiation, both reductions having inadvertently increased global warming.

As would be expected, the strongest effects show up in the world’s most traveled shipping lanes: the North Atlantic, the Caribbean and the South China Sea. The figure on the left below depicts the researchers’ calculated contribution from reduced cloud fraction to additional radiative forcing resulting from the SO2 reduction. The figure on the right shows by how much the concentration of condensation nuclei in low maritime clouds has fallen since the regulations took effect.

The cloud fraction contribution is 0.11 watts per square meter. Together with the other contributions – a reduction in cloud water content and the drop in reflection of solar radiation – that brings the total extra radiative forcing from the new shipping regulations to 0.2 watts per square meter averaged over the global ocean, the NASA scientists say.

The effect is concentrated in the Northern Hemisphere since there is relatively little shipping traffic in the Southern Hemisphere. The researchers calculate the boost to radiative forcing to be 0.32 watts per square meter in the Northern Hemisphere, but only 0.1 watts per square meter in the Southern Hemisphere. The hemispheric difference in their calculations of absorbed shortwave solar radiation (near the earth’s surface) can be seen in the following figure, to the right of the dotted line.

According to the paper, the additional radiative forcing of 0.2 watts per square meter since 2020 corresponds to added global warming of 0.16 degrees Celsius (0.29 degrees Fahrenheit) over seven years. Such an increase implies a warming rate of about 0.23 degrees Celsius (0.41 degrees Fahrenheit) per decade from reduced SO2 emissions alone, which is more than double the average warming rate since 1880 and 20% higher than the mean warming rate since 1980 of approximately 0.19 degrees Celsius (0.34 degrees Fahrenheit) per decade.
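For those who want to verify the rate arithmetic, the sketch below converts the paper's 0.16 degrees Celsius over seven years into a per-decade rate; the implied response per unit forcing in the last line is my own back-of-envelope addition, not a number from the paper.

```python
# Back-of-envelope check on the shipping-emissions warming claims.
added_warming_c = 0.16   # deg C over 7 years, per the paper
years = 7.0
forcing_w_m2 = 0.2       # extra radiative forcing, W/m^2

rate_per_decade = added_warming_c / years * 10
print(f"implied rate: {rate_per_decade:.2f} C/decade")           # ~0.23
print(f"ratio to post-1980 mean: {rate_per_decade / 0.19:.2f}")  # ~1.2

# Implied climate response per unit forcing (my addition, not the paper's):
print(f"implied response: {added_warming_c / forcing_w_m2:.1f} C per W/m^2")
```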

The researchers remark that the forcing increase of 0.2 watts per square meter is a staggering 80% of the measured gain in forcing from other sources since 2020, the net planetary heat uptake since then being 0.25 watts per square meter.

However, these controversial claims have been heavily criticized, and not just by climate change skeptics. Climate scientist and modeler Zeke Hausfather points out that total warming will be less than the estimated 0.16 degrees Celsius (0.29 degrees Fahrenheit), because the new shipping regulations have only a minimal effect on land, which covers 29% of the earth’s surface.

And, states Hausfather, the researchers’ energy balance model “does not reflect real-world heat uptake by the ocean, and no actual climate model has equilibration times anywhere near that fast.” Hausfather’s own 2023 estimate of additional warming due to the use of low-sulfur shipping fuels was a modest 0.045 degrees Celsius (0.081 degrees Fahrenheit) after 30 years, as shown in the figure below.

Further criticism of the paper’s methodology comes from Laura Wilcox, associate professor at the National Centre for Atmospheric Science at the University of Reading. Wilcox told media that the paper makes some “very bold statements about temperature changes … which seem difficult to justify on the basis of the evidence.” She also has concerns about the mathematics of the researchers' calculations, including the possibility that the effect of sulfur emissions is double-counted.

Next: Philippines Court Ruling Deals Deathblow to Success of GMO Golden Rice

The Scientific Reality of the Quest for Net Zero

Often lost in the lemming-like drive toward Net Zero is the actual effect that reaching the goal of zero net CO2 emissions by 2050 will have. A new paper published by the CO2 Coalition demonstrates how surprisingly little warming would actually be averted by adoption of Net-Zero policies. The fundamental reason is that CO2 warming is already close to saturation, with each additional tonne of atmospheric CO2 producing less warming than the previous tonne.

The paper, by atmospheric climatologist Richard Lindzen together with atmospheric physicists William Happer and William van Wijngaarden, shows that for worldwide Net-Zero CO2 emissions by 2050, the averted warming would be 0.28 degrees Celsius (0.50 degrees Fahrenheit). If the U.S. were to achieve Net Zero on its own by 2050, the averted warming would be a tiny 0.034 degrees Celsius (0.061 degrees Fahrenheit).

These estimates assume that water vapor feedback, which is thought to amplify the modest temperature rise from CO2 acting alone, boosts warming without feedback by a factor of four – the assertion made by the majority of the climate science community. With no feedback, the averted warming would be 0.070 degrees Celsius (0.13 degrees Fahrenheit) for worldwide Net-Zero CO2 emissions, and a mere 0.0084 degrees Celsius (0.015 degrees Fahrenheit) for the U.S. alone.

The paper’s calculations are straightforward. As the authors point out, the radiative forcing of CO2 is proportional to the logarithm of its concentration in the atmosphere. So the temperature increase from now to 2050, caused by a concentration increment ΔC, would be

ΔT = S log2 (C/C0),

in which S is the temperature increase for a doubling of the atmospheric CO2 concentration; C0 is the present concentration; C = C0 + ΔC is what the CO2 concentration in 2050 would be if no action is taken to reduce CO2 emissions by then; and log2 is the binary (base 2) logarithm.

The saturation effect for CO2 comes from this logarithmic dependence of ΔT on the concentration ratio C/ C0, so that each CO2 concentration increment results in less warming than the previous equal increment. In the words of the paper’s authors, “Greenhouse warming from CO2 is subject to the law of diminishing returns.”

If emissions were to decrease by 2050, the CO2 concentration would be less than C in the equation above, or C − δC, where δC represents the concentration decrement. The slightly smaller temperature increase ΔT′ would then be

ΔT′ = S log2 ((C − δC)/C0),

and the averted temperature increase δT from Net-Zero policies is δT = ΔT − ΔT′, which is

δT = S {log2 (C/C0) − log2 ((C − δC)/C0)} = S log2 (C/(C − δC)) = −S log2 (1 − δC/C).

This can be rewritten as

δT = −S ln (1 − δC/C)/ln (2), in which ln is the natural (base e) logarithm.

Now using the power series expansion −ln (1 − x) = x + x²/2 + x³/3 + x⁴/4 + … and recognizing that δC is much smaller than C, so that all terms in the expansion of −ln (1 − δC/C) beyond the first can be ignored,

δT ≈ S (δC/C)/ln (2).

Finally, writing the concentration increment without emissions reduction ΔC as RΔt, where R is the constant emission rate over the time interval Δt, we have

C = C0 + ΔC = C0 + RΔt, and, for emissions ramping down linearly to zero over Δt, the concentration decrement δC is

δC = ∫₀^Δt R (1 − t/Δt) dt = RΔt/2, which gives

δT = S RΔt/(2 ln (2) (C0 + RΔt)).

It’s this latter equation which yields the numbers for averted warming quoted above. In the case of the U.S. going it alone, δT needs to be multiplied by 0.12, which is the U.S. fraction of total world CO2 emissions in 2024.
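A short script makes the final formula concrete. The sensitivity S, present concentration C0, emission rate R and interval Δt below are round-number assumptions of mine, chosen to approximately reproduce the paper's quoted results; they are not taken from the paper itself.

```python
# Sketch: averted warming via dT = S*R*dt / (2 ln2 (C0 + R*dt)).
# All input values are round-number assumptions, not the paper's inputs.
import math

S_feedback = 3.0          # deg C per CO2 doubling, with feedback (assumed)
S_bare = S_feedback / 4   # no-feedback sensitivity, per the 4x amplification
C0 = 420.0                # present CO2 concentration, ppm (assumed)
R = 2.5                   # emission rate, ppm/year (assumed)
dt = 25.0                 # years from now to 2050

def averted_warming(S):
    return S * R * dt / (2 * math.log(2) * (C0 + R * dt))

print(f"world, with feedback: {averted_warming(S_feedback):.2f} C")         # ~0.28
print(f"world, no feedback:   {averted_warming(S_bare):.3f} C")             # ~0.070
print(f"US alone (12% share): {0.12 * averted_warming(S_feedback):.3f} C")  # ~0.034
```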

Such small amounts of averted warming show the folly of the quest for Net Zero. While avoiding 0.28 degrees Celsius (0.50 degrees Fahrenheit) of warming globally is arguably a desirable goal, it’s extremely unlikely that the whole world will comply with Net Zero. China, India and Indonesia are currently indulging in a spate of building new coal-fired power plants which belch CO2, and only a limited number of those will be retired by 2050.

Developing countries, especially in Africa, are in no mood to hold back on any form of fossil fuel burning either. Many of these countries, quite reasonably, want to reach the same standard of living as the West – a lifestyle that has been attained through the availability of cheap, fossil fuel energy. Coal-fired electricity is the most affordable remedy for much of Africa and Asia.

In any case, few policy makers in the West have given much thought to the cost of achieving Net Zero. Michael Kelly, emeritus Prince Philip Professor of Technology at the University of Cambridge and an expert in energy systems, has calculated that the cost of a Net-Zero economy by 2050 in the U.S. alone will be at least $35 trillion, and this does not include the cost of educating the necessary skilled workforce.

Professor Kelly says the target is simply unattainable, a view shared by an ever-increasing number of other analysts. In his opinion, “the hard facts should put a stop to urgent mitigation and lead to a focus on adaptation (to warming).”

Next: How Much Will Reduction in Shipping Emissions Stoke Global Warming?

No Convincing Evidence That Extreme Wildfires Are Increasing

According to a new research study by scientists at the University of Tasmania, the frequency and magnitude of extreme wildfires around the globe more than doubled between 2003 and 2023, despite a decline in the total worldwide area burned annually. The study authors link this trend to climate change.

Such a claim doesn’t stand up to scrutiny, however. First, the authors seem unaware of the usual definition of climate change, which is a long-term shift in weather patterns over a period of at least 30 years. Their finding of a 21-year trend in extreme wildfires is certainly valid, but the study interval is too short to draw any conclusions about climate.

Paradoxically, the researchers mention an earlier 2017 study of theirs, stating that the 12-year period of that study of extreme wildfires was indeed too short to identify any temporal climate trend. Why they think 21 years is any better is puzzling!

Second, the study makes no attempt to compare wildfire frequency and magnitude over the last 21 years with those from decades ago, when there were arguably as many hot-burning fires as now. Such a comparison would allow the claim of more frequent extreme wildfires today to be properly evaluated.

Although today’s satellite observations of wildfire intensity far outnumber the observations made before the satellite era, there’s still plenty of old data that could be analyzed. Satellites measure what is called the FRP (fire radiative power), which is the total fire radiative energy less the energy dissipated through convection and conduction. The older FI (fire intensity) metric also measures the energy released by a fire, expressed as the energy released per unit time per unit length of fire front; FRP, usually measured in MW (megawatts), is closely related to FI.

The study authors define extreme wildfires as those with daily FRPs exceeding the 99.99th percentile. Satellite FRP data for all fires in the study period was collected in pixels 1 km on a side, each retained pixel containing just one wildfire “hotspot” after duplicate hotspots were excluded.

The total raw dataset included 88.4 million hotspot observations, and this number was reduced to 30.7 million “events” by summing individual pixels in cells approximately 22 km on a side. Of these 30.7 million, just 2,913 events satisfied the extreme wildfire 99.99th percentile requirement. The summed FRP values for the study’s top 20 events were in the range of 50,000-150,000 MW, corresponding to individual FRPs of about 100-300 MW in a 1 x 1 km pixel.
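To illustrate the percentile screening step, here is a minimal sketch with synthetic FRP values; the log-normal distribution and its parameters are my assumption, not the study's, and the event count is scaled down from the study's 30.7 million.

```python
# Sketch: selecting 'extreme' events above the 99.99th FRP percentile.
# FRP values are synthetic (log-normal assumed); the study used 30.7M events.
import numpy as np

rng = np.random.default_rng(3)
event_frp = rng.lognormal(mean=2.0, sigma=1.2, size=1_000_000)  # MW, synthetic

threshold = np.percentile(event_frp, 99.99)
extremes = event_frp[event_frp > threshold]
print(f"threshold = {threshold:.0f} MW; {extremes.size} extreme events")
```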

A glance at the massive dataset shows individual FRP values ranging from the single digits to several hundred MW. If the 20 hottest wildfires during 2003-23 had FRPs above 100 MW, most of the other 2,893 fires above the 99.99th percentile would have had lower FRPs, in the tens and teens.

While intensity data for historical wildfires is sparse, there are occasionally numbers mentioned in the literature. One example can be found in a 2021 paper that reviews past large-area high-intensity wildfires that have occurred in arid Australian grasslands. The paper’s authors state that:

Contemporary fire cycles in these grasslands (spinifex) are characterized by periodic wildfires that are large in scale, high in intensity (e.g., up to c. 14,000 kW) … and driven by fuel accumulations that occur following exceptionally high rainfall years.

An FRP of 14,000 kW, or 14 MW, is comparable to that of many of the 2,893 FRPs for modern extreme wildfires (excluding the top 20) in the Tasmanian study. The figure below shows the potential fire intensity of bushfires across Australia, the various colors indicating the FI range. As you can see, the most intense bushfires occur in the southeast and southwest of the country; FI values in those regions can exceed 100 MW per meter, which corresponds to FRPs of about 30 MW.

And, although it doesn’t cite FI numbers, a 1976 paper on Australian bushfires from 1945 to 1975 makes the statement that:

The fire control authorities recognise that no fire suppression system has been developed in the world which can halt the forward spread of a high-intensity fire burning in continuous heavy fuels under the influence of extreme fire weather.

High- and extremely high-intensity wildfires, in Australia at least, are nothing new, and the same is no doubt true for other countries included in the Tasmanian study. The study authors correctly remark that higher temperatures due to global warming, and the associated drying out of vegetation and forests, both increase wildfire intensity. But there have been equally hot and dry periods in the past, such as the 1930s, when larger areas burned.

So there’s nothing remarkable about the present study. Even though it’s difficult to find good wildfire data in the pre-satellite era, the study authors could easily extend their work back to the onset of satellite measurements in the 1970s.

Next: The Scientific Reality of the Quest for Net Zero

Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

Of all the mass extinctions in the earth’s distant past, by far the greatest and most drastic was the Permian Extinction, which occurred during the Permian between 300 and 250 million years ago. Also known as the Great Dying, the Permian Extinction killed off an estimated 57% of all biological families including rainforest flora, 81% of marine species and 70% of terrestrial vertebrate species that existed before the Permian’s last million years. What was the cause of this devastation?

The answer to that question is controversial among paleontologists. For many years, it has been thought the extinction was a result of ancient global warming. During Earth’s 4.5-billion-year history, the global average temperature has fluctuated wildly, from “hothouse” temperatures as much as 14 degrees Celsius (25 degrees Fahrenheit) above today’s level of about 14.8 degrees Celsius (58.6 degrees Fahrenheit), to “icehouse” temperatures 6 degrees Celsius (11 degrees Fahrenheit) below.

Hottest of all was a sudden temperature spike from icehouse conditions at the onset of the Permian to extreme hothouse temperatures at its end, as can be seen in the figure below. The figure is a 2021 estimate of ancient temperatures derived from oxygen isotopic measurements combined with lithologic climate indicators, such as coals, sedimentary rocks, minerals and glacial deposits. The barely visible time scale is in millions of years before the present.

The geological event responsible for this enormous surge in temperature is a massive volcanic eruption known as the Siberian Traps. The eruption lasted at least 1 million years and resulted in the outpouring of voluminous quantities of basaltic lava from rifts in West Siberia; the lava buried over 50% of Siberia in a blanket up to 6.5 km (4 miles) deep.

Volcanic CO2 released by the eruptions was supplemented by CO2 produced during combustion of thick, buried coal deposits that lay along the subterranean path of the erupting lava. This stupendous outburst boosted the atmospheric CO2 level from a very low 200 ppm (parts per million) to more than 2,000 ppm, as shown in the next figure.

The conventional wisdom in the past has been that this geologically sudden, gigantic increase in the CO2 level sent the global thermometer soaring – a conclusion sensationalized by mainstream media such as the New York Times. However, that argument ignores the saturation effect for atmospheric CO2, which limits CO2-induced warming to that produced by the first few hundred ppm of the greenhouse gas.

While the composition of the atmosphere 250 million years ago may have been different from today’s, the saturation effect would still have occurred. There’s no question, nevertheless, that end-Permian temperatures were as high as estimated, whatever the cause. That’s because the temperatures are based on the highly reliable method of measuring oxygen 18O/16O isotopic ratios in ancient microfossils.

Such hothouse conditions would have undoubtedly caused the extinction of various species; the severity of the extinction event is revealed by subsequent gaps in the fossil record. Organic carbon accumulated in the deep ocean, depleting oxygen and thus wiping out many marine species such as phytoplankton, brachiopods and reef-building corals. On land, vertebrates such as amphibians and early reptiles, as well as diverse tropical and temperate rainforest flora, disappeared.

All from extreme global warming? Not so fast, says ecologist Jim Steele.

Steele attributes the Permian extinction not to an excess of CO2 at the end of this geological period, but rather to a lack of it during the preceding Carboniferous and the early Permian, as can be seen in the figure above. He explains that all life is dependent on a supply of CO2, and that when its concentration drops below 150 ppm, photosynthesis ceases, and plants and living creatures die.

Steele argues that because of CO2 starvation over this interval, many species had either already become extinct, or were on the verge of extinction, long before the planet heated up so abruptly.

In comparison to other periods, the Permian saw the appearance of very few new species, as illustrated in the following figure. For example, far more new species evolved (and became extinct) during the earlier Ordovician, when CO2 levels were much, much higher but an icehouse climate prevailed.

When CO2 concentrations reached their lowest levels ever in the early Permian, phytoplankton fossils were extremely rare – some 40 million years or so before the later hothouse spike, which is when the conventional narrative claims the species became extinct. And Steele says that 35-47% of marine invertebrate genera went extinct, as well as almost 80% of land vertebrates, from 7 to 17 million years before the mass extinction at the end of the Permian.

Furthermore, Steele adds, the formation of the supercontinent Pangaea (shown to the left), which occurred during the Carboniferous, had a negative effect on biodiversity. Pangaea removed unique niches from its converging island-like microcontinents, again long before the end-Permian.

Next: Unexpected Sea Level Fluctuations Due to Gravity, New Evidence Shows

Shrinking Cloud Cover: Cause or Effect of Global Warming?

Clouds play a dominant role in regulating our climate. Observational data show that the earth’s cloud cover has been slowly decreasing since at least 1982, at the same time that its surface temperature has risen about 0.8 degrees Celsius (1.4 degrees Fahrenheit). Has the reduction in cloudiness caused that warming, as some heretical research suggests, or is it an effect of increased temperatures?

It's certainly true that clouds exert a cooling effect, as you’d expect – at least low-level clouds, which are the majority of the planet’s cloud cover. Low-level clouds such as cumulus and stratus clouds are thick enough to reflect 30-60% of the sun’s radiation that strikes them back into space, so they act like a parasol and cool the earth’s surface. Less cloud cover would therefore be expected to result in warming.

Satellite measurements of global cloud cover from 1982 to 2018 or 2019 are presented in the following two, slightly different figures, which also include atmospheric temperature data for the same period. The first figure shows cloud cover from one set of satellite data, and temperatures in degrees Celsius relative to the mean tropospheric temperature from 1991 to 2020.

The second figure shows cloud cover from a different set of satellite data, and absolute temperatures in degrees Fahrenheit. The temperature data were not measured directly but derived from measurements of outgoing longwave radiation, which is probably why the temperature range from 1982 to 2018 appears much larger than in the previous figure.

This second figure is the basis for the authors’ claim that 90% of global warming since 1982 is a result of fewer clouds. As can be seen, their estimated trendline temperature (red dotted line, which needs extending slightly) at the end of the observation period in 2018 was 59.6 degrees Fahrenheit. The reduction in clouds (blue dotted line) over the same interval was 2.7% – although the researchers erroneously conflate the cloud cover and temperature scales to come up with a 4.1% reduction.

Multiplying 59.6 degrees Fahrenheit by 2.7% yields a temperature change of 1.6 degrees Fahrenheit. The researchers then make use of the well-established fact that the Northern Hemisphere is up to 1.5 degrees Celsius (2.7 degrees Fahrenheit) warmer than the Southern Hemisphere. So, they say, clouds can account for (1.6/2.7) = 59% of the temperature difference between the hemispheres.

This suggests that clouds may be responsible for 59% of recent global warming, if the temperature difference between the two hemispheres is due entirely to the difference in cloud cover from hemisphere to hemisphere.

Nevertheless, this argument is on very weak ground. First, the authors wrongly used 4.1% instead of 2.7%, as just mentioned, which leads to an incorrect temperature change due to cloud reduction of 2.4 degrees Fahrenheit and a higher estimated contribution to global warming of (2.4/2.7) = 89%, as they claim in their paper.
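The competing numbers are easy to reproduce; this sketch runs the authors' calculation with both the correct 2.7% and the conflated 4.1% cloud reduction.

```python
# Reproducing the cloud-cover warming estimates: correct vs. conflated.
trend_temp_f = 59.6        # trendline temperature in 2018, deg F
hemispheric_diff_f = 2.7   # NH-SH temperature difference, deg F

for reduction, label in [(0.027, "correct 2.7%"), (0.041, "conflated 4.1%")]:
    delta_f = round(trend_temp_f * reduction, 1)   # temperature change, deg F
    share = delta_f / hemispheric_diff_f
    print(f"{label}: dT = {delta_f} F -> {share:.0%} of hemispheric difference")
# correct 2.7%: 1.6 F -> 59%;  conflated 4.1%: 2.4 F -> 89%
```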

Regardless of this mistake, however, a temperature increase of even 1.6 degrees Fahrenheit is more than twice as large as the observed rise measured by the more reliable satellite data in the first figure above. And attributing the 1.5 degrees Celsius (2.7 degrees Fahrenheit) temperature difference between the two hemispheres entirely to cloud cover difference is dubious.

There is indeed a difference in cloud cover between the hemispheres. The Southern Hemisphere contains more clouds (69% average cloud cover) than the Northern Hemisphere (64%), partly because there is more ocean surface in the Southern Hemisphere, and thus more evaporation as the planet warms. This in itself would not explain why the Northern Hemisphere is warmer, however.

Southern Hemisphere clouds are also more reflective than their Northern Hemisphere counterparts. That is because they contain more liquid water droplets and less ice; it has been found that lack of ice nuclei causes low-level clouds to form less often. But apart from the ice content, the chemistry and dynamics of cloud formation are complex and depend on many factors. So associating the hemispheric temperature difference only with cloud cover is most likely invalid.

A few other research papers also claim that the falloff in cloud cover explains recent global warming, but their arguments are equally shaky. So is the proposal by John Clauser, joint winner of the 2022 Nobel Prize in Physics, of a cloud thermostat mechanism that controls the earth’s temperature: if cloud cover falls and the temperature climbs, the thermostat acts to create more clouds and cool the earth down again. Obviously, this has not happened.

Finally, it’s interesting to note that the current decline in cloud cover is not uniform across the globe. This can be seen in the figure below, which shows an expanding trend with time in coverage over the oceans, but a diminishing trend over land.

The expanding ocean cloud cover comes from increased evaporation of seawater with rising temperatures. The opposite trend over land is a consequence of the drying out of the land surface; evidently, the land trend dominates globally.

Next: Was the Permian Extinction Caused by Global Warming or CO2 Starvation?

El Niño and La Niña May Have Their Origins on the Sea Floor

One of the least understood aspects of our climate is the ENSO (El Niño – Southern Oscillation) ocean cycle, whose familiar El Niño (warm) and La Niña (cool) events cause drastic fluctuations in global temperature, along with often catastrophic weather in tropical regions of the Pacific and delayed effects elsewhere. A recent research paper attributes the phenomenon to tectonic and seismic activity under the oceans.

Principal author Valentina Zharkova, formerly at the UK’s Northumbria University, is a prolific researcher into natural sources of global warming, such as the sun’s internal magnetic field and the effect of solar activity on the earth’s ozone layer. Most of her studies involve sophisticated mathematical analysis and her latest paper is no exception.

Zharkova and her coauthor Irina Vasilieva make use of a technique known as wavelet analysis, combined with correlation analysis, to identify key time periods in the ONI (Oceanic Niño Index). The index, which measures the strength of El Niño and La Niña events, is the 3-monthly average difference from the long-term average sea surface temperature in the ENSO region of the tropical Pacific. Shown in the figure below are values of the index from 1950 to 2016.

Wavelet analysis supplies information both on which frequencies are present in a time series signal, and on when those frequencies occur, unlike a Fourier transform which decomposes a signal only into its frequency components.

Using the wavelet approach, Zharkova and Vasilieva have identified two separate ENSO cycles: one with a shorter period of 4-5 years, and a longer one with a period of 12 years. This is illustrated in the next figure which shows the ONI at top left; the wavelet spectrum of the index at bottom left, with the wavelet “power” indicated by the colored bar at top right; and the global wavelet spectrum at bottom right. 
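To give a flavor of the method, here is a minimal wavelet sketch assuming the PyWavelets package; it recovers the two periods from a synthetic ONI-like signal containing 4.5-year and 12-year cycles, and is of course not the authors' code.

```python
# Sketch: continuous wavelet transform of a synthetic ONI-like signal
# with 4.5-year and 12-year cycles (requires the PyWavelets package).
import numpy as np
import pywt
from scipy.signal import find_peaks

t = np.arange(0, 74, 1 / 12)                    # monthly samples, ~1950-2023
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * t / 4.5)           # short ENSO cycle
          + 0.7 * np.sin(2 * np.pi * t / 12.0)  # long ENSO cycle
          + 0.3 * rng.standard_normal(t.size))  # noise

scales = np.arange(1, 400)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / 12)

# 'Global' wavelet spectrum: time-averaged power at each scale
power = (np.abs(coeffs) ** 2).mean(axis=1)
peaks, _ = find_peaks(power)
top2 = peaks[np.argsort(power[peaks])[-2:]]     # two strongest spectral peaks
print("dominant periods:", np.sort(1 / freqs[top2]).round(1), "years")
```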

The authors link the 4- to 5-year ENSO cycle to the motion of tectonic plates, a connection that has been made by other researchers. The 12-year ENSO cycle identified by their wavelet analysis they attribute to underwater volcanic activity; it does not correspond to any solar cycle or other known natural source of warming.

The following figure depicts an index (in red, right-hand scale), calculated by the authors, that measures the total annual volcanic strength and duration of all submarine volcanic eruptions from 1950 to 2023, superimposed on the ONI (in black) over the same period. A weak correlation can be seen between the ENSO ONI and undersea volcanic activity, the correlation being strongest at 12-year intervals.

Zharkova and Vasilieva estimate the 12-year ENSO correlation coefficient at 25%, a connection they label as “rather significant.” As I discussed in a recent post, retired physical geographer Arthur Viterito has proposed that submarine volcanic activity is the principal driver of global warming, via a strengthening of the thermohaline circulation that redistributes seawater and heat around the globe.

Zharkova and Vasilieva, however, link the volcanic eruptions causing the 12-year boost in the ENSO index to tidal gravitational forces on the earth from the giant planet Jupiter and from the sun. Jupiter of course orbits the sun and spins on an axis, just like Earth. But the sun is not motionless either: it too rotates on an axis and, because it’s tugged by the gravitational pull of the giant planets Jupiter and Saturn, traces a small but complex spiral around the center of mass of the solar system.

Jupiter was selected by the researchers because its orbital period is 12 years – the same as the longer ENSO cycle identified by their wavelet analysis.

That Jupiter’s gravitational pull on Earth influences volcanic activity is clear from the next figure, in which the frequency of all terrestrial volcanic eruptions (underwater and surface) is plotted against the distance of Earth from Jupiter; the distance is measured in AU (astronomical units), where 1 AU is the average earth-sun distance. The thick blue line is for all eruptions, while the thick yellow line shows the eruption frequency in just the ENSO region.

What stands out is the increased volcanic frequency when Jupiter is at one of two different distances from Earth: 4.5 AU and 6 AU. The distance of 4.5 AU is Jupiter’s closest approach to Earth, while 6 AU is Jupiter’s distance when the sun is closest to Earth and located between Earth and Jupiter. The correlation coefficient between the 12-year ENSO cycle and the Earth-Jupiter distance is 12%.  

For the gravitational pull of the sun, Zharkova and Vasilieva find there is a 15% correlation between the 12-year ENSO cycle and the Earth-sun distance in January, when Earth’s southern hemisphere (where ENSO occurs) is closest to the sun. Although these solar system correlations are weak, Zharkova and Vasilieva say they are high considering the vast distances involved.

Next: Shrinking Cloud Cover: Cause or Effect of Global Warming?

The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

In my previous post, I reviewed the science described in environmentalist Ted Nordhaus’ four-part essay, “Did Exxon Make It Rain Today?”, and how science is being misused to falsely link weather extremes to climate change. Nordhaus also describes how the perception of a looming climate catastrophe, exemplified by extreme weather events, is being fanned by misconceptions about the economic costs of natural disasters and by environmental politics – both the subject of this second post.

Between 1990 and 2017, the global cost of weather-related disasters increased by 74%, according to an analysis by Roger Pielke, Jr., a former professor at the University of Colorado. Economic loss studies of natural disasters have been quick to blame human-caused climate change for this increase.

But Nordhaus makes the point that, if the cost of natural disasters is increasing due to global warming, then you would expect the cost of weather-related disasters to be rising faster than that of disasters not related to weather. Yet the opposite is true. States Nordhaus: “The cost of disasters unrelated [my italics] to weather increased 182% between 1990 and 2017, more than twice as fast as for weather-related disasters.” This is evident in the figure below, which shows both costs from 1990 to 2018.

Nordhaus goes on to declare:

In truth, it is economic growth, not climate change, that is driving the boom in economic damage from both weather-related and non-weather-related natural disasters.

Once the losses are corrected for population gain and the ever-escalating value of property in harm’s way, there is very little evidence to support any connection between natural disasters and global warming. Nordhaus explains that accelerating urbanization since 1950 has led to an enormous shift of the global population, economic activity, and wealth into river and coastal floodplains.

On the influence of environmental politics in connecting weather extremes to global warming, Nordhaus has this to say:

… the perception among many audiences that these events centrally implicate anthropogenic warming has been driven by ... a sustained campaign by environmental advocates to move the proximity of climate catastrophe in the public imagination from the uncertain future into the present.

The campaign had its origins in a 2012 meeting of environmental advocates, litigators, climate scientists and others in La Jolla, California, convened by the Union of Concerned Scientists. The specific purpose of the gathering was “to develop a public narrative connecting extreme weather events that were already happening, and the damages they were causing, with climate change and the fossil fuel industry.”

This was clearly an attempt to mimic the 1960s campaign against smoking tobacco because of its link to lung cancer. However, the correlation between smoking and lung cancer is extraordinarily high, leaving no doubt about causation. The same cannot be said for any connection between extreme weather events and climate change.

Nevertheless, it was at the La Jolla meeting that the idea of reframing the attribution of extreme weather to climate change, as I discussed in my previous post, was born. Nordhaus discerns that a subsequent flurry of attribution reports, together with a fortuitous restructuring of the media at the same time:

… have given journalists license to ignore the enormous body of research and evidence on the long-term drivers of natural disasters and the impact that climate change has had on them.

It was but a short journey from there for the media to promote the notion, favored by “much of the environmental cognoscenti” as Nordhaus puts it, that “a climate catastrophe is now unfolding, and that it is demonstrable in every extreme weather event.”

The media have undergone a painful transformation in the last few decades, with the proliferation of cable news networks followed by the arrival of the Internet. The much broader marketplace has resulted in media outlets tailoring their content to the political values and ideological preferences of their audiences. This means, says Nordhaus, that sensationalism such as catastrophic climate news – especially news linking extreme weather to anthropogenic warming – plays a much larger role than before.

As I discussed in a 2023 post, the ever-increasing hype in nearly all mainstream media coverage of weather extremes is a direct result of advocacy by well-heeled benefactors like the Rockefeller, Walton and Ford foundations. The Rockefeller Foundation, for example, has begun funding the hiring of climate reporters to “fight the climate crisis.”

A new coalition, founded in 2019, of more than 500 media outlets is dedicated to producing “more informed and urgent climate stories.” The CCN (Covering Climate Now) coalition includes three of the world’s largest news agencies – Reuters, Bloomberg and Agence France-Presse – and claims to reach an audience of two billion.

Concludes Nordhaus:

[These new dynamics] are self-reinforcing and have led to the widespread perception among elite audiences that the climate is spinning out of control. New digital technology bombards us with spectacular footage of extreme weather events. … Catastrophist climate coverage generates clicks from elite audiences.

Next: El Niño and La Niña May Have Their Origins on the Sea Floor

The Deceptive Catastrophizing of Weather Extremes: (1) The Science

In these pages, I’ve written extensively about the lack of scientific evidence for any increase in extreme weather due to global warming. But I’ve said relatively little about the media’s exploitation of the mistaken belief that weather extremes are worsening because of climate change.

A recent four-part essay addresses the latter issue, under the title “Did Exxon Make It Rain Today?”  The essay was penned by Ted Nordhaus, well-known environmentalist and director of the Breakthrough Institute in Berkeley, California, which he co-founded with Michael Shellenberger in 2007. Its authorship was a surprise to me, since the Breakthrough Institute generally supports the narrative of largely human-caused warming.

Nonetheless, Nordhaus’s thoughtful essay takes a mostly skeptical – and realistic – view of hype about weather extremes, stating that:

We know that anthropogenic warming can increase rainfall and storm surges from a hurricane, or make a heat wave hotter. But there is little evidence that warming could create a major storm, flood, drought, or heat wave where otherwise none would have occurred, …

Nordhaus goes on to make the insightful statement that “The main effect that climate change has on extreme weather and natural disasters … is at the margins.” By this, he means that a heat wave whose daily highs would have reached, say, 37 degrees Celsius (99 degrees Fahrenheit) or above for a week in the absence of climate change instead stays above perhaps 39 degrees Celsius (102 degrees Fahrenheit) with our present level of global warming.

His assertion is illustrated in the following, rather congested figure from the Sixth Assessment Report of the IPCC (Intergovernmental Panel on Climate Change). The purple curve shows the average annual hottest daily maximum temperature on land, while the green and black curves indicate the land and global average annual mean temperature, respectively; temperatures are measured relative to their 1850–1900 means.

However, while global warming is making heat waves marginally hotter, Nordhaus says there is no evidence that extreme weather events are on the rise, as so frequently trumpeted by the mainstream media. Although climate change will make some weather events such as heavy rainfall more intense than they otherwise would be, the global area burned by wildfires has actually decreased, and there has been no detectable global trend in river floods, meteorological drought, or hurricanes.

Adds Nordhaus:

The main source of climate variability in the past, present, and future, in all places and with regard to virtually all climatic phenomena, is still overwhelmingly non-human: all the random oscillations in climatic extremes that occur in a highly complex climate system across all those highly diverse geographies and topographies.

The misconception that weather extremes are increasing when they are not has been amplified by attribution studies, which use a new statistical method and climate models to assign specific extremes to either natural variability or human causes. Such studies involve highly questionable methodology that has several shortcomings.

Even so, the media and some climate scientists have taken scientifically unjustifiable liberties with attribution analysis in order to link extreme events to climate change – such as attempting to quantify how much more likely global warming made the occurrence of a heat wave that resulted in high temperatures above 38 degrees Celsius (100 degrees Fahrenheit) for a period of five days in a specific location.

But, explains Nordhaus, that is not what an attribution study actually estimates. Rather, “it quantifies changes in the likelihood of the heat wave reaching the precise level of extremity that occurred.” In the hypothetical case above, the heat wave would have happened anyway in the absence of climate change, but it would have resulted in high temperatures above 37 degrees Celsius (99 degrees Fahrenheit) over five days instead of above 38 degrees.

The attribution method estimates the probability of a heat wave or other extreme event occurring that is incrementally hotter or more severe than the one that would have occurred without climate change, not the probability of the heat wave or other event occurring at all.
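
To make that distinction concrete, here is a toy calculation – with invented numbers and idealized Gaussian temperature distributions, not the attribution community’s actual methodology – showing how a one-degree shift changes the likelihood of exceeding a fixed threshold without making the event impossible beforehand:

```python
# Toy illustration of attribution's probability ratio: shift an idealized
# Gaussian distribution of summer highs by 1 degree C and compare the odds
# of exceeding a fixed threshold. All numbers are invented for illustration.
from math import erf, sqrt

def exceedance_prob(threshold, mean, sd):
    """P(T > threshold) for a normal distribution with the given mean and sd."""
    z = (threshold - mean) / sd
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

threshold = 38.0                                            # degrees C, the observed extreme
p_without = exceedance_prob(threshold, mean=32.0, sd=2.0)   # hypothetical climate without warming
p_with = exceedance_prob(threshold, mean=33.0, sd=2.0)      # the same climate shifted +1 C

print(f"probability ratio = {p_with / p_without:.1f}")      # ~4.6 with these invented numbers
# The threshold becomes several times more likely, but the heat wave itself
# occurs in both climates - it is merely a degree or so cooler without warming.
```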

Nonetheless, as we’ll see in the next post, the company WWA (World Weather Attribution), co-founded by German climatologist Friederike Otto, has utilized this new methodology to rapidly produce science that does connect weather extremes to climate change – with the explicit goal of shaping news coverage. Coverage of climate-related disasters now routinely features WWA analysis, which is often employed to suggest that climate change is the cause of such events.

Next: The Deceptive Catastrophizing of Weather Extremes: (2) Economics and Politics

Sea Ice Update: Arctic Stable, Antarctic Recovering

The climate doomsday machine constantly insists that sea ice at the two poles is shrinking inexorably and that the Arctic will soon be ice-free in the summer. But the latest data puts the kibosh on those predictions. The maximum winter Arctic ice extent last month was no different from 2023, and the minimum summer 2024 extent in the Antarctic, although lower than the long-term average, was higher than last year.

Satellite images of Arctic sea ice extent in February 2024, one month before its winter peak (left image), and Antarctic extent at its summer minimum the same month (right image), are shown in the figure below. Sea ice shrinks during summer months and expands to its maximum extent during the winter. The red lines in the figure denote the median ice extent from 1981 to 2010.

Arctic summer ice extent decreased by approximately 39% over the interval from 1979 to 2023, but was essentially the same in 2023 as it was in 2007. Arctic winter ice extent on March 3, 2024 was 11% lower than in 1979, when satellite measurements began, but slightly higher than in 2023, as indicated by the inset in the figure below.

Arctic winter maximum extent fluctuates less than its summer minimum extent, as can be seen in the right panel of the figure, which compares the annual trend by month for various intervals during the satellite era, as well as for the low-summer-ice years of 2007 and 2012. The left panel shows the annual trend by month for all years from 2013 through 2024.

What is noticeable about this year’s winter maximum is that it was not unduly low, despite the Arctic being warmer than usual. According to the U.S. NSIDC (National Snow & Ice Data Center), February air temperatures in the Arctic troposphere, about 760 meters (2,500 feet) above sea level, were up to 10 degrees Celsius (18 degrees Fahrenheit) above average.

The NSIDC attributes the unusual warmth to a strong pressure gradient that forced relatively warm air over western Eurasia to flow into the Arctic. However, other explanations have been put forward for enhanced winter warming, such as the formation during non-summer seasons of more low-level clouds due to the increased area of open water compared to sea ice. The next figure illustrates this effect between 2008 and 2022.

Despite the long-term loss of ice in the Arctic, the sea ice around Antarctica had been expanding steadily during the satellite era up until 2016, growing at an average rate between 1% and 2% per decade, with considerable fluctuations from year to year. But it took a tumble in 2017, as depicted in the figure below.

Note that this figure shows “anomalies,” or departures from the February mean ice extent for the period from 1981 to 2010, rather than the minimum extent of summer ice in square km. The anomaly trend is plotted as the percent difference between the February extent for that year and the February mean from 1981 to 2010.
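
In other words, each point in the figure is computed along the following lines; the numbers here are placeholders, not NSIDC values:

```python
# How a percent anomaly is formed: a given February's extent relative to the
# 1981-2010 February mean. Both numbers below are placeholders, not NSIDC data.
extent_feb = 2.14         # million sq km, hypothetical February minimum
mean_1981_2010 = 2.45     # million sq km, hypothetical 1981-2010 February mean

anomaly_pct = 100.0 * (extent_feb - mean_1981_2010) / mean_1981_2010
print(f"February anomaly = {anomaly_pct:+.1f}%")   # prints -12.7% for these values
```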

As can be seen, the summer ice minimum recovered briefly in 2020 and 2021, only to fall once more and pick up again this year. The left panel in the next figure shows the annual Antarctic trend by month for all years from 2013 through 2024, along with the summer minimum (in square km) in the inset. As for the Arctic previously, the right panel compares the annual trend by month for various intervals during the satellite era, as well as for the high-summer-ice years of 2012 and 2014.

Antarctic sea ice at its summer minimum this year was especially low in the Ross, Amundsen, and Bellingshausen Seas, all of which lie off the coast of West Antarctica, while the ice cover in the Weddell Sea to the north and along the East Antarctic coast was at average levels. Such a pattern is thought to be associated with the current El Niño.

A slightly different representation of the Antarctic sea ice trend is presented in the following figure, in which the February anomaly is shown directly in square km rather than as a difference percentage. This representation illustrates more clearly how the decline in summer sea ice extent has now persisted for seven years.

The overall trend from 1979 to 2023 is an insignificant 0.1% per decade relative to the 1981 to 2010 mean. Yet a prolonged increase above the mean occurred from 2008 to 2017, followed by the seven-year decline since then. The current downward trend has sparked debate and several possible reasons have been advanced, not all of which are linked to global warming. One analysis attributes the big losses of sea ice in 2017 and 2023 to extra-strong El Niños.

Next: The Deceptive Catastrophizing of Weather Extremes: (1) The Science

Exactly How Large Is the Urban Heat Island Effect in Global Warming?

It’s well known that global surface temperatures are biased upward by the urban heat island (UHI) effect. But there’s widespread disagreement among climate scientists about the magnitude of the effect, which arises from the warmth generated by urban surroundings, such as buildings, concrete and asphalt.

In its Sixth Assessment Report in 2021, the IPCC (Intergovernmental Panel on Climate Change) acknowledged the existence of the UHI effect and the consequent decrease in the number of cold nights since around 1950. Nevertheless, the IPCC is ambivalent about the actual size of the effect. On the one hand, the report dismisses its significance by declaring it “less than 10%” (Chapter 2, p. 324) or “negligible” (Chapter 10, p. 1368).

On the other hand, the IPCC presents a graph (Chapter 10, p. 1455), reproduced below, showing that the UHI effect ranges from 0% to 60% or more of measured warming in various cities. Since the population of the included cities is only a few percent of the global population, and many sizable cities are not included, it’s hard to see how the IPCC can state that the global UHI effect is negligible.

One climate scientist who has studied the magnitude of the UHI effect for some time is PhD meteorologist Roy Spencer. In a recent preview of a paper submitted for publication, Spencer finds that summer warming in U.S. cities from 1895 to 2023 has been exaggerated by 100% or more by UHI warming. The next figure shows the results of his calculations which, as you would expect, depend on population density.

The barely visible solid brown line is the measured average summertime temperature for the continental U.S. (CONUS) relative to its 1901-2000 average, in degrees Celsius, from 1895 to 2023; the solid black line represents the same data corrected for UHI warming, as estimated from population density data. The measurements are taken from the monthly GHCN (Global Historical Climatology Network) “homogenized” dataset, as compiled by NOAA (the U.S. National Oceanic and Atmospheric Administration).

You can see that the UHI effect accounts for a substantial portion of the recorded warming in all years. Spencer says that the UHI influence is 24% of the trend averaged over all measurement stations, which are dominated by rural sites not subject to UHI warming. But for the typical “suburban” station (100-1,000 persons per square km), the UHI effect is 52% of the measured trend, which means that measured warming in U.S. cities is at least double the actual warming.

Globally, a rough estimate of the UHI effect can be made from NOAA satellite temperature data compiled by Spencer and Alabama state climatologist John Christy. Satellite data are not influenced by UHI warming because they measure the temperature of the lower troposphere rather than the surface. The most recent data for the global average lower tropospheric temperature are displayed below.

According to Spencer and Christy’s calculations, the linear rate of global warming since measurements began in January 1979 is 0.15 degrees Celsius (0.27 degrees Fahrenheit) per decade, while the warming rate measured over land only is 0.20 degrees Celsius (0.36 degrees Fahrenheit) per decade. The difference of 0.05 degrees Celsius (0.09 degrees Fahrenheit) per decade in the warming rates can reasonably be attributed, at least in part, to the UHI effect.

So the UHI influence is as high as 0.05/0.20 or 25% of the measured temperature trend – in close agreement with Spencer’s 24% estimated from his more detailed calculations.
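
Written out, and treating the entire land-minus-global difference as urban in origin – an upper bound, given the “at least in part” caveat above – the arithmetic is:

```latex
\text{UHI fraction} \;\lesssim\;
\frac{\text{trend}_{\text{land}} - \text{trend}_{\text{global}}}{\text{trend}_{\text{land}}}
\;=\; \frac{0.20 - 0.15}{0.20} \;=\; 25\%.
```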

Other estimates peg the UHI effect as larger yet. As part of a study of natural contributions to global warming, which I discussed in a recent post, the CERES research group suggested that urban warming might account for up to 40% of warming since 1850.

But the 40% estimate comes from a comparison of the warming rate for rural temperature stations alone with that for rural and urban stations combined, from 1900 to 2018. Over the shorter time period from 1972 to 2018, which almost matches Spencer and Christy’s satellite record, the estimated UHI effect is a much smaller 6%. The study authors caution that more research is needed to estimate the UHI magnitude more accurately.

The effect of urbanization on global temperatures is an active research field. Among other recent studies is a 2021 paper by Chinese researchers, who used a novel approach involving machine learning to quantify the phenomenon. Their study encompassed measurement stations in four geographic areas – Australia, East Asia, Europe and North America – and found that the magnitude of UHI warming from 1951 to 2018 was 13% globally, and 15% in East Asia where rapid urbanization has occurred.

What all these studies mean for climate science is that global warming is probably about 20% lower than most people think. That is, about 0.8 degrees Celsius (1.4 degrees Fahrenheit) at the end of 2022, before the current El Niño spike, instead of the reported 0.99 degrees Celsius (1.8 degrees Fahrenheit). Which means in turn that we’re only halfway to the Paris Agreement’s lower limit of 1.5 degrees Celsius (2.7 degrees Fahrenheit).  

Next: Sea Ice Update: Arctic Stable, Antarctic Recovering

Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Driven by Oceanic Seismic Activity, Not CO2

Although undersea volcanic eruptions can’t cause global warming directly, as I discussed in a previous post, they can contribute indirectly by altering the deep-ocean thermohaline circulation. According to a recent lecture, submarine volcanic activity is currently intensifying the thermohaline circulation sufficiently to be the principal driver of global warming.

The lecture was delivered by Arthur Viterito, a renowned physical geographer and retired professor at the College of Southern Maryland. His provocative hypothesis links an upsurge in seismic activity at mid-ocean ridges to recent global warming, via a strengthening of the ocean conveyor belt that redistributes seawater and heat around the globe.

Viterito’s starting point is the observation that satellite measurements of global warming since 1979 show distinct step increases following major El Niño events in 1997-98 and 2014-16, as demonstrated in the following figure. The figure depicts the satellite-based global temperature of the lower atmosphere in degrees Celsius, as compiled by scientists at the University of Alabama in Huntsville; temperatures are annual averages and the zero baseline represents the mean tropospheric temperature from 1991 to 2020.

Viterito links these apparent jumps in warming to geothermal heat emitted by volcanoes and hydrothermal vents in the middle of the world’s ocean basins – heat that shows similar step increases over the same time period, as measured by seismic activity. The submarine volcanoes and hydrothermal vents lie along the earth’s mid-ocean ridges, which divide the major oceans roughly in half and are illustrated in the next figure. The different colors denote the geothermal heat output (in milliwatts per square meter), which is highest along the ridges.

The total mid-ocean seismic activity along the ridges is shown in the figure below, in which the global tropospheric temperature, graphed in the first figure above, is plotted in blue against the annual number of mid-ocean earthquakes (EQ) in orange. The best fit between the two sets of data occurs when the temperature readings are lagged by two years: that is, the 1979 temperature reading is paired with the 1977 seismic reading, and so on. As already mentioned, seismic activity since 1979 shows step increases similar to the temperature.

A regression analysis yields a correlation coefficient of 0.74 between seismic activity and the two-year lagged temperatures which, since 0.74 squared is about 0.55, implies that mid-ocean geothermal heat accounts for 55% of current global warming, says Viterito. However, a correlation coefficient of 0.74 is not as high as some estimates of the correlation between rising CO2 and temperature.
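
For the statistically minded, the lag-and-correlate step amounts to something like the sketch below; the series are placeholders, assumed pre-aligned by calendar year, and the squared correlation is where the 55% figure comes from:

```python
# Sketch of the lag-and-correlate step: pair each year's temperature with the
# seismicity two years earlier, then square the correlation coefficient to get
# "variance explained". Both file names are placeholders, aligned by year.
import numpy as np
from scipy.stats import pearsonr

temps = np.loadtxt("lower_troposphere_temp.txt")   # hypothetical annual anomalies, 1977-2023
quakes = np.loadtxt("midocean_quakes.txt")         # hypothetical annual earthquake counts, 1977-2023

lag = 2                                            # years of lag between cause and effect
r, _ = pearsonr(temps[lag:], quakes[:-lag])        # e.g. 1979 temperature vs 1977 seismicity
print(f"r = {r:.2f}, r^2 = {r*r:.2f}")             # 0.74 squared is about 0.55, the quoted 55%
```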

In support of his hypothesis, Viterito states that multiple modeling studies have demonstrated how geothermal heating can significantly strengthen the thermohaline circulation, shown below. He then links the recently enhanced undersea seismic activity to global warming of the atmosphere by examining thermohaline heat transport to the North Atlantic-Arctic and western Pacific oceans.

In the Arctic, Viterito points to several phenomena that he believes are a direct result of a rapid intensification of North Atlantic currents which began around 1995 – the same year that mid-ocean seismic activity started to rise. The phenomena include the expansion of a phytoplankton bloom toward the North Pole due to incursion of North Atlantic currents into the Arctic; enhanced Arctic warming; a decline in Arctic sea ice; and rapid warming of the Subpolar Gyre, a circular current south of Greenland.

In the western Pacific, he cites the increase since 1993 in heat content of the Indo-Pacific Warm Pool near Indonesia; a deepening of the Indo-Pacific Warm Pool thermocline, which divides warmer surface water from cooler water below; strengthening of the Kuroshio Current near Japan; and recently enhanced El Niños.

But, while all these observations are accurate, they do not necessarily verify Viterito’s hypothesis that submarine earthquakes are driving current global warming. For instance, he cites as evidence the switch of the AMO (Atlantic Multidecadal Oscillation) to its positive or warm phase in 1995, when mid-ocean seismic activity began to increase. However, his assertion raises the question: Isn’t the present warm phase of the AMO just the same as the hundreds of warm cycles that preceded it?

In fact, perhaps the AMO warm phase has always been triggered by an upturn in mid-ocean earthquakes, and has nothing to do with global warming.

There are other weaknesses in Viterito’s argument too. One example is his association of the decline in Arctic sea ice, which also began around 1995, with the current warming surge. What he overlooks is that the sea ice extent stopped shrinking on average in 2007 or 2008, but warming has continued.

And while he dismisses CO2 as a global warming driver because the rising CO2 level doesn’t show the same step increases as the tropospheric temperature, a correlation coefficient between CO2 and temperature as high as 0.8 means that any CO2 contribution is not negligible.

It’s worth noting here that a strengthened thermohaline circulation is the exact opposite of the slowdown postulated by retired meteorologist William Kininmonth as the cause of global warming, a possibility I described in an earlier post in this Challenges series (#7). From an analysis of longwave radiation from greenhouse gases absorbed at the tropical surface, Kininmonth concluded that a slowdown in the thermohaline circulation is the only plausible explanation for warming of the tropical ocean.

Next: Foundations of Science Under Attack in U.S. K-12 Education

Rapid Climate Change Is Not Unique to the Present

Rapid climate change, such as the accelerated warming of the past 40 years, is not a new phenomenon. During the last ice age, which spanned the period from about 115,000 to 11,000 years ago, temperatures in Greenland rose abruptly and fell again at least 25 times. Corresponding temperature swings occurred in Antarctica too, although they were less pronounced than those in Greenland.

The striking but fleeting bursts of heat are known as Dansgaard–Oeschger (D-O) events, named after palaeoclimatologists Willi Dansgaard and Hans Oeschger who examined ice cores obtained by deep drilling the Greenland ice sheet. What they found was a series of rapid climate fluctuations, when the icebound earth suddenly warmed to near-interglacial conditions over just a few decades, only to gradually cool back down to frigid ice-age temperatures.

Ice-core data from Greenland and Antarctica are depicted in the figure below; two sets of measurements, recorded at different locations, are shown for each. The isotopic ratios of 18O to 16O, or δ18O, and 2H to 1H, or δ2H, in the cores are used as proxies for the past surface temperature in Greenland and Antarctica, respectively.
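
For reference, the delta notation expresses how much a sample’s isotopic ratio deviates from that of a standard, in parts per thousand (‰); higher (less negative) δ18O in polar ice corresponds to warmer conditions, and δ2H is defined the same way for the hydrogen isotopes:

```latex
\delta^{18}\mathrm{O} \;=\;
\left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{sample}}}
{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}
```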

Multiple D-O events can be seen in the four sets of data, stronger in Greenland than Antarctica. The periodicity of successive events averages 1,470 years, which has led to the suggestion of a 1,500-year cycle of climate change associated with the sun.

Somewhat similar cyclicity has been observed during the present interglacial period or Holocene, with eight sudden temperature drops and recoveries, mirroring D-O temperature spurts, as illustrated by the thick black line in the next figure. Note that the horizontal timescale runs forward, compared to backward in the previous (and following) figure.

These so-called Bond events were identified by geologist Gerard Bond and his colleagues, who used drift ice measured in deep-sea sediment cores, and δ18O as a temperature proxy, to study ancient climate change. The deep-sea cores contain glacial debris rafted into the oceans by icebergs, and then dropped onto the sea floor as the icebergs melted. The volume of glacial debris was largest, and it was carried farthest out to sea, when temperatures were lowest.

Another set of distinctive, abrupt events during the latter part of the last ice age were Heinrich events, which are related to both D-O events and Bond cycles. Five of the six or more Heinrich events are shown in the following figure, where the red line represents Greenland ice-core δ18O data, and some of the many D-O events are marked; the figure also includes Antarctic δ18O data, together with ice-age CO2 and CH4 levels.

As you can see, Heinrich events represent the cooling portion of certain D-O events. Although the origins of both are debated, they are thought likely to be associated with an increase in icebergs discharged from the massive Laurentide ice sheet which covered most of Canada and the northern U.S. Just as with Bond events, Heinrich and D-O events left a signature on the ocean floor, in this case in the form of large rocks eroded by glaciers and dropped by melting icebergs.

The melting icebergs would have also disgorged enormous quantities of freshwater into the Labrador Sea. One hypothesis is that this vast influx of freshwater disrupted the deep-ocean thermohaline circulation (shown below) by lowering ocean salinity, which in turn suppressed deepwater formation and reduced the thermohaline circulation.

Since the thermohaline circulation plays an important role in transporting heat northward, a slowdown would have caused the North Atlantic to cool, leading to a Heinrich event. Later, as the supply of freshwater decreased, ocean salinity and deepwater formation would have increased again, resulting in the rapid warming of a D-O event.

However, this is but one of several possible explanations. The proposed freshwater increase and reduced deepwater formation during D-O events could have resulted from changes in wind and rainfall patterns in the Northern Hemisphere, or the expansion of Arctic sea ice, rather than melting icebergs.

In 2021, an international team of climate researchers concluded that when certain parts of the ice-age climate system changed abruptly, other parts of the system followed like a series of dominoes toppling in succession. But to their surprise, neither the rate of change nor the order of the processes was the same from one event to the next.

Using data from two Greenland ice cores, the researchers discovered that changes in ocean currents, sea ice and wind patterns were so closely intertwined that they likely triggered and reinforced each other in bringing about the abrupt climate changes of D-O and Heinrich events.

While there’s clearly no connection between ice-age D-O events and today’s accelerated warming, this research and the very existence of such events show that the underlying causes of rapid climate change can be elusive.

Next: Challenges to the CO2 Global Warming Hypothesis: (11) Global Warming Is Driven by Oceanic Seismic Activity, Not CO2

Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

In something of a twist to my series on challenges to the CO2 global warming hypothesis, this post describes a new paper that attributes modern global warming entirely to water vapor, not CO2.

Water vapor (H2O) is in fact the major greenhouse gas in the earth’s atmosphere and accounts for about 70% of the Earth’s natural greenhouse effect. Water droplets in clouds account for another 20%, while CO2 contributes only a small percentage, between 4 and 8%, of the total. The natural greenhouse effect keeps the planet about 33 degrees Celsius (59 degrees Fahrenheit) warmer than it would otherwise be – comfortable enough for living organisms to survive.

According to the CO2 hypothesis, it’s the additional greenhouse effect of CO2 and other gases from human activities that is responsible for the current warming (ignoring El Niño) of about 1.0 degrees Celsius (1.8 degrees Fahrenheit) since the preindustrial era. Because elevated CO2 on its own causes only a tiny increase in temperature, the hypothesis postulates that the increase from CO2 is amplified by water vapor in the atmosphere and by clouds – a positive feedback effect.

The paper’s authors, Canadian researchers H. Douglas Lightfoot and Gerald Ratzer, don’t dispute that the natural greenhouse effect exists, unlike some of the more heretical challenges described previously in this series. But the authors ignore the postulated water vapor amplification of CO2 greenhouse warming, and claim that increased water vapor alone accounts for today’s warmer world. It’s well known that extra water vapor is produced by the sun’s evaporation of seawater.

The basis of Lightfoot and Ratzer’s conclusion is something called the psychrometric chart, which is a rather intimidating tool used by architects and engineers in designing heating and cooling systems for buildings. The chart, illustrated below, is a mathematical model of the atmosphere’s thermodynamic properties, including heat content (enthalpy), temperature and relative humidity.

As inputs to their psychrometric model, the researchers used temperature and relative humidity measurements recorded on the 21st of the month over a 12-month period at 20 different locations: four north of the Arctic Circle, six in north mid-latitudes, three on the equator, one in the Sahara Desert, five in south mid-latitudes and one in Antarctica.

As indicated in the figure above, one output of the model from these inputs is the mass of water vapor in grams per kilogram of dry air. The corresponding mass of CO2 per kilogram of dry air at each location was calculated from Mauna Loa CO2 data in ppm (parts per million).
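
The underlying computation is standard psychrometrics and easy to sketch. The following is illustrative only – it uses a Magnus-type formula for saturation vapor pressure at sea-level pressure, with made-up inputs rather than anything from the paper:

```python
# Minimal psychrometric sketch: humidity ratio from temperature and relative
# humidity via a Magnus-type saturation vapor pressure, then the H2O-to-CO2
# molecule ratio. Inputs and the CO2 level are illustrative, not the paper's.
import math

def humidity_ratio(temp_c, rel_humidity, pressure_hpa=1013.25):
    """Grams of water vapor per kilogram of dry air."""
    e_sat = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # Magnus formula, hPa
    e = rel_humidity * e_sat                                      # actual vapor pressure, hPa
    w = 0.622 * e / (pressure_hpa - e)                            # kg water per kg dry air
    return 1000.0 * w

w = humidity_ratio(temp_c=30.0, rel_humidity=0.70)       # a humid tropical day
print(f"water vapor: {w:.1f} g per kg dry air")          # ~18.7 g/kg for these inputs

n_h2o = (w / 1000.0) / 0.018        # moles of H2O per kg of dry air
n_co2 = 420e-6 * (1.0 / 0.029)      # moles of CO2 per kg of dry air at 420 ppm
print(f"H2O:CO2 molecule ratio ~ {n_h2o / n_co2:.0f}")   # ~72 for these inputs
```

Plugging in polar winter conditions instead (say, minus 40 degrees Celsius) gives well under 1 g of water vapor per kg of dry air and an H2O:CO2 ratio below one, consistent with the low polar values the authors report.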

Their results revealed that the ratio of water vapor molecules to CO2 molecules ranges from 0.3 in polar regions to 108 in the tropics. Then, in a somewhat obscure argument, Lightfoot and Ratzer compared these ratios to calculated spectra for outgoing radiation at the top of the atmosphere. Three spectra – for the Sahara Desert, the Mediterranean, and Antarctica – are shown in the next figure.

The significant dip in the Sahara Desert spectrum arises from absorption by CO2 of outgoing radiation whose emission would otherwise cool the earth. You can see that in Antarctica, the dip is absent and replaced by a bulge. This bulge has been explained by William Happer and William van Wijngaarden as being a result of the radiation to space by greenhouse gases over wintertime Antarctica exceeding radiation by the cold ice surface.

Yet Lightfoot and Ratzer assert that the dip must be unrelated to CO2 because their psychrometric model shows there are 0.3 to 40 molecules of water vapor per CO2 molecule in Antarctica, compared with a much higher 84 to 108 in the tropical Sahara where the dip is substantial. Therefore, they say, the warming effect of CO2 must be negligible.

As I see it, however, there are at least two fallacies in the researchers’ arguments. First, the psychrometric model is an inadequate representation of the earth’s climate. Although the model takes account of both convective heat and latent heat (from evaporation of H2O) in the atmosphere, it ignores multiple feedback processes, including the all-important water vapor feedback mentioned above. Other feedbacks include the temperature/altitude (lapse rate) feedback, high- and low-cloud feedback, and the carbon cycle feedback.

A more important objection is that the assertion about water vapor causing global warming represents a circular argument.

According to Lightfoot and Ratzer’s paper, any warming above that provided by the natural greenhouse effect comes solely from the sun. On average, they correctly state, about 26% of the sun’s incoming energy goes into evaporation of water (mostly seawater) to water vapor. The psychrometric model links the increase in water vapor to a gain in temperature.

But the Clausius-Clapeyron equation tells us that warmer air holds more moisture, about 7% more for each degree Celsius of temperature rise. So an increase in temperature raises the water vapor level in the atmosphere – not the other way around. Lightfoot and Ratzer’s claim is circular reasoning.
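
The roughly 7% figure follows directly from the Clausius-Clapeyron relation; with typical values for the latent heat of vaporization and the gas constant for water vapor near the surface,

```latex
\frac{d \ln e_s}{dT} \;=\; \frac{L_v}{R_v T^2}
\;\approx\; \frac{2.5 \times 10^{6}\ \mathrm{J\,kg^{-1}}}
{\left(461\ \mathrm{J\,kg^{-1}\,K^{-1}}\right)\left(288\ \mathrm{K}\right)^{2}}
\;\approx\; 0.065\ \mathrm{K^{-1}},
```

that is, about 6 to 7% more saturation vapor pressure, and hence moisture-holding capacity, per degree Celsius of warming.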

Next: Rapid Climate Change Is Not Unique to the Present

Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s

In a recent series of blog posts, I showed how actual scientific data and reports in newspaper archives over the past century demonstrate clearly that the frequency and severity of extreme weather events have not increased during the last 100 years. But there’s also plenty of evidence of weather extremes comparable to today’s dating back centuries and even millennia.

The evidence consists largely of reconstructions based on proxies such as tree rings, sediment cores and leaf fossils, although some evidence is anecdotal. Reconstruction of historical hurricane patterns, for example, confirms what I noted in an earlier post, that past hurricanes were even more frequent and stronger than those today.

The figure below shows a proxy measurement for hurricane strength of landfalling tropical cyclones – the name for hurricanes down under – that struck the Chillagoe limestone region in northeastern Queensland, Australia between 1228 and 2003. The proxy was the ratio of 18O to 16O isotopic levels in carbonate cave stalagmites, a ratio which is highly depleted in tropical cyclone rain.

What is plotted here is the 18O/16O depletion curve, in parts per thousand (‰); the thick horizontal line at -2.50 ‰ denotes Category 3 or above events, which have a top wind speed of 178 km per hour (111 mph) or greater. It’s clear that far more (seven) major tropical cyclones impacted the Chillagoe region in the period from 1600 to 1800 than in any period since, at least until 2003. Indeed, the strongest cyclone in the whole record occurred during the 1600 to 1800 period, and only one major cyclone was recorded from 1800 to 2003.

Another reconstruction of past data is that of unprecedentedly long and devastating “megadroughts,” which have occurred in western North America and in Europe for thousands of years. The next figure depicts a reconstruction from tree ring proxies of the drought pattern in central Europe from 1000 to 2012, with observational data from 1901 to 2018 superimposed. Dryness is denoted by negative values, wetness by positive values.

The authors of the reconstruction point out that the droughts from 1400 to 1480 and from 1770 to 1840 were much longer and more severe than those of the 21st century. A reconstruction of megadroughts in California back to 800 was featured in a previous post.

An ancient example of a megadrought is the 7-year drought in Egypt approximately 4,700 years ago that resulted in widespread famine. The water level in the Nile River dropped so low that the river failed to flood adjacent farmlands as it normally did each year, drastically reducing crop yields. The event is recorded on the so-called Famine Stela, a hieroglyphic inscription on a granite block located on an island in the Nile.

At the other end of the wetness scale, a Christmas Eve flood in the Netherlands, Denmark and Germany in 1717 drowned over 13,000 people – many more than died in the much hyped Pakistan floods of 2022.

Although most tornadoes occur in the U.S., they have been documented in the UK and other countries for centuries. In 1577, North Yorkshire in England experienced a tornado of intensity T6 on the TORRO scale, which corresponds approximately to EF4 on the Fujita scale, with wind speeds of 259-299 km per hour (161-186 mph). The tornado destroyed cottages, trees, barns, hayricks and most of a church. EF4 tornadoes are relatively rare in the U.S.: of 1,000 recorded tornadoes from 1950 to 1953, just 46 were EF4.

Violent thunderstorms that spawn tornadoes have also been reported throughout history. A hailstorm that struck the Dutch town of Dordrecht in 1552 was so violent that residents “thought the Day of Judgement was coming” when hailstones weighing up to a few pounds fell on the town. A medieval depiction of the event is shown in the following figure.

Such historical storms make a mockery of the 2023 claim by a climate reporter that “Recent violent storms in Italy appear to be unprecedented for intensity, geographical extensions and damages to the community.” The thunderstorms in question produced hailstones the size of tennis balls, merely comparable to those that fell on Dordrecht centuries earlier. And the storms hardly compare with a hailstorm in India in 1888, which actually killed 246 people.

Next: Challenges to the CO2 Global Warming Hypothesis: (10) Global Warming Comes from Water Vapor, Not CO2

Two Statistical Studies Attempt to Cast Doubt on the CO2 Narrative

As I’ve stated many times in these pages, the evidence that global warming comes largely from human emissions of CO2 and other greenhouse gases is not rock solid. Two recent statistical studies affirm this position, but both studies can be faulted.

The first study, by four European engineers, is provocatively titled “On Hens, Eggs, Temperatures and CO2: Causal Links in Earth’s Atmosphere.” As the title suggests, the paper addresses the question of whether modern global warming results from increased CO2 in the atmosphere, according to the CO2 narrative, or whether it’s the other way around. That is, whether rising temperatures from natural sources are causing the CO2 concentration to go up.

The study’s controversial conclusion is the latter possibility – that extra atmospheric CO2 can’t be the cause of higher temperatures, but that raised temperatures must be the origin of elevated CO2, at least over the last 60 years for which we have reliable CO2 data. The mathematics behind the conclusion is complicated but relies on something called the impulse response function.

The impulse response function describes the reaction over time of a dynamic system to some external change or impulse. Here, the impulse and response are the temperature change ΔT and the increase in the logarithm of the CO2 level, Δln(CO2), or the reverse. The study authors took ΔT to be the average one-year temperature difference from 1958 to 2022 in the Reanalysis 1 dataset compiled by the U.S. NCEP (National Centers for Environmental Prediction) and the NCAR (National Center for Atmospheric Research); CO2 data was taken from the Mauna Loa time series which dates from 1958.

Based on these two time series, the study’s calculated IRFs (impulse response functions) are depicted in the figure below, for the alternate possibilities of ΔT => Δln(CO2) (left, in green) and Δln(CO2) => ΔT (right, in red). Clearly, the IRF indicates that ΔT is the cause and Δln(CO2) the effect, since for the opposite case of Δln(CO2) causing ΔT, the time lag is negative and therefore unphysical.

This is reinforced by the correlations shown in the following figure (lower panels), which also illustrates the ΔT and Δln(CO2) time series (upper panel). A strong correlation (R = 0.75) is seen between ΔT and Δln(CO2) when the CO2 increase occurs six months later than ΔT, while there is no correlation (R = 0.01) when the CO2 increase occurs six months earlier than ΔT, so ΔT must cause Δln(CO2). Note that the six-month displacement of Δln(CO2) from ΔT in the two time series is artificial, for easier viewing.
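
Mechanically, the causality test amounts to comparing correlations at opposite lags. A minimal sketch follows, with placeholder monthly series rather than the NCEP/NCAR and Mauna Loa data themselves:

```python
# The "hens and eggs" test boils down to comparing correlations at opposite
# lags: dT leading d(ln CO2) by six months versus trailing it. Both file
# names are placeholders for the actual reanalysis and Mauna Loa series.
import numpy as np
from scipy.stats import pearsonr

dT = np.loadtxt("dT_monthly.txt")          # hypothetical 12-month temperature differences
dlnC = np.loadtxt("dlnCO2_monthly.txt")    # hypothetical 12-month d(ln CO2) differences

lag = 6  # months
r_T_leads, _ = pearsonr(dT[:-lag], dlnC[lag:])    # CO2 change 6 months after dT
r_T_trails, _ = pearsonr(dT[lag:], dlnC[:-lag])   # CO2 change 6 months before dT
print(f"dT leads:  r = {r_T_leads:.2f}")          # the paper reports ~0.75
print(f"dT trails: r = {r_T_trails:.2f}")         # the paper reports ~0.01
```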

However, while the above correlation and the behavior of the impulse response function are impressive mathematically, I personally am dubious about the study’s conclusion.

The oceans hold the bulk of the world’s CO2 and release it as the temperature rises, since warmer water holds less CO2 according to Henry’s Law. For global warming of approximately 1 degree Celsius (1.8 degrees Fahrenheit) since 1880, the corresponding increase in atmospheric CO2 outgassed from the oceans is only about 16 ppm (parts per million) – far below the actual increase of 130 ppm over that time. The Hens and Eggs study can’t account for the extra 114 ppm of CO2.
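
One way to arrive at a number like 16 ppm – my reconstruction, since the figure isn’t derived in the text – is via the commonly quoted sensitivity of seawater pCO2 to temperature of roughly 4% per degree Celsius:

```latex
\Delta \mathrm{CO_2} \;\approx\;
\frac{\partial \ln p\mathrm{CO_2}}{\partial T}\,\Delta T \times \mathrm{CO_2}
\;\approx\; \left(0.04\ \mathrm{K^{-1}}\right)\left(1\ \mathrm{K}\right)\left(400\ \mathrm{ppm}\right)
\;\approx\; 16\ \mathrm{ppm}.
```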

The equally provocative second study, titled “To what extent are temperature levels changing due to greenhouse gas emissions?”, comes from Statistics Norway, Norway’s national statistical institute and the principal source of the country’s official statistics. From a statistical analysis, the study claims that the effect of human CO2 emissions during the last 200 years has not been strong enough to cause the observed rise in temperature, and that climate models are incompatible with actual temperature data.

The conclusions are based on an analysis of 75 temperature time series from weather stations in 32 countries, the records spanning periods from 133 to 267 years; both annual and monthly time series were examined. The analysis attempted to identify systematic trends in temperature, or the absence of trends, in the temperature series.

What the study purports to find is that only three of the 75 time series show any systematic trend in annual data (though up to 10 do in monthly data), so that 72 sets of long-term temperature data show no annual trend at all. From this finding, the study authors conclude it’s not possible to determine how much of the observed temperature increase since the 19th century is due to CO2 emissions and how much is natural.

One of the study’s weaknesses is that it excludes sea surface temperatures, even though the oceans cover 70% of the earth’s surface, so the study is not truly global. A more important weakness is that it confuses local temperature measurements with global mean temperature. Furthermore, the study authors fail to understand that a statistical model simply can’t approximate the complex physical processes of the earth’s climate system.

In any case, statistical analysis in climate science doesn’t have a strong track record. The infamous “hockey stick” – a reconstructed temperature graph for the past 2000 years resembling the shaft and blade of a hockey stick on its side – is perhaps the best example.

The reconstruction was debunked in 2003 by Stephen McIntyre and Ross McKitrick, who found (here and here) that the graph was based on faulty statistical analysis, as well as preferential data selection. The hockey stick was further discredited by a team of scientists and statisticians from the National Research Council of the U.S. National Academy of Sciences.

Next: Extreme Weather in the Distant Past Was Just as Frequent and Intense as Today’s