The Disease Has Been Diagnosed. Here Is the Cure.

By Ron Dembo

March 23, 2026 at 7:00 a.m. EST · 13 min read · Thought Leadership

Why the four structural failures at the heart of climate risk modelling are not just academic — and what it takes to fix them.

In a March 2026 LinkedIn article, Gregor Pfalz, a climate data scientist with a PhD in paleoclimatology, laid out with careful academic precision the structural limitations of catastrophe modelling. His diagnosis is thorough and credible. It is also, for those of us building the next generation of climate risk infrastructure, a familiar list.

Pfalz highlights four specific failures: models based on non-stationary historical data, hazards treated as independent when they are not, resolution improvements that overlook real uncertainty, and frameworks that do not correspond with decision-making processes. He advocates for better handling of non-stationarity, enhanced representation of compound hazards, increased focus on vulnerability science, and transparent quantification of uncertainty.

At RiskThinking.AI, and in the work that preceded it, we have been making the same diagnosis — and developing the same solutions — for over a decade. This paper maps Pfalz's framework onto ours and demonstrates that what he describes as an open scientific frontier has, in fact, already been addressed. The solutions are in place. They are operational. They have been proven. The question is no longer whether the problems are real but whether the industry is ready to adopt the solutions.

The Central Claim

Every climate risk system you're likely using is flawed. This isn't just a minor technical issue; it has significant financial impacts — right now.

That is a strong statement. But it rests on a straightforward empirical observation: the foundational climate signal driving every major commercial risk score — at banks, insurers, asset managers, and regulators — traces back to CMIP6, a global climate dataset whose observational data halted in 2014.

Since that cutoff, nine of the ten hottest years on record have occurred (2015–2024). The rapid acceleration in climate impact that shapes today's risk environment is completely missing from most of the models that price that risk. The decade with the most extreme climate events isn't reflected in the data your portfolio is being assessed against.

Pfalz characterizes this as a non-stationarity issue — and he is correct. However, the industry's response has been to acknowledge the problem without adjusting the data. That approach is unacceptable when significant financial losses are at stake.

Four Errors. Four Responses.

The following maps Pfalz's diagnostic framework onto the four specific, verifiable errors we have identified in every commercial climate risk model — and the solutions we have developed and validated for each.

Error 1: The Stale Data Problem

What Pfalz says

Non-stationarity is now the central challenge of catastrophe modelling. Historical calibration — the foundation of the entire industry — assumes that the statistical properties of climate hazards are stable through time. They are not. Loss distributions evolve. Tail behaviour shifts. The framework built to price risk is systematically blind to the direction of travel.

The Error

CMIP6 is the global standard climate dataset underpinning every major risk vendor. Its observational data halted in 2014. Nine of the ten hottest years ever recorded occurred after that date. The models rating your portfolio today have never seen the climate environment they are being asked to price.

Imagine a mortgage risk model built only on housing data before 2008. You wouldn’t trust it. That’s the situation with climate risk today.

Our Solution

The Climate Digital Twin is not a static snapshot. It is a continuously updated computational replica of the Earth's future climate system, recalibrated annually against new observational data. The signal does not go stale because the system is designed to learn. Pfalz rightly calls for combining historical data with physical understanding; that is exactly the architecture we built. Our equivalent of CMIP6 is recalibrated to within a few months of the present.

Error 2: The Three-Scenario Trap

What Pfalz says

Pfalz calls for approaches that "explicitly represent uncertainty in model parameters and evolving hazard regimes" — distinguishing structural uncertainty, parameter uncertainty, and natural variability. He identifies this as the right scientific frontier for the field.

The Error

The industry standard is to choose two or three SSP scenarios, run analyses on each, and report the findings. This isn't stress-testing; it's what-if analysis presented as risk management. When you select three paths from over 2,000 scientifically validated futures, you discard more than 99.8% of the information about the full range of possible outcomes. You will have blind spots.

The consequence is quantifiable. We analyzed 2,903 record-breaking precipitation events between 1982 and 2022. SSP1-1.9 — the Bank of England's favoured scenario — missed 31% of those events. Our stochastic approach missed 3.1%.
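A back-test of this kind reduces to a simple miss-rate calculation: the fraction of observed record events that fall outside a model's predicted envelope. The sketch below is illustrative only; the arrays are invented toy values, not the 2,903-event dataset from the study.

```python
import numpy as np

def miss_rate(observed_events, scenario_envelope):
    """Fraction of observed extreme events exceeding a model's predicted
    per-location upper bound. Arrays are aligned by location; values are,
    e.g., record daily precipitation in mm."""
    observed_events = np.asarray(observed_events, dtype=float)
    scenario_envelope = np.asarray(scenario_envelope, dtype=float)
    return (observed_events > scenario_envelope).mean()

# Toy illustration (made-up numbers, not the study's data):
observed = np.array([120.0, 95.0, 210.0, 80.0, 300.0])   # record rainfall, mm
narrow   = np.array([150.0, 150.0, 150.0, 150.0, 150.0]) # single-scenario bound
wide     = np.array([350.0, 350.0, 250.0, 350.0, 350.0]) # ensemble-style bound

print(miss_rate(observed, narrow))  # 0.4 — 2 of 5 events exceed the bound
print(miss_rate(observed, wide))    # 0.0
```

A narrower envelope misses more of the record; widening the modelled tail is what drives the miss rate down.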

Our Solution

Rather than choosing scenarios, we use an ensemble of all 2,000+ validated pathways — weighted by likelihood and climate science — to generate a full probability distribution. This is structurally equivalent to how market risk works: you do not predict stock prices, you model the future distribution and compute a VaR. Climate risk should be identical in structure. CC-VaR — our Climate Capital VaR — is the climate equivalent of market VaR.
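The VaR construction over a weighted pathway ensemble can be sketched in a few lines. This is a minimal, hypothetical example: the loss figures and likelihood weights below are randomly generated stand-ins, not actual pathway data or the CC-VaR implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: each pathway yields one simulated portfolio loss ($M)
# and carries a likelihood weight (invented numbers, for illustration only).
n_pathways = 2000
losses = rng.lognormal(mean=1.0, sigma=0.8, size=n_pathways)
weights = rng.dirichlet(np.ones(n_pathways))  # weights sum to 1

def weighted_var(losses, weights, level=0.99):
    """VaR of the weighted loss distribution: the smallest loss L such
    that the weighted probability of (loss <= L) reaches `level`."""
    order = np.argsort(losses)
    losses, weights = losses[order], weights[order]
    cdf = np.cumsum(weights)
    return losses[np.searchsorted(cdf, level)]

var99 = weighted_var(losses, weights, 0.99)
print(f"99% VaR-style figure: ${var99:.1f}M")
```

The point of the structure is that no single pathway is "the" forecast; the capital figure comes from the whole weighted distribution, exactly as in market risk.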

The result is that extreme events stop being black swans and become priced tails. Dubai's April 2024 flood, which caused over $10 billion in damage, was not a surprise to our model, built a year earlier; the event fell clearly within our forecast tail. The same model saw Pune's 500mm rainfall event in 2023. We did not predict these events. We priced the bet.

Error 3: The Independence Fallacy

What Pfalz says

This is where Pfalz is most directly aligned with our diagnosis. He writes that catastrophe models treat hazards as separate stochastic processes — a windstorm model, a flood model, a wildfire model — but that "real-world losses increasingly emerge from compound and cascading processes." He lists precisely the chains we have spent years modelling: storms and saturated soils, wildfire followed by debris flows, convective clusters with correlated regional losses, and infrastructure cascades.

His conclusion: the modelling challenge has shifted from estimating individual hazard probabilities to capturing the statistical structure of interacting hazards. This is accurate. It is also a more difficult problem than the industry has recognized. And we have a solution for it that is in production.

The Error

Climate operates in cycles. Hazards are phases in a connected system, not isolated events. A flood following a drought does not behave like a flood in normal soil — the drought has hardened the ground, the water has nowhere to go, the tail loss is compounded. When models treat these events as independent, they systematically underestimate the tail.

Australia's early 2025 floods are the most recent illustration. An extended drought preceded them. Every independent hazard model was blind to the compounding. Our model was not.

Our Solution

Our patented multi-hazard correlation model treats climate as a system, not a catalogue of independent perils. Correlated hazard chains — drought to soil hardening to flash flood to infrastructure damage to capital loss — are modelled as a connected process. The correlation is where the catastrophic losses live, and it is therefore the most important thing to capture correctly.
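A toy simulation can show why the correlation is where the catastrophic losses live. This is a hypothetical sketch, not the patented model: drought and flood are reduced to two correlated normal variables, and a preceding drought amplifies flood damage (hardened soil, more runoff). The functional forms and correlation value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate_losses(rho):
    """Simulate joint drought/flood losses with correlation `rho`."""
    cov = [[1.0, rho], [rho, 1.0]]
    d, f = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    flood_damage = np.exp(f)                  # heavy-tailed base flood damage
    amplifier = 1.0 + np.clip(d, 0.0, None)  # prior drought hardens the soil
    return flood_damage * amplifier

q_independent = np.quantile(simulate_losses(rho=0.0), 0.995)
q_correlated  = np.quantile(simulate_losses(rho=0.7), 0.995)
print(q_correlated / q_independent)  # ratio > 1: correlation fattens the tail
```

With identical marginal hazards, the 99.5% loss quantile is materially larger once the dependence is modelled — which is precisely the tail an independent-peril catalogue underestimates.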

Pfalz recognizes robust dependence modelling as an active scientific frontier. We have a patented solution in production. This is not a frontier problem; it is a deployment issue.

Error 4: The Incompatibility Problem

What Pfalz says

Pfalz notes that "any biases or limitations in the underlying datasets and modelling methodologies are carried through into catastrophe model outputs" — and emphasizes the need for transparent uncertainty quantification rather than single point estimates.

The Error

The typical industry method is to purchase hazard models from different vendors — a flood model from one, a drought model from another, a wind model from a third — and then combine the outputs into a single system. Models built on incompatible climate assumptions cannot be merged without amplifying their individual errors. Combining biased inputs does not produce an unbiased result; it produces a larger bias that looks more comprehensive than it is.

Our Solution

The Climate Digital Twin is a unified system, built on a single consistent set of climate assumptions. There is no stitching together of incompatible models. Hazard distributions are generated from the same underlying stochastic engine, ensuring that correlations and tail dependencies are modelled consistently and coherently worldwide — not artifacts of combining models with conflicting priors.

From Risk Score to Capital Decision

Pfalz identifies a fourth dimension beyond the technical errors — a misalignment between what models produce and what decisions require. Catastrophe models estimate expected losses under assumptions that are held fixed for years, even as the underlying climate evolves. This works well for portfolio-level capital modelling. But it does not answer the questions that matter most to actual decision-makers:

  • How does my risk evolve over the next decade?

  • Do I have enough capital for the tail I am carrying?

  • Does my current pricing reflect the risk I am taking on today?

This is exactly the gap our CC-VaR framework aims to fill. The goal of accurate climate risk analysis is not a hazard score, but a capital figure.

The pipeline is clear yet powerful: Climate Digital Twin → Damage Distribution → CC-VaR → Cash Flow at Risk → Capital Adequacy. Each stage translates physical risk into financial terms, culminating in the question every board and regulator must answer: how much capital is needed, and how does it change over time?
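The five-stage pipeline can be sketched as a chain of transformations. Every function name, functional form, and number below is illustrative and hypothetical — a sketch of the shape of the pipeline, not the production system.

```python
import numpy as np

rng = np.random.default_rng(7)

def climate_digital_twin(n_paths=2000):
    """Stage 1: ensemble of hazard intensities across pathways (toy)."""
    return rng.gumbel(loc=1.0, scale=0.5, size=n_paths)

def damage_distribution(hazard, exposure=100.0):
    """Stage 2: map hazard intensity to monetary damage ($M, toy curve)."""
    vulnerability = np.clip(hazard - 0.5, 0.0, None) * 0.1
    return exposure * np.minimum(vulnerability, 1.0)

def cc_var(damages, level=0.99):
    """Stage 3: tail damage figure, the climate analogue of market VaR."""
    return np.quantile(damages, level)

def cash_flow_at_risk(var, insured_fraction=0.4):
    """Stage 4: the uninsured share of the tail hits cash flow."""
    return var * (1.0 - insured_fraction)

def capital_adequacy(cfar, buffer_held):
    """Stage 5: does the capital buffer cover the tail cash-flow hit?"""
    return buffer_held >= cfar

damages = damage_distribution(climate_digital_twin())
var99 = cc_var(damages)
cfar = cash_flow_at_risk(var99)
print(f"CC-VaR ${var99:.1f}M, CFaR ${cfar:.1f}M, "
      f"adequate with $25M buffer: {capital_adequacy(cfar, 25.0)}")
```

The design point is that each stage consumes the full distribution from the stage before it, so uncertainty is carried all the way through to the capital answer rather than collapsed to a point estimate at the start.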

What the Solutions Look Like in Practice

These are not theoretical improvements. They have been implemented, with quantified results.

A global bank: 2,500,000 mortgages

The question from the board: which mortgages in our portfolio are accumulating climate-driven default risk, and do we have enough capital? Standard ESG scores provided no answers. Neither did the regulatory stress-test scenarios. CC-VaR evaluated every mortgage for forward-looking physical risk concentration, with direct output into ICAAP capital planning. Risk was flagged before it appeared in the arrears data.

A manufacturer: 40 global facilities

The board's question: which facilities are most vulnerable, where should we allocate the adaptation budget, and how can we justify it? Standard hazard scores ranked facilities but did not provide CapEx guidance. CC-VaR offered a list of adaptation options with associated costs and risk-adjusted returns — essentially a capital plan, not just a score. Retrofitting three facilities would cost $4.2M and prevent $38M in tail loss.

This is the option premium framing applied to physical risk. Adaptation is not a cost; it is the price of removing a larger, probabilistic future loss from your distribution. Once you have the full tail modelled correctly, the ROI on adaptation becomes clear — and compelling. In one case, a $1.2M cyclone retrofit prevented $17.5M in tail losses, yielding a 14.6X return. This remains completely hidden until the tail is accurately modelled.
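The return multiples quoted above are simply avoided tail loss divided by retrofit cost — a trivial check of the arithmetic on the two figures in the text:

```python
def adaptation_roi(retrofit_cost, tail_loss_avoided):
    """Return multiple: tail loss removed per dollar of adaptation spend."""
    return tail_loss_avoided / retrofit_cost

# The two cases quoted in the text ($M):
print(round(adaptation_roi(1.2, 17.5), 1))  # 14.6 — the cyclone retrofit
print(round(adaptation_roi(4.2, 38.0), 1))  # 9.0  — the three-facility plan
```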

A regulator: 120+ Canadian institutions

OSFI and AMF in Canada used CC-VaR to stress-test over 120 banks, insurers, and credit unions on a common framework — creating the first quantitative, comparable capital gap measure for physical climate risk across the entire financial system. The issue Pfalz highlights — that each institution uses different models, scenarios, and assumptions — is resolved only with a standardized, physically grounded capital measure. CC-VaR is the measure.

An asset manager: 2,000+ equity indices

The question from the investment committee: our ESG ratings show low climate risk — are they capturing physical tail risk? ESG scores are backward-looking and based on disclosed data, not modelled physical hazard distributions. CC-VaR ranked over 2,200 equity indices by physical climate tail risk, revealing concentration risk that standard ESG ratings are fundamentally blind to. This enabled a like-for-like physical risk comparison between portfolios for the first time.

The Philosophy Shift That Changes Everything

Pfalz ends his article with a call for the field to come together and find solutions. We welcome that. But there is a deeper point that needs to be made alongside the technical ones.

The industry aims to predict the future — select the right scenario and find the correct answer. But the future isn't predictable. Climate is a chaotic, non-linear system. Any model that provides a single number conveys far less than it seems.

Stop attempting to predict the future. Instead, concentrate on evaluating the bets you're already placing.

This illustrates the difference between what-if analysis and true risk management. In market risk, you don't predict stock prices; you model the distribution, calculate a VaR, and hold capital against it. Climate risk should be structured in the same way — and eventually, regulators will mandate it.

The framing is important because it influences what you ask of a model. The right question isn't: what will happen? It's: what bets are you already making, and what does it cost you to make them? Once you have a complete tail distribution, this question can be answered quantitatively. That answer, in turn, affects how boards make capital decisions, how asset managers build portfolios, and how regulators determine capital requirements.

Five Questions to Ask Any Climate Risk Vendor

Pfalz calls for transparent uncertainty quantification — models that communicate structured uncertainty ranges rather than single estimates. Here is a practical implementation of that principle: five questions that will immediately tell you whether the system you are relying on is doing its job.

01. How current is your climate signal? If the answer is CMIP6 without recent updates: walk away.

02. How many pathways do you use? If the answer is fewer than 10: it's what-if analysis, not risk management.

03. How do you handle hazard correlation and consistency? If the answer is a blank look: they don't.

04. Where does your asset data come from? If the answer is postcode-level for a complex portfolio: wrong by construction.

05. Can you backtest your model? If there is no evidence: don't trust the output.

Conclusion

Pfalz closes with: "The challenge is shifting from building more detailed models of individual hazards to developing modelling frameworks capable of representing complex, interacting, and evolving risk systems. That is as much a scientific challenge as it is an industry one."

We agree with the challenge. We disagree that it remains unsolved.

The non-stationarity issue is addressed with a continuously updating Climate Digital Twin. The three-scenario trap is overcome by an ensemble of over 2,000 scientifically valid stochastic pathways. The independence fallacy is tackled by a patented multi-hazard correlation model. The incompatibility problem is resolved through a unified architecture based on a single consistent set of climate assumptions. The decision-alignment issue is closed with CC-VaR — a capital metric that communicates effectively with boards, portfolio managers, and regulators.

Climate change leads to uncertain capital needs. We analyze the range of these requirements. This isn't about predicting the future; it's about understanding the bets you're already making — and providing you with the information to make them knowingly.

The disease has been diagnosed, and a cure exists. The only remaining question is whether the industry moves quickly enough to implement it.