
A Simulated Interview With Ron Dembo

Below is a simulated conversation with Ron S. Dembo about the ideas in Risk Thinking: Decision Making in a Radically Uncertain World. The questions aim to challenge and explore the methodology and its underlying assumptions, and the responses reflect how the author might explain or defend his work.


Q1: Your framework splits risks into deterministic and stochastic elements. However, in practice, many risks appear to be a blend of both. How do you precisely distinguish between what's deterministic and what's stochastic, and isn't that boundary sometimes fuzzy?

A1: That's an excellent point. In theory, the distinction is clear - deterministic components are those we can model precisely given known inputs, while stochastic parts involve inherent randomness. In practice, however, there is often a continuum. The key is to identify the elements whose uncertainty we can reduce through planning, standard procedures, and repeatable practices. For instance, the structural quality of a building can be thoroughly tested (deterministic), while future market conditions remain inherently uncertain (stochastic). When elements blend, we focus on isolating the parts where we can reduce uncertainty and treat the remainder as the stochastic "bet" on the future. It's a matter of incremental refinement and constant re-evaluation as more data comes in.
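(For illustration, here is a minimal Python sketch of that decomposition. It is not from the book; the project-cost setting, function name, and numbers are hypothetical. The engineered cost is treated as a known, deterministic input, while the market exposure is sampled as a range rather than forecast as a single number.)

    import numpy as np

    def project_cost(structural_cost, rng, n_scenarios=5):
        # Deterministic component: known from tested designs and fixed contracts.
        base = structural_cost
        # Stochastic component: future market swings we cannot pin down, so we
        # sample a spread of multipliers instead of committing to one forecast.
        market_factor = rng.normal(loc=1.0, scale=0.15, size=n_scenarios)
        return base * market_factor

    rng = np.random.default_rng(7)
    print(project_cost(10_000_000, rng))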


Q2: Your process emphasizes the generation of scenario trees using expert judgement and even machine learning. Could you elaborate on the algorithmic approach behind generating a "spanning set" of scenarios? How do you ensure that the generated scenarios capture the full spectrum of possibilities, including outliers?

A2: Certainly. Our approach to scenario generation is rooted in capturing uncertainty distributions for each key risk factor. We first collect forward-looking data - whether via structured expert judgement or by parsing expert literature using AI. Then we generate a binary scenario tree - a "spanning set" that considers each risk factor's extreme outcomes, both upside and downside. By discretizing the uncertainty distributions (for example, using key percentiles such as the 10th and 90th), the algorithm constructs a tree that spans from the best-case to the worst-case scenario. We intentionally include outlier responses, as these "fat tails" are critical to understanding potential Black Swan events. The process is iterative: as new data become available, we update the distributions and the spanning set, ensuring that our model remains reflective of the latest insights.
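(The book does not prescribe a particular implementation, but a minimal sketch of the discretization idea might look like the Python below. It assumes each risk factor's forward-looking uncertainty is available as a pooled sample; the factor names and numbers are invented for the example. Each factor is cut at a low and a high percentile, and every combination is enumerated, giving 2^n leaf scenarios that span the extremes.)

    import numpy as np
    from itertools import product

    # Hypothetical pooled samples of forward-looking views for each risk factor.
    rng = np.random.default_rng(0)
    risk_factor_samples = {
        "carbon_price": rng.normal(100, 40, 1000),
        "demand_growth": rng.normal(0.02, 0.03, 1000),
    }

    def spanning_set(samples_by_factor, low_pct=10, high_pct=90):
        """Discretize each factor at two extreme percentiles and enumerate
        every combination - a binary tree with 2**n leaf scenarios."""
        branches = {
            name: (np.percentile(s, low_pct), np.percentile(s, high_pct))
            for name, s in samples_by_factor.items()
        }
        names = list(branches)
        return [dict(zip(names, combo))
                for combo in product(*(branches[n] for n in names))]

    for scenario in spanning_set(risk_factor_samples):
        print(scenario)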


Q3: Expert judgement is central to your framework, but as you acknowledge, experts can be biased or fall prey to groupthink. How do you mitigate the risk of converging biases among experts, especially when many come from similar backgrounds or institutions?

A3: Mitigating groupthink is indeed one of the biggest challenges when aggregating expert opinions. To counter this, our process emphasizes diversification at multiple levels. First, we deliberately seek experts from a broad spectrum of disciplines - even when addressing a single issue, like climate change, we include natural scientists, economists, engineers, and practitioners. Second, when we use machine learning to "read" expert literature, the algorithm scans a vast array of sources, which naturally dilutes any one homogeneous perspective. Lastly, rather than forcing a consensus or an average, we explicitly preserve the full distribution of opinions. This way, divergent, even extreme, views are maintained in the scenario generation process rather than being averaged away. By doing so, we can see not only the "middle of the road" outcomes but also the rare, high-impact possibilities.
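(As a rough illustration of preserving the distribution rather than averaging it - the expert labels and numbers below are invented for the example, not taken from the book - a single consensus figure hides disagreement, while keeping the pooled distribution preserves the extreme views that feed the tails of the scenario tree.)

    import numpy as np

    # Hypothetical point estimates from experts with different backgrounds.
    expert_views = {
        "climatologist": [1.8, 2.4, 3.1],
        "economist":     [1.2, 1.6, 2.0],
        "engineer":      [2.0, 2.8, 4.5],  # includes a deliberate outlier
    }

    pooled = np.concatenate(list(expert_views.values()))

    # A single consensus number hides the disagreement...
    print("mean:", pooled.mean())

    # ...whereas the full distribution keeps the divergent views visible.
    print("10th/50th/90th percentiles:", np.percentile(pooled, [10, 50, 90]))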


Q4: Forecasting has been widely used in many domains despite its limitations. If forecasting is so flawed, why do you think institutions still cling to it, and how does your "Risk Thinking" framework overcome those institutional biases?

A4: Institutions often cling to traditional forecasting methods because they provide a seemingly precise, singular figure - a comforting simplification in a complex world. It's easier to communicate and make decisions on a single projection. However, that precision is illusory when it comes to radical uncertainty. Our framework challenges that mindset by demonstrating that a singular forecast obscures the underlying range of possibilities. By generating and evaluating a spanning set of scenarios, institutions can see the full breadth of risk and make informed decisions about where to bet and where to hedge. Overcoming institutional inertia requires not only new methods but also a cultural shift - a recognition that planning must incorporate flexibility and robustness. Part of our work involves educating decision makers about the pitfalls of over-reliance on deterministic forecasts and showing them, through real-world examples, how a scenario-based approach can lead to more resilient strategies.


Q5: The final step in your process is to constantly update scenarios and re-assess strategies as new information comes in. Given that decision-making environments can be highly dynamic, what mechanisms or processes do you recommend for ensuring that your risk models remain current?

A5: That's absolutely crucial. Our framework is designed to be iterative. I recommend establishing a regular review cycle where key risk factors and expert judgements are updated - this might be quarterly for some industries, or even more frequently during times of crisis. Technologically, integrating real-time data feeds and employing machine learning to continuously scan the literature and new sources can automate parts of this update process. Furthermore, decision-making bodies should create cross-functional teams responsible for "risk re-calibration" sessions, where they review the latest scenario outputs and adjust their strategies accordingly. The goal is to avoid locking in a static view of the future; instead, we maintain a dynamic risk profile that evolves with emerging trends and data. This adaptive approach is what differentiates risk thinking from traditional forecasting.
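(One hypothetical way to express such a re-calibration cycle in code - the window size, numbers, and function name are illustrative assumptions, not the book's method - is to fold newly gathered judgements into the pooled sample for each risk factor and recompute the percentiles that feed the scenario tree.)

    import numpy as np

    def recalibrate(existing_samples, new_samples, window=2000):
        """Fold newly collected expert or market data into the pooled sample,
        keeping only the most recent observations so the distribution
        reflects current conditions."""
        pooled = np.concatenate([existing_samples, new_samples])
        return pooled[-window:]

    rng = np.random.default_rng(42)
    history = rng.normal(100, 40, 1500)   # last quarter's pooled view
    fresh = rng.normal(130, 55, 800)      # newly gathered judgements

    updated = recalibrate(history, fresh)
    print("updated 10th/90th percentiles:", np.percentile(updated, [10, 90]))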


These questions and answers capture some of the core challenges and responses related to applying the Risk Thinking methodology in complex, real-world decision making.