From Triggers to Thresholds

Most days don’t start with a threat, but with friction. I use scenario thinking as a decision tree: triggers, competing hypotheses, indicators, and thresholds. The goal isn’t prediction. It’s readiness without alarmism.

How I Use Scenario Thinking in Protective Intelligence

Most days do not start with a threat. They start with friction.

A schedule compresses. A venue changes. A phone buzzes, then another news feed. A driver asks a question that needs an answer now. A principal wants reassurance without drama, and the team wants clarity without overreacting. In that space, I rarely get the luxury of certainty. I still have to decide.

At the height of my career, I was actively and passively monitoring 12 to 15 WhatsApp groups. Then came all the one-to-one chats, emails, and calls. When we traveled, temporary groups were added, new contacts appeared, calls increased, and venues and events multiplied on top of what was already there. Luckily, over the years I built a strong international, multidisciplinary team around me, so I could focus on the organizational, tactical, and strategic side of the project. But that overload is real. And still the client wants you present. That presence is non-negotiable. Out of sight is out of mind, but live only behind screens and you are also out of touch with reality.

Intelligence collection is one thing. Analysis is another. Interpretation is an interdisciplinary craft in its own right. But there is something you cannot replace: being there, tasting the environment, reading rooms, reading people, and staying aware. If you do that introspectively and objectively, it adds value that no dashboard can match.

If I treat every weak signal as decisive, I burn credibility, exhaust the team, and slowly train everyone to ignore warnings. In practice, professional judgment lives between two failures: missing escalation, and crying wolf.

What I describe here is not a hero story, and it is not doctrine. It is a way of thinking that I have found repeatable under real constraints. I write in the first person because this method is inseparable from experience. At the same time, I treat my experience as a source with limits. I cannot share operational specifics, identities, or locations. What I can share is the reasoning process: how I map uncertainty into usable choices, and how I discipline imagination so it stays useful instead of becoming alarmism.

I build readiness, not prediction, using a decision tree I can run anywhere

In protective work, whether physical or in cyberspace, scenario thinking is often described casually: “If A then B. If C then D.” It can sound simplistic, but it points to something serious. I am not trying to predict the future as a single line. I am trying to prepare for several plausible paths, fast, like having multiple neuro-tabs open.

In more academic language, this is a decision tree with indicators and thresholds. I just run it in the car, in a corridor, or in the ops room.

When I say “if A, then B,” A is not a vibe. It is an initiating condition I can describe: a trigger. It can be physical, like a change in crowd behavior or a suspicious approach pattern. It can also be informational, like a sudden spike in hostile attention or a leak of location details. What matters is that I can describe it clearly enough that another operator would recognize it.

Insider secret: I literally meditate on this. I visualize scenarios in advance and walk through them. This is not mysticism. It is rehearsal.

From the trigger, I imagine a small set of branches. Usually three is enough. Each branch is a working hypothesis about what happens next. I treat those branches as competing hypotheses. In more formal intelligence terms, this resembles Analysis of Competing Hypotheses (ACH): instead of asking “which story do I like,” I ask “which hypothesis best survives the evidence.” I am not running a full matrix in the car, but I am using the same discipline: keep alternatives alive until indicators separate them. The point is not the story. The point is what I will look for, and what I will do if I see it.

My structure is simple: I name the trigger, sketch three plausible trajectories (from benign to serious), attach indicators that would increase or decrease confidence in each branch, then link each branch to a response through SOPs and escalation thresholds.

Milan, Fashion Week: a photo and caption suggested a confrontation at the hotel entrance. Branch one: online theater, nothing physical. Branch two: opportunistic crowding, phones out, but no coordinated intent. Branch three: coordinated approach, timing and positioning, intent to disrupt movement. My indicators were simple: did I see location specificity and coordination, or only noise? Did I see one person with a camera, or multiple people arriving with a plan? And I linked each branch to a response: branch one is monitor and keep normal tempo, branch two is adjust arrival choreography and distance, branch three is route change and liaison. No drama, but readiness.
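For readers who think in structures, the map above (one named trigger, at most three branches, indicators that raise or lower confidence, and a linked response per branch) can be sketched as a small data model. This is purely illustrative: the class names and fields are my own shorthand for the reasoning described here, not operational tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    """One working hypothesis about what happens after a trigger."""
    name: str
    severity: int                                        # 1 = benign ... 3 = serious
    indicators_for: list[str] = field(default_factory=list)      # raise confidence
    indicators_against: list[str] = field(default_factory=list)  # disconfirmation cues
    response: str = ""                                   # linked SOP or posture change

@dataclass
class ScenarioMap:
    trigger: str          # an initiating condition another operator would recognize
    branches: list[Branch]

    def __post_init__(self) -> None:
        # Cap the branches: too many scenarios become self-induced noise.
        assert len(self.branches) <= 3, "keep the map small enough to act on"

# The Fashion Week scenario from the text, encoded in this shape.
fashion_week = ScenarioMap(
    trigger="photo and caption suggesting a confrontation at the hotel entrance",
    branches=[
        Branch("online theater", 1,
               ["volume without coordination"], ["location specificity appears"],
               "monitor, keep normal tempo"),
        Branch("opportunistic crowding", 2,
               ["phones out, no coordinated intent"], ["repeatable, timed behavior"],
               "adjust arrival choreography and distance"),
        Branch("coordinated approach", 3,
               ["timing, positioning, coordination cues"], ["crowd fades when denied"],
               "route change and liaison"),
    ],
)
```

The point of writing it down this way is not software; it is that every branch is forced to carry its own disconfirmation cues and its own linked response.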

Sometimes I stand in front of the mirror and interrogate myself. Q and A. If I cannot explain my own reasoning clearly, I should not expect a team or a principal to follow it. You do not forget that kind of reflection.

Imagination is not the danger. Unbounded imagination is.

People in this field sometimes treat imagination as suspect, as if imagining possibilities is the same as panicking. I do not see it that way. Imagination is how I fill gaps when the environment is fluid and information is incomplete. Without it, I default to what I already know, and that is how surprise happens.

The problem is not imagination. The problem is imagination without discipline.

The discipline starts with limitation. I cap the number of branches because too many scenarios become self-induced noise. When everything is possible, nothing is actionable. I want a map that guides decisions, not a catalogue of anxieties. Yes, extremes exist. I park them as low-probability, high-impact scenarios, and I track a small counter-indicator so they do not hijack my whole model.

The next discipline is disconfirmation. For every branch I build, I force myself to articulate what would make it less likely. This is how I protect myself from story addiction, the temptation to keep feeding one narrative because it feels coherent.

The third discipline is confidence language. Confidence language is how I stop myself from reacting too hard, too early. I do not only ask “what might be happening,” I ask “how sure am I.” Low confidence means I monitor, verify, and prepare in ways I can reverse quickly. High confidence means I tighten posture and act decisively, but still calmly. The response scales with the indicators, not with my adrenaline.

Bias control under overload: how I keep my thinking clean

The hardest moments are not the obvious threat moments. They are the overload moments, when everything is happening at once and the mind tries to simplify by grabbing the first coherent narrative.

Under overload, we are more vulnerable to confirmation bias. We notice what fits our first impression. We are also more vulnerable to availability bias. We overweight what is vivid, recent, or emotionally charged. In protective work, those biases can push us toward unnecessary escalation, or toward missing a quieter signal because the loud signal consumes attention.

My countermeasure is to force a small pause and ask three questions: what do I think is happening, what else could it be, and what would I need to see in the next ten minutes to tell the difference. That is my street version of ACH: force alternatives, then look for the evidence that discriminates.

Example: in the same situation, one teammate was convinced it was “a setup” because of a recent incident on another trip. I forced a pause and asked what we would need to see in the next ten minutes to separate “coordinated intent” from “opportunistic crowding.” The answer was coordination cues: timing, positioning, and repeatable behavior. That kept us from escalating too early, and it kept us from ignoring the risk if it matured.

Those questions interrupt autopilot. They also help me communicate. Instead of saying “this feels wrong,” I can say “I am tracking two plausible trajectories, and I am watching for these specific indicators.”

Thresholds are how I protect credibility, including with the principal

The most dangerous habit in protective environments is constant escalation. It feels safe because it is active, but it has a cost. It wears out the team, and it trains everyone that warnings are just background noise. I have seen operations where the posture is permanently high, and in that atmosphere the real escalation can be missed because everything already looks like a crisis.

To prevent that, I use thresholds. I decide in advance what moves a situation from monitoring to action, and from action to escalation. This does not need to be bureaucratic. It can be as simple as agreeing that one weak indicator does not change posture, two independent indicators trigger a check, and three indicators trigger a change in movement or liaison. The exact rule varies, but the principle stays the same: escalation is earned by signals, not by fear.
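Under the simple rule just described (one weak indicator holds posture, two independent indicators trigger a check, three change movement or liaison), the escalation logic is a step function over confirmed, independent indicators. A minimal sketch, with the exact cut-offs and posture wording as illustrative assumptions rather than doctrine:

```python
def posture_for(independent_indicators: int) -> str:
    """Map the count of confirmed, independent indicators to a posture.

    Escalation is earned by signals, not by fear: one weak cue does not
    change posture, and the response scales stepwise, not with adrenaline.
    """
    if independent_indicators <= 1:
        return "monitor: keep tempo, verify, prepare reversible options"
    if independent_indicators == 2:
        return "check: adjust spacing, task a spotter, confirm or discard the cue"
    return "escalate: change movement choreography, route, or liaison"

# Cues maturing across the same day, as in the example below.
for cues in (1, 2, 3):
    print(cues, "->", posture_for(cues))
```

Agreeing on a function like this in advance, even informally, is what lets a briefing say "we are adjusting posture" without it sounding like panic.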

This is also how I speak to decision-makers. Thresholds and confidence language keep the briefing proportional and repeatable, so “we are adjusting posture” does not sound like panic, and “we are staying normal” does not sound like denial.

Example: one weak cue, a single person filming aggressively near the expected path. We monitor and keep tempo. Second cue, the same person is joined by two more who position themselves at the choke point. We trigger a check and adjust spacing. Third cue, a car arrives, doors open, more people step out, and the timing aligns with our movement window. That crosses the threshold. We change the arrival choreography and route. Same day, same mission, different posture, because the indicators matured.

Thresholds create calm inside the team. When people know what triggers what, they stop freelancing. They stop trying to read your mind. They act as a unit.

Strategic empathy, without becoming a storyteller

Perspective-taking is another tool that gets misunderstood. When I say empathy in this context, I do not mean sympathy. I mean the disciplined attempt to model another actor’s incentives, constraints, and likely choices without assuming they think like I do.

In practice, strategic empathy helps me avoid mirror-imaging. It helps me avoid assuming the other side is motivated by what would motivate me. It also helps me anticipate what an adversary might consider success, especially when the objective is not physical harm but disruption, humiliation, or narrative dominance.

But empathy has a failure mode. It can become a story that explains everything. When that happens, we stop analyzing and start justifying. So I treat empathy as provisional. I use it to generate branches, and then I force it to compete with alternatives that do not rely on the same psychological story.

A simple test I use is this: can I explain the same behavior with a less dramatic motive, like opportunism, status-seeking, or routine criminality? If yes, I keep both explanations alive until indicators separate them. Empathy, with analytical distance.

Example: a person tries to get close and starts pushing conversation toward personal details. One branch says hostile collection. Another says clout-chasing, someone trying to “have a moment.” The separating indicator is persistence under friction. Opportunistic behavior usually fades when denied. Collection behavior tends to adapt, re-approach, and probe through another angle.

The platform-era twist: when digital signals translate into physical exposure

For close protection and protective intelligence, the modern problem is not only what happens in the street. It is how quickly a narrative can shape the street. Online attention can create new audiences, new hostility, and new coordination. Most of it stays online. Some of it becomes physical exposure. That is the hybrid zone.

So the question I keep asking is translation. Does this digital signal remain noise, or does it create the kind of specificity and intent that changes physical risk?

A doxxing pattern or a coordinated call-to-action is not “content.” It is a trigger. One branch says it stays online theater. Another branch says it becomes physical exposure. The indicators are coordination cues, location specificity, timing, and repeatable intent.

When I see volume without coordination, I treat it as noise. When I see coordination cues, location specificity, or doxxing patterns, my confidence in physical translation increases. The difference matters because it dictates proportional response. If I treat all online hostility as imminent physical threat, I will live in permanent escalation. If I treat all online hostility as harmless, I will be late the day it crosses the line.

Example: a clip goes viral and the comments are ugly, but there is no time and place, no targeting beyond insults. That stays noise. Another day, the same pattern shifts: accounts start sharing a location and time window, and the language becomes “be there.” That crosses into translation risk, and the posture changes.

From analysis to SOP: the only analysis that matters is actionable

I do not judge my scenario map by how clever it is. I judge it by whether it produces a clear action or a clear decision point. At each branch, I want to know what changes in movement, what changes in communication, what changes in liaison, and what changes in abort criteria.

Sometimes the right answer is not a dramatic shift. Sometimes it is a small SOP adjustment that closes a vulnerability, reduces exposure, and keeps tempo. That is often the highest form of competence: quiet correction before the environment forces loud action.

Example: on a client’s arrival, instead of keeping part of the team on standby in the ops room, I placed one operator outside in a covert car as a spotter, reading the street and observing the arrival from another angle, ready to act. Same resources, different posture. Better early warning.

For the martial artists among us, it is close to the concept of Sen sen no sen (先先の先): moving before the opponent even begins, sensing intent early enough that it looks almost psychic. That is how I try to run my scenario map: early, calm, and ahead of tempo, without escalating just to feel active.

Limits and ethics: proportionality is part of the method

Scenario thinking can easily become justification for overreach. I am explicit about a boundary. Speech is not automatically a threat, and attention is not automatically intent. I look for translation indicators before I treat an online narrative as physical risk. The same applies on the street. Body language, crowd movement, and proximity cues can be data, but they are not verdicts.

I also try to avoid contaminating my own thinking with stereotypes or profiling shortcuts. Those shortcuts feel fast, but they corrode judgment over time.

I accept that I may be wrong. That is not weakness. It is the reality of fallible knowledge. The method is designed for that. I build branches so I can update without ego. I use thresholds so I can act without panic. I use empathy so I can model incentives without assuming sameness. And I use disconfirmation so I do not fall in love with my first story.

Closing: what I want operators and analysts to take from this

Scenario thinking is not a special event. It is daily tradecraft. The art is not imagining danger. The art is imagining plausibly, then disciplining that imagination with indicators, thresholds, and SOP linkages so the team stays calm, credible, and ready.

There is also a separate challenge people underestimate: client acceptance. Decision-makers do not always accept intelligence, even when the reasoning is sound and the collection is real. The work is not only to assess, but to communicate in a way that is proportional, credible, and usable.

I do not try to be certain. I try to be prepared, and I try to be honest about what would change my mind.

Further reading

  • Betts, R.K. (1983) ‘Warning Dilemmas: Normal Theory vs. Exceptional Theory’, Orbis, 26(4), pp. 828–843.
  • Dhami, M.K., Witt, J.K. and De Werd, P. (2025) ‘Visualizing versus verbalizing uncertainty in intelligence analysis’, Intelligence and National Security, 40(2), pp. 302–327.
  • Duke, M.C. (2024) ‘Probability and confidence: How to improve communication of uncertainty about uncertainty in intelligence analysis’, Journal of Behavioral Decision Making, 37(1), e2364.
  • Kahneman, D. (2011) Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
  • Paul, C. and Matthews, M. (2016) The Russian “Firehose of Falsehood” Propaganda Model. RAND Corporation.
  • Walker, C. and Ludwig, J. (2017) Sharp Power: Rising Authoritarian Influence. Washington, DC: National Endowment for Democracy.