Scenario Sets

A single analysis often benefits from more than one way of framing the future. Different stakeholders may want to explore different axes of uncertainty, or an analyst may want to iterate on a set of scenarios without discarding earlier work. Scenario sets let you maintain multiple independent collections of scenarios within the same analysis, each with its own constraints, likelihoods, and forecaster input.

What Is a Scenario Set?

A scenario set is a named group of scenarios that belong to a single analysis. Every analysis starts with one scenario set - the primary set - and you can create additional sets at any time. Each set is self-contained: it has its own title, its own list of scenarios, and its own constraint settings.

Scenario sets are useful when you want to:

  • Explore alternative framings of the same issue using different drivers or uncertainty axes
  • Preserve a baseline set of scenarios while experimenting with variations
  • Maintain separate sets for different audiences or time horizons
  • Compare how different scenario structures lead to different likelihood assessments

The Primary Set

Every analysis has exactly one primary scenario set. The primary set is the default - it is the set that appears when you first navigate to an analysis's scenarios, and it is the set referenced by generated products such as action memos and one-pagers. When an analysis is created, Hinsley automatically creates a primary set for it.

You can change which set is primary at any time. Promoting a different set to primary automatically demotes the previous one. The primary set cannot be deleted; if you want to remove it, first promote another set to primary.

Scenario Settings

Each scenario set carries two constraint flags that describe the logical relationship between its scenarios:

  • Mutually exclusive - The scenarios do not overlap. Only one of them can occur. This is the classic “which future will we end up in?” framing, and it means the likelihoods should sum to roughly 100%.
  • Exhaustive - The scenarios cover all reasonable possible outcomes. Combined with mutual exclusivity, this ensures the set represents a complete partition of the future possibility space.

These flags are not just labels. When Hinsley generates AI likelihoods for a scenario set, it factors the constraint settings into its reasoning. A mutually exclusive set, for example, will produce likelihoods calibrated to sum to approximately 100%, while a non-exclusive set allows each scenario's likelihood to be assessed independently of the others.
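To make the calibration behavior concrete, here is a minimal sketch of how a mutual-exclusivity flag could shape a set of likelihoods. The function name and sample figures are illustrative assumptions, not Hinsley's internal API:

```python
# Illustrative sketch only: shows the effect of the mutually-exclusive
# flag on likelihoods. Names and numbers are hypothetical.

def normalize_likelihoods(likelihoods, mutually_exclusive):
    """For a mutually exclusive set, rescale raw likelihoods so they
    sum to 100%; otherwise leave each value independent."""
    if not mutually_exclusive:
        return likelihoods  # non-exclusive: each likelihood stands alone
    total = sum(likelihoods.values())
    return {name: round(100 * p / total, 1)
            for name, p in likelihoods.items()}

raw = {"Stagnation": 40, "Gradual recovery": 35, "Rapid rebound": 35}
print(normalize_likelihoods(raw, mutually_exclusive=True))
# rescaled so the three scenarios sum to 100%
```

With the flag off, the same inputs would pass through unchanged, since overlapping scenarios can each be likely in their own right.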

You can change the constraints for a set at any time from the scenario settings panel. If the set already has scenarios, Hinsley will offer to regenerate likelihoods to reflect the updated constraints.

Cloning a Scenario Set

Rather than building a new set from scratch, you can clone an existing one. Cloning copies all scenarios and constraint settings into a new set. The clone is independent of the original - editing scenarios in one set does not affect the other.

Cloning is particularly useful when you want to:

  • Branch off a working set to test a different framing without risk
  • Create a snapshot of your current scenarios before a major revision
  • Give a collaborator their own copy of a set to modify independently

Scenario Generation & Quality Validation

When Hinsley generates a new set of scenarios, it does not stop at the initial draft. It automatically runs a quality review to check whether the generated scenarios satisfy MECE principles - that they are sufficiently distinct from one another (mutually exclusive) and together cover the reasonable possibility space (collectively exhaustive).

The generation workflow proceeds in three stages:

  1. Generate - Hinsley produces an initial set of scenarios based on the analysis topic, constraint settings, and source material.
  2. Validate - An automated review checks for pairwise overlaps between scenarios, coverage gaps (important futures that no scenario captures), and abstraction or time-horizon mismatches.
  3. Heal - If issues are found, Hinsley attempts to revise the scenarios to resolve them. The validate-and-heal cycle repeats until the issues are resolved or a maximum number of attempts is reached, at which point the workflow finalizes.
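The three stages above can be sketched as a simple loop. This is a hypothetical outline of the control flow, not Hinsley's actual implementation; the function names and the attempt limit are assumptions:

```python
# Hypothetical sketch of the generate -> validate -> heal workflow.
# Function names and the attempt limit are illustrative only.

MAX_HEAL_ATTEMPTS = 3  # assumption: the real limit may differ

def run_generation(generate, validate, heal):
    """Run the three-stage workflow; returns scenarios, a final status,
    and how many healing rounds were needed."""
    scenarios = generate()                    # stage 1: initial draft
    issues = validate(scenarios)              # stage 2: MECE review
    attempts = 0
    while issues and attempts < MAX_HEAL_ATTEMPTS:
        attempts += 1
        scenarios = heal(scenarios, issues)   # stage 3: targeted revision
        issues = validate(scenarios)          # re-check after each fix
    status = "validated" if not issues else "issues_remain"
    return scenarios, status, attempts
```

The key property is that validation runs again after every healing pass, so the workflow only finalizes as "validated" once a review finds no remaining issues.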

Status Badges

While generation is running, a status badge appears next to the scenario set title on the Scenarios page. It updates in real time to show the current stage:

  • Generating - The initial scenarios are being created.
  • Validating - The relationship review is running.
  • Healing (attempt N of M) - Issues were found and Hinsley is revising the scenarios.
  • Validated (green) - The scenarios passed validation. If healing was needed, the badge notes how many refinement rounds it took.
  • Issues remain (yellow) - Some relationship issues could not be fully resolved after the maximum number of healing attempts. The scenarios are still usable but may benefit from manual review.
  • Failed (red) - The workflow encountered an error. A Retry button appears alongside the badge to restart it.
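The badge states above amount to a small mapping from workflow state to display text. The sketch below is purely illustrative; the state names and fields are assumptions rather than Hinsley's schema:

```python
# Illustrative mapping from workflow state to badge text.
# State names and parameters are assumptions, not Hinsley's schema.

def badge_label(state, attempt=0, max_attempts=0, healed_rounds=0):
    if state == "generating":
        return "Generating"
    if state == "validating":
        return "Validating"
    if state == "healing":
        return f"Healing (attempt {attempt} of {max_attempts})"
    if state == "validated":
        # note how many refinement rounds were needed, if any
        note = f" after {healed_rounds} refinement round(s)" if healed_rounds else ""
        return "Validated" + note
    if state == "issues_remain":
        return "Issues remain"
    return "Failed"

print(badge_label("healing", attempt=2, max_attempts=3))
```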

Viewing the Validation Report

Once generation completes, clicking the green or yellow badge opens the full validation report for that scenario set. The report shows:

  • Pairwise overlaps - Pairs of scenarios that share significant conceptual territory, each rated by severity (high, medium, or low).
  • Coverage gaps - Important futures or outcomes that no scenario in the set addresses.
  • Abstraction & time horizon notes - Whether the scenarios are consistent in scope and time frame.
  • Reasons - A summary explanation of the overall validation outcome.

The report preserves the history of each validation and healing attempt in reverse chronological order, so you can see how the scenarios improved across rounds.
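The report's contents, including its reverse-chronological history, could be modeled as plain data along these lines. The class and field names are illustrative assumptions; the actual report schema is internal to Hinsley:

```python
# Sketch of a validation report data model. Field names are
# hypothetical; the real schema is internal to Hinsley.

from dataclasses import dataclass, field

@dataclass
class Overlap:
    scenario_a: str
    scenario_b: str
    severity: str          # "high" | "medium" | "low"
    explanation: str = ""

@dataclass
class ValidationRound:
    overlaps: list         # pairwise overlaps found this round
    coverage_gaps: list    # important futures no scenario captures
    notes: str = ""        # abstraction / time-horizon observations
    reasons: str = ""      # summary of the overall outcome

@dataclass
class ValidationReport:
    rounds: list = field(default_factory=list)  # newest first

    def add_round(self, round_):
        # prepend so history reads in reverse chronological order
        self.rounds.insert(0, round_)
```

Storing each round separately is what lets the report show how the scenarios improved from one healing attempt to the next.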