Insights

How to Tell If a Business Simulation Will Actually Change Behaviour

An engaging simulation isn't always a substantive one. The distinction is testable — and the questions that reveal it don't require a game-design background.

The debrief went well. Participants were animated, honest, and engaged. The facilitator drew out reflections that felt genuine, and people left the room saying it was the best training they'd had in years.

Three months later, nothing is different. The decisions they make at work haven't shifted. The habits they were supposed to challenge are still in place. The simulation was a good day — but not a good investment.

This pattern is familiar to any L&D professional who has commissioned more than a handful of experiential learning engagements. The problem isn't usually that participants didn't pay attention, and it isn't always that the facilitator failed to drive reflection. The problem is that the simulation itself didn't generate the kind of decisions that change behaviour. It was engaging, but it wasn't substantive.

Engagement isn't the same as impact

The L&D market treats engagement as a proxy for effectiveness. Happy participants, positive post-session scores, animated debrief — these things are measured and reported, and they matter. But engagement is a necessary condition, not a sufficient one. A well-produced simulation can fully occupy the room while teaching nothing durable, and if no one is looking for the distinction, “people enjoyed it” and “people changed because of it” quietly collapse into the same sentence.

The most useful thing a buyer can do before commissioning a simulation is learn what separates simulations that transfer to the workplace from ones that don't. The difference is structural, and it sits below the surface of anything a brochure or a showreel will show.

The hollow option problem

One of the most common failure modes in simulation design is the hollow option: a choice that sounds businesslike in the moment but produces no meaningful consequence in the game state. A team encounters an event — a new competitor enters the market, say — and is offered options like “strengthen supplier relationships” or “invest in team resilience.” These are the kind of things a real executive would say. But if the simulation doesn't actually change depending on which option is chosen, the participants aren't making a decision.

They're performing one.

Hollow options are common because they're easy to write. Narrative options are generated first, and the mechanics — the numbers that determine how the simulation responds — are bolted on afterwards, often loosely. The result is a simulation that feels realistic but doesn't teach anyone anything about trade-offs, because there are no real trade-offs to observe.

What substance looks like

Compare two options presented to a team facing a supply-chain disruption:

Option A: Strengthen supplier relationships.

Option B: Spend £5,000 from the operations budget to gain +10% efficiency for two rounds, at the cost of a delayed innovation milestone.

Option A sounds businesslike. Option B is a decision. It has a specific cost, a specific effect, a specific duration, and a specific consequence. When a team chooses it, the game state changes in ways the team can observe. When they choose not to, something different happens. The decision produces evidence, and the evidence is what makes the debrief worth having.
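A toy sketch makes the distinction concrete. The field names and values below are purely illustrative — they don't come from any particular simulation engine — but they show the test: an option either carries quantified mechanical consequences, or it doesn't.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and values are assumptions,
# not the design of any real simulation engine.

@dataclass
class Option:
    label: str
    cost: int = 0                   # spent from a named budget
    efficiency_delta: float = 0.0   # e.g. +0.10 for +10%
    duration_rounds: int = 0
    side_effects: list = field(default_factory=list)

    def is_hollow(self) -> bool:
        """An option with no mechanical consequence is hollow."""
        return (self.cost == 0 and self.efficiency_delta == 0.0
                and not self.side_effects)

option_a = Option(label="Strengthen supplier relationships")
option_b = Option(
    label="Invest in operations",
    cost=5_000,
    efficiency_delta=0.10,
    duration_rounds=2,
    side_effects=["innovation milestone delayed"],
)

print(option_a.is_hollow())  # True  — nothing in the game state changes
print(option_b.is_hollow())  # False — observable, debrief-able consequences
```

Option A fails the test not because the sentence is bad, but because nothing downstream depends on choosing it.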

This distinction — between options that are meaningful and options that merely look like choices — isn't new. “A game is a series of interesting decisions” has been a foundational principle in game design since Sid Meier articulated it in 1989, and meaningful play sits at the centre of the serious-games literature. What applies to a strategy game applies equally to a business simulation: meaningful choice means options with quantified mechanical consequences, not options that merely sound plausible. Without those, a simulation is decoration.

Why this happens

Most traditional simulations are designed narrative-first: the designer drafts the scenario, writes the events, and generates options that would be plausible responses in that fictional world. The mechanical layer — the numbers, the resource flows, the scoring — is added afterwards, to make the game technically playable. In this sequence, the options come first and the mechanics are asked to accommodate them. They often can't, which is why so many simulations have options that sound distinct but produce near-identical outcomes.

A rigorous approach inverts this. The mechanical levers — every way a team's decisions can alter the game state — are defined with numerical values before any narrative is written. Events and options are then built from a pre-validated bank of mechanics, which means every option has a quantified effect by construction. Hollow options become structurally impossible, because an option without mechanics simply cannot exist inside the system.
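The "structurally impossible" claim can be sketched in a few lines. The lever bank and names below are assumptions for illustration, not the article's actual system; the point is simply that when every option must declare numeric effects against pre-validated levers, a narrative-only option cannot be constructed at all.

```python
# Illustrative sketch of mechanics-first construction; lever names
# and validation rules here are assumptions, not a real product.

LEVER_BANK = {
    "cash": {"budget": "treasury", "unit": "GBP"},
    "ops_efficiency": {"budget": "operations", "unit": "percent"},
    "innovation_progress": {"budget": None, "unit": "milestones"},
}

class Option:
    def __init__(self, label: str, effects: dict):
        # effects maps a validated lever name to a numeric delta.
        if not effects:
            raise ValueError(f"option '{label}' has no mechanics — hollow by construction")
        unknown = set(effects) - set(LEVER_BANK)
        if unknown:
            raise ValueError(f"option '{label}' uses unvalidated levers: {unknown}")
        self.label, self.effects = label, effects

# A valid option must quantify its consequences...
ok = Option("Invest in operations",
            {"cash": -5_000, "ops_efficiency": +10, "innovation_progress": -1})

# ...while a narrative-only option cannot exist inside the system.
try:
    Option("Strengthen supplier relationships", {})
except ValueError as err:
    print(err)
```

Narrative is then written *around* options like these, rather than mechanics being retrofitted to free-floating prose.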

Defenders of the narrative-first approach argue that the story is the teaching mechanism: that immersion in a well-written scenario produces the reflection that drives learning, regardless of mechanical fidelity. That view is coherent, and a vivid scenario can carry real weight in the room. But scenarios without mechanical substance tend to produce engaged debriefs and unchanged behaviour. The reflection participants take back to work needs something concrete to reflect on — and mechanics are what supply the concrete.

What to ask before you commission

If you're evaluating a bespoke simulation — or an off-the-shelf one, for that matter — three questions separate the substantive from the hollow.

  1. “Walk me through an event and its response options. For each option, what changes mechanically — in terms of resources, modifiers, or game state?” A designer who can't give concrete mechanical answers is describing theatre, not a game.
  2. “If two teams make completely different decisions through the session, do they end in different places? How different, and why?” If the simulation converges on similar outcomes regardless of decisions, the decisions weren't meaningful.
  3. “In the debrief, what specifically connects a team's choices to what happened on the board? What evidence does the simulation produce that participants can reflect on?” Good simulations generate observable consequences. Weak ones leave the debrief to do all the work.

These questions don't require specialist knowledge to ask. They require the vendor to demonstrate that their simulation is built on mechanics, not atmosphere. What a vendor can and can't answer reveals what they've actually built.

The point of the exercise

Simulations earn their place in a learning programme when they create the conditions for behaviour change: real decisions, observable consequences, and a debrief that connects the two. A simulation that entertains without producing those conditions is a well-staged day, not a training intervention. Everyone has a good time, and no one is measurably different afterwards.

The good news for buyers is that the distinction is testable. The mechanics either stand up to scrutiny or they don't. And the questions that reveal the difference don't require a background in game design — they just require asking, and paying attention to what comes back.

Weighing up a simulation?

If you're evaluating a simulation — whether a bespoke commission or an off-the-shelf programme — a second opinion on whether the design stands up is often worth the conversation. No commitment.

Get in touch