• Short answer: Integrate validated behaviour-change models (like COM-B, Behaviour Change Wheel, or Transtheoretical Model) into service design by using them to identify barriers, shape interventions, and define measurable outcomes. Embed model-informed steps into user journeys, staff workflows, and evaluation loops so services actively support desired behaviours.

  • Key terms

    • COM-B — capability, opportunity, motivation model of behaviour.
    • Intervention function — a way a service can change behaviour (e.g., education, incentives).
    • User journey — sequence of user interactions with a service.
    • Behavioural outcome — specific action the service aims to change.
  • How it works

    • Map target behaviour and users using a model (e.g., identify lacking capability/opportunity/motivation).
    • Design service touchpoints to deliver appropriate intervention functions (reminders, feedback, incentives).
    • Build processes and staff roles to sustain interventions (training, prompts).
    • Measure intermediate outcomes (engagement, capability) and final behavioural outcomes.
    • Iterate using data and qualitative feedback.
  • Simple example

    • A healthcare appointment system adds capability (how-to reminders), opportunity (easier booking), and motivation (feedback on benefits) to increase attendance; a code sketch at the end of this card shows this barrier-to-function mapping.
  • Pitfalls or nuances

    • Models help guide, not guarantee — local context and equity matter.
    • Overloading users with interventions can backfire; test incrementally.
  • Next questions to explore

    • Which model fits my target behaviour and context?
    • What measurable indicators will show change?
  • Further reading / references

    • The Behaviour Change Wheel — Michie, van Stralen & West (paper/book) (search: “Behaviour Change Wheel Michie 2011”)
    • COM-B model overview — UK Behavioural Insights Team (search: “COM-B model explanation”)
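
A minimal sketch of the mapping step described above, in Python. The component labels, function lists, and the `diagnose` helper are illustrative assumptions for this sketch, not a standard COM-B API:

```python
# Sketch: map lacking COM-B components to candidate intervention functions.
# The component labels and function lists are illustrative assumptions.
INTERVENTION_FUNCTIONS = {
    "capability": ["education", "training", "how-to reminders"],
    "opportunity": ["environmental restructuring", "easier booking"],
    "motivation": ["persuasion", "incentives", "feedback on benefits"],
}

def diagnose(barriers: dict[str, bool]) -> list[str]:
    """Return candidate intervention functions for each lacking component."""
    candidates: list[str] = []
    for component, lacking in barriers.items():
        if lacking:
            candidates.extend(INTERVENTION_FUNCTIONS[component])
    return candidates

# Appointment-attendance example: users lack capability and motivation.
barriers = {"capability": True, "opportunity": False, "motivation": True}
print(diagnose(barriers))
# -> ['education', 'training', 'how-to reminders',
#     'persuasion', 'incentives', 'feedback on benefits']
```

In practice the barrier diagnosis would come from user research rather than boolean flags, and each candidate function would still need testing at a real touchpoint.
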
## Embedding behaviour-change models into services

  • Claim: Integrating validated behaviour‑change models into service design makes interventions more targeted, measurable, and sustainable.

  • Reasons:

    • Models (e.g., COM‑B: capability, opportunity, motivation) clarify which barriers to address, so design targets the real problem.
    • Model‑based intervention functions (education, prompts, incentives) map directly onto service touchpoints and staff workflows.
    • Defined intermediate and final outcomes let teams measure, learn, and iterate systematically; a sketch after this card’s definitions shows one way to record them.

  • Example or evidence: A clinic that adds how‑to reminders (capability), simpler booking (opportunity), and outcome feedback (motivation) increases appointment attendance.

  • Caveat or limits: Models guide choices but don’t guarantee success; local context, equity, and user testing are essential.
  • When this holds vs. when it might not: Works when you select an appropriate model and measure outcomes; fails if you ignore context or overload users.
  • Further reading / references:
    • The Behaviour Change Wheel — Michie, van Stralen & West (search: “Behaviour Change Wheel Michie 2011”)
    • COM‑B model overview — UK Behavioural Insights Team (search: “COM-B model explanation”)

Definitions

  • COM‑B: capability, opportunity, motivation model of behaviour.
  • Intervention function: a way a service can change behaviour (e.g., education, incentives).
  • User journey: sequence of user interactions with a service.
  • Behavioural outcome: specific action the service aims to change.
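
A minimal sketch, with made-up target and observed rates, of how a team might record intermediate and final indicators for one behavioural target:

```python
from dataclasses import dataclass

# Sketch: intermediate indicators (engagement) and a final behavioural
# outcome for one target. All names and rates are illustrative assumptions.
@dataclass
class Indicator:
    name: str        # what is counted
    target: float    # goal rate, 0..1
    observed: float  # measured rate this iteration

    def on_track(self) -> bool:
        return self.observed >= self.target

# Behavioural target: attend the booked appointment.
intermediate = [Indicator("reminder opened", 0.60, 0.72),
                Indicator("booking completed", 0.50, 0.41)]
final = [Indicator("appointment attended", 0.80, 0.76)]

for ind in intermediate + final:
    status = "on track" if ind.on_track() else "needs work"
    print(f"{ind.name}: {ind.observed:.0%} vs target {ind.target:.0%} ({status})")
```
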
  • Paraphrase: Service designers convert broad aims (like “increase attendance” or “improve adherence”) into specific intervention functions—education, prompts, incentives—that are then assigned to particular user touchpoints (where users interact with the service) and to staff roles that deliver or support those interventions. A sketch after this card’s references shows one way to record these assignments.

  • Key terms

    • Intervention function — a class of ways to change behaviour (e.g., education, persuasion, prompts, incentives).
    • Touchpoint — any moment or channel where a user interacts with the service (app notification, front‑desk, email).
    • Staff role — the job or responsibility (receptionist, nurse, coach) that delivers or supports an intervention.
    • Behavioural target — the specific action you want users to do (e.g., book and attend an appointment).
    • COM-B — a model showing behaviour depends on Capability, Opportunity, Motivation (used to choose intervention functions).
  • Why it matters here

    • Makes goals actionable: Translating a vague goal into specific functions shows what to do, when, and who should do it.
    • Ensures fit between solution and context: Mapping functions to touchpoints and roles ensures interventions reach users in the right place and are realistically deliverable by staff.
    • Enables measurement and iteration: Concrete functions tied to touchpoints and roles create clear metrics (e.g., prompt sent, prompt acted on) so you can test and improve.
  • Follow-up questions / next steps

    • Which specific behaviour do you want to change (exact action and target users)? — this is needed to pick functions.
    • Do you have existing touchpoints and staff capacity to deliver interventions, or will new channels/training be required?
  • Further reading / references

    • The Behaviour Change Wheel — Michie, van Stralen & West (search: “Behaviour Change Wheel Michie 2011”)
    • COM-B model overview — UK Behavioural Insights Team (search: “COM-B model explanation”)
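
A minimal sketch, assuming hypothetical touchpoints, roles, and metrics, of recording which intervention function happens where, who delivers it, and what to count:

```python
from dataclasses import dataclass

# Sketch: one row per intervention function, tied to a touchpoint, a staff
# role, and a countable metric. All concrete values are illustrative.
@dataclass
class Assignment:
    function: str    # intervention function (e.g., "prompt")
    touchpoint: str  # where the user meets it
    role: str        # who delivers or supports it
    metric: str      # what to count for measurement

plan = [
    Assignment("education", "booking confirmation email", "admin team",
               "how-to guide opened"),
    Assignment("prompt", "SMS 24h before appointment", "automated + front desk",
               "reply or reschedule rate"),
    Assignment("incentive", "front-desk check-in", "receptionist",
               "attendance rate"),
]

for a in plan:
    print(f"{a.function:10s} @ {a.touchpoint:28s} by {a.role:24s} -> {a.metric}")
```

Writing the plan down this way makes the “prompt sent, prompt acted on” style metrics in the card above directly countable.
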
  • Paraphrase

    • Design services to test interventions on real user problems, measure whether they change behaviour, and repeat (iterate) so you can keep what works and expand it safely.
  • Key terms

    • Real problem — an actual, observed user need or barrier (not just a guess).
    • Measure — a specific, tracked indicator (e.g., attendance rate, sign‑up completion).
    • Iteration — a short cycle of designing, testing, learning, and refining.
    • Scale — expanding an intervention so it reaches more users or settings.
    • A/B test — comparing two versions to see which performs better.
    • Process metric — measures how the service is used (engagement); outcome metric — measures the behaviour change you want.
  • Why it matters here

    • Focuses effort: testing on real problems prevents wasting time on unhelpful features.
    • Reduces risk: small, measured iterations show whether an intervention helps before you scale.
    • Improves learning: metrics plus quick cycles reveal which elements cause change (so you can keep the effective parts and drop the rest).
  • Follow-up questions / next steps

    • Which specific user problem and behaviour do you want to test first? (e.g., missed appointments, low sign‑ups)
    • What simple measures will show success? (pick 1–2 outcome metrics and 1 process metric; the sketch after this card computes one of each)
  • Further reading / references

    • The Behaviour Change Wheel — search: “Behaviour Change Wheel Michie 2011” (useful for mapping interventions to behavioural barriers)
    • COM‑B model overview — search: “COM-B model explanation Behavioural Insights Team” (explains capability, opportunity, motivation as causes to target)
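
A minimal sketch of one iteration’s A/B comparison, assuming made-up counts for a control group (A) and a prompt variant (B); a real evaluation would also test statistical significance and check equity splits:

```python
# Sketch: compare control (A) and variant (B) on one process metric and
# one outcome metric. All counts are illustrative assumptions.
def rate(events: int, total: int) -> float:
    return events / total if total else 0.0

groups = {
    "A": {"users": 500, "prompts_acted_on": 0,   "attended": 310},  # control
    "B": {"users": 500, "prompts_acted_on": 220, "attended": 365},  # + SMS prompt
}

for name, g in groups.items():
    process = rate(g["prompts_acted_on"], g["users"])  # process metric
    outcome = rate(g["attended"], g["users"])          # outcome metric
    print(f"{name}: process {process:.0%}, outcome {outcome:.0%}")

lift = rate(groups["B"]["attended"], 500) - rate(groups["A"]["attended"], 500)
print(f"Outcome lift from B: {lift:+.0%}")  # keep B only if the lift holds up
```
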
## Embedding behaviour‑change models can mislead service design

  • Claim: Rigidly embedding behaviour‑change models into services can mislead design, ignore context, and produce ineffective or inequitable outcomes.

  • Reasons:

    • Models simplify complex, situated human behaviour; overreliance can miss social, cultural, and systemic drivers (jargon: “model” = simplified explanatory framework).
    • Validation in one context doesn’t guarantee transferability; staff workflows and user needs vary, so interventions may fail or cause unintended consequences.
    • Operationalising models can prioritise measurable metrics over meaningful change, incentivising gaming or narrow fixes that worsen equity.
    • Models often centre individual agency and may ignore systemic barriers (cost, access, discrimination), worsening inequities.

  • Example or evidence: Trials of nudges often show small, short‑term effects and variable replication across populations; health‑nudge programs based solely on reminders sometimes fail when transport, cost, or distrust (unmodeled factors) prevent attendance.

  • Caveat or limits: This criticism targets rigid, unadapted embedding; models used flexibly, combined with local qualitative research and structural analysis, can still help.

  • When this criticism applies vs. when it might not: Applies to one‑size‑fits‑all, metric‑driven rollouts in complex, unequal contexts with little co‑design; less applicable for narrow, well‑studied behaviours where interventions are co‑designed, piloted, and context‑tested.

  • Further reading / references:

    • The Behaviour Change Wheel — Michie, van Stralen & West (search: “Behaviour Change Wheel Michie 2011”)
    • “Nudge and its limitations” — (search: “limitations of nudging public policy literature”)
    • The Behavioural Insights Team — (search: “limitations of behavioural interventions replication”)
    • Nudge: Improving Decisions About Health, Wealth, and Happiness — Thaler & Sunstein (background reading)