Co-creation of healthcare policy and healthcare innovation: strengths, weaknesses, mechanisms, and a governance model for measurable value
Abstract
Co-creation (and related “co-production” and “co-design”) is increasingly used to shape healthcare policy and to accelerate innovation adoption by involving patients, citizens, clinicians, payers, regulators, and industry in joint problem framing, design, and evaluation. Conceptually, co-creation can increase the relevance, legitimacy, implementability, and equity of innovation-supporting policies, thereby improving efficiency, effectiveness, quality, and outcomes. Empirically, however, systematic reviews repeatedly conclude that outcome evidence is inconsistent, under-measured, and often limited to process or experiential benefits rather than system-level value.
This essay (i) synthesizes documented strengths and weaknesses of co-creation as a policy approach, (ii) maps these to mechanisms and real-world examples across three policy levels (national/regional; payer reimbursement; hospital/provider) and four innovation types (digital health; payment models; workforce redesign; pharma/medtech adoption), and (iii) proposes a governance model designed to produce measurable gains in efficiency, quality, and outcomes through disciplined evaluation, transparent decision rights, and learning-system feedback loops.
1. Introduction
Healthcare systems face persistent performance gaps: unwarranted variation, suboptimal outcomes, workforce constraints, and cost growth. In response, policymakers increasingly look to participatory approaches - co-creation, co-production, co-design - to improve the fit between policy instruments and real-world implementation contexts.
A practical definition is: structured collaboration among affected stakeholders to jointly define problems, design policy or service solutions, and iteratively evaluate and adapt them. In health systems, this spans a continuum from consultation to partnership and shared leadership at multiple levels (direct care, organizational governance, and public policy).
The core hypothesis is that better alignment between policy design and lived operational realities improves adoption and performance - thus enhancing efficiency (resource use), effectiveness (clinical impact), quality (safety, experience), and outcomes (health and patient-reported outcomes).
2. Conceptual framing and evidentiary caution
2.1 Terminology and scope
Health and public-sector literatures often use co-creation/co-production/co-design inconsistently, with overlapping definitions and variable methodological rigor. This matters because “co-creation” can describe anything from a one-time consultation workshop to shared governance with delegated authority - very different interventions with different risks and benefits.
2.2 What the evidence reliably supports - and what it does not
Systematic and umbrella reviews in adjacent domains (public-sector innovation; patient and public involvement; co-production measurement) repeatedly find that:
- co-creation is frequently justified as improving relevance and implementation, but
- robust measurement of downstream outcomes (cost, utilization, equity-adjusted outcomes) is often absent or inconsistent.
Accordingly, claims in this essay are limited to (a) theoretically supported mechanisms and (b) documented examples of co-creation processes - without asserting universal or guaranteed causal effects on efficiency or outcomes unless the cited sources explicitly establish them.
3. Strengths and weaknesses of co-creation as a policy approach (mechanism-based)
Table 1. Mechanisms linking co-creation to innovation performance, with strengths and failure modes
| Mechanism | Strength (why it can improve efficiency/quality/outcomes) | Weakness/failure mode (why it can harm or stall value) | Documented anchor |
|---|---|---|---|
| Epistemic complementarity (experiential + technical evidence) | Surfaces workflow constraints, patient priorities, and contextual factors that conventional evidence appraisal can miss - improving design fit and reducing implementation waste | Experiential input may be non-representative; can be overweighted or undervalued relative to clinical/economic evidence, depending on governance | NICE explicitly incorporates patient experiential evidence in its technology appraisal processes |
| Legitimacy and trust | Increases perceived procedural fairness, improving compliance and adoption (critical for payment reform and digital standards) | Tokenism ("participation-washing") can reduce trust more than no involvement; contested decisions may still be rejected | NICE positions structured public involvement as part of transparent, open processes (policy and process manuals) |
| Implementation ownership | Co-created solutions can reduce resistance, shorten adoption curves, and improve sustainment | Higher transaction costs (time, facilitation), slower decisions; stakeholder fatigue | Reviews note that outcomes are under-measured and processes variable; participation burden is a recurring issue |
| Risk and uncertainty management | Supports "conditional" adoption with evidence development (e.g., CED; managed entry agreements), enabling access while resolving uncertainty | Administrative complexity; data infrastructure gaps; may become a barrier to access or innovation if poorly designed | CMS Coverage with Evidence Development (CED) formalizes conditional coverage linked to study participation and re-review |
| Equity and inclusion (when designed well) | Brings marginalized needs into policy, potentially preventing digital exclusion and inappropriate service redesign | If stakeholder selection is skewed, co-creation can amplify organized interest groups and widen inequities | Reviews of co-design emphasize uneven rigor and risks to authenticity and equity if methods are weak |
| Accountability and transparency (when designed well) | Clear co-creation governance can improve accountability for outcomes and decision rationales | Some innovation policy tools rely on confidentiality (e.g., certain pricing arrangements), limiting accountability and public scrutiny | OECD notes that confidential arrangements can undermine accountability for reimbursement decisions |
4. Mapping strengths/weaknesses across policy levels and innovation types (with concrete examples)
4.1 National/regional policy level
National/regional policy shapes innovation through regulation, standards, Health Technology Assessment (HTA), population health strategy, and system-wide infrastructure (data interoperability, privacy rules, workforce scope-of-practice, etc.). Co-creation here primarily aims to balance societal values, technical evidence, and implementability.
(A) Digital health (standards, evaluation criteria, governance)
Strength: Co-created evaluation standards can reduce duplication and adoption friction by aligning developers, commissioners, and clinical stakeholders around evidence expectations.
Example: The UK National Institute for Health and Care Excellence (NICE) Evidence Standards Framework for Digital Health Technologies was developed with partner organizations and subjected to stakeholder consultation, providing tiered evidence expectations for digital health tools used in purchasing/commissioning decisions.
Key mechanism: epistemic complementarity + implementation ownership (commissioners and developers share a common evidentiary “contract”).
Weakness: Co-design in digital health is often inconsistently reported; poor rigor can lead to solutions that still fail on adoption, equity, privacy, or clinical integration.
(B) Payment models (national rules enabling value-based care)
At the national/regional level, co-creation typically takes the form of consultations and deliberation to calibrate policy trade-offs (risk adjustment, outcome definitions, safeguards).
Strength: improved political feasibility and technical realism; early stakeholder input prevents policy brittleness when provider operations are complex.
Weakness: interest-group capture risk; delays in reform due to contested distributional impacts.
(C) Workforce redesign (scope-of-practice, education, licensure)
Strength: Co-creation with professional bodies and patient groups can identify safe task redistribution opportunities and acceptable accountability structures.
Weakness: Professional boundary disputes and representativeness problems (e.g., “who speaks for frontline staff?”) can stall reform; evidence of downstream productivity gains is rarely measured in co-creation studies.
(D) Pharma/medtech adoption (regulation, HTA, societal values)
Strength: Co-creation can improve legitimacy of difficult rationing and access decisions by combining evidence appraisal with structured public value judgments.
Examples:
- The NICE Citizens Council was established as a deliberative mechanism to incorporate societal values (social value judgments) into NICE decision-making.
- The European Medicines Agency (EMA) Patients’ and Consumers’ Working Party (PCWP) (established 2006) provides recommendations to EMA and scientific committees on matters of interest regarding medicines.
Weakness: Conflicts of interest (COI) and uneven representativeness can bias which patient perspectives are institutionalized; deliberative processes can be time-intensive.
4.2 Payer reimbursement policy level
Payers operationalize innovation through coverage decisions, reimbursement rules, contracting models, and performance measurement. Co-creation is most consequential here when it changes incentives (volume, quality, outcomes) and structured data requirements.
(A) Digital health reimbursement
Strength: Co-created coverage criteria (clinical benefit thresholds, data submission standards, equity safeguards) can reduce wasteful purchasing/production and improve targeting to high-value use cases (VBHC).
Weakness: If payers adopt evidentiary requirements without feasible, workflow-compatible data pathways, innovations may be blocked or may generate compliance burden rather than valuable outcomes for healthcare professionals and patients (a general risk highlighted in digital co-design evidence critiques).
By “compliance burden” I mean the time, effort, cost, and operational complexity that organizations (and sometimes patients) must absorb to meet externally imposed requirements - for example, reporting, documentation, auditing, data submission, coding rules, consent workflows, cybersecurity controls, or eligibility checks - often introduced through policy, reimbursement, regulation, or procurement conditions. In innovation policy (digital health, payment reform, CED/MEAs), compliance burden matters because it can consume (clinical/bedside) capacity that would otherwise deliver care or improvement, and it can reduce net (clinical) efficiency gains even when the underlying innovation is beneficial.
Why compliance burden happens (mechanisms)
- Policy designs that assume data exists “for free” (but the real world has messy, fragmented non-standardized systems, data silos and vendor lock-in).
- Misaligned measures (collecting many metrics that are weakly related to real-life care processes and clinical outcomes).
- Duplicative reporting across payers, regulators, and internal governance.
- Conditional coverage/contracting that requires evidence generation or outcome verification but lacks shared infrastructure and (open, international) standards.
Why compliance burden matters (consequences)
- Efficiency: staff time shifts from cure and care to administration; IT costs rise; cycle times increase.
- Quality/safety: added documentation can distract from clinical work; fragmented workflows increase (bedside) error risk.
- Outcomes: if burden reduces adoption or fidelity, the innovation’s real-world effect diminishes.
How to limit compliance burden (design tactics)
- “Minimum viable (clinically relevant) metrics”: collect only measures that drive decisions (scale/stop, payment adjustment).
- Automate data capture: use existing EHR fields; avoid double entry; require interoperability where possible/necessary.
- Standardize definitions across payers/regions (common data models and measure specs).
- Use stage gates: lighter reporting during early pilots; heavier but still relevant requirements only once value is demonstrated.
- Align incentives: if reporting is required, fund the analytics/admin effort explicitly.
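To make the net-efficiency arithmetic behind these tactics concrete, here is a minimal sketch (all figures and the function name are hypothetical, not drawn from any cited study) of whether an innovation still saves clinical time once reporting and audit effort are subtracted:

```python
# Minimal sketch: does an innovation still save clinical time once
# compliance burden is counted? All figures are hypothetical.

def net_weekly_hours_saved(gross_hours_saved: float,
                           reporting_hours: float,
                           audit_hours: float = 0.0) -> float:
    """Gross clinical time saved minus administrative time added."""
    return gross_hours_saved - (reporting_hours + audit_hours)

# Hypothetical ward: a digital triage tool saves 12 clinical hours/week,
# but conditional coverage requires 5 h of registry data entry and 2 h
# of audit preparation.
net = net_weekly_hours_saved(12.0, 5.0, 2.0)
assert net == 5.0  # still positive, but well below the headline 12 h claim
```

The point of the sketch is the sign check: if the administrative terms exceed the gross saving, a nominally beneficial innovation produces negative net clinical capacity.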
(B) Payment models (bundles, shared savings, episodes)
Strength: Early and structured stakeholder input improves feasibility of model parameters (attribution, risk adjustment, quality measures), reducing unintended consequences and gaming.
Example: Centers for Medicare & Medicaid Services (CMS) has used Requests for Information (RFIs) to solicit public input on future payment model designs, including an Episode-Based Payment Model RFI published in the Federal Register.
Weakness: Consultation-heavy co-creation can become performative; heterogeneous stakeholder preferences can yield compromise designs that are complex and only weakly incentive-aligned.
(C) Workforce redesign reimbursement (funding new roles and team-based care)
Strength: Co-creation between payers and providers can enable new billing/reimbursement structures (e.g., care coordination, community-based roles) with agreed accountability and measurement.
Weakness: Without credible measurement and enforceable accountability, new payments risk becoming add-ons rather than productivity-enhancing substitutions - an evaluation gap commonly observed in co-production literatures.
(D) Pharma/medtech reimbursement (conditional reimbursement; managed entry)
Strength: Co-created arrangements can allow earlier access while managing uncertainty about effectiveness and budget impact.
Documented mechanisms and examples:
- Coverage with Evidence Development (CED): CMS (USA) describes CED as conditional coverage requiring participation in approved studies/registries, with planned evidence re-examination and reconsideration.
- Managed Entry Agreements (MEAs): OECD/EU documentation defines MEAs as arrangements between firms and payers enabling coverage while managing uncertainty (financial or performance-based).
Weakness (documented concerns):
- Confidential pricing/discount practices can limit transparency and public accountability.
- Evaluations (e.g., Belgium) have raised concerns that MEAs may secure lower confidential prices but may not ensure alignment with added value or strong incentives for evidence generation.
- Critiques of CED argue it can operate as a barrier to access/innovation in some circumstances (noting this is a viewpoint, not a universal finding).
4.3 Hospital/provider policy level
Hospitals and provider organizations translate policy into procurement, clinical pathways, digital implementation, workforce configuration, and continuous improvement routines. Here, co-creation is most effective when it is tightly coupled to operational change and measurement.
(A) Digital health implementation (workflow-integrated adoption)
Strength: Co-design with clinicians and patients improves usability, clinical workflow fit, and acceptability - key determinants of adoption. Umbrella reviews on digital co-design emphasize the intent to improve effectiveness and sustainability, while also noting heterogeneity and reporting weaknesses.
Weakness: Co-design can still fail if data integration, interoperability, privacy, and training are inadequately resourced; these are implementation constraints rather than “design workshop” problems.
(B) Payment model implementation (internal incentives, service-line governance)
Strength: Co-creation across finance, clinical leadership, and frontline teams can align internal incentives (pathway redesign, discharge processes) with external payer contracts.
Weakness: If governance does not specify decision rights and accountability, internal co-creation can generate consensus without execution (“meeting-driven change”).
By “meeting-driven change” I mean a common failure mode of co-creation (especially at hospital/provider level) in which the organization invests heavily in workshops, steering committees, and consensus-building meetings, but the work does not translate into implemented operational change, measurable clinical performance improvement, or gains in workforce wellbeing. It is “change” because plans, principles, and (theoretical) pathway diagrams get produced; it is “meeting-driven” because the primary output becomes more process (more meetings) rather than redesigned workflows, reallocated resources, changed incentives, and tracked outcomes.
(C) Workforce redesign (team-based models, task redistribution)
Strength: Co-created redesign can reduce process waste and improve patient experience by reallocating tasks and clarifying roles around patient journeys.
Example mechanism: Experience-Based Co-Design (EBCD) is a structured participatory approach combining patient/staff narratives with service design methods to improve experiences and services. Patients (and carers) and staff work as partners to redesign a service or care pathway, using systematic collection of lived experience (e.g. including filmed narratives) to identify priority “touchpoints” and then co-design practical changes.
Weakness: EBCD and similar approaches can be resource-intensive and may face scalability/generalizability constraints (a limitation frequently discussed in the EBCD evidence ecosystem).
EBD (Experience-Based Design) is used as an umbrella approach to “designing better experiences” by understanding both the care journey and the emotional journey; EBCD is a more structured co-design variant within that family.
(D) Pharma/medtech adoption (procurement, pathways, monitoring)
Strength: Co-creation between clinicians, pharmacists, patients, and procurement can improve appropriate-use criteria and monitoring - supporting outcomes-based adoption when coupled to data.
Weakness: Where external agreements are confidential or measurement is weak, providers may inherit administrative burden without clear value.
5. A co-creation governance model optimized for measurable gains in efficiency, quality, and outcomes
A recurring lesson from the evidence is not that co-creation is “good” or “bad,” but that value depends on governance design: who participates, how decisions are made, what evidence counts, and whether outcomes are measured and used to adapt policy.
Below is a governance model designed explicitly to avoid common failure modes (tokenism, capture, unmeasured impact, excessive process cost).
5.1 Design principles (non-negotiables)
1. Value hypothesis + metric plan before participation begins
Every co-creation initiative starts with a testable value hypothesis linked to efficiency, quality, and outcomes; evaluation design is approved at “gate 0.” This aligns with learning-system logic emphasizing continuous data capture and improvement cycles.
“Gate 0” is a pre-start decision checkpoint that happens before a co-creation initiative is allowed to consume significant time, money, or stakeholder bandwidth. It is borrowed from stage-gate / portfolio governance logic: you do not “start the project” (workshops, pilots, procurement, contracting) until the initiative passes a minimal set of readiness and value-clarity tests. Gate 0 ensures the co-creation process is instrumented for measurable value from the outset, rather than becoming an open-ended engagement exercise.
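Gate 0 can be made operational as a simple pre-start checklist. The sketch below (field names are hypothetical illustrations, not part of any real governance system) refuses to start an initiative until every readiness item is documented:

```python
# Sketch of a "gate 0" readiness check: an initiative may not start
# until every pre-condition is documented. Field names are hypothetical.

REQUIRED = ("value_hypothesis", "metric_plan", "decision_rights",
            "coi_declarations", "evaluation_design")

def passes_gate_zero(initiative: dict) -> bool:
    """True only when every required item is present and non-empty."""
    return all(initiative.get(field) for field in REQUIRED)

proposal = {"value_hypothesis": "Reduce referral cycle time by 20%",
            "metric_plan": "cycle time, cost per episode",
            "decision_rights": "co-decisional within mandate",
            "coi_declarations": True,
            "evaluation_design": None}  # evaluation not yet approved
assert passes_gate_zero(proposal) is False  # blocked at gate 0
```

The design choice is deliberate: gate 0 is a conjunction, so a missing evaluation design blocks the start even when every other item is in order.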
2. Defined decision rights (advisory vs co-decision)
Explicitly state whether the forum is consultative (advisory), co-decisional (shared decision-making), or delegated (delegated authority within a mandate) - consistent with engagement continua described in patient engagement frameworks.
Consultative, co-decisional, or delegated describe the decision rights granted to a co-creation body (a panel, forum, citizens’ jury, clinician–patient design group, etc.). They answer: Who actually decides? and How binding is stakeholder input? The three modes sit on a spectrum from “advise” to “decide.”
3. Representativeness and equity-by-design
Stakeholder selection uses transparent criteria (including targeted inclusion of underserved groups), with training/support and compensation where appropriate (documented in mature involvement infrastructures such as NICE’s approach).
4. Conflict-of-interest (COI) and capture controls
Required declarations; balanced panels; independent chair; publication of minutes and rationales where legally feasible - especially critical in reimbursement and medtech/pharma contexts given transparency concerns around confidential arrangements.
5. Stage-gated experimentation with scale criteria
Co-creation is coupled to piloting and clear scale/stop rules, preventing perpetual pilots without outcomes.
5.2 Institutional architecture (three-tier “Value Co-Creation System”)
Tier 1: National/Regional Health Innovation & Value Council (HIVC)
Mandate: Set standards, guardrails, and evaluation requirements for co-created policies (digital evidence standards, HTA input rules, model contracts, data governance).
Composition: regulators/HTA bodies, payers, provider leaders, patient/citizen representatives, methodologists, ethics and data governance experts.
Key outputs:
- Standardized measurement core set (efficiency, quality, effectiveness/outcomes, equity).
- COI and transparency rulebook for co-creation processes (e.g., a public lobby register).
- A public register of co-created initiatives (purpose, participants, decisions, evaluation plan).
Tier 2: Regional Innovation Sandboxes / Learning Collaboratives
Mandate: Run time-bounded pilots for digital tools, payment models, workforce redesign, and conditional adoption pathways - using shared data infrastructure and rapid-cycle evaluation consistent with learning health system principles.
Operating model:
- “Stage gates” (Design → Pilot → Expand → Scale/Stop), drawing on established project-governance practice (e.g., the Project Management Body of Knowledge (PMBOK) and Earned Value Management (EVM)).
- Required comparators (baseline or matched controls where feasible).
- Data-sharing agreements (including patient privacy safeguards).
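The stage-gate sequence in the operating model above can be sketched as a tiny state machine with an explicit stop rule (gate names follow the text; the pass/fail evaluations are hypothetical):

```python
# Sketch of the Design -> Pilot -> Expand -> Scale/Stop gate sequence.
# Gate names follow the text; the pass criteria are hypothetical.

GATES = ["Design", "Pilot", "Expand", "Scale"]

def next_gate(current: str, met_criteria: bool) -> str:
    """Advance one gate if criteria are met; otherwise stop the pilot."""
    if not met_criteria:
        return "Stop"
    i = GATES.index(current)
    return GATES[i + 1] if i + 1 < len(GATES) else "Scale"

stage = "Design"
# Hypothetical evaluations: design and pilot meet their comparator
# thresholds, but expansion does not.
for criteria_met in (True, True, False):
    stage = next_gate(stage, criteria_met)
assert stage == "Stop"  # the scale/stop rule fires instead of a perpetual pilot
```

Encoding the rule this explicitly is what prevents the “perpetual pilot” failure mode: every gate evaluation has exactly two exits, advance or stop.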
Tier 3: Provider Co-Design & Implementation Units (CDIUs)
Mandate: Execute local redesign using structured methods (e.g., EBCD for pathway/workforce redesign) and implement digital solutions with workflow integration.
Accountability: local outcomes dashboard + escalation to Tier 2 if benefits do not materialize.
5.3 Measurement system (how “measurable gains” are operationalized)
To align with efficiency, effectiveness, quality, and outcomes - while remaining implementable - the model uses a minimum viable evaluation bundle:
- Efficiency: cost per episode or per capita (payer), avoidable utilization (Emergency Department (ED) visits, readmissions), cycle-time metrics (e.g., referral-to-treatment).
- Quality and safety: guideline-concordant care, adverse events, medication safety signals.
- Outcomes: condition-specific outcomes plus patient-reported outcome measures (PROMs) where available.
- Experience and adoption: patient and staff experience measures; adoption/implementation fidelity (critical for digital health).
- Equity: stratified metrics by socioeconomic status, age, ethnicity (where legally and ethically permissible), and digital access proxies (to detect “digital divide” effects highlighted in digital adoption literatures).
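The equity element of the bundle is essentially stratified measurement. A minimal sketch (hypothetical data and group labels) of computing one avoidable-utilization metric - the 30-day readmission rate - by group, to surface gaps worth investigating:

```python
# Sketch: stratifying an avoidable-utilization metric (30-day
# readmission rate) by a grouping variable to surface equity gaps.
# Data and group labels are hypothetical.
from collections import defaultdict

def readmission_rate_by_group(records):
    """records: iterable of (group, readmitted: bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [readmissions, total]
    for group, readmitted in records:
        counts[group][0] += int(readmitted)
        counts[group][1] += 1
    return {g: r / n for g, (r, n) in counts.items()}

sample = [("urban", True), ("urban", False), ("urban", False),
          ("rural", True), ("rural", True), ("rural", False)]
rates = readmission_rate_by_group(sample)
# A large between-group difference flags a gap to investigate, not a verdict.
assert abs(rates["rural"] - rates["urban"]) > 0.3
```

In practice the stratifiers would be the variables named in the bundle (socioeconomic status, age, ethnicity, digital-access proxies), subject to the legal and ethical constraints noted above.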
For broader public health impact and multi-level effects, evaluators can structure reporting across Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM), which was explicitly designed to avoid over-focusing on efficacy alone.
For strategic alignment, systems often frame goals as improving outcomes and experience while controlling per-capita cost (Triple Aim).
The Triple Aim is a health system performance framework that calls for the simultaneous pursuit of three goals:
- Improve the individual experience of care (including quality and satisfaction),
- Improve the health of populations, and
- Reduce per-capita cost of healthcare.
Berwick et al. also emphasize the need for an “integrator” (an accountable entity) that accepts responsibility for all three aims for a defined population.
The Quadruple Aim expands the Triple Aim by adding a fourth goal:
- Improve the work life / well-being of healthcare providers and staff (often framed as clinician and team well-being).
The rationale is that burnout and poor work conditions undermine care quality, safety, and sustainability - so provider well-being is not optional if the other aims are to be achieved.
The Quintuple Aim expands the Quadruple Aim by adding a fifth goal:
- Advance health equity (reducing disparities so that no group is disadvantaged in achieving health outcomes).
This framing argues that equity should be an explicit, measurable aim - rather than an implicit by-product of quality improvement (e.g., via attention to social determinants of health (SDOH)).
In contemporary policy and improvement practice, these aims are commonly used as portfolio-level objectives: the “right” innovation or reform is expected to show credible contribution across aims (with explicit trade-offs if not).
5.4 Policy/innovation-type “playbooks” (to reduce transaction costs)
To prevent co-creation from becoming reinvented process-by-process, the Council maintains four playbooks:
- Digital health playbook: evidence tiering aligned to national standards (e.g., NICE evidence standards framework (ESF)), interoperability and privacy checklist, equity impact assessment, implementation-fidelity monitoring.
- Payment model playbook: standardized methods for stakeholder RFIs, risk adjustment, quality measure selection, gaming safeguards; explicit simplification targets.
- Workforce redesign playbook: structured co-design method options (Experience-Based Co-Design (EBCD)), workforce outcome metrics (time-on-task, burnout proxies where available), and safety guardrails.
- Pharma/medtech adoption playbook: templates for Coverage with Evidence Development (CED) and Managed Entry Agreements (MEAs) with transparency minima, evidence-generation obligations, exit criteria, and patient involvement touchpoints (regulatory + HTA).
6. Discussion: when co-creation is most likely to produce value
Given the documented limitations of outcome evidence, the defensible conclusion about co-creation is conditional:
Co-creation is most likely to improve efficiency, quality, and outcomes when it is (i) tightly coupled to implementable policy levers (coverage, standards, contracts), (ii) governed with clear decision rights and COI controls, and (iii) evaluated with pre-specified metrics and scale/stop rules. Without these, co-creation risks becoming a high-cost process intervention with unclear returns - an issue repeatedly observed in reviews noting sparse and inconsistent impact reporting.
References (selection)
Berwick, D. M., Nolan, T. W., & Whittington, J. (2008). The triple aim: Care, health, and cost. Health Affairs, 27(3), 759-769.
Bodenheimer, T., & Sinsky, C. (2014). From triple to quadruple aim: care of the patient requires care of the provider. The Annals of Family Medicine, 12(6), 573-576.
Carman, K. L., et al. (2013). Patient and family engagement: A framework for understanding the elements and developing interventions and policies. Health Affairs, 32(2), 223-231.
Chambers, J. D., Chenoweth, M., Cangelosi, M. J., Pyo, J., Cohen, J. T., & Neumann, P. J. (2015). Medicare is scrutinizing evidence more tightly for national coverage determinations. Health Affairs, 34(2), 253-260.
Glasgow, R. E., Vogt, T. M., & Boles, S. M. (1999). Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American journal of public health, 89(9), 1322-1327.
Grogan, J. (2023). Medicare's 'Coverage with Evidence Development': A barrier to patient access and innovation. Health Affairs Forefront.
Hashem, F., Calnan, M. W., & Brown, P. R. (2018). Decision making in NICE single technological appraisals: How does NICE incorporate patient perspectives?. Health Expectations, 21(1), 128-137.
Kilfoy, A., Hsu, T. C. C., Stockton-Powdrell, C., Whelan, P., Chu, C. H., & Jibb, L. (2024). An umbrella review on how digital health intervention co-design is conducted and described. NPJ Digital Medicine, 7(1), 374.
Lammons, W., Buffardi, A. L., & Marks, D. (2025). Measuring impacts of patient and public involvement and engagement (PPIE): a narrative review synthesis of review evidence. Research Involvement and Engagement, 11, 76.
Masterson, D., Areskoug Josefsson, K., Robert, G., Nylander, E., & Kjellström, S. (2022). Mapping definitions of co‐production and co‐design in health and social care: a systematic scoping review providing lessons for the future. Health Expectations, 25(3), 902-913.
Madanian, S., Nakarada-Kordic, I., Reay, S., & Chetty, T. H. (2023). Patients' perspectives on digital health tools. PEC innovation, 2, 100171.
McCoy, M. S., Warsh, J., Rand, L., Parker, M., & Sheehan, M. (2019). Patient and public involvement: Two sides of the same coin or different coins altogether?. Bioethics, 33(6), 708-715.
McGinnis, J. M., Stuckhardt, L., Saunders, R., & Smith, M. (Eds.). (2013). Best care at lower cost: The path to continuously learning health care in America. Washington, DC: National Academies Press.
Neumann, P. J., & Chambers, J. D. (2013). Medicare's reset on 'Coverage with Evidence Development'. Health Affairs Forefront.
Neyt, M., Gerkens, S., San Miguel, L., Vinck, I., Thiry, N., & Cleemput, I. (2020). An evaluation of managed entry agreements in Belgium: A system with threats and (high) potential if properly applied. Health Policy, 124(9), 959-964.
Nundy, S., Cooper, L. A., & Mate, K. S. (2022). The quintuple aim for health care improvement: a new imperative to advance health equity. Jama, 327(6), 521-522.
Unsworth, H., Dillon, B., Collinson, L., Powell, H., Salmon, M., Oladapo, T., ... & Tonnel, A. (2021). The NICE evidence standards framework for digital health and care technologies–developing and maintaining an innovative evidence framework with global impact. Digital health, 7, 20552076211018617.
Vargas, C., Zorbas, C., Longworth, G. R., Ugalde, A., Needham, C., Sunil, A., ... & Allender, S. (2025). Exploring co-design: a systematic review of concepts, processes, models, and frameworks used in public health research. Journal of Public Health, fdaf084.
Voorberg, W. H., Bekkers, V. J., & Tummers, L. G. (2015). A systematic review of co-creation and co-production: Embarking on the social innovation journey. Public management review, 17(9), 1333-1357.
Centers for Medicare & Medicaid Services (CMS). Coverage with Evidence Development (CED).
European Medicines Agency (EMA). Patients’ and Consumers’ Working Party (PCWP).
Federal Register (USA). Request for Information; Episode-Based Payment Model.
KCE - How to improve the Belgian process for Managed Entry Agreements? An analysis of the Belgian and international experience. KCE Report 288C (2017)
National Institute for Health and Care Excellence (NICE). Technology appraisal involvement and participation (process manual).
National Institute for Health and Care Excellence (NICE). Evidence Standards Framework for Digital Health Technologies and supporting paper (Unsworth et al., 2021).
NHS England. How co-production is used to improve the quality of services and people’s experience of care: A literature review (2023).
Institute for Healthcare Improvement (IHI). Experience-Based Co-Design of Health Care Services - A Case Study for US Health Care Delivery System Innovation (case study) (2017).
OECD (Wenzl, M., & Chapman, S.). (2020). Performance-based managed entry agreements for new medicines in OECD countries and EU member states: How they work and possible improvements going forward. OECD Health Working Papers No. 115.
An essay concerning a new healthcare (personal view)