Improvement studies for equitable and evidence-based innovation: an overview of the ‘IM-SEEN’ model

Abstract

Background

Health inequalities are ubiquitous, and as countries seek to expand service coverage, they are at risk of exacerbating existing inequalities unless they adopt equity-focused approaches to service delivery.

Main text

Our team has developed an equity-focused continuous improvement model that reconciles prioritisation of disadvantaged groups with the expansion of service coverage. Our new approach is based on the foundations of routinely collecting sociodemographic data; identifying left-behind groups; engaging with these service users to elicit barriers and potential solutions; and then rigorously testing these solutions with pragmatic, embedded trials. This paper presents the rationale for the model, a holistic overview of how the different elements fit together, and potential applications. Future work will present findings as the model is operationalised in eye-health programmes in Botswana, India, Kenya, and Nepal.

Conclusion

There is a real paucity of approaches for operationalising equity. By bringing a series of steps together that force programme managers to focus on groups that are being left behind, we present a model that can be used in any service delivery setting to build equity into routine practice.

Background: pervasive health inequalities

Health outcomes are inequitably distributed across and between populations [1,2,3]. The inverse care law states that the availability of good medical care tends to vary inversely with the need of the population served [4]. The most disadvantaged groups in society often experience the worst health outcomes [5].

As signatories to the Sustainable Development Goals seek to advance Universal Health Coverage (UHC), governments and health system leaders face complex decisions about how to extend access to services whilst balancing equity considerations against cost-effectiveness: for example, it is often expensive to reach disadvantaged and remote communities.

In the 2010 review ‘Fair Society, Healthy Lives’, Michael Marmot introduced the concept of ‘proportionate universalism’ (Table 1), arguing that health services should benefit all, but with the greatest gains experienced by those with the greatest needs [1]. Following on from this, in 2014, WHO published ‘Making fair choices on the path to UHC’, which urged system leaders to focus on extending coverage of a core basket of priority services to all citizens, paying particular attention to ensuring that disadvantaged groups are not left behind [6]. In the same year, WHO and the World Bank issued a joint call for services to routinely gather data on core sociodemographic indicators, arguing that data collection is the essential first step in moving towards redressing health inequalities [7].

Table 1 Proportionate universalism [1, 8, 9]

Unfortunately, whilst sociodemographic data collection has become more widespread, ubiquitous inequalities persist [3], suggesting that our health systems are not translating new intelligence into meaningful action. An added problem is that interventions and service modifications designed to address inequalities are rarely evaluated using robust scientific techniques such as randomised controlled trials (RCTs) [10].

Our team – a collaboration between the International Centre for Eye Health (ICEH) at the London School of Hygiene & Tropical Medicine (LSHTM), the University of Botswana, the Kenyan Ministry of Health, Nepal Netra Jyoti Sangh, the College of Ophthalmology for Eastern, Central and Southern Africa, and Peek Vision – has been funded by the NIHR and The Wellcome Trust to develop and field-test an equity-focused continuous improvement model that addresses these challenges (Table 2). Whilst other publications from our group provide detailed methods for each of the elements and will present emerging findings, this paper seeks to provide a holistic overview of how the model fits together, the issues it seeks to address, and potential application to other fields.

Table 2 Applying the model in the field of eye care

The IM-SEEN model

The model that we have developed is based around three elements: routinely gathering sociodemographic data from service users and regularly interrogating these data to identify which groups are experiencing the worst outcomes; engaging with representatives from these groups to elicit their perspective on the main issues and solutions; and then using rigorous randomisation-based testing of these potential solutions in order to equitably improve outcomes (Fig. 1). Each element requires scientifically grounded work: gathering and analysing data, conducting interviews, and running pragmatic embedded trials.

Fig. 1 The IM-SEEN approach to continually improving equitable outcomes

We have dubbed the overall approach ‘IM-SEEN’: Improvement Studies for Equitable and Evidence-based Innovation. The acronym highlights our focus on engaging with members of underserved groups and basing the improvement cycle around their concerns and ideas, rather than making assumptions or acting on behalf of these communities.

The IM-SEEN model was iteratively developed by a team of public health specialists, statisticians, qualitative researchers, economists, programme implementers, ethicists and government policymakers. AB, ON, MG, SM, MB and NB scoped the initial need for an approach to continually improving health service outcomes with a focus on those ‘left behind’ to close socioeconomic gaps. LA led a series of reviews and the drafting of early models, which were iteratively refined between 2021 and 2023 during a series of online and in-person workshops funded by the NIHR and Wellcome Trust. The core team are co-authors of this paper.

The IM-SEEN process for continuous equitable improvement

Gathering sociodemographic data to identify underserved groups

The first step in the model involves quantifying baseline inequalities and identifying the sociodemographic group(s) with the worst outcomes. This process should be built into routine data collection, with analysis and reporting automated as much as possible.

In our eye programmes, screeners are digitally documenting the sociodemographic characteristics (including age, sex, ethnicity/language, religion, education, health status, assets, and income) of every individual who is found to have an eye need and referred on to receive further care. Quarterly meetings are used to review these data with the programme leads. We use multivariable logistic regression to identify which characteristics are most strongly associated with non-attendance. Detailed methods are available in a separate publication [22].
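
For illustration, here is a minimal sketch of this analysis step in Python, assuming referral records are exported to a flat file; the file and column names are hypothetical, and the team’s actual methods are specified in [22]:

```python
# Minimal sketch (not the authors' published code): multivariable logistic
# regression to identify sociodemographic characteristics associated with
# non-attendance. File and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per person referred from screening; `attended` coded 1/0.
df = pd.read_csv("referrals.csv")

model = smf.logit(
    "attended ~ C(sex) + age + C(education) + C(language) + asset_index",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals: ORs well below 1 flag
# characteristics associated with non-attendance, i.e. candidate
# 'left-behind' groups for the engagement step that follows.
or_table = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table)
```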

Understanding why certain groups do not attend – and what could be done about it

Once the characteristics most strongly associated with non-attendance have been identified, the next step is to engage with representatives from these underserved group(s) to understand the barriers they face, and then collaboratively identify service modifications that might improve outcomes. These engagement and co-creation processes should seek to obtain meaningful and actionable data with minimum time and resource requirements.

Our team has conducted a scoping review of rapid qualitative methods that can be used to elicit barriers and potential solutions [23]. Based on this work we have developed a bespoke rapid qualitative elicitation approach: research assistants will perform telephone interviews with non-attenders in each setting and use an a priori deductive framework to code responses. The sample size will be determined by thematic saturation. The long list of barriers and potential solutions derived from these interviews will not necessarily be generalisable to all non-attenders from the same underserved group. To identify the potential solutions that are felt to offer the most value by a statistically representative sample, we will send SMS messages to approximately 400 other non-attenders from the same underserved group, asking them to rank the mooted solutions. The top-ranked interventions will be reviewed by the national leadership team to assess risk, cost, feasibility, and likely impact. Safe and feasible interventions that have a scientifically plausible mechanism of action will be implemented and rigorously evaluated. A detailed protocol for this elicitation process has been published online [24].
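
As a rough illustration of how ranked SMS replies could be aggregated, the sketch below uses a simple Borda count; the reply format, solution labels, and scoring rule are assumptions for this example rather than details from the published protocol [24]:

```python
# Illustrative aggregation of SMS ranking replies with a Borda count.
# Assumed format: each reply lists solution IDs in order of preference.
from collections import defaultdict

solutions = {"A": "transport voucher", "B": "weekend clinics", "C": "reminder call"}
replies = ["B,A,C", "B,C,A", "A,B,C"]  # in practice, ~400 replies

scores = defaultdict(int)
for reply in replies:
    ranking = [s.strip() for s in reply.split(",") if s.strip() in solutions]
    for position, sid in enumerate(ranking):
        scores[sid] += len(solutions) - position  # first choice scores highest

# Top-ranked solutions go forward to the national leadership review.
for sid, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{solutions[sid]}: {score}")
```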

Testing promising interventions

Once a set of interventions has been derived from engaging with non-attenders, the next step is to implement them and evaluate whether they improve outcomes and reduce sociodemographic gaps. The IM-SEEN model uses a platform randomised controlled trial (RCT) design to assess whether a service modification is causally associated with improvement. This means that the intervention is randomly allocated to individuals or sites. This is only ethical when there is clinical equipoise, i.e. when it is unclear whether the intervention is better or worse than the status quo. Each intervention will be reviewed by an independent in-country ethics committee.

Allocation, outcome assessment, statistical testing, and reporting should be automated as much as possible to reduce costs to the health programme. Changes within the most underserved groups are the primary outcomes. The mean change for the entire population is a secondary outcome.

In Botswana’s eye screening programme, we have embedded an automated platform trial that routinely collects and analyses all referral and attendance data. A simple Bayesian algorithm coded in R allocates referred individuals to the intervention or control arm, automatically reviews attendance data, and performs interim statistical testing according to predetermined stopping rules. The algorithm continually adjusts the allocation ratio to favour the best-performing arm(s), minimising the number of people who are assigned to less effective or ineffective arms. Our trial is not yet complete, but the detailed protocol has been published elsewhere [25].
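
The actual allocation algorithm is written in R and specified in the protocol [25]; the sketch below illustrates the general principle with Beta-Bernoulli Thompson sampling in Python, using invented arm names and attendance rates:

```python
# Sketch of response-adaptive allocation via Thompson sampling: each arm's
# attendance rate has a Beta posterior, and each referred person is allocated
# to the arm with the highest posterior draw, so the allocation ratio drifts
# towards the best-performing arm as attendance data accrue.
import random

arms = {"control": [1, 1], "new_reminder": [1, 1]}  # Beta(1, 1) priors

def allocate() -> str:
    draws = {arm: random.betavariate(a, b) for arm, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def record_outcome(arm: str, attended: bool) -> None:
    arms[arm][0 if attended else 1] += 1  # update posterior counts

# Simulated referrals; here the reminder arm truly performs better.
true_rate = {"control": 0.50, "new_reminder": 0.70}
for _ in range(2000):
    arm = allocate()
    record_outcome(arm, random.random() < true_rate[arm])

# In a real trial, interim testing against predetermined stopping rules
# would run on these posteriors; here we simply report them.
for arm, (a, b) in arms.items():
    print(f"{arm}: n={a + b - 2}, posterior mean attendance={a / (a + b):.2f}")
```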

We are in the process of seeking ethical approval to establish platform RCTs in each country. These use a master protocol that specifies the population (people identified with an eye care need) and primary outcome (attendance), but allow multiple interventions to be tested over time. Every time a new intervention is suggested, ethics committees only have to review the risks of that intervention, having already approved the overall trial architecture. This makes it much more efficient than running serial individual RCTs for each new intervention that is suggested. We are in the process of publishing a detailed protocol for the overall platform trial design.

Taking effective service improvements to scale

Once interventions have been rigorously assessed, the final step is to take effective interventions to scale across the entire national programme and then repeat the cycle. We envisage that the process will lead to incremental improvements, with approximately 1–2 cycles per year, depending on local leadership and resourcing.

Why is this model needed?

From data collection to action

Many services now acknowledge and quantify inequalities but do not or cannot translate this intelligence into meaningful action. Where it does happen, the disaggregation of data to assess inequalities and intersectionality [26] often occurs only at the completion of a programme, when there is low potential for the findings to result in change. We feel that there is a need for a practical tool to guide managers through the process of systematically analysing routinely collected sociodemographic data in real-time, and then turning that insight into robust action to improve outcomes for all service beneficiaries, with the greatest effort focused on those with the greatest need.

Engaging and co-creating

Whilst people affected by a given problem tend to have sensible ideas about how to fix it, initiatives to target underserved groups (e.g. those living in remote areas) are rarely developed with meaningful input from service users themselves [27, 28]. Instead, managers sit down to discuss potential issues and solutions on behalf of the underserved groups, and then implement service modifications without further consultation. This is partly because it can be time-consuming and expensive to seek non-tokenistic input from others – especially from those at the margins of society [27]. However, this needs to change. Community engagement and empowerment is one of the core tenets of Primary Health Care [29] and all governments have committed to deliver health systems that place greater decision-making power in the hands of the people [9, 29].

A model for continuous equity-driven service improvement should meaningfully engage with representatives of the groups found to be facing the highest barriers. Ultimately it is these service users who have the best understanding of why they cannot access care or achieve good outcomes, and they are likely to have practical ideas for how the service could be modified to better serve their population.

We note that service leaders need scientifically robust yet rapid and affordable methods for eliciting barriers and co-designing solutions; however, current engagement exercises tend to cluster at two opposing poles: expensive, bespoke, in-depth qualitative research that takes many months to plan and execute on one hand, and zero/tokenistic engagement on the other. The first approach provides robust findings at a very high cost to service providers; the second is affordable but does not produce usable intelligence. Somewhere between the two lies a minimum viable product: the cheapest and fastest possible approach that delivers meaningful data based on genuine engagement.

Industry tends to use focus groups and telephone surveys for rapid market research, but we are not aware of any rapid pragmatic research methods being routinely used in health service improvement; for instance, the recent King’s Fund workshop on ‘improving services by listening to patient voices’ did not showcase any qualitative methods that could be conducted in fewer than six months [30]. This is a strategic barrier to co-production [31]. Our work to develop rapid yet robust methods represents a step forward, but our approach is still in the process of being tested. The IM-SEEN model stipulates that ideas for service improvements should come from engagement with affected communities, but does not dictate the exact methods as different contexts require different approaches.

Checking whether ‘service improvements’ actually improve services

Once potential solutions have been identified it is vital that they are rigorously evaluated. This should entail checking whether any changes made to the service lead to changes in outcomes – positive or negative – as well as understanding the effect size and distribution among different groups. Specifically, it is important to check that access and outcomes improve for all groups, ideally with the greatest gains observed among groups with the greatest need.

Despite widespread lip service to ‘continuous improvement’, in our experience, service modifications designed to boost equity are often conducted as one-off initiatives. Furthermore, efforts to reduce inequalities tend to be poorly evaluated [10]. This is surprising given the rise and rise of Plan-Do-Study-Act (PDSA) cycles [32,33,34]. Whilst the core PDSA model is based on the scientific approach of formulating a hypothesis, collecting data to test the hypothesis, analysing and interpreting results, and making inferences to iterate the hypothesis [35], most quality improvement initiatives fail to quantify change appropriately and it is rare to find truly iterative examples where services have progressed through more than one or two revolutions of the cycle [36, 37].

Even when a service does routinely gather high quality data and test hypothesis-driven innovations, the process tends to be limited by an overdependence on crude before-after testing or interviews with a handful of service users (which can offer valuable information about how and why an intervention works but tell us nothing about the mean effect size). We need to be sure that any observed changes in outcomes are driven by service modifications. More than that, we need to ask if it is ethical to modify services without recourse to robust means of evaluating impact – especially where unintended consequences could lead to harm or a deterioration in service quality or equity.

The most robust means of evaluating whether service innovations, reconfigurations, amendments, adaptations, and other ‘improvements’ actually confer benefit is by conducting randomised controlled trials [38]. However, RCTs are generally expensive, require specialist statistical support, and can take years to run, rendering them unfeasible for most settings [39]. When resources are available, the high price tag exerts strong pressure to reserve this tool for service amendments that have a high ‘pre-test’ probability of success. This means that the least promising service modifications are systematically subjected to the weakest levels of methodological scrutiny, potentially squandering resources, incurring opportunity costs, and even exposing users to harm.

The rising use of RCTs in industry – often referred to as ‘A/B testing’ – has spawned a wave of low-cost, automated approaches to running real-time pragmatic trials in order to optimise services with high-quality empirical data. The ‘test everything with controlled experiments’ approach was born of the observation that tiny service changes sometimes had large impacts on important outcomes, and that most large, expensive reforms based on promising ideas fail to deliver the intended change [40]. Allied work from non-health areas of continuous improvement has demonstrated that multiple small improvements can lead to large overall gains – strengthening the case for multiple rapid tests of multiple service modifications [41, 42]. This mature and powerful ‘test everything’ approach is being used to optimise search engines, improve web page click-throughs, and drive profit margins [43,44,45] but has not yet made the transition to health service improvement.

As health programmes increasingly digitise patient flow, opportunities are emerging to embed prospective randomisation and statistical testing into administrative software [46]. The adoption of ‘built-in’ testing would reduce the barriers to routine RCT testing. By making it easier to perform RCTs to test service modifications, we would vastly improve safety by helping managers to reliably differentiate between effective and ineffective amendments. The automation of randomisation, allocation, and statistical analysis works best when algorithms can be directly embedded into clinical software, as this eliminates the delays associated with human factors.
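
To make ‘built-in’ testing concrete, the sketch below shows one common industry technique – deterministic hash-based assignment, which yields a fixed-ratio allocation that is reproducible and auditable without any manual step. This is a generic illustration rather than the IM-SEEN implementation (which is adaptive), and all identifiers are hypothetical:

```python
# Deterministic hash-based arm assignment, as used in many A/B-testing
# systems: hashing a stable patient identifier with a trial-specific salt
# gives each person a stable, auditable allocation with no human involvement.
import hashlib

def assign_arm(patient_id: str, trial_salt: str, arms: list[str]) -> str:
    digest = hashlib.sha256(f"{trial_salt}:{patient_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

# Example call with invented values; re-running always gives the same arm.
print(assign_arm("BW-000123", "reminder-trial-v1", ["control", "intervention"]))
```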

Even automated RCTs still take time and specialist expertise to set up, and these costs mean that programmes will have fewer resources to deploy for service delivery. The time taken to design the trial and obtain ethical approval can also delay the implementation of potential service improvements. These costs and delays must be weighed against the fact that introducing interventions without robust evaluation can lead to the unknowing delivery of ineffective or harmful interventions. Nevertheless, given the work, time and costs involved in setting up a platform trial, this approach will deliver the greatest cost-benefits if used to continually assess a large number of interventions over a long period of time.

Changes and interventions that are found to be effective at improving outcomes and reducing inequalities should be taken to scale across entire services. In summary, there is a need to develop embedded RCT testing code that can run resource-light trials in order to provide robust evidence on whether well-intentioned service modifications are helping or harming.

Discussion

In this paper we have presented an overview of the IM-SEEN model and a description of how we are applying it in the field of eye health in four different country programmes. A key strength and limitation of the model is that it describes essential elements but does not prescribe the exact methods. Whilst we are using a specific set of sociodemographic indicators and multivariable logistic regression to identify groups with the lowest attendance rates in Botswana, Kenya, India and Nepal, this specific approach will not be appropriate for all scenarios. To take a hypothetical example, a regional cervical screening service associated with urban/rural disparities may want to use chi-square testing, followed by Rapid Anthropological Assessment [47], as these specific methods are best suited to the programme’s needs. Similarly, our model is based on the use of automated adaptive RCTs as these minimise the number of people exposed to ineffective or harmful interventions and should facilitate rigorous and efficient continuous identification of service modifications that improve equitable outcomes. However, there are virtually infinite potential configurations for these RCTs and it would not be appropriate for our team to mandate one specific approach.
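
To make that hypothetical concrete, a chi-square test of attendance by urban/rural residence might look like the following toy sketch (counts invented for illustration):

```python
# Toy chi-square test for the hypothetical cervical screening example:
# is attendance independent of urban/rural residence? Counts are invented.
from scipy.stats import chi2_contingency

#          attended  did not attend
table = [[420, 180],   # urban
         [240, 360]]   # rural

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```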

Whilst the model has been designed for use in any field, its initial deployment and empirical testing is underway in community-based eye health services. Our model directly supports the recommendations of the 2019 World Report on Vision through promoting high quality implementation and health systems research, empowering people and communities, and creating an enabling environment to implement integrated people-centred eye care [48]. These themes resonate with the core pillars of the Astana Declaration on Primary Health Care: empowering people and communities, and advancing equitable care that is responsive to local needs [29].

One major advantage of testing the model in smartphone-based eye screening programmes is that exposure and outcome data are routinely digitally collected and stored in a unified database where an automated testing system can operate with minimal need for human intervention. We are keen to apply the model to address other areas such as the inequitable uptake of cancer screening, inequitable diagnosis and provision of treatment for diabetes and hypertension, and the distribution of vaccines. The model demands that sociodemographic data are obtained from intended service beneficiaries and that the primary outcome is recorded – be that attendance, treatment, cure, or anything else. Ideally, the primary outcome will be recorded routinely and digitally for every patient. Where this is not the case, additional costs will be incurred. Taking eye care as an example, the ultimate outcome is corrected vision but service attendance is often used as a proxy.

There has been a proliferation of theoretical models of proportionate universalism and pro-equity service delivery, but as Francis-Oliviero and colleagues note in their review of the field, interventions and real-world examples are rare [10]. As far as we are aware, the IM-SEEN model is the first operational model that has been developed to drive continuous evidence-based and equitable improvement in real-world programmes. As results from the model’s application in the field of eye care services emerge, we will continue to refine the approach and apply it to other areas. We encourage other researchers, programme managers and policymakers to adopt the principles – if not the model itself – in future work to extend health service coverage to all groups, with a focus on those with the greatest need.

Availability of data and materials

No data are available.

References

  1. Marmot M, Allen J, Goldblatt P, Boyce T, McNeish D, Grady M, et al. Fair Society, Healthy Lives: The Marmot Review. London: Institute of Health Equity; 2010. Available from: https://www.instituteofhealthequity.org/resources-reports/fair-society-healthy-lives-the-marmot-review. Cited 2021 Nov 11.

  2. Marmot M. Social determinants of health inequalities. Lancet Lond Engl. 2005;365(9464):1099–104.

  3. World Health Organization. Health inequities and their causes. Available from: https://www.who.int/news-room/facts-in-pictures/detail/health-inequities-and-their-causes. Cited 2022 Mar 9.

  4. Hart JT. The inverse care law. Lancet. 1971;297(7696):405–12.

  5. World Health Organization. Closing the gap in a generation: health equity through action on the social determinants of health - Final report of the commission on social determinants of health. Geneva. Available from: https://www.who.int/publications-detail-redirect/WHO-IER-CSDH-08.1. Cited 2021 Nov 11.

  6. World Health Organization. Making fair choices on the path to universal health coverage: final report of the WHO consultative group on equity and universal health coverage. World Health Organization; 2014. 78 p. Available from: https://apps.who.int/iris/handle/10665/112671. Cited 2021 Oct 15.

  7. World Health Organization, World Bank. Monitoring progress towards universal health coverage at country and global levels: framework, measures and targets. World Health Organization; 2014. Report No.: WHO/HIS/HIA/14.1. Available from: https://apps.who.int/iris/handle/10665/112824. Cited 2021 Nov 11.

  8. Allen LN. The philosophical foundations of ‘health for all’ and universal health coverage. Int J Equity Health. 2022;21(1):155.

  9. WHO and UNICEF. Declaration of Alma-Ata. Available from: https://www.who.int/teams/social-determinants-of-health/declaration-of-alma-ata. Cited 2022 Mar 23.

  10. Francis-Oliviero F, Cambon L, Wittwer J, Marmot M, Alla F. Theoretical and practical challenges of proportionate universalism: a review. Rev Panam Salud Pública. 2020;44: e110.

  11. Burton MJ, Ramke J, Marques AP, Bourne RRA, Congdon N, Jones I, et al. The lancet global health commission on global eye health: vision beyond 2020. Lancet Glob Health. 2021;9(4):e489-551.

  12. Dantas LF, Fleck JL, Cyrino Oliveira FL, Hamacher S. No-shows in appointment scheduling - a systematic literature review. Health Policy Amst Neth. 2018;122(4):412–21.

  13. Ramke J, Kyari F, Mwangi N, Piyasena M, Murthy G, Gilbert CE. Cataract services are leaving widows behind: examples from national cross-sectional surveys in Nigeria and Sri Lanka. Int J Environ Res Public Health. 2019;16(20):E3854.

  14. Bastawrous A, Rono HK, Livingstone IAT, Weiss HA, Jordan S, Kuper H, et al. Development and validation of a smartphone-based visual acuity test (Peek Acuity) for clinical practice and community-based fieldwork. JAMA Ophthalmol. 2015;133(8):930–7.

  15. Rono H, Bastawrous A, Macleod D, Mamboleo R, Bunywera C, Wanjala E, et al. Effectiveness of an mHealth system on access to eye health services in Kenya: a cluster-randomised controlled trial. Lancet Digit Health. 2021;3(7):e414–24.

  16. Khan AA, Talpur KI, Awan Z, Arteaga SL, Bolster NM, Katibeh M, et al. Improving equity, efficiency and adherence to referral in Pakistan’s eye health programmes: Pre- and post-pandemic onset. Front Public Health. 2022;10: 873192.

  17. Morjaria P, Bergson S, Bastawrous A, Watts E, Pant S, Gudwin E, et al. Delivering refractive care to populations with near and distance vision impairment: 2 novel social enterprise models. Asia-Pac J Ophthalmol Phila Pa. 2022;11(1):59–65.

  18. Morjaria P, Bastawrous A, Murthy GVS, Evans J, Sagar MJ, Pallepogula DR, et al. Effectiveness of a novel mobile health (Peek) and education intervention on spectacle wear amongst children in India: results from a randomized superiority trial in India. EClinicalMedicine. 2020;28: 100594.

  19. Rono H, Bastawrous A, Macleod D, Bunywera C, Mamboleo R, Wanjala E, et al. Smartphone-guided algorithms for use by community volunteers to screen and refer people with eye problems in Trans Nzoia County, Kenya: development and validation study. JMIR MHealth UHealth. 2020;8(6): e16345.

  20. Rono HK, Bastawrous A, Macleod D, Wanjala E, Tanna GLD, Weiss HA, et al. Smartphone-based screening for visual impairment in Kenyan school children: a cluster randomised controlled trial. Lancet Glob Health. 2018;6(8):e924–32.

  21. News. Peek Vision. Available from: https://peekvision.org/en_GB/news/. Cited 2022 Apr 1.

  22. Allen LN, Nkomazana O, Mishra SK, Ratshaa B, Ho-Foster A, Rono H, et al. Sociodemographic characteristics of community eye screening participants: protocol for cross-sectional equity analyses in Botswana, Kenya, and Nepal. Wellcome Open Research. Available from: https://wellcomeopenresearch.org/articles/7-144. Cited 2022 May 4.

  23. Allen L, Azab H, Jonga R, Gordon I, Karanja S, Thaker N, et al. Rapid methods for identifying barriers and solutions to improve access to community health services: a scoping review. SSRN. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4427283. Cited 2023 Apr 28.

  24. Allen LN, Nkomazana O, Gichangi M, Mishra SK, Karanja S, Burton MJ, et al. Barriers to accessing community eye clinics in Botswana, India, Kenya, and Nepal and potential solutions: protocol for an exploratory-sequential mixed-methods study. Lancet Prepr. Available from: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=44272851.

  25. Allen LN, Ratshaa B, Macleod D, Bolster N, Burton M, Kim M, et al. Protocol for an automated, pragmatic, embedded, adaptive randomised controlled trial: behavioural economics-informed mobile phone-based reminder messages to improve clinic attendance in a Botswanan school-based vision screening programme. Trials. 2022;23(1):656.

  26. Holman D, Salway S, Bell A, Beach B, Adebajo A, Ali N, et al. Can intersectionality help with understanding and tackling health inequalities? Perspectives of professional stakeholders. Health Res Policy Syst. 2021;19(1):97.

  27. WHO Europe. Toolkit on social participation. Available from: https://www.euro.who.int/en/publications/abstracts/toolkit-on-social-participation.-methods-and-techniques-for-ensuring-the-social-participation-of-roma-populations-and-other-social-groups-in-the-design,-implementation,-monitoring-and-evaluation-of-policies-and-programmes-to-improve-their-health-2016. Cited 2022 Apr 8.

  28. Turk E, Durrance-Bagale A, Han E, Bell S, Rajan S, Lota MMM, et al. International experiences with co-production and people centredness offer lessons for covid-19 responses. BMJ. 2021;372: m4752.

  29. WHO and UNICEF. Declaration of astana on primary health care. Available from: https://www.who.int/teams/primary-health-care/conference/declaration. Cited 2022 Mar 23.

  30. The King’s Fund. Improving quality of care: the vital role of people’s voices. Available from: https://www.kingsfund.org.uk/events/improving-quality-of-care. Cited 2022 Apr 1.

  31. Wellings D, Thorstensen-Woll C. How does the health and care system hear from people and communities?. The King’s Fund. Available from: https://www.kingsfund.org.uk/publications/health-care-system-people-and-communities. Cited 2022 Mar 25.

  32. Institute for Healthcare Improvement. Science of improvement: how to improve. Available from: http://www.ihi.org:80/resources/Pages/HowtoImprove/ScienceofImprovementTestingChanges.aspx. Cited 2022 Mar 18.

  33. NHS England and NHS Improvement. Plan, Do, Study, Act (PDSA) cycles and the model for improvement: Online library of Quality, service improvement and redesign tools. Available from: https://www.england.nhs.uk/wp-content/uploads/2022/01/qsir-pdsa-cycles-model-for-improvement.pdf.

  34. US CDC. Evaluate actions - tools for successful community health improvement efforts. Available from: https://www.cdc.gov/chinav/tools/evaluate.html. Cited 2022 Mar 18.

  35. Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128(8):651–6.

  36. Reed JE, Card AJ. The problem with Plan-Do-Study-Act cycles. BMJ Qual Saf. 2016;25(3):147–52.

  37. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan–do–study–act method to improve quality in healthcare. BMJ Qual Saf. 2014;23(4):290–8.

  38. Sibbald B, Roland M. Understanding controlled trials: Why are randomised controlled trials important? BMJ. 1998;316(7126):201.

  39. Hariton E, Locascio JJ. Randomised controlled trials—the gold standard for effectiveness research. BJOG Int J Obstet Gynaecol. 2018;125(13):1716.

  40. Kohavi R, Tang D, Xu Y, Hemkens LG, Ioannidis JPA. Online randomized controlled experiments at scale: lessons and extensions to medicine. Trials. 2020;21(1):150.

  41. Viewpoint: Should we all be looking for marginal gains? BBC News. Available from: https://www.bbc.com/news/magazine-34247629. Cited 2022 Apr 1.

  42. Hall D, James D, Marsden N. Marginal gains: Olympic lessons in high performance for organisations. HR Bull Res Pract. 2012;7(2):9–13.

  43. Tang D, Agarwal A, O’Brien D, Meyer M. Overlapping experiment infrastructure. In: Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining. Available from: https://doi.org/10.1145/1835804.1835810. Cited 2022 Apr 1.

  44. The surprising power of online experiments. Harvard business review. Available from: https://hbr.org/2017/09/the-surprising-power-of-online-experiments. Cited 2022 Apr 1.

  45. Bakshy E, Eckles D, Bernstein MS. Designing and deploying online field experiments. In: Proceedings of the 23rd international conference on World wide web. New York, NY, USA: Association for Computing Machinery; 2014. p. 283–92. (WWW ’14). Available from: https://doi.org/10.1145/2566486.2567967. Cited 2022 Apr 1.

  46. Mc Cord KA, Al-Shahi Salman R, Treweek S, Gardner H, Strech D, Whiteley W, et al. Routinely collected data for randomized trials: promises, barriers, and implications. Trials. 2018;19(1):29.

  47. Demolis R, Botao C, Heyerdahl LW, Gessner BD, Cavailler P, Sinai C, et al. A rapid qualitative assessment of oral cholera vaccine anticipated acceptability in a context of resistance towards cholera intervention in Nampula Mozambique. Vaccine. 2018;36(44):6497–505.

  48. World Health Organization. World report on vision. Geneva: WHO; 2019. Available from: https://www.who.int/publications-detail-redirect/9789241516570. Cited 2022 Feb 1.

Acknowledgements

None.

Funding

This work was supported by the National Institute for Health Research (NIHR) (using the UK’s Official Development Assistance (ODA) Funding) and Wellcome [215633/Z/19/Z] under the NIHR-Wellcome Partnership for Global Health Research. The views expressed are those of the authors and not necessarily those of Wellcome, the NIHR or the Department of Health and Social Care.

Author information

Authors and Affiliations

Authors

Contributions

LNA wrote the draft manuscript. All authors reviewed, revised, and approved the final manuscript.

Corresponding author

Correspondence to Luke N. Allen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

AB and NB both work for Peek Vision.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Allen, L.N., Nkomazana, O., Mishra, S.K. et al. Improvement studies for equitable and evidence-based innovation: an overview of the ‘IM-SEEN’ model. Int J Equity Health 22, 116 (2023). https://doi.org/10.1186/s12939-023-01915-5
