
Health system responsiveness: a systematic evidence mapping review of the global literature



The World Health Organization framed responsiveness, fair financing and equity as intrinsic goals of health systems. However, of the three, responsiveness has received significantly less attention. Responsiveness is essential to strengthen systems’ functioning, provide equitable and accountable services, and protect the rights of citizens. There is an urgency to make systems more responsive, but our understanding of responsiveness is limited. We therefore sought to map existing evidence on health system responsiveness.


A mixed-methods systematized evidence mapping review was conducted. We searched PubMed, EBSCOhost, and Google Scholar. Published and grey literature, conceptual and empirical publications, and English-language texts published between 2000 and 2020 were included. We screened the titles and abstracts of 1119 publications and 870 full texts.


Six hundred twenty-one publications were included in the review. Evidence mapping shows substantially more publications between 2011 and 2020 (n = 462/621) than in earlier periods. Most of the publications were from Europe (n = 139), with more publications relating to High Income Countries (n = 241) than to Low-to-Middle Income Countries (n = 217). Most were empirical studies (n = 424/621), which predominantly utilized quantitative methodologies (n = 232), while qualitative (n = 127) and mixed-methods (n = 63) studies were rarer. Thematic analysis revealed eight primary conceptualizations of ‘health system responsiveness’, which can be fitted into three dominant categorizations: 1) unidirectional user-service interface; 2) responsiveness as feedback loops between users and the health system; and 3) responsiveness as accountability between the public and the system.


This evidence map shows a substantial body of available literature on health system responsiveness, but also reveals evidential gaps requiring further development, including: a clear definition and body of theory of responsiveness; the implementation and effectiveness of feedback loops; the system’s responses to this feedback; context-specific mechanism-implementation experiences, particularly of LMICs and fragile and conflict-affected states; and responsiveness as it relates to health equity and to minority and vulnerable populations. Theoretical development is required; we suggest separating ideas of service and system responsiveness and applying a stronger systems lens in future work. Further agenda-setting and resourcing of bridging work on health system responsiveness is suggested.


The World Health Report of 2000 (WHR2000), ‘Health systems: improving performance’, broke ground by framing health systems performance and development around three intrinsic goals: good health, fairness of financial contributions, and responsiveness to the expectations of the population – stressing, in particular, the importance of responsiveness in reducing inequalities and improving the situation of the worst-off [1, 2].

The potential and significance of a responsive health system is that it should provide inclusive, legitimate, participatory and accountable services, ensure the social rights of citizens, and draw attention to the needs of minority groups [3,4,5]. More broadly, it should support nation-building, state-legitimacy, public participation, and social cohesion [6,7,8]. A responsive health system is also said to contribute to other health system goals such as improved access and acceptability of services, and improved health-seeking behavior, and therefore ultimately contribute to improved population health [9, 10]. However, the WHR2000 also foregrounded a debate around health systems as a ‘social good’ (values-based health systems), arguing that improved health system responsiveness is a legitimate endeavor in and of itself, irrespective of whether it directly improves population health or not. As emphasized by Da Silva [11] in a related report: ‘The greater the responsiveness of the health system to the expectations of individuals’ regarding the non-health enhancing aspects of care the higher will be the level of welfare achieved, irrespective of its impact on health’ (p.2). Non-health enhancing aspects of care here may include dignity of patients, confidentiality of information, autonomy, prompt attention, quality of the amenities, choice of provider, provider-patient communication and access to social support networks (for in-patients).

Health system responsiveness is also thought to improve systems functioning, for example, improving information flow and feedback, and improving capacities for decision-making within the health system [12, 13]. Therefore, interventions towards health system responsiveness are thought to have a health system strengthening effect, for example by strengthening ‘feedback channels’ [13, 14]. We use the term ‘feedback channel’ to describe the varied ways relevant information and evidence about systems functionality is fed back from those being served by the system (public/patients/community) to those actors with strategic decision-making authority over systems functionality. Feedback is channeled via formal mechanisms intended to facilitate the flow of feedback, such as complaints processes, but also via informal channels such as social media, or relational networks (more below). Effective feedback strengthens system functionality by ensuring that those being served by the system (public/patient/community) have voice in decision-making about their health system, and decision-makers have enough and the right information to make informed strategic decisions [15, 16]. This feedback enhances the chances for an effective systemic response to public/patient/community experience and views.

There have been multiple calls for initiatives and interventions to support health system responsiveness. Some are ‘short-route’ interventions, such as efforts to strengthen information systems, to legitimize complaints systems, to increase community participation and voice, and to introduce varied accountability mechanisms [17,18,19]. There are also ‘long-route’ interventions, such as democratic elections to vote in different government leadership, or macro-level systems interventions responding to national surveys or datasets. The short-route interventions are more prevalent, and more widely reported (more below). The most common are the ‘shortest-route’ feedback interventions: formal facility-focused mechanisms for gathering patients’ perspectives on the quality of care they received, usually administered at the point of service during or immediately after care, such as score/report cards, social audits, and e-grievance systems (toll-free hotlines and web-based portals) [20,21,22]. Also increasingly common are interventions which initiate accountability mechanisms such as clinic committees, intersectoral health forums, and community monitoring, these being one step removed from direct patient feedback [23,24,25,26,27,28,29].

We recognize that many issues (e.g. sufficient resources, qualified staff, and appropriate structures and supports) contribute to responsive health systems, and that patient voice is only one component. The public (the ‘population’) continue to experience a range of problems in both high income countries (HICs) and low to middle income countries (LMICs): from lack of service availability to limited access; from poor quality of services to ethical infringements and rights violations; from commercial exploitation to collusion and corruption; and from rigid bureaucratic norms to inadequate measures, processes and rules for accountability [27, 30,31,32]. Patients often experience inappropriate provider behavior including disrespect, abuse and inattention, and outright denial of care, much of which never gets reported through formal channels or mechanisms [9, 33,34,35,36]. It has also been shown that many health system actors (such as providers or policy makers) display limited receptivity to concerns raised by patients and the broader public [13]. The public continue to struggle to engage with the system about their problems and to secure appropriate responses and remedies [7, 22, 28]. Access to feedback channels and, more importantly, the ability to leverage reaction or response to feedback, is often inequitable, determined by social and educational status and the social capital that can be mustered [28, 37, 38] – yet while responsiveness as a health system goal is intended to draw attention to the needs of the vulnerable, such inequity has received little attention [32, 39].

Therefore, while there is great potential for enhanced health system responsiveness to improve systems’ functioning, ensure minority or vulnerable groups have more voice, and even lead to improved health, there is little evidence of this potential being fully leveraged. Two decades after the WHR2000, there has been substantial research and intervention work aimed at the goals of good health and fair financing, but in comparison, astonishingly little on health system responsiveness [13, 14, 40, 41]. There are still major questions about every aspect of responsiveness: its framing (for example, is it the same as accountability?); theorization (should the focus be on patient or population expectations?); resulting measurement (do you just measure patient satisfaction?); and praxis (what is a responsive health system, and how do you intervene to make a system more responsive?). What evidence there is to date has not been collated in any useful way that allows researchers and practitioners to engage fully with the issue or develop it further.

In response to this, we conducted a systematic evidence mapping review on health system responsiveness, with a global scope, but seeking specifically to support research on LMICs. The aim was to comprehensively and descriptively map the currently dispersed terrain of evidence relating to ‘health system responsiveness’, in order to understand the current state of knowledge and identify evidence gaps for further work. The review is framed by the question: what evidence is there on health system responsiveness; how is it framed, theorized, and measured; and what empirical evidence exists of related interventions in health systems?


This systematic evidence mapping review was conducted by a team from [blinded] and [blinded] during 2017–2020, resulting in output articles such as this one, as well as a comprehensive documentary database (the database continues to be updated beyond the review end-date). Systematic evidence mapping reviews are increasingly being performed to map diverse literature in public health and health policy and systems research (HPSR), and involve systematic synthesis, organization and interpretation across a large body of evidence, using rigorous and replicable strategies [42,43,44]. The approach is commonly used to organize and make available literature, as well as to describe the breadth and depth of this literature, identify its main characteristics, and identify its gaps for future research [43,44,45,46,47]. The main characteristics we sought to describe were the quantity of evidence (areas of saturation and gaps), the design and focus of research, and patterns in the content of the literature, such as the dominant framings of responsiveness.

While this review approach includes assessment of relevance and quality (of the publication source), it does not set out to assess the rigor of findings within the included studies, nor seek to compare the outcomes or effectiveness of interventions described. This is a common characteristic of evidence mapping reviews, as the approach is designed to describe a large quantity of literature, rather than delve deeply into each included item [42]. The application of deeper analysis pertaining to more specific research questions is understood as a subsequent activity and output after the evidence mapping review is concluded. Thus, the focus on a broad scoping of the terrain results in reduced analytical depth, and a large number of items needing to undergo full-text review and be reported to readers (in this case, over 800 items underwent full-text review). During the review process, we regularly considered approaches to reduce included items. For example, one possibility would have been to exclude items relating to ‘accountability’, as they are reviewed elsewhere [7, 31, 48, 49]. However, too many directly relevant items important to understanding health system responsiveness made this an unviable exclusion option. Another option would have been to limit the review to items relating only to LMIC settings, but again this would have removed core items relating to the conceptualization of health system responsiveness. Therefore, while such limitations might have reduced the final cluster included, they would have undermined the main aim of the review: to evidence the full breadth of the most relevant publications relating to health system responsiveness, across diverse disciplinary terrains, in order to fully describe what is known about health system responsiveness at this time, and also be a comprehensive resource for future work.

We followed recommended phases for evidence mapping synthesis reviews including: 1) determining the scope and question of the topic under review; 2) searching for and selecting evidence; 3) mapping and reporting the findings of existing research; and 4) identifying evidence gaps [45, 50]. In the first phase, we refined the scope of the main review by conducting an initial rapid scoping review, which provided the analytical frame for the systematic review extraction process. Items found through the scoping review were subsumed (and assessed again) in the larger systematic review phase. We also conducted HPSR topic-expert consultations in the first phase (n = 6, [blinded]) – including experts in responsiveness, governance and accountability, in order to clarify topic scope and foci [45]. They supported the identification of search terms, topic areas, and key publications (conceptual and empirical). During this phase we refined study eligibility criteria and data extraction items for the evidence mapping component.

Next, we conducted a qualitative systematized review, keeping records of all searches conducted. Searches were performed using three electronic databases, namely: EBSCOhost (which is inclusive of Academic Search Premier, AfricaWide, Health Source, PsycINFO, SocINDEX, and CINAHL), PubMed, and Google Scholar. The initial staged searches were conducted during June–September 2019. To be eligible for inclusion, a paper needed to include ‘responsiveness’ and ‘health system’ and their variations (see Supplementary files). Initial pilot searches further refined the search terms and identified exclusion clusters.

Additional literature was sourced through reference list searches, expert consultations, hand-searching through Google search results (first 100 items, of varied search term variations), and through online repositories such as the WHO and World Bank online repositories. These searches were conducted iteratively until saturation was reached and no new relevant materials, nor further topics were found [51].

All abstracts were screened and included if they met the following inclusion criteria: (1) peer or institutionally reviewed; (2) provided conceptual or empirical information on responsiveness, accountability (internal and external) or user feedback within a health system; (3) published in English; and (4) published between 2000 and 2019 (earlier material was included if directly relevant, although little was found). This period of publication was motivated by the inception of the conceptualization and measurement of responsiveness by the WHO in 2000. No geographical limits were set.

We excluded items that met the following exclusion criteria: (1) studies about physiological or biomedical responsiveness to a medication or treatment program; (2) responsiveness as a psychometric property of data collection instruments; (3) responsiveness that was not related to health or the health sector; (4) studies on feedback between providers only (e.g. performance feedback); (5) studies that focused on patient-reported outcome measures (PROMs, specifically focused on the clinical aspects of care); (6) items where full texts could not be sourced; and (7) items that did not provide substantial information on health system responsiveness in the full text, or used ‘responsiveness’ in a descriptive, non-specific manner.

We examined the titles and abstracts/summaries to identify relevant items for further full-text screening. Three reviewers compared the eligible full-text documents and resolved discrepancies through discussions and consensus. During the initial screening process, we categorized items broadly as ‘empirical’ or ‘conceptual’. During the full-text review phase, we also conducted a further quality assessment phase, in which quality of publication source was assessed for all items (for example, publication indexed, or publishing institution known), and empirical items were further checked for clarity relating to stated aims, methodology (and rigor relating to execution of this methodology), and substantiation of findings. From the remaining items, we then extracted descriptive data into an extraction sheet, including: year of publication, publication type, country, region coverage, country status (economic ranking), study design, populations/samples, contribution (empirical/conceptual), and underpinning ideas and framing of responsiveness (see Supplementary materials). Refresher searches were conducted (using the same search terms and processes) quarterly (Dec 2019, March 2020, July 2020, Oct 2020), to check for newly published literature.

Our analysis in the review can be considered mixed methods, given that we performed quantitative analysis (descriptive statistics) as well as qualitative (thematic) analysis. More specifically, we generated frequency tables to describe the bibliographic characteristics of the body of evidence, and used thematic analysis to identify existing conceptualizations and dominant categorizations of health system responsiveness.


The database search yielded a total of 1084 records, with an additional 134 records found by other means (Fig. 1). We collated the records and deleted duplicates, leaving 1119 records to be screened by title and abstract. After screening, 870 items were retained for full-text screening of potential relevance. The 2020 refresher searches resulted in 15 items being added. Ultimately, 621 items relevant to health system responsiveness were identified and included (see Supplementary materials for a full listing of all 621 items).

Fig. 1

PRISMA flow diagram

Bibliographic characteristics of the body of literature on health system responsiveness

In the first results section we report on the included items (what we would term the evidence map), reviewing the collection of 621 items against consideration of publication rate, geographic location/focus, publication type, and empirical versus conceptual contribution. We have consolidated the graphics in Fig. 2 for ease of viewing.

Fig. 2

Consolidated graphics relating to publication rate, location and type

In the last 20 years, there has been growth in interest in, and therefore publications on, health system responsiveness. However, these numbers are still very small compared to those for the other goals, such as health financing. After publication of the WHR2000, there was relatively limited interest in responsiveness (as indicated by publication numbers) until a decade later, around 2011 (see Fig. 2a). Slightly more items focus on HICs (241/621) than on LMICs (217/621), although more countries are classified as LMICs than HICs globally (Fig. 2b). Only nine (9/621) items focus on fragile and conflict-affected states (Fig. 2b), such as Afghanistan, the Democratic Republic of Congo and Sierra Leone. A large cluster focuses on Europe (139/621), with slightly smaller clusters on Asia (104/621) and the Americas (85/621) (see Fig. 2c). When disaggregated, the majority of the European publications relate to European HICs (132/139), with only 8/139 relating to European LMICs.

There were several types of included publications, namely: peer-reviewed articles (empirical studies and reviews), chapters and books, theses, institutional reports (from multilateral or donor organizations such as the WHO, World Bank, United Nations (UN), and The President’s Emergency Plan for AIDS Relief (PEPFAR); civil society; and research and academic institutions), and commentaries/editorials/letters. Most items were articles (462/621), including 48 review articles (mostly focused on accountability). Commentaries/editorials/letters made up the next largest grouping (47/621), and there were 40/621 institutional reports (Fig. 2d). With regard to the nature of contribution (shown in Fig. 2e), most items reported on empirical research (426/621): most reported quantitative data (232/426), with relatively fewer qualitative data-based studies (127/426) and mixed-methods studies (63/426) (Fig. 2f). Conceptual items (131/621) reflected on issues relating to responsiveness. A few papers (57) presented combined empirical-conceptual work (Fig. 2e).

Underpinning ideas about health system responsiveness within the literature

The body of evidence contains varying definitions of health system responsiveness (Table 1). Authors seem to agree that health system responsiveness involves not only the system’s ability to respond, but also the actual response. For example, Joarder [55] defines responsiveness as the ‘ … social actions that providers do to meet the legitimate expectations of service seekers’ thus focusing on the tangible activities, processes and interaction between providers and service seekers (p.3). Lodenstein et al. [13] state that responsiveness is a culmination of system factors and processes such as ‘ … broader governance and health system context, features of the social accountability initiatives, motives and perceptions of providers at a particular point in time’ (p.2). Terms are used inconsistently across varied definitions, and there also seems to be little consensus in these definitions about who the system should be responsive to (some suggest service users, while others prescribe a broader focus towards citizens, communities and the public).

Table 1 Varying definitions for the concept of responsiveness

Within the 621 included publications, only eight explicitly provide a clear conceptualization or framing of health system responsiveness, and there are links between these eight. Table 2 provides an outline of these eight conceptualizations, describing the key features of each, where the conceptualization originates, what tools have developed from this, and an assessment of whether the conceptualization has had ‘traction’ within the broader included literature (that is, has it been taken up by other studies, tested empirically, or adapted further), as part of the ‘mapping’ of ideas about health system responsiveness.

Table 2 Explicit conceptualizations of health system responsiveness

We found no single widely accepted or clearly dominant framing of health system responsiveness among these eight, but unsurprisingly, the WHO conceptualization first presented in the WHR2000 [54] shows the most traction, that is, ‘the health system’s ability to meet the population’s legitimate expectations regarding non-health aspects of their interactions with the system’ (p.1). Responsiveness in this earlier WHR2000 framing comprises two main categories (respect for persons and patient orientation), with eight domains, namely: dignity of patients, confidentiality of information, autonomy, prompt attention, quality of the amenities, choice of provider, provider-patient communication and access to social support networks (for in-patients) [52]. There are now several variations of this idea – and four of the eight framings in Table 2 are self-declared adaptations of the WHR2000 conceptualization. Two of the four offer conceptual frameworks and measurement tools for improving responsiveness of a specific building block (human resources and data information systems), while the other two offer a rights-based lens and analytic tool to understand system-wide determinants of responsiveness. Five of the eight provide both a conceptualization and a developed tool for measurement of responsiveness against that conceptualization, while the remaining three are purely conceptual, offering a framework or lens to understand health system responsiveness. It is not always possible to trace the development of a particular conceptualization from publication to publication over the 20-year period; instead there appears to be a more disjointed ‘picking’ of ideas from different eras/topics/contexts.

Three dominant categorizations of health system responsiveness

Beyond these eight conceptualizations, the explicit or implicit framing of health system responsiveness across the 621 included studies can be organized into three interrelated dominant ‘categorizations’:

  1) The unidirectional user-service interface: strongly influenced by the WHO framing, items in this categorization tend to assess responsiveness as a (usually national-scale) service performance and quality indicator, and the preferred method for measurement, via the WHO-designed quantitative instrument, is an exit survey at the point of care or a household survey of patient experiences.

  2) Responsiveness as feedback between users and the system: in this related cluster, the focus is on modes of gathering feedback from patients and patient representatives (usually gathered before, during or after care), and sometimes on how feedback is utilized for service improvements.

  3) Responsiveness as accountability: this cluster mainly reports on processes and structures that support accountability (often broader than the patient, for example, community accountability). Specific tools and mechanisms are suggested and assessed, which are thought to ensure that stakeholders (users, public, providers and the system) are answerable and held accountable for their actions.

These categorizations are indicative, emerging from our review analytics, intended to give the reader a feel for the landscape (rather than to impose rigid classifications/typologies). The categorizations are therefore not totally distinct from each other, with obvious overlaps and relationships between them (see Fig. 3). For example, as illustrated in Table 3, the first two categorizations (user-service interface, and service feedback) focus on interactions at facility-level, and often gather feedback from users at point of exit, while the second and third categorizations (service feedback and accountability) include collecting feedback from ‘non-users’.

Fig. 3

Relationship between the dominant categorizations

Table 3 Comparison of dominant categorizations of responsiveness in the literature

Sorting literature into these categorizations is also complicated by differing uses of the same terms. To avoid confusion in the discussion below, we have therefore clustered the varied terms used for ‘individuals’: grouping ‘patients’, ‘clients’ and ‘users’ (from categorizations 1 and 2), who are all effectively health service users; and grouping ‘citizens’ and ‘community’ (from categorization 3), who form the broader public and could also include users or potential users. Despite these overlaps and complexities, we find the proposed categorizations a useful way of understanding how health system responsiveness is framed and understood across the literature.

Description of categorization 1: ‘Unidirectional user-service interface’

For categorization 1, health system responsiveness is understood primarily as a performance and service-quality indicator [53, 62]. This framing seems to originate in the WHR2000 on health system performance, and was likely also influenced by the increased importance given to ‘patient-centered care’ in that decade, which includes emphasis on non-clinical aspects of care. The focus in this categorization is on gathering feedback from users about their experiences of interacting with the health service [59,60,61, 63].

As depicted in Table 3, 25% (155/621) of the included items aligned with this categorization of health system responsiveness. Most papers were published in the last decade, and particularly in the last 5 years (2016–2020 = 57/155); more items focused on LMICs than HICs (LMIC = 76/155), and of these, most focused on Asia (47/155), then African LMICs (17/155). Items in this categorization were mostly empirical (124/155), with sub-clusters of responsiveness assessments of specific services (e.g. mental health, HIV, antenatal and reproductive services, ambulatory and chronic care), and of services for specific groups (e.g. older adults, people with mental health problems, physical disabilities, and migrants).

Measurement of responsiveness within this category was primarily quantitative, usually applying the WHO’s responsiveness survey instrument [11, 54, 64]. This instrument measures eight domains indicative of overall responsiveness level, and measures the distribution of responsiveness by groups (the inequality score) [11, 65, 66]. The importance of the domains of responsiveness varies between higher and lower income countries [11]. Measuring responsiveness in this way is primarily aimed at producing quantifiable indicators that denote overall health system performance [67], with data collected at a national level through household surveys conducted as part of the WHO’s Multi-Country Survey Study on Health and Responsiveness 2000–2001 [68] and the World Health Surveys 2001–2004 [64]. These resulted in a global ‘ranking’ of countries by their overall level of responsiveness, e.g. Italy, France and Spain were ranked as the top three most responsive systems in Europe [69]. Such surveys have not been repeated since, so there is no way of knowing whether countries have improved or regressed in relation to their national responsiveness assessment. More recent studies have applied the tool to measure responsiveness at the meso-level (organization/facility) or for specific services or programs [70,71,72]. There are also responsiveness assessments for the health system building blocks. Relating to the service delivery building block, Joarder, for example, offers a conceptual framework and measurement tool (a questionnaire) for service provider responsiveness which considers both the provider (service delivery context and practiced responsiveness) and client (demand) elements [55]. Fazaeli et al. also provide a framework and measurement tool for information and data system responsiveness [1].

Across items in this category, the WHO responsiveness tool has been validated and adaptations suggested – such as the addition of domains for education and information sharing [3, 36], effective care [73, 74], trust [73, 75], coordination and responsibility [62]. Several authors have argued that the WHO’s conceptualization and tools have inherent inadequacies [69, 76, 77]. For example, it is observed that the early WHO framing emerged out of key informant interviews with experts, but it is not clear who these experts were, and where they were drawing their experience from [69]. Others have observed that while this framing was intended to produce quantifiable indicators to allow for easier comparison across countries, services, and population groups [72], measuring the performance of a complex health system is not so easily done, and a single tool is unlikely to be adequate to assess a multi-dimensional compound measure such as responsiveness – or allow for fair comparison across vastly different health systems contexts [69].

Description of categorization 2: Feedback loops between users and health service providers

The publications in categorization 2 (Table 3) focus on a bi-directional flow of information between the health system and the public, usually focused on health services specifically, on the grounds that, as the WHR2000 puts it [54], ‘[the] effective flow of information between the health system and the population is a key element of responsiveness’ (p.3). This cluster comprises 40% of the included items (251/621). Most items were published in the last decade, and especially in the last 5 years (117/251 published 2016–2020); most are empirical (179/251); and most relate to HICs (118/251) rather than LMICs (83/251). There was a cluster of studies relating to Europe (72/251), followed by Africa (39/251), Asia (38/251), and the Americas (37/251).
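The tabulations reported throughout this evidence map (counts and shares such as ‘117/251’) amount to simple grouping arithmetic over a coded dataset of publications. The sketch below shows how such shares could be derived; the records and field names are entirely fabricated for illustration.

```python
# Toy sketch of the tabulations behind an evidence map: each record is a
# (categorization, region, period) tuple coded during screening. All
# records below are fabricated examples, not data from this review.
from collections import Counter

records = [
    ("C3", "Africa", "2016-2020"),
    ("C3", "Asia", "2011-2015"),
    ("C2", "Europe", "2016-2020"),
    ("C1", "Europe", "2000-2010"),
]

by_cat = Counter(cat for cat, _, _ in records)
total = len(records)
for cat, n in sorted(by_cat.items()):
    # Shares reported in the style "n/total (pct%)"
    print(f"{cat}: {n}/{total} ({100 * n / total:.0f}%)")
```

Because one publication can be coded against several dimensions (period, region, income setting, method), per-dimension counters like `by_cat` keep each breakdown independent.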

In this cluster, authors focus on actions taken in response to user and public feedback, usually emphasizing the need for robust information systems and shared decision-making in the development, provision and improvement of services to meet the expectations and needs of the public [15, 56, 78, 79]. It is also stressed that feedback from users is important to enhance transparency and accountability [15], so there is some overlap with the third categorization (accountability). Like the first categorization, this one relies mainly on gathering feedback relating to user experiences with the service – but has an additional focus on the action taken as a result, to ensure (usually individual) feedback is fed back to effect institutional change (usually service improvement). The types of actions most commonly described are analysis of feedback data to identify poor performance/service provision, and to improve safety and quality improvement procedures. Other types of action included the involvement of users in co-design or development of services. In this cluster, there is a focus on user feedback at different timepoints (potential/current/previous users), and varied synonyms for feedback are present, such as patient evaluations/expectations/preferences/experience/involvement [57, 80]. While it is apparent how feedback might improve the quality of a particular service, no robust (causal) explanations are provided for how gathering feedback, and the resulting service improvement, leads to a more responsive health system. There are a few efforts in this direction: for example, the Patient Feedback Response Framework proposed by Sheard et al. [81] offers a way to assess systems change in response to feedback loops, proposing three stages: 1) normative legitimacy, or providers’ sense of moral obligation and receptivity to user feedback; 2) structural legitimacy, emanating from providers’ perceived power within organizations (e.g. the perceived autonomy, authority and availability of resources to develop strategic plans in response to patient feedback); and 3) organizational readiness to change, a collective resolve to pursue the courses of action involved in change implementation [81].

Within this categorization, publications can be divided into four themes: 1) receptivity of systems actors, which includes the exploration of users’ and providers’ perspectives regarding feedback loops such as complaint management processes, patient experience and user involvement in services [82,83,84,85]; 2) the empirical collection and analysis of feedback data [86,87,88,89,90,91,92]; 3) the utilization of feedback to effect change, including improved health outcomes and health worker behavioral skills to enhance community/public communication and relationships [15, 79, 81, 85, 93, 94]; and 4) the direct involvement of users in the improvement of services [95,96,97]. Notably, all four themes included some level of action, focused on the systems response and not merely the gathering of user feedback.

With regard to methodologies, feedback is usually collected at an individual (micro) level through self-reported instruments such as satisfaction, quality, or experience-of-care surveys [87, 92, 98,99,100,101,102], as well as through analysis of complaint and feedback management procedures [103, 104], unstructured qualitative feedback and follow-ups [89, 105, 106], and provider rating reviews [86, 90, 107]. Satisfaction surveys and feedback gathered via complaints processes are by far the most commonly reported of these.

Description of categorization 3: Responsiveness as accountability

In the third categorization, responsiveness is understood as a broader issue of accountability, not only to users, but to the broader public. According to Baharvand [108] ‘responsiveness in the public sector is called accountability. And needs a proper accountability system’ (p.1). Even if these studies assess micro/individual level interventions, for example, their framing of responsiveness is usually as a broader ‘social good’, and the assumption is that when accountability to the broader public (or community) is strengthened, the health system becomes more responsive [13, 108]. Here, health system responsiveness is framed as inextricably part of ‘social accountability’ [13, 14, 109], where responsiveness to the public’s needs is a consequence of the interaction of broader governance and health system contexts [13].

In the available evidence, 32% (196/621) of the items pertained to this category. Most were published in 2011–2020 (146/196), mostly in the last 5 years (85/196 published 2016–2020); most are empirical (116/196); most assessments focused on specific services (e.g. reproductive health); and most relate to LMIC settings (89/196), with regional clusters focused on Africa (44/196) and Asia (38/196).

Within this category, there is variation in how authors frame accountability, but publications can still be usefully divided into those addressing responsiveness as it relates to ‘internal accountability’ (within the health system and at different levels) or ‘external accountability’ (between the health system and community or civil society) [31, 58, 110,111,112,113]. Most related to assessments of external accountability (122/196), rather than internal accountability (45/196). Those reporting on internal accountability processes tend to address institutional governance and oversight mechanisms/processes relating to building blocks such as health financing [112,113,114], clinical care and services [113, 115,116,117], and providers and human resources [30, 118]. Enhanced internal accountability is understood to make the system more responsive, although there is general acknowledgement of the complexity of internal accountability as a result of interdependent relationships between health systems actors [58, 109, 119,120,121]. Brinkerhoff stresses that accountability involves two-way relationships, where those in positions of power are obligated to provide information about and/or justification for their actions to other actors [122, 123]. Examples of empirical assessments of internal accountability by Hamal et al. and Human Rights Watch show how accountability failures (i.e. lack of monitoring of policy implementation and health services, such as the maternal death review processes in the Indian and South African public systems) have implications for maternal health outcomes and inequities [48, 124]. Studies focused on external accountability tend to look at feedback between ‘community’ and system, and at the depth and level of involvement of actors (passive or active) – for example, evaluations of ‘citizen engagement’, where the public directly or indirectly holds politicians, providers and policy-makers accountable for their actions or performance [23, 125, 126].
A recent study that has gained traction is a realist review of accountability initiatives by Lodenstein et al., who argue that social accountability has two dimensions – citizen engagement, and citizen oversight and monitoring – and that when the context enables civic engagement, through internal (formal) accountability measures as well as civil society and media (informal), it changes provider incentives, which results in provider responsiveness [13]. They argue that civic engagement without oversight mechanisms will not result in responsiveness, but rather a minimum degree of ‘receptivity’. Accountability is, by nature, largely relational, and we note the role of reciprocity in mediating responsive relationships (and networks) between stakeholders (system, providers, beneficiaries).

Among the papers in this cluster, the preferred method for assessing internal accountability is the measurement of performance and quality assurance indicators for various building blocks (e.g. quality of care standards, financial efficiency), usually quantitatively measured and narrowly framed [110, 117]. With regard to external accountability, there are generally two sub-clusters: one focusing on the creation of spaces for user involvement and citizen engagement/decision-making [23, 110, 127,128,129], usually measuring performance using quantitative approaches; the second focusing on the degree and quality of engagement and participation [130,131,132,133], which tends to apply mixed method approaches [12, 134, 135], described as necessary for the complexity involved in this assessment.

Mechanisms (and their feedback loops) that potentially support health system responsiveness

Moving beyond the conceptualizations of responsiveness found in the literature, this review next considers what is presented as best practice to make a health system more responsive. Across the 621 included studies, reports of interventions intended to enhance health system responsiveness focus predominantly on the introduction or strengthening of a particular ‘mechanism’. There is generally much greater mention of mechanism type, functioning and implementation approach than of how feedback gathered via these mechanisms is acted upon, or how the system responds; there is also more on how mechanisms affect specific services than on how multiple mechanisms/feedback channels impact overall system responsiveness (more below).

Specific mechanisms are considered within all three categorizations, although they are most frequently mentioned in the Category 3 (accountability) cluster. There, ‘mechanisms’ are understood to be governance tools that facilitate and enhance accountability within a health system (internal), or between the health system and the public (external) [12, 13, 136]. Similarly, the term ‘mechanism’ is used for tools/interventions/activities intended to enhance feedback, and therefore responsiveness, within the system or between the public and the system. Some mechanisms are formally mandated (e.g. in policy), initiated by the system and institutionalized; but it is important to note that there are also informal forms of feedback that matter for system responsiveness yet are not always specifically sought out (e.g. advocacy via civil society or complaints via social media) [12, 13]. In Table 4, we provide examples of common ways the connections between feedback, mechanism and responsiveness are described – noting that terms such as ‘feedback’, ‘mechanism’, ‘process’, ‘initiative’ and ‘intervention’ are used interchangeably. These descriptions are predominantly located within the C3 framing (accountability), present to a lesser extent in the C2 framing (feedback loops between users and health service providers), and generally missing from the C1 framing (unidirectional user-service interface).

Table 4 Examples of descriptions of connections between mechanisms, feedback, and responsiveness

Almost half of all included items (302/621) focus on responsiveness mechanisms (Table 5) – in particular formally mandated/institutionalized ones such as: community monitoring, complaint management procedures, satisfaction/experience/quality-of-care surveys, incident reporting, intersectoral action/collaboration, health facility committees (HFCs) and hospital boards, medico-legal cases, ombudsmen, patient charters, social audits, and scorecards/report cards. Informal feedback is less prominent.

Table 5 Mechanisms that potentially support health system responsiveness (organized by publication prominence)

In this cluster of 302 items relating to mechanisms (Table 5), most provide a general, usually conceptually based, description of particular formal mechanisms and their role in health systems functioning and strengthening [17, 133, 137,138,139,140,141,142,143,144]. There are also two smaller clusters of items, namely: 1) publications that report an evaluation of a mechanism; and 2) publications that describe the process of implementing a particular mechanism. The evaluative sub-cluster contains mainly empirical quantitative studies such as quasi-experimental, randomized-controlled or matched intervention designs (pre and post intervention) [145,146,147]. The effectiveness of mechanisms is commonly measured against improvements in quality of care and coverage indicators [145, 148, 149]; health outcome indicators [147, 150]; and indicators of (degree of) voice/participation [150,151,152,153]. While this review does not assess the validity of study findings, on the whole there are significantly fewer reports of evaluated ‘success’ of mechanisms (in achieving intended outcomes, or showing improvement in responsiveness) than reports of mechanisms failing to achieve intended effects/outcomes/impact. For example, while HFCs are one of the most widely described mechanism types, significant challenges are reported across all regions, including lack of awareness of HFCs, inadequate planning and monitoring of their functioning, power imbalances between communities and health system actors, and low levels of political will [148].

The studies focusing on mechanism implementation mainly rely on mixed methodologies and qualitative designs (e.g. ethnographic, narrative and document analysis) [154,155,156,157], and offer insights into the operational processes and configurations under which these mechanisms function best, including specific activities such as training/meeting approaches and composition; implementation challenges or enablers [128, 135, 156, 158,159,160]; the roles of various systems actors in the functioning of these mechanisms; and the nature of relationships and networks (e.g. between state and non-state actors), as well as issues relating to leadership, representation, power dynamics, trust and communication [128, 159, 161,162,163,164,165]. It is also emphasized across this literature that mechanisms operate in a specific context, and their functioning cannot be separated from that context [162, 166]. Molyneux et al. [7] offer a framework for assessing factors influencing the functioning and impact of community accountability mechanisms, including their design (details of the mechanism, how it ought to operate, and who should be involved) and process (how the mechanism actually functions) [7].

Across the 302 items (Table 5), the mechanisms that receive the most attention are satisfaction and quality of care surveys, HFCs and hospital boards, scorecards, and complaint management systems – suggesting these might be the most commonly implemented in practice. In relation to the publications considering satisfaction surveys, while most focused on empirically assessing user experiences [146, 167,168,169], a few documented a reaction/response (actual or intended) because they employed strategies to use satisfaction survey data to improve services [93, 170,171,172].

The cluster of publications relating to informal feedback and its (potential) impact on health system responsiveness was significantly smaller (21/302) than that examining formal feedback. The most commonly described form was feedback via social media; these studies are primarily descriptive, often relating to the potential for user experiences (or complaints) to be fed through social media to services [90, 168, 173,174,175,176]. One example is a case study on whether Twitter supports interpersonal communication and feedback to health services in the UK for people with mental disorders [177].

There are also a few items relating to other forms of media (such as a description of civic journalism initiatives in five provinces of South Africa [178]) and to social protest (such as a description of public protests in India attempting to hold systems actors accountable and demand a system more responsive to the needs of pregnant women [179]).


This review confirms there is continued and growing interest in health system responsiveness (evidenced by the rapid increase in recent publications), and in its substantive relevance as a concept and area of focus – as a value, a key performance goal, and an important accountability and communication factor. As the WHR2000 argued, improved responsiveness is a legitimate endeavor in its own right, protecting and enhancing the population’s basic human rights [54]. Therefore, as Askari [3] states, ‘there is a growing need to increase the [health system’s responsiveness] as a key element of observance and fulfillment of justice’ (p.1). However, fair financing and equity still have more prominence and traction. For example, Wiysonge et al. reviewed the effects of financial arrangements for health systems specific to LMICs and found 7272 directly relevant items [180].

However, we also confirm that there are still major questions about every aspect of health system responsiveness: its framing (there are many), theorization (there are few), resulting measurement (varied) and implementation practice (diverse). Without greater specificity, there is a risk that responsiveness remains a descriptive ideal, something mentioned in the introduction or conclusion sections of policies and articles, with its vital real-world application and effect remaining intangible.

Conceptual and definitional issues have received little attention, despite this being a standard pre-requisite for empirical research and intervention. Some of this ambiguity emerges as a result of the diversity of the field – and future work in this area should continue to consider context-specificity. However, researchers might also ‘check’ their framing against three initial questions: 1) what constitutes a response?; 2) at what level is response anticipated (provider or systemic)?; and 3) who is the response for (individual or public)? We do not see it as a task of this evidence mapping review to provide a ‘new’ definition for health system responsiveness. Instead, we would advocate for a broader and collective project of theoretical development, that emerges from context-specific realities, and that builds a dialectical bridge across the multiple interests and ideas described earlier.

The lack of coherent framing is important, as it means there is no main idea or theory to test and develop further. This review shows that while authors may use the same term (responsiveness), vastly different interpretations lie beneath that use, drawing from varied applications/sources rather than iteratively building on clustered ideas, or linking and learning from similar applications in different contexts. The varied clusters of work on health system responsiveness remain largely siloed from each other and often based on individual interests. This has had an impact on theoretical development (lacking an iterative, dialectic approach), as well as on the empirical evidence-base – resulting in wildly diverse conceptualizations of what responsiveness is and how it should be measured, as well as of conclusions about how to improve ‘it’.

Despite the order artificially imposed in this review, the evidential landscape remains largely ‘chaotic’ (an important finding in itself). The diversity of framing and focus, reflected in differing application of ideas and measurement approaches, makes it extremely challenging for researchers and practitioners seeking to enter this space. This might be a reason for its lesser traction than the other health system goals. Of course, diversity of ideas can encourage new thinking, and we are not encouraging conceptual ‘capture’ – but at this point, after two decades, this diversity appears to be more disabling than enabling.

Within that project, all of the ideas that underpin the compound concept that is ‘health system responsiveness’ would need to be interrogated and operationalised, as further research and implementation depend on achieving better clarity (see Table 6). For example, if the focus is on ‘systemic response to citizens’ legitimate expectations of the non-health enhancing aspects of services’, then who counts as a citizen, who decides what a legitimate expectation is, and what is included/excluded as a non-health enhancing aspect all require interrogation. Furthermore, there are still significant questions about what a ‘response’ actually is – and how a ‘reaction’ might differ from an ‘intentional response’, or how routinized responses might differ from, say, a public health emergency response.

Table 6 Theoretical questions for further engagement

Making a distinction between system and service responsiveness

As part of this call for theoretical development, we would also suggest it would be useful to develop a greater theoretical distinction between ‘health system responsiveness’ and ‘health service responsiveness’ (see Table 7). This review has shown that the majority of current items might use the term ‘system’ but are in fact primarily focused on the interaction between the individual user/patient and the health service [2, 14, 181]. This explains why satisfaction surveys and complaints systems currently dominate the terrain. For example, the dominant depiction across all 621 items is of responsiveness as a specific service feedback loop, in which feedback (usually gathered at point of care) about a particular service is shared with that service; it concerns individual (micro)-level expectations, receptivity and feedback to that patient, and service-level reactions. However, this framing has been shown to be limited – responsiveness is strongly influenced by complex factors such as attitudes, societal values, and power dynamics among diverse actors [23, 24, 182]. We must question whether a more limited ‘services’ framing adequately captures the core ideas and systems thinking suggested by ‘health system responsiveness’ [3, 73, 77]. Few of the items reviewed here approach responsiveness from a ‘whole-of-systems’ perspective – a broader view of responsiveness that takes into consideration the expectations of broader actors in the system (populations, not just users). Such a perspective is in line with the current trajectory within systems thinking and within HPSR [14, 183] – but would then presumably prioritize the assessment of responsiveness across multiple building blocks and focus on the interactions between blocks (instead of the single-block focus of much of the current empirical work). Taking a systems view, receptivity might then be considered at a systemic level (e.g. organizational cultural orientation towards taking on feedback and adaptation, rather than individual decision-maker receptivity); feedback would more likely be understood as multiple streams of feedback from varied sources, via varied formal and informal channels; and reaction might be understood as a sustainable systems-wide reaction/response. In our view, part of the missing evidence map is work on health system responsiveness that applies a systems-thinking approach and acknowledges the complexity, multifaceted nature and interconnected relationships among the components of the health system [183]. This lens would take health system responsiveness to be inclusive of ‘health service responsiveness’, but extending more broadly, and requiring different framing and measurement approaches. For example, it would not be adequate to equate a survey of patient satisfaction at a particular point of care with an assessment of system responsiveness.

Table 7 Conceptualising health system responsiveness as distinct from health service responsiveness

In addition to a project of theoretical development, a related project of assessment and research tool development is needed. This review shows there are few robust tools that comprehensively assess health system responsiveness as it is (variously) framed. Tools for assessing health system responsiveness that encompass a systems thinking approach would still need to be developed. For example, the national scale of the survey tool that emerged from the WHR2000 does not necessarily enable researchers to assess the complex systemic aspects suggested in the framings and categorizations described above. To be fair, the WHR2000 tools were intended to produce quantifiable indicators to allow for easier comparison across countries [72]; but it is widely acknowledged that measuring the performance of a complex health system is not so easily done, and a single tool is unlikely to adequately assess a multi-dimensional compound measure such as responsiveness – or allow for fair comparison across vastly different health systems contexts. It was widely noted that the approach was too limited to encompass the broader, more complex ideas about responsiveness put forward in the WHR2000 [69]. Robone et al. noted that while this approach allowed one to see variations in reported levels of responsiveness across countries, the literature is sparse on the determinants of responsiveness, particularly system-wide characteristics [184].

This review indicated other gaps relating to a systems perspective of responsiveness. For example, it is widely argued that systems functioning and change need to be considered over time, suggesting that once-off surveys (such as the 2001 national assessments, or once-off service surveys focusing on a particular interaction) cannot adequately assess whether systems are becoming more or less responsive over time, how systems are adapting to the changing needs of citizens, or how responsiveness relates to systems resilience (building positive adjustments to systems shocks over time). Few assessments in this review involved any type of repeated assessment over time. Siloed and once-off service assessments do not capture the fluidity of health systems, which change over time. Nor do they enable an understanding of varied levels of responsiveness within systems (or systems within systems), such as the variation between public and private sectors within the same national health system. For example, a for-profit health service might be highly responsive to the needs of a wealthy patient group, but would not necessarily contribute to a responsive national health system (where equity might require being less responsive to certain individual patient needs) [30]. The ‘systems side’ of health system responsiveness is seriously neglected and is the major theoretical gap – and development in this area would enable better bridging across the materials clustered in the three categories.

The case for health system responsiveness is also difficult to make because of missing empirical evidence (Table 8). For example, the geographic gaps are easy to see, as HIC European systems tend to dominate. There are also several contexts in which responsiveness is an unknown – such as fragile and conflict-affected states, where responsiveness might arguably be most essential. In building the case for responsiveness there would be value in mining the existing clusters for insights useful to other contexts – a research activity that has not been thoroughly undertaken. For example, the fact that certain approaches were developed for use in HICs does not mean they would not bring valuable insight in LMIC settings. There are also opportunities for considering evidence across relatable contexts, or regionally. For example, it would be useful to mine the materials relating to particular mechanisms, exploring enabling/disabling factors for successful implementation and mechanism functioning in comparable contexts.

Table 8 Empirical evidence gaps

Beyond geography, another major gap in the current literature is population. Although minorities and vulnerable groups are at the centre of the very idea of responsiveness, this review showed how rarely such groups are addressed – a significant gap. All of these require more exploration, as does the broader connection between responsiveness and equity as it relates to a population as a whole.

In the current evidence-base, many items focus on whether mechanisms are currently present and functioning or not. The literature also tends to document challenges facing mechanism implementation more often than enablers and success stories. There are only a few examples of short-term and quite limited successes – and even fewer examples of fully functioning mechanisms, implemented and operating as intended, consistently ensuring citizen voice and feedback is taken up by the system and results in systemic response over sustained periods of time. There are opportunities to mine and repurpose existing data on mechanisms for new uses. For example, satisfaction surveys are widely applied in multiple countries, usually at a national scale, but there are few examples of these being leveraged to support work on systems responsiveness (they might not tell the whole story, but could provide an important piece of the puzzle). There are opportunities for comparing differences in mechanism performance across contexts, and for integrating information about multiple mechanisms in the same system, to gain a more complete map of feedback.

Researchers (especially those in C3) have sought to take broader forms of feedback into consideration – for example applying rights-based approaches and taking broader ‘users’ into account. There is a large body of work on types of feedback, and empirical evaluations demonstrate that feedback loops contribute to quality improvement or systems changes. However, there is limited published literature that synthesizes the ‘how’ – the factors that hinder and enable feedback loops in facilitating a systems response. Further, most of the 621 included studies focus on the gathering of feedback, and fewer on responsiveness as ‘the way the system responds or reacts to that feedback’. While there is evidence of feedback loops being in place and functional, it is less clear whether and how such feedback engenders a response to citizen expectations. There are also, as yet, no robust explanations for how feedback leads towards a more responsive health system. Responsiveness is rarely framed as the actual (systems strengthening) changes made in the health system to address or respond to the issues identified.

While there is merit in further work determining the effectiveness of mechanisms, there have been calls to move towards exploring the more nuanced aspects of their functioning in context, and in consideration of accountability relationships [185]. Further, better approaches for considering the multiple actors influencing these mechanisms are needed. That is, the evidence indicates that the varied composition of different actors (state, health providers and staff, civil society or groups of individuals from communities) shapes these mechanisms. (Civil society actors in particular are poorly evidenced/represented in the current research.) What is less apparent is how varied actors facilitate mechanism processes at different levels of the system. Within the implementation of mechanisms, power and positionality are thought to be fundamental, particularly in shaping legitimacy and promoting voice: people hold varying levels of power to act and make decisions, and as a result power imbalances may become more pronounced in certain mechanisms.

Little is known about how informal feedback relates to formal mechanisms, or how either (or both) influence decision-making or leverage the system to respond. The framing of responsiveness as accountability (more common in C3) lends itself more easily to taking informal feedback into account – and generally relates more easily to a systems perspective – for example, pushing beyond the user–provider interaction to include the public and other actors in the system who hold each other accountable. Another gap is further consideration of ‘multi-level governance’ as it relates to responsiveness – for example, generating perspectives on mechanisms and interactions inclusive of individual, collective and government actions and decisions [186], allowing for detailed exploration and analysis of the interactions, influences, arrangements and configurations within and between mechanisms. In general, however, the current literature is imbalanced towards particular actors (mainly users and service providers) and towards individual formal mechanisms (rather than multiple mechanisms and varied forms of feedback) – suggesting a bias towards understanding feedback gained via formally instituted mechanisms [185]. It is our perspective that a campaign started via social media, or a community that burns down a clinic in a desperate LMIC setting, might also be considered forms of feedback relevant to system responsiveness – and we hypothesize further that those without voice might provide feedback more frequently via informal channels [176, 178].


The substantive relevance of having responsive health systems has been convincingly argued – but the evidencing of these claims is not yet fully developed. This leaves health system responsiveness as a ‘nice to have’ or an ideal, rather than a concrete performance goal requiring routine monitoring, attention and resourcing. Although health system responsiveness is understood to be important in many ways – for example, ensuring the social rights of citizens, drawing attention to minority groups, supporting social cohesion, improving population health, improving systems functioning, and ultimately having a health system strengthening effect – at this time these ideas remain untested hypotheses. There is very little literature providing evidence for these claims or showing how a more responsive health system is a stronger health system.

This is one example of why there is still significant work to be done on health system responsiveness. In comparison with the other intrinsic goals, there appears to have been a lack of prioritization and resourcing of responsiveness in the research, policy, and intervention arenas [58, 187, 188]. Currently, there are no distinct research groups or ‘sub-field’ teams working within the health system responsiveness terrain, and no specific international networks or platforms focusing on it (in comparison with other goals or topics). Further research agenda-setting work is required, as is resource mobilization to support it. There is an urgent need for synthesis of existing ideas, development of new ideas, and ultimately for ‘bridging work’ across existing evidence. As this review shows, such initiatives would not need to start from scratch.

There is major work to be done, for both researchers and practitioners. For researchers, improved theoretical development needs to lead to improved measures and tools (more complex, and better suited to purpose), which in turn need to be tested and extended in real-world health systems. Better measurement tools, adequate for assessing this complex concept, should yield measurable improvements that can be pragmatically (and routinely) pursued by practitioners. For practitioners, if responsiveness is to move from being a ‘nice to have’ ideal to a systems performance goal, it needs to be taken more seriously, and more routinely monitored and considered. Ultimately, the question that remains is: whose responsibility is it to ensure our health systems become more responsive? The answer might be as simple, and as complex, as ‘everyone’.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary files.





HFC: Health facility committee

HIC: High income country

HPSR: Health policy and systems research

HS: Health system

LMIC: Low to middle income country

PEPFAR: The President’s Emergency Plan for AIDS Relief

PROMs: Patient-reported outcome measures

QoC: Quality of care

UN: The United Nations

WHO: World Health Organization

WHR: World Health Report

WHR2000: World Health Report of 2000


1. Fazaeli S, Ahmadi M, Rashidian A, Sadoughi F. A framework of a health system responsiveness assessment information system for Iran. Iran Red Crescent Med J. 2014;16(6):e17820.

2. World Health Organization. The world health report 2000: Health systems: Improving performance. Geneva: World Health Organization; 2000. Report No.: 924156198X.

3. Askari R, Arab M, Rashidian A, Akbari-Sari A, Hosseini SM, Gharaee H. Designing Iranian model to assess the level of health system responsiveness. Iran Red Crescent Med J. 2016;18(3):e24527.

4. Bridges J, Pope C, Braithwaite J. Making health care responsive to the needs of older people. Age Ageing. 2019.

5. Rottger J, Blumel M, Engel S, Grenz-Farenholtz B, Fuchs S, Linder R, et al. Exploring health system responsiveness in ambulatory care and disease management and its relation to other dimensions of health system performance (RAC) - study design and methodology. Int J Health Policy Manag. 2015;4(7):431–7.

6. Anell A, Glenngard AH, Merkur S. Sweden health system review. Health Syst Transit. 2012;14(5):1–159.

7. Molyneux S, Atela M, Angwenyi I, Goodman C. Community accountability at peripheral health facilities: A review of the empirical literature and development of a conceptual framework. Health Policy Plan. 2012:1–14.

8. Brinkerhoff DW, Bossert TJ. Health governance: principal–agent linkages and health system strengthening. Health Policy Plan. 2014;29(6):685–93.

9. Abbasi K. The World Bank and world health: focus on South Asia II: India and Pakistan. BMJ. 1999;318(7191):1132–5.

10. Ughasoro MD, Okanya OC, Uzochukwu BSC, Onwujekwe OE. An exploratory study of patients’ perceptions of responsiveness of tertiary health-care services in Southeast Nigeria: a hospital-based cross-sectional study. Niger J Clin Pract. 2017;20:267–73.

11. de Silva A. A framework for measuring responsiveness. Geneva: World Health Organization; 2000.

12. Atela MH. Health system accountability and primary health care delivery in rural Kenya. An analysis of the structures, process and outcomes: University of Cambridge; 2013.

13. Lodenstein E, Dieleman M, Gerretsen B, Broerse JEW. Health provider responsiveness to social accountability initiatives in low- and middle-income countries: A realist review. Health Policy Plan. 2016;32(1):125–40.

14. Mirzoev T, Kane S. What is health systems responsiveness? Review of existing knowledge and proposed conceptual framework. BMJ Glob Health. 2017;2(4):e000486.

15. Baldie DJ, Guthrie B, Entwistle V, Kroll T. Exploring the impact and use of patients’ feedback about their care experiences in general practice settings - a realist synthesis. Fam Pract. 2018;35(1):13–21.

16. Listening Project. Feedback mechanisms in international assistance organizations. Cambridge: CDA Collaborative Learning Projects; 2011.

17. Shrivastava SR, Shrivastava PS, Ramasamy J. Community monitoring. Gateways Int J Community Res Engagement. 2013;6:170–7.

18. Gurung G, Derrett S, Gauld R, Hill PC. Why service users do not complain or have ‘voice’: A mixed-methods study from Nepal’s rural primary health care system. BMC Health Serv Res. 2017;17(1):81.

19. Falisse J-B, Meessen B, Ndayishimiye J, Bossuyt M. Community participation and voice mechanisms under performance-based financing schemes in Burundi. Tropical Med Int Health. 2012;17(5):674–82.

20. Bauhoff S, Tkacheva O, Rabinovich L, Bogdan O. Developing citizen report cards for primary care: evidence from qualitative research in rural Tajikistan. Health Policy Plan. 2016;31(2):259–66.

21. Edward A, Osei-Bonsu K, Branchini C, Yarghal TS, Arwal SH, Naeem AJ. Enhancing governance and health system accountability for people centered healthcare: an exploratory study of community scorecards in Afghanistan. BMC Health Serv Res. 2015;15(299).

22. Mirzoev T, Kane S. Key strategies to improve systems for managing patient complaints within health facilities - what can we learn from the existing literature? Glob Health Action. 2018;11(1):1458938.

23. Cleary SM, Molyneux S, Gilson L. Resources, attitudes and culture: an understanding of the factors that influence the functioning of accountability mechanisms in primary health care settings. BMC Health Serv Res. 2013;13(1):320.

24. George A. Using accountability to improve reproductive health care. Reprod Health Matters. 2003;11(21):161–70.

25. Loewenson R, Tibazarwa K. Annotated bibliography: social power, participation and accountability in health. Harare: TARSC, EQUINET with COPASAH; 2013.

26. Tripathy JP, Aggarwal AK, Patro BK, Verma H. Process evaluation of community monitoring under national health mission at Chandigarh, union territory: methodology and challenges. J Fam Med Prim Care. 2015;4(4):539–45.

27. George A. ‘By papers and pens, you can only do so much’: views about accountability and human resource management from Indian government health administrators and workers. Int J Health Plann Manag. 2009;24(3):205–24.

28. Frisancho A. Citizen monitoring to promote the right to healthcare and accountability. In: Maternal mortality, human rights and accountability. Routledge; 2013. p. 41–58.

29. Roussos ST, Fawcett SB. A review of collaborative partnerships as a strategy for improving community health. Annu Rev Public Health. 2000;21(1):369–402.

30. Berlan D, Shiffman J. Holding health providers in developing countries accountable to consumers: A synthesis of relevant scholarship. Health Policy Plan. 2012;27(4):271–80.

31. Danhoundo G, Nasiri K, Wiktorowicz ME. Improving social accountability processes in the health sector in sub-Saharan Africa: A systematic review. BMC Public Health. 2018;18(1):497.

32. Jones AM, Rice N, Robone S, Dias PR. Inequality and polarisation in health systems’ responsiveness: a cross-country analysis. J Health Econ. 2011;30(4):616–25.

33. Andersson N, Matthis J, Paredes S, Ngxowa N. Social audit of provincial health services: building the community voice into planning in South Africa. J Interprof Care. 2004;18(4):381–90.

34. Larson E, Mbaruku G, Kujawski SA, Mashasi I, Kruk ME. Disrespectful treatment in primary care in rural Tanzania: beyond any single health issue. Health Policy Plan. 2019:1–6.

35. Magruder KJ, Fields NL, Xu L. Abuse, neglect and exploitation in assisted living: an examination of long-term care ombudsman complaint data. J Elder Abuse Negl. 2019;31(3):209–24.

36. Joarder T, George A, Ahmed SM, Rashid SF, Sarker M. What constitutes responsiveness of physicians: a qualitative study in rural Bangladesh. PLoS One. 2017;12(12):1–19.

37. USAID. The citizens monitoring and feedback mechanism: A guide for LGUs in installing a participatory monitoring and evaluation system. 2001.

38. Van Teeffelen J, Baud I. Exercising citizenship: invited and negotiated spaces in grievance redressal systems in Hubli–Dharwad. Environ Urban ASIA. 2011;2(2):169–85.

39. Alavi M, Khodaie Ardakani MR, Moradi-Lakeh M, Sajjadi H, Shati M, Noroozi M, et al. Responsiveness of physical rehabilitation centers in capital of Iran: disparities and related determinants in public and private sectors. Front Public Health. 2018;6:317–27.

40. Olivier J, Whyle E, Khan G. Health system responsiveness: a scoping review. South Africa: University of Cape Town; 2020.

41. Olivier J, Molyneux CS, Gilson L, Schneider H, Sheikh K. Strengthening health system responsiveness to citizen feedback in South Africa and Kenya - project proposal. South Africa: University of Cape Town; 2017.

42. Danan ER, Krebs EE, Ensrud K, Koeller E, MacDonald R, Velasquez T, et al. An evidence map of the women veterans’ health research literature (2008-2015). J Gen Intern Med. 2017;32(12):1359–76.

43. Whyle E, Olivier J. Social values and health systems in health policy and systems research: a mixed-method systematic review and evidence map. Health Policy Plan. 2020;35(6):735–51.

44. Miake-Lye IM, Hempel S, Shanman R, Shekelle PG. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5(28).

45. Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The global evidence mapping initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11(1):92.

46. Haddaway NR, Styles D, Pullin AS. Evidence on the environmental impacts of farm land abandonment in high altitude/mountain regions: a systematic map. Environ Evid. 2014;3(1):17.

47. James KL, Randall NP, Haddaway NR. A methodology for systematic mapping in environmental sciences. Environ Evid. 2016;5(1):7.

48. Hamal M, Dieleman M, De Brouwere V, de Cock Buning T. How do accountability problems lead to maternal health inequities? A review of qualitative literature from Indian public sector. Public Health Rev. 2018;39(1):9.

49. Topp SM, Edelman A, Taylor S. “We are everything to everyone”: a systematic review of factors influencing the accountability relationships of Aboriginal and Torres Strait Islander health workers (AHWs) in the Australian health system. Int J Equity Health. 2018;17(1):67.

50. Clavisi O, Bragge P, Tavender E, Turner T, Gruen RL. Effective stakeholder participation in setting research priorities using a global evidence mapping approach. J Clin Epidemiol. 2013;66(5):496–502.e2.

51. Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks: SAGE Publications; 2016.

52. Hsu C-C, Chen L, Hu Y-W, Yip W, Shu C-C. The dimensions of responsiveness of a health system: a Taiwanese perspective. BMC Public Health. 2006;6(1):1–7.

53. Dewi FD, Sudjana G, Oesman YM. Patient satisfaction analysis on service quality of dental health care based on empathy and responsiveness. Dent Res J. 2011;8(4):172–7.

54. Darby C, Valentine N, Murray CJL, de Silva A. World Health Organization (WHO): strategy on measuring responsiveness. GPE discussion paper series: No. 23. Geneva: World Health Organization; 2000.

55. Joarder T. Understanding and measuring responsiveness of human resources for health in rural Bangladesh: Johns Hopkins University; 2015.

56. Condon L. Seeking the views of service users: from impossibility to necessity. Health Expect. 2017;20(5):805–6.

57. Groenewegen PP, Kerssens JJ, Sixma HJ, van der Eijk I, Boerma WGW. What is important in evaluating health care quality? An international comparison of user views. BMC Health Serv Res. 2005;5.

58. Prakash G, Singh A. Responsiveness in a district health system: the changing relationship of the state with its citizen; 2013.

59. Busse R, Valentine N, S L, Prasad A, Van Ginneken E. Being responsive to citizens’ expectations: The role of health services in responsiveness and satisfaction. In: Figueras J, McKee M, editors. Health systems, health, wealth and societal well-being: Assessing the case for investing in health systems. Maidenhead: McGraw-Hill Education; 2012. p. 175–208.

60. Miller JS, Mhalu A, Chalamilla G, Siril H, Kaaya S, Tito J, et al. Patient satisfaction with HIV/AIDS care at private clinics in Dar es Salaam, Tanzania. AIDS Care. 2014;26(9):1150–4.

61. Mishima SM, Campos AC, Matumoto S, Fortuna CM. Client satisfaction from the perspective of responsiveness: strategy for analysis of universal systems? Rev Lat Am Enfermagem. 2016;24:e2674.

62. Torabipour A, Gharacheh L, Lorestani L, Salehi R. Comparison of responsiveness level in Iranian public and private physiotherapy clinics: a cross-sectional multi-center study. Mater Socio-Med. 2017;29(3):172–5.

63. Busse R. Understanding satisfaction, responsiveness and experience with the health system. In: Papanicolas I, Smith P, editors. Health system performance comparison: an agenda for policy, information and research. New York: McGraw-Hill Education; 2013. p. 255–79.

64. Rice N, Robone S, Smith PC. The measurement and comparison of health system responsiveness: Centre for Health Economics, University of York; 2008. Contract No.: 08/05.

65. Valentine NB, Lavallée R, Bao L, Bonsel GJ, Murray CJL. Classical psychometric assessment of the responsiveness instrument in the WHO multi-country survey study on health and responsiveness 2000–2001. In: Health systems performance assessment: debates, methods and empiricism. Geneva: World Health Organization; 2003. p. 597–629.

66. Valentine NB, Salomon JA, Murray C, Evans D. Weights for responsiveness domains: analysis of country variation in 65 national sample surveys. In: Murray CJL, Evans DB, editors. Health systems performance assessment: debates, methods and empiricism. Geneva: World Health Organization; 2003. p. 631–52.

67. Bramesfeld A, Wensing M, Bartels P, Bobzin H, Grenier C, Heugren M, et al. Mandatory national quality improvement systems using indicators: an initial assessment in Europe and Israel. Health Policy. 2016;120(11):1256–69.

68. Ustun TB, Chatterji S, Villanueva MV, Bendib L, Celik C, Sadana R, et al. WHO multi-country survey study on health and responsiveness 2000–2001. Geneva: World Health Organization; 2003.

69. Navarro V. The new conventional wisdom: an evaluation of the WHO report health systems: improving performance. Int J Health Serv. 2001;31(1):23–33.

70. Yakob B, Ncama BP. Measuring health system responsiveness at facility level in Ethiopia: performance, correlates and implications. BMC Health Serv Res. 2017;17(1):263.

71. Fiorentini G, Robone S, Verzulli R. Do hospital-specialty characteristics influence health system responsiveness? An empirical evaluation of in-patient care in the Italian region of Emilia-Romagna. Health Econ. 2018;27(2):266–81.

72. Bramesfeld A, Stegbauer C. Assessing the performance of mental health service facilities for meeting patient priorities and health service responsiveness. Epidemiol Psychiatr Sci. 2016;25(5):417–21.

73. Rottger J, Blumel M, Fuchs S, Busse R. Assessing the responsiveness of chronic disease care - is the World Health Organization's concept of health system responsiveness applicable? Soc Sci Med. 2014;113:87–94.

74. Forouzan AS. Assessing responsiveness in the mental health care system: the case of Tehran: Umeå Universitet; 2015.

75. Tille F, Röttger J, Gibis B, Busse R, Kuhlmey A, Schnitzer S. Patients’ perceptions of health system responsiveness in ambulatory care in Germany. Patient Educ Couns. 2019;102(1):162–71.

76. Geldsetzer P, Haakenstad A, James EK, Atun R. Non-technical health care quality and health system responsiveness in middle-income countries: a cross-sectional study in China, Ghana, India, Mexico, Russia, and South Africa. J Glob Health. 2018;8(2):020417.

77. Njeru MK, Blystad A, Nyamongo IK, Fylkesnes K. A critical assessment of the WHO responsiveness tool: lessons from voluntary HIV testing and counselling services in Kenya. BMC Health Serv Res. 2009;9(1):1.

78. Davies J, Wright J, Drake S, Bunting J. ‘By listening hard’: developing a service-user feedback system for adopted and fostered children in receipt of mental health services. Adopt Foster. 2009;33(4):19–33.

79. Han E, Hudson Scholle S, Morton S, Bechtel C, Kessler R. Survey shows that fewer than a third of patient-centered medical home practices engage patients in quality improvement. Health Aff (Millwood). 2013;32(2):368–75.

80. Hovey RB, Morck A, Nettleton S, Robin S, Bullis D, Findlay A, et al. Partners in our care: patient safety from a patient perspective. Qual Saf Health Care. 2010;19(6):e59.

81. Sheard L, Marsh C, O'Hara J, Armitage G, Wright J, Lawton R. The patient feedback response framework - understanding why UK hospital staff find it difficult to make improvements based on patient feedback: a qualitative study. Soc Sci Med. 2017;178:19–27.

82. Adams M, Maben J, Robert G. ‘It’s sometimes hard to tell what patients are playing at’: how healthcare professionals make sense of why patients and families complain about care. Health. 2018;22(6):603–23.

83. Atherton H, Fleming J, Williams V, Powell J. Online patient feedback: a cross-sectional survey of the attitudes and experiences of United Kingdom health care professionals. J Health Serv Res Policy. 2019;24(4):235–44.

84. Schaad B, Bourquin C, Panese F, Stiefel F. How physicians make sense of their experience of being involved in hospital users’ complaints and the associated mediation. BMC Health Serv Res. 2019;19(1):1–8.

85. Asprey A, Campbell JL, Newbould J, Cohn S, Carter M, Davey A, et al. Challenges to the credibility of patient feedback in primary healthcare settings: a qualitative study. Br J Gen Pract. 2013;63(608):e200–8.

86. Brookes G, Baker P. What does patient feedback reveal about the NHS? A mixed methods study of comments posted to the NHS choices online service. BMJ Open. 2017;7(4):e013821.

87. Cheng BS, McGrath C, Bridges SM, Yiu CK. Development and evaluation of a dental patient feedback on consultation skills (DPFC) measure to enhance communication. Community Dent Health. 2015;32(4):226–30.

88. Gill SD, Redden-Hoare J, Dunning TL, Hughes AJ, Dolley PJ. Health services should collect feedback from inpatients at the point of service: opinions from patients and staff in acute and subacute facilities. Int J Qual Health Care. 2015;27(6):507–12.

89. Whitney J, Easter A, Tchanturia K. Service users’ feedback on cognitive training in the treatment of anorexia nervosa: a qualitative study. Int J Eat Disord. 2008;41(6):542–50.

90. van Velthoven MH, Atherton H, Powell J. A cross sectional survey of the UK public to understand use of online ratings and reviews of health services. Patient Educ Couns. 2018;101(9):1690–6.

91. Tengilimoglu D, Sarp N, Yar CE, Bektas M, Hidir MN, Korkmaz E. The consumers’ social media use in choosing physicians and hospitals: the case study of the province of Izmir. Int J Health Plann Manag. 2017;32(1):19–35.

92. Lambert MJ, Shimokawa K. Collecting client feedback. In: Norcross JC, editor. Evidence-based therapy relationships; 2011. p. 72–9.

93. Boiko O, Campbell JL, Elmore N, Davey AF, Roland M, Burt J. The role of patient experience surveys in quality assurance and improvement: a focus group study in English general practice. Health Expect. 2015;18(6):1982–94.

94. Murante AM, Vainieri M, Rojas D, Nuti S. Does feedback influence patient - professional communication? Empirical evidence from Italy. Health Policy. 2014;116(2–3):273–80.

95. Lucock M, Halstead J, Leach C, Barkham M, Tucker S, Randal C, et al. A mixed-method investigation of patient monitoring and enhanced feedback in routine practice: barriers and facilitators. Psychother Res. 2015;25(6):633–46.

96. Lawton R, O'Hara JK, Sheard L, Armitage G, Cocks K, Buckley H, et al. Can patient involvement improve patient safety? A cluster randomised control trial of the Patient Reporting and Action for a Safe Environment (PRASE) intervention. BMJ Qual Saf. 2017;26(8):622–31.

97. Stewardson AJ, Sax H, Gayet-Ageron A, Touveneau S, Longtin Y, Zingg W, et al. Enhanced performance feedback and patient participation to improve hand hygiene compliance of health-care workers in the setting of established multimodal promotion: a single-centre, cluster randomised controlled trial. Lancet Infect Dis. 2016;16(12):1345–55.

98. Campbell J, Narayanan A, Burford B, Greco M. Validation of a multi-source feedback tool for use in general practice. Educ Prim Care. 2010;21(3):165–79.

99. Emslie MJ, Andrew J, Entwistle V, Walker K. Who are your public? A survey comparing the views of a population-based sample with those of a community-based public forum in Scotland. Health Soc Care Community. 2005;13(2):164–9.

100. Farmer J, Bigby C, Davis H, Carlisle K, Kenny A, Huysmans R. The state of health services partnering with consumers: evidence from an online survey of Australian health services. BMC Health Serv Res. 2018;18(1):628.

101. Scott J, Heavey E, Waring J, Jones D, Dawson P. Healthcare professional and patient codesign and validation of a mechanism for service users to feedback patient safety experiences following a care transfer: a qualitative study. BMJ Open. 2016;6(7):e011222.

102. Southwick FS, Cranley NM, Hallisy JA. A patient-initiated voluntary online survey of adverse medical events: the perspective of 696 injured patients and families. BMJ Qual Saf. 2015;24(10):620–9.

103. Kerrison S, Pollock A. Complaints as accountability? The case of health care in the United Kingdom. Public Law; 2001. p. 115–33.

104. Katzenellenbogen JM, Sanfilippo FM, Hobbs TMS, Knuiman MW, Bassarab D, Durey A, et al. Voting with their feet - predictors of discharge against medical advice in Aboriginal and non-Aboriginal ischaemic heart disease inpatients in Western Australia: an analytic study using data linkage. BMC Health Serv Res. 2013;13(1):1–10.

105. Willig JH, Krawitz M, Panjamapirom A, Ray MN, Nevin CR, English TM, et al. Closing the feedback loop: an interactive voice response system to provide follow-up and feedback in primary care settings. J Med Syst. 2013;37(2):9905.

106. Wright C, Davey A, Elmore N, Carter M, Mounce L, Wilson E, et al. Patients’ use and views of real-time feedback technology in general practice. Health Expect. 2017;20(3):419–33.

107. Entwistle VA, Andrew JE, Emslie MJ, Walker KA, Dorrian C, Angus VC, et al. Public opinion on systems for feeding back views to the National Health Service. Qual Saf Health Care. 2003;12(6):435–42.

108. Baharvand P. Responsiveness of the health system towards patients admitted to west of Iran hospitals. Electron J Gen Med. 2019;16(2):1–7.

109. Garza B. Increasing the responsiveness of health services in Mexico’s Seguro Popular: three policy proposals for voice and power. Health Syst Reform. 2015;1(3):235–45.

110. Van Belle S, Mayhew SH. What can we learn on public accountability from non-health disciplines: a meta-narrative review. BMJ Open. 2016;6:e010425.

111. Gaitonde R, Muraleedharan VR, San Sebastian M, Hurtig A-K. Accountability in the health system of Tamil Nadu, India: exploring its multiple meanings. Health Res Policy Syst. 2019;17(1):44.

112. de Kok BC. Between orchestrated and organic: accountability for loss and the moral landscape of childbearing in Malawi. Soc Sci Med. 2019;220:441–9.

113. Berta W, Laporte A, Wodchis WP. Approaches to accountability in long-term care. Healthcare Policy. 2014;10(SP):132.

114. Uzochukwu B, Mbachu C, Okeke C, Onwujekwe E, Molyneux S, Gilson L. Accountability mechanisms for implementing a health financing option: the case of the basic health care provision fund (BHCPF) in Nigeria. Int J Equity Health. 2018;17(1).

115. Bell SK, Delbanco T, Anderson-Shaw L, McDonald TB, Gallagher TH, et al. Accountability for medical error: moving beyond blame to advocacy. Chest. 2011;140(2):519–26.

116. Belela-Anacleto ASC, Pedreira MLG. Patient safety era: time to think about accountability. Nursing in critical care. Malden: Wiley-Blackwell; 2016. p. 321–2.

117. Murthy RK, Klugman B. Service accountability and community participation in the context of health sector reforms in Asia: implications for sexual and reproductive health services. Health Policy Plan. 2004;19(suppl 1):i78–86.

118. Checkland K, Marshall M, Harrison S. Re-thinking accountability: trust versus confidence in medical practice. Qual Saf Health Care. 2004;13(2):130–5.

119. Nxumalo N, Gilson L, Goudge J, Tsofa B, Cleary S, Barasa E, et al. Accountability mechanisms and the value of relationships: experiences of front-line managers at subnational level in Kenya and South Africa. BMJ Glob Health. 2018;3(4):e000842.

120. Woollard R, Buchman S, Meili R, Strasser R, Alexander I, Goel R. Social accountability at the meso level: into the community. Can Fam Physician. 2016;62(7):538–40.

    PubMed  PubMed Central  Google Scholar 

  121. 121.

    Cordery CJ. Dimensions of accountability: voices from New Zealand primary health organisations: Victoria University of Wellington; 2008.

  122. 122.

    Brinkerhoff D. Accountability and health systems: overview, framework, and strategies. Bethesda: Partners for Health Reformplus; 2003.

    Google Scholar 

  123. 123.

    Brinkerhoff DW. Accountability and health systems: toward conceptual clarity and policy relevance. Health Policy Plan. 2004;19(6):371–9.

    Article  PubMed  PubMed Central  Google Scholar 

  124. 124.

    Human Rights Watch. ‘Stop making excuses’ accountability for maternal health care in South Africa. New York: Human Rights Watch; 2011.

    Google Scholar 

  125. 125.

    Mafuta EM, Dieleman MA, Hogema LM, Khomba PN, Zioko FM, Kayembe PK, et al. Social accountability for maternal health services in Muanda and Bolenge health zones, Democratic Republic of Congo: a situation analysis. BMC Health Serv Res. 2015;15:1–17.

    Article  Google Scholar 

  126. 126.

    Hamal M, de Cock BT, De Brouwere V, Bardají A, Dieleman M, Bardají A. How does social accountability contribute to better maternal health outcomes? A qualitative study on perceived changes with government and civil society actors in Gujarat, India. BMC Health Serv Res. 2018;18(1):N.PAG-N.PAG.

    Article  Google Scholar 

  127. 127.

    Mafuta EM, Dieleman MA, Essink L, Khomba PN, Zioko FM, Mambu TNM, et al. Participatory approach to design social accountability interventions to improve maternal health services: a case study from the Democratic Republic of the Congo. Glob Health Res Policy. 2017;2:4.

    PubMed  PubMed Central  Article  Google Scholar 

  128. 128.

    Shukla A, Sinha SS. Reclaiming public health through community-based monitoring: The case of Maharashtra, India: Municipal Services Project; 2014. Contract No.: 27.

  129. 129.

    Madon S, Krishna S. Challenges of accountability in resource-poor contexts: lessons about invited spaces from Karnataka’s village health committees. Oxf Dev Stud. 2017;45(4):522–41.

    Article  Google Scholar 

  130. 130.

    Hefner JL, Hilligoss B, Sieck C, Walker DM, Sova L, Song PH, et al. Meaningful engagement of ACOS with communities: the new population health management. Med Care. 2016;54(11):970–6.

    Article  PubMed  PubMed Central  Google Scholar 

  131. 131.

    Andrews ML, Sánchez V, Carrillo C, Allen-Ananins B, Cruz YB. Using a participatory evaluation design to create an online data collection and monitoring system for New Mexico’s community health councils. Eval Program Plann. 2014;42:32–42.

    CAS  PubMed  Article  PubMed Central  Google Scholar 

  132. 132.

    Boothroyd RI, Flint AY, Lapiz AM, Lyons S, Jarboe KL, Aldridge WA. Active involved community partnerships: co-creating implementation infrastructure for getting to and sustaining social impact. Transl Behav Med. 2017;7(3):467–77.

    PubMed  PubMed Central  Article  Google Scholar 

  133. 133.

    Ringold D, Holla A, Koziol M, Srinivasan S. Citizens and service delivery: assessing the use of social accountability approaches in human development. Washington DC: The World Bank; 2012.

    Google Scholar 

  134. 134.

    Oliver S, Armes DG, Gyte G. Public involvement in setting a national research agenda: a mixed methods evaluation. Patient. 2009;2(3):179–90.

    PubMed  Article  PubMed Central  Google Scholar 

  135. 135.

    Srivastava A, Gope R, Nair N, Rath S, Rath S, Sinha R, et al. Are village health sanitation and nutrition committees fulfilling their roles for decentralised health planning and action? A mixed methods study from rural eastern India. BMC Public Health. 2016;16(1):59.

    PubMed  PubMed Central  Article  Google Scholar 

  136. 136.

    Bonino F, Warner A. What makes humanitarian feedback mechanisms work? Literature review to support an ALNAP–CDA action research into humanitarian feedback mechanisms. London: ALNAP/ODI; 2014.

    Google Scholar 

  137. 137.

    Fox JA. Social accountability: what does the evidence really say? World Dev. 2015;72:346–61.

    Article  Google Scholar 

  138. 138.

    Bleich SN, Ozaltin E, Murray CJL. How does satisfaction with the health-care system relate to patient experience? Bull World Health Organ. 2009;87(4):271–8.

    Article  PubMed  PubMed Central  Google Scholar 

  139. 139.

    Grandvoinnet H, Ghazia A, Shomikho R. Opening the black box: the contextual drivers of social accountability. Washington, DC: World Bank Group; 2015.

    Book  Google Scholar 

  140. 140.

    Paina L, Saracino J, Bishai J, Sarriot E. Monitoring and evaluation of evolving social accountability efforts in health: a literature synthesis; 2019.

    Google Scholar 

  141. 141.

    Post D, Agarwal S, Venugopa V. Rapid feedback: The role of community scorecards in improving service delivery: World Bank Social Development Department (SDV); 2014.

  142. 142.

    Tremblay D, Roberge D, Berbiche D. Determinants of patient-reported experience of cancer services responsiveness. BMC Health Serv Res. 2015;15(1):425.

    Article  PubMed  PubMed Central  Google Scholar 

  143. 143.

    Wilson LJ, Yepuri JN, Moses RE. The advantages and challenges of measuring patient experience in outpatient clinical practice. Part 3: patient satisfaction and your practice. Am J Gastroenterol. 2016:757–9.

  144. 144.

    World Health Organization. Intersectoral action on health: a path for policy-makers to implement effective and sustainable action on health. Kobe: World Health Organization (WHO), The WHO Centre for Health Development; 2011.

    Google Scholar 

  145. 145.

    Bjorkman M, Svensson J. Power to the people: evidence from a randomized field experiment on community-based monitoring in Uganda. World Bank Policy Res Working Paper. 2009;124(2):735–69.

    Google Scholar 

  146. 146.

    Olayo R, Wafula C, Aseyo E, Loum C, Kaseje D. A quasi-experimental assessment of the effectiveness of the community health strategy on health outcomes in Kenya. BMC Health Serv Res. 2014;14(Suppl 1):S3.

    PubMed  PubMed Central  Article  Google Scholar 

  147. 147.

    Gullo S, Galavotti C, Sebert Kuhlmann A, Msiska T, Hastings P, Marti CN. Effects of a social accountability approach, CARE’s community score card, on reproductive health-related outcomes in Malawi: A cluster-randomized controlled evaluation. PLoS One. 2017;12(2):1–20.

    Article  CAS  Google Scholar 

  148. 148.

    McCoy DC, Hall JA, Ridge M. A systematic review of the literature for evidence on health facility committees in low-and middle-income countries. Health Policy Plan. 2011;27(6):449–66.

    PubMed  Article  PubMed Central  Google Scholar 

  149. 149.

    Oguntunde O, Surajo IM, Dauda DS, Salihu A, Anas-Kolo S, Sinai I. Overcoming barriers to access and utilization of maternal, newborn and child health services in northern Nigeria: an evaluation of facility health committees. BMC Health Serv Res. 2018;18(1):104.

    Article  PubMed  PubMed Central  Google Scholar 

  150. 150.

    Teklehaimanot HD, Teklehaimanot A, Tedella AA, Abdella M. Use of balanced scorecard methodology for performance measurement of the health extension program in Ethiopia. Am J Trop Med Hyg. 2016;94(5):1157–69.

    PubMed  PubMed Central  Article  Google Scholar 

  151. 151.

    Atela M, Bakibinga P, Ettarh R, Kyobutungi C, Cohn S. Strengthening health system governance using health facility service charters: a mixed methods assessment of community experiences and perceptions in a district in Kenya. BMC Health Serv Res. 2015;15(1):539.

    Article  PubMed  PubMed Central  Google Scholar 

  152. 152.

    Ho LS, Labrecque G, Batonon I, Salsi V, Ratnayake R. Effects of a community scorecard on improving the local health system in eastern Democratic Republic of Congo: qualitative evidence using the most significant change technique. Confl Heal. 2015;9(1):1–11.

    Article  Google Scholar 

  153. 153.

    O'Hara JK, Armitage G, Reynolds C, Coulson C, Thorp L, Din I, et al. How might health services capture patient-reported safety concerns in a hospital setting? An exploratory pilot study of three mechanisms. BMJ Qual Saf. 2017;26(1):42–53.

    Article  PubMed  Google Scholar 

  154. 154.

    Katahoire AR, Henriksson DK, Ssegujja E, Waiswa P, Ayebare F, Bagenda D, et al. Improving child survival through a district management strengthening and community empowerment intervention: early implementation experiences from Uganda. BMC Public Health. 2015;15(1):797.

    Article  PubMed  PubMed Central  Google Scholar 

  155. 155.

    Lecoanet A, Sellier E, Carpentier F, Maignan M, Seigneurin A, Francois P. Experience feedback committee in emergency medicine: a tool for security management. Emerg Med J. 2014;31(11):894–8.

    Article  PubMed  Google Scholar 

  156. 156.

    Lodenstein E, Mafuta E, Kpatchavi AC, Servais J, Dieleman M, Broerse JEW, et al. Social accountability in primary health care in west and Central Africa: exploring the role of health facility committees. BMC Health Serv Res. 2017;17:1–15.

    Article  Google Scholar 

  157. 157.

    Nagy M, Chiarella M, Bennett B, Walton M, Carney T. Health care complaint journeys for system comparison. Int J Health Care Qual Assur (09526862). 2018;31(8):878–87.

    Article  Google Scholar 

  158. 158.

    Street J, Duszynski K, Krawczyk S, Braunack-Mayer A. The use of citizens’ juries in health policy decision-making: a systematic review. Soc Sci Med. 2014;109:1–9.

    Article  PubMed  Google Scholar 

  159. 159.

    Serapioni M, Duxbury N. Citizens’ participation in the Italian health-care system: the experience of the mixed advisory committees. Health Expect. 2014;17(4):488–99.

    Article  PubMed  Google Scholar 

  160. 160.

    Gurung G, Tuladhar S. Fostering good governance at peripheral public health facilities: an experience from Nepal. Rural Remote Health. 2013;13(2):2042.

    CAS  PubMed  Google Scholar 

  161. 161.

    Srivastava A, Bhattacharyya S, Gautham M, Schellenberg J, Avan BI. Linkages between public and non-government sectors in healthcare: a case study from Uttar Pradesh, India. Glob Public Health. 2016;11(10):1216–30.

    Article  PubMed  Google Scholar 

  162. 162.

    Schaaf M, Topp SM, Ngulube M. From favours to entitlements: community voice and action and health service quality in Zambia. Health Policy Plan. 2017;32(6):847–59.

    Article  PubMed  PubMed Central  Google Scholar 

  163. 163.

    Uzochukwu B. Trust, accountability and performance in health facility committees in Orumba south local government area, Anambra state, Nigeria; 2011.

    Google Scholar 

  164. 164.

    Gullo S, Galavotti C, Altman L. A review of CARE’s community score card experience and evidence. Un resumen de las experiencias y la evidencia de la Carta de Resultados Comunitarios de CARE. 2016;31(10):1467–78.

  165. 165.

    Karuga RN, Kok M, Mbindyo P, Hilverda F, Otiso L, Kavoo D, et al. “It’s like these CHCs don’t exist, are they featured anywhere?”: social network analysis of community health committees in a rural and urban setting in Kenya. PLoS One. 2019;14(8):1–19.

    Article  CAS  Google Scholar 

  166. 166.

    Ramiro LS, Castillo FA, Tan-Torres T, Torres CE, Tayag JG, Talampas RG, et al. Community participation in local health boards in a decentralized setting: cases from the Philippines. Health Policy Plan. 2001;16(Suppl 2):61–9.

    Article  PubMed  PubMed Central  Google Scholar 

  167. 167.

    Aiken LH, Sermeus W, Van den Heede K, Sloane DM, Busse R, McKee M, et al. Patient safety, satisfaction, and quality of hospital care: cross sectional surveys of nurses and patients in 12 countries in Europe and the United States. Bmj. 2012;344(mar20 2):e1717.

    Article  PubMed  PubMed Central  Google Scholar 

  168. 168.

    Geletta S. Measuring patient satisfaction with medical services using social media generated data. Int J Health Care Qual Assur. 2018;31(2):96–105.

    Article  PubMed  PubMed Central  Google Scholar 

  169. 169.

    Stepurko T, Pavlova M, Groot W. Overall satisfaction of health care users with the quality of and access to health care services: a cross-sectional study in six central and eastern European countries. BMC Health Serv Res. 2016;16:1–13.

    Article  Google Scholar 

  170. 170.

    Farrington C, Burt J, Boiko O, Campbell J, Roland M. Doctors’ engagements with patient experience surveys in primary and secondary care: a qualitative study. Health Expect. 2017;20(3):385–94.

    Article  PubMed  PubMed Central  Google Scholar 

  171. 171.

    Sciamanna CN, Novak SP, Houston TK, Gramling R, Marcus BH. Visit satisfaction and tailored health behavior communications in primary care. Am J Prev Med. 2004;26(5):426–30.

    Article  PubMed  PubMed Central  Google Scholar 

  172. 172.

    Thornton RD, Nurse N, Snavely L, Hackett-Zahler S, Frank K, DiTomasso RA. Influences on patient satisfaction in healthcare centers: a semi-quantitative study over 5 years. BMC Health Serv Res. 2017;17(1):361.

    PubMed  PubMed Central  Article  Google Scholar 

  173. 173.

    Househ M, Borycki E, Kushniruk A. Empowering patients through social media: the benefits and challenges. Health Informatics J. 2014;20(1):50–8.

    Article  PubMed  Google Scholar 

  174. 174.

    James TL, Villacis Calderon ED, Cook DF. Exploring patient perceptions of healthcare service quality through analysis of unstructured feedback. Expert Syst Appl. 2017;71:479–92.

    Article  Google Scholar 

  175. 175.

    Kite J, Foley BC, Grunseit AC, Freeman B. Please like me: Facebook and public health communication. PLoS One. 2016;11(9):e0162765.

    CAS  Article  PubMed  PubMed Central  Google Scholar 

  176. 176.

    Rozenblum R, Greaves F, Bates DW. The role of social media around patient experience and engagement. BMJ Qual Saf. 2017:845–8.

  177. 177.

    Shepherd A, Sanders C, Doyle M, Shaw J. Using social media for support and feedback by mental health service users: thematic analysis of a twitter conversation. BMC Psychiatry. 2015;15(1):29.

    Article  PubMed  PubMed Central  Google Scholar 

  178. 178.

    Cullinan K. Citizen reporting on district health services. In: Padarath A, English R, editors. South African health review. 2012/2013 ed. Durban: Health Systems Trust; 2013. p. 83–8.

    Google Scholar 

  179. 179.

    Sri BS, N. S, Khanna R. An investigation of maternal deaths following public protests in a tribal district of Madhya Pradesh, Central India. Reprod Health Matters. 2012;20(39):11–20.

    Article  Google Scholar 

  180. 180.

    Wiysonge CS, Paulsen E, Lewin S, Ciapponi A, Herrera CA, Opiyo N, et al. Financial arrangements for health systems in low-income countries: an overview of systematic reviews. Cochrane Database Syst Rev. 2017;9.

  181. 181.

    Ebenso B, Huque R, Azdi Z, Elsey H, Nasreen S, Mirzoev T. Protocol for a mixed-methods realist evaluation of a health service user feedback system in Bangladesh. BMJ Open. 2017;7(6):e017743-e.

    Article  Google Scholar 

  182. 182.

    Lodenstein E, Dieleman M, Gerretsen B, Broerse JE. A realist synthesis of the effect of social accountability interventions on health service providers’ and policymakers’ responsiveness. Syst Rev. 2013;2:98.

    PubMed  PubMed Central  Article  Google Scholar 

  183. 183.

    Adam T, de Savigny D. Systems thinking for strengthening health systems in LMICs: Need for a paradigm shift. Health Policy Plan. 2012;27(Suppl 4):iv1–3.

    PubMed  PubMed Central  Google Scholar 

  184. 184.

    Robone S, Rice N, Smith PC. Health systems’ responsiveness and its characteristics: a cross-country comparative analysis. Health Serv Res. 2011;46(6pt2):2079–100.

    PubMed  PubMed Central  Article  Google Scholar 

  185. 185.

    Lodenstein E, Ingemann C, Molenaar JM, Dieleman M, Broerse JEW. Informal social accountability in maternal health service delivery: a study in northern Malawi. PLoS One. 2018;13(4):1–17.

    Article  CAS  Google Scholar 

  186. 186.

    Abimbola S, Negin J, Jan S, Martiniuk A. Towards people-centred health systems: a multi-level framework for analysing primary health care governance in low- and middle-income countries. Health Policy Plan. 2014;29(Suppl 2):ii29–39.

    PubMed  PubMed Central  Article  Google Scholar 

  187. 187.

    Pratt B, Hyder AA. Reinterpreting responsiveness for health systems research in low and middle-income countries. Bioethics. 2015;29(6):379–88.

    Article  PubMed  PubMed Central  Google Scholar 

  188. 188.

    Kowal P, Naidoo N, Williams SR, Chatterji S. Performance of the health system in China and Asia as measured by responsiveness. Health. 2011;3(10):638–46.

    Article  Google Scholar 

Acknowledgements

Not applicable.


Funding

This research was funded through the Health Systems Research Initiative (HSRI) in the UK, a collaboration between the UK MRC, ESRC, DFID and the Wellcome Trust. Grant number: MR/P004725/1.

Author information




Contributions

GK performed the main review, made written contributions to the manuscript, and reviewed drafts. NK reviewed and contributed to drafts. EW was involved in the main review process and reviewed drafts. LG contributed to the conception and reviewed drafts. SM contributed to the conception and reviewed drafts. NS reviewed and contributed to drafts. BT contributed to the conception and reviewed drafts. EB reviewed and contributed to drafts. JO was involved in the main review process, wrote the first draft, and reviewed drafts. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Gadija Khan.

Ethics declarations

Ethics approval and consent to participate

Not applicable for this study.

Consent for publication

Not applicable for this study.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Search terms for review.

Additional file 2.

Data extraction table for the 621 publications included in the review and analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Khan, G., Kagwanja, N., Whyle, E. et al. Health system responsiveness: a systematic evidence mapping review of the global literature. Int J Equity Health 20, 112 (2021).



Keywords

  • Responsiveness
  • Health system
  • Accountability
  • Feedback loops
  • User experience
  • Evidence mapping