Did a quality improvement intervention improve quality of maternal health care? Implementation evaluation from a cluster-randomized controlled study


Study Justification:
The study evaluated whether a maternal healthcare quality improvement intervention improved the quality of care provided in primary care facilities in rural Tanzania. It was justified by the low quality of care observed at baseline and by the broader need to improve maternal health outcomes and reduce maternal and newborn deaths.
Highlights:
– The study was conducted in 24 primary care facilities in four districts of rural Tanzania.
– The intervention group received a quality improvement intervention, including in-service training, mentorship, supportive supervision, and infrastructure support.
– The control group continued with standard care.
– The study measured fidelity with indicators of quality and compared the quality between intervention and control facilities.
– Results showed that the intervention was associated with an increase in newborn counseling but did not lead to significant improvements in other quality indicators.
– On average, facilities reached 39% implementation of the intervention.
– The study concluded that the multi-faceted quality improvement intervention did not result in meaningful improvements in quality, possibly due to a failure to sustain high-level implementation and limitations in theory.
Recommendations:
Based on the study findings, the following recommendations can be made:
1. Improve sustainability of quality improvement interventions by addressing implementation challenges and ensuring ongoing support.
2. Consider alternative approaches or additional interventions to improve quality in primary care facilities with weak starting quality, including addressing infrastructure and provider competence.
3. Conduct further research to identify effective strategies for improving quality of maternal healthcare in resource-limited settings.
Key Role Players:
To address the recommendations, the involvement of the following key role players is crucial:
1. Ministry of Health: Responsible for policy development and coordination of healthcare services.
2. District Health Authorities: Responsible for implementing and monitoring healthcare services at the district level.
3. Primary Care Facility Managers: Responsible for overseeing the implementation of quality improvement interventions at the facility level.
4. Healthcare Providers: Responsible for delivering quality maternal healthcare services.
5. Training Institutions: Responsible for providing in-service training and continuous professional development for healthcare providers.
Cost Items for Planning Recommendations:
While the actual cost of implementing the recommendations will vary depending on the context, some key cost items to consider in planning include:
1. Training and Capacity Building: Costs associated with training healthcare providers and facility managers on quality improvement strategies and best practices.
2. Infrastructure Improvement: Costs for improving the physical infrastructure of primary care facilities, including equipment, supplies, and medication.
3. Monitoring and Evaluation: Costs for establishing systems to monitor and evaluate the implementation and impact of quality improvement interventions.
4. Supportive Supervision: Costs for providing ongoing mentorship and supportive supervision to healthcare providers.
5. Research and Development: Costs for conducting further research to identify effective strategies for improving quality of maternal healthcare.
Please note that the above cost items are general categories and the actual cost estimation will require a detailed analysis of the specific context and requirements.

The strength of evidence for this abstract is 6 out of 10.
The evidence in the abstract is rated 6 because the study design is a cluster-randomized controlled study with implementation evaluation, which provides a moderate level of evidence. The study measured fidelity with indicators of quality and compared quality between intervention and control facilities. However, the study found no meaningful improvement in quality, suggesting that the intervention may not be effective. To improve the evidence, future studies could consider increasing the sample size, conducting a longer follow-up period, and addressing the limitations mentioned in the abstract.

Objective: To test the success of a maternal healthcare quality improvement intervention in actually improving quality. Design: Cluster-randomized controlled study with implementation evaluation; we randomized 12 primary care facilities to receive a quality improvement intervention, while 12 facilities served as controls. Setting: Four districts in rural Tanzania. Participants: Health facilities (24), providers (70 at baseline; 119 at endline) and patients (784 at baseline; 886 at endline). Interventions: In-service training, mentorship and supportive supervision, and infrastructure support. Main outcome measures: We measured fidelity with indicators of quality and compared quality between intervention and control facilities using difference-in-differences analysis. Results: Quality of care was low at baseline: the average provider knowledge test score was 46.1% (range: 0-75%) and only 47.9% of women were very satisfied with delivery care. The intervention was associated with an increase in newborn counseling (β: 0.74, 95% CI: 0.13, 1.35) but no evidence of change across 17 additional indicators of quality. On average, facilities reached 39% implementation. Comparing facilities with the highest implementation of the intervention to control facilities again showed improvement on only one of the 18 quality indicators. Conclusions: A multi-faceted quality improvement intervention resulted in no meaningful improvement in quality. Evidence suggests this is due to both failure to sustain a high level of implementation and failure in theory: quality improvement interventions targeted at the clinic level in primary care clinics with weak starting quality, including poor infrastructure and low provider competence, may not be effective.

This study was implemented in 24 primary care clinics, or dispensaries, in four districts of Pwani Region, Tanzania. Selection criteria were previously described in detail [15]. Dispensaries are outpatient facilities programmed to provide primary care, including reproductive health services [16, 17]. In Pwani, 73% of deliveries occurred in health facilities in 2010, and around one third of those occurred in primary care facilities [12]. We stratified the 24 facilities by district and then randomized facilities in a 1:1 ratio to either the intervention or the control group, resulting in three intervention and three control facilities in each district. Randomization occurred by pulling facility names out of a hat in the presence of research staff and regional health officials. Clusters were defined as the health facility and the surrounding catchment area. Facilities in the intervention group received a maternal and newborn health quality improvement intervention, while facilities in the control group continued with standard care. Delivery of interventions known to avert maternal and newborn deaths (e.g. high quality antenatal care (ANC) and rapid deployment of emergency care) [18] requires competent and motivated providers working within well-equipped facilities that are able to support basic emergency obstetric and newborn care (BEmONC), with appropriate access to referral facilities. The MNH+ intervention uses BEmONC training to provide a review of foundational knowledge, complemented by continuous mentoring and supportive supervision by an obstetrician, and provision of the necessary equipment, supplies, and medication. Our theory of change is that these quality inputs will translate into better quality process of care and outcomes (box). Implementation of the intervention began in June 2012; by July 2013, the full intervention was underway and continued into the spring of 2016. 
Theory of change and intervention components We developed an implementation index to assess the effect of variation of the intervention across the 12 intervention facilities [20, 21]. For each intervention component, we identified indicators for the dose delivered (e.g. proportion of expected supportive supervision visits delivered), reach to the intended audience (e.g. proportion of providers who are trained) and dose received (e.g. provider’s training scores). Fidelity is defined as the correct application of the program [21]. Instead of looking at whether each individual intervention component was implemented as intended, we chose a more demanding definition of fidelity: whether the immediate intended effect, that is improvement in quality, was achieved. We thus specified a range of quality metrics using Donabedian’s model of quality of care of structure, process, and outcome. Trained providers completed a 60-question multiple-choice test that emphasized obstetric and newborn emergency care and two clinical vignettes that tested their clinical judgment in obstetric emergencies (appendix 1), receiving a continuous score between 0 and 1 on each instrument. We used data from facility registers to create a composite indicator of routine obstetric services (appendix 2). For each facility, we created an indicator for the sum of each of the six BEmONC signal functions (life-saving health services) that had been performed in the previous 3 months. We measured reported receipt of services as the proportion of women receiving a uterotonic, the proportion of women receiving IV antibiotics and a composite indicator of counseling on six items. We measured patients’ perception of quality through composite indicators for nontechnical quality and technical quality. We asked patients and providers to report their perception of quality at the facility. Patients also reported their satisfaction with delivery care. Indicators were created to compare those with the top rating (e.g. 
excellent or very satisfied) to all others. We measured four indicators of maternal health through biomarkers collected during the household survey: lack of anemia (hemoglobin level is 12.0 g/dl or above for nonpregnant women and 11.0 g/dl or above for pregnant women [22]), lack of hypertension (average systolic reading less than 140 mm Hg and average diastolic reading less than 90 mm Hg [23]), distribution of EQ-5D (EuroQol Group, Rotterdam, Netherlands) and distribution of mid-upper arm circumference (MUAC). Patient-level data were collected as repeated cross-sections in 2012, 2014 and 2016 (Appendix 2 for summary) [15, 24, 25]. All households in the catchment area were enumerated. The sample size was determined based on another primary outcome, utilization. At midline, we selected 60% of women from each catchment area using a simple random sample. Women were eligible for the household survey if they were at least 15 years of age and lived within the catchment area of a study facility, and included in this analysis if they had delivered their most recent child between 6 weeks and 1 year prior to the interview in one of the study facilities. At midline and endline, women were invited to have their hemoglobin and blood pressure tested. The job satisfaction survey was offered to all healthcare providers [26], while the obstetric knowledge test and the clinical vignettes were offered to healthcare providers who had received formal pre-service training in obstetric care (i.e. clinical officers and nurses). The facility audit was adapted from the needs assessment developed by the Averting Maternal Death and Disability Program and the United Nations system [27]. The audit asked about services routinely provided by that facility. In addition, we collected aggregate monthly indicators of use and quality from the facility registers and partographs. The provider surveys, facility audits and register abstraction were conducted annually. 
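The biomarker cut-offs above (anemia and hypertension) can be sketched as simple classifiers. This is an illustrative sketch, not the study's code; the function names are assumptions:

```python
def lacks_anemia(hemoglobin_g_dl: float, pregnant: bool) -> bool:
    """True if classified as not anemic, per the study's definition:
    hemoglobin >= 11.0 g/dl for pregnant women, >= 12.0 g/dl for
    nonpregnant women [22]."""
    threshold = 11.0 if pregnant else 12.0
    return hemoglobin_g_dl >= threshold

def lacks_hypertension(avg_systolic_mm_hg: float, avg_diastolic_mm_hg: float) -> bool:
    """True if classified as not hypertensive: average systolic reading
    below 140 mm Hg and average diastolic reading below 90 mm Hg [23]."""
    return avg_systolic_mm_hg < 140 and avg_diastolic_mm_hg < 90
```

Note the pregnancy-dependent threshold for anemia: the same hemoglobin value (e.g. 11.5 g/dl) counts as non-anemic for a pregnant woman but anemic for a nonpregnant woman.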
The implementation team at Tanzania Health Promotion Support (THPS) collected data on intervention delivery. Data collection methods are further described in appendix 2. All women and healthcare providers participating in surveys provided written, informed consent prior to participation. Ethics review boards in Tanzania (National Institute for Medical Research and Ifakara Health Institute) and in the U.S. (Columbia University and the Harvard T.H. Chan School of Public Health) approved this study. Completed surveys were imported into Stata version 14.2 for cleaning and analysis. We first conducted descriptive statistics, then assessed the implementation and fidelity of the intervention. The three indicators (dose delivered, dose received and reach) were multiplied together to obtain a composite indicator for each of the three components (infrastructure, training and supportive supervision) [21, 28]. These three scores were then averaged to create a single composite measure of implementation strength. Complete implementation would thus be represented by a score of ‘1’ and complete failure of implementation by a score of ‘0’. To measure the effect of the MNH+ intervention on obstetric quality, we conducted difference-in-differences analyses assessing the difference between intervention and control facilities in the change of each quality indicator from baseline (2012) to endline (2016). These analyses control for both differences in quality patterns between facilities at baseline and changing patterns over time that are external to the intervention but consistent across the region. We included a fixed effect for district to account for stratification during the design phase. Except where noted, all models used generalized estimating equations with an exchangeable correlation structure. For binary quality measures, we used a log link to estimate risk ratios [29]. The robust sandwich estimator was used to account for clustering at the facility level. 
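The implementation-strength index described above multiplies dose delivered, reach, and dose received within each component, then averages across the three components. A minimal sketch with made-up proportions (illustrative only, not study data):

```python
def component_score(dose_delivered: float, reach: float, dose_received: float) -> float:
    """Score for one intervention component: the product of its three
    implementation indicators, each a proportion in [0, 1]."""
    return dose_delivered * reach * dose_received

def implementation_strength(component_scores: list[float]) -> float:
    """Average the component scores (infrastructure, training, supportive
    supervision): 1 = complete implementation, 0 = complete failure."""
    return sum(component_scores) / len(component_scores)

# Illustrative values, not study data:
scores = [
    component_score(0.9, 0.8, 0.7),  # infrastructure
    component_score(0.6, 0.5, 0.9),  # training
    component_score(0.5, 0.4, 0.6),  # supportive supervision
]
strength = implementation_strength(scores)  # about 0.298 for these inputs
```

Because the three indicators are multiplied, a shortfall in any one (e.g. low reach of training) pulls the whole component score down, which makes the index a demanding measure of implementation.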
Because anemia and hypertension were not measured at baseline, we could not conduct a difference-in-differences analysis. Instead, we compared intervention to control at endline and adjusted for age, household wealth and district [30, 31]. Additionally, we assessed whether there was an effect of the intervention on the quality results at midline (2014). To assess changes in provider knowledge and competence, our primary analysis evaluated within provider changes. Because of unexpectedly low retention of providers across the five-year study period, we assessed changes from baseline (2012) to first follow-up (2013). We conducted a secondary analysis to measure changes in mean facility knowledge score from baseline (2012) to endline (2016). We conducted linear regression with a fixed effect for district and the robust sandwich estimator to account for clustering at the facility level. We conducted a sub-group analysis to assess the impact of the intervention in the high-implementation facilities (top third) compared to control facilities (N = 12) through difference-in-differences analyses.
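The core difference-in-differences contrast compares the change in the intervention group to the change in the control group over the same period. The study estimated this within a GEE framework with a district fixed effect and facility-clustered robust errors; the sketch below shows only the unadjusted contrast on group means, with made-up numbers:

```python
def did_estimate(treat_baseline: float, treat_endline: float,
                 ctrl_baseline: float, ctrl_endline: float) -> float:
    """Unadjusted difference-in-differences: the change in the intervention
    group minus the change in the control group. Secular trends common to
    both groups cancel out of this contrast."""
    return (treat_endline - treat_baseline) - (ctrl_endline - ctrl_baseline)

# Hypothetical group means for one binary quality indicator:
effect = did_estimate(treat_baseline=0.40, treat_endline=0.55,
                      ctrl_baseline=0.42, ctrl_endline=0.45)
# A 15-point rise under intervention versus a 3-point rise under control
# yields an estimated effect of about 0.12.
```

The regression version of this contrast (an interaction of group and time indicators) is what allows covariates, fixed effects, and clustered standard errors to be added, as in the study's models.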

Based on the information provided, the study evaluated a maternal healthcare quality improvement intervention in rural Tanzania. The intervention included in-service training, mentorship, supportive supervision, and infrastructure support. However, the study found that the intervention did not result in meaningful improvements in the quality of care. Implementation of the intervention averaged only 39%, and there was no evidence of change across most quality indicators.

Based on these findings, potential innovations to improve access to maternal health could include:

1. Strengthening implementation: Develop strategies to ensure a higher level of implementation of quality improvement interventions, such as improved monitoring and evaluation systems, regular feedback loops, and accountability mechanisms.

2. Provider competence: Focus on improving the competence and skills of healthcare providers through targeted training programs, continuous professional development, and mentorship opportunities.

3. Infrastructure support: Invest in improving the infrastructure of primary care facilities, including ensuring access to basic emergency obstetric and newborn care, adequate equipment, supplies, and medication.

4. Patient-centered care: Emphasize the importance of patient satisfaction and perception of quality by incorporating patient feedback mechanisms, promoting respectful and culturally sensitive care, and involving women in decision-making processes.

5. Community engagement: Involve the community in maternal health initiatives through community health workers, health education programs, and community mobilization activities to increase awareness, demand, and utilization of maternal health services.

6. Integration of services: Explore opportunities for integrating maternal health services with other healthcare services, such as family planning, HIV/AIDS prevention and treatment, and nutrition programs, to improve overall health outcomes for women and their families.

It is important to note that these recommendations are based on the study’s findings and may need to be adapted to the specific context and resources available in different settings.
AI Innovations Description
The study mentioned in the description evaluated a quality improvement intervention aimed at improving maternal healthcare in rural Tanzania. The intervention included in-service training, mentorship, supportive supervision, and infrastructure support. The study found that the intervention had limited success in improving the quality of care provided.

The main outcome measures used in the study were indicators of quality, including provider knowledge test scores, patient satisfaction with delivery care, and various other quality indicators. The results showed that the quality of care was low at baseline, with only minimal improvements observed after the intervention. The intervention was associated with an increase in newborn counseling, but there was no evidence of improvement in other quality indicators.

The study highlighted two main reasons for the lack of significant improvement in quality. First, the implementation of the intervention was not sustained at a high level, with facilities reaching only 39% implementation on average. Second, the intervention may have been ineffective due to targeting primary care clinics with weak starting quality, including poor infrastructure and low provider competence.

Based on these findings, it is recommended that future interventions aimed at improving access to maternal health should focus on addressing the underlying issues of infrastructure and provider competence. Additionally, efforts should be made to ensure sustained implementation of interventions to achieve meaningful improvements in quality of care.
AI Innovations Methodology
The study mentioned in the description aimed to test the effectiveness of a quality improvement intervention in improving the quality of maternal healthcare. The intervention included in-service training, mentorship, supportive supervision, and infrastructure support in 12 primary care facilities in rural Tanzania, while 12 other facilities served as controls. The study measured fidelity with indicators of quality and compared the quality between the intervention and control facilities using a difference-in-differences analysis.

The methodology used in the study involved the randomization of facilities into intervention and control groups. The intervention group received the maternal and newborn health quality improvement intervention, while the control group continued with standard care. The implementation of the intervention started in June 2012 and continued until the spring of 2016.

To assess the effect of the intervention, an implementation index was developed to measure the variation of the intervention across the 12 intervention facilities. This index considered the dose delivered, reach to the intended audience, and dose received for each intervention component. Fidelity was defined as the achievement of the immediate intended effect, which is the improvement in quality.

Various indicators were used to measure the quality of care, including provider knowledge test scores, patient satisfaction with delivery care, reported receipt of services, patients’ perception of quality, and biomarkers related to maternal health. Data were collected through surveys, facility audits, and register abstraction. Difference-in-differences analyses were conducted to compare the change in each quality indicator between the intervention and control facilities from baseline to endline.

The study found that the quality of care was low at baseline, and the intervention did not result in meaningful improvements in quality. The lack of sustained high-level implementation and the targeting of quality improvement interventions at primary care clinics with weak starting quality were identified as possible reasons for the lack of effectiveness.

In summary, the methodology used in the study involved randomization of facilities, implementation of a quality improvement intervention, measurement of fidelity and quality indicators, and statistical analysis to compare the intervention and control groups.
