The impact of continuous quality improvement on coverage of antenatal HIV care tests in rural South Africa: Results of a stepped-wedge cluster-randomised controlled implementation trial

Study Justification:
– The study aimed to assess the effects of continuous quality improvement (CQI) on the quality of antenatal HIV care in rural South Africa.
– Evidence for the effectiveness of CQI in resource-poor settings is limited, making this study important for understanding its impact in such contexts.
– The study focused on two primary endpoints: viral load monitoring and repeat HIV testing, which are critical for the elimination of mother-to-child transmission of HIV and the health of pregnant women living with HIV.
Highlights:
– The study used a stepped-wedge cluster-randomised controlled trial design, comparing CQI to usual standard of antenatal care in 7 nurse-led primary care clinics in rural South Africa.
– The intervention was delivered by trained CQI mentors and included standard CQI tools such as process maps, fishbone diagrams, run charts, Plan-Do-Study-Act cycles, and action learning sessions.
– The study found that CQI significantly increased viral load monitoring but did not improve repeat HIV testing.
– These results suggest that CQI can be effective at increasing the quality of primary care in rural African communities.
Recommendations:
– Policy makers should consider implementing CQI as a routine intervention to improve the quality of primary care in rural African communities.
– Future implementation research should accompany the use of CQI to understand the mechanisms of action and identify factors that support long-term success.
Key Role Players:
– Trained CQI mentors
– Health workers, including nurses and HIV lay counsellors
– Investigators and researchers
– Policy makers and government officials
– Implementing partners and organizations
Cost Items for Planning Recommendations:
– Training and capacity building for CQI mentors and health workers
– CQI tools and materials (process maps, fishbone diagrams, run charts, etc.)
– Action learning sessions and workshops
– Data management and analysis
– Monitoring and evaluation activities
– Communication and dissemination of findings
– Stakeholder engagement and collaboration efforts

The strength of evidence for this abstract is 8 out of 10.
The evidence in the abstract is strong because it is based on a stepped-wedge cluster-randomised controlled trial, which is a rigorous study design. The study provides detailed information on the intervention, methods, and results. However, to improve the evidence, the abstract could include information on the sample size, statistical significance, and effect sizes of the findings.

Background: Evidence for the effectiveness of continuous quality improvement (CQI) in resource-poor settings is very limited. We aimed to establish the effects of CQI on quality of antenatal HIV care in primary care clinics in rural South Africa.

Methods and findings: We conducted a stepped-wedge cluster-randomised controlled trial (RCT) comparing CQI to usual standard of antenatal care (ANC) in 7 nurse-led, public-sector primary care clinics—combined into 6 clusters—over 8 steps and 19 months. Clusters randomly switched from comparator to intervention on pre-specified dates until all had rolled over to the CQI intervention. Investigators and clusters were blinded to randomisation until 2 weeks prior to each step. The intervention was delivered by trained CQI mentors and included standard CQI tools (process maps, fishbone diagrams, run charts, Plan-Do-Study-Act [PDSA] cycles, and action learning sessions). CQI mentors worked with health workers, including nurses and HIV lay counsellors. The mentors used the standard CQI tools flexibly, tailored to local clinic needs. Health workers were the direct recipients of the intervention, whereas the ultimate beneficiaries were pregnant women attending ANC. Our 2 registered primary endpoints were viral load (VL) monitoring (which is critical for elimination of mother-to-child transmission of HIV [eMTCT] and the health of pregnant women living with HIV) and repeat HIV testing (which is necessary to identify and treat women who seroconvert during pregnancy). All pregnant women who attended their first antenatal visit at one of the 7 study clinics and were ≥18 years old at delivery were eligible for endpoint assessment. We performed intention-to-treat (ITT) analyses using modified Poisson generalised linear mixed effects models. We estimated effect sizes with time-step fixed effects and clinic random effects (Model 1). In separate models, we added a nested random clinic–time step interaction term (Model 2) or individual random effects (Model 3). Between 15 July 2015 and 30 January 2017, 2,160 participants with 13,212 ANC visits (intervention n = 6,877, control n = 6,335) were eligible for ITT analysis. No adverse events were reported. Median age at first booking was 25 years (interquartile range [IQR] 21 to 30), and median parity was 1 (IQR 0 to 2). HIV prevalence was 47% (95% CI 42% to 53%). In Model 1, CQI significantly increased VL monitoring (relative risk [RR] 1.38, 95% CI 1.21 to 1.57, p < 0.001) but did not improve repeat HIV testing (RR 1.00, 95% CI 0.88 to 1.13, p = 0.958). These results remained essentially the same in both Model 2 and Model 3. Limitations of our study include that we did not establish impact beyond the duration of the relatively short study period of 19 months, and that transition steps may have been too short to achieve the full potential impact of the CQI intervention.

Conclusions: We found that CQI can be effective at increasing quality of primary care in rural Africa. Policy makers should consider CQI as a routine intervention to boost quality of primary care in rural African communities. Implementation research should accompany future CQI use to elucidate mechanisms of action and to identify factors supporting long-term success.

Details of the Management and Optimisation of Nutrition, Antenatal, Reproductive, Child health & HIV care (MONARCH) implementation project and study have been previously published [30] and are summarised below. The Africa Health Research Institute (AHRI) at Somkhele (previously known as the Africa Centre for Population Health) is located in a rural community in northern KwaZulu-Natal, South Africa. Our CQI intervention was conducted at 7 nurse-led South African National Department of Health (DoH) primary care clinics: 6 were located within the geographic bounds of the AHRI Population Intervention Platform Surveillance Area (PIPSA) South [31], and 1 clinic was located in the market town of Mtubatuba, which is often used by PIPSA residents (Fig 1). Management of the primary care clinics is overseen by Hlabisa Hospital, the local district hospital. HIV prevalence amongst women of reproductive age in this area is approximately 37% [32]. Additional contextual information is described in the Supporting Information: laboratory results workflow (S1 Text), clinic size (S1 Table), and staffing (including lay counsellors; S2 Table). The intervention was delivered by an external CQI team of mentors (including 2 isiZulu-speaking nurses) from the Centre for Rural Health (CRH) at the University of KwaZulu-Natal, who travelled to the study community (hereafter referred to as the CRH team). The mentors were closely supported by an improvement advisor (consultant obstetrician), a scientific advisor, and a data manager.

Fig 1 caption: The PIPSA is depicted in white and covers 438 km². Primary care clinics and the local district hospital, Hlabisa Hospital, are marked with a red cross. Source credit: Sabelo Ntuli, AHRI Research Data Management. AHRI, Africa Health Research Institute; PIPSA, AHRI Population Intervention Platform Surveillance Area.
We conducted a stepped-wedge cluster RCT (www.clinicaltrials.gov; NCT02626351) from 15 July 2015 to 30 January 2017. Each clinic formed a cluster except for the 2 smallest clinics, which were merged into one cluster. After a 2-month baseline data collection period, the first cluster rolled over to the intervention on 29 September 2015. Each subsequent cluster rolled over from control to intervention in random order every 2 months (Fig 2). Primary care clinics provided pre-intervention data until each rolled over to the CQI intervention in random order. All clinics provided data continuously throughout the study period. Baseline data collection across all clinics occurred from 15 July 2015 to 28 September 2015 (Step 0). As ANC data were captured retrospectively at delivery, the total observation period exceeded the data collection period by approximately 6 months. Trial registration occurred after the baseline and the first step of this 8-step stepped-wedge RCT, on 10 December 2015. The reason for this timing was that it became clear during the baseline that a rigorous scientific evaluation of the CQI intervention would be feasible and desirable for both government and implementing partners (S1 Text). The stepped-wedge design was selected for both pragmatic and ethical reasons [30]. The description of our results follows the 2018 Consolidated Standards of Reporting Trials (CONSORT) extension for stepped-wedge RCTs [33] (S1 CONSORT Checklist).

Fig 2 caption: Width of each step is proportional to the number of months under observation. The baseline period (pre-intervention, depicted in light blue) contributed approximately 8 months, and the endline (Step 7) contributed approximately 4.5 months [30]. Intervention steps (intensive CQI phase, 2-month step) are depicted in medium blue. ANC, antenatal care; CQI, continuous quality improvement.
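The rollover scheme described above can be sketched as a cluster-by-step exposure matrix, with 6 clusters switching one per step between the baseline (Step 0) and the endline (Step 7). This is an illustrative sketch only: the ordering shown is sequential, whereas the actual rollover order was randomised.

```python
# Sketch of a stepped-wedge exposure schedule: 6 clusters, 8 steps.
# schedule[c][t] = 1 if cluster c is under the CQI intervention at step t.
# Step 0 is baseline (all control); by Step 7 all clusters are exposed.
n_clusters, n_steps = 6, 8

schedule = [[1 if t > c else 0 for t in range(n_steps)]
            for c in range(n_clusters)]

for c, row in enumerate(schedule):
    print(f"cluster {c + 1}: {row}")
```

Reading the output row by row reproduces the "wedge": each successive cluster contributes more control steps and fewer intervention steps, which is what identifies the intervention effect separately from secular time trends.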
We use the Template for Intervention Description and Replication (TIDieR) to describe the intervention in detail (Table 1) [34]. Briefly, the intervention focused on developing the capacity of local ANC health workers in study clinics and aimed to improve implementation of the national eMTCT guidelines. The intervention was based on the Institute for Healthcare Improvement (IHI) breakthrough collaborative CQI model [35]. Clinical processes and resources were first ascertained during situational analyses conducted in the 2-week lead-up to intervention rollover. CQI tools provided a structured approach to improving process change with clearly defined goals and activities and were implemented flexibly based on need (Table 1). Patient care pathways in the clinic were documented using process maps [36] to identify areas for improvement (e.g., filing of VL results). Barriers and enablers of target endpoints (e.g., VL monitoring) were identified with fishbone diagrams [37], providing the opportunity for comprehensive, clinic-wide improvement. Improvement activities were reviewed using iterative Plan-Do-Study-Act (PDSA) cycles during which “one learns from taking action” in real time (tests of change), unlike awaiting the results of a formal research study [38]. Run charts—outcome time trends plotted during the course of improvement activities—provided visual feedback on whether any changes were likely due to the intervention [39]. As part of the IHI breakthrough collaborative model, the CRH CQI mentors also conducted action learning sessions to consolidate learning, share experiences between healthcare facilities, and motivate collaboration. 
Table 1 abbreviations: AHRI, Africa Health Research Institute; ANC, antenatal care; CQI, continuous quality improvement; CRH, Centre for Rural Health; DoH, South African National Department of Health; eMTCT, elimination of mother-to-child transmission of HIV; HIV PCR, polymerase chain reaction (nucleic acid amplification test for detecting HIV infection); M&E, monitoring and evaluation; MONARCH, Management and Optimisation of Nutrition, Antenatal, Reproductive, Child health & HIV care; MTCT, mother-to-child transmission of HIV; NPT, Normalisation Process Theory; PDSA, Plan-Do-Study-Act; TIDieR, Template for Intervention Description and Replication; TQM, Total Quality Management; UKZN, University of KwaZulu-Natal; VL, HIV viral load.

Intervention delivery according to the stepped-wedge study design is described in Table 1. The CRH team delivered CQI intensively to each cluster during the 2-month intervention step and then continued with the less intensive intervention during the maintenance phase (Table 1, Fig 2). They delivered a standard “dose” of approximately 19 visits during the intervention phase (2–3 visits per week) and continued with approximately monthly visits during the maintenance phase (Fig 2) for ongoing support and mentorship. The CRH mentors held action learning sessions at the end of each intervention step. In typical CQI delivery, health workers from all clinics in an intervention community would concurrently engage in the intervention and attend all action learning sessions. In our CQI delivery, health workers participated in the intervention and the action learning sessions only during the phase when their clinic was in the intervention arm of our trial. Table 1 further describes materials; procedures; how and where the intervention was delivered (including duration and timing of CQI visits); and how we measured “dose,” “reach,” and fidelity of the intervention [41].
During control steps of the study design, health workers continued providing antenatal and postnatal care as usually implemented within routinely available resources. Clusters were defined as described earlier. ANC health workers in clusters participated in CQI based on availability and their ability to commit to CQI, ideally for the entire study period. The CQI mentors tried to recruit health workers in leadership roles (e.g., operational managers, professional nurses) to clinic CQI teams to increase the likelihood that CQI activities would continue after the end of the intervention. For the primary endpoints, all women aged ≥18 years were eligible for recruitment at delivery if they were resident in the PIPSA area during pregnancy and/or had ever attended any of the 7 study clinics for ANC in pregnancy. We have described our randomisation procedure in detail elsewhere [30]. Briefly, the unit of randomisation was the cluster, balanced by patient volume. A senior biostatistician external to the study team performed the randomisation of all clusters during the baseline and before the first intervention step. Investigators and healthcare workers in the clusters were blinded to randomisation until the AHRI Chief Information Officer revealed each randomised cluster to the AHRI study team 2 weeks prior to the scheduled intervention rollover date for each cluster. The pre-specified registered primary endpoints were indicators of quality of care in HIV-related ANC: (i) VL monitoring among pregnant women living with HIV and (ii) repeat HIV testing among pregnant women not living with HIV. We report the intervention impact on both primary endpoints in this manuscript. We will analyse and report the secondary endpoints elsewhere. The data on our primary endpoints were sourced entirely from routine patient medical records (maternity case records) [30], which were photographed at delivery.
All clusters provided pre-CQI, CQI implementation, and post-CQI data continuously throughout the study. As the maternity case records were first accessed after delivery, ANC data were captured retrospectively—this extended the baseline observation period by an additional 6 months, resulting in a total data collection period of 19 months and a total observation period of 25 months. The period after all clusters had received the CQI intervention was 4.5 months (Fig 2). We used a Research Electronic Data Capture (REDCap) study database for data entry [42]. We collected outcome data continuously over the study at all 7 primary care clinics participating in this study. We also collected outcome data at Hlabisa Hospital maternity ward, because most women living in the study subdistrict deliver at this hospital. We also conducted a process evaluation to better understand intervention delivery and explain our primary findings. For this, we sourced field notes and reports by the CRH team collated every 2 months. The reports described actual visit dates and type, results of the root-cause analyses, the improvement interventions (including PDSA cycles), successes and challenges, as well as other observations, including impressions of health worker receptivity to CQI (Table 1). We further conducted semi-structured interviews with consenting health workers on their experiences of implementing CQI. We describe in detail the methods, data, and results of the process evaluation in an upcoming scientific publication. As we describe in our protocol paper [30], we assumed for our baseline power calculation—informed by local routine data—that without CQI 40% of all pregnant women living with HIV would receive a test for VL monitoring and 65% of pregnant women not living with HIV would receive a repeat HIV test. We further assumed that half of all pregnant women would be HIV positive and that pregnant women would make 3 ANC visits. 
We assumed an intracluster correlation coefficient (ICC) of 0.10, which is a conservative assumption compared to other ICCs measured in similar settings [43]. We assumed missing data from 15% of enrolled women. If we enrolled a total of 1,260 pregnant women (i.e., 630 women living with HIV and 630 women not living with HIV), we estimated 80% power to detect at least a 15-percentage-point increase in our 2 primary endpoints at the 5% significance level [30]. In discussion with local stakeholders, we identified this minimum detectable difference over the course of a pregnancy as relevant for health policy and clinical practice. We performed intention-to-treat (ITT) analyses based on the clinic attended at the first antenatal booking visit—individuals declared their “intention” to attend that same facility for the remainder of pregnancy. Although it is well established that the AHRI surveillance population is mobile [31], ITT assumes exposure to a single clinic for the entire duration of pregnancy regardless of actual attendance elsewhere. All participants were assigned CQI exposure status at each ANC visit (by the actual date of that visit) according to the exposure status of their clinic at that time. Participants whose assigned clinic rolled over to CQI during their ANC thus had 1 or more initial ANC visits that were CQI unexposed and 1 or more later visits that were CQI exposed. The beginning of each step (CQI rollover date) was defined as the date of the first actual CQI intervention visit in the randomised cluster. The binary VL monitoring endpoint was measured in pregnant women living with HIV and defined as a documented VL test performed at a particular visit. Each ANC visit was eligible for an endpoint assessment on or after the first documented HIV-positive status irrespective of whether ART was initiated or continued in pregnancy and irrespective of actual VL results. 
This definition accounts for real-life imperfections in adherence to guidelines and documentation of ART prescriptions. Women who seroconverted from HIV-negative to HIV-positive status during the study were not included in the analysis of the VL monitoring endpoint. The binary repeat HIV testing endpoint was measured in pregnant women not living with HIV and defined as a subsequent documented HIV test at a particular visit. Each ANC visit following the first documented negative HIV test was eligible for assessment of this endpoint. ANC visits among women who subsequently tested HIV positive were not eligible for the repeat HIV testing endpoint after the first documented HIV-positive test. We did not restrict our endpoint definitions by visit number or gestation, to allow for real-life imperfections in adherence to guidelines. The 2018 extension of the CONSORT statement for stepped-wedge cluster RCTs states that “in addition to reporting a relative measure of the effect of the intervention, it can be helpful to report an absolute measure of the effect” [33]. The reason for this recommendation is that “relative measures of the effects are often more stable across different populations” [33]. Relative measures are therefore more useful than absolute measures for policy makers considering transferring an intervention from one context to another. In contrast, “absolute measures of effects are more easily understood” [33]. We follow these recommendations and report and interpret both the relative and absolute effect sizes as our primary analyses. For the primary analysis, we estimated the relative effect sizes using modified Poisson mixed effects generalised linear regression models. Modified Poisson regression has become a standard for the analysis of RCTs with binary outcomes [44,45] because it has advantages over logistic and log binomial regression models [44,46].
One advantage of modified Poisson regression is that it directly generates risk ratios rather than the odds ratios that logistic regression generates. Risk ratios are easier to interpret than odds ratios and, unlike odds ratios, are collapsible [47–49]. Another advantage of modified Poisson regression is that it does not suffer from the convergence problems that commonly arise with another approach to estimating risk ratios, log-binomial regression [50–52]. In addition, modified Poisson regression models are more robust to model misspecification than log-binomial regression models [53]. A disadvantage of modified Poisson regression is that it can produce predicted probabilities greater than unity. However, several simulation studies have shown that modified Poisson regression provides risk ratio estimates equivalent to regression models that cannot produce such predicted probabilities, such as log-binomial regression [50,54–56]. Modified Poisson regression models are thus a good choice for estimating risk ratios in RCTs [45]. To measure absolute effect sizes, we used mixed effects linear probability regression models. We used 3 models to estimate the effect of the CQI intervention. First, we used the standard Hussey and Hughes model, which includes time-step fixed effects and clinic random effects (Model 1) [57]. Second, we used an extension to this model with a nested random clinic–time step interaction term (Model 2). This extension is recommended by Hemming and colleagues [58], because it allows secular time trends to vary randomly by clinic. Third, we extended the standard Hussey and Hughes model with individual random effects nested within clinic random effects (Model 3). This model accounts not only for clustering of outcomes by clinic but also for clustering of outcomes within individual women across time steps. In all models, we used cluster robust standard errors to further adjust for clustering and model misspecification [44,45]. 
In addition to the per-visit effect sizes described earlier, we also computed the cumulative absolute probabilities of attaining our endpoints. For this purpose, we used the per-visit absolute effect sizes measured in each model and applied the exponential formula to estimate the cumulative probabilities across the median number of visits during a pregnancy [59]. For both of our primary analyses—estimating relative and absolute risk—we also ran regressions with additional control variables. We added (i) maternal age and parity; (ii) gestation at each ANC visit and total number of ANC visits attended in pregnancy; and (iii) maternal age, parity, gestation at each ANC visit, and total number of ANC visits. We measured CQI effect heterogeneity by duration of CQI exposure, using the same mixed effects regression models as in the primary analyses but replacing the fixed effect for overall CQI exposure with fixed effects for CQI exposure for each time step since rollover to CQI. We used Stata version 15.0 (StataCorp LLC, College Station, TX) for all statistical analyses. Ethics approval for the study was obtained from the University of KwaZulu-Natal Biomedical Research Ethics Committee (reference BE209/14). The ethics approval included a waiver of the requirement for individual consent to access routine clinical data from maternity case records, excluding labour and delivery clinical notes. Engagement meetings were held with subdistrict and district-level DoH partners prior to study commencement to share our study objectives and introduce the intervention. Standard DoH approvals for commencing the study were also obtained as part of a Memorandum of Understanding between AHRI and the DoH. Following the analyses of our results and prior to the publication of this paper, we held engagement workshops with the sub-district and district-level DoH partners, as well as with the primary funders of this study (the Delegation of the European Commission to South Africa). 
During these workshops, we jointly interpreted our findings and derived policy recommendations. Although this is a low-risk health systems implementation trial, an independent Data Safety and Monitoring Board (DSMB) annually reviewed study progress. No adverse event data were formally collected, because the intervention targeted health workers in clinical facilities.
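The per-visit-to-cumulative conversion mentioned in the methods can be sketched with the complement rule, assuming independent visits; this is a simplified stand-in, and the study's exact exponential formula is given in its reference [59].

```python
# Sketch: convert a per-visit endpoint probability into a cumulative
# probability over a pregnancy, assuming visits are independent.
def cumulative_probability(p_per_visit: float, n_visits: int) -> float:
    """P(endpoint attained at least once across n_visits independent visits)."""
    return 1.0 - (1.0 - p_per_visit) ** n_visits

# e.g., a 40% per-visit probability over 3 ANC visits:
print(round(cumulative_probability(0.40, 3), 3))  # 1 - 0.6**3 = 0.784
```

This shows why modest per-visit gains compound over a pregnancy: the same absolute per-visit increase translates into a larger cumulative difference across the median number of ANC visits.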

The innovation described in the study is the implementation of continuous quality improvement (CQI) in primary care clinics in rural South Africa to improve the quality of antenatal HIV care. The CQI intervention included the use of standard CQI tools such as process maps, fishbone diagrams, run charts, Plan-Do-Study-Act (PDSA) cycles, and action learning sessions. Trained CQI mentors worked with health workers, including nurses and HIV lay counsellors, to tailor the intervention to local clinic needs. The study found that CQI significantly increased viral load (VL) monitoring, which is critical for the elimination of mother-to-child transmission of HIV, but did not improve repeat HIV testing. The results suggest that CQI can be effective at increasing the quality of primary care in rural African communities and should be considered as a routine intervention to boost the quality of maternal health care.
AI Innovations Description
The recommendation from the study is to implement continuous quality improvement (CQI) interventions in primary care clinics to improve the quality of antenatal HIV care in resource-poor settings. The CQI intervention includes the use of standard CQI tools such as process maps, fishbone diagrams, run charts, Plan-Do-Study-Act (PDSA) cycles, and action learning sessions. Trained CQI mentors work with health workers, including nurses and HIV lay counsellors, to tailor the intervention to the specific needs of each clinic. The study found that CQI significantly increased viral load (VL) monitoring, which is critical for the elimination of mother-to-child transmission of HIV (eMTCT) and the health of pregnant women living with HIV. However, it did not improve repeat HIV testing. The results suggest that CQI can be an effective intervention to improve the quality of primary care in rural African communities, and policymakers should consider implementing CQI as a routine intervention to boost the quality of maternal health care. Further implementation research is needed to understand the mechanisms of action and identify factors supporting long-term success.
AI Innovations Methodology
Based on the provided information, the study conducted a stepped-wedge cluster-randomised controlled trial to assess the effects of continuous quality improvement (CQI) on the quality of antenatal HIV care in primary care clinics in rural South Africa. The CQI intervention was delivered by trained mentors using standard CQI tools and aimed to improve the implementation of national guidelines for the elimination of mother-to-child transmission of HIV (eMTCT). The primary endpoints of the study were viral load (VL) monitoring and repeat HIV testing. The study found that CQI significantly increased VL monitoring but did not improve repeat HIV testing.

To simulate the impact of recommendations on improving access to maternal health, a methodology could be developed based on the study’s approach. The methodology could include the following steps:

1. Identify the specific recommendations for improving access to maternal health. These recommendations could be based on evidence-based practices, guidelines, or innovative approaches.

2. Define the primary endpoints or indicators that will be used to measure the impact of the recommendations on improving access to maternal health. These endpoints could include metrics such as the number of antenatal care visits, the percentage of pregnant women receiving essential tests or interventions, or the reduction in maternal mortality rates.

3. Design a stepped-wedge cluster-randomized controlled trial or another appropriate study design to evaluate the impact of the recommendations. This design allows for the gradual implementation of the recommendations across different clusters or groups, which can help assess the effectiveness of the interventions over time.

4. Develop a data collection plan to capture relevant data on the primary endpoints before, during, and after the implementation of the recommendations. This could involve collecting data from medical records, surveys, or other sources.

5. Analyze the data using appropriate statistical methods, such as modified Poisson regression or mixed effects models, to estimate the relative and absolute effect sizes of the recommendations on the primary endpoints. Consider controlling for potential confounding variables, such as maternal age, parity, and gestational age.

6. Calculate the cumulative probabilities of attaining the primary endpoints over the course of a pregnancy using the per-visit effect sizes measured in the analysis.

7. Assess the heterogeneity of the effects of the recommendations by duration of exposure, using regression models with fixed effects for each time step since the implementation of the recommendations.

8. Conduct a process evaluation to better understand the delivery of the recommendations and identify factors that may influence their effectiveness. This could involve collecting qualitative data through interviews or focus groups with healthcare providers and other stakeholders.

9. Engage with relevant stakeholders, such as policymakers and healthcare providers, to interpret the findings and derive policy recommendations based on the results of the study.

10. Consider conducting additional implementation research to further elucidate the mechanisms of action and identify factors that support the long-term success of the recommendations.

By following this methodology, it would be possible to simulate the impact of recommendations on improving access to maternal health and provide evidence-based insights for policymakers and healthcare providers.
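Step 3's gradual rollout implies a simple exposure-coding rule for the data collected in step 4: each visit is classified as exposed or unexposed by comparing its date with its cluster's rollover date, mirroring the intention-to-treat coding in the source study. In the sketch below the clinic names and the second rollover date are hypothetical; 29 September 2015 is the first rollover date reported in the study.

```python
# Sketch of per-visit exposure assignment under a stepped-wedge rollout.
from datetime import date

# Hypothetical cluster rollover dates (clinic_A uses the study's first
# reported rollover date; clinic_B's date is invented for illustration).
rollover = {
    "clinic_A": date(2015, 9, 29),
    "clinic_B": date(2015, 11, 30),
}

def exposure_status(clinic: str, visit_date: date) -> int:
    """1 if the clinic had rolled over to the intervention by this visit."""
    return int(visit_date >= rollover[clinic])

print(exposure_status("clinic_A", date(2015, 10, 15)))  # 1: after rollover
print(exposure_status("clinic_B", date(2015, 10, 15)))  # 0: before rollover
```

The same woman can therefore contribute both unexposed early visits and exposed later visits if her clinic rolls over mid-pregnancy, which is exactly how the source study assigned CQI exposure per ANC visit.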
