Introduction
Maternity waiting homes (MWHs) aim to improve access to facility delivery in rural areas. However, there is limited rigorous evidence of their effectiveness. Using formative research, we developed an MWH intervention model with three components: infrastructure, management and linkage to services. This protocol describes a study to measure the impact of the MWH model on facility delivery among women living farthest (≥10 km) from their designated health facility in rural Zambia. This study will generate key new evidence to inform decision-making for MWH policy in Zambia and globally.

Methods and analysis
We are conducting a mixed-methods quasiexperimental impact evaluation of the MWH model using a controlled before-and-after design in 40 health facility clusters. Clusters were assigned to the intervention or control group using two methods: 20 clusters were randomly assigned using a matched-pair design; the other 20 were assigned without randomisation due to local political constraints. Overall, 20 study clusters receive the MWH model intervention while 20 control clusters continue to implement the ‘standard of care’ for waiting mothers. We recruit a repeated cross section of 2400 randomly sampled recently delivered women at each round, baseline (2016) and endline (2018); all participants are administered a household survey, and a 10% subsample also participates in an in-depth interview. We will calculate descriptive statistics and adjusted ORs; qualitative data will be analysed using content analysis. The primary outcome is the probability of delivery at a health facility; secondary outcomes include utilisation of MWHs and maternal and neonatal health outcomes.

Ethics and dissemination
Ethical approvals were obtained from the Boston University Institutional Review Board (IRB), the University of Michigan IRB (deidentified data only) and the ERES Converge IRB in Zambia. Written informed consent is obtained prior to data collection. Results will be disseminated to key stakeholders in Zambia, then through open-access journals, websites and international conferences.

Trial registration number NCT02620436; Pre-results.
The primary research question is: 1. What is the impact of the MWH model on the probability of facility delivery among mothers living more than 10 km from the health facility? Secondary questions include:

This study began in March 2016 and will be completed in December 2018. The intervention and comparison sites are located in the primarily rural Zambian districts of Choma, Kalomo and Pemba in Southern Province; Nyimba and Lundazi in Eastern Province; and Mansa and Chembe in Luapula Province (figure 2).

Figure 2: Map of the Maternity Home Alliance intervention and control study sites by partner. BU/RTC, Boston University and Right to Care Zambia; UM, University of Michigan.

Choma district has a population of 247 860 and a population density of 34/km², with 68.7% of its population being rural. Kalomo district has a population of 258 570 and a population density of 17.2/km², with 91.8% of its population being rural.27 Nyimba district has a population of 85 025 and a population density of 8.1/km², with 91% of its population being rural. Lundazi district has a population of 323 870 and a population density of 23/km², with 95.1% of its population being rural.28 Mansa district has a population of 228 392 and a population density of 23.1/km², with 61.9% of its population being rural.29

This study employs a quasiexperimental controlled before-and-after (CBA) design with a total of 40 study clusters: 20 intervention and 20 control. Clusters consist of health facilities and their catchment households. Intervention clusters are receiving the core MWH model, inclusive of newly constructed homes with elements from three domains: (1) infrastructure, equipment and supplies; (2) policies, management and finances; and (3) linkages and services, detailed in the intervention section of the protocol. Control clusters are implementing the ‘standard of care’ for waiting mothers in Zambia.
Because no national policy exists, the standard of care is facility driven and varies widely. Some standard-of-care facilities have no designated space for a mother to wait; others have no MWH but provide a designated space for waiting mothers within the clinic; and a small number have an existing MWH-like structure, but of highly variable quality.13 Because the intervention aims to generate demand for health facility delivery, it is critical that facilities are capable of managing basic emergency obstetric and neonatal complications (BEmONC). Because of inconsistencies in available secondary data sources across the different districts, we established supplemental criteria that could be drawn from the available sources.30 31 Clusters were eligible for inclusion in the study if the health facility was located ≤2 hours driving time from a comprehensive emergency obstetric and neonatal care (CEmONC) capable referral facility, performed a minimum of 150 deliveries per year and met at least one of the two sets of conditions below:

Eligibility condition set 1:

Eligibility condition set 2:

There are a total of 40 clusters (20 intervention, 20 comparison) in this study (table 1). Each implementing partner used different methods to select and assign clusters to study arms. BU/RTC-supported areas had a total of 36 eligible health facilities that were located ≤2 hours driving time from a referral facility and performed a minimum of 150 deliveries per year. Of those, 22 (61%) met one of the two eligibility conditions. This partner selected the 20 facilities farthest from referral, created 10 pairs matched on annual delivery volume and distance, then randomised the matched pairs to intervention or control using the RAND function in Microsoft Excel. All eligible sites were included regardless of the presence of existing infrastructure or space that functioned as an MWH. Control sites with existing infrastructure or space are considered standard of care.
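The pairwise randomisation described above can be reproduced with any uniform random number generator. The sketch below, in Python rather than Excel, assigns one facility in each matched pair to each arm with a fair coin flip; the facility IDs are hypothetical, and matching on delivery volume and distance is assumed to have happened beforehand.

```python
import random

def randomise_matched_pairs(pairs, seed=None):
    """Assign one facility in each matched pair to intervention and the
    other to control via a fair coin flip per pair."""
    rng = random.Random(seed)
    assignment = {}
    for a, b in pairs:
        # One draw per pair keeps the arms balanced on the matching variables.
        if rng.random() < 0.5:
            assignment[a], assignment[b] = "intervention", "control"
        else:
            assignment[a], assignment[b] = "control", "intervention"
    return assignment

# Hypothetical facility IDs for 10 matched pairs (illustrative only).
pairs = [(f"facility_{2 * i}", f"facility_{2 * i + 1}") for i in range(10)]
arms = randomise_matched_pairs(pairs, seed=1)
```

By construction, every pair contributes exactly one facility to each arm, so the design always yields a 10/10 split.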
Sites with existing MWH infrastructure were, however, generally not structurally sound.

Quasiexperimental study design to evaluate the impact of MWHs in rural Zambia. MWH, maternity waiting home; NR, not randomised; O, observations at baseline (O1, in 2016) and endline (O2, in 2018) at intervention (X) and comparison (_) sites; R, cluster randomised; X, minimum core maternity home (see above).

Africare/UM had a total of 29 eligible health facilities that were located ≤2 hours driving time from a referral facility and performed a minimum of 150 deliveries per year. Of those, 22 (76%) met one of the two sets of eligibility conditions. Africare/UM was unable to randomly allocate sites to a study arm due to local political constraints, as the Ministry of Health feared community fatigue given the large number of organisations implementing projects and conducting research. They instead worked with the Ministry of Health to identify 10 intervention sites using the same eligibility criteria. They then selected comparison sites matched to intervention sites on annual delivery volume and distance to a referral hospital. Sites with existing infrastructure that functioned as an MWH were not considered as options for comparison sites. After selecting sites, both partners constructed the core MWH model at each of the 20 intervention sites.

Population data are being collected from two main sources: household surveys (HHS) and in-depth interviews (IDI). Baseline data collection occurred in early 2016, prior to the implementation of the MWH model in intervention clusters; endline data collection will occur in late 2018, after an 18-month intervention period. The HHS is administered to a sample of 2400 recently delivered women (eligibility criteria described below) residing in intervention and control clusters. In the case of maternal death, the household head or senior woman was interviewed as a proxy participant.
The HHS captures information on the domains and data fields seen in table 2. The HHS was pretested among a sample of 50 participants representing all the major local languages. At baseline, only small adjustments were made in response to the pretest, primarily changing formal translations into the vernacular.

Table 2: Summary table of data fields collected from the household survey. ART, antiretroviral therapy; CEmONC, comprehensive emergency obstetric and neonatal care; C-section, caesarean section; MWH, maternity waiting home; PMTCT, prevention of mother-to-child transmission of HIV.

IDIs are conducted among a subsample of 240 HHS participants in order to gain a deeper understanding of community awareness, perceptions and experiences. Because the seven districts are spread out and culturally different, we wanted to ensure we reached saturation or predictability in each district to better explore context with the qualitative data.32 Consequently, we planned to conduct a large number of IDIs to ensure sufficient coverage of different populations to provide insight into the quantitative survey findings. IDI content builds on themes captured in the HHS and includes perceptions of labour and delivery practices, barriers to accessing care, knowledge and awareness of MWHs, sources of knowledge of MWHs, perceptions of the quality of maternity homes (safety, comfort, management and services), perceptions of MWH ownership, perceptions of the health facility and expenses incurred for the last delivery. The population-based approach captures the experiences of women who used the facility in their catchment, those who used other facilities and those who did not access a facility for delivery, allowing us to more accurately estimate the impact of the MWH model intervention among women living farthest from the health facility in an intention-to-treat analysis.
To estimate the impact of the MWH model based on an intention-to-treat analysis, we aim to select a representative sample of women in our sampling frame who delivered a baby in the past 12 months, irrespective of their place of delivery or their use of an MWH. With this strategy, we will also be able to explore the relationship between use of the MWH and location of delivery. As such, we are recruiting a repeated cross section of 2400 households at each round of the survey (approximately 60 households per cluster): 1200 from intervention sites and 1200 from control sites at both baseline (completed in 2016) and endline (planned for 2018), for a total study sample of 4800 households (table 3).

Table 3: Total sample size for evaluation. *In-depth interviews (IDI) are a subset of the total household survey population selected for more in-depth information and are therefore not factored in as additional human subject participants in the total sample size for this study.

After accounting for the clustered sampling design (intracluster correlation coefficient estimated at 0.04 based on previous work33–35), and assuming an alpha of 0.05, this sample will provide us with 80% power to detect a minimum 10 percentage point difference in the anticipated impact of the MWH intervention on the primary outcome of facility delivery, a programmatically meaningful difference. We recruited a sample of 240 women for the IDIs (randomly selecting 10% of the household sample) at baseline, and will recruit another 240 at endline. For the purposes of this evaluation, a household is defined as a group of people who regularly cook together. Inclusion criteria for the HHS are:

To select a sample representative of women living at least 10 km from their health facility, we employ multistage random sampling procedures (figure 3).
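The power calculation above can be reproduced with the standard two-proportion sample size formula inflated by the design effect for cluster sampling, DEFF = 1 + (m − 1) × ICC. The sketch below uses the stated cluster size (about 60 households), ICC (0.04), alpha (0.05) and power (80%); the assumed baseline facility-delivery proportion of 60% is purely illustrative, since the protocol does not state the value used.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80, cluster_size=60, icc=0.04):
    """Per-arm sample size for detecting p1 vs p2 with a two-sided test,
    inflated by the design effect DEFF = 1 + (cluster_size - 1) * ICC."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    n_srs = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_srs * deff)

deff = 1 + (60 - 1) * 0.04          # design effect of 3.36 at ICC = 0.04
# Illustrative baseline of 60% facility delivery rising to 70% (10 points).
required = n_per_arm(0.60, 0.70)
```

Under these assumed proportions, the required per-arm sample lands just under the 1200 households per arm recruited at each round, consistent with the stated 80% power.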
We begin the first stage of sampling by visiting every village within the catchment area of each study site, informing the local village leader of the purpose of the study and taking the global positioning system (GPS) coordinates from the approximate geographical ‘centre’ of the village. We input these GPS coordinates into ArcGIS Online (Esri, Redlands, CA) and use the line creation tool to draw the most direct route, along the roads and paths visible on the World Imagery basemap, between each village centre and its associated health facility. We then use this network of roads to calculate the distance of each village to the health facility and develop a sampling frame of all villages within each catchment area located more than 10 km from the health facility (rounding up from 9.5 km). We then randomly select a sample of 10 villages from each catchment area with probability proportional to population size. We list every eligible village within a catchment area in Microsoft Excel along with the total population of the village. We assign a series of numbers to each village, corresponding to the population size (ie, if village 1 had 30 people, 1–30; village 2 had 20 inhabitants, 31–50), and use the random number generator function to select the villages in each catchment area.

Figure 3: Multistage random sampling strategy for baseline and endline. CEmONC, comprehensive emergency obstetric and neonatal care; GPS, global positioning system; HHS, household survey; IDI, in-depth interview.

Second, we work with community volunteers and village leaders to list all households within the selected villages that have a woman who delivered in the last year. We randomly order the households by rolling a die twice, first for a random start and then for a random skip, until all households are ordered. We visit each household in that order and confirm their eligibility for study participation. We continue down the list until six eligible households in each village are identified.
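The cumulative number-range method for probability-proportional-to-size selection described above can be sketched as follows. The village names and populations are hypothetical, and duplicate draws are simply re-drawn until 10 distinct villages are selected.

```python
import random

def pps_sample(populations, k, seed=None):
    """Select k distinct villages with probability proportional to size using
    cumulative number ranges: a village with population p 'owns' p integers."""
    rng = random.Random(seed)
    names = list(populations)
    bounds, total = [], 0
    for name in names:                      # e.g. pop 30 -> 1-30, pop 20 -> 31-50
        total += populations[name]
        bounds.append(total)
    chosen = []
    while len(chosen) < k:
        draw = rng.randint(1, total)        # the 'random number generator' step
        for name, upper in zip(names, bounds):
            if draw <= upper:
                if name not in chosen:      # re-draw duplicates until k distinct
                    chosen.append(name)
                break
    return chosen

# Hypothetical village populations for one catchment area (not study data).
catchment = {f"village_{i + 1}": pop
             for i, pop in enumerate([30, 20, 120, 45, 80, 60, 15, 90, 50, 70, 25, 40])}
selected = pps_sample(catchment, k=10, seed=42)
```

Because each village's chance of selection on any draw equals its share of the total population, larger villages are proportionally more likely to enter the sample, which is exactly what the Excel number-range procedure achieves.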
We select additional villages and additional households if necessary to reach our sample of 2400 households per round. This process assumes that the health facility staff are able to accurately and completely identify all villages within their catchment area. The study team and community volunteers introduce the study to potential participants and request permission from the household head or most senior woman in the household to screen for eligibility. If household eligibility is confirmed, the study team proceeds with the informed voluntary consent process with the household head or senior woman. Once informed consent is obtained and documented from the household head or senior woman, the enumerator records the geolocation of the household and commences the interview or schedules a later appointment. The household head or senior woman responds to the first part of the survey for approximately 15 min, enumerating all of the people in the household in a table that captures demographics as well as recent deliveries and delivery outcomes. On completion of the household demographics and enumeration, an eligible woman is selected to respond to the remainder of the survey. If more than one woman in the household has delivered a baby in the past 12 months, the electronic data capture system is programmed to randomly select one eligible woman to respond to the remainder of the survey. The selected woman is then consented separately, enrols in the study and completes the HHS in a private space where she feels comfortable. Completion of the HHS takes approximately 45 min. Of the participating women, 10% are randomly selected to participate in a 30 min IDI immediately following the survey. IDI participants can take a short break after the HHS, or reschedule if that is more convenient.
The household-level sampling procedures described here have been conducted at baseline (2016) and will be conducted at endline (2018) with a new cross-sectional sample of households and women within the households. The same households are not followed over time. The development of the research question and outcome measures was informed by key stakeholders and patients’ experience and preferences derived from free list responses, key informant interviews and focus group discussions conducted during the formative evaluation.24–26 Input from key stakeholders and community members helped to ensure that the intervention would be responsive to community standards of acceptability and a feasible option to increase facility deliveries. Patients were not involved in study design, recruitment and/or conduct of the trial. Given the nature of the intervention, there was limited potential burden on patients, and therefore the burden of the randomised controlled trial was not assessed by the patients. The primary audience for this evaluation is the Government of Zambia, particularly the Ministry of Health, Ministry of Community Development and the Ministry of Chiefs and Traditional Affairs, which will use the results to inform the development of maternal and child health strategies and policies in Zambia. We have disseminated the baseline findings to key stakeholders internal to Zambia and will disseminate the full study findings after endline. Many of the findings will likely be of broader interest throughout the region and globally where maternal mortality is high, resources are low and access to facility-based delivery remains an issue. As such, results of this evaluation will be disseminated as widely as possible through open-access journals, websites and international conferences. 
At baseline and endline, a local team of enumerators literate in the appropriate local language(s) and in English are trained in qualitative and quantitative research methods and human subjects’ protection. Surveys are designed in SurveyCTO Collect software (V.2.212; Dobility) and are captured electronically using encrypted tablets. The IDIs are digitally captured on audio recorders. Enumerators explain the tablet system to all participants and explain the digital audio recorders to those selected for IDIs.

Several checks ensure the quality of the collected survey data. First, enumerators participate in an extensive 5-day training. Second, the enumerators are overseen by data collection team leads with greater experience in data collection fieldwork. Team leads are overseen by a field supervisor. Team leads and the field supervisors review surveys for accuracy and completeness nightly. Third, field supervisors randomly select a 5% subsample of households to be audited; the auditor revisits these households and repeats a subset of survey questions, which are checked for reliability. Fourth, the field supervisors conduct a short nightly debrief with the data leads, who each oversee three other enumerators and are responsible for conducting the IDIs. Debriefs cover the following topics: field challenges, sampling, total surveys conducted and IDIs. Lastly, quantitative data are encrypted, uploaded and transferred nightly to the data analysis team, where progress is reviewed in real time.

On a nightly basis, qualitative data are removed from the recorders and saved on a password-protected computer. Survey data are captured on tablets and saved to the internal memory. During data collection, each evening the field supervisor reviews each survey and encrypts it so the data are no longer accessible on the tablet. The supervisor uploads the encrypted data nightly to a secure server administered by SurveyCTO (V.2.212; Dobility).
The evaluation team downloads the encrypted data using the SurveyCTO Client software (V.2.212; Dobility) and decrypts the data using a decryption key generated by the research team. The evaluation team oversees data entry, management and storage for qualitative data. All IDIs are translated into English and transcribed verbatim. Digital recorders and paper copies of written notes are kept in a locked cabinet until transcriptions are checked for accuracy and completeness, at which point audio files are deleted and notes are shredded. The electronic transcriptions do not contain identifying information, only a study ID number linked to the quantitative survey. A separate linking file for the quantitative and qualitative data is password protected and only accessible to the study team.

The primary independent variable of interest is assignment to the intervention. For the analysis, we will compare baseline characteristics between the intervention and control groups to assess balance. We collect data on potential confounders to increase precision, analyse heterogeneity and, if necessary, control for any potential imbalance between the groups. The primary dependent variable is the probability of facility delivery for the most recent birth, based on self-report by mothers. Secondary outcomes include:

Because the data are self-reported and cover experiences up to 12 months earlier, there are limits to what can reasonably be asked without introducing major recall bias. The survey captures additional indicators of morbidity, including intravenous antibiotics, blood transfusions and referral to CEmONC, but we have limited secondary outcomes to those most likely to be clearly remembered. All quantitative analyses will be conducted in SAS V.9.4 (SAS Institute). Our quantitative analytic plan is threefold, yielding descriptive, bivariate and multivariate statistics.
First, we will describe the study sample, stratifying by intervention and control group and testing for differences between the groups. Second, we will estimate differences between the groups for primary and secondary outcomes, controlling for a set of baseline demographics. Categorical variables will be compared between the groups using a χ² test when cell sizes are sufficient, or Fisher’s exact test when cell sizes are small; continuous variables will be compared using t-tests if normally distributed, or non-parametric Wilcoxon rank-sum tests if the distribution is non-normal. Third, we will fit several regression models to estimate the impact of the intervention on the primary and secondary outcomes, adjusting for baseline values, assignment matching variables and any imbalanced covariates. To control for the phased timing of implementation, we include a variable in the main models that captures the month each home opened.

All qualitative data will be analysed in NVivo V.10 software (QSR International). We will conduct a content analysis of the IDI transcripts. Coding themes have been identified a priori; additional themes will be included as they emerge. We will triangulate findings with the quantitative data to identify consistencies, inconsistencies or additional themes to be explored. We will use the themes developed during the baseline analysis to analyse the endline data and identify any new themes as they emerge.

To systematically assess confounders and the risk of bias at the preintervention, intervention and postintervention phases, we will use the ROBINS-I tool.36 This tool enables us to transparently report threats to the validity of this quasiexperimental study during analysis, interpretation and dissemination. Results for the primary and each secondary evaluation question will be presented.
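The adjusted ORs will come from the regression models described above. As a minimal illustration of the effect measure itself, the sketch below computes an unadjusted OR with a Woolf (log-scale) 95% CI from a hypothetical 2×2 table of facility versus non-facility deliveries by study arm; the counts are invented for illustration and are not study data.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) CI.
    a = intervention, facility delivery; b = intervention, non-facility;
    c = control, facility delivery;      d = control, non-facility."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lower = exp(log(or_) - z * se_log)
    upper = exp(log(or_) + z * se_log)
    return or_, lower, upper

# Invented counts for illustration only, not study data.
or_, lower, upper = odds_ratio_ci(a=800, b=400, c=700, d=500)
```

In practice the adjusted estimates would additionally condition on baseline values, matching variables and any imbalanced covariates, as the analytic plan specifies; this unadjusted calculation only shows the scale on which those results will be reported.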