Background: Good quality of care around childbirth can avert more than half of stillbirths and newborn deaths. Neonatal mortality in northeast Namibia is higher than the national level, yet no review exists of the quality of care provided around childbirth there. This paper reports on a baseline assessment for implementing WHO/UNICEF/UNFPA quality measures around childbirth. Methods: A mixed-methods research design was used to assess the quality of care around childbirth. To obtain good saturation and adequate representation of women's views, we purposively sampled the only high-volume hospital in northeast Namibia; observed 53 women at admission, of whom 19 progressed to deliver on the same day as data collection; and interviewed 20 staff and 100 women who were discharged after delivery. The sampled hospital accounted for half of all deliveries in that region and had a high neonatal mortality rate (27/1,000), above the national level (20/1,000). We systematically sampled every 22nd delivery until 259 mother–baby pairs were reached. Data were collected using the Every Mother Every Newborn assessment tool, entered, and analyzed using SPSS V.27. Descriptive statistics were used, and results were summarized in tables and graphs. Results: We reviewed 259 mother–baby pair records. Blood pressure, pulse, and temperature measurements were performed in 98% of observed women and in 90% of women interviewed at discharge. More than 80% of human and essential physical resources were adequately available. Gaps were identified within WHO/UNICEF/UNFPA quality standard 1, in the quality statement on routine postpartum and postnatal newborn care (1.1c), and within standards 4, 5, and 6 on provider–client interactions (4.1), information sharing (5.3), and companionship (6.1). Only 45% of staff had received in-service or refresher training on postnatal care and breastfeeding. Most mothers were not informed about breastfeeding (52%), postpartum care and hygiene (59%), or family planning (72%). On average, 49% of newborn postnatal care interventions (1.1c) were practiced. Few mothers (0–12%) could mention any newborn danger signs. Conclusion: This is the first study in Namibia to assess WHO/UNICEF/UNFPA quality-of-care measures around childbirth. Measurement of provider–client interactions and information sharing revealed significant deficiencies in this aspect of care that negatively affected clients' experience of care. To achieve reductions in neonatal deaths, improved training in communication skills for educating clients is likely to have a major positive and relatively low-cost impact.
Qualitative and quantitative methods were both used to assess the baseline implementation of quality-of-care interventions around childbirth at an intermediate hospital in northeast Namibia. We applied mixed-methods data collection because it aligns with the Donabedian and WHO frameworks for assessing facility quality of care. These frameworks also suited our study because they are designed to tell the story of care provision through three components: inputs, processes/outputs, and outcomes around childbirth. Qualitative data were collected by observing women in the maternity ward as they navigated admission, labor, and childbirth. Quantitative data were collected through an assessment of facility functionality and readiness, a record review, and structured interviews with women discharged after delivery, staff, and the facility manager. The research was supported by the Namibian Ministry of Health and the University of the Western Cape (UWC). Ethical approval was obtained from UWC and the Namibian Ministry of Health.

Kavango region, northeast Namibia, was purposively sampled because it hosts the only intermediate-referral hospital in that region. The hospital accounts for half of all deliveries in the region and has a high neonatal mortality rate (27/1,000), above the national level (20/1,000) (16). The factors that influenced the selection of the hospital were (1) a high case load of deliveries, (2) poor newborn health indicators, and (3) being a UNICEF-supported region/hospital for maternal and newborn programs. The region also records 72.8% health facility deliveries, 75% deliveries by skilled birth attendants, and 47.7% postnatal care within 2 days (16). Deliveries at northeast Namibia's intermediate hospital increased from 8,823 in 2019 to 11,967 in 2020. At the time of data collection, infrastructure and human resources for health (17) were inadequate for the increasing number of deliveries, posing a challenge to a healthcare system expected to deliver quality care in an overcrowded maternity unit. Yet no quality improvement program existed.

Staff for interviews (N = 20) were selected purposively; the selection criteria included working with pregnant women, in the labor and delivery unit, or in the postnatal care and premature unit. The facility manager was conveniently selected for interview as the only manager of the facility. Observed women (N = 53) were conveniently sampled as they were admitted to the maternity ward for labor and delivery during the data collection period. Women who delivered (N = 100) were also conveniently sampled for interview at the time of discharge home. The numbers of the facility manager, staff, and observed and interviewed women were based on the estimated point of saturation and on obtaining adequate voice representation. A woman was counted among the 53 even if she was observed but did not complete all four stages of childbirth on the day of data collection; the stages were admission to the maternity ward, labor, delivery, and immediate care after birth. Of the 53 observed women, 19 completed all four stages. For the record review, we purposively chose January to December 2016 and systematically sampled every 22nd delivery until the necessary sample size was reached. The sample size was calculated per the study protocol using the 5,716 deliveries recorded in 2016.
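The sampling interval can be checked with simple arithmetic: selecting every 22nd entry from the 5,716 deliveries recorded in 2016 yields approximately 259 records. The short sketch below illustrates this; it is not the authors' code, and the fixed starting point (rather than a random start within the first interval, as is typical for systematic sampling) is a simplification.

```python
# Illustrative check of the systematic sampling step described above.
# The figures come from the text; the selection logic is a simplification
# (a random start within the first interval would normally be used).
total_deliveries_2016 = 5716
sampling_interval = 22

# Delivery register entries that would be pulled for the record review
selected_entries = list(range(sampling_interval, total_deliveries_2016 + 1, sampling_interval))
print(len(selected_entries))  # 259 mother-baby pair records, matching the reviewed sample
```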
With an alpha of 0.05 and a power of 0.80, we needed a sample size of 211 for each of the before and after groups. For the full record review in this baseline study (the "before" group), and allowing for missing information in the records, we planned to review 250 mother–newborn pair records; because of missing records, we ultimately reviewed 259 mother–baby pairs. The endline paper will report the results of the pre- and post-intervention phases.

The Every Mother Every Newborn (EMEN) assessment tool is divided into six forms. The facility structural and functional readiness form (Form 1) assesses physical resources, supplies, equipment, and medicines. The management interview form (Form 2) assesses the policy environment, while Form 3 assesses the formal and refresher training staff received in maternal and newborn care; this form also includes vignettes to test staff knowledge of the subject areas. Form 4 is used to observe a woman from admission through labor and delivery as she navigates the process of care. Form 5 captures data on the care provided from the medical record; it also collects outcome data and reviews partographs and the records of women who delivered by cesarean section. Form 6 assesses women's perceptions of the quality of care they received during hospitalization (Supplementary Table S3). The EMEN assessment tool was developed by pulling together the best interventions from WHO's Service Availability and Readiness Assessment (SARA) and from tools used in rigorous research settings (9). By using the tool to collect data, we were able to capture gaps in quality of care identified in other large studies (9, 18–20) and across the WHO/UNICEF quality framework (Supplementary Figure S1 and Table S3), which supports the validity and reliability of the EMEN tool and of this study's results. Since no single tool is sufficient to capture all quality measures (21–23), we encourage researchers to use a mixture of tools to derive the best benefit from the results. Even when only one quality domain was assessed, we used at least three to four EMEN forms to capture the quality standards broadly (Supplementary Table S2). Although the EMEN tool found high availability of human resources, essential physical resources, and drugs, we observed a few inconsistencies between these findings and conditions on the ground.

Assistant data collectors comprised one retired nurse and two nursing students, who interviewed staff and reviewed maternity records, and two student doctors, who conducted observations and exit interviews. The first author interviewed the medical doctors. We collected data by adapting the EMEN assessment tool to the local context. The EMEN tool assesses quality-of-care interventions during childbirth, especially in the first 24 h (24). Its development was based on harmonizing interventions from WHO's SARA and from tools used in robust research settings (9), and the final version incorporated experiences from implementing the same tool in Bangladesh, Ghana, and Tanzania. The assistant data collectors were trained by the UNICEF international consultant who had led cross-sectional studies using similar tools in those three countries; the training included observing them in practice to ensure data quality and consistency. The EMEN tool has strong validity and reliability, as it incorporates experiences from large-scale studies and robust surveys (9). Our other paper, which assessed the capacity of the EMEN tool, found it strong in capturing the WHO/UNICEF/UNFPA maternal and newborn quality standards (15).
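For readers who wish to see how a per-group sample size of about 211 (alpha = 0.05, power = 0.80) can be checked, the sketch below shows one way such a figure could be reproduced, assuming a two-sided comparison of two independent proportions. The baseline and endline proportions in the code are illustrative assumptions, not values from the study protocol, so the output only approximates the protocol figure.

```python
# Hypothetical sample-size check, assuming a two-sided two-proportion
# comparison; the proportions below are illustrative, not from the protocol.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_before = 0.50   # assumed baseline coverage of a quality intervention
p_after = 0.60    # assumed post-intervention coverage

effect_size = proportion_effectsize(p_after, p_before)  # Cohen's h
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # about 194 per group under these assumed proportions
```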
The collected data did not include any respondents' personal identifiers. Prior to each interview, the assessors read an oral consent script and asked the participant to respond "yes" or "no"; the interview proceeded only with those who consented. Data were collected from December 10, 2019 to January 19, 2020. Quantitative data were entered, coded, cleaned, and analyzed using SPSS for Mac, version 27. We used descriptive statistics to summarize key results in tables and figures. Because there was only one site, the facility structural and functional readiness and manager questionnaires were analyzed manually.

We applied all six EMEN assessment forms to capture quality-of-care interventions around childbirth and adopted the scoring approach of Brizuela et al. (22) for analyzing the tools. We found the approach useful and built on it by benchmarking the results/responses captured by each tool against each quality measure (Supplementary Table S2). We expanded the Brizuela et al. (22) approach, which assesses the capacity of tools to capture quality standard measures: instead of only reporting the number of quality items/questions present, we analyzed the proportion of responses from each tool against each WHO/UNICEF/UNFPA standard measure (Supplementary Table S2). All the questions in the tools related to measures of inputs, processes, outputs, or outcomes. We reviewed each questionnaire and matched its questions to the WHO/UNICEF/UNFPA quality measures associated with the standards; a detailed description of this mapping exercise is published in our other paper (15). In summary, we matched the questions/responses in the tools to each of the measures, ensuring that all responses/questions in the tools and all measures were considered. For instance, responses on the availability of lifesaving supplies and functioning equipment for emergency care and newborn resuscitation were captured under the facility readiness and observation-of-care tools. For these analyses, we used descriptive statistics to calculate the average or proportion of responses captured by each tool. For quality measures with multiple subcomponents/questions, capturing at least one subcomponent was considered sufficient (for example, a quality measure might list several medicines while the tool measured only a subset of them), unless the quality measure clearly required that all subcomponents be present for the measure to be met (e.g., provision of essential newborn care required four elements, and the tools had to capture responses for all four). We then calculated the average or percentage of responses to quality measures captured per tool (e.g., the average response proportion for the quality measures of a given quality statement captured within a specific tool) (Supplementary Table S2). This was a crucial step in producing a summary table of results showing clearly which indicators or quality interventions were poorly, moderately, or highly practiced, and which EMEN tool captured most of the WHO/UNICEF/UNFPA quality measures under each quality statement and/or standard.
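To make the benchmarking logic above concrete, the following is a minimal sketch of how a per-tool proportion of captured quality measures could be computed. The measure names, responses, and the require_all flag are hypothetical placeholders, not the study's actual coding scheme.

```python
# Hypothetical sketch of the benchmarking step described above: for each
# WHO/UNICEF/UNFPA quality measure, subcomponent responses captured by one
# EMEN form are scored, and the per-form proportion of measures met is
# reported. All names and values below are illustrative.
measures = {
    "essential newborn care (all four elements required)": {
        "responses": [True, True, True, False],
        "require_all": True,
    },
    "lifesaving medicines available (any subset acceptable)": {
        "responses": [True, False, False],
        "require_all": False,
    },
}

def measure_met(item):
    """A measure counts as captured if at least one subcomponent is present,
    unless the standard explicitly requires every subcomponent."""
    return all(item["responses"]) if item["require_all"] else any(item["responses"])

met = sum(measure_met(m) for m in measures.values())
proportion_met = met / len(measures)
print(f"{proportion_met:.0%} of quality measures captured by this form")  # 50% here
```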
Data management for the data collected around childbirth was performed using paper-based tools. The principal investigator checked the first 10 responses of each tool for completeness and consistency of codes. Since the principal investigator was on site, forms with identified problems were immediately returned to the assessor for verification and correction. A data point was declared a missing value if it could not be corrected from a register or if the mother was not present at the time of verification. All completed, cleaned forms were handed over to the principal investigator for safekeeping, and only the data management team had access to the data. The data were entered into SPSS by a statistician from the Namibia University of Science and Technology, who, after entry, handed all records back to the principal investigator for safekeeping and storage. The first author performed data cleaning before analysis.