Background: Understanding the magnitude and clinical causes of maternal and perinatal mortality is a basic requirement for positive change. Facility-based information offers a contextualized resource for clinical and organizational quality improvement. We describe the magnitude of institutional maternal mortality, causes of death and cause-specific case fatality rates, as well as stillbirth and pre-discharge neonatal death rates. Methods: This paper draws on secondary data from 40 low- and middle-income countries that conducted emergency obstetric and newborn care assessments over the last 10 years. We reviewed 6.5 million deliveries in 15,411 surveyed facilities. Most of the data were extracted from reports and aggregated in Excel. Results: Hemorrhage and hypertensive diseases contributed to about one third of institutional maternal deaths, and indirect causes contributed another third (reflecting the overrepresentation of sub-Saharan African countries, where indirect causes account for a large proportion of deaths). The most lethal obstetric complication across all regions was ruptured uterus, followed by sepsis in Latin America and the Caribbean and in sub-Saharan Africa. Stillbirth rates exceeded pre-discharge neonatal death rates in nearly all countries, possibly because women and their newborns were discharged soon after birth. Conclusions: To a large extent, facility-based findings mirror what population-based systematic reviews have also documented. As coverage of skilled attendance at birth increases, proportionally more deaths will occur in facilities, making improvements in record-keeping and health management information systems, especially for stillbirths and early neonatal deaths, all the more critical.
This secondary data analysis is based on a review of cross-sectional health facility surveys known as emergency obstetric and newborn care (EmONC) assessments, which focus on routine intrapartum care for women and their newborns as well as more complicated births. These assessments have been driven by the United Nations Population Fund (UNFPA), the United Nations Children’s Fund (UNICEF), the World Health Organization (WHO), and the Averting Maternal Death and Disability (AMDD) program at Columbia University. The methods have been described elsewhere, but a summary follows [16, 17]. Most EmONC assessments were national in scope and targeted facilities providing childbirth services. As a rule, all hospitals were selected; where a “census” of childbirth sites was not possible, hospitals were supplemented by either a random sample of mid-level facilities (health centers, clinics) or a “restricted census” of higher-volume mid-level facilities that attended more than a specified number of monthly deliveries. Usually, both private and public sector facilities were included. Table 1 shows the number of hospitals and other facilities surveyed in each country and the population size covered by the facilities visited.
Table 1 EmONC assessment characteristics and facility-based rates and ratios (40 countries)
Abbreviations: NR, not reported; LAC, Latin America and the Caribbean; MMR, maternal mortality ratio; mo, months; MMR (2015), see reference [7]
a Ecuador, Democratic Republic of Congo, Rwanda, São Tomé and Príncipe, Sierra Leone and Zanzibar reported only direct maternal deaths
b Eritrea: 25,000 deliveries are live births, based on 2006 data, not the EmONC assessment; institutional delivery rate also not based on the EmONC assessment
c Malawi and Zambia: deaths and deliveries weighted; in Malawi, unweighted deliveries = 367,738 and unweighted deaths = 557; in Zambia, unweighted deliveries = 254,790 and unweighted deaths = 673
d Bangladesh: health facilities that performed cesareans were considered hospitals; if not, considered “other”. Sample included 24 districts
e Complications included only major direct obstetric complications (hemorrhage, severe pre-eclampsia/eclampsia, sepsis, prolonged/obstructed labor, severe abortion complications, ruptured uterus, ectopic pregnancy)
f Maternal deaths include all maternal deaths (direct, indirect and unknown causes)
g Mozambique: based on 3 months of data, multiplied by 4 to represent 12 months, for consistency across countries
h Rwanda: based on 6 months of data for facility births, complications and deaths, multiplied by 2 to represent 12 months, for consistency across countries
In each country, a core team adapted a set of standardized instruments that covered the availability and status of infrastructure, human resources, drugs, equipment, supplies, and service statistics, in addition to a provider interview and chart reviews [18]. Most relevant to this paper was the 12-month retrospective summary of service statistics, which included the number of deliveries, women experiencing obstetric and non-obstetric complications by type of complication, maternal deaths by cause, and birth outcomes. Data collectors extracted data from logbooks in labor and delivery wards, maternity wards, operating theatres, and newborn care units in each facility. When any doubt arose or clarification was required, data collectors turned to the staff on duty. Definitions of causes of maternal death were informed by the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10) and its application to deaths during pregnancy, childbirth and the puerperium (ICD-MM). Obstetric complications were elaborated upon to distinguish between antepartum and postpartum hemorrhage and retained placenta. Prolonged and obstructed labor were included, sometimes joined as one category. Ruptured uterus and ectopic pregnancy, along with postpartum sepsis, severe pre-eclampsia and eclampsia, were the final “major direct complications” listed on the instrument.
Indirect complications included malaria, HIV/AIDS, severe anemia, and, less commonly, hepatitis and diabetes. In each case, the form included a category for “other” direct complications and “other” indirect complications. Causes of death mirrored the listing of complications. Finally, space permitted the reporting of unspecified/unknown causes of maternal death. For the 12-month summary of maternal deaths, the data collectors were guided by the primary sources they located on the wards or with the staff. Where maternal death audits or reviews took place, those records were also accessed, but generally no subsequent recoding was performed. The 12-month retrospective compilation of service statistics was also designed to test the intrapartum and early neonatal death rate as an indicator of intrapartum care quality [19]. Data extraction from maternity or delivery registers captured the number of antepartum (macerated) and intrapartum (fresh) stillbirths, defined as occurring at 28 weeks of gestation or more. Intrapartum stillbirths and live births were divided between those weighing 2500 g or more and those weighing less. Early neonatal deaths were defined as those occurring before discharge or within the first 24 h, whichever came first. Countries varied widely in the level of detail captured; thus, categories were added for unspecified stillbirths and birth weights when the timing of death or birth weight was not recorded, and for live births and early neonatal deaths when birth weight was not recorded. These categories for maternal and newborn outcomes were standardized across countries. Data collectors were trained with a manual that used the same definitions for each obstetric complication, type of stillbirth and early neonatal death. EmONC assessment final reports were the source of most of the data compiled in this paper; we had access to primary data in nine countries but drew on those data in only two or three instances.
Because reporting was largely driven by country interests, not all reports contained the same information, nor was it presented in a standardized fashion. Consequently, the number of countries in each table differs. For example, some countries did not report the major obstetric complications by type of complication, making it impossible to calculate cause-specific case fatality rates. One report candidly acknowledged that the number of maternal deaths was grossly underreported; its death counts were not included. Other countries presented the intrapartum and pre-discharge neonatal death rate as recommended, restricting the numerator and denominator to babies weighing 2500 g or more, but failed to report all stillbirths or the number for which birth weight or stillbirth timing was unspecified; these data were not included in the paper. A small number of countries reported only direct maternal deaths, omitting the number of unspecified/unknown maternal deaths or indirect deaths; these reports were retained. Some countries distinguished between antepartum hemorrhage and postpartum hemorrhage, while others reported the two together. About 10 of the 40 countries had conducted more than one EmONC assessment. In all cases, we extracted information from the most recent report except for Ethiopia, whose final report for its most recent assessment was not yet available. Based on numbers drawn from the reports, we calculated the percentage of deliveries with obstetric complications and the institutional maternal mortality ratio, using 100,000 deliveries rather than live births as the denominator, since some countries counted only deliveries. We also calculated any regional aggregations, newborn mortality rates, and the ratio of stillbirths to early neonatal deaths. The case fatality rate was calculated by dividing the number of maternal deaths due to a specific complication by the number of such complications treated.
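The arithmetic for the two maternal indicators described above can be sketched as follows. This is a minimal illustration: the function names and the figures in the example are ours, invented for demonstration, and are not drawn from any country report.

```python
def institutional_mmr(maternal_deaths, deliveries):
    """Institutional maternal mortality ratio per 100,000 deliveries
    (deliveries, not live births, per the convention used in this paper)."""
    return maternal_deaths / deliveries * 100_000

def case_fatality_rate(cause_specific_deaths, complications_treated):
    """Cause-specific case fatality rate: deaths from a given complication
    as a percentage of the number of such complications treated."""
    return cause_specific_deaths / complications_treated * 100

# Hypothetical figures for illustration only:
print(institutional_mmr(120, 80_000))   # 150.0 deaths per 100,000 deliveries
print(case_fatality_rate(15, 600))      # 2.5 (% of treated cases)
```

Using deliveries rather than live births in the denominator slightly deflates the ratio relative to the conventional MMR, but keeps the calculation feasible for countries that counted only deliveries.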
The stillbirth rate was estimated by dividing the total number of stillbirths by all deliveries and multiplying by 1000; the pre-discharge neonatal mortality rate was calculated similarly, but deliveries resulting in a stillbirth were removed from the denominator. Ministries of Health provided oversight to all EmONC assessments and were usually supported by a technical steering committee. Public or private research institutions, universities, or central statistical offices were the most common implementing bodies for the assessments. Data collection teams usually consisted of four data collectors, generally with a health background. Data collectors participated in a weeklong training that included a review of each questionnaire, role plays, and exercises to familiarize themselves with the questionnaires and the data collectors’ manual. Each training included a one-day field activity in local hospitals and health centers where teams completed the questionnaires under supervision. Generally, quality assurance teams closely monitored the first week or two of field activities. Teams usually required one to two days to complete a hospital and half a day to complete a health center. Data collection was paper-based in all countries but one, and data entry was performed with CSPro. Report analyses were produced with statistical software such as Stata or SPSS, or sometimes Excel. When mid-level facilities were sampled, the data were weighted based on selection probability. Weighting is required because non-uniform selection probabilities affect how data from selected facilities represent all facilities, including those not selected. Technical support was provided by consultants to the AMDD program. Countries varied in the intensity of support, from no direct AMDD support (Ecuador, Panama, Côte d’Ivoire, Eritrea), to minimal remote support (Mongolia, Cambodia, Afghanistan), to more intensive support in most countries.
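The stillbirth and pre-discharge neonatal rate definitions above can likewise be sketched in a few lines. Again, the function names and figures are hypothetical, chosen only to illustrate the denominators used.

```python
def stillbirth_rate(stillbirths, deliveries):
    """Stillbirths per 1,000 deliveries."""
    return stillbirths / deliveries * 1000

def pre_discharge_nmr(early_neonatal_deaths, deliveries, stillbirths):
    """Pre-discharge neonatal deaths per 1,000 deliveries, with deliveries
    that resulted in a stillbirth removed from the denominator."""
    return early_neonatal_deaths / (deliveries - stillbirths) * 1000

# Hypothetical figures for illustration only:
print(stillbirth_rate(200, 10_000))        # 20.0 per 1,000 deliveries
print(pre_discharge_nmr(49, 10_000, 200))  # 5.0 per 1,000
```

Removing stillbirth deliveries from the neonatal denominator ensures each rate is computed against the population actually at risk of that outcome.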
UNFPA and UNICEF were the predominant supporters of EmONC assessments, but bilateral partners and foundations also played important roles depending on the country. Names of women or other identifying information were never included in the primary data collection. Countries followed the guidance of their ministries of health and, when required, obtained approval of the protocols and data collection instruments from local institutional review boards. No additional approval was sought for this paper, since the primary source of the data was reports in the public domain.