Background
In 2017, India was home to nearly 20% of maternal and child deaths occurring globally. Accredited social health activists (ASHAs) act as the frontline for health service delivery in India, providing a range of reproductive, maternal, newborn, child health, and nutrition (RMNCH&N) services. Empirical evidence on ASHAs' knowledge is limited, yet such knowledge is a critical determinant of the quality of health services provided. We assessed the determinants of RMNCH&N knowledge among ASHAs and examined the reliability of alternative modalities of survey delivery, including face-to-face and computer-assisted telephone interviews (CATI; phone surveys), in 4 districts of Madhya Pradesh, India.

Methods
We carried out face-to-face surveys among a random cross-sectional sample of ASHAs (n = 1,552) and administered a follow-up test-retest survey within 2 weeks of the initial survey to a subsample of ASHAs (n = 173). We interviewed a separate sub-sample of ASHAs (n = 155) over the phone within 2 weeks of the face-to-face interview. Analyses included bivariate analyses, multivariable linear regression, and prevalence- and bias-adjusted kappa analyses.

Findings
The average ASHA knowledge score was 64%, ranging across sub-domains from 71% for essential newborn care and 71% for WASH/diarrhea to 64% for infant feeding, 61% for family planning, and 60% for maternal health. Leading determinants of knowledge included geographic location, age under 30 years, education, experience as an ASHA, completion of seven or more client visits weekly, phone ownership and use as a communication tool for work, and the ability to navigate interactive voice response prompts (a measure of digital literacy). Efforts to develop a phone survey tool for measuring knowledge showed similar inter-rater and inter-modal reliability. Reliability was higher for shorter, widely known questions, including those about the timing of exclusive breastfeeding or the number of tetanus shots during pregnancy. Questions with lower reliability included those on sensitive topics such as family planning, those with multiple response options, and those that were difficult for the enumerator to convey.

Conclusions
Overall results highlight important gaps in the knowledge of ASHAs. Findings on the reliability of phone surveys led to the development of a tool that can be widely used for routine, low-cost measurement of ASHA RMNCH&N knowledge in India.
The study took place in four districts (Hoshangabad, Mandsaur, Rewa, and Rajgarh) of Madhya Pradesh (MP), a central landlocked state in India that is largely Hindi speaking, primarily Hindu, and has a mostly agrarian economy [29]. Frontline health services are anchored by an estimated 75,000 ASHAs working across Madhya Pradesh's 52 districts [30]. The study setting in MP is characterized by disparities in access to education, mobile phones, and health services, particularly among women: female literacy is lower in rural areas (urban: 78%; rural: 51%), as is mobile phone ownership among women (urban: 50%; rural: 19%) [31]. In 2015, only 35% of children were breastfed within one hour of birth and 58% of children were exclusively breastfed until 6 months [31], while one in four children under 5 experienced wasting or thinness (low weight-for-height) and 42% were stunted (low height-for-age) [31].

ASHAs in the selected study areas were randomly selected for participation in a cross-sectional face-to-face interview (n = 1,552). One ASHA per primary sampling unit, or village, was sampled as part of a larger impact evaluation of a mobile health program, Kilkari, targeting pregnant women in the same geographic area [32]. The sample size is sufficient to detect a 7-percentage-point difference between any two groups in an overall knowledge score of 50% or higher, assuming an alpha of 0.025, a standard deviation of 0.18, and 0.80 power. In the parent evaluation, the sample size was calculated to detect a 7-percentage-point difference in an overall knowledge score of 50% or higher between the intervention and control groups.

A sub-sample of ASHAs interviewed during the cross-sectional face-to-face survey were re-interviewed 1–2 weeks following the initial survey to determine the degree to which repeated measurements (test-retest) yielded similar answers. Reliability analyses of the face-to-face survey were used to streamline the survey tool to a more manageable length, focused on the modules for which reliability testing was deemed necessary, before implementing the test-retest. Assuming a kappa of 0.80, a margin of error of 0.05, an alpha of 0.05, and proportions of positive responses of 0.35 for rater 1 and 0.40 for rater 2, 146 participants who had completed both surveys were required.

To develop a phone survey tool, ASHAs who had completed a face-to-face interview in the baseline survey 1–2 weeks prior were re-interviewed over the phone. The test-retest tool was deemed a reasonable length for a phone survey, so the same tool was used. The sample size requirements for the phone survey were the same as above: 146 completed interviews were needed.

The ASHA face-to-face survey included modules on demographic and work information; mobile phone ownership, use, and literacy; and experiences with Mobile Academy (a mobile health information training program). The questions were developed with the ASHA guidelines in mind, as well as the material available in training programs such as Mobile Academy, and were adjusted based on pretesting. The test-retest ASHA survey tool was shorter than the baseline face-to-face tool but included a subset of the same questions. The phone surveys were conducted with the same tool; the methods used to develop it are detailed elsewhere [33]. In brief, we took a step-wise approach, starting with expert-driven item generation, followed by several iterations of piloting and translation, before ultimately developing the large-scale face-to-face survey.
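As a rough illustration of the power calculation above, the standard two-sample comparison-of-means formula can be applied. The sketch below is ours, not the study's: it assumes knowledge scores are expressed as proportions on a 0–1 scale and treats the alpha of 0.025 as one-sided; under these assumptions it yields roughly 104 ASHAs per group, comfortably covered by the overall sample of 1,552.

```python
# Minimal sketch of the standard two-sample mean-difference sample size
# calculation. Assumptions (ours): scores on a 0-1 scale, alpha treated as
# one-sided; this is illustrative, not the study's actual calculation.
from scipy.stats import norm

def n_per_group(delta=0.07, sd=0.18, alpha=0.025, power=0.80):
    """Participants per group needed to detect a mean difference of `delta`."""
    z_alpha = norm.ppf(1 - alpha)  # critical value for a one-sided alpha
    z_beta = norm.ppf(power)       # quantile corresponding to desired power
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

print(round(n_per_group()))  # ~104 per group under these assumptions
```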
To assess inter-rater reliability, we then repeated an abbreviated version of the same face-to-face survey amongst a sub-sample of respondents and administered the same tool via a computer-assisted telephone interview (CATI) survey to a separate sample of ASHAs originally interviewed during the face-to-face survey. The main survey was conducted by 10 male enumerators who received 7 days of training, and lasted from June to November 2018. The phone survey lasted 9 days, with 1 day of training and two days of pilot testing; three male enumerators conducted it. Both in-person and phone interviews were conducted with the aid of the survey on tablets programmed using CSPro. The surveys included single-response as well as multi-response questions. Multi-response questions were asked without prompting any response options; enumerators probed for other answers and then selected all responses mentioned on the tablet.

All data were analyzed using Stata 15 [34]. Analysis of the determinants of ASHA knowledge was conducted through a multi-step process. Fig 1 presents the conceptual framework used to guide the analysis; it theorizes the relationship between personal characteristics, ASHA work-related characteristics, social norms, health system inputs, knowledge among ASHAs, and service delivery. This framework was adapted from the logic model generated by Naimoli et al [35] and further modified to include additional areas, such as the social norms described by Kok et al [36]. Domains or topics marked with asterisks are those for which our survey has no data.

To assess determinants of ASHA knowledge, composite knowledge scores were created from 35 questions split into five domains: maternal health, infant feeding, essential newborn care (ENC), family planning, and WASH/diarrhea (Table 1). Within each domain, questions were given equal weight and coded as 1 or 0 if there was one clear answer. If there were multiple correct options, each option was equally weighted, so that selecting all correct options yielded a score of 1 for that question, while selecting 2 of 3 correct options yielded a score of 0.66. The total score was calculated by summing the individual domain scores; scores were expressed on a scale of 0 to 100.

Bivariate and multivariable analyses were conducted with the total score as the outcome variable. Independent variables were selected based on our conceptual framework (Fig 1). To avoid over-fitting the model, the multivariable analysis included only those variables associated with the total knowledge score at a significance level of 0.20 or below in the bivariate analysis [37]. β coefficients from the adjusted regression model are presented with 95% confidence intervals.
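The scoring rule described above can be made concrete with a short sketch. Question identifiers and answer options below are hypothetical, and the handling of incorrectly selected options (simply ignored here) is our assumption, as the text does not specify a penalty.

```python
# Minimal sketch of the equal-weight scoring rule described above.
# Question IDs and options are hypothetical; incorrect selections carry
# no penalty here, which is our assumption.

def question_score(selected, correct):
    """Fraction of correct options selected: 2 of 3 correct -> 0.66."""
    correct = set(correct)
    return len(set(selected) & correct) / len(correct)

def domain_score(answers, answer_key):
    """Equally weighted mean of question scores in a domain, scaled 0-100."""
    scores = [question_score(answers[q], opts) for q, opts in answer_key.items()]
    return 100 * sum(scores) / len(scores)

# Hypothetical example: one single-answer and one multi-answer question.
answer_key = {"ebf_duration": {"6 months"},  # one clear answer -> 1 or 0
              "danger_signs": {"bleeding", "fever", "convulsions"}}
answers = {"ebf_duration": {"6 months"},
           "danger_signs": {"bleeding", "fever"}}   # 2 of 3 -> 0.66
print(round(domain_score(answers, answer_key), 1))  # 83.3
```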
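Similarly, the two-step model-building rule above (bivariate screening at p < 0.20, then one multivariable model) could be sketched as follows. The variable names are hypothetical, and the use of statsmodels rather than Stata is purely illustrative.

```python
# Minimal sketch of bivariate screening at p < 0.20 followed by a single
# multivariable OLS model. Variable names are hypothetical; the study used
# Stata 15, so statsmodels here is an illustrative stand-in.
import statsmodels.formula.api as smf

def screen_and_fit(df, outcome, candidates, screen_p=0.20):
    kept = []
    for var in candidates:
        bivar = smf.ols(f"{outcome} ~ {var}", data=df).fit()
        # Keep the variable if any of its non-intercept terms meet the cutoff.
        if bivar.pvalues.drop("Intercept").min() < screen_p:
            kept.append(var)
    model = smf.ols(f"{outcome} ~ {' + '.join(kept)}", data=df).fit()
    return model, kept

# model.params and model.conf_int() then give the adjusted beta coefficients
# with 95% confidence intervals.
```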
Reliability analyses were conducted with the individual ASHA as the unit of analysis. Kappa statistics were calculated to determine agreement between the two modalities tested in the test-retest survey and the phone survey. A kappa at or above 0.7 was considered to indicate moderate to strong agreement beyond chance [38]. To adjust kappa coefficients for differences in the prevalence of an indicator, as well as for random and/or systematic differences between the two survey ratings, prevalence-adjusted bias-adjusted kappa (PABAK) scores were calculated and are presented in the results. Prevalence indices account for differences in the prevalence of an indicator; where prevalence is high, chance agreement may also be high and the kappa correspondingly reduced [39]. For each question, PABAK scores between the face-to-face survey and the test-retest survey, and between the face-to-face survey and the phone survey, give a sense of (a) the overall reliability of the question and (b) the reliability of the question through the phone modality. A question was deemed reliable over the phone modality if the 0.7 kappa threshold was met in both surveys.

Ethical approval for research activities in India was obtained from the Johns Hopkins School of Public Health's Institutional Review Board in Baltimore, Maryland, USA, and from Sigma Research and Consulting in New Delhi, India. All participants provided verbal consent before engaging in interviews.
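As a concrete illustration of the agreement statistics above, a minimal sketch of Cohen's kappa and PABAK for one binary question rated in two surveys follows. For two categories, PABAK reduces to twice the observed agreement minus one; the counts used below are hypothetical.

```python
# Minimal sketch of Cohen's kappa and PABAK for one binary question rated
# in two surveys. Counts are hypothetical.

def kappa_and_pabak(a, b, c, d):
    """2x2 agreement counts: a = yes/yes, b = yes/no, c = no/yes, d = no/no."""
    n = a + b + c + d
    p_o = (a + d) / n                      # observed agreement
    p_e = ((a + b) / n) * ((a + c) / n) \
        + ((c + d) / n) * ((b + d) / n)    # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    pabak = 2 * p_o - 1                    # two-category PABAK
    return kappa, pabak

# With highly prevalent correct answers, kappa is depressed relative to PABAK
# even though raw agreement is 90%: here kappa ~= 0.61 while PABAK = 0.80.
print(kappa_and_pabak(a=80, b=5, c=5, d=10))
```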