Approximately 3 million children younger than 5 years living in low- and middle-income countries (LMICs) die each year from treatable clinical conditions such as pneumonia, dehydration secondary to diarrhea, and malaria. A majority of these deaths could be prevented with early clinical assessment and appropriate therapeutic intervention. In this study, we describe the development and initial validation testing of a mobile health (mHealth) platform, MEDSINC®, designed for frontline health workers (FLWs) to perform clinical risk assessments of children aged 2-60 months. MEDSINC is a web browser-based clinical severity assessment, triage, treatment, and follow-up recommendation platform developed with physician-based Bayesian pattern recognition logic. Initial validation, usability, and acceptability testing were performed on 861 children aged 2-60 months by 49 FLWs in Burkina Faso, Ecuador, and Bangladesh. MEDSINC-based clinical assessments by FLWs were independently and blindly correlated with clinical assessments by 22 local health-care professionals (LHPs). Results demonstrate that clinical assessments by FLWs using MEDSINC had a specificity correlation of 84-99% with those of LHPs, except for two outlier assessments (63% and 75%) at one study site, where local survey prevalence data indicated that MEDSINC outperformed the LHPs. In addition, MEDSINC triage recommendation distributions were highly correlated with those of LHPs, and usability and feasibility responses from LHPs and FLWs were collectively positive for ease of use, learning, and job performance. These results indicate that the MEDSINC platform could significantly increase pediatric health-care capacity in LMICs by improving FLWs' ability to accurately assess children's health status and triage them, facilitating early life-saving therapeutic interventions.
The MEDSINC platform acquires and digitizes key evidence-based data points that are then analyzed through physician-based logic to generate integrated clinical risk assessments, triage, treatment, and follow-up recommendations. The platform interprets 42 key clinical data points based on the WHO IMCI-iCCM guidelines and protocols, as well as other evidence-based data points that allow for expansion of the clinical conditions/diseases evaluated by MEDSINC. The platform guides users through a complete assessment, requiring them to answer all questions sequentially, with embedded demonstration and training animations (GIF illustrations) to improve the quality of data point acquisition. The acquired data points are summarized in Table 1. MEDSINC's clinical logic is based on Bayesian weighting of each data point followed by cluster-pattern analysis for each specific disease or clinical condition (Figure 1). Each data point is assigned a numerical weighted score based on its degree of variance from normal database values or its degree of clinical severity, and is then assigned to one or more of the eight disease assessment groups (malaria, measles, skin infections, meningitis, otitis media, dysentery, urinary tract infection, and anemia), in which risk versus no risk is determined by weighted data tolerance scores for each specific disease. In addition, aggregated scores of these data points are further analyzed to determine clinical severity risk (none/mild, moderate, or severe) for key clinical conditions (respiratory distress/pneumonia, dehydration, sepsis-systemic inflammatory response syndrome [SIRS] risk, and acute malnutrition) based on sliding tolerance severity score thresholds. Therefore, the initial iteration of the platform generates 20 integrated clinical assessments.
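The weighting-and-threshold logic described above can be sketched in code. This is an illustrative sketch only: the actual MEDSINC weights, normal ranges, groupings, and tolerance thresholds are proprietary and not published, so every number and cutoff below is hypothetical.

```python
# Illustrative sketch of Bayesian-style weighted scoring with sliding
# severity thresholds. All weights, ranges, and cutoffs are hypothetical,
# not MEDSINC's actual clinical logic.

def weighted_score(value, normal_low, normal_high, weight):
    """Score a data point by its degree of variance from a normal range."""
    if normal_low <= value <= normal_high:
        return 0.0  # within normal limits contributes no risk
    deviation = min(abs(value - normal_low), abs(value - normal_high))
    span = normal_high - normal_low
    return weight * (deviation / span)

def severity(aggregate, moderate_cut=1.0, severe_cut=2.5):
    """Map an aggregated cluster score onto severity categories."""
    if aggregate >= severe_cut:
        return "severe"
    if aggregate >= moderate_cut:
        return "moderate"
    return "none/mild"

# Hypothetical respiratory-distress cluster for a 12-month-old:
score = (weighted_score(58, 25, 45, weight=2.0)     # fast breathing
         + weighted_score(130, 80, 160, weight=1.0))  # heart rate in range
print(severity(score))  # → moderate
```

The key design idea is that an in-range vital sign contributes nothing, while out-of-range values contribute in proportion to both their deviation and a clinically assigned weight, so several mildly abnormal findings can aggregate to the same severity as one markedly abnormal finding.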
Based on the generated clinical assessments and severity risk, MEDSINC also creates WHO IMCI-iCCM-compliant triage, treatment, and follow-up recommendations that are specific for age and weight.
Table 1 caption: Clinical data points used by MEDSINC Bayesian/cluster-pattern recognition algorithms. MUAC = mid-upper arm circumference.
Figure 1 caption: MEDSINC Bayesian/cluster-pattern algorithms use acquired clinical data points (see Table 1) that are assigned a numerical weighted score and then grouped according to the clinical assessment pattern being processed. Severity assessments (none–moderate–severe) are then generated by unique tolerance scores for respiratory distress, dehydration, sepsis risk, and acute malnutrition. Clinical risk for eight additional clinical conditions (malaria, urinary tract infection, measles, anemia, cellulitis, ear infection, meningitis, and dysentery) is based on individual scores. The MEDSINC platform also generates patient-specific triage, treatment, and follow-up recommendations. This figure appears in color at www.ajtmh.org.
Of importance, the MEDSINC platform is engineered to be fully functional with or without access to cellular/wireless connectivity and is operating system agnostic, which allows it to be used on any mobile device with a touch screen. MEDSINC is also highly configurable for clinical content and regional localization (e.g., language, clinical diseases, treatment, user interface [UI]/user experience [UX], and treatment protocols), with the ability to quickly provide updates reflecting changing program/national guidelines for local customization. Validation test sites were determined by regional collaborating testing groups in concert with the Ministry of Health (MOH). These included two remote regional village sites in Burkina Faso (Yako and Gourcy districts); four regional sites (urban, coastal, highland, and Amazon) in Ecuador (Quito, Pedernales, Sigchos, and Coca); and the Rayer Bazar urban slum region in Dhaka, Bangladesh.
Each testing site received approval from its ethics committee. THINKMD was granted approval through the University of Vermont's Institutional Review Board for research in Burlington, VT; UNICEF-Burkina Faso received approval and authorization from the local MOH, Directorate of Maternal and Child Health in Burkina Faso; Save the Children-Bangladesh received approval and authorization from the organization's internal ethics committee based at Save the Children headquarters in Washington, DC; and Universidad San Francisco de Quito (USFQ)/MOH-Ecuador received approval from the USFQ Comité de Ética de Investigación en Seres Humanos in Quito. Field-based validation studies followed a standardized "train-the-trainer" approach with slight site-specific modifications based on collaborator requests, the technology available at testing locations, the number of attendees, and the previous training of FLWs. Initial MEDSINC training occurred with LHPs and included a presentation on the background, functionality, and full use of the MEDSINC platform; work through standardized test cases, including metronome rate training to simulate heart and respiratory rate acquisition; and live cases using colleagues as patients. This approach, using identical content, was then repeated by the LHPs for FLW training in groups of two to four trainees. Average time for each training group (ranging from 10 to 20 individuals) was between 4 and 6 hours. A focused validation study design was developed with local testing partners and is outlined in Figure 2. All subjects, 2-60 months of age, were enrolled during FLW home visits as part of community outreach or at presentation to a local/regional clinic for routine or acute care health-care visits. Parental consent was required and granted verbally at all study sites.
MEDSINC assessments were performed on Apple iPod touch devices (Apple Inc., Cupertino, CA) in Burkina Faso and Ecuador or Lenovo Yoga 8 tablets (Lenovo Group Ltd., Beijing, China) in Bangladesh. Based on the MEDSINC version used at each site, severity risk correlations (respiratory distress, dehydration, sepsis-SIRS, and acute malnutrition) were evaluated at all three sites, whereas the additional disease-specific risk assessments (malaria, dysentery, meningitis, ear infection, skin infection, anemia, measles, and urinary tract infection) developed during these validation studies were evaluated only during the Ecuador and Bangladesh testing. Figure 2 caption: Validation study design and recruitment of subjects. All subjects initially received an independent MEDSINC (offline) clinical assessment by FLWs over a 4- to 21-day study period, depending on the size of the cohort and the timeframe determined by in-country testing partners. Following each FLW MEDSINC clinical assessment, each subject was independently evaluated by LHPs, who were blinded to the MEDSINC assessments generated by FLWs, using their personal standardized clinical approaches. Care and triage recommendations for each enrolled subject were determined by LHPs based on their own clinical assessments. All assessments were stored locally on the devices until internet connections were established; data were then transferred daily to the Health Insurance Portability and Accountability Act (HIPAA)-compliant secure THINKMD data management server. Data were protected and stored in an International Business Machines (IBM) Cloudant Database-as-a-Service instance. All access to Cloudant databases is encrypted via HTTPS and passes through a RESTful application programming interface with full authentication. Databases were backed up and replicated nightly.
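The store-locally-then-sync behavior described above follows a common offline-first pattern, which can be sketched as below. The queue, payload fields, and upload callback are hypothetical stand-ins, not THINKMD's actual storage or API.

```python
# Minimal sketch of an offline-first assessment store: records persist in a
# local queue regardless of connectivity and are uploaded once a connection
# exists. All names and payloads are hypothetical, not MEDSINC internals.
import json
import queue

local_store = queue.Queue()  # stands in for on-device persistent storage

def record_assessment(assessment: dict) -> None:
    """Save an assessment locally regardless of connectivity."""
    local_store.put(json.dumps(assessment))

def sync(upload, connected: bool) -> int:
    """Upload queued assessments once a connection is available."""
    sent = 0
    while connected and not local_store.empty():
        upload(local_store.get())  # e.g., an HTTPS POST to a secure server
        sent += 1
    return sent

record_assessment({"child_id": 1, "respiratory_rate": 58})
record_assessment({"child_id": 2, "muac_mm": 118})
uploaded = []
print(sync(uploaded.append, connected=True))  # → 2
```

Keeping serialization at write time and deferring transport until connectivity returns is what allows a full study day of assessments to be captured offline and batch-transferred later, as the study sites did.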
Assessment of the MEDSINC UI and UX was performed using questionnaire surveys completed by participants through interviews conducted by local testing staff after completion of each local validation study. Feedback was acquired from all participating stakeholders, including FLWs, LHPs, MOH staff, local and regional program staff, and child caregivers, throughout all study periods and sites. Dichotomized correlation analysis, as well as sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), was performed between independent FLW-generated MEDSINC assessments and the independent LHP physician "gold standard" clinical assessments of the same child. We estimated the mean and 95% credible intervals using Bayesian inference with a binomial likelihood and a Jeffreys prior, that is, a beta distribution with both parameters equal to 0.5. Inter-rater reliability analysis between MEDSINC-generated assessments by FLWs and LHP assessments was performed using both Cohen's kappa statistic16,17 and Gwet's agreement coefficient (AC1)18–20 because our data include unbalanced classes. Gwet's AC1 is considered a more reliable indicator of diagnostic reliability than Cohen's kappa because it relaxes the assumption made by Cohen's kappa that each evaluator is independent; as a result, Gwet's AC1 does not suffer from the paradox of Cohen's kappa, in which there is high agreement but a low kappa statistic.18–20 Support for the Gwet's AC1 reliability analysis was confirmed using stochastic simulations. mHealth evidence reporting and assessment (mERA) guidelines were adhered to for development, testing, and reporting.21
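The statistics above can be made concrete with a small worked example. The 2x2 counts below are hypothetical (not the study's data), chosen only to reproduce the kappa paradox the text describes: with unbalanced classes, raters can agree on 92% of cases yet yield a low kappa, while Gwet's AC1 stays high. The Jeffreys posterior mean for a proportion is also shown; the full 95% credible interval additionally requires beta-distribution quantiles.

```python
# Sketch of the agreement statistics compared in the text, on a
# hypothetical 2x2 table of FLW-vs-LHP dichotomized assessments.

def jeffreys_mean(successes, n):
    """Posterior mean of a proportion under a Jeffreys Beta(0.5, 0.5) prior."""
    return (successes + 0.5) / (n + 1)

def agreement_stats(n11, n12, n21, n22):
    """Cohen's kappa and Gwet's AC1 for a 2x2 rater table.

    n11/n22: both raters agree (negative/positive); n12/n21: disagreements.
    """
    n = n11 + n12 + n21 + n22
    po = (n11 + n22) / n                # observed agreement
    p1_r1 = (n11 + n12) / n             # rater 1 marginal, category 1
    p1_r2 = (n11 + n21) / n             # rater 2 marginal, category 1
    pe_kappa = p1_r1 * p1_r2 + (1 - p1_r1) * (1 - p1_r2)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    pi = (p1_r1 + p1_r2) / 2            # mean marginal proportion
    pe_ac1 = 2 * pi * (1 - pi)          # AC1 chance-agreement term
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

# Unbalanced classes: 92/100 agreements, nearly all in the "no risk" class.
kappa, ac1 = agreement_stats(90, 4, 4, 2)
print(round(kappa, 2), round(ac1, 2))  # → 0.29 0.91
```

Despite 92% raw agreement, kappa is only about 0.29 because its chance-agreement term is inflated by the skewed marginals, whereas AC1's chance term is small for the same table, which is why the analysis reports both.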