Determining the agreement between an automated respiratory rate counter and a reference standard for detecting symptoms of pneumonia in children: Protocol for a cross-sectional study in Ethiopia


Study Justification:
– Acute respiratory infections (ARIs), particularly pneumonia, are a leading cause of under-5 mortality worldwide.
– Manual counting of respiratory rate (RR) for detecting fast breathing, a sign of pneumonia, is challenging and can lead to inappropriate treatment.
– Introducing automated RR counters may provide a solution, but their agreement with expert pediatricians needs to be determined.
Study Highlights:
– The study aims to determine the agreement between an Automated Respiratory Infection Diagnostic Aid (ARIDA) and expert pediatricians counting RR using a video reference standard.
– The study also evaluates the consistency of ARIDA devices and expert clinicians in counting RR.
– RR fluctuation over time after ARIDA attachment is measured in normal breathing children aged 2 to 59 months.
Study Recommendations:
– Further work is needed to establish a global consensus on the most appropriate reference standard and an acceptable level of agreement for automated RR counters.
– Ministries of health should consider the evidence from this study when deciding whether to scale up new automated RR counters.
Key Role Players:
– Expert pediatricians
– Research assistants
– Videographer
– Research nurse
– Data manager
– Video expert panel members
– Advisory Committee members
– Quality assurance personnel
Cost Items for Planning Recommendations:
– Personnel salaries and benefits
– Training and capacity building
– Data collection and management tools
– Video recording and editing equipment
– Tablets and other electronic devices
– Transportation and logistics
– Quality assurance visits
– Ethical approval fees
– Communication and dissemination of findings

Background: Acute respiratory infections (ARIs), primarily pneumonia, are the leading infectious cause of under-5 mortality worldwide. Manually counting respiratory rate (RR) for 60 seconds using an ARI timer is commonly practiced by community health workers to detect fast breathing, an important sign of pneumonia. However, correctly counting breaths manually and classifying the RR is challenging, often leading to inappropriate treatment. A potential solution is to introduce automated RR counters, which count and classify RR automatically. Objective: This study aims to determine how the RR count of an Automated Respiratory Infection Diagnostic Aid (ARIDA) agrees with the count of an expert panel of pediatricians counting RR by reviewing a video of the child’s chest for 60 seconds (reference standard), for children aged younger than 5 years with cough and/or difficult breathing. Methods: A cross-sectional study aiming to enroll 290 children aged 0 to 59 months presenting to pediatric in- and outpatient departments at a teaching hospital in Addis Ababa, Ethiopia, was conducted. Enrollment occurred between April and May 2017. Once enrolled, children participated in at least one of three types of RR evaluations: (1) agreement—measure the RR count of an ARIDA in comparison with the reference standard, (2) consistency—measure the agreement between two ARIDA devices strapped to one child, and (3) RR fluctuation—measure RR count variability over time after ARIDA attachment as measured by a manual count. The agreement and consistency of expert clinicians (ECs) counting RR for the same child with the Mark 2 ARI timer for 60 seconds were also measured in comparison with the reference standard. Results: Primary outcomes were (1) the mean difference between the ARIDA and reference standard RR counts (agreement) and (2) the mean difference between RR counts obtained by two ARIDA devices started simultaneously (consistency).
Conclusions: Study strengths included a design allowing comparison of both ARIDA and the EC with the reference standard RR count. A limitation is that exactly the same set of breaths was not compared between ARIDA and the reference standard, since ARIDA can take longer than 60 seconds to count RR. Also, manual RR counting, even when aided by a video of the child’s chest movements, is subject to human error and can result in low interrater reliability. Further work is needed to reach global consensus on the most appropriate reference standard and an acceptable level of agreement, to provide ministries of health with evidence to make an informed decision on whether to scale up new automated RR counters.

This study aims to understand whether an ARIDA RR count agrees with an expert panel of pediatricians counting RR by reviewing a video of the child’s chest for 60 seconds (reference standard), for children aged younger than 5 years with cough and/or difficult breathing. The primary objective of this study is to determine the performance of an ARIDA, as defined by agreement and consistency, in children aged younger than 5 years with cough and/or difficulty breathing. The secondary objective is to determine the performance of expert clinicians (ECs) counting RR, as defined by agreement and consistency, in the same population. The third objective is to measure RR fluctuation over time after ARIDA device attachment in normal breathing children aged 2 to 59 months. The study is a cross-sectional study comprising three types of RR evaluations: agreement, consistency, and RR fluctuation over time. The study was conducted in the pediatric in- and outpatient departments at Saint Paul’s Hospital Millennium Medical College in Addis Ababa, Ethiopia. This hospital was selected based on the high incidence of pneumonia in its outpatient and inpatient departments; the interest and willingness of hospital managers to host the study; the availability of Integrated Management of Neonatal and Child Illness (IMNCI)-trained [14] ECs; and the availability of a suitable study room, a reliable electricity supply, and access to treatment including amoxicillin and oxygen. Ethical approval was obtained from the Armauer Hansen Research Institute/ALERT Ethics Review Committee (a biomedical research institute in Ethiopia) on March 7, 2017 (ref. PO02/17), and a favorable ethical opinion was received from the Liverpool School of Tropical Medicine Ethics Committee. Caregivers consented to the study on behalf of all participants by reading and signing the information and consent form.
All children attending in- and outpatient departments at Saint Paul’s Hospital Millennium Medical College in Addis Ababa between April 5 and May 22, 2017, were potential participants in the study and were systematically screened for eligibility. Children aged 0 to <2 months were excluded from the consistency evaluation due to the anticipated difficulty of attaching two devices at once to a small child. Children aged 0 to <2 months and those with fast breathing were excluded from the fluctuation evaluation due to the anticipated difficulty of measuring RR in this group for an extended period of time, and also to isolate the effect of the ChARM attachment on RR from other causes of RR fluctuation. All other children aged younger than 5 years were eligible to participate if they were accompanied by a caregiver aged 18 years or older; were not too agitated to be assessed by a research nurse; did not present with general danger signs, IMNCI referral signs, or device manufacturer safety exclusion criteria (wearing a supportive device at the chest/belly area, skin not intact on the chest/belly, or born before 37 weeks of gestation [<2 months only]); were not inpatients being managed by barrier nursing (such as for severe burns, neutropenia, or severe infectious diseases); and were not advised against research procedures by the supervising clinician. If the first two video expert panel (VEP) members’ RR counts disagreed (>±2 bpm), a third VEP member reviewed the video, and if two out of three counts agreed (≤±2 bpm), the mean of the two closest RR counts was used. If all three VEP members disagreed (>±2 bpm), the video was sent for review to a fourth VEP member. If the fourth VEP member’s count agreed (≤±2 bpm) with any of the first three VEP members’ counts, the mean of the two closest counts was used. If all four panel members disagreed (>±2 bpm), the data from this evaluation were excluded from the agreement analysis.
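The VEP adjudication rule described above can be sketched in code. The function `adjudicate` and its signature are illustrative, not part of the study's tooling: it takes the panel members' RR counts in review order and applies the ±2 bpm agreement rule, returning the mean of the two closest agreeing counts, or `None` when the evaluation would be excluded (or when a further review would be needed but is unavailable):

```python
def adjudicate(counts, tol=2):
    """Resolve a video expert panel (VEP) RR count.

    `counts` holds up to four panel members' counts in review order.
    Two counts "agree" when they differ by no more than `tol` bpm
    (the protocol's +/-2 bpm). After each added reviewer, the two
    closest counts so far are checked; if they agree, their mean is
    returned. If all four reviewers disagree, None is returned and
    the evaluation is excluded from the agreement analysis.
    """
    for n in (2, 3, 4):                # add one reviewer at a time
        if len(counts) < n:
            return None                # further review needed but unavailable
        seen = sorted(counts[:n])
        # closest adjacent pair among the counts reviewed so far
        a, b = min(zip(seen, seen[1:]), key=lambda p: p[1] - p[0])
        if b - a <= tol:
            return (a + b) / 2
    return None                        # all four reviewers disagreed
```

For example, counts of 40 and 41 bpm from the first two reviewers agree immediately, while 40, 45, 50, and 55 bpm would lead to exclusion.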
The primary outcome on which the sample size was based was the agreement between the ARIDA and VEP RR counts. As per Bland and Altman [17], we conducted a precision-based sample size calculation based on the confidence interval for the 95% limits of agreement. The formula estimates the required number of children per age group (n) from the desired width of the confidence interval. Using a normal approximation and allowing a confidence interval of ±0.5 standard deviations of the difference between the two devices, a sample size of 46 children per age group was required for the agreement and consistency evaluations, adjusted to 52 per group to allow for failure to obtain a reference standard count. For the RR fluctuation evaluation, a sample size of 30 children was used. Data analysis for all three RR evaluations was conducted in Stata 13 (StataCorp LLC) and Excel (Microsoft Corp). First, the numbers of children screened, eligible, consented, and enrolled in each type of evaluation were described. Baseline characteristics (age and sex) by screening breathing status (normal/fast) were described for those enrolled. All full-length source videos were reviewed for quality assurance purposes, and descriptive information on video quality was recorded, including for videos where all four VEP members disagreed on the RR count. For the ARIDA and EC agreement and consistency evaluations, the mean difference, root mean square difference, absolute mean difference, proportion of RR counts within ±2 bpm of the reference standard, and positive and negative percentage agreement with 95% confidence intervals were calculated in Stata 13 by age group, and Bland-Altman plots with limits of agreement and 95% confidence intervals by age group and breathing status were created. The percentage of unsuccessful attempts and failures (defined as three unsuccessful attempts) and the mean time to obtain an ARIDA RR count were calculated.
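The precision-based calculation can be reproduced from Bland and Altman's approximation that the variance of an estimated 95% limit of agreement is about 3σ²/n. Solving z·σ·√(3/n) = 0.5σ for n gives roughly 46, matching the figure above. The function name below is illustrative:

```python
def bland_altman_n(ci_half_width_sd=0.5, z=1.96):
    """Children per age group such that the 95% CI around each 95%
    limit of agreement has the desired half-width, expressed in
    standard deviations of the between-method difference.

    Uses Bland & Altman's approximation Var(limit) ~= 3*sigma^2/n,
    so the CI half-width is z*sigma*sqrt(3/n); solving for n gives
    n = 3*(z/half_width)^2.
    """
    return 3 * (z / ci_half_width_sd) ** 2

n = bland_altman_n()  # ~46.1, reported as 46 per age group in the protocol
```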
A per-protocol analysis was used whereby children were excluded from the analysis if an RR count could not be obtained simultaneously by the ARIDA and by the EC, together with a VEP RR reading in which at least two of the panel members were within ±2 bpm of each other. For the RR fluctuation evaluation, the mean differences in RR count between baseline and 1 minute, between 1 and 3 minutes, and between 3 and 5 minutes were calculated. The proportion of children with a fast or normal RR classification at baseline and the changes between RR classifications over time were analyzed. Malaria Consortium and UNICEF (Supply Division and Ethiopia Country Office) conducted quality assurance visits to the research site every 2 weeks during data collection. All data collected from the screening and RR evaluations were checked and verified by the data manager daily. A sample of three videos was sent weekly to an independent study advisor for RR evaluation using WHO IMCI guidelines [3] and to Malaria Consortium HQ for quality assurance. The project had an 11-person Advisory Committee made up of experts on maternal and child health who provided technical oversight and reviewed the study protocol.
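A minimal sketch of the per-pair agreement statistics named above — the mean difference (bias), 95% limits of agreement with approximate confidence intervals, and the proportion of counts within ±2 bpm of the reference standard. The function and data names are illustrative; the routine assumes paired RR counts in bpm and uses the same Var(limit) ≈ 3s²/n approximation as the sample size calculation:

```python
import math

def bland_altman(device, reference, z=1.96):
    """Agreement statistics for paired RR counts (bpm).

    Returns the mean difference (bias), the 95% limits of agreement,
    their approximate 95% CIs (using Var(limit) ~= 3*s^2/n), and the
    proportion of device counts within +/-2 bpm of the reference.
    """
    diffs = [d - r for d, r in zip(device, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    s = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    loa = (bias - z * s, bias + z * s)            # 95% limits of agreement
    se_limit = s * math.sqrt(3 / n)               # SE of each limit
    loa_ci = tuple((lim - z * se_limit, lim + z * se_limit) for lim in loa)
    within_2bpm = sum(abs(d) <= 2 for d in diffs) / n
    return {"bias": bias, "loa": loa, "loa_ci": loa_ci,
            "within_2bpm": within_2bpm}
```

With illustrative data such as `bland_altman([40, 42, 39, 41], [41, 40, 40, 40])`, the bias is the mean of the paired differences and the limits of agreement bracket it at ±1.96 standard deviations.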

The study described aims to determine the agreement between an Automated Respiratory Infection Diagnostic Aid (ARIDA) and an expert panel of pediatricians counting respiratory rate (RR) by reviewing a video of the child’s chest for 60 seconds (reference standard). The study focuses on children aged younger than 5 years with cough and/or difficult breathing. The primary objective is to determine the performance of the ARIDA in terms of agreement and consistency, while the secondary objective is to determine the performance of expert clinicians (ECs) counting RR. The study also includes an evaluation of RR fluctuation over time after ARIDA device attachment. The study was conducted in a teaching hospital in Addis Ababa, Ethiopia, and involved enrolling 290 children aged 0 to 59 months presenting to pediatric in- and outpatient departments. The data collection included three types of RR evaluations: agreement, consistency, and RR fluctuation over time. The study used electronic data collection platforms and video reviews by the expert panel for data analysis. Ethical approval was obtained, and informed consent was obtained from caregivers before enrollment. The study had specific enrollment criteria and exclusion criteria based on age, breathing status, and other factors. The primary outcome measures were the mean difference between the ARIDA and reference standard RR count for the agreement evaluation, and the mean difference between RR counts obtained by two ARIDA devices for the consistency evaluation. The study used a per-protocol analysis and conducted data analysis using statistical software. Quality assurance measures were implemented throughout the study, including quality checks, independent evaluations, and regular visits by external organizations.
AI Innovations Description
The study described aims to determine how an Automated Respiratory Infection Diagnostic Aid (ARIDA) agrees with the count of an expert panel of pediatricians counting respiratory rate (RR) by reviewing a video of a child’s chest for 60 seconds. The study focuses on children aged younger than 5 years with cough and/or difficult breathing.

The primary objective of the study is to determine the performance of the ARIDA in comparison to the expert panel’s count, as defined by agreement and consistency. The secondary objective is to determine the performance of expert clinicians counting RR, as defined by agreement and consistency. Additionally, the study aims to measure RR fluctuation over time after ARIDA device attachment in normal breathing children aged 2 to 59 months.

The study is a cross-sectional study conducted at a teaching hospital in Addis Ababa, Ethiopia. It enrolled 290 children aged 0 to 59 months presenting to pediatric in- and outpatient departments. The children participated in at least one of three types of RR evaluations: agreement, consistency, and RR fluctuation over time.

The agreement evaluation involved comparing the RR count of the ARIDA with the reference standard (expert panel’s count). The consistency evaluation measured the agreement between two ARIDA devices strapped to one child. The RR fluctuation evaluation measured the variability of RR count over time after ARIDA attachment.

The study collected data using an electronic data collection platform and involved trained expert clinicians and a video expert panel. Ethical approval was obtained, and informed consent was obtained from the caregivers of the participating children.

The primary outcomes of the study were the mean difference between the ARIDA and reference standard RR count (agreement) and the mean difference between RR counts obtained by two ARIDA devices started simultaneously (consistency).

In conclusion, this study aims to assess the performance of an ARIDA device in comparison to expert clinicians’ manual RR counting. The findings of this study can provide valuable insights into the potential use of automated RR counters in improving maternal and child health by accurately detecting symptoms of pneumonia in children.
AI Innovations Methodology
The study described in the provided text aims to determine the agreement between an Automated Respiratory Infection Diagnostic Aid (ARIDA) and an expert panel of pediatricians in counting respiratory rate (RR) for children aged younger than 5 years with cough and/or difficult breathing. The study also aims to measure the consistency of RR counts obtained by two ARIDA devices and the RR fluctuation over time after ARIDA attachment.

To simulate the impact of recommendations on improving access to maternal health, a methodology could be developed as follows:

1. Identify the recommendations: Review existing literature, consult with experts, and gather input from stakeholders to identify potential recommendations for improving access to maternal health. These recommendations could include interventions such as increasing the number of healthcare facilities, improving transportation infrastructure, implementing telemedicine services, or providing training for healthcare providers.

2. Define the indicators: Determine the indicators that will be used to measure the impact of the recommendations on improving access to maternal health. These indicators could include metrics such as the number of women receiving prenatal care, the number of skilled birth attendants available, or the distance to the nearest healthcare facility.

3. Collect baseline data: Gather data on the current state of access to maternal health services in the target population. This could involve conducting surveys, interviews, or analyzing existing data sources to obtain information on the indicators identified in step 2.

4. Develop a simulation model: Use the collected data to develop a simulation model that represents the current state of access to maternal health services. This model should incorporate the identified recommendations and their potential impact on the indicators.

5. Simulate the impact of the recommendations: Run the simulation model with different scenarios that represent the implementation of the recommendations. This could involve adjusting parameters such as the number of healthcare facilities, the availability of transportation options, or the training level of healthcare providers. The model should generate outputs that reflect the changes in the indicators of access to maternal health services.

6. Analyze the results: Analyze the outputs of the simulation model to assess the impact of the recommendations on improving access to maternal health. This could involve comparing the indicators between different scenarios or conducting sensitivity analyses to evaluate the robustness of the results.

7. Communicate the findings: Present the findings of the simulation analysis to stakeholders, policymakers, and other relevant parties. This could involve preparing reports, presentations, or visualizations that clearly communicate the impact of the recommendations on improving access to maternal health.
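The seven steps above could be prototyped as a toy scenario simulation. Every parameter value and indicator name below is an illustrative assumption, not study data; a real model would calibrate these relationships from the baseline data collected in step 3:

```python
# Toy simulation of how an intervention scenario shifts an access
# indicator (e.g. the share of women receiving prenatal care).
# All numbers are illustrative assumptions, not study data.

BASELINE = {"prenatal_coverage": 0.55, "facilities_per_10k": 1.2}  # step 3

def simulate(baseline, extra_facilities_per_10k, uplift_per_facility=0.05):
    """Apply one scenario (step 4): each added facility per 10,000
    population is assumed to raise coverage by 5 percentage points,
    capped at 100%."""
    coverage = (baseline["prenatal_coverage"]
                + uplift_per_facility * extra_facilities_per_10k)
    return {"prenatal_coverage": min(coverage, 1.0),
            "facilities_per_10k": (baseline["facilities_per_10k"]
                                   + extra_facilities_per_10k)}

# Step 5: run the model under several scenarios and compare indicators.
for extra in (0, 1, 2):
    result = simulate(BASELINE, extra)
    print(extra, round(result["prenatal_coverage"], 2))
```

Steps 6 and 7 would then compare the indicator values across scenarios (including sensitivity analyses on the assumed uplift) and present the results to stakeholders.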

By following this methodology, policymakers and stakeholders can gain insights into the potential impact of different recommendations on improving access to maternal health. This information can inform decision-making and help prioritize interventions that are most likely to have a positive impact.
