High-fidelity prototyping for mobile electronic data collection forms through design and user evaluation


Study Justification:
– Mobile data collection systems are often difficult to use for nontechnical or novice users.
– Developers of such tools do not adequately involve end users in the design and development process, leading to interaction challenges.
– The study aimed to assess the guidelines for form design using high-fidelity prototypes based on end-user preferences.
– It also investigated the association between the results from the System Usability Scale (SUS) and the Study Tailored Evaluation Questionnaire (STEQ) after evaluation.
– The study recommended practical guidelines for implementing the group testing approach in low-resource settings during mobile form design.
Highlights:
– A high-fidelity prototype was developed using Axure RP 8 to assess user preferences and usability.
– 80% of research assistants (RAs) appreciated the form progress indication, found navigation easy, and were satisfied with error messages.
– The SUS average score was 70.4, indicating above-average usability.
– The STEQ results showed a 70% level of agreement with affirmative evaluation statements.
– There was a strong positive association between SUS and STEQ results.
– The study demonstrated the value of user testing and group testing in low-resource settings.
Recommendations:
– Embrace the group testing approach for assessing user needs in diverse user groups.
– Proper preparation and infrastructure are needed for usability testing.
– Implement practical guidelines for the group testing approach in low-resource settings during mobile form design.
Key Role Players:
– Researchers
– Research assistants
– End users
– Designers
– Developers
– Policy makers
– Project managers
Cost Items for Planning Recommendations:
– Research and development costs
– Training costs for research assistants
– Infrastructure costs for usability testing
– Equipment costs (e.g., mobile phones)
– Travel and accommodation costs for group testing
– Data analysis and reporting costs

The strength of evidence for this abstract is 7 out of 10.
The evidence in the abstract is based on a study with a sample size of 30 research assistants. The study used high-fidelity prototypes to assess form design guidelines and evaluate usability. The results showed a SUS average score of 70.4, indicating above-average usability. The study also found a strong positive association between SUS and STEQ scores. However, the evidence could be strengthened by increasing the sample size and conducting the study with a more diverse group of users. Additionally, providing more details about the methodology and statistical analysis would improve the overall quality of the evidence.

Background: Mobile data collection systems are often difficult to use for nontechnical or novice users. This can be attributed to the fact that developers of such tools do not adequately involve end users in the design and development of product features and functions, which often creates interaction challenges. Objective: The main objective of this study was to assess the guidelines for form design using high-fidelity prototypes developed based on end-user preferences. We also sought to investigate the association between the results from the System Usability Scale (SUS) and those from the Study Tailored Evaluation Questionnaire (STEQ) after the evaluation. In addition, we sought to recommend some practical guidelines for the implementation of the group testing approach particularly in low-resource settings during mobile form design. Methods: We developed a Web-based high-fidelity prototype using Axure RP 8. A total of 30 research assistants (RAs) evaluated this prototype in March 2018 by completing the given tasks during 1 common session. An STEQ comprising 13 affirmative statements and the commonly used and validated SUS were administered to evaluate the usability and user experience after interaction with the prototype. The STEQ evaluation was summarized using frequencies in an Excel sheet while the SUS scores were calculated based on whether the statement was positive (user selection minus 1) or negative (5 minus user selection). These were summed up and the score contributions multiplied by 2.5 to give the overall form usability from each participant. Results: Of the RAs, 80% (24/30) appreciated the form progress indication, found the form navigation easy, and were satisfied with the error messages. The results gave a SUS average score of 70.4 (SD 11.7), which is above the recommended average SUS score of 68, meaning that the usability of the prototype was above average. 
The scores from the STEQ, on the other hand, indicated a 70% (21/30) level of agreement with the affirmative evaluation statements. The results from the 2 instruments indicated a fair level of user satisfaction and a strong positive association, as shown by a Pearson correlation of .623 (P<.01). Conclusions: A high-fidelity prototype was used to give the users experience with a product they would likely use in their work. Group testing was done because of the scarcity of resources, such as the costs and time involved, especially in low-income countries. If embraced, this approach could help assess the needs of diverse user groups. With proper preparation and the right infrastructure at an affordable cost, usability testing could lead to the development of highly usable forms. The study thus makes recommendations on practical guidelines for implementing the group testing approach, particularly in low-resource settings, during mobile form design.

The study participants were 30 RAs, all of whom were collecting data on a maternal and child health project (the Survival Pluss project) in northern Uganda, which is funded by the Norwegian Programme for Capacity Development in Higher Education and Research for Development (NORHED) [26]. Of the RAs, 3 were certificate holders and 9 were diploma holders, whereas 18 were degree holders in various fields, including accounting, agriculture, social work, laboratory services, and nursing. Of these, 23 RAs had been collecting data for a period of 2 years or less, whereas 7 had collected data for a period ranging from 4 to 6 years. All the RAs had used Open Data Kit (ODK) [5,27] to collect data; however, 3 reported having used Tangerine, SurveyMonkey, and OpenMRS in addition to ODK [28]. A Web-based high-fidelity prototype for mobile electronic data collection forms (MEDCFs) was developed between January and February 2018. This prototype was meant to demonstrate the RAs’ design preferences, which had been collected earlier using a mid-fidelity prototype [29,30]. It was also used as a basis for evaluating to what extent these design preferences contribute to the usability of the data collection forms. A high-fidelity prototype is a computer-based interactive representation of the product with a close resemblance to the final design in terms of details and functionality. High-fidelity prototypes test not only the visuals and aesthetics of a product but also the UX aspects of interacting with it [31]. The prototype (see Multimedia Appendix 1) was created in Axure RP 8 without any backend functionality and was sized to fit the Samsung Galaxy J1 Ace phones being used to collect data on the Survival Pluss project, which had a viewport of 320 × 452 pixels. The prototype had 3 main sections structured based on the project’s content.
These consisted of the demographic section, where participants were required to fill in the participant ID, interviewer name, and interviewer telephone number. Section I had list pickers, and section II showed different table designs capturing a child’s sickness record. We explained to the RAs the potential value of the user testing exercise before giving them access to the prototype and to the tasks they were supposed to do. A summary of the entered data on the child's sickness was available for the users to crosscheck and agree or disagree with its correctness, after which they were prompted to submit. Before submission, the users were warned that the data could not be edited once submitted. At this point, the progress bar indicated 100%, meaning that the form had been filled to completion and submitted. The group testing exercise was conducted in February 2018 in Lira, Uganda. The RAs were required to complete some tasks (Multimedia Appendix 2) during the group testing exercise. This was meant to create uniformity in the prototype evaluation and also to measure the time it took each of the RAs to complete the same tasks. In addition to carrying out the tasks, they were also meant to read the feedback given in response to their actions and to respond appropriately until they correctly submitted the form. It was a requirement to complete all the tasks before submitting the form, and the participants were expected to record their start time before and finish time after the testing exercise. A total of 2 observers were present to record the exercise and to answer questions when asked. The start time and end time were recorded for each participant in each session. The prototype evaluation happened immediately after the group testing exercise. This was an ex-post naturalistic evaluation because we were evaluating an instantiated artifact in its real environment, that is, with the actual users and in the real setting [18,32].
The artifact was a high-fidelity prototype, and the actual users were the RAs who were collecting data on mobile phones using ODK, an open-source software (OSS) tool. A total of 2 instruments were used to evaluate the prototype usability: one was the SUS, a standardized questionnaire, and the other was the STEQ. By combining the two, we expected to gain more detailed insight and also to test our generated questionnaire against the standardized one. These 2 posttest questionnaires were administered after the participants had completed the tasks, in a bid to show how users perceived the usability of the data collection forms [33]. The STEQ comprised 13 statements and was developed based on the literature, with the purpose of providing an alternative instrument to the SUS. The statements were based on features such as form progress, simplicity in use, error correction and recovery, and visual appeal, among others. The RAs were required to indicate their level of agreement with the evaluation statements by selecting options, which included strongly disagree, disagree, somewhat agree, agree, strongly agree, and don’t know, tallied as scores of 1, 2, 3, 4, 5, and 6, respectively. The evaluation statements were selected from 4 usability evaluation questionnaires, namely the Computer System Usability Questionnaire [34], Form Usability Scale [35], Questionnaire for User Interaction Satisfaction [36], and statements from the Usability Professional Association [37]. The statements were selected because they could be used to assess usability in mobile data collection forms as defined by the design preferences of the RAs, and all were affirmative statements with positive valence. It has been suggested that participants are less likely to make mistakes by agreeing with negative statements [38], as in the case of a balanced questionnaire consisting of positive and negative statements [39].
However, for the sake of simplicity, we used only affirmative statements, adopting the style of the 4 abovementioned usability evaluation questionnaires. The SUS is a balanced questionnaire used to evaluate the usability of a system and comprises 10 alternating positive and negative statements [40]. The SUS acted as a complementary scale to the STEQ. The SUS has been experimentally proven to be reliable and valid [33] because of its ability to control for acquiescence bias and extreme response bias [38,39]. In acquiescence bias, respondents tend to agree with all or almost all statements in a questionnaire, whereas extreme response bias is the tendency to mark the extremes of rating scales rather than the points near the middle of the scale [38,39]. These biases greatly affect the true measure of an attitude. The word "system" was replaced with the word "form" in some of the statements in both questionnaires. Results from the 2 instruments were compared. Previous studies have shown that irrespective of whether the questionnaires used are balanced or affirmative, the scores from the 2 questionnaires are likely to be similar [38]. This is because there is little evidence to show that the advantages of using balanced questionnaires outweigh the disadvantages, which include misinterpretation of the scales leading to mistakes by the users [38]. The STEQ was summarized using frequencies in an Excel sheet, where the evaluation statements that the majority agreed with were taken as the features the RAs were most satisfied with (Table 1). The SUS scores, on the other hand, were calculated per statement, following the standard procedure [40]. For the positive statements 1, 3, 5, 7, and 9, the score contribution was what the user had selected minus 1. For the negative statements 2, 4, 6, 8, and 10, the score contribution was 5 minus what the user had selected. The total sum of the score contributions was obtained and multiplied by 2.5 [40].
This gave the overall result of the form usability from each participant.

Table 1. The 13 statements in the tailor-made evaluation questionnaire and the number of respondents (n=30) in each category, from strongly disagree to strongly agree. (a) Some respondents did not reply to all statements.
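The SUS scoring rule described above (positive statements contribute the selection minus 1, negative statements contribute 5 minus the selection, with the sum scaled by 2.5) can be sketched as follows. The responses below are illustrative values, not data from the study.

```python
# Sketch of the standard SUS scoring rule. Each of the 10 responses is a
# 1-5 Likert selection; odd-numbered statements are positive, even-numbered
# statements are negative. The 0-40 sum is scaled to a 0-100 score.

def sus_score(responses):
    """Compute the 0-100 SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if i % 2 == 1:          # positive statements 1, 3, 5, 7, 9
            total += r - 1
        else:                   # negative statements 2, 4, 6, 8, 10
            total += 5 - r
    return total * 2.5          # scale the 0-40 sum to 0-100

# A participant who agrees (4) with every positive statement and
# disagrees (2) with every negative one scores (3 * 5 + 3 * 5) * 2.5 = 75.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Averaging these per-participant scores over the 30 RAs is what yields the reported mean of 70.4, against the benchmark average of 68.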

One potential innovation to improve access to maternal health is the use of high-fidelity prototyping for mobile electronic data collection forms. This approach involves end users in the design and development of mobile data collection tools to ensure usability for nontechnical or novice users. The study summarized above used a high-fidelity prototype developed from end-user preferences to assess form design guidelines. The results showed that the prototype had above-average usability, with users appreciating features such as form progress indication, easy navigation, and clear error messages. The study also recommended practical guidelines for implementing the group testing approach, particularly in low-resource settings, during mobile form design. Overall, this innovation can help improve access to maternal health by ensuring that mobile data collection tools are user-friendly and meet the needs of diverse user groups.
AI Innovations Description
The recommendation from the study is to use high-fidelity prototyping for mobile electronic data collection forms to improve access to maternal health. The study found that involving end users in the design and development of product features and functions improves usability and user satisfaction. The high-fidelity prototype, developed using Axure RP 8, demonstrated the users’ design preferences and allowed for evaluation of usability. The study recommends implementing group testing approaches, particularly in low-resource settings, to assess user needs and develop highly usable forms. The study participants were 30 research assistants (RAs) collecting data on a maternal and child health project in northern Uganda. The RAs had varying levels of education and experience in data collection. The high-fidelity prototype was developed based on the RAs’ design preferences and evaluated through tasks and questionnaires. The results showed above-average usability and user satisfaction. Overall, the study recommends using high-fidelity prototyping and group testing approaches to improve access to maternal health through mobile electronic data collection forms.
AI Innovations Methodology
The study focuses on improving access to maternal health through the use of high-fidelity prototyping for mobile electronic data collection forms (MEDCFs). Its objective was to assess the guidelines for form design using high-fidelity prototypes developed based on end-user preferences. The study involved 30 research assistants (RAs) who evaluated the prototype by completing given tasks during a common session.

To simulate the impact of the recommendations on improving access to maternal health, the study used a methodology that involved the following steps:

1. Development of a high-fidelity prototype: A web-based high-fidelity prototype was developed using Axure RP 8. The prototype closely resembled the final design of the MEDCFs in terms of details and functionality.

2. Group testing exercise: The RAs participated in a group testing exercise where they were required to complete tasks using the prototype. The tasks were designed to create uniformity in the evaluation and measure the time taken to complete them. Observers were present to record the exercise and address any questions.

3. Prototype evaluation: After the group testing exercise, the RAs completed two post-test questionnaires to evaluate the usability of the prototype. The first questionnaire used was the System Usability Scale (SUS), a standardized questionnaire that assesses the usability of a system. The second questionnaire was the Study Tailored Evaluation Questionnaire (STEQ), which comprised 13 affirmative statements based on features such as form progress, simplicity in use, error correction and recovery, and visual appeal.

4. Data analysis: The results from the STEQ were summarized using frequencies in an Excel sheet to determine the evaluation statements with the highest agreement. The SUS scores were calculated based on the user’s selection for each statement. The score contributions were summed up and multiplied by 2.5 to give the overall form usability score for each participant.

5. Comparison of results: The results from the STEQ and SUS were compared to assess the association between the two evaluation instruments. The Pearson correlation value was calculated to determine the level of association.
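The correlation in step 5 can be sketched directly from its definition; the paired per-participant scores below are hypothetical values for illustration, not the study's data (which yielded r = .623).

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired scores for 6 participants (not the study's data):
# per-participant SUS scores (0-100) and mean STEQ agreement (1-5 scale).
sus  = [72.5, 65.0, 80.0, 55.0, 70.0, 77.5]
steq = [4.1, 3.6, 4.5, 3.2, 3.9, 4.3]
print(round(pearson(sus, steq), 3))
```

In practice the same coefficient could be obtained with `scipy.stats.pearsonr`, which also returns the P value reported in the study.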

The study found that the high-fidelity prototype had above-average usability, with 80% of the RAs appreciating the form progress indication, finding the form navigation easy, and being satisfied with the error messages. The results from the STEQ and SUS indicated a fair level of user satisfaction and a strong positive association between the two evaluation instruments.

In conclusion, the methodology used in this study involved developing a high-fidelity prototype, conducting a group testing exercise, and evaluating the prototype using post-test questionnaires. This methodology can be used to simulate the impact of recommendations for improving access to maternal health by assessing the usability and user experience of mobile electronic data collection forms.
