Background: This study aimed to evaluate the real-world implementation of the Friendship Bench (FB) – an evidence-based brief psychological intervention delivered by community health workers (CHWs) – three years after its implementation in three city health departments in Zimbabwe. Implementation sites were evaluated according to their current performance using the RE-AIM framework, making this one of the first evaluations of a scaled-up evidence-based psychological intervention in sub-Saharan Africa (SSA). Methods: Using the RE-AIM guide (www.re-aim.org), the authors designed quantitative indicators based on existing FB implementation data. Thirty-six primary health care (PHC) clinics in Harare (n=28), Chitungwiza (n=4) and Gweru (n=4) were included. Among these clinics, 20 were large comprehensive health care centers, 7 were medium-sized clinics (mostly maternal and child healthcare), and 9 were small clinics (providing basic medical care and acting as referral clinics). Existing data from these clinics, supplemented by additional data collected through interviews and field observations, were used to investigate and compare the performance of the FB across clinics. The focus was on the RE-AIM domains of Reach, Adoption, and Implementation. Results: Small clinics achieved 34% reach, compared to large (15%) and medium clinics (9%). Adoption was high in all clinic types, ranging from 59% to 71%. Small clinics led the Implementation domain with 53%, followed by medium-sized clinics (43%) and large clinics (40%). Small clinics performed better on all indicators, and differences in performance between small and large clinics were significant. Program activity and data quality depend on ongoing support for delivering agents and buy-in from health authorities. Conclusion: The Friendship Bench program was implemented over three years, transitioning from a research-led implementation program to a locally led one. The Reach domain showed the largest gap across clinics, with larger clinics performing poorly relative to smaller clinics, and should be a target for future implementation improvements. Program data need to be integrated into existing health information systems. Future studies should seek to optimize scale-up and sustainment strategies to maintain effective task-shared psychological interventions in SSA.
The aim of this study was to evaluate the implementation of the FB program three years post scale-up using the RE-AIM framework. To gain an overview of FB activities at the 36 PHC clinics, we carried out a review of the FB data that should have been routinely collected in a data collection book at each clinic since 2016. However, we found that FB-related data since the start of the scale-up exercise was not reliably available. When we investigated the reasons for the lack of reliable data, several factors became apparent: a) CHWs had not been trained in data collection and responsibilities were unclear; b) data was collected only from those clinics whose CHW supervisors received continual guidance from FB research team members, which was happening only in Harare; c) FB data was not integrated into routine clinic data collection efforts and was therefore not prioritized. Since reliable routine FB data was unavailable, we restructured our research approach, which required a deviation from our protocol [22]. Instead of drawing on three years of data collection (2016-2019), we decided to collect fresh data in 2019, specifically focusing on the RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) indicators described below. We based this decision on the assumption that capturing activity for the month prior to data collection would reflect the usual, or at least minimum, level of activity, as no efforts had been made to increase activity levels since the scale-up in 2016.

Using the RE-AIM guide (www.re-aim.org), we designed quantitative draft indicators based on the availability of Friendship Bench implementation data, such as PHC clinic user numbers, number of clinic staff, catchment area size, number of CHWs, FB program user numbers, data pertaining to program usage (e.g., frequency of consultations), FB-related tools (benches, questionnaires, notebooks), and frequency of supervision meetings for CHWs and support group meetings for clients. The research team decided on the final indicator definitions in an iterative process, based on our understanding of the RE-AIM framework's definitions, during a two-day meeting. The team consisted of experienced international and local mental health researchers with expertise in program development, implementation, and evaluation. Three members of the research team had successfully developed and scaled up evidence-based practices (RA, DC, RV). In total, we created 16 indicators (Table 1) covering three of the five domains (Reach, Adoption, Implementation), which are described below. It was not possible to cover all five domains of the RE-AIM framework due to the lack of reliable data described above. Table 1 lists all indicators and descriptive details for the three chosen RE-AIM domains.

Based on the finalized indicators, an FB-specific RE-AIM interview guide was designed, with multiple versions for different interview partner groups (see Additional file Table 2), to elicit the additional information needed. We planned to confirm all statements made by stakeholders by requesting to be shown notebooks, notes, registries, filled-out questionnaires, or any other applicable documents. This study was carried out in 36 PHC clinics in three cities: Harare (n=28), Chitungwiza (n=4), and Gweru (n=4); the number and distribution of clinic types across the three cities are shown in Table 2.
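To illustrate how indicators of this kind could be operationalized, the sketch below shows one possible way to compute a Reach percentage and a pair of binary indicators from clinic-level counts. It is a minimal illustration only: the field names (fb_users_month, clinic_users_month, has_bench, has_data_book) and the specific indicators shown are hypothetical placeholders, not the study's actual data schema or indicator set.

```python
# Minimal sketch of per-clinic indicator scoring (hypothetical field names).
from dataclasses import dataclass


@dataclass
class ClinicMonth:
    fb_users_month: int      # clients seen on the Friendship Bench in the month
    clinic_users_month: int  # all clinic attendees in the same month
    has_bench: bool          # physical bench present on the premises
    has_data_book: bool      # FB data collection book in use


def reach_percent(c: ClinicMonth) -> float:
    """Reach-style indicator: FB program users as a % of all clinic users."""
    if c.clinic_users_month == 0:
        return 0.0
    return 100.0 * c.fb_users_month / c.clinic_users_month


def binary_indicators(c: ClinicMonth) -> dict:
    """Binary (0 = not present, 1 = present) indicators, e.g. for Implementation."""
    return {"bench_present": int(c.has_bench),
            "data_book_present": int(c.has_data_book)}


example = ClinicMonth(fb_users_month=34, clinic_users_month=100,
                      has_bench=True, has_data_book=False)
print(reach_percent(example))      # 34.0 (%)
print(binary_indicators(example))  # {'bench_present': 1, 'data_book_present': 0}
```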
All 36 PHC clinics were part of the FB scale-up process in 2016, in which CHWs were mandated to take the manualized FB training. CHWs are attached to PHC clinics, which cater for the needs of communities in high-population-density areas ("townships") characterized by informal income-generating activities. Depending on their size, PHC clinics serve between 20,000 and 80,000 people from the most socio-economically disadvantaged sectors of the population and are defined as large (polyclinics), medium (family health clinics), and small (satellite) clinics [23]. CHWs are present at all clinics and carry out health promotion at clinic level as well as outreach activities. CHWs are overseen by health managers (District Health Promotion Officers – DHPOs). Clinic size determines how many CHWs are attached (currently between 1 and 14). CHWs who had prior experience with the FB program through their participation in the FB RCT [6] were assigned a peer-supervisory role in the PHC clinics they were attached to; not all clinics had such a peer supervisor. Peer supervisors support other CHWs with referral issues, regular debriefing, and data collection. Depending on the size of the clinic, different numbers of wooden benches (Friendship Benches) are placed on the clinic premises. CHWs see clients from Monday to Thursday mornings at the clinic and at other times during the week in the informal setting of the community. Patients waiting for services at the PHC clinic are sensitized about mental health and the FB program by FB CHWs trained as "mobilizers". Mobilizers can refer patients to the CHW on the bench, who administers a locally validated screening tool, the Shona Symptom Questionnaire (SSQ-14) [24]. Clients who score above the cut-off and/or wish to receive the FB program are given psychoeducation and problem-solving therapy (PST). The intended FB workflow and its steps are described in the patient flow chart (Fig. 1). Clients are encouraged to come back for up to four to six follow-up sessions. All clients are invited to join a peer-led support group which focuses on income-generating activities such as crocheting bags out of recycled plastic or community gardening. Group meetings happen weekly on the clinic grounds and are facilitated by the CHWs.

Fig. 1 Friendship Bench patient flow in a PHC clinic. Authors RV, CC, SM, JT, and DC are affiliated with Friendship Bench and therefore have permission to use the company logo.

We investigated FB activities in 20 large, 7 medium, and 9 small clinics, as shown in Table 2. The study was authorized by the city health authorities who run the PHC clinics and received ethics approval from the Medical Research Council of Zimbabwe (MRCZ). All 36 clinic leads were informed about the purpose and duration of the study and the key informant groups [CHWs, CHW supervisors, nurses in charge, and DHPOs]. Four trained research assistants collected data between October and November 2019. Two research teams visited clinics first in Harare, then in Gweru and Chitungwiza. Two consecutive days were spent at each clinic to conduct all questionnaire-based interviews, and each team covered two clinics per week. Because each clinic had varying numbers of CHWs and time was constrained, half of all FB counsellors at each clinic were selected randomly in their presence and subsequently interviewed.
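As a rough illustration of the screening step in this workflow, the sketch below encodes the referral decision described above. It is not the program's actual software: the function name is invented for illustration, and the cut-off value is a placeholder standing in for the locally validated SSQ-14 cut-off, which is not restated here.

```python
# Hypothetical sketch of the Friendship Bench screening decision.
SSQ14_CUTOFF = 9  # placeholder; the program uses the locally validated cut-off


def offer_fb_sessions(ssq14_score: int, client_requests_fb: bool,
                      cutoff: int = SSQ14_CUTOFF) -> bool:
    """Clients scoring above the cut-off and/or wishing to receive the program
    are offered psychoeducation and problem-solving therapy (PST)."""
    return ssq14_score > cutoff or client_requests_fb


# Example: a client scoring above the placeholder cut-off is offered PST.
print(offer_fb_sessions(ssq14_score=11, client_requests_fb=False))  # True
```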
All FB program activities for the one-month period prior to data collection were investigated, and data was entered into KOBOtool (http://support.kobotoolbox.org/) to allow for complete data collection. CHW notes and notes from the CHW supervisors were read to verify the responses given in the interviews. Data was verified on site after collection and then uploaded daily onto a password-protected cloud. Only the local research team had access to the database.

To estimate the performance achieved at each clinic, we used descriptive and inferential statistics to analyze the quantitative data. The indicators within each domain were weighted equally. All indicators were rated on a binary scale (0 = not present, 1 = present), except for the two Reach indicators, which were assigned percentage values. Indicator scores represented the level of performance; mean results per clinic group are presented. Additionally, individual indicator scores were summed, and each clinic received a total summary score for each domain, which was ranked according to a procedure suggested by Farris et al. [25]. The higher a clinic's score or percentage value, the higher it ranked, and thus the better its performance in that domain. Within each indicator and domain, there were cases where several clinics had the same summary score and therefore shared the same rank. The mean of all three domain ranks per clinic formed a composite ranking, which represented the overall level of FB site performance (see Additional file Table 1). We ranked all clinics again based on this composite score, with a lower score meaning a higher rank. Inter-item correlations of the domains were calculated. Considering that data was collected from all clinics participating in this study and that there was homogeneity of variance across the clinics, an analysis of variance (ANOVA) was used to determine any significant differences among the clinics' scores and differences according to type of clinic. Additionally, a Tukey post hoc test was used to confirm the differences between clinics individually and when aggregated as type groups. To establish whether clinic type had an influence on the performance of an FB site, we aggregated the clinics according to their types (large, medium-sized, small) and compared their performance using their overall composite ranking (mean of all three domain ranks).
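To make the ranking and between-group comparison concrete, here is a minimal sketch under stated assumptions: the data frame (clinic names, clinic types, and per-domain scores) is invented for illustration, and the snippet uses pandas, SciPy, and statsmodels as one possible toolchain, not necessarily the software used in the study.

```python
# Minimal sketch of domain ranking, composite ranking, ANOVA, and Tukey post hoc
# on hypothetical per-clinic data.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-clinic domain scores (Reach as %, others as summed indicator scores).
df = pd.DataFrame({
    "clinic": ["A", "B", "C", "D", "E", "F"],
    "clinic_type": ["large", "large", "medium", "medium", "small", "small"],
    "reach": [15.0, 12.0, 9.0, 10.0, 34.0, 30.0],
    "adoption": [4, 5, 4, 3, 5, 5],
    "implementation": [3, 4, 3, 4, 5, 5],
})

domains = ["reach", "adoption", "implementation"]

# Rank clinics within each domain (higher score = better = rank 1); tied clinics share a rank.
for domain in domains:
    df[f"{domain}_rank"] = df[domain].rank(ascending=False, method="min")

# Composite ranking: mean of the three domain ranks per clinic (lower = better overall).
df["composite"] = df[[f"{d}_rank" for d in domains]].mean(axis=1)
df["overall_rank"] = df["composite"].rank(method="min")

# One-way ANOVA of the composite score across clinic types, followed by a Tukey post hoc test.
groups = [g["composite"].values for _, g in df.groupby("clinic_type")]
print(f_oneway(*groups))
print(pairwise_tukeyhsd(endog=df["composite"], groups=df["clinic_type"]))
```

In this sketch, ties are handled with the "min" ranking method so that clinics with identical scores share a rank, mirroring the tied ranks described above; a real analysis would substitute the study's actual 16 indicator scores for the toy values shown here.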