Introduction
Many children in developing countries grow up in environments that lack stimulation, leading to deficits in early childhood development. Several efficacy trials of early childhood care and education (ECCE) programmes have demonstrated the potential to improve child development; evidence on whether these effects can be sustained once programmes are taken to scale is much more mixed. This study evaluates whether an ECCE programme shown to be effective in an efficacy trial maintains its effectiveness when taken to scale by the Government of Ghana (GoG). The findings will provide critical evidence to the GoG on the effectiveness of a programme it is investing in, as well as a blueprint for the design and scale-up of ECCE programmes in other developing countries that are expanding their investment in ECCE.

Methods and analysis
This study is a cluster randomised controlled trial in which the order in which districts receive the programme is randomised. A minimum sample of 3240 children and 360 schools will be recruited across 72 district-cohort pairs. The primary outcomes are (1) child cognitive and socioemotional development, measured using the International Development and Early Learning Assessment (IDELA) tool, the Strengths and Difficulties Questionnaire and tasks from the Harvard Laboratory for Development Studies; (2) child health, measured using height-for-age, weight-for-age and weight-for-height Z scores. Secondary outcomes include (1) maternal mental health (measured using the Kessler-10 and the Warwick-Edinburgh Mental Wellbeing Scale) and knowledge of ECCE practices; (2) teacher knowledge, motivation and teaching quality (measured with classroom observation); (3) parental investment (measured using the Family Care Index, the Home Observation for Measurement of the Environment and the Child-Parent Relationship Scale); (4) water, sanitation and hygiene (WASH) practices; (5) acute malnutrition (measured using mid-upper arm circumference). We will estimate unadjusted and adjusted intent-to-treat effects.

Ethics and dissemination
Study protocols have been approved by the ethics boards of University College London (21361/001) and Yale University (2000031549) and by the Ghana Health Service Ethics Review Committee (028/09/21). Results will be made available to participating communities, funders, the wider public and other researchers through peer-reviewed journals, conference presentations, social and print media and various community/stakeholder engagement activities.

Trial registration numbers
ISRCTN15360698, AEARCTR-0008500.
The programme will be rolled out at scale in Northern Ghana, where the prevalence of chronic malnutrition is nearly twice the national average (33% vs 19%) and 60%–70% of the population were classified as poor in 2017, compared with 23% nationally.13 14 Baseline data from the efficacy trial sample show that one-third of children (aged 3–5 years) had acute malnutrition and had experienced diarrhoea in the previous 30 days; fewer than one-third could identify a shape such as a triangle, and even fewer were able to sort figures by colour and shape. Around half of the mothers were at risk of depression, and only 13% reported that anyone in the household had played with the target child in the previous 3 days. The children live in households of, on average, 10 members, of whom only 20% have ever attended school and even fewer (8%) are literate. Household sanitary infrastructure is largely lacking: nearly 80% report practising open defecation. Access to public infrastructure is dismal: one in three communities does not have access to electricity, and more than half cannot be reached for 1–2 months of the year owing to the lack of paved roads.15

The programme developed by the NGO Lively Minds aims to provide preschool-age children with the foundations they need to succeed in school. As young children transition to school, they draw on multiple skills to succeed in the classroom, including early academic and behavioural skills, socioemotional development and aspects of physical health such as motor development.16 17 Exposure to quality ECCE has the potential to improve school readiness across these various domains. Current conceptualisations of ECCE quality, however, rest not only on children's experiences in the classroom but also on their home environments, and point to the critical role played by warm, nurturing, responsive and stimulating relationships with caregivers at both home and school. Attachment theory focuses on the importance of consistent and sensitive interactions with teachers and parents18; constructivist learning theories focus on the development of cognitive skills through engagement in age-appropriate activities.19 Further, regular communication between parents and schools allows them to work together towards children's learning and development and has been shown to improve longer-term academic outcomes for preschool and kindergarten (KG) children.20 21

Building on this theoretical framework, the Lively Minds intervention has four core aims:

To this end, the intervention consists of three key components:

Figure 1 summarises the conceptual framework underpinning this programme, highlighting the key channels through which the programme is expected to impact children's developmental outcomes and school readiness. A critical and distinguishing feature of the programme is that, by engaging KG teachers and mothers, it has the potential to improve the home and preschool environments simultaneously; typically, ECCE programmes are designed to target one or the other.

Figure 1 Conceptual framework underpinning the Lively Minds intervention. KG, kindergarten; PS, play schemes.

The programme is owned and implemented by the Ghana Education Service (GES). This is a key difference between the version of the programme evaluated in the efficacy trial and the current version.
At that time, the programme was owned by the NGO Lively Minds, which engaged the GES in implementation but remained in the driving seat throughout, retaining control over programme content and implementation strategy and leading on all aspects of programme implementation and monitoring on the ground. Control and responsibility for all of these elements have now been transferred to the GES. Lively Minds provides technical assistance; for example, it helps districts set up the necessary monitoring and accounting databases, attends key training sessions of district teams and KG teachers, responds to queries from the GES Working Group in charge of the programme and accompanies district officials on some of the monitoring visits.

Implementation is conducted at the district level; that is, once a district is enrolled in the programme, all of the KGs in the district are invited to participate. Before enrolment of districts begins, the NGO Lively Minds familiarises the regional GES office (there are currently 16 regions in Ghana, made up of 261 districts) with the Lively Minds Programme and trains a team tasked with mobilising the districts. Following this, there are five main implementation stages, starting from enrolment of the district into the programme. These are summarised in figure 2. Online supplemental material S1 provides a more detailed explanation of what each step entails.

Figure 2 GES-Lively Minds (LM) implementation model. GES, Ghana Education Service; PS, play schemes; PTA, Parent Teacher Association; KG, kindergarten.

This study is an open-label cluster randomised controlled trial (CRCT), accommodating the phased and complete roll-out of the GES-LM Programme. The programme will be scaled to all of Northern Ghana's 60 districts over 10 phases between September 2021 and September 2024, with each phase (equivalent to one school term) covering a district group (DG) of six districts (figure 3A). Researchers at the Institute for Fiscal Studies (IFS) randomised districts to DGs using R V.4.1.1 (see the illustrative sketch below). DGs will be enrolled into the evaluation study from January 2022 to September 2024 (hence leaving out DG1) in six evaluation cohorts (ECs) (figure 3B). In each EC, one DG is allocated at random to the treatment group and one to the control group. DG6 and DG7 are used first as control and later as treatment DGs, relying on different cohorts of children (those starting school in 2022 and those starting in 2023) in different schools. DG10 will provide the control group for EC5 and EC6, using different schools. Control DGs will begin to receive the programme three terms after being enrolled into the study, once endline data collection has been completed (with the exception of EC6, whose control DG will receive the programme after two terms). Baseline data collection will start at the end of January 2022, after participant enrolment (figure 3C). Data collection will proceed continuously through to the end of the project in early 2025. Figure 4 shows this design in a flow diagram, while figure 5 shows the participant timeline.

Figure 3 Project design: (A) intervention roll-out; (B) treatment and control allocation and cohort selection; (C) baseline and endline data collection timing. DG, district group; EC, evaluation cohort.

Figure 4 Randomisation flow diagram. EC, evaluation cohort.

Figure 5 Participant timeline.

Within each district, we plan to randomly sample five programme-eligible preschools (excluding efficacy trial schools) and eight children who are enrolled, or about to enrol, in each sampled preschool.
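To make the allocation procedure concrete, the sketch below shows one way the district randomisation could be implemented. It is a minimal illustration in Python rather than the R V.4.1.1 code used by the IFS researchers; the district labels, seed and the EC pairing of DGs are our own simplifying assumptions and do not reproduce the study's actual assignment.

```python
# Minimal sketch of the district randomisation; the study's actual allocation
# was carried out in R V.4.1.1 by IFS researchers. The seed, district labels
# and the EC pairing of DGs below are illustrative assumptions only.
import random

random.seed(2021)  # illustrative seed

# 60 districts of Northern Ghana (placeholder labels).
districts = [f"district_{i:02d}" for i in range(1, 61)]
random.shuffle(districts)

# Split the shuffled list into 10 district groups (DGs) of six districts each;
# the DG index fixes the school term (phase) in which the programme arrives.
district_groups = {f"DG{g + 1}": districts[g * 6:(g + 1) * 6] for g in range(10)}

# Pair DGs into evaluation cohorts (ECs) and draw at random which member of
# each pair is enrolled as treatment and which as control. The pairing below
# is a simplification: in the actual design DG1 is not evaluated, and DG6,
# DG7 and DG10 serve more than one cohort using different schools.
ec_pairs = [("DG2", "DG6"), ("DG3", "DG7"), ("DG4", "DG6"),
            ("DG5", "DG7"), ("DG8", "DG10"), ("DG9", "DG10")]
for ec, pair in enumerate(ec_pairs, start=1):
    treated, control = random.sample(pair, k=2)
    print(f"EC{ec}: treatment={treated}, control={control}")
```

The essential feature preserved here is that the unit of randomisation is the district (grouped into DGs), so that all KGs in a treated district are exposed to the programme at the same time.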
In DG6, DG7 and DG10, we will sample children and schools twice (once for each cohort of children). When we sample for the second time, we will exclude schools previously enrolled into the study. This gives us a minimum total sample of 360 preschools and 2880 children and households.

Schools will be selected at random from a complete list of schools in each district provided by the GES (see the illustrative sketch below). We will focus on government schools and exclude atypical schools, specifically boys-only and girls-only schools (more than 99% of government schools are mixed gender), as well as schools with fewer than 20 enrolled students. In doing so, we exclude 20 schools from the sampling frame (20 small schools, one of which has no boys enrolled in KG).

Our study participants will be children who attend or are due to attend KG, and their households. These will be identified from school enrolment lists, except in EC1 where, owing to the COVID-19 pandemic, school openings differed and enrolment lists were not readily available; in this EC only, we therefore sampled from a census surrounding the school. To be eligible for the study, children must fulfil both of the following criteria (the same as in the efficacy trial):

Where more than one child is eligible within a single household, we will randomly select one child to be part of the study. Primary caregivers (PCGs) will be asked for their consent (and to give consent on behalf of their child) before being enrolled in the study. A model consent form and information sheet for PCGs are provided in online supplemental materials S2 and S3.

In each sampled unit (school), we intend to conduct four separate types of surveys/assessments, each to be implemented both at baseline and at endline, two to three school terms later. The surveys/assessments are: (1) child assessment, (2) PCG interview, (3) KG teacher interview and classroom observation, (4) community survey. These surveys will be collected by well-trained and experienced local interviewers, who will not be informed of treatment allocation. In order to gather evidence on district-level preparedness for the programme (for later heterogeneity analyses), we additionally plan to conduct a one-off survey of the district official in charge of preschool programmes in each district. We will also collect secondary data available on districts, such as languages spoken or voter turnout.

We have three sets of primary outcomes:

Our secondary outcomes are as follows:

In selecting measures for our key outcomes, we prioritised those which have already been validated and used in the study context (for example, IDELA22 26) or in a similar context. Since very little related work has been conducted in our study context, this was not possible for all outcomes that we want to capture. For these outcomes, we next identified measures which have been widely used and validated across multiple contexts, for example, the Kessler-10 and the FCI. We will adapt these measures to the study context following best practice27: tools will be reviewed for sociocultural relevance and linguistic accuracy through consultation with local professionals and modified accordingly; tools will be translated and back-translated into all of the main languages spoken in the study communities. Full validation of the tools before administration of the baseline is beyond the scope of this study.
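The within-district sampling described above could proceed along the following lines. This is a minimal sketch, assuming hypothetical pandas data frames for the GES school list and for a school's enrolment list; all column names (is_government, kg_enrolment, household_id and so on) are our own and not the study's.

```python
# Minimal sketch of within-district school and child sampling, assuming
# hypothetical pandas data frames; all column names are ours, not the study's.
import pandas as pd

def sample_schools(schools: pd.DataFrame, n_schools: int = 5, seed: int = 1) -> pd.DataFrame:
    """Draw programme-eligible KGs at random from the GES school list for a
    district, excluding single-sex schools, schools with fewer than 20 enrolled
    students, efficacy trial schools and schools sampled for an earlier cohort."""
    eligible = schools[
        schools["is_government"]
        & schools["is_mixed_gender"]
        & (schools["kg_enrolment"] >= 20)
        & ~schools["in_efficacy_trial"]
        & ~schools["previously_sampled"]
    ]
    return eligible.sample(n=n_schools, random_state=seed)

def sample_children(enrolment: pd.DataFrame, n_children: int = 8, seed: int = 1) -> pd.DataFrame:
    """Draw eligible children at random from a school's enrolment list, keeping
    at most one eligible child per household before the final draw."""
    one_per_household = (
        enrolment.sample(frac=1, random_state=seed)   # shuffle rows
                 .drop_duplicates(subset="household_id")
    )
    return one_per_household.sample(n=n_children, random_state=seed)
```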
However, all outcome measurement tools will be piloted on small samples in the study area, feedback on their performance will be collected from the surveyors and further adaptations will be made. As part of the training, the survey team will be required to achieve inter-rater reliability of at least 0.8 (κ statistic) on the child assessment and classroom observation tools. Once data are collected, we will assess the reliability and validity of the outcome measures by examining whether any items have a large share of missing responses, whether items show sufficient variability, whether correlations with other variables go in the expected direction and whether measures exhibit a sufficient degree of internal consistency (Cronbach's α of at least 0.7). We will also assess the dimensionality of the measures using exploratory and confirmatory factor analysis, and the functioning of individual items using Item Response Theory to generate item characteristic curves. More detailed information on these outcomes is provided in online supplemental material S4. For details on data management, please refer to the project's management registration: https://ifs.org.uk/uploads/Research registraion form.pdf. Data availability is outlined in online supplemental material S5.

As our design does not deviate in a substantive way from a conventional two-arm CRCT, we use standard CRCT methods to assess statistical power. For the power analysis, we consider the district as a cluster, providing us with 54 clusters. We consider the full sample size of 2880 children (five schools per district-cohort, eight children per school). We leverage data available from the efficacy trial to better inform how covariates increase our power to detect effects, including expected loss to follow-up, which we set at 6%. We calculate minimum detectable effects (MDEs) using the following formula28 (a computational sketch is given below):

\[ \mathrm{MDE} = \left(t_{\alpha/2} + t_{\beta}\right)\,\sigma\,\sqrt{\frac{2}{n}\left[m\,\rho\left(1 - R_c^2\right) + (1-\rho)\left(1 - R_p^2\right)\right]} \]

where α denotes the significance level, β power, ρ the unconditional intracluster correlation (ICC), t the critical value of the t-distribution, σ the SD of the outcome, n the number of children per arm, m the number of children per cluster, \(R_c^2\) the proportion of the cluster-level variance component explained by covariates and \(R_p^2\) the individual-level equivalent. One can interpret the effect of \(R_c^2\) and \(R_p^2\) as altering the 'variance inflation factor' of the power calculation: when they equal zero, the formula reduces to the standard CRCT one. We set α to 0.05 and β to 0.8. We estimate the ICC for each outcome using data from the efficacy trial (estimates range from 0.011 to 0.110). Note that we cannot conduct the same exercise for all of our primary outcomes, as some were not included in the original trial.

The MDEs for some of our primary and secondary outcomes measured at the child/household level are shown in table 1. The power to detect effects at the school level is significantly more limited.

Table 1 Minimum detectable effects (MDEs). Each outcome controls for its baseline value only.

The MDE for the cognition score will allow us to determine whether the scaled-up, government-owned and government-run version of the Lively Minds Programme is at least as effective as the NGO-run, smaller-scale version evaluated in the efficacy trial in its key aim of improving the cognitive development of preschool children. However, if the scale-up results in a significant reduction in effectiveness, we will not be able to determine whether the programme continues to have an impact. It should be noted that efficacy trial impacts were largest for the poorer children in the sample.
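The MDE formula above can be evaluated directly. The sketch below is a minimal illustration, not the study's power-calculation code; the degrees of freedom used for the t critical values and the parameter values in the example call are our own assumptions.

```python
# Minimal sketch of the MDE formula above, not the study's power-calculation
# code. The degrees of freedom for the t critical values and the parameter
# values in the example call are illustrative assumptions.
import numpy as np
from scipy import stats

def mde(sigma, n, m, rho, r2_cluster, r2_individual, alpha=0.05, power=0.80, df=None):
    """Minimum detectable effect for a two-arm cluster RCT with covariates.

    sigma          standard deviation of the outcome
    n              children per arm
    m              children per cluster
    rho            unconditional intracluster correlation (ICC)
    r2_cluster     share of cluster-level variance explained by covariates
    r2_individual  share of individual-level variance explained by covariates
    df             degrees of freedom for the t critical values (assumed here
                   to be the total number of clusters minus 2)
    """
    if df is None:
        df = 2 * (n // m) - 2                       # clusters per arm = n / m
    t_alpha = stats.t.ppf(1 - alpha / 2, df)        # two-sided test
    t_power = stats.t.ppf(power, df)
    variance = (2.0 / n) * (m * rho * (1 - r2_cluster)
                            + (1 - rho) * (1 - r2_individual))
    return (t_alpha + t_power) * sigma * np.sqrt(variance)

# Illustrative call: a standardised outcome (sigma = 1), 27 district-cohort
# clusters per arm with 40 children each (five schools x eight children),
# an ICC of 0.05 and modest covariate R-squared values.
print(round(mde(sigma=1.0, n=27 * 40, m=40, rho=0.05,
                r2_cluster=0.3, r2_individual=0.2), 3))
```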
On average, the areas where the scale-up is taking place are poorer than those where the efficacy trial took place, so even if there is some loss of effectiveness as a result of the scale-up, we should still be able to detect impacts given our power.

For our primary and secondary outcomes, we will estimate an intent-to-treat effect of the programme. Our design is the same as a CRCT stratified by EC. As such, we follow the standard estimation technique for two-arm stratified CRCTs. For the outcome variable Y of individual i in district j at date t, we estimate the following:

\[ Y_{ijt} = \gamma_t + \theta\,\mathrm{Treatment}_{ijt} + X_{ijt}'\delta + \varepsilon_{ijt} \]

where \(\gamma_t\) is an evaluation cohort fixed effect, \(\mathrm{Treatment}_{ijt}\) is a binary indicator variable equal to 1 if individual i is in a treatment district at date t, \(X_{ijt}\) is a set of control variables and \(\varepsilon_{ijt}\) is an error term. The treatment effect we identify is θ, and SEs will be clustered at the school level (see the sketch at the end of this section). We will compute Romano-Wolf stepdown p values to adjust for multiple hypothesis testing. Hypotheses will be arranged within families of outcomes, that is, each set of research questions separately. The set of controls \(X_{ijt}\) will be identified by applying a double-lasso procedure to the covariates we collect as part of our data collection. We will present estimates of the treatment effect both with and without these control variables. In addition to the controls that the double-lasso procedure selects, we will use (at a minimum) the following controls:

We will explore heterogeneity in the treatment effect in two distinct ways. First, we are interested in treatment heterogeneity by baseline characteristics of the child, their home environment and features of the environment at the preschool and district level. Given the large number of relevant dimensions, we will use recent methodological advances29 to assess heterogeneity across all of these dimensions in a rigorous and data-driven way. Second, we will assess how study impacts vary by implementation fidelity and compliance along different dimensions. Given that intervention fidelity measures are only observed in treatment districts, we will also examine heterogeneity using the data collected as part of the district official survey, focusing on those covariates that are most predictive of high fidelity in treatment districts.

While there is always a risk of unintended consequences in any trial, such a risk is minimal for this type of intervention. However, if there is any clear evidence of harm, the study will be halted in accordance with international ethical guidelines for medical research. This is a large study with many collaborators, and the data gathered will be able to answer more scientific questions than those outlined in this protocol. The study team expects to conduct and publish such additional analyses.

Patients or the public were not involved in the design of our research.
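The estimating equation above maps directly onto an off-the-shelf OLS routine with cluster-robust standard errors. The following is a minimal sketch under our own assumptions about variable names; it is not the study's analysis code, and in the actual analysis the control set will be selected by double-lasso and Romano-Wolf stepdown p values will be computed on top of these estimates.

```python
# Minimal sketch of the intent-to-treat regression above, not the study's
# analysis code. The column names and simulated data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def estimate_itt(df: pd.DataFrame):
    """OLS of the outcome on the treatment indicator with evaluation-cohort
    fixed effects and standard errors clustered at the school level; the
    coefficient on `treatment` is the treatment effect theta."""
    model = smf.ols("y ~ treatment + C(ec) + baseline_y", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})

# Simulated data, purely to show the call signature: 360 schools, 8 children
# per school, with treatment assigned here to every second school.
rng = np.random.default_rng(0)
df = pd.DataFrame({"school_id": np.repeat(np.arange(360), 8)})
df["ec"] = df["school_id"] % 6 + 1
df["treatment"] = (df["school_id"] % 2).astype(int)
df["baseline_y"] = rng.normal(size=len(df))
df["y"] = 0.2 * df["treatment"] + 0.5 * df["baseline_y"] + rng.normal(size=len(df))

print(estimate_itt(df).params["treatment"])
```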