Implementation of a National Health Insurance (NHI) in South Africa requires a reliable, standardized health information system that supports Diagnosis-Related Groupers for reimbursement and resource management. We assessed the quality of inpatient health records, the availability of standard discharge summaries and coded clinical data, and the congruence between inpatient health records and discharge summaries in public-sector hospitals, to determine how well these records can support NHI reimbursement and resource management. We undertook a cross-sectional health-records review of 45 representative public hospitals, consisting of seven tertiary, 10 regional and 28 district hospitals, in 10 NHI pilot districts representing all nine provinces. Data were abstracted from a randomly selected sample of 5795 inpatient health records from the surgical, medical, obstetrics and gynaecology, paediatrics and psychiatry departments. Quality was assessed for 10 pre-defined data elements relevant to NHI reimbursements by comparing information in source registers, patient folders and discharge summaries for patients admitted in March and July 2015. Cohen's and Fleiss' kappa coefficients (κ) were used to measure agreement between the sources. While 3768 (65%) of the 5795 inpatient records contained a discharge summary, only 835 (15%) of diagnoses were coded using ICD-10 codes. Although most records had correct patient identifiers [κ: 0.92; 95% confidence interval (CI) 0.91-0.93], notable inconsistencies were observed between the registers, patient folders and discharge summaries for some data elements: attending physician's signature (κ: 0.71; 95% CI 0.67-0.75); results of the investigation (κ: 0.71; 95% CI 0.69-0.74); patient's age (κ: 0.72; 95% CI 0.70-0.74); and discharge diagnosis (κ: 0.92; 95% CI 0.90-0.94). The strength of agreement for all elements was statistically significant (P ≤ 0.001). The absence of coded inpatient diagnoses and the data inaccuracies identified indicate that existing routine health information systems in public-sector hospitals in the NHI pilot districts cannot yet adequately support reimbursement and resource management. Institutional capacity is needed to undertake diagnostic coding, improve data quality and ensure that a standard discharge summary is completed for every inpatient.
A sample of public-sector hospitals across the 10 NHI pilot districts selected by the South African National Department of Health (NDoH) was identified (Figure 1). These districts were selected based on a combination of factors such as demographics, socio-economic factors (including income levels and social determinants of health), health profiles, health-delivery performance, health-service management, and financial and resource management (Matsoso and Fryatt, 2013).

Figure 1. NHI pilot districts: 1. OR Tambo; 2. Thabo Mofutsanyana; 3. City of Tshwane; 4. uMzinyathi; 5. uMgungundlovu; 6. Vhembe; 7. Gert Sibande; 8. Pixley ka Seme; 9. Dr Kenneth Kaunda; 10. Eden.

A retrospective cross-sectional health-records review was undertaken on this sample of public-sector hospitals in South Africa. The NHI pilot sites were established to test the feasibility of implementing the NHI, with a focus on addressing health-system bottlenecks and challenges in order to reverse the worsening disease burden, including the high maternal and child mortality rates in South Africa. The objectives of the pilots include testing the ability of the districts to assume greater responsibilities under the NHI, and assessing utilization patterns, costs and the affordability of implementing a primary health-care service package (Matsoso and Fryatt, 2013).

The sampling frame for the study was all public hospitals, with their five treatment departments (surgical, medical, paediatrics, obstetrics and gynaecology, and psychiatry), located within the 10 NHI pilot districts (N = 83). A cluster study design was adopted whereby each NHI pilot district was considered a cluster. To cover each stratum (i.e. hospital level), proportional sampling was used to randomly select three district hospitals, together with all regional and tertiary hospitals, from each of the pilot districts in the nine provinces, yielding a total of 45 hospitals. Table 1 outlines the breakdown of the sampled hospitals.

Table 1. Characteristics of the different hospital levels (South African National Department of Health, 2004).

The sample size of inpatient health records was determined by assuming a 50% prevalence for the number of admissions per hospital, with a 95% confidence level and a precision level of 0.05. Given the cluster design of the study and its unknown effect on the data, a design effect of 1.5 was assumed. Based on these parameters, a sample of 578 inpatient records was estimated for each district. Consequently, data were expected from 5780 routine inpatient records at the 45 sampled public-sector hospitals, drawn from the five treatment departments or groups of departments: surgical, medical, paediatrics, obstetrics and gynaecology, and psychiatry. For consistency across the hospitals, all departments in each hospital were assigned to one of these five groups. The records were drawn proportionally based on the estimated number of admissions in the selected hospitals during the study months, which were chosen to allow for seasonal disease surges (March 2015, a summer month and a peak period for diarrhoeal cases, and July 2015, a winter month), and on the number of hospitals per level in each NHI pilot district (Table 2). Depending on the size of the hospital, approximately 10 records were accessed from each treatment department for each study month at each study hospital. If a department had fewer than 10 admissions during a study month, all of its records were accessed.

Table 2. Estimated numbers of records for review by type of public hospital within NHI pilot districts.
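As an illustration only, the per-district sample size can be reproduced with the standard single-proportion formula inflated by the design effect. The short Python sketch below is not the authors' calculation; it assumes z = 1.96 for the 95% confidence level, and small differences in rounding conventions can account for the reported figure of 578.

```python
# Illustrative re-calculation of the per-district sample size (not the authors' script).
# Assumes the standard single-proportion formula n0 = z^2 * p * (1 - p) / d^2,
# inflated by the assumed design effect of 1.5 for the cluster design.
import math

z = 1.96      # critical value for a 95% confidence level
p = 0.50      # assumed 50% prevalence (maximises p * (1 - p))
d = 0.05      # absolute precision
deff = 1.5    # assumed design effect

n_srs = z**2 * p * (1 - p) / d**2      # ~384 records under simple random sampling
n_district = math.ceil(n_srs * deff)   # ~577; the paper reports 578 per district
n_total = 10 * 578                     # 5780 records expected across the 10 districts

print(n_srs, n_district, n_total)
```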
Data were collected between August 2016 and April 2019 by trained fieldworkers. Research teams were given log sheets to be signed by the managers (CEOs) of the hospitals visited; the log sheet recorded the names of the hospitals visited, the time spent at each facility and the date of the visit. The teams were also given a data-collection summary checklist (Supplementary Appendix SA) outlining the data-collection activities conducted in each hospital. These included extracting and photographing information from selected inpatient health records contained in registers, patient folders and discharge summaries, and extracting information from available electronic routine health information systems (eRHIS). This information was captured using Research Electronic Data Capture (REDCap), a web-based application for building and managing online surveys and databases (Harris et al., 2009). The project manager reviewed the completed instruments and the data-collection summary checklist and communicated any inconsistencies to the supervisors/fieldworkers to resolve data-quality problems that occurred during fieldwork.

Document and documentation standards and the availability of discharge summaries were investigated using a data-collection checklist (Supplementary Appendix SA). This checklist was used to identify the relevant documents and information on the availability of patient discharge summaries. The presence of a patient discharge summary was confirmed by taking a de-identified photograph of the record and uploading it onto REDCap.

Record quality was measured using two dimensions: (1) completeness of the data in the ward register, patient medical record and discharge summaries; and (2) data accuracy, i.e. the agreement between data in the patient medical record (paper-based and electronic), discharge summaries and ward register for 10 pre-defined data elements: patient age, patient identifier, attending physician's signature, admission diagnosis, discharge date, discharge (final) diagnosis, condition on discharge, procedures, follow-up plan and results of the investigation (Wimsett et al., 2014). Data completeness was assessed by reviewing the proportion of discharge summaries that had all the required data fields completed by a clinical registrar, a general practitioner/medical officer or nursing staff. A percentage average of the availability of coded diagnoses during the two 1-month periods was reviewed. Record accuracy was investigated at two levels by measuring agreement for the 10 pre-defined data elements: first between the patient folder and the discharge summary, and second across the patient folder, discharge summary and ward register.

Statistical analyses were completed using the svyset command in STATA 16.0 (StataCorp LLC) to incorporate the three-stage cluster study design of the sample. For the first stage, the primary sampling unit was the hospital/facility, stratified by hospital/facility type, with no finite population correction since the number of records to be reviewed was not known beforehand. The second and third stages had the study month and the facility departments as the sampling units, respectively. Once the survey design was set, proportions for the documents and the documentation standards were estimated and reported with their respective 95% confidence intervals (CIs). Cohen's and Fleiss' kappa coefficients (κ) were used to measure agreement between the values of the pre-defined data elements found in the discharge summaries, the patients' health records and the ward registers, ignoring the survey design. Cohen's kappa was used for the 10 data elements when two data sources were compared (patients' folder vs. discharge summary); where ward registers were included in the comparison, Fleiss' kappa was used.
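As a hedged illustration of this agreement analysis (not the authors' code), the Python sketch below computes Cohen's kappa for a folder-versus-discharge-summary comparison and Fleiss' kappa when the ward register is added as a third source. The toy diagnosis codes and the choice of sklearn/statsmodels functions are assumptions for demonstration only.

```python
# Illustrative sketch of the agreement analysis, assuming each data element has been
# reduced to a categorical value per source (e.g. the recorded discharge diagnosis code).
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy data: one row per record, one column per source (values are illustrative categories).
folder   = ["A09", "J18", "O80", "A09", "F20"]   # patient folder
summary  = ["A09", "J18", "O81", "A09", "F20"]   # discharge summary
register = ["A09", "J15", "O80", "A09", "F20"]   # ward register

# Two sources: Cohen's kappa (patient folder vs. discharge summary).
kappa_two = cohen_kappa_score(folder, summary)

# Three sources: Fleiss' kappa treats each source as a "rater" of the same record.
ratings = list(zip(folder, summary, register))    # shape: (records, raters)
table, _ = aggregate_raters(ratings)              # counts of each category per record
kappa_three = fleiss_kappa(table, method="fleiss")

print(f"Cohen's kappa: {kappa_two:.2f}, Fleiss' kappa: {kappa_three:.2f}")
```

In this framing, each data source plays the role of a rater of the same underlying record, which is why Fleiss' kappa applies once more than two sources are compared.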
We reported the kappa scores (measures of agreement) together with their CI ranges; a P-value of <0.05 was taken to indicate that the observed agreement was unlikely to be due to chance.

The study proposal received ethics clearance from the Human Research Ethics Committee of the South African Medical Research Council (Ref: EC 003-2/2016) and the University of Pretoria Health Ethics Committee (Ref No: 305/2017). Permission to access patients' health records at the various hospitals was obtained from the respective provincial and district health departments and from the study hospitals. Because the study did not require direct interaction with patients, patient consent was not required. However, strict confidentiality was maintained with regard to the protection of information obtained from patient records; individual patient health records were de-identified and assigned a unique subject identifier at the point of data collection. The record of the links between project ID codes (unique subject identifiers) and the patient identifiers (folder numbers) was securely kept in an encrypted database (REDCap).
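Purely as an illustration of the de-identification step described above (the study's actual tooling is not described in the text), a link table between folder numbers and randomly generated project IDs could be maintained as sketched below; the folder-number format, ID format and function name are hypothetical.

```python
# Hypothetical sketch of a de-identification link table: each patient folder number is
# mapped once to a random, non-identifying project ID, and only the project ID travels
# with the abstracted record data. In the study this link was kept in an encrypted
# REDCap database; here a plain dict stands in for that secure store.
import secrets

link_table: dict[str, str] = {}   # folder number -> project ID (kept encrypted in practice)

def project_id_for(folder_number: str) -> str:
    """Return a stable project ID for a folder number, creating one if needed."""
    if folder_number not in link_table:
        link_table[folder_number] = "SUBJ-" + secrets.token_hex(4).upper()
    return link_table[folder_number]

# Hypothetical folder number, purely for demonstration.
print(project_id_for("FOLDER-000123"))
```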