complaints on their quality of life before and after surgery. Response options were on a five-point Likert scale ranging from a prominent increase of complaints to great improvement.

Statistical analyses
We produced descriptive statistics for the scores of the measurements. No missing data were expected in either cohort, since the online questionnaire could only be submitted when fully completed.

Construct validity
Inter-item correlation is one aspect of construct validity. In this study, construct validity was further assessed by confirmatory factor analysis and hypothesis testing. Fit parameters were used to determine whether the data fitted the hypothesised factor structure. To evaluate model fit, the comparative fit index (CFI), the Tucker-Lewis index (TLI) and the root mean square error of approximation (RMSEA) were calculated.10 The guidelines proposed by Hu and Bentler10 suggest that models with CFI and TLI close to 0.95 or higher and RMSEA close to 0.06 or lower represent good fit. For hypothesis testing, correlations were computed between the sum score of the impact items of the OQUA and the sum score of the GHSI and its subscales.11 Spearman's rho was used to assess the hypothesised relations because the scores were non-normally distributed. Correlations were considered low (<0.30), moderate (0.30-0.59) or high (≥0.60).12

Test-retest reliability
Patients with active ear infections, or whose complaints were expected to change within 1-2 weeks after their visit to the ENT surgeon, were regarded as unstable and excluded from the reliability analyses. To investigate the test-retest reliability of the OQUA, the quadratic weighted kappa was calculated for items measured on ordinal scales, and the intraclass correlation coefficient (ICC) was calculated for items measured on continuous scales, using two-way random-effects models for agreement. An ICC value of at least 0.70, in a sample of at least 50 patients, has been recommended as a minimum standard for reliability.13 Kappa scores were interpreted using the Landis and Koch classification system, which divides scores into five classes; higher scores indicate better reliability.14
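As an illustration of the model-fit evaluation described under Construct validity, the following is a minimal sketch of a confirmatory factor analysis in Python, assuming the semopy package; the item names, the one-factor specification and the generated data are purely illustrative and do not represent the OQUA's actual factor structure.

    import numpy as np
    import pandas as pd
    from semopy import Model, calc_stats

    # Hypothetical item responses; in practice this would be the
    # questionnaire data (one column per item, one row per patient).
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.integers(1, 6, size=(200, 4)),
                      columns=['item1', 'item2', 'item3', 'item4'])

    # Placeholder measurement model: one latent factor, four indicators.
    spec = 'Impact =~ item1 + item2 + item3 + item4'

    model = Model(spec)
    model.fit(df)

    # calc_stats reports fit indices, including CFI, TLI and RMSEA.
    fit = calc_stats(model)
    print(fit[['CFI', 'TLI', 'RMSEA']])

Against the Hu and Bentler guidelines cited above, CFI and TLI close to 0.95 or higher and RMSEA close to 0.06 or lower would indicate good fit.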
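The correlation hypotheses can be evaluated with Spearman's rho as described; below is a minimal sketch assuming SciPy, with hypothetical sum scores standing in for the OQUA impact score and the GHSI total score.

    from scipy.stats import spearmanr

    # Hypothetical per-patient sum scores (illustrative values only).
    oqua_impact_sum = [12, 18, 7, 22, 15, 9, 20, 14]
    ghsi_total      = [60, 48, 75, 40, 55, 70, 45, 58]

    rho, p_value = spearmanr(oqua_impact_sum, ghsi_total)

    def interpret(rho):
        # Thresholds from the text: <0.30 low, 0.30-0.59 moderate, >=0.60 high.
        magnitude = abs(rho)
        if magnitude < 0.30:
            return 'low'
        if magnitude < 0.60:
            return 'moderate'
        return 'high'

    print(rho, interpret(rho))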
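For the test-retest analyses, the quadratic weighted kappa and the ICC can be computed as sketched below, assuming scikit-learn for the kappa and the pingouin package for the ICC; the scores and column names are illustrative, and ICC2 is used as pingouin's label for the single-measurement, two-way random-effects agreement form described in the text.

    import pandas as pd
    import pingouin as pg
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical paired ratings of one ordinal item (1-5) at test and retest.
    test_scores   = [3, 4, 2, 5, 4, 3, 1, 2]
    retest_scores = [3, 4, 3, 5, 4, 2, 1, 2]
    kappa = cohen_kappa_score(test_scores, retest_scores, weights='quadratic')

    # Long-format frame for the ICC: one row per patient per occasion.
    long_df = pd.DataFrame({
        'patient': list(range(8)) * 2,
        'occasion': ['test'] * 8 + ['retest'] * 8,
        'score': test_scores + retest_scores,
    })

    icc_table = pg.intraclass_corr(data=long_df, targets='patient',
                                   raters='occasion', ratings='score')
    icc = icc_table.loc[icc_table['Type'] == 'ICC2', 'ICC'].item()
    print(kappa, icc)

Against the standard cited above, an ICC of at least 0.70 would be considered acceptable, and kappa values would be interpreted through the Landis and Koch classes.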