
Experienced outcomes of MCD from the survey-after-the-MCD-series are used in a forthcoming study about MCD evaluations and in the article on the impact of MCD specifically on ‘moral craftsmanship’ (Huysentruyt et al., 2023).

Data selection

Of the initial 17 participating teams, we included 16: one team stopped because they wanted more general team meetings instead of MCD. The local steering committee made this decision, which led us to exclude the data of 4 of the 148 single MCD sessions. Furthermore, in 2018 the Ministry of Justice and Security announced the closing down of one of the participating prisons. We excluded the data from 3 MCD sessions held after that announcement, as they were dominated by frustrations related to the closure instead of focusing on a moral dilemma. Finally, we excluded MCD sessions that turned out to use a conversation method other than the dilemma method (n=10).

Data analyses

For a broad understanding of the outcomes of MCD, we first performed quantitative analyses of the closed items from the evaluation forms. All quantitative analyses were conducted using the Statistical Package for the Social Sciences (SPSS), version 26. For the mean scores of participants per single MCD session, we ran a multilevel analysis in which we corrected for a) multiple participants evaluating the same MCD session, b) multiple sessions of the same team, and c) multiple teams belonging to the same professional discipline. The quantitative analyses of the survey-after-the-series consisted of frequency descriptions, with bar charts to show percentages. In addition, we used ANOVA tests, with crosstabs plotting items against participants’ discipline, to determine whether experienced outcomes differed between the prison staff disciplines.

To gain in-depth knowledge about the outcomes, we used an embedded mixed-method design: we added qualitative analyses, conducted using MAXQDA® software, version 2020. In an inductive process, the open-ended items from the participant and facilitator versions of the single-MCD evaluation forms were open-coded separately (Ryan & Bernard, 2003). We constantly compared indicators, codes, and researchers’ interpretations (Green & Thorogood, 2014). The coding was performed independently by at least two researchers, who eventually reached consensus on the final codes. In case of disagreement or doubt, we consulted a third researcher. We categorized all participants’ outcomes using thematic content analysis, which ‘summarizes the variation
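To make the multilevel correction described above concrete, the sketch below shows one way to fit a random-intercept model for the nested structure (participants within sessions, sessions within teams, teams within disciplines). This is a minimal illustration, not the authors’ SPSS procedure; the file name and column names are hypothetical assumptions.

```python
# Minimal sketch of a nested random-intercept model for mean MCD scores.
# Assumes a long-format CSV with one row per participant evaluation and
# hypothetical columns: score, session_id, team_id, discipline.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mcd_evaluations.csv")  # hypothetical file name

model = smf.mixedlm(
    "score ~ 1",                   # intercept-only: overall mean score
    data=df,
    groups=df["discipline"],       # highest level of nesting
    re_formula="1",                # random intercept per discipline
    vc_formula={                   # variance components within discipline
        "team": "0 + C(team_id)",
        "session": "0 + C(session_id)",  # IDs assumed unique across teams
    },
)
result = model.fit()
print(result.summary())  # fixed intercept = mean score corrected for nesting
```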
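The frequency descriptions with percentage bar charts could be produced as in the following sketch, again with hypothetical file and column names standing in for the actual survey items.

```python
# Minimal sketch: percentage bar chart for one closed survey item.
import pandas as pd
import matplotlib.pyplot as plt

survey = pd.read_csv("survey_after_series.csv")  # hypothetical file name

# Share of respondents per answer category, as percentages.
pct = survey["item_better_dialogue"].value_counts(normalize=True).sort_index() * 100
pct.plot(kind="bar")
plt.ylabel("Percentage of respondents")
plt.title("Experienced outcome: item_better_dialogue (hypothetical item)")
plt.tight_layout()
plt.show()
```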
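Finally, the comparison of outcomes across prison staff disciplines can be sketched as a one-way ANOVA. The authors ran these tests in SPSS; this Python version with hypothetical column names only illustrates the test itself.

```python
# Minimal sketch: one-way ANOVA testing whether an outcome item
# differs across staff disciplines.
import pandas as pd
from scipy import stats

survey = pd.read_csv("survey_after_series.csv")  # hypothetical file name

# One array of item scores per discipline, missing answers dropped.
groups = [g["item_score"].dropna() for _, g in survey.groupby("discipline")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```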
