What counts as academic rigour? Epistemic politics in MA dissertation assessment in an Algerian EFL department
Introduction. Academic rigour is central to graduate assessment, yet how written rubrics translate into examiners' judgements remains under-theorised.
Objective. This paper investigates how standards of academic rigour are articulated in written policy and enacted in practice in the assessment of Master of Arts dissertations.
Materials and Methods. The study is a qualitative, multi-method investigation conducted at the English Department, University of Batna 2. It draws on a purposively sampled corpus of 120 Master's dissertations submitted between 1 May 2023 and 30 June 2025, together with the corresponding examiners' reports and semi-structured interviews with 12 supervisors and 13 examiners. A stratified sub-sample of 36 dissertations was analysed in depth. Data were examined through document analysis, thematic coding and cross-source triangulation to map written criteria against evaluative practice.
Results. Although official rubrics supply clear procedural criteria, examiners frequently rely on tacit interpretive standards, so policy and practice align only partially. Three interrelated mechanisms explain this divergence: methodological legibility (how clearly methodological choices render a thesis readable and defensible), supervisory socialisation (the informal norms supervisors transmit), and internal board composition (the mix of examiners' expertise and expectations).
Conclusion. We argue that improving fairness and consistency requires calibrated rubrics augmented with annotated exemplars, routine examiner-calibration workshops, and targeted supervisor development to increase analytic transparency. The study offers an empirically grounded account of the policy-practice gap, proposes concrete interventions for institutional assessment and quality assurance, and sets an agenda for comparative and experimental research to evaluate the effectiveness of the proposed measures.
