by Junko Winch

Module Evaluation Questionnaires (MEQs) are an important source of student feedback on teaching and learning. They are also often relied upon as evidence in tutors' promotion cases. However, in their current form they suffer from low response rates, which reduces their usefulness and validity. Local practices have grown up to address the need for feedback, but they are inconsistent year on year and across the University. Existing research on teaching evaluations indicates that they are subject to several sources of bias and suggests careful design of MEQs.
The MEQ project
The MEQ project was undertaken to inform the University of Sussex's policy and practice. The output was presented to the University's Surveys Group to inform the strategic direction of the University.
Method
Research Question 1: Purpose of MEQs
Method: Literature Review
A literature search was conducted, covering a total of 79 journal publications from the fields of psychology, education, business management, and sociology. The publication dates ranged from 1978 to 2020. The keywords used in the search were student evaluations of teaching (SETs), validity, assessment, and evaluation. All selected publications underwent content analysis.
Research Question 2: Analysis of MEQ’s Seven Statements
Method: Qualitative
This stage involved a qualitative analysis of the seven core statements in the Module Evaluation Questionnaire (MEQ). The analysis focused on the following aspects:
- The purpose of the University MEQ
- Identified weaknesses
- Suggestions for improvement
Literature review
The literature review revealed tutors' and students' biases related to MEQs. Bias is a source of unreliability, which in turn threatens validity. Validity and reliability are defined in various ways, but for the purposes of this report, validity is defined as "the general term most often used by researchers to judge quality or merit" (Gliner et al., 2009, p.102) and reliability as the "consistency with which we measure something" (Robson, 2002, p.101).
Tutor-related biases
The literature highlighted that tutor-related biases must be taken into account:
- Students who have learned more in class will receive higher grades and will naturally rate the professor more highly because of the knowledge they have gained on the course (Patrick, 2011, p.242).
- Tutors who give higher grades receive better evaluations (Carrell and West, 2010).
Student-related biases
Student biases may arise from a wide range of factors, including the weather, time of day, personality traits, gender, racial stereotypes, the tutor’s physical attractiveness, student anxiety, and more.
The findings and recommendations
1. The purpose of MEQs
MEQs have three purposes: institutional, teaching, and academic promotion. To help reduce the bias effects outlined in the literature, full MEQs and other teaching-related data should be provided to promotion panels to avoid the cherry-picking of comments or data by applicants. For example, quantitative data such as the class average attendance rate and the average, minimum, and maximum marks, together with an analysis of the qualitative responses, would help build a more accurate overall picture of the class.
2. Analysis of MEQs
The students' biases mentioned in the literature may make it difficult to rely on MEQs as the sole instrument. Furthermore, the current MEQ statements may confuse students due to their content and wording.
The following points are suggested:
- The purpose and goal of the questionnaire should be clearly stated. Stakeholders' purposes should be taken into account when designing MEQs to ensure that the intended purpose is achieved.
- Some statements ask two questions in one. Students may not answer both, which affects validity.
- Consideration should be given to words such as 'satisfied', which may carry different connotations across cultures and individuals.
Recommendations
Carefully developed MEQs have the potential to offer valuable insights to all stakeholders. The primary recommendation is to undertake a staff-student partnership to agree the purpose of the MEQs and to co-design a revised instrument that meets that stated purpose.
Reflections
I undertook this project as part of my Continuing Professional Development and appreciate the various opportunities it has given me. For example, I was given the opportunity to write this blog. Giving a presentation to the University's Surveys Group also reminded me of my doctoral viva, as the group included the Pro Vice Chancellor for Education and Students, the Associate Dean of the Business School and the Deputy Pro Vice Chancellor for Student Experience. When answering the group's questions, I learned how difficult it is to meet the needs of different perspectives and cultures. For example, I was asked a question from a quality assurance perspective, which was unexpected as I had written the report from a teaching staff perspective. The group also included the Student Experience team, which prompted me to consider yet another perspective on MEQs. Furthermore, working with my colleague from the Business School made me realise the cultural differences between academic disciplines and the school where I am affiliated (School of Media, Arts and Humanities). Looking back, this was a very valuable experience for me and, hopefully, for the institution.
References
Carrell, S.E. and West, J.E. (2010) ‘Does professor quality matter? Evidence from random assignment of students to professors’, Journal of Political Economy, 118, pp.409–432. https://doi.org/10.1086/653808
Gliner, J.A., Morgan, G.A. and Leech, N.L. (2009) Research methods in applied settings: An integrated approach to design and analysis. New York: Routledge.
Patrick, C.L. (2011) ‘Student evaluations of teaching: effects of the Big Five personality traits, grades and the validity hypothesis’, Assessment & Evaluation in Higher Education, 36(2), pp.239–249. https://doi.org/10.1080/02602930903308258
Robson, C. (2002) Real world research. 2nd edn. Oxford: Blackwell.
Hi Junko,
Thank you for an engaging blog post. I read it with much interest, and I agree with your analysis of how MEQs are designed and whether they are methodologically valid. I am sure this is not an exhaustive description of the work you've done, but I have a couple of questions:
It's not clear where the student biases you list come from; are they part of your own research or from Patrick (2011)?
Can you draw any parallels with the NSS discussion and analysis by Bell and Brooks (2018) about what drives student satisfaction?
Hi Alexandre,
Thank you very much for your prompt comments and questions.
The summary of student biases was compiled from various references; it is neither my own research nor from Patrick (2011). Unfortunately, I was advised to keep references to a minimum. If you are interested, I'm happy to send those references to you.
Yes, I agree that it would be very interesting to draw parallels with the NSS discussion and analysis by Bell and Brooks (2018). Thank you very much for your advice; I may develop this work along the lines you recommend.
Hi Junko,
Thank you for this blog. I am doing some research on how important MEQs are, and you have said that "they are also often relied upon as evidence in tutors' promotion cases." Do you have data on this? It would also be great if you could send over your full research on this topic, which would be of great help.
Thank you