Interpreting the Results of Evaluations

At the outset we should distinguish between two kinds of feedback: (a) determining that a problem exists, and (b) diagnosing just what the problem might be. Like medical patients, students are better at identifying (a) than (b). They are generally correct when they claim that something is amiss, but often less reliable when attempting to identify the precise cause of the problem. This should come as no surprise. For one thing, few have been trained to assess teaching techniques, and thus most focus on effects, rather than on causes. For another, most students lack a precise vocabulary for talking about teaching, and tend to invoke familiar grounds for complaint, even when they do not apply.
Interpreting evaluations. (2007). Bok Center, Harvard University

4.1 HOW USEFUL IS STUDENT FEEDBACK?

Many studies have examined student evaluation of teaching, and most find that the feedback students provide is consistent and reliable (Gravestock & Gregor-Greenleaf, 2008). Students are reliable and valid assessors, uniquely positioned to provide feedback on “those aspects of courses and teaching about which they are the best and most reliable and often the only informants” (Hativa, 2014, p. 29). As an instructor, there are several steps you can take to ensure that the information you receive is reliable and valid. Cashin (1990) suggests that, to obtain a clear picture of the range of student experiences in the course, you should collect evaluations from at least 10 students and from at least two-thirds of the class.
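If you track response rates in a spreadsheet or script, Cashin’s thresholds reduce to a simple arithmetic check. The following is a minimal sketch in Python; the function name and example numbers are illustrative only and do not come from Cashin (1990).

    def meets_cashin_threshold(responses: int, class_size: int) -> bool:
        """Cashin's (1990) rule of thumb: evaluations from at least
        10 students AND from at least two-thirds of the class."""
        return responses >= 10 and responses >= (2 / 3) * class_size

    # Example: 18 completed evaluations in a class of 24 students
    print(meets_cashin_threshold(18, 24))   # True: 18 >= 10 and 18 >= 16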

However, an evaluation developed by an individual instructor is unlikely to produce results as valid as an instrument designed by psychometric experts. Rather than approaching mid-course evaluations as a means of documenting teaching ability, focus your questions on the particular areas of concern or interest you want to explore.

4.2 COLLECTING AND ORGANIZING THE RESULTS OF MID-COURSE EVALUATIONS

Depending on the questions asked and whether the evaluation seeks to address particular problem areas, there are several useful ways to organize evaluation results. A challenge with open-ended responses is collating them into a usable, comparable form. Do not dismiss the value of students’ written comments simply because they contain widely varied, and perhaps even contradictory, reactions; an important part of interpreting evaluations is understanding the roots of these variations and contradictions.

Analyzing your results

Consider employing some or all of the following techniques in analyzing the results of your evaluations.

  • Note any common themes or student comments that emerge from the evaluations as a whole.
  • Group the responses into positive and negative piles. Identify trends and themes in each category. Select representative student comments for each theme. See Appendix D for a worksheet to record a summary of responses.
  • To provide a visual summary of the strongest and weakest elements of your evaluation, tally the number of positive and negative responses to each question or emerging theme (e.g., communication with students). You can then arrange these tallies on a chart that gives a striking depiction of the relative status of each criterion (a minimal scripted version of this tally is sketched after this list). See Appendix D for a worksheet, with an example, to help you conduct such an analysis.
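If the class is large enough that hand-tallying becomes cumbersome, a short script can produce the same kind of summary. The sketch below, in Python, assumes you have already coded each written comment with a theme and a positive/negative label; the theme names, sample data, and text-based “bar chart” are illustrative, not a prescribed format.

    from collections import Counter

    # Hypothetical coded comments: (theme, "positive" or "negative")
    coded_comments = [
        ("communication with students", "positive"),
        ("communication with students", "negative"),
        ("pace of lectures", "negative"),
        ("pace of lectures", "negative"),
        ("assignment instructions", "positive"),
    ]

    tallies = Counter(coded_comments)
    for theme in sorted({theme for theme, _ in coded_comments}):
        pos = tallies[(theme, "positive")]
        neg = tallies[(theme, "negative")]
        print(f"{theme:30s}  +{'#' * pos:<5s} -{'#' * neg}")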

Contextualizing your results

Evaluate the responses you’ve received in the context of information you’ve gathered about student demographics and behaviours.

  • Look to the information you’ve gathered about students’ academic histories for sources of contradiction. If some students report that the requirements for written essays are difficult to understand, check whether they come from another discipline or faculty; they may be used to different conventions. Students concerned about the length of assignments may be earlier in their academic careers than those who are comfortable with the workload (a simple cross-tabulation of this kind is sketched after this list). It will then be up to you to determine whether your expectations are appropriate to the level of preparation of the students the course is likely to attract.
  • Student identification of any course problems can also be analyzed in the context of their responses about their own learning habits. If students complain about the difficulty of the material, do patterns in their learning habits, such as not completing all the readings, or starting assignments very close to the due date, indicate the origin
    of this challenge? If so, it may be more useful to focus on the source of the problem than its manifestation.
  • One of the valuable aspects of mid-course evaluations is gaining a deeper understanding of community norms and expectations. You may wish to identify any comments in the evaluations that suggest that your norms and expectations (for example, for communication or availability outside of the classroom) are different from your students’, particularly if you are an instructor new to the institution.
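One way to make these cross-references concrete is a simple cross-tabulation of responses against the background information you have gathered. The sketch below, in Python, assumes you can match each evaluation to a student’s year of study and to whether they flagged workload as a concern; the field names and sample data are hypothetical.

    from collections import Counter

    # Hypothetical merged records: (year of study, flagged workload as a concern?)
    records = [
        ("1st year", True), ("1st year", True), ("1st year", True),
        ("2nd year", False), ("3rd year", False), ("3rd year", True),
    ]

    counts = Counter(records)
    for year in sorted({year for year, _ in records}):
        concerned = counts[(year, True)]
        total = concerned + counts[(year, False)]
        print(f"{year}: {concerned}/{total} flagged the workload as a concern")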

Please see Appendix D for examples of worksheets that can help you sort the feedback you receive.

For more information on collecting and organizing the results of your evaluation, please see:
Lewis, K. G. (2001). Making sense of student written comments. New Directions for Teaching and Learning, 87, 25-32.