FPC Memo—Course Evaluations, February 2020

Posted on February 24, 2020 by Tabatha Coleman

To: Ad hoc committee on course evaluations
From: FPC
In re: Summary of FPC meeting on course evaluations, 24 February 2020

While many of us recognize the inherent problems in course evaluations, we also believe that they serve an important purpose, allowing students to share their views and experience of the classroom. We discussed that many students who have concerns about what is happening in the classroom might well be reluctant to share that information with a department chair or the provost. As a tool that gives students agency and may bring to light problems that are otherwise invisible, we think course evaluations have a role.

In discussing ways to best evaluate our colleagues, where “best” means that the information gleaned would help the instructors themselves become better teachers and would give us on FPC an accurate sense of who they are in the classroom, we came up against familiar obstacles. As one colleague put it: “You can have cheap, easy, or effective. Pick two.” That is, models that seem to be the gold standard and that would offer us the best information are so labor-intensive as to be impracticable on our campus. Thus, we would like to have as much good information as possible, recognizing that we may not be able to get the best.

Members had varying degrees of comfort with our current evaluations. Some expressed an appreciation of these over the old ones, since we can now track trends over time in specific categories. Some lamented those sets of evaluations where we receive few or no narrative comments (especially as we work hard to appreciate the entirety of a set of evaluations, and not merely the numbers). Others described the problem of disentangling enthusiastic or disparaging comments (which may be about personality or its absence, bias, etc.) from substantive comments about the quality of instruction. And all of us noted in various ways how we work hard to contextualize every set of evaluations—a class of only four students; a despised but required class for a major; an intellectually or emotionally charged class. In sum, given the imperfect metric we have, we work like archaeologists, sifting through each set of evaluations, carefully weighing the evidence, always attempting to contextualize it as best we can, knowing that it can only approximate the past.

In the end, one colleague summed up what we most want from course evaluations:

An assessment method that helps us identify a strong teacher: one who offers clear expectations and supportive scaffolding, is available to students, and is broadly supportive of our student body. It should also be a tool that helps instructors improve their craft.

To that end, numerous ideas arose that we felt could be pieces of a teaching assessment process that would support the above desiderata. These include (in no particular order):

  • A method of course visitation by faculty in the discipline that is standardized across the college and conforms to best practices. This would ideally involve multiple visits over multiple quarters by multiple individuals. One colleague employs a method of visiting a colleague’s course for an entire week, and uses a rubric to evaluate what he sees. Another discussed how the entire department visits the class of junior colleagues, so they get the benefit of multiple perspectives. Yet, we on FPC have seen many department letters that showed little to no evidence of classroom visitation, or perhaps merely one visit by a department chair at one point during the probationary period. A more formalized, clearly articulated system would lead to more detailed evaluations from peers, and would likely benefit instructors as well.
    • There was also a mention of trained student evaluators. If peers might fall into the same trap as students (because we love our peers and want to see the best in them), trained student evaluators might be able to offer more objective snapshots of one’s teaching.
    • There is, reportedly, a model (somewhere? in the ether?) where members of the RPT committee conduct course visits. We note only that it exists; we are not advocating for it.
  • A modified assessment tool that encourages specific feedback on a more limited set of data points. We observed that the current evaluation form is most helpful when students take time to expand upon an assessment. For example, students who expand upon “The teaching techniques were effective in helping me learn” by noting appreciation of group work, the challenges of getting a group of five people together to work on a presentation, or the boredom of reading straight from a PowerPoint—these give important information to the instructor and to FPC about what it looks like inside the classroom. As one colleague noted, this would “make student engagement in the evaluation process more explicit and helpful.” Some ideas we generated around this:
    • What if there were fewer items and students were explicitly encouraged to explain their responses with detailed narrative comments about issues they can more fairly assess (e.g., class organization, materials, etc.)?
    • Perhaps it would be fairer to eliminate questions where we are more likely to be evaluating charisma or personality, instead of the learning that is happening.
    • Would it be possible with online evaluations to import from a syllabus the course goals, have students review them, and then evaluate whether the course has met them?
  • Building in more reflective self-assessment throughout one’s teaching career. If our goal is not only to assess our colleagues’ teaching, but to support them in their journey to becoming better teachers, are there things that we can do to encourage self-assessment in ways that are not onerous? These small opportunities for self-assessment could provide a foundation for formal self-assessments in FPC review, and give instructors a treasury of documents to draw on. Two ways we might do this:
    • Create a template for, encourage, or even require midterm evaluations. These would not be for FPC or necessarily anyone other than the instructor. Yet establishing a culture where midterm evaluations are the norm might encourage ongoing formative assessment for us as instructors.
    • One colleague noted that at a different institution, he was required to fill out an evaluation of the course at the same time that students filled out their course evaluations. This struck us as an interesting model, in which one could pause at an optimal moment to look back on what worked and what didn’t, what one might do differently going forward, and the differences between this and previous iterations of the class. It would also offer candidates who are writing personal statements for third-year and tenure reviews a way to trace their own change over time.

We hope these reflections are helpful to you as you begin the process of reviewing the efficacy of our current student evaluation process.