
Manipulating student evaluations: the Sales School Method

Greg Downey
Oct 16, 2020

Student evaluations are biased? Yes, yes, they are. Research has repeatedly shown that students’ evaluations of teaching quality exhibit a range of biases. For example, Anne Boring, Kellie Ottoboni and Philip B. Stark argue on the LSE’s Impact blog that:

Student evaluations of teaching (SET) are strongly associated with the gender of the instructor. Female instructors receive lower scores than male instructors. SET are also significantly correlated with students’ grade expectations: students who expect to get higher grades give higher SET, on average. But SET are not strongly associated with learning outcomes.

For all their limitations and outright unfairness, student evaluations are not something universities are likely to give up soon.

So I offer you an alternative: how I manipulate student evaluation scores.

Drawing on techniques I first learned as a door-to-door salesman during my undergraduate years, here are tried and tested ways to improve your evaluations, from the keyboard of a 25+ year university veteran. Buckle up: this may sound cynical, but I hope to persuade you that my goals and methods are not only ethical but might actually improve your teaching.

A couple of years ago in a departmental meeting, I shared the Downey Sales School Method for manipulating student evaluation scores. My colleagues responded with horror and amazement, so I think I know how you’re reacting. But I hope to show you there’s pedagogical value in these propositions.

Step 1: Study the evaluation

To manipulate your evaluation scores, the first thing you need to do is look long and hard at the evaluation form itself. Study the evaluation.

What you are likely to find is that your university is asking strange questions, some of which are neither obvious nor easily anticipated. It’s very hard to manipulate your evaluations if you don’t know what’s being evaluated.

For example, my university evaluates a series of items, some of which appear to be of dubious pedagogical value. The form itself shapes how teachers are evaluated. Issues that may seem trivial are elevated. Crucial considerations are levelled. ‘Composite scores’ arise that lump together the vague with the vital, the tangential with the foundational.

For example, my employer’s evaluation form asks whether the ‘objectives’ of a class are ‘clear’ to the students. Every single element of that criterion is suspect: is ‘clear’ better than ‘unclear’? Why? Should ‘objectives’ be something that can be clarified? Does it make sense to evaluate the ‘objectives’ of a single class? Whose objectives?

The form does not ask whether those ‘objectives’ (whatever they are, and whether a course should have them) were ambitious, exciting, challenging, inspiring. My university’s form has institutionalized ‘objective clarity’ as the pedagogical value. I believe it’s literally the first question on the evaluation form. I don’t agree that clarity is always such an important pedagogical value.

In my teaching, I hope students might take a course for lots of reasons (‘objectives’?) but have their expectations expanded and transcended. For example, maybe they take my human evolution class because the course description sounds ‘fun,’ or because they have to take an Arts unit and it sounds like the least worst option. Should I care about their objectives in taking the course? Should the course be evaluated negatively if this is not clear? Over the semester, I hope to teach some critical thinking: to question how science is reported, to denaturalise gender, race, sex, and a host of other factors…

Even then, every student’s objectives will not be the same: they are at different intellectual points in their journeys when we meet up to travel together for a while.

I could go on, but you get my point: my university’s teaching evaluation says ‘clarity’ is key, the only thing about ‘objectives’ that matters (not anything else a teacher might prefer to evaluate). So how do we respond?

Step 2: I TELL them how the course meets the criteria!

Yep, you read that right. With ‘clear objectives,’ for instance, I just tell them, several times throughout the semester, that the ‘objectives’ are ‘clear.’ I define something in the syllabus as ‘objectives.’ I put the ‘objectives’ up on slides in the first week. And in the last lecture, in my course wrap-up: here are my objectives, here’s how we met them, isn’t that f*in’ clear as day.

I go through the whole evaluation form, and I make sure it’s explicit how the course addresses each criterion and how the class and the teaching staff, including my assistants, met it.

It’s not some Jedi mind trick. It’s because I actually do seek to achieve much of what my university sets out in our evaluation form as our implicit ‘pedagogical values’ as a community. The evaluation form is an ethical document, implicitly encoding what the university has decided is of value and what we, as educators, owe to our students.

Step 3: Redefine the criteria

If one of the evaluation criteria is trivial or misguided, inappropriate for the class or of dubious value, I seek to elevate it into something interesting.

For example, the evaluation form might ask about ‘job ready skills.’ (I don’t think they’re asking it yet… but I’m waiting, because it comes up more and more in our learning and teaching statements.)

Not every class should be transparently about readiness for the ‘market,’ no matter what neoliberal theorists of education insist. In that situation, redefine the criterion for your students. Address it directly: these are the skills you need for your professional lives; here’s how we cultivated those skills.

This is an old trick taught by the Southwestern Company, purveyor of reference books, to its door-to-door salespeople (yes, two summers selling books door-to-door helped me pay my way through university). You have to anticipate your students’ objections or confusion and show them how to make sense of the evaluation. You have to teach them how to evaluate you.

For example, I would go back to my assessment tasks and remind students of how they were structured to teach skills that will be valuable going forward, and, even though I’d cringe inside, I’d talk about how those skills make them more ‘job ready.’

Step 4: Re-examine your course from the perspective of your evaluations

Of course, the design thinking has to be there from the start if we’re going to highlight it, but I’m assuming that most educators are already doing this type of work. If you’re sleepwalking through course design, you might have to reassess that design first.

Ask yourself if you really are thinking hard about what you’re trying to accomplish. Consider how this relates to the shared goals your university puts forward, implicitly, in the way it evaluates teaching.

For example, if your university has a question about whether the lecturer was ‘well prepared’ or ‘organised,’ and you prefer to just shoot from the hip in your teaching, you might have to ask yourself whether you need to rethink your design. Or can you argue to the students that the meaning of ‘prepared’ or ‘organised’ in this particular class is distinctive? For example, when I taught research methods, I did not prepare lectures with the same careful choreography as my survey classes, but that rationale needs to be out in the open, and the meaning of ‘prepared’ or ‘organised’ (if that is how the evaluation question is worded) needs to be discussed.

There are likely to be howls of complaint about my methods for ‘gaming’ evaluations, but I think we overestimate how transparent evaluation forms and criteria are. Many are written in a way that makes little sense unless you’re marinated in the special lingo of the ‘learning and teaching’ literature.

In addition, it makes sense to be transparent about what we sought to achieve, why, and how.

Step 5: Outright rebellion — seize the criteria!

You can criticise an evaluation criterion explicitly if you think it’s necessary. Tell students what you think is a better way to interpret it for your class.

The reason is not just that evaluations are hateful or ‘unfair’ or ‘biased.’ Just as assessment tasks should be tailored to individual classes, so should the evaluation of teaching. That’s why you need to redefine the criteria and tell students what you think they should be. (Of course, they may disagree and still evaluate unfairly, but my experience is that a semester of good-faith teaching goes a long way toward getting students to listen.)

Manipulating evaluations doesn’t mean we don’t care about good pedagogy. But I also don’t assume that I’m smarter than the people who designed the evaluation system (I have come around a lot in over two decades of teaching). And you can bet I will have baked my real goals into the course structure and got them evaluated.

Special case: ‘timely’ feedback in large classes

The true test of the Downey Sales School Method came in the area of ‘timely feedback’ in very large lecture classes. In our course evaluations at my university, there’s a question about whether students get ‘timely feedback.’ For years, I heard lecturers say that with big classes, getting slammed for ‘timely feedback’ in evaluations was inevitable. Instructors complained that students did not understand the logistics of marking stacks of papers and seemed to punish them even though the turnaround times were reasonable.

I understand. My course on human evolution last semester had over 500 students — some years it has had over 600 — and I work with six teaching assistants to get through all the marking. Yes, I do all the normal tricks: specialised rubrics, a handbook for markers, a special menu of pre-fab comments for common problems (with suggested solutions) that I send to each marker to install. Even so, I manipulate the evaluation.

First, although instructors might complain, do they ever talk to students openly, and without being defensive, about what ‘timely feedback’ is? Have they explained WHY students don’t get their papers back the next day? Not just saying that the task is difficult, but explaining why it’s important not to just use marking software. I think we need to talk about what students gain by waiting a couple of weeks.

And in my case, I highlight that ‘timely’ is not ‘instant’; ‘timely,’ I tell them, means they have time to incorporate that feedback into the next assignment (I’d rather the evaluation form asked if the marking process was helpful or helped them develop as students).

I try to come right out and talk to students about how heroic their markers and tutors are: given that they have work of their own as grad students, getting marks back to so many students as quickly as they do is no small feat. I discuss how each student is getting tailored feedback, and how this cannot be instantaneous. I talk about the feedback explicitly, using the positive language of the evaluation form, saying this is the definition of ‘timely feedback.’

I have even had students applaud their tutors for ‘timely’ feedback. We’ve talked about how they have plenty of time to implement the feedback from the first assignment in the second one. I don’t apologise to students; I tell them how we meet our pedagogical goals as a teaching team, and I define those goals in the terms used in the evaluation process.

That’s another sales school technique: you seed your set-up with the terms you want them to think in.

The take-away: teaching students to evaluate

The bottom line: you have to help students develop the right expectations of you and your course if they are to evaluate effectively. And you need to keep an eye on your evaluations as one outcome of your teaching, not because you’re afraid of them or a bad teacher, but because your university likely doesn’t give you a lot of external reference points for what it thinks quality teaching is. The evaluation form is a concrete set of expectations that’s worth studying.

It helps me that I’m an old white guy with a certain personality (including that I’ve been to sales school). I don’t know how well the Sales School Method works for other people, but I want to share it.

The problem, though, is similar to one salespeople face: in sales, you’ve got to answer objections before they are made. If you wait until someone evaluates you negatively, it’s too late. I try to get in front of the evaluation, build the evaluation into the course design, adjust student expectations, talk openly about my goals and rationales, and even dispute the criteria or ask my students to adjust them explicitly.

Just as a ‘pop’ quiz doesn’t really test students adequately, a ‘pop’ evaluation, with no agreed-upon language or discussion of the criteria, is unlikely to lead to a fair evaluation (and even then there’s no guarantee).

Do I still have students who hate me and try to burn me in evaluations? Oh, yeah… I could tell you stories after over 20 years of doing this, often with hundreds of students a semester. Disgruntled students are a constant fact of teaching, and I’m sure I don’t even face the worst of it by a long stretch. I can’t fix that.

But I can seek to overcome the fact that students are asked to evaluate us as instructors without ever being taught how. The evaluation process is often set up to resemble a ‘customer satisfaction’ survey too closely, so it’s a good idea to talk about it and try to elevate the process, not letting students assume that they already understand what a course evaluation is. Otherwise, they might just think it’s a chance to ‘upvote’ or ‘downvote’ you as an entertainer.

One key is to make sure you address evaluation at the beginning of the course and return to it at the end, talking about the syllabus and what you promised to accomplish right before students evaluate you. Use the language your university has adopted in its evaluation tools, even if it’s clunky or ill-fitting. It’s not just about boosting evaluations: it’s pedagogically important to concretise what we have explored and learned together.

By going back to the beginning and starting with a vision of how you will be evaluated, you too, I hope, will get a fairer evaluation.

I don’t know if this will work for you, but I wanted to share it.

This column also appeared on Neuroanthropology.net.
