Gendered Language in Teacher Reviews

As a short follow-up to Bridget Brown’s post from last week about student evaluations, here’s an interactive chart developed by Benjamin M. Schmidt (an assistant professor of history at Northeastern University), which allows one to see trends in the use of language in RateMyProfessors.com reviews.

As the description on the page says, Schmidt’s chart allows users to enter words or two-word phrases and then displays their prevalence “per million words of text (normalized against gender and field).” One can also limit the results to only negative or only positive reviews.

Here it is.

Take a look, try out some terms, and see what you come up with.

I’m curious to hear whether the results correlate (or don’t) with people’s initial assumptions about such data. I’m also interested to hear what people think such data might indicate and, for that matter, what it might hide. It’s also worth considering the flaws and limitations in the methodology of such a study or tool.

Student Evaluations: Reading Them, Reading Us

I’ve just completed my semi-annual Reading of the Student Evaluations. During the Fall 2014 semester, I had one section that was a joy to meet with, one that seemed happy, engaged, and learning well enough, and one that was a bit of a disaster (due in large part to one particularly difficult student).

As usual, there’s good news and bad news.

First, the good news: Perhaps not surprisingly, the class that was a joy to meet with mostly gave me very positive evaluations:

“Everything was well taught and explained.”

“I enjoyed how the instructor asked us to write about topics we could relate to such as social media and celebrities.”

“The professor helped me better my writing significantly.”

Something worked in this class. As I understand it, my goals for the class and my idea of myself as an instructor aligned with the students’ expectations of the class and their perception of me as a teacher. Yay me!