As editors of the Columbia Underground Listing of Professor Ability, our primary goal is to ensure that students have access to as much information as possible when choosing their programs of study. We believe that we are entitled to the best instruction possible, and that the easiest way to ensure this is to provide a public record of faculty teaching ability. Thus we wholeheartedly support any effort to make public the University’s existing records regarding teaching effectiveness, and applaud the University Senate’s decision to investigate the feasibility of open course evaluations.
We acknowledge that open course evaluations from the University may eventually render CULPA obsolete. This is not necessarily a bad thing—if there ever comes a time when the University does a better job of serving the needs of the student body than CULPA, we will happily devote our time to that effort instead. But that time has not yet come. There are several grave issues with the current proposal to release course evaluations, and we expect that students will continue to rely on CULPA as the primary source of course information even if the senate’s resolution is ratified.
Foremost among these issues is a lack of adequate information regarding existing alternatives to University-administered evaluations. Over the years, numerous public figures on campus have made misstatements about CULPA’s content and policies, and we want to ensure that these misstatements do not color the discussion of open course evaluations, or shape any system that is ultimately put into place. While we are generally impressed by the thoroughness and clarity of the senate’s report, we were distressed to discover a number of inaccuracies in the transcript of the March 30 plenary meeting. Two stand out. On the seventh page of the transcript of the senate discussion on course evaluations, Mr. Alex Frouman (who is by no means alone in these beliefs) states, “[CULPA] tends to contain polarized reviews of people that loved the class or hated the class, and tends not to be very representative of the students. It also happens to be anonymous and largely unmoderated and would be inappropriate.”
There are two claims here: that CULPA reviews tend to be strongly polarized, and that CULPA is unmoderated. Each of these claims is false.
It will hopefully surprise nobody that we maintain an extensive internal record-keeping system—among the statistics we record for each review is a rating on a five-point scale indicating how positive it is. As of this writing, “strongly negative” or “somewhat negative” reviews make up 27 percent of all entries in the database (11 percent “strongly” and 16 percent “somewhat”), while “neutral” reviews make up 29 percent and “somewhat positive” or “strongly positive” reviews make up the remaining 44 percent. In short, a plurality of the reviews on CULPA are positive, and neutral reviews outnumber negative ones.
As for moderation, every one of the more than 21,000 reviews on the site is vetted by CULPA’s editorial staff. There are prominent buttons throughout the site for students and professors to flag reviews as inappropriate, and we make a point of responding quickly to these reports. (This fact is discussed in the written report.)
In a discussion with Mr. Ryan Turner and Ms. Sara Snedeker last fall, we explained that the numerical data do not support the common wisdom about the distribution of sentiment in CULPA reviews, but there is no indication that these remarks were taken into account—indeed, the draft resolution makes specific mention of third-party sites providing “polarized and unfiltered reviews.” In general, we are alarmed that both the meeting transcript and the senate report rely on anecdote rather than available data in discussing CULPA.
The senate’s recommendations are informed, at least in part, by an anecdotal and demonstrably untrue characterization of CULPA. While we have broader concerns about the feasibility of truly fair course evaluations conducted under the aegis of the University, we feel compelled to begin our participation in this discussion by correcting these basic inaccuracies. Whatever the outcome of this process, we draw encouragement from the senate’s willingness to take open evaluation seriously, and we look forward to continuing to do our part in safeguarding the quality of undergraduate education at Columbia.
The authors have been granted anonymity in order to protect the integrity of their review process and prevent others from influencing their decisions.