Tom Mathewson has asked my colleague Bette Gordon and me to articulate a "con" position to the Senate report. While we have not been working on this issue over the last year, as the Student Affairs Committee has, we do have strong objections to the report and to its general recommendation for open course evaluations. Although we in no way speak for everyone--we are not acting as anyone's official representatives--we have received ample statements from a range of faculty, particularly in the Arts and Sciences, who oppose this resolution. There are, of course, also faculty who approve. But our mandate today is to present the critique of the resolution and of the Report on which it is based.
I'd like to focus on three main points in the very short time available. There is much more one could say, but the three themes I want to address are: 1) the idea of "transparency" and "openness" on which the Senate report relies as its very basis; 2) the entirely unrecognized and unacknowledged possibility of gender, racial, and other kinds of bias in the course evaluation process--and how the effects of such bias would be exacerbated by public exposure; and 3) the claim that publicly disclosed course evaluations would be an improvement over the current system with respect to academic freedom (which the drafters call "a general and abstract" concern).
First off, the proposal is unclear about the purpose of course evaluations. At times it suggests that they are for feedback to professors (which is, of course, one of the primary objectives of course evaluations as they are currently constituted at Columbia). At other times, it suggests they are a mechanism for remediation or accountability. Elsewhere, it recognizes that evaluations are used in hiring, promotion, and tenure cases. Finally, it suggests that their purpose is to help publicize classes--and I would say that this last objective seems to be the one most insistently promoted as the reason for public course evaluations. Moreover, throughout, the proposal suggests that "access" or "openness" will lead to all of these very different, and perhaps incompatible, ends.
So here I'd like to address the first point: openness and transparency. The very subtitle of the draft report on "open" course evaluations is revealing: "promoting a culture of openness." "Open" is used twice in the title of the report. Accordingly, the "executive summary" of the Report states: "Open course evaluations promote a culture of transparency, accountability, and self-examination consistent with the highest ideals of scholarship..."
So I want to question this idea of transparency and accountability. Transparency means that there is no blockage; that something is see-through. If you see me through a transparent window, I can see you. The process described in the draft report is not transparent. Students can evaluate professors, and those evaluations can be seen by everyone in the so-called Columbia community through the public posting of course evaluations. Yet professors cannot see the identities of those who have evaluated them. As Jean Cohen, Nell and Edward Singer Professor of Contemporary Civilization and Political Science, states, "you can't have it both ways... either anonymity all the way down or 'transparency' all the way down." Meaning: if you want "transparency," there cannot be anonymity; that is a contradiction. If the anonymity of the reviewers is to be preserved, then the evaluations themselves should likewise be kept from public view. She continues: "Since course evaluations are anonymous, as they should be, to protect the student, they should also not be made public, for similar reasons: to protect the professor." Protect the professor from what? From anonymous, unverifiable evaluations that become a source of hearsay and gossip with potentially lasting, damaging effects. From a potential infringement on academic freedom: the freedom to teach controversial and uncomfortable topics without fear of reprisal.
The executive summary speaks of a "culture of accountability." Who, then, is accountable in this "culture"? Accountability must work both ways. There is no accountability whatsoever if individual evaluators cannot be held accountable for their evaluations. When we have our car, our child, or our health "evaluated," we need to know not only how the evaluation works, but also the identity and credentials of the evaluator. Without verifiable identities, anonymous evaluators cannot be held "accountable" for their actions.
There is thus no real transparency and no real accountability--and certainly not a culture of either. The student drafters of this report seem to think that transparency and accountability should work in only one direction.
This brings me to point 2, which is closely related (indeed, all these points are intimately interconnected): the question of bias--racial, gendered, or otherwise--in course evaluations. The draft report makes no mention, not even a glancing recognition, of the possibility of bias in course evaluations: a surprising--and revealing--omission. And although the data on course evaluations are not abundant (one wonders why), a number of studies DO indicate that student course evaluations can reveal patterns of bias.
A 2005 study by Therese Huston summarizes some of these findings. For example, "one study found that students rate Asian-American instructors as less credible and intelligible than white instructors."
"In a series of semi-structured interviews, Hendrix (1998) found that students in a predominantly White university did not believe that a professor’s race influenced their perceptions of that instructor’s credibility, yet the students simultaneously described a different set of criteria for evaluating the credibility of their Black instructors for courses on certain topics (relative to the criteria applied to their White instructors)."
"Students’ comments revealed that Black instructors had more credibility when they taught courses that had an ethnic or racial focus, and students reported that they would more readily question and challenge the credibility of Black instructors for courses that lacked an ethnic / racial component to them." Another study showed that "there is also consistent evidence that to receive high course evaluations, students require female faculty to demonstrate more warmth and interpersonal contact than they require of male faculty." Or: Hamermesh & Parker (2005) also found an interaction between gender and course level (i.e., whether the course being evaluated was an upper or lower division course). Female faculty teaching upper-division courses received course evaluations that were about average for the sample, but female faculty teaching lower-division courses received course evaluations that were far below average.
This accords with what Professor Susan Boynton has pointed out [and which Professor Gordon will address]: certain categories of teachers, such as junior faculty, often bear the brunt of skewed evaluation results because of their structural position. One might argue that course evaluations are inevitably biased anyway, that perhaps we should work to make better, more bias-free evaluations (which we could only verify if we knew, for example, the race and gender of the evaluators), but that the public disclosure of these evaluations is not the issue and would in no way worsen the effects of bias. I disagree. To argue that public disclosure would not worsen such bias is to profoundly misunderstand how women, minorities, and other "underrepresented groups," in the words of President Bollinger's and Provost Coatsworth's recently publicized, important diversity initiative, often stand in a fragile relationship to power structures and the judgments that issue from them. Forced public disclosure of unverifiable, non-transparent, non-accountable evaluations constitutes, in my opinion, an infringement on ALL faculty's right to academic freedom--and it does not matter if the faculty member in question has the best evaluations in the history of humanity; that is irrelevant. But such disclosure would be far more unfair, if I can use that word, to faculty who are the object of reviews biased with respect to race, gender, or junior status (and I am not saying that such bias is necessarily conscious). And in this age of internetted, viral, blogospherical dissemination, it is naive to think that public disclosure would somehow be contained within the boundaries of a pristine, so-called "Columbia community."
What students blithely see as neutral "information" that lets them choose their classes more efficiently is, from the perspective of many of those being evaluated, an infringement on their essential rights as workers and on their academic freedom.
And here I want to address the third point, which has to do with the argument that publicized course evaluations would be a kinder, gentler, more positive alternative to the more unmediated, less controlled, wilder, and potentially nastier forum of CULPA (or other sites, like ratemyprofessor.com). On page 25 of the report the drafters state:
"We agree that questions about the impact of course evaluations on academic freedom are serious; however, we believe that publishing the results of Columbia evaluations will REDUCE CONCERNS ABOUT ACADEMIC FREEDOM, NOT INTENSIFY THEM. Since students commonly make course decisions based on the information they find at CULPA .info, academic freedom is already curtailed, and nothing can be done about it; by bringing evaluations in-house, evaluation designers can carefully script the questions so as to minimize the problem as much as possible."
In another part of the report the drafters state that "the genie is already out of the bottle."
This is a profound misunderstanding of academic freedom. Sites like CULPA and ratemyprofessor do not carry the imprimatur of Columbia University, the employer of the teachers being evaluated. Student comments on those sites are meant for other students and are not situated within any context whatsoever. They range from the wildly positive to the fair-minded to the vilely racist, sexist, and homophobic. Yet as teachers, we do not have to consider those comments as impinging on our right to say what we need to say in the classroom, because our institution protects us through its commitment to academic freedom. This is an institutional commitment by Columbia--how students rate us on randomly orbiting websites does not impinge on our basic protections as teachers at Columbia. If course evaluations are publicly disseminated--and make no mistake, they will not be limited to those with CUIDs--then it will be as if Columbia had put its stamp of approval on the results, no matter whether bias enters the picture, no matter what the extenuating, contextual circumstances surrounding the evaluations are, no matter what the specific course content is.
Rather than a "culture of openness", transparency, and accountability, the mandated public disclosure of course evaluations would lead rather, to a culture of increased surveillance, increased suspicion, and reduction of autonomy (and this is completely tied-up with the entailments of the internet). This is from many faculty members' perspectives, and we have received numerous comments in the last few days. What might be a "culture of openness" to students if evaluations are publicly disclosed, what might seem just to be a more "efficient" and easy way to be a consumer, will be bought at a high price in terms of faculty protections and autonomy.
The following comment, sent to the Senate, is from an adjunct professor:
"I propose the following: publish the student's name and the grade he/she received in the course together with the student's evaluation, so the reader can form a rational opinion based on ALL the facts. Only this transparent approach, which upholds accountability and traceability, and true to the spirit of Democracy, makes any sense."
This brings me back to my initial comments and those of Professor Cohen, among others. It won't do to have evaluations that are public but anonymous IF transparency, democracy, and accountability are to be sustained. If you don't want transparency and democracy, then that's a choice--but don't pretend that this would constitute some sort of open, accountable "culture" writ large. Throughout the Report, there is an appeal to all of us to respect students' judgments: to recognize that students recognize quality, that they are fair and know what an unfair review is, that they are responsible. By the same token, given this logic, students should recognize that professors are responsible, caring adults, bound by a code of conduct as Columbia University professors, and that professors would not use their power disrespectfully if confronted with signed negative reviews (published after grades are given, of course). Only in a mutually open and verifiable process can transparency and accountability be achieved--which is absolutely essential in any public disclosure of official documents that are, at least now at Columbia University, part of an employee's personnel record, as course evaluations are.
Evaluations are meant to assess faculty performance so that we understand our strengths and weaknesses as professors. With this information, we are meant to hone our craft in the classroom. Evaluations are useful when they assess the quality and quantity of assigned readings, the connections between lecture material and readings, and so on. They are not valuable when they assess personal characteristics, the ease or difficulty of grading practices, or a professor's personality. These CONFIDENTIAL student reviews are read within departments, where bias can be somewhat edited out when the evaluations are discussed among faculty members and the chair of the department. (However, as discussed by Professor Ivy in her presentation, bias does not appear only in overt statements and so cannot be eliminated simply by identifying the offending judgments. It is implicit even in quantitative assessments.) Evaluations are useful and help us become better professors. In most courses, students complete evaluations with no sense of SELF-PRESENTATION, because they know that only faculty will read them. These evaluations are not for PUBLIC PERFORMANCE; they are not distorted by pressure from the social environment. Nobody knows what other people are writing. That is why they can be effective. The evaluations on internet sites like CULPA and RateMyProfessor are often sexist and racist and can contain hate-filled bits of student dissatisfaction. The students on those sites write for a public -- other students. This public presentation can often be unconsciously shaped by the desire to impress peers with cleverness, or to position a student as superior to another student or to the faculty member in question. Such evaluations are NOT useful for faculty self-assessment at all.
Because of the Internet, we live in a time of anonymous self-presentation and public performance. Just look at YouTube at any given moment! Keep in mind the recent comments on BWOG in the wake of the announcement that Obama would speak at Barnard graduation, or in the Spectator’s comment section. This kind of public presentation or performance is not useful, and if we institute public evaluations at Columbia, we are likely to see the same kind of results.
Students say that this aspect of public demonstration and vitriol on CULPA and other online sources is part of the incentive for creating Columbia-sanctioned public evaluations. But HOW will hateful and biased comments be edited? The idea that someone in the registrar’s office, as the students suggest on page 31 of their report, would be the individual responsible for mediating between faculty (who “flag” inappropriate comments) and students further signals just how irrelevant the faculty are in this model. Do you really want someone who has no experience in a classroom deciding -- over the voice of the professor -- whether or not an evaluation is “inappropriate”? With what expertise would they judge? Should faculty edit other faculty’s evaluations? This raises the question of which comments are appropriate and which are not. Is calling somebody a “Marxist” or a “socialist” or a “feminist” or an “agenda-laden postmodernist” appropriate? These terms are not inherently inappropriate, but they are often used pejoratively. How, then, do we sort out these complexities of language in such a forum? Sorting them out would be essential, because professors’ reputations and careers are on the line. Who determines when language becomes inappropriate? I think we can all appreciate what happens when you open the door to this type of censorship.
You can also see how open course evaluations could easily compromise academic freedom, something that we, at Columbia, take pride in. Open course evaluations could create an atmosphere of pandering and surveillance that may undermine responsible teaching. In fact, they may pose an unacceptable risk to faculty who teach controversial topics or topics of more public interest -- or even faculty who speak out on a public matter unrelated to their teaching.
I’d like to read a statement from a well-regarded professor of history in response to the student committee’s report and proposed resolution:
“As someone who teaches courses on the Middle East, the practice of making evaluations public will force me into self-censorship or into restricting my teaching to highly specialized topics. Any student’s disagreement with a political opinion will gain publicity with the University’s stamp of approval. The idea that this information will be limited to CUID holders only is ludicrous as we know from past experience that individual students and student groups are happy to provide outside pressure groups with information to be used against faculty members or the University as a place of intellectual discourse as a whole. Obviously, the same danger will confront any faculty member who treats problems of gender, race, or politics in other parts of the world. So I strongly oppose the idea to make student evaluations public.”
There is nothing to prevent someone with a UNI from leaking teaching data to the media for compensation, political interest, or ego fulfillment. There are so many forms of internet harassment by which people are subjected to the force of rumor and other unsubstantiated statements that this can only further the practice of judgment by hearsay -- which it is the university’s function to overcome.
In terms of course content, student opinions in evaluations are sometimes based on the kinds of courses that faculty teach, often without any choice in the matter. Junior faculty (who are most vulnerable to negative evaluations) often teach required courses, some unpopular with many students simply because they are obligatory. Junior faculty who teach courses covering a huge range of material, such as those in the Core Curriculum, are therefore the most likely to receive evaluations that do not do justice to their expertise. Lecture courses are also prone to eliciting superficial judgments, sometimes based on the physical appearance of the lecturer.
If openness is the issue, then students’ names should be on evaluations just as ours will be. Putting your name on an evaluation may make you more careful of what you say and how you say it. Owning your own comments would encourage responsibility. And this could be done after grades are in, so there is no vulnerability. To quote another faculty member, “Students can't have it both ways... either anonymity all the way down or ‘transparency’ all the way down.”
However, since course evaluations are anonymous, as they probably should be, to protect the student, they should also not be made public, for similar reasons, to protect the professor. Students demand and are entitled to privacy regarding their own performance in the form of grades and transcripts. We do not publish student grades, and if we did, wouldn’t that constitute a violation of the principle of transparency?
But ultimately, we have to question the effectiveness of evaluations altogether. They are certainly in need of improvement. Those that claim to measure teaching effectiveness must be grounded in some grasp of teaching and learning theories. Measurement must be quantitatively informed and sufficiently sophisticated to be useful. Variables such as time of day, teaching style, instructor ethnicity, gender, and sexual orientation (if the instructor has made that orientation explicit to students), the nature of the course (requirement or elective), and course content should be factored in. Numbers alone reveal little. An evaluation often tells more about a student’s opinion of a professor than about the professor’s teaching effectiveness.
Most problematic is the conflation of evaluation for the purposes of promotion and teaching assessment with consumer advice. Another well-known professor writes the following:
“I continue to think that the present proposal for open course evaluations is a bad idea. Of course students will always seek information from others about which courses to take, as they should. However, in my experience, the evaluation system produces brief and unreliable comments, while the very act of publicizing them will give them a legitimacy they do not deserve. Moreover, the highly precise numbers attached to these evaluations produce an unwarranted veneer of ‘scientific’ accuracy. Personally, I do not worry much about what happens to evaluations of senior professors but the system can be very harmful for junior colleagues who are still learning how to teach. But my foremost objection is to the commodification of education -- treating the student as a consumer in a cornucopia of available courses with the professor as simply another product to be evaluated and consumed. I know that this is the trend of our times -- the commodification of everything -- but that is no reason for us to sign on to the trend.”
I have to agree: opening up evaluations to “help” students choose classes does move us toward a consumer model of education.
Perhaps what we need are two different systems. For professors, we need evaluations that are used for faculty improvement and better teaching, and to inform decisions about promotion and tenure. Students who would like to shop for classes can look at course syllabi and attend a class or several classes to judge for themselves. Perhaps the “shopping” period of the semester could be extended, to give students adequate time to evaluate classes. Portfolios of lecture notes, syllabi, and videotapes can give evidence of the quality of a class’s content and presentation. Students know that what is good for one student is not right or effective for the next, and they should make up their own minds based on their needs, their likes, and their ambitions. Perhaps in town hall meetings such as this one, juniors and seniors can meet face to face with freshmen and sophomores -- providing guidance and information in person.
One more comment about our peer institutions: because something has been implemented elsewhere does not mean it has been a success. If Harvard burned itself down, should we too?
I am on the faculty at the Medical Center (CUMC). I have taught there for the better part of 30 years, and at Teachers College and Brooklyn College as well, for a total of about 40 years of teaching. My main concern about using evaluations has always been that they promote grade inflation and encourage teaching as entertainment.
In several departments where I taught, the highest and most positive student evaluations went to the teachers considered by the faculty to be the least knowledgeable -- those least informed and least up-to-date, who gave the best grades, ended class early, and spent class time telling funny stories. Some of them gave out only As (A+, A, A-) for years. Often they allowed students with failing class records to do a "special" paper to raise their INC grade to the inevitable A. Classes that were more challenging, where the faculty did not lump everyone's performance into an "A" bucket, were difficult to fill and harshly criticized in the evaluations. If you sat through the classes, you concluded that you could tell nothing from evaluations about the value of the course and the instruction; you could only tell whether the material was a challenge or not, with the challenging courses getting worse evaluations. In these programs, it was also clear that students' evaluations were very different once they were a year or two out of school. At that point, as conversations with graduates showed, many had come to recognize the good education they had gotten in the challenging course where they earned a B-. But even then, the courses with the guaranteed A were often remembered very fondly, as a "breeze" and "lots of fun," rather than as a distressing challenge that actually educated them.
Most of my teaching has been in (science) graduate education, where this should be less of a problem. Certainly in undergraduate education there is evidence that many students have avoided science classes because they typically have a lower grade distribution than non-science classes. In the Brooklyn College chemistry department between the 1950s and 1970s (when I attended and then taught there), there was a mandated limit of 7% As in any course. It made for a high level of training, but it limited enrollment. Even today, science grades tend to be lower than non-science grades, and that is reflected in students' preference for non-science electives. But I have seen this WITHIN science departments as well, as students avoid courses and instructors where they think they may get a B or C.
The discussion I have read about the publication of evaluations never mentions the effect of grading on faculty evaluations, or the potential further effect of wider publication of evaluations on faculty grading -- likely even more grade inflation. I would love to see a system that could prevent (and better yet, reverse) the continued grade inflation I have seen over the last 40 years. Then students would recognize when they are not devoting the effort they should, and graduate schools could distinguish really good applicants from those who "breezed through."
The Department of Art History and Archaeology voted unanimously against the proposed publication of course evaluations for all the reasons voiced in the town meeting: transparency without accountability; vulnerability of professors conducting discussions on sensitive moral or political topics; bias; the public exposure of personnel files; the direct impact on grade inflation.
However, I would like to share some additional concerns that arose during our departmental discussion.
1) Many people outside the Columbia community obtain a “UNI” and it takes a long time (how many months?) for those who have left to lose their “UNI.” For example, anyone who has a library card has a “UNI” and expanded access to library services is the single most popular request directed to the Columbia Alumni Association, which numbers over 300,000 members. If a different access policy is not implemented, the public will have very easy access to this information.
2) It is behind the times to be speaking about “openness” in internet policy. Due to numerous abuses, the big issues at the moment are the “verifiability of internet agents/subjects” and the “reality of internet data/representation by subjects/participants.” Ethicists are recommending careful consideration of “who/what is vulnerable, and when?” (www.sis.pitt.edu/~peterb/3005-011/Buchanan-pitt.pdf) Even commercial vendors such as Amazon are responding to these concerns, e.g. by marking whether book reviews are posted by people who have purchased the book. The university should be leading the discussion on the responsibilities of free speech, not waving them aside.
3) One faculty member objected that if the administration backs this measure, it has the responsibility to compose an entirely different form with a short series of comparatively neutral questions, e.g. “Do you recommend the course?”
He argues that the course evaluations are doing too much work. They were invented to allow students to give feedback to the professor and therefore incorporated rather “personal” questions, e.g. “How can the faculty member improve?” After many years, these documents were absorbed into personnel files, with a consequent quantitative overlay for pseudo-scientific comparison across fields and disciplines. Now the proposal is for students to be able to use them to talk to each other about what they liked or disliked in a course. These functions must be disentangled from one another, as the present format solicits too much inappropriate, personal commentary.
Thank you very much for organizing the town meeting. It was enlightening to learn how many ethical issues are raised by this single proposal.