APRIL 11, 2012, 4:30-6:30 PM
104 JEROME GREENE HALL, LAW SCHOOL
About 40 people attended.
Sen. Sharyn O’Halloran (Ten., SIPA): Good evening. I’m Sharyn O’Halloran, and I’m the George Blumenthal Professor of Political Economics at Columbia as well as the chair of the Executive Committee of the University Senate. And I would like to thank everyone for being here at the town hall meeting on Columbia University’s policy regarding open class or course evaluations. This is an event hosted by the University Senate, and I’d like to thank Tom Mathewson for making all the arrangements.
Now the purpose of the forum is really twofold. First, to update or inform the Columbia community about the proposed policy to recommend that all course evaluations be made publicly available, and to get your feedback on a proposal that the Student Affairs Committee has really been working on for the last year, almost two years, regarding the way in which to make evaluations available, both the quantitative information as well as some of the qualitative information, and how that can best be done through a public vehicle such as a website.
Now, what is said tonight, your comments and the feedback, is really going to provide a basis for recommendations on how the Columbia community can balance the needs of students to decide which courses and professors best meet their curriculum needs, while at the same time mitigating potential downsides of misuse of publicly available course evaluations. And it’s clear as we engage this important issue that it’s necessary to stay true to our mission of an open and free environment for teaching, learning and research, and that’s sort of the criteria that really guide much of our thinking. And so your participation is both welcome and very important.
Now the structure of the debate for tonight will just be to start with an overview presentation. We’re not going to do PowerPoint; we’re just going to describe what the actual proposal is, and then there will be the opportunity to get feedback. We’re going to have Senators Alex Frouman from the College, Sarah Snedeker from Barnard and Ryan Turner from SEAS give the background on the current policy, which allows individual schools and departments to decide whether evaluations are publicly available or not and what information is available, as well as discuss the proposal to open up evaluations more broadly.
Now we’re going to have a critique or a counter-proposal to the Student Affairs Committee’s proposal, and that will be presented by senators Bette Gordon, who’s from the School of the Arts, as well as Marilyn Ivy from Social Sciences. And then we’re going to open up the floor for discussion and questions. Now we want everyone to have an opportunity to express an opinion, and we believe free expression of opinion is essential to the university and that all members of the Columbia community have the right to express their views in a safe environment. I’m not sure if course evaluations are going to lead to as passionate a response as some of our other issues, such as smoking policy; however, I just want us to all put that in context that we do expect proper and respectful dialogue amongst the Columbia community. And so we ask that everyone be allowed to state an opinion without interruption, and we ask that you keep your comments short, no more than a minute. Okay? And I want to thank you again for joining us this evening, and we look forward to a very productive and positive conversation. So we’ll start with the students.
Sen. Ryan Turner (Stu., SEAS): Okay, great, thanks, Sharyn. So the open course evaluations subcommittee, which is a subcommittee of the Student Affairs Committee, was formed in the fall of 2011 to take up an effort that actually dates back several years among the students to get these course evaluations published. Our committee has had a number of meetings over the past year with stakeholders across the university: deans, students, professors, administrators, the provost, etc. We also met extensively with different Senate committees, and we benchmarked our peer schools, especially Ivy League schools, to see what kinds of course evaluation systems exist at those institutions. We also benchmarked all of the Columbia schools, including the professional schools. It turns out most professional schools at Columbia already provide some type of open course evaluations, though it’s inconsistently applied, and we’ve also reviewed some of the academic literature on the subject. There have been a number of studies going back many years about course evaluations and some of the problems with course evaluations, gender and racial biases among others. So we’ve taken a look at those and captured all that information in our report, which I think you all got a link to, as it’s available on the website, and you should have the executive summary in front of you. We’ve also produced a resolution, which will probably be voted on later this month, that captures those points in summary form, as well as our specific recommendations.
Basically, to sum it up, the committee feels that this is a very important issue for students. It’s really an issue of maximizing the value of our educations. We think that the information obtained from course evaluations goes a long way toward helping students pick the right classes and making the most of their very precious and limited time here at Columbia. It’s not an issue of students versus faculty. We actually believe that this policy of opening up course evaluations will in fact induce students to provide better feedback when they know that their opinions will be counted and will be taken seriously by faculty. That’s actually going to lead to better feedback for the faculty. So we think that this is really a policy that benefits the university as a whole.
I’m going to let Sarah tell you a little bit about our specific recommendations.
Sen. Sarah Snedeker (Stu., Barnard): Hi. So before I do that I just want to point out some of the most important things that our subcommittee found. I think the first is that several of our peer schools have a very robust course evaluation system that’s available to their communities. Harvard, Yale and Princeton are all good examples of this. And these are schools that we can look to for examples when we are building a policy that is best for our university community.
Second, several schools at Columbia already release the results of their course evaluations, including some with qualitative data -- for example, the Business School, the Law School and SIPA. Other schools release only quantitative data, such as Journalism. And finally, we found that Columbia actually has a history of releasing the results of course evaluations. From about 1962 until the early 1990s a Columbia-Barnard Course Guide was released, and it had summaries of the evaluations of all the courses that were offered. It was a student committee, I think, that summarized them, and it also included grade distributions, quantitative data, and direct quotes from qualitative evaluations. And I think it was a big part of the culture at Columbia University for a long time, and the information we have is that it was stopped because of a lack of staffing and funding. So what we’re doing here is not a huge departure from Columbia’s established method for course evaluations.
And so, our specific recommendations. We are proposing a gradual process. The first step is to release the results of quantitative data. This function currently exists in CourseWorks and can be turned on right now. Departments can opt in, and so we would, I guess, be asking Arts and Sciences and other Columbia University schools to opt into this system if appropriate. Nothing that we’re recommending is binding on the entire university. We definitely understand that each school has its own individual concerns, and those need to be taken into account.
After quantitative data is released, we’re proposing that eventually a qualitative question should also be released, something to the effect of, would you recommend this course to your peers and why. This is a question that is used at our peer schools; Harvard, Yale and Princeton use some variation of it. We think it’s useful because it specifically tells students the information they need to know and also tells the reviewer what the purpose of the question is.
Next we are proposing that there should be a system in place to allow for inappropriate comments to be removed. And so professors theoretically would read their evaluations and flag problematic ones, and an administrator would review them. The implementation I guess would be up to CUIT, but that’s definitely a necessary part of a Columbia University open course evaluation system.
We also believe that the best system would be integrated with the course catalogue. This would allow students while they’re searching classes to click and see the results. Right now the CourseWorks system is not ideal. Most people don’t know where it is. It’s hard to get to. The data’s not aggregated over time or by professor, and so we think that a system that was integrated with the course catalogue would be a vast improvement over this right now.
We’re also proposing to extend the evaluation period past the end of final exams, but before students receive their grades. This would allow students to fill out the evaluations, not during the most stressful time of the semester, and also with their experience from the final in mind. And I guess that’s about it basically.
Sen. Alex Frouman (Stu., CC): And there are other small considerations, like minimum thresholds for the data to be released, and you can find those in the report. A couple of very important considerations I’d like to add. The first is for graduate student instructors. We spoke a lot with the Graduate Student Advisory Council, which is the council for the Graduate School of Arts and Sciences, and we believe that if there is an open course evaluation system for graduate student instructors it should be opt-in, because when graduate student instructors are at the front of the classroom teaching the class, they’re also learning. And so that’s a very different process than if you’re a faculty member who’s here to teach, or research and teach.
And the other concern is for junior faculty, or faculty who are new to Columbia and to teaching at Columbia. We think that before anyone has the results of evaluations released, they should have been teaching for two semesters at Columbia, to allow them time to receive the feedback and incorporate that feedback into their teaching. It’s stressful to come to Columbia and a new job. I can’t say firsthand, but it’s been expressed to me that being junior faculty is also very difficult and a challenging time. And we think any proposal should take that into account. That is why junior faculty, or faculty otherwise coming here, would have time to adjust to the environment, and the evaluations from those first two semesters would never be released, only those from the third semester onwards.
Snedeker: And also I’d like to point out when we say graduate student instructor, we’re not including TAs. The results of the evaluations of TAs would not be included in this system at all.
O’Halloran: All right. Thank you for the student presentation overview. We now have a counter-presentation. Marilyn first and then Bette.
Sen. Marilyn Ivy (Ten., A&S/Social Sciences): So Tom Mathewson asked my colleague Bette Gordon and me to articulate a con position to the Senate report. We didn’t necessarily volunteer ourselves to do this, but we’re willing and happy to do it. While we haven’t been working on this issue over the last year or two, as I hear the Student Affairs Committee or some version of it has, we do have some well-considered objections to the report and to the general recommendation made for open course evaluations, and we have tried to be thoughtful about them. And I want to say that although we in no way speak for all our fellow faculty members, and so we aren’t acting as representatives of them, even in our positions as senators, we have received statements from an ample range of faculty, particularly in the Arts and Sciences, who oppose this resolution. There are, of course, also faculty who approve, but our mandate today is to present a critique of the resolution, or the con of the resolution, and of the report on which it’s based. So that’s what we’ll do.
So I’d like to focus on three main points in the very short time available, and there’s of course much more one could say. But the three things I want to address are: one, the idea of transparency, of openness, that the Senate report relies on as its very basis; two, the unrecognized and unacknowledged possibility of gender, racial and other kinds of bias in the course evaluation process. Even though we just heard the words gender and racial bias in the presentation, I looked at this pretty carefully, and I looked at it again right now, and, although I could be wrong, I don’t believe that there is any acknowledgement of those particular kinds of bias or of how the effects of this kind of bias would be exacerbated by public exposure. And three, the idea that publicly disclosed course evaluations would be an improvement over the current system in relation to academic freedom, which the drafters at one point call a general and abstract concern. To my way of thinking, and I think for many of us, there’s nothing abstract about it.
But anyway, first off, the proposal is unclear about what the purpose of course evaluations is, and that maybe reflects the uncertainty that is embedded in course evaluations. At times it suggests that they are for feedback to professors, which is of course one of the primary objectives of course evaluations the way that they are constituted now at Columbia. And at other times it suggests that they are a mechanism for a kind of a remedially oriented intervention that relates to feedback to help the professor improve. At other times it recognizes that evaluations are used in hiring, promotion and tenure cases, which they certainly are.
Finally, at other times it suggests that their purpose is to publicize classes or to give information about classes. And I would say that this last objective seems to be the one, for obvious reasons, that is most insistently promoted as the reason for the need for public course evaluations. Moreover, throughout it suggests that openness will lead to all three of these very different and perhaps incompatible ends.
So here I’d like to address the first point, that of openness and transparency. So the very subtitle of the draft report on open, so-called open course evaluations, is revealing. Quote, promoting a culture of openness. So if you have the draft report, you can see that. So open is used twice in the title of the report. Open course evaluations promoting a culture of openness. So openness is obviously a very important ideal here.
Following this, the executive summary of the report states, quote, open course evaluations promote a culture of transparency, accountability, and self-examination, consistent with the highest ideals of scholarship. End of quote. So I want to question, or think about, these ideas of transparency and accountability. Transparency means that there’s no blockage, that something is see-through. And if you see me through a transparent window, it means that I can see you. That’s what transparency is. This process as described in the draft report is not transparent. Students can evaluate professors, and those evaluations can be seen by everyone in the so-called Columbia community in the public posting of course evaluations. Yet professors can’t see the identities of those who have evaluated them. As Jean Cohen, Nell and Herbert Singer Professor of Contemporary Civilization in the Core Curriculum, states, “You can’t have it both ways. Either anonymity all the way down or transparency all the way down.” Meaning, if you want transparency, then there cannot be anonymity. That is a contradiction. If the anonymity of the reviewers is to be preserved, then the public anonymity of the evaluation should also be preserved. She continues, “Since course evaluations are anonymous, as they should be, to protect the student, they should also not be made public for similar reasons, to protect the professor.”
Protect the professor from what? Well, from anonymous, unverifiable evaluations that become a source of hearsay and gossip that could potentially have lasting, damaging effects. From a potential infringement on academic freedom, the freedom to teach about potentially controversial and uncomfortable topics without fear of reprisal. And from the degrading effects of bias: there is evidence that anonymous comments are more likely to be exaggerated, untrue, biased, and/or hostile than those which are signed and for which people are willing to take public responsibility.
The executive summary talks about a culture of accountability. Who then is accountable in this culture? It must work both ways. There’s no accountability whatsoever if individual evaluators cannot be held accountable for their evaluations. If we have our car, our child, or our health evaluated, we need to know not only how the evaluation works, but we also want to know the identity and credentials of the evaluator, his or her training and history of experience with such procedures. Without verifiable identities, the anonymous evaluators cannot be held accountable for their actions. There is thus no real transparency, and there is no real accountability, and certainly not a culture of it.
The student drafters of this report seem to think that transparency and accountability should only work one way. And this brings me to point two, which is closely related, and of course all these points are intimately interconnected. This has to do with the question of bias, racial, gender or otherwise, in course evaluations. The draft report, as I said, makes no mention of, nor even a glancing recognition of, the possibility of gender and racial bias in course evaluations. It is a surprising and revealing omission, I think. And although there are not tons of data on course evaluations, one wonders why, there are nevertheless a number of studies that do indeed indicate that student course evaluations can reveal patterns of bias.
A 2005 study by Therese Huston summarizes some of these findings, and I’ll just give a few examples. One is, “One study found that students rate Asian American instructors as less credible and intelligible than white instructors.” “In a series of semi-structured interviews, Hendrix (1998) found that students in a predominantly White university did not believe that a professor’s race influenced their perceptions of that instructor’s credibility; yet, the students simultaneously described a different set of criteria for evaluating the credibility of their black instructors for courses on certain topics relevant to the criteria applied to their white instructors. Students’ comments revealed that black instructors had more credibility when they taught courses that had an ethnic or racial focus, and students reported that they more readily question and challenge the credibility of black instructors for courses that lacked an ethnic/racial component to them.”
Another study showed that “there’s also consistent evidence that to receive high course evaluations, students require female faculty to demonstrate more warmth and interpersonal contact than they require of male faculty.” Hamermesh and Parker (2005) also found an interaction between gender and course level, i.e., whether the course being evaluated was an upper- or lower-division course. Female faculty teaching upper-division courses received course evaluations that were about average for the sample, but female faculty teaching lower-division courses received course evaluations that were far below average. That’s just a sample of a few studies.
This would go along with what Professor Susan Boynton has pointed out, and which I think Professor Gordon will note: that there are categories of teachers who often bear the brunt of skewed evaluation results because of their structural position, such as junior professors, and maybe particularly junior female professors. That position may require them to teach mandatory courses that students do not wish to take and cannot drop, even with poor performance, and that they therefore rate lower than courses which they take out of interest and on the basis of personal choice. And this is not to mention anecdotal evidence for this kind of bias and for negative comments or comments that reveal bias. Even if faculty are allowed, as is being proposed here, to delete such comments from public disclosure, the student who wrote the comment is still included in the quantitative evaluation. Bias is not just about explicit vitriolic comments. Quantitative assessments are influenced as well.
The argument might be that, well, okay, course evaluations are inevitably biased one way or the other, and perhaps we should work to make better, more bias-free course evaluations, which would only work in this case if there were correlations with the race and gender of the student evaluators. And I think we would all want to see better course evaluations. But the argument would go that the public disclosure of these evaluations is not the issue, and would in no way worsen the effects of bias. And I would disagree. To argue that public disclosure would not worsen such bias is to profoundly misunderstand how women, minorities and other underrepresented groups, in the words of President Bollinger and Provost Coatsworth that were recently publicized in an important diversity initiative, are often in a fragile relationship to power structures and the judgments that issue from them. While forced public disclosure of unverifiable, non-transparent, non-accountable evaluations constitutes an infringement upon all faculty’s rights to academic freedom, in my opinion, and it doesn’t matter if the faculty member in question has the best evaluations at Harvard, Yale and Columbia put together, that’s irrelevant, such disclosure would be much more unfair, if I can use that word, in the case of faculty who are the object of reviews biased in relation to race and gender or junior status, and I’m not saying that bias is necessarily conscious. And in this age of the internet and viral blogospheric dissemination, it is naïve to think that public disclosure is somehow contained within the boundaries of a pristine so-called Columbia community. What students blithely see as something that gives them neutral information so that they can choose their classes more efficiently is, from the perspective of many of those being evaluated, something that infringes on their essential rights as workers and on their academic freedom.
And here I want to address the third point, which has to do with the argument that publicized course evaluations would be a kinder, gentler, more positive alternative to the more unmediated, less controlled, wilder and potentially nastier forum of CULPA or other sites like ratemyprofessor.com. On page 25 of the report the drafters state, quote, we agree that questions about the impact of course evaluations on academic freedom are serious. However, we believe that publishing the results of Columbia evaluations will reduce concerns about academic freedom, not intensify them. Since students commonly make course decisions based on the information they find at CULPA, academic freedom is already curtailed and nothing can be done about it. By bringing evaluations in-house, evaluation designers can carefully script the questions so as to minimize the problem as much as possible. End of quote. I think this is a profound misunderstanding of academic freedom. Sites like CULPA and ratemyprofessor do not have the imprimatur of Columbia University, the employer of the teachers being evaluated. Student comments on those sites are meant for other students and are not situated within any context whatsoever. They can range from the wildly positive to the fair-minded to the wildly racist, sexist and homophobic. Yet as teachers we do not have to consider those comments as impinging on our rights to say what we need to say in the classroom, because our institution protects us by its commitment to academic freedom. This is an institutional commitment by Columbia. How students rate us on random websites does not impinge on our basic protections as teachers at Columbia, though there have been periodic efforts and some indications that collective legal action may be taken against such sites for slander and/or defamation.
If course evaluations are publicly disseminated, and make no mistake, they will not be limited to those with a CUID, then it will be as if Columbia has put a stamp of approval on the results no matter if bias enters into the picture, no matter what the extenuating, contextual circumstances are surrounding the evaluations, no matter what the specific course content is.
Rather than a culture of openness, transparency and accountability, the mandated public disclosure of course evaluations would lead to a culture of increased surveillance, increased suspicion, and reduced autonomy. And this is completely tied up with, I think it’s not at all separate from, the entailments of the internet.
This is the perspective of many faculty members, and we’ve received numerous comments in the last few days. What might seem a culture of openness to students if evaluations are publicly disclosed, what might seem a more efficient and easy way to be a consumer, will be bought at a high price in terms of faculty’s protections and autonomy.
The following is from one of the comments sent to the Senate, and it’s from an adjunct professor, and I quote, “I propose the following: publish the student’s name and the grade he/she received in the course together with the student’s evaluation so the reader can form a rational opinion based on all the facts. Only this transparent approach, which upholds accountability and traceability and is true to the spirit of democracy, makes any sense.”
And this brings me back to my initial comments and those of Professor Cohen, among others. It won’t do to have evaluations that are public but anonymous if transparency, democracy and accountability are to be upheld. If you don’t want transparency and democracy, then that’s a choice. But don’t pretend that the publication of anonymous evaluations would constitute some sort of open, accountable culture at large. Throughout the draft report, there is an appeal to all of us to respect students’ judgments, to recognize that students also recognize quality, that they are fair and that they know what an unfair review is, and that students are responsible, and that we should trust them. We meaning faculty. By the same token, given this logic, students should recognize that professors are responsible, caring adults bound by a code of conduct as Columbia University professors, and that professors would not use their power disrespectfully if confronted with signed, negative reviews, published after the grades are given, of course, for extra protection. Only in a mutually, and I want to underscore mutually, open and verifiable process can transparency and accountability be achieved, which is absolutely essential in any public disclosure of official documents that are, at least now at Columbia University, part of the personnel records of the employee, like course evaluations.
O’Halloran: Thank you very much, Professor Ivy. We now have Professor Gordon.
Sen. Bette Gordon (NT, Arts): Yes, okay. I’ll try to keep my remarks brief. A few comments, some that Professor Ivy has mentioned and then some others. Evaluations are meant to evaluate faculty performance so that we understand our strengths and weaknesses as professors. With this information we are meant to hone our craft in the classroom. Evaluations can be useful when they assess the quality and quantity of assigned readings, the connection between lecture material and readings, and so on. They’re not valuable when they assess personal characteristics, ease or difficulty of grading practices, or professor personality. These confidential student reviews are read within departments, where bias can be somewhat edited out when the evaluations are discussed between faculty members and chair. However, as discussed, bias does not simply appear in overt statements, and so it cannot be eliminated totally; it’s implicit, of course, even in quantitative and qualitative assessments. These evaluations are useful and helpful because through them we can become better professors. In most courses students complete evaluations with no sense of self-presentation, because they know that only the faculty will read them. These evaluations are not for public performance. They’re not distorted by pressure from the social environment. Nobody knows what other people are writing. That’s why they can be effective.
The evaluations on the internet sites like CULPA and ratemyprofessor are often, as we’ve said, sexist and racist and can contain hate-filled bits of student dissatisfaction. The students write for a public, for other students. This public presentation can often be unconsciously shaped by the desire to impress peers with cleverness or to position a student as superior to another student, or even for that matter to the faculty. They’re not useful for faculty self-assessment at all.
Because of the internet we live in a time of anonymous self-presentation as public performance. Just look at YouTube any day. We are all performing for each other. Keep in mind the recent comments on Bwog in the wake of the announcement that Obama would speak at Barnard graduation, or on the Spectator comment section. This kind of public presentation or performance is not useful, and if we institute public evaluations at Columbia we are likely to see the same kind of results.
Students say that this aspect of public demonstration and vitriol on CULPA and other online sources is part of the incentive for creating Columbia-sanctioned public evaluations. But how will hateful and biased comments be edited? The idea that someone in the registrar’s office would do this (page 31 of their report calls for an administrator, an individual responsible for mediating between faculty who flag inappropriate comments and students) further signals just how irrelevant the faculty are in their model. Do you really want someone who has no experience in a classroom deciding, over the voice of the professor, whether or not an evaluation is inappropriate? With what expertise would they judge? Should faculty edit other faculty’s evaluations? This raises the question of what comments are appropriate and what comments are not.
Is calling somebody a Marxist, a socialist, a feminist, or an agenda-laden post-modernist appropriate? These terms are not inherently inappropriate, but they’re often used in a pejorative way. So how do we sort out the complexities of language in this kind of a forum? Which would be essential to do, because professors’ reputations and careers are also on the line. Who determines when language becomes inappropriate? I think we can appreciate what happens when you open the door to this kind of censorship.
You can also see how open course evaluations could easily compromise academic freedom, something that we at Columbia take pride in. Open course evaluations could create an atmosphere of pandering and surveillance that could undermine responsible teaching. In fact, they could pose an unacceptable risk to faculty who teach controversial topics or topics of more public interest, or even faculty who speak out on a public matter unrelated to their teaching.
I would like to read a statement from a very well-regarded professor of history in response to this report and proposed resolution. Quote: As someone who teaches courses on the Middle East, the practice of making evaluations public will force me into self-censorship or into restricting my teaching to highly specialized topics. Any student’s disagreement with a political opinion will gain publicity with the university’s stamp of approval. The idea that this information will be limited to CUID holders only is ludicrous, as we know from past experience that individual students and student groups are happy to provide outside pressure groups with information to be used against faculty members or against the university as a place of intellectual discourse as a whole. Obviously the same danger will confront any faculty member who treats problems of gender, race or politics in other parts of the world. So I strongly oppose the idea of making student evaluations public. Unquote.
There’s nothing to prevent someone with a UNI from leaking teaching data to the media for compensation, for political interest or ego fulfillment. There are so many forms of internet harassment by which people are subject to the force of rumor and other unsubstantiated statements, that this can only constitute a furtherance of the practice of judgment by hearsay, which it is the university’s function to overcome.
Again, according to Susan Boynton, one of our professors who gave us some comments: course content is something to think about, and often without any choice in the matter. As Marilyn mentioned, junior faculty teach required courses, and some of those are unpopular with students. Junior faculty spend time teaching these courses over a huge range of material, especially in the core curriculum, and therefore most likely are the ones who receive evaluations that do not do justice to their expertise.
Lecture courses are also prone to eliciting superficial judgments. I was going to talk about signing your name to evaluations and keeping things open. Of course signing your name could make those comments more responsible, you could own your comments more, but again I understand that there’s a feeling of vulnerability, even if they were handed in after grades were handed in, even after that. Sometimes we as faculty could perhaps use evaluations from people who graduated, you know, many years ago, and maybe that would be one possible solution. But ultimately we have to question the very effectiveness of evaluations at all. There certainly needs to be some improvement. Those who claim to measure teaching effectiveness must have some grasp of teaching and learning theories. Measurement of evaluations must be quantitatively informed and sufficiently sophisticated to be useful. Variables such as the time of day you teach your course (what if it’s eight or nine o’clock in the morning?), teaching style, instructor ethnicity, gender and sexual orientation, as already mentioned, and the nature of the course (is it required, is it an elective?) should be factored in, and of course the content as well. Numbers alone reveal little, and an evaluation often tells more about a student’s opinion than it does about a professor’s teaching effectiveness. Most problematic is the conflation of judgment for the purposes of promotion and teaching assessment with consumer advice. One other quote I just want to read. I’m almost near the end. This is another well-known and very beloved professor, I know you’ve all probably had his course, who says, “Dear colleagues, I continue to think that the present proposal for open course evaluations is a bad idea. Of course students will always seek information from others about which courses to take, and they should.
However, in my experience, an evaluation system produces brief and unreliable comments, while the very act of publicizing them will give them a legitimacy they do not deserve. Moreover, the highly precise numbers attached to these evaluations produce an unwarranted veneer of scientific accuracy. Personally I do not worry much about what happens to evaluations of senior professors, but the system can be very harmful for junior colleagues who are still learning how to teach, and that goes even beyond the first or second year. Some of us continue to learn how to teach even in our fifteenth year of teaching. But my foremost objection is to the commodification of education -- treating the students as consumers in a cornucopia of available courses with the professor as simply another product to be evaluated and consumed. I know that this is the trend of our times, the commodification of everything, but that is no reason for us to sign on to this trend.”
So perhaps what we need are two different systems, one for professors who need evaluations that are used for faculty improvement and better teaching and to make decisions about promotion and tenure. And then another system for students who would like to shop for classes. They can look at course syllabi, they can attend a class or several classes to judge for themselves. We have a shopping period, we have an add/drop period. Portfolios of lecture notes, syllabi, sometimes videotapes can give evidence of the quality and content of the presentation of a class. Students know that what is good for one student is not right or effective for the rest, and they should make up their own minds based on their own needs, their likes, and their ambitions. Perhaps in town meetings like this one, juniors and seniors can meet face to face with freshmen and sophomores, providing guidance and information directly in person.
One more thing I just want to say, just as a throwaway comment. Just because something has been implemented elsewhere, at Harvard, Yale or wherever, does not mean it has been a success. If Harvard burned itself down, should we do the same thing too? Thank you.
O’Halloran: I hope we can agree that the answer to that is no. But with that in mind, I want to thank you very much for the thoughtful comments. They were excellent, they show a lot of concern and consideration for the students’ opinions, and they reflect a very thorough and thoughtful job of looking at this question from multiple angles. So instead of going back and forth, I would like to open up the floor right now to your questions, and if you could please state your name and your affiliation, that would be very helpful. First set of questions.
Justin Nathaniel Carter: Good afternoon. My name is Justin Nathaniel Carter. I’m the current student services representative for the General Studies Student Council and I’m also a USenate candidate for the open University Senate seat. This is a statement that we passed at the GSSC and I will get straight to it. Statement to the University Senate on open course evaluations by the General Studies Student Council. The School of General Studies Student Council, in solidarity with the undergraduate councils at Columbia University, supports all efforts to publish the results of end-of-semester course evaluations. In considering the fundamental tenets of the university, that is, a free and open dialogue to enrich the lives of its citizens, the GSSC passed a resolution in November 2010 in full support of opening course evaluations to the university public. We believe that both qualitative and quantitative results should be made available to all incoming and continuing students. Doing so can only benefit students and faculty alike.
In managing their studies at Columbia University, students should be afforded the opportunity to make the best and most informed decisions. Restricting this information causes the entire university community to suffer from a lack of communication and hinders intellectual institutional development. Students suffer from lack of adequate information when searching for classes that best suit their academic goals, and faculty miss out on students who truly wish to energetically work and learn from them.
We understand that faculty have reservations over the publication of information that may affect their reputation within the community. But we believe that the well-rounded standardized approach of publishing official evaluations is more constructive than restricting students to unofficial sources such as CULPA. We also believe that by making such information public our faculty will further foster collaborative efforts to not only be the best researchers in academia, but also the best educators. We believe in the uninterrupted flow of knowledge and insight in keeping with the foundations of the university as an institution. We believe that criticism and approbation foster growth and inhibit mediocrity. Students invest time and energy in providing constructive feedback during the busy end-of-the-semester period. This reflection allows students to reassess our educational objectives and learning needs. Sharing, not censoring, this valuable information can only maximize its use and original purpose. We seek to end the one-way feedback loop in which students are missing out on being part of an integrated community focused on promoting open dialogue and a shared commitment to improving upon Columbia’s already strong academic prowess.
We applaud the work of senators who have done due diligence in evaluating peer institutions and carefully crafting a proposed system that would provide for accountability, discretion, and efficiency. We believe in freedom of information, and as so many senators have previously affirmed, we believe in transparency. We support open course evaluations to the fullest extent.
O’Halloran: Thank you very much. Additional comments. Yes, Sam.
Sen. Samuel Silverstein (Ten. P&S): Sam Silverstein, Medical School. Could I ask those who oppose (and let me say that it was a very thoughtful comment, or set of comments) whether the single question “Would you recommend this course to others?” would be an acceptable initiation of such an evaluation, or is that also in your view off the table?
Gordon: I think it’s an open question, and the same kinds of issues that we spoke of could appear in those kinds of ... I mean, it’s open, so anybody can say anything. And the big problem, you know, which I mentioned, is who will censor, if censorship is even the right word, inappropriate comments, and the whole question of language. Also, are we talking about a number system as well in addition to that comment?
Silverstein: No. No.
Gordon: Only that comment.
Silverstein: What I’m talking about is a yes, no. Because if that’s off the table, then I think, for me, we’re at the limit of discussion. So I do want to know whether the simple yes/no question, would you recommend this course to others, would be unacceptable or acceptable? It seems to me to be a watershed kind of question.
Ivy: With limited discussion because you don’t feel like it’s, we can’t, it’s not discussable to talk about whether there should be open course evaluations or not? I mean, there are many things one can say about improving the system if it’s implemented. And should there be quantitative or qualitative questions, what kind of qualitative questions, etc., etc. I mean, I have something from the University of California at Berkeley, and it seems, according to their chart from 2010, that it’s only Harvard that has full disclosure, or more complete disclosure, of qualitative questions, where at Princeton and Yale and others it’s much more limited. They say none, no qualitative questions. And also, if you look at the draft report, there are a lot of opt-out options.
Silverstein: I’m not asking you about the draft report. I’m trying to understand. The core issue, it seems to me, is whether there is acceptance of the idea that answering the question yes/no, would you recommend this course to others, is acceptable.
Ivy: Right. Look, that to me, that would be better than having to divulge all the comments and have an administrator adjudicate. I mean there’s a whole range of things that would be more acceptable to me. I made my statement and I’m opposed to the entire prospect.
Silverstein: So even the yes/no question.
Ivy: I’m opposed to the publication, open disclosure or publication of course evaluations.
Gordon: Whatever they may be, even yes/no.
Silverstein: Okay. Thank you. Thank you.
Ivy: From my perspective. And that’s the case I made. I didn’t get into what would be the tinkering with it to make it better if it were to come to pass now.
Silverstein: No, I wasn’t asking you. [Cross talk] I wasn’t asking you that. I asked a simple question. You gave me a simple answer.
O’Halloran: And the reasoning, just extrapolating, is that even in that yes/no statement you have the inherent biases that would be transferred. If I’m understanding your position correctly, you would at that point be recommending it without controlling for gender and race.
Silverstein: I don’t want to make this complicated because it wasn’t a complicated question. The question was yes/no, and the answer was no. It doesn’t have anything to do with controlling for anything. It has to do with the straightforward yes/no, and we have an answer. I’m very grateful, and I think it’s very important to understand what the position is, and I now understand it. Thank you.
O’Halloran: Next question.
Aarti Sethi: I actually just have a small comment.
O’Halloran: Would you please state your name?
Sethi: My name is Aarti Sethi. I’m a Ph.D. student in the Department of Anthropology, and I just have a short comment, which is that, I mean, for the last two years now I’ve taught as a TA and I’ve been anonymously evaluated, and I support anonymous evaluations because I find them very, very helpful as someone who’s learning to be an instructor and to teach. But I’m also a graduate student, and so I sit in classes and attend classes like undergraduates do, and I do not support the publishing of faculty evaluations. I think evaluations are important because they help those of us who teach to become better teachers, but I fully recognize the cautions that the faculty is putting before us today about why the public publishing of course evaluations is inimical to the creation of an academic environment. And very simply because I don’t think that the teaching and learning that happens in the context of a classroom should be or is like surfing reviews on Yelp. Entering a classroom should not be an exercise in a transaction where you pay for a service and then you evaluate whether you think that service has been adequately rendered to you or not, and then you surf, you know, other consumers who have evaluated the service, and based on that review you decide whether you want to sit in on a class or not. You decide on a class by attending three lectures, looking at a syllabus, seeing whether the pedagogic style works for you or not, having conversations with friends and colleagues and peers who have sat in on these classes. And I think we should think about the larger models that we’re enacting when we use words like stakeholders and things like that. The university, the marketplace of ideas, is a very inadequate metaphor to describe what happens in a classroom, and I don’t think this model of evaluation should be the one that we enact to describe that experience. Thank you.
O’Halloran: Thank you very much. Yes.
Sen. Carlos Alonso (Dean, Graduate School of Arts and Sciences): Hi. I’m Carlos Alonso, dean of the Graduate School of Arts and Sciences, and I just wanted to comment on the heels of that comment by a graduate student. The proposal has evolved in such a way as to recognize that graduate pedagogues, graduate students who are involved in some aspect of teaching in the university, are to some degree exempted from this kind of scrutiny, in the sense that there seem to be provisions made for not requiring the evaluations of TAs to be published. And I’m not going to rehearse the reasons that have been introduced. It’s very clear that they seem to have carried the day, in the sense that they have produced an argument for exemption of TAs from the requirement that their evaluations be published. But by the same token, the proposal seems to be advancing that TAs or graduate instructors who appear as instructors of record, in other words, students who have sole responsibility, at least in the way in which the directory of classes defines that, by being listed as the instructor of record, will not be exempted from having their evaluations published. However, I think that that is a confusion that needs to be addressed. Students who are in charge of their section in many, many instances, if not all of them, are in fact following a syllabus that was not prepared by them. They may have control of the classroom, but they are not in fact working with a piece of pedagogical program that they have concocted on their own. And I do understand that the proposal is making provisions for an opt-in option for those students. But the problem with that is that opting in seems like a very reasonable way of addressing the conundrum, but the fact is that an opt-in option has all sorts of ramifications, in the sense that the decision not to opt in is itself a decision that can be interpreted, and will be interpreted, in many, many ways that will be beyond the control of the instructor.
So I would respectfully submit that the same justifications and reasons that had led the proposers to exempt TAs from having their evaluations published should be extended to all graduate instructors. The reasons are the same, the justifications are the same, and I don’t think that we should be making distinctions of the sort that the proposers are trying to make between certain types of graduate instructors and other types of graduate instructors.
O’Halloran: Okay. Thank you. The students did wish to respond to that.
Frouman: So, thank you very much for your comments. Just to give you a little background on the decision to make it opt-in. So to be very clear, graduate student instructors, such as the ones listed in the bulletin of courses as sole instructors, who, as Dean Alonso pointed out, often do not actually make the syllabus or design the course: we have designated them to be opt-in. They would not be open automatically. And this is something we would love to discuss. This was by recommendation of a graduate student on the Graduate Student Advisory Council who said that he would like to have his evaluations open, and he would not like to be prevented from doing so. So that at least was the motivation for our recommendation. We’ve had discussions with the graduate student senators. And it’s a question that’s been raised several times that if it becomes opt-in, then for everyone who opts out perhaps there would be a stigma. It’s a valid point. I often wonder if that would be the case, simply because the Arts and Sciences is currently opt-in for every department, and I only know of the economics department, which does opt in, and I don’t know if there is much awareness of it otherwise. So perhaps it would be a different case. But at least in the status quo, an opt-in system has not created a stigma for those who don’t opt in, although I think they should.
O’Halloran: So if I understand you, the proposal or recommendation would be that the graduate students, all the graduate students, whether they were a TA or acting as an instructor, as in the core, would be exempt from this. Is that correct? Okay. Thank you. Thank you for the suggestion. Yes. Please say who you are.
Jacob Andreas: My name is Jacob Andreas. I’m a senior in the engineering program. So two different things. The first concerns releasing existing data from CourseWorks, which is, you know, talked about as the first step, and that seems ill-considered. [can’t hear]. Certainly there is [can’t hear him]. First of all, there is [?], there are [?] in the existing CourseWorks evaluations, I think. You know, we need to know more about that text data. [Can’t hear] Or at least better information about –
Frouman: Just to quickly clarify: we think it would be appropriate to release the text data, so a qualitative response to a question such as “Would you recommend this course to a peer, and why?”, not necessarily evaluations of the professor or other specific questions. And to clarify, at Harvard that is the only question that’s released, similar to the schools you mentioned earlier, Yale and Princeton.
Andreas: [can’t hear]
Snedeker: I’m not sure if the current CourseWorks functionality includes the release of qualitative data. So that’s why that is the initial step, partially why.
Andreas: I mean both as understanding how this is going to work going forward and the release of that data, we need to know how long the [can’t hear]. Because this gives us some idea of how well anything we put forward in the future is going to work. And, you know, in principle the information is available to us [can’t hear].
O’Halloran: So in many ways, if I’m taking your recommendation, you might almost want to have this be not retroactive but prospective. Is that one of the recommendations you suggest?
Andreas: Well, if it’s going to be retroactive--
O’Halloran: --it would need a lot more filtering. Okay. Okay.
Andreas: ...because numbers without text are dangerous.
O’Halloran: Thank you from an engineer. I appreciate that. [Laughter]
Andreas: The second thing, speaking about things that are dangerous, is all of this discussion about attaching the names of students to evaluations in the interest of fairness. I think that's—as a student, that's very scary. The suggestion that there needs to be this—if there's going to be anonymity on one side, there has to be anonymity on the other side—presupposes a symmetry between the situation that the faculty member is in and the situation that the student is in, and that's just not the case. There's not symmetrical information, there's not symmetrical power in this relationship, and, you know, realistically, the consequences of a bad review for an instructor in this system are that fewer students are going to sign up for their classes, and maybe it goes into the discussion of tenure, and maybe it goes into other things, but realistically the consequences of an uninformed decision for a student are much, much more serious. You know, we have very limited time at this university, and the decisions that we're making about our course selection right now are the decisions that are going to shape our intellectual growth for the rest of our lives, right? We get four classes, five classes, six classes a semester, and we have to make them count. And I can promise you that if you suggest that students' names should be attached to these evaluations, then no one is going to write them, because these reviews are going to poison the experience for the rest of school if you're willing to say things about professors that are unpopular, either because students will think less of you for saying that, or because [inaudible] will follow you from the department for having said that, or if you have to take another course with that professor, or if you have to take courses with other professors that support that professor. And if we don't have this information, then we really have no basis on which to choose classes.
It's nice to suggest that we should give people all these syllabi, we should give them hours and hours of video of recorded lectures to watch, we should give them a list of classes and allow them to shop, but it's not realistic. You know, the suggestion that there is a shopping period at this school frankly is a fiction [laughter]. If you are a person in classes with lots of reading, when you are trying to choose among five courses at the beginning of the semester and all have reading assignments, either you are going to read 800 pages, 500 pages—
O’Halloran: —to your benefit, to your benefit.
Andreas: To your benefit, but during that first week, while attending twice as many courses as you expect to attend that semester, and inevitably you're going to fall behind.
And if you look at the way people actually make decisions about courses here, they're not going around to a lot of courses, because that's just not something this system is set up to do. You know, the reason these course guides like CULPA, like Ratemyprofessor.com, come into existence is to make the kind of information that they suggest you can get by talking to other students available to everybody, because there are courses that are offered once every four years, there are courses on esoteric subjects that you're not going to be able to ask people about. If you want to avoid the commodification of education, you've got to allow us to make thoughtful decisions about our courses.
O’Halloran: Okay. Thank you. Those are well considered points. Additional questions. Yes.
Rosalind Morris (Professor of Anthropology): Could you just give me one clarification? I do have a much broader question, but both you and the students said that the proposal is not that any department or school be bound to undertake these.
O’Halloran: These are recommendations.
Morris: But nonetheless the recommendation is that, according to what’s written here, all faculty be subject to open evaluations. It’s item 4.
O’Halloran: It is right now; the language is there and you can read it.
Morris: So although the resolution isn’t binding, your recommendation is that the university nonetheless implement a system that would be.
O’Halloran: The recommendation is that, coming from the Student Affairs Committee as written, as the resolution, is that it would be non-binding; however, it is strongly recommended that in fact all departments and schools make available the quantitative information and potentially some qualitative information as well. That is how it’s stated. That’s correct.
Frouman: It’s slightly more nuanced than that—
Morris: I just read you your own statement so it’s fine. [Cross talk]. Yes, thank you. I have. That’s why I wanted to clarify that it does say what I read to you. My name is Rosalind Morris. I am a professor in the Department of Anthropology. And I’ve taught here for 18 years. So you could all have done four degrees with me. And I can only speak on the basis of my experience, and I can only speak on the basis of an authentic and long-lived commitment to being a good teacher and to helping other people be good teachers. And I have sat on many tenure review committees, and I have indeed overseen reviews of the entire tenure review process. And those experiences have forced me to confront the fact, which was described by Professor Ivy, of systemic bias that unfortunately expresses itself both consciously and unconsciously in the teacher evaluations that are produced in the context that we are familiar with. And you mentioned in your own presentation that the review process was removed from the public sphere in the early ’90s. And that’s not an incidental moment, because it’s the moment at which Columbia began to make really sustained efforts to transform the nature of its faculty, to increase the hiring of women, to increase the representation of its diverse communities, to really change the nature of the university. And it’s in that moment that one starts to see the evaluations assume the qualities that they do, i.e., they become sites at which bias expresses itself. And these things are problematic. They can be dealt with if reviewed very carefully by people who are attentive to such matters. When they are published and travel in the world that we all know is marked by very, very few boundaries and is usable in ways that are not coincident with their original purposes, enormous damage can be done. 
I mean, every day we all turn on the news and find the story of someone who’s been driven to some self-damaging act because they feel humiliated, exposed, their professional standing undermined, their sense of self-worth undermined, their reputations in their worlds deeply, deeply damaged. I do not want that to happen here.
But you have all spoken, and I understand why you speak with great commitment to the ideas of transparency and democracy. I really do share the faculty members’ sense that if you want to participate in the world as adults--no one here is a minor—if you want to participate in this world as adults to lay claim to the tasks, the burdens and responsibilities of democracy, you must be willing to stand by what you say. There really is not transparency without accountability. You have to be willing to sign on to what you are willing to have other people judged by, to what you’re willing to have the professions be adjudicated on the basis of. If you want those to be actually part of what they are now, teacher evaluations, then be responsible for it. But if you’re not prepared to, then I think the question of publication becomes a worrisome matter. And it is publicity. I think Professor Gordon spoke very well. It is publicity that changes this. We want evaluations. They do help us to be better. But you are not talking about evaluations; you are talking about a shopping guide. You have talked about maximization of your value. You have spoken about the limitations of your time. You have spoken about all those things that are proper to the marketplace.
I’m an old-fashioned person. I do not think the university is a marketplace. I do think there’s plenty of time, and I have many, many students, hundreds every year, who shop. Some stay in classes and some don’t. They have time to look at a syllabus, attend a lecture, and make a judgment. You’re capable of that. But I would ask you to think of any other circumstance in which people’s professional reviews are conducted in public as an anticipation of a consumer act. There’s only one institution I can think of in which that is the case. And it’s slavery. I’m not analogizing the situation to slavery. I’m asking you why you want to conduct professional review in public as the antecedent gesture of a consumer act. That strikes me as an enormously revealing and terrifying thing, and I don’t attribute to you that motive. But we all get caught up in the fantasy of transparency and in the presumption that it is democracy, all the while thinking, I’ll maximize my value.
I hope that you will not have to undergo public professional evaluations in the halls where perhaps all the members of the corporation or the legal firm are invited.
O’Halloran: Thank you. Yes.
Sen. Rebecca Jordan-Young (Fac., Barnard): I’m Rebecca Jordan-Young at Barnard. I teach in the Women’s Gender and Sexuality Studies Department at Barnard. And I actually want to start by saying that I appreciate all the work that the students put into this, and I feel like you have done the job that I think makes sense for you to do. You’ve worked hard on something that your constituents have told you that they want and that you have strong feelings about and it feels like something that’s going to be quite useful to you. And I think that what it is is a clearly written and thoughtful document, and at the same time, I’m going to join the other faculty members who are really strongly opposed to this document.
I’ll give you just a handful of reasons. One is we’ve talked about bias in the sense of sort of vitriolic comments and things that could be seen. I’m a lot less worried about that kind of bias, and that’s not usually the way bias expresses itself. Bias isn’t a sort of icing that can be skimmed off, leaving all of the very neutral-sounding comments. In fact, with most bias, the people who are expressing it feel that they themselves are being quite fair and neutral, and we know this from endless social psychology studies. It’s very, very important. So we know that if you are a person of color or a woman or quite young, or just someone who’s not expected by the students in that classroom for the subject matter, you get a little discount on every one of those quantitative questions that’s asked. There’s no way to pull that out. You can notice that and look at it in the other context that we’ve talked about for using evaluations.
I also really liked what Professor Gordon said about, and I think you were quoting someone, I’m not sure, about the veneer of precision that the quantitative data can give. And I’m always alert for scientism because I’m a scientist who teaches at the intersection of science and society, and I worry very much about the scientism appeal of attaching percentages to these things.
I also wanted to state to the students who’ve said—I appreciate and respect very much how seriously you take the decisions about your classes. And for that reason I would want to ask you, do you really want to make those decisions based on the anonymous opinions of people about whom you know nothing? And wouldn’t it be better for us—what I think could come out of this whole discussion and be very useful is that it’s quite clear that the current process isn’t working for the students. It doesn’t work for them. They don’t feel like they have enough information to move forward. Personally I also hate shopping. I hate that I start out a class without knowing for sure which of my students are really going to be there, and it takes a while to get a good dynamic. I don’t think that’s necessarily the best way. What about other modes? I think we have to get more creative. I can think of a few things, and these are completely off the top of my head, so don’t hold me to them. But what about this? Open houses in departments where the evaluations are sitting there in the department, and you can come in and flip through them and talk with professors about them, what they might mean. You talk with students. You know, you invite, like when we do program planning meetings now, where we invite our majors and our professors to be there and be available. If you’re interested in taking some classes in a department, you come in and you look at it. What I’m worried about is that this so-called open evaluation process distills the way that you gather information about how you choose your courses, and I’m worried that students are going to be putting less care and less time into it because it is going to feel like Yelp.
Oh, one final thing that I’ll say: I don’t believe that these open evaluations are a good idea, period. I will say, however, that I agree that the power imbalance between students and professors is such that it wouldn’t make sense to attach student names. But I also think the comments can’t be just random and free-floating. So if we did move to the point of needing some kind of compromise, the comments should at the very least be attached to, for example, the grade that the student earned in the course. I still don’t think that that’s adequate, though.
So I want to just finish by saying I am opposed to it, even though I respect where it’s coming from and I feel like we need to do something different.
O’Halloran: Okay. Thank you very much.
Malvina Kefalas: Hi. My name is Malvina Kefalas. I am the representative for academic affairs at Barnard’s Student Government Association. First of all, I’d like to commend the student senators for all their incredibly hard work, and I’d like to support the GSSC statement as well. And I’d like to start off by saying that what I believe is irrelevant because I want to represent the opinions of the students that I serve. And my committee, the student academic advisory committee, is in the process of conducting an academic satisfaction survey at Barnard, and we’ve asked students how they feel about open course evaluations, and we’ve had overwhelming support in favor of them.
So one of the things that I think is really important is that students have demonstrated an incredible understanding of how delicate this decision is. And I think they understand and sympathize with the concerns of professors and faculty that evaluations can and should be a forum for them to improve and better understand their teaching abilities. However, I think that they also understand that the course evaluation system that we have now is and will continue to be a way in which students can express to peers how they feel about courses, and that this helps us to build a more civil society. And I agree with many of the statements, and I think that students will agree with many of the statements, that this decision is very multilayered and there are a multitude of ways that this can be carried out. But I think that the most important thing to remember is that this is ultimately about the relationship between professors and students, and as a result I think students understand that, and I think we should expect maturity from students in evaluating their professors. And I think the idea that we are not only capable of but expected to present bigoted statements frankly undermines the intellectual capability of our campus, and I think we can expect that students understand this and, in having a more public evaluation system, will be very, very careful in how they represent their opinion of a course.
O’Halloran: Okay. Thank you very much. Yes.
Lili Burns: I’m a senior at Columbia College, and I feel compelled to say something just because I strongly disagree with the project of publishing professor evaluations, for many of the reasons that have already been stated. But I would like to respond to that [last] comment, which is that I think if anything what we’ve learned lately is that students are capable of hateful comments. And one thing that happened in the wake of the blog incident is that we disowned those comments. We said that those were not representative of us, that anonymous internet comments are not a way that you can really get a sense of what the student body thinks. So I’m a little bit confused about how all of a sudden that is being designated as the appropriate mechanism for evaluating what students really think, because we know that those kinds of comments are not always what we all feel.
And I would also like to say, in response to what people have been saying about the different power dynamic: I think I agree. There is a different power dynamic between students and faculty, but I think that also has to do with the fact that there is a different level of investment. We all care deeply about our courses, and I know that we put a lot of thought into them. But they are one semester in our lives, maybe two, and no matter what, we will not be as invested as the professors. This is their career, their job, what they care about, and no matter how much we care, it’s their livelihood that we’re debating. And I think that that is a really terrifying thing to put into the hands of students. And I do believe in students, and I believe that we are capable of making intelligent comments, and I know that we are, but I also think that we have seen what can happen when we put too much power into the hands of a large group of people. And I’m just worried that that will happen.
And I’ve spoken to a lot of students, and I think that part of the reason why it seems like they all think that this would be a good idea is because they don’t know the issues that are on the line. At first glance it seems like this would be something that’s great, but once the conversation gets played out, I think a lot of people have been very understanding of what is really at stake.
O’Halloran: Great. Thank you very much. Yes, in the back. Please identify yourself. I can’t hear you.
Sophie Queuniet: I’m a lecturer in the French Department. My name is Sophie Queuniet. I have one practical question I want to ask the students’ side. Even if we accept these kinds of open evaluations, and we can have results and know exactly which teacher is supposedly better, there are only so many seats. Imagine there are 20 seats per class; even though you might have names, you might not be able to get into those classes. So at the end of it, it’s just floating around, you know, some kind of opinions about people that you might never be able to take classes with. So my point here is, the class you take with a teacher involves a strong relationship between a student and the teacher, and there’s no way you can really understand what it is before you take it. People have different relationships with teachers. It’s completely subjective. I’ve been doing this job for 20 years now, and I don’t know what my point is, but ... When I look at all my evaluations, fortunately they’ve been very positive. But really when I look at them, it was very, very rare that I had feedback about my teaching and what I could really work on. It was mostly opinions about me, like oh, she’s great, which is very good for my ego. It’s wonderful, but it didn’t teach me anything about my teaching and what I had to improve.
So I just want to question the use of it. Yes, okay, you’ll know maybe, you want to have the best teacher. But first you cannot always get the best teacher because there are just limited seats. And two, it depends on your relationship. You can never know in advance the relationship you’re going to have with your teacher. And third, the argument is, you know, to quote the French philosopher Jacques Ranciere, the best teacher is the ignorant teacher. And why is that? Because the learning comes from you. It doesn’t come from your teacher. So that’s it.
O’Halloran: Can I just take one more, and then I’ll have you respond.
Sen. Jeanine D’Armiento (Ten., P&S): So I’m Jeanine D’Armiento from the Medical Center. And first I want to say I appreciate a lot of the students coming to the debate, originally seeing this sort of on the surface as something that is just an evaluation, and then beginning to recognize some of our real serious concerns and fears, I have to say. There are so many very important comments that were already made. But I want to say that the National Academy of Sciences recently, maybe five years ago, four years ago, did a review of women in science, and during that review they commented on recommendation letters written for female students. And all of the recommendation letters, whether written by women or men, tended to be worse for the female students, and the conclusion from that was that there were unfortunately subtle biases. And this is in the case of people intending to write good things because people tended to write about the women’s maybe personalities in certain ways. And you can read the report. It’s on the internet.
But this type of concern, this type of research, which has been brought up in many of the comments today, is really, really important, but it’s recent. It wasn’t in the literature 20 years ago. This is very new literature, and I think that that’s where we are coming from. And many of us obviously have experienced this ourselves. So this is what I do appreciate many of you understanding.
O’Halloran: Okay. You wanted to answer. You, okay.
Frouman: So thank you all for these comments. There’s one request that I made at the last Senate meeting as well. The reason we bring up our peer schools is not as a model of what we ought to do, but rather as an example of whether or not it works at other schools. And one of the questions we have often asked faculty members and also administrators is whether these issues people keep bringing up have become problems at our peer schools that have open evaluations. I’d love to hear any stories or opinions based on what we’ve seen at these other schools, because I haven’t heard this feedback yet from faculty or people who were previously at them.
Jordan-Young: Can I briefly address that?
O’Halloran: Please respond to that.
Jordan-Young: You can’t possibly measure that. The point is that there is nothing against which to measure these evaluations. You could do, for example, a very long-term study, but it would take a very, very long time. In fact, what we have are studies with much more controlled circumstances that give us precisely the same kind of information about a mechanism that’s called implicit bias. It’s been studied in multiple different situations, and the particular situation which was just described is an excellent one because it shows that even when people are intending to provide positive feedback (these are students of theirs that they want to get placed, for example), implicit gender biases and racial biases nonetheless shape the presentation of someone’s skills and materials in a way that systematically discounts their abilities. And the problem with trying to come up with a way to know if this was causing problems somewhere that was doing open evaluations is that there’s no way to standardize and blindly send out students to the same exact course to evaluate it. There’s no baseline against which you can then determine, aha, we have to correct upwards the evaluations of all the women by 15 percent or something. So do you understand what I’m saying?
The data that we already have from controlled studies and circumstances is a much better guide than anything that you could get from looking at how evaluations are working.
Frouman: Thank you. I don’t believe I mentioned bias specifically. I wasn’t necessarily referring to that. Thank you.
O’Halloran: Okay. Anyone else? Yes, Jose. Come over here. Then Sam.
Sen. Jose Robledo (Stu., General Studies): Hi. Jose Robledo, School of General Studies. Two comments and then a question. Comment one. One of the comments that I got from a student, about why he thought that the open course evaluations were a good idea, bears on a lot of the comments that were made about commodification, which I will ask you guys about in a minute. He said that he felt as though the faculty, especially the tenured faculty, some of the superstar professors that we have, would have nothing to fear. And what he actually feels bad about is that there are some junior faculty who are looking to really make their mark, who really care about their pedagogical methods, who really care about their class, while some superstar senior faculty member is sitting on their laurels from whatever award they got last year or 10 years ago and isn’t really putting in the same effort. So that student felt like he could make a difference. Now whether that’s right or wrong, I’m not here to argue that. I just think that that’s a concern that’s been raised, and I think it’s something that we should address.
On the question side: how do the students feel about some of the questions or some of the comments that have been raised in terms of the commodification of this kind of information, the idea that we’re not the marketplace of ideas, and that doing this makes us more capitalist—I don’t know if I’m interpreting wrong. Sorry.
O’Halloran: I’m going to take a few other questions if I could. Okay. Yes.
D’Armiento: First of all, I really want to support the concerns about the hidden-mirror model of transparency that’s being argued here. And after the grade is in, I don’t actually see that there’s much difference in the power relationships. Both sides can damage one another, as in any exchange of public speech. And I’m very concerned about the self-censorship that will come out of this. But let me add an additional concern to this discussion. It would be wonderful if we could actually embed this discussion in one about education, because what I’m also hearing from the administration and from faculty is concern about grade inflation. Is there a role anymore for honest feedback in the learning process? Because you can get a very good idea about a course from the syllabus and the workload, and you can see the professor in action and get a sense of their personality. So what’s missing? It seems to be this desire to know what the grading patterns are. And that enters into a feedback chain where what we’re seeing is this collapse in the grading system.
So if this went forward, I would actually like to see the administration give us quotas for grades. Because I don’t see any other way of compensating. And even though that is going to –
O’Halloran: So let me, if I could. I’m trying to take all these comments and put them into a recommendation. So your recommendation is that if in fact there were an open evaluation system, the causal statement is that it would put pressure on faculty, and therefore what would be very helpful is if in fact we had a designated curve, 10 percent, or something along those lines.
Another voice: Well, first of all, I’d like to see a discussion of the consequences of this on teaching practice and on interaction.
O’Halloran: But assuming, assuming that.
Another voice: Worst case scenario that it went into effect, I think the only way to secure some kind of role of honest feedback, would actually be to institute quotas.
O’Halloran: So right. So a curve. A curve on those. Okay. Yes.
Victor Kagan: My name is Victor Kagan. I’m in the Dental School. I have a very brief and simple comment.
O’Halloran: I’m sorry. Are you a faculty or student?
Kagan: I’m a student. It seems to me that there are two real issues going on, and it’s not really between students and professors. It’s really one issue for professors and, separately, one for students. First, professors and faculty use these evaluations to modify or better their courses for the future. And second, students use them to find out which courses to choose. So the notion that students shouldn’t be shopping for classes and all that. Well, there is a shopping period, and that’s essentially what students are doing: going to classes, looking at the syllabus, looking at the workload, seeing if it works for them. So it really does exist. The question is how we use it, and how we have some sort of system for students choosing classes if that doesn’t work for students and if we’re not okay with open evaluations. So professors seem to be using these evaluations for the right reasons, and that should continue. The question really should be what part of it, if any, can now be made available for students to use in looking for other classes, since the other system may be too tedious or long.
O’Halloran: Thank you. Sam.
Silverstein: Let me say that there is of course unconscious bias. I’m on a committee at the NIH that’s now looking at the 10 percent difference between the number of blacks that get funded and the number of whites that get funded, and whether that reflects unconscious bias, conscious bias or other factors. We’re not going to solve all of those things. But the reason I wanted to say something is because I think most of our discussion is about not getting to yes. And I’m really interested in whether we can get to yes. I thought your comments about alternative systems for releasing information were thoughtful. And my understanding of this proposal was that it would not release qualitative comments, but that this was about a very small number of questions, principally the question “Would you recommend this course to others?”, which is, let me say, the central question at the Yale site, and the central question at the Harvard site. It’s the only question, I think, at the Harvard site.
There may be other ways of presenting it, such as the way you thoughtfully suggested. I think it’s also useful to say that faculty evaluations, evaluations of everyone are useful to the people they’re meant to evaluate. But they of course should not be widely disseminated. Those reasonably ought to be kept private. All of us as professionals participate in professional reviews, where for instance for PLOS I sign my name. So my name is on the paper. For many other journals, I don’t sign my name. And reviewers at the NIH are anonymous or at the National Science Foundation. So recognizing that there are many imperfections in the system, I would urge all of you to think of ways that we can get to yes because as you said very thoughtfully, the students feel that the system is not working for them, and maybe we can find a way to make it work, and that’s I think very important.
O’Halloran: Okay. Thank you very much. Other questions? Yes.
Johab: Thank you. I have two things to say. First of all –
O’Halloran: Please define yourself, sir.
Johab (SEAS student): Sorry, my name is Johab and I’m a sophomore at Columbia Engineering. First of all, a lot of people have been talking about shopping for classes, and most people who have talked about it have talked about it rather negatively, in the sense that it sort of converts Columbia as an international education institution into more of a capitalist institution. But honestly I see a lot of merit in the concept of a shopping period. When I’m registering for classes, the fact that I have a shopping period where I can check out new classes and experiment with classes that I wouldn’t ordinarily have taken in the mainstream of my curriculum enables me to discover a lot of new stuff. And I think that’s a very important part of the academic curriculum at Columbia because it encourages students to explore.
That said, I, personally I don’t think open course evaluations are a good idea, even though I said this about the shopping period. Because I think there’s a line to be drawn between what is recommended for students by the faculty and by the administration, and what is recommended to students by other students, their peers. And I think there’s an amount of trust that students place in the academic institution, in Columbia, to recommend to them what to take and what not to take. And this is embodied in for instance major requirement classes. Columbia outlines a set number of classes that you have to take to qualify for a major, and this is not something that you can decide as a student. And I think there is a certain level of respect that students should give to the institution and to the faculty in recommending classes to them. I think there’s a point up to which they should trust the academic system to recommend classes for them. And I don’t think it’s necessarily a good thing for their peers to recommend classes for them. Thank you.
O’Halloran: Okay. Thank you very much. Yes.
Sherri: All right. So hi. My name is Sherri. I’m a sophomore in Columbia College. And I think that one of the biggest issues with open course evaluations, at least the way I see it, isn’t the bias issue. I think the target audience for these course evaluations affects the way that students write them. So now, when these evaluations are going to the professors who teach these courses, my goal as a student in writing these evaluations is to be as nuanced as possible, insofar as I include what I liked about the class, what I didn’t like about the class, what I thought should change about the class. Whereas if I feel like the target of my course evaluation is other students, I think our evaluations as a whole will tend towards the sensationalist, towards absolute judgments like pro vs. con, rather than the nuanced evaluation that we generally give our professors if we think they themselves are reading them. So I think to that extent having open course evaluations will just change the content of those evaluations, which I don’t think is a good thing. But I still think that the level of information we have as students going into course registration is insufficient. I think that syllabi should be given to us at registration. So if this is registration week, the syllabi for the classes I want to take in the fall should be provided to us by this time. The number of e-mails I’ve had to send to my professors asking them for details on their courses; that shouldn’t have to happen.
And I also think that people think that professors are good or bad or mediocre for different reasons. I value commitment to the subject or commitment to the teaching, whereas other people value organization or all these different things. So I think that if comments from students are provided to other students, they should not just be published wholesale, with all of the answers to a specific question made public. If there are many positive evaluations, I think the information should explain why this professor was rated positively, or why negatively. Because I think every student has different priorities when they’re looking at what they want in a professor and what they don’t. I value theatrical lectures. A lot of students don’t, and I think that that information should be shared, rather than just whether I recommend this class or not.
O’Halloran: Okay. And one concrete recommendation that you came forward with is that making the syllabus available while you’re shopping for courses online, either through CourseWorks or another way to look it up, would be helpful to you. Is that correct? I just want to make sure that you understand that that’s actually happening right now, and so that’s being integrated into the system. It may not be all courses, but I do believe that we’re moving in that direction. If you have suggestions on the implementation (I’m not sure, I don’t know who’s seeing what), that would be very helpful at this point. Yes.
Sherri: Okay. Well, I mean, so like, at least my understanding of it is that they’re provided on CourseWorks after you register for the classes.
O’Halloran: No. No. Right now we’re changing. That’s changing. So I do not know if you can see that. You should be able to see that. I don’t know if it’s for all courses.
Sherri: All right. I haven’t been able to see that.
O’Halloran: Okay Fine. So then, okay, fine. Thank you. Thank you. Yes.
Prof. Cathy Popkin: My name’s Cathy Popkin. I’m a professor in the Slavic department. This is going to sound like a really stupid comment –
Popkin: Well. I mean, it’s going to sound paranoid or something. But I want to just raise it because it is a real issue with things that go on the internet. I happen to believe that Columbia and Barnard students write incredibly intelligent things on course evaluations. I’ve learned an enormous amount, especially from the responses to the questions I write myself about things I wonder about: how did this go, how did that go. But even suppose that all of the evaluations, whatever form they take, that went up on whatever this site would be were intelligent, were useful, were not biased, were not in any way detrimental in reviews, could not be put to any misuse in the context of our professional lives. How would you want your ex-wife, your old boyfriend, your stalker, your nosy neighbor reading them? I mean, there’s a certain expectation of privacy, and I understand this is supposed to be limited to Columbia, the Columbia community; you know, my eye. Once it’s on the internet, it’s all over. It really is, and it’s the exact same principle that led the Arts and Sciences to decide after all not to require every faculty member to post a CV online, for that same reason. Thank you.
O’Halloran: All right. Thank you very much. Yes.
Senator-elect Aly Jiwani (Stu., SIPA): I’m Aly Jiwani from SIPA. I will challenge this notion out there that students currently have enough information to make an informed decision about what courses they want to pick based on the open syllabus, based on the two-week shopping period. I do think there’s value-added in the course evaluations. SIPA happens to be one of the schools that publishes course evals, and I want to read a couple of questions, quantitative questions from the SIPA course evals. Questions like organization of the course from 1 to 5, one being poorly organized, five being well organized. Questions like relevance and quality of the assigned readings: 1 poor, 5 excellent. So these are questions that can only be answered by people that have taken the course. You cannot possibly answer these questions in the first two weeks or gauge answers to these questions. So that’s one point.
I also have a question for the faculty side. Given the fact that every school currently collects course evaluations, and that these evaluations are used for things like curricular development and faculty tenure: if we get to a point where we open these evaluations to the students, do you maintain that, knowing this, students will go to the extreme and start writing extreme, hateful comments? Do you think it’s going to change the way students do the evaluations, simply knowing, starting let’s say in the fall, that their evaluation is going to be public?
O’Halloran: Thank you. Yes.
Rachel Borne: Good evening. Thank you very much. My name is Rachel Borne. I’m a third-year student of economics in the School of General Studies, and I’ll be a first-year student in the joint program with SIPA next year. I’m very much looking forward to reading the evaluations of the esteemed faculty that I’ll have the privilege of working with. And I want to thank the Senate for their hard work on this. I want to also thank you for having this forum of the town hall. I wonder if the opposition is actually opposed to this very town hall because it allows people like students to voice our support.
Regardless, with the utmost respect for our esteemed faculty here, the opposition seems to come from a place of no. As a non-traditional student, I am continually impressed by the thoughtfulness, initiative and good will of the Columbia student body. Some of the comments I’ve heard regarding hateful, racist comments and internet harassment seem to be based on a doomsday, worst-case-scenario frame of mind. There are very simple ways to design surveys to try to eliminate bias. Let’s be solution-oriented here and not shut down the whole conversation.
Further, this argument seems to protect poor-performing professors. So let’s instead come from a place of yes. Let’s come from a place where we recognize that we have valuable, intelligent, inquisitive students here. Let’s allow students to assist the fostering of greater intellectual and pedagogical quality. Let’s allow faculty to engage in a dialogue with students about the quality of learning. In fact, knowing that something will be made public for the purposes of educating fellow students and holding faculty accountable, we’re probably going to make even more thoughtful comments.
Let’s also imagine the profoundly inspiring and strong lecturer we’ve all had educational experiences with. Someone who really inspires us to learn and grow. Everyone on campus knows who these people are. Why? They have silver nuggets, they have gold nuggets on CULPA. I wish that was called something else. [Laughter] But everyone knows who they are, and wouldn’t you want to be one of those people? Isn’t that what we want to strive for instead of protecting poor-performing professors?
With respect to the comment that entering a classroom shouldn’t be where you pay for a service and evaluate whether you like it, it absolutely is. When you are a non-traditional student, you tend to see an elite education as a luxury good, but it is a good nonetheless. By its very nature, the opposition has contradicted their argument by evoking the shopping period argument. If we are not consumers of education, why do we get to go shopping?
Further, for students in General Studies, and I am very humbled to be part of this community, many of us pay our own way. I’m one of those students. In this light, I’d like to share a brief experience with Statistics last summer. It is a required course for the economics degree, and in order to graduate on time I had to take this course. I did everything I could, working with my advisor, reviewing the syllabus, researching the professor, to prepare for the course. This person failed to inform the students that she would not be attending two weeks of the six-week course. She barely taught the material, copying material from the textbook. One day she got upset and ended class early because she could not figure out how to address a request for clarification from a student. The review session for the final was to copy the table of contents of the book and say this is what we have covered. Students were and are the only ones in the classroom witnessing this. So to say that we shouldn’t be involved in holding professionals accountable I find baseless. We are the only ones who can offer specific and constructive feedback.
I don’t want to talk too much. But I did do due diligence in addressing this issue. That’s why I’m not going to the Law School. I spoke with the head of the department, with my own advisors at GS, and even mobilized a petition among my fellow students to try and resolve this issue. In the end the course evaluation was the only hope we students had of a voice at all, so that we could leave our community better for future students. I received an apology and a pat on the back from the department head; he had received many complaints, but never from someone who came to his office.
I will never, ever, ever see that $4,000 again. It is a good. Furthermore, Professor Morris noted that with transparency comes accountability. In my 10 years of working in the professional realm with nonprofits and political campaigns, I was held accountable. Now as a student at an elite institution, I am held accountable all the time. Administrators I work with, professors I come into contact with, and peers that I build lifelong relationships with judge me on my performance in a way that in the real world would never happen. What we’re asking for here is to be assured that our professors are being held to that same standard. We have no idea where these course evaluations go currently, and that disincentivizes participation.
So what I heard the opposition saying is that we should perpetuate the veil of the ivory tower, even in 2012, in which even the participants in our own community are passive recipients of pedagogy that is sometimes unparalleled and exquisite, sometimes quite frankly inadequate. I’m absolutely in support of fully publishing anonymous evaluations and extending the deadline until after finals. Thank you so much.
O’Halloran: Thank you. Okay. Any additional comments? Yes. In the back.
Sol Napolitano: Sol Napolitano, School of Continuing Education. I just want to make the quick comment that regardless of whether the evaluations remain private or are made public, I hope there is some room for the departments to share with each other some best practices that come about as a result of the tabulation of statistics. I know that’s a more high-level point, but it’s a hope of mine. Thank you.
O’Halloran: Thank you very much. Yes.
Ed Walsh: I’m Ed Walsh. I’m an administrator at the School of Social Work, where, among many things, one of the things I do is manage the student feedback on the course and instructor. And we’re very mindful of calling it student feedback on course and instructor, and not course evaluation. And I’ll come back to that. But I’m also Columbia College, Class of 1982, so I’m very familiar with the Course Guide. And my recollection of that is that it was a student enterprise. As you mention in the report, it has this long list of editors, of contributors. And I think that what’s missing from the whole opening up of qualitative information that everybody is so concerned about is not so much censorship. I don’t think it’s censorship. I think what you’re looking for is an editorial role. Take that remark about Professor Moshowitz’s class that you thought was so clever and nice that you wanted to share it. In the Course Guide, that was one paragraph, culling information from what must have been 50 or 60 pages of open-ended feedback. I think the system you’re proposing removes this editorial role from the whole process, and I think that does raise an important concern.
We at Social Work think of it more as a feedback mechanism. It’s a poll. It’s a survey. A professional evaluation would be where somebody who has years of teaching experience, is familiar with the subject, and is aware of other ways of constructing the syllabus, other readings to select, other assignments to give, other teaching approaches to take, could come in and observe the instructor, perhaps review the feedback from students or focus groups, and then give a professional evaluation. That’s what an evaluation is. I think the surveys we’re talking about aren’t truly evaluations.
O’Halloran: Right. So many of the professional schools do peer evaluations in their reviews. So they don’t just rely on those surveys; they actually have peers go in and use their—
Walsh: And I would be delighted if the university would expand its capacity to offer that kind of evaluative service.
O’Halloran: So that would be a recommendation you would make.
Walsh: And I’m of mixed mind about the quantitative information being made more easily accessible. It’s accessible to students at Social Work through the student union office or the library. As for whether it should be more easily accessible online, it might have saved the student over here from the train wreck of a course she ended up with, if that were a course that was routinely being offered, and not just a one-time offering that did not work out well. It might help students to identify classes at the other end that are all fives rather than all ones. But my experience of 20 years of doing these things is that the answer is always 4.2. It’s, you know, life, the universe and everything. But maybe that’s all you really need. And maybe what you really are looking for is a shopping guide, in the same sense that an annotated bibliography is a shopping guide: sort of a way of culling down your choices and then selecting your courses. I think there’s something to be said for that.
O’Halloran: But definitely you want to make sure, through mechanisms like peer review, that those types of events do not happen. Because that’s not something that we would ever want for ourselves or for our students. So I think we would all share your concern over that one. So thank you for bringing it to the Senate’s attention, and I guess I give you a pat on the back and encourage anyone who’s having that same experience to talk, one, to the professor and then to the chair. Yes.
Morris: I just wanted to mention that there was a question posed to the student panel about the comparability issue, you know, given Harvard and Princeton and Yale and so forth. I think we should bear in mind that Columbia has a very specific status in the media world of the United States. What happens here is instantly reviewed and used in the public media. What goes online about Columbia is mobilized for local political agendas, national political agendas, and so forth. People look at Columbia, by virtue of its faculty, by virtue of where it is, with far greater interest and a much greater alacrity to transmit materials from students, often things that are, you know, not intended for publication. And students have sometimes even been mobilized against their own will to pass on information and so forth.
So for Columbia, you can’t just say, well, it worked at Harvard, which in any case we don’t know; there haven’t been sustained long-term studies. Maybe in 15 years we’d have those. But Columbia is not the same kind of institution; it doesn’t occupy the same place in the public imagination.
But now, I have a question for the students here. First of all, adjunct professors who are teaching only one course will not have evaluations that circulate and that would form the basis for your later decisions. They’re a large portion of the teaching faculty, particularly in some of the professional schools. So this is not going to function as the teaching guide you want. Second, you’ve referred very positively to CULPA as the place you go to. Most of us know what’s on CULPA. Some of us find it inspiring and gratifying and affirming, and some of us, because we’re not only interested in our own images, find it disturbing. So you’ve got CULPA. Why don’t you try to improve CULPA? What is the desire to turn the evaluation process for professional review into a shopping guide?
O’Halloran: Okay. Since you asked a specific question, let them respond.
Turner: Well, yeah, to respond to your question about CULPA, I agree with you. There’s a ton of very useful information on CULPA, and there’s also some that is perhaps more polarized and more biased, as you’d expect from any open platform like that. I think the central benefit of bringing evaluations in-house is simply to increase the response rate. If you send out that e-mail to students, especially if you offer an incentive such as delaying the release of their grades until after they’ve filled out an evaluation, you’re going to get a much greater response rate and consequently much more information than CULPA can possibly hope to get.
Morris: Does that mean that you would not allow students to opt out of the evaluation if they chose not to participate?
Snedeker: Harvard currently has a grade incentive program, but they also allow students to opt out of the evaluation. Their grade will not be delayed if they go in and opt out. And that’s just one example of a possible implementation.
O’Halloran: Okay. So we are past our time. I want to thank everyone for a very informative discussion with some wonderful suggestions that we’ll bring back. If you have additional comments, please send them to Tom, the Senate manager, or to myself, and we will make sure that they get circulated back into the dialogue that the Senate will be having. Thank you very much and have a good evening.
END OF SESSION