Category Archives: Assessing Learning

Plus/Minus Grading Systems

Have you ever thought about the pros and cons of our plus/minus grading system versus the A-B-C-D-F system employed by other universities? The plus/minus system actually has several advantages—see for example the reasons RPI adopted a plus/minus system in 2004.

Despite the advantages, under a plus/minus system I’m finding it challenging to implement something I learned from Ken Bain in 2007.

In his 3-day Best Teachers Summer Institute, Bain leads a unit on assessment. In particular he asks educators, “What does it mean for a student to be an ‘A thinker’ in your discipline? What must students be able to demonstrate or do to live up to that standard? How would it be measured if there were no such thing as numerical grades?” Bain suggests that we share with students our answers to these questions to help them understand what it takes to be a so-called A thinker.

When I did Bain’s exercise, I learned that defining A vs. B vs. C thinkers is not trivial, but it is doable. But consider this: our plus/minus system has 11 grade categories, whereas the A-B-C-D-F system has 5. It’s been 3 years, and acceptable definitions for each of the 11 brackets still elude me. What is it exactly that a C+ thinker can do that a C thinker cannot?

If you have found good ways to define plus/minus grades in the spirit of delineating what students demonstrate they can do (rather than numerical scores), I’d like to hear from you. What works for you?

—–

Here’s another issue with our grading system that borders on triviality, but, well, I think about these things. Suppose in a jumbo lecture an instructor gives 3 multiple-choice exams, averages the scores, and (voilà) assigns final grades according to the chart in the faculty handbook. If we assume the course scores are uniformly distributed between 50% and 100% (which they are not, but bear with me), then our plus/minus system puts roughly ten grade cutoffs in that 50-point range, each with a half-point band just below it; so about 10% of students (10 × 0.5 / 50) are expected to land less than half a point below a cutoff, inviting natural rounding up. That is, 1 in 10 students may be arguing for the next higher grade, a grade they technically didn’t achieve. Maybe the number is not exactly 10%, but it’s close for many reasonable grade distributions.
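If you want to check that back-of-the-envelope estimate, here is a quick simulation. The cutoffs below are a hypothetical plus/minus scale, not the actual faculty handbook chart, but any set of ten cutoffs between 50 and 100 gives roughly the same answer under the uniform assumption:

```python
import random

# Hypothetical plus/minus cutoffs (the actual handbook chart may differ).
CUTOFFS = [93, 90, 87, 83, 80, 77, 73, 70, 67, 63]

random.seed(0)
scores = [random.uniform(50, 100) for _ in range(100_000)]

# A "rounding case": a score less than half a point below some cutoff.
near_misses = sum(1 for s in scores
                  if any(c - 0.5 <= s < c for c in CUTOFFS))

# Ten cutoffs x 0.5 points each, over a 50-point range: about 10%.
print(near_misses / len(scores))
```

With a real (non-uniform) score distribution the fraction shifts, but as long as scores are spread across several brackets, a sizable minority of students will sit just under a line.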

Certainly, there are all sorts of considerations when resolving the rounding question, but that’s not the point. The issue is the sheer number of “rounding cases” that our grading system invites. I recently taught a course where 21% of the final scores fell less than half a point below a cutoff.

Happy 2010 everyone.

Posted in Assessing Learning, Grading | 4 Comments

VOCAT and the Question of Openness

x-posted from cac.ophony.org

It recently occurred to me that very little has been written about the Schwartz Communication Institute’s most ambitious and potentially most promising project, our Video Oral Communication Assessment Tool, or VOCAT. I have presented on VOCAT a number of times over the years (most recently at the 2009 Computers and Writing conference in June), but have not yet written about it in any kind of real detail. So it’s high time to remedy that.

VOCAT is a teaching and assessment web application. It is the fruit of a collaboration between the Schwartz Institute and the mad genius code-poets at Cast Iron Coding, Zach Davis and Lucas Thurston. It is still very much in development (perpetually so) but is already in use in introductory speech communication and theater courses as well as in a number of assessment projects. Our career center used it effectively a few semesters ago as well. To date, approximately 3200 Baruch students have used the tool.

VOCAT was developed in recognition of the principle that careful, guided review of video recordings of their oral presentations (or of any performance, for that matter) can be remarkably effective for aiding students in becoming confident, purposeful and effective speakers. It serves as a means for instructors to easily provide feedback on student presentations.  It enables students to access videos of their performances as well as instructor feedback and to respond to both. It likewise aggregates recorded presentations and instructor feedback for each user and offers an informative snapshot of a student’s work and progress over the course of a given term or even an entire academic career. Presentations can be scored live, as students perform, or asynchronously once the videos have been uploaded. (Our turnaround time at this stage is between one and seven days depending on how many sections are using the tool at once — once some of the key steps happen server-side, turnaround time will not be as much of a concern.) Built on the open source TYPO3 content management system, it is a flexible, extensible and scalable web application that can be used at once as a teaching tool and as a means of data collection for research or other assessment purposes. (Screenshots are available here. I am also happy to share demo login info with anyone who would like to take a look — please email me at mikhail [dot] gershovich [at] baruch [dot] cuny [dot] edu.)

While VOCAT is quite feature-rich at this early stage, especially when it comes to reporting, data export, and rubric creation, we are always thinking about ways in which the tool can be made more robust and flexible. Currently, we are playing around with adding a group manager feature for group presentations, tagging for non-numeric assessment, moving from QT to Flash video, video annotation, as well as server-side video processing and in-line video and audio recording. We are also considering allowing users to choose to enable social functionality to take advantage of web 2.0 tools for sharing and commenting on one another’s work. And since, at its core, VOCAT is a tool for aggregating and responding to anything that can be uploaded, we’re thinking about other uses to which it could be put. It could easily, for example, be adapted for writing assessment. And someone once suggested that it could be useful for teaching bedside manner for medical students. Adapting VOCAT for these purposes is hardly a big deal.

The platform on which VOCAT is built is open source, but the tool itself is not yet open. Right now, it is Baruch’s alone. Whether it should stay that way is a question much discussed around here. Here at the Institute we face several critical issues around open education, not the least of which is conflicting views on student access to Blogs@Baruch. With regard to VOCAT, however, the one thing constantly on my mind is the tension between an internal drive to share the tool as an open-source web application and build a community around it (there is no shortage of interested parties) and the pressures (or maybe a pernicious institutional common sense) that seem to compel us to keep VOCAT proprietary and use it to generate as much revenue as possible. I have heard arguments that VOCAT should be Baruch’s alone, that we should charge for its use and seek private funding for its deployment and development. This is a business school, after all, and I’m sure promoting and marketing VOCAT could be an interesting project for an upper-division Marketing course.

Yet, I am inclined to believe that VOCAT should be shared freely and widely with other institutions and that other developers should be encouraged to develop for it. A great many more students would benefit, and development would certainly accelerate as more and more schools add features they need that could then be adopted for use here. Were VOCAT open, in other words, it would evolve quickly and probably in ways we haven’t even imagined. And that is very exciting.

In the coming months, I hope to continue to present on VOCAT and to gain insights into the roles it can play in communication-intensive courses or in a communication-focused curriculum of any sort. More importantly, I would like to move towards opening it up and will work with our developers on the features and functionality that facilitate sharing. I hope also to draw upon the tremendous expertise of my friends and colleagues involved in the open education movement and learn from those who have worked with and developed various open source tools for teaching and learning. Listening to others’ ideas for VOCAT has been invaluable to thinking through what this web app could be and how best to realize its full potential as a teaching tool, in terms of deployment, training, and development.

Posted in Assessing Learning, Communication Skills, Student Participation, Teaching Large Classes, Using Technology | 6 Comments

A+ . . . Despite Heavy Accent

Question: A student gives a presentation. He has a heavy foreign accent and is at times incomprehensible. Overall, the speech seems well researched and on target. What do you do?

a. Give him an A.

b. Subtract points for incomprehensibility and give him a B.

c. Tell him that the presentation was unacceptable and that he should improve his oral communication proficiency.

Instructors cite a variety of reasons (often with a kernel of truth) why they let incomprehensibility slide:

1. “Asking a student to reduce his/her accent is embarrassing and discouraging.” — It is true that accents are windows to our identity, and that a student changing his/her accent may experience a tangible sense of loss or feel repercussions from home culture friends and family.

(more…)

Posted in Assessing Learning, Communication Skills, Uncategorized | 10 Comments

Usefulness of Tests

When the idea for a general teaching blog was first formed, David Birdsell, Dean of the School of Public Affairs here at Baruch College, made a great suggestion – writing posts on the face-to-face faculty development events such as our Master Teacher Series. Last week, Edward L. Deci, professor of psychology at the University of Rochester and founder of Self-Determination Theory (SDT), conducted a session for the Master Teacher Series entitled “Motivation for Teaching and Learning at the College Level: Facilitating Autonomous Motivation.”

While Edward Deci’s presentation was geared towards college teaching, he noted that motivation is broadly relevant in other settings as well, such as parenting and sports. He talked about the three basic psychological needs (autonomy, competence, and relatedness) and their importance in extrinsic and intrinsic motivation. For this post, I will focus on only a small portion of his thought-provoking presentation – the usefulness of tests.

I liked what he said about the usefulness of tests. He explained that tests can be useful when we focus on the “primary function… to assess whether students have learned and can perform.” Therefore, “tests can provide meaningful feedback to students, teachers, and administrators.” One key point was to “minimize rigidity” in testing, for example, having students grade their own quizzes for feedback on how well they are learning the course material. He emphasized the importance of being respectful and responsive to students and to provide a choice whenever possible.

I have discussed with students how well the class is doing as a group on the tests and even changed the format of the final exam based upon those discussions. For example, in one class, I noticed that my students performed best on short essay questions when I assessed their knowledge and understanding of the course material. We came to an agreement that the final exam would be all short essay questions: students had to choose 20 out of 30 short essay questions to answer. I felt the outcome of this change was a better measure of what students had learned.

This leads me to ask:

Are you open to renegotiating the learning contract (the syllabus if we’re focusing on the explicit part) with your students?

What other adjustments have you made in your courses based upon students’ input in order to enhance learning and assessment?

Posted in Assessing Learning, Grading, Syllabi | 3 Comments

Multiple Choice Questions for Quants?

Lately, I’ve been wondering about the efficacy of multiple-choice exams in quantitative disciplines, like operations management, calculus, finance, etc., and discovered this little study that García Cruz and Garret presented at the 2006 International Conference on Teaching Statistics in Brazil (link to proceeding). Using a combination of multiple-choice and open-ended questions about descriptive statistics, they found “that many students who choose the correct answers in multiple-choice questions were completely unable to demonstrate any reasonable method of solving related open questions.” Food for thought.

Posted in Assessing Learning | Comments Off on Multiple Choice Questions for Quants?

Viral Video, Donor Dollars, and Academic Integrity: Poor Students versus Freedom of Speech?

This summer, two of my colleagues became the subject of a YouTube viral video. Maybe you heard about the swearing, pants-dropping debate coaches (well, only one dropped his drawers) videotaped (with their consent) at the national cross-examination debate tournament… It was quite a spectacle. Since then, the video has been taken down, the debate association has issued a statement, the mooner was fired (purportedly, for years of questionable conduct) and the other young coach sanctioned by her University. YouTube consumers have moved on to fresher fodder. Yet, as midterms approach, new “angry professor” videos are likely to surface – momentary catharsis for undergrads trapped in fill-in-the-blank purgatory. No college is immune from this new virus…

VIRAL VIDEOS ARE A NEW FORM OF FALLOUT

Though colleges have had to manage external criticism in the past, the viral video phenomenon is a different beast. Consider the issues our campus faced a couple of years ago with the fresh(wo)man text War is a Force that Gives Us Meaning… (New York Sun article: “Baruch Requires Students Read Book Some Are Labeling Anti-Semitic“).

Though the issue received prominent attention in the print press, the back-and-forth was short-lived, the college had time to craft a response (i.e., freedom of speech), and the exchange was largely print-based. The story reached thousands – not millions. The story lacked compelling oral and visual content (e.g., yelling, crying – mooning). It paled in comparison to the storm surrounding the viral debate video (e.g., print and television stories, a rumored Chronicle investigation, a 100% funding cut for one program and potentially related cuts at other colleges). Comparatively, the War controversy was tame. Importantly, it did not result in financial fallout…

OUTSIDER OPINION AFFECTS THE BOTTOM LINE (AND POOR STUDENTS)

College costs are rising, tax levy and financial aid moneys are in flux, and increasingly we need donor/investor money to bridge the gaps. Their money enables poor, working, and middle class students to enjoy the privilege of post-secondary education (aside: thank you for subsidizing my B.A., M.A., and Ph.D., Ohio/national taxpayers!). If donors respond to controversy by curtailing their support, students can be deprived of programs, perspectives, professors… To the extent that most students cannot afford the “true” costs of their schooling (i.e., a 100% tuition-funded institution…), we have to consider/manage how their underwriters perceive our campus. Viral video makes us more vulnerable… financially and intellectually…

(more…)

Posted in Assessing Learning, Learning Goals and Objectives, Uncategorized | Comments Off on Viral Video, Donor Dollars, and Academic Integrity: Poor Students versus Freedom of Speech?

Writing Better Learning Objectives

When I attended the Zicklin Business School Summer Teaching Seminar in 2007 (and again this year), the first thing I noticed was that the terms “learning goals” and “learning objectives” are used interchangeably. This seems to be the case throughout much of the College. From my training and experience in strategic management, and following the approaches of Robert Mager, the behavioral psychologist known for his books on instructional design, goals and objectives are to me two different, though connected, things. I strongly believe that to write better learning objectives, we need to define these terms and use them more precisely and consistently across the Baruch College community.

A well-written goal simply states an outcome or end result to be achieved. In other words, where do we want to go? While goals should be specific, they are often phrased in broader terms that need to be operationally defined (called “fuzzies” by Robert Mager). Now that we know where we want to go, how do we get there? This is where objectives come in. They should be specific and measurable and state what must be done to achieve the goal. In the case of learning objectives, they should be phrased from students’ perspective, not teachers’.

From an instructional design perspective, learning objectives have three purposes:

  • Serve as a guide in designing a course
  • Communicate to students what they are expected to achieve
  • Assist in evaluating instruction

I found a good article summarizing Robert Mager’s approach to writing learning objectives: “How to Write Great Learning Objectives.” I don’t adhere to Mager’s approach as a strict formula, especially when it comes to less tangible subjects; instead, I use it as a guideline for writing more specific, and therefore clearer, learning objectives. I have found his approach very useful in guiding and improving instruction. The place for us to start, though, is clearly defining learning goals and objectives and using these terms consistently.

Posted in Assessing Learning, Learning Goals and Objectives | 4 Comments