Module 6

MODULE SIX OBJECTIVES

At the end of this module, participants will be able to:

  • discuss and apply strategies for assessing students online
  • discuss and apply strategies for assessing online classes

Portfolios, Peer Review, and Assessment

In 2001, I co-taught two sections of a graduate Computer Systems Management course with another instructor, over a traditional 14-week semester. One section was completely online; the other section, face to face (F2F). Both sections had 30 students.

  • The F2F section met once a week on the University of Maryland campus at College Park. And we did typical F2F things: small group work, large group work, panels, labs, "lectures," etc.
  • The online section was completely online, using WebTycho, an online learning platform similar to Blackboard and WebCT. And we did typical online things: online introductions, discussion board cafe, weekly discussion board prompts, etc.

The primary assessment for both sections was a digital portfolio, where each student had to develop a project, reflect on the process, and present the results.

The point was to assess students on performance.

At the beginning of the class, we came up with a series of benchmarks that functioned as a set of standards for the portfolio projects. Then we offered guidelines and checkpoints. For instance, by the third week of class, the portfolio should demonstrate such and such; by the fifth week, such and such; by the seventh week, such and such...and so on. The components of the portfolio were supposed to demonstrate each student's competence/excellence in a particular standard.

By allowing the students to be in charge of determining how they met those standards, we were consciously trying to be learner-centered and constructivist.

So much about the two sections of the class was the same. Here is where they differed:

  • Students in the F2F class received regular (every other week) feedback from the instructors on their digital portfolios.
  • Students in the online class received regular (every other week) feedback from other students. Each student was required to upload his or her digital portfolio to the Discussion Board every other week and to comment on five other students' portfolios. If a portfolio already had five responses, the responding student had to move on to someone else's portfolio thread. Every student gave five responses. Every student got responses. (A sketch of one way to script this rotation follows this list.)

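A brief aside for anyone who might want to script that rotation: below is a minimal sketch in Python. It is entirely hypothetical; the names assign_reviewers and roster and the circular-shift scheme are invented for illustration, and in our course the five-response cap and the students' own choices balanced the load informally. With 30 students each giving five responses and no portfolio taking more than five, every student necessarily receives five; the circular shift just makes that balance explicit.

    def assign_reviewers(students, reviews_each=5):
        # Circular-shift assignment: reviewer i responds to the next
        # `reviews_each` students in a fixed circular order, so every
        # student gives and receives exactly `reviews_each` responses.
        n = len(students)
        return {students[i]: [students[(i + k) % n]
                              for k in range(1, reviews_each + 1)]
                for i in range(n)}

    # Example: a 30-student section, as in the course described above.
    roster = ["student_%02d" % j for j in range(1, 31)]
    pairs = assign_reviewers(roster)
    assert all(len(v) == 5 for v in pairs.values())        # five given each
    assert all(sum(s in v for v in pairs.values()) == 5    # five received each
               for s in roster)

Any scheme that has each student give the same number of responses under a matching per-portfolio cap produces the same balance; the circular shift is simply the easiest version to verify.
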
We tried to model good responses: commend, commend, recommend, etc.

The interesting part:

As the weeks went by, we watched the quality of the portfolios in the online class go up and up and up until, by the end of 14 weeks, the very worst portfolio in the online class was better than the very best portfolio in the F2F class.

Truthfully, we did expect the online portfolios to be better. The research we had read was pretty clear that collaborative learning typically produces higher quality work. We just didn't expect the differences between the two classes to be so stunning!

Why were they so much better?

The online students had the opportunity to see what others were doing, to learn from others, and to model their own work on others'. Also, the act of responding to and critiquing other students' work required that the critiquing student think more deeply about the concepts (or think about the concepts from a slightly different angle).

It was a fascinating thing to watch.

And it forever changed my attitude about group work, learning communities, and assessment of student work.

I realized that a class design can tap into the magic of collaborative learning without instructors having to break students into groups and say, "Okay, develop this project together."

Further, assessment of students was a breeze with the online section. It had occurred throughout the course...with most of the assessment "comments" coming not from instructors, but from students.

And assessment of the course was easy to do as well. The portfolios themselves provided artifacts by which the course itself could be evaluated.

The course design and process produced these particular results. That was clear and evident to everyone.

One interesting thing we did not expect, however, involved plagiarism. In the online section, we could find no cases of plagiarism. In the F2F class, we found (or at least were able to prove) three cases.

Why? We are guessing that since the F2F students were turning in work to two harried instructors at the end of the semester, they were more tempted to take a shortcut or two and plagiarize. And since the online students' work was read by 30 other people for 14 weeks, each student was, essentially, forced to find his or her own voice.

And the online section frankly appeared to us to have a greater sense of community. They never met. But they learned from one another, assessed one another, and produced higher quality individual work because of it.

The Readings

Both sets of readings (the Palloff & Pratt chapter and the Bender chapter) offer excellent specific strategies for assessing student work and for assessing the design of an online class. Not much surprise there, but Bender also deals with the issue of assessing the whole enterprise of online learning.

Most of our effort in the final module of the class will be on assessing this class in particular. To the extent that we preached anything here, did we practice what we preached? In some cases, probably yes. In some cases, probably not so much.

After you have read the Palloff & Pratt and the Bender, head on over to the discussion board. We need to talk a little bit about assessing this class.

And we have some wrapping up to do! :-)

 

©2008 McDaniel College