One woman's path through doula training, childrearing, and a computer science Ph.D. program

Thursday, March 1, 2012

Teaching HCI and Jeopardy

This quarter, like most Winter quarters, I am a teaching assistant for the human-computer interaction (HCI) class on our campus. It is a mixed undergraduate and graduate class, cross-listed in two or three departments (this year: two). There is always a group project, and my job as a TA is to advise the groups on their projects. This was a slow week in terms of project deliverables, so I thought we would spice up the discussion sections with a friendly game of Jeopardy.

A night or two before, I made a game using the software on Jeopardy Labs, incorporating the topics from the first and second slide decks that the instructor provided on the course website. I took the questions -- err, the answers -- directly from the class notes, verbatim. One interesting thing to note is that we do not have regular assessments of rote memorization -- that is, there are no quizzes, no multiple-choice tests, and no final exam in the class. Instead, every assignment is project-based. It is an engineering course, and as such, we expect students to incorporate elements of theory and coursework into their engineering (or reverse-engineering) as required by each assignment.

So when I pulled out the first month's content in the Jeopardy game (which you can play online for free), I was unsurprised at the number of wrong answers... though I did wish there were more correct answers. What I found surprising was each of the three discussion sections' reactions to the game.

In Section A, at 11am, four of the five groups actively participated in the game. Group sizes ranged from two to five students, with the two-person group leaving the game with 0 points (likely indicating that they did not answer any questions).


Group  Size  Score
A.1    4     300
A.2    3     -400
A.3    3     -500
A.4    5     -1900
A.5    2     0


Negative points indicate groups that volunteered to answer a particular question (or, rather, to provide the question for a particular answer) but got it wrong -- thus subtracting rather than adding the points. Group A.1 won the game with 300 points; Group A.4 had the lowest score at -1900. Several members of Group A.4, the largest group in the section, would attempt the most difficult questions -- frequently getting the answers wrong -- but engaged the class in merriment as they commiserated over their loss (after loss, after loss) of points.

The total points awarded in Section A was the sum of the absolute value of each group's points, or 3100.
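For the curious, here is that tally as a minimal Python sketch. The engagement_points helper is my own invention, purely for illustration -- it has nothing to do with Jeopardy Labs itself:

    # Total points "awarded" in a section: the sum of the absolute value of
    # each group's score, so a group that lost 1900 points still registers
    # as 1900 points of participation.
    def engagement_points(scores):
        return sum(abs(s) for s in scores)

    section_a = [300, -400, -500, -1900, 0]  # Groups A.1 through A.5
    print(engagement_points(section_a))      # 3100

In other words, wrong answers still count as playing.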

In Section A, I did not allow other groups to answer the question after one group provided an incorrect answer. I did, however, provide hints when the answers were not given quickly. For example, I read: "This technique is used to test a system or complicated components of a system that do not exist."


One student was rubbing his head, and another was softly muttering under his breath: "Oh, oh, I remember this, oh!" and then, "I can even visualize the diagram, with the one guy in a different room with the curtain drawn."

I said, "That's right -- it's like he is the man behind the curtain."

I waited a little longer.

"Dorothy would use this technique."

"Ding ding! What is Wizard of Oz?"

"That's right!" I exclaimed.


Section A played the game with a great, positive attitude. One student said, "This is fun! We should do this again!" to which I replied, with a wink, that next week, another game awaits.

Section B, at 12:30pm, had three groups. In this section, the largest group (B.2) finished with the most points, and the smallest group (B.1) with the fewest. There were 1700 points distributed in Section B. Note that the point total does not capture a team's trajectory -- a string of bad luck followed by a comeback looks the same as a steady mix of correct and incorrect answers. Each of the three groups actively participated in the game, and, when I threatened another game next week, a student responded that it was high time to study. Right answer!


Group  Size  Score
B.1    2     -500
B.2    5     800
B.3    3     400


Students in both Section A and Section B avoided the Grounded Theory category like the plague. With it the last category standing, one student in Section B asked, "Can you give us a hint on what Grounded Theory is? Before I select it as a category?"

I thought for a moment about whether to facepalm or giggle. Instead, I just stared blankly at the student until he said, "Uh, never mind -- I'll take Grounded Theory for 100."

In Section B, I provided more hints. "These can be administered to large populations and can include open or closed items," I read from the screen. I waited a few moments. "It starts with a Q." I waited a few more moments. "The second letter is a U."

"Ding ding!" a student called.

"Ding?" I asked.

"What is a questionnaire?" the student answered.

"Correct!" I said, bouncing a little. "Good job!"

Section C was the smallest of the three sections, with just two student groups. There were 3000 points awarded in this section. But what struck me most was one student's feeling that the game was unfair. I mentioned earlier that we do not have regular quizzes or other assessments of memorization and rote learning. However, a huge amount of content is presented -- content that somehow needs to be learned, mastered, and applied to the course project and other design activities.

Group  Size  Score
C.1    5     2300
C.2    2     700


The student argued that this activity -- playing Jeopardy with HCI concepts and terms from lecture -- was testing just that: memorization. Further, he said, designing a system using HCI concepts and calling out the concepts by name are two different things. You can look up the names, but you should be able to describe the concepts.

Further, he said, the section size was unfair. Assuming that each student can answer five percent of the questions correctly (I raised my eyebrows, hoping he had said 95% and I had misheard), the student argued that there were simply not enough students in the section to reach the critical mass of knowledge necessary to produce correct answers.
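His argument is easy to make precise: if each of n students independently knows a given answer with probability p, the chance that at least one of them does is 1 - (1 - p)^n. Here is a quick sketch using his 5% figure -- the independence assumption, and the function itself, are mine, purely for illustration:

    # Probability that at least one student in a section of n students knows
    # a given answer, when each knows it independently with probability p.
    # p = 0.05 is the student's pessimistic estimate; independence is my
    # simplifying assumption.
    def p_someone_knows(n, p=0.05):
        return 1 - (1 - p) ** n

    for n in (7, 10, 17):  # roughly the sizes of Sections C, B, and A
        print(n, round(p_someone_knows(n), 2))
    # 7  0.30
    # 10 0.40
    # 17 0.58

At 5%, even my largest section would clear a given question barely more than half the time, so he has a point about critical mass. At 95%, though, even his section of seven would essentially never miss.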

I argued that part of the course is learning how to convey your ideas to an audience, and how to persuade others in the HCI field that your methods are consistent and well-grounded. The only way to do that is to know the terms -- to speak the language. What's more, I said, it takes only one person who knows 100% of the content to produce correct answers. The number of students in the section should not matter. You should each know all of the content.

Right?

If an HCI student cannot tell me, the TA, the difference between performance measurement and retrospective testing, or the difference between latent and manifest content, does that mean he or she does not remember the terms, or does it mean that he or she does not understand them? Can you make an affinity diagram if you cannot remember it from lecture? Can you apply Grounded Theory when you do not select it as a category in Jeopardy because the entire concept draws a blank for you?

I have TAed classes with weekly quizzes, and classes without. My opinion is that (short) weekly quizzes help the instructor and teaching staff in two ways:
  1. Weekly quizzes clue me in on each student's progress and performance.
  2. Weekly quizzes give the students a list of solid topics to study each week.
Maybe HCI should bring back the weekly quiz, so that a little bit of repetition and memorization makes its way into the curriculum. Or maybe we need to reconsider the course project, and see how we can better incorporate the terms and concepts from lecture into it. That is the tension in writing project requirements: leave them too open and students disregard the formalism; close them down too much and you stifle creativity.

What do you think?
