Clickers, or audience response systems, are used to increase student engagement in a course and, when partnered with peer-teaching techniques, to improve student retention of course content and overall student learning. Like other instructional technologies, however, clickers can be used well or poorly, and poor use can limit their effectiveness. Based on student feedback and a review of published articles about clickers, we offer the following pedagogical advice on using clickers successfully in class:
- Be unpredictable in the timing and number of clicker questions you ask. In a lunch session where students gave feedback about their instructors’ use of technology, one student described a course in which the faculty member predictably asked a single clicker question mid-way through the lecture and no others after that. According to the student, her fellow students would pay attention up to the point where the clicker question was asked and then tune out, confident that they would not be asked to respond again. One advantage of using clickers in lecture-based courses in particular is that they encourage students to remain focused on the lecture while testing their understanding of key concepts. To maximize this “student engagement effect” (the tendency of students to listen more attentively when they are expecting a clicker question that tests their knowledge), ask more than one clicker question per class, and vary the points in your lecture where you ask them. The right number of questions depends on your sense of the significant topics you are covering and the typical student misunderstandings that might exist around those topics, but it should always be more than one.
- Use clickers to assess student understanding of major course concepts. Once the receiver has registered student responses, the software can display a bar chart showing the range of answers to a particular clicker question. The distribution of those responses (the percentage of students who chose the correct answer, the percentage who chose the most plausible-sounding wrong answer, and so on) will help you determine how well students have understood the concept. In deciding what and how many clicker questions to ask, consider the following:
  - What are the major ideas I am trying to convey during this lecture, and what are some of the ways that students typically misunderstand those ideas?
  - What are the most common preconceptions that students have related to this concept, and how can I test whether students have abandoned those (mistaken) preconceptions?
In general, once you have finished explaining a key concept, a clicker question can help you determine whether students have understood it well (allowing you to move on to the next topic) or whether they still haven’t grasped it and need to discuss it among themselves or hear additional explanations and examples. You can also use a clicker question at the beginning of class to gauge how many students have done, and understood, the assigned reading.
- Give students time to discuss their understanding among themselves. If a significant percentage of students choose a wrong answer the first time you ask a question, one option is to explain the concept again and hope for better results. Another commonly used and arguably more effective option is to have students discuss with their peers how they arrived at their answers, working together to figure out what the correct answer must be. After that discussion, you can re-poll the question to see whether students have gravitated toward the correct answer. Research has shown that when students are given a chance to discuss concept questions with peers after an initial poll, the number who choose the correct answer on the subsequent poll increases substantially. (See Catherine H. Crouch and Eric Mazur, “Peer Instruction: Ten Years of Experience and Results,” American Journal of Physics, Vol. 69, No. 9, September 2001, p. 972. Available online at http://web.mit.edu/jbelcher/www/TEALref/Crouch_Mazur.pdf.)
- Ask students to predict the outcome of an experiment via a clicker question before conducting the experiment, then compare their predictions with the actual results. This technique is related to the “student engagement effect” of clicker use described above. If students have made a prediction about the outcome of an experiment (or about the results of a national survey), they will be highly motivated to find out whether they predicted correctly, and will therefore observe the experiment (or attend to the concept you are discussing) with much more focused attention. If many students predicted an incorrect outcome, you can discuss possible reasons for that prediction, once again working to counteract the mistaken preconceptions that led to the error.
- Ask controversial opinion or attitude questions via the clickers, perhaps in anonymous mode, as a way to generate student discussion. When students respond with clickers rather than by raising their hands, they feel much less exposed: it is much harder for their fellow students to see which response they chose, and if you make the question anonymous (so that responses are not associated with students’ clicker IDs), they feel even more protected and therefore more willing to respond honestly. Responding via clickers rather than a show of hands also frees students from the peer pressure of answering the way the majority of the class is answering. You can ask students their opinions on controversial questions and then compare the results with national survey data, provoking involved, thoughtful discussion once the data are presented.