How to Use Knowledge Surveys

In the simplest application of knowledge surveys, students complete a survey during the first few days of a class, providing baseline information. They then take an identical survey just prior to the final exam. Before the final exam, both students and instructors see the results and compare them with those from the beginning of the semester. Students see how much they have learned and where they still need to focus their studying. The instructor sees which parts of the class have been most successful and which have not.

Some instructors may choose to use mini-surveys to help students prepare for tests. In this case, they select a subset of questions that focuses on one or a few topics. Although this use of knowledge surveys has value to the students, it does not particularly help the instructor.

Some instructors may use surveys in the "scholarship of teaching and learning"—as part of classroom research focused on learning. For example, results of knowledge surveys may be used to evaluate student learning goals, barriers to learning, or knowledge retention. Surveys can be used to separate "good" questions from "bad." That is, instructors may identify questions that do not reliably reflect student learning. Surveys also permit evaluation of the efficacy of different pedagogies and of curricula.

Knowledge surveys produce a tremendous amount of data. Instructors may administer them in many ways, but, given the amount of data, manual scoring is not recommended. Standard hard copies of the survey can be handed out, with students asked to complete scantron (bubble) sheets. Most computer scoring packages can return results in a format easily imported into a spreadsheet for analysis. Alternatively, surveys may be administered using courseware such as Google or specific survey packages developed for knowledge surveys (see references in this module).
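Once the scored results are in a spreadsheet-friendly format such as CSV, summarizing them takes only a few lines of code. The sketch below is illustrative, not tied to any particular scoring package: it assumes a hypothetical export with one row per student and one column per question, where each cell holds a self-rating on a 0-2 confidence scale (0 = could not answer, 1 = partial answer, 2 = could answer for full credit).

```python
import csv
import io
from statistics import mean

# Hypothetical CSV export from a scoring package: one row per student,
# one column per survey question, self-rated confidence on a 0-2 scale.
raw = io.StringIO(
    "student,q1,q2,q3\n"
    "s1,0,1,2\n"
    "s2,1,1,2\n"
    "s3,0,2,2\n"
)

reader = csv.DictReader(raw)
rows = list(reader)
questions = [name for name in reader.fieldnames if name != "student"]

# Average self-rating per question across all students; low averages
# flag topics that may need more attention.
averages = {q: mean(int(row[q]) for row in rows) for q in questions}
for q, avg in sorted(averages.items()):
    print(f"{q}: {avg:.2f}")
```

With a real export, the `io.StringIO` stand-in would simply be replaced by `open("survey.csv")`; the rest of the logic is unchanged.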

Data may be sorted and analyzed in any of a number of ways. Perhaps an instructor wants to measure learning for different subject areas of their class. Perhaps they want to know how learning at the low end of Bloom's scale compares with learning at the high end. Or, perhaps they just want to compare students, or to compare one class to another. Analysis is best aided by graphical means. Some good examples of graphical output can be found in the GSA poster by Perkins and Wirth, Knowledge Survey: Applications and Results (PDF).
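The Bloom's-scale comparison mentioned above can be sketched as follows. The question tags and pre/post ratings here are invented for illustration; in practice they would come from the instructor's own classification of each survey question and from the paired beginning- and end-of-semester surveys.

```python
from statistics import mean

# Assumed tagging of each question by Bloom level (illustrative only).
bloom_level = {"q1": "low", "q2": "low", "q3": "high", "q4": "high"}

# Illustrative class-average confidence ratings (0-2 scale) from the
# start-of-semester (pre) and end-of-semester (post) surveys.
pre  = {"q1": 0.4, "q2": 0.6, "q3": 0.2, "q4": 0.3}
post = {"q1": 1.8, "q2": 1.7, "q3": 1.1, "q4": 1.0}

# Average gain (post minus pre) within each Bloom grouping.
gains = {}
for level in ("low", "high"):
    qs = [q for q, lvl in bloom_level.items() if lvl == level]
    gains[level] = mean(post[q] - pre[q] for q in qs)
    print(f"{level}-level gain: {gains[level]:.2f}")
```

The same grouping idea works for subject areas or class sections; only the tag dictionary changes. The resulting per-group gains are then easy to plot as a bar chart for the kind of graphical comparison the poster illustrates.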