SAGE Musings: "Blooming" Exams and Other Summative Assessments

Jenny McFarland, Edmonds Community College

published Sep 13, 2017

Why "Bloom"?

When faculty are asked what they want students to know and be able to do at the end of a class or a program, we usually respond with high-level outcomes. We want our students to understand the process of science and the role of science in society. We want our students to think critically and apply scientific reasoning in their personal lives and choices. We want our students to be able to use quantitative reasoning in generating and interpreting data. We expect students to understand and be able to apply core concepts in our disciplines. However, we may not be assessing these higher-level outcomes (analysis, synthesis, and evaluation) as often as we assess knowledge and comprehension in our courses. "Blooming" our summative assessments, that is, identifying the Bloom's level of each item, can help us better align our ultimate learning goals with our practice.

Faculty and STEM education researchers "bloom" exams or summative assessment questions for many reasons, including (but not limited to):

  • to test the hypothesis that some exams emphasize factual recall, in spite of the stated outcomes that students be able to use a deeper, conceptual disciplinary understanding to solve problems or make decisions (Zheng et al., 2008; Momsen et al., 2010).
  • to test the effect of an intervention or assessment and to compare students' abilities or scores on lower-order cognitive skills (LOCS) vs. higher-order cognitive skills (HOCS) questions on exams (Stanger-Hall, 2012).
  • to be able to give students specific feedback on their performance on questions that assess higher-order cognitive skills (e.g., analysis) and lower-order cognitive skills (e.g., comprehension).
  • to compare the percentage of LOCS vs. HOCS questions on final exams in prerequisite courses with that in higher-level courses (a simple tally of this kind is sketched after this list).
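
One concrete way to make the comparison in the last bullet is to tally the consensus Bloom level of every exam item and compute the share of LOCS vs. HOCS questions. The short Python sketch below is illustrative only, not from any of the cited studies; it assumes a particular LOCS/HOCS split, and published schemes differ on where "application" falls, so adjust the sets to your rubric.

```python
# Illustrative sketch: tally "bloomed" exam items into LOCS vs. HOCS.
# The split below is an assumption, not a published standard; some
# schemes place "application" with HOCS rather than LOCS.
from collections import Counter

LOCS = {"knowledge", "comprehension", "application"}  # lower-order levels (adjust as needed)
HOCS = {"analysis", "synthesis", "evaluation"}        # higher-order levels

def locs_hocs_percentages(item_levels):
    """Given one consensus Bloom level per exam item, return (%LOCS, %HOCS)."""
    counts = Counter(level.lower() for level in item_levels)
    n_locs = sum(counts[lvl] for lvl in LOCS)
    n_hocs = sum(counts[lvl] for lvl in HOCS)
    total = n_locs + n_hocs
    return 100 * n_locs / total, 100 * n_hocs / total

# Hypothetical categorizations for a prerequisite-course final exam
prereq_exam = ["knowledge", "knowledge", "comprehension", "application", "analysis"]
print(locs_hocs_percentages(prereq_exam))  # -> (80.0, 20.0)
```

Running the same tally on a higher-level course's final exam then makes the prerequisite-to-advanced comparison a one-line calculation.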

All of these applications of Bloom's taxonomy help faculty, departments, and programs align summative assessments with their desired student outcomes. Beyond exam questions, we can apply Bloom's taxonomy to other summative assessments. The elements of rubrics for poster and oral presentations, capstone projects, or portfolios can also be "bloomed" to ensure that students and faculty are explicitly addressing and assessing higher-order cognitive skills in these projects.

How to "Bloom" Assessments

Several schemes have been developed in the past decade to help faculty "bloom" assessment questions. Early on, the Blooming Biology Tool (BBT) was developed as an "assessment tool based on Bloom's Taxonomy, to assist science faculty in better aligning their assessments with their teaching activities and to help students enhance their study skills and metacognition" (Crowe et al., 2008). Chapter 2 of Dirks et al. (2014) contains a protocol for categorizing questions (see their Box 2.1), derived from the BBT. These tools have been used by many instructors and STEM education researchers over the past nine years.

Another recently published tool is the Bloom's Dichotomous Key (BDK), presented in Table 2 of Semsar and Casagrand (2017). The BDK can be used across STEM disciplines (it is not biology-specific), is relatively easy to use, and has high inter-rater reliability.

Regardless of the specific tool used to categorize summative assessment questions, at least two raters should assess each question, discuss any differences, and come to consensus. The process of "blooming" summative assessments as a team of faculty in a course, department, or community of practice helps develop a shared understanding of what students should know and be able to do. It also provides an excellent opportunity to examine the alignment between what we want students to know and be able to do and what we actually ask them to do on our summative assessments.
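
When two raters categorize the same set of questions, it can help to quantify their agreement before the consensus discussion. One standard statistic for this is Cohen's kappa, which corrects raw percent agreement for chance. The sketch below is a minimal, self-contained illustration with hypothetical data; it is not the reliability procedure Semsar and Casagrand (2017) describe.

```python
# Minimal sketch of Cohen's kappa for two raters' Bloom categorizations.
# The rater data here are hypothetical, purely for illustration.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' category labels."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_expected = sum((c1[cat] / n) * (c2[cat] / n) for cat in categories)
    return (p_observed - p_expected) / (1 - p_expected)

r1 = ["knowledge", "analysis", "comprehension", "analysis", "evaluation"]
r2 = ["knowledge", "analysis", "comprehension", "application", "evaluation"]
print(cohens_kappa(r1, r2))  # -> 0.75 for this hypothetical data
```

A kappa near 1 indicates strong agreement; low values flag question sets where the rating team should revisit its shared definitions of each Bloom level before coming to consensus.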


References:

Crowe, A., Dirks, C., and Wenderoth, M.P. (2008). Biology in Bloom: Implementing Bloom's Taxonomy to Enhance Student Learning in Biology. CBE - Life Sciences Education, 7(4):368-381. Available online at https://www.lifescied.org/doi/full/10.1187/cbe.08-05-0024.

Dirks, C., Wenderoth M.P., and Withers, M. (2014) Chapter 2: Evaluating the Cognitive Levels of Instructional Materials Using an Educational Taxonomy in Assessment in the College Science Classroom. New York: W. H. Freeman and Company. See excerpts at https://web.archive.org/web/20210508023051/http://nas-sites.org/responsiblescience/files/2014/09/Materials-for-Assessment.pdf.

Momsen, J.L., Long, T.M., Wyse, S.A., and Ebert-May, D. (2010). Just the facts? Introductory undergraduate biology courses focus on low-level cognitive skills. CBE - Life Sciences Education, 9(4):435-440. Available online at https://www.lifescied.org/doi/10.1187/cbe.10-01-0001.

Semsar, K., and Casagrand, J. (2017). Bloom's dichotomous key: a new tool for evaluating the cognitive difficulty of assessments. Advances in Physiology Education, 41(1):170-177. Available online at https://journals.physiology.org/doi/full/10.1152/advan.00101.2016.

Stanger-Hall, K.F. (2012). Multiple-choice exams: an obstacle for higher-level thinking in introductory science classes. CBE - Life Sciences Education, 11(3):294-306. Available online at https://www.lifescied.org/doi/10.1187/cbe.11-11-0100.

Zheng, A.Y., Lawhorn, J.K., Lumley, T., and Freeman, S. (2008). Application of Bloom's Taxonomy Debunks the "MCAT Myth". Science, 319:414-415. Available online at https://science.sciencemag.org/content/319/5862/414.



