Meaningful, Informative, Simple: The Elusive Goals of Program Assessment at West Chester University of PA
LeeAnn Srogi, Department of Geology and Astronomy, West Chester University of PA
We began the struggle to assess our geoscience programs back in the 1990s. Our department offers a B.S. degree in geoscience and a B.S.Ed. degree in Earth and Space Science. We created an assessment plan centered on a portfolio of student work from selected courses, together with a reflective essay written by each student during the senior seminar course. This plan proved unworkable: the logistics were too complicated, and we hadn't thought through our specific learning outcomes or how they would translate into effective rubrics for student work. Before the plan could be revised, the department made substantial changes to the curriculum, eliminating some of the courses that were critical parts of the assessment plan.
Meanwhile, our attention shifted to assessing our introductory-level General Education courses. The University had established seven goals of general education, from which each department had to select three. The three goals we selected (in brief, quantitative skills, critical thinking, and "scientific understandings") had a number of specific outcomes that could be assessed. We currently assess four introductory-level courses: geology, astronomy, meteorology, and "Humans and the Environment." We use a common, simple rubric to assess specific student products, such as questions on an exam, for each goal. This is a summative, not a formative, assessment. The instructor(s) of each course writes the questions and assesses the students' work; the rubric for program assessment is separate from the instructor's grading for the course. In courses with multiple sections, instructors use the same questions and the same rubric, although the syllabi, textbooks, and labs differ. All data are submitted to the University in aggregate. We also use an anonymous end-of-semester survey asking students how well the course met the general education goals.

The general education assessment has been administered successfully for three to four years. Results in the first year showed that: 1) we needed to improve our teaching to build students' quantitative skills; and 2) we needed to discuss the general education goals with the students. Both improvements were instituted, and we have met our targets in subsequent years. Within the past two years, the University has begun its own assessment of the general education goals, and its interest in departmental assessment of general education courses has diminished to the point where we are no longer required to submit data. So we find ourselves re-thinking the purpose of our general education assessment.
In my opinion, our department's general education assessment has taught us some important lessons for all future assessment:
- One of the most important roles of assessment is that it forces faculty to articulate course goals and learning outcomes, and to discuss them with students. More than one faculty member has commented that the general education assessment helped them improve their course syllabus and their communication with students about our department's mission.
- The outcomes being assessed need to accurately reflect the course goals, content, and delivery. We have found large differences in performance on some questions among students in different sections of the same course; in general, students in sections taught by the faculty who wrote the assessment questions perform better. We have no other data suggesting that some sections are being taught poorly; instead, there seems to be a mismatch between the specific questions used for assessment and the content and teaching methods of those sections.
- The right balance has to be struck between meaningful data and simple data collection. Faculty teaching some general education courses insisted on using essay questions, rather than multiple-choice questions, to assess critical and analytical thinking. Because essays are more time-consuming to evaluate, these data were not gathered in some years when the instructors lacked time to complete the evaluation. Rather than having "better" data, we have no data at all.
At the same time the general education assessment was progressing, the University was undergoing NCATE accreditation of its education programs, including our B.S.Ed. in Earth and Space Science. One requirement was an extensive assessment of the B.S.Ed. program and student performance, which was reviewed and finally approved by the National Science Teachers Association (NSTA) in 2008, thanks to the hard work of Dr. Steve Good in our department.
We are now working on a new assessment plan for our B.S. Geoscience degree program. We will make it parallel to the B.S.Ed. assessment, since the two programs share common core courses and some of our majors transfer between them. The B.S.Ed. assessment includes two components that will become part of the new B.S. plan: a student survey with items directly linked to our program goals, and a rubric for evaluating student performance on significant research assignments. Because the B.S.Ed. program assessment is based in part on the Competencies (content knowledge and skills) established by the NSTA, these Competencies are part of the B.S. assessment as well. Our current curriculum was devised to meet the course requirements for Professional Geologist certification through the National Association of State Boards of Geology (ASBOG), because many of our majors ultimately obtain this certification. The ASBOG test blueprint for the Fundamentals of Geology exam provides another set of student learning outcomes. We have created a matrix showing which of our courses potentially "cover" each NSTA Competency and ASBOG item. However, the NSTA Competencies and ASBOG items together provide far too many outcomes for practical student learning assessment at the course and program levels.
I come to this workshop with questions about my department's assessment plans:
- What do we want or need to know about our existing program to make it better (and what do we mean by "better")?
- How can we explicitly connect our department goals (which are pretty broad) with what the students are doing in our courses and in extra-curricular opportunities?
- How can we distill our broad department goals and the numerous, overly specific NSTA Competencies and ASBOG items into a small set of manageable student learning outcomes?
- What kinds of information can we collect from our students about these learning outcomes that will help us to improve our program?
- How can we implement a plan that will be simple enough to be sustainable, while collecting meaningful data that will tell us what we need to know?