« Workshop Discussion Threads
OK, we also keep hearing the emphasis on assessment going around the country. The problem is that no one here can tell us exactly what constitutes assessment beyond regular exams and assignments. We have been told that we have to develop assessment plans that are not included in our course grades, but the regents have given us no models, guidelines, or examples. I have two things that I consider assessment: I use a CPS system every class period, and I do a pre-test (I call it trivia; it's run with the CPS on the second day of class to teach them how to use the system and see what they know) and then a post-test (final exam review) with the same questions. I don't know if that counts, but I thought I'd get some opinions from outside my school. Thanks!
edittextuser=1708 post_id=2921 initial_post_id=0 thread_id=984
In my view, assessment must be integrated into the design of the course, well adapted to the type of teaching you do, and useful for student learning. It should be multidimensional, using different tools that are adapted to your teaching and to the different ways students learn (written, visual, auditory). It is also best if it is somewhat continuous, so you can evaluate the progress your students are making. For instance, I use a quick assessment for every class: I ask my students to perform a number of tasks prior to class, and I quickly look at them so I can give feedback. I believe in proximal learning (i.e., you learn based on your prior knowledge), so in one of my assessments I ask students to connect what they have read for a class to the knowledge they had before reading. This also exercises their metacognitive abilities (reflecting on their learning). I also believe that they need to develop their ability to read graphics, so I ask them to comment on a graphic every day. Then we discuss this graph in class; that way I give them the correct answer in class as part of the regular lecture. Finally, I have them do all kinds of activities that are not typical (like deciding on their own activities in lab) and provide them with guidance and a rubric to help them figure out what to do. The more your class is outside the bounds of typical classes, the more guidance you should give them, using if possible a tool that can also serve in assessment, like a rubric. I hope this helps, and I'd be happy to talk more about it with you if you want.
edittextuser=595 post_id=2922 initial_post_id=0 thread_id=984
Mel - In my view, 'assessment' is like 'data'. What counts depends on what question you are trying to answer.
So, it sounds to me like the regents want some measure of student learning in your classes other than grades- possibly for accreditation. What kind of learning do they want to measure? Some sort of general ed requirements? Critical thinking? Or do you get to decide as a department what would constitute important learning?
Once you have the answer to what you are trying to measure, the how becomes easier I think.
You might be interested in the workshop or its resulting website that the Building Strong Geoscience Departments project is running on program assessment this February.
edittextuser=3 post_id=2925 initial_post_id=0 thread_id=984
Thank you for your comments. To Cathy - my answer to your question about what they want measured is that I have no idea. They haven't told us (or at least it hasn't filtered down to us from the administration). I completely agree that if we had a clear understanding of what they wanted, then I could easily build a plan around it.
I do have my students interpret geologic (in the case of physical geology) figures or graphs. I pop it up on the lecture screen and ask them what they see/notice, and the discussion leads off from how they respond. They're more comfortable with this process as the semester progresses, but I don't quantify the results.
For Catherine, I concentrate on trying to develop critical thinking skills (partially in the way I just stated). I see too many students come to my classes who just want to know what to memorize for a test. They're a little irritated when I ask application questions that they have to think through critically :o) I think I'm better at creating assignments/labs with assessment rubrics built in for my environmental science course than for my other classes. I attribute this to the fact that I developed that course during one of the NAGT workshops! It's a great process. I had no preconceived ideas about how I wanted to teach the class because it was new. I need to start over in my geology course to incorporate more of that type of assignment and lab. I think it's harder for that class because I know what material I need to cover for anyone who may turn into a geology major and for those who will need to take the teaching certification exam. I read in some of the other posts that others have a harder time with the physical geology course as well. I guess it's just a matter of incorporating assignments/labs in small increments until I (or we) end up where we want to be.
edittextuser=1708 post_id=2930 initial_post_id=0 thread_id=984
Mel, you know you bring up something that I've been struggling with myself as a first year faculty. The amount of material necessary to prepare students in my major (meteorology) is not conducive to me running my classes with the kinds of hands-on activities and lab exercises and assessments (gasp! - I barely know how to do those at this point) I want to do. I feel like I would have to really trust my students to do a lot of the detail work (reading, problems) on their own while I focus on broader issues and hands-on activities. And as first and second year students, I'm not sure that I can put that burden on them and expect success.
Since many of you are not first year instructors, I'm curious what your thoughts on this are. What is the line between making sure I cover all the material and making the class more experiential?
edittextuser=2142 post_id=2932 initial_post_id=0 thread_id=984
Here's an interesting story: Last winter break I decided to do some pre- and post-testing of my spring-semester Earth Science students to assess their general ES knowledge of concepts. I devised a 33-question survey, administered electronically through Blackboard, based largely on Julie Libarkin's Geoscience Concept Inventory with some questions from the NY State Regents ES exam thrown in. I edited both sources to select the questions germane to the ES course I taught. I delivered it to both my face-to-face ES students and my online ES students, as a pre-test during the first week of the semester and as a post-test during the last week. In both sections, the scores went *down*! Stubbornly refusing to believe my instruction actually made my students more ignorant, I remain, however, at a loss to explain this. Perhaps it's the forced-response format of these tools, since I don't teach that way (the issue of 'authentic assessment')? Yet Julie's GCI is validated, so I trust her work, and her questions are good. I'm eager to see how my fall semester students do. If I see a repeat of the spring's results, then I've got to go back to the drawing board.
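One common way to quantify pre/post change on a concept inventory like this is the normalized gain, computed per student rather than from class averages. A minimal sketch in Python, with entirely hypothetical percent scores:

```python
# Normalized gain: g = (post - pre) / (100 - pre), per student on percent
# scores. It measures the fraction of the possible improvement achieved;
# negative values mean the post-test score dropped.
def normalized_gain(pre: float, post: float) -> float:
    if pre >= 100:
        return 0.0  # no room left to improve
    return (post - pre) / (100 - pre)

# Hypothetical (pre, post) percent scores for three students
pairs = [(40, 70), (55, 50), (30, 60)]
gains = [normalized_gain(pre, post) for pre, post in pairs]
print([round(g, 2) for g in gains])  # → [0.5, -0.11, 0.43]
```

Looking at the distribution of per-student gains, rather than the change in the class mean, makes it easier to see whether a score drop is driven by a few students or is class-wide.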
edittextuser=444 post_id=2933 initial_post_id=0 thread_id=984
Hi Todd, It is always difficult to decide what to include and what to exclude from a class that has so much material to cover. As I mentioned in my talk on Tuesday, I decided many years ago to cover less material and try to provide my students with the tools and methodologies they need to learn, on their own, the material I do not teach. I choose, among the possible material, the topics I am most at ease with and connect them to the material not taught. Also, when the class is more experiential, students not only learn more from it; they enjoy it more, and I enjoy teaching it more too. All this being said, I still cover, even if superficially, some of the material to which they need to have been exposed to pursue their studies.
edittextuser=595 post_id=2937 initial_post_id=0 thread_id=984
On the topic of whether to cover more material quickly, or less material more effectively in a majors class:
We offer a full meteorology curriculum in our program with three meteorology professors and ~60 undergrads (freshmen through seniors)... What this means is that each of us (the professors) has each of the individual meteorology students in 3-4 different courses over 4 years. I often see the same students in my classes from one semester to the next. We have a lot of material to cover...
What's discouraging is that we find there is little retention from one semester to the next - despite the fact that all of our courses tend to build on each other. Students in my senior Climatology class have some of the SAME misconceptions I find in my intro class! The upshot is that I spend a lot of time reviewing what they've learned in previous classes - and never get to all the new material anyway. Ultimately, being able to cover a lot of material in upper-division classes requires that students have a solid understanding of the 'basics'...
This semester I've begun giving my senior students regular short quizzes on the 'basics' - as a way to review, and as a way of determining how quickly I can introduce new concepts. I'm finding that each semester is different, and I need to tailor the pace of the class to the students in that particular class.
edittextuser=1087 post_id=2938 initial_post_id=0 thread_id=984
A comment on the pre- and post-test scores dropping that Robert mentioned. I too have seen that with some students (not across the board). I have a theory (not tested in any way) that on the pre-test, students are purely guessing, whereas on the post-test they are trying to recall what they've learned, so they are using different methods to answer the questions. One issue I have with the pre-test is the large amount of guessing by most of the students. Is it a true assessment with that much uncertainty between the two tests?
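One way to gauge how much guessing can account for is to compare observed pre-test scores against the chance baseline. A small sketch, assuming four options per question (the actual option counts on the survey described above may differ):

```python
# The number correct under pure guessing on an n-question multiple-choice
# test with k options per question follows a binomial distribution.
from math import sqrt

def guessing_baseline(n_questions: int, n_options: int):
    p = 1 / n_options
    mean = n_questions * p                # expected number correct by chance
    sd = sqrt(n_questions * p * (1 - p))  # spread due to chance alone
    return mean, sd

# For a 33-question test with 4 options per question (illustrative numbers)
mean, sd = guessing_baseline(33, 4)
print(round(mean, 2), round(sd, 2))  # → 8.25 2.49
```

If the class's pre-test mean sits within a standard deviation or two of this baseline, the pre-test is mostly measuring luck, and small drops from pre to post can fall inside the noise.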
edittextuser=1708 post_id=2940 initial_post_id=0 thread_id=984
Cindy, that's a very similar structure to our department, though with fewer students (6-10 seniors any given year). I like the idea of giving review quizzes on the fundamentals. I could see that really helping me tune my lessons into the right level - because I totally overshot my seniors this year right from the start and I've been trying to rebuild their trust in me ever since the first lab assignment...
edittextuser=2142 post_id=2941 initial_post_id=0 thread_id=984
This post was edited by Arafat Akinlabi in June 2013
Did you look at the paired scores for individual students? David McConnell just presented a reanalysis of some of Julie's data and looking at the pairs gave substantially different results than looking at the class averages. You can find his paper and presentation here:
David would be a great person to discuss your results with. He's at NC State.
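The difference between the two analyses is easy to see in a toy example: class averages mix in students who took only one of the two tests, while the paired view follows each individual. A sketch with made-up names and scores:

```python
# Hypothetical rosters: one student skips the post-test, another the pre-test.
pre  = {"ann": 40, "ben": 55, "cam": 30, "dee": 70}
post = {"ann": 60, "ben": 50, "cam": 55, "eli": 35}

# Aggregate view: class means over whoever took each test
mean_pre = sum(pre.values()) / len(pre)
mean_post = sum(post.values()) / len(post)
print("class means:", mean_pre, "->", mean_post)  # → 48.75 -> 50.0 (nearly flat)

# Paired view: only students who took both tests
both = pre.keys() & post.keys()
diffs = sorted(post[s] - pre[s] for s in both)
print("paired diffs:", diffs)  # → [-5, 20, 25] (most students gained a lot)
```

Here the class means barely move, while the paired differences show substantial gains for most students; roster turnover between tests can mask or even reverse the apparent trend.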
edittextuser=3 post_id=2942 initial_post_id=0 thread_id=984
Cindy, I should have looked at your profile sooner - I'm very familiar with UNC (I just graduated from CSU, and I know Mike Taber fairly well, and I know he used to teach there before he left for the Springs a few years ago...)
In any case, I do like your suggestions, and I'd like to think about ways to do more of the large class activities that you mentioned in your profile - I have 50 students in my intro class. I think we'll be retooling that class some over the next few years, so there's an opportunity to try some new things.
edittextuser=2142 post_id=2945 initial_post_id=0 thread_id=984
Mel et al., An interesting enhancement to a multiple-choice pre- and post-instruction test might be to ask students to rate their confidence in the answer they provide to each question. The confidence rating could use a standard 5-point Likert scale. Students who have to guess an answer would presumably rate their confidence as low, whereas if they think they know the answer they'd probably rate their confidence higher. This might provide a way to weight the pre-/post-test differences, reducing the weight given to accidentally correct answers in the pre-test, say. It might also reveal problems with the instruction if students systematically answer a question incorrectly but rate their confidence in that answer as high.
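A minimal sketch of this confidence-weighting idea, with hypothetical responses (each answer is recorded as a correct/incorrect flag plus a 1-5 confidence rating):

```python
# Weight each answer by the student's self-reported confidence, so lucky
# guesses (low confidence, correct) count less, and flag confidently wrong
# answers as likely misconceptions. All data below is made up.
def confidence_weighted_score(answers):
    """answers: list of (is_correct, confidence 1-5). Returns score in [0, 1]."""
    total = sum(conf for _, conf in answers)
    earned = sum(conf for correct, conf in answers if correct)
    return earned / total

def flag_misconceptions(answers, threshold=4):
    """Indices of confidently incorrect answers."""
    return [i for i, (correct, conf) in enumerate(answers)
            if not correct and conf >= threshold]

pretest = [(True, 1), (False, 2), (True, 5), (False, 5)]
print(round(confidence_weighted_score(pretest), 2))  # → 0.46
print(flag_misconceptions(pretest))                  # → [3]
```

In this toy pre-test, the student's raw score is 50%, but the confidence-weighted score is lower because one correct answer was a low-confidence guess, and question 3 is flagged as a confident wrong answer worth a closer look in instruction.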
edittextuser=436 post_id=2946 initial_post_id=0 thread_id=984
Cindy, You note that "What's discouraging, is that we find [in meteorology majors' courses] there is little retention from one semester to the next - despite the fact that all of our courses tend to build on each other."
We see the same thing. In my atmospheric dynamics class, which requires vector (3rd semester) calculus as a prerequisite, I have to reteach the definitions and interpretations of derivatives and integrals, which is why I regard myself as a calculus "reteacher". The same is true to varying degrees about meteorology content. I've concluded in the case of the calculus that the subject matter is relatively abstract and difficult, and that most students simply need multiple exposures to it in different contexts before they begin to understand it well enough to use it as a problem-solving and interpretative tool. It takes more than one course's worth of exposure for them to get it. The same is probably true about at least some meteorology concepts.
That said, I still harbor hope that there are pedagogical strategies that might help students get the concepts the first (or second) time and retain them. I've just got to read more of the literature, be more creative, take more risks, spend more time ....
edittextuser=436 post_id=2947 initial_post_id=0 thread_id=984
" ...You can find his paper and presentation here:..."
Cathy -- thanks so much for that tip! I'll be sure to do that once I come up for air when our workshop is over :>)
edittextuser=444 post_id=2956 initial_post_id=0 thread_id=984