Expertise and Field-data Collection

Basil Tikoff, University of Wisconsin-Madison, and Thomas Shipley, Temple University
published May 2, 2017 10:00am

This blog post is written to serve as a starting point for a full paper on field-data collection strategies, how they are likely to change with the availability of digital databases in the field, and what we need to know to support a modern science that relies on field data. The reader will note that we often explore an idea without fully resolving it. We do this for the dual purpose of whetting your intellectual appetite and keeping editors from being too fussy about whether something has already been "published".

We take as our starting point that the mind is unambiguously the most important tool of a scientist. Because cognitive science studies the mind, a program of research on expert reasoning could potentially improve scientific practice. Such a program has three critical pieces: characterizing the cognitive processes employed in scientific practice, identifying likely errors, and knowing when science has been improved.

This blog post focuses on the first piece, specifically how geologists collect spatial data in the field. Field-based geologists – given five minutes, because we/they don't often think in these terms – will come up with four distinct approaches. Reconnaissance mode is used when you don't know much about an area; it is still applicable in places like Alaska. Mapping mode is survey-, field-camp-, or quadrangle-style mapping; because of its pedagogical value, most geologists are taught this way. Sampling mode is when you are out to get a specific specimen and just enough context (which is often, sadly, none) to understand it. We have a colleague who calls it "body snatching", with vaguely sinister Victorian connotations. The last – and probably the most difficult – could be called problem-solving mode. Most academic geologists work in this mode. In this case, the boundary of the field area is defined by the problem to be solved. There is a lot of variability in how people work in this mode, because both the problems and the people who work on them differ greatly.
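For later reference, here is a minimal sketch – entirely hypothetical, written in Python – of how a digital field system might label which of these four modes a given field session was conducted in. The class and field names are ours, not those of any existing system.

from dataclasses import dataclass
from enum import Enum

class FieldMode(Enum):
    """The four field-data collection approaches described above (hypothetical labels)."""
    RECONNAISSANCE = "reconnaissance"    # little prior knowledge of the area
    MAPPING = "mapping"                  # survey-, field-camp-, or quadrangle-style mapping
    SAMPLING = "sampling"                # targeted specimen collection ("body snatching")
    PROBLEM_SOLVING = "problem_solving"  # field area defined by the problem to be solved

@dataclass
class FieldSession:
    """One day (or stop) of fieldwork, tagged with the mode in which it was conducted."""
    area: str
    mode: FieldMode
    notes: str = ""

# Example: a reconnaissance traverse in a poorly known area
session = FieldSession(area="unmapped area in Alaska", mode=FieldMode.RECONNAISSANCE)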

Within problem-solving mode, there are discernible approaches. One expert might be described as driven to collect as many observations as possible in a field day. This approach sacrifices in-depth querying of each stop for a higher density of observations over a larger area. Another expert might be laser-focused on a specific problem, going to a specific place and collecting the best data available for their specific goal. Yet another expert might vacillate between the two modes, as one observation triggers a revision of the plan to explore a new line of inquiry. In our experience, these are the people who are naturally curious and/or intellectually distractible. They also have a tendency to fritter (mea culpa! says a co-author).

Here we suggest that these approaches can be represented along a continuum from bottom-up, data-driven data collection to top-down, theory-driven data collection. Either mapping or problem-solving mode can lead to bottom-up – or empirical – data collection. In this approach, you really don't know what you are going to find, and you work out a model (or adopt one from the literature) once you have an idea about what the rocks "are saying". Top-down – or theoretically motivated – data collection requires working in problem-solving mode. This approach typically requires that you think you are in a good place to check your conceptual model. Both approaches require empirical data collection and a conceptual model; the difference is which of the two, data or model, has primacy. Mapping mode – relative to problem-solving mode – allows a weaker commitment to prior expectations about what will be found, leading the expert to wander an area, perhaps on a rough grid, to develop a sense of the outcrops and of what observations might be available. For an empiricist, this is heaven. Top-down problem-solving mode, in contrast, is the product of strong expectations about what will be found, so that a few high-quality observations may yield significant new insights (such as those that could discriminate among theories). There is nothing – nothing! – better for a theoretician than making a prediction and seeing it play out in reality. It is a type of high that you can only know by developing a new theory. Until recent advances in computation and the advent of digital resources in the field, this required significant mathematical sophistication, at least in structural geology.

Notice that observation of current practice does not unambiguously identify one approach as better or worse: they have different strengths and weaknesses. Those strengths and weaknesses likely interact with both the skills of the observer and the context of the problem. But a fundamental problem is that we do not have a good metric to measure the value of any type of field-data collection practice. Similarly, when students develop data-collection strategies and begin to learn to coordinate models and data, it would be good to have measures of good and poor practice as their skills develop.

What does this all mean for digital databases? Certainly, it means that digital data systems need the ability to collect empirical data and to assemble/characterize those data. Yet, if a digital data system is really going to help in the process of doing science, it also needs to somehow incorporate conceptual or numerical models. How? That will have to wait for later.
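As a rough illustration of the direction we mean – a hypothetical sketch in Python, with invented names and a made-up structural example, not a design for any actual system – an observation record could carry the empirical measurement itself plus an optional link to the conceptual model (and its prediction) that motivated collecting it. Bottom-up and top-down collection then differ simply in whether that link is filled in.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConceptualModel:
    """A named model and the prediction it makes for this field area (hypothetical schema)."""
    name: str
    prediction: str

@dataclass
class Observation:
    """One empirical field observation, optionally linked to the model that motivated it."""
    location: tuple                    # (latitude, longitude)
    feature: str                       # e.g., "foliation", "contact", "fold hinge"
    strike: float                      # degrees
    dip: float                         # degrees
    motivating_model: Optional[ConceptualModel] = None   # None for bottom-up collection

# Bottom-up (empirical) collection: no model attached yet
obs_empirical = Observation(location=(44.97, -89.63), feature="foliation", strike=40.0, dip=65.0)

# Top-down (theory-driven) collection: the observation tests a stated prediction
transpression = ConceptualModel(name="transpression",
                                prediction="steep foliation in the high-strain zone")
obs_theory = Observation(location=(44.98, -89.61), feature="foliation", strike=42.0, dip=80.0,
                         motivating_model=transpression)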

Stay tuned.


