Better use of transfer functions?
Transfer functions have had a bit of a hard time of late following Steve Juggins's (2013) convincing demonstration that 1) secondary gradients can influence your model, and 2) down-core variation in a secondary variable can induce a signal in the thing being reconstructed. This was followed up by further comment on diatom-TP reconstructions (Juggins et al., 2013), and, not to be left out, chironomid transfer functions have come in for some heat too, if the last IPS meeting I attended was any indication. In a session at the 2015 Fall Meeting of the AGU, my interest was piqued by Yarrow Axford's talk using chironomid temperature reconstructions, but not for the reasons you might be thinking.
Yarrow's talk covered her work on temperature reconstructions from lakes around Greenland. For reasons she didn't go into, the ice core records aren't the ultimate arbiter of temperature trends in Greenland over the Holocene; other temperature records are needed to better characterise variations in temperature over the last 10,000 years. Which is where the chironomids come in…
For those of you now expecting a rant about the abuse and misuse of transfer functions, well, sorry to disappoint. What interested me about Yarrow's talk was that she addressed upfront the potential for issues with transfer function reconstructions. This acceptance of the problems from people using transfer functions is something new to me1 and it is a welcome development indeed.
Yarrow decided that she would only trust chironomid temperature reconstructions if they met three criteria:
- that the core contains several species sensitive to temperature change, at warm and cold temperatures, and that the record wasn’t dominated by aggregated taxa like Tanytarsus with consequently broad or no temperature sensitivity,
- that, using independent training sets, the temperature reconstructions yield the same general trend, and
- that the potential for change in secondary gradients to be contained in the sediment record was minimal.
Let's be clear: any additional thought that people put into assessing the quality of reconstructions is to be applauded. I'm not convinced by the merit or utility of all of Yarrow's rules, but this sort of thinking is refreshing.
Rule 1 seems odd to me. Perhaps this is because I don’t know much about chironomids? It seems self-evident that reconstructions from assemblages dominated by non-sensitive taxa aren’t to be trusted or might be subject to lots of noise or influence from secondary sources. It is also difficult to operationalise this rule; how high a proportion of the assemblage do we allow for these non-sensitive taxa before we worry about the reconstruction? I suspect this could be informed by some simulations similar to those used in Steve’s Sick Science paper if someone wanted to do it.
Rule 2, for me, is the weakest. Yarrow used a NE North American chironomid-temperature data set as the main training set because of the geographical location of her lake sites, but used a separate training set of Pete Langdon's from Iceland as the independent data set. This Iceland data set used different taxonomic decisions and groupings, the idea being that if similar reconstructions were produced using it, we could have more confidence in the reconstruction. The problem with all this, however, is that reconstructions generated by independent training sets aren't independent, because they obviously use the same core assemblage data.
Transfer functions are largely just fancy filters of assemblage data. To generalise broadly: if the species composition changes, we'll see a change in the reconstructed values, and the magnitude of that change is determined by whether the species changing in abundance are important indicators in the training set, or not, for the variable of interest. This is where the real elephant in the transfer function room lives: no matter how carefully you build your training set, you are always at the mercy of whatever signals your lake recorded in its sediments. But I'm getting ahead of myself.
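To make the "fancy filter" point concrete, here is a minimal sketch of weighted averaging, the simplest transfer function method. All of the taxa, abundances, and temperatures below are invented for illustration; real training sets are far larger and real methods (WA-PLS and friends) more elaborate.

```python
import numpy as np

# Toy training set: rows = lakes, columns = taxa (relative abundances),
# plus the observed temperature at each training-set lake.
# All numbers here are hypothetical.
Y = np.array([
    [0.7, 0.2, 0.1],   # cold lake
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],   # warm lake
])
temp = np.array([4.0, 8.0, 12.0])

# Step 1: estimate each taxon's temperature optimum as the
# abundance-weighted average of the temperatures where it occurs.
optima = (Y * temp[:, None]).sum(axis=0) / Y.sum(axis=0)

# Step 2: reconstruct temperature for a fossil sample as the
# abundance-weighted average of the optima of the taxa present.
fossil = np.array([0.5, 0.3, 0.2])
reconstruction = (fossil * optima).sum() / fossil.sum()

print(optima, reconstruction)
```

The "filter" framing falls straight out of step 2: the reconstruction is just a weighted sum of the core assemblage, so any compositional change moves the reconstruction in proportion to the optima of the taxa doing the changing, whatever actually drove that change.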
As far as all this pertains to Yarrow's Rule 2, we must be careful not to think of these different reconstructions as being independent. We have only one record of compositional change, so we can't generate radically different reconstructions unless, that is, the training sets contain radically different species-environment relationships. I find it hard to believe that any two training sets from comparable environments would embed radically different species-environment relationships; organisms like chironomids just don't seem built that way.
So where does that leave Rule 2? I would say that if the reconstructions produced are qualitatively different (different trends, implications, …), that should set the alarm bells ringing. There’s clearly something in the reconstruction that is sensitive to the sorts of taxonomic aggregations that differentiate the training sets.
But what if the reconstructions are qualitatively similar? I'm far from convinced that this should give any assurance that the reconstruction is any more reliable than before. It could just as easily be that secondary gradients induce trends in the reconstructions in the same way in both training sets.
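The non-independence is easy to demonstrate with a toy simulation. Below, two invented sets of weighted-averaging optima stand in for two training sets with differing taxonomic aggregation, and both are applied to the same randomly generated core; everything here is made up, but the point is structural, not empirical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "core": 50 down-core samples of 4 taxa (random compositions).
core = rng.dirichlet(np.ones(4), size=50)

# Two hypothetical training sets assign each taxon a temperature
# optimum. Training set B lumps taxa 3 and 4 (as a different taxonomy
# might), so both receive one shared optimum.
opt_a = np.array([4.0, 7.0, 10.0, 13.0])
opt_b = np.array([4.5, 6.5, 11.5, 11.5])

# Weighted-averaging reconstructions from the same core assemblages.
recon_a = core @ opt_a / core.sum(axis=1)
recon_b = core @ opt_b / core.sum(axis=1)

# The two reconstructions are strongly correlated -- not because one
# validates the other, but because both are weighted sums of the same
# compositional record.
r = np.corrcoef(recon_a, recon_b)[0, 1]
print(round(r, 2))
```

Agreement between the two series tells us mainly that the training sets filter the core in similar ways; it cannot tell us whether the shared trend is temperature or a secondary gradient both filters pass through.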
Which brings me to Yarrow's Rule 3. Just as we minimise, as best we can, secondary gradients in training sets, it is equally important to minimise the potential for secondary influences in the core record. Yarrow did this by choosing lakes in catchments with no vegetation or soil to speak of; judging from the photo she showed of one of her sites, she nailed this one!
Development of soils and vegetation in catchments has profound effects on the lake ecosystem, especially on the forms and sources of nutrients and other compounds delivered to the lake, with logical consequences for the lake biota. While the initial development of soils and vegetation in the Arctic at the end of the last glacial and the start of the Holocene was clearly temperature driven, if you are interested in temperature variation throughout the Holocene there are lots of things that might affect nutrient inputs from catchments, or modify the in-lake environment, that are not driven by temperature. In those circumstances, if your interest is in neoglacial cooling, the Medieval Warm Period, and so on, interference from these secondary gradients can be a real problem.
What really impressed me about Yarrow’s use of the transfer functions was that clearly a lot of thought had gone into site selection and how to best guard against the inherent problems in the methods. Perhaps I’ve been away from jobbing palaeolimnologists for too long2 but, quibbles about Rules 1 and 2 aside, this is welcome and long overdue attention that we need more of.
References
Juggins, S. (2013). Quantitative reconstructions in palaeolimnology: New paradigm or sick science? Quaternary Science Reviews 64, 20–32. doi:10.1016/j.quascirev.2012.12.014.
Juggins, S., John Anderson, N., Ramstack Hobbs, J. M., and Heathcote, A. J. (2013). Reconstructing epilimnetic total phosphorus using diatoms: Statistical and ecological constraints. Journal of Paleolimnology 49, 373–390. doi:10.1007/s10933-013-9678-x.
Having not been at the recent IPS meeting in Lanzhou, I'm even further removed from the application of transfer functions these days. I was aware that there had been some movement on both sides to identify ways forward for people wanting to implement or create reconstructions, however.↩
I’ve only been away from the ECRC for coming on three years!↩