Sunday, 4 March 2018

Identification from photographs – an art rather than a science?



An article on testing the skills of Great Crested Newt licence-holders was brought to my attention yesterday (as part of a wider circulation). A synopsis of a more detailed paper was published in ‘Inside Ecology’ (Online Magazine for Ecologists, Conservationists and Wildlife Professionals) under the title ‘Using online images for species identification’, with the follow-up comment: ‘How much reliance can be placed on the identification of a species using an online image? In a newly published paper, researchers document the results of a study looking at accuracy and agreement of newt identification between experts…’

This is an issue that is very close to my interests, even though the study dealt with a different group of organisms. I’m afraid it was extremely disappointing in many respects and rested on a number of assumptions that amount to basic pitfalls of research.
 
The first pitfall was that the researchers appeared to have fallen into the trap of assuming that an online image with a name against it is correctly identified! Having spent literally thousands of hours examining online images of Diptera, I can say with total confidence that even the best sites contain wrongly identified images. What is more, if you send a correction it is very rarely taken up, so the photograph continues to be wrongly titled. Only this last week I found one on the NBN Atlas; I also found one on the Diptera.info site a couple of hours later. Both were assigned not only to the wrong species but to the wrong family!

It also happens in published papers – I’ve seen DNA profiles published for flies that are clearly assigned to the wrong family! Misidentification is rife, and the online world makes it increasingly likely that misidentifications will be perpetuated into other records by the ‘match the photo’ approach that is emerging as the modern norm.

There is a basic rule of thumb – if you want to identify an animal/plant/other item, go back to first principles: can you see the defining features? Start by checking the family, then the genus and then the species. This problem also happens with specimens: I not infrequently find myself scratching my head over a specimen presented to me as ‘I think I am right that this is …’. I then spend five minutes going around the houses before it dawns on me that I’ve taken it for granted that they have got the genus right! So, the moral of this story is that we all do it. BUT, if you are undertaking a detailed investigation into photographic identification, you should at least make sure that all of the subject matter has been vetted by a recognised expert (and not just a GCN licence-holder).

The second pitfall was sample size. I vaguely remember being taught statistics at university. Or, at least, some poor soul had to try to get my defective brain to understand the basics of t-tests etc. The one thing I do remember is that sample size is critical: the smaller the sample, the less reliable the statistics. So, a sample size of 17 seems to me woefully inadequate for any scientifically rigorous analysis. Indeed, a sample of this size is so small that I am amazed the reviewers of the published paper did not raise an issue!
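To put rough numbers on that point (a back-of-the-envelope sketch only – the 70% accuracy figure below is an assumption chosen for illustration, not a value taken from the paper), here is how wide the uncertainty around an estimated accuracy rate becomes with a sample of 17:

```python
import math

def accuracy_ci(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a proportion (normal approximation)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

# Hypothetical accuracy of 70% – chosen purely for illustration.
for n in (17, 50, 200):
    lo, hi = accuracy_ci(0.7, n)
    print(f"n = {n:3d}: 95% CI roughly {lo:.0%} to {hi:.0%}")
```

With only 17 observers, an apparent 70% accuracy is statistically compatible with anything from roughly 48% to 92% – which is precisely why such a small sample struggles to support firm conclusions.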

Then there comes the issue of how the sample group was assembled – it came from a call for volunteers. In other words, it was a self-selecting group and in no way a randomised, stratified sample. That alone casts serious doubt on the results. Interestingly, the participants were asked to assess their abilities against those of their peers – and that was illuminating! (Figure 1) The message coming from this exercise seems to be that ‘pride comes before a fall’. The participant who stands out for me is No 17, who considered their abilities to be ‘worse than’ their peers and yet ranked No 1 for performance on the study species (while sitting in the bottom quartile for overall performance).



Figure 1. Ranking of participant performance in photo ID of newts – after Austen et al. (2018), ‘Species identification by conservation practitioners using online images: accuracy and agreement between experts’. PeerJ 6:e4157; DOI 10.7717/peerj.4157
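The mismatch between self-assessment and performance is easy to quantify if you have both rankings: a rank correlation near zero (or negative) says that confidence tells you little about competence. The sketch below uses entirely invented self-ratings and performance ranks, purely to show the calculation – it is not data from Austen et al.

```python
def spearman_rho(x_ranks, y_ranks):
    """Spearman rank correlation for two rankings with no tied values."""
    n = len(x_ranks)
    d_sq = sum((x - y) ** 2 for x, y in zip(x_ranks, y_ranks))
    return 1 - (6 * d_sq) / (n * (n ** 2 - 1))

# Invented example: self-assessed rank (1 = rated themselves best) versus
# actual performance rank (1 = scored best). Not taken from the paper.
self_rank   = [1, 2, 3, 4, 5, 6, 7, 8]
actual_rank = [5, 1, 7, 2, 8, 3, 6, 4]

print(f"Spearman rho = {spearman_rho(self_rank, actual_rank):.2f}")  # ~0.14
```

A value close to zero, as in this toy example, would mean self-rated ability is a poor guide to actual ranking; participant No 17 is a single, vivid instance of the same point.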


The case of participant No 17 highlights the importance of being aware of one’s potential failings. Who says we are an ‘expert’? If we call ourselves experts, then by what reliable marker have we arrived at this conclusion? And should we call ourselves an ‘expert’ at all? I prefer the term specialist, because ‘expertise’ suggests a level of infallibility. Not so – everybody makes mistakes, no matter how experienced they are! However, if there is one aspect that ought to set the ‘expert’ apart from the rest of the field, it is their ability to recognise their own fallibilities and not to take an identification beyond what can realistically be done in the medium concerned. If the characters are not clear, then leave the diagnosis at generic level.

Interestingly, the GCN study viewed uncertainty as a failing. I think it is anything but – it recognises the limitations of identifying an animal from an awkward angle and without the ability to rotate it and check critical features. Their analysis implies that it ought to be possible to identify everything from a photograph, but that is patently untrue. Not all photographs are top quality, pin sharp and high resolution. Equally, a photograph from one angle is often insufficient to make a firm diagnosis (perhaps even with newts – I don’t know enough about them).

Over-confidence is a common route to misidentification. It brings to mind the occasional problems with participants on social media who don’t like it when a specialist will not take a diagnosis beyond generic level. Having been called ‘timid’ by one such over-confident participant, I have learned to be wary of accepting records from people who show that sort of certainty. The tabulation of the GCN licence-holders reminds us all of our own fallibilities.

Additionally, the study excluded contextual information. Now, I can understand why they might want to exclude such information, but that overlooks a critical point about identification from photographs: it may well be that the contextual information gives the game away! And then, does an ability to identify from photographs tell you a great deal about a licence-holder’s ability in the field? It may tell you a bit, but from personal experience I don’t think field skills and photo ID skills are fully inter-related.

When I started working from photographs I’m sure I made all sorts of howlers, despite having 20+ years’ experience at the time (I’ve now been doing online photo ID for about 12 years). Having a bit of context is often essential – date and basic geography help to eliminate or embrace particular species. One of the reasons I am interested in photo ID is that, as I assemble a bigger database, it becomes clear how many records submitted to the HRS might be dodgy on phenology alone. I wonder, do newts found on land look different to those in water? I’ll bet they do, because they won’t be in full breeding regalia! So, date context may be important in separating species.
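Phenology checks of this sort are exactly what a database makes easy. As a rough sketch (the species names and seasons below are placeholders, not real HRS or newt data), one might flag any record whose date falls outside the expected season:

```python
from datetime import date

# Placeholder seasons (month numbers), purely illustrative –
# not taken from the HRS or any published phenology.
EXPECTED_SEASON = {
    "Species A": (4, 9),   # April to September
    "Species B": (6, 8),   # June to August
}

def plausible_on_date(species, when):
    """Return True if a record's date falls within the species' expected season."""
    season = EXPECTED_SEASON.get(species)
    if season is None:
        return True  # no phenology data, so it cannot be flagged either way
    start_month, end_month = season
    return start_month <= when.month <= end_month

# A January record of 'Species B' would be flagged for a closer look.
print(plausible_on_date("Species B", date(2018, 1, 15)))  # False – query the record
print(plausible_on_date("Species A", date(2018, 6, 2)))   # True
```

A flagged record is not necessarily wrong, of course – it is simply one that deserves a second look before it goes into the dataset.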

All told, I was underwhelmed by this study. It does, nevertheless, raise important questions about the challenges of making reliable identifications from photographs across a wide range of taxa. If there are problems with a small group such as newts, then the issue becomes many times worse when applied to, say, hoverflies or solitary bees, which have multiple generations and seasonal as well as gender-related polymorphism. Inevitably, photo ID becomes an art as much as a science, but it depends very substantially upon good knowledge of comparative anatomy rather than painting by numbers!

