Conservation and ecology have long been underfunded areas of work. The need for vast quantities of data, and for many highly skilled, specially trained scientists and conservationists, has traditionally created problems of both funding and subject specialism. This is especially true when huge banks of data from newly developed technology, such as camera-trap images or audio recordings, need processing and interpreting. Thanks to developing technology, such vast sets of information are now readily available. The question is: how do we process them with the speed and accuracy needed to make them useful and meaningful?
Training A.I. to process large quantities of data is no mean feat. Software is not yet as accurate or sensitive as humans at many conservation research tasks, and the amount of data needed to train an algorithm to recognise images and sounds can present hurdles of its own. Despite this, early adopters in conservation science are enthusiastic.
Wildbook, a not-for-profit organisation, is one such early adopter of neural networks and computer-vision algorithms. Its software learns the unique physical features of different species – signature stripes, spots, ear notches, tail shapes and fin flukes – and uses them to identify individuals and map their behaviour.
The technology has also been adopted by companies and charities, each of which retains ownership of its own data set while contributing it to Wildbook's ever-developing knowledge base.
New information and answers may already sit within charities' records, but the extent of that recorded data is vast, and fresh findings can be lost amongst the old – discoveries can, in effect, become undiscovered. Wildbook's search and assessment abilities remove this danger, making it possible to retrieve such lost information. Patterns of animal behaviour that might otherwise have remained hidden in the archives can be rediscovered.
More recently, information provided by 'citizen scientists' – individuals uploading photographs and video footage of different species from their holiday albums or volunteering trips – has also been put to use. A.I.'s capacity to weave this material into a more complete and cohesive data resource is a game changer. Far more information can be processed, and outlier behaviours can now be fitted into an ever-developing, species-specific and ecosphere-wide understanding.
Wildbook's recent development of an intelligent bot that sweeps YouTube for whale shark footage uploaded by tourists and divers has provided high-quality data on location and movement patterns, based on each animal's unique spot and tail markings. This has given insights into the lives of individual whale sharks and into the species as a whole.
The whale shark data can be drawn from imagery recorded in places researchers cannot go, whether because of funding priorities or practical constraints such as time. Outlier members of a species can therefore still be spotted and their data fed into researchers' wider understanding. From this breadth of information, a charity can adjust its focus or direct more funding into the geographical and subject-specific areas where it is most needed.
The processing of different forms of data – images, audio, swim patterns and so on – means that species-specific behavioural understanding can take a leap forward, as can understanding of how the natural world interacts with technology and human infrastructure. This intersection can then be monitored and assessed, and, in turn, humanity's effect on the natural world can be seen.
With this wealth of information comes the opportunity to present new knowledge that can inform and direct social and political decision-making, large and small. Specifically, it can inform the development of a partnership approach: a way of adapting humanity's progress so that it has the least negative impact on the varied ecospheres in which we all live.
A recent article in National Geographic highlighted that “the potential for computer vision to help the planet encompasses everything from analysing aerial imagery in arctic and savannah landscapes to monitor large animals, to tracking forest recovery and loss from satellite imagery, to even monitoring plastic pollution using A.I. drones. Computer vision can offer a lot not just for wildlife conservation applications but also sustainability more broadly”.
A.I. is still not, however, a perfect solution for every conservation role. The time required, the data demands and the initial error rates can make it a non-starter for many species and tasks. When working with a very limited data set from an endangered species, it is not necessarily the go-to method, and, make no mistake, A.I. will not replace the very real need for human data analysis. It is currently invaluable for searching and identifying, but has yet to make leaps of intuitive understanding: to piece together why certain behaviours are occurring, or how to manage or minimise the destructive or harmful ones.
There is real potential for technology to mitigate the impact of human behaviour on the natural world, and in conservation, as in other scientific and social spheres, A.I. is an area of fast-developing interest and specialism. Dan Morris, principal researcher on Microsoft's A.I. for Earth programme, told National Geographic: "fundamentally it's really about leveraging A.I. to save the planet."
For further information, citizen scientists are encouraged to follow progress in A.I. at go.nature.com/2sbjasb, which offers up-to-date resources and developments in this research field.