Google and ZSL set up AI to detect poachers


Teaming up with conservationists and teaching algorithms to identify species is just one way machine learning is evolving. 

Google is working with researchers from the Zoological Society of London to help detect poachers and recognise animals through artificial intelligence. 

Usually, the millions of images captured by heat- and motion-triggered cameras would have to be processed manually, with a person sifting through the files and recording the animals observed. 

But that role is being handed over to Google’s algorithm, which has been specially trained for the task. 

Put simply, machine learning is a practical application of artificial intelligence (AI). 


Google’s algorithm has been taught to recognise one animal from another based on previous examples. Around a million and a half images were used to develop and train this particular model. 

Once this dataset has been processed, the algorithm can encounter new images and recognise the animals featured. 
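The train-then-classify workflow described above can be illustrated with a deliberately simple sketch. A real system like Google's would use a deep neural network trained on roughly 1.5 million images; the nearest-centroid classifier below is only a toy stand-in, with made-up two-number "feature vectors" in place of actual images.

```python
# Toy "species classifier": each image is reduced to a small feature
# vector. Train by averaging the labelled examples per species, then
# classify a new image by finding the closest average.

TRAINING = {
    "elephant": [[0.9, 0.2], [0.8, 0.3]],
    "zebra":    [[0.2, 0.9], [0.3, 0.8]],
}

def centroid(vectors):
    # average of a list of equal-length feature vectors
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

CENTROIDS = {label: centroid(vs) for label, vs in TRAINING.items()}

def classify(features):
    # label of the nearest centroid (squared Euclidean distance)
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(CENTROIDS, key=lambda lbl: dist(features, CENTROIDS[lbl]))

print(classify([0.85, 0.25]))  # a new, unseen "image" -> elephant
```

Once the training data has been condensed into centroids, classifying a new image is cheap, which is the same property that makes a trained model so much faster than a human sifting through files.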

“Machine learning has the potential to really speed up our analysis of these images to help species identification,” says Sophie Maxwell, Conservation Technology Lead at ZSL. 

“It also helps us to detect poachers in the field. We can download the algorithms to sit on the cameras themselves, so that they can detect humans in the images in real time and raise alerts of those in protected areas so that we can respond to these threats.” 
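The on-camera alerting Maxwell describes might look something like the loop below. This is purely a hypothetical sketch: the `detect` function stands in for the downloaded ML model (here replaced by a trivial rule on mock frame data), and all names are illustrative, not ZSL's actual software.

```python
# Hypothetical on-camera loop: run a detector on each frame and raise
# an alert whenever a human is spotted in the protected area.

def detect(frame):
    # stand-in for the real detection model; a trivial rule on mock data
    return "human" if frame.get("has_person") else "animal"

def monitor(frames, alerts):
    # append an alert record for every frame containing a human
    for frame in frames:
        if detect(frame) == "human":
            alerts.append({"time": frame["time"], "label": "human"})

alerts = []
monitor([{"time": 1, "has_person": False},
         {"time": 2, "has_person": True}], alerts)
print(alerts)  # one alert, for the frame containing a person
```

Running detection on the camera itself, rather than uploading everything for later analysis, is what turns the cameras from a record-keeping tool into a real-time warning system.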

But there is a catch. Machine learning faces a challenge that humans do not: subtle variations can trick even the most sophisticated algorithms into mistaking one image for another. 

These are known as “adversarial examples.” In December a team from MIT fooled Google’s algorithm into thinking that a photo of skiers was a dog. Such a mistake wouldn’t have been made by a human under the same conditions. 
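The mechanism behind such attacks can be shown on a toy model. The sketch below applies the fast gradient sign idea to a simple linear classifier: nudging every pixel by a tiny amount, in the direction that most hurts the score, flips the predicted label even though no pixel moves by more than `eps`. The weights and "image" are invented for illustration; real attacks target deep networks, not linear models.

```python
# Adversarial perturbation on a toy linear classifier.
# score > 0 means "animal"; score <= 0 means "not animal".

def score(w, x):
    # dot product of weights and pixel values
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else -1.0

def perturb(w, x, eps):
    # shift each pixel by at most eps, against the score's gradient
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [1.0, -0.5, 0.25]          # invented model weights
x = [0.3, 0.4, 0.0]            # invented "image": score = +0.1 -> animal
x_adv = perturb(w, x, eps=0.1)

print(score(w, x) > 0)         # True  (classified as animal)
print(score(w, x_adv) > 0)     # False (label flipped by tiny changes)
```

Because each pixel changes by only 0.1, the two images would look essentially identical to a person, which is exactly why a human would not be fooled where the model is.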

But the accuracy level looks set to improve over time, according to Matt McNeil, Head of Google Cloud Customer Engineering. 

“As you start creating more extensive models, which are trained on much larger datasets, they start becoming much more resilient to changes in pixels. Being able to be more accurate, really.” 

“I think there is an aspect which is simply related to the quantity and the depth of training.” 

And both the quantity and depth of such training will need to be much greater if the public are to trust AI in other areas, like driverless cars – it’s no good if your car misinterprets a stop sign as a lollipop. 



Author : Alex Morgan

