ML at the Edge

There's a discussion over in the camera traps group about the design of a device to run autonomously in an inaccessible location, with reliable (solar) power but low-bandwidth, intermittent communication back to base (really cost-constrained, as it's via satellite).

This environment sets an objective of only reporting images that are very likely to be of interest (i.e. a low false alarm rate as the data leaves the camera). But of course, as with all camera traps, there will be low-frequency occurrences of important images that the researchers would love to retain, which implies a high detection rate!

It's clear to me that machine learning will be the go-to method for species identification in the image-processing pipeline, but what about its applicability out at the edge to determine "image of interest"? What might the approach to training be, assuming a fixed camera position (so that false alarm images look very similar to human eye perception, although the lighting conditions may vary enormously), a high "natural" false alarm rate at the camera because of PIR activation, and a low "true" detection rate?
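One way to frame that tension between low false alarm rate and high detection rate is as a threshold-setting problem: whatever scorer runs at the edge (ML or statistical), the operating point can be picked from the empty-scene triggers alone. A minimal sketch with numpy, using made-up score distributions purely for illustration (the names, distributions, and the 1% target are all my assumptions, not anything from the discussion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "interestingness" scores from some edge-side scorer.
# Most PIR triggers are empty-scene false alarms; a few are real animals.
neg_scores = rng.normal(0.2, 0.10, 5000)  # illustrative PIR false triggers
pos_scores = rng.normal(0.7, 0.15, 50)    # illustrative rare images of interest

# Cap the transmitted false alarm rate at ~1% by thresholding at the
# 99th percentile of the negative (empty-scene) score distribution.
threshold = np.percentile(neg_scores, 99)

false_alarm_rate = (neg_scores > threshold).mean()
detection_rate = (pos_scores > threshold).mean()
```

The point is that the threshold needs only abundant negatives (which a fixed camera generates for free), and the resulting detection rate on the rare positives then falls out of how separable the two score distributions are.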

I've been pursuing some ideas based on signal processing and image processing techniques (in effect, looking at the statistical variation between images to note when something strange occurs). Any thoughts or pointers on applying machine learning where you have:

- a fixed image scene, unique to this remote location

- a large proportion of images not of interest

- just a few which must be captured
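For a concrete anchor to the "statistical variation between images" idea, here is one sketch of what such a trigger could look like: an exponential moving-average background of the fixed scene (so slow lighting drift is absorbed), with a frame flagged when its deviation statistic is anomalous relative to its own recent history. This is my illustrative take on the general technique, not the poster's actual code; the class name, parameters, and thresholding rule are all assumptions.

```python
import numpy as np

class ChangeDetector:
    """Illustrative per-site statistical trigger for a fixed scene.

    Keeps an exponential moving-average background and flags frames whose
    mean absolute deviation from it is far outside the recent history of
    that statistic (a simple k-sigma rule)."""

    def __init__(self, alpha=0.05, k=4.0):
        self.alpha = alpha   # background adaptation rate (tracks slow lighting change)
        self.k = k           # sigma multiplier for the trigger threshold
        self.bg = None       # running background image
        self.scores = []     # history of per-frame deviation scores

    def update(self, frame):
        frame = frame.astype(np.float64)
        if self.bg is None:  # first frame seeds the background
            self.bg = frame.copy()
            return False
        score = np.mean(np.abs(frame - self.bg))
        hist = np.array(self.scores) if self.scores else np.array([score])
        # Only trigger once some history exists, to stabilise the statistics.
        flagged = bool(len(self.scores) > 10
                       and score > hist.mean() + self.k * hist.std())
        self.scores.append(score)
        if not flagged:
            # Absorb only "normal" frames, so an animal doesn't get baked
            # into the background model.
            self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return flagged
```

Because everything is per-pixel means and a scalar history, it is cheap enough for edge hardware; a learned classifier could then run only on the flagged frames.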
