We've published a case study from @Sol+Milne about his work with Orangutan Nest Watch. As you can read in the full piece, he concludes by sharing an open challenge he's facing in processing some of his data:
With the fieldwork almost complete, I am back in the UK working on the analysis of the data. As I begin this stage, I have run into some issues with image processing and would welcome feedback from the WILDLABS community.
Here is my issue: every photo I have taken with the drone carries central longitude, latitude, and altitude data from the drone’s onboard GPS and barometer. I have used Pix4D and Agisoft PhotoScan to produce contiguous and, importantly, georeferenced ‘orthomosaics’: single, complete images assembled from all of the drone photos.
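For context on what "georeferenced" buys you here: when an orthomosaic does export successfully as a GeoTIFF, every pixel's map coordinate follows from the file's six-parameter affine geotransform. A minimal sketch in pure Python, following the GDAL convention (the `gt` values below are illustrative, not taken from Sol's data):

```python
def pixel_to_map(gt, row, col):
    """Map a pixel index to map coordinates using a GDAL-style
    six-parameter geotransform:
    gt = (origin_x, pixel_width, row_rotation,
          origin_y, col_rotation, pixel_height)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Illustrative example: a UTM-referenced mosaic with 5 cm pixels,
# north-up (so both rotation terms are zero, pixel_height is negative).
gt = (350000.0, 0.05, 0.0, 9500000.0, 0.0, -0.05)
print(pixel_to_map(gt, 0, 0))    # → (350000.0, 9500000.0), the top-left corner
print(pixel_to_map(gt, 10, 20))  # 20 pixels east, 10 pixels south of the origin
```

Libraries such as GDAL and rasterio read this transform from the file and expose the same mapping (e.g. rasterio's `dataset.xy(row, col)`), so in practice you would not hard-code `gt` as above.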
However, I need accurate coordinates for each nest and fig tree I find, and I am not always able to generate the large georeferenced images I need. This is due to errors in the overlap between images, changing light conditions, and variable topography. I have tried to decrypt the drone's flight data to recover its location during surveys, but these files are heavily protected by DJI’s intellectual property safeguards, so we can’t open them and process the information with ExifTool to derive a georeferenced image.
Does anyone have information on how I can process these images in order to get coordinates for each pixel? This is a big hurdle in my studies, and I would be very keen to draw on the collective knowledge of this community of experts to help me solve it.
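One avenue the discussion might explore: where the mosaic fails to build, individual photos can still be roughly georeferenced directly from their centre GPS fix, altitude, and camera geometry. A rough sketch, assuming a north-up nadir image over locally flat terrain; the focal length and sensor width in the example are illustrative, not taken from Sol's drone:

```python
import math

def ground_sampling_distance(altitude_m, focal_length_mm,
                             sensor_width_mm, image_width_px):
    """Metres of ground covered by one pixel, for a nadir photo."""
    return altitude_m * sensor_width_mm / (focal_length_mm * image_width_px)

def pixel_to_latlon(row, col, center_lat, center_lon, altitude_m,
                    focal_length_mm, sensor_width_mm, width_px, height_px):
    """Approximate lat/lon of one pixel in a single nadir photo,
    assuming north-up orientation and a locally flat Earth."""
    gsd = ground_sampling_distance(altitude_m, focal_length_mm,
                                   sensor_width_mm, width_px)
    dx = (col - width_px / 2) * gsd    # metres east of image centre
    dy = (height_px / 2 - row) * gsd   # metres north of image centre
    lat = center_lat + dy / 111_320.0  # ~metres per degree of latitude
    lon = center_lon + dx / (111_320.0 * math.cos(math.radians(center_lat)))
    return lat, lon

# Illustrative values only: 100 m flight height, 8.8 mm lens,
# 13.2 mm sensor, 5472x3648 px image centred near Sabah.
lat, lon = pixel_to_latlon(1824, 2736, 1.5, 117.0,
                           100.0, 8.8, 13.2, 5472, 3648)
```

This ignores camera tilt, yaw, lens distortion, and terrain relief, so it is a first approximation rather than a substitute for a proper orthomosaic, but it may be accurate enough to relocate a nest or fig tree on the ground.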
Have you faced this problem in your work? If so, Sol would appreciate hearing how you handled this challenge. Join the discussion below.
13 November 2018 12:09pm
I just thought I would mention that I've been talking to Sol about his issue and also about multispectral and thermal sensors. No solution as yet, but Sol will share some of his data with us so we can take a look.