discussion / Camera Traps  / 25 January 2024

Jupyter Notebook: Aquatic Computer Vision

Dive Into Underwater Computer Vision Exploration

 

OceanLabs Seychelles is excited to share a Jupyter notebook tailored for those intrigued by the underexplored intersection of computer vision and marine environments. This tool is a stepping stone for enthusiasts and researchers looking to apply computer vision to stationary underwater cameras. While not yet production-ready, everything you need to deploy and test an "AI On The Edge" detection model in a real-world scenario is here!

 

About the Project:

  • Purpose: The project aims to automate the detection of marine wildlife in underwater footage, reducing the need for manual video analysis and large data uploads.
  • Current Stage: It's important to note that this is a work in progress. The notebook is not ready for production use, but it contains all the necessary elements to test and potentially develop a production-level model.
  • Optimization Efforts: We've begun by normalizing light levels and exploring different color spaces to improve image quality for detection purposes.
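To illustrate the light-normalization idea, here is a minimal NumPy sketch. This is not the notebook's actual preprocessing pipeline; percentile-based per-channel contrast stretching is just one plausible way to compensate for the dim, blue-shifted light typical of underwater footage, and the thresholds here are illustrative assumptions:

```python
import numpy as np

def normalize_light(frame: np.ndarray, low_pct: float = 2.0,
                    high_pct: float = 98.0) -> np.ndarray:
    """Stretch each channel's intensity range based on percentiles.

    Underwater frames tend to be dark and heavily blue-tinted;
    stretching each channel independently brightens the image and
    partially corrects the color cast before detection runs.
    """
    out = np.empty_like(frame, dtype=np.float32)
    for c in range(frame.shape[2]):
        lo, hi = np.percentile(frame[..., c], [low_pct, high_pct])
        scale = 255.0 / max(hi - lo, 1e-6)  # avoid division by zero
        out[..., c] = np.clip((frame[..., c] - lo) * scale, 0, 255)
    return out.astype(np.uint8)

# Synthetic example: a dim frame with a weak blue gradient
frame = np.zeros((64, 64, 3), dtype=np.uint8)
frame[..., 2] = np.linspace(40, 90, 64, dtype=np.uint8)
normalized = normalize_light(frame)
```

The same function works per-frame on a live stream; exploring other color spaces (e.g. converting to HSV or LAB before stretching only the lightness channel) is a natural next experiment.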

Why Engage With This Notebook?

  • Live Stream Integration: Learn how to access live underwater streams for immediate application of your models, avoiding the logistical challenges of underwater hardware deployment.
  • Focus on Relevant Data: We provide strategies to process only footage containing wildlife, ensuring that model training and testing are both efficient and relevant.
  • Contribute to Improvement: As we refine our approach to image optimization, we welcome insights and contributions from those experienced in image classification and computer vision.
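As a sketch of the "only process footage containing wildlife" strategy, simple frame differencing is a cheap first-pass filter: compare consecutive frames and keep only those where enough pixels changed. The notebook's actual filtering may be more sophisticated; the thresholds below are illustrative assumptions, and in practice the frames would come from a stream reader such as OpenCV's `VideoCapture`:

```python
import numpy as np

def has_motion(prev: np.ndarray, curr: np.ndarray,
               diff_thresh: int = 25, area_frac: float = 0.01) -> bool:
    """Flag a frame as 'interesting' when enough pixels changed since
    the previous frame -- a cheap proxy for an animal entering view."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = (diff > diff_thresh).mean()  # fraction of changed pixels
    return bool(changed > area_frac)

# Simulated grayscale stream: static background, then a bright object
background = np.full((120, 160), 30, dtype=np.uint8)
with_fish = background.copy()
with_fish[40:60, 50:90] = 200  # hypothetical animal entering the frame
```

Frames that pass this check would then be handed to the detection model, so the expensive inference (and any data upload) only happens on candidate footage.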

Collaborate and Innovate

This initiative is a collaborative effort at its heart. We're looking for contributions from anyone with a background in image classification or an interest in marine biology and computer vision. Your expertise could help evolve this notebook into a robust tool for underwater research and conservation efforts.

If this project sparks your interest, we encourage you to dive in, experiment with the notebook, and share your findings. Together, we can push the boundaries of what's possible in underwater observation and analysis.

If you are a computer vision or image classification whiz, let me know! I'd love to learn more and find out if we're onto something here!



 

If you find this kind of thing interesting, I made a little YouTube video showing how we captured the first sample data used in this project:



 




This is quite interesting. Would love to see if we could improve this code using custom models and alternative ways of processing the video stream.