discussion / AI for Conservation  / 3 February 2021

Tech Tutors: How do I launch machine learning projects using MLOps?

Hi everyone,

This week we are pleased to welcome back Edge Impulse's Daniel Situnayake. Dan will be building on his first tutorial, where he walked us through training our first machine learning model. This time, he will tackle the topic: How do I launch machine learning projects using MLOps?

If you missed the event, you can catch up on the recording below.

This thread is your place to continue the Q&A (or ask your question ahead of time if you can't make it to the episode). Drop any questions for Dan in this thread and we'll make sure he sees them!

And likewise, if you have questions for each other or want to continue discussions from the episode chat, you can also use this thread to connect and collaborate!

WILDLABS Team




Dan's comments right at the end of the presentation, about the need for technologists and conservationists to manage and share (properly annotated) data, struck a chord with me.

I fired off a question into the chat, something like "could your old background data be my background data?", and it got me thinking...

(Firstly, sorry for clouding this issue with my simultaneous "Ian Tuna" joke)

In the context of using AI as described in the Edge Impulse example, let's take audio.

Let's say Group A are trying to detect the roar of lions, and Group B are trying to detect the grunt of wild pigs, and let's imagine both are working in roughly the same area of Africa. They could each send out teams to capture the sound of their target animal, and the sound of NOT their target animal, i.e. everything else.

Well if "serengetti sounds" was a known audio track, then all you need is the unique sound of the beast in question, am I right?