
Camera Traps / Feed

Looking for a place to discuss camera trap troubleshooting, compare models, collaborate with members working with other technologies like machine learning and bioacoustics, or share and exchange data from your camera trap research? Get involved in our Camera Traps group! All are welcome whether you are new to camera trapping, have expertise from the field to share, or are curious about how your skill sets can help those working with camera traps. 

discussion

Insect camera traps for phototactic insects and diurnal pollinating insects

Hello, we developed an automated camera trap for phototactic insects a few years ago and are planning on further developing our system to also assess diurnal pollinating...


Hi @abra_ash , @MaximilianPink, @Sarita , @Lars_Holst_Hansen

I'm looking to train a very compact (TinyML) model for flying pollinator detection on a static background. I hope a network small enough for microcontroller hardware will prove useful for measuring plant-pollinator interactions in the field. 

Presently, I'm gathering a dataset for training using a basic motion-triggered video-capture program on a Raspberry Pi. This forms a very crude insect camera trap. 
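For anyone curious, here's a minimal sketch of the kind of frame-differencing trigger such a program might use (the thresholds, function names, and lazy OpenCV import are illustrative assumptions, not Ross's actual code):

```python
import numpy as np

def motion_pixels(prev_gray, gray, diff_thresh=25):
    """Count pixels whose brightness changed by more than diff_thresh
    between two grayscale frames (2-D uint8 arrays)."""
    diff = np.abs(prev_gray.astype(np.int16) - gray.astype(np.int16))
    return int(np.count_nonzero(diff > diff_thresh))

def record_on_motion(camera_index=0, trigger_pixels=5000):
    """Poll the camera and flag motion when enough pixels change.
    OpenCV is imported lazily so the trigger logic above stays dependency-free."""
    import cv2  # assumption: OpenCV is installed on the Pi
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if motion_pixels(prev, gray) > trigger_pixels:
            print("motion detected - start video capture here")
        prev = gray
```

The `trigger_pixels` threshold would need tuning per deployment, since small pollinators change far fewer pixels than the large animals most trail-cam triggers expect.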

I'm wondering if anyone has any insights on how I might attract pollinators into my camera's field of view? I've done some very elementary reading on bee optical vision and am currently trying the following: 

Purple and yellow artificial flowers are placed on a green background; the centres of the flowers are lightly painted with a UV (365 nm) coat. 

A sugar paste is added to each flower. 

The system is deployed in an inner-city garden (outside my flat), and I regularly see bees attending the flowers nearby. 

Here's a picture of the field of view: 

Does anyone have ideas for how I might maximise insect attraction? I'm particularly interested in what @abra_ash and @tom_august might have to say - are optical methods enough, or do we need to add pheromone lures?

Thanks in advance!

Best, 

Ross

 

Hi Ross, 

Where exactly did you put the UV paint? Was it on the petals or the actual middle of the flowers? 

I would recommend switching from sugar paste to sugar water, and maybe putting a little hole in the centre to act as a nectary. Adding scent would make the flowers more attractive, but attracting bees is difficult since they very obviously prefer real flowers to artificial ones. I would recommend getting the essential oil linalool, since it is a component of scented nectar, and adding a small amount of it to the sugar water. Please let us know if the changes make any difference!

Kind Regards, 

Abra

 

discussion

Project Spotlight: Monitoring tropical freshwater fish in Kakadu National Park with drones, underwater cameras and AI

This was such a fantastic presentation in our June Variety Hour show. Andrew and his team are exploring applications of a whole range of technologies, and are looking to share...


During Andrew's talk, @dmorris put out a call in the chat that might be relevant to folks catching up on the video, so I'll drop it here too: 

Re: Andrew's fish work... part of the reason I got in touch with Andrew a few weeks ago is that I'm trying to keep track of public datasets and public models for marine video that have basically this gestalt (video where fish look fishy-ish). I think we're getting close to enough public data to train a general-purpose model that will work well across ecosystems. My running list of datasets is here:

https://lila.science/otherdatasets#images-marine-fish

Let me know if folks know of others!

There are also a grand total of two public models that I'm aware of that sort of fall into this category... one is Andrew's:

https://github.com/ajansenn/KakaduFishAI

The other is:

https://github.com/warplab/megafishdetector

If folks know of other publicly-available models, let me know about those too!

discussion

What does "EK" stand for (as in "100EK113")?

This might be the least important post ever made to this group, but, here we go...Most folks here are likely familiar with the overflow folders that most camera traps make to...


Most trailcam manufacturers use OEM camera designs from factories that cater to many different applications and companies. Most likely, Bushnell didn't design the camera circuitry or write the software, so what you're seeing is the vanilla software from the reference design created by the image-processor manufacturer used by the camera. My guess is that EK stands for "Evaluation Kit" and the design firm that designed the circuitry for the Bushnell trailcam didn't change the software's file naming. 

I like your hypothesis.  I'm going with that until someone demonstrates otherwise.  It's less fun than finding out that Elsa Kramer's birthday was January 13th (or November 3rd?), but it seems plausible.  Reconyx went the extra 0.00001 miles, I guess, to change one string in the code.

article

New Add-ons for Mbaza AI

At Appsilon, we are always working to enable our users to get the most out of our solutions. With this in mind, we are happy to introduce two new add-ons to Mbaza AI. 

discussion

Camera Trap Data Normalization Help

Hi all! I am working on a project studying pinniped habitat use in the Eastern US. We set up camera traps across the haul-out areas, set to take 4 images per hour. If a seal...


Hi Michelle,

It would be good to get some more information on your project. What question are you trying to answer? How many cameras did you have deployed? How many different study sites/haul-out areas are there?

 

Absolutely, I am happy to provide it. Our objectives are 1) to improve the understanding of local and seasonal haul-out patterns, and the numbers of seals hauled out during daylight hours; 2) to investigate any haul-out patterns in relation to environmental factors. 

We had 10 different cameras set up at two different survey areas (8 at one location and 2 at another). We are studying the two survey areas separately. At one of the locations there are 8 different survey sites within the area where seals haul-out, at the other location, there are 2 different survey sites within the area. 

So far we have completed 3 survey seasons as well. 

Let me know if there is anything else I can provide!
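One common way to normalize data like this is detections per unit of camera effort, so sites with unequal camera coverage stay comparable. Here's a minimal sketch of that idea (the function name and data layout are assumptions for illustration, not Michelle's actual pipeline):

```python
from collections import defaultdict

def haulout_rate(detections, effort_hours):
    """Normalize seal counts by camera effort.

    detections: iterable of (site, seal_count) pairs, one per image.
    effort_hours: dict mapping site -> hours its camera was active.
    Returns seals detected per camera-hour for each site.
    """
    totals = defaultdict(int)
    for site, count in detections:
        totals[site] += count
    return {site: totals[site] / effort_hours[site] for site in effort_hours}
```

For example, 4 seals seen over 2 camera-hours at one site and 2 seals over 4 camera-hours at another give rates of 2.0 and 0.5 seals per camera-hour, which can be compared directly.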

discussion

Overview: Depth Sensing Technologies for Camera Traps

Hi I am cross-posting a conversation I had with some people from the Global Open Science Hardware group and figured yall were the experts on this stuff: https://forum.openhardware...


@hikinghack @Nycticebus-scientia We just ran an open-source competition on this exact question! DrivenData is a small group of data scientists, and we partnered with the Max Planck Institute for Evolutionary Anthropology (MPI-EVA) and the Wild Chimpanzee Foundation (WCF) to compile enough hand-labeled data to apply machine learning to the depth problem. In your original post, this falls under the AI Prediction Based category. The code behind all of the top-performing models is freely available on GitHub. We're hoping to make these models available in a more user-friendly way in the future.

@JHughes, to your point about motion-sensor camera traps being continuously set off: our free, open-source web application, Zamba Cloud, uses machine learning to automatically sort videos by which animal they contain, or by whether they are blank. You just upload a set of videos, and the web application will output a list of labels for each video! This may not address the battery drainage problem, but it can help with filtering out blank videos and identifying which ones actually captured animals.

(Apologies for joining this thread late.) Has anyone tried using the LiDAR in one of the newer, so-equipped iPhones (e.g. the iPhone 12 Pro) as a trail cam trigger, perhaps in conjunction with a neural network that processes the depth map looking for animal triggers? (I know this is liable to be a power-hungry trigger; for now I'm just looking at a proof of concept.)
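To sketch what a depth-based trigger could look like, here's a purely illustrative snippet: compare each depth frame against an empty-scene baseline and fire when a large-enough blob of pixels is markedly closer to the camera. The function name, metre units, and thresholds are all assumptions, and real iPhone LiDAR access would go through ARKit rather than Python:

```python
import numpy as np

def depth_trigger(baseline, frame, closer_by=0.5, min_pixels=200):
    """Fire when a blob of pixels is markedly closer than the empty-scene baseline.

    baseline, frame: 2-D depth maps in metres.
    closer_by: how much nearer (in metres) a pixel must be to count.
    min_pixels: minimum number of such pixels before we call it an animal.
    """
    closer = (baseline - frame) > closer_by
    return int(np.count_nonzero(closer)) >= min_pixels
```

A depth cue like this is attractive because it ignores wind-blown vegetation at the baseline distance, which plagues ordinary PIR and pixel-difference triggers.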

article

Osa Conservation: A Multi-Tech Toolbox of Solutions

Osa Conservation
In this Conservation Tech Showcase case study from Osa Conservation, you’ll learn about how technology is aiding their long-term efforts to prevent wildlife crime, protect critical species, and build a climate-adaptive...

discussion

Cuddelink Camera Issue?

I just wanted to see if anyone else was having an issue with Cuddelink cameras sometimes emailing you super old images. Like for example, when we went out in March to change the...

Link

Scientists step up hunt for ‘Asian unicorn’, one of world’s rarest animals

The saola, also known as the 'Asian unicorn,' remains one of the world's most elusive and rare animals. Conservation organizations, such as the Saola Working Group, operating in Laos and Vietnam, are conducting extensive searches using camera traps and exploring new methods like training dogs to detect saola signs and developing rapid DNA field test kits in order to save this remarkable creature from extinction.

Link

Camera trap pics of rare species in Vietnam raise conservation hopes

Camera traps installed in Vietnam's Phong Dien Nature Reserve have yielded remarkable images of rare and endangered species, igniting optimism for biodiversity conservation in the region. Among the captivating photographs are sightings of muntjac deer, the crested argus, Annamite striped rabbit, and Owston's palm civet. The camera trap images serve as a testament to the presence of these elusive creatures, fueling hope for their conservation.

discussion

What are these waterfowl?

I took photos and videos of a lovely couple that has a new family at Ducks Lock along the Oxford Canal. Can anyone tell me the name of these waterfowl?

Link

Collar Tracking vs. Camera Traps for Monitoring Mexican Wolves

This article compares and contrasts the success of their wolf monitoring efforts with both collars and camera traps. The camera traps were intended to help researchers identify individual wolves within the population, but they experienced difficulties identifying both uncollared wolves and those with marked collars.

discussion

Camera Trap Data Management Survey

The Wildlife Insights team is running a survey to learn more about camera trap data management practices, and in particular to understand potential barriers to data sharing, as...

discussion

AI Animal Identification Models

Hi Everyone,I've recently joined WILDLABS and I'm getting to know the different groups. I hope I've selected the right ones for this discussion...I am interested in AI to identify...


Thanks Dan, that is very helpful. No zebras here but I did see four deer wandering through the streets this morning. Quite wild at times here!

I am totally willing to try an image classifier if it reports multiple objects it identifies in a scene. I will give this a go.

I think 1 fps would be quite acceptable, and in some respects actually advantageous in reducing how much data gets logged.

I tried upgrading my existing object detector model to YOLOv8 following the links you sent, but I don't think it is possible to upgrade the model on the framework I'm using (ml5js) so I think I will have to try a different framework.

Thank you.

Hi David

It appears that you have been looking for existing models; however, most existing models are trained on either COCO or some other very generic dataset. So, if you want to identify just animals, you may be better off training your own model. It seems no one in this thread has mentioned yet that it is possible to do transfer learning on existing models, which keeps most of the "visual part" of the model as is, but changes just the classification part so it can identify other things. This way you can take an existing model trained on COCO and, in a fraction of the time it takes to train a full model, retrain just that part for your animals.
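As a toy illustration of that idea (not any specific library's API): the sketch below stands in for transfer learning by freezing the "backbone" - here just a fixed random projection playing the role of a pretrained feature extractor - and training only a new classification head on top. The names and the ridge-regression head are assumptions chosen to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained backbone: a fixed random projection.
# In practice this would be e.g. a COCO-trained CNN with its weights frozen.
W_backbone = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor: never updated during retraining."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

def fit_head(x, labels, n_classes):
    """Train only the new classification head (ridge regression to one-hot targets)."""
    f = features(x)
    y = np.eye(n_classes)[labels]
    # Closed-form ridge solution for the head weights
    return np.linalg.solve(f.T @ f + 1e-3 * np.eye(f.shape[1]), f.T @ y)

def predict(x, W_head):
    """Frozen backbone + retrained head: the whole transfer-learned model."""
    return np.argmax(features(x) @ W_head, axis=1)
```

Because only the small head is fit, "training" here is nearly instant - the same reason fine-tuning just the classifier layers of a real detector is so much cheaper than training from scratch.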

Also have a look at your requirements for the inferencing stage. Some models take long to train but are super fast in inference, while others are slow in both cases but very accurate, etc. If you want semi-realtime inferencing, you are probably looking at single-shot detectors (SSDs), not R-CNNs.
