Hi Michelle,
I had a group of undergrads help me with a 40,000-image dataset a few years back. We used the TEAM Network's Wild.ID program, so each tagged photo recorded who tagged it, which was helpful for checking quality later on. For our common, unmistakable species (e.g. whitetail deer), I didn't require a second identification, but for more challenging groups (foxes, mustelids) I would often have a second person review the ID, or do it myself. Later on, I had a student go through all the tagged images of a particular species (gray squirrel, etc.) and verify the first ID. Some of the undergrads were very reliable at IDing species, while others needed their work checked more carefully. I later considered building a training set of, say, 100 photos for each student to run through, both to gauge their familiarity with the species and to see how they handle the trickier scenarios that come up so often in camtrap datasets.
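In case it's useful, here's a rough sketch of how you could score a student's practice run against a verified answer key. It assumes both sets of IDs are exported as CSVs with hypothetical "image" and "species" columns, so adjust the column names to whatever your Wild.ID export actually looks like:

```python
import csv
from collections import defaultdict

def load_ids(path):
    """Load image -> species from a CSV with 'image' and 'species' columns (hypothetical layout)."""
    with open(path, newline="") as f:
        return {row["image"]: row["species"].strip().lower() for row in csv.DictReader(f)}

def score_student(reference_csv, student_csv):
    """Compare a student's practice-set IDs against the verified answer key, per species."""
    reference = load_ids(reference_csv)
    student = load_ids(student_csv)
    per_species = defaultdict(lambda: [0, 0])  # species -> [correct, total]
    for image, true_species in reference.items():
        guess = student.get(image, "missing")
        per_species[true_species][1] += 1
        if guess == true_species:
            per_species[true_species][0] += 1
    for species, (correct, total) in sorted(per_species.items()):
        print(f"{species:20s} {correct}/{total} correct")

# Example (hypothetical file names):
# score_student("answer_key.csv", "student_practice.csv")
```

A per-species breakdown like this also tells you which groups a given student can tag solo and which ones should always get a second look.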
Most folks could only handle 1-1.5 hours of continuous tagging. I had a few enthusiasts who would go for 2 hours straight, but that was rare. We logged effort in a shared Google spreadsheet, where the students noted the dataset they worked on, any issues that came up, and any individual images that needed a second check.
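If you keep a log like that, it's easy to roll up at the end of the semester by exporting the sheet to CSV. A minimal sketch, assuming hypothetical "student", "hours", and "flagged_images" columns (flagged image names separated by semicolons):

```python
import csv
from collections import defaultdict

def summarize_effort(log_csv):
    """Total tagging hours and collect flagged images per student from the exported effort log."""
    hours = defaultdict(float)
    flagged = defaultdict(list)
    with open(log_csv, newline="") as f:
        for row in csv.DictReader(f):
            name = row["student"]
            hours[name] += float(row["hours"] or 0)
            if row.get("flagged_images"):
                flagged[name].extend(row["flagged_images"].split(";"))
    for name in sorted(hours):
        print(f"{name}: {hours[name]:.1f} h, {len(flagged[name])} images flagged for a second look")

# Example (hypothetical file name):
# summarize_effort("tagging_effort_log.csv")
```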
I also tried to set up a more ergonomic workstation for folks (multiple monitors raised to eye level, an ergonomic mouse, etc.). Since the motion is so repetitive, it's easy for folks to develop carpal tunnel syndrome.
If you are dealing with a much larger dataset, you might want to look into more sophisticated AI/automation methods, but for a smaller project this approach was doable. If you have a university connection, you can often recruit folks through student chapters of The Wildlife Society. Students are often eager to gain experience, although many don't stick with it once they find out how unglamorous it is!
Good luck!
-Andy