The system is pretty flexible, but we recommend reading the Supported Capture Volumes section of Platform Core Features for a full view of all your options.
Provided that the performers are fully visible in at least 2 cameras, we will capture data for them. The AI predicts where they are based on where they have been and where they end up in the volume.
It can, with correct camera placement and enough cameras. We can advise on the best camera layout for your use case.
They can, as long as the performer is fully framed at the angle at which they are placed.
Not at this point in time. We capture bipedal humanoid subjects.
Yes, they do.
A camera being bumped or occluded briefly is OK; the system will compensate. If a camera is knocked over, you'll have to recalibrate, or process without that camera.
Nope, anyone can be the calibration subject.
Nope, only the first person who does the initial calibration - the system will work out the limb proportions of the other capture subjects.
Their tracking will be affected for those frames. If they re-enter the volume, it'll take a few seconds to track them again and they will be assigned a new actorID.
We can currently capture a football, and we'll be adding more objects into the system over time.
We can use simple coloured markers placed on objects to track their position in a scene - give firstname.lastname@example.org a shout to find out more.
You can't ignore people in the scene, so you'd have to capture everyone and discard the tracks you don't want after processing.
We can indeed! For best finger capture, we recommend cameras being close to the hands. Let us know your use case and we can advise on positioning.
Our finger tracking is around 70+% quality compared to our body tracking's 90+% - we find that it takes considerably less time to clean up than with other systems.
We actually need the body to make sure the orientation of the hands is correct, so we advise capturing both at the same time.
We can deal with outdoor lighting conditions, indoor office lighting, regular studio lights and on-set lighting.
Where the scenario is darker, some settings can be adjusted on the MoCap cameras to cope with the environment.
What we really need is good contrast between the performers and the background. If the human eye cannot make out the body and its limbs, the platform won't be able to either. If you're unsure or need advice, please get in touch with a move.ai team member or reach out to email@example.com with the subject line 'Move.ai MoCap - Lighting Question' and we'll get some further details from you.