Capturing
The system is flexible, but we recommend reading the Capture Volumes section for more information.
Provided that the performers are fully visible in at least two cameras, we will capture data for them. The AI understands the motion using biomechanical models, which provide constraints on the movements of the body.
Can your system deal with objects like tables or boxes?
It can, with correct camera placement and enough cameras to prevent occlusion.
They can, as long as the performer is fully framed where they are being captured.
Not at this point in time. We capture bipedal humanoid subjects.
Yes, they do.
A camera being bumped slightly or occluded briefly is okay. However, if a camera is knocked over then we recommend recalibrating, or processing without that camera.
Nope, anyone can be the calibration actor; based on this, the system will work out the dimensions of the other actors accordingly.
If the actor is not visible for a period, we recommend they do a T-pose upon re-entry, to ensure the system recognises them.
We can currently capture a football or other round object, and we'll be adding support for more objects over time.
You can't ignore people in the scene, so you'd have to capture everyone and discard the tracks you don't want after processing.
We can indeed! For best finger capture, we recommend cameras being close to the hands.
Our finger tracking is around 70% quality compared to our body tracking of 90%.
Check out our Finger Tracking Best Practice guide.
We actually need the body to make sure the orientation of the hands is correct, so we advise capturing both at the same time.
We can support a variety of lighting conditions, such as indoor office lighting, natural outdoor lighting, regular studio lights and on-set lighting.
We recommend good contrast between the performers and the background. If the human eye cannot make out the body and its limbs, the platform won't be able to either. If you're unsure or need advice, please get in touch with a move.ai team member or reach out to support@move.ai.
We've done some testing, and it turns out that the iPhone 8 meets our minimum requirements. This means it can capture footage for our system to process to the same standard as the latest iPhone! However, it is important to be aware of the differences between iPhones: although the iPhone 8 is compatible, it doesn't have the Ultra Wide lens that the iPhone 13 Pro has, so its field of view is smaller. As a result, the iPhone 8 may need to be placed slightly further back from the capture volume than the iPhone 13 Pro. Nevertheless, it's quite impressive to be able to MoCap on an iPhone dating back to 2017!
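As a rough geometric illustration of why a narrower field of view means standing further back (a sketch only; the FOV values below are approximate assumptions, not official specifications):

```python
import math

def min_distance(volume_width_m: float, fov_deg: float) -> float:
    """Distance at which a camera with the given horizontal FOV just frames
    a capture volume of the given width (simple pinhole geometry)."""
    return (volume_width_m / 2) / math.tan(math.radians(fov_deg) / 2)

# Assumed, approximate horizontal FOVs for illustration:
# iPhone 13 Pro Ultra Wide ~ 120 degrees; iPhone 8 Wide ~ 65 degrees.
for name, fov in [("iPhone 13 Pro (Ultra Wide)", 120), ("iPhone 8 (Wide)", 65)]:
    d = min_distance(6.0, fov)
    print(f"{name}: stand back at least {d:.1f} m to frame a 6 m wide volume")
```

With these assumed numbers, the iPhone 8 would need to stand roughly two and a half times further back than the iPhone 13 Pro to frame the same volume.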
We actually do things differently. Our system uses the human body itself to calibrate and to capture motion. Therefore the movement we can extract is authentic human movement, as opposed to tracking a number of markers placed over a body and propagating that data back together.
We position our quality in terms of the clean-up time required; of course, all MoCap data needs some clean-up. Our system can reduce data clean-up time by up to 80%. Clean-up for finger tracking can take longer: the fingers are harder to see (the pixel density is lower) when the performer is further away from the camera, which can cause more noise on the hands.
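The pixel-density point can be made concrete with a standard pinhole-camera approximation (a sketch only; the hand size, focal length, and sensor values below are assumed illustrative numbers, not platform specifics):

```python
def pixels_on_hand(hand_size_m: float, distance_m: float,
                   focal_length_mm: float, sensor_width_mm: float,
                   image_width_px: float) -> float:
    """Approximate number of image pixels a hand spans, pinhole model:
    image size scales with focal length and inversely with distance."""
    focal_px = focal_length_mm / sensor_width_mm * image_width_px
    return hand_size_m * focal_px / distance_m

# Assumed values: 18 cm hand, 26 mm focal length, 7 mm sensor, 1920 px frame.
near = pixels_on_hand(0.18, 2.0, 26.0, 7.0, 1920)
far = pixels_on_hand(0.18, 6.0, 26.0, 7.0, 1920)
print(f"Hand at 2 m: ~{near:.0f} px; at 6 m: ~{far:.0f} px")
```

Because pixel coverage falls off linearly with distance, a performer three times further from the camera gives the system a third as many pixels on each hand, which is why finger data gets noisier at range.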
We have conducted several side-by-side tests to validate that we meet the same levels of production quality data as Vicon & OptiTrack. Please reach out to support@move.ai for more info.
