Software Operation
GUI Overview

Feature guide:
Feature | Info
Calibration Status (1) | Green - Extrinsic and intrinsic successfully found. Yellow - Intrinsic found but extrinsic missing. Orange - No extrinsic or intrinsic found.
Sync Status (2) | Green - Successful synchronisation. Orange - Synchronisation error.
3D View (3) | Blue - 3D view active. Grey - 2D view active.
Camera Stream (4) | Blue - Active. Grey - Not active.
3D Data Overlay (5) | Blue - Active. Grey - Not active.
Second Solve Recording Panel (6) | This can be used to record takes for post-processing in order to achieve a higher fidelity solve.
Second Solve Start Recording Button (7) | Used to begin a recording for second solve.
Mocap Mode Tab (9) | Used when you want to capture real-time motion capture.
Intrinsics Mode Tab (10) | Used when you want to capture your intrinsic lens calibration.
Extrinsics Mode Tab (11) | Used when you want to capture your extrinsic camera location calibration.
Actor Tracks List (12) | Shows the actors who are currently being tracked.
Listening List (13) | Shows available IP addresses that can broadcast the real-time mocap data stream.
Streams (14) | Shows IP addresses which are receiving the real-time mocap data stream.
Start Up
- Locate the Move Live Software application by searching for it on the PC.
- Use the table below to determine your next steps:

Do you have a lens calibration (Intrinsic)? | Do you have a camera positioning calibration (Extrinsic)? | Go to
Yes | Yes | Mocap Operation
Yes | No | Extrinsic Calibration
No | No | Intrinsic Calibration
Calibrating Your System
The Move Live system requires two sets of calibration data in order to operate in Mocap mode. The intrinsic calibration informs the system about the camera matrix and distortion coefficients, so that it can correctly interpret and undistort the image. This only needs to be done once, as long as the same camera & lens remain paired together and the configuration of the lens zoom & focus has not changed.
The extrinsic calibration tells the system where the cameras are positioned and how they are oriented, so that the 2D tracking from each image can be combined to triangulate the actor(s) within the volume. This needs to be done every time a camera moves, and can only be captured once the intrinsics have been provided.
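Move Live computes both calibrations for you from the captures described in the following sections, so no code is required. Purely as an illustration of what the intrinsic data represents, the sketch below shows the same kind of chessboard calibration performed with OpenCV; the 9 x 14 intersection count matches the example board used later in this guide, and the frames/ folder is a placeholder for your own captured images.

```python
# Illustration only: Move Live performs its own calibration internally. This OpenCV
# sketch just shows what an intrinsic calibration produces: a camera matrix and
# a set of distortion coefficients.
import glob
import cv2
import numpy as np

PATTERN = (9, 14)  # inner intersections across the chessboard width and height

# 3D reference points for the chessboard intersections on a flat board (z = 0).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("frames/*.png"):  # placeholder folder of captured frames
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# camera_matrix holds the focal lengths and principal point; dist_coeffs holds
# the lens distortion terms used to undistort the image.
rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("reprojection error:", rms)
print("camera matrix:\n", camera_matrix)
print("distortion coefficients:", dist_coeffs.ravel())
```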
Intrinsic Calibration
Watch this video to see how to capture an intrinsic calibration.
- Create a new project, or overwrite the calibration in your existing project.
  - Note: Cameras must be running at 60fps for calibrations; if you've changed this, please revert it now.
- Head to the Intrinsics tab.
- You can either load one of the default intrinsics (selecting the focal length being used) or, for a more precise intrinsic calibration, capture your own.
- To capture your own, enter the following details:
  - See this example chessboard to use. This can be shown on a screen or printed; a rigid, physical chessboard is recommended.
  - Number of intersections across the chessboard width and height. The chessboard above has a width of 9 and a height of 14.
  - Set the detect interval to 100, or increase it so that it doesn't capture too many data points too quickly. If it does not collect enough, decrease this number.
- Hit Activate, then right-click the chosen camera below and click Start Recording.
- Place the chessboard in view of the chosen camera; green data points will begin to appear where the camera detects intersections on the chessboard.
- Move the chessboard around to fill the entire frame of the camera, rotating the board on all three axes - pivoting left/right/up/down and rotating clockwise/counterclockwise.
- Once you have data points distributed across the entire frame, especially at the edges, right-click on the camera ID and click 'Stop Recording'.
  - Suitable distribution shown below.
  - A minimum of 50 frames is required; more than 300 is unnecessary and may result in longer calibration processing times.
- Right-click again and click Calibrate.
- Proceed to do the same for all other cameras.
- Once you have intrinsics for all of your cameras, deactivate the Intrinsic mode and save your project.
- You are now ready to capture your extrinsics.
Extrinsic Calibration
Watch this video to see how to capture the extrinsic calibration.
- If you have an existing project with the correct intrinsics, open that and overwrite the old extrinsics. Alternatively, if you have just captured your intrinsics, remain in the same project to capture your new extrinsics now.
  - Note: Cameras must be running at 60fps for calibrations; if you've changed this, please revert it now.
  - Note: Projects can be found in the Invisible_Projects folder within the Home directory.
- Click on the Extrinsic tab (top right corner).
- If using a human for the calibration, select 'Human' for the detection mode and enter the actor's height in metres (excluding footwear).
  - Stand in the centre of the volume (this will define the location and orientation of the origin), with your hands above your head in a Y-pose. Click 'Activate' and then 'Start Record'.
  - The system will now begin overlaying a point cloud of data points on the camera previews. Slowly walk around the volume, spiralling outwards from the centre, filling the entire space until you have an even distribution of data points around your entire volume and the cameras have gone green (from red).
- If using a digital charuco board for the calibration, select 'Charuco' for the detection mode and enter the respective details.
  - The charuco board must either be shown on screens spanning at least two planes, or printed on a physical rigid board.
  - Place the charuco board in the location you'd like to use as the origin. Click 'Activate' and then 'Start Record'.
  - The system will now begin overlaying a point cloud of data points on the camera previews. Move the charuco board around the screens to collect data on at least two planes, until you have a good distribution across the camera images and the cameras have gone green (from red).
- If the system is not detecting many key points, you may need to adjust your camera framing, as all cameras must see the human in order to detect them.
- Once complete, click Stop Record and then Calibrate. This will flash green/yellow whilst it is processing, and then return to solid green when it has finished.
- Check the calibration outcome and quality in the terminal window. At the bottom there will be the status, successful or failed. If you scroll up, there will be a calibration error value for each camera individually, as well as an average for all cameras.
  - An excellent calibration will be less than 5; a good calibration will be less than 9. Above this, a new calibration is recommended (a small helper for interpreting these values is sketched after this list).
- When this is finished, click File > Save Project.
- When the 3D overlay is enabled, the camera reprojections will be shown on each camera preview. Please note that these will not be correctly positioned until Mocap is activated (at which point the images shown will be undistorted).
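If you log these calibration results or script around them, the quality bands above reduce to a simple threshold check. The thresholds are the ones quoted in this guide; the function name is only illustrative.

```python
def calibration_quality(avg_error: float) -> str:
    # Thresholds from this guide: < 5 excellent, < 9 good, otherwise recalibrate.
    if avg_error < 5:
        return "excellent"
    if avg_error < 9:
        return "good"
    return "recalibrate"

# Example: feed in the average error reported at the bottom of the terminal output.
print(calibration_quality(4.2))   # -> excellent
print(calibration_quality(10.7))  # -> recalibrate
```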
Mocap Operation
Watch this video to see how to operate the Mocap mode.

- Open an existing project, or remain in the project you've just captured your intrinsic & extrinsic calibrations for.
  - Note: Projects can be found in the Invisible_Projects folder within the Home directory.
- Open the Mocap tab on the right-hand side of the viewport.
- Select the number of actors you'd like to track (1-2).
  - Please note that we do not recommend changing this while the mocap mode is activated.
- Choose the Tracking Area (detection method) - this allows you to restrict who will be detected.
  - Camera positions - creates a detection area based on the perimeter of the camera positions.
  - Polygon - lets you create a bespoke-shaped detection area based on the number of sides and the radius.
  - None - allows anyone seen by the cameras to be detected.
- Choose the Initialisation Mode (tracking method) - this determines which of the detected actors will be tracked.
  - Auto - automatically tracks actors once they are detected.
  - Click - the operator can click on any actor's bounding box to track them.
  - Hands - the system will track actors who raise their hands above their shoulders.
  - Don't track - in this mode, the system will not track any actors.
- Hit Activate!
- Once an actor meets the detection and initialisation criteria, the system will begin estimating their bone lengths. You can see this progress on the right-hand side in the Actor Tracks list. To enable this to complete as quickly as possible, the actor should perform dynamic movements, flexing all of their joints.
- When the bone length estimation completes, the system will begin streaming their tracking data to any connected clients.
  - Tip - If you can't see the 3D mesh overlay on the actor, make sure you've enabled the 3D data overlay.
- To remove an actor's track, right-click on the track in the Actor Tracks list and select 'Remove track'.
Second Solve
The Second Solve feature allows users to record video during Mocap operation that can be processed afterwards through our Second Solve engine, to achieve a high-fidelity solve of the data for use in post-production.

- Ensure the frame rate is set to no greater than 60fps.
- Activate Mocap mode and begin tracking your actor(s) (if you wish).
- When your actor(s) are ready, instruct them to hold a T-pose in the centre of the volume.
- Hit Start in the Recordings panel; the elapsed time is shown alongside the number of lost frames.
  - A significant number of lost frames may indicate an issue with your hardware/network.
- When you're finished, stop the recording.
- Make sure Mocap mode is deactivated.
- Click the settings button to set how many actors to process and whether to toggle on ball tracking.
- Hit Start on the respective take to begin processing.
- When the take has finished processing, the status light will be green.
- You can now click on the Video button to view the input videos, the File button to view the output files of the animation, and the Bin button to delete the respective take.
Optimizing Your Mocap Output
Filtering
You can adjust the level of filtering to bias towards lower latency or higher-quality mocap.
- Head to File > Settings to adjust the filter setting for Move Live from 0 (least filtering) to 5 (max filtering).
Please note - Setting the filtering to a higher value will give smoother tracking but will increase latency. Lowering this value will reduce latency but increase noise in the data.
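Move Live's internal filter is not documented here, but the trade-off behaves like any smoothing filter: averaging over more history removes jitter at the cost of responsiveness. The snippet below is only an analogy using an exponential moving average, not the actual Move Live filter.

```python
# Analogy only: heavier smoothing (smaller alpha) = less noise but more lag,
# which is the same latency/quality trade-off the filter setting controls.
def smooth(samples, alpha):
    """Exponential moving average; alpha in (0, 1], smaller = heavier filtering."""
    out, value = [], samples[0]
    for s in samples:
        value = alpha * s + (1 - alpha) * value
        out.append(value)
    return out

noisy_step = [0.0] * 5 + [1.0] * 10     # a joint value that jumps at frame 5

light = smooth(noisy_step, alpha=0.8)   # follows the jump almost immediately
heavy = smooth(noisy_step, alpha=0.2)   # smoother, but lags several frames behind
print([round(v, 2) for v in light])
print([round(v, 2) for v in heavy])
```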
FPS
You can adjust the frame rate to improve the Mocap quality. Higher frame rates reduce motion blur during faster movements.
- In the top left corner of the Camera Info panel, click the square button to open the FPS window.
- Ensure the FPS is 60 for calibrations, but you can increase this to 110 FPS for mocap.
- Set the frame rate to your desired value between 60-110 and click Set to apply the changes.
Please note - Higher frame rates can enable better quality mocap; however, all calibrations must be done at 60fps. If your environment is very bright, the cameras may struggle with auto exposure at high frame rates, so return to a lower fps in these circumstances or lock the exposure in SpinView. Cameras cannot perform above 60fps if you are flipping the images due to upside-down mounting.
Bone length estimation duration - This is currently experiencing a bug which we are investigating.
You can adjust the number of frames used for bone length estimation to speed up tracking initialisation.
- Close Move Live.
- Head to Files > Computer > usr > local > moveai.
- Open a terminal window in this location.
- Enter sudo gedit settings_rt.ini
- Enter the password of the server.
- Edit the line NewTrackNumBoneLengthEstimationFrames to a value that suits you (an optional script for this is sketched below).
  - The default value is 500.
  - Consider that a higher frame rate will capture a greater number of frames in the same amount of time.
- Save.
- Relaunch Move Live.
Please note - Reducing the number of frames used for bone length estimation may impact the quality of bone length estimations, and in turn, mocap quality.
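If you prefer to script the change rather than editing the file by hand, the sketch below rewrites that single line. It assumes the setting is stored as a plain key = value line in /usr/local/moveai/settings_rt.ini (back the file up first and run the script with sudo). As a reference point, 500 frames is roughly 8.3 seconds of capture at 60fps, or about 4.5 seconds at 110fps.

```python
#!/usr/bin/env python3
# Hypothetical helper: set NewTrackNumBoneLengthEstimationFrames in settings_rt.ini.
# ASSUMPTION: the setting is stored as a simple "key = value" line.
# Usage: sudo python3 set_bone_frames.py 300
import re
import sys

SETTINGS = "/usr/local/moveai/settings_rt.ini"
KEY = "NewTrackNumBoneLengthEstimationFrames"

new_value = int(sys.argv[1]) if len(sys.argv) > 1 else 500  # 500 is the documented default

with open(SETTINGS) as f:
    lines = f.readlines()

with open(SETTINGS, "w") as f:
    for line in lines:
        # Replace only the value on the matching key line; keep everything else untouched.
        if re.match(rf"\s*{KEY}\s*=", line):
            line = f"{KEY} = {new_value}\n"
        f.write(line)

print(f"{KEY} set to {new_value}; relaunch Move Live for the change to take effect.")
```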
Track removal speed - This is currently experiencing a bug which we are investigating.
You can adjust the duration the system waits after losing sight of an actor, before it removes the tracking and checks if any other actors meet the tracking criteria. The shorter the time, the quicker it will remove the tracking and look for a new actor.
- Close Move Live.
- Head to Files > Computer > usr > local > moveai.
- Open a terminal window in this location.
- Enter sudo gedit settings_rt.ini
- Enter the password of the server.
- Edit the line AutoRemovalTrackMinIdleTimeThresholdSeconds to a value that suits you.
  - The default value is 4 seconds.
- Save.
- Relaunch Move Live.
Data Visualization
When the mocap mode is activated, you can use the ‘view’ toggles to change the view modes, such as 2D/3D view, camera previews on/off and 3D overlay on/off.
2D View
When in the 2D view, you can see the 3D data overlaid on each camera's preview, or turn off the camera previews to see the 3D overlay solely from the camera's perspective.

3D View
When in the 3D view, you can see the 3D representation of the actor and the cameras within the environment. To navigate, use the WASD keys to translate, and press and hold the left mouse button to rotate.

Integrating with 3D Engines
The data stream from Move Live to 3D engines contains the root location of the body, with respect to the origin defined by the calibration, and the rotation of the joints. As a result, any desired skeleton scaling should be done as part of the retargeting process in the 3D engine.
Streaming data to Unreal Engine
The Software comes with a Live Link plugin for Unreal Engine, so that you can stream the mocap data and map it to your characters in real-time.
- Download the blank Unreal Engine project with the Live Link from here.
- Follow these steps to get started with the project, or to learn how to copy the plugin files into your own project.
- Once installed, simply enter the IP address of the server running Move Live in the plugin to pull in the data stream. Remember to include the :54321 at the end of the IP, as shown in the bottom left corner of the Move Live software.
Note: The origin location of the Move Live System will be defined by the start location of the actor during calibration. This will then need to be aligned with the origin in Unreal.
Simulating the Move Live data stream for testing
Move Live MoCap is streamed out over a gRPC protocol. This can be received by any client, should you wish to set one up. Check out this guide on how to simulate the data stream for testing. Using this streamer, you can develop your client, such as an Unreal Engine project, without being connected to Move Live.
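Whether you point the Unreal Live Link plugin or your own client at the server, the endpoint is the server's IP plus the :54321 port shown in the Move Live UI. Before building a full client against the proto supplied with the guide above, a quick way to confirm the endpoint is reachable is a standard gRPC channel readiness check; the IP below is an example, substitute your own.

```python
# Quick connectivity check against the Move Live gRPC stream endpoint.
# The address is an example - use the server IP shown in Move Live, plus :54321.
import grpc

ADDRESS = "192.168.1.10:54321"

channel = grpc.insecure_channel(ADDRESS)
try:
    # Wait up to 5 seconds for the channel to become ready.
    grpc.channel_ready_future(channel).result(timeout=5)
    print(f"Move Live endpoint {ADDRESS} is reachable")
except grpc.FutureTimeoutError:
    print(f"Could not reach {ADDRESS} - check the IP, port and network")
finally:
    channel.close()
```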
FBX Export - Please note, this feature has been discontinued
You can record the real-time .fbx files for future use. Please note, this is different to the second solve .fbx files.

Watch this video to see how to record and export .fbx files from Move Live.
During mocap, you can click 'Start recording' on each actor track in the Actor Tracks list to begin recording the .fbx of their motion. When you're done, hit Export and you can find the .fbx files within the project directory. This .fbx will be from the real-time data stream; if you'd like a high-fidelity version, use the Second Solve feature.