Overview
What is Move Live?
Move Live is real-time markerless motion capture (mocap) software by Move AI. It frees talent from wearing suits or tracking equipment, making it possible to motion capture anyone, anywhere. Traditionally, motion capture has been the preserve of high-budget films, but Move Live aims to make mocap accessible to a much broader audience, including XR, low- to mid-budget virtual production (VP), and live events.
How does Move Live work?
Deep learning models have been trained to detect key points on the human body in 2D images from multiple cameras and to reconstruct the motion in 3D. This data is then run through a local, real-time neural network that applies biomechanical and kinematic models to ensure a lifelike representation of the actor(s) and their movement. The resulting data can be streamed into a 3D engine to puppeteer a virtual avatar in real time, or used to drive effects such as lifelike shadows for a performer on stage.
System Capabilities
| Capability | Specification |
| --- | --- |
| Max Actors | 2 |
| Max Frame Rate | 110fps |
| Volume Size | 2.5m x 2.5m - 14m x 14m |
| Cameras | 4 - 8 |
| Operators | Solo operation or full autonomy |
| Hardware Setup Time | <1 hour |
| Calibration Time | <1 minute |
| Latency | ≈100ms (dependent on network setup) |
Hardware Requirements
The Move Live product is provided as software only. The hardware requirements necessary to run the software are listed below.
Volume Configurator
Depending on the size of the space you are capturing in, we specify a minimum number of cameras and a lens type to ensure stable tracking throughout the space. The supported capture volumes are categorised into Small, Medium and Large; their dimensions and the minimum/maximum hardware requirements are outlined below.
The camera layout measurements are based on the camera positions themselves, surrounding the space you want to capture within.
| | Small Volume | Medium Volume | Medium Volume | Large Volume | Large Volume |
| --- | --- | --- | --- | --- | --- |
| No. Cameras | 4 cams | 6 cams | 8 cams | 8 cams | 8 cams |
| No. Actors | 1 actor | 2 actors | 2 actors | 2 actors | 2 actors |
| Max size | 6m x 6m | 9m x 9m | 10m x 10m | 12m x 12m | 14m x 14m |
| Min size | 2.5m x 2.5m | 4m x 4m | 4m x 4m | 4m x 4m | 4m x 4m |
| Lens type | 2.8mm | 2.8mm | 2.8mm | 3.5mm | 4mm |
Cameras
Camera Modules
The system currently supports the below FLIR cameras.
Lenses
Every setup differs - The optimal focal length for your setup will be based on both the distance of the camera from the capture volume and the size of the volume. The greater the focal length, the more zoomed in the image will be, which will be more suitable when the cameras are further away from the volume. In general, it is best to position the cameras further back from the volume (without obstruction), so that you can avoid using wide angle lenses, as they often experience large amounts of distortion in the image. If you'd like more information about which lens model we recommend for a specific volume size, check out the Volume Configurator.
The Move Live software comes with three preset intrinsic lens calibrations (2.8mm, 3.5mm & 4mm). For optimal configuration it is recommended that you capture your own intrinsics using the Move Live software, as the intrinsics may vary for each camera & lens pairing.
Recommended lenses:
Please note: The current supported camera requires a CS-mount lens. To use a C-mount lens with these cameras, you will need a CS to C-mount adapter.
Server Component Specifications
Note: The hardware specifications outlined below are our recommendation and supported components. Please contact us if you have any questions in regards to hardware compatibility outside of this list.
| Component | Minimum Required Specification |
| --- | --- |
| CPU | 12th Gen Intel Core i9-12900K, 16 core, 24 thread, 3.2GHz / AMD Ryzen™ Threadripper™ PRO 7965WX, 24 core, 48 thread, 4.1GHz |
| GPU | NVIDIA RTX A6000 |
| PSU | 2000W (CRPS) 100-240VAC 50-60Hz |
| Motherboard | Any compatible with all other components |
| RAM | DDR4 32GB |
| Hard Drive | SSD NVMe 1TB |
| Network Interface Card | 1x 1Gb/sec, 1x 10Gb/sec |
| Operating System | Ubuntu 22.04 (specific version required) |
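Before installing, it can help to confirm the server broadly matches the table above. Below is a minimal sketch; the thresholds in the comments follow the Intel configuration, and the GPU is best checked after the NVIDIA driver is installed.

```shell
# Quick sanity check of a server against the minimum specs above.
echo "CPU threads: $(nproc)"                                            # want >= 24
awk '/MemTotal/ {printf "RAM: %.0f GB\n", $2/1048576}' /proc/meminfo    # want >= 32 GB
df -BG --output=size / | tail -1 | xargs echo "Root disk size:"         # want ~1TB NVMe
```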
Licence Dongle
A USB dongle will be provided with the Software, assigned with a licence configured based on the Software sale terms. The Move Live software requires the dongle to be plugged in at all times in order to operate.
Network Switch
The switch must have:
A minimum number of PoE (power over ethernet) ports to match the number of cameras
A 10Gbit/s uplink (to the server)
Jumbo frames (9000 MTU) capability
We recommend the NETGEAR MS510TXPP.
Cabling
You will need:
1x CAT6a cable per camera to connect each camera to the switch (shielded recommended when run near power cables)
1x CAT6a cable (shielded required) to connect the switch to the server.
Camera Mounts
Suitable mounting methods may differ for each user based on infrastructure and mounting locations. We recommend using:
All cameras will need this attachment to connect it to a mount:
Hardware Installation
Mounting the cameras
The below steps outline the process of setting up and connecting the cameras.
Position the cameras as close to the volume as possible (so the actors occupy as many pixels as possible), but try not to exceed a 45 degree downward angle. Any steeper and the cameras will have a bird's-eye view, which increases occlusion; for example, the shoulders will occlude the waist. Cameras should usually be no higher than 3m.
Mount cameras in locations so that they are stable and evenly spaced above the volume, facing into the capture volume. An ideal setup would have cameras on all sides of the volume, however if this is not possible, provide the greatest variety of camera perspectives that the infrastructure allows.
If you will be using the cameras at 60fps, you can mount the cameras upside down and flip the image in SpinView using the Reverse X and Reverse Y options. However, this is not possible if you'll be running the cameras at higher framerates.
Connect cameras to the switch using CAT6a cables. Ensure the cables are secured such that there is no tension on the cameras. Any movement of the cameras will require a new calibration, so it's best to make sure they won't move over time or if a cable is pulled.
Connect the 10Gbit/s port on the switch to the 10Gbit/s NIC on the server using Cat6a.
Ensure the switch has jumbo frames enabled. The method varies by manufacturer; refer to the manufacturer's guide.
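Once cabled up, the MTU and jumbo-frame path can be sanity checked from the server. A sketch, where IFACE and CAM_IP are placeholders for your camera-facing interface and one camera's address (lo and 127.0.0.1 are stand-ins):

```shell
# IFACE and CAM_IP are placeholders - substitute your camera-facing
# interface (e.g. the 10Gb NIC) and one camera's IP address.
IFACE="${IFACE:-lo}"
CAM_IP="${CAM_IP:-127.0.0.1}"

# Interface MTU - should read 9000 once jumbo frames are configured.
cat "/sys/class/net/$IFACE/mtu"

# Send one jumbo frame (8972-byte payload + 28 bytes of headers) with
# fragmentation disabled. Failure means jumbo frames are not active
# somewhere along the path.
ping -c 1 -M do -s 8972 "$CAM_IP" || echo "Jumbo frames not active on the path to $CAM_IP"
```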
Software Requirements
Ubuntu 22.04
Nvidia drivers
Move Live
Spinnaker SDK & Drivers
WIBU Drivers
Software Installation
The software installation process requires an internet connection, but general operation of the Move Live software does not.
Installation
Install Ubuntu 22.04.5
Download the ISO file from here
Back up any existing data and note down any custom settings you have configured, as the installation process will require you to erase your disk.
Follow the installation process
Install Nvidia drivers
Go to Apps and search for ‘Additional Drivers’
Within the section relating to your installed GPU
Tick the 'NVIDIA driver metapackage from Nvidia driver 535.'
NOT the 'NVIDIA Server driver metapackage from Nvidia driver 535 server'
Click Apply Changes
Reboot the server
Install Move Live
Run the below command in a new terminal.
wget -qO- https://aptrepo.move.ai/install_u22.sh | sudo bash
Reboot the server
Run the below command in a new terminal.
sudo apt install mocap-rt=2.0.3741-3741
Run the below command in a new terminal.
sudo apt install mocap-desktop=1.0.11-18
Reboot the server
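To confirm the pinned versions installed correctly, you can query dpkg (package names taken from the install commands above):

```shell
# List the installed Move Live packages and their versions.
for pkg in mocap-rt mocap-desktop; do
  dpkg-query -W -f='${Package} ${Version}\n' "$pkg" 2>/dev/null \
    || echo "$pkg is not installed"
done
```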
Install Spinnaker SDK and drivers
Check if the SpinView application is already installed, if not follow the below steps
Download Spinnaker 4.0.0.116-amd64-pkg-22.04 from https://flir.netx.net/file/asset/59513/original/attachment
Extract the folder
Open terminal (in same location as file) and run the below command
sudo ./install_spinnaker.sh
Say YES to everything
Install WIBU Licensing dongle drivers
Check if the CodeMeter application is already installed, if not follow the below steps
Download CodeMeter User Runtime for Linux Version 8.10b | 2024-08-06 | multilanguage from https://www.wibu.com/uk/support/user/downloads-user-software.html
Open a terminal in the same location as the downloaded deb file and enter the following command
sudo apt install ./codemeter_8.10.6221.500_amd64.deb
Make sure the provided dongle is plugged in.
Request a licence
Once installed, open CodeMeter Control Centre and this will show you the physical dongle you have installed.
Make sure the Dongle is enabled when selected.
Select License Update and then Next before selecting Create License Request.
Note: If WIBU systems need to update the licence software/firmware, then you would be required to select Import License Update, but this option does NOT update the Move AI mocap software.
Then select Add License of a New Vendor
Note: To extend a license, select 'Extend Existing Licence'
Enter the Move Ai Firm Code, this is 6002284.
Note: This is the Move AI code indefinitely and does not change.
Select the file name and path where you want to save the file. This will then create the request file and save it to the location path you selected.
Example request file - 3-0000000.WibuCmRaC
Attach this to an email and share with the Move team.
Activate your licence
Once you have received the update file from the Move team:
Example update file - 3-0000000.WibuCmRau
Drag and drop the update file into the window and the updates will be applied.
Open the Web Admin page (bottom right).
Scroll down to the Move.ai section and check the expiration date is as expected.
Network Configuration
Open Settings > Network > Select the port connected to the camera switch
In the Identity tab, set the MTU to 9000 (jumbo packets)
In the IPv4 tab:
Disable reverse path filtering using the below process:
Open the network security config file using the below command
sudo gedit /etc/sysctl.d/10-network-security.conf
Enter the password of the server when requested
Comment out the two rp_filter lines (net.ipv4.conf.default.rp_filter and net.ipv4.conf.all.rp_filter) by adding a # at the start of each line
Save and close
Increase the receive buffer size using the below process:
Open the sysctl config file using the below command:
sudo gedit /etc/sysctl.conf
Enter the password of the server when requested
Add the 2 lines below
net.core.rmem_max=10485760
net.core.rmem_default=10485760
Save and close
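The sysctl changes above normally take effect on reboot; they can also be applied and verified immediately:

```shell
# Re-read /etc/sysctl.conf and /etc/sysctl.d/* without rebooting.
sudo sysctl --system >/dev/null 2>&1 || true

# Both should now report 10485760.
sysctl -n net.core.rmem_max
sysctl -n net.core.rmem_default
```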
SpinView
SpinView is the software provided by the camera manufacturer that allows you to manage the camera settings and check the connection on the network.
Check you have a connection to all cameras in SpinView. If you are not able to see any cameras in SpinView, refer to Support & Troubleshooting.
Setting the IP addresses of your cameras
Once you've configured the network you'll use for your camera switch, you can set the persistent IP address of your cameras, so that they don't revert to their default if they're power cycled.
In order to change the settings of your cameras, you need to ensure they are on an accessible IP range. If there is a red error icon next to the serial number, the IP is not on the same range - right click and select ‘Force IP to resolve’ to correct this.
To set a persistent IP address for your cameras, open the settings by double clicking on one. Then, locate the below rows in the features panel underneath and make the following changes (on each camera).
| Setting | Action |
| --- | --- |
| Current IP Configuration Persistent | Tick this box. |
| Persistent IP Address | Enter the desired IP address, on the same range as the network card the camera is plugged into (identified in the blue bar above). The IP must be entered in integer format - use this site to convert it. |
| Persistent Subnet Mask | Enter the respective subnet mask. The subnet must be entered in integer format - use this site to convert it. |
Repeat for all other cameras.
Power cycle your switch (in turn, power cycling the cameras)
Once powered up, check your cameras have remained on the expected IP addresses.
Refer to this document for more info - https://www.flir.co.uk/support-center/iis/machine-vision/knowledge-base/persistent-ip-in-spinnakerqt/
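If you prefer not to use a website for the conversion, the dotted-to-integer conversion can be done in a terminal. A small helper (example addresses only - substitute your own):

```shell
# Convert a dotted IPv4 address or subnet mask to the integer format
# SpinView expects: (a * 2^24) + (b * 2^16) + (c * 2^8) + d.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

ip_to_int 192.168.1.10    # -> 3232235786
ip_to_int 255.255.255.0   # -> 4294967040
```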
Framing up the cameras
Using the below image as a guide, orientate the cameras to give the best possible view of capture volume, with the least unused space in the frame of each camera. To achieve the best MoCap, ensure you can see your actors fully head-to-toe in as many locations as possible. Allow a little extra space around the edges of the frame, as the system will crop the images slightly when undistorting the lenses.
SpinView can be used to adjust camera settings, such as White Balance and Exposure. However, once changed, it is essential to check the cameras are still able to capture at your desired framerate.
Software Operation
GUI Overview
Feature guide:
| Feature | Info |
| --- | --- |
| Calibration Status (1) | Green - Extrinsic and intrinsic successfully found. Yellow - Intrinsic found but extrinsic missing. Orange - No extrinsic or intrinsic found. |
| Sync Status (2) | Green - Successful synchronisation. Orange - Synchronisation error. |
| 3D View (3) | Blue - 3D view active. Grey - 2D view active. |
| Camera Stream (4) | Blue - Active. Grey - Not active. |
| 3D Data Overlay (5) | Blue - Active. Grey - Not active. |
| Second Solve Recording Panel (6) | Used to record takes for post-processing in order to achieve a higher fidelity solve. |
| Second Solve Start Recording Button (7) | Used to begin a recording for second solve. |
| Mocap Mode Tab (9) | Used to capture real-time motion capture. |
| Intrinsics Mode Tab (10) | Used to capture your intrinsic lens calibration. |
| Extrinsics Mode Tab (11) | Used to capture your extrinsic camera location calibration. |
| Actor Tracks List (12) | Shows the actors who are currently being tracked. |
| Listening List (13) | Shows available IP addresses that can broadcast the real-time mocap data stream. |
| Streams (14) | Shows IP addresses which are receiving the real-time mocap data stream. |
Start Up
Locate the Move Live Software application by searching for it on the PC.
Use the below table to determine your next steps:
| Do you have a lens calibration (intrinsic)? | Do you have a camera positioning calibration (extrinsic)? | Go to |
| --- | --- | --- |
| Yes | Yes | |
| Yes | No | |
| No | No | |
Calibrating Your System
The Move Live system requires two sets of calibration data in order to operate the Mocap mode. The intrinsic calibration informs the system about the camera matrix and distortion coefficients, so that it can correctly interpret and un-distort the image. This will only need to be done once, as long as the same camera & lens are paired together in the future and the configuration of the lens zoom & focus has not changed.
The extrinsic calibration tells the system where the cameras are positioned and how they are orientated, in order to combine the 2D tracking from each image and triangulate the actor(s) within the volume. This will need to be done every time a camera moves and can only be captured once the intrinsics have been provided.
Intrinsic Calibration
Watch this video to see how to capture an intrinsic calibration.
Create a new project, or overwrite the calibration in your existing project.
Note: Cameras must be running at 60fps for calibrations, if you've changed this, please revert it now.
Head to the Intrinsics tab.
You can either load one of the default intrinsics (selecting the focal length being used), or for a more precise intrinsic calibration, capture your own.
To capture your own:
Enter the following details:
See this example chessboard to use. This can be shown on a screen, or printed onto a rigid board.
Number of intersections across the chessboard's width and height. The example chessboard above has a width of 9 and a height of 14. If printed, the chessboard must be mounted on a rigid board.
Set the detect interval to 100, or increase it so that it doesn’t capture too many datapoints too quickly. If it does not collect enough, then decrease this number.
Hit Activate then right click the chosen camera below and click Start Recording.
Place the chessboard in view of the chosen camera and green data points will begin to appear where the camera detects intersections on the checkerboard.
Move the chessboard around to fill the entire frame of the camera, rotating the board on all three axes - pivoting left/right/up/down & rotating clockwise/counterclockwise.
Once you have data points distributed across the entire frame, especially at the edges, you can right click on the camera id, and click 'Stop Recording'.
Right click again and click calibrate.
Proceed to do the same for all other cameras.
Once you have your intrinsics for all of your cameras, deactivate the Intrinsic mode and save your project.
You are now ready to capture your extrinsics.
Extrinsic Calibration
Watch this video to see how to capture the extrinsic calibration.
If you have an existing project with the correct intrinsics, open that and you can overwrite the old extrinsics. Alternatively, if you have just captured your intrinsics, remain in the same project to capture your new extrinsics now.
Note: Cameras must be running at 60fps for calibrations, if you've changed this, please revert it now.
Note: Projects can be found in the Invisible_Projects folder within the Home directory.
Click on the Extrinsic tab (top right corner).
If using a human for the calibration, select ‘Human’ for the detection mode and enter the actor’s height in metres (excluding footwear).
Stand in the centre of the volume (this will define the location and orientation of origin), with your hands above your head in a Y-pose. Click ‘Activate’ and then ‘Start Record’.
The system will now begin overlaying a point cloud of data points on the camera previews. Slowly walk around the volume, spiralling outwards from the centre, filling the entire space until you have an even distribution of data points around your entire volume and the camera indicators have turned from red to green.
If using a digital charuco board for the calibration, select ‘Charuco’ for the detection mode and enter the respective details.
The charuco board must either be shown on screens spanning across at least two planes, or printed on a physical rigid board.
Place the charuco board in the location you'd like to use as the origin location. Click ‘Activate’, and then ‘Start Record’
The system will now begin overlaying a point cloud of data points on the camera previews. Move the charuco board around the screens to collect data on at least two planes, until you have a good distribution across the camera images and the camera indicators have turned from red to green.
If the system is not detecting many key points, you may need to adjust your camera framing, as all cameras must see the human in order to detect them.
Once complete, click Stop Record and then Calibrate. This will flash green/yellow whilst it is processing, and then return to solid green when it's finished.
Check the calibration outcome and quality in the terminal window. At the bottom there will be the status, successful or failed. If you scroll up, there will be a calibration error value for each camera individually as well as an average for all cameras.
When this is finished, click File > Save Project.
When the 3D overlay is enabled, the camera reprojections will be shown on each camera preview. Please note that these will not be correctly positioned until Mocap is activated (at which point the images shown will be undistorted).
Mocap Operation
Watch this video to see how to operate the Mocap mode.
Open an existing project or remain in the project you’ve just captured your intrinsic & extrinsic calibrations for.
Note: Projects can be found in the Invisible_Projects folder within the Home directory.
Open the Mocap tab on the right hand side of the viewport.
Select the number of actors you’d like to track (1-2).
Please note that we do not recommend changing this while the mocap mode is activated.
Choose the Tracking Area (detection method) - this allows you to restrict who will be detected.
Camera positions - This will create a detection area based on the perimeter of the camera positions.
Polygon - This allows you to create a bespoke shaped detection area based on the number of sides and the radius.
None - This allows anyone seen by the cameras to be detected.
Choose the Initialisation Mode (tracking method) - this will determine which of the detected actors will be tracked.
Auto - It will automatically track actors once it detects them.
Click - The operator clicks on any actor’s bounding box to track them.
Hands - The system will track actors who raise their hands above their shoulders.
Don’t track - In this mode, the system will not track any actors.
Hit Activate!
Once an actor meets the detection and initialisation criteria, the system will begin estimating their bone lengths. You can see the progress on the right hand side in the Actor Tracks list. To complete this as quickly as possible, the actor should perform dynamic movements, flexing all of their joints.
When the bone length estimation completes, the system will begin streaming their tracking data to any connected clients.
Tip - If you can't see the 3D mesh overlay on the actor, make sure you've enabled the 3D data overlay.
To remove an actor’s track, right click on the track in the Actor Tracks list and select ‘Remove track’.
Second Solve
The Second Solve feature allows users to record video during Mocap operation that can be processed afterwards through our Second Solve engine, to achieve a higher fidelity solve of the data for use in post-production.
Ensure fps is set to no greater than 60fps
Activate Mocap mode and begin tracking your actor(s) (if you wish)
When your actor(s) are ready, instruct them to hold a T-pose in the centre of the volume.
Hit Start in the Recordings panel. The elapsed time is shown, alongside the number of lost frames.
A significant number of lost frames may indicate an issue with your hardware/network.
When you're finished, Stop the recording
Make sure Mocap mode is deactivated
Click the settings button to instruct the system how many actors to process and whether to toggle on ball tracking.
Hit Start on the respective take to begin processing
When the take has finished processing, the status light will be green.
You can now click on the Video button to view the input videos, the File button to view the output files of the animation, and the Bin button to delete the respective take.
Optimizing Your Mocap Output
Filtering
You can adjust the level of filtering to bias for lower latency or higher quality mocap.
Head to File > Settings to toggle the filter settings for Move Live from 0 (least filtering) to 5 (max filtering).
Please note - Having the filtering set to a higher value will ensure smoother tracking but this will increase latency. Lowering this value will lower latency but increase noise in the data.
FPS
You can adjust the frame rate to improve the Mocap quality. Higher frame rates reduce motion blur during faster movements.
In the top left corner of the Camera Info panel, click the square button to open the FPS window
Ensure the FPS is 60 for calibrations, but you can increase this to 110 FPS for mocap.
Set the framerate to your desired value between 60-110 and click Set to apply the changes.
Please note - Higher framerates can enable better quality mocap; however, all calibrations must be done at 60fps. If your environment is very bright, the cameras may struggle with auto exposure at high fps; in these circumstances, return to a lower fps or lock the exposure in SpinView. Cameras cannot run above 60fps if you are flipping the images due to upside-down mounting.
Bone length estimation duration - This is currently experiencing a bug which we are investigating.
You can adjust the number of frames used for bone length estimation to speed up tracking initialisation.
Close Move Live
Navigate to /usr/local/moveai (Files > Computer > usr > local > moveai)
Open a terminal window in this location
Enter sudo gedit settings_rt.ini
Enter the password of the server
Edit line NewTrackNumBoneLengthEstimationFrames to a value that suits you.
The default value is 500
Bear in mind that a higher frame rate will collect more frames in the same amount of time
Save
Relaunch Move Live
Please note - reducing the amount of frames used for bone length estimation may impact the quality of bone length estimations, and in turn, Mocap quality.
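The same edit can also be scripted, for example when switching between presets. The sketch below assumes settings_rt.ini uses simple key=value lines; check the file's format before running it.

```shell
INI=/usr/local/moveai/settings_rt.ini

# Set the bone length estimation window to 300 frames (default is 500).
if [ -f "$INI" ]; then
  sudo sed -i 's/^NewTrackNumBoneLengthEstimationFrames.*/NewTrackNumBoneLengthEstimationFrames=300/' "$INI"
  grep NewTrackNumBoneLengthEstimationFrames "$INI"
else
  echo "$INI not found - is Move Live installed?"
fi
```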
Track removal speed - This is currently experiencing a bug which we are investigating.
You can adjust the duration the system waits after losing sight of an actor, before it removes the tracking and checks if any other actors meet the tracking criteria. The shorter the time, the quicker it will remove the tracking and look for a new actor.
Close Move Live
Navigate to /usr/local/moveai (Files > Computer > usr > local > moveai)
Open a terminal window in this location
Enter sudo gedit settings_rt.ini
Enter the password of the server
Edit line AutoRemovalTrackMinIdleTimeThresholdSeconds to a value that suits you
The default value is 4 seconds
Save
Relaunch Move Live
Data Visualization
When the mocap mode is activated, you can use the ‘view’ toggles to change the view modes, such as 2D/3D view, camera previews on/off and 3D overlay on/off.
2D View
When in the 2D view, you can see the 3D data overlaid on each camera's preview, or turn off the camera previews to see the 3D overlay solely from the camera's perspective.
3D View
When in the 3D view, you can see the 3D representation of the actor and the cameras within the environment. To navigate, use the WASD keys to translate, and press and hold the left mouse button to rotate.
Integrating with 3D Engines
The data stream from Move Live to 3D engines contains the root location of the body, with respect to the origin defined by the calibration, and the rotation of the joints. As a result, any desired skeleton scaling should be done as part of the retargeting process in the 3D engine.
Streaming data to Unreal Engine
The Software comes with a Live Link plugin for Unreal Engine, so that you can stream the mocap data and map it to your characters in real-time.
Download the blank Unreal Engine project with the Live Link from here.
Follow these steps to get started with the project, or to learn how to copy the plugin files into your own project.
Once installed, simply enter the IP address of the server running Move Live in the plugin to pull in the data stream.
Note: The origin location of the Move Live System will be defined by the start location of the actor during calibration. This will then need to be aligned with the origin in Unreal.
Simulating the Move Live data stream for testing
Move Live mocap data is streamed over gRPC and can be received by any client, should you wish to set one up. Check out this guide on how to simulate the data stream for testing. Using this streamer, you can develop your client, such as an Unreal Engine project, without being connected to Move Live.
FBX Export - Please note, this feature has been discontinued
You can record the real-time .fbx files for future use. Please note, this is different to the second solve .fbx files.
Watch this video to see how to record and export .fbx files from Move Live.
During mocap, you can click ‘Start recording’ on each actor track in the Actor Tracks list to begin recording the fbx of their motion. When you’re done, hit export and you can find the .fbx files within the project directory. This .fbx will be from the real-time data stream, if you'd like a high fidelity version, use the Second Solve feature.
Support & Troubleshooting
Support workflow
All logs will be saved and dated in the Project folder. For any support requests, include the logs folder with the enquiry. Reach out to [email protected]
Troubleshooting
Please share any questions you have and we will grow this section with our responses.
| Question | Answer |
| --- | --- |
| Why won't SpinView launch? | |
| Why won't Move Live launch? | |
| Why don't any cameras appear in SpinView? | |
| Why don't any cameras appear in Move Live? | |
| Why can I only view one camera at a time in SpinView? | |
| Why are the cameras appearing at a lower FPS than expected? | |
| What shall I do if my licence expires? | Speak to Move AI to discuss extending your licence. Once you've received an update file, refer to step 4d here |