Overview
What is Move Live?
Move Live is real-time markerless motion capture (mocap) software from Move AI. Because it requires no suits or tracking hardware, it frees talent from wearing equipment and solves common virtual set problems. Traditionally, motion capture has been the preserve of high-budget films, but Move Live aims to make mocap accessible to a much broader audience, including XR, low-to-mid-budget virtual production (VP) and live events.
How does Move Live work?
Deep learning models teach the system to detect key points on the human body in 2D images from multiple cameras and then reconstruct the motion in 3D. This data is run through a local, real-time neural network that applies biomechanical and kinematic models to ensure a lifelike representation of the actor(s) and their movement. The data can then be streamed into a 3D engine to puppeteer a virtual avatar in real time, or simply to provide effects such as lifelike shadows for a performer on stage.
System Capabilities
| Capability | Specification |
| --- | --- |
| Max Actors | 4 |
| Max Frame Rate | 110fps |
| Volume Size | 2.5m x 2.5m - 14m x 14m |
| Cameras | 4 - 8 |
| Operators | Solo operation |
| Hardware Setup Time | <1 hour |
| Calibration Time | <1 minute |
| Latency | ≈100ms (dependent on network setup) |
Hardware Requirements
The Move Live product is provided as software only. The hardware requirements necessary to run the software are listed below.
Volume Configurator
Depending on the size of the space you are capturing in, we support a minimum number of cameras and specific lens types to ensure stable tracking throughout the space. The supported capture volumes are categorised into Small, Medium and Large; their sizes and the minimum/maximum hardware requirements are outlined below.
The camera layout measurements are based on the camera positions themselves, surrounding the space you want to capture within.
| | Small Volume | Medium Volume | Medium Volume | Large Volume | Large Volume |
| --- | --- | --- | --- | --- | --- |
| No. Cameras | 4 cams | 6 cams | 8 cams | 8 cams | 8 cams |
| No. Actors | 1 actor | 2 actors | 2 actors | 2 actors | 2 actors |
| Max size | 6m x 6m | 9m x 9m | 10m x 10m | 12m x 12m | 14m x 14m |
| Min size | 2.5m x 2.5m | 4m x 4m | 4m x 4m | 4m x 4m | 4m x 4m |
| Lens type | 2.8mm | 2.8mm | 2.8mm | 3.5mm | 4mm |
Cameras
Camera Modules
The system currently supports the below FLIR cameras.
Lenses
Every setup differs: the optimal focal length depends on both the distance of the cameras from the capture volume and the size of the volume. The greater the focal length, the more zoomed in the image will be, which suits cameras positioned farther from the volume. In general, it is best to position the cameras further back from the stage (without obstruction) so that you can avoid wide-angle lenses, which often introduce large amounts of image distortion. For lens model recommendations for a specific volume size, see the Volume Configurator.
The Move Live software comes with three preset intrinsic lens calibrations (2.8mm, 3.5mm & 4mm). For optimal configuration it is recommended that you capture your own intrinsics using the Move Live software, as the intrinsics may vary for each camera & lens pairing.
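As a rough guide, the horizontal field of view can be estimated from the focal length and the sensor width. The sketch below assumes a 7.2mm sensor width (typical of a 1/1.8" machine-vision sensor), which is an illustrative value only; check your camera's datasheet for the real figure.

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm):
    """Horizontal field of view of an ideal pinhole camera, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Assumed 7.2 mm sensor width -- substitute the value from your
# camera's datasheet.
for f in (2.8, 3.5, 4.0):
    print(f"{f} mm lens: {horizontal_fov_deg(f, 7.2):.1f} deg horizontal FOV")
```

The shorter the focal length, the wider the view, which is why the wide 2.8mm lens is paired with small volumes and the 4mm lens with large volumes where the cameras sit further back.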
Recommended lenses:
Please note: The current supported camera requires a CS-mount lens. To use a C-mount lens with these cameras, you will need a CS to C-mount adapter.
Server
Component Specifications
Note: The hardware specifications outlined below are our recommendation and supported components. Please contact us if you have any questions in regards to hardware compatibility outside of this list.
| Component: | Minimum Required Specification: |
| --- | --- |
| CPU | 12th Gen Intel Core i9-12900K, 16 core, 24 thread, 3.2GHz / AMD Ryzen™ Threadripper™ PRO 7965WX, 24 core, 48 thread, 4.1GHz |
| GPU | NVIDIA RTX A6000 |
| PSU | 2000W (CRPS) 100-240VAC 50-60Hz |
| Motherboard | Any compatible with all other components. |
| RAM | DDR4 32GB |
| Hard Drive | SSD NVMe 1TB |
| Network Interface Card | 1x 1Gb/sec, 1x 10Gb/sec |
| Operating System | Ubuntu 20.04 (specific version required) |
Licence Dongle
A USB dongle will be provided with the Software, assigned with a licence configured based on the Software sale terms. The Move Live software requires the dongle to be plugged in at all times in order to operate.
Network Switch
The switch must have:
A minimum number of PoE (power over ethernet) ports to match the number of cameras
A 10Gbit/s uplink (to the server)
Jumbo frames (9000 MTU) capability
We recommend the NETGEAR MS510TXPP.
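Once jumbo frames are enabled on the switch and server, you can sanity-check the end-to-end MTU from the server with a standard ping test. This is a sketch: the camera IP below is a placeholder, and the `||` fallback only keeps the snippet from aborting where the host is unreachable.

```shell
# Send a full 9000-byte frame that must not be fragmented (-M do).
# Payload = 9000 MTU - 20-byte IP header - 8-byte ICMP header = 8972.
# Replace 192.168.1.10 with one of your camera IPs.
PAYLOAD=$((9000 - 20 - 8))
ping -M do -s "$PAYLOAD" -c 3 192.168.1.10 || echo "no jumbo-frame path to camera"
```

If the pings come back without a "message too long" error, jumbo frames are working across the whole path.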
Cabling
You will need:
1x CAT6a cable per camera (shielded recommended when routed near many power cables) to connect each camera to the switch
1x CAT6a cable (shielded required) to connect the switch to the server.
Camera Mounts
Suitable mounting methods may differ for each user based on infrastructure and mounting locations. We recommend using:
All cameras will need this attachment to connect it to a mount:
Hardware Installation
Mounting the cameras
The below steps outline the process of setting up and connecting the cameras.
Position the cameras as close to the volume as possible (so the actors occupy as many pixels as possible), but try not to exceed a 45-degree downward angle. Any steeper and the cameras will have a bird's-eye view, which increases occlusion; for example, the shoulders will occlude the waist. Cameras should usually be no higher than 3m.
Mount cameras in locations so that they are stable and evenly spaced above the stage, facing into the capture volume. An ideal setup would have cameras on all sides of the volume, however if this is not possible, provide the greatest variety of camera perspectives that the infrastructure allows.
If you will be using the cameras at 60fps, you can mount the cameras upside down and flip the image in SpinView using the Reverse X and Reverse Y options. However, this is not possible if you'll be recording at 110fps.
Connect cameras to the switch using CAT6a cables. Ensure the cables are secured such that there is no tension on the cameras. Any movement of the cameras will require a new calibration, so it's best to make sure they won't move over time or if a cable is pulled.
Connect the 10Gbit/s port on the switch to the 10Gbit/s NIC on the server using Cat6a.
Ensure the switch has jumbo frames enabled. The method varies by manufacturer; refer to the manufacturer's guide.
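The 45-degree guideline above can be sanity-checked with a little trigonometry. The sketch below assumes the camera is aimed at roughly chest height (1.5m, an illustrative reference point, not a Move AI figure); adjust for your own setup.

```python
import math

def min_standoff_m(camera_height_m, target_height_m=1.5, max_angle_deg=45.0):
    """Horizontal distance a camera must keep from the nearest actor
    position so that its downward viewing angle stays within max_angle_deg."""
    drop = camera_height_m - target_height_m
    return drop / math.tan(math.radians(max_angle_deg))

# A camera at the recommended 3 m maximum height, aimed at chest height,
# needs roughly 1.5 m of horizontal standoff from the nearest actor
# before the 45-degree limit is exceeded.
print(min_standoff_m(3.0))
```

In practice this means lower camera mounts can sit closer to the volume edge, while higher mounts must be pushed further back.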
Software Requirements
Ubuntu 20.04
Nvidia drivers
Spinnaker SDK & Drivers
WIBU Drivers
Move Live
Software Installation
Installation
Install Ubuntu 20.04
Follow the installation process.
Install Nvidia drivers
Go to Apps and search for ‘Additional Drivers’
Within the section relating to your installed GPU
Tick the 'NVIDIA driver metapackage from Nvidia driver 535'
NOT the 'NVIDIA Server driver metapackage from Nvidia driver 535 server'
Click Apply Changes
Install Spinnaker SDK and drivers
Download Spinnaker 3.2 for Ubuntu 20.04 AMD 64bit from https://flir.netx.net/file/asset/59606/original/attachment
Extract the folder
Open terminal (in same location as file) and run the below command
sudo ./install_spinnaker.sh
Say YES to everything
Install WIBU Licensing dongle drivers
Download CodeMeter User Runtime for Linux Version 8.10 | 2024-04-24 | multilanguage from https://www.wibu.com/uk/support/user/downloads-user-software.html
Open a terminal in the same location as the downloaded deb file and enter the following command
sudo apt install ./codemeter_8.10.6221.500_amd64.deb
Make sure the provided dongle is plugged in.
Licence Update - Request
Once installed, open CodeMeter Control Centre; it will show the physical dongle you have installed.
Make sure the Dongle is enabled when selected.
Select License Update and then Next before selecting Create License Request.
Note: If WIBU systems need to update the licence software/firmware, then you would be required to select Import License Update, but this option does NOT update the Move AI mocap software.
Then select Add License of a New Vendor
Note: To extend a license, select 'Extend Existing Licence'
Enter the Move AI Firm Code: 6002284.
Note: This is the Move AI code indefinitely and does not change.
Select the file name and path where you want to save the request file; it will then be created and saved to that location.
Example request file - 3-0000000.WibuCmRaC
Attach this to an email and share with the Move team.
Licence Update - Implementation
Once the update file has been received from the Move team...
Example update file - 3-0000000.WibuCmRau
Drag and drop the update file into the window and the updates will be applied.
Open the Web Admin page (bottom right).
Scroll down to the Move.ai section and check the expiration date is as expected.
Install Move Live
The Move team will provide you with credentials to install Move Live. Please reach out if you haven't received these.
Enter the below command into a new terminal, replacing your_username & your_password with the credentials provided by Move AI.
USERNAME=your_username && PASSWORD=your_password && sudo su -c "bash <(wget --user $USERNAME --password $PASSWORD -qO- https://aptrepo.move.ai/install.sh) $USERNAME $PASSWORD"
Network Configuration
Open Settings > Network > Select the port connected to the camera switch
In the Identity tab, set the MTU to 9000 (jumbo packets)
In the IPv4 tab:
Disable reverse path filtering by commenting out the relevant lines in the 10-network-security.conf file.
Increase the receive buffer size by adding the two lines below to the sysctl file, opened with the following command:
sudo gedit /etc/sysctl.conf
Enter the server's password when requested, then add:
net.core.rmem_max=10485760
net.core.rmem_default=10485760
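The new buffer sizes can be applied and verified without rebooting. This is a sketch; the `|| true` guards only keep the snippet from aborting on machines where sudo is unavailable.

```shell
# Reload /etc/sysctl.conf so the new values take effect immediately,
# then read them back to confirm.
sudo sysctl -p || true
sysctl net.core.rmem_max net.core.rmem_default || true
# Both values should report 10485760 (10 * 1024 * 1024 bytes, i.e. 10 MiB).
```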
SpinView
SpinView is the software provided by the camera manufacturer that allows you to manage the camera settings and check the connection on the network.
Check that you have a connection to all cameras in SpinView. If you cannot see any cameras in SpinView, refer to Support & Troubleshooting.
Setting the IP addresses of your cameras
Once you've configured the network you'll use for your camera switch, you can force the IP address of your cameras, so that they don't revert to their default if they're power cycled.
In order to change the settings of your cameras, you need to ensure they are on an accessible IP range. If there is a red error icon next to the serial number, the IP is not on the same range - right click and select ‘Force IP to resolve’ to correct this.
To set a persistent IP address for your cameras, open a camera's settings by double-clicking on it. Then locate the rows below in the features panel underneath and make the following changes (on each camera).
| Setting | Action |
| --- | --- |
| Current IP Configuration Persistent | Tick this box. |
| Persistent IP Address | Enter the desired IP address, on the same range as the network card it's plugged into (identified in the blue bar above). The IP must be entered in integer format. Use this site to convert it. |
| Persistent Subnet Mask | Enter the respective subnet mask. The subnet must be entered in integer format. Use this site to convert it. |
Repeat for all other cameras.
Power cycle your switch (in turn, power cycling the cameras)
Once powered up, check your cameras have remained on the expected IP addresses.
Refer to this document for more info - https://www.flir.co.uk/support-center/iis/machine-vision/knowledge-base/persistent-ip-in-spinnakerqt/
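If you'd rather not rely on an external converter, the integer form of an address can be computed with Python's standard ipaddress module. The addresses below are placeholders; substitute your own camera IP and subnet mask.

```python
import ipaddress

# Convert a dotted-quad address (or subnet mask) to the integer form
# SpinView expects for the persistent IP fields.
ip_as_int = int(ipaddress.IPv4Address("192.168.1.10"))
mask_as_int = int(ipaddress.IPv4Address("255.255.255.0"))
print(ip_as_int)    # 3232235786
print(mask_as_int)  # 4294967040
```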
Move Live Updates
To update Move Live, open a terminal window and enter the following two commands (sequentially):
sudo apt update
sudo apt upgrade mocap-rt
Software Operation
GUI Overview
Indicator Lights
| Feature: | Indicators: |
| --- | --- |
| Calibration Status | Green - extrinsic and intrinsic successfully found. Yellow - intrinsic found but extrinsic missing. Orange - no extrinsic or intrinsic found. |
| Sync Status | Green - successful synchronisation. Orange - synchronisation error. |
| 3D View | Blue - 3D view active. Grey - 2D view active. |
| Camera Stream | Blue - active. Grey - not active. |
| 3D Data Overlay | Blue - active. Grey - not active. |
| Intrinsic Capture Status | Green - ready. Flashing yellow - capturing. |
| Intrinsic Calibration Status | Green - ready. Flashing yellow - calibrating. |
| Extrinsic Capture Status | Red - not enough frames captured. Green - adequate frames captured. |
Startup
Locate the Move Live Software application by searching for it on the PC.
If you:
Have intrinsic & extrinsic calibrations already - refer to Mocap Operation.
Have intrinsic but not extrinsic calibration already - refer to Extrinsic Calibration.
Do not have intrinsic & extrinsic calibration - create a new project and refer to Intrinsic Calibration.
Please note: we are aware of a bug affecting the first launch of Move Live after booting the server. When activating any mode, the GUI will freeze; to resolve this, please restart Move Live.
Note: Projects can be found in the Invisible_Projects folder found within the Home directory.
Framing up the cameras
Using the below image as a guide, orientate the cameras to give the best possible view of the capture volume, with the least wasted space in the frame of each camera. To achieve the best mocap, ensure you can see your actors fully head-to-toe in as many locations as possible.
Creating a Calibration
The Move Live system requires two sets of calibration data in order to operate the Mocap mode. The intrinsic calibration tells the system about the camera matrix and distortion coefficients, so that it can correctly interpret and un-distort the image (the image shown in the GUI will be the raw, distorted image, until Mocap is activated). This will only need to be done once, as long as the same camera & lens are paired together in the future and the configuration of the lens has not changed.
The extrinsic calibration tells the system where the cameras are positioned and how they are orientated, in order to combine the 2D tracking from each image and triangulate the actor(s) within the volume. This will need to be done every time a camera moves and can only be captured once the intrinsics have been provided.
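For illustration, the intrinsic ("camera") matrix maps a 3D point in camera coordinates to a pixel. The values below are invented for the example, not taken from Move Live, which estimates the real matrix and distortion coefficients for each camera/lens pair during intrinsic calibration.

```python
import numpy as np

# Illustrative pinhole projection with an intrinsic matrix K.
# fx, fy: focal lengths in pixels; cx, cy: principal point.
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 600.0],
              [   0.0,    0.0,   1.0]])

point_cam = np.array([0.5, -0.2, 4.0])   # metres, in camera coordinates
uvw = K @ point_cam
pixel = uvw[:2] / uvw[2]                 # perspective divide
print(pixel)                             # 2D pixel coordinate of the point
```

The extrinsic calibration supplies the complementary half of this model: each camera's position and orientation, so 2D detections from every view can be triangulated back into 3D.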
Intrinsic Calibration
Watch this video to see how to capture an intrinsic calibration.
Create a new project, or overwrite the calibration in your existing project.
NOTE: Cameras must be running at 60fps for calibrations, if you've changed this, please revert it now.
Head to the Intrinsics tab.
You can either load one of the default intrinsics (selecting the focal length of your lens), or for a more precise intrinsic calibration, capture your own.
To capture your own:
Enter the following details:
Number of intersections in the chessboard width and height. See example below which has a width of 9 and height of 14. You should use a rigid, physical chessboard.
Set the detect interval to 100, so that it doesn’t capture too many similar frames. If it does not collect enough, then increase this number.
Hit Activate, then right click the chosen camera below and click start recording.
Place the chessboard in view of the chosen camera and green data points will begin to appear where the camera detects intersections on the checkerboard.
Move the chessboard around to fill the entire frame of the camera, rotating the board on all three axes - left/right, up/down, clockwise/counterclockwise.
Once you have data points distributed across the entire frame (ideal distribution shown above), you can finish the calibration.
A minimum of 50 frames is required, and no more than 250 should be captured.
Right click on the camera and click stop record, then right click again and click calibrate.
Once you have your intrinsics for all of your cameras, deactivate the Intrinsic mode and save your project.
You are now ready to capture your extrinsics.
See this example chessboard to use
Extrinsic Calibration
Watch this video to see how to capture the extrinsic calibration.
If you have an existing project with the correct intrinsics, open that and you can overwrite the old extrinsics. Alternatively, if you have just captured your intrinsics, remain in the same project to capture your new extrinsics now.
NOTE: Cameras must be running at 60fps for calibrations, if you've changed this, please revert it now.
Click on the Extrinsic tab (top right corner).
If using a human for the calibration, select ‘Human’ for the detection mode and enter the actor’s height in metres (excluding footwear).
Stand in the centre of the volume (this will define the location and orientation of the origin), with your hands above your head in a Y-pose. Click ‘Activate’ and then ‘Start Record’.
The system will now begin overlaying a point cloud of data points on the camera previews. Slowly walk around the volume, spiralling outwards from the centre, until you have an even distribution of data points across the entire volume and the camera indicators have turned from red to green.
If using a digital charuco board for the calibration, select ‘Charuco’ for the detection mode and enter the respective details. The charuco board must be shown on screens spanning across at least two planes.
Place the charuco board in the location you'd like to use as the origin location. Click ‘Activate’, and then ‘Start Record’
The system will now begin overlaying a point cloud of data points on the camera previews. Move the charuco board around the screens to collect data on at least two planes, until you have a good distribution across the camera images and the camera indicators have turned from red to green.
If the system is not detecting many key points, you may need to adjust your camera framing.
Once complete, click Stop Record and then Calibrate. This will flash whilst it is processing, and then return to a solid button when it's finished.
Check the calibration outcome and quality in the terminal window. At the bottom there will be the status, successful or failed. If you scroll up, there will be a calibration error value for each camera individually as well as an average for all cameras.
An excellent calibration will score less than 5, and a good calibration less than 9. Above this, a new calibration is recommended.
When this is finished, click File > Save Project.
When the 3D overlay is enabled, the camera reprojections will be shown on each camera preview. Please note that these will not be correctly positioned until Mocap is activated (at which point the images shown will be undistorted).
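For context, the per-camera error reported in the terminal is a reprojection-style error measured in pixels. The sketch below shows how such a metric is typically computed; the point values are made up for illustration, and Move Live's exact formula may differ.

```python
import numpy as np

# Mean reprojection error: average pixel distance between where the
# calibration model reprojects each 3D point and where the camera
# actually detected it. All coordinates below are invented examples.
detected    = np.array([[410.0, 220.0], [985.0, 540.0], [660.0, 880.0]])
reprojected = np.array([[413.0, 224.0], [982.0, 544.0], [663.0, 884.0]])

errors = np.linalg.norm(detected - reprojected, axis=1)  # pixels per point
mean_error = errors.mean()
print(mean_error)  # 5.0 -> borderline "excellent" by the thresholds above
```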
Mocap Operation
Watch this video to see how to operate the Mocap mode.
Open an existing project or remain in the project you’ve just captured your intrinsic & extrinsic calibrations for.
Open the Mocap tab on the right hand side of the viewport.
Select the number of actors you’d like to track.
Please note that we do not recommend changing this while the mocap mode is activated.
Choose the detection method - this allows you to restrict who can be tracked.
Camera positions - this will create a detection area based on the perimeter of the camera positions.
Polygon - This allows you to create a bespoke shaped detection area based on the number of sides and the radius.
None - This allows anyone seen by the cameras to be tracked.
Choose the initialisation mode - this will determine which of the detected actors will be tracked.
Auto - It will automatically track actors once it detects them.
Click - The operator can click on any actor’s bounding box to track them.
Hands - The system tracks actors who raise their hands above their shoulders.
Don’t track - In this mode, the system will not track any actors.
Hit Activate!
Once an actor meets the detection and initialisation criteria, the system will begin estimating their bone lengths. You can see this progress in the Actor Tracks list on the right hand side. To complete this as quickly as possible, the actor should perform dynamic movements, flexing all of their joints.
Tip - If you can't see the mesh overlay, make sure you've enabled the 3D data overlay.
To remove an actor’s track, right click on the track in the Actor Tracks list and select ‘Remove track’.
Optimizing Your Mocap Output
Filtering
Access the Settings in the top left of the software to toggle the filter settings for Move Live.
Having the filtering set to a higher value will ensure smoother tracking but this will increase latency.
Lowering this value will lower latency but increase noise in the data.
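Move Live's filter implementation is not documented here, but the smoothness-versus-latency tradeoff can be illustrated with a generic exponential moving average applied to a sudden movement.

```python
import numpy as np

# Generic illustration only (not Move Live's actual filter): a heavier
# smoothing factor reacts more slowly to a step change, i.e. adds latency,
# while a lighter one tracks the change quickly but passes more noise.
def ema(signal, smoothing):
    out, state = [], signal[0]
    for x in signal:
        state = smoothing * state + (1 - smoothing) * x
        out.append(state)
    return np.array(out)

step = np.concatenate([np.zeros(5), np.ones(20)])  # a sudden movement
light = ema(step, 0.3)   # low smoothing: responsive, noisier in practice
heavy = ema(step, 0.8)   # high smoothing: smoother, lags the movement
print(light[7], heavy[7])  # the lightly filtered signal reacts sooner
```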
FPS
To adjust the fps of the cameras:
Close Move Live
Head to files>computer>media>models
Open a terminal window in that location
Enter sudo gedit settings_rt.ini
Edit line 46 DebugCameraFPS to either 60 or 110 to switch between framerates.
Higher framerates can enable better quality mocap; however, all calibrations must be done at 60fps, and in very bright environments the cameras may struggle with auto exposure at high fps, so return to a lower fps in those circumstances.
Please note: cameras cannot run above 60fps if you are flipping the images due to upside-down mounting.
Data Visualization
When the mocap mode is activated, you can use the ‘view’ toggles to change the view modes, such as 2D/3D view, camera previews on/off and 3D overlay on/off.
2D View
When in the 2D view, you can see the 3D data overlaid on each camera's preview, or turn off the camera previews to see the 3D overlay solely from the camera's perspective.
3D View
When in the 3D view, you can see the 3D representation of the actor and the cameras within the environment. To navigate, use the WASD keys to translate, and press and hold the left mouse button to rotate.
Integrating with 3D Engines
The data stream from Move Live to 3D engines contains the root location of the body, with respect to the origin defined by the calibration, and the rotation of the joints. As a result, any desired skeleton scaling should be done as part of the retargeting process in the 3D engine.
Streaming data to Unreal Engine
The Software comes with a Live Link plugin for Unreal Engine, so that you can stream the mocap data and map it to your characters in real-time.
Download the blank Unreal Engine project with the Live Link from here.
Follow these steps to get started with the project, or to learn how to copy the plugin files into your own project.
Once installed, simply enter the IP address of the server running Move Live in the plugin to pull in the data stream.
Note: The origin location of the Move Live System will be defined by the start location of the actor during calibration. This will then need to be aligned with the origin in Unreal.
Simulating the Move Live data stream for testing
Move Live mocap data is streamed out over a gRPC protocol and can be received by any client, should you wish to set one up. Check out this guide on how to simulate the data stream for testing. Using this streamer, you can develop your client, such as an Unreal Engine project, without being connected to Move Live.
FBX Export
Watch this video to see how to record and export .fbx files from Move Live.
During mocap, you can click ‘Start recording’ on each actor track in the Actor Tracks list to begin recording an .fbx of their motion. When you’re done, hit export; you can find the .fbx files within the project directory.
Support & Troubleshooting
Support workflow
All logs will be saved and dated in the Project folder. For any support requests, include the logs folder with the enquiry. Reach out to [email protected]
Troubleshooting
Please share any questions you have and we will grow this section with our responses.
| Question: | Answer: |
| --- | --- |
| Why won't SpinView launch? | |
| Why won't Move Live launch? | |
| Why don't any cameras appear in SpinView? | |
| Why don't any cameras appear in Move Live? | |
| Why can I only view one camera at a time in SpinView? | |
| Why are the cameras appearing at a lower FPS than expected? | |
| What shall I do if my licence expires? | Speak to Move AI to discuss extending your licence. Once you've received an update file, refer to step 4iv here. |