Move Live 2.0 Documentation

Everything you need to know about Move Live real-time markerless motion capture.

Overview

What is Move Live?

Move Live is real-time markerless motion capture (mocap) software by Move AI. It frees talent from wearing suits or tracking equipment, making it possible to motion capture anyone, anywhere. Traditionally, motion capture has been the preserve of high-budget films, but Move Live aims to make mocap accessible to a much broader audience, including XR, low-to-mid-budget virtual production (VP), and live events.

How does Move Live work?

Deep learning models have been used to teach the system how to detect key points on the human body within 2D images from multiple cameras, and then reconstruct the motion in 3D. This data is then run through a local, real-time neural network that applies biomechanical and kinematic models to ensure a lifelike representation of the actor(s) and their movement. The resulting data can be streamed into a 3D engine to puppeteer a virtual avatar in real time, or simply to provide effects such as lifelike shadows for a performer on stage.

System Capabilities

Max Actors: 2

Max Frame Rate: 110fps

Volume Size: 2.5m x 2.5m to 14m x 14m

Cameras: 4 to 8

Operators: Solo operation or full autonomy

Hardware Setup Time: <1 hour

Calibration Time: <1 minute

Latency: ≈100ms (dependent on network setup)


Hardware Requirements

The Move Live product is provided as software only. Please see below for the hardware requirements necessary to run the software.

Volume Configurator

Depending on the size of the space you are capturing in, we support a minimum number of cameras, as well as a lens type, to ensure stable tracking throughout the space. The supported capture volumes have been categorised into Small, Medium and Large; the details of these sizes, as well as the minimum/maximum hardware requirements, are outlined below.

The camera layout measurements are based on the camera positions themselves, surrounding the space you want to capture within.

Small Volume: 4 cameras, 1 actor, min 2.5m x 2.5m, max 6m x 6m, 2.8mm lens

Medium Volume: 6 cameras, 2 actors, min 4m x 4m, max 9m x 9m, 2.8mm lens

Medium Volume: 8 cameras, 2 actors, min 4m x 4m, max 10m x 10m, 2.8mm lens

Large Volume: 8 cameras, 2 actors, min 4m x 4m, max 12m x 12m, 3.5mm lens

Large Volume: 8 cameras, 2 actors, min 4m x 4m, max 14m x 14m, 4mm lens

Cameras

Camera Modules

The system currently supports the below FLIR cameras.

Lenses

Every setup differs: the optimal focal length for your setup will depend on both the distance of the cameras from the capture volume and the size of the volume. The greater the focal length, the more zoomed in the image will be, which is more suitable when the cameras are further away from the volume. In general, it is best to position the cameras further back from the volume (without obstruction), so that you can avoid using wide-angle lenses, as they often introduce large amounts of distortion in the image. If you'd like more information about which lens model we recommend for a specific volume size, check out the Volume Configurator.

The Move Live software comes with three preset intrinsic lens calibrations (2.8mm, 3.5mm & 4mm). For optimal configuration it is recommended that you capture your own intrinsics using the Move Live software, as the intrinsics may vary for each camera & lens pairing.

Recommended lenses:

Please note: The current supported camera requires a CS-mount lens. To use a C-mount lens with these cameras, you will need a CS to C-mount adapter.

Server Component Specifications

Note: The hardware specifications outlined below are our recommended and supported components. Please contact us if you have any questions regarding hardware compatibility outside of this list.

Minimum required specification by component:

CPU: 12th Gen Intel Core i9-12900K, 16 core, 24 thread, 3.2GHz / AMD Ryzen™ Threadripper™ PRO 7965WX, 24 core, 48 thread, 4.1GHz

GPU: NVIDIA RTX A6000

PSU: 2000W (CRPS) 100-240VAC 50-60Hz

Motherboard: Any compatible with all other components.

RAM: DDR4 32GB

Hard Drive: SSD NVMe 1TB

Network Interface Card: 1 x 1Gb/s, 1 x 10Gb/s

Operating System: Ubuntu 22.04 (specific version required)

Licence Dongle

A USB dongle will be provided with the Software, assigned with a licence configured based on the Software sale terms. The Move Live software requires the dongle to be plugged in at all times in order to operate.
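To confirm the operating system can see the dongle, you can list the connected USB devices. A minimal check, assuming the dongle reports a WIBU-Systems vendor string (check the exact name on your unit):

    # List USB devices and look for the WIBU licence dongle
    lsusb | grep -i wibu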

Network Switch

The switch must have:

  • A minimum number of PoE (power over ethernet) ports to match the number of cameras

  • A 10Gbit/s uplink (to the server)

  • Jumbo frames (9000 MTU) capability

We recommend the NETGEAR MS510TXPP.

Cabling

You will need:

  • 1 x CAT6a (shielded recommended when routed near power cables) per camera, to connect each camera to the switch

  • 1 x CAT6a (shielded necessary) to connect the switch to the server.

Camera Mounts

Suitable mounting methods may differ for each user based on infrastructure and mounting locations. We recommend using:

All cameras will need this attachment to connect them to a mount:


Hardware Installation

Mounting the cameras

The below steps outline the process of setting up and connecting the cameras.

  1. Position the cameras as close to the volume as possible (so that the actors occupy as many pixels as possible), but try not to exceed a 45-degree downward angle. If the cameras are any steeper, they will have a bird's-eye view, which increases the amount of occlusion; for example, the shoulders will occlude the waist. Cameras should usually be no higher than 3m.

  2. Mount cameras in locations so that they are stable and evenly spaced above the volume, facing into the capture volume. An ideal setup would have cameras on all sides of the volume, however if this is not possible, provide the greatest variety of camera perspectives that the infrastructure allows.

  3. If you will be using the cameras at 60fps, you can mount the cameras upside down and flip the image in SpinView using the Reverse X and Reverse Y options. However, this is not possible if you'll be running the cameras at higher framerates.

  4. Connect cameras to the switch using CAT6a cables. Ensure the cables are secured such that there is no tension on the cameras. Any movement of the cameras will require a new calibration, so it's best to make sure they won't move over time or if a cable is pulled.

  5. Connect the 10Gbit/s port on the switch to the 10Gbit/s NIC on the server using Cat6a.

  6. Ensure the switch has jumbo frames enabled. Methods to do this vary by manufacturer; refer to the manufacturer's guide.


Software Requirements

  1. Ubuntu 22.04

  2. Nvidia drivers

  3. Move Live

  4. Spinnaker SDK & Drivers

  5. WIBU Drivers

Software Installation

The software installation process will require access to an internet connection, but general operation of the Move Live software does not require any internet connection.

Installation

  1. Install Ubuntu 22.04.5

    1. Download the ISO file from here

    2. Back up any existing data and note down any custom settings you have configured, as the installation process will require you to erase your disk.

  2. Install Nvidia drivers

    1. Go to Apps and search for ‘Additional Drivers’

    2. Find the section relating to your installed GPU

    3. Tick the 'NVIDIA driver metapackage from Nvidia driver 535.'

      1. NOT the 'NVIDIA Server driver metapackage from Nvidia driver 535 server'

    4. Click Apply Changes

    5. Reboot the server

  3. Install Move Live

    1. Run the below command in a new terminal.

      wget -qO- https://aptrepo.move.ai/install_u22.sh | sudo bash
    2. Reboot the server

    3. Run the below command in a new terminal.

      sudo apt install mocap-rt=2.2.3767-3767
    4. Run the below command in a new terminal.

      sudo apt install mocap-desktop=1.4.15
    5. Reboot the server

  4. Install Spinnaker SDK and drivers

    1. Check if the SpinView application is already installed; if not, follow the below steps

      1. Download Spinnaker 4.0.0.116-amd64-pkg-22.04 from https://flir.netx.net/file/asset/59513/original/attachment

      2. Extract the folder

      3. Open terminal (in same location as file) and run the below command

        sudo ./install_spinnaker.sh
      4. Say YES to everything

  5. Install WIBU Licensing dongle drivers

    1. Check if the CodeMeter application is already installed; if not, follow the below steps

      1. Download CodeMeter User Runtime for Linux Version 8.10b | 2024-08-06 | multilanguage from https://www.wibu.com/uk/support/user/downloads-user-software.html

      2. Open a terminal in the same location as the downloaded deb file and enter the following command

        sudo apt install ./codemeter_8.10.6221.500_amd64.deb
    2. Make sure the provided dongle is plugged in.

    3. Request a licence

      1. Once installed, open CodeMeter Control Centre; this will show you the physical dongle you have installed.

      2. Make sure the Dongle is enabled when selected.

      3. Select License Update and then Next before selecting Create License Request.

        1. Note: If WIBU systems need to update the licence software/firmware, then you would be required to select Import License Update, but this option does NOT update the Move AI mocap software.

      4. Then select Add License of a New Vendor

        1. Note: To extend a license, select 'Extend Existing Licence'

      5. Enter the Move AI Firm Code: 6002284.

        1. Note: This code is permanent and does not change.

      6. Select the file name and path where you want to save the file. This will then create the request file and save it to the location path you selected.

        • Example request file - 3-0000000.WibuCmRaC

      7. Attach this to an email and share with the Move team.

    4. Activate your licence

      1. Wait until you have received the update file from the Move team.

        1. Example update file - 3-0000000.WibuCmRau

      2. Drag and drop the update file into the window and the updates will be applied.

      3. Open the Web Admin page (bottom right).

      4. Scroll down to the Move.ai section and check the expiration date is as expected.
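Before moving on, you can sanity-check the installation from a terminal. A minimal check, based on the package names used in the steps above:

    # Confirm the Move Live, Spinnaker and CodeMeter packages are installed
    dpkg -l | grep -Ei 'mocap|spinnaker|codemeter'

    # Confirm the pinned Move Live versions match the ones installed above
    apt policy mocap-rt mocap-desktop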

Network Configuration

  1. Open Settings > Network > Select the port connected to the camera switch

  2. In the Identity tab, set the MTU to 9000 (jumbo packets)

  3. In the IPv4 tab:

    1. Set the IP to your desired IP address (ideally on a dedicated range)

    2. Set the subnet mask to 255.255.255.0

    3. No gateway is required

    4. Set DNS to automatic

  4. Disable reverse path filtering using the below process:

    1. Open the network security config file using the below command

      1. sudo gedit /etc/sysctl.d/10-network-security.conf 
    2. Enter the password of the server when requested

    3. Comment out the following lines in the 10-network-security.conf file:

      1. # net.ipv4.conf.default.rp_filter=1
        # net.ipv4.conf.all.rp_filter=1
    4. Save and close

  5. Increase the receive buffer size using the below process:

    1. Open the sysctl config file using the below command:

      1. sudo gedit /etc/sysctl.conf 
    2. Enter the password of the server when requested

    3. Add the 2 lines below

      1. net.core.rmem_max=10485760
        net.core.rmem_default=10485760
    4. Save and close
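Once you've saved these changes, you can verify them from a terminal. A minimal check, assuming a camera NIC named enp2s0 and a placeholder camera IP (substitute your own):

    # Apply the sysctl changes without rebooting
    sudo sysctl --system

    # Confirm reverse path filtering is off (0) and the receive buffer is increased
    sysctl net.ipv4.conf.all.rp_filter net.core.rmem_max

    # Confirm the camera-facing NIC is set to jumbo frames (MTU 9000)
    ip link show enp2s0

    # Confirm jumbo frames pass end-to-end: 8972 bytes of payload + 28 bytes of headers = 9000, fragmentation disallowed
    ping -M do -s 8972 192.168.1.21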


SpinView

SpinView is the software provided by the camera manufacturer that allows you to manage the camera settings and check the connection on the network.

Check you have a connection to all cameras in SpinView. If you are not able to see any cameras in SpinView, refer to Support & Troubleshooting.

Setting the IP addresses of your cameras

Once you've configured the network you'll use for your camera switch, you can set the persistent IP address of your cameras, so that they don't revert to their default if they're power cycled.

In order to change the settings of your cameras, you need to ensure they are on an accessible IP range. If there is a red error icon next to the serial number, the IP is not on the same range; right click and select ‘Force IP’ to resolve this.

To set a persistent IP address for your cameras, open the settings by double-clicking on one. Then locate the below rows in the features panel underneath and make the following changes (on each camera).

  1. Current IP Configuration Persistent

    • Tick this box

  2. Persistent IP Address

    • Enter the desired IP address, which is on the same range as the network card it's plugged into (identified in the blue bar above).

    • The IP must be entered in integer format. Use this site to convert it, or see the conversion snippet after this list.

  3. Persistent Subnet Mask

    • Enter the respective subnet.

    • The subnet must be entered in integer format. Use this site to convert it.

  4. Repeat for all other cameras.

  5. Power cycle your switch (in turn, power cycling the cameras)

  6. Once powered up, check your cameras have remained on the expected IP addresses.
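If you'd rather convert the addresses locally, the conversion is just base-256 arithmetic on the four octets. A minimal sketch in bash (the addresses shown are placeholders):

    # Convert a dotted-quad IPv4 address (or subnet mask) to its integer form
    ip_to_int() {
      local IFS=.
      read -r a b c d <<< "$1"
      echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
    }

    ip_to_int 192.168.1.21    # example camera IP -> 3232235797
    ip_to_int 255.255.255.0   # subnet mask       -> 4294967040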

Framing up the cameras

Using the below image as a guide, orientate the cameras to give the best possible view of the capture volume, with the least unused space in the frame of each camera. To achieve the best mocap, ensure you can see your actors fully head-to-toe in as many locations as possible. Allow a little extra space around the edges of the frame, as the system will crop the images slightly when undistorting the lenses.

SpinView can be used to adjust camera settings, such as White Balance and Exposure. However, once changed, it is essential to check the cameras are still able to capture at your desired framerate.


Software Operation

GUI Overview

Feature guide:

Calibration Status (1): Green - extrinsic and intrinsic successfully found. Yellow - intrinsic found but extrinsic missing. Orange - no extrinsic or intrinsic found.

Sync Status (2): Green - successful synchronisation. Orange - synchronisation error.

3D View (3): Blue - 3D view active. Grey - 2D view active.

Camera Stream (4): Blue - active. Grey - not active.

3D Data Overlay (5): Blue - active. Grey - not active.

Second Solve Recording Panel (6): Used to record takes for post-processing in order to achieve a higher-fidelity solve.

Second Solve Start Recording Button (7): Used to begin a recording for second solve.

Mocap Mode Tab (9): Used when you want to capture real-time motion capture.

Intrinsics Mode Tab (10): Used when you want to capture your intrinsic lens calibration.

Extrinsics Mode Tab (11): Used when you want to capture your extrinsic camera location calibration.

Actor Tracks List (12): Shows the actors who are currently being tracked.

Listening List (13): Shows available IP addresses that can broadcast the real-time mocap data stream.

Streams (14): Shows IP addresses which are receiving the real-time mocap data stream.

Start Up

  1. Locate the Move Live Software application by searching for it on the PC.

  2. Use the below table to determine your next steps:

    • Lens calibration (intrinsic): Yes / Camera positioning calibration (extrinsic): Yes - go to Mocap Operation

    • Lens calibration (intrinsic): Yes / Camera positioning calibration (extrinsic): No - go to Extrinsic Calibration

    • Lens calibration (intrinsic): No / Camera positioning calibration (extrinsic): No - go to Intrinsic Calibration

Calibrating Your System

The Move Live system requires two sets of calibration data in order to operate the Mocap mode. The intrinsic calibration informs the system about the camera matrix and distortion coefficients, so that it can correctly interpret and un-distort the image. This will only need to be done once, as long as the same camera & lens are paired together in the future and the configuration of the lens zoom & focus has not changed.

The extrinsic calibration tells the system where the cameras are positioned and how they are orientated, in order to combine the 2D tracking from each image and triangulate the actor(s) within the volume. This will need to be done every time a camera moves and can only be captured once the intrinsics have been provided.
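For reference, in standard pinhole-camera terms (this notation is ours, not Move AI's), the intrinsic calibration corresponds to the camera matrix and distortion coefficients, while the extrinsic calibration supplies each camera's rotation and translation. A 3D point is projected into a camera image as:

    \[
    \mathbf{x} \sim K \, [R \mid t] \, \mathbf{X}, \qquad
    K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}
    \]

where K holds the focal lengths and principal point (intrinsics), and [R | t] holds the camera's orientation and position (extrinsics). Triangulating an actor amounts to intersecting these projection rays from multiple calibrated cameras.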

Intrinsic Calibration

Watch this video to see how to capture an intrinsic calibration.

  1. Create a new project, or overwrite the calibration in your existing project.

    Note: Cameras must be running at 60fps for calibrations. If you've changed this, please revert it now.

  2. Head to the Intrinsics tab.

  3. You can either load one of the default intrinsics (selecting the focal length being used), or for a more precise intrinsic calibration, capture your own.

  4. To capture your own:

    1. Enter the following details:

      • See this example chessboard to use. This can be shown on a screen, or printed onto a rigid board.

      • The number of intersections in the chessboard width and height. The chessboard above has a width of 9 and a height of 14. A rigid, physical chessboard is preferred.

      • Set the detect interval to 100, or increase it so that it doesn’t capture too many datapoints too quickly. If it does not collect enough, then decrease this number.

    2. Hit Activate then right click the chosen camera below and click Start Recording.

    3. Place the chessboard in view of the chosen camera; green data points will begin to appear where the camera detects intersections on the chessboard.

    4. Move the chessboard around to fill the entire frame of the camera, rotating the board on all three axes - pivoting left/right/up/down & rotating clockwise/counterclockwise.

    5. Once you have data points distributed across the entire frame, especially at the edges, you can right click on the camera id, and click 'Stop Recording'.

      • Suitable distribution shown below.

      • A minimum of 50 frames is required; no more than 300 are necessary, and capturing more may result in longer calibration processing times.

    6. Right click again and click 'Calibrate'.

    7. Proceed to do the same for all other cameras.

  5. Once you have your intrinsics for all of your cameras, deactivate the Intrinsic mode and save your project.

  6. You are now ready to capture your extrinsics.

Extrinsic Calibration

Watch this video to see how to capture the extrinsic calibration.

  1. If you have an existing project with the correct intrinsics, open that and you can overwrite the old extrinsics. Alternatively, if you have just captured your intrinsics, remain in the same project to capture your new extrinsics now.

    1. Note: Cameras must be running at 60fps for calibrations. If you've changed this, please revert it now.

    2. Note: Projects can be found in the Invisible_Projects folder within the Home directory.

  2. Click on the Extrinsic tab (top right corner).

    1. If using a human for the calibration, select ‘Human’ for the detection mode and enter the actor’s height in metres (excluding footwear).

      1. Stand in the centre of the volume (this will define the location and orientation of the origin), with your hands above your head in a Y-pose. Click ‘Activate’ and then ‘Start Record’.

      2. The system will now begin overlaying a point cloud of data points on the camera previews. Slowly walk around the volume, spiralling outwards from the centre and filling the entire space, until you have an even distribution of data points around your entire volume and the cameras have gone green (from red).

    2. If using a digital charuco board for the calibration, select ‘Charuco’ for the detection mode and enter the respective details.

      1. The charuco board must either be shown on screens spanning across at least two planes, or printed on a physical rigid board.

      2. Place the charuco board in the location you'd like to use as the origin location. Click ‘Activate’, and then ‘Start Record’

      3. The system will now begin overlaying a point cloud of data points on the camera previews. Move the charuco board around the screens to collect data on at least two planes, until you have a good distribution across the images of the cameras and the cameras have gone green (from red).

  3. If the system is not detecting many key points, you may need to adjust your camera framing, as all cameras must see the human in order to detect them.

  4. Once complete, click Stop Record and then Calibrate. This will flash green/yellow whilst it is processing, and then return to solid green when it's finished.

  5. Check the calibration outcome and quality in the terminal window. At the bottom there will be the status, successful or failed. If you scroll up, there will be a calibration error value for each camera individually as well as an average for all cameras.

    1. An excellent calibration will have an error value of less than 5; a good calibration, less than 9. Above this, a new calibration is recommended.

  6. When this is finished, click File > Save Project.

  7. When the 3D overlay is enabled, the camera reprojections will be shown on each camera preview. Please note that these will not be correctly positioned until Mocap is activated (at which point the images shown will be undistorted).

Mocap Operation

Watch this video to see how to operate the Mocap mode.

  1. Open an existing project or remain in the project you’ve just captured your intrinsic & extrinsic calibrations for.

    1. Note: Projects can be found in the Invisible_Projects folder within the Home directory.

  2. Open the Mocap tab on the right hand side of the viewport.

  3. Select the number of actors you’d like to track (1-2).

    1. Please note that we do not recommend changing this while the mocap mode is activated.

  4. Choose the Tracking Area (detection method) - this allows you to restrict who will be detected.

    1. Camera positions - This will create a detection area based on the perimeter of the camera positions.

    2. Polygon - This allows you to create a bespoke shaped detection area based on the number of sides and the radius.

    3. None - This allows anyone seen by the cameras to be detected.

  5. Choose the Initialisation Mode (tracking method) - this will determine which of the detected actors will be tracked.

    1. Auto - The system will automatically track actors once it detects them.

    2. Click - The operator can click on any actor’s bounding box to track them.

    3. Hands - The system will track actors who raise their hands above their shoulders.

    4. Don’t track - In this mode, the system will not track any actors.

  6. Hit Activate!

  7. Once an actor meets the detection and initialisation criteria, the system will begin estimating their bone lengths. You can see this progress on the right hand side in the Actor Tracks list. To enable this to complete as quickly as possible, the actor should perform dynamic movements, flexing all of their joints.

  8. When the bone length estimation completes, the system will begin streaming their tracking data to any connected clients.

    1. Tip - If you can't see the 3D mesh overlay on the actor, make sure you've enabled the 3D data overlay.

  9. To remove an actor’s track, right click on the track in the Actor Tracks list and select ‘Remove track’.

Second Solve

The Second Solve feature allows users to record video during Mocap operation that can be processed afterwards through our Second Solve engine, to achieve a high-fidelity solve of the data for use in post-production.

  1. Ensure the frame rate is set to no greater than 60fps

  2. Activate Mocap mode and begin tracking your actor(s) (if you wish)

  3. When your actor(s) are ready, instruct them to hold a T-pose in the centre of the volume.

  4. Hit Start in the Recordings panel. You will see the elapsed time shown, alongside the number of lost frames.

    1. A significant number of lost frames may indicate an issue with your hardware/network.

  5. When you're finished, Stop the recording

  6. Make sure Mocap mode is deactivated

  7. Click the settings button to instruct the system how many actors to process and whether to toggle on ball tracking.

  8. Hit Start on the respective take to begin processing

  9. When the take has finished processing, the status light will be green.

  10. You can now click on the Video button to view the input videos, the File button to view the output files of the animation, and the Bin button to delete the respective take.

Optimizing Your Mocap Output

Filtering

You can adjust the level of filtering to bias for lower latency or higher quality Mocap

  1. Head to File > Settings to toggle the filter settings for Move Live from 0 (least filtering) to 5 (max filtering).

Please note - Having the filtering set to a higher value will ensure smoother tracking but this will increase latency. Lowering this value will lower latency but increase noise in the data.

FPS

You can adjust the frame rate to improve the Mocap quality. Higher frame rates reduce motion blur during faster movements.

  1. In the top left corner of the Camera Info panel, click the square button to open the FPS window

  2. Ensure the FPS is 60 for calibrations, but you can increase this to 110 FPS for mocap.

  3. Set the framerate to your desired value between 60-110 and click Set to apply the changes.

Please note - Higher framerates can enable better quality mocap; however, all calibrations must be done at 60fps. If your environment is very bright, the cameras may have trouble with auto exposure at high fps, so return to a lower fps in these circumstances or lock the exposure in SpinView. Cameras cannot perform above 60fps if you are flipping the images due to upside-down mounting.

Bone length estimation duration - This is currently experiencing a bug which we are investigating.

You can adjust the number of frames used for bone length estimation to speed up tracking initialisation.

  1. Close Move Live

  2. Head to Files > Computer > usr > local > moveai (the /usr/local/moveai directory)

  3. Open a terminal window in this location

  4. Enter sudo gedit settings_rt.ini

    1. Enter the password of the server

  5. Edit line NewTrackNumBoneLengthEstimationFrames to a value that suits you.

    1. The default value is 500

    2. Consider that a higher frame rate will capture a greater number of frames in the same amount of time

  6. Save

  7. Relaunch Move Live

Please note - reducing the number of frames used for bone length estimation may impact the quality of bone length estimations and, in turn, mocap quality.
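If you prefer to script this change, the edit can be done with sed. A minimal sketch, assuming settings_rt.ini stores plain key=value pairs (check the file's actual format first) and using 300 as an example value:

    # Back up the settings file, then change the bone length estimation frame count
    sudo cp /usr/local/moveai/settings_rt.ini /usr/local/moveai/settings_rt.ini.bak
    sudo sed -i 's/^NewTrackNumBoneLengthEstimationFrames=.*/NewTrackNumBoneLengthEstimationFrames=300/' /usr/local/moveai/settings_rt.ini

The same approach applies to the AutoRemovalTrackMinIdleTimeThresholdSeconds setting described in the next section.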

Track removal speed - This is currently experiencing a bug which we are investigating.

You can adjust the duration the system waits after losing sight of an actor, before it removes the tracking and checks if any other actors meet the tracking criteria. The shorter the time, the quicker it will remove the tracking and look for a new actor.

  1. Close Move Live

  2. Head to Files > Computer > usr > local > moveai (the /usr/local/moveai directory)

  3. Open a terminal window in this location

  4. Enter sudo gedit settings_rt.ini

    1. Enter the password of the server

  5. Edit line AutoRemovalTrackMinIdleTimeThresholdSeconds to a value that suits you

    1. The default value is 4 seconds

  6. Save

  7. Relaunch Move Live

Data Visualization

When the mocap mode is activated, you can use the ‘view’ toggles to change the view modes, such as 2D/3D view, camera previews on/off and 3D overlay on/off.

2D View

When in the 2D view, you can see the 3D data overlaid on each camera's preview, or turn off the camera previews to see the 3D overlay solely from the camera's perspective.

3D View

When in the 3D view, you can see the 3D representation of the actor and the cameras within the environment. To navigate, use the WASD keys to translate, and press and hold the left mouse button to rotate.


Integrating with 3D Engines

The data stream from Move Live to 3D engines contains the root location of the body, with respect to the origin defined by the calibration, and the rotation of the joints. As a result, any desired skeleton scaling should be done as part of the retargeting process in the 3D engine.

Streaming data to Unreal Engine

The Software comes with a Live Link plugin for Unreal Engine, so that you can stream the mocap data and map it to your characters in real-time.

  1. Download the blank Unreal Engine project with the Live Link from here.

  2. Follow these steps to get started with the project, or to learn how to copy the plugin files into your own project.

  3. Once installed, simply enter the IP address of the server running Move Live in the plugin to pull in the data stream. Remember to include the :54321 port at the end of the IP, as shown in the bottom left corner of the Move Live software.

Note: The origin location of the Move Live System will be defined by the start location of the actor during calibration. This will then need to be aligned with the origin in Unreal.

Simulating the Move Live data stream for testing

Move Live mocap is streamed out over gRPC. This can be received by any client, should you wish to set one up. Check out this guide on how to simulate the data stream for testing. Using this streamer, you can develop your client, such as an Unreal Engine project, without being connected to Move Live.
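Before writing any client code, it can help to confirm that the stream endpoint is reachable. A minimal check, assuming the default :54321 port mentioned above and a placeholder server IP:

    # Check the Move Live server is listening on the gRPC stream port
    nc -zv 192.168.1.50 54321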

FBX Export - Please note, this feature has been discontinued

You can record the real-time .fbx files for future use. Please note, this is different to the second solve .fbx files.

Watch this video to see how to record and export .fbx files from Move Live.

During mocap, you can click ‘Start recording’ on each actor track in the Actor Tracks list to begin recording the .fbx of their motion. When you’re done, hit export and you can find the .fbx files within the project directory. This .fbx is generated from the real-time data stream; if you'd like a high-fidelity version, use the Second Solve feature.


Support & Troubleshooting

Support workflow

All logs will be saved and dated in the Project folder. For any support requests, include the logs folder with the enquiry. Reach out to [email protected]
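To make this easier, you can bundle the logs into a single archive before emailing them. A minimal sketch, assuming your project sits in the Invisible_Projects folder mentioned earlier and its logs live in a folder named logs (substitute your own project name):

    # Bundle the project logs into a dated archive for the support email
    tar -czf movelive-logs-$(date +%F).tar.gz -C ~/Invisible_Projects/<your-project> logs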

Troubleshooting

Please share any questions you have and we will grow this section with our responses.


Why won't SpinView Launch?

  • Check the correct version of SpinView is installed.

Why won't Move Live Launch?

  • Check all pre-requisite software setup was completed correctly, particularly that the correct versions of software were installed.

Why don't any cameras appear in SpinView?

  • Check the network has been set up correctly.

  • Check you are receiving 10Gb/s from the switch and the MTU in the Network Settings is set to 9000.

Why don't any cameras appear in Move Live?

  • Check the correct version of SpinView was installed.

  • Check if SpinView is open in the background; the system can only read camera data into one application at a time.

  • Restart Move Live.

Why can I only view one camera at a time in SpinView?

  • Check that jumbo frames are enabled on the switch, and the MTU in the Network Settings is set to 9000.

Why are the cameras appearing at a lower FPS than expected?

  • Check that jumbo frames are enabled in the NIC settings of the PC and on the switch.

  • Check the network configuration steps.

  • Check the components of the network meet the Hardware Requirements.

  • If the room is too bright, the shutter may not be able to achieve the high fps due to the required exposure. Try adjusting the 'Lighting Mode' of each camera in SpinView.

What shall I do if my licence expires?

Speak to Move AI to discuss extending your licence. Once you've received an update file, refer to step 4d here.

