Move Multi-Cam QuickStart Guide

In this guide, you will find the workflow to use Move AI’s Multi-Camera Markerless Motion Capture technology.

There are five steps:

1. Shoot Prep
2. Capture Your Motion
3. Prepare Your Footage
4. Process Your Footage
5. Export Your MoCap

💡 For any other guidance, check out the knowledge base at help.move.ai or reach out to [email protected] 

1. Shoot Prep

1.1. Check you have the necessary equipment

  • Cameras

  • Mounting methods

  • Floor markers

1.2. Check your environment and wardrobe

  • Lighting

  • Reflections

  • Clothing

  • Footwear

  • Space

1.3. Configure your setup based on your capture volume

  • How many cameras do you need?

  • How big is your volume?

  • What resolution/framerate should you capture at?

1.4. Camera Setup

1.5. Mark out your volume

  • Putting markers on the floor can help the actors understand the size/position of the capture volume

1.6. Framing Up The Cameras:

  1. Position the cameras evenly around your space to create your capture volume.

  2. Stand in front of a camera and use the App to see the live preview. Adjust the framing so that the camera sees as much of the capture volume as possible, and can see your entire body (with your hands up) at the closest location.

    At a minimum, the camera needs to see your hands when they're above your head in the centre of the volume (as this is part of the calibration process). However, it is important to consider if the movements you'll be capturing may require more height, or if your actor may be taller than you.

  3. We recommend using an object, marker or tape to identify the limits of your capture volume so that you don't exceed the space during your capture.

  4. Repeat this step on each camera until you are happy that you have good coverage around the entire volume. As a bare minimum, two cameras must be able to see your entire body at all times during the capture.

  5. Once you have framed them all up, you're ready to calibrate!
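The coverage rule above (at least two cameras seeing your entire body at all times) can be sanity-checked on paper before a shoot. The sketch below is illustrative only and not part of the Move AI platform; the camera positions, aim angles and 90° field of view are assumptions you would replace with your own layout.

```python
import math

def visible(cam_pos, cam_dir_deg, fov_deg, point):
    """Return True if `point` lies within the camera's horizontal field of view."""
    dx, dy = point[0] - cam_pos[0], point[1] - cam_pos[1]
    angle = math.degrees(math.atan2(dy, dx))
    diff = (angle - cam_dir_deg + 180) % 360 - 180  # signed angular difference
    return abs(diff) <= fov_deg / 2

def coverage_ok(cameras, points, min_views=2):
    """Check that every sample point is seen by at least `min_views` cameras."""
    return all(
        sum(visible(pos, d, fov, p) for pos, d, fov in cameras) >= min_views
        for p in points
    )

# Four cameras at the corners of a 6 m square, each aimed at the centre,
# checked against a grid of points in the middle of the volume.
cams = [((0, 0), 45, 90), ((6, 0), 135, 90), ((6, 6), 225, 90), ((0, 6), 315, 90)]
grid = [(x, y) for x in (2, 3, 4) for y in (2, 3, 4)]
print(coverage_ok(cams, grid))
```

This is a flat 2D approximation: it ignores camera height, tilt and occlusion, so treat a passing result as a starting point, not a guarantee.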

2. Capture Your Motion

2.1. Record Your Calibration:

  1. Hit record.

  2. Stand in the centre of the capture area and:

    1. Clap 3 times clearly above your head, ensuring the claps are louder than any background noise. (Note - this isn’t necessary if your cameras are synchronized)

    2. Stand in a Y-pose for 2 seconds, then walk from the middle of the capture area to the edge of the volume in front of each camera, facing the camera at all times. Move in a natural motion, pause at each camera and return to the centre while still facing the camera.

    3. Repeat this motion walking to each camera, then backwards to the centre, before moving on to the next.

    4. If using fewer than 4 cameras, walk around the perimeter of the volume to ensure the system has an accurate understanding of the shape of the capture volume. (The perimeter will be used to ensure only the desired actors are tracked during action takes)

  3. Now finish the recording.

  4. At this stage, we recommend measuring the height of your actor and noting it down. You'll need it later when you process the calibration! Their height should not include the footwear they are wearing.

    💡 Make sure only the actor (i.e. no other people) is in the capture area during the calibration, otherwise the system will not perform optimally.

    Make sure that the actor moves in a natural motion and aims to cover all of the capture volume within the capture area.

    If the cameras are moved (even slightly) at any point during the shoot, a new calibration will be required for optimal results.
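The claps recorded at the start of a take are what the platform uses to time-sync unsynchronized cameras. As an illustration only (this is not Move AI's actual algorithm), clap times can be located in a mono audio track by looking for short bursts of energy well above the background level:

```python
def find_claps(samples, rate, window=0.01, factor=8.0):
    """Locate loud transient peaks (claps) in a mono audio signal.

    Splits the signal into short windows, computes each window's energy,
    and reports the start times of windows whose energy exceeds `factor`
    times the median window energy. `window` and `factor` are tuning
    assumptions, not values from the Move AI platform.
    """
    step = max(1, int(rate * window))
    energies = [
        sum(s * s for s in samples[i:i + step])
        for i in range(0, len(samples), step)
    ]
    median = sorted(energies)[len(energies) // 2]
    threshold = factor * median if median > 0 else max(energies) / 2
    times, last = [], -1.0
    for i, e in enumerate(energies):
        t = i * step / rate
        if e > threshold and t - last > 0.1:  # debounce: claps are >100 ms apart
            times.append(round(t, 2))
            last = t
    return times
```

Running this over each camera's audio and lining up the detected clap times is, in spirit, what the clap start/end fields in the processing forms later in this guide make possible.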

2.2. Record Your Action Takes:

  1. Press record.

  2. Stand in the centre of the volume.

  3. Clap 3 times above your head. (Note - this isn’t necessary if your cameras are synchronized)

  4. Hold a T-Pose for 2 seconds.

  5. Now, perform! Make sure the actor remains inside the capture volume and that the time from the T-pose to the end of your movements does not exceed 4 minutes.

  6. When you are finished, stop your recording.

  7. We always recommend doing an extra calibration at the end of your session, just in case a camera was moved during the session.

    💡 To capture multiple actors, make sure they stand side by side and perform a T-Pose at the same time at the start of the take, though only one person needs to clap.

2.3. Actor Profiles: (Enterprise Only)

  • Record a separate take of the actor doing the specified movement sequence as shown in the videos to create an actor profile. You can apply this to your action takes when processing them.

3. Prepare Your Footage

  • The files must be named consistently according to the following convention:

    • camX_takename

  • Navigate to app.move.ai to create your project & session.

  • You can batch upload the videos from your PC to the session you’ve created.
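Renaming each camera's video by hand is error-prone, so a small script can apply the camX_takename convention for you. This is our own sketch, not a Move AI tool: it assumes one .mp4 per camera in one folder, and that sorted filename order matches camera order - verify the mapping in the dry-run output before renaming for real.

```python
from pathlib import Path

def rename_for_upload(folder, take_name, dry_run=True):
    """Rename each camera's video to the camX_takename convention.

    Assumes one video file per camera and that sorted filename order
    matches camera order; the .mp4 extension is also an assumption.
    Returns the (old, new) name pairs so a dry run can be reviewed.
    """
    videos = sorted(Path(folder).glob("*.mp4"))
    plan = []
    for i, src in enumerate(videos, start=1):
        dst = src.with_name(f"cam{i}_{take_name}{src.suffix}")
        plan.append((src.name, dst.name))
        if not dry_run:
            src.rename(dst)
    return plan

# Review the plan first, then re-run with dry_run=False to apply it:
# rename_for_upload("shoot_day_1/take01", "take01")
```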

4. Process Your Footage

  1. Select the '+ Calibration' icon

  2. Title: Give your calibration a title

  3. Calibration Actor Height: Enter the actor's height in metres (e.g. 1.88) so that the system can accurately extract the person's bone lengths. The height should not include footwear, and it will be saved after the first time you enter the actor's details.

  4. Time sync clapping starts: This is the time in the calibration when the clapping starts.

  5. Time sync clapping ends: This is the end time of the clapping.

  6. Time calibration sequence starts: This is the start time of the calibration (first moment in a Y-pose)

  7. Time calibration sequence ends: This is the end time of the calibration sequence.

  8. Hit 'run' and you'll be notified when your calibration is done.

  9. While this is processing, you can queue up your action takes, and Enterprise users can queue up their actor profiles to be created.

  10. It's always worth checking your calibration.
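The calibration form fields above map naturally onto a small settings record. The sketch below is ours, not the platform's API (the field names are illustrative), and it encodes the rules implied by steps 3-7: height in metres without footwear, and the clap window preceding the calibration sequence.

```python
from dataclasses import dataclass

@dataclass
class CalibrationSettings:
    """Inputs for a '+ Calibration' run (field names are illustrative)."""
    title: str
    actor_height_m: float    # metres, without footwear, e.g. 1.88
    clap_start_s: float
    clap_end_s: float
    sequence_start_s: float  # first moment in the Y-pose
    sequence_end_s: float

    def problems(self):
        """Return the reasons this form would be rejected or mis-processed."""
        issues = []
        if not 0.5 <= self.actor_height_m <= 2.5:
            issues.append("height must be in metres (e.g. 1.88), not cm or feet")
        if not self.clap_start_s < self.clap_end_s:
            issues.append("clap window must start before it ends")
        if not self.clap_end_s <= self.sequence_start_s < self.sequence_end_s:
            issues.append("calibration sequence must come after the claps")
        return issues
```

A pre-flight check like this is mostly useful for catching the classic mistake of typing a height in centimetres (188) instead of metres (1.88).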

Click the '+ Take' icon and select the video of your desired action take.

  1. Scene: Create a New Scene or select the Scene in which you wish to add your take.

  2. Title: Give your Take a Title.

  3. Actor Number: You must tell the system how many actors are in the take, to ensure it tracks all actors.

  4. Prop: Here you can select if you have any props in the scene you would like to track. Currently, the platform supports the tracking of a single football.

  5. Time sync clapping Start/End: Input the clap times; this should be the window of time during which the actor is clapping.

  6. Time take action starts: This is the start time of the action in your take. Always start your take when the actor(s) in the video are clearly visible, holding the T-pose and not occluding each other.

  7. Time take action ends: This is the end time of the action in your take, or when you want your animation output data to finish.

  8. Click ‘Save’.

  9. At this stage, Enterprise users have the option to apply an actor profile. If you haven’t created one yet, take a pause on processing your action take and do that now.

  10. Select the calibration you want to use. This should be for the same camera positions as the Take you are about to process.

  11. Fingers: ON or OFF. Finger tracking will increase processing time.

  12. Rigs: Select the Rig you want to retarget to. (If you plan to retarget elsewhere, we recommend choosing the Move_Mo or Move_Ve rig, as these will maintain the bone lengths of your actor.)

  13. Click 'Run' and your video will now start processing.

  14. It's always worth checking your action takes.

💡 The platform allows you to specify the time range of the take that you wish to extract motion for. For example, if you have a take that is 2 minutes long (timecode 02:00) but you only wish to extract motion for a sub-range such as 00:05-01:10, you can specify this in the box, as long as the start time is a moment where the actor(s) are doing a T-pose.
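The sub-range rule in the tip above is easy to check programmatically. This is a hypothetical helper of ours, not a platform function; it only validates that the range fits inside the take, since whether the actor is actually holding a T-pose at the start time still has to be checked by eye in the video.

```python
def parse_mmss(tc):
    """Convert an 'MM:SS' timecode string to seconds."""
    m, s = tc.split(":")
    return int(m) * 60 + int(s)

def extraction_range(take_len_tc, start_tc, end_tc):
    """Validate a requested motion-extraction sub-range against the take length.

    Returns (start, end) in seconds, raising ValueError if the range does
    not fit inside the take.
    """
    take_len = parse_mmss(take_len_tc)
    start, end = parse_mmss(start_tc), parse_mmss(end_tc)
    if not 0 <= start < end <= take_len:
        raise ValueError(f"{start_tc}-{end_tc} does not fit inside a {take_len_tc} take")
    return start, end

# The example from the tip: a 2-minute take, extracting 00:05-01:10.
print(extraction_range("02:00", "00:05", "01:10"))  # → (5, 70)
```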

5. Export Your MoCap

  • Download FBX Pre-Retarget: This is the FBX skeleton with motion capture data applied to it - straight from the platform

  • Download FBX Retargeted: This is an FBX with motion capture retargeted to the skeleton of your rig

  • Download Blender Mocap: This downloads both the pre-retargeted and the retargeted motion applied to your rig with its mesh, alongside the camera positions. This file can be opened automatically in Blender

  • Download MAYA HIK Pre-Retargeted: This downloads the Maya HIK file which can be opened directly in Maya
