• WIP
  • Example data

Hi everyone! I'm so excited about this project, and want to thank all of you for making this happen. I hope to be able to contribute to the community in some way down the road. I'm an effects artist primarily working in Houdini and a little Unreal, I love to make my own stuff on the side but don't have much time to rig and animate. I'm planning on purchasing a first production release and can't wait! In the meantime I was wondering if anyone who has been playing with a working beta would be so kind as to upload an example of what I can expect coming out of Blender from Chordata? Most likely an .fbx?

I'm eager to get my hands on some of the data to try to set up a workflow with Houdini, it's capable of re-targeting and filtering, just need to figure out how to do it! Thanks!

Hi, welcome to the forum.
Unfortunately we don't currently have a proper capture sample. The idea is to make some and make them available through our downloads page, but since we are currently focused on several development fronts, we keep postponing it 😅 .

I extracted this one from an old capture session at a NewArts rehearsal. It has some calibration issues, particularly noticeable in the neck, but for the purposes of testing in Houdini it will work.

I'm sending you the original blend file. You can open it in Blender and export to many formats. See:
https://docs.blender.org/manual/en/latest/files/import_export.html
https://docs.blender.org/manual/en/latest/addons/import_export/index.html

Awesome, thanks for this, I'll take a look tonight!

4 days later

Quick update: I was able to export the armature data out of Blender as .fbx and import it into Houdini. I still need to figure out how to export the T-pose, as I think it will be needed to line up different rigs for re-targeting.

Also regarding the mocap data, it appears as though the animation is pinned to a center point, I think I read something about it in another thread too. Is this a limitation of this type of motion capture?

    Hi,
    The FBX format should also contain the original rest pose, but I don't know how you can retrieve it inside Houdini.

    If you need to export a static FBX with only the T-pose, you can select [Rest Position] in Blender's Properties panel > Armature Data tab > Skeleton. (The armature object should be selected, like in the picture.)

    Then export without the "Baked Animation" option, found in the export options at the lower left of the FBX export screen.
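The manual steps above can also be scripted with Blender's Python API (bpy). This is just a sketch: the operator and property names come from Blender's documented API, but the helper function and file path here are illustrative, so verify the exact parameters against your Blender version.

```python
# Sketch: build the keyword arguments for a static T-pose FBX export.
def fbx_export_settings(static_tpose):
    """Settings for bpy.ops.export_scene.fbx().

    static_tpose=True disables animation baking, so only the rest
    pose (the T-pose) is written to the FBX file.
    """
    return {
        "object_types": {"ARMATURE"},   # export only the armature
        "bake_anim": not static_tpose,  # the "Baked Animation" checkbox
        "add_leaf_bones": False,        # avoid extra end bones in Houdini
    }

# Inside Blender you would then run (path is hypothetical):
# import bpy
# bpy.context.object.data.pose_position = 'REST'  # show the rest (T) pose
# bpy.ops.export_scene.fbx(filepath="tpose.fbx",
#                          **fbx_export_settings(static_tpose=True))
```

For the animated capture, the same call with `static_tpose=False` keeps the baked animation in the exported file.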

    sggfx Also regarding the mocap data, it appears as though the animation is pinned to a center point, I think I read something about it in another thread too. Is this a limitation of this type of motion capture?

    It's not a limitation of the inertial type of capture; it's just that other technologies can provide much more accurate translation capture.
    We just haven't had enough time to complete the implementation of the algorithm that calculates translation. You can follow the state of the development in this thread. If the contributors aren't able to take care of it sooner, the ground animation feature should be available together with our official release.

    We will also provide hooks to receive absolute translation information coming from external devices, such as commercially available VR trackers. That way, users interested in obtaining a more precise translation will be able to integrate them.

      Thank you for the detailed guide. I'm new to Blender, so I'm figuring that out as I go. Having a second .fbx with only the T-pose is perfect. I should be able to find the difference in rotations between this rig's T-pose and the target rig's T-pose, or use Houdini constraints to attach them. I will look into both solutions.

      daylanKifky

      It's not a limitation of the inertial type of capture; it's just that other technologies can provide much more accurate translation capture.
      We just haven't had enough time to complete the implementation of the algorithm that calculates translation. You can follow the state of the development in this thread. If the contributors aren't able to take care of it sooner, the ground animation feature should be available together with our official release.

      We will also provide hooks to receive absolute translation information coming from external devices, such as commercially available VR trackers. That way, users interested in obtaining a more precise translation will be able to integrate them.

      That is great news! I will definitely keep an eye on the thread.

      a month later