• WIP
  • makehuman.js

Hello m13
This looks great! Having MakeHuman integrated with Chordata would be a great contribution!
Please tell us a little more about this project.

We are currently working on the complete version of the remote console, which allows you to manage and visualize the capture in real time in a web app. The communication between backend and frontend, using COPP over WebSockets, is already implemented together with many other features. Perhaps you would like to take a look at it and copy the parts of the code that are useful to your project.
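For readers curious how the browser side could consume such a stream, here is a minimal sketch. The endpoint URL and the JSON message layout below are assumptions for illustration only; the real COPP wire format is defined by the Chordata project.

```typescript
// Hypothetical frame layout: one JSON array of per-bone quaternions per
// WebSocket message. This is NOT the real COPP format, just an illustration.
interface BoneRotation {
  bone: string;                           // e.g. "l_forearm" (made-up name)
  q: [number, number, number, number];    // quaternion (w, x, y, z)
}

function parseFrame(data: string): BoneRotation[] {
  return JSON.parse(data) as BoneRotation[];
}

// Wiring it up in the browser (the URL is invented):
// const ws = new WebSocket("ws://notochord.local:7681");
// ws.onmessage = ev => {
//   for (const r of parseFrame(ev.data as string)) {
//     // apply r.q to the bone named r.bone
//   }
// };
```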


(Of course, there's a lot of styling still to be done; we are currently working on implementing the core features.)

And who knows, perhaps in the future we can include your makehuman.js implementation directly in the remote console 😉


    daylanKifky ah cool! Where's the code? 😁 I already saw somewhere that COPP is also available over WS.

    And yes, integrating MakeHuman with Chordata someday has been on my mind too; it would be a great feature for those who don't want to model a human themselves.

    daylanKifky Please tell us a little more about this project.

    Here's the link to run it.

    I've been using MakeHuman in the past, but there were always some things that weren't quite how I would have liked them. There are also years of experience in that code base, e.g.:

    • The mesh has 1258 morph targets, controlled by 249 controllers, to model different ages, genders, etc.
    • The skeleton has 263 bones which are controlled by a smaller set of 'poseunits'.

    So, to learn about it and to be able to tweak it, I decided to port it. (And as a web app one gets platform independence and, amazingly, the performance is already nice without something like numpy.)
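    For readers unfamiliar with how controllers drive morph targets, this is roughly the idea, sketched with made-up data structures (the real makehuman.js types will differ): each target stores sparse per-vertex offsets, and a weighted sum of them deforms the base mesh.

```typescript
// Sketch of morph-target blending (types are invented for illustration).
// Each target stores sparse per-vertex offsets; controller weights scale
// them on top of the base mesh.

type Vec3 = [number, number, number];

interface MorphTarget {
  offsets: Map<number, Vec3>;   // vertex index -> offset from the base mesh
}

function applyMorphs(base: Vec3[], targets: MorphTarget[], weights: number[]): Vec3[] {
  // copy the base so the rest pose is never mutated
  const out: Vec3[] = base.map(v => [v[0], v[1], v[2]]);
  targets.forEach((target, i) => {
    const w = weights[i];
    if (w === 0) return;                    // skip inactive targets
    target.offsets.forEach((d, idx) => {
      out[idx][0] += w * d[0];
      out[idx][1] += w * d[1];
      out[idx][2] += w * d[2];
    });
  });
  return out;
}
```

    With sparse offsets, only the vertices a target actually moves are stored, which matters when there are over a thousand targets for one mesh.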

    The next step is to animate the whole thing:

    • Last week I began playing with Chordata, which I want to use for the body pose.
    • Google MediaPipe provides face, hand and body recognition from a video stream; so far I've been able to receive and visualise the face data (but not apply it to the skeleton yet).
    6 months later

    Hi. I finally began using the Chordata for real. This is how it looks now:

    • The display of the sensors is intended to help when mounting the KCeptors to the body (shake a sensor to identify it 🙂).
    • The pose calibration uses a timer and changing on-screen messages, so you can calibrate without the help of another person.
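    The timer-driven calibration could look something like this sketch; the step texts, durations, and helper names are invented, not the actual messages the app shows.

```typescript
// Sketch of a solo calibration countdown: walk through instruction messages
// on a timer so no second person is needed. All texts/durations are made up.

interface CalibStep { message: string; seconds: number; }

// Flatten the steps into one message per elapsed second, suitable for
// driving a setInterval(..., 1000) UI update.
function countdownMessages(steps: CalibStep[]): string[] {
  const out: string[] = [];
  for (const step of steps) {
    for (let t = step.seconds; t > 0; t--) {
      out.push(`${step.message} (${t})`);
    }
  }
  return out;
}

// In the app this would be consumed once per second:
// const msgs = countdownMessages([{ message: "Stand in N-pose", seconds: 5 }]);
// let i = 0;
// const id = setInterval(() => {
//   show(msgs[i++]);              // show() is a hypothetical UI helper
//   if (i === msgs.length) clearInterval(id);
// }, 1000);
```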

    Caveats: I am not using the WebSocket port on the Notochord yet, so my C++ proxy is still needed. Also, the app hosted on GitHub cannot access the Notochord (even with Access-Control-Allow-Origin set to "*"), so you would need to build and run it yourself.

    I'm currently stuck with the pose calibration failing (see my other posts). Once that is solved, I'll move on to actually animating a skeleton. Got it! 🙂

    5 days later

    Wow! This is looking great!
    I answered on the other post. Let us know how this project moves forward!

    My assistant wearing the KCeptors while I'm coding:

    daylanKifky We are currently working ... visualize the capture in real time in a webapp. ... Perhaps you will like to take a look a it, to copy the parts of code that are useful to your project.

    Which repository and branch can I look at?