rushboarduk
Hi!
Making the Chordata framework compatible with virtual production is definitely on our roadmap. We are starting some mid-term projects aimed at making it possible, so expect some news in the future.
In the meantime we would love to explore possibilities together with you guys. One of the ideas that came out of the community was to use a single Vive tracker to get reliable position tracking, while the existing suit provides the inner pose.
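Just to make the idea a bit more concrete, here is a very rough sketch of how the two streams could be combined inside Blender. It assumes the tracker position is already being streamed in as an empty (for example via some OpenVR/OpenXR bridge) and that the Chordata addon drives the bone rotations; the object names "ViveTracker" and "Chordata_Armature" are purely hypothetical, not part of the addon.

```python
# Hypothetical sketch: anchor the mocap armature's root to an external tracker,
# while the suit keeps driving the inner pose (bone rotations).
import bpy

armature = bpy.data.objects["Chordata_Armature"]  # hypothetical object name
tracker = bpy.data.objects["ViveTracker"]         # hypothetical object name

# A Copy Location constraint keeps the armature object at the tracker position.
con = armature.constraints.new(type='COPY_LOCATION')
con.target = tracker
```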
Our team is currently focused on the final steps of the framework rework: KCeptor++ testing and production, the Notochord Python module, and the new pose calibration. These tasks will probably occupy the whole first half of 2022 and will lay the groundwork for easily experimenting with new functionalities like the ones you are suggesting.
rushboarduk I've been wondering, therefore, if the K-Ceptors can be adapted to work in a similar way. Would the Beta Testers and Chordata be willing to explore experimenting with fish eye lenses on the sensors, to build a Virtual Production solution?
As you know, the Chordata system is expandable by design, so the quick answer to your question is: yes, from a hardware point of view they can be adapted to include things such as fisheye lenses, optical sensors, etc.
The challenge comes in the signal processing field. At the core of a system such as Antilatency's there is a sensor fusion algorithm which uses the information from the IR image together with the mIMU data to obtain absolute position. Of course they keep that development closed, and they have every right to do so, since it's at the core of their business.
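Just to illustrate the general principle (this is not Antilatency's algorithm, and the numbers are placeholders): a minimal complementary-filter style sketch, where the IMU dead-reckons position at high rate and occasional absolute fixes from an optical sensor pull the estimate back, limiting drift.

```python
# Minimal sketch of visual-inertial position fusion (illustrative only):
# the IMU integrates world-frame acceleration at high rate, and sparse
# absolute position fixes from an optical sensor correct the drift.
import numpy as np

class SimpleFusion:
    def __init__(self, alpha=0.05):
        self.alpha = alpha           # weight of each optical correction
        self.pos = np.zeros(3)       # estimated position (m)
        self.vel = np.zeros(3)       # estimated velocity (m/s)

    def imu_update(self, accel_world, dt):
        """Dead-reckoning step from world-frame acceleration (gravity removed)."""
        self.vel += accel_world * dt
        self.pos += self.vel * dt    # drifts quickly without corrections

    def optical_update(self, pos_measured):
        """Pull the estimate toward an absolute optical position fix."""
        self.pos += self.alpha * (pos_measured - self.pos)
```

A real implementation would of course use a proper estimator (e.g. an error-state Kalman filter) and also fuse orientation, but the structure is the same: fast inertial prediction corrected by slower absolute measurements.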
At this point I have to state: I'm speaking purely theoretically here. There's also the intellectual property aspect to be considered. They might have a patent protecting that particular setup of IR camera + fisheye lens + mIMU + ground/ceiling IR anchors, so anyone trying to build a similar system should take care not to infringe it.
The interesting aspect of that company, to me, is that it represents one of the first examples of a commercially available product which successfully fuses self-contained optical sensors with inertial data to obtain reliable absolute position and orientation. That's quite an active field of research in the scientific community, particularly in recent years: lots of different setups have been proposed, yet only a few make their way into a final product.
So, to wrap up this long answer: to make the Chordata system compatible with virtual production we should start by defining the actual set of sensors and anchors to be used, and that's where we would like to hear your ideas. Anything from crazy thoughts to real-life examples or academic papers will contribute to this. Let's leave this thread open for this discussion 🙂
PS: NakedRabbit, an alpha of the new version of the Blender addon, which implements the new pose calibration, has been released. Take a look here