While searching for information on OSC add-ons in Blender for an unrelated project, I came across this little gem on using OpenCV and a bit of Python to get real-time facial mocap working with BlenRig in Blender 2.8:
It's not perfect, as it looks like eye direction, blinks, and the tongue still need to be hand-animated (although maybe an enterprising coder out there could make some improvements), but it's a great starting point and looks fairly easy to test. My own test didn't get off the ground, though: I ran into issues with the ensurepip step (I'm still waiting to hear back from Gadget Workbench to figure out the problem), and I don't know anything about Python. When it comes to coding, my grasp is only slightly better than my half-remembered high school Spanish: I can generally get an idea of what's going on, and might even be able to muddle through, but there are huge gaps in my knowledge.
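In case it helps anyone else stuck at the same step: as I understand it, ensurepip is a standard-library Python module that bootstraps pip into whatever interpreter runs it. Here's a minimal sketch of that idea; it's my assumption of what the tutorial's step amounts to, not its actual code, and with Blender you'd run it using Blender's bundled Python executable (the path varies by install).

```python
# Sketch: make sure pip is available in the current interpreter.
# With Blender, run this via Blender's bundled Python, not your system Python.
import ensurepip
import subprocess
import sys

def bootstrap_pip() -> str:
    """Ensure pip exists for this interpreter, then report its version string."""
    try:
        import pip  # noqa: F401  # already installed? nothing to do
    except ImportError:
        # ensurepip installs pip from wheels bundled with the standard library
        ensurepip.bootstrap()
    return subprocess.check_output(
        [sys.executable, "-m", "pip", "--version"], text=True
    ).strip()

if __name__ == "__main__":
    print(bootstrap_pip())
```

Once pip is in place, the tutorial's dependencies (like OpenCV) can be installed into Blender's Python the usual way with `pip install`.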
Anyway, I think this would be a great addition to the Chordata suit. If the code could be changed to look for video from, say, a helmet-mounted IP webcam, then you could have body and facial data coming into Blender simultaneously.
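As a rough illustration of what that change might look like (this is a sketch of the general OpenCV approach, not the tutorial's actual code): OpenCV's `cv2.VideoCapture` accepts a network stream URL in place of a local camera index, so pointing it at a phone or helmet camera's MJPEG stream is mostly a one-line swap. The host, port, and `/video` path below are assumptions; check whatever IP-webcam app you use for its real URL.

```python
def stream_url(host: str, port: int = 8080, path: str = "video") -> str:
    """Build the HTTP URL many phone IP-webcam apps expose for their MJPEG stream.
    The default port and path are assumptions; confirm them in your camera app."""
    return f"http://{host}:{port}/{path}"

def capture_frames(url: str):
    """Yield frames from a network stream, where the tutorial would use
    cv2.VideoCapture(0) for a local webcam."""
    import cv2  # deferred import so the URL helper works without OpenCV installed

    cap = cv2.VideoCapture(url)  # a URL string works here, not just a device index
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

if __name__ == "__main__":
    import cv2

    # Hypothetical LAN address for the helmet camera
    for frame in capture_frames(stream_url("192.168.1.50")):
        cv2.imshow("helmet cam", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()
```

The facial-tracking code would then consume these frames exactly as it consumes local webcam frames, leaving the body data from the suit and the face data from the camera to arrive in Blender at the same time.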