Everything is set up.
highmatergogo: We're thinking about looking into this, but we were wondering if you had any ideas about how you'd go about implementing such a system?
I would start by reading the paper on which Madgwick bases his code. The process is explained in a fairly intuitive manner, so it should be easy to understand even for non-experts. You can skim through the first chapter, skip Part II, and instead focus on Part III, where the method is described.
After that you can start translating the code into Python.
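To give a feel for what that translation involves, here is a rough sketch of the IMU-only orientation update described in Part III of Madgwick's paper (gyroscope integration corrected by a gradient-descent step on the accelerometer). This is not the port itself; the function name, `beta`, and `dt` values are just illustrative defaults:

```python
import numpy as np

def madgwick_update(q, gyr, acc, beta=0.1, dt=1.0 / 256):
    """One step of the IMU (gyro + accelerometer) filter.

    q   : current orientation quaternion (w, x, y, z), unit norm
    gyr : angular rate in rad/s, shape (3,)
    acc : accelerometer reading, any scale (normalised below)
    """
    qw, qx, qy, qz = q
    ax, ay, az = np.asarray(acc, dtype=float) / np.linalg.norm(acc)

    # Objective function: mismatch between measured gravity and the
    # gravity direction predicted by q, plus its Jacobian.
    f = np.array([
        2 * (qx * qz - qw * qy) - ax,
        2 * (qw * qx + qy * qz) - ay,
        2 * (0.5 - qx * qx - qy * qy) - az,
    ])
    J = np.array([
        [-2 * qy,  2 * qz, -2 * qw, 2 * qx],
        [ 2 * qx,  2 * qw,  2 * qz, 2 * qy],
        [ 0.0,    -4 * qx, -4 * qy, 0.0],
    ])
    step = J.T @ f
    norm = np.linalg.norm(step)
    if norm > 0:
        step /= norm  # normalised gradient-descent correction

    # Rate of change of q from the gyroscope (0.5 * q ⊗ [0, ω]),
    # minus the accelerometer correction scaled by beta.
    gx, gy, gz = gyr
    q_dot = 0.5 * np.array([
        -qx * gx - qy * gy - qz * gz,
         qw * gx + qy * gz - qz * gy,
         qw * gy - qx * gz + qz * gx,
         qw * gz + qx * gy - qy * gx,
    ]) - beta * step

    q = q + q_dot * dt
    return q / np.linalg.norm(q)  # keep unit norm
```

The full gait-tracking script also uses the magnetometer variant and additional processing, but every section reduces to array operations like these, which is why NumPy maps onto the MATLAB code fairly directly.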
I forked the original repo here:
https://gitlab.com/chordata/gait-tracking-with-imu
(please send me your gitlab user so I can give you permission to commit code.)
In the repository I created a Jupyter notebook with the first steps done and comments on each section of the algorithm. The idea is to translate the sections to Python one by one. Special care should be taken with the checkpoints: at these steps a plot of the partial result is generated. It is imperative that the result is verified at these stages by comparing it with the MATLAB plots and values.
(I still have to add comments on the second part of the algorithm; will do soon.)
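Beyond eyeballing the plots, the checkpoint comparison can also be done numerically. A small helper along these lines (the function name and tolerance are just a suggestion, not something in the notebook) would catch divergence early; the MATLAB-side values could be exported to `.mat` files and loaded with `scipy.io.loadmat`:

```python
import numpy as np

def checkpoint(name, python_result, matlab_result, tol=1e-6):
    """Compare a partial result of the Python port against the value
    produced by the MATLAB/Octave run at the same checkpoint."""
    python_result = np.asarray(python_result, dtype=float)
    matlab_result = np.asarray(matlab_result, dtype=float)
    if python_result.shape != matlab_result.shape:
        raise AssertionError(
            f"{name}: shape mismatch "
            f"{python_result.shape} vs {matlab_result.shape}")
    diff = np.max(np.abs(python_result - matlab_result))
    if diff > tol:
        raise AssertionError(
            f"{name}: max abs difference {diff:.3e} exceeds {tol}")
    print(f"{name}: OK (max abs difference {diff:.3e})")
```

Running this at every checkpoint makes the "compare with the MATLAB plots and values" step reproducible instead of purely visual.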
If you are not familiar with Jupyter, or with this Python workflow in general, take a look at the explanations in the README. It also contains workarounds for possible problems if you are running the MATLAB scripts with GNU Octave (as I do), and there's a mention of another repository where the translation to Python is done (at least in part).
Let me know if I should explain anything further.
PS: The idea is to first translate the algorithm using the original datasets as a Rosetta Stone (comparing the partial outputs from both). Once this offline translation is done, we can go ahead and adapt it to work with real-time capture using the stream of data from the suit.
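The offline-to-real-time step mostly means keeping the filter state between samples instead of processing whole arrays. As a sketch of that adaptation (the class and its names are hypothetical, not part of the repo), any per-sample update function from the offline version could be wrapped like this:

```python
from collections import deque

class StreamingFilter:
    """Wrap a per-sample update function so the same code used in the
    offline translation can consume a live stream from the suit."""

    def __init__(self, update_fn, initial_state, history_len=1000):
        self.update = update_fn        # e.g. the orientation update step
        self.state = initial_state     # filter state carried between samples
        # Keep a bounded window of recent outputs for live plotting.
        self.history = deque(maxlen=history_len)

    def feed(self, sample):
        """Process one incoming sample and return the updated state."""
        self.state = self.update(self.state, sample)
        self.history.append(self.state)
        return self.state
```

The point is that once the offline port passes all checkpoints, the real-time version reuses the exact same update functions; only the driver loop changes.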