I tested receive rates today and noticed some odd behavior from the --odr flag. Since milliseconds can count when dealing with slip, I was documenting the average frequency for a range of odr values (50-90Hz in increments of 10, then 90-100Hz in increments of 1) and saw that the average at 100Hz, and only at 100Hz, was essentially in line with the average at 50Hz. Note that the script I am using to take these timings bundles the raw and quaternion packets back together, so the times measured are between each complete set of raw and quaternion values. The average for each odr value is graphed below. Only one sensor was used for these tests.
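For reference, the bundling logic can be sketched roughly like this. This is a minimal reconstruction, not the actual script: the packet kinds ("raw", "quat"), the pairing rule (a set is complete once both kinds have arrived), and the timestamp format are all assumptions about the stream.

```python
import statistics

def bundled_set_rate(events):
    """Pair incoming raw/quaternion packets into complete sets and
    measure the intervals between sets.

    `events` is a list of (kind, timestamp) tuples in arrival order,
    where kind is "raw" or "quat" and timestamp is in seconds.
    Returns (deltas, mean_rate_hz). Names and pairing rule are
    hypothetical, for illustration only.
    """
    set_times = []   # timestamp at which each raw+quat set completed
    pending = set()  # kinds seen since the last complete set
    for kind, t in events:
        pending.add(kind)
        if pending >= {"raw", "quat"}:  # both halves have arrived
            set_times.append(t)
            pending.clear()
    # intervals between consecutive complete sets
    deltas = [b - a for a, b in zip(set_times, set_times[1:])]
    if not deltas:
        return deltas, float("nan")
    return deltas, 1.0 / statistics.mean(deltas)
```

Feeding it an ideal 50Hz stream (a raw/quat pair every 0.02s) should report a rate of 50; the interesting part in practice is the spread of the per-set deltas, which is what the graphs below plot.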
Because I was using the data coming out of Blender, I wanted to verify that the problem also exists in the data coming directly out of notochord. The individual graphs look fairly different (a lot more variation in frequency coming out of Blender than out of notochord), but the averages are almost exactly the same.
Here you can see that the odr=50 and odr=100 graphs for each output are almost identical, not just equal on average.
Sample frequency variation from Blender, odr 50-100 in increments of 10 (panels ordered upper left to lower right); y axis 0-0.1 seconds.
Sample frequency variation from notochord, odr 50-100 in increments of 10 (panels ordered upper left to lower right); y axis 0-0.05 seconds.
I am currently on notochord commit d051c5a0fe10787a6b332bc5d0a2a871040dac2b, which I believe is the most recent significant version.
For now, I will be using --odr=97, as that gave the best results in my tests. I will, of course, have to test with multiple sensors to see how that affects the rate.
While I'm on the topic: is there any way to further increase the sample rate, especially for the raw values? My professor wants to push it as far as possible to gain those extra milliseconds.
Edit: Updated images for increased clarity