
Timestamp jitter #52

Open
jpapon opened this issue Aug 9, 2017 · 7 comments


jpapon commented Aug 9, 2017

Hey,

I've recently been working with an XSens IMU connected via USB, and I'm running into some timestamp jitter issues. Looking through the code, it appears you are using rospy.Time.now() as the message timestamp - i.e. the time the IMU data is received.

On my machine, there is considerable jitter in the received messages. For instance, with the IMU set to report at 200 Hz, if I run:
rostopic hz /imu/data -w 2

I see rates between 190 and 210 Hz reported. Furthermore, if I record a bunch of data, dump the timestamps to a file and look at the differences, there is considerable jitter, as well as occasional delays where the next data might not be published for 0.1 seconds.

To try to fix this, I've begun using the timestamps coming out of the IMU and simply calculating an offset from the ROS clock, so I can publish timestamps synced to ROS time. I've pushed this to my fork here (this is just test code, it would need to be cleaned up for a PR): https://github.com/jpapon/ethzasl_xsens_driver.git

The problem is that, even with this, I'm seeing a continual deviation between the clocks that builds up. I got around this by allowing up to 0.1 seconds of slop, after which the offset resets. In my tests, the clocks were off by about 0.01 seconds per second, which seems quite large to me.
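The idea is roughly the following (a minimal sketch with illustrative names, not the actual code in my fork):

```python
# Minimal sketch of the offset-with-reset approach (illustrative, not the fork's code).
import rospy

class ImuStamper(object):
    def __init__(self, max_slop=0.1):
        self.offset = None        # estimated (ROS clock - IMU clock), in seconds
        self.max_slop = max_slop  # allowed divergence before the offset is re-estimated

    def stamp(self, imu_time):
        """Map an IMU timestamp (seconds, device clock) to a ROS timestamp."""
        now = rospy.Time.now().to_sec()
        if self.offset is None or abs(imu_time + self.offset - now) > self.max_slop:
            # (Re)initialize the offset once the clocks have drifted too far apart.
            self.offset = now - imu_time
        return rospy.Time.from_sec(imu_time + self.offset)
```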

My main question is - is this jitter normal? Or is it a problem with my USB bus?
Doesn't it make sense to use the XSens time reference, rather than rospy.Time.now()?

Also, any idea if this is caused by using Python? Has there been any work on a C++ version of an XSens ROS node?

Cheers, and thanks!!


fcolas commented Aug 18, 2017

Hi,
It's an interesting question.
First, I don't believe it would be caused by Python, unless you notice that the node is taking 100% CPU. And I don't know of any effort to convert my node to C++. I know that there are (or were) other ROS XSens drivers (some in C++, based on an old C library from XSens), but I haven't checked them since the introduction of the fourth generation of their IMUs (which changed part of the protocol).

I observed a similar behaviour (continual deviation) a while ago on a serial-connected IMU (third generation). It was configured to output more data than the baudrate of the line could carry and, as a consequence, the time at which each message was received shifted more and more until a message was dropped by the IMU (as shown by the sample counter). (I'm not sure what we did about it: either take it into account, somehow lower the number of messages sent, or change the configuration so that it fits. However, the warning that changing the baudrate might not work dates back to this issue.)
On USB, the theoretical bandwidth should be more than enough, but that doesn't mean the specific components of the IMU couldn't limit the bandwidth (I believe that, at some point, the internal output was serial with a serial<->USB chip for the communication, in which case the internal serial bus would still be a bottleneck).
Under this hypothesis, it would be interesting to test various configurations to see whether we can determine a sort of practical bandwidth limit.
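For the serial case, a back-of-the-envelope check already shows when a configuration cannot fit (the message size below is an assumption; it depends on which outputs are enabled):

```python
# Rough check: does the configured output fit within the serial baudrate?
payload_bytes = 60    # assumed bytes of data per message (depends on configured outputs)
overhead_bytes = 5    # preamble, IDs, length, checksum (approximate)
rate_hz = 200         # configured output rate
baudrate = 115200     # line speed in bit/s

required = (payload_bytes + overhead_bytes) * rate_hz * 10  # ~10 bits per byte on a serial line
print("required: %d bit/s, available: %d bit/s" % (required, baudrate))
# If 'required' approaches or exceeds the baudrate, messages queue up, the receive
# time drifts further and further, and eventually the IMU drops a message.
```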

For the record, what was the exact configuration of your IMU when you observed this behavior? Did you try reducing the frequencies to see whether it makes a difference?

You are right in suggesting that, when available, we should use the timestamp of the IMU rather than the timestamp of the receiving computer. The slight issue with that is twofold:

  • there is no guarantee that, even when configured at the same frequency, the timestamp and other information are in the same message (although it seems to be the case in practice);
  • the timestamp might be relative (only GPS-enabled IMUs have absolute timestamps), which means we would need to estimate the offset; doing that correctly is not so easy (a rough sketch of one approach follows this list).
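One way to estimate such an offset (a sketch of the general idea, not something implemented in the driver) is to keep a running minimum of the difference between receive time and device time, since transmission delays can only make that difference larger. This still ignores drift between the clocks, which is precisely what makes doing it correctly non-trivial.

```python
# Sketch: offset estimation that is robust to transmission delays (but ignores drift).
class MinOffsetEstimator(object):
    def __init__(self):
        self.offset = None  # estimated (receive clock - device clock), in seconds

    def update(self, receive_time, device_time):
        # Delays only increase (receive_time - device_time), so the minimum
        # observed value is the best estimate of the true offset.
        candidate = receive_time - device_time
        if self.offset is None or candidate < self.offset:
            self.offset = candidate
        return self.offset

    def to_receive_time(self, device_time):
        return device_time + self.offset
```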

I hope it helps answer some of your questions.
Best,

@HannesSommer

@jpapon, I believe you are actually (partially) looking for one-way timestamp translation (typically done by estimating the offset and slope of the hardware timestamps based on the associated receive timestamps).
This is related to @fcolas' second bullet point.

It is not a very hard problem, but it is still some work to implement and test. We are currently working on a library solely for that purpose, designed to be used within (C++) sensor drivers: https://github.com/ethz-asl/cuckoo_time_translator. It also has Python bindings for the underlying algorithms (currently the "convex hull" algorithm and a Kalman filter).
Another implementation of the convex hull algorithm can be found in http://ori.ox.ac.uk/ticsync/.
This publication, https://eprints.qut.edu.au/82427/, is not primarily about the topic, but it explains both the problem and a solution nicely to some extent (section B, "Learning a One-Way Clock Mapping"). The authors also announce a ROS library that should contain an implementation, but I could not find it so far.
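At its core, one-way translation just means fitting a mapping from device time to receive time and evaluating that mapping instead of using the raw receive time. A very rough sketch (plain least squares; the actual algorithms are more careful about delayed packets, outliers and numerical conditioning):

```python
# Very rough sketch of one-way timestamp translation:
# fit receive_time ~= slope * device_time + offset over a sliding window,
# then stamp messages with the fitted line instead of the raw receive time.
import numpy as np

class LinearTimeTranslator(object):
    def __init__(self, window=500):
        self.device_times = []
        self.receive_times = []
        self.window = window

    def update(self, device_time, receive_time):
        self.device_times.append(device_time)
        self.receive_times.append(receive_time)
        if len(self.device_times) > self.window:
            del self.device_times[0]
            del self.receive_times[0]

    def translate(self, device_time):
        if len(self.device_times) < 2:
            # Not enough samples yet: fall back to the last receive time.
            return self.receive_times[-1] if self.receive_times else device_time
        slope, offset = np.polyfit(self.device_times, self.receive_times, 1)
        return slope * device_time + offset
```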

@fcolas , I hope you don't mind the little advertisement :).


fcolas commented Nov 28, 2017

@jpapon I've pushed a working branch with additional tools to help look into this issue. Namely, the node publishes the time it took to get the measurement from the IMU through the driver (/acquisition_time), the difference between the current and ROS timestamps (/delta_stamp), and the difference between the current timestamp and the time reference from the IMU (/delta_time_ref).
What I observe is indeed a drift that builds up and gets reset between the two clocks; however, it is more around 0.0015 seconds per second (see graph below).
Can you try it on your IMU?
[plot: imu_time_drift]
Note that the "noise" seems correlated between the acquisition time, the stamp difference and the difference with the time reference (example below). I'd need to look at a lower level than the node to see what is happening.

[plot: imu_jitter]

(Note that I'm doing these tests in a virtual machine, but it is still using ntp.)


jpapon commented Nov 28, 2017

@fcolas Interesting, I'll try it out when I have some time. I actually wound up throwing together my own C++ node using the xcommunication library, and that eliminated the problem for the most part. There was still a small amount of jitter, but I suspect that was due to us also reading from 5 RealSense cameras on the USB bus.


skohlbr commented Feb 2, 2019

@jpapon Is the minimal C++ driver you mentioned available somewhere? I'm running the ethzasl_xsens_driver on a very low-power Atom and it's taking ~40% CPU, which is why I'd be interested in a more lightweight option.


skohlbr commented Feb 6, 2019

Here's the jitter I'm getting with an MTi-30 set to 50 Hz. The plot shows the difference between the header timestamps of consecutive published IMU messages. Ideally, this should be a line parallel to the x axis at y = 0.02 s.

[plot: mti_30_plot]
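For anyone who wants to reproduce this kind of plot, the differences can be computed along these lines (sketch; assumes the data was recorded with rosbag and published on /imu/data):

```python
# Sketch: differences between consecutive header stamps of recorded IMU messages.
import rosbag

stamps = []
with rosbag.Bag('imu.bag') as bag:  # assumed bag file name
    for _, msg, _ in bag.read_messages(topics=['/imu/data']):
        stamps.append(msg.header.stamp.to_sec())

diffs = [b - a for a, b in zip(stamps, stamps[1:])]
# At 50 Hz every value should ideally be 0.02 s.
print(min(diffs), max(diffs), sum(diffs) / len(diffs))
```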


skohlbr commented Mar 3, 2019

So the new 2019 SDK by Xsens includes a ROS C++ driver implementation. It has a few peculiarities (such as looking up ROS params on the parameter server in sensor data callbacks, leading to huge CPU consumption by the ROS master). After fixing those, I integrated https://github.com/ethz-asl/cuckoo_time_translator as suggested by @HannesSommer. It seems to work very well; the timestamp jitter is eliminated completely.

Jitter with data stamped via ros::Time::now() on receipt, IMU transmitting at 50 Hz (blue plot):
[plot: xsens_cpp_driver_jitter_receive_stamps]

Jitter with the cuckoo time translator (blue plot again):
[plot: xsens_cpp_driver_jitter_cuckoo_time_translator]
The red plot is the difference between the timestamp in the message and the receive time in my debugging node. It's interesting that the step pattern is visible there again, once more confirming the previously discussed clock drift.
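That check boils down to something like this (minimal sketch, not my actual debugging node; topic name assumed):

```python
# Sketch: log the difference between receive time and the stamp in the message header.
import rospy
from sensor_msgs.msg import Imu

def callback(msg):
    delta = (rospy.Time.now() - msg.header.stamp).to_sec()
    rospy.loginfo("receive - stamp: %.6f s", delta)

rospy.init_node('stamp_delay_monitor')
rospy.Subscriber('/imu/data', Imu, callback)  # assumed topic name
rospy.spin()
```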
