WIP: Optimise quaternion computations #114
Conversation
There might be some more inspiration for improvements from these packages. I particularly like this one, which adds a quaternion dtype to numpy.
We're hoping to make a new release in the not-too-distant future! It would be nice to include any optimisations to the runtime of quaternion operations. I think the slow speeds we see are a consequence of using a list of quat objects for some operations, which is then slow to convert back to a NumPy array for efficient vectorised calculations, as you mention.
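To illustrate the conversion cost being discussed, here is a minimal sketch. The `Quat` class and its `quat_coef` attribute are hypothetical stand-ins for DefDAP's per-pixel quaternion object, not its actual API:

```python
import numpy as np

class Quat:
    """Hypothetical stand-in for a per-pixel quaternion object."""
    def __init__(self, a, b, c, d):
        self.quat_coef = np.array([a, b, c, d])

# A map stored as a list of Quat objects must be unpacked element by
# element (a Python-level loop) before any vectorised NumPy operation
# can run on it:
quat_list = [Quat(1, 0, 0, 0), Quat(0, 1, 0, 0)]
quat_array = np.array([q.quat_coef for q in quat_list])  # shape (n, 4)
print(quat_array.shape)  # → (2, 4)
```

If the map were stored as an `(n, 4)` float array from the start, this per-element conversion overhead would disappear entirely.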
Ah, cool to hear @rhysgt! I will try to pick this up again (or I'm happy to set up a time where we can pair up on it, as I am still a bit lost about the roles of all the data structures in DefDAP). By the way, would you and/or @mikesmic mind checking the Zulip chat re reading Jie's data with overlapping bins? (The data is linked in the zip and I can't read it correctly with DefDAP — I get NaNs for the strain maps.) (Maybe I should just open an issue 😅 but right now it's bedtime and I wanted to make sure it's on your radar before release. 😊)
Thanks @mikesmic for your guidance re the quaternion product — that helped me understand the code in `calc_sym_eqvs`, which is currently the slowest part of reading in a file. I've vectorised most of the operations and... see identical performance! 😅 So more work is needed. The other thing is that `extract_quat_comps` takes up ~50% of the runtime, and that should be ~0 if the internal data structure were a NumPy array to begin with. The other target of optimisation is that operations like `np.cross` are more efficient when the coefficients are in certain axes, so I'll play with that too.
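For reference, a minimal sketch of what a vectorised quaternion product could look like. The function name `quat_prod` and the scalar-first `(..., 4)` layout are assumptions for illustration, not DefDAP's actual API:

```python
import numpy as np

def quat_prod(q1, q2):
    """Hamilton product of batches of quaternions stored as (..., 4)
    arrays, scalar part first. A sketch, not DefDAP's implementation."""
    a1, v1 = q1[..., 0], q1[..., 1:]
    a2, v2 = q2[..., 0], q2[..., 1:]
    scalar = a1 * a2 - np.einsum('...i,...i->...', v1, v2)
    # np.cross operates on the last axis by default; keeping the vector
    # components there avoids the axis moves that non-default
    # axisa/axisb/axisc arguments would force.
    vector = a1[..., None] * v2 + a2[..., None] * v1 + np.cross(v1, v2)
    return np.concatenate([scalar[..., None], vector], axis=-1)

# Sanity check: i * j = k under the Hamilton convention.
i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
print(quat_prod(i, j))  # → [0. 0. 0. 1.]
```

Because every quaternion lives in one contiguous `(n, 4)` array, the whole map's products can be computed in a single call instead of looping over quat objects.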