I’m not sure it’s a math issue per se, but I do think the approach is making things more complicated than they need to be. Notably, the behavior becomes most predictably strange when the orientation passes certain critical points, particularly when the device’s “forward” direction transitions from pointing below the horizon to above it or vice versa. That suggests to me that much if not all of the weirdness is coming from the calculation of the deltas on lines 82 through 84. Because those deltas are being reverse-computed from the device orientation, they are sensitive to angle wraparound boundaries and are liable to do strange things when, for example, rotation goes from -179 to 180 and the difference comes out as 359 degrees instead of the more sensible -1 degrees. Combine that with the overdescribed nature of unit quaternions (q and -q represent the same rotation), and though I haven’t fully debugged it myself (it would take me a while going back and forth between my laptop and my phone), I think this approach leaves you with a lot of potential confusion to deal with.
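If you do stick with reverse-computing deltas, the wraparound problem at least has a standard fix: normalize each difference into the (-180, 180] range before using it. A minimal sketch (the function name is just illustrative, not from your code):

```javascript
// Signed shortest angular difference in degrees, so crossing the
// -180/180 seam gives -1 instead of 359.
function shortestAngleDelta(fromDeg, toDeg) {
  let delta = (toDeg - fromDeg) % 360;
  if (delta > 180) delta -= 360;   // e.g. 359 becomes -1
  if (delta < -180) delta += 360;  // e.g. -340 becomes 20
  return delta;
}

console.log(shortestAngleDelta(-179, 180)); // → -1, not 359
console.log(shortestAngleDelta(170, -170)); // → 20, not -340
```

This only papers over the seam in the per-axis deltas, though; it doesn’t address the quaternion sign ambiguity, which needs its own handling (typically flipping the sign of one quaternion when the dot product between successive samples goes negative).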
Device orientation is actually a significantly harder problem than it appears on the surface. The approach you’re using most directly reminds me of gyroscope dead reckoning, which can work short-term if you use the correct rotational velocities and handle all the edge cases, but even that’s insufficient long-term. I once worked on a drone flight controller that used gyroscope dead reckoning, and we observed it start wandering away from level after only a few minutes because our (cheap) gyroscope was actually good enough to pick up orientation changes due to the rotation of the planet. Basically, it’s not a problem I’d recommend solving yourself if you can avoid it, especially when the browser provides a device orientation API for you.
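To make the drift concrete, here’s a toy illustration (not real flight-controller code; the bias value is a made-up but plausible figure for a cheap MEMS gyro) of why pure integration wanders: any constant bias in the measured rate accumulates without bound.

```javascript
// Toy dead-reckoning drift demo. The true orientation never moves,
// but a small constant gyro bias gets integrated into the estimate.
const biasDegPerSec = 0.01; // hypothetical sensor bias
const sampleHz = 100;
let estimatedHeading = 0;   // degrees; ground truth stays at 0 throughout

for (let i = 0; i < 5 * 60 * sampleHz; i++) { // five simulated minutes
  const measuredRate = 0 + biasDegPerSec;     // truth + bias
  estimatedHeading += measuredRate / sampleHz; // naive integration
}

console.log(estimatedHeading.toFixed(2)); // → "3.00" degrees off level
```

For scale, Earth’s rotation is about 0.004 deg/s, so a gyro sensitive enough to notice it will also faithfully integrate that into your estimate unless something (an accelerometer, a magnetometer, or the browser’s own sensor fusion) corrects it.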
Circling back to your interest in using two rotations at the same time, I think the easiest way to avoid much of the complexity would be to do it through parenting: have one transform node that’s controlled directly by the browser’s device orientation (without any recalculations in the middle), then have the camera that spherical panning is manipulating be a child of that. (You could achieve something similar by having spherical panning control the camera while device orientation controls the photo dome, or vice versa.) This should work fairly well in the general case for combining two rotational modalities; however, in this particular case, I think even this technique is going to get weird – not to implement, but to use. Device orientation is real-world related, and so the camera can take on any rotation it wants; but spherical panning has an inherent notion of gravity, and it uses that notion to ensure that the camera’s “right” vector always lies within the XZ plane. Using parenting to do the rotations as described above shouldn’t negatively impact this math, but as rotation is manipulated by multiple things, spherical panning’s notion of gravity will cease to correlate with the visuals and will become hard to understand for users, potentially leading to confusion as gimbal constraints (what happens when you try to look straight up or down in spherical panning) kick in in unexpected places.
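Under the hood, parenting just composes the two rotations by quaternion multiplication: the child’s world rotation is the parent’s rotation times the child’s local rotation, so neither controller needs to know about the other. A minimal sketch of that math (plain JS, not any particular engine’s API; the variable names are illustrative):

```javascript
// Hamilton product: composes two rotations, parent-then-child-local.
function quatMultiply(a, b) {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
  };
}

// Build a rotation quaternion from a unit axis and an angle in radians.
function quatFromAxisAngle(axis, angleRad) {
  const s = Math.sin(angleRad / 2);
  return { w: Math.cos(angleRad / 2), x: axis[0] * s, y: axis[1] * s, z: axis[2] * s };
}

// Parent node: device orientation (say, 90° about world Y).
const deviceOrientation = quatFromAxisAngle([0, 1, 0], Math.PI / 2);
// Child camera: spherical panning's local rotation (another 90° about Y).
const panningLocal = quatFromAxisAngle([0, 1, 0], Math.PI / 2);
// World rotation of the camera is simply the product.
const cameraWorld = quatMultiply(deviceOrientation, panningLocal);
console.log(cameraWorld); // ≈ { w: 0, x: 0, y: 1, z: 0 }, i.e. 180° about Y
```

The catch described above falls out of this composition: spherical panning’s gravity constraint is applied in the camera’s local frame, but once the parent rotates that frame away from the world, “straight up” in panning terms no longer matches “straight up” on screen.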
The upshot of all that is just that I’m not sure these two rotational modalities work well together in general, though maybe your use case won’t have the problems I speculated about. If you want to use them together, though, using the device orientation directly and combining rotations through parenting is the way I’d recommend doing it. Hope this helps, and best of luck!