Hi Kubig,
Your version looks better, but given this fix:
Resource Manager now no longer terminates unexpectedly while closing a SIP Transaction. Previously, this situation sometimes caused an assert in the stack. (ER# 300007729)
and this fix from the previous release, 8.1.502.85:
Now Resource Manager does not terminate due to the following race condition: one thread detects a socket read failure and closes the socket, while another thread sends a message on the same socket at the same time. (ER# 305054141)
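Just so we are talking about the same kind of bug: that note describes a classic close-vs-send race on a shared socket. Here is a minimal sketch of the hazard and the usual fix (guard the descriptor with a lock and re-check it before sending), in plain C++ with POSIX sockets; all the names are mine and this is obviously not RM code:

    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>
    #include <mutex>
    #include <thread>

    struct Connection {
        int fd = -1;        // socket descriptor, -1 once closed
        std::mutex mtx;     // guards fd against the close/send race
    };

    // Reader thread: on a read failure it closes the socket. Doing the
    // close and the fd reset under the lock is what prevents the crash.
    void reader(Connection& c) {
        char buf[512];
        if (recv(c.fd, buf, sizeof buf, 0) <= 0) {  // peer gone or error
            std::lock_guard<std::mutex> g(c.mtx);
            close(c.fd);
            c.fd = -1;      // senders must re-check before using it
        }
    }

    // Sender thread: takes the same lock and re-checks the descriptor.
    // Without this, fd can be closed (and even reused by the kernel for
    // another connection) between the validity check and send().
    bool sender(Connection& c, const char* msg) {
        std::lock_guard<std::mutex> g(c.mtx);
        if (c.fd < 0) return false;     // socket already closed, drop
        // MSG_NOSIGNAL: fail with EPIPE instead of raising SIGPIPE
        return send(c.fd, msg, std::strlen(msg), MSG_NOSIGNAL) >= 0;
    }

    int main() {
        int sv[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        Connection c;
        c.fd = sv[0];
        close(sv[1]);       // force a read failure on c.fd
        std::thread r([&] { reader(c); });
        std::thread s([&] { sender(c, "MESSAGE"); });
        r.join();
        s.join();
    }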
and this one:
Resource Manager (RM) now correctly handles an invalid action that it receives in messages from another RM node in High Availability. Previously, RM could terminate in this situation. (ER# 304433067)

do I need to upgrade?
The "funny" thing is that we had to upgrade from previous version (don't remember what was) because of this bug:
Resource Manager now no longer terminates unexpectedly while closing a SIP Transaction. Previously, this situation sometimes caused an assert in the stack. (ER# 300007729)
The whole joke is that the SIP Server version previously installed with that RM had yet another bug: when the MSML service became unavailable and someone put a call on hold, SIP Server terminated unexpectedly. I think you can guess why I know these details so well...
Well, that's a thing of the past. Today I have the following complaints:
1. A situation where the active RM sends new call information to the backup and gets a response before it can update its own transaction record. Engineering believes this is a rare situation, but during the last ~6 months I have hit it 3 or 4 times, so I think you will see a new release note about it. The bad part is that RM crashed with a core dump in this case (see the first sketch after this list).
2. Sometimes (I don't know why, but last time it happened immediately after item 1) the nodes in the RM cluster fall out of sync. The interesting thing is that both nodes see each other and send/receive HB messages (I suppose), but there are no inter-node updates with the current status. The nodes can run like this for several days (4 days, by my observation); after that, see item 3. The second sketch after this list shows what I mean about heartbeats proving liveness but not sync.
3. Sometimes (last time it occurred 4 days after item 2) the primary node receives an INVITE and... that's all: no other messages are sent or received. BUT the backup node "thinks" the primary is in service! Moreover, because of item 2 (or something else), if you kill the primary node, the secondary does not become primary =( Epic fail. Over the last 2 days I saw this twice, at different sites.
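On item 1, as far as I understand it, the whole problem is ordering: the active node must write its own transaction record before replicating to the backup, and it must tolerate an ack for a transaction it does not know instead of asserting. A minimal sketch of that safe ordering, with made-up names, not the real RM internals:

    #include <map>
    #include <mutex>
    #include <string>
    #include <thread>

    struct TxRecord {
        std::string callId;
        bool acked = false;
    };

    class ActiveNode {
        std::mutex mtx;
        std::map<int, TxRecord> txTable;  // local transaction records

        // Hypothetical replication hook; in real life this would send
        // the record to the backup over the HA link. Stubbed out here.
        void replicateToBackup(int /*txId*/, const TxRecord&) {}

    public:
        void newCall(int txId, const std::string& callId) {
            TxRecord rec{callId};
            {
                std::lock_guard<std::mutex> g(mtx);
                txTable[txId] = rec;        // update own record FIRST
            }
            replicateToBackup(txId, rec);   // the ack can now race safely
        }

        // Runs on the I/O thread when the backup's response arrives. If
        // the local record were created only after replication, this
        // lookup could miss, and an assert here would take the node down.
        void onBackupAck(int txId) {
            std::lock_guard<std::mutex> g(mtx);
            auto it = txTable.find(txId);
            if (it != txTable.end())
                it->second.acked = true;
            // unknown txId: log it and drop, never assert/crash
        }
    };

    int main() {
        ActiveNode n;
        std::thread t1([&] { n.newCall(1, "call-1"); });
        std::thread t2([&] { n.onBackupAck(1); });  // may even arrive first
        t1.join();
        t2.join();
    }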
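And on item 2: heartbeats only prove the peer process is alive; they say nothing about whether state updates are actually flowing. This is a sketch of the check I wish RM did itself: track the last heartbeat and the last state update separately and raise an alarm on "alive but stale" instead of silently running like that for days. The fields and thresholds are hypothetical:

    #include <chrono>
    #include <cstdint>
    #include <cstdio>

    using Clock = std::chrono::steady_clock;

    // Hypothetical per-peer bookkeeping: heartbeats and state updates
    // are tracked separately, because one flowing does not imply the other.
    struct PeerStatus {
        Clock::time_point lastHeartbeat{};
        Clock::time_point lastStateUpdate{};
        uint64_t          lastStateSeq = 0;   // seq of last sync message
    };

    // "Alive but stale": heartbeats keep arriving while no state update
    // has been seen for far longer than the sync interval. This is
    // exactly the item-2 situation, and it deserves an alarm.
    bool aliveButStale(const PeerStatus& p,
                       std::chrono::seconds hbTimeout,
                       std::chrono::seconds syncTimeout) {
        auto now = Clock::now();
        bool alive = (now - p.lastHeartbeat) < hbTimeout;
        bool stale = (now - p.lastStateUpdate) > syncTimeout;
        return alive && stale;
    }

    int main() {
        PeerStatus peer;
        peer.lastHeartbeat   = Clock::now();   // HB channel looks fine
        peer.lastStateUpdate = Clock::now() - std::chrono::hours(96); // 4 days
        if (aliveButStale(peer, std::chrono::seconds(10),
                                std::chrono::seconds(60)))
            std::puts("ALARM: peer alive but state sync is stale");
    }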
I know it looks strange, and everything works fine in the lab, but in production it's crazy %)
Looking at release notes full of fixes like "in very rare cases" and "RM can become unstable"... I don't believe this is a well-prepared product.
Sorry for the emotions. If I could throw RM away, I would. In 7.6 it was an optional component... good times.