How reliable are bias servo methods?

Bias servo methods that adjust the quiescent current of a class AB output stage typically place a diode or transistor on the heatsink so that the bias voltage between the bases of the output transistors is reduced as temperature rises, opposing the increase in quiescent current with temperature. Are these control loops stable, or do they sometimes fail, with the quiescent current rising to excessive, perhaps damaging, levels? I have done some testing with the classic "rubber diode" bias servo, and it seems that once the quiescent current exceeds a certain value, a runaway effect may occur. Is there a good way to apply additional negative feedback to the bias servo to make the servo loop stable?
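For readers unfamiliar with the term: the "rubber diode" is a Vbe multiplier, a transistor with a resistive divider from collector to base to emitter, which forces a collector-emitter voltage of roughly Vbe scaled by the divider ratio. A minimal sketch of that relationship (the component values are hypothetical, and base current is ignored):

```python
# "Rubber diode" (Vbe multiplier) bias spreader: the transistor forces its
# base-emitter voltage across R2, so the total collector-emitter voltage is
# approximately V_ce = V_be * (1 + R1 / R2).  Values below are illustrative.
def vbe_multiplier_voltage(v_be: float, r1: float, r2: float) -> float:
    """Approximate spreader voltage, ignoring base current."""
    return v_be * (1.0 + r1 / r2)

v_be = 0.65               # V, sensing transistor at its operating point (assumed)
r1, r2 = 4700.0, 2200.0   # ohms, hypothetical divider values
print(f"{vbe_multiplier_voltage(v_be, r1, r2):.2f} V")  # ~2.04 V bias spread
```

Because the spreader voltage is a multiple of a junction voltage, its temperature coefficient is the junction's tempco scaled by the same ratio, which is what makes it usable as a thermal sensor in the first place.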
 
First, bias control is a very low-speed loop, with a reaction time on the order of seconds: heat propagates relatively slowly from the output-stage (OPS) device dies to the temperature sensor (a diode or another BJT). The designer must deliberately slow the thermal control loop down; in practice, something like 1 uF from base to collector of the sensing transistor is enough to remove any oscillation.
Second, the servo's temperature coefficient, in mV/°C, must match or exceed the combined coefficient of the output devices' base-emitter junctions.
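To make the coefficient matching concrete: a Vbe multiplier scales the sensing transistor's roughly -2 mV/°C junction tempco by the same (1 + R1/R2) ratio as the voltage, so the divider ratio can be chosen so the spreader's tempco covers the summed tempcos of the junctions it biases. A back-of-envelope check (all numbers are illustrative assumptions):

```python
# The Vbe multiplier's temperature coefficient is the junction tempco scaled
# by the same (1 + R1/R2) factor as the voltage.  For full compensation, its
# |tempco| should match or exceed the summed |tempco| of the driver and
# output junctions in the bias chain.  All numbers are assumptions.
TEMPCO_VBE = -2.0e-3   # V/°C, typical silicon BJT junction (approximate)

n_junctions = 4        # e.g. two drivers + two output devices (assumed)
required = abs(n_junctions * TEMPCO_VBE)           # 8 mV/°C needed

multiplier_ratio = 1.0 + 4700.0 / 2200.0           # hypothetical divider
servo_tempco = abs(TEMPCO_VBE) * multiplier_ratio  # what the spreader delivers

print(f"required {required*1e3:.1f} mV/°C, servo {servo_tempco*1e3:.1f} mV/°C")
print("sufficient" if servo_tempco >= required else "undercompensated")
```

With these example values the servo delivers about 6.3 mV/°C against the 8 mV/°C needed, i.e. it undercompensates; in practice the ratio (or a trimmer in the divider) is adjusted until the measured quiescent current is flat or falls slightly with temperature.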
Last, thermal coupling between the output devices and the sensor should be as tight as possible. There are even ThermalTrak devices with a sensing diode built onto the same die as the output BJT, although unfortunately those diodes cannot be used directly in the VAS bias node because of thermal-matching problems at the relatively high current flowing there.
 
Assuming a complementary emitter follower output stage, emitter resistors are normally used to prevent thermal runaway. They provide some fast local feedback to the output stage in addition to the slow thermal loop.
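One way to see why emitter resistors help: at a fixed base bias voltage, the output device's Vbe drops by roughly 2 mV/°C, the resulting extra voltage lands across Re and raises the quiescent current, and the extra dissipation re-heats the junction through the thermal resistance. Runaway needs that loop gain to exceed one. A rough sketch of the criterion (the ~2 mV/°C tempco and the Vce, Re, and thermal-resistance values are assumptions, and the thermal servo is taken as not acting, i.e. worst case):

```python
# Crude thermal-runaway criterion for one output BJT with emitter resistor Re,
# assuming a fixed base bias voltage.  A junction-temperature rise lowers Vbe
# by ~2 mV/°C; that extra voltage appears across Re, raising the current by
# (2 mV/°C)/Re per degree; the added dissipation Vce * dI re-heats the
# junction via the thermal resistance theta (°C/W).  Loop gain > 1 -> runaway.
def thermal_loop_gain(vce: float, re: float, theta: float,
                      dvbe_dt: float = 2.0e-3) -> float:
    """Dimensionless thermal loop gain; values above 1 suggest runaway."""
    return theta * vce * dvbe_dt / re

# Illustrative numbers (all assumed): 35 V across the device, 0.22 ohm
# emitter resistor, 2 °C/W junction-to-ambient with a decent heatsink.
g = thermal_loop_gain(vce=35.0, re=0.22, theta=2.0)
print(f"loop gain ≈ {g:.2f}")
```

With these numbers the gain comes out around 0.64, comfortably below one; shrinking Re, raising the rail voltage, or using a smaller heatsink (larger theta) all push it toward runaway, which matches the observation in the question that trouble starts above a certain quiescent current.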

By the way, you can also get rid of the thermal sensing altogether and build a separate loop that senses the output device currents and regulates some nonlinear function of those currents.
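As an illustration of what such a nonlinear function could look like (this is a conceptual sketch, not any specific published circuit): regulating the smaller of the two half currents, or their harmonic mean, pins down the quiescent current at idle while letting either half carry arbitrarily large signal current.

```python
# Conceptual sketch: instead of sensing temperature, sense the two output-half
# currents and servo a nonlinear combination of them to a setpoint.  At idle
# (i_top == i_bot == Iq) both measures below equal Iq, so the loop fixes the
# quiescent current; under drive one half's current grows large and the
# measure stays dominated by the smaller, idle-side current.
def min_current(i_top: float, i_bot: float) -> float:
    return min(i_top, i_bot)

def harmonic_mean(i_top: float, i_bot: float) -> float:
    return 2.0 * i_top * i_bot / (i_top + i_bot)

iq = 0.1  # 100 mA quiescent current (illustrative)
print(min_current(iq, iq), harmonic_mean(iq, iq))  # both equal Iq at idle
print(harmonic_mean(2.0, 0.05))                    # under drive: near the small side
```

Because such a loop responds electrically rather than thermally, it sidesteps the slow heatsink time constant entirely, at the cost of more complex current sensing around the output stage.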