Hvz,
Something I've implemented in my OS X project is an accelerated AGC algorithm. Essentially, an audio threshold is established (user configurable) - for instance, a target level of -14 dB for a rolling average. The rolling average window is configurable, as are the attack and release times. These are set to be slow, and the attack and release behave non-linearly (logarithmically).
Some example settings:
Threshold: -14dB
Attack: 2 sec
Release: 6 sec
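Roughly, those times would translate into per-sample smoothing coefficients something like this (a minimal sketch in C; the names, the 44.1 kHz rate and the one-pole form are just my assumptions, not anything from your code):

[code]
#include <math.h>

/* Example AGC settings from above (all user-configurable) */
#define SAMPLE_RATE   44100.0   /* assumed sample rate */
#define THRESHOLD_DB    -14.0
#define ATTACK_SEC        2.0
#define RELEASE_SEC       6.0

/* Convert a time constant (tau) in seconds into a one-pole smoothing
   coefficient; the smoothed value covers ~63% of a step in tau seconds. */
static double tau_to_coeff(double tau_sec)
{
    return exp(-1.0 / (tau_sec * SAMPLE_RATE));
}
[/code]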
The initial hold time would be set to the time constant (tau) of the attack. (It isn't really a constant, since it's user-configurable - it's a constant in the programming sense: set once and not changed during normal operation.) If your attack time is 2 seconds, your "hold time" is 2 seconds. The average - a sum of the audio across the whole spectrum - is judged to exceed or fall below the threshold over that attack time. If it keeps exceeding the threshold, the audio is attenuated with a logarithmic decay until the average is brought back down to the threshold. The same applies to the release time in the opposite direction (with the hold time still equal to the attack time).
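To make that concrete, here is a bare-bones version of the slow stage, continuing from the constants and tau_to_coeff() above. The rolling average, the hold counter and the dB-domain (logarithmic) gain ramp are my own guesses at an implementation, not a claim about how it has to be done:

[code]
/* Slow AGC stage: compares a rolling average against the threshold and
   walks the gain in the dB domain, so the attenuation decays logarithmically. */
typedef struct {
    double avg;      /* rolling average of the absolute level (linear) */
    double gain_db;  /* current gain of the slow stage, in dB */
    int    hold;     /* samples left before attenuation may start */
} SlowAGC;

static double slow_agc_sample(SlowAGC *s, double in)
{
    const double avg_coeff = tau_to_coeff(ATTACK_SEC);

    /* Rolling average of the level, then the post-gain level in dB */
    s->avg = avg_coeff * s->avg + (1.0 - avg_coeff) * fabs(in);
    double level_db = 20.0 * log10(s->avg + 1e-12) + s->gain_db;

    if (level_db > THRESHOLD_DB) {
        if (s->hold > 0)
            s->hold--;                                   /* hold time = attack time */
        else
            s->gain_db -= (level_db - THRESHOLD_DB)
                          / (ATTACK_SEC * SAMPLE_RATE);  /* slow logarithmic decay */
    } else {
        s->hold = (int)(ATTACK_SEC * SAMPLE_RATE);       /* re-arm the hold */
        s->gain_db += (THRESHOLD_DB - level_db)
                      / (RELEASE_SEC * SAMPLE_RATE);     /* slow rise */
        if (s->gain_db > 20.0)
            s->gain_db = 20.0;                           /* don't ramp forever on silence */
    }

    return in * pow(10.0, s->gain_db / 20.0);
}
[/code]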
After that, a final limiter algorithm (behaving exactly like yours does) is employed, which bumps the gain up or down based on instantaneous analysis.
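Something like this is all I mean by that last stage (again just a sketch; the ceiling parameter and the recovery speed are arbitrary numbers of mine, not how yours works):

[code]
/* Final limiter: clamps instantaneously on overshoot, recovers slowly.
   Call per sample after the slow AGC; *gain should start at 1.0. */
static double limiter_sample(double in, double ceiling_db, double *gain)
{
    double ceiling = pow(10.0, ceiling_db / 20.0);   /* e.g. -1 dB */

    if (fabs(in) * (*gain) > ceiling)
        *gain = ceiling / fabs(in);                  /* instant gain reduction */
    else
        *gain += (1.0 - *gain) * 0.0005;             /* slow recovery toward unity */

    return in * (*gain);
}
[/code]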
This is just a thought. I can't get Stereo Tool to quit "diving" (attenuating quickly) when I try to employ some more aggressive normalization.
...Still trying to figure out how you've done this with .NET!

Xcode offers an interface for Audio Units. These are easy to work with, as the classes for audio are readily available.
http://developer.apple.com/mac/library/ ... ction.html
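For what it's worth, getting the default output unit going from plain C is only a handful of calls with the stock Core Audio component API (error checks left out; the render callback that actually pulls your samples would be attached with AudioUnitSetProperty and kAudioUnitProperty_SetRenderCallback):

[code]
#include <AudioUnit/AudioUnit.h>

/* Create and start the system default output Audio Unit (error checks omitted). */
static AudioUnit start_default_output(void)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_DefaultOutput;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(comp, &unit);
    AudioUnitInitialize(unit);
    AudioOutputUnitStart(unit);
    return unit;
}
[/code]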
If there is a way to do this already, and I am missing the point - I apologize for bringing it up!