BBC Dip

Celef, was the pink noise coherent?

I struggle a bit to describe this, but pink noise sounds a bit hollow and dull when listened to over two speakers in a stereo setup. When I play only one speaker the tonal balance changes quite a lot and becomes brighter; the interference is obvious when using two speakers. I do not understand how a dip in the loudspeaker frequency response can cure this!?

Even more notably, when I watch films the dialogue can often sound dull. I use only a two-speaker setup.
 
The BBC used a mono signal for testing stereo quality, which surely must have been in phase, yet you are saying that a mono signal of pink noise will exhibit a collapsed stereo image.

Yes and yes it does, coincident with the comb filtering action.

You are also saying that an out-of-phase signal will appear brighter than that developed from a genuinely laterally positioned sound source.

Yes I am.

These both seem to me to be counter intuitive given the nature of constructive and destructive interference.

The combing nulls from the in-phase signals correspond to the peaks in the comb from the out-of-phase signals - and vice versa. Hence the mathematical relationship.

Central image collapse occurs in the nulls of the in-phase signals.

But the out-of-phase signal (nulls included) is also devoid of the "head shadowing" effects that would be apparent for genuine laterally positioned sound sources, hence it appears bright.
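To put numbers on that relationship (using an arbitrary 0.25 ms path-length difference purely for illustration):

```python
import numpy as np

dt = 0.25e-3          # assumed inter-path delay of 0.25 ms, illustrative only
k = np.arange(4)

# A signal summed with a delayed copy: |1 + exp(-j*2*pi*f*dt)| has nulls where
# f*dt is an odd multiple of 1/2; |1 - exp(-j*2*pi*f*dt)| has nulls where f*dt
# is an integer, which is exactly where the in-phase comb peaks (and vice versa).
nulls_in_phase = (2 * k + 1) / (2 * dt)   # 2, 6, 10, 14 kHz
nulls_out_of_phase = (k + 1) / dt         # 4, 8, 12, 16 kHz (= in-phase peaks)
print(nulls_in_phase, nulls_out_of_phase)
```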
 
Incidental to this, which still seems a bit 'fuzzy' to me, I think that many current films have very poor speech quality, e.g. a cowboy in the desert sounding as though he is in a wardrobe. Voices in the open are thin and lacking bass, and certainly do not have mic proximity effects.
 
Voices in the open are thin and lacking bass

It sounds like a good description of the real sound of the voice in the open.

Much like the white/gold or blue/black dress problem, which depends on one's brain's normalisation to either artificial or natural light to determine what colours you perceive, so must the normalisation of room modes on the human voice determine how one expects the human voice to sound. Most people of Western-culture background must be normalised to hear internal spaces as part of the character of the human voice.
 
...pink noise sounds a bit hollow and dull when listened to over two speakers in a stereo setup. When I play only one speaker the tonal balance changes quite a lot and becomes brighter; the interference is obvious when using two speakers

You appear to be describing stereo comb filtering, which is not what the dip can compensate for. To compensate for stereo combing you need a centre speaker.
 
You appear to be describing stereo comb filtering, which is not what the dip can compensate for. To compensate for stereo combing you need a centre speaker.

Aha, no wonder I thought it sounded strange to add a dip in the frequency response, thank you for telling me! :)

May I say I find the tonal balance to be restored if I sit with a speaker setup that is similar to headphone listening, but then there is no phantom image to talk about.
 
Of course the modelling of frequency response of headphones is another aspect to this, and IME the sound from them is much more like listening directly on the tweeter axis, and hence toppy.

I don't think they compare well, and that the headphone sound cannot reliably be used as a reference to aim for with a speaker.
 
Essentially our hearing is less sensitive to laterally originating sound sources in the frequency band in question. When such energy is encoded in a stereo recording and then replayed from (predominantly) the front, the reproduction will appear overly-bright. Hence a dip compensates for the apparent excess energy in this band (to a degree dependent on all the factors I have described previously).
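Purely as a sketch of the shape (the centre frequency, depth and width here are illustrative guesses, not anyone's published specification):

```python
import numpy as np

# Illustrative numbers only: a shallow ~3 dB dip centred near 2.5 kHz, roughly
# an octave wide, applied to an otherwise flat axial response.
f = np.logspace(np.log10(20), np.log10(20000), 500)   # 20 Hz - 20 kHz
f0, depth_db, width_octaves = 2500.0, 3.0, 1.0

octaves_from_centre = np.log2(f / f0)
response_db = -depth_db * np.exp(-0.5 * (octaves_from_centre / width_octaves) ** 2)
# response_db stays near 0 dB except for the gentle suck-out around f0.
```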

It's the job of the mastering engineer to do this: the dip would be added to the EQ so that everything sounds good on flat-response monitors.

If you then play this correctly mastered programme on BBC dip monitors, it will be correcting this twice.

Or are you saying that this effect adds so much excess energy that it keeps reappearing, no matter how many times you EQ it out, either via the monitor's crossover filters or via mastering EQ? A bit like the purse that replenishes its money every time it's spent.

I'm equally confused as to how the excess energy keeps re-appearing and so requiring a BBC dip.

It sounds like it would have a place in live monitoring of a signal for broadcast, but not at all for recording or replay at home (unless one only listens to live-broadcast audio that hasn't been adjusted for this effect), and it is therefore bad engineering for anything sold outside the industry. Unless of course people want to market this colouration because it has already been accepted by people using inappropriate speakers at home!
 
Or have the cognitive ability to filter out what is important

.. and only be left with what is unimportant? I don't quite understand this reply :) sorry. You may well be conflating my previous comment with the rest of the thread. My comment, to which you replied, was strictly only a response to a particular post and not related to anything else previously said...
 
.. and only be left with what is unimportant? I don't quite understand this reply :) sorry. You may well be conflating my previous comment with the rest of the thread. My comment, to which you replied, was strictly only a response to a particular post and not related to anything else previously said...

My comment reflected on our perceptive ability to remove aspects of room acoustics and the like from what we perceive, as I have commented previously in this thread and others recently. Possibly I confused the threads I have been responding to, but it remains hugely overlooked that our perception relies on third-order "bispectral" processing, yet we curtail our analyses and modelling to second-order.
 
what does it mean?

Copied from another thread concerning the perception of resonances to save me writing out something similar again...

I have commented in a couple of other threads recently concerning the notion of the ears and brain as a phase-insensitive (second-order) spectrum analyser that is based upon Ohm's Acoustical Law (that the ears are insensitive to phase). This thread also seems relevant to the subject...

If we take an example of two pure tones, we can (slowly) vary the phase of one relative to the other and be unable to tell the difference audibly. Thus we conclude that the phase response is not relevant and that we only need concern ourselves with the magnitude response. Yet there are plenty of examples, such as the audible differences widely reported between linear and minimum phase EQ, that according to Ohm should not be audible. We are left seeking some form of caveat to Ohm's Acoustical Law (whilst acknowledging it still models the dominant cues in our auditory perception very well).
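To make the two-tone example concrete (tone frequencies and phase offset chosen arbitrarily):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs   # one second of samples

# The same pair of pure tones, with the phase of the second tone shifted in b.
a = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)
b = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t + np.pi / 3)

# The magnitude spectra are identical to numerical precision, even though the
# waveforms (and the phase spectra) differ.
print(np.max(np.abs(np.abs(np.fft.rfft(a)) - np.abs(np.fft.rfft(b)))))
```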

The comments in this thread and my other thread contributions concerning drums and plucked strings point to inter-spectral relationships in our perceptual models. That is to say, the resonance of a drum is bound in our perception to the spectral components of the impact that excited it. One analysis capable of linking such spectral patterns is the third-order "bispectrum". It is notable that the bispectrum was first suggested in an application such as this many decades ago, and whilst seemingly ignored thereafter, it has become widely used in other areas of cognitive analysis.

The bispectrum is an outstanding candidate for being incorporated in our auditory perception given, for example, its ability to discern a sound source separately from the acoustic environment (relevant to some comments earlier in this thread), and its ability to discern one sound source from another of similar spectral content (especially useful for those of us lucky enough to be at a cocktail party). (As far as I am aware, there is no good reason why we should exploit the trispectrum and no good reason to think we could exploit higher orders.)
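For the curious, a naive (and slow) direct estimate of the bispectrum, averaging X(f1)·X(f2)·X*(f1+f2) over signal segments:

```python
import numpy as np

def bispectrum(x, seg_len=256):
    """Naive segment-averaged direct bispectrum estimate B(f1, f2)."""
    window = np.hanning(seg_len)
    half = seg_len // 2
    acc = np.zeros((half, half), dtype=complex)
    n_segs = 0
    for start in range(0, len(x) - seg_len + 1, seg_len):
        X = np.fft.fft(x[start:start + seg_len] * window)
        for k1 in range(half):
            for k2 in range(half):
                acc[k1, k2] += X[k1] * X[k2] * np.conj(X[(k1 + k2) % seg_len])
        n_segs += 1
    return acc / max(n_segs, 1)

# Components whose frequencies and phases are quadratically coupled (a drum
# resonance bound to the impact that excited it, say) show up as peaks;
# unrelated components with similar magnitude spectra average towards zero.
```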

We might instead consider adding a time element to "Ohm's spectrum analyser" (possibly modelling the convergence of our cognitive apparatus on some prior learned auditory percept?) whereby we home in on resonances as time passes after the initial transient. Effectively our spectrum analyser trades off time resolution for frequency resolution, and we end up with Ohm's steady-state spectrum analyser. Like so many aspects of audio engineering, this was described long ago by Michael Gerzon in his AES paper "Super-Resolving Short-Term Spectral Analyzers". Once again, this method requires spectral traces to be combined in some percept that allows us to distinguish transient "quality".
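A toy illustration of that trade-off (window lengths chosen arbitrarily):

```python
fs = 48000   # sample rate, Hz

# A short analysis window localises a transient well in time but smears it
# across frequency; a long window resolves a resonance finely but blurs timing.
for n in (128, 1024, 8192):
    print(f"window {1000 * n / fs:6.1f} ms  ->  bin width {fs / n:6.1f} Hz")
```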

So whilst controlling loudspeaker fundamental Q-factors and other resonances (including those due to the acoustic environment) remains predominant in establishing audibly "clean" transients, there exists the possibility that other spectral elements might also influence our perception - at least in the short term. I am a proponent of phase compensation of the low-frequency roll-off of a loudspeaker, which (IMHO) delivers "cleaner" and "tighter" bass that, even accounting for group-delay audibility thresholds, Ohm says should not be audible. Maybe Ohm lacked only high-power processing tools, but it appears that simply examining the relationship between input and output spectra in an audio system may not be sufficient: there might be inter-spectral elements that second-order analyses are destined only to obscure.

Hope that helps...
 
It would still have a myriad of errors. Recording and reproduction are engineering compromises.

I am not sure of the reason for the caveat re "bass", barring compensation of the proximity effect. The free-field qualification is also confusing.

Sorry, "freed" should be "free" of course.

My bass comment was re the room reinforcement which would no longer occur.
 
Room reinforcement is a somewhat arbitrary term too. In the recording environment, the acoustics are absolutely required - how much depends on the artistry of the recording engineer. At the reproducing end, room modes can be an issue, but in most cases they can be dealt with sufficiently well. But the practical lack of ability to record and recreate a sound field around a listener remains a compromise - hence the dip in stereo.
 
If we have decided the BBC dip is a 2 kHz sort of crossover suckout, this Classix II design by Paul Carmody might be of interest:

Classix II - undefinition

He calls it the "Flying Squirrel"! :D

The crossover for this speaker is what I'd like to call a "flying squirrel." What I mean is that both drivers just barely reach into one another's frequency spectrum, so the crossover between the two is almost a stunt-like flying leap. With a lot of coaxing on my part, the end result comes across sounding pretty seamless, and very natural, if I may say so.

Apparently good with ropy recordings. Who knows?
 
Of course the modelling of frequency response of headphones is another aspect to this, and IME the sound from them is much more like listening directly on the tweeter axis, and hence toppy.

I don't think they compare well, and that the headphone sound cannot reliably be used as a reference to aim for with a speaker.

Sure, headphones have some drawbacks and flaws, but the directness to the music is just fantastic with headphones, and I can play as loud as I want without disturbing anyone.

I think these parameters are somewhat important: minimising the colouration from the room, and the sound level.
 