Cardioid Bass

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
al2002 said:

Just curious, why do you think so?

Jean-Michel Le Cléac'h does think so. He is a very respected man in the hifi world; he achieved great things, like his horn design or his filter setup, never having business in mind. He also backs up his claims like a real scientist. For these reasons I tend to think like him. But if you prove to me that he is wrong with solid arguments, I will change my mind... The truth is subjective, isn't it? ;)

His original document is written in French; I will translate as well as I can: "People often refer to the Blauert and Laws criterion as an argument tending to prove that phase distortion is inaudible. The Blauert and Laws criterion is of little relevance in hifi. The fact that wave trains emitted by two drivers seem to come from a single source does not mean that phase distortion is not audible."

In the original language:
On fait souvent référence au critère de Blauert et Laws comme un des arguments tendant à prouver que la distorsion de phase ne s'entend pas.
Remarque : le critère de Blauert et Laws est peu pertinent en haute-fidélité. Le fait que des trains d'ondes émis par deux hautparleurs semblent provenir d'une seule source ne signifie pas que la distorsion de phase n'est pas audible.

Now if you need further information, you would have to ask the author himself. He posts sometimes here.

Regards,
Etienne
 
WithTarragon said:
But there are some real problems in scaling and detection that interact with memory, expectation, context, alertness and attention.

Hi Tom - that quote above really caught my eye.
Tell you why.

Back when I was 18 years old, the Air Force tested my hearing. Sure wish I still had a copy of that test - it would give me great bragging rights. My hearing was literally "off the chart". But why? Because I have superhuman dog/cat ears (as my wife claims)? No. Because I wanted to do well on the test.

I understood the test, being an audio geek, and gave it my full concentration and effort. Thus I did much better on the test than the average GI - not because my ears were better, just because I was paying attention - I cared about the test.

This must not be new to you at all, just wanted to mention it here as a good example of how one listener can really skew the results by simply paying attention. :D
 
WithTarragon said:
. But I think we are probably both in agreement that there are many, many other forms of nonlinearities and when those forms are not steady state they are less easily measured and probably not very well understood (physically or perceptually).

A little confused here, since most nonlinearities are "steady state" or "time invariant" (I've never seen one that wasn't), and actually nonlinear systems are very well understood. It's how we perceive them that is at issue, and the current standards are pretty meaningless in that regard.


However different systems do sound "different" although they may measure similarly. The nature of that difference is ultimately quantifiable. Sometimes it is easy and sometimes it is not so easy.
So I think there is still an important relation to be understood between these physical anomalies (distortion in its many forms, dispersion, response etc, etc) and the consequent perception (audibility, detectability, accuracy etc etc). I suspect we are not in a terrible disagreement on this point.
-Tom

I would totally agree that it is all quantifiable, and it would be wonderful to do that work and develop that understanding. Unfortunately, in the end everyone would just say "well it sounds .... to me" and all that work goes out the window. To quantify something requires a reference to quantify it against. If that reference is, as it is today, a personal preference, then the reference is not fixed and is ever changing, and quantification becomes meaningless.

Codecs and audio prosthetics are areas of real science - codecs waning these days - but loudspeakers never have been, IMO, and concert hall acoustics' day has passed. Today everything is "multi-purpose" and acoustics is simply not a big factor.

I have become increasingly frustrated in my endeavors to create high sound quality, because it is not something that has any value. It's a boutique marketplace in audio these days, at both the production and reproduction stages, and there isn't really any place in it for science.
 
Etienne88 said:

Thank you John for your summary. I like the cardioid principle for its ease of positioning!

As usual I have some questions! ;)
Do OB or acoustic resistance boxes induce GD? I have the feeling that an OB will have no to very little GD, and that an acoustic resistance box will have levels comparable to a closed box...

Regards,
Etienne

All these systems are minimum phase and as such will introduce frequency dependent GD. That is, it doesn't matter what type of woofer system it is. What matters is the low frequency cut off and slope. You can not get around it.

There are 3 basic factors which introduce nonlinear phase shift and frequency dependent GD: the low frequency cut off of the system, the high frequency cut off of the system, and crossovers. It is possible to design a crossover so that it introduces no GD, at least at some design point, and the high frequency cut off really isn't much of an issue, since it typically introduces a constant GD of the linear phase variety over the pass band, starting somewhat below the cut off point. But the GD associated with the high pass nature of the low frequency cut off is really unavoidable. To remove it requires extension of the low frequency cut off well below the audible range, or digital processing.

But the bottom line is that once the system is in a room you are dealing with the GD associated with the in-room response at the listening point. That is, it is the GD of the coupled room/woofer(s) that matters, not what the woofer does in free space.
 
small aside to clarify

john k... said:


You can not get around it.

<and>

To remove it requires extension of the low frequency cut off well below the audible range or digital processing.

Just in case someone misses those last 3 very important words: it is worth emphasising the point that you can get around "it" by means of appropriate digital filtering of the signal. (Certainly when minimum phase.)

Note that I'm not arguing with any other points made in this thread, nor suggesting this as a solution to any particular problem, just reiterating that compensation of the low frequency GD of a speaker is certainly possible, when desired. (Not, generally, an ensemble; not, generally, the field in a room.)

While the experts know that, a few less experienced readers might miss the significance of the 3 words.

Ken
 
Re: small aside to clarify

kstrain said:


Just in case someone misses those last 3 very important words: it is worth emphasising the point that you can get around "it" by means of appropriate digital filtering of the signal. (Certainly when minimum phase.)

Note that I'm not arguing with any other points made in this thread, nor suggesting this as a solution to any particular problem, just reiterating that compensation of the low frequency GD of a speaker is certainly possible, when desired. (Not, generally, an ensemble; not, generally, the field in a room.)

While the experts know that, a few less experienced readers might miss the significance of the 3 words.

Ken

Stax headphones were designed with very low resonance just to get around this problem. Those headphones were used for sound quality testing at ALL of the car companies, and playback of LF was critical. However, on further study, it was found that LF perception has a tactile element that is still missing with headphones, and without it the sound is just not realistic. After adding in subs, etc. to create the tactile sensation, the group delay problem was back, but the sound was much more realistic. Everyone uses subs now.

One can talk about "perception thresholds" for all of these various aberrations, but until you "scale" the effects against all of the other aberrations involved in the trade-offs, you have no idea whether these effects are worth the trade-off or not.

Use multiple subs - of any kind! - and be done with it. The rest is details lost in the noise.
 
Cardioid with multiple loudspeakers ?

Voice-coil inductance and driver suspension compliance are both nonlinear parameters which vary with amplitude/displacement during music-time.

Waveform-induced energies become stored and later released by a loudspeaker driver in the first instance, and also sequentially and separately via air motion within an enclosure assembly; this latter aspect is then in series with the driver's non-linearities due to the driven input at any moment, yet with all energy exchanges arising on an on-going basis.

Minimum phase has been assumed again. There appears to be an understanding of loudspeakers here as if they are in simple isolation, which is NOT what happens when a SS NFB amplifier drives a dynamic loudspeaker. Tube drive is obviously different, with its own equally significant issues.

Voltage drive leads to greater energy storage within a LS system than would be dissipated by a resistor, and this additional energy remains *trapped* within the LS system until independently transduced in LS time at natural intrinsic frequencies, or as heat.

If drivers were resistive then EQ would be okay, but that is not where we are at !

If an eventual sine-energised LS amplitude response is compensated for by amplifier line-level equalisation, then the dynamic response is also being modified, and this cannot fail to be rendered significantly incorrect wrt an original waveform, especially with NFB-controlled voltage drive, because the equalisation cannot compensate for driver/load parameters which vary non-linearly in time due to music-waveform drive, and which trap amplifier energy differently to that observed via sine analysis.

If you equalise until you get a flat SPL response when a driver has roll-off or resonance (bandpass cabinets being much worse in this regard) within the desired bandwidth, then either music-waveform-induced phase response differentials introduce group delay disturbance with attendant amplitude error during waveform time, or, if the phase response has additionally been electronically developed to accurately track the frequency of amplitude correction, then the amplitude response in time from, say, the hammer of a kick drum or pluck of a bass string leads to the first half cycle response (and sometimes longer) being too low/high due to the fixed correction (either analogue or digital) failing to match the eventual dynamic loudspeaker response when compared to the amplitude a steady sine would eventually develop.

No one has responded to Panomaniac's question in Post#159 !

Sometimes the pre-equalised response of a bass loudspeaker to a kick drum waveform can be inadequate for the first half cycle due to it being set up for the second cycle onwards where driver release of energy stored during the first half cycle becomes additive to ongoing transduction.
Composite output from the EQed loudspeaker for the kick then becomes a 'duff' wavefront which sucks more than it blows during the first cycle, and which really is something we should care about !
This sound becomes recognisable, but only to those who have heard better and then start to figure out that something is not 'right'.
They might have initially been persuaded by an exact steady sine SPL plot, but the reproduction will always sound as if something is not quite right, as indeed it is not !

Amplitude - phase - time period. Change any one and another becomes altered.

Bass loudspeaker systems have neither a flat response nor 'linear' characteristics unless running unrealistically quietly, so when we equalise their amplitude response we modify either their phase response, or their dynamic response prior to correct amplitude development during the known period of time which is necessary for that correction to become stably established.

When reproducing music, EQed waveform *induced* changes within the loudspeaker do not remain time coincident with the compensated drive, and can actually end up inducing phantom peaks (dips are less noticeable) which were not part of an original waveform, and which might be worse or not arise at all if an amplifier or a loudspeaker is changed !

Clearly a sensible compromise should be possible, which leads to asking what is really audible, as with group delay etc.
We are informed that 0.8/F is acceptable; however, once an effect has become recognisable it can annoy greatly. And yet that recognition is a 'learning' process, so a blind testing panel might not have a clue as to what could subsequently become a recognisable annoyance, possibly with some specific type of music reproduction which did not trigger sufficient recognition to become statistically relevant at an original test sitting.

The group delay, and any variation eventually generated by music waveforms via real-world dynamic loudspeaker systems with respect to an original voltage waveform prior to equaliser compensation, changes dynamically with the music and is quite different to the fixed one observed via steady sine investigations. Hence it will modify harmonic relationships more than the THD of any reasonable amplifier will !

Maybe the 'experts' already do have their heads around this - maybe not - but we must be wary of possible posturing attempts to persuade those who are not expert that the *impossible* is possible via some expensive EQ system, which can likely help, but most certainly cannot cure LF driver section problems.

Thus separate EQ-generated outputs, via dipole/monopole drivers having different dynamic waveform-induced reactive variation, when used to generate a steady-sine-observed cardioid response, will also cause the developed response and its single rearward null to become conical, with the angular variation of the null cone apex around the rear of the composite varying with music drive in music time; this in a manner which simply cannot be observed via sine investigation !


Cheers ............ Graham.
 
Graham Maynard said:
If you equalise until you get a flat SPL response when a driver has roll-off or resonance (bandpass cabinets being much worse in this regard) within the desired bandwidth, then either music-waveform-induced phase response differentials introduce group delay disturbance with attendant amplitude error during waveform time, or, if the phase response has additionally been electronically developed to accurately track the frequency of amplitude correction, then the amplitude response in time from, say, the hammer of a kick drum or pluck of a bass string leads to the first half cycle response (and sometimes longer) being too low/high due to the fixed correction (either analogue or digital) failing to match the eventual dynamic loudspeaker response when compared to the amplitude a steady sine would eventually develop.

This is the longest sentence I have ever read. :D

Graham,

do you have anything (scientific) that supports what you are claiming?



/Peter
 
Hi Pan,

I have written longer sentences ! LOL.
I need to get the words down and then try and make them readable. If you look at the last modified time - I ran out !

Unfortunately I have nothing to hand for Posting up.

Once I check something out for myself, I move on. In those days I used floppies, and I have had several computer crashes since. Also there is not sufficient spare time to re-do and present technical support, especially as it is of no benefit to myself because I've already done it.

However, I did suggest how anyone could check this out for themselves back in Post#151. Watch out for the inadequate first half cycle, and how voltage pre-emphasis can actually make driver current errors worse; ie. help one aspect but degrade others in 'music-time'.

There is no free lunch !

Cheers ......... Graham.

PS. I was not able to finish correcting my expression for the rear null variation either, because that is of course driver, wall and room distorted too, not just a simplistic conicular development.
 
Pan said:

So the GD would be dependent on the frequency response. A high-order slope at the highpass would make GD higher than a shallow slope would. The least GD and phase distortion would come from a wide-bandwidth device with shallow roll-off at both ends.
/Peter

john k... said:

What matters is the low frequency cut off and slope.

Thank you guys for your answers!

Graham, reading you is an interesting challenge for me! :D

Regards,
Etienne
 
I must say that I don't agree with Graham at all. If the system is linear, EQ will correct it. If you EQ based on amplitude and phase, then it can be made theoretically perfect. If only phase, then you still have amplitude errors due to the system's finite bandwidth, but things will be correct in time. Now, if the system (speaker) is nonlinear, then the nonlinearity is there before and after. The linear aspects of the system can be corrected by EQ, but not the nonlinearity. It was there before EQ and remains after.

A system response to any input is given by convolution of the input with the systems impulse response.

O(t) = I(t) <x> S(t)

where <x> means convolution, O is the output, I the input, and S the system impulse.

In a perfectly linear system with bandwidth from DC to light, S(t) is the perfect impulse. With limited bandwidth S(t) will have a tail and result in both time (phase) and amplitude distortion: amplitude distortion due to the roll-offs, and time distortion due to phase shifts (or vice versa). Now, suppose that the system has some nonlinearity in it. In the most general case we could say that the system impulse is a function of time and the input. Thus S = S(t,I(t)). So what we can do is factor the system impulse into the linear impulse of the ideal system with the same bandwidth, S(t), and an error associated with the system nonlinearity, e(t,I(t)) = S(t,I(t)) - S(t)

So the real system has response of

O(t) = I(t) <x> {S(t) + e(t,I(t))}

Now, any applied Eq will also have its own impulse. For argument I will assume the EQ is linear.

Thus the eqed output is

O(t) = I(t) <x> EQ(t) <x> {S(t) + e(t,I(t))}

= {I(t) <x> EQ(t) <x> S(t)} + {I(t) <x> EQ(t) <x> e(t,I(t))}

The final error in the output is

E(t) = O(t) - I(t)

= {I(t) <x> EQ(t) <x> S(t)} + {I(t) <x> EQ(t) <x> e(t,I(t))} - I(t)

However, since the EQ is applied to correct the linear part of the system response, the first term,

I(t)<x> EQ(t) <x> S(t) = I(t)

since the eq will correct the linear part of the system impulse to a perfect impulse. So

E(t) = I(t)<x> EQ(t) <x> e(t,I(t))

which tells us the correct Eq is capable of removing all linear error (in the time domain). Linear error is that which arises from limited bandwidth and the resulting linear aspects of energy storage. What remains, the nonlinear error, can be written as

E(t) = Eq(t) <x> I(t) <x> e(t,I(t))

and the error in the EQed system is basically just the nonlinear error of the original system with EQ applied.

Now, if the original system has any reasonable degree of accuracy in the first place then e (nonlinear error) should be small. Thus the total error should remain small before or after eq is applied. If it is large before hand, then it will remain large after Eq.
 
john k... said:
I must say that I don't agree with Graham at all.


John

I agree with you about Graham - I don't agree either - and your analysis is correct for electronics, or at a single point in space, but it doesn't hold for three-dimensional problems like acoustics. In other words, you cannot correct a three-dimensional problem with a one-dimensional solution. I know you know that, but I don't want others to get the idea that electronic EQ can perfectly correct a loudspeaker, because it can't, and this is a widely held erroneous belief.
 
We are in total agreement. I meant to add the caveat about it being applicable only at a single point in space. But I was in a hurry as I was about to head out to play tennis, and obviously scampered off before my thoughts were complete. :) Thank you for making that clarification.

I wanted to make the argument in the time domain to make the point that the eq doesn't foul up the initial or transient part of the response associated with the dominant linear portion of the system.
 
Hi Etienne.

You have no idea what a challenge I am to myself !
Maybe it takes more than one post to clarify.

___________________________________________________
Hi John,

You say that if a system is linear then EQ will correct it.
I agree, and that is what your maths are based upon.

However you also state that things will be correct in time.
With real world drivers - no; the drivers are not resistive !

Energy dynamically stored within a driver is not released at the same frequency at which it was energised (see transient responses); the release alters ongoing waveshapes with respect to both the amplitude and time axes, and EQ cannot counter this in advance of that driver's energy storage/release.

There can also be considerable storage of energy within a LF driver, this being exacerbated by the way in which a NFB-controlled output stage effectively generates whatever current is necessary (against the back-EMF) to hold the output voltage waveform correct.

Your calculations do not include this additional current drive which is out of phase with voltage and which modifies loudspeaker *energy* wrt the original voltage waveform in music time.


Hi Earl,

You say that EQ can't perfectly correct a LS response.
So did I, but you still disagree with what I wrote ?

I attempted to explain from the LS *system* point of view, which includes the source. Whether I can make myself understood is another matter, which might be beyond my capabilities, but no worry.

Earl, you also say that you cannot correct a three dimensional problem with a one dimensional solution.
Exactly !
Here I suggest you are thinking about room dimensions and the internally reflected energies therein which become trapped as and after the wavefront energises the room.

So what about the LS driver and its enclosure ?

Internal driver/air spring storage effects and cabinet reflections can and do cause reproduction dips/peaks after resonant/wavelength related time periods via similar mechanisms, just as within a room.

EQ which is used to reduce a room harmonic peak also reduces the initial dynamic startup at that frequency - also - it alters all harmonic relationships where that frequency is a fundamental or component - and - similar happens when EQing to counter a driver/enclosure's natural characteristics!

The three dimensions you mention relate to wavelength - and thus to 'time period's.

This was why I wrote -
"Amplitude - phase - time period. Change any one and another becomes altered."

Given that dipole+monopole cardioid relies upon matched responses for the radiation pattern to develop, and that EQ cannot compensate for the different driver/cabinet characteristics, only a limited bandwidth of null can be managed, and that rear null characteristic will also vary in music time due to unavoidable natural dynamic amplitude and phase differentials which cannot be EQ-compensated for.

JohnK generates cardioid resistively for the lowest frequencies. And yet where a dipole+monopole cardioid system might have the most significant advantage is in generating a narrow-band null which takes out the most significant corner reflections that induce corner-site peaks/dips, typically around 100Hz, where the frequency of the unwanted response depends on the distance of any non-cardioid loudspeaker system from the corner walls.
This might be especially useful if the response could be tunable for a range of corner/wall distances.

EQ alone can reduce this same corner/wall reflection peak, but not without simultaneously modifying the overall dynamic response to all fundamental/harmonic relationships at that frequency.


Cheers ......... Graham.

PS. John you posted whilst I wrote this.
I suspect that a consensus is developing, and suggest that the statement of disagreement comes from different viewpoints at the times of writing/reading.
 
Graham Maynard said:


Hi John,

You say that if a system is linear then EQ will correct it.
I agree, and that is what your maths are based upon.

However you also state that things will be correct in time.
With real world drivers - no; the drivers are not resistive !


Being resistive has nothing to do with whether a system is linear or not. A purely resistive system can still be nonlinear. Stored energy does not indicate nonlinearity; it indicates the presence of reactive elements. Mass is reactive, but it is not nonlinear. For a mildly nonlinear system (i.e. one in which the linear aspects of the system are dominant) EQ will correct the linear aspects of the system in time and frequency. The nonlinearity can be expressed as an error, e, representing the deviation from linear behavior. If e is small to start with, relative to some observation point, as it is in any good system, then it will remain small after EQ. There is no caveat about the system being resistive or not. Being resistive doesn't make the system linear, it makes it dissipative. Being linear only implies that if the input is i(t) with FFT = I(w), then the output will be O(w) with IFFT = o(t). If the system were nonlinear, the output would in general be O'(w, w^2, w^3, ...) with some IFFT = o'(t).

The error in O' compared to O is

E(w) = O'(w) - O(w)

and the IFFt of E(w) is

e(t) = o'(t) - o(t).

The problem becomes how to determine e. The usual distortion tests are only an indication of how e behaves for a specific input. But the system being nonlinear, it does not follow that if the error when a sine wave of frequency w1 is applied to the system is e1, and that for a sine of w2 it is e2, then when w1 + w2 is applied the error is e1 + e2. That is, e(w1,w2) is not necessarily e(w1) + e(w2). Obviously this is not the case if IM is present, since then e(w1,w2) would have additional components at m·w1 +/- n·w2, etc., which are not present in e(w1) or e(w2).
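The non-additivity is easy to demonstrate. In this sketch (a memoryless cubic stands in for a real driver's nonlinearity; the tone frequencies are arbitrary), the two-tone error contains an IM product at 2·f2 - f1 that appears in neither single-tone error:

```python
# Two-tone IM demo: e(w1, w2) != e(w1) + e(w2) for a cubic nonlinearity.
import numpy as np

fs, n = 8000, 8000                       # 1 Hz per FFT bin
t = np.arange(n) / fs

def nl(s):                               # hypothetical mildly nonlinear system
    return s + 0.1 * s**3

def err_mag(x):
    """Magnitude spectrum of the error e = nl(x) - x, normalised per sample."""
    return np.abs(np.fft.rfft(nl(x) - x)) / n

f1, f2 = 440.0, 550.0
e1 = err_mag(np.sin(2 * np.pi * f1 * t))                        # e(w1)
e2 = err_mag(np.sin(2 * np.pi * f2 * t))                        # e(w2)
e12 = err_mag(np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t))

im = int(2 * f2 - f1)                    # IM product bin: 660 Hz
print(e1[im], e2[im], e12[im])           # the IM bin is empty in e1 and e2
```

Each single-tone error holds only the fundamental and its third harmonic, while the two-tone error adds components at 2f2-f1, 2f1-f2, 2f1+f2 and 2f2+f1, which is exactly why single-sine distortion tests cannot predict the error for program material.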
 
John,

You are picking holes again for the sake of it by taking meanings out of my words which I have not intended.

I have the greatest difficulty in expressing what I want to say here, for it is not simple, and all you do is make it harder for me.

Why do you not address the point that it is the amplifier/loudspeaker interface reaction which causes significant EQ/LS problems that negate dynamic correction and thus the cardioid di/mono-pole generation?

Have you actually checked for this yourself ?

Maybe you are ignoring this aspect because of the way you are already using direct connection, or recommending others to directly connect SS amplifiers to LF drivers ?

At least I have managed to express the nature of the problems to everyone, and as I see you want to nit pick I am away again.
Put your theory around amp/LS system interface energy - might be good for another page - seriously !

Will readers please judge for themselves by checking out realistic simulations with accurate virtual loads or scoping (remember that old low res piece of test gear) so that you yourselves cannot be swayed by all of this diversionary posturing here, for it is like what Earl himself recounts as 'looking where the light is brighter'.

There really is no point in me coming back here again; not worth the hassle.

AF is not like RF with regard to cardioid; there is no carrier. Dynamic AF waveforms end up distorting the developed patterns in music time, and even with steady sines the full cardioid cannot hold over a broad LF frequency range if it is developed via separately EQed di/mono-pole LS systems.

EQ can help audio, and maybe multiple subs can too, but not if you want to develop cardioid bass.

......... Graham.

PS. If I get the chance I will come back and present waveforms etc myself, but as I said - I have moved on (beyond direct SS-NFB/LS connection).
 
Graham Maynard said:
John,


Why do you not address the point that it is the amplifier/loudspeaker interface reaction which causes significant EQ/LS problems that negate dynamic correction and thus the cardioid di/mono-pole generation?

Have you actually checked for this yourself ?

Maybe you are ignoring this aspect because of the way you are already using direct connection, or recommending others to directly connect SS amplifiers to LF drivers ?

At least I have managed to express the nature of the problems to everyone, and as I see you want to nit pick I am away again.
Put your theory around amp/LS system interface energy - might be good for another page - seriously !

Will readers please judge for themselves by checking out realistic simulations with accurate virtual loads or scoping (remember that old low res piece of test gear) so that you yourselves cannot be swayed by all of this diversionary posturing here, for it is like what Earl himself recounts as 'looking where the light is brighter'.

There really is no point in me coming back here again; not worth the hassle.

AF is not like RF with regard to cardioid; there is no carrier. Dynamic AF waveforms end up distorting the developed patterns in music time, and even with steady sines the full cardioid cannot hold over a broad LF frequency range if it is developed via separately EQed di/mono-pole LS systems.

EQ can help audio, and maybe multiple subs can too, but not if you want to develop cardioid bass.

......... Graham.

PS. If I get the chance I will come back and present waveforms etc myself, but as I said - I have moved on (beyond direct SS-NFB/LS connection).

You are correct Graham, this isn't worth arguing about.
 
gedlee said:



I have become increasingly frustrated in my endeavors to create high sound quality, because it is not something that has any value. It's a boutique marketplace in audio these days, at both the production and reproduction stages, and there isn't really any place in it for science.


It's clear to me you painted yourself into a corner. LOL ;)
 