human hearing

Sorry, no. The pressure waves in the air intermodulate/interfere as soon as they meet. That is the nature of wave propagation in a medium such as air.

Interference and intermodulation are very different.

Interference is a linear effect: two sine waves interfere and create a beat signal. It's "add and subtract".

Intermodulation requires a nonlinearity to create the IM tones: this is a multiplication.

Air is pretty linear?
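To make the distinction concrete, here is a minimal numerical sketch (my own illustration, not from any of the papers discussed; it assumes numpy and a made-up square-law nonlinearity). Adding two sines creates no new frequencies; a nonlinearity does:

import numpy as np

fs = 96000                        # sample rate, Hz
t = np.arange(fs) / fs            # one second of signal
f1, f2 = 1000.0, 1100.0           # two test tones, Hz

# Linear interference: pure addition. The spectrum contains ONLY
# f1 and f2; the 100 Hz "beat" is an amplitude envelope, not a tone.
linear = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Intermodulation: pass the same sum through a hypothetical square-law
# nonlinearity. The multiplication creates genuinely new tones at
# f2 - f1, f1 + f2, 2*f1 and 2*f2 (plus a small DC shift).
nonlinear = linear + 0.1 * linear**2

for name, sig in (("linear sum", linear), ("square-law", nonlinear)):
    spectrum = np.abs(np.fft.rfft(sig)) / len(sig)
    bins = np.flatnonzero(spectrum > 1e-3)   # bins are 1 Hz wide here
    print(name, "-> energy at", bins, "Hz")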
 
Perhaps a better approach would be to use some type of acoustic filtering to subtract the high-frequency content, so that the signal and transducer side of the equation could remain constant rather than being the variable.

This is what the link in post 40 refers to.

It doesn't eliminate the rest of the apparatus, though, as the effect of driving any equipment beyond its operating envelope is usually clearly audible.

You need to make some very careful choices of equipment to ensure the system operates linearly for the stimuli applied.

Andy.
 
PGW said:

Interference is a linear effect: two sine waves interfere and create a beat signal. It's "add and subtract".

Intermodulation requires a nonlinearity to create the IM tones: this is a multiplication.

Ok, yes, I was too generic in my definitions. What I was trying to say was exactly as you stated it. But I cannot see any controls in the Oohashi paper to show that beat frequencies in the audio range were considered.

Air is pretty linear?

Nelson Pass has suggested on several occasions that air is single ended ...;)
 
Nay9 stated:

For some reason we tend to make a distinction between hearing a sound and feeling a vibration; in reality they are the same thing.

I can only partially agree. There are some things hearing can do that touch can't (and vice versa, as we can't hear cold or heat): hearing is directional, hearing can distinguish between frequencies quite well, the hearing mechanism has an astonishing dynamic range, etc.

Regards

Charles
 
Quote from Steve Eddy about IM in loudspeakers:
"That wouldn't prevent it. They sent all the information above 26kHz to the supertweeter so the supertweeter handled multiple frequency components which means the supertweeter could indeed produce intermodulation components."

I admit you have a point here, but if the super tweeter produced audible intermodulation products, why didn't anyone hear them with the low-frequency channel switched off?

"What Karou and Shogo did was use a syntheized stimulus tone which had five frequency components above 20kHz. Each of those five frequency components were fed to a separate driver so that each driver was fed a single frequency. Therefore none of the drivers would produce any modulation products of the stimulus tone."

Unfortunately, I don't see any way one could apply this to natural sounds such as gamelan music. I doubt if annoying continuous test tones are going to affect the alpha waves of Japanese gamelan players or of anyone else in the same way as music. So I guess for the type of research Oohashi et al. did, using a very linear super tweeter and checking if it produces an audible sound of its own is the only thing you can do.
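To illustrate why the one-tone-per-driver trick matters (a rough sketch of my own; the five frequencies below are invented, not taken from their paper, and a square-law driver nonlinearity is assumed): any driver that sees two ultrasonic components at once can drop difference tones right into the audible band.

from itertools import combinations

# Five hypothetical stimulus components above 20 kHz (Hz). These
# values are made up for illustration, not taken from the paper.
tones = [26000, 29000, 33000, 38000, 44000]

# A square-law nonlinearity turns every pair (f1, f2) into sum and
# difference products; the differences land in the audible band.
for f1, f2 in combinations(tones, 2):
    diff = f2 - f1
    if diff < 20000:
        print(f"{f1} Hz + {f2} Hz -> audible IM product at {diff} Hz")

# With one tone per driver, no driver ever sees two components at
# once, so no driver can generate these difference tones.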
 
I think the use of static monaural signals is one of the most often repeated errors in tests of the properties of human hearing.

While it could be difficult to detect 1% THD on a monaural sinusoidal stimulus, things are different with a good stereo recording: the same THD would muddy up the imaging.

The same applies to phase anomalies: They have most influence when it comes to perception of direction.

Also, the Fletcher-Munson curves should be taken as exactly what they are: the AVERAGE listener's perception of a STATIC stimulus, nothing more - nothing less.

When we are listening to music we don't listen to static sinusoidal signals (that would be rather boring music). So it is dangerous to assume that a bandwidth of 20 Hz to 20 kHz is sufficient just because Fletcher-Munson suggests it!

BTW: A sinusoid is a signal with zero bandwidth and does not carry any information, according to information theory.

Regards

Charles
 
Obviously, if static sinewaves were all we could hear, there would be no argument here.

The Kaoru and Shogo experiment is still interesting, and shows that they were right to include a 'control'. In this case they ended up proving that their single tweeter wasn't up to the task in some way - it should have produced results identical (negative or positive) to those of the multiple tweeters.


One thing I've noticed is that at the top of my hearing range (15 kHz or so; it drops with age, I'm afraid...) it is much easier to hear a tone if I turn my head back and forth than when it's held still. (This doesn't work with headphones, unsurprisingly.) Presumably there's some phase information I'm picking up, and the ear is more sensitive to this than to amplitude.

Cheers
IH
 
I have to admit that I was a little unspecific with my statement about the validity of tests using static sinusoids.

These are of course valid - if one wants to know our hearing's reaction to a static stimulus.
But these tests would not be valid for drawing conclusions about our hearing's reaction to any other kind of stimulus.

Regards

Charles
 
MarcelvdG said:
Quote from Steve Eddy about IM in loudspeakers:
"That wouldn't prevent it. They sent all the information above 26kHz to the supertweeter so the supertweeter handled multiple frequency components which means the supertweeter could indeed produce intermodulation components."

I admit you have a point here, but if the super tweeter produced audible intermodulation products, why didn't anyone hear them with the low-frequency channel switched off?

I don't know that they didn't hear anything. As far as I'm aware, Oohashi only said that the high frequency channel on its own didn't produce the same EEG results as when both were playing.

"What Karou and Shogo did was use a syntheized stimulus tone which had five frequency components above 20kHz. Each of those five frequency components were fed to a separate driver so that each driver was fed a single frequency. Therefore none of the drivers would produce any modulation products of the stimulus tone."

Unfortunately, I don't see any way one could apply this to natural sounds such as gamelan music. I doubt if annoying continuous test tones are going to affect the alpha waves of Japanese gamelan players or of anyone else in the same way as music. So I guess for the type of research Oohashi et al. did, using a very linear super tweeter and checking if it produces an audible sound of its own is the only thing you can do.

I don't see any way one could do it either. I'm simply saying that Kaoru and Shogo's results may have implications for Oohashi's results.

se
 
phase_accurate said:
I think the use of static monaural signals is one of the most often repeated errors in tests of the properties of human hearing.

While it could be difficult to detect 1% THD on a monaural sinusoidal stimulus, things are different with a good stereo recording: the same THD would muddy up the imaging.

Actually the research indicates that detection is much more sensitive when the stimulus is pure tones or broadband noise. For example, frequency response anomalies are detected at much lower levels when using noise than when using music.

Also, the Fletcher-Munson curves should be taken as exactly what they are: the AVERAGE listener's perception of a STATIC stimulus, nothing more - nothing less.

The Fletcher-Munson curves are all about loudness, which is quite a subjective evaluation. We don't have quite the sense of relative proportion in hearing that we do with, say, vision. It's a pretty simple matter to look at two objects and get a good fix on whether one is twice as large as the other. But how do you gauge when one sound is twice as loud as another? :)
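(As it happens, psychoacoustics does offer a rough rule of thumb here - Stevens' sone scale, where loudness doubles for every 10 dB above 40 phon. A tiny sketch of my own:)

def sones(phon):
    # Stevens' rule of thumb: 40 phon = 1 sone, and perceived
    # loudness doubles for roughly every 10 dB increase above that.
    return 2 ** ((phon - 40) / 10)

for level in (40, 50, 60, 70):
    print(f"{level} phon ~ {sones(level):.0f} sone(s)")
# So 50 phon sounds about twice as loud as 40 phon - but only on
# average, and only for steady tones.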

BTW: A sinusoid is a signal with zero bandwidth and does not carry any information, according to information theory.

Yet our sense of hearing doesn't give a damn about information theory and senses a sinusoid just the same.

se
 
phase_accurate said:
I have to admit that I was a little unspecific with my statement about validity of tests using static sinusoids.

These are of course valid - if one wants to know our hearing's reaction to static stimulus.
But these tests would not be valid for making conclusions about our hearing's reaction to any other kind of stimulus.

True enough. But that doesn't mean that the results of tests using music should be immune from scrutiny. Getting at the truth still involves weeding out reasonable alternative possibilities which would otherwise leave us with ambiguous results.

se
 
I just looked up my copy of Oohashi et al.'s AES preprint, and in section 3.2, they state explicitly that: "When only the high frequency components of over 26kHz were presented intermittently in the second and the third experiments, no subject could recognize them as audible sound". In section 0, they state that: "Statistical evidence is also presented that although no subject could recognize high frequency components as audible sound, under certain experimental conditions, music containing high frequency components is perceived as more pleasant and rich in nuance than music from which high frequency components are eliminated."
 
Howdy folks... Long time no see :)

I've run through this interesting thread and can't help but comment on some of the statements.

In my humble opinion, it's totally nuts to run these tests with square waveforms (a 5 kHz square wave with a capacitor as a 100 kHz filter), as one poster did.

A square wave cannot and will never be a valid subject for audio tests, as it is a Fourier series of sine tones. A test conducted with the above set-up and a tweeter will create a lot of problems, one of which is the amplifier and its slew rate, which depends on the load (especially a capacitive load).
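As a rough illustration of the point (my own sketch, assuming an ideal unit-amplitude square wave and numpy): band-limiting a 5 kHz square at 100 kHz still leaves harmonics at 15, 25, ... 95 kHz, and the peak slew rate the amplifier must deliver grows with every harmonic kept:

import numpy as np

f0 = 5000.0                 # fundamental of the square wave, Hz
fs = 1_000_000              # sample rate for the sketch, Hz
t = np.arange(fs) / fs

# An ideal square wave is a Fourier series of odd harmonics with
# 1/n amplitudes. A 100 kHz filter keeps harmonics 1, 3, ... 19.
sig = sum(np.sin(2 * np.pi * n * f0 * t) / n for n in range(1, 20, 2))

# Peak slew rate of the band-limited square, per volt of signal.
slew = np.max(np.abs(np.diff(sig))) * fs
print("harmonics kept:", list(range(1, 20, 2)))
print(f"peak slew rate: {slew / 1e6:.2f} V/us per volt of signal")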

Secondly, everyone focuses on the driver (speaker unit) and the human ear. Why has no one mentioned the effect of wave intermodulation in the transmission channel from driver to ear: the AIR! ;-)

Finally, my job as an engineer at Brüel & Kjær Sound & Vibration has taught me that even a slight design change to a Sound Level Meter (SLM) has a great influence on the signal it measures, even though the SLM body is placed behind the microphone.
My point is that anything near the transmission path of the perceived sound influences the result. Reflections, standing waves, and harmonics induced by the impact of sound energy on materials (exciting the resonance frequency of that material/structure) all affect the measurement, so with all due respect, I'll stick with B&K's 60+ years of experience and not worry too much about frequencies above 22 kHz.

P.S.: To the one who questioned whether the 20 kHz limit was the reason for the bandwidth limit in power amps: no, the limit is needed to avoid HF oscillation in any feedback design, such as a power amp.

Just my 2 cents
Jennice
 
Jennice said:

P.S.: To the one who questioned whether the 20 kHz limit was the reason for the bandwidth limit in power amps: no, the limit is needed to avoid HF oscillation in any feedback design, such as a power amp.

Most power amps have an RC input filter which imposes some upper frequency limit. This is because they will produce gross distortion, due to slew-rate limiting, if asked to reproduce signals with an excessive frequency-amplitude product. The input filter is there to stop the later stages of the amp from being asked to reproduce signals which they can't.

Slew-rate limiting is only obliquely related to classic (Nyquist) stability; the Douglas Self book gives a very clear explanation of this.
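As a back-of-the-envelope check (my numbers, not from the book): the maximum slew rate of a sine A*sin(2*pi*f*t) is 2*pi*f*A, so the demand scales with both frequency and amplitude:

import math

def peak_slew_rate(freq_hz, peak_volts):
    # Maximum slew rate of A*sin(2*pi*f*t) is 2*pi*f*A; return V/us.
    return 2 * math.pi * freq_hz * peak_volts / 1e6

# 100 W into 8 ohms swings about 40 V peak.
print(peak_slew_rate(20_000, 40))    # ~5 V/us at 20 kHz
print(peak_slew_rate(100_000, 40))   # ~25 V/us at 100 kHz: five times worse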

Cheers
IH
 