Hearing and the future of loudspeaker design

+1 jcx. And also read Elias' single speaker stereo matrix thread. And one of the Ambiophonics/Binaural threads.

I've read these!!! Actually, it was reading those threads that prompted me to explore the various "components" discussed in these rather LONG threads. I think it may be beneficial to dissect the "larger picture", isolate individual issues in speaker design for experiment and study, and THEN put them back into the larger picture for coherence.

Also, as a DIYer, I want to see these theories in action and quantify how the theoretical side interacts with our biological DSP (brain).

I also started a thread on the room interaction aspect of loudspeaker design but that one received no replies :(
 
If you want to hear it as the artist or recording engineer hears it, you also need to have their hearing differences. You would first have to compensate the response for your own hearing to get it "flat", and then apply the inverse of their frequency response to hear it as they do.
If you just compensate it to flat, it makes no sense, because no one hears it like that.

That is exactly what I suggest in the actual presentation of my idea.
 
On the subject of recording: it is impossible to address the variety of differences in sound reproduction (speaker + amp + room + ear) from the recording side. With current technology we can only experiment with addressing these issues from the playback side. The recording side CAN, however, provide recording data to be used for accurate playback calculations. :)

Perhaps we can try to develop technologies for even and uniform dispersion of sound in three dimensions, but the design of non-experimental commercial speaker drivers does not allow that, so it is out of the hands of the average DIYer.
 
Objectively speaking, unless the microphones used to record the instrument were right next to your ears at the actual recording location :cool: you can't really objectively compare whether the reproduction is true to the live event. What you can do is compare the sound of the reproduction with a similar one heard in a different scenario. Again, this addresses precision, while the topic is about accuracy.

Perhaps leaving more for the brain to process is better than using analog/digital methods - but if we truly believe that, why are we even messing with speaker design?
Objectively speaking, comparing the transfer function of a speaker is relatively easy: if it has reproduced the recording without gross changes in frequency and phase response, it has retained some degree of "High Fidelity".

We "mess" with speaker design because they are where deviation from frequency and phase response are the worst in the reproduction chain, and the room compounds those problems.

In every design I have heard using good quality transducers, the closer to flat frequency and phase response, the less ringing in impulse response, and the lower the distortion, the more accurate and revealing it is of the original recording.
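As a quick illustration, checking that takes only a few lines of code. A minimal sketch, assuming you have already captured an impulse response to a hypothetical file "ir.wav" with your measurement rig:

```python
# Minimal sketch: read a measured impulse response and look at the speaker's
# magnitude and phase response. "ir.wav" is a hypothetical mono capture from
# your measurement rig (e.g. a swept-sine measurement deconvolved to an IR).
import numpy as np
from scipy.io import wavfile

rate, ir = wavfile.read("ir.wav")
ir = ir.astype(np.float64)
ir /= np.max(np.abs(ir))                      # normalise the impulse response

spectrum = np.fft.rfft(ir)
freqs = np.fft.rfftfreq(len(ir), d=1.0 / rate)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
phase_deg = np.degrees(np.unwrap(np.angle(spectrum)))

# "Gross changes" show up as large ripple in the band the speaker covers.
band = (freqs >= 100) & (freqs <= 10_000)
ripple = magnitude_db[band].max() - magnitude_db[band].min()
print(f"Magnitude ripple, 100 Hz to 10 kHz: {ripple:.1f} dB")
```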

Your idea of a DSP-controlled, user-determined inverse loudness compensation curve has a basic flaw: hearing behaves differently at every loudness level, so what is correct at a 60 dB level would not be correct at any other level.

I have experimented with using inverse loudness contour curves, and find them in no way to be useful or preferable to flat response, except when listening to speakers at a level far below or above the original recording's mix level.
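To make that level dependence concrete, here is a minimal sketch using a crude stand-in: A-weighting roughly tracks the ear's sensitivity at low levels and C-weighting at high levels (both defined in IEC 61672), so their difference hints at how much the "correct" compensation shifts between quiet and loud listening. A real implementation would interpolate the ISO 226 equal-loudness contours instead.

```python
# Crude illustration of why loudness compensation is level-dependent:
# compare A-weighting (approximates hearing sensitivity at low levels) with
# C-weighting (high levels), both per IEC 61672. The gap between them hints
# at the extra boost quiet listening "wants"; a proper scheme would
# interpolate ISO 226 equal-loudness contours as the volume changes.
import numpy as np

def a_weight_db(f):
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20 * np.log10(ra) + 2.00

def c_weight_db(f):
    rc = (12194.0**2 * f**2) / ((f**2 + 20.6**2) * (f**2 + 12194.0**2))
    return 20 * np.log10(rc) + 0.06

for f in [31.5, 63.0, 125.0, 1000.0, 4000.0, 16000.0]:
    diff = c_weight_db(f) - a_weight_db(f)
    print(f"{f:7.1f} Hz: quiet-vs-loud sensitivity gap {diff:+5.1f} dB")
```

The point is that the gap is itself a function of level, so no single fixed EQ curve can be "the" loudness compensation.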

Even though I have noise-induced hearing loss in the 4000 Hz range (a different dip depth and frequency in each ear), I only boost that range on an EQ when listening at low levels (or to British sitcoms ;) ). At "normal" levels, the same boost that allows me to hear low-level detail would sound terribly "middy".

If I tried to boost 18 kHz to the point where I could hear it, my tweeters would go up in smoke and my girlfriend, who can still hear 20 kHz just fine, would leave the room with her fingers in her ears.

By the way, even and uniform dispersion of sound in three dimensions can be achieved by a DIYer by copying Tom Danley's Synergy Horn concepts, which also give excellent frequency and phase response.

Art
 
Certainly not everything!

To address the "sweet spot", which is really just dimensional and time distortion: hmm, what if a speaker were designed with a digital XO and came with a "clip" that you wear, which interacts with the processing platform to address listener-location issues?

One of my roommates during my university days studied location awareness and sound engineering. He developed one of those "headphone" simulations where distance, 3D alignment, and time are all factored into an algorithm used in sound recording. This is much easier to do with headphones because you do not have the other factors you mention: fixed cabs, radiation polars, room interaction. It is also easier to account for the distance from the drivers to a microphone. You can search for similar simulations on YouTube, for those who don't believe in "soundstage" on headphones. It is just a different set of recording requirements.
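For anyone curious, the core of that kind of simulation is simple to sketch. A minimal example, assuming you have a dry mono recording and a measured HRIR pair; the file names below are hypothetical placeholders, and real data would come from a measured set such as a KEMAR database:

```python
# Minimal sketch of binaural synthesis: convolve a mono source with the
# left/right head-related impulse responses (HRIRs) measured for the desired
# direction, so headphones present the interaural time and level cues of a
# source at that position. All file names are hypothetical placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, mono = wavfile.read("dry_source.wav")       # mono recording
_, hrir_l = wavfile.read("hrir_left_30deg.wav")   # HRIR pair for a source
_, hrir_r = wavfile.read("hrir_right_30deg.wav")  # 30 degrees to the left

mono = mono.astype(np.float64)
left = fftconvolve(mono, hrir_l.astype(np.float64))
right = fftconvolve(mono, hrir_r.astype(np.float64))

binaural = np.stack([left, right], axis=1)
binaural /= np.max(np.abs(binaural))              # normalise to avoid clipping
wavfile.write("binaural_30deg.wav", rate, (binaural * 32767).astype(np.int16))
```

Distance and timing, the parts my roommate's algorithm handled, end up encoded in those impulse responses as level, delay, and reflection differences.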

I too am really interested in polar radiation, room interaction, etc. However, to study something, you try to isolate the variables first and then incorporate them back into a larger picture.

I don't see what "DSPs can't solve every problem" has to do with my post. It is like stating that water can't cure cancer in a thread discussing whether water can rehydrate a person.

Art wrote: "Objectively speaking, comparing the transfer function of a speaker is relatively easy…" (quoted in full above)

Thanks for the insightful reply! My first experiments were to adjust the volume at the frequencies where I have hearing loss (outside the normal curve). This created all sorts of perceptual distortions (or so I believe them to be): instruments "seemed" to overlap on live recordings, etc. Prolonged listening to adapt didn't change this. I think your point about "hearing behaving differently at any given loudness", as well as the timing distortions and the differences in radiated sound energy between designs, may have a lot to do with it.

With the current state of DSP we can use algorithms to compensate for some features of the room and for timing, using software like Acourate and others. What I'm suggesting is adding another algorithm chain to the process for individual hearing. It is also the intention of this post that someone with knowledge of psychoacoustics and hearing can chime in and point us toward studies or experiments in this direction.
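To show the shape of what I mean, here is a minimal sketch of such a stage appended after the usual corrections. The band frequencies and per-ear gains below are hypothetical placeholders, not measured data:

```python
# Minimal sketch: a personal hearing-compensation stage appended to an
# existing correction chain (crossover, room EQ, phase correction), with an
# independent set of peaking filters per ear. All frequencies and gains are
# hypothetical placeholders; in practice they would come from your own
# threshold measurements.
import numpy as np
from scipy.signal import sosfilt

def peaking_eq_sos(fs, f0, gain_db, q=2.0):
    """RBJ audio-EQ-cookbook peaking filter as one second-order section."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    num = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return (np.concatenate([num, den]) / den[0])[np.newaxis, :]

fs = 48_000
left_bands = [(4000.0, 6.0)]    # hypothetical 6 dB boost at a 4 kHz left-ear dip
right_bands = [(3500.0, 4.0)]   # hypothetical 4 dB boost at 3.5 kHz, right ear

def hearing_stage(stereo, fs, left_bands, right_bands):
    out = np.asarray(stereo, dtype=np.float64).copy()
    for channel, bands in ((0, left_bands), (1, right_bands)):
        for f0, gain in bands:
            out[:, channel] = sosfilt(peaking_eq_sos(fs, f0, gain), out[:, channel])
    return out

# Stand-in for one second of the already room/phase-corrected stereo feed:
corrected = np.random.randn(fs, 2) * 0.1
personalised = hearing_stage(corrected, fs, left_bands, right_bands)
```

The design point is simply that the personal stage is one more block in the chain, and (per Art's post above) each ear gets its own filters.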

Art's experience shows that this kind of signal processing has a whole set of flaws (discussed in many previous threads): different listeners, locations, etc., and all are issues that need to be addressed in relation to each other. The reason I focused on the variations in hearing is that while many issues can be addressed through advances in speaker design, the variation in hearing can only be addressed on an individual basis.

Markus also brings up the valuable (in my opinion, most valuable) point of how to quantitatively and objectively measure this type of experimentation. To this I say: I have no clue. :eek: But I believe the answer may lie somewhere in his own mention of hearing aids. How are they measured quantitatively?

I hope people can understand that furniture, room size, speaker cabinetry, crossovers, and different eardrums are all forms of signal processing, just not in the digital realm. Call it ASP if you will: analogue signal processing. Not compensating for them is ignoring accuracy :)
 
Quoting the post above: "…But I believe the answer may lie somewhere in his own mention of hearing aids. How are they measured quantitatively?"
A real-ear aided hearing test is required for meaningful hearing aid evaluation. It allows the hearing aid's performance to be monitored while the user is wearing the hearing aid.
Two tests are commercially available to measure signal to noise related losses: the Speech-In-Noise test (SIN), and the Hearing-in-Noise Test (HINT).

Other tests include:
Impedance test–helps identify hearing disorders that may be medically treatable.
Bone conduction test–measures the amount of hearing loss that is due to dysfunction of the hearing nerve cells.
Determination of the hearing aid user's most comfortable and uncomfortable loudness levels.
Tone decay test–evaluates the auditory system beyond the hearing nerve cells.
Speech reception and speech recognition tests measure the listener's maximum and real world speech understanding ability.

Advanced programmable DSP is now small and powerful enough to be included in hearing aids, or located externally and fed to in-ear monitors.

Since what you are talking about is individual hearing differences and deficiencies, the only way to address those differences properly is with in-ear monitors, unless you happen to be listening with someone with identical hearing, or listening alone.

Statistically speaking, finding someone with identical hearing is as likely as finding a person with the same fingerprint as you.

Art
 
I gave my idea so I think I should give my proposed experiment. It is quite amateur and has flaws and many limitations.

1. Start with a standard listening volume. This will depend on the recording, the distance of the speakers, etc. Use several different tracks of varying lengths. Have a second person artificially insert an anomaly at a fixed frequency but varying level somewhere in the track, e.g. a 400 Hz tone 3 dB above the programme level at the 3-minute mark. I suggest increments of 1 dB until it can be detected, then lowering in 0.1 dB steps until it can't be detected anymore. Detection is confirmed by comparing with the unmodified original. (See the sketch at the end of this post.)

2. Repeat this procedure at the level you arrive at. This will help determine whether you can actually hear the anomaly or whether it is an illusion within an illusion caused by repeated listening.

3. Once you determine the dB threshold for a specific frequency, move up one band on the equalizer (the digital crossover I use only allows 32 bands) and repeat.

4. Once you have the thresholds, plot them as a curve and see whether the curve is smooth or spiky. This may give insight into your individual hearing. If you're interested in various hearing losses and their curves: Audiometry: the Testing of Human Hearing

5. In the digital crossover, apply a new equalizer with the measured thresholds. Depending on whether your curve is smooth or spiky, you may have to adjust it to fit your needs. Then apply the other algorithms (room correction, phase correction, etc.) in your DSP, sit back, and listen. I'd probably choose a live recording. Are there distortions? Are the instruments overlapped? Is the soundstage the appropriate size? Do the instruments "sound real"? If not, then the idea that your brain has already accounted for the hearing loss over time is most likely the case. If none of these problems appear, then perhaps it is time to take the testing one step further: compare your plots with another person's to look for patterns, and consider implementing the design.

There are a lot of things this test does not take into consideration: background noise, situations with more than one listener, etc. Some of these can be accounted for (e.g. with individual profiles), but some cannot, and this is not a one-stop wonder. It serves as a test to see whether this is a direction you want to investigate further.
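Here is the sketch promised in step 1, a minimal staircase for steps 1-3. It makes assumptions: a hypothetical 16-bit "reference_track.wav" longer than the injection point, a pure tone as the anomaly, and the listener answering at the keyboard after comparing the written "test_snippet.wav" against the original:

```python
# Minimal sketch of the staircase in steps 1-3: inject a tone into a track,
# raise it in 1 dB increments until detected, then lower it in 0.1 dB steps
# until it disappears. Assumes a 16-bit WAV test track; file names are
# hypothetical placeholders for your own setup.
import numpy as np
from scipy.io import wavfile

rate, track = wavfile.read("reference_track.wav")
track = track.astype(np.float64) / 32768.0
if track.ndim == 1:
    track = track[:, np.newaxis]

def inject_tone(track, rate, freq, level_db, start_s, dur_s=2.0):
    """Return a copy of the track with a tone added at level_db re full scale."""
    out = track.copy()
    n0, n1 = int(start_s * rate), int((start_s + dur_s) * rate)
    t = np.arange(n1 - n0) / rate
    out[n0:n1] += (10 ** (level_db / 20) * np.sin(2 * np.pi * freq * t))[:, np.newaxis]
    return out

freq, start_s = 400.0, 180.0      # the 400 Hz tone at the 3-minute mark
level, heard_once = -60.0, False  # start well below audibility

while True:
    test = np.clip(inject_tone(track, rate, freq, level, start_s), -1.0, 1.0)
    wavfile.write("test_snippet.wav", rate, (test * 32767).astype(np.int16))
    answer = input(f"Tone injected at {level:+.1f} dBFS. Heard it? [y/n/q] ").strip().lower()
    if answer == "q":
        break
    if not heard_once:
        if answer == "y":
            heard_once = True     # first detection: reverse direction, finer steps
            level -= 0.1
        else:
            level += 1.0          # keep climbing in 1 dB increments
    elif answer == "n":           # vanished again: last heard level is the threshold
        print(f"Threshold near {level + 0.1:.1f} dBFS at {freq:.0f} Hz")
        break
    else:
        level -= 0.1              # keep lowering in 0.1 dB steps
```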
 
So, a hearing aid for the stereo, built into the stereo, in a way, in order to make it sound as the engineer heard it. First you are going to need to know how the engineer's hearing works. That seems like a very difficult challenge. Switch recordings and start over...

I don't know how live recordings work :) but for many modern non-live recordings, sound engineers already factor their hearing into the equation. They don't rely purely on what they hear; it is just one tool. Of course, there are sound engineers who do not... well, as music lovers, most of us know that not every record is produced with accuracy and precision in mind.
 
If you set up a string quartet and a pair of loudspeakers behind an acoustically transparent curtain, and the same piece is played by the musicians and then by a previously made recording run through the loudspeakers, and you can't hear the point at which the changeover is made, those are pretty good, well-set-up loudspeakers. Never mind that the hall acoustics weren't perfect (by any means), that the twenty or so other people listening with me had all (well, perhaps apart from a few students) spent a largish percentage of their lives mixing rock and roll or other ear-destroying sources at high levels, that the microphone techniques required to get the switchover to work were perhaps not entirely catholic, or that the musicians themselves were conservatory students rather than the big stars one might have wanted to reproduce: the loudspeaker was producing the same wavefronts in the air as the musicians.

Now, if the experience could be reproduced with a jazz trio, a solo piano, an actor declaiming Shakespeare… but none of the systems managed. Loudspeaker technology and microphones have some way to go yet.

In the studio, I'm not trying to convince your ears that you've got a string quartet in your living room, more that there's a hole in your wall behind which they are playing in a concert hall. I've been in the studio setting up mics, chatting to musicians, listening to them, and I'm going to try and squeeze that sensation through my studio monitors which have not been set up for me, but for the room. I know my ears are worn and battered, but I'm comparing with real live instruments.

If it's music that has never had an acoustic parent (i.e. neither sampled nor a 'synthetic copy of', but purely electronic sounds), I'll judge the balance, both in volume and tone, on those same speakers, which are not perfect, in a room that isn't either, with ears that lack something too, using live musical performance as my reference.

With amplified instruments, such as electric guitar or ondes Martenot, the amplification can (and should) give its particular coloration to the final sound; in musical reproduction it is supposed to be as transparent as humanly possible, practically invisible.

All right, I've stopped preaching; you can take your fingers out of your ears ;).
 
Quoting the post above: "…for many modern non-live recordings, sound engineers already factor their hearing into the equation…"

The sound engineers I know of couldn't even tell you what their monitors and room are doing, let alone their own hearing. If they were compensating for their own hearing, how would they do so, given that the variability in human hearing is so great that everyone needs their own curve or settings? :confused:

Just a thought.

Most recordings have zero concern for accuracy nowadays. Everyone enhances them for stereo.
 
Quoting the earlier post: "If you set up a string quartet and a pair of loudspeakers behind an acoustically transparent curtain…"

I really enjoyed reading your reply. From the listening side, I too can feel the illusion: an illusion of the performance, like listening through a window into the space where the music is being played.

I do have a question, which is: when the students listen to the instruments and compare them to the speakers, although they are hearing the same variation (difference), are they all hearing the same things?

Take this analogy: a group of students is comparing a painting to a photographic reproduction. They all have different vision, without artificial correction. Although they can all spot the same difference, they are actually all comparing different artworks to different photographs (the difference being of the same degree for both). This is my only concern.

On a personal note:

I really enjoy listening to music and, if classified, fall under the non-analytic category. I have a few pairs of commercial speakers of varying price ranges and have completed two DIY plans and three designs of my own. Of these speakers, I have a pair of Infinity Kappas always hooked into my system, and they are my preferred listening choice. The science and design aspect of speakers is a hobby, and I pursue it not to create the best "sounding" speakers for me but for the "hifi" aspect.

The biggest eye-opener for me so far has been a binaural recording played back through a dipole system with the listening position in the middle of the two speakers. This particular system has its own flaws for my ears, but it is the closest to a reproduction of reality that I have ever heard. When I closed my eyes I did not always know what was real and what was coming from the speakers. From a measured-reproduction standpoint, however, the setup is not very true: the measurements are nowhere near flat, and the waterfall plot showed very poor decay. I would have recreated a similar system in my own home if it weren't for the lack of binaural recordings available :mad:
 
Quoting the earlier reply: "The sound engineers I know of couldn't even tell you what their monitors and room are doing, let alone their own hearing…"

I apologize for my own misunderstanding. To clarify: I am not a sound engineer. I am, however, CTS certified, and in my test prep and the test itself, the issue of hearing and of compensating for the engineer's hearing loss/variation is addressed. Because of this I was under the impression that professional sound engineers abide by this principle, especially if they are certified.

However, I may be wrong in my assumption. It seems not every sound engineer adheres to the same philosophy. Not that certification determines the skill of a sound engineer, since the work is as much an art as it is technical, but it does serve as a basis for professionalism. For example: I may not want to eat the food of a chef who doesn't know the temperature at which E. coli is killed, if that information is made known to me. He may cook the best food in the world; my significant other will eat anything if it tastes good.

Here are a few links that address the issue:
Common Misconceptions about Hearing
Audio engineers: Enemies of Our Ears
Engineering in the biz w/hearing loss - Gearslutz.com

There are also a variety of books including the InfoComm training manual which explain this particular aspect of sound reproduction.

From my understanding of my training material, as for how they compensate: if they want to follow a standard set of procedures, they first attempt to create a recording and engineering environment where the speakers and room have as little character as possible (measuring as flat, and with as even a decay, as possible). They then engineer the recording in this environment using a combination of audio and visual tools. They should have an understanding of basic hearing and of their own hearing; hearing is the first topic covered in any professional or academic sound engineering course. Once the "art" is completed, it is played back in a series of different listening environments, aka reference rooms. These may be created to resemble a living room, a car, headphone listening, and so on. The purpose is to create an artistic listening experience in any listening environment. Hi-fi recordings, however, are produced to be referenced in an environment similar to the engineering room, i.e. one with minimum distortion. They are engineered for accurate and precise sound reproduction, not for compatibility.

This is an extremely condensed and summarized butchering of the actual course material, and it is my limited interpretation of it. There is a need for various types of recordings.

There are many tools for a sound engineer to rely on outside direct hearing.
 