Enclosure resonances, not a big deal?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
gedlee said:


The goal is a solid closed loop, and recent studies at Harman and elsewhere are showing that this can be done very effectively. Many just don't accept this, but I have always believed that it was possible and even inevitable.


But, they're generally all short-term studies from what I've seen and I think a lot of the finer points are only readily identified with longer term listening.
 
At one show some guys came into the room, sat down listening to what was probably an unknown piece of music, and got up after 15-20 seconds stating "What crap!" One person wouldn't listen because the amp was so bad (although how they knew that was beyond me; I've measured that amp and it actually works quite well).

Hey, I can probably find amplifiers for you that you'd really like. The Crown DC300A. Measures well, built like a tank, lasts forever. What more could you ask?
 
badman said:

But, they're generally all short-term studies from what I've seen and I think a lot of the finer points are only readily identified with longer term listening.

With statistically based studies it is possible to know how much you don't know - how much variance in the data is not accounted for. These studies have shown very high degrees of confidence. Could something become important on further listening? That's possible, but it's just as likely that it won't change anything.

When I say that I take a long time to decide on a subjective opinion, that's because I don't trust myself as a solo, biased listener. Only time can get around these problems. But when massive subject studies are done and the data is consistent and well correlated, then I trust that. There is no need for longer studies.
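As an aside, the "variance accounted for" idea above can be made concrete with the coefficient of determination (R²): the fraction of the variance in panel ratings that a model's predictions explain, with the remainder being exactly the "how much you don't know" part. A minimal sketch, using made-up ratings purely for illustration:

```python
# Hypothetical panel preference ratings (1-10 scale) versus the scores a
# preference model predicted for the same loudspeakers. Numbers are invented
# for illustration only, not from any real study.
observed = [6.8, 7.4, 5.9, 8.1, 6.5, 7.0, 7.7, 6.2]
predicted = [6.9, 7.2, 6.1, 7.9, 6.6, 7.1, 7.5, 6.4]

mean_obs = sum(observed) / len(observed)
ss_tot = sum((y - mean_obs) ** 2 for y in observed)          # total variance
ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained

r_squared = 1 - ss_res / ss_tot   # fraction of variance accounted for
print(f"R^2 = {r_squared:.3f}; unexplained variance = {1 - r_squared:.3f}")
```

A high R² means little room is left for an unmodeled factor to matter, which is the statistical basis for the confidence claimed above; a low R² would flag exactly the kind of unknown the longer-listening argument worries about.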

Listener fatigue however, is one area that has had almost no attention paid to it. There is no "payback" to doing such a study. People buy on early impressions and that's what matters to the big guys. I listen to my own speakers and have for years so I'm comfortable that my opinion is not going to change.
 
gedlee said:


With statistically based studies it is possible to know how much you don't know - how much variance in the data is not accounted for. These studies have shown very high degrees of confidence. Could something become important on further listening? That's possible, but it's just as likely that it won't change anything.

When I say that I take a long time to decide on a subjective opinion, that's because I don't trust myself as a solo, biased listener. Only time can get around these problems. But when massive subject studies are done and the data is consistent and well correlated, then I trust that. There is no need for longer studies.


Unless the duration of the study itself is a limiting factor, which is what I'm getting at. We're not talking about studies where multiple pieces of music are used, or where the listeners are able to familiarize themselves with the sound of the devices in question. It's entirely possible for a piece of music to lack content in the areas where improvements are made. An example would be heavy rap, which often has a constant beat and rumble effects that tend to mask any enclosure resonances. But a timpani strike might make them readily audible.

This is a serious shortcoming of all hifi test methodologies. There are no long-term tests applied. Behavioral science is a closely related field in many respects, and it would be completely unacceptable to rely exclusively upon short-term tests for all things behavioral.

Audibility tests are designed to be convenient, but still maintain what scientific rigor they can. But much of their worth is discarded with the attempt at isolating effects. It's incredibly difficult to do valid longer-timeframe tests, but without them, we're taking snapshots of a scene in motion.
 
The tests are not that poor. The source material itself has been studied extensively to find material that yields the most reliable results; it's not random at all. Several selections are usually used and the listeners get to take their time. You're being way too critical of something that you don't seem that well versed in. Floyd Toole and his staff were not novices.
 
Reliable... within a given short-term format. We're stuck with that format for these types of tests when done with a group, but it's a major flaw.

That's like testing which tires work best on a dry racetrack for a given automobile. It's useful for understanding road performance, but it doesn't tell you much about which tires perform best on the road in the rain.

Repeatability is great.... for some effects.

Have you ever heard "a new sound" on an album you've listened to many times? It happens to every listener I've asked, even on the same system. These sounds would be well below the threshold of short-term testing, as people don't have time to familiarize themselves with the recording, speaker, etc. They're busy processing the gross effects. No wonder, then, that the niceties of refinement would be largely missed by this process. But improvements to these low-level sounds would be audible, and likely audible subconsciously long before we're aware of them and their effects.

Short term has that major flaw. It takes a long time to become intimately familiar with something perceptually, and that applies to:
The room
The speaker
The front end
The recording

No test is perfect unless it imitates the usage environment. Audibility tests... there's a BIG leap of faith there, with all the disconnects I've mentioned. But you're indoctrinated to trust these forms of testing, just as my own bias is pretty far in the subjectivist camp.

It's all good, but those who build purely by ear (an approach I strongly disagree with) could use a healthy dose of engineering, just as many 'pure engineers' could use a little more practicum and recognition that testing doesn't cover everything.

I have nothing negative to say about Toole, and you're right, I could use some more time reviewing testing. But, I'd rather build and my time is limited.
 
I've had widely varying experiences with being able to discriminate in audio environments. At one extreme, about 20 years ago I visited somebody who had a 'stacked Advent' stereo setup (generally decent-quality solid-state pieces) in his unfinished basement. He demonstrated a tweaked Nakamichi cassette deck with advanced Dolby noise reduction, and I honestly could not tell whether it was in the chain or not merely by listening while standing at an arbitrary location in front of the speakers.

At the other extreme, just a couple of days ago in my own setup I could readily detect the difference between two good-quality line-level interconnects, each 3 feet or less in length, particularly at high frequencies. With the lower-capacitance (11 pF/ft), better-dielectric (foamed polystyrene), lower-skin-effect (7×36 AWG center conductor) cable, the HF was much more locked in and stable than with the generic Monster Cable (27 pF/ft, solid plastic insulation, ~20 AWG center conductor).
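For context, the first-order low-pass corner formed by the source's output impedance and the cable's total capacitance can be estimated from f_c = 1/(2πRC). A quick sketch, assuming a 600 Ω source output impedance (an assumed typical line-out figure, not a measured value for the gear above), ignoring skin effect and dielectric behavior:

```python
import math

def rc_corner_hz(source_impedance_ohms: float, capacitance_farads: float) -> float:
    """Corner frequency of the first-order RC low-pass formed by the
    source output impedance and total cable capacitance: 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * source_impedance_ohms * capacitance_farads)

LENGTH_FT = 3     # cable length in feet, per the comparison above
R_SOURCE = 600.0  # assumed source output impedance, ohms

for name, pf_per_ft in [("low-capacitance cable", 11), ("generic cable", 27)]:
    c_total = pf_per_ft * LENGTH_FT * 1e-12  # total capacitance in farads
    print(f"{name}: {rc_corner_hz(R_SOURCE, c_total) / 1e6:.1f} MHz corner")
```

Under these assumptions both corners land in the megahertz range, so this simple RC model alone doesn't account for audible HF differences; it only bounds the magnitude of the capacitance effect.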

Familiarity with the recording and playback system, the quality of that system, the quality of the listening environment, and care in speaker and listener positioning all contribute to the ability to perceive differences of this nature. If the system can't reproduce it, how would you know how much of what you hear in a track with a tambourine or handclap at a peak amplitude 25 dB below the average main instrument level is due to gain riding, compression, and mixdown imaging choices versus playback-quality issues? It takes time and usually repeated exposure to start to assess such low-level but clearly audible musical details (or at least they *should* be audible) in the context of playback quality, and those details are among the most quickly compromised by many quality-related factors in the reproduction chain.
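To put the "25 dB down" figure in linear terms, the standard conversion is amplitude ratio = 10^(dB/20); a short sketch:

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Convert a level difference in decibels to a linear amplitude ratio."""
    return 10 ** (db / 20)

# A tambourine or handclap peaking 25 dB below the main instrument level
# sits at roughly 1/18th of its amplitude:
print(f"{db_to_amplitude_ratio(-25):.4f}")  # ≈ 0.0562
```

Details that small in amplitude are exactly the kind of content easily swamped by noise, distortion, or resonances in the reproduction chain.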
 