John Curl's Blowtorch preamplifier part III

Status
Not open for further replies.
Referring to the posts from JC and hhoyt, it seems that both aren't much interested in completely "ungrounded" opinions about sonic differences, but have different qualifying schemes.

JC relies more on a qualitative approach (people known to him being experts), with sound-quality descriptions from listeners, while hhoyt seems to prefer the more traditional approach of quantitative evaluation (i.e. a controlled "blind" listening test).

Although both methods have their merits, I still think the qualitative approach is favourable, especially in the case of multidimensional evaluations, where boiling things down to a yes/no decision seems to be quite difficult for the participants. Even in the hedonic-rating variant there is too much possible variability in the judgement process from listener to listener.

The preamplifier experiment I described a couple of weeks ago is, imo, located somewhere in the middle, or is a combination of both methods.

The qualitative part was done in advance by choosing the listeners for the trial: I based my inclusion decision on the listeners' evaluation experience (doing such comparisons between two devices was just a normal part of their daily business) and on my assessment of their likes and dislikes in perceived sound quality, based on listening sessions/evaluations we had shared in the past.
Additionally, it was based on my knowledge of their usual reproduction/evaluation environment.

The quantitative part was the preference decision they made after receiving and comparing the units.

Of course this kind of selection is impractical in most cases, but it points to a possible enhancement of the usual test practice: synchronizing the descriptive vocabulary used, supported by suitably prepared sound samples, and visualizing the results as spider webs will certainly add valuable information and will help with replication.
 
I think this musician thing has done the twist and the tango. There are certainly musicians who are good critical listeners for reproduced music, just as there are some auto mechanics who have critical listening skills.

But being an auto mechanic does not mean one must have that skill set.

Good musicians do need critical skills to be good musicians, but those skills by themselves are no guarantee of critical listening skills for reproduced music.

Then we have the case of musicians who are not even near the top tier. My experience from selling retail sound systems was that when a customer would punch the loudness button, turn up the bass control, and turn down the treble knob, they were usually an electric bass guitarist.

Another tip-off was that really good musicians would sometimes change the speed control on direct-drive turntables because they heard that either the pitch or the timing was a bit off.

Never had a mechanic do that.
 
As PMA already said, broad generalization is usually a problem.
Regarding musicians, it depends on the genre, their overall listening skills, and their experience in evaluating compromised reproduction systems (besides the needed interest in the topic at all).

I know very skilled musicians who nevertheless aren't interested in the quality of home reproduction systems, mainly because they seem to listen in a very different way: sometimes focusing on the instrumental parts of the kind they perform themselves, and being able to imagine much more of what is (in reality) missing from the reproduction.

So being able to describe sonic quality differences is one thing; being interested in reproduction quality is a totally different thing.
 
Scott, I do have an idea of what's really necessary to enjoy one's life; all these trinkets and toys aren't really on that list. I also believe that in the not-too-distant past people walked a lot more than they do now and didn't think much about doing so.

My vinyl setup is actually very close to what you've shown in your profile pic; it doesn't get used much, but it will be the last thing to go when that time comes.

Blanket statements about musicians' hearing are a bit worthless, imo.
 
I think this musician thing has done the twist and the tango. There are certainly musicians who are good critical listeners for reproduced music, just as there are some auto mechanics who have critical listening skills.

But being an auto mechanic does not mean one must have that skill set.


Good auto mechanics have very good listening skills; that's how they can home in on where problems are. Very good ones use stethoscopes to listen to parts of engines.
 
Jakob, I think you're way too generous about the so-called experts that folks using the qualitative method rely on for feedback, much less the methods themselves. If you're my friend/compatriot and I send you some equipment to try, how much are you going to be influenced by all sorts of subconscious social forces, not to mention the "new" factor and the wholly unblinded process?!

If we look at many of the designers (not all, mind you), they're storytellers first. Everyone loves a good story, but it may or may not come with technical merit.

Few are willing to criticize "I had my buddy listen and he liked it," whereas if you put forward some sort of clear methodology, all of a sudden everyone's a critic.
 
Jakob, I think you're way too generous about the so-called experts that folks using the qualitative method rely on for feedback, much less the methods themselves. If you're my friend/compatriot and I send you some equipment to try, how much are you going to be influenced by all sorts of subconscious social forces, not to mention the "new" factor and the wholly unblinded process?!

I used the term "expert" as a descriptor for people JC (for example) trusts in their listening ability, which means not all opinions have equal impact/influence, certainly not those coming from a totally unknown listener.

While I agree that a certain level of "blinding" has to be included, I think you underestimate the qualitative approach, as it relies (for example) on detailed description of the differences.
If listeners have to compare two devices in an A/B comparison using a restricted set of attributes (synchronized/calibrated and agreed upon beforehand), perhaps with a scale for each attribute, it is quite unlikely that they would all, by chance, choose the same detailed description of a given "non-difference".
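To put a rough number on that intuition: if the ratings were pure guesses, the chance of two listeners independently producing an identical multi-attribute profile collapses quickly as attributes are added. A minimal sketch in Python (the attribute names and the five-point scale are invented for illustration, not a real protocol):

```python
# Sketch: probability that two listeners, rating independently and at
# random, agree on an identical attribute profile. The attribute names
# and 5-point scale below are hypothetical illustrations only.

ATTRIBUTES = ["bass depth", "treble clarity", "spaciousness",
              "dynamics", "timbre", "localisation", "transparency",
              "warmth", "attack", "overall balance"]  # ~10 attributes
SCALE_POINTS = 5  # e.g. a 1..5 rating on each attribute

# If ratings were pure guesses, each attribute matches with p = 1/5,
# so a full-profile match has probability (1/5)**10.
p_full_match = (1 / SCALE_POINTS) ** len(ATTRIBUTES)
print(f"Chance of an identical 10-attribute profile: {p_full_match:.2e}")
```

So matching detailed descriptions across several listeners are very unlikely to be coincidence, which is exactly the discriminating power the qualitative approach relies on.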

Billshurv linked, some time ago, a technical note from Delta:

https://assets.madebydelta.com/docs/share/Akustik/TECH_Document_Perceptual_characteristics_of_audio_UK.pdf

Included is a description of their so-called "wheel of sound", a collection of sound-describing attributes. Usually a subset of around ten of these is used in a specific test. I've mentioned publications by Choisel and Wickelmaier about selecting a listener panel; they found that listeners agree quite easily on roughly nine main attributes for describing sound.

Crucial is the synchronization/calibration part, and the reference sound samples used for it must be available during the evaluation.

The EBU Tech 3286 recommendation provides another set of descriptors and categories, and likewise uses a spider-web visualization of the results.
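The data behind such a spider web is simply a mean rating per attribute across the panel, one spoke per attribute. A minimal aggregation sketch (the attribute names and scores are invented for illustration and not taken from EBU Tech 3286):

```python
# Hypothetical panel ratings (1..5) for one device on a few descriptors;
# a spider-web plot draws the per-attribute means around a circle.
ratings = {
    "clarity":      [4, 5, 4, 4],
    "spaciousness": [3, 3, 4, 3],
    "bass depth":   [2, 3, 2, 3],
    "timbre":       [4, 4, 5, 4],
}

# Mean rating per attribute across the four panel listeners.
profile = {attr: sum(scores) / len(scores) for attr, scores in ratings.items()}
for attr, mean in profile.items():
    print(f"{attr:12s} {mean:.2f}")
```

Comparing two such profiles spoke by spoke is what makes the replication check concrete: a real difference should reproduce on the same spokes across panels.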

Further corroboration might follow from additional preference tests at the end.
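Such a final preference test can be evaluated with a simple exact binomial test against the null hypothesis of "no preference". A sketch using only the Python standard library (the 9-of-10 count is a made-up example):

```python
from math import comb

def binomial_two_sided_p(successes: int, trials: int, p: float = 0.5) -> float:
    """Two-sided exact binomial p-value: total probability of all outcomes
    at least as unlikely as the observed count under the null hypothesis."""
    observed = comb(trials, successes) * p**successes * (1 - p)**(trials - successes)
    total = 0.0
    for k in range(trials + 1):
        pk = comb(trials, k) * p**k * (1 - p)**(trials - k)
        if pk <= observed + 1e-12:  # small tolerance for float comparison
            total += pk
    return min(total, 1.0)

# Hypothetical panel: 9 of 10 listeners prefer unit A over unit B.
p_value = binomial_two_sided_p(9, 10)
print(f"p = {p_value:.4f}")  # below 0.05 -> unlikely under "no preference"
```

With 9 of 10 listeners agreeing, the two-sided p-value is about 0.0215, so such a result would corroborate a difference found in the descriptive phase.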

If we look at many of the designers (not all, mind you), they're storytellers first. Everyone loves a good story, but it may or may not come with technical merit.

Few are willing to criticize "I had my buddy listen and he liked it," whereas if you put forward some sort of clear methodology, all of a sudden everyone's a critic.

Might be; we are all usually better at criticizing others. But, having participated in discussions about test methods for questionable effects for a long time now, I'd say the number of people who really appreciate justified criticism is surprisingly low. :)
 
Jakob2, you got it right on! I found this out when I lived for over a year with more than 100 classical musicians in Switzerland. They could hear 'through' a table radio for what they wanted to hear, and seldom complained about 'fidelity' except for differences in actual instruments, like violins. I even married one, and she was the same. She put up with my hi-fi, but with no real enthusiasm. However, when we went to a LIVE performance featuring a famous Strad violin, she knew immediately what 'knocked me over' when he first started to play.
 
While I agree that a certain level of "blinding" has to be included, I think you underestimate the qualitative approach, as it relies (for example) on detailed description of the differences.
If listeners have to compare two devices in an A/B comparison using a restricted set of attributes (synchronized/calibrated and agreed upon beforehand), perhaps with a scale for each attribute, it is quite unlikely that they would all, by chance, choose the same detailed description of a given "non-difference".

Just to be sure, I appreciate the value in qualitative testing (or at least qualitative data analysis), as much of the early R&D in my world is essentially that. Once a process/test becomes mature enough, we start hitting it with the harder metrics. It is necessary that the reviewers take the process seriously and bring some consistency to their evaluations (e.g., if we ask a reviewer for a subjective opinion of a component, ideally blinded to that component, does the opinion remain similar across multiple evaluations?).
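One crude way to quantify that consistency is to have a reviewer rate the same (blinded) component in two separate sessions and compare the attribute ratings, e.g. via the mean absolute difference. Everything below is an invented illustration:

```python
# Hypothetical test-retest check: the same blinded component rated on
# five attributes (1..5 scale) in two separate sessions.
session_1 = [4, 3, 5, 4, 2]
session_2 = [4, 4, 5, 3, 2]

# Mean absolute difference between sessions; 0 means perfectly consistent.
mad = sum(abs(a - b) for a, b in zip(session_1, session_2)) / len(session_1)
print(f"mean absolute difference: {mad:.2f}")
```

A reviewer whose repeat ratings drift by more than a scale step on average is probably not giving you usable qualitative data.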

My worry is more about the rigor of these evaluations. A short read through audio magazines leaves me completely bereft of confidence in the reviewers, as it generally feels as though one could swap the component name in a review for a similar-class component and change nothing else. In other words, are the controls and protocols for qualitative testing actually being followed?

Might be; we are all usually better at criticizing others. But, having participated in discussions about test methods for questionable effects for a long time now, I'd say the number of people who really appreciate justified criticism is surprisingly low. :)

Oh, I'm in full agreement that justified criticism is poorly received, and I'm guilty of it myself! My point was more about the social side of things: we are wont to get along, and it's easier for us to give a free pass to an anecdote (rather than pressing a person for their methodology and detailed notes from a qualitative evaluation) than it is to accept a more rigorous experiment for what it is (which, oftentimes, is that the methodology is too loose and the results too nebulous to draw any conclusions).
 