Placement of resistors in signal path.

Please see the summary of test limitations attached below, excerpted from "An Overview of Sensory Characterization Techniques: From Classical Descriptive Analysis to the Emergence of Novel Profiling Methods".

IME the memorization part can be the hardest to learn, particularly for very low-level, weird, or unusual electronic sounds.

Also, as respected forum member PMA observed after personally learning how to pass ABX tests, the protocol requires fatiguing, sustained concentration, in contrast to the ease of sighted listening. People are not usually so quick to accuse PMA of using a 'playbook.'

EDIT: For the record, and in response to other accusations, I have said on numerous occasions that I recommend periodic blind testing for everyone to help maintain calibration as to what is real and what isn't.
The attached chart showing the limitations of several types of tests contains several glaring inaccuracies. For ABX, it lists "No guidance over an attribute to focus on" as a limitation, which is quite inaccurate, both by your own statements and by those contained in the Clark paper. In fact, you can, and should, train listeners to recognize attributes. ABX is then assumed to be "less sensitive", when it actually is not unless done with too few trials, too few testers, and a lack of training. Stating that a limitation is "relies on assessor's memory" is just silly. The comparisons are instant, and while auditory memory is involved, the tester may evaluate each choice for as long as he wishes.
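To put a rough number on the "too few trials" point, here is a minimal sketch (Python; the example trial counts are made up purely for illustration) of the one-sided binomial p-value conventionally used to score an ABX run:

```python
# Rough sketch: one-sided binomial p-value for an ABX run.
# Assumes each trial is an independent 50/50 guess under the null hypothesis;
# the trial counts below are made-up examples, not data from any real test.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting at least `correct` right out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(9, 10))   # ~0.011 -- hard to explain by guessing
print(abx_p_value(12, 16))  # ~0.038
print(abx_p_value(6, 10))   # ~0.38  -- too few trials to say much either way
```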

Double blind testing is the gold standard in many types of research, including drug and medical testing.
 
As Mark often tells forum members, "You are always welcome to visit and see for yourself". Maybe you can invite him over and conduct ABX for him... :bulb:
I suspect that would be a futile waste of time. It's a belief system that isn't easily changed when someone doesn't want to change. Let's also understand that a good ABX test is non-trivial: it requires careful setup involving precision level matching, and producing significant data takes time and, ideally, a group of testers. It's frankly a lot of work to go through just to change one mind. Not worth my time.
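For what it's worth, the level-matching arithmetic itself is trivial; the work is in the measuring. A minimal sketch (Python; the hypothetical meter readings and the commonly cited 0.1 dB target are illustrative assumptions, not figures from an actual test):

```python
# Minimal sketch of the level-matching step: convert two measured RMS voltages
# to a dB difference. The 0.1 dB tolerance below is a commonly cited target,
# and the meter readings are hypothetical.
from math import log10

def level_diff_db(v_a_rms: float, v_b_rms: float) -> float:
    """Level difference in dB between devices A and B at the same measurement point."""
    return 20 * log10(v_a_rms / v_b_rms)

diff = level_diff_db(1.995, 2.000)   # hypothetical DMM readings in volts RMS
print(f"{diff:+.3f} dB")             # about -0.022 dB
print("matched" if abs(diff) <= 0.1 else "re-trim levels")
```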
 
@jaddie, Thank you for the last two posts above. Written as they are I am more on the side of agreeing with you.

If we can keep the conversation cooled down some, there is something I would like to ask your opinion on. In another thread at: https://www.diyaudio.com/community/...-can-it-make-a-difference.384031/post-6972024 ...Post #160, Lars Risbo, PhD, of Purifi, and previously with Texas Instruments, commented in his second paragraph about audibility.

Similarly, in an ESS presentation document they talk about some of what they think audiophiles hear that maybe other people don't. The document can be viewed at: https://www.yumpu.com/en/document/read/23182504/noise-shaping-sigma-delta-dacs-ess-technology-inc
Starting around page 28 of the slide deck they begin talking about one thing that some people can hear. They talk about some additional problems starting around page 39.

In both of the above cases, what is being discussed looks to me like (1) things some people may hear and others may not, and (2) the things that may or may not be heard don't seem to be primarily a function of FR or of audibility thresholds. Rather, the primary factor appears to be something more along the lines of how individual brains separate signal from noise (where 'noise' is used here in the broad sense of any unwanted signal).

An example of such a separation task (maybe a kind of wetware filter function) might be understanding what your friend across the dinner table in a loud restaurant is saying. Obviously FR and hearing thresholds are factors too, but they don't seem to be the main factor involved in extracting the speech signal in this case from the unwanted noise.

Then my question would be to the effect that, if we assume for a moment that the described effects truly occur, what does that say about the limits of audibility established by ABX testing when only fairly small fractions of the human population are used as test subjects? How would a researcher discover what the development team at Purifi might be able to hear that most people don't hear? How would a researcher know about the ESS executives who have been trained to hear something most other people don't hear?

The reason I'm asking is that I am beginning to suspect that, under favorable conditions, say, low stress and use of a familiar reproduction system, some audiophiles may have brains that have learned to perform signal extraction tasks (of low-level distortions and/or noises) that most brains have not learned. If true, maybe that could account for some of the unexplained claims, counter-claims, arguments, anger, rants, etc. that have been occurring in audio forums for decades?

Another thought along similar lines: if something called 'expectation bias' occurs in hi-fi listening, how do we know it's not that of people expecting to hear only transparent reproduction? How do we know if, for them, only the pure music is the signal, and the little reproduction imperfections are the noise to be separated out?

Having written those last couple of paragraphs, I expect to be attacked for having the gumption to question some other people's fixed beliefs. As you pointed out already, it's not worth the time and effort to try to prove something to a few very vocal people who don't want change. You're not the first person to make that particular observation, BTW. I've said it before too.
 
looks to me like (1) things some people may hear and others may not, and (2) the things that may or may not be heard don't seem to be primarily a function of FR or of audibility thresholds. Rather, the primary factor appears to be something more along the lines of how individual brains separate signal from noise (where 'noise' is used here in the broad sense of any unwanted signal).
Obviously FR and hearing thresholds are factors too, but they don't seem to be the main factor involved in extracting the speech signal in this case from the unwanted noise.
Looks a certain way to you, and seems to be? Do you have anything other than speculation? You know, like evidence.
some other people's fixed beliefs.
Not beliefs but observations. Numbers of DBTs on audio cables, DACs, and amps have been conducted and the results reported. They proved the boutique audio business wrong. OTOH, those boutique audio businesses haven't reported DBTs supporting their products' supposedly superior sound quality. Why? Either they haven't done such tests, or they have done them but the results aren't in their favor, so they swept them under the rug.
 
@jaddie, Thank you for the last two posts above. Written as they are I am more on the side of agreeing with you.

If we can keep the conversation cooled down some, there is something I would like to ask your opinion on. In another thread at: https://www.diyaudio.com/community/...-can-it-make-a-difference.384031/post-6972024 ...Post #160, Lars Risbo, PhD, of Purifi, and previously with Texas Instruments, commented in his second paragraph about audibility.
He's reporting their findings. Nothing to disagree with. He talks about "training" for better difference detection, and I don't disagree, never have.
Similarly, in an ESS presentation document they talk about some of what they think audiophiles hear that maybe other people don't. The document can be viewed at: https://www.yumpu.com/en/document/read/23182504/noise-shaping-sigma-delta-dacs-ess-technology-inc
Starting around page 28 of the slide deck they begin talking about one thing that some people can hear. They talk about some additional problems starting around page 39.
I don't have time to go through that right now...work demands. Perhaps later.
In both of the above cases, what is being discussed looks to me like (1) things some people may hear and others may not, and (2) the things that may or may not be heard don't seem to be primarily a function of FR or of audibility thresholds.
Agreed with (1); that's never been in dispute. I'm not sure about (2) being a function of only FR, though FR differentials are audible as a function of the total area of the response curve affected. It's bandwidth * gain = audibility. And "audibility thresholds", as related to nonlinear distortion, is the holy grail we don't quite have quantified, though we do understand audibility is a function of many factors including signal type, harmonic complexity, masking by other signals, etc. Complicated to quantify, but that doesn't mean we don't know anything.
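As a crude illustration of what I mean by bandwidth * gain (Python; the scoring function, the example deviations, and any implied threshold are made up for illustration, this is not a validated metric):

```python
# Crude illustration of the "bandwidth x gain" idea: score an FR deviation by
# (width of the affected band in octaves) * (level change in dB).
# The example figures are illustrative assumptions, not measured data.
from math import log2

def deviation_score(f_low_hz: float, f_high_hz: float, delta_db: float) -> float:
    """Width of the affected band in octaves times the magnitude of the gain change in dB."""
    return log2(f_high_hz / f_low_hz) * abs(delta_db)

print(deviation_score(2000, 4000, 3.0))   # 1 octave * 3 dB -> 3.0 (broad shelf, easy to hear)
print(deviation_score(9500, 10500, 3.0))  # ~0.14 octave * 3 dB -> ~0.43 (narrow notch, much harder)
```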
Rather, the primary factor appears to be something more along the lines of how individual brains separate signal from noise (where 'noise' is used here in the broad sense of any unwanted signal).
More like individual ability/aptitude along with training. Just like some people can't hear pitch well, and others have perfect absolute pitch. Abilities are varied, but abilities can be improved by training. It's entirely valid to train testers.
An example of such a separation task (maybe a kind of wetware filter function) might be understanding what your friend across the dinner table in a loud restaurant is saying. Obviously FR and hearing thresholds are factors too, but they don't seem to be the main factor involved in extracting the speech signal in this case from the unwanted noise.
Actually the "cocktail party" hearing ability is far more complex than FR and thresholds. It's very much spatial hearing and a particular type of perception that's wrapped in cognition. Even those with impaired hearing can retain, or lose, the cocktail party effect somewhat independently of hearing loss.
Then my question would be to the effect that, if we assume for a moment that the described effects truly occur, what does that say about the limits of audibility established by ABX testing when only fairly small fractions of the human population are used as test subjects? How would a researcher discover what the development team at Purifi might be able to hear that most people don't hear?
That's a very different research project than attempting to determine a particular mechanism's audibility. And you have to pick your "small fractions" well. One test I did involved a wide cross-section of economic strata, listening abilities, ages, and sexes. It's involved, but not impossible. You always have to be careful about extrapolating to a generality.
How would a researcher know about the ESS executives who have been trained to hear something most other people don't hear?
That would be part of the test background; you have to ask the right questions, and something like training background would be pretty important. It's also something you can pre-evaluate with another kind of test.
The reason I'm asking is that I am beginning to suspect that, under favorable conditions, say, low stress and use of a familiar reproduction system, some audiophiles may have brains that have learned to perform signal extraction tasks (of low-level distortions and/or noises) that most brains have not learned. If true, maybe that could account for some of the unexplained claims, counter-claims, arguments, anger, rants, etc. that have been occurring in audio forums for decades?
Yes, that's true, but with any group you also have to consider bias, which is a huge factor in what is perceived. The placebo effect is well known, and proven to swamp out actual observations when the differentials are small.
Another thought along similar lines: if something called 'expectation bias' occurs in hi-fi listening, how do we know it's not that of people expecting to hear only transparent reproduction?
Yes, bias is powerful.
How do we know if, for them, only the pure music is the signal, and the little reproduction imperfections are the noise to be separated out?
I'm very cautious about "pure" as it relates to a signal though. It's never really anything like "pure" when it comes to a recording, but we're going for "unmodified" relative to the best we can get of the storage medium. But pure...probably not the best word.

As to someone separating imperfections from "noise", well, just ask them. In the end, music enjoyment is affected by a lot of things, or not. All systems are big signal modifiers, just some less big. Some get full enjoyment from a rather distorted system.
Having written those last couple of paragraphs, I expect to be attacked for having the gumption to question some other people's fixed beliefs. As you pointed out already, it's not worth the time and effort to try to prove something to a few very vocal people who don't want change. You're not the first person to make that particular observation, BTW. I've said it before too.
Actually I hope the dialog is helpful and continues.
 
Sagen, there is no "signal path". It's a current loop. Resistors: if changing a resistor for a different type of the same value objectively changes the sound, get your scope and check that node for instabilities. Fix that. You will enjoy richer tone colours and all of that with the cheapest resistors you can find.
 
Sagen, there is no "signal path". It's a current loop. Resistors: if changing a resistor for a different type of the same value objectively changes the sound, get your scope and check that node for instabilities. Fix that. You will enjoy richer tone colours and all of that with the cheapest resistors you can find.
Resistors are not created equal - some are more nonlinear than others. We had a whole production batch of Eigentakt amps failing the end-of-line test because the wrong type of resistor was used in one single location on the PCB.
 
Resistors are not created equal - some are more nonlinear than others.
That would be true of every component, passive or active. The real question is one of degree. That makes the statement above meaningless without numbers and specifics.
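To illustrate what I mean by degree, here's a toy model (Python; the linear voltage-coefficient model and the coefficient values are illustrative assumptions, not data for any specific part):

```python
# Toy model to put an order of magnitude on "degree": a resistor whose value
# varies linearly with the instantaneous voltage across it, R(V) = R0*(1 + c*V).
# For a sine of peak amplitude A, the current then carries a second harmonic
# at a relative level of roughly c*A/2. The coefficients below are guesses
# for illustration, not measured data for any specific part.
from math import log10

def hd2_from_voltage_coeff(c_ppm_per_volt: float, v_peak: float) -> float:
    """Approximate 2nd-harmonic level (dB relative to the fundamental) for the toy model above."""
    hd2 = (c_ppm_per_volt * 1e-6) * v_peak / 2
    return 20 * log10(hd2)

print(hd2_from_voltage_coeff(1.0, 10.0))   # ~ -106 dB
print(hd2_from_voltage_coeff(50.0, 10.0))  # ~ -72 dB
```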
We had a whole production batch of Eigentakt amps failing the end-of-line test because the wrong type of resistor was used in one single location on the PCB.
Misapplication of a component type often results in fireworks. Was this failure specific to resistor nonlinearity, or just a misapplication?
 
Meaning: was it "wrong" because it was, say, a 1/8 W type used where a 2 W type was needed (i.e. a technical difference), or was it "wrong" because it was the wrong colour or built during the "wrong" phase of the Moon?

If the former, it can very well make an end-of-line TEST fail; it will most probably involve test instruments, measurements, even audition tests detecting audible differences.

If the latter, meaning they "failed" after somebody found "oh my God, we used the wrong COLOUR/BRAND one!!!!", then it can't be taken seriously.
 
You also really have to be able to switch rapidly between configurations to do a good job of comparison listening, as auditory working memory in the brain is limited to roughly 7 seconds or so.
When I was 16, I worked at an electronics store. The Bose stuff was far away from all of the other speakers, so you'd magically think the AM5 was worth the price because you forgot what you just heard already 🙂
 
Resistors are not created equal - some are more nonlinear than others. We had a whole production batch of Eigentakt amps failing the end-of-line test because the wrong type of resistor was used in one single location on the PCB.

Please elaborate: did a letter get switched around and thick-film instead of thin-film resistors were used? I don't think that is the point here; most interested people know about the basic differences in SMD resistor technology.
 
When I was 16, I worked at an electronics store. The Bose stuff was far away from all of the other speakers, so you'd magically think the AM5 was worth the price because you forgot what you just heard already 🙂
I do not cheat myself, nor am I imbued with supposed "Audio Superiority", golden ears, or similar nonsense; I'm rather a quite practical, feet-on-the-ground problem solver. So, knowing my own limitations, for speaker comparison (both for my own and for customers' benefit) I built a two-speaker cabinet with a footswitch-controlled relay inside, so I can instantly switch back and forth between two speakers.
It's the only way to catch/show even subtle differences, such as a couple more/fewer wire turns in a voice coil, different former materials, adhesives (both type and quantity), even different dust caps, different cone batches, fresh vs. softened (a.k.a. "burned-in") speakers, etc.

Tests separated in time become unreliable by comparison, unless the differences are gross.
 
It would be useful to know HOW each resistor type causes the effects described. I mean, do they have inductance or something? I don't understand how resistor A is refined and clear, while resistor B is smooth and even-sounding. Or something. Because it seems like they have this effect on the sound regardless of where they are in the signal path or what job they are doing. I have nothing in my electronics experience to make that idea work.
That's because you actually have electronics experience/knowledge, not just an overactive imagination.
 
@Enzo, For a guess, and it is just a guess, if resistors have some unique sound it's probably from some small second-order physical effects. Say, nickel end caps could maybe add a little low-level IMD. Also, besides the thermal noise (Johnson noise) resistors have just sitting there, they may also produce some excess noise, say, possibly related to signal current and/or maybe from temperature variation with signal level, etc. Some of this excess noise may or may not be correlated with the audio signal in a way that affects perception in some people yet not in others. IMHO we're sort of getting into an area that has not been fully researched yet. Hopefully continuing research in the field of 'Auditory Scene Analysis' can shed a little more light on such questions over the coming years.
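For anyone wanting a number for the "just sitting there" part, the Johnson-noise formula is easy to evaluate; the excess-noise part is the harder, device-dependent piece. A quick sketch (Python; the 10 kohm value and 20 kHz bandwidth are arbitrary examples):

```python
# Johnson (thermal) noise of a resistor: v_rms = sqrt(4*k*T*R*B).
# Example values (10 kohm, 20 kHz audio band, 300 K) are arbitrary illustrations.
from math import sqrt

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(r_ohms: float, bandwidth_hz: float, temp_k: float = 300.0) -> float:
    """RMS thermal noise voltage of an ideal resistor over the given bandwidth."""
    return sqrt(4 * K_BOLTZMANN * temp_k * r_ohms * bandwidth_hz)

v_n = johnson_noise_vrms(10e3, 20e3)
print(f"{v_n * 1e6:.2f} uV rms")  # about 1.8 uV rms
```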


Possible reading material for those interested:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5945068/

https://www.frontiersin.org/articles/10.3389/fnins.2016.00524/full
Maybe, maybe, maybe. It can all be measured, and nothing here is new. There are very few instances where resistors (and zero when their specs are the same) make an audible difference, and those extreme examples are well known to competent designers. Blind test, or you're just another mouthpiece.
 
Hi, everyone. Let's try this dilemma then. I have only two sets of resistors and can't buy others. So, should I put the type of resistor that is more stable and clean sounding as the first resistor on the DAC board where I2S comes in, or as the last one before the DAC chip? I'm talking about the 22 ohm resistors on the board. I try to keep this easy, but I see this post taking off now. AND, that is OK, a lot to learn here for us all I think ( : Frank
 
Well, he plausibly explained how two different resistors could sound different somehow in a specific application. But the general claim I really want explained is how said resistors have a predictable effect wherever they are used in a signal path. At all the various points in circuits, at different signal levels, with different currents, in filters, in attenuators, in tuned circuits, in your mother's coleslaw, they always make things warmer, or more defined, or less salty.
 
put the type of resistor that is more stable and clean sounding as the first resistor on the DAC board where I2S comes in, or as the last one before the DAC chip?

This is what confuses me: you are talking about a resistor in the digital domain, not in the audio circuits. I don't follow how a resistor affects the sound in the digital circuits. Please enlighten me.