What is wrong with op-amps?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
This could become quite complex. We could end up with a clear preference in the use cases you propose, but someone could rightfully make the case that the use case is not realistic.

Sort of like trying to establish whether two car types have a different drive experience by looking at how much they drift when hard cornering at 120 miles/hr.

Jan

Sealed box, 10 inputs, 10 outputs; rank them by preference. If some are the same, indicate that. No absolutes. Data received will help shape additional experiments. We have three or four folks here who can contribute. There are no wrong answers. If all rank the same, then that is a valid answer.

But my money says the uA741, at least, will stand out.
 
Sealed box, 10 inputs, 10 outputs; rank them by preference. If some are the same, indicate that. No absolutes. Data received will help shape additional experiments. We have three or four folks here who can contribute. There are no wrong answers. If all rank the same, then that is a valid answer.

But my money says the uA741, at least, will stand out.

I see TL072 used in many circuits where they don't sound good to me. Any chance of including something like that?
 
that's also what the A/B switching is about - you can spend as long, and as many trials, unblinded as you like to establish any audible difference between A and B

Yes, learn the cheats, like the sound of the relay click. Eliminating irrelevant issues, i.e. those that would have little or nothing to do with sitting down and listening to music at a normal level, is just too much work. Once one of the parties feels they have no out, they panic and leave the test.
 
At this point it is entirely irrelevant which opamps are included, or how many.

What matters is whether anyone, or preferably a number of people, manages to discriminate between them in a consistent way. Even a reasonably consistent way.

If one looks at it as a graph with data points, as long as the data points (each being an opamp "rank") for each participant follow the same trend, and are not all deemed to be indistinguishable, then the goal of this test (imo) has been met.

Actually, if any TWO opamps are consistently deemed to sound different, the most general goal has been met. Of course those might be a 709 and an AD797, and between those two possible points there is more to "peel back".

Again, this to me is a very preliminary step/test. Basic, important, but preliminary.
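Since the stated goal is agreement of rankings across participants, it may be worth noting that this is exactly what Kendall's coefficient of concordance (W) measures. Below is a minimal sketch in Python; `kendalls_w` is a made-up name, and it assumes complete rankings with no ties:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m judges ranking n items.

    `rankings` is a list of m lists; each inner list gives one judge's
    rank (1..n, no ties) for each of the n items, in the same item order.
    Returns W in [0, 1]: 1.0 means every judge produced the same ranking,
    values near 0 mean the rankings look like independent shuffles.
    """
    m = len(rankings)
    n = len(rankings[0])
    # Rank sum per item across all judges.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean = m * (n + 1) / 2
    s = sum((t - mean) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

For example, three listeners who all rank four devices identically give W = 1.0, while three unrelated shuffles give a value near zero.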
 
Yes, learn the cheats, like the sound of the relay click. Eliminating irrelevant issues, i.e. those that would have little or nothing to do with sitting down and listening to music at a normal level, is just too much work. Once one of the parties feels they have no out, they panic and leave the test.

Scott, let's drop the ABX/DBT debate on this?

There is no requirement for fast or slow switching/comparisons.
Take days or weeks to do things with the choices, if you or one wishes to. Or seconds.

Why are you worried about any of this?
Anything that gets done now, is more than nothing.
If there are issues, problems or faults, it all can be revisited, improved, changed, whatever... you could even participate if you wanted to, I suppose (Ed's supplying the hardware, not I).

_-_-
 
Yes, learn the cheats, like the sound of the relay click. Eliminating irrelevant issues, i.e. those that would have little or nothing to do with sitting down and listening to music at a normal level, is just too much work. Once one of the parties feels they have no out, they panic and leave the test.

Research shows that everybody cheats and everybody lies. Everybody. When they do, they usually only do a little bit of it, and they find some plausible-in-the-moment-to-them reason to justify it to themselves.
 
AB/X and DBT are tools with necessary conditions for correct application - if you learn of a "cheat", some information "leak" violating DBT, then you need to refine the test

likewise, some subjectivists complain about the "ambush DBT" - a lack of training and a confusing, "stressful" protocol seemingly intended to prevent their hearing differences that, in their uncontrolled listening experience, are "night and day"

the conditions complained about are seldom actual requirements of a proper listening test - subjects and experimenters should practice, adjusting conditions for maximum actual hearing discrimination


Knowing what to expect, how the setup is, 'primes' you about what to focus on etc and is likely to point your attention to a certain direction which definitely is not 'blind'.

Jan


you are misusing "blind" in the listening test context - focus is definitely a human perceptual/neural processing reality that can be usefully engaged in determining what can be heard
maybe you are worried about prematurely using "storied" differences that could lead to focusing on something that's simply not there and cause the subjects to miss a difference they might pick up with appropriate focus

if so, I agree that "focus" could be two-edged - but training, searching "by ear" for differences that seem repeatable in the A/B phase, should be open-minded and exploratory

for casual music enjoyment you may not care that some difference requiring detailed focus could be reliably detected by AB/X - but then that's a different question: accepting differences above the human hearing threshold and then ranking their relevance
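For whoever ends up scoring the AB/X phase, the usual check is a one-sided binomial test against the guessing null (p = 0.5). A minimal sketch, with `abx_p_value` a hypothetical name:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value for an AB/X run: the probability of getting at
    least `correct` answers right out of `trials` by guessing alone
    (each trial an independent coin flip, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
```

For example, 12 correct out of 16 trials gives p ≈ 0.038, conventionally enough to reject pure guessing; 10 out of 16 gives p ≈ 0.23 and proves nothing either way.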
 
Sealed box, 10 inputs, 10 outputs; rank them by preference.

Interesting idea.

Whatever tests you do, folk will measure and analyse. They always do.

A sealed box, resin-filled and with circuitry deliberately included to give nominally identical HF roll-off, would be mandatory... the performance of the 741 would be so far below what was needed that it would stand out a mile under test. You might stand a chance with other devices, certainly if the actual hidden circuitry was unknown. Make it known and some will build and test with the actual devices and 'guess correctly'.
 
Knowing what to expect, how the setup is, 'primes' you about what to focus on etc and is likely to point your attention to a certain direction which definitely is not 'blind'.

Jan

That's getting rather close to saying something like studying for a test at school is cheating.

If someone wants to learn and practice to discriminate between two opamps, that should be fine, if what we want to know is their best possible performance at the task. That way if they make claims about hearing things from opamps they have studied with, our test results can be directly applicable to evaluating the plausibility of the claims.
 
That's getting rather close to saying something like studying for a test at school is cheating.

No, not cheating, but if the teacher hints at the areas he will question you about, you can focus on those areas and possibly improve your score.

It boils down to: can you hear the difference between two opamps, or can you hear the difference between an NE5534 and an OPA134?

And as Mooly noted, with all these details it's trivial to duplicate it and identify simply by measurements. It's blown before it started, unfortunately.

Jan
 
It boils down to: can you hear the difference between two opamps, or can you hear the difference between an NE5534 and an OPA134?

Do you mean to say you are sure that practicing or not has no bearing on how well you might do at that?

People who learn to transcribe recorded music, or who learn to distinguish the subtle inflections of a new spoken language, usually benefit greatly from practice.

Why would you expect it to be different with opamp sounds? They can be pretty subtle.
 
Look, if we as engineers are not up to the task of designing test equipment suitable for the task at hand, should we be blaming the test subjects?

Suppose we are testing with gerbils, are you going to complain they are outsmarting you? (Okay, sorry if that's going too far...)
 
No of course not. I do not get the point of your post.

Jan

It seems like you are concerned that allowing test subjects to practice using the test apparatus will allow them to find a way to cheat too easily. However, I think they need to be allowed to practice in some kind of realistic way. The best solution is probably a well-designed test apparatus that is not easily cheated. If that is not possible, then maybe a similar practice apparatus can be provided for practice sessions.
 
It seems like you are concerned that allowing test subjects to practice using the test apparatus will allow them to find a way to cheat too easily. However, I think they need to be allowed to practice in some kind of realistic way. The best solution is probably a well-designed test apparatus that is not easily cheated. If that is not possible, then maybe a similar practice apparatus can be provided for practice sessions.

Mark that is not at all what I meant. I think we have a major disconnect here, apologies.

Jan
 
And as Mooly noted, with all these details it's trivial to duplicate it and identify simply by measurements. It's blown before it started, unfortunately.

Jan

What we need is the proverbial perfect take-home exam. In second year we had a take-home exam; my friends and I, who had worked all the problem sets together all year, had developed a common set of variable names. We got embroiled in a cheating scandal in which the teacher ended up being chastised for holding a witch hunt and wasting the dean's time.
 
Mark that is not at all what I meant. I think we have a major disconnect here, apologies.

Jan

Sorry, I may have been guilty of conflating comments from multiple posters. There have been concerns from some about possible cheating, and other questions about what constitutes "blind" testing. On the issue of blind testing, there were multiple posts about what is fair or proper and what is not. With both sets of concerns in mind (cheating, and practicing or otherwise finding some way to do better on the test while having it still be blind), it seemed like both issues were intertwined and needed to be addressed in some simultaneous way.
 
What we need is the proverbial perfect take home exam.

Maybe a software solution could come reasonably close. Hi-rez wav files have sometimes been able to convey audible distortion from hardware. Of course, in that case there would always be the limitations of each participant's playback equipment. To help get a handle on that, offering wav files with known distortion levels might allow people to see what is detectable to them using whatever equipment choices may be available for trial. To prevent some potential issues that might occur using wav files, multiple wav files of different content could be made for each piece of test circuit hardware. Then the user task might be to identify which wav files were processed by the same hardware.
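As a sketch of how such calibration files with a known distortion level might be generated (Python standard library only; `tone_with_h3` and `write_wav` are made-up names, and a single added third harmonic is just a stand-in for whatever distortion profile is actually wanted):

```python
import math
import struct
import wave

def tone_with_h3(freq=1000.0, h3=0.01, sr=48000, seconds=1.0):
    """A sine tone with a third harmonic at a known relative level h3.

    The sum is scaled by 1/(1 + h3) so the signal never exceeds +/-1.0.
    """
    n = int(sr * seconds)
    peak = 1.0 / (1.0 + h3)
    return [peak * (math.sin(2 * math.pi * freq * t / sr)
                    + h3 * math.sin(2 * math.pi * 3 * freq * t / sr))
            for t in range(n)]

def write_wav(path, samples, sr=48000):
    """Write float samples in [-1, 1] as a mono 16-bit PCM wav file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))
```

The relative level of the 3 kHz component can then be verified with a single-bin DFT before the files are distributed, so participants get stimuli with an independently confirmed distortion figure.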

Otherwise, it sounds like we are talking about take-home analog hardware. To make cheating harder in that case, besides sealed units, some obfuscation might be designed in, such as circuitry to clip or otherwise drastically distort overdriven input signals before the test amplifier, so as to obscure more subtle amplifier characteristics under such conditions. It could have a noise gate too, to mute the output in the absence of sufficient input signal, in order to complicate noise analysis. However, all the anti-cheat stuff gets rather messy, could add additional distortion of its own, etc. And someone may still find a workaround.

Software seems to me like the best thing to experiment with first, to see how far it's possible to go with that. If people can't hear very small distortions, then recording and playback of the analog signal may be undistorted enough to be usable for our purposes. After all, we can often hear some distortion from the upstream electronics despite the final distortion of the loudspeakers.

I guess it depends on what we are trying to accomplish. If the purpose is to show it is possible to make a line level opamp amplifier with a gain of 10 that is completely inaudible, then that probably requires take home analog hardware.

On the other hand, if we are going to put the opamps in circuits demanding enough that one or a few of the circuits may be reasonably expected to produce some low level of barely audible distortion, then software may be fine.

And in reality, most commercially produced audio products are built to a price point, and not for performance at any cost. In such circuits, some distortion may be audible, for example when one opamp is asked to do what 3 could do better, and in that case, the particular opamp may be distinguishable. If so, people who hear that may not be crazy. And not crazy despite the part number of the particular opamp being associated with a data sheet that says it has impressive specs. In such circuits it's also possible swapping opamps may change the sound some.
 
Perhaps it's better looked at from the specificity of the test: do we want all-cause identification of n different opamps, or do we want identification of n different opamps purely from musical selections?

I'd assume the latter would be the more important test if we want to elucidate our listening experience. In that case, protocols need to be in place to mitigate confounding factors.
 