Audio Wisdom: Debunking common myths

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
janneman said:


John,

I did mil hardware burn-in too. But only to weed out the infant failures, so that the rest of the population was more reliable. NOT to 'improve' the equipment.

Jan Didden

Hi Jan, happy new year.

Agreed, the vast bulk of burn-in is to clear the front end of the bathtub. Unfortunately, there are sometimes circuits which are so darn fussy, relying on offsets or leakages, that no matter how well designed, on occasion there is reliance on some device parameter. A typical example is a large analog die with a high-gain differential front end, under flexure stress from the epoxy die attach, where the epoxy "gives" a little during the burn-in process. Or some power die attached with 50/50 Pb/In, where the die attach undergoes creep under dissipation at 125 C ambient, altering the gain across the chip.

What is scary is the possibility that there is some silly lateral transistor somewhere with a bad oxide passivation and potassium or sodium floating around, preventing the circuit from stabilizing. Luckily, this is usually found as light sensitivity, and can be culled out.

Good solid engineering practice is to design out dependence on device parametric shift. But on occasion, when working at the edge of SOTA ..(unfortunately, the other edge), these things happen.

Cheers, John
 
AX tech editor
Joined 2002
Paid Member
Interesting info that, John. Indeed, we also called the failure curve the 'bath tub' curve. You want to do the burn-in to get down to the flat portion, but not so far that you get into the region where failures rise again due to old age.

How this got hi-jacked into audio and totally misunderstood as improving equipment I don't know either. I have a tendency to blame marketing for cases such as this but that is just an educated guess. ;)

Jan Didden
 
Tony said:


I worked for Advanced Micro Devices in Manila from 1979 to 1988 and we did burn-ins of semicon devices as part of the testing process. What I understood about this process is that infant-mortality parts are discovered and eliminated, not let out of the production doors...

how this process got hijacked into audio, I don't understand...:confused:

Me neither - a terrible confusion of terms.

I also worked for semi and ATE companies and the concept of "accelerated life test" and "burn-in" are well known.

The former is to establish statistical long term reliability data, the latter to push the potential early failures down on to the flat bottom of the "bath-tub" curve.
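The "push the early failures off the front of the bath-tub" idea can be sketched numerically. Below is a minimal simulation, assuming a hypothetical mixed population of weak and strong units, both with decreasing hazard (Weibull shape < 1, i.e. infant mortality); all lifetimes, proportions, and hours are made-up illustration numbers, not data from any real process:

```python
import random

def weibull_failure_time(shape, scale):
    # random.weibullvariate(alpha, beta) draws a Weibull lifetime
    # with alpha = scale and beta = shape; shape < 1 means the
    # hazard rate falls over time (infant-mortality region).
    return random.weibullvariate(scale, shape)

def surviving_fraction(times, burn_in, horizon):
    """Of units that survive the burn-in, what fraction also
    survive to the service horizon?"""
    survivors = [t for t in times if t > burn_in]
    if not survivors:
        return 0.0
    return sum(t > horizon for t in survivors) / len(survivors)

random.seed(0)
# Hypothetical population: 10% weak units (short characteristic life),
# 90% strong units (long characteristic life).
pop = [weibull_failure_time(0.5, 50) for _ in range(1000)] + \
      [weibull_failure_time(0.5, 50000) for _ in range(9000)]

no_burn = sum(t > 1000 for t in pop) / len(pop)
with_burn = surviving_fraction(pop, burn_in=100, horizon=1000)
print(f"fraction surviving to 1000 h, no burn-in:        {no_burn:.3f}")
print(f"fraction surviving to 1000 h, after 100 h screen: {with_burn:.3f}")
```

The burned-in population ships with a higher survival rate purely because the weak units were culled, which is the whole point: the screen selects, it does not "improve" anything.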

This break-in idea of plugging a fridge into a power cord until it sounds "good" is weird, IMO. :confused:
 
jneutron said:


:confused: :confused: :confused:

To quote the famous philosopher...Earsplittenloudenboomer..

Que??

As an inguneer who read Popper a coupla decades ago, what does I'z do?? I check da wiki-ped...

http://en.wikipedia.org/wiki/Karl_Popper

Wherein, i findz dis gem..

Quote within Wikipedia (sans links):

""He strongly disagreed with Niels Bohr's instrumentalism and supported Albert Einstein's realist approach to scientific theories about the universe.""

Hence, my relativistic pun..

Cheers, John

ps..you're not saying "gasp" that wikipedia is wrong, are you?? What'd be the purpose of life without wiki??

I know it was a pun. But that's the problem.

Freud's concepts are frequently being distorted. No psychologist would use "anal" the way it's popularly used.

By the same token I would expect an engineer-type to safeguard Einstein. Einstein's theory of relativity has no relation to "everything is relative," which I'm sure you know. Still, this is another lie that has found its way into academia. Einstein's theory is said to have inspired the "postmodern philosophy of relativism." Not only is that complete rubbish, no philosopher would ever use the definition of "relativism" in this context. That's a bogus construct of the reactionary and anti-modernist political movement known as Intelligent Design.

I will not accept the idea that I'm being too serious here. To do that would be to concede defeat. I won't do that. I think it's our responsibility to safeguard the truth.

Edit: My post gets censored here, to my surprise. I originally used another, stronger word for "distorted."
 
phn said:
By the same token I would expect an engineer-type to safeguard Einstein. Einstein's theory of relativity has no relation to "everything is relative," which I'm sure you know. Still, this is another lie that has found its way into academia. Einstein's theory is said to have inspired the "postmodern philosophy of relativism." Not only is that complete rubbish, no philosopher would ever use the definition of "relativism" in this context. That's a bogus construct of the reactionary and anti-modernist political movement known as Intelligent Design.

I will not accept the idea that I'm being too serious here. To do that would be to concede defeat. I won't do that. I think it's our responsibility to safeguard the truth.

To quote one to whom I trust with my life: "Pick the battles that need to be fought."

The understandings of Einstein do not require constant vigilance on my part. The philosophical aspects I do not address, but leave to philosophers.

phn said:
I know it was a pun. But that's the problem.

Sticking tenaciously to hard line beliefs is not the best way to reach an audience. You might as well argue about the definition of "is".

Humor is one of the best ways to teach a somewhat difficult subject to laymen. Stern adherence is not necessary; knowledge and confidence show through best when the teacher can laugh and identify with the audience. If I had said what you did, I would have wanted someone like you to tell me to lighten up and smell the roses..

Cheers, John

ps..they have an auto-censor here..
 
cliffforrest: "... This break-in idea of plugging a fridge into a power cord until it sounds "good" is weird. IMO ..."

janneman: " ... How this got hi-jacked into audio and totally misunderstood as improving equipment I don't know either. ..."

jneutron: "... What is scary is the possibility that there is some silly lateral transistor somewhere with a bad oxide passivation and potassium or sodium floating around, preventing the circuit from stabilizing. ... Good solid engineering practice is to design out dependence on device parametric shift. ..."

tony: "... we did burn-ins of semicon devices ... how this process got hijacked into audio, I don't understand ..."

I guess we can all agree that burn-in works for mechanical devices, like the break-in period on a new car. I suppose that most here will also agree that digital and analog electronics should at least be powered up, if only momentarily, to weed out the bad assemblies and potentially failing manufactured parts.

I would argue that, for example, simple changes in resistance over time due to internally generated heat would justify an extended "burn-in". These changes in resistance might occur in a discrete part (resistor, capacitor or whatever) or at a "cold solder junction" on a circuit board, and might not show up during a simple power-up intended to give a "go / no go" result (resistance or conductive states being dependent not only on temperature, but also on humidity and other factors like changes in galvanic action, etc.). Another example might be a semiconductor barrier where "stray capacitance" can change over time subject to thermal considerations. Has anyone ever noticed that output transistor bias voltage may drift over time? In some cases this drift may be significant enough to justify a "tune up" prior to shipment; thus better quality equipment is "burned in" by those manufacturers who wish to maintain their reputation, whether it is less than 1% that need adjustment or not. This is to say nothing of possible slow leaks in vacuum tubes or changes in impedance in some transformers.
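For a sense of scale on heat-driven resistance shift, here is a minimal sketch of the first-order temperature-coefficient model; the 10 kohm value, the tempcos, and the 40 C self-heating rise are illustrative assumptions, not measurements from any particular design:

```python
def resistance_at_temp(r_nominal, tempco_ppm_per_c, delta_t_c):
    """First-order model: R(T) = R0 * (1 + alpha * dT),
    with alpha given in ppm per degree C."""
    return r_nominal * (1.0 + tempco_ppm_per_c * 1e-6 * delta_t_c)

# Two hypothetical 10 kohm parts, both running 40 C above ambient:
for name, tc_ppm in [("metal film (~50 ppm/C)", 50),
                     ("carbon composition (~1000 ppm/C)", 1000)]:
    r_hot = resistance_at_temp(10_000, tc_ppm, 40)
    shift = (r_hot - 10_000) / 10_000
    print(f"{name}: {r_hot:.0f} ohm ({shift:.2%} shift)")
# -> 0.20% shift for the metal film, 4.00% for the carbon part
```

The two-orders-of-magnitude gap is the point at issue: whether a circuit drifts audibly with heat is settled by part selection at design time, not by burning the drift in afterwards.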

:smash:
 
A few things. First, it has been shown that at least some humans have the capability to look at two objects at the same time by focusing each eye on a separate object. I would note this was originally discovered in studying savants, specifically those with extreme memories. However, this led to further research on how we hold visible objects in our brain, and how we deal with multiple visual stimuli.

As to ABX testing, again, I'm sorry, I disagree about its sensitivity. I will now cite articles in defense of my view. Even looking at earlier works from the 1960s, when ABX was first coming into popularity, a lot of scientists warned that it potentially lacked the sensitivity of other methods and should not be used alone. Rosenblith and Stevens' article "On the DL for Frequency" found that ABX testing was on the order of 4 times less sensitive than AX directive testing methods, showing that potential problems may exist in using this method alone. Macmillan's article on calculating accuracy and bias in these sorts of testing methods discusses findings suggesting differences in the sensitivity of methods, placing ABX among the worst of the psychophysical methods, but the most reliable at not picking up false positives. They also discuss what I have always practiced: that all reliable studies should use a multimethod approach unless it has already been shown that, in the specific situation, the single method used will accurately give identical results to other methods. They cite randomized trials in which multiple methods are used and a reliability score is obtained between the methods. Pollack's work on speech discrimination also found differences between the methods, necessitating that different methods be used depending on what you need to pick up. Doering's work on successive and simultaneous tone perception found that with less complex tones, any study method gave the same results; however, with more complex tones such as piano tones, the ABX method and successive listening showed differences from the simultaneous presentation used in same-different, matching-to-sample, and forced-choice methods.

Can you cite articles which discuss the use of multiple methods giving identical results in the scenarios you discuss? I could not find any that listed this procedure being done, a clear flaw in the methods.

These are all articles that found discrepancies between methods, and multiple studies showing sensitivity issues with ABX test methods. The conclusion of all of the articles I cited, and ones I didn't, suggested the importance of a multimethod approach. Two articles I found from the anthropological medical sciences attacked current medical sciences for relying exclusively on these methods of testing. They mentioned the only valid excuse is that it ensures that a product works when brought to market, but unfortunately allows too many potentially good drugs to fall through the cracks because of false negatives.
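The false-negative concern above can be put in rough numbers with a quick binomial sketch. Assume, purely for illustration, a listener who genuinely answers correctly 60% of the time in a 16-trial ABX session scored at the usual 5% significance level; the trial count and detection rate are my assumptions, not figures from the papers cited:

```python
from math import comb

def p_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    correct answers when each trial succeeds with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def abx_power(n_trials, p_true, alpha=0.05):
    """Chance that a listener who answers correctly with probability
    p_true reaches significance against the guessing null p = 0.5."""
    # smallest score whose tail probability under pure guessing is <= alpha
    k_crit = next(k for k in range(n_trials + 1)
                  if p_at_least(n_trials, k) <= alpha)
    return p_at_least(n_trials, k_crit, p_true)

print(f"power of a 16-trial ABX at p_true = 0.6: {abx_power(16, 0.6):.2f}")
# A real-but-small audible difference is missed most of the time at this
# run length, i.e. the false-negative rate is high even though the
# false-positive rate is pinned at alpha.
```

This is the statistical shape of the "reliable against false positives, weak against false negatives" trade-off: the significance threshold protects against guessing, but short runs have little power against small true effects.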
 
FastEddy said:
I guess we can all agree that the burn-in works for mechanical devices = like the breakin period on a new car. I suppose that most here will also agree that digital and analog electronics should at least be powered up, if only momentarily, to weed out the bad assemblies and potentially failing manufactured parts.

The break-in process for a car is entirely different, and is more a machining process, under controlled (hopefully) limits of rpm. This is not the same thing as early failure discrimination, nor is it the same as a parametric shift.

Just powering up rarely finds latent manufacturing defects, hence the term latent..

FastEddy said:
I would argue that, for example, because of simple changes in resistance over time due to internally generated heat would justify an extended "burn-in". These changes in resistance might occure in a discrete part (resistor, capacitor or whatever) or at a "cold solder junction" on a circuit board, that might not show up during a simple power up to detect a "go / no go" result (resistance or conductive states being dependant not only on temperature, but also on humidity and other factors like changes in galvanic action, etc.)..

Heat based changes in resistance of components indicates poor design, either in the use of the component or the manufacturing of it. If humidity or temp is an issue for any parts, then the design is flawed.

FastEddy said:
... Another example might be at a semiconductor barrier where "stray capacitance" can change over time subject to thermal considerations = Has anyone ever noticed that output transistor bias voltage may drift over time?
Stray capacitance has more to do with contamination of the passivation or coating of the parts of an assembly. Again, manufacturing defects, or design flaw. (If internal dust is a problem, the design flaw is letting it in..) Bias setting is not a stray capacitance issue, but either drift of the bias-setting components, or change of operating temp due to thermal conduction drift, like silpad or kapton relaxation.

FastEddy said:
In some cases this drift may be significant enough to justify "tune up" prior to shipment = thus better quality equipment is "burned in" by those manufacturers who wish to maintain their reputation, whether it is less than 1% that need adjustment or not. This is to say nothing about possible slow leaks in vacuum tubes or changes in impedence in some transformers.

All said and done, you are discussing burn in as a quality measure, not as a performance enhancement.

Cheers, John
 
jneutron said:
[snip]Heat based changes in resistance of components indicates poor design, either in the use of the component or the manufacturing of it. If humidity or temp is an issue for any parts, then the design is flawed. [snip]Cheers, John


Indeed. And even if there WAS a heat based change in resistance, why should it ALWAYS lead to better sound? That's just illogical. That would mean that ALL designs would be wrong and only become 'right' after a heat-caused resistance shift? Give me a break...

Jan Didden
 
jneutron: " ... The break-in process for a car is entirely different ... Just powered up rarely finds latent manufacturing defects, hence the term latent ..."

Yes, Mechanical.

" ... Heat based changes in resistance of components indicates poor design, either in the use of the component or the manufacturing of it. If humidity or temp is an issue for any parts, then the design is flawed. ..."

Yes, and that information is invaluable to those engineers responsible for the design ... if for no other reason than to "force" an update to the design. This is, or should be, SOP for quality control (which may or may not exist, but should).

" ... Stray capacitance has more to do with contamination of passivation or coating of the parts of an assembly. Again, manu defects, or design flaw. ..."

I refer the gentleman to the arguments I gave some moments ago ...

" ... All said and done, you are discussing burn in as a quality measure, not as a performance enhancement. ..."

janneman: " ... even if there WAS a heat based change in resistance, why should it ALWAYS lead to better sound? That's just illogical. ..."

I would submit that the pursuit of higher quality is a performance enhancement, and the quest for this information (re: quality improvements or not) is enough justification for the "burn-in" process ... whether it improves the sound, degrades it, or does nothing at all, this information is vital to an engineer and a quality manufacturer.

:smash:
 
Well, OK, I did think you were saying that such methods are accurate when used alone. However, more to the point, it's hard to trust the information we gather from these methods when no other methods are used within the same study, and I simply didn't see other methods used by many of the people you cited, or in other previous work. In general it seems like many of these studies have lacked decidedly different methods, or if they do use them and differences arise, they write them off as less accurate than the ABX method. This ignores the fact that ABX and even AX methods are more likely to give false negatives, even if they are less likely to give false positives. Though I would prefer a measure to err on the side of caution, I don't want to miss something either. If I didn't find anything, I would not conclude that nothing exists; I would state that this method was unable to find anything, and that other methods must be used to ensure the reliability of the results.

Something else to point out: the difference between experts and non-experts in these experiments has shown that experts are far more sensitive measures than non-experts, but non-experts are more likely to pick up anomalies. However, again, I see very little work done that actually uses experts to test sound. The closest you see is minor training, which is far from what I would call an expert.

I completely agree that methods should be controlled, but I am always willing to give up a little lab control if it's in favor of better external reliability. I mean, we need to do the lab work first; it creates the theory base needed to make certain assumptions in the real world work. However, if we don't do real-world studies, then nothing accomplished in the lab can be known with certainty to hold up in the real world. This is why there has been such a push towards the applied sciences in addition to lab sciences.
 
FastEddy said:
jneutron: " ... The break-in process for a car is entirely different ... Just powered up rarely finds latent manufacturing defects, hence the term latent ..."

Yes, Mechanical.

Car yes...electronics, mainly but not all.

FastEddy said:
jneutron: " ... Stray capacitance has more to do with contamination of passivation or coating of the parts of an assembly. Again, manu defects, or design flaw. ..."

I refer the gentleman to the arguments I gave some moments ago ...

You stated (DC) bias drift as a result of stray capacitance shift...You are incorrect.

FastEddy said:
I would submit that the persuit of higher quality is a performance enhancement and the quest for this information (re: quality improvements or not) is enough justification for the "burn in" process ... whether it improves the sound or degrades it or does nothing at all, this information is vital to an engineer and a quality manufacturer.

:smash:
Quality is the ability to meet requirements. Performance enhancement is not "burned in" to anything. It is designed in, or it is lacking.

Cheers, John
 
It's interesting that we have multiple "arguments" going on here, some discussing theoretical practices and methods, and others discussing more engineering practices. I like it though; we seem to be having relatively calm adult discussions of our views on these issues, rather than the immature name-calling that happens so often. We are also making good use of professional resources rather than citing opinion.

I also wanted to bring up that a common problem of most ABX, AX, and even forced-choice testing methods is that they do better with simple stimuli than with complex stimuli. I know I already mentioned that, but I meant to add that I think by its very definition music is a complex stimulus, and this is consistent with the research on the subject. One reason they don't use actual music in most tests of hearing is that it creates problems with the measures, and introduces variables that can't easily be accounted for. However, when you take on the monumental task of assessing how we perceive the reproduction of music through a sound system, and attempt to understand all the factors involved, you are forced to deal with complex stimuli. This necessitates better methods than we have previously used.

However, an earlier statement is essentially correct: no scientist or university is going to bother researching this, as it has little effect on the advancement of science. I feel it has value to the manufacturers of audio equipment, and to the better understanding of human perception as it relates to hearing, but the method I'm arguing for still gets at a niche area of science that few would find beneficial to their portfolio. I'm sure it's of greater benefit to marketing companies to make up the research or use skewed methods; the average person, even on a forum as technical as this, doesn't understand ABX, AX, double blind, forced choice, behavioral observation analysis, etc. They don't need to, and we can make up some great-sounding crap to sell magic marbles. I love that they wrote a white paper for those; it makes the average reader think, "Hey, it's scientific, it must be proven and tested, why else would they write a white paper?" Fine, was it published in a peer-reviewed journal? If so, how was it received, and in what journal? "The Study of Alien Technology"?

Oh, one last thing: I'm reading a study that cites two articles discussing enhanced hearing perception in musicians. This goes back to the expert vs. non-expert question. However, it's another interesting bit of information, to me at least. Makes you wonder if so many musicians like high-end equipment because their work has enhanced their hearing in a way that improved their appreciation of good sound.
 
First a correction. I wrote "anti-modernist." It should be anti-modernism. Two very different things.

jneutron said:


To quote one to whom I trust with my life: "Pick the battles that need to be fought."

The understandings of Einstein do not require constant vigilance on my part. The philosophical aspects I do not address, but leave to philosophers.



Sticking tenaciously to hard line beliefs is not the best way to reach an audience. You might as well argue about the definition of "is".

Humor is one of the best ways to teach a somewhat difficult subject to laymen. Stern adherence is not necessary; knowledge and confidence show through best when the teacher can laugh and identify with the audience. If I had said what you did, I would have wanted someone like you to tell me to lighten up and smell the roses..

I fully agree. You need to pick your battles. I pick the truth, perhaps the rarest of commodities. "The truth" might be a bit presumptuous. I don't see the lack of truth as a problem. I rather see it as the natural condition of the world. It's the willingness to accept lies I don't like.

The bigots defeated decency by calling it "politically correct." PC implies that you take yourself too seriously, that you have no sense of humor, that you are a party-pooper, a spoilsport. Nobody wants to be associated with those things and the dissenting voices stop. The liars win.

People with no humor only think they have humor. I have humor. I laugh at anything. I really do. I think Bill Maher is one of the funniest things on the planet. He's the most politically correct American I know.

jneutron said:
Quality is the ability to meet requirements.

True. But the problem with quality is that we have objective and subjective qualities.

"All users of computer artifacts are painfully aware of the low quality of software." That quote comes from Computers in Context: The Philosophy and Practice of Systems Design.

Computer artifacts are of poor quality not because the programmers can't program. They are of poor quality because they fail to meet the (subjective) needs of the user.

Quality is elusive.
 
phn said:
First a correction. I wrote "anti-modernist." It should be anti-modernism. Two very different things.

"I'm a simple country doctor"...I'll take your word on that..seriously, that diff went right by me.. ""Where'd he go???...Where'd whoooooooo go?""


phn said:
But the problem with quality is that we have objective and subjective qualities.

Both are measured. I work with objective measures, others use subjective ones.

But nonetheless, quality goals are invoked and either met, or not.


phn said:
"All users of computer artifacts are painfully aware of the low quality of software." That quote comes from Computers in Context: The Philosophy and Practice of Systems Design.

Computer artifacts are of poor quality not because the programmers can't program. They are of poor quality because they fail to meet the (subjective) needs of the user.

Many fail because not all conditions are considered in the design. Given the complexity of systems now, it is not possible to do so, now we have to rely on beta testing using millions of testers..and still, failures occur.

I still use systems I programmed in '95..I will not allow changes to the operating system or functional code, as it ain't broken.

phn said:
Quality is elusive.
Not really. The main problem I see is defining what quality is. A Volkswagen Beetle was quality; so's a Big Mac. I don't drive a Beetle, and I certainly don't eat Big Macs.. but they still were quality products; they met specifications.

Cheers, John
 
How exactly do you measure subjective qualities? This interaction designer would love to know. REALLY. If I did know I would have been in Redmond, WA raking in millions and playing golf with Bill Gates.

I know that there are tools to "measure" these things. Interaction Design: Beyond Human-Computer Interaction is a good schoolbook as far as methods are concerned. It's very American. Taylorism was never a success in Europe. (Perhaps in Britain.) It teaches that one way to "measure" these things is to count how many keys the user has to press in order to complete a certain task. I guess we now know why text messaging never was a success with cell phone users.

The only thing we can measure with a great deal of accuracy is failure. Whenever the user has to press F1, that is a failure. It signifies an unhappy user experience.
 
phn said:
How exactly do you measure subjective qualities? This interaction designer would love to know. REALLY. If I did know I would have been in Redmond, WA raking in millions and playing golf with Bill Gates.


As I stated, I work with objective measures; others use subjective ones. However, I would think that a public opinion poll would be considered a method of measuring subjective entities.. Ya gotta ask the psych guy bout that, though. I can tell ya how to measure resistance, though...;)


phn said:

The only thing we can measure with a great deal of accuracy is failure.

Please...I'm an expert at measuring that...:bawling:



phn said:
Whenever the user has to press F1, that is a failure. It signifies an unhappy user experience.

I am unaware of "F1" usage..what does pressing it do?

Cheers, John
 
jneutron said:


I am unaware of "F1" usage..what does pressing it do?

Cheers, John

Lucky you. :D F1=help.

A user will usually try to solve a problem without using the help function, which is usually pretty useless to begin with. "The idea that users will read the manual is something of a quaint anachronism these days" (Where the Action Is: The Foundation of Embodied Interaction). When the user ultimately turns to help he is frustrated. He has given up.
 
Evidence panel

This link might do a nice job of speaking to subjective measures. Essentially, the majority of what is known as applied research is a method of evaluating subjective issues.

An example of how this comes into play in the work I do is evaluating a program. As I mentioned in some earlier posts, my work is around developing social welfare programs for at-risk parenting issues. The success or failure of one of these programs is very difficult to measure, and is essentially subjective. The few objective matters we can measure take a lifetime to assess, and/or are subject to millions of intervening variables we can't account for. Instead we develop tools to help measure subjective issues, and standardize or control them as much as possible, to approximate an objective-like measure. We can do this with quite a bit of accuracy, and it's gaining a lot of acceptance.

One way is to take an issue, say Parent Child Interaction, and break it down into specific acceptable domains: things like affection, positive reinforcement, punishment, nutrition, etc. Then you define what constitutes good and bad for each of these very specifically. Frequently the manuals for these are in excess of 500 pages, to give you an idea of just how specific we get in defining a variable. It allows little room for argument over what you see, giving it greater internal validity. Then, using well-trained observers, you "score" parents and children doing specific planned activities, in which we have expected good and bad results. Multiple reviewers are used until there is strong inter-rater reliability, established through a kappa value, and these results are used to assess change in the PCI over time in the program. You might add that you expect the curriculum affects the PCI, so you track the use of materials in the visit, and find what possible associations exist between outcomes and use of those materials.
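For readers unfamiliar with the inter-rater reliability step, here is a minimal sketch of Cohen's kappa, the chance-corrected agreement statistic behind that kind of figure; the two observers' "good"/"bad" PCI scores below are invented illustration data, not real ratings:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters corrected for
    chance, kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # raw proportion of items on which the raters agree
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's marginal rates
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (p_obs - p_chance) / (1 - p_chance)

# Hypothetical scores from two trained observers on eight episodes:
a = ["good", "good", "bad", "good", "bad", "good", "good", "bad"]
b = ["good", "good", "bad", "bad",  "bad", "good", "good", "bad"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # -> kappa = 0.75
```

Raw agreement here is 7/8, but kappa discounts the agreement the raters would reach by chance given how often each says "good", which is why it is preferred over a simple percent-agreement figure.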

Interviews are another method, but you have to use established methods, be very controlled in how you run them, and have very well trained interviewers to get accurate results. Normally you don't write your own questions; you take questions from accepted interview tools, you add in lie-detection questions, etc. Sometimes you even use computer-based physiological tests, such as attention, focus, reaction time, and cognitive fatigue tests.

Also good debate

The above link is a fun read; I recommend it. It doesn't talk much about specific methodology and does require further research, but it is essentially a debate of objective vs. subjective. I will say, in psychology a randomized trial of a program is considered an absolute quantitative objective measure; however, if I explained it, many of the scientists and engineers here would probably think of this as more subjective. It has to do with the math we use and what we guess happens in randomizing a program trial; we feel the results give as close an approximation of causation as anything. The argument presented is between randomized controlled trials and quantitative means on one side, and more subjective qualitative measures on the other. I happen to fall in between the two, as I do think that you lose an awful lot of external validity even in a randomized controlled trial.
 