Audio Wisdom: Debunking common myths

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Yes, but you see, almost every one of the technical advances you refer to here was made by engineers, non-degreed technicians, and even hobbyists working in a real laboratory without benefit of the "advanced" esoteric speculation of academic science and philosophy ... they found out the answers for themselves, in many cases ignoring the conventional "wisdom" ...

Well, I guess Shockley, Bardeen, Brattain et al had better give their Nobels back.
 
To be honest, that's what's so interesting about this. I have a hard time claiming that I am doing anything new or revolutionary. When I talk about my work to others, I generally don't describe it as cutting edge or revolutionary. It's not, IMO, but I have been told by my seniors that I need to, because that's how you make it in this world. All I do is take other people's ideas, old tried-and-true ideas like Skinner's, and adapt them into practical programs. I develop and evaluate interventions using the theories people like Skinner developed. Currently there is a huge move towards evidence-based practices, causing big changes in how programs are funded. Though still not ready to properly fund research efforts, many funding agencies want to see ongoing evaluations of programs in order to keep the funding up.

My current project requires four reports a year giving results on the program's impact on clients, recidivism, impact on the Department of Social Services, etc. I use a multi-dimensional model for evaluation which includes ongoing visit reports, direct behavioral analysis, client interviews, program facilitator interviews, and finally analysis of the New York State data warehouse in order to track recidivism and length of stay in foster care. I'm also currently adding a component which will allow me to track, to some extent, the effects our curriculum has on the success or failure of clients. It's related to a previously published work I did on retention and engagement in a nurse home visitation program for teen moms.

So how does all this relate to my point that scientists are frequently reluctant to share? Again, in my own field, which is where I have the most experience, everyone is trying to make a name for themselves. They develop a good idea, but not one that can be adopted into general public use. They give talks, write papers, perform randomized trials, and it appears to the public that this is a wonderful new idea that could completely change whatever field it's trying to deal with. The problem is that once the studies are done, the programs die out, because they require actual community-based centers to support the ongoing efforts of such programs, along with steady funding. It's too expensive to use researchers, but if you use paraprofessionals, nurses, or anyone of lower credentials, you often begin to dilute the program, and results can no longer be guaranteed. In fact, the effects frequently go away, and you just have one more crappy social welfare program.

As a result of this history (the Head Start Program is one example), many of the researchers developing these ideas are reluctant to share them, as they don't want to see this happen to their name. Instead, either the idea dies with them, or they begin training and licensing centers to act as representative centers. However, that generally comes at a huge cost and requires unreasonable levels of training, all the while unable to maintain proper funding. That's where I come in: I work with these groups to take the program, trim the fat, and make it practical, while ensuring through simple methods that none of the effects are lost.
 
To me the real crux of the matter is targets and the discussion's drifting to implementation. Physicists are best at developing the theoretical framework, engineers the concrete realization, neither has a leg up in determining valid goals for sound reproduction. Nothing in my engineering educational background related to the ear/brain system. Isn’t how the ear works and valid targets for audio reproduction properly the domain of the medical and behavioural sciences?
 
Computer game designers do what they do with little or no academic background. It's virtually all done by intuition, gut feeling and "wouldn't that look cool?"

Computer systems design has a success rate of about 25 pct. No wonder game design is one of the hottest subjects in college. I'm not talking just about gamers. I'm talking MIT. The academics have tons to learn from those 20-something game designers.

Ultimately when we deal with game design, systems design and audio we deal with human experience.
 
SY: " ... Well, I guess Shockley, Bardeen, Brattain et al had better give their Nobels back. ..."

Maybe ...

SY: " ... All of those sorts of people look at these issues and do research. A scan of JASA and JAES will find physicists, mathematicians, engineers, and psychologists, often on the same paper. Even some musicians now and then. ..."

Papers, yes ... the hardware, not always. Often there are just one or two techies or hardware hackers nailing together some parts from the junkyard, trying to see what will happen, or whether something that "commonly held wisdom" says "can't be" actually is. The big-word papers often come long after the fact of invention. (Academia has very often been wrong about what is possible and what is not, the Wright brothers being a recent but well-trodden example.)

SY: " ... Nothing, and I mean NOTHING, about semiconductors and their junctions makes any sense without a fundamental understanding of that esoteric academic post-1913 quantum mechanics. ..."

Okay, if you say so. But I for one believe that Schottky is being left out of the time frame. Although he and his academic friends did not publish until 1939 (in light of unexplained quantum considerations like the tunnel diode, etc.), his basic discoveries about solid-state fast-recovery diodes were made before 1913 ... the explanatory papers not catching up to the discoveries for more than three decades.

In other words, the barrier-junction gadgets worked, existed, and were used in practical applications (radio receivers) long before the academic paperwork caught up to the facts ... quantum theories notwithstanding. :hot:
 
Burn-in of aviation electronic equipment is SOP

During my tenure in the USAF my career field was F-15 avionics tech, most proficient on the electronic countermeasures systems, i.e. jamming radar. These big-ass RF amplifiers had traveling wave tubes (TWTs), which required a burn-in period whenever a new tube was installed. The ATE would go through a subroutine that would "cook" the tubes for a predetermined period of time that depended entirely on the new tube's datasheet. The burn-in had a twofold purpose: 1) de-gas the tube, and 2) reliability-test it under prolonged operation. Can't have a jammer fail when you have a SAM in your 6 ... LOL :)
 
phn said:


Now, now, John. I would expect that from somebody who doesn't know what Einstein was talking about. But definitely not from an engineer.

:confused: :confused: :confused:

To quote the famous philosopher...Earsplittenloudenboomer..

Que??

As an inguneer who read Popper a coupla decades ago, what does I'z do?? I check da wiki-ped...

http://en.wikipedia.org/wiki/Karl_Popper

Wherein, i findz dis gem..

Quote within wickipedia (sans links):

""He strongly disagreed with Niels Bohrs' instrumentalism and supported Albert Einstein's realist approach to scientific theories about the universe.""

Hence, my relativistic pun..

Cheers, John

ps..you're not saying "gasp" that wikipedia is wrong, are you?? What'd be the purpose of life without wiki??
 
rdf said:
To me the real crux of the matter is targets and the discussion's drifting to implementation. Physicists are best at developing the theoretical framework, engineers the concrete realization, neither has a leg up in determining valid goals for sound reproduction. Nothing in my engineering educational background related to the ear/brain system. Isn’t how the ear works and valid targets for audio reproduction properly the domain of the medical and behavioural sciences?

To me, many of the papers tend towards pure research, not directly applicable to real-life stereo reproduction.

Localization testing requires real-life stimuli, not sines, SAMs, or target-and-pointer setups which are pure ITD or IID.

There must be a clear distinction between simple system tests, and the more complicated interpretation, which is what we do.

How the ear works is a subset.

Cheers, John
 
You do have to be careful with Wikipedia; it gets a lot wrong. The information is only as good as the author, and it is genuinely questionable who these authors are. Yes, they go through some level of "peer review", but again, who are these peers, and what makes them experts on the subject? I know Wikipedia is not considered an acceptable source of information in most, if not all, universities because of this issue.

My case in point is a discussion I was having in a summer class I taught on the history of modern jazz. We were talking about the most recent modern movement after fusion, which involved the mixing of electronic instruments, acoustic instruments, and DJs to form yet another sound. For those wondering who I am talking about, the most famous I know of would be the Beastie Boys, and the best would be Medeski Martin and Wood. Anyway, one of the kids disagreed with me that "breakbeats" was a term coined recently by those sampling the breaks in the music; he held that the term had existed for as long as actual break beats had existed. To be honest, I don't know either way for sure; the details aren't clear, but I am only familiar with the term being used in more recent times. I do know that some jazz musicians now use the term when they take a pause to let the drums through. However, I have asked them, and most of them aren't sure whether the term came from earlier jazz or whether they started using it because of the DJs. Someone else even mentioned they learned the term from a studio engineer, which raised the possibility that it was a term coined by studios to let the musicians know where they want that break to take place.

Anyway, to my point: in trying to find an article on the subject quickly during class, we went to Wikipedia, and it stated what I said, that it was a modern term coined by musicians sampling the beats, mostly from older 70's recordings. He then went and changed the Wikipedia article to state what he thought it was, and did one of those "See, I'm right now" moves. Last I looked, his changes are still in the article. It just goes to show you who gets to change these articles, why they are changed, and what level of review they undergo compared to real published works.

I have been helping my own supervisor write a new textbook coming out on community psychology, applied research methods, etc. As I have said before, for whatever reason, the group I work with is considered one of the better authorities on the subject, developing some of the best new methods. As I said, it's a group: this book is being written by over 10 psychologists and their assistants, like myself, and spans three universities. It has taken these guys almost two years of submitting the chapters to the editors, having them reviewed by the editors, going out for peer review, getting approved, etc. It just shows you the level of control that is required for a good primary source to be developed, compared with Wikipedia. I wrote the section on direct behavioral analysis, and I have now rewritten that section, with the help of some better writers, five times, and it still hasn't been accepted. One wrong word that might imply something we don't know to be absolutely true, and they won't accept it.
 
By the way, John, I feel as if you are one of the few who get what makes my research idea different from what's already been done. The majority of work already done has not satisfactorily proven how we, for instance, hear music. In order to ensure an acceptable scientific framework, researchers go for very strong internal reliability and weaken the external validity to almost nil. It's a problem I find with science, and is something, as I have said, that I deal with in my own work all the time. I know that subjectivity is growing in the scientific community right now where it makes sense, such as in the behavioral sciences, including brain mapping and our senses. However, we are far from there yet. None of the research that I have come across has both used the scientific method of reason and, at the same time, applied to real-world scenarios. I want to see that happen before I accept as absolute that our measurements are a reflection of what we actually hear. I also think this work will eventually bring about new measurements that can be helpful in designing that better-sounding amplifier or set of speakers.

I went over this in detail once before here, but I will do it quickly again. Many of the beloved ABX and double-blind testing methods assume the results are absolute. However, there is an intervening variable that people neglect to take into account, and that is our ability to remember from our senses. Sense memory, if you will, is very limited in its ability to stay clear over the long term. It works by association in our brain, and has the problem of becoming diluted if a strong enough association is not made. For example, if you play two different tones in succession, one with 0.1% distortion and the other with 10% distortion, you would expect a difference to be heard. There is a large enough difference here that, even diluted, we will accurately be able to tell the difference. However, play one with 10% and one with 9% back to back, and the differences are so slight that our brain will associate the experiences together, melding the effect, and our ears will actually trick us into reliably hearing no difference.

Unfortunately, as far as I know, hearing is one of the areas in research where we haven't studied this effect well enough. I'm familiar with lots of research on taste, sight, and smell that deals with this, but not hearing. The example I frequently use from sight is with colors. If you take paint samples and show somebody just two of the colors, one right after the other in succession, they won't be able to reliably tell the difference. However, if you show them the colors at the same time, suddenly the difference becomes clear, and the person can reliably tell them apart, no matter what order they are arranged in. You could even do this with ten colors and be successful, though after a while our brain becomes fatigued and begins to lose its ability. This is also a well-known phenomenon not often accounted for in tests, though listening tests are one of the few where scientists do take into account some level of listener fatigue.
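The two-tone example above can be sketched as a toy signal-detection simulation. To be clear, this is my own illustrative model with invented numbers (the noise level, criterion, and trial count are all assumptions), not a validated psychoacoustic one: each remembered stimulus is the true distortion level blurred by Gaussian "sense memory" noise, and the listener reports a difference only when the remembered gap exceeds a criterion.

```python
import random

def discrimination_rate(level_a, level_b, memory_noise=2.0, trials=20000):
    """Toy sketch: each remembered distortion level (in %) is the true
    level blurred by Gaussian 'sense memory' noise; the listener says
    "different" only when the remembered gap exceeds a criterion."""
    criterion = memory_noise  # report "different" beyond this remembered gap
    hits = 0
    for _ in range(trials):
        mem_a = random.gauss(level_a, memory_noise)
        mem_b = random.gauss(level_b, memory_noise)
        if abs(mem_a - mem_b) > criterion:
            hits += 1
    return hits / trials

easy = discrimination_rate(0.1, 10.0)   # 0.1% vs 10%: gap dwarfs memory noise
hard = discrimination_rate(9.0, 10.0)   # 9% vs 10%: gap drowns in memory noise
base = discrimination_rate(10.0, 10.0)  # identical stimuli: false-alarm rate
```

With these made-up numbers, the 0.1%-vs-10% pair is detected almost every time, while the 9%-vs-10% pair is called "different" barely more often than two identical tones: the melding effect described above.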

I have an idea for a way to better evaluate audio equipment that follows the basic scientific theories of the behavioral sciences, yet is open enough to account for unknowns in the evaluation. I have brought it up before, but was berated for it, so I'm unsure whether I want to write it up again. Anyway, I think it would be a way to improve magazine reviews a lot, without really making the reviewers' jobs all that much harder.

Matt
 
Hi pjpoes. Interesting stuff. If I read it right, you're talking about a rough analogy to the 3-colour paradox with a large memory/recollection component added?

Hi John. I'm not sure I agree, at least not on my last fast perusal of JAES abstracts by the authors SY referenced earlier. Take Toole's work for example. Unless I'm mistaken, the goal was correlating loudspeaker performance parameters against listener preference and no comparison against a live reference was used. No basis exists on which to judge reproductive accuracy from the results then. It’s still valuable work, primarily as a marketing tool (pun unintended) and not as pure research towards designing accurate loudspeakers.
 
By the 3-colours paradox I assume you mean the concept that just because we can't see or hear something doesn't mean it doesn't exist. If so, then in essence that is what I'm saying, but almost in reverse. Normally, I believe, this refers to being able to measure the true essence of something but not perceive it.

What I'm saying is that a common problem in music reproduction studies is that they develop test methods which don't pick up certain things that may in fact be there. I believe that humans must, in the case of music, listen for a period of time in order to perceive all that they are capable of.

As for the idea of sense memory, I believe that is really a bit different from the colour paradox. Think of it this way: we are cognitive misers, all of us. In a sense, our brains are very energy-efficient devices using a form of WinZip computers could only dream of. We compress sensory input in extreme ways, though I would say, unfortunately, we don't have the best data-retrieval abilities, and the compression is definitely not lossless. Nonetheless, it's very efficient. I would also note that some of what I say here may not be completely accurate; this is a bit out of my field, but I studied it in school and it interests me, so I have some knowledge of it.

Anyway, one view is that our brains take in information and put it into the first level of our short-term memory. Our short-term memory is very limited and must chunk things together in manageable ways. From this point the information must be organized and placed into our long-term memory for later retrieval. This doesn't work perfectly, as usually something is lost. From studying how humans recall, one thing we discovered is that recall seems to work by association; thus it was surmised that we must parse things in our brain along with other things we associate with them. This makes it easier to remember a great many things in a limited capacity. The problem, again, is recall: often, through association, we blend memories together. This can go so far as to create false memories. An example is when you dream of something that really happened: usually the dream is inaccurate, but your recollection of that event will now be changed by the dream, a melding of the two together.

As for how this affects ABX or double-blind tests: basically, if you quickly change between stimuli, you work completely within short-term memory. Our brains simply cannot function fast enough to transfer this to long-term memory and bring it back accurately for recall. This isn't to say that none of it goes to long-term memory, but again, think in terms of association. If you play two tones with differing levels of distortion, your brain will associate the two together, since they are similar, and thus your memory of the event will be a melding of the two experiences, and inaccurate.

In psychology we call this a measuring-instrument sensitivity problem. It could be that the results are accurate and humans can't perceive a difference, but because of this intervening variable, we know the results aren't guaranteed. A good analogy would be trying to measure distortion levels with an analyzer whose self-noise is above the distortion level of the device being measured. It could be that the distortion of that device is what you are measuring, but you just don't know, because it's buried in the noise floor of the measuring instrument. In that situation you would normally get a more sensitive device with a lower noise floor. With humans that's not so easy, but we do have methods designed to deal with such situations. As I said earlier, I use these sorts of methods in my own work, and feel they could be applied well to the evaluation of equipment. However, since this is all conjecture, I would first love to be involved in testing these ideas in real university-funded studies. Unfortunately, I would rather save the children than save the audiophile, so I don't know that I will ever really be involved in such work.
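The analyzer analogy can be put in rough numbers (all figures invented for illustration): if the device's distortion and the analyzer's self-noise are uncorrelated, their powers add, so the reading can never fall much below the instrument's own floor.

```python
import math
import random

def measured_distortion(true_dist, noise_floor, readings=5000):
    """Toy sketch: each reading combines the DUT's true distortion with
    the analyzer's self-noise in quadrature (uncorrelated powers add),
    so the average reading is pinned at or above the noise floor."""
    total = 0.0
    for _ in range(readings):
        noise = abs(random.gauss(0.0, noise_floor))
        total += math.hypot(true_dist, noise)
    return total / readings

# DUT at 0.001% distortion, analyzer floor at 0.01%: the reading says
# more about the analyzer than about the DUT
buried = measured_distortion(0.001, 0.01)

# quieter analyzer (0.0005% floor): the reading now tracks the DUT
clean = measured_distortion(0.001, 0.0005)
```

The first reading comes out near the analyzer's floor regardless of the DUT; only with the quieter instrument does the measurement approach the true 0.001%.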
 
Lunch is over at work, so to quickly reply to the '3-colour' thing: take three similar colour patches A, B and C. A looks identical to B, B identical to C, yet A and C look different. Essentially B sits within the minimum perceptible colour difference of both A and C, while A and C lie further apart than that.
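That non-transitivity drops straight out of any threshold ("just-noticeable difference") model of perception; a minimal sketch, with the JND and patch values invented for illustration:

```python
def looks_same(x, y, jnd=1.0):
    """Two stimuli are indistinguishable when their difference is
    below one just-noticeable difference (JND)."""
    return abs(x - y) < jnd

# three colour patches spaced just under one JND apart
a, b, c = 0.0, 0.8, 1.6

pairs = {
    "A vs B": looks_same(a, b),  # True: within one JND
    "B vs C": looks_same(b, c),  # True: within one JND
    "A vs C": looks_same(a, c),  # False: 1.6 JND apart
}
```

"Looks the same as" is therefore not a transitive relation, which is exactly why quick pairwise comparisons can chain small perceptible differences into an apparent null result.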
 
rdf said:
Hi John. I'm not sure I agree, at least not on my last fast perusal of JAES abstracts by the authors SY referenced earlier. Take Toole's work for example. Unless I'm mistaken, the goal was correlating loudspeaker performance parameters against listener preference and no comparison against a live reference was used. No basis exists on which to judge reproductive accuracy from the results then. It’s still valuable work, primarily as a marketing tool (pun unintended) and not as pure research towards designing accurate loudspeakers.

1) What recorded material was used to drive the loudspeakers?

1a: Was it material that utilized IID to direct the positioning of a virtual source? If so, that is a condition which does not exist in nature.

1b: Was it a two-microphone recording, which tries to mimic the ITD and IID received at a position? That duplicates what is used for headphone reproduction, but two speakers do not successfully map the correct ITD/IID to a listener at a distance.

2) Was there any attempt to determine the relative positioning of independent images within the soundstage?

2a: Does a source which is intended to be 10 degrees to the right of another and five feet behind it appear to be so when reproduced on a speaker system?

2b: Was any attempt made to determine the accuracy of positioning in an absolute sense? What is the probability distribution of human perception when provided accurate localization cues?

2c: Was any attempt made to determine the accuracy of positioning in a relative sense? What is the probability distribution of human perception when the test is to determine the relative positioning between two independent sources?

3. Was the human time-dependent localization re-interpretation characteristic determined and controlled for??? Or did he (like everybody else I've read of) simply make the assumption that humans INSTANTLY reconfigure their localization algorithms when presented with entirely inaccurate localization cues???

If we run from a room which is lit with 20 kilowatts of lightbulbs to a room which is lit by a single candle, do we make the assumption that the candle is not visible?

Do we assume that we cannot hear our own bloodflow, simply because we do not normally live in an anechoic environment?

Do we assume a picture is not present in those 3d thingamabobs, when all we have to do is disconnect the focal length from angular separation?

I don't see the right work being done..

Cheers, John
 
Thanks RDF, yes, that is a very accurate depiction of what I was talking about. Again, this is all outside my field of expertise; I've read about it in articles and such, and I know it because I need to, but not well enough to speak as an authority. I need to know it in order to account for it in my work, but that's the extent of it. However, when I first learned about this from my current mentor, I really didn't realize the full extent to which it affects all that we do. As a result, it got me thinking about this issue in terms of audio reviews, which then logically led to the actual science, and a realisation that we simply aren't doing it right yet.

Again, reading through John's last post leads me to believe that he agrees that the research so far has made too many assumptions about human perception, and has not taken into account a great many intervening variables.

You know why you can wager that nobody can accurately tell the difference between two somewhat similar amplifiers in a DBT? Because none of those variables are accounted for; humans can't discriminate reliably in such quick succession. Take the 3-colour paradox, which now that you have described it I do remember learning about, and swap the colors for flavors of amplifier. Has anyone bothered to make the same sort of comparisons? Maybe; I really don't know. I haven't done my homework on the subject as I probably should, but I would love to know whether anyone even believes it's an issue. My experience so far has simply been that people blindly accept the old methods as accurate, and don't question the testing methods when nothing shows up.

Again, to use a possibly easier-to-understand analogy, I once did a study on people's voting perceptions. I was trying to see whether people actually vote based on candidate characteristics or whether the party played a bigger role. I used a 4x3 design that was way too complex and involved groups of far too few people; I believe in total I had ten people per group, 40 different participants, shown 3 different stimuli each. My measuring method used an unproven observation analysis system, an unproven questionnaire, and tested voting records. The first two methods were poorly constructed and as a result gave no useful information. If I had assumed that my measures were accurate, then I would have concluded that people all vote randomly, and that party and candidates have nothing to do with it. However, that was illogical, and a further analysis of my measures discovered many problems with the measures themselves, including a lot of things I didn't account for.

My analogy here is that I created measures that I assumed would correlate to actual voting trends, and they didn't. The same is true, I believe, of the science of sound reproduction so far. People have assumed that the methods developed in labs and used in the engineering of equipment correlate to the actual human experience of sound reproduction, and yet no studies proving this connection have been done correctly yet. I shouldn't say none, but none that are widely accepted. I have found a few that I agree with, but I have noticed a great many people trying to discredit the ideas and pick apart the methods in the AES journals. One such argument simply stated that this man, a psychologist, had no background in audio reproduction, and therefore his methods must be flawed. Though the man responsible for the study was a psychologist by degree, he had spent his life studying hearing, and happens to be an audiophile who wanted to do some fun studies on the subject. His results fly in the face of conventional wisdom, therefore he must be wrong.
 