John Curl's Blowtorch preamplifier part III

Recently I listed some standard excuses for not accepting published evidence; the "the reviewers weren't qualified" excuse was fourth on the list, iirc... ;)

It's not just that they are standard excuses. There is some very interesting psychological research about how human biases affect perception of expert opinion and perception/acceptance of published research. Sometimes humans will do very illogical things to protect/preserve preexisting beliefs. Although scientists are supposed to be trained to be much more objective, in reality some biases are almost impossible to overcome and/or virtually impossible to see actively working in one's own mind.
 
Measurement tells you the real thing in real time. Simulation, on the other hand, can show you more about how things work. When you make changes during tweaking (adding wool to the enclosure, adding weight to the speaker cone, adding capacitance to a notch filter, etc.), you should know which parameters are affected. Simulation can help you here.
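
To make that concrete with a purely illustrative example (the component values below are made up, not from any particular design): for a series LCR notch the center frequency is f0 = 1/(2*pi*sqrt(L*C)), so paralleling in extra capacitance pulls the notch down in frequency. A minimal Python sketch:

```python
# Minimal sketch, purely illustrative: how added capacitance shifts the
# center frequency of a series LCR notch. Component values are assumed,
# not taken from any particular design.
import numpy as np

def notch_center_hz(L_henry, C_farad):
    # LC resonance: f0 = 1 / (2*pi*sqrt(L*C))
    return 1.0 / (2.0 * np.pi * np.sqrt(L_henry * C_farad))

L = 1.0e-3           # 1 mH coil (assumed)
C_before = 10e-6     # 10 uF (assumed)
C_added = 4.7e-6     # extra 4.7 uF added in parallel while tweaking (assumed)

print(f"notch before: {notch_center_hz(L, C_before):7.1f} Hz")
print(f"notch after:  {notch_center_hz(L, C_before + C_added):7.1f} Hz")
```

The same quick-and-dirty approach works for sanity-checking the other tweaks, e.g. added cone mass lowers a driver's resonance roughly in proportion to 1/sqrt(mass).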

There are misunderstandings about phase. But absolute phase imo is not (very) important.

Not sure a sim would get me much, although I am basing my speculations on the calculated parameters of the changes made.

I really need to measure a complete set of parameters both in and out of focus, including close/undershoot/overshoot... this could take a while, but comparing data with the actual event is the only way I see to understanding it.

What would you science jockeys recommend for a start?

I have to figure out a protocol, including what tests/devices to use.

I have no ‘real’ testing gear at the moment and my computing power is limited to a Chromebook and an iPad!

I’m guessing I’m gonna need a dedicated laptop with some sort of REW software? Hardware?

Help turn me into an objectivist! j/k ..... more like an educated subjectivist :D

Bob
 
Jakob2, you have never shown any of your own results and conclusions here or on any other forum, directly or by quoting your own peer-reviewed published results. Your motivation for doing so (or not) is none of my concern, but until you decide to come up with something, you are exactly what you seem to be: at best a theoretician with an agenda and an axe to grind.

I agree on this.
 
OK, so we do not care about intermodulation distortion in speakers/drivers? Authors of such studies do not have to show measurements of the equipment used for their studies? Reviewers in the scientific magazines do not ask for proof or verification? Do we have to replicate the flaws and errors?

Of course we do. Have a look at Ashihara's 2007 paper I mentioned. Measurements and all.
 
Probably not quite as you perceive it. I would strongly suggest reading Jonathan Haidt's The Righteous Mind. He did some very interesting research into what you are talking about. In short, he found there are reasons the human population has tribal tendencies, reasons that there are liberals and conservatives, etc. Each of those things has been needed at different times in human evolution, and natural selection has worked to preserve them. Haidt also found that several personality dimensions (a concept used in psychological testing and research) work well to characterize left-vs-right, liberal-vs-conservative personality types across many very different cultures. He stated that he was once liberal, but after his research had to change his position to middle-of-the-road, as he could no longer justify strong liberal leanings. moralfoundations.org

I’ll check it out, thanks..... knowledge is power!

But we can’t ignore our inner voice/gut feelings either. :p
 
@ syn08,

a recent article in the AES journal proved beyond any doubt that there is absolutely no difference (measured or ABX) between SACD and classic CD.

I can accept that it should not be seen as a claim, but I struggle to believe that it isn't "praising" ?!

I'm not pushing Oohashi et al.'s publication; I had expressed some of my concerns in the past (you know, the sitting-on-the-fence part :) ), but I appreciated their approach (as they combined subjective, even blinded, evaluation with more objective imaging efforts).

Additionally, it was nice to read that they addressed several related issues, such as:

-) considered the intermodulation problem
-) provided measurements
-) provided baseline results
-) considered the multiple comparison problem in the imaging department (using Friston's work)
-) considered the multiple comparison problem in the psychoacoustic evaluation experiments (using Scheffé's paired comparison method, which is quite strict; a generic illustration of why multiple comparisons matter follows below)

and Oohashi et al. did some additional studies to examine why they got different results when using headphones instead of loudspeakers.
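
As a side note on the multiple comparison point above: the sketch below is generic and is not the correction Oohashi et al. or Friston actually used (and it is not Scheffé's method either). With made-up p-values, it just shows why uncorrected repeated testing inflates the false-positive rate and how a standard step-down correction such as Holm-Bonferroni keeps the family-wise error rate at the nominal level.

```python
# Generic illustration only -- not the imaging statistics (Friston) or
# Scheffe's paired-comparison method referred to above.
import numpy as np

alpha = 0.05
for m in (1, 5, 20, 100):
    # Chance of at least one false positive among m independent null tests
    # when each is run uncorrected at the 5% level.
    fwer = 1.0 - (1.0 - alpha) ** m
    print(f"{m:4d} uncorrected tests -> family-wise error rate ~ {fwer:.2f}")

def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm correction: returns a reject/accept flag per test."""
    order = np.argsort(p_values)
    reject = np.zeros(len(p_values), dtype=bool)
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (len(p_values) - rank):
            reject[idx] = True
        else:
            break  # once one comparison fails, all larger p-values fail too
    return reject

# Hypothetical p-values from a batch of pairwise listening-test comparisons.
print(holm_bonferroni(np.array([0.001, 0.012, 0.04, 0.20])))
```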

Over the years several other studies were published examining the so-called Hypersonic Effect, so there was a lot more to find than a single controversial study.

As I've stated before, one could argue that all the experimenters seem to have a relation to the original team, but otoh there is some inconsistency in the results.

Afair nobody else did a replication with negative results.

As said before, whether there is any relevance when people are not listening to gamelan music is questionable, and in addition it finally has to be examined whether the inevitable interference could be the reason for the results.

And no, I don't think that it is "pushing" .... :)
 
Authors of such studies do not have to show measurements of the equipment used for their studies?

That seems to be the norm. It is very unfortunate in that it tends to make replication difficult or impossible.

Of course, in academia people have to publish to get promoted. People there are constantly thinking about getting publications out. They know that journal editors are more likely to accept papers that will be of interest to their readers, and readers tend to want to see big, important, just-breaking news (much as newspaper readers do). Questionably done research that is unlikely to be replicable is more likely to produce impressive results and be published (or so some published research shows), but it is much more of a problem in some fields than others.
 
That seems to be the norm. It is very unfortunate in that it tends to make replication difficult or impossible.

Of course, in academia people have to publish to get promoted. People there are constantly thinking about getting publications out. They know that journal editors are more likely to accept papers that will be of interest to their readers, and readers tend to want to see big, important, just-breaking news (much as newspaper readers do). Questionably done research that is unlikely to be replicable is more likely to produce impressive results and be published (or so some published research shows), but it is much more of a problem in some fields than others.
Utter nonsense (all but the underlined). You've obviously never published, which is not a problem. Except when you write utter nonsense. Where did you get that? Make it up? Just heard somewhere?
 
OK, so we do not care about intermodulation distortion in speakers/drivers? Authors of such studies do not have to show measurements of the equipment used for their studies? Reviewers in the scientific magazines do not ask for proof or verification? Do we have to replicate the flaws and errors?

I'm not sure to which post this is related, but you should ask syn08 that, as it seems he had no objections in M & M's "we don't publish measurements" case.

Oohashi et al. showed a lot of measurements, and they wrote in quite some detail about the conditions during the trials.
 
Utter nonsense (all but the underlined). You've obviously never published, which is not a problem. Except when you write utter nonsense. Where did you get that? Make it up? Just heard somewhere?

That interest for readers is one of the criteria editors use to decide what to publish is surely correct.

The next sentences are imo a bit unhappily worded/phrased (it could be that I completely misunderstood), but there are reasons why the so-called replication crisis exists. And some fields - imo for good/bad reasons - suffer more from this problem than others.
 
I believe it is now clear why the Oohashi study was rejected for publication in all audio technical journals of the time, ...

Please provide evidence. I'm an open-minded skeptic, "trust, but verify", and I'll happily believe you if you can back it up.

FYI, I'm not a fan of Oohashi, not because of any flaws, I'm just not so interested. I am interested in honesty though.

That interest for readers is one of the criteria editors use to decide what to publish is surely correct.

The next sentences are imo a bit unhappily worded/phrased (it could be that I completely misunderstood), but there are reasons why the so-called replication crisis exists. And some fields - imo for good/bad reasons - suffer more from this problem than others.
Yup, you got me on both: "of interest" is true, but I was blinded by the "big, breaking news" part. You and Markw4 are right about the interest part, though.
And also, I did something I hate when others do it. I'm no expert on *all* scientific publications. Only the neuroscience and perception primary literature. Other fields... not so much.

@Markw4: I am ignorant about other types of pubs, and am therefore probably wrong. Sorry. Tried to fix the post above, but too late I guess...
 
Where did you get that?

Actually, there is published research on that. I'll have to see if I can find it; it's been a while since I came across it.

Also, at a personal level, a friend of my son's in high school grew up to become a pediatric psychiatrist at UCSF. During his college years he worked for a period in the research lab of a world-renowned biochemist (it's been a long time since we talked about that and I can't recall the guy's name). My son's friend was excited to get the opportunity to work there, and during one of his visits home told me how he didn't need me anymore because he was now working with the best scientists in the world! Well, the next time I saw him he was very disillusioned about the experience with the great researcher. It seems the guy had college-student lab assistants keep repeating the same experiment over and over and over for years, trying to show a positive effect so he could publish. I said, 'I told you that stuff happens sometimes, and you didn't believe me.'

Myself, I prefer to keep my personal life private, but I spent more than 30 years at places like UCLA and Stanford, sometimes as faculty. There is a lot that goes on at times, but like I said, it varies a lot from field to field. One of the medical physics professors I knew said, 'anybody that has an advanced degree knows what prostitution is all about.' Of course, he was speaking of his own Ph.D. research and what his adviser told him to do. http://robotics.cs.tamu.edu/RSS2015NegativeResults/pmed.0020124.pdf
 