The only "definitive" answer in this subjective world is...

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Exactly. A fair bit of it runs as sub-modules within a greater numpy/scipy script. But Fortran's strength in array math is hard to beat when you need to get into the deep end of things.

LIGO used Python (but I don't know everything behind the scenes). Since the target processors are the same, I suspect the Fortran compilers and optimizers play a part. I did a little hand assembly once because the floating-point code the C optimizer produced could not be made to use the full capabilities of the FPU at any settings. Python (numpy) does a pretty fast million-point FFT, and there is a wrapper for MIT's FFTW.
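As a rough illustration (not LIGO's actual code), a minimal timing of a million-point FFT in plain NumPy looks like this; `np.fft.fft` is the stock routine, and the separately installed `pyfftw` package is the commonly used FFTW wrapper:

```python
import time
import numpy as np

# Build a 2**20-point complex test signal (about a million samples)
rng = np.random.default_rng(0)
n = 2**20
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

t0 = time.perf_counter()
X = np.fft.fft(x)
elapsed = time.perf_counter() - t0
print(f"{n}-point FFT took {elapsed * 1e3:.1f} ms")
```

On typical desktop hardware this runs in tens of milliseconds, which is what makes pure-NumPy pipelines viable for this kind of work.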

Here are NASA's benchmarks: NASA Modeling Guru: Comparing Python, NumPy, Matlab, Fortran, etc.
 
Dunno about secret instructions, but, heck, who else is going to have that kind of characterization of the entire pipeline from memory-on to squeeze all the latencies up just so?

Which might as well be secret instructions. :)

Edit -- Java was, IMO, the most surprising. And aren't most numpy/scipy modules precompiled C routines? (You can run f2py and c2py, though; I only have second-hand knowledge from close friends, haven't had to get into the weeds myself yet)
 
Edit -- Java was, IMO, the most surprising. And aren't most numpy/scipy modules precompiled C routines?

Yes, I think in general they are. Useless trivia: doing 20*log10() on the output of an FFT does not check for zero (only an exactly zero bin fails, not numerical noise). In 40 years of using simulators I never had a problem, but I recently found a way to make it break every time.
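A quick sketch of that failure mode (hypothetical data, but the mechanism is exactly this): numerical noise survives the log fine, and only an exactly zero bin blows up.

```python
import numpy as np

# Magnitude spectrum with one exactly-zero bin; tiny numerical noise
# (1e-300) is fine, the exact zero is not
mag = np.array([1.0, 1e-300, 0.0])

with np.errstate(divide="ignore"):
    db = 20 * np.log10(mag)       # the zero bin comes out as -inf

# Common guard: clamp to the smallest positive float before the log
floor = np.finfo(mag.dtype).tiny
db_safe = 20 * np.log10(np.maximum(mag, floor))
print(db)
print(db_safe)
```

The clamp changes nothing for any realistically nonzero bin; it only keeps an analytic zero from producing -inf downstream.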
 
It's not the statistics of ABX specifically, although there was a battle some time ago in one of the audio magazines about ABX statistics. One person was complaining that the false-negative rate was too high, but that is an artifact of trying to do a low-N trial while maintaining some modicum of robustness against false positives. He was arguing that we need to relax our standards for what is considered a "real" effect, but given that the entire scientific community is moving to higher statistical standards, this is the wrong move. It's better to say an effect is below our ability to detect than to falsely claim a positive conclusion on dodgy data, even if that goes against the grain of our intuition and the way the lay community communicates.

The moral of the story is not that ABX is inherently bad, but trying to elucidate anything from a small dataset is a recipe for failure.

In terms of an experimental design, ABX may have a higher cognitive load than other protocols, and thus be less sensitive. But that isn't definitive. High N trials require a lot out of the tested individuals (victims), and really have to be spaced out to avoid fatigue.
 
Thank you, all very interesting.

It would also be interesting to know if you have any thoughts about the limits of human distortion perception.
Finding the "limit" is not of much interest to me as this requires specialized signals and techniques. What I want to know is when nonlinearity influences the music that we hear and that is an entirely different aspect.
Another very interesting result had to do with ABX testing. A few people reported they could not hear any difference between the files, and they scored below chance in ABX testing using Foobar2000.


Anyway, in this case the few listeners reported low scores and a high likelihood of guessing, and they believed they could hear no differences. The interesting thing is that their low scores appear to indicate that part of their brains was reacting to distortion that was present, biasing them to choose non-randomly, and inversely to what they were trying to do.
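For scale, markedly below-chance scores are themselves unlikely under pure guessing. A hypothetical example with made-up numbers (16 trials, 4 or fewer correct):

```python
from math import comb

# Probability of scoring k or fewer correct out of n ABX trials by guessing
n, k = 16, 4
p_low = sum(comb(n, i) for i in range(k + 1)) / 2**n
print(f"P(<= {k}/{n} correct by guessing) = {p_low:.3f}")   # about 0.038
```

So a score that low in a single 16-trial run happens less than 4% of the time by chance; seen repeatedly across several listeners it would suggest systematic inverse discrimination, though as noted below a handful of informal results could still be a fluke.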
Unless this effect were pronounced and occurred with many people, I would discount it as a fluke event not worthy of interpretation.
Although this was an informal test involving a small sample of self-selected participants, the results suggest that there may be some merit to claims that very low levels of distortion are audible to at least some people, and that ABX testing may not be the most reliable way to test for that ability.
This depends entirely on what you mean by "low level" and what orders of nonlinearity we are talking about. As I said, I can create a nonlinearity at 0.1% THD that everyone will hear and another at 20% THD that no one will hear. What does that mean?
At least for me, it would be very interesting to see more formal research in this area to try to better pin down what is and what is not possible, and how to best perform testing.

No one is going to research this! No one paid attention when we did our preliminary study. The response was simply, "We all know that THD doesn't work, but it's so easy to do!"
 
PS. FORTRAN is still my main programming language, but only for crunching numbers. For GUIs I use VB.Net but call into the huge world of mathematical subroutines that FORTRAN offers. Its complex-number abilities are unparalleled (or should I say "paralleled", since FORTRAN is now a fully parallel language).
 
What I want to know is when nonlinearity influences the music that we hear and that is an entirely different aspect.

Agreed. The problem I would like to address is that many people will insist that no human can hear less than 0.1% THD, that there is no point designing amplifiers much cleaner than that, and that anyone who claims to hear lower levels of distortion in music reproduction systems is a liar or a self-deluded, hallucinating fool. The same people like to insist that the *only* possible proof that someone can hear less than 0.1% THD is ABX test results from Foobar2000 (because that's the only freeware ABX test available).

The problem with providing the "proof" is that many people have great difficulty not getting distracted by the ABX process, to the extent that they hear significantly less consistently during testing than they do in normal listening, but only for distortion levels below 0.1% THD. I found that a sorting test seems to work for me without interfering with how I normally hear distortion in music. I also believe it would probably be possible to produce a sorting test that is just as hard to game or cheat as current implementations of ABX. We just don't have a free one, and no one has published research showing it is a viable alternative to ABX.
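For what it's worth, one way such a sorting test could be scored (purely a sketch of one option, not an established protocol) is rank correlation between the listener's ordering and the true distortion ordering:

```python
def spearman(rank_a, rank_b):
    # Spearman rank correlation for two permutations of 1..n (no ties)
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n**2 - 1))

true_order    = [1, 2, 3, 4, 5]   # files ranked by actual distortion level
listener_sort = [1, 3, 2, 4, 5]   # one hypothetical listener response

print(spearman(true_order, listener_sort))   # 0.9: one adjacent swap
```

A score near +1 indicates consistent discrimination, near 0 is guessing, and near -1 would be the inverse-of-intent effect mentioned earlier; making such a test hard to game would still need randomized, level-matched file presentation.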

Further, I think if we had a test that interfered less with sensitive hearing than the ABX tests we have now, we might find that more people hear, and find objectionable, low levels of non-linear distortion when listening to recorded music than some currently believe.
 
Maybe I overspoke with the self-deluded-fool thing. It hasn't been that bad around here lately. It's probably more accurate to say some people have been accused of imagining things that could possibly be real. The matter is complicated by a few claims that seem physically absurd, so it's hard to sort out what may be real from what may be imagined. I would vote for more research.
 