What is a 1-bit DAC?

Konnichiwa,

kittaylor said:
Care to name any of these killer sounding CDs? I dig the five disc Remember Shakti live set myself.

Anything recorded by Keith Johnson (Spectral) or Tony Faulkner where you like the music. Some (but not all) stuff on Eloquence, Naim and Linn (largely classical).

Any Rudy Van Gelder recordings remastered from the original analogue tapes (sorry, could not resist).

Sayonara
 
Kuei Yang Wang said:
Have you ever ABX tested the ABX test itself?
This is meaningless.
Using some items reliably audible, such as one channel with inverted polarity?

What would you say if you found that you (or others) cannot reliably detect that with .05 significance? Do you conclude that this was inaudible or do you conclude that the test is flawed? Your call. I note that NONE of the widely published ABX tests used in support of that idiotic "everything sounds the same" position had first tested the audibility limit of their test/subjects.
Of course, if an ABX test is conducted in an idiotic manner, then its results are idiotic. Garbage in, garbage out. If you draw from that the conclusion that the ABX test is fundamentally flawed, then you only show your misunderstanding of what it is about.

If you can't ABX that, then you have only one honest conclusion - *you* cannot hear the difference. So what? What is flawed about the method? You may argue about its application, but the method itself is dog simple.

But let's reverse your argument. Suppose you bought that $1000 power cable and jump around in joy because "a curtain has been unveiled". Then a friend comes over and gives you a blind test: identify when the new cable is connected. All of a sudden you find it hugely difficult to do. What do you conclude? That the cable was still worth every penny, and that the test is flawed? Your call. I note that NONE of those idiotic "every molecule sounds different" types have done their reality check. Most of them can't tell an mp3 apart from the original.

How many trials does your ABX Test include?
16 if you are confident. That's enough to reject the null hypothesis. Goes fast.
30 if it is very subtle. That's enough to get the type-2 error low. More if you want to get anal about it. I wouldn't. If you can't reject the null hypothesis in 30 trials of focused attempts, then the difference is so subtle that you shouldn't jump around claiming dramatic differences.
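Those trial counts follow directly from the exact binomial test. A minimal Python sketch (my own illustration; the function name and the worked numbers are mine, not from the post):

```python
from math import comb

def abx_p_value(hits, trials):
    """One-sided exact binomial p-value: the chance of scoring `hits`
    or more correct out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

# With 16 trials, 12 correct is the smallest score that rejects the
# null hypothesis ("just guessing") at the 0.05 level:
print(round(abx_p_value(12, 16), 4))  # 0.0384
print(round(abx_p_value(11, 16), 4))  # 0.1051
```

So "16 trials" works because anything from 12/16 upward is already significant at the 0.05 level.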

And 2, I would say that the necessary number of trials to give statistical significance is closer to the 50-100 range. To be certain that insensitive test subjects etc. do not cause statistical problems, I would suggest something along the lines of 5 trials each for at least 25 - 50 subjects, with "outlier" data discarded, for statistical significance. In order to not fall into the attention-span-deficit trap, I suggest that if more than 5 trials are used per subject they are carried out in several separate sittings - if, that is, you wish to take due care to find out whether there is an audible difference, rather than to find support for the null hypothesis.
You keep speaking about large-group testing. Why?? I have repeatedly underlined that you need to be concerned only with your own tests. Conducting large-scale ABX testing to draw conclusions about the whole human population is utterly futile, and the ABX test is not suited for that either. I'd say you need a minimum of 20-30 trials PER subject, who has to have some experience and be motivated. To draw *any* statistically significant conclusions about the whole population you need at least 1000 subjects, and even then you can't *prove* the null hypothesis.
You can only claim that there is no evidence against the null hypothesis. And it should be the case that any single subject can reject it.

Given that ABX tests in general fail pretty reliably to show modest audible differences, one must consider their "error budget". The usual answer is that not only was no audible difference observed, but also (disregarding any issues one may take with test setups etc., of which there are equally many) the type-B error risk was very high, usually well above 50%.
The ABX test doesn't fail to show modest *audible* differences reliably for subjects who DO hear the difference. You refer to some unspecified, misguided test results.
One can do any bs in public to support some agenda, but I can't imagine how any sane person would keep lying to himself.

It takes only one experience of the huge gap between the confidence of sighted evaluation and the shocking difficulty of a blind test for a person to realise that there is no way to establish the true audibility of subtle things other than blind, repeated testing. And you can't avoid statistics there - it is there to *help* you. ABX is just a convenient method of doing that.
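The "type B error risk well above 50%" claim quoted above can be put into concrete numbers. A sketch (my own illustration; the 70% hit rate is a hypothetical listener, not a figure from the thread):

```python
from math import comb

def pass_probability(trials, threshold, hit_rate):
    """Probability that a listener with the given true hit rate scores
    at least `threshold` correct answers out of `trials` (binomial sum)."""
    return sum(comb(trials, k) * hit_rate ** k * (1 - hit_rate) ** (trials - k)
               for k in range(threshold, trials + 1))

# A listener who genuinely hears the difference 70% of the time,
# tested over 16 trials with a 12-hit pass mark (the 0.05 criterion):
print(round(pass_probability(16, 12, 0.7), 2))  # 0.45 -> type-II risk ~55%

# The same listener over 30 trials (pass mark 20 keeps type-I below 0.05)
# passes much more often, which is why more trials help with subtle cases:
print(round(pass_probability(30, 20, 0.7), 2))
```

In other words, a short test at the 0.05 level really is stacked against a real-but-imperfect listener, which is exactly the type-A vs type-B trade-off being argued about here.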

posted by wimms
But, being an intelligent person, you must get the gut feeling that something is wrong when you predict reliably hearing a difference when you know what you are listening to, but in a blind ABX do no better than random chance.
No, being an intelligent person I am actually aware of a variety of issues with all sorts of forms of testing, including ABX.

And I know enough about statistics and ABX testing to largely discard the majority of results as being severely flawed both in the execution of the test and in the statistical analysis.

I am also aware of the propensity of humans to hear a difference when in fact none is present, and equally of the propensity of humans to fail to observe an audible difference despite one being present, depending upon whatever beliefs are held.
I don't get what your point is. You argue about a different thing, and while doing that you deny a very simple message: if you conduct an ABX test of subtle things for your own personal confidence, you exclude any possible bias. If you believe you can always exclude such bias without a blind test, you are dreaming.

ABX raises more questions than it answers and is fundamentally flawed on several levels as a method of determining the presence or absence of small audible differences. It is sufficiently reliable for large audible differences only.
I disagree. The alternative is even worse - the groundless assertion of a subject about subtle audible differences, with tons of vocabulary describing them, has no credence, especially when that same subject is unable to reject the null hypothesis.
Insofar as you speak of large test groups with diverse experience, you are right about large audible differences only. But for experienced individuals it is the opposite - as the number of trials increases, not only does the reliability of a true result increase, but so does sensitivity to subtle differences.

Anyway, the point of taking an individual ABX test is not to prove any universal truth to anyone; it is simply a method of removing any bias and getting a reliable result. The fact that a well-executed ABX result has orders of magnitude more credibility is just an unavoidable byproduct.

Correct. However, the results of such tests can nevertheless support or reject the null hypothesis. And more than enough people, including the original ABX mafia, have IN WRITING, REPEATEDLY and based on VERY POOR STATISTICS claimed exactly that, namely proof of the null hypothesis.
What mafia?? If you can't reject the null hypothesis, then *you* can't hear the difference. That's all there is to it.

I'm more disgusted about some nuts getting excited about a new shiny power cable and how dramatically it "improves" the already insanely good sound. And not only is that done repeatedly, in writing, they have a bunch of periodicals for that.
 
Konnichiwa,

wimms said:
I'm more disgusted about some nuts getting excited about a new shiny power cable and how dramatically it "improves" the already insanely good sound. And not only is that done repeatedly, in writing, they have a bunch of periodicals for that.

And there we finally note your agenda.

I am not making claims that mains cable changes are reliably audible, however a number of particular very real effects mean that they CAN have an audible effect on sound, as do other cables. Clearly you feel that this should not be the case and that they should not hear differences.

Blind or not is also neither here nor there, but one of my "calibrations" for DBT sensitivity is reasonably reliable identification of the differences between Radioshack "Goldpatch" RCA cables and RG213 terminated with WBT or WBT-clone locking RCA connectors.

And that is where agendas come in: the majority of published ABX tests have had from the beginning the agenda of producing pseudo-scientific data to support any number of polemics and propaganda, ESPECIALLY against cable makers (who largely overcharge and underdeliver, and often make cables that have a very large audible effect simply by deliberate manipulation of the sound, but all that is again irrelevant to me).

And my point gets proven again.

Again, to be clear, if your test does not show an audible difference between Radioshack "Goldpatch" RCA cables and RG213 terminated using WBT or WBT Clone locking RCA connectors you may as well not bother.

I recommend the following setup to ensure a level playing field:

Stax ESL Headphones
Free-field EQ applied using highly transparent circuitry
Crossfeed applied using highly transparent circuitry
HDCD Processor 1 as AD/DA converter, or failing that any of the following: Audio Synthesis DAX Discrete, dCS Elgar or its Pro cousin

This eliminates most of the common test setup mistakes, which usually include diabolical room acoustics and very low-grade ancillaries that would certainly not be acceptable for studio monitoring purposes and at best can be used to judge whether something is audible in the context of mass-produced No-Fi. I find it indeed appalling how incompetently the room acoustics and speaker setup are usually handled.

After assembling a "monitor grade" listening setup, listeners should have time to become familiar and comfortable with it AND they should be kept completely uninformed as to what in particular is being tested (knowing what is being tested and having a preset opinion on whether it makes an audible difference will cause most people to fail the test, for obvious reasons).

Finally, any single test series must be short enough that no attention-span issues become notable. For me, even 10 trials in a row is well too much; I cannot remain sufficiently focused (largely because I feel the whole thing is a waste of time, I suspect).

Do you agree that the above-described kind of test setup would help to maximise test sensitivity to small differences, if coupled to sufficiently large test sets (we may disagree about exact numbers, largely because I wish to minimise the risk of type-B errors whereas you seem primarily concerned with reducing the type-A error risk)?

I am not aware of many blind tests that have been carried out with the kind of care listed and then published. Maybe you wish to start? Please do not forget to first reliably reject the hypothesis that the test setup itself leads to insensitivity to small differences - in other words, calibrate and validate your test setup.

Sayonara
 
I'm with wimms on this one. ABX testing itself isn't fundamentally flawed. It's a powerful tool, assuming it's used correctly.

If someone makes the claim that a certain cable makes the violinist backflip out of the soundstage and dropkick the listener in the face, then I want to see the listener either fly halfway across the listening room or say "thank god you didn't hook up the violin-ninja cable".

If you say you're *confident* that changing something has made an audio system sound better, then you should be able to discern between the original and changed systems 100% correctly regardless of how many tests you do.

If you think that a change has made a subtle difference (you acknowledge that it's not "night and day") then getting 14 out of 20 guesses right may indicate something good, and maybe you should test it a bit more, perhaps using different people. But don't start thinking you're confident - let other people prove your work over time. Then you can grow an ego.

If you get 11/20 in an ABX test, that indicates nothing.
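For the concrete scores above, the exact guessing probabilities (my own arithmetic sketch, not from the post) come out as follows:

```python
from math import comb

def guess_probability(hits, trials):
    """Chance of getting `hits` or more correct out of `trials` by coin-flipping."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials

print(round(guess_probability(14, 20), 3))  # 0.058 -> suggestive, just short of 0.05
print(round(guess_probability(11, 20), 3))  # 0.412 -> indistinguishable from guessing
print(guess_probability(20, 20))            # 1/2**20, under one in a million
```

Which matches the intuition in the post: 14/20 is worth a follow-up, 11/20 is noise, and a claimed "night and day" difference should produce something close to the 20/20 row.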
 
Kuei Yang Wang said:
And there we finally note your agenda.
MY agenda?? :confused:

I am not making claims that mains cable changes are reliably audible, however a number of particular very real effects mean that they CAN have an audible effect on sound, as do other cables. Clearly you feel that this should not be the case and that they should not hear differences.
CAN. The magical word that makes even most bizarre ideas seem plausible. The engine of illusions.

Leaving aside insanely sensitive scientific instrumentation, I've never heard of $1000 mains cables being used in recording studios, either.
Audio equipment whose weakest link is, um, 2 m of mains cable nowhere near the signal path is worth nothing but putting into the fireplace instead of dancing around it.

I recommend the following setup

Do you agree that the above described kind of test setup would help to maximise test sensitivity to small differences, if coupled to sufficiently large teststes
Now that is something. Agree with all points. That would be a good start.

(we may disagree about exact numbers, largely because I wish to minimise the risk of type B errors whereas you seem to primarily concerned to reduce the type A error risk)
Interesting. Unless you are out to prove the null hypothesis, what other point would there be in focusing on type-B risk?

I am not aware of too many blind tests that have been carried out to the kind of care as listed and have been published. Maybe you wish to start?
No, I don't wish to prove anybody else's claims. I'd do my tests when I'd have to prove something else bizarre that flies on the wings of CAN.
 
Godwin's Law of audio

Groan.

Really, some things never change. It seems there is an audio version of Godwin's Law of Usenet. The audio corollary seems to be that "as an audio discussion grows longer, the probability of the discussion becoming an argument about ABX or DBT approaches one."

The usual interpretation is that as soon as ABX/DBT is mentioned, the discussion is instantly dead. This discussion was about single-bit DACs.

I think we can consider this thread dead.

Cards on the table: I'm a rabid supporter of ABX and DBT. I teach science at undergraduate and postgraduate level. There is no other way. Sadly, it is staggering how many university graduates don't understand that.
 
Actually, please note that "bass" & "treble" controls are completely useless for correcting tonal flaws in room acoustics, speakers or recordings.

You need much more precise tools, such as a parametric equaliser, AND the skill to use them.
No.

I recorded a live classical singer with the following horrid configuration:

Powerbook G3 bronze 1/8" line in
Shure 87A mic
ART Tube PAC
Audacity

This low-grade setup was all I could scavenge on no notice. The bottom line is that the resulting recording, to which I applied no equalization, was drastically boosted in the higher frequencies. The ambient noise of the room was exaggerated and the sound was very unbalanced.

Turning the "tone" control down drastically on playback made the sound surprisingly professional. I am not arguing that it was audiophile quality, but the singer said it was the first time she had heard herself recorded and been pleased with the result. The difference in listenability was night and day.

So, in contrast to pedantic hyperbole, such controls do serve a purpose. Sometimes the perfect can't be used to destroy the good.