Another Objective vs Subjective debate thread

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Hi,

identical files will play identically ALL ELSE BEING EQUAL

The capitalised rider could be pedantically added to every statement, but why do that just to tame the pedants in our midst?

All else being equal was neither claimed nor inferred. I agree, if all else is equal (and this is a fun part, IT NEVER EVER IS, honestly) then there can be no difference.

However "ALL else being EQUAL" is impossible (one of the very few true impossibilities in this universe).

Ciao T
 
By wintermute - These are all cases where I would swear black and blue that I heard it, and there were no visual clues to help with the illusion. I have of course no way of proving it. The part that I've always been baffled by is that I didn't think that effects outside the width of the speakers, or even more bizarrely above or behind me, were actually possible.

Those "illusions" are cool, aren't they? The up/down ones are cool where they offset the bass/sub response a few ms after the full-range on a 4 or 5.1 system, giving the realistic effect of something coming down through the ceiling and into the floor :eek: . Front/back perception and other holographic effects are just phase cancellation. I have a DSP on my HT PC that allows for specific control of this. "Stereo wide" is a crude example of this effect.

OS
 
To infer, BTW, means to deduce or conclude (information) from evidence and reasoning rather than from explicit statements. The word you are groping for is 'implied'.

Isn't it a pity that you are familiar only with the definition which you have produced, and that a quick Google might have prevented that slip-up. You would have found that it also means to imply.

I agree with Fran. It might be time to tone down the attitude.
 
"Infer is sometimes confused with imply, but the distinction is a useful one. When we say that a speaker or sentence implies something, we mean that it is conveyed or suggested without being stated outright: When the mayor said that she would not rule out a business tax increase, she implied (not inferred) that some taxes might be raised. Inference, on the other hand, is the activity performed by a reader or interpreter in drawing conclusions that are not explicit in what is said: When the mayor said that she would not rule out a tax increase, we inferred that she had been consulting with some new financial advisers, since her old advisers were in favor of tax reductions."
 
OK, I decided to look it up

infer (ɪnˈfɜː)

— vb , -fers , -ferring , -ferred
1. to conclude (a state of affairs, supposition, etc) by reasoning from evidence; deduce
2. ( tr ) to have or lead to as a necessary or logical consequence; indicate
3. ( tr ) to hint or imply

[C16: from Latin inferre to bring into, from ferre to bear, carry]

usage: The use of infer to mean imply is becoming more and more common in both speech and writing. There is nevertheless a useful distinction between the two which many people would be in favour of maintaining.


The last sentence is the part to which SY refers. Funny, I have always used it in the three manners shown above. I stand corrected and officially apologize for the stance I took with Counter Culture.
 
OK, first - I will stand behind my original statement that two separate digital audio files that produce identical checksums are in fact identical, even if one is fragmented and the other one isn't. I never claimed that they couldn't sound different. However, if they can be proven to sound different, that comes down to a hardware issue. In the case of the hard drive searching all over the platters, making noise on the supply lines and causing differences, the hardware is the culprit, not the files. Modern PC hardware should be able to easily differentiate between digital data and noise, and if it can't, then you'll have more problems than music not sounding right; you'll have a crash-prone computer.

As for my statement that anything that can be heard can be measured, I didn't mean to imply that we have the capability at the present time to measure everything. It is quite plausible that a way of measuring subtle differences just hasn't been found yet, the comment about sound stage perception being an excellent example. I myself have experienced some rather dramatically wide and deep audio imagery that almost gave me a real sense of being there, but it was limited, I think, by the conflicting visual cues my eyes were giving me. If I listen to music in a darkened room, the sound seems to have more realism, but I have no idea how one would measure that; perhaps by brain scan? But the essence of the statement remains true: if there are in fact audible differences, then they must be measurable in some way. As for the rest of this debate, well, human beings always struggle to communicate effectively, I'm certainly no exception, and we all need to allow for that.

Mike
 
Just another Moderator
Joined 2003
Paid Member
Mike, 100% agreement from me that if the checksums are the same then the files *should* be identical. Just out of curiosity, which checksum did you check? MD5?

Many years ago I was mortified when doing a data conversion from a mainframe to a unix computer. I'd written a compression algorithm and compressed the files before transfer, as they had to go over a 33.6k modem connection. One file would not decompress and I thought I had a bug in my decompression program. It turned out that the file had been corrupted during the transfer, but the CRC checks in the data transfer hadn't picked it up. I think (but am not 100% certain) that I ran the unix sum program on the first and subsequent transferred files and they both gave the same checksum.
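The failure mode Tony describes, where a weak checksum passes a corrupted file, is easy to reproduce. A minimal Python sketch (using a naive additive checksum in the spirit of the old unix sum tools, not the actual `sum` algorithm) shows two different payloads fooling the weak check while MD5 tells them apart:

```python
import hashlib

def byte_sum(data: bytes) -> int:
    """Naive additive checksum: order-insensitive, so easy to fool."""
    return sum(data) % 65536

a = b"ab"  # same bytes...
b = b"ba"  # ...in a different order

# The additive checksum cannot tell the two files apart...
assert byte_sum(a) == byte_sum(b)

# ...but a cryptographic hash such as MD5 can.
assert hashlib.md5(a).hexdigest() != hashlib.md5(b).hexdigest()
```

This is why comparing MD5 sums is a far stronger claim of file identity than a simple sum or short CRC.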

Is the thread you are referring to the one that was supposed to have one file normal and one with added distortion? I must say that if both files were actually identical, the test was even more interesting than I first thought ;)

Tony.
 
The claim was that files ripped from a CD to a solid state drive, such as a USB stick, were superior to (sounded better than) files ripped to a HD with a rotating platter, even when they both had the same MD5 checksum.

Wishful thinkers will believe anything.

The cynical will exploit them.

Those without the courage of their convictions will 'cut them a little slack'.

Embedded in the language is a certain folk wisdom. 'Common sense', we call it. In this instance the applicable aphorism is: 'Give them an inch and they'll take a mile'.
 
Just another Moderator
Joined 2003
Paid Member
The claim was that files ripped from a CD to a solid state drive, such as a USB stick, were superior to (sounded better than) files ripped to a HD with a rotating platter, even when they both had the same MD5 checksum.

OK, in that case I have to concur. If there was any difference in the *playback*, then copying the file that was ripped to HD to the SSD should result in identical playback. *If*, and it's a big if, there was any difference in the playback, it has nothing to do with the original ripping.
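The verification step described above can be sketched in a few lines of Python; the file paths and contents here are stand-ins for real rips on a HDD and SSD:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def md5_of(path: Path) -> str:
    """MD5 of a file, read in chunks so large rips don't fill RAM."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a dummy "rip"; substitute real HDD/SSD paths as needed.
with tempfile.TemporaryDirectory() as d:
    hdd_rip = Path(d) / "track01.wav"         # hypothetical file on the HDD
    hdd_rip.write_bytes(b"\x00\x01" * 22050)  # dummy audio payload
    ssd_copy = Path(d) / "track01_ssd.wav"    # copy destined for the SSD
    shutil.copyfile(hdd_rip, ssd_copy)
    # Same checksum => bit-identical data, whatever medium it sits on.
    assert md5_of(hdd_rip) == md5_of(ssd_copy)
```

If the checksums match, the data delivered to the player is bit-identical, so any remaining audible difference would have to come from the playback hardware, not the files.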

Tony.
 
Hi,

OK, first - I will stand behind my original statement that two separate digital audio files that produce identical checksums are in fact identical, even if one is fragmented and the other one isn't. I never claimed that they couldn't sound different. However, if they can be proven to sound different, that comes down to a hardware issue.

Or possibly software. Different CPU loads modulate the power supply; this in turn can cause the clock oscillator that drives the audio (or USB) subsystem to show both different levels of jitter and a different jitter spectrum.

As for my statement that anything that can be heard can be measured, I didn't mean to imply that we have the capability at the present time to measure everything; it is quite plausible that a way of measuring subtle differences just hasn't been found yet,

This I think anyone can agree with. Would you also be willing to grant that some of the measurements currently applied to HiFi equipment may be unable to offer us a direct "better measurement = better sound" correlation; in other words, that they measure abstract qualities that have no direct bearing on sound quality?

the comment about sound stage perception being an excellent example. I myself have experienced some rather dramatically wide and deep audio imagery that almost gave me a real sense of being there, but was limited I think by the conflicting visual cues my eyes were giving me. If I listen to music in a darkened room, the sound seems to have more realism, but I have no idea how one would measure that, perhaps by brain scan?

Yes, this may be a possible approach.

But the essence of the statement remains true, if there are in fact audible differences, then they must be measurable in some way. As for the rest of this debate, well, human beings always struggle to communicate effectively, I'm certainly no exception, and we all need to allow for that.

Please understand, I was not attacking your point; I was merely asking to have it clarified. As it was written it allowed different interpretations, so I desired to be sure about what you wished to say before blasting away.

For, sadly, there are people who do take the position that everything that matters in audio is already covered by a few very basic measurements and that there is no need to measure anything else or experiment further.

Such a position is one that most people probably would not like to be identified with.

Ciao T
 
Hi,

The claim was that files ripped from a CD to a solid state drive, such as a USB stick, were superior to (sounded better than) files ripped to a HD with a rotating platter, even when they both had the same MD5 checksum.

Hmm, if this was the claim, then it is obvious the claim was void. In what you describe there was no playback, so clearly no sound was produced.

Well, except for the sound from the hard drive, which would have been absent from the USB memory, and hence we would have had a lower acoustic noise floor, but still no signal.

Now, if by comparison the claim was that two identical rips with the same checksum, one ripped to a HDD and the other ripped to an SSD of some description attached via USB, sounded different when played back from the respective media they were ripped to, this is certainly within the realm of the possible. One would need to know much more about the actual test setup to be able to comment with any confidence.

Wishful thinkers will believe anything.

Indeed. And if they lack counter arguments to points made by those opposing their position they take refuge in ridicule and non-topic related attacks.

Those without the courage of their convictions will 'cut them a little slack'.

Yes, those lacking the courage to oppose the unreasonable will 'cut them some slack', on the principle that we should live and let live, and on the understanding that we are dealing with a hobby and the fate of the world does not ride on how scientifically or not audio systems are evaluated. So we do have a tendency to cut those who promote un-reason of the objectivist brand some slack, just as we do those who promote un-reason of any other brand.

Embedded in the language is a certain folk wisdom. 'Common sense', we call it. In this instance the applicable aphorism is: 'Give them an inch and they'll take a mile'.

Yes, there is much wisdom in this folk wisdom. In this instance the applicable aphorism is: 'Where there is smoke, there is Fire'.

Ciao T

PS, thank you for correcting me, English is my third language, so I often struggle with finer points.
 
Yes, I think it may have been MD5, but it was several years ago, so I'm not totally sure. But I do remember I was checking songs ripped to hard drive that were compressed with the lossless WavPack encoder, then re-expanded to the original wav format, and compared to original files that were never compressed. I wanted to verify that WavPack wouldn't eff-up my music before I started using it on a regular basis. What I found was that WavPack did work well, but one set of files checked differently. After investigating the cause, I found that the original uncompressed file had somehow gotten corrupted, and it did sound a little distorted; the compressed file sounded fine. I never did figure out how that happened, but it was the only time it did, so don't waste time fretting about it.
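Mike's sanity check, compress, re-expand, and compare against the original, is easy to automate. A sketch using zlib as a stand-in for a lossless audio codec (the principle, identical checksums before and after the round trip, is the same for any lossless encoder):

```python
import hashlib
import zlib

def verify_lossless_roundtrip(audio: bytes) -> bool:
    """Compress, decompress, and confirm the bytes survive unchanged.

    zlib stands in here for a lossless audio codec; for a real codec
    you would shell out to the encoder/decoder instead.
    """
    original_md5 = hashlib.md5(audio).hexdigest()
    restored = zlib.decompress(zlib.compress(audio, level=9))
    return hashlib.md5(restored).hexdigest() == original_md5

wav_payload = bytes(range(256)) * 100  # dummy PCM data
assert verify_lossless_roundtrip(wav_payload)
```

A mismatch here would point to corruption somewhere in the chain, exactly the case Mike stumbled on, where the *uncompressed* original turned out to be the damaged file.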
Counter Culture,
I too am sometimes sceptical about claims made here; as Carl Sagan once said, extraordinary claims require extraordinary evidence. However, having said that, I cannot outright dismiss what others claim to hear even if they do seem a little "out there". A case in point would be the rude awakening I got after I built my first LM3886 chip amp. I had been aware of their existence for a while, but never saw any reason to check them out; after all, they're just chips and "everyone knows they don't sound good". Then one day I stumbled onto this site quite by accident, and saw all the accolades people were tossing around about chip amps, so I thought what the hey, they're cheap enough, let's give 'em a whirl. So I ordered a couple of chips, dug through the junk parts for everything else, and built one. Well, at first I wasn't very impressed, but after a couple of hours of play-time, I noticed the sound had come into focus like I'd never heard before, easily the best sounding amp I'd ever built. I was hearing things in music that I was very familiar with that I never knew were there. Amazing, especially when you consider it was made mostly from junk box parts. Been using and enjoying music with them ever since.
ThorstenL,
From my own experience, when checksums are the same, I hear no difference. I can only speculate as to the possible cause when others say they do hear differences, but I'm fairly confident it's in the hardware and not the files or the method used to rip them. And no, I'm not saying that if it measures good it must sound good or vice versa, just that if there is a real difference, there must be some way of measuring it; we just haven't figured out how to do it in some cases yet.
I hope this clarifies my points a little bit; I'm not "the great communicator", and most likely never will be.
Anyhoo, it's getting late, way past this old fart's bed time, so g'night all.

Mike
 
Administrator
Joined 2004
Paid Member
The claim was that files ripped from a CD to a solid state drive, such as a USB stick, were superior to (sounded better than) files ripped to a HD with a rotating platter, even when they both had the same MD5 checksum.


Oy vey! This was argued to death some months back. How short are the memories here?
The conclusion was that the only difference could be in the playback, i.e. a HDD might generate more electrical noise than a SSD or USB stick, and that noise somehow gets into the audio stream.

Do we really have to go over all this again? :rolleyes:
 
Folks,

In audio debates like this one we usually see DBTs (double-blind tests) waved around.

I was pointed to a much better summary of the most basic objections (long before I wheel in placebo/nocebo, though their presence is implied by the author and inferred by me), so I will link the original post, but also take the liberty to reproduce it here, as I deem it to be in the public domain.

Keith_W@ AVGuide said:
I hear a lot from objectivists that medical DBT's are the standard of evidence, therefore we audiophiles should submit to DBT's as the best way to evaluate equipment. As a physician, it is my job to read and critique journals, so I am VERY familiar with the ins and outs of medical DBT's.

While I have no sympathy for the most hardcore objectivists, the very same "partisan hacks" Mr. Harley was ranting about, I believe that DBT's need to be applied intelligently before the results can be interpreted. But the way most audio DBT's are conducted, these are about as unscientific as the subjectivists they are trying to debunk.

As pointed out in another post by Jonathan Valin - medical DBT's are massive. They involve thousands of patients with strict entry criteria (the disease being studied is strictly defined, you can not have other medical conditions which may interfere with data interpretation, you must be of a certain age, etc etc). These studies are carefully designed, take months or years to complete, months to analyze, and then months for the peer review process and finally publication.

Audio DBT's are not. We never know how sophisticated the listening panel are, whether they know what to look for, and whether individual variations in hearing, perception, chronic diseases which may affect hearing - have been identified and controlled for. We do not know if the test material (music) being played is familiar to the listener. We don't know if non-verbal cues (which can be used to lead AND mislead) are present. And finally, the evaluation period is all too brief. We all know that it can sometimes take weeks of listening to material we are familiar with, on systems we are familiar with, before we get to know the effect of a particular change. How are we expected to identify the changes in such a short period of time, and in an unfamiliar system?

The lack of identification of potential sources of bias, the lack of scientific rigour exposes most DBT proponents as sham merchants keen to offer scientific window dressing on a testing methodology which is of limited use.

The next difference is this - in a medical DBT, we know what we are looking for. The trials are designed to demonstrate the primary or secondary endpoint. For example - if a drug is purported to reduce the incidence of stroke, the primary endpoint would be number of new strokes per year in the treatment and control group. The study design would identify other stroke reducing drugs in both groups, and specify what is permissible and what isn't. We don't just take a new drug, give it to 5,000 patients and give placebo to 5,000 controls, and look at both populations to see what happens.

In an audio DBT, what is the primary endpoint? Does the testing panel even know what they are supposed to be looking for? Is there a score sheet that says "image width was xxx meters" or "frequency response: skewed to bass or treble"? Well, there isn't. You are expected to notice a difference, whatever it is, and then use that as a basis for comparison.

Another point about the size of the sample. In medical DBT's, we often do a power calculation before we start recruiting. A power calculation tells us how many subjects we need to recruit before the DBT becomes statistically meaningful. For example, a dose of 10,000 Grays of radiation only needs a small sample size to demonstrate the harmful effect. But what about much smaller doses, like the radiation you get from a chest X-ray? You need to know how many people to study before you even begin the study.

So what sample size is needed to demonstrate a relatively subtle audio tweak, such as the effect of various interconnects? Nearly everyone except the most naive listener can hear differences in loudspeakers, so you can get away with a small sample size. How many people do you need to test to demonstrate the difference between 0.01% THD and 0.02% THD? Most audio DBT's involve maybe 5-10 listeners. Is this enough to demonstrate the difference?

Despite the medical DBT being held up as the gold standard, I can tell you that many of them are either uninterpretable or poorly generalizable because of various failings in study design, study sample, and so on. Many of them tweak the statistics to make the differences seem more impressive than actually measured. Of course, it is possible to tweak the stats the opposite direction, for example to minimize the number of adverse outcomes. Simply redefine your endpoint, and there you go. I also know enough about these academics to know that their motives are not always pure and there may be all kinds of conflicts of interest.

I should also say: in audio DBT's, there is a strong bias towards the null hypothesis (that intervention X made no difference). Medical DBT's would be the same - if they were as poorly designed as audio DBT's. In fact, there have been a number of medical DBT's that have shown the null hypothesis when all of us in clinical practice, anecdotally know that the intervention makes a difference with our patients. In such cases, I ignore the study and tell people that I know that xxx works. Eventually another DBT may come along that changes the conclusion. It happens all the time!

In the end, you may call me a "super-objectivist". I think that audio DBT's have their place, but only well designed ones. Most DBT's I read about are absolutely pathetic.


I would only re-iterate one point:

"Most DBT's I read about are absolutely pathetic."

Ciao T
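As a footnote to the power calculation raised in the quoted post, a back-of-envelope version for a simple ABX listening test can be sketched with the standard one-sample normal approximation (the 70%/55% success rates and the alpha/power choices below are illustrative assumptions, not figures from the post):

```python
from math import ceil, sqrt
from statistics import NormalDist

def abx_trials_needed(p_alt: float, p_null: float = 0.5,
                      alpha: float = 0.05, power: float = 0.8) -> int:
    """Trials needed for an ABX test to detect a listener who is right
    p_alt of the time against the chance rate p_null, at one-sided
    significance level alpha with the given statistical power."""
    z_a = NormalDist().inv_cdf(1 - alpha)  # critical value under the null
    z_b = NormalDist().inv_cdf(power)      # quantile for desired power
    num = z_a * sqrt(p_null * (1 - p_null)) + z_b * sqrt(p_alt * (1 - p_alt))
    return ceil((num / (p_alt - p_null)) ** 2)

# A listener who is right 70% of the time needs about 37 trials,
# but a subtler effect heard 55% of the time needs over 600.
print(abx_trials_needed(0.70))  # 37
print(abx_trials_needed(0.55))  # 617
```

This makes the quoted author's point concrete: a panel of 5-10 listeners doing a handful of trials each has essentially no power to resolve a subtle difference, so a null result from such a test says very little.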
 