Is Vista really capable of bit-perfect output?

Status
Not open for further replies.
dogber1 said:
imho, XX-HighEnd is badly written and not even remotely worth its price - there are much better players for audiophile users which are well-engineered and free, e.g. foobar2000.

Hi Dogber,

If you've followed this thread, you'll know that I've been using Foobar2000 (with ASIO) for the last three years or so. I'm a big fan.

XXHighEnd is nowhere near as nice to use (sorry Peter).

However, I just find XXHighEnd sounds better. Full stop.

In your article, you cite just one player that you know of that uses "exclusive mode" and has bit-perfect access to the soundcard's driver - XM-Play.

Are you denying that XXHighEnd does this too? You do, after all, know of XXHighEnd...

Mani.
 
XXHighEnd is nowhere near as nice to use (sorry Peter).

No problem at all, Mani.
I know it, of course, but I obviously have to give priority to SQ, and as long as I can find means to improve on that, it will keep having priority.

It's only fair to state this, and as long as people (myself included) are not shouting about the looks or the means of operating it, it will remain like that.
And as you probably know, I listen to *every* remark, and respond likewise. You (the actual users) ask, I do, as far as my capabilities go.
 
I too used FooBar2000 for years with ASIO4ALL. I thought it could not be beat (because it was bit perfect). Before that I used WinAmp with ASIO4ALL for years. I didn't find a difference in SQ between WinAmp and Foobar.

XXHighEnd, using its Engine #3 mode (requires 32-bit Vista), sounds better than anything I have tried. I use it daily (and still in Demo mode).

I lately tried XM-Play and it sounded like foobar to me 🙂

Cheers,
Brent
 
Hi,

manisandher said:
Are you denying that XXHighEnd does this too? You do, after all, know of XXHighEnd...
Nope - the third engine of XX-HighEnd is bit-perfect, but I find the software itself quite unusable due to the immature state it is currently in.

However, I just find XXHighEnd sounds better. Full stop.
There isn't any physical difference between the output of foobar2000 and XX-HighEnd (given that both are configured for bit-perfect playback), so the two players are totally equivalent in that regard. I don't want to offend you or anyone else, but human perception is easily deceived, including mine - you might hear a difference when in fact there isn't any.
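For what it's worth, "bit-perfect" is a directly testable claim: capture each player's digital output (e.g. via an SPDIF loopback into a recorder) and compare the raw PCM samples. A minimal sketch in Python - the file names and the loopback capture are my assumptions, not anything specific to XXHighEnd or foobar2000, and the captures are assumed already trimmed to the same start sample:

```python
# Sketch: decide whether two captured playback runs are bit-identical.
# Assumes two WAV files captured from a digital loopback and already
# aligned to start at the same sample (alignment is not handled here).
import hashlib
import wave

def sample_digest(path):
    """Hash only the raw PCM frames of a WAV file, ignoring header metadata."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return hashlib.sha256(frames).hexdigest()

def bit_perfect_match(path_a, path_b):
    """True when every sample in both captures is identical."""
    return sample_digest(path_a) == sample_digest(path_b)
```

If two players both pass this test against the source material, their digital output really is the same data, whatever we think we hear.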
 
No offence ('offense' for my US friends) taken. I think this is something that we all need to be mindful of.

You know, I had no intention of switching players. I loved (still love) Foobar.

Moreover, I really didn't want to believe that a computer/soundcard combo could match/beat my beloved and expensive (to me, at least) transport that I had bought new only a short while earlier.

But this is what I hear...

I don't understand it, but it is what I hear... and it's repeatable.

You know, I didn't (still don't) understand quantum mechanics... but I know it works!

Mani.
 
manisandher said:
Perhaps.

But then why do I have a 'psychological bias' towards a $100 piece of software vs. my $7000 transport?

This just makes no sense to me. I would loooooove my transport to trounce any computer/soundcard combo...

... but it doesn't.

Mani.


Are you using a soundcard?

Because from the way I read the "XXHE" site, it didn't seem to like the possible influence of soundcard drivers.
 
Digital transport is a very easy problem to solve - no one in his right mind would get the idea of investing insane amounts of money in standard Ethernet equipment, yet there is a large market for superstitious nonsense in the hi-fi sector. The physical reality is that you can get bit-perfect output by spending maybe $20-30 on hardware and exactly $0 on software, performing absolutely identically to other, much more expensive solutions - almost every experience which contradicts this is ultimately "voodoo", a belief rather than an intelligent observation.
 
awpagan said:
Are you using a soundcard?

Yes, but only to give me an AES/EBU connection to my DAC. I slave the soundcard to the DAC.

dogber1 said:
the physical reality is that you can get bitperfect output with spending maybe 20-30$ on hardware and exactly 0$ on software which performs absolutely identical to other, much more expensive solutions - almost every experience which contradicts this is ultimately "voodoo" and rather a belief than an intelligent observation.

I think most people who have contributed to this thread would concur with John Westlake in that there are 3 parameters that affect the sound quality:

1. Data Accuracy (bit perfect)
2. SPDIF Output Phase Noise (Jitter)
3. RF & Earth Leakage Current introduced Phase Noise spuriae & noise products

I don't believe 2) and 3) have anything to do with "voodoo"...

If you're ever in London, feel free to give me a shout - you're welcome to make an "intelligent observation" of the differences I hear for yourself.

Mani.
 
manisandher said:


I think most people who have contributed to this thread would concur with John Westlake in that there are 3 parameters that affect the sound quality:

1. Data Accuracy (bit perfect)
2. SPDIF Output Phase Noise (Jitter)
3. RF & Earth Leakage Current introduced Phase Noise spuriae & noise products

I don't believe 2) and 3) have anything to do with "voodoo"...

None of those 3 are voodoo. Assuming you are reclocking at the DAC and the DAC is properly isolated, then as long as #1 holds, 2 and 3 are taken care of _at the DAC_ and the source should not matter. If the source does matter, then one of the 3 does not hold.

Hence, claims that 2 different, BIT PERFECT players, connected such that 2 and 3 hold, sound different from each other are voodoo.
 
I don't believe 2) and 3) have anything to do with "voodoo"...
The effects of jitter have been largely overstated in the past. While it is certainly true that the temporal accuracy of the signal fed into the DAC needs to be on the order of 100 ps for CD audio, the effect is largely mitigated by using a buffer mechanism and a local oscillator to "reclock" the data. Sometimes the process is also, somewhat incorrectly, called "timebase correction" or "clock recovery", and any quality piece of equipment does it.
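The buffer-plus-local-oscillator scheme is easy to illustrate: samples arrive with timing jitter, queue in a FIFO, and are clocked out by a clean local oscillator, so the output timing no longer carries the input jitter at all. A toy Python simulation - the jitter figure and FIFO priming depth are made-up illustration values, not measurements of any real gear:

```python
# Toy model of reclocking: jittered arrivals go into a FIFO and are
# read out on a stable local clock, so output intervals are uniform.
import random
from collections import deque

FS = 44100.0            # nominal sample rate, Hz
T = 1.0 / FS            # nominal sample period, s
N = 1000                # samples to simulate
JITTER = 5e-9           # +/- 5 ns arrival-time jitter (illustrative)

random.seed(0)
arrivals = [i * T + random.uniform(-JITTER, JITTER) for i in range(N)]

fifo = deque(range(N))  # sample payloads, queued in arrival order
PRIME = 8               # let the FIFO fill a little before output starts
out_times = [arrivals[PRIME] + k * T for k in range(N - PRIME)]
out = [fifo.popleft() for _ in out_times]   # data comes out untouched

# Input intervals wobble by up to 2*JITTER; output intervals are set by
# the local oscillator alone and sit on a near-perfect grid.
worst_in = max(abs((b - a) - T) for a, b in zip(arrivals, arrivals[1:]))
worst_out = max(abs((b - a) - T) for a, b in zip(out_times, out_times[1:]))
```

Of course, with two free-running clocks the FIFO level slowly drifts, which is exactly the under/overflow problem raised elsewhere in this thread.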

The "loudness war" has had a much bigger impact on the overall quality of recordings, and this is largely ignored in hi-fi circles.
 
To dogber1

If you do not continuously adjust the frequency of the internal oscillator downstream of the SPDIF receiver, the buffer will eventually under- or overflow.

I did a test: an analog recording of the same piece from a CD player, using a sound card equipped with an SPDIF input.

One run - internal card clock set to 44.1kHz

Second run - external clock 44.1kHz provided by the CD player.

I aligned the two tracks in audacity to begin and end at the same moment, using some signal peaks.

One of the 10-second tracks was 10 samples longer - that was the difference between the CD clock and the sound card clock.

That is why quality equipment uses asynchronous sample rate conversion, which is significantly more complicated (but just one chip anyway 🙂).
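A quick sanity check of those numbers: 10 extra samples over a 10-second capture at 44.1 kHz corresponds to roughly 23 ppm of mismatch between the two clocks, and a fixed-size buffer with no rate adjustment would have to absorb thousands of samples per hour of playback:

```python
# Clock drift implied by the measurement above: one 10-second capture
# came out 10 samples longer than the other at a nominal 44.1 kHz.
FS = 44100                # nominal sample rate, Hz
SECONDS = 10              # length of each captured track
EXTRA_SAMPLES = 10        # measured length difference

expected = FS * SECONDS                       # 441000 samples expected
drift_ppm = EXTRA_SAMPLES / expected * 1e6    # ~22.7 ppm clock mismatch

# A fixed-size FIFO with no rate adjustment accumulates this steadily:
samples_per_hour = EXTRA_SAMPLES / SECONDS * 3600   # 3600 samples/hour
```

So without continuous rate adjustment (or ASRC), even a generous FIFO only buys you minutes before it runs dry or spills over.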
 
phofman, yeah, there are much more complicated jitter correction schemes - I was just outlining the simplest one for better understanding. The bottom line is that the jitter of the signal source can be suppressed well enough that the noise of the resulting signal is below other noise sources.

There is no question that the 3 settings sound different.
That's something I haven't questioned. What I have serious doubts about, however, are:
- the differences between the bit-perfect XX-HighEnd and foobar2000, or any other bit-perfect player for that matter,
- the effects of a jittery source signal on the resulting converted signal (given that the equipment has correction mechanisms).
 
The question is how effective the correction mechanism is. There are basically four scenarios:

1. Slaving the SPDIF source to the DAC - a perfect solution, though almost never used.

2. Asynchronous SRC - I have read some not very satisfied reports, though it is undeniably a step forward.

3. An asynchronous buffer - it does not work, unless you are willing to introduce a very long audio delay, which is hard to compensate for in an audio/video setup.

4. Avoiding all synchronous protocols (SPDIF, adaptive USB) entirely. Technically, I like this one 🙂

Nor do I believe in sonic differences between bit-perfect software players, along the whole route to the card, for asynchronous cards (SPDIF, asynchronous USB). I could admit there might be some influence for adaptive USB (the majority of USB cards); I do not know how the USB controller handles DMA etc. in audio mode.
 
Wow, I cannot believe the nonsense everyone posted here!

manisandher said:
2. SPDIF Output Phase Noise (Jitter)
While software might possibly affect this by influencing system load, it cannot have an effect if one is using a DAC that acts as the flow-control master--such as a few of the USB DACs, and nearly all PCI sound cards--yet the XXHE guy claims it makes a difference even with those.

3. RF & Earth Leakage Current introduced Phase Noise spurie & noise products
OK, but it cannot be influenced by software.

dogber1 said:
into the DAC needs to be in the order of 100 ps for CD audio, the effect is largely mitigated by using a buffer mechanism and a local oscillator in order to "reclock" the data.
Most outboard DACs use a PLL to recover the clock from S/PDIF, or nowadays, use ASRC to convert between the two clock domains. Both are far from perfect. There are in fact very few players that use the buffer mechanism, and usually those are transport/DAC pairs that send the master clock signal from the DAC back to the transport to keep the needed buffer size minimal (otherwise drift between the clocks can be very significant and the buffer would be on the order of seconds).
Besides, CD audio is a lame target to aim for. For 24/96 you need a fraction of a picosecond of jitter.
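Both figures follow from the usual rule of thumb: the worst-case slope of a full-scale sine at frequency f and amplitude A is 2*pi*f*A, so a timing error t_j becomes an amplitude error of 2*pi*f*A*t_j, and keeping that below half an LSB gives t_j < 1/(2*pi*f*2^bits). A quick Python check, where taking 20 kHz and 48 kHz as the worst-case signal frequencies is my assumption:

```python
# Jitter budget that keeps the worst-case amplitude error of a
# full-scale sine below half an LSB: t_j < 1 / (2*pi * f * 2**bits).
import math

def max_jitter_seconds(bits, f_hz):
    """Allowed timing jitter for a full-scale sine at f_hz, in seconds."""
    return 1.0 / (2 * math.pi * f_hz * 2 ** bits)

cd_limit = max_jitter_seconds(16, 20_000)     # ~121 ps for 16-bit CD audio
hires_limit = max_jitter_seconds(24, 48_000)  # ~0.2 ps for 24/96 material
```

So "on the order of 100 ps" for CD audio and "a fraction of a picosecond" for 24/96 are both consistent with this estimate.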

phofman said:
If you do not continuously adjust frequency of the internal oscillator past SPDIF receiver, the buffer will eventually under/overflow.
That's why you reset the buffer during periods of silence and between songs. A DIY implementation of this has been mentioned on this forum long ago.

That is why quality equipment is using asynchronous sample rate conversion which is significantly more complicated (but just one chip anyway 🙂 ).
ASRC embeds the jitter into the signal as amplitude errors. You call that quality? ASRC is based on the assumption that you can estimate the incoming clock rate accurately, which is not really the case. ASRC attenuates interface jitter but is very far from eliminating its effects.

b-square said:
Buffer and clock out. Yes, it is a 100% solution.
No, it's a very high-latency solution--far worse than mere A/V sync issues; we're talking easily half a second of delay or more. A 100% solution has the DAC clock providing flow control.
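The latency figure is easy to estimate: with no clock feedback to the source, the buffer must absorb the entire clock drift over the playback, so its size (and hence the startup delay) scales as drift times duration. A rough Python estimate, where the +/-100 ppm consumer-crystal tolerance is my assumption rather than a measured figure:

```python
# Buffer (and startup latency) needed by a free-running reclocker that
# gets no rate feedback: it must soak up drift * playback_time of audio.
def buffer_seconds(drift_ppm, playback_seconds):
    """Seconds of audio the FIFO must hold to ride out the clock drift."""
    return drift_ppm * 1e-6 * playback_seconds

# Two +/-100 ppm crystals can disagree by 200 ppm in the worst case;
# over a 74-minute CD that is most of a second of buffered audio.
worst_case = buffer_seconds(200, 74 * 60)
```

Which is why transport/DAC pairs that do use a buffer send the master clock back to the transport: with feedback the drift is controlled and the buffer can stay tiny.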
 