abzug said:
Synchronous or asynchronous? The synch one could have software influencing jitter somehow.
Exactly what I am wondering.
abzug said:
Synchronous or asynchronous? The synch one could have software influencing jitter somehow.
Well I was thinking the most common USB DAC, which is synchronous with phase-locked loop for system clock generation.
These DACs have a buffer that prevents timing errors. Even if the PLL is not 100% accurate, as long as the buffer has data in it the DAC's receiver pulls the data from the buffer, reads the header on the packet, and then plays the music data as the header specifies.
How does a player's software improve on this, or have any impact at all?
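To make the buffering point concrete, here is a toy sketch (plain Python, every number invented) of why irregular packet delivery doesn't reach the output as long as the buffer never runs dry: the host pushes packets in whenever it gets around to it, while the DAC side drains samples at a steady rate set by its own clock.

```python
import random
from collections import deque

SAMPLES_PER_PACKET = 44   # roughly 1 ms of 44.1 kHz audio per USB frame (illustrative)
PREFILL_PACKETS = 4       # the DAC starts draining only after a few packets are buffered

buffer = deque([0.0] * (SAMPLES_PER_PACKET * PREFILL_PACKETS))
pending = 0               # packets the host still owes us because it was late
underruns = 0

for frame in range(1000):
    # Host delivery is jittery: sometimes a packet slips to the next frame,
    # in which case two (or more) arrive together later.
    if random.random() < 0.05:
        pending += 1
    else:
        for _ in range(1 + pending):
            buffer.extend([0.0] * SAMPLES_PER_PACKET)   # dummy payload samples
        pending = 0
    # The DAC drains at a constant rate, paced by its own local clock.
    for _ in range(SAMPLES_PER_PACKET):
        if buffer:
            buffer.popleft()          # this sample is clocked out exactly on time
        else:
            underruns += 1            # timing only suffers once the buffer is empty

print("underrun samples:", underruns)   # almost certainly 0 with this prefill
```

As long as underruns stays at zero, the delivery timing never shows up at the output; whether the DAC's clock itself can still be disturbed is exactly what the rest of the thread argues about.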
Originally posted by JimOfOakCreek
Well I was thinking the most common USB DAC, which is synchronous...
Is that really the most common kind? I have to ask: why?? All the pro external interfaces (Fireface, etc) surely have their own oscillators in them...? Who in their right mind would design an audio device that has to be slaved to the clock of a non-audio transmission interface when they could use a decent local oscillator instead? Gah.
Originally posted by b-square
That's not what synchronous means in this case. The USB receiver has a local clock, regardless.
Oh, okay, I was actually going on the second half of the sentence I quoted - I thought he meant it used a PLL on the USB clock to derive some kind of audio clock.
If synchronous transfer uses a local oscillator, then how can the playback software have any effect on jitter?
Ah hold on, I guess I misworded it. By "local oscillator" I meant an oscillator that runs on its own and dictates the whole system's timing, rather than a VCXO slaved to a PLL connected to the USB input.
The output jitter of a PLL system is always going to depend on the jitter of the signal coming into it, so it seems ridiculous to me to design an audio interface that gets its clock from the USB bus. It's not like PCI sound cards derive their audio clocks from the PCI bus! They have their own oscillators on them, and when their buffer is running low on audio data they request more from the host machine over the bus. Anything else would just be madness.
Until this thread came up I was happily assuming all USB audio devices worked the same way as PCI ones in that sense. Can anyone explain why they don't?
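For what it's worth, here is a rough sketch of that "pull" model (class names and thresholds are my own invention, not any real driver API): the card runs off its own oscillator and simply asks the host for more samples whenever its buffer drops below a low-water mark.

```python
from collections import deque

class Card:
    """Toy model of a sound card with its own oscillator (names invented)."""
    LOW_WATER = 512     # ask the host for more once fewer samples than this remain
    REFILL = 2048       # samples delivered per host request

    def __init__(self, host_fetch):
        self.buffer = deque()
        self.host_fetch = host_fetch    # stands in for an interrupt/DMA request

    def clock_tick(self):
        """Called once per sample period by the card's local oscillator."""
        if len(self.buffer) < self.LOW_WATER:
            self.buffer.extend(self.host_fetch(self.REFILL))
        return self.buffer.popleft() if self.buffer else 0.0   # 0.0 only on underrun

# A dummy host that always has data ready; one second of output at 48 kHz.
card = Card(host_fetch=lambda n: [0.0] * n)
samples = [card.clock_tick() for _ in range(48000)]
```

The timing of the host's responses never matters here as long as it answers before the buffer empties; the card's oscillator alone sets the output rate.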
The USB audio profile is unidirectional isochronous transport. If a packet doesn't arrive or is corrupted, it is not retransmitted. If the data doesn't show up in time, the receiver just waits patiently, etc. FireWire devices and USB-based audio devices that do not use the audio profile do not suffer from these constraints.
Thinking through how XXHE might be manipulating the playback without manipulating the bits, the thing that keeps coming to mind is intentionally starving the USB receiver so the DAC doesn't get any data at certain points. That would be odd and would seem really dependent on the USB DAC implementation (and even using a USB DAC). It seems unlikely.
It really feels like there is some significant misinformation here.
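Going back to the isochronous point a couple of posts up, here is a toy illustration (purely hypothetical handling; a real receiver might mute or interpolate instead) of what "never retransmitted" forces the receiver to do: cope locally with whatever is missing, because the host is never asked again.

```python
def receive_isochronous(frames, samples_per_packet=4):
    """Toy USB-audio receiver: None marks a lost or corrupted packet."""
    out, last = [], 0.0
    for packet in frames:
        if packet is None:
            # No retransmission possible: hold the previous sample value
            # for the duration of the missing packet.
            out.extend([last] * samples_per_packet)
        else:
            out.extend(packet)
            last = packet[-1]
    return out

print(receive_isochronous([[0.1, 0.2, 0.3, 0.4], None, [0.5, 0.6, 0.7, 0.8]]))
```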
b-square said:
It really feels like there is some significant misinformation here.
He doesn't seem to give direct answers. Is he having trouble explaining his technology or is he evading?
Hahaha, no ... I was without power most of the day yesterday.
You guys certainly don't allow someone (like me) to keep some things for himself, do you ? IMO if this were blahblah only (I mean, without real life proof) it would be another matter.
For me (and please appreciate that) all is so much in the beginnings of how (digital) sound should be, that I myself really am not ready to explain something that could be plain wrong (for the workout and for the theories). If you had followed the development (which has been public since May 2007 over 53 different versions, of which 8 or so with explicit sound changes, most for the better, some for the worse), you'd know that not everything is under (my) control ... yet. Since some are ready to flame anyway, I don't see much reason to get flamed for wrong theories.
And about the latter, I'm just honest.
Actually, as honest as how everything is presented at phasure, including the upload of the first 0.9t version, which was so different for SQ that I was afraid to leave it up for longer than a week, plus over two weeks of listening for what could be wrong.
Most often (and I just did it again over at phasure) it will be *me* stating that SQ has become worse for such and such a reason, which would be the opposite of being commercial about selling something, right? The whole point is, once we are working together on better playback (which is really my objective!), there will be no flaming, just me saying that I failed (when something changed for the worse).
I am certainly not asking you to be gentle, but I could maybe ask you to join the other guys who are open to this, and in the end help with developing better playback means. In that respect this is not much different from open source development, with me just doing what you ask. And I don't think there is proof of things not working out like that, so far.
If you are satisfied with the playback you currently have, there is really no reason for you to join or even try. I hope you can agree, though, that this by itself is no reason to say it is only placebo at work, to call it a fault in itself if someone has trouble explaining his technology, or to accuse him of evading.
🙂
Peter
Originally posted by XXHE
The sole fact that an encoded file is passing the Audio Engine of Vista just *makes* that bit perfect (an explicit action of the OS). This says nothing about PCM.
Guys,
I think I have to come back on this one.
In a thread over at AA I was discussing this with ThomasPf, and since I can't find the written proof of this from MS anymore, the only thing left to say from my side is that I was wrong.
This should imply that if you yourself aren't able to force Vista into Exclusive Mode (or use ASIO / KS for that matter), you are not able to play encoded data (e.g. DTS) into a receiver/decoder.
?
Edit : Of course the suggestions in this thread on encapsulating the lot in tricked header data could still work in Shared Mode, with the same constraints as were discussed earlier (the match between the file and the attaching of the endpoint device).
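For anyone who wants to check the Exclusive Mode part for themselves, here is a minimal sketch (it assumes the python-sounddevice package and its WasapiSettings option; check that package's docs, and note this is only an illustration of bypassing the shared-mode engine, not how XXHighEnd does it):

```python
import numpy as np
import sounddevice as sd   # assumed: python-sounddevice (PortAudio wrapper)

fs = 44100
t = np.arange(fs) / fs
tone = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)   # 1 s test tone

# Request a WASAPI exclusive-mode stream so the shared-mode audio engine
# (mixer, volume, resampler) never touches the samples on the way out.
wasapi_exclusive = sd.WasapiSettings(exclusive=True)
sd.play(tone, samplerate=fs, extra_settings=wasapi_exclusive, blocking=True)
```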
Not sure what the problem is abzug, but here's a post from ThomasPf to Peter (XXHE).
Posted by ThomasPf (A ) on March 5, 2008 at 09:24:00
In Reply to: You could be right, but ... posted by PeterSt on March 5, 2008 at 01:12:41:
Hi Peter,
In order for Vista to detect this there would have to be code in the audio stack that always parses digital data streams to detect the presence of encoded data. I do not believe that code exists. If you could send me the link to the Microsoft document that you saw this in, I would be grateful.
On all the systems I have tested this with, the behavior on Vista is as I described: DTS-encoded PCM data will not pass unmodified.
Cheers
Thomas
Could you explain who or what has been busted?
Mani.
It's a joke about AA. I meant he's busted posting in a terrible place like the Asylum.
I know that Thomas from another forum; I've had correspondence with him. You don't need to quote stuff here; I just found this funny, that's all.
So, to sum up:
1) It's bit perfect
2) PCM needs to be processed some special new way to sound best
3) No, it's bit perfect
4) If you were to, say, play a byte later in the stream than it was originally, that might be useful
5) No, really, it's bit perfect!
6) You're mucking around with the data based on some theories you have that you won't explain, and if we don't just accept that it is bit perfect and magically sounds better, then we are just mean old unbelievers.
7) Oh, and it is bit perfect
That XXHE causes things to sound different, even better, I am perfectly willing to believe. That it does so _without_ modifying the data, the sequence, or the jitter is not something I am willing to believe.
Spartacus said: Hello Peter, are you able to explain how one bit perfect player can sound different to another?
XXHE said: I can fairly say "no" because by now there's too much to it.
... software can influence jitter ...
And each DAC, no matter brand or type, can be influenced the same... No matter how USB asynchronously it is connected ...
The software which just *is* under my control is the core audio engine I wrote (referred to as XXEngine3), and one of its means to influence "jitter" is its Q1 slider.
Please take from me : *every* line of software code influences the sound.
... because really everything matters...
Things are as fragile as can be and in fact completely stupid to be so. But they are.
b-square said: That XXHE causes things to sound different, even better, I am perfectly willing to believe. That it does so _without_ modifying the data, the sequence, or the jitter is not something I am willing to believe.
XXHE is obviously stating that XXHighEnd is modifying the jitter, but not modifying the data... no?
He seems to suggest that things are too complicated for him to explain, having now been through 53 different versions of the software with inconsistent results (some for the better, some for the worse):
XXHE said: For me (and please appreciate that) all is so much in the beginnings of how (digital) sound should be, that I myself really am not ready to explain something that could be plain wrong (for the workout and for the theories).
Mani.
If the DAC is driven by a local clock, jitter introduced before that point won't matter. Guess I'll just have to do call tracing on a machine running XXHE to see what is actually going on.
b-square said: If the DAC is driven by a local clock, jitter introduced before that point won't matter. Guess I'll just have to do call tracing on a machine running XXHE to see what is actually going on.
I think most sound cards use a local oscillator as well.
I can imagine software making a difference in sound for non-asynchronous cards (i.e. most USB cards; I do not know about FireWire). As input jitter will always propagate to some extent through the PLL, low/constant latency and a constant pace of data delivery may bring improvements. It is basically what user soundcheck (using a USB card) is experiencing in the Linux thread http://www.diyaudio.com/forums/showthread.php?s=&threadid=93315&perpage=25&pagenumber=1 and describing in his wiki http://www.diyaudio.com/wiki/index.php?page=LINUX+Audio
But for PCI or asynchronous USB cards it is only about bit perfection (all the way down to the card, of course) and timely delivery to avoid dropouts.
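To put a rough number on "input jitter will always propagate to some extent through the PLL", here is a small simulation that treats the PLL as a first-order low-pass on the incoming phase error (loop bandwidth and jitter figures are invented): the loop attenuates the incoming jitter but never removes it, and anything inside the loop bandwidth comes through largely untouched.

```python
import math
import random

fs = 44100.0                      # update rate of the toy model (per audio sample)
loop_bw_hz = 100.0                # assumed PLL loop bandwidth
alpha = 1 - math.exp(-2 * math.pi * loop_bw_hz / fs)   # first-order smoothing factor

in_rms_ns = 5.0                   # pretend incoming phase jitter, ns RMS (white)
acc, acc_sq, n = 0.0, 0.0, 200000
for _ in range(n):
    noise = random.gauss(0.0, in_rms_ns)
    acc += alpha * (noise - acc)  # the recovered clock tracks the jittery input
    acc_sq += acc * acc

out_rms_ns = (acc_sq / n) ** 0.5
print(f"input {in_rms_ns:.1f} ns RMS -> output about {out_rms_ns:.2f} ns RMS")
```

Narrowing the loop bandwidth shrinks that number but never to zero, which is the point of the post above: a steadier input helps a slaved clock, while a card with its own free-running oscillator sidesteps the issue.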