Therefore the only 100% reliable method is loop the stream back and record it. Then compare the original with the recorded file.
But I'm telling the obvious, right ?
But I'm telling the obvious, right ?
That requires a card with S/PDIF in and otpion of no resampling or other processing of the incoming stream.
abzug, yeah, but it's a pretty safe bet to assume that the dts stream isn't detected.
most hardware decoders which are used in receivers do check the frame against its checksum. if it fails, the analog output is muted for the length of the frame (32ms for DD iirc).
not long ago, a user of my driver wrote me that he had problems with skipping sounds while playing AVIs. he complained that when he enabled the passthrough mode in his player software, there would be a noticable audio skip in some scenes, whereas the software decoder was working perfectly fine. I asked him to send me a sample scene for debugging, and when he did, the same thing occured on my system to my surprise. It took me half an hour to figure out that nothing was wrong with my driver, but in fact with the ac3 stream, and when I played the sample file with mplayer using its ac3 software decoder, it spat out that the CRC failed for one frame, but it still decoded the frame.
most hardware decoders which are used in receivers do check the frame against its checksum. if it fails, the analog output is muted for the length of the frame (32ms for DD iirc).
not long ago, a user of my driver wrote me that he had problems with skipping sounds while playing AVIs. he complained that when he enabled the passthrough mode in his player software, there would be a noticable audio skip in some scenes, whereas the software decoder was working perfectly fine. I asked him to send me a sample scene for debugging, and when he did, the same thing occured on my system to my surprise. It took me half an hour to figure out that nothing was wrong with my driver, but in fact with the ac3 stream, and when I played the sample file with mplayer using its ac3 software decoder, it spat out that the CRC failed for one frame, but it still decoded the frame.
XXHE said:Therefore the only 100% reliable method is loop the stream back and record it. Then compare the original with the recorded file.
But I'm telling the obvious, right ?
I've done that, too, to prove that the kmixer of XP and 2k is bitperfect, but that's much more of a hassle.
There are various "if"s tied into that one...dogber1 said:the kmixer of XP and 2k is bitperfect
...which I mentioned in length here: http://code.google.com/p/cmediadrivers/wiki/Bitperfectabzug said:
There are various "if"s tied into that one...
... while I proved that KMixer of XP just is not bit perfect (applying all the necessary). Not with normal signed drivers ...
E.g. with RME MME drivers, yes, then XP is bit perfect.
E.g. with RME MME drivers, yes, then XP is bit perfect.
driver signing has absolutely nothing to do whether a driver or even the kmixer is bitperfect or not.
It has to the sense of when RME (etc., you 😉) had to run all through that "sighning" program, it would not get through. The program would a.o. check whether you appropriately route via KMixer.
that's not causal but merely coincidental.
"Independent hardware manufacturers" (IHVs) can acquire a certificate for roughly 500 USD per year which enables them to sign their drivers on their own.
Also, there are programs which allow IHVs to slap a shiny "designed for windows" logo on their boxes - only these have strict requirements and only for these, drivers are tested in Microsoft's WHQL afaik. But none of the requirements demand audio processing.
"Independent hardware manufacturers" (IHVs) can acquire a certificate for roughly 500 USD per year which enables them to sign their drivers on their own.
Also, there are programs which allow IHVs to slap a shiny "designed for windows" logo on their boxes - only these have strict requirements and only for these, drivers are tested in Microsoft's WHQL afaik. But none of the requirements demand audio processing.
The mixer in Vista uses floating point math to interpolate the upsampling of 44.1 to 48khz. Windows XP kmixer used integer match to interpolate which is less accurate. In the case of Vista, if you use an upsampling 24 bit DAC, the mixer will be 1 bit off in (2 the the 24th power) or 1 bit in 16,777,216. I wonder if that's worth worrying about?
Very interesting discussion. This got me to look up kmixer in the MSDN website. According to the MS Developers documentation, the behavior of kmixer is fairly simple and deterministic. Kmixer will only upsample/downsample if it has to. This means that, according to the developers documentation, if an audio player is sending 16 bit 44.1Khz to a device that advertises it can handle such sample rate, kmixer will do nothing... Is there more to that that I don't understand?
glt, Vista's kmixer will convert the uint16-samples to float32 and back in any case.
jim, the audio processing of Vista's kmixer is without any doubt much superior with regard to quality - the only thing that annoys me that it doesn't disable itself when it isn't needed. Also, automatic sample rate switching isn't possible anymore with Vista.
Personally, this annoys the hell outta me, so I'll hang on to XP as long as I can because it simply allows me to play my flac compressed and tagged DTS files without going to the lengths of kernel streaming.
jim, the audio processing of Vista's kmixer is without any doubt much superior with regard to quality - the only thing that annoys me that it doesn't disable itself when it isn't needed. Also, automatic sample rate switching isn't possible anymore with Vista.
Personally, this annoys the hell outta me, so I'll hang on to XP as long as I can because it simply allows me to play my flac compressed and tagged DTS files without going to the lengths of kernel streaming.
dodber1, is that why you call Vista's kmixer not bit perfect? Because it converts to 32 bit float and back? Also, would it be trivial to modify the Windows USB driver to advertise itself to only handle 16 bit 44.1KHz data so to "force" kmixer to stay at 16/44.1? Would this be of any benefit?
I call Vista's kmixer not bitperfect because it simply isn't bitperfect - if an application plays a 16 bit stream with DirectSound/MME, it gets altered by the kmixer no matter if it is required or not.
the internal format of the kmixer can't be changed by a driver.
the internal format of the kmixer can't be changed by a driver.
dogber1,
Are you referring to the fact that the Vista kmixer will convert the int16 values to float32 values and back? If so, it's pretty obvious that this conversion isn't lossy. A 32-bit float can represent all of the int16 values.
You can in fact pack up to 25-bit ints losslessly into float32 - I do this myself when programming VST plugins, since I prefer fixed-point processing but am forced to use float32 for the VST I/O. Works like a charm.
Of course, I'm assuming that the kmixer doesn't alter the float32 values after it has them, but from what you've said it sounds like it doesn't.
Are you referring to the fact that the Vista kmixer will convert the int16 values to float32 values and back? If so, it's pretty obvious that this conversion isn't lossy. A 32-bit float can represent all of the int16 values.
You can in fact pack up to 25-bit ints losslessly into float32 - I do this myself when programming VST plugins, since I prefer fixed-point processing but am forced to use float32 for the VST I/O. Works like a charm.
Of course, I'm assuming that the kmixer doesn't alter the float32 values after it has them, but from what you've said it sounds like it doesn't.
isn't the float32 format in audio processing implicitly normalized to an absolute value of 1.0? this leads to roundoff errors and hence to conversion loss.
Originally posted by dogber1
isn't the float32 format in audio processing implicitly normalized to an absolute value of 1.0? this leads to roundoff errors and hence to conversion loss.
When float32 is used in audio on the PC, audio values do tend to be normalised to between +/-1. To convert to integer formats you have to scale the normalised floating-point values up to fill the range of integral type you want to use: in this case, you just multiply all values by 2^15 and then cast to int16. Converting the other way involves casting to float32 and then dividing by 2^15. If the floating-point data isn't modified between conversions, then the method is totally lossless.
some quick and dirty C code:
compiling with gcc and running the stuff returns:
maybe I did something wrong when converting the SInt16 forth and back, but I borrowed most parts of the code from Apple's driver kit and that's the way they do it.
Code:
#include <stdio.h>
typedef signed short SInt16;
typedef float float32;
int main()
{
SInt16 inSample,outSample;
float32 fSample;
int lossCnt = 0;
printf("sizeof(SInt16)=%d, sizeof(float32)=%d\n", sizeof(SInt16), sizeof(float32));
inSample = -32768;
do {
//int16 => float32
if (inSample < 0)
fSample = inSample / 32768.0;
else
fSample = inSample / 32767.0;
//float32 => int16
if (fSample < -1.0)
fSample = -1.0;
if (fSample > 1.0)
fSample = 1.0;
if (fSample < 0)
outSample = (SInt16) fSample * 32768.0;
else
outSample = (SInt16) fSample * 32767.0;
if (outSample != inSample)
lossCnt++;
inSample++;
} while (inSample != (SInt16) -32768);
printf("%d roundoff errors\n", lossCnt);
printf("done.\n");
return 0;
}
Code:
sizeof(SInt16)=2, sizeof(float32)=4
65533 roundoff errors
done
- Status
- Not open for further replies.
- Home
- Source & Line
- Digital Line Level
- Is Vista really capable of bit-perfect output?