Is Vista really capable of bit-perfect output?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
abzug, yeah, but it's a pretty safe bet to assume that the DTS stream simply isn't detected.

Most hardware decoders used in receivers do check each frame against its checksum. If the check fails, the analog output is muted for the length of the frame (32 ms for DD, iirc).
Not long ago, a user of my driver wrote to me that he had problems with skipping sound while playing AVIs. He complained that when he enabled passthrough mode in his player software, there would be a noticeable audio skip in some scenes, whereas the software decoder worked perfectly fine. I asked him to send me a sample scene for debugging, and when he did, the same thing occurred on my system, to my surprise. It took me half an hour to figure out that nothing was wrong with my driver - the fault was in the AC3 stream itself. When I played the sample file with mplayer using its AC3 software decoder, it reported that the CRC failed for one frame, but it still decoded the frame.
 
that's not causal but merely coincidental.
"Independent hardware manufacturers" (IHVs) can acquire a certificate for roughly 500 USD per year which enables them to sign their drivers on their own.
Also, there are programs which allow IHVs to slap a shiny "designed for windows" logo on their boxes - only these have strict requirements and only for these, drivers are tested in Microsoft's WHQL afaik. But none of the requirements demand audio processing.
 
The mixer in Vista uses floating-point math to interpolate when upsampling from 44.1 to 48 kHz. The Windows XP kmixer used integer math to interpolate, which is less accurate. In the case of Vista, if you use an upsampling 24-bit DAC, the mixer will be at most 1 bit off in 2^24, or 1 bit in 16,777,216. I wonder if that's worth worrying about?
 
Very interesting discussion. This got me to look up kmixer on the MSDN website. According to the MS developer documentation, the behavior of kmixer is fairly simple and deterministic: kmixer will only upsample/downsample if it has to. This means that, according to the documentation, if an audio player is sending 16-bit 44.1 kHz audio to a device that advertises it can handle that sample rate, kmixer will do nothing... Is there more to it that I don't understand?
 
glt, Vista's kmixer will convert the int16 samples to float32 and back in any case.

jim, the audio processing of Vista's kmixer is without any doubt much superior in quality - the only thing that annoys me is that it doesn't disable itself when it isn't needed. Also, automatic sample-rate switching isn't possible anymore in Vista.
Personally, this annoys the hell outta me, so I'll hang on to XP as long as I can, because it simply allows me to play my FLAC-compressed and tagged DTS files without going to the lengths of kernel streaming.
 
dogber1,

Are you referring to the fact that the Vista kmixer will convert the int16 values to float32 values and back? If so, it's pretty obvious that this conversion isn't lossy. A 32-bit float can represent all of the int16 values.

You can in fact pack up to 25-bit ints losslessly into float32 - I do this myself when programming VST plugins, since I prefer fixed-point processing but am forced to use float32 for the VST I/O. Works like a charm.

Of course, I'm assuming that the kmixer doesn't alter the float32 values after it has them, but from what you've said it sounds like it doesn't.
 
Originally posted by dogber1
isn't the float32 format in audio processing implicitly normalized to an absolute value of 1.0? this leads to roundoff errors and hence to conversion loss.

When float32 is used for audio on the PC, sample values do tend to be normalised to between +/-1. To convert to an integer format you have to scale the normalised floating-point values up to fill the range of the integral type you want to use: in this case, you just multiply all values by 2^15 and then cast to int16. Converting the other way involves casting to float32 and then dividing by 2^15. If the floating-point data isn't modified between conversions, the method is totally lossless.
 
some quick and dirty C code:
Code:
#include <stdio.h>

typedef signed short SInt16;
typedef float float32;

int main()
{
	SInt16 inSample, outSample;
	float32 fSample;
	int lossCnt = 0;
	printf("sizeof(SInt16)=%zu, sizeof(float32)=%zu\n", sizeof(SInt16), sizeof(float32));
	inSample = -32768;
	do {
		// int16 => float32
		if (inSample < 0)
			fSample = inSample / 32768.0;
		else
			fSample = inSample / 32767.0;
		// float32 => int16
		if (fSample < -1.0)
			fSample = -1.0;
		if (fSample > 1.0)
			fSample = 1.0;
		if (fSample < 0)
			outSample = (SInt16) fSample * 32768.0;
		else
			outSample = (SInt16) fSample * 32767.0;
		if (outSample != inSample)
			lossCnt++;
		inSample++;
	} while (inSample != (SInt16) -32768);
	printf("%d roundoff errors\n", lossCnt);
	printf("done.\n");
	return 0;
}
Compiling with gcc and running it returns:
Code:
sizeof(SInt16)=2, sizeof(float32)=4
65533 roundoff errors
done
Maybe I did something wrong when converting the SInt16 back and forth, but I borrowed most parts of the code from Apple's driver kit, and that's the way they do it.
 