Is Vista really capable of bit-perfect output?

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
Yeah, unfortunately, converting asymmetrically seems to be a common way people do this. That's not lossless - in fact, it's quite ugly. People seem to find it important for some reason to make sure they use every floating-point value between -1 and +1. Since the integral formats are asymmetrical, direct conversion will clip if you ever have a floating-point value of +1. I might be missing something, but I just cannot see the point of all of this.

A better way (at least, this is how I do it) is to multiply or divide all values by 2^15, regardless of whether they are positive or negative. Since the audio data is always starting off in integral format (i.e. coming from the sound card) then there's no issue of potential clipping because there will never be any values equal to +1 in the incoming stream. Doing it this way is lossless, and is just much nicer. It also means you can omit the If statement and so neatens up the code a tiny bit too.
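A minimal sketch of the symmetric approach described above, assuming int16 samples and a single factor of 2^15 in both directions (function names are illustrative, not from the thread):

```c
#include <stdint.h>

/* Symmetric conversion: the same factor of 2^15 in both directions.
 * Because the source data is int16, the float values never reach +1.0,
 * so no clipping check (and no if-statement) is needed. */
static float int16_to_float32(int16_t s)
{
    return (float)s / 32768.0f;   /* divide by 2^15 */
}

static int16_t float32_to_int16(float f)
{
    return (int16_t)(f * 32768.0f);   /* multiply by 2^15 */
}
```

Dividing by an exact power of two only changes the float's exponent, never its significand, which is why the round trip is lossless.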

I raised this point with the portaudio dev team sometime last year - I think they might have actually made the change.
 
Your code example isn't doing it the way I described - I have coded this up and verified it: it works.

Also, it's quite obvious in theoretical terms how it works - you can put the integer value you want into the float32's mantissa, and set the float32's exponent so the absolute value comes out correct. Since the IEEE 754 float32 format has a 24-bit significand (23 stored bits plus an implicit leading 1), every 16-bit int is exactly representable, with room to spare.
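The claim is easy to check exhaustively, since there are only 65536 int16 values. A sketch of such a test harness (mine, not the poster's actual code):

```c
#include <stdint.h>

/* Exhaustively verify that int16 -> float32 -> int16, using a
 * symmetric factor of 2^15, round-trips every value exactly. */
int count_roundoff_errors(void)
{
    int errors = 0;
    for (int32_t v = -32768; v <= 32767; v++) {
        float f = (float)v / 32768.0f;
        int16_t back = (int16_t)(f * 32768.0f);
        if (back != (int16_t)v)
            errors++;
    }
    return errors;
}
```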

It genuinely does work.
 
You suggested that the conversion is lossless if a factor of 2^15 is used for all values. If that were true, then the values from -32768 to -1 would convert losslessly, since 2^15 is used on them. The zero sample is correct regardless of the factor, so there could be no more than 2^15-1 conversion errors, accounting for the values from 1 to 32767. But this isn't the case, thus either my code has errors or your assumption is false - quod erat demonstrandum.
 
Wingfeather said:
Your code example isn't doing it the way I described - I have coded this up and verified it: it works.

ahem... sorry guys, but have you seen the MS sources for the relevant parts of Vista? I guess you haven't... thus I don't quite see the point of this discussion.

If you want to be sure of what your software (and OS) do, you must go for an OpenSource system. Use Linux, and you can read (and modify, if you like to do so...) all the sources for the whole system. Use any closed-source system (such as windows or Mac), and you'll never know what it's doing behind your back.

You can only see it as a sealed black box... the only thing you can do is to try and check whether it's really bit-perfect or not by playing a stream through it to a digital out, capturing that and comparing it with the original.

Oh, of course there's no guarantee that your results will apply also to some slightly different version of the system and/or after the next "bug-fix" or service pack release...

BTW: someone was asking about how to compare wav files disregarding simple misalignments... check "shntool":

http://etree.org/shnutils/shntool/
 
Originally posted by dogber1
But this isn't the case, thus my code has errors or your assumption is false

Then I think your code has errors. I've explained why it works in theory (which is trivial), and I've done it myself in both MATLAB and C++ (which was verified both with a test harness and on an AP). It really, totally, truly does work.

I don't know what the error might be, though the following line looks a bit suspect:

outSample = (SInt16) fSample * 32768.0;

Are you sure that fSample isn't being cast to SInt16 before it's being multiplied by 32768.0? I'm not an expert in C casting and operator precedence by anyone's standards.
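A minimal sketch of the precedence issue (function names and the assumption |fSample| < 1 are mine, not from the thread):

```c
#include <stdint.h>
typedef int16_t SInt16;

/* In C, a cast binds tighter than '*', so the quoted line truncates
 * fSample to an integer BEFORE the multiply: for any |fSample| < 1
 * the cast yields 0 and the result is always 0. */
double buggy(float fSample)
{
    return (SInt16) fSample * 32768.0;   /* = ((SInt16)fSample) * 32768.0 */
}

/* Parenthesizing scales first, then truncates (assumes |fSample| < 1
 * so the cast stays in int16 range). */
double intended(float fSample)
{
    return (double)(SInt16)(fSample * 32768.0);
}
```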



Originally posted by UnixMan
...thus I don't quite see the point of this discussion.

True. I'm being pedantic. All I meant to say was that the conversion between int16 and float32 (and back) isn't lossy if it's done right. I won't take this thread off-course any further.
 
Are you sure that fSample isn't being cast to SInt16 before it's being multiplied by 32768.0? I'm not an expert in C casting and operator precedence by anyone's standards.
excellent point. after modifying the code a bit, I get
Code:
sizeof(SInt16)=2, sizeof(float32)=4 
0 roundoff errors 
done.
which proves your theory. So Vista's kmixer either processes the sound whether it's required or not, or the conversion is done improperly.
 
dogber1 said:

excellent point. after modifying the code a bit, I get
Code:
sizeof(SInt16)=2, sizeof(float32)=4 
0 roundoff errors 
done.
which proves your theory. So Vista's kmixer either processes the sound whether it's required or not, or the conversion is done improperly.

Check out this thread:

http://www.avsforum.com/avs-vb/showthread.php?t=713073&page=24

There's a lot of noise there, but also posts from some Microsoft coders. In particular JJ_0001.
 
I have to agree with unixman that the way out of this mess is using open source products.

Following these discussions, I get the feeling that with Linux the hardware gets reverse-engineered, whereas with Windows, people interested in the details have to guess at how the drivers and core system blocks work, in addition to analysing the hardware. Open-source drivers for Windows are very scarce, and big thanks go to their authors, who put incredible effort into analysing and guessing how the closed system works.

Sorry for the offtopic, no flame pls.
 
Just to recap, I tested whether or not I could get bit-perfect output from Vista by seeing if I could get a software player to output an HDCD-encoded wav file and have my DAC recognize it as such. (Though I accept that technically, this is not necessarily a water-tight test.)

I can get my DAC to recognize an HDCD-encoded signal (with Foobar+ASIO and XXHighEnd+Engine#3), but...

... I have to play around with some pots in the soundcard's mixer software in order to do so.

With the MOTU 896HD, no attenuation is required, but I have to shift the L/R panpots to their maximum positions.

With the RME FF800, I need an exact 1.0dB attenuation on the spdif output (attenuating by 0.9dB or 1.1dB switches the HDCD processing off).

Furthermore, I have noticed that for both soundcards, different firmware/driver updates require different levels of L/R panning and attenuation.

I find this really strange... and worrying (in terms of getting a bit-perfect output).

Does anyone know why I should have to do this? Does this prove that soundcards (my two at least) change the data before they send it out of the spdif output to the DAC?

Would love to hear your thoughts or comments on this.

Mani.
 
I think that it is highly unlikely you will be getting a bit-perfect output from your soundcard if you have the output slider set to anything other than 0dB. I've not used the newer RME cards (only the 96/8 PAD and the Hammerfall 9636 - neither of which have on-board mixers), but I do use Lynx cards which are essentially equivalent to the newer RME stuff, and I know that bit-perfect output with these cards is only ever attained with a specific set of mixer controls (i.e., dither off and all attenuation set to 0dB).

I can't say it conclusively because I don't even have a Fireface to test, but I would have to guess that your HDCD decoder is being tricked somehow.
 
Wingfeather said:
I think that it is highly unlikely you will be getting a bit-perfect output from your soundcard if you have the output slider set to anything other than 0dB.

... I would have to guess that your HDCD decoder is being tricked somehow.

Ordinarily, I would have to agree with you.

However, I'm not sure that it can explain what is actually happening:

I connect my transport directly to the DAC using an spdif cable. The DAC detects the HDCD signal no problem.

I then connect the same spdif cable to the soundcard and take the spdif output to the same input of the DAC I was using previously. The DAC does not detect the HDCD signal... unless I manipulate the soundcard's mixer.

As I stated earlier, the amount and type of manipulation I have to do depends on the version of the soundcard's firmware/driver.

Doesn't this prove that the HDCD decoder is not being tricked, but that the soundcard is actually changing the data depending on its firmware/driver?

Incidentally, only once the DAC detects an HDCD signal from the transport via the soundcard will it also detect an HDCD-encoded wav file from Foobar and XXHighEnd, provided the software does no manipulation of the data.

I still find all this weird... and troubling.

Phofman,

Do you know of any soundcards that have spdif pass-through and also have a wordclock input that I can slave to my DAC?

Mani.
 
Do you know of any soundcards that have spdif pass-through and also have a wordclock input that I can slave to my DAC?

If by wordclock you mean an SPDIF input providing the clock signal for the card, then it would be e.g. any envy24-based card equipped with an SPDIF receiver. The Envy24 chip itself does no signal modification (it has no DSP core); volume controls are handled by the attached codecs - logically none for SPDIF.

I would assume windows drivers have no reason to mess with data to the SPDIF output channel (SPDIF-passthrough). The linux driver definitely does not.
 
Mani,

That definitely sounds very strange. I think it's something you should talk to the guys at RME about, because RME kit is at the level where this sort of thing shouldn't happen, and to the best of my knowledge they're a very helpful company. I've actually measured the Lynx cards with an AP and I know them to be bit-perfect (with the obviously-appropriate settings) if all you're doing is routing the inputs to the outputs in the hardware - I'd definitely expect the Fireface to be the same.

If you want wordclock I/O via BNC in an internal card (as in pro-level cards), then stuff from RME (HDSP9632 with the WCM expansion board or the AES-32) or Lynx has it. But then, your Fireface should be in this list and it's not working right.

Contacting tech support gets my vote.
 
Hi - I just read this thread and downloaded XXhighend.

It took me less than 5 minutes of the demo version to be persuaded and purchase a copy.

I have a Sony Vaio notebook with a USB output to a twisted pair buffalo DAC.

Up until now foobar with asio was the best sound I could achieve.

This is clearly superior - some remaining rough edges have been removed - yet it leaves more fine detail rather than less.

The only small problem is occasional pauses in playback - any advice to correct this would be welcome.

This is the best audio playback I have heard anywhere :)

cheers

mike
 
Hey dogber,

What you call a scam - blindly, I'm sure - took me over 5000 hours to create. It is of course up to you to find that it all isn't worth a dime, but I am not sure it is up to you to shout out loud - wherever you see a possibility - that this is all a scam, without any justification.

Maybe it is time that you elaborate on it. I don't care. But then we can see where you are coming from. Thus far I can only smell some protectiveness towards your bit-perfect CMedia drivers; that is, I can't think of anything else in normal areas.

I can't recall I ever harmed you anywhere ...

Peter
 