John Curl's Blowtorch preamplifier part II

Do you have a reference for that? I've read the seminal papers from the IEEE and JAES and have not seen that mentioned. Or maybe you can suggest a test vehicle where the result will not be simply additive noise with no artifacts. The standard multitone signal? Unfortunately I have no idea where to get an actual recording of music that is not already self-dithered by mic and mixer noise.

Results that depend on flawed A/Ds or math shortcuts are also not useful.

EDIT - I just realized we need to clarify what we are talking about: I was talking about the dither process in isolation. Dither applied as part of an SRC, for instance, does not isolate one factor and is not really a valid experiment.

Unfortunately, at this moment I don't have the references at hand (I thought I had, but will have to dig deeper into the archive to retrieve them), and as I don't know exactly what the seminal JAES and IEEE papers contain (beyond knowing the publications by Lipshitz, Vanderkooy, Wannamaker, Wright, Gray and Stockham), I can't comment on whether they already covered/mentioned that point.

Although I remember that Lipshitz/Vanderkooy/Wannamaker at least already discussed that only subtractive dither is able to render the quantization error completely independent of the input signal. Non-subtractive dither (TPDF) only manages (simplifying) to make the error appear psychoacoustically like white noise.

The authors relied in that regard on listening experiments, and we know that there is usually no guarantee of correctness, although the results seem to confirm that the noise is uncolored/unmodulated.

But there is a caveat given the assumptions of the reasoning (no overload, a sampling rate quite high compared to the input signal).
There seems to be a dependence on the ratio between input frequency and sampling frequency (integer or not), and some authors found that stochastic dither achieved better results.
Use of noise shaping further complicates the optimization process.
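
For anyone who wants to poke at the distinction discussed above, here is a minimal numpy sketch, not taken from the papers: the test tone, amplitudes (in LSB units) and seed are arbitrary choices. It contrasts subtractive RPDF dither (dither removed again after quantization, total error uniform and input-independent) with non-subtractive TPDF dither (error left in, rendered noise-like with constant variance).

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, step=1.0):
    """Mid-tread uniform quantizer with the given step size."""
    return step * np.round(x / step)

# Low-level test tone, a fraction of one quantizer step in amplitude
n = np.arange(200_000)
x = 0.3 * np.sin(2 * np.pi * 997 / 48_000 * n)

# Subtractive RPDF dither: add uniform dither, quantize, subtract it again
d_sub = rng.uniform(-0.5, 0.5, x.size)
y_sub = quantize(x + d_sub) - d_sub

# Non-subtractive TPDF dither: sum of two independent RPDF sources, left in
d_tpdf = rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)
y_tpdf = quantize(x + d_tpdf)

for name, y in [("subtractive RPDF", y_sub), ("non-subtractive TPDF", y_tpdf)]:
    e = y - x                        # total error signal
    corr = np.corrcoef(e, x)[0, 1]   # should be close to zero in both cases
    print(f"{name}: error variance {e.var():.3f}, correlation with input {corr:+.4f}")
```

Expected behaviour of the sketch: the subtractive error variance sits near 1/12 of a step squared, the TPDF case near 1/4, and neither shows meaningful correlation with the input, which is consistent with the simplified picture above but of course says nothing about higher moments or audibility.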
 
<snip>

All the studies included in the meta-study I mentioned, IIRC about two dozen, were controlled in some fashion, and thus scientifically sound and relatively trustworthy. The meta-study on these studies was clear: well-trained listeners have a 60% chance of hearing a difference. This is important for Joe Consumer. What you as an individual think you hear in a setting heavily skewed towards the agenda of the presenters is really irrelevant for the rest of us.

Jan
That's what the meta-analysis roughly found overall; it might be that for today's hi-res (meaning more bits and more bandwidth) the estimate of the mean parameter would be even higher. And it is just an estimate of the group mean, so a lot of individuals will do much better or much worse, assuming that no hard physiological lower barrier exists.

For good reasons Reiss emphasized that much more experimental work is needed.

Not sure whether that is more important to Joe Consumer than the recommendation of someone whose hearing ability he trusts... :)
 
First of all, a filesystem is a database! It has metadata and indexes and blocks of data. Saying the directories and other metadata are stored in a different part of the disk is just wrong; inodes reside in disk blocks and are distributed over the disk.

Yes, a file system is a database of sorts. My understanding is that MS has had plans off and on to get rid of their existing file systems and replace them with some kind of more general relational database, but they could never get it to work acceptably.

Regarding where directory information is stored, it depends on the particular file system. In some cases it has been located near the inner part of the disk to speed up lookup performance. Obviously, with RAM-cached RAIDs, that doesn't really matter anymore. However, the file system directory is still something distinct from the file data itself.

For one consideration, my understanding is that for Windows, file types are part of filename extensions, so they are not stored in the file itself, whereas Macs have used forks inside the data file itself to contain that information. But yes, at some level an unformatted disk doesn't know how data will be stored on it.

When it comes to GFS or other more modern file systems, I don't know offhand if directory data and file data are stored in the same blocks. If they aren't, I would argue that they are still distinct for most purposes.
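
As a small illustration of the directory-versus-data distinction (a sketch only; the filename and dummy content are hypothetical), metadata such as size, timestamps and the inode number come from the directory/inode side of the file system, while the byte payload comes from the data blocks:

```python
import os, time

path = "example.wav"                       # hypothetical file for the demo
with open(path, "wb") as f:
    f.write(b"RIFF" + bytes(40))           # dummy content

st = os.stat(path)                         # metadata: directory entry / inode side
print("size:", st.st_size, "bytes")
print("modified:", time.ctime(st.st_mtime))
print("inode:", st.st_ino)

with open(path, "rb") as f:                # content: data-block side
    print("first bytes:", f.read(4))
```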
 
The magnitude of the error does not have anything to do with how few bits are flipped. At $1M a mask set, I would venture to say the chance is not taken. A piece of rubylith fell off one of my mask sets in 1980, the only occurrence of this problem I have ever experienced.

It does; the masks are subject to an automatic control process that takes precisely the design database as an input. The control process is set with a threshold for detecting errors.

If you as a designer didn't hear about such errors, it doesn't mean they don't exist; it means they were not reported, or were reported but considered non-critical.

Flying pieces of rubylith were a different story; I don't know, I was just born then. There is no human intervention in the mask-making process today; it is fully automated, from data input to the final control. At our foundry there is 1 (one) engineer in charge of the whole mask-making process, from data handling to final check, including defect control and DRC. That's for a production of $16 billion worth of chips every year. He enters the clean area perhaps once a year, if that.
 
AX tech editor
Joined 2002
Paid Member
Jan,
Actually, file names and time stamps are in the directory structure part of the disk, not where file data is stored.

My point being that whatever is written to or read from a disk or other stream is all subject to the same process including error correction in the widest sense. In this context it doesn't make a difference whether the bits are part of the file data or the file name or timestamp. On the lowest level, everything is written to and read from disk allocation blocks and that is where the data integrity stuff operates.

Jan
 
Yes, someone linked and quoted it a bit earlier in this thread...

At some point, the lack of good tests for any specific audible effect requires some degree of reliance on 'over-bounding': magnifying/exaggerating errors and then extrapolating.

Like listening for TPDF signal correlation at 8 bits, when the 'issue' is the claim that dither 'fails' at 16 bits, below the noise floor of nearly all recordings and home playback environments.


Wannamaker is surprisingly readable for a PhD thesis -
http://www.digitalsignallabs.com/phd.pdf

but for the lazy:
For audio signal processing purposes, there seems to be little point in rendering any moments of the total error other than the first and second independent of the input. Variations in higher moments are believed to be inaudible and this has been corroborated by a large number of psycho-acoustic tests conducted by the authors and others [13, 21]. These tests involved listening to a large variety of signals (sinusoids, sinusoidal chirps, slow ramps, various periodically switched inputs, piano and orchestral music, etc.) which had been requantized very coarsely (to 8 bits from 16) in order to render the requantization error essentially independent of low-level non-linearities in the digital-to-analogue conversion system through which the listening took place. In addition, the corresponding total error signals (output minus input) were used in listening tests in order to check for any audible dependences on the input. Using undithered quantizers resulted in clearly audible distortion and noise modulation in the output and error signals…
When 2RPDF [ triangle PDF ] dither was employed, no instance was found in which the error was audibly distinguishable from a steady white noise entirely unrelated with the input...
Admittedly, these tests were informal, and there remains a need for formal psychoacoustic tests of this sort involving many participants under carefully controlled conditions.

The use of non-subtractive, iid 2RPDF [ triangle PDF ] dither is recommended for most audio applications requiring multi-bit quantization or requantization operations, since this type of dither renders the power spectrum of the total error independent of the input, while incurring the minimum increase in error variance.
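
If anyone wants to reproduce something like the thesis test for themselves, here is a rough sketch, with a low-level 440 Hz sine standing in for program material and arbitrary output filenames: requantize 16-bit samples to 8 bits, with and without TPDF dither, and write both the output and the total error signal (output minus input) to wav files for auditioning.

```python
import wave
import numpy as np

rng = np.random.default_rng(1)
fs = 44_100
t = np.arange(5 * fs) / fs
# -40 dBFS sine, already quantized to 16-bit integer values
x16 = np.round(np.sin(2 * np.pi * 440 * t) * (2**15 - 1) * 10**(-40 / 20))

def requantize_to_8bit(x, dither=True):
    step = 256.0  # one 8-bit LSB expressed in 16-bit LSBs
    d = (rng.uniform(-0.5, 0.5, x.size) + rng.uniform(-0.5, 0.5, x.size)) * step if dither else 0.0
    return step * np.round((x + d) / step)

for name, dith in [("undithered", False), ("tpdf", True)]:
    y = requantize_to_8bit(x16, dith)
    err = y - x16                              # total error signal, as in the thesis tests
    for label, sig in [("output", y), ("error", err)]:
        with wave.open(f"{name}_{label}.wav", "wb") as w:
            w.setnchannels(1); w.setsampwidth(2); w.setframerate(fs)
            w.writeframes(np.clip(sig, -32768, 32767).astype("<i2").tobytes())
```

The undithered error should sound clearly tonal and tracks the input, while the TPDF error should sound like steady noise; this is only an informal single-signal check, nothing like the controlled test the thesis calls for.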
 
It does; the masks are subject to an automatic control process that takes precisely the design database as an input. The control process is set with a threshold for detecting errors.

I meant to say, take a valid database and randomly flip one bit. There is no way a priori to tell how much damage (if any) this does. I've tried it for fun: an .exe usually freezes or segfaults. Some .jpgs break, some have bad pixels; same with .pdfs, some flag a corrupt file, some don't. A .wav is usually fine (a pop of unknown amplitude) unless you hit the header.
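
For anyone who wants to repeat this experiment, a small Python sketch along these lines does the job; the filenames in the example are hypothetical:

```python
import random
import shutil

def flip_random_bit(src, dst, seed=None):
    """Copy src to dst, then flip one randomly chosen bit in the copy."""
    shutil.copyfile(src, dst)
    rng = random.Random(seed)
    with open(dst, "r+b") as f:
        data = bytearray(f.read())
        pos = rng.randrange(len(data))   # random byte (file must be non-empty)
        bit = rng.randrange(8)           # random bit within that byte
        data[pos] ^= 1 << bit
        f.seek(0)
        f.write(data)
    return pos, bit

# Example (hypothetical filenames): corrupt a copy, then open it in the usual application.
# pos, bit = flip_random_bit("track.wav", "track_flipped.wav")
# print(f"flipped bit {bit} of byte {pos}")
```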
 
Some .jpgs break, some have bad pixels; same with .pdfs, some flag a corrupt file, some don't.



I've experienced this with a backup of photographs: there were some strange visual anomalies in the damaged files, but they threw no errors. I also found the same damage in the backup. Because of such infrequent access, the issues went unnoticed for what was presumably a long time.

My assumption is that this could also hold true for audio files.

I also firmly believe in snapshots over duplicates, so you can roll back instead of pulling from a backup which may already contain copies of those errors. This holds true for user errors as well as corruption issues.
 
I meant to say, take a valid database and randomly flip one bit. There is no way a priori to tell how much damage (if any) this does. I've tried it for fun: an .exe usually freezes or segfaults. Some .jpgs break, some have bad pixels; same with .pdfs, some flag a corrupt file, some don't. A .wav is usually fine (a pop of unknown amplitude) unless you hit the header.

That is true, since the polygons in the mask data are always described in a vector format (some e-beam pattern generators may convert that to a raster/pixel format, though), so it's not about turning on/off pixels, but about altering coordinates, which is almost guaranteed to have a catastrophic effect.

The story with JPEG is the same as for any compressed data: change a bit and the decompressor can either generate a wrong output (which may or may not appear visibly altered) or get out of sync and generate junk at the output. The higher the compression, the higher the risk that altering a bit leads to junk. Because of the fundamental source coding theorem, entropy is a limit on the length we can use to losslessly code a file and, at this limit, any bit change is catastrophic for the decompressor output.

If you look at the vector format as a lossless compressed format of a pixel/bitmap image, then it's clear why the vector format is so sensitive to changes.
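
A quick way to see the compression point in action; this sketch uses zlib on a made-up repetitive "layout-like" text, so the payload and the choice of bit position are arbitrary:

```python
import zlib

# Highly compressible, repetitive stand-in data
payload = b"polygon 0 0 10 0 10 10 0 10 " * 400

def flip_bit(buf, pos, bit):
    b = bytearray(buf)
    b[pos] ^= 1 << bit
    return bytes(b)

# Flip a bit in the raw data: the damage stays local (one character changes).
raw_damaged = flip_bit(payload, len(payload) // 2, 3)
print("raw bytes differing:", sum(a != b for a, b in zip(payload, raw_damaged)))

# Flip a bit in the compressed stream: the decoder usually either raises an
# error or emits garbage from that point onward.
packed = zlib.compress(payload, 9)
packed_damaged = flip_bit(packed, len(packed) // 2, 3)
try:
    out = zlib.decompress(packed_damaged)
    print("decompressed, bytes differing:", sum(a != b for a, b in zip(payload, out)))
except zlib.error as e:
    print("decompressor gave up:", e)
```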
 
Member
Joined 2014
Paid Member
I have had a music file get bit corruption, but only one. I was testing a proof of concept for a music server using some ropey old hardware I had lying around. Only this one track had a problem. It still played, but with a small distorted segment. I copied the damaged track off, deleted it and put a good copy on again: same issue. Then the hardware decided it had had enough of life and died. Lesson learned, and I bought an HP microserver, which is the best £120 I have spent on musical enjoyment in a long time.

But I kept the corrupted file and one day will load it into Audacity to find out what has gone wrong. But that was operating hardware and software well outside their comfort zone, so the result was hardly surprising.
 
Member
Joined 2011
Paid Member
Harmon Kardon's R&D Budget is $400 million! WOW!

I just saw this tidbit in the July 2017 issue of Stereophile (below).

Beats By Dr. Dre must have an R&D Budget of BILLIONS! Same with Bose and Magico and Wilson Audio and Vandersteen and Constellation and all the other companies that are kicking Harmon Kardon's asszs in the marketplace. Sonya Vabeetch!


_
 

Attachments

  • Image1.png (688.7 KB)
I just saw this tidbit in the July 2017 issue of Stereophile (below).

Beats By Dr. Dre must have an R&D Budget of BILLIONS! Same with Bose and Magico and Wilson Audio and Vandersteen and Constellation and all the other companies that are kicking Harmon Kardon's asszs in the marketplace. Sonya Vabeetch!


_

Harman International, not Harmon Kardon. But it doesn't say what share of the budget that is. Total sales were last reported at 7 billion dollars, so a bit more than the usual ratio of 5%. Of course Samsung expects the automotive market to hit 100 billion in a bit under 10 years, so the lion's share probably goes to that market.
 
It looks like we are at an impasse again. '-) I can hardly follow what you guys are debating about, but then I'm an old guy who doesn't really like digital playback, even now.
However, I got my digital playback going over the weekend to preview a sampler Blu-ray recorded at 24-96K. The music samples are varied and interesting, BUT the sound quality is marginal, even annoying. Now, is this because my equipment is not good enough (for me)? No, because I started my listening session with some Chesky 24-96K recordings that sounded more than acceptable with my digital playback. What is it? I think it is in the console electronics, but I can't be sure. That is the only consistent factor in the equation for EVERY selection, and I can't get around it. Should I tell the record company? People usually do not appreciate 'feedback' like this, so I think I will just let it go.
 
However, I got my digital playback going over the weekend to preview a sampler Blu-ray recorded at 24-96K. The music samples are varied and interesting, BUT the sound quality is marginal, even annoying. Now, is this because my equipment is not good enough (for me)?

Yes, or at least quite possible. How are you getting from digital to analog? Using a Blu-ray player?
 