John Curl's Blowtorch preamplifier part II

Status
Not open for further replies.
Bit for bit

On thumb drives I have files that binary file check the same but sound different on playback.

Dan, I hope you are not implying that with all else remaining identical two bit-for-bit identical files can sound different? This is literally the same argument as saying all else remaining constant, playing the same file twice could produce different outputs. There are too many devices in the way of a USB drive file comparison to actually place the blame on one part. Many USB memory devices have odd internal optimization routines and output buffering not to mention OS access issues which can cause impaired playback in some cases. The only way to compare files both digitally and audibly is to transfer them to the same HDD or SSD which is not fragmented, perform the bit-for-bit with an industrial quality utility such as Eclipse ImageVerify and then output to the same DAC in whatever AB/X style you desire.
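The bit-for-bit check itself is conceptually simple. Here is a minimal sketch of the idea in Python (ImageVerify's internals are not public, and the function names here are my own invention): hash both files in chunks so even multi-gigabyte images fit in memory, then compare the digests.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so arbitrarily large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def bit_for_bit_identical(path_a, path_b):
    """True only if every byte -- audio data *and* metadata -- matches."""
    return sha256_of(path_a) == sha256_of(path_b)
```

Note this compares whole files including headers and metadata; an industrial utility would additionally understand sector structure, padding, and ECC, which is exactly the gap between desktop B4B apps and the real thing.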

When you get the sound properly 'right' you might even get four women come up to the desk and compliment your efforts as happened a couple of weeks ago....
That reminds me of doing sound in country music bars here in NC...my wife got really disturbed watching drunk women grab my a$$ while doing sound, and I thought she might get in a fight with them...
Memories...speaking of which I rue todays news of Allan Holdsworth's passing...he was brilliant...his solos were like applying a high math function to a melody...

Cheers,

Howie
Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
www.wxyc.org
 
Member
Joined 2002
Paid Member
George, the best test is the decay of a pure piano note into silence; the un-dithered problem shows up when the signal is down to its last few bits. You can cheat a little by turning up the volume at the right moment. It becomes painfully obvious.
OK, thanks.
This one I think fulfills the criteria:
Original 24/96 file Mephisto Waltz Excerpt #3 (ending)
Steinway and Sons grand piano recording : High definition music | Audiophile music recordings | HD tracks by LessLoss

I downloaded and converted it to:
16/96 kHz non-dithered, dithered, and dithered with noise shaping
and to
16/44.1 kHz non-dithered, dithered, and dithered with noise shaping

You can test your low signal level discriminating capabilities.
Converted files will be on Dropbox for a few days

16/96KHz non dithered
https://www.dropbox.com/s/pcu3u8qykx11g8l/%281%29%20Mephisto-Listz-III_24_96%20to%2016_96%20non-dithered.wav?dl=0

16/96KHz dithered
https://www.dropbox.com/s/zhm3xdchud0xx2t/%282%29%20Mephisto-Listz-III_24_96%20to%2016-96%20dithered.wav?dl=0

16/96KHz dithered with noise shaping
https://www.dropbox.com/s/r3xjht0u4ay76sh/%283%29%20Mephisto-Listz-III_24_96%20to%2016-96%20dithered%20%40%20noise%20shaped.wav?dl=0

16/44.1KHz non dithered
https://www.dropbox.com/s/4r8w3n3z6c52j48/%284%29%20Mephisto-Listz-III_24_96%20to%2016-44k1%20non-dithered.wav?dl=0

16/44.1KHz dithered
https://www.dropbox.com/s/mkrj3evsqcncjq0/%285%29%20Mephisto-Listz-III_24_96%20to%2016-44k1%20dithered.wav?dl=0

16/44.1KHz dithered with noise shaping
https://www.dropbox.com/s/zstons4airaws65/%286%29%20Mephisto-Listz-III_24_96%20to%2016-44k1%20dithered%20%40%20noise%20shaped.wav?dl=0

Dither applied is Steinberg Wavelab “Internal, Noise type 1”
Noise shape applied is Steinberg Wavelab “Noise shaping 3”
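For anyone who wants to see what dither does to a fading signal before downloading the files, here is a toy sketch in pure Python. It is not Wavelab's algorithm: the TPDF (triangular) dither is a generic textbook choice, and the 4-bit depth is exaggerated so the effect is unmistakable. Without dither the quiet tail simply vanishes and the quantization error is perfectly correlated with the signal (i.e. distortion); with dither the error becomes uncorrelated noise.

```python
import math, random

def quantize(x, bits, dither=False, rng=random.Random(0)):
    """Quantize x in [-1, 1) to `bits` bits, optionally with +/-1 LSB TPDF dither."""
    step = 2.0 / (1 << bits)                      # one LSB
    d = (rng.random() - rng.random()) * step if dither else 0.0
    return round((x + d) / step) * step

# A decaying low-level sine, like the tail of a piano note, quantized to 4 bits.
n = 4096
sig = [0.05 * math.exp(-3 * i / n) * math.sin(2 * math.pi * 17 * i / n)
       for i in range(n)]
err_plain = [quantize(s, 4) - s for s in sig]
err_dith = [quantize(s, 4, dither=True) - s for s in sig]

def corr(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

# Truncation error tracks the signal (audible distortion); dithered error does not.
print(abs(corr(sig, err_plain)), abs(corr(sig, err_dith)))
```

The same mechanism operates at 16 bits, only much further down.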

George
 
B4B

Good to see you again Howie, BTW you are wasting your time on this one.

Hi Scott!
Happy Easter Monday (a legal holiday)! Maybe you are right, but I'll waste a few more minutes before shutting up:

For better or for worse, having performed tens of thousands of transfers from all different media to files and Red Book images, I have unfortunately had to perform both B4B and audible comparisons more than any human should have to... and I sure have heard differences, but never, and I repeat never, from a file played twice, or from two files which pass a B4B using industrial-quality verification (not desktop B4B apps, which can ignore many factors: differences due to CD Text, other metadata, padding, E3 ECC from Mode 1 ROMs, sector shifts, etc.)...

There were many (some famous but I will not defame) customers who claimed to hear a difference between their files and the finished Red Book CD, but very few had any idea of whether their CD player was performing correctly. I had to take our highly modified Sony ES player with E1, E2 and BLER error flag output and an Apogee DAC to their facility so they could hear the CD sound like the file, and see it was their CD player with the problem...crazily enough some wanted their CDs to sound different like the defective and interpolating CD player made them sound...it takes all kinds.

My office monitoring system to analyze potential subtle differences was an outboard Apogee DAC, Luxman amp and Stax HPs, the system was pretty revealing...

*now listening to Allan Holdsworth (RIP) - IOU...:)*

Howie

Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
WXYC Chapel Hill, North Carolina - 89.3 FM
 
George -- thanks, I'll give them a whirl. At the same time, I was thinking this morning it'd be interesting/useful to provide a training data set so people have a chance to familiarize themselves with the different dither techniques and then a blinded test set. I'll write more as I digest my thinking. :)
 
B4B

A last word from me on this file comparison issue:

There is a large difference between file players in how they buffer, handle metadata, arrange thread priority, etc. Depending on storage-device fragmentation, OS rev, memory bus congestion, background apps and CPU usage, there can be audible issues.

The most reliable player setup for most Win systems I have found is using Foobar2K with the JPLAY driver. The JPLAY driver gives highly configurable buffering and excellent OS integration regardless of player. If your streaming service or player needs something other than an ASIO driver, JPLAY offers the ASIOBridge.

I am outputting to an Auralic Vega D>A using Auralic's DS driver, so I am not using JPLAY myself, but I have set it up for others with excellent, reliable playback results, especially on systems which originally seemed to have issues with HD files. My brother, for example, had problems playing 24-bit/96 kHz files from USB on his laptop to a Dragonfly DAC. The JPLAY driver configured with a large buffer eliminated the stuttering, pausing and odd sound issues.

*I'm listening to Allan Holdsworth's (RIP) Road Games while finishing up a 3-cell Li charger PCB layout, party on...*

Cheers!
Howie

Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
WXYC Chapel Hill, North Carolina - 89.3 FM
 
Dither applied is Steinberg Wavelab “Internal, Noise type 1”
Noise shape applied is Steinberg Wavelab “Noise shaping 3”
George

George,
You are doing a great service for anyone who has not explored low-bit-level issues. I created something similar for a Sheffield demo disc back in the 1990s, when the issue of dither and dither spectrum was a hot topic. We recorded the same piece at consecutive 10 dB gain decrements until all you heard were the peaks, which exercised only the lowest few bits, with and without dither.

You are forgetting one thing, though: un-dithered CDs sound so much better and quieter...
...
...
when there is no music playing.

Cheers and thank you!

Howie
Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
WXYC Chapel Hill, North Carolina - 89.3 FM
 
Dan, I hope you are not implying that with all else remaining identical, two bit-for-bit identical files can sound different?
This is literally the same argument as saying all else remaining constant, playing the same file twice could produce different outputs.
Hi Howie, yes I am saying this.
I am saying things like transferring the same file twice to target flash memory device (Android Phone, Portable Music Player, USB Thumb Drive etc) using either of two different USB cables produces interestingly differing playback behaviours.
One USB cable is standard el cheapo OEM, the other USB cable is unique in that it has filtering mixture incorporated within each connector.
There are too many devices in the way of a USB drive file comparison to actually place the blame on one part. Many USB memory devices have odd internal optimization routines and output buffering not to mention OS access issues which can cause impaired playback in some cases.
Modern Flash memory devices are 3 bits per storage cell, ie 8 voltage levels are stored.
With modern fabrication/geometry sizing, storage cell voltage levels are dependent on single digit numbers of electrons.
This causes write voltage level noise dependency and is expected to be normal operation of such memories.
The workaround is inbuilt CRC checking/correction of read-out data to ensure correct data output.
Reducing system noise would be expected to incur lesser cell voltage level write errors and therefore less/different data correction intervention during read/output process.
The subjective result is smoother clearer playback akin to audible improvement result due to reduction in system clock jitter.
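For what it's worth, the correction stage Dan describes can be illustrated with a textbook Hamming(7,4) code. Real NAND controllers use far stronger BCH or LDPC codes; this toy only shows the principle that a single flipped cell bit is repaired before the data leaves the device, so the read-out data stays exact.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits as 7 bits with 3 parity bits (Hamming(7,4))."""
    d = [(nibble >> i) & 1 for i in range(4)]          # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]        # bit positions 1..7

def hamming74_decode(bits):
    """Correct up to one flipped bit, then return the 4 data bits."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)              # 1-based error position
    if syndrome:
        b[syndrome - 1] ^= 1                           # repair the bad "cell"
    return b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
```

Flip any one of the seven stored bits and `hamming74_decode` still returns the original nibble, which is why file-level data survives noisy cell writes.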
The only way to compare files both digitally and audibly is to transfer them to the same HDD or SSD which is not fragmented, perform the bit-for-bit with an industrial quality utility such as Eclipse ImageVerify and then output to the same DAC in whatever AB/X style you desire.
I have gone to the trouble of running low level format softwares on USB Thumb drives in order to return them to 'virgin' condition.
I also have used newly low level formatted HD partitions which give the same result as USB Thumb drives and self contained player devices such as Android phone, Android Pad etc.
File checking has returned the same data in all cases...presumably the only difference is the Time/Date stamp.

That reminds me of doing sound in country music bars here in NC...my wife got really disturbed watching drunk women grab my a$$ while doing sound, and I thought she might get in a fight with them...
Bob's Country Bunker?
Memories...speaking of which I rue todays news of Allan Holdsworth's passing...he was brilliant...his solos were like applying a high math function to a melody...
Do you have a favourite album that I should look up?

Dan.
 
AX tech editor
Joined 2002
Paid Member
https://askleo.com/why_or_how_do_files_become_corrupt/

P.S. The USB cable is never mentioned (as a source of corruption), but the flash device may be a problem in the long run.

It seems this guy never heard of checksums and/or error correction? 'The quick brown...' will NEVER become 'The slow brown...'. His story is quite misleading.

A file can become corrupt to the point that it cannot be loaded or read (and a few flipped bits can cause that), but not to the effect that it loads correctly with the contents changed.
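Jan's point is easy to demonstrate: flip any single bit and the checksum changes, so corruption that "loads correctly with the contents changed" would be caught. A quick sketch using CRC-32 as a stand-in (real storage stacks use assorted CRCs and ECCs, but the principle is the same):

```python
import zlib

original = b"The quick brown fox jumps over the lazy dog"
corrupted = bytearray(original)
corrupted[4] ^= 0x02                  # flip one bit: 'q' becomes 's'

crc_ok = zlib.crc32(original)
crc_bad = zlib.crc32(bytes(corrupted))
print(crc_ok == crc_bad)              # prints False: the flip never goes unnoticed
```

CRC-32 is guaranteed to detect every single-bit error, so a silent one-bit change in checked data is impossible, not merely unlikely.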

Jan
 
Member
Joined 2004
Paid Member
It seems this guy never heard of checksums and/or error correction? 'The quick brown...' will NEVER become 'The slow brown...'. His story is quite misleading.

A file can become corrupt to the point that it cannot be loaded or read (and a few bits fallen over can cause that) but not to the effect that it loads correctly with the contents changes.

Jan

If you are insecure about data on disks, the current read technology should ruin your sleep: PRML, Partial Response Maximum Likelihood. https://en.wikipedia.org/wiki/Partial-response_maximum-likelihood A techy way of saying guessing. But it works.
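The "guessing" can be sketched with a toy maximum-likelihood detector. Real drives run Viterbi detection over a trellis; this brute-force version (my own toy numbers, not real channel data) simply picks the bit sequence whose ideal partial-response output lies closest to the noisy read-back.

```python
from itertools import product

def pr_output(bits):
    """Ideal read-back of a PR1 (1+D) channel: each sample is bit[n] + bit[n-1]."""
    prev, out = 0, []
    for b in bits:
        out.append(b + prev)
        prev = b
    return out

def ml_detect(samples):
    """Maximum likelihood: the candidate sequence with least squared error wins."""
    n = len(samples)
    best = min(product([0, 1], repeat=n),
               key=lambda cand: sum((s - y) ** 2
                                    for s, y in zip(samples, pr_output(list(cand)))))
    return list(best)

written = [1, 0, 1, 1, 0, 1]
noisy = [y + dy for y, dy in zip(pr_output(written),
                                 [0.3, -0.2, 0.4, -0.3, 0.2, -0.4])]
print(ml_detect(noisy) == written)    # prints True
```

Even with noise approaching half a bit cell, the most likely sequence here is still the one that was written, which is why the "guessing" works in practice.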
 
B4B

Hi Dan!

Well, once again choice of words gets in the way of accurate communication. I believe we are in agreement on the possible causes of sonic differences; I have observed the phenomenon you are describing with USB memory, and the same sources of timing problems and data corruption can also occur with external USB HDDs or network sources, as I found out building a network storage system at AMI. The issue defeating verification in these situations is that common B4B utilities address the data differently than the way that same data is streamed to the D>A. In these cases, if you were somehow able to do a streaming B4B at the D>A input, you would see differences.

The point I was trying to make (poorly) was: two identical bitstreams in both data bits and timing going into a D>A will sound identical, all else being equal.

I believe contrasting the two above statements proves the second point, and that is why I added my comment about player configurations. It is critical to ensure the data is indeed not corrupt and buffered before being tortured back into audio. This principle is at the basis of any successful transfer which is the core of accurate media replication...and I drummed it into our mastering engineers relentlessly.

If you observe a difference in sound with only metadata changing, try this experiment: copy a track to the same HDD partition and compare. Do they sound the same? If yes, then just change the date/time stamp. If you now hear a difference, there is something wrong in your PC setup. Only a thorough analysis of the data with industrial data analysis tools (not PC based apps) as it is streamed will show the cause and that is above my pay grade. I can assure you in a properly set up system this does not occur. It is the basis of modern digital media replication.
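Howie's experiment is easy to script. A sketch (the file names are invented; `filecmp.cmp` with `shallow=False` forces an actual content comparison, and `os.utime` rewrites only the date/time stamp):

```python
import filecmp, os, shutil, tempfile

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "track.wav")
with open(src, "wb") as f:                      # stand-in for a real track
    f.write(os.urandom(65536))

copy = os.path.join(workdir, "track_copy.wav")
shutil.copyfile(src, copy)

# Step 1: the copy is byte-identical to the original.
print(filecmp.cmp(src, copy, shallow=False))    # prints True

# Step 2: change only the date/time stamp; the audio data is untouched.
os.utime(copy, (0, 0))
print(filecmp.cmp(src, copy, shallow=False))    # still prints True
```

If those two files then sound different on playback, the difference is in the playback chain, not in the data.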

Regarding the music of Allan Holdsworth, it is a subject of contention between the traditional melodic and more pattern oriented or if you will, math melodic camps of music appreciation. I think to understand his approach which is not key oriented but chord oriented, this video is good, and if you find his descriptions of chord patterns tedious, skip to 12:20, where he describes his soloing philosophy:
https://www.youtube.com/watch?v=wts2Mw6Nb5s
Maybe I am too easily bored, but I highly value artists who take risks...I mean this is funny to say in a forum dedicated to reproduction of recordings, but after I have heard a solo once, I want to hear what else that artist has to say musically, not the same thing over and over. The same approach to soloing as Allan's has been emphatically stated by Frank Zappa as well, here is a video where he echoes the same sentiment as Allan:
https://www.facebook.com/paul.dezelski/videos/1259695020711121/
I love his phrase: "...you get a piece of time and you get to decorate it..."
If the Allan Holdsworth video intrigues you, the two albums which are good intros to his music are I.O.U and Velvet Darkness.

Cheers!
Howie

Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
WXYC Chapel Hill, North Carolina - 89.3 FM
 
if you are insecure about data on disks the current read technology should ruin your sleep- PRML- Partial Response Maximum Likelihood. https://en.wikipedia.org/wiki/Partial-response_maximum-likelihood A techy way of saying guessing. But it works.

Yes, it may be 'guessing' about the bits, but not about the data! The data is safe even if the bits are not. And when the bits fail in overwhelming numbers, the data fails and this failure will be noted! It will not be silent.

It is misleading to imply that failing bits lead to failing data without alarm bells ringing. Even 40 years ago, when we were using paper tape, we were not capable of losing data without noticing it: we always used 'fail safe' combined with 'fail detection' (as is done today), making the data safe (from a backup, back then) and never copying failed data unnoticed into the next generation of backups.
 
B4B

A last word from me on this file comparison issue:
OK, I lied.
There is a large difference between file players in how they buffer, handle metadata, arrange thread priority, etc. Depending on storage-device fragmentation, OS rev, memory bus congestion, background apps and CPU usage, there can be audible issues.
I think the missing information explaining how a file could compare properly with another yet the two sound different lies in the way a PC handles the two scenarios. I am not a digital guy but have a passing familiarity with the process which I would LOVE the more experienced people here to correct and expound on.

When a B4B comparator is asked to compare two files, it requests enough memory space from the OS to load both files (or, if memory-limited, a specific subset of the data in each file), then compares the two memory contents, moving from file header to last bit, and reports the results. This process has handshaking, and the B4B program will wait to execute, if there is an interruption in data transfer, until all data is in memory. In this respect it is not comparing the files directly; it is comparing the data contained in the two files as it was transferred into memory. Given the largely stochastic nature of music data, the chances that the two files could be transferred with errors and still compare correctly are exceedingly small.
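That comparison loop might look roughly like this (a generic chunked compare, not any particular utility's internals). Note it reads through the OS exactly as a player would, so it verifies the data as delivered into memory:

```python
def compare_streams(path_a, path_b, chunk_size=65536):
    """Return None if the files are identical, else the offset of the first
    mismatching byte. Reads in chunks so file size does not matter."""
    offset = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            ca, cb = fa.read(chunk_size), fb.read(chunk_size)
            if ca != cb:
                # walk the chunk to locate the exact differing byte
                for i, (x, y) in enumerate(zip(ca, cb)):
                    if x != y:
                        return offset + i
                return offset + min(len(ca), len(cb))   # one file is shorter
            if not ca:                                  # both streams exhausted
                return None
            offset += len(ca)
```

Because any transfer error lands at some byte offset, a stochastic music file that arrives corrupted has essentially no chance of still matching its source chunk for chunk.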

When a PC player plays a file, it only allocates enough memory to accommodate the buffer size specified in the settings. The player's code handles pulling source data from the original file location and filling the buffer. It then passes the data into a second, much smaller buffer set up in memory to clock the data out at the bit rate specified in the file header. If either buffer runs out, it is up to the player to handle the exception: does it interpolate? Mute? Repeat the previous frame? Lock the player UI up? It depends on the player design. From the beginning of digital audio, a premium has been placed on a seamless musical experience, which is why minimal data correction was designed into the Red Book CD-AUDIO format. Combine this with Sony's silly reason for deciding on 74 minutes, and you get the CD as we know it. In general usage it is often not a lossless format; with a good CD player and a pristine CD it can be. The same applies to PC audio systems: they are optimized for an average listener, not for someone needing bit-perfect delivery to an external D>A. This is one of the reasons I was glad to see an after-market driver like JPLAY developed. That driver handles the buffering and streaming to the D>A as optimally as possible to ensure fidelity.
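The buffering trade-off can be simulated without any audio hardware. This deterministic toy (not JPLAY's actual scheme) models a producer that stalls for a few ticks, say while an antivirus scan hogs the bus, and a playback loop that must emit one frame per tick no matter what, muting when the FIFO runs dry:

```python
from collections import deque

def simulate_playback(frames, buffer_frames, stall_ticks):
    """Play `frames` through a FIFO of `buffer_frames`. The producer delivers one
    frame per tick except during `stall_ticks`. On underrun the player emits
    silence (None) instead of stopping, and counts the glitch."""
    buf = deque()
    src = iter(frames)
    # pre-roll: fill the buffer before playback starts
    while len(buf) < buffer_frames:
        nxt = next(src, None)
        if nxt is None:
            break
        buf.append(nxt)
    out, underruns = [], 0
    for tick in range(len(frames)):
        if tick not in stall_ticks and len(buf) < buffer_frames:
            nxt = next(src, None)
            if nxt is not None:
                buf.append(nxt)
        if buf:
            out.append(buf.popleft())
        else:
            out.append(None)          # mute: an audible glitch
            underruns += 1
    return out, underruns
```

With a 2-frame buffer and a 3-tick stall the player mutes twice; a 4-frame buffer rides out the same stall with zero underruns, which is exactly why a generously buffered driver behaves better on a busy machine.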

It doesn't take much imagination to think of a scenario where memory bus bandwidth or available CPU cycles are insufficient due to other processes running concurrently: antivirus scanners, the detestable scheduled program-update utilities, etc...

It is for this reason that the PCs used in replication plants for handling customer data are not general-purpose PCs. We set them up with clean OSes and ONLY the programs needed to perform their intended functions. They were not connected to the internet; they were on a dedicated internal intranet. In general we spec'ed very high-end CPUs and maxed out the memory with ECC memory, which made a large difference in avoiding random data errors when streaming at high rates to LBRs (Laser Beam Recorders) running at high multiples of the PCM clock rate.

Anyone with more knowledge or insight into this issue, please weigh in!

Cheers!
Howie

Howard Hoyt
CE - WXYC-FM 89.3
UNC Chapel Hill, NC
WXYC Chapel Hill, North Carolina - 89.3 FM
 
AX tech editor
Joined 2002
Paid Member
When B4B comparator is asked to compare two files, it requests enough memory space from the OS in which to load both files (or, if memory limited a specific subset of the data in the file), and then proceeds to compare the two memory contents, moving from file header to last bit and reports the results.

Howie

The notion that any app 'requests memory' from the OS is incorrect. The app requests data from an input stream, or two input streams, and writes to output streams via file handles. What goes on in the hardware/memory is totally obscured. The app has no knowledge of where the data comes from or goes to: memory, hard drive, CPU local storage, whatever.
In object-oriented software, the app creates an object like 'audio frame' and requests the OS to open a file stream and fill the object. If this is a file-comparison app, it creates two objects, fills each from its stream, and does a compare. It then either destroys the objects and creates two new ones, filling these with the next block or frame, or it overwrites the original objects. All this goes on dynamically and totally out of sight of the app. The app tells the OS what it wants, and the OS tells it where it put it.

Jan
 
Modern MMS (memory management software) resides on top of the HAL (hardware abstraction layer), where the MMU (memory management unit) is controlled; only the MMU is aware of the actual memory chips (or whatever the storage is). All of this is managed by the OS (operating system) and is totally opaque to the software. The software (even behind the object layer) sees memory that is totally virtualized; it cannot know where the memory is, what the memory is, or even whether there is any physical memory at all.
 