Hi,
I agree with Conrad. Typical THD measurements hardly tell you anything about sound, and differences between pieces of electronics are comparatively small next to those between two speakers.
To a certain degree the speakers seem to be 'transparent'. Even with speakers exhibiting strong flaws (amplitude response, time behaviour, etc.) it may still be possible to distinguish the sonic fingerprint of widely differing electronics (say, a tube amp versus a class-D amp).
It's my impression, though, that dynamic speakers - at least the vast majority of them - are simply not good enough to distinguish well-executed electronics of similar build (say, two class-AB amps). They mask the differences. To evaluate those, one needs speaker systems with exceptional low-level resolution and good time and distortion behaviour, such as certain electrostats. Such systems can perform at a level where even tiny differences between electronics become obvious, because the 'masking threshold' is lower.
jauu
Calvin
A predator of course, I assume? 😀 OK, I'm a dinosaur.
Nelson Pass wrote an interesting piece on various forms of amplifier distortion a couple of years ago. He labeled intermodulation distortion "the elephant on the dance floor".
Given that an actual music signal contains many frequencies and changes rapidly with time, it is easy to speculate that various small nonlinearities in active components can stack up into all sorts of non-musical ("unnatural") intermodulation products. The ear/brain immediately picks these up as "wrong", even in the presence of loudspeaker colouring and distortion, yet they are so small that they hide somewhere in the noise floor of a single-frequency THD+N sweep.
http://www.passdiy.com/pdf/articles/distortion_feedback.pdf
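As a rough numerical illustration of that point (a made-up Python/NumPy sketch with arbitrary signal parameters, not something taken from the linked article): feed a two-tone signal through a weak cubic nonlinearity and the intermodulation products land at 2*f1 - f2 and 2*f2 - f1, frequencies that no single-tone harmonic sweep would ever probe.

```python
import numpy as np

fs = 48_000                       # sample rate (Hz), 1-second buffer
t = np.arange(fs) / fs

# Two-tone test signal: 1 kHz and 1.1 kHz, equal amplitude.
x = 0.5 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 1100 * t)

# A weak cubic nonlinearity standing in for an amplifier stage.
eps = 1e-3
y = x + eps * x**3

# Normalized magnitude spectrum; with a 1 s buffer the bins are 1 Hz wide.
spectrum = np.abs(np.fft.rfft(y)) / len(y)

def level_at(f_hz):
    """Spectrum magnitude at an exact bin frequency."""
    return spectrum[int(round(f_hz))]

# Odd-order IMD products appear at 2*f1 - f2 = 900 Hz and
# 2*f2 - f1 = 1200 Hz, right next to the fundamentals.
print(level_at(900), level_at(1200))
```

The products sit well below the fundamentals but right next to them in frequency, whereas a single-tone THD sweep at either frequency alone would show nothing at 900 or 1200 Hz.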
Otherwise, it's trivially easy to see that the deficiencies of, say, the MP3 codec aren't primarily in the realm of frequency response.
To be fair, that's software/signal processing, not really electronics. The whole idea is to change the signal, not to pass it on undistorted.
No, the whole idea is to reduce the bit rate, not to 'change the signal'. The signal changing/distortion that goes on is a side-effect of wanting a lower bit rate.
From my perspective, the distinction between 'software/signal processing' and electronics can't reasonably be maintained. Take noise shaping - that's signal processing - but it's built into the hardware of almost all of today's ADCs and DACs. Without it, oversampled ADCs would require astronomical sample rates to reach the SNRs they achieve.
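To make that concrete, here is a toy first-order error-feedback quantizer in Python (a hypothetical sketch with made-up parameters; real delta-sigma converters are far more sophisticated):

```python
import numpy as np

def quantize_flat(x, step):
    """Plain rounding quantizer: the error is spread roughly evenly
    across the whole band."""
    return np.round(x / step) * step

def quantize_shaped(x, step):
    """First-order error feedback: each sample's quantization error is
    fed back and subtracted from the next input, which high-pass
    filters the error (a 1 - z^-1 shaping of the noise)."""
    out = np.empty_like(x)
    err = 0.0
    for i, s in enumerate(x):
        v = s - err
        q = np.round(v / step) * step
        err = q - v
        out[i] = q
    return out

fs = 48_000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 500 * t)   # low-frequency test tone
step = 2 / 64                            # coarse, roughly 6-bit quantization

def inband_error_power(q, bins=3000):
    """Quantization-error power below 3 kHz (bins are 1 Hz wide here)."""
    e = np.fft.rfft(q - x)
    return float(np.sum(np.abs(e[:bins]) ** 2))

flat = inband_error_power(quantize_flat(x, step))
shaped = inband_error_power(quantize_shaped(x, step))
print(flat, shaped)   # the shaped quantizer leaves much less error in-band
```

The feedback loop doesn't reduce the total quantization error; it redistributes it toward high frequencies, which is exactly why an oversampled converter can then filter it out and keep the in-band SNR high.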
No, that's the goal. That goal is achieved by changing the bits, hopefully in such a way as to not be audible (though many of us would claim that the goal has been very imperfectly achieved).
The goal is achieved not primarily by 'changing the bits' but rather by comparing an (admittedly imperfect) perceptual model with the incoming signal to see what, if any, can be thrown away to achieve the targeted bit rate. Even Karlheinz Brandenburg does not claim it's perfectly transparent.
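A deliberately crude caricature of that comparison step, in Python (invented thresholds; a real psychoacoustic model is vastly more elaborate):

```python
import numpy as np

fs = 8_000
t = np.arange(fs) / fs
# A loud tone at 440 Hz plus a tone 66 dB quieter at 470 Hz.
x = np.sin(2 * np.pi * 440 * t) + 0.0005 * np.sin(2 * np.pi * 470 * t)

mag = np.abs(np.fft.rfft(x))

# Toy "masking" rule: any component more than 60 dB below the loudest
# one is assumed inaudible and not worth spending bits on.
threshold = mag.max() * 10 ** (-60 / 20)
kept = mag > threshold
print(int(kept.sum()), "of", len(mag), "components kept")
```

Here the quiet 470 Hz tone falls below the toy threshold and gets no bits at all, while a signal with nothing under the threshold would pass through with nothing thrown away - the 'if any' case.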
We're in heated agreement. 😀 The signal is intentionally altered.
We're in heated agreement.
Ah, at last I thought you'd seen the light. 😀 But no, because you then go on to say...
😀 The signal is intentionally altered.
And so we're back where I started. So let's see if you get it this time. The signal is, in most cases, altered to fit the constraint of the lower bit rate. But notice that I included the two words 'if any' - in a few cases there's no need to throw anything away. Perhaps the signal is so quiet that its bit rate doesn't exceed the target bit rate. Or perhaps it's loud but a very predictable signal, like a sine wave. In such cases there's no need to alter the signal, and hence it's false to say 'the signal is intentionally altered'.
Any clearer now?
Very. In general, the signal is altered. You point out that one can find some special cases where that is not so, but in general, there is a deliberate change in the signal.
Very. In general, the signal is altered.
Yes, in general it is altered. The vast majority of the time it is altered.
You point out that one can find some special cases where that is not so, but in general, there is a deliberate change in the signal.
Here I cannot grasp your meaning. The problem I'm having is with considering the words 'in general' in conjunction with 'deliberate'. Do you perchance think that the circuit itself is doing something 'deliberately'? If so, I suggest you are anthropomorphizing unnecessarily. I have been taking your word 'intentionally' to refer to the designers' intentions, but perhaps all along you've been attributing intention to a collection of dirty sand? Please clarify.
Do you perchance think that the circuit itself is doing something 'deliberately'?
Yes, in the same sense that a compressor or an equalizer is doing something "deliberately." The designer of the circuit (or the software algorithm) is presumably a human with intention.
Yes, in the same sense that a compressor or an equalizer is doing something "deliberately."
So then, in using the word 'deliberately' just of a circuit and not of a human, we are at last in agreement. The circuit does indeed, in general, 'deliberately' alter the sound. But this language is misleading as a circuit cannot (unlike a human) do anything accidentally - everything it does must be deliberate - even the adding of distortion and noise, which are unavoidable. So now we have this new, Humpty Dumpty meaning for the word, I'd go further and say that the circuit does not merely in general 'deliberately' alter the sound, it does so always, no exceptions. The dichotomy introduced by this new meaning for the word 'deliberate' is a false one, leading to confusion in the readership.
The designer of the circuit (or the software algorithm) is presumably a human with intention.
And their intention is not to change the sound; it's to reduce the bit rate. As I've been maintaining all along.
Note that I *always* used the term "signal," not "sound."
Yes, that's fine. My meaning isn't altered by rephrasing it with the word 'signal' instead of 'sound'. I agree they have different meanings, but I wasn't attempting to be too technical in this case - you were having trouble keeping up as it was. 😀
[...] The signal is, in most cases altered to fit the constraint of the lower bit rate. But notice that I included the two words 'if any' - in a few cases there's no need to throw anything away. [...]
But even then the process is not 100% lossless; the polyphase filter bank used in MP3 is not strictly reversible (i.e., no bit-exact reconstruction is possible). It wasn't designed to be, either.
Kenneth
Yes, thanks for that snippet. I've not been arguing that the codec is lossless - rather that it wasn't designed intentionally to change the signal. No electronics circuit can be truly lossless, only software can achieve that.
Well, the whole discussion is tomeyto/tomaato -- if you design a lossy codec, you know it's going to be lossy and change the signal -- if you design a lossless codec, you know it'll be lossless.
Personally, I now use exclusively FLAC encoding for my music server & USB DAC setup, and haven't looked back ever since. The ancient MP3 format is good for nothing these days, there are many better alternatives (both lossy and lossless). Saves me the expense of getting a fancy-schmanzy, audiophool-approved CD "transport", too!
Kenneth
Well, the whole discussion is tomeyto/tomaato -- if you design a lossy codec, you know it's going to be lossy and change the signal -- if you design a lossless codec, you know it'll be lossless.
Unsurprisingly, I don't recognise your characterisation of the discussion here. The codec in this instance isn't a piece of software in the first place; it's a piece of hardware, so 'lossless' cannot apply. The thread is, after all, about the sound of electronics vs transducers.
Secondly, the MP3 codec itself is not germane to my point - I just chose a piece of kit with a good enough frequency response and reasonable THD figures as an example of a circuit. That was to offer a counter-example to what I saw as a rather simplistic claim that frequency response and THD figures are all that's really useful to know about a piece of audio electronics in terms of its sound. (I paraphrase heavily; I recognise that CH's position is a little more nuanced than my very brief characterisation of it.)
Personally, I now use exclusively FLAC encoding for my music server & USB DAC setup, and haven't looked back ever since. The ancient MP3 format is good for nothing these days, there are many better alternatives (both lossy and lossless). Saves me the expense of getting a fancy-schmanzy, audiophool-approved CD "transport", too!
Pretty much parallels my own experience although I use SPDIF as I haven't yet got a good enough sounding USB DAC. But I'm moving in that direction. I'm currently using a $30 DVD player as my transport and getting excellent results.
Is this true of the direct mike feed in the studio or concert hall recording? Can I assume that every recording in my collection (save one or two exceptionally boring "audiophile" specials) will sound like complete crap regardless of the care and work I've put into designing and building my system?
YES. Even the exceptionally boring "audiophile" specials do NOT sound like a piano. The reason is simple.
(1) A piano is physically large; the sound radiates from a large soundboard and a large case, and MOST of what we hear when we listen to a real piano is reflections. The bass reflects off the floor before it reaches our ears and the highs reflect off the open lid. Stereo speakers are not large; even a 15" woofer is a small "point source" compared to a piano.
(2) Stereo at its best can only reproduce sound in a sweet spot. If you had a real piano you could walk in a circle around it while it was being played and it would sound like a piano at every point along your path. Try walking in a circle around a pair of stereo speakers. Because the piano is omnidirectional, most of what you hear is sound reflected from the room. With stereo, most of what you hear comes directly from the speakers.
In short, the reason none of your recordings sound like a real piano is that they are played through speakers.
I have a goal, or project I'm working on, to build a "piano speaker": a special-purpose audio system for ONLY solo piano. It will be omnidirectional, send the bass into the floor, reflect the highs off a lid that raises at an angle, and be about the size and shape of a very small baby grand. The goal is to be able to walk in a circle around the audio system and have it sound like a real piano at all points along the path.
The source will be a digital piano, not a recording. So what I'm really working on is a musical instrument - something that produces music, not re-produces it.
Interesting comments Chris! Yamaha and others make "digital" grand pianos that have speakers top and bottom. I've played these often while teaching lessons. The samples are lousy and obviously looped and the speakers sound like something from a Honda Accord but the way the sound is amplified around you feels more like a real piano than playing premium software like Synthogy Ivory (what you actually hear when listening to piano on movies and tv and pop recordings) through a pair of studio monitors.
The "Sound" of Electronics vs Transducers