Is our recording technology outdated?

Anecdote: I took part in a single-blind test of an FM MPX filter. It cuts off at around 15 kHz at more than 80 dB/octave, extremely steep with wild phase gyrations.
We were listening on headphones through an FM multiplex unit that could be bypassed with a relay, and we could switch between the two positions with a small wired remote as often as we wanted. We did not know which position was which.
Five participants: one said he could not hear a difference; the other four (including me) all preferred the filter position. Go figure.

Jan
 
What about the recording microphone? Is that digital or analogue? As far as I know, although I am open to surprises, there are no purely digital microphones. By the latter, I mean a diaphragm which responds to sound and a sensor translating the instantaneous diaphragm position directly into digital form. A laser, a chip with fast light sensors, and a diaphragm carrying a small curved prism can be arranged in such a way that the laser is reflected from the curved side of the prism and falls on a particular sensor according to the diaphragm's position.

However, it seems the least complicated and most economical way is to use an ANALOGUE microphone and an A/D converter.
 
There are two things wrong here. First, a digitized signal doesn't look like that; it's just a drawing convention. A common fallacy.
Second, hearing is intrinsically a digital process: hair cells firing pulses that vary with intensity and frequency, a mix between PPM and PAM. You can't get much more digital than that.
If you start from a wrong understanding, you arrive at a wrong conclusion.
Jan
First, the drawing isn't to be taken literally.... it is just there for a basic sense of understanding, so don't be so harsh as to dismiss it.
Secondly, I'm not an ear doctor, nor do I understand the so-called "intricate" workings of human hearing.
However, perhaps conveniently missing from your comment is the plain and known fact that sound waves transmitted through the air are analog in nature, not digital.
And that is how we hear sounds.
And that is the most important thing to take into account.

There are no "digital" speakers.... period.
 
Of course, but once they get to the inner ear, things change quite a bit, which is what Jan is pointing out.
The hearing signal path might be compared to a microphone and preamps that are all analog - being fed into an ADC.
So whatever the source material is.... and whatever happens in the human ear..... the link is always through the air.
Unless, of course, a person has an implanted device of some sort.
 
@wiseoldtech and @MarcelvdG , it's important to understand and consider that a properly reconstructed digital signal is analog. The reconstruction removes the discreteness in time. And assuming the signal has been properly dithered (very hard not to do these days), the "discreteness in level" (i.e., quantization error) is completely removed and replaced with good old analog-style noise.
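To make that point concrete, here is a small numeric sketch of my own (an illustration, not anything from this thread): quantize a sine to 16 bits with and without TPDF (triangular) dither of ±1 LSB. Without dither the error is a deterministic function of the signal, i.e. distortion; with TPDF dither the total error power rises to q²/4 but behaves like signal-independent noise, uncorrelated with the input.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = 0.5 * np.sin(2 * np.pi * 997 * np.arange(n) / 48_000)  # test tone

q = 2.0 ** -15  # one 16-bit step for a +-1 full-scale range

def quantize(sig, dither):
    # TPDF dither: difference of two uniforms, triangular over (-q, q)
    d = q * (rng.random(n) - rng.random(n)) if dither else 0.0
    return np.round((sig + d) / q) * q

err_plain = quantize(x, False) - x  # deterministic quantization error
err_tpdf  = quantize(x, True) - x   # noise-like error

print(np.var(err_plain) / (q * q / 12))     # close to 1: classic q^2/12
print(np.var(err_tpdf) / (q * q / 4))       # close to 1: q^2/12 + q^2/6
print(abs(np.corrcoef(x, err_tpdf)[0, 1]))  # near 0: uncorrelated with signal
```

The 3 dB noise penalty of the dither is the price paid for decorrelating the error from the program material.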
 
a properly reconstructed digital signal is analog
Yes, and from what I've read recently, the audio signal is in a normal analog format once it gets into the brain. At least it's been detected as an analog signal there. But the inner ear is a sort of strange way to detect it, compared to what we do in our recordings.
 
a properly reconstructed digital signal is analog... the "discreteness in level" (i.e., quantization error) is completely removed and replaced with good old analog-style noise.

Absolutely, a well-designed digital-to-analogue converter converts the digital signal to analogue 😉 I was only commenting on the stuff about brains; I think they are not digital according to the definition that I know.

A properly dithered quantization error is not quite the same as additive noise, by the way. In fact there is a thread about that: https://www.diyaudio.com/community/threads/high-order-dither-listening-test.313257/ Not that it makes much of a practical difference: Mooly and PMA disliked the test tracks with additive noise about as much as those with dithered quantization with a similar probability distribution.
 
Regarding stereo recordings: I used to record my partner's choir when she still had one. The recordings were made with two AKG C900 cardioid microphones in an ORTF set-up (17 cm spacing, 110 degrees between the axes), a home-made microphone preamplifier and a Fostex FR2-LE field memory recorder, at only 44.1 kHz sample rate because the choir members wanted the recordings on CD-R in audio CD format. (That is, the Fostex recorded in 44.1 kHz, 24-bit format and I would later round that to 44.1 kHz, 16 bit using triangular dither.)
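That 24-bit to 16-bit rounding step could be sketched as follows (a hypothetical implementation of mine, not the tool actually used): drop the bottom 8 bits after adding ±1 LSB (of the 16-bit scale) triangular dither, expressed in 24-bit counts.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_16bit(samples_24: np.ndarray) -> np.ndarray:
    """Round 24-bit integer samples to 16 bit with +-1 LSB TPDF dither."""
    step = 256  # 2**(24-16): one 16-bit LSB expressed in 24-bit counts
    # Difference of two uniform integers gives a triangular PDF over (-step, step)
    d = rng.integers(0, step, samples_24.shape) - rng.integers(0, step, samples_24.shape)
    out = np.round((samples_24 + d) / step).astype(np.int64)
    return np.clip(out, -32768, 32767).astype(np.int16)

x24 = rng.integers(-2**23, 2**23, 1000)  # stand-in for 24-bit audio
x16 = to_16bit(x24)
```

The dither is added before the division, so the rounding decision itself is randomized rather than the output merely having noise overlaid on it.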

The recordings sounded remarkably realistic, but the reverberation always seemed stronger than in reality. I guess that's because you hear it all come from the front with conventional stereo.
 
...assuming the signal has been properly dithered...

Proper dither is one factor, although there may be some disagreement as to which dither algorithm is to be preferred for which use (say, for final mastering versus intermediate processing). Other factors may include anti-alias filtering, clock jitter (phase noise), digital filter quality (computational expense), Vref quality, analog processing stages, etc. Lots of things can potentially cause problems.
 
As far as I know, neurons encode signals in firing rates, but they don't only fire just after the edges of some clock signal, and the firing rate doesn't change in multiples of some step or other. In that sense a neural signal is more similar to FM (which is usually considered analogue) than to digital. Mind you, I'm no neurologist.
 
The recordings were made with two AKG C900 cardioid microphones in an ORTF set-up... The recordings sounded remarkably realistic, but the reverberation always seemed stronger than in reality.
One voice, one microphone... mono-channel recording, panned later after processing. Reverberation can be added from different mics... a complex choice...
 
The ear and the listening process are irrelevant to the topic. The question was whether we need another recording technology now that digital has arrived. "Recording technology" needs to be broken down into mic/channel configuration and information storage format. Most here seem to agree that the former has no bearing on the latter; i.e., the mic/channel configuration may have been in need of improvement long before digital recording entered the scene. I believe so: two-channel stereo is such a limitation, and there also needs to be a way to subtract the acoustics of the reproduction environment from those recorded in the phonogram. This means the acoustics at the recording location need to be characterised and made available as metadata in the media file. As Marcel writes, at least 4 channels are needed to fully reproduce a pressure wave. This would not be practically possible in an analogue system (or only with great difficulty and lower performance), so hey! to digital and all its future possibilities ;-)

//
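A hedged sketch of the "at least 4 channels" idea: first-order Ambisonics B-format stores the pressure signal W plus three directional components X, Y, Z, four channels that describe the sound field at a point independently of any speaker layout. The encoding below follows the traditional B-format convention (W attenuated by 3 dB); treat it as an illustration of the channel count, not a complete recording system.

```python
import numpy as np

def encode_b_format(mono, azimuth_deg, elevation_deg):
    """Encode a mono source at (azimuth, elevation) into W, X, Y, Z."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono * (1 / np.sqrt(2))          # pressure (omni), -3 dB convention
    x = mono * np.cos(az) * np.cos(el)   # front-back component
    y = mono * np.sin(az) * np.cos(el)   # left-right component
    z = mono * np.sin(el)                # up-down component
    return np.stack([w, x, y, z])        # shape: (4, num_samples)

# Example: a 440 Hz tone placed hard left (azimuth 90 degrees)
sig = np.sin(2 * np.pi * 440 * np.arange(4800) / 48_000)
bfmt = encode_b_format(sig, azimuth_deg=90, elevation_deg=0)
```

A decoder then derives speaker feeds for whatever layout is at hand from these four channels, which is exactly the kind of separation between recorded field and reproduction room that the post argues for.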
 
...the audio signal is in a normal analog format once it gets into the brain...
It is rather perceived as something like an FFT of the analog signal: our brain sees pictures of the composition of sounds, not the wriggling lines we observe on a scope.
 
The never-ending argument over 1s and 0s as opposed to analog....... and whether one is better.....
Sure, there are "apparent" differences.
But we hear and speak analog sound naturally; we are born with that ability.
You choose your own poison.


The main thing with digital is error detection in encoding and transmission, but as digital speeds (and levels) increase, you are back to the same problems as analogue. So unless you sacrifice some of the rate to add framing and hashing with retransmission to detect errors, or add parity encoding, you gain nothing over analogue..
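The rate-for-reliability trade mentioned above can be shown with the simplest possible scheme (a toy of my own, not anything from a real link): append one even-parity bit per byte, spending 1/9 of the channel rate to make any single flipped bit detectable. Real links use CRCs plus retransmission (ARQ) or forward error correction, but the trade-off is the same.

```python
def add_parity(byte: int) -> int:
    """Return a 9-bit word: the byte plus an even-parity bit in the LSB."""
    parity = bin(byte).count("1") & 1  # 1 if the byte has an odd number of ones
    return (byte << 1) | parity

def check_parity(word: int) -> bool:
    """True if the 9-bit word has even overall parity (no error detected)."""
    return bin(word).count("1") % 2 == 0

w = add_parity(0b1011_0010)
assert check_parity(w)             # clean word passes
assert not check_parity(w ^ 0b10)  # any single flipped bit is caught
```

Note that two flipped bits cancel out and slip through undetected, which is why practical framing uses CRCs rather than a lone parity bit.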