No. Jan opened this little fringe discussion.
just a random thought in my process so I can see what hole I am missing for further reading to begin with
So, did you bring that up as a place of interest worth exploring re: some statements of perceivable differences between digital and analogue?
The inner ear is actually a frequency-selective organ. Hair cells respond to specific frequencies, much like an array of tuning forks. In fact, cochlear implants (artificial substitutes) use several frequency channels. As far as I can remember, the hair cells have hairs of different stiffness to respond to different frequencies.
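Loosely, that filterbank behaviour can be sketched in code. Each "channel" below is a Goertzel detector tuned to one frequency; only the channel matching the input tone responds strongly. This is a toy illustration, not a model of real cochlear mechanics, and the channel tunings (220/440/880/1760 Hz) are arbitrary.

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Power of `samples` at `freq`, computed with the Goertzel algorithm."""
    k = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

rate = 8000
# An 800-sample 440 Hz test tone (an exact number of cycles at this rate).
tone = [math.sin(2 * math.pi * 440.0 * n / rate) for n in range(800)]
channels = [220.0, 440.0, 880.0, 1760.0]  # arbitrary "hair cell" tunings
powers = {f: goertzel_power(tone, rate, f) for f in channels}
loudest = max(powers, key=powers.get)
print(loudest)  # the 440 Hz channel dominates
```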
https://www.britannica.com/science/ear/Transmission-of-sound-within-the-inner-ear
Marcel, you are correct that hearing does not conform to the formal definition of digital. My point was to note that it is not a smooth analog process like an analog signal in our amplifiers and speakers. The hair cells generate electrical impulses that vary with intensity and frequency, with some hair cells responding only to certain frequencies. In that sense it looks like an FFT analyzer, although, again, probably not according to the formal definition.
I don't know how to make clear that the statement 'our hearing is an intrinsically analog process' gives the wrong idea.
Jan
It's definitely a very complicated process, with some almost linear and many very non-linear steps. Comparing the filtering effect of the basilar membrane to an FFT is also an oversimplification, because the neural firing patterns tend to synchronize to the waveforms on the part of the basilar membrane that excites the hair cell.
As I've sometimes mentioned, I prefer simplicity over complexity in things.
This includes my systems of course.
Less "juggling" of the primary sources, less manipulation, thus more "purity" as the end result.
A violin string, a drum, etc, when played, results in an analog sound wave.
The resulting sound fed from my speakers to my ears is also transmitted via analog.
With digital "conversion" in between, things become more juggled around, and more needs to be done to result in a satisfying "flavor" for enjoyment.
Of course with CDs this is mandatory; however, the general public seems to be attracted to the (natural) analog world.
People that have "grown up with" digital have started to discover the so-called "less than perfect" analog world and are gravitating to it.
And why is that?.... what's the draw?
It is rather perceived as an FFT of the analog format. Our brain sees pictures of the composition of sounds, not the wriggling lines we observe on a scope.
I'll try to find the research paper I read that showed electrical signals in the brain that are duplicates of the audio signal heard. The captures are noisy, but very identifiable.
This is an interesting observation. I understand that blind people can develop the ability to form mental pictures of their environment based on (reflected & direct) sound. From phones to pixels, so to say.
Jan
Together with the instantaneous voltages, an analogue signal also has a rate of voltage change at every sampled point. This is ignored when the original signal is reconstructed.
...This is ignored when the original signal is reconstructed.
How so? Are you thinking of a zero order hold without a reconstruction filter?
My observation is completely the other way around - everything is gravitating towards digital, and for the better, actually. What you call juggling is transparent, and what you miss is probably the distortion of the old, not-so-wise analog technology.
Of course everything is turning digital.
That's what marketing is all about - pleasing the consumer, making things "easy".... attractive.
A part of making the masses increasingly lazy, complacent, satisfied, all while contributing towards the benefit of corporations.
Long gone are those days of having to get up from the sofa to change the channel on the TV set, or adjust the volume of the stereo.
Now, only a finger on a remote is needed, or even to speak to it.
I used to record my partner's choir when she still had one. The recordings were made with two AKG C900 cardioid microphones in an ORTF set-up.
For choir I've often used the AKG C3000; it's a classic for choir. Also a cardioid mic.
The recordings sounded remarkably realistic, but the reverberation always seemed stronger than in reality.
Yes, that's typical. Ever since I started recording back in the 60s I've wondered why microphones pick up so much more of the room or venue than we hear with just our ears. A lot of thought and research has gone into this, but I don't know the answer. Using super- or hyper-cardioid pattern mics can work well. The Schoeps MK41, with its tight pattern, is well known to do a good job of "that's what it sounded like." Not a budget mic, though. There are other techniques outside the scope of this thread.
Would you count an FM radio signal
The last FM plant I built kept audio in the digital domain from the label's mastering stage until the transmitter's output stage. All processing was in the digital domain as well.
Hi,
ORTF is nice for big ensembles in big rooms, IMHO. In fact, IIRC, ORTF developed the principle for classical recording in their hall.
A coincident pair (XY) sounds drier and is often better suited when there is too much room. It can be used closer to the source too (I'm not sure, but I think the SRA range is more suited to that).
There are few radio stations running analog these days...
My point is that the smoothing of the regenerated signal is left to a filter. Why not a system of curve fitting, using software algorithms to fit curves to consecutive ordinates in the digital signal? I know, this is far more complicated compared with a D/A converter and a filter. Algorithms can analyse the digital stream and compute the instantaneous voltage/time to control a variable ramp generator.
Code:
digital stream ----> curve-fitting algorithms
               ----> controlled ramp generator + D/A converter
               ----> reconstructed analogue signal
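For what it's worth, the proposed "curve fitting plus controlled ramp generator" amounts, in its simplest form, to first-order (linear) interpolation between sample ordinates. A minimal software sketch of that idea; the sample values and the oversampling factor are made up for illustration:

```python
def ramp_reconstruct(samples, oversample=4):
    """Linearly interpolate (ramp) between consecutive sample ordinates,
    emitting `oversample` output points per input interval."""
    out = []
    for a, b in zip(samples, samples[1:]):
        slope = (b - a) / oversample  # the 'controlled ramp' rate
        for i in range(oversample):
            out.append(a + slope * i)
    out.append(samples[-1])
    return out

stream = [0.0, 1.0, 0.0, -1.0, 0.0]  # made-up digital stream
print(ramp_reconstruct(stream, oversample=2))
# -> [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0]
```

A straight ramp between samples is not the mathematically correct reconstruction, though; the result still contains images that a lowpass filter would have to remove.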
There are other techniques outside the scope of this thread.
Is it? For recordings intending to capture an original acoustic event, mic technique is arguably a make-or-break factor. The most innovation appears to be in the field - literally - of nature recordings. Mic arrays leveraging aspects of HRTF are common and can result in a spectacular combination of space and specificity without DSP. Kimber has his Jecklin disc system; beyond that, I'm not aware of any music recordings factoring in the complexities of a human head as the last transducer in the chain.
@edbarx There is only one mathematically correct way to shape the curve that connects the sample points: a sinc filter, an infinitely long one that would take infinite time to run. Therefore, for practical purposes, a windowed-sinc filter is used. The idea is to approximate an ideal brickwall lowpass filter; any lowpass filter that approximates a brickwall sufficiently well will do more or less the same thing. Typically, dac chips include a linear-phase LP filter intended to work well enough. If the dac designer wants to use a more computationally expensive filter, some dacs allow the digital LP filtering to be done externally to the dac chip. A final analog LP filter in the dac output stage is supposed to finish up whatever filtering remains to be done.
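As a rough illustration of the windowed-sinc idea (not any particular dac chip's filter; the Hann window, the 32-tap span, and the test tone are arbitrary choices), the curve between sample points can be evaluated like this:

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def windowed_sinc_interp(samples, t, half_width=16):
    """Estimate the signal value at fractional sample position t
    using a Hann-windowed sinc kernel of +/- half_width taps."""
    n0 = int(math.floor(t))
    total = 0.0
    for n in range(max(0, n0 - half_width + 1), min(len(samples), n0 + half_width + 1)):
        d = t - n
        window = 0.5 * (1.0 + math.cos(math.pi * d / half_width))  # Hann taper
        total += samples[n] * sinc(d) * window
    return total

# Sample a 100 Hz sine at 1 kHz, then reconstruct between the dots.
rate, f = 1000.0, 100.0
samples = [math.sin(2 * math.pi * f * n / rate) for n in range(200)]
t = 100.5  # halfway between samples 100 and 101
estimate = windowed_sinc_interp(samples, t)
true_value = math.sin(2 * math.pi * f * t / rate)
print(abs(estimate - true_value))  # small reconstruction error
```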
OTOH, Wadia dacs were known for using a spline-fit algorithm to connect the sample points. The problem with that is that it does not produce the mathematically correct curvature between the dots, even if the result looks smooth. It means the waveform can be distorted compared to the original analog signal that was sampled. That said, Wadia dacs had a reputation for sounding good compared to other dacs from that era. Someone told me some clever engineers at Wadia figured out some way to compensate the spline interpolation so it would sound good. I don't know what they actually did.
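To see why a spline can distort, here is a sketch using a Catmull-Rom cubic, one common spline family; whatever Wadia actually used is not public, so this is purely illustrative. For a treble tone near Nyquist (all parameters arbitrary), the spline's midpoint estimate lands visibly off the true waveform:

```python
import math

def catmull_rom(samples, t):
    """Cubic Catmull-Rom spline value at fractional sample position t."""
    i = int(math.floor(t))
    p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
    u = t - i
    return 0.5 * (2 * p1 + (p2 - p0) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u * u
                  + (3 * p1 - p0 - 3 * p2 + p3) * u ** 3)

rate, f = 44100.0, 15000.0  # a treble tone fairly close to Nyquist
samples = [math.sin(2 * math.pi * f * n / rate) for n in range(64)]
t = 30.5  # halfway between two samples
spline_err = abs(catmull_rom(samples, t) - math.sin(2 * math.pi * f * t / rate))
print(spline_err)  # noticeable error on a unit-amplitude sine
```

At low frequencies the spline tracks the waveform closely; the error grows as signal content approaches Nyquist, which is where the windowed-sinc approach keeps its accuracy.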
Wadia also used to machine their cases from billet for no reason anyone could explain, other than that it looked really good.
The local radio station I work for as a volunteer went largely digital in 2010 and almost completely by 2020, and all FM radio chips I've worked on since 2010 have digital processing from the IF onward, but that's all beside the point. The point is that signals most people would call analogue can be discrete in level.
Of course everything is turning digital.
You are just grumpy that everything isn't like it used to be ;-)
Sound reproduction is way better these days than say 30 years ago.
//
Are our recording technology outdated ?