Second lecture
Today’s lecture was devoted to an overview of the neural processing pathways of auditory stimuli, with emphasis on the central role suppression plays in filtering out unwanted information.
To highlight this point, the lecturer recalled how she was struck, during an echography (ultrasound scan) in her first pregnancy, by the conspicuous cardiac activity in an embryo just a few weeks into development. She wondered what happens with cardiac-related noise, for example from circulation through the carotid arteries just a few tens of millimeters from the cochleae, which should be readily sensed yet which we are not aware of. Subsequent experiments with guinea pigs showed conclusively that there is synchronous excitation of auditory cortex neurons in strong correlation with the ECG, proving the sound is heard but suppressed.
This pattern repeats in other senses, and in a way parallels evolutionary strategy. Namely, the nervous system starts the early developmental stages with an excess of sensory-derived signals; then, both in a hard-coded way during further development and through experience in early life after birth, connections carrying useless or irrelevant phenomena are selectively pruned, thus fine-tuning the perceptive experience.
As supporting evidence, cochlear implants (extremely crude attempts to generate nervous signals for a very limited set of frequencies) succeed in providing speech recognition only for pre-lingual subjects (less than 8 months old) or for subjects who lost hearing well after having learned to talk. They fail (in providing speech ability) in subjects who were deaf during the window of opportunity at an early age, when the nervous system is maturing.
An amusing sideline: the hippocampus is a region responsible, among other things, for processing and integrating signals from vision and speech, and for packaging information for memory storage. Theta waves are associated with this region, and interestingly their rhythm is more or less the well-known 120 beats/minute so mesmerizing for trance music fans.
Next Monday we’ll be given a tour of the neurophysiology lab, and a closing lecture for this first part.
Rodolfo
ChocoHolic said: ...but up to now I am not able to extract what sort of distortions, delays, compressions and other errors are inconvenient for us.
...I am not able to transform the results of the biological examinations into a technical specification of 'good' or 'bad' properties for an audio amplifier.
...difficult...
The trouble is that there are two possible approaches which are unknowingly mixed up.
On the one hand, one may look for reproduction as neutral as possible, knowing it is technically impossible to regenerate an original sound field exactly. Yet in the end this goal is, in my view, the only sensible one.
The other possibility is to exploit added "features" (as MS would say) that are beneficial from a perceptual viewpoint. This means getting into the studio engineer's shoes (albeit to a limited extent).
It should then be hardly surprising that what is the "right" way for one is "terrible" for another, and this is how things actually stand.
That sound perception, as with the other senses, carries a large influence of learned experience, manifest in the way the original parallel signal paths are selected and pruned, implies it is a personal, individual and complex process that can hardly be reproduced; this is probably the key to so much diversity of opinion.
Rodolfo
Rodolfo,
you make two very important points:
ingrast said: That sound perception, as with the other senses, carries a large influence of learned experience, manifest in the way the original parallel signal paths are selected and pruned, implies it is a personal, individual and complex process that can hardly be reproduced; this is probably the key to so much diversity of opinion.
This may actually get you excommunicated from the camp in this forum which holds that objective scientific measurements exist and can indeed tell the whole story. To me it's almost self-evident, and of course one can still work with this "problem" in a scientific way: by measuring and statistically exploiting the variety of subjective impressions.
ingrast said: The other possibility is to exploit added "features" (as MS would say) that are beneficial from a perceptual viewpoint. This means getting into the studio engineer's shoes (albeit to a limited extent).
I'd say that's the key to engineering. There is nothing wrong with engineering; on the contrary, only good engineers can get us close to "perfection". But the key to good engineering is to know where you want to go in the first place ... and here we all lack some data.
The exploitation of human perception is evident not only in musical composition, or in all the arts, but also in so-called "objective representations of reality" (such as audio reproduction). In *all* measurement, data gathering and reporting, whether scientific, journalistic, artistic or otherwise, you pick out a specific aspect of reality, arbitrarily chosen by you, and you highlight that particular aspect using some technique.
In photography for instance, you adjust contrast and sharpness, or you take away color altogether, to achieve a stark impression of the feeling of reality. It is not reality, but it *conveys* reality.
In journalism, you select the story you think is interesting... you give it importance. You create the story, so to speak, out of a background of millions of other stories that you ignore.
In audio recording, you put the microphone in a selected location, you use compression, etc.
In audio reproduction, you choose a frequency range and dispersion pattern which you want to represent. Absent perfection, you must make a choice of which aspects are most important to you.
Not too surprising that you may also choose the "lesser of two evils" when deciding which distortions to suppress and which to allow, if you have to choose between two technically incompatible goals. Not too surprising, therefore, that many valid alternative solutions exist.
BTW, in science it's just the same. You measure the parameter which, for better or worse reasons, you *believe* to be important.
This is why the belief in "scientific truth" always makes me smile.
Tschrama,
interesting about the monks inventing musical scales according to their hearing observations. Do you have a source for that (books or www)?
I have to disappoint you. The basis of our musical scales is much, much older. One of the main contributors is one of the most hated people amongst pupils: Pythagoras!!
http://www.jimloy.com/physics/scale.htm
And yes, it definitely IS related to mathematics AND physiology !
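The Pythagorean construction behind the link above is simple enough to demonstrate in a few lines: each note is obtained by stacking perfect fifths (frequency ratio 3:2) and folding the result back into a single octave. A minimal sketch (the choice of seven fifths, one of them below the tonic, is just the conventional diatonic set):

```python
# Pythagorean major scale: stack perfect fifths (ratio 3:2) around the
# tonic, then fold each pitch back into one octave [1, 2).
from fractions import Fraction

def pythagorean_scale():
    ratios = []
    for k in range(-1, 6):          # one fifth below the tonic, five above
        r = Fraction(3, 2) ** k
        while r >= 2:               # octave-reduce downward
            r /= 2
        while r < 1:                # octave-reduce upward
            r *= 2
        ratios.append(r)
    return sorted(ratios)

# do, re, mi, fa, sol, la, ti as frequency ratios to the tonic:
print([str(r) for r in pythagorean_scale()])
# ['1', '9/8', '81/64', '4/3', '3/2', '27/16', '243/128']
```

Note how the exact ratios come out as powers of 3 over powers of 2, which is exactly the mathematics-meets-physiology point: small-integer ratios sound consonant to our ears.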
Since I am not a music historian I am not quite sure about this, but my feeling is that during the first millennium music did not make much progress (this may be an unfair term, but no better one came up). The oldest piece of music I ever heard was by an ancient Greek composer (around 250 BC). It was an interpretation by the Hilliard Ensemble and Jan Garbarek. It doesn't sound that different from sacred music from the beginning of the 2nd millennium (so it wouldn't even sound that strange to many of us). I wonder, though, how this piece of music was passed down, since the basics of our modern musical notation go back to the beginning of the 2nd millennium (this time it was definitely by some monk(s)!).
Modern music IMO started its life during the Renaissance.
Regards
Charles
ingrast said: ...with guinea pigs showed conclusively there is synchronous excitation of auditory cortex neurons in strong correlation with the ECG, proving it is heard but suppressed.
Having done my share of audio-electrophysiology experiments on guinea pigs, I can tell you it's not that simple. You might want to take a look at 'stochastic resonance', and at spontaneous firing rates related to blood pressure, blood electrolytes, etc. Still, it is obvious that perception of certain continuous 'sounds' is somehow blocked; we would hear constant noise if it were otherwise.
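For readers unfamiliar with the term: stochastic resonance is the effect where a signal too weak to cross a detection threshold on its own becomes detectable once a moderate amount of noise is added. A toy sketch, with all numbers invented for illustration (this is not a model of the actual guinea-pig experiments):

```python
# Toy stochastic resonance: a sub-threshold sine (amplitude 0.8, threshold
# 1.0) never fires a simple threshold detector on its own, but with added
# Gaussian noise the detector fires preferentially near the signal peaks.
import math
import random

random.seed(0)

def detections(noise_sd, n=20000, amp=0.8, threshold=1.0, freq=0.01):
    """Count threshold crossings that fall in the signal's positive phase."""
    hits = 0
    for t in range(n):
        s = amp * math.sin(2 * math.pi * freq * t)
        x = s + random.gauss(0.0, noise_sd)
        if x > threshold and s > 0:
            hits += 1
    return hits

print(detections(0.0))   # 0 -- the signal alone never reaches threshold
print(detections(0.3))   # noise pushes the unit over threshold near the peaks
```

The point for the discussion above is only that noise and thresholds interact in non-obvious ways, so "correlated firing" by itself does not settle what is perceived.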
phase_accurate said:
I have to disappoint you. The basis of our musical scales is much, much older. One of the main contributors is one of the most hated people amongst pupils: Pythagoras!!
You don't, except if you thought I didn't know that. I guess both are true, as it wouldn't be the first time a rediscovery was made by a totally different method.
I found the monks info in a book for first-year conservatory students, a history of music, blah blah ... dunno, it's not mine.
MBK said:....
This may actually get you excommunicated from the camp in this forum which holds that objective scientific measurements exist and can indeed tell the whole story. ......
I still believe it is possible to objectively assess how close to perfection our reproduction systems are, and it is my choice, at least, to follow this path of "neutrality" as much as possible, leaving between the original artists and the listener only the minimum unavoidable disruption, plus perhaps the studio "artists" (this said with due respect: some out there are really good and only enhance the end product).
If, on the other hand, I felt like contributing personally to the end product, I would probably experiment with preprocessing, from humble tone controls (my amplifiers lack them altogether) through graphic equalizers, up to sophisticated DSP processing engines.
Rodolfo
tschrama said:
Having done my share of audio-electrophysiology experiments on guinea pigs, I can tell you it's not that simple. You might want to take a look at 'stochastic resonance', and at spontaneous firing rates related to blood pressure, blood electrolytes, etc. Still, it is obvious that perception of certain continuous 'sounds' is somehow blocked; we would hear constant noise if it were otherwise.
No contradiction as I see it; the bottom line is that actual perception is a selective process.
Rodolfo
ingrast said:
The trouble is that there are two possible approaches which are unknowingly mixed up.
On the one hand, one may look for reproduction as neutral as possible, knowing it is technically impossible to regenerate an original sound field exactly. Yet in the end this goal is, in my view, the only sensible one.
The other possibility is to exploit added "features" (as MS would say) that are beneficial from a perceptual viewpoint. This means getting into the studio engineer's shoes (albeit to a limited extent).
It should then be hardly surprising that what is the "right" way for one is "terrible" for another, and this is how things actually stand.
.....
Rodolfo
I would also tend toward the first approach.
Often it happens that an improvement of one particular amplifier property causes another drawback. And we have just poor knowledge to decide... (Devil or hell?)

Popular example:
Ultra-massive overall-feedback designs in the late 70s.
THD came out great with this (at 1 kHz 😀). But the sound did not really improve... OK, then we started to measure TIM, and considered the frequency responses of all the cascaded gain stages ... discussed how much headroom the input stage would need ... things got better...
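For what it's worth, the basic THD figure mentioned above is easy to sketch: evaluate the DFT at the fundamental and at its harmonics, and compare the magnitudes. A pure-Python illustration with an artificial 1 kHz tone carrying 1% second harmonic (tone, sample rate and distortion level all invented for the example):

```python
# THD from a sampled waveform: ratio of the RMS sum of harmonic magnitudes
# to the fundamental's magnitude, with f0 chosen to land on an exact DFT bin.
import math

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of x (direct evaluation)."""
    n = len(x)
    re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    return math.hypot(re, im)

def thd_percent(x, fs, f0, n_harmonics=5):
    n = len(x)
    bin0 = round(f0 * n / fs)            # f0 must fall on an exact bin
    fund = dft_mag(x, bin0)
    harm = [dft_mag(x, bin0 * k) for k in range(2, n_harmonics + 2)]
    return 100 * math.sqrt(sum(h * h for h in harm)) / fund

fs, f0, n = 48000, 1000, 480             # ten full cycles of the fundamental
clean = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
dirty = [math.sin(2 * math.pi * f0 * t / fs)
         + 0.01 * math.sin(2 * math.pi * 2 * f0 * t / fs)  # 1% 2nd harmonic
         for t in range(n)]

print(round(thd_percent(clean, fs, f0), 3))  # 0.0
print(round(thd_percent(dirty, fs, f0), 2))  # 1.0
```

Note that a single number like this says nothing about *which* harmonics carry the energy, which is exactly why the 1 kHz THD race of the 70s missed so much.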
But the discussion of which sorts of errors are the most annoying remained on a religious basis. 🙄
There are only very few points of common agreement.
I also think that there will always remain a certain area of subjective perception. But from a biological point of view I would expect much more common agreement than is observed in the battle of sound beliefs and related circuits.
ChocoHolic said:
..... But from a biological point of view I would expect much more common agreement than is observed in the battle of sound beliefs and related circuits.
I still believe it is possible, either in the immediate future or later, to agree objectively (i.e. by neurophysiological measurements) on a set of relevant thresholds below which two different signals elicit the same perceptive response.
For example, a plausible, and perhaps too stringent, set could be (as I posted earlier in other threads): below 0.001% THD, negligible phase shift, and absence of slew-rate limiting within the 20 Hz - 20 kHz range.
This could then work as a "standard interface" to which few could object. From there on, each may consciously opt to "color" to taste, toward whatever he/she finds a more pleasant experience.
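The slew-rate part of such a threshold set has a simple closed form: a sine of peak amplitude Vp at frequency f has maximum slope 2*pi*f*Vp, so that is the minimum slew rate an amplifier needs to reproduce it without limiting. A quick check (the 100 W / 8 ohm example is mine, not from the post):

```python
# Minimum slew rate to pass a full-level sine: SR = 2*pi*f*Vp.
import math

def required_slew_rate(v_peak, f_hz):
    """Minimum slew rate in V/us for a sine of peak v_peak volts at f_hz."""
    return 2 * math.pi * f_hz * v_peak / 1e6

# A 100 W / 8 ohm amplifier swings about 40 V peak:
v_peak = math.sqrt(2 * 100 * 8)                      # = 40.0 V
print(round(required_slew_rate(v_peak, 20000), 1))   # 5.0 (V/us)
```

So a power amplifier meeting this criterion with some margin needs on the order of tens of V/us, which most modern designs achieve comfortably; the criterion is objective and easy to verify on the bench.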
Remember the guitar amplifiers of the 60s? Manufacturers certainly had faithful amplification in mind at first, yet musicians quickly found that overdrive and certain speaker cone constructions made for new and unexpected sounds to add to their repertoire.
In a way, something like this is probably happening with amplifiers and speakers. Certain brands and technologies provide a unique signature found pleasing by certain users (and reviewers), which, coupled with the vagaries of fashion and marketing, leads in the end to the current debates, while measurements fail to show they are objectively "better" in the sense of faithfulness (often quite the opposite).
Rodolfo
this thread is quite an interesting read! i believe we have more control over how we perceive what we hear than is currently believed. for example, people with selective hearing: it's amazing how they can focus on something they want to hear and effectively block out most everything else. another interesting observation comes from my grandparents. over the years my grandfather seems to have developed a vocal notch filter that includes the main frequencies of my grandmother's voice 🙂 he seems to hear other people just fine, but when she says something he always asks her to repeat it. either that, or her yelling at him over the years has actually caused hearing damage in that particular frequency range! anyway, thanks for the good read and the informative links.
Brian
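The "vocal notch filter" above is a joke, of course, but the notch filter itself is a standard DSP building block. A minimal second-order (biquad) notch sketch; the center frequency, Q and sample rate are chosen arbitrarily for illustration, and the coefficients follow the widely used RBJ audio-EQ cookbook form:

```python
# Biquad notch filter: kills a narrow band around f0, passes the rest.
import math

def notch_coeffs(f0, fs, q=5.0):
    """Normalized biquad notch coefficients (RBJ audio-EQ cookbook form)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad(x, b, a):
    """Direct-form I filtering of sequence x (a is normalized, a[0] == 1)."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:
        out = b[0] * s + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, s
        y2, y1 = y1, out
        y.append(out)
    return y

fs = 8000
b, a = notch_coeffs(200.0, fs)     # notch centered at 200 Hz
tone = [math.sin(2 * math.pi * 200 * t / fs) for t in range(fs)]
other = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs)]

# Steady-state amplitude after the transient has decayed:
print(max(abs(v) for v in biquad(tone, b, a)[fs // 2:]))   # heavily attenuated
print(max(abs(v) for v in biquad(other, b, a)[fs // 2:]))  # passes nearly unchanged
```

Whether the brain implements anything like this for a specific voice is another question entirely, but it shows how little machinery a frequency-selective "deafness" actually requires.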
ingrast said: I still believe it is possible, either in the immediate future or later, to agree objectively (i.e. by neurophysiological measurements) on a set of relevant thresholds below which two different signals elicit the same perceptive response.
This could then work as a "standard interface" to which few could object. From there on, each may consciously opt to "color" to taste, toward whatever he/she finds a more pleasant experience.
i would agree with you on this, based solely on the facts that our ears all have the same basic mechanical structure and that we all seem to perceive many different acoustical phenomena the same way.
BWRX said:...another interesting observation comes from my grandparents. over the years my grandfather seems to have developed a vocal notch filter that includes the main frequencies of my grandmother's voice 🙂 ...
Amazing proof of Nature's wisdom!!!!!
Rodolfo