Well, insofar as it's a group of blokes taking **** about something they aren't well versed in, this seems about on par for a lounge thread. It might lead somewhere - who can tell...
Nobody is forcing you to stay in a thread that you think has no worth - bye!
In my experience blokes often talk **** after taking **** 😉
Why are you responding to Bill, Dan and Pete talking about taking **** but ignoring my post about phase change? 🙄
I never said I was going anywhere. Just that this is a subject that no one here has any credentials in.
I was hoping it was a thread about psychoacoustics, but it seems to be turning into one about psychosis
If anyone has read that overview paper on ASA, they'll have seen some of the main current strands in ASA research (at the risk of being told I'm talking ****):
- People without knowledge of ASA assume that we hear what hits the ear - in other words, that it's all just bottom-up - but research shows that top-down processing is important too: attention, prediction and knowledge all influence what we perceive. Inattentional deafness is as real as inattentional blindness - a lot of people don't perceive the gorilla in the auditory scene, just as they don't perceive the gorilla in the visual scene. This is mainly related to cognitive load. This speaks to two things needed for a useful blind test - training & avoidance of undue cognitive load - and it's why ABX testing is not as sensitive as other blind tests: identifying whether X is A or B is a higher cognitive load than stating whether you prefer A or B (a scoring sketch follows below).
For more, read "Musical Change Deafness: The Inability to Detect Change in a Non-speech Auditory Domain"
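To put a number on the ABX point, here's a minimal sketch (my own illustration, nothing from the paper) of how an ABX run is usually scored: under the null hypothesis that the listener can't tell A from B, each trial is a coin flip, so the number of correct answers is tested against a one-sided binomial tail. The trial counts are made-up examples:

```python
# A minimal sketch of scoring an ABX run. Under the null hypothesis that the
# listener cannot tell A from B, each trial is a fair coin flip, so the chance
# of getting `correct` or more answers right out of `trials` is a one-sided
# binomial tail. The trial counts below are made-up examples, not real data.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

if __name__ == "__main__":
    for correct, trials in [(9, 16), (12, 16), (14, 16)]:
        p = abx_p_value(correct, trials)
        verdict = "difference detected" if p < 0.05 else "not significant"
        print(f"{correct}/{trials} correct: p = {p:.3f} ({verdict})")
```

With 16 trials you need 12 correct before p drops below 0.05 - one reason short, casual ABX runs rarely settle anything either way.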
And here's probably the best detailed explanation of roughly how ASA works:
The cochlea decomposes the signal into a set of frequency components and establishes the tonotopic organization that is found throughout much of the auditory system up to and including the primary auditory cortex; for an overview of the subcortical auditory system see Irvine (2012). However, even at this early stage, processing is not just a passive feedforward process. For example, cochlear intrinsic nonlinearities increase the saliency of onsets, emphasize spectral peaks and reinforce harmonically related components in incoming signals. Recurrent feedback, primarily mediated via physical changes in outer hair cell motility, provides a mechanism for adaptive gain control and active modulation of cochlear processing (Guinan, 2006). As the signals pass onwards from the cochlea toward the brain, additional features are extracted and represented in overlapping maps, largely in parallel across the tonotopic axis; such features include onsets, offsets, periodicities, amplitude and frequency modulations (AM, FM), and interaural time and level differences (ITD, ILD). Together these features form the basis for the grouping processes underlying ASA.
Subcortical processing provides cortex with time-locked information about acoustic features detected within the incoming mixtures of sounds, but this information is agnostic with regard to which features belong together or the sources from which they might originate. Cortex then, possibly through inferential processes (Friston, 2005), groups and segregates features into composite event and object representations; representations which become increasingly more abstract at higher levels of the auditory processing hierarchy (Kumar et al., 2014). Thus it is likely that cortex is responsible for object formation. Similarly to other sensory systems, the cortical auditory system is organized in a hierarchical manner (Leaver and Rauschecker, 2010). For example, a pitch processing hierarchy runs from primary auditory cortex in Heschl’s gyrus through planum temporale, superior temporal gyrus and planum polare (Patterson et al., 2002). Differential activations along this pathway distinguish sounds from silence, pitched from unpitched sounds, and melodic patterns from repeated pitches. Further, activity along this pathway also correlates with the emergence of categories (e.g., voices, musical instruments) from feature combinations (Leaver and Rauschecker, 2010). Consistent evidence comes from magnetoencephalographic (MEG) studies of cortical responses to events in speech mixtures (Simon, 2015): primary auditory cortical activations with latencies around 50 ms are primarily related to feature-based representations, while those localized to planum temporale with latencies from 100 ms onwards are related to object-based representations (see also Näätänen and Winkler, 1999).
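For anyone who thinks better in code, here's a very rough sketch of the first stage described above - splitting a signal into log-spaced frequency bands, loosely analogous to the cochlea's tonotopic decomposition. To be clear, this is my own illustration, not from the paper: plain Butterworth bandpass filters stand in for the real cochlear filters (gammatone filterbanks would be more faithful), and the sample rate, band count and band edges are arbitrary choices:

```python
# A very rough sketch of the cochlea's first stage: splitting a signal into
# log-spaced frequency bands. Ordinary Butterworth bandpass filters stand in
# for real cochlear filters; all parameters here are arbitrary illustrative
# choices, not values from the paper.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # sample rate in Hz (illustrative)

def filterbank(signal, n_bands=8, f_lo=100.0, f_hi=6000.0):
    """Split `signal` into n_bands log-spaced bandpass channels."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log spacing, like the tonotopic axis
    channels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        channels.append(sosfiltfilt(sos, signal))
    return np.stack(channels)  # shape: (n_bands, n_samples)

if __name__ == "__main__":
    t = np.arange(FS) / FS  # one second of audio
    # Two tones an octave apart - each should dominate a different channel.
    x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
    for i, ch in enumerate(filterbank(x)):
        print(f"band {i}: rms = {np.sqrt(np.mean(ch**2)):.3f}")
```

Run it and the 440 Hz tone dominates one channel while the 880 Hz tone dominates another - the "overlapping maps along the tonotopic axis" idea in miniature, minus all the nonlinearities and feedback the paper describes.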
Bill, if you need any of this explained, just ask, or I can give you the reference to the paper - although you probably read it already 😎
- People without knowledge of ASA assume that we hear what hits the ear
Seriously? Surely it's pretty universally understood that the ears are fairly rubbish and have a huge wet squish DSP connected to them to turn what they receive into something useful. And no two DSPs are programmed the same, which makes it a hoot describing a sound to someone else - same as you cannot explain yellow to me in a meaningful way.
How does the way we perceive sound, as you've outlined it, relate specifically to audio reproduction in our rooms?
Just as I thought - you know it all - just sit back & snipe like ScottJ with inane comments like this.
I ask a question you don't like and get insults back. Nice. What are you actually trying to accomplish with this thread then? Obviously not a discussion!
Where was the question in your last post? You have a twisted view of what a discussion is.
As I said, just sit back & snipe, like ScottJ - that must be your idea of "discussion"
I guess if you dismiss auditory perception as wet squish DSP, then dismissing DIYAudio as an entanglement of wires, and music as a clatter of noise, is equally apt 😀