Temporal resolution

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.
At some point in the DAC, all the output logic controlled by a single sample value becomes analog output. Each output period of the system corresponding to a single output sample has an impulse response that is directly proportional in amplitude to the output sample value. The continuous output is the sum of all the impulse-response contributions.
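The "sum of all IR moments" idea can be sketched numerically. This is my own toy illustration, not from the post: assume an idealized reconstruction where the impulse response is a sinc (no real DAC achieves this exactly), so each sample contributes one scaled, shifted copy of the IR and the analog output is their sum.

```python
import numpy as np

def reconstruct(samples, fs, t):
    """Sum of scaled, shifted impulse responses, evaluated at times t."""
    out = np.zeros_like(t)
    for n, x in enumerate(samples):
        out += x * np.sinc(fs * t - n)   # each IR scaled by its sample value
    return out

fs = 8.0                                      # sample rate in Hz (toy value)
n = np.arange(32)
samples = np.sin(2 * np.pi * 1.0 * n / fs)    # a 1 Hz tone, well below Nyquist
t = np.linspace(1.0, 3.0, 201)                # interior times, away from edges
y = reconstruct(samples, fs, t)
err = np.max(np.abs(y - np.sin(2 * np.pi * 1.0 * t)))
print(err)   # small: the summed IRs land back on the original tone
```

Away from the edges of the record, the summed impulse responses reproduce the original sine; a real DAC substitutes its own non-ideal impulse response for the sinc.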
 
Yes, if you want to consider the word "language" in that sense. Music has a different one. I said that at some point one just starts going around in circles, and this is an example. :D

Thinking which is conflicted always breaks down into a circular paradox of reasoning. More than one outcome becomes possible, or no outcome at all, when presented with a stimulus.

In your system of thought, a question is always asked about each and every moment of sound perception: is this music, or not music? If the answer is yes, one set of rules is applied; if the answer is no, another set is applied.

Your system of thought implies that absolute truth does not exist about any one event. Truth doesn't have a fixed definition, and no proof becomes possible.

Yes, your postings are like a broken record.
 
Thinking which is conflicted always breaks down into a circular paradox of reasoning. More than one outcome becomes possible, or no outcome at all, when presented with a stimulus.

In your system of thought, a question is always asked about each and every moment of sound perception: is this music, or not music? If the answer is yes, one set of rules is applied; if the answer is no, another set is applied.

Your system of thought implies that absolute truth does not exist about any one event. Truth doesn't have a fixed definition, and no proof becomes possible.

Nothing like this. This is really abstruse! What I meant was just that some things are more relevant to the formation of the emotional feeling than others, but it isn't really something you need to think about! It's the way you perceive it in real time!! The more (musical) experience you have, the more you know it. And it is not even as subjective as you might think, as the language is just one, if you know it... subjectivity is just in the small details.
 
At some point in the DAC, all the output logic controlled by a single sample value becomes analog output. Each output period of the system corresponding to a single output sample has an impulse response that is directly proportional in amplitude to the output sample value. The continuous output is the sum of all the impulse-response contributions.

In the old days.

Now, with lots of DSP horsepower available, it is possible to reconstruct the output signal in real time using many sample values. You are thinking only of a D/A feeding a S/H into a filter.
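A rough sketch of the "many sample values" point (my own illustration, not a description of any particular DAC): a windowed-sinc interpolator computes each oversampled output point from dozens of neighbouring input samples, rather than holding a single sample as a S/H stage does.

```python
import numpy as np

def upsample_windowed_sinc(x, ratio, taps=64):
    """Interpolate by `ratio`; each output uses `taps` neighbouring samples."""
    y = np.zeros(len(x) * ratio)
    half = taps // 2
    for m in range(len(y)):
        t = m / ratio                        # output instant, in input-sample units
        n0 = int(np.floor(t))
        for n in range(n0 - half, n0 + half):
            if 0 <= n < len(x):
                # Hann-windowed sinc kernel centred on the output instant
                w = 0.5 + 0.5 * np.cos(np.pi * (t - n) / half)
                y[m] += x[n] * np.sinc(t - n) * w
    return y

x = np.sin(2 * np.pi * 0.05 * np.arange(200))        # a tone at 0.05 * fs
y = upsample_windowed_sinc(x, 4)
ref = np.sin(2 * np.pi * 0.05 * np.arange(len(y)) / 4)
err = np.max(np.abs(y[200:600] - ref[200:600]))       # compare away from edges
print(err)
```

The tap count and window are arbitrary choices for the demo; the point is only that every output value is a weighted sum over many input samples.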

jn
 
Besides, a human being can also do the job of a microphone! I have a friend who was so obsessed with getting a flat frequency response out of his speakers at the listening position that, after some months spent positioning, moving, and adding things around the room, he asked for measurements. Amazingly, the FR was really flat, +/- 1 dB from 60 to 18,000 Hz, with a smooth cut-off at both ends and a slow slope down towards the treble. Despite the proof that he had gotten what he was looking for, he still didn't like it!
 
In one sense, fidelity to anything is a moot point. It has been shown that the horsepower is now available to break a recording down into its constituent musical parts (unfortunately the product stalled and never hit the market), which means a recreation or manipulation to make the sound derived from a recording be anything you want it to be is at hand. So those who want a very warped or exaggerated replicant of the "original" will be able to have one ...
 
Sorry, we are at that point; I've mentioned it before - http://www.diyaudio.com/forums/mult...echnology-truly-primitive-24.html#post3659888. It would have been very expensive, and wasn't 'perfect' in its first iteration, but it was sufficiently capable to steer the majority of the output of a specific instrument to a particular speaker, say.

Still not buying it, no matter how many times it's said. It won't be perfect the second, third, fourth, or twentieth iteration.

I don't want "the majority" of any instrument sent to one speaker in space. I want the entire instrument.

I don't want a secondary instrument to steal any of the timing or amplitude of a primary instrument just because it phased into the temporal space for an instant.

The problem would be barely doable if the two-channel recording also contained the interchannel timing delays in a coherent fashion, but that cannot happen unless the mixing desk built ITD into the mixdown. Putting ITD into the final product will comb the living daylights out of mono replay, and it can only be an exact solution for one specific speaker system at one specific distance from the listener with one specific angular placement. Use on a system that differs from the target will give unknown results.
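The combing claim is easy to put numbers on. Folding a channel and its ITD-delayed copy to mono gives the transfer function |1 + exp(-j*2*pi*f*tau)|, with complete nulls at odd multiples of 1/(2*tau). A quick check, assuming an illustrative 0.5 ms ITD (my value, not from the post):

```python
import numpy as np

tau = 0.5e-3                    # assumed 0.5 ms interchannel delay
f = np.array([500.0, 1000.0, 2000.0, 3000.0])
H = np.abs(1 + np.exp(-2j * np.pi * f * tau))
for fi, hi in zip(f, H):
    print(f"{fi:6.0f} Hz  mono gain {hi:.3f}")
# 1000 Hz and 3000 Hz (odd multiples of 1/(2*tau)) are fully nulled;
# 2000 Hz (= 1/tau) sums constructively to 2.
```

The null spacing moves with tau, which is why the result is exact only for one speaker geometry and listening distance.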

It can't be done properly with current program material.

What should be done is to sell the full multitrack to the consumer, at 16/44 minimum, and let the user's system add IID/ITD for the playback-specific parameters.
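A minimal sketch of what "let the user system add IID/ITD" might look like for one stem. All names and values here are my own assumptions, not a real product's API:

```python
import numpy as np

def pan_stem(stem, fs, gain_l, gain_r, itd_s):
    """Return L/R channels: an interchannel level difference (IID) plus an
    integer-sample interchannel time delay (ITD, positive delays the left)."""
    d = int(round(abs(itd_s) * fs))               # delay in whole samples
    left = np.concatenate([np.zeros(d if itd_s > 0 else 0), gain_l * stem])
    right = np.concatenate([np.zeros(d if itd_s < 0 else 0), gain_r * stem])
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))       # pad both to equal length
    right = np.pad(right, (0, n - len(right)))
    return left, right

fs = 44100
stem = np.random.default_rng(0).standard_normal(fs)   # 1 s of a mono stem
L, R = pan_stem(stem, fs, gain_l=1.0, gain_r=0.5, itd_s=300e-6)
print(len(L) - len(stem))   # prints 13: 300 us at 44.1 kHz rounds to 13 samples
```

The playback system would choose the gains and delays from the listener's actual speaker geometry, which is exactly what a fixed two-channel mixdown cannot do.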

jn
 
gk7 said:
What do you regard as the original, the data that comes from your CD, or the musicians performance ?
The voltage signal in the studio. Anything earlier than that and we are discussing microphones and recording techniques - a big subject but quite separate from the issue of preserving, transmitting and reconstructing the same voltage signal in my home.
 
The problem would be barely doable if the two-channel recording also contained the interchannel timing delays in a coherent fashion, but that cannot happen unless the mixing desk built ITD into the mixdown. Putting ITD into the final product will comb the living daylights out of mono replay, and it can only be an exact solution for one specific speaker system at one specific distance from the listener with one specific angular placement. Use on a system that differs from the target will give unknown results.

It can't be done properly with current program material.
All the theoretical reasons 'why it can't be done' may be put forward, but the proof's in the pudding. The human brain is very adept at reconstructing when sufficient clues are in place -- the Lexicon was a real device, a real product; it was demonstrated on quite a number of occasions, and people were bowled over by what it did, both by its ability to sufficiently separate the strands and by how dramatically the resulting in-room sound was enhanced ...


What should be done is to sell the full multitrack to the consumer, at 16/44 minimum, and let the user's system add IID/ITD for the playback-specific parameters.

jn
Of course, this would be the ideal. There will be a service down the road which will thoroughly deconstruct old recordings and then provide, say, 24-track breakdowns of the material for the user to play with as they want ... much more useful than the current hi-res download thingies ...
 
Nothing like this. This is really abstruse! What I meant was just that some things are more relevant to the formation of the emotional feeling than others, but it isn't really something you need to think about! It's the way you perceive it in real time!! The more (musical) experience you have, the more you know it. And it is not even as subjective as you might think, as the language is just one, if you know it... subjectivity is just in the small details.

Have you read "This Is Your Brain On Music"?
 
All the theoretical reasons 'why it can't be done' may be put forward, but the proof's in the pudding.
If by "proof is in the pudding" you mean this quote from you:
and wasn't 'perfect' in its first iteration - but sufficiently capable to steer the majority of the output of a specific instrument

Then you and I have a different definition of "proof".

Next I guess you'll be proving that we can survive a fall from 35 thousand feet, without a parachute, onto a parking lot, because "we were alive for the majority of the fall", although in your words it "wasn't perfect".

In the discipline of high-end audio, huge weight is placed on distortion, noise, soundstage image reconstruction, and frequency response, and yet now you are claiming success based entirely on something you state "wasn't perfect".

hmmm..
The human brain is very adept at reconstructing when sufficient clues are in place -- the Lexicon was a real device, a real product; it was demonstrated on quite a number of occasions, and people were bowled over by what it did, both by its ability to sufficiently separate the strands and by how dramatically the resulting in-room sound was enhanced ...

There is a distinct difference between separating the "strands", so to speak, and better-quality reproduction.
Of course, this would be the ideal (selling the multitrack to the consumer). There will be a service down the road which will thoroughly deconstruct old recordings and then provide, say, 24-track breakdowns of the material for the user to play with as they want ... much more useful than the current hi-res download thingies ...
We agree. However, the technology does not currently exist to figure out every aspect of an instrument or voice and do what you say can be done. Doing it offline, with some huge horsepower and a human to steer the computational decisions, would be the most promising approach, as the software is not ready to maintain the distinction when, for example, the lead guitar in "Heartbreaker" breaks into distortion. A very good set of ears in the redistribution process is very important. I want my Led to be LED. Not gospel music, not elevator music, but true Zep. I suspect "To Be Real" by Cheryl Lynn (disco) would be easy enough for the software, but when the bassist goes to town, will all the slap harmonics and impulses be correct? Will fuzz or wah-wah retain the correct channel?

jn
 
If I understand correctly, are we being asked to believe that an electronic box of tricks can, in real time, take a stereo audio signal and separate it into instruments? Are the Maxwell demons now trying to expand their empire beyond noise reduction into stereo imaging?

People making these claims really ought to realise just how foolish it makes them look in the eyes of those who know anything about anything!
 
If I understand correctly, are we being asked to believe that an electronic box of tricks can, in real time, take a stereo audio signal and separate it into instruments? Are the Maxwell demons now trying to expand their empire beyond noise reduction into stereo imaging?

People making these claims really ought to realise just how foolish it makes them look in the eyes of those who know anything about anything!

From what has been stated, a device has been demonstrated which can do "almost" that. (edit: I suspect the source material was chosen specifically so that no two images were in the same spatial location.)

For a mixdown where no interchannel timing delays were introduced, I suspect it can be done via correlation and signal levels: first equate a specific level between the channels, then apply correlation.
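That level-then-correlation procedure can be sketched on synthetic data. This is my own interpretation, using a hypothetical pure pan-pot mix (IID only, no ITD):

```python
import numpy as np

rng = np.random.default_rng(1)
src = rng.standard_normal(2000)
left, right = 1.0 * src, 0.4 * src    # pure pan-pot (IID) mix: no ITD

# Step 1: equate levels between the channels via the RMS ratio
ratio = np.sqrt(np.mean(right ** 2) / np.mean(left ** 2))
print(ratio)        # ~0.4: the panned level of this "instrument"

# Step 2: cross-correlate to find the interchannel lag
xc = np.correlate(left, right, mode="full")
lag = int(np.argmax(xc)) - (len(right) - 1)
print(lag)          # 0: a pan-pot mixdown carries no timing offset
```

With several overlapping sources, the same machinery would need to run per time-frequency region, which is where it gets hard; this only shows the single-source case.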

It'd be a tad more rigorous mathematically if interchannel timing were applied, but typical content isn't mixed down that way.

And if the source material came via a two-mike setup, all bets are off.

jn
 
From what has been stated, a device has been demonstrated which can do "almost" that. (edit: I suspect the source material was chosen specifically so that no two images were in the same spatial location.)

jn

I've seen some DSP demos using Bayesian techniques to capture a voice in a crowd -- fairly interesting, if not perfect. Similar to blind deconvolution, I think, which has always amazed me: picture plus unknown badness yields a decent picture and a good idea of the badness.
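For flavour, here is a toy of the simpler, non-blind cousin of that trick: Wiener deconvolution with a known blur kernel. Blind deconvolution additionally has to estimate the kernel, which is what makes it amazing; this sketch and all its values are mine, not from any demo mentioned above.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
clean = np.zeros(n)
clean[60], clean[150] = 1.0, -0.7                 # a sparse "picture"
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
kernel /= kernel.sum()                            # Gaussian blur, unit gain
blurred = np.convolve(clean, kernel, mode="same") \
    + 0.01 * rng.standard_normal(n)               # known badness + noise

# Wiener deconvolution in the frequency domain, kernel centred at index 0
K = np.fft.rfft(np.roll(np.pad(kernel, (0, n - len(kernel))), -8))
B = np.fft.rfft(blurred)
reg = 1e-3                                        # assumed noise-to-signal power
est = np.fft.irfft(B * np.conj(K) / (np.abs(K) ** 2 + reg), n)
print(int(np.argmax(est)), int(np.argmin(est)))   # peaks recovered near 60, 150
```

The `reg` term is what keeps the division from amplifying noise where the kernel response is tiny; the Bayesian versions effectively learn both that trade-off and the kernel itself.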
 