You say you don't want to argue, so... Let's say you walk into a pub and start a discussion by stating "the earth is flat!". Someone responds with "What makes you say that?", and you answer "Because I see it with my own eyes". If the person responds with "no, it isn't, it's round", and you counter with "Can you prove that? And can I see an ophthalmologist's declaration that your eyes are in good working order?"
Can you see how that discussion could go rapidly downhill?
Sorry, that is one of the worst and most stupid explanations I have ever seen from a (I assume) grown-up man in my whole life. 😱 I give up!

Actually it is a very relevant and clever analogy; I really like it. With just a minor difference: the response is not "no, it is not flat, it is round", but something even less resolute - "By all objective observations and measurements so far it is round. It is very unlikely to be flat, but we can discuss it if you have some more objective support for your claim apart from visual impression". Then comes the response of "check with your doctor", just like you responded here.
Very relevant analogy indeed.
just a minor difference: the response is not "no, it is not flat, it is round", but something even less resolute - "By all objective observations and measurements so far it is round. It is very unlikely to be flat, but we can discuss it if you have some more objective support for your claim apart from visual impression". Then comes the response of "check with your doctor", just like you responded here.
Yes, that is an appropriate correction. Thanks!
well, yes and no.
"Evidence" is not sighted listening, but statistically significant positive results from blinded listening tests.
Unfortunately, DBT works badly with human perception, and false negatives are pretty common. While a "positive" ABX detection is definitely a proof, a negative one isn't so conclusive. The unavoidable "stress" implied in the test, as well as any preconception (and many other things), may easily invalidate the results: "placebo" also works the other way around (it's called the "nocebo" effect).
Moreover, a trivial self-made ABX test done at home with the available tools is far from being a truly scientifically valid DBT, which would be much more complex: it requires a great deal of specific expertise to set up properly, large sample sizes, a large panel of test subjects, different systems and conditions, etc. There is no way to do that at home.
Sighted listening of course does not and cannot constitute proof, but it should not be disregarded so easily, either.
you can't "fix" anything, as there's nothing broken!How can you fix something you do not know whether it actually exists?
And...
again: stop worrying, you can't.
How do you get feedback on changes implemented without "measuring" them?

Unless someone finds a way to actually get some objective measure of such subtle effects, there is simply nothing that can be done in a "scientific" way.
The only thing that can be done is to take a fully empirical approach, with no guarantees or certainties whatsoever.
Does this software or kernel or setting or whatever "sound better" (to you) than some other one? Use it. Do you feel it sounds worse? Try something else. It is as simple as that. Forget about trying to make it a science, about understanding how and why. Unless you are Sony or some other huge corporation willing to invest billions in that research, doing so is far beyond our means.
exactly.
An RT kernel/linux distribution introduces changes to scheduling. It plays a key role in low-latency setups - the CPU/kernel must keep up with the fast-arriving interrupt requests. [...]
Now, consider the typical setup of today: PC -> USB (UAC2) -> DAC.
Have you ever thought that, with such changes in kernel timings, you are likely also altering the exact timing of the signals on the USB bus? 😉
Then, have you ever considered that any time there is a transition on the line (on any transmission line, even an optical or wireless one...), there is also a corresponding current pulse on the "receiver"?
Now, consider that any piece of wire or PCB track has stray inductance, resistance and capacitance, as well as unwanted couplings with nearby circuit elements (and no PSU has an impedance of zero, either).
What happens?
That's it: different timing on the PC -> different "noise" on the USB interface -> different "noise" on the DAC board -> different "noise" on the output audio signal.
The principle is as simple as that. Yet, turning this knowledge into any predictable way of altering the result is practically impossible. No way.
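To put a rough number on the coupling mechanism described above: a fast current edge through a stray inductance produces a voltage spike of roughly V = L·di/dt. The values in this little sketch are purely illustrative assumptions, chosen only to show the order of magnitude, not measurements of any real system:

```python
# Back-of-envelope estimate of the spike induced across a stray inductance
# by a fast current edge. All values are illustrative assumptions.
L_stray = 10e-9   # ~10 nH, a few millimetres of wire or PCB track
di = 10e-3        # 10 mA current step drawn by a line transition
dt = 2e-9         # 2 ns edge time
v_spike = L_stray * di / dt
print(f"induced spike ~ {v_spike * 1e3:.0f} mV")   # ~50 mV across that tiny inductance
```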
exactly.
Just in no way can you affect this by software in a CONSISTENT manner.
and here instead you're wrong. Terribly wrong. Perception is an extremely complex thing. Our perception is by no means "equivalent" to what could be seen on a scope or spectrum analyzer. It's by no means an LTI system. Quite the contrary: it's a wildly non-linear, time-dependent, chaotic system.
Especially not to produce the "reported" changes - stronger bass, more pronounced mids, etc. That would require intelligent filtering of the samples, not some random events of noise.
What you subjectively perceive as "stronger bass", "more pronounced mids", etc., may actually depend on (significant) frequency response alterations, but it may just as well have absolutely nothing to do with that. Subtle background noise changes, particularly if the noise is correlated with the signal, may actually have a huge impact on just about every aspect of the perceived sound characteristics.
Beware: I mean "inaudible" noise. That is, low level noise that you will NOT perceive by/as itself. Nothing to do with hum or other audible disturbances.
as said, blind ABX test results also get confused by our thoughts (and many other things, too). Human perception is a wild, mined territory... 😉
Of course I have tried blinded testing, e.g. my old Tool for A/B Testing | Blog IVITERA a.s.
That is not my theory, it is a regular scientific procedure: ABX test - Wikipedia. In fact it is common sense. Everyone has experienced many times how easily their senses are confused by their thoughts. You have to get rid of that influence. How? The easiest way is by not knowing - hence the blind test.
unfortunately, RT (which, among other things, usually means a "fully preemptible" kernel) is a very delicate thing. It can easily screw up everything. No wonder Kipeta is having a hard time setting it up and making it work properly.
bottom line: the 32 and 64 sound like they have an upper-mid hump with rolled-off highs (on my computer, in my system, with my ears). Only the RT sounds "normal", which is a godsend, because I thought I was going to have to scrap the whole thing and start using another system. My only gripe at this point is how buggy the RT version is for me, with numerous reinstalls, etc. The latest bug this week is having to restart it every day when I get home because my remote UI says it can't find the page (perhaps a wifi issue now, since "Daphile" oddly shows up in my available networks while this problem is occurring). Now, if that's all just expectation bias, so be it... but really the only expectation I ever had was for the dumb thing to just work.
Which "extraordinary claim"? Which "scientific research"? The (nonsensical) marketing claim that digital audio is perfect? 🙄The difference is that phofman is not claiming something that goes against half a century of scientific research and understanding. It is only extraordinary claims that require extraordinary evidence.
The plainly wrong idea that same bits = same output, disregarding the well-known and obvious influence of clocks, references, etc.?
You'd better read here... (particularly the 3rd part). Q&A with John Swenson:
Part 1: What is Digital?
Part 2: Are Bits Just Bits?
Part 3: How bit-perfect software can affect sound
Unfortunately, DBT works badly with human perception
So what do you suggest as an alternative?
Sighted listening of course does not and cannot constitute proof, but it should not be disregarded so easily, either.
Not disregarded. But if there are two explanations for the results of a sighted test - one that goes against pretty much all modern understanding of physics and information theory, and the other that is totally in accordance with our understanding of how the human mind interprets things, which one do you think requires the stronger proof?
Unless someone finds a way to actually get some objective measure of such subtle effects, there is simply nothing that can be done in a "scientific" way.
Actually there is a way. It is applied in a lot of other fields where the distinctions are subtle and subject to "interpretation" by the human mind - double-blind ABX tests.
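For completeness, the way an ABX run is usually scored is a one-sided binomial test against chance (p = 0.5). A minimal sketch in Python; the trial counts below are just example numbers:

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` answers right out of
    `trials` by pure guessing (one-sided binomial test, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038 -> unlikely to be pure guessing
print(abx_p_value(9, 16))   # ~0.40  -> consistent with guessing
```

The second case also illustrates the "false negative" caveat raised earlier: a non-significant result is compatible with guessing, but it does not by itself prove that there is no audible difference.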
Forget about trying to make it a science, about understanding how and why. Unless you are Sony or some other huge corporation willing to invest billions in that research, doing so is far beyond our means.
Actually, this is where I have to say that I am slightly put off by your choice of nickname. UNIX was developed at Bell Labs, by some of the people I had the good fortune to get to know personally. A lot of the research into what and how human ears perceive was also done there (and there is a bit of overlap between those areas). All those people (or at least those of them that are still alive) would laugh pretty heartily at your claims.
That's it: different timing on the PC -> different "noise" on the USB interface -> different "noise" on the DAC board -> different "noise" on the output audio signal.
To quote one of my friends, who is a very accomplished professional in the broadcast industry: "we used to worry about master clocks and jitter, but ASRCs have solved that issue". If you have a DAC that is that sensitive to input jitter, I suggest you get rid of it, and avoid "audiophile" DACs designed by clueless "designers".
Which "scientific research"?
The stuff that you don't seem to be familiar with.
You'd better read here... (particularly the 3rd part). Q&A with John Swenson
Ah, yes, someone with strong academic credentials and no stake in the game... 🙂
You'd better read here... (particularly the 3rd part).
Part 3: How bit-perfect software can affect sound
Unixman, you are a technically reasonable man. Do you really think the engineers would have designed the USB stack to have problems with timing like this?
Packet jitter is most frequently caused by software. In many systems the time at which each packet is scheduled for transmission is computed in software. If that software is late in doing its job, the packet timing will change. This software that does the packet scheduling is almost always interrupt driven. The exact time this interrupt routine is called can be affected by other software on the computer, in particular process and thread priorities. The kernel scheduling protocol also has a significant effect.
This is how it works in linux (close to your nick):
The USB audio driver receives audio data every alsa period, let's say every 10ms. The driver is responsible for feeding the audio stream data into the lower-level USB-core driver, which prepares the actual USB frames. USB 1.x = a frame every 1ms. The frames are prepared in RAM and read by the DMA controller at the speed of its own clock. The CPU has no direct control over this clock, at least not that I could find in the docs (I did look quite hard - the EHCI controller has some registers for fine-tuning the clock, but IIRC it was exposed as a linux module parameter, not tweaked by the kernel directly).
For a 48kHz stream the usb-audio driver will split the data of each alsa period in the incoming buffer into packets of 48 samples and let the usb-core driver prepare the frames. The frames are put into memory, waiting to be read at the DMA reading pointer of the USB controller and transferred to the USB audio device. A period of 10ms will produce samples for 10 consecutive frames.
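As a toy illustration of that splitting (just the arithmetic, not the actual snd-usb-audio code): 48kHz divides evenly into 48 samples per 1ms frame, while 44.1kHz needs a fractional accumulator, giving 44 samples in nine frames and 45 in the tenth:

```python
def samples_per_frame(rate_hz, frames):
    """Distribute rate_hz samples/second over 1 ms frames using an accumulator,
    so only whole samples are sent but the long-term rate stays exact."""
    acc, out = 0, []
    for _ in range(frames):
        acc += rate_hz
        n = acc // 1000          # whole samples that fit into this 1 ms frame
        acc -= n * 1000
        out.append(n)
    return out

print(samples_per_frame(48000, 10))   # [48, 48, 48, ...]
print(samples_per_frame(44100, 10))   # nine frames of 44 and one of 45
```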
How about interrupts? The usb-audio driver will mark every 10th frame block with an interrupt request. When the USB controller hits this mark, it throws an interrupt, telling the driver (upon reading the corresponding register) "Hi, I am right now reading this part". That is equivalent to PCI(e) interrupts - the principle is just the same.
Now, one period of data is by far not the whole story from the buffering standpoint. The audio driver allocates not just one alsa period of data, but a so-called alsa buffer. Usually that is at least 4 periods; some software uses much more. The player/driver fills the whole buffer with data first and only then tells the USB controller to start reading the frames.
E.g. MPD uses 4 periods per buffer MPD/AlsaOutputPlugin.cxx at master * MusicPlayerDaemon/MPD * GitHub
By default MPD uses 500ms of buffer time MPD/AlsaOutputPlugin.cxx at master * MusicPlayerDaemon/MPD * GitHub
Now calculate with me - 500ms of data are prepared in RAM. After 125ms (a period boundary - IRQ from the USB controller) the chain is told to supply another 125ms of data. How long can that take a modern CPU? A few milliseconds at most? And again roughly 500ms of data is waiting to be read by the USB controller, at the pace of the controller's clock, not the CPU's. A pretty safe margin.
For USB async the feedback requires shorter reaction times since buffers in the async usb receiver are not so large. But we are still talking about pretty long times.
And what happens, should the CPU not make it on time? Xrun, buffer underflow, audible click.
A few years ago I did trace the USB packets in wireshark while loading my (at that time pretty low-performance) system VERY hard. In all monitored frames, every 1ms, there were 48 samples. As precise as it can get. For 44.1kHz there were 44 samples in 9 frames and 45 samples in the 10th frame. Again, it cannot be more precise.
Daphile uses squeezelite - hardcoded buffer time of 40ms, period time 10 ms squeezelite/squeezelite.h at 68770e4ed38d3a547912c39de69edaf41dcace84 * ralph-irving/squeezelite * GitHub
Two full buffers of samples are prepared before starting the playback: squeezelite/output_alsa.c at master * ralph-irving/squeezelite * GitHub - 80ms of margin.
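To make those margins concrete, here is a small helper using the numbers quoted above (MPD's default 500ms buffer in 4 periods, squeezelite's 40ms buffer with 10ms periods); the 44.1kHz rate is just an example:

```python
def alsa_timing(rate_hz, buffer_time_ms, periods):
    """Convert an ALSA buffer configuration into frame counts and the per-period refill deadline."""
    buffer_frames = rate_hz * buffer_time_ms // 1000
    period_frames = buffer_frames // periods
    return buffer_frames, period_frames, buffer_time_ms / periods

# MPD defaults: 500 ms buffer split into 4 periods -> refill deadline every 125 ms
print(alsa_timing(44100, 500, 4))   # (22050, 5512, 125.0)

# squeezelite: 40 ms buffer, 10 ms periods -> refill deadline every 10 ms
print(alsa_timing(44100, 40, 4))    # (1764, 441, 10.0)
```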
Where is the USB packet jitter caused by the software that guy talks about? All it takes is just to browse the freely available git repositories of linux alsa and the individual players and study the actual source code. It is not so complicated for a computer professional.
Where is the USB packet jitter caused by the software that guy talks about? All it takes is just to browse the freely available git repositories of linux alsa and the individual players and study the actual source code. It is not so complicated for a computer professional.
Yes, but that requires understanding the code. One of my most memorable moments was at a conference, standing chatting with a couple of people. One of them was wearing a T-shirt with the famous UNIX V6 code that has the comment "you are not expected to understand this". A young Austrian academic guy walked up and tried to impress us by saying "I do understand it, do you?". The guy who wore the T-shirt answered "Yes, I wrote it". In perfect Dennis Ritchie style. No pretensions, no one-upmanship, just matter-of-factly, but with a subtle hint that he did get the irony of the situation.
This well-known article by the author of PulseAudio nicely explains the general principles of computer audio What's Cooking in PulseAudio's glitch-free Branch .
A few words about the benefits of the RT kernel/setup. The overall latency of the chain is limited by the length of the alsa buffer. Hundreds of ms are fine for standard computer playback where data can be sourced very fast. However, what if a key on a midi keyboard is pressed? The generated sound would come out half a second later. In a recording session, the playback monitor would be delayed by half a second. Totally unusable, unacceptable.
For these scenarios the alsa buffer must be kept as small as possible. Still, the writing pointer (software - the driver) must at all times keep ahead of the hardware reading pointer of the soundcard/USB controller to avoid underruns. The time margin can be just a few milliseconds or less, and that really requires tight scheduling, assigning higher scheduling priorities to the playback process(es).
A good example is the configuration screen of the jack control panel - this is pretty tight timing: 3 periods of 64 frames each at 96kHz, an overall latency of max. 2ms. Unlikely to work reliably without the RT patches: https://westcoastsuccess.files.wordpress.com/2008/03/jack_settings_2ms.png?w=700
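For comparison, that jack example works out as follows; the commented-out sched_setscheduler call at the end is only meant to illustrate what "raised scheduling priority" means on Linux (it needs root or CAP_SYS_NICE):

```python
import os

# jack example from above: 3 periods of 64 frames at 96 kHz
periods, frames, rate = 3, 64, 96000
print(f"total buffer latency ~ {periods * frames / rate * 1000:.2f} ms")   # ~2 ms
print(f"per-period deadline  ~ {frames / rate * 1000:.2f} ms")             # ~0.67 ms

# Giving the audio thread SCHED_FIFO priority (Linux only, needs privileges):
# os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
```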
Unfortunately there are no valid alternatives to speak about, there's simply no way to get completely reliable answers. One should understand perception and use whatever one can, with a grain of salt. Taking ABX tests as a religion is insane, perhaps even more so than completely trusting one's own subjective ("sighted") listening. 😉
So what do you suggest as an alternative?
again: WHICH ONES?
Not disregarded. But if there are two explanations for the results of a sighted test - one that goes against pretty much all modern understanding of physics and information theory,
I'm afraid you have completely missed the point. What I am discussing concerns indirect side-effects on analog electronics and signals (and on D/A conversion). It has almost nothing to do with IT and digital data transmission. As I have clearly stated from the beginning, we are not discussing data transfer problems. I take it for granted that in every condition the system is working within specs, with no errors or problems whatsoever.
Not really. Not at all. ASRCs solve very little about jitter. The most they can do is (sort of) "filter" the jitter spectrum, much like traditional PLLs did on some old S/PDIF receivers. On the other hand, ASRCs "embed" the (remaining) input jitter into the output data stream, effectively turning jitter into data errors.
To quote one of my friends, who is a very accomplished professional in the broadcast industry: "we used to worry about master clocks and jitter, but ASRCs have solved that issue".
That's one of the reasons why S/PDIF and other synchronous links have been abandoned in favour of asynchronous ones such as UAC1/2 (in async mode).
perhaps you should check more carefully who he is and what he has done in the field. Nevertheless, arguments from authority are just plain nonsense. If you have valid arguments, criticize the ideas, not the person.
Ah, yes, someone with strong academic credentials and no stake in the game... 🙂
We are talking about "audiophile fine tuning", not technical problem solving. There are no problems whatsoever to speak about!Unixman, you are a technical reasonable man. Do you really think the engineers would have designed the USB stack to have problems with timing like this?
Yet, there can be (and there are) many "channels" through which the different system activities may cause various kinds of "changes" on the USB bus (at the physical, analog-signal level - that's what we are talking about).
For example: don't you think that the clock(s) running the PC and the one driving the USB lines are also affected by their own jitter? Wouldn't you guess that such jitter is likely influenced by the PC load and activities, e.g. via load-induced noise on the ground and power supply rails as well as local temperature variations, among other things?
That's just one of the literally countless ways through which the "software" activities on the system may actually change the analog signals (which represent the digital data) on the USB bus, and from there "propagate" down to the output audio signal.
As said we are not talking about obvious, direct effects but rather about very subtle, very indirect (and very unpredictable) ones.
Another more obvious (and more directly "IT/digital-related") source of such changes is the UAC1/2 async feedback control loop, adjusting the instantaneous data-rate to keep the receiver buffer filled.
to have a (small) chance of getting some insight into what I am talking about, you should rather have used a scope and a spectrum analyzer, looking e.g. for differences in the noise and signal spectrum under different conditions. 😉
A few years ago I did trace the USB packets in wireshark [...]
Sighted listening of course does not and cannot constitute proof, but it should not be disregarded so easily, either.
It only demonstrates a preference, not a positive result. While you use magic as justification, you have no case.
Unfortunately there are no valid alternatives to speak about, there's simply no way to get completely reliable answers.
Indeed. It is still possible that the earth is flat. But for all practical purposes it seems to act like it is round, so if someone claims it is flat, they'd better produce some pretty darn strong proof.
Not really. Not at all. ASRCs solve very little about jitter. The most they can do is (sort of) "filter" the jitter spectrum, much like traditional PLLs did on some old S/PDIF receivers.
Yes. Really. Completely. That is what an ASRC does - it isolates the outgoing clock from the incoming clock.
On the other hand, ASRCs "embed" the (remaining) input jitter into the output data stream, effectively turning jitter into data errors.
That makes no sense. Please explain.
That's one of the reasons why S/PDIF and other synchronous links have been abandoned in favour of asynchronous ones such as UAC1/2 (in async mode).
Wrong. Synchronous links are troublesome if you don't use an ASRC.
perhaps you should check more carefully who he is and what he has done in the field. Nevertheless, arguments from authority are just plain nonsense. If you have valid arguments, criticize the ideas, not the person.
Fair point - but as the links you provided only contain opinions and no actual evidence, I was assuming you were presenting them based on the supposed credentials of John Swenson.
For example: don't you think that the clock(s) running the PC and the one driving the USB lines are also affected by their own jitter? Wouldn't you guess that such jitter is likely influenced by the PC load and activities, e.g. via load-induced noise on the ground and power supply rails as well as local temperature variations, among other things?
Possibly, but that is not the point - the question is if that jitter affects the sound.
Julf said:
A few years ago I did trace the USB packets in wireshark
Please don't falsely attribute things to me that I didn't write.
to have a (small) chance of getting some insight into what I am talking about, you should rather have used a scope and a spectrum analyzer, looking e.g. for differences in the noise and signal spectrum under different conditions.
Actually I think we are more concerned with things you can hear, but I am sure we would all love to see scope and spectrum analyzer pics showing how much the RT kernel improves the sound. Looking forward to you posting them!
perhaps you should check more carefully who he is and what he has done in the field.
I am sorry, but that paragraph just does not describe the way computer audio works. The exact timing of packets is not tied to serving the interrupt requests, no matter who wrote it. Honestly, I do not think that guy has studied the PC side in detail, it's all general assumptions in that text. The DAC side yes, but not the source side.
Without linux and the vast information available in the source code and mailing lists visited by specialists (not marketing people) I would have known next to nothing about the inner details.
The packets are timed by the USB controller. The IRQ only tells the driver/software "I am at this point, get new data ready for me to read". Jitter in scheduling will not result in direct jitter of the packet transfers; those are independent processes.
For example: don't you think that the clock(s) running the PC and the one driving the USB lines are also affected by their own jitter? Wouldn't you guess that such jitter is likely influenced by the PC load and activities, e.g. via load-induced noise on the ground and power supply rails as well as local temperature variations, among other things?
That's just one of the literally countless ways through which the "software" activities on the system may actually change the analog signals (which represent the digital data) on the USB bus, and from there "propagate" down to the output audio signal.
As said we are not talking about obvious, direct effects but rather about very subtle, very indirect (and very unpredictable) ones.
Yes, noise can have many effects. How does the RT kernel (i.e. faster serving of interrupts) help? Because that is the center of the argument here.
Another more obvious (and more directly "IT/digital-related") source of such changes is the UAC1/2 async feedback control loop, adjusting the instantaneous data-rate to keep the receiver buffer filled.
I do not like some "perhaps, maybe". Be specific, what can go wrong with the USB feedback related to scheduler? The source code is available
linux/sound/usb/endpoint.c - Elixir - Free Electrons
Feedback format specification http://www.scaramanga.co.uk/stuff/qemu-usb/usb11.pdf page 64
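For reference, the feedback value defined in that spec is simply "samples per frame" expressed as a fixed-point number; for a full-speed UAC1 device it is commonly a 10.14 value packed into 3 bytes. A simplified sketch of the encoding (not the ALSA implementation):

```python
def uac1_feedback_value(sample_rate_hz):
    """Encode 'samples per 1 ms frame' as 10.14 fixed point in 3 bytes,
    as used by full-speed USB Audio Class 1 async feedback endpoints."""
    samples_per_frame = sample_rate_hz / 1000.0
    fixed = round(samples_per_frame * (1 << 14))   # 14 fractional bits
    return fixed, fixed.to_bytes(3, "little")

value, raw = uac1_feedback_value(44100)
print(hex(value), raw.hex())   # 0xb0666 -> nominal 44.1 samples per frame
```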
to have a (small) chance of getting some insight into what I am talking about, you should rather have used a scope and a spectrum analyzer, looking e.g. for differences in the noise and signal spectrum under different conditions. 😉
Do you think I have never hooked my scope to the soundcard output or measured/analyzed the frequency spectrum in Arta/jaaa? Of course PC activity affects the output to some extent, depending on how well the sound device is isolated from PC noise. I have fought countless ground loops - they are a pretty common problem.
Again, how does the tighter scheduling in RT kernel improve the noise/spectrum?
Computer audio is no voodoo like some would want us to believe. You do not have to be a large corporation to learn the internals and to avoid believing vague, indefinite claims.
I have a question about the Alsa buffers,
I have disabled my internal soundcard in the bios,
so Daphile doesn't see any soundcard; my DAC is connected
with USB to the Amanero Combo 384.
Is the concept of the Alsa buffer still valid in this scenario?
I have installed the RT image and the normal 64-bit version
multiple times, one after the other, and I can still hear differences -
even my girlfriend can, but we didn't wear blindfolds ;-)
it can also be an issue of the sum of the parts
the NUC I'm using is not the fastest one (Celeron)
sometimes a little detail can make a big difference
It is a regular USB audio device in async mode. That description http://www.diyaudio.com/forums/pc-b...e-music-server-player-os-249.html#post5211904 is for USB soundcards specifically.
Example values of period_size and buffer_size accepted by this USB soundcard are in Amanero card now supporting DSD native * Issue #12 * lintweaker/xmos-native-dsd * GitHub. A buffer size of 44100 audio frames at an 88.2kHz sample rate means 500ms buffer time and 250ms period time. Surprisingly long. Of course, the longer the better (to minimize the risk of underruns).
that could be (plus he likely uses Windows, not Linux), but consider that that series of articles is aimed at a general, non-specialist audience, so the given explanation may have traded exactness for simplicity. I would not take such details too literally.
Honestly, I do not think that guy has studied the PC side in detail, it's all general assumptions in that text. The DAC side yes, but not the source side. [...]
here is where the misunderstanding arose: if you read back carefully what I have written, you'd notice that I have always used words like "changes" and "differences". I never talked about "improvements" in that sense.
Yes, noise can have many effects. How does the RT kernel (i.e. faster serving of interrupts) help? Because that is the center of the argument here.
[...]
Again, how does the tighter scheduling in RT kernel improve the noise/spectrum?
Possible "improvements" (if any) has to do only with human perception, not with objective, technical quantities!
When I said something like "noise spectral and timing changes" I really meant that: it may (and likely will) somewhat change the low-level "side-effects" of data transmission. It's quite unlikely that they are exactly the same in one case and the other. This does not mean that e.g. noise gets reduced or becomes somehow "better" from a purely technical POV. Just (slightly) different.
The key word is perception. If you leave it out of the picture, most (if not all) of the discussions about audio (in general, not only computer audio) sound like nonsense. OTOH, if you take perception into account, a whole lot of otherwise unexplainable things become possible.
When you consider perception, our usual quantitative / "mathematical" POV (which implicitly assumes an LTI system) goes astray: 2+2 ≠ 4 and 4 is not always > 2 either, so to speak...
That's actually one of the main reasons for just about all the incomprehension between "subjectivists" and "objectivists" among audio enthusiasts.
sure, of course it's no voodoo... but only if you consider just the technical aspects, leaving out the subtle and (apparently) "irrelevant" side-effects and the peculiar characteristics of the final "receiver", that is, the perception of the human listener. When you begin to take that into account, all sorts of unexpected things show up.
Computer audio is no voodoo like some would want us to believe.
sure, of course it's no voodoo... but only if you consider just the technical aspects, leaving out the subtle and (apparently) "irrelevant" side-effects and the peculiar characteristics of the final "receiver", that is, the perception of the human listener. When you begin to take that into account, all sorts of unexpected things show up.
Yes, such that if you believe there to be differences, you will hear differences, even if there is no actual physical difference. That is indeed by far the most important factor.
that could be (plus he likely uses Windows, not Linux), but consider that that series of articles is aimed at a general, non-specialist audience, so the given explanation may have traded exactness for simplicity. I would not take such details too literally.
Yeah, I have noticed that the general, non-specialist audience, especially in audio, gets fed nonsense and gullibly believes it's the truth. That article is plainly wrong, not simplified exactness. I wonder why you linked it as an argument if we should not take it too literally...
But no need to fuss about it 🙂
It's really funny to see the same people still debating the same stuff with the same nonsense arguments.
And that's been ongoing - at least from my perspective - since 2007.
J. Svensson, that name was mentioned earlier, was one of the first who confirmed the rather positive effects of my Touch Toolbox.
That was the first time I made everything that's been discussed in the "Linux Audio the way to go" thread over here at DIYA available to the public.
And: the SB Touch was running an rt-kernel, btw! And Moode is also offering an rt-kernel.
USB and its related audio impact were mentioned.
You can nowadays spend hundreds of bucks on USB gadgets which repower, reclock, regenerate and isolate. And guess what!?!? These devices work! Actually they work quite well.
And it's not about Bits=Bits. It's about noise and timing.
My conclusion still is: The better the downstream HW (DAC and filters), the less impact you'll see from optimizations on a computer.
People who ask for measurements are well aware that it's almost impossible to prove the obvious. Even industry professionals fail to do so. (I talked to them about the subject.)
Bottom line: many of the people around - hobbyists and professionals - follow a theoretical path to improve things. And then they measure (as much as possible) and finally listen to what the result sounds like. They all "listen"!
Nailing the subject down properly is much too complex, at least as complex as an entire audio system with all its numerous flaws.
Enjoy. I do.
It's really funny to see the same people still debating the same stuff with the same nonsense arguments.
Indeed. Injecting voodoo into very straightforward stuff.
J. Svensson, that name was mentioned earlier, was one of the first who confirmed the rather positive effects of my Touch Toolbox.
I assume you are talking about John Swenson? If not, who is "J. Svensson"?
Good for you. Too bad anyone who actually understands the linux kernel has questioned your claims.
You can nowadays spend hundreds of bucks on USB gadgets which repower, reclock, regenerate and isolate. And guess what!?!? These devices work! Actually they work quite well.
If you say so. The funny thing is that that would be very easy to verify with measurements. Why have we never seen any measurements confirming the supposed benefits?
People who ask for measurements are well aware that it's almost impossible to prove the obvious.
Ever heard of a double-blind ABX? Standard procedure in the industry.
And: the SB Touch was running an rt-kernel, btw! And Moode is also offering an rt-kernel.
No wonder these low-performance devices with small audio buffers use rt kernels. The playback thread needs a raised scheduling priority to reliably avoid xruns. I would use the same RT kernel on an RPi for an audio appliance. See what Tim says about the RT kernel in Moode: Moode Audio Player for Raspberry Pi - Page 52 - Software - Computer Audiophile. Also, patches for >192kHz RPi I2S playback are included only in the advanced kernels.
Recently I was talking to a guy with some odroid board - his USB2 controller with a USB Audio Class 2 async DAC was throwing 8k IRQs/s, i.e. one for every single USB2 microframe. Of course only the RT kernel could handle that load somewhat reliably (and even then poorly).
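(USB 2.0 high-speed runs 8 microframes per millisecond, so an interrupt on every microframe is exactly 8000 IRQs/s.) Anyone wanting to check such a claim on their own machine can sample /proc/interrupts on Linux; a quick sketch (the 1000 IRQs/s threshold is an arbitrary cut-off to hide idle lines):

```python
import time

def irq_counts():
    """Sum the per-CPU counts for each interrupt line listed in /proc/interrupts."""
    counts = {}
    with open("/proc/interrupts") as f:
        for line in f:
            parts = line.split()
            if not parts or not parts[0].endswith(":"):
                continue                      # skip the CPU header line
            name = parts[0].rstrip(":")
            counts[name] = sum(int(p) for p in parts[1:] if p.isdigit())
    return counts

before = irq_counts()
time.sleep(1)
after = irq_counts()
for irq, total in after.items():
    rate = total - before.get(irq, 0)
    if rate > 1000:                           # show only busy interrupt lines
        print(irq, rate, "IRQs/s")
```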
Again - xruns, buffer underflows, not jitter in packets.
RT may even help with some specific hardware on x86 devices, why not. I would not want to listen to xrun dropouts.