Moode Audio Player for Raspberry Pi

Because it's not. Google "OS jitter"; there is some research on this out there.

Because it is. OS jitter is properly known as scheduling jitter. It applies to applications and hardware running an RTOS. You can't implement an RTOS unless you understand the interrupt capabilities of the underlying CPU architecture (hardware).

None of this is related to audio playback.

I can "clearly" hear an SQ improvement (as most would say, it sounds more analogue compared to "digital"), and the only really logical reason for that is OS jitter.

This is nonsensical. OS Jitter has nothing to do with audio quality.

Like I said in the other thread at AS, I'm really curious what a tweaked RT kernel could do here further.

Just like your other tweaks, an RT kernel will do nothing for audio quality. Audio playback is a non-real-time process; data is buffered in memory. If a CPU is not doing much apart from topping up the buffer for the audio application, then what is an RT kernel going to improve?
 
Not going to argue about this; try it or don't :) It would be great if theory matched practice.
But... what moves the buffer? It's just there so the CPU has enough to make a continuous stream... though I'm not an IT expert, so... I can just suggest trying it for yourself; if you don't hear a difference, great.

Edit: and I know how an async DAC works... the theory sounds great, but it still makes a difference, though not a night/day difference.
 
I think the principle requires some explanation, so that people do not get confused by the OS scheduling claims.

The CPU and the device (USB controller, PCI(e) controller in an internal soundcard, I2S peripheral of an ARM SoC such as the RPi) pass data through RAM. The CPU (driver) tells the device (simplified and generalized): your RAM block starts at address A and ends at B (this memory block is called the buffer in alsa/WASAPI), and after every X samples throw an interrupt to inform me about your progress (the memory region between interrupts is called a period in alsa/WASAPI and a buffer in ASIO). The device confirms the setup. Alsa/WASAPI can use two or more of these regions within the whole buffer; ASIO always uses two.

A playback process starts and the CPU starts copying samples into that memory region. Typically after the first period (or more) of data is written, the CPU tells the device to start reading. The device reads the data from RAM into its internal buffer using Direct Memory Access (DMA), to be sent on via I2S to the DAC, onto the USB bus, or wherever. When the device crosses a period boundary configured by the CPU, it raises an IRQ, telling the driver "I have just finished reading here". Meanwhile the app has filled at least one more period with fresh data. Upon receiving the IRQ, the driver fills another period with new data (e.g. X1 has finished being read and the device is reading X2, so the CPU will fill block X3). Of course the buffer is filled in a circular manner (after e.g. X8, period X1 is used again), and the DMA hardware in the device knows it must wrap around after reading X8 too.
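The circular period scheme can be sketched in a few lines of Python. This is a toy model only: the period count and size are arbitrary illustration values, not real driver code.

```python
# Toy model of the circular buffer/period scheme described above.
PERIODS = 8        # X1..X8 in the text
PERIOD_FRAMES = 4  # frames per period (unrealistically tiny, for clarity)

ring = [None] * (PERIODS * PERIOD_FRAMES)  # the whole alsa-style buffer

def fill_period(index, samples):
    """CPU side: write one period of fresh samples; index wraps circularly."""
    start = (index % PERIODS) * PERIOD_FRAMES
    ring[start:start + PERIOD_FRAMES] = samples

def read_period(index):
    """Device side: DMA reads one period; after X8 it wraps back to X1."""
    start = (index % PERIODS) * PERIOD_FRAMES
    return ring[start:start + PERIOD_FRAMES]

# The CPU stays ahead of the device: while period n is being read,
# period n+2 is being filled, exactly as in the X1/X2/X3 example.
fill_period(0, [0, 0, 0, 0])
fill_period(1, [1, 1, 1, 1])
for n in range(20):                       # 20 > 8, so both sides wrap around
    fill_period(n + 2, [n + 2] * PERIOD_FRAMES)
    assert read_period(n) == [n] * PERIOD_FRAMES  # device always sees fresh data
```

As long as the fill index stays ahead of the read index, wrapping past X8 is invisible to the device.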

Typically the application uses a blocking write: it tells the driver "I want to write new data, wake me up when there is enough free space in the buffer", and returns the CPU to the OS for other processes. In every IRQ handler the driver checks the device's read pointer, and when the already used-up part of the buffer is long enough, it wakes the application to finish the write and start a new cycle.

The device keeps running continuously. It can happen that the CPU does not keep up with the device's pace and fails to write new samples ahead of the device's read pointer. That produces the infamous xrun (buffer under/overrun), which is clearly audible as a click/glitch.

The further the write pointer is ahead of the read pointer, the larger the audio latency the chain incurs, but the safer the timing margin for the CPU/OS to supply fresh data. Digital audio work (e.g. synthesis from a MIDI keyboard, audio recording with monitoring, interactive audio editing) requires this latency to be very short, at single milliseconds. Of course this calls for the CPU to supply data very often, with strict timing. If only two short periods are used for the buffer (as in ASIO; jackd recommends at least 3 periods) and the CPU does not feed new data to X2 while X1 is being read by the device, a glitch will occur. This is where the realtime extensions to the kernel find their use, and where large scheduling jitter can cause timely-delivery problems. But the basic rule for playback safety is to configure the latency as large as can be accepted for the given use case, which for plain listening means hundreds of milliseconds, enough time for any reasonable OS. In fact the very popular hardware FIFOs for the RPi add latencies of hundreds of ms too, to avoid under/overruns within the typical listening time.
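The latency arithmetic is simple: total buffered frames divided by the sample rate. A quick illustration, with period sizes that are arbitrary examples rather than moOde defaults:

```python
# Buffer latency = total buffered frames / sample rate.
RATE = 44100  # CD sample rate, frames per second

def buffer_latency_ms(periods, period_frames, rate=RATE):
    return 1000.0 * periods * period_frames / rate

# Two tiny ASIO-style periods: ~2.9 ms, very tight timing for the OS.
low_latency = buffer_latency_ms(2, 64)
# A large playback buffer: ~371 ms, trivial for any reasonable OS to keep fed.
safe_latency = buffer_latency_ms(8, 2048)
```

The two orders of magnitude between those figures are exactly the difference between interactive audio work, where an RT kernel earns its keep, and plain playback, where it has nothing to do.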

USB is more complex, but the principle is identical: the USB controller reads pre-composed frames from RAM onto the USB bus.
 
Well, thanks for the write-up. Is it "set in stone" that, for example, every USB controller actually makes use of DMA? (I didn't know DMA was a thing, so thanks for explaining.)

Like I said, I'm not an IT expert; I just tried to optimize the Pi "as far as possible" without negative side effects. I'm also sure not every setting I set will actually make a difference. I can just report that, in the end, this stuff "overall" makes a difference, at least on my end. I can only suggest trying it for yourself; if you want to think all of this is placebo without trying, sure, you're free to do so.
 
Hi,

For some info on upcoming moOde 8 (Q1 2022) visit our Forum :)
https://moodeaudio.org/forum/showthread.php?tid=4697&pid=39272#pid39272
(attached screenshot: moode-r800.png)

-Tim
 
Hi Tim,
I downloaded the latest moode player 7.6.1, flashed the image, and tried to boot my newly purchased RPi 4B. However, the device's green light blinked 8 times, indicating the SDRAM failed. I tried flashing 2 to 3 times and tried the older version 7.6.0 image as well; all ended up with the same result.

Just to isolate the issue, I tried the piCorePlayer and Volumio images. Both booted the Pi successfully, which excluded a Pi or SD card hardware problem. Both are brand new anyway.

At this stage I assumed the downloaded moOde image was somehow corrupted. I ran an MD5 check using the command:
certutil -hashfile moode-r761-iso.img MD5
I got 27b9a8c8e3508667cb03b3ed43598e27, whereas the moode download page reads MD5: 6ee0d9d956d29285d03fed7bbbe95de8. So it sort of confirmed my assumption that the image file is corrupted. I also ran the same check for the piCorePlayer and Volumio iso images and they passed the MD5 check.

So I have since downloaded the image a couple more times, even with a different browser, and keep getting the same corrupted copy. I don't know what else can be done from my end. Could anyone else please check whether their copy's MD5 is the same as mine? Otherwise it could be a server copy issue, though that is highly unlikely, as many other users would then have the same issue as mine.

Felix
 
Something to do with the new revision 1.4 and higher Pi-4 boards having a component that needs updated firmware files in the OS, which apparently the new RaspiOS Bullseye release provides. I didn't bother researching beyond the post below after we got confirmation in our Forum that stock Bullseye, which we are using for upcoming moOde 8, booted fine on the rev 1.4 and higher 4's. I don't know if there is a compatible firmware update for the older RaspiOS Buster.
https://forums.raspberrypi.com/viewtopic.php?t=315742
 

We hash the .zip
https://github.com/moode-player/moode/releases/tag/r761prod
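That explains the mismatch: the published MD5 covers the downloaded .zip, not the .img extracted from it. A quick, platform-independent way to verify is to hash the file you actually downloaded; the filenames below are illustrative, not the exact release artifacts.

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    """Stream a file through MD5 so large disk images don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Compare the download itself against the published hash:
# md5_of("moode-r761.zip")  # this is the file the published MD5 covers
# md5_of("moode-r761.img")  # the extracted image: a different file, different hash
```

Hashing the extracted image will always disagree with a checksum published for the archive, which is exactly what Felix observed.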
 
Just wanted to chime in and say thank you to the moOde contributors. I'm running an RPi Zero 2 W and a DigiAMP+ with some CHN50s for a little music box. It's running an LG DVD-RW USB drive for CD access, and I'll add a 2.8" HDMI touch screen with some buttons for fast access; for this reason I have max_usb_current=1 in config.txt.

Code:
uname -a
Linux moode 5.10.63-v7+ #1496 SMP Wed Dec 1 15:58:11 GMT 2021 armv7l GNU/Linux

top - 11:04:27 up 26 min,  1 user,  load average: 0.79, 0.73, 0.57
Tasks: 140 total,   1 running, 139 sleeping,   0 stopped,   0 zombie
%Cpu(s): 18.9 us,  0.5 sy,  0.0 ni, 80.5 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :    428.0 total,    154.8 free,    112.0 used,    161.2 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    255.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                             
  856 mpd       20   0  239892  46276  20524 S  75.8  10.6  16:40.77 mpd

I've increased the buffer sizes as much as the 512MB will allow with SoX upsampling active for CD images, and I've added a 1024MB swap file that it has occasionally used, preventing any hanging. The CPU governor is set to performance too. The top output above was taken while streaming internet radio.

I've done an update on the OS and all seems happy (I said no to the web page updates to stick with the moOde-provided ones). The question I have is: can I do an upgrade on top of this?
 
Excellent write-up phofman, it explains perfectly why an RT kernel is unnecessary.

Unfortunately phofman is wrong, wrong in his conclusions. And he has been for more than a decade. Besides that, he leaves out a lot of other aspects. E.g. his text completely ignores the fact that there's a lot more going on inside a computer, and a lot of tasks inside that mess are asking for highest priority. You simply can't look at the audio process alone. The whole thing is extremely complex.

Fact is, the kernel is getting better and better at managing the mess. The rt-kernel patch is getting more and more integrated into the standard kernel too.

Another fact is that DACs have progressed heavily over the last 10 years.

However, most of them still respond to upstream "adjustments" and changes.

Have a look e.g. at the Yulong measures to fight the upstream mess.

(attached image: Yulong-Aquila2.png)


As you can see, a simple USB receiver chip, as used in many DACs out there, just won't cut it. It requires serious effort to isolate from the upstream mess. It seems, from what I read, Yulong did quite a good job on that part. My Gustard A18 DAC acquisition from last year, though, still responds heavily to upstream tunings (USB filters, USB cables, transport tunings, and so forth). I mean, it's a great DAC -- if the upstream mess is gone.

A few DAC manufacturers make these efforts despite the fact that their DACs "measure" well in terms of "standard" audio measurements. Because what they measure and what they hear are two different subjects.

Back to the kernel. As long as DACs still suffer from the upstream mess, the transport has an impact on the overall audio performance. It of course remains a subjective matter whether changes are noticed in whatever system under whatever conditions.
An rt-kernel, in my experience and in my case, can still contribute nicely to the overall perceived sound. I am running my own rt-kernel based on the latest RPi kernel versions on RPiOS, and wouldn't run without it.

However, an rt-kernel alone doesn't do anything. What it does is put you in charge of getting your system "tuned". Like tuning a 100-string guitar. It's not just the audio process; it's the entire platform you need to look at.
You simply can't handle big-city traffic issues by just clearing one street.

All that requires expert knowledge. Or proper advice.

There were times when I provided the custom kernels, including the rt-kernel for Moode. I mainly did it simply to prove all these "phofman" types of guys wrong. From the feedback Tim and I received in those days, people seemed to support my position.

Bottom line: it's IMO still worth going realtime -- at least if done right.

All of us audiophile nerds are more than happy to gain a subtle improvement here and there.
And that keeps us going.


Enjoy.
 
Full test setup of my wife's little not-so-boombox:

(attached photo: IMG_9486.jpg)



CHN50 with DigiAMP+ and RPI4 2GB
7" Touchscreen via HDMI
Meanwell 19V 90W SMPS (model GSM90A19-P1M)
Powered USB2.0 hub
Matek UBEC 7-27V in, 2x 5V 6A out buck regulator.
LG GP70NS50 external DVD/CD drive

The screen is currently screwed into a travel box for safekeeping. I'm buying the ply tomorrow to make the speaker cabinets.
 
Hello, I have an issue with my iFi Audio Zen DAC V2. I have an Allo USBridge Signature streamer running moOde Audio. The output is via USB. No matter which settings I use, any DSD file is being transcoded on the fly to PCM 24/96, although my DAC is clearly capable of playing DSD files. I tried disconnecting and reconnecting the USB cable, and sometimes this solves the issue, but it seems completely random. My settings have been checked: the stream is direct, with no resampling or SoX. As I said, disconnecting and reconnecting the USB cable may sometimes solve the problem, but there is no guarantee it will work every time. The PCM files are all sent with no transcoding. Is this a firmware/driver issue?