I'm pulling up stakes in the Windows camp; dual-booting Linux is step #1

I've used Fedora for decades; it serves as the advance code for Red Hat, which is commercial and solid as heck. I might actually say that Red Hat is the gold standard, not wishing to get into a fight. Fedora runs ahead, and once it's stable it gets incorporated into Red Hat (or at least that's the way things used to run). I haven't followed these things closely. I'll admit to not having tried Debian or Ubuntu. Anything else I did try was very early on, and I needed solid more than anything.

Since it was mentioned, I have downloaded Mint LMDE and will give it a spin on a small computer box that sits on the network beside my NAS doing odd jobs. I really want to migrate to Linux of any flavor and detach from Micro$oft completely. Win 11 - never. I haven't run Office on the Windows box (I use OpenOffice, which is fine).
For enterprise servers with a support model, Red Hat is certainly the gold standard. No argument there. I always preferred FreeBSD over Linux for my servers, but Linux is no slouch. At my ex-employer we used CentOS almost exclusively for our "house brand" data centre managed services - firewalls, load balancers, servers, and so on. It was solid as can be.

When I dumped OpenSUSE a little over a year ago (which was a short relationship) I narrowed my choices down to Debian and Fedora. Either one would have been a good choice, but I ultimately chose Debian because it seems like there is a .deb for just about everything out there, including some non-free software that I use (Corel Aftershot Pro, etc.).

Before that, it was Manjaro, which was great until it wasn’t. Hindsight is 20/20 but I realize now I should have just spent the extra 20 minutes to install and configure Arch. Some day I will. My nephew here in Japan runs Endeavour and seems to love it.

We are spoiled with choices here in the Linux camp. There is something for everyone. That’s a double-edged sword in a way, and can lead to incessant distro hopping. But we learn more every time…
 
I've been using Linux for a long time now and have watched it develop from SuSE Linux on diskettes to what it is today. I have seen how Linux changed the commercial server market and transformed businesses. I regard myself as an experienced user, though not a whizz-kid, so I still have to look things up when I need to troubleshoot something... but I'm comfortable with that.
My favourite is Fedora/Red Hat with the KDE Plasma desktop. Unfortunately, I have to keep using Windows 10 and 11 due to software that only runs on Windows. And to be honest, apart from Microsoft's spyware and privacy issues, I do like how it works; Win 11 is snappy and fast on modern hardware and looks good. There are enough annoyances left, but Windows has certainly got better over time, with good driver support. The BIG problem remains: it's not open source, so you have no chance of knowing what happens under the bonnet, and you are dependent on proprietary stuff. I hate Windows Update and having to reboot after updates...
 
I have heard/read KDE/Plasma can still be a bit wonky, but I haven’t used KDE since about 2003. It’ll get fixed though, I have no doubt.
I think they have it ironed out. Although Linux Mint Cinnamon is my daily driver now, I periodically boot into and use Arch running KDE Plasma. I haven't had any significant issues with Plasma that I recall since the initial release of version 6.0. Now, Arch itself, well, it is a rolling release that is on the bleeding edge - updates will keep you on your toes.
 
Now, Arch itself, well, it is a rolling release that is on the bleeding edge - updates will keep you on your toes.
Indeed, and increase that by an order of magnitude with Manjaro. Okay, I'm exaggerating. I must say, though, that the overall feeling of running a bleeding-edge, rolling-release distro is palpable, exciting. Like a sports car. I've realized it might not have been the best fit for my temperament. On the other hand, I feel like I have been waiting a long time for Debian 13 to be released; I'm looking forward to a more recent kernel and GNOME. The stability is worth the wait, though.
 
The one thing I think Arch does better than any other distribution I have tried is its handling of GRUB. It doesn't need to regenerate the boot menu every time it updates the kernel.

I appreciate that because I have 5 different installations - I use them to test script code I maintain. Arch handles my boot menu, and in that menu there is an entry for each of the other installations. I use chainloading, so I don't need to mess with anything when the kernels in the other distributions are updated.
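For anyone who wants to copy the idea, the whole thing is one custom entry that never changes. A minimal sketch of what mine look like, with the UUID and EFI path replaced by placeholders (and assuming the other distro keeps its own GRUB on an EFI system partition):

```
# /etc/grub.d/40_custom on the install that owns the boot menu
menuentry "Debian (chainload)" {
    insmod part_gpt
    insmod fat
    # placeholder UUID of the other distro's EFI system partition
    search --fs-uuid --set=root 1234-ABCD
    # hand off to that distro's own GRUB, so its kernel updates never touch this menu
    chainloader /EFI/debian/grubx64.efi
}
```

Run grub-mkconfig -o /boot/grub/grub.cfg once after adding it and you're done; the other distro's kernel updates only ever touch its own GRUB.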
 
In Wayland, the compositor is the window manager. I don't miss the overhead of the window manager being a separate program.

My programs call GTK and can use X11 and Wayland interchangeably. I can't tell which is running by looking at the screen. Dragging windows is much faster under Wayland.
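If I ever do want to know which one I'm on, a few lines of GDK will tell me. A minimal GTK 3 sketch (the file name and build line are just for illustration, not from any of my real programs):

```c
/* backend_check.c - print whether GDK picked Wayland or X11.
 * Build: gcc backend_check.c $(pkg-config --cflags --libs gtk+-3.0)
 */
#include <gtk/gtk.h>
#ifdef GDK_WINDOWING_WAYLAND
#include <gdk/gdkwayland.h>
#endif
#ifdef GDK_WINDOWING_X11
#include <gdk/gdkx.h>
#endif

int main(int argc, char **argv)
{
    gtk_init(&argc, &argv);
    GdkDisplay *display = gdk_display_get_default();

#ifdef GDK_WINDOWING_WAYLAND
    if (GDK_IS_WAYLAND_DISPLAY(display))
        g_print("Running on Wayland\n");
#endif
#ifdef GDK_WINDOWING_X11
    if (GDK_IS_X11_DISPLAY(display))
        g_print("Running on X11\n");
#endif
    return 0;
}
```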
Ed

Dragging windows?

Yikes!

How about interprocess communication? Does it support multithreading? Symmetric multiprocessing? What's the interrupt latency?

Come on, folks. Today's hardware is multi-core and our OSs need to reflect that!

https://linux-kernel-labs.github.io/refs/heads/master/lectures/smp.html
 
BTW, back to practical things...

My damn W11 PC would not boot up this AM... I didn't even see the boot logo. I had no time to waste, so I used my W10 machine (yes, I resisted updating that one). The work machine works fine.

A little while ago, I decided to give it another try... and with more patience I realized that W11 had installed an update of sorts. MIND YOU, I had modified the stupid registry so it would NOT do that... but the fu%%%r did it anyway... and shut itself down overnight (I don't turn off my machines).

I have the machines hooked up via a nice KVM switch... and I've got gobs of USB cables, so I just moved the important stuff from W11 to W10, only to find out that I had forgotten to back up Chrome to the NAS for a year... meaning I was missing some passwords.

Eons ago, Firefox made it very easy to share a configuration between machines... all you had to do was mount the network drive and have everything moved there. But Chrome is a PITA. Their hackers -don't call them engineers- just made it incredibly onerous...

Anyhow... with more time in the afternoon, I sat down with the W11, disconnected everything, turned the power off for a minute, etc... and then turned it on... only to realize the stupid thing was installing a fairly small update.

Gotta tell ya... W10 is much better at this sort of thing.

IF I could get a good browser, music processor and mail tool running under vxWorks 7 SMP, I'd be there in a microsecond. I'm really sick of dealing with companies that use idiotic hackers that have no clue about their users or how to architect a software product.

Oh well... btw, I'm posting this on the W10 machine. I think I'm gonna reverse roles... I'll update the W10 machine with my browser passwords and email user file and then I'll just use W11 for ripping CDs and DVDs.
 
^ Actually, Ed, yes. You'd be surprised how much of an application's workload can be done in parallel by a modern SMP operating system on multi-core hardware. A lot of it is done under the covers by the compiler, the operating system and the hardware architecture (think Board Support Package). Pretty much all of this has become mainstream (meaning low-cost multi-core) in the last 15 years, or less.

Examples of application work done in parallel would be searches, arithmetic calculations, data I/O to memory (DMA in this case), predictive analysis that preloads data from memory, etc. Multi-threading still shares the CPU cycles, but with multi-core programming and processes -not threads- you split the work in parallel. In addition, SMP provides seamless access to memory via shared caches and shared memory blocks. I also believe you can even split a process across cores, with threads of the "same" process going onto different cores. See below.
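To make that concrete, here's a toy sketch of what I mean by splitting work across processes over shared memory. The array size and worker count are arbitrary; a real system would size the slices to the core count:

```c
/* split a sum over a shared array across one process per slice */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define N        (1 << 20)
#define NWORKERS 4            /* e.g. one per core on a quad-core box */

int main(void)
{
    /* shared, anonymous mappings: visible to the parent and all forked children */
    double *data    = mmap(NULL, N * sizeof(double), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    double *partial = mmap(NULL, NWORKERS * sizeof(double), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    for (int w = 0; w < NWORKERS; w++) {
        if (fork() == 0) {
            /* each child sums its own disjoint slice - no locking needed */
            long lo = w * (N / NWORKERS), hi = lo + N / NWORKERS;
            double s = 0.0;
            for (long i = lo; i < hi; i++)
                s += data[i];
            partial[w] = s;
            _exit(0);
        }
    }

    while (wait(NULL) > 0)
        ;                      /* reap all children */

    double total = 0.0;
    for (int w = 0; w < NWORKERS; w++)
        total += partial[w];
    printf("total = %.0f (expected %d)\n", total, N);

    munmap(data, N * sizeof(double));
    munmap(partial, NWORKERS * sizeof(double));
    return 0;
}
```

Each child gets its own disjoint slice, so there is no locking and no contention; the kernel is free to schedule the four workers on four different cores.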

Examples of the OS working in parallel would be handling the automatic distribution of tasks across cores (static core affinity or dynamic load balancing), managing tasks and processes, hardware I/O and so on. Splitting the threads of a process across different cores is also handled by the SMP OS, by creating "virtual" copies of the process on different cores and then using the cache to handle the memory - however, this imposes a performance hit because the shared memory is exclusive to one virtual process at any one time... hence it can block the other virtual processes.
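Static core affinity, by the way, is something user space can request directly on Linux. A minimal, Linux-specific sketch (pinning to core 0 here is an arbitrary choice):

```c
/* pin the calling process to one core via the Linux scheduler API */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);          /* core 0 here; any online core works */

    /* pid 0 means "the calling process"; after this the scheduler keeps us on core 0 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("now pinned; currently running on CPU %d\n", sched_getcpu());
    return 0;
}
```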

In my work, I've never had to split a process (one that had to use the same memory blocks) across cores. I may instantiate several copies of a process, but they usually work on different memory ranges so they don't block each other (for example, processing different parts of an array or different messages in a queue). They might still block on hardware access, but that is also solved through SMP synchronization.

Normally, this is outside the scope of the application programmer, except in the embedded, real-time world where "applications" actually sit fairly deep in the OSI stack. Sometimes, in order to streamline processing, the application programmer will pin given processes to specific cores, but this is rare in the application-programmer workspace, as the code is designed to be portable across many hardware architectures... hence they use "adaptation layers" to communicate with the OS and the BSP.

+++

Why is all of this important and relevant?

Because you folks keep discussing Linux distros based solely on the User interface and the Presentation Managers.

Did you folks ever use YELLOW DOG LINUX? Now, that was a proper Linux. Multi-core programming. We used it in some glass-cockpit display avionics that did lots of graphics processing. That was about 18 years ago. Our OS was not SMP, but it was already pushing toward the future -two of us did the architecture-. Our hardware was PPC-based SBCs (single-board computers), not Apple. If you wanted to do graphics, taking big gobs of data and rendering the resulting displays in real time, that was the distro.
 
So y'all are really discussing just the usage of the distro, not the actual OS. As I noted, I use Ubuntu because it's the gold standard.

Except for Yellow Dog, which was specifically enhanced for multi-core, and Wind River, which did have a version of Linux with RT enhancements.

Displays... not when you are drawing weather maps, situational displays, radar displays, at 100 Hz. Getting huge amounts of data that you have to filter. Data came from several data streams and instruments and it had to be processed into matrices and then fed into the GPUs.

Remember that was in the late 2000s... around 2007. Today, the hardware is faster, indeed, but we also have much more data.

Then, the refresh rates have been updated, and so have the phase-locked loops... it's not unusual today to run PLLs at 8 kHz.

Here's an example of something that NEEDS multi-core programming just to run without blowing up...

https://www.cymer.com/

Anyhow, I have no specific love for any one Linux distro over another. They are all the same to me, except for the two above, which no longer exist. And besides, their RT performance was only in the kernel... so pretty much you had to program your stuff in the kernel.

To be honest, if you want to do real time in the Linux kernel, you might as well grab the Android distro and do it all in the kernel. That's how we stress-tested the SoCs we were developing for cell phones -a job I had a few years ago. It's actually a nice distro when you run everything in the kernel.

Actually, if you were to DIY an AUDIO SERVER, I'd use either Android or Raspbian and write an "application" that ran in the kernel with bit-perfect accuracy.
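The scaffolding for that kind of in-kernel "application" is tiny. This is just the bare module skeleton (the names are placeholders); the actual bit-perfect audio path would hang off it:

```c
/* audio_stub.c - bare-bones kernel module skeleton (placeholder names) */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Stub for an in-kernel audio path");

static int __init audio_stub_init(void)
{
    pr_info("audio_stub: loaded\n");
    /* real work would register against the audio hardware / ALSA here */
    return 0;
}

static void __exit audio_stub_exit(void)
{
    pr_info("audio_stub: unloaded\n");
}

module_init(audio_stub_init);
module_exit(audio_stub_exit);
```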
 