I don't understand the purpose of using a high-end CD player over a media PC server

Hi boconnor,
I just lost a detailed reply to you. I'll not repeat it except to say that there are technical and measurable reasons why the more expensive (non-junk) machines sound better. I could show you on the bench. Notice that I said you can measure these differences as well.

Hi abraxalito,
I'm not up to speed on the minutiae of hardware data flows these days. I have direct experience with earlier computers used for real-time control, and I know the same issues still exist. The higher processor speed has helped, but the Windows OS still grabs chunks of time for itself. As long as this doesn't tie up the DMA used for data transfer - or the HD for that matter, you should be able to keep the buffer from running out. In fact, this is the only thing you should have to watch: the buffer on the sound card. Interrupts are too slow as well. Now you're talking PC XT, or the original PC. This ran up to probably the 386-type machines and software. Those were dark days for audio processing!
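Just to put rough numbers on keeping that sound card buffer fed, here's a minimal sketch; the 64 KiB buffer size is an assumption for illustration, not a figure for any particular card.

Code:
#include <stdio.h>

/* Rough headroom calculation for a playback buffer: how long can the OS
 * "go away" before the sound card's buffer runs out?  The buffer size is
 * an illustrative assumption, not a spec for any specific card. */
int main(void)
{
    const double sample_rate_hz   = 44100.0;   /* CD-quality audio        */
    const int    channels         = 2;         /* stereo                  */
    const int    bytes_per_sample = 2;         /* 16-bit samples          */
    const int    buffer_bytes     = 64 * 1024; /* assumed 64 KiB buffer   */

    double bytes_per_second = sample_rate_hz * channels * bytes_per_sample;
    double headroom_ms      = 1000.0 * buffer_bytes / bytes_per_second;

    /* ~176.4 kB/s drain rate, so roughly 370 ms of headroom - far longer
     * than a typical scheduling delay, which is why playback survives the
     * OS grabbing chunks of time for itself. */
    printf("drain rate: %.1f kB/s, headroom: %.0f ms\n",
           bytes_per_second / 1000.0, headroom_ms);
    return 0;
}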

With a single processor that has to participate in the data transfer, access is granted in time slices. This is how separate threads can run "concurrently". These time slices are actually much finer than any interrupt will be. How many threads are running will partially dictate how much time your audio data transfers have to move.

The performance of a digital system from the point of view of audio quality is really down to delivering the right bits, on time.
Well, yes. The timing is somewhat relaxed given some of the larger data buffers. Digital data transfer isn't an issue, or the machine will have crashed, as was mentioned earlier.

But perhaps you were referring to performance in terms of responsiveness to the user, in which case I agree with what you've written.
No. The user has only hit "go", and any user intervention beyond that point makes massive changes to what you are reproducing (like pausing or selecting another track, etc.). The problem is really what the OS decides it's time to do, and the priority given to those processes and threads.

If you are transferring the data to a remote machine (probably) on a network, then any large transfer, such as a print job, could really upset the applecart. Using a data switch (which most newer hardware is), as opposed to a data hub, will really help here unless the music server also runs the print queue. Then you're sunk (depending on the print job size)! Ethernet is far preferable to wireless for transfer speed. Yep, plug it in. A data request from surfing the web can grind things to a halt as well, depending on exactly where the data is heading. Suffice to say that surfing the web while listening to your music server is what will stress your system. If your high-speed connection only runs at 10 Meg (to the network), you may be fine as long as your other machines are talking at 100 Meg or so.

It's nice background music as far as I'm concerned.

-Chris :)
 
I'm not up to speed on the minutiae of hardware data flows these days.

Interrupts are by no means the latest and greatest invention, they've been around for a while. And no, they're not 'too slow' as you've said here - they're the lifeblood of the system. Any computer system would drop data all over the shop if it was dealing with that data only as timeslices allowed it to. Interrupt latency can be measured in microseconds whereas timeslices are in milliseconds. Huge difference. The difference is not only in time scale - interrupts are called that because they can't be predicted; timeslices are scheduled. An interrupt allows a higher-priority task to pre-empt a lower one; otherwise it would have to wait until its scheduled turn. Too late.

Now, neither Windows XP nor Linux are true real-time systems. This means that there are occasions when the system pre-empts everything else in the scheduling. Even then, interrupts will still be serviced otherwise audio and other essential services would drop out.
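To put a number on that millisecond scale, here's a crude sketch, assuming a POSIX system with clock_gettime and nanosleep available; it asks for 1 ms sleeps and reports the worst lateness, which on an ordinary desktop OS typically comes back in milliseconds, against the microseconds of interrupt latency.

Code:
#include <stdio.h>
#include <time.h>

/* Crude illustration of scheduler granularity on a non-real-time OS:
 * request 1 ms sleeps and record how late the wake-ups actually are. */
static double diff_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000.0 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* ask for 1 ms */
    double worst = 0.0;

    for (int i = 0; i < 1000; i++) {
        struct timespec before, after;
        clock_gettime(CLOCK_MONOTONIC, &before);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &after);

        double late = diff_ms(before, after) - 1.0;  /* overshoot in ms */
        if (late > worst)
            worst = late;
    }

    printf("worst wake-up lateness over 1000 sleeps: %.3f ms\n", worst);
    return 0;
}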

As long as this doesn't tie up the DMA used for data transfer - or the HD for that matter, you should be able to keep the buffer from running out.

Sure, buffers are necessary for dealing with interrupt latencies. How do you think that DMA transfers are set up? You think they're initiated in a time-slice? DMAs go on in the background with no CPU intervention, cycle-stealing. But when that transfer is complete - guess what? It will be signalled by an interrupt. The interrupt service routine (ISR) will probably prime the next DMA transfer as this is time-critical stuff. No way it can be safely left to the OS in the next re-schedule.
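To make that concrete, here's a sketch of the ping-pong idea for a hypothetical audio controller; dma_start() and request_refill() are stand-ins of my own invention, not any real driver's API.

Code:
#include <stdint.h>
#include <stdio.h>

#define BUF_SAMPLES 4096

static int16_t buf[2][BUF_SAMPLES];   /* ping-pong buffers              */
static volatile int playing = 0;      /* index of buffer being drained  */

/* Stand-in for "write the buffer address/length to the DMA engine". */
static void dma_start(const int16_t *src, int samples)
{
    printf("DMA started: %p, %d samples\n", (const void *)src, samples);
}

/* Stand-in for "wake the decoder task", e.g. by posting a semaphore.
 * The actual refilling happens later, in scheduled task context. */
static void request_refill(int which)
{
    printf("task asked to refill buffer %d\n", which);
}

/* DMA-complete ISR: the only time-critical step is restarting the DMA
 * on the buffer that was filled earlier, so the DAC never starves. */
void dma_complete_isr(void)
{
    int finished = playing;
    playing = 1 - playing;

    dma_start(buf[playing], BUF_SAMPLES); /* keep the DAC fed, no gap    */
    request_refill(finished);             /* non-urgent work is deferred */
}

int main(void)
{
    dma_complete_isr();   /* simulate two DMA-complete interrupts */
    dma_complete_isr();
    return 0;
}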

With a single processor that has to participate in the data transfer, access is granted in time slices.

Access to what here?

This is how separate threads can run "concurrently".

Threads and processes are rather separate things. On some newer CPUs, threads are implemented at the CPU level, not in the OS. That's Intel's 'Hyperthreading'. Here, judging from the surrounding context, you're talking about 'processes' when you say 'threads'.

These time slices are actually much finer than any interrupt will be. How many threads are running will partially dictate how much time your audio data transfers have to move.

Who taught you this stuff?:D Seriously, where did you learn that process reschedules are much finer than interrupts?
 
Hi abraxalito,
Who taught you this stuff?
With all due respect, where are you coming from?

Look, I said that I'm not an expert in computer operation. I do know some things that came about from working on industrial controllers, and a very early project on voice recognition. They were using IBM XTs for that, with an expansion case as well.

Now, from what I know, interrupts are a slow way to get the attention of the processor. It's like "knock knock - hey, please service my request?". Then there are interrupt controllers that look at the relative importance that the interrupt has before granting attention to that interrupt. This is a hardware operation, not done in software. I also know of software interrupts, but not really that much in any detail.

Now, the time allotments given to any process to run in are controlled tightly. In a Windows-type environment, the "foreground" process has more time slices as a general rule, unless there is a high-priority process running in the background. I think it will come to the foreground once it asserts itself.

Maybe not perfect, possibly with some confusion in the terms I used, but I'm a guy who deals with computers as tools. I do not write any code. I do install hardware and software onto computers for various jobs. Some are voice mail systems. I know when I have certain problems in software, some due to the BIOS and some due to failing equipment. These computers are sometimes set up on networks, and often connected to specialized hardware to make up larger systems. This includes test equipment automation.

Sure, buffers are necessary for dealing with interrupt latencies. How do you think that DMA transfers are set up?
Yes, that's what I said. There was no reason to take that tone, though.

Interrupts are by no means the latest and greatest invention, they've been around for a while.
No kidding. They are not news to me either. Microcontrollers also have interrupt servicing.

The difference is not only in time scale - interrupts are called that because they can't be predicted; timeslices are scheduled. An interrupt allows a higher-priority task to pre-empt a lower one; otherwise it would have to wait until its scheduled turn. Too late.
I'm well aware of this also. I was taught the opposite, but then we were dealing with an app that was the prime focus for the machine. If we could run in a terminal with a command prompt, without the overhead of the "presentation manager", there was more time for the application to run. Windows seems to be particularly bad about how much time the OS needs to run. DOS is great; it just allows the application to run without too much fuss.

Now, neither Windows XP nor Linux are true real-time systems. This means that there are occasions when the system pre-empts everything else in the scheduling. Even then, interrupts will still be serviced otherwise audio and other essential services would drop out.
Audio is a low-priority process as far as Linux and Windows are concerned. Linux allows for greater control and can be stripped down while still running reliably. You can compile only what you need running. With Windows, this is a far more difficult task. It's big, fat and a memory hog; you have to beat it down with a stick.

Access to what here?
Storage device to audio device, probably through main memory first. I didn't think you would be unclear on this.

Threads and processes are rather separate things. On some newer CPUs, threads are implemented at the CPU level, not in the OS.
As I said, I'm not really up on the latest stuff. I shouldn't have to be for what I do. Most of my happy work came using OS/2, so I use terms related to OS/2. A process could (would) generate many threads. Poorly written software could terminate and leave various threads running or looping. They were referred to as "zombie threads". OS/2 was pretty good at compartmentalizing memory where a program would run all by itself. The OS did the threading and killing of zombie threads.

Imagine my surprise once I looked under the hood on my Fedora 12 machine. It appears to be ... OS/2!

So I mention this to give you some idea where my background is. Interrupt polling was a slow way for a program to break in for CPU cycles in the world I knew.

Now that you know where I'm coming from, what is your experience? Also, there is no need to be aggressive when you post.
 
Is the Mac OS any better than the Microsoft offerings? Does running hardware, OS and software all from one company have any advantage? (You would think it would, but you never know.) I know they used to have their own way of dealing with audio, which provided less latency, but that may have been back in the OS 9 days.
 
Timing jitter in a PC-based sound system? FIFO buffers have been used for a long time to avoid that; even the lowly 16550 UART has one. A low buffer generates an interrupt and the PC transfers data to fill it. The data output from the UART and input to DAC chips is clocked by a crystal oscillator, so the data is read out at a constant rate with jitter in the sub-µs range; there is no dependency on the PC for timing, because the buffer never empties except in the case of an error. Even ripping a CD uses buffers at several levels. A PC could not handle interrupts at a rate fast enough to service an unbuffered source; that is why even RS-232 serial ports required buffers once the data rate got above 9600 bps, and the overhead in servicing an interrupt is higher with modern operating systems than with DOS.
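A quick back-of-the-envelope sketch of that FIFO point; the 115200 baud rate is just an example figure, and the sums ignore driver overhead.

Code:
#include <stdio.h>

/* Why a small FIFO takes the per-character timing burden off the PC.
 * Example figures only; the 16550's FIFO is 16 bytes deep. */
int main(void)
{
    const double baud          = 115200.0; /* example line rate           */
    const double bits_per_char = 10.0;     /* 8 data + start + stop bits  */
    const int    fifo_depth    = 16;       /* 16550 FIFO size             */

    double chars_per_sec = baud / bits_per_char;   /* ~11520 chars/s      */
    double per_char_us   = 1e6 / chars_per_sec;    /* ~87 us per char     */
    double per_fifo_us   = per_char_us * fifo_depth;

    printf("unbuffered: an interrupt every %.0f us\n", per_char_us);
    printf("with FIFO (refilled in full): roughly every %.0f us\n",
           per_fifo_us);
    /* And because the outgoing bit clock comes from the UART's own
     * crystal, PC-side timing jitter never reaches the data stream. */
    return 0;
}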
 
If you are transferring the data to a remote machine (probably) on a network, then any large transfer, such as a print job, could really upset the applecart. Using a data switch (which most newer hardware is), as opposed to a data hub, will really help here unless the music server also runs the print queue. Then you're sunk (depending on the print job size)! Ethernet is far preferable to wireless for transfer speed. Yep, plug it in. A data request from surfing the web can grind things to a halt as well, depending on exactly where the data is heading. Suffice to say that surfing the web while listening to your music server is what will stress your system. If your high-speed connection only runs at 10 Meg (to the network), you may be fine as long as your other machines are talking at 100 Meg or so.

It's nice background music as far as I'm concerned.

-Chris :)

Hi Chris,

Most of what you're saying above could have readily applied to a system 10 - 12 years ago.

I remember ripping CDs whilst browsing the web, only to find that my nice new mp3 songs were full of glitches.

However, current hardware (anything from the past 7 years, I would say) is quite capable of "proper" multitasking. These are systems based around dual-core CPUs and SATA.

Back in the 90's all my systems were SCSI-based, simply because the I/O overhead with IDE was too great for me. I can remember watching screen refreshes stall due to disk I/O hogging CPU cycles.
This didn't happen with SCSI of course, because the disk I/O was largely handled 'off board' and transfers took place via DMA.

I also lost count of the number of CDs turned into coasters with early IDE CD writers, simply because the CPU could not handle the multiple I/O requests (reading from the IDE disk and then writing to the IDE CD writer on the same bus). For this reason, I had a SCSI CD writer as well.

However, the advent of multicore CPUs and SATA has changed all this. It's pretty rare to see a PC struggling with I/O (I mean many simultaneous I/O operations, as against a sustained transfer) nowadays. Serial Attached SCSI (SAS) devices are even becoming increasingly popular in servers now too.

Hence your statement (highlighted in bold above) is no longer a concern on modern hardware.

Likewise, even a 10 Mbit/s network is unlikely to be stressed by a print job (however large) or music streaming (unless streaming seriously high bit rates, but even then...).

However - this assumes that the network is switched rather than fed through a simple hub (does anybody use hubs anymore?).
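For a quick sanity check on the bandwidth side, here's a small sketch; the payload figures (uncompressed 16/44.1 PCM and a 320 kbit/s MP3) are common examples, and protocol overhead is ignored.

Code:
#include <stdio.h>

/* How much of a 10 Mbit/s link does music streaming actually use?
 * Example payloads only; protocol overhead is ignored. */
int main(void)
{
    const double link_mbps = 10.0;

    double pcm_mbps = 44100.0 * 16 * 2 / 1e6;  /* 16/44.1 stereo, ~1.41  */
    double mp3_mbps = 320.0 / 1000.0;          /* 320 kbit/s MP3         */

    printf("CD-quality PCM: %.2f Mbit/s (%.0f%% of the link)\n",
           pcm_mbps, 100.0 * pcm_mbps / link_mbps);
    printf("320k MP3:       %.2f Mbit/s (%.0f%% of the link)\n",
           mp3_mbps, 100.0 * mp3_mbps / link_mbps);
    return 0;
}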

I am a network analyst (amongst other things lol) and at my place of work I support a team of software developers that regularly run large CVS code checkouts / check-ins, and the (100 Mbit/s switched) network handles that with ease. Likewise, the server can be seen to be busy with, say, 4 - 5 simultaneous check-ins / checkouts, but by no means does it struggle.

We also quite often run large FTP transfers between servers, and again there is no noticeable impact on either LAN or server performance.

However...

A large transfer from a PC (whether FTP or SMB) can render the PC unresponsive - but I have yet to try streaming mp3s at the same time (a project here maybe!). I would be surprised if the mp3 stream is interrupted though.

Apologies for rambling on... :)

Tony.
 
Timing jitter in a PC-based sound system? FIFO buffers have been used for a long time to avoid that; even the lowly 16550 UART has one. A low buffer generates an interrupt and the PC transfers data to fill it. The data output from the UART and input to DAC chips is clocked by a crystal oscillator, so the data is read out at a constant rate with jitter in the sub-µs range; there is no dependency on the PC for timing, because the buffer never empties except in the case of an error. Even ripping a CD uses buffers at several levels. A PC could not handle interrupts at a rate fast enough to service an unbuffered source; that is why even RS-232 serial ports required buffers once the data rate got above 9600 bps, and the overhead in servicing an interrupt is higher with modern operating systems than with DOS.

Interesting comment.

However, at work we have a standard HP PC (XW4400) with a firewire input.

This will readily handle HD video streams direct from a Sony HD camera, with no apparent buffering that I can see.

We pass these streams directly into Sony Vegas Studio editing software.

When we specced out this kit, I argued this couldn't be done, but apparently I was proved wrong.
 
This will readily handle HD video streams direct from a Sony HD camera, with no apparent buffering that I can see.
There is buffering on several levels, from the FireWire chipset (both in the camera and in the computer) to the PCI bus bridge chips. Even if DMA is used, the DMA frame becomes the buffer.
Let's take the PCI bus: it uses asynchronous transfer with burst transfers, so it has to buffer, as does any asynchronous link.

I don't know what you mean by no apparent buffer, unless you mean the latency is so low that it is not visible, which would be the case with any system capable of 20,000 µs or less interrupt latency (an eternity for a system with a 33 ns bus cycle); a buffer as large as half a frame would not be apparent. The software decoder in the editor also needs to buffer a fair portion of a frame just to decode the block encoding scheme used by most video compression formats.
 
Now that you know where I'm coming from, what is your experience? Also, there is no need to be aggressive when you post.

This is pretty much multitasking OS stuff that I covered when I was at university - an aspect of the computer architecture and software engineering modules - though that was a long time ago. Since then I've worked on the design of one or two real-time systems for processing vibration data (the OS was OS-9, a kind of Linux-lite) and written a device driver interrupt handler in 68020 assembler, a CPU that's pretty much obsolete now.

I'm perplexed as to why you think I'm being aggressive. I have a very direct manner of speech and see no point in mincing words, but I'm a very placid and peace-loving guy. People only get aggressive when they perceive some kind of threat - you making erroneous statements about the inner workings of computers is hardly something that's going to get my dander up. :) It's not important enough in the scheme of things to work up any kind of sweat over.

Now, from what I know, interrupts are a slow way to get the attention of the processor. It's like "knock knock - hey, please service my request?". Then there are interrupt controllers that look at the relative importance that the interrupt has before granting attention to that interrupt. This is a hardware operation, not done in software. I also know of software interrupts, but not really that much in any detail.

They're the only way to get the CPU's attention; otherwise it just does its own thing. Some interrupts can be masked, which means the CPU's at liberty to ignore them; others (e.g. the NMI - non-maskable interrupt) can't be ignored. A reset is also a kind of NMI; the CPU is unable to ignore that and it will be immediately stopped in its tracks.

So whilst they are in effect saying 'knock, knock', the CPU's response to them begins as soon as it's finished the current instruction. That time these days is measured in nanoseconds. Control will be passed over to the OS, which decides which ISR to call. In some cases the interrupt may be directly vectored to the handler's address; in other cases there might be polling to see which peripheral was responsible for the IRQ. You're correct, there may be an interrupt controller, but that barely slows things down as it's hardware and just a bunch of logic gates. The interrupt controller might be responsible for generating the interrupt vector, which speeds up the response by making the polling step redundant.

Audio is a low-priority process as far as Linux and Windows are concerned. Linux allows for greater control and can be stripped down while still running reliably. You can compile only what you need running. With Windows, this is a far more difficult task. It's big, fat and a memory hog; you have to beat it down with a stick.

Audio does not depend on just one process - if it did then dropouts would be common seeing as Linux is not real-time - it can decide to hog the process queue. Soundcards come with drivers that have to be installed; they're specific to that hardware. The real work is done in these drivers, which will (I'm assuming) install their own ISRs. So no, since audio is a demanding, real-time task, it's dealt with by interrupts, not by a scheduled task. Once the data has been packaged up in a buffer in CPU-accessible memory, then it might be handed over to a task. The responsibility not to miss a sample is entirely down to interrupt handling.
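Here's a toy sketch of that division of labour, using a made-up ring buffer rather than any real driver's structures: the interrupt side drains at the hardware rate and only flags the scheduled task when the fill level gets low.

Code:
#include <stdio.h>

#define RING_SIZE  8192               /* samples, illustrative size      */
#define LOW_WATER  2048               /* refill threshold, illustrative  */

static volatile int fill = RING_SIZE; /* samples currently buffered      */
static volatile int refill_requested = 0;

/* Interrupt context: the card asks for 'n' samples.  Time-critical. */
static void isr_drain(int n)
{
    fill -= n;                        /* hand samples to the hardware    */
    if (fill < LOW_WATER)
        refill_requested = 1;         /* e.g. wake the decoder task      */
}

/* Task context: runs whenever the scheduler gets around to it. */
static void task_refill(void)
{
    if (refill_requested) {
        fill = RING_SIZE;             /* decode/copy more audio data     */
        refill_requested = 0;
    }
}

int main(void)
{
    /* Simulate a few interrupt periods with the task running lazily. */
    for (int i = 0; i < 12; i++) {
        isr_drain(1024);
        if (i % 3 == 0)               /* the task only runs sometimes    */
            task_refill();
        printf("period %2d: fill = %d samples\n", i, fill);
    }
    return 0;
}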

So I mention this to give you some idea where my background is. Interrupt polling was a slow way for a program to break in for CPU cycles in the world I knew.

Polling is quite different - that's entirely CPU dependent. Perhaps you've somewhere got a crossed wire between polling and interrupts. I doubt any audio software nowadays uses polling - it's pretty much only done when there's no RTOS to oversee things. Polling is incredibly slow and inefficient.
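For what it's worth, here's a bare-bones contrast of the two approaches for a hypothetical peripheral; device_has_data(), read_sample() and handle() are stubs I've made up so the sketch compiles, not any real API.

Code:
#include <stdio.h>

static int   device_has_data(void) { return 0; }  /* stub: nothing ready */
static short read_sample(void)     { return 0; }  /* stub                */
static void  handle(short s)       { (void)s; }   /* stub                */

/* Polled I/O: the CPU does nothing else while it waits. */
void polled_loop(void)
{
    for (;;) {
        while (!device_has_data())
            ;                         /* busy-wait, burning CPU cycles   */
        handle(read_sample());
    }
}

/* Interrupt-driven I/O: the CPU runs other work, and this routine is
 * only entered when the hardware raises its IRQ line. */
void device_isr(void)
{
    while (device_has_data())         /* drain whatever arrived          */
        handle(read_sample());
}

int main(void)
{
    device_isr();                     /* nothing pending in this stub    */
    printf("with stubs, the ISR simply returns when no data is ready\n");
    return 0;
}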
 
...you need to spend over $2K for a really good one.

When people start to specify equipment by price range, rather than by detailed performance, I know it's time to give them the blind eye, or the deaf ear in this case.

This is DIY audio, not 'go out and get a job on Wall St. and spend the proceeds on equipment and the first thing you tell anybody is how much it cost' audio.

I have also found that Windows Update breaks stuff that is out of the ordinary. Most things I have made work under Windows have been broken, and it takes far too long to figure out how to fix them again. I now use Windows for non-lab related things, and no longer for server duty either.

Failing that, if you want to serve music files, your best bet might very well be a trimmed down and optimized system. Some flavor of Linux would be my suggestion, and have it running a command line interface only.

This is just paranoia, and anti-Microsoft partisanship. In my experience Linux is just as prone to time-consuming update-induced breakages. I know it's tempting to indulge in this, but you really shouldn't let your jealousy of Bill's riches overflow into your assessment of firmware.

The less running that demands time slices, the better the performance should be. Also, moving the processing to the D/A device, card or USB, will really help.

Yes. I can buy a recording interface that will readily pass eight channels of 24/96 over USB 2 with a performance acceptable to musicians and recording engineers. I do know a fair bit about timing issues in IBM PCs, enough to know there are likely to be issues and new solutions unknown as yet to any of us here, especially given the rate at which technology changes, but without entering into a discussion about these, I would suggest that there is sufficient bandwidth available in a modern multi-processor xGHz machine to cope with two channels of 16/44.1.
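A rough bandwidth check on those two cases, counting raw PCM payload only (packing and USB protocol overhead ignored) against USB 2.0's nominal 480 Mbit/s:

Code:
#include <stdio.h>

int main(void)
{
    const double usb2_mbps = 480.0;   /* nominal USB 2.0 signalling rate */

    double eight_ch_2496 = 8 * 96000.0 * 24 / 1e6;  /* ~18.4 Mbit/s      */
    double two_ch_1644   = 2 * 44100.0 * 16 / 1e6;  /* ~1.4 Mbit/s       */

    printf("8 ch of 24/96:   %5.1f Mbit/s (%.1f%% of USB 2.0)\n",
           eight_ch_2496, 100.0 * eight_ch_2496 / usb2_mbps);
    printf("2 ch of 16/44.1: %5.1f Mbit/s (%.1f%% of USB 2.0)\n",
           two_ch_1644, 100.0 * two_ch_1644 / usb2_mbps);
    return 0;
}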

If you are transferring the data to a remote machine (probably) on a network, then any large transfer, such as a print job, could really upset the applecart. Using a data switch (which most newer hardware is), as opposed to a data hub, will really help here unless the music server also runs the print queue. Then you're sunk (depending on the print job size)! Ethernet is far preferable to wireless for transfer speed. Yep, plug it in. A data request from surfing the web can grind things to a halt as well, depending on exactly where the data is heading. Suffice to say that surfing the web while listening to your music server is what will stress your system.

We're not talking about listening to music from a Windows server which is handling a print queue while the administrator downloads from Windows Update. We're talking about a dedicated music machine or network. Your enthusiasm to denigrate computer-based playback systems is causing you to postulate unlikely scenarios.

Horatio Nelson said:
I see no ships...

w
 
I have also found that Windows Update breaks stuff that is out of the ordinary. Most things I have made work under Windows have been broken, and it takes far too long to figure out how to fix them again. I now use Windows for non-lab related things, and no longer for server duty either.

This is just paranoia, and anti-Microsoft partisanship. In my experience Linux is just as prone to time-consuming update-induced breakages. I know it's tempting to indulge in this, but you really shouldn't let your jealousy of Bill's riches overflow into your assessment of firmware.

As a Network / Systems Administrator of 15 years standing, I wholeheartedly agree with this.

I have "lost" many a Linux server due to an unwanted kernel update, which has rendered the system unuseable.

I would in fact argue that Windows updates are more reliable, particularly considering they are released on a weekly basis, and therefore tend to be more frequent than Linux updates.

For the record - I am in neither the Microsoft nor the Linux camp. Each operating system has its place in the business, and I use them accordingly. I would describe myself as an advanced user of both operating systems.
 
Quote:
Originally Posted by anatech View Post
...you need to spend over $2K for a really good one.
When people start to specify equipment by price range, rather than by detailed performance, I know it's time to give them the blind eye, or the deaf ear in this case.

That's unfortunately the reality of this game, AND advanced DIY tends to be more expensive than simply keeping a day job (not necessarily on Wall St), saving up and buying a decent (not hyped) performer. As for those posts where it is said that "one cannot hear any difference" - compared to what? It would be much more informative to state: "In my (given) system, with the music of my choice (hair bands of the 80's :)), I compared a high-end CD player (priced at $2.5-3k USD and up - below that it's hardly high end) and there was no audible benefit whatsoever in keeping the CD player."
 
Quote:
Originally Posted by anatech View Post
...you need to spend over $2K for a really good one.
When people start to specify equipment by price range, rather than by detailed performance, I know it's time to give them the blind eye, or the deaf ear in this case.

That's unfortunately the reality of this game, AND advanced DIY tends to be more expensive than simply keeping a day job (not necessarily on Wall St), saving up and buying a decent (not hyped) performer. As for those posts where it is said that "one cannot hear any difference" - compared to what? It would be much more informative to state: "In my (given) system, with the music of my choice (hair bands of the 80's :)), I compared a high-end CD player (priced at $2.5-3k USD and up - below that it's hardly high end) and there was no audible benefit whatsoever in keeping the CD player."

100% agree.

It would be really interesting to find out about the rest of the system and the music material used in the comparison.
Even most recent remasters of otherwise good productions from the past have compressed and flattened sound (loudness wars) - using these discs actually won't help in comparing different equipment.
 
Hi Tony,
Well, since buying all my test gear and CD players, I can't afford a computer that is anywhere near current. I have nothing that uses FireWire or SATA. I still get excited by multiple USB ports, but the lack of RS-232 would kill me. The most capable processor I have is about 2 GHz, I think, with maybe up to 1 GB of RAM and a single processor. I have been suffering with NT4 for years; the OS/2 Advanced Server runs far better (no memory leaks, I can leave it running!). So this may explain my point of view. Everything I have said I've observed directly. My next server will be a CentOS build. I have to learn how to configure this thing first.

My last professional job had me certified for VoIP. So I would configure these systems (mostly Avaya) in an existing network. This is something you don't just stick on a network, and switches are not enough. I have seen a print job burn the networked system down (multi-site). Their IT dept decided they knew how to do my job and set it up. Didn't fly too well like that, and the new switches exceeded the cost of TDM wiring. The real kick in the face is that VoIP does not buy you anything over a TDM solution (VoIP outside). I was also tasked with rebuilding old and new voice mail servers as the hardware died, so I learned a lot there. Most of these systems ran on OS/2 at first, some older ones on Concurrent CP/M. So I have a heavy dose of older-systems type work. I have limited network knowledge (not afraid to admit that) and excellent telephony programming and provisioning knowledge (that most IT departments don't have, but say they do).

I'd say that your comments are pretty accurate Tony, and I thank you for clearing a few things up for me. You correctly identified where my experience came from.

Hi abraxalito,
Well, from Tony's post, you can see what universe I came from - those very dark times that appear normal to me. Industrial computing tends to be 10 years back in time from the normal world, mostly because the systems are expected to be up and supported for 15 ~ 20 years. When I attended university, we used punch cards in SWAT-V (Fortran, Structured Programming - Waterloo revision). 'Nough said about that, I guess. Did I give you the shivers?

Some interrupts can be masked, which means the CPU's at liberty to ignore them; others (e.g. the NMI - non-maskable interrupt) can't be ignored. A reset is also a kind of NMI; the CPU is unable to ignore that and it will be immediately stopped in its tracks.
That is something we became very familiar with.

Audio does not depend on just one process - if it did then dropouts would be common seeing as Linux is not real-time - it can decide to hog the process queue.
At least I'm well aware of those types of things.

-Chris
 
I have been suffering with NT4 for years; the OS/2 Advanced Server runs far better (no memory leaks, I can leave it running!). So this may explain my point of view. Everything I have said I've observed directly. My next server will be a CentOS build. I have to learn how to configure this thing first.

Ahh the heady days of OS/2 - I loved that O/S back in the 90's! I taught myself how to tweak it to the hilt for optimum performance. I mostly used Warp 3. I still have a copy somewhere... Along with Warp 4...

My last professional job had me certified for VoIP. So I would configure these systems (mostly Avaya) in an existing network. This is something you don't just stick on a network, and switches are not enough.

I have actually recently implemented a VoIP system at work (SpliceCom Maximiser, Call server 5200). We simply placed it on the main LAN, and used flood control on the (HP ProCurve) switches. However, we recently switched to SIP and I had to move it off the LAN (but that's another story..).

I have seen a print job burn the networked system down (multi-site).

I can see how a large print job would kill a WAN but not a LAN - but again that depends on how much WAN bandwidth is available..

Didn't fly too well like that, and the new switches exceeded the cost of TDM wiring.

TDM? Not familiar with that acronym?

However... We have taken the thread way off topic - I guess any more exchanges should take place through PM if you so desire.
 
Hi wakibaki,
When people start to specify equipment by price range, rather than by detailed performance, I know it's time to give them the blind eye, or the deaf ear in this case.
Had you completely read what I posted, you would know that I warned about equipment that was dressed up junk. I had excluded that stuff. As for the valuation, it is accurate for the life of CD players from the start in Canada. Servicing all brands allowed me to get a firm grasp on the market over the years simply because people always needed to know if the unit was worth repairing.

As completely unpopular as giving a price range is (and I would normally be agreeing with you), this really is the cold, hard truth in this case. I have seen supposedly "excellent, top rated" machines claimed to be the equal of a $3K unit (or whatever), but these have always turned out to be poor performers by comparison. It is romantic to think a real deal has been found, but this is a pipe dream.

I'm really cheap when it comes to buying anything (ask my friends), but I will spend what is needed if I know for sure what the truth is. I am well regarded when it comes to audio service, and in particular for setting up CD players. After a lot of examination, I parted with some substantial dollars - TWICE, spaced a decade apart. So unless you are buying used or at a deep discount (dealer cost or stolen), the $2K figure is unfortunately accurate. You are free to disagree of course, but I know how CD players are built and how they work from the ground up, right from the beginning of the product line.

This is DIY audio, not 'go out and get a job on Wall St. and spend the proceeds on equipment
Surely even you know when you can't build something. The average DIYer may be able to build a kit, but I haven't seen any capable of the performance we are talking about. I have, and have had, access to all kinds of sub-assemblies and bits for CD players: Revox, Nakamichi, Philips, Pioneer, Sony ... on and on. Building a one-off would be a major undertaking, and very costly as well - more so than just buying the thing. The #1 thing you will have trouble with is building the transport. The rest can be assembled easily these days.

I'm sorry you have a problem with the fact that I spent the money, but I'm not gloating and certainly the expenditure hurt. The same holds true when buying a firearm. You must spend the money in order to get something that performs extremely well. My other hobby.

This is just paranoia, and anti-Microsoft partisanship. In my experience Linux is just as prone to time-consuming update-induced breakages.
No it isn't. This is direct experience. Microsoft even messed up DOS 3.3, but didn't want to admit it, or accept the software back.

In the early days of Linux, it was rocky - I was using OS/2 at the time. That and Unix were the most stable platforms out there. I even had Unix loaded on one machine (Seagate ST-225 HD for a date). Early Linux would destroy itself if it wasn't shut down properly. Power fails meant a rebuild. But guess what? NT4 also trashed itself oftentimes. You could walk up to an OS/2 server and turn the power off as it was processing. When the power was returned, it would rebuild itself and normally just continue. That was with JFS and drive mirroring at that time. Sorry, but that is far in advance of anything Microsoft had out for years. Eventually, Linux became tolerant of power fails and I began playing with it (Red Hat 6.1). I'm now running Fedora 12 and it updates really nicely.

By comparison, each and every time an XP update came along that broke something, the help desk could not fix the fault. It became clear that their job was to prove the fault lay in 3rd-party drivers, or an application. After talking to tier 2, the only "fix" was to reload the machine (3 days, thanks) and suffer without the broken software. Even things internal to Windows were broken. Some industrial locations running man-down safety systems were affected often enough to discontinue using Microsoft. The fact that they cannot fix what they broke in most cases only shows they are not able to support the product. I've worked at the command line often enough (booting from CD) to save a system, so I really know what's up. So I'm not someone who doesn't like Bill Gates; I am someone who has suffered long and hard with their poor coding. BTW, Linux is at least honest about things, as was the OS/2 team. They always came up with a patch or a workaround. You need it to be reliable - for sure? Contract with IBM, end o' story. With Windows, you really are pretty much on your own, but you pay a premium for that honor.

BTW, when I was installing the last Home Server beta (holy cow, he beta tests!), the install aborted. Then it would not reinstall over itself - even after I formatted the drive (not the quick format, the one that takes an hour or so). I loaded an earlier version of CentOS without any trouble on that machine. It is still running fine. What does that tell you? Understand that I keep giving Microsoft chances to redeem itself. So far, XP has been the best they have put together. I find this accusation unfounded, and pretty silly to make if you don't know a person's history. This is unlike you, in my experience.

Hi Tony,
I have "lost" many a Linux server due to an unwanted kernel update, which has rendered the system unuseable.
Agreed, but not lately. Depends on the vendor, I guess. Have you tried Red Hat? Older (stable) versions of Fedora should be decent as well.

I would in fact argue that Windows updates are more reliable, particularly considering they are released on a weekly basis, and therefore tend to be more frequent than Linux updates.
I haven't really found that to be the case, although Microsoft hasn't broken anything for about 6 months now. Mind you, I no longer run anything terribly important on Windows boxes any more. Either way, Windows doesn't hurt as much lately.

Fedora releases fixes every single week, and sometimes more often for real problems. Sounds like you're not running Red Hat/Fedora code. Fedora hasn't broken anything yet. I find that amazing.

For the record - I am in neither the Microsoft nor the Linux camp. Each operating system has its place in the business, and I use them accordingly. I would describe myself as an advanced user of both operating systems.
I would say the same, and I agree with you (hence this XP machine). The one difference is that I freely admit that I'm not an advanced user, not like 10 years ago. My job is what I do with a computer, not the computer. These are tools that require far too much attention. (= not production ready) I think they need to settle on a stable business machine for 5 year periods in order to allow productivity to rise to acceptable levels. The users need to get on with their jobs, and IT departments need to be able to focus on the network instead of software maintenance. This costs everyone in excess of millions a year - at least!

This was all OT anyway. The cost of the CD player has nothing to do with streaming music on a network. I doubt you will see this in corporate America any time soon.

-Chris
 
All Linux servers I support are running CentOS 5.x.

I have disabled automatic kernel updates, as I have several kernel modules compiled against that specific kernel version (the hardware is HP).

Also have an office full of XP machines (approx 60) and honestly haven't seen any problems with updates for about 2 - 3 years (maybe one or two exceptions - again, this is HP hardware).

Our developers that use Linux are running Ubuntu - I don't support that as a rule - that was their choice. Most of them dislike Fedora intensely (why I don't know).

As for NT4 - absolutely solid reliability post-SP2 (IIRC), having used it (corporate environment - both servers and desktops) for about 8 years. Even managed to have machines recover after having been accidentally switched off (don't ask..).

Actually - AFAIK a large part of the British armed forces still depend on NT4 to this day... And I have seen it in use in several airports as well.

But it is past its 'sell by date' now for sure! After all, what 'modern' O/S lacks USB support!!!
 