FreeNAS - Anyone Tried it as a Music Server?

Andy, I am just curious... what :eek: do you use a 4 x 300GB RAID array for?

I just use two $40 SATA2 80GB drives in RAID 0, each partitioned in two parts (operating system/site on 15GB, video editing on 65GB). All of the media is on my other 3TB in standard, non-RAID mode. My "insurance" is optical backup of the 3TB (DVD and Blu-ray). My website is on the RAID array, but it is also backed up as one Blu-ray ISO on another "slow" drive.
OS
 
I remember when 1 MB in a VAX-780 was considered a lot of RAM.
And when 32 MB (Not GB!) drives had 14" platters, LOL!
And when graphics hardware was programmed in microcode.

I remember...
- when 8K was a lot of RAM
- programming in Fortran on punch cards
- using 9 track tapes
- entering 6800 assembler using toggle switches

Check out the Data Robotics storage systems; those look pretty cool, and I believe they are Linux based.
 
Andy, I am just curious... what :eek: do you use a 4 x 300GB RAID array for?

At the time I set it up, 300GB drives were the "sweet spot" for cost/GB. That was quite a while back. I have about 1500 CDs ripped to FLAC, so that takes up a lot of space. Plus there's lots of other data too, such as backup OS hard drive images of the machine it's in and of two other machines. My OS is on a separate 160GB drive, so the RAID is data-only.
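As a rough sanity check on the FLAC figure (my own round numbers, not figures from this thread), a back-of-the-envelope calculation puts 1500 CDs at roughly half a terabyte:

```python
# Back-of-the-envelope estimate with assumed numbers (not from this thread):
# rough space needed for 1500 CDs ripped to FLAC.
cds = 1500
raw_mb_per_cd = 700      # assume mostly-full discs (~700 MB of raw CD audio)
flac_ratio = 0.55        # FLAC typically compresses CD audio to ~50-60%

total_gb = cds * raw_mb_per_cd * flac_ratio / 1024
print(f"Approximate FLAC library size: {total_gb:.0f} GB")  # on the order of 550-600 GB
```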
 
As far as the RAID setups go, I would go with Intel hardware; some if not most of their support chipsets include RAID. I doubt that it burdens the processor much, since Intel knows what they're doing - well, most of the time. It seems they learned their lesson with the 286 and all its bugs - they never got its protected mode working right and just moved on to 32-bit protected mode in the 386.

I know, as I think you probably do also from working in CPU design, that exhaustive testing is absolutely necessary. I've seen projects nearly fail due to bugs in subsystems supplied by other vendors, and so, knowing the complexity, I tend to be reluctant to go with companies that do not have the funds to fully test their designs. Kind of like knowing what goes into the sausage, lol! The stories I could tell!
This is my approach based on not wanting to make debugging the system a long, risky, and painful effort. I would certainly do more of what you're doing if computers were more of a hobby for me.

I'd tend to trust well-tested servers with RAID setups where they have to be relied on for business and professional work. Most of the reasonably priced used systems use SCSI drives, as I'm sure you know. And the companies that have been doing RAID for 10 or 20 years with a good reputation will usually get it right. I'd probably also trust Adaptec and Highpoint; it would be nice if there were a low-cost clone, but I think they use custom chipsets. Oh, yeah, I worked for a certain large chip company that offered a RAID chip - I would not trust it, LOL! I don't think it benched that well in tests I saw years ago on Tom's Hardware.
I'll try to find a review of the Intel RAID board. I think RAID got spec'ed in for media/high-end PCs by MS or Intel. The Gateway (Intel motherboard)/Vista system that I have supports RAID, and a high-end ASUS motherboard that I have, also with an Intel chipset, supports RAID as well. Most of our other systems are Dells, also with Intel chipsets.

I've said for years that slow disk drives are the bottleneck in computers these days, and this is part of the reason I've wanted to do a RAID setup in my main system, as you are doing.
I've found that the Seagates with the 32MB cache seem to perform well without RAID and at a reasonable cost. The other consideration is that a server such as the FreeNAS setup I'm considering only has to be fast enough to stream audio and video and to support late-night backups. I'm not too worried about speed in that case.
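Just to put rough numbers on that (my own assumptions, not measurements from this thread), even a slow single drive has enormous headroom over a FLAC stream:

```python
# Rough headroom estimate with assumed round numbers (not measured values):
# a FLAC audio stream vs. the sustained throughput of a modest hard drive.
cd_audio_kbps = 1411              # raw CD audio bit rate
flac_kbps = cd_audio_kbps * 0.6   # FLAC typically ~60% of raw

drive_mb_per_s = 60               # a slow-ish drive, sustained
drive_kbps = drive_mb_per_s * 8 * 1000

print(f"FLAC stream: ~{flac_kbps:.0f} kbps")
print(f"Drive:       ~{drive_kbps:,} kbps ({drive_kbps / flac_kbps:.0f}x headroom)")
```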

Pete B.
 
Why not a 1 to 2 TB simple esata external drive for backup?
I use a 500 GB unit with USB.

Well, that's entirely too simple and inexpensive! :)

Seriously though, that may be the best approach of all. I could just get 3 new 1TB drives for the RAID setup and a 2TB external drive for backup and call it a day, for a few years at least.

BTW, here's a link to that Highpoint RAID card. It uses an Intel IOP348 chip. 300 bucks at newegg.

http://www.highpoint-tech.com/USA/rr4320.htm
 
Okay, I'm talking to myself here, but I thought I'd bring up something I found about potential problems using desktop drives in a hardware RAID array. This PDF file on the Western Digital site describes the problem and its solution, the so-called TLER feature of their RAID-edition drives. Other vendors have a similar feature with a different name. What they don't tell you is that they have a utility called WDTLER that allows you to enable this feature for desktop drives as well. That utility can be found here. It needs to be put on a bootable DOS CD, bootable DOS floppy or bootable USB drive. I found an ISO image of a bootable DOS CD with the WDTLER files here. The other alternative makes use of the "HP USB boot utility" for making bootable USB sticks. You'll need to do a search for that one. I only found it on rapidshare, so I won't post a link.
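As a side note, and not something the WD document covers: on drives that support SCT Error Recovery Control, the same timeout that WDTLER toggles can usually be read and set with smartctl from smartmontools. A minimal sketch with placeholder device names follows (assumes root and a drive that actually supports SCT ERC; on many drives the setting does not survive a power cycle):

```python
#!/usr/bin/env python3
# Hedged sketch: cap each drive's internal error-recovery time at 7 seconds
# (the value is in units of 100 ms) so a RAID controller doesn't drop a drive
# that is merely retrying a weak sector. Requires smartmontools and root;
# the device names below are placeholders for the actual array members.
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

for dev in DRIVES:
    # Set read and write error-recovery timeouts to 70 x 100 ms = 7 s ...
    subprocess.run(["smartctl", "-l", "scterc,70,70", dev], check=False)
    # ... then read the setting back to confirm the drive accepted it.
    subprocess.run(["smartctl", "-l", "scterc", dev], check=False)
```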

It looks like I'll be scaling back on my target disc capacity. 2TB external USB hard drives can be had for $190. So I think I'll get one of those, and three 1TB drives for my RAID array. That solution is less than $500 total and more than doubles my current capacity while providing full backup as well. That should hold me for several years. Anticipating the future with extremely large storage capacity leads to complex and expensive solutions. Thanks to Pete for this suggestion which, despite its simplicity, I hadn't considered before. Duh!
 
Thanks Andy, I was not aware of that issue. I wonder if the other companies have a similar capability. They must at least have server optimized versions of their drives.

Have you seen SpinRite from Steve Gibson at GRC? This is an impressive disk recovery/health check program, there's a white paper on it here - "SpinRite's Technology": http://www.grc.com/srdocs.htm
We used this at work in the mid 1980s, and I was surprised to find it on the web many years ago. It really does almost do magic, by using statistical analysis of the data on the drive.
It is an option if more than one drive in an array goes bad, or just to analyze the one bad drive.

What is interesting about that paper is that the RAID controller must not give the slow drive even one chance for a retry - or maybe it allows retries for up to 8 seconds. It seems that if what the paper says is true, then a drive that took longer than 8 seconds to repair itself could be brought back online with a simple reboot, and the array should be fine. I don't know if there is a way to clear the fault status and back the state of the RAID up one tick? It would fail again quickly if the drive was really bad, and the parity would protect the data if the same area had trouble. You'd probably want to do a full surface test if such a failure happened and you just reset the state, to maintain the redundancy. Or do a health check of the drives with something like SpinRite. People swear by this program, and I think he claims in his white paper that our drives are in a steady state of decline over time; it is not just sudden death that hits hard drives.
I've not tried his program for a health check or rejuvenation of a drive; I did "demo" test the program on a drive that was dropped. I did not see a lot in the way of health-check reports; however, people do swear by it. I have to look into the reporting status a bit more.

Also, I forgot that modern drives are virtualized, in that bad blocks are mapped out to spares, so spindle synchronization will not help in the case of a bad block. On the other hand, the penalty is just a bit of delay and loss of throughput, and the drives should not have many bad blocks, so it shouldn't happen often. Bad blocks will likely cause another seek and more rotational latency - or do they put spares in every track? I'm not sure.

I had a fairly new IBM drive that was not working well - I ran the IBM low-level diagnostics, which tried to repair/recalibrate the drive. I think it ran out of spares, as I recall. That system probably needed a new power supply, since the next drive also failed fairly quickly; then it hit me to replace the power supply.

Partly thinking out loud here, but it seems that we're working out what to do for our RAID systems. That board does look quite good, and the price can probably be justified in terms of time savings and peace of mind if you know what I mean.

Pete B.
 
Have you seen SpinRite from Steve Gibson at GRC? This is an impressive disk recovery/health check program, there's a white paper on it here - "SpinRite's Technology": http://www.grc.com/srdocs.htm
We used this at work in the mid 1980s, and I was surprised to find it on the web many years ago. It really does almost do magic, by using statistical analysis of the data on the drive.
It is an option if more than one drive in an array goes bad, or just to analyze the one bad drive.

I used that back in the '80s as well. Ruined a bunch of drives with it :). One thing it did if asked was examine the list of bad sectors as marked by the hard drive vendor, analyze them, and mark them as good again if it didn't find a problem. The trouble was, the hard drive vendor's data was much more reliable than that of SpinRite, and the sectors SpinRite marked as good really were bad. Disaster ensued. The other thing I used it for was getting the optimum interleave on those old pre-IDE drives. That really did speed things up nicely. I don't think I'd use it again though, based on those experiences. I did use the old 1:1 interleave RLL controllers back then based on his advice, and they were indeed faster than anything else I tried at the time. Ah, the not-so-good old days!

What is interesting about that paper is that the RAID controller must not give the slow drive even one chance for a retry - or maybe it allows retries for up to 8 seconds. It seems that if what the paper says is true, then a drive that took longer than 8 seconds to repair itself could be brought back online with a simple reboot, and the array should be fine.

The way I read it was that the drive might not respond at all, not even with an error message, for more than 8 seconds. Once that occurs, the controller drops it from the array. Then the array must be rebuilt, which can take days. During the rebuild, the array is vulnerable such that if the same error occurs again, all the data is lost. My interpretation was that if the drive at least returns an error message, the array is put into a temporarily bad state from which it recovers quickly without needing a rebuild.
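To put a rough number on that rebuild risk (my own assumed figures, not anything from the WD paper): at the unrecoverable-read-error rate typically quoted for desktop drives, re-reading a couple of terabytes during a rebuild has a non-trivial chance of tripping over a bad sector.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the WD paper):
# probability of hitting at least one unrecoverable read error (URE) while
# re-reading the surviving drives during a rebuild.
array_size_tb = 2.0                    # data that must be re-read to rebuild
bits_read = array_size_tb * 1e12 * 8   # TB -> bits
ure_per_bit = 1e-14                    # typical desktop-drive spec: 1 error per 1e14 bits

p_clean_rebuild = (1 - ure_per_bit) ** bits_read
print(f"Chance of at least one URE during rebuild: {1 - p_clean_rebuild:.0%}")
# Roughly 15% for 2 TB at 1e-14 - which is why a multi-day rebuild window
# on a degraded array is a real risk, not just a theoretical one.
```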

I don't know if there is a way to clear the fault status and back the state of the RAID up one tick? It would fail again quickly if the drive was really bad, and the parity would protect the data if the same area had trouble. You'd probably want to do a full surface test if such a failure happened and you just reset the state, to maintain the redundancy. Or do a health check of the drives with something like SpinRite. People swear by this program, and I think he claims in his white paper that our drives are in a steady state of decline over time; it is not just sudden death that hits hard drives.

Dunno. I suspect how this is done might be controller-specific. The Promise controller I have has a nice utility called WebPAM, which has some error checking. After rebuilding my machine after a motherboard failure, I ran the WebPAM maintenance routine, and it did find some inconsistent data. Apparently it fixed everything okay, as I never did see a problem.

I've not tried his program for a health check or rejuvenation of a drive; I did "demo" test the program on a drive that was dropped. I did not see a lot in the way of health-check reports; however, people do swear by it. I have to look into the reporting status a bit more.

Dunno about that either. I don't think I'll be messing with SpinRite again though.

Also, I forgot that modern drives are virtualized, in that bad blocks are mapped out to spares, so spindle synchronization will not help in the case of a bad block. On the other hand, the penalty is just a bit of delay and loss of throughput, and the drives should not have many bad blocks, so it shouldn't happen often. Bad blocks will likely cause another seek and more rotational latency - or do they put spares in every track? I'm not sure.

You've got me there too :).

Partly thinking out loud here, but it seems that we're working out what to do for our RAID systems. That board does look quite good, and the price can probably be justified in terms of time savings and peace of mind if you know what I mean.

One thing I noticed was that the hardware compatibility list for the Highpoint controller was not very long. Apparently the WD "Green Power" drives don't work with it. One thing I'm concerned about is whether my Promise controller, which is old and discontinued now, will recognize the new drives. Just in case, I'm getting drives that are on the compatibility list for the Highpoint controller. That way, if they don't work with the Promise controller, I'll get the Highpoint.
 
I had a feeling you might have tried it, lol. You sure do explore all the esoteric features - you should have been a beta tester. Our guys used it in the most basic mode, I'm fairly sure, and it saved us many times. We had a tech who took care of these sorts of problems.

I suggested it to a friend who lost a drive in a server at work, and somehow the backup also. It worked for them.

It seems that if it works, people tend to swear by it; if not, well, I can understand your position - you probably swear at it!

Sounds like you've got a good plan there for the Promise/Highpoint. I thought that you might want the backup drive always online for automated backups. You could put the 2TB drive in your system as well, but I understand if you want some isolation from, for example, the power supply - just in case it fails in a catastrophic way.
 
I suggested it to a friend who lost a drive in a server at work, and somehow the backup also. It worked for them.

That's good. I didn't realize it could be used for recovery purposes. I just remember years and years ago, Gibson did an article about SpinRite (maybe in Computer Shopper?). I did get some great performance improvements when finding the optimum interleave and re-low-level-formatting to that interleave. It was only when I let it mark bad sectors as good that the problems occurred. I guess low-level formatting has been a thing of the past since IDE.

About ten years ago, I was active in the dslreports.com security forum, mostly learning from the computer security pros that posted there. I found out there's a whole lot of Gibson haters there. He does have a tendency to hype stuff, and I guess that's what puts people off. Some people really have it in for him.
 
The guy seems fairly low-key as I see it. He didn't even upgrade SpinRite for NTFS until fairly recently. I think he just made it file-structure independent; it simply fixes the data as best it can. The white paper is not as good as some of the older, more technical ones that I remember.

He inadvertently insulted some young hacker on one of the security forums, who then hit his site with a DDoS attack. Gibson wrote an amazing account of it that seems factual as far as I can tell: www.crime-research.org/library/grcdos.pdf
Gibson hacked the protected sites where the hackers hung out to chat and spied on them - perhaps he's made a lot of enemies.

There might be a large number of fakes there, or intellectuals who just can't stand being outdone - I don't know.
 
I don't know what's going on with that either. It is a reputable "white hat" site though, not a bunch of hackers. I do remember reading a bunch of posts in that forum by a guy named Friedl, and thinking that the name was familiar. He turned out to be the brother of Jeffrey Friedl, the guy who "wrote the book" on regular expressions. It's a great book on a boring subject. I don't know how he managed to do such a great job on such a rotten subject :).

Anyway, I'm not trying to make excuses for that behavior, and I don't share that view myself, even though I've had problems with his software. I was just bringing it up because I've seen this opinion expressed in a number of places.
 
A software/Unix guru friend of mine suggested this as an alternative to FreeNAS:
http://www.geek.com/articles/chips/feature-linux-media-server-using-ubuntu-810-2009065/

Ack, I can't keep up with you here! :)

I only glanced at that, so I don't know if it's for hardware RAID, software RAID, or both/either. Anyhoo, in researching possible software RAID solutions, a couple of controllers that can use many SATA drives came up. Both are made by Supermicro. One is PCI-X, supported by many Linux versions (including unRAID) and supposedly works great. But you need a server motherboard to make full use of it, because of the PCI-X slot. Most server motherboards are quite expensive from what I've seen. Supermicro also makes a PCI-E version which supposedly works quite well under Windows, but Linux support is lagging. That's one of the reasons I kind of decided to just sit back and wait until all the dust settles. Meantime, I'll have doubled my disc capacity without having to deal with all this exotica. This sort of stuff used to float my boat, but it becomes tiresome over the years.

I'd love to have a storage server that backs up all my stuff automatically and has storage overhead like RAID5 or RAID6 (not full redundancy). It's possible, but not without a lot of pain. I figure in a couple of years it will all work itself out and I can just buy stuff, knowing it will work. Haha. I wish.
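For a concrete feel of what the lower overhead buys (my own example drive counts, not anything specified above), here is the quick arithmetic:

```python
# Usable capacity from 4 x 1 TB drives under full mirroring vs. parity RAID
# (example numbers chosen purely for illustration).
drives, size_tb = 4, 1.0

print(f"Mirror (RAID1/10): {drives * size_tb / 2:.1f} TB usable")
print(f"RAID5 (1 parity):  {(drives - 1) * size_tb:.1f} TB usable")
print(f"RAID6 (2 parity):  {(drives - 2) * size_tb:.1f} TB usable")
```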
 
I'd love to have a storage server that backs up all my stuff automatically and has storage overhead like RAID5 or RAID6 (not full redundancy). It's possible, but not without a lot of pain. I figure in a couple of years it will all work itself out and I can just buy stuff, knowing it will work. Haha. I wish.

Any Linux box (e.g. with webmin and mediatomb, as in the link above) will provide you with production-quality software RAID 0/1/5/6/10. The flexibility of that RAID is incomparable to even expensive hardware RAID or the software BIOS RAIDs discussed in this thread: growing the array by adding drives or replacing drives with larger ones, combining any types of drives (PATA, SATA, USB, eSATA, additional SATA controllers), fine-tuning the arrays (chunk size, chunk alignment, etc.), and letting each partition of a drive be part of a different array (typically small partitions on each drive constitute a mirror for the root filesystem, while the large data partitions are joined into RAID 5/6/10). The performance hit compared to reasonably priced HW RAIDs (not the BIOS RAIDs) is minimal. Of course a HW server RAID with 15k SAS drives and 128MB of battery-backed memory is noticeably faster, but for media storage/backups that is overkill.
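To make that concrete - this is my illustration with placeholder device names, not phofman's own commands - Linux software RAID is managed with mdadm, and the create-then-grow workflow described above looks roughly like this:

```python
#!/usr/bin/env python3
# Rough illustration (placeholder devices, run as root, wipes the listed
# partitions): create a 3-drive software RAID5 with mdadm, then grow it
# onto a fourth drive later - the kind of flexibility described above.
import subprocess

def run(cmd):
    """Print and execute one mdadm command, aborting on any error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create /dev/md0 as RAID5 over one partition from each of three disks,
# with an explicit 512 KB chunk size (one of the tunables mentioned above).
run(["mdadm", "--create", "/dev/md0", "--level=5", "--raid-devices=3",
     "--chunk=512", "/dev/sda1", "/dev/sdb1", "/dev/sdc1"])

# Later: add a fourth partition and reshape the array onto it. The reshape
# runs in the background; progress is visible in /proc/mdstat.
run(["mdadm", "--add", "/dev/md0", "/dev/sdd1"])
run(["mdadm", "--grow", "/dev/md0", "--raid-devices=4"])
```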

Our company backup server, an HP ML115 for 600 USD (entry-level double Opteron, a regular nvidia chipset), with RAID0+1 (2x750GB + 2x1.5TB SATA, upgraded from 4x750GB RAID10 just yesterday), offers 220MB/s raw-partition write speed; array synchronization to striped external eSATA drives (using the cheapest PCIe SATA controller, 45 USD) runs at 180MB/s. The hardware has been upgraded countless times over the years; the only software reinstallation was needed when switching from 32-bit to 64-bit Debian Linux, and that was as simple as getting a list of package names on the old server (one command), copying the list to the new one (one command), and installing the same packages from the 64-bit distribution (one command). The configuration directory /etc was copied over too; there were basically no changes.

For daily, fully automated backup/recovery of servers and workstations, I can recommend the wonderful BackupPC http://backuppc.sourceforge.net/index.html (again, a ready-made package in any decent Linux distribution). Its pool directory on our backup server has over ten million files consuming over 1.5TB. Even though it makes incremental backups, it can recover to your workstation, or export to an external location, any file or complete directory subtree from any backup run.
 
Thanks phofman. I've had my eyes on something that would support software RAID or equivalent using some of the newer PCI-E SATA cards that support more than four disks. One such card is the Supermicro SASLP-MV8. These are not very expensive at $100, and support 8 drives. I don't need that many disks now, but it's possible that in the future I might use an HTPC for video. I have heard the Linux drivers for this card aren't completely reliable yet, but I'd bet the situation will be much better a year from now. So I am going to sit tight for now.
 