FreeNAS 8 is pretty bad ass
Clinically Insane
Join Date: Mar 2001
Location: yes
ZFS/FreeBSD based, pretty simple to set up and get going, automated ZFS snapshot features, a variety of sharing options...
Any users here?
Moderator
Join Date: Jan 2001
Location: Polwaristan
Planning on being a user. I have the chassis and storage set aside, including SSDs for L2ARC and ZIL, and just need to find the time to put it all together at work. My overall storage needs aren't huge, perhaps 8 TB or so, but the L2ARC SSD is really what I'm keen on seeing in action for raw reads and IOPS.
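For what it's worth, bolting the SSDs onto an existing pool is a one-liner each. A sketch only; the pool name "tank" and the da8/da9/da10 device names are made up:

```shell
# Hypothetical pool "tank"; FreeBSD device names are examples only.
zpool add tank log mirror da8 da9   # mirrored ZIL (SLOG) on two SSDs
zpool add tank cache da10           # single L2ARC cache device
zpool status tank                   # the log and cache vdevs show up here
```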
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Got a FreeNAS 7 box ATM; waiting on better feature parity and volume migration before upgrading.
Clinically Insane
Join Date: Mar 2001
Location: yes
Originally Posted by Cold Warrior
Planning on being a user. I have the chassis and storage set aside, including SSDs for L2ARC and ZIL, and just need to find the time to put it all together at work. My overall storage needs aren't huge, perhaps 8 TB or so, but the L2ARC SSD is really what I'm keen on seeing in action for raw reads and IOPS.
What sizes do you plan on making your L2ARC and ZIL devices?
Moderator
Join Date: Jan 2001
Location: Polwaristan
240 GB each, which equals the SSD capacity. But I recall reading that for every so much L2ARC, one should have a certain amount of RAM, since the L2ARC's headers are kept in RAM. I'll have to dig that up to be certain.
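The rule of thumb exists because every record cached in L2ARC needs a small header held in ARC (RAM). A back-of-the-envelope estimate for a 240 GB L2ARC, assuming roughly 70 bytes of header per cached record (the exact figure varies by ZFS version, so treat it as a rough number):

```shell
# Estimate ARC (RAM) overhead for L2ARC headers on a 240 GB cache device.
# ~70 bytes/record is an assumption; the real value depends on ZFS version.
l2arc_bytes=$((240 * 1024 * 1024 * 1024))
for recsize in 8192 131072; do            # 8K (e.g. databases) vs the 128K default
  records=$((l2arc_bytes / recsize))
  echo "recordsize=${recsize}: ~$((records * 70 / 1024 / 1024)) MB of RAM for headers"
done
```

So with small records a 240 GB L2ARC can eat a couple of GB of ARC all by itself, which is where the "so much RAM per so much L2ARC" advice comes from.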
Clinically Insane
Join Date: Mar 2001
Location: yes
RAM is your cheapest buffer. RAM will be used for both read and write buffering until it is exhausted, so I think the general rule of thumb is always to max out your RAM first, since it is cheaper than SSDs. ZFS will definitely make use of whatever RAM is available.
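On FreeBSD you can watch how much of that RAM ZFS is actually using; these sysctls are present on stock FreeBSD/FreeNAS 8 (the values are obviously system-dependent):

```shell
# Current ARC size and its ceiling, in bytes:
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.c_max
# The cap is tunable via vfs.zfs.arc_max (set it in /boot/loader.conf):
sysctl vfs.zfs.arc_max
```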
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
RAM is your most expensive buffer and shouldn't be used for write buffering AFAIK (writes don't count until they hit the ZIL).
We use ZFS with FreeBSD 8.2 rather than FreeNAS and we haven't found L2ARCs to be terribly useful for our ZFS applications. The database servers have enough RAM to buffer everything hot and the file storage servers aren't really sensitive to the latency.
Database servers are 11x 2-drive mirrors of 146 GB 15k SAS with 2 hot spares and a mirrored 50 GB SLC SSD ZIL. We typically see about 150 GB of disk cache out of their 192 GB of RAM.
File storage servers are 3x 7-drive raidz of 1-2 TB SATA with 1 hot spare and no ZIL. They only have like 8 GB RAM so there's not much disk caching, maybe 6.5 GB.
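For anyone following along, the usable capacities of those two layouts work out as below (simple arithmetic, ignoring filesystem overhead, and assuming 2 TB drives in the raidz case):

```shell
# Database pool: 11 two-way mirrors of 146 GB; spares and ZIL add no capacity,
# and each mirror contributes the capacity of a single drive.
echo "db pool: $((11 * 146)) GB usable"
# File pool: 3 seven-drive raidz vdevs of 2 TB; raidz1 loses one drive per vdev.
echo "file pool: $((3 * (7 - 1) * 2)) TB usable"
```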
Clinically Insane
Join Date: Mar 2001
Location: yes
By expensive I meant cost per gigabyte. Whether it is an effective or adequate buffer in and of itself depends on various things.
It can be hard to gauge the impact of these three buffer options though, because anything that is taking a little strain off either reads or writes could help the other. The advice I have come across seems to suggest being liberal with RAM allocation since the cost per gigabyte is cheap in many cases, although it obviously depends on what kind of RAM your server requires and its cost.
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
RAM is still your most expensive buffer. Flash is half to a tenth the price of RAM.
Clinically Insane
Join Date: Mar 2001
Location: yes
Originally Posted by mduell
RAM is still your most expensive buffer. Flash is half to a tenth the price of RAM.
Do you mean USB based Flash, PCI based, or SSDs that utilize the SATA interface?
I take back what I said, the price per gigabyte thing is not the right way to look at it, because they don't make a single 16GB stick of RAM, for instance, like they do an SSD.
What I said was pretty messed up; you were right to correct me. Perhaps it would have been more accurate to say that it is pretty cheap and easy to fill the free RAM slots that are just sitting there anyway, and doing so won't limit your storage capacity, so they are as good a starting place as any. That was what I was thinking originally; I was just being dumb.
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Originally Posted by besson3c
Do you mean USB based Flash, PCI based, or SSDs that utilize the SATA interface?
SATA, although PCIe is becoming attractive.
Originally Posted by besson3c
I take back what I said, the price per gigabyte thing is not the right way to look at it, because they don't make a single 16GB stick of RAM, for instance, like they do an SSD.
What? They make plenty of 16GB RAM sticks. I bought a box with 16 of them today.
Originally Posted by besson3c
What I said was pretty messed up; you were right to correct me. Perhaps it would have been more accurate to say that it is pretty cheap and easy to fill the free RAM slots that are just sitting there anyway, and doing so won't limit your storage capacity, so they are as good a starting place as any. That was what I was thinking originally; I was just being dumb.
I still don't follow. If you've got a free SATA/PCIe port you can add flash at a fraction of the price of adding RAM.
Clinically Insane
Join Date: Mar 2001
Location: yes
Originally Posted by mduell
What? They make plenty of 16GB RAM sticks. I bought a box with 16 of them today.
Really?! Damn, I should quit while I'm not ahead... I was looking at the NewEgg server memory page and the highest they went was 8 gig sticks. Are these available with ECC and classified as server RAM?
Originally Posted by mduell
I still don't follow. If you've got a free SATA/PCIe port you can add flash at a fraction of the price of adding RAM.
But in doing so you are limiting your storage, is my point, because a SATA port can be used to increase the total size of your disk array where a RAM slot won't. My point was basically that you might as well use your RAM slots, but if you have a bunch of spare SATA or PCIe ports laying around, sure, I absolutely see your point.
Do you know if people mix and match pools with disks attached to PCIe as well as disks attached to SATA ports? Is this generally considered a good idea, or is it best to put these devices in separate pools?
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Originally Posted by besson3c
Really?! Damn, I should quit while I'm not ahead... I was looking at the NewEgg server memory page and the highest they went was 8 gig sticks. Are these available with ECC and classified as server RAM?
Samsung announced 32GB DDR3 modules 2 years ago. 16 and 32GB sticks are still relatively expensive; they're not a mass-market thing like Newegg carries. Here's some of the options for configuring a Dell R910:
512GB Memory (32X16GB), 1066MHz, Quad Ranked LV RDIMMs for 4 Processors [add $18,470.00]
2TB Memory (64x32GB), 1066MHz, Quad Ranked LV RDIMMs for 4 Processors [add $239,952.00]
Originally Posted by besson3c
But in doing so you are limiting your storage, is my point, because a SATA port can be used to increase the total size of your disk array where a RAM slot won't. My point was basically that you might as well use your RAM slots, but if you have a bunch of spare SATA or PCIe ports laying around, sure, I absolutely see your point.
Sure, you're giving up one drive slot. Most ZFS pools have enough disks that this isn't a big deal if they need a ZIL/L2ARC or two.
Originally Posted by besson3c
Do you know if people mix and match pools with disks attached to PCIe as well as disks attached to SATA ports? Is this generally considered a good idea, or is it best to put these devices in separate pools?
Absolutely. We sometimes have an external chassis (via PCIe SAS HBA) for the bulk of the drives with the SATA SSD ZIL internal to the box. We could use a PCIe SSD. There's no issue with using devices on multiple interfaces.
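A sketch of that kind of mixed layout, with every device name hypothetical: external-chassis disks showing up as da0-da14 via the SAS HBA, and internal SATA SSDs as ada0/ada1:

```shell
# Bulk drives live in the external SAS chassis; the log mirror is internal SATA SSD.
# ZFS doesn't care that the vdevs sit on different interfaces.
zpool create tank \
    raidz da0 da1 da2 da3 da4 da5 da6 \
    raidz da7 da8 da9 da10 da11 da12 da13 \
    spare da14 \
    log mirror ada0 ada1
```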