FreeNAS 8 is pretty bad ass
besson3c
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 14, 2011, 05:59 PM
 
ZFS/FreeBSD based, pretty simple to setup and get going, automated ZFS snapshot features, a variety of sharing options...
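The periodic snapshot feature is basically a scheduler around zfs snapshot/destroy. If you wanted to roll your own outside the GUI, a minimal Python sketch would look something like this (the tank/data dataset name, the retention count, and the auto- naming scheme are all placeholders, and it just shells out to the standard zfs CLI):

import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"   # placeholder dataset name
KEEP = 24               # placeholder retention count

def take_snapshot():
    # e.g. zfs snapshot tank/data@auto-20110914-1759
    name = datetime.now(timezone.utc).strftime("auto-%Y%m%d-%H%M")
    subprocess.run(["zfs", "snapshot", DATASET + "@" + name], check=True)

def prune_old():
    # list this dataset's snapshots oldest-first, drop all but the newest KEEP
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-d", "1", DATASET],
        check=True, capture_output=True, text=True).stdout.split()
    for snap in [s for s in out if "@auto-" in s][:-KEEP]:
        subprocess.run(["zfs", "destroy", snap], check=True)

take_snapshot()
prune_old()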

Any users here?
     
Laminar
Clinically Insane
Join Date: Apr 2007
Location: Iowa, how long can this be? Does it really ruin the left column spacing?
Sep 14, 2011, 07:37 PM
 
Lounge worthy?
     
Cold Warrior
Moderator
Join Date: Jan 2001
Location: Polwaristan
Sep 14, 2011, 10:29 PM
 
Planning on being a user. I have the chassis and storage set aside, including SSDs for l2arc and zil, and just need to find the time to put it all together at work. My overall storage needs aren't huge, perhaps 8 TB or so, but the l2arc ssd is really what I'm keen on seeing in action for raw read and iops.
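When I finally get to it, bolting the cache and log devices on should be quick; something like this sketch (pool and device names are placeholders for whatever the box enumerates):

import subprocess

POOL = "tank"  # placeholder pool name

# L2ARC: add the SSD as a cache vdev
subprocess.run(["zpool", "add", POOL, "cache", "ada1"], check=True)

# ZIL: add a dedicated log vdev; mirrored, since losing an unmirrored
# log device at the wrong moment can cost you the most recent sync writes
subprocess.run(["zpool", "add", POOL, "log", "mirror", "ada2", "ada3"],
               check=True)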
     
BLAZE_MkIV
Professional Poster
Join Date: Feb 2000
Location: Nashua NH, USA
Sep 14, 2011, 11:22 PM
 
Got a FreeNAS 7 box ATM; waiting for better feature parity and volume migration before upgrading.
     
besson3c  (op)
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 15, 2011, 10:25 AM
 
Originally Posted by Cold Warrior View Post
Planning on being a user. I have the chassis and storage set aside, including SSDs for l2arc and zil, and just need to find the time to put it all together at work. My overall storage needs aren't huge, perhaps 8 TB or so, but the l2arc ssd is really what I'm keen on seeing in action for raw read and iops.

What sizes do you plan on making your L2ARC and ZIL pools?
     
Cold Warrior
Moderator
Join Date: Jan 2001
Location: Polwaristan
Sep 15, 2011, 06:06 PM
 
240 GB each, which equals the SSD capacity. But I recall reading that for every so much L2ARC, you should have a certain amount of RAM, since the L2ARC's bookkeeping lives in RAM. I'll have to dig that up to be certain.
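Rough back-of-envelope, assuming on the order of 200 bytes of ARC header per cached record (the real figure varies by ZFS version, so treat this as illustrative only):

# RAM overhead of L2ARC headers for a 240 GB cache device.
# The 200-byte header size is an assumption; it varies by ZFS version.
L2ARC_BYTES = 240 * 2**30
HEADER_BYTES = 200

for record_kib in (8, 16, 128):
    records = L2ARC_BYTES // (record_kib * 1024)
    print(f"{record_kib:>3} KiB records -> ~{records * HEADER_BYTES / 2**30:.1f} GiB RAM")

# 8 KiB records (database-ish) -> ~5.9 GiB of RAM just for headers;
# 128 KiB records (bulk files) -> ~0.4 GiB. Small records are what
# make the "RAM per GB of L2ARC" rule bite.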
     
besson3c  (op)
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 15, 2011, 07:53 PM
 
RAM is your cheapest buffer. RAM will be used for both read and write buffering until it is exhausted, so I think the general rule of thumb is always to max out your RAM since it is cheaper than SSDs. ZFS will definitely make use of whatever RAM is available.
     
mduell
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Sep 17, 2011, 03:00 PM
 
RAM is your most expensive buffer and shouldn't be used for write buffering AFAIK (writes don't count until they hit the ZIL).
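To be concrete about what "count" means: a synchronous write doesn't return to the application until ZFS has it on stable storage (the ZIL/slog), no matter how much RAM is free. In Python terms (the path is a placeholder):

import os

# A sync write only "counts" once it's on stable storage, not in RAM.
fd = os.open("/tank/db/journal", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"committed record\n")  # may still be dirty data in RAM
os.fsync(fd)  # returns only after ZFS has logged the write to the ZIL
os.close(fd)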

We use ZFS with FreeBSD 8.2 rather than FreeNAS and we haven't found L2ARCs to be terribly useful for our ZFS applications. The database servers have enough RAM to buffer everything hot and the file storage servers aren't really sensitive to the latency.

Database servers are 11x 2-drive mirrors (146 GB 15k SAS) with 2 hot spares and a mirrored 50 GB SLC SSD ZIL. We typically see about 150 GB of disk cache out of their 192 GB of RAM.

File storage servers are 3x 7-drive raidz (1-2 TB SATA) with 1 hot spare and no ZIL. They only have like 8 GB of RAM so there's not much disk caching, maybe 6.5 GB.
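For anyone doing the math at home, usable space on those layouts works out roughly like this (ignoring spares, metadata overhead, and free-space headroom; taking 2 TB as the top of the 1-2 TB range):

def mirror_pool(vdevs, drive_gb):
    return vdevs * drive_gb            # an N-way mirror yields 1 drive of space

def raidz1_pool(vdevs, drives_per_vdev, drive_gb):
    return vdevs * (drives_per_vdev - 1) * drive_gb   # 1 parity drive per vdev

print(mirror_pool(11, 146))            # database pool: ~1606 GB usable
print(raidz1_pool(3, 7, 2000))         # file pool: ~36000 GB (~36 TB) usable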
     
besson3c  (op)
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 17, 2011, 07:30 PM
 
By expensive I meant cost per gigabyte. Whether it is an effective or adequate buffer in and of itself depends on various things.

It can be hard to gauge the impact of these three buffers (ARC, L2ARC, and ZIL) though, because anything that takes a little strain off either reads or writes can help the other. The advice I have come across seems to suggest being liberal with RAM since the cost per gigabyte is cheap in many cases, although it obviously depends on what kind of RAM your server requires and its cost.
     
mduell
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Sep 20, 2011, 07:34 PM
 
RAM is still your most expensive buffer. Flash is half to a tenth the price of RAM.
     
besson3c  (op)
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 21, 2011, 12:16 AM
 
Originally Posted by mduell View Post
RAM is still your most expensive buffer. Flash is half to a tenth the price of RAM.

Do you mean USB based Flash, PCI based, or SSDs that utilize the SATA interface?

I take back what I said: the price-per-gigabyte comparison is not the right way to look at it, because they don't make a single 16 GB stick of RAM, for instance, like they do an SSD.

What I said was pretty messed up; you were right to correct me. Perhaps it would have been more accurate to say that it is pretty cheap and easy to fill the free RAM slots that are just sitting there anyway, and doing so won't limit your storage capacity, so they are as good a starting place as any. This was what I was thinking originally, I was just being dumb.
     
mduell
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Sep 21, 2011, 12:58 AM
 
Originally Posted by besson3c View Post
Do you mean USB based Flash, PCI based, or SSDs that utilize the SATA interface?
SATA, although PCIe is becoming attractive.

Originally Posted by besson3c View Post
I take back what I said: the price-per-gigabyte comparison is not the right way to look at it, because they don't make a single 16 GB stick of RAM, for instance, like they do an SSD.
What? They make plenty of 16GB RAM sticks. I bought a box with 16 of them today.

Originally Posted by besson3c View Post
What I said was pretty messed up; you were right to correct me. Perhaps it would have been more accurate to say that it is pretty cheap and easy to fill the free RAM slots that are just sitting there anyway, and doing so won't limit your storage capacity, so they are as good a starting place as any. This was what I was thinking originally, I was just being dumb.
I still don't follow. If you've got a free SATA/PCIe port you can add flash at a fraction of the price of adding RAM.
     
besson3c  (op)
Clinically Insane
Join Date: Mar 2001
Location: yes
Sep 21, 2011, 11:33 AM
 
Originally Posted by mduell View Post
What? They make plenty of 16GB RAM sticks. I bought a box with 16 of them today.
Really?! Damn, I should quit while I'm not ahead... I was looking at the NewEgg server memory page and the highest they went was 8 gig sticks. Are these available with ECC and classified as server RAM?


Originally Posted by mduell View Post
I still don't follow. If you've got a free SATA/PCIe port you can add flash at a fraction of the price of adding RAM.
But my point is that in doing so you are limiting your storage, because a SATA port can be used to increase the total size of your disk array where a RAM slot can't. Basically, you might as well use your RAM slots first, but if you have a bunch of spare SATA or PCIe ports lying around, sure, I absolutely see your point.

Do you know if people mix and match pools with disks attached to PCIe as well as disks attached to SATA ports? Is this generally considered a good idea, or is it best to put these devices in separate pools?
     
mduell
Posting Junkie
Join Date: Oct 2005
Location: Houston, TX
Sep 22, 2011, 03:15 PM
 
Originally Posted by besson3c View Post
Really?! Damn, I should quit while I'm not ahead... I was looking at the NewEgg server memory page and the highest they went was 8 gig sticks. Are these available with ECC and classified as server RAM?
Samsung announced 32GB DDR3 modules 2 years ago. 16 and 32GB sticks are still relatively expensive; they're not a mass-market thing like Newegg carries. Here are some of the options for configuring a Dell R910:
512GB Memory (32X16GB), 1066MHz, Quad Ranked LV RDIMMs for 4 Processors [add $18,470.00]
2TB Memory (64x32GB), 1066MHz, Quad Ranked LV RDIMMs for 4 Processors [add $239,952.00]
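Which works out to roughly:

# Price per GB of the two R910 memory options quoted above
for label, gb, usd in (("32x16GB", 512, 18470.00),
                       ("64x32GB", 2048, 239952.00)):
    print(f"{label}: ${usd / gb:,.0f}/GB")
# 32x16GB: ~$36/GB; 64x32GB: ~$117/GB -- density carries a steep
# premium, and both are far above flash's price per GB.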

Originally Posted by besson3c View Post
But my point is that in doing so you are limiting your storage, because a SATA port can be used to increase the total size of your disk array where a RAM slot can't. Basically, you might as well use your RAM slots first, but if you have a bunch of spare SATA or PCIe ports lying around, sure, I absolutely see your point.
Sure, you're giving up one drive slot. Most ZFS pools have enough disks that this isn't a big deal if they need a ZIL/L2ARC or two.

Originally Posted by besson3c View Post
Do you know if people mix and match pools with disks attached to PCIe as well as disks attached to SATA ports? Is this generally considered a good idea, or is it best to put these devices in separate pools?
Absolutely. We sometimes have an external chassis (via PCIe SAS HBA) for the bulk of the drives with the SATA SSD ZIL internal to the box. We could use a PCIe SSD. There's no issue with using devices on multiple interfaces.
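ZFS just sees block devices and doesn't care which bus they're on, so a create along these lines works fine (device names are placeholders; da* would be the HBA-attached drives, ada0 the internal SATA SSD):

import subprocess

# Placeholder devices: da0-da6 behind the SAS HBA, ada0 internal SATA SSD.
subprocess.run(
    ["zpool", "create", "tank",
     "raidz", "da0", "da1", "da2", "da3", "da4", "da5", "da6",
     "log", "ada0"],
    check=True)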
     
   
 