How to set up a home file server using FreeNAS

FreeNAS Protocols and Features

Fun with FreeNAS
At this point, I recommend adding disks to create a full mirror set. Any kind of vibration, whether from washing machines in the next room, from having more than one non-enterprise-class drive in the same chassis, or simply from loud noise near your server, can kill performance. I will see how things progress, and if they work out I will add them to the NIC options for all NICs. After that, your NAS will no longer be reachable from your dynamic domain. The swap size on each drive, in GiB, affects newly added disks only. Since the old key will no longer function, any old keys can be safely discarded. So I started reading the comments here.


Add up to 8 storage drives for maximum capacity.

This, of course, could be a huge problem if you later want to recover the files. When accessing password-protected shares on a remote Windows PC, you have to type in a username and password for a Windows account that's configured on the remote PC. Thus, if you wanted to keep each user's credentials secret, you'd have to create an account for each person on every computer, or at least on the ones that are sharing files.

However, because a NAS drive controls the access, having matching Windows accounts on all the computers isn't necessary.

FreeNAS supports all the popular sharing protocols. When you set up the disks, you can even enable encryption. FreeNAS also includes a web server you could set up for a local intranet, or open up to the Internet with a port forward. Perhaps a naive assumption on my part? I have since tried other settings: I will try running the benchmarks again with the AC turned off! I followed your guide, and even tried compiling drivers. I'm not sure that version would matter.

Good to know that you need to use hardware version 11 for the guest tools to work. Thanks for the great blog, Ben, really useful. How do you like Plex? First off, great blog and great article, very informative and helpful. This is also the 5th time I've started from scratch. Any suggestions on what I should look for would be greatly appreciated. One thing you might try is going to an older version of VMware 6, or an older version of FreeNAS (not to run in production, obviously, just to troubleshoot).

Also, see if the vmxnet3 drivers work with a stock FreeBSD 9. Apologies, I forgot to reply back here to let you know I got it working. Oddly enough, I ended up starting from scratch a 6th time, not changing anything about how I was installing everything compared to the previous attempts, and for whatever reason it worked this 6th time. So, everything is working with VMware 6.

Glad to hear you got it working. Thanks for the report on that version; I might remove my warning note and flash it myself soon.

I've been running OmniOS for months; it works like a champ. I posted some of the gory details at the FreeNAS forum. Thanks for posting that; hopefully someone from FreeNAS can respond there to give some insight into the issue. Ahem… well, I was only half right. I was able to duplicate your problem.

And on iSCSI, multiple instances of this error. Disabling segmentation offloading appears to have fixed the issue for me. You can turn it off with a sysctl or per-interface tunable. Note that this setting does not survive a reboot, so if it does help, add it to the post-init scripts or the network init options under FreeNAS.
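The exact command is not preserved above; as a sketch, on FreeBSD (which FreeNAS is built on), TSO can typically be disabled like this. The interface name `vmx0` is an assumption for a VMXNET3 NIC, not taken from the original post:

```shell
# Disable TCP segmentation offload globally (takes effect immediately,
# but does not survive a reboot):
sysctl net.inet.tcp.tso=0

# Or disable TSO on a single interface (vmx0 is a placeholder for
# your VMXNET3 interface name):
ifconfig vmx0 -tso

# To persist across reboots on FreeNAS, add "-tso" to the interface
# options in the web UI, or add the sysctl under System -> Tunables.
```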

Two of my 3 hosts had Delayed ACK enabled, and they went down with the above issue. The one that did not stayed up; however, it was under the least amount of load. My environment is used 24 hours a day. Basically, for me everything was stable on FreeNAS 9. I suspect the new performance features introduced in the 9.x line.

Thanks for the confirmation of the issue, Richard. I remembered that at work we had to disable segmentation offloading because of a bug in the 10Gb Intel drivers, which had not been fixed by Intel as of February at least, and it may be the same issue in the VMXNET3 driver. See my comment above responding to Keith and let me know if that helps your situation at all. So, since reliability is more important to me than performance, I have reverted to the somewhat slower E NIC drivers. This would let me restore synchronous writes on the VMware datastores without sacrificing too much performance.

There have been a slew of updates and bug fixes since I started testing it in early May. I also have link aggregation configured; however, I am only using fault-tolerant teams, so this should be safe.

The syslog showed repeated reload messages:

Jun 21: Configuration reload request received, reloading configuration
Jun 23: Configuration reload request received, reloading configuration
Jun 24: Configuration reload request received, reloading configuration

I will see how things progress, and if they work out I will add them to the NIC options for all NICs. This may not be the issue in your environment, but it's worth a shot. The systems I am having issues with are our non-critical production and testing systems.

We are a Cisco Catalyst house for the network, but I went back to the default MTU fairly early on, as there was a suggestion that larger frames might not be handled equally by all systems and drivers. I do wish FreeNAS would ease up on the frequent updates a bit and work on the stability aspect.

An impressive little system anyway, and I do appreciate all the hard work that goes into it, plus the community that comes with it. Thanks for the confirmation on the E. I just tried a verify install on mine and it found no inconsistencies, so the older FTP source probably caused it.

On the other hand, FreeNAS is pretty quick at fixing issues found by a very wide and diverse user base. I stay a little behind on the FreeNAS updates to be on the safe side. There were no errors on the syslog console, but as soon as we powered off the FreeNAS box, everything else sprang into life. I had the opportunity to speak to a senior VMware engineer about this last week, who advised me that anything over 30 indicates an issue. I may go back to 9. Thanks for the update, Rich.

So I started reading your comments, and all the discussion on the FreeNAS forums you linked to. I got a bit confused as to what your actual conclusions were. Am I right in assuming that the following is what you came up with? My guess is this will make the largest difference. So maximize the number of vdevs; mirrors will get you the most performance.

If you have 18 disks, you could also consider 3 vdevs of 6 disks in RAID-Z2, but mirrors would be far better. Hi, I found your blog via a Google search and am reading a few posts now.
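To illustrate the trade-off, here is a rough sketch of the two 18-disk layouts discussed above. The 4 TB disk size is a placeholder, and the comparison relies on the usual rule of thumb that each vdev contributes roughly one disk's worth of random write IOPS:

```python
# Rough comparison of 18 disks laid out as 9 mirror vdevs vs.
# 3 RAID-Z2 vdevs of 6 disks each. Disk size is a placeholder.
DISKS = 18
DISK_TB = 4

# 9 two-way mirrors: half the raw space goes to redundancy,
# but every vdev adds random-IOPS capacity.
mirror_vdevs = DISKS // 2                            # 9 vdevs
mirror_usable_tb = mirror_vdevs * DISK_TB            # 36 TB usable

# 3 x RAID-Z2 (6 disks, 2 parity each): more usable space,
# but only 3 vdevs' worth of random IOPS.
raidz2_vdevs = 3
raidz2_usable_tb = raidz2_vdevs * (6 - 2) * DISK_TB  # 48 TB usable

print(f"mirrors: {mirror_vdevs} vdevs, ~{mirror_usable_tb} TB usable")
print(f"raidz2 : {raidz2_vdevs} vdevs, ~{raidz2_usable_tb} TB usable")
```

RAID-Z2 yields more capacity, but for VM workloads the 9 mirror vdevs deliver roughly three times the random IOPS, which is why mirrors come out "far better" here.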

It happens that you have quite a similar all-in-one system to mine, except I use an all-SSD environment. I also set up VMXNET3 using the binary driver, so we are actually on the very same road of finding an optimized way to set up the system. Who else faces this problem with FreeNAS 9? Using the sysctl tweak kern.… We might need to wait till FreeNAS… At the first few updates, FreeNAS 9… Hi, abcslayer, sounds like you like ZFS for the same reason I do. Thanks for the info on the coalesce setting.

Hopefully iX can track it down. Ben, do you have any standard test that one can run to validate performance with OmniOS? Hoping to do an apples-to-apples comparison.

I posted the sysbench script that I use in the comments here. Hope that helps; let me know if I can give more details on the test.
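The script itself isn't reproduced here; as a rough sketch of how a sysbench file-I/O benchmark is typically run (the path and sizes are placeholders, not the author's actual script):

```shell
# Prepare a set of test files on the mount under test
# (the path and the 4G total size are placeholders):
cd /mnt/test
sysbench fileio --file-total-size=4G prepare

# Run a 60-second random read/write test with an fsync on every
# write, which stresses sync-write (ZIL/SLOG) performance:
sysbench fileio --file-total-size=4G --file-test-mode=rndrw \
  --file-fsync-all=on --time=60 run

# Clean up the test files afterwards:
sysbench fileio --file-total-size=4G cleanup
```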

Not enough to make a big difference. Also, I have passed through a Supermicro controller that is running firmware V… Here are some of the tests, with a similar setup to what you mention. One difference is that you are on firmware P20 where I was on P… You may want to consider downgrading. There are a million other things that could explain the difference, but the fact that Q1 is slower while Q32 is not almost points to a latency or clock-frequency difference. It could also be any number of other things. Thanks for pointing out the firmware!

I was reading the FreeNAS forums, and it seems they recommend P20, so I installed that. I removed it and put on P… So perhaps it's something with the OS or version, who knows? May I ask why you chose to stripe your SLOG?

Have you found a big difference in doing so? Also, have you tried NVMe drives, or do you think that is overkill? I get a little more performance out of a striped log, so I run that way on my home storage.

For mission-critical storage I always mirror, or do a stripe of mirrors. I had the opportunity to try out an S…; it seems like it has a bit of an advantage over the S… Could be due to the size as well. Scores are now much more in line, and sometimes better. So your SLOG setup definitely had an impact on your benchmarks. The size will make a big difference, especially on sequential writes. What size did you end up getting?
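For reference, the difference between the two SLOG layouts discussed here looks like this in ZFS terms; the pool name `tank` and the device names are placeholders, not the author's actual setup:

```shell
# Striped SLOG: both SSDs absorb log writes for more throughput,
# but losing either device while the pool holds uncommitted sync
# writes risks losing those writes (acceptable for a home lab):
zpool add tank log da1 da2

# Mirrored SLOG: roughly half the write throughput of the stripe,
# but the log survives a single SSD failure (use for
# mission-critical pools):
zpool add tank log mirror da1 da2
```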

I ended up getting the larger S… The two models differ on sequential write rates, but the S… will do higher random writes.

Both are faster than the …GB S…, which has lower sequential write and far lower random write. So, based on some FreeNAS forum posts, I switched to iSCSI and now I hit the NOP issue as well… and found this post. Do you have the latest FreeNAS and VMware 6?

I'll try the init script in FreeNAS and report back. To get the system out of this state, I either need to reboot, or connect to the ESXi host terminal and issue a couple of commands; this brings things back to life for me. I never experience what you are describing on boot. But perhaps something is going on in the background where your mount is coming online, disconnecting, and then coming back online again, causing ESXi to put the share into an inaccessible state.
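The commenter's exact commands aren't preserved; one common way to recover an inaccessible NFS datastore from the ESXi shell is to remove and re-add the mount. The datastore name, server address, and export path below are placeholders:

```shell
# List NFS mounts to identify the inaccessible datastore:
esxcli storage nfs list

# Remove and re-add the mount (names and addresses are placeholders):
esxcli storage nfs remove -v freenas-nfs
esxcli storage nfs add -H 10.0.0.10 -s /mnt/tank/vmware -v freenas-nfs

# Rescan so ESXi picks the datastore up again:
esxcli storage filesystem rescan
```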

Ben, you are a true geek! Glad that worked, Reqlez. If you get the D, be sure to let me know how it works for you; being able to go to a larger RAM capacity, even 64GB, really frees up a lot of memory constraints.

Also, having those 4 extra cores really helps with CPU contention between VMs if your server is loaded up, especially for these all-in-one setups. Also, it seems like there are a few revisions of the firmware, and they give no indication as to which version or revision is considered safe. The one from last year apparently had some complications. My card has an LSI chip on it. Where did you download it? For your model you can download here… but you might have to cross-flash somehow?

Thanks for the info, and also for confirming that 04 is working for you. I will stick with firmware 16 until Supermicro releases the 04 version for P… Does anybody know if there is a way to store the swap file on the pool? That said, I mirror my boot drives.

Even with my method, if the drive housing the configuration files (e.g.…) fails, the only way around that is hardware RAID. I would prefer doing it on my cheap 6TB array, since there is lots of space and I can make a big swap file just in case. As for swap usage with lots of RAM… maybe Unix is learning from Microsoft Exchange? You can always have a backup pfSense server running on the pool.

Hi folks, I have a MicroServer Gen8 and I would like to experiment with sharing storage back to the hypervisor. As this is a home lab, I do not have 2 good SSDs, nor a rack machine where I could put a lot of other hard drives as well.

I want to experiment, but later on my configuration should work as my home NAS server and a place I can put my test VMs. Since you mentioned you want a safe config, I should mention that running on VMDKs is generally not recommended. I was just wondering: as far as FreeNAS is concerned, passing RAID volumes to it for data storage is asking for issues… Can you clarify more what you are trying to do? I am aware of the possible issues with virtualised FreeNAS, but it has been pointed out by Ben and various other sources that these concerns may be outdated.

You have checksums on every data block, you can set up weekly scrubs to make sure your data is not corrupt, and you can enable the VMware-snapshot feature in FreeNAS and then run snapshots on the ZFS dataset that you provide to ESXi, so you get a filesystem-consistent, and sometimes even application-consistent, snapshot that you can restore to. Worked like a charm. I noticed during this that my motherboard supports software RAID.
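The scrub-and-snapshot routine described above can be sketched with standard ZFS commands; the pool, dataset, and snapshot names are placeholders (in FreeNAS these tasks are normally scheduled through the web UI rather than run by hand):

```shell
# Verify every block's checksum across the pool (schedule weekly):
zpool scrub tank
zpool status tank        # shows scrub progress and any errors found

# Snapshot the dataset exported to ESXi before a risky change:
zfs snapshot tank/vmware@before-upgrade

# Roll back to that point in time if something goes wrong:
zfs rollback tank/vmware@before-upgrade
```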

In your opinion, would it be a bad idea to just software-RAID the drive that holds OmniOS and everything else? Unless I missed a very important announcement. What I mean by software RAID is the Intel motherboard RAID. So it looks like I am running the optimum setup. I just need to back up the ESXi config. Let me know if you run into any issues. Is it possible to only use the onboard SATA ports?

Yes, you can use it to experiment. Any thoughts on Solaris? You might be able to solve that error by disabling the read-ahead cache, but there is no guarantee that it will work, and if it does, you may run into other stability issues. From start to finish! Just remember that you want firmware version 20 for that card; you can get it directly from the Avago site. Where could I download firmware version 20? From the Avago website?

There are so many firmwares: http:… Sorry for the many posts. Ping from FreeNAS… They are two networks. If you set the netmask to… On my storage network I like to give all my storage servers a… Hi Ben, first of all, thanks for all the wonderful guides. I found your blog after searching about the HP MicroServer Gen8.
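The specific addresses in this exchange were lost, but the netmask point can be illustrated with Python's `ipaddress` module (every address below is a made-up example): two hosts can only reach each other directly when their address/netmask combinations put them on the same network.

```python
import ipaddress

# Hypothetical addresses for a FreeNAS box and an ESXi host on a
# dedicated storage network (all values are made-up examples):
freenas = ipaddress.ip_interface("10.55.0.2/24")
esxi = ipaddress.ip_interface("10.55.0.3/24")

# Same /24 network -> direct reachability, no router needed:
print(freenas.network == esxi.network)

# A mismatched netmask on one side puts the hosts on different
# networks, a classic cause of one-way ping failures:
esxi_wide = ipaddress.ip_interface("10.55.0.3/16")
print(freenas.network == esxi_wide.network)
```

The first comparison prints `True` and the second `False`, which is why checking netmasks on both ends is the first step when pings between storage hosts fail.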

I need some help, because the more I read, the harder it gets to decide: can I build my server with this configuration? I'm not expecting heavy loads. HP MicroServer or Supermicro? The HP chassis has custom cutouts for the ports, so it will probably only ever fit the motherboard that comes with it.

If you decide to go Supermicro, you have quite a few options. You can buy a pre-built server like http:… Thank you very much, Ben. I'm starting from scratch, so I don't have any HP hardware at all. To put things in perspective, I ran a Minecraft server on VMDKs without issue. Best practice is to run VT-d. This configuration was the cheapest on Black Friday, so I'm starting with 10GB RAM and 2x2TB WD Reds, no VT-d. Do you think that 10GB RAM is sufficient for 2 VMs with FreeNAS?

Forgot to mention the price. I can't see anything in this class which can beat that price in my country. Price and availability in your country are certainly considerations. If something were to break, it would be easier to source a part from a vendor in your country instead of having it shipped internationally. I finally got my FreeNAS up and running, but ran into an issue and am wondering if I could get some further guidance.

HP MicroServer Gen8, EL 2.… CIFS write speed degrades for large files. Memory usage is about 6GB out of the 8GB dedicated to FreeNAS. TSO offloading is disabled. While this is happening with CIFS, an NFS share mounted as a datastore on ESXi is, on the other hand, able to cope with large files without any issue. I suspect there is something amiss in the CIFS configuration.

It seems like the ARC buffer is filling up and is not being flushed in a timely manner. I fail to understand that. It would be really nice to try if you had a SAS HBA… maybe Ben has more experience with FreeNAS than me and can provide some input to troubleshoot. Sorry, I just noticed you said NFS works fine. It's a home lab server and not intended for any serious business. Sync was set to standard, but changing it to disabled resulted in files getting transferred. The write speed drops down to 26MBps gradually, with no sudden drops thereafter.

So what does that prove? How do I go about improving things from here? On a separate note, how does ARC optimization work in FreeNAS? By the looks of the FreeNAS report, it seems as if the used ARC size is flushed occasionally rather than frequently. I would do random I/O tests. Maybe do the tests with sync set to standard and with sync disabled?
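A sketch of the sync comparison suggested here, using hypothetical pool and dataset names; note that `sync=disabled` discards sync-write guarantees and is only appropriate for testing:

```shell
# Check the current sync setting on the dataset (names are placeholders):
zfs get sync tank/cifs

# Benchmark once with the default behaviour (honour sync requests):
zfs set sync=standard tank/cifs

# ...run the same large-file copy, then disable sync and repeat.
# WARNING: sync=disabled acknowledges writes before they reach
# stable storage -- testing only:
zfs set sync=disabled tank/cifs

# Restore the inherited default when done:
zfs inherit sync tank/cifs
```

If the disabled run is dramatically faster, the bottleneck is sync-write latency, which points at the ZIL/SLOG device rather than the CIFS configuration.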

For you, I think getting a better SSD for the ZIL should be the first priority to get your write speeds performing better, and if you have issues with read performance, maybe more memory for ARC. For servers at work we also find it fairly inexpensive to equip them with plenty of RAM; the thing with using an SSD for L2ARC is that it uses up a slot in your chassis that could be used for more spindles, and those slots are not cheap. Just to be sure: I will use this as an ESXi datastore to host the FreeNAS VM.

The RAM is still underutilized, as per the pretty graphs. Got the DC series, fast as hell. Cancelled my order of the Samsung EVO. The Intel has the same SSD controller as the S3X series. Still a bit apprehensive about the Samsung SM, but its statistics are making me drool over it. The random write performance gain matches that of the S…, and sequential write is twice as good for half the price. Glad to hear OmniOS is working well for you. OmniOS is a very good platform. By the looks of the following comparison, the Samsung SM beats the Intel DC in almost all statistics.

The iostats show throughput on the 2x WD Red 3TB drives to be roughly …KBps; that is probably the throughput of all your spinners combined. Just to get an idea, what kind of write throughputs have you been getting on the Intel SSDs?

I did record a sequential write benchmark on my https:… OmniOS seems to get much better striping performance. The Intel is on its way. Performance aside, it turns out Intel and LSI go together hand in hand. Why would you get that one when Ben recommends the other? Personally, I am using one that gave me better results than the SM. Because it is an enterprise drive, and also the successor to the earlier model. I'm not sure about power-loss protection, latency, or endurance on it. Let us know how your tests go.

With that said, it has the same SSD controller as the Intel S3X series, higher write throughput (even more than the S…), and built-in power-loss protection, which surely is the icing on top. Definitely looks like some kind of contention issue though. I mean, there are some sync writes in play while using CIFS, like filesystem changes, but that's pretty minuscule; unless he is writing many small 4KB files into the dataset (and in this case he is writing one big file), ZIL usage will be pretty low in that scenario with CIFS, assuming sync is set to standard.

Does this make sense? Head over to DynDNS to set up a free account. In small writing at the bottom is the link to set up your free account for up to 2 hostnames. Enter your desired hostname (you can select various domains from the drop-down), then click on the blue text that lists your IP; this will automatically fill in your IP address.
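Once the account exists, the hostname has to be kept pointed at your current IP. FreeNAS has a built-in Dynamic DNS service for this, but as a sketch, an update can also be sent by hand with the standard dyndns2 update protocol; the username, password, hostname, and IP below are all placeholders:

```shell
# Push a manual update to DynDNS (all credentials and names here
# are made-up examples; a response beginning with "good" means
# the hostname was updated):
curl --user myuser:mypassword \
  "https://members.dyndns.org/nic/update?hostname=myhome.dyndns.org&myip=1.2.3.4"
```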

For mine, I accessed the router configuration at its local address. The username is the one you set up earlier, and the domain is the address you chose at DynDNS.

If all went well, you should be presented with a straightforward overview of the whole filesystem, so you can now drill down into the precise folder or share you want access to.

Congratulations, you now have complete access to your shares from anywhere in the world! We have really only just scratched the surface of how powerful FreeNAS is — I hope to highlight some of its other features in the future, so stay tuned.

I hope you also tried copying a file across to see how blazingly fast it is. Problems and comments are welcome as always, but if your problem is really technical, you might get a better response by crowdsourcing it to our tech answers site.


FreeNAS is a free, open source, BSD-based operating system that can turn any PC into a rock-solid file server. Today I'm going to walk you through a basic installation, setting up a simple file share, and setting things up so you can access your files from anywhere over the Internet. FreeNAS offers a stable platform for home and office use; from a simple file server to a connected media hub, it's possible to configure FreeNAS for a wide range of roles. And it is more than just storage: plugins allow you to use your FreeNAS system for much more than holding data. Create the ultimate entertainment device using the Plex plugin, set up a personal cloud with OwnCloud, or host your own services.