I used the Radxa piNAS. It worked, but bridging SATA off USB isn't ideal, so I recommend avoiding any home-brew solution which claims SATA but is USB-connected.
Secondly, don't skimp on the disks. SSD or HDD, buy the best you can. I ran with shucked WD SATA portables and the failure rate over two years was high. I now run with Patriot SSDs, but I have a sense I underspecced.
Hard disks and SSDs run hot, a lot hotter than you might think, and cooling is noisy. So if your dream is a passive, fanless build, be warned: things doing storage just run hot, even with no rotating disks. Hot hard disks will be happier if the temperature is stable; they get a shorter life, but it beats cycling temps. Hot SSDs seem just to be hot.
Pick a unit which can run TrueNAS, and start on SCALE, not CORE, because CORE is dying out. If you want a BSD NAS, look at Sylve. SCALE does Docker and VMs. I run CORE because I wanted BSD. I am now on Intel; the Pi wasn't strong enough to do NAS and virtuals.
Even if you don't want TrueNAS, pick a unit which can run it, because that means it's fully generic. Packaged NAS solutions from the hardware vendor can lock you in.
I truly believe ZFS beats the alternatives: snapshots, good redundancy, monitoring, good tooling. That's why I went BSD and TrueNAS. The Linux ZFS story has got better.
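For instance, the monitoring side can be as small as a cron job around `zpool status -x`, which prints "all pools are healthy" when nothing is wrong. A minimal sketch; the alerting hook is a placeholder, wire in whatever you actually use:

    import subprocess

    # `zpool status -x` prints "all pools are healthy" when every pool
    # is fine, and prints details only for troubled pools otherwise.
    out = subprocess.run(["zpool", "status", "-x"],
                         capture_output=True, text=True, check=True)
    if out.stdout.strip() != "all pools are healthy":
        # Placeholder: send mail, post to chat, page someone, etc.
        print("ZFS pool problem:\n" + out.stdout)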
It's a myth that you can't run ZFS without ECC, but it's better with ECC.
It's a myth that you can't run ZFS with less than 4GB of RAM, but if you want to run virtuals and avoid stalls under write load, you want more than 4GB.
It's a myth that you can run dedupe on low-end hardware; it's a lot of work for little benefit, and the default compression in ZFS is good. I've never bothered with SLOG disks at home, but for write-intensive work they help. (We do at work: mirrored SSDs for the log, HDDs for the big space.)
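The dedup cost is easy to ballpark: ZFS keeps a dedup table (DDT) entry per unique block, commonly estimated at around 320 bytes of RAM each. A back-of-envelope sketch with illustrative numbers, not measurements:

    # Rule-of-thumb DDT memory estimate; all numbers are assumptions.
    DDT_ENTRY_BYTES = 320          # widely quoted per-entry estimate
    POOL_TB = 4                    # hypothetical pool size
    RECORDSIZE = 128 * 1024        # ZFS default 128K records

    blocks = POOL_TB * 1024**4 / RECORDSIZE
    ddt_gib = blocks * DDT_ENTRY_BYTES / 1024**3
    print(f"~{ddt_gib:.0f} GiB of RAM for the DDT on a {POOL_TB} TB pool")
    # ~10 GiB here, and far more with smaller records -- on a 4-8GB
    # mini-NAS that's the whole machine, which is why dedup is a bad trade.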
You still need 3-2-1 backup. Offline ZFS snapshots work for me, with some cloud for the third leg.
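The snapshot-plus-replicate part is scriptable with nothing but the stock `zfs` CLI. A minimal sketch, assuming a hypothetical source dataset tank/data and a backup pool named backup; the first run sends a full stream, later runs would use an incremental `zfs send -i` from the newest common snapshot:

    import subprocess
    from datetime import datetime, timezone

    SRC = "tank/data"    # hypothetical source dataset
    DST = "backup/data"  # hypothetical dataset on the backup pool

    # 1. Take a timestamped snapshot.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    snap = f"{SRC}@auto-{stamp}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # 2. Pipe `zfs send` into `zfs receive` on the backup pool.
    #    -F lets receive roll the target back to the last common state.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", DST],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")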
The post from earlier today about Intel N100-based tiny NAS units looks interesting. They target SSDs, 4 to 6 of them, they look generic, and they can take bigger memory configurations.
Building your own is for fun, not for things you depend on, unless that's unavoidable. It's probably less reliable unless you really invest time and effort.
Tune ZFS for your file sizes. There's heaps of stuff online about optimal recordsize settings for Postgres and the like.
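Postgres is the classic example: it does 8K page I/O, so the usual advice is to drop recordsize on the data directory's dataset from the 128K default to 8K or 16K. A sketch; the dataset name and values are placeholders, and note recordsize only affects newly written files:

    import subprocess

    DATASET = "tank/pgdata"  # hypothetical dataset for the PG data dir

    # Common starting points for a Postgres dataset; tune per workload.
    for prop, value in [("recordsize", "16K"),
                        ("compression", "lz4"),
                        ("atime", "off")]:
        subprocess.run(["zfs", "set", f"{prop}={value}", DATASET],
                       check=True)

    # Confirm what actually took effect.
    subprocess.run(["zfs", "get", "recordsize,compression,atime", DATASET],
                   check=True)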
> I recommend avoiding any home-brew solution which claims SATA but is USB-connected.
In the Jeff Geerling NAS thread, there didn't seem to be much actual material support for scaring people away from USB-attached SATA. It's worked fine for me for a decade, and worked fine for people with vastly bigger setups than mine.
I largely regard it as decade-old FUD myself, totally decoupled from the current state of things. It works more than fine for a lot of people.
That said, I'd have some skepticism about some of the Pi HATs that do USB-SATA bridging, like what you were using. I'd feel much more confident using an off-the-shelf dual drive bay product or two, at $30 a pop.
Thanks a lot for sharing! Great food for thought. And yeah, I definitely don't want to build a custom solution; it should be maximally reliable and fast enough.
https://news.ycombinator.com/item?id=44465319 is the previous discussion.
Not sure what you mean by mini-NAS, but buying a small two-disk Synology and setting up proper backups for everything was definitely a decision I would make again.
It just works and comes with many additional packages.
I have not built a mini-NAS for team use and, from my experience with home-grown solutions, personally would not. I've worked with people who did things like this as a cost-cutting measure, but it can end up costing millions or more if workflows are interrupted. At a very minimum there would need to be n+1 live mini-NAS units with active replication to the stand-by nodes, plus snapshots, backups, and really good monitoring of everything. In my opinion these devices do not have the physical expansion capacity to do this correctly for large amounts of data once snapshots are accounted for. It may start off fine, but storage requirements only go up, and capacity planning under mini-NAS constraints will quickly paint you into a corner.

Mini computers also have thermal issues that would be exacerbated by multiple people hitting them with heavy-IO automation tools, tools you might not be using now but your team may in the future. There are ways to mitigate the heat, but then the advantages of the mini form factor quickly go out the window. Another issue is power backup: corporate systems deemed critical will be on UPS and generators, and most datacenters will not permit home-grown solutions, especially ones without redundant power inputs and the assorted certifications that reduce the risk of triggering fire suppression systems.
Now, a mini-NAS for backups of backups? That would make sense to me if you do not trust whoever is managing your administrative and backup servers. It might not even be full backups, but one could at least back up the data critical to the business that your team is responsible for. A development team lead could automate more frequent backups to their mini-NAS than the company has implemented for the wider audience of use cases. That would not even need to be in the datacenter, unless your privacy, compliance, and security teams say otherwise. Your home-grown solution will still need really good physical security, as a random contract janitor could just walk out of the building with all of your intellectual property. The NAS just needs a very fast, low-latency connection to the datacenter.
In my experience, revenue-impacting data storage and flows should always be on corporate-maintained infrastructure, unless one really wants to stand in front of the C-levels explaining why all the data was lost on a home-grown implementation. Ideological and technical issues aside, the optics will be awful; I've seen people walked out of companies for much less. Augmenting the corporate systems, on the other hand, can be a life saver, especially if you know what data is critical to revenue flows and how many snapshots and full backups would keep your team's workflows uninterrupted. As an augmentation, your systems could save the day and your team would look really sharp. As a side note, when your team does save the day by going above and beyond, make sure management writes your team up for awards. That can reduce friction in the future and buy some leniency for the mistakes that will inevitably occur.