• Why I Switched from Unraid to TrueNAS Scale for Performance

    I’ve been using Unraid for a while, and I have to say—it’s an amazing solution if you want a flexible storage setup. One of the best things about Unraid is how easy it makes expanding storage. You can mix and match drives of different sizes, add new ones whenever you need, and the system just works. It’s perfect for a home server where you’re throwing together whatever hard drives you have lying around.

    But with that flexibility comes a trade-off: performance.

    The Problem: Unraid’s Performance Bottlenecks

    Unraid’s default file system (XFS or BTRFS, depending on your setup) works well for general storage, but because it doesn’t stripe data across drives like a traditional RAID setup, performance can be inconsistent. Since each drive operates independently (except for parity calculations), transfer speeds are limited by the speed of individual drives rather than the combined speed of an array.
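To make that bottleneck concrete, here's a back-of-the-envelope sketch. The drive speed and data-drive count are hypothetical round numbers, not measurements from my hardware: Unraid serves a large file from a single drive, while a striped layout reads from several drives at once.

```python
# Hypothetical sequential speed for a single spinning disk.
SINGLE_DRIVE_MBPS = 180

# Unraid: a given file lives entirely on one drive, so a large read
# tops out at that one drive's speed no matter how big the array is.
unraid_read_mbps = SINGLE_DRIVE_MBPS

# Striped layout: with 4 data drives (e.g. a 5-wide RAIDZ1 vdev),
# a large sequential read pulls from all data drives in parallel.
DATA_DRIVES = 4
striped_read_mbps = SINGLE_DRIVE_MBPS * DATA_DRIVES

print(f"Per-drive read (Unraid-style): ~{unraid_read_mbps} MB/s")
print(f"Striped read (4 data drives):  ~{striped_read_mbps} MB/s")
```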

    I often saw wildly varying speeds depending on what I was doing. Sometimes it was fine, but other times, especially when handling lots of small files or parallel reads/writes, speeds would drop significantly. I needed more consistent high-speed performance, especially with a 10Gbps network.

    The Solution: ZFS with RAIDZ1

    To solve this, I switched to ZFS on TrueNAS Scale. My current setup is built around RAIDZ1 vdevs that are 5 drives wide, using mixed-capacity disks. This means:

    • ZFS stripes data across the drives in each vdev, unlike Unraid’s per-drive approach, so transfers aren’t capped at a single drive’s speed.
    • You can tune performance with vdev width: wider vdevs distribute data across more drives, allowing higher sequential throughput. Narrower vdevs have lower throughput, but with the benefit of a smaller chance of your whole array being lost due to a failing drive, and they also offer lower latency for random I/O.
    • You can still use mixed drive sizes: with ZFS, each vdev should use same-sized drives, but you can have one vdev with 16TB drives, one vdev with 12TB drives, and two vdevs with 8TB drives with little to no performance impact.
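As a sketch of how that mixed-size layout adds up (the sizes and counts below mirror the example above; this is raw-capacity math that ignores ZFS metadata, padding, and slop space): each RAIDZ1 vdev yields roughly (width - 1) × drive size of usable space, and the pool is the sum of its vdevs.

```python
# Each entry: (drive_size_tb, vdev_width). RAIDZ1 gives up one
# drive's worth of capacity per vdev to parity.
vdevs = [
    (16, 5),  # one vdev of 16TB drives
    (12, 5),  # one vdev of 12TB drives
    (8, 5),   # two vdevs of 8TB drives
    (8, 5),
]

def raidz1_usable_tb(size_tb, width):
    # Rough usable space: (width - 1) data drives per vdev.
    # Real-world numbers come in lower due to overhead.
    return size_tb * (width - 1)

pool_tb = sum(raidz1_usable_tb(size, width) for size, width in vdevs)
print(f"Approximate usable capacity: {pool_tb} TB")  # 64 + 48 + 32 + 32
```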

    With this setup, I’m now maxing out my 10Gbps connection consistently. File transfers, database operations, and media streaming all perform way better than they did on Unraid, where I’d often see speeds closer to a single drive’s performance (though that obviously depended on the operation).

    The Performance Gains

    With ZFS, performance is stable and predictable, and thanks to ZFS caching (ARC), commonly accessed files are insanely fast. Recently I was moving a 400MB folder from my desktop to a share. The write occurred so quickly that I redid it because I thought something was wrong. There wasn’t even time for the transfer prompt to pop up; it felt like I was working with a local disk.
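For a sense of scale, some quick arithmetic (the 10Gbps line rate is exact; the single-disk speed is a hypothetical round number): a 400MB write that lands at network speed finishes before a progress dialog is even worth drawing.

```python
folder_mb = 400
ten_gbe_mb_s = 10_000 / 8   # 10 Gbps is roughly 1250 MB/s
single_hdd_mb_s = 150       # hypothetical spinning-disk write speed

t_network = folder_mb / ten_gbe_mb_s   # time if the network is the limit
t_single = folder_mb / single_hdd_mb_s # time at one drive's speed

print(f"At 10GbE line rate:   {t_network:.2f} s")
print(f"At single-HDD speed:  {t_single:.2f} s")
```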

    The Trade-Off

    The main downside? Expanding ZFS pools isn’t as easy as Unraid. In Unraid, you can just throw in a new drive anytime. With ZFS, you generally need to plan ahead: once a vdev is created, you can’t easily expand it without adding another full vdev or replacing and resilvering its disks one at a time. So while Unraid wins on flexibility, ZFS wins on performance and reliability.
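For illustration, the two expansion paths look roughly like this. The pool and device names are hypothetical placeholders; check `zpool status` against your own hardware before running anything like this.

```shell
# Path 1: grow the pool by adding a whole new 5-wide RAIDZ1 vdev.
zpool add tank raidz1 sdf sdg sdh sdi sdj

# Path 2: grow an existing vdev by swapping each disk for a larger one,
# letting the resilver finish before replacing the next drive.
zpool replace tank sda sdk
zpool status tank   # wait for the resilver to complete, then repeat
```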

    Final Thoughts

    If you just need a simple, flexible storage solution, Unraid is still a fantastic option. But if you’re hitting performance bottlenecks—especially on a 10Gbps network—ZFS is a game changer. It’s been a night-and-day difference for me.

    Would I still recommend Unraid? Absolutely. But if you need high-speed performance, ZFS on TrueNAS is worth looking at.

    One note: ZFS is now supported within Unraid, but the support is still early. Things like modifying the ARC (memory) size require manual configuration, and the implementation doesn’t feel as well thought out, with many features missing from the UI.
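For anyone hitting that ARC-size issue, the manual knob on a stock ZFS-on-Linux setup is a module parameter rather than a UI setting (the 16 GiB figure below is just an example value):

```
# /etc/modprobe.d/zfs.conf: cap the ARC at 16 GiB (value is in bytes)
options zfs zfs_arc_max=17179869184
```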


  • Fixing a 502 Bad Gateway with WordPress, Nginx Proxy Manager and TrueNAS Scale

    I was setting up WordPress (this one, to be specific) on TrueNAS Scale, with Nginx Proxy Manager and Cloudflare handling the proxying. I installed it as an app, mapped the appropriate port, and everything looked fine—until I tried routing it through Nginx Proxy Manager (NPM) with my custom domain.

    502 Bad Gateway.

    Not ideal.

    The Setup

    • TrueNAS Scale hosting WordPress via an app install (192.168.xxx.xxx:xxxxx).
    • Nginx Proxy Manager handling SSL and domain routing (danielgt.com).
    • Let’s Encrypt certs for HTTPS via Nginx SSL management.

    I tried pointing NPM to the internal IP and port like I would normally do with a service. HTTPS just died with a 502, and HTTP sat there loading indefinitely.

    The Fix

    The issue seemed to be due to SSL settings. I wish this was knowledge that just jumped out at me, but it took a bunch of fiddling and a number of app reinstalls. Things started working when I made these updates to my SSL settings inside of NPM:

    • Enable Force SSL
    • Enable HTTP/2 Support

    Finally, inside WordPress, I made the update that had been making my installs inaccessible previously: inside Settings -> General I updated the URLs, then did a force refresh on the front end due to some caching issues.
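That Settings -> General change can also be pinned in wp-config.php, which helps if a bad saved URL ever makes wp-admin unreachable. The domain below is a placeholder, and the X-Forwarded-Proto check is the standard way to avoid redirect loops behind an SSL-terminating proxy like NPM.

```php
/* Sketch of wp-config.php additions for running behind an SSL-terminating proxy. */
define( 'WP_HOME',    'https://example.com' );  /* replace with your real domain */
define( 'WP_SITEURL', 'https://example.com' );

/* NPM terminates HTTPS, so tell WordPress the original request was secure. */
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https' ) {
    $_SERVER['HTTPS'] = 'on';
}
```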

    Final Thoughts

    Nginx normally makes things SUPER easy, so this was kind of a surprise: once I had my Cloudflare + Nginx setup finished previously, making apps accessible was always painless. This seems to be an issue with how WordPress interacted with my configuration, which I think is abnormal.