<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://tech.uvoo.io/index.php?action=history&amp;feed=atom&amp;title=Zfs_vs_hardware_raid_controller</id>
	<title>Zfs vs hardware raid controller - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://tech.uvoo.io/index.php?action=history&amp;feed=atom&amp;title=Zfs_vs_hardware_raid_controller"/>
	<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_vs_hardware_raid_controller&amp;action=history"/>
	<updated>2026-05-16T01:01:13Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.2</generator>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Zfs_vs_hardware_raid_controller&amp;diff=5668&amp;oldid=prev</id>
		<title>Busk: Created page with &quot;That’s actually a **very solid default rule**, and it lines up almost perfectly with how ZFS behaves in the real world 👍  Let me sharpen it a bit specifically for ZFS + y...&quot;</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_vs_hardware_raid_controller&amp;diff=5668&amp;oldid=prev"/>
		<updated>2026-03-28T20:16:37Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;That’s actually a **very solid default rule**, and it lines up almost perfectly with how ZFS behaves in the real world 👍  Let me sharpen it a bit specifically for ZFS + y...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;That’s actually a **very solid default rule**, and it lines up almost perfectly with how ZFS behaves in the real world 👍&lt;br /&gt;
&lt;br /&gt;
Let me sharpen it a bit specifically for ZFS + your kind of workloads:&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧠 Your Rule (Refined for ZFS)&lt;br /&gt;
&lt;br /&gt;
&amp;gt; **“Use mirrors unless workload is mostly sequential + read-heavy + static.”**&lt;br /&gt;
&lt;br /&gt;
That’s basically the *right mental model*.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 Why RAID10 (mirrors) is usually the right choice&lt;br /&gt;
&lt;br /&gt;
For your environment (LXD, Postgres, APIs, etc.):&lt;br /&gt;
&lt;br /&gt;
### You are mostly doing:&lt;br /&gt;
&lt;br /&gt;
* random reads/writes&lt;br /&gt;
* small block IO&lt;br /&gt;
* sync writes (fsync, WAL, etc.)&lt;br /&gt;
* metadata-heavy operations&lt;br /&gt;
&lt;br /&gt;
👉 ZFS mirrors excel at:&lt;br /&gt;
&lt;br /&gt;
* parallel IOPS (each vdev gets its own IO queue)&lt;br /&gt;
* low latency under load&lt;br /&gt;
* fast resilvers (huge deal)&lt;br /&gt;
&lt;br /&gt;
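A minimal sketch of that layout (device names are placeholders; on a real system use stable /dev/disk/by-id paths):&lt;br /&gt;
&lt;br /&gt;
```shell
# Striped mirrors (the "RAID10" shape): each "mirror a b" pair is one vdev,
# and ZFS stripes IO across all the vdevs. Device names are hypothetical.
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf
```
&lt;br /&gt;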
---&lt;br /&gt;
&lt;br /&gt;
# ⚠️ Where RAIDZ2 actually wins&lt;br /&gt;
&lt;br /&gt;
Your “static / read-heavy” instinct is correct, but let’s define it more precisely:&lt;br /&gt;
&lt;br /&gt;
## RAIDZ2 is best when:&lt;br /&gt;
&lt;br /&gt;
* Large sequential reads (media, backups)&lt;br /&gt;
* Large sequential writes (archives, logs)&lt;br /&gt;
* Data is mostly **append-only**&lt;br /&gt;
* Not latency-sensitive&lt;br /&gt;
* Capacity matters&lt;br /&gt;
&lt;br /&gt;
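A capacity-oriented pool for that kind of data could be sketched as (pool name and devices are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```shell
# One 6-wide RAIDZ2 vdev: 4 disks of data, 2 of parity.
# Survives any two simultaneous disk failures.
zpool create backup raidz2 sdg sdh sdi sdj sdk sdl
```
&lt;br /&gt;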
---&lt;br /&gt;
&lt;br /&gt;
# 🧪 Practical Examples&lt;br /&gt;
&lt;br /&gt;
## 🟢 Mirrors (RAID10)&lt;br /&gt;
&lt;br /&gt;
Use for:&lt;br /&gt;
&lt;br /&gt;
* LXD VM disks&lt;br /&gt;
* PostgreSQL / databases&lt;br /&gt;
* Kubernetes / etcd&lt;br /&gt;
* anything with fsync&lt;br /&gt;
* CI/CD workloads&lt;br /&gt;
* general-purpose root pool&lt;br /&gt;
&lt;br /&gt;
👉 This is probably **90% of real-world infra workloads**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🔵 RAIDZ2&lt;br /&gt;
&lt;br /&gt;
Use for:&lt;br /&gt;
&lt;br /&gt;
* backups&lt;br /&gt;
* object storage&lt;br /&gt;
* logs (cold)&lt;br /&gt;
* media / large files&lt;br /&gt;
* snapshot archives&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 💥 The Big Gotcha Most People Miss&lt;br /&gt;
&lt;br /&gt;
## RAIDZ2 looks good in benchmarks…&lt;br /&gt;
&lt;br /&gt;
…but falls apart under:&lt;br /&gt;
&lt;br /&gt;
* high queue depths&lt;br /&gt;
* random IO&lt;br /&gt;
* mixed workloads&lt;br /&gt;
&lt;br /&gt;
👉 Especially with spinning disks&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚡ Why mirrors feel “faster” even when MB/s looks similar&lt;br /&gt;
&lt;br /&gt;
Because:&lt;br /&gt;
&lt;br /&gt;
* ZFS schedules IO per vdev&lt;br /&gt;
* Mirrors = more vdevs = more parallelism&lt;br /&gt;
&lt;br /&gt;
👉 So system responsiveness is WAY better&lt;br /&gt;
&lt;br /&gt;
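Rough, back-of-envelope numbers make this concrete. Assuming ~150 random IOPS per spinning disk (an assumed figure, not a measurement), 20 disks compare like this:&lt;br /&gt;
&lt;br /&gt;
```shell
# Back-of-envelope IOPS model; the per-disk number is an assumed HDD value.
per_disk=150
disks=20

# 10 two-way mirrors: reads can be served by either side of each pair,
# writes must hit both sides, so writes scale with vdev count.
mirror_vdevs=$((disks / 2))
mirror_read_iops=$((disks * per_disk))
mirror_write_iops=$((mirror_vdevs * per_disk))

# Two 10-disk RAIDZ2 vdevs: for small random IO each vdev behaves
# roughly like a single disk.
raidz2_vdevs=2
raidz2_iops=$((raidz2_vdevs * per_disk))

echo "mirrors:  $mirror_read_iops read / $mirror_write_iops write IOPS"
echo "raidz2:   $raidz2_iops IOPS"
```
&lt;br /&gt;
The absolute numbers are crude, but the ~10x gap in random IOPS is the point.&lt;br /&gt;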
---&lt;br /&gt;
&lt;br /&gt;
# 🔁 Resilvering (VERY important)&lt;br /&gt;
&lt;br /&gt;
This is a huge differentiator:&lt;br /&gt;
&lt;br /&gt;
### Mirrors:&lt;br /&gt;
&lt;br /&gt;
* Resilver copies **only allocated blocks**, read from the healthy side of the pair&lt;br /&gt;
* Much faster rebuild&lt;br /&gt;
* Less stress on the surviving disks&lt;br /&gt;
&lt;br /&gt;
### RAIDZ2:&lt;br /&gt;
&lt;br /&gt;
* Resilver must read from **every remaining disk** in the vdev&lt;br /&gt;
* Slower, especially with wide vdevs of large disks&lt;br /&gt;
* Longer window of reduced redundancy while rebuilding&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧠 Your 20-disk scenario (applied)&lt;br /&gt;
&lt;br /&gt;
Your instinct:&lt;br /&gt;
&lt;br /&gt;
&amp;gt; “Always RAID10 unless read-heavy static”&lt;br /&gt;
&lt;br /&gt;
👉 I would translate that to:&lt;br /&gt;
&lt;br /&gt;
## Default:&lt;br /&gt;
&lt;br /&gt;
**10 mirrors (RAID10)**&lt;br /&gt;
&lt;br /&gt;
## Only use RAIDZ2 if:&lt;br /&gt;
&lt;br /&gt;
* You explicitly need capacity&lt;br /&gt;
* AND workload is not latency sensitive&lt;br /&gt;
&lt;br /&gt;
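The capacity side of that trade-off for 20 equal disks (ignoring ZFS slop and padding overhead) works out roughly as:&lt;br /&gt;
&lt;br /&gt;
```shell
# Usable capacity in "disks' worth of space" for 20 equal disks.
disks=20

# 10 two-way mirrors: half the raw capacity is usable.
mirror_usable=$((disks / 2))

# Two 10-wide RAIDZ2 vdevs: each loses 2 disks to parity.
raidz2_vdevs=2
width=$((disks / raidz2_vdevs))
raidz2_usable=$((raidz2_vdevs * (width - 2)))

echo "mirrors: $mirror_usable of $disks disks usable (50%)"
echo "raidz2:  $raidz2_usable of $disks disks usable (80%)"
```
&lt;br /&gt;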
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 What I would actually do (based on your stack)&lt;br /&gt;
&lt;br /&gt;
Since you:&lt;br /&gt;
&lt;br /&gt;
* run infra services&lt;br /&gt;
* care about performance&lt;br /&gt;
* likely run databases / APIs&lt;br /&gt;
&lt;br /&gt;
👉 I would do:&lt;br /&gt;
&lt;br /&gt;
### Pool A (primary)&lt;br /&gt;
&lt;br /&gt;
* **Mirrors (RAID10)**&lt;br /&gt;
* All active workloads&lt;br /&gt;
&lt;br /&gt;
### Pool B (optional)&lt;br /&gt;
&lt;br /&gt;
* RAIDZ2&lt;br /&gt;
* backups / cold data&lt;br /&gt;
&lt;br /&gt;
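That split could be sketched as (pool names and devices are hypothetical; extend the mirror list to however many pairs Pool A gets):&lt;br /&gt;
&lt;br /&gt;
```shell
# Pool A: mirrors for everything latency-sensitive.
zpool create fast \
  mirror sda sdb \
  mirror sdc sdd

# Pool B: RAIDZ2 for backups and cold data.
zpool create cold raidz2 sde sdf sdg sdh sdi sdj
```
&lt;br /&gt;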
---&lt;br /&gt;
&lt;br /&gt;
# 💡 One nuance worth knowing&lt;br /&gt;
&lt;br /&gt;
There *is* one middle-ground pattern:&lt;br /&gt;
&lt;br /&gt;
## “Tuned mirrors”&lt;br /&gt;
&lt;br /&gt;
Still plain mirrors, but with per-dataset tuning:&lt;br /&gt;
&lt;br /&gt;
* recordsize tuning&lt;br /&gt;
* compression&lt;br /&gt;
* SLOG if needed&lt;br /&gt;
&lt;br /&gt;
👉 This gives you:&lt;br /&gt;
&lt;br /&gt;
* most of the RAID10 benefits&lt;br /&gt;
* better capacity efficiency (mostly via compression)&lt;br /&gt;
&lt;br /&gt;
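All three knobs are ordinary dataset- or pool-level settings. A sketch with illustrative values (the pool name, dataset, and device are hypothetical, and 16K is an example, not a prescription):&lt;br /&gt;
&lt;br /&gt;
```shell
# Illustrative tuning, assuming a pool named "tank".
zfs set compression=lz4 tank           # cheap CPU cost, usually a net win
zfs set recordsize=16K tank/postgres   # match small-block database IO
zpool add tank log nvme0n1p1           # optional SLOG for sync-heavy writes
```
&lt;br /&gt;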
---&lt;br /&gt;
&lt;br /&gt;
# 🏆 Final Take&lt;br /&gt;
&lt;br /&gt;
Your rule is already good. I’d just upgrade it to:&lt;br /&gt;
&lt;br /&gt;
&amp;gt; **Use mirrors by default.&lt;br /&gt;
&amp;gt; Use RAIDZ only when capacity matters more than latency.**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 👍 Blunt version&lt;br /&gt;
&lt;br /&gt;
* Mirrors = **performance + responsiveness + safety**&lt;br /&gt;
* RAIDZ2 = **capacity + efficiency**&lt;br /&gt;
* RAIDZ is almost never the right choice for VM workloads&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you want, I can:&lt;br /&gt;
&lt;br /&gt;
* Map your exact 20-disk system into **optimal mirror grouping + spare strategy**&lt;br /&gt;
* Or show how to split it into **two pools for best LXD scheduling**&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
</feed>