<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://tech.uvoo.io/index.php?action=history&amp;feed=atom&amp;title=Zfs_hdd_tuning</id>
	<title>Zfs hdd tuning - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://tech.uvoo.io/index.php?action=history&amp;feed=atom&amp;title=Zfs_hdd_tuning"/>
	<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_hdd_tuning&amp;action=history"/>
	<updated>2026-05-16T01:04:51Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.2</generator>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Zfs_hdd_tuning&amp;diff=5667&amp;oldid=prev</id>
		<title>Busk: Created page with &quot;Nice—those **10K SAS Hitachi drives** are actually a really solid fit for ZFS if you tune it right. Let’s dial this in specifically for your setup.  ---  # 🧱 Your Drive...&quot;</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_hdd_tuning&amp;diff=5667&amp;oldid=prev"/>
		<updated>2026-03-28T19:24:33Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Nice—those **10K SAS Hitachi drives** are actually a really solid fit for ZFS if you tune it right. Let’s dial this in specifically for your setup.  ---  # 🧱 Your Drive...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Nice—those **10K SAS Hitachi drives** are actually a really solid fit for ZFS if you tune it right. Let’s dial this in specifically for your setup.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧱 Your Drive&lt;br /&gt;
&lt;br /&gt;
## Hitachi DKR5E-J1R2SS&lt;br /&gt;
&lt;br /&gt;
![Image](https://cdn11.bigcommerce.com/s-017c0/images/stencil/1280x1280/products/12022/50761/3282390-E_1__08710.1680120086.JPG?c=2)&lt;br /&gt;
&lt;br /&gt;
![Image](https://m.media-amazon.com/images/I/614y6FroB-L._AC_UF894%2C1000_QL80_.jpg)&lt;br /&gt;
&lt;br /&gt;
![Image](https://www.disctech.com/SCASite/product_images/0B23727_1000-1.jpg)&lt;br /&gt;
&lt;br /&gt;
![Image](https://i.ebayimg.com/images/g/YakAAeSwtkhoFgAO/s-l400.jpg)&lt;br /&gt;
&lt;br /&gt;
### Key characteristics:&lt;br /&gt;
&lt;br /&gt;
* ~120–180 IOPS per drive&lt;br /&gt;
* Lower latency than SATA, but still a **spinning disk**&lt;br /&gt;
* Enterprise reliability (great for ZFS)&lt;br /&gt;
* Likely **512e or 4K sectors** (we’ll force correct alignment)&lt;br /&gt;
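&lt;br /&gt;
To confirm what the drives actually report before creating the pool (device name below is just an example):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Logical vs physical sector size as the kernel sees it&lt;br /&gt;
lsblk -o NAME,MODEL,LOG-SEC,PHY-SEC&lt;br /&gt;
# Identify data straight from the drive, including sector sizes&lt;br /&gt;
smartctl -i /dev/sdb&lt;br /&gt;
```&lt;br /&gt;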
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🎯 Goals for ZFS Tuning&lt;br /&gt;
&lt;br /&gt;
You want to optimize for:&lt;br /&gt;
&lt;br /&gt;
* LXD containers + VMs&lt;br /&gt;
* Mixed workloads (random IO heavy)&lt;br /&gt;
* Stability over raw throughput&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 1 — Create ZFS Pool (IMPORTANT SETTINGS)&lt;br /&gt;
&lt;br /&gt;
Assuming the controller is in HBA/IT mode and you have multiple disks (mirrors or RAIDZ recommended). The examples use /dev/sdX for brevity; stable /dev/disk/by-id paths are more robust in practice:&lt;br /&gt;
&lt;br /&gt;
### 🔥 Best layout options:&lt;br /&gt;
&lt;br /&gt;
* **Mirrors (RAID10 equivalent)** → BEST performance&lt;br /&gt;
* **RAIDZ2** → BEST safety (recommended if ≥6 disks)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Example (mirrors)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool create -o ashift=12 HDD \&lt;br /&gt;
  mirror /dev/sdb /dev/sdc \&lt;br /&gt;
  mirror /dev/sdd /dev/sde&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Example (RAIDZ2)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool create -o ashift=12 HDD \&lt;br /&gt;
  raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🔑 Why `ashift=12` matters&lt;br /&gt;
&lt;br /&gt;
* Forces 4K alignment (even if the drive lies about its sector size)&lt;br /&gt;
* Prevents **massive write amplification**&lt;br /&gt;
&lt;br /&gt;
👉 Always do this. No exceptions.&lt;br /&gt;
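&lt;br /&gt;
To double-check the alignment after creating the pool (pool name from the examples above):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Each vdev should report ashift: 12&lt;br /&gt;
zdb -C HDD | grep ashift&lt;br /&gt;
```&lt;br /&gt;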
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 2 — ZFS Dataset Tuning (CRITICAL)&lt;br /&gt;
&lt;br /&gt;
For LXD workloads:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs set atime=off HDD&lt;br /&gt;
zfs set compression=lz4 HDD&lt;br /&gt;
zfs set xattr=sa HDD&lt;br /&gt;
zfs set acltype=posixacl HDD&lt;br /&gt;
```&lt;br /&gt;
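&lt;br /&gt;
Quick sanity check that the properties took effect:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs get atime,compression,xattr,acltype HDD&lt;br /&gt;
```&lt;br /&gt;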
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🧠 Recordsize tuning (VERY important)&lt;br /&gt;
&lt;br /&gt;
### For general LXD pool:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs set recordsize=128K HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### For VM disks (zvols):&lt;br /&gt;
&lt;br /&gt;
`volblocksize` can only be set when a zvol is created, not afterwards (size and name below are just examples):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs create -V 32G -o volblocksize=16K HDD/vm&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Why:&lt;br /&gt;
&lt;br /&gt;
* 128K = good throughput default&lt;br /&gt;
* 16K = better for random IO (VMs, DB)&lt;br /&gt;
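&lt;br /&gt;
If you later carve out a dataset just for database files, you can override recordsize per dataset (dataset name is just an example):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs create -o recordsize=16K HDD/postgres&lt;br /&gt;
```&lt;br /&gt;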
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 3 — LXD Storage Pool&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage create HDD zfs source=HDD&lt;br /&gt;
```&lt;br /&gt;
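&lt;br /&gt;
Then point instances at the pool (image alias and instance name are just examples):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc launch ubuntu:22.04 c1 --storage HDD&lt;br /&gt;
```&lt;br /&gt;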
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 4 — Special LXD tuning&lt;br /&gt;
&lt;br /&gt;
### For containers:&lt;br /&gt;
&lt;br /&gt;
ZFS dataset works great as-is.&lt;br /&gt;
&lt;br /&gt;
### For VMs:&lt;br /&gt;
&lt;br /&gt;
VM root disks on a ZFS pool are already backed by zvols. If you also want container and custom volumes on formatted zvols instead of datasets:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage set HDD volume.zfs.block_mode=true&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Block-mode volumes use zvols instead of datasets → can perform better for some workloads&lt;br /&gt;
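&lt;br /&gt;
You can see which volumes ended up as zvols with:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs list -t volume -r HDD&lt;br /&gt;
```&lt;br /&gt;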
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚡ Optional (BUT HIGHLY RECOMMENDED)&lt;br /&gt;
&lt;br /&gt;
## 🚀 Add SLOG (for sync writes)&lt;br /&gt;
&lt;br /&gt;
Without SLOG:&lt;br /&gt;
&lt;br /&gt;
* Sync writes = slow (HDD latency)&lt;br /&gt;
&lt;br /&gt;
With SLOG:&lt;br /&gt;
&lt;br /&gt;
* Huge boost for:&lt;br /&gt;
&lt;br /&gt;
  * databases&lt;br /&gt;
  * fsync-heavy workloads&lt;br /&gt;
  * Kubernetes / etcd&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Best SLOG device:&lt;br /&gt;
&lt;br /&gt;
* NVMe with power-loss protection&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool add HDD log /dev/nvme0n1&lt;br /&gt;
```&lt;br /&gt;
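&lt;br /&gt;
After adding it, you can watch whether the log device actually absorbs sync writes:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Per-vdev IO every 5 seconds; the log device shows up in its own section&lt;br /&gt;
zpool iostat -v HDD 5&lt;br /&gt;
```&lt;br /&gt;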
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🧠 L2ARC (read cache)&lt;br /&gt;
&lt;br /&gt;
Only add if:&lt;br /&gt;
&lt;br /&gt;
* RAM &amp;lt; working set&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool add HDD cache /dev/nvme1n1&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Not critical unless heavy reads&lt;br /&gt;
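&lt;br /&gt;
Before buying a cache device, it's worth checking whether ARC is actually missing (arcstat ships with OpenZFS):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Hit/miss ratios every 5 seconds; sustained misses suggest L2ARC could help&lt;br /&gt;
arcstat 5&lt;br /&gt;
```&lt;br /&gt;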
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 5 — Kernel tuning (small but helpful)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
echo 1 &amp;gt; /sys/module/zfs/parameters/zfs_prefetch_disable&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Disables prefetch. This can help mostly-random small-block workloads; leave it enabled if you also do large sequential reads&lt;br /&gt;
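&lt;br /&gt;
To keep the setting across reboots (assuming the usual modprobe.d layout):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
echo options zfs zfs_prefetch_disable=1 &amp;gt;&amp;gt; /etc/modprobe.d/zfs.conf&lt;br /&gt;
```&lt;br /&gt;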
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚠️ What NOT to do&lt;br /&gt;
&lt;br /&gt;
❌ Don’t use:&lt;br /&gt;
&lt;br /&gt;
* Hardware RAID virtual disks under ZFS (ZFS wants raw disks via HBA/IT mode)&lt;br /&gt;
* RAID5 on controller (write hole risk)&lt;br /&gt;
* ashift=9 (huge mistake on modern disks)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧪 Expected Performance&lt;br /&gt;
&lt;br /&gt;
With mirrors:&lt;br /&gt;
&lt;br /&gt;
* Read IOPS scale with the number of drives, write IOPS with the number of mirror vdevs&lt;br /&gt;
* Great for VMs&lt;br /&gt;
&lt;br /&gt;
With RAIDZ2:&lt;br /&gt;
&lt;br /&gt;
* Better capacity&lt;br /&gt;
* Lower random IOPS (a RAIDZ vdev behaves roughly like a single drive for small random writes)&lt;br /&gt;
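&lt;br /&gt;
To sanity-check whichever layout you pick, a rough fio run on the pool (path, sizes and job parameters are just examples):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Mixed random read/write at 16K blocks against a dataset mounted at /HDD/bench&lt;br /&gt;
fio --name=randrw --directory=/HDD/bench --rw=randrw --bs=16k \&lt;br /&gt;
  --size=2G --numjobs=4 --ioengine=libaio --iodepth=8 \&lt;br /&gt;
  --runtime=60 --time_based --group_reporting&lt;br /&gt;
```&lt;br /&gt;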
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🏆 Final Recommended Setup (for YOU)&lt;br /&gt;
&lt;br /&gt;
Given your setup:&lt;br /&gt;
&lt;br /&gt;
### If you want performance:&lt;br /&gt;
&lt;br /&gt;
👉 **Mirrors + ZFS**&lt;br /&gt;
&lt;br /&gt;
* Best VM performance&lt;br /&gt;
* Scales well&lt;br /&gt;
&lt;br /&gt;
### If you want capacity + safety:&lt;br /&gt;
&lt;br /&gt;
👉 **RAIDZ2 + ZFS**&lt;br /&gt;
&lt;br /&gt;
* Survives 2 disk failures&lt;br /&gt;
* Still solid performance&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 My Personal Pick for Your Use Case&lt;br /&gt;
&lt;br /&gt;
Since you're running:&lt;br /&gt;
&lt;br /&gt;
* LXD cluster&lt;br /&gt;
* likely DB / infra workloads&lt;br /&gt;
&lt;br /&gt;
👉 I would do:&lt;br /&gt;
&lt;br /&gt;
**Mirrors + ZFS + optional NVMe SLOG**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 💡 Bonus: LXD Multi-Pool Strategy&lt;br /&gt;
&lt;br /&gt;
You can do:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage create fast zfs source=SSDPOOL&lt;br /&gt;
lxc storage create HDD zfs source=HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
* Put VMs → fast&lt;br /&gt;
* Backups / bulk → HDD&lt;br /&gt;
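&lt;br /&gt;
For example (instance names and image alias are just examples):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# VM root disk on the fast pool&lt;br /&gt;
lxc launch ubuntu:22.04 vm1 --vm --storage fast&lt;br /&gt;
# Custom volume on the HDD pool, attached to a container for bulk data&lt;br /&gt;
lxc storage volume create HDD backups&lt;br /&gt;
lxc storage volume attach HDD backups c1 /mnt/backups&lt;br /&gt;
```&lt;br /&gt;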
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 👍 Final Take&lt;br /&gt;
&lt;br /&gt;
Your **10K SAS drives + ZFS (HBA mode)**:&lt;br /&gt;
&lt;br /&gt;
* Extremely reliable&lt;br /&gt;
* Predictable performance&lt;br /&gt;
* WAY safer than hardware RAID for long-term integrity (end-to-end checksums and scrubs catch silent corruption)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you want next step, I can:&lt;br /&gt;
&lt;br /&gt;
* Map your exact number of disks → best vdev layout&lt;br /&gt;
* Or tune specifically for PostgreSQL / your monitoring stack&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
</feed>