<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://tech.uvoo.io/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Busk</id>
	<title>UVOO Tech Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://tech.uvoo.io/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Busk"/>
	<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php/Special:Contributions/Busk"/>
	<updated>2026-04-24T04:36:31Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.2</generator>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Usage_notes&amp;diff=5677</id>
		<title>Usage notes</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Usage_notes&amp;diff=5677"/>
		<updated>2026-04-23T16:58:34Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Here are compact prompt templates that usually keep Codex usage down while still getting good output.  **The pattern that saves the most usage** Give it:  1. one narrow goal,...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here are compact prompt templates that usually keep Codex usage down while still getting good output.&lt;br /&gt;
&lt;br /&gt;
**The pattern that saves the most usage**&lt;br /&gt;
Give it:&lt;br /&gt;
&lt;br /&gt;
1. one narrow goal,&lt;br /&gt;
2. one bounded file or function,&lt;br /&gt;
3. one explicit output format,&lt;br /&gt;
4. one stop condition.&lt;br /&gt;
&lt;br /&gt;
That lines up with OpenAI’s current guidance that Codex usage depends mainly on task size, complexity, context held, and where tasks are executed. OpenAI also notes that Codex pricing for many plans is now token-based, which makes oversized prompts and large context especially relevant. ([OpenAI Help Center][1])&lt;br /&gt;
&lt;br /&gt;
### Low-usage templates&lt;br /&gt;
&lt;br /&gt;
**1) Single-function bug fix**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Only inspect `main.go`, function `handleUpload`.&lt;br /&gt;
Find the bug causing the failure.&lt;br /&gt;
Return:&lt;br /&gt;
1. root cause in 3 bullets max&lt;br /&gt;
2. minimal unified diff&lt;br /&gt;
Do not scan other files.&lt;br /&gt;
Do not refactor unrelated code.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**2) Small targeted refactor**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Only modify `internal/auth/middleware.go`.&lt;br /&gt;
Goal: reduce duplication in token parsing without changing behavior.&lt;br /&gt;
Constraints:&lt;br /&gt;
- keep exported APIs unchanged&lt;br /&gt;
- no new dependencies&lt;br /&gt;
Return only a unified diff.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**3) Error log triage**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Analyze only this error and the code I pasted below.&lt;br /&gt;
Do not assume repo-wide context.&lt;br /&gt;
&lt;br /&gt;
Error:&lt;br /&gt;
&amp;lt;paste error&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Code:&lt;br /&gt;
&amp;lt;paste small relevant snippet&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Return:&lt;br /&gt;
- most likely cause&lt;br /&gt;
- second most likely cause&lt;br /&gt;
- exact patch to try first&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**4) Focused code review**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Review only `storage.go` for:&lt;br /&gt;
- race conditions&lt;br /&gt;
- nil dereferences&lt;br /&gt;
- leaked resources&lt;br /&gt;
&lt;br /&gt;
Do not suggest style changes.&lt;br /&gt;
Rank findings by severity.&lt;br /&gt;
Limit to top 5 issues.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**5) Test generation without repo crawl**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Write table-driven tests for `ParseConfig` in `config.go`.&lt;br /&gt;
Assume no other files unless referenced here.&lt;br /&gt;
Return a complete `_test.go` file only.&lt;br /&gt;
Keep cases minimal but high value.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**6) Safe optimization pass**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Inspect only this function for performance issues:&lt;br /&gt;
&amp;lt;paste function&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Constraints:&lt;br /&gt;
- preserve behavior&lt;br /&gt;
- prefer simpler code over clever code&lt;br /&gt;
- no concurrency changes&lt;br /&gt;
Return:&lt;br /&gt;
1. brief explanation&lt;br /&gt;
2. revised function only&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**7) CLI command help**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Create a Cobra subcommand named `serve-certs`.&lt;br /&gt;
Only produce:&lt;br /&gt;
- command struct/function&lt;br /&gt;
- flags&lt;br /&gt;
- RunE body stub&lt;br /&gt;
&lt;br /&gt;
Do not implement unrelated package wiring.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**8) SQL / migration help**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Review this migration only.&lt;br /&gt;
Check for:&lt;br /&gt;
- invalid PostgreSQL syntax&lt;br /&gt;
- unsafe defaults&lt;br /&gt;
- ordering issues&lt;br /&gt;
- rollback concerns&lt;br /&gt;
&lt;br /&gt;
Return only concrete problems and corrected SQL.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**9) “Do not roam” repo instruction**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Work only in these files:&lt;br /&gt;
- cmd/app/main.go&lt;br /&gt;
- internal/config/config.go&lt;br /&gt;
&lt;br /&gt;
Ignore the rest of the repository unless I explicitly add files later.&lt;br /&gt;
If you think another file is needed, name it but do not open it.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**10) Patch-first mode**&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
I want the smallest fix that works.&lt;br /&gt;
Do not redesign.&lt;br /&gt;
Do not rename symbols.&lt;br /&gt;
Do not move files.&lt;br /&gt;
Return only the patch.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Good add-ons that reduce waste&lt;br /&gt;
&lt;br /&gt;
Use these as suffixes when needed:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Keep the answer under 200 lines.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Stop after the first good fix.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Ask for no follow-up unless a missing type/signature blocks the patch.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Do not include explanation unless the patch is non-obvious.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Prefer editing existing code over introducing abstractions.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Best workflow for lowest usage&lt;br /&gt;
&lt;br /&gt;
For routine work, this tends to be the most efficient:&lt;br /&gt;
&lt;br /&gt;
* Start with **local Codex in CLI or VS Code**.&lt;br /&gt;
* Use **GPT-5-Codex-Mini** when it is good enough, since OpenAI says it is a smaller, more cost-effective option that can provide up to 4x more usage within the same subscription. ([OpenAI Help Center][2])&lt;br /&gt;
* Limit work to **one file / one function / one bug**.&lt;br /&gt;
* Ask for a **diff only** whenever possible.&lt;br /&gt;
* Start a **fresh session** once context gets bloated.&lt;br /&gt;
* Use **cloud / long-running tasks only for multi-file or project-scale work**, because OpenAI says larger codebases, long-running tasks, and sessions that require more held context use significantly more. ([OpenAI Help Center][1])&lt;br /&gt;
&lt;br /&gt;
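As a concrete sketch, a low-usage run from the CLI might look like this (the `exec` subcommand, `-m` flag, and model slug are assumptions about the current Codex CLI; check `codex --help` for your version):&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
# Assumed invocation: non-interactive run, small model, diff-only prompt&lt;br /&gt;
codex exec -m gpt-5-codex-mini \&lt;br /&gt;
  &amp;quot;Only inspect config.go. Fix the nil-map panic in ParseConfig. \&lt;br /&gt;
  Return only a minimal unified diff. Do not inspect other files.&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;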
### What to avoid&lt;br /&gt;
&lt;br /&gt;
These tend to increase usage fast:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Review my whole repo and improve everything.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Find all bugs, refactor, add tests, and optimize performance.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Here are 5,000 lines of logs, what happened?&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Keep trying different fixes until all tests pass.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
OpenAI’s current notes also indicate Plus is tuned more for steady day-to-day use, while Pro is aimed at longer, higher-intensity Codex sessions. ([OpenAI Help Center][3])&lt;br /&gt;
&lt;br /&gt;
Here’s a reusable “best default” template:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Only inspect &amp;lt;file_or_function&amp;gt;.&lt;br /&gt;
Goal: &amp;lt;single goal&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Constraints:&lt;br /&gt;
- no unrelated refactors&lt;br /&gt;
- no new dependencies&lt;br /&gt;
- keep public behavior unchanged&lt;br /&gt;
- do not inspect other files&lt;br /&gt;
&lt;br /&gt;
Return:&lt;br /&gt;
1. brief root cause&lt;br /&gt;
2. minimal unified diff&lt;br /&gt;
3. one sentence on risk&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
And here’s the lowest-usage version:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
Only inspect &amp;lt;file&amp;gt;.&lt;br /&gt;
Fix &amp;lt;specific bug&amp;gt;.&lt;br /&gt;
Return only a minimal unified diff.&lt;br /&gt;
Do not explain.&lt;br /&gt;
Do not inspect other files.&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
I can turn these into a version tailored for your Go/Postgres/React workflow.&lt;br /&gt;
&lt;br /&gt;
[1]: https://help.openai.com/en/articles/11369540-using-codex-with-your-chatgpt-plan?utm_source=chatgpt.com &amp;quot;Using Codex with your ChatGPT plan&amp;quot;&lt;br /&gt;
[2]: https://help.openai.com/en/articles/9624314-model-release-notes?utm_source=chatgpt.com &amp;quot;Model Release Notes | OpenAI Help Center&amp;quot;&lt;br /&gt;
[3]: https://help.openai.com/en/articles/6825453-chatgpt-release-notes?utm_source=chatgpt.com &amp;quot;ChatGPT — Release Notes&amp;quot;&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Codex_gemini_install_cli&amp;diff=5676</id>
		<title>Codex gemini install cli</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Codex_gemini_install_cli&amp;diff=5676"/>
		<updated>2026-04-23T02:06:05Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;# Debian/Ubuntu  ``` sudo apt-get remove -y nodejs npm curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash - sudo apt-get install -y nodejs sudo npm install -g np...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# Debian/Ubuntu&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
sudo apt-get remove -y nodejs npm&lt;br /&gt;
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -&lt;br /&gt;
sudo apt-get install -y nodejs&lt;br /&gt;
sudo npm install -g npm@latest&lt;br /&gt;
npm install -g @openai/codex&lt;br /&gt;
npm install -g @google/gemini-cli&lt;br /&gt;
codex --version&lt;br /&gt;
gemini --version&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Resize_lxd&amp;diff=5675</id>
		<title>Resize lxd</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Resize_lxd&amp;diff=5675"/>
		<updated>2026-04-17T23:42:38Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;# 1. Resize via LXD ONLY (don’t touch zfs manually) lxc config device set docker0 root size=300GB  # 2. Restart VM (fast + clean) lxc restart docker0  # 3. Inside VM growpar...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
# 1. Resize via LXD ONLY (don’t touch zfs manually)&lt;br /&gt;
lxc config device set docker0 root size=300GB&lt;br /&gt;
&lt;br /&gt;
# 2. Restart VM (fast + clean)&lt;br /&gt;
lxc restart docker0&lt;br /&gt;
&lt;br /&gt;
# 3. Inside VM&lt;br /&gt;
growpart /dev/sda 1&lt;br /&gt;
resize2fs /dev/sda1&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Dokur_windows&amp;diff=5674</id>
		<title>Dokur windows</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Dokur_windows&amp;diff=5674"/>
		<updated>2026-04-17T22:45:12Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
lscpu | grep -E &amp;quot;Virtualization|VT-x|AMD-V&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils cpu-checker&lt;br /&gt;
&lt;br /&gt;
kvm-ok&lt;br /&gt;
&lt;br /&gt;
sudo usermod -aG kvm $USER&lt;br /&gt;
sudo usermod -aG libvirt $USER&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
services:&lt;br /&gt;
  windows:&lt;br /&gt;
    image: dockur/windows&lt;br /&gt;
    container_name: windows&lt;br /&gt;
    devices:&lt;br /&gt;
      - /dev/kvm&lt;br /&gt;
    cap_add:&lt;br /&gt;
      - NET_ADMIN&lt;br /&gt;
    ports:&lt;br /&gt;
      - 8006:8006&lt;br /&gt;
      - 3389:3389/tcp&lt;br /&gt;
      - 3389:3389/udp&lt;br /&gt;
    stop_grace_period: 2m&lt;br /&gt;
    restart: on-failure&lt;br /&gt;
    environment:&lt;br /&gt;
      VERSION: &amp;quot;win11&amp;quot;&lt;br /&gt;
      RAM_SIZE: &amp;quot;8G&amp;quot;&lt;br /&gt;
      CPU_CORES: &amp;quot;4&amp;quot;&lt;br /&gt;
      DISK_SIZE: &amp;quot;64G&amp;quot;&lt;br /&gt;
    volumes:&lt;br /&gt;
      - /opt/windows:/storage&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
mkdir -p /opt/windows&lt;br /&gt;
docker compose up -d&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Dokur_windows&amp;diff=5673</id>
		<title>Dokur windows</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Dokur_windows&amp;diff=5673"/>
		<updated>2026-04-17T22:43:14Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;``` lscpu | grep -E &amp;quot;Virtualization|VT-x|AMD-V&amp;quot;  sudo apt update sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils cpu-checker  kvm-ok  sudo usermod...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
lscpu | grep -E &amp;quot;Virtualization|VT-x|AMD-V&amp;quot;&lt;br /&gt;
&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils cpu-checker&lt;br /&gt;
&lt;br /&gt;
kvm-ok&lt;br /&gt;
&lt;br /&gt;
sudo usermod -aG kvm $USER&lt;br /&gt;
sudo usermod -aG libvirt $USER&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Frr_commercial&amp;diff=5672</id>
		<title>Frr commercial</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Frr_commercial&amp;diff=5672"/>
		<updated>2026-04-11T17:47:35Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SONiC: The King of the Cloud Data Center&lt;br /&gt;
Originally created by Microsoft to run Azure, SONiC (Software for Open Networking in the Cloud) is the undisputed heavyweight champion of the modern data center.&lt;br /&gt;
&lt;br /&gt;
Architecture: SONiC is built entirely around containers and a centralized Redis database. Every component (BGP, LLDP, SNMP) runs in its own Docker container. If the BGP container crashes, the switch keeps forwarding packets using the routes stored in the Redis DB while the container restarts.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;SAI&amp;quot; Advantage: SONiC's superpower is the Switch Abstraction Interface (SAI). SAI is an API that allows SONiC to talk to almost any vendor's switching silicon (Broadcom, Mellanox, Cisco, etc.) without changing the core OS.&lt;br /&gt;
&lt;br /&gt;
Best For: Massive Scale-Out Leaf-Spine networks, Kubernetes environments, and cloud providers. If you are building a data center fabric to support thousands of servers and need deep telemetry and automation, SONiC is the industry standard.&lt;br /&gt;
&lt;br /&gt;
DANOS: The Carrier Edge Workhorse&lt;br /&gt;
Created by AT&amp;amp;T (based on their acquisition of Vyatta), DANOS (Disaggregated Network Operating System) was built specifically for the telecom edge.&lt;br /&gt;
&lt;br /&gt;
Architecture: Unlike SONiC’s database-centric model, DANOS is built around high-performance packet processing in software, heavily utilizing DPDK (Data Plane Development Kit). It is designed to handle complex routing features that simple data center switches usually struggle with.&lt;br /&gt;
&lt;br /&gt;
Telecom Features: Data centers usually just need simple IP routing. Telecoms need deep, complex protocols: MPLS, L2VPN/L3VPN, Carrier-Grade NAT, Hierarchical QoS (throttling specific types of traffic), and cell-tower timing protocols (PTP). DANOS excels here.&lt;br /&gt;
&lt;br /&gt;
Best For: Cell tower aggregation routers, Broadband Network Gateways (the router your home ISP uses to authenticate your modem), and provider edge routers.&lt;br /&gt;
&lt;br /&gt;
# XDP&lt;br /&gt;
&lt;br /&gt;
There is no direct XDP equivalent to DANOS—meaning there isn't a single, monolithic &amp;quot;install this ISO and get a Cisco-like CLI&amp;quot; project that runs pure XDP under the hood. &lt;br /&gt;
&lt;br /&gt;
Because XDP (eXpress Data Path) and eBPF are essentially ways to run highly secure, custom C code directly inside the Linux kernel's network driver, the ecosystem is built more like a toolkit than a finished consumer appliance. &lt;br /&gt;
&lt;br /&gt;
However, there are major open-source projects using XDP to build insanely fast routers and load balancers. Here are the biggest ones you should know about, especially given your work with Kubernetes.&lt;br /&gt;
&lt;br /&gt;
### 1. Cilium (The K8s Heavyweight)&lt;br /&gt;
If you are looking for an open-source, production-ready XDP router, **Cilium** is the undisputed king right now. &lt;br /&gt;
&lt;br /&gt;
While it is primarily known as a Kubernetes CNI (Container Network Interface), it is fundamentally an eBPF/XDP-based distributed router and firewall. &lt;br /&gt;
* **How it works:** Cilium replaces `kube-proxy` entirely. It attaches XDP programs to your host's network interfaces. When a packet arrives destined for a Kubernetes service, the XDP program routes it or load-balances it before the standard Linux networking stack even wakes up.&lt;br /&gt;
* **BGP Integration:** Cilium now has native BGP support built right in. It can establish peering sessions (just like MetalLB and FRR do) to advertise K8s services, but it handles the actual packet forwarding via eBPF/XDP for massive performance gains. &lt;br /&gt;
&lt;br /&gt;
### 2. Katran (Meta's Layer 4 Load Balancer)&lt;br /&gt;
Open-sourced by Meta (Facebook), **Katran** is not a full-featured BGP router, but it is the most famous XDP project in existence. &lt;br /&gt;
* Meta uses Katran to load-balance traffic to their data centers. It uses XDP to process millions of packets per second on standard Linux servers, entirely stateless.&lt;br /&gt;
* If you wanted to build a high-speed VIP (Virtual IP) director without Kubernetes, Katran is the reference architecture.&lt;br /&gt;
&lt;br /&gt;
### 3. Polycube (The Network Function Framework)&lt;br /&gt;
Backed by the Linux Foundation, **Polycube** is probably the closest thing to a &amp;quot;general purpose&amp;quot; XDP network OS. &lt;br /&gt;
* It is a framework that provides ready-to-use network functions (like routers, firewalls, NAT, and bridges) built entirely on eBPF and XDP.&lt;br /&gt;
* You can run a daemon on a Linux server and use the Polycube CLI (or REST API) to spin up a virtual XDP-based router, attach physical interfaces to it, and define routing tables. &lt;br /&gt;
&lt;br /&gt;
### How people actually build XDP Routers (FRR + XDP)&lt;br /&gt;
If a company wants to build an edge router using XDP today, they don't usually look for a pre-built project. They build a hybrid, using the exact tools you are already using.&lt;br /&gt;
&lt;br /&gt;
They split the router into two planes:&lt;br /&gt;
1.  **The Control Plane (FRR):** They run FRR to handle the complex BGP math. &lt;br /&gt;
2.  **The Data Plane (XDP):** They write a small XDP C program attached to the network card. Its only job is to look at a packet's destination IP, look up the MAC address in an &amp;quot;eBPF Map&amp;quot; (a high-speed memory table), and send it out the right port.&lt;br /&gt;
&lt;br /&gt;
**The Glue:** FRR has a feature called the **FPM (Forwarding Plane Manager)**. Instead of FRR pushing routes into the standard Linux routing table via Zebra, you configure Zebra to stream those routes to a custom daemon via FPM. That daemon translates the FRR routes and injects them directly into the XDP program's eBPF Map. &lt;br /&gt;
&lt;br /&gt;
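A minimal sketch of that glue (the `fpm-to-xdp` daemon name and the pinned map path are hypothetical; zebra's `-M fpm` module and its default TCP port 2620 are real FRR features, but verify your build includes the module):&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# /etc/frr/daemons: load the FPM module into zebra&lt;br /&gt;
zebra_options=&amp;quot;-A 127.0.0.1 -M fpm&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Hypothetical daemon: consume FPM route updates and mirror them&lt;br /&gt;
# into the eBPF map the XDP program reads&lt;br /&gt;
./fpm-to-xdp --listen 127.0.0.1:2620 --map /sys/fs/bpf/xdp_routes&lt;br /&gt;
&lt;br /&gt;
# Inspect the routes the data plane will actually use&lt;br /&gt;
bpftool map dump pinned /sys/fs/bpf/xdp_routes&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;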
### The Bottom Line&lt;br /&gt;
If you want to play with XDP routing in a lab or a cluster, drop MetalLB and standard `kube-proxy`, and deploy **Cilium** with its strict XDP mode enabled. It will give you a hands-on look at how much faster eBPF dataplanes are compared to standard Linux kernel routing.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Frr_commercial&amp;diff=5671</id>
		<title>Frr commercial</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Frr_commercial&amp;diff=5671"/>
		<updated>2026-04-11T17:27:50Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;SONiC: The King of the Cloud Data Center Originally created by Microsoft to run Azure, SONiC (Software for Open Networking in the Cloud) is the undisputed heavyweight champion...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SONiC: The King of the Cloud Data Center&lt;br /&gt;
Originally created by Microsoft to run Azure, SONiC (Software for Open Networking in the Cloud) is the undisputed heavyweight champion of the modern data center.&lt;br /&gt;
&lt;br /&gt;
Architecture: SONiC is built entirely around containers and a centralized Redis database. Every component (BGP, LLDP, SNMP) runs in its own Docker container. If the BGP container crashes, the switch keeps forwarding packets using the routes stored in the Redis DB while the container restarts.&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;SAI&amp;quot; Advantage: SONiC's superpower is the Switch Abstraction Interface (SAI). SAI is an API that allows SONiC to talk to almost any vendor's switching silicon (Broadcom, Mellanox, Cisco, etc.) without changing the core OS.&lt;br /&gt;
&lt;br /&gt;
Best For: Massive Scale-Out Leaf-Spine networks, Kubernetes environments, and cloud providers. If you are building a data center fabric to support thousands of servers and need deep telemetry and automation, SONiC is the industry standard.&lt;br /&gt;
&lt;br /&gt;
DANOS: The Carrier Edge Workhorse&lt;br /&gt;
Created by AT&amp;amp;T (based on their acquisition of Vyatta), DANOS (Disaggregated Network Operating System) was built specifically for the telecom edge.&lt;br /&gt;
&lt;br /&gt;
Architecture: Unlike SONiC’s database-centric model, DANOS is built around high-performance packet processing in software, heavily utilizing DPDK (Data Plane Development Kit). It is designed to handle complex routing features that simple data center switches usually struggle with.&lt;br /&gt;
&lt;br /&gt;
Telecom Features: Data centers usually just need simple IP routing. Telecoms need deep, complex protocols: MPLS, L2VPN/L3VPN, Carrier-Grade NAT, Hierarchical QoS (throttling specific types of traffic), and cell-tower timing protocols (PTP). DANOS excels here.&lt;br /&gt;
&lt;br /&gt;
Best For: Cell tower aggregation routers, Broadband Network Gateways (the router your home ISP uses to authenticate your modem), and provider edge routers.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Git_signing&amp;diff=5670</id>
		<title>Git signing</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Git_signing&amp;diff=5670"/>
		<updated>2026-03-31T17:44:52Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
git config --global gpg.program gpg&lt;br /&gt;
git config --global gpg.format openpgp&lt;br /&gt;
gpg --list-secret-keys --keyid-format LONG&lt;br /&gt;
git config --global user.signingkey 708D1D7948FXXXXX&lt;br /&gt;
export GPG_TTY=$(tty)&lt;br /&gt;
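# Optional follow-up (assumed, not in the original notes): sign all commits by default&lt;br /&gt;
git config --global commit.gpgsign true&lt;br /&gt;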
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Git_signing&amp;diff=5669</id>
		<title>Git signing</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Git_signing&amp;diff=5669"/>
		<updated>2026-03-31T17:44:35Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;git config --global gpg.program gpg git config --global gpg.format openpgp gpg --list-secret-keys --keyid-format LONG git config --global user.signingkey 708D1D7948FXXXXX expo...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;git config --global gpg.program gpg&lt;br /&gt;
git config --global gpg.format openpgp&lt;br /&gt;
gpg --list-secret-keys --keyid-format LONG&lt;br /&gt;
git config --global user.signingkey 708D1D7948FXXXXX&lt;br /&gt;
export GPG_TTY=$(tty)&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Zfs_vs_hardware_raid_controller&amp;diff=5668</id>
		<title>Zfs vs hardware raid controller</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_vs_hardware_raid_controller&amp;diff=5668"/>
		<updated>2026-03-28T20:16:37Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;That’s actually a **very solid default rule**, and it lines up almost perfectly with how ZFS behaves in the real world 👍  Let me sharpen it a bit specifically for ZFS + y...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;That’s actually a **very solid default rule**, and it lines up almost perfectly with how ZFS behaves in the real world 👍&lt;br /&gt;
&lt;br /&gt;
Let me sharpen it a bit specifically for ZFS + your kind of workloads:&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧠 Your Rule (Refined for ZFS)&lt;br /&gt;
&lt;br /&gt;
&amp;gt; **“Use mirrors unless workload is mostly sequential + read-heavy + static.”**&lt;br /&gt;
&lt;br /&gt;
That’s basically the *right mental model*.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 Why RAID10 (mirrors) is usually the right choice&lt;br /&gt;
&lt;br /&gt;
For your environment (LXD, Postgres, APIs, etc.):&lt;br /&gt;
&lt;br /&gt;
### You are mostly doing:&lt;br /&gt;
&lt;br /&gt;
* random reads/writes&lt;br /&gt;
* small block IO&lt;br /&gt;
* sync writes (fsync, WAL, etc.)&lt;br /&gt;
* metadata-heavy operations&lt;br /&gt;
&lt;br /&gt;
👉 ZFS mirrors excel at:&lt;br /&gt;
&lt;br /&gt;
* parallel IOPS (each vdev = queue)&lt;br /&gt;
* low latency under load&lt;br /&gt;
* fast resilvers (huge deal)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚠️ Where RAIDZ2 actually wins&lt;br /&gt;
&lt;br /&gt;
Your “static / read-heavy” instinct is correct, but let’s define it more precisely:&lt;br /&gt;
&lt;br /&gt;
## RAIDZ2 is best when:&lt;br /&gt;
&lt;br /&gt;
* Large sequential reads (media, backups)&lt;br /&gt;
* Large sequential writes (archives, logs)&lt;br /&gt;
* Data is mostly **append-only**&lt;br /&gt;
* Not latency-sensitive&lt;br /&gt;
* Capacity matters&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧪 Practical Examples&lt;br /&gt;
&lt;br /&gt;
## 🟢 Mirrors (RAID10)&lt;br /&gt;
&lt;br /&gt;
Use for:&lt;br /&gt;
&lt;br /&gt;
* LXD VM disks&lt;br /&gt;
* PostgreSQL / databases&lt;br /&gt;
* Kubernetes / etcd&lt;br /&gt;
* anything with fsync&lt;br /&gt;
* CI/CD workloads&lt;br /&gt;
* general-purpose root pool&lt;br /&gt;
&lt;br /&gt;
👉 This is probably **90% of real-world infra workloads**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🔵 RAIDZ2&lt;br /&gt;
&lt;br /&gt;
Use for:&lt;br /&gt;
&lt;br /&gt;
* backups&lt;br /&gt;
* object storage&lt;br /&gt;
* logs (cold)&lt;br /&gt;
* media / large files&lt;br /&gt;
* snapshot archives&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 💥 The Big Gotcha Most People Miss&lt;br /&gt;
&lt;br /&gt;
## RAIDZ2 looks good in benchmarks…&lt;br /&gt;
&lt;br /&gt;
…but falls apart under:&lt;br /&gt;
&lt;br /&gt;
* queue depth&lt;br /&gt;
* random IO&lt;br /&gt;
* mixed workloads&lt;br /&gt;
&lt;br /&gt;
👉 Especially with spinning disks&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚡ Why mirrors feel “faster” even when MB/s looks similar&lt;br /&gt;
&lt;br /&gt;
Because:&lt;br /&gt;
&lt;br /&gt;
* ZFS schedules IO per vdev&lt;br /&gt;
* Mirrors = more vdevs = more parallelism&lt;br /&gt;
&lt;br /&gt;
👉 So system responsiveness is WAY better&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🔁 Resilvering (VERY important)&lt;br /&gt;
&lt;br /&gt;
This is a huge differentiator:&lt;br /&gt;
&lt;br /&gt;
### Mirrors:&lt;br /&gt;
&lt;br /&gt;
* Only resilver **used blocks**&lt;br /&gt;
* Much faster rebuild&lt;br /&gt;
* Less stress on disks&lt;br /&gt;
&lt;br /&gt;
### RAIDZ2:&lt;br /&gt;
&lt;br /&gt;
* Must rebuild **entire vdev stripe**&lt;br /&gt;
* Slower&lt;br /&gt;
* Higher risk during rebuild&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧠 Your 20-disk scenario (applied)&lt;br /&gt;
&lt;br /&gt;
Your instinct:&lt;br /&gt;
&lt;br /&gt;
&amp;gt; “Always RAID10 unless read-heavy static”&lt;br /&gt;
&lt;br /&gt;
👉 I would translate that to:&lt;br /&gt;
&lt;br /&gt;
## Default:&lt;br /&gt;
&lt;br /&gt;
**10 mirrors (RAID10)**&lt;br /&gt;
&lt;br /&gt;
## Only use RAIDZ2 if:&lt;br /&gt;
&lt;br /&gt;
* You explicitly need capacity&lt;br /&gt;
* AND workload is not latency sensitive&lt;br /&gt;
&lt;br /&gt;
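Applied to 20 disks, the default above looks like this (pool name and device names are examples only):&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# 10 mirror vdevs from 20 disks (RAID10 equivalent)&lt;br /&gt;
zpool create -o ashift=12 tank \&lt;br /&gt;
  mirror sda sdb mirror sdc sdd \&lt;br /&gt;
  mirror sde sdf mirror sdg sdh \&lt;br /&gt;
  mirror sdi sdj mirror sdk sdl \&lt;br /&gt;
  mirror sdm sdn mirror sdo sdp \&lt;br /&gt;
  mirror sdq sdr mirror sds sdt&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;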
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 What I would actually do (based on your stack)&lt;br /&gt;
&lt;br /&gt;
Since you:&lt;br /&gt;
&lt;br /&gt;
* run infra services&lt;br /&gt;
* care about performance&lt;br /&gt;
* likely run databases / APIs&lt;br /&gt;
&lt;br /&gt;
👉 I would do:&lt;br /&gt;
&lt;br /&gt;
### Pool A (primary)&lt;br /&gt;
&lt;br /&gt;
* **Mirrors (RAID10)**&lt;br /&gt;
* All active workloads&lt;br /&gt;
&lt;br /&gt;
### Pool B (optional)&lt;br /&gt;
&lt;br /&gt;
* RAIDZ2&lt;br /&gt;
* backups / cold data&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 💡 One nuance worth knowing&lt;br /&gt;
&lt;br /&gt;
There *is* one middle-ground pattern:&lt;br /&gt;
&lt;br /&gt;
## “Wide mirrors”&lt;br /&gt;
&lt;br /&gt;
Still mirrors, but tuned datasets:&lt;br /&gt;
&lt;br /&gt;
* recordsize tuning&lt;br /&gt;
* compression&lt;br /&gt;
* SLOG if needed&lt;br /&gt;
&lt;br /&gt;
👉 This gives you:&lt;br /&gt;
&lt;br /&gt;
* most of RAID10 benefits&lt;br /&gt;
* decent capacity efficiency&lt;br /&gt;
&lt;br /&gt;
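A sketch of those knobs (pool/dataset names are examples; the values are common starting points, not universal answers):&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# smaller records for random database IO (Postgres uses 8K pages)&lt;br /&gt;
zfs set recordsize=8K tank/pgdata&lt;br /&gt;
&lt;br /&gt;
# cheap inline compression, almost always a win&lt;br /&gt;
zfs set compression=lz4 tank&lt;br /&gt;
&lt;br /&gt;
# optional: mirrored SLOG to absorb heavy sync writes&lt;br /&gt;
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;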
---&lt;br /&gt;
&lt;br /&gt;
# 🏆 Final Take&lt;br /&gt;
&lt;br /&gt;
Your rule is already good. I’d just upgrade it to:&lt;br /&gt;
&lt;br /&gt;
&amp;gt; **Use mirrors by default.&lt;br /&gt;
&amp;gt; Use RAIDZ only when capacity matters more than latency.**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 👍 Blunt version&lt;br /&gt;
&lt;br /&gt;
* Mirrors = **performance + responsiveness + safety**&lt;br /&gt;
* RAIDZ2 = **capacity + efficiency**&lt;br /&gt;
* RAIDZ is almost never the right choice for VM workloads&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you want, I can:&lt;br /&gt;
&lt;br /&gt;
* Map your exact 20-disk system into **optimal mirror grouping + spare strategy**&lt;br /&gt;
* Or show how to split it into **two pools for best LXD scheduling**&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Zfs_hdd_tuning&amp;diff=5667</id>
		<title>Zfs hdd tuning</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Zfs_hdd_tuning&amp;diff=5667"/>
		<updated>2026-03-28T19:24:33Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Nice—those **10K SAS Hitachi drives** are actually a really solid fit for ZFS if you tune it right. Let’s dial this in specifically for your setup.  ---  # 🧱 Your Drive...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Nice—those **10K SAS Hitachi drives** are actually a really solid fit for ZFS if you tune it right. Let’s dial this in specifically for your setup.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧱 Your Drive&lt;br /&gt;
&lt;br /&gt;
## Hitachi DKR5E-J1R2SS&lt;br /&gt;
&lt;br /&gt;
![Image](https://cdn11.bigcommerce.com/s-017c0/images/stencil/1280x1280/products/12022/50761/3282390-E_1__08710.1680120086.JPG?c=2)&lt;br /&gt;
&lt;br /&gt;
### Key characteristics:&lt;br /&gt;
&lt;br /&gt;
* ~120–180 IOPS per drive&lt;br /&gt;
* Low latency vs SATA, but still **spinning disk**&lt;br /&gt;
* Enterprise reliability (great for ZFS)&lt;br /&gt;
* Likely **512e or 4K sectors** (we’ll force correct alignment)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🎯 Goals for ZFS Tuning&lt;br /&gt;
&lt;br /&gt;
You want to optimize for:&lt;br /&gt;
&lt;br /&gt;
* LXD containers + VMs&lt;br /&gt;
* Mixed workloads (random IO heavy)&lt;br /&gt;
* Stability over raw throughput&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 1 — Create ZFS Pool (IMPORTANT SETTINGS)&lt;br /&gt;
&lt;br /&gt;
Assuming the disks are presented directly through an HBA (no hardware RAID) and you have several of them (mirrors or RAIDZ recommended):&lt;br /&gt;
&lt;br /&gt;
### 🔥 Best layout options:&lt;br /&gt;
&lt;br /&gt;
* **Mirrors (RAID10 equivalent)** → BEST performance&lt;br /&gt;
* **RAIDZ2** → BEST safety (recommended if ≥6 disks)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Example (mirrors)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool create -o ashift=12 HDD \&lt;br /&gt;
  mirror /dev/sdb /dev/sdc \&lt;br /&gt;
  mirror /dev/sdd /dev/sde&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Example (RAIDZ2)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool create -o ashift=12 HDD \&lt;br /&gt;
  raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🔑 Why `ashift=12` matters&lt;br /&gt;
&lt;br /&gt;
* Forces 4K alignment (even if drive lies)&lt;br /&gt;
* Prevents **massive write amplification**&lt;br /&gt;
&lt;br /&gt;
👉 Always do this. No exceptions.&lt;br /&gt;
&lt;br /&gt;
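Before creating the pool, it is worth confirming what the drives actually report; 512e drives show logical=512 but physical=4096. A quick check using only sysfs (no extra tools assumed):&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Print logical vs physical sector size for every block device&lt;br /&gt;
for q in /sys/block/*/queue; do&lt;br /&gt;
  echo $q: logical=$(cat $q/logical_block_size) physical=$(cat $q/physical_block_size)&lt;br /&gt;
done&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
If physical shows 4096 anywhere, `ashift=12` is mandatory; even on true 512n drives it costs almost nothing.&lt;br /&gt;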
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 2 — ZFS Dataset Tuning (CRITICAL)&lt;br /&gt;
&lt;br /&gt;
For LXD workloads:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs set atime=off HDD&lt;br /&gt;
zfs set compression=lz4 HDD&lt;br /&gt;
zfs set xattr=sa HDD&lt;br /&gt;
zfs set acltype=posixacl HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🧠 Recordsize tuning (VERY important)&lt;br /&gt;
&lt;br /&gt;
### For general LXD pool:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs set recordsize=128K HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### For VM disks (zvols):&lt;br /&gt;
&lt;br /&gt;
`volblocksize` is fixed at creation time, so set it when the zvol is made:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zfs create -s -V 32G -o volblocksize=16K HDD/vm-disk0&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Why:&lt;br /&gt;
&lt;br /&gt;
* 128K = good throughput default&lt;br /&gt;
* 16K = better for random IO (VMs, DB)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 3 — LXD Storage Pool&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage create HDD zfs source=HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 4 — Special LXD tuning&lt;br /&gt;
&lt;br /&gt;
### For containers:&lt;br /&gt;
&lt;br /&gt;
ZFS dataset works great as-is.&lt;br /&gt;
&lt;br /&gt;
### For VMs:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage set HDD volume.zfs.block_mode=true&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 This uses zvols instead of files → better performance&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚡ Optional (BUT HIGHLY RECOMMENDED)&lt;br /&gt;
&lt;br /&gt;
## 🚀 Add SLOG (for sync writes)&lt;br /&gt;
&lt;br /&gt;
Without SLOG:&lt;br /&gt;
&lt;br /&gt;
* Sync writes = slow (HDD latency)&lt;br /&gt;
&lt;br /&gt;
With SLOG:&lt;br /&gt;
&lt;br /&gt;
* Huge boost for:&lt;br /&gt;
&lt;br /&gt;
  * databases&lt;br /&gt;
  * fsync-heavy workloads&lt;br /&gt;
  * Kubernetes / etcd&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Best SLOG device:&lt;br /&gt;
&lt;br /&gt;
* NVMe with power-loss protection&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool add HDD log /dev/nvme0n1&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🧠 L2ARC (read cache)&lt;br /&gt;
&lt;br /&gt;
Only add if:&lt;br /&gt;
&lt;br /&gt;
* RAM &amp;lt; working set&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
zpool add HDD cache /dev/nvme1n1&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Not critical unless heavy reads&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# ⚙️ Step 5 — Kernel tuning (small but helpful)&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
echo 1 &amp;gt; /sys/module/zfs/parameters/zfs_prefetch_disable&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
👉 Disables prefetch (better for random workloads)&lt;br /&gt;
&lt;br /&gt;
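The echo above only lasts until reboot. To persist it, the usual ZFS-on-Linux convention is a modprobe options file (path assumed below):&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
# /etc/modprobe.d/zfs.conf&lt;br /&gt;
options zfs zfs_prefetch_disable=1&lt;br /&gt;
```&lt;br /&gt;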
---&lt;br /&gt;
&lt;br /&gt;
# ⚠️ What NOT to do&lt;br /&gt;
&lt;br /&gt;
❌ Don’t use:&lt;br /&gt;
&lt;br /&gt;
* RAID controller + ZFS (double abstraction)&lt;br /&gt;
* RAID5 on controller (write hole risk)&lt;br /&gt;
* ashift=9 (huge mistake on modern disks)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🧪 Expected Performance&lt;br /&gt;
&lt;br /&gt;
With mirrors:&lt;br /&gt;
&lt;br /&gt;
* Read IOPS scale with the number of drives; write IOPS with the number of mirror vdevs&lt;br /&gt;
* Great for VMs&lt;br /&gt;
&lt;br /&gt;
With RAIDZ2:&lt;br /&gt;
&lt;br /&gt;
* Better capacity&lt;br /&gt;
* Lower IOPS (write penalty)&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🏆 Final Recommended Setup (for YOU)&lt;br /&gt;
&lt;br /&gt;
Given your setup:&lt;br /&gt;
&lt;br /&gt;
### If you want performance:&lt;br /&gt;
&lt;br /&gt;
👉 **Mirrors + ZFS**&lt;br /&gt;
&lt;br /&gt;
* Best VM performance&lt;br /&gt;
* Scales well&lt;br /&gt;
&lt;br /&gt;
### If you want capacity + safety:&lt;br /&gt;
&lt;br /&gt;
👉 **RAIDZ2 + ZFS**&lt;br /&gt;
&lt;br /&gt;
* Survives 2 disk failures&lt;br /&gt;
* Still solid performance&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 🔥 My Personal Pick for Your Use Case&lt;br /&gt;
&lt;br /&gt;
Since you're running:&lt;br /&gt;
&lt;br /&gt;
* LXD cluster&lt;br /&gt;
* likely DB / infra workloads&lt;br /&gt;
&lt;br /&gt;
👉 I would do:&lt;br /&gt;
&lt;br /&gt;
**Mirrors + ZFS + optional NVMe SLOG**&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 💡 Bonus: LXD Multi-Pool Strategy&lt;br /&gt;
&lt;br /&gt;
You can do:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc storage create fast zfs source=SSDPOOL&lt;br /&gt;
lxc storage create HDD zfs source=HDD&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&lt;br /&gt;
* Put VMs → fast&lt;br /&gt;
* Backups / bulk → HDD&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# 👍 Final Take&lt;br /&gt;
&lt;br /&gt;
Your **10K SAS drives + ZFS (HBA mode)**:&lt;br /&gt;
&lt;br /&gt;
* Extremely reliable&lt;br /&gt;
* Predictable performance&lt;br /&gt;
* WAY safer than RAID for long-term integrity&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you want next step, I can:&lt;br /&gt;
&lt;br /&gt;
* Map your exact number of disks → best vdev layout&lt;br /&gt;
* Or tune specifically for PostgreSQL / your monitoring stack&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=IDrac&amp;diff=5666</id>
		<title>IDrac</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=IDrac&amp;diff=5666"/>
		<updated>2026-03-26T20:15:58Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# Install from ipmi&lt;br /&gt;
&lt;br /&gt;
- Configuration-&amp;gt;Virtual Media-&amp;gt;Connect Virtual Media&lt;br /&gt;
- Select File&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sudo ipmitool lan print 1&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=LXD_security_luks_ssh_sync&amp;diff=5665</id>
		<title>LXD security luks ssh sync</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=LXD_security_luks_ssh_sync&amp;diff=5665"/>
		<updated>2026-03-20T03:26:12Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;To sync your SSH configuration and keys across your LXD cluster (the Ubuntu 24.04 node and your older 22.04 nodes), you can use a combination of **LXD's built-in file push** a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To sync your SSH configuration and keys across your LXD cluster (the Ubuntu 24.04 node and your older 22.04 nodes), you can use a combination of **LXD's built-in file push** and a simple **Bash alias**.&lt;br /&gt;
&lt;br /&gt;
Since your 6-disk ZFS mirror is likely your &amp;quot;main&amp;quot; node, we’ll use it as the source of truth.&lt;br /&gt;
&lt;br /&gt;
### 1. The Manual Sync (Quickest)&lt;br /&gt;
LXD allows you to push files directly into a container or across nodes if they are part of the same cluster. If you want to push your updated `config` to a specific container:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
lxc file push ~/.ssh/config my-alpine-container/root/.ssh/config&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### 2. The Cluster-Wide Sync Script&lt;br /&gt;
If you want to ensure every node in your cluster has the same SSH shortcuts, you can run this loop from your primary node. It iterates through all nodes and updates their local `~/.ssh/config`.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
for node in $(lxc cluster list --format csv -c Name); do&lt;br /&gt;
  scp ~/.ssh/config $node:~/.ssh/config&lt;br /&gt;
done&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 3. Using &amp;quot;SSH Agent Forwarding&amp;quot; (Recommended)&lt;br /&gt;
Instead of copying your **Private Keys** to every node (which is a security risk), you should use **Agent Forwarding**. This allows you to keep your keys on your local machine but &amp;quot;lend&amp;quot; them to the cluster nodes as you jump between them.&lt;br /&gt;
&lt;br /&gt;
**Update your `~/.ssh/config` on your local machine:**&lt;br /&gt;
```text&lt;br /&gt;
Host lxd-cluster-*&lt;br /&gt;
    ForwardAgent yes&lt;br /&gt;
    User busk&lt;br /&gt;
    IdentityFile ~/.ssh/id_ed25519_personal&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**How it works:**&lt;br /&gt;
1. You SSH into Node A.&lt;br /&gt;
2. From Node A, you want to `git clone` or SSH into Node B.&lt;br /&gt;
3. Node A &amp;quot;asks&amp;quot; your local machine to sign the request using your key.&lt;br /&gt;
4. **The private key never leaves your laptop.**&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 4. Syncing via a Git &amp;quot;Dotfiles&amp;quot; Repo&lt;br /&gt;
Since we just set up your Git SSH access, the most professional way to manage this is to create a private **&amp;quot;dotfiles&amp;quot;** repository.&lt;br /&gt;
&lt;br /&gt;
1. **Create a private repo** on GitHub called `dotfiles`.&lt;br /&gt;
2. **Push your config:**&lt;br /&gt;
   ```bash&lt;br /&gt;
   git add ~/.ssh/config&lt;br /&gt;
   git commit -m &amp;quot;update ssh config&amp;quot;&lt;br /&gt;
   git push&lt;br /&gt;
   ```&lt;br /&gt;
3. **Pull on other nodes:** On any new node in your cluster, just `git pull` to get your latest shortcuts and host definitions.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 5. Security Warning for ZFS Pools&lt;br /&gt;
If you are syncing these files to a directory on your `tank` pool, ensure the ZFS dataset permissions are restricted. SSH will fail to work if the permissions are too &amp;quot;open&amp;quot; (e.g., if a group has write access).&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Ensure the ZFS mount for your home dir is strict&lt;br /&gt;
sudo chmod 700 /home/busk/.ssh&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Summary Checklist&lt;br /&gt;
| Method | Best For... | Security Level |&lt;br /&gt;
| :--- | :--- | :--- |&lt;br /&gt;
| **Agent Forwarding** | Jumping between nodes | **High** (Keys stay local) |&lt;br /&gt;
| **LXC File Push** | Quick fixes to containers | **Medium** |&lt;br /&gt;
| **Git Dotfiles** | Long-term cluster management | **High** (If repo is private) |&lt;br /&gt;
&lt;br /&gt;
**Would you like me to show you how to automate the &amp;quot;Git Pull&amp;quot; of your config files every time you log into a node?**&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=LXD_security_luks_dropbear&amp;diff=5664</id>
		<title>LXD security luks dropbear</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=LXD_security_luks_dropbear&amp;diff=5664"/>
		<updated>2026-03-20T03:25:46Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Setting up **Dropbear** for **Initramfs** is the gold standard for &amp;quot;headless&amp;quot; secure servers. It allows you to SSH into your server *before* the main operating system even sta...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Setting up **Dropbear** for **Initramfs** is the gold standard for &amp;quot;headless&amp;quot; secure servers. It allows you to SSH into your server *before* the main operating system even starts, just to type the LUKS passphrase.&lt;br /&gt;
&lt;br /&gt;
This gives you the security of a manual passphrase with the convenience of not needing a physical keyboard and monitor attached to the server.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 1. Install the Dropbear Initramfs Tool&lt;br /&gt;
Ubuntu has a specific package that integrates the lightweight Dropbear SSH server into the boot process.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install dropbear-initramfs -y&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### 2. Configure Your SSH Key&lt;br /&gt;
For security, you should use an **SSH Key** rather than a password to log in during this pre-boot phase.&lt;br /&gt;
&lt;br /&gt;
1.  **Copy your public key** (usually found at `~/.ssh/id_ed25519.pub` on your local machine).&lt;br /&gt;
2.  **Add it to the authorized keys for boot:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo nano /etc/dropbear/initramfs/authorized_keys&lt;br /&gt;
    ```&lt;br /&gt;
    *Paste your public key there and save.*&lt;br /&gt;
&lt;br /&gt;
### 3. Set the Network Config (Optional but Recommended)&lt;br /&gt;
If your server doesn't get an IP automatically via DHCP during boot, you can hardcode one in the GRUB config, but usually, the default is fine if your router is active. &lt;br /&gt;
&lt;br /&gt;
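If you do need a static address, the initramfs understands the kernel `ip=` boot parameter (fields: client-ip:server-ip:gateway:netmask:hostname:interface:autoconf). A sketch, with placeholder addresses and interface name; merge with any existing options:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
# /etc/default/grub, then run: sudo update-grub&lt;br /&gt;
GRUB_CMDLINE_LINUX=&amp;quot;ip=192.168.1.50::192.168.1.1:255.255.255.0:myserver:eth0:off&amp;quot;&lt;br /&gt;
```&lt;br /&gt;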
To change the port (to avoid conflicts with your main SSH later), edit the config:&lt;br /&gt;
```bash&lt;br /&gt;
sudo nano /etc/dropbear/initramfs/config&lt;br /&gt;
```&lt;br /&gt;
Change to: `DROPBEAR_OPTIONS=&amp;quot;-p 2222&amp;quot;` (This puts the &amp;quot;Unlock SSH&amp;quot; on port 2222).&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 4. Update and Reboot&lt;br /&gt;
Tell the system to rebuild the boot image with Dropbear inside it.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
sudo update-initramfs -u&lt;br /&gt;
sudo reboot&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 5. How to Unlock Remotely&lt;br /&gt;
Once the server starts booting, it will &amp;quot;hang&amp;quot; at the encryption prompt. From your local laptop, run:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
ssh -p 2222 root@your-server-ip&lt;br /&gt;
```&lt;br /&gt;
*Note: It will always be `root`, even if that's not your main user.*&lt;br /&gt;
&lt;br /&gt;
Once you are logged in, you will see a minimal prompt. Run the unlock command:&lt;br /&gt;
```bash&lt;br /&gt;
cryptroot-unlock&lt;br /&gt;
```&lt;br /&gt;
Type your passphrase. Once accepted, the SSH session will close automatically, the drive will mount, and Ubuntu will finish booting into the main OS (and start your LXD containers).&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Summary of your &amp;quot;Private Cloud&amp;quot; Security&lt;br /&gt;
You now have a setup that mirrors high-end cloud security:&lt;br /&gt;
1.  **LXD Isolation:** Your apps are siloed in containers or VMs.&lt;br /&gt;
2.  **Full Disk Encryption:** Data is encrypted at rest (LUKS).&lt;br /&gt;
3.  **Encrypted Storage Pool:** Your LXD data lives on a ZFS-over-LUKS layer.&lt;br /&gt;
4.  **Remote Secure Unlock:** You can manage reboots from anywhere without leaving keys &amp;quot;in the lock&amp;quot; (the keyfile).&lt;br /&gt;
&lt;br /&gt;
# A small bash script that checks your encrypted LXD pool and alerts if it fails to mount after an unlock&lt;br /&gt;
&lt;br /&gt;
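A minimal sketch; the pool name `secure-pool` and mapper `lxd_encrypted` are assumptions matching the LUKS setup elsewhere in this wiki, so adjust them to yours. Wire it into cron or your monitoring as needed:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
#!/usr/bin/env bash&lt;br /&gt;
# Check that the LUKS mapper is open and the ZFS pool it backs is healthy.&lt;br /&gt;
check_pool() {&lt;br /&gt;
  local pool=$1 mapper=$2 state&lt;br /&gt;
  if [ ! -e &amp;quot;$mapper&amp;quot; ]; then&lt;br /&gt;
    echo &amp;quot;ALERT: $mapper is not open (LUKS still locked?)&amp;quot;&lt;br /&gt;
    return 1&lt;br /&gt;
  fi&lt;br /&gt;
  state=$(zpool list -H -o health &amp;quot;$pool&amp;quot; 2&amp;gt;/dev/null || echo MISSING)&lt;br /&gt;
  if [ &amp;quot;$state&amp;quot; != ONLINE ]; then&lt;br /&gt;
    echo &amp;quot;ALERT: pool $pool health is $state&amp;quot;&lt;br /&gt;
    return 1&lt;br /&gt;
  fi&lt;br /&gt;
  echo &amp;quot;OK: $pool is ONLINE&amp;quot;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
check_pool secure-pool /dev/mapper/lxd_encrypted || true&lt;br /&gt;
```&lt;br /&gt;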
| Goal | Config Setting |&lt;br /&gt;
| :--- | :--- |&lt;br /&gt;
| Specify a Key | `IdentityFile ~/.ssh/key_name` |&lt;br /&gt;
| Use a Nickname | `Host nickname` |&lt;br /&gt;
| Set User | `User username` |&lt;br /&gt;
| Force Port | `Port 2222` (if you changed it for security) |&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=LXD_security_luks&amp;diff=5663</id>
		<title>LXD security luks</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=LXD_security_luks&amp;diff=5663"/>
		<updated>2026-03-20T03:23:02Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Setting up an encrypted storage pool for LXD is the closest you can get to the &amp;quot;AWS EBS Encryption&amp;quot; model on your own hardware. By using **LUKS** (Linux Unified Key Setup) und...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Setting up an encrypted storage pool for LXD is the closest you can get to the &amp;quot;AWS EBS Encryption&amp;quot; model on your own hardware. By using **LUKS** (Linux Unified Key Setup) underneath **ZFS**, even if someone pulls the physical drive from your server, the data is just random noise without the decryption passphrase.&lt;br /&gt;
&lt;br /&gt;
Here is how to build that secure foundation on Ubuntu 24.04.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Phase 1: Prepare the Encrypted Partition&lt;br /&gt;
We will use a raw partition or a secondary drive (e.g., `/dev/sdb`). &lt;br /&gt;
&lt;br /&gt;
1.  **Install the necessary tools:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo apt update &amp;amp;&amp;amp; sudo apt install cryptsetup zfsutils-linux -y&lt;br /&gt;
    ```&lt;br /&gt;
2.  **Format the partition with LUKS:**&lt;br /&gt;
    *Choose a strong passphrase; you will need this every time the server boots.*&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo cryptsetup luksFormat /dev/sdb&lt;br /&gt;
    ```&lt;br /&gt;
3.  **Open the encrypted &amp;quot;container&amp;quot;:**&lt;br /&gt;
    This maps the encrypted physical drive to a virtual device at `/dev/mapper/lxd_encrypted`.&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo cryptsetup open /dev/sdb lxd_encrypted&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Phase 2: Create the LXD Storage Pool&lt;br /&gt;
Now that we have an open, decrypted block device, we tell LXD to use it for a ZFS pool.&lt;br /&gt;
&lt;br /&gt;
1.  **Create the Pool:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo lxc storage create secure-pool zfs source=/dev/mapper/lxd_encrypted&lt;br /&gt;
    ```&lt;br /&gt;
2.  **Verify the Pool:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    lxc storage list&lt;br /&gt;
    ```&lt;br /&gt;
    You should see `secure-pool` listed with the driver `zfs`.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Phase 3: Launch a Secure Container/VM&lt;br /&gt;
To ensure your workloads actually live on this encrypted disk, you must specify the pool during launch.&lt;br /&gt;
&lt;br /&gt;
* **For a Container:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    lxc launch ubuntu:24.04 my-secure-app --storage secure-pool&lt;br /&gt;
    ```&lt;br /&gt;
* **For a VM (Even More Secure):**&lt;br /&gt;
    ```bash&lt;br /&gt;
    lxc launch ubuntu:24.04 my-vault --vm --storage secure-pool&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Phase 4: Handling Reboots (The &amp;quot;Unlock&amp;quot; Process)&lt;br /&gt;
Unlike a public cloud where the provider handles the hardware keys, **you** own the keys now. This means when your physical server reboots, the LXD pool will stay &amp;quot;offline&amp;quot; because the LUKS partition is locked.&lt;br /&gt;
&lt;br /&gt;
**To restore access after a reboot:**&lt;br /&gt;
1.  **Unlock the drive:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo cryptsetup open /dev/sdb lxd_encrypted&lt;br /&gt;
    ```&lt;br /&gt;
2.  **Restart LXD (if it didn't auto-detect):**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo systemctl restart snap.lxd.daemon&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
&amp;gt; [!TIP]&lt;br /&gt;
&amp;gt; If you want the server to unlock automatically at boot (less secure, but common for remote servers), you can use a **Keyfile** stored on a specialized USB drive or an initramfs configuration, though this technically means anyone with physical access to that USB drive can bypass the encryption.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### How this mimics &amp;quot;Cloud Security&amp;quot;&lt;br /&gt;
* **The &amp;quot;Admin&amp;quot; Problem:** If an admin tries to `cat` a file from the host's `/var/lib/lxd/storage-pools/`, they can only do so while the LUKS partition is &amp;quot;Open.&amp;quot; &lt;br /&gt;
* **Cold Boot Attacks:** If the server is powered off, the data is mathematically inaccessible. &lt;br /&gt;
* **Snapshot Safety:** Since the ZFS snapshots live inside the LUKS container, your backups are also encrypted by default if you replicate the ZFS stream.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# Automount&lt;br /&gt;
&lt;br /&gt;
Automating the unlock process is a balance between **convenience** and **security**. If you store the key on the same hard drive, you defeat the purpose of encryption (it’s like leaving the key in the lock). &lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Pro&amp;quot; way to do this on-premise is to store the keyfile on a **separate USB drive** or a dedicated **microSD card** that stays plugged into the server.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 1. Create the Keyfile&lt;br /&gt;
First, we generate a file full of random data to act as your &amp;quot;digital key.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Create a 4KB keyfile&lt;br /&gt;
sudo dd if=/dev/urandom of=/root/lxd_keyfile bs=1024 count=4&lt;br /&gt;
# Set strict permissions (Only root can read it)&lt;br /&gt;
sudo chmod 400 /root/lxd_keyfile&lt;br /&gt;
```&lt;br /&gt;
*(Note: For higher security, move this file to a USB drive mounted at `/mnt/usb/keyfile` instead of `/root/`.)*&lt;br /&gt;
&lt;br /&gt;
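If the keyfile lives on a USB stick, the stick has to be available before the keyfile is read; one common approach is an `fstab` entry with `nofail`, so the stick mounts early and an absent stick does not hang the boot (UUID is a placeholder):&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
# /etc/fstab&lt;br /&gt;
UUID=XXXX-XXXX  /mnt/usb  vfat  ro,nofail  0  0&lt;br /&gt;
```&lt;br /&gt;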
---&lt;br /&gt;
&lt;br /&gt;
### 2. Add the Keyfile to LUKS&lt;br /&gt;
LUKS has &amp;quot;slots&amp;quot; for multiple keys. We will keep your manual passphrase in Slot 0 and add this file to Slot 1.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
sudo cryptsetup luksAddKey /dev/sdb /root/lxd_keyfile&lt;br /&gt;
```&lt;br /&gt;
*You will be prompted for your **original passphrase** to authorize adding the new key.*&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 3. Configure `/etc/crypttab`&lt;br /&gt;
This file tells Ubuntu to unlock the drive automatically during the boot sequence using the keyfile.&lt;br /&gt;
&lt;br /&gt;
1.  **Get the UUID of your physical drive:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    blkid /dev/sdb&lt;br /&gt;
    ```&lt;br /&gt;
    *Copy the UUID (e.g., `UUID=&amp;quot;1234-abcd-...&amp;quot;`).*&lt;br /&gt;
&lt;br /&gt;
2.  **Edit the crypttab file:**&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo nano /etc/crypttab&lt;br /&gt;
    ```&lt;br /&gt;
3.  **Add this line:**&lt;br /&gt;
    ```text&lt;br /&gt;
    lxd_encrypted  UUID=your-uuid-here  /root/lxd_keyfile  luks&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 4. Update Initramfs&lt;br /&gt;
Since storage is a core system component, you need to update the boot RAM disk so the kernel knows how to handle this mapping at startup.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
sudo update-initramfs -u&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 5. Final Step: The ZFS Mount&lt;br /&gt;
LXD is usually smart enough to see the `/dev/mapper/lxd_encrypted` device appear and then mount the ZFS pool. However, if the pool doesn't import automatically, you can add a simple systemd override or import the pool by hand with `sudo zpool import secure-pool`.&lt;br /&gt;
&lt;br /&gt;
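One way to express that override: a drop-in that orders the LXD daemon after the decrypted device exists. The unit name below assumes the snap package, and the `.device` name follows the systemd escaping convention for `/dev/mapper/lxd_encrypted`; run `sudo systemctl daemon-reload` after creating it:&lt;br /&gt;
&lt;br /&gt;
```text&lt;br /&gt;
# /etc/systemd/system/snap.lxd.daemon.service.d/wait-for-luks.conf&lt;br /&gt;
[Unit]&lt;br /&gt;
Requires=dev-mapper-lxd_encrypted.device&lt;br /&gt;
After=dev-mapper-lxd_encrypted.device&lt;br /&gt;
```&lt;br /&gt;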
### The &amp;quot;Cloud-Level&amp;quot; Security Result&lt;br /&gt;
* **Rebooting:** The server boots, finds the keyfile, unlocks the drive, and LXD starts your containers.&lt;br /&gt;
* **Theft Scenario:** If someone steals the server but **unplugs the USB key** (or if you keep the keyfile on a network share that you disconnect), the data remains encrypted and unreadable.&lt;br /&gt;
* **Rogue Admin:** An admin with access to the running OS can still see the data (since it's unlocked), but an admin with physical access to the &amp;quot;cold&amp;quot; hardware cannot.&lt;br /&gt;
&lt;br /&gt;
&amp;gt; [!WARNING]&lt;br /&gt;
&amp;gt; If you lose both your manual passphrase **and** this keyfile, the data on that LXD pool is gone forever. There is no &amp;quot;Password Reset&amp;quot; in LUKS encryption.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=LXD_security&amp;diff=5662</id>
		<title>LXD security</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=LXD_security&amp;diff=5662"/>
		<updated>2026-03-20T03:19:20Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;To recreate a &amp;quot;Cloud-Style&amp;quot; secure environment using **LXD** on-premise, you have to move away from the idea of being a &amp;quot;System Admin&amp;quot; and start acting like a &amp;quot;Service Provide...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To recreate a &amp;quot;Cloud-Style&amp;quot; secure environment using **LXD** on-premise, you have to move away from the idea of being a &amp;quot;System Admin&amp;quot; and start acting like a &amp;quot;Service Provider.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
In a standard setup, you sudo into everything. In a high-security LXD setup, you use **Project Isolation**, **Restricted Roles**, and **Hardware-Backed Encryption**.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 1. Project Isolation (The Virtual Private Cloud)&lt;br /&gt;
Don't just run containers in the &amp;quot;default&amp;quot; project. Projects in LXD act like AWS Accounts or Azure Subscriptions. They have their own networks, storage volumes, and—crucially—their own security policies.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Create a secure project&lt;br /&gt;
lxc project create secure-zone -c features.networks=true -c features.images=true&lt;br /&gt;
&lt;br /&gt;
# Switch to it&lt;br /&gt;
lxc project switch secure-zone&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### 2. Restrict the &amp;quot;Admin&amp;quot; (RBAC)&lt;br /&gt;
To prevent a rogue local admin (or yourself by mistake) from having &amp;quot;God Mode&amp;quot; over every container, you should use **Canonical RBAC** (Role-Based Access Control) or integration with **OpenID Connect**.&lt;br /&gt;
&lt;br /&gt;
By integrating LXD with an identity provider (like Keycloak or Authelia), you can ensure that even if someone has a login to the physical host, they don't have the &amp;quot;LXD API&amp;quot; permissions to peek into a specific project's containers without an audited login.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 3. Encryption at Rest (The Storage Layer)&lt;br /&gt;
In AWS, the EBS volumes are encrypted. In LXD, you should use **ZFS or LVM with LUKS**.&lt;br /&gt;
&lt;br /&gt;
* **The Goal:** If someone steals the physical hard drives from your server, they see nothing.&lt;br /&gt;
* **The Setup:** Encrypt the entire partition using LUKS before assigning it to the LXD Storage Pool.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Example: Creating an encrypted ZFS pool for LXD&lt;br /&gt;
cryptsetup luksFormat /dev/sdb&lt;br /&gt;
cryptsetup open /dev/sdb crypt_storage&lt;br /&gt;
lxc storage create encrypted-pool zfs source=/dev/mapper/crypt_storage&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 4. Shielded Containers (The &amp;quot;Nitro&amp;quot; Equivalent)&lt;br /&gt;
LXD allows you to run **Virtual Machines (VMs)** instead of just containers. While containers share the host kernel (easier for a rogue root user to &amp;quot;escape&amp;quot;), LXD VMs use a separate kernel and hardware virtualization (QEMU/KVM).&lt;br /&gt;
&lt;br /&gt;
For your most sensitive data, use an LXD VM with **vTPM** (Virtual Trusted Platform Module) enabled:&lt;br /&gt;
```bash&lt;br /&gt;
lxc launch ubuntu:24.04 my-secure-vm --vm -c limits.cpu=2&lt;br /&gt;
lxc config device add my-secure-vm vtpm tpm&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 5. The &amp;quot;No-Sudo&amp;quot; Architecture (The API Approach)&lt;br /&gt;
The biggest risk in an on-premise setup is someone running `sudo lxc exec container bash`. &lt;br /&gt;
&lt;br /&gt;
To mitigate this:&lt;br /&gt;
1.  **Disable root SSH:** Never allow root login on the host.&lt;br /&gt;
2.  **Use the LXD API with Candid/RBAC:** Force all container access through a remote API call that requires a multi-factor authentication (MFA) token, rather than direct local socket access.&lt;br /&gt;
3.  **Kernel Hardening:** Use **AppArmor** and **Seccomp** (enabled by default in LXD) to prevent containers from making sensitive system calls to the host kernel.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### Summary Checklist for a &amp;quot;Private Cloud&amp;quot;&lt;br /&gt;
| Feature | Local LXD Setup |&lt;br /&gt;
| :--- | :--- |&lt;br /&gt;
| **Identity** | Integrate with OIDC/Keycloak (No local passwords). |&lt;br /&gt;
| **Storage** | LUKS-encrypted ZFS or LVM pools. |&lt;br /&gt;
| **Isolation** | Use LXD Projects to silo different departments/apps. |&lt;br /&gt;
| **Runtime** | Use VMs (`--vm`) for high-security, Containers for dev. |&lt;br /&gt;
| **Auditing** | Forward LXD logs to a separate, write-only syslog server. |&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Go_install&amp;diff=5661</id>
		<title>Go install</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Go_install&amp;diff=5661"/>
		<updated>2026-02-27T15:49:51Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
https://golang.org/doc/install&lt;br /&gt;
```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
set -eu&lt;br /&gt;
version=1.26.0&lt;br /&gt;
sudo apt remove -y golang-go || true&lt;br /&gt;
# curl -LO https://go.dev/dl/go1.20.3.linux-amd64.tar.gz&lt;br /&gt;
curl -LO https://golang.org/dl/go$version.linux-amd64.tar.gz&lt;br /&gt;
sudo rm -rf /usr/local/go &amp;amp;&amp;amp; sudo tar -C /usr/local -xzf go$version.linux-amd64.tar.gz&lt;br /&gt;
echo 'export PATH=${PATH}:/usr/local/go/bin' &amp;gt;&amp;gt; ~/.bashrc&lt;br /&gt;
# sourcing ~/.bashrc inside a script cannot change the parent shell;&lt;br /&gt;
# update PATH for the current session directly instead&lt;br /&gt;
export PATH=${PATH}:/usr/local/go/bin&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Snap&lt;br /&gt;
```&lt;br /&gt;
sudo snap install --classic go&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Apt&lt;br /&gt;
```&lt;br /&gt;
sudo apt install -y golang-go&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5660</id>
		<title>Winrm python</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5660"/>
		<updated>2025-12-11T07:25:35Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# Allow Python winrm&lt;br /&gt;
&lt;br /&gt;
## Steps&lt;br /&gt;
&lt;br /&gt;
### Create user&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# Create the password object&lt;br /&gt;
$Password = ConvertTo-SecureString &amp;quot;myPassword&amp;quot; -AsPlainText -Force&lt;br /&gt;
&lt;br /&gt;
# Create the user account&lt;br /&gt;
New-LocalUser -Name &amp;quot;test&amp;quot; `&lt;br /&gt;
    -Password $Password `&lt;br /&gt;
    -FullName &amp;quot;Test Automation User&amp;quot; `&lt;br /&gt;
    -Description &amp;quot;User for WinRM access&amp;quot; `&lt;br /&gt;
    -PasswordNeverExpires&lt;br /&gt;
&lt;br /&gt;
Add-LocalGroupMember -Group &amp;quot;Remote Management Users&amp;quot; -Member &amp;quot;test&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Firewall rule&lt;br /&gt;
```&lt;br /&gt;
New-NetFirewallRule -DisplayName &amp;quot;Allow WinRM from Specific IP&amp;quot; `&lt;br /&gt;
    -Direction Inbound `&lt;br /&gt;
    -LocalPort 5986 `&lt;br /&gt;
    -Protocol TCP `&lt;br /&gt;
    -Action Allow `&lt;br /&gt;
    -RemoteAddress 10.x.x.x&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Test connectivity&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
icm '10.x.x.x' -Cr $c -Port 5986 -UseSSL -SessionOption $o { &amp;quot;5986 OK&amp;quot; }&lt;br /&gt;
```&lt;br /&gt;
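From the client side, a quick stdlib-only TCP check can confirm the 5986 listener is reachable before debugging WinRM auth. This is a sketch, not part of the original steps; the host is a placeholder:&lt;br /&gt;
&lt;br /&gt;
```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open('10.x.x.x', 5986) should return True once the firewall rule is active
```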
&lt;br /&gt;
### Grant the user Read &amp;amp; Execute on the default WinRM listener (required for Python winrm)&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
winrm configSDDL default&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Restart if needed&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Restart-Service WinRM&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Python&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
import winrm&lt;br /&gt;
&lt;br /&gt;
s = winrm.Session(&lt;br /&gt;
    '10.x.x.x',                       # IP/host is fine&lt;br /&gt;
    auth=(r'test', 'mypassword'),     # .\ for local user; DOMAIN\user for domain&lt;br /&gt;
    transport='ssl',                     # HTTPS on 5986 with Basic over TLS&lt;br /&gt;
    server_cert_validation='ignore',     # OK for self-signed / lab&lt;br /&gt;
    message_encryption='auto',           # Optional; mostly irrelevant over HTTPS&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
try:&lt;br /&gt;
    r = s.run_cmd('hostname')&lt;br /&gt;
    print(&amp;quot;Status:&amp;quot;, r.status_code)&lt;br /&gt;
    print(&amp;quot;STDOUT:&amp;quot;, r.std_out.decode(errors=&amp;quot;ignore&amp;quot;).strip())&lt;br /&gt;
    print(&amp;quot;STDERR:&amp;quot;, r.std_err.decode(errors=&amp;quot;ignore&amp;quot;).strip())&lt;br /&gt;
except Exception as e:&lt;br /&gt;
    print(&amp;quot;Error:&amp;quot;, e)&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Notes on winrm configSDDL&lt;br /&gt;
&lt;br /&gt;
  * **Opens a permissions dialog**: running `winrm configSDDL default` brings up the standard Windows security permissions dialog for the default WinRM listener.&lt;br /&gt;
  * **Grants non-admin access**: add non-admin users/groups (e.g., DOMAIN\User) and check &amp;quot;Allow&amp;quot; for Read and Execute so they can use remote management tools like PowerShell remoting.&lt;br /&gt;
  * **Manages RootSDDL**: this effectively configures the RootSDDL setting, which defines who can access the WinRM service remotely.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5659</id>
		<title>Winrm python</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5659"/>
		<updated>2025-12-10T20:46:06Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# Allow Python winrm&lt;br /&gt;
&lt;br /&gt;
## Steps&lt;br /&gt;
&lt;br /&gt;
### Create user&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# Create the password object&lt;br /&gt;
$Password = ConvertTo-SecureString &amp;quot;myPassword&amp;quot; -AsPlainText -Force&lt;br /&gt;
&lt;br /&gt;
# Create the user account&lt;br /&gt;
New-LocalUser -Name &amp;quot;test&amp;quot; `&lt;br /&gt;
    -Password $Password `&lt;br /&gt;
    -FullName &amp;quot;Test Automation User&amp;quot; `&lt;br /&gt;
    -Description &amp;quot;User for WinRM access&amp;quot; `&lt;br /&gt;
    -PasswordNeverExpires&lt;br /&gt;
&lt;br /&gt;
Add-LocalGroupMember -Group &amp;quot;Remote Management Users&amp;quot; -Member &amp;quot;test&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Firewall rule&lt;br /&gt;
```&lt;br /&gt;
New-NetFirewallRule -DisplayName &amp;quot;Allow WinRM from Specific IP&amp;quot; `&lt;br /&gt;
    -Direction Inbound `&lt;br /&gt;
    -LocalPort 5986 `&lt;br /&gt;
    -Protocol TCP `&lt;br /&gt;
    -Action Allow `&lt;br /&gt;
    -RemoteAddress 10.x.x.x&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Test connectivity&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
icm '10.x.x.x' -Cr $c -Port 5986 -UseSSL -SessionOption $o { &amp;quot;5986 OK&amp;quot; }&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Grant the user Read &amp;amp; Execute on the default WinRM listener (required for Python winrm)&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
winrm configSDDL default&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Restart if needed&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Restart-Service WinRM&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Python&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
import winrm&lt;br /&gt;
&lt;br /&gt;
s = winrm.Session(&lt;br /&gt;
    '10.x.x.x',                       # IP/host is fine&lt;br /&gt;
    auth=(r'test', 'mypassword'),     # .\ for local user; DOMAIN\user for domain&lt;br /&gt;
    transport='ssl',                     # HTTPS on 5986 with Basic over TLS&lt;br /&gt;
    server_cert_validation='ignore',     # OK for self-signed / lab&lt;br /&gt;
    message_encryption='auto',           # Optional; mostly irrelevant over HTTPS&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
try:&lt;br /&gt;
    r = s.run_cmd('hostname')&lt;br /&gt;
    print(&amp;quot;Status:&amp;quot;, r.status_code)&lt;br /&gt;
    print(&amp;quot;STDOUT:&amp;quot;, r.std_out.decode(errors=&amp;quot;ignore&amp;quot;).strip())&lt;br /&gt;
    print(&amp;quot;STDERR:&amp;quot;, r.std_err.decode(errors=&amp;quot;ignore&amp;quot;).strip())&lt;br /&gt;
except Exception as e:&lt;br /&gt;
    print(&amp;quot;Error:&amp;quot;, e)&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5658</id>
		<title>Winrm python</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Winrm_python&amp;diff=5658"/>
		<updated>2025-12-10T20:44:55Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;# Allow Python winrm  ## Steps  ### Create user  ``` # Create the password object $Password = ConvertTo-SecureString &amp;quot;myPassword&amp;quot; -AsPlainText -Force  # Create the user accoun...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;# Allow Python winrm&lt;br /&gt;
&lt;br /&gt;
## Steps&lt;br /&gt;
&lt;br /&gt;
### Create user&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
# Create the password object&lt;br /&gt;
$Password = ConvertTo-SecureString &amp;quot;myPassword&amp;quot; -AsPlainText -Force&lt;br /&gt;
&lt;br /&gt;
# Create the user account&lt;br /&gt;
New-LocalUser -Name &amp;quot;test&amp;quot; `&lt;br /&gt;
    -Password $Password `&lt;br /&gt;
    -FullName &amp;quot;Test Automation User&amp;quot; `&lt;br /&gt;
    -Description &amp;quot;User for WinRM access&amp;quot; `&lt;br /&gt;
    -PasswordNeverExpires&lt;br /&gt;
&lt;br /&gt;
Add-LocalGroupMember -Group &amp;quot;Remote Management Users&amp;quot; -Member &amp;quot;test&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Firewall rule&lt;br /&gt;
```&lt;br /&gt;
New-NetFirewallRule -DisplayName &amp;quot;Allow WinRM from Specific IP&amp;quot; `&lt;br /&gt;
    -Direction Inbound `&lt;br /&gt;
    -LocalPort 5986 `&lt;br /&gt;
    -Protocol TCP `&lt;br /&gt;
    -Action Allow `&lt;br /&gt;
    -RemoteAddress 10.x.x.x&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
### Test connectivity&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
icm '10.x.x.x' -Cr $c -Port 5986 -UseSSL -SessionOption $o { &amp;quot;5986 OK&amp;quot; }&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Grant the user Read &amp;amp; Execute on the default WinRM listener (required for Python winrm)&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
winrm configSDDL default&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
### Restart if needed&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Restart-Service WinRM&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Kaizen_principal&amp;diff=5657</id>
		<title>Kaizen principal</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Kaizen_principal&amp;diff=5657"/>
		<updated>2025-11-11T20:02:15Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Kaizen is a Japanese philosophy of continuous improvement that involves employees at all levels making small, incremental changes to processes and systems to enhance efficienc...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Kaizen is a Japanese philosophy of continuous improvement that involves employees at all levels making small, incremental changes to processes and systems to enhance efficiency, quality, and productivity. It is based on the idea that small, consistent improvements lead to significant long-term results and is applicable to both business and personal life. Core principles include eliminating waste, creating a culture of engagement, and using tools like the PDCA cycle (Plan-Do-Check-Act). &lt;br /&gt;
Core principles and concepts &lt;br /&gt;
&lt;br /&gt;
• Continuous improvement: The central idea is that improvement is an ongoing, not a one-time, event. Even small changes made consistently can lead to exponential growth over time. &lt;br /&gt;
• Employee involvement: Kaizen involves everyone in an organization, from front-line workers to management, in identifying problems and suggesting solutions. &lt;br /&gt;
• Elimination of waste: A primary goal is to reduce waste, a key principle in methodologies like the Toyota Production System. &lt;br /&gt;
• Focus on process: It emphasizes improving the process itself, rather than blaming individuals, to achieve better results. &lt;br /&gt;
• Standardization: Establishing standard processes allows for consistent performance and makes it easier to identify problems and areas for further improvement. &lt;br /&gt;
&lt;br /&gt;
Application and methods &lt;br /&gt;
&lt;br /&gt;
• Kaizen events (or blitzes): These are focused, short-term exercises where a team works to make rapid improvements in a specific area. &lt;br /&gt;
• The 5S system: A popular method for organizing and improving the workplace: &lt;br /&gt;
&lt;br /&gt;
	• Sort: Remove unnecessary items. &lt;br /&gt;
	• Set in order: Arrange items for easy access. &lt;br /&gt;
	• Shine: Clean the work area. &lt;br /&gt;
	• Standardize: Establish consistent processes. &lt;br /&gt;
	• Sustain: Maintain the improvements over time. &lt;br /&gt;
&lt;br /&gt;
• The PDCA cycle: A model for implementing changes: &lt;br /&gt;
&lt;br /&gt;
	• Plan: Identify a problem and plan a change. &lt;br /&gt;
	• Do: Implement the change on a small scale. &lt;br /&gt;
	• Check: Review the results of the change. &lt;br /&gt;
	• Act: Implement the change more broadly if successful, or go back to the drawing board. &lt;br /&gt;
&lt;br /&gt;
Benefits &lt;br /&gt;
&lt;br /&gt;
• Increased efficiency and productivity &lt;br /&gt;
• Improved quality of products and services &lt;br /&gt;
• Reduced waste and costs &lt;br /&gt;
• Empowered and engaged employees &lt;br /&gt;
• A culture of continuous learning and problem-solving &lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5656</id>
		<title>Coredns postgres</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5656"/>
		<updated>2025-11-06T02:52:36Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
CoreDNS can be configured to use a PostgreSQL database as a backend for DNS records through the pdsql plugin. This plugin leverages PowerDNS's generic SQL backend capabilities, allowing CoreDNS to retrieve DNS records from a PostgreSQL database. &lt;br /&gt;
Here's an example of how to configure CoreDNS to use PostgreSQL with the pdsql plugin: &lt;br /&gt;
1. Corefile Configuration: &lt;br /&gt;
The Corefile is the main configuration file for CoreDNS. To use PostgreSQL, you would include the pdsql plugin in your Corefile as follows: &lt;br /&gt;
. {&lt;br /&gt;
    pdsql pgsql postgres://user:password@host:port/database_name&lt;br /&gt;
    # Other plugins can be added here, e.g., forward, cache, etc.&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
• .: This specifies that the server block applies to the root zone, meaning it will handle all DNS queries that are not explicitly handled by other server blocks. &lt;br /&gt;
• pdsql: This enables the PowerDNS SQL backend plugin. &lt;br /&gt;
• pgsql: This indicates that the database dialect is PostgreSQL. &lt;br /&gt;
• postgres://user:password@host:port/database_name: This is the connection string for your PostgreSQL database. &lt;br /&gt;
	• Replace user with your PostgreSQL username. &lt;br /&gt;
	• Replace password with your PostgreSQL password. &lt;br /&gt;
	• Replace host with the hostname or IP address of your PostgreSQL server. &lt;br /&gt;
	• Replace port with the port your PostgreSQL server is listening on (default is 5432). &lt;br /&gt;
	• Replace database_name with the name of your PostgreSQL database. &lt;br /&gt;
&lt;br /&gt;
2. PostgreSQL Database Schema: &lt;br /&gt;
The pdsql plugin expects a specific database schema that is compatible with PowerDNS's generic SQL backend. You would need to create tables in your PostgreSQL database to store DNS records, such as domains, records, and potentially supermasters and comments. &lt;br /&gt;
Example Table Structure (simplified): &lt;br /&gt;
CREATE TABLE domains (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    name VARCHAR(255) NOT NULL UNIQUE,&lt;br /&gt;
    master VARCHAR(128) DEFAULT NULL,&lt;br /&gt;
    last_check INT DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(6) NOT NULL,&lt;br /&gt;
    notified_serial INT DEFAULT NULL,&lt;br /&gt;
    account VARCHAR(40) DEFAULT NULL&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE records (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    domain_id INT NOT NULL REFERENCES domains(id) ON DELETE CASCADE,&lt;br /&gt;
    name VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(10) DEFAULT NULL,&lt;br /&gt;
    content VARCHAR(64000) DEFAULT NULL,&lt;br /&gt;
    ttl INT DEFAULT NULL,&lt;br /&gt;
    prio INT DEFAULT NULL,&lt;br /&gt;
    change_date INT DEFAULT NULL,&lt;br /&gt;
    disabled BOOLEAN DEFAULT FALSE,&lt;br /&gt;
    ordername VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    auth BOOLEAN DEFAULT TRUE&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
3. Populating the Database: &lt;br /&gt;
You would then populate these tables with your DNS records. For example, to add an A record for www.example.com pointing to 192.168.1.100: &lt;br /&gt;
INSERT INTO domains (name, type) VALUES ('example.com', 'NATIVE');&lt;br /&gt;
INSERT INTO records (domain_id, name, type, content, ttl)&lt;br /&gt;
VALUES ((SELECT id FROM domains WHERE name = 'example.com'), 'www.example.com', 'A', '192.168.1.100', 3600);&lt;br /&gt;
&lt;br /&gt;
Note: This is a basic example. For a production environment, you would need to consider aspects like database security, replication, and more comprehensive schema design. You might also need to compile CoreDNS with the pdsql plugin if it's not included in your default CoreDNS build. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5655</id>
		<title>Coredns postgres</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5655"/>
		<updated>2025-11-06T02:51:55Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CoreDNS can be configured to use a PostgreSQL database as a backend for DNS records through the pdsql plugin. This plugin leverages PowerDNS's generic SQL backend capabilities, allowing CoreDNS to retrieve DNS records from a PostgreSQL database. &lt;br /&gt;
Here's an example of how to configure CoreDNS to use PostgreSQL with the pdsql plugin: &lt;br /&gt;
1. Corefile Configuration: &lt;br /&gt;
The Corefile is the main configuration file for CoreDNS. To use PostgreSQL, you would include the pdsql plugin in your Corefile as follows: &lt;br /&gt;
. {&lt;br /&gt;
    pdsql pgsql postgres://user:password@host:port/database_name&lt;br /&gt;
    # Other plugins can be added here, e.g., forward, cache, etc.&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
• .: This specifies that the server block applies to the root zone, meaning it will handle all DNS queries that are not explicitly handled by other server blocks. &lt;br /&gt;
• pdsql: This enables the PowerDNS SQL backend plugin. &lt;br /&gt;
• pgsql: This indicates that the database dialect is PostgreSQL. &lt;br /&gt;
• postgres://user:password@host:port/database_name: This is the connection string for your PostgreSQL database. &lt;br /&gt;
	• Replace user with your PostgreSQL username. &lt;br /&gt;
	• Replace password with your PostgreSQL password. &lt;br /&gt;
	• Replace host with the hostname or IP address of your PostgreSQL server. &lt;br /&gt;
	• Replace port with the port your PostgreSQL server is listening on (default is 5432). &lt;br /&gt;
	• Replace database_name with the name of your PostgreSQL database. &lt;br /&gt;
&lt;br /&gt;
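Before wiring the connection string into the Corefile, its parts can be sanity-checked with the standard library; a quick sketch using the placeholder values above:&lt;br /&gt;
&lt;br /&gt;
```python
from urllib.parse import urlparse

# Parse a PostgreSQL connection URL of the form the pdsql plugin expects.
url = urlparse("postgres://user:password@host:5432/database_name")

print(url.username)          # user
print(url.password)          # password
print(url.hostname)          # host
print(url.port)              # 5432
print(url.path.lstrip("/"))  # database_name
```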
2. PostgreSQL Database Schema: &lt;br /&gt;
The pdsql plugin expects a specific database schema that is compatible with PowerDNS's generic SQL backend. You would need to create tables in your PostgreSQL database to store DNS records, such as domains, records, and potentially supermasters and comments. &lt;br /&gt;
Example Table Structure (simplified): &lt;br /&gt;
CREATE TABLE domains (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    name VARCHAR(255) NOT NULL UNIQUE,&lt;br /&gt;
    master VARCHAR(128) DEFAULT NULL,&lt;br /&gt;
    last_check INT DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(6) NOT NULL,&lt;br /&gt;
    notified_serial INT DEFAULT NULL,&lt;br /&gt;
    account VARCHAR(40) DEFAULT NULL&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE records (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    domain_id INT NOT NULL REFERENCES domains(id) ON DELETE CASCADE,&lt;br /&gt;
    name VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(10) DEFAULT NULL,&lt;br /&gt;
    content VARCHAR(64000) DEFAULT NULL,&lt;br /&gt;
    ttl INT DEFAULT NULL,&lt;br /&gt;
    prio INT DEFAULT NULL,&lt;br /&gt;
    change_date INT DEFAULT NULL,&lt;br /&gt;
    disabled BOOLEAN DEFAULT FALSE,&lt;br /&gt;
    ordername VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    auth BOOLEAN DEFAULT TRUE&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
3. Populating the Database: &lt;br /&gt;
You would then populate these tables with your DNS records. For example, to add an A record for www.example.com pointing to 192.168.1.100: &lt;br /&gt;
INSERT INTO domains (name, type) VALUES ('example.com', 'NATIVE');&lt;br /&gt;
INSERT INTO records (domain_id, name, type, content, ttl)&lt;br /&gt;
VALUES ((SELECT id FROM domains WHERE name = 'example.com'), 'www.example.com', 'A', '192.168.1.100', 3600);&lt;br /&gt;
&lt;br /&gt;
Note: This is a basic example. For a production environment, you would need to consider aspects like database security, replication, and more comprehensive schema design. You might also need to compile CoreDNS with the pdsql plugin if it's not included in your default CoreDNS build. &lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5654</id>
		<title>Coredns postgres</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Coredns_postgres&amp;diff=5654"/>
		<updated>2025-11-06T02:51:37Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;``` CoreDNS can be configured to use a PostgreSQL database as a backend for DNS records through the pdsql plugin. This plugin leverages PowerDNS's generic SQL backend capabili...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
CoreDNS can be configured to use a PostgreSQL database as a backend for DNS records through the pdsql plugin. This plugin leverages PowerDNS's generic SQL backend capabilities, allowing CoreDNS to retrieve DNS records from a PostgreSQL database. &lt;br /&gt;
Here's an example of how to configure CoreDNS to use PostgreSQL with the pdsql plugin: &lt;br /&gt;
1. Corefile Configuration: &lt;br /&gt;
The Corefile is the main configuration file for CoreDNS. To use PostgreSQL, you would include the pdsql plugin in your Corefile as follows: &lt;br /&gt;
. {&lt;br /&gt;
    pdsql pgsql postgres://user:password@host:port/database_name&lt;br /&gt;
    # Other plugins can be added here, e.g., forward, cache, etc.&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
• .: This specifies that the server block applies to the root zone, meaning it will handle all DNS queries that are not explicitly handled by other server blocks. &lt;br /&gt;
• pdsql: This enables the PowerDNS SQL backend plugin. &lt;br /&gt;
• pgsql: This indicates that the database dialect is PostgreSQL. &lt;br /&gt;
• postgres://user:password@host:port/database_name: This is the connection string for your PostgreSQL database. &lt;br /&gt;
	• Replace user with your PostgreSQL username. &lt;br /&gt;
	• Replace password with your PostgreSQL password. &lt;br /&gt;
	• Replace host with the hostname or IP address of your PostgreSQL server. &lt;br /&gt;
	• Replace port with the port your PostgreSQL server is listening on (default is 5432). &lt;br /&gt;
	• Replace database_name with the name of your PostgreSQL database. &lt;br /&gt;
&lt;br /&gt;
2. PostgreSQL Database Schema: &lt;br /&gt;
The pdsql plugin expects a specific database schema that is compatible with PowerDNS's generic SQL backend. You would need to create tables in your PostgreSQL database to store DNS records, such as domains, records, and potentially supermasters and comments. &lt;br /&gt;
Example Table Structure (simplified): &lt;br /&gt;
CREATE TABLE domains (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    name VARCHAR(255) NOT NULL UNIQUE,&lt;br /&gt;
    master VARCHAR(128) DEFAULT NULL,&lt;br /&gt;
    last_check INT DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(6) NOT NULL,&lt;br /&gt;
    notified_serial INT DEFAULT NULL,&lt;br /&gt;
    account VARCHAR(40) DEFAULT NULL&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE records (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    domain_id INT NOT NULL REFERENCES domains(id) ON DELETE CASCADE,&lt;br /&gt;
    name VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    type VARCHAR(10) DEFAULT NULL,&lt;br /&gt;
    content VARCHAR(64000) DEFAULT NULL,&lt;br /&gt;
    ttl INT DEFAULT NULL,&lt;br /&gt;
    prio INT DEFAULT NULL,&lt;br /&gt;
    change_date INT DEFAULT NULL,&lt;br /&gt;
    disabled BOOLEAN DEFAULT FALSE,&lt;br /&gt;
    ordername VARCHAR(255) DEFAULT NULL,&lt;br /&gt;
    auth BOOLEAN DEFAULT TRUE&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
3. Populating the Database: &lt;br /&gt;
You would then populate these tables with your DNS records. For example, to add an A record for www.example.com pointing to 192.168.1.100: &lt;br /&gt;
INSERT INTO domains (name, type) VALUES ('example.com', 'NATIVE');&lt;br /&gt;
INSERT INTO records (domain_id, name, type, content, ttl)&lt;br /&gt;
VALUES ((SELECT id FROM domains WHERE name = 'example.com'), 'www.example.com', 'A', '192.168.1.100', 3600);&lt;br /&gt;
&lt;br /&gt;
Note: This is a basic example. For a production environment, you would need to consider aspects like database security, replication, and more comprehensive schema design. You might also need to compile CoreDNS with the pdsql plugin if it's not included in your default CoreDNS build. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Refine_postgrest&amp;diff=5653</id>
		<title>Refine postgrest</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Refine_postgrest&amp;diff=5653"/>
		<updated>2025-10-25T17:46:22Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;https://github.com/ffimnsr/refine-postgrest-ts&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://github.com/ffimnsr/refine-postgrest-ts&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Ssh_audit_bashrc&amp;diff=5652</id>
		<title>Ssh audit bashrc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Ssh_audit_bashrc&amp;diff=5652"/>
		<updated>2025-10-17T18:26:14Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Yes, you absolutely can log all commands for a specific user inside an LXD container without giving it privileged access.  The best way to do this is by modifying the user's s...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Yes, you absolutely can log all commands for a specific user inside an LXD container without giving it privileged access.&lt;br /&gt;
&lt;br /&gt;
The best way to do this is by modifying the user's shell configuration files within the container. This approach doesn't require any special container privileges because it operates at the user and shell level, not the kernel level like `auditd`.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### Using the User's `.bashrc` File&lt;br /&gt;
&lt;br /&gt;
This method involves adding a logging command to the specific user's `~/.bashrc` file. Every time that user opens a new terminal or runs a command, it will be logged.&lt;br /&gt;
&lt;br /&gt;
Here are the steps to follow **inside the LXD container**:&lt;br /&gt;
&lt;br /&gt;
1.  **Access the Container Shell**&lt;br /&gt;
    First, get a shell inside the container you want to monitor.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    lxc exec your-container-name -- bash&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
2.  **Switch to the Target User**&lt;br /&gt;
    If you are not already logged in as the user you want to monitor, switch to that user. Let's say the user is named `testuser`.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    su - testuser&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
3.  **Edit the `.bashrc` File**&lt;br /&gt;
    Open the user's `.bashrc` file with a text editor like `nano`.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    nano ~/.bashrc&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
4.  **Add the Logging Command**&lt;br /&gt;
    Scroll to the very end of the file and add the following line. This command uses `PROMPT_COMMAND` to execute the `logger` utility before each new command prompt is displayed.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    export PROMPT_COMMAND='RETRN_VAL=$?;logger -p local6.info &amp;quot;USER: $(whoami) PWD: $(pwd) CMD: $(history 1 | sed &amp;quot;s/^[ ]*[0-9]\+  //&amp;quot; )&amp;quot;'&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
    This line will log the **username**, their **current directory**, and the **command they just ran**.&lt;br /&gt;
&lt;br /&gt;
5.  **Configure Log Storage (as root)**&lt;br /&gt;
    You'll need to tell the system where to store these logs. Exit from the user's session (`exit`) to return to your root shell within the container.&lt;br /&gt;
&lt;br /&gt;
    Create a new configuration file for `rsyslog`:&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    nano /etc/rsyslog.d/50-user-commands.conf&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
    Add the following line to this new file. This tells `rsyslog` to send any logs from the `local6` facility to a specific file.&lt;br /&gt;
&lt;br /&gt;
    ```&lt;br /&gt;
    local6.* /var/log/user_commands.log&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
6.  **Restart `rsyslog`**&lt;br /&gt;
    Apply the changes by restarting the `rsyslog` service.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    systemctl restart rsyslog&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
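The `sed` expression in step 4 only strips the numeric index that `history 1` prepends; the same transform as a small Python sketch (sample input is made up):&lt;br /&gt;
&lt;br /&gt;
```python
import re

def strip_history_number(line: str) -> str:
    """Drop the leading index that `history 1` prepends, e.g. '  101  ls -l' -> 'ls -l'."""
    return re.sub(r'^[ ]*[0-9]+  ', '', line)

print(strip_history_number("  101  ls -l"))  # ls -l
```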
-----&lt;br /&gt;
&lt;br /&gt;
### How It Works and How to View Logs&lt;br /&gt;
&lt;br /&gt;
From now on, whenever `testuser` executes a command, it will be automatically logged to `/var/log/user_commands.log` inside the container.&lt;br /&gt;
&lt;br /&gt;
To see the logs in real-time, you can use the `tail` command from within the container's root shell:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
tail -f /var/log/user_commands.log&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
The output will look something like this:&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Oct 17 12:18:01 container-name testuser: USER: testuser PWD: /home/testuser CMD: ls -l&lt;br /&gt;
Oct 17 12:18:05 container-name testuser: USER: testuser PWD: /home/testuser CMD: cd /tmp&lt;br /&gt;
Oct 17 12:18:09 container-name testuser: USER: testuser PWD: /tmp CMD: echo &amp;quot;hello world&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
#### Important Considerations ⚠️&lt;br /&gt;
&lt;br /&gt;
  * **Bypassable**: A knowledgeable user could potentially bypass this logging by editing their own `.bashrc` file, using a different shell, or manually unsetting the `PROMPT_COMMAND` variable.&lt;br /&gt;
  * **Bash Specific**: This method works for the `bash` shell. If the user uses a different shell like `zsh`, you would need to modify the corresponding configuration file (e.g., `~/.zshrc`).&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Iam_postgres&amp;diff=5651</id>
		<title>Iam postgres</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Iam_postgres&amp;diff=5651"/>
		<updated>2025-09-19T19:38:24Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;``` Two options   I have a postgresql application that uses postgrest for http api and keycloak to issue JWTs for access on the http api.    Please create an easy to manage ge...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
Two options&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I have a postgresql application that uses postgrest for http api and keycloak to issue JWTs for access on the http api.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Please create an easy to manage generic IAM system for the Postgresql database that can easily add new schemas.tables access to JWT issued users where the users have CRUD access to the objects they create. If possible without getting too complex add a group system where other JWTs users could easily have CRUD access to resource by adding them to a group. I'd like this to be as simple as possible and performant so it works with large tables and can easily add new tables. Use example tables of todos and chats to test this out. I'm sure you have existing methods that people have done this before so I'm trusting you to come up with the best solution.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here are the related roles that PostgREST uses:&lt;br /&gt;
&lt;br /&gt;
-- Owner for schemas/objects (no login)&lt;br /&gt;
&lt;br /&gt;
create role api_owner noinherit;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- Runtime roles (what requests run as)&lt;br /&gt;
&lt;br /&gt;
create role web_anon nologin;&lt;br /&gt;
&lt;br /&gt;
create role web_user nologin;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- PostgREST connection user&lt;br /&gt;
&lt;br /&gt;
create role authenticator login noinherit password 'mypass';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-- Allow PostgREST to SET ROLE to runtime roles&lt;br /&gt;
&lt;br /&gt;
grant web_anon to authenticator;&lt;br /&gt;
&lt;br /&gt;
grant web_user to authenticator;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Choice A&lt;br /&gt;
Of course. Here is a comprehensive, simple, and performant IAM system for PostgreSQL that integrates seamlessly with PostgREST and Keycloak JWTs.&lt;br /&gt;
&lt;br /&gt;
This solution is built on standard PostgreSQL features like Row-Level Security (RLS) and is designed to be generic. You can easily apply this pattern to new schemas and tables.&lt;br /&gt;
&lt;br /&gt;
The Core Concept&lt;br /&gt;
User Identity: We'll use the JWT sub (subject) claim, a unique UUID from Keycloak, to identify users.&lt;br /&gt;
&lt;br /&gt;
Auto-Provisioning: A helper function will automatically create a user record in our database on their first API call, making user management zero-touch.&lt;br /&gt;
&lt;br /&gt;
Ownership: Every protected resource (like a todo or a chat message) will have an owner_id column.&lt;br /&gt;
&lt;br /&gt;
Group Sharing: Resources can be shared with groups via a simple _shares table for each resource type. This allows multiple groups to access a single resource with fine-grained permissions (read, write, delete).&lt;br /&gt;
&lt;br /&gt;
Enforcement: PostgreSQL's Row-Level Security (RLS) policies enforce all access rules directly in the database, ensuring they are secure, performant, and cannot be bypassed.&lt;br /&gt;
&lt;br /&gt;
1. IAM Schema and Helper Functions&lt;br /&gt;
First, we create a dedicated schema iam to hold our core tables and functions. This keeps the security logic separate from your application data.&lt;br /&gt;
&lt;br /&gt;
The most important piece is the iam.current_user_id() function, which securely reads the user's ID from the JWT and creates a local user record if one doesn't exist.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
-- Create a schema to hold our IAM logic&lt;br /&gt;
CREATE SCHEMA iam;&lt;br /&gt;
ALTER SCHEMA iam OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- TABLE: iam.users&lt;br /&gt;
-- Stores a local reference to our application users based on their JWT sub.&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE TABLE iam.users (&lt;br /&gt;
    id         UUID PRIMARY KEY,&lt;br /&gt;
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE iam.users OWNER TO api_owner;&lt;br /&gt;
COMMENT ON TABLE iam.users IS 'Stores application users, keyed by their Keycloak JWT sub.';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- HELPER FUNCTION: iam.current_user_sub()&lt;br /&gt;
-- Extracts the 'sub' claim from the current JWT.&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE OR REPLACE FUNCTION iam.current_user_sub()&lt;br /&gt;
RETURNS UUID AS $$&lt;br /&gt;
  SELECT nullif(current_setting('request.jwt.claims', true)::jsonb -&amp;gt;&amp;gt; 'sub', '')::uuid;&lt;br /&gt;
$$ LANGUAGE sql STABLE;&lt;br /&gt;
ALTER FUNCTION iam.current_user_sub() OWNER TO api_owner;&lt;br /&gt;
COMMENT ON FUNCTION iam.current_user_sub() IS 'Get the UUID of the currently authenticated user from the JWT sub claim.';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- HELPER FUNCTION: iam.current_user_id()&lt;br /&gt;
-- Gets the current user's UUID and auto-provisions a local record if needed.&lt;br /&gt;
-- This is the main function you will use in policies and default values.&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE OR REPLACE FUNCTION iam.current_user_id()&lt;br /&gt;
RETURNS UUID AS $$&lt;br /&gt;
DECLARE&lt;br /&gt;
  user_id UUID := iam.current_user_sub();&lt;br /&gt;
BEGIN&lt;br /&gt;
  IF user_id IS NOT NULL THEN&lt;br /&gt;
    -- On first-time access, create a local user record automatically.&lt;br /&gt;
    INSERT INTO iam.users (id) VALUES (user_id) ON CONFLICT (id) DO NOTHING;&lt;br /&gt;
  END IF;&lt;br /&gt;
  RETURN user_id;&lt;br /&gt;
END;&lt;br /&gt;
$$ LANGUAGE plpgsql SECURITY DEFINER SET search_path = iam, pg_temp;&lt;br /&gt;
ALTER FUNCTION iam.current_user_id() OWNER TO api_owner;&lt;br /&gt;
COMMENT ON FUNCTION iam.current_user_id() IS 'Get current user ID, creating a local record on first access.';&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- TABLES: iam.groups &amp;amp; iam.group_members&lt;br /&gt;
-- For managing group-based permissions.&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE TABLE iam.groups (&lt;br /&gt;
    id         BIGSERIAL PRIMARY KEY,&lt;br /&gt;
    name       TEXT NOT NULL,&lt;br /&gt;
    owner_id   UUID NOT NULL REFERENCES iam.users(id) DEFAULT iam.current_user_id(),&lt;br /&gt;
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE iam.groups OWNER TO api_owner;&lt;br /&gt;
CREATE INDEX ON iam.groups(owner_id);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE iam.group_members (&lt;br /&gt;
    group_id   BIGINT NOT NULL REFERENCES iam.groups(id) ON DELETE CASCADE,&lt;br /&gt;
    user_id    UUID NOT NULL REFERENCES iam.users(id) ON DELETE CASCADE,&lt;br /&gt;
    PRIMARY KEY (group_id, user_id)&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE iam.group_members OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
-- Allow regular users to manage their own groups and memberships&lt;br /&gt;
GRANT SELECT, INSERT, UPDATE, DELETE ON iam.groups TO web_user;&lt;br /&gt;
GRANT SELECT, INSERT, DELETE ON iam.group_members TO web_user;&lt;br /&gt;
&lt;br /&gt;
-- RLS for groups: Users can only see/manage groups they own or are members of.&lt;br /&gt;
ALTER TABLE iam.groups ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY groups_rls_policy ON iam.groups FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    owner_id = iam.current_user_id() OR&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1 FROM iam.group_members gm WHERE gm.group_id = id AND gm.user_id = iam.current_user_id()&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&lt;br /&gt;
-- RLS for group members: Users can only see/manage memberships for groups they own.&lt;br /&gt;
ALTER TABLE iam.group_members ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY group_members_rls_policy ON iam.group_members FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1 FROM iam.groups g WHERE g.id = group_id AND g.owner_id = iam.current_user_id()&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
2. Example Application Schema and Tables&lt;br /&gt;
Now, let's create a data schema with our todos and chats tables. Notice how they follow a simple pattern:&lt;br /&gt;
&lt;br /&gt;
An owner_id column that defaults to the current user.&lt;br /&gt;
&lt;br /&gt;
A corresponding _shares table for group permissions.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
-- A schema for our application data&lt;br /&gt;
CREATE SCHEMA data;&lt;br /&gt;
ALTER SCHEMA data OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- EXAMPLE 1: TODOS&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE TABLE data.todos (&lt;br /&gt;
    id         BIGSERIAL PRIMARY KEY,&lt;br /&gt;
    owner_id   UUID NOT NULL REFERENCES iam.users(id) DEFAULT iam.current_user_id(),&lt;br /&gt;
    title      TEXT NOT NULL,&lt;br /&gt;
    done       BOOLEAN NOT NULL DEFAULT false,&lt;br /&gt;
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE data.todos OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE data.todo_shares (&lt;br /&gt;
    todo_id    BIGINT NOT NULL REFERENCES data.todos(id) ON DELETE CASCADE,&lt;br /&gt;
    group_id   BIGINT NOT NULL REFERENCES iam.groups(id) ON DELETE CASCADE,&lt;br /&gt;
    can_read   BOOLEAN NOT NULL DEFAULT true,&lt;br /&gt;
    can_write  BOOLEAN NOT NULL DEFAULT false,&lt;br /&gt;
    can_delete BOOLEAN NOT NULL DEFAULT false,&lt;br /&gt;
    PRIMARY KEY (todo_id, group_id)&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE data.todo_shares OWNER TO api_owner;&lt;br /&gt;
CREATE INDEX ON data.todo_shares(group_id);&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- EXAMPLE 2: CHATS&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
CREATE TABLE data.chats (&lt;br /&gt;
    id         BIGSERIAL PRIMARY KEY,&lt;br /&gt;
    owner_id   UUID NOT NULL REFERENCES iam.users(id) DEFAULT iam.current_user_id(),&lt;br /&gt;
    message    TEXT NOT NULL,&lt;br /&gt;
    created_at TIMESTAMPTZ NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE data.chats OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE data.chat_shares (&lt;br /&gt;
    chat_id    BIGINT NOT NULL REFERENCES data.chats(id) ON DELETE CASCADE,&lt;br /&gt;
    group_id   BIGINT NOT NULL REFERENCES iam.groups(id) ON DELETE CASCADE,&lt;br /&gt;
    can_read   BOOLEAN NOT NULL DEFAULT true,&lt;br /&gt;
    can_write  BOOLEAN NOT NULL DEFAULT false,&lt;br /&gt;
    can_delete BOOLEAN NOT NULL DEFAULT false,&lt;br /&gt;
    PRIMARY KEY (chat_id, group_id)&lt;br /&gt;
);&lt;br /&gt;
ALTER TABLE data.chat_shares OWNER TO api_owner;&lt;br /&gt;
CREATE INDEX ON data.chat_shares(group_id);&lt;br /&gt;
3. Permissions and RLS Policies&lt;br /&gt;
This is where we enforce the rules. We grant basic CRUD permissions to the web_user role and then apply RLS policies to filter which rows they can actually affect.&lt;br /&gt;
&lt;br /&gt;
The policy logic is the same for every table: A user can access a row if they are the owner, OR if the row has been shared with a group they are a member of.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
-- Grant basic permissions to the web_user role&lt;br /&gt;
GRANT USAGE ON SCHEMA iam, data TO web_user;&lt;br /&gt;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA data TO web_user;&lt;br /&gt;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA data TO web_user;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- RLS POLICY: TODOS&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
ALTER TABLE data.todos ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
&lt;br /&gt;
CREATE POLICY todos_rls_policy ON data.todos FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    -- The user is the owner&lt;br /&gt;
    owner_id = iam.current_user_id() OR&lt;br /&gt;
    -- The todo is shared with a group the user is a member of&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1&lt;br /&gt;
      FROM data.todo_shares s&lt;br /&gt;
      JOIN iam.group_members gm ON s.group_id = gm.group_id&lt;br /&gt;
      WHERE s.todo_id = id&lt;br /&gt;
        AND gm.user_id = iam.current_user_id()&lt;br /&gt;
        AND (&lt;br /&gt;
          -- Logic for different commands&lt;br /&gt;
          CASE&lt;br /&gt;
            WHEN (current_setting('request.method', true) = 'DELETE') THEN s.can_delete&lt;br /&gt;
            WHEN (current_setting('request.method', true) IN ('POST', 'PATCH')) THEN s.can_write&lt;br /&gt;
            ELSE s.can_read -- Default to read for GET, HEAD, etc.&lt;br /&gt;
          END&lt;br /&gt;
        )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
  WITH CHECK (&lt;br /&gt;
    -- When inserting or updating, check for ownership or write access&lt;br /&gt;
    owner_id = iam.current_user_id() OR&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1&lt;br /&gt;
      FROM data.todo_shares s&lt;br /&gt;
      JOIN iam.group_members gm ON s.group_id = gm.group_id&lt;br /&gt;
      WHERE s.todo_id = id AND gm.user_id = iam.current_user_id() AND s.can_write&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&lt;br /&gt;
-- RLS for shares: only the owner of the parent resource can manage its shares&lt;br /&gt;
ALTER TABLE data.todo_shares ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY todo_shares_rls_policy ON data.todo_shares FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1 FROM data.todos t WHERE t.id = todo_id AND t.owner_id = iam.current_user_id()&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
-- RLS POLICY: CHATS (Identical logic, just different table names)&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
ALTER TABLE data.chats ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
&lt;br /&gt;
CREATE POLICY chats_rls_policy ON data.chats FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    owner_id = iam.current_user_id() OR&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1&lt;br /&gt;
      FROM data.chat_shares s&lt;br /&gt;
      JOIN iam.group_members gm ON s.group_id = gm.group_id&lt;br /&gt;
      WHERE s.chat_id = id&lt;br /&gt;
        AND gm.user_id = iam.current_user_id()&lt;br /&gt;
        AND (&lt;br /&gt;
          CASE&lt;br /&gt;
            WHEN (current_setting('request.method', true) = 'DELETE') THEN s.can_delete&lt;br /&gt;
            WHEN (current_setting('request.method', true) IN ('POST', 'PATCH')) THEN s.can_write&lt;br /&gt;
            ELSE s.can_read&lt;br /&gt;
          END&lt;br /&gt;
        )&lt;br /&gt;
    )&lt;br /&gt;
  )&lt;br /&gt;
  WITH CHECK (&lt;br /&gt;
    owner_id = iam.current_user_id() OR&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1&lt;br /&gt;
      FROM data.chat_shares s&lt;br /&gt;
      JOIN iam.group_members gm ON s.group_id = gm.group_id&lt;br /&gt;
      WHERE s.chat_id = id AND gm.user_id = iam.current_user_id() AND s.can_write&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE data.chat_shares ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY chat_shares_rls_policy ON data.chat_shares FOR ALL TO web_user&lt;br /&gt;
  USING (&lt;br /&gt;
    EXISTS (&lt;br /&gt;
      SELECT 1 FROM data.chats c WHERE c.id = chat_id AND c.owner_id = iam.current_user_id()&lt;br /&gt;
    )&lt;br /&gt;
  );&lt;br /&gt;
How to Use and Extend It&lt;br /&gt;
Workflow Example&lt;br /&gt;
User A (Alice) calls POST /todos with {&amp;quot;title&amp;quot;: &amp;quot;My first todo&amp;quot;}.&lt;br /&gt;
&lt;br /&gt;
The DEFAULT on owner_id sets her as the owner.&lt;br /&gt;
&lt;br /&gt;
The RLS WITH CHECK clause passes because she is the owner.&lt;br /&gt;
&lt;br /&gt;
Alice calls POST /groups (with header Content-Profile: iam when multiple schemas are exposed) with {&amp;quot;name&amp;quot;: &amp;quot;Project Team&amp;quot;}. She becomes the owner of this group.&lt;br /&gt;
&lt;br /&gt;
Alice adds User B (Bob) to the group by calling POST /group_members (Content-Profile: iam) with {&amp;quot;group_id&amp;quot;: 1, &amp;quot;user_id&amp;quot;: &amp;quot;bob-s-uuid&amp;quot;}.&lt;br /&gt;
&lt;br /&gt;
Alice shares her todo with the group by calling POST /todo_shares (Content-Profile: data) with {&amp;quot;todo_id&amp;quot;: 1, &amp;quot;group_id&amp;quot;: 1, &amp;quot;can_write&amp;quot;: true}.&lt;br /&gt;
&lt;br /&gt;
Bob can now call GET /todos?id=eq.1 and see Alice's todo. He can also PATCH it because can_write is true. He cannot delete it.&lt;br /&gt;
&lt;br /&gt;
Adding a New Table (data.notes)&lt;br /&gt;
It's a simple, repeatable 3-step process:&lt;br /&gt;
&lt;br /&gt;
Create the Tables: Create data.notes (with owner_id) and data.note_shares by copying the todos example.&lt;br /&gt;
&lt;br /&gt;
Grant Permissions: GRANT SELECT, INSERT, UPDATE, DELETE ON data.notes, data.note_shares TO web_user; plus USAGE, SELECT on the new table's sequence.&lt;br /&gt;
&lt;br /&gt;
Apply RLS Policies: Copy the RLS policies from data.todos and data.todo_shares, then find-and-replace the table and column names.&lt;br /&gt;
&lt;br /&gt;
This design is highly performant: iam.current_user_id() is a cheap primary-key lookup (the auto-provisioning INSERT becomes a no-op after the first access), and the RLS subqueries are simple EXISTS checks that benefit greatly from standard PostgreSQL indexes.&lt;br /&gt;
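&lt;br /&gt;
A quick way to sanity-check these policies locally (a sketch; it assumes a superuser psql session, and the sub value is a placeholder for a real Keycloak user ID) is to set the same GUC that PostgREST sets on each request:&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
-- Simulate an authenticated PostgREST request inside one transaction&lt;br /&gt;
BEGIN;&lt;br /&gt;
SET LOCAL ROLE web_user;&lt;br /&gt;
SELECT set_config('request.jwt.claims', '{&amp;quot;sub&amp;quot;:&amp;quot;11111111-1111-1111-1111-111111111111&amp;quot;}', true);&lt;br /&gt;
INSERT INTO data.todos (title) VALUES ('rls smoke test');&lt;br /&gt;
SELECT * FROM data.todos;  -- returns only rows this JWT may see&lt;br /&gt;
ROLLBACK;&lt;br /&gt;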
&lt;br /&gt;
&lt;br /&gt;
Choice B&lt;br /&gt;
Of course. Here is a generic, performant, and easy-to-manage IAM system for PostgreSQL and PostgREST that uses JWTs for authentication.&lt;br /&gt;
&lt;br /&gt;
This solution is centered around a dedicated iam schema and a reusable SQL function that enforces Row-Level Security (RLS). This approach ensures that access control is handled directly and securely within the database.&lt;br /&gt;
&lt;br /&gt;
Core Concepts&lt;br /&gt;
Ownership: Every protected resource (a row in a table) has an owner_id. This is automatically set to the user's ID (sub claim from the JWT) when the resource is created.&lt;br /&gt;
&lt;br /&gt;
Group Sharing: Resources can optionally be assigned a group_id. If a group_id is set, any user who is a member of that group gains full CRUD access to that specific resource.&lt;br /&gt;
&lt;br /&gt;
Centralized Logic: A single, generic SQL function, iam.check_resource_access, contains all the logic to determine if a user can access a resource. This function checks for both direct ownership and group membership.&lt;br /&gt;
&lt;br /&gt;
Row-Level Security (RLS): PostgreSQL's RLS feature is used to apply this access logic to every query, ensuring that users can only see or modify data they are authorized to access. This is enforced for the web_user role that PostgREST uses for authenticated requests.&lt;br /&gt;
&lt;br /&gt;
SQL Implementation&lt;br /&gt;
Here is the complete SQL script to set up the IAM system.&lt;br /&gt;
&lt;br /&gt;
Part 1: Initial Role Setup&lt;br /&gt;
These are the standard PostgREST roles you provided.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE ROLE api_owner NOINHERIT;&lt;br /&gt;
CREATE ROLE web_anon NOLOGIN;&lt;br /&gt;
CREATE ROLE web_user NOLOGIN;&lt;br /&gt;
CREATE ROLE authenticator LOGIN NOINHERIT PASSWORD 'mypass';&lt;br /&gt;
GRANT web_anon TO authenticator;&lt;br /&gt;
GRANT web_user TO authenticator;&lt;br /&gt;
Part 2: Create the IAM Schema and Helper Function&lt;br /&gt;
This schema will hold our group tables and the core access logic. The helper function provides an easy way to get the current authenticated user's ID from the JWT.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE SCHEMA iam;&lt;br /&gt;
&lt;br /&gt;
CREATE FUNCTION iam.get_current_user_id()&lt;br /&gt;
RETURNS uuid AS $$&lt;br /&gt;
BEGIN&lt;br /&gt;
  RETURN (current_setting('request.jwt.claims', true)::jsonb -&amp;gt;&amp;gt; 'sub')::uuid;&lt;br /&gt;
EXCEPTION&lt;br /&gt;
  WHEN OTHERS THEN&lt;br /&gt;
    RETURN '00000000-0000-0000-0000-000000000000'::uuid;&lt;br /&gt;
END;&lt;br /&gt;
$$ LANGUAGE plpgsql STABLE;&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE iam.groups (&lt;br /&gt;
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),&lt;br /&gt;
  owner_id uuid NOT NULL DEFAULT iam.get_current_user_id(),&lt;br /&gt;
  name text NOT NULL,&lt;br /&gt;
  created_at timestamptz NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
CREATE INDEX ON iam.groups (owner_id);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE iam.group_members (&lt;br /&gt;
  group_id uuid NOT NULL REFERENCES iam.groups(id) ON DELETE CASCADE,&lt;br /&gt;
  user_id uuid NOT NULL,&lt;br /&gt;
  created_at timestamptz NOT NULL DEFAULT now(),&lt;br /&gt;
  PRIMARY KEY (group_id, user_id)&lt;br /&gt;
);&lt;br /&gt;
CREATE INDEX ON iam.group_members (user_id);&lt;br /&gt;
Part 3: Create Example Application Tables&lt;br /&gt;
Here we set up an app schema with todos and chats tables. Note the addition of the owner_id and group_id columns, which are key to the IAM system.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE SCHEMA app;&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE app.todos (&lt;br /&gt;
  id serial PRIMARY KEY,&lt;br /&gt;
  task text NOT NULL,&lt;br /&gt;
  done boolean NOT NULL DEFAULT false,&lt;br /&gt;
  owner_id uuid NOT NULL DEFAULT iam.get_current_user_id(),&lt;br /&gt;
  group_id uuid REFERENCES iam.groups(id) ON DELETE SET NULL,&lt;br /&gt;
  created_at timestamptz NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
CREATE INDEX ON app.todos (owner_id);&lt;br /&gt;
CREATE INDEX ON app.todos (group_id);&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE app.chats (&lt;br /&gt;
  id serial PRIMARY KEY,&lt;br /&gt;
  message text NOT NULL,&lt;br /&gt;
  owner_id uuid NOT NULL DEFAULT iam.get_current_user_id(),&lt;br /&gt;
  group_id uuid REFERENCES iam.groups(id) ON DELETE SET NULL,&lt;br /&gt;
  created_at timestamptz NOT NULL DEFAULT now()&lt;br /&gt;
);&lt;br /&gt;
CREATE INDEX ON app.chats (owner_id);&lt;br /&gt;
CREATE INDEX ON app.chats (group_id);&lt;br /&gt;
Part 4: Create the Generic RLS Function&lt;br /&gt;
This is the core of the IAM system. It's a single function that checks if the current user is either the direct owner or a member of the group the resource is assigned to.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE FUNCTION iam.check_resource_access(_owner_id uuid, _group_id uuid)&lt;br /&gt;
RETURNS boolean AS $$&lt;br /&gt;
DECLARE&lt;br /&gt;
  current_user_id uuid := iam.get_current_user_id();&lt;br /&gt;
BEGIN&lt;br /&gt;
  IF current_user_id = _owner_id THEN&lt;br /&gt;
    RETURN true;&lt;br /&gt;
  END IF;&lt;br /&gt;
&lt;br /&gt;
  IF _group_id IS NOT NULL THEN&lt;br /&gt;
    RETURN EXISTS (&lt;br /&gt;
      SELECT 1&lt;br /&gt;
      FROM iam.group_members gm&lt;br /&gt;
      WHERE gm.group_id = _group_id AND gm.user_id = current_user_id&lt;br /&gt;
    );&lt;br /&gt;
  END IF;&lt;br /&gt;
&lt;br /&gt;
  RETURN false;&lt;br /&gt;
END;&lt;br /&gt;
$$ LANGUAGE plpgsql STABLE SECURITY DEFINER SET search_path = iam, pg_temp;&lt;br /&gt;
Part 5: Apply Security Policies and Grant Permissions&lt;br /&gt;
Now we enable RLS and apply policies that use our generic function. We also apply policies to the iam tables themselves so users can manage their own groups. Finally, we grant the necessary permissions to the web_user role.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
ALTER SCHEMA iam OWNER TO api_owner;&lt;br /&gt;
ALTER TABLE iam.groups OWNER TO api_owner;&lt;br /&gt;
ALTER TABLE iam.group_members OWNER TO api_owner;&lt;br /&gt;
ALTER FUNCTION iam.get_current_user_id() OWNER TO api_owner;&lt;br /&gt;
ALTER FUNCTION iam.check_resource_access(uuid, uuid) OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
ALTER SCHEMA app OWNER TO api_owner;&lt;br /&gt;
ALTER TABLE app.todos OWNER TO api_owner;&lt;br /&gt;
ALTER TABLE app.chats OWNER TO api_owner;&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE iam.groups ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY &amp;quot;Allow owners to manage their groups&amp;quot; ON iam.groups&lt;br /&gt;
  FOR ALL TO web_user&lt;br /&gt;
  USING (owner_id = iam.get_current_user_id())&lt;br /&gt;
  WITH CHECK (owner_id = iam.get_current_user_id());&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE iam.group_members ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY &amp;quot;Allow group owners to manage members&amp;quot; ON iam.group_members&lt;br /&gt;
  FOR ALL TO web_user&lt;br /&gt;
  USING (EXISTS (SELECT 1 FROM iam.groups WHERE id = group_id AND owner_id = iam.get_current_user_id()))&lt;br /&gt;
  WITH CHECK (EXISTS (SELECT 1 FROM iam.groups WHERE id = group_id AND owner_id = iam.get_current_user_id()));&lt;br /&gt;
&lt;br /&gt;
-- Caveat: the unqualified columns below resolve to the inner table (alias the&lt;br /&gt;
-- outer row), and an RLS subquery on the policy's own table can raise&lt;br /&gt;
-- &amp;quot;infinite recursion detected&amp;quot;; a SECURITY DEFINER membership helper avoids both.&lt;br /&gt;
CREATE POLICY &amp;quot;Allow members to view other members&amp;quot; ON iam.group_members&lt;br /&gt;
  FOR SELECT TO web_user&lt;br /&gt;
  USING (EXISTS (SELECT 1 FROM iam.group_members WHERE group_id = iam.group_members.group_id AND user_id = iam.get_current_user_id()));&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE app.todos ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY &amp;quot;Allow CRUD for owners and group members&amp;quot; ON app.todos&lt;br /&gt;
  FOR ALL TO web_user&lt;br /&gt;
  USING (iam.check_resource_access(owner_id, group_id))&lt;br /&gt;
  WITH CHECK (iam.check_resource_access(owner_id, group_id));&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE app.chats ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
CREATE POLICY &amp;quot;Allow CRUD for owners and group members&amp;quot; ON app.chats&lt;br /&gt;
  FOR ALL TO web_user&lt;br /&gt;
  USING (iam.check_resource_access(owner_id, group_id))&lt;br /&gt;
  WITH CHECK (iam.check_resource_access(owner_id, group_id));&lt;br /&gt;
&lt;br /&gt;
GRANT USAGE ON SCHEMA app, iam TO web_user;&lt;br /&gt;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA app, iam TO web_user;&lt;br /&gt;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA app TO web_user;&lt;br /&gt;
How to Use the System&lt;br /&gt;
Let's assume you have two users with the following JWT sub IDs:&lt;br /&gt;
&lt;br /&gt;
User A: 11111111-1111-1111-1111-111111111111&lt;br /&gt;
&lt;br /&gt;
User B: 22222222-2222-2222-2222-222222222222&lt;br /&gt;
&lt;br /&gt;
Example Workflow&lt;br /&gt;
User A creates a private todo.&lt;br /&gt;
&lt;br /&gt;
POST /todos with body {&amp;quot;task&amp;quot;: &amp;quot;My private task&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
The database automatically sets owner_id to User A's ID. Only User A can read or modify this todo.&lt;br /&gt;
&lt;br /&gt;
User A creates a group.&lt;br /&gt;
&lt;br /&gt;
POST /groups with body {&amp;quot;name&amp;quot;: &amp;quot;Project Team&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
This creates a group owned by User A. Let's say its new id is 99999999-9999-9999-9999-999999999999.&lt;br /&gt;
&lt;br /&gt;
User A adds User B to the group.&lt;br /&gt;
&lt;br /&gt;
POST /group_members with body {&amp;quot;group_id&amp;quot;: &amp;quot;999...&amp;quot;, &amp;quot;user_id&amp;quot;: &amp;quot;222...&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
Only User A can do this because they own the group.&lt;br /&gt;
&lt;br /&gt;
User A creates a shared chat message.&lt;br /&gt;
&lt;br /&gt;
POST /chats with body {&amp;quot;message&amp;quot;: &amp;quot;Team, let's sync up&amp;quot;, &amp;quot;group_id&amp;quot;: &amp;quot;999...&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
Now, both User A (as the owner) and User B (as a group member) can read, update, or delete this specific chat message.&lt;br /&gt;
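&lt;br /&gt;
The workflow above can be sanity-checked locally (a sketch; it assumes a superuser psql session, and the sub values are placeholders for real Keycloak user IDs) by setting the same GUC PostgREST sets:&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
BEGIN;&lt;br /&gt;
SET LOCAL ROLE web_user;&lt;br /&gt;
-- act as User A&lt;br /&gt;
SELECT set_config('request.jwt.claims', '{&amp;quot;sub&amp;quot;:&amp;quot;11111111-1111-1111-1111-111111111111&amp;quot;}', true);&lt;br /&gt;
INSERT INTO app.todos (task) VALUES ('My private task');&lt;br /&gt;
-- act as User B: the private todo should no longer be visible&lt;br /&gt;
SELECT set_config('request.jwt.claims', '{&amp;quot;sub&amp;quot;:&amp;quot;22222222-2222-2222-2222-222222222222&amp;quot;}', true);&lt;br /&gt;
SELECT count(*) FROM app.todos;&lt;br /&gt;
ROLLBACK;&lt;br /&gt;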
&lt;br /&gt;
How to Add New Protected Tables&lt;br /&gt;
Adding a new table to this IAM system is simple and follows a consistent pattern.&lt;br /&gt;
&lt;br /&gt;
Add Columns: Add owner_id and group_id columns to your new table, using the same definitions as the todos and chats examples.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE TABLE app.new_table (&lt;br /&gt;
  id serial PRIMARY KEY,&lt;br /&gt;
  -- your other columns&lt;br /&gt;
  owner_id uuid NOT NULL DEFAULT iam.get_current_user_id(),&lt;br /&gt;
  group_id uuid REFERENCES iam.groups(id) ON DELETE SET NULL&lt;br /&gt;
);&lt;br /&gt;
Add Indexes: Create indexes on the new owner_id and group_id columns for performance.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
CREATE INDEX ON app.new_table (owner_id);&lt;br /&gt;
CREATE INDEX ON app.new_table (group_id);&lt;br /&gt;
Apply Policy: Enable RLS and apply the generic policy.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
ALTER TABLE app.new_table ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
&lt;br /&gt;
CREATE POLICY &amp;quot;Allow CRUD for owners and group members&amp;quot; ON app.new_table&lt;br /&gt;
  FOR ALL TO web_user&lt;br /&gt;
  USING (iam.check_resource_access(owner_id, group_id))&lt;br /&gt;
  WITH CHECK (iam.check_resource_access(owner_id, group_id));&lt;br /&gt;
Grant Permissions: Grant the web_user role permissions on the new table.&lt;br /&gt;
&lt;br /&gt;
SQL&lt;br /&gt;
&lt;br /&gt;
GRANT SELECT, INSERT, UPDATE, DELETE ON app.new_table TO web_user;&lt;br /&gt;
GRANT USAGE, SELECT ON SEQUENCE app.new_table_id_seq TO web_user;&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=RLS_example_4&amp;diff=5650</id>
		<title>RLS example 4</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=RLS_example_4&amp;diff=5650"/>
		<updated>2025-09-18T15:35:01Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;**Go-Migrate SQL files** (from scratch) and a **Python test script** that logs into Keycloak, grabs JWTs, and exercises CRUD + RLS paths to prove the policy works.  ---  # mig...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;**golang-migrate SQL files** (from scratch) and a **Python test script** that logs into Keycloak, grabs JWTs, and exercises CRUD + RLS paths to prove the policies work.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# migrations/001_init.up.sql&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- golang-migrate runs this file as plain SQL; no tool directives are needed&lt;br /&gt;
-- === Base setup: extensions, schemas, roles ===&lt;br /&gt;
create extension if not exists pgcrypto;  -- for gen_random_uuid()&lt;br /&gt;
&lt;br /&gt;
create schema if not exists app;&lt;br /&gt;
create schema if not exists todo;&lt;br /&gt;
&lt;br /&gt;
-- Roles model (owner of DB objects vs runtime role for PostgREST)&lt;br /&gt;
do $$&lt;br /&gt;
begin&lt;br /&gt;
  if not exists (select 1 from pg_roles where rolname = 'api_owner') then&lt;br /&gt;
    create role api_owner login;&lt;br /&gt;
  end if;&lt;br /&gt;
  if not exists (select 1 from pg_roles where rolname = 'web_anon') then&lt;br /&gt;
    create role web_anon nologin;&lt;br /&gt;
  end if;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
grant usage on schema app, todo to web_anon;&lt;br /&gt;
&lt;br /&gt;
-- === Identity mapping: internal UUID ↔ external JWT sub ===&lt;br /&gt;
create table if not exists app.users (&lt;br /&gt;
  id                uuid primary key default gen_random_uuid(),&lt;br /&gt;
  external_subject  text not null unique, -- JWT sub (opaque; not guaranteed to be a UUID)&lt;br /&gt;
  display_name      text,&lt;br /&gt;
  email             text,&lt;br /&gt;
  is_active         boolean not null default true,&lt;br /&gt;
  created_at        timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create or replace function app.current_user_id()&lt;br /&gt;
returns uuid&lt;br /&gt;
stable&lt;br /&gt;
security definer&lt;br /&gt;
set search_path = public, pg_temp, app&lt;br /&gt;
language plpgsql&lt;br /&gt;
as $$&lt;br /&gt;
declare&lt;br /&gt;
  claims   jsonb;&lt;br /&gt;
  sub_txt  text;&lt;br /&gt;
  uid      uuid;&lt;br /&gt;
begin&lt;br /&gt;
  claims := nullif(current_setting('request.jwt.claims', true), '')::jsonb;&lt;br /&gt;
  if claims is null then&lt;br /&gt;
    return null;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  sub_txt := claims-&amp;gt;&amp;gt;'sub';&lt;br /&gt;
  if sub_txt is null then&lt;br /&gt;
    return null;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  -- Lookup existing active user&lt;br /&gt;
  select u.id into uid&lt;br /&gt;
  from app.users u&lt;br /&gt;
  where u.external_subject = sub_txt&lt;br /&gt;
    and u.is_active;&lt;br /&gt;
&lt;br /&gt;
  if uid is not null then&lt;br /&gt;
    return uid;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  -- Auto-provision on first use from claims&lt;br /&gt;
  insert into app.users (external_subject, display_name, email)&lt;br /&gt;
  values (&lt;br /&gt;
    sub_txt,&lt;br /&gt;
    claims-&amp;gt;&amp;gt;'name',&lt;br /&gt;
    claims-&amp;gt;&amp;gt;'email'&lt;br /&gt;
  )&lt;br /&gt;
  returning id into uid;&lt;br /&gt;
&lt;br /&gt;
  return uid;&lt;br /&gt;
end&lt;br /&gt;
$$;&lt;br /&gt;
&lt;br /&gt;
revoke all on function app.current_user_id() from public;&lt;br /&gt;
grant execute on function app.current_user_id() to web_anon;&lt;br /&gt;
&lt;br /&gt;
-- === Groups &amp;amp; membership ===&lt;br /&gt;
create table if not exists app.groups (&lt;br /&gt;
  id          uuid primary key default gen_random_uuid(),&lt;br /&gt;
  name        text not null unique,&lt;br /&gt;
  created_by  uuid not null references app.users(id) on delete restrict,&lt;br /&gt;
  created_at  timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create table if not exists app.group_members (&lt;br /&gt;
  group_id  uuid not null references app.groups(id) on delete cascade,&lt;br /&gt;
  user_id   uuid not null references app.users(id) on delete cascade,&lt;br /&gt;
  role      text not null check (role in ('owner','manager','member')),&lt;br /&gt;
  added_at  timestamptz not null default now(),&lt;br /&gt;
  primary key (group_id, user_id)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_group_members_user on app.group_members(user_id);&lt;br /&gt;
create index if not exists idx_group_members_group on app.group_members(group_id);&lt;br /&gt;
&lt;br /&gt;
-- Automatically add the creator as owner of any newly created group&lt;br /&gt;
create or replace function app.add_creator_as_owner()&lt;br /&gt;
returns trigger&lt;br /&gt;
language plpgsql&lt;br /&gt;
security definer  -- bypass RLS: the very first owner row cannot satisfy the membership insert policy&lt;br /&gt;
set search_path = app, pg_temp&lt;br /&gt;
as $$&lt;br /&gt;
begin&lt;br /&gt;
  insert into app.group_members(group_id, user_id, role)&lt;br /&gt;
  values (new.id, new.created_by, 'owner')&lt;br /&gt;
  on conflict do nothing;&lt;br /&gt;
  return new;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
drop trigger if exists trg_groups_owner on app.groups;&lt;br /&gt;
create trigger trg_groups_owner&lt;br /&gt;
after insert on app.groups&lt;br /&gt;
for each row execute function app.add_creator_as_owner();&lt;br /&gt;
&lt;br /&gt;
-- === Todos &amp;amp; shares ===&lt;br /&gt;
create table if not exists todo.todos (&lt;br /&gt;
  id         uuid primary key default gen_random_uuid(),&lt;br /&gt;
  owner_id   uuid not null references app.users(id) on delete restrict,&lt;br /&gt;
  title      text not null,&lt;br /&gt;
  body       text,&lt;br /&gt;
  is_done    boolean not null default false,&lt;br /&gt;
  created_at timestamptz not null default now(),&lt;br /&gt;
  updated_at timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_todos_owner on todo.todos(owner_id);&lt;br /&gt;
&lt;br /&gt;
create table if not exists todo.todo_group_shares (&lt;br /&gt;
  todo_id   uuid not null references todo.todos(id) on delete cascade,&lt;br /&gt;
  group_id  uuid not null references app.groups(id) on delete cascade,&lt;br /&gt;
  can_read  boolean not null default true,&lt;br /&gt;
  can_write boolean not null default false,&lt;br /&gt;
  can_del   boolean not null default false,&lt;br /&gt;
  primary key (todo_id, group_id)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_todo_shares_todo on todo.todo_group_shares(todo_id);&lt;br /&gt;
create index if not exists idx_todo_shares_group on todo.todo_group_shares(group_id);&lt;br /&gt;
&lt;br /&gt;
-- Maintain updated_at on todos&lt;br /&gt;
create or replace function todo.bump_updated_at()&lt;br /&gt;
returns trigger language plpgsql as $$&lt;br /&gt;
begin&lt;br /&gt;
  new.updated_at := now();&lt;br /&gt;
  return new;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
drop trigger if exists trg_todos_updated_at on todo.todos;&lt;br /&gt;
create trigger trg_todos_updated_at&lt;br /&gt;
before update on todo.todos&lt;br /&gt;
for each row execute function todo.bump_updated_at();&lt;br /&gt;
&lt;br /&gt;
-- === RLS enablement ===&lt;br /&gt;
alter table app.users                 enable row level security;&lt;br /&gt;
alter table app.groups                enable row level security;&lt;br /&gt;
alter table app.group_members         enable row level security;&lt;br /&gt;
alter table todo.todos                enable row level security;&lt;br /&gt;
alter table todo.todo_group_shares    enable row level security;&lt;br /&gt;
&lt;br /&gt;
-- app.users: each user can see/update themselves&lt;br /&gt;
create policy users_select_self on app.users&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (id = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
create policy users_update_self on app.users&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (id = app.current_user_id())&lt;br /&gt;
with check (id = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
-- app.groups&lt;br /&gt;
create policy groups_insert_any on app.groups&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (created_by = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
create policy groups_select_member on app.groups&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy groups_update_owner on app.groups&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
)&lt;br /&gt;
with check (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy groups_delete_owner on app.groups&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- app.group_members&lt;br /&gt;
-- Note: policies on group_members must not subquery group_members directly,&lt;br /&gt;
-- otherwise PostgreSQL reports &amp;quot;infinite recursion detected in policy&amp;quot;.&lt;br /&gt;
-- A security definer helper (owned by the migrating role, so RLS does not&lt;br /&gt;
-- apply inside it) breaks the cycle.&lt;br /&gt;
create or replace function app.has_group_role(gid uuid, uid uuid, roles text[])&lt;br /&gt;
returns boolean&lt;br /&gt;
stable&lt;br /&gt;
security definer&lt;br /&gt;
set search_path = app, pg_temp&lt;br /&gt;
language sql&lt;br /&gt;
as $$&lt;br /&gt;
  select exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = gid&lt;br /&gt;
      and gm.user_id  = uid&lt;br /&gt;
      and gm.role     = any(roles)&lt;br /&gt;
  );&lt;br /&gt;
$$;&lt;br /&gt;
&lt;br /&gt;
revoke all on function app.has_group_role(uuid, uuid, text[]) from public;&lt;br /&gt;
grant execute on function app.has_group_role(uuid, uuid, text[]) to web_anon;&lt;br /&gt;
&lt;br /&gt;
create policy gm_select_member on app.group_members&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (app.has_group_role(group_id, app.current_user_id(), array['owner','manager','member']));&lt;br /&gt;
&lt;br /&gt;
create policy gm_insert_mgr on app.group_members&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (app.has_group_role(group_id, app.current_user_id(), array['owner','manager']));&lt;br /&gt;
&lt;br /&gt;
create policy gm_update_mgr on app.group_members&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (app.has_group_role(group_id, app.current_user_id(), array['owner','manager']))&lt;br /&gt;
with check (app.has_group_role(group_id, app.current_user_id(), array['owner','manager']));&lt;br /&gt;
&lt;br /&gt;
create policy gm_delete_mgr_or_self on app.group_members&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  user_id = app.current_user_id()&lt;br /&gt;
  or app.has_group_role(group_id, app.current_user_id(), array['owner','manager'])&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- todo.todos&lt;br /&gt;
create policy todos_select_owner_or_shared on todo.todos&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_read&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy todos_insert_self on todo.todos&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (owner_id = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
create policy todos_update_owner_or_shared on todo.todos&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_write&lt;br /&gt;
  )&lt;br /&gt;
)&lt;br /&gt;
with check (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_write&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy todos_delete_owner_or_shared on todo.todos&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_del&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- todo.todo_group_shares&lt;br /&gt;
-- The todos policies subquery this table, so these policies must not subquery&lt;br /&gt;
-- todo.todos directly (mutual recursion in RLS policies also errors).&lt;br /&gt;
-- A security definer helper breaks that cycle too.&lt;br /&gt;
create or replace function todo.is_todo_owner(tid uuid, uid uuid)&lt;br /&gt;
returns boolean&lt;br /&gt;
stable&lt;br /&gt;
security definer&lt;br /&gt;
set search_path = todo, pg_temp&lt;br /&gt;
language sql&lt;br /&gt;
as $$&lt;br /&gt;
  select exists (&lt;br /&gt;
    select 1 from todo.todos t&lt;br /&gt;
    where t.id = tid&lt;br /&gt;
      and t.owner_id = uid&lt;br /&gt;
  );&lt;br /&gt;
$$;&lt;br /&gt;
&lt;br /&gt;
revoke all on function todo.is_todo_owner(uuid, uuid) from public;&lt;br /&gt;
grant execute on function todo.is_todo_owner(uuid, uuid) to web_anon;&lt;br /&gt;
&lt;br /&gt;
create policy shares_select_visible on todo.todo_group_shares&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  todo.is_todo_owner(todo_id, app.current_user_id())&lt;br /&gt;
  or exists (select 1 from app.group_members gm where gm.group_id = todo_group_shares.group_id and gm.user_id = app.current_user_id())&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy shares_insert_todo_owner on todo.todo_group_shares&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (todo.is_todo_owner(todo_id, app.current_user_id()));&lt;br /&gt;
&lt;br /&gt;
create policy shares_update_todo_owner on todo.todo_group_shares&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (todo.is_todo_owner(todo_id, app.current_user_id()))&lt;br /&gt;
with check (todo.is_todo_owner(todo_id, app.current_user_id()));&lt;br /&gt;
&lt;br /&gt;
create policy shares_delete_todo_owner on todo.todo_group_shares&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (todo.is_todo_owner(todo_id, app.current_user_id()));&lt;br /&gt;
&lt;br /&gt;
-- Base DML privileges (RLS gates real access)&lt;br /&gt;
grant select, insert, update, delete on app.users                  to web_anon;&lt;br /&gt;
grant select, insert, update, delete on app.groups                 to web_anon;&lt;br /&gt;
grant select, insert, update, delete on app.group_members          to web_anon;&lt;br /&gt;
grant select, insert, update, delete on todo.todos                 to web_anon;&lt;br /&gt;
grant select, insert, update, delete on todo.todo_group_shares     to web_anon;&lt;br /&gt;
```&lt;br /&gt;
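&lt;br /&gt;
A quick way to exercise these policies without PostgREST is to set the same GUCs the API would set. A minimal psql sketch, assuming the migration above has been applied by a role that owns the tables:&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
begin;&lt;br /&gt;
-- Impersonate the PostgREST runtime role and fake a verified JWT&lt;br /&gt;
set local role web_anon;&lt;br /&gt;
set local request.jwt.claims = '{&amp;quot;sub&amp;quot;: &amp;quot;test-sub-1&amp;quot;, &amp;quot;name&amp;quot;: &amp;quot;Test User&amp;quot;, &amp;quot;email&amp;quot;: &amp;quot;test@example.com&amp;quot;}';&lt;br /&gt;
&lt;br /&gt;
select app.current_user_id();   -- auto-provisions app.users and returns the internal UUID&lt;br /&gt;
select count(*) from todo.todos;  -- visible rows only; RLS filters rather than raising a permission error&lt;br /&gt;
rollback;  -- discard the test user&lt;br /&gt;
```&lt;br /&gt;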
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# migrations/001_init.down.sql&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- +goose Down&lt;br /&gt;
-- Drop in reverse dependency order&lt;br /&gt;
&lt;br /&gt;
-- Policies are dropped automatically with their tables; the triggers and functions below are dropped explicitly.&lt;br /&gt;
drop trigger if exists trg_todos_updated_at on todo.todos;&lt;br /&gt;
drop function if exists todo.bump_updated_at();&lt;br /&gt;
&lt;br /&gt;
drop trigger if exists trg_groups_owner on app.groups;&lt;br /&gt;
drop function if exists app.add_creator_as_owner();&lt;br /&gt;
&lt;br /&gt;
-- Tables&lt;br /&gt;
drop table if exists todo.todo_group_shares cascade;&lt;br /&gt;
drop table if exists todo.todos cascade;&lt;br /&gt;
&lt;br /&gt;
drop table if exists app.group_members cascade;&lt;br /&gt;
drop table if exists app.groups cascade;&lt;br /&gt;
&lt;br /&gt;
-- Functions&lt;br /&gt;
drop function if exists app.current_user_id();&lt;br /&gt;
&lt;br /&gt;
-- Tables&lt;br /&gt;
drop table if exists app.users cascade;&lt;br /&gt;
&lt;br /&gt;
-- Schemas&lt;br /&gt;
drop schema if exists todo cascade;&lt;br /&gt;
drop schema if exists app cascade;&lt;br /&gt;
&lt;br /&gt;
-- Roles are kept by default; comment this block out if roles are managed outside these migrations.&lt;br /&gt;
do $$&lt;br /&gt;
begin&lt;br /&gt;
  if exists (select 1 from pg_roles where rolname = 'web_anon') then&lt;br /&gt;
    revoke all on schema public from web_anon;&lt;br /&gt;
  end if;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
-- Not dropping roles by default to avoid breaking external config (e.g., PostgREST).&lt;br /&gt;
-- Uncomment if desired:&lt;br /&gt;
-- drop role if exists web_anon;&lt;br /&gt;
-- drop role if exists api_owner;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
# test_keycloak_postgrest_rls.py&lt;br /&gt;
&lt;br /&gt;
End-to-end validator that:&lt;br /&gt;
&lt;br /&gt;
* logs in as two users (Alice &amp;amp; Bob) via Keycloak,&lt;br /&gt;
* auto-provisions `app.users` via `/rpc/current_user_id`,&lt;br /&gt;
* creates a group (auto-adds creator as owner),&lt;br /&gt;
* adds Bob as member,&lt;br /&gt;
* creates a todo,&lt;br /&gt;
* verifies read/write/delete controls via `todo.todo_group_shares`.&lt;br /&gt;
&lt;br /&gt;
```python&lt;br /&gt;
#!/usr/bin/env python3&lt;br /&gt;
&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
E2E RLS test:&lt;br /&gt;
- Requires a PostgREST API exposing schemas &amp;quot;app&amp;quot; and &amp;quot;todo&amp;quot; and using web_anon role.&lt;br /&gt;
- Keycloak must have users whose credentials you supply.&lt;br /&gt;
- Works with public client (no secret) or confidential client (with secret).&lt;br /&gt;
&lt;br /&gt;
ENV VARS:&lt;br /&gt;
  API_BASE=&amp;quot;https://uapp-api.example.com&amp;quot;     # PostgREST base (no trailing slash)&lt;br /&gt;
  KC_BASE=&amp;quot;https://auth.example.com&amp;quot;          # Keycloak base (no trailing slash)&lt;br /&gt;
  KC_REALM=&amp;quot;myrealm&amp;quot;&lt;br /&gt;
  KC_CLIENT_ID=&amp;quot;myclient&amp;quot;&lt;br /&gt;
  KC_CLIENT_SECRET=&amp;quot;&amp;quot;                          # optional; leave empty for public client&lt;br /&gt;
  USER1=&amp;quot;alice@example.com&amp;quot;&lt;br /&gt;
  PASS1=&amp;quot;alice_password&amp;quot;&lt;br /&gt;
  USER2=&amp;quot;bob@example.com&amp;quot;&lt;br /&gt;
  PASS2=&amp;quot;bob_password&amp;quot;&lt;br /&gt;
  VERIFY_SSL=&amp;quot;true&amp;quot;                            # &amp;quot;false&amp;quot; to skip TLS verify (dev only)&lt;br /&gt;
&lt;br /&gt;
Run:&lt;br /&gt;
  python3 test_keycloak_postgrest_rls.py&lt;br /&gt;
&amp;quot;&amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
import os, sys, time, json, base64&lt;br /&gt;
import requests&lt;br /&gt;
from typing import Dict, Any&lt;br /&gt;
&lt;br /&gt;
API_BASE      = os.environ.get(&amp;quot;API_BASE&amp;quot;, &amp;quot;http://localhost:3000&amp;quot;).rstrip(&amp;quot;/&amp;quot;)&lt;br /&gt;
KC_BASE       = os.environ.get(&amp;quot;KC_BASE&amp;quot;, &amp;quot;http://localhost:8080&amp;quot;).rstrip(&amp;quot;/&amp;quot;)&lt;br /&gt;
KC_REALM      = os.environ.get(&amp;quot;KC_REALM&amp;quot;, &amp;quot;example&amp;quot;)&lt;br /&gt;
KC_CLIENT_ID  = os.environ.get(&amp;quot;KC_CLIENT_ID&amp;quot;, &amp;quot;postgrest-client&amp;quot;)&lt;br /&gt;
KC_CLIENT_SECRET = os.environ.get(&amp;quot;KC_CLIENT_SECRET&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
USER1         = os.environ.get(&amp;quot;USER1&amp;quot;, &amp;quot;alice&amp;quot;)&lt;br /&gt;
PASS1         = os.environ.get(&amp;quot;PASS1&amp;quot;, &amp;quot;alice&amp;quot;)&lt;br /&gt;
USER2         = os.environ.get(&amp;quot;USER2&amp;quot;, &amp;quot;bob&amp;quot;)&lt;br /&gt;
PASS2         = os.environ.get(&amp;quot;PASS2&amp;quot;, &amp;quot;bob&amp;quot;)&lt;br /&gt;
VERIFY_SSL    = os.environ.get(&amp;quot;VERIFY_SSL&amp;quot;, &amp;quot;true&amp;quot;).lower() != &amp;quot;false&amp;quot;&lt;br /&gt;
&lt;br /&gt;
SESSION = requests.Session()&lt;br /&gt;
&lt;br /&gt;
def kc_token(username: str, password: str) -&amp;gt; Dict[str, Any]:&lt;br /&gt;
    url = f&amp;quot;{KC_BASE}/realms/{KC_REALM}/protocol/openid-connect/token&amp;quot;&lt;br /&gt;
    data = {&lt;br /&gt;
        &amp;quot;grant_type&amp;quot;: &amp;quot;password&amp;quot;,&lt;br /&gt;
        &amp;quot;client_id&amp;quot;: KC_CLIENT_ID,&lt;br /&gt;
        &amp;quot;username&amp;quot;: username,&lt;br /&gt;
        &amp;quot;password&amp;quot;: password,&lt;br /&gt;
    }&lt;br /&gt;
    if KC_CLIENT_SECRET:&lt;br /&gt;
        data[&amp;quot;client_secret&amp;quot;] = KC_CLIENT_SECRET&lt;br /&gt;
    r = SESSION.post(url, data=data, verify=VERIFY_SSL)&lt;br /&gt;
    if r.status_code != 200:&lt;br /&gt;
        raise RuntimeError(f&amp;quot;Keycloak token error {r.status_code}: {r.text}&amp;quot;)&lt;br /&gt;
    return r.json()&lt;br /&gt;
&lt;br /&gt;
def bearer(token: str) -&amp;gt; Dict[str,str]:&lt;br /&gt;
    return {&amp;quot;Authorization&amp;quot;: f&amp;quot;Bearer {token}&amp;quot;}&lt;br /&gt;
&lt;br /&gt;
def postgrest_call(method: str, path: str, token: str, **kwargs) -&amp;gt; requests.Response:&lt;br /&gt;
    # PostgREST exposes tables as &amp;quot;/table&amp;quot;, not &amp;quot;/schema.table&amp;quot;; non-default schemas&lt;br /&gt;
    # are selected with Accept-Profile (reads) / Content-Profile (writes) headers.&lt;br /&gt;
    # Accept &amp;quot;/schema.table&amp;quot; paths here and translate them.&lt;br /&gt;
    headers = kwargs.pop(&amp;quot;headers&amp;quot;, {})&lt;br /&gt;
    name, sep, query = path.lstrip(&amp;quot;/&amp;quot;).partition(&amp;quot;?&amp;quot;)&lt;br /&gt;
    if &amp;quot;.&amp;quot; in name:&lt;br /&gt;
        schema, name = name.split(&amp;quot;.&amp;quot;, 1)&lt;br /&gt;
        profile = &amp;quot;Accept-Profile&amp;quot; if method.upper() in (&amp;quot;GET&amp;quot;, &amp;quot;HEAD&amp;quot;) else &amp;quot;Content-Profile&amp;quot;&lt;br /&gt;
        headers.setdefault(profile, schema)&lt;br /&gt;
    url = f&amp;quot;{API_BASE}/{name}{sep}{query}&amp;quot;&lt;br /&gt;
    headers.update(bearer(token))&lt;br /&gt;
    # POST needs return=representation so the created row comes back; reads get exact counts&lt;br /&gt;
    headers.setdefault(&amp;quot;Prefer&amp;quot;, &amp;quot;return=representation&amp;quot; if method.upper() == &amp;quot;POST&amp;quot; else &amp;quot;count=exact&amp;quot;)&lt;br /&gt;
    if &amp;quot;json&amp;quot; in kwargs and method.upper() in (&amp;quot;POST&amp;quot;, &amp;quot;PATCH&amp;quot;):&lt;br /&gt;
        headers.setdefault(&amp;quot;Content-Type&amp;quot;, &amp;quot;application/json&amp;quot;)&lt;br /&gt;
    return SESSION.request(method=method, url=url, headers=headers, verify=VERIFY_SSL, **kwargs)&lt;br /&gt;
&lt;br /&gt;
def jwt_sub(access_token: str) -&amp;gt; str:&lt;br /&gt;
    # Decode JWT payload (no verify) for logging only&lt;br /&gt;
    try:&lt;br /&gt;
        payload_b64 = access_token.split(&amp;quot;.&amp;quot;)[1]&lt;br /&gt;
        # pad&lt;br /&gt;
        payload_b64 += &amp;quot;=&amp;quot; * ((4 - len(payload_b64) % 4) % 4)&lt;br /&gt;
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))&lt;br /&gt;
        return payload.get(&amp;quot;sub&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
    except Exception:&lt;br /&gt;
        return &amp;quot;&amp;quot;&lt;br /&gt;
&lt;br /&gt;
def rpc_current_user_id(token: str) -&amp;gt; str:&lt;br /&gt;
    r = postgrest_call(&amp;quot;POST&amp;quot;, &amp;quot;/rpc/current_user_id&amp;quot;, token, json={})&lt;br /&gt;
    if r.status_code not in (200, 201):&lt;br /&gt;
        raise RuntimeError(f&amp;quot;/rpc/current_user_id failed: {r.status_code} {r.text}&amp;quot;)&lt;br /&gt;
    # PostgREST returns either raw UUID or {&amp;quot;current_user_id&amp;quot;: &amp;quot;...&amp;quot;} depending on cfg.&lt;br /&gt;
    try:&lt;br /&gt;
        j = r.json()&lt;br /&gt;
        if isinstance(j, dict) and &amp;quot;current_user_id&amp;quot; in j:&lt;br /&gt;
            return j[&amp;quot;current_user_id&amp;quot;]&lt;br /&gt;
        if isinstance(j, list) and j and &amp;quot;current_user_id&amp;quot; in j[0]:&lt;br /&gt;
            return j[0][&amp;quot;current_user_id&amp;quot;]&lt;br /&gt;
        if isinstance(j, str):&lt;br /&gt;
            return j&lt;br /&gt;
    except Exception:&lt;br /&gt;
        pass&lt;br /&gt;
    # text body?&lt;br /&gt;
    return r.text.strip().strip('&amp;quot;')&lt;br /&gt;
&lt;br /&gt;
def assert_ok(resp: requests.Response, msg=&amp;quot;expected 2xx&amp;quot;):&lt;br /&gt;
    if not (200 &amp;lt;= resp.status_code &amp;lt; 300):&lt;br /&gt;
        raise AssertionError(f&amp;quot;{msg}: got {resp.status_code} {resp.text}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
def main():&lt;br /&gt;
    print(f&amp;quot;API_BASE={API_BASE}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;KC_BASE={KC_BASE} realm={KC_REALM} client_id={KC_CLIENT_ID} secret={'set' if bool(KC_CLIENT_SECRET) else 'unset'}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;VERIFY_SSL={VERIFY_SSL}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 1) Login both users&lt;br /&gt;
    t1 = kc_token(USER1, PASS1)&lt;br /&gt;
    t2 = kc_token(USER2, PASS2)&lt;br /&gt;
    a1, a2 = t1[&amp;quot;access_token&amp;quot;], t2[&amp;quot;access_token&amp;quot;]&lt;br /&gt;
    print(f&amp;quot;Alice sub: {jwt_sub(a1)}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;Bob   sub: {jwt_sub(a2)}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 2) Auto-provision/resolve current user IDs&lt;br /&gt;
    u1 = rpc_current_user_id(a1)&lt;br /&gt;
    u2 = rpc_current_user_id(a2)&lt;br /&gt;
    print(f&amp;quot;Alice user_id: {u1}&amp;quot;)&lt;br /&gt;
    print(f&amp;quot;Bob   user_id: {u2}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 3) Alice creates a group (trigger makes her owner automatically)&lt;br /&gt;
    group_payload = {&amp;quot;name&amp;quot;: f&amp;quot;testgrp-{int(time.time())}&amp;quot;, &amp;quot;created_by&amp;quot;: u1}&lt;br /&gt;
    r = postgrest_call(&amp;quot;POST&amp;quot;, &amp;quot;/app.groups&amp;quot;, a1, json=group_payload)&lt;br /&gt;
    assert_ok(r, &amp;quot;create group&amp;quot;)&lt;br /&gt;
    group = r.json()[0]&lt;br /&gt;
    group_id = group[&amp;quot;id&amp;quot;]&lt;br /&gt;
    print(f&amp;quot;Created group {group_id}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 4) Alice adds Bob as member (allowed because Alice is owner via trigger)&lt;br /&gt;
    r = postgrest_call(&amp;quot;POST&amp;quot;, &amp;quot;/app.group_members&amp;quot;, a1, json={&amp;quot;group_id&amp;quot;: group_id, &amp;quot;user_id&amp;quot;: u2, &amp;quot;role&amp;quot;: &amp;quot;member&amp;quot;})&lt;br /&gt;
    assert_ok(r, &amp;quot;add bob to group&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Bob added to group as member&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 5) Alice creates a todo&lt;br /&gt;
    todo_payload = {&amp;quot;owner_id&amp;quot;: u1, &amp;quot;title&amp;quot;: &amp;quot;Secret plan&amp;quot;, &amp;quot;body&amp;quot;: &amp;quot;Do not share yet&amp;quot;}&lt;br /&gt;
    r = postgrest_call(&amp;quot;POST&amp;quot;, &amp;quot;/todo.todos&amp;quot;, a1, json=todo_payload)&lt;br /&gt;
    assert_ok(r, &amp;quot;create todo&amp;quot;)&lt;br /&gt;
    todo = r.json()[0]&lt;br /&gt;
    todo_id = todo[&amp;quot;id&amp;quot;]&lt;br /&gt;
    print(f&amp;quot;Alice created todo {todo_id}&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 6) Bob cannot see it yet&lt;br /&gt;
    r = postgrest_call(&amp;quot;GET&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2)&lt;br /&gt;
    assert_ok(r, &amp;quot;bob read check&amp;quot;)&lt;br /&gt;
    assert r.json() == [], &amp;quot;Bob should not see Alice's todo before share&amp;quot;&lt;br /&gt;
    print(&amp;quot;Verified: Bob cannot read unshared todo&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 7) Alice shares read only with the group&lt;br /&gt;
    share_payload = {&amp;quot;todo_id&amp;quot;: todo_id, &amp;quot;group_id&amp;quot;: group_id, &amp;quot;can_read&amp;quot;: True, &amp;quot;can_write&amp;quot;: False, &amp;quot;can_del&amp;quot;: False}&lt;br /&gt;
    r = postgrest_call(&amp;quot;POST&amp;quot;, &amp;quot;/todo.todo_group_shares&amp;quot;, a1, json=share_payload)&lt;br /&gt;
    assert_ok(r, &amp;quot;share todo read-only&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Todo shared (read-only) with group&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 8) Bob can now read, but not update&lt;br /&gt;
    r = postgrest_call(&amp;quot;GET&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2)&lt;br /&gt;
    assert_ok(r, &amp;quot;bob read shared todo&amp;quot;)&lt;br /&gt;
    assert len(r.json()) == 1, &amp;quot;Bob should now see the shared todo&amp;quot;&lt;br /&gt;
    print(&amp;quot;Verified: Bob can read shared todo&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # Under RLS a disallowed update simply matches zero rows (PostgREST still&lt;br /&gt;
    # returns 2xx), so ask for the representation and check that it is empty.&lt;br /&gt;
    r = postgrest_call(&amp;quot;PATCH&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2, json={&amp;quot;title&amp;quot;: &amp;quot;Bob tries edit&amp;quot;}, headers={&amp;quot;Prefer&amp;quot;: &amp;quot;return=representation&amp;quot;})&lt;br /&gt;
    if 200 &amp;lt;= r.status_code &amp;lt; 300 and r.text.strip() not in (&amp;quot;&amp;quot;, &amp;quot;[]&amp;quot;):&lt;br /&gt;
        raise AssertionError(&amp;quot;Bob should NOT be able to update with read-only share&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Verified: Bob cannot update with read-only share&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 9) Alice upgrades to can_write&lt;br /&gt;
    r = postgrest_call(&amp;quot;PATCH&amp;quot;, f&amp;quot;/todo.todo_group_shares?todo_id=eq.{todo_id}&amp;amp;group_id=eq.{group_id}&amp;quot;, a1, json={&amp;quot;can_write&amp;quot;: True})&lt;br /&gt;
    assert_ok(r, &amp;quot;upgrade share to write&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Share upgraded: can_write=True&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # Bob updates successfully now&lt;br /&gt;
    r = postgrest_call(&amp;quot;PATCH&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2, json={&amp;quot;title&amp;quot;: &amp;quot;Bob edited title&amp;quot;})&lt;br /&gt;
    assert_ok(r, &amp;quot;bob updates after write share&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Verified: Bob can update after write permission&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # 10) Delete tests: Bob still cannot delete&lt;br /&gt;
    # Same pattern: a forbidden delete matches zero rows rather than erroring.&lt;br /&gt;
    r = postgrest_call(&amp;quot;DELETE&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2, headers={&amp;quot;Prefer&amp;quot;: &amp;quot;return=representation&amp;quot;})&lt;br /&gt;
    if 200 &amp;lt;= r.status_code &amp;lt; 300 and r.text.strip() not in (&amp;quot;&amp;quot;, &amp;quot;[]&amp;quot;):&lt;br /&gt;
        raise AssertionError(&amp;quot;Bob should NOT be able to delete without can_del&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Verified: Bob cannot delete without delete permission&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # Alice grants delete&lt;br /&gt;
    r = postgrest_call(&amp;quot;PATCH&amp;quot;, f&amp;quot;/todo.todo_group_shares?todo_id=eq.{todo_id}&amp;amp;group_id=eq.{group_id}&amp;quot;, a1, json={&amp;quot;can_del&amp;quot;: True})&lt;br /&gt;
    assert_ok(r, &amp;quot;upgrade share to delete&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Share upgraded: can_del=True&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    # Bob deletes it now&lt;br /&gt;
    r = postgrest_call(&amp;quot;DELETE&amp;quot;, f&amp;quot;/todo.todos?id=eq.{todo_id}&amp;quot;, a2)&lt;br /&gt;
    assert_ok(r, &amp;quot;bob deletes after delete share&amp;quot;)&lt;br /&gt;
    print(&amp;quot;Verified: Bob can delete with delete permission&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
    print(&amp;quot;\n✅ All RLS tests passed.&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
if __name__ == &amp;quot;__main__&amp;quot;:&lt;br /&gt;
    try:&lt;br /&gt;
        main()&lt;br /&gt;
    except Exception as e:&lt;br /&gt;
        print(f&amp;quot;\n❌ Test failed: {e}&amp;quot;, file=sys.stderr)&lt;br /&gt;
        sys.exit(1)&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## Notes / wiring tips&lt;br /&gt;
&lt;br /&gt;
* **PostgREST**&lt;br /&gt;
  Ensure your PostgREST config includes:&lt;br /&gt;
&lt;br /&gt;
  * `db-anon-role = &amp;quot;web_anon&amp;quot;`&lt;br /&gt;
  * `db-schemas = &amp;quot;app,todo&amp;quot;`; the first listed schema (`app`) is the default, so `/rpc/current_user_id` resolves there without a schema prefix.&lt;br /&gt;
  * Tables in the non-default `todo` schema are reached as `/todos`, `/todo_group_shares`, etc., with an `Accept-Profile: todo` header on reads and `Content-Profile: todo` on writes.&lt;br /&gt;
&lt;br /&gt;
* **Keycloak**&lt;br /&gt;
&lt;br /&gt;
  * The script uses **Resource Owner Password Credentials** for simplicity. On modern KC, make sure your client allows Direct Access Grants and your users are enabled.&lt;br /&gt;
  * If you prefer **authorization code + PKCE**, you can swap the token function, but ROPC is easiest for headless tests.&lt;br /&gt;
&lt;br /&gt;
* **Auto-provision**&lt;br /&gt;
  `/rpc/current_user_id` both resolves and creates the `app.users` row for the current JWT `sub`—this bootstraps group ownership and todo ownership without any admin step.&lt;br /&gt;
&lt;br /&gt;
* **Idempotency**&lt;br /&gt;
  The test uses unique names via timestamp; you can re-run safely.&lt;br /&gt;
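&lt;br /&gt;
* **Auditing**&lt;br /&gt;
  To see what the migration actually installed, the catalog views are handy; a sketch, assuming the schemas above:&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- All RLS policies in the app/todo schemas&lt;br /&gt;
select schemaname, tablename, policyname, cmd, roles&lt;br /&gt;
from pg_policies&lt;br /&gt;
where schemaname in ('app', 'todo')&lt;br /&gt;
order by schemaname, tablename, policyname;&lt;br /&gt;
&lt;br /&gt;
-- Confirm row level security is enabled on each table&lt;br /&gt;
select relnamespace::regnamespace as schema, relname, relrowsecurity&lt;br /&gt;
from pg_class&lt;br /&gt;
where relnamespace::regnamespace::text in ('app', 'todo')&lt;br /&gt;
  and relkind = 'r';&lt;br /&gt;
```&lt;br /&gt;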
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
You could split this into multiple migrations (e.g., `001_schemas.sql`, `002_rls.sql`, `003_triggers.sql`) if you prefer smaller, independently revertible steps.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=RLS_example_3&amp;diff=5649</id>
		<title>RLS example 3</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=RLS_example_3&amp;diff=5649"/>
		<updated>2025-09-18T15:20:13Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;RLS pattern for PostgreSQL that uses the JWT `sub` as the external identity, maps it to an internal UUID `user_id`, and supports per-group CRUD permissions on `todos`. It’s...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;RLS pattern for PostgreSQL that uses the JWT `sub` as the external identity, maps it to an internal UUID `user_id`, and supports per-group CRUD permissions on `todos`. It’s PostgREST-friendly (uses `request.jwt.claims`) and keeps trust/authorization entirely in the DB.&lt;br /&gt;
&lt;br /&gt;
# 1) Extensions, schema, roles&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- One-time setup&lt;br /&gt;
create extension if not exists pgcrypto;  -- gen_random_uuid()&lt;br /&gt;
&lt;br /&gt;
-- App schemas&lt;br /&gt;
create schema if not exists app;&lt;br /&gt;
create schema if not exists todo;&lt;br /&gt;
&lt;br /&gt;
-- Typical PostgREST role model:&lt;br /&gt;
--   - `api_owner` owns objects&lt;br /&gt;
--   - `web_anon` is the run-time role used by PostgREST for authenticated users&lt;br /&gt;
do $$&lt;br /&gt;
begin&lt;br /&gt;
  if not exists (select 1 from pg_roles where rolname = 'api_owner') then&lt;br /&gt;
    create role api_owner login;&lt;br /&gt;
  end if;&lt;br /&gt;
  if not exists (select 1 from pg_roles where rolname = 'web_anon') then&lt;br /&gt;
    create role web_anon nologin;&lt;br /&gt;
  end if;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
grant usage on schema app, todo to web_anon;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 2) Identity mapping (internal UUID ⇄ external `sub`)&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Internal users table:&lt;br /&gt;
create table if not exists app.users (&lt;br /&gt;
  id                uuid primary key default gen_random_uuid(),&lt;br /&gt;
  external_subject  text not null unique,  -- the JWT `sub` (opaque string, not always a UUID)&lt;br /&gt;
  display_name      text,&lt;br /&gt;
  email             text,&lt;br /&gt;
  is_active         boolean not null default true,&lt;br /&gt;
  created_at        timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- Resolve the current user's internal UUID from the JWT claims.&lt;br /&gt;
-- Optionally auto-provision a user row if first seen.&lt;br /&gt;
create or replace function app.current_user_id()&lt;br /&gt;
returns uuid&lt;br /&gt;
volatile  -- must be volatile, not stable: the auto-provision branch inserts into app.users&lt;br /&gt;
security definer&lt;br /&gt;
set search_path = app, public, pg_temp  -- pg_temp last, so temp objects can't shadow app objects&lt;br /&gt;
language plpgsql&lt;br /&gt;
as $$&lt;br /&gt;
declare&lt;br /&gt;
  claims   jsonb;&lt;br /&gt;
  sub_txt  text;&lt;br /&gt;
  uid      uuid;&lt;br /&gt;
begin&lt;br /&gt;
  claims := nullif(current_setting('request.jwt.claims', true), '')::jsonb;&lt;br /&gt;
  if claims is null then&lt;br /&gt;
    return null;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  sub_txt := claims-&amp;gt;&amp;gt;'sub';&lt;br /&gt;
  if sub_txt is null then&lt;br /&gt;
    return null;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  -- Try to find existing mapping&lt;br /&gt;
  select u.id into uid&lt;br /&gt;
  from app.users u&lt;br /&gt;
  where u.external_subject = sub_txt&lt;br /&gt;
    and u.is_active;&lt;br /&gt;
&lt;br /&gt;
  if uid is not null then&lt;br /&gt;
    return uid;&lt;br /&gt;
  end if;&lt;br /&gt;
&lt;br /&gt;
  -- Optional &amp;quot;auto-provision on first use&amp;quot;&lt;br /&gt;
  insert into app.users (external_subject, display_name, email)&lt;br /&gt;
  values (&lt;br /&gt;
    sub_txt,&lt;br /&gt;
    claims-&amp;gt;&amp;gt;'name',&lt;br /&gt;
    claims-&amp;gt;&amp;gt;'email'&lt;br /&gt;
  )&lt;br /&gt;
  returning id into uid;&lt;br /&gt;
&lt;br /&gt;
  return uid;&lt;br /&gt;
end&lt;br /&gt;
$$;&lt;br /&gt;
&lt;br /&gt;
revoke all on function app.current_user_id() from public;&lt;br /&gt;
grant execute on function app.current_user_id() to web_anon;&lt;br /&gt;
grant select on app.users to web_anon;  -- (optional: allow users to read their own profile via RLS below)&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 3) Groups &amp;amp; membership&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Groups&lt;br /&gt;
create table if not exists app.groups (&lt;br /&gt;
  id          uuid primary key default gen_random_uuid(),&lt;br /&gt;
  name        text not null unique,&lt;br /&gt;
  created_by  uuid not null references app.users(id) on delete restrict,&lt;br /&gt;
  created_at  timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- Group membership&lt;br /&gt;
create table if not exists app.group_members (&lt;br /&gt;
  group_id  uuid not null references app.groups(id) on delete cascade,&lt;br /&gt;
  user_id   uuid not null references app.users(id) on delete cascade,&lt;br /&gt;
  role      text not null check (role in ('owner','manager','member')),&lt;br /&gt;
  added_at  timestamptz not null default now(),&lt;br /&gt;
  primary key (group_id, user_id)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_group_members_user on app.group_members(user_id);&lt;br /&gt;
create index if not exists idx_group_members_group on app.group_members(group_id);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 4) Todos &amp;amp; per-group shares&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Core todos&lt;br /&gt;
create table if not exists todo.todos (&lt;br /&gt;
  id         uuid primary key default gen_random_uuid(),&lt;br /&gt;
  owner_id   uuid not null references app.users(id) on delete restrict,&lt;br /&gt;
  title      text not null,&lt;br /&gt;
  body       text,&lt;br /&gt;
  is_done    boolean not null default false,&lt;br /&gt;
  created_at timestamptz not null default now(),&lt;br /&gt;
  updated_at timestamptz not null default now()&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_todos_owner on todo.todos(owner_id);&lt;br /&gt;
&lt;br /&gt;
-- Shares: grant group access to a todo with simple, explicit action flags&lt;br /&gt;
-- (can_read, can_write, can_del). You can normalize further if preferred.&lt;br /&gt;
create table if not exists todo.todo_group_shares (&lt;br /&gt;
  todo_id   uuid not null references todo.todos(id) on delete cascade,&lt;br /&gt;
  group_id  uuid not null references app.groups(id) on delete cascade,&lt;br /&gt;
  can_read  boolean not null default true,&lt;br /&gt;
  can_write boolean not null default false,&lt;br /&gt;
  can_del   boolean not null default false,&lt;br /&gt;
  primary key (todo_id, group_id)&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create index if not exists idx_todo_shares_todo on todo.todo_group_shares(todo_id);&lt;br /&gt;
create index if not exists idx_todo_shares_group on todo.todo_group_shares(group_id);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 5) RLS policies&lt;br /&gt;
&lt;br /&gt;
Enable RLS:&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
alter table app.users           enable row level security;&lt;br /&gt;
alter table app.groups          enable row level security;&lt;br /&gt;
alter table app.group_members   enable row level security;&lt;br /&gt;
alter table todo.todos          enable row level security;&lt;br /&gt;
alter table todo.todo_group_shares enable row level security;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
## `app.users` (each user sees themselves; optional admin bypass if you have one)&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- SELECT: only yourself&lt;br /&gt;
create policy users_select_self on app.users&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (id = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
-- UPDATE: let users edit their own profile (optional)&lt;br /&gt;
create policy users_update_self on app.users&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (id = app.current_user_id())&lt;br /&gt;
with check (id = app.current_user_id());&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
## `app.groups`&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- CREATE a group: creator becomes owner via trigger or API-side insert&lt;br /&gt;
create policy groups_insert_any on app.groups&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (created_by = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
-- SELECT groups if you're a member&lt;br /&gt;
create policy groups_select_member on app.groups&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- UPDATE/DELETE groups only if you're an owner&lt;br /&gt;
create policy groups_update_owner on app.groups&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
)&lt;br /&gt;
with check (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy groups_delete_owner on app.groups&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = app.groups.id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and gm.role = 'owner'&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
## `app.group_members`&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Policies on app.group_members must not query app.group_members directly:&lt;br /&gt;
-- a policy subquery against its own table recurses, and Postgres raises&lt;br /&gt;
-- &amp;quot;infinite recursion detected in policy&amp;quot;. Route the membership check&lt;br /&gt;
-- through a security definer helper instead.&lt;br /&gt;
create or replace function app.is_group_member(p_group_id uuid, p_roles text[] default null)&lt;br /&gt;
returns boolean&lt;br /&gt;
stable&lt;br /&gt;
security definer&lt;br /&gt;
set search_path = app, public, pg_temp&lt;br /&gt;
language sql&lt;br /&gt;
as $$&lt;br /&gt;
  select exists (&lt;br /&gt;
    select 1 from app.group_members gm&lt;br /&gt;
    where gm.group_id = p_group_id&lt;br /&gt;
      and gm.user_id = app.current_user_id()&lt;br /&gt;
      and (p_roles is null or gm.role = any(p_roles))&lt;br /&gt;
  );&lt;br /&gt;
$$;&lt;br /&gt;
&lt;br /&gt;
revoke all on function app.is_group_member(uuid, text[]) from public;&lt;br /&gt;
grant execute on function app.is_group_member(uuid, text[]) to web_anon;&lt;br /&gt;
&lt;br /&gt;
-- Anyone can see members of their groups&lt;br /&gt;
create policy gm_select_member on app.group_members&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (app.is_group_member(group_id));&lt;br /&gt;
&lt;br /&gt;
-- Add members only if you're owner/manager of that group&lt;br /&gt;
create policy gm_insert_mgr on app.group_members&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (app.is_group_member(group_id, array['owner','manager']));&lt;br /&gt;
&lt;br /&gt;
-- Update member roles (promote/demote) only if owner/manager&lt;br /&gt;
create policy gm_update_mgr on app.group_members&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (app.is_group_member(group_id, array['owner','manager']))&lt;br /&gt;
with check (app.is_group_member(group_id, array['owner','manager']));&lt;br /&gt;
&lt;br /&gt;
-- Remove members only if owner/manager (or yourself, to leave)&lt;br /&gt;
create policy gm_delete_mgr_or_self on app.group_members&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  user_id = app.current_user_id()&lt;br /&gt;
  or app.is_group_member(group_id, array['owner','manager'])&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
## `todo.todos`&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- READ if you own it OR you are in a group that has read share&lt;br /&gt;
create policy todos_select_owner_or_shared on todo.todos&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_read&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- CREATE: only as yourself (owner_id must equal current user)&lt;br /&gt;
create policy todos_insert_self on todo.todos&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (owner_id = app.current_user_id());&lt;br /&gt;
&lt;br /&gt;
-- UPDATE if owner OR group share allows write&lt;br /&gt;
create policy todos_update_owner_or_shared on todo.todos&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_write&lt;br /&gt;
  )&lt;br /&gt;
)&lt;br /&gt;
with check (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_write&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- DELETE if owner OR group share allows delete&lt;br /&gt;
create policy todos_delete_owner_or_shared on todo.todos&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  owner_id = app.current_user_id()&lt;br /&gt;
  or exists (&lt;br /&gt;
    select 1&lt;br /&gt;
    from todo.todo_group_shares s&lt;br /&gt;
    join app.group_members gm&lt;br /&gt;
      on gm.group_id = s.group_id&lt;br /&gt;
     and gm.user_id  = app.current_user_id()&lt;br /&gt;
    where s.todo_id = todos.id&lt;br /&gt;
      and s.can_del&lt;br /&gt;
  )&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
## `todo.todo_group_shares`&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Visible if you are the todo owner OR member of the shared group&lt;br /&gt;
create policy shares_select_visible on todo.todo_group_shares&lt;br /&gt;
for select&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (select 1 from todo.todos t where t.id = todo_group_shares.todo_id and t.owner_id = app.current_user_id())&lt;br /&gt;
  or exists (select 1 from app.group_members gm where gm.group_id = todo_group_shares.group_id and gm.user_id = app.current_user_id())&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- Only the todo owner (or group owner/manager, if you prefer) can create/update/delete shares&lt;br /&gt;
create policy shares_cud_todo_owner on todo.todo_group_shares&lt;br /&gt;
for insert&lt;br /&gt;
to web_anon&lt;br /&gt;
with check (&lt;br /&gt;
  exists (select 1 from todo.todos t where t.id = todo_group_shares.todo_id and t.owner_id = app.current_user_id())&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy shares_update_todo_owner on todo.todo_group_shares&lt;br /&gt;
for update&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (select 1 from todo.todos t where t.id = todo_group_shares.todo_id and t.owner_id = app.current_user_id())&lt;br /&gt;
)&lt;br /&gt;
with check (&lt;br /&gt;
  exists (select 1 from todo.todos t where t.id = todo_group_shares.todo_id and t.owner_id = app.current_user_id())&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
create policy shares_delete_todo_owner on todo.todo_group_shares&lt;br /&gt;
for delete&lt;br /&gt;
to web_anon&lt;br /&gt;
using (&lt;br /&gt;
  exists (select 1 from todo.todos t where t.id = todo_group_shares.todo_id and t.owner_id = app.current_user_id())&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 6) Grants&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Base privileges (RLS still applies)&lt;br /&gt;
grant select, insert, update, delete on app.users            to web_anon;&lt;br /&gt;
grant select, insert, update, delete on app.groups           to web_anon;&lt;br /&gt;
grant select, insert, update, delete on app.group_members    to web_anon;&lt;br /&gt;
grant select, insert, update, delete on todo.todos           to web_anon;&lt;br /&gt;
grant select, insert, update, delete on todo.todo_group_shares to web_anon;&lt;br /&gt;
&lt;br /&gt;
-- Sequences (none; we use UUIDs)&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 7) (Nice to have) Triggers &amp;amp; helpers&lt;br /&gt;
&lt;br /&gt;
Auto-maintain `updated_at` on todos:&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
create or replace function todo.bump_updated_at()&lt;br /&gt;
returns trigger language plpgsql as $$&lt;br /&gt;
begin&lt;br /&gt;
  new.updated_at := now();&lt;br /&gt;
  return new;&lt;br /&gt;
end$$;&lt;br /&gt;
&lt;br /&gt;
drop trigger if exists trg_todos_updated_at on todo.todos;&lt;br /&gt;
create trigger trg_todos_updated_at&lt;br /&gt;
before update on todo.todos&lt;br /&gt;
for each row execute function todo.bump_updated_at();&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
# 8) How to use (PostgREST / JWT)&lt;br /&gt;
&lt;br /&gt;
* PostgREST exposes the verified JWT claims via `current_setting('request.jwt.claims', true)`.&lt;br /&gt;
* Your token must include a `sub`. Example payload:&lt;br /&gt;
&lt;br /&gt;
  ```json&lt;br /&gt;
  {&lt;br /&gt;
    &amp;quot;sub&amp;quot;: &amp;quot;c2d1ae8a-...-external-id-or-opaque&amp;quot;,&lt;br /&gt;
    &amp;quot;email&amp;quot;: &amp;quot;alice@example.com&amp;quot;,&lt;br /&gt;
    &amp;quot;name&amp;quot;: &amp;quot;Alice Example&amp;quot;,&lt;br /&gt;
    &amp;quot;exp&amp;quot;: 1900000000&lt;br /&gt;
  }&lt;br /&gt;
  ```&lt;br /&gt;
* The first request from a new `sub` auto-provisions a row in `app.users`; subsequent calls return the same stable internal `id` UUID for joins.&lt;br /&gt;
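&lt;br /&gt;
To exercise the mapping without a running PostgREST, you can simulate the claims from psql (a sketch; the `sub`/`email`/`name` values are illustrative):&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
begin;&lt;br /&gt;
-- set_config(..., true) scopes the setting to this transaction,&lt;br /&gt;
-- mimicking what PostgREST does per request&lt;br /&gt;
select set_config('request.jwt.claims',&lt;br /&gt;
                  '{&amp;quot;sub&amp;quot;: &amp;quot;auth0|alice&amp;quot;, &amp;quot;email&amp;quot;: &amp;quot;alice@example.com&amp;quot;, &amp;quot;name&amp;quot;: &amp;quot;Alice&amp;quot;}',&lt;br /&gt;
                  true);&lt;br /&gt;
select app.current_user_id();  -- resolves (and on first use provisions) the internal UUID&lt;br /&gt;
rollback;&lt;br /&gt;
```&lt;br /&gt;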
&lt;br /&gt;
# 9) Notes &amp;amp; options&lt;br /&gt;
&lt;br /&gt;
* **Don’t use `sub` as PK**: different IdPs may change/format it differently. Keep `users.id` (UUID) as canonical, store `external_subject` (TEXT) as the mapping.&lt;br /&gt;
* **Group authority in DB**: we intentionally ignore any `groups` claim for authorization. JWT can lie or drift; DB is the source of truth.&lt;br /&gt;
* **Granularity**: the `todo_group_shares` flags (`can_read`, `can_write`, `can_del`) are easy to reason about. If you prefer, model a `perm` enum (`read`, `write`, `delete`, `share`) or a bitmask.&lt;br /&gt;
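&lt;br /&gt;
As a sketch of the enum variant (the names here are illustrative, not part of the schema above):&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
create type todo.perm as enum ('read','write','delete','share');&lt;br /&gt;
&lt;br /&gt;
-- One row per (todo, group, permission); presence of the row grants the action&lt;br /&gt;
create table if not exists todo.todo_group_perms (&lt;br /&gt;
  todo_id  uuid not null references todo.todos(id) on delete cascade,&lt;br /&gt;
  group_id uuid not null references app.groups(id) on delete cascade,&lt;br /&gt;
  perm     todo.perm not null,&lt;br /&gt;
  primary key (todo_id, group_id, perm)&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;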
&lt;br /&gt;
---&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=RLS_example_2&amp;diff=5648</id>
		<title>RLS example 2</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=RLS_example_2&amp;diff=5648"/>
		<updated>2025-09-18T15:08:26Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Of course\! To set up Postgres Row-Level Security (RLS) based on group membership, you need a schema that links users to groups and resources to groups. The RLS policy then ch...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Of course! To set up Postgres Row-Level Security (RLS) based on group membership, you need a schema that links users to groups and resources to groups. The RLS policy then checks if the current user is a member of the group that owns the row.&lt;br /&gt;
&lt;br /&gt;
Here’s a complete, step-by-step example.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 1. The Database Schema 🏛️&lt;br /&gt;
&lt;br /&gt;
First, we need tables to manage users, groups, their relationships, and the projects we want to protect.&lt;br /&gt;
&lt;br /&gt;
  * **`groups`**: A simple table for our user groups.&lt;br /&gt;
  * **`users`**: Stores user information. The `id` will match the JWT `sub` claim.&lt;br /&gt;
  * **`user_groups`**: A **junction table** that links users and groups in a many-to-many relationship. This is the key to the whole setup.&lt;br /&gt;
  * **`projects`**: The resource we want to secure. Each project belongs to one group.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- A table for user roles or groups&lt;br /&gt;
CREATE TABLE groups (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    name TEXT NOT NULL UNIQUE&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- A table for users, where 'id' corresponds to the JWT 'sub'&lt;br /&gt;
CREATE TABLE users (&lt;br /&gt;
    id TEXT PRIMARY KEY, -- Stores the JWT 'sub' claim&lt;br /&gt;
    username TEXT NOT NULL UNIQUE&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- The junction table linking users to groups&lt;br /&gt;
CREATE TABLE user_groups (&lt;br /&gt;
    user_id TEXT NOT NULL REFERENCES users(id) ON DELETE CASCADE,&lt;br /&gt;
    group_id INTEGER NOT NULL REFERENCES groups(id) ON DELETE CASCADE,&lt;br /&gt;
    PRIMARY KEY (user_id, group_id) -- Ensures a user can't be in the same group twice&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
-- The table with data we want to protect with RLS&lt;br /&gt;
CREATE TABLE projects (&lt;br /&gt;
    id SERIAL PRIMARY KEY,&lt;br /&gt;
    name TEXT NOT NULL,&lt;br /&gt;
    group_id INTEGER NOT NULL REFERENCES groups(id) ON DELETE CASCADE&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 2. Populating with Sample Data 📝&lt;br /&gt;
&lt;br /&gt;
Let's add some data to make the example tangible.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Create groups&lt;br /&gt;
INSERT INTO groups (name) VALUES ('Admins'), ('Developers'), ('Viewers');&lt;br /&gt;
&lt;br /&gt;
-- Create users&lt;br /&gt;
INSERT INTO users (id, username) VALUES&lt;br /&gt;
('auth0|user123', 'alice'), -- Alice is a Developer&lt;br /&gt;
('auth0|user456', 'bob'),   -- Bob is an Admin and a Developer&lt;br /&gt;
('auth0|user789', 'charlie'); -- Charlie is a Viewer&lt;br /&gt;
&lt;br /&gt;
-- Assign users to groups&lt;br /&gt;
INSERT INTO user_groups (user_id, group_id) VALUES&lt;br /&gt;
('auth0|user123', 2), -- Alice is in 'Developers'&lt;br /&gt;
('auth0|user456', 1), -- Bob is in 'Admins'&lt;br /&gt;
('auth0|user456', 2), -- Bob is also in 'Developers'&lt;br /&gt;
('auth0|user789', 3); -- Charlie is in 'Viewers'&lt;br /&gt;
&lt;br /&gt;
-- Create projects and assign them to groups&lt;br /&gt;
INSERT INTO projects (name, group_id) VALUES&lt;br /&gt;
('Project Alpha', 2),     -- Belongs to 'Developers'&lt;br /&gt;
('Project Omega', 2),     -- Belongs to 'Developers'&lt;br /&gt;
('Admin Dashboard', 1),   -- Belongs to 'Admins'&lt;br /&gt;
('Public Analytics', 3);  -- Belongs to 'Viewers'&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 3. Creating the RLS Policy 🛡️&lt;br /&gt;
&lt;br /&gt;
Now, we'll enable RLS on the `projects` table and create a policy. The policy will check if the current user's ID exists in the `user_groups` table associated with the project's `group_id`.&lt;br /&gt;
&lt;br /&gt;
This is the most important step. We'll use a subquery in the `USING` clause.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- 1. Enable Row-Level Security on the 'projects' table&lt;br /&gt;
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;&lt;br /&gt;
&lt;br /&gt;
-- 2. Force RLS for the table owner as well (good practice)&lt;br /&gt;
ALTER TABLE projects FORCE ROW LEVEL SECURITY;&lt;br /&gt;
&lt;br /&gt;
-- 3. Create the policy&lt;br /&gt;
CREATE POLICY project_access_based_on_group&lt;br /&gt;
ON projects&lt;br /&gt;
FOR ALL  -- The policy applies to SELECT, INSERT, UPDATE, and DELETE&lt;br /&gt;
USING (&lt;br /&gt;
    -- This subquery checks if a link exists in our junction table&lt;br /&gt;
    -- between the current user and the project's group.&lt;br /&gt;
    EXISTS (&lt;br /&gt;
        SELECT 1&lt;br /&gt;
        FROM user_groups&lt;br /&gt;
        WHERE&lt;br /&gt;
            -- Match the user_id in the junction table with the current user's JWT sub&lt;br /&gt;
            user_groups.user_id = current_setting('request.jwt.claim.sub', true)&lt;br /&gt;
            -- And match the group_id with the group of the project row being checked&lt;br /&gt;
            AND user_groups.group_id = projects.group_id&lt;br /&gt;
    )&lt;br /&gt;
);&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**What this policy does:** For any query on the `projects` table, Postgres will evaluate each row against the `USING` clause. A row is only visible or modifiable if the `EXISTS` subquery returns `true`, meaning the current user is a member of the group that owns that specific project.&lt;br /&gt;
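&lt;br /&gt;
Because the policy is `FOR ALL` with only a `USING` clause, Postgres also applies that expression as the `WITH CHECK` for writes. A quick illustration (assuming the sample data above and a session role that is subject to RLS):&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- As Alice (Developers only), writing into the Admins group is rejected&lt;br /&gt;
SET request.jwt.claim.sub = 'auth0|user123';&lt;br /&gt;
INSERT INTO projects (name, group_id) VALUES ('Sneaky Project', 1);&lt;br /&gt;
-- ERROR:  new row violates row-level security policy for relation &amp;quot;projects&amp;quot;&lt;br /&gt;
```&lt;br /&gt;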
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 4. Testing the Policy ✅&lt;br /&gt;
&lt;br /&gt;
We can simulate requests from different users by setting the `request.jwt.claim.sub` variable.&lt;br /&gt;
&lt;br /&gt;
#### **Test 1: As Alice (Developer)**&lt;br /&gt;
&lt;br /&gt;
Alice is in the 'Developers' group (ID 2). She should only see 'Project Alpha' and 'Project Omega'.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Simulate a request from Alice&lt;br /&gt;
SET request.jwt.claim.sub = 'auth0|user123';&lt;br /&gt;
&lt;br /&gt;
-- Query the projects table&lt;br /&gt;
SELECT * FROM projects;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**Result:**&lt;br /&gt;
| id | name | group_id |&lt;br /&gt;
| :-- | :-- | :--- |&lt;br /&gt;
| 1 | Project Alpha | 2 |&lt;br /&gt;
| 2 | Project Omega | 2 |&lt;br /&gt;
&lt;br /&gt;
#### **Test 2: As Bob (Admin &amp;amp; Developer)**&lt;br /&gt;
&lt;br /&gt;
Bob is in two groups. He should see the Admin Dashboard and the Developer projects.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Simulate a request from Bob&lt;br /&gt;
SET request.jwt.claim.sub = 'auth0|user456';&lt;br /&gt;
&lt;br /&gt;
-- Query the projects table&lt;br /&gt;
SELECT * FROM projects;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**Result:**&lt;br /&gt;
| id | name | group_id |&lt;br /&gt;
| :-- | :-- | :--- |&lt;br /&gt;
| 1 | Project Alpha | 2 |&lt;br /&gt;
| 2 | Project Omega | 2 |&lt;br /&gt;
| 3 | Admin Dashboard | 1 |&lt;br /&gt;
&lt;br /&gt;
#### **Test 3: As a user not in any groups**&lt;br /&gt;
&lt;br /&gt;
If we simulate a user who hasn't been added to `user_groups`, they should see nothing.&lt;br /&gt;
&lt;br /&gt;
```sql&lt;br /&gt;
-- Simulate a request from an unassigned user&lt;br /&gt;
SET request.jwt.claim.sub = 'auth0|unassigned999';&lt;br /&gt;
&lt;br /&gt;
-- Query the projects table&lt;br /&gt;
SELECT * FROM projects;&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
**Result:**&lt;br /&gt;
(0 rows)&lt;br /&gt;
&lt;br /&gt;
This pattern provides a powerful and scalable way to manage permissions for your application directly within the database.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=React_directory_structure&amp;diff=5647</id>
		<title>React directory structure</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=React_directory_structure&amp;diff=5647"/>
		<updated>2025-09-09T03:32:07Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
```&lt;br /&gt;
/src&lt;br /&gt;
├── assets/&lt;br /&gt;
│   ├── images/&lt;br /&gt;
│   └── styles/&lt;br /&gt;
├── components/&lt;br /&gt;
│   ├── common/&lt;br /&gt;
│   │   ├── Empty.tsx&lt;br /&gt;
│   │   ├── StatusChip.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   ├── fields/&lt;br /&gt;
│   │   ├── MoneyField.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   └── inputs/&lt;br /&gt;
│       ├── AddressInput.tsx&lt;br /&gt;
│       └── index.ts&lt;br /&gt;
├── features/&lt;br /&gt;
│   ├── companies/&lt;br /&gt;
│   │   ├── CompanyCreate.tsx&lt;br /&gt;
│   │   ├── CompanyEdit.tsx&lt;br /&gt;
│   │   ├── CompanyList.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   ├── invoices/&lt;br /&gt;
│   │   ├── InvoiceCreate.tsx&lt;br /&gt;
│   │   ├── InvoiceEdit.tsx&lt;br /&gt;
│   │   ├── InvoiceList.tsx&lt;br /&gt;
│   │   ├── InvoiceShow.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   └── users/&lt;br /&gt;
│       ├── UserCreate.tsx&lt;br /&gt;
│       ├── UserEdit.tsx&lt;br /&gt;
│       ├── UserList.tsx&lt;br /&gt;
│       └── index.ts&lt;br /&gt;
├── hooks/&lt;br /&gt;
│   ├── usePermissions.ts&lt;br /&gt;
│   └── useDebounce.ts&lt;br /&gt;
├── layout/&lt;br /&gt;
│   ├── AppBar.tsx&lt;br /&gt;
│   ├── Layout.tsx&lt;br /&gt;
│   ├── Login.tsx&lt;br /&gt;
│   └── Menu.tsx&lt;br /&gt;
├── pages/&lt;br /&gt;
│   ├── Dashboard.tsx&lt;br /&gt;
│   └── Settings.tsx&lt;br /&gt;
├── providers/&lt;br /&gt;
│   ├── authProvider.ts&lt;br /&gt;
│   ├── dataProvider.ts&lt;br /&gt;
│   └── i18nProvider.ts&lt;br /&gt;
├── theme/&lt;br /&gt;
│   └── theme.ts&lt;br /&gt;
├── types/&lt;br /&gt;
│   ├── index.ts&lt;br /&gt;
│   └── react-admin.d.ts&lt;br /&gt;
├── utils/&lt;br /&gt;
│   ├── formatters.ts&lt;br /&gt;
│   └── history.ts&lt;br /&gt;
├── App.tsx&lt;br /&gt;
└── index.tsx&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Of course. Here is a battle-tested directory structure for a scalable B2B React Admin application that emphasizes maintainability and separation of concerns.&lt;br /&gt;
This structure is organized around features (or &amp;quot;resources&amp;quot; in React Admin terms), which is a highly effective way to manage complexity as your application grows; the tree above shows the recommended layout.&lt;br /&gt;
&lt;br /&gt;
## Explanation of Each Directory&lt;br /&gt;
features/ (or resources/)&lt;br /&gt;
This is the most important directory. Instead of grouping files by type (e.g., all &amp;quot;List&amp;quot; views together), you group them by the business feature or resource they represent. This makes the codebase much easier to navigate and maintain.&lt;br /&gt;
 * features/companies/: Contains all components related to the &amp;quot;companies&amp;quot; resource: CompanyList, CompanyEdit, CompanyCreate, etc.&lt;br /&gt;
 * index.ts: Each feature folder should have an index.ts file that exports all of its components (a &amp;quot;barrel file&amp;quot;). This allows for cleaner imports in your main App.tsx file.&lt;br /&gt;
   // src/features/companies/index.ts&lt;br /&gt;
export * from './CompanyList';&lt;br /&gt;
export * from './CompanyEdit';&lt;br /&gt;
export * from './CompanyCreate';&lt;br /&gt;
&lt;br /&gt;
components/&lt;br /&gt;
This folder is for shared, reusable UI components that are not tied to a single feature.&lt;br /&gt;
 * common/: General-purpose components like a custom Empty state component or a StatusChip used across multiple resources.&lt;br /&gt;
 * fields/: Custom React Admin &amp;lt;Field&amp;gt; components, such as a MoneyField that formats currency.&lt;br /&gt;
 * inputs/: Custom React Admin &amp;lt;Input&amp;gt; components, like a complex AddressInput group.&lt;br /&gt;
providers/&lt;br /&gt;
This is a dedicated place for your core React Admin providers. Keeping them together makes the app's data and authentication logic easy to find.&lt;br /&gt;
 * dataProvider.ts: Handles all communication with your API.&lt;br /&gt;
 * authProvider.ts: Manages login, logout, permissions, etc.&lt;br /&gt;
 * i18nProvider.ts: Handles internationalization and translations.&lt;br /&gt;
layout/&lt;br /&gt;
Use this directory to override React Admin's default layout components.&lt;br /&gt;
 * Layout.tsx: Your main app shell, which pulls in the custom AppBar and Menu.&lt;br /&gt;
 * Menu.tsx: A custom menu, perhaps with icons or role-based visibility.&lt;br /&gt;
 * Login.tsx: A custom-branded login page.&lt;br /&gt;
pages/&lt;br /&gt;
For any custom pages in your application that are not standard CRUD views.&lt;br /&gt;
 * Dashboard.tsx: A perfect place for a dashboard component that shows charts and summary data.&lt;br /&gt;
 * Settings.tsx: A user settings or application configuration page.&lt;br /&gt;
hooks/&lt;br /&gt;
For custom, reusable React hooks that contain business logic.&lt;br /&gt;
 * usePermissions.ts: A common hook in B2B apps to check the current user's roles and permissions.&lt;br /&gt;
 * useDebounce.ts: A utility hook to debounce user input, useful for search filters.&lt;br /&gt;
theme/&lt;br /&gt;
Contains your Material UI theme configuration.&lt;br /&gt;
 * theme.ts: Define your primary/secondary colors, typography, spacing, and component overrides here.&lt;br /&gt;
types/&lt;br /&gt;
Centralize your TypeScript types for better code quality and autocompletion.&lt;br /&gt;
 * index.ts: Define types for your data models (e.g., Company, User, Invoice).&lt;br /&gt;
 * react-admin.d.ts: Augment React Admin's own types if needed.&lt;br /&gt;
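&lt;br /&gt;
For example, types/index.ts might declare the data models like this (field names are illustrative; React Admin only requires that records have an id):&lt;br /&gt;
```typescript
// Hypothetical model types for src/types/index.ts. Field names are
// illustrative, not prescribed by React Admin.
export interface Company {
  id: number;
  name: string;
}

export interface Invoice {
  id: number;
  company_id: number;
  total: number;
}

const acme: Company = { id: 1, name: 'Acme' };
console.log(acme.name); // "Acme"
```
&lt;br /&gt;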
## Putting It All Together (App.tsx)&lt;br /&gt;
This structure leads to a very clean and readable main App.tsx file.&lt;br /&gt;
// src/App.tsx&lt;br /&gt;
import { Admin, Resource, CustomRoutes } from 'react-admin';&lt;br /&gt;
import { Route } from 'react-router-dom';&lt;br /&gt;
&lt;br /&gt;
// Providers&lt;br /&gt;
import { dataProvider } from './providers/dataProvider';&lt;br /&gt;
import { authProvider } from './providers/authProvider';&lt;br /&gt;
&lt;br /&gt;
// Layout and Pages&lt;br /&gt;
import { Layout } from './layout/Layout';&lt;br /&gt;
import { Dashboard } from './pages/Dashboard';&lt;br /&gt;
import { Settings } from './pages/Settings';&lt;br /&gt;
&lt;br /&gt;
// Features (Resources)&lt;br /&gt;
import { CompanyList, CompanyEdit, CompanyCreate } from './features/companies';&lt;br /&gt;
import { InvoiceList, InvoiceEdit, InvoiceShow } from './features/invoices';&lt;br /&gt;
import { UserList } from './features/users';&lt;br /&gt;
&lt;br /&gt;
const App = () =&amp;gt; (&lt;br /&gt;
  &amp;lt;Admin&lt;br /&gt;
    dataProvider={dataProvider}&lt;br /&gt;
    authProvider={authProvider}&lt;br /&gt;
    layout={Layout}&lt;br /&gt;
    dashboard={Dashboard}&lt;br /&gt;
  &amp;gt;&lt;br /&gt;
    &amp;lt;Resource&lt;br /&gt;
      name=&amp;quot;companies&amp;quot;&lt;br /&gt;
      list={CompanyList}&lt;br /&gt;
      edit={CompanyEdit}&lt;br /&gt;
      create={CompanyCreate}&lt;br /&gt;
    /&amp;gt;&lt;br /&gt;
    &amp;lt;Resource&lt;br /&gt;
      name=&amp;quot;invoices&amp;quot;&lt;br /&gt;
      list={InvoiceList}&lt;br /&gt;
      edit={InvoiceEdit}&lt;br /&gt;
      show={InvoiceShow}&lt;br /&gt;
    /&amp;gt;&lt;br /&gt;
    &amp;lt;Resource name=&amp;quot;users&amp;quot; list={UserList} /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    {/* Custom pages */}&lt;br /&gt;
    &amp;lt;CustomRoutes&amp;gt;&lt;br /&gt;
      &amp;lt;Route path=&amp;quot;/settings&amp;quot; element={&amp;lt;Settings /&amp;gt;} /&amp;gt;&lt;br /&gt;
    &amp;lt;/CustomRoutes&amp;gt;&lt;br /&gt;
  &amp;lt;/Admin&amp;gt;&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
export default App;&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=React_directory_structure&amp;diff=5646</id>
		<title>React directory structure</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=React_directory_structure&amp;diff=5646"/>
		<updated>2025-09-09T03:29:12Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;/src ├── assets/ │   ├── images/ │   └── styles/ ├── components/ │   ├── common/ │   │   ├── Empty.tsx │   │   ├── Stat...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
Here is a battle-tested directory structure for a scalable B2B React Admin application that emphasizes maintainability and separation of concerns.&lt;br /&gt;
This structure is organized around features (or &amp;quot;resources&amp;quot; in React Admin terms), which is a highly effective way to manage complexity as your application grows.&lt;br /&gt;
Example Directory Structure&lt;br /&gt;
Here is a visual representation of the recommended structure:&lt;br /&gt;
/src&lt;br /&gt;
├── assets/&lt;br /&gt;
│   ├── images/&lt;br /&gt;
│   └── styles/&lt;br /&gt;
├── components/&lt;br /&gt;
│   ├── common/&lt;br /&gt;
│   │   ├── Empty.tsx&lt;br /&gt;
│   │   ├── StatusChip.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   ├── fields/&lt;br /&gt;
│   │   ├── MoneyField.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   └── inputs/&lt;br /&gt;
│       ├── AddressInput.tsx&lt;br /&gt;
│       └── index.ts&lt;br /&gt;
├── features/&lt;br /&gt;
│   ├── companies/&lt;br /&gt;
│   │   ├── CompanyCreate.tsx&lt;br /&gt;
│   │   ├── CompanyEdit.tsx&lt;br /&gt;
│   │   ├── CompanyList.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   ├── invoices/&lt;br /&gt;
│   │   ├── InvoiceCreate.tsx&lt;br /&gt;
│   │   ├── InvoiceEdit.tsx&lt;br /&gt;
│   │   ├── InvoiceList.tsx&lt;br /&gt;
│   │   ├── InvoiceShow.tsx&lt;br /&gt;
│   │   └── index.ts&lt;br /&gt;
│   └── users/&lt;br /&gt;
│       ├── UserCreate.tsx&lt;br /&gt;
│       ├── UserEdit.tsx&lt;br /&gt;
│       ├── UserList.tsx&lt;br /&gt;
│       └── index.ts&lt;br /&gt;
├── hooks/&lt;br /&gt;
│   ├── usePermissions.ts&lt;br /&gt;
│   └── useDebounce.ts&lt;br /&gt;
├── layout/&lt;br /&gt;
│   ├── AppBar.tsx&lt;br /&gt;
│   ├── Layout.tsx&lt;br /&gt;
│   ├── Login.tsx&lt;br /&gt;
│   └── Menu.tsx&lt;br /&gt;
├── pages/&lt;br /&gt;
│   ├── Dashboard.tsx&lt;br /&gt;
│   └── Settings.tsx&lt;br /&gt;
├── providers/&lt;br /&gt;
│   ├── authProvider.ts&lt;br /&gt;
│   ├── dataProvider.ts&lt;br /&gt;
│   └── i18nProvider.ts&lt;br /&gt;
├── theme/&lt;br /&gt;
│   └── theme.ts&lt;br /&gt;
├── types/&lt;br /&gt;
│   ├── index.ts&lt;br /&gt;
│   └── react-admin.d.ts&lt;br /&gt;
├── utils/&lt;br /&gt;
│   ├── formatters.ts&lt;br /&gt;
│   └── history.ts&lt;br /&gt;
├── App.tsx&lt;br /&gt;
└── index.tsx&lt;br /&gt;
&lt;br /&gt;
## Explanation of Each Directory&lt;br /&gt;
features/ (or resources/)&lt;br /&gt;
This is the most important directory. Instead of grouping files by type (e.g., all &amp;quot;List&amp;quot; views together), you group them by the business feature or resource they represent. This makes the codebase much easier to navigate and maintain.&lt;br /&gt;
 * features/companies/: Contains all components related to the &amp;quot;companies&amp;quot; resource: CompanyList, CompanyEdit, CompanyCreate, etc.&lt;br /&gt;
 * index.ts: Each feature folder should have an index.ts file that exports all of its components (a &amp;quot;barrel file&amp;quot;). This allows for cleaner imports in your main App.tsx file.&lt;br /&gt;
   // src/features/companies/index.ts&lt;br /&gt;
export * from './CompanyList';&lt;br /&gt;
export * from './CompanyEdit';&lt;br /&gt;
export * from './CompanyCreate';&lt;br /&gt;
&lt;br /&gt;
components/&lt;br /&gt;
This folder is for shared, reusable UI components that are not tied to a single feature.&lt;br /&gt;
 * common/: General-purpose components like a custom Empty state component or a StatusChip used across multiple resources.&lt;br /&gt;
 * fields/: Custom React Admin &amp;lt;Field&amp;gt; components, such as a MoneyField that formats currency.&lt;br /&gt;
 * inputs/: Custom React Admin &amp;lt;Input&amp;gt; components, like a complex AddressInput group.&lt;br /&gt;
providers/&lt;br /&gt;
This is a dedicated place for your core React Admin providers. Keeping them together makes the app's data and authentication logic easy to find.&lt;br /&gt;
 * dataProvider.ts: Handles all communication with your API.&lt;br /&gt;
 * authProvider.ts: Manages login, logout, permissions, etc.&lt;br /&gt;
 * i18nProvider.ts: Handles internationalization and translations.&lt;br /&gt;
layout/&lt;br /&gt;
Use this directory to override React Admin's default layout components.&lt;br /&gt;
 * Layout.tsx: Your main app shell, which pulls in the custom AppBar and Menu.&lt;br /&gt;
 * Menu.tsx: A custom menu, perhaps with icons or role-based visibility.&lt;br /&gt;
 * Login.tsx: A custom-branded login page.&lt;br /&gt;
pages/&lt;br /&gt;
For any custom pages in your application that are not standard CRUD views.&lt;br /&gt;
 * Dashboard.tsx: A perfect place for a dashboard component that shows charts and summary data.&lt;br /&gt;
 * Settings.tsx: A user settings or application configuration page.&lt;br /&gt;
hooks/&lt;br /&gt;
For custom, reusable React hooks that contain business logic.&lt;br /&gt;
 * usePermissions.ts: A common hook in B2B apps to check the current user's roles and permissions.&lt;br /&gt;
 * useDebounce.ts: A utility hook to debounce user input, useful for search filters.&lt;br /&gt;
theme/&lt;br /&gt;
Contains your Material UI theme configuration.&lt;br /&gt;
 * theme.ts: Define your primary/secondary colors, typography, spacing, and component overrides here.&lt;br /&gt;
types/&lt;br /&gt;
Centralize your TypeScript types for better code quality and autocompletion.&lt;br /&gt;
 * index.ts: Define types for your data models (e.g., Company, User, Invoice).&lt;br /&gt;
 * react-admin.d.ts: Augment React Admin's own types if needed.&lt;br /&gt;
## Putting It All Together (App.tsx)&lt;br /&gt;
This structure leads to a very clean and readable main App.tsx file.&lt;br /&gt;
// src/App.tsx&lt;br /&gt;
import { Admin, Resource, CustomRoutes } from 'react-admin';&lt;br /&gt;
import { Route } from 'react-router-dom';&lt;br /&gt;
&lt;br /&gt;
// Providers&lt;br /&gt;
import { dataProvider } from './providers/dataProvider';&lt;br /&gt;
import { authProvider } from './providers/authProvider';&lt;br /&gt;
&lt;br /&gt;
// Layout and Pages&lt;br /&gt;
import { Layout } from './layout/Layout';&lt;br /&gt;
import { Dashboard } from './pages/Dashboard';&lt;br /&gt;
import { Settings } from './pages/Settings';&lt;br /&gt;
&lt;br /&gt;
// Features (Resources)&lt;br /&gt;
import { CompanyList, CompanyEdit, CompanyCreate } from './features/companies';&lt;br /&gt;
import { InvoiceList, InvoiceEdit, InvoiceShow } from './features/invoices';&lt;br /&gt;
import { UserList } from './features/users';&lt;br /&gt;
&lt;br /&gt;
const App = () =&amp;gt; (&lt;br /&gt;
  &amp;lt;Admin&lt;br /&gt;
    dataProvider={dataProvider}&lt;br /&gt;
    authProvider={authProvider}&lt;br /&gt;
    layout={Layout}&lt;br /&gt;
    dashboard={Dashboard}&lt;br /&gt;
  &amp;gt;&lt;br /&gt;
    &amp;lt;Resource&lt;br /&gt;
      name=&amp;quot;companies&amp;quot;&lt;br /&gt;
      list={CompanyList}&lt;br /&gt;
      edit={CompanyEdit}&lt;br /&gt;
      create={CompanyCreate}&lt;br /&gt;
    /&amp;gt;&lt;br /&gt;
    &amp;lt;Resource&lt;br /&gt;
      name=&amp;quot;invoices&amp;quot;&lt;br /&gt;
      list={InvoiceList}&lt;br /&gt;
      edit={InvoiceEdit}&lt;br /&gt;
      show={InvoiceShow}&lt;br /&gt;
    /&amp;gt;&lt;br /&gt;
    &amp;lt;Resource name=&amp;quot;users&amp;quot; list={UserList} /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    {/* Custom pages */}&lt;br /&gt;
    &amp;lt;CustomRoutes&amp;gt;&lt;br /&gt;
      &amp;lt;Route path=&amp;quot;/settings&amp;quot; element={&amp;lt;Settings /&amp;gt;} /&amp;gt;&lt;br /&gt;
    &amp;lt;/CustomRoutes&amp;gt;&lt;br /&gt;
  &amp;lt;/Admin&amp;gt;&lt;br /&gt;
);&lt;br /&gt;
&lt;br /&gt;
export default App;&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Libs_fe&amp;diff=5645</id>
		<title>Libs fe</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Libs_fe&amp;diff=5645"/>
		<updated>2025-09-02T22:58:48Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;leptos  https://github.com/gcanti/fp-ts&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;leptos&lt;br /&gt;
&lt;br /&gt;
https://github.com/gcanti/fp-ts&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Postgres&amp;diff=5644</id>
		<title>Postgres</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Postgres&amp;diff=5644"/>
		<updated>2025-08-31T18:57:39Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;https://github.com/supabase/postgres&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://github.com/supabase/postgres&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5643</id>
		<title>Keycloak postgrest</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5643"/>
		<updated>2025-08-31T02:32:18Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PostgREST can be integrated with Keycloak for authentication using JSON Web Tokens (JWTs). This setup allows Keycloak to manage user authentication and authorization, while PostgREST leverages the issued JWTs to control access to the PostgreSQL database. &lt;br /&gt;
Key Concepts: &lt;br /&gt;
&lt;br /&gt;
• JWT Secret: PostgREST requires a jwt-secret to verify the signature of incoming JWTs. This secret can be a symmetric key (HS256) or an asymmetric public key (RS256) from Keycloak. For RS256, the public key from Keycloak's realm settings (Keys tab) should be configured as a JSON Web Key (JWK) in PostgREST's jwt-secret. &lt;br /&gt;
• Role Claim: PostgREST uses a role claim within the JWT to determine the database role under which a request should be executed. Keycloak can be configured to include a role claim in the issued JWTs, mapping user roles or attributes to database roles. The jwt-role-claim-key setting in the PostgREST configuration specifies the JSPath expression used to extract the role from the JWT claims. &lt;br /&gt;
• Authentication Flow: &lt;br /&gt;
	• A client application authenticates with Keycloak, obtaining an access token (JWT). &lt;br /&gt;
	• The client sends this JWT in the Authorization header of requests to PostgREST. &lt;br /&gt;
	• PostgREST verifies the JWT's signature using the configured jwt-secret. &lt;br /&gt;
	• It extracts the role claim from the JWT. &lt;br /&gt;
	• PostgREST then switches to the corresponding database role for the duration of the request, enforcing PostgreSQL's role-based access control. &lt;br /&gt;
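&lt;br /&gt;
The role-claim lookup in the flow above can be sketched in a few lines. This only illustrates decoding a JWT payload and reading a claim, not PostgREST's implementation, and it deliberately omits the signature verification that PostgREST performs first (roleFromJwt and the demo token are made up for the example):&lt;br /&gt;
```typescript
// Sketch: decode a JWT payload and read the claim that would select the
// database role. Signature verification is intentionally NOT done here.
function roleFromJwt(jwt: string, claim: string = 'role'): string {
  const payloadB64 = jwt.split('.')[1]; // a JWT is header.payload.signature
  const json = Buffer.from(payloadB64, 'base64url').toString('utf8');
  const claims = JSON.parse(json);
  return claims[claim];
}

// Build an unsigned demo token just to exercise the decoder.
const payload = Buffer.from(JSON.stringify({ role: 'webuser' })).toString('base64url');
const demo = `header.${payload}.signature`;
console.log(roleFromJwt(demo)); // "webuser"
```
&lt;br /&gt;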
&lt;br /&gt;
Configuration Steps: &lt;br /&gt;
&lt;br /&gt;
• Keycloak Setup: &lt;br /&gt;
	• Create a realm and a client in Keycloak for your PostgREST application. &lt;br /&gt;
	• Configure mappers to include user roles or other relevant information in the JWT's claims, specifically mapping them to a claim that PostgREST can use as a role. &lt;br /&gt;
&lt;br /&gt;
• PostgREST Configuration: &lt;br /&gt;
	• Set the jwt-secret in your postgrest.conf file to the appropriate Keycloak public key (as a JWK) or shared secret. &lt;br /&gt;
	• Configure jwt-role-claim-key to specify where PostgREST should look for the role information within the JWT payload. For example, if Keycloak puts the role in a preferred_username claim and you want to use that as the database role, you would set jwt-role-claim-key = &amp;quot;.preferred_username&amp;quot;. &lt;br /&gt;
	• Ensure the corresponding database roles exist in PostgreSQL and have the necessary permissions granted. &lt;br /&gt;
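&lt;br /&gt;
Putting the PostgREST side of this together, a postgrest.conf along the following lines matches the steps above. Every value is a placeholder, not a working configuration:&lt;br /&gt;
```
# Sketch of a postgrest.conf for the Keycloak setup described above.
# All values are placeholders.
db-uri             = "postgres://authenticator:password@localhost:5432/app_db"
db-schemas         = "api"
db-anon-role       = "web_anon"

# Keycloak's RS256 public key, pasted as a JWK from the realm's Keys tab.
jwt-secret         = "{\"kty\":\"RSA\",\"alg\":\"RS256\",\"n\":\"...\",\"e\":\"AQAB\"}"

# Claim that selects the database role for each request.
jwt-role-claim-key = ".role"
```
&lt;br /&gt;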
&lt;br /&gt;
By following these steps, Keycloak handles the identity management and token issuance, while PostgREST securely enforces access control based on the roles embedded in the JWTs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Test jwt&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Creating a user in Keycloak and obtaining a JSON Web Token (JWT) involves several steps: &lt;br /&gt;
1. Create a User in Keycloak: &lt;br /&gt;
&lt;br /&gt;
• Access the Keycloak Admin Console: Log in to your Keycloak instance's administration console (e.g., http://localhost:8080/auth/admin). &lt;br /&gt;
• Select/Create a Realm: Choose an existing realm or create a new one to manage your users and clients. &lt;br /&gt;
• Navigate to Users: In the left-hand menu, select &amp;quot;Users.&amp;quot; &lt;br /&gt;
• Add User: Click &amp;quot;Add user&amp;quot; and provide the required information, such as username, email, first name, last name, and enable the user. &lt;br /&gt;
• Set Password: Go to the &amp;quot;Credentials&amp;quot; tab for the newly created user and set a password. You may also configure whether the user is required to update their password on the next login. &lt;br /&gt;
&lt;br /&gt;
2. Configure a Client for JWT Generation: &lt;br /&gt;
&lt;br /&gt;
• Navigate to Clients: In the left-hand menu, select &amp;quot;Clients.&amp;quot; &lt;br /&gt;
• Create a Client: Click &amp;quot;Create client&amp;quot; and provide a Client ID. &lt;br /&gt;
• Configure Client Capabilities: &lt;br /&gt;
	• Enable &amp;quot;Client authentication&amp;quot; and set the &amp;quot;Access Type&amp;quot; to &amp;quot;confidential&amp;quot; if you intend to use a client secret for token requests. &lt;br /&gt;
	• Enable &amp;quot;Direct access grants&amp;quot; to allow users to obtain tokens directly using their username and password (though the authorization code flow is generally recommended for security). &lt;br /&gt;
&lt;br /&gt;
• Retrieve Client Secret (if applicable): If using a confidential client, go to the &amp;quot;Credentials&amp;quot; tab of the client and copy the &amp;quot;Client Secret.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
3. Obtain the JWT: &lt;br /&gt;
The method to obtain the JWT depends on the grant type configured for your client. &lt;br /&gt;
&lt;br /&gt;
• Password Grant (for direct username/password authentication): &lt;br /&gt;
	• Use a tool like curl or Postman to send a POST request to the Keycloak token endpoint (e.g., http://localhost:8080/auth/realms/&amp;lt;your_realm&amp;gt;/protocol/openid-connect/token). &lt;br /&gt;
	• Include the following parameters in the request body (URL-encoded or as form data): &lt;br /&gt;
		• grant_type: password &lt;br /&gt;
		• client_id: Your client ID &lt;br /&gt;
		• client_secret: Your client secret (if using a confidential client) &lt;br /&gt;
		• username: The username of the user you created &lt;br /&gt;
		• password: The password of the user you created &lt;br /&gt;
&lt;br /&gt;
	• The response will contain the access_token (your JWT), refresh_token, and other token-related information. &lt;br /&gt;
&lt;br /&gt;
    curl -X POST \&lt;br /&gt;
      http://localhost:8080/auth/realms/&amp;lt;your_realm&amp;gt;/protocol/openid-connect/token \&lt;br /&gt;
      -H 'Content-Type: application/x-www-form-urlencoded' \&lt;br /&gt;
      -d 'grant_type=password&amp;amp;client_id=&amp;lt;your_client_id&amp;gt;&amp;amp;client_secret=&amp;lt;your_client_secret&amp;gt;&amp;amp;username=&amp;lt;your_username&amp;gt;&amp;amp;password=&amp;lt;your_password&amp;gt;'&lt;br /&gt;
&lt;br /&gt;
• Authorization Code Flow (recommended for web applications): This flow involves redirecting the user to Keycloak for authentication and then exchanging an authorization code for a token. It is more secure than the password grant. &lt;br /&gt;
&lt;br /&gt;
Note: The JWT obtained can then be used to authenticate and authorize requests to applications and services integrated with Keycloak. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
PostgREST utilizes environment variables for its configuration, offering an alternative or supplement to a configuration file. These variables are capitalized, prefixed with PGRST_, and use underscores to separate words. &lt;br /&gt;
Key PostgREST Environment Variables: &lt;br /&gt;
&lt;br /&gt;
• PGRST_DB_URI: This is a crucial variable that defines the connection string to the PostgreSQL database. It follows the format: &lt;br /&gt;
&lt;br /&gt;
    postgres://user:password@host:port/database_name&lt;br /&gt;
&lt;br /&gt;
This variable is essential for PostgREST to connect to and expose your database. &lt;br /&gt;
&lt;br /&gt;
• PGRST_SERVER_PORT: Specifies the port on which the PostgREST server will listen for incoming HTTP requests. &lt;br /&gt;
• PGRST_JWT_SECRET: Used to define the secret key for signing and verifying JSON Web Tokens (JWTs) for authentication. &lt;br /&gt;
• PGRST_DB_SCHEMAS: A comma-separated list of database schemas that PostgREST should expose through its API. &lt;br /&gt;
• PGRST_DB_PRE_CONFIG: Specifies a database function that PostgREST should execute before starting to serve requests. This function can be used for in-database configuration, such as setting db-schemas or jwt-secret. &lt;br /&gt;
• PGRST_APP_SETTINGS_*: Allows for configuration of custom application settings. For example, PGRST_APP_SETTINGS_MY_SETTING would correspond to app.settings.my-setting in a configuration file. &lt;br /&gt;
&lt;br /&gt;
Integration with PostgreSQL Environment Variables: &lt;br /&gt;
PostgREST also supports standard libpq environment variables for constructing the database connection string, such as: &lt;br /&gt;
&lt;br /&gt;
• PGHOST: Sets the database server hostname. &lt;br /&gt;
• PGPORT: Sets the TCP port number for the PostgreSQL server. &lt;br /&gt;
• PGDATABASE: Sets the PostgreSQL database name. &lt;br /&gt;
• PGUSER: Sets the username for connecting to the database. &lt;br /&gt;
• PGPASSWORD: Sets the password for database authentication. &lt;br /&gt;
&lt;br /&gt;
Precedence of Configuration: &lt;br /&gt;
PostgREST applies configuration in the following order of precedence, with later methods overriding earlier ones: &lt;br /&gt;
&lt;br /&gt;
• Config File: Values loaded from a postgrest.conf file. &lt;br /&gt;
• Environment Variables: Values set via environment variables (prefixed with PGRST_). &lt;br /&gt;
• In-Database Configuration: Settings defined by a db-pre-config function within the database. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5642</id>
		<title>Keycloak postgrest</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5642"/>
		<updated>2025-08-30T15:10:39Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PostgREST can be integrated with Keycloak for authentication using JSON Web Tokens (JWTs). This setup allows Keycloak to manage user authentication and authorization, while PostgREST leverages the issued JWTs to control access to the PostgreSQL database. &lt;br /&gt;
Key Concepts: &lt;br /&gt;
&lt;br /&gt;
• JWT Secret: PostgREST requires a jwt-secret to verify the signature of incoming JWTs. This secret can be a symmetric key (HS256) or an asymmetric public key (RS256) from Keycloak. For RS256, the public key from Keycloak's realm settings (Keys tab) should be configured as a JSON Web Key (JWK) in PostgREST's jwt-secret. &lt;br /&gt;
• Role Claim: PostgREST uses a role claim within the JWT to determine the database role under which a request should be executed. Keycloak can be configured to include a role claim in the issued JWTs, mapping user roles or attributes to database roles. The jwt-role-claim-key setting in the PostgREST configuration specifies the JSPath expression used to extract the role from the JWT claims. &lt;br /&gt;
• Authentication Flow: &lt;br /&gt;
	• A client application authenticates with Keycloak, obtaining an access token (JWT). &lt;br /&gt;
	• The client sends this JWT in the Authorization header of requests to PostgREST. &lt;br /&gt;
	• PostgREST verifies the JWT's signature using the configured jwt-secret. &lt;br /&gt;
	• It extracts the role claim from the JWT. &lt;br /&gt;
	• PostgREST then switches to the corresponding database role for the duration of the request, enforcing PostgreSQL's role-based access control. &lt;br /&gt;
&lt;br /&gt;
Configuration Steps: &lt;br /&gt;
&lt;br /&gt;
• Keycloak Setup: &lt;br /&gt;
	• Create a realm and a client in Keycloak for your PostgREST application. &lt;br /&gt;
	• Configure mappers to include user roles or other relevant information in the JWT's claims, specifically mapping them to a claim that PostgREST can use as a role. &lt;br /&gt;
&lt;br /&gt;
• PostgREST Configuration: &lt;br /&gt;
	• Set the jwt-secret in your postgrest.conf file to the appropriate Keycloak public key (as a JWK) or shared secret. &lt;br /&gt;
	• Configure jwt-role-claim-key to specify where PostgREST should look for the role information within the JWT payload. For example, if Keycloak puts the role in a preferred_username claim and you want to use that as the database role, you would set jwt-role-claim-key = &amp;quot;.preferred_username&amp;quot;. &lt;br /&gt;
	• Ensure the corresponding database roles exist in PostgreSQL and have the necessary permissions granted. &lt;br /&gt;
&lt;br /&gt;
By following these steps, Keycloak handles the identity management and token issuance, while PostgREST securely enforces access control based on the roles embedded in the JWTs.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## Test jwt&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
Creating a user in Keycloak and obtaining a JSON Web Token (JWT) involves several steps: &lt;br /&gt;
1. Create a User in Keycloak: &lt;br /&gt;
&lt;br /&gt;
• Access the Keycloak Admin Console: Log in to your Keycloak instance's administration console (e.g., http://localhost:8080/auth/admin). &lt;br /&gt;
• Select/Create a Realm: Choose an existing realm or create a new one to manage your users and clients. &lt;br /&gt;
• Navigate to Users: In the left-hand menu, select &amp;quot;Users.&amp;quot; &lt;br /&gt;
• Add User: Click &amp;quot;Add user&amp;quot; and provide the required information, such as username, email, first name, last name, and enable the user. &lt;br /&gt;
• Set Password: Go to the &amp;quot;Credentials&amp;quot; tab for the newly created user and set a password. You may also configure whether the user is required to update their password on the next login. &lt;br /&gt;
&lt;br /&gt;
2. Configure a Client for JWT Generation: &lt;br /&gt;
&lt;br /&gt;
• Navigate to Clients: In the left-hand menu, select &amp;quot;Clients.&amp;quot; &lt;br /&gt;
• Create a Client: Click &amp;quot;Create client&amp;quot; and provide a Client ID. &lt;br /&gt;
• Configure Client Capabilities: &lt;br /&gt;
	• Enable &amp;quot;Client authentication&amp;quot; and set the &amp;quot;Access Type&amp;quot; to &amp;quot;confidential&amp;quot; if you intend to use a client secret for token requests. &lt;br /&gt;
	• Enable &amp;quot;Direct access grants&amp;quot; to allow users to obtain tokens directly using their username and password (though the authorization code flow is generally recommended for security). &lt;br /&gt;
&lt;br /&gt;
• Retrieve Client Secret (if applicable): If using a confidential client, go to the &amp;quot;Credentials&amp;quot; tab of the client and copy the &amp;quot;Client Secret.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
3. Obtain the JWT: &lt;br /&gt;
The method to obtain the JWT depends on the grant type configured for your client. &lt;br /&gt;
&lt;br /&gt;
• Password Grant (for direct username/password authentication): &lt;br /&gt;
	• Use a tool like curl or Postman to send a POST request to the Keycloak token endpoint (e.g., http://localhost:8080/auth/realms/&amp;lt;your_realm&amp;gt;/protocol/openid-connect/token). &lt;br /&gt;
	• Include the following parameters in the request body (URL-encoded or as form data): &lt;br /&gt;
		• grant_type: password &lt;br /&gt;
		• client_id: Your client ID &lt;br /&gt;
		• client_secret: Your client secret (if using a confidential client) &lt;br /&gt;
		• username: The username of the user you created &lt;br /&gt;
		• password: The password of the user you created &lt;br /&gt;
&lt;br /&gt;
	• The response will contain the access_token (your JWT), refresh_token, and other token-related information. &lt;br /&gt;
&lt;br /&gt;
    curl -X POST \&lt;br /&gt;
      http://localhost:8080/auth/realms/&amp;lt;your_realm&amp;gt;/protocol/openid-connect/token \&lt;br /&gt;
      -H 'Content-Type: application/x-www-form-urlencoded' \&lt;br /&gt;
      -d 'grant_type=password&amp;amp;client_id=&amp;lt;your_client_id&amp;gt;&amp;amp;client_secret=&amp;lt;your_client_secret&amp;gt;&amp;amp;username=&amp;lt;your_username&amp;gt;&amp;amp;password=&amp;lt;your_password&amp;gt;'&lt;br /&gt;
&lt;br /&gt;
• Authorization Code Flow (recommended for web applications): This flow involves redirecting the user to Keycloak for authentication and then exchanging an authorization code for a token. It is more secure than the password grant. &lt;br /&gt;
&lt;br /&gt;
Note: The JWT obtained can then be used to authenticate and authorize requests to applications and services integrated with Keycloak. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5641</id>
		<title>Keycloak postgrest</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Keycloak_postgrest&amp;diff=5641"/>
		<updated>2025-08-30T05:03:36Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;PostgREST can be integrated with Keycloak for authentication using JSON Web Tokens (JWTs). This setup allows Keycloak to manage user authentication and authorization, while Po...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;PostgREST can be integrated with Keycloak for authentication using JSON Web Tokens (JWTs). This setup allows Keycloak to manage user authentication and authorization, while PostgREST leverages the issued JWTs to control access to the PostgreSQL database. &lt;br /&gt;
Key Concepts: &lt;br /&gt;
&lt;br /&gt;
• JWT Secret: PostgREST requires a jwt-secret to verify the signature of incoming JWTs. This secret can be a symmetric key (HS256) or an asymmetric public key (RS256) from Keycloak. For RS256, the public key from Keycloak's realm settings (Keys tab) should be configured as a JSON Web Key (JWK) in PostgREST's jwt-secret. &lt;br /&gt;
• Role Claim: PostgREST uses a role claim within the JWT to determine the database role under which a request should be executed. Keycloak can be configured to include a role claim in the issued JWTs, mapping user roles or attributes to database roles. The jwt-role-claim-key in PostgREST configuration specifies the JSONPath expression to extract the role from the JWT claims. &lt;br /&gt;
• Authentication Flow: &lt;br /&gt;
	• A client application authenticates with Keycloak, obtaining an access token (JWT). &lt;br /&gt;
	• The client sends this JWT in the Authorization header of requests to PostgREST. &lt;br /&gt;
	• PostgREST verifies the JWT's signature using the configured jwt-secret. &lt;br /&gt;
	• It extracts the role claim from the JWT. &lt;br /&gt;
	• PostgREST then switches to the corresponding database role for the duration of the request, enforcing PostgreSQL's role-based access control. &lt;br /&gt;
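&lt;br /&gt;
Step 2 of this flow, sketched concretely (the token value, PostgREST address, and table name below are hypothetical; PostgREST listens on port 3000 by default):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical token; in practice this is the access_token issued by Keycloak.
TOKEN='replace-with-the-access-token-from-keycloak'
HDR="Authorization: Bearer $TOKEN"
# e.g.: curl -s http://localhost:3000/todos -H "$HDR"
echo "$HDR"
```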
&lt;br /&gt;
Configuration Steps: &lt;br /&gt;
&lt;br /&gt;
• Keycloak Setup: &lt;br /&gt;
	• Create a realm and a client in Keycloak for your PostgREST application. &lt;br /&gt;
	• Configure mappers to include user roles or other relevant information in the JWT's claims, specifically mapping them to a claim that PostgREST can use as a role. &lt;br /&gt;
&lt;br /&gt;
• PostgREST Configuration: &lt;br /&gt;
	• Set the jwt-secret in your postgrest.conf file to the appropriate Keycloak public key (as a JWK) or shared secret. &lt;br /&gt;
	• Configure jwt-role-claim-key to specify where PostgREST should look for the role information within the JWT payload. For example, if Keycloak puts the role in a preferred_username claim and you want to use that as the database role, you would set jwt-role-claim-key = &amp;quot;.preferred_username&amp;quot;. &lt;br /&gt;
	• Ensure the corresponding database roles exist in PostgreSQL and have the necessary permissions granted. &lt;br /&gt;
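&lt;br /&gt;
A minimal postgrest.conf sketch tying these settings together; the connection string, role names, and secret are placeholder values, and the role-claim path mirrors the preferred_username example above:&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
db-uri = &amp;quot;postgres://authenticator@localhost:5432/app&amp;quot;&lt;br /&gt;
db-anon-role = &amp;quot;web_anon&amp;quot;&lt;br /&gt;
# HS256 shared secret shown; for RS256, paste the realm's public key as a JWK here instead&lt;br /&gt;
jwt-secret = &amp;quot;replace-with-your-keycloak-secret-or-jwk&amp;quot;&lt;br /&gt;
jwt-role-claim-key = &amp;quot;.preferred_username&amp;quot;&lt;br /&gt;
```&lt;br /&gt;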
&lt;br /&gt;
By following these steps, Keycloak handles the identity management and token issuance, while PostgREST securely enforces access control based on the roles embedded in the JWTs.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Postgres_Extensions&amp;diff=5640</id>
		<title>Postgres Extensions</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Postgres_Extensions&amp;diff=5640"/>
		<updated>2025-08-29T03:33:51Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;https://blog.timescale.com/blog/top-5-postgresql-extensions/&lt;br /&gt;
&lt;br /&gt;
https://www.tigerdata.com/blog/top-8-postgresql-extensions&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Pgsql-http&amp;diff=5639</id>
		<title>Pgsql-http</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Pgsql-http&amp;diff=5639"/>
		<updated>2025-08-28T21:31:05Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SELECT http_set_curlopt('CURLOPT_TIMEOUT', '20');&lt;br /&gt;
SELECT http_set_curlopt('CURLOPT_TCP_KEEPALIVE', '20');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
## security &lt;br /&gt;
&lt;br /&gt;
The http extension requires superuser permissions because it is not marked as a trusted extension and can perform operations that pose a significant security risk.&lt;br /&gt;
The http extension allows a user to make outbound HTTP requests directly from the database, which is a powerful capability that could be abused.&lt;br /&gt;
### Security Ramifications&lt;br /&gt;
Granting a non-superuser the ability to create and use the http extension can lead to several security vulnerabilities. The risks arise from the fact that a user could leverage this functionality for malicious purposes.&lt;br /&gt;
 * Data Exfiltration: A malicious user with access to the http extension could exfiltrate data from the database. For example, they could construct an HTTP request to send sensitive information to an external server they control.&lt;br /&gt;
 * Access to Internal Network: The http extension allows the PostgreSQL server to act as a client. This could be used to probe, scan, or attack other services on the internal network that the database server has access to.&lt;br /&gt;
 * Server-Side Request Forgery (SSRF): If a user can control the URL and headers of the HTTP request, they could exploit an SSRF vulnerability. This could allow them to make the database server act as a proxy to access other internal resources, potentially bypassing firewalls.&lt;br /&gt;
 * Denial of Service (DoS): A user could make a large number of requests to a particular external server, potentially causing a DoS attack against it. Conversely, they could make requests to a slow-responding or nonexistent server, causing the database's worker processes to hang and impacting database performance.&lt;br /&gt;
In general, any extension that allows the database to interact with the external world or perform actions outside of its core function (data storage and retrieval) is likely to require superuser privileges to ensure it is not used to compromise the system. The http extension's ability to make network requests falls squarely into this category.&lt;br /&gt;
&lt;br /&gt;
## extensions and superuser&lt;br /&gt;
&lt;br /&gt;
Not all PostgreSQL extensions require superuser privileges to create.&lt;br /&gt;
Most do, but since PostgreSQL 13 it has been possible for non-superusers to install extensions that are marked trusted. This is achieved by setting the trusted = true flag in the extension's control file. A trusted extension is one its author has declared safe for non-superusers to install.&lt;br /&gt;
### Key Considerations&lt;br /&gt;
 * Trusted Extensions: An extension marked as TRUSTED in its control file (.control) can be installed by users with the CREATE privilege on the current database.&lt;br /&gt;
 * Privileges: For an extension to be created by a non-superuser, the user must have the CREATE privilege on the database and the extension itself must be designed to be trusted.&lt;br /&gt;
 * Safety: The TRUSTED keyword is a way for extension developers to explicitly declare that their extension is safe to install. This means the extension's code and its SQL commands are designed not to pose a security risk.&lt;br /&gt;
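&lt;br /&gt;
The trusted flag is declared directly in an extension's .control file; the sketch below writes a hypothetical control file and checks it the way a packaging script might (real control files live under the directory reported by pg_config --sharedir, in extension/):&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical control file for a demo extension.
printf '%s\n' "comment = 'demo extension'" "default_version = '1.0'" "trusted = true" > demo.control
# PostgreSQL permits non-superuser install only when trusted = true is present.
if grep -q '^trusted = true' demo.control; then RESULT='installable with CREATE privilege'; else RESULT='superuser required'; fi
echo "$RESULT"
```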
### Why Most Extensions Require Superuser&lt;br /&gt;
Many extensions require superuser privileges because they need to perform actions that are not allowed for regular users. These actions can include:&lt;br /&gt;
 * Creating C functions&lt;br /&gt;
 * Accessing the file system&lt;br /&gt;
 * Modifying global settings&lt;br /&gt;
The superuser requirement is a security measure to prevent a malicious or buggy extension from causing system-wide problems.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Pgsql-http&amp;diff=5638</id>
		<title>Pgsql-http</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Pgsql-http&amp;diff=5638"/>
		<updated>2025-08-28T21:25:56Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;SELECT http_set_curlopt('CURLOPT_TIMEOUT', '20');&lt;br /&gt;
SELECT http_set_curlopt('CURLOPT_TCP_KEEPALIVE', '20');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The http extension requires superuser permissions because it is not marked as a trusted extension and can perform operations that pose a significant security risk.&lt;br /&gt;
The http extension allows a user to make outbound HTTP requests directly from the database, which is a powerful capability that could be abused.&lt;br /&gt;
Security Ramifications&lt;br /&gt;
Granting a non-superuser the ability to create and use the http extension can lead to several security vulnerabilities. The risks arise from the fact that a user could leverage this functionality for malicious purposes.&lt;br /&gt;
 * Data Exfiltration: A malicious user with access to the http extension could exfiltrate data from the database. For example, they could construct an HTTP request to send sensitive information to an external server they control.&lt;br /&gt;
 * Access to Internal Network: The http extension allows the PostgreSQL server to act as a client. This could be used to probe, scan, or attack other services on the internal network that the database server has access to.&lt;br /&gt;
 * Server-Side Request Forgery (SSRF): If a user can control the URL and headers of the HTTP request, they could exploit an SSRF vulnerability. This could allow them to make the database server act as a proxy to access other internal resources, potentially bypassing firewalls.&lt;br /&gt;
 * Denial of Service (DoS): A user could make a large number of requests to a particular external server, potentially causing a DoS attack against it. Conversely, they could make requests to a slow-responding or nonexistent server, causing the database's worker processes to hang and impacting database performance.&lt;br /&gt;
In general, any extension that allows the database to interact with the external world or perform actions outside of its core function (data storage and retrieval) is likely to require superuser privileges to ensure it is not used to compromise the system. The http extension's ability to make network requests falls squarely into this category.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Ica_constrained&amp;diff=5637</id>
		<title>Ica constrained</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Ica_constrained&amp;diff=5637"/>
		<updated>2025-08-25T10:48:18Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;To create an Intermediate Certificate Authority (ICA) that can only issue certificates for subdomains of a specific domain (like *.example.com), you must use the Name Constrai...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To create an Intermediate Certificate Authority (ICA) that can only issue certificates for subdomains of a specific domain (like *.example.com), you must use the Name Constraints extension in the ICA's certificate.&lt;br /&gt;
This restriction is applied when the Root CA signs the ICA's Certificate Signing Request (CSR).&lt;br /&gt;
Here are the steps and the required configuration file section.&lt;br /&gt;
1. Create the Configuration File (ica.cnf)&lt;br /&gt;
You need a custom configuration file to define the ICA extensions. Create a file named ica.cnf (or add this section to your main openssl.cnf) and include the Name Constraints extension.&lt;br /&gt;
The critical line is nameConstraints=critical,permitted;DNS:.example.com.&lt;br /&gt;
[ v3_intermediate_ca ]&lt;br /&gt;
# Standard CA constraints&lt;br /&gt;
basicConstraints = critical, CA:true, pathlen:0&lt;br /&gt;
keyUsage = critical, digitalSignature, cRLSign, keyCertSign&lt;br /&gt;
&lt;br /&gt;
# Name Constraints to restrict domains&lt;br /&gt;
# Permitted: only allow DNS names that are subdomains of .example.com&lt;br /&gt;
# This includes test1.example.com, test2.example.com, etc.&lt;br /&gt;
# Note: It does *not* include example.com itself unless you add another line:&lt;br /&gt;
# permitted;DNS:example.com&lt;br /&gt;
nameConstraints = critical, permitted;DNS:.example.com&lt;br /&gt;
&lt;br /&gt;
Explanation of the Name Constraint:&lt;br /&gt;
 * critical: This flag ensures that any client (browser, application) that doesn't understand the Name Constraints extension will reject the certificate, thus enforcing the security boundary.&lt;br /&gt;
 * permitted;DNS:.example.com: This is the whitelist. The dot (.) prefix makes the constraint match the entire zone. Any name that can be constructed by adding one or more labels to the left of .example.com (e.g., test.example.com, a.b.example.com) is permitted.&lt;br /&gt;
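&lt;br /&gt;
As a sketch of the matching rule itself (plain shell pattern matching standing in for the X.509 comparison; not an openssl command):&lt;br /&gt;
&lt;br /&gt;
```shell
# A dNSName satisfies permitted;DNS:.example.com only when at least one label
# sits to the left of example.com.
matches() { case "$1" in *.example.com) echo permitted ;; *) echo excluded ;; esac; }
R1=$(matches test.example.com)   # permitted
R2=$(matches a.b.example.com)    # permitted
R3=$(matches example.com)        # excluded: the bare apex needs its own permitted;DNS:example.com entry
echo "$R1 $R2 $R3"
```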
2. Create the Intermediate CA Key and CSR&lt;br /&gt;
First, generate the private key and the Certificate Signing Request (CSR) for your ICA.&lt;br /&gt;
# 1. Create the ICA Private Key&lt;br /&gt;
openssl genrsa -aes256 -out intermediate.key.pem 4096&lt;br /&gt;
&lt;br /&gt;
# 2. Create the ICA CSR&lt;br /&gt;
# Replace the DN fields (C, O, CN, etc.) with your specific values&lt;br /&gt;
openssl req -new -sha256 \&lt;br /&gt;
    -key intermediate.key.pem \&lt;br /&gt;
    -out intermediate.csr.pem \&lt;br /&gt;
    -subj &amp;quot;/C=US/ST=CA/L=MyCity/O=Example Corp/OU=IT/CN=Example.com Constrained Intermediate CA&amp;quot;&lt;br /&gt;
&lt;br /&gt;
3. Sign the Intermediate CA with the Root CA&lt;br /&gt;
Finally, use your Root CA key and certificate to sign the ICA's CSR, making sure to apply the extensions defined in your custom configuration file.&lt;br /&gt;
# Sign the CSR using the Root CA and the custom configuration&lt;br /&gt;
openssl x509 -req -sha256 \&lt;br /&gt;
    -in intermediate.csr.pem \&lt;br /&gt;
    -CA root.crt.pem \&lt;br /&gt;
    -CAkey root.key.pem \&lt;br /&gt;
    -set_serial 0x100 \&lt;br /&gt;
    -days 3650 \&lt;br /&gt;
    -extfile ica.cnf \&lt;br /&gt;
    -extensions v3_intermediate_ca \&lt;br /&gt;
    -out intermediate.crt.pem&lt;br /&gt;
&lt;br /&gt;
The resulting file, intermediate.crt.pem, is your domain-constrained ICA certificate. Any end-entity certificate signed by this ICA will be valid only if its domain name is under *.example.com.&lt;br /&gt;
4. Verify the Name Constraint&lt;br /&gt;
You can verify the created certificate to ensure the constraint was applied correctly:&lt;br /&gt;
openssl x509 -text -noout -in intermediate.crt.pem | grep -A2 &amp;quot;Name Constraints&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Expected Output:&lt;br /&gt;
            X509v3 Name Constraints: critical&lt;br /&gt;
                Permitted: &lt;br /&gt;
                  DNS:.example.com&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Repo_debian&amp;diff=5636</id>
		<title>Repo debian</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Repo_debian&amp;diff=5636"/>
		<updated>2025-08-24T18:42:29Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;To resolve the &amp;quot;repository is not signed&amp;quot; error, you need to generate a GPG key, use it to sign your repository's `Release` file, and then add the public key to your host's tr...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To resolve the &amp;quot;repository is not signed&amp;quot; error, you need to generate a GPG key, use it to sign your repository's `Release` file, and then add the public key to your host's trusted keys.&lt;br /&gt;
&lt;br /&gt;
Here’s the step-by-step process.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 1\. Generate a GPG Key&lt;br /&gt;
&lt;br /&gt;
First, generate a GPG key pair on the machine where you build the repository. This key will be used for signing.&lt;br /&gt;
&lt;br /&gt;
1.  Run the key generation command:&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    gpg --full-generate-key&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
2.  Follow the prompts. Reasonable defaults are usually fine:&lt;br /&gt;
&lt;br /&gt;
      * Select **(1) RSA and RSA**.&lt;br /&gt;
      * For key size, enter **4096**.&lt;br /&gt;
      * Set an expiration period (e.g., `1y` for one year) or choose `0` for no expiration.&lt;br /&gt;
      * Enter your real name and email address.&lt;br /&gt;
      * Enter a secure passphrase.&lt;br /&gt;
&lt;br /&gt;
3.  List your keys to find the **Key ID**. You will need this for the signing script.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    gpg --list-secret-keys --keyid-format=long&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
    The output will look similar to this. Your Key ID is the long string of characters after `rsa4096/`.&lt;br /&gt;
&lt;br /&gt;
    ```&lt;br /&gt;
    /home/user/.gnupg/pubring.kbx&lt;br /&gt;
    -----------------------------&lt;br /&gt;
    sec   rsa4096/A1B2C3D4E5F6G7H8 2025-08-24 [SC] [expires: 2026-08-24]&lt;br /&gt;
          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX&lt;br /&gt;
    uid                 [ultimate] Your Name &amp;lt;your.email@example.com&amp;gt;&lt;br /&gt;
    ssb   rsa4096/1A2B3C4D5E6F7G8H 2025-08-24 [E]&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
    In this example, the Key ID is `A1B2C3D4E5F6G7H8`.&lt;br /&gt;
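&lt;br /&gt;
For scripts, the Key ID can also be pulled from gpg's machine-readable colon output (`gpg --list-secret-keys --with-colons`), where it is field 5 of the `sec` record; the sketch below runs the extraction over a sample line using the same hypothetical key as above:&lt;br /&gt;
&lt;br /&gt;
```shell
# Sample "sec" record as emitted by gpg --with-colons (hypothetical key id);
# field 5, colon-separated, is the long Key ID.
SAMPLE='sec:u:4096:1:A1B2C3D4E5F6G7H8:1692900000:1724500000::u:::scESC::::::23::0:'
KEY_ID=$(printf '%s\n' "$SAMPLE" | awk -F: '/^sec/ {print $5; exit}')
echo "$KEY_ID"
```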
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 2\. Update Your Repository Script to Sign the Release&lt;br /&gt;
&lt;br /&gt;
Modify your script to sign the `Release` file using the GPG key you just created. This generates the `InRelease` and `Release.gpg` files that APT expects.&lt;br /&gt;
&lt;br /&gt;
Replace the example `GPG_KEY_ID` value (`A1B2C3D4E5F6G7H8`) with the actual Key ID from the previous step.&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
set -eux&lt;br /&gt;
NAME=&amp;quot;$1&amp;quot;&lt;br /&gt;
GPG_KEY_ID=&amp;quot;A1B2C3D4E5F6G7H8&amp;quot; # &amp;lt;-- REPLACE THIS&lt;br /&gt;
DEB_FILE=&amp;quot;$NAME.deb&amp;quot;&lt;br /&gt;
REPO_ROOT=&amp;quot;$NAME&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DIST_DIR=&amp;quot;${REPO_ROOT}/dists/${DISTRIBUTION}&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;${DIST_DIR}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;${DIST_DIR}&amp;quot; &amp;gt; &amp;quot;${DIST_DIR}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gpg --default-key &amp;quot;${GPG_KEY_ID}&amp;quot; -abs -o &amp;quot;${DIST_DIR}/Release.gpg&amp;quot; &amp;quot;${DIST_DIR}/Release&amp;quot;&lt;br /&gt;
gpg --default-key &amp;quot;${GPG_KEY_ID}&amp;quot; --clearsign -o &amp;quot;${DIST_DIR}/InRelease&amp;quot; &amp;quot;${DIST_DIR}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Now, when you run this script, it will create the necessary signature files in your repository directory.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 3\. Make the Public Key Available&lt;br /&gt;
&lt;br /&gt;
Export the public part of your GPG key so client machines can download and trust it.&lt;br /&gt;
&lt;br /&gt;
1.  Export the key to a file. Replace `YOUR_KEY_ID` with your Key ID.&lt;br /&gt;
    ```bash&lt;br /&gt;
    gpg --armor --export YOUR_KEY_ID &amp;gt; public.key&lt;br /&gt;
    ```&lt;br /&gt;
2.  Copy this `public.key` file to your web server so it's downloadable. A good location would be the root of your repository. For example: `http://repo.example.com/uctl-agent/public.key`.&lt;br /&gt;
&lt;br /&gt;
-----&lt;br /&gt;
&lt;br /&gt;
### 4\. Configure Client Hosts&lt;br /&gt;
&lt;br /&gt;
On each machine that uses this repository, you need to download the public key and update the `sources.list` entry.&lt;br /&gt;
&lt;br /&gt;
1.  Download the public key and store it in the APT keyrings directory.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo mkdir -p /etc/apt/keyrings&lt;br /&gt;
    curl -fsSL http://repo.example.com/uctl-agent/public.key | sudo gpg --dearmor -o /etc/apt/keyrings/uctl-agent.gpg&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
2.  Modify your `/etc/apt/sources.list.d/uctl-agent.list` file to tell APT where to find the key for this specific repository using the `signed-by` option.&lt;br /&gt;
&lt;br /&gt;
    ```&lt;br /&gt;
    deb [arch=amd64 signed-by=/etc/apt/keyrings/uctl-agent.gpg] http://repo.example.com/uctl-agent/1.0/ubuntu stable main&lt;br /&gt;
    ```&lt;br /&gt;
&lt;br /&gt;
3.  Finally, run `apt update`. The error should now be gone.&lt;br /&gt;
&lt;br /&gt;
    ```bash&lt;br /&gt;
    sudo apt update&lt;br /&gt;
    ```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Misc&amp;diff=5635</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Misc&amp;diff=5635"/>
		<updated>2025-08-24T17:44:51Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DEB_FILE=&amp;quot;deb/my-app.deb&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;dists/${DISTRIBUTION}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;dists/${DISTRIBUTION}&amp;quot; &amp;gt; &amp;quot;dists/${DISTRIBUTION}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
server {&lt;br /&gt;
    listen 80;&lt;br /&gt;
    server_name repo.example.com;&lt;br /&gt;
    root /var/www/html/my-debian-repo;&lt;br /&gt;
    autoindex on;&lt;br /&gt;
    location / {&lt;br /&gt;
        try_files $uri $uri/ =404;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb http://repo.example.com stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/my-app.list&lt;br /&gt;
sudo apt update&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# More&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Creating a Debian package repository involves organizing your .deb packages and generating the necessary metadata for APT to use. This can be done locally or hosted on a web server. &lt;br /&gt;
Steps to Create a Local Debian Repository: &lt;br /&gt;
1. Install dpkg-dev: &lt;br /&gt;
    sudo apt-get install dpkg-dev&lt;br /&gt;
&lt;br /&gt;
2. Create the repository directory and place your packages in it: &lt;br /&gt;
    mkdir -p /path/to/your/repo/pool/main/&lt;br /&gt;
    cp /path/to/your/package.deb /path/to/your/repo/pool/main/&lt;br /&gt;
&lt;br /&gt;
Replace /path/to/your/repo/ with your desired repository location and /path/to/your/package.deb with the actual path to your Debian package. &lt;br /&gt;
&lt;br /&gt;
3. Generate the Packages file (create the dists directory first so the redirect succeeds): &lt;br /&gt;
    cd /path/to/your/repo/&lt;br /&gt;
    mkdir -p dists/stable/main/binary-all/&lt;br /&gt;
    dpkg-scanpackages --arch all pool/ &amp;gt; dists/stable/main/binary-all/Packages&lt;br /&gt;
    gzip -9c dists/stable/main/binary-all/Packages &amp;gt; dists/stable/main/binary-all/Packages.gz&lt;br /&gt;
&lt;br /&gt;
Adjust all to your package's architecture (e.g., amd64) if it is not architecture-independent. &lt;br /&gt;
&lt;br /&gt;
4. Add the repository to APT sources by creating a new file in /etc/apt/sources.list.d/, for example myrepo.list: &lt;br /&gt;
    echo &amp;quot;deb [trusted=yes] file:/path/to/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
&lt;br /&gt;
The [trusted=yes] option is acceptable for local repositories but should be avoided for remote ones. &lt;br /&gt;
&lt;br /&gt;
5. Update the APT cache and install: &lt;br /&gt;
    sudo apt update&lt;br /&gt;
    sudo apt install your-package-name&lt;br /&gt;
&lt;br /&gt;
Creating a Remote Repository (e.g., via HTTP): &lt;br /&gt;
&lt;br /&gt;
• Set up a Web Server: Install and configure a web server like Apache or Nginx to serve the /path/to/your/repo/ directory. &lt;br /&gt;
• Adjust sources.list.d Entry: &lt;br /&gt;
&lt;br /&gt;
    echo &amp;quot;deb [trusted=yes] http://your-server-ip-or-domain/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
&lt;br /&gt;
Note: For more advanced repository management, consider tools like reprepro or aptly, which offer features like GPG signing, snapshotting, and more robust handling of multiple distributions and architectures. &lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Misc&amp;diff=5634</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Misc&amp;diff=5634"/>
		<updated>2025-08-24T17:44:11Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DEB_FILE=&amp;quot;deb/my-app.deb&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;dists/${DISTRIBUTION}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;dists/${DISTRIBUTION}&amp;quot; &amp;gt; &amp;quot;dists/${DISTRIBUTION}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
server {&lt;br /&gt;
    listen 80;&lt;br /&gt;
    server_name repo.example.com;&lt;br /&gt;
    root /var/www/html/my-debian-repo;&lt;br /&gt;
    autoindex on;&lt;br /&gt;
    location / {&lt;br /&gt;
        try_files $uri $uri/ =404;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;deb http://repo.example.com stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/my-app.list&lt;br /&gt;
sudo apt update&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# More&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Creating a Debian package repository involves organizing your .deb packages and generating the necessary metadata for APT to use. This can be done locally or hosted on a web server. &lt;br /&gt;
Steps to Create a Local Debian Repository: &lt;br /&gt;
install dpkg-dev. &lt;br /&gt;
    sudo apt-get install dpkg-dev&lt;br /&gt;
&lt;br /&gt;
create repository directory and place packages. &lt;br /&gt;
    mkdir -p /path/to/your/repo/pool/main/&lt;br /&gt;
    cp /path/to/your/package.deb /path/to/your/repo/pool/main/&lt;br /&gt;
&lt;br /&gt;
Replace /path/to/your/repo/ with your desired repository location and /path/to/your/package.deb with the actual path to your Debian package. Generate Packages File. &lt;br /&gt;
    cd /path/to/your/repo/&lt;br /&gt;
    dpkg-scanpackages --arch all pool/ &amp;gt; dists/stable/main/binary-all/Packages&lt;br /&gt;
    gzip -9c dists/stable/main/binary-all/Packages &amp;gt; dists/stable/main/binary-all/Packages.gz&lt;br /&gt;
&lt;br /&gt;
Adjust all to your package's architecture (e.g., amd64) if not universal. Add Repository to APT Sources. &lt;br /&gt;
Create a new file in /etc/apt/sources.list.d/, for example, myrepo.list: &lt;br /&gt;
    echo &amp;quot;deb [trusted=yes] file:/path/to/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
&lt;br /&gt;
The [trusted=yes] option is for local repositories and should be used with caution for remote repositories. Update APT Cache and Install. &lt;br /&gt;
    sudo apt update&lt;br /&gt;
    sudo apt install your-package-name&lt;br /&gt;
&lt;br /&gt;
Creating a Remote Repository (e.g., via HTTP): &lt;br /&gt;
&lt;br /&gt;
• Set up a Web Server: Install and configure a web server like Apache or Nginx to serve the /path/to/your/repo/ directory. &lt;br /&gt;
• Adjust sources.list.d Entry: &lt;br /&gt;
&lt;br /&gt;
    echo &amp;quot;deb [trusted=yes] http://your-server-ip-or-domain/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
&lt;br /&gt;
Note: For more advanced repository management, consider tools like reprepro or aptly, which offer features like GPG signing, snapshotting, and more robust handling of multiple distributions and architectures. &lt;br /&gt;
&lt;br /&gt;
```&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Misc&amp;diff=5633</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Misc&amp;diff=5633"/>
		<updated>2025-08-24T17:41:53Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DEB_FILE=&amp;quot;deb/my-app.deb&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;dists/${DISTRIBUTION}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;dists/${DISTRIBUTION}&amp;quot; &amp;gt; &amp;quot;dists/${DISTRIBUTION}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
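&lt;br /&gt;
Before serving the tree, it is worth confirming the script produced everything APT will request. A self-contained sketch of that check (it builds a mock tree first, so it runs without the script above; the file names mirror the variables used there):&lt;br /&gt;
```&lt;br /&gt;
# Sketch: verify the layout the script above is expected to produce.&lt;br /&gt;
# A mock tree is created first so the check is self-contained.&lt;br /&gt;
REPO_ROOT=&amp;quot;$(mktemp -d)/my-debian-repo&amp;quot;&lt;br /&gt;
mkdir -p &amp;quot;${REPO_ROOT}/pool/main&amp;quot; &amp;quot;${REPO_ROOT}/dists/stable/main/binary-amd64&amp;quot;&lt;br /&gt;
touch &amp;quot;${REPO_ROOT}/pool/main/my-app.deb&amp;quot; \&lt;br /&gt;
      &amp;quot;${REPO_ROOT}/dists/stable/main/binary-amd64/Packages&amp;quot; \&lt;br /&gt;
      &amp;quot;${REPO_ROOT}/dists/stable/main/binary-amd64/Packages.gz&amp;quot; \&lt;br /&gt;
      &amp;quot;${REPO_ROOT}/dists/stable/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Every file APT needs must be present.&lt;br /&gt;
for f in pool/main/my-app.deb \&lt;br /&gt;
         dists/stable/main/binary-amd64/Packages \&lt;br /&gt;
         dists/stable/main/binary-amd64/Packages.gz \&lt;br /&gt;
         dists/stable/Release; do&lt;br /&gt;
  [ -e &amp;quot;${REPO_ROOT}/${f}&amp;quot; ] || { echo &amp;quot;missing: ${f}&amp;quot;; exit 1; }&lt;br /&gt;
done&lt;br /&gt;
echo &amp;quot;repo layout OK&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
If any path is missing, the loop names it and exits non-zero.&lt;br /&gt;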
&lt;br /&gt;
```&lt;br /&gt;
server {&lt;br /&gt;
    listen 80;&lt;br /&gt;
    server_name repo.example.com;&lt;br /&gt;
    root /var/www/html/my-debian-repo;&lt;br /&gt;
    autoindex on;&lt;br /&gt;
    location / {&lt;br /&gt;
        try_files $uri $uri/ =404;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb http://repo.example.com stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/my-app.list&lt;br /&gt;
sudo apt update&lt;br /&gt;
```&lt;br /&gt;
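&lt;br /&gt;
Once apt update succeeds, APT has fetched the Packages index, which is plain Deb822-style text. A self-contained sketch of what it extracts (the two sample records are inline, and the package names are invented):&lt;br /&gt;
```&lt;br /&gt;
# Sketch: extract name/version pairs from a Packages index.&lt;br /&gt;
# The sample records below are inline, so no real repository is needed.&lt;br /&gt;
PACKAGES_FILE=&amp;quot;$(mktemp)&amp;quot;&lt;br /&gt;
cat &amp;gt; &amp;quot;${PACKAGES_FILE}&amp;quot; &amp;lt;&amp;lt;'EOF'&lt;br /&gt;
Package: my-app&lt;br /&gt;
Version: 1.2.3&lt;br /&gt;
Architecture: amd64&lt;br /&gt;
Filename: pool/main/my-app.deb&lt;br /&gt;
&lt;br /&gt;
Package: other-tool&lt;br /&gt;
Version: 0.9&lt;br /&gt;
Architecture: amd64&lt;br /&gt;
Filename: pool/main/other-tool.deb&lt;br /&gt;
EOF&lt;br /&gt;
&lt;br /&gt;
# Print one &amp;quot;name version&amp;quot; pair per stanza.&lt;br /&gt;
awk '/^Package:/ {p=$2} /^Version:/ {print p, $2}' &amp;quot;${PACKAGES_FILE}&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
The awk line prints one pair per stanza: my-app 1.2.3, then other-tool 0.9.&lt;br /&gt;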
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# More&lt;br /&gt;
&lt;br /&gt;
Creating a Debian package repository means organizing your .deb packages and generating the metadata APT needs. The repository can be used locally or hosted on a web server.&lt;br /&gt;
&lt;br /&gt;
Steps to create a local Debian repository:&lt;br /&gt;
&lt;br /&gt;
Install dpkg-dev:&lt;br /&gt;
```&lt;br /&gt;
sudo apt-get install dpkg-dev&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Create the repository directory and place your packages in it:&lt;br /&gt;
```&lt;br /&gt;
mkdir -p /path/to/your/repo/pool/main/&lt;br /&gt;
cp /path/to/your/package.deb /path/to/your/repo/pool/main/&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Replace /path/to/your/repo/ with your desired repository location and /path/to/your/package.deb with the actual path to your Debian package. Then generate the Packages file, creating the target directory first:&lt;br /&gt;
```&lt;br /&gt;
cd /path/to/your/repo/&lt;br /&gt;
mkdir -p dists/stable/main/binary-all/&lt;br /&gt;
dpkg-scanpackages --arch all pool/ &amp;gt; dists/stable/main/binary-all/Packages&lt;br /&gt;
gzip -9c dists/stable/main/binary-all/Packages &amp;gt; dists/stable/main/binary-all/Packages.gz&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Adjust all to your package's architecture (e.g., amd64) if it is not architecture-independent. Next, add the repository to the APT sources by creating a new file in /etc/apt/sources.list.d/, for example myrepo.list:&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb [trusted=yes] file:/path/to/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
The [trusted=yes] option disables signature checking; it is acceptable for a local repository but should be used with caution for remote ones. Finally, update the APT cache and install:&lt;br /&gt;
```&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install your-package-name&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Creating a remote repository (e.g., via HTTP):&lt;br /&gt;
&lt;br /&gt;
• Set up a web server: install and configure a web server like Apache or Nginx to serve the /path/to/your/repo/ directory.&lt;br /&gt;
• Adjust the sources.list.d entry:&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb [trusted=yes] http://your-server-ip-or-domain/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Note: for more advanced repository management, consider tools like reprepro or aptly, which offer GPG signing, snapshotting, and more robust handling of multiple distributions and architectures.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Misc&amp;diff=5632</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Misc&amp;diff=5632"/>
		<updated>2025-08-24T17:03:25Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DEB_FILE=&amp;quot;deb/my-app.deb&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;dists/${DISTRIBUTION}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;dists/${DISTRIBUTION}&amp;quot; &amp;gt; &amp;quot;dists/${DISTRIBUTION}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
server {&lt;br /&gt;
    listen 80;&lt;br /&gt;
    server_name repo.example.com;&lt;br /&gt;
    root /var/www/html/my-debian-repo;&lt;br /&gt;
    autoindex on;&lt;br /&gt;
    location / {&lt;br /&gt;
        try_files $uri $uri/ =404;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb http://repo.example.com stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/my-app.list&lt;br /&gt;
sudo apt update&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# More&lt;br /&gt;
&lt;br /&gt;
Creating a Debian package repository means organizing your .deb packages and generating the metadata APT needs. The repository can be used locally or hosted on a web server.&lt;br /&gt;
&lt;br /&gt;
Steps to create a local Debian repository:&lt;br /&gt;
&lt;br /&gt;
Install dpkg-dev:&lt;br /&gt;
```&lt;br /&gt;
sudo apt-get install dpkg-dev&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Create the repository directory and place your packages in it:&lt;br /&gt;
```&lt;br /&gt;
mkdir -p /path/to/your/repo/pool/main/&lt;br /&gt;
cp /path/to/your/package.deb /path/to/your/repo/pool/main/&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Replace /path/to/your/repo/ with your desired repository location and /path/to/your/package.deb with the actual path to your Debian package. Then generate the Packages file, creating the target directory first:&lt;br /&gt;
```&lt;br /&gt;
cd /path/to/your/repo/&lt;br /&gt;
mkdir -p dists/stable/main/binary-all/&lt;br /&gt;
dpkg-scanpackages --arch all pool/ &amp;gt; dists/stable/main/binary-all/Packages&lt;br /&gt;
gzip -9c dists/stable/main/binary-all/Packages &amp;gt; dists/stable/main/binary-all/Packages.gz&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Adjust all to your package's architecture (e.g., amd64) if it is not architecture-independent. Next, add the repository to the APT sources by creating a new file in /etc/apt/sources.list.d/, for example myrepo.list:&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb [trusted=yes] file:/path/to/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
The [trusted=yes] option disables signature checking; it is acceptable for a local repository but should be used with caution for remote ones. Finally, update the APT cache and install:&lt;br /&gt;
```&lt;br /&gt;
sudo apt update&lt;br /&gt;
sudo apt install your-package-name&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Creating a remote repository (e.g., via HTTP):&lt;br /&gt;
&lt;br /&gt;
• Set up a web server: install and configure a web server like Apache or Nginx to serve the /path/to/your/repo/ directory.&lt;br /&gt;
• Adjust the sources.list.d entry:&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb [trusted=yes] http://your-server-ip-or-domain/your/repo/ stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/myrepo.list&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Note: for more advanced repository management, consider tools like reprepro or aptly, which offer GPG signing, snapshotting, and more robust handling of multiple distributions and architectures.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Misc&amp;diff=5631</id>
		<title>Misc</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Misc&amp;diff=5631"/>
		<updated>2025-08-24T17:02:02Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;``` #!/bin/bash REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot; DISTRIBUTION=&amp;quot;stable&amp;quot; COMPONENT=&amp;quot;main&amp;quot; ARCHITECTURE=&amp;quot;amd64&amp;quot; POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot; DEB_FILE=&amp;quot;deb/my...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;```&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
REPO_ROOT=&amp;quot;/var/www/html/my-debian-repo&amp;quot;&lt;br /&gt;
DISTRIBUTION=&amp;quot;stable&amp;quot;&lt;br /&gt;
COMPONENT=&amp;quot;main&amp;quot;&lt;br /&gt;
ARCHITECTURE=&amp;quot;amd64&amp;quot;&lt;br /&gt;
POOL_DIR=&amp;quot;${REPO_ROOT}/pool/${COMPONENT}&amp;quot;&lt;br /&gt;
DEB_FILE=&amp;quot;deb/my-app.deb&amp;quot;&lt;br /&gt;
BINARY_DIR=&amp;quot;dists/${DISTRIBUTION}/${COMPONENT}/binary-${ARCHITECTURE}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${POOL_DIR}&amp;quot;&lt;br /&gt;
cp &amp;quot;${DEB_FILE}&amp;quot; &amp;quot;${POOL_DIR}/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
pushd &amp;quot;${REPO_ROOT}&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&lt;br /&gt;
mkdir -p &amp;quot;${BINARY_DIR}&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive packages &amp;quot;pool&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages&amp;quot;&lt;br /&gt;
&lt;br /&gt;
gzip -9c &amp;quot;${BINARY_DIR}/Packages&amp;quot; &amp;gt; &amp;quot;${BINARY_DIR}/Packages.gz&amp;quot;&lt;br /&gt;
&lt;br /&gt;
apt-ftparchive release &amp;quot;dists/${DISTRIBUTION}&amp;quot; &amp;gt; &amp;quot;dists/${DISTRIBUTION}/Release&amp;quot;&lt;br /&gt;
&lt;br /&gt;
popd &amp;gt; /dev/null&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
server {&lt;br /&gt;
    listen 80;&lt;br /&gt;
    server_name repo.example.com;&lt;br /&gt;
    root /var/www/html/my-debian-repo;&lt;br /&gt;
    autoindex on;&lt;br /&gt;
    location / {&lt;br /&gt;
        try_files $uri $uri/ =404;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
```&lt;br /&gt;
echo &amp;quot;deb http://repo.example.com stable main&amp;quot; | sudo tee /etc/apt/sources.list.d/my-app.list&lt;br /&gt;
sudo apt update&lt;br /&gt;
```&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Virus_file_sums&amp;diff=5630</id>
		<title>Virus file sums</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Virus_file_sums&amp;diff=5630"/>
		<updated>2025-08-23T05:26:53Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
1. The user defines the desired state in configuration files.&lt;br /&gt;
2. The system reads the desired state and fetches the current state from remote systems via providers.&lt;br /&gt;
3. The system compares desired and current states to generate a plan of changes.&lt;br /&gt;
4. The user reviews and approves the plan.&lt;br /&gt;
5. The system executes the plan via providers, updating remote resources.&lt;br /&gt;
6. The system updates the state file to reflect the new current state.&lt;br /&gt;
&lt;br /&gt;
SHA-256 sums and other cryptographic hashes of malicious files are collected and stored in several public databases, often managed by cybersecurity companies and community projects. These databases are key resources for threat intelligence and incident response. [1, 2, 3, 4, 5]  &lt;br /&gt;
How these databases are used&lt;br /&gt;
• Security tools: Antivirus software and other security products check the hashes of files on a system against these databases to identify and block known malware. &lt;br /&gt;
• Threat intelligence: Security researchers use these databases to track and analyze trends in malware and to identify relationships between different malware families. &lt;br /&gt;
• Digital forensics: Forensic investigators use malware hash registries to quickly identify and filter malicious files during an investigation, reducing the analysis time. &lt;br /&gt;
• Hash lookup: You can manually submit a file's hash to a public service to see if it's been identified as malicious by the cybersecurity community. [6, 7, 8, 9]  &lt;br /&gt;
&lt;br /&gt;
Key public malware hash databases and services&lt;br /&gt;
• VirusTotal: A highly regarded, free online service that analyzes files and URLs for viruses using multiple antivirus engines and other tools. You can search for a file's hash to see previous analysis reports. &lt;br /&gt;
• Malware Hash Registry (MHR): Maintained by Team Cymru, this is a web form that allows you to submit one or more hashes to check them against their database of malicious files. &lt;br /&gt;
• Cisco Talos File Reputation Lookup: A file reputation system that allows you to look up a file's SHA-256 hash against their database of billions of files. &lt;br /&gt;
• Hybrid Analysis: A free malware analysis service that detects and analyzes unknown threats using static analysis, reputation lookups, and AV engines. You can search for hashes that have been previously analyzed. &lt;br /&gt;
• Community-driven GitHub repositories: Several community-maintained projects collect malicious hashes that are available for download. &lt;br /&gt;
• CIRCL hashlookup: A public API to look up hash values against a database of files, which includes the NIST National Software Reference Library (NSRL) as well as malware hashes. [1, 2, 9, 10, 11, 12, 13, 14, 15]  &lt;br /&gt;
&lt;br /&gt;
Important considerations&lt;br /&gt;
• Polymorphic and evolving malware: Threat intelligence databases are less effective against polymorphic malware, which can change its code and hash with each infection. For this reason, modern security tools use behavioral analysis in addition to hash-based detection. &lt;br /&gt;
• Limited scope of free services: While there are excellent free lookup services, many enterprise-level security firms maintain more comprehensive and up-to-date threat intelligence feeds that are not freely available to the public. &lt;br /&gt;
• The hash isn't everything: A clean hash doesn't guarantee a clean file. An attacker can create a novel piece of malware with a brand-new hash that has not yet been submitted to any database. A clean hash only means that a file with that exact hash hasn't been flagged as malicious yet. [6, 16, 17, 18, 19]  &lt;br /&gt;
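&lt;br /&gt;
The lookup workflow above can also run fully offline against a downloaded hash list. A minimal sketch, where the file name suspect.bin and the list bad_hashes.txt are invented for illustration, and the list is seeded from the sample file itself so the match fires:&lt;br /&gt;
```&lt;br /&gt;
# Sketch: flag local files whose SHA-256 appears in a known-bad list.&lt;br /&gt;
# Sample file and list are created inline; in practice the list would&lt;br /&gt;
# come from one of the databases above.&lt;br /&gt;
WORKDIR=&amp;quot;$(mktemp -d)&amp;quot;&lt;br /&gt;
printf 'sample contents\n' &amp;gt; &amp;quot;${WORKDIR}/suspect.bin&amp;quot;&lt;br /&gt;
sha256sum &amp;quot;${WORKDIR}/suspect.bin&amp;quot; | awk '{print $1}' &amp;gt; &amp;quot;${WORKDIR}/bad_hashes.txt&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# The check: hash each file and look it up in the list.&lt;br /&gt;
for f in &amp;quot;${WORKDIR}&amp;quot;/*.bin; do&lt;br /&gt;
  h=&amp;quot;$(sha256sum &amp;quot;$f&amp;quot; | awk '{print $1}')&amp;quot;&lt;br /&gt;
  grep -qx &amp;quot;$h&amp;quot; &amp;quot;${WORKDIR}/bad_hashes.txt&amp;quot; &amp;amp;&amp;amp; echo &amp;quot;MATCH: $f&amp;quot;&lt;br /&gt;
done&lt;br /&gt;
```&lt;br /&gt;
This is exactly the check security tools perform at scale, which is also why it inherits the limitations listed above.&lt;br /&gt;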
&lt;br /&gt;
&lt;br /&gt;
[1] https://crossleydan.medium.com/what-to-do-if-you-find-a-dodgy-file-and-dont-know-what-to-do-343694a5b122&lt;br /&gt;
[2] https://github.com/aaryanrlondhe/Malware-Hash-Database&lt;br /&gt;
[3] https://hash.cymru.com/&lt;br /&gt;
[4] https://www.recordedfuture.com/threat-intelligence-101/intelligence-sources-collection/threat-intelligence-sources&lt;br /&gt;
[5] https://www.cyberdb.co/the-role-of-cybersecurity-databases-in-modern-incident-response/&lt;br /&gt;
[6] https://www.malwarepatrol.net/malware-hashes-and-hash-functions/&lt;br /&gt;
[7] https://www.sei.cmu.edu/blog/detecting-and-grouping-malware-using-section-hashes/&lt;br /&gt;
[8] https://sleuthkit.org/autopsy/docs/user-docs/3.1/hash_db_page.html&lt;br /&gt;
[9] https://hash.cymru.com/&lt;br /&gt;
[10] https://github.com/LGOG/Flagged_Hash_list&lt;br /&gt;
[11] https://circl.lu/services/hashlookup/&lt;br /&gt;
[12] https://hash.cymru.com/&lt;br /&gt;
[13] https://gtidocs.virustotal.com/docs/check-vt&lt;br /&gt;
[14] https://www.talosintelligence.com/talos_file_reputation&lt;br /&gt;
[15] https://hybrid-analysis.com/&lt;br /&gt;
[16] https://www.team-cymru.com/mhr&lt;br /&gt;
[17] https://security.stackexchange.com/questions/266109/is-it-enough-to-verify-the-hash-to-ensure-file-is-virus-free&lt;br /&gt;
[18] https://www.sasa-software.com/learning/what-is-file-hashing-in-cybersecurity/&lt;br /&gt;
[19] https://www.sentinelone.com/cybersecurity-101/threat-intelligence/what-is-polymorphic-malware/&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Virus_file_sums&amp;diff=5629</id>
		<title>Virus file sums</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Virus_file_sums&amp;diff=5629"/>
		<updated>2025-08-23T05:12:43Z</updated>

		<summary type="html">&lt;p&gt;Busk: Created page with &amp;quot;Yes, SHA-256 sums and other cryptographic hashes of malicious files are collected and stored in several public databases, often managed by cybersecurity companies and communit...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Yes, SHA-256 sums and other cryptographic hashes of malicious files are collected and stored in several public databases, often managed by cybersecurity companies and community projects. These databases are key resources for threat intelligence and incident response. [1, 2, 3, 4, 5]  &lt;br /&gt;
How these databases are used&lt;br /&gt;
• Security tools: Antivirus software and other security products check the hashes of files on a system against these databases to identify and block known malware. &lt;br /&gt;
• Threat intelligence: Security researchers use these databases to track and analyze trends in malware and to identify relationships between different malware families. &lt;br /&gt;
• Digital forensics: Forensic investigators use malware hash registries to quickly identify and filter malicious files during an investigation, reducing the analysis time. &lt;br /&gt;
• Hash lookup: You can manually submit a file's hash to a public service to see if it's been identified as malicious by the cybersecurity community. [6, 7, 8, 9]  &lt;br /&gt;
&lt;br /&gt;
Key public malware hash databases and services&lt;br /&gt;
• VirusTotal: A highly regarded, free online service that analyzes files and URLs for viruses using multiple antivirus engines and other tools. You can search for a file's hash to see previous analysis reports. &lt;br /&gt;
• Malware Hash Registry (MHR): Maintained by Team Cymru, this is a web form that allows you to submit one or more hashes to check them against their database of malicious files. &lt;br /&gt;
• Cisco Talos File Reputation Lookup: A file reputation system that allows you to look up a file's SHA-256 hash against their database of billions of files. &lt;br /&gt;
• Hybrid Analysis: A free malware analysis service that detects and analyzes unknown threats using static analysis, reputation lookups, and AV engines. You can search for hashes that have been previously analyzed. &lt;br /&gt;
• Community-driven GitHub repositories: Several community-maintained projects collect malicious hashes that are available for download. &lt;br /&gt;
• CIRCL hashlookup: A public API to look up hash values against a database of files, which includes the NIST National Software Reference Library (NSRL) as well as malware hashes. [1, 2, 9, 10, 11, 12, 13, 14, 15]  &lt;br /&gt;
&lt;br /&gt;
Important considerations&lt;br /&gt;
• Polymorphic and evolving malware: Threat intelligence databases are less effective against polymorphic malware, which can change its code and hash with each infection. For this reason, modern security tools use behavioral analysis in addition to hash-based detection. &lt;br /&gt;
• Limited scope of free services: While there are excellent free lookup services, many enterprise-level security firms maintain more comprehensive and up-to-date threat intelligence feeds that are not freely available to the public. &lt;br /&gt;
• The hash isn't everything: A clean hash doesn't guarantee a clean file. An attacker can create a novel piece of malware with a brand-new hash that has not yet been submitted to any database. A clean hash only means that a file with that exact hash hasn't been flagged as malicious yet. [6, 16, 17, 18, 19]  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] https://crossleydan.medium.com/what-to-do-if-you-find-a-dodgy-file-and-dont-know-what-to-do-343694a5b122&lt;br /&gt;
[2] https://github.com/aaryanrlondhe/Malware-Hash-Database&lt;br /&gt;
[3] https://hash.cymru.com/&lt;br /&gt;
[4] https://www.recordedfuture.com/threat-intelligence-101/intelligence-sources-collection/threat-intelligence-sources&lt;br /&gt;
[5] https://www.cyberdb.co/the-role-of-cybersecurity-databases-in-modern-incident-response/&lt;br /&gt;
[6] https://www.malwarepatrol.net/malware-hashes-and-hash-functions/&lt;br /&gt;
[7] https://www.sei.cmu.edu/blog/detecting-and-grouping-malware-using-section-hashes/&lt;br /&gt;
[8] https://sleuthkit.org/autopsy/docs/user-docs/3.1/hash_db_page.html&lt;br /&gt;
[9] https://hash.cymru.com/&lt;br /&gt;
[10] https://github.com/LGOG/Flagged_Hash_list&lt;br /&gt;
[11] https://circl.lu/services/hashlookup/&lt;br /&gt;
[12] https://hash.cymru.com/&lt;br /&gt;
[13] https://gtidocs.virustotal.com/docs/check-vt&lt;br /&gt;
[14] https://www.talosintelligence.com/talos_file_reputation&lt;br /&gt;
[15] https://hybrid-analysis.com/&lt;br /&gt;
[16] https://www.team-cymru.com/mhr&lt;br /&gt;
[17] https://security.stackexchange.com/questions/266109/is-it-enough-to-verify-the-hash-to-ensure-file-is-virus-free&lt;br /&gt;
[18] https://www.sasa-software.com/learning/what-is-file-hashing-in-cybersecurity/&lt;br /&gt;
[19] https://www.sentinelone.com/cybersecurity-101/threat-intelligence/what-is-polymorphic-malware/&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
	<entry>
		<id>https://tech.uvoo.io/index.php?title=Host&amp;diff=5628</id>
		<title>Host</title>
		<link rel="alternate" type="text/html" href="https://tech.uvoo.io/index.php?title=Host&amp;diff=5628"/>
		<updated>2025-08-21T12:30:38Z</updated>

		<summary type="html">&lt;p&gt;Busk: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In networking, a host is any device connected to a computer network that can send or receive information. These devices are assigned at least one network address, like an IP address, which allows them to be uniquely identified on the network.&lt;br /&gt;
While the term &amp;quot;host&amp;quot; is often used interchangeably with &amp;quot;server,&amp;quot; they are not the same. A host can be a client, a server, or a peer in a peer-to-peer network. In contrast, a network node is a broader term that includes hosts and other networking equipment that do not directly participate in user applications, such as routers or switches. Every host is a node, but not every node is a host.&lt;br /&gt;
Examples of Hosts&lt;br /&gt;
 * Computers: Your desktop PC, laptop, or server.&lt;br /&gt;
 * Mobile Devices: Your smartphone or tablet.&lt;br /&gt;
 * Peripherals: A networked printer or a network-attached storage (NAS) device.&lt;br /&gt;
 * Internet of Things (IoT) devices: Smart home devices like thermostats or security cameras.&lt;br /&gt;
A host can function in different ways:&lt;br /&gt;
 * Server: A host that provides services, resources, or data to other devices (clients) on the network.&lt;br /&gt;
 * Client: A host that requests services or resources from a server.&lt;br /&gt;
 * Peer: In a peer-to-peer network, a host that both provides and requests services to and from other hosts.&lt;br /&gt;
&lt;br /&gt;
# Node&lt;br /&gt;
&lt;br /&gt;
An example of a node that is not a host is a router.&lt;br /&gt;
A node is any device connected to a network, including computers, routers, and switches. A host, however, is a specific type of node that can be assigned an IP address and participate in user applications, like a computer or a smartphone.&lt;br /&gt;
A router's primary function is to direct traffic between networks. It forwards data packets to their intended destinations but doesn't originate or terminate the data itself in the way a host does. While a router is a network device and thus a node, it doesn't run user applications or directly serve content, so it's not considered a host.&lt;br /&gt;
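&lt;br /&gt;
The distinction is visible on any Linux machine: a host has at least one network address of its own, and the loopback entry exists even with no physical interface attached. A tiny sketch using getent (assuming a standard glibc or busybox userland):&lt;br /&gt;
```&lt;br /&gt;
# Sketch: a host is a node with at least one network address.&lt;br /&gt;
# Loopback is the address essentially every host has.&lt;br /&gt;
addr=&amp;quot;$(getent hosts localhost | awk '{print $1; exit}')&amp;quot;&lt;br /&gt;
echo &amp;quot;localhost resolves to: ${addr}&amp;quot;&lt;br /&gt;
```&lt;br /&gt;
A router also has addresses on its interfaces, but it forwards other nodes' packets rather than terminating application traffic, which is why it is a node and not a host.&lt;br /&gt;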
&lt;br /&gt;
# Node Types&lt;br /&gt;
&lt;br /&gt;
In the context of TCP/IP, there are three primary node types:&lt;br /&gt;
 * Hosts: These are devices that send and receive data. They have a unique IP address and participate in the communication at the application layer. Examples include computers, smartphones, and servers.&lt;br /&gt;
 * Routers: These are devices that connect different networks and forward data packets between them. They operate at the network layer and are essential for routing traffic across the internet.&lt;br /&gt;
 * Switches: These are devices that connect other devices within a single network. They operate at the data link layer and forward frames to the correct destination port based on their MAC address.&lt;/div&gt;</summary>
		<author><name>Busk</name></author>
	</entry>
</feed>