Compare commits


10 commits

deb6c38d7b chore: commit homelab setup — deployment, services, orchestration, skill
- Add .gitignore: exclude compiled binaries, build artifacts, and Helm
  values files containing real secrets (authentik, prometheus)
- Add all Kubernetes deployment manifests (deployment/)
- Add services source code: ha-sync, device-inventory, games-console,
  paperclip, parts-inventory
- Add Ansible orchestration: playbooks, roles, inventory, cloud-init
- Add hardware specs, execution plans, scripts, HOMELAB.md
- Add skills/homelab/SKILL.md + skills/install.sh to preserve Copilot skill
- Remove previously-tracked inventory-cli binary from git index

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-09 08:10:32 +02:00
f2c4324fb0 fix: use internal email for gitadmin, free user email for SSO login
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-08 23:42:04 +02:00
26db771279 ha-sync: add internal/kube package with CronJob/Lease management
- internal/kube/client.go: NewClient() with in-cluster + kubeconfig fallback
- internal/kube/cronjob.go: JobSpec, ApplyCronJob, DeleteCronJob, TriggerJob,
  GetLockStatus, SuspendCronJob, ListCronJobs, ImportFromCronJob
- Makefile/Dockerfile: add ha-sync-ctl build target
- rbac.yaml: add batch/cronjobs+jobs permissions and watch verb on leases

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-08 23:38:32 +02:00
88540b6ded feat: deploy Forgejo self-hosted git server
- Add ZFS NFS datasets: media-pool/git (50G) and media-pool/git-db (10G)
- Add nfs-git and nfs-git-db NFS subdir provisioner Helm values
- Deploy Forgejo 10 (StatefulSet) + PostgreSQL 16 (StatefulSet) in infrastructure namespace
- StorageClasses: nfs-git (repos/LFS, 50Gi) and nfs-git-db (postgres, 10Gi)
- Ingress: git.vandachevici.ro with TLS via cert-manager
- SSH NodePort 30022 for git clone ssh://git@host:30022/user/repo.git
- Authentik OIDC provider configured (client ID: ZdnrHgyfUncSIPPrOe1o7UAA42N7BMhUHXjQVw4Y)
- Add 'git' subdomain to dns-updater configmap

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-08 23:10:41 +02:00
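The ZFS side of this commit, sketched as shell commands. Dataset names and sizes come from the message above; the mountpoints and `sharenfs` options are illustrative assumptions, not copied from the repo:

```bash
# Sketch only: dataset names/quotas from the commit message; mountpoints
# and the NFS export options are assumptions.
zfs create -o quota=50G -o mountpoint=/data/git    media-pool/git
zfs create -o quota=10G -o mountpoint=/data/git-db media-pool/git-db
zfs set sharenfs='rw=@192.168.2.0/24' media-pool/git
zfs set sharenfs='rw=@192.168.2.0/24' media-pool/git-db
```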
29440b68a9 fix: sudo dmidecode fallback when running without root
Without root, dmidecode exits 0 but outputs only a header comment
with no Handle blocks (DMI tables are root-only in sysfs).
The previous empty-string check never triggered the sudo retry.

Now checks for the presence of 'Handle ' lines: if absent, retries
transparently with sudo. Users with passwordless sudo get full hardware
detail (CPU slots, memory sticks/slots, cache, voltage) without needing
to explicitly invoke sudo themselves.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-06 23:45:35 +02:00
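The detection logic described above, as a minimal shell sketch; the real implementation lives in the inventory-cli C++ code:

```bash
# Minimal sketch of the fix: treat dmidecode output without 'Handle ' blocks
# as root-restricted and retry transparently with non-interactive sudo.
out=$(dmidecode 2>/dev/null)
if ! grep -q '^Handle ' <<<"$out"; then
  out=$(sudo -n dmidecode 2>/dev/null) || echo "sudo unavailable; partial data only" >&2
fi
grep -c '^Handle ' <<<"$out"   # number of DMI structures recovered
```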
69113c1ea7 feat: richer hardware discovery — CPU cache/voltage, memory type/bandwidth/part-no, GPU discovery
- CPU: max speed, bus MHz, L1/L2/L3 cache (from sysfs), voltage, socket type; /proc/cpuinfo fallback for non-root
- Memory sticks: DDR type, form factor, part number, rank, data width, theoretical bandwidth
- GPU: new part type discovered via lspci + /sys/class/drm + nvidia-smi; shows VRAM and display outputs
- discover-only tree updated to show all new fields

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-01 01:20:33 +02:00
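The discovery sources named in this commit can be exercised by hand; these are standard tool invocations, not repo code, and output varies by host:

```bash
# GPU and CPU discovery inputs used by the feature above.
lspci -nn | grep -Ei 'vga|3d|display'          # enumerate PCI GPUs
ls /sys/class/drm/                             # kernel DRM cards
nvidia-smi --query-gpu=name,memory.total --format=csv 2>/dev/null  # VRAM, if NVIDIA
grep -m1 'model name' /proc/cpuinfo            # non-root CPU fallback source
```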
bf7a7937e0 feat(inventory-cli): build script, manpage help, discover-only command
build-cli.sh
  Simple shell script that builds inventory-cli inside Docker and
  extracts the binary to build/ (or a custom path).  Replaces the
  need to use the heavier build-and-load.sh just to compile the CLI.

--help
  Replaced the terse usage() stub with a full UNIX man-page style
  reference covering NAME, SYNOPSIS, DESCRIPTION, GLOBAL OPTIONS,
  COMMANDS (grouped by area), PART TYPES, FIELD KEYS, EXAMPLES,
  and NOTES.

discover-only [--type <type>]
  New command that runs local hardware discovery without contacting
  the inventory server and prints results as an ASCII tree rooted at
  the hostname.  Each section (CPUs, CPU Slots, Memory Sticks, Memory
  Slots, Disks, NICs) lists discovered components with key attributes
  inline.  Useful for inspection and troubleshooting.

discovery.cpp: store interface name in K_NAME for NICs
  ifname (e.g. "nic0", "eno1") is now emitted so discover-only and
  the server-side UI can display the kernel device name.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-04-01 00:49:17 +02:00
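As described, the CLI can be built and exercised without contacting a server; script name, output path, and flags are taken from the message above, and the output shape is illustrative:

```bash
# Build inside Docker, extract the binary to build/, then inspect locally.
./build-cli.sh
./build/inventory-cli --help | head
./build/inventory-cli discover-only --type nic   # ASCII tree of local NICs
```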
7a8ea3e88f fix: correct CPU and memory slot deduplication on HP ProLiant
- discovery.cpp: remove K_SERIAL emission from dmidecode CPU "ID" field
  (it is the CPUID instruction result, identical across matching processors
  in a multi-socket system, not a unique per-slot serial number)
  → upsert_part now falls through to K_SOCKET natural key, correctly
    inserting both "Proc 1" and "Proc 2" as separate records

- discovery.cpp: skip memory slot blocks with missing or empty Locator
  (HP ProLiant firmware returns a phantom type-17 block with no Locator;
  this was causing a 19th slot record to be inserted with empty locator
  that could never be deduplicated on subsequent runs)

Verified on HP ProLiant DL360 G7:
  cpu=2, cpu_slots=2, memory_sticks=12, memory_slots=18, disks=5, nics=4

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-31 22:50:33 +02:00
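Why the dmidecode "ID" field cannot serve as a per-slot key, shown with hand-run commands; the output below is illustrative:

```bash
# On a dual-socket board both processors report the same CPUID bytes, so
# 'ID' is not unique per slot, while 'Socket Designation' is.
sudo dmidecode -t processor | grep -E 'Socket Designation|^[[:space:]]*ID:'
# Socket Designation: Proc 1
# ID: C2 06 02 00 FF FB EB BF
# Socket Designation: Proc 2
# ID: C2 06 02 00 FF FB EB BF
```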
b9736266c9 nfs: use soft,timeo=30 mounts instead of hard on all NFS provisioners
Add soft,timeo=30 mount options to all nfs-subdir-external-provisioner
Helm values files so that newly created PVs use non-blocking NFS mounts.
StorageClasses have been patched directly in the cluster.

Motivation: a USB drive disconnect on kube-node-1 caused the NFS server
to go down for ~2.5 days. The HP Proxmox host had hard NFS mounts to
the Dell which blocked df -h indefinitely until the NFS server recovered.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-31 22:49:59 +02:00
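For PVs created after this change, mount options come from the StorageClass. A minimal sketch of what a patched class looks like; the class name and provisioner string below are assumptions, not copied from the cluster:

```bash
# Illustrative StorageClass shape with non-blocking NFS mounts.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-jellyfin
provisioner: cluster.local/nfs-jellyfin-nfs-subdir-external-provisioner
mountOptions:
- soft
- timeo=30
EOF
```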
a110afa40b feat(device-inventory): add management web UI and pciutils for NIC discovery
- web-ui/main.go: full CRUD REST API + dark-theme SPA (servers, parts, part-types)
- Dockerfile.cli: add pciutils runtime dep for lspci NIC enrichment

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
2026-03-31 22:42:20 +02:00
8988 changed files with 2913618 additions and 264 deletions

.gitignore vendored Normal file
@@ -0,0 +1,20 @@
backups/
# Helm values files containing real secrets (manage manually, never commit)
deployment/helm/authentik/values.yaml
deployment/helm/monitoring/prometheus-values.yaml
# Compiled binaries and build artifacts (source code is in services/)
services/*/bin/
services/*/build/
services/device-inventory/bin/
services/device-inventory/web-ui/inventory-web-ui
orchestration/ansible/roles/inventory-cli/files/inventory-cli
orchestration/ansible/roles/inventory-cli/files/device-inventory
# Session / planning artifacts
plan.md
# OS artifacts
.DS_Store
*.env

HOMELAB.md Normal file
@@ -0,0 +1,323 @@
# Homelab Specs
---
## Hardware
### Dell OptiPlex 7070
- **Role**: kube-node-1 (control-plane + worker), bare metal
- **IP**: 192.168.2.100
- **SSH**: `dan@192.168.2.100`
- **CPU**: Intel Core i5-9500, 6c/6t, 3.0 GHz base / 4.4 GHz boost, 9 MB L3, 65W TDP, VT-x
- **RAM**: 16 GB DDR4 2666 MT/s DIMM
- **Storage**:
- `nvme0`: Samsung PM991 256 GB — 1G EFI, 2G /boot, 235.4G LVM (100G → /)
- `sda`: Seagate Expansion 2 TB → `/data/photos` (ext4)
- `sdb`: Seagate Expansion+ 2 TB → `/mnt/sdb-ro` (ext4, **READ-ONLY — never touch**)
- `sdc1`: Seagate Expansion 1 TB → `/data/media` (ext4)
- `sdc2`: Seagate Expansion 788 GB → `/data/games` (ext4)
- `sdd`: Samsung HD103SI 1 TB → `/data/owncloud` (ext4)
- `sde`: Hitachi HTS545050 500 GB → `/data/infra` (ext4)
- `sdf`: Seagate 1 TB → `/data/ai` (ext4)
- **Total**: ~7 TB
- **Network**: 1 Gbit/s
- **NFS server**: exports `/data/{games,media,photos,owncloud,infra,ai}` to LAN
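An illustrative `/etc/exports` shape for these exports; the export options are assumptions and are not recorded in this document:

```bash
# Illustrative only: real export options are not documented here.
cat /etc/exports
# /data/games     192.168.2.0/24(rw,sync,no_subtree_check)
# /data/media     192.168.2.0/24(rw,sync,no_subtree_check)
# /data/photos    192.168.2.0/24(rw,sync,no_subtree_check)
# /data/owncloud  192.168.2.0/24(rw,sync,no_subtree_check)
# /data/infra     192.168.2.0/24(rw,sync,no_subtree_check)
# /data/ai        192.168.2.0/24(rw,sync,no_subtree_check)
```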
### HP ProLiant DL360 G7
- **Role**: Proxmox hypervisor (192.168.2.193)
- **SSH**: `root@192.168.2.193` (local id_rsa)
- **Web UI**: https://proxmox.vandachevici.ro
- **Storage**:
- 2× HPE SAS 900 GB in RAID 1+0 → 900 GB usable (Proxmox OS)
- 4× HPE SAS 900 GB in RAID 1+0 → 1.8 TB usable (VM disks)
- Promise VTrak J830s: 2× 16 TB → `media-pool` (ZFS, ~14 TB usable)
- **Total**: ~18 TB
### Promise VTrak J830s
- Connected to HP ProLiant via SAS
- 2× 16 TB disks, ZFS pool `media-pool`
- ZFS datasets mounted at `/data/X` on HP (matching Dell paths)
---
## Storage Layout
### Dell `/data` drives (primary/local)
| Mount | Device | Size | Contents |
|---|---|---|---|
| `/data/games` | sdc2 | 788 GB | Game server worlds and kits |
| `/data/media` | sdc1 | 1.1 TB | Jellyfin media library |
| `/data/photos` | sda | 916 GB | Immich photo library |
| `/data/owncloud` | sdd | 916 GB | OwnCloud files |
| `/data/infra` | sde | 458 GB | Prometheus, infra data |
| `/data/ai` | sdf | 916 GB | Paperclip, Ollama models |
| `/mnt/sdb-ro` | sdb | 1.8 TB | **READ-ONLY** archive — never modify |
### HP VTrak ZFS datasets (HA mirrors)
| ZFS Dataset | Mountpoint on HP | NFS export |
|---|---|---|
| media-pool/jellyfin | `/data/media` | ✅ |
| media-pool/immich | `/data/photos` | ✅ |
| media-pool/owncloud | `/data/owncloud` | ✅ |
| media-pool/games | `/data/games` | ✅ |
| media-pool/minecraft | `/data/games/minecraft` | ✅ |
| media-pool/factorio | `/data/games/factorio` | ✅ |
| media-pool/openttd | `/data/games/openttd` | ✅ |
| media-pool/infra | `/data/infra` | ✅ |
| media-pool/ai | `/data/ai` | ✅ |
Legacy bind mounts at `/media-pool/X` → `/data/X` preserved for K8s PV compatibility.
### Cross-mounts (HA access)
| From | Mount point | To |
|---|---|---|
| Dell | `/mnt/hp/data-{games,media,photos,owncloud,infra,ai}` | HP VTrak NFS |
| HP | `/mnt/dell/data-{games,media,photos,owncloud,infra,ai}` | Dell NFS |
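Illustrative fstab entries for one cross-mount pair; the `soft,timeo=30` options follow the NFS soft-mount change and are an assumption here:

```bash
# Illustrative fstab lines (one pair shown); soft,timeo=30 keeps df/ls from
# blocking when the peer's NFS server is down.
grep ' nfs ' /etc/fstab
# on the Dell:  192.168.2.193:/data/media  /mnt/hp/data-media    nfs  soft,timeo=30,_netdev  0 0
# on the HP:    192.168.2.100:/data/media  /mnt/dell/data-media  nfs  soft,timeo=30,_netdev  0 0
```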
---
## VMs on HP ProLiant (Proxmox)
| VM ID | Name | IP | RAM | Role |
|---|---|---|---|---|
| 100 | kube-node-2 | 192.168.2.195 | 16 GB | K8s worker |
| 101 | kube-node-3 | 192.168.2.196 | 16 GB | K8s control-plane + worker |
| 103 | kube-arbiter | 192.168.2.200 | 6 GB | K8s control-plane (etcd + API server, NoSchedule) |
| 104 | local-ai | 192.168.2.88 | — | Ollama + openclaw-gateway (Tesla P4 GPU passthrough) |
| 106 | ansible-control | 192.168.2.70 | — | Ansible control node |
| 107 | remote-ai | 192.168.2.91 | — | openclaw-gateway (remote, cloud AI) |
⚠️ kube-node-2, kube-node-3, and kube-arbiter are all VMs on the HP ProLiant. HP ProLiant failure = loss of 3/4 K8s nodes simultaneously. Mitigation: add a Raspberry Pi 4/5 (8 GB) as a 4th physical host.
SSH: `dan@<ip>` for all VMs
---
## Kubernetes Cluster
- **Version**: 1.32.13
- **CNI**: Flannel
- **Dashboard**: https://192.168.2.100:30443 (self-signed cert, token auth)
- **Token file**: `/home/dan/homelab/kube/cluster/DASHBOARD-ACCESS.txt`
- **StorageClass**: `local-storage` (hostPath on kube-node-1)
- **NFS provisioners**: `nfs-provisioners` namespace (nfs-subdir-external-provisioner)
### Nodes
| Node | Role | IP | Host |
|---|---|---|---|
| kube-node-1 | control-plane + worker | 192.168.2.100 | Dell OptiPlex 7070 (bare metal) |
| kube-node-2 | worker | 192.168.2.195 | VM on HP ProLiant (16 GB RAM) |
| kube-node-3 | control-plane + worker | 192.168.2.196 | VM on HP ProLiant (16 GB RAM) |
| kube-arbiter | control-plane | 192.168.2.200 | VM on HP ProLiant (1c/6GB, tainted NoSchedule) |
**etcd**: 3 members (kube-node-1 + kube-arbiter + kube-node-3) — quorum survives 1 member failure ✅
**controlPlaneEndpoint**: `192.168.2.100:6443` ⚠️ SPOF — kube-vip (Phase 1b) not yet deployed; if kube-node-1 goes down, workers lose API access even though kube-arbiter and kube-node-3 API servers are still running
---
## High Availability Status
### Control Plane
| Component | Status | Notes |
|---|---|---|
| etcd | ✅ 3 members | kube-node-1 + kube-arbiter + kube-node-3; tolerates 1 failure |
| API server VIP | ⚠️ Not yet deployed | controlPlaneEndpoint hardcoded to 192.168.2.100; kube-vip (Phase 1b) pending |
| CoreDNS | ✅ Required anti-affinity | Pods spread across different nodes (kube-node-1 + kube-node-2) |
### Workloads (replicas=2, required pod anti-affinity)
| Service | Replicas | PDB |
|---|---|---|
| authentik-server | 2 | ✅ |
| authentik-worker | 2 | ✅ |
| cert-manager | 2 | ✅ |
| cert-manager-webhook | 2 | ✅ |
| cert-manager-cainjector | 2 | ✅ |
| parts-api | 2 | ✅ |
| parts-ui | 2 | ✅ |
| ha-sync-ui | 2 | ✅ |
| games-console-backend | 2 | ✅ |
| games-console-ui | 2 | ✅ |
| ingress-nginx | DaemonSet | ✅ (runs on all workers) |
### Storage
| PV | Type | Notes |
|---|---|---|
| paperclip-data-pv | NFS (192.168.2.252) | ✅ Migrated from hostPath; can schedule on any node |
| prometheus-storage-pv | hostPath on kube-node-1 | ⚠️ Still pinned to kube-node-1 (out of scope) |
### Known Remaining SPOFs
| Risk | Description | Mitigation |
|---|---|---|
| HP ProLiant physical host | kube-node-2/3 + kube-arbiter are all HP VMs | Add Raspberry Pi 4/5 (8 GB) as 4th physical host |
| controlPlaneEndpoint | Hardcoded to kube-node-1 IP | Deploy kube-vip with VIP (e.g. 192.168.2.50) |
---
### games
| Service | NodePort | Storage |
|---|---|---|
| minecraft-home | 31112 | HP NFS `/data/games/minecraft` |
| minecraft-cheats | 31111 | HP NFS `/data/games/minecraft` |
| minecraft-creative | 31559 | HP NFS `/data/games/minecraft` |
| minecraft-johannes | 31563 | HP NFS `/data/games/minecraft` |
| minecraft-noah | 31560 | HP NFS `/data/games/minecraft` |
| Factorio | — | HP NFS `/data/games/factorio` |
| OpenTTD | — | HP NFS `/data/games/openttd` |
Minecraft operators: LadyGisela5, tomgates24, anutzalizuk, toranaga_samma
### monitoring
- **Helm release**: `obs`, chart `prometheus-community/kube-prometheus-stack`
- **Values file**: `/home/dan/homelab/deployment/helm/prometheus/prometheus-helm-values.yaml`
- **Components**: Prometheus, Grafana, AlertManager, Node Exporter, Kube State Metrics
- **Grafana**: NodePort 31473 → http://192.168.2.100:31473
- **Storage**: 100 Gi hostPath PV at `/data/infra/prometheus` on kube-node-1
### infrastructure
- General MySQL/MariaDB (StatefulSet) — HP NFS `/media-pool/general-db`
- Speedtest Tracker — HP NFS `/media-pool/speedtest`
- DNS updater (DaemonSet, `tunix/digitalocean-dyndns`) — updates DigitalOcean DNS
- Proxmox ingress → 192.168.2.193:8006
### storage
- **OwnCloud** (`owncloud/server:10.12`) — drive.vandachevici.ro, admin: sefu
- MariaDB (StatefulSet), Redis (Deployment), OwnCloud server (2 replicas)
- Storage: HP NFS `/data/owncloud`
### media
- **Jellyfin** — media.vandachevici.ro, storage: HP NFS `/data/media`
- **Immich** — photos.vandachevici.ro, storage: HP NFS `/data/photos`
- Components: server (2 replicas), ML (2 replicas), valkey, postgresql
### iot
- IoT MySQL (StatefulSet, db: `iot_db`)
- IoT API (`iot-api:latest`, NodePort 30800) — requires `topology.homelab/server: dell` label
### ai
- **Paperclip** — paperclip.vandachevici.ro
- Embedded PostgreSQL at `/data/ai/paperclip/instances/default/db`
- Config: `/data/ai/paperclip/instances/default/config.json`
- NFS PV via keepalived VIP `192.168.2.252:/data/ai/paperclip` (can schedule on any node) ✅
- Env: `PAPERCLIP_AGENT_JWT_SECRET` (in K8s secret)
---
## AI / OpenClaw
### local-ai VM (192.168.2.88) — GPU instance
- **GPU**: NVIDIA Tesla P4, 8 GB VRAM (PCIe passthrough from Proxmox)
- VFIO: `/etc/modprobe.d/vfio.conf` ids=10de:1bb3, allow_unsafe_interrupts=1
- initramfs updated for persistence
- **Ollama**: listening on `0.0.0.0:11434`, models at `/data/ollama/models`
- Loaded: `qwen3:8b` (5.2 GB)
- **openclaw-gateway**: `ws://0.0.0.0:18789`, auth mode: token
- Token: in `~/.openclaw/openclaw.json` → `gateway.auth.token`
- Systemd: `openclaw-gateway.service` (Type=simple, enabled)
### remote-ai VM (192.168.2.91)
- **openclaw-gateway**: installed (v2026.3.13), config at `~/.openclaw/openclaw.json`
- Uses cloud AI providers (Claude API key required)
### Connecting Paperclip to openclaw
- URL: `ws://192.168.2.88:18789/`
- Auth: token from `~/.openclaw/openclaw.json` → `gateway.auth.token`
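A quick way to pull the token this section references and confirm the gateway port is reachable; the `jq` usage is an assumption:

```bash
# Read the gateway token and sanity-check the WebSocket port.
jq -r '.gateway.auth.token' ~/.openclaw/openclaw.json
nc -zv 192.168.2.88 18789
```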
---
## Network Endpoints
| Service | URL / Address |
|---|---|
| K8s Dashboard | https://192.168.2.100:30443 |
| Proxmox UI | https://proxmox.vandachevici.ro |
| Grafana | http://192.168.2.100:31473 |
| Jellyfin | https://media.vandachevici.ro |
| Immich (photos) | https://photos.vandachevici.ro |
| OwnCloud | https://drive.vandachevici.ro |
| Paperclip | https://paperclip.vandachevici.ro |
| IoT API | http://192.168.2.100:30800 |
| minecraft-home | 192.168.2.100:31112 |
| minecraft-cheats | 192.168.2.100:31111 |
| minecraft-creative | 192.168.2.100:31559 |
| minecraft-johannes | 192.168.2.100:31563 |
| minecraft-noah | 192.168.2.100:31560 |
| Ollama (local-ai) | http://192.168.2.88:11434 |
| openclaw gateway (local-ai) | ws://192.168.2.88:18789 |
| Ollama (Dell) | http://192.168.2.100:11434 |
### DNS subdomains managed (DigitalOcean)
`photos`, `backup`, `media`, `chat`, `openttd`, `excalidraw`, `prv`, `drive`, `grafana`, `paperclip`, `proxmox`
---
## Common Operations
### Apply manifests
```bash
kubectl apply -f /home/dan/homelab/deployment/<namespace>/
```
### Prometheus (Helm)
```bash
helm upgrade obs prometheus-community/kube-prometheus-stack \
  -n monitoring \
  -f /home/dan/homelab/deployment/helm/prometheus/prometheus-helm-values.yaml
```
### NFS provisioners (Helm)
```bash
# Example: jellyfin
helm upgrade nfs-jellyfin nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -n nfs-provisioners \
  -f /home/dan/homelab/deployment/helm/nfs-provisioners/values-jellyfin.yaml
```
### Troubleshooting: Flannel CNI after reboot
If all pods stuck in `ContainerCreating` after reboot:
```bash
# 1. Check default route exists on kube-node-1
ip route show | grep default
# Fix: sudo ip route add default via 192.168.2.1 dev eno1
# Persist: check /etc/netplan/00-installer-config.yaml has routes section
# 2. Restart flannel pod on node-1
kubectl delete pod -n kube-flannel -l app=flannel --field-selector spec.nodeName=kube-node-1
```
### Troubleshooting: kube-node-3 NotReady after reboot
Likely swap re-enabled:
```bash
ssh dan@192.168.2.196 "sudo swapoff -a && sudo sed -i 's|^/swap.img|#/swap.img|' /etc/fstab && sudo systemctl restart kubelet"
```
---
## Workspace Structure
```
/home/dan/homelab/
├── HOMELAB.md — this file
├── plan.md — original rebuild plan
├── step-by-step.md — execution tracker
├── deployment/ — K8s manifests and Helm values
│   ├── 00-namespaces.yaml
│   ├── ai/ — Paperclip
│   ├── default/ — DNS updater
│   ├── games/ — Minecraft, Factorio, OpenTTD
│   ├── helm/ — Helm values (prometheus, nfs-provisioners)
│   ├── infrastructure/ — ingress-nginx, cert-manager, general-db, speedtest, proxmox-ingress
│   ├── iot/ — IoT DB + API
│   ├── media/ — Jellyfin, Immich
│   ├── monitoring/ — (managed by Helm)
│   └── storage/ — OwnCloud
├── backups/ — K8s secrets backup (gitignored)
├── hardware/ — hardware spec docs
├── orchestration/
│   └── ansible/ — playbooks, inventory, group_vars, cloud-init
└── services/
    └── device-inventory/ — C++ CMake project: network device discovery
```

deployment/00-namespaces.yaml Normal file
@@ -0,0 +1,45 @@
---
apiVersion: v1
kind: Namespace
metadata:
  name: games
---
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: Namespace
metadata:
  name: infrastructure
---
apiVersion: v1
kind: Namespace
metadata:
  name: storage
---
apiVersion: v1
kind: Namespace
metadata:
  name: media
---
apiVersion: v1
kind: Namespace
metadata:
  name: iot
---
apiVersion: v1
kind: Namespace
metadata:
  name: ai
---
apiVersion: v1
kind: Namespace
metadata:
  name: backup
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

deployment/README.md Normal file
@@ -0,0 +1,150 @@
# Homelab Kubernetes Deployment Manifests
Reconstructed 2026-03-20 from live cluster state using `kubectl.kubernetes.io/last-applied-configuration` annotations.
## Directory Structure
```
deployment/
├── 00-namespaces.yaml           # All namespace definitions — apply first
├── games/
│   ├── factorio.yaml            # Factorio server (hostPort 34197)
│   ├── minecraft-cheats.yaml    # Minecraft cheats (hostPort 25111)
│   ├── minecraft-creative.yaml  # Minecraft creative (hostPort 25559)
│   ├── minecraft-home.yaml      # Minecraft home (hostPort 25112)
│   ├── minecraft-jaron.yaml     # Minecraft jaron (hostPort 25564)
│   ├── minecraft-johannes.yaml  # Minecraft johannes (hostPort 25563)
│   ├── minecraft-noah.yaml      # Minecraft noah (hostPort 25560)
│   └── openttd.yaml             # OpenTTD (NodePort 30979/30978)
├── monitoring/
│   └── prometheus-pv.yaml       # Manual local-storage PV for Prometheus
├── infrastructure/
│   ├── cert-issuers.yaml        # ClusterIssuers: letsencrypt-prod + staging
│   ├── dns-updater.yaml         # DaemonSet + ConfigMap (DigitalOcean DynDNS)
│   ├── general-db.yaml          # MySQL 9 StatefulSet (shared DB for speedtest etc.)
│   ├── paperclip.yaml           # Paperclip AI — PV + Deployment + Service + Ingress
│   └── speedtest-tracker.yaml   # Speedtest Tracker + ConfigMap + Ingress
├── storage/
│   ├── owncloud.yaml            # OwnCloud server + ConfigMap + Ingress
│   ├── owncloud-mariadb.yaml    # MariaDB 10.6 StatefulSet
│   └── owncloud-redis.yaml      # Redis 6 Deployment
├── media/
│   ├── jellyfin.yaml            # Jellyfin + ConfigMap + Ingress
│   └── immich.yaml              # Immich full stack (server, ml, db, valkey) + Ingress
├── iot/
│   ├── iot-db.yaml              # MySQL 9 StatefulSet for IoT data
│   └── iot-api.yaml             # IoT API (local image, see note below)
├── ai/
│   └── ollama.yaml              # Ollama (currently scaled to 0)
├── default/
│   └── dns-updater-legacy.yaml  # Legacy default-ns resources (hp-fast-pv, old ollama)
└── helm/
    ├── nfs-provisioners/        # Values for all NFS subdir provisioner releases
    │   ├── values-vtrak.yaml        # nfs-vtrak (default StorageClass)
    │   ├── values-general.yaml      # nfs-general (500G quota)
    │   ├── values-general-db.yaml   # nfs-general-db (20G quota)
    │   ├── values-immich.yaml       # nfs-immich (300G quota)
    │   ├── values-jellyfin.yaml     # nfs-jellyfin (700G quota)
    │   ├── values-owncloud.yaml     # nfs-owncloud (200G quota)
    │   ├── values-minecraft.yaml    # nfs-minecraft (50G quota)
    │   ├── values-factorio.yaml     # nfs-factorio (10G quota)
    │   ├── values-openttd.yaml      # nfs-openttd (5G quota)
    │   ├── values-speedtest.yaml    # nfs-speedtest (5G quota)
    │   ├── values-authentik.yaml    # nfs-authentik (20G quota)
    │   └── values-iot.yaml          # nfs-iot (20G quota)
    ├── cert-manager/
    │   └── values.yaml              # cert-manager v1.19.3 (crds.enabled=true)
    ├── ingress-nginx/
    │   └── values.yaml              # ingress-nginx v4.14.3 (DaemonSet, hostPort)
    ├── monitoring/
    │   └── prometheus-values.yaml   # kube-prometheus-stack (Grafana NodePort 31473)
    └── authentik/
        ├── values.yaml              # Authentik SSO v2026.2.1
        └── redis-values.yaml        # Standalone Redis for Authentik
```
## Apply Order
For a fresh cluster, apply in this order:
```bash
BASE=/home/dan/homelab/deployment
# 1. Namespaces
kubectl apply -f $BASE/00-namespaces.yaml
# 2. NFS provisioners (Helm) — run from default namespace
helm install nfs-vtrak nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  -f $BASE/helm/nfs-provisioners/values-vtrak.yaml
# ... repeat for each nfs-* values file
# 3. cert-manager
helm install cert-manager cert-manager/cert-manager -n cert-manager --create-namespace \
  -f $BASE/helm/cert-manager/values.yaml
# 4. ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx -n infrastructure \
  -f $BASE/helm/ingress-nginx/values.yaml
# 5. ClusterIssuers (requires cert-manager to be ready)
# Create the digitalocean-dns-token secret first:
# kubectl create secret generic digitalocean-dns-token \
# --from-literal=access-token=<TOKEN> -n cert-manager
kubectl apply -f $BASE/infrastructure/cert-issuers.yaml
# 6. Prometheus PV (must exist before helm install)
kubectl apply -f $BASE/monitoring/prometheus-pv.yaml
helm install obs prometheus-community/kube-prometheus-stack -n monitoring \
  -f $BASE/helm/monitoring/prometheus-values.yaml
# 7. Infrastructure workloads (create secrets first — see comments in each file)
kubectl apply -f $BASE/infrastructure/dns-updater.yaml
kubectl apply -f $BASE/infrastructure/general-db.yaml
kubectl apply -f $BASE/infrastructure/speedtest-tracker.yaml
kubectl apply -f $BASE/infrastructure/paperclip.yaml
# 8. Storage
kubectl apply -f $BASE/storage/
# 9. Media
kubectl apply -f $BASE/media/
# 10. Games
kubectl apply -f $BASE/games/
# 11. IoT
kubectl apply -f $BASE/iot/
# 12. AI
kubectl apply -f $BASE/ai/
# 13. Authentik
helm install authentik-redis bitnami/redis -n infrastructure \
  -f $BASE/helm/authentik/redis-values.yaml
helm install authentik authentik/authentik -n infrastructure \
  -f $BASE/helm/authentik/values.yaml
```
## Secrets Required (not stored here)
The following secrets must be created manually before applying the relevant workloads:
| Secret | Namespace | Keys | Used By |
|--------|-----------|------|---------|
| `dns-updater-secret` | infrastructure | `digitalocean-token` | dns-updater DaemonSet |
| `digitalocean-dns-token` | cert-manager | `access-token` | ClusterIssuer (DNS01 solver) |
| `general-db-secret` | infrastructure | `root-password`, `database`, `user`, `password` | general-purpose-db, speedtest-tracker |
| `paperclip-secrets` | infrastructure | `BETTER_AUTH_SECRET` | paperclip |
| `owncloud-db-secret` | storage | `root-password`, `user`, `password`, `database` | owncloud-mariadb, owncloud-server |
| `iot-db-secret` | iot | `root-password`, `database`, `user`, `password` | iot-db, iot-api |
| `immich-secret` | media | `db-username`, `db-password`, `db-name`, `jwt-secret` | immich-server, immich-db |
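For example, `general-db-secret` can be created like this; all literal values below are placeholders:

```bash
kubectl create secret generic general-db-secret -n infrastructure \
  --from-literal=root-password='CHANGE-ME' \
  --from-literal=database='CHANGE-ME' \
  --from-literal=user='CHANGE-ME' \
  --from-literal=password='CHANGE-ME'
```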
## Key Notes
- **kube-node-1 is cordoned** — no general workloads schedule there. Exceptions: DaemonSets (dns-updater, ingress-nginx, node-exporter, flannel) and workloads with explicit `nodeSelector: kubernetes.io/hostname: kube-node-1` (paperclip).
- **NFS storage** — all app data lives on ZFS datasets on the HP ProLiant (`192.168.2.193:/VTrak-Storage/<app>`). The NFS provisioners in the `default` namespace handle dynamic PV provisioning.
- **Prometheus** — intentionally uses `local-storage` at `/kube-storage-room/prometheus/` on kube-node-1 (USB disk sde). The `prometheus-storage-pv` PV must be manually created.
- **Paperclip** — uses local image `paperclip:latest` with `imagePullPolicy: Never`, pinned to kube-node-1. The image must be built locally on that node.
- **iot-api** — currently broken (`ErrImageNeverPull` on kube-node-3). The `iot-api:latest` local image is not present on the worker nodes. Either add a nodeSelector or push to a registry.
- **Ollama** — the `ai/ollama` and `default/ollama` deployments are both scaled to 0. Active LLM serving happens on the openclaw VM (192.168.2.88) via systemd Ollama service.
- **Authentik** — `helm/authentik/values.yaml` contains credentials in plaintext. Treat this file as sensitive.
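A quick way to verify the cordon and the paperclip pin described above; the resource names are assumed from the file layout in this README:

```bash
# Check the cordon and paperclip's explicit node pin.
kubectl get node kube-node-1 -o jsonpath='{.spec.unschedulable}'
kubectl get deploy paperclip -n infrastructure \
  -o jsonpath='{.spec.template.spec.nodeSelector}'
```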

deployment/ai/ollama.yaml Normal file
@@ -0,0 +1,68 @@
---
# NOTE: ollama in the 'ai' namespace is currently scaled to 0 replicas (intentionally stopped).
# The actual AI workload runs on the openclaw VM (192.168.2.88) via the Ollama system service.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: ollama-data
  namespace: ai
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-vtrak
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: ollama
  namespace: ai
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
      - image: ollama/ollama:latest
        name: ollama
        ports:
        - containerPort: 11434
          name: http
        resources:
          limits:
            cpu: '8'
            memory: 24Gi
          requests:
            cpu: 500m
            memory: 2Gi
        volumeMounts:
        - mountPath: /root/.ollama
          name: ollama-storage
      volumes:
      - name: ollama-storage
        persistentVolumeClaim:
          claimName: ollama-data
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: ollama
  namespace: ai
spec:
  ports:
  - name: http
    port: 11434
    targetPort: 11434
  selector:
    app: ollama

deployment/default/dns-updater-legacy.yaml Normal file
@@ -0,0 +1,95 @@
---
# Legacy default-namespace resources
# These are the NFS subdir provisioner deployments and the legacy ollama deployment.
# NFS provisioners are managed via Helm — see helm/nfs-provisioners/ for values files.
# The ollama deployment here is a legacy entry (scaled to 0) — the active ollama
# is in the 'ai' namespace. The hp-fast-pv / ollama-data-pvc bind a 1500Gi hostPath
# on the HP ProLiant at /mnt/hp_fast.
---
# hp-fast-pv: hostPath PV on HP ProLiant VMs (path /mnt/hp_fast, 1500Gi)
# No nodeAffinity was set originally — binding may be unreliable.
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations: {}
  name: hp-fast-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1500Gi
  hostPath:
    path: /mnt/hp_fast
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: ollama-data-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1500Gi
  storageClassName: ''
  volumeName: hp-fast-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: ollama-data
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-vtrak
---
# Legacy ollama deployment in default namespace (scaled to 0, inactive)
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: ollama
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ollama
  template:
    metadata:
      labels:
        app: ollama
    spec:
      containers:
      - image: ollama/ollama:latest
        name: ollama
        ports:
        - containerPort: 11434
        volumeMounts:
        - mountPath: /root/.ollama
          name: ollama-storage
      volumes:
      - name: ollama-storage
        persistentVolumeClaim:
          claimName: ollama-data-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: ollama
  namespace: default
spec:
  ports:
  - port: 11434
    targetPort: 11434
  selector:
    app: ollama

@@ -0,0 +1,73 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: games-console-backend
  namespace: infrastructure
spec:
  replicas: 2
  selector:
    matchLabels:
      app: games-console-backend
  template:
    metadata:
      labels:
        app: games-console-backend
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: games-console-backend
            topologyKey: kubernetes.io/hostname
      serviceAccountName: games-console
      containers:
      - name: backend
        image: games-console-backend:latest
        imagePullPolicy: Never
        args: ["serve", "--namespace", "games"]
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: games-console-backend
  namespace: infrastructure
spec:
  selector:
    app: games-console-backend
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: games-console-backend-np
  namespace: infrastructure
spec:
  selector:
    app: games-console-backend
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 31600
    protocol: TCP
  type: NodePort

@@ -0,0 +1,29 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: games-console
  namespace: infrastructure
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: >-
      Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
spec:
  ingressClassName: nginx
  rules:
  - host: games.vandachevici.ro
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: games-console-ui
            port:
              number: 80
  tls:
  - hosts:
    - games.vandachevici.ro
    secretName: games-console-tls

@@ -0,0 +1,36 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: games-console
  namespace: infrastructure
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: games-console
  namespace: games
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: games-console
  namespace: games
subjects:
- kind: ServiceAccount
  name: games-console
  namespace: infrastructure
roleRef:
  kind: Role
  name: games-console
  apiGroup: rbac.authorization.k8s.io
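This grants a ServiceAccount living in `infrastructure` rights inside the `games` namespace; the grant can be checked with an impersonation query:

```bash
# Confirm the cross-namespace grant works as intended.
kubectl auth can-i create deployments -n games \
  --as=system:serviceaccount:infrastructure:games-console
```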

@@ -0,0 +1,50 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: games-console-ui
  namespace: infrastructure
spec:
  replicas: 2
  selector:
    matchLabels:
      app: games-console-ui
  template:
    metadata:
      labels:
        app: games-console-ui
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: games-console-ui
            topologyKey: kubernetes.io/hostname
      containers:
      - name: ui
        image: games-console-ui:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 20m
            memory: 32Mi
          limits:
            cpu: 200m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: games-console-ui
  namespace: infrastructure
spec:
  selector:
    app: games-console-ui
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP

deployment/games/factorio.yaml Normal file
@@ -0,0 +1,67 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: factorio-alone
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factorio-alone
  template:
    metadata:
      labels:
        app: factorio-alone
    spec:
      containers:
      - image: factoriotools/factorio
        name: factorio
        ports:
        - containerPort: 34197
          hostPort: 34197
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /factorio
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: factorio-alone-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: factorio-alone
  namespace: games
spec:
  ports:
  - port: 34197
    protocol: TCP
    targetPort: 34197
  selector:
    app: factorio-alone
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: factorio-alone-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-factorio

deployment/games/minecraft-cheats.yaml Normal file
@@ -0,0 +1,92 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-cheats
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-cheats
  template:
    metadata:
      labels:
        app: minecraft-cheats
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        - name: MOTD
          value: A Minecraft Server Powered by Docker
        - name: DIFFICULTY
          value: easy
        - name: GAMEMODE
          value: survival
        - name: MAX_PLAYERS
          value: '10'
        - name: ENABLE_COMMAND_BLOCK
          value: 'true'
        - name: DUMP_SERVER_PROPERTIES
          value: 'true'
        - name: PAUSE_WHEN_EMPTY_SECONDS
          value: '0'
        - name: OPS
          value: LadyGisela5,tomgates24,anutzalizuk,toranaga_samma
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-cheats-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-cheats
  namespace: games
spec:
  ports:
  - nodePort: 31111
    port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-cheats
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-cheats-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/minecraft-creative.yaml Normal file
@@ -0,0 +1,78 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-creative
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-creative
  template:
    metadata:
      labels:
        app: minecraft-creative
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        - name: PAUSE_WHEN_EMPTY_SECONDS
          value: '0'
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-creative-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-creative
  namespace: games
spec:
  ports:
  - nodePort: 31559
    port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-creative
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-creative-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/minecraft-home.yaml Normal file
@@ -0,0 +1,92 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-home
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-home
  template:
    metadata:
      labels:
        app: minecraft-home
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        - name: MOTD
          value: A Minecraft Server Powered by Docker
        - name: DIFFICULTY
          value: easy
        - name: GAMEMODE
          value: survival
        - name: MAX_PLAYERS
          value: '10'
        - name: ENABLE_COMMAND_BLOCK
          value: 'true'
        - name: DUMP_SERVER_PROPERTIES
          value: 'true'
        - name: PAUSE_WHEN_EMPTY_SECONDS
          value: '0'
        - name: OPS
          value: LadyGisela5,tomgates24,anutzalizuk,toranaga_samma
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-home-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-home
  namespace: games
spec:
  ports:
  - nodePort: 31112
    port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-home
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-home-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/minecraft-jaron.yaml Normal file
@@ -0,0 +1,72 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-jaron
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-jaron
  template:
    metadata:
      labels:
        app: minecraft-jaron
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          hostPort: 25564
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-jaron-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-jaron
  namespace: games
spec:
  ports:
  - port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-jaron
  type: ClusterIP
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-jaron-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/minecraft-johannes.yaml Normal file
@@ -0,0 +1,78 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-johannes
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-johannes
  template:
    metadata:
      labels:
        app: minecraft-johannes
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        - name: PAUSE_WHEN_EMPTY_SECONDS
          value: '0'
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-johannes-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-johannes
  namespace: games
spec:
  ports:
  - nodePort: 31563
    port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-johannes
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-johannes-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/minecraft-noah.yaml Normal file
@@ -0,0 +1,78 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: minecraft-noah
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minecraft-noah
  template:
    metadata:
      labels:
        app: minecraft-noah
    spec:
      containers:
      - env:
        - name: EULA
          value: 'true'
        - name: PAUSE_WHEN_EMPTY_SECONDS
          value: '0'
        image: itzg/minecraft-server
        name: minecraft
        ports:
        - containerPort: 25565
          protocol: TCP
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        volumeMounts:
        - mountPath: /data
          name: data
      nodeSelector:
        topology.homelab/server: dell
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: minecraft-noah-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: minecraft-noah
  namespace: games
spec:
  ports:
  - nodePort: 31560
    port: 25565
    protocol: TCP
    targetPort: 25565
  selector:
    app: minecraft-noah
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: minecraft-noah-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-minecraft

deployment/games/openttd.yaml Normal file
@@ -0,0 +1,78 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: openttd
  namespace: games
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openttd
  template:
    metadata:
      labels:
        app: openttd
    spec:
      containers:
      - env:
        - name: savepath
          value: /var/openttd
        image: bateau/openttd
        name: openttd
        ports:
        - containerPort: 3979
          name: game
        - containerPort: 3978
          name: admin
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 100m
            memory: 256Mi
        volumeMounts:
        - mountPath: /var/openttd
          name: saves
      volumes:
      - name: saves
        persistentVolumeClaim:
          claimName: openttd-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: openttd
  namespace: games
spec:
  ports:
  - name: game
    nodePort: 30979
    port: 3979
    protocol: TCP
    targetPort: 3979
  - name: admin
    nodePort: 30978
    port: 3978
    protocol: TCP
    targetPort: 3978
  selector:
    app: openttd
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: openttd-v2-pvc
  namespace: games
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-openttd

@@ -0,0 +1,59 @@
# AI sync is suspended - not currently enabled for syncing
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-ai-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  suspend: true
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/ai
            - --dest=/mnt/hp/ai
            - --pair=ai
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/ai
            - name: hp-data
              mountPath: /mnt/hp/ai
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-ai
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-ai
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs
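Even while the schedule is suspended, a one-off run can be triggered from this CronJob by hand; the job name below is arbitrary:

```bash
# Manually run one sync outside the schedule; suspend only stops scheduling.
kubectl create job -n infrastructure manual-ai-sync \
  --from=cronjob/ha-sync-ai-dell-to-hp
kubectl logs -n infrastructure job/manual-ai-sync -f
```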

@@ -0,0 +1,60 @@
# AI sync is suspended - not currently enabled for syncing
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-ai-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  suspend: true
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/ai
            - --dest=/mnt/dell/ai
            - --pair=ai
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/ai
            - name: dell-data
              mountPath: /mnt/dell/ai
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-ai
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-ai
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,58 @@
# Production sync is enabled for this direction (no --dry-run in args)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-games-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/games
            - --dest=/mnt/hp/games
            - --pair=games
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/games
            - name: hp-data
              mountPath: /mnt/hp/games
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-games
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-games
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,59 @@
# To enable production sync: remove --dry-run from args below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-games-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/games
            - --dest=/mnt/dell/games
            - --pair=games
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/games
            - name: dell-data
              mountPath: /mnt/dell/games
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-games
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-games
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,58 @@
# Production sync is enabled for this direction (no --dry-run in args)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-infra-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/infra
            - --dest=/mnt/hp/infra
            - --pair=infra
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/infra
            - name: hp-data
              mountPath: /mnt/hp/infra
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-infra
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-infra
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,59 @@
# To enable production sync: remove --dry-run from args below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-infra-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/infra
            - --dest=/mnt/dell/infra
            - --pair=infra
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/infra
            - name: dell-data
              mountPath: /mnt/dell/infra
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-infra
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-infra
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,58 @@
# Production sync is enabled for this direction (no --dry-run in args)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-media-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/media
            - --dest=/mnt/hp/media
            - --pair=media
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/media
            - name: hp-data
              mountPath: /mnt/hp/media
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-media
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-media
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,59 @@
# To enable production sync: remove --dry-run from args below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-media-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/media
            - --dest=/mnt/dell/media
            - --pair=media
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/media
            - name: dell-data
              mountPath: /mnt/dell/media
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-media
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-media
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,58 @@
# Production sync is enabled for this direction (no --dry-run in args)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-owncloud-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/owncloud
            - --dest=/mnt/hp/owncloud
            - --pair=owncloud
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/owncloud
            - name: hp-data
              mountPath: /mnt/hp/owncloud
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-owncloud
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-owncloud
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,59 @@
# To enable production sync: remove --dry-run from args below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-owncloud-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/owncloud
            - --dest=/mnt/dell/owncloud
            - --pair=owncloud
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/owncloud
            - name: dell-data
              mountPath: /mnt/dell/owncloud
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-owncloud
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-owncloud
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,58 @@
# Production sync is enabled for this direction (no --dry-run in args)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-photos-dell-to-hp
  namespace: infrastructure
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/dell/photos
            - --dest=/mnt/hp/photos
            - --pair=photos
            - --direction=dell-to-hp
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: dell-data
              mountPath: /mnt/dell/photos
            - name: hp-data
              mountPath: /mnt/hp/photos
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-photos
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-photos
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,59 @@
# To enable production sync: remove --dry-run from args below
apiVersion: batch/v1
kind: CronJob
metadata:
  name: ha-sync-photos-hp-to-dell
  namespace: infrastructure
spec:
  schedule: "7,22,37,52 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: ha-sync
          restartPolicy: OnFailure
          containers:
          - name: ha-sync
            image: ha-sync:latest
            imagePullPolicy: Never
            command: ["/usr/local/bin/ha-sync"]
            args:
            - --src=/mnt/hp/photos
            - --dest=/mnt/dell/photos
            - --pair=photos
            - --direction=hp-to-dell
            - --log-dir=/var/log/ha-sync
            - --exclude=*.sock
            - --exclude=*.pid
            - --exclude=*.lock
            - --exclude=lock
            - --dry-run # REMOVE THIS LINE to enable production sync
            env:
            - name: HA_SYNC_DB_DSN
              valueFrom:
                secretKeyRef:
                  name: ha-sync-db-secret
                  key: HA_SYNC_DB_DSN
            volumeMounts:
            - name: hp-data
              mountPath: /mnt/hp/photos
            - name: dell-data
              mountPath: /mnt/dell/photos
            - name: logs
              mountPath: /var/log/ha-sync
            resources:
              requests: { cpu: 50m, memory: 64Mi }
              limits: { cpu: 500m, memory: 256Mi }
          volumes:
          - name: hp-data
            persistentVolumeClaim:
              claimName: pvc-hp-photos
          - name: dell-data
            persistentVolumeClaim:
              claimName: pvc-dell-photos
          - name: logs
            persistentVolumeClaim:
              claimName: pvc-ha-sync-logs

@@ -0,0 +1,37 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- serviceaccount.yaml
- rbac.yaml
- pv-logs.yaml
- pvc-logs.yaml
- pv-dell-ai.yaml
- pv-dell-games.yaml
- pv-dell-infra.yaml
- pv-dell-media.yaml
- pv-dell-owncloud.yaml
- pv-dell-photos.yaml
- pv-hp-ai.yaml
- pv-hp-games.yaml
- pv-hp-infra.yaml
- pv-hp-media.yaml
- pv-hp-owncloud.yaml
- pv-hp-photos.yaml
- pvc-dell-ai.yaml
- pvc-dell-games.yaml
- pvc-dell-infra.yaml
- pvc-dell-media.yaml
- pvc-dell-owncloud.yaml
- pvc-dell-photos.yaml
- pvc-hp-ai.yaml
- pvc-hp-games.yaml
- pvc-hp-infra.yaml
- pvc-hp-media.yaml
- pvc-hp-owncloud.yaml
- pvc-hp-photos.yaml
# CronJobs are now managed by ha-sync-ctl (DB-driven). See archive/ for old static manifests.
# To migrate: ha-sync-ctl jobs import-k8s then ha-sync-ctl jobs apply-all
- ui-deployment.yaml
- ui-service.yaml
- ui-ingress.yaml
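
The directory applies as a single kustomization; previewing the rendered output first is cheap (directory path assumed here):

  kubectl kustomize deployment/ha-sync/ | less
  kubectl apply -k deployment/ha-sync/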

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-ai
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/ai

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-games
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/games

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-infra
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/infra

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-media
spec:
capacity:
storage: 2Ti
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/media

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-owncloud
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/owncloud

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-dell-photos
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.100
path: /data/photos

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-ai
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/ai

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-games
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/games

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-infra
spec:
capacity:
storage: 100Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/infra

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-media
spec:
capacity:
storage: 2Ti
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/media

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-owncloud
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/owncloud

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-hp-photos
spec:
capacity:
storage: 500Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/photos

@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-ha-sync-logs
spec:
capacity:
storage: 10Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
nfs:
server: 192.168.2.193
path: /data/infra/ha-sync-logs

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-ai
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-ai
resources:
requests:
storage: 500Gi
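
Each claim pre-binds to its volume through volumeName plus an empty storageClassName, so no provisioner is involved. A healthy pair looks roughly like:

  kubectl -n infrastructure get pvc pvc-dell-ai
  # NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES
  # pvc-dell-ai   Bound    pv-dell-ai   500Gi      RWX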

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-games
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-games
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-infra
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-infra
resources:
requests:
storage: 100Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-media
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-media
resources:
requests:
storage: 2Ti

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-owncloud
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-owncloud
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-dell-photos
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-dell-photos
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-ai
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-ai
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-games
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-games
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-infra
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-infra
resources:
requests:
storage: 100Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-media
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-media
resources:
requests:
storage: 2Ti

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-owncloud
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-owncloud
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-hp-photos
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-hp-photos
resources:
requests:
storage: 500Gi

@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-ha-sync-logs
namespace: infrastructure
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: pv-ha-sync-logs
resources:
requests:
storage: 10Gi

@@ -0,0 +1,26 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ha-sync-lease-manager
namespace: infrastructure
rules:
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["create", "get", "update", "delete", "list", "watch"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
verbs: ["create", "get", "update", "patch", "delete", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ha-sync-lease-manager
namespace: infrastructure
subjects:
- kind: ServiceAccount
name: ha-sync
namespace: infrastructure
roleRef:
kind: Role
name: ha-sync-lease-manager
apiGroup: rbac.authorization.k8s.io
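
The grants can be sanity-checked with impersonation before any CronJob fires:

  kubectl auth can-i create leases.coordination.k8s.io -n infrastructure --as=system:serviceaccount:infrastructure:ha-sync
  kubectl auth can-i create jobs.batch -n infrastructure --as=system:serviceaccount:infrastructure:ha-sync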

@@ -0,0 +1,4 @@
# Create this secret manually before applying:
# kubectl create secret generic ha-sync-db-secret \
# --from-literal=HA_SYNC_DB_DSN='<user>:<pass>@tcp(general-purpose-db.infrastructure.svc.cluster.local:3306)/general_db?parseTime=true' \
# -n infrastructure

@@ -0,0 +1,5 @@
apiVersion: v1
kind: ServiceAccount
metadata:
name: ha-sync
namespace: infrastructure

@@ -0,0 +1,49 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ha-sync-ui
namespace: infrastructure
spec:
replicas: 2
selector:
matchLabels:
app: ha-sync-ui
template:
metadata:
labels:
app: ha-sync-ui
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: ha-sync-ui
topologyKey: kubernetes.io/hostname
serviceAccountName: ha-sync
containers:
- name: ha-sync-ui
image: ha-sync-ui:latest
imagePullPolicy: Never
command: ["/usr/local/bin/ha-sync-ui"]
ports:
- containerPort: 8080
env:
- name: HA_SYNC_DB_DSN
valueFrom:
secretKeyRef:
name: ha-sync-db-secret
key: HA_SYNC_DB_DSN
- name: HA_SYNC_UI_PORT
value: "8080"
resources:
requests: { cpu: 50m, memory: 64Mi }
limits: { cpu: 200m, memory: 128Mi }
livenessProbe:
httpGet: { path: /health, port: 8080 }
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
httpGet: { path: /health, port: 8080 }
initialDelaySeconds: 3
periodSeconds: 5

@@ -0,0 +1,28 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ha-sync-ui
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
spec:
ingressClassName: nginx
rules:
- host: ha-sync.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ha-sync-ui
port:
number: 80
tls:
- hosts:
- ha-sync.vandachevici.ro
secretName: ha-sync-ui-tls
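
A rough check that the forward-auth annotations work (assumes DNS already resolves to the ingress LB): an anonymous request should redirect to Authentik, not reach the UI.

  curl -skI https://ha-sync.vandachevici.ro/ | grep -i '^location'
  # expect a 302 toward auth.vandachevici.ro/outpost.goauthentik.io/start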

@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
name: ha-sync-ui
namespace: infrastructure
spec:
type: ClusterIP
selector:
app: ha-sync-ui
ports:
- port: 80
targetPort: 8080

@@ -0,0 +1,20 @@
# authentik-redis — standalone Redis for Authentik
# Helm release: authentik-redis, namespace: infrastructure
# Chart: bitnami/redis v25.3.2
# Install: helm install authentik-redis bitnami/redis -n infrastructure -f redis-values.yaml
# Repo: helm repo add bitnami https://charts.bitnami.com/bitnami
architecture: standalone
auth:
enabled: false
master:
persistence:
enabled: true
size: 1Gi
storageClass: nfs-authentik
resources:
limits:
memory: 128Mi
requests:
cpu: 30m
memory: 64Mi
resourcesPreset: none

@@ -0,0 +1,31 @@
# cert-manager v1.19.3
# Helm release: cert-manager, namespace: cert-manager
# Install: helm install cert-manager cert-manager/cert-manager -n cert-manager --create-namespace -f values.yaml
crds:
enabled: true
replicaCount: 2
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/component: controller
topologyKey: kubernetes.io/hostname
webhook:
replicaCount: 2
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/component: webhook
topologyKey: kubernetes.io/hostname
cainjector:
replicaCount: 2
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/component: cainjector
topologyKey: kubernetes.io/hostname

@@ -0,0 +1,16 @@
# ingress-nginx v4.14.3 (app version 1.14.3)
# Helm release: ingress-nginx, namespace: infrastructure
# Install: helm install ingress-nginx ingress-nginx/ingress-nginx -n infrastructure -f values.yaml
controller:
config:
force-ssl-redirect: "true"
allow-snippet-annotations: "true"
annotations-risk-level: "Critical"
extraArgs:
default-ssl-certificate: "infrastructure/wildcard-vandachevici-tls"
hostPort:
enabled: false
kind: DaemonSet
service:
type: LoadBalancer
loadBalancerIP: 192.168.2.240
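
Once MetalLB owns 192.168.2.240 (see the IPAddressPool manifest), the controller service should report it; the service name below assumes the default chart naming:

  kubectl -n infrastructure get svc ingress-nginx-controller
  # EXTERNAL-IP should read 192.168.2.240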

@@ -0,0 +1,12 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/authentik
server: 192.168.2.193
storageClass:
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-authentik
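
Each of these values files backs one nfs-subdir-external-provisioner release; the pattern, with release and file names assumed, is:

  helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  helm install nfs-authentik nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    -n infrastructure -f nfs-authentik-values.yaml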

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/cctv
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-cctv

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/games/factorio
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-factorio

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/general-db
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-general-db

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/general
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-general

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/git-db
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-git-db

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/git
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-git

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/photos
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-immich

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/iot
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-iot

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/media
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-jellyfin

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/games/minecraft
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-minecraft

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/games/openttd
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-openttd

@@ -0,0 +1,14 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /data/owncloud
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-owncloud

@@ -0,0 +1,13 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool/speedtest
server: 192.168.2.193
storageClass:
allowVolumeExpansion: true
archiveOnDelete: true
mountOptions:
- soft
- timeo=30
name: nfs-speedtest

@@ -0,0 +1,12 @@
nfs:
mountOptions:
- soft
- timeo=30
path: /media-pool
server: 192.168.2.193
storageClass:
defaultClass: false
mountOptions:
- soft
- timeo=30
name: nfs-media-pool

@@ -0,0 +1,41 @@
---
# NOTE: Secret 'digitalocean-dns-token' must be created manually in cert-manager namespace:
# kubectl create secret generic digitalocean-dns-token \
# --from-literal=access-token=<YOUR_DO_TOKEN> \
# -n cert-manager
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
annotations: {}
name: letsencrypt-prod
spec:
acme:
email: dan.vandachevici@gmail.com
privateKeySecretRef:
name: letsencrypt-prod-account-key
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- dns01:
digitalocean:
tokenSecretRef:
key: access-token
name: digitalocean-dns-token
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
annotations: {}
name: letsencrypt-staging
spec:
acme:
email: dan.vandachevici@gmail.com
privateKeySecretRef:
name: letsencrypt-staging-account-key
server: https://acme-staging-v02.api.letsencrypt.org/directory
solvers:
- dns01:
digitalocean:
tokenSecretRef:
key: access-token
name: digitalocean-dns-token
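
Both issuers should report Ready once the token secret exists; DNS01 needs no inbound reachability, so this works before any ingress is up:

  kubectl get clusterissuer
  kubectl describe clusterissuer letsencrypt-prod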

@@ -0,0 +1,183 @@
---
# NOTE: Images must be built and loaded onto nodes before applying.
# Run: /home/dan/homelab/services/device-inventory/build-and-load.sh
#
# Images required:
# inventory-server:latest → kube-node-2
# inventory-web-ui:latest → kube-node-2
# inventory-cli:latest → kube-node-2, kube-node-3
#
# nfs-general StorageClass is cluster-wide — no extra Helm release needed.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: device-inventory-db-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs-general
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: inventory-server
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: inventory-server
strategy:
type: Recreate
template:
metadata:
labels:
app: inventory-server
spec:
containers:
- name: inventory-server
image: inventory-server:latest
imagePullPolicy: Never
ports:
- containerPort: 9876
name: tcp
resources:
limits:
cpu: 200m
memory: 128Mi
requests:
cpu: 25m
memory: 32Mi
livenessProbe:
tcpSocket:
port: 9876
initialDelaySeconds: 10
periodSeconds: 20
failureThreshold: 5
readinessProbe:
tcpSocket:
port: 9876
initialDelaySeconds: 5
periodSeconds: 10
failureThreshold: 3
volumeMounts:
- mountPath: /var/lib/inventory
name: db-storage
volumes:
- name: db-storage
persistentVolumeClaim:
claimName: device-inventory-db-pvc
---
apiVersion: v1
kind: Service
metadata:
name: inventory-server
namespace: infrastructure
spec:
selector:
app: inventory-server
ports:
- name: tcp
port: 9876
targetPort: 9876
nodePort: 30987
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: inventory-web-ui
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: inventory-web-ui
template:
metadata:
labels:
app: inventory-web-ui
spec:
containers:
- name: inventory-web-ui
image: inventory-web-ui:latest
imagePullPolicy: Never
env:
- name: INVENTORY_HOST
value: inventory-server.infrastructure.svc.cluster.local
- name: INVENTORY_PORT
value: "9876"
- name: PORT
value: "8080"
ports:
- containerPort: 8080
name: http
resources:
limits:
cpu: 100m
memory: 64Mi
requests:
cpu: 10m
memory: 32Mi
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 5
periodSeconds: 20
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 3
periodSeconds: 10
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: inventory-web-ui
namespace: infrastructure
spec:
selector:
app: inventory-web-ui
ports:
- name: http
port: 80
targetPort: 8080
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: inventory-web-ui
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
spec:
ingressClassName: nginx
rules:
- host: device-inventory.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: inventory-web-ui
port:
number: 80
tls:
- hosts:
- device-inventory.vandachevici.ro
secretName: device-inventory-tls

@@ -0,0 +1,72 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
annotations: {}
name: dns-updater-config
namespace: infrastructure
data:
DOMAIN: vandachevici.ro
NAME: photos;backup;media;chat;openttd;excalidraw;prv;drive;grafana;paperclip;proxmox;parts;dns;games;git
REMOVE_DUPLICATES: 'true'
SLEEP_INTERVAL: '60'
---
# NOTE: Secret 'dns-updater-secret' must be created manually:
# kubectl create secret generic dns-updater-secret \
# --from-literal=digitalocean-token=<YOUR_TOKEN> \
# -n infrastructure
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
annotations: {}
labels:
app: dns-updater
name: dns-updater
namespace: infrastructure
spec:
selector:
matchLabels:
app: dns-updater
template:
metadata:
labels:
app: dns-updater
spec:
containers:
- env:
- name: DIGITALOCEAN_TOKEN
valueFrom:
secretKeyRef:
key: digitalocean-token
name: dns-updater-secret
- name: DOMAIN
valueFrom:
configMapKeyRef:
key: DOMAIN
name: dns-updater-config
- name: NAME
valueFrom:
configMapKeyRef:
key: NAME
name: dns-updater-config
- name: SLEEP_INTERVAL
valueFrom:
configMapKeyRef:
key: SLEEP_INTERVAL
name: dns-updater-config
- name: REMOVE_DUPLICATES
valueFrom:
configMapKeyRef:
key: REMOVE_DUPLICATES
name: dns-updater-config
image: tunix/digitalocean-dyndns:latest
name: dns-updater
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 50m
memory: 64Mi
restartPolicy: Always

@@ -0,0 +1,320 @@
---
# Forgejo git server + PostgreSQL database
# Domain: git.vandachevici.ro
# Auth: Authentik OIDC (configured post-deploy via API/CLI; see notes below)
# Storage: NFS on HP ProLiant (media-pool/git, media-pool/git-db)
# SSH: NodePort 30022 (clone with: git clone ssh://git@<host>:30022/<user>/<repo>.git)
#
# Post-deploy setup (already done, documented for re-deploy):
# 1. Authentik OIDC provider created via API (provider PK=9, app slug=forgejo)
# 2. Forgejo OAuth2 source configured via CLI:
# forgejo admin auth add-oauth --name authentik --provider openidConnect \
# --auto-discover-url https://auth.vandachevici.ro/application/o/forgejo/.well-known/openid-configuration
# 3. Admin account: gitadmin / email: gitadmin@git.vandachevici.ro (break-glass only)
# Users should sign in via "Sign in with authentik" button
---
apiVersion: v1
kind: Secret
metadata:
name: forgejo-db-secret
namespace: infrastructure
type: Opaque
stringData:
POSTGRES_DB: forgejo
POSTGRES_USER: forgejo
POSTGRES_PASSWORD: Hg9mKnRpQwXvTz2Ld8cJsY4bAeUfN6
---
apiVersion: v1
kind: Secret
metadata:
name: forgejo-secret
namespace: infrastructure
type: Opaque
stringData:
# Random secret key for Forgejo session/cookie signing
# Generate with: openssl rand -hex 32
secret-key: 5f323a291b24ba0d83c5df56569eeeb44e5eda0bcfc9f3d9601d5ab46f5f3754
---
# PostgreSQL for Forgejo
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-db-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs-git-db
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: forgejo-db
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: forgejo-db
serviceName: forgejo-db
template:
metadata:
labels:
app: forgejo-db
spec:
containers:
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
name: postgres
env:
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_DB
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_USER
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_PASSWORD
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
volumeMounts:
- name: db-data
mountPath: /var/lib/postgresql/data
livenessProbe:
exec:
command:
- pg_isready
- -U
- forgejo
initialDelaySeconds: 30
periodSeconds: 10
failureThreshold: 5
readinessProbe:
exec:
command:
- pg_isready
- -U
- forgejo
initialDelaySeconds: 10
periodSeconds: 5
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
volumes:
- name: db-data
persistentVolumeClaim:
claimName: forgejo-db-pvc
---
apiVersion: v1
kind: Service
metadata:
name: forgejo-db
namespace: infrastructure
spec:
selector:
app: forgejo-db
ports:
- name: postgres
port: 5432
targetPort: 5432
---
# Forgejo git server
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-data-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nfs-git
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: forgejo
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: forgejo
serviceName: forgejo
template:
metadata:
labels:
app: forgejo
spec:
initContainers:
- name: wait-for-db
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z forgejo-db 5432; do
echo "Waiting for PostgreSQL..."
sleep 2
done
echo "PostgreSQL is ready"
containers:
- name: forgejo
image: codeberg.org/forgejo/forgejo:10
ports:
- containerPort: 3000
name: http
- containerPort: 22
name: ssh
env:
- name: FORGEJO__database__DB_TYPE
value: postgres
- name: FORGEJO__database__HOST
value: forgejo-db:5432
- name: FORGEJO__database__NAME
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_DB
- name: FORGEJO__database__USER
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_USER
- name: FORGEJO__database__PASSWD
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_PASSWORD
- name: FORGEJO__server__DOMAIN
value: git.vandachevici.ro
- name: FORGEJO__server__ROOT_URL
value: https://git.vandachevici.ro
- name: FORGEJO__server__SSH_DOMAIN
value: git.vandachevici.ro
- name: FORGEJO__server__SSH_PORT
value: "30022"
- name: FORGEJO__server__SSH_LISTEN_PORT
value: "22"
- name: FORGEJO__security__SECRET_KEY
valueFrom:
secretKeyRef:
name: forgejo-secret
key: secret-key
- name: FORGEJO__service__DISABLE_REGISTRATION
value: "false"
- name: FORGEJO__service__REQUIRE_SIGNIN_VIEW
value: "false"
volumeMounts:
- name: forgejo-data
mountPath: /data
livenessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 60
periodSeconds: 15
failureThreshold: 5
readinessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 1000m
memory: 512Mi
volumes:
- name: forgejo-data
persistentVolumeClaim:
claimName: forgejo-data-pvc
---
# ClusterIP for HTTP (used by ingress)
apiVersion: v1
kind: Service
metadata:
name: forgejo
namespace: infrastructure
spec:
selector:
app: forgejo
ports:
- name: http
port: 3000
targetPort: 3000
---
# NodePort for SSH git access (git clone ssh://git@git.vandachevici.ro:30022/user/repo.git)
apiVersion: v1
kind: Service
metadata:
name: forgejo-ssh
namespace: infrastructure
spec:
type: NodePort
selector:
app: forgejo
ports:
- name: ssh
port: 22
targetPort: 22
nodePort: 30022
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: forgejo
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
ingressClassName: nginx
rules:
- host: git.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: forgejo
port:
number: 3000
tls:
- hosts:
- git.vandachevici.ro
secretName: forgejo-tls
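
If the break-glass admin ever needs recreating, Forgejo's own CLI inside the pod can do it (rootless image assumed; a root-based image needs the command wrapped in su git):

  kubectl -n infrastructure exec -it forgejo-0 -- \
    forgejo admin user create --admin --username gitadmin \
    --email gitadmin@git.vandachevici.ro --password '<PASS>'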

@@ -0,0 +1,124 @@
---
# NOTE: Secret 'general-db-secret' must be created manually:
# kubectl create secret generic general-db-secret \
# --from-literal=root-password=<ROOT_PASS> \
# --from-literal=database=general_db \
# --from-literal=user=<USER> \
# --from-literal=password=<PASS> \
# -n infrastructure
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: general-db-v2-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs-general-db
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations: {}
name: general-purpose-db
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: general-purpose-db
serviceName: general-purpose-db
template:
metadata:
labels:
app: general-purpose-db
spec:
containers:
- env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: root-password
name: general-db-secret
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
key: database
name: general-db-secret
- name: MYSQL_USER
valueFrom:
secretKeyRef:
key: user
name: general-db-secret
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: general-db-secret
image: mysql:9
livenessProbe:
exec:
command:
# shell form so the probe reads the real root password from the env
- sh
- -c
- mysqladmin ping -h localhost -u root -p"$MYSQL_ROOT_PASSWORD"
failureThreshold: 10
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 20
name: mysql
ports:
- containerPort: 3306
name: mysql
readinessProbe:
exec:
command:
- sh
- -c
- mysqladmin ping -h localhost -u root -p"$MYSQL_ROOT_PASSWORD"
failureThreshold: 10
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 20
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-data
volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: general-db-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: general-purpose-db
namespace: infrastructure
spec:
clusterIP: None
ports:
- name: mysql
port: 3306
targetPort: 3306
selector:
app: general-purpose-db
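
A throwaway client pod is the simplest way to reach the database through its headless service:

  kubectl -n infrastructure run mysql-client --rm -it --restart=Never --image=mysql:9 -- \
    mysql -h general-purpose-db.infrastructure.svc.cluster.local -u root -p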

@@ -0,0 +1,23 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: homelab-pool
namespace: metallb-system
spec:
addresses:
- 192.168.2.240-192.168.2.249
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: homelab-l2
namespace: metallb-system
spec:
ipAddressPools:
- homelab-pool
nodeSelectors:
- matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- kube-node-1

@@ -0,0 +1,174 @@
---
# PV for paperclip — NFS via keepalived VIP (192.168.2.252), synced between Dell and HP.
# Data lives at /data/ai/paperclip on the active NFS host.
apiVersion: v1
kind: PersistentVolume
metadata:
annotations: {}
name: paperclip-data-pv
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 50Gi
nfs:
path: /data/ai/paperclip
server: 192.168.2.252
persistentVolumeReclaimPolicy: Retain
storageClassName: ""
volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: paperclip-data-pvc
namespace: ai
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: ""
volumeName: paperclip-data-pv
---
# NOTE: Secret 'paperclip-secrets' must be created manually:
# kubectl create secret generic paperclip-secrets \
# --from-literal=PAPERCLIP_AGENT_JWT_SECRET=<SECRET> \
# -n ai
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
labels:
app: paperclip
name: paperclip
namespace: ai
spec:
replicas: 1
selector:
matchLabels:
app: paperclip
strategy:
type: Recreate
template:
metadata:
labels:
app: paperclip
spec:
containers:
- command:
- paperclipai
- run
- -d
- /paperclip
env:
- name: PAPERCLIP_AGENT_JWT_SECRET
valueFrom:
secretKeyRef:
key: PAPERCLIP_AGENT_JWT_SECRET
name: paperclip-secrets
- name: PORT
value: '3100'
- name: HOST
value: 0.0.0.0
- name: SERVE_UI
value: 'true'
- name: NODE_ENV
value: production
- name: PAPERCLIP_DEPLOYMENT_MODE
value: authenticated
- name: PAPERCLIP_DEPLOYMENT_EXPOSURE
value: private
- name: PAPERCLIP_PUBLIC_URL
value: https://paperclip.vandachevici.ro
- name: PAPERCLIP_MIGRATION_PROMPT
value: never
- name: PAPERCLIP_MIGRATION_AUTO_APPLY
value: 'true'
- name: HOME
value: /paperclip
image: paperclip:latest
imagePullPolicy: Never
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 20
tcpSocket:
port: 3100
name: paperclip
ports:
- containerPort: 3100
name: http
readinessProbe:
failureThreshold: 12
initialDelaySeconds: 30
periodSeconds: 10
tcpSocket:
port: 3100
resources:
limits:
cpu: 2000m
memory: 2Gi
requests:
cpu: 200m
memory: 512Mi
volumeMounts:
- mountPath: /paperclip
name: paperclip-data
volumes:
- name: paperclip-data
persistentVolumeClaim:
claimName: paperclip-data-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app: paperclip
name: paperclip
namespace: ai
spec:
ports:
- name: http
port: 80
targetPort: 3100
selector:
app: paperclip
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-body-size: 50m
nginx.ingress.kubernetes.io/proxy-buffering: 'off'
nginx.ingress.kubernetes.io/proxy-read-timeout: '300'
nginx.ingress.kubernetes.io/proxy-send-timeout: '300'
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
name: paperclip-ingress
namespace: ai
spec:
ingressClassName: nginx
rules:
- host: paperclip.vandachevici.ro
http:
paths:
- backend:
service:
name: paperclip
port:
name: http
path: /
pathType: Prefix
tls:
- hosts:
- paperclip.vandachevici.ro
secretName: paperclip-tls

@@ -0,0 +1,257 @@
---
# NOTE: Secret 'parts-inventory-secret' must be created manually:
# kubectl create secret generic parts-inventory-secret \
# --from-literal=MONGO_URI="mongodb://parts-db.infrastructure.svc.cluster.local:27017/parts" \
# -n infrastructure
---
# MongoDB PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: parts-db-pvc
namespace: infrastructure
spec:
accessModes: [ReadWriteOnce]
storageClassName: nfs-general
resources:
requests:
storage: 5Gi
---
# MongoDB StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: parts-db
namespace: infrastructure
spec:
replicas: 1
serviceName: parts-db
selector:
matchLabels:
app: parts-db
template:
metadata:
labels:
app: parts-db
spec:
containers:
- name: mongo
image: mongo:4.4
ports:
- containerPort: 27017
name: mongo
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
cpu: 500m
memory: 512Mi
livenessProbe:
exec:
command: ["mongo", "--eval", "db.adminCommand('ping')"]
initialDelaySeconds: 30
periodSeconds: 20
failureThreshold: 5
readinessProbe:
exec:
command: ["mongo", "--eval", "db.adminCommand('ping')"]
initialDelaySeconds: 15
periodSeconds: 10
failureThreshold: 3
volumeMounts:
- name: db-data
mountPath: /data/db
volumes:
- name: db-data
persistentVolumeClaim:
claimName: parts-db-pvc
---
# MongoDB Headless Service
apiVersion: v1
kind: Service
metadata:
name: parts-db
namespace: infrastructure
spec:
clusterIP: None
selector:
app: parts-db
ports:
- name: mongo
port: 27017
targetPort: 27017
---
# parts-api Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: parts-api
namespace: infrastructure
spec:
replicas: 2
selector:
matchLabels:
app: parts-api
strategy:
type: RollingUpdate
template:
metadata:
labels:
app: parts-api
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: parts-api
topologyKey: kubernetes.io/hostname
containers:
- name: parts-api
image: parts-api:latest
imagePullPolicy: Never
ports:
- containerPort: 3001
name: http
env:
- name: MONGO_URI
valueFrom:
secretKeyRef:
name: parts-inventory-secret
key: MONGO_URI
- name: PORT
value: "3001"
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 200m
memory: 128Mi
livenessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 15
periodSeconds: 20
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: 3001
initialDelaySeconds: 10
periodSeconds: 10
failureThreshold: 3
---
# parts-api Service
apiVersion: v1
kind: Service
metadata:
name: parts-api
namespace: infrastructure
spec:
selector:
app: parts-api
ports:
- name: http
port: 3001
targetPort: 3001
type: ClusterIP
---
# parts-ui Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: parts-ui
namespace: infrastructure
spec:
replicas: 2
selector:
matchLabels:
app: parts-ui
template:
metadata:
labels:
app: parts-ui
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: parts-ui
topologyKey: kubernetes.io/hostname
containers:
- name: parts-ui
image: parts-ui:latest
imagePullPolicy: Never
ports:
- containerPort: 8080
name: http
resources:
requests:
cpu: 10m
memory: 16Mi
limits:
cpu: 100m
memory: 64Mi
livenessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 5
periodSeconds: 20
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 8080
initialDelaySeconds: 3
periodSeconds: 10
failureThreshold: 3
---
# parts-ui Service
apiVersion: v1
kind: Service
metadata:
name: parts-ui
namespace: infrastructure
spec:
selector:
app: parts-ui
ports:
- name: http
port: 80
targetPort: 8080
type: ClusterIP
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: parts-ui-ingress
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
spec:
ingressClassName: nginx
rules:
- host: parts.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: parts-ui
port:
number: 80
tls:
- hosts:
- parts.vandachevici.ro
secretName: parts-ui-tls

@@ -0,0 +1,57 @@
---
apiVersion: v1
kind: Endpoints
metadata:
name: proxmox
namespace: infrastructure
subsets:
- addresses:
- ip: 192.168.2.193
ports:
- port: 8006
---
apiVersion: v1
kind: Service
metadata:
name: proxmox
namespace: infrastructure
spec:
ports:
- port: 8006
targetPort: 8006
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: proxmox
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/proxy-ssl-verify: "off"
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-body-size: "0"
# WebSocket support for noVNC console
nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
spec:
ingressClassName: nginx
rules:
- host: proxmox.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: proxmox
port:
number: 8006
tls:
- hosts:
- proxmox.vandachevici.ro
secretName: proxmox-tls
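
The selector-less Service plus hand-written Endpoints is what lets the ingress front an out-of-cluster host; if the Proxmox IP ever changes, only the Endpoints object needs editing:

  kubectl -n infrastructure get endpoints proxmox
  # should list 192.168.2.193:8006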

@@ -0,0 +1,133 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
annotations: {}
name: speedtest-tracker-config
namespace: infrastructure
data:
APP_KEY: base64:F1lxPXfl42EXK1PTsi5DecMkyvTMPZgfAYDdSYwd9ME=
APP_URL: http://192.168.2.100:20000
DB_CONNECTION: mysql
DB_DATABASE: general_db
DB_HOST: general-purpose-db.infrastructure.svc.cluster.local
DB_PORT: '3306'
DISPLAY_TIMEZONE: Etc/UTC
PGID: '1000'
PRUNE_RESULTS_OLDER_THAN: '7'
PUID: '1000'
SPEEDTEST_SCHEDULE: '*/5 * * * *'
SPEEDTEST_SERVERS: 31470,1584,60747
TZ: Etc/UTC
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: speedtest-tracker-v2-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs-speedtest
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: speedtest-tracker
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: speedtest-tracker
template:
metadata:
labels:
app: speedtest-tracker
spec:
containers:
- env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
key: user
name: general-db-secret
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: general-db-secret
envFrom:
- configMapRef:
name: speedtest-tracker-config
- secretRef:
name: general-db-secret
image: lscr.io/linuxserver/speedtest-tracker:latest
name: speedtest-tracker
ports:
- containerPort: 80
name: http
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
volumeMounts:
- mountPath: /config
name: config
volumes:
- name: config
persistentVolumeClaim:
claimName: speedtest-tracker-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: speedtest-tracker
namespace: infrastructure
spec:
ports:
- name: http
nodePort: 30200
port: 80
targetPort: 80
selector:
app: speedtest-tracker
type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
name: speedtest-tracker
namespace: infrastructure
spec:
ingressClassName: nginx
rules:
- host: speedtest.vandachevici.ro
http:
paths:
- backend:
service:
name: speedtest-tracker
port:
number: 80
path: /
pathType: Prefix
tls:
- hosts:
- speedtest.vandachevici.ro
secretName: speedtest-tls

@@ -0,0 +1,50 @@
---
apiVersion: v1
kind: Endpoints
metadata:
name: technitium-dns
namespace: infrastructure
subsets:
- addresses:
- ip: 192.168.2.193
ports:
- port: 5380
---
apiVersion: v1
kind: Service
metadata:
name: technitium-dns
namespace: infrastructure
spec:
ports:
- port: 5380
targetPort: 5380
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: technitium-dns
namespace: infrastructure
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://auth.vandachevici.ro/outpost.goauthentik.io/auth/nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://auth.vandachevici.ro/outpost.goauthentik.io/start?rd=$scheme://$http_host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: >-
Set-Cookie,X-authentik-username,X-authentik-groups,X-authentik-email,X-authentik-name,X-authentik-uid
spec:
ingressClassName: nginx
rules:
- host: dns.vandachevici.ro
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: technitium-dns
port:
number: 5380
tls:
- hosts:
- dns.vandachevici.ro
secretName: technitium-dns-tls

@@ -0,0 +1,21 @@
---
# Wildcard certificate for *.vandachevici.ro
# Used as nginx-ingress default SSL cert to eliminate the brief self-signed
# cert flash when a new ingress is first deployed.
#
# Requires DNS01 solver (already configured in letsencrypt-prod ClusterIssuer).
# Secret 'wildcard-vandachevici-tls' is referenced in ingress-nginx helm values.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: wildcard-vandachevici
namespace: infrastructure
spec:
secretName: wildcard-vandachevici-tls
issuerRef:
name: letsencrypt-prod
kind: ClusterIssuer
commonName: "*.vandachevici.ro"
dnsNames:
- "*.vandachevici.ro"
- "vandachevici.ro"

@@ -0,0 +1,94 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
annotations: {}
name: iot-api-config
namespace: iot
data:
MYSQL_DATABASE: iot_db
---
# NOTE: iot-api uses image 'iot-api:latest' with imagePullPolicy=Never
# The image must be built and loaded onto the scheduled node before deploying.
# Previously hit ErrImageNeverPull on kube-node-3 (image not present there);
# the nodeSelector below now pins the pod to the Dell node as a workaround.
# Longer-term fix: push the image to a registry and set imagePullPolicy: Always.
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: iot-api
namespace: iot
spec:
replicas: 1
selector:
matchLabels:
app: iot-api
template:
metadata:
labels:
app: iot-api
spec:
nodeSelector:
topology.homelab/server: dell
containers:
- env:
- name: MYSQL_USER
valueFrom:
secretKeyRef:
key: user
name: iot-db-secret
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: iot-db-secret
envFrom:
- configMapRef:
name: iot-api-config
image: iot-api:latest
imagePullPolicy: Never
livenessProbe:
failureThreshold: 5
httpGet:
path: /
port: 8000
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
name: iot-api
ports:
- containerPort: 8000
name: http
readinessProbe:
failureThreshold: 5
httpGet:
path: /
port: 8000
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 10
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 50m
memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: iot-api
namespace: iot
spec:
ports:
- name: http
nodePort: 30800
port: 8000
targetPort: 8000
selector:
app: iot-api
type: NodePort
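
One way to satisfy imagePullPolicy: Never on a containerd-based node is a manual import; the host alias and the k8s.io containerd namespace are assumptions about this cluster:

  docker save iot-api:latest -o /tmp/iot-api.tar
  scp /tmp/iot-api.tar dell:/tmp/
  ssh dell sudo ctr -n k8s.io images import /tmp/iot-api.tar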

deployment/iot/iot-db.yaml Normal file
@@ -0,0 +1,135 @@
---
# NOTE: Secret 'iot-db-secret' must be created manually:
# kubectl create secret generic iot-db-secret \
# --from-literal=root-password=<ROOT_PASS> \
# --from-literal=database=iot_db \
# --from-literal=user=<USER> \
# --from-literal=password=<PASS> \
# -n iot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: iot-db-v2-pvc
namespace: iot
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs-iot
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations: {}
name: iot-db
namespace: iot
spec:
replicas: 1
selector:
matchLabels:
app: iot-db
serviceName: iot-db
template:
metadata:
labels:
app: iot-db
spec:
containers:
- env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: root-password
name: iot-db-secret
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
key: database
name: iot-db-secret
- name: MYSQL_USER
valueFrom:
secretKeyRef:
key: user
name: iot-db-secret
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: iot-db-secret
image: mysql:9
livenessProbe:
exec:
command:
# shell form so the probe reads the real root password from the env
- sh
- -c
- mysqladmin ping -h localhost -u root -p"$MYSQL_ROOT_PASSWORD"
failureThreshold: 10
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 20
name: mysql
ports:
- containerPort: 3306
name: mysql
readinessProbe:
exec:
command:
- sh
- -c
- mysqladmin ping -h localhost -u root -p"$MYSQL_ROOT_PASSWORD"
failureThreshold: 10
initialDelaySeconds: 60
periodSeconds: 10
timeoutSeconds: 20
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-data
volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: iot-db-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: iot-db
namespace: iot
spec:
clusterIP: None
ports:
- name: mysql
port: 3306
targetPort: 3306
selector:
app: iot-db
---
# ExternalName alias so apps can use 'db' as hostname
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: db
namespace: iot
spec:
externalName: iot-db.iot.svc.cluster.local
type: ExternalName

@@ -0,0 +1,446 @@
---
# NOTE: Secret 'immich-secret' must be created manually:
# kubectl create secret generic immich-secret \
# --from-literal=db-username=<USER> \
# --from-literal=db-password=<PASS> \
# --from-literal=db-name=immich \
# --from-literal=jwt-secret=<JWT_SECRET> \
# -n media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: immich-db-v2-pvc
namespace: media
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: nfs-immich
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: immich-library-v2-pvc
namespace: media
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 290Gi
storageClassName: nfs-immich
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: immich-ml-cache-v2-pvc
namespace: media
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
storageClassName: nfs-immich
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations: {}
name: immich-valkey-v2-pvc
namespace: media
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs-immich
---
# immich-db: PostgreSQL with pgvecto.rs / vectorchord extensions for AI embeddings
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations: {}
name: immich-db
namespace: media
spec:
replicas: 1
selector:
matchLabels:
app: immich-db
serviceName: immich-db
template:
metadata:
labels:
app: immich-db
spec:
containers:
- env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
key: db-password
name: immich-secret
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
key: db-username
name: immich-secret
- name: POSTGRES_DB
valueFrom:
secretKeyRef:
key: db-name
name: immich-secret
- name: POSTGRES_INITDB_ARGS
value: --data-checksums
image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0
livenessProbe:
exec:
command:
- pg_isready
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
name: postgres
ports:
- containerPort: 5432
name: postgres
readinessProbe:
exec:
command:
- pg_isready
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 5
resources:
limits:
cpu: 500m
memory: 1Gi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
subPath: postgres
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: immich-db-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: immich-db
namespace: media
spec:
clusterIP: None
ports:
- name: postgres
port: 5432
targetPort: 5432
selector:
app: immich-db
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: immich-valkey
namespace: media
spec:
replicas: 1
selector:
matchLabels:
app: immich-valkey
template:
metadata:
labels:
app: immich-valkey
spec:
containers:
- args:
- --save
- '60'
- '1'
- --loglevel
- warning
image: docker.io/valkey/valkey:9.0-alpine
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
tcpSocket:
port: 6379
timeoutSeconds: 5
name: valkey
ports:
- containerPort: 6379
name: redis
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 10
tcpSocket:
port: 6379
timeoutSeconds: 5
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 50m
memory: 64Mi
volumeMounts:
- mountPath: /data
name: data
volumes:
- name: data
persistentVolumeClaim:
claimName: immich-valkey-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: immich-valkey
namespace: media
spec:
ports:
- name: redis
port: 6379
targetPort: 6379
selector:
app: immich-valkey
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: immich-server
namespace: media
spec:
replicas: 2
selector:
matchLabels:
app: immich-server
template:
metadata:
labels:
app: immich-server
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: immich-server
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: DB_HOSTNAME
value: immich-db
- name: DB_PORT
value: '5432'
- name: DB_USERNAME
valueFrom:
secretKeyRef:
key: db-username
name: immich-secret
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
key: db-password
name: immich-secret
- name: DB_DATABASE_NAME
valueFrom:
secretKeyRef:
key: db-name
name: immich-secret
- name: DB_STORAGE_TYPE
value: HDD
- name: DB_VECTOR_EXTENSION
value: vectorchord
- name: REDIS_HOSTNAME
value: immich-valkey
- name: REDIS_PORT
value: '6379'
- name: IMMICH_MACHINE_LEARNING_URL
value: http://immich-ml:3003
- name: JWT_SECRET
valueFrom:
secretKeyRef:
key: jwt-secret
name: immich-secret
image: ghcr.io/immich-app/immich-server:release
livenessProbe:
failureThreshold: 5
httpGet:
path: /api/server/ping
port: 2283
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
name: immich-server
ports:
- containerPort: 2283
name: http
readinessProbe:
failureThreshold: 3
httpGet:
path: /api/server/ping
port: 2283
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 10
resources:
limits:
cpu: 1000m
memory: 2Gi
requests:
cpu: 250m
memory: 512Mi
volumeMounts:
- mountPath: /usr/src/app/upload
name: library
volumes:
- name: library
persistentVolumeClaim:
claimName: immich-library-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: immich-web
namespace: media
spec:
ports:
- name: http
nodePort: 32283
port: 2283
targetPort: 2283
selector:
app: immich-server
type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: immich-ml
namespace: media
spec:
replicas: 2
selector:
matchLabels:
app: immich-ml
template:
metadata:
labels:
app: immich-ml
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: immich-ml
topologyKey: kubernetes.io/hostname
containers:
- env:
- name: TRANSFORMERS_CACHE
value: /cache
- name: HF_XET_CACHE
value: /cache/huggingface-xet
- name: MPLCONFIGDIR
value: /cache/matplotlib-config
image: ghcr.io/immich-app/immich-machine-learning:release
livenessProbe:
failureThreshold: 5
httpGet:
path: /ping
port: 3003
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
name: machine-learning
ports:
- containerPort: 3003
name: http
readinessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 3003
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 10
resources:
limits:
cpu: 4000m
memory: 8Gi
requests:
cpu: 500m
memory: 2Gi
volumeMounts:
- mountPath: /cache
name: cache
volumes:
- name: cache
persistentVolumeClaim:
claimName: immich-ml-cache-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
annotations: {}
name: immich-ml
namespace: media
spec:
ports:
- name: http
port: 3003
targetPort: 3003
selector:
app: immich-ml
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/proxy-body-size: '0'
nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
name: immich
namespace: media
spec:
ingressClassName: nginx
rules:
- host: photos.vandachevici.ro
http:
paths:
- backend:
service:
name: immich-web
port:
number: 2283
path: /
pathType: Prefix
tls:
- hosts:
- photos.vandachevici.ro
secretName: immich-tls

@@ -0,0 +1,161 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  annotations: {}
  name: jellyfin-config
  namespace: media
data:
  JELLYFIN_PublishedServerUrl: https://media.vandachevici.ro
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: jellyfin-config-v2-pvc
  namespace: media
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-jellyfin
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: jellyfin-cache-v2-pvc
  namespace: media
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-jellyfin
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations: {}
  name: jellyfin-media-v2-pvc
  namespace: media
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 650Gi
  storageClassName: nfs-jellyfin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations: {}
  name: jellyfin
  namespace: media
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: jellyfin-config
        image: jellyfin/jellyfin
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8096
          initialDelaySeconds: 60
          periodSeconds: 30
          timeoutSeconds: 10
        name: jellyfin
        ports:
        - containerPort: 8096
          name: http
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8096
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 10
        resources:
          limits:
            cpu: 2000m
            memory: 4Gi
          requests:
            cpu: 200m
            memory: 512Mi
        volumeMounts:
        - mountPath: /config
          name: config
        - mountPath: /cache
          name: cache
        - mountPath: /media
          name: media
          readOnly: true
      volumes:
      - name: config
        persistentVolumeClaim:
          claimName: jellyfin-config-v2-pvc
      - name: cache
        persistentVolumeClaim:
          claimName: jellyfin-cache-v2-pvc
      - name: media
        persistentVolumeClaim:
          claimName: jellyfin-media-v2-pvc
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  name: jellyfin
  namespace: media
spec:
  ports:
  - name: http
    port: 8096
    targetPort: 8096
  selector:
    app: jellyfin
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/proxy-read-timeout: '600'
    nginx.ingress.kubernetes.io/proxy-send-timeout: '600'
  name: jellyfin
  namespace: media
spec:
  ingressClassName: nginx
  rules:
  - host: media.vandachevici.ro
    http:
      paths:
      - backend:
          service:
            name: jellyfin
            port:
              number: 8096
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - media.vandachevici.ro
    secretName: jellyfin-tls


@@ -0,0 +1,29 @@
---
# Prometheus local-storage PV — hostPath on kube-node-1 at /data/infra/prometheus
# This PV must be created before the Prometheus Helm chart is deployed.
# The Helm chart creates the PVC; this PV satisfies it via storageClassName=local-storage
# and nodeAffinity pinning to kube-node-1.
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations: {}
  name: prometheus-storage-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Gi
  hostPath:
    path: /data/infra/prometheus
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube-node-1
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
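
A PVC binds to this PV only when its storageClassName, access mode, and requested size are compatible. As a rough sketch, the claim the Helm chart generates should look like the following (name and namespace are hypothetical here; the real ones come from the chart and its values):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-server   # hypothetical; the actual name is chart-generated
  namespace: monitoring     # hypothetical namespace
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi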

deployment/pdbs.yaml Normal file

@@ -0,0 +1,71 @@
---
# PodDisruptionBudgets for all HA-scaled services.
# Ensures at least 1 replica stays up during node drains and rolling updates.
#
# Apply with: kubectl apply -f deployment/pdbs.yaml
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: parts-api-pdb
  namespace: infrastructure
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: parts-api
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: parts-ui-pdb
  namespace: infrastructure
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: parts-ui
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: ha-sync-ui-pdb
  namespace: infrastructure
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: ha-sync-ui
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: games-console-backend-pdb
  namespace: infrastructure
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: games-console-backend
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: games-console-ui-pdb
  namespace: infrastructure
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: games-console-ui
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: coredns-pdb
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
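
For a service running two replicas, minAvailable: 1 means the eviction API refuses to take down the second pod until the first has been rescheduled and is ready again, so drains proceed one pod per app at a time. The budgets can be checked with plain kubectl after applying:

kubectl get pdb -n infrastructure
kubectl get pdb -n kube-system coredns-pdb
kubectl drain kube-node-1 --ignore-daemonsets --delete-emptydir-data   # honors the budgets above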


@@ -0,0 +1,268 @@
---
# HA PVCs — pre-bound to Dell NFS PVs via keepalived VIP 192.168.2.50
# storageClassName: "" + volumeName forces binding to specific PV
# ==================== MEDIA namespace ====================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-media-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: jellyfin-media-pv
  resources:
    requests:
      storage: 650Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-config-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: jellyfin-config-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jellyfin-cache-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: jellyfin-cache-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-library-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: immich-library-pv
  resources:
    requests:
      storage: 290Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-db-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: immich-db-pv
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-ml-cache-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: immich-ml-cache-pv
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-valkey-v2-pvc
  namespace: media
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: immich-valkey-pv
  resources:
    requests:
      storage: 1Gi
---
# ==================== STORAGE namespace ====================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-files-v2-pvc
  namespace: storage
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: owncloud-files-pv
  resources:
    requests:
      storage: 190Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-mariadb-v2-pvc
  namespace: storage
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: owncloud-mariadb-pv
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-redis-v2-pvc
  namespace: storage
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: owncloud-redis-pv
  resources:
    requests:
      storage: 1Gi
---
# ==================== GAMES namespace ====================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minecraft-home-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: minecraft-home-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minecraft-cheats-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: minecraft-cheats-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minecraft-creative-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: minecraft-creative-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minecraft-johannes-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: minecraft-johannes-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minecraft-noah-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: minecraft-noah-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: factorio-alone-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: factorio-alone-pv
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openttd-v2-pvc
  namespace: games
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: openttd-pv
  resources:
    requests:
      storage: 2Gi
---
# ==================== INFRASTRUCTURE namespace ====================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: general-db-v2-pvc
  namespace: infrastructure
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: general-db-pv
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: speedtest-tracker-v2-pvc
  namespace: infrastructure
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: speedtest-tracker-pv
  resources:
    requests:
      storage: 1Gi
---
# ==================== IOT namespace ====================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iot-db-v2-pvc
  namespace: iot
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""
  volumeName: iot-db-pv
  resources:
    requests:
      storage: 10Gi
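
Each claim above disables dynamic provisioning (storageClassName: "") and pins one specific PV by name, so a matching PV must already exist before the claim is applied. A minimal sketch of the PV side for the first claim, with the export path assumed (the real export paths are defined elsewhere in the repo):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jellyfin-media-pv
spec:
  accessModes: [ReadWriteOnce]
  capacity:
    storage: 650Gi
  nfs:
    server: 192.168.2.50          # keepalived VIP from the header comment
    path: /media-pool/jellyfin    # hypothetical export path
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""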

Some files were not shown because too many files have changed in this diff.