commit 70bbeb71f1
Author: Morten Olsen
Date:   2025-11-25 15:11:52 +01:00

19 changed files with 804 additions and 0 deletions

.gitignore (vendored, new file)

@@ -0,0 +1,17 @@
# Temporary files
*.tmp
*.swp
*~
# IDE
.vscode/
.idea/
*.iml
# OS
.DS_Store
Thumbs.db
# Kustomize build output
kustomize-build/

.pre-commit-config.yaml (new file)

@@ -0,0 +1,25 @@
---
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
      - id: check-json
      - id: check-merge-conflict
      - id: detect-private-key
  - repo: https://github.com/adrienverge/yamllint.git
    rev: v1.33.0
    hooks:
      - id: yamllint
        args: [-c=.yamllint]
  - repo: https://github.com/instrumenta/kubeval
    rev: 0.16.1
    hooks:
      - id: kubeval
        args: [--strict, --ignore-missing-schemas]

.yamllint (new file)

@@ -0,0 +1,16 @@
---
extends: default
rules:
  line-length:
    max: 120
    level: warning
  indentation:
    spaces: 2
    indent-sequences: true
  comments:
    min-spaces-from-content: 1
  document-start: disable
  truthy:
    allowed-values: ['true', 'false', 'on', 'off']

AGENTS.md (new file)

@@ -0,0 +1,224 @@
# Foundation Services GitOps Repository
This repository contains the GitOps configuration for foundational services deployed to a K3s home server cluster using Argo CD.
## Repository Structure
```
.
├── apps/                        # Argo CD Application manifests
│   ├── foundation-project.yaml  # Argo CD project definition
│   ├── root-application.yaml    # Root app that syncs all other apps
│   ├── storage-class.yaml       # Storage class application
│   ├── cert-manager.yaml        # Cert Manager application
│   ├── istio.yaml               # Istio base application
│   ├── istiod.yaml              # Istio control plane application
│   ├── cloudnative-pg.yaml      # CloudNativePG operator application
│   ├── nats.yaml                # NATS messaging application
│   ├── kyverno.yaml             # Kyverno policy engine application
│   ├── trivy.yaml               # Trivy security scanner application
│   └── kustomization.yaml       # Kustomize configuration
├── storage/                     # Storage class configuration
│   ├── storage-class.yaml       # Local path storage with symlink support
│   └── kustomization.yaml       # Kustomize configuration
├── scripts/                     # Utility scripts
│   └── validate.sh              # Validation script for CI/CD
├── Makefile                     # Build and deployment commands
├── .yamllint                    # YAML linting configuration
└── .pre-commit-config.yaml      # Pre-commit hooks configuration
```
## Services Deployed
### Core Infrastructure
- **Argo CD**: GitOps controller (installed via Makefile)
- **Cert Manager**: Certificate management
- **Istio**: Service mesh
- **CloudNativePG**: PostgreSQL operator
- **NATS**: Messaging system
- **Kyverno**: Policy engine
- **Trivy**: Security scanning
### Storage
- **Local Path Storage**: Default storage class that relocates volume data to `/var/lib/rancher/k3s/storage/pvc/{namespace}/{name}` (with a symlink from the provisioner's UUID path) for easier backup and management
## Deployment
### Initial Setup
1. **Create the cluster and install Argo CD:**
```bash
make create
```
2. **Wait for Argo CD to be ready:**
```bash
kubectl wait --for=condition=available --timeout=300s deployment/argocd-server -n argocd
```
3. **Get Argo CD admin password:**
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```
4. **Port-forward to access Argo CD UI:**
```bash
kubectl port-forward svc/argocd-server -n argocd 8080:443
```
Access at: https://localhost:8080 (username: `admin`)
### Deploy Foundation Services
Deploy all foundation services using the Makefile:
```bash
make deploy
```
This will:
1. Create the `foundation` Argo CD project
2. Deploy the root application that manages all other applications
3. Let Argo CD automatically sync all foundation services
### Manual Deployment
Alternatively, you can deploy manually:
```bash
kubectl apply -k apps/
```
## Argo CD Project
All applications are managed under the `foundation` Argo CD project, which allows:
- Deployment to any namespace
- Access to any source repository
- Cluster and namespace resource management
## Storage Configuration
The default storage class (`local-path`) is configured to:
- Store volumes under `/var/lib/rancher/k3s/storage` (the K3s default)
- Relocate each volume's data to `/var/lib/rancher/k3s/storage/pvc/{namespace}/{name}`, leaving a symlink at the provisioner's UUID path, for easier backup and management
- Use `WaitForFirstConsumer` binding mode for optimal pod scheduling
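For reference, a claim that binds to this class could look like the following sketch (the claim name, namespace, and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data        # hypothetical claim name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
```

Because of `WaitForFirstConsumer`, the volume is only provisioned once a pod that mounts this claim is scheduled.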
## Application Details
### Cert Manager
- **Namespace**: `cert-manager`
- **Source**: Helm chart from `charts.jetstack.io`
- **Version**: v1.19.1
### Istio
- **Namespace**: `istio-system`
- **Source**: Helm charts (`base` and `istiod`) from the official Istio release repository
- **Version**: 1.28.0
### CloudNativePG
- **Namespace**: `cnpg-system`
- **Source**: Helm chart from the CloudNativePG charts repository
- **Version**: 0.26.1 (chart)
### NATS
- **Namespace**: `nats`
- **Source**: Helm chart from the NATS Helm repository
- **Version**: 2.12.2 (chart)
- **Features**: JetStream enabled with a 10Gi file store on the `local-path` storage class
### Kyverno
- **Namespace**: `kyverno`
- **Source**: Helm chart from the Kyverno Helm repository
- **Version**: 3.1.0
### Trivy
- **Namespace**: `trivy-system`
- **Source**: `trivy-operator` Helm chart from the Aqua Security Helm repository
- **Version**: 0.20.0 (chart)
## Repository URL
Update the repository URL in `apps/root-application.yaml` and `apps/storage-class.yaml` if you're using a different Git repository:
```yaml
source:
  repoURL: https://gitea.olsen.cloud/homelab/foundation.git
  targetRevision: main
```
## Quality Assurance and Linting
This repository includes several QA tools to ensure manifest quality and correctness.
### Available Commands
```bash
# Run all checks (lint + validate)
make check
# Validate Kubernetes manifests
make validate
# Lint YAML files
make lint
# Format YAML files
make format
# Install required tools
make install-tools
```
### Tools Used
- **kubeconform**: Validates Kubernetes manifests against the Kubernetes API schema
- **yamllint**: Lints YAML files for syntax and style issues
- **kustomize**: Validates Kustomize configurations and builds
### Pre-commit Hooks
The repository includes a `.pre-commit-config.yaml` for automated checks before commits. Install pre-commit hooks:
```bash
pip install pre-commit
pre-commit install
```
### CI/CD Integration
The `scripts/validate.sh` script can be used in CI/CD pipelines:
```bash
./scripts/validate.sh
```
### Validation Details
- **Kubernetes Validation**: Uses kubeconform to validate all manifests against Kubernetes API schemas
- **Kustomize Validation**: Ensures all kustomization.yaml files are valid
- **YAML Linting**: Checks YAML syntax, indentation, and style according to `.yamllint` configuration
## Future Enhancements
When adding new services:
1. Create an Argo CD Application manifest in the `apps/` directory
2. Add it to `apps/kustomization.yaml`
3. Ensure it uses the `foundation` project
4. Run `make check` to validate before committing
5. Commit and push - Argo CD will automatically sync
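As a sketch of step 1, a new Application manifest could look like this (the service name and chart repository URL are placeholders, not real endpoints):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service                        # hypothetical service name
  namespace: argocd
spec:
  project: foundation                     # step 3: must use the foundation project
  source:
    repoURL: https://charts.example.com   # placeholder chart repository
    chart: my-service
    targetRevision: 1.0.0
    helm:
      releaseName: my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

Save it as `apps/my-service.yaml` and add that filename to the `resources` list in `apps/kustomization.yaml` (step 2).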
## Troubleshooting
### Argo CD Applications Not Syncing
- Check Argo CD server logs: `kubectl logs -n argocd deployment/argocd-server`
- Verify repository access: Argo CD needs read access to the Git repository
- Check application status in Argo CD UI or CLI: `argocd app list`
### Storage Issues
- Verify storage class is default: `kubectl get storageclass`
- Check local-path-provisioner logs: `kubectl logs -n kube-system -l app=local-path-provisioner`
- Verify symlinks and relocated data on the node: `ls -la /var/lib/rancher/k3s/storage/pvc/`
### Service Not Starting
- Check pod status: `kubectl get pods -n <namespace>`
- Check events: `kubectl get events -n <namespace> --sort-by='.lastTimestamp'`
- Review Argo CD application sync status

Makefile (new file)

@@ -0,0 +1,77 @@
.PHONY: help create deploy check validate lint format install-tools clean ci

# Use bash so [[ ... ]] in the validate recipe works even where /bin/sh is not bash
SHELL := /bin/bash

# Default target
help:
	@echo "Available targets:"
	@echo "  make create         - Create K3s cluster and install Argo CD"
	@echo "  make deploy         - Deploy all foundation services"
	@echo "  make check          - Run all validation checks (lint + validate)"
	@echo "  make validate       - Validate Kubernetes manifests"
	@echo "  make lint           - Lint YAML files"
	@echo "  make format         - Format YAML files (where possible)"
	@echo "  make install-tools  - Install required tools (kubeconform, yamllint)"
	@echo "  make ci             - Run CI validation (non-interactive)"
	@echo "  make clean          - Clean temporary files"

# Cluster setup
create:
	colima delete -f -d
	colima start --kubernetes -m 8 --k3s-arg="--disable helm-controller,traefik"
	kubectl create namespace argocd
	kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Deployment
deploy:
	kubectl apply -k apps/

# Validation and linting
check: lint validate
	@echo "✓ All checks passed"

validate:
	@if [ -f scripts/validate.sh ]; then \
		./scripts/validate.sh; \
	else \
		echo "Validating Kubernetes manifests..."; \
		which kubeconform > /dev/null || (echo "kubeconform not found. Run 'make install-tools'" && exit 1); \
		for file in apps/*.yaml storage/*.yaml; do \
			if [ -f "$$file" ] && [[ ! "$$file" == *"kustomization.yaml" ]]; then \
				kubeconform -strict -skip Certificate,Issuer,CertificateRequest,ClusterIssuer "$$file" || exit 1; \
			fi; \
		done; \
		echo "Validating Kustomize configurations..."; \
		kustomize build apps/ > /dev/null; \
		kustomize build storage/ > /dev/null; \
		echo "✓ Validation passed"; \
	fi

lint:
	@echo "Linting YAML files..."
	@which yamllint > /dev/null || (echo "yamllint not found. Run 'make install-tools'" && exit 1)
	@yamllint -c .yamllint apps/ storage/
	@echo "✓ Linting passed"

format:
	@echo "Formatting YAML files..."
	@which yq > /dev/null || (echo "yq not found. Install with: brew install yq" && exit 1)
	@find apps storage -name "*.yaml" -type f -exec yq eval -i . {} \;
	@echo "✓ Formatting complete"

# Tool installation
install-tools:
	@echo "Installing kubeconform..."
	@which kubeconform > /dev/null || (brew install kubeconform || echo "Failed to install kubeconform. Install manually: https://github.com/yannh/kubeconform")
	@echo "Installing yamllint..."
	@which yamllint > /dev/null || (pip3 install yamllint || echo "Failed to install yamllint. Install manually: pip install yamllint")
	@echo "Installing kustomize..."
	@which kustomize > /dev/null || (brew install kustomize || echo "Failed to install kustomize. Install manually: https://kustomize.io")
	@echo "✓ Tools installation complete"

# CI target (non-interactive, exits on error)
ci: check
	@echo "✓ CI validation passed"

# Cleanup
clean:
	@find . -type f -name "*.tmp" -delete
	@echo "✓ Cleanup complete"

apps/cert-manager.yaml (new file)

@@ -0,0 +1,25 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cert-manager
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://charts.jetstack.io
    chart: cert-manager
    targetRevision: v1.19.1
    helm:
      releaseName: cert-manager
      values: |
        crds:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: cert-manager
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/cloudnative-pg.yaml (new file)

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudnative-pg
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://cloudnative-pg.github.io/charts
    targetRevision: 0.26.1
    chart: cloudnative-pg
    helm:
      releaseName: cloudnative-pg
  destination:
    server: https://kubernetes.default.svc
    namespace: cnpg-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/foundation-project.yaml (new file)

@@ -0,0 +1,18 @@
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: foundation
  namespace: argocd
spec:
  description: Foundation services for the home server
  sourceRepos:
    - '*'
  destinations:
    - namespace: '*'
      server: https://kubernetes.default.svc
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
  namespaceResourceWhitelist:
    - group: '*'
      kind: '*'

apps/istio.yaml (new file)

@@ -0,0 +1,31 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istio-base
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "-1"
spec:
  project: foundation
  source:
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: 1.28.0
    chart: base
    helm:
      releaseName: istio-base
      values: |
        defaultRevision: default
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  ignoreDifferences:
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
      jsonPointers:
        - /webhooks/0/failurePolicy
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/istiod.yaml (new file)

@@ -0,0 +1,30 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: istiod
  namespace: argocd
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  project: foundation
  source:
    repoURL: https://istio-release.storage.googleapis.com/charts
    targetRevision: 1.28.0
    chart: istiod
    helm:
      releaseName: istiod
  destination:
    server: https://kubernetes.default.svc
    namespace: istio-system
  ignoreDifferences:
    - group: admissionregistration.k8s.io
      kind: ValidatingWebhookConfiguration
      jsonPointers:
        - /webhooks/0/failurePolicy
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/kustomization.yaml (new file)

@@ -0,0 +1,13 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - foundation-project.yaml
  - root-application.yaml
  - storage-class.yaml
  - cert-manager.yaml
  - istio.yaml
  - istiod.yaml
  - cloudnative-pg.yaml
  - nats.yaml
  - kyverno.yaml
  - trivy.yaml

apps/kyverno.yaml (new file)

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://kyverno.github.io/kyverno
    targetRevision: 3.1.0
    chart: kyverno
    helm:
      releaseName: kyverno
  destination:
    server: https://kubernetes.default.svc
    namespace: kyverno
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/nats.yaml (new file)

@@ -0,0 +1,29 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nats
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://nats-io.github.io/k8s/helm/charts/
    targetRevision: 2.12.2
    chart: nats
    helm:
      releaseName: nats
      values: |
        config:
          jetstream:
            enabled: true
            fileStore:
              storageSize: 10Gi
              storageClassName: local-path
  destination:
    server: https://kubernetes.default.svc
    namespace: nats
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/root-application.yaml (new file)

@@ -0,0 +1,20 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-application
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://gitea.olsen.cloud/homelab/foundation.git
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/storage-class.yaml (new file)

@@ -0,0 +1,20 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: storage-class
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://gitea.olsen.cloud/homelab/foundation.git
    targetRevision: main
    path: storage
  destination:
    server: https://kubernetes.default.svc
    namespace: kube-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

apps/trivy.yaml (new file)

@@ -0,0 +1,22 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: trivy
  namespace: argocd
spec:
  project: foundation
  source:
    repoURL: https://aquasecurity.github.io/helm-charts
    targetRevision: 0.20.0
    chart: trivy-operator
    helm:
      releaseName: trivy
  destination:
    server: https://kubernetes.default.svc
    namespace: trivy-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

scripts/validate.sh (new executable file)

@@ -0,0 +1,86 @@
#!/bin/bash
set -euo pipefail

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Check if a required tool is installed
check_tool() {
  if ! command -v "$1" &> /dev/null; then
    echo -e "${RED}$1 is not installed${NC}"
    echo "  Install with: make install-tools"
    return 1
  fi
  return 0
}

echo "🔍 Running validation checks..."

# Check required tools
MISSING_TOOLS=0
check_tool kubeconform || MISSING_TOOLS=1
check_tool yamllint || MISSING_TOOLS=1
check_tool kustomize || MISSING_TOOLS=1
if [ $MISSING_TOOLS -eq 1 ]; then
  exit 1
fi

# Validate Kubernetes manifests
echo -e "\n${YELLOW}Validating Kubernetes manifests...${NC}"
VALIDATION_FAILED=0

# Validate individual YAML files (skip CRD kinds that kubeconform doesn't handle well)
for file in apps/*.yaml storage/*.yaml; do
  if [ -f "$file" ]; then
    # Skip kustomization files as they're not Kubernetes resources
    if [[ "$file" == *"kustomization.yaml" ]]; then
      continue
    fi
    if ! kubeconform -strict -skip Certificate,Issuer,CertificateRequest,ClusterIssuer "$file" 2>&1; then
      echo -e "${RED}✗ Validation failed: $file${NC}"
      VALIDATION_FAILED=1
    else
      echo -e "${GREEN}✓ $file${NC}"
    fi
  fi
done

# Validate Kustomize builds
echo -e "\n${YELLOW}Validating Kustomize configurations...${NC}"
if ! kustomize build apps/ > /dev/null 2>&1; then
  echo -e "${RED}✗ Kustomize build failed: apps/${NC}"
  VALIDATION_FAILED=1
else
  echo -e "${GREEN}✓ apps/kustomization.yaml${NC}"
fi
if ! kustomize build storage/ > /dev/null 2>&1; then
  echo -e "${RED}✗ Kustomize build failed: storage/${NC}"
  VALIDATION_FAILED=1
else
  echo -e "${GREEN}✓ storage/kustomization.yaml${NC}"
fi

# Lint YAML files
echo -e "\n${YELLOW}Linting YAML files...${NC}"
if ! yamllint -c .yamllint apps/ storage/; then
  echo -e "${RED}✗ YAML linting failed${NC}"
  VALIDATION_FAILED=1
else
  echo -e "${GREEN}✓ YAML linting passed${NC}"
fi

# Summary
echo ""
if [ $VALIDATION_FAILED -eq 0 ]; then
  echo -e "${GREEN}✓ All validation checks passed!${NC}"
  exit 0
else
  echo -e "${RED}✗ Some validation checks failed${NC}"
  exit 1
fi

storage/kustomization.yaml (new file)

@@ -0,0 +1,4 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - storage-class.yaml

storage/storage-class.yaml (new file)

@@ -0,0 +1,103 @@
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: kube-system
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/var/lib/rancher/k3s/storage"]
        }
      ]
    }
  # The setup script runs when a PVC is created
  setup: |-
    #!/bin/sh
    set -e
    # Variables provided by the provisioner:
    #   $VOL_DIR = the volume path (e.g., .../pvc-UUID_namespace_pvc-name)
    #   $PV_NAME = the PersistentVolume name (e.g., pvc-UUID)
    # Extract the PVC namespace and name from the VOL_DIR path. The
    # local-path-provisioner encodes both in the directory name:
    #   /path/to/storage/pvc-UUID_namespace_pvc-name
    # Example: pvc-6f2333de-0b45-41d3-b78f-5d62e737d8bc_nats_nats-js-nats-0
    if [ -z "${PVC_NAMESPACE:-}" ] || [ -z "${PVC_NAME:-}" ]; then
      if [ -n "${VOL_DIR:-}" ]; then
        VOL_BASENAME=$(basename "$VOL_DIR")
        # Split on underscore: field 1 is pvc-UUID (discarded; the UUID
        # contains only hyphens), field 2 is the namespace, and fields 3+
        # are the PVC name (which may itself contain underscores, so join them)
        NAMESPACE_PART=$(echo "$VOL_BASENAME" | cut -d'_' -f2)
        NAME_PART=$(echo "$VOL_BASENAME" | cut -d'_' -f3-)
        if [ -n "$NAMESPACE_PART" ] && [ -n "$NAME_PART" ]; then
          PVC_NAMESPACE="$NAMESPACE_PART"
          PVC_NAME="$NAME_PART"
        fi
      fi
    fi
    # Final validation
    if [ -z "${PVC_NAMESPACE:-}" ] || [ -z "${PVC_NAME:-}" ]; then
      echo "Error: Could not determine PVC_NAMESPACE or PVC_NAME"
      echo "VOL_DIR: ${VOL_DIR:-not set}"
      echo "VOL_BASENAME: ${VOL_BASENAME:-not set}"
      echo "Available environment variables:"
      env | grep -E "(PVC_|PV_|VOL_)" || echo "  (none found)"
      exit 1
    fi
    # 1. Define the pretty path for easier backup and management
    PRETTY_PATH="/var/lib/rancher/k3s/storage/pvc/${PVC_NAMESPACE}/${PVC_NAME}"
    # 2. Create the pretty folder
    mkdir -p "$PRETTY_PATH"
    chmod 777 "$PRETTY_PATH"
    # 3. Symlink the UUID path to the pretty path: Kubernetes looks at the
    #    UUID path, but data is actually written to the pretty path
    ln -s "$PRETTY_PATH" "$VOL_DIR"
  # The teardown script runs when a PVC is deleted
  teardown: |-
    #!/bin/sh
    set -eu
    # Variables provided: $VOL_DIR (the UUID path)
    # Remove only the symlink; the data at the resolved pretty path is left
    # in place so it remains available for backup or manual cleanup
    if [ -L "$VOL_DIR" ]; then
      PRETTY_PATH=$(readlink "$VOL_DIR")
      rm "$VOL_DIR"
    fi
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
          imagePullPolicy: IfNotPresent
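The namespace/name extraction used in the setup script above can be exercised on its own. This sketch reuses the sample directory name from the script's comment (no provisioner required):

```shell
# Hypothetical volume directory as created by local-path-provisioner
VOL_DIR="/var/lib/rancher/k3s/storage/pvc-6f2333de-0b45-41d3-b78f-5d62e737d8bc_nats_nats-js-nats-0"
VOL_BASENAME=$(basename "$VOL_DIR")
# Field 2 is the namespace; fields 3+ are the PVC name (may contain underscores)
PVC_NAMESPACE=$(echo "$VOL_BASENAME" | cut -d'_' -f2)
PVC_NAME=$(echo "$VOL_BASENAME" | cut -d'_' -f3-)
echo "$PVC_NAMESPACE/$PVC_NAME"   # → nats/nats-js-nats-0
```

Note that this convention breaks if a namespace itself contains an underscore, which Kubernetes forbids anyway, so splitting on the first two underscores is safe.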